Wikimedia Developer Summit/2016/T114542
10am Pacific Time on Monday, January 4, 2016
Purpose
Discussion of the following areas:
- T114542 - Next Generation Content Loading and Routing, in Practice - Reading Web
- T111588 - API-driven web front-end - Gabriel Wicke
- T106099 - Page composition using service workers and server-side JS fall-back - Gabriel Wicke
Agenda
- 20 minutes - introductory presentation: Next Generation Content Loading and Routing - Adam Baso, Jon Robson, Joaquin Hernandez, Gabriel Wicke, Sam Smith
- 60 minutes - open discussion
Notes
- copied from Etherpad live notes on 2016-02-15
Reminder: All current and past edits in any pad are public. Removing content from a pad does not erase it.
Next Generation Content Loading and Routing
https://www.mediawiki.org/wiki/Wikimedia_Developer_Summit_2016/T114542 https://phabricator.wikimedia.org/T114542
Session name: Next Generation Content Loading and Routing
Meeting goal: [ENTER GOAL HERE]
Meeting style: [ENTER STYLE HERE]
Choose one of:
* Problem-solving: surveying many possible solutions
* Strawman: exploring one specific solution
* Field narrowing: narrowing down choices of solution
* Consensus: coming to agreement on one solution
* Education: teaching people about an agreed solution
Phabricator task link: https://phabricator.wikimedia.org/T114542
Slides: https://commons.wikimedia.org/wiki/File:Paradigm.pdf
(not shared publicly, or even with @wm.org) (that's...disappointing)
- Apologies, the title slide had the link to the Commons file on it, but I should have pointed it out earlier --dr0ptp4kt. Here's the PDF on Commons with all of the information:
https://commons.wikimedia.org/wiki/File:Paradigm.pdf
- Out-of-date version of the vision. The actual vision statement is https://wikimediafoundation.org/wiki/Vision ("Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment.")
(Outdated but also longer) PDF version of slides: https://commons.wikimedia.org/wiki/File:Paradigm.pdf
- Many networks and devices are slow and will remain slow for the near future (3-5 years)
- Google and others are acting as intermediaries to trim down the page size and increase the speed of loading for 2G networks
- Images and HTML size influence load time
- Stripping references, navboxes, and infoboxes are some strategies to reduce HTML size
- Reading Web team built a transformation API server that plugs into RESTBase to experiment with reduced payload size for 2G networks
- nodejs app that can run on both server and client side depending on device capability and local device cache
- Load only the initial section of HTML content at first
- Lazy load images using JavaScript
- Service workers to cache content on device
- Service worker is basically a proxy that can be installed in the browser to selectively intercept and modify HTTP requests (see the sketch after this list)
- Same/similar code can be run on the server side to compose page for clients without service worker support (no-js, first visit)
- Certain composition APIs are faster than others; Varnish cache is an order of magnitude faster still
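A minimal sw.js sketch of the proxy-plus-cache idea from the bullets above; the cache name and the /wiki/ path check are assumptions for illustration, not the actual implementation:

// sw.js: minimal sketch of a service worker acting as a selective proxy.
// The cache name and the /wiki/ path check are illustrative assumptions.
const CACHE = 'content-v1';

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith('/wiki/')) {
    return; // not intercepted: browser performs the normal network request
  }
  event.respondWith(
    caches.open(CACHE).then((cache) =>
      cache.match(event.request).then((cached) => {
        if (cached) return cached; // serve from the on-device cache
        return fetch(event.request).then((response) => {
          cache.put(event.request, response.clone()); // keep a copy for next time
          return response;
        });
      })
    )
  );
});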
For each approach, what functionality would need to be reimplemented? Who is responsible for making the change?
Skins would change. For the single-page app, I think skins would have to be implemented on the client-side.
For each approach, what functionality would need to be dropped? Which stakeholders care?
Q&A: ServiceWorker composition vs. single-page app
● Pros
○ Can be introduced as a progressive enhancement, targeting only specific views.
○ Can be compatible with e
Question (cscott): Special:Everything, gadgets..., there's a long tail. It would be nice if we could use the future stuff where possible and fall back to the old PHP code for the long tail of random (but very useful for particular users) crap.
Answer (gwicke): "null skin". When you ask for a page on the "long tail", the service worker can turn around and ask core PHP to render that page with the "null skin", and then take the resulting opaque chunk of HTML and drop it into the existing page.
Alternatively, you can just load that particular page from core PHP, bypassing the service worker, but that involves being careful to synchronize skins/UX so that the reload is not unduly jarring (and slow). (For instance, if you did this in the existing mobile app, the result would be very janky because the UX doesn't align.)
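A sketch of how the "null skin" fallback could look inside the worker's fetch handler; it also shows that a worker can tell /wiki/Earth from /wiki/Special:Log by plain URL inspection (the question raised just below). The useskin=null parameter and the injectIntoShell() helper are hypothetical:

// sw.js sketch: route "long tail" pages (e.g. Special:*) to core PHP rendered
// without skin chrome. useskin=null and injectIntoShell() are hypothetical.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith('/wiki/')) return;
  const title = decodeURIComponent(url.pathname.slice('/wiki/'.length));
  if (title.startsWith('Special:')) {
    event.respondWith(
      fetch('/w/index.php?title=' + encodeURIComponent(title) + '&useskin=null')
        .then((res) => res.text())
        .then((body) => injectIntoShell(body))
    );
  }
  // /wiki/Earth and friends would take the normal composed path instead.
});

// Hypothetical helper: wrap the skinless HTML in a pre-cached page shell.
function injectIntoShell(bodyHtml) {
  return caches.match('/shell.html') // assumes the shell was cached at install
    .then((shell) => shell.text())
    .then((tpl) => new Response(tpl.replace('<!--CONTENT-->', bodyHtml), {
      headers: { 'Content-Type': 'text/html' },
    }));
}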
Can Service Workers actually distinguish /wiki/Earth from /wiki/Special:Log?
Remark (Timo): Pros/cons of service workers: SWs work in a different thread in the browser, so transparent to JS/gadgets. document ready etc. still fires at the same time even if no server round-trip occurred in the background.
Timo: DOM still looks the same even though mechanics are different, so gadgets which run after "page load" should Just Work. ...and in theory the *content* markup can gradually become more semantic, which helps gadget authors more easily pull out semantic information from the page.
Will the HTML attributes be changed, e.g. class, id? No, the HTML doesn't need to change. Composition happens at the network level. So when the browser downloads the page and JS starts seeing it, it sees the composed version (streamed).
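A sketch of what that network-level, streamed composition could look like in the worker; the fragment URLs are illustrative and assumed to be pre-cached, while the content HTML comes from the RESTBase page HTML endpoint:

// sw.js sketch: stream header, article HTML, and footer as one response so
// the browser renders it progressively. Fragment URLs are illustrative and
// assumed pre-cached; the content comes from the RESTBase HTML endpoint.
function composeStreamed(title) {
  const parts = [
    caches.match('/fragments/header.html'),
    fetch('/api/rest_v1/page/html/' + encodeURIComponent(title)),
    caches.match('/fragments/footer.html'),
  ];
  const stream = new ReadableStream({
    async start(controller) {
      for (const partPromise of parts) {
        const res = await partPromise;
        const reader = res.body.getReader();
        let chunk;
        while (!(chunk = await reader.read()).done) {
          controller.enqueue(chunk.value); // forward bytes as they arrive
        }
      }
      controller.close();
    },
  });
  return new Response(stream, { headers: { 'Content-Type': 'text/html' } });
}

self.addEventListener('fetch', (event) => {
  const match = new URL(event.request.url).pathname.match(/^\/wiki\/(.+)$/);
  if (match) event.respondWith(composeStreamed(match[1]));
});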
[cscott] that's an orthogonal issue. not necessarily. we do want to (for example) gradually improve <figure> markup, but that's not really related to this front-end proposal. (current implementation would have slightly different markup because it's based on Parsoid/RESTbase, but we're trying to unify the markup between Parsoid and PHP.)
Joseph Allemandou: 3rd party MediaWiki, how?
Gabriel Wicke: Doing more on the client relieves the server, so it would make running high-traffic wikis easier. But yes, if (IF) we do this with services, that makes 3rd party harder. There's a separate session about that later today.
Toby Negrin: similar to Squid/Varnish/ESI on some level, just a wrapper around MW.
Subbu Sastry: Are we trying to figure out which approach is viable? What's the goal of this discussion?
Adam Baso: Open to different approaches, we thought these two were viable. Probably have to borrow pieces of both.
Toby Negrin: One of our (Reading's) goals is to reach more readers in the Global South, so we have to make performance viable.
Subbu Sastry: Network issues with mobile is a motivation; is this solution only for mobile? What happens to desktop?
Joaquin Hernandez: You don't often have a desktop client with a 2G connection :) but these improvements benefit all clients; on desktop it doesn't make as much of a difference because it's already fast.
Adam: note congested wifi, congested cellular.
Gabriel: Can adapt to network conditions on the fly using client-side code; hard to do with a server-side fragmented cache, so separation is beneficial.
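One way client-side code could adapt on the fly, assuming the Network Information API where available; the effectiveType check and the CSS flag name are illustrative:

// Client-side sketch: check the connection type up front and degrade
// politely. navigator.connection (Network Information API) is not universal,
// so the check is defensive; the CSS flag name is illustrative.
const conn = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
const isSlow = !!conn && /(^|-)2g$/.test(conn.effectiveType || '');
if (isSlow) {
  // e.g. skip image prefetching and defer navboxes until interaction
  document.documentElement.classList.add('client-slow-connection');
}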
Matt Flaschen: 1) How will references and UGC be handled for no-JS users?
Joaquin: For the HTML-only version, replace links with links to a different HTML endpoint that does have ref content; with client-side, show tooltips.
2) Since we're talking about emerging countries and out-of-date phones, are they actually gonna have the service worker tech you'd need?
Joaquin: Chrome updates separately from the OS, so you probably run a very new version of Chrome even on old phones; it gracefully degrades, the web app approach still works without a service worker.
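The graceful degradation Joaquin describes falls out of simple feature detection: registration is a no-op where service workers are unsupported, and the plain server-rendered page keeps working. A sketch (the /sw.js path is illustrative):

// Sketch: feature-detect before registering; unsupported browsers simply
// keep the plain server-rendered page.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').catch((err) => {
    console.warn('SW registration failed, continuing without it', err);
  });
}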
Jon Katz: One of the goals of this meeting is to hear from the community, mostly heard from staff so far. Straw poll:
* Don't care, not my area (2)
* Super against (1)
* Sounds good, no questions (~10)
Meh-be?
Tim Starling: Just seeing a lot of details now, but it seems that there's a big difference between the single-page app proposal and the service worker proposal. SW degrades nicely, unequivocal perf improvement since it simply drops requests. The API-driven frontend doesn't degrade nicely for no-JS; the case for it seems to be based on faulty (cherry-picked) perf data (slide says 50s for first paint on Obama, I measured 3s on 2G). Much more positive about the service worker proposal. Comments?
Adam Baso: As for data, a lot of stuff varies on user agent, browser implementation. High-powered devices can perform well even on 2G.
Tim Starling: If you have barely sufficient JS support, and are on a slow connection in a country where Opera Mini is the default -- Opera Mini has a big page of recommendations (https://dev.opera.com/articles/making-sites-work-opera-mini/ ) for good perf with them. One item: please please please send HTML, don't send JS that loads HTML. So it seems: service worker is best practice, unequivocally better for perf; API is none of those things. Trying to mimic Google Lite? Will break Opera Mini.
Joaquin Hernandez: Web app prototype works in no-JS, runs the app on the server in node. What we've done is split the contents, less CSS and images. The web app approach is like having a separate, different experience, can link to the old experience; service worker is like a layer on top of the existing experience.
Stas Malyshev: About caching. AIUI our model is caching fully rendered pages. How will that change, and how would invalidation work?
Gabriel: Right now we package a lot of things in one response, and have to invalidate when any of them change. In this model, we cache them all separately and invalidate them separately.
Stas: What is the granularity of the caching? Would text be split too? Would references be separate?
Gabriel: Depends. We hinted at separating navboxes too, because they're 40% of HTML size and initially collapsed. So it depends on what makes sense.
Stas: What about infoboxes?
Gabriel: Maybe. Depends on whether it's beneficial. The idea is to make it possible, put infra in place, to load things optionally. Then we can decide what to apply that to.
James Hare: I see this being sold as a tech improvement, but will there be any changes to the appearance? New layout or will it look the same?
Joaquin: Not necessarily. Might change things as we're rearchitecting, of course. Do you want it to change?
James: I'd be happy with full responsive design, but some people still want to use MonoBook. Will you incite a rebellion by killing MonoBook and Vector?
Adam: No comment :P
Toby: There are two parallel interlocked processes, but they are not the same thing. We'd like to have more ability to change look&feel, but at WMF we understand that we did not create the content; we need to discuss how the content is presented with the community that created the content. So this initiative is more technical, and at the same time we're becoming more familiar with community opinions on look&feel.
James: So just tech for now?
Adam: Yes, first get the tech done, then look at look&feel. There might be small changes but no big ones.
James: OK, makes sense.
Jordan from Google: As far as Web Lite: doesn't support HTTPS, not a long-term solution to anything. Opera Mini and UC: basically the same. Short-term solution to a network issue. I share some of Tim's concerns: not sure either proposal solves the fundamental goal (improved perf). Is 50s first paint or complete rendering?
Adam: Depends. Been running WebPageTest.
On an iOS device without SW support, I might see 50s to first paint + 15s to load other things. Other UAs, notably Chrome, some things load earlier. (2G, or congested 3G.) Bootstrapping the experience is the key thing.
Jordan: 2G is not a standard at all. Can have significant packet loss, which in and of itself can make it worse.
Adam: We've seen that too. The Android app uses a two-step load, seems to be the only thing that holds up well in the face of bad networks. Some networks are slow without packet loss, some are fast with high loss. Or users move from one coverage zone to the next. Might be able to handle that gracefully client-side without leaning on HTTP status codes.
Jordan: So, two approaches. Use similar technologies. How are they different? They could both use client-side stuff, do lazy-loading, pull from RESTbase, etc. Please contrast them.
Gabriel: lots of client-side JS state and rehydration vs. a request-level proxy at the server. SW just takes the request, doesn't touch how the page works, or the DOM, or on-load JS. Installs a small load-time Varnish server in the browser.
Jordan: Role of RESTbase?
Gabriel: RB is just an API implementation. We need a cached API. Pulling in content from Parsoid via RB. Parsoid's marked-up content allows a lot of customization.
Timo: Service worker only affects the second view onwards. What hasn't been mentioned is the prereqs for doing that, which apply to both solutions. They come at a high cost, but bring advantages. Only improves fallback perf (applying to the vast majority of traffic). We won't have a situation where most views are SW, as most views are first views. Both solutions will use an API-driven frontend, will fetch the skin from somewhere, etc. From that perspective, I think the single-page app is a non-starter given compatibility etc. One advantage on the backend that I see is perf improvements for logged-in users. Right now, logged-in perf is worse because your traffic goes to Virginia instead of the closest POP. Also, the ability to do string manipulation (like ESI, but actually works) would allow us to do lots of things we currently can't. Can you give us an overview of how separation would have to work in the backend?
Gabriel: backend separation. When it's server side it is *still* API-driven.
Derk-Jan Hartman (TheDJ): Sort of agree with Timo (or Tim?), feel that some of the benefits that SW would bring are a lot more future-proof and easier to add, whereas single-page requires us to do things that aren't good long-term investments. I feel better about SW for that reason, but sadly it's newer and less vetted, so that holds me back. Otherwise, everything tells me SW is a much better approach. Single-page should not be the main focal point. Wondering if anyone has looked a lot at SW beyond what e.g. Google has been doing, so we have a better understanding of what SW has done for others, aside from Google and our own testing.
cscott: Medium uses it.
Joaquin: static asset caching is in use; the big one is Flipkart http://tech-blog.flipkart.net/2015/11/progressive-web-app/ (e-commerce vendor in India). Not much more because it's painful to work with right now. Mozilla wants people to test SW sites with FF nightly.
Gabriel: Clear from how Google et al have been pushing it that it's squarely aimed at gmail and offline support etc. Mozilla is putting it in the next FF stable release. Still rough around the edges, but I'd expect it to be pretty stable and widely supported ~6mo from now.
DJ: Started discussion with how it's gonna improve perf on mobile.
Conflict: SW support isn't there yet for most people.
Adam: Yeah, there are different goals short-term, mid-term and long-term. We expect uptake of SW, and can capitalize on that by getting ahead of it. We want to innovate continuously; adopting this approach could help us do that better. We are trying to serve different user needs, but focusing on perf now.
Nuria Ruiz: Did our perf numbers come from real phones, not the phone button in Chrome? Google and Facebook use single-page, but their users have long sessions. We don't. As Timo said, most views are first views. We have some repeat users daily, but that's not the norm.
Joaquin: Regarding session length, that's right, most readers don't come back. For them, splitting the content makes a huge difference. As for testing, I've been using my real phone (Nexus 5), and you're right, it makes a huge difference vs emulation.
Gabriel: Regarding perf for very low-end devices, the idea is to serve a very light-weight HTML version, maybe with inline styles, so you don't get loading contention. Browsers can render HTML as it comes in as long as styles are there. That should work relatively well; it's as simple as you can get it.
Nuria: We should have test data from real phones (incl. e.g. Android 2.3 phones).
Gabriel: Those numbers are also important to decide whether to split out e.g. navboxes.
C. Scott: I'd like to foreground the UX and architecture issue here (as opposed to the performance rationale). DJ and James Hare have touched on this. One of the reasons I like this general direction is that, in the long term, it gives us a way to slim down and decouple mediawiki-core. API-driven UI gives us good discipline to sharply separate "model" and "view", and facilitates the creation of new UX experiences (even while using the same URLs). You can switch the service worker you use for the wp domain to get a very different UX for the same links. (Caching of ServiceWorker can be effectively infinite, more like downloading an app.) Similarly, the architecture that uses server-side JavaScript to render the HTML-only/no-JavaScript view allows us to (in the long term) refresh our skin architecture and reuse a single code base to provide very different user experiences. Fundamentally this approach is about factoring out the page composition and customization (like user stub length preferences). That refactoring is a positive long-term trend. We don't want to say "this is so we can change MonoBook", because the whole point is that this architecture should allow us to faithfully render even the MonoBook UX, but it allows us to try different things. So I like this for the future, for letting us innovate without breaking old things.
Ori Livneh: There's a lot I really like here, but on the whole I'm a no. Because of risk. 1. We haven't put RESTbase to the test for the load all page views would put on it. That could be costly and require a lot of work. 2. ServiceWorker tech is really new. 3. Things are not set up for accountability right now. Numbers, expectations... I see a lot of looseness with goals, success criteria. Not entirely clear what the setup is for collecting or reproducing these measurements, and having them reproducible. Easy to come up with a success narrative regardless of reality. 4. Risk has to be informed by a sense of alternatives, what else could you be working on that would serve the same ends. There's a huge one: changing the way we load images by lazy-loading them. I know that's part of your strategy.
But doing that separately, without relying on new infra, would be a very safe bet for a perf win.
Adam: Accountability: some testing labs have real devices; we probably won't set up our own, but we can work connections. Agree there needs to be more rigor around that. Re alternatives: agree we should explore lazy-loading without all this other stuff. Planning to look at all of that in Q3 and make some pragmatic short-term changes.
Gabriel: Not a proposal to rewrite everything, it's a gradual path. We just need a cacheable API. Order of implementation is open. Want to get an idea of where we're headed long-term.
Ori: Circular argument: you need a cacheable API because you want to use an API for page views.
Moritz Schubotz: What about benchmarks, in particular automated ones? As a volunteer dev, you often have a hard time estimating the impact of your changes.
Joaquin: webpagetest, Performance team tools & dashboards.
Matt: Follow-up re performance: if you're trying to simulate 2G in Africa, don't forget to simulate latency (not just bandwidth).
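A sketch of the lazy-loading alternative Ori raises, using IntersectionObserver where available; the data-src convention and the 200px margin are illustrative:

// Sketch: placeholders carry the real URL in data-src and are upgraded as
// they approach the viewport.
const images = document.querySelectorAll('img[data-src]');
if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src; // start the real download
        observer.unobserve(entry.target);
      }
    }
  }, { rootMargin: '200px' }); // start loading a little before it scrolls in
  images.forEach((img) => observer.observe(img));
} else {
  // No observer support: load everything immediately rather than never.
  images.forEach((img) => { img.src = img.dataset.src; });
}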
Question about single-page app: are you saying that on a cold cache, you'll load the shell and the entire page rendered on the server?
Joaquin: cold cache = regular HTML.
Matt: So if most sessions are one view, is this only a significant benefit for multi-view sessions?
Toby: Where's the data that says most people only view one page? IIRC Nuria said there was data on this.
cscott: Don't forget that the cache *duration* can be a lot higher as well, since we're effectively caching the UX and the content separately. So the cache expiry time for the client-side JS and/or service worker can be much, much longer than content expiry times. So "multi-view" sessions could span a month, say. (This seems most convincing for the ServiceWorker implementation.)
Daniel Kinzler: Like the direction, have to make sure to get it right. Agree with Ori there are easier ways to improve perf, but I like the flexibility we get from these approaches. Seems like the single-page app will be a nightmare with back compat for user scripts and gadgets. One reason people keep using MonoBook is because porting gadgets is hard (even to Vector, let alone SPA). People have brought up architecture, but let's talk about maintainability. We kind of already have an API-driven front-end: the mobile apps/sites. We'd add another one, and also an HTML fallback: spreading ourselves too thin. SW seems better, because you start with doing nothing and, AIUI, just incrementally pull out more things.
Gabriel: Lots of overlap in the API endpoints used by these things.
cscott: Part of the point is that the "HTML fallback" will reuse the client-side code, so we're not actually multiplying codepaths.
Jaime Crespo: Don't have anything against the idea, some things against the implementation. We need more testing, both on actual devices and testing of server-side tech. My own tests aren't very promising; will share them with you. Caching user-dependent data is a security concern.
Jordan: Agree with Daniel and Ori, think SW makes the most sense, but I think the RB component should be seen as a last step, if at all. Pulling out infoboxes and navboxes will make a much larger difference than using RB. The 10%/90% rule applies (10% effort for 90% of improvement).
Timo: The next session at 11:30 will mostly continue on from this, focusing mostly on the front-end side of things. Specifically on how skinning will work in the future.
In-Etherpad comments:
Jie: On the HTML side, currently we use <table class="infobox"> to represent an infobox. Does it mean we'll switch to a <mw-infobox> tag in the future?
cscott: come to my session at 15:40! We'll talk about this. This is one of the options on the table. We'll also discuss possibly storing infoboxes separate from the main article, say as wikidata facts. So the "storage" and "rendering" of the infoboxes are separate questions. We could change the storage (or retrieval API) without changing the rendering, or vice versa. https://www.mediawiki.org/wiki/Wikimedia_Developer_Summit_2016/T112987
Scott_WUaS: cscott: will this also be for "T112996, T112984, T113004: Make it easy to fork, branch, and merge pages (or pages)" and related? Thanks.
Trevor: This is what happens when you use HTML as an API, you can't change anything without breaking everything
Tim: Since there's not much chance of getting another turn of the microphone, I'll put my position here. I think any UX degradation, e.g. lower quality images, deferred image loading, deferred navboxes, first section only delivery needs to be carefully considered and supported by high-quality performance data. With current data, I would only support deferring collapsed navboxes until click.
Packet loss is obviously the biggest performance problem on mobile, in both the developed and developing world. We should do performance tests with simulated packet loss, not simulated fanciful >1s RTTs. The obvious way to address packet loss is to reduce retransmission time by reducing RTT. For example, we could have a cache POP in Indonesia.
When the Opera Mini team say "send HTML not a web app", they mean fully functional HTML with <img> tags, not first section HTML with JS which will load the rest of the content. It's especially critical to send all HTML at once, with no deferrals, on high RTT high bandwidth connections such as satellite. Satellite ISPs will even bundle CSS/JS/images along with the HTML in order to reduce round trips.
TO THE BIKESHED:
Topics for discussion (feel free to fill in answers as they are discussed, or after the session):
Which stakeholders will likely benefit from an API-driven UI approach?
Which stakeholders will be negatively impacted by an API-driven UI approach?
What alternatives should we consider for API-driven UI (or generally decreasing perceived page load times)?
* API-driven UI: Single-page application
* API-driven UI: ServiceWorker composition
* Lazy loading images
* Inlining CSS
(any others?)
General notes
* * *
Action items with owners:
* * *
DON’T FORGET: When the meeting is over, copy any relevant notes (especially areas of agreement or disagreement, useful proposals, and action items) into the Phabricator task.
See https://www.mediawiki.org/wiki/Wikimedia_Developer_Summit_2016/Session_checklist for more details.