Production PWAs with frameworks (Chrome Dev Summit 2016)

[MUSIC PLAYING] [APPLAUSE] [CAR ENGINE] ADDY: That was worth the
hour it took to render. [LAUGHTER] So in many ways,
Polymer has been a sort of Tesla vehicle
for the Chrome scene, highlighting one
path for how you can ship fast, high
performance, progressive web apps that work really,
really well on mobile. But we work in a really
diverse community. Like everyone is using
different tech stacks. And today we want
to talk a little bit about how you can
use other libraries and frameworks, like React,
to build fast progressive web apps. Looking at– what do you need
to do in order to make things like React qualify to
build instant experiences on real devices? Flipkart are going to
get up right after me to talk a little bit about
their experience shipping React PWAs at scale and all
the lessons that they learned. And we have a little
surprise for you at the tail end of
this talk, that you’ll see in a short while. So let’s start off
with this statement. Frameworks can be fast
if we put the work in. I firmly believe this. I think that we’re at a
point where fast is not the default for a lot of
libraries and frameworks. I think that a lot of them–
a lot of framework authors acknowledge that
we can do better when it comes to performance
on real world devices. But let’s take a look at
what’s possible today. So this is Flipkart
on a real device. They’re doing
really, really well. They’re interactive
in just a few seconds. They’re shipping just the
minimal functional code to get a route interactive
very, very quickly. They’re deferring a
lot of the work that’s not needed for this route
to a future point in time. And they’re taking advantage
of techniques like code splitting and PRPL in order
to accomplish this. Housing.com are similarly doing
really great work in this area. Again, they’re interactive
in just a few seconds. But we talk a lot
about speed and what it means to be fast at CDS. What do we actually
mean by fast? So there are a few key moments
when it comes to modern loading performance. And some of these
metrics are things you might be familiar with. So the idea of first paint–
first meaningful paint– but really there are
three phases here. There’s the is it
happening moment, is it useful, and is it usable? Now we’re increasingly trying to
focus on the is it usable phase. So, time to interactive: at what point during loading is the app actually usable by the user? If they tap on different
things inside the app, can they actually accomplish
things that are useful to them? And time to
interactive is really– it’s that point when I can tap
and I can get something useful. Now we’re saying that
ideally, regardless of what it is that you’re
using to build these apps, it would be great
to be interactive in under 5 seconds
on a real device, under real representative
network conditions– so 3G. If you happen to be using
service worker caching, you’re going to benefit from
trying to hit an instant repeat load, and your time
to interactivity is going to be even
better in those cases. So service worker caching
definitely worth looking at. In this case, there’s actually
nothing on this person’s phone screen. And I think they’re
going through withdrawals of some sort here. [LAUGHTER] So Lighthouse has
been mentioned– Darin mentioned it in his
keynote– Lighthouse is currently one
of the best ways to easily track things
like time to interactivity. It includes a number of
different performance metrics. This is a Lighthouse extension. It’s also available as a CLI. But time to interactive is
included inside the performance audits. If you want to take a
look at how well you’re doing, what I
recommend is trying out Lighthouse over
remote debugging– testing it with a real phone.
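As a rough sketch of that workflow (the URL is a placeholder), you install the CLI and point it at your site, with your phone connected over USB debugging if you want real-device numbers:

```sh
npm install -g lighthouse
lighthouse https://example.com
```

The CLI writes out a report that includes the time to interactive numbers discussed here.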
It will give you an eye-opening look at your performance on real-world devices. So that’s definitely worth
spending some time on. Recently I was very curious
about how the React community were shipping down code. How they were tackling
things like module bundling. So I put out this call
on Twitter asking people, how do you ship
React in production, and what were your
experiences doing that? And I published a little
bit of the data on that, but let’s dive into it. So what JavaScript module
bundler do you use? The majority of people
are using Webpack. That breaks down
into 65% of people are using Webpack 1, a smaller
number using Webpack 2– but those numbers
are increasing– and the rest of these numbers
are Browserify, other bits and pieces. So Webpack is kind
of a big deal. Let’s take that from
this particular slide. I then asked people if
they’re using code splitting to chunk their JavaScript. And I got a very
surprising answer. I saw that 58% of people
thought that they were. This surprised me, because
when I talked to the Webpack community– when I talked to the
Webpack authors– they’re like, we don’t think that any more
than like, 10% of people are really using code splitting. And there’s something
interesting there. Maybe there’s a
breakdown in terminology. Maybe people are
using code splitting but not necessarily
doing it the right way. I don’t kind of blame them
because configuring Webpack– [LAUGHTER] –is so fun. It’s the best. [LAUGHTER] [APPLAUSE] But I think that we have
an opportunity to improve that. Other concepts that people were
looking at– 11% of respondents said that they were exploring
service worker support. So that’s good, love to
see more people doing. 14% were looking
at HTTP/2 and what would be involved in
granularly shipping stuff down, and 19% were looking
at tree-shaking. So, interesting stats. Now, we mentioned the Polymer
shop demo quite a lot. And the reason for that
is it’s using PRPL. It does really, really well on
real-world devices under 3G. So on throttled 3G–
this app is interactive in about 4.3 seconds. About four seconds. If you’re looking at it with
a really, really bad 3G network, say something with
more packet loss, it’s still doing pretty
good– 5.8 seconds. We take a look at Flipkart
and Housing.com next. And Flipkart– between these
two apps I did the averages, and they’re getting interactive
in about 4.5 seconds. It’s still fairly
fast, fairly good. About 6.9 with packet
loss, but they’re still doing pretty well. So these guys are
using basically all of the tooling,
all the performance best practices that
we’re encouraging folks to take a look at to ship
these experiences down in ways that are going
to ideally benefit their users at the
end of the day. So here’s the crux of the study. I ended up profiling
over 150 React apps that people submitted over
the last couple of weeks, re-did the numbers quite
a lot of times– it’s fun, so fun– on real devices. And what I discovered,
was that the average React app in that survey
was interactive in about 11 seconds. So there’s quite a gap there
between what’s possible and where the average
app is right now. With packet loss we’re
looking at 12 seconds. Probably the worst app
in that particular study was interactive in 24 seconds. So the user is going to be
in an uncanny valley, tapping around the
screen and not really seeing anything happening. So this is a timeline trace
of what the average React app built with Webpack looks like. In this case I saw
hundreds of kilobytes of script being shipped down
just for a single route. A lot of it wasn’t being used. They are using code splitting,
but they’re actually– it’s taking eight
seconds before all of the script in their common
chunks are being shipped down. Thousands of milliseconds are being
spent in parse and eval time. And for anyone that sort of
followed Paul Lewis and Paul Irish’s guidance over
the last couple of years about trying to ship a frame
in 16.6 milliseconds– well, these guys have got a
frame that lasts 7,973 milliseconds. It’s doing really great there. It’s great. We can do better. So first piece of
advice is: try not to keep the main thread busy. If you are someone
that’s shipping down really large bundles
of JavaScript, it’s going to take longer to
load, parse, execute, and run. It’s definitely going
to peg the main thread. Now this advice
comes with nuance. And nuance is something we often
lack in these conversations. It’s really tricky to pack it
into a short amount of time. But basically, if you’re
working on a page that is not going to be useful to
your user in any way unless you ship an
amount of script, you’re probably better
off shipping it. If you can however, trim
that down so that you’re just shipping minimal
functional stuff that’s going to be useful
to your users, please consider doing that. It’s going to help them out. Because they’re not going
to need you shipping all the scripts for the entire
site or the entire app in one go. Other things that can impact
the main thread being busy and time to interactivity,
are suboptimal back and forth between the client
and the server. Sam Saccone
touched a little bit on the idea of JavaScript
parse, compile, and eval execution times
being a little bit different between desktop
devices and mobile. Here we have a mega script,
about 250 kilobytes, minified. And the amount of time it
takes to parse and compile it on what a lot of us– I see
a lot of MacBooks in the room. This is how long it
takes to sort of parse and execute that on a
MacBook Pro from 2015. And take a look
at the difference. Like, how much of
our assumptions are broken when it comes
to the average phone? Something like a Moto G? This is taking
about three seconds to parse, compile, and
execute, and that’s not even looking at load time. If you’re trying
to get interactive in under five seconds, this
is just not going to cut it. But all this again,
it’s got nuance. You need to make sure that
you’re measuring before you’re optimizing, but
you’re ideally trying to make sure you’re doing
the right thing for users. Test on real phones
and real networks. This is something
that we’ve mentioned in a few talks at a
Chrome Dev Summit. I cannot stress enough how
important it is to test on real devices. Emulation is only going
to get you so far. You can be testing
with 3G throttling on, with CPU throttling
on a desktop, and the difference between that
and the stats you will get out of a real phone are still
going to be multiple seconds. I think there are opportunities
there for us to do better at a tooling level, but real devices have got different mixes
of cores, GPU, memory, there’s going to be
packet level differences for different networks. So do try to make Chrome Inspect
your best friend, and use it. So when Alex Russell carries
around all these phones, he’s not crazy. Mobile web speeds
do kind of matter. In fact, on average,
faster experiences tend to lead to longer sessions. And one of the– I think it
was perhaps the DoubleClick report that recently
published– said that people that did
optimize performance were seeing anywhere up to
two times mobile ad revenue. So test on real devices,
make real money. [CH-CHING] [LAUGHTER] Let’s riff on this other idea. So less code, loaded
better, helps everyone. This is another one of those
items that requires nuance. But if you’re able to load
less code up for a route in order to get it
useful, please do so. The nuance part comes
again from the fact that you may require more script. Me shipping down 300
kilobytes of script may be very different than
someone else doing it. There’s going to be different
parse and eval times at play there. So again, very
important to measure. But let’s riff on this idea
of less code, loaded better. We’re going to use Webpack. A lot of you may be familiar
with what Webpack is. For anyone that
hasn’t used it before, it’s basically a popular
JavaScript module bundler. It packs lots of modules
into smaller bundles so you can ship them
down to your users. And we’re going to look
at some of these ideas around the PRPL pattern and how
you can serve these things down to your users. The first one is code splitting. So I’ve been
talking about trying to ship the minimal
code down to your users. Code splitting is one answer
to this problem of serving people monolithic bundles. It’s the idea that by defining
split points in your code– from sort of view to view for
example, or route to route– you can split them into
different files that get lazy loaded on demand. That can improve
your startup time, and help you get
interactive much, much quicker. Now, with Webpack, there
are two ways of doing this. Actually there are quite a
few ways, not just two ways. With Webpack 1, you
can use require.ensure in order to do that. Webpack is going to
take a look at anywhere using require.ensure and create
a chunk for you based on that. That’s how you
define a split point. In Webpack 2 they currently use System.import from the loader spec in order to accomplish the same thing. I do believe the Webpack authors are also a little bit future-facing,
looking at what else is happening in
the loading space. But basically, these are two
ways to do code splitting. There are great articles that cover this in more depth.
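To make those two patterns concrete, here is a minimal sketch. The view path and the renderView helper are placeholders, not code from the talk:

```js
// Webpack 1: require.ensure defines a split point. Everything
// required inside the callback is pulled into a separate chunk
// that's fetched on demand.
require.ensure([], function (require) {
  var UserProfile = require('./views/user-profile');
  renderView(UserProfile); // hypothetical render helper
});

// Webpack 2: System.import returns a Promise for the module,
// and Webpack creates the same kind of on-demand chunk.
System.import('./views/user-profile')
  .then(function (UserProfile) {
    renderView(UserProfile);
  })
  .catch(function (err) {
    console.error('Chunk failed to load', err);
  });
```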
There are other ways that you can do code splitting as well. The bundle loader
is another option. If you don’t like the pattern
that you just saw on screen, you can actually
use bundle loader and prefix the things that you want to require into your page. And it will automatically
wrap those things into a require.ensure for you
and take care of the rest. It’s also possible to wait for that chunk asynchronously before you do anything with the code.
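A small sketch of that, assuming the standard bundle-loader behavior and a made-up file name:

```js
// bundle-loader wraps the module in a require.ensure chunk for you.
// Requiring through it gives you a loader function, not the module.
var loadUserProfile = require('bundle-loader!./views/user-profile');

// Nothing extra is downloaded until you call the loader; the
// callback runs once the chunk has arrived.
loadUserProfile(function (UserProfile) {
  renderView(UserProfile); // hypothetical render helper, as before
});
```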
And finally, if you happen to be using React Router, it’s actually got
really great support for working with require.ensure. So this is a declarative option. It’s also got a slightly
more imperative one. But basically, when I’m
defining routes here, I’m able to use an asynchronous getComponent. And inside there
I can say, well go and please get me the
User Profile view. And then I can go
and do stuff with it. So it doesn’t necessarily need to be included in a big monolithic bundle up front.
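Here is a minimal sketch of that with a React Router route config of the era. The route path, view path, and chunk name are all placeholders:

```js
import React from 'react';
import ReactDOM from 'react-dom';
import { Router, Route, browserHistory } from 'react-router';

// getComponent is only called when the route is matched, so the
// User Profile view can live in its own lazily loaded chunk.
function getUserProfile(nextState, callback) {
  require.ensure([], function (require) {
    callback(null, require('./views/user-profile').default);
  }, 'user-profile'); // optional chunk name
}

ReactDOM.render(
  <Router history={browserHistory}>
    <Route path="/profile" getComponent={getUserProfile} />
  </Router>,
  document.getElementById('root')
);
```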
The next thing is the PRPL pattern. So Sam talked about the
PRPL pattern yesterday. It’s basically a pattern
for structuring and serving progressive web apps. It’s got a lot of emphasis
on performant app delivery, maybe looking at the ideas of
how you can more granularly do things at a route level. But it focuses very
heavily on giving you a minimum time to interactive. So the idea here is push
the minimal functional code for a route, render that route,
pre-cache the remaining routes, and lazy load routes
on demand as needed. Again, lots of nuance here. But we do have a guide on
that you can go and check out. Now, with Webpack it’s
possible to do something a lot like PRPL using require.ensure
or System.import with an async getComponent in React Router. And there are a few
different options here. Sam talked a little bit
about the differences between preload or HTTP/2
push, so let’s unpack some of the ideas there. So Link Rel preload,
if you haven’t used it before, it’s basically a
declarative fetch directive. In human terms, it’s a
way to tell the browser to start fetching
a certain resource, because you as an author
know that you’re probably going to need it. Some people have done really
interesting experiments here where they’ve used stuff
like their Google analytics to decide what routes
should get preloaded based on the navigation of the user. But with Webpack, you can
use things like assets-webpack-plugin in order to wire up chunks that are generated at build time up to your markup.
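Roughly, that wiring looks like this. The file names and manifest shape are assumptions based on how assets-webpack-plugin is commonly used, not something shown in the talk:

```js
// webpack.config.js
var path = require('path');
var AssetsPlugin = require('assets-webpack-plugin');

module.exports = {
  entry: { home: './src/views/home.js' },
  output: {
    filename: '[name].[chunkhash].js',
    path: path.join(__dirname, 'dist')
  },
  plugins: [
    // Writes webpack-assets.json, mapping entry names to the hashed
    // file names generated at build time.
    new AssetsPlugin({ path: path.join(__dirname, 'dist') })
  ]
};

// At render time you read that manifest and emit a hint like:
//   <link rel="preload" href="/home.d587bbd6.js" as="script">
```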
There’s more you can read up about link rel=preload. I believe Housing
may have mentioned some of their experience with
preload earlier today as well. If you’re exploring
HTTP/2, there’s a really violently named plug-in
called AggressiveSplittingPlugin. I’m not sure why it was called that. [LAUGHTER] But this is another option for
basically going a little bit more granular with the chunks
that you’re shipping down to users. Nuance again. Different JavaScript
engines might treat the way that you split things
up differently. There are going to be cases
where in fact, shipping a larger chunk will
just mean that it’s able to stream that JavaScript
in and parse and compile it a little bit faster than going
and fetching yet another chunk. So know that this exists. Try it out if you’re interested
in the idea of HTTP/2 with Webpack, but nuance once again.
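For reference, a minimal sketch of turning it on. The byte sizes here are arbitrary, purely for illustration:

```js
// webpack.config.js
var path = require('path');
var webpack = require('webpack');

module.exports = {
  entry: './src/main.js',
  output: {
    filename: '[chunkhash].js',
    path: path.join(__dirname, 'dist')
  },
  plugins: [
    // Splits chunks into pieces between minSize and maxSize (bytes),
    // on the theory that many small files are cheap over HTTP/2.
    new webpack.optimize.AggressiveSplittingPlugin({
      minSize: 30000,
      maxSize: 50000
    })
  ]
};
```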
Now another piece of interesting data that came back from my
research, is that code splitting itself does not
solve everything. In fact, I just
focused on the apps where people
self-identified as saying they were code splitting. What I found was that they were
interactive in 9.8 seconds. So definitely not where we
thought they would be, right? We expect them to be a little
bit closer to those Flipkart and Housing.com numbers. What I discovered
after profiling them in slightly more depth,
was that a lot of folks were shipping down chunks for
a route that were 600, 700, 800 kilobytes of script– in some
cases 1.2 megs of script. And then they were lazy loading
even more right after the fact for some crazy reason. But this is
something– you know, I don’t entirely
blame people for it. Because our current
tooling doesn’t do an amazing job of
highlighting these issues. It doesn’t really put
performance in your face. So ask yourself
what’s in your bundle. I think it’s very, very easy
for us these days to NPM install the entire world. It’s very easy to
include more modules than we necessarily
need when you’re shipping down code for routes. But I thought that maybe
it would be interesting for us to see what we could do
about this at a Webpack level. So I put together
an RFC for an idea I call Webpack performance
budgets or Webpack performance insights. And Sean Larkin, who’s in
the audience over there, has actually been
helping me with this. And I thought it would be
interesting to give you guys a preview of what we
think could be a better way of highlighting some
of these performance issues earlier on in
your development process. So here is what the output
you’d normally get with Webpack looks like today. I’ve got a build
here where I’ve got almost two megs of
script in two of these bundles. And I have– as a
user, if I’m not really that familiar with
web performance, I don’t know that
there’s an issue here that I need to solve. It should be obvious–
and these numbers are quite large on purpose–
but it should be something that– maybe Webpack could
tell me that I have an issue. So we looked at implementing a
proposal that I put together, and this is what it looks like. So you go and run
Webpack on your project, and it includes
this output for you. Let’s try to unbundle some
of the ideas that are here. So the first thing it
does, is it tells you if you have particularly
large chunks in your bundle. So you’ll see at the very
top, instead of just listing all of our different
JavaScript output in green, it’s highlighted
in yellow chunks that are particularly large and cross
a specific performance budget that’s defined by
Webpack as default. If it notices that you’re
doing that– so in this case, I’ve actually customized
this a little bit, and said that the maximum
size for chunks is 100 kilobytes– it’s
going to tell you. It’s going to warn you
and say, this is an issue. It also can highlight
large entries. So trying to look at
defining, what budget are you crossing for an entire
route or an entire view? Because you might easily
have multiple chunks that compose
something, and you don’t want to be one of those people
shipping down a mega script if you don’t need to. So large entry tracking is
going to help you with that. And finally– at the moment
in this proof of concept that we’ve got– we
also highlight patterns. So if we see somewhere
where we think you’re going to benefit
from doing something like using code splitting, using require.ensure,
or system.import, we’ll tell you about it. Now again, this is a very
early proof of concept. We’ve just been hacking on it
over the last couple of days. But I think that we
have an opportunity to work together with
tooling vendors like Webpack to try solving some of
these performance issues together in a meaningful way
that will hopefully end up giving users better time
to interactivity scores. So something you might also
be wondering once again, is can I configure this stuff? And yes, you absolutely can. Using the performance
object, you can actually set
the maximum asset size, the maximum
initial chunk size, and turn on or off the idea of getting those hints.
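As a sketch, the configuration looks roughly like this. Since this is a preview, treat the exact option names as an assumption; maxAssetSize and maxEntrypointSize are what the proposal used, with sizes in bytes:

```js
// webpack.config.js
module.exports = {
  entry: './src/main.js',
  output: { filename: 'bundle.js', path: __dirname + '/dist' },
  performance: {
    hints: 'warning',          // turn the hints on or off
    maxAssetSize: 100000,      // budget for any single emitted asset
    maxEntrypointSize: 250000  // budget for an entire entry point
  }
};
```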
There’s a preview available today you can go and check out. At this point, with all of the UX you’ve seen, you might think that that’s a
really long report in your CLI. But we welcome people to
try out the proof of concept we’ve got today. And let us know if it helps. Let us know if you’ve got any
feedback on the UX at all. I think that this is
just the beginning. Size alone is just one aspect
when it comes to script loading performance. There’s also things
like parse eval times, execution times, and so on. There are interesting
opportunities for us to use this as a
baseline for building up more tooling that then
benefits all Webpack users. I’d love to explore at some
point in the future what things like code
coverage could even mean for these experiences. So that’s our first preview. Please go check it out, and
let us know what you think. Now another thing I
wanted to recommend– there’s going to be a point
where you’re optimizing your progressive
web app, and you’re going to get to a point where you can’t optimize the size of React down any further. And something that I found is
actually really great for just swapping in, is Preact, which
is a much smaller– it’s almost a three-kilobyte alternative
to React with the same ES6 API. I believe Jason Miller,
who worked on Preact, is in the audience. So thank you, Jason. And a lot of the traces that
I’ve done of Preact apps are showing them–
like, this is again, on a real device
with a real network. They’re interactive
in under five seconds. I was taking a look– so this
is Source Map Explorer and it’s sort of a nice– a little bit
like the bundle analyzer tools that Sam was
showing in his talk. This gives you
something very similar. So this is what my
dependency graph looks like when I have React
in place on the very right. So lots of stuff going on. When I switched over to using
Preact and preact-compat, this changed quite
significantly. This is almost the same API. I did run into one or two
bugs– I will say that– and Jason kindly fixed
them very, very quickly. But this is definitely
something that I consider– if you’re running into places
where you’ve tried optimizing your app down, you’re still
finding you get a bottleneck, Preact is definitely
worth checking out. Especially if you
care about your time to interactivity being small. Setting this up with Webpack
is actually quite trivial. You can use resolve aliases to map react to preact-compat, and react-dom to preact-compat, too. Definitely worth checking out.
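A minimal sketch of that aliasing:

```js
// webpack.config.js
module.exports = {
  entry: './src/main.js',
  output: { filename: 'bundle.js', path: __dirname + '/dist' },
  resolve: {
    alias: {
      // Every import of react and react-dom resolves to preact-compat,
      // so existing components keep working against the smaller library.
      'react': 'preact-compat',
      'react-dom': 'preact-compat'
    }
  }
};
```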
Now in previous years, Jake has talked a lot about offline
and the benefits that you get from instant
loading using Service Worker. And I’d like us to
consider layering our app so the network is an
enhancement a little bit more. When you do this,
you’re able to actually give your users those almost
instant experiences on repeat visit. And you crush your
time to interactivity. In this case, this
is Housing.com. On first visit, on a 3G
network, on a real phone, they’re getting content on
the screen in 3.5 seconds. On repeat visit,
it’s almost instant. It’s in under a second. And the amount of
script and everything that they were trying
to load up initially is no longer an issue. That’s already cached using
Service Worker Cache API. And they’re able to get
interactive really quickly. So definitely something
worth taking a look at. A lot of the time we talk
about progressive web apps, we talk about the
application shell model, which is this idea
of caching your shell and loading in content
using JavaScript. There are many different
variations of this pattern. This isn’t the only one. But if you’re trying
to get Service Worker Caching in place, I highly
recommend SW-Precache Webpack plugin. This will integrate with
your Webpack build process, it will generate
a service worker that precaches your static
assets, like your application shell. It just generates a hash of
all your file contents as well. There’s a lot of best
practices for you out of the box
worth checking out. If you’ve tried writing vanilla service workers and found that there’s a little bit of boilerplate there, you’ll like it– it just helps you with the rest of your workflow.
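A minimal sketch of wiring it in. The cache ID and glob patterns are placeholders for your own build layout:

```js
// webpack.config.js
var SWPrecacheWebpackPlugin = require('sw-precache-webpack-plugin');

module.exports = {
  entry: './src/main.js',
  output: { filename: 'bundle.js', path: __dirname + '/dist' },
  plugins: [
    // Generates service-worker.js at build time, precaching the
    // static assets that make up your application shell.
    new SWPrecacheWebpackPlugin({
      cacheId: 'my-pwa',
      filename: 'service-worker.js',
      staticFileGlobs: ['dist/**/*.{js,html,css}'],
      stripPrefix: 'dist/'
    })
  ]
};
```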
Jeff is going to talk a little bit more about SW-Precache and
SW-Toolbox in his talk. Now another thing that
Lighthouse tries to highlight is progressive enhancement. And I think that this is one of
those super contentious topics. Luckily I’m on stage, so
I can’t look at Twitter in any shape or form. I’ll have to see
people’s opinions on PE. But I do like this idea of
supporting all your target users using
progressive enhancement and trying to target all the
people that are in your market so that your app at
least works for them. I think that
progressive enhancement has evolved over
the last few years as we’ve gotten support for
better primitives like Service Worker, so that instead
of necessarily optimizing for people that have
JavaScript disabled, you’re optimizing for
network resilience. So if you’re using
patterns like PRPL– and again PRPL isn’t the
solution to everything– if you’re using
patterns like PRPL, you can end up shipping so
little code to your users to get them useful, that
maybe things like server-side rendering aren’t necessarily
as beneficial in those places, or as necessary in those places
as you might need them to be. However as Flipkart are going to
talk about a little bit later, there are still
benefits to things like server-side
rendering for SEO bots, and there are places
where you might need to get content
on the screen quicker. For those cases, React
supports this idea of server side rendering, or
universal JavaScript rendering. [INAUDIBLE] has a
really good story around things like universal
data flow and data fetching. So React provides you
this method called renderToString for rendering markup on the server, as part of its story
around Universal JavaScript. And it’s this idea of
you ship down your HTML, you then hydrate
as soon as React and all the rest
of your components have loaded up, attaching event listeners and so on, so that the person can actually interact with the app.
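A minimal server sketch, assuming an Express-style app and a placeholder top-level App component:

```js
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './components/App'; // placeholder root component

const app = express();

app.get('/', (req, res) => {
  // renderToString is synchronous: the whole tree is rendered before
  // any bytes go out, which is why it can hurt time to first byte.
  const html = renderToString(<App />);
  res.send(
    '<!doctype html><html><body>' +
    '<div id="root">' + html + '</div>' +
    '<script src="/client-bundle.js"></script>' +
    '</body></html>'
  );
});

app.listen(3000);
```

The client bundle then calls ReactDOM.render over that same markup to attach the event listeners.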
So React has got a good story around this. This stuff is actually not
too difficult to get set up, as demonstrated by
folks like [? Celio, ?] who are using server side
rendering with React. However, universal
JavaScript has got issues. I don’t think that this
is something that’s talked about enough in the community. I think it’s something
that we can probably share more data on definitely. It’s very, very easy to
get stuck in uncanny valley when you’re server
side rendering. Where your user is in
a place where they’re able to see content,
they can tap around it, but they can’t
actually really do anything, because
they’re still waiting on the rest of your JavaScript
chunks, and your modules, and so on to load up in order
to attach those event handlers. renderToString has
also got known issues around being synchronous. So it can affect things like
your time to first byte. Streaming server side rendered
React can actually help here. And I’d recommend checking out
projects like react-dom-stream. Rendering can also monopolize
the CPU and waste resources when it comes to
re-rendering components. Component memoization
can help there. So take a look at things
like React SSR optimization, another project that tries
to help with this stuff. But don’t consider things like
universal JavaScript or server side rendering with React
as a given solution that’s going to be fast. It’s very, very
important– once again– to consider there
will be nuance here, and it’s important to measure. If you’d like to learn
more about any of the stuff that I’ve been talking
about, I recently published a series of articles
called “Progressive Web Apps with React.” You can go and check those out. But I’d like to invite to
the stage Abhinav, who’s going to talk about
Flipkart’s experience shipping production progressive web apps with React at scale. [APPLAUSE] ABHINAV RASTOGI: Thanks, Addy. So I’m Abhinav Rastogi. I’m a developer on the team
that built Flipkart.com. I spent most of 2015
working on Flipkart Lite, a cutting edge mobile
progressive web app that some of you may have
heard about in recent times. And this year I have
been mostly leading the team bringing that PWA
goodness to the desktop site. So Flipkart– let me
introduce you to it. Flipkart is the largest
e-commerce site in India, and a first class progressive
web app across all form factors and browsers. And by that, I mean
across mobile and desktop. We got the opportunity to showcase our new website at Chrome Dev Summit last year. And this is what it looks
like now on the site. And it’s virtually
indistinguishable from our native app both
feature and design wise. So Alex tweeted this
earlier this morning. “For all of us coming
from desktop to mobile, a change in outlook is crucial. Mobile is much less forgiving.” And I wholeheartedly
agree with this. Luckily for us, we were
going from mobile to desktop. So we carried our
learnings along, and this is what our
desktop site looks like now. So let me go over quickly
the kind of technologies that we are using
now to build this. At every level we are
using a combination of React, React-Router,
Flux/Redux on mobile and
desktop respectively, and Webpack to bundle it all together, along with a bunch
of other technologies that help us build
this and pull and pack it together. So that includes ES6 and the latest JavaScript technologies, fetch, promises, and Node on the back end. So let me talk a bit
about the architecture. At a very high level, both the mobile and desktop sites for us have a very
similar architecture. Let’s see what that is. We have route-based code splitting, smart pre-loading of chunks, and we implement
the concept of PRPL, which we have heard about. We have partial server side
rendering, and a concept of build time rendering on
each, and we have obviously, Service Worker for caching
different kinds of resources. But an important
thing to keep in mind is that the
implementations for us are different based
on the requirements. There are significant
differences on how you treat– how
you need to treat– mobile and desktop users. The requirements are
different, the user behaviors are definitely different, their
attention spans are different. Network conditions are
definitely different. Your mobile can have a
flaky network– 2G or 3G. Desktops tend to have a more stable and faster connection. Device capabilities
are very different, as Alex mentioned yesterday. And browser fragmentation,
of course, and distribution. For example, in India, the
browser distribution on mobile is such that UC Browser takes a fair chunk of the pie, a majority chunk. But on desktop, it’s the
latest version of Chrome which takes the majority chunk. So how you treat development
and which one you target first, and [INAUDIBLE] you have to take
the least common denominator. You solve for the
one which is probably going to cause you
the most problems, and you build up on
top of that supporting more and more features,
treating things like network and excess
CPU, things like that, as a progressive enhancement. So let us look at the
differences in implementation like I pointed out. On the mobile site,
we have a concept of build time rendering. Which essentially means that
we build the app shells, all of our code, and we create static HTML files which are served to the user when we get a request. So there is no request-time processing needed– it’s the same file. We have a Service Worker in
place which caches that shell. And obviously after that
it can work offline first. And for our mobile
site, it’s a composition of multiple single
page apps, which I will talk about in a bit. On the other side–
on the desktop– we have partial
server side rendering. That means we try to optimize
what [? content ?] needs to be rendered on the server. We don’t have a concept
of build time rendering, and we don’t have a
concept of app shells. The reason for this is simply
user’s requirement and the user experience. I feel– and that’s what
we feel at Flipkart– that the user experience
of an app shell can work really well on
a mobile device where you can show a header, a
footer, and a loader maybe, and some content. But on a desktop, showing
just a header and a loader still leaves you with a
pretty big blank page. It’s not a very good experience. So therefore, we went
for a partial server-side rendering approach. Apart from that, we have a chunked response for our first request on the HTTP response, which allows us to achieve a
faster time to first paint. I’ll explain that in a bit. And we use server side–
we use a Service Worker for caching things
like data and resources like images and
things like that. So here is the output
from a Webpack build. Webpack supports code splitting
out of the box– like Addy was just mentioning– and it
figures out the split points based on how you
include your components. It also takes care of
loading the appropriate chunk when needed– for example, on navigation. The benefit here is that
you significantly reduce the amount of JavaScript
that you need to render the first fold of your page. Like for example–
the screen shot that I’ve put up
here– the combined build that we had for our
website at some point of time, was around 206 kilobytes. With code splitting
based on routes, we were able to split it. For example, home page
only needed 32 kilobytes of JavaScript to render. And similarly, other
pages needed anything from 7 kilobytes
to 100 kilobytes. This really helps a lot. But there’s an
important caveat here. As I said, Webpack–
out of the box, Webpack will try to load
these files on navigation. When the route changes,
it figures out OK, this route is this,
JavaScript is not present. And it has a map somewhere
which tells it, OK, load this JavaScript file. Which means it is
downloading, evaluating, and parsing the
JavaScript after you have clicked on a link, which
is a very bad user experience. So to solve that, PRPL
comes to the rescue. Implementing these concepts of
chunking, streaming, and code splitting, you get a picture
that looks like this. The first one at
the top is what you see before all these
improvements for us. So you’ve got your HTML
parsing in blue at the top. And all your static
resources and JavaScript CSS starts loading when
the HTML is parsed. And you get a render time of
around 2,500 milliseconds, and a page complete
around 3,500. With these
optimizations in place, you get a first paint
of around one second. Your resource is
loading in parallel to the parse of your HTML. This is achieved using things
like Preload, Script Defer and similar things. But [INAUDIBLE]. What about time to interactive
and meaningful content? We think that your
entire content doesn’t need to be
rendered together for it to be meaningful. For example, what we do,
is we– our first paint– our first render that we
put on the user’s page contains the search box. And it functions
without any JavaScript. Which means that the
user is able to interact with the plain HTML that
we serve to them which gets rendered even before
any JavaScript has started downloading. Since most of our users–
a lot of our users– start their journey
by searching, and not just
navigating and looking for products on that page,
this really helps us a lot. So some major wins
for us that we have seen when we did this
migration– this adoption of progressive web app
concepts on desktop and mobile both– is that route-based
code splitting amortizes the high cost that you have of
single page apps and frameworks over the session of the user. You don’t load all the
JavaScript up front, you load it across the session. Similarly, smart
pre-loading of those chunks and using PRPL concepts makes
the experience seamless. The user doesn’t have to
wait after clicking on the link for the
JavaScript to load. Thirdly, chunked
encoding allows us to download JS chunks while HTML is still being parsed.
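A rough sketch of the idea with a plain Node server. The head and body split and the file names are illustrative, not Flipkart's actual code:

```js
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });

  // Flush the <head>, including hints for the route's JS chunks,
  // immediately, so the browser can start fetching scripts while
  // the server is still producing the rest of the page.
  res.write(
    '<!doctype html><html><head>' +
    '<link rel="preload" href="/home.chunk.js" as="script">' +
    '</head><body>'
  );

  // ...render the route's body here, potentially asynchronously...
  res.end(
    '<div id="root"></div>' +
    '<script src="/home.chunk.js"></script></body></html>'
  );
}).listen(3000);
```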
An interesting approach we took was that, based on the user
requirements that we figured made sense for users in India,
we solved for repeat visits on mobile specifically and
for first visit on desktop. Of course we care about both
on both their platforms, but we decided to focus
on one over the other. Let me talk about
the impact now. So up to 2x conversion
during sale events– after we migrated to this
because of the high speed and reliability,
and the benefits we have talked about
of progressive web apps– we have a significantly
reduced bounce rate. Interestingly, a lot of
people have seen concerns around search
engine optimization. How will a crawler
crawl the website? What’s the impact on SEO? After doing all
this, we have seen a 50% reduction in time
taken by the search engines to crawl a page,
and a 50% increase in the number of pages that
are crawled by Google search. That’s a significant
improvement. Apart from that, we’ve also
seen a massive 70% reduction in the tickets that are
raised, the issues that we get on the website. There are fewer errors in general. Plus it’s much easier and faster to develop, it’s more developer-friendly, and it’s easier to get new developers on board to fix those errors and to maintain. Of course there are
a bunch of gotchas. Webpack has been a super
useful tool for us. That’s what we use,
as I mentioned. And its documentation is going
through some very well deserved improvements. So working with PRPL and
code splitting you’re bound to run into a bunch
of interesting issues. And Webpack does provide a
lot of help to solve them, but some of it is buried really
deep in the documentation. You have to really
search for it. And mostly you find the
answer in StackOverflow before you find it in the docs. So the first issue we ran
into was cross origin resource sharing and route-based
code splitting. So an interesting
thing that happens– which might be true
for a lot of us here– JavaScript files and static assets generally are served from a CDN which
is on a different origin as compared to your website. Now when you do a
link pre-load, you can tell it to load
it as a script, and you can tell it to load it
from a different origin– it’s crossorigin anonymous. And similarly, on a script tag you can define that it’s loading as a
cross origin resource. But when Webpack tries to load
a script like we mentioned based on the chunks– when it sees
it needs a new JavaScript– it will, by default, not load
it as a cross origin script, and a browser may
end up blocking it. It’s caused us quite
a lot of headaches. So interestingly,
it does provide an attribute or a config that you can specify, which makes Webpack load
those chunks as a cross origin script. It takes care of
that internally.
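That config is Webpack's output.crossOriginLoading option. A minimal sketch, with a placeholder CDN origin:

```js
// webpack.config.js
module.exports = {
  entry: './src/main.js',
  output: {
    filename: '[name].[chunkhash].js',
    path: __dirname + '/dist',
    publicPath: 'https://cdn.example.com/assets/',
    // Emit on-demand chunk <script> tags with crossorigin="anonymous"
    // so they load cleanly from the CDN's different origin.
    crossOriginLoading: 'anonymous'
  }
};
```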
The second one was– as we know– cache invalidation
is a bit of a problem apart from naming variables. When you create a chunk– and
usually for long-term caching purposes the name of the
chunk, the file name, usually contains
the hash– that’s how you determine whether
this file is a newer version– if the content is new. So now what happens,
is that when Webpack creates these
chunks, it needs to maintain a look up table. That in your entry chunk, which
is loaded at your page load, it needs to know that
when this route is open, this is the JavaScript
file it needs to download. Now that URL– that
file– is going to change at some point of time. So for example, you have
route-based chunks– like I mentioned before,
you have these 15 routes on a website, and you
have those 15 JavaScript files correspondingly– as each file–
suppose one of them changes. Suppose you make change on one,
like a product details page. Ideally only that one JavaScript file– that chunk– gets invalidated in the cache. Only that should be needed for the user to download again. Others should still be served from Service Worker, not the HTTP cache. What happens, is because
that chunk has changed, its file name has changed. The manifest in the look-up
table in Webpack’s entry chunk will also
change, which means the entry chunk will change. Which means the user ends up
downloading extra JavaScript which has not actually changed. So for that, Webpack
provides a thing called a Webpack manifest. It’s pretty simple. In the CommonsChunkPlugin you just define the name for the manifest, and you end up with a
separate file– like 500 bytes or something– which will
just have that look-up table. And your entry becomes independent of the content of your other chunks.
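A minimal sketch of that manifest extraction, assuming the CommonsChunkPlugin (which is how this was typically done):

```js
// webpack.config.js
var webpack = require('webpack');

module.exports = {
  entry: { app: './src/main.js' },
  output: { filename: '[name].[chunkhash].js', path: __dirname + '/dist' },
  plugins: [
    // Pull Webpack's runtime and chunk look-up table out into a tiny
    // separate "manifest" chunk, so changing one route chunk no longer
    // changes the hash of the main entry bundle.
    new webpack.optimize.CommonsChunkPlugin({ name: 'manifest' })
  ]
};
```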
It’s these kinds of small things which we ran into, and a lot of you may run
into when you’re implementing these kind of things. So what’s next
for us at Flipkart is making things faster. So we’re looking
into things like HTTP/2 for enabling push of
these resources smartly. We’re also working on AMP to make the first visit faster. So that’s all from my side. You can reach out to me
on this Twitter handle or my team at Flipkart
and Disco-Tech. It’s great to be here. Thank you. [APPLAUSE] ADDY: So I’ve got
one more thing. So I’d like to tell
you a quick story. I don’t have a lot
of time, but I’d like to tell you a quick story
about a small group of us who got to write
some code for NASA. So a while back– a
few years ago– NASA released a master
list of software projects it cooked up over
the last couple of years. This is more than just stuff you’d run on your personal computer. It’s apps that would
help with robotics, and cryogenic systems,
and space simulations, and all sorts of things. And they had these in a bunch
of different places, Github, Gitlab, Source Forge, it
was all over the place. But it was part of the
government initiative to try open sourcing more stuff. And it was kind of neat to see. So off the back of
that NASA released a site called code.nasa.gov that
looked a little bit like this. The idea here was that any
time you come to the site, you could take
a look at what NASA engineers were hacking
on in the open, which is kind of cool. I discovered this on
Hacker News one day, and my friend Sam Saccone
also discovered it around the same time. And we tried looking at
this on a real device, and it basically
crashed my phone. What happened was we ended up
profiling this a little bit. And there were a number
of interesting quirks with this particular
implementation. It kept the main
thread pegged for quite a long time. In fact, we ended up working on
a number of performance audits. There’s actually a performance audit I’ll be publishing shortly on this whole thing. We ended up trying to make
this existing implementation as fast as we could. This was sort of
an Angular 1 app. And at that time that
framework wasn’t really built with real mobile devices
in mind– at the time. And we ran into all
these interesting issues like digest cycles
taking up to a second. This particular app had 10,000
watchers for some reason. They had a Github embed
for every single entry. So they had like 300 or 400
projects listed on this page, and they had a Github embed for
every single one so that you go and pull up the project. So that was like an additional
300 or 400 network requests for loads. It also had a ton of web fonts
and other interesting issues here that I don’t think are entirely atypical– if you were new to this stuff, you’d probably run into some of these same problems. And so we started optimizing
this as much as we could. But we reached a
point where we thought this just isn’t worth it. It’s probably worth taking a
look at rewriting this thing. And I know that today
I’ve been talking– we’ve been talking
quite a lot about React and Preact and other
libraries, but I like this idea of best
practices being automated. I think that some of the
ideas we talked about today around PRPL, and
code splitting, and so on, are things that
we can do a better job of building in by
default into today’s tooling. I’d love to get to a point
where things like Create React App, and Angular CLI, and Ember CLI, and so on– Next.js– whatever it is that
you happen to be using, are considering some
of these approaches and looking at where
they can provide real improvements to developers
so that we balance developer experience with user experience. So Polymer does
this kind of well with the Polymer App Toolbox. I consider it a good reference
for how to do this stuff. Sam, and I think Taylor,
mentioned some of this stuff. So it’s got PRPL and
code splitting built in, and lazy loading,
and offline caching, and support for
HTTP/2 server push. But using the
Polymer App Toolbox allowed us to actually ship a
completely brand new version of code.nasa.gov. This is NASA’s very
first progressive web app that we deployed last night. [APPLAUSE] Thank you. [APPLAUSE] I’ve got to give big props to
Frankie over on the Polymer team, and Keanu, Hannah
Lee, and all the folks that helped us get this shipped. Basically, everything
here is faster. Here you can look up everything from code for the Apollo 11 mission from all those years ago, to ways in which NASA would publish projects or even share projects with other people. All of these views on
a real mobile device perform really well. It’s a massive improvement
from what they had before. We spent a lot of time on
things like making sure that the infinite scrolling
for their project list view was really, really fast,
hitting 60 frames a second. And this experience works
really great on desktop as well. So the experience there–
again, it’s responsive. We can see the list
there, and actually be able to search things
really, really quickly. There’s no lag in place. But all of the views
work just as well there, just showing you a slightly
different look and feel to this thing. We profiled this using
Lighthouse on a real device with a real network. And this thing was interactive
in under four seconds. So under 4,000 milliseconds. We were really happy with
that, because we actually spent less than a week
redoing this site. It’s not a complex
site by any means, but the idea that you
could completely throw away an old code base and
try exploring something like a PRPL pattern in
such a short amount of time with a very small team, was–
I thought– kind of cool. So we really enjoyed hacking
with NASA on that site. And I encourage you to
contribute to code.nasa.gov. Just being able to tell your
mom that you hacked on NASA code is kind of neat. So– [LAUGHTER] –that’s always an opportunity. But it’s all open source. This entire app is open source. You can go and check it out
on NASA’s Github organization. So
Github.com/NASA/code-NASA.gov. I am certain we will get pull
requests from folks mentioning things we’ve done wrong,
but I welcome all of those. So please feel free
to check that out, and let us know if there’s
anything we can improve. In closing, I hope that some
of the ideas in this talk give us inspiration to perf
the web forward together. Because we’re all
in this together. I see browser vendors
as being in a good place to tell you about the
engine and the performance targets we should be hitting. I see framework authors
and tooling vendors as being people
that ideally want to make sure the
developers are able to ship the right experiences
that benefit their users and the experiences you’re
shipping for your users. So let’s work together. I would love– you
know, if you’re working on any of this
stuff, please talk to me. Please talk to us. And let’s move things
forward together. Thank you. [APPLAUSE] [MUSIC PLAYING]
