Service workers at scale with Facebook and Flipkart – Google I/O 2016

OWEN CAMPBELL-MOORE: Hi. Welcome back. My name is Owen, and
I’m a product manager on the Chrome team. I focus on making sure that
all the browsers provide you all of the
capabilities that you need to be successful on the web. So I’m thrilled to be here
today, joined by Aditya Punjani from Flipkart, who worked
on the amazing Flipkart Lite Progressive Web App, and by
Nate Schloss from Facebook, who worked on their service
worker-based push notification implementation,
and is now working on rolling out
more service worker features across their site. So our goal here
today is to make sure that you leave with
all the knowledge that you need in order to bring
service workers into production at scale. And to start, I’m going
to quickly recap on how service workers can be used
to solve a number of key use cases on the web. So the first is caching
with service workers. So service workers give you
full programmatic control of the network and
of your caching. They’re a kind of
event-based web worker. And the way it works is, when
a user first goes to your site, what happens is a
service worker can be downloaded by the browser
and stored on the device. And then whenever any network
request is made by your app, like an Ajax request or you
include some kind of image, the first thing is that an
event is fired into this service worker that essentially allows
it to intercept that request and handle it programmatically. It can do this by forwarding
it onto a web server, or by reading from a
cache, or by generating a response entirely. And so these allow you to
build an experience that’s responsive and reliable,
regardless of the network conditions.
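To make that concrete, here is a minimal sketch of a fetch handler that serves from the cache and falls back to the network; the details are illustrative, not any particular site's code:

    // Answer requests from the cache when we have a copy,
    // otherwise fall through to the network.
    self.addEventListener('fetch', event => {
      event.respondWith(
        caches.match(event.request).then(cached => cached || fetch(event.request))
      );
    });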
So next, synchronizing with a service worker. So a service worker has
this great capability called background
sync, which means that, if a user takes
an action in your app, whether they’re on a flaky
network or completely offline, you can be sure that that action
will make it up to the server. And so the way this works
is when a user, for example, writes a post or
takes an action that generates an analytics event,
which here is represented as the blue dot, even when the
web server is not available, you can make that
network request. The service worker
receives the request, intercepts it, and
sees that there’s no web server available. It can now register for
an OnSync event, which will be fired the next
time the device connects to the internet. So now the user can
navigate away from the page, they can close it,
they can be doing something else on their
phone, and the service worker can go to sleep. So now nothing is running. But then at the point where
their internet connection comes back, the operating
system will notice. It’ll let the web browser know. The web browser can wake
up the service worker and fire the OnSync event. This allows the
service worker to run, and you to synchronize
your data up to the server reliably
and in the background.
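As a rough sketch, the two halves of that flow look like this; the tag name and the sendQueuedEvents helper are hypothetical:

    // In the page: queue the action locally, then ask for a one-off sync.
    navigator.serviceWorker.ready.then(registration => {
      return registration.sync.register('send-queued-analytics');
    });

    // In the service worker: the browser fires 'sync' once connectivity returns.
    self.addEventListener('sync', event => {
      if (event.tag === 'send-queued-analytics') {
        // sendQueuedEvents() is a hypothetical helper that reads queued
        // events (for example from IndexedDB) and POSTs them to the server.
        event.waitUntil(sendQueuedEvents());
      }
    });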
And finally, push notifications with service workers. And so this starts with you telling your service worker to subscribe to a push server. The push server will generate
an endpoint and some encryption keys. The endpoint is kind
of like a magic URL that, if your back end
sends a request to it, it will trigger an event
to be fired on the user’s client in the service worker. And so the push server
generates this endpoint and the encryption
keys, and passes it to your service worker. Then you send those
up to your web server.
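In code, that subscribe-and-upload step might look roughly like the following; the /push/subscribe route is a hypothetical backend endpoint, and depending on the browser you may also need to pass an application server key:

    // In the page: subscribe through the service worker registration, then
    // send the endpoint and encryption keys up to your own web server.
    navigator.serviceWorker.ready
      .then(registration => registration.pushManager.subscribe({ userVisibleOnly: true }))
      .then(subscription => fetch('/push/subscribe', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        // The subscription serializes to the endpoint plus its encryption keys.
        body: JSON.stringify(subscription)
      }));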
At this point, the user can navigate away. Chrome can be closed,
they can be doing something else on their phone. And then when you have a
notification on your server that you want to
send to the user, you simply encrypt
it with the user– with the keys that
were given to you, and you make a request
to that endpoint. That will pass the data,
the encrypted data, over to the push server, which
will in turn wake up the device and send an event into
your service worker running in the background. The service worker then receives
this decrypted payload down from the web server, and can use
the notifications API in order to show that
notification to the user. And so together,
service worker allows you to build
advanced caching that makes your website
fast and reliable, regardless of the
network condition. It allows you to build offline
and background synchronization, and it allows you to send
your users push notifications. And so together with
these capabilities, you can build a really
great experience. And so we added one
more thing to it, which is Add to Home Screen. So by providing
just a small JSON manifest with some metadata
about your app, icons, and its name, you are
able to show a banner to your users
asking them if they want to add the Progressive
Web App to their home screen. And if they click it, then they
get an icon on the home screen, just like any native app.
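A small manifest of the kind Owen describes might look something like this; every value here is a placeholder:

    {
      "name": "Example Progressive Web App",
      "short_name": "Example",
      "start_url": "/?source=homescreen",
      "display": "standalone",
      "background_color": "#ffffff",
      "theme_color": "#1976d2",
      "icons": [
        { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
      ]
    }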
And together, sites that use these rich service worker capabilities and Add
to Home Screen we call Progressive Web Apps. And so the world today
looks something like this. You have this array
of ingredients that you know can be combined
together to build something, to create something amazing. And you might have even cooked
up a little something at home and got a taste of it,
and you think it’s great. But it turns out that, as
Rahul mentioned in the keynote yesterday about the mobile
web, service workers are now handling over 13 billion
page loads a day, and they’re responsible for
delivering over 10 billion push notifications every day. And so suddenly, you’re not
just cooking for yourself, you’re cooking for a pretty
large group of house guests. And so that’s why
I’m thrilled that we have here today two of
the world’s master chefs in service worker cooking. And so I’d like to invite up to
the stage Aditya from Flipkart to tell you more. [APPLAUSE] ADITYA PUNJANI: Thanks, Owen. It’s great to be here. My name is Aditya, and
I work for Flipkart, which is one of the largest
e-commerce retailers in India. Now in India– if I can
get my slide to change, all right– mobile is
profoundly important, and it’s at the crux
of everything we do. At Flipkart, we
continuously strive to build really compelling
and delightful mobile user experiences. And in that regard,
early last year we actually shut down
our mobile website and directed our users to
download the native app, which we believed gave a far superior experience to the mobile web back then. At the same time, we
actually asked ourselves, what is it that
gives native apps an edge over the mobile
web, especially given the unique properties
of mobile web, such as an always
updated distribution and a frictionless instant load. We identified three core areas,
and that is high performance, an immersive experience, and the
ability to reengage our users. Let’s look at high performance. There was a common
feedback among our users that the native apps
somehow felt faster than the mobile web back then. What they really meant was
that the native apps had a reliable and
consistent performance independent of the
network conditions or the type of device
profile they had. Mobile networks have a lot of variation to them. You know, you have
things like the time of the day, your location, or
the number of concurrent users. All of these factors can affect
the quality of a network. Take, for example, when
you have a low signal, yet it shows that I do have
an internet connection. But in reality, I don’t. Or what happens if your
internet stops working and has just decided to
reconnect at some point? Or if you totally lose
your signal at all. In all these cases,
the native apps seem to endure the network
conditions and open up reliably even so. On the web, we have
actually lacked the model to build web apps that
can endure flaky networks. Native apps, however, do
this at a very high cost. On a 2G connection, it
can take several minutes for a user to download and
install the native app. Compare that to the different
ways of building web apps at Flipkart, even
with server-side rendering or a client-side
single-page app, it takes several seconds
before anything meaningful is painted for the first time. On repeat visits, native apps
have a significant advantage. On the first visit, they have
managed to package and download the entire set of critical
resources required for an instant
load the next time. We wanted to bring this
instant load model to the web. And the way we do that is with
the app shell architecture. The app shell architecture
is, in many ways, an evolution of the traditional
server-side rendering, or the isomorphic or
universal, as we call it. But it has key
differences to it, which I’ll get to in a minute. For us, the app
shell architecture meant breaking down our entire
application into two states– a loading state
and a loaded state. The loading state is essentially
what the app shell is. It’s an HTML structure which
has placeholders, and acts as a host for the dynamic
content to come fill in. A well-designed app shell
would give visual cues to the user of what to
expect as the data loads in progressively. This will enhance their perception of speed. Now, with app shells, you throw service workers into the mix,
shells on the first visit. And on the second load, you
have instant load performance. Now, a lot of you
may argue that we could do this with
other technologies, such as app cache. What is so special
about service workers? Well for us, service worker is a highly
low-level primitive. What that means is,
there is no magic. Everything is left up
to us as developers to design the solutions. And with that, we could actually
devise very sophisticated cache policies that could
never be done before. The last important bit is
that the service worker acts as a network proxy
layer in your browser. What that means is that our
application can be completely indifferent if– whether
a service worker exists or not, and just function
with or without it. The impact of this was immense. Even on a 2G connection, we
brought down the load times to a few milliseconds, thanks to the service worker cache. So now, on a repeat visit,
with service worker and the app shell architecture, we
have a comparable load time to the native app. And it’s not just
blazing fast, it’s even reliable on flaky networks. This is what we believe is
the offline first pattern, which is you respond to all
the critical resources from the offline cache first, and go
only to the network for dynamic content that cannot
be otherwise cached. This allows us to build really
reliable and network-resilient web apps.
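A stripped-down sketch of that offline-first, app shell pattern might look like this; the cache name and URLs are invented for illustration, not Flipkart's actual code:

    const SHELL_CACHE = 'app-shell-v1';
    const SHELL_ASSETS = ['/shell.html', '/app.js', '/app.css'];

    // On install, precache the static app shell generated at build time.
    self.addEventListener('install', event => {
      event.waitUntil(
        caches.open(SHELL_CACHE).then(cache => cache.addAll(SHELL_ASSETS))
      );
    });

    // On navigation, answer from the cached shell first; the page then pulls
    // in the dynamic content that cannot be cached over the network itself.
    self.addEventListener('fetch', event => {
      if (event.request.mode === 'navigate') {
        event.respondWith(
          caches.match('/shell.html').then(cached => cached || fetch(event.request))
        );
      }
    });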
All right. So let's compare the app shell architecture to the traditional way of doing server-side rendering. Now, server-side rendering
is a recommended approach to improve first paint. But we get the same
benefits with the app shell architecture, because
in both the cases, we are executing
JavaScript on the server and generating HTML for
a quick first paint. The interesting thing about
the app shell architecture is that the app shells are
just static HTML pages. That means they have no
dynamic elements to it, which means they could be
generated during your build time. So you can offload all the heavy
lifting to your build process, rather than on the
server-side rendering where you have to process and
generate it per request. Again, being static,
they can be easily cached on the client side,
which may be tricky to do with the server-side
generated HTML pages, because they have
dynamic elements to it. So you might end up with
stale content, which in e-commerce is unacceptable. The best part about app
shell is a single app shell can be reused across millions
of URLs at the same time. At Flipkart, we have a catalog
of over 30 million products. That means 30 million
unique product page URLs. But we can share the same
product page app shell across all these
product page URLs. The last part is SEO. A lot of users do
server-side rendering to [INAUDIBLE] for SEO. But with experiments
at Flipkart, we have managed to achieve
the same benefits of SEO with the app shell architecture. There were many
challenges that we faced throughout
this whole journey, but one of the biggest challenges
was maintaining and scaling the handcrafted service
worker code [INAUDIBLE]. As more team members
collaborated on the same file and our use cases grew
complex, we found the need to sort of abstract the common
patterns and move to a library. We chose to use SW Toolbox. For those who are not
familiar, SW Toolbox is a wrapper library on
top of service worker. It allows you to explicitly define routes and map different caching
strategies to these routes. Here’s an example of
what the code looks like. On the top, you see
a product page URL mapped to the sw.fastest
strategy, which is essentially a race between the
cache and the network, most of the time won by
the cache, of course. But it also means that, in
the background, if the network request succeeds, it
is going to update the cache with the latest
version of the product shell. Service Worker Toolbox also
adds a bunch of capabilities on top of service workers. For example, the maxEntries option allows us to have an LRU-based cache implementation so we don't bloat the cache with too many resources. The maxAgeSeconds option allows us to easily purge the cache after a given time. And the networkTimeoutSeconds option enables us to build network-resilient web apps by falling back to the cache if a request on the network is taking too long.
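Pieced together from that description, a route configuration along these lines might look roughly like the following with sw-toolbox; the route patterns and numbers are illustrative, not Flipkart's actual values:

    importScripts('/sw-toolbox.js');

    // Product pages: race the cache against the network ("fastest"), with an
    // LRU cap and an age limit so the cache does not grow without bound.
    toolbox.router.get('/product/*', toolbox.fastest, {
      cache: {
        name: 'product-shells',
        maxEntries: 50,
        maxAgeSeconds: 7 * 24 * 60 * 60
      }
    });

    // API requests: try the network first, but fall back to the cache
    // if the network takes too long to respond.
    toolbox.router.get('/api/*', toolbox.networkFirst, {
      networkTimeoutSeconds: 3
    });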
One of the patterns that we followed before deploying service
workers to production is to devise a service
worker kill switch. The kill switch is
essentially a combination of four different things. The first is two-level versioned cache names. Second, the no-cache
HTTP headers. And the third is skip waiting,
along with clients.claim in the service worker. The way we name our
caches is that we have a global version and
then a local version appended to the canonical cache name. In the install event
of service worker, we clear out all the
caches that are not part of this cache object. So that means if we had,
in case of an emergency, to purge all the
caches, we would just increment the global version. And if for some use
case we had to just bust one particular
cache, we would just increment the local version.
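A sketch of that naming scheme and the install-time purge; the version numbers and cache names are invented for illustration:

    const GLOBAL_VERSION = 2;  // bump this to purge every cache in an emergency

    // Two-level names: the global version, then a local version per cache.
    const CACHES = {
      shell:  'shell-v'  + GLOBAL_VERSION + '.3',
      images: 'images-v' + GLOBAL_VERSION + '.1'
    };

    self.addEventListener('install', event => {
      const expected = Object.keys(CACHES).map(key => CACHES[key]);
      event.waitUntil(
        caches.keys().then(names => Promise.all(
          names
            .filter(name => expected.indexOf(name) === -1)  // not in the cache object
            .map(name => caches.delete(name))
        ))
      );
    });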
Now, we want to make sure that, as
service worker file, it reaches the user. So we set the no-cache and
max-age Cache-Control headers and a negative Expires header, which makes sure the browser always downloads the latest
service worker file on every navigation. Now, like us, if you’re worried
about the amount of download that adds for users on
low mobile bandwidth, well, you can add the Last-Modified or ETag header and respond with a 304 Not Modified so that it's only downloaded
when the service worker file changes.
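On the server side, that setup might translate into something like this Express-style sketch; this is an assumption about the shape of the server, not Flipkart's actual code:

    const express = require('express');
    const path = require('path');
    const app = express();

    app.get('/sw.js', (req, res) => {
      // Always revalidate the service worker file on navigation...
      res.set('Cache-Control', 'no-cache, max-age=0');
      // ...but let conditional requests answer with 304 Not Modified:
      // sendFile sets ETag and Last-Modified, so the body is only
      // downloaded when the file actually changes.
      res.sendFile(path.join(__dirname, 'sw.js'));
    });

    app.listen(3000);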
All right. So now, once the browser has the new service worker file, it will install it. On the install event,
we want to make sure that the service
worker immediately moves from the Install
state to the Activate state. So we call self.skipWaiting. This will make sure that
the service worker goes to Activate state
without requiring a navigation from the user. On Activate, we want to take
control of any open clients under the same scope of the service worker, so we call self.clients.claim.
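Those two calls look roughly like this in the service worker:

    self.addEventListener('install', event => {
      // Jump straight from Install to Activate without waiting for a navigation.
      self.skipWaiting();
    });

    self.addEventListener('activate', event => {
      // Take control of any already-open clients under this worker's scope.
      event.waitUntil(self.clients.claim());
    });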
All these patterns put together allow us to confidently and reliably deploy service workers to millions of
users and manage them at scale. Native apps have this
first-class experience where they live on your home
screen icon– home screen page or the app menu. With a tap on an icon,
you can open them up in a full-screen
immersive experience. On the web, we have
been stuck in a browser tab for a very long time. The good thing is, that’s
changing with the service worker and web manifest. As Owen mentioned, now we
have the Add to Home Screen function. On Flipkart Lite, we make Add
to Home Screen completely opt-in for the user. So as a user, once you engage
and explore the web app, you can decide to press the
Install this Web App Icon. And it’ll open up
the native pop-up which will then add the
icon to the home screen. From there begins the
immersive experience. As you can see, as
soon as I tap the icon, it opens up with
a splash screen. There’s a full-screen
experience. Gone is the URL bar. The interactions are
smooth and fluid. We give touch feedback. When I search something,
data loads in really quickly. And overall, the entire
UI stays responsive. But that’s not it. This web app is resilient to
all kinds of flaky networks. So if I emulate the
offline mode here, the web app still
works seamlessly and allows me to browse my
last– the cached product. And this works even if– [APPLAUSE] Thank you. [APPLAUSE] The best part is this
works even if the user tries to boot up Flipkart
Lite on a flaky network. So here's an example of me trying to open Flipkart Lite in airplane mode. It still opens up reliably at
that consistent performance that we promise. And while we wait for
the internet connectivity to restore, the user can still
interact with the web app, and browse previously
cached content. This keeps the user engaged. And as soon as the
connectivity comes back, the full functionality
is restored. [APPLAUSE] This is what we mean
by reliable performance and network-resilient web apps. All right. So I did say we solved this for SEO,
but how exactly did we do that? Well, we just followed a
couple of best practices. The first one was we
treat service worker as an opaque black box. That means that application code is not at all aware whether a service worker exists or not. The second is that we embed SEO
content within the app shells. Now, this might seem
counterintuitive. What if my app shell of
product page A gets cached, and then when I visit
product page B the service worker picks up the
shell of product page A and serves me the content? Well, that might be true. But the good thing is
web crawlers don’t really have service workers. So web crawlers have got to make
a network request every time, and with that, we can give
them the relevant content. The third point goes
without saying, that you want to have
cross-browser support for efficient and reliable
crawling and indexing. Our main content, however, is
still rendered via JavaScript and generated dynamically. With experiments at
Flipkart, we have seen that the Googlebot does
execute JavaScript and indexes the dynamic content. We have launched this just
a couple of weeks back, and we are already seeing a
huge upside in organic search traffic and a big
surge in the number of mobile-friendly
search results. When we launched
Flipkart Lite, it started as a
Chrome-only experiment. But from day one,
we have always been committed to building
a ubiquitous web app. Today, Flipkart Lite works on
a wide spectrum of browsers, with just a couple
of more left to go. And this is essentially the
theme of Progressive Web Apps. You have a web app that
works almost everywhere, and it starts in a browser tab. And the more you interact
with it, the more you engage with it, on
more capable browsers, it transforms into a
native-like experience. Fast forward to today, we see that over 45% of users that shop on the mobile web are brand-new
customers for Flipkart. And not just that, we have
over 40% monthly repeat users. Moreover, we have seen a
jump of 70% in conversions from users that
browsed Flipkart from the full-screen immersive
experience launched right from the home screen icon. And the best part is we have
barely scratched the surface. There’s so much more we
can do with the mobile web, and we have an amazing team back
at home in Bangalore, India, working very hard at this. Now, native apps have
an incredible tool to reach out to their users,
known as push notifications. Thanks to service worker, we
can even send push notifications on the mobile web today. We're working really hard to
bring this to Flipkart Lite soon. But to talk more about web
push and service workers, I’d like to invite Nate
Schloss from Facebook. Thank you. [APPLAUSE] NATE SCHLOSS: Thanks
so much, Aditya. It’s really exciting to be here. My name’s Nate Schloss,
and I’m a software engineer at Facebook. I built out our browser
push implementation, and I’m also working on our
future use of service worker and rolling out service
worker across our apps. At Facebook, we love the web. The web is cross-compatible. Users understand
how to use the web, and there’s low
barriers to entry. The web is also fast. Navigating to a website can
happen in a matter of seconds, versus minutes to
download a full program. As a mobile-first
company, you might be wondering why does Facebook
care so much about the web. Well, mobile-first does
not mean native-only. To have a successful
fully, like, encompassing mobile-first strategy,
you also need to include the mobile
web, as well as native. One might worry,
all right, we’re going to invest all of this
time in a mobile web app. But what about the
native experience? Well, the mobile web is
growing right alongside native. When we see growth in the mobile web, we don't see any downsides going on in native. When we see growth in
native, we see the mobile web grow right along with it. The platforms are complementary. They’re not competing
against each other. The mobile web plays an
even more important role in emerging markets like India. In places where the
barriers to download an app are high– maybe
there’s flaky networks, maybe people have a hard
time understanding how to install Google Play,
get online on Google Play, maybe you don’t have a lot of
data so it takes time to– they can’t download a full app–
the mobile web is a lot easier. You can just click on
a link, get to a site, and load it instead
of having to download a full app every single time. The desktop web is also an
area that’s very important. The desktop web is an area of
continued strategic importance to Facebook. Lots of people use
Facebook on desktop, and it’s an area that we
need to make really awesome. We need to have it be
a polished experience, and continue to make it a really
great way to use Facebook. So speaking of desktop,
you might remember your first desktop computer. Maybe it was 1995 and you
wanted to check, like, your encyclopedia. So instead of, like,
browsing to a website, you would open your
encyclopedia program. You would go, search what
you want in your encyclopedia program, everything
would be local, and you would get what you want. Let’s say you wanted to
play, like, a pinball game. You would open your pinball
app on your computer back then, and you would play
your pinball game. But then we started
seeing a shift to the web. Around, like, 2000s,
people started– stopped using native
apps for everything and started shifting to the
web for a lot of the reasons that I outlined before. The barrier to entry
on the web is lower. The web is fast. You don’t have to
download a full program. It’s a lot greater. But then when
mobile came around, we saw a shift back in
the other direction. People stopped using
the web so much, and they started using
native apps on mobile. And why was this? So originally on mobile, the
web was more of the desktop web just brought to
a smaller screen. It was missing many of the
things that made mobile great. The mobile experience and
the desktop experience are pretty different. On mobile, you expect a
real-time communication device. You don’t expect the
full desktop experience. And there’s a lot of
features on mobile that haven’t traditionally
been on desktop– for example, things working really well
offline on flaky networks, and especially that real-time
communication experience with push notifications. Push is necessary to be
successful on mobile. You wouldn’t use a messaging
app that didn’t tell you when you had new messages. You wouldn’t use a calendar
app that didn’t tell you when you had a new appointment. And for example, you wouldn’t
use a social networking app that didn’t tell
you when somebody commented on your status. Now on the web, for
the longest time, the best way to re-engage
people was email. Now, email is not terrible. It can look pretty good,
and users understand email. It’s not, like, so bad. However, the barriers to entry
for email are pretty high. You have to type in
your email address. Then you have to go to your
email program, click on a link, go back to the site. It’s very– there’s
a lot of steps. Additionally, you can’t get
some of the real-time engagement that you can with
push notifications. With email, if you’re active,
like, commenting on a thread or in a messenger
conversation, you have to keep going back
to your email program to check what’s going on. You don’t get notified
about anything in real time. With SMS– so we can
also use SMS here, but SMS has a lot
of the same barriers to entry that email
has, in addition to being flaky sometimes. So we knew that push
notifications were a good way to solve this, and we also
knew that they were very, very successful on our native apps. And we cared about the
mobile web, as well. And for the longest
time, we wanted to bring push notifications
to the mobile web. We wanted to do it so badly
that we built private push notification implementations
with UCBrowser and Opera Mini. And just like we saw
in our native apps, this was very impactful. Visitation goes up,
engagement goes up. This was really, really
great for the mobile web. We wanted– so
after successfully doing this in a
private way, we wanted to start doing it in a
standards-compliant way, in a way that would
work everywhere, and work really well for everybody out of the box. So when we heard that Chrome and
Mozilla and others were working on doing a web push API
with service workers, we were really, really excited
to build a notification implementation there, as well. So what does this look like? Well, when a user
browses to Facebook, they can opt in to get
push notifications. Once they do that, let’s say
somebody posts on their wall. So somebody tags
them in a post, they can click on a notification,
and they get to the content right away. This is really similar
to the native experience. In many ways, it looks identical
to the native experience. And it’s the experiences that
users know and expect and like. This is the way to do
notifications on mobile. So from a technical side,
as Owen was saying before, setting it up isn’t too bad. When your user opts in
for push notifications, you get an endpoint. You send data to this endpoint. In this case, it’s going to
be Google Cloud Messaging. Now, GCM is going to have
a persistent connection with browsers. So it has this
persistent connection, and it sends a push
event with the data that you sent to it
before to the browser. The browser knows
which service worker to wake up for this push event. It gives the push event
to the service worker. The service worker takes the
data and makes a notification. It gives the notification
to the browser, and the browser can then go
ahead and display the notification
on the most modern browsers, you get push
notifications with data. However, [INAUDIBLE] long tail
of the web, not everything supports push
notifications with data. So you also want to set up push
notifications that can work in situations without payloads. Doing this is just
one additional step. It’s not too bad. So just like before, you
have your push endpoint, and you send a request to it. However, this time you don’t
have an encryption key, and you don’t send data. Just like before, you have your
push provider, which is there, and it has a persistent
connection with the browser, and sends your push
event to the browser. Just like before,
the browser knows which service worker to
wake up for this push event, and it wakes up
the service worker and gives it the push event. However, this time
your service worker doesn’t have any
push data, so it has to go back to your
server, fetch the data, and then it has the
data that it can use to display the notification. It constructs the
notification just like before, gives it to the
browser, and the browser can display the
notification to the user. However, I should
note, in this scenario, there’s one other thing
you want to think about. That fetch request back to your
server could potentially fail, but the service worker
still has the push event. As part of the contract
to getting push events in a service worker, you’re
promising the browser that you’re going to go
ahead and actually display notification. If you don’t, many browsers
will apply a penalty, and some browsers will
actually show an error message notification. This error message notification
is not a good user experience. It looks really bad. So you want to always
have a backup notification ready in your service worker
for if that fetch to your server fails.
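A rough sketch of a push handler that covers both cases, with a backup notification if that fetch fails; the /notifications/latest endpoint and the field names are hypothetical:

    self.addEventListener('push', event => {
      const resolveData = event.data
        ? Promise.resolve(event.data.json())      // the payload arrived with the push
        : fetch('/notifications/latest')          // no payload, so ask the server
            .then(response => response.json())
            .catch(() => null);                   // the fetch failed

      event.waitUntil(
        resolveData.then(data => {
          if (data) {
            return self.registration.showNotification(data.title, {
              body: data.body,
              data: { url: data.url }
            });
          }
          // Always show something: the browser was promised a notification.
          return self.registration.showNotification('You have a new notification');
        })
      );
    });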
So as we rolled out push at scale,
we learned along the way. One of these is when it comes
to clicking on notifications. The problem of clicking
on notifications might seem pretty
straightforward. You click on a notification, and
it opens a new browser window. So let’s say you’re in an
active messaging conversation. I click on a notification, it
opens a new browser window. Now the next message comes in. I click a notification, and
it opens a new browser window. Now another message comes in. I click on a notification, it
opens a new browser window. As you can tell,
pretty soon you’re going to end up with a lot
of browser windows open. It becomes really, really hard
to use Chrome, because you just can’t find the tab you want. Maybe things could
be a little slow. It’s not a good user experience. All right, so how
do you solve this? Well, maybe you click
on a notification, now the service worker checks
if there are any windows open, and then, if there are,
it asks that window to navigate to the new–
the place we clicked. Or if there’s no windows
open, it opens a new window. This brings some other
challenges, though. Let’s say that you have a site
that users can engage with. Maybe they’re in the middle of
typing a comment or a new post. And they click a notification,
and then the window navigates and it blows away
their post they’re in the middle of writing. That’s also not a
good user experience, because now you’ve just
deleted what the user was in the middle of doing. So to solve this, we came
up with a solution that’s also– that’s not too bad. So, somebody can click
on your notification. Then the service worker is going
to get the notification click event. Now, what the service
worker will do is it’ll check if there’s any
windows of your site open. If there aren’t,
what it can– just like the first
scenario I outlined, it can go and tell the
browser, hey browser, please open a new
window to my site. And the browser can open it. Pretty straightforward. Now, this case gets a
lot more interesting. Let’s say you have a window
open and somebody clicks on your notification again. Just like before,
the service worker gets a notification click event. However, this time,
the service worker is going to check
and see if there's any windows of your site open. If there are, instead of asking the browser to open a new window, it'll send a message to the
window saying, hey, window, can you please navigate to this URL. The window can then
either say yes, I’m going to go navigate,
if the user’s not in the middle of writing
content, in which case the service worker
can say, hey, browser, please focus on this window. Or, this window can
say, hey, the user is in the middle
of writing a post. I can’t navigate right now. In that case, just like
before, the service worker can go ahead and
open a new window.
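Simplified a great deal, that pattern might be sketched like this; the message format between the worker and the page is an invented example, and a real implementation would wait for the page's reply before deciding whether to open a new window:

    self.addEventListener('notificationclick', event => {
      const url = (event.notification.data && event.notification.data.url) || '/';
      event.notification.close();

      event.waitUntil(
        self.clients.matchAll({ type: 'window' }).then(windowClients => {
          if (windowClients.length === 0) {
            // No window open: ask the browser to open one.
            return self.clients.openWindow(url);
          }
          // A window is open: ask it to navigate. The page can refuse if the
          // user is mid-post, and the worker would then open a new window.
          windowClients[0].postMessage({ type: 'navigate', url: url });
          return windowClients[0].focus();
        })
      );
    });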
We found that this was the best compromise between not
too many tabs of the same site being open, but also not
blowing away what the user was in the middle of doing. So in a similar vein,
when I was first– when we were first
building this, I had notifications on
for every single platform. At the beginning,
this wasn’t too bad. It was just Facebook,
the native Facebook app. So I would get a
notification from Facebook. My phone would shake, I
would see the notification. Not too bad. But then, once we hit
every single platform, this got to be a
little overwhelming. So I get my notification
from Facebook. My phone just shook. I’m pulling out my phone. Now my phone shakes again. I get a notification
from Chrome. I’m maybe looking at the
notification from Chrome, trying to pick between
Chrome and Facebook. Then I get another
notification from Opera. Then maybe a little
bit of time later, I get another notification
[INAUDIBLE] phone and I’m looking at this. Then my computer makes a
sound, and I get a notification from Chrome on my computer. There’s a lot of
things going on. I’m just like,
what is happening. So solving this is kind
of straightforward. Only send notifications to
your users on the interfaces that they frequently use. Just because you can
send notifications on every interface does
not mean you should. Pay attention to where users
are engaging with your site and using your app,
and notify them there. In a very, very
similar vein, you want to make sure you don’t
send too many– display too many notifications at once. Let’s say you’re in a
Messenger conversation, and I get one notification
for another message. Then another notification
comes in for the next message. Then another notification
comes in for the third message. Then I get another notification
for the fourth message. And if I show a different
notification on the screen each time, pretty
soon the screen is just full of messages,
and it’s very, very hard to use my computer
or really even engage with any of the messages,
because there just kind of is a list that’s just
constantly filling up. To solve this, you can use tags
on service worker notifications to replace previous
notifications. Think of tags as a slot. You can say I have
tag A, slot A. And then, if there’s no
notification in the slot, the browser will just display
a new notification there. But if there is already
a notification there, the browser will replace
the existing notification with the newest one. This way, you can
make sure to not overwhelm the user by displaying
too many notifications on the screen at the same time.
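In code, the tag is just another option passed to showNotification from inside the push handler; the tag value here is a made-up conversation ID:

    // Each new message in the same thread reuses the same tag, so the new
    // notification replaces the previous one instead of stacking up.
    self.registration.showNotification('3 new messages from Alice', {
      body: 'Sounds good, see you at 7!',
      tag: 'thread-1234',
      renotify: true  // still alert the user even though a notification was replaced
    });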
Another interesting case we ran into was accidentally
downgrading clients. When we push out new
code on Facebook, we don’t roll all of our servers
to the newest version 100% all at once. We test the newest
version of our server code for a little bit before
we roll out the entire site. For this example, let’s say
we have version one rolled out at 80% and version
two rolled out at 20%. And we have our service
worker and the entry points to Facebook. So the browser wants to go ahead
and update the service worker. It hits Facebook and
says, hey, Facebook, can I have the newest
version of the site. We’re like yeah, here you go. We hit version two this
time, and we give the browser the newest version of
the service worker. This is great. The user’s fully up-to-date. They have the latest experience. This is really awesome. However, let’s say some
time later the browser goes ahead and updates
the service worker again. Just like before,
it hits Facebook. But this time, we serve
the service worker from version one. So this is not the
best experience. Let’s say we save data in a
new format in version two. We’d have to make it backwards
compatible with anything that was going on in version one. Also it means that
the service worker is going to do an update
that it shouldn’t do. The service worker is going
to be in the Update state way more often than it should, which
does not lead to the best user experience. To solve this, what
we did is we started looking at the current version
during the install event. In the install
event, we would check to see if the
currently installed version of the service
worker was greater than the newer version. If the currently
installed version is greater than
the newer version, we throw an error
in the install event and we just don’t let the
install event complete.
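A sketch of that install-time check; the version constant and the storage helpers are hypothetical, since the talk does not show Facebook's actual mechanism:

    const CURRENT_VERSION = 1024;  // hypothetical build number stamped in at build time

    self.addEventListener('install', event => {
      event.waitUntil(
        // getStoredVersion() and storeVersion() are hypothetical helpers,
        // for example backed by IndexedDB or the Cache Storage API.
        getStoredVersion().then(installedVersion => {
          if (installedVersion && installedVersion > CURRENT_VERSION) {
            // Reject the install so this older worker never replaces a newer one.
            throw new Error('Refusing to downgrade the service worker');
          }
          return storeVersion(CURRENT_VERSION);
        })
      );
    });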
This means that the user is always
version of the service worker, which makes
developing for a service worker a lot easier, and
it leads to a greater user experience overall. So after polishing and
perfecting the user experience with service worker and
push, we rolled out web push and started looking at impact. So it’s been about a year,
and a year later we’re happy to say that
web push continues to drive significant
impact for Facebook. Mobile was right in line
with what we were expecting. Web push on mobile is great. We knew that it was great. It fit our
native experience really well. We kind of knew that web
push on mobile was awesome, and we saw the engagement
we were expecting. Daily and monthly
active users increased, commenting increased,
engagement increased. It was really awesome. On desktop, we were
a little bit worried. It was many users’ first time
getting push notifications on desktop. The desktop experience
is different than mobile. People don’t always expect to
have real-time communication on their desktop just
like they do on mobile. Well, we were very, very happy
that this had great impact on desktop, as well. Daily and monthly active
users increased, commenting increased, all the things that we wanted to increase, increased. What we saw was that
users kept it on, and they really
enjoyed the experience. It turns out that people like
getting notifications where they’re already using Facebook. If you meet a user
where they are, it’s just a greater
experience for everybody. So now that we
rolled out web push, we were starting to look
into and investigating where we can use service
workers elsewhere at Facebook. Today, loading Facebook is often blocked on the network. What we try to do
when we load Facebook is we do a little bit
of work on the server and then, as soon as we can,
we push it out on the network and let the client do
work as soon as it can. Then, in parallel, we do
a little bit more work on the server and try to
push that out to the network as soon as we can. What this does is we’re taking
full advantage of everything that we can across the stack to
try to get the user Facebook as quickly as possible. However, if the network
is flaky or slow, or something’s going wrong,
this can just block the client from being able to do anything. The client doesn’t
get Facebook at all, maybe it takes a long time. A lot of the
optimizations that we can get by doing
things in parallel just don’t apply anymore. With service worker,
we do much better. Like Aditya said,
with service worker we can make sure
that we already have the shell of the app loaded. We can start doing
work on the client before we even hit the
network, and before we even hit the server. If the client can start doing
work to display the site, then all it has to do
is fetch the content that it needs to display
this piece of content, instead of fetching
an entire app every single time
you hit the site. So we can start doing work
in parallel even earlier, and parallelize across
more of the stack. Another experience
that we can get is being able to run
an app totally offline. Now, offline dinosaur is great. He’s super cute. But really, it’s time
for him to go extinct. Native apps can work offline. When you open a native
app and you don't have an internet connection, you're not going to see the offline dinosaur. You're still going to
see an app experience. And this is what we should
be able to get on the web, and now we finally can. At Facebook, we’re
already really starting to invest heavily
in service workers, and we’re very excited
about their future. Starting this week, we
began testing offline mode on Messenger.com
using service workers. We’re using service
workers on WhatsApp web to make the site load
much, much quicker. And as I talked
about before, we’re using service workers to power
push notifications on Facebook. Service workers make
native experiences possible on the web. At Facebook, we’re very excited
about the future of service workers, and we can’t wait
to explore their potential in the months and years to come. Now to wrap things up, Owen’s
going to come back on stage. [APPLAUSE] OWEN CAMPBELL-MOORE: Great job. [APPLAUSE] Thanks, Nate. So I’m really excited about
all of the momentum we’re seeing in the community around
service workers and Progressive Web Apps. I was really happy to steal this
slide from Rahul’s mobile web state of the union
yesterday, which shows all of the different
companies that have either already shipped
Progressive Web Apps or are investing in
Progressive Web Apps. And it’s not just Chrome. This is a journey that all of
the– a number of the browser vendors are taking together. And here you can see some
of the tweets and posts from a number of the
other browser vendors about service workers
and Progressive Web Apps. We’ve been really excited to see
the momentum in this community, and I encourage all of
you to get involved. Ask and answer questions
on StackOverflow, post your libraries and your
pull requests on GitHub, and tweet at us on Twitter. With service workers, we need
to rethink web development. It’s now possible to build
high-performance, engaging experiences on the web. And all of these features are
available in production today, and they work progressively. And so I encourage all of
you to go back home and join Facebook and Flipkart in using
service workers at scale. Thank you. [MUSIC PLAYING]
