Using Web Workers for more responsive apps – Jason Teplitz

JASON TEPLITZ: My
name is Jason Teplitz. And along with
[INAUDIBLE], I worked on implementing WebWorker
support directly into the Angular
framework this summer while I was an intern on
the core team at Google. And I’m here to talk to you
a little bit about how that works, why we did it, and why
we think it’s a great idea, and you should check it out. And if we have some time,
get to some really cool stuff that it lets you do.

So before I talk about
what are WebWorkers, why do we use them, why do
we think this is a good idea, I just want to go
over a general problem that I think we’ve
all seen on the web. And that is this
horrible dialog box that shows up after you
use a web application. And it tries to do something,
and it just utterly fails. And it freezes, and
your computer is slow, and you see this, and
you hate your life. And hopefully it wasn’t
your web application, but maybe it was sometimes. We’ve all done it. And this is really bad. But why does this happen? And the reason that
this happens is because traditionally the web
has been a single-process, single-threaded model. And you share that single
process with the browser. So what that means is if
you’re running an application and you start doing anything
that is CPU intensive, you have the potential to steal
resources away from the browser and basically skip frames. And when you start
skipping frames, it doesn’t take a very long
time before users notice this. And they do something, and
an event fires, and then all of a sudden the application
doesn’t seem responsive anymore. And that’s when you start
getting these horrible reviews, like, oh, your app sucks, or
it’s so laggy, it’s janky, how do I make this smooth? And when we’re talking about
making applications smooth and having a good
user experience, which I think is something
we all strive for, we’re talking about
60 frames per second. This is the rate. If your application is running
at 60 frames per second, users are not going to
notice any delay between when they do something and when the
result appears on the screen. So this is what we want. This is great.

So let’s do some math. Don’t worry, it’s not too hard. That means you have 16
milliseconds per frame to get a synchronous
block of work done. If you take more than
that amount of time, you’re going to run over, you’re
going to skip a frame. You do that too many times,
users are going to notice, your application is
going to be slow. It turns out on the
web, though, that if you do a block of work
that modifies the DOM, the browser needs about
eight milliseconds on average to update the DOM. So you actually only
have eight milliseconds to get your synchronous
block of work done. So it’s not a lot of time. And if you start doing too much,
you’re going to skip frames, and that’s bad.

So here’s just a list
of some things that can very easily take longer than
eight milliseconds to get done. You might have a large set of
data that you need to parse. You might have images that you
need to process or sound that needs to get processed. You might be running some
sort of intensive algorithm, or you might have
a very large table. We’ll see in a little bit that
large tables in some browsers are not optimized very well and
scrolling can be really, really poor performance-wise,
especially if you’re doing analytics or any
sort of data visualization on that table. Now, there’s sort of a theme to
all of the things on this list. And that is that
they’re all things that modern web applications
do more and more of. They’re all things that we
want in our applications, and they’re all things that
native applications do. So we love to do these
things, but on the web today we’re really limited. And we really just can’t do
them without having poor user experiences. And this problem’s
even worse on mobile. This is the new Android
One phone in India. It has a 1.3
gigahertz processor. At that speed, Chrome
is pretty sluggish no matter what you’re doing. And if you start doing something
that’s intended for– uh-oh. No, it’s not a screen saver. If you start doing
something that’s intended for a native
application or something intensive, you’re going to run
into these performance problems pretty quickly. But there’s something
cool about this phone– it has a quad core processor. And in fact, almost every
processor created today, even the really, really cheap
ones, have multiple cores. So how do we unlock these cores? How do we use the full
potential of the phone? How do we have
smooth applications?

Well, one solution
is WebWorkers. It’s not the only solution,
there are other ways to do this, but it’s one that
we think is really promising. It’s one we’ve chosen to focus
on at the framework level. And I’m going to talk to you a
little bit about how it works. Now, a WebWorker–
this is not great. So a WebWorker is
basically a separate execution context for your application. What that means
is that it allows you to run some part
of your JavaScript in a totally different
process that can run in parallel with your UI. And that prevents blocking. So it allows you to run
code that doesn’t block. It allows you to also
run your application code across multiple windows. So imagine that
you’re using Electron, and you want to write this
native desktop application. You want it to feel
like a native app. Native apps typically
have multiple windows. And that’s kind of
tricky on the web, because you have to have
different JavaScript on the windows. They have to talk to each other. But what if they could all
just drive the application logic from the same
window and then push the UI to different windows? And the WebWorker
infrastructure that we’ve built makes that really trivial. In addition, you can better
compete performance-wise with native mobile applications. So if that’s something
that you’re trying to do, if you’re trying to avoid
writing native code by writing web code, native
applications all take advantage of
multi-threading. So if you’re not doing something
parallel in your application, you’re inherently going to be
slower than any competition that you have. Also, if you write
WebWorker components, you can test
without the browser. A WebWorker component
is a component that doesn’t rely on the DOM. It doesn’t rely on browser APIs. And therefore it doesn’t
need WebDriver or Karma to be tested. So you can write
integration tests that run a lot faster
that aren’t flaky, and that’s a really great plus. And this feature, it
might sound really new, but it works with all
modern browsers, IE10 and up.

Now, there is sort of this
other idea that you might have, which is, uh, no, no, no. That’s a bad idea. Things are hard. Hard things should not
be done on the client. The web is not meant for that. All JavaScript was ever
designed for was changing colors on a page when I
click something. I don’t want to do anything
more intensive than that. I’m just going to
do it on a server. If I want to process an image
or process some sound– no, no, no. I’m going to upload
that to the server. I have these great server guys. They have a great server team. They’ll take care of it. I have all these server
resources– great idea. And as it turns out, that’s
generally not a good solution for a number of reasons. If the data already
needs to be on a server, if it started there,
for instance, then that’s probably a good
idea to process it there. But if the data
is on the device, it doesn’t make a lot
of sense to upload it. For one, you pay for
server CPU usage. You don’t pay for
client CPU usage. So not utilizing the CPU
usage on your clients’ devices is just losing you money. But that’s not in any
way the worst reason. Something that’s
really critical is that especially on
wireless and mobile devices it costs significantly
more to transmit a byte than to compute it. Now, what do I mean by that? Imagine that you have
some set of data, and you want to run
maybe a linear algorithm or an O(n²)
algorithm on that data set. If your approach
is, oh, no, no, no, I don’t want to waste
local resources by running an algorithm locally. I’m going to upload
it to the server. I have these great servers
that can run the algorithm very efficiently. And then I’ll
download the result. Well, the problem
is you’re probably going to end up costing your
users more battery life. Because sure– your CPU is not
doing much, but your radio is. And sending out
radio waves is one of the most energy intensive
things that phones do. Often those transmissions don’t reach the access point or the cell tower on the first try, and they need to be
re-transmitted, especially if you’re in a congested
area or poor cell service. So that energy cost ends up
being really high, right? You’re sending out giant RF
waves in a spherical pattern. That tends to be more expensive
than computing something locally. Additionally, I’m sure
these are some numbers that a lot of people have seen. Jeff Dean is a principal
engineer at Google. He basically built some of
the coolest infrastructure that Google runs on and
some of the most performance critical stuff. And these are
numbers that I think he was just born with and
then decided to tell all of us on a slide that this
is the way the world works and everyone should know these. So as you see up top
is local operations, things like reading
from a cache, or reading from main memory. Way down at the bottom is
sending a packet round trip across the internet. It’s around six
orders of magnitude longer than reading
from main memory. So if your approach is– upload
the data, run the algorithm, get the result. Then
you have to be sure that that algorithm
is going to run at least that much faster
on the server, or else the cost of transmitting
is going to make the end result slower for your users.

So in summary,
WebWorkers are awesome. They are faster in some cases. They allow you to write much
more responsive applications. They can be more
battery efficient. They enable you to
do better testing. They’re supported in
every major browser, unless you’re an IE9 user–
but meh– or Opera Mini– but I don’t think you guys are
making decisions on Opera Mini. If you are, I’m sorry. This isn’t the
right talk for you. And in addition, they
can save you money. So you have all
these great things. If you walked into your
manager’s office tomorrow, and you said, hey, I can
do all of these things with implementing
this one feature inside of our application. They would be like, why
aren’t you doing that? You should do this. This is amazing. It’s sort of this golden
feature of the web. So obviously, everyone’s
using it, right? Raise your hand here if
you’ve used WebWorkers before in an application. Cool. So you can’t see this
on the live stream, but I think in a room of
like many hundreds of people, like eight raised their hand. So people aren’t using
it, even though I just said it was amazing. Why might that be?

And it turns out there’s a lot
of challenges with WebWorkers. WebWorkers are similar
to the Unix process model, although not quite as tricky,
but still pretty tricky. And they have no
access to the DOM. They share absolutely no
memory with your main process. And what that means
is that everything you want to do in a worker
needs to communicate with your main process
through message passing. That message has
to be serializable, and that runs into
concurrency issues that you haven’t had
on the web before. So it’s very challenging to use
WebWorkers, and as a result, people don’t. And that sort of led to this
model of, OK, WebWorkers might be helpful, maybe
my application needs them, but they’re hard. I don’t know how to use them. It’s confusing. So instead, I’m going to write
my application without them, like hope and pray,
cross my fingers that my application’s not slow. And then I’ll test it,
discover performance problems, realize this module is
slow, move that module to a WebWorker, and
then it will be fast, and I’ll have no problems. And that’s the
paradigm right now. And that’s a really bad
idea for a couple reasons. The first is you typically
don’t know what part of your application is slow. And what’s not slow on your
corporate MacBook probably has a good chance of being slow
on a three-year-old Android phone. And it’s pretty hard to test
across all those devices. Additionally, even once
you know what is slow, it’s nontrivial to move
that code to a WebWorker. You wrote this code without
these constraints in mind. It had access to the DOM. It used memory that other
parts of your application used. You now want to take it
and move it somewhere else, and that’s not going to be easy. That’s going to be
bug prone, and you’re going to have to debug it,
and that’s going to take time. So it’s difficult to
extract that logic. Essentially,
WebWorkers are hard. They might be helpful,
but they’re just hard. So this is Angular, the
whole point of Angular is to make web development
easier and fun. So let’s make them easy. How do we do that? And the paradigm
that we came up with is run everything
in a WebWorker– run your application logic
in a WebWorker, run as much of Angular’s core framework in
a WebWorker as we possibly can, and then let the framework
take care of updating the UI. Let the framework take
care of the synchronization and these concurrency issues. That’s not your job anymore.

So let me show you just
how easy that can be. So here I have a demo
of a large table. And this is just an
example of some data that you might want to process. And in this case, I have some
numbers that are generated, and I want to find their
largest prime factor. I’m doing this all locally. And as I scroll
through the table, it’s going to try and
load more results. But we’ll notice pretty
quickly that I’m scrolling, but nothing is happening. And this scrolling bar
gets larger, more results are coming. Oh, crap. My browser froze. Why did I do this? This was a horrible idea. I should have paid
attention in my theory class when this problem
was called NP-hard. I should really know what NP-hard means, but I forgot. It’s been a long time. So I actually have to
force quit Firefox, because this application
is so crappy. And this is actually
the first time I’ve ever been excited
about force quitting something during a demo,
usually that ends really poorly. And so that didn’t
go very well at all. And that was a standard
Angular 2 application written without WebWorkers.

So what if we wrote
it with WebWorkers? Now, that’s probably
hard, right? That was the whole
thing that I just talked about is that
WebWorkers are hard. But as it turns out, they’re
actually not that hard. So here is the index file
for that application. And as you can see,
I’m loading Angular. How do I use WebWorkers instead? Well, it’s actually pretty easy. All I have to do is
load a different bundle. So instead of the [INAUDIBLE]
of the Angular bundle, I’m going to load
the WebWorker bundle, which is webworker/UI.dev.js. This basically just says,
I want the parts of Angular that live on the UI not the
parts that live on the worker. Once I do that, I’m
going to go into my app. This is the entry point
for my application. This is where I call bootstrap. If you’ve seen any
Angular 2 app so far, this is probably
fairly familiar to you. And instead of calling bootstrap
from angular2/angular2, I’m going to load it in from angular2/web_worker/ui. This is the library
that contains all of the UI specific
code for WebWorker apps. And instead of passing my
component to bootstrap, I’m going to pass the
name of a background file, in this case loader.js. So what Angular is going to do
is it’s going to say, OK, cool. You want a WebWorker? I’m going to start
one up for you. I’m going to load in loader.js. Let’s take a look at that. I’ve written it
ahead of time here, because it’s basically just
a bunch of long strings. But essentially loader.js loads
in all of your library code. So I load in system, and I load
in the WebWorker worker bundle. And then I’m going to start
up this background.js file. So what does that look like? Oops, that’s the JS. Let’s look at the TypeScript. So I have to write this file. This is the file that’s going
to bootstrap the application on the WebWorker. So first thing I need
is my main component, which I called table demo, which
is from my table demo file. This is the exact same
component I just tried to run without WebWorkers. And then I need a
different version of bootstrap for the WebWorker
from angular2/web_worker/worker. And then I’m just going to
call bootstrap from there and pass in my component. Now, you might think, oh,
but that component wasn’t written for WebWorkers, right? We have to change it. And you’d be kind of right,
we do have to change it. Instead of importing
from angular2/angular2, we need to import from
this new WebWorker bundle. But I don’t have to
change anything else. This component was written
with just the Angular APIs. I didn’t access the DOM. So that’s it. I’m done. So I’m going to recompile. I’m going to go back to
Chrome– Firefox, sorry. And I’m going to
go ahead, and I’m going to try and load
that application again. And we’ll see if my live coding
skills are actually existent. And here’s the exact
same application, but let’s see what
happens when I scroll. We get this beautiful,
smooth scrolling. Everything loads, everything’s
happening in a WebWorker. I can still scroll
through all my results, but it’s not freezing anymore. It’s much better. [APPLAUSE]

OK. So there were a lot of files
there, really quickly, let’s just go over them. Here is a typical
Angular 2 application running without WebWorkers. I have my index HTML file. I load my app. I bootstrap. When I turn on WebWorker
support, here’s what happens. I have the exact same files on
the UI in terms of the HTML. I load the WebWorker bundle
of Angular, specifically the UI part of it. And then I load my app file. But previously I
called bootstrap with my main
component, this time I pass it the name of
my background file. That lives on the worker. So everything else
lived on the UI. This lives on the worker
in a separate process, so it can’t block. [INAUDIBLE]
import system– now, I know we’re importing
system twice here, but it’s going to load from the
browser cache the second time, so it’s not much of
a performance hit. And then we’re loading the
WebWorker bundle of Angular. So we’ve partitioned
Angular into the parts that the UI needs and the
part that the worker needs. And actually not much
is shared, so there’s not a lot of duplication there. Then we’re going to import
our background file, which is going to call bootstrap
with the name of our component. And really critically, that
component didn’t change. The code is exactly the same. We didn’t have to deal
with all those problems.

I’ve got one more demo here. This is a sample
Angular 2 application that allows you to
load a bunch of images. I’ve got a bunch of images of
the Angular core team here. I don’t want just Alex. Alex is great, but
I want everyone. So you can load up
a bunch of images. And when I click
this button, it’s going to apply a
sepia filter to them. And it’s going to apply
that filter locally. And what we’d like
to have happen is have these progress bars
show up below each picture that tells us that, hey, some
data is being processed, your application is working. But what actually happens is
the browser totally freezes. We can’t animate
those progress bars. My mouse turned into sort of the
pointer, but it didn’t go away. And then eventually
we got the result, but it wasn’t a great
user experience. The user didn’t know
what was happening. This is the exact
same Angular code, the exact same
components, but now it’s running with WebWorker
support turned on. So again, I load
up all my images. And I go ahead and I
click the filter button. But this time you’ll see we get
all of these wonderful progress bars. We know exactly
what’s happening. The process is not frozen. If we had multiple components,
this component doing work would not block the
other components. So that’s a lot better.

OK. So I’ve kind of
hinted to this idea that you don’t need to change
things when you use WebWorkers. Obviously, that’s not always
going to be true, right? You’re running on a
different context. We can’t take care of
everything magically for you, though, I think
we do a pretty good job. So what is a WebWorker
compatible component? First of all, they have full
access to the Angular APIs. Everything except actually
getting native DOM elements is totally available to you. But there’s no DOM access,
because you are in a WebWorker. So you should be using data
bindings instead of directly manipulating the DOM. That’s kind of always
been true with Angular, but now it’s critical. If you absolutely need to
programmatically alter the DOM, we do have APIs for that. You can inject the Renderer
and do so asynchronously. It’s not really
recommended, though. You should really
use data bindings. But that is there as a fallback. And what’s cool is that these
are a subset of Angular 2 components. If you write your
application with WebWorkers, and then you decided,
oh, you know what? I don’t need it. That code’s just going
to work guaranteed in a non-WebWorker scenario. The reverse is not always true,
although, it typically is. But WebWorker code can always
run without WebWorkers.

Now, this all raises a central
question– running code on the WebWorker is
good, it’s fast, yay. But we do want to run some
code on the UI, right? This whole idea
of parallelization is not run everything
in one other process, because then that process
is not going to be– it’s not any faster, right? Sure, the browser is not blocked
anymore, and that is good. But we do want some
code on the UI. So how do we actually do that? And we’ve given
you a few options. The first is to use
custom elements. This is the easiest. I’ll talk about it a
little bit in a second. But basically if you
have a very defined thing you want to do on
the UI, it doesn’t need to interact with your app
a lot, that’s your best bet. But it doesn’t have
a lot of messaging with your application. Then if you need to really
integrate it with your Angular application, you can use
low level messaging APIs that we’ve written that are used
internally when Angular runs your application in WebWorker. And those are a little tricky. There’s no protocol
to them, so we’ve built some higher
level abstractions to make that a bit easier. So first, custom
elements– this is just part of the web components
spec, for those of you who aren’t familiar with it. So it’s very new. It’s evolving very
quickly, but we’re trying to keep up with it. Basically you create a
custom element for something that’s really contained
and defined on the UI. So imagine you have an
infinitely scrolling table, and that table needs to do some
rendering work as you scroll. Well, custom elements
are great for that. You just have Angular pass
the data, it renders it. If your Angular code is
blocked, then the rendering is still going to
continue as you scroll. So everything is
fantastic there. And you can use these life
cycle callbacks to figure out when the element gets created. Communication with Angular
is fairly limited, though. You just have DOM
events, specifically standard DOM events. We’d love to support
custom events, but right now we have
a couple problems with doing that, which
we’re working on. So if you need more
control, and you need to really
talk to Angular, we had this idea of a MessageBus. It’s this language agnostic
API for communicating with Angular components across
any runtime boundary, which is a mouthful. And when I wrote it, I didn’t
even know what it meant. But basically it means
we have this API. It doesn’t depend on
JavaScript or Dart. You can write an implementation
of a MessageBus in any language that you want. And a runtime
boundary is whenever you have parts of your
application running in places where they don’t share memory. So a WebWorker is a great
example, but also a client and a server, or two separate
windows in an Electron application, or two frames
in a web application. So this is analogous
to a message queue, if you’re familiar
with that concept. If you’re not, it
basically looks like this. You have two sides of a runtime
boundary, in this case a UI and a worker. They both have a MessageBus. They pass messages into it. It has this multiplex
channel model, so that messages don’t
interfere with each other. And then you can use– the
MessageBus uses some API internally, in this case
postMessage, to figure out how to communicate. Now, that’s all fine and
good, but it’s actually pretty tricky. It’s great if you really want
to control the messaging. But as it turns out, you have
to write the protocol yourself. It’s very slow. It’s annoying. It’s hard.

So what we noticed
was you typically only want to do one thing. And that is you have one side of
your runtime boundary, usually the worker, but
not always, says, hey, I want to run some code
on the other side, typically the UI, but again, not always. And I would like to receive
the result if that happens. So we’ve built this
MessageBroker that makes that really, really easy. And the idea is you have
a service MessageBroker and a client MessageBroker. So it’s a one-way
data flow, where the client requests that the
server does something and gets the result back. What does that look like? Let’s imagine we
have a service– in a previous example,
it would be the UI. So the UI has this
function M. It wants to expose M to the client. And M has the
following signature. So it loads the
service MessageBroker. Tells it, hey, I
have this method M. I would like to make it
available to the client. Obviously, the client
can’t call it directly, because they don’t
share any memory. So the service recognizes that. Then the client is running. It also has some
of your application logic but a different part. That logic would
like to call M. So it loads the client MessageBroker
and says, hey, go ahead and call M. The MessageBroker
takes care of that. It calls M. It gets
the result back, and it passes that back
up to your application. So it’s really easy to run
code on the opposite side.

We have one more
really cool idea, which is this idea of
a custom MessageBus. So we implemented stable
MessageBuses for WebWorkers into Angular. Those exist in the Angular
2 repository right now. You can go use them. But also one of the
coolest things we did was implement an experimental
Dart MessageBus for WebSockets. And I’ll show you why
I did that in a second, because I think
it’s really cool. But you’re not limited
to those MessageBuses. If you have some custom
runtime boundary, whatever your stack looks
like that you would like to run part of Angular in
someplace that doesn’t share memory with another part, you
can write your own MessageBus to bridge that gap. It’s pretty technical. It’s pretty involved. I’m not going to go
into it right now, but you can go to the bitly
link for more information.

So why did I write a
WebSocket MessageBus? So the idea was I would like
to run Angular on a server. And I know I just
said, wait, don’t run all your code on the server. That’s a horrible idea. So what I’m actually talking
about is a local server. Now, why would I
want to do that? And the reason is that I was
able to build this really cool debug tool as a result.
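The MessageBus and MessageBroker ideas described above can be sketched in plain TypeScript. This is purely illustrative and not Angular's actual API; the in-memory connect() below stands in for whatever transport crosses the runtime boundary, postMessage for WebWorkers or a WebSocket for this local-server setup, and all of the names are made up:

```typescript
// Hypothetical sketch of a MessageBus: two endpoints joined by a transport,
// with named channels so messages don't interfere with each other.
type Listener = (message: any) => void;

class MessageBusEndpoint {
  private listeners = new Map<string, Listener[]>();
  // Wired up by connect(); sends a message to the other side of the boundary.
  send: (channel: string, message: any) => void = () => {};

  // Subscribe to a named channel.
  on(channel: string, listener: Listener): void {
    const existing = this.listeners.get(channel) ?? [];
    existing.push(listener);
    this.listeners.set(channel, existing);
  }

  // Called by the transport when the other side sends to us.
  deliver(channel: string, message: any): void {
    for (const listener of this.listeners.get(channel) ?? []) {
      listener(message);
    }
  }
}

// In-memory transport standing in for postMessage or a WebSocket.
function connect(a: MessageBusEndpoint, b: MessageBusEndpoint): void {
  a.send = (channel, message) => b.deliver(channel, message);
  b.send = (channel, message) => a.deliver(channel, message);
}

// A MessageBroker-style layer on top: the service side exposes a named
// method, the client side calls it and gets a Promise of the result.
function exposeMethod(bus: MessageBusEndpoint, name: string,
                      fn: (...args: any[]) => any): void {
  bus.on(`call:${name}`, ({id, args}) => {
    bus.send(`result:${name}`, {id, result: fn(...args)});
  });
}

function callMethod(bus: MessageBusEndpoint, name: string,
                    ...args: any[]): Promise<any> {
  const id = Math.random().toString(36).slice(2);
  return new Promise(resolve => {
    bus.on(`result:${name}`, reply => {
      if (reply.id === id) resolve(reply.result);
    });
    bus.send(`call:${name}`, {id, args});
  });
}

// Usage: the "UI" side exposes m; the "worker" side calls it.
const ui = new MessageBusEndpoint();
const worker = new MessageBusEndpoint();
connect(ui, worker);
exposeMethod(ui, 'm', (x: number) => x * 2);
callMethod(worker, 'm', 21).then(result => console.log(result)); // logs 42
```

Note that the client side never touches the service side's memory; everything crosses the boundary as serializable messages, which is exactly the constraint real WebWorkers impose.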
So what’s happening right now is that Angular is running on
a local server on my laptop. And I’m able to connect
multiple clients to that server. So I have one instance
of my application, one version of all my
variables, one version of state, nothing is duplicated. But multiple different
browsers can easily connect to it– in theory. All right, multiple
instances of Chrome at least can connect to it. This is not a good
demo for Firefox. I don’t know why
that’s happening. OK. Sorry. So I have Safari and
Chrome both connected to the exact same
Angular application. And what I’m going to
do is I’m going to say, hey, I’m going to enter
a task in one browser. Oh, no. Everything was going so
well with the live coding. I should have known
there was going to be one little thing at the
end that just totally died. OK. Let’s try this now. Cool. OK. So this server died, but
I restarted the server. And the idea here is that I only
have one instance of my Angular application, and
it’s pushing data to all connected clients
at the same time. So the clients are
totally in sync. I can add whatever I want
in any of the clients. I can do whatever I
want in my application. And I can see how it might
load in different clients at the exact same time. And what’s cool is
that I didn’t write any of this synchronization
code at the application level. It’s all in the framework. So you shouldn’t
run this on the web, because it’s not
good with latency, but it’s a great debugging tool. If you have a
complex application, and you want to say,
OK, maybe I want to connect an iPad,
and an iPhone, and an Android phone all
to the same local server. And I want to see if I have
an application in this state, where I clicked on
this button, and then I did this– what does it look
like on all these devices? Does it look the same? Which is super crucial in our
applications and pretty hard to do. Usually because you
have to open each device and do all of those operations. Well, now it’s really easy,
because you just do it once. And then you click in
one of the browsers, and all the browsers mimic
the result seamlessly. And you get this for free
if you use WebWorkers. It’s the exact same
infrastructure. So if your application
supports WebWorkers, you can just turn on
this debugging tool. And this is one of the things
that we’re really excited about is to build out more
tools that are only possible with WebWorkers,
like the testing tools that I talked about
earlier or that demo.

Now, I said earlier– how many
people have used WebWorkers? After this, how
many of you think you might want to
try WebWorkers? Yes. Cool. Again, livestream
people, you can’t tell, but that was like a
way bigger number, like those six
orders of magnitude I talked about earlier.

So if you want to
get started, there’s a starter pack at
this bitly link. It’s basically just a
Hello World application, but it tells you
where you can start entering your component code. And it’s already done on the
WebWorker boilerplate for you, so you can give it a try. You can get the slides
at that g.co link. And all the code is available
on both the Angular– the Angular GitHub has
all of the framework code. It’s all in there already in
the latest alpha releases, and all the demos are
available on that GitHub repo. And you can reach
out to me at Twitter, my email, GitHub, whatever. There’s an AMA at 1:30,
if you have questions. Thank you. [APPLAUSE]
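For reference, the WebWorker setup walked through in the demos comes down to three files on top of an ordinary Angular 2 app. This is a rough sketch using the alpha-era names described in the talk; exact bundle paths and module names may differ in your release:

```
index.html       (UI)      load the UI half of Angular:
                           <script src=".../web_worker/ui.dev.js"></script>
                           then load your app entry point

app entry point  (UI)      import {bootstrap} from 'angular2/web_worker/ui';
                           bootstrap('loader.js');  // name of the background file

loader.js        (worker)  loads System.js and the worker half of Angular,
                           then starts background.js

background.ts    (worker)  import {bootstrap} from 'angular2/web_worker/worker';
                           import {TableDemo} from './table_demo';
                           bootstrap(TableDemo);    // same component as before
```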
