Top-Level Awaiting for Godot

Myles Borins

Recorded at ColdFront 2018

Hey everyone, how's it going? Round of applause for everyone back there who's helping make this happen today.

I like to do this thing where I just kind of chill and look at the cat. The longer the talk, the longer I'll do it, because I have time to burn. It started as something I did just to get myself loose on stage and get comfortable, and it slowly turned into an audience test: I have my back to you, and I hear how much you're laughing, and I know how painful this is going to be. Because if this doesn't work for you, well, it's gonna be fun.
So I'm Myles, and I'm a developer advocate at Google Cloud Platform, mostly focused on the Node.js ecosystem and our various compute offerings around Node. I have a laser pointer, and I'm going to use it to point at this: I'm not representing Google on stage right now. This is mostly things I think about late at night.

So, Chapter 1: asynchronicity is hard. You don't want to stop the thread, and as people who are working on the front end, you may notice this a little bit faster than people who are writing servers, or probably you've noticed it in both. If you block the thread in the browser, it's going to stop scrolling, and that's a pretty bad user experience.
My personal experience is on the back end: if you're trying to make a request and it's blocking the thread and blocking the server, the next person is just kind of waiting, and I have a feeling you don't want to be dealing with your user traffic sequentially.

In JavaScript we've created a bunch of different ways of handling asynchronicity, and traditionally one of them is callbacks. I'm pretty sure most people in the audience have used a callback before, and seen code like this that slowly floats to the right. I know there are a lot of slides like this online already; I made my own take on it. To deal with some of those problems with asynchronicity and things floating to the right, we've adopted new patterns.
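The slide itself isn't captured in the transcript, but a hedged sketch of that "floating right" shape looks something like this (the `step` function is a made-up stand-in for any callback-taking async operation, like a file read or an HTTP request):

```javascript
// Each async step takes a callback, so every new step adds another level of
// indentation, and error handling has to be repeated at every level.
function step(value, callback) {
  setTimeout(() => callback(null, value + 1), 10);
}

step(0, (err, a) => {
  if (err) throw err;
  step(a, (err, b) => {
    if (err) throw err;
    step(b, (err, c) => {
      if (err) throw err;
      console.log(c); // → 3
    });
  });
});
```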
Promises are a pattern that has actually existed in computer science longer than JavaScript itself, believe it or not, but it's fairly recent that they've been standardized in the language. There are a number of different implementations you may have used, such as Bluebird, but now you don't need to worry about a dependency: you can just use the promises that come with JavaScript, and they're really fast. That code from before can start to look like this, which is arguably better from a mental-model perspective. When we're thinking about handling asynchronicity, the ability to have a single catch flow through a whole chain of promises that pipeline together is a really nice pattern; we don't have to constantly be error-handling the way we do with callbacks. It's controversial whether or not this is better than the callback pattern, but at least things aren't floating right anymore. That said, we actually still have a lot of callbacks here; you can't really separate the two.

But in December of 2016, a new pattern came out that I personally really love, called async/await. Just quickly, how many people in the audience have played with async/await yet? Right on. It's kind of a game-changer, in my personal opinion: a much more intuitive mental model for thinking about asynchronicity. The same code looks like this, and there are a couple of things to notice. This is both the future and the past at the same time, in this really weird way, where we've got this immediately invoked function expression thing going on. I don't know about you, but when I first started programming in JavaScript there was that immediately invoked function expression from jQuery that I just copied everywhere; I didn't know what it did, but I knew that I needed it. And then we have try and catch, which are some friends we haven't seen in a while, but they're back, and they're back for good.
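The shape being described here, an immediately invoked async function expression with try/catch, might be sketched like this (`getValue` is a made-up promise-returning call, not from the talk's slides):

```javascript
// The async IIFE pattern: you can't use await at the top level, so you wrap
// everything in an immediately invoked async function expression, and
// try/catch is back for error handling.
const getValue = () => Promise.resolve(42);

(async () => {
  try {
    const value = await getValue();
    console.log(value); // → 42
  } catch (err) {
    console.error('something went wrong', err);
  }
})();
```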
So that was a high-level example of these patterns, but I thought I would give a bit of a practical example. Here's an implementation using callbacks against the Node.js download page: we have an index.json with the metadata for every release we've ever done, and this is an example of using the request package and callbacks to get the version and the date of our latest release. You can see we've got some manual error handling, we have a function, we have the data. But there are a couple of things here that are important to think about. One is that we've started floating right; we haven't had to do it too much, and you'll notice that the body-handling function is another function that could have floated right as well, but I've broken it out. More importantly, the data that we have about the body isn't accessible in the outer scope: it can't be exported from the module, it's kind of stuck in that closure.
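The slide's code isn't in the transcript; this is a self-contained sketch of that callback version, with the HTTP request to https://nodejs.org/dist/index.json stubbed out (`fetchIndex` stands in for the `request` call, and the payload is trimmed down):

```javascript
// Stub for the network call, so the sketch runs offline.
function fetchIndex(callback) {
  setTimeout(() => {
    callback(null, JSON.stringify([{ version: 'v10.11.0', date: '2018-09-20' }]));
  }, 10);
}

function getLatest(callback) {
  fetchIndex((err, body) => {
    if (err) return callback(err); // manual error handling at every level
    let releases;
    try {
      releases = JSON.parse(body);
    } catch (parseErr) {
      return callback(parseErr);
    }
    const { version, date } = releases[0];
    // `version` and `date` live only inside this closure: they can't be
    // exported from the module.
    callback(null, { version, date });
  });
}

getLatest((err, latest) => {
  if (err) throw err;
  console.log(latest.version, latest.date);
});
```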
With promises we can refactor the code, and it gets a little bit more streamlined, though for this case, where we're only dealing with one instance of asynchronicity, the benefit of the pattern is not that obvious. We've moved to the Fetch API here, so you can see we've got a couple of promises happening one after the other, specifically to fetch the resource and then convert it into JSON.
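A hedged reconstruction of that promise version, with `fetch` stubbed so the sketch is self-contained:

```javascript
// Stub standing in for the Fetch API hitting https://nodejs.org/dist/index.json.
const fetch = () =>
  Promise.resolve({
    json: () => Promise.resolve([{ version: 'v10.11.0', date: '2018-09-20' }]),
  });

const latest = fetch('https://nodejs.org/dist/index.json')
  .then((response) => response.json())     // one promise to get the body...
  .then((releases) => {
    const { version, date } = releases[0]; // ...then one to pull out the fields
    return { version, date };
  });

// A single catch covers the whole pipeline of promises.
latest.then(console.log).catch(console.error);
```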
Now, this promise code can be refactored into async/await, and it looks like this, which is better, maybe. But when we're thinking about JavaScript from an educational standpoint, a pattern standpoint, and an onboarding standpoint, all we've really done here is tack on more and more concepts that people need to know before they can sit down and use this. By the hands in the room, I'm going to assume that for the majority of people here this just kind of makes sense off the bat, but think about the things we need to know. We need to learn what a function is, then what an async function is. We kind of still need to know what a promise is, because to execute the async function and do any error handling we still need a catch there and have to manage that promise. And I guess we've kind of hidden callbacks here a little bit, but there are still a lot of different concepts that we need to introduce before we can get to the concept of "let's do something sequential."
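A hedged reconstruction of the async/await version, again with `fetch` stubbed; note the stack of concepts it needs: a function, an async function, a promise, and a catch:

```javascript
// Stub standing in for the Fetch API.
const fetch = () =>
  Promise.resolve({
    json: () => Promise.resolve([{ version: 'v10.11.0', date: '2018-09-20' }]),
  });

async function getLatest() {
  const response = await fetch('https://nodejs.org/dist/index.json');
  const [{ version, date }] = await response.json();
  return { version, date };
}

getLatest()
  .then(console.log)     // the async function still hands you a promise...
  .catch(console.error); // ...which you still have to manage by hand
```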
The same code with top-level await just removes that async function, and from a getting-started standpoint and a readability standpoint, at least in my own opinion, we've removed a lot of the abstractions you need to learn in order to understand: hey, that first line is going to run, and it's going to do something asynchronous, but that second line won't run until the first one is over. This is a pattern that I'm really interested in seeing happen.
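A sketch of what that module could look like (this is module code, so it only parses in an ESM context that supports top-level await, and it assumes a Fetch-capable environment; it's a reconstruction, not the actual slide):

```javascript
// An ESM module using top-level await: no wrapper function at all. The first
// line runs and does something asynchronous, and the second line won't run
// until the first one is over.
const response = await fetch('https://nodejs.org/dist/index.json');
const [{ version, date }] = await response.json();

export { version, date };
```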
Which leads us right into Chapter 2: don't tell me what I can't do. In February of 2017, Node released 7.6, which included V8 5.5, the release where async and await first landed. Async/await came out in V8 in December; it took us a little while to get it out in a release, but we had it. I was excited, I was ready to play with it, until I got this error. I don't know if any of you have seen it, but I had not gone through and read all the docs about async/await.
For someone who uses the language a lot and follows the standards, I probably should have known better, but I didn't. I was very surprised, and it was unintuitive, that I needed to have an await inside of an async function. This is kind of how I felt, and it looked like top-level await wasn't a thing. So I started digging and I started playing. I have a saying that I've been iterating on: people work to the abstraction boundary they're comfortable with to try to fix things, and since I am comfortable in Node, I tried to just implement top-level await in Node.
If you look here, and it's probably hard to read in the back, I apologize, we've got a one-line change in Node.js. In our bootstrap module, we do this really fancy thing where we take the module code that you give us and do a string concatenation to wrap it in an immediately invoked function expression, and that's how we inject all the variables like __dirname and __filename and require and exports into your code. So if you've ever wondered how that happens, it's this fancy string concatenation, and if you actually close the brace at the beginning of your code, you can break Node in really weird ways.
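A simplified reconstruction of that wrapping, not Node's actual internals (`eval` stands in for the internal compile step, and `fakeRequire` is a stub):

```javascript
// Node's CommonJS loader wraps your module source in a function expression
// so it can inject exports, require, module, __filename, and __dirname.
const source = `exports.answer = 42;`;

const wrapped =
  '(function (exports, require, module, __filename, __dirname) {\n' +
  source +
  '\n});';

const fn = eval(wrapped); // stand-in for the internal compilation step
const fakeRequire = () => { throw new Error('not implemented in this sketch'); };
const mod = { exports: {} };
fn(mod.exports, fakeRequire, mod, '/fake/file.js', '/fake');
console.log(mod.exports.answer); // → 42
```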
So I thought, okay, maybe I could just make that wrapper an async function, and maybe that would work. It didn't. It turns out that when you make it an async function, you immediately turn it into a promise, which means that it starts getting executed at the next tick, and since all of the different functions in Node are expected to execute in a particular order, this completely messed up the timing of everything in Node. It just completely broke require. It didn't seem, then, like top-level await was something that we would be able to implement in Node core, unfortunately. And it turned out that the Node.js test suite wasn't the only place that had a problem with top-level await.
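A small sketch of why that broke: an async function defers the code after its first await to a microtask, so a module body that used to run synchronously now runs after the caller has already moved on. The wrapper functions here are made up for illustration:

```javascript
const order = [];

// Synchronous wrapper: the module body runs immediately, like require today.
function syncWrapper(body) { body(); }

// Async wrapper with an await in it: everything after the await is deferred
// to a microtask, so it runs after the "bootstrap" code continues.
async function asyncWrapper(body) { await null; body(); }

syncWrapper(() => order.push('sync module executed'));
asyncWrapper(() => order.push('async module executed'));
order.push('bootstrap continued');

// Log after all microtasks have run:
setTimeout(() => console.log(order), 0);
// → ['sync module executed', 'bootstrap continued', 'async module executed']
```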
Because it turns out that some people think top-level await is a footgun. Has anyone in the audience heard that expression before, "top-level await is a footgun"? Before we dig in, a definition of footgun, from Google: "footgun, plural footguns, informal, humorous, derogatory: any feature whose addition to a product results in the user shooting themselves in the foot." I like this idea from a user-experience standpoint: a footgun is any feature that will likely be used by people to hurt themselves. And look, I want to be given enough rope to get into trouble. But there is this weird balance, as a platform developer, when you start thinking about designing APIs that people are building APIs with: how much leeway do you give people, how many opportunities do you give them? You don't need to implement every API, you just need to make things possible, and small fundamental changes like top-level await can have very, very deep impacts on the language and the runtime that may not be obvious.

In September of 2016, a year before async/await had even made it out in a release, Rich Harris of The New York Times, who you may know from such amazing projects as Svelte and Rollup, wrote this really informative gist called "top-level await is a footgun". That gist would be brought up whenever I mentioned top-level await. The TL;DR of the gist, if you haven't read it, is this. Top-level await could block execution: if an await is no longer inside of an async function, what do you do? You just kind of stop executing things. So is that going to block the whole thread? How does that work? It could block fetching resources: depending on how the module loader itself is worked out in the runtime, is calling await going to block fetching resources for other nodes in the graph that haven't been fetched yet? It's possible that there would be no clear interop story for CommonJS, which is the module system that we use in Node, and as you saw from my earlier example, that was pretty clearly true: it wasn't easy to introduce top-level await into Node.js without completely changing the semantics of how things run. And it's also possible that circular dependencies utilizing top-level await and dynamic import could introduce deadlock into your graph, if two modules are relying on each other and never resolve. The conclusion of this gist was: hey, for all these reasons, can you not? Can you, like, please not top-level await? And a lot of the rhetoric around top-level await at this time was very much "it will never happen, it should never happen, can you not." But I wasn't ready to not. I wanted to.
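That deadlock scenario can be sketched with plain promises standing in for a module graph (a toy simulation, not real loader code):

```javascript
let resolveA, resolveB;
const aExports = new Promise((r) => { resolveA = r; });
const bExports = new Promise((r) => { resolveB = r; });

// "Module A": top-level awaits a symbol from B before exporting its own.
(async () => { await bExports; resolveA('a'); })();
// "Module B": top-level awaits a symbol from A before exporting its own.
(async () => { await aExports; resolveB('b'); })();

// Neither promise ever settles: the cycle is deadlocked.
Promise.race([
  aExports,
  new Promise((r) => setTimeout(() => r('timed out'), 50)),
]).then(console.log); // → 'timed out'
```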
So, a little bit of history, and how you approach problems like this when you see all these flame wars going on. It turns out that flame wars are not canonical decisions for the JavaScript language, thankfully. What you can do, though, is review the notes of TC39 meetings, which are all available at tc39/agendas. They're all in Markdown, so you can use your favorite text editor of choice, or the command line and grep if you so choose, and search for the keywords of the things you care about decisions on. At a canonical level, if things have not been decided or stated or mentioned in these notes, it's maybe fair game.
It doesn't mean it's going to work out the way you want it to, but you can try. And what I found when digging in was that despite the strong critique of top-level await, there was actually no prior decision that banned it from ever happening in the language. In fact, when going through the notes, I found a few things suggesting that maybe top-level await was a thing that should happen. Async/await was originally brought to TC39 in January of 2014, so we're going back a bit further now, and in April 2014 it was decided that await would be a reserved keyword in the module goal. "Goal" is a term that has to do with the parsing and execution context; I won't dig too much into it, but you have a module goal and a script goal, and you can pick between goals, such as the sloppy goal and the strict goal, by using "use strict". The module goal happens when you use a script tag with type module; otherwise you're in the script goal. But inside the module goal, the keyword await was reserved, and the only reason it would be reserved as a symbol is if it were going to be used at the top level. Then in July 2015, async/await was moved to Stage 2 in the process, and at that time it was decided to delay the decision about top-level await. It was so contentious that simply trying to make a decision about top-level await would have blocked the entire async/await feature. And you're going to notice a bit of a pattern here that happens in standards: if there's some part of the implementation that can't be agreed upon, but it doesn't necessarily get in the way of doing a smaller subset of what's being worked on, you just push that thing off and don't deal with it.
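As an aside, that 2014 reservation is observable today: `await` is a legal identifier in the script goal but a reserved word in the module goal. A quick sketch of the script-goal half (the module-goal half can't run here, so it's shown as a comment):

```javascript
// Function-constructor bodies are parsed as sloppy script-goal code, where
// `await` is a perfectly legal variable name.
const scriptGoal = new Function('var await = 42; return await;');
console.log(scriptGoal()); // → 42

// In a file loaded as a module (<script type="module"> or .mjs), the same
// declaration is a SyntaxError:
//   var await = 42;   // SyntaxError: Unexpected reserved word
```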
So async/await moved forward, and as we can see from the hands in the crowd, it's a beloved feature, but this particular decision, which could have blocked it from ever happening, was just deferred to be dealt with later. The important takeaway from all of this research was that top-level await was always being considered while these things were standardized in the language, and the language was protected so that it would remain a possibility. So there was a chance.

But I just said a bunch of things about how we put together standards, and maybe a few phrases that don't entirely make sense without context, so let's add a little more context and talk about how a language body is formed. The spec is called ECMA-262, and the committee that maintains it is TC39. So we have JavaScript, the language, which is actually called ECMAScript; it's standardized as ECMA-262 by a committee called TC39 that meets at locations around the world, and participants include major browser vendors, academics, industry experts, open-source maintainers, and maybe you. And you may be asking the question: can anyone join TC39? Who can join?

There's actually a bit of a difference between a member and a delegate. A member is an organization that sponsors Ecma. Ecma, originally the European Computer Manufacturers Association, is a nonprofit based in Geneva that handles a number of different standardizations, not just JavaScript: C# is also standardized there, as is the width of a CD-ROM. Important standards that have modernized society. But seriously, I think Dart is standardized there as well, they're doing some stuff around AI, and there are a lot of different standards. There's this whole back end to the standards process that you don't see, which involves actually lobbying governments and making sure that the things they're doing are approved by ISO; it's a whole thing, like releasing software, that you just don't see, but it costs money to do. So various organizations, such as the one I work for, Google, or academic institutions like a number of different universities, can all join Ecma, and there's a sliding scale for membership, from free for nonprofits and educational institutes to not free for organizations like Google, scaled by how many employees you have. Individuals are not members, though. Individuals are delegates: they go to represent a member organization, and as long as you are part of a member organization, and can navigate the internal politics of allowing you to attend, you can go. As an individual you can also be an invited expert; I've seen individuals from the Babel project show up, for example, and really the only way to do that today is by being involved and knowing the right people. But there are a number of people involved in Ecma, including myself, who are trying to make a documented, more streamlined process for individuals from the community who have proposals they want to work on: to identify champions who can help drive that proposal forward for them, and to find opportunities to come and potentially even get sponsored financially to attend meetings to work on their own proposals.

So that's a lot about the structure of Ecma, but how does a feature itself get developed? It's all through consensus, and consensus is this really interesting process whereby you need to get everyone in the room to agree to not disagree; the whole concept of consensus is making sure that no one dissents. You've got a room at TC39 with 50-plus people representing a ton of different organizations, so in one corner of the room you may have Microsoft, then you have Google, then you have some academic hackers, then you have people who work on embedded systems, then you have LinkedIn, then you have Bloomberg, and I'm not going to list every member company, but you have a lot of different stakeholders with different needs. For a feature to move forward, you need consensus from every single person in that room that they see nothing wrong. Every single feature that you've seen land in JavaScript after ECMAScript 5 has gone through this consensus process.
And there are a number of stages. The first stage, Stage 0, is called the strawman, and it just means: hey, I've got an idea and I'm sharing it with people. It doesn't necessarily even mean that the committee agrees this is an idea we're going to explore; it's "here's kind of the shape of the problem." Stage 1 is a proposal: it means that not only do we know the shape of the problem we're trying to solve, and a potential shape of the solution to it, but the committee agrees that this is a problem space we want to explore. Stage 2 is a draft: by Stage 2, not only do we have a very, very strong idea of the problem space and the solution, but we have dug into the solution and come up with specification language for how the algorithm for that solution might look. Stage 3 is a candidate: at this point the specification text has been solidified, it's finished, it's been reviewed by at least two other committee members and signed off by the editor. When a feature reaches Stage 3, it's ready for runtimes to start implementing. Now, tools such as Babel, or maybe even runtimes such as V8, may implement things that are Stage 0 through Stage 2, but it's not really until something moves to Stage 3 that you know the shape of the solution isn't really going to change, and that the API is not likely to be pulled out from under us; smoosh is a really great example of this. So if you're writing code using Babel, for example, and you're using any of these stage presets: unless it's Stage 4, which is finished, which means it's in the spec and it's in the language, these can change from under you, so just kind of be prepared for that. I'm not trying to discourage you from trying these things out; it's really great to have people trying these language features early in their lifecycle, so we know how they're working and we find edge cases early. But that's an idea of how this works, and each of these stages that a feature goes through needs consensus of the committee.

Which brings us into our next thing: getting a footgun in the door. I hatched a bit of a plan here, and the idea was that if I brought top-level await to TC39, we could at the very least get a decision, and even if that decision was no, we could at least stop all of the arguments. When I was thinking about optimization, and how many hours of development time on the planet we were saving: one way to look at it is, well, I'm optimizing asynchronicity and making it way easier, and there are all these hours of development time we'll save. The other side is, oh, we'll never do this, so that's a thousand hours of Hacker News time that's not going to happen anymore, with people arguing about this.
In January 2018, I brought top-level await to TC39, going for Stage 1, and the proposal included a handful of different things. It included the history of top-level await that we just went through, showing that the committee had expressed interest in it in the past; the motivation for the feature, the reasons we would want to do this; use cases, how it would actually be used; potential implementations, a couple of different algorithmic solutions for how we could approach the problem; as well as constraints. That last one is one of the most important things to bring early in the process: it's actually really important for you to identify all the reasons people may object to your proposal. You may not have answers for them, but at least you know the problem space, and none of these things is a surprise to you.

So one of the motivations that I had was the immediately invoked async function expression. It's kind of gross, and while it's not such a big deal to just write two lines to wrap your async function, there are a couple of limitations that come from this. First, when we're thinking about executing code in the module graph, which we'll dig into a little bit more later: if the bodies of all the modules in our graph are wrapped in these async function expressions, we have no actual idea of the order of execution of things in the graph, which is one of the reasons people were saying we shouldn't have top-level await to begin with. So if this pattern is getting cargo-culted, and it is showing up all throughout your graph, it's actually introducing asynchronicity to module bodies that may not even need it; people may be introducing these async function expressions and never even using an await in there, simply because they want to be able to when they're going to need to. Also, any symbols created inside an async function expression are not exposed outside the scope of that function expression unless you're lazily loading them, and if you're exporting lazily loaded symbols, you're introducing a whole bunch of new, really weird race conditions, when all you wanted to do was say: hey, don't export the symbol until this thing is done. So this pattern was a huge reason that I pushed for it.
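That race condition can be sketched like this (a toy example; the timeouts stand in for any real async initialization, such as a fetch or a database connection):

```javascript
let config; // the symbol we'd like to export

(async () => {
  await new Promise((resolve) => setTimeout(resolve, 10)); // async init
  config = { ready: true };
})();

// A consumer importing `config` right away observes it before it's set:
console.log(config); // → undefined (the race condition)

// The workaround is lazily exporting a promise and awaiting it everywhere:
const configPromise = (async () => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return { ready: true };
})();
configPromise.then((c) => console.log(c.ready)); // eventually true
```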
A second motivation was the completely async module. If you thought immediately invoked async function expressions were bad, wait for this: imagine people exporting default async functions that in turn await importing async functions, which they in turn then export. You may say no one would ever do this, but if you actually search on GitHub for the base pattern here, you will have more matches than you want to see. There are a lot of people who have already started exporting async functions, which is fine, unless you start falling into this pattern; at that point we've completely gotten rid of any sort of static module system, and the whole idea behind ESM has kind of fallen apart.
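A runnable sketch of that "completely async module" pattern, with plain async functions standing in for `export default` async modules chained through dynamic import:

```javascript
// "module c" exports a default async function...
const moduleC = async () => ({ value: 'c' });

// ..."module b" has to await "importing" c before producing its own exports...
const moduleB = async () => {
  const c = await moduleC();
  return { value: 'b+' + c.value };
};

// ...and "module a" awaits b, so nothing about the graph is static anymore.
const moduleA = async () => {
  const b = await moduleB();
  return { value: 'a+' + b.value };
};

moduleA().then((a) => console.log(a.value)); // → 'a+b+c'
```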
So why would we even want to do this? What are the use cases we'd have for top-level await? One, right off the bat, is dynamic dependency mapping: let's say we want to import a version of a module that's specific to a language, for internationalization. It would be really great if that's something we could do dynamically at runtime. Another is resource initialization. I don't know how many people are doing things with databases or robotics, but sometimes you need a serial port to be open before you do anything else, and if anyone's ever done one of those intro-to-robotics classes, literally the first thing we need to teach someone is "here's a promise" or "here's a callback", not "open a port and do a thing". All of a sudden you're deep in the mechanics of JavaScript as a language before people can even start doing things. Being able to await a database connection, I personally think, is a really powerful pattern. We have that pattern in Node with require, because require is synchronous and inline. With ESM as it exists today, you do have dynamic import, which we'll see on the next slide, but you're not able to take those symbols that you dynamically imported and export them the same way that you can with Node. So if you wanted to do something like dependency fallback, or any sort of CDN-related stuff, that isn't really something you can do with symbols that you're going to export from a module. These are a variety of patterns that I think top-level await would unlock, but not all of them.
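The internationalization case might be sketched like this; `loadMessages` stands in for a dynamic `import()` of a hypothetical per-locale module, and the locale tables are made up so the example can run:

```javascript
// Stand-ins for ./messages.en.mjs and ./messages.de.mjs.
const tables = {
  en: { greeting: 'Hello' },
  de: { greeting: 'Hallo' },
};

async function loadMessages(locale) {
  // With top-level await this could be, at the top of a module:
  //   const messages = await import(`./messages.${locale}.mjs`);
  return tables[locale] || tables.en; // fall back to English
}

(async () => {
  const messages = await loadMessages('de');
  console.log(messages.greeting); // → 'Hallo'
})();
```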
So we start getting into solutions, and potential solutions: how would we solve this, and what would it look like? The first way we talked about solving this was Variant A, the idea that a call to top-level await would block execution of the graph until it had resolved. If you think about code where you import a, then b, then c, and you log them, and modules a, b, and c all have a call to top-level await in them, it would kind of be the equivalent of an immediately invoked async function expression where we await a, then await b, then await c. a has a top-level await in it, so we're going to wait for a to resolve an exported symbol before we start executing b.
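Variant A can be simulated with async functions standing in for module evaluation; the timings are invented for illustration:

```javascript
const log = [];
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

// Each "module" has a top-level await of some duration.
const evalA = async () => { await delay(30); log.push('a'); };
const evalB = async () => { await delay(10); log.push('b'); };
const evalC = async () => { await delay(20); log.push('c'); };

(async () => {
  await evalA(); // graph execution blocks here until a resolves...
  await evalB(); // ...then b...
  await evalC(); // ...then c
  console.log(log); // → ['a', 'b', 'c'], deterministic but fully serialized
})();
```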
The other variant that we talked about was Variant B, where a call to top-level await would block execution of parent nodes in the graph but allow siblings to execute. The same code from before would start to look like this: we would be awaiting a Promise.all of a, b, and c. a would execute until the point that it hit a top-level await, it would then defer, and b would begin executing. This has the advantage that if you are doing things like component systems, which tend to be very, very wide but not very deep graphs, you don't have to wait to lazy-load component a before you can lazy-load component b, before you can lazy-load component c. If you're working in a component system that may be hundreds of components wide, Variant A is really, really not going to work for dynamically importing components. The difference here, though, is that this is introducing some indeterminacy, which we'll dig into in a second.
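Variant B, simulated the same way: siblings run concurrently under a Promise.all, so completion order now depends on timing rather than import order (timings again invented for illustration):

```javascript
const log = [];
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

const evalA = async () => { await delay(30); log.push('a'); };
const evalB = async () => { await delay(10); log.push('b'); };
const evalC = async () => { await delay(20); log.push('c'); };

(async () => {
  // A slow top-level await in one module no longer holds up its siblings;
  // only the parent (the Promise.all) waits for all of them.
  await Promise.all([evalA(), evalB(), evalC()]);
  console.log(log); // → ['b', 'c', 'a'], ordered by timing, not by import
})();
```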
We did talk about an optional constraint that could be applied to either A or B, which is enforcing that top-level await could only be used inside a module without exports. The idea being: if you're using a module that doesn't have exports, you're either going to be the root node of the graph, or a module that's only being imported for side effects, and in those cases you're locking that leaky abstraction to a point in the graph. We'll talk a little bit about that in a bit too.

But I just used a bunch of terms about async and loading and execution order, and it's not very intuitive to just hear these as words, so I thought maybe I could dig in a little bit more, with more words. Each module has an asynchronous load and a synchronous execution. What this means is that the first thing that happens is a load: you fetch all the resources for the module graph and resolve all the specifiers. Every single import gets resolved, you figure out where each resource is coming from, and you fetch that resource over the network, and the whole graph is populated with all the resources that are statically imported before anything else happens. It's all asynchronous, so if resource a is loading and resource b is loading, that can happen at the same time. Once a resource loads, we figure out all the symbols from it, and we just kind of go through and fetch the whole graph. If you're importing the same module multiple times, it gets cached, so you don't have to fetch it more than once. After it's been loaded, it's linked, and for a link the module graph must be in memory. That linking happens at the root, then the left, then the right; this link is how we actually build up the graph, work out any cycles, and figure out how all these different records that we just fetched connect. So here's a graph with a root node, modules A and B imported by the root, and modules C and D imported by A. When we're linking, the first thing we're going to link is the root, then we link A, then C, then D, and then B. Not very intuitive, but thankfully these mechanics are internal, so you should never really have to know about this part of the graph traversal; I just thought visualizing it might help.
Then execution: once everything is linked and we have all of the modules in memory, we can actually start executing your program. Execution requires that everything is linked, and it's done in post-order traversal, which I think is one of the most confusing things about ESM, because people are not expecting it. So here's your graph. If you were writing something in Node with require, you might expect that it's going to start executing the root, and then when it hits the line that imports a module, it will start executing that. But ESM doesn't work that way. The spec forces you to have all your imports at the top, and because of that, execution actually starts at the bottom left: C will be the first module that gets executed in the ESM graph, because all the symbols from C need to be exported before A can be executed. Next comes D, then A, then B, and then the root. This is the order your code actually executes in if you're using native ESM; not if you've written ESM and used Babel or webpack to transpile it to require, and then to something that's synchronous in the tree in a single bundle, in which case it will actually start from the root. But in ESM as standardized, this is the execution order your code will go in, because you need the symbols from your children available when you execute yourself.
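That post-order execution can be sketched with a tiny depth-first traversal over the graph from the slide (module names and edges as described: the root imports A and B, and A imports C and D):

```javascript
const graph = {
  root: ['a', 'b'],
  a: ['c', 'd'],
  b: [],
  c: [],
  d: [],
};

const executed = [];
const seen = new Set();

function execute(name) {
  if (seen.has(name)) return; // a module only ever executes once
  seen.add(name);
  for (const dep of graph[name] || []) execute(dep); // children first...
  executed.push(name); // ...then the module itself
}

execute('root');
console.log(executed); // → ['c', 'd', 'a', 'b', 'root']
```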
here's where things start to become a
little less intuitive so we finished
loading and we kind of come back to that
thing that we were talking about with
variant A and so if we look at this
graph again and we think about variant A
and let's say that C here has a top-level
await C is going to be the
first module that starts executing and
we're actually going to wait for that
top-level await in C the deepest
node in our graph before anything else
will execute and this is potentially
quite problematic if you have a really
really deep graph and one module at the
bottom has an await in it it could
block the execution of the rest of the
graph it's worth mentioning that this is
different than blocking execution on the
thread anything that's already in memory
anything that's already in the event
loop will actually be doing its thing
it's only the traversal of the graph
that would be blocked so to variant B
when we were talking about a call to
top-level await blocking execution of
parents but not siblings and we look at
it again here and let's say once again
C has a top-level await C would defer
and D could start executing and when D
is done exporting its symbols A would be
deferred because it's waiting on C but B
could start executing and export its
symbols and at the point that C was done
then it could start executing and then
the root could start executing and so
you could see how this unblocks
execution and some of the problems that
may be introduced from speed but it
introduces a new problem that didn't
exist before we had a guaranteed order
that the graph would execute and always
execute and if we
introduced variant B into the
specification we fundamentally changed
the underpinnings of the JavaScript
language and the expectation of
execution order the only real strong
argument that I have against this is that
most people wouldn't know the execution
order to begin with because it's not
very intuitive I don't know if that's
strong enough and we have to work out a
bunch of models to really figure out if
this will bite you because if you're
doing anything that's messing with
global state and expecting that order it
could introduce really weird unexpected
errors which are kind of the worst kind
of errors but we identified a handful of
different constraints here first that
variant A would halt progress in the
module graph until it was resolved that
variant B would halt progress but not
block sibling evaluation and that that
optional constraint that we were talking
about could alleviate those above
concerns but it doesn't really solve them
especially since any module could be
imported we also have that constraint
that we talked about that circular
dependencies could introduce deadlock
and we didn't really have a solution to
that but to the concept of blocking we
brought up that there's actually a
handful of ways to already block
progress in JavaScript the good ol'
infinite loop that one gets us sometimes
and if loops are not good enough there's
Atomics.wait for when you want to use an
API you've never heard of these are all
different ways that you can already halt
progress in JavaScript
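a couple of those can be sketched concretely (note that Node.js allows `Atomics.wait` on the main thread while browsers do not):

```javascript
// Two ways you can already halt forward progress in JavaScript today.

// 1. the good ol' (here, bounded) busy loop
const deadline = Date.now() + 10;
while (Date.now() < deadline) {} // nothing else on this thread runs

// 2. Atomics.wait: synchronously blocks the thread until woken or timed out
// (allowed on the main thread in Node.js; browsers disallow it there)
const ia = new Int32Array(new SharedArrayBuffer(4));
const result = Atomics.wait(ia, 0, 0, 50); // blocks this thread ~50 ms
console.log(result); // 'timed-out'
```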
bringing up these examples as a response
to halting progress being a reason
not to move forward was enough to
at least seed the idea that perhaps
blocking progress was not a strong
enough reason to not
have this feature that blocking progress
halting the graph is a foot gun that
already exists in a whole bunch of
different shades now that may not be a
strong enough argument to take it all
the way to the end but it was good
enough to get us to say hey we want to
explore this problem space and we just
punted on deadlock we said we'll figure
this out later and this is a really fun
point too you don't always need the
answer to every problem in order to move
forward with the solution you just need
to identify the problems so simply
saying that we were aware deadlock was a
problem and we would try to solve that
in a later stage was enough and with all
of that together we were able to get
tc39 to agree to stage 1 for top-level
await which means that the committee
agreed that this was a problem space we
were going to explore so of course the
next thing I needed to do was figure out
how to get it to stage 2
and with stage one it was the signal
that they were interested but we really
really needed to figure out the shape of
the solution even further we needed to
walk in with a strong idea of how we
were going to solve this problem and so
what we really needed to identify was
what spec changes were necessary
so when ESM is executed it's
represented as a graph of module records
so that diagram that I was showing
before you could think of each of those
nodes as like a map with information
about the module itself so every single
one of these nodes is just a set of
metadata it may have information about
where it existed on the file system
what is the actual binary representation
of the data that that module is
representing and it's all linked and
it's ready to be executed and the
algorithm for executing is also defined
and the underlying mechanism of how that
should be implemented in a VM is also
defined and this is what our spec
changes looked like and so what you'll
see that's really fun the first thing
that we ended up doing is we took the
code from inside of AsyncFunctionStart
and this is the underlying
mechanism that's called inside of the
virtual machine when you start an async
function and we abstracted it into a new
operation you'll start to see hey
writing a spec is kind of like
writing code that you can't run and we
just performed that AsyncBlockStart
thing we just essentially copy pasted
all those steps into its own function
and then we went in to the evaluate
method this is like the actual method
that's being called when you're
evaluating and we set up promise
capabilities a promise capability is a
lower-level thing like a promise it
doesn't have all of the like
ins and outs of a promise but it gives
something the ability to be thenable the
ability to be deferred and waited on
and what we've essentially done here and
it's you know maybe a little bit
esoteric and we can dig through this
later but this I think is the best
example in the module execution is we
created a new promise capability and
instead of just executing the context
we've performed an AsyncBlockStart
on the context with that promise
capability essentially instantiating
every single module as a promise to
execute and what we were able to do then
which is super fun if I can just find
that one bit in here is this right here
the Await there's actually an
abstraction inside of the spec called
Await that can block execution in the
spec until a promise capability resolves
so we actually use Await to implement
top-level await which is like this
fun recursion that I really like
but essentially what we did was we made
it that every single module that's
executing instead of just synchronously
executing returns a promise to execute
we await that promise to resolve and
then we execute the next module and this
algorithm supports variant A but if we
implemented another abstraction in the
specification itself for Promise.all
we could just replace that call to
Await with awaiting a Promise.all
of all of those promise capabilities and
in the same way as those two bits of
pseudocode that I showed you we could
actually convert that pseudocode almost
directly into spec language and it would
maybe just work at least that's as far
as I was able to take it so far I'm
working with some much smarter people to
help take it over the finish line
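the difference between the two algorithms can be sketched in plain JavaScript — a simplified model, not the spec text, using a hypothetical graph where the root imports A and B, A imports C and D, and C contains the top-level await:

```javascript
// Simplified model: each module body is an async function, and "c"
// contains the top-level await. Hypothetical graph as in the talk:
// root imports a and b; a imports c and d.
const delay = (ms) => new Promise((r) => setTimeout(r, ms));
const graph = { root: ['a', 'b'], a: ['c', 'd'], b: [], c: [], d: [] };
const makeBodies = (order) => ({
  c: async () => { await delay(20); order.push('c'); }, // the top-level await
  d: async () => { order.push('d'); },
  a: async () => { order.push('a'); },
  b: async () => { order.push('b'); },
  root: async () => { order.push('root'); },
});

// Variant A: Await each module's promise in post-order, one at a time.
async function variantA() {
  const order = [];
  const bodies = makeBodies(order);
  for (const name of ['c', 'd', 'a', 'b', 'root']) await bodies[name]();
  return order; // ['c', 'd', 'a', 'b', 'root'] — d and b both waited on c
}

// Variant B: swap the per-module Await for one await of a Promise.all of
// the children, so a module waits on its children but never its siblings.
async function variantB() {
  const order = [];
  const bodies = makeBodies(order);
  const evaluated = new Map();
  const evaluate = (name) => {
    if (!evaluated.has(name)) {
      evaluated.set(name,
        Promise.all(graph[name].map(evaluate)).then(bodies[name]));
    }
    return evaluated.get(name);
  };
  return evaluate('root').then(() => order);
}
```

running `variantA()` resolves to `['c', 'd', 'a', 'b', 'root']`, while `variantB()` lets d and b finish while c is still awaiting — only a and the root wait on c.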
the importance of the way in
which this algorithm was designed was
that for going to stage 2
we could actually even punt on a
decision between variant A and variant
B because the same basic shape of a
solution could solve either of those so
as long as we had consensus that
top-level await would either be A or B
and that the algorithm that we had
implemented could work for either A or B
that was enough to get to stage 2 that
we had a general shape of the solution
to the problem that unblocked those
objections we clarified in the
specification that top-level await
would only ever exist in the module goal
not the script goal never in CommonJS
and that unblocked all the objections
regarding interoperability we were just
going to be clear there wouldn't be
Interop and we deferred to the current
behavior for handling deadlock we kind
of just punted on that one again but we
said hey there's current behavior for
deadlock if you end up in cycles it will
throw and we want to at least rely on
the current behavior in the specification
and then we're going to run some tests
that allowed us to unblock objections
related to cycles and you'll see a bit
of a pattern here starting all the way
back from that original top-level await
as a footgun we identified what are the
constraints that people will block on
and we came with a proposal that
spoke directly to those objections not
only did we have to come up with a
solution but we had to directly speak to
the objections of people who had
problems with it and in May of 2018 we
were able to get top-level await to stage
2 which was really exciting I didn't
even think we'd be able to get that far
that kind of brings us to where we are
now which is what's next how do we
get to that next stage and I personally
want to move forward with variant B but
we need to build consensus around those
semantics we still have people who are
pretty hard on wanting it to be variant
A and we still have people who aren't
even convinced that top-level await
should be a thing so building consensus
around this will be important I do think
that we're off to a good start Rich
Harris who wrote that original top-level
await as a footgun piece recently tweeted
that he thinks variant B which allows some
parallel execution is probably the
best trade-off between the various
different goals specifically because
async immediately invoked function
expressions everywhere are almost worse
than not having top-level await
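the async IIFE workaround he's referring to looks something like this — a sketch with a hypothetical async setup step, where `value` stands in for an export that needs that setup:

```javascript
// Sketch of the async IIFE workaround for a module needing async setup.
// `value` stands in for a hypothetical export.
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

let value; // the would-be export
(async () => {
  await delay(10); // e.g. reading config or opening a connection
  value = 42;
})();

// the module body (and anything importing it) keeps going immediately,
// so the export can be observed before it's initialized
console.log(value); // undefined
```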
we're going through a variety of examples
specifically to define the semantics of
deadlocks so we are up to about six
different iterations of how we could
identify cycles in a graph and how we
should handle those semantics and the
intent right now is to make it fail
early so if you have two modules that
are awaiting a dynamic import that
import each other it should just throw
that's most likely the semantics that
we're gonna move forward on just don't
allow dynamic cycles that seems to be
the easiest way to avoid for lack of a
better term a bunch of footguns and we
need to finish writing the spec text
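the dynamic-import cycle described above can be sketched like this — `load` is a hypothetical stand-in for the module loader, not a real API:

```javascript
// Sketch of the dynamic-import deadlock: "a" awaits importing "b" while
// "b" awaits importing "a", so neither evaluation can ever complete.
// `load` is a hypothetical stand-in for the loader, not a real API.
const pending = new Map();

function load(name) {
  if (!pending.has(name)) {
    let resolve;
    pending.set(name, new Promise((r) => { resolve = r; })); // register first
    bodies[name]().then(resolve); // then start evaluating the module body
  }
  return pending.get(name);
}

const bodies = {
  a: async () => { await load('b'); }, // like: await import('b')
  b: async () => { await load('a'); }, // like: await import('a')
};

let settled = false;
load('a').then(() => { settled = true; });
setTimeout(() => console.log(settled), 50); // false — a classic deadlock
```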
the text needs to be reviewed and
approved and this is going to be the
hardest step because essentially getting
it to stage three getting that last bit
of sign-off getting to that next stage
will mean that V8 can start implementing
it that Edge can start implementing it
and that implementation from what I hear
is actually not even that hard it's the
semantics and consensus that are the
hard part so a little bit of awaiting
on how we could imagine this working out
so as long as you are all patient and
I'm persistent I'm hoping that we get
there and that persistence is important
and I think I just want to like relay
that idea from earlier that your
intuition may be to fix problems at the
layer of abstraction that you have
control over if one of your dependencies
is broken you may just monkey patch it
in the scripts that you control instead
of sending something upstream if
something upstream is broken because the
language has a problem you may not you
know try to engage in the standards
process or you may not even you know have
the platform to do so but I want to try
to empower every single person in this
room it takes time and it takes
determination a bit of luck and a bit of
support but don't let anything stop you
if there's things that you want to do
and people tell you that you can't do it
do the research maybe you can don't let
anyone tell you what you can't do thank
you very much this is a surfing dog
wonderful thank you so much thank you
Myles
um we don't have a lot of time for Q&A
since Charlie's got a couple
of things to plug in I'm gonna take the
opportunity and steal it for just a
couple of minutes if that's okay come
and take a seat on one of these core
strengthening wobble seats I'm getting
in good shape throughout the day for
this that's such a fascinating thing to
see like a little glimpse behind the
curtain of how some of these things
come to life and like the
workings behind the scenes um and you
talked about you know how people get
involved or how tc39 works and there are
different ways you can participate and
what's the easiest way for me to get
involved so you know without getting
like super involved in an organization
or becoming a member can I get involved
in the conversation can I go somewhere
and see the conversation and chime in
yeah if you go to github.com slash
tc39 slash proposals you'll actually
see a list of every single proposal and
what stage they're in on
github and if you go to that
link and you see a proposal that you're
interested in click on it read it open
an issue open a pull request
anyone can contribute on github like any
other project so if there's something
that you're interested in just get
involved nice and is there a particular
phase of that process that is
more valuable to get involved in I mean
is it like right from you know the
strawman right through to the proposal
the draft phase is anywhere good to get
involved do you think or could you learn at
every stage of the way I think it
depends on your skill sets but there are
things that people who are just getting
started can contribute all throughout
you don't need to know how to write spec
text you can write a babel transform
early on you can write there's a thing
called web-platform-tests which are
compliance tests for these features
there's lots of different ways that you
can pitch in from varying skill levels
okay awesome I mean there's so
many questions I'd like to ask you but I
think on that note and your kind of
closing note of empowering every person
in the room and then we can do that
everyone please say your
thanks please for Myles thanks