Description
Today, async and await are stable parts of the language, but you can't actually run async code without a third-party runtime. Unlike most languages, an async function has to be awaited before any work gets done. Cancellation can happen at any time and can cause surprising errors. Why is async Rust this way? And how is it changing? This talk will cover the design decisions and trade-offs which led to the current design, what that design means for async programming today, and what the Async Working Group is doing to make it better: our plans, current status, and ongoing work.
Hi, I'm Nick Cameron. I'm part of the Async Working Group, and in this talk I'm going to talk about asynchronous programming in Rust. In particular, I want to chart a course from how the current async system was designed, the design pressures that led to the decisions that have brought us to where we are today, through some of the things that are good and some of the things that are not so good about the current system.
I also want to talk about the work that the Async Working Group is doing to make things better and push us towards a shiny future where everything is great with async programming. So, very quickly, just to fill you in on what you can do today.
If you're not aware, async and await are keywords and they are stable; they've been stable since 2019, so you can do asynchronous programming in Rust.
Today, however, there is minimal support in the standard library for it, and frankly the support in the language is somewhat minimal too; we're really relying on the ecosystem to fill in a lot of the gaps there.
But the async working group is, well, working, and we're trying to make things better.
So let's go way back in time and think about how things were designed, and reflect on some of the design pressures, some of the requirements, that led to the design. I want to think about what makes Rust unique, what makes Rust Rust, and I often come back to this fundamental trifecta of things we want Rust to achieve: safety (super important), performance, and ergonomics. And you get to choose three in Rust. That's our real offering!
So let's dig into performance a little bit, because performance is really important; it's one of the big motivators for async programming. Performance can mean a lot of things in a lot of places, and we don't just care about making your program incrementally faster than what you could achieve in other languages, and we don't just care about the difference between an algorithm which takes days versus minutes to run.
We care, potentially, about every single byte of memory and every single cycle of the CPU, and we want to give you, the Rust programmer, precise control over what your program is doing, because you are the one who decides which bytes and which cycles actually matter. Now, as a consequence of these fundamental goals of Rust, we've got a few design principles.
The first one that often comes up is this idea of zero-cost abstractions. This is a design principle that we've inherited from C++, and it doesn't mean that your abstractions are free; that's not how it works. What it means is that if you don't use an abstraction, then you don't pay for it, and furthermore, if you do use it, it will be basically the same cost as if you had done a well-crafted version by hand.
A similar, related idea is precise control. Performance is about this aversion to magic. You'll often hear people talking disparagingly about a proposed feature feeling too magical, and I think a better way to explain this idea is that if you want to have precise control over what your program is doing, you have to understand how the compiler is compiling your program, or at least be able to understand that at a higher level.
And, you know, there's a string abstraction in the standard library and a hash map, but we don't provide much more than the fundamentals, and in particular we don't want to dictate the high-level architecture of your program by having a really opinionated point of view on concurrency architectures, or what have you.
Okay, so looking at the design: probably the most fundamental thing about designing an asynchronous programming system is whether you have a model based on green threads or on what we call stackless coroutines. These are both fairly jargony names, so I'm going to explain both of them. Green threads are like mini operating system threads: the programming language provides a runtime, and this does everything that the operating system would do for operating system threads.
So it's got a scheduler, it's got context switching, and so on. This is a fairly heavyweight runtime, and the programming model for the programmer is pretty similar to using operating system threads. You might have some kind of spawn construct, but other than that there's very little syntactic overhead, and you basically pretend that you're using operating system threads, except things are a little bit more performant because you don't have to keep going through the operating system to context switch, or what have you.
The alternative is to build a system around async and await. This is the so-called stackless coroutines model, where you have the syntactic overhead of async and await: the program is explicitly aware of these things, but they are compiled down to just regular machine code, so at runtime there's very little overhead. We'll talk in a bit more detail about how that works, because this is obviously where we've ended up.
Now, Rust, back in the day, way before 1.0, used to have a green threads model. But this didn't fit with those design principles that I talked about earlier. In particular, it was not a zero-cost abstraction, because if you're using green threads then everyone is in the green threads world, and there are a lot of downsides to that.
In particular, the view that you have in your Rust code of what a thread is and how your program is executing is very different from what you would see from the point of view of another programming language or from the operating system, and that introduces a whole bunch of friction, and therefore a performance penalty, when you are interacting between Rust and programs written in other languages.
Furthermore, this mismatch between Rust's idea of a thread and everyone else's idea of a thread really becomes important when you block the thread, and as a consequence of this it's basically impossible to write a high-quality, low-level I/O library using the green threads model. So the green thread system in Rust was ripped out before 1.0, with the idea that we would later develop a system based around async/await.
So, a future is just a regular Rust data type; any data type, as long as it implements the Future trait. And the Future trait is pretty simple: it just has a single method, poll, and the user or the runtime makes progress by calling the poll function. The future itself is totally inert, just a regular piece of data, and execution happens by repeatedly calling the poll function. You can imagine that if you have multiple futures, a runtime schedules their execution.
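The trait in std::future has exactly that shape: an associated Output type and a single poll method. Here's a minimal hand-written implementor; Immediate is an illustrative type invented for this sketch, not a std type:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A minimal hand-written future: it is inert data until someone polls it.
struct Immediate(Option<u32>);

impl Future for Immediate {
    type Output = u32;

    // `poll` either produces the final value or reports that it isn't done yet.
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // Complete on the first poll; a longer-running future would return
        // Poll::Pending here until its work is finished.
        Poll::Ready(self.get_mut().0.take().expect("polled after completion"))
    }
}
```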
Okay, so if that's how futures give you this concurrency, how do we get from async and await to futures? Let's look at that translation. Let's start by looking at how we deal with the async keyword: an async function is lowered to just a regular function that returns a future.
And if we look at the type here, we use an impl Trait type, because we don't care what the precise type is; in fact, that precise type is unknown to anything other than the compiler. But we do care that it implements the Future trait, and that's all the impl Trait type tells us. So that's perfect.
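As a sketch of that lowering (the function names are just illustrative, and the real lowered body is a compiler-generated state machine):

```rust
use std::future::Future;

// What you write:
async fn add_async(a: u32, b: u32) -> u32 {
    a + b
}

// Roughly what the compiler produces: a plain function whose return type is an
// opaque `impl Future`. The async block here stands in for the generated state machine.
fn add_lowered(a: u32, b: u32) -> impl Future<Output = u32> {
    async move { a + b }
}
```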
Now, if we actually think about the semantics of this: when we call an async function, we're just going to get a future back, and, as I said, the future is just a totally inert piece of data. That's why you have to await or poll that future to actually make progress, which can be a bit surprising if you're used to async functions in other languages, which start making progress as soon as you call them.
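A minimal illustration of that laziness (fetch is a stand-in async fn, not a real API):

```rust
async fn fetch() -> u32 { 42 } // a stand-in async fn for this example

async fn example() {
    let fut = fetch();     // nothing has run yet; `fut` is just a value
    let value = fut.await; // the body of `fetch` only executes as the future is polled
    assert_eq!(value, 42);
}
```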
Now, if you can have async functions, it's a natural step to then want async methods in traits, and unfortunately this is not supported in Rust at the moment; it's one of the most in-demand features. So I'll try and explain why this is trickier than regular async functions.
Let's look at that translation step. We would just do the same translation, and the semantics of returning a future are super easy, but the type here, well, we've got this impl Trait type on a trait method, and unfortunately that's not supported in Rust. This feature is called return position impl Trait in traits, and I'm not even going to try and pronounce that acronym, because only a frog could do that.
But the good news is that implementation of this feature is well underway. Lots of the parts of it are in the compiler, which is what we care about for async methods, and this is going to be available for everybody to use pretty soon, I hope.
Now, the way that return position impl Trait in trait is implemented is that the return type becomes an associated type, and this is really important, because it means that different implementations of the trait can have different concrete types for the async function. That allows you to have different implementations, which, after all, is the whole point of having a trait in the first place. So that's really important.
There's a wrinkle here: if you have a generic function, or you use a lifetime from self or from one of the other arguments in the future (and this is super common in async functions without you really noticing), then you need a generic associated type, which we abbreviate as GATs. Again, this is another feature that's not fully implemented, though this one's a little bit further ahead: you can actually use it on unstable Rust, and we're talking about stabilization at the moment.
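Putting those two pieces together, here's a sketch of what an async trait method conceptually desugars to; the names are illustrative and this is a conceptual model rather than the compiler's exact output:

```rust
use std::future::Future;

// What you'd like to write in the trait:
//     async fn fetch(&self) -> String;
//
// Conceptually, the return type becomes a generic associated type, so each
// implementation can pick its own concrete future type; the lifetime parameter
// is needed because the returned future borrows `self`.
trait Client {
    type FetchFut<'a>: Future<Output = String> + 'a
    where
        Self: 'a;

    fn fetch(&self) -> Self::FetchFut<'_>;
}
```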
What's important for the async methods work is that it is implemented in the compiler. So with these two features, return position impl Trait in trait and generic associated types, we're really close to being able to support asynchronous methods in traits, and that's something that should ship in the very near future.
It gets even more complicated than that, because we often want to call asynchronous functions on trait objects, and that is not something that would be supported with this translation. The working group is looking at that too; I won't dive into the details here, but that work is underway.
Okay, so we've talked about translating async. How about await? Await is a little bit more complicated, and I'm not going to go into it in depth in this talk. Await is a signal to the compiler that it can split the function at this point into futures, and then stitch all those futures together into a state machine. The asynchronous runtime can then poll that state machine to make progress, and in turn that's going to poll these individual futures, where the future being awaited will typically come from another asynchronous function. As I said, it's a bit complicated, and we don't really need to understand it in depth here.
What's important is that it's a future that's being awaited, and one thing that the async working group has looked at is making that more flexible. The way we do that is with an IntoFuture trait.
So I want to explain this by analogy with IntoIterator first. IntoIterator is a trait that says: this is a data type that is iterable, and here's a way to get an iterator over it. It's implemented by things like Vec, by borrowed slices, by many collections, and it's used in the implementation of the for loop. So when you write a for loop, the thing that you iterate over doesn't have to be an iterator.
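For instance (a small illustration):

```rust
// `for` accepts anything that implements IntoIterator, not only iterators.
fn sum_twice(v: Vec<u32>) -> u32 {
    let mut total = 0;
    for x in &v {
        // &Vec<u32> implements IntoIterator, yielding &u32 items.
        total += x;
    }
    for x in v {
        // Vec<u32> itself implements IntoIterator, consuming the Vec.
        total += x;
    }
    total
}
```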
We now have this IntoFuture trait, which covers anything that can be converted into a future, and when we write await, the thing on the left-hand side doesn't have to be a future: it can be anything that can be converted into a future. This makes a number of programming patterns a bit more ergonomic.
So here's an example of using an async builder, the really common builder pattern. This is a replacement for complicated constructors, where we call property methods to initialize the builder, and the nice thing here is that we can then just await the builder itself; we don't need an explicit step to convert it into a future before we await it.
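A minimal sketch of that pattern; RequestBuilder, its fields, and its methods are invented for illustration, while IntoFuture itself is the real std::future trait:

```rust
use std::future::{Future, IntoFuture};
use std::pin::Pin;

// A builder with the usual chained property methods.
struct RequestBuilder {
    url: String,
    retries: u32,
}

impl RequestBuilder {
    fn new(url: &str) -> Self {
        Self { url: url.to_string(), retries: 0 }
    }

    fn retries(mut self, n: u32) -> Self {
        self.retries = n;
        self
    }
}

// Implementing IntoFuture is what lets `.await` work directly on the builder.
impl IntoFuture for RequestBuilder {
    type Output = String;
    type IntoFuture = Pin<Box<dyn Future<Output = String>>>;

    fn into_future(self) -> Self::IntoFuture {
        Box::pin(async move {
            // A stub body standing in for actually sending the request.
            format!("GET {} (retries: {})", self.url, self.retries)
        })
    }
}

async fn example() {
    // No explicit conversion step: `.await` calls `into_future` for us.
    let response = RequestBuilder::new("https://example.com").retries(3).await;
    println!("{response}");
}
```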
Okay, so, changing tack a little bit now. We've talked about how we go from async functions to futures, and then we poll the future to make progress, and after we've polled it a few times the future will hopefully finish and return a result to its caller.
What happens if you just, well, stop polling? In particular, what if you drop the future, whether you never poll it at all and just drop it, or you poll it for a bit and then drop it? Well, as I said before, this is just an inert data type.
There's no execution that's going to carry on: the future is cancelled. Cancellation is a really useful thing to do, and in fact having cancellation so directly linked to drop is really nice, because it means you never need to worry about zombie futures that keep progressing in the background with no way for you to get their result. We make use of this in features like the select macro, which drops the futures that we don't care about in order to cancel them, and that's a really clean way to tidy those futures up.
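As a tiny illustration (do_work is a stand-in async fn):

```rust
async fn do_work() -> u32 { 42 } // stand-in async fn for this example

async fn example() {
    let fut = do_work(); // inert: nothing has run yet
    drop(fut);           // cancellation: `do_work`'s body will never run, and a
                         // future that had been polled part-way simply stops there
}
```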
But it comes with some problems too, and again these can be somewhat surprising, especially if you're used to asynchronous programming in another language. Consider a future which, each time it's polled, makes some progress and writes that progress to an internal buffer, and when it has made enough progress, when it's done, returns the contents of that buffer to the user.
Unless that future gets cancelled. If the future gets cancelled, then the data we've written into the intermediate buffer is just lost, gone, and data loss is generally pretty bad.
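A sketch of that hazard (read_chunk is a hypothetical async fn standing in for whatever I/O the future performs):

```rust
async fn read_chunk() -> Vec<u8> { Vec::new() } // hypothetical stub for this sketch

// Accumulates chunks in an internal buffer and only returns it once complete.
async fn read_message() -> Vec<u8> {
    let mut buf = Vec::new();
    loop {
        // If this future is dropped while suspended at this await point,
        // everything already accumulated in `buf` is silently lost.
        let chunk = read_chunk().await;
        if chunk.is_empty() {
            return buf;
        }
        buf.extend_from_slice(&chunk);
    }
}
```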
You generally want to avoid this, so you have to be really careful, and you have to reason about this idea of cancellation safety for your futures: is this a future that can get cancelled? Is something bad going to happen if it does get cancelled? And so forth. This is a bit of a footgun for asynchronous programmers, and it's something the working group has spent a lot of time thinking about, asking: can we do better?
Another place where this idea of cancellation causes issues is with completion I/O. This is about designing the async I/O traits, the async read and async write traits. Completion I/O is kind of the new hotness at the moment, because io_uring on Linux uses the completion model and is pretty awesome. Less fashionably, it's also the default way to do asynchronous I/O on Windows, using IOCP.
The way completion I/O works is pretty intuitive, actually. The user initiates a read with some call into the operating system, passing in a buffer, and then the user can get on with something else; that's asynchronous I/O, after all. At some point further on in time the I/O will be done, the OS somehow notifies the user that it's complete, and at that point the user can look at its buffer and, lo and behold, the data will be there to read. Now, there are some interesting invariants about this buffer.
The buffer is supplied by the user, but the user must guarantee that the buffer stays alive and that the OS has unique access to it for the entire duration of the I/O. This actually fits really well with some of the core concepts of borrowing in Rust, with all of that wrapped up in an async function.
So the way this would work in Rust is: you've got your buffer, which we'll just allocate locally on the stack; you call read, passing a mutable (and therefore unique) reference to that buffer; and then you await the result. Rust guarantees that the buffer is not going to be used by anyone else, and it guarantees that it's going to live until after that read call completes. So this is fantastic.
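Roughly like this (read here is a hypothetical async I/O function, not a specific library API):

```rust
// A hypothetical completion-style read; the stub body stands in for a syscall wrapper.
async fn read(fd: i32, buf: &mut [u8]) -> usize {
    let _ = (fd, buf);
    0
}

async fn example() {
    let mut buf = [0u8; 1024];       // allocated locally, on the stack
    let n = read(3, &mut buf).await; // a unique &mut borrow lives across the await
    println!("read {n} bytes");
}
```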
But suppose we take the future (remember, this asynchronous function is just returning a future) and we just drop it, or even worse, we call mem::forget on it. Yes, that's a silly thing to do directly, but it's probably actually wrapped up in a select macro or something similar, so this does happen in real life. Well, what happens now?
What happens is that the future is cancelled, which means it's dropped, which means that the buffer is deallocated, and from Rust's point of view that's safe to do, because we're never going to call the underlying poll function again; as far as Rust is concerned, the buffer is never going to get used.
But as far as the operating system is concerned, we still want that I/O to happen, and it still thinks it has unique access to that buffer, so it's going to read into that buffer, and that's going to be a use-after-free error. We hate those in Rust; we try really hard not to have use-after-free errors. All right, well, why can't you just cancel the read with the operating system? And you can do that; most completion systems give you some way to cancel the I/O.
Unfortunately, cancellation is asynchronous too, and the invariant around cancellation is that you have to ensure that the buffer lives until either the cancellation or the original read completes. And we can't do that: we don't have any facility for guaranteeing that an asynchronous function is called when a future gets cancelled.
Okay, well, what about having async drop? Would that solve the problem? Well, for one thing, async drop turns out to be really complicated; although it's simple to explain, it's complicated to design, and I don't have time to go into that here, which is a shame, because it's really interesting. But it's something the async working group has been discussing and thinking about, and I'm hopeful that we will actually have async drop implemented at some point.
It's just not as easy as it sounds. But even if it were, or even if we did put all that effort in and got it soon, the more fundamental problem is that in Rust, destructors are not guaranteed to run. It's okay to rely on destructors running most of the time, because most of the time they do run, but in this case we'd be relying on destructors for soundness, and that's not okay.
Because you can call mem::forget, or leak the value in other ways, you can't guarantee that the destructor is going to be run, and in those cases you would still have this use-after-free error. So, unfortunately, calling drop, whether that's async or synchronous, is not a solution.
We do have solutions, though. The simplest is that the I/O library manages a buffer and just copies into the user's buffer. Although that's simple, which is good, it's also not very performant, and worse, it doesn't fit with this idea of having precise control over performance, because the I/O library is managing the buffer and the copy is forced upon you; and really, one of the main advantages of using completion I/O was this idea of having zero copies. So that works, but it's not great.
A better approach is perhaps using the BufRead trait. Now, we usually think of BufRead in terms of having a buffer where we collate multiple small reads before returning the whole thing, but that's not what's important here. What's important here is that the buffer is internal to the reader, and in this case that would mean the I/O library can manage that buffer, and the caller can then read from it without a copy being forced upon them.
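As a sketch of that shape (the trait and method names below are illustrative; they mirror the synchronous BufRead API rather than naming an actual async trait that exists today):

```rust
use std::io;

// The reader owns its buffer, so cancelling a caller's future never invalidates
// a caller-owned borrow, and the caller reads the filled bytes without a forced copy.
trait AsyncBufRead {
    /// Wait until some bytes are buffered internally, then expose them.
    async fn fill_buf(&mut self) -> io::Result<&[u8]>;

    /// Tell the reader that `amt` of those bytes have been consumed.
    fn consume(&mut self, amt: usize);
}
```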
This is fine in a lot of cases, but sometimes you might want to read directly into the user's buffer, or you don't want the buffer to be managed by the I/O library, and in that kind of case we might need a new trait like owned read. The idea here is that we'd pass an owned buffer to the I/O library rather than a borrowed one.
So imagine something like a Vec<u8> rather than a borrowed slice of u8. Now the caller is responsible for the buffer management, but we pass the buffer to the I/O library by value, so the I/O library can keep it alive for just long enough for that call.
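A sketch of that shape (OwnedRead and read_owned are invented names for illustration, not an existing API):

```rust
use std::io;

// The caller hands the buffer over by value; the I/O library keeps it alive
// exactly as long as the operation needs, even if the caller's future is
// dropped or forgotten, and hands it back together with the result.
trait OwnedRead {
    async fn read_owned(&mut self, buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>);
}
```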
So we're not exactly sure what the eventual solution here is going to look like; perhaps we'll support both of these traits, but we are confident that we'll have some good, optimal way to await completion I/O.
All right, I want to finish this talk off by summarizing what I've talked about, in particular where we're going, and give you an idea of the shiny future we're hopefully heading towards. We want you to be able to write async anywhere you can write a function: whether that's methods on traits, whether that's closures, whether that's destructors, we want async to be everywhere. And it shouldn't matter which runtime you use.
There are always going to be reasons for different users to want different runtimes, but you should still have access to the entire ecosystem and all the goodness of programming with asynchronous Rust, and in particular, if your requirements or your constraints change, it should be easy to change runtime too.
And for somebody who's starting out programming asynchronous Rust, you should be able to get started without having to care about which runtime you use, without having to do the research into finding the right one for you and depending on, you know, a great big runtime from the ecosystem. And once you've written your program, inevitably there are going to be bugs in it, and finding those using a debugger should be a great experience.
In particular, it should work just as well as using a debugger with synchronous Rust. I haven't had a chance to talk about that much in this talk, but the async working group has been doing work on that too, working towards a great debugging experience.
And we want programming with async Rust to be easier.
We've been doing a lot of work thinking about the invariants, the guarantees that you should have when doing asynchronous programming, especially around the life cycle of futures: how they're started, how they end, whether they're cancelled. We want to use that to design libraries, perhaps in the ecosystem, perhaps in the standard library, and to come up with design patterns which enable you to write asynchronous Rust code more safely and more easily.
If you would like to contribute to the design work, to the implementation, documentation, or testing, we would love you to help out. But in particular, even if you don't want to do that work, if you are using async Rust today, and especially if you are implementing a runtime, whether that's a general-purpose runtime that's available for everyone or one that's specific to your project (perhaps it's closed source, so we don't know about it), then we really want to hear about your experiences and your opinions on the work we're doing, so that we can ensure we're building the right thing to help you continue to build asynchronous Rust programs, and to do that better.
We use the rust-lang Zulip chat, and we have the wg-async channel there, where we communicate, so please reach out to us in either of those places. I'm nick cameron on Zulip and nick_r_cameron on Twitter, and I'm going to be on the RustConf Discord right now and for some time after the talk, so please reach out on Discord, Zulip, or Twitter.
If there's anything in this talk you want to talk about, or that you found interesting, or if you just want to follow along to see where this goes. So, that's the end of the talk; thank you for listening, cheers.