From YouTube: code::dive 2017 – Alex Crichton – Concurrency in Rust
A
And the next speaker is Alex Crichton. Alex is a member of the Rust core team and has been working on the Rust programming language for five years. He is employed by Mozilla to work on Rust and works throughout the project on aspects such as the standard library, Cargo, the asynchronous I/O ecosystem, and the infrastructure of Rust itself.
A
Currently Alex works primarily on the Tokio project, Rust's asynchronous I/O stack, as well as Cargo, Rust's package manager. Now, what you don't know about Alex is that his favorite indie time-travel movie is called Primer. Have you seen it? Have you seen Primer? Okay, so one day, Alex says, he's actually going to understand what happens in that movie, but before he does that he's going to deliver his talk called Concurrency in Rust. Let's have a big hand for Alex Crichton.
B
All right, thank you! So yes, my name is Alex, and today I would like to talk to you about concurrency in Rust. Rust is a new programming language coming out of Mozilla Research, and I'll be giving you a more in-depth intro to it tomorrow, but also a brief one today so we can get started.
B
First off, what I would like to talk about is why we're talking about concurrency at all and why concurrency is so interesting today, especially in the realm of C and C++. This is a graph of data collected over 40 years of CPUs and the various trends among them, covering a couple of different properties. Two of them really stand out here, the first of which is that CPUs are not getting faster.
B
Cpus
have
flatlined
over
the
past
decade
and
this
whole
Moore's
law,
where
we're
gonna
double
our
CPU
speeds
every
six
months
or
a
year.
At
this
point,
that's
all
out
the
window
and
that's
not
going
to
work
anymore.
Well,
the
other
trend.
We
see
is
that
we
have
this
exponential
increase
or
we
see
a
very
large
increase
in
the
number
of
cores
available
in
your
machine.
B
That's
not
accidentally
crashing
all
of
our
programs,
so
this
sounds
great,
and
so
it
will
open
some
bugs,
and
so
this
is
actually
a
real
bug
on
the
firefox
rendering
engine
saying
we
should
parallelize
CSS
selector
matching,
and
so
that's
that's
easy
to
do
we,
but
the
problem
here
is
that
this
bug
was
open
seven
years
ago.
So
for
seven
years
this
bug
has
remained
inert
saying
all
of
these
possible
speed.
B
But
there's
this
sign
in
the
San
Francisco
office
of
Mozilla,
which
is
low,
I,
think
it's
three
or
four
meters
in
the
air
saying
you
have
to
be
this
tall
to
write
multi-threaded
code
and
it's
really
showing
off
this
aspect
where
to
actually
use
concurrency
in
a
language
like
C
or
C++,
is
incredibly
difficult
and
especially
in
a
multi
million
line
code
base.
Trying
to
retrofit
that
in
there
and
trying
to
retroactively
Li,
add
that
is
it
next
to
impossible
tasks.
B
So
we
have
these
bugs
that
are
open
for
seven
years,
with
no
one
actually
making
any
progress,
because
we're
never
sure
when
we've
fixed
it
or
we're
actually
complete
and
when
we
don't
have
any
bugs,
and
so
this
is
where
Russ
starts,
to
enter
the
picture.
This
is
a
blog
post
entitled
fearless
concurrency
in
rust,
which
came
out
just
before
was
rust.
B
Now
it
is
so
before
we
go
much
further.
I
want
to
give
you
a
brief
overview
of
rust
itself.
Like
I
said
a
little
bit
earlier.
I
was
talking
about
these
strong
safety
guarantees,
these
no
sig
Falls,
there's
no
data
races,
which
is
kind
of
the
thread
safety
aspect,
but
also
this
without
compromising
on
performance.
B
The
perfect
example
of
being
like
gecko
bug,
which
is
actually
been
solved
now
and
I'll,
be
talking
a
little
bit
more
about
that
in
a
bit.
So
in
this
talk
first,
I'm
going
to
talk
a
little
bit
about
concurrence.
You
just
kind
of
make
sure
we're
on
the
same
page
of
what
it
is
and
kind
of
give
you
an
overview
of
what
I
mean
by
this
I'll.
Give
you
a
bit
of
a
small
intro
about
rust.
So
it's
okay,
if
you've
never
heard
of
rust
before
and
then
I'll
dive
into
the
libraries.
B
This
is
kind
of
what
you
can
do
with
concurrency
and
rust,
some
of
the
primitives.
We
have
how
they
work
and
how
we
actually
make
them
safe,
using
the
guarantees
that
rust
has
and
then
finally
not
entirely
concurrency
related
or
a
little
bit.
Different
is
futures
specifically
about
asynchronous
I/o
and
the
kind
of
the.
What
rust
story
is
that
for
today
and
kind
of
how
it
all
works
in
a
bit
of
a
deep
dive
and
kind
of
how
that's
implemented
and
how
it's
so
fast,
all
right.
B
All
of
the
possible
interleavings
here
it's
if
you
can
never
fully
comprehend
how
exactly
every
single
state
and
are
they
all
going
to
work
and
they
all
valid,
and
so
this
is
where
concurrency
has
all
of
these
bugs
associated
with
it.
These
data
races
these
race
conditions,
these
seg
faults,
and
some
of
these
are
actually
exploitable,
which
is
really
bad
in
the
sense
that,
if
you're,
just
kind
of
an
innocent
library,
maintainer
and
you'd
like
to
actually
use
concurrency
you'd
like
to
use
the
CPUs
you
have
available
to
you.
B
The
problem
is
that
if
you
do
so,
you
might
be
opening
up
your
users
to
see
V's
to
remote
code
execution
to
all
of
these
vulnerabilities
that
are
targeted
back
at
you,
and
so
this
is
a
real
problem,
because
concurrency
is
also
super
super
nice.
This
is
actually
that
the
parallelizing
CSS
matching
and
gecko
has
now
been
implemented
in
rust
and
is
actually
shipping
I
think
in
about
ten
hours
in
the
when
the
US
wakes
up
and
we
get
these
nice
speed.
B
Ups,
so
Amazon
starts
rendering
for
instantly
almost
20%
faster
and
you
to
when
some
computers
goes
up
to
30%
faster,
and
so
we
have
this
problem
where
we
have.
We
have
all
these
resources
and
we
want
to
use
them
because
we
put
some
clear
speed,
ups
and
some
clear
winds,
but
we're
afraid
of
doing
that
because
of
all
these
problems,
these
data
races,
these
seg
faults.
B
These risks, these race conditions. So this is where Rust starts to enter the picture, and I want to talk first about the safety aspects of Rust, how we fundamentally build up safety in Rust, and then let that lead into how we use those fundamental aspects to build up concurrency primitives as well. Rust is made of these two key ingredients.
B
These are zero-cost abstractions, which you're very familiar with from C++, and then also this memory-safety and data-race-freedom aspect. I'm going to zero in here on memory safety and data-race freedom, specifically how that actually works and what the fundamental concepts are within Rust itself. To start off, I want to show you a small example written in C++, to show what I mean by safety, and as we do this I want to dig into what the bug is here and what's going to happen.
B
What's
going
to
happen
so
naturally,
in
C++
we'll
start
with
a
program.
Here
we
have
a
vector
of
strings.
The
vector
itself
is
stored
directly
on
the
stack
and
then
the
first
thing
we'll
do
is:
we've
got
a
pointer
into
it
that
will
push
some
data
onto
it,
transfer
it
out.
So
as
we
come
down
here,
we
try
and
actually
push
some
new
data
on
the
vector.
Those
of
you
who
have
implanted
vectors
before
or
familiar
with
this
might
know
that.
Okay,
this
is
that
vectors
length
might
equal
the
capacity.
B
So
now
what
he
does
allocates
a
new
data
copy
over
the
existing
data,
push
some
new
data
on,
and
then
the
next
step
is
that
we're
going
to
update
the
do
vectors
data
pointers.
The
vector
now
points
this
new
chunk
of
memory
and
we're
going
to
deallocate
and
free
the
previous
chunk
of
memory,
and
clearly
this
is
the
problem
that
were
going
to
see
here,
which
is
that
we
have
this
dangling
pointer
into
freed
memory
now
our
element
pointer,
which
means
that
when
we
actually
try
and
print
this
out,
this
is
undefined
behavior.
B
This
is
could
do
effectively
anything
at
this
point
in
time
it
could
psych
fold,
it
could
actually
succeed.
It
could
then
kind
of
do
whatever,
and
this
is
what
we
see
explicitly
want
to
avoid,
because
these
are
the
kinds
of
problems
where
you
write
it
into
your
code.
It
works
today
and
then,
ten
years
from
now
you
get
an
email
saying.
B
Oh,
you
broke
my
program
and
you're
not
around
to
fix
it
and
I
had
to
debug
it
for
six
hours
or
thirty
hours
to
figure
that
out,
but
so
in
any
case,
I
want
to
dig
into
here
kind
of
what
actually
happened
like.
Why
did
this
go
wrong?
And
what
can
we
learn
from
this?
And
how
can
we
kind
of
generalize
this
to
try
and
solve
more
problems
than
just
this
one
specific
issue
and
there's
two
key
ingredients,
the
first
of
which
you
find
here
is
aliased
pointers.
B
So
we
have
this
element
pointer
and
this
vector
pointer,
which
are
pointing
to
the
same
chunk
of
memory
and
then
the
problem
there
is
that
that
alone
is
fine,
but
it's
when
you
add
in
mutations,
so
you
have
this
mutation
of
the
vector
kind
of
pushing
on
some
new
data
and
it's
this
simultaneous
act
of
both
mutation
and
aliasing,
which
is
causing
a
problems
here.
If
you
only
had
mutation,
then
the
vector
itself
was
always
internally
consistent.
B
If
you
only
had
aliasing,
then
they're
both
pointing
the
same
piece
of
data,
it's
not
changing,
so
it's
totally
fine.
Typically,
this
is
when
a
garbage
bucket
comes
in
and
saves
the
day,
but
in
rust,
we've
kind
of
have
this
constraint
where
we
cannot
have
a
garbage
collector.
We
want
this
kind
of
systems
level
of
performance.
B
You
also
want
to
have
this
lack
of
a
runtime,
and
so
the
rest
solution
to
this
problem
of
kind
of
preventing,
simultaneous
mutation
and
aliasing
is
called
ownership
and
borrowing
these
two
aspects,
this
cadiz
kind
of
two
pillars,
are
the
foundation
on
which
all
safety
and
rust
is
built
is
kind
of
the
two
fundamental
language
features
that
rust
gives.
You
and
I'll
talk
about
these
in
some
more
detail,
but
at
a
very
high
level,
the
first
thing
that
ownership
and
runtime
or
ownership/borrowing
gives
you
is
no
runtime.
This
is
not
a
garbage
collector.
B
This
is
purely
static
analysis,
so
there's
no
extra
metadata
tracking
there's
no
extra
layers
of
indirection,
it's
kind
of
all
executing
is
you
actually
wrote
it
and
just
running
static
analysis
at
compile
time,
we
as
I've,
been
saying
this
is
kind
of
the
foundation
for
memory
safety
itself.
This
is
how
we're
going
to
be
preventing
databases,
how
we're
going
to
be
preventing
seg
faults.
B
They
have
a
garbage
collector,
which
gives
the
memory
safety,
but
they
obviously
have
a
very
large
runtime
associated
with
them.
It's
very
difficult
to
embed
it's
very
difficult
to
run
in
constrained
environments,
and
the
very
interesting
thing
about
these
languages
is
that
they
don't
actually
protect
you
from
data
races,
and
so
this
is
kind
of
the
key
aspect
of
ownership
and
borrowing
which
is
it's
giving
us
all
these
benefits
effectively
for
free
and
this
in
the
sense
of
we
have
no
runtime.
We
have
memory
safety
and
we
also
have
no
data
races.
B
Sorry
I
want
to
give
you
an
overview
of
first
ownership
and
a
little
bit
of
borrowing.
I'll
talk
more
about
these
in
the
talks
tomorrow.
This
is
gonna
be
a
bit
of
a
fast
tour,
but
so
here's
an
example
of
some
russ
code.
Where
the
first
thing
we
do
is
we
create
a
vector
on
the
stack
and,
as
we
saw
with
c++,
the
vector
is
stored,
directly
inline,
the
kind
of
data
the
length
go
to
the
class
to
do
on
the
stack
itself.
B
We
have
this
word
mute
here,
which
means
that
we
actually
can
mutate
the
vectors
saying.
Okay,
we
can
push
someone
onto
it.
We
can
push
them
too
onto
it,
put
some
data
onto
it
onto
the
heap
and
then
we're
gonna
call.
This
function
called
take,
and
so
what's
happening
here.
Is
that,
on
the
right
hand,
side
we
have
this
bear
V
of
X
32,
that's
kind
of
like
the
type
that
it's
receiving
and
then
we're
calling
the
function.
Take
with
the
binding
V
and
what's
happening
at
runtime.
B
We
know
that
it
has
gone
out
of
scope.
There
are
no
possible
owners
of
this
data
so
as
the
owner,
you
have
control
over
the
destruction
of
this,
giving
us
deterministic
destruct
destruction
and
rust
very
similar
to
C++,
and
this
is
where
you
free
resources.
You
close
sockets,
you
free
memory,
you
destroy
everything
internally
and
so,
as
we
come
back
to
the
function
main,
this
is
where
may
no
longer
has
access
to
this
vector.
It
has
relinquished
ownership.
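A minimal sketch of the ownership example being walked through (the exact names on the slides may differ):

    fn take(v: Vec<i32>) {
        // `take` now owns the vector; when it returns, the vector is dropped
        // and its heap allocation is freed deterministically.
        println!("{}", v.len());
    }

    fn main() {
        let mut v = Vec::new();
        v.push(1);
        v.push(2);
        take(v);      // ownership moves into `take`
        // v.push(3); // error: `v` was moved, so `main` can no longer touch it
    }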
B
So
the
next
thing
in
rust
was
borrowing.
Oh
sorry,
in
the
the
key
aspect
of
ownership,
I'd
like
to
go
through
was
the
why
I
was
talking
earlier
about
owner
or
aliasing
and
mutation
we're
trying
to
prevent
both
of
these
from
happening
at
the
same
time,
because
that's
where
all
these
bugs
are
coming
from,
and
so
what
ownership
is
doing
is
its
allowing
mutation,
but
it's
not
allowing
aliasing.
B
So
borrowing
is
kind
of
the
other
aspect
of
that,
where,
if
we
only
had
ownership
transfer,
it
would
be
kind
of
an
ergonomic
could
just
keep
passing
everything
around
by
value.
So
here
we'll
have
a
vector
of
on
the
stack
as
well,
and
the
first
kind
of
borrow
of
2
and
rust
is
a
mutable
borrow
denoted
with
this
ampersand
mute
on
both
the
caller
side
and
the
collie
side
on
this
pin
this
push
function
now
a
mutable
borrow,
as
the
name
implies,
does
allow
mutations.
B
So,
as
we
come
over
here,
we'll
create
a
lightweight
pointer,
that's
kind
of
what
what
references
are
doing
in
rust
and
will
be
allowed
to
mutate
this
and
actually
push
some
data
onto
that
vector.
Now,
because
this
is
allowing
mutation,
a
mutable
borrow
does
not
allow
aliasing.
So
once
you
have
acquired
a
mutable
borrow,
we
are
no
longer
allowed
to
modify
this
vector,
so
we
can't
read
the
vector
we
can't
create
a
new
mutable
borrow.
B
So
now
we
can
continue
to
read
the
vector
and
do
it
also
want
with
it,
and
the
second
kind
of
borrow
and
rust
is
a
shared
borrow,
and
so
this
is
denoted
with
this
ampersand
digital
on
the
right
hand,
side
and
also,
on
the
left,
hand,
side
and
shared
borrow.
It
allows
aliasing,
but
not
mutation,
so
we
can
create
many
shared
borrows.
We
can
read
some
data,
but
while
we
are
doing
that,
we
cannot
mutate
it
this.
B
This
read
function
cannot
push
any
anymore
constants
anymore
contents
on
the
vector
you
can't
create
anymore
mutable
oros,
but
you
can
pass
around
all
these
share.
Bios
and
whatnot
all
right.
That
was
kind
of
a
very
quick
overview
state
of
ownership
and
borrowing
I'll
be
going
in
more
depth
tomorrow
on
this.
But
the
key
thing
here
is
that
this
is
all
happening
statically
in
rust.
This
is
kind
of
one
static
analysis
pass,
which
is
guaranteeing
these
these
aspects
of
never
having
simultaneous
aliasing
and
nutation,
and
it's
kind
of
preventing
all
these
bugs.
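A minimal sketch of the two kinds of borrow just described (function names here are illustrative):

    fn push_two(v: &mut Vec<i32>) {
        v.push(2);               // a mutable borrow allows mutation
    }

    fn read(v: &Vec<i32>) {
        println!("{}", v.len()); // a shared borrow only allows reads
        // v.push(3);            // error: cannot mutate through a shared borrow
    }

    fn main() {
        let mut v = vec![1];
        push_two(&mut v); // while this &mut exists, `v` cannot be aliased
        read(&v);         // many shared borrows may coexist
        read(&v);
    }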
B
Where
ownership,
for
example,
is
you
never
have
a
double
free,
because
only
one
person
fries
it
and
yeah
is
the
owner
of
it,
and
then
borrowing
prevents
these
use.
Data
freeze,
like
we
saw
in
C++,
where
once
we
had
a
borrow
on
that
vector
kind
of
once,
we
have
the
pointer
into
it,
were
not
allowed
to
mutate
it.
B
We
can't
say:
ok
now
update
it
behind
the
scenes
and
we
forgot
to
update
one
or
the
other,
and
so
the
coolest
thing
here
is
I've
also
been
saying
that
rust
does
not
have
said
at
erases,
and
these
are
the
three
key
ingredients
for
a
database.
This
sharing
this
mutation,
no
ordering
kind
of
with
the
C
11
member
model.
It
can
cost
over
that
if
you
have
all
those
all
at
once,
you
have
a
database-
and
this
sounds
pretty
familiar
at
this
point
in
terms
of
by
preventing
either
aliasing
or
mutation,
never
never
happening
simultaneously.
B
Sorry
now
I
want
to
kind
of
dive
into
given
these
foundations.
Early's,
quick
foundations
talk
about
some
of
the
concurrency
primitives
that
rust
has
and
how
they're
leveraging
ownership
and
borrowing
to
actually
give
you
these.
This
management
of
the
of
the
machine
and
a
productive
sense
of
not
having
data
is
not
having
second
thoughts
and
all
up.
So
the
key
thing
to
know
about
rust
is
that
rusts
concurrency
is
all
baked
into
libraries,
not
the
language
itself.
B
Historically,
we
actually
had
message
passing
and
we
had
a
bunch
of
stuff
baked
into
the
language,
but
we
ended
up
removing
all
that
by
only
having
ownership
and
borrowing.
So
what
we're
gonna
see
here
is
actually
purely
baked
in
the
libraries
are
those
standard
library
or
the
rust
ecosystem,
and
all
of
its
do
all
it's
doing
is
leveraging
ownership
and
borrowing
to
give
you
this
ironclad
guarantee
of
safety
and
I'll
be
going
over
some
examples
of
how
that
all
works.
So
first
thing
we
need
to
do
is
to
actually
introduce
concurrency.
B
We
need
to
actually
spawn
a
thread.
We
need
to
do
something
that
adds
a
new
actor
to
our
system.
So
this
is
an
example
in
Russel
reuse,
the
stood
thread
module
which
caused
a
spawn
function.
This
will
just
have
a
closure
internal.
That's
that
double
bar
and
braces
are
a
closure
in
rust,
in
this
case,
we'll
compute
evallo
world,
and
eventually
we
can
actually
have
a
synchronization
point
waiting
for
that
thread
to
exit,
and
so
this
is
a
relatively
simple
program
which
will
just
print
hello
world.
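A minimal sketch of that spawn-and-join example:

    use std::thread;

    fn main() {
        // spawn a child thread running the closure
        let child = thread::spawn(|| {
            println!("Hello, world!");
        });
        // synchronization point: wait for the child thread to exit
        child.join().unwrap();
    }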
B
Now
what
we
can
also
do
with
closures
in
rust
is
actually
close
over
out
our
data.
We
can
kind
of
seamlessly
pull
it
in
and
start
working
with
it,
and
this
is
an
example
where
we're
going
to
crease
start
by
creating
a
vector
on
the
main
thread,
but
then
we're
gonna
transfer
that
vector
to
a
child
thread
and
start
actually
modifying
it,
and
the
key
word
here.
B
The
the
key
thing
here
is
this
key
word
called
move
saying
that
we
are
moving
the
contents
of
the
closures
of
this
vector
into
the
child
thread,
so
this
destination
vector
it's
starting
on
the
main
thread.
We're
then
transferring
ownership
to
the
closure
and
then
that
whole
closure
is
gonna
go
run
on
some
separate
thread
and
because
it's
mutable
we
can
actually
start
pushing
onto
it
and
it's
gonna
all
work
out,
but
so
what
happens?
B
If
we
actually
come
here
and
try
it
and
add
some
modifications
afterwards,
this
would
be
a
typical
data
race
if
we
actually
were
pushing
onto
this
vector
from
two
different
threads
or
this
one
actually
called
some
internal
seg
faults,
but
rust
will
prevent
this
at
compile.
Time
is
saying
that
say
use
after
move,
because
we
have
actually
moved
this
content
into
the
child
thread.
We
no
longer
have
access
in
the
main
thread,
and
so
you
can
no
longer
touch
it.
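A rough sketch of the move-closure example and the compile error being described (assuming the vector is called v; the slide's exact code may differ):

    use std::thread;

    fn main() {
        let mut v = vec![1, 2, 3];
        // `move` transfers ownership of `v` into the closure,
        // which then runs on the child thread
        let child = thread::spawn(move || {
            v.push(4);
            println!("{:?}", v);
        });
        // v.push(5); // error: use of moved value `v`
        child.join().unwrap();
    }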
B
When you spawn a thread, you have to capture everything by ownership, you have to capture everything by value. You can't have any shared references or mutable references in there, because they do not live long enough. So this is a case where we can only successfully compile this code if we transfer ownership of the vector into the closure that we're spawning, and the main function no longer has access to it. But that's not always the most useful thing.
B
So
the
first
thing
I
might
try
to
do
is
to
actually
share
this
data
and
kind
of
use
it
on
both
the
main
thread
and
the
child
thread
is
used.
Reference
counting-
and
this
is
where
we
have
this
RC
type.
The
standing
here
reference
counted
pointer
is
where
this
is
creating
a
new
reference
kind
of
pointer
with
some
vectors
inside
and
then
inside
the
child.
We're
gonna,
try
and
print
it
out
and
then
use
they
don't
use
it
externally
as
well.
B
But when we capture this by moving it into the child closure, even though that one reference was passed by ownership, it is not actually safe to manipulate that reference count from multiple threads, and so this is rejected at compile time. Whether or not a type is Send, sendable or not sendable across threads, is automatically inferred by the compiler and worked out internally.
B
So
instead
what
we
can
do
actually
to
actually
successfully
compile
this,
which
I'll
dig
into
some
more
detail
here
is
use
this
type
called
arc,
and
this
stands
for
atomically
reference
counted
and
naturally,
is
therefore
safe
to
send
across
threads.
We're
gonna,
be
atomically,
manage
the
reference,
the
reference
count,
so
it's
safe
to
frog
that
on
multiple
threads
at
a
time.
So
here
the
first
thing
I'm
we're
gonna
do
is
we're
gonna,
take
a
vector,
put
it
into
an
arc.
That's
going
to
create
some
data
on
the
heap.
B
When we come here and call this clone function, what we're actually doing is creating a separate reference to the same data: we have this extra v2 pointer that points at the same allocation, and it updates the actual reference count to two. Now we can start moving these into separate threads. Our child thread here is executing with the first reference.
B
Is
that
you're
getting
a
direct
pointer
directly
inside
this
arc
so
kind
of
going
past?
All
the
data
going
past
the
reference
kind
of
data
that
extra
extra
header,
with
the
reference
count,
if
you're
reaching
directly
into
the
arc
and
kind
of
printing,
all
that
out
and
there's
some
key
things
happening
here
in
terms
of
how
this
ends
up
all
being
safe,
the
first
of
which
is
that
arc.
When
you
construct
it
is
taking
ownership
of
the
data
as
you
pass
it
in,
and
so
this
is.
B
This
is
key
in
the
sense
that
we
know
that
as
the
owner
of
this
vector,
there
are
no
outstanding
aliases
ownership
does
not
allow
aliasing,
so
the
arc
here
is
the
sole
owner
of
this
piece
of
data
and
then,
as
the
sole
owner
of
that
piece
of
data,
you
can
then
decide
how
others
get
access
to
it.
So,
for
example,
here
an
arc
is
only
going
to
allow
shared
access
because
we
have
some
aliasing
with
extra
reference
counts
here.
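A minimal sketch of sharing a vector across threads with Arc (variable names are illustrative):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let v = Arc::new(vec![1, 2, 3]); // Arc takes ownership; data lives on the heap
        let v2 = v.clone();              // bump the atomic reference count to two

        let child = thread::spawn(move || {
            // shared, read-only access from the child thread
            println!("child sees len = {}", v2.len());
        });

        println!("main sees len = {}", v.len());
        child.join().unwrap();
    }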
B
It's not giving you a mutable reference, and so it prevents you at compile time from mutating the data behind this Arc, which again is preventing those data races, preventing those segfaults, preventing these kinds of classical concurrency bugs. But so far all we've managed to do is share memory: we can share some memory between the main thread and a child thread, but that's not so useful if we can't mutate it. This is where mutexes come in. This is similar to a mutex or a lock in other languages.
B
So
mutex
and
rust
is
denoted
with
this
mutex
of
I
32,
this
kind
of
type
parameter
in
here
or
what's
inside,
of
the
mutex,
and
this
is
another
thing
like
with
arc.
We
are
taking
ownership
when
we
create
the
mutex,
which
means
that
we
are
protecting
this
data
inside
of
the
mutex
and
as
the
sole
owner
of
that
piece
of
data.
B
So
the
first
thing-
and
the
only
thing
that
you
can
do
with
the
mutex
is
luck
it,
and
this
is
kind
of
codifying
the
the
necessary
pattern
that
most
people
would
agree
that
if
you
have
data
ba
protected
by
a
mutex,
you
should
only
access
the
data
if
you
actually
acquire
the
mutex
itself.
At
that
point,
if
you
actually
gone
through
that-
and
this
lock
function
is
going
to
return,
what
we
call
a
guard
type
and
this
guard
is
actually
a
proxy
through
which
you
can
actually
internally
access
the
data.
B
So
here
we'll
lock
our
mutex,
no
one
know
the
thread
can
exit
at
this
point,
we've
blocked
and
made
sure
that
no
one
else
is
accessing
this
and
then
through
data.
We
can
now
directly
access
the
underlying
contents
of
the
of
the
integer
here,
and
so
we
can
add
one
to
it.
We
could
update
our
zero
to
one
and
kind
of
do
whatever
we
like
to
the
actual
underlying
data.
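A minimal sketch of the mutex-plus-guard pattern being described, combined with Arc so two threads can reach it (names are illustrative):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0i32)); // the mutex owns the i32
        let counter2 = counter.clone();

        let child = thread::spawn(move || {
            let mut data = counter2.lock().unwrap(); // guard: exclusive access
            *data += 1;                              // mutate through the guard
        });                                          // guard dropped here => unlocked

        child.join().unwrap();
        println!("value = {}", *counter.lock().unwrap());
    }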
B
So
again,
this
is
using
ownership
to
make
sure
that
once
you
put
data
inside
of
a
mutex,
you
have
to
access
it
through
the
mutex
there's
no
outstanding
aliases.
There's
no
outstanding
references,
there's
no
other
way
to
touch
that
data,
and
now
once
it's
inside
there,
you
can
only
touch
it
after
you've,
locked
the
mutex
kind
of
the
only
safe
operation.
Is
you
wait
for
all
threads
dead
to
not
be
touching
the
mutex,
and
then
you
have
access
to
read
it
to
modify
it.
B
You
can
get
up
shared
or
immutable
borrow
from
this
guard
type
and
also,
as
I
was
saying
earlier.
We
have
deterministic
destruction
in
rust,
and
so
this
guard
as
an
own
type.
We
know
that
it's
coming
out
of
scope
here
at
the
end
of
the
function
and
so
naturally
you're
just
going
to
unlock
the
mutex
I'll,
allow
some
other
threads
to
come
in
there
and
actually
modify
it
and
do
whatever
they
like
to
it.
So
this
is
one
we're
very
similar
to
C++
and
rust.
You
will
explicitly
acquire
resources
as
a
goal.
B
Lock,
a
mutex
you'll
allocate
some
memory,
you'll
open
a
file,
but
you
very
rarely
explicitly
deallocate
references
you
just
kind
of
let
them
fall
out
of
scope,
so
the
locks
here
you
just
let
it
fall
out
of
scope.
You
never
deallocate
memory.
You
just
let
it
fall
you
just
let
it
fall
out
of
scope
like
the
vector
we
saw
earlier.
So
the
next
thing
I
want
to
talk
about
is
that
we
not
only
have
mutation
through
mutexes.
B
Those
are
some
little
heavyweights
sometimes
so
we
also
have
Atomics,
or
these
are
very
similar
to
C
11
or
C
tommix
with
the
same
memory
model,
and
the
key
thing
here
is
that
we
have.
We
do
not
declare
this
number
as
mutable.
We're
actually
mutating
this
through
a
shared
reference.
Now
that
sounds
kind
of
bad
in
terms
of
rust
was
all
about
preventing
simultaneous
aliasing
and
mutation.
B
But
the
key
thing
here
is:
this
is
still
not
a
data
race,
in
the
sense
that
one
of
those
ingredients
was
no
ordering,
but
you
can
only
access
these
atomic
variables
with
some
ordering.
This
seek
kissed
here,
this
seq
CSCS
to
use
kind
of
sequentially
consistent,
and
so
once
you
have
Atomics,
you
can
have
atomic
fetch
ads
atomic
swaps,
top
glow
Tomic
store
was
kind
of
all.
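A quick sketch of the atomic-counter idea just mentioned (the slide's exact example may differ):

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // note: not declared `mut` -- mutation happens through a shared reference
        let count = Arc::new(AtomicUsize::new(0));
        let count2 = count.clone();

        let child = thread::spawn(move || {
            count2.fetch_add(1, Ordering::SeqCst); // sequentially consistent ordering
        });
        child.join().unwrap();

        println!("count = {}", count.load(Ordering::SeqCst));
    }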
B
This is a somewhat lighter-weight way to share mutable memory amongst threads than having a mutex all the time. The last thing I want to talk about in the standard library itself is the mpsc channels: the ability to send messages across channels and to pass these owned values between threads.
B
Previously we've been mostly looking at shared-memory concurrency, where you have a big chunk of data, probably managed with an Arc or some other memory-management scheme, and then some sort of interior mutability with atomics or a mutex, all being shared concurrently amongst a bunch of threads. But message passing is also quite useful; sometimes it's a much easier paradigm to work with, you don't have any extra sharing of memory, and it kind of depends on the application at hand.
B
To start out, we call this channel function in the mpsc module, which creates a tx and an rx, standing for transmitter and receiver, for sending messages and receiving messages. With these two halves, we can actually clone the tx half; that's the MP, the multi-producer part. We can't clone the rx; that's the single-consumer aspect of this. So we have all of these pointers that point at the same chunk of memory.
B
They refer to the same channel in memory, and it's all just managed internally. We'll start up two threads that are going to send a five and a four, so we've now isolated one tx in one thread and the other tx in another thread, and then these messages will come to us in some non-deterministic order. Let's say it comes in as a four and then a five. MPSC channels are FIFO, first-in first-out.
B
So this will print out four and then five; whatever order the messages come in is the order they come out on the other side. Once we've freed the tx and the tx2, we know that the channel is now closed, because no one else can possibly send messages into it. So we can iterate over all the messages left here, atomically close the channel, handle all the messages, and free all the resources associated with it.
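A minimal sketch of the mpsc example as described (the values and thread layout are illustrative):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();
        let tx2 = tx.clone(); // multi-producer: the sending half can be cloned

        thread::spawn(move || { tx.send(5).unwrap(); });
        thread::spawn(move || { tx2.send(4).unwrap(); });

        // both senders are eventually dropped, which closes the channel,
        // so this loop ends once every remaining message has been received
        for msg in rx {
            println!("got {}", msg);
        }
    }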
B
So that's an overview of what the standard library gives you: multiple paradigms, shared-memory concurrency and message passing, whatever fits the bill for you. We have a couple of various tools there, but the actual concurrency story in Rust goes far beyond just the standard library; that's just what we give you. The key aspect, again, is that this is all using ownership and borrowing.
B
Everything you've seen here is just building on these fundamental language concepts to give you these concurrency primitives, and this can be done externally in the ecosystem as well. Rayon is a crate, a library as we call it in the Rust ecosystem, which is primarily focused on giving you very easy access to the concurrency available, making it very easy to add parallelism to your program. I could give an entire talk on just Rayon, but as an example of this we'll start out here with a small function.
B
We'd like to run this work in parallel to make use of all those resources, and with Rayon all you have to do is take this one call to iter and switch it to a call to par_iter, which stands for parallel iterator. That's the key aspect of Rayon: very, very lightweight concurrency. So what will happen here?
B
Is
this
little
divided
into
big
chunks,
throw
it
all
at
a
bunch
of
work-stealing
thread,
pool
threads
and
kind
of
do
this
all
on
parallel
machine,
and
then
we've
had
some
very
very
nice
benchmarks
with
with
rayon
in
terms
of
it's
a
very,
very
productive,
confer
concurrency
and
does
a
very
good
job
in
terms
of
slicing.
This
up
the
work-stealing
kind
of
nicely
spreads
out
the
load
and
everything
and
there's
a
whole
lot
more.
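A rough sketch of the iter-to-par_iter change being described, here on a simple sum of squares (the actual function on the slide may differ):

    use rayon::prelude::*;

    fn sum_of_squares(input: &[i64]) -> i64 {
        input.par_iter()      // was: input.iter()
             .map(|&x| x * x) // each chunk runs on Rayon's work-stealing thread pool
             .sum()
    }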
B
You
can
do
with
with
rayon
itself,
but
this
is
kind
of
just
a
taste
of
how
on
the
on
once,
you
can
actually
get
outside
the
standard
library.
We
have
kind
of
further
abstractions
that
are
all
leveraging
these
these
underlying
language
produce
and
what's
in
the
standard
library
as
well.
The
key
thing,
though,
about
rayon,
is
that
what
I
saw
you
is
not
too
too
hard
to
implement,
especially
in
C++.
B
You
can
have
a
library
that
has
all
these
works
doing
aspects,
but
the
the
real
benefit
of
doing
this
in
rust
is
that
you
might
forget
at
some
point
that
this
is
a
parallel
iterator.
You
might
forget
that
this
is
actually
running
across
multiple
threads.
So
you
could
introduce
a
bug
here
by
saying.
Oh,
let
me
just
count
up
every
time.
This
map
this
map
function
happens,
but
obviously
this
is
a
data
race.
This
is
cannot
actually
be
shared
safely
amongst
multiple
threads.
This
is
not
atomically.
B
B
So the key aspect of Rust is that this is rejected at compile time with an error saying that this variable cannot be shared concurrently amongst multiple threads. Again, just like what you saw with the standard library, where you cannot misuse these fundamental primitives, it's the same with the ecosystem: you cannot misuse Rayon. If your code compiles, then you know there are no data races.
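A sketch of the kind of bug being described and why it is rejected (illustrative; the slide's code may differ):

    use rayon::prelude::*;

    fn sum_of_squares(input: &[i64]) -> i64 {
        let mut calls = 0;
        input.par_iter()
             .map(|&x| {
                 calls += 1; // error: this closure runs concurrently on many threads,
                 x * x       // so it may not capture `calls` mutably
             })
             .sum()
    }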
B
They
kind
of
assume
that
there's
some
garbage
collector
don't
take
care
of
that,
and
so
what
crossbeam
is
doing
is
giving
this
technique
of
epoch
based
memory,
reclamation,
which
is
kind
of
like
a
mini
GC,
just
kind
of
localized
to
this
one
crate,
and
so
it's
making
us
making
it
very
easy
to
port
these
libraries
from
other
languages
like
Java
and
kind
of
put
them
into
rust
and
implement
them
there
as
well.
So
inside
of
crossbeam
will
find
these
work
seedling
decks.
B
That
rayon
was
using
will
find
and
the
MCQs
for
multiple
consumers,
which
is
kind
of
extending
the
standard
library
aspect
as
well
and
a
whole
bunch
of
other
things
built
up
on
this
internally
and
so
again
to
kind
of
wrap
up
the
the
library
aspect
of
a
concurrency
in
rust,
though
the
key
thing
here
is
everything
we
just
saw
was
a
hundred
percent
safe.
There's,
no
way
to
misuse
these.
You
cannot
get
a
date
erase.
You
cannot
get
a
segfault,
no
matter
what
you
do.
B
This
is
code,
is
going
to
compile
and
run
as
you
expected,
and
yes,
sorry
so
now,
I
want
to
actually
shift
gears
a
bit
to
talk
about
features
instead.
So
this
is
more
of
not
so
much
the
concurrency
aspect
on
the
machine
with
multiple
cores,
but
more
so
with
asynchronous
I/o
kind
of
having
that
aspect
of
lots
of
high
scale
servers,
lots
of
various
actors,
there
lots
of
TCP
connections
and
all
that
good
stuff.
And
so
again
we
have
the
the
rust
at
rest.
B
The
language
itself
is
the
one
that's
actually
fueling
the
shared
memory,
parallelism
with
ownership
and
borrowing.
We
have
the
ecosystem
with
kind
of
all
these
extra
paradigms
we
solve
mutexes
and
locks
and
arcs
and
channels
and
features,
are
kind
of
the
async
I/o
story
in
rust
kind
of
this
aspect
of
you're
not
necessarily
running
in
parallel,
but
you
have
highly
concurrent
with
tons
of
tons
of
connected
clients
that
you
have
to
be
managing
all
at
one
point
in
time.
B
Http
server,
so
I
get
slashed
and
it
returns
hello
world
back
to
you.
The
y-axis
ears
request
for
a
second,
the
x-axis
is
a
bunch
of
different
frameworks
and
the
one
here
on
the
far
left
that
super-tall
is
actually
features
written
in
rust,
and
so
this
is
what
this
is
showing
is
that
we've
created
servers
with
these
features
that
have
the
utmost
highest
performance
that
we
can
possibly
like
squeeze
out
of
these
servers,
and
so
this
is
something
to
keep
in
mind.
Where
I'll
be
talking
a
lot
about
how
everything
is
features.
B
We
have
features
all
throughout
the
stack.
But
the
key
thing
here
is
that
this
is
not
quite
the
same
as
languages
futures
you'll
find
in
other
languages.
They
end
up
being
not
quite
as
costly,
not
quite
as
a
spensive
and
end
up
giving
us
this
level
of
performance,
and
this
is
kind
of
showing
off
the
the
zero
cost,
abstractions
and
kind
of
the
the
possible
performance
you
can
get
from
us
as
well.
B
So
I
want
to
start
off
by
talking
a
little
bit
about
just
async
I/o
itself,
and
so
the
first
thing
to
talk
about
is
synchronous,
I/o
kind
of
the
contrast
of
async
eyewear
you'll,
eventually
you'll
tell
the
colonel
I.
Have
a
TCP
socket
I'd
like
to
read
some
data
into
this
buffer
and
the
kernel
will
block
your
thread.
B
In
contrast,
though,
what
async
I/o
is
doing,
is
you
ask
the
kernel
I'd
like
you
to
fill
in
this
buffer,
but
it
immediately
tells
you,
oh,
that
would
block
I
can't
actually
do
that
operation
and
you'll
have
to
go
and
figure
out
when
to
do
that
later,
and
so
this
is
ends
up
being
much
much
more
difficult
to
actually
work
with
we're.
Now
we
know
that
no
I/o
ever
blocks,
but
we're
gonna
have
to
somehow
dispatch
these
events.
B
Otherwise,
where
we'll
have
some
interface,
the
kernels
saying:
okay
well
I'd
like
to
block
my
thread
now,
because
I
have
nothing
else
to
do
and
then
eventually
the
kernel
will
tell
you
okay.
Well,
while
you
were
waiting
I
had
these
thirty
events
come
in
and
I
have
cat
five
is
readable:
sly
cat
six
is
writable
you
can.
These
bytes
have
been
transferred
kind
of
all
that
good
stuff,
but
then
you,
as
a
user,
are
now
responsible
for
actually
figuring
out.
B
Where
do
I
put
all
these
events
and
how
do
I
actually
execute
that
and
what
is
it?
What
does
this
actually
translate
to
in
terms
of
executing
my
code?
So,
for
example,
you
have
this
kind
of
high-level
request.
You
just
want
to
fetch
the
contents,
the
rustling
homepage,
but
this
is
actually
quite
involved
in
terms
of
what's
happening
here.
You're,
not
only
opening
a
TCP
socket
but
you're
doing
name
resolution
you
might
have
TLS
with
encryption.
B
You
might
have
some
sort
of
compression
here,
you're
decoding
HTTP,
you
kind
of
doing
all
this
internally,
but
all
the
kernel
gives
you
is
this
all:
okay,
descriptor
five
is
ready.
Now
it's
up
to
you
to
figure
out
how
to
do
that,
and
so
this
is
where
previously
kind
of
working
with
asynchronous
I/o.
It
tends
to
be
very,
very
difficult,
very,
very
difficult
to
compose.
B
But
the
key
part
about
a
futures
is
kind
of
like
an
object
or
an
object-oriented
aspect.
To
this,
where
it
internally
is
capturing
everything
necessary
to
actually
compute
that
future.
So
you
know
that
you
have
a
future
of
a
string
or
a
list
of
bytes,
but
you
have
no
idea
how
it's
being
computer,
that's
kind
of
abstract
from
you
at
that
point
internally.
B
It
knows
how
it's
being
produced,
how
it's
being
executed
asynchronously,
but
you
as
the
consumer
just
know
that
at
some
point,
you're
going
to
get
a
list
of
bytes
and
you're
you're
gonna
get
a
string,
and
this
primarily
allows
us
to
start
actually
doing
this
composition
that
we
wanted
to
do
a
future.
For
example,
we
can
say
when
that's
done,
I
like
to
run
this
I'd
like
to
sequence,
some
computations,
just
kind
of
run
things
one
after
another.
B
We
can
also
say
I'd
like
to
execute
these
two
things
in
parallel
and
wait
for
them
to
wait
for
them
to
both
finish
or
maybe
I'd
like
to
wait
for
one
and
not
the
other,
and
so
this
is
very
difficult
to
do.
Sometimes,
where
I
have
this
high-level
concept,
where
I
want
to
fetch
some
home
page
like
wrestling
org
but
I
want
to
give
it
a
time
out.
B
I
just
want
to
very
quickly
throw
on
some
timer
on
the
on
the
side
of
that
and
what
features
is
doing
is
giving
us
this
level
of
composition,
giving
us
this
kind
of
interface,
where
it's
actually
almost
trivial
to
do
those
high-level
operations,
and
so
this
means
that,
if
you
actually
come
and
say,
I
would
like
the
wrestling
homepage.
Instead
of
kind
of
these
weird
arcane
things,
what
we're
getting
out,
we
get
all
right.
Here's
a
feature
of
a
list
of
bytes
we're
internally.
This
is
doing
the
DNS,
the
name
resolution,
the
TCP
connections.
B
They
encryption
kind
of
everything
internally
here
is
now
captured
inside
of
that
one
feature,
and
you
can
now
kind
of
just
sequence:
extra
data
onto
this
thing
and
say:
okay.
Well
now
that
I
have
this
object,
that
represents
this
computation.
I
can
continue,
interposing,
that
or
composing
that,
with
with
other
operations,.
B
All
right,
so
this
is
a
bit
of
an
this-
is
an
example
of
what
it
actually
looks
like
with
using
features
with
out
I/o
kind
of,
but
before
we've
actually
touched,
TCP
or
DNS,
or
anything
like
that.
So
first
thing
we'll
have
here
is
we'll
just
have
some
sort
of
thread
pool
and
we'll
spawn
some
computation
onto
that
or
we're
going
to
say,
disco
cocktail
hundred
Fibonacci
number.
B
This
result,
though,
is
an
actual
future
to
an
integer,
so
this
result
is
kind
of
representing
that
an
integer
will
eventually
come
here,
but
it
internally
it's
just
then
concurrently
and
then
that'll
get
resolved
once
that
computation
was
actually
finished
on
that
remote
thread
and
the
meantime,
though,
this
immediately
returns,
so
the
number
is
actually
being
computed
on
some
remote
thread.
So
we
can
do
some
other
aspects.
B
We
can
get
a
coffee
or
do
or
do
whatever,
but
then
eventually
we
can
actually
come
down
and
say
I'd
like
to
wait
on
this
I'd
like
to
actually
block
and
say,
please
give
me
the
value
inside
of
this
feature
and
then
we'll
do
the
necessary
synchronization
to
say.
Okay,
now
I'm
going
to
wait
for
that
thread
to
finish
and
or
if
it's
our
there's
good
peel
it
out
for
you
and
then
once
you
actually
have
it.
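A rough sketch of that example in the futures 0.1 / futures-cpupool style of the time (crate and method names are from that era, and the Fibonacci helper is illustrative):

    extern crate futures;
    extern crate futures_cpupool;

    use futures::Future;
    use futures_cpupool::CpuPool;

    fn fib(n: u64) -> u64 {
        // iterative Fibonacci; wrapping keeps the sketch simple for large n
        (0..n).fold((0u64, 1u64), |(a, b), _| (b, a.wrapping_add(b))).0
    }

    fn main() {
        let pool = CpuPool::new_num_cpus();
        // `result` is a future of an integer; the work runs on the pool
        let result = pool.spawn_fn(|| Ok::<u64, ()>(fib(100)));
        // ...go get a coffee, do other work...
        let value = result.wait().unwrap(); // block until the future resolves
        println!("fib(100) = {}", value);
    }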
B
Once you actually have it, this result is now an integer, and you can start working with it and doing whatever you like with it. The interesting part now is how we actually deal with futures and I/O. We have all these TCP objects, DNS and so on, that we want to work with, and we also want to package them up in futures. In Rust this is done with a library called Tokio, which pulls together two existing libraries in Rust: mio, and the futures crate itself.
B
So
Tokyo
is
kind
of
this
package,
giving
you
fuel
and
giving
you
all
these
I/o
primitives
to
build
up
these
features
and
kind
of
have
that
all
be
an
internal
logic
to
implement
those
features
themselves,
because
it's
using
me
it
works
across
all
major
platforms
doesn't
have
to
worry
about
iocp
versus
eople
or
anything
like
that,
and
this
is
rusts
implementation
of
an
event
loop,
which
is
actually
blocking
the
thread.
What's
dispatching
all
these
events
and
kind
of,
what's
doing
all
that
internally,
the
futures
create
today
has
a
number
of
various
abstractions
inside
of
it.
B
So
we
have
a
future
which
represents
kind
of
one
value
becoming
available.
We
have
a
stream
trait,
which
means
that
multiple
values
are
coming
over
time.
This
is
similar
to
kind
of
our
excuse
or
kind
of
reactive
programming
in
Java
or
JavaScript.
In
a
sense
streams
are
very
pull
based,
so
we
have
sinks
which
is
the
dual
push
base.
You
can
push
data
into
them,
but
the
key
thing
here
is
that
futures
gives
you
a
nice
toolkit
when
you're
working
with
it
as
well.
So
you
have
these
one-shot
channels
for
I.
B
Just
have
some
computation
on
another
thread
and
I
want
to
make
a
future
to
complete
that
and
push
it
over.
Here
we
have
channels
as
well,
so
we
can
have
a
stream
of
values
being
produced
over
time
and
then
rayon
like
I
was
showing
earlier.
Not
only
gives
you
data
concurrency,
but
also
gives
you
these
future
aspects
as
well,
where
you
can
spawn
some
work
onto
a
thread,
pool
and
say
I'd
like
a
future.
B
The
result,
you
can
start
composing
that
and
doing
all
that
internally,
so
with
features
you'll
find
kind
of
a
nice
library,
integration
and
kind
of
a
nice
set
of
tools
to
immediately
get
off
the
ground
running.
Another
important
aspect
of
features
which
you,
those
of
you
coming
from
JavaScript
or
C,
sharp
know
or
Python.
That
was
absolutely
vital
to
working
with,
with
rust
or
working
with
features
which
is
async,
syntax,
a
single
wave.
So
in
rust
we
have
this
async
attribute
saying
this
function
is
actually
returning
a
future.
B
It's
not
returning
a
result,
but
it's
kind
of
internally
being
transformed
to
a
state
machine
that
is
going
to
actually
compile
down
and
kind
of
into
one
nice
feature
being
returned
internally.
We
can
use
this
a
weight
macro,
which
is
saying
that
I
would
like
to
block
on
the
value
of
this
feature,
but
not
actually
block
the
thread.
Just
block
my
own
personal
feature
itself
and
kind
of
men
manages
all
the
concurrency
there
for
you.
We
have
early
returns
where
you
can
just
immediately
return
from
a
function.
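The attribute-and-macro form described here was an early, experimental version of what later stabilized as Rust's built-in async/await; a sketch of the same idea in the stabilized syntax (using the modern tokio crate, so not the API of this talk):

    use std::time::Duration;

    // an async fn is transformed into a state machine returning a future
    async fn delayed_sum(a: u32, b: u32) -> u32 {
        // `.await` suspends this task without blocking the OS thread
        tokio::time::sleep(Duration::from_millis(10)).await;
        a + b
    }

    #[tokio::main]
    async fn main() {
        let result = delayed_sum(1, 2).await;
        println!("result = {}", result);
    }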
B
And
so
this
is
a
nice
aspect
of
making
this
much
more
approachable
and
much
easier
to
read,
much
easy
to
write
and
maintain
over
time.
The
Tokyo
crate
that
I've
been
talking
about
has
a
number
of
primitives
as
well.
It
not
only
has
it
has
a
some
organization
internally
and
then
but
effectively.
What's
what
he
gives
you
is
kind
of
all
these
bear
print
that
you
would
expect
of
tcp/udp
named
pipes,
processes,
signals
and
a
number
of
protocols
as
well,
such
as
HTTP
HTTP
to
WebSockets,
and
all
that.
B
The key thing here is that you have a nice package to get off the ground pretty quickly, and we've seen this deployed in production at a couple of companies, used internally to a lot of benefit. That's mostly what I wanted to say about Tokio and futures at a high level, but now I want to switch gears a bit to how we actually implement these futures and how this all actually happens under the hood.
B
Here I want to try to build up the concept of a future from scratch in Rust. I was showing you earlier that very, very tall bar showing that this entire stack is indeed quite fast, but I want to give you a deep dive into how futures work, how they ended up being designed, and why we managed to avoid some of the classic pitfalls of other languages. So to start off, the first thing I might try is making a struct.
B
This is just a struct Future with some generic type parameter that we're going to get out of it. But the key thing here is that once we've said struct, which is roughly like a class in C++, this is the one implementation of futures that we'll ever have, and that's not something we can actually commit to up front. This might be a thread-safe implementation, and we might not need thread safety.
B
This might be an implementation that allocates memory, but I might not even need to allocate memory. So having just one implementation of a future is actually not going to cut it. We need some extra flexibility here, flexibility to say "oh well, I know how to implement a future for my very, very specialized scenario." So instead we're going to use a trait.
B
We
just
have
to
say
what's
available,
we
have
to
actually
fill
out
this
treat
or
first
thing
we'll
have
is
just
some
item.
There
saying
this
is
the
actual
type
that
we
have
that
we're
actually
going
to
resolve
to.
But
then
what
are
we
actually
going
to
put
as
a
method
here
in
terms
of
how
we
act
implement
this
future
and
you
think
about
it
a
feature.
B
What
I
was
saying
is
a
sentinel
for
a
value
being
computed
at
some
point
later
in
time,
and
so
the
first
thing
we
tend
to
think
of-
and
actually
this
is
how
its
implemented
in
most
languages
is
to
have
some
sort
of
callback
based
solution
where
this
is
what
this
function
is
saying
is
I'll
have
a
function
called
schedule,
it'll
take
the
receiver
of
the
actual
future
itself
and
then
a
callback.
That's
this
FN
once
business.
Well,
the
callback
receives
this
T,
which
is
the
actual
item
being
produced
in
the
future.
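A sketch of the callback-based trait shape being described (bounds and error handling elided):

    // "When you're done, run this callback with the produced item."
    trait Future {
        type Item;

        // by-value `self` plus a generic `F` means the callback type is baked in
        // at each call site -- no type erasure, no virtual dispatch
        fn schedule<F>(self, f: F)
        where
            F: FnOnce(Self::Item);
    }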
B
So
here
we're
basically
saying
when
the
future
is
done,
run
this
callback.
The
problem
with
this,
though,
is
we
have
this
kind
of
bracket
F
in
this
pair
self,
which
I'm
not
gonna,
go
into
too
many
details
here,
but
it
basically
means
we
can't
do
virtual
dispatch.
We
cannot
erase
the
type.
So
we
have
to
always
kind
of
use
this
as
a
bare
value
and
there's
nothing
to
really
abstract
over
multiple
different
kinds
of
features,
and
so
sometimes
it's
not
always
the
or
you
don't
always
need
virtual
dispatch.
B
But
I
want
to
make
a
brief
digression
to
kind
of
explain
why
virtual
dispatch
is
so
important
here
in
the
Contin,
the
context
of
futures.
So
this
is
an
example
function.
Where
we'll
just
say
we
have
some
computation,
we're
gonna
cache.
We
have
some
key
and
then,
if
our
cache
has
it,
we're
gonna
immediately
return
that
saying
that
we
are
now
done
with
that.
But
if
it's
not
in
the
cache,
then
we'll
go
in
compute,
it's
very,
very
slowly
and
then
it'll
actually
fill
in
the
cache
later
and
we're
turning
some
different
feature.
B
But
the
problem
is
that
this
doesn't
actually
compile.
We
have
one
branch
of
the
if
statement
returning
one
type
of
feature,
and
now
we
have
the
other
branch
of
the
estate
'men
training,
a
different
type
of
future,
and
so
in
rusty
o2
have
a
well
type
program.
You
have
to
return
the
same
type
and
all
branches,
and
so
the
way
to
solve
this,
we
might
think
is
okay.
B
But
the
problem
is,
we
might
actually
add
some
extra
code
here
might
have
some
more
ifs,
who
might
adds
a
whole
bunch
of
ifs
there
and
it's
kind
of
unclear
how
scalable
this
kind
of
adding
an
enum
solution
is
going
to
be
kind
of
adding
the
static
dispatch
aspect.
Where
do
we
get
two?
We
get
to
zze
it's
kind
of
unclear
where
this
stops,
and
so
what
we
really
need
here
and
kind
of
what
we
really
want.
Is
this
at
this
notion
of
virtual
dispatch
we're
in
Rus?
This
is
done
with
this
box
aspect.
B
With Box we have erased the type; we no longer know what's actually underneath there, but we know that we can call it through virtual dispatch and go use the future itself. So this is an example of how important virtual dispatch is when working with futures, why we need to enable it in the trait itself, and why we need to cater to this use case.
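A sketch of the caching example and the Box fix, in the futures 0.1 style of the time (the cache type and the slow computation are illustrative):

    extern crate futures;

    use futures::future::{self, Future};
    use std::collections::HashMap;

    fn get(cache: &HashMap<u32, String>, key: u32) -> Box<Future<Item = String, Error = ()>> {
        if let Some(hit) = cache.get(&key) {
            // one branch produces one future type...
            Box::new(future::ok(hit.clone()))
        } else {
            // ...the other produces a different one; boxing erases both to the
            // same Box<Future<...>> so the function type-checks
            Box::new(future::lazy(|| Ok(compute_slowly())))
        }
    }

    fn compute_slowly() -> String {
        "expensive result".to_string()
    }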
B
We
can
tweak
it
a
little
bit
saying:
okay.
Well,
let's
remove
this
little
F.
That's
remove
this
bear
self,
which
I'm
not
gonna,
go
into
details
of
why
it's
not
safe
for
virtual
dispatch
but
sufficient
to
say
that
this
aspect
is
where
we
have
this
ampersand
mute
and
we
have
this
box,
which
kind
of
is
a
allocated
closure
on
the
heap.
Well,
it's
that's
a
very
important
aspect.
We're
now.
Instead
of
having
just
kind
of
a
bear
closure
in
memory,
we
now
we
only
are
compatible
with
closures
allocated
on
the
heap.
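A sketch of that tweaked, virtually-dispatchable shape (again schematic rather than the exact shipped API):

    trait Future {
        type Item;

        // &mut self and a boxed callback make the trait object-safe,
        // at the cost of heap-allocating every callback
        fn schedule(&mut self, f: Box<FnOnce(Self::Item)>);
    }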
B
Now,
that's
a
not
immediately
obvious
as
to
why
it
might
not
be
desirable.
So
what
I
want
to
talk
about
here
is
make
another
digression
of
how
we
expect
to
see
servers
built
with
futures
and
kind
of
how
we
expect
futures
to
be
used
in
the
ecosystem,
and
so
the
idea
here
is
very
similar
to
finagle
and
that
in
Scala,
that
kind
of
Twitter
has
produced,
or
the
idea
is
that
every
server
is
a
function
from
a
request
to
a
future
of
a
response.
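A schematic sketch of that "server as an async function" idea, with placeholder Request and Response types (this mirrors the shape of the tokio-service trait of the time rather than reproducing it exactly):

    extern crate futures;

    use futures::Future;

    struct Request;
    struct Response;

    // every server is a function from a request to a future of a response
    trait Service {
        fn call(&self, req: Request) -> Box<Future<Item = Response, Error = ()>>;
    }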
B
Kind
of
this
asynchronous
function
here
and
all
of
your
logic
is
gonna
go
internally,
and
so
you
might
have
a
request.
That's
kind
of
you
receive
a
request,
and
after
that
you
load
some
information
from
a
database.
You'll
do
some
are
pcs,
you
do
some
more
database
requests
and,
finally,
you
will
render
a
response
internally
what's
happening
here.
Is
that
a
bunch
of
these
aspects?
B
Each
of
these
states
are
features,
so
loading
from
a
database
will
take
some
time,
so
that
has
to
be
a
future
and
RPC
take
some
time,
so
it
has
to
be
a
future,
and
so
we'll
take
a
look
at
this
from
kind
of
a
state
machine
diagram,
a
kind
of
a
state
transition
diagram.
Well,
we'll
start
with
a
get
slash.
Then
we'll
do
some
sequel
queries.
B
Some
extra
are
PCs
and
whatnot
internally
what's
happening
is
we
were
boxing
all
this
up,
literally
boxing
on
the
heat
but
kind
of
like
putting
this
wrap
around
this
saying
this
is
our
future
that
we
are
returning
kind
of
our
server
is
entirely
representing
a
kind
of
internal
processing
is
represented
by
all
of
these
happening
together
so
internally,
in
the
one
feature
that
we're
returning
is
kind
of.
This
is
how
it's
working
internally.
Okay,
this
is
how
it's
actually
being
executed
now
we're
to
actually
implement
this
what's
happening.
B
Is
we're
using
this
schedule
function
saying
that
when
a
future
is
done,
we're
going
to
execute
some
more
data
later
on?
So,
for
example,
once
we
have
the
original
requests
we're
going
to
issue
the
database
loaded,
we're
gonna
say
scheduled
when
the
database
is
finished,
I
want
to
use
that
to
start
actually
issuing
the
RPC.
B
Now,
when
the
RPC
is
finished,
I'd
like
to
schedule
again
to
go,
move
to
the
next
state
and
so
on
and
so
forth,
and
the
key
thing
here
is
that,
between
all
these
state
transitions
is
where
we're
executing
the
schedule
function,
kind
of
the
schedule
primitive
is
being
used
to
transition
between
states
of
the
future.
In
this
case,
we
have
five
states,
but
internally,
as
we
kind
of
have
even
more
states,
possibly
so
our
server
itself
is
kind
of
one
giant
feature
that
we're
returning
internally.
B
That
has
many
many
features
inside
of
it
and
then
internally
of
those
and
kind
of
externally
as
well.
We
have
tons
and
tons
of
state
transitions,
which
kind
of
means
very
very
quickly.
The
number
of
state
transitions
that
we
are
accumulating
is
very,
very
large,
and
the
key
thing
here
is
the
schedule
function.
We
have
that
box
parameter,
which
means
that
every
single
state
transition
is
now
an
allocation.
We
have
to
allocate
some
callback
to
say
that
to
progress
between
states.
B
This
is
how
we
actually
execute
that,
and
so
overall
this
ends
up
being
very,
very
costly
and
kind
of
having
a
not
quite
the
runtime
performance
we
will
we
would
like,
and
so
originally
when
we
had
that
very
tall
graph
feels
much
much
smaller
when
we
were
kind
of
doing
all
this
allocation
and
internally,
and
so
that
was
kind
of
the
primary
thing
we
wanted
to
solve.
But
there's
this
other
threading
related
aspect,
which
is
a
little
bit
more
subtle
and
not
always
readily
apparent.
When
you
talk
about
futures
and
have
you
have
these
callbacks?
B
These callbacks may run concurrently on multiple threads at a time, and this is very important for futures in the sense that we don't actually know how we're going to execute a future where the result is being computed on one thread and I'm consuming it on a different thread. We can only do that if we add in a Send bound here, saying this closure can be sent across threads, but then that incurs even more cost. What if we didn't need that?
B
What
if
we
only
had
one
thread
by
requiring
this
one
closure
to
always
be
sending
all
across
threads?
Now
we're
in
this
conundrum,
where
now
we
have
extra
synchronization,
we
otherwise
wouldn't
need
it,
and
so
dealing
with
this
ends
up
being
very,
very
difficult
in
terms
of
how
we
make
features
thread
safe.
How
we
make
them
appropriately
non
thread
safe
in
a
sense
for
this
single
threaded
scenarios
to
not
have
too
much
overhead
and
too
many
costs
associated
with
them,
and
so
for
all
these
reasons.
B
This
is
why
we
ended
up
not
actually
going
along
with
callbacks,
not
only
are
they're
far
too
slow,
but
they
also
have
these
drawbacks
of
with
the
threading
scenario.
We
have
these
issues
where
they're
too
fast,
they're,
too
slow,
or
they
just.
We
can
never
strike
the
right
balance
there.
So
we're
back
to
the
drawing
board-
and
it's
helpful
now
at
this
point
to
actually
enumerate
what
we've
gone
over
so
far.
We're
first
thing
we
saw
is
features
have
to
be
a
treat.
We
have
to
say
that
there
are
many
implementations
of
a
future.
B
We
can't
suffice
by
saying
there's
only
one
and
everyone's
got
to
use
it.
Similarly,
though,
once
we
have
a
tree,
we
kind
of
have
this
interface.
It
must
support
virtual
dispatch.
It
must
support
the
ability
to
erase
the
types
to
some
point
where
you
have
no
idea.
What's
underneath
that
box.
What's
underneath
that
feature
it's
kind
of
anything
internally,
but
similarly,
we've
noticed
that
they're
the
kind
of
the
way
we
envision
building
these
features.
B
We can zoom out a little bit further and say that there's actually more happening in the server than what's happening internally here: we're reading bytes off a TCP connection, we're decrypting, we're decoding HTTP, and then doing the opposite on the other end. All of these aspects are happening in addition to what we run inside of our server itself. But then this is where it ends; this is, for example, one connected TCP client.
B
We can zoom out even a little farther and say that the whole server actually has tons of tasks executing concurrently, for many connected clients, for many various aspects of the server. So we've seen that a task, as we're calling it, is composed internally of many, many futures, whether I/O-based futures or the server futures that you've written yourself. The key thing, though, is that all the futures manufactured and destroyed over time, over the lifetime of that one TCP connection, belong together.
B
Of
that
one
TCP
connection
was
all
connected
to
the
same
task.
This
task
we're
gonna,
want
to
be
the
actual
unit
of
concurrency
kind
of
like
a
green
thread
in
a
sense,
but
not
in
the
same
implementation
aspect,
just
kind
of
in
the
semantic
aspect.
If
we
have
these
kind
of
lightweight
threads
lightweight
units
that
are
being
executed
concurrently
and
especially
with
async/await
syntax
etics
kind
of
look
similar
issue,
but
what
I'll
do
is
stack
so
anything
like
that.
So
we'll
come
back
here
to
say
all
right.
B
Well,
our
callback
based
solution
on
this
trait
didn't
work.
So
how
are
we
actually
going
to
do
this
and
it
turns
out
the
next
thing
we
might
think
of
is
okay.
Well,
instead
of
saying,
when
you're
done
do
this,
what
if
I
ask
you
just?
Are
you
done
yet
say?
Have
this
function
called
pol,
where
this
says
we
still
have
a
trait,
so
it's
still
nice
and
we
can
implement
it
for
a
bunch
of
types
and
then
this
signature,
all
just
suffice
to
say
it
does
support
virtual
dispatch.
B
So
we
don't
have
to
worry
about
that.
We
don't
have
to
put
that
type
parameter
graph
or
anything.
But
then
the
key
thing
here
is
happening.
Is
the
actual
protocol
around
what's
actually
being
returned,
so
there's
this
option
type
that
was
returned.
It
can
represent
either
none
with
nothing
else
or
some
with
one
particular
value
like
that
item
in
the
future.
So
we'll
say
that
a
none
value
says
that
a
future
is
not
ready,
yet
we're
not
ready
to
make
progress,
and
so
you're
gonna
have
to
come
back
at
a
later
date.
B
If you receive Some, then okay, we're now a resolved future, here's the item, and you can't use the future anymore. The key thing is that if we see None, we need to know when to come back and try again. We know this future is not ready yet, but we need to know precisely when it will be ready, because we don't want to just sit there polling in a loop; that's just a bunch of busy waiting and is not going to make an efficient server.
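As a sketch, the simplified trait being described looks roughly like this (the real futures 0.1 crate returns `Poll<Self::Item, Self::Error>` rather than an `Option`, but the contract is the same):

```rust
// Simplified sketch of the futures trait from the talk, not the exact
// futures 0.1 API: None means "not ready, come back later", Some(item)
// means "resolved, here is the item, don't poll me again".
trait Future {
    type Item;

    // "Are you done yet?" -- object-safe, so Box<Future<Item = T>> still
    // gives us virtual dispatch and type erasure.
    fn poll(&mut self) -> Option<Self::Item>;
}
```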
B
So we ended up building this kind of layered protocol, this kind of implicit contract around the poll function. We know that futures are owned by tasks, and we know that this one task is persistent for the lifetime of everything in this one unit. So what we can say is that the task is the one that needs to realize when the future is ready to go; the task is then going to accommodate that, it's actually going to poll it and continue to make progress.
B
B
Building this up gives us really quite efficient futures, but to show you an example, I want to dive into how we would implement the poll function for a simple future like a timeout. This is a future that's just going to wait for some period of time; it's kind of like a sleep, but an asynchronous sleep: it doesn't literally block the thread, it just takes some time to actually resolve. We'll say there's two fields here, the first of which is just when our timeout is going to fire.
B
This Instant is just a point in time: before it we will not be resolved, and after it we will have become resolved, and you can then poll a unit value out of this future. The Timer is just some fictitious library support; we're not going to worry so much about it here, but it basically gives us the ability to run something at a later date.
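A sketch of that struct, where `Timer` stands in for the fictitious library support (assume it exposes something like `run_at(instant, closure)`):

```rust
use std::time::Instant;

// The timeout future from the talk: not resolved before `at`, resolved after.
struct Timeout {
    at: Instant,   // when this timeout fires
    timer: Timer,  // fictitious handle that runs a closure at a later instant
}
```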
B
So we can just say: I would like to run this piece of code at this particular point in time, and you can assume this is managed internally, maybe with some separate thread, but efficiently implemented. We'll start off by implementing the Future trait for our Timeout type. The Item type here is just the unit type; we're not actually going to pull any data out of this future.
B
It's just going to become resolved; it's just going to say "I am resolved", and we won't get any data associated with that. We'll start filling in this implementation by saying: okay, if the timeout has elapsed, if we are actually done at this point, then we will return Some, saying the future has resolved, we've waited an appropriate amount of time, we're ready to go. But the trickiness comes in how we actually implement the other case.
B
If we're not ready yet, we need to somehow return None, and remember, we have to make sure the current task is set up to receive a notification when we otherwise would become ready. We'll do that with this task::current and this notify function: we start off by pulling out the current task using the futures crate, and then we'll use our timer library support to say that when the timeout fires, we're going to send a notification saying that now we are ready.
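Putting that together, a sketch of the poll implementation (using the simplified `Option`-returning trait from above; `futures::task::current()` and `Task::notify()` are real futures 0.1 calls, while `Timer::run_at` is the assumed helper):

```rust
impl Future for Timeout {
    type Item = ();  // nothing to pull out; we only become resolved

    fn poll(&mut self) -> Option<()> {
        if Instant::now() >= self.at {
            // The timeout has elapsed, so the future is resolved.
            return Some(());
        }
        // Not ready yet: grab a handle to the task currently polling us and
        // arrange for it to be notified once the deadline passes, then
        // return None so the task knows to come back later.
        let task = futures::task::current();
        self.timer.run_at(self.at, move || task.notify());
        None
    }
}
```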
B
So, taking a look back at the constraints we set out for ourselves: the first two things we said were that this has to be a trait, and this has to be virtually dispatchable. Naturally, what we've converged on at this point is indeed a trait and does indeed support virtual dispatch, so those two constraints are nicely solved. Now, we've also been saying, though, that this needs to be very cheap.
B
This needs to have very cheap state transitions, very cheap transitions between all the futures here, and that was primarily driven by these tasks, where every state transition is governed by notifying that task, by figuring out how to manage that task itself. It turns out that acquiring the current task is actually very cheap to do; it tends to just bump a reference count, incrementing the reference count of an Arc. A notification is very similar as well.
B
It's just enqueueing a piece of data, putting it on some separate thread to execute for a while. So these two operations are indeed very cheap. We're leveraging that persistence, the fact that the task is always tracking that future at any one point in time, and so sure enough, the state transitions here are now an order of magnitude cheaper than they were before, as we could see with the giant graph earlier.
B
The final thing we wanted to solve here was the thread-safety aspect, where we wanted it to be thread safe when necessary but not incur any cost when thread safety wasn't needed. We can do that by saying that the tasks themselves, which are routing all those notifications, are indeed sendable across threads. That's the only requirement; the futures themselves never need to be Send, they can just receive notifications from other remote threads. So indeed we have solved these constraints.
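A small hedged illustration of that split: the futures 0.1 `Task` handle is `Send`, so it can be shipped to another thread and notified from there, while the future itself never has to leave its own thread.

```rust
// Only the Task handle crosses threads; the future that grabbed it (which
// may even hold non-Send data like an Rc) stays where it is.
fn notify_later() {
    let task = futures::task::current(); // obtained while being polled
    std::thread::spawn(move || {
        // ... some work happens over here ...
        task.notify(); // wake the original task from another thread
    });
}
```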
B
All right, that's kind of a brief-ish overview of how futures in Rust work, how they're implemented underneath the hood. But the last thing I want to talk about is how we're actually routing these notifications into the futures themselves: how Tokio is implemented, in the sense that we have
B
this epoll system, and we have all these futures with all these task notifications, and we need to match them up to make sure the program is actually making progress. In Tokio, literally everything is a future from top to bottom; you'll find everything in the stack is a future, from reading TCP bytes all the way to writing TCP bytes. Everything there is powered by futures all the way up and down, which we've enabled by making these state transitions, these common operations between futures, so cheap. We don't have the typical overhead
B
you see from futures in other languages, and so we can leverage this abstraction across the entire stack. Futures, however, will at some point have to say: okay, I need to actually wait for some I/O, I need to wait for some TCP socket, and Tokio is going to figure out how to route all that internally.
B
You can't do anything at that point, but just like with the future trait, where you poll and are told you're not ready yet, there's an implicit contract here as well: if you tried to read some data and you're not ready yet, you've implicitly registered your current task to be notified once data does arrive. So it's very similar to the poll protocol, only it's I/O based as well, hooking into the I/O system.
B
So an example of how you implement this, roughly what's under the hood in the Tokio crates themselves: first up, we check to see whether we're readable. If we're not readable, then we can just return immediately; again, that return is internally registering interest. If we are readable, we'll actually do the read, and once we've tried to read some data, if the kernel says it's not readable after all, then we're going to say: okay, let's actually block our task on this, and say that we need to
B
wait, or rather, because we're returning "not ready" as part of our protocol, we need to actually register ourselves to receive a notification at a later date. This basically means that everything now lines up: this original readiness event the kernel was giving us, which on its own we had no idea what to do with, Tokio now knows exactly how to match up to a task, because that previous task read from file descriptor four or five and was told it was not readable, and so
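A rough sketch of that pattern (method names like `poll_read` and `need_read` are paraphrased from tokio-core's `PollEvented`, and `AsyncTcpStream` is a stand-in type, not exact Tokio API):

```rust
use std::io::{self, Read};
use futures::Async;

impl AsyncTcpStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // 1. If the event loop hasn't flagged us readable, don't ask the
        //    kernel at all; returning here leaves our task's interest
        //    registered so we get woken up later.
        if let Async::NotReady = self.poll_read() {
            return Err(io::ErrorKind::WouldBlock.into());
        }
        // 2. We believe we're readable, so do the non-blocking read.
        match self.socket.read(buf) {
            // 3. The kernel says "not readable after all": block the current
            //    task on this file descriptor by re-registering interest.
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                self.need_read();
                Err(io::ErrorKind::WouldBlock.into())
            }
            other => other,
        }
    }
}
```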
B
now it knows precisely what to do given these notifications from the kernel, and how to route all that internally. So to wrap all this up, Tokio's event loop is effectively the thing responsible for actually blocking your thread. The Tokio event loop will wait for all those events, dispatch all of them internally, and make progress on all the various futures and whatnot. Overall, it's kind of a glorified translator from kernel notifications to task notifications in the futures system.
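In spirit (this is not Tokio's real code; the names here are made up for illustration), the event loop is doing something like:

```rust
use std::collections::HashMap;
use std::io;

// Block on the OS poller, then turn each kernel readiness event back into a
// task notification so the right futures get polled again.
fn run(poll: &mio::Poll,
       events: &mut mio::Events,
       tasks: &HashMap<mio::Token, futures::task::Task>) -> io::Result<()> {
    loop {
        // Park this thread until the kernel reports readiness events.
        poll.poll(events, None)?;
        for event in events.iter() {
            // The token maps back to whichever task registered interest in
            // that file descriptor; notifying it schedules another poll.
            if let Some(task) = tasks.get(&event.token()) {
                task.notify();
            }
        }
    }
}
```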
B
B
The book is probably the best place to get started if you're brand new to Rust, though we also have a users forum just for asking questions and whatnot, and if you're more curious about the async I/O stack, you might want to start there; that's kind of the current library for futures. Otherwise, thank you so much for coming. Thank
A
you very much. People, can you do some woo-hooing? Do you know what woo-hooing is? It's like: woohoo. Okay, give me a good one on three: one, two, three. Thank you, that was very nice. Now I'm going to ask you three questions, Alex, if you're okay with that; we've got some time, and these are from our website. Question number one is very short: can you use two mutexes to achieve deadlock? Yes.
B
So Rust guarantees you data-race freedom; it does not guarantee you deadlock or race-condition freedom. In Rust you still have race conditions, you still have deadlocks. We actually attempted to have deadlock freedom, but it did not work out, and suffice to say we don't do any extra static analysis there. Once you have two mutexes you can deadlock, with two channels you can deadlock, and you can have race conditions where you've ordered your condition variable incorrectly and you block forever. That's also possible in Rust.
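A minimal example of the kind of deadlock that still compiles fine:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a2, b2) = (a.clone(), b.clone());
    let t = thread::spawn(move || {
        let _ga = a2.lock().unwrap();           // lock A first...
        thread::sleep(Duration::from_millis(50));
        let _gb = b2.lock().unwrap();           // ...then wait forever for B
    });

    let _gb = b.lock().unwrap();                // lock B first...
    thread::sleep(Duration::from_millis(50));
    let _ga = a.lock().unwrap();                // ...then wait forever for A

    t.join().unwrap();                          // never reached: deadlock
}
```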
A
B
On that question: actually, I forgot to mention this, which is that when you use the std::thread module and actually create a thread, it's an OS thread. We do nothing else there; you're going straight to the system, you're calling pthread_create, and the only extra cost is that we need to box up the closure and put it on the heap to actually transfer it to the new thread. Otherwise, it's the exact same thing as an OS thread. So however cheap your OS thread is compared to a coroutine, it's exactly that.
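For example, this spawns a plain OS thread; the closure is boxed and handed more or less straight to pthread_create (or the platform equivalent):

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        println!("hello from a real OS thread");
    });
    handle.join().unwrap();
}
```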
A
A
Alex, thank you, thank you very much indeed. Just a quick reminder: Alex will be back. I mean, I now understand why you like time-travel movies, because this second talk is like an example of time travel: you had coroutines first and only afterwards the intro, which is tomorrow at 4:15 in room 9; there will be an intro to Rust, a very interesting take. Other than that, we've got a little gift for you, Alex. Thank you very much indeed, and thank you; I invite everybody to come back tomorrow. Yeah, do it properly: one, two, three, go. Thank you.