From YouTube: Intern Presentation: Message Passing in Rust
Description
Eric Holk from the Research team presents “Fast, safe message passing for Rust.”
Help us caption & translate this video!
http://amara.org/v/2FhG/
I'm going to talk about what I did this summer with the message passing system in Rust. Lindsey mentioned that Rust is supposed to be safe, concurrent, and fast, and my work actually deals with all three of those: message passing is important for concurrency, so you can have multiple things running at once; we want the message passing to be fast; and we also want to be able to provide some safety guarantees in there too.
So, to start off with: how many people here are familiar with double buffering for graphics? Well, that's like everyone.
Okay, we can ignore the next slide then. So, for anyone who isn't familiar with how double buffering works: the way people used to do graphics is, you'd have a display up there and some picture that you're drawing into it, and as you draw you can see the drawing happening. You might actually see the intermediate forms on the screen, and that's a bad user experience; then the frame clears and you see it all happen again. So, double buffering.
What we end up doing is we create this other renderer task, that's up there, and then we create a copy of the video buffer. The renderer task can then be drawing into it: it draws, and when it's all done, it sends the buffer back to the display. The display swaps the two, which happens really fast, so you don't see it happen, and then the renderer can go back and keep on drawing. This actually comes up in one of the biggest projects in Rust.
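The renderer/display handoff described above can be sketched in modern Rust with std threads and channels. To be clear, this is my own reconstruction, not the talk's code: the real system used Rust tasks and its pipes library, and all the names here are mine.

```rust
use std::sync::mpsc;
use std::thread;

// One frame of pixel data; a Vec stands in for the real video buffer.
type Buffer = Vec<u8>;

// Run `frames` iterations of the double-buffer loop and return the
// buffer that ends up on screen.
fn run_double_buffer(frames: u8, len: usize) -> Buffer {
    let (to_renderer, renderer_in) = mpsc::channel::<Buffer>();
    let (to_display, display_in) = mpsc::channel::<Buffer>();

    // The renderer task draws into whatever buffer it is handed,
    // then sends the finished frame back to the display.
    let renderer = thread::spawn(move || {
        for frame in 0..frames {
            let mut buf = renderer_in.recv().unwrap();
            buf.fill(frame); // "draw" into the off-screen buffer
            to_display.send(buf).unwrap();
        }
    });

    // The display keeps one buffer on screen and lends the spare out;
    // when a finished frame comes back, the two are swapped.
    let mut on_screen: Buffer = vec![0; len];
    to_renderer.send(vec![0; len]).unwrap();

    for _ in 0..frames {
        let finished = display_in.recv().unwrap();
        let old = std::mem::replace(&mut on_screen, finished);
        let _ = to_renderer.send(old); // Err once the renderer exits
    }
    renderer.join().unwrap();
    on_screen
}

fn main() {
    // Three frames are drawn; the last one (filled with 2) ends up on screen.
    assert_eq!(run_double_buffer(3, 4), vec![2u8; 4]);
}
```

Because only two buffers ever exist and each side holds at most one at a time, the swap is just an ownership exchange, which is the invariant the talk goes on to discuss.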
So there are some problems that could happen there, though. You need to make sure that you only ever have one buffer out at a time, so that there's always a buffer available to show on the screen, and things like that. So we want to try to have Rust help us make this program correct and then also give us performance.
So here's an example of what the code would look like if you were doing the double buffering in Rust, using the new channel contract system. We have this function called draw_frame, and it takes a channel which it uses to communicate with the compositor. So the first thing it does is... okay, whoa. Okay, I guess you can't see my mouse, so you'll just have to imagine what I'm pointing at.
So there is a channel up there, and we use that to request a buffer, and then we go into the select and wait for the compositor to send us a give_buffer message, which includes a buffer. Then we call the render function on that buffer, and when we're all done, we release the buffer: we send it back to the compositor, and the compositor will draw whatever it is you drew in there. It will show up on the screen so the user can see it.
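The slide's code isn't visible in the transcript, but the sequence just described (request, wait for give_buffer, render, release) might look roughly like this in today's Rust. The message shapes, function names, and the toy compositor are all guesses from the talk, not the actual generated API:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

type Buffer = Vec<u8>;

// Messages between the client and the compositor. The names follow
// the talk (request, give_buffer, release) but the shapes are guessed.
enum Request {
    RequestBuffer,
    Release(Buffer),
}
enum Reply {
    GiveBuffer(Buffer),
}

fn render(buf: &mut Buffer) {
    buf.fill(7); // stand-in for the real drawing code
}

// draw_frame, approximately as described: request a buffer, wait for
// the give_buffer reply, render into it, then release it back.
fn draw_frame(to_comp: &Sender<Request>, from_comp: &Receiver<Reply>) {
    to_comp.send(Request::RequestBuffer).unwrap();
    let Reply::GiveBuffer(mut buf) = from_comp.recv().unwrap();
    render(&mut buf);
    to_comp.send(Request::Release(buf)).unwrap();
}

// Drive one frame against a toy compositor and return what it "shows".
fn run_once() -> Buffer {
    let (req_tx, req_rx) = channel();
    let (rep_tx, rep_rx) = channel();
    let compositor = thread::spawn(move || {
        // Hand out one blank buffer, then display whatever comes back.
        if let Request::RequestBuffer = req_rx.recv().unwrap() {
            rep_tx.send(Reply::GiveBuffer(vec![0; 4])).unwrap();
        }
        match req_rx.recv().unwrap() {
            Request::Release(buf) => buf,
            Request::RequestBuffer => panic!("protocol violation"),
        }
    });
    draw_frame(&req_tx, &rep_rx);
    compositor.join().unwrap()
}

fn main() {
    assert_eq!(run_once(), vec![7u8; 4]);
}
```

Note that nothing in this plain-channel sketch stops a buggy client from, say, requesting twice without releasing; that gap is exactly what the protocol checking described next closes.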
So what I forgot to do was return the buffer to the server, or rather to the compositor. I requested a buffer, I drew into it, and then I requested another buffer without returning the first one, and so now there aren't enough buffers left in the system. I think on some operating systems you will actually just hang if you do this, because the system doesn't know what to do if you forget to return the buffer. In Rust, we get this compiler error up there, which is a little late, I guess, but...
Okay, so I keep saying "protocol" a lot, and that's sort of the key part of all of this. Underlying that code example up there is this thing here: this is a protocol specification. We've defined a protocol called double_buffer, and there's a list of states in there: we've got three states, and then a set of messages that we're allowed to send in each one. So there's the acquire state, the wait_buffer state, and the release state, and each of those states either sends or receives.
That's the part after the colon. And then there's a list of messages that are allowed. In this case, we're only allowed to send one kind of message in each state, although a protocol can have multiple ones. So in the acquire state, you're allowed to request a buffer, and when you do that, the arrow over there says that we transition into the wait_buffer state, at which point you wait to receive a give_buffer from the other side of the connection.
That gives you a give_buffer message with some sort of buffer type associated with it. Then you transition to the release state, which says that the server is waiting for you to return the buffer, and in the release state we have a release message, which takes a buffer and passes it back over to the server.
If we like pictures, we can draw this out. I've colored the send states red and the receive states blue. So we start up there in the acquire state, where you send a request; then in wait_buffer we receive a give_buffer; then we come to the release state, where we send release; and then we go around and we're back in the acquire state.
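The state diagram just described can also be written down as a small transition function. This is my own encoding of the three states and their legal messages, not the protocol compiler's generated code:

```rust
// The three protocol states, from the client's point of view.
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Acquire,    // send state: may send `request`
    WaitBuffer, // receive state: waits for `give_buffer`
    Release,    // send state: may send `release`
}

#[derive(Debug, Clone, Copy)]
enum Msg {
    Request,
    GiveBuffer,
    Release,
}

// Which message is legal in each state, and where it takes you.
fn step(state: State, msg: Msg) -> Option<State> {
    match (state, msg) {
        (State::Acquire, Msg::Request) => Some(State::WaitBuffer),
        (State::WaitBuffer, Msg::GiveBuffer) => Some(State::Release),
        (State::Release, Msg::Release) => Some(State::Acquire),
        _ => None, // any other message is a protocol violation
    }
}

fn main() {
    // One full trip around the ring: acquire -> wait_buffer -> release -> acquire.
    let s = step(State::Acquire, Msg::Request).unwrap();
    let s = step(s, Msg::GiveBuffer).unwrap();
    let s = step(s, Msg::Release).unwrap();
    assert_eq!(s, State::Acquire);
    // Sending out of order is rejected.
    assert!(step(State::Acquire, Msg::Release).is_none());
}
```

The point of the contract system is that these transitions are checked at compile time rather than at run time as here.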
So one of the things we create is a dual protocol, and basically the way you get a dual protocol is you just reverse all the sends and receives. So the protocol compiler will take that protocol specification and actually produce two protocols, which I've drawn here: where one side is ready to send, the other is ready to receive.
So we take this protocol specification and we run it through our protocol compiler, which generates a bunch of Rust code that provides all the guarantees that we need to make this work. That means we have a module for the client and one for the server, and then all of the states in the protocol get compiled into Rust types.
Rust has an affine type system, which is sort of a scary term for a type system that counts how many times you use something: affine types are types where, if you use a value more than once, it's a type error. Okay, so for each of these states, once you send a message in that state, you've used the state up, and then you have to somehow get a representation of the next state, and that's how we use the affine type system to support this.
Basically, each protocol has a client and a server, and this is the client perspective, which is what we write everything from; in the client case we're allowed to receive here. Somewhere there would be other code which describes the server side of this, and it would get a double_buffer server acquire object, and it can receive from that. Okay, so I mentioned we generate functions to send messages.
Here's an example of that, where we generate the request function, which sends our request on the channel. Request consumes the channel endpoint, and it gives you back another channel endpoint. So a lot of times when you're writing code in this style, you'll say "let channel = something(channel)", followed by "let channel = something_else(channel)", and so on, because we can't use channel once we've sent on it; but then we get a new channel that we can use to continue the protocol.
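That rebinding style can be imitated in current Rust with plain ownership: each operation takes the endpoint by value and returns an endpoint of the next state's type, so reusing a stale endpoint is a move error at compile time. The types and methods below are my sketch of what the generated code might look like, with the actual channel plumbing stubbed out:

```rust
type Buffer = Vec<u8>;

// One endpoint type per protocol state. Because every method takes
// `self` by value, each endpoint can be used at most once: that is
// the affine discipline the talk describes, enforced by moves.
struct Acquire;
struct WaitBuffer;
struct Release;

impl Acquire {
    // Sending `request` consumes the acquire endpoint and yields
    // the wait_buffer endpoint.
    fn request(self) -> WaitBuffer {
        WaitBuffer
    }
}
impl WaitBuffer {
    // Receiving give_buffer yields the buffer plus the next endpoint.
    // (A real implementation would block on the channel here.)
    fn recv_give_buffer(self) -> (Buffer, Release) {
        (vec![0; 4], Release)
    }
}
impl Release {
    // Returning the buffer completes the loop back to acquire.
    fn release(self, _buf: Buffer) -> Acquire {
        Acquire
    }
}

fn render(buf: &mut Buffer) {
    buf.fill(1);
}

fn main() {
    let chan = Acquire;
    let chan = chan.request(); // rebind: the old endpoint is moved away
    let (mut buf, chan) = chan.recv_give_buffer();
    render(&mut buf);
    let _chan = chan.release(buf);
    // Calling `chan.request()` again here would not compile: `chan` was moved.
}
```

This "typestate via ownership" pattern is why forgetting to release the buffer, as in the earlier bug, shows up as a compiler error.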
So in this case, when we send a request, it's supposed to transition the protocol into the wait_buffer state, and so we get a new endpoint out, which we store in channel. Then we do this select; the select is basically a receive in this case, because we're only selecting over one thing, but it receives a message from the channel that we just got, and then we do pattern matching on that, and so this sort of mirrors the syntax for describing a message format.
So we say we want to match the give_buffer message, and it's going to have a buffer, so we want to bind that to a variable. Then there's also going to be some next state for the protocol, because it continues, and we're going to call that next state channel, so we're rebinding channel again to the next state of the protocol. And then render is a function that takes a reference to the buffer and draws into it, and that's all done.
Okay, well, so that sort of covers the safety of it, because we have this statically checked communication protocol to make sure you're sending messages at the right time, receiving the right messages when you should, getting the right types, and all that. But the other thing is we want it to be fast, right?
So I did a bunch of benchmarks, or, well, I'm going to talk about three of them, I guess, and I was pretty happy with how they turned out. It turns out that a side effect of having this protocol specification is that not only does the compiler make you be careful about how you're doing your communication, and that you're actually following the protocol you said you would, but the compiler can then assume that you're following the protocol, and it can generate faster code than it would in a more generic case.
In particular, we know that in every case we have one sender and one receiver at every point, and we know who they are, which makes synchronization a lot easier. And, at least on the fast path, where the send completes before the receiver tries to receive, we can do this whole thing without any locking, which gives us quite a bit of performance.
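To illustrate why one known sender and one known receiver make lock-free synchronization easy, here is a toy one-slot mailbox built out of a single atomic. This is only the flavor of the idea, under my own simplifying assumptions (one message in flight, integer payloads), and not the actual Rust runtime implementation:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// A one-slot mailbox for exactly one sender and one receiver.
// 0 means "empty"; any nonzero value is a message. Because there is
// only ever one party on each side, one atomic read-modify-write per
// operation is enough: no mutex is ever taken.
struct Slot(AtomicU64);

impl Slot {
    fn send(&self, msg: u64) {
        debug_assert!(msg != 0);
        // Fast path: the slot is empty, so a single store-if-empty
        // completes the send before the receiver even looks.
        while self
            .0
            .compare_exchange(0, msg, Ordering::AcqRel, Ordering::Acquire)
            .is_err()
        {
            thread::yield_now(); // receiver hasn't drained the slot yet
        }
    }

    fn recv(&self) -> u64 {
        loop {
            let v = self.0.swap(0, Ordering::AcqRel);
            if v != 0 {
                return v;
            }
            thread::yield_now();
        }
    }
}

fn main() {
    let slot = Arc::new(Slot(AtomicU64::new(0)));
    let s = Arc::clone(&slot);
    let sender = thread::spawn(move || {
        for i in 1..=100u64 {
            s.send(i);
        }
    });
    let sum: u64 = (0..100).map(|_| slot.recv()).sum();
    sender.join().unwrap();
    assert_eq!(sum, 5050); // 1 + 2 + ... + 100, delivered in order
}
```

With two senders this scheme would break (two compare-exchanges could interleave with a receive), which is exactly why knowing the protocol guarantees a single sender and receiver lets the generated code drop the locks.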
So the first benchmark I want to talk about is this one we called the message send ring, and this is a benchmark I wrote earlier in the summer, specifically crafted to make my new system look good. The old system maybe didn't have the best implementation we could have had; in particular, every time any task anywhere in the system sent a message, there was a global lock we had to acquire and release, which means basically all the threads get serialized through message sending, which just isn't very scalable.
So one of the first goals was to make sure we don't have any sort of global synchronization: any synchronization that happens due to message passing only involves the two tasks that are exchanging messages.
These blue boxes all represent tasks: we start up a bunch of tasks, they arrange themselves into a ring, and then everyone sends a message to the right.
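A scaled-down version of that ring benchmark is easy to sketch with threads and channels. The original measured message throughput on Rust tasks; this sketch just checks the plumbing by passing a single token around the ring and counting hops:

```rust
use std::sync::mpsc::channel;
use std::thread;

// `tasks` threads arranged in a ring, passing one token all the way
// around `laps` times; each hop increments the token.
fn ring(tasks: usize, laps: usize) -> usize {
    assert!(tasks >= 2 && laps >= 1);
    let mut txs = Vec::new();
    let mut rxs = Vec::new();
    for _ in 0..tasks {
        let (tx, rx) = channel::<usize>();
        txs.push(tx);
        rxs.push(rx);
    }
    let inject = txs[0].clone(); // main injects the initial token here
    let (done_tx, done_rx) = channel::<usize>();

    for i in 0..tasks {
        let rx = rxs.remove(0); // receiver for task i
        let tx = txs[(i + 1) % tasks].clone(); // sender to the right neighbor
        let done = done_tx.clone();
        thread::spawn(move || {
            for lap in 0..laps {
                let v = rx.recv().unwrap() + 1; // one hop
                let last_hop = i == tasks - 1 && lap == laps - 1;
                if last_hop {
                    done.send(v).unwrap(); // token has gone all the way around
                } else {
                    tx.send(v).unwrap();
                }
            }
        });
    }
    drop(txs);
    inject.send(0).unwrap();
    done_rx.recv().unwrap()
}

fn main() {
    // 8 tasks, 10 laps: the token is passed 8 * 10 = 80 times.
    assert_eq!(ring(8, 10), 80);
}
```

Under the old global-lock scheme every one of those hops would contend on the same lock; with pairwise synchronization, only each adjacent pair ever coordinates.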
Earlier in the summer it was something like a 17x improvement, but I was being a little sloppy with the synchronization, and getting rid of all the race conditions made it slower; it's still significantly faster than what we had before, though. The other thing we can do with these protocols is they give us an idea of how much buffer space we need.
A
So,
under
the
old
system
we
didn't,
we
just
assumed
that,
like
the
sender
could
sentence
and
then
sentence
and
all
they
want,
and
we
don't
know
if
the
receiver
is
ever
going
to
receive
it
or
not.
Whereas
some
of
these
these
protocols
that
we
get
now,
you
can
tell
just
by
looking
at
the
protocol
that
the
sender
will
not
go
any
further
until
the
receivers
received
all
the
messages
so
I
call
this
bounded
versus
unbounded
and
so
an
example.
In the case where we have a bounded protocol, we can actually get rid of all the memory allocation and just say: as soon as you start this protocol, we know how much space the thing could ever possibly use, so we allocate all the space that we need up front, and then we never allocate again. That should make things faster, right? To test this, I used just a really simple ping pong protocol.
There's the unbounded version, which has to allocate every time it sends because it needs more buffer space, and then there's the bounded example, where we don't do any allocation ever once you start up the communication. Once again this was faster, which is good, although only by about 9% or so; so it's worth doing.
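The bounded idea — reading off the protocol that the sender can never get ahead, so buffer space can be fixed up front — is roughly what a bounded channel gives you. This sketch uses std's sync_channel rather than the talk's generated code, which is an approximation of the ping pong protocol, not the original benchmark:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Ping-pong over bounded channels. Because the protocol strictly
// alternates (a ping is always answered by a pong before the next
// ping), a capacity of one message in each direction is provably
// enough, so the channel storage never needs to grow.
fn ping_pong(rounds: usize) -> usize {
    let (ping_tx, ping_rx) = sync_channel::<usize>(1);
    let (pong_tx, pong_rx) = sync_channel::<usize>(1);

    let ponger = thread::spawn(move || {
        for _ in 0..rounds {
            let n = ping_rx.recv().unwrap();
            pong_tx.send(n + 1).unwrap(); // answer each ping with a pong
        }
    });

    let mut n = 0;
    for _ in 0..rounds {
        ping_tx.send(n).unwrap();
        n = pong_rx.recv().unwrap();
    }
    ponger.join().unwrap();
    n
}

fn main() {
    assert_eq!(ping_pong(5), 5); // each round bumps the counter once
}
```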
But, you know, it's a smaller change, and that's generally a sign that you're doing something right, when you start only seeing like 10% performance improvements rather than like 800%, because an 800% improvement usually means you did something wrong to start with. So anyway...
This is one of the optimizations we do, and it pays off pretty well, in this case at least. These have both been sort of microbenchmarks so far, though, and basically what we've seen so far is that I can construct tests, or benchmarks, that make my code look really good. But...
A
Well,
this
light
pays
off
in
sort
of
real-world
applications,
and
so
another
program
ever
earlier
in
the
summer,
was
a
parallel
breakfast
parallel
breadth-first
search
through
a
graph,
and
so
once
I
had
that
I
compared
the
old
version
versus
the
new
version
and
one
to
see
how
well
whether,
basically,
this
faster
message
passing
actually
makes
any
real-world
performance
difference
and
something
that's
mostly
keep
you
found,
rather
than
just
testing
just
the
message
passing
so
we're,
probably
mostly
familiar
with
breadth-first
search,
but
just
in
case
here's
how
it
works.
A
It's
a
graphical
and
we
picked
some
notice
the
route
so
I
chosen
the
one
in
the
lower
left
and
we
say
that
this
one
is
in
level
zero.
So
it's
the
first
one
we
see
and
then
the
next
iteration
we
do
the
ones
that
are
reachable
from
that.
Then
we
do
another
one,
so
the
ones
that
are
reachable
from
there
and
so
on,
and
now
we've
sort
of
labeled
all
these
with
a
number.
So
we
know
what
order
you
end
up.
Visiting
this
nodes
in
and.
We can also create a tree: you can remove the extra edges, and then you have a breadth-first search tree. The idea behind breadth-first search is to produce this tree from an arbitrary graph.
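The level-labeling just described is the core of breadth-first search. A minimal sequential sketch (the talk's version was parallel and distributed over tasks):

```rust
use std::collections::VecDeque;

// Label every node of a graph with its breadth-first level (distance
// from the root), the numbering described above. `adj` is an adjacency
// list; unreachable nodes keep level usize::MAX.
fn bfs_levels(adj: &[Vec<usize>], root: usize) -> Vec<usize> {
    let mut level = vec![usize::MAX; adj.len()];
    let mut queue = VecDeque::new();
    level[root] = 0; // the root is the only node in level zero
    queue.push_back(root);
    while let Some(u) = queue.pop_front() {
        for &v in &adj[u] {
            if level[v] == usize::MAX {
                level[v] = level[u] + 1; // first time seen: one level deeper
                queue.push_back(v);
            }
        }
    }
    level
}

fn main() {
    // A small undirected diamond: 0-1, 0-2, 1-3, 2-3.
    let adj = vec![vec![1, 2], vec![0, 3], vec![0, 3], vec![1, 2]];
    assert_eq!(bfs_levels(&adj, 0), vec![0, 1, 1, 2]);
}
```

Recording, for each node, which neighbor first discovered it would give the breadth-first search tree the talk mentions.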
Okay. So, comparing the performance of this using the old ports and channels system in Rust versus the new one using channel contracts:
A
Here's
about
what
it
looks
like
so
again:
I'm
sort
of
a
real
world
application
things
get
faster,
I,
think
that's
roughly
15%
or
so
difference
in
there.
So
again,
not
the
amazing
eight
hundred
percent,
but
you
know
ideally
were
spending
most
of
our
time
computing
it.
So
the
fact
that
making
message
passing
can
give
a
measurable
improvement
performance
is
still
pretty
worthwhile,
so
anyway,
that's
basically
what
I
had
as
far
as
the
channel
contracts.
So
we
looked
at
a
way
of
providing
safer
communication
and
rest
in
that
now.
So Ben's question was: I said "on the fast path", where the fast path is when the send completes before the receive starts, and I claimed that there were no locks on that path, and yet there's an atomic operation. So the answer is yes, I guess; I wasn't counting atomic operations as a lock, right, but...