From YouTube: Thomas Eizinger - Refactoring away locks in async code
Description
Thomas entered the world of Rust through the blockchain space but has happily moved on from the finance world since then.
For the moment, he is contracted to maintain rust-libp2p.
So, a quick intro: this talk is about locks in asynchronous Rust. They can be tricky, because your tasks are scheduled across multiple worker threads, and if you just take locks in between, you might be blocking your entire worker for it, and Tokio especially is really sensitive here.
So we kind of don't really want to have locks in asynchronous Rust. But then the question is: how do we share state across tasks? You can use channels, but they can get quite messy really quickly. So what I'm going to do today is show you a quick toy program that I wrote, and we're going to refactor it from using a lock to not using a lock, but still sharing state across tasks.
Here we go. So first, let me show you the program. It's a little server, and we can run multiple clients against it and send messages. Nothing happens unless we send the message "ping", in which case we get back how many pings have been sent, and this state is shared across the multiple clients. So we've got one server and we share state across the clients. Easy, okay. What does this look like? We've got our client here; it's pretty straightforward.
We make a TCP connection and we wrap the stream in a Framed with a LinesCodec, which means we'll delimit the items by newlines, and we wrap stdin as well. We make a futures select over the two. If the stream yields an item: our protocol between server and client is that we just send the number of pings back as a number, so we'll parse that from the line and print out the number of pings. And if we get a line from stdin, we'll just send it to the server.
What does the server look like? It has a TcpListener. We have some shared state, which is the number of pings, and that's the thing that's wrapped behind the Arc<Mutex>. Then we have an endless loop which just accepts new streams and spawns a new task for each stream. We clone the state, we again wrap the stream in Framed, and then we read each message. If it is a ping, we lock the state — this is where the mutex comes in — we increase the number of pings, and we send it back.
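Before the refactor, the counter is shared by cloning an Arc<Mutex<u64>> into every spawned task. A minimal sketch of that locking pattern, using plain threads instead of Tokio tasks (the names here are illustrative, not the talk's exact code):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each "client task" gets its own clone of the Arc and locks the mutex
// to increment the shared ping count, just like the talk's server does.
fn count_pings_locked(clients: usize, pings_per_client: usize) -> u64 {
    let num_pings = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..clients)
        .map(|_| {
            let num_pings = Arc::clone(&num_pings); // one clone per task
            thread::spawn(move || {
                for _ in 0..pings_per_client {
                    // Lock, increment; the real server would then send
                    // the new value back to the client.
                    *num_pings.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let total = *num_pings.lock().unwrap();
    total
}

fn main() {
    println!("total pings: {}", count_pings_locked(4, 100));
}
```

With an async runtime the structure is the same, but holding the lock across an await point (or for too long) is exactly what can stall a whole worker thread.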
Okay, so then we implement the poll function, which basically will just make progress on our state. We'll take a Context — as with asynchronous programming, we always take a Context; that is where we register a waker and so on — and we will return a Poll of an io::Result with a unit type. We should probably import Poll for this to work. Cool, so we can now basically just start with the first part of the state machine.
We need to check which state we're actually in, so we're going to match on self. But the difficult part here — or not annoying, the difficult part — is that this is just a mutable reference, right? If we want to have state transitions, we sort of need to get hold of the current state by value and actually transition to a new one. So the trick for how you do that is you use mem::replace: you pass in self — the socket state — and some kind of dummy state, and mem::replace returns whatever was at the current memory address and gives you back the result.
So now we have an owned version of our socket state and we can actually move on to a different state. But first we need to do something with ReadyToReceive: we actually just want to receive, so we can say self.socket.poll_next_unpin — that's what it's called, right — with a question mark, and I'm just going to quickly return Poll::Pending at the end. So all these red lines go away, it's much better. And so we can either read a line, which actually is an Option.
The stream could have ended; that's when it returns None. So we either get a line or we return Pending. Now, if we are pending, the state is currently set to Poisoned, so we need to set it back to what we currently were at, which is SocketState::ReadyToReceive — we pass the socket back in — and then we can return Poll::Pending.
If it's a ping, we want to increase the counter, but we don't have a counter here. What we can do, though, is just pass it in: we take the number of pings as a reference, a &mut u64, and we just increase it with plus-equals one. Notice how we don't have any locks here, right? And if it is a ping, we also need to send a message back now. We can't just say socket.send here, because we're not within an async function, so we can't use .await. But we can transition ourselves to a new state that will, once we get there, send the message.
This is a bit of a tricky part in our state transitions: we need to set ourselves back to a valid state, because at the moment we're poisoned. So if we read a message that doesn't make any sense to us — anything that is not "ping" — we just go back to ReadyToReceive. But now we have an issue: we can't return Poll::Pending, because our call to poll_next actually succeeded. If we return Poll::Pending here, we'll never get polled again, right?
The way you solve these kinds of problems is that you wrap the entire poll function in a loop, and you basically try to do as much work as possible within one call to the function. So here we can now just say continue.
Right, this is kind of important: we were ready to receive, we polled our socket, the socket was ready, we had data in the buffer, but it turned out not to be a ping. So we set ourselves back to ReadyToReceive, and continue makes us go to the top: we replace ourselves with Poisoned and we poll again. If we then return Poll::Pending, we just exit, and we are still adhering to the poll contract. Now, what happens if we actually return Poll::Ready(None)? That means our socket has disconnected.
This does compile, because we're not doing any of the other state transitions yet. I'm not going to code these now, because it's always sort of the same pattern, so I'll go to my cheat sheet and we can look at it here. On SendingMessage: we first poll the socket; once it's ready to accept the message, we start the send and go to flushing; if that's pending, we stay in our same state.
It's always the same pattern, so I'm just going to copy that. And in Poisoned: we should never be in Poisoned if we do our state transitions correctly, so in Poisoned it's safe to panic. It's just something that we need to do because of the Rust type system.
So this does compile — and oh, actually, a really important part: obviously we don't want an endless loop, but we do have a return from each branch. So kind of an interesting visual check we can do is: is every control-flow statement paired with an assignment to self? Return, self; continue, self; return, self; continue, self — yeah. So that's sort of the visual check that we have to make, and this should compile. Cool. Now the question is: how do we actually use this thing?
So here we're going to track our number of pings, and we're also going to have a thing called Incoming, which is a type from async-std that has a lifetime because it's bound to a socket — it's where the incoming streams come from. And we're going to have some streams, which is just going to be a list of our socket states.
When you're writing manual poll state machines, it's really just poll all the way down: you just call more poll functions from your poll functions. This is essentially what async/await also desugars to under the hood, but there you don't get any control over it any more once you spawn it away, because the only thing you can call on it is poll, so you can't really make any decisions about the code that the compiler generates.
That's why we sometimes write them ourselves. So again, we loop in here, and one of the first things we do is say incoming.poll_next_unpin, and we're going to match on that. If that returns Ready — which is, I believe, a stream, yeah — we have a new stream. Well, what do we do with a new stream? We just shove it into our vector. Done. And if it's pending?
Oh sorry, we need to actually wrap it in this Framed thing first, so that we are correctly reading lines and not individual bytes. Why is this not happy? Oh yeah, obviously it should be an entire SocketState::ReadyToReceive here. There we go. Now, in Pending, for the moment, I'm not going to do anything.
Now we also need to have our streams make progress, right? So what we can do is just go through all our streams with a mutable reference and poll each individual one: we say stream.poll — that variable should be singular — and now this poll function requires our number-of-pings variable, and we can just pass that in, right?
We've got a mutable reference here; we pass in the number of pings, we pass on the context. This could fail, and then we match on that again.
So that's a Result, yeah. Actually, we probably don't want to use the question mark here, because that would abort the entire server if only one stream causes an error. So we say: okay, if the stream finishes, we're not going to do anything — this actually should never happen. We could probably statically type-check that with the never type, but that seems to never get stabilized, so we don't have it. And if it's an error, we just print a line saying the stream failed. Oh good, cool.
So now, what do we do if it's pending? Well, if it's pending, for now we don't want to do anything, but what we do is, at the end of the loop, we say return Poll::Pending. The reason we do that is: if this thing gets all the way to pending, it means a waker got registered, and we move on to our next thing. If all the streams return Poll::Pending, then we are safe to return Poll::Pending.
The most important bit here is that actually, here, we would otherwise also fall through to the next thing, but this socket could be making more progress — so we say continue here again. And this should compile. It does compile, cool. And now we just need to use it, right? So we get rid of all of this stuff that is in our old main: we make a PingCounter with the number of pings; incoming is listener.incoming(); empty streams. And now all we need to do is call this within a poll_fn, which is a helper function.
Fifty seconds left, so I'm going to pause here for now. I just want to work through some interesting things that we can observe now that we have this architecture. Many of you may think: this is about three times as much code as we had before — how is this any better? Well, it might not necessarily be better.
It just has different trade-offs. One thing we can see here is that we're no longer spawning any tasks; we're handling all streams and everything on a single task. That can be good if you want to limit what kind of system resources incoming clients can impose on you: if I spawn a task for every incoming connection, then even though systems can handle thousands or tens of thousands of tasks easily, it's still unbounded growth based on behaviour outside of your system, which can sometimes be a bad idea.
Another interesting thing is that it's fairly easy to test this individual state machine. It's its own encapsulated thing: it's a module that you can put somewhere, and you just instantiate it whenever you need it. So the code is more modular. And the third thing — and that's what's really exciting and prompted me to share this — is that we do a lot of this inside of rust-libp2p, and the reason for that is that it gives us explicit control over latencies and local buffer queues.
For example, at the moment, if the ping counter is polled, I can see that the first piece of work that happens is accepting incoming connections. That's probably not that good, actually, because accepting incoming connections means more work; we should be focusing on finishing the work that we already have locally first. So I'm just going to take this and put it at the bottom. Now I've optimised for the latency of finishing all the work that we have on local streams.
First finish local work, before taking on new work by accepting more incoming connections. And depending on how complex your network application is, you can make a lot of decisions here and you can have a lot of influence: have certain local queues stay small, dispatch work locally, have your components finish their work first before you accept new work. And now, a word from our sponsor.
Thank you. Hi everyone, I'm Cal; I work here at Harrison AI — we sponsored the event today. So thanks a lot to everyone for turning out and being engaged in all the talks today, and to all the speakers taking part. Following on from this event, we're going to be rolling through to Maloney's Hotel, so if you're interested in having a couple of beers, or whatever other beverages you might be into, feel free to come along; it's only a three-minute walk, just around the corner. Thank you.
For the next iteration: what this thing is not handling very well at the moment is stream failures. What you probably want to do here is remove the stream that failed from the list, which is a bit tricky to do because you're borrowing it mutably here, so you kind of have to — well, there's a trick.
There's a trick you can do where you iterate in reverse over the indices and always call swap_remove, which is an O(1) operation, to take the stream out and remove it. But as it stands, you will accumulate streams in here that are dead, because we're not removing them, and we really do want to remove them. Yeah, that would probably be the one thing I'd add to that. Overall, this kind of idea of writing your state machines yourself with poll can just be really interesting.
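The reverse-iteration swap_remove trick can be sketched like this, with toy data in place of real streams. Iterating indices from the back means the element swapped into the hole has already been visited, so each removal stays O(1):

```rust
// Remove every entry flagged as failed. Vec::swap_remove fills the hole
// with the *last* element instead of shifting, so it is O(1); iterating
// in reverse guarantees the swapped-in element was already checked.
fn remove_failed(streams: &mut Vec<(&'static str, bool)>) {
    for i in (0..streams.len()).rev() {
        let (_name, failed) = streams[i];
        if failed {
            streams.swap_remove(i);
        }
    }
}

fn main() {
    let mut streams = vec![("a", false), ("b", true), ("c", false), ("d", true)];
    remove_failed(&mut streams);
    println!("{streams:?}");
}
```

Note that swap_remove does not preserve order; for a bag of connections that usually doesn't matter (`Vec::retain` is the ordered alternative).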
Another thing is how unclear it is whether or not futures are cancellation-safe. Cancellation safety, in two words, is: what happens if I drop a future — will I lose some work somewhere? Does the future's state hold something that I will never get back, or not? That's something that's really problematic if you use async/await, because if you select — like in the client here, for example — this second argument is always the unfinished future, the one that didn't finish.
If I add a type annotation to the variable here, we see it's a Next over the framed read in this case. I happen to know that the future returned from Framed is cancellation-safe, so I can just drop it — but to be sure, I would have to carefully read through the library code and ask: is it actually cancellation-safe? So that's kind of annoying, and that's, I guess, also an interesting advantage of this pattern: you don't have to worry about that, because you're just polling things, and you work off each individual item that you get back from that asynchronous component.