From YouTube: RustConf 2016 - Back to the Futures by Alex Crichton
Description
RustConf 2016 - Back to the Futures by Alex Crichton
One of the core building blocks of any library is the I/O abstraction it works with, but unfortunately these abstractions aren't always composable enough to plug libraries together. The concept of futures is tried and true in frameworks like Scala's Finagle, and provides an ergonomic solution to composing multiple I/O libraries together. This talk will explore the implementation of a cross-platform futures library in Rust backed by the full power of mio and leveraging many aspects of Rust to make futures programming even easier.
All right, hello, everyone. So today I would like to talk about some work that Aaron, Carl, and, at this point, a lot of other people and I have been doing on futures and Tokio and async I/O over the past few months. And if you look really closely, you'll find a few Back to the Future references during this talk.
First thing I would like to do is take you a little bit back in time. We'll go back to about the age of April 2015, right before 1.0 was released. We see this issue show up on GitHub, on the RFC repo, talking about async I/O. This has been a very hotly debated topic; at this point there are over a hundred comments on it, and you can see this basically throughout the entire ecosystem. Ever since 1.0, this has been a major gap. It's something that Rust is missing.
A
It's
one
of
those
major
features
where
you
it's
kind
of
I
would
use.
This
I
would
use
rust,
I
put
rust
production
if
it
had
a
sink
I
owned,
but
the
story
doesn't
go
here.
We
can
actually
go
a
little
bit
farther
back
in
time.
We
can
go
back
to
May
30th
of
2013
the
ancient
ages
of
rust,
where
only
a
few
of
us
are
from
at
this
point
and
we'll
see
that
this
actually
spawned
off
from
this
issue.
This was about general selection of async events, so we can see that this concept isn't just async I/O: we have all these events and event-based programming systems kind of flying around, and we just want a nice, generic way to deal with that. And the solution at this time actually dated back even farther, once we go to 2011. Oh, and you'll also be very impressed with my Photoshop skills throughout this presentation.
So let's kind of rewind the state back to today, and we'll see, around June 30th (I think this is actually the date of the blog post), we had this survey go out. One of the questions in this survey was: what areas are you working in, not just in Rust, but wherever you're used to working in general? And there's a really clear trend here: the top four categories are all very related to the same concept of async I/O, or lots of events flying around.
So we have web development and servers, which clearly, if you want scalable servers, need async I/O in the modern day and age of server programming. Generally, with RPCs, you could have database systems, you have microservices. This is kind of the bread and butter of async I/O nowadays. And then, even just going down farther:
We have front-end apps, where you have these events coming in from clicks, from keyboards, from the file system, from all over the place. And we also have network programming, which is also basically the heart of all async I/O. So this is really showing that if we were to solve this problem, if we were to really tackle this head-on, we're going to impact a major portion of not only the Rust community, but actually the technical community at large. And the great part about this is we're not actually starting from scratch.
Looking at frameworks like Scala's Finagle, the underlying service abstraction allows composition, allows scalability, and a kind of separation of concerns, and all of this is in turn built on Netty, which is the async I/O system of Java. And you can also start seeing that Finagle has kind of inspired a C++ counterpart in Facebook's project called Wangle. So we can take all this and conclude that one of the best routes to go along with Rust is futures. Again, awesome Photoshop skills.
So the first thing I want to talk about is what a future actually is, if you've never heard of this before or if you've come from anywhere else. Lots of people have different definitions of futures, but it's basically just a placeholder for a value that's going to come out at some point later in time. So this could be a database query, where you're fetching a row and you want to actually get the row out.
This could be an RPC request, where you're sending something saying "I would like to update my row" and you want to know when you're actually done. It could also just be some ephemeral value, which is not actually data coming at you: a timeout, just some event happening in the future. You get notified when that happens.
So if we take a look at what it actually is in Rust, you'll find this definition in the futures crate, and the first thing that'll jump out at you is the fact that it's a trait. I'll talk a little bit more about that later, but this ends up being a key aspect of the design of futures themselves. And the next thing you'll see is the Item and Error types, where all that's really saying is that a future will resolve to some value in the future.
So in this case, if you're successful, it'll come out with an Item; if you're an error, it'll come out with that particular Error. And these are generics: there's not one kind of error, there's not one kind of item. They're kind of domain-specific, whatever is most appropriate for that application. And then the last thing we'll see is kind of the core method which all futures are built on top of; this is kind of the only way to actually pull a value out of the future.
It's this function called poll. There's a lot of stuff going on in this function called poll, and it's a little more subtle than what you would actually see, but basically, in a nutshell, it says: here's the value if I have it, or I'm not ready. And then the real magic happens by saying: if I'm not ready, I'm going to make sure to let you know when it actually is ready. So it's kind of a simple concept, but it's surprisingly powerful, as we'll see in a little bit.
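The shape of the trait being described can be sketched with a toy, self-contained version (this is not the real futures crate, just a minimal stand-in mirroring the Item/Error types and the poll method; the CountDown type and the busy-loop executor are illustrative assumptions):

```rust
// Toy sketch of the poll-based Future shape described in the talk.
// Poll is modeled as Result<Async<Item>, Error>, where NotReady means
// "no value yet; you'll be told when to poll again".

enum Async<T> {
    Ready(T),
    NotReady,
}

type Poll<T, E> = Result<Async<T>, E>;

trait Future {
    type Item;
    type Error;
    // The one core method: either hand back the value, or say "not ready".
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
}

// A future that becomes ready after being polled a few times,
// standing in for e.g. a pending database query.
struct CountDown(u32);

impl Future for CountDown {
    type Item = &'static str;
    type Error = ();
    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        if self.0 == 0 {
            Ok(Async::Ready("row fetched"))
        } else {
            self.0 -= 1;
            Ok(Async::NotReady) // a real executor would arrange a wakeup here
        }
    }
}

fn main() {
    let mut f = CountDown(2);
    // Busy-looping on poll is only for the demo; a real executor sleeps
    // until it is woken and told the future is ready to make progress.
    let value = loop {
        match f.poll() {
            Ok(Async::Ready(v)) => break v,
            Ok(Async::NotReady) => continue,
            Err(()) => panic!("future failed"),
        }
    };
    println!("resolved: {}", value);
}
```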
Just this simple idea of polling a value out, while also being notified when it's ready, can allow us to build up these really large chains of computation and kind of fuse all these systems together. So one of the great things you can do with futures is something very similar to iterators, where you can compose completely arbitrary futures with one another.
All you know about them is that you have this function called poll, and that's it. And so the first thing you can do is say: let's start out with a future f, and then, when we're done, we'll take the value that it actually resolved to, generate a new future, and then run something else. This is similar to just sequential Rust saying "let a = b" and then "let c = some_function(a)".
This is just saying: after one thing, run the next. Well, we can also run two things concurrently, which is actually very difficult to express in sequential code. You have to start spawning threads or kind of worry about what's actually happening at runtime. But with futures you can take any two arbitrary futures and say: I want to wait for either one to be finished, and that's it. And this itself actually returns yet another future, so we're just kind of layering this onion, layering this composition, where we can continue using all of these.
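The two composition styles just mentioned, sequencing and waiting on multiple futures, can be sketched as combinators over the same toy trait (hedged: this is not the real crate's API; AndThen, Join, and Ready are illustrative names, and the futures here resolve immediately so no event loop is needed). Note that neither combinator allocates: each is just a struct or enum wrapping its inner futures, a small state machine.

```rust
// Toy poll-based trait: Some = ready, None = not ready.
trait Future {
    type Item;
    fn poll(&mut self) -> Option<Self::Item>;
}

/// A future that is immediately ready with a value.
struct Ready<T>(Option<T>);
impl<T> Future for Ready<T> {
    type Item = T;
    fn poll(&mut self) -> Option<T> { self.0.take() }
}

/// Sequencing: run `A`, then feed its result to `f` to build the next future.
enum AndThen<A: Future, B: Future, F> {
    First(A, F),
    Second(B),
    Done,
}

impl<A: Future, B: Future, F: FnOnce(A::Item) -> B> Future for AndThen<A, B, F> {
    type Item = B::Item;
    fn poll(&mut self) -> Option<B::Item> {
        loop {
            match std::mem::replace(self, AndThen::Done) {
                AndThen::First(mut a, f) => match a.poll() {
                    Some(v) => *self = AndThen::Second(f(v)), // state transition
                    None => { *self = AndThen::First(a, f); return None; }
                },
                AndThen::Second(mut b) => {
                    let r = b.poll();
                    if r.is_none() { *self = AndThen::Second(b); }
                    return r;
                }
                AndThen::Done => return None,
            }
        }
    }
}

/// Concurrency: polls both futures, resolving once both are ready.
struct Join<A: Future, B: Future> {
    a: A, b: B,
    ra: Option<A::Item>, rb: Option<B::Item>,
}

impl<A: Future, B: Future> Future for Join<A, B> {
    type Item = (A::Item, B::Item);
    fn poll(&mut self) -> Option<(A::Item, B::Item)> {
        if self.ra.is_none() { self.ra = self.a.poll(); }
        if self.rb.is_none() { self.rb = self.b.poll(); }
        match (&self.ra, &self.rb) {
            (Some(_), Some(_)) => Some((self.ra.take().unwrap(), self.rb.take().unwrap())),
            _ => None,
        }
    }
}

fn main() {
    // "After one thing, run the next."
    let mut seq = AndThen::First(Ready(Some(2)), |n: i32| Ready(Some(n * 10)));
    assert_eq!(seq.poll(), Some(20));

    // Wait for both to finish.
    let mut both = Join { a: Ready(Some(1)), b: Ready(Some("hi")), ra: None, rb: None };
    assert_eq!(both.poll(), Some((1, "hi")));
    println!("combinators resolved");
}
```

Because AndThen and Join themselves implement Future, they can be nested arbitrarily, which is the "layering the onion" being described.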
I'm not gonna show the full list here; it wouldn't fit on the slide. But you can kind of think of this as the Iterator trait, where whenever you look at the documentation, there are tons of little gadgets and gizmos for you to play around with. And the last thing I want to show off in the futures crate is streams as well. So a future has the ability to provide one value over time, but sometimes computations are best modeled by a stream of values happening asynchronously.
A great example of this is when you have a TCP listener that's accepting sockets. That's actually a great version of a stream, saying: over time, I'm going to give you new client connections; it just wasn't all available at once. So the trait itself is actually very similar to a future. There are these Item and Error types that are elided here, and then the only real difference is the fact that instead of returning an item, it returns an Option of an item, and that's basically the same thing as Iterator here.
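The Stream shape just described can be sketched alongside the toy Future from earlier (hedged: this is a simplified stand-in, not the real crate; the Incoming listener type is an illustrative assumption, and readiness is collapsed so the example runs without an event loop):

```rust
// Toy Stream: like the toy Future, but each resolution yields an
// Option<Item>. Some(item) = next value; None = the stream has ended,
// mirroring how Iterator signals exhaustion.
trait Stream {
    type Item;
    fn poll(&mut self) -> Option<Self::Item>;
}

/// Stand-in for a TCP listener: yields "client connections" one at a time.
struct Incoming {
    pending: Vec<&'static str>,
}

impl Stream for Incoming {
    type Item = &'static str;
    fn poll(&mut self) -> Option<&'static str> {
        if self.pending.is_empty() {
            None // no more clients: the stream has ended
        } else {
            Some(self.pending.remove(0))
        }
    }
}

fn main() {
    let mut listener = Incoming { pending: vec!["client-a", "client-b"] };
    let mut accepted = Vec::new();
    // Drain the stream, item by item, just as a server accept loop would.
    while let Some(conn) = listener.poll() {
        accepted.push(conn);
    }
    assert_eq!(accepted, ["client-a", "client-b"]);
    println!("accepted {} connections", accepted.len());
}
```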
You have Some of an item if you have an item, or None if the stream has ended, and we keep going. And this is actually really powerful. Down here we have an example where we have a stream of requests, this reqs variable, and that's just kind of data coming in off the wire that we've parsed into a bunch of requests, and we have some stream representing that. We can then really easily transform that: we now have a function from this request to a future of a response.
It's kind of like plumbing futures through all the way, so we can say: for every request, I want you to and_then process the request and then get the result. This is very similar to what we saw previously with futures and composing them, but the key thing here is that this is happening for every item, and with that simple function, that simple combinator, we've now created a stream of responses.
So now all of that logic of what's actually happening, of parsing the request, of sequencing (once you have the request, doing your entire processing and service, which might involve database requests or whatever), is kind of all encapsulated in this one stream. Now you can pull some responses out of it, which you might then go serialize to a socket at some point.
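The requests-to-responses pipeline just described can be sketched over the toy Stream idea (hedged assumptions: the Requests and Process types and the process function are illustrative; the talk's version goes through a future per request, while here the processing is immediate so the sketch stays self-contained):

```rust
// Toy Stream: Some = next item, None = stream ended.
trait Stream {
    type Item;
    fn poll(&mut self) -> Option<Self::Item>;
}

/// Requests parsed off the wire, standing in for the `reqs` stream.
struct Requests {
    reqs: Vec<String>,
}

impl Stream for Requests {
    type Item = String;
    fn poll(&mut self) -> Option<String> {
        if self.reqs.is_empty() { None } else { Some(self.reqs.remove(0)) }
    }
}

/// The "process every request" combinator: wraps a stream and a per-item
/// function, yielding a stream of processed items.
struct Process<S, F> {
    inner: S,
    f: F,
}

impl<S: Stream, T, F: FnMut(S::Item) -> T> Stream for Process<S, F> {
    type Item = T;
    fn poll(&mut self) -> Option<T> {
        self.inner.poll().map(&mut self.f)
    }
}

fn process(req: String) -> String {
    format!("200 OK ({})", req) // stand-in for real request handling
}

fn main() {
    let reqs = Requests { reqs: vec!["GET /".to_string(), "GET /about".to_string()] };
    let mut responses = Process { inner: reqs, f: process };
    // Pull responses out; a server would serialize these back to the socket.
    while let Some(resp) = responses.poll() {
        println!("{}", resp);
    }
}
```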
So the next thing I want to talk about: futures are great, they're kind of awesome for composing things together, and they provide a nice framework to talk about, to work with, and to write these asynchronous applications. But it's not really that good if it's not fast. So we can take this bread and butter of Rust, these zero-cost abstractions, and let's see if we can build them on top of futures as well.
This is... oh, I gotta show you my awesome picture. Oh, my awesome picture. Anyway, we'll just go straight to this slide. All right, so to talk about what it actually means to be zero cost, we need to think about: what does it actually mean? What is the best thing we can possibly write? So let's say you have just a simple server: you're accepting sockets, you're going to process them in some fashion, and then drop the connection.
What you're probably gonna do is use mio off the shelf, which is kind of epoll on Linux, kqueue on OS X, IOCP on Windows: the state of asynchronous I/O in Rust. You'll have some state machine, so that for every socket you accept, you kind of transition through that state machine, and along the way you won't necessarily allocate too much.
You won't have any synchronization, because you're probably just on one thread, and this is kind of basically how you construct your entire server. It's not the most organized, but it's the fastest thing you can possibly write. And so this gives us a concrete thing to remember about what futures should actually compile down to: they need to compile down to that state machine, they need to be using mio under the hood, and they need to be avoiding all these extra costs. They shouldn't have any sort of implicit synchronization or implicit allocation.
So when I talk about zero-cost futures, what I actually mean is all of these kinds of goals. There are no allocations in any of the combinators: select, join, and_then, or_else, then. Every single one you find is all zero cost, no allocations, and at the same time there's no synchronization in them as well.
So the cost of this kind of comes out in the wash, almost entirely, because the processing of the request takes far longer than one dynamic dispatch or one allocation. And this is kind of where we can really, truly see the zero cost of futures and how they all compile down. Now, you've all seen this graph before in the keynote, but I want to show it again, because it is really, really cool.
We can see this as we go up to the Rust category here, minihttp. If your face doesn't look quite like this, well, this is the reaction you should be having. This is not just Rust: this is futures. These are the zero-cost futures I'm talking about. This is not just "oh yeah, we're fast". This is "I can't find anything faster than this". This is insane. And so I wanted to talk about how this actually happens.
You can write your own impl; you can implement the trait on whatever you like. You could use the standard ones, but you get to choose whatever's best for you and whatever application. If it doesn't quite fit into what all those standard ones do, you can have your own impl and do a very specialized state machine internally. And not only this, but having a trait means that everything is nice and generic.
We can plumb that all the way through, so the compiler can see through any sequence of chains of and_thens or selects or or_elses, and it can make sure to have all the inter-combinator optimizations. So if one future doesn't actually produce an error, and the compiler can determine that, it can actually eliminate all the branches later on for error handling. And then, finally, the trait also gives us this great ability to say: sometimes we do want this generic dispatch.
Sometimes we do want this really fast code, but other times I just want to compile something into a nice unit of abstraction. So you have this option of dynamic dispatch as well, this option of packaging it up in a trait object. I want to dive into what one of the function signatures looks like. So this is and_then from before, which runs the first future, afterwards runs a closure with the resulting value, and then returns another future.
The first thing you might notice about this is that it looks a lot like Iterator. This means that it'll be very familiar to you: all of the ergonomics you'll see in the standard library tend to show up with futures as well. And we'll see these zero-cost closures. The fact that F is held generically all the way through to the combinator means that there are no extra allocations for this callback; there's no extra state being held onto here.
We can actually compile all that away and kind of monomorphize it all at compile time. And not only that, but like the IntoIterator trait, we also have this easy ability to convert into futures themselves, so you don't really have to worry about: am I returning a Result from this closure, or am I returning a future from this closure?
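The two dispatch options just discussed, holding a future generically versus packaging it up in a trait object, can be contrasted over the toy trait (hedged: the trait and helper function names are illustrative, not the real crate's API):

```rust
// Toy poll-based future: Some = ready, None = not ready.
trait Future {
    type Item;
    fn poll(&mut self) -> Option<Self::Item>;
}

struct Ready<T>(Option<T>);
impl<T> Future for Ready<T> {
    type Item = T;
    fn poll(&mut self) -> Option<T> { self.0.take() }
}

// Static dispatch: the concrete future type is known to the compiler,
// so the whole chain can be monomorphized and inlined through.
fn run_static<F: Future>(mut f: F) -> Option<F::Item> {
    f.poll()
}

// Dynamic dispatch: any future with the right Item type fits in this box,
// trading one allocation and a vtable call per poll for a single
// nameable type, a nice unit of abstraction.
fn run_boxed(mut f: Box<dyn Future<Item = i32>>) -> Option<i32> {
    f.poll()
}

fn main() {
    assert_eq!(run_static(Ready(Some(7))), Some(7));
    assert_eq!(run_boxed(Box::new(Ready(Some(7)))), Some(7));
    println!("both dispatch styles resolved");
}
```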
So we've not only designed them to actually be very fast, but we've also designed them to be very understandable and to work very well as they scale up into larger systems. This typically comes hand in hand with concerns like backpressure, where, when you have a lot of components in a system but one of them is moving very, very quickly or one of them is moving very, very slowly, you want to make sure that the work doesn't pile up and take down the system because there's not enough memory to buffer up all those requests.
The fact that all futures are poll-based, that we have to poll to actually pull out a value, means that once you've stopped polling, you're not actually making any progress. So all you have to do is just arrange for yourself to stop polling. It's very easy to apply backpressure to any particular future.
You just say: all right, you need to not make progress right now. And not only that, but in these asynchronous systems there tend to be hundreds of millions of requests flying around. We have database requests, RPC requests, requests on a CPU pool, and you need a very clear picture of how to actually cancel one of these. I don't necessarily want all of them to finish to completion; for example, the TCP connection closed a little bit earlier.
I don't want to do any unnecessary work. So, having this concept that to actually make progress you have to poll a future, this means that to cancel it, you just have to stop calling poll. And that actually shows up really well in Rust by just using drop: as soon as you've dropped a future, it's impossible to poll a value out of it.
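Cancellation-by-drop can be sketched directly (hedged: PendingRequest and the CANCELLED flag are illustrative stand-ins; a real future's Drop might deregister interest with the event loop rather than set a flag):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Demo-only flag so we can observe that cleanup ran.
static CANCELLED: AtomicBool = AtomicBool::new(false);

/// Stand-in for an in-flight request tied to a TCP connection.
struct PendingRequest;

impl PendingRequest {
    /// Toy poll: this request never completes in the sketch.
    fn poll(&mut self) -> Option<&'static str> {
        None
    }
}

impl Drop for PendingRequest {
    fn drop(&mut self) {
        // Once dropped, the future can never be polled again, so this is
        // cancellation; Drop gives a hook to release any resources.
        CANCELLED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let mut req = PendingRequest;
    assert_eq!(req.poll(), None); // not ready yet
    drop(req); // the connection closed early: stop doing unnecessary work
    assert!(CANCELLED.load(Ordering::SeqCst));
    println!("request cancelled by drop");
}
```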
You no longer have to worry about the value, and it'll kind of all get propagated automatically for you. And then, finally, the last thing that helps us scale up futures: those of you who are familiar with epoll know that not only do you learn that an event happened, you learn what happened. You get a token saying what actually happened with epoll, and this is kind of crucial to epoll being high performance.
So futures can get woken up, but if a future is waiting on a very large number of things, like thousands of things that could happen, it needs to know why it actually woke up, to avoid doing unnecessary work to figure that out. So futures also give you all these built-in tools and utilities to ensure that whenever one wakes up, you know exactly why it woke up, and you can basically get the epoll interface, but at any level of a future. All right.
So, given all of that background... oh my god, my awesome slides are not showing up. Okay, all right, well. So the next thing I'm gonna talk about is a framework called Tokio, which we've been using to build up asynchronous I/O with all these futures plumbed throughout it. So, to talk about what Tokio actually is: it's a futures-powered async I/O stack, completely written in Rust, that is going to be a one-stop shop for all your async I/O needs in Rust.
If you need an event loop, if you need a web server framework, whatever you need, it'll be somewhere within Tokio. And at the same time, this is a very layered architecture, with different entry points depending on what you actually need. Not everyone needs an RPC framework; not everyone needs a multiplexing implementation.
You might just need an event loop. And then you'll also see that at the upper layers we're drawing a lot of inspiration from Finagle, these very successful systems external to Rust, and we kind of see how to bring all those best ideas into it ourselves. So, to describe what's going on in Tokio: at the bottom layer we have futures, which is the bread and butter of basically everything that's going on. This is how we're expressing all of these asynchronous computations.
On top of that, we'll have crates like tokio-core and tokio-service, where core is the actual event loop that's happening, built on top of mio, and service is this Service abstraction from Finagle, which allows a lot of composition at the server level. We can say: my server is simply just a function from a request to a future of a response. On top of these, we can build layers like TLS/SSL; we can build protocol implementations like pipelining and multiplexing.
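The Finagle-style idea just described, "a server is simply a function from a request to a future of a response", plus the middleware layering it enables, can be sketched as follows (hedged: this is not the real tokio-service API; the Service trait here responds synchronously to stay self-contained, and Hello and Logged are illustrative names):

```rust
// Toy Service: a real version would return a future of the response;
// the sketch responds immediately to avoid needing an executor.
trait Service {
    type Request;
    type Response;
    fn call(&self, req: Self::Request) -> Self::Response;
}

/// The application itself: turns a request into a response.
struct Hello;
impl Service for Hello {
    type Request = String;
    type Response = String;
    fn call(&self, req: String) -> String {
        format!("hello, {}", req)
    }
}

/// Middleware: wraps any inner service and adds behavior around the call.
/// Here it just tags responses; a real layer might time out, retry, or
/// load-balance without the inner service knowing.
struct Logged<S>(S);
impl<S: Service<Request = String, Response = String>> Service for Logged<S> {
    type Request = String;
    type Response = String;
    fn call(&self, req: String) -> String {
        let resp = self.0.call(req);
        format!("[logged] {}", resp)
    }
}

fn main() {
    let svc = Logged(Hello);
    let resp = svc.call("world".to_string());
    assert_eq!(resp, "[logged] hello, world");
    println!("{}", resp);
}
```

Because the middleware is itself a Service, layers stack freely, which is the "slot in at just the tokio-service layer" point made below.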
We can do this in very generic fashions, where they're not wed to any particular I/O objects. And then, finally, on top, we're planning on providing an overarching tokio crate, which is basically saying: the 99% use case of what people want for async I/O in Rust will come in this one crate. Everything's all nice and packaged up in one piece for you to go and get running pretty quickly.
And one of the coolest things about Tokio is that at all of these layers we have this concept of middleware as well. So within tokio-service, we can have middleware for things like timing out requests, for failing over between servers, for load balancing, for sending a bunch of requests to servers. This is very much inspired by Finagle, and you'll see a lot of this there as well, but the key idea is that you can kind of slot in at just the tokio-service layer: you don't have to worry about protocols or TLS or what the actual event loop is.
You can just work there and have all of these great abstractions, and not actually worry about all the implementation details of the other layers. Like, at the TLS layer, we can not only have TLS: you can have compression like gzip or xz or deflate or whatever you want, serialization like JSON or whatever.
And at the protocol layers, we can have very generic implementations of pipelining or multiplexing. So this means that all these really grungy, nitty-gritty details of writing protocols with these asynchronous I/O parsers, which have historically been pretty difficult, you can kind of handle with these off-the-shelf abstractions, get up to speed and running very quickly, and have very high-performing implementations as well. And so we have a lot of examples of crates that are using Tokio today. We have a couple of protocols, like a line protocol and the Redis protocol.
We have a TLS implementation, which is built on native libraries like OpenSSL, Secure Transport, and SChannel. We have a sample SOCKS proxy server, so you can just kind of see all the various pieces of the layers working together and how it might all work out. We have minihttp, which you were seeing in the graph earlier. We have a hyper client. We have signal handling.
We have process handling, we have Unix domain sockets, we have another small HTTP client backed by libcurl. So we have a lot of examples to kind of get the ecosystem bootstrapped and up and running, where, if you're just kind of interested in how to actually work with this stuff, how to actually see what's going on, you can check out any of these repos.
Take a look at the source code, see how it's all interacting, and see how you can get up and running with Tokio and start building it yourself. And so, the future, kind of what's coming next? Well, the futures and tokio-core crates I just published yesterday to crates.io. What that actually represents is that we have now reached a level of stability with the futures themselves and the actual event loop that they're able to be put on crates.io.
It's performant, it's nice and stable. We feel like the interfaces are in a really solid location, and we'll probably still have an 0.2 at some point, but the idea is everyone can start building on top of these as soon as possible. All the other crates, like tokio-proto, tokio-service, our timer implementations, and TLS, are coming very soon to crates.io as well.
We have a little bit more work to do in terms of stabilizing the APIs and such, and we want to get those out as soon as possible. And then, in the near future, we're gonna have protocol implementations like HTTP/2, which kind of exercises a lot of different portions of an I/O stack and makes sure it's all nice and battle-tested and ready, and also a very, very performant HTTP client, probably through hyper at some point. All right. And, oh my god, you're missing all my awesome slides.
But that was the end of my talk. I have a couple of links here. We have the futures repo, which is under my user, alexcrichton/futures-rs, and then in the tokio-rs organization on GitHub you can also find Tokio itself, which has a very nice readme pointing to lots of different projects. And then, if you're using IRC, you can find us on #rust-internals and #rust, and if you're using Gitter, you can also find us in the tokio-rs channel there.
So mio gives you the ability to register objects, and you can then poll events out later on. And we have an IOCP backend for mio, which is basically giving you the readiness model on top of the IOCP model. The way that works is that internally, whenever you accept a socket, we immediately start a read, and then whenever there's a buffer, we say you're readable, or, likewise, you're immediately writable. So we do a lot of the buffer management internally at the mio layer.
The first protocol I always love to do is WebSockets, because it's got that lovely upgrade feature. Are there ways within the Tokio framework to swap out the HTTP with WebSocket? Or would I have to have some parent enum that swaps between them? Or have you even tried to design those types of things yet?
We haven't quite gotten to the layer of having a WebSocket protocol implementation, for sure, and some of the HTTP implementations we've done have been pretty early on, like just parsing off a couple of headers, and not a lot of streaming bodies in requests. But I think the idea there would be that you would probably use an off-the-shelf HTTP parser, the one the general HTTP client is using, but then you would very quickly start promoting yourself to WebSocket afterwards. A lot of this is still in progress.
But we plan to have... we're not really imposing sorts of buffering strategies as you start going lower down on the stack, and so you can kind of figure out where you want to insert yourself to have as little cost as possible, and ideally not impose anything at deeper layers. But it's kind of in design as to what happens there.
Right. So the idea is that when you call poll, if it's not ready, it will somehow schedule you to receive an event when you would be ready. And as part of that, what you can do is, when you can't make progress, you pull out a handle and then you throw it somewhere, and then the handle gets notified, to kind of let the future go and run again later. So that's the implementation detail.
So what you can end up doing is that internally you have, like, a list of events that happened, and then, when the notifications come in, you start pushing on those events, seeing what actually happened, and then, when you get polled, you just iterate over that list of events and see what happened. So the idea there is that you still have to look at everything; you just have a list of things to look at.
I was wondering if you could give us a little bit more detail on the relationship, or the expectation that you think people should have, for building things directly with futures and streams versus using tokio-service as the abstraction to build into. Like, when would you decide to use one or the other as your kind of layering?
Right. So futures are not... like, you would not want a socket to be a literal stream of bytes. That's just the same way that sockets or files are not iterators of bytes; some of these abstractions don't quite make sense. So you still have the concept that whenever I read a socket, if it returns "not ready", it's kind of future-esque, where it will notify my task when it is ready to make progress.
But the key thing here is that we'll tend to have futures at all layers of the stack. You can kind of, at any point, go off and actually implement a future, kind of convert it to a future, or work with futures. But at some intermediate layers, like the protocol layer, we won't literally be using futures all the way. And then at the service layer, it kind of depends on whether you're actually writing a protocol at the protocol layer; if you're writing a server, you're probably writing this at the service layer.
So this is something where the futures crate itself has no concept of I/O or a runtime or polling or anything about what's actually happening there. So it's very flexible and up to you, as a user, how to poll it, how to run a future, how to compose a bunch of futures together. At that layer there's not really a question to that, because you're the one writing it, so it's whatever you happen to need.
But then, as you go up, that tends to be pretty unergonomic, because you don't want to always deal with everything all the time. So once you get to, like, the tokio-core layer, you actually have utilities to spawn futures. You're saying, like: I want to run this future on the event loop, I want to run this future over here. And that just provides a round-robin sequence where it says: I'm just gonna run all the futures that are ready, poll them, and when they're not ready, I'll just turn around the event loop and do it all again.
So you have the ability, in a future, to kind of yield, where you say: I am making a lot of progress, but I'm making too much progress, so I want to let someone else go make progress for a while. And that's kind of an opportunistic thing. It's not automatic; you have to actually opt into yielding at some point. Okay.
That specific benchmark was threaded in the sense that there are eight event loops, basically one event loop per core, and they're totally separate; they're not communicating with each other at all. The threading story for tokio-core, though, is that there's typically one event loop thread: that event loop is owned by one I/O thread, and then you can have a CPU pool. You can kind of throw work at a thread pool, which is another crate, and so you can offload work onto that as well.
The event loop itself will typically be owned by one thread, and then I/O objects themselves are bound to that. And then, if you want to have a multithreaded server, you tend to have ways to say "share this port", like on Linux with SO_REUSEPORT, or you can have various other kinds of cooperative stuff there.
We actually discussed the chat case, so, like: if I have a chat client where I process a series of messages and reply with messages, how would I store the local context? Like, I assumed there would be a local stack per future execution or something. In Go I would just spawn goroutines and would never care, but what would be the story for Rust in this case?
With futures there's kind of a very strong parallel to things like green threads, or actually threads in general, but they're called tasks. So every future is polled as part of an overarching task, and then a task follows this future throughout its entire lifetime; that's like that one imposed allocation. So where you would be spawning goroutines or threads in other systems, the equivalent here would be spawning new futures in the futures system.