Description
One of the capabilities async Rust enables is executing code concurrently with ease. We have solutions for this in the ecosystem, but none yet provided by the stdlib, because there is still a lot of design work left to be done.
In this talk, Yosh will cover a basic model for concurrent execution of futures, walk you through the design space of this feature, showing various ways in which it can be implemented, and finally showcase a novel futures-concurrency library that ergonomically implements concurrent futures execution in Rust.
Yosh on Twitter: https://twitter.com/yoshuawuyts
Rust Linz on Twitter: https://twitter.com/rustlinz
Hey everyone, welcome to my talk: Futures Concurrency in the Future. Wait, big screen, I can see myself. Okay, that's gone. Futures concurrency in the future, maybe, kind of. Hey Rust Linz! So yeah, I'm Yosh. I work as a Rust developer advocate at Microsoft as my day job.
So, you know, in case you don't know what Microsoft is: Microsoft. Yeah. So, I've got cats, that's very important. This is Chashu. She was eating from my hand up until just now; there's a treat here, but she's no longer interested. Maybe she'll make an appearance later. And this is Nori, sitting in a box. And this was them together in our last apartment, doing what they do best, which is asking for food that they're not allowed to have, and looking very cute while doing it. So yeah. Oh, double slide! What else did I do?
Yes, maybe you know me; what might you know me from? I helped create async-std once upon a time. I am currently part of the Async Foundations working group: a Rust working group responsible for making async/await things happen in the standard library and the language.
Why this talk? What are we talking about today, what's the broader theme here? We are talking about futures and async in Rust, which started in 2015, or maybe 2008; it kind of depends on where you start counting. std's Future today kind of looks like this: there's a couple of traits, like three functions, and barely a description. So it's not much, and we're trying to expand on that. The Future trait itself only has a single method.
What we really want to do is expand on these traits. We want to build them out, make them a richer experience, move past the MVP stage. So what we'll be covering in this talk is possible designs, and the considerations around those designs, for async concurrency adapters that we could potentially add to the standard library. That's the broader theme; we'll dive into the details.
More specifically, here's what we'll cover. First we'll look at terminology; then at the various modes of concurrency, to establish a bit of theoretical background; API design comes next; then merging streams, which is another kind of concurrency, not quite futures but a little different; and finally, looking ahead.
Hey, we all love terminology. This talk is intended to be accessible: I don't expect you to be on the lang team or anything, so I'm trying to start from the very beginning.
If you already know these things, just bear with me. So: parallelism and concurrency. They're different. You might sometimes hear the terms used interchangeably, but they're really separate things. The way I like to think about parallelism is as a resource: any computer has a set amount of parallelism it can give. Usually people refer to it as the hardware threads in your computer, though sometimes it's locked down through software, like a container, leaving fewer threads available, less parallelism, than what the underlying hardware exposes. So it's just a resource, with a minimum of one and a maximum of however much your computer exposes. Using multiple cores on a machine is a typical example of parallelism in action.
Multiple cores work at the same time and do stuff at the same time. The way you can measure parallelism in Rust is with the num_cpus crate: you call it and it gives you back a number, and that number is the resource. We won't be talking about that today, though.
So: concurrency, not parallelism. All the multi-core stuff is not really in the picture today. Instead we'll be talking about concurrency, which is a system structuring mechanism. (I lifted that phrasing from the literature, by the way.)
Node.js, for example, is famously a single-threaded JavaScript runtime. There are multiple threads behind it, but the actual thing you execute on is single-threaded, and it's still async. That's because async programming does not require multiple threads; concurrency does not require multiple threads. You can structure a program in such a way that you use just one thread but still have stuff executing concurrently. So they're not the same thing. Let's give a practical example. Or, well, not quite yet.
What's a future? A future is an asynchronous value. The way you can think about it: it is a value that will only become a value after you call .await on it. There's some magic going on under the hood that I don't want to dig too deep into, but here's a practical example. Say we have `let num = 10;`: we're defining a variable named `num` which has the value 10. Here's the async version of that: we have a future that will resolve to the value 10, and when we await it, it resolves to that value, which we can then assign to a variable binding: `let num = async { 10 }.await;`. And instead of creating an anonymous future using the async keyword, we can also create one manually, using something like `future::ready(10)`; that gives the same exact result.
Whenever you see this, imagine that instead of resolving to a number we could be doing something more interesting, like reading a file asynchronously. (Whoops, I swapped the question mark and the .await on the slide, but you get the idea, hopefully.) So `let num = async { 10 }.await` is a placeholder for doing more interesting things, like reading files or making network requests. All right.
The main takeaway of what a future is: while a future resolves, we can do other stuff concurrently. So let's take a look at the hello world of concurrency, a quick little join. Starting off, let's do some stuff sequentially. We define a future `a` which resolves to ten, delayed by one second (the delay helper is part of async-std; I don't know if other runtimes have it). So we say: this resolves to the number 10 after waiting for one second. Future `b`: the number 11, also after waiting one second. Then we await `a`, which resolves to 10, then we await `b`, which resolves to 11, and that took two seconds: one second for the one, another second for the other, total time elapsed two seconds.

If we join them instead, we get the result of both back, and now it took just one second total: they wait at the same time, so we don't wait twice in real time. This actually reduces the real execution time even though no multiple cores are involved; it's still concurrent, and we're not sitting around waiting. Yay, concurrency. Okay, hopefully that puts you in the mindset of how concurrency is used.
First act: modes of concurrency. Here we get a little bit techy. First off, infallible concurrency. What's infallible? Basically, a future which does not return a Result; that's what we mean by infallible here. The two modes of infallible concurrency are: wait for all inputs, or wait for the first input. Future::join is "wait for all inputs": we give it a set of inputs and wait for all of them.
Using a race instead, we can rewrite it as `a.race(b).await`, and that resolves to 11. What happens here is: we have two futures of the same type, and we get whichever future resolves first. Future `a` takes four seconds to resolve; future `b` resolves instantly, which means we get the value of future `b`, which in this case is 11, and we discard future `a`.
A
When
say,
you
have
a
file
system
call
like
that
you're
like
getting
something
from
cache,
but
maybe
you
also
have
a
request
going
on
you're
like
oh
just
give
me,
whichever
one
like
resolves
first,
I
believe
firefox
used
to
do
this
or
still
does,
and
sometimes
like
networks
faster.
But
sometimes
you
know
on
very
fast
hard
drives.
Hard
drive
is
faster,
so
it
kind
of
does
both
and
says.
Give
me
give
me
the
first
result.
That's
what
bracing
does
so
yeah
wait
for
first
output.
Now, if we introduce Results, fallibility, into the mix, things become a bit more interesting. Something's fallible when it returns a Result, so in this case a future of a Result. (These are like effect systems; you don't need to know too much about that, but it creates a fun little matrix.) With fallibility we still have the first axis, wait for all outputs versus wait for the first output, and there's now a second axis: do we continue on error, or do we return early on error? So we can plot them out in a little table.
A
We
can
fill
in
there,
which
is
we
wait
for
all
outputs,
and
then
we
continue
on
there.
So
if
an
error
occurs,
we
we
just
keep
on
going
now.
Here
we
have
a
race
right
here.
We
can
put
that
in
future
race,
which
way
it's
for
the
first
output
and
if
an
error
occurs,
we
actually
do
return
early.
It's
kind
of
like
counterintuitive
that,
like
sits
on
the
bottom
right
corner,
but
it's
not
aware
of
like
what
result
does
so
an
error
occurs.
It
says,
oh
cool.
Instead, for that behavior, we have a method called try_join. (Oh, the alarm's going off outside.) We can write `a.try_join(b).await`: here we get back (10, 11), because both values are Ok. But if we switch it up so that we have an Ok future and an Err future, then this will return early: it short-circuits and resolves to the error value. So we can put that in the table too.
try_join returns early on error, but will otherwise still wait for all outputs. If the values are all Ok, you get all the values; if one of them errors, it drops all the futures and just gives you back the error. Now, for the spot in the top-right corner: we can replace `a.race(b)` with `a.try_race(b)`. We await that, and in the all-Ok case we simply get back an Ok. But something interesting happens if we convert the fast future to an error.
A
The
the
first
feature
to
resolve
is
future
b,
but
we
get
back
the
value
of
future.
A
it
looks
at
future
b.
It
says:
oh
wait.
That
is
an
error.
Let
me
not
return
early
quite
yet.
Let
me
try
and
like
wait
for
future
a
to
come
back
and
then
it
says
like
okay,
then
the
name
tri-race
is
kind
of
like
not
great
here,
and
we
need
to
work
on
that,
but
you
know
yeah.
We
can
fill
that
into
into
the
chart
like
this.
A
So
if
we
rewrite
this
and
we
both
have
errors,
then
you
know
the
output
of
this.
Is
it's
an
error
right?
It
will
try
and
keep
going
and
try
and
get
like
a
single
value
like
race
values
up
until
it
has
no
more
values
to
like
resolve
and
if
there's
an
error
it
will
like
retry.
That's
what
that
does
yay
that's
good.
Okay, so that's our little overview. Other languages have concurrency methods as well. In JavaScript, in Node.js, you can plot the Promise methods into the same table: Promise.allSettled goes there, Promise.race there, Promise.all there, and Promise.any in the top-right corner. They do the same things, and they all became available, stabilized, late last year.
So now we have a sort of framework for the various concurrency modes, and we can dig into API design. Like I briefly mentioned, some of these things are already available in libraries; async-std and futures-rs both have them, and I believe Tokio probably also has something. But we're trying to figure out ways to stabilize this into the standard library, and, as this talk argues, I think we can do better on the API design.
So let's dig into the exact APIs for a second. We have a two-way join; I'll use join as the example, but this applies to all the combinators here. A two-way join looks like this: futures `a` and `b`, we join them, and we get a tuple back. The return type of that is `Join<T1, T2>`, resolving to a tuple, because tuples can contain values of different types, and you need to represent that in the type.
A three-way join looks like this: we start off with our two-way join, then (oh no, very fancy) we split it off into its own future, which doesn't really matter, create a third future that resolves to the number 12, and join that. And then this becomes a nested tuple: the tuple (10, 11) on the left-hand side and the number 12 on the right-hand side, so `((10, 11), 12)`. You would probably expect `(10, 11, 12)`, but it comes out nested.
That's not good; it's kind of annoying to work with. And that is the return type, because we're nesting Join futures. A four-way join, you can probably guess by now: we add a future `d` and get back a three-level nested tuple. You can imagine that as we nest more and more, things become worse and worse.
The way this is solved in many of the libraries is by having a join macro, which uses an async block and, based on the input, generates code that returns a flat tuple of the output types. The return type is generated by the compiler at compile time, essentially, via the async block. Okay.
So, anyway, a question we need to consider here is: is N const? "Const" here means: does the compiler know the length at compile time? (Couldn't quite fit that into a bullet point.) In other words, do we know the length, the size, of the type at compile time?
If we don't, what we can do is create two futures and put them inside a Vec, and keep adding futures to that Vec. A Vec can grow as long as you like; as long as you don't run out of memory you can just keep putting them in there, and you don't need to know up front how long it will be. That's a nice quality. And then we can call something to join it.
We await that and get a Vec back, and the method in futures-rs for that, since we don't have one in the standard library, is join_all. So that's a different import you need to be aware of: you need `futures::future::join_all` for this, whereas for the join macro you would use, for example, async-std's `future::join!`. These are all slightly different ways to get to the same place, and you need to remember them, which is not great. They also have different calling conventions: there are different import paths, and one is a macro invocation living in the macro namespace while the other is not. So depending on a variety of factors you have to think: okay, which one should I use? Do I want types like race_all, try_race_all, try_join_all, or macros like race! and join!? If you look at the whole matrix, it becomes a lot. And this applies to all the other concurrency adapters too. So I set out to ask: can we unify this?
I've been thinking about this for a couple of years now; I've written a couple of blog posts about it. And I came up with an experiment. It's called futures-concurrency. It's a library, and you can use it today. It's not super mature, it has like eight commits, so don't expect too much. But I just want to show the direction that we could be taking things, or that I'm currently taking things.
So, the two-way join: we saw it as `a.join(b)`. Using futures-concurrency, using the traits defined there, we can instead write `(a, b).join()`, and if we await that, the output becomes `(10, 11)`. If you want a three-way join, all you do is define a new future and write `(a, b, c).join()`, and you get back `(10, 11, 12)`. And you can keep expanding on this.
So if we have 15 futures, we can just join all of them manually if we want to, and it will just work. And not just tuples: arrays too, by the way, which is kind of nice.
For non-const types, types whose size we don't know at compile time, like Vecs of futures: rather than using join_all, we rewrite it so we just have a Vec, call `.join()` on the Vec, and get a new Vec of the resolved outputs back, which is quite nice. So that's sort of where we're at in terms of the design.
You can use this with tuples, with Vecs, and with regular arrays as well, and they'll all just work. The nice thing here is: you have a container type, you put your futures in there, and then you suffix it with your mode of concurrency, whether that's join, race, and so on. That's the idea. You take your futures, you apply your mode of concurrency, and you get your values back out in the same shape they went in, so the translation hopefully becomes very fluid.
So that's the theory. Implementation! (This part is very short.) What do we have today? In futures-concurrency, the Join trait looks like this. We don't have any of the other methods implemented quite yet, and there's some stuff we still need to figure out, like: is this the right trait? How can we move this into the standard library?
I have a working theory that we could move it in there today, but we'd need to seal the trait, lock it down, so that we stay future-compatible. And there are legitimate questions, like: how do we introduce try_join? There are some fun considerations there; arguably try_join should be the same method as join on the Join trait, but we can't express that today. So how do we do that?
There's stuff to figure out, experiments left to do. But if you want to use this mode of concurrency today, it is fully implemented in async-std: try_join, try_race, all these things do exist, albeit behind a feature flag. So: act four, merging streams. And this is new content.
What I mean by that: I published a blog post last week covering most of the stuff up until now, but this part comes from an unreleased blog post, essentially all the material I had to cut from the previous post, which has now become its own post. So if you didn't read that, this will likely be new.
Anyway, that was a lot of words; let me have a sip of water. All right: merging streams. So... oh, interesting, I forgot which example this is, so I'll just talk it through.
What we're doing here is defining a future with a value of one. Again, you can replace that with a file system call, or getting something from a channel, stuff like that. We've additionally added a delay of one second, so it will resolve after one second. There's a second future, also with a delay, and a third future with a three-second delay, so we know `a` resolves before `b`, which resolves before `c`.
So here we create a little array, call `.race()` on it, then call `.await`, and we get 1 back. The problem we're trying to address here is: what if we want both things, that is, to get each future's value back as soon as it's available?
We want the value 1 available to us as soon as it resolves. But rather than discarding futures `b` and `c`, which is what race currently does, we eventually also want the values of `b` and `c`, because we care about those. We just want to start acting as soon as `a` is ready.
So race is not the right way to do that. Well then, how can we do this? Looking at the past: the futures-rs library says we can use the select! macro for this.
The way this works is as follows; I took this example from the async book, which is officially part of the rust-lang org, and which presents it as the hello world of how to do stuff with select. So I'm not trying to single out any specific example, but I think this one is pretty representative.
This is the basic example, which creates a counter that sums up a bunch of numbers. Going through it line by line: we import `futures::stream` and `StreamExt`, and we import the select! macro from the futures lib. We create a stream from an iterator which yields the numbers one, two, and three. Then we call `.fuse()` on it, which is important.
What fuse does is guarantee that once this stream has yielded None once, it will forever continue to yield None, rather than, say, panic, which would otherwise be valid behavior. So we fuse it. Then we have stream `b`, which we make return a Vec of four, five, six, and we fuse that too. Then we initialize our counter to zero and create a big loop, because we want to keep looping over the streams inside it. Now we start to use the select! macro.
The select! macro returns an item of some type. Here we say: assign `item` from `a.next()`. So whenever `a.next()` resolves to something, we create a variable called `item` and return it from the block.
The same goes for `b`: that arm also returns an item. And there's a special select keyword called `complete`, which says: when all the streams have been exhausted, break, and that breaks the outer loop. Then we match on the item, and we say: if this is Some, get the number contained within the Option and add it to our total. And finally we assert that the total is 21. So, putting it all together, those are the various parts, and that's the hello world of using select. Now: async-std's stream merge.
With merge, however, we don't need to fuse the streams; that's one very nice aspect of it. No fusing going on; you just have two streams. We initialize our counter like before, and we call `a.merge(b)`, which creates a combined stream from the two streams.
Merge says: as soon as a value is available from either stream, give me that value. (It does some randomization internally, but essentially it merges multiple streams into a single stream.) It's not like zip, which creates tuples, I believe, or chain, which exhausts one stream and then the other. Merge says: whenever there's data, I'll take that data. It's all async, so stuff resolves at different times.
So `a.merge(b)` creates one single stream here, and then we say: for each number, add that number to the total. We await that, and then we assert our total is 21. And because we're doing stuff with numbers here, you may notice that the async Stream API follows all the adapters from the standard Iterator trait.
One thing the select block says it's really good at, or at least attempts to do, is not just handling streams but also handling futures as they resolve. You can put a stream in there, but you can also directly give it futures, and when those resolve, you can select over them too.
Let's take a look at how that looks using stream merge. There are roughly three steps. First, we convert our futures into streams. Here we have three inputs; sorry: `a` is an array of values, which will become a stream; `b` is an array of strings (&'static strs), which we also convert into a stream, because it holds multiple values; and `c` is a future which also yields a &str. So: `stream::from_iter`, `stream::from_iter`, and `stream::once`. `stream::once` takes a single future and converts it into a stream. Now `a`, `b`, and `c` are all three streams: we've normalized the futures into streams. The second step is to map the streams into an enum.
We define an enum, say `Message`, which holds either a Num or a Text. Then, on our streams, we say: all the ones that are numbers, `.map` them into Num; all the ones that are text, `.map` them into Text. Now they're all of the same type, and now that they're all the same type, we can call merge on them: `a.merge(b).merge(c)`.
This now works and gives us back our combined stream, and then we iterate over it. We get a `Message` out; because it's an enum and we want the values out, we match on it: for Num we print the number, for Text we print the text. And now we have it all together: a variety of input types, all merged into the same stream.
It's streams combined with futures combined with different types, and they all get flattened into a single loop. And rather than having a select block with custom syntax and semantics, with things like `complete` and break (I forget exactly which keyword it was), it just becomes a plain match statement again, which is kind of nice. So: shiny future. This is the last part, the last little part.
We're almost there. So, putting our speculative hats on: what could stuff look like in the future if we had a bunch of new language features? Warning: this is all made up. None of this lives in RFCs; all of it lives in my brain.
What you're seeing here is not real; don't expect it to become real. But it's fun to think about, and I think it's also useful to set a dot on the horizon of where we could go: what would be better, how could things look? Because that helps us design things as well. So this is my grandiose design. There's a lot going on here; let me break it down.
So, first off... whoa, wait, hold on. Oh, did I add the wrong arrows? Yeah, I did, sorry. There are three parts to this. We have the enum Message part, where we define our enum. Then we have the center part, where we're mapping values into that enum. And then we have the final part, where we're looping over things.
First off: async iteration. Async iteration currently doesn't exist as dedicated syntax; we write `while let Some(..) = stream.next().await` loops instead. One could conceive of `for await` loops: where we have `for x in y` loops today, we would have `for await x in y` loops. That's something we could most likely get into the language. Async overloading is a fun one; there's maybe a blog post coming on it.
The idea is that, just like Swift has async overloading built in, maybe we could have it too. Rather than having separate traits, Iterator on the one hand and an async iterator or Stream trait on the other, the way it is today, the language could detect: is this running in an async block, are you calling this from an async block? If so, it converts this into the async version of the method. So here we just call into_iter and it works, because it resolves to an async version of that trait.
Finally, the Merge trait: what if stream merge had its own trait, and it was part of the top-level imports, in scope by default? We could do the same thing we were doing earlier with futures: you just have an array of separate streams and you call `.merge()` on them, or a Vec, or... yeah, maybe not a tuple, but the other two, yes.
Then: async I/O in the standard library. What if println! actually didn't block, but was async? That requires an executor, a runtime, all this stuff. Inline print args: I believe that's coming in Rust 2021, which is a couple of weeks out. You'll just be able to write the variable inline in the format string, so you don't need a separate comma and then the parameter, which is nice. Match shorthand is another fun one.
What if, rather than indenting the match statement in a separate block, we could lift it out to the top, so it becomes a one-liner together with the `for await`? Maybe that's nice. So yeah, that's all very shiny-future. Maybe we can make these things happen; I think it could be fun to get some of this stuff in. So: act five, looking ahead.
So, what's next for this whole proposal? I'm sure it was quite a bit to take in. I'm going to release a third blog post in the series of futures concurrency blog posts, covering what we covered in the second part of this talk: the select blocks, stream merge, and so on. I need to put it into writing so I can share it with the rest of the working group; that's in progress right now. Then we need to implement try_join.
I think I mentioned that earlier as well: we don't have the try_join and try_race methods in the futures-concurrency library yet. Those will have some fun design implications that we may want to address at the language level as well; that will be the fourth blog post. After that, we'll probably need some time to digest all of this, figure it out, talk to folks, and see how people feel about how it all works. And if all that goes well, we can start writing RFCs to actually land these things in the standard library. Yeah. Anyway.