Description
Rust Cologne User Group - Meetup June 2016
More on http://rust.cologne/2016/06/06/rust-anniversary-part-2.html
Help us caption & translate this video!
http://amara.org/v/2Fia/
All right, good evening everyone. My name is Alex Crichton and I'm here from the Rust core team. I actually traveled all the way up from San Francisco just for this one event, just to talk with you guys. Yeah, awesome. Well, so today I'd like to tell you a little bit about the state of Rust today and kind of where it's going: where we've come from since 1.0 in 2015,
what we're currently doing right now, and what's on the near horizon — kind of late 2016 or early 2017 — and what goes beyond that. So the first thing is: hurray! Rust is one year old. We released 1.0 on May 15th last year, so I guess we're a little bit past that by this point, and Huon Wilson actually collected a lot of these really awesome numbers.
So if you look at everything since 1.0 — this is not counting the pre-1.0 statistics — we've had almost 12,000 commits from over 700 contributors, and that's actually an astonishingly large amount, because if you take a look at the numbers from before 1.0, I think we had almost a thousand unique contributors up to that point — and that was over the many, many years driving up to it, while this is just one year after. So this makes it clear:
the steam behind Rust is not abating anytime soon. There's still tons of enthusiasm, still tons of effort actually pushing this project forward. We've had 88 RFCs merged — this is an insane number of RFCs. We have a week-long final comment period for our RFCs now, and this still maps out to, I think, something like two RFCs a week, if you consider the times we're actually merging RFCs. That's an incredible rate of change for a language which has been incredibly stable this entire time.
Now, it's crazy how you can see Rust running in so many more places nowadays than you could at 1.0. We've delivered nine releases — I think we released 1.9 a couple of weeks ago at this point — and of course we've delivered one year of stability, which is kind of one of the true promises of Rust. At 1.0 the promise was that you no longer have to update all your code literally every night; even now, you can take your 1.0 code and be pretty confident
that it's still compiling today. Another thing we've seen since 1.0 is this massive uptick of Rust in production as well. We have this new page, friends.html, on the website, and if you take a look at it you'll see actually a huge number of companies that are using Rust in production today. Each of these has its own story associated with it, and I would certainly encourage you to go
take a look, see if you recognize a few — and if you know of anyone else, feel free to open an issue and let us know. It's been very humbling to see this small project flourish in so many different companies over this time, in ways that we never really imagined. All the time we see these tweets, or Hacker News comments, or anywhere on the web, people keep mentioning: oh yeah, I just happened to ship Rust in production
the other day; I just happened to replace my one little microservice with Rust. It's crazy how you keep seeing that a whole year on, and it's only been picking up since then. So we had a blog post — I think it was late 2015 — that was talking about our focus after 1.0, and it kind of had these three pillars, these three key points. There's branching out, which is taking Rust to far more places than it could go before, and making sure that it runs just as well literally everywhere.
We have doubling down, which is saying that we have all this infrastructure, all these current investments — let's make sure we flesh them out entirely and reap the full benefit from all of that. And finally we have zeroing in: we have a lot of language features, but there are some gaps in them. A lot of times a feature goes maybe 90% of the way there, and this is really about filling in that final 10%,
really going all the way with each of these language features, as you'll see. So these are kind of the three broad areas I'm going to talk about during this talk, of how Rust has been progressing. The first of which is: we now have Rust everywhere. We can put Rust on so many devices all over the place, and it's kind of amazing how we've seen the progression since 1.0.
So one of the biggest things we've seen here, which you don't really tend to consider in terms of running Rust all over the place, is this whole idea of embedding Rust in another language. This is a very unique property of Rust, which you can otherwise only really do with essentially C or C++ — and even those are somewhat difficult sometimes. So we've seen a couple of awesome projects actually doing this. The first of which is called Neon, shown on the slide. This is run by Dave Herman, I believe, at Mozilla.
This is part of Rust's idea of bringing systems programming to the masses: if you're working in Node.js, or you're working in Ruby, and you've always wanted to get the most performance out of your little tight loop, or the most performance out of some other part of the application, or maybe some more portability — you can now lower down to Rust without even having to worry about unsafe code. You can just stay in completely safe Rust the entire time, and you truly are guaranteed no segfaults.
There's no little caveat saying that at the very edge you still have to worry. And not only have we seen this in new languages, we've actually seen Rust embedded in engines today. So if you run Firefox today, you're running Rust code — there is Rust code in Firefox stable. It's currently a pretty small portion:
it's only, I think, the parsing of mp4 metadata — but we're seeing massive uptake. There's actually a project inside of Gecko right now to take Servo's style sheet engine and put it in Gecko itself. That's a massive component going into Gecko, and it's kind of enabled by these properties that Rust has: no runtime, and zero-cost FFI.
But there's always been this kind of weird problem with Rust when you embed it somewhere else. Rust has this macro called panic!, and what panic! essentially is, is kind of a bit of a structured exception — not quite like C++ exceptions or exceptions in other languages, but it kind of destroys what you're currently doing all the way up to some isolation boundary, per se. And the way we implement this today is via stack unwinding. So whenever you panic, we'll actually unwind the stack, run a bunch of destructors, come down
to the bottom, catch that exception, and then probably exit the program, print something out, do something fancy there. But the problem is that this whole unwinding-the-stack idea is undefined once you enter a C program. So if C calls into you, you then call some Rust code, and then it panics and tries to unwind back down — you're probably going to segfault, you're going to have serious problems, and you're certainly not portable across various platforms. So, well,
this has always been true of Rust, and at 1.0 it was certainly very true. So your only real option was: any time C called into you, you'd spawn a thread, which was the only isolation boundary at that point — or you'd just pray that panics don't happen, or do weird linker hackery to make sure it all works out. This was basically an area of Rust that wasn't quite filled out just yet.
We had a lot of ideas of where to go, but we hadn't quite done it just yet. But as of 1.9, I think, we have the new std::panic module, and the crux of the std::panic module is this new function called catch_unwind. That's just a fancy way of saying: you give it a closure, which it can execute, and once that returns a value, it will return you something saying whether it succeeded or whether it failed — and the failure case here is the panicking case.
We can kind of recapture the payload of what the actual panic was. And then this little + UnwindSafe is actually a bound on this function which took not one but two RFCs to get through. There was a huge amount of discussion about this, and the whole idea is that catching exceptions is not exactly a very safe operation to do — there are tons of bugs that this kind of thing gives rise to in C and C++.
It's this whole idea of exception safety: once you've caught an exception, you don't actually know that you can continue, because someone might not have expected the exception to happen. All of which is to say it's relatively complicated — but this UnwindSafe bound is essentially what lets this be a safe function, and Rust has kind of put in a speed bump saying you have to think about it. By default you mostly don't have to think about it, because it's kind of implemented automatically for most natural types, like references or values you move in there.
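As a small sketch of what this looks like in use — the `std::panic::catch_unwind` API shown here is the one stabilized in 1.9, but the closure bodies are made-up examples:

```rust
use std::panic;

fn main() {
    // A closure that completes normally: we get back Ok with its value.
    let ok = panic::catch_unwind(|| 1 + 2);
    assert_eq!(ok.ok(), Some(3));

    // A closure that panics: the unwind stops at this boundary and we
    // get back Err carrying the panic payload, instead of unwinding
    // further (for example across an FFI boundary into C).
    let err = panic::catch_unwind(|| -> () {
        panic!("boom");
    });
    assert!(err.is_err());

    println!("still running after a caught panic");
}
```

An FFI entry point can wrap its body in `catch_unwind` and translate the `Err` case into an error code, so that no unwinding ever crosses back into C.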
A
But
for
the
cases
where
you
do
have
to
worry
about
it,
you
might
have
to
kind
of
it.
The
compiler
will
tell
you
exactly.
This
variable
might
be
exception
safe.
You
might
have
to
go
and
kind
of
reassured
yourself
that
you're
actually
dealing
with
kind
of
a
lot
of
the
common
problems
that
come
up
in
exception
safety
and
another
big
thing
about
this
is
that,
although
we
have
added
this,
which
is
incredibly
useful
in
empathize,
like
you
no
longer
have
to
spawn
a
thread,
you
can
kind
of
transmit
this
information
that
you
panicked.
back to your original application — this is not a shift in Rust's error handling. Rust has historically always done error handling through Result, which is just an enum saying you're either Ok or an Err, one of two variants, and this is still the idiomatic way to handle errors in Rust. You'll notice — I'll talk about this in a second — that it doesn't actually quite work in all modes, but essentially this is not a new control-flow construct.
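To make that distinction concrete, here is a minimal sketch of Result-based error handling; the function and values are hypothetical, but `Result` itself is the standard library type being described:

```rust
use std::num::ParseIntError;

// Result is just an enum with Ok and Err variants; an error is an
// ordinary value you match on, not a separate control-flow mechanism.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port),
        Err(e) => println!("bad port: {}", e),
    }
    // The error case is just another value you can inspect or propagate
    // (e.g. with the try! macro, later the ? operator).
    assert!(parse_port("not a number").is_err());
}
```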
This is only kind of a new building block for those applications which needed it originally but didn't quite have it. And one of the cases where this doesn't work is this whole new idea of: when you panic, instead of unwinding the stack, you actually abort. A lot of times you're not actually sure whether you can recover from the error that happened — panics are things like an index out of bounds, or you tried to unwrap a None, or you might not even know what happened:
it's buried in some library where you're not really sure what state you're in, so continuing at that point isn't always desired. That's kind of one of the big points of panicking, of unwinding exceptions: you can tear down to an isolation boundary and then move forward — but that isolation boundary might not always exist. And another thing is that you can't always implement unwinding.
When you unwind the stack, you kind of run all these little sections of code, and this is stuff we have to generate — stuff that says, oh, run the destructors for those variables, and then go back to the next stack frame and continue there. And we've actually measured that if you don't emit these — which you don't need if you just abort the program instead of unwinding — you can actually get 10% faster compiles as well as 10% faster binaries, and those are not small numbers.
I think this is going to be stabilized in 1.10, coming out relatively soon. So it's very suitable for any application which either cannot recover from panics anyway — you get a little bit of a boost in compile times, since you don't benefit from recovering from panics — or which just can't implement unwinding at all. And so, as I was also saying, we have a massive number of targets. Speaking of taking Rust to more places: in 1.0 we had six targets — 32- and 64-bit of OS X, Linux, and Windows — and nowadays, I think,
when I last checked there were 30, and I'm pretty sure we've added about three more since then. So we have an absurd number of targets that we can compile for now — we're really only limited by what LLVM can target, and LLVM can target a lot of platforms. Another shift we've seen is that MSVC, one of the major toolchains on Windows, is now a tier-one platform.
It didn't even exist at 1.0, and we've seen a massive amount of popularity on Windows and a lot of desire to have this toolchain — and we have that now, fully supported. And a cool statistic I pulled: we're uploading four and a half gigs of data every night. These are just tarballs of the standard library and the compiler itself — four and a half gigs, every night, for all these platforms you see at the bottom, and even more. I couldn't list them all,
because this would just be one giant black slide. But now, once we have all of these targets, we had this problem of: how do we actually give them to you? We have all this great support, and you have very simple questions sometimes, like: how do I build a static binary? This is kind of a great aspect of some other languages, and I wanted the same thing in Rust — I want to build one thing and ship it a whole bunch of times. And it turns out
that nowadays we have this new tool called rustup, which makes this really, really simple. In these five lines of code — some of which can actually be mostly omitted — this is downloading rustup, adding a new target to the compiler, compiling for that target, and you're done. There really is no third step here, no extra step. This is kind of a push-button scenario where you just add a target and you're done.
The whole concept of rustup is that it's a toolchain manager: you can add the stable channel, the beta channel, the nightly channel of the Rust compiler, it'll update them, and you can add new targets to these channels — it'll kind of manage everything for you. If you're familiar with rbenv or RVM or NVM, kind of that suite of tools that multiplex over multiple versions underneath them —
that's the idea of rustup itself. But once we go a little bit farther here and say, all right, what if I actually want to compile for Android now? I want a true cross-compilation scenario — when I'm compiling, it's kind of hard to run the Rust compiler there on Android itself, so let's cross-compile for it. So we'll try the same thing again, where we'll just add a target and try to build for it,
but we get this really weird error saying we can't actually link with this thing called cc. What this is indicative of is that you actually need a lot of extra tools for compiling to Android. There's this whole thing called the Android NDK, and it gives you a lot of system libraries, a linker, and a lot of other various small utilities you might need along the way — and it's required to actually cross-compile for Android itself.
So rustup currently assumes that you're the one managing this, but solving this as well is within rustup's horizon. It's intended that rustup is going to come along and, when you actually add the ARM target, automatically say: I did not detect the NDK — would you like me to install it, and then would you like me to configure Cargo, so that everything is configured along the way? So basically these two lines are all we're going to need in the solution. And the Android cross-compilation scenario is not the only one: you can take basically any host–target pair — for example Windows to Linux, or Linux to Windows, or Mac to Linux, and many more. For all of these specific situations we can have very targeted prompts saying: you didn't have this package, would you like me to install it? — and kind of assist you along the way of doing all these nice cross compilations, which in the past have been very, very difficult to actually perform.
So with these two together, we're not only actually supporting Rust on all these targets, we're putting the power in your hands to run Rust on all these targets — and we're not requiring you to read 30 blog posts and download 30 things and put them all together in just the right order so they work. Instead, it's really just: you issue one command, and all of a sudden you're ready to go and do what you actually need to do.
So the next thing I want to talk about is this doubling down on infrastructure. This is apparently one of the mission patches from the Mir space station — if it's not, well, whoops, but it's fine. So first: what is MIR, this concept of MIR? It is, fortunately, not a space station. Today in the compiler we have a pretty standard compilation pipeline, where you take the Rust source code and you parse it.
I want to mention that the easy things you run into — kind of the first bugs you see in Rust, where you open an issue and we say, oh, that's known, don't worry about it — MIR is actually going to solve a lot of those problems. And finally, by having a new intermediate representation, it gives us a lot of engineering benefits. So this one transition is unlocking all of these steps, and we have a massive number of them queued up right after it as well.
We've been intending to do this kind of refactor of the compiler for quite some time now, so this is just the beginning of the benefits that we're going to be able to reap from MIR itself. Let me just show you an example of what MIR actually does. Kind of the core goal of MIR is to simplify Rust as much as possible. If we take a look at this very simple example — just iterating over a vector and running a function on every element — there's actually a lot of stuff going on here,
as we'll see. So the first thing that we can do is eliminate this for loop. What it's actually doing is taking the thing you're iterating over, converting it to an iterator, and then calling next a bunch of times, iterating over that and running the function on every element. But even this is somewhat complicated: we used to have for loops, while loops, and loops, and now we have while loops and loops — still two different constructs for looping.
So why don't we eliminate that too? We can get rid of while loops entirely, and in this case we'll just expand it to a loop with a match and a couple of breaks in there. You can see the code is starting to inflate a little bit, but from the compiler's perspective it's actually getting much, much simpler over time.
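As a rough sketch — hand-written here with made-up values, since the real lowering happens inside the compiler rather than at the source level — the `for` loop desugars to a `loop` plus a `match` on `next()`:

```rust
fn main() {
    let v = vec![1, 2, 3];
    let mut out = Vec::new();

    // Surface syntax would be:
    //     for x in v.iter() { out.push(*x * 10); }
    // After desugaring `for` into `loop` + `match` on `next()`:
    let mut iter = v.iter().into_iter();
    loop {
        match iter.next() {
            Some(x) => out.push(*x * 10), // run the body on the element
            None => break,                // iterator exhausted: leave the loop
        }
    }

    assert_eq!(out, vec![10, 20, 30]);
}
```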
So we've eliminated a couple of constructs already, but even here this dot notation, where we're calling methods, is actually pretty complicated — there's a lot of stuff going on there as well.
So what we can do is expand that as well. This is something that in Rust we call UFCS, or universal function call syntax, and it's essentially saying that we know exactly what function we're calling, exactly what the arguments are, exactly what the receiver is. There's no extra fluff here — you know exactly what's happening.
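A small illustration of the same call in ordinary method syntax and in fully qualified (UFCS-style) form; the values are made up:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Method-call syntax: the trait and receiver are resolved implicitly.
    let mut it = v.iter();
    let first = it.next();

    // Fully qualified call syntax: the trait, the function, and the
    // receiver are all spelled out — closer to what the compiler sees.
    let second = Iterator::next(&mut it);

    assert_eq!(first, Some(&1));
    assert_eq!(second, Some(&2));
}
```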
So not only have we eliminated for loops and while loops, we've now also eliminated method calls, and this is getting to be relatively unreadable Rust.
You probably wouldn't be writing this anytime soon, but from the compiler's perspective, this is what the analyses always have to take into account — all these layers of indirection keep going. And it goes further, too.
So now we have two control flow constructs: we have this loop and we have this break, which are two different ways to move control flow to another part of the program. We can go even farther and say that we don't even have loops and we don't have breaks —
all we have are gotos. So this is starting to breach into the realm of: you can't even write this in Rust. Rust doesn't actually have an arbitrary goto statement, and part of the point is that we're simplifying Rust so much it's actually becoming somewhat unsafe, where you can't write it directly — but it's always generated from the safe Rust that you would expect. And we're also getting away from control flow in general,
in the sense of brace languages and those kinds of scopes. Essentially, what this is doing is creating a control flow graph. Those of you who have implemented anything in compilers, or worked in compilers recently, will certainly be familiar with the idea of a control flow graph, but what's going on here is: we're breaking up our program into certain nodes in a graph, and then we have edges among them saying how control actually flows among them.
So in this case, we start out by creating an iterator, then we transition to the loop header, where the condition of the loop says: all right, we're going to look at the iterator and see whether we're in the Some case or the None case. If we're Some, we're going to process the element and go right back and try the next one, and if we're done, we break out of the loop.
This is just a simplified representation inside the compiler itself, and you'll find that a lot of compiler algorithms and compiler optimizations nowadays are based on control flow graphs rather than the original AST. Not only that, but it's much easier to understand the control flow in a control flow graph — it's essentially what it's designed to model. So a lot of the associated static analyses in Rust become much, much easier to implement, because the compiler has much better knowledge of what's actually happening under the hood.
Now, this match we're talking about is actually also still kind of complicated — there's a lot of stuff going on here. We're matching, looking at the discriminant of the value to see whether we're Some or None, and then, if we are Some, we're going to pull something out and actually work with it. So we can change this:
we can make it a little simpler by teasing it apart. Instead of binding at the same time, we just switch directly on the discriminant, and then, once we've actually selected the Some case, we cast it as Some and pull out that inner value. So this is definitely not Rust — you could never write this in Rust, and it would be incredibly unsafe if you just arbitrarily generated it — but you can kind of see, from the compiler's perspective,
this is getting even simpler, while retaining the original meaning of the code from the first version. And one awesome thing is going to follow directly from this sort of translation into this abstraction. If anyone here has written Rust where you want to look up a value in a map and, if it's not there, insert it — you'll match on looking the value up in the map, and in the None case you try to insert it.
But the problem is the compiler thinks that, after that match, you've actually borrowed something from the map for the entire scope of the match — so you can't insert while you've borrowed from it, because that would be a mutable conflict. Here, though, the compiler has much more precise knowledge, saying that only in the Some case do we actually have any borrow into the map.
So only right there can it say you can't insert — during, say, the actual processing of that value — but in the None case we know that nothing is borrowed from the map itself, so you can continue inserting into it.
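This is the classic borrow-checker limitation being described; here is a sketch, with hypothetical names, of the rejected pattern and a workaround that works today:

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<String, i32> = HashMap::new();
    let key = "hits".to_string();

    // What you'd like to write — but the borrow checker (pre-MIR)
    // treats the borrow from `get` as lasting for the whole match,
    // so the insert in the None arm is rejected:
    //
    //     match map.get(&key) {
    //         Some(v) => println!("{}", v),
    //         None => { map.insert(key.clone(), 0); } // error: map is borrowed
    //     }
    //
    // A workaround that avoids holding the borrow is the entry API:
    *map.entry(key).or_insert(0) += 1;
    assert_eq!(map["hits"], 1);
}
```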
So that's an example of some of the optimizations, and of the more precise borrow analysis, that we'll be able to do once we've simplified the language this much. And one of the final things I want to talk about with MIR itself
is this whole concept of drop. In Rust you always have deterministic destruction, which means that once a variable goes out of scope, its destructor runs — and you know precisely when it actually goes out of scope. So in this case — this is our control flow graph from before — we create our iterator, we pull out the next value,
we take a look at it, and if it's Some we run some code and go back. But if we're None — and this is the key thing — the iterator value we had from before is going to go out of scope at this point, so we explicitly drop it. This is a pretty big change from the AST today and how trans works, where drops are always implicit in your source code; in the way the compiler needs to understand them, they're very explicit — so in MIR they are.
But there's also this other aspect to drops, which is the panics I was talking about earlier — panics, aborts, all that fun stuff — which is that every single one of these function calls actually has another edge coming out of it: each one of them can panic, in which case you have to clean up all local variables and keep going. So when we first create the iterator, if that panics, we don't actually have any local variables — there's nothing we need to clean up,
so we just jettison ourselves out of the function. But for the next one — once you actually call the next method on the iterator — if that panics, we have to clean up the iterator, so we enter a block where we say: clean up the iterator, and then keep going. And it's the same for our processing function: if that panics, we need to clean up the iterator. So this is showing how drop is very explicit in the control flow graph — both on the panic edges and on the edges of normal control flow,
we have to explicitly say when we're actually dropping a value. And drop is a pretty interesting thing in Rust. We have this notion of drop flags right now, which is: consider a function like this, where you take some data, and if some condition is true about that data, you send it off to another thread, and then you run some other stuff. In Rust, you cannot use this data value after the if block — it could have been moved out.
You don't actually know if it has, so syntactically you're disallowed from using it. But from the compiler's codegen point of view, what's going to happen is: the destructor for this vector is going to run either on the other thread, which the value has been moved and shipped to, or locally after the spawn, when it actually falls out of scope. So we have these two places where we might have to run destructors, and the way Rust implements this today is actually relatively inefficient:
once you move the data out, it just paves over the local variable with a bit pattern saying: I've been moved out, you can't run my destructor. And then, when we actually attempt to run the destructor at the end, we'll check that — either it won't be valid and we won't run the destructor, or it will be valid and we do run it. So let's take a look at this as a control flow graph.
We'll see we have this if block: if it's true, we send the data to another thread; if it's false, we run our other stuff, drop it, and return. And an easy optimization here, you might be thinking, is: why are we paving over this entire value, why are we writing all this information in there?
Instead: if the condition is true, we send it off to the other thread and then say we no longer own this value — our stack flag is set to false. Then, once we come down to the very end, we run our final code, but where we're supposed to drop, we actually have a condition saying: if it's still owned, we go run the destructor, and if it's not owned, we just skip it directly.
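A conceptual sketch of the function being described — the names are made up, and the drop flag itself is inserted by the compiler rather than written by hand:

```rust
// Stand-in for std::thread::spawn(move || ...): just consumes `data`.
fn send_to_other_thread(data: Vec<u8>) -> usize {
    data.len()
}

fn maybe_send(cond: bool, data: Vec<u8>) -> usize {
    if cond {
        // `data` is moved out on this path only; the compiler records
        // that fact in a hidden stack flag (the "drop flag").
        return send_to_other_thread(data);
    }
    // On this path `data` is still owned, so its destructor runs when it
    // falls out of scope here. Conceptually, the emitted cleanup is:
    //     if data_is_owned { drop(data) } else { /* skip */ }
    0
}

fn main() {
    assert_eq!(maybe_send(true, vec![1, 2, 3]), 3);
    assert_eq!(maybe_send(false, vec![1, 2, 3]), 0);
}
```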
Now, this wasn't implemented yet when I wrote these slides, but I think as of two days ago it's actually in MIR now, and it's a massive optimization in terms of the actual run times of Rust code itself. So if you compile with MIR, you'll probably see some speedups in just your raw code, as-is — and the reason for this is that LLVM can see through these stack flags way better than it can through paving over a value.
Today we always do this paving-over, writing a new value. But with the flag instead, if we didn't have the condition at all — if we always ran straight to sending the data to the other thread — you can see that this "if data is owned" check is always false, so we can totally eliminate the entire basic block saying "drop the data". And that is the key to many other optimizations.
This is kind of opening up the world for MIR-level optimizations, processing all this information before we ever hit LLVM. So the final thing that I want to talk about is the doubling down on key infrastructure — sorry, doubling down on the features that we have in Rust today. I'm specifically going to talk about async I/O and futures. And if you've seen Back to the Future: it was an awesome movie; if you haven't seen it, I recommend seeing it. So, taking a look at the I/O world in Rust today, the API you'll find is mio.
It's a very thin wrapper around the epoll syscall on Unix and the kqueue syscall on the BSDs and OS X. All that's really saying is: mio queues up a bunch of stuff and asks, "dear kernel, what happened since I last asked you what happened?" — and it'll give you a bunch of events saying this is readable, that's writable, this had an error on it, or it'll block until anything does happen.
So on Unix you ask the kernel to tell you when you can do some work, and then, when it's ready, you do the work — whereas on Windows, you do the work, and then the kernel tells you when it's done. Those two sound relatively similar, but they end up having to be implemented in entirely different ways. So mio's title of "metal I/O" is not quite as metal on Windows, but there is an implementation of mio on Windows, so we have cross-platform support there.
There's just a little bit more of an abstraction on Windows. And we've seen that this mio crate has become the foundation for asynchronous I/O throughout the ecosystem today: you'll see a lot of crates that are based on async I/O, a lot of event loops, and all of this has mio at its heart. With mio at the core being cross-platform, it can actually run everywhere, and it's got that very thin layer for the very performance-sensitive use case of, for example, Linux servers:
you don't have to worry about extra abstraction once you go down that far. The problem with mio, though, is that it's kind of hard to write. So this is an echo server, and I don't know if you can read that, because I certainly can't. But this is just a simple way of saying: accept the socket, read some data, and then write the data back out to it. It's really complicated — and this is not a problem with mio.
We needed something to wrap mio for us, and then we can build our abstractions on top of that. If we take a look around at what exists today, first you'll probably find eventual, which is a thread-safe futures library, and if you look a little farther you might find mioco, which is coroutines based on mio itself. That's very similar to the old green runtime in Rust — the green threads based on libuv.
A
It's kind of a very similar concept. You'll also find gj, which is, I think, the foundation of the I/O in Cap'n Proto itself; this is very similar to eventual, except it's single-threaded instead of multithreaded. And then, once you look outside of Rust itself, you'll also find that there are frameworks like Finagle, written in Scala, that are used at Twitter. This is all based on futures under the hood, but it's got a lot of nice abstractions inside of it as well. And you'll also see a very similar framework called Wangle in C++ at Facebook.
A
This is essentially solving some of the same problems that Finagle is, which is teasing apart all these composable abstractions at this layer. So clearly there's a nice theme of futures going on here, and so our focus for asynchronous I/O in Rust has been diving into this futures idea and seeing how far we can take it.
A
So if you take a look at what a future is, you find this kind of pretty bland Wikipedia definition, but essentially it's a placeholder for a value that might eventually become available. And all that's a fancy way of saying we have a trait that looks a lot like this, where a future can resolve successfully with an actual item.
A
There's some error that might happen along the way, and you just schedule a callback, saying: at some point, run this callback with this item, or the error, or some result saying what's going on.
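A minimal sketch of a trait along those lines, with a one-shot callback, might look like this. The names are illustrative and not the exact API of the futures libraries being discussed:

```rust
// Hypothetical callback-based future trait: resolve once with Ok(item)
// or Err(error), delivered to a single scheduled callback.
trait Future {
    type Item;
    type Error;

    // Schedule a callback to run exactly once with the final result.
    fn schedule<F>(self, f: F)
    where
        Self: Sized,
        F: FnOnce(Result<Self::Item, Self::Error>);
}

// The simplest future: a value that is already available.
struct Ready<T>(T);

impl<T> Future for Ready<T> {
    type Item = T;
    type Error = ();

    fn schedule<F>(self, f: F)
    where
        F: FnOnce(Result<T, ()>),
    {
        f(Ok(self.0));
    }
}

fn main() {
    Ready(42).schedule(|res| assert_eq!(res, Ok(42)));
}
```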
This is kind of the core. But if you take a look at that, you might think this looks a lot like JavaScript. So are we just getting ourselves back into callback hell? That's the whole thing we're trying to avoid; we don't want to have that in Rust itself.
A
Rust doesn't have a very good async I/O story today, but with futures it's actually very different, because futures are a placeholder for a value that is to come about, and you can do a lot of really strong manipulations with that which look very ergonomic as well. So this is an example of a function which downloads some JSON from crates.io, parses it out, gets a field out, returns an integer, and does all that in a very compact series of lines.
A
We can imagine that this http_get function is just pulling down a future of a response; that's the actual value returned by it. And then it gets used with these things called combinators, which are very similar to those on Result or Iterator or Option. In this case we can say: if that future was successful, the and_then will run the JSON-parse function over the result we got back, and that itself might fail.
A
That's kind of and_then versus map. The map here will change the type from a future of T to a future of U; it changes what's actually going to be resolved. And we'll say: from that giant JSON blob, we'll pull this field out, and then finally we'll try to parse that string as an integer at the very end.
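The same pipeline shape can be sketched synchronously with Result, whose combinators the future combinators deliberately mirror. The http_get and json_parse helpers below are toy stand-ins, not real library calls:

```rust
// Stand-in for the HTTP request: pretend the body is a tiny JSON document.
fn http_get(_url: &str) -> Result<String, String> {
    Ok(r#"{"id": 42}"#.to_string())
}

// Toy "parser": extract the single key/value pair from the body.
fn json_parse(body: String) -> Result<Vec<(String, String)>, String> {
    let inner = body.trim_matches(|c| c == '{' || c == '}');
    let mut parts = inner.splitn(2, ':');
    let key = parts.next().ok_or("missing key")?.trim().trim_matches('"');
    let val = parts.next().ok_or("missing value")?.trim();
    Ok(vec![(key.to_string(), val.to_string())])
}

fn id_field(url: &str) -> Result<u32, String> {
    http_get(url)
        .and_then(json_parse)              // runs only in the success case
        .map(|fields| fields[0].1.clone()) // changes the "resolved" type
        .and_then(|v| {
            // Finally, parse the string field as an integer.
            v.parse().map_err(|e: std::num::ParseIntError| e.to_string())
        })
}

fn main() {
    assert_eq!(id_field("https://crates.io/api"), Ok(42));
}
```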
A
So we can delay that resolution until all the work has actually happened. And then, finally, a very common thing when you're doing I/O is to propagate errors as far as possible: once you hit an error, you just punt it up the stack and say, all right, someone else will handle that error, but I will not continue; there's nothing else for me to do, because an error has happened.
A
So there's actually not a lot of error handling here, and the reason is that it's all happening implicitly: this and_then and this map only run in the successful case. So once this http_get fails, it will essentially just skip all the future code and plumb the error all the way through, and the error will kind of naturally flow out. And not only that, but the and_then means that the resulting future is then allowed to fail, so the JSON parse could fail.
A
The parsing of a string into an integer could also fail, but all along the way this error is plumbed through, and you don't have to worry about doing so explicitly. So this is kind of showing that this isn't callback hell; we don't have to worry about that kind of problem from JavaScript. Futures, which are kind of like Promises in JavaScript, are a much more promising way to do asynchronous I/O much more ergonomically.
A
Finally, afterwards: each of those is kind of showing what the type that's going to be resolved will keep being, and you actually just skip all the code in the error case, because you don't have an object of that value to actually run it with. So if the http_get fails, the JSON parse is never called, because there's no JSON to actually hand it.
A
There are actually quite a lot of combinators for doing things. So, for example, and_then says: from a successful result, return a future. But there's also a combinator called then, which means: given the result of the resolved future, success or error, then do something else and return a future. So there you can explicitly say: I'm handling both the successful and the error result.
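The difference can be sketched with plain functions over Result; and_then_like and then_like are hypothetical names mirroring the two combinators:

```rust
// and_then: the callback only sees the success value; errors skip it.
fn and_then_like<T, U, E>(
    r: Result<T, E>,
    f: impl FnOnce(T) -> Result<U, E>,
) -> Result<U, E> {
    match r {
        Ok(t) => f(t),    // only runs on success
        Err(e) => Err(e), // errors are passed through untouched
    }
}

// then: the callback sees the whole Result, success or error alike.
fn then_like<T, E, U, E2>(
    r: Result<T, E>,
    f: impl FnOnce(Result<T, E>) -> Result<U, E2>,
) -> Result<U, E2> {
    f(r) // runs in both cases; the caller decides how to handle errors
}

fn main() {
    let ok: Result<i32, &str> = Ok(2);
    let err: Result<i32, &str> = Err("boom");
    assert_eq!(and_then_like(ok, |n| Ok::<_, &str>(n * 2)), Ok(4));
    // then gets a chance to recover from the error explicitly.
    assert_eq!(then_like(err, |r| Ok::<i32, ()>(r.unwrap_or(0))), Ok(0));
}
```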
A
You can also do things like map_err, where you can change the error type, but there are a lot of combinators to say: run this action on the future itself, or on the error when it happens. So you never ignore it; you always explicitly handle it at some point along the line. And then there's this very thin shim once you get to the very end, where you actually want to wait for the value of the future; that part kind of depends on the library you're using.
A
At that point, you might be tied to Mio, you might be tied to some thread pool, you might be tied to some other async I/O library, but that's where it essentially bottoms out. It says you'll get the result from that, and you can then match on it afterwards. Does that make sense?
F
I'm just wondering: I read something about C++, and there's await or something like that, and I think they tried to solve the same problem. As far as I understand it, it looks nicer, because then you don't need all these combinators, I think; you just write await, and then, after that, you can continue writing normally, right?
A
The problem with await is that it's blocking: once you call await, you actually wait for the future to complete. One of the main purposes of futures is to compose asynchronous I/O: you fire off a socket request first, then you fire off a read, and you wait for either one to finish. If you await either one to finish, at that point in time you can't execute the other one. So calling await is essentially translating this back into synchronous code.
A
It essentially degenerates to a number of combinators, where you have various ways of manipulating these futures: doing something once the value is there, doing something once an array of values is there. It all fits in the future abstraction, where a future is always a placeholder for something that's later to come.
A
We can talk afterwards, and I'll see if I can explain a little more. So, going over what futures look like in Rust, there are a couple of interesting pieces that come out, and one is that ownership is really, really important. Ownership is the crux of Rust itself, and it really shows in the futures implementations. If you find futures in most other languages, it turns out they resolve to a value, but then you can access this value at any time.
A
So it's kind of permanently resolved to that one value. But in Rust, we want to pass on ownership of the value that's actually been created; we want to keep it moving. And that means that you can actually only run this callback once. That has a lot of interesting implications for the API itself, but as a result we can make sure we always have zero-cost futures: we don't have this extra layer of indirection, this extra allocation, sitting in the middle.
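The run-once, ownership-passing behavior can be sketched with the FnOnce bound; the Resolved type here is hypothetical:

```rust
// A future that already holds its value. schedule() consumes the future
// and hands the value over by ownership, so it can only ever run once.
struct Resolved<T>(T);

impl<T> Resolved<T> {
    fn schedule<F: FnOnce(T)>(self, f: F) {
        f(self.0);
    }
}

fn main() {
    let fut = Resolved(String::from("hello"));
    fut.schedule(|s| assert_eq!(s, "hello"));
    // Calling fut.schedule(..) again would not compile: `fut` was moved,
    // so there is no shared, permanently-resolved cell to re-read.
}
```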
A
We can make sure that you implement the future as efficiently as possible for your one little tiny construct, and then you get all these nice combinators, you get all this nice composability with other actions, and you get that all for free. This is very similar to the Iterator trait, where you have very similar ergonomics in terms of combining futures together, combining iterators together, running all these callbacks, and doing fancy things with that.
A
But the other, final thing that we've seen is that in other libraries, cancellation is a very core part of any futures library. This kind of comes up where you fire off a request, but you don't actually end up needing the response: something dynamically tells you that you don't want to wait for it, and you just want to prevent it from executing any more work, because you're done with it. And in Rust this really shows up in the combinators. So, for example:
A
This is select. Here we're taking a socket off of a listener, so we're accepting a connection. We're then going to run our process function, so we're going to map it over whatever is actually going to go and run the request. At the same time, we're going to create a timer which is going to fire in 1,000 milliseconds, one second, and then we're going to select between the two.
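The request-versus-timeout race can be sketched with standard-library threads and channels; a futures select composes the same two sources without the dedicated thread. The "request" here is a toy stand-in:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Stand-in for processing the accepted connection on another thread.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        let _ = tx.send("response");
    });

    // Wait for whichever happens first: the response or the 1000 ms timer.
    match rx.recv_timeout(Duration::from_millis(1000)) {
        Ok(resp) => assert_eq!(resp, "response"),
        Err(_) => panic!("timed out before the response arrived"),
    }
}
```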
A
You don't want to have all this happen behind the scenes, because you'll never get the value back; you'll never want it again, since the timeout has already fired. So cancellation is a great way to express this, and because this is a combinator, it's a very key part of the trait itself. And one of the coolest things about Rust is that we already have this ability of deterministic destruction.
A
We have semi-linear types, so we actually express cancellation purely through drop: once you don't need the value of a future, you just let it go out of scope, you just forget about it, and it goes away. And this is a really powerful way to say that we are no longer using this future, because it prevents a lot of bugs statically: you can't ever use a cancelled future because, by definition, you don't have access to a cancelled future.
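A minimal sketch of cancellation-through-Drop, with a hypothetical PendingRequest handle standing in for a future:

```rust
use std::cell::Cell;
use std::rc::Rc;

// A handle to an in-flight request. Dropping the handle *is* the
// cancellation: there is no explicit cancel() call, and no way to use
// the future afterwards, because you no longer have it.
struct PendingRequest {
    cancelled: Rc<Cell<bool>>,
}

impl Drop for PendingRequest {
    fn drop(&mut self) {
        // Signal the event loop (modeled by a shared flag here) to stop
        // doing any further work for this request.
        self.cancelled.set(true);
    }
}

fn main() {
    let flag = Rc::new(Cell::new(false));
    {
        let _req = PendingRequest { cancelled: flag.clone() };
        assert!(!flag.get()); // still in flight
    } // _req goes out of scope here: the request is cancelled
    assert!(flag.get());
}
```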
A
At that point. So select and join are other combinators which take advantage of this, and there's quite a bit of work to make sure that you get the right semantics. In a select, the first thing fires; or in a join, one thing fires but it was an error. Then you definitely don't need the other ones, so you cancel them off. And one of the key places this actually gets implemented is the and_then combinator: if the first future is cancelled, you definitely don't want to run
A
the second future. You don't want this whole chain of futures to get fired when they've been cancelled along the way. So you'll certainly see this once you start working with futures. And so the vision that we're seeing for futures is: at this very bottom layer we have Mio, which is the library to build async I/O on top of. We might also have some thread pools, but these are very low-level; you might not want to actually use them directly.
A
They tend to be very difficult to use and un-ergonomic, but they're very fast; that's kind of what they're intended to be. But on top of this we'll see a bunch of other libraries, such as coroutines with mioco, or futures with eventual, and these might even be somewhat built on top of each other, depending on how it actually shakes out. But this is an extra layer of abstraction on top of the Mio and thread-pool worlds. This is the more ergonomic layer, which you might actually want to consider using.
A
So you'll have applications that are built on coroutines, you'll have applications that are built on Finagle-like frameworks, or they might be built on eventual directly, or they might somehow blend the two of these. But the key thing here is that we have this layering scheme, where futures are not the end-all be-all of async I/O in Rust. We have all these layers that are useful, each in its own right. So we have this whole stack, and we need to develop this whole stack.
A
So it's going to require a lot of iteration and a lot of discussion to see what exactly this needs to look like, what the very precise semantics are in all these edge and corner cases, and this is going to be a really long, ongoing discussion with the community as we continue to develop these libraries. All right. So, finally, I want to talk about some of the events we have upcoming in 2016. That was kind of what's been happening and what's going to happen directly, but first up:
A
We have a lot of conferences going on. In 2015 we had one conference, RustCamp, over in Berkeley in California, but now, this year, we have three separate conferences: in Portland, in Berlin over here, and in Pittsburgh itself. So I would highly recommend, if you're interested in going to any of these conferences, feel free to submit a talk to them, or just sign up for them. I think a couple of them don't have all the tickets on sale yet, but they'll be coming soon.
A
We have crates that have been baking for quite a long time, and now they're actually being supported fully in the standard library itself. And so, in addition to all that, we have a ton of new features in the pipeline; this is not even an exhaustive list of everything that I could think of. One of the major ones is incremental compilation coming down the pipeline.
A
We have nonzeroing drop, which is in theory a new feature, but it was actually implemented right after I made this slide, so it's something we already have in Rust today. Other great things: I was talking about rustup, where we're going to have all this toolchain management, where cross-compilation truly is push-button. We have impl Trait, which is a very concise way to return an iterator, to return a future, to return a closure.
A
We have a very strong story for these now, at this point, and we think we might actually get them roughly to a stable point: not stable Rust, probably still nightly, but somewhat de facto stable. And, finally, a new revamp of the macro system, with new macro rules and kind of a new layer in terms of hygiene resolution, which will play with the rest of Rust pretty nicely.
A
That's actually a really misleading name. What we did yesterday was remove something called filling drop, which is what I was talking about: when you move a value, you pave over it with a specific bit pattern. It used to actually pave over it with zeros, except then someone could actually rely on it being zero, so we made it arbitrary instead. So now, with nonzeroing drop, we have stack flags instead, which is to say we won't actually pave over anything.
C
Okay, so it's kind of like C and C++, where you can do it, just a little more ergonomically in Rust, where you have an injection point where you can actually run this code. So as long as you have a way to tell the optimizer "don't optimize this away", like we have volatile writes, right, you can probably do that. But you also have that guarantee in Rust, right?
A
Right. And so, to wrap this all up: a lot of the focus that we've seen right after 1.0 is kind of the same focus now, in terms of moving forward. We're seeing branching out, with Rust and WebAssembly, Rust on new platforms, rustup and all this toolchain management. We're seeing the doubling down on infrastructure investments, in terms of MIR really coming to take shape: MIR, which we've been working on for so long, has started to produce a lot of its fruit.
A
We see a lot of new optimizations, a lot of new paradigms that you can actually write in Rust itself. And, finally, we have this zeroing in, where we're developing the async I/O ecosystem, and we're developing features like specialization, or features like impl Trait, kind of closing gaps in the language, which allow you to do paradigms you haven't actually been able to do before. That's all I have for today, so thanks so much for coming.
A
This is all working pretty solidly; can we make it even better once we have futures on top of that? In terms of which is more ergonomic to use: I have not personally worked a whole lot with coroutines, but I would suspect, from what I've seen, that they actually end up being a little more ergonomic in some cases, because you don't have to worry about this and_then, or a few more closures here and there. But it's mostly a wash, because it kind of all translates to futures.
D
So, talking about futures: have you also thought about implementing observables? Because they're basically futures with multiple items spread over time. And I've looked at some crates; I think one of them is called carboxyl or something like that, and it tries to implement observables, or event streams, in Rust, and it's pretty complicated, and I think it doesn't really work that well with the borrow checker. But I could see some improvements there, and I think a lot of opportunities to statically check more of the variables beforehand, right?
A
This is actually currently implemented: eventual today has this concept of streams, where we have a list of values that's going to be produced at various points in time. Those are all actually built on top of futures themselves, and the way they're planned in eventual, a stream is actually just a future of the first item and then the rest of the items. It's kind of a future of something and then itself again, and then eventually that just exhausts, and you don't actually get anything back out.
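The "future of the first item plus the rest" idea can be sketched with an immediate (non-asynchronous) stand-in, where the tail is just a thunk:

```rust
// Sketch of stream = future of (first item, rest of stream). The
// "future" here resolves immediately, so the stream is a lazy list.
enum Stream<T> {
    // The stream is exhausted: nothing comes back out.
    Done,
    // One resolved item, plus a thunk producing the rest of the stream.
    Next(T, Box<dyn FnOnce() -> Stream<T>>),
}

fn countdown(n: u32) -> Stream<u32> {
    if n == 0 {
        Stream::Done
    } else {
        Stream::Next(n, Box::new(move || countdown(n - 1)))
    }
}

fn collect(mut s: Stream<u32>) -> Vec<u32> {
    let mut out = Vec::new();
    // Repeatedly resolve "the next future" until the stream exhausts.
    while let Stream::Next(item, rest) = s {
        out.push(item);
        s = rest();
    }
    out
}

fn main() {
    assert_eq!(collect(countdown(3)), vec![3, 2, 1]);
}
```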
A
So this is a concept we've thought about. I've also been reading about this concept of reactive streams, which in Java works very well in terms of controlling back pressure and dealing with the rates of flow throughout a system, making sure they're all balanced. So it's certainly on the radar. So far, from what we've seen, it's kind of an add-on to futures; it's certainly something we can build on top of what we currently have today.
C
I'm really waiting for MIR, because I'm interested in it, and I think that's an opportunity to try to compile Rust to other targets, like other intermediate languages. Like, say, SPIR-V, the intermediate language of the Khronos Group that can be executed hardware-accelerated.
A
I think we're not planning on stabilizing the MIR representation, like any sort of textual representation or in-memory representation, so it would probably be restricted to nightly for a while. But for things like WebAssembly, this is probably the vector that we're going to take for the initial implementation.
A
And on the question about MIR earlier: there is actually a proposal to turn MIR on for nightly literally right now, and there's been some discussion going on about whether to turn it on by default or not, and we're going to see how that plays out. But it's definitely very ready. The nonzeroing drop was one of the final performance improvements to get it back up to par with the old trans, and now that we're at that point, we can just keep on moving it forward.
A
For my taste: I haven't personally worked too much on MIR itself, but the few times that I have glanced at it in the code, it's actually very well documented, and it's pretty clear. If you have any experience with compiler IRs, or various things here and there, it'll be very natural; it kind of flows, it's very readable, and you can kind of look at the basic structure.
B
You just mentioned, in the embedding-Rust part, that there's zero runtime, and that's a bit too good to be true, because there's obviously something that needs to be done. Can you tell us a bit about this? Like, are you just generating boilerplate, or is it moved to the runtime? Because, well, you mentioned the stack-unwinding part, but maybe global-scope initialization and teardown, that sort of thing. How is it done, then, if it's not in the runtime?
A
So "runtime" is kind of a loaded term; it kind of depends on who you talk to and how they actually define it. If you consider C as having a runtime, then we definitely have a runtime; if you consider Java as the threshold for having a runtime, then we definitely don't have one, like we have no garbage collector. But essentially, there is some glue; there's some global state.
A
We have essentially no life before or after main, so in all these cases we don't actually have to deal with that, because we don't have it. But there are very small pieces of state here and there: I think the arguments that you pull off of the command line need to be initialized somehow, and maybe a few other tiny things. But it ends up being that we just don't provide the abstractions for doing this, and the unwinder is just a library that reads metadata written by the compiler.
A
So, in terms of how low down you can actually get: there are kind of two-ish standard libraries in Rust. There's std, the standard library, but then there's a very small subset of that called core, the core of the standard library. It truly is a subset, in that the standard library literally just re-exports everything, but this small subset assumes nothing; it doesn't even have an allocator. So this is kind of the bare-metal core of Rust, where you can literally run it anywhere.
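For a flavor of what core-only code looks like: the function below uses only items that libcore re-exports (slices, iterators, Option), so it would compile unchanged in a #![no_std] crate. The function itself is a made-up example:

```rust
// Checked average over a slice: no heap allocation, no OS services,
// nothing beyond what libcore provides.
pub fn average(xs: &[u32]) -> Option<u32> {
    if xs.is_empty() {
        // Option instead of a panic or a sentinel value: the usual
        // Rust idioms survive all the way down to bare metal.
        return None;
    }
    let sum: u32 = xs.iter().sum();
    Some(sum / xs.len() as u32)
}

fn main() {
    assert_eq!(average(&[2, 4, 6]), Some(4));
    assert_eq!(average(&[]), None);
}
```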
A
So we've seen a lot of kernels written on top of core; we've seen a lot of very bare-metal projects written on top of core. And that's where you assume you can only do things like addition or subtraction, or memory moves and memory copies; that's kind of what you would expect from C with no runtime. But it's giving you all the idioms of Rust: you still have Results, you have Options, you can panic, but it's up to the actual user, who then actually writes the implementation.
E
On the first slide you said 18 targets, and then it was 30, and then I was like, oh, kind of everything that LLVM supports. But I don't know; it just seems like everything that LLVM supports. How many of those targets are truly supported, in the sense that somebody really cares for them? Because everybody likes some exotic target; it's exciting to do, but nobody really supports it.
A
I forgot to mention this, so thank you for reminding me. We have a notion of tier 1, 2 and 3 platforms, for the guarantees that we provide: tier 1 is tested and built, tier 2 is built, and tier 3 is "someone tried to get some port working". So we have eight tier-1 targets: Windows, Mac and Linux, 32- and 64-bit, and on Windows we have MSVC and MinGW, for a total of eight. For the tier-2 targets, this means that:
A
We ourselves are producing binaries for those targets; we're not running tests, mostly because we don't have the hardware to actually run the tests. Those include targets like Android and musl, which we actually do run tests for, but they're kind of not fully up to par with everything else. And then, once you get beyond that, you have things like ARM Linux, PowerPC Linux, AArch64 Linux, or iOS.
A
These are all building, so we make sure that they can compile Rust. You kind of have the guarantee that if a target, like the other platforms, runs all the tests, it'll probably run your code, but without the major guarantees. So, essentially, if you want something that literally everyone is using, it's a tier-1 target; there are eight of those. MSVC is the only one that we've added since 1.0 was released, and Android and musl, which is the static libc, are the closest to next becoming tier-1 targets.