Description
Benjamin Blum from the Research team presents "Parallel Programming with Failure in Rust"
Michael Sullivan from the Research team presents "Static Typeclass Methods in Rust"
Video starts 1 min and 40 seconds in.
Help us caption & translate this video!
http://amara.org/v/2FhK/
Somebody with a sharp eye for concurrency will think: oh, this is a data race. You don't know whether 0 or 1 is going to get printed from the debug statement, and in fact this program fails to compile in Rust. It tells you that the state pointer is not a sendable value, which means that you can't send it between tasks.
So what can you do? How can you have tasks communicate with each other? If you were here last week, you saw Eric Holk present on pipes, which is our wonderful concurrency primitive for message passing between tasks, and the key point is that when you send state to another task, you give up ownership of that state. So here's some code: I'm creating a stream of pipes with a send end and a receive end, I hand the receive end to a child task, and I send a string on it.
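The 2012 pipes API described here is long gone; as a rough sketch of the same idea in today's Rust, using `std::sync::mpsc` channels (the function name `send_to_child` is mine, not from the talk):

```rust
use std::sync::mpsc;
use std::thread;

// The child task owns the receive end; the parent gives up ownership
// of the string when it sends it.
fn send_to_child(msg: String) -> String {
    let (tx, rx) = mpsc::channel();
    let child = thread::spawn(move || rx.recv().unwrap());
    tx.send(msg).unwrap(); // `msg` moves into the channel here
    child.join().unwrap()
}
```

After the `send`, the parent can no longer touch `msg`: ownership has moved, just as the talk describes.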
So this is all great, my contrived examples make sense, but let's put tasks to the test. Let's suppose that we were writing a web browser in Rust. Well, if you were here last week, you would know that we actually are: it's called Servo, and it makes heavy use of tasks.
How might you want to use tasks in real-world application development? Three examples. First, you definitely want to render the different content elements on a web page in parallel; there's no reason to block on network I/O in serial for each piece of content on the web page. So here I have a web page with three images; they're each partially loaded, and they'll all complete together, though I'll finish loading a bunch faster if I parallelize them with a task for loading each one.
It turns out that when things go wrong, there are different ways in which they go wrong, and as any experienced programmer will know, you have to sit down and think about what the right action to take is, instead of crashing your program wholesale or ignoring the error wholesale in all cases. So first, for example, the user might cause your program to abort by killing the program or by closing a tab. Or you might have a logic error in your program: an assertion trips and the program crashes.
So what I did was I enabled tasks to have different failure-propagation modes. We now have three main spawn modes for tasks. You can spawn a task supervised, which means that if the parent task fails, the failure will propagate and take out the child as well. You can spawn tasks linked, which puts all the tasks in the same linked-failure group, which means that if any one of them fails, all of them will be taken out. Or you can spawn tasks unlinked, which means that failure in one is isolated from failure in another.
So let's say that I have a main tab manager task for a webpage that I'm loading, and it spawns three different image renderer tasks, and one of the images happens to be corrupted. Well, you don't want it to take out all the rest of the web page; you just want it to display an empty box to mark where that task failed.
However, if the tab itself crashes — the user clicks the X button, or the web page is malformed or something — you don't want to leave the image renderer tasks hanging around downloading data that nobody's ever going to see; you want the failure to propagate. Case two: why would you ever want tasks that are bidirectionally linked to each other? Well, let's say that you're doing this pipelined CSS and HTML parsing, and the parser encounters some invalid input. There's a parse error, and there's really no point for the lexer to continue.
So you want the failure to propagate to the lexer. By the same token, if the lexer encountered some invalid HTML, there's no reason for the parser to go on processing the rest of the input stream, because the lexer is never going to catch up. So you want failure to propagate bidirectionally in this case. Case three: why would you ever want tasks whose failures are isolated from each other?
So that's the three spawn modes. If you really want to get your fingers dirty, there are some extra advanced options, which you can chain together with a bunch of method calls all on one line, like the top line here. In brief: you can configure the scheduler mode, which changes the scheduling behavior of the task. You can get an object called a future, which you can carry around and later call out to, which causes you to block until the task finishes.
You can call try, which spawns the task, immediately blocks on it much like the future, and gives you a result which tells you whether it succeeded or failed. And you can use some convenience wrappers for setting up communication with pipes between the tasks, so you don't have to repeat all the dirty work.
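The `try` described here no longer exists; a rough modern analog, assuming today's `std::thread` API, is that `JoinHandle::join` blocks until the task finishes and hands back a `Result` saying whether it succeeded or panicked (the function name `try_task` is mine):

```rust
use std::thread;

// Spawn a task, immediately block on it, and get a Result back,
// much like the `try` described above.
fn try_task(should_fail: bool) -> Result<i32, ()> {
    let handle = thread::spawn(move || {
        if should_fail {
            panic!("task failed");
        }
        42
    });
    handle.join().map_err(|_| ())
}
```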
So I'd like to thank the Rust team, especially Graydon for his leadership and Brian for his endless patience answering all my questions about the Rust runtime during the first month or two. I'd also like to thank the Rust interns for being an excellent community. It's been a fantastic summer. Any questions?
I'm glad you asked that. I decided not to make this a primitive because, one, it would be a lot harder to implement in an integrated way with the other three, and two, because you can program it on top of the other three mechanisms that I described. So here's some example code that does that: you spawn a task linked to yourself, which spawns the child task unlinked and communicates with the child task to see if it fails. If it fails, the middle task fails, which is linked to you, which kills you.
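Linked failure itself is gone from Rust, but the pattern translates to modern threads: a middle watcher joins the isolated child and re-raises its failure, so the caller observes it through its own join (all names here are mine, a sketch rather than the talk's actual code):

```rust
use std::thread;

// A middle task watches an isolated child; if the child dies, the
// middle task panics too, and the caller sees that when it joins.
fn spawn_watched(child_fails: bool) -> Result<(), ()> {
    let middle = thread::spawn(move || {
        let child = thread::spawn(move || {
            if child_fails {
                panic!("child failed");
            }
        });
        if child.join().is_err() {
            panic!("propagating child failure");
        }
    });
    middle.join().map_err(|_| ())
}
```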
Okay, so I'm Michael Sullivan, I'm on the Rust team, and I worked on a bunch of different things this summer, and I'm going to talk about some of the ones I found most interesting. First I'm going to talk a little bit about what Rust is, then a bit about how vectors work in Rust, because I worked a bunch on that this summer, and then what I'm calling static trait methods, and just sort of wrap up with other things I worked on. First, a disclaimer: Rust is still under heavy development.
The things I talk about in this talk might not be true tomorrow, but they probably will be — I don't think we're throwing away anything I've shown in this talk, but there's still a lot of churn happening — and what I discuss and how I present issues reflect my personal biases in language design, which may differ from those of other, more important members of the Rust team. Okay, so goals. What do we want in a programming language? Well, we want it to be fast.
We want to generate efficient code, for sort of obvious reasons. We want it to be safe: we want the type system to provide guarantees that rule out different bugs like buffer overruns, null pointer dereferences, and various sorts of crashes. We want it to be concurrent — for every talk about concurrency there's sort of a mandatory graph of how Moore's Law isn't working anymore, which I'm leaving out.
We want a language that makes it easy to take advantage of parallelism. And, for us at least, we want something sort of systems-y: we want fine-grained control over what's going on, and predictable performance characteristics — we don't want it doing lots of memory allocation behind our back. So what languages do we have?
Well, what we have Firefox written in now is C++, which is fast. ML is sometimes fast and, as you know, quite safe. Erlang is safe and concurrent. Haskell is sometimes fast, very safe, and also concurrent. And Java and C# are fast and safe, but Java and C# have plenty of their own problems, right? So what we want is a systems language pursuing the trifecta: safe, concurrent, fast. Okay, so some of the design issues in Rust.
We have a lot of fancy features in our type system. We have algebraic data types and pattern matching à la ML or Haskell, which means we don't have any null pointers — you never have a Rust program crash because of a null pointer dereference — and you have sort of very high-level ways of operating on lots of different data. We have polymorphism: you can write functions that work over different sorts of parameters.
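A minimal sketch of those first two features in modern syntax — an algebraic data type with exhaustive pattern matching, and no null pointers in sight:

```rust
// An algebraic data type: a Shape is exactly one of these variants.
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

// Pattern matching must handle every variant, so there is no
// "forgot the null check" failure mode.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}
```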
We have type inference, so even though we're statically typed, you don't need to write the types over and over. We have this somewhat idiosyncratic typeclass system that we call traits, which lets you write code that's parametric over types that have certain properties. We have a strong emphasis on immutability in our data structures, and we have what we call region pointers, which allow you to have safe pointers into any object, with a static borrow check that makes sure you don't access dead data.
We have some other features, like lightweight tasks with no shared state, which Ben talked about; fine-grained control over when you're allocating memory and when you're putting things on the stack; and the ability — Ben talked about this some — to have unique pointers that you can move and send over a channel while giving up your own reference to them.
Someone described Rust as: C++ grew up, went to grad school, started dating ML, and shares an office with Erlang — which sort of reflects some of the influences here. So, status: we currently have a self-hosting Rust compiler that uses LLVM as a backend. We handle our polymorphism by generating different copies of the code for each type parameter. This is similar to what C++ does for templates, but better.
And memory management is currently automatic reference counting, which I have a distaste for, but that might be changing. The catch is, you know, we're not done yet. There are lots of bugs and sharp edges, and we're still changing rapidly, but we're getting really close — we're a lot closer than when I said this last summer.
Okay, so I'm going to talk a little bit about the different sorts of heap pointers we have in Rust, and then a bit about what vectors in Rust look like. So first: obviously we want to be able to heap-allocate memory, and we want that to be garbage collected when nothing's using it — we want it to automatically be freed. We don't want C-style free, because then we screw that up and leak memory and crash and access freed memory; it's just very bad, right?
So we have what we call @ pointers. Something of type @int is a pointer to a heap-allocated integer. There can be multiple references to each object in this heap: when you copy one, you just copy the pointer, so there are multiple references to the same object. Right now when you copy it you'll bump a reference count, but if we get rid of reference counting it'll just be a copy. But you can't share these between tasks.
So we also have ~ pointers, and these are unique pointers that uniquely own the data they point to: it's owned by one pointer. If you copy one, you have to allocate more space and copy the underlying data, but the advantage is that since there's only one of them, if you give up your reference and hand it to someone else, you know that now they have the only reference, which makes it safe to send across a channel as long as you give up your handle to it. So these are useful for inter-task communication.
Now I'm going to talk about how these different ways to heap-allocate impact vectors, because if you want heap-allocated vectors, you wind up with the choice of which heap to put them in. We have [T] as the type of vectors containing some type T, but the problem is vectors can have any size, right — their size is based on their number of elements.
So you can't just put one on the stack and copy it around, because you don't know how big it is — you don't know how much memory to reserve. So they're what we call — what some of us call — a second-class type. These vectors can only appear when they're pointed at by some sort of pointer.
D
You
know
in
that
pointer
or
a
tilde
point,
or
I
haven't
really
talked
about
them,
but
region
pointers
in
memory,
a
vector
is,
you
know,
has
a
size
field
field
for
how
much
of
the
space
has
actually
been
allocated.
And
then
you
know
a
buffer
of
allocated
memory
and
these
will
be
living
inside
heap
objects
or
yeah,
mainly
they'll,
be
living
inside
heap
objects
in
either
the
at
or
the
tilde
heaps.
D
D
With a ~ vector, you know that there's only one reference to it, and so you can update it — with these unique pointers it's safe to do that. But this trick doesn't work with an @ vector — a vector pointed to by an @ pointer — because the problem you run into there is that there could be multiple pointers to that vector, and if you have to move the object in memory because you need to resize it, you can't possibly find all of those pointers and update them.
Furthermore, some of those pointers might be immutable pointers: they think that the array is not changing. So it doesn't work at all — we can't modify it, we can't resize it. But building up a vector by pushing elements onto the back seems a very natural way to build arrays in an imperative style, like we showed on the previous slide. So if we know for sure that there's only one reference to this vector, then we can get away with pushing onto it.
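With today's `Vec<T>` — the descendant of the unique ~ vector — this is exactly the idiom: since `v` is the only reference, pushing, and any reallocation it triggers, is safe:

```rust
// Build a vector imperatively by pushing onto a uniquely owned Vec.
fn squares(n: i32) -> Vec<i32> {
    let mut v = Vec::new();
    for i in 0..n {
        v.push(i * i);
    }
    v
}
```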
So we could build up safe abstractions that sort of hide away a reference to an @ vector, and since the abstraction knows it has the only one, it can do some unsafe stuff under the hood to push onto it, and then when you're done you can ask for the @ vector. This would be something like Java's ArrayList, right: it has a raw array under the hood and you go through this object.
It mediates your accesses to it. But this is somewhat unsatisfying, for some reasons I'll get to later when I talk about static trait methods, but also just from a philosophical perspective: it bothers me that there is no nice way to construct these objects directly — I don't think it needs to be mediated through this other thing.
So I came up with a nice interface for building this sort of vector. It's a function called build, and it takes as an argument a builder function that it calls with a push function. What builder does is, it's executed for its side effects, but at any time it can call the push function it was given as an argument, which will add some new element to the end of an array.
So build allocates a new vector, which it doesn't give to anyone, but then calls builder with a function that, when called, pushes onto this vector that build has hidden under the hood. Build has the only reference to it, so if it needs to resize it, it can do that — no one else can be using it. The code to implement this is very unsafe and does a lot of nasty hacks, but the interface is completely safe.
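The 2012 implementation needed unsafe hacks; in modern Rust the same interface can be written safely with closures. A sketch — the real `build` differed:

```rust
// `build` owns the only reference to the vector while `builder` runs,
// so pushing (and reallocating) through the supplied closure is safe.
fn build<T, F>(builder: F) -> Vec<T>
where
    F: FnOnce(&mut dyn FnMut(T)),
{
    let mut v = Vec::new();
    builder(&mut |x| v.push(x));
    v
}
```

Calling it reads much like the imperative loop it replaces: pass a closure that calls `push` as many times as you like.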
So if that seemed too complicated — it's not really. If you look at the code, it seems pretty natural: you say, I want to build up an array, you call build, you get a push function, and then you basically write the same code that you wrote for building up the ~ vector.
The type signature looks a lot scarier than the actual code does; the code seems fairly natural. The biggest downside is you sort of have to indent your code four spaces, which, when you're working on a project with a hard 78-character column limit, is maybe a big drawback. But this seems like a fairly natural idiom, and lots of other primitives can be built on top of it — basically anything that builds up a vector, or most sequences really, can be built this way.
So
now
I'm
going
to
talk
a
little
bit
about
trains
and
Lyndsey
Cooper
gave
a
talk
about
this
last
week,
but
I
don't
know
how
many
people
remember
that
so
I'm
going
to
talk
about
them.
A
little
and
trades
are
just
interfaces
that
specify
a
set
of
methods
for
types
to
implement
and
then
functions
can
be
written
that
are
parameterised
over
any
type
that
implements.
That
rate,
this
is
just
like
type
classes
in
Haskell,
and
it
is
somewhat
related
to
interfaces
in
Java.
Also.
This
is
something
that's
again
easier
to
understand
with
an
example.
So here we can define a trait called Show that has one function, show, that when called creates a string formatting the object. Here we have an implementation for int, and a function exclaim that is parameterized over any type T that implements Show: it will take that, format it, and then add an exclamation mark to the end. And you can see these typeclass methods are called with sort of an object-oriented-style dot notation: you call x.show().
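In modern syntax (the talk's `int` becomes `i64` here, and the exact 2012 syntax differed), the Show example looks roughly like this:

```rust
// A trait with one method that formats the receiver as a string.
trait Show {
    fn show(&self) -> String;
}

impl Show for i64 {
    fn show(&self) -> String {
        self.to_string()
    }
}

// Parameterized over any type T that implements Show; the trait
// method is called with dot notation: x.show().
fn exclaim<T: Show>(x: &T) -> String {
    format!("{}!", x.show())
}
```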
And so this sort of thing is useful in a lot of places, but there's this annoying limitation with our trait system, which is that traits contain methods, and methods are called with dot notation, and since they're called with dot notation, they all require as their first argument an element of the trait type.
There are plenty of places where you don't just want to modify or access objects in a type-parametric way — you want to be able to create them. So consider, for example, that you want to define a trait that lets you read elements of some type from a string: you want to be able to read in a generic way, where you are given a string and it can read back any type.
There will be some code demonstrating what that would look like, but I solved this by adding static trait methods. We add a static keyword that can be applied to trait methods. They don't take a self parameter like other methods and can't be called with dot notation; instead they're treated as a regular function, sort of in the parent namespace of where the trait is defined. The function then gets parameterized over the trait type, so you call it like a normal function and it can operate on any type of that trait.
This is, if anyone's familiar with Haskell, sort of the norm: in Haskell this is how all type class functions work — they don't have this notion of methods that we have. So we can define a trait Read that has a static function taking in a string and returning Self, which means the type that the trait is being declared for. The generic read function then has the type signature where it is parameterized over any T that implements Read: it takes a string and gives you back a T.
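Static trait methods survive in today's Rust as associated functions without a `self` parameter. A sketch of the Read example in modern syntax (this local `Read` trait is illustrative, not `std::io::Read`):

```rust
// A static trait method: `read` takes no self and returns Self,
// the type the trait is implemented for.
trait Read: Sized {
    fn read(s: &str) -> Self;
}

impl Read for i64 {
    fn read(s: &str) -> Self {
        s.parse().unwrap()
    }
}

// Parameterized over any T implementing Read: string in, T out.
fn read_as<T: Read>(s: &str) -> T {
    T::read(s)
}
```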
Now, to bring this back together with what I talked about in the vector section, we can take that interface I came up with for defining @ vectors and generalize it. We can create a trait Buildable that has a static function build, which calls a builder function with the push argument and returns back something of the trait type. So now we can write this sequence range function that operates over any buildable type.
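A sketch of that Buildable trait in modern syntax, combining the static method with the build interface from the vector section (the names here are mine, not the talk's):

```rust
// A static `build` method: any Buildable sequence can be constructed
// by a builder function that pushes elements.
trait Buildable<T>: Sized {
    fn build<F: FnOnce(&mut dyn FnMut(T))>(builder: F) -> Self;
}

impl<T> Buildable<T> for Vec<T> {
    fn build<F: FnOnce(&mut dyn FnMut(T))>(builder: F) -> Self {
        let mut v = Vec::new();
        builder(&mut |x| v.push(x));
        v
    }
}

// Builds the range lo..hi into any Buildable sequence type.
fn seq_range<S: Buildable<i64>>(lo: i64, hi: i64) -> S {
    S::build(|push| {
        for i in lo..hi {
            push(i);
        }
    })
}
```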
So that's a brief overview of static trait methods. I worked on a bunch of other things this summer — some of them much bigger than those, or at least more time-consuming than those, but those were the most interesting parts. I made a lot of major syntax changes to vectors and strings and sort of normalized all of them; before, vectors were sort of implicitly assumed.
My goal was a bug a day for the summer, but I've fallen somewhat behind that: I'm at about 44, and you'd need 53 right now to have kept up. Okay, so in conclusion: Rust is a new systems language out of Mozilla Research that's designed to be fast, concurrent, and safe. I worked on a bunch of different stuff this summer. Third-order functions are apparently useful for constructing arrays imperatively, and our traits — between my work and Lindsey Kuper's work, which she presented last week — are almost as cool as Haskell 98 type classes.
So right now we don't have inheritance yet, but we're planning to get that soon — typeclass inheritance. The other big one is, since we don't have higher-kinded type constructors, we don't have higher-kinded type classes, and if you look at Haskell, a lot of their most powerful type classes are higher-kinded, like Monad. We can't implement Monad in Rust's typeclass system, which, you know — maybe that's fine.