From YouTube: Intern Presentations: GC in Rust
Description
Elliott Slaughter from the Research team presents "High-performance GC in Rust."
Help us caption & translate this video!
http://amara.org/v/2FhO/
Hi, I'm Elliott Slaughter. I'm an intern with the research team this summer, and I've been working on garbage collection in Rust. When I chose the title for this talk, I was being a little bit optimistic. I think a better title would have been "How to hack LLVM to allow us to start thinking about implementing garbage collection in Rust." But anyway: I'm going to do a short intro to Rust and garbage collection, then talk about LLVM's current support for GC, such as it is, how to hack it to make it actually work for what we want to do, and then see where that gets us.
So Rust is a new systems programming language from Mozilla Research. It's intended to be fast, safe, and concurrent, and I'm going to walk through a couple of code examples that demonstrate some of the features that give us each of those goals.
So here's a short snippet of Rust code: we're walking down a vector of three elements and printing each integer. One thing that might not be completely obvious at first glance is that the vector is actually not allocated on the heap here. Since it's a constant vector, we can keep it in the data section of the program, so we don't have to do any allocation at all to run this code.
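The snippet itself isn't shown in the transcript, but a minimal sketch of the idea in modern Rust syntax (the talk predates Rust 1.0, so the original would have looked different; the names `V` and `print_and_sum` are my own) might be:

```rust
// The array literal is a compile-time constant, so it can live in the
// program's static data section; iterating it needs no heap allocation.
static V: [i32; 3] = [1, 2, 3];

fn print_and_sum() -> i32 {
    let mut total = 0;
    for &i in V.iter() {
        println!("{}", i); // print each integer, as in the talk's example
        total += i;
    }
    total
}
```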
That `.to_string()` is actually not a vtable call there; we're doing static dispatch, sort of like C++ templates, and features like that help us get a lot of performance out of our code in this language. But we also want our code to be safe. Especially in C++, it's really easy to create memory leaks, or return pointers to deallocated memory, or get null pointers, and so we tried really hard to make this language work in a safe way.
So in this example, on the first line, the `let x = @3` is creating a new integer on the heap. It's a reference-counted box, so it'll automatically go away at the end of the function. But before that, we try to take a pointer to the contents of the box and return it, and Rust has a powerful type system that can actually check that.
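A sketch of that check in modern terms (the 2012-era `@` box is roughly today's `Rc`; the function names here are hypothetical, not the talk's code):

```rust
use std::rc::Rc;

// Returning a borrowed pointer into the box is rejected by the compiler,
// because the box is freed at the end of the function:
//
//     fn broken() -> &i32 {
//         let x = Rc::new(3);
//         &*x // error: `x` does not live long enough
//     }
//
// Returning the counted box itself keeps the allocation alive instead:
fn ok() -> Rc<i32> {
    let x = Rc::new(3);
    x
}
```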
In reality, it's really just doing an sprintf, but you could imagine it being a little bit more complicated. One thing you'll notice looking at this source code is all the tildes running around. Those are similar to the @ sign in the last example, in that they refer to a pointer to heap-allocated memory, but in this case the tilde guarantees that there won't be any data races in this code, by proving that only one thread will hold the pointer to this allocation at any given time.
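In modern Rust the owned `~` pointer became `Box`, and the guarantee can be sketched like this (my own illustration, assuming a hypothetical `send_to_thread` function):

```rust
use std::thread;

// A Box has exactly one owner, so moving it into another thread transfers
// the whole allocation; no two threads ever hold the pointer at once,
// which is what rules out data races on it.
fn send_to_thread() -> i32 {
    let b = Box::new(41);
    let handle = thread::spawn(move || *b + 1); // `b` is moved in, not shared
    handle.join().unwrap()
}
```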
So at this point you've probably noticed that we have a couple of different types of pointers in this language. The two you need to know about here are the tilde and the at from the last two examples. The tilde refers to what we call the exchange heap, which is actually similar to C++'s unique_ptr, added in the newest version of that language.
The @ box is just a reference-counted box like you'd expect from any other language, with the additional restriction that, since it's reference counted and we don't know exactly how many references are live at any given point in time, you can't move it between tasks. So if you want to pass a value to another thread, you always have to create a tilde box; but if you only need to operate on data locally, then you generally use @ boxes.
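A modern analogue of that local-only rule (a sketch, not the talk's code: `Rc` plays the role of the `@` box here):

```rust
use std::rc::Rc;

// Cloning an Rc bumps the reference count instead of copying the data.
// Rc is deliberately not Send, so the compiler rejects moving it to
// another thread, matching the talk's "local use only" restriction.
fn local_sharing() -> usize {
    let a = Rc::new(vec![1, 2, 3]);
    let _b = Rc::clone(&a);   // second handle to the same allocation
    Rc::strong_count(&a)      // 2 while both handles are alive
}
```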
So why do we want to use garbage collection in this language? I should probably mention that the tilde boxes don't require garbage collection at all, so that's not something that I'm going to be working on or have worked on: since only one task owns them at any given time, if the pointer ever goes out of scope, you know for sure that the memory can be deallocated. That's really easy. But for the @ boxes it gets a lot more tricky, because we're reference counting right now, and so you'd figure:
"Well, you know, Objective-C and all these languages do reference counting; isn't that efficient?" It turns out that in a lot of applications that currently do reference counting, the performance isn't there as much as you'd think; churn on the reference count is actually an issue, and so that's somewhere we could probably get better performance with a garbage collector. The other thing is that you can't collect cycles with reference counting. Or, you could add a garbage collector on the side and collect just the cycles.
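The cycle problem can be sketched in modern Rust (my own illustration: if two reference-counted nodes held strong links to each other, each count would stay at one forever and neither would ever be freed; the workaround shown here uses a weak back-edge):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Each node points at the other, but through Weak, which grants access
// without keeping the target alive, so no leaked cycle forms.
struct Node {
    other: RefCell<Weak<Node>>,
}

fn weak_back_edge() -> usize {
    let a = Rc::new(Node { other: RefCell::new(Weak::new()) });
    let b = Rc::new(Node { other: RefCell::new(Rc::downgrade(&a)) });
    *a.other.borrow_mut() = Rc::downgrade(&b);
    Rc::strong_count(&a) // still 1: the weak link adds no strong count
}
```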
But it turns out that that's actually slower than just going all the way with a garbage collector, so we might as well just do that. And it turns out that we actually have a couple of advantages that languages like Java, for example, don't have, and the biggest one is that all of these heaps, the @ boxes that we're going to be collecting, are local to each task.
And so, if you have one task that doesn't GC and a different task that does GC, then the one that does GC never has to stop the task that doesn't want to garbage collect. On top of that, we can make the compiler enforce that the task that doesn't want to GC never allocates any memory. So if we have, say, a UI thread in a mobile app, we can guarantee that it will never pause at all.
So that leaves us with GC in a context where I think it makes a lot more sense. Instead of pushing GC to work in these really low-latency contexts, we can use GC where GC really does well, which is where latency is acceptable but we want really high throughput. And I think in that context, we can really push the performance to beat reference counting.
You have all the live roots on the stack; of course, you need to dereference them to figure out where all the other pointers are, and the tricky bit there is knowing what kind of a value you're pointing into. It turns out that in Rust that's actually pretty easy, because the types on the heap are self-describing, so that's pretty much a non-issue for us.
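The root-tracing idea can be sketched as a toy mark phase (my own sketch under stated assumptions, not the talk's actual collector: object ids stand in for pointers, and the heap map stands in for the self-describing types):

```rust
use std::collections::{HashMap, HashSet};

// Object ids on the "stack" are the roots; the heap maps each object to
// the ids it points at. Everything reachable from a root gets marked.
fn mark(roots: &[u32], heap: &HashMap<u32, Vec<u32>>) -> HashSet<u32> {
    let mut marked = HashSet::new();
    let mut work: Vec<u32> = roots.to_vec();
    while let Some(id) = work.pop() {
        if marked.insert(id) {
            // First visit: follow this object's outgoing pointers.
            if let Some(children) = heap.get(&id) {
                work.extend(children.iter().copied());
            }
        }
    }
    marked
}
```

Anything the mark phase never reaches is garbage and can be swept.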
So, when I started this project, a lot of my friends said: "Wait a moment, doesn't LLVM already support garbage collection? Why don't you just use that?" And the answer is: yes. LLVM is the Low... well, it doesn't really matter what it stands for. It's a back end for a compiler, similar to GCC, that we're using in Rust.
So, for example, Apple uses it in their C compiler right now. LLVM has a lot of fancy features, and it does support fully featured garbage collection. They have documentation, and there's a nice web page that tells you exactly how to implement it, and they have a list of all the different kinds of GC you can implement with it. That's all great, except for the one problem, which is that it's slow. Specifically, when you tell LLVM that you have a value which needs to be garbage collected,
it can't be optimized. You can't copy the pointer into a register; you can't move those values around. Basically, if any of the LLVM optimizations see those values, they'll just bail out early and won't optimize your code. That's really not acceptable for us, and the existing languages that use LLVM's GC infrastructure are really suffering in performance because of it. So how would we go about solving that?
The most obvious way to solve it is to teach LLVM to understand what it means for a garbage-collected pointer to live in a register. The problem with that is that you're now going to have to modify all of the LLVM optimization passes to also know how to deal with those pointers and how to move them from the stack into registers and then back, and that's a lot of code. We've talked to the LLVM team, and no one there wants to do it, and we don't have the people to do it ourselves.
A
So
so
that's
what
I
spent
my
summer
doing
it
is
working
on.
Unoptimized
builds
only
and
yes,
I
feel
the
irony
there.
But
it's
it's
a
similar
situation
with
the
first
approach.
We're
really
making
this
work
would
be
optimized.
Llvm
builds
is
still
a
lot
of
work,
and
so
that's
something
that
someone's
going
to
have
to
do
either
me
or
someone
else
will
have
to
spend
some
more
time
making
this
actually
work.
So
that's
my
talk
and
thank
you
for
having
a
great
summer.