Description
All Things Open 2014 - Day 2
Thursday, October 23rd, 2014
Steve Klabnik
Developer/Author with Mozilla
Front Dev 1
RUST - The Programming Language From Mozilla
Thank you so much for having me. I'm Steve. I would like to share with you some basic information about Rust. I'm lucky enough to have a day job that involves working on open source, so this has been what I spend most of my time on nowadays.
Historically, if you're not familiar with my previous work, I have worked on Rails. I have a Ruby tattooed on my body, so I was kind of into it a lot. I also have a Perl camel, though, because you can't just have one scripting language tattoo; you need at least multiples. And now I'm doing Rust stuff. So, a little bit of meta stuff: what is Rust, if you're not familiar? Rust is a brand new programming language from Mozilla and friends.
Other people work on it too, although it's funny: when you're working on a pre-release programming language, most of the people that work on it are either working for Mozilla, or are students, or put in a lot of weekend and night hours. So we do have core members that are not from Mozilla, but the vast majority of people are Mozillians.
Rust is a little hard to talk about, and we've sort of been evolving the marketing for Rust, because it's interesting: Rust is a low-level programming language, but it often feels like a high-level programming language, and so it's sort of awkward. It's not trying to be everything to everyone.
A
It's
just
that
when
you
take
modern
programming,
language
features
and
you
bolt
them
onto
a
low-level
language
a
lot
of
times
you
get
something
that
feels
more
high-end,
even
if
it
is
not
so
it's
hard
to
describe
how
rust
gives
you
the
like
low
level
power
that
cnc
plus
plus,
do
while
feeling
high
level
like
a
ruby
or
python
or
ml,
and
so
you
don't
want
to
alienate
the
low-level
people
by
saying
like
no
seriously,
you
can
have
your
pointers
in
your
low-level
memory
management,
but
you
also
want
to
tell
the
high
level
people
like.
"Don't worry, you don't have to worry about that stuff." So it's a little awkward. I hope to show you what I mean over the course of this talk. I think what makes Rust really special is that it makes low-level programming accessible to people who have not done low-level programming before, and it makes it safer for people who do have experience with this low-level stuff. I'm going to spend most of this talk talking about what I mean exactly by safety, but basically what it boils down to is this.
This is a slide from the most recent Apple keynote, and I think that lots of programmers think about programming languages just like this chart, actually. You tend to think that there's this line, that there's a trade-off, right? Programming is largely an engineering discipline, or at least it feels like it, and so we think about things as trade-offs.
A
So
one
of
the
big
trade-offs
in
programming
language
design
and
when
you
choose
what
language
you
want
to
use,
is
that
there's
this
performance
access
on
one
side
and
a
productivity
access
on
the
other,
so
you
tend
to
think
of
languages
like
c
and
c,
plus
as
being
incredibly
low
level
and
therefore
not
productive,
because
you
really
really
need
to
be
careful
to
pay
attention
to
dot
all
your
eyes
and
cross
all
your
t's
and
asterisk
all
your
pointers,
but
in
exchange
for
that
tedium
you
get
unparalleled
performance
right.
You actually don't need to sacrifice performance to gain productivity. In fact, some of that is due to programming language theory, and some of that is just due to things like tooling, so I'm going to talk a lot about tooling towards the end as well. But we've sort of been able to import a lot of the lessons that we've learned from high-level stuff back into these low-level languages. I've been programming for almost my entire life.
A
I
started
off
with
gw
basic
and
then
learned
c,
so
sort
of
my
own
personal
progression
has
been
like
doing
low-level
things
and
then
moving
into
high-level
land,
and
now
I'm
sort
of
going
back
into
low-level
again.
So
a
lot
of
the
stuff
is,
like
you
know,
that's
my
perspective,
because
that
was
also
my
personal
journey
with
programming,
but
I
think
also
just
an
interesting
things
move,
but
I
think
that
this
might
be
the
wrong
way
to
think
about
programming
languages,
because
this
doesn't
actually
have
time
involved
right.
A
So,
for
example,
javascript
used
to
be
incredibly
slow
and
now
is
quite
fast
so
like
how
does
that
play
into
a
chart
like
this?
Does
it
like
move?
You
know
along
these
axes,
so
I'm
not
totally
sure
this
is
the
best
way
to
characterize
languages,
but
it's
definitely
a
way
that
we
tend
to
think
about
them.
Okay,
I'm
gonna
make
these
slides
go
a
little
smaller
again
so
safety.
A
Let
me
show
you
what
I
mean:
here's
some
ruby
code-
oh
also,
I
should
mention
this
too.
So
my
slides
are
kind
of
interesting
we,
so
we
have
a
tool
that
documents
like
api
documentation
and
rust
called
rust
dock
and
it
can
also
process
regular
old
markdown
files,
and
so
last
week
someone
wrote
a
thing
that
injects
the
right,
html
and
css
to
give
you
slides
in
rust
docs.
So
these
slides
are
actually
generated
by
rust
code
generating
html.
A
So
I
was
going
to
show
you
like
servo
running
my
slides
in
stuff,
but
oh
well,
that's
what
I
get
for
using
linux
can't
present
on
anything.
So
I'm
going
to
scroll
down
a
little
bit
for
some
of
these
slides.
But
here's
here's
some
ruby
code-
and
this
is
a
little
you
know
made
up.
But
it
illustrates
my
point
so
we
create
an
array
called
v.
We
push
an
element
onto
that
array.
Then
we
set
a
variable
x.
That's
a
reference
to
the
first
element
of
the
array.
A
We
then
push
world
on
and
we
put
x
now
because
ruby
is
memory
safe
via
its
garbage
collector.
This
will
work
and
it
will
print
out
hello.
Actually,
if
I
remember
correctly
but
like
the
world,
does
not
break
down,
if
everything
works,
it's
fine,
it
runs
if
you
write
this
same
code
in
c,
plus
it's
a
little
bit
longer,
but
it
pretty
much
looks
like
the
same
thing,
with
a
little
bit
more
type
annotations.
A
So
you
have
in
main
we
create
a
vector,
and
we
have
to
say
that
it's
going
to
be
a
vector
of
strings.
We
call
it
v.
Pushback
is
the
method
on
vectors,
that's
the
same
as
push
in
ruby.
So
we
push
a
hello.
Then
we
take
a
reference
to
the
string.
That
is
at
the
first
element
of
the
array
and
we
call
world
and
then
we
print
out
the
value
of
x.
Someday I'm gonna make a slide that's gonna have that be a thing. Small tangent: I've joked that if I ever write a science fiction trilogy, I want it to be, like, set up in the first episode, and then, you know, the big battle in the second episode, modeled after Star Wars, but then the bad guys win and there's no trilogy; you just tell everyone it's going to be a trilogy. The problem with trilogies is that you always know that, no matter what predicament they get into in the second movie or book, it's never going to end badly. Someday I'll do that with slides. But no: this doesn't actually work with g++. So if I turn on all warnings and I make them errors (and actually, now that I'm looking at this, I think that -Wall is actually not all warnings anymore; I think there's also an extended one, or whatever it's called), this code will give you no errors whatsoever. And then, when you run it, the slide says it segfaults, and it does on my machine. But what's fun about this code is that it's actually undefined behavior, so, like, anything could actually happen.
A
Some
machines,
like
I
have
some
people
run
this
in
os
10
and
it
will
not
stag
fault,
it
will
work,
but
on
my
linux
machine
it
will
break
and
I
didn't
try
it
on
windows,
because
I
don't
even
know
how
that
works
at
all.
So
why
is
this
undefined
behavior
in
c
plus
plus?
And
why
does
this
like
work
properly
in
ruby?
A
So
let's
go
over
this
code
a
little
bit
and
I'm
gonna
have
some
scary
hex
numbers
for
memory
but
like
if
you
haven't
dealt
with
low
level
memory
stuff
in
a
long
time.
You
just
know
there's
memory
locations
and
they
have
a
hex
address
and
then
I'm
going
to
show
the
name
that
we
put
in
c
plus
plus
and
then
the
value
that
sorted
that
location
in
memory
below
this
code.
A
So
we
say:
okay,
I
want
a
vector
and
it's
going
to
have
some
strings
in
it,
so
we
get
an
address
0x30
and
it's
going
to
have
v
and
then,
when
I
call
push
back
hello,
it
allocates
another
element
to
the
array
and
it
stores
the
string
hello
there.
So
we
have
our.
You
know
v
points
at
this,
this
location.
A
When
we
make
the
reference
to
that
string
at
v0,
we
now
get
a
pointer.
So
this
this
reference,
slash
pointer,
is
at
the
address
0x14.
The
value
of
that
pointer
is
0x18,
because
it's
pointing
at
the
first
element
of
the
array,
the
hello
right,
here's
the
tricky
bit.
So
when
we
then
push
world
onto
the
array,
we
only
had
an
array
of
length
one
and
if
you'll
notice,
like
back
here,
the
hello
and
then
the
pointer
being
right
after
it
there's
no
room
to
place
that
second
element
of
the
array.
A
Is
that,
like
basically
c
plus
plus
answer
to
this
kind
of
problem
is
well
the
documentation
says
if
the
new
size
is
greater
than
the
capacity,
then
all
iterators
and
references
are
invalidated.
So
don't
do
that,
but
there's
no
ability
of
the
tooling
in
the
actual
language
to
handle
this
kind
of
thing,
and
this
isn't
because
the
people
that
write
sql
plus
are
dumb.
It's
because
they've
been
building
up.
You
know
a
long,
long
amount
of
backwards
compatibility,
which
is
one
thing,
that's
really
nice.
A
I
think
one
of
the
things
that
I
am
sad
about
in
my
ruby
days
is
how
little
rubios
care
about
backwards
compatibility.
It
seems
like
if
I
deploy
a
rails
app
this
week
like
six
months
from
now.
I
don't
know
if
it's
going
to
still
work
or
not,
because
I
haven't
been
like
updating
all
the
things
all
the
time
so
backwards.
A
Compatibility
is
nice,
but
because
the
the
heritage
of
cbs
plus
is
so
old,
they
can't
actually
fix
these
kinds
of
problems
at
the
language
level
anyway,
without
introducing
backwards
incompatibilities
which
they
don't
want
to
do
so.
This
is
this
is
where
we're
at
in
the
like
c
c,
plus
kind
of
era,
and
because
systems
programming
is
very
hard
and
because
less
and
less
people
are
writing
in
low-level
programming
languages,
this
has
kind
of
been
the
status
quo
for
a
long
time.
A
If
this
is
a
million
lines
of
code
right
now,
you
have
to
figure
out
what
seg
faulted
and
where
and
why-
and
it
might
be,
this
huge
like
complicated
situation
and
it's
just
really
really
hard
to
debug
and
you
you
have
to
be
perfect
and
I
don't
know
about
you,
but
I
am
a
bad
programmer
and
I
am
fallible
and
I
will
screw
things
up,
even
if
I
know
the
right
answers,
and
so
let's
talk
about
what
it
happens
with
isn't
rust.
A
So
I'm
not
going
to
give
you
the
like
super
details
about
rust,
syntax
in
this
particular
iteration
of
the
talk.
You
are
programmers.
You
can
pick
up
what
a
vector
looks
like
and
what
array
accesses
look
like
it
looks
vaguely
familiar.
There
might
be
a
little
bit
of
weirdness,
but
I'm
just
going
to
show
you
so
here's
function
main.
We
make
a
mutable
vector
that
is
empty
because
in
rust
you
need
to
annotate
things
that
are
mutable.
Things
are
immutable.
By
default.
We
push
the
hello
onto
there.
So this error message is a little truncated and cut off, because, again, brand new slide framework that just happened in the last week. But it says: cannot borrow `v` as mutable because it is also borrowed as immutable, at `v.push("world")`; note: the previous borrow of `v` occurs here; the immutable borrow prevents subsequent moves or mutable borrows of `v` until the borrow ends, at `let x = &v[0]`.
A
It's
a
big,
long
error
message:
if
you're
not
familiar
with
the
terminology,
it's
a
little
opaque
once
you
are,
it's
super
great
and
we're
working
on
error
messages.
A
patch
I'm
actually
working
on
is
going
to
include
a
url
for
every
error
in
the
compiler
that
points
to
a
web
page
with
extended
descriptions
sample
code
that
like
this,
is
what
this
code
probably
looks
like
here's,
how
you'd
fix
it
and
all
that
kind
of
stuff.
So
I'm
psyched
about
that,
but
we're
not
there
yet
so
so.
A
Basically,
the
issue
is
because
this
needs
to
do
the
exact
same
thing
as
the
c
plus
does
it
needs
to
allocate
the
second
element
of
the
array
and
then
move
all
the
things,
but
if
it
does
that
it
knows
that
this
reference
will
be
invalid,
so
the
compiler
at
compile
time
is
able
to
do
this
kind
of
analysis
and
determine
that
the
the
pointer
at
x
would
be
a
dangling
pointer,
and
so
therefore
it
just
does
not
let
us
compile
this
program
without
fixing
this
kind
of
like
ownership
issue,
and
so
that's
it's
basically
telling
us
what
the
problem
was
in
the
siebel's
plus
code,
admittedly
with
a
little
bit
of
lingo.
A
But
it's
saying,
like
hey
you're,
already
taking
a
reference
here,
you
can't
mutate
this
because
we
might
reallocate
basically-
and
so
this
is
the
example
of
like
how
rust
improves
this
kind
of
safety.
The
language
semantics
have
these
additional
things
in
them,
I'm
gonna
sort
of
hand
wave
what
those
things
all
are,
but
basically
what
it
boils
down
to
is
rust
knows
at
compile
time
what
stuff
is
pointing
to
where
and
it's
able
to
tell
you,
if
you're
about
to
do
something
that
you
know
is
going
to
blow
things
up.
A
There's
like
an
old
joke
that,
like
c
plus
plus,
isn't
just
a
rocket
launcher,
but
it's
a
rocket
launcher,
that's
pre-pointed
at
your
feet,
and
so
we
try
to
handle
this
at
compile
time
stuff.
I
also
want
to
point
out
that
this
analysis
is
done
entirely
at
compile
time.
So,
if
you
can
manage
to
fix
this
and
I'll
show
you
one
way
to
fix
it
in
a
moment,
it
will
compile
down
to
the
same
assembly
code
that
the
c
plus
will
use
rust,
actually
uses
llvm
as
a
backend.
A
So
we
get
to
take
advantage
of
all
the
like
performance
improvements
that
happen
in
lvm,
so
this
is
a
compile
time
cost
not
a
runtime
cost.
So
this
is
what
I
mean
by
like
you,
get
the
low
level
speed
of
the
language
while
being
a
little
higher
level,
because
you
get
that
sort
of
super.
You
know
optimized
output,
so
here's
one
way
to
fix
it,
and
this
basically
does
what
the
ruby
does.
A
So
all
I've
added
is
a
dot
clone
onto
the
end
of
this
reference,
and
so
now,
instead
of
pointing
into
the
vector
we
make
a
copy
of
the
data,
that's
at
the
first
element
of
the
vector
and
then
point
at
that,
and
so
now
our
pointer
is
not
pointing
at
the
place
where
it
would
become
invalidated.
Russ
knows
everything
is
okay
and
this
will
compile
and
it
will
print
out
hello
in
this
particular
case.
It actually does the analysis through other function calls and stuff, so it would know that you are taking a reference to it there. I don't want to tempt the live coding gods right now, but if we get to the end, I'll show you how that works. After what just happened with Yehuda's thing, I'm a little skeptical.
A
I
don't
know
if
you
are
in
the
last
talk
here,
but
you
would
have
decided
to
try
a
demo
that
involved
the
network
and
that's
just
always
a
bad
decision,
but
he
tried
valiantly.
I
feel
really
bad,
okay,
so
here's-
maybe
maybe
this
will
also
give
you
a
little
bit
of
idea.
So
here's
a
much
more
complicated
example-
and
this
involves
concurrency.
A
So
one
of
the
things
that's
really
cool
about
rust
is
that
lots
and
lots
of
things
in
the
language
are
actually
moved
out
to
libraries
rather
than
being
in
the
language
itself,
and
it
just
happens
that
the
like
language
type
system,
primitives
work,
really
really
well.
So
this
is
spinning
up
a
thread
it's
entirely.
This
is
not.
I
just
want
to
emphasize.
I
guess
that
this
falls
out
of
the
language
semantics.
A
You
could
build
your
own
threading
primitives
if
you
wanted
to,
and
they
would
have
the
same
safety
guarantees,
because
it's
not
that
rust
has
special
knowledge
of
threading.
It's
the
the
emergent
properties
of
the
type
system
fix
problems
that
happen
to
be
threading
problems
too.
Okay,
so
this
example,
let
mute
numbers
equals
avec
with
three
integers
in
them.
Right
now
I
have
to
annotate
those
with
I.
This
is
a
an
issue
that
is
sort
of
being
fixed.
A
We
originally
inferred
integers
to
a
standard
machine
sized
integer,
but
then
one
of
the
things
that
would
happen
is
that
you
know
you
don't
know.
If
you
compile
your
code
on
a
64-bit
machine
in
a
32-bit
machine,
you
have
different
size
integers,
which
means
you
may
have
an
overflow
bug
that
you
didn't
on
another
machine
and
that
sucks,
so
we
took
it
out.
We
told
everyone
to
annotate
every
literal.
A
The
problem
is,
then,
when
you
put
these
examples
on
slides
now,
I
have
to
annotate
the
literals,
and
now
I
have
to
explain
to
you
that
in
real
code,
type
inference
would
infer
this
to
a
size
integer,
because
you
have
a
function.
That
would
say
something
and
it's
like
it's
a
mess
so
we're
talking
about
adding
it
back
in.
I
will
talk
about
stability
a
little
later,
but
and
how
that
works
so
for
right
now.
A
Just
trust
me,
that's
not
the
way
that
you
don't
normally
have
to
annotate
things,
but
in
this
example
you
do
okay,
so
for
I
and
range
so
rust
does
not
have
a
c
style
for
loop.
It
uses
iterators
for
everything,
and
I
will
talk
more
about
that
later.
But
basically,
this
is
an
iterator
and
it
goes
from
zero.
To
three
spawn
is
a
function
that
spins
up
a
you,
give
it
a
closure
and
it
spins
up
a
new
thread
and
runs
that
code
in
the
closure.
A
Basically
and
proc
is
a
thing
that
makes
a
closure.
That
is
also
eventually
changing,
but
it's
fine.
It
will
be
basically
the
same
thing.
You
just
won't
write
proc
and
then
inside
that
thread.
What
we're
doing
is
we're
calling
another
range
zero
to
three
and
we're
incrementing
these
numbers
that
are
in
the
array.
A
So
the
closure
is
capturing
the
mutable
reference
to
this
numbers,
that's
outside
of
the
closure,
if
you've
done
stuff
like
this
in
other
programming
languages,
you
know
you
know
how
closures
work,
and
so
this
will
be
mutating
inside
of
a
loop
with
multiple
threads
spinning
up,
so
we
have
like
a
whole
bunch
of
stuff
mutating,
an
array
and
if
you've
ever
dealt
with
multi-threaded
code,
you
know
that
this
is
very
bad.
I
could
run
this
a
bunch
of
different
times
and
I
would
get
a
bunch
of
different
outputs
right.
A
Well,
rust
actually
knows
that
this
is
bad
and
it
will
give
you
a
compile
time
error.
This
is
actually
one
of
the
grosser
errors,
but
happens
so
capture
of
moved
value.
Numbers
so
note
numbers
moved
into
the
closure
environment
because
it
has
some
type
stuff,
blah
blah.
So
basically
russ
is
saying:
hey.
You
already
used
a
reference
to
numbers
in
one
thread
when
you
start
spinning
up
a
second
thread
like
this
is
going
to
be
bad
news.
So
don't
do
that,
and
so
this
is
really
cool.
A
We
have
this
like
compile
time,
checking
of
a
data
race
to
not
technically
a
race
condition,
because
it's
a
little
broader
of
a
definition
but
whatever
it's,
it's
basically
a
race
condition
where
we'd
have
multiple
threads
mutating,
something
at
once.
A
Now,
that's
all
fun
and
good,
and
if
you
were
using
a
higher
level
language,
then
you
might
say
make
a
new
copy
of
this
array
for
everything
and
you'd
be
wanting
to
mutate,
separate
arrays.
So, if you're familiar with mutexes: basically this is a concurrency primitive that will only allow one thing to modify the vector at any given time. We acquire a lock on the mutex, then we have to release the lock when we're done, and only one thread can acquire the lock at once. That's the basic idea of a mutex. And then an Arc stands for an atomically reference-counted value.
So now that we've wrapped the vec in these two things, we can actually do this the right way. Inside here, instead of just making a reference to `numbers` inside the loop, I'm going to call `numbers.clone()` and introduce a new variable, `number`. What clone is going to do, since clone is on the Arc, is bump the reference count up by one, basically, and give me this new reference to the thing. So it says there's one reference outstanding, and then a second one, and then a third one. Then we spin up our closure, and inside that closure, you'll notice, I have `number.lock()`.
A
So
that's
the
reference
to
the
reference
count
that
I
just
bumped
up
and
lock
acquires
the
lock
from
the
mutex
and
when
it's
successful
this
will
basically
block
until
the
mutex
is
open
and
then
return
the
value
inside
the
mutex.
So
now
I
have
this
mutable
reference
to
the
array
and
there's
some
dereferencing
shenanigans.
That
needs
to
happen,
but
basically,
if
I
dereference
the
value,
I
can
add
one
to
that
particular
position
and
print
out
that
this
element
of
the
numbers
is
whatever
now.
A
Because
of
this,
like
ownership,
knowledge,
rust
is
actually
able
to
then
release
the
lock
you'll
notice.
I
don't
have
like
a
call
to
unrelease
the
lock,
basically
because
the
lock
is
valid
for
as
long
as
this
array,
reference
is
valid,
rust,
actually
at
compile
time
is
able
to
say
oh,
this
is
when
array
goes
out
of
bounds,
I'll
release
the
lock
for
you,
then
that
way,
you
can't
possibly
screw
up
giving
up
the
lock,
because
the
the
language
handles
it
for
you
and
the
same
way
with
a
reference
count.
A
It
knows
when
that
reference
goes
out
of
scope
and
then
it
bumps
the
reference
down
then
so
the
type
system
is
be
able
to
encode
these
properties
about
these
concurrency
aspects,
and
it's
able
to
not
only
like
only
let
us
share
things
when
we've
promised
that
we've
shared
them
in
a
way
that
is
okay
to
share,
but
we've
also
managed
to
not
screw
up
the
ability
to
screw
up
either
the
counts
or
the
locking,
unlocking
mechanism,
because
the
type
system
is
able
to
handle
that
for
us.
Okay.
A
So
so
that's
that's
that
and
we'll
talk.
I
want
to
well,
okay,
let's,
why
not
demo
god's
time!
A
I
want
to
talk
a
little
teeny
bit
more
about
this.
This
ownership
thing
all
right
since
tab
cool
okay,
so
we
have
a
sandbox
at
play.russlang.org.
A
Let's
see
if
this
actually
gets
bigger,
yeah
cool,
so
this
actually
lets
you
run
rust
code
on
the
web.
So
you
don't
need
a
compiler
installed,
which
is
pretty
cool
and
it's
usually
only
a
day
or
two
out
of
date.
So
I
want
to
show
you
one
more
example
of
how
this
sort
of
memory
stuff
or
this
releasing
works.
I'm
going
to
show
you
some
c
code
and
some
rust
code.
So
let
x
equals
box
5.,
I'm
so
bad
at
typing.
A
It's
terrible,
int
main
stuff,
I'm
not
going
to
type
all
that
out,
because
that's
useless
instar
x
equals
instar
malloc
size
of
int.
A
Asterisk
x
equals
five
three
x.
Okay,
so
I
mentioned
that
ruby
is
safe
because
it
has
a
garbage
collector
and
the
c
plus
is
not
safe.
It
does
not
have
a
garbage
collector
right.
So Rust has this interesting situation going on where Rust does not have a garbage collector, but it feels like you do, because it does this compile-time analysis to determine where the mallocs and frees should occur, and it does the right thing for you, so you don't have to worry about the details of the calls.
A
So,
for
example,
box
is
the
way
that
you
do
a
heap
allocation.
Basically,
and
so
this
is
allocating
a
pointer
to
a
five
allocated
on
the
heap.
Obviously
allocating
an
integer
on
the
heap
is
kind
of
silly,
but,
like
I
don't
wanna
get
distracted
with
all
the
other
things
that
are
in
here
and
then
we
just
like
never
free
that
memory.
So
what
russ
is
able
to
do
is
it's
able
to
say
at
compile
time?
A
Okay,
you
to
allocate
an
integer
on
the
heap,
so
I
know
I
need
to
malloc
a
size
of
one
inch
and
if
this
was
an
array,
it
would
count
the
elements
of
the
array
and
it
would
mallock
that
much
memory
and
then,
when
x,
goes
out
of
scope
at
the
end
of
the
function.
I
know
I
can
insert
a
free
here.
So
this
is
what
the
code
actually
is
generated
by
the
rust.
A
Compiler
is
the
assembly
code
that
is
equivalent
to
this,
but
you
you
don't
have
to
worry
about
this
shenanigans
or
like
making
sure
that
you
have
the
right
size.
We
can
like
look
at
the
type
and
determine
what
size
it
should
be,
and
this
stuff
all
works
as
far
as
like.
So
the
the
one
comment
was
like
what
about
going
across
functions.
So
let
me
add
one
to
this
number.
I don't actually... I'll explain that in a second; that's a weird thing. I was hoping nobody would ask me about the semicolon, but oh well. All right, let's see. I think this is gonna break, because it's gonna complain... yeah, so I actually need to pass a reference. This is a little gross at the moment, because we don't do automatic conversion. Okay, so we get a six. So what happens here is: I make my five on the heap, then I dereference that value to point into it, and I pass a reference to what it is pointing to. We basically took out all of the automatic "make the references work pretty" stuff, because we wanted you to control that kind of thing. It's still a little controversial, because ampersand-star is a little gross. But the point is this `&int`: this is a borrow operator, so x is the one that is responsible for freeing this memory.
It wouldn't really cause a problem, but for the duration of this function call, Rust knows that this function is temporarily asking to see an immutable reference to this value, and so it's able to do this kind of analysis to figure out that it should not be destructed. So that also applies if we were doing threading stuff while this was going on.
A
If
this
like
took
a
while,
it
would
not
let
anything
happen
until
after
it's
done,
and
so
this
ends
up
generating
the
same
call
in
the
sense
that
we
don't
allocate
a
new
integer.
We
just
pass
a
a
reference
to
it,
but
we're
able
to
do
the
type
stuff
like
that.
This
is
the
most
complicated
and
most
important
concept
in
rust
and
I've
written
three
drafts
of
the
guide
on
explaining
this
and
I've
not
been
happy
with
any
of
them.
A
So,
and
I
know
this
is
a
little
confusing,
and
this
is
like
a
you
know,
talk,
and
so
it
probably
takes
half
an
hour
to
totally
wrap
your
head
around
the
the
way
that
all
this
stuff
works.
So
I
apologize
that
I'm
not
yet
better
at
explaining
it.
Yes,.
A
At
runtime,
yeah
yeah,
so
the
way
that
we
do
function
pointers
is
I
could
actually
so
I
always
forget
the
type
signature
for
accepting
something
but,
let's
just
say,
function
pointer.
Let's
say
the
one
x
to
be
a
function
pointer.
I
can
actually
use
just
the
bare
name
and
it
will.
It
will
then
turn
that
into
the
right
function,
pointer
and
pass
it
along,
and
it's
able
to
handle
all
that
stuff.
Yeah, it does that whole analysis and figures out that everything's right. Yep. The other thing I guess I should say... and change this back. I guess it doesn't really matter that much. Yeah, I don't know what I was gonna say there; never mind. Yay, demos! Okay, so let me get back to the actual stuff. I think it's here... yeah, full screen, who knows? Okay. So: accessibility. I just threw a ton of code at you with some relatively complicated concepts and explained them only halfway.
A
So
maybe
by
saying
this
makes
it
easy
for
people
is
like
the
wrong
transition
in
this
particular
talk.
But
I
think
one
of
the
things
that's
important
is
that
systems
programming,
like
the
systems
program
that
I
learned
as
a
little
kid
like,
I
wrote,
make
files
as
a
little
kid
and
today,
if
I
was
writing
a
c
program,
I
would
use
makefiles
like
it's
the
same
and
that's
nice,
because
it's
familiar
and
I
know
make
files
and
I
love
slash
hate,
make
files
but
like
technology,
has
been
significantly
improved
since
make
files.
A
Basically,
and
so
that's
that's
cool
and
again,
stability
is
really
important,
but
it
also
means
that
it's
like
it's
really
hard
to
convince
someone
that
they
should
be
using
make
files
today
if
they
haven't
been
using
makefiles
for
the
last
like
20
years,
because
makefiles
are
a
pain
and
so
systems.
A
Programming
in
some
ways
has
been
like
stuck
in
this
like
late
80s
early
90s,
not
only
like
mindset
but
also
like
approach
to
almost
everything,
and
one
of
the
things
we
try
to
do
very
seriously
in
rust
is
use
more
modern,
tooling,
and
it's
heavily
inspired
by
what
so
yehuda
also
does
a
bunch
of
rust
stuff,
and
so
we've
been
bringing
the
like
ruby,
javascript
mindset
to
this
kind
of
tooling.
So
I
want
to
show
you
one
of
the
tools
which
was
written
by
huda
called
cargo.
A
That's,
like
sort
of
the
official
rust
package
manager,
slash
build
tool
that
would
like
replace
makefiles.
So
it's
called
cargo,
because
rust
calls
the
unit
you
compile
a
crate,
and
so
you
ship
crates
because
they're
cargo,
so
there's
a
little
bit
of
you,
know
punny
stuff.
There
cargo.io
is
taken
by
some
startups.
So
it
has
to
be
crates.io
for
the
website
because
you
know
they're
like
bankrupt
or
whatever,
but
they
still
on
the
domain
name
thanks
startups.
A
Okay,
so
to
start
a
new
project,
you
can
do
cargo
new,
hello
world,
and
we
want
this
to
be
a
binary,
not
a
library,
so
we
pass
the
bin
flag
and
then,
if
we
cd
into
that
hello
world
and
we
do
tree
dot,
we
see
that
it
set
up
this
very
minimal
project
structure.
So
we
don't
need
if
you're
building
a
web
app,
you
start
doing
like
models
and
views
and
controllers
and
all
this
crap
all
over
the
place.
If
you
use
a
tool
like
bundler
npm,
you
would
get
all
these
files.
A
You
may
be
wondering
what
tomml
is
tamil
is
a
file
format
named
after
tom
preston
warner,
one
of
the
co-founders
of
github,
and
basically
it's
like
ini,
but
actually
standardized,
instead
of
just
being
random
and
with
some
extra
nice
features
on
top,
we
we
argued
forever,
as
you
might
imagine,
with
the
way
the
bike
sheds
work
about
what
file
format
this
should
be,
and
the
problem
is,
is
that
you
know
yaml
is
just
like
totally
terrible.
I
mean
it's
fun
to
use
until
it
like
destroys
everything
due
to
security.
A
Vulnerabilities
json
is
a
terrible
format
for
human
editing
configuration
because
you
need
trailing
commas
or
you
can't
have
trailing
commas
rather
and
there's
no
comments
inside
of
json,
so
you
like
it's
really
awkward
and
it's
not
really
human-writable
in
any
appreciable
way.
Ini
is
just
like
totally
a
mess,
because
some
ini
things
do
this
and
so
mine,
I
think,
do
that.
It's
like
csv,
like
everyone,
has
their
own
definition.
A
So
we
sort
of
settled
on
like
tomml
is
kind
of
okay,
but
it's
like
the
least
terrible
option:
okay
and
then
a
source
main.rs
file
inside
that
toml
file.
We
have
these
so
this
directive
package
and
then
we
have
three
keys,
hello,
world
version,
0.0.1,
and
then
this
will
actually
take
from
your
git,
your
git
string.
It
will
fill
in
your
name
and
email
address
and
those
three
lines
are
all
we
need
to
be
able
to
like
have
this
metadata
about
our
project
and
inside
that
main.rs
file.
A
There's
just
the
regular
old
hello
world
that
does
this,
and
so
then
we
can
type
cargo
run
and
it
will
actually
compile
the
project
entirely
and
then
run
the
output
and
we
get
hello
world.
So
this
is
really
nice
in
the
simple
case,
but
of
course
you
know
everyone
can
demo
the
simple
case.
What
happens
when
you
get
to
more
complicated
things?
So
one
of
the
things
that's
really
nice
and
why?
I
think
that
cargo
is
a
super
big
advantage
over
make
files.
Is
this
dependency
management?
Basically... so, the website for the package manager is up in, like, a couple of days or weeks now. Right now you have to fetch everything off GitHub, but eventually you'll be able to just say "fetch this from the central repository, please," and there's a whole bunch of other options about fetching a particular commit and tag and all that crap. But right now you point it at a git repository, and then what happens when you run `cargo run` is it'll say "Updating git repository": it'll go fetch that, it'll place it in the right place, and it will then compile that library at that version. You'll also notice that there's a hash here, so it's actually at a specific hash; I'll talk about that in a second. Then it compiles the hello world thing, links them all together, and then runs it. So we get this whole dependency resolution shenanigans, basically just declaratively: you name what it is.
A
This actually then writes out a Cargo.lock file that records all of the very specific versions. So if you then copy this onto your machine, and say that I, as the maintainer of semver, push up a new commit that would break the world, you'll build with the exact same commit that I've highlighted, as opposed to the brand new one that's off of HEAD.
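An illustrative Cargo.lock entry (the URL and revision hash here are made up) pinning a git dependency to one exact commit:

```toml
[[package]]

name = "semver"
version = "0.0.1"
source = "git+https://github.com/example/semver#d39b1e8c"
```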
A
And then, when you want to update to the new HEAD, you can run `cargo update semver` and it will fetch the latest version at HEAD and, like, figure it out. Now, what if we had multiple dependencies and they had version restrictions on them? So say I was using some other library that depended on semver 1.0 or whatever.
A
Cargo will automatically figure out those transitive dependencies, and all the versions that are appropriate for all those things, and download them and put them in the right place and do all the magic necessary and all the linking flags and all that kind of crap. So it's super, super wonderful compared to, like: let me write a makefile where I do git submodules and, like, hope that I'm not breaking everything, and all that kind of stuff. And it feels very familiar to me as a Rubyist, and so therefore it is not scary.
A
The last thing that's really interesting we've been able to do with Rust: this filter call ended up being a little off to the side, but in Rust we use iterators very heavily, and Rust bets on its iterator support for speed. What's really cool about, like, modern compiler technology is that this sort of looks like what you would write in Ruby or JavaScript, but it ends up being as fast as a C for-loop implementation. So here I'm writing an iterator. So xs is an array; I say xs is going to be a vector of int.
A
So if you want to add type annotations: you'll notice there's been a lot of inference overall, but you can actually annotate specifically what you want this to be. And so I take xs and I make an iterator out of it, and I call map, where I pass in an x and I add one, and I have a filter where I just do a modulo two and test if it's equal to zero, so we get the evens, and then we collect up that list into a final array.
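In today's syntax (the pre-1.0 code on the slide differed a little, e.g. it wrote `int` rather than `i32`), the chain looks something like this:

```rust
// Build up a pipeline of iterator adapters, then collect once at the end.
fn evens_after_increment(xs: &[i32]) -> Vec<i32> {
    xs.iter()
        .map(|x| x + 1)           // add one to each element
        .filter(|x| x % 2 == 0)   // keep only the evens
        .collect()                // produce the final vector in one pass
}

fn main() {
    let xs = vec![1, 2, 3, 4, 5];
    // 1..5 become 2..6; the evens among those are 2, 4, 6.
    println!("{:?}", evens_after_increment(&xs)); // [2, 4, 6]
}
```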
A
You end up with this vector of integers. LLVM is actually smart enough to realize that these closures can be inlined, and then, because these iterators are like streams, they don't allocate intermediate arrays at all. It then does stream fusion and is able to, like, make both of these operations happen in one pass, and so this ends up being, like, hyper-performant, super low-level stuff.
A
But you get to write this really nice, high-level-feeling thing, and so this is kind of what I mean when I say Rust is, like, simultaneously high level and super low level. We call these zero-cost abstractions. They're not always zero cost, but they're at least as minimal a cost as possible, in that we want to give you this high-level feel, but we don't want you to have to pay the costs that are traditionally associated with the high-level feel. And so that's a really important principle of Rust.
A
Okay, so I'm a little bit early, but if anybody has more questions, I'd be happy to go over them, and we'll see about demo shenanigans. Yeah, what's up? One question? Yes.
A
Yes. One thing before your question: I forgot to mention the stability thing. So Rust is still pre-1.0, but we have told basically everyone that 1.0 will ship by the end of the year. So it's going to have to happen, and it will happen, no matter how many nights and weekends I need to work to make it happen, because I've been on stages telling people this, and I don't want to break that promise.
A
We'll have a release candidate at the end of the year, maybe the first week of January, and then six weeks later 1.1 will come out, and it will be a backwards-compatible version with some new features, and then six weeks after that, 1.2 will come out, backwards compatible, with new features. And so we're hoping that a super fast release cycle like this will enable us to ship stuff really quickly, and it also has really nice benefits for, like, making the language nicer.
A
So one of the weird things that happens, and I've seen this in the Ruby world, is when you have a release cycle that's, like, every Christmas: if someone comes up with a new feature in November, oftentimes you get a half-baked feature shipped at Christmas, right, because otherwise you'd have to wait another year to get that feature. So when you have this six-week release cycle, you can afford to say: we're going to put this in behind a feature gate.
A
It's going to show up in the nightly builds, but it's not going to show up in the stable builds. People that want to play around with a new feature and try it out can do it in nightlies, and then, at the next six-week point where it makes sense, it'll get shipped. So I think it has good effects for code quality too. But yeah, right now there's still some stuff that is changing a little bit; the core language semantics are totally fine.
A
It's like: we're going over the standard libraries and making sure that similar things are named the same, for example, because we, like, had no convention for naming stuff and it was built by random people over six years. So there are a lot of really ugly method names that we're just, like, changing to be nice and consistent and stuff like that. That's sort of the in-progress one right now. Okay, so that's libraries, and the other thing was compiler speed.
A
So the Rust compiler, first of all, is kind of notorious: compiling itself is very slow. A full fresh rebuild takes like an hour. The reason is that Rust is written in Rust itself, and so it needs to bootstrap; it actually ends up compiling itself three times if you do a full build. And sometimes we have custom patches that haven't made it into LLVM trunk yet.
A
So sometimes you have to do a full LLVM build and then do three Rust builds, and so the compiler itself takes a while. But once you have compiled the compiler, or if you install a binary version of the compiler, it is faster than C++, although not quite as fast as Go, and most of that is due to LLVM, not us.
A
We lean on LLVM's performance passes quite a bit, and so we've made it as fast as we possibly can, and it is something that we care about, but it is not going to be as fast as Go is, which is one of Go's big, super strong points.
A
Someone did a seven-part blog series about writing, like, a ray tracer in Rust, and they ported their C++ version over, and it was something like: the C++ version compiled in like 25 seconds and the Rust one was like five seconds or something. So preliminary reports are looking pretty good; we'll see, once 1.0 ships and more and more people start actually using the language, how that actually works out. But it is something that we care about. Okay: library ecosystem.
A
So even though we don't have the central package repository up yet, there's a surprising number of packages that do exist. We have like three competing web frameworks already, actually, which is kind of interesting; a lot of them are sort of more of the, like, Express or Sinatra kind of thing, rather than like a full Rails kind of deal. But it's sort of interesting, and people are making new ones all the time. There are also weird blind spots: so, like, we have a really awesome Postgres adapter and a really awesome Redis adapter.
A
But, like, other databases? I don't know. And so this is what happens when you're in a pre-release ecosystem. But after we got Cargo built, like, packages have started to proliferate, and once the central repository is up, things will grow; if you graph the curve of packages that we know about, it is, like, super exponential, which is always going to happen in a nascent community, right? So it is currently being filled out: there are some things that have really, really good coverage and other things that have terrible coverage.
A
You know, just like everything else. But for a lot of people, the issue is that Rust has been changing stuff, sometimes on a daily basis. I'm actually working on a patch right now that is, like, literally going to break every Rust program ever, but it's just replacing the word fail with panic. So for when you fail a thread: we called it fail, and Go calls it panic, and we think that name is better, so we're gonna use it too. So that's like a find-and-replace operation, but it's technically a breaking change, right?
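In current Rust, that renamed macro is `panic!` (`std::panic::catch_unwind` arrived later, and is used here only to observe the panic from the same thread):

```rust
fn main() {
    // panic! unwinds the current thread, which is the operation the
    // talk describes being renamed from fail! to match Go's term.
    let result = std::panic::catch_unwind(|| {
        panic!("something went wrong");
    });
    // The closure panicked, so catch_unwind hands back an Err.
    assert!(result.is_err());
    println!("caught the panic");
}
```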
A
So a lot of people have sort of said: this sounds really awesome, Steve, but I am not going to dig into actually learning it until you have some sort of semblance of stability. But a lot of people are, like, really interested, so post-1.0, I am, like, excited and slightly scared to be on IRC on, like, January 1st or whenever this actually ships, and just watch, like, tons of people pour in and have all these questions. Like, January's gonna be terrible.
A
Awesome. I'm really excited about it, but yeah, so we'll see library stuff continue to grow after that. Already, then, we have two production deployments that we know about. One is OpenDNS: they use it as middleware to do spam-catching stuff, so they, like, connect to their message queue and do shenanigans. The other one is Skylight: Tilde has a product that does Rails performance monitoring, and the Ruby gem that you install in your Rails application is actually implemented in Rust.
A
This is actually one of the areas where Rust shines: embedding in other languages. Because it does not have a garbage collector, it is trivial to embed in a garbage-collected language, because you don't have two GCs fighting over who owns what memory. I guess I shouldn't say trivial; I should say way more trivial than embedding a GC'd language in another GC'd language. And so that's been sort of a niche that's surprising, in that we didn't really think about it at first. Mozilla's motivation, I should have mentioned this too, is writing Servo.
A
So the reason that Mozilla cares about this low-level stuff is basically that Firefox is implemented in C++, and Firefox, I shouldn't say this, but I'll say it because whatever, is, like, riddled with security holes. Pwn2Own: like, every browser gets totally wrecked, right? There are always security vulnerabilities, and in the last Pwn2Own competition, the four security vulnerabilities that Firefox had would have been compile-time errors in Rust, which is a super awesome, significant savings.
A
Actually, the Rust compiler is written in Rust, and then they're writing Servo, which is like a Gecko or WebKit replacement, in Rust, which I was going to show you in my slides, but I'm not using my computer. And so that's sort of Mozilla's motivation: browsers need to be high performance and they need to be super safe, so that's why they started building the language in the first place.
A
Yes. Yeah, so the question was, like: what parts of C++ do we not support, or what features does C++ have that we don't? The biggest one, so, templates: Rust has a macro system that is hygienic and it operates on ASTs.
A
So it's not, like, a textual substitution like C's stuff is, but it's a little more restrictive than C++ templates.
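A tiny `macro_rules!` sketch of that difference (the macro itself is made up for illustration): the `$x` here is matched as one expression node in the AST, not as raw text, so grouping survives expansion:

```rust
// Unlike C's textual `#define SQUARE(x) x * x`, this matches `$x`
// as a single expression, so `2 + 3` stays grouped on expansion.
macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    // Expands to (2 + 3) * (2 + 3) = 25, where C's textual macro
    // would expand to 2 + 3 * 2 + 3 = 11.
    println!("{}", square!(2 + 3)); // 25
}
```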
So we don't have, like, constexpr and those kinds of things, although we're, like, hoping to add that kind of stuff. One of the other things is that, a lot of times, if you are one of those people that wants to do super low-level, hero-mode kind of stuff, you can do it in Rust.
A
But we have a thing that basically is, like, unsafe mode, that in a block of code turns off some of these safety checks; you're basically making a promise to the compiler that it's okay. And even though you can do the same kinds of things, it's a little more awkward and takes a little more work. So, like, C++ is better at being dangerous than we are, which is, like, good and also bad, right? But that's definitely the other big thing.
A
This is more of a D thing than a C++ thing, but, like, compile-time function evaluation is something that we just don't have that a lot of people would really like to have. But you can't have everything, so we'll get there someday, but not right now. I would say those are probably the biggest things. The other one is just the ecosystem, right?
A
You can't beat 30 years of compilers and tools and tooling. We can steal some of it by using LLVM, because we share it: all those people that Apple are paying to, like, make Clang better are, like, also making us better. But you know, the ecosystem of a pre-release programming language against one that's been around 30 years is, like, hard to beat. So those are the big things. Yes?
A
Does Rust interop? Yeah, so the question is: what about interop with C and C++? C interop via FFI is super fantastic; we went to great pains to make sure that it has, like, zero cost. For one example, we used to use segmented stacks, and we abandoned those because doing that with C is, like, terrible. And so you basically have an FFI where you declare, like, the equivalent of the header files, and you just link against it, and it just works.
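A minimal sketch of that in today's Rust: declare the C function's signature (the equivalent of its header), then call it, which requires an `unsafe` block:

```rust
use std::os::raw::c_int;

// The Rust-side declaration of a function from the C standard
// library, which is already linked in.
extern "C" {
    fn abs(input: c_int) -> c_int;
}

fn main() {
    // Calling across the FFI boundary is `unsafe`: we are promising
    // the compiler that the declared signature matches the C side.
    let magnitude = unsafe { abs(-42) };
    println!("{}", magnitude); // 42
}
```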
A
We also have a tool that will generate those header-file declarations for you, and it's mostly awesome; occasionally you need to clean some stuff up, but it, like, gets you most of the way there. C++ we have no real interoperability with, but if it exposes a C ABI, then we interop with that, basically, so that's on the author of the C++ library. D has done some limited amount of, like, interacting with C++, but there's no stable ABI; that's, like, really hard to do.
A
So we just, like, do the same thing: we're like, expose C, please, and we'll handle it. But yeah, FFI is super cheap and pretty easy. Actually, yeah, I was doing some of that Ruby-embedded-in-Rust stuff, and technically I have a Ruby gem that's written in C, and the C calls into a Rust library, actually, which is, like, interesting. So you can expose Rust libraries to C, as well as, like, call into C libraries from Rust.
rust,
so
like
c
int
is
a
different
type
than
r
int
and
you
have
to
like
cast
where
appropriate
and
stuff
like
that,
and
it
is
by
default
unsafe.
It
has
that
unsafe
blocker
around
it.
So
you
have
to
like
tell
the
compiler
like
I
promise
that
this
is
correct
and
I'm
handling
some
of
the
details
so
but
yeah
the
nice
thing
about
so,
as
we
I
mentioned
a
little
bit
briefly,
this
on
safe
mode.
A
Basically, the deal is, like: regular, safe Rust cannot segfault. So if it segfaults, it's because you have an unsafe thing somewhere that you screwed up, so you only have to audit, like, this little tiny chunk of code. Cargo actually has zero unsafe code in it at all, entirely, so it's completely feasible to build large programs that are fast without needing to use unsafe.
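A small sketch of what such a block looks like: dereferencing a raw pointer is one of the operations that only compiles inside `unsafe`:

```rust
fn main() {
    let x = 10u32;
    // Creating a raw pointer is safe; dereferencing it is not.
    let p = &x as *const u32;
    // This is the small, auditable region: we promise the compiler
    // that `p` is valid to read here.
    let y = unsafe { *p };
    println!("{}", y); // 10
}
```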
A
But if you want to bind to a C library or whatever, you can, and then you just know that that's what caused the problem. And also, the stuff I've got... actually, maybe... I don't know if I have any time. Dude, nobody's... nobody? Okay, yeah, one more question, cool. I was like, nobody's kicking me off this stage, and then here he is. Yo, what's up?
A
Yeah, so what we do is very similar to npm shrinkwrap. Basically, what happens is: if the two libraries are semver compatible, then we only have one version of that library and we make them both use that. If... so, okay, I should have repeated the question. The question is about transitive dependencies.
A
If my library A depends on B and C, and B depends on D version one and C depends on D version two, what happens? If they're not compatible, like in that example, where one wants version one and one wants version two, it will actually compile the two different versions and link them against those libraries, and you will basically get duplicated stuff, so it works. If they're like 1.1 and 1.2, then we'll compile version 1.1 or 1.2, whichever works.
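Sketched in Cargo.toml terms (the crate names are hypothetical): when both requirements fall in the same semver-compatible range, one copy of the dependency is shared; requirements like "1.x" versus "2.x" would instead each get their own build.

```toml
# In b's Cargo.toml:
[dependencies]

d = "1.1"   # accepts >= 1.1.0, < 2.0.0

# In c's Cargo.toml:
[dependencies]

d = "1.2"   # accepts >= 1.2.0, < 2.0.0; overlaps b's range,
            # so a single version of d satisfies both
```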
A
We'll do some deduplication if it makes sense within the rules of semver, and if not, then we basically throw an error saying, like: hey, there's a problem; this depends on this, and this depends on that, and you've got to figure out what's going on. Yeah, it's a good question, though. Okay, thanks so much, and feel free to come talk to me for the rest of the conference about this stuff if you're interested.