From YouTube: Bay Area Rust Meetup August 2014
Description
- "The State of Rust 1.0" by Niko Matsakis
- "Dropping the Drop Flag" by Felix Klock
- "The Borrow Check in 10 minutes" by Patrick Walton
- "Cargo" by Alex Crichton
A
All right, hi everybody, hi Internet. Welcome to the August edition of the Bay Area Rust meetup. Thank you all so much for coming out tonight. We have a very special opportunity: we actually have, I think, all of the Mozilla Rust team here, and most of them are giving a talk tonight. So it's very exciting, and I hope you find it all very interesting. As always, thanks to Mozilla for the food, and also for the hosting of these events, which is always great.
A
So we have a long agenda tonight. I'm not going to go through it now, because we're going to hear about the talks in just a couple of minutes, but it's going to be a very full night, so we might actually run a little bit late, especially if any of you have questions at the end. I was also hoping, if we have time, to do some questions and answers, so if anyone here, or anyone on the internet, has questions for any of the Mozilla people,
A
Please get those prepped. And finally, I just want to mention what we have coming up in September: right now we have four events. I haven't yet scheduled the next Bay Area Rust meetup, so that might be it — I'll hopefully get it done by the end of this week. You can also check out the community calendar, where we are announcing the new events. So anyway, I think first up is Niko, talking about the state of Rust 1.0. Please welcome him.
B
So my name is Nicholas, or Niko, Matsakis, and I want to talk to you about Rust 1.0: what it is, what it's not, and how we got here, essentially. All right, so I think most of you have probably heard that Rust 1.0 is coming, right? And what do we mean by that? It really means a lot of things, but the most obvious one is that it's the first release of Rust where we're going to promise backwards compatibility.
B
So we've been working on it for a long time, and a lot of you have been here with us, following along for a long time, and I think you can attest that it's come through a lot of different stages. We're really getting to a place where we're basically achieving the goals that we set out in the first place. We wanted to have low-level power and flexibility, like C, where you can have great memory layout and good cache performance and all those things when you need them, but also the feel of a high-level language, with all the convenience of closures and type classes, kind-of-functional features, convenient matching syntax and so forth — and also the safety, right, not having to deal with all the crashes. It took a long time to figure out how to do that. Basically, when we started, it really wasn't obvious how to get the low-level performance, or how much of that low-level performance we could get along with safety and so forth.
B
So before I go on, I just want to take a minute to say that a really big part of that is that we would never have been able to get this far without all the people in the community. Those of you who are here today, and the people on the internet who've been contributing patches: it's been really helpful, and it's really enabled us to move Rust forward.
B
So we're really excited, and I want to say thanks to everyone who's done that. I said it wasn't obvious at first what Rust 1.0 should look like, so I thought I'd go way back in time and look at what Rust looked like in the very beginning. Some of this is even from before I was involved, which has now been two or three years.
B
I don't remember exactly, but basically, when Rust first started, we said: well, here are a lot of nice features that we want, and we want them to be safe. So we want to have, say, tasks that can send messages across channels, and to do that we'll add it into the language, so that the compiler can reason about it.
B
So we'll have special types that represent channels, and special types for tasks and so forth, and that way we can have special type rules that apply just to channels. Then we can have things like ~T to represent a message that's unique and can be sent, @T for garbage-collected data, and argument modes.
B
This is some of the older syntax we used to have, as of hash revision 04111326 — I don't actually remember what date that was, but that's okay; something in 2011, I think. In any case, you can see it certainly looks kind of different from today. We used to put the types and variable names in the other order; we had some things I don't really remember the purpose of, like `ty::t`; and `alt` was what we used to call `match`.
B
Question marks were what you wrote when you wanted to bind something in a match statement — it looks a little funny today. But underneath this different-looking syntax there was really the different approach I was talking about, where a lot of the safety features were baked into the language. So here's another snippet, from revision 73fed011, where you have this function and it's returning this thing. What this is, is how we used to write a struct: `rec`, for record. But here, these two types:
B
These were built-in types that the language knew about — tasks and channels — and they were kind of like what `uint` is today: part of the language, with special rules. And that's the problem with this: maybe you like the tasks and channels that we have, but maybe you don't. Maybe you think you can do better, or maybe you think you can have a more flexible set of rules around them.
B
There really wasn't much choice at that time, because it was just how the language worked, and it wasn't something you could write a new library for and let other people use and build on. Since then, we've come a pretty long way.
B
I think, essentially, we've been able to take all those features we used to have — the channel types, the old `ty::t`, and so on — and move them, for the most part, into libraries, or sometimes just get rid of them altogether, because they weren't really necessary. Instead, in the language itself, we just have this small core. We have rules about ownership, so the compiler knows who owns what, and using that you can do all these other things.
B
It's
all
kind
of
the
same
and
we
add
in
over
loadable
operators
and
type
traits,
and
you
can
just
kind
of
build
all
the
rest
of
us
in
the
library
itself,
and
that
means,
of
course,
that
you
can
come
along
and
extend
it
and
other
people
can
build
on
it
and
bring
in
competitive
competing
libraries
and
we
can
have
a
whole
ecosystem,
so
I'm
pretty
excited
about
that
and
I.
Think
that's
now
that
we're
here.
It
really
feels
like.
Oh
well.
B
I think what's also really exciting is that, as Rust has gotten simpler, we've found that we can actually do more than we originally set out to do.
B
So now we can have things like pointers that go directly into hash maps and vectors and get returned and passed around. We can have essentially zero external dependencies, thanks to a lot of hard work by a lot of people, so that you can build a kernel module in Rust and things like that. And we can have shared memory: it's not just tasks and message passing — you have those, but you can also have locks and other things, which are useful sometimes.
B
Basically, the bottom line is: we've got this zero-cost abstraction infrastructure — the ability to build up new language features that don't cost you any runtime performance compared to something that was built in, and that don't involve lots of crashes or other errors. So that's where the language is, more or less, now, and where we're pushing it for 1.0. We've got some things to finish up — a few big ones, like DST, closures, and a few other things — but 1.0 is about more than just having a stable language.
B
These days it takes a little bit more than that to have a programming language. You need to have all these collections and abstractions and things that everybody can build on and share and so forth. So, as part of stabilizing Rust — I think we can't really call Rust 1.0 backwards compatible unless we also say we have some stable libraries — another big thing we've been working on is taking the libraries that we have and sorting them out, to figure out which ones are stable, which ones we can stand behind.
B
Our
their
interface
is
going
to
stay
the
same,
going
forward,
which
ones
are
kind
of
still
experimental
and
getting
and
getting
our
stories
straight
essentially
and
also
making
sure
that
the
conventions
across
all
libraries
are
consistent
so
that
the
method
names
are
what
you
expect.
Basically,
final
polish
and
some
of
them
may
go
out
into
cargo.
So
if
there
are
libraries
that
just
don't
seem
like,
they
have
up
they're,
not
ready
yet
to
be
released
for
real,
they
can
live
as
external
libraries.
So
this
process,
I
think,
is
very
crucial
as
well,
but
basically
I.
B
Think
what
I
would
most
want
to
emphasize
in
this
talk
is
that
launched.
One
point.
0
is
a
big
goal
for
all
of
us.
There
were
all
working
towards
and,
and
things
going
to
be
a
big
moment
for
the
language,
but
it
is
not
the
end
point.
It's
really
the
starting
point,
it's
the
point
at
which
we
can
start
to
build
up
the
libraries
out.
The
way
we
want
and
we
can
keep
working
and
building
up
the
set
of
features.
Now
that
we
have
this
stable
core
that
lets
rust
be
what
it's
meant
to
be.
B
We can push it further. We're going to be putting out new releases on a train schedule — I think the plan is every six weeks; we'll see how it goes — hopefully with nightly, beta, and release channels, sort of like web browsers and some other projects do these days, and we're going to keep working after 1.0.
B
Basically, we'll keep building on this core language that we've got. So that's about all I had to say — thank you all. I wanted to mention that normally I work from Boston, so it's actually really nice and really exciting to come here and see everybody gathered together for one of these Rust meetups, which I usually have to watch over the internet or read about afterwards. I would really love to talk to you all, if we have some time afterwards or when the talks are over — please come and find me. So thank you.
D
All right, hey everybody. My name is Aaron Turon. I'm sort of the newest member of the Rust team — I joined about four months ago — and my primary focus has been one of the things that Niko mentioned, which is trying to get the Rust libraries into the best shape possible, so that we can release a stable core at 1.0.
D
What I wanted to talk about tonight, briefly, is first of all what the standard libraries look like that we're planning to ship with Rust 1.0, then a little more detail on this stabilization process that Niko mentioned, and then some stuff I've been thinking a lot about recently: these final language features that we're trying to get nailed down for 1.0 all interact with the libraries.
D
So in some cases it's important that we get features landed sooner than 1.0, so that we can get the APIs where we want them. These things all have to fit together, and I'm just going to highlight some of the major features that have an impact on the library, and sort of show you what's ahead of us.
D
Okay, but let's start with the standard library. Those of you who do much Rust programming are probably pretty familiar with all of these things. The standard library includes, of course, as you'd expect, the primitive types that are built into the language and operations on them, as well as lots of other basic constructs. Like Niko was mentioning, we have this ability to build, in the libraries, a lot of abstractions that you'd normally have in the language, so we have all kinds of smart pointers.
D
We have different kinds of concurrency primitives, and so on and so forth. Another really important part of the standard library is iterators; that's sort of our way of working really generically across collections, and it's something that I think has worked really well for us. We also have a number of concrete collections — things like linked lists, vectors, hash maps, and the kind of stuff you'd expect — and basic concurrency primitives like mutexes.
D
There are some interesting varieties of channels, and a couple of others; then a cross-platform IO module and a few other things. So that's the basic gist of what's in the standard library today, and we can expect that 1.0 will basically include all of this stuff, and probably not much more, in that core distribution.
D
Okay, so, that said, we've been going through the kinds of modules I just outlined and basically scrutinizing the APIs, trying to figure out what we feel good about and what we're not sure about. We've adopted this model that Node.js actually pioneered, which is a notion of stability levels. Basically, this is a way of explicitly marking APIs with the kind of promise that you want to make about them.
D
Okay, so basically we've been going through this process of walking through all the definitions, all the APIs in the standard library, and trying to assign them into these categories, which is an enormous process; the whole team is sort of pitching in on this. The way we've been doing this is that we started by marking everything experimental, across the board.
D
So basically, we start by making no promises whatsoever, and then we're taking a first pass where we're just getting a look, as a group, at what the APIs are, not marking a whole lot stable right away. In a lot of cases we find convention issues and other things that we need to work out, so we're not quite ready to say "stable", but we're trying to mark things at least unstable, or in some cases deprecated.
D
And then the plan is, as these things start converging, we'll take a much faster second pass and move most of this stuff over to stable. So, personally, I'm pretty optimistic about where we're going to be when we hit 1.0. There's a lot of work left to do, but I think we can stabilize quite a bit of the standard library that I outlined earlier. Okay, so part of the work that I've been doing along these lines is some infrastructure work to let us — and you — track the stabilization process.
D
So if you go to any rustdoc page, there's a link at the top of the crate that gives you what we call the stability dashboard. If you click on that, you get a graphical representation of how far we've come in the stability process, broken down module by module. You can see that right now something like seventy-eight percent of the standard library is still experimental; that's partly because there's a lot of stuff in std that is not actually part of what I listed.
D
So how close are we, in terms of what I said before? We're basically done covering all the built-in types and the most core types for Rust. We've also done a lot of the basic concurrency primitives. We've hit a bunch of convention issues along the way, but we have RFCs trying to address a lot of these, and I think once we get that sorted out, we'll be able to start making more rapid progress. But obviously there's still a lot left to tackle on this list.
D
Of what's left, the other major part is working on IO, and we've been talking about this very intensively during the work week. Again, there's going to be an RFC coming out pretty soon describing our general plan, but I think for IO you can expect things to stay roughly in the shape they're in today — modulo bringing a lot of things into line with conventions; it's mostly the factoring underneath that's going to change. Okay! So now I want to transition into the second half of the talk, which is just thinking about:
D
How does the API design actually interact with the language features that we're trying to get done, or figure out how to schedule, for 1.0? I've put up on the side here a list of some of the major features that are in the works or proposed — things that might land in the 1.0 timeframe — and I'm going to focus on the last three: where clauses, associated items, and multidispatch.
D
Okay, so the first one is where clauses, and the basic idea of where clauses is really quite simple. At the top of the slide, I have what you write in today's Rust when you're defining an impl for HashMap: basically, you say that you're taking some type parameters, like K, and then right after that you write the trait bounds that you expect to hold for K. Where clauses just tease these two things apart.
D
So you write the type parameters separately, and then you have a where clause that actually lists their bounds. Now, this probably seems like a pretty trivial thing — and if you're like me, you prefer the second one; I think it's more readable, and that's certainly part of the motivation for where clauses — but there's actually a lot more to it than that. One of the things where clauses give you is the ability to express more constraints than you could before.
D
This is an example, actually from the RFC, where you're taking in two types, T and K, and what you want to say about the type T is not that it satisfies some trait directly, but rather that if you put it inside this other type, Option, then that whole thing satisfies some trait. Okay, this is a somewhat uncommon situation to be in, but right now there's basically no way to express this without a bunch of contortions in your API.
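A minimal sketch of that kind of bound (the function is invented for illustration; the RFC's actual example differs):

```rust
// The bound is placed on the compound type `Option<T>`, not on `T`
// itself: we only require that *optional* values of T can be compared.
fn larger_opt<T>(a: Option<T>, b: Option<T>) -> Option<T>
where
    Option<T>: PartialOrd,
{
    if a > b { a } else { b }
}
```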
D
So this is one of the other motivations for where clauses, and there are a few places in the APIs where we'll want to use this capability. But the thing that I'm actually most excited about, in terms of the standard library — something we sort of only realized you could do by accident — is that where clauses are going to allow us to collapse a lot of traits that we have right now.
D
If you look at the iterator module in the standard library right now, you'll see we have an Iterator trait — that's the one most of you know and love — and then we have a whole bunch of other traits that are really closely related: AdditiveIterator, MultiplicativeIterator, CloneableIterator, OrdIterator. Each one of these things is kind of funny.
D
Basically, each one defines just a few methods — so in the case of OrdIterator you get max and min and maybe one more — and then there's a blanket impl, like the one I have up on the slide here. By "blanket impl" I just mean it's literally saying that every type T has an implementation of this thing, as long as certain requirements are met. Okay, so what is the point of this?
D
Basically, what this is trying to do is say that when you're iterating over something that you can do comparisons on, then you automatically get some extra features — like code that can compute the maximum over the iterator. But right now, in today's Rust, to express this relationship, we have to create a whole separate trait and write one of these blanket impls to connect it all together. And I can say, as a newcomer to Rust: when I first started and I landed at the iterator library and I saw these traits, it was completely overwhelming.
D
It took me a while to understand how this all fit together. Okay, so the cool thing is that with where clauses we can actually get rid of all of this and express the relationship much more directly. Basically, we can move all these extra methods into the Iterator trait itself, and then just express the extra constraints directly as where clauses.
D
So we can say: for any iterator whatsoever, if you know that the element type A is ordered, then you get the max method, and otherwise you don't. That's what we were trying to express before with a whole bunch of traits; now we can do it much more directly. There are a handful of places in the standard library where we're going to be able to do this kind of consolidation, so I'm really excited about that. So that's the impact of where clauses. Maybe I should pause — does anyone have questions about that? Does it make sense?
D
If we can have just one way to do something — it's not clear yet, so we should know soon. All right, okay, so the next feature is associated items, which is something that's been requested for a really long time, and it's basically an extension to the trait model. I want to give the sort of canonical motivating example here. Imagine that you're trying to write some generic code over graphs. When you're dealing with graphs, depending on how you set up the API,
D
There are actually several types involved: there's the type of the graph itself, and then you might also have the type of nodes and the type of edges, and APIs that connect all these different types together. But the thing is, these types really come together as a family. It doesn't make sense to vary them independently — to say, here's one concrete graph type, and I have different concrete node types that I can use with it. That doesn't make sense.
D
They come together as a package, but you don't have a way to express that in today's Rust. What you have to do instead is treat these other types as parameters to the trait, as if they were something you could choose differently while keeping the same graph. And this also means that when you're using the graph type, say writing this distance function I have on the bottom:
D
You have to explicitly take all these extra parameters and tie them together with the trait. For this graph example, where there are only three types, that's not a huge deal, but I think there are a lot of things people would like to do with traits that involve many more types than this. cmr actually sent me an example that involved something like 20 types, and it's just totally infeasible to do that with explicit parameters — the signature would be completely unusable. Okay, so associated items, again, are sort of like where clauses:
D
When you first look at them, they seem really simple. Basically, all we're going to do is move these parameters to become items of the trait. The same way that you can have methods, and you can have static functions (which are now called associated functions), you can have associated types in the trait.
D
This just says that for any impl of Graph, you also have to provide a specific type N and a specific type E. But it really conveys that once you've chosen the graph type, the other types are determined; they're not parameters that you get to pick different things for — they all come together as a package. And the impact of this is that when you're writing client code that's using graphs, you just say "give me something that implements Graph", and now you have the whole family of types.
D
You can just say G::N here to get at the associated type. So it has clear ergonomic benefits, and there are a few places in the standard library where this will help: the Encodable and Decodable traits, for example, are right on the border of having too many parameters floating around, and this will definitely make them more manageable.
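A compilable sketch of the shape being described (the names N and E follow the slide; the toy adjacency-list impl is invented for illustration):

```rust
trait Graph {
    type N;
    type E;
    fn neighbors(&self, node: &Self::N) -> Vec<Self::N>;
}

// A toy graph: nodes are usize indices, edges are directed index pairs.
struct AdjList {
    edges: Vec<(usize, usize)>,
}

impl Graph for AdjList {
    type N = usize;
    type E = (usize, usize);
    fn neighbors(&self, node: &usize) -> Vec<usize> {
        self.edges
            .iter()
            .filter(|&&(from, _)| from == *node)
            .map(|&(_, to)| to)
            .collect()
    }
}

// Client code names a single parameter G and reaches the node type as G::N,
// instead of threading separate N and E parameters through the signature.
fn out_degree<G: Graph>(g: &G, n: &G::N) -> usize {
    g.neighbors(n).len()
}
```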
D
Okay, and there are many more details in the RFC, but one thing I wanted to call out, for those of you who have done any serious Rust programming: if you've worked with HashMap, you've probably run into this so-called Equiv problem. I don't want to spend time talking about it in detail tonight, but I just want to say that associated items give you a really nice way to solve this problem. It's in the RFC.
D
You can take a look, or ask me about it afterwards.
Okay, and then the final feature we're looking at for 1.0 that I want to talk about is multidispatch, which is actually really closely connected with associated items. So basically, you might wonder: okay, now we have two ways to connect different types to a trait — we have type parameters and we have associated types. Do we need both? Do they work the same?
D
You know, what's going on? So the design that I've sort of proposed for this is that you do have both type parameters and associated types, but they play different roles. Basically, the idea is that type parameters are used when choosing the impl to match against, so they become part of dispatch. In today's Rust, the only type that matters for choosing the impl is the Self type, nothing else; but in this proposed tomorrow's Rust, any types that you list as inputs to the trait actually play some role in dispatch.
D
So the canonical example is a binary operation like Add: you want to be able to talk about adding an int and an int, and adding an int and a Complex, and those should run different code and give you different types back. This allows you to do that, which is pretty tricky to do in today's Rust. But notice that the type of the sum is an associated type, because once you know the two input types, the sum type is determined.
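A sketch of that shape (this is in fact how `std::ops::Add` is defined today: the right-hand side is an input type parameter that takes part in impl selection, and the result is an associated Output; the Complex type here is invented):

```rust
use std::ops::Add;

#[derive(Debug, PartialEq, Clone, Copy)]
struct Complex {
    re: f64,
    im: f64,
}

// i32 + Complex selects a different impl (and a different Output type)
// than the built-in i32 + i32: the RHS type participates in dispatch.
impl Add<Complex> for i32 {
    type Output = Complex;
    fn add(self, rhs: Complex) -> Complex {
        Complex {
            re: self as f64 + rhs.re,
            im: rhs.im,
        }
    }
}
```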
D
So what I really like about this is that I feel it gives you a very clear mental model: for every different type I can plug into a trait, I can provide a different impl, but every associated type is determined once I've plugged in the input types. So that's the model of multidispatch that's being proposed. The first benefit is that it clarifies the trait matching rules.
D
This is really important for stabilizing the binary operators that we already provide, but it also opens the door to a lot of interesting things in the standard library. For example, it now becomes possible to write traits that encompass conversions, because you really care about both the source and the target: you need multidispatch to do generic conversions, and we can do that now. I also have an RFC about error propagation that uses this feature, and I think there are going to be a lot of other things that fit into this mold.
D
The problem is that even if we land all the features I just mentioned, it's still not enough to really express generic collection traits in the right way, or the ideal way. Basically, here's the problem: if you were naively setting out to define what a generic container is, you might say it has a type A of items, and there's some iterator type I.
D
Then I have a way of iterating over the items, getting I's out, and I have a way of mapping, giving myself a new container over a different type of elements. You might like to try to write something like this, but actually there are a bunch of problems with it. First of all, when you're doing this iteration, you're actually getting borrowed references — but what is their lifetime? You don't have a way to say.
D
You have the single type I, but the lifetime you're getting out of it really depends on the lifetime of the borrow of self at the point where you actually call the iterator and tie it together, and you have no way to express that relationship. All you can write here is this bare type I, but you really need that I to connect back to the earlier borrow, and you can't do that. Another thing:
D
We say we're returning Self, but that's actually not right, because of the Self type — say this was a vector; maybe it's a vector of ints, but we're mapping it to a vector of strings. So we really need the Self type to change into something related: it's still a vector, but its interior type has changed. Basically, what all of these things call for is something called higher-kinded types. It's a scary-sounding term, and people start talking about monads and all kinds of crazy stuff.
D
But actually it's a really, really simple idea, and it's motivated by really crucial things like generic collections. Basically, all it means is being able to talk about not just individual types but families of types, where you plug in one type and you get out a type — so Vec is a thing where you plug in a type, like int, and you get back a type, Vec of int. That's what we need to be able to talk about things like Self in this example.
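For what it's worth, a large piece of this landed in Rust years after this talk as generic associated types (GATs); a sketch of the borrowing-iterator half of the container trait, in today's syntax:

```rust
// A container trait whose iterator type is a *family* indexed by the
// lifetime of the borrow -- the relationship the talk says couldn't
// be expressed in 2014 Rust.
trait Container {
    type Item;
    type Iter<'a>: Iterator<Item = &'a Self::Item>
    where
        Self: 'a;

    fn iterate(&self) -> Self::Iter<'_>;
}

impl<T> Container for Vec<T> {
    type Item = T;
    type Iter<'a>
        = std::slice::Iter<'a, T>
    where
        Self: 'a;

    fn iterate(&self) -> Self::Iter<'_> {
        self.as_slice().iter()
    }
}
```

The mapping half (returning "Self with a different interior type") still has no direct trait-level expression, which matches the talk's caution here.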
D
We have some idea of where we want to go in the language to be able to support this kind of thing in the future — to really support strong generic programming — and because we know that's coming later (or at least I hope it's coming later), we don't want to stabilize on a set of traits now that is going to prevent us from getting
D
You know, to that good place in the future. Basically, my feeling is that at 1.0, what's important is that we have good concrete collections with consistent APIs that you can use, and then later on you can do more generic programming over them. So that's kind of looking into the future, and with that, I'm pretty much done.
E
Okay, I'm Felix; I work out of the Paris office normally, so it's good to see all of you — I don't know almost any of you. Okay, so my talk is going to be about the drop flag, and don't worry if you don't know what that is. The previous talks that we've seen have been pretty broad descriptions of lots of things.
E
This is going to be more like a deep dive into one particular corner of the language that we're planning to change; it just has some interesting details to it. So here's a pop quiz — feel free to shout out; the answer should be pretty simple, I hope. I've got two struct definitions at the top of this slide: there's S1 and S2. How many bytes are in each of these things? How large are they? You don't know?
E
Well, we can always run this code and find out, and I'm pretty sure this ends up being something where they're each four bytes. Okay, so you can print out their sizes and say: oh yeah, they're all the same size. But of course this is a joke, because of this detail of the drop flag, which is that as soon as I add a Drop impl for one of these structs, the assertion fails.
E
In fact, S1 is 4 bytes and S2 is 8 bytes, which is totally unintuitive if you're coming from the world of at least C, and probably also C++: looking at these two structures, there is nothing within them, locally, to indicate that there is anything different about them in terms of their sizes and their layout. So why does this happen?
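This is easy to check; note that the transcript describes the 2014 compiler, and the static-drop change Felix is about to propose has long since landed, so in today's Rust the hidden flag is gone and both structs are 4 bytes (on the 2014 compiler, S2 would have been 8):

```rust
use std::mem::size_of;

struct S1 {
    x: i32,
}

struct S2 {
    x: i32,
}

// Merely having a Drop impl is what used to grow S2 by a hidden flag.
impl Drop for S2 {
    fn drop(&mut self) {
        let _ = self.x;
    }
}
```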
E
And the simple answer is that Rust today currently uses what I call dynamic drop semantics. So when do we run destructors for local variables? What happens is that at the end of the scope of a block, we walk over all the local variables and we drop them — except it's not quite that simple: we ask each variable, hey, do you need to be dropped? Do you need to be dropped?
E
So in this code, what ends up happening — I haven't shown it here, but in the associated playpen code, the DF struct has an associated Drop impl, and it just prints out the name of the structure when it runs — is that we first allocate f3, then create f4 and f5, and we move f4 and f5 into the pair. So that's ownership semantics.
E
Thus, at the end of the whole thing, we may or may not drop f4 at the end of this block, because f4 might end up getting passed along during take_and_pass to the other function, which presumably dropped it. So at the end of this block we may or may not print out f4: we definitely always print f10, and then maybe f4, and f5 and f3.
E
Okay, so that's kind of interesting, but also, you know, not great, because there are a lot of details about why this is not so good. One is the unintuitive drop flag being added on, which is bloating your structures, and that's all of your structs that implement Drop, not just the ones that are allocated locally on the stack.
E
It's also the ones that are allocated in your vectors, so you can basically double the size of your vectors just due to this. And we also have these semantics where it's not just that we run the destructors when we reach the end of the scope of something; we also zero them out. We zero out the data, and that's basically to ensure that these drop flags are cleared, so that's an expense.
E
They
are
paying
in
terms
of
performance
that
it'd
be
better
not
to
have
to
pay
that
expense
with
runtime
performance,
and
perhaps
more
importantly,
you
don't
want
to
pay
that
price
during
cogeneration
time
at
compile
time.
That's
the
biggest
one
we're
actually
expecting
here
is
that
there's
a
ton
of
code,
that's
being
stupidly,
compiled
20
the
stuff
out.
That
would
go
away
if
you
cover
the
drop
flags.
So
that's
Russ
today
with
the
dynamic
drop
semantics.
How
can
we
thick
that
fix
this?
E
How can we get rid of the drop flag? The fix that we're planning to adopt, and I have an RFC to offer for this soon, is a switch from the dynamic drop semantics to a static drop semantics, and the heart of the rule for static drop semantics is that you basically employ a new rule that says: on every path through the control flow of a block...
E
…p, then create f10. And the interesting thing here is that at the control-flow split point, we start off with the drop obligations f3, p, and f10, and then in the branch where we pass away... or, yeah, we move p.x away, that changes the set of drop obligations, so that instead of having the responsibility to drop all of p, now we just have the responsibility to drop p.y, and that's represented in this set. In the other control-flow branch, it's just the same set that we started with: f3, p, f10.
E
So
what
happens
here
now
is
we
have
this
merge
point
where
there's
a
drop
mismatch
where
the
set
of
drop
obligations
is
different,
and
this
is
not
going
to
be
strictly
speaking
aloud
in
rust
as
we
plan
it
in
the
future
and
when
I
say
not
allowed
well.
First
of
all,
I
just
want
to
point
out
this
is
you
know,
based
on
a
control
flow
graph
abstraction.
So
this
is
the
control
flow
graph
from
the
left-hand
side
that
you
get
for
that
earlier
function
and
I
just
want
to
point
out.
E
You
know
really
is
look
at
the
two
point
things
coming
in,
compare
the
sets
and
identify
the
things
that
are
different.
So
what's
the
fix
here,
originally
I
thought:
oh,
who
the
simplest
fix,
just
force
the
user
to
have
to
drop
things
manually.
You
know
tell
them
no,
no
you're
not
allowed
to
do
that.
You've
got
it
for
a
dropped
call,
but
it
became
clearly
pretty
clear
pretty
quickly
that
that
was
not
a
tenable
solution.
They
we've
got
really
annoying
to
have
to
insert
drop
cloth
calls
by
hand,
especially
in
a
lot
of
cases.
E
Recent
they're
saying
this
is
so
dumb.
The
compiler
completely
do
this
for
me,
so
the
simplest
answer
is
to
say:
okay,
let's
just
have
rust,
see
insert
the
drops
for
us,
so
the
compiler
is
already
freaking
out
what
think
what
things
need
to
be
dropped?
So
we
can
insert
this
little
red
code
here
that
ensures
that
the
two
paths
have
the
same
set
of
drop
obligations
when
they
meet.
Basically,
what
we're
doing
here
is
we
had
a
dropout
look.
We
were
going
to
drop
pdx
under
the
dynamic
drop
semantics.
E
We
were
going
to
drop
all
of
P
at
the
end
of
that
block,
which
effectively
means
we're
going
to
drop,
px
and
py,
and
what
this
change
does
is.
It
shifts
those
both
of
separate
drops
of
px
and
py.
It
moves
one
of
them
up
to
an
earlier
point.
In
the
control
flow,
so
that's
why
you
know
you
get
this
auto
and
sort
of
drop
or
conceptually.
The
idea
is
that
we're
just
moving
kodos
already
going
to
run
we're
going
to
move
it
upwards
to
an
earlier
point.
So
are
there
any
obvious
problems
with
this?
E
Well,
are
there
yes?
So
what
about
drops
in
of
side
effects?
It's
kind
of
sketchy
to
be
reordering
code
when
it
might
have
side
effects
in
particular,
one
of
the
things
about
rust
is
that
you
know
we
like
to
advertise
that
we
support
our
aii
patterns,
resource
acquisition,
his
initialization
patterns
from
c++
particular
coding
with
locks.
You
use
our
aii
to
define
to
grab
a
lock,
the
beating
of
its
scope
and
then
on
meth
released
at
the
end
of
the
scope.
E
So,
in
those
cases
it's
safe
to
drop
early,
but
in
the
cases
where
it's
not
safe
to
drop
early,
this
seems
pretty
dangerous,
and
our
answer
for
that
is
like
many
things
in
rust.
We're
not
quite
sure
what
the
right
answer
is.
We'll
add
a
lint,
we'll
add
a
link
for
this.
In
particular
this
lint.
We
already
have
the
analysis
that
I
showed
you
earlier.
E
We
got
the
control
flow
graph
graph
to
look
at,
so
we
can
just
tell
you
in
a
lint
hey
morning,
p
that
X
is
still
initialized
in
this
control
flow
path
and
not-
and
it's
not
utilizing
others.
Maybe
you
should
do
something
about
this.
This,
basically
is
you
know,
reinstituting
the
policy
that
I
mentioned
earlier,
about
forcing
the
user
to
explicitly
drop
things
or
find
some
other
way
to
solve
this
problem.
E
For
example,
you
can
use
the
option
I
didn't
mention
this
before,
but
it
should
be
able
to
use
the
option
struct
or
any
num
to
re-implement
the
drop
like.
If
you
have
code
that
actually
needs
to
be
using
a
drop
flag,
you
can
re
encode
that
same
semantics
using
option.
So
it's
not
like
any
expressive.
This
is
going
away
here.
So
a
Lind
consult
a
lot
of
the
problems
here,
but
we
don't
want
this
lint
to
beyond,
for
everything
that
would
just
put
us
back
in
the
same
annoying
spot.
E
That
I
said
was
untenable
at
the
outset
before
so,
instead
of
having
it
be
a
lint
that
applies
to
everything.
Instead,
we're
going
to
have
a
system
where
at
least
one
of
the
current
plans
when
you
implement
drop
first,
we
assume
the
common
case
is
for
people
implement,
drop
in
a
manner.
That's
pure
and
the
side
effects
actually
aren't
visible
in
a
way
the
user
would
care
about.
An
easy
example
of
this
is
bek,
a
Veck
of
you
ate
or
a
BEC
of
chars.
E
That's the kind of thing where all you do on dropping it is free the backing buffer. It's not the kind of thing whose side effects are something you care about if it runs earlier. Usually. So the plan here is that normally you might implement Drop on something like Vec, or maybe some other type of your own.
E
If
you
you
know
know
that
it's
side-effect
free,
then
we
won't
tell
you
about
the
cases
where
the
drop
that
moved
to
earlier
by
default,
as
soon
as
you
implement
noisy
drop
for
a
type,
though
that
marks,
it
is
something
that
now
I
care.
Now
we
think
that
these
this
thing
has
some
sort
of
cement
drop
semantics
with
a
side
effect
like
printing
to
the
screen
that
we
might
care
about,
and
this
bubbles
outward
in
a
structure
so
that
if
another
struct
s
has
an
instance
of
DF,
then
s
becomes
itself
noisy
until
you
can.
E
F
Great, okay, hi everyone. I'm going to talk about the borrow check, and because I don't want to bore you with too many internal compiler details, I'm going to limit this to 10 minutes. So I'm going to do the best I can to talk about how the borrow check works in 10 minutes. If you've written any Rust, you've probably become acquainted with the borrow check. It's your friend or your foe, as the case may be; I hope it's your friend.
F
It
should
be
your
friend
because
it
guarantees
memory,
safety
and
rust,
and
so
I'm
going
to
talk
briefly
about
how
it
works
and
then,
at
the
end
here,
I'm
going
to
talk
about
some
of
the
improvements
are
going
to
make
to
it
to
make
it
even
more
your
friend.
So
let
me
go
back
here
briefly.
It's
divided
into
so
it's
a
static
analysis.
It
runs
on
your
rust
code.
F
It
happens
after
type
checking
and
basically
what
it's
designed
to
do
is
prevent
things
like
iterator
and
validation
and
degli,
and
some
random
type
safety
holes
that
basically
make
it
so
that
you
don't
have
to
spend
as
much
time
in
front
of
the
debugger
in
rust
codes
and
it's
basically
it's
the
secret
that
allows
us
to
get
a
memory
safety
out
of
a
ricci
plus
plus
like
memory
model.
So
it's
divided
into
kind
of
four
categories
and
I'm,
hoping
that's
by
understanding
better
how
it
works.
F
You
can
basically
understand
how
better
to
diagnose
the
error,
messages
that
you
get
and,
and
hopefully
clear
things
up
and
make
it
simpler.
So
the
first
thing
that
it
does
is
that
it
analyzes
each
expression
in
your
program
to
determine
the
kind
of
memory
that's
used
to
evaluate
it,
so
it
has
local.
So
it
looks
at
every
expression
determined
this.
Is
this
a
local
variable?
Is
this
an
argument?
Is
this
what
we
call
an
up
far,
which
means
a
captured
variable
in
a
closure,
and
it
can
also
it?
F
Not
only
can
it
reason
about
those
things,
but
it
can
also
reason
about
things
that
are
derived
from
them.
So
you
can.
So
if
you
have
a
local,
X
and
you're
accessing
the
field
foo
on
it
it'll,
it's
able
to
reason
that
about
X
dot,
foo
and
know,
for
example,
that
it's
different
from
X
bar
it
can.
It
also
knows
about
dereferences,
so
you
know
if
you
have
like,
if
you
implement
the
duf
trade
it'll,
know
about
that.
F
So
the
L
value
is
the
thing
that
you're
borrowing
and
it
knows
how
long
you,
how
long
your
reference
is
supposed
to
last,
and
it
knows
whether
you
took
a
mutable
reference
or
not,
and
so
that's
fairly
simple
and
once
it's
the
mat,
it
enforces
the
restrictions
which
are
the
rules
that
rust
places
on
borrows.
So,
while
you
have
an
immutable
reference,
it
enforces
that
you
can't
have
any
mutation.
Obviously,
and
you
can't
have
any
mutable
references,
because
that
would
be
a
contradiction.
F
So
once
you
once
we've
got
together
all
the
restrictions.
Then
we,
then
we
just
check
the
loans
and
we
report
an
error
if,
based
on
the
memory
catergory
categorization,
if
it
tells
you
that
the
that
the
thing
that
you
borrowed
was
being
mutated,
while
you
had
an
immutable,
mutable
reference,
if
it
had,
if
you
moved
it
away,
while
there
was
a
reference
or
if
the
loan
is
conflicted
with
each
other
and
basically
it's
a
very
simple
algorithm,
it
just
goes
whatever
it
sees
alone.
F
If the answer is no, then your program compiles and you're happy. And "during a lifetime" here means in the same block. So hopefully that sounded not too hard, and rather obvious, which is the point, because you should be able to run this stuff in your head. And so now I'm going to talk about some more improvements that we would like to make. All right, so for a while, every time I got a borrow check error that I thought was something I should not have gotten...
F
I
wrote
a
downloadable
text
file
and
I've,
basically
categorized
all
the
problems
that
I
found
more
or
less
into
these
two
categories,
and
so
we
have
some
improvements
on
the
way
these
I
can't
make
any
promises
about
when
these
will
land,
but
I
would
hope
for
them
to
land
soon
and
they'll
and
watch
for
RFC
soon
about
both
of
these.
So
the
first
one
is
what
we
call
nan
lexical
lifetimes
and
I.
F
...you're not using the immutable reference anymore. Like, you called find, it was done, you didn't get anything back from it, so why is it yelling at you? And you can see why if you look at what happened before: it's because you were inside the block where you called find. But that's really annoying, so we have an algorithm sketched out where we can actually have what we call non-lexical lifetimes, and this will just work.
F
A lot of other annoying patterns will also just work, so that's great.
The second thing is nested method calls, where if you have fn foo(&mut self) and you have fn bar(&mut self), and you call them like this, you get a borrow check error, and it's really annoying. This can easily be fixed today by just saying let x = self.bar() and then self.foo(x), and you're on your way.
F
So this is not a showstopper, but it is a paper cut, and the good news is we have a way to solve this as well, and that is something that will also be coming soon. So actually these two basically account for almost all of the really annoying borrow check errors that I've ever seen, which is awesome. So that's it. Are there any questions?
H
All right, so my name is Alex. I'm going to be talking today about something that I've been working on with Yehuda Katz and Carl Lerche, and it's called Cargo; you've probably heard of it by this point. So Cargo is Rust's new package manager, and one of the first questions that people ask is: why would you write your own package manager? There are tons and tons of these nowadays. And the kind of key point that you'll see on the website...
H
...right now is: Cargo is a tool that allows Rust projects to declare their dependencies and ensure they'll always get a repeatable build. There are a few things packed into that sentence, and I'll point them out here. The first of those is that this is really focused on Rust. Cargo is meant to be a Rust package manager, which means it is tailored for the Rust language, the Rust ecosystem, and the Rust compiler. So using it means...
H
You
don't
have
to
configure
a
lot
of
stuff
to
try
and
get
it
to
wedge
it
in
to
rust,
as
opposed
to
wedge
again
at
anything
else.
At
all
kind
of
just
works
when
you
start
out
and
roast
projects
in
general
right
now
are
basically
just
a
package
with
a
name
and
a
version,
and
a
couple
of
authors
and
some
other
other
similar
configuration
in
there
as
well
a
second
of
these
as
dependencies.
Everything
with
a
package
manager
means
that
I
have
an
entire
ecosystem
of
libraries
that
I'm,
depending
on
I'm
developing.
H
Everything
is
rapidly
changing,
but
I
want
to
make
sense
of
all
this,
so
it's
cargoes
job
to
kind
of
keep
that
keep
track
of
the
own
its
head
for
every
single
project,
on
its
own
and
kind
of
keep
the
entire
world
saying
and
then
the
final
thing
that
cargo
is
really
really
one
of
the
major
design
goals
of
cargo
is,
is
repeatable
builds.
This
is
something
which,
especially
in
a
compiled
language,
is
really
important.
H
Where,
if
I
can
compile
something
today,
I
should
guarantee
that
anyone
who
checks
this
out
at
any
point
in
the
future
can
continue
to
compile
it.
That
is
a
really
strong
guarantee
for
us
to
give
and
its
really
really
useful
in
basically
essentially
any
application
or
checking
out
a
project
and
cargo.
It
employs
a
few
strategies
for
this.
One
of
these
is
called
cargo
net
lock,
which
is
a
file
in
your
repository.
H
That's
all
you
type,
you
type
those
two
words
and
you
build
dozens
and
dozens
of
dependencies
and
might
take
20
minutes
to
come
by
all
the
other
100
dependencies,
but
it'll
all
build
and
one
same
fashion.
It
gets
you
cross,
compilation,
everything
all
in
one
package
and
then
the
other
really
big
thing
that
cargo
is
going
to
be
relying
on
is
something
called
sember
and
cember
is
kind
of
just
this,
like
a
dot
v.
Dot
c
is
three
numbers,
and
the
really
big
thing
is
that
this
major
number
this
a
in
front.
H
If
that
changes,
it's
a
breaking
library
change,
and
otherwise
it's
not
so.
Cargo
is
going
to
be
relying
on
the
fact
that
cember
says
that,
where,
if
I,
if
I
depend
on
one
point,
two
point
one
and
you
defend
a
one
point,
five
point:
four
we
are
totally
compatible.
I
can
use
the
newer
version
guaranteed.
So
those
are
those
are
some
of
the
guiding
principles
behind
cargo
when
it's
going
to
be
guaranteeing
these
repeatable
builds.
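In manifest terms, that rule looks like this (note: this shows modern Cargo's reading, where a bare version is a caret requirement; the precise 2014 syntax may have differed):

```toml
[dependencies]
# Read as ^1.2.1: any version >= 1.2.1 and < 2.0.0 satisfies it,
# because only a major-version bump signals a breaking change.
foo = "1.2.1"
```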
H
So I took a look today at GitHub, and I just kind of searched around, trying to find how many Cargo.tomls I could find on GitHub. Cargo has been around for two months now, and we already have almost a thousand Cargo repositories, and this number is only going to keep going up. I think I might actually keep a spreadsheet of this thing.
H
I
want
to
see
one
of
those
exponential
curves
going
up
to
see
what
it
looks
like
so
I
kind
of
want
to
talk
for
a
second
about
how
cargo
is
built,
because
it's
a
little
non-trivial
kind
of
like
rust
itself,
and
it's
very
similar
to
question
selfish
you'll,
see
and
one
of
those
is
cargo-
is
entirely
written
in
rust.
Rust
package
manager
has
to
be
written
in
Ruston's
its
natural
extension
of
the
language
itself,
and
we
have
some
other.
H
These
are
the
stats
from
github
in
the
shells
just
because
we
have
a
giant
install
script
and
the
other
thing
is
Carlo
was
written,
an
entirely
safe
rest.
There's
not
one
unsafe
block
anywhere
on
the
cargo.
Repository
and
I
feel
like
this
is
kind
of
just
a
cool
stat,
but
it
really
shows
that
we
can
build
very
large
projects
like
cargo
and
continue
to
grow
them
with
large
sets
of
dependencies,
and
it's
all
totally
safe.
H
I
never
have
to
worry
about
a
segfault
in
car
go
beyond
modulo,
bugs
and
Rossi
itself,
but
those
aren't
really
cargo
spelled
and
one
of
the
best
things
I
love
about
cargo.
Was
we
bootstrapped
just
like
rust
itself?
Carro
is
built
with
cargo.
The
reason
for
this
as
you'll
see
is
we
have
a
ton
of
dependencies.
This
list
started
out
as
to
get
sub
modules,
which
is
not
so
bad
to
manage
over
time,
but
we
had
to
maintain
make
files
in
the
sub
modules
manage
them
manually,
go
and
try
to
update
them.
H
All
the
time
learn
learn,
get
sub
modules
which
I
still
don't
know,
and
but
once
we
move
to
car
like
we
were
looking
at
this
one
day
and,
like
you
know,
I
feel
like
we
just
wrote
a
tool
to
solve
this
problem,
so
we
ended
up
using
cargo.
So
we
have
all
these
dependencies.
Today
we
have
like
Doc,
opted
for
option
parsing
for
CLI
utilities.
We
use
Tamil
as
the
manifest
format
and
kind
of
the
human
interface
DeCaro.
Things
like
that.
H
Hamcrest
is
a
testing
library
which
gives
you
kind
of
very
nice
testing,
match
testing,
macros
assertions
very
next
error
messages
and
testing
things
like
that.
We
use
rust
URL
from
servo,
which
is
just
got
a
very
high-quality
URL,
parsing
library
we
use
rust
lang
zone,
cember
library
moved
out
a
tree,
and
then
we
also
have
some
liggett
to
bindings,
and
this
list
is
just
growing
like
I
have
three
or
four
more
dependencies
that
I'm
adding
on
to
this
every
day
like
every
so
every
few
weeks
or
so
is
we
had
more
and
more
functionality?
H
So
I'm
going
to
talk
a
little
bit
about
configuring
cargo,
just
kind
of
give
you
a
brief
overview
of
how
you
actually
use
cargo
and
the
primary
part
of
cargo.
Is
this
cargo
dot
Tom
will
file
the
root
of
your
project,
which
is
going
to
manifest
describing
your
project,
your
package
and
exactly
what
what's
going
on
inside
there?
H
The
tunnel
itself
was
divided
into
sections
as
you'll
probably
see
it's,
not
exactly
sections,
but
much
very
much
visually
looks
like
sections,
and
the
top
of
these
is
just
the
package
section
package
just
has
a
name,
a
version
and
an
author
and
you'll
note
that
all
of
these
are
required.
So
we
are
actually
from
day
one.
We
have
a
version
of
every
cargo
package
in
the
ecosystem,
which
is
actually
really
important
to
as
you'll
see
later
on,
and
then,
as
we
go
farther
down,
we
can
move.
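A minimal package section of the shape described (the values are placeholders):

```toml
[package]
name = "foo"
version = "0.1.0"
authors = ["Alice Example <alice@example.com>"]
```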
H
Most
of
these
rest
sections
are
actually
inferred
the
the
previous
one.
This
is
all
you
actually
have
to
write,
and
this
is
all
that's
required
and
everything
beyond
here
is
inferred,
but
you
can
kind
of
configure
it
as
whatever
you,
whatever
way
you
feel
like,
so
every
package
can
have
at
most
one
library.
I
can
talk
about
that
restriction
later,
but
I'll
limit
like
there
for
now
and
with
libraries
empathic
you
can
have
a
name
or
whatever
you
want
to
call
the
crate
of
the
actual
path
to
the
source
file.
H
H
...and things like that. You can also, for a specific dependency, like in the case of bar, specify a git repository, and you can say either a tag, a branch, or a revision. You can say: within my own repository I have another Cargo package somewhere inside of it, but it's actually totally distinct. And then...
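Sketches of the two dependency forms just mentioned (the repository URL and paths are invented for illustration):

```toml
# A git dependency; one of tag, branch, or rev can pin the checkout.
[dependencies.bar]
git = "https://github.com/example/bar"
# tag = "v0.1.0"
# branch = "master"
# rev = "abc123"

# A distinct Cargo package living inside the same repository.
[dependencies.baz]
path = "libs/baz"
```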
H
...these are generally not necessarily checked in, but this is where you kind of put your $HOME/.cargo/config, and you can put fun stuff in there. There's not a whole lot of configuration we support there now, but it's definitely going to grow over time. These are some examples that I've been playing around with, where I can have my own custom registry and API tokens and whatnot. So now I want to give you a little demo of using Cargo, and kind of a few small things about it.
H
Let
me
blow
that
up
a
little
bit
all
right,
so
you've
hurt
you
just
install
cargo
on
your
machine,
your
kind
of
curious
about
what
you
can
do,
so
you
just
run.
Cargo
and
cargo
has
a
couple
of
commands
that
you
most
worried
about
at
the
beginning,
so
we're
probably
mostly
worried
about
this
new
command,
so
we're
just
kind
of
create
a
new
library,
and
when
we
look
inside
here
all
it
does
is
it
creates
this
carpet
out
Tamil
on
the
source
live
for
us,
I,
look
inside
there.
H
We
can
see
that
this
is
the
name
of
the
library
just
said
this
actually
picked
up
from
get
config
and
it
can
also
pick
up
from
cargo
to
you
from
taking
things
like
that,
and
the
library
itself
is
just
something
that
can
work,
and
then
you
can
see
that
we
can
start
building
a
library.
They
can
start
testing
a
library,
not
much
is
happening.
We
have
a
very
simple
test,
but
if
you
want
to
get
more
complicated,
we
can
take
this
a
very
simple
kharghar
Tomalin.
We
can
add
a
dependency.
This
is
a
supply
tree.
H
I
wrote
a
little
while
ago,
so
I'm
just
gonna
say
I
want
this
splay
tree
rotation.
I'm
gonna
need
it
in
my
my
food
package
and
you
can
see
that
when
we
build
it,
cargill
is
automatically
going
to
download
the
gate
repository
and
then
it's
going
to
build
everything
in
order.
So
you
can
you
notice
that
it
actually
updated
the
git
repository
and
then
it
made
sure
the
splay
library
was
built
before
the
foo
library
and
everything
like
that,
and
then,
if
I
run
it
again,
we
can
actually
keep
track
of
I.
H
...I don't need to download anything, everything's all fresh, I don't have to actually build anything. So the next thing we can work on is actually writing some tests. So in this case I just filled in a link to the splay crate and have a small test where I just put a mapping into the map and I just make sure that I can get it back out. And when you just run cargo test, again it will automatically compile splay, it'll compile foo, and you'll...
H
Just
again,
it
didn't
download
anything
and
actually
it's
it's
staying
locked
to
that
same
version
of
splay,
which
of
is,
is
this
locked
hash
and
then
you
can
see
it
ran
the
test,
and
it's
all
passing
just
fine.
Now
take
this
moment
to
show
you
this
file
called
cargo
dot
lock.
This
is
the
serialized
version
of
the
exact
dependencies
that
you
are
using
so
I'm
personally,
developing
a
library
right
now
so
I'm
not
going
to
check
this
in,
but
if
I
were
developing
an
application,
I
would
check
this
in
every
time.
Cargo
finds
this.
H
It
will
alway.
You
always
use
this
exact
revision
of
splay
to
build
this
package,
and
this
means
that
everyone
who
builds
this
at
any
point
in
the
future
will
always
use
the
exact
same
code,
and
this
is
kind
of
a
driver
that
one
of
the
major
drivers
that
cargo
uses
for
repeatable
builds.
So
the
next
thing
I
want
to
show
you
is
cargo,
has
other
fun
stuff
like
a
documentation.
H
So
let's
say
I
want
to
take
that
test
and
I
want
to
kind
of
just
extract
out
that
fun
functionality
of
making
a
122
map
and
I
have
make
some
fun
little
documentation.
Things
like
that.
I
can
just
run
cargo
dock
and
it's
going
to
automatically
compile
everything,
compile
the
documentation
and
then,
if
I
take
a
look
at
it,
I
can
see
here's
my
function
and
I
can
have
my
good
documentation,
but
not
only
that,
but
all
my
all
my
dependencies
are
also
compiled
here.
I
didn't!
Actually.
H
I
never
said
I
want
the
documentation
with
this
for
this
play
map
or
the
splay
tree
crate,
but
it's
all
here.
It's
all
just
there
and
cargo
kind
of
keeps
that
all
together.
An
awesome
example
of
this
is
servo
has
recently
started
doing
this
I'm,
not
sure
I
think
they
use
cargo
to
go
to
watch
this,
but
they
have
like
50
some
libraries
on
the
side
here
and
they're
all
interlinked
together.
Everything
is
searchable,
it's
actually
really
cool
to
kind
of
see.
H
All
this
come
together,
and
so
the
last
thing
I
want
to
show
you
is
cargo,
run
where
let's
say
I
want
to
add
a
command
to
this
library.
So
I
want
to
add
this
its
main
function
and,
let's
just
have
some
simple
things:
printing
out
see
what's
going
on
there
and
I'm
going
to
type
cargo
run,
and
it's
going
to
build
everything
and
it's
going
to
run
the
run.
The
run,
the
binary
that
actually
built
and
you'll
notice
here
that
again
it's
managing
all
dependencies
all
the
time.
H
It
makes
sure
that
the
library
was
built
before
the
executable,
and
I
just
kind
of
let
the
looks
executable
link
to
the
library
and
all
basically
just
worked,
and
one
thing
you'll
also
notice
from
this
is:
I
haven't
modified
car
gautama
from
the
beginning.
This
looks
exactly
the
same,
so
this
is
one
of
the
convention.
We
we
have
a
set
of
conventions
or
kind
of
default
configuration
where,
if
you
follow
the
default
repository
layout,
you'd
actually
don't
have
to
write
a
very
large
cargo
Tamil.
H
You
can
get
some
really
rich
commands
and
some
rich
configuration
from
that
by
default.
I
think
that's
all
that
for
now,
so
we'll
go
back
to
the
presentation.
So
this
one
thing
I
alluded
to
earlier
is
called
the
cargo
registry,
and
basically,
what
this
is
is
it's
a
API
server
is
going
to
be
powered
by
rust
and
it's
it's
a
work
in
progress.
H
But
again
it's
all
built
by
cargo
has
tons
of
dependencies
I'm
using
speckles,
amazing
postgres
jam
and
all
that
good
stuff
our
package
and
then
bit
of
the
slit
there
and
then
for
HTTP
and
kind
of
the
general
interface
there.
We're
using
a
library,
Club
conduit,
the
yahood
and
I
are
working
on
it's
kind
of
similar
to
the
iron
and
nickel
and
all
that
good
stuff
and
then
Russ
civet
is
the
actual
server
that
we're
going
to
be
running
on.
H
For
now,
which
is
an
embedded
c
web
server
and
we're
kind
of
going
to
use
that
further
actual
front
end
or
the
actual
back
end
and
the
front
end
itself
is
all
going
to
be
an
ember.
So
it's
going
to
be
like
when
you
actually
hit
the
web
site
yourself.
You
can
have
a
nice
big
fancy,
j/s
app
and
everything,
but
cargo
and
everyone
else
is
going
to
be
hitting
just
the
API
server,
which
is
all
written
in
ruston.
It's
online.
It's
very
much
a
work
in
progress.
Look.
H
...as I showed you earlier. So the registry will kind of serve as the default namespace from which you can load packages. Earlier, on the slides, you saw that I depended on just foo version 1.2.2, and that's going to automatically look it up in the registry; you can also configure which registry it comes from, you can have multiple registries, and things like that. So from all this you might be wondering: should I use Cargo? My answer is yes, you should definitely use Cargo. Cargo is very much ready.
H
Today
there
are
tons
of
user
sizes.
I
can
see
there's
nearly
a
thousand
repository
is
already
using
it,
and
this
number
is
just
growing
and
growing
over
time
and
the
more
usage
we
get.
The
more
like
please
send
in
bug.
Reports
on
rustling
/
cargo
I'll
show
that
later,
but
we
we
definitely
still
have
minor,
looking
stuff
to
fill
out
here
and
there
and
but
by
and
large,
were
mostly
implanted
today.
H
We're
actually
pretty
much
feature
complete
and
minus
the
whole
registry
business,
and
once
we
get
past
that
we're
actually
very
close
to
where
we
want
to
be
in
a
shippable
state
of
cargo.
So
the
big
things
that
I'm
missing
that
I
was
talking
about.
As
the
registry
is
missing,
it's
basically
non-existent.
You
cannot
use
this
today.
H
It's
kind
of
a
prototype
in
essence,
portable
Native
dependencies
is
kind
of
a
fun
thing
where
this
is
when
you're
building
C
code
as
part
of
your
package
when
you're
loading
it-
and
this
is
like
Rus
civet-
is
a
bunch
of
C
binding.
So
it
needs
to
be
able
to
make
it
portable.
So
the
story
today
for
depending
on
your
platform,
C,
is
actually
not
so
bad.
You
just
kind
of
build
it
cross
cross.
Compiling
is
not
that
great
of
a
story
today,
and
it
is
pause.
H
It
is
plausible,
but
it's
kind
of
tough
and
then
the
last
thing
which
we
kind
of
want,
is
cargo
install,
which
is
some
form
of
taking
the
artifacts
you
just
made
and
putting
them
somewhere
readily
accessible,
so
I
kind
of
want
to
like
I
personally
want
to
go,
say,
cargo
and
saw
cargo
installed,
rustling,
/,
turbo
or
Mozilla
/,
servo
and
I
want
that
to
just
pull
and
servo
and
just
build
everything
and
put
it
off
somewhere.
That
has
some
details
around
that
we
haven't
quite
worked
out
just
yet
so
we're
still
working
on
that.
H
H
So the question was whether we're doing binary-reproducible builds, and the answer is no; we're just going for actually building the code and getting it past rustc. Rustc itself has a long way to go before it gets actual binary-reproducible builds. We would love to be there one day, but that is basically hindered by technical limitations in the compiler itself, and we've got to tackle those first.
H
I suppose yes and no. For the actual things like the structure of a package and creating a new package, cargo new will support that, and it's pretty bare-bones today. We want to add templating support, so any project can specify their own template for making a new project, and also initializing an existing project to be a Cargo project. Things like watching, kind of the more fancy features...
H
...I imagine will be left out to dependencies. Like, I think we may want developer dependencies for things like watching builds to re-run tests; you might depend on that via Cargo, and Cargo would pull it in, but then you'd have to run it yourself every time. So it sounds like Cargo will, I guess, be replacing, in a sense, a subset of the functionality, but probably not the entire thing.
H
Well,
the
question
will
the
binary
dependencies
and
the
answer
is
no
right.
Now,
it's
actually
the
it
eludes
the
segment
question
before,
which
was
about
binary
to
binary
compatibility
and
because
rust
see,
can
produce
a
binary
and
there's
not
a
whole
lot
of
guarantee.
We
can
continue
to
link
to
that
from
all
up
from
from
the
future,
and
the
problem
with
that
is
whenever
an
artifact
is
linked
against
another
one.
H
It
has
to
be
linked
against
that
exact
artifact,
which
means
you
have
to
do:
we'd
have
to
distribute
entire
trees
or
dependencies,
and
that's
infeasible
for
now
so
again.
Sadly,
Carr
rusty
limitations
are
preventing
us
from
doing
that,
but
we
would.
We
would
like
to
do
that
one
day,
because
it
would
certainly
help
reduce
bill
times
and
things
like
that.
A
H
So the question was: semver says you can kind of use many different versions, and Bundler says you can use one version, so how do we reconcile those two concepts? So the Cargo.lock actually goes through a resolve phase in Cargo, and what it does is attempt to minimize the number of duplicates. Like, we will support two versions of library foo being in a final product, but that will be fairly rare. So with semver...
H
we know that anything in the 1.0 to 2.0 range is totally compatible as long as you use the maximal one, so we will basically always choose the maximal one in that case. And if you depend on, like, 1.9 and also 2.4, then you're gonna have two copies. The idea is that this resolve kind of happens ahead of time and minimizes the number of times we actually select a library. And then, whenever you add new libraries, it'll completely re-resolve that portion of the dependency graph.
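To illustrate the semver ranges being described, here is a minimal hypothetical manifest fragment. The crate names are made up, and the caret requirement syntax is an assumption based on how Cargo expresses semver ranges:

```toml
[dependencies]
# "^1.2" means any 1.x release at or above 1.2 (compatible up to,
# but not including, 2.0); the resolver picks the maximal match.
foo = "^1.2"
# A 2.x requirement elsewhere in the graph is incompatible with 1.x,
# so both copies would end up in the final product.
bar = "^2.4"
```

Two requirements like `^1.2` and `^1.9` would collapse onto one copy (the newest 1.x); only a 1.x/2.x split forces duplicates.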
H
So
if
I
add
a
new
library
which
depends
on
foo,
it's
gonna
try
and
lock
to
the
previous
version
of
food
that
I
was
using
already
and
not
introduce
a
new
one.
So
we're
kind
of
we're
a
bit
in
between
like
bundler,
will
choose
one
vers,
exactly
one
version
for
all
libraries,
whereas
know
basically
pull
in
all
libraries
all
the
time
we're
kind
of
in
between
those
two
where
we
attempt
to
use
as
very
as
few
libraries
as
possible.
We
realize
that
eventually
we're
going
to
have
to
have
more
than
one
copy
that
make
sense.
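The selection rule described here, namely always take the newest semver-compatible release and only duplicate a library across incompatible major versions, can be sketched in a few lines. This is a toy model of the idea, not Cargo's actual resolver; the function name and (major, minor) tuples are invented for illustration:

```python
def resolve(available, requirements):
    """Toy semver selection: `available` is a list of (major, minor)
    versions of one library, `requirements` a list of (major, minor)
    minimum versions requested by different dependents. Versions that
    share a major number are treated as compatible, so every requirement
    on one major collapses onto a single, maximal copy."""
    chosen = {}
    for req_major, req_minor in requirements:
        # newest available release with the same major, >= the request
        candidates = [v for v in available
                      if v[0] == req_major and v >= (req_major, req_minor)]
        best = max(candidates)
        if chosen.get(req_major, (0, 0)) < best:
            chosen[req_major] = best
    return sorted(chosen.values())

# Requirements on 1.2 and 1.9 share one copy (the newest 1.x)...
print(resolve([(1, 2), (1, 9), (1, 11), (2, 4)], [(1, 2), (1, 9)]))
# ...but adding a 2.x requirement forces a second copy alongside it.
print(resolve([(1, 2), (1, 9), (1, 11), (2, 4)], [(1, 9), (2, 4)]))
```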
H
The question is: is the registry going to enforce semver? And the answer is: the registry probably will not, but it is very, very highly desirable, and something we actually really want to look into, to have a tool to audit as you upgrade a semver version. So when you say cargo publish or cargo package, we want to have an automated tool which looks at the previous version, which we have an archived copy of, and compares the two APIs, and we can actually alert you to any breaking changes we think you're making.
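The audit tool described here amounts to diffing two API surfaces. A minimal sketch of that idea, with made-up function names and signatures standing in for an archived and a new crate API (this is not any real Cargo tooling):

```python
def breaking_changes(old_api, new_api):
    """Compare two API surfaces, each a dict of item name -> signature.
    Removals and signature changes are flagged as potentially breaking;
    purely additive changes are allowed under semver."""
    report = []
    for name, sig in old_api.items():
        if name not in new_api:
            report.append("removed: " + name)
        elif new_api[name] != sig:
            report.append("changed: %s %s -> %s" % (name, sig, new_api[name]))
    return report

old = {"connect": "(host, port)", "send": "(data)"}
new = {"connect": "(host, port, timeout)", "recv": "()"}
print(breaking_changes(old, new))
```

A publish step could refuse, or at least warn, when this report is non-empty but only the patch or minor version was bumped.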
A
G
Okay, so we've talked a lot about two different kinds of stability today, in, like, our tooling for users and in getting the language to 1.0, and the question is how those are going to intersect, and whether the compiler itself is going to be stable in some sort of way for building itself. Right now we use this scheme of snapshots, which, at least with what I do, seems really annoying for the future. So is, like, Rust 1.1 going to build with 1.0?
B
Well, I mean, basically I think we haven't talked a lot about changing the snapshot scheme. The problem with locking to an older revision... we could lock to an older revision a lot of the time, I guess. That would be much more feasible now that we won't be making as many backwards-incompatible changes, which is what usually makes it a little harder. But I don't think at any time you end up needing, like you mentioned, several snapshots sort of simultaneously; maybe I misunderstood.
G
Say I have a 1.0... no, say I run a distro, which I do, and I have Rust 1.0 in, like, version n of it, and when I'm making version n plus 1, I want to include Rust n plus 3 or something. So the question then is kind of, like: how do I build the new version? Can I use the one that's already on my system? That would probably save me having to grab these binaries from somewhere, put them into the build, do some kind of other steps like that. Yeah.
B
F
Yes, so actually I have talked briefly about it with some other members of the core team, and basically I brought up the same issue. Essentially, we are probably not going to make any changes to the snapshot process, and the reason is that 1.0 is going to have a large focus on binaries. Binaries are going to be the way, because compiling Rust from source is not a good experience for most users, who just want to get started with it.
F
I mean, it's fine if you're hacking on rustc, but for most people, you're going to want to download a binary, and that's going to be how you interact with Rust. And when you download a binary, it doesn't matter so much whether we're making snapshots internally; those are just for us, for the core hackers. For people who are just using Rust, it doesn't matter how many snapshots we really take. So we're planning to have nightly builds.
F
A
Anyone else? Going once, going twice... okay, all right! Well, thank you so much. I believe we have the space for at least another half an hour, so feel free to stick around and mingle. Thank you once again for... oh, Kevin. So, I need to settle it. I'm betting it's going to be September, I think Tuesday, September 21st or 23rd, whatever Tuesday is that week, but it's not locked in yet. That is going to be the second part of web tech.
A
We actually have Chris Morgan here, all the way from Australia, who will be speaking, and I will also be speaking, about some of the cool stuff he's doing. So anyway, also, if anyone else has anything web-techie they want to speak about, please come and talk to me. So, okay, all right! Well, thank you very much and have a good night.