From YouTube: Marijn Haverbeke - The Rust That Could Have Been
Description
This talk will describe a number of language concepts and features that were in the pre-1.0 Rust language at some point but were ultimately abandoned, such as typestate, garbage collection, structural types, and a more or less classical object system. I’ll go over the reasons they were abandoned, and try to convince you that the Rust we have now is the best Rust yet.
---
For more go to https://rustfest.eu or follow us on Twitter: https://twitter.com/rustfest
Yeah, I'm pleased that so many people still showed up at the very end of the conference. So, yeah, I'm going to be talking mostly about a number of features that were part of the Rust language at some point, but no longer are, and why that is the case, and usually why it is a good thing.
So, as I said, I'm Marijn Haverbeke; if you're into JavaScript, you might have seen my name before. Rust has been under development for a while, about ten years now. I think the first long stretch was just Graydon working in isolation, and who knows what kind of ideas and experiments there were at that point: there isn't even a public code repository from that time.
We would sometimes come up with a breaking change in the morning and then have a patch ready in the afternoon and convince someone to merge it, and because it was only us, we could just fix everything right away and people could continue working. There was some trickiness in actually getting a compiler that compiles the current code after you make a breaking change: you first changed the compiler, then uploaded a snapshot, then changed the code, and then you could…
…everyone could proceed with the new snapshot. Then, of course, a year and a half ago, I think, the team cut version 1.0, and the process changed entirely. Now everything stays backwards-compatible. It impresses me how seriously they have been taking backward compatibility: changes like RFCs move very slowly, there has to be a wide consensus, and everything has to fit within the current code base.
So that's a whole different stage again. I'm going to be mostly talking about the period where I was part of the team, which was 2011-2012, and which was probably the wildest period in terms of features: cut features, changed features, stuff like that. It may seem ridiculous that we put so much time into really complicated features just to end up dropping them again, but I think this is kind of an essential part of getting to a complex design.
A
Like
a
programming
language
rights
that,
unless
you're
a
super
genius,
you
won't
really
see
in
advance
what
the
implications
and
the
interactions
between
the
various
parts
of
the
system
are,
and
you
have
to
try
it
and
see
how
well
you
can
make
it
work
and
how
well
it
fits
into
the
system,
and
sometimes
you
later
have
to
just
abandon
it
again.
I
think
that's
part
of
a
healthy,
healthy
design
process
for
like
mere
mortals
who
need
to
actually
see
how
something
works
before
they
can
evaluate
it.
I'm structuring this talk around a number of features that were part of the language and then dropped again, and I'll try to explain why I think that in every case it was a really good decision to drop them, but it's still interesting to see what the original visions were and what we did end up with instead. These are: typestate, a structural type system, lightweight processes, and finally garbage collection.
Let's start with typestate. Typestate was actually an important point in the initial announcements of the language, and people were very excited about it. What typestate does is basically allow the compiler to know more about a value than just its type. An example would be: this is something of type circuit, but we also happen to know that it's open; or this is something of type array (a vector, in the current terminology), but we happen to know that it's not empty.
A
So
when
you're
programming,
you
usually
have
some
mental
model
of
why
the
thing
you
are
doing
right
now
is
is
valid,
is
not
going
to
crash
like
if
you're,
not
just
making
random
changes
and
seeing
is
its
test
bus.
You
will
have
some
mental
model
of
your
of
your
program
and,
to
a
certain
degree,
depending
on
the
language
you
can
tell
the
compiler
about
this
model,
and
the
compiler
can
then
check
whether
you
are
applying
your
model
consistently.
A
So
the
simple
cases
just
types
that
you're
actually
passing
the
type
that
you
think
you
are
passing
somewhere
and
if
you
don't,
then,
instead
of
finding
out
at
runtime,
you
find
out
at
compile
them-
and
this
is
nice-
there's
a
kind
of
Specter.
This
computer
does
not
have
the
fonts
that
my
computer
had
with
them.
Imagine
arrow
has
on
both
sides,
there's
a
kind
of
spectrum
on
which
languages
fall
in
terms
of
how
much
you
can
actually
communicate
to
the
compiler.
So
you
don't
know
one
side.
A
It
does
have
quite
a
bit
of
aesthetic
guarantees
and
it
helps
quite
a
lot,
but
it
still
has
to
be
ergonomic
like
easy
to
program
in
where
you
don't
have
to
spend
too
much
time.
Working
on
these
things
and
I
won
one
way
to
see
the
history
of
programming.
Languages
is
kind
of
one
aspect
of
it
is
least
at
least
is
that
we've
been
finding
better
and
better
for
kepler
vocabulary
too.
To
describe
these
things.
We
know
about
our
program
to
the
compiler.
A
In
a
way,
that's
actually
convenient,
so
if
you
have
a
really
terrible
type
system,
that's
often
worse
than
no
type
system
at
all,
if
I
have
to
choose
to
write
something
in
Java
or
JavaScript,
I'll
just
take
several
scripts.
Thank
you
very
much,
but
we're
getting
better
at
this
and
rest
is
making
a
big
contribution
here
and
like
bringing
a
real
modern
type
system
to
the
systems.
Programming
space
and
the
ownership
model
is
I.
Think
just
really
really
good.
I,
unfortunately,
wasn't
on
the
team
anymore
when
this
was
introduced.
You could define a predicate, which is this pure function not_empty, and the extra information that the compiler had about your values came in the form of "this predicate holds". These were actually just predicates written in normal Rust code; they were supposed to be pure (there was a concept of a purity system at that point, which is also gone now). They just took a value and said: okay, I hold or I don't hold. And then you could define preconditions and postconditions for your functions.
A
Then,
before
you
could
pass
that
you
value
to
such
a
function,
you
have
to
convince
the
compiler
that
this
predicate
held
at
this
point
for
some
things.
This
works
relatively
well.
The
compiler
is
very
clever
in
propagating
its
information
through
the
control
flow
graph
and
like
taking
it
from
the
post
conditions
of
the
functions
you
call,
but
here
you
have
an
example,
for
example,
I
create
an
array
and
then
I
want
to
pass
it
to
lost,
but
it's
not.
…okay: I first have to check that it's not empty. The check statement would insert a runtime test, a call to the predicate, and then panic if it failed. But actually, I mean, this array isn't empty; this is very easy to prove. But because the compiler only saw these predicates as opaque pieces of code, it couldn't actually reason about them. It could only take what you told it: whether you checked. There were variants of check, one of which was just an unsafe form of "just believe me".
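The check syntax is gone from the language, but the underlying idea survives in what is nowadays called the typestate pattern: push the predicate into a type, so the runtime check happens once at a boundary and the compiler tracks the invariant afterwards. A minimal sketch in modern Rust; the `NonEmpty` type and its methods are hypothetical, not a standard API:

```rust
// A vector that is statically known to be non-empty. The only way to
// construct one is through `new`, which checks once at runtime; after
// that, `last` needs no further checks and cannot fail.
struct NonEmpty<T>(Vec<T>);

impl<T> NonEmpty<T> {
    // The runtime test happens exactly once, at the boundary,
    // much like a `check` statement did.
    fn new(v: Vec<T>) -> Option<NonEmpty<T>> {
        if v.is_empty() { None } else { Some(NonEmpty(v)) }
    }

    // Safe to unwrap: the constructor guaranteed non-emptiness.
    fn last(&self) -> &T {
        self.0.last().unwrap()
    }
}

fn main() {
    let v = NonEmpty::new(vec![1, 2, 3]).expect("non-empty");
    println!("{}", v.last()); // prints 3
    assert!(NonEmpty::new(Vec::<i32>::new()).is_none());
}
```

The difference from old typestate is that the "predicate holds" fact is carried by a value of a distinct type rather than by flow analysis, so the compiler never has to reason about opaque predicate code.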
A
So
this
aesthetic
amount
of
static
air,
he
wasn't
very
great
because
often
usually
the
compiler
couldn't
really
help
a
lot
of
it
like
reasoning
about
when
they
actually
held
them
when
they
did,
it
was
in
the
compiler
for
a
long
time
still,
but
eventually
it
was
dropped
because
it
was
just
not
pulling
its
way
so
in
terms
of
experiments
in
good
expressive.
Ways
to
express
these
kind
of
things.
I
think
this
is
a
failed
experiment
that
existed
it
some
research
languages
before,
but
it's
never
really
made
it
into
a
big
mainstream
type
language.
A
For
good
reason,
I
think
so
so
much
for
that
next
topic
is
structural
typing.
So
in
typing
systems
you
have
two
concepts
where
structural
typing
is
say:
you
have
a
function
size
which
has
a
few
arguments,
types
and
a
return
type,
and
you
want
to
compare
it
to
another
function,
type
so
you're
just
going
to
look
at
the
fields
in
the
function
that
have
the
same
amount
of
arguments
are
its
arguments
of
compatible
types
is
return,
types
of
compatible
types
and
that's
that's
structural.
A
On
the
other
hand,
there
is
a
nominal
sizing
where
you
just
say:
where
is
this
type
declared?
What's
the
name
of
this
type,
and
it
has
to
be
the
same
so
rest
structures
currently
works
way
as
to
enums.
Two
types
are
only
compatible
if
they
are
actual
instances
of
the
thing
that
was
declared
in
the
same
points
in
the
code,
initially
structure
or
structural
types.
A
So
this
curly
braces
thing:
there
is
a
syntax
for
a
struct
type
which,
with
two
two
fields
x
and
y
of
type
float,
and
the
type
declaration
just
defines
an
alias
for
the
type.
This
is
just
like
a
name
for
the
type
record,
with
two
floats
field,
and
so,
if
I
define
a
function
which
takes
an
argument
at
this
point,
I
can
call
it
with
just
a
record
constructed
on
the
fly.
Without
any
record
name
involved
records
themselves,
don't
have
a
name.
A
They
just
have
a
structure
in
this
system
and
it's
kind
of
nice
and
lightweight
and
minimal,
and
that
often
you
don't
even
bother
to
give
your
record
to
name
if
you
only
use
it
a
few
times.
So
you
where
you
would
now
probably
use
it
to
pull.
You
could
use
records
with
nice,
descriptive
field
names,
I
kind
of
like
this,
for
programming
with,
but
I'll
come
back
later
to
why
this
part
was
removed.
A
Another
aspect
of
this
was
object,
types
where,
as
structure
types
were
only
compatible
if
they
had
actually
the
exact
same
fields
and
in
the
same
order,
they
weren't
reordered
because
see
compatibility
and
they
had
to
be
the
exact
same
to
be
able
to
compile
it
efficiently.
Because
then,
all
codes
within
directed
with
such
a
record
too
knew
how
it
was
laid
out
in
memory.
Objects
were
a
more
dynamic
feature
and
here
any
object
size
that
has
a
subset
of
the
fields.
A
So
these
were
both
the
types
of
the
concrete
objects
and
also
serve
the
role
of
interfaces
which
is
kinda
nice.
In
terms
of
how
many
concepts
you
need
to
do,
object-oriented
programming,
I
think
could
also
even
use
it
as
a
kind
of
checked
duck
typing.
Where
you
define
your
function
and
you
just
say:
I'm
only
going
to
call
lengths
on
the
thing
that
I'm
getting
and
then
anything
that
had
a
length
method
could
be
passed
in.
You
don't
even
need
to
formally
define
an
interface
name
or
anything
is
just
all
like
structural
by
name.
One implication of this was that, because code that used them didn't know their size, they always had to be heap-allocated (I think they even always had to be garbage-collected), and any call to them would be going through a dispatch table, a vtable. So they were much more heavyweight compared to the rest of the language, and we were finding that in the compiler we were kind of shying away from them, even where we absolutely needed polymorphism, because they were more heavyweight than necessary in many situations.
A
It
was
conceptually
simple
but
not
terribly
simple,
to
implement
and
then
at
some
points
we
got
more
high
school
people
on
the
team
and
we
all
started
educating
for
a
typed
class
kind
of
implementation.
Interface
thing
that
we
ended
up
now
and
because
no
one
really
likes
these
objects
very
much.
We
migrated
this
to
death
and
I
think
they
just
fits
with
the
language
much
better.
They
don't
require
you
to
put
something
on
the
heath.
They
don't
require.
A
Structural
records
also
becomes
problematic,
because,
if
you're
using
a
records
that
happens
to
have
the
same
shape
in
two
completely
independent
contexts-
and
they
both
define,
say
a
two
string
implementation
of
it.
Then
these
will
clash,
even
though
the
actual
usages
have
nothing
to
do
with
each
other.
They
will
both
be
trying
to
implement
the
same
interface,
the
same
trace
on
it.
That
doesn't
work
so
well.
No
one
care
that
much
about
structural
records
either,
so
they
became
nominal
for
this
region.
At
that
point,
and
now,
of
course,
people's
and
functions
are
still
structural.
A
Then
so
some
we
sort
of
talk
about
asynchronous
and
if
there's
a
synchronous
programming
yesterday-
and
there
are
languages
like
like,
go
and
Erlang-
which
kind
of
cells
is
in
a
different
way
where
your
programs
look
like
they're
synchronous,
but
that's
just
like
a
slight
offense
and
well.
No,
they
are
actually
programmed
synchronously,
but
you
don't
say
for
I,
don't
know
how
many
tress,
because
they
have
their
own
process
abstraction,
which
is
much
more
likely
than
operating
system
tress.
A
That
was
part
of
Russ
initial
vision,
so
you
don't
need
to
mess
with
future
Shore
reactors
or
anything.
You
just
spawn
a
bunch
of
tasks
which,
for
example,
for
each
sockets
or
you
could,
even
if
you
are
writing
a
calendar,
spawn
a
sauce
for
each
task
on
the
calendar
or
something
and
they
run
as
independent
processes,
but
because
they're
like
designs
to
be
cheap,
you
don't
have
to
worry
about
allocate
millions
of
them.
It
just
works.
Now.
Operating
systems
aren't
really
designed.
A
So
what
you
have
to
do,
then,
is
you
create
your
own
thread
pool
in
the
language
runtime,
and
you
have
your
own
task?
Abstraction
and
your
traps
are
kind
of
just
picking
up
tasks
running
them
for
a
while
and
then
when
they
block
or
when
their
time
runs
out.
They
put
them
aside
again
and
they
take
another
task.
You
do
your
own
scheduling,
which
is
also
not
trivial,
and,
of
course,
even
if
you
do
this,
you
still
have
the
problem.
That's
this
simple
machine
coat.
The reason we don't just grow the stack and copy all the stuff onto the new stack is that that would involve moving values in memory, and that's a whole different can of worms: all the code would have to actually be able to locate every pointer, to rewrite it, and if a pointer is held by some C code, who knows where it would have to be rewritten. So that's why we actually preserved the old piece of stack and then continued on a new piece. That this works at all is quite a magic trick, but it works.
A
It
ran
for
a
while.
In
this
way
it
does
have
some
drawbacks.
The
biggest
one
is
that
if
you
have
like
an
inner
loop
which
is
going
to
be
running
very
often-
and
that
is
exactly
at
the
point
where
you're
crossing
through
a
new
stack
segment,
it's
going
to
be
allocating
and
throwing
away.
So
many
stack
segments
that,
like
some
of
our
benchmarks,
I
think
it
happened
only
once
with
our
benchmarks,
but
it
was
just
suddenly
ridiculously
slow
exactly
because
of
this.
The
like
this
text,
which
happens
exactly
as
a
part
of
the
benchmark.
A
It
was
running
millions
of
time,
so
that's
not
a
great
abstraction,
it's
kind
of
leaky.
It
also
has
like
issues
if
you're
going
to
call
C
code,
the
C
codes
won't
be
managing
segments
of
specs.
So
you
have
to
provide
us
with
a
big
stack,
so
we
had
a
pool
of
big
stacks
and
whenever
you
made
a
phone
call,
you'd
get
a
big
stack
and
then
call
them
that
stack.
Garbage collection is kind of a similar story. We started out with garbage collection because most of us were coming from garbage-collected languages, as a kind of baseline of "okay, this is what a good language looks like", and we felt that you can't really provide an ergonomic language if you don't provide garbage collection. We did have a model where, of course, you only used garbage collection when you wanted it; it was opt-in.
Actually, this was quite a while after I left: Patrick Walton announced this plan, like, hey, we could just get rid of the garbage collector, and my initial reaction was no, this is ridiculous. But then he explained, and it finally clicked for me, that we could really be in the same niche as C++, which is, I think, the biggest thing, the reason why Rust is at all successful: because it can be this almost runtime-less language without any complex machinery around it.
A
So
it's
kind
of
capitalizing
to
the
simple-minded
programming
model
that
we've
been
using
for
ages
as
a
kind
of
baseline,
which
is
a
shame
in
a
way
because
it's
not
perfect,
but
it's
for
systems
programming,
it's
the
best
we
have
and
enabling
allowing
this
language
to
just
be
dropped
in
where
you
would
normally
use.
C++
is
probably
the
biggest
selling
point,
so
then
I
kind
of
got
it
and
it
was
like
okay,
this
is
yeah.
A
Don't
know,
I
think
we
land
us
on
a
on
a
really
good
place
here
in
terms
of
how
bare-bones
the
language
should
be
yeah.
This
was
an
example
from
the
book
says
that
I
wanted
to
show
it's
not
entirely
painless,
not
happening
a
garbage
collector
returning.
It
closure
looks
like
this,
and
it's
not
not
exactly
Haskell,
but
yeah.
Okay
I
mean
it's
a
compromise.
We
have
to
make
and
I
I
think
it's
worth
it.
Audience: [inaudible]

Yes, you can actually, and I think LLVM was in some cases clever enough to just do this for us, where it would see that it was always the same function and it would just inline it. But you're still kind of paying a conceptual cost for it, and I think it's easier to just have a model where you don't generate this complex code and then optimize it back, rather than one where you actually rely on these kinds of clever optimization tricks.
Audience: So recently I saw, it was either an RFC or a Rust internals thread, where someone was proposing this anonymous struct, like you had in your slides. You didn't really touch that much on why they were removed, so I'm just curious what your opinion is on whether that would be a good thing to re-add to the language, because my personal opinion on it is fairly favorable; I think it would be a good idea.
But you have to do that in the crate that defines the trait; you can't do it anywhere else. With that restriction, yeah, it could work. Of course, you would wrap your structs in some other type and then define it on that; if you really need to implement a trait from a different crate, it's not very easy.
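The wrapping he mentions is what the Rust book calls the newtype pattern: when the orphan rule blocks implementing a foreign trait on a foreign type, you wrap the type in a local struct and implement the trait on the wrapper. A sketch; the `Wrapper` name is invented:

```rust
use std::fmt;

// `fmt::Display` and `Vec` are both defined outside this crate, so the
// orphan rule forbids `impl fmt::Display for Vec<String>` here.
// A local newtype makes the implementation legal.
struct Wrapper(Vec<String>);

impl fmt::Display for Wrapper {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[{}]", self.0.join(", "))
    }
}

fn main() {
    let w = Wrapper(vec![String::from("hello"), String::from("world")]);
    println!("{}", w); // prints [hello, world]
}
```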