From YouTube: Bay Area Rust Meetup August 2015
Description
The SF Rust Meetup for August.
Help us caption & translate this video!
http://amara.org/v/2Fho/
A: Oh, we're on the internet! All right, hi everybody, hi internet, welcome to another Bay Area Rust meetup. Thank you all for coming. Y'all should come out to Mozilla sometime, we throw nice parties. Anyway, I am very pleased to have our first distributed systems meetup. This is a topic that's very near and dear to my heart, so I'm very happy that we are finally getting the distributed-systems people talking. As always, thank you, Mozilla, for feeding us and giving us this lovely space.
A: You guys are awesome, and thank you so much for making Rust and all this stuff lots of fun. I don't have the next meetup organized just yet; it's probably going to be web tech stuff, but I need to actually set a date, so that all you people who like those things know what will be happening. And finally, for our agenda: we have Yvonne, who is Andrew's advisor from Canada.
A: All the way from Canada, she will be speaking remotely, and then we have lots of other people, with Diego and Andrew and Dan and Alex. It's going to be an awesome meetup; we have a big lineup, so I'm going to end it here. Anyway, I'm going to hand it off to you, Yvonne, who is on the internet. If you could share your screen and do your talk and everything? Thank you. Please give her a warm welcome.
B: Wow, this is fantastic. Thank you so much. I know I'm violating every rule in the book by remotely attending a meetup, but it's an incredible thrill for me. I think Andrew really wanted me to talk about the past, and so I wanted to try to communicate that it's a scary past, in some ways. I mean, when I think about when I started working with distributed systems, back in the mid-eighties when I was working on my master's degree...
B: ...you know, that's now 30 years ago, and he's really, I think, wanting me to tell you a little bit about all the things that I thought were going to be really big that didn't turn out. And I'm hoping that, by virtue of sharing some of that kind of stuff, 30 years from tonight there'll be this huge party to remember what you guys were all doing here...
B
If
the
mythos
meet
up
and
what's
happening
with
raft
and
what's
happening
with
rust
and
and
how
you're
not
gonna
repeat
the
mistakes
of
those
that
went
before
you,
I
am
up
there
in
Victoria,
I
am
very
lucky
to
be
a
prophet.
You
Vic
and
I
met
Andrew
when
he
was
a
student
in
the
distributed
systems
course
and
basically
told
me
that
I
better
modernize,
my
lectures,
so
we
consider
Victoria
kind
of
the
other
rock
and
here's
an
open
invitation
for
everyone
to
come
up
and
visit
us.
B
If
you
are
ever
interested
so
I
thought.
Maybe
what
we
could
do
is
take
a
look
at.
You
know.
I
love
this
obviously
Leslie
Lamport,
and
you
know
this
quote
that
I'm
sure
everybody's
familiar
with
but
I
just
love
it
so
much
this
you
know
a
distributed
system
is
one
in
which
the
failure
of
a
computer,
you
didn't
even
know
existed,
can
render
your
own
computer
unusable.
B
And,
of
course,
you
know
it's
so
funny
being
here
using
you,
know,
slides
that
are
on
the
internet
having
this
broadcast
on
the
internet,
using
this
video
stuff
and
and
being
so
overwhelmed
with
how
great
things
have
gone
with
distributed
systems.
But
you
know
looking
back,
it's
true.
I
want
to
be
able
to
at
least
set
the
stage
for
what's
coming
here
with
raft
and
consensus
and
how
rest
is
going
to
play
into
it
all,
and
I
think
you
know
really
helped
us
build
distributed
systems
that
we
can
count
on.
B
I'm
certainly
the
story
of
Paxos
and
the
way
that
Leslie
Lamport
tells
it
on
his
webpage
about
how
it
was
submitted
in
1990,
but
wasn't
published
until
98.
It's
it's
very
funny.
If
you
hadn't
had
a
chance
to
read
over
sort
of
his
description
of
the
story,
you
know
it
was
we're
talking
about
distributed
somewhere
in
the
eighties
and
and
he's
thinking
about
consistency,
and
he
wants
to
make
it
fun,
and
so
he
puts
it
into
this
context
about
this.
B
This
ancient
Parliament
and-
and
he
even
does
a
few
lectures-
I
mean
I,
just
love
this.
He
does
a
few
lectures
kind
of
like
this
archaeologists
or
anthropologists
tour.
You
know,
sort
of
Indiana,
Jones
kind
of
style
and
and
he's
talking
about
the
system
for
consistency
and
that
and
the
way
that
he
puts
it.
You
know
you
said
he
asked
some
of
the
people
who
he
had
circulated
the
paper
too,
and
he
asked
him.
He
said
you
know
so
now
that
you've
read
the
paper.
What
do
you
think
about
this
question?
B
So
certainly
the
story
here
is
kind
of
about
how
you
know
consensus
and
certainly
the
problems
of
consistency
put
in
some
of
the
older
context,
and
especially
my
my
bad
is
the
way
that
you
know.
I
was
looking
at
the
distributed
course
material
and
kind
of
dragging
people
through
things
that
Andrew
kinda
grabbed
me
and
said.
B
At
the
end
of
the
end
of
the
80s
and
and
again
you
know-
maybe
not
one
of
the
top
of
your
reading
list
at
this
point,
but
I'm
looking
at
over
a
hundred
different
languages
for
distributed
computing
and
and
trying
to
absolutely
make
it
easier
to
understand,
distributed
systems
or
the
implementation
of
distributed
systems
really
make
the
code.
Look
like
the
design
really
capture
the
right,
abstractions
and,
of
course,
I
put
the
dodo
bird
there.
B: But you know, primitives for things like parallelism, communication, and partial failures were being broken up into fine-grained issues. You don't have to read this table, and certainly you can take a look at the paper, but I just wanted to give you a sense of the kind of breakdown of the kinds of primitives we started to look at in these languages, over there on the left-hand side. And then over on the right...
B
You'll
see
you
know
all
of
the
languages
that
we're
trying
to
to
to
hit
the
hit
the
target
with
the
right,
abstractions
and
again,
obviously
a
lot
of
these.
Maybe
some
of
you
have
heard
of
some
of
them,
but
again
it's
not
like
you're
programming,
actively
a
lot
of
these
right.
Now,
if
we
just
zoomed
in
on
communication
and
messages,
you
know
I
just
wanted
to
show
you
g.
There
was
a
whole
set
of
languages
and
we
were
having
a
hard
time
letting
go
pascal.
B
You
can
probably
see
that
at
this
time
you
know
just
looking
at
synchronous
message
passing
you
know,
and
then,
if
you
didn't
think
that
the
primitives
for
synchronous
message
passing
where
we're
gonna
be
where
it's
act
with
distributed
systems,
maybe
you
want
these
ones
over.
Here
is
a
whole
other
set
of
languages.
Looking
at
asynchronous
message,
passing
and
again
it
a
nice
long
list
there
and
lots
of
great
stuff
going
on,
and
it's
not
that
it's
all
dead.
B
It's
just
that
when
I
think
of
rust
and
I
think
of
30
years
from
tonight,
I
want
to
tell
a
different
story.
Well,
probably
not
me,
but
you
guys,
then
of
course
there
were
other
constructs
to
be
able
to
make
communication
easier,
and
maybe
this
was
the
rendezvous
and
languages
that
we're
going
to
promote
that
kind
of
thing
and
again
one
more
the
good
old
remote
procedure
call.
B
You
know,
what's
not
to
love
about
that
other
languages,
we're
making
that
part
of
the
linguistic
constructs
that
they
were
offering,
because
we
all
knew
that
distributed
systems
were
going
to
be.
You
know
the
future,
and
last
but
not
least,
ladies
and
gentlemen,
of
course,
there
were
those
languages
that
had
a
little
bit
of
everything.
And
yes,
this
is
where
I
went.
B
Certainly
I
used
SR
that
one
down
at
the
bottom
there,
it's
called
synchronizing
resources
and
I
was
pretty
sure
that
that
was
the
bee's
knees
and
it
was
offering
everything
we
needed
for
distributed
systems,
and
that
was
it.
But
of
course
there
were
others
too.
You
know
this
was
all
about
trying
to
make
the
code
something
that
we
could
modify,
maintain
sustain
debug,
so
objects.
B
First
of
all,
by
saying
how
wonderful
I
think
you
guys
all
are
and
maybe
to
try
to
reflect
a
little
bit,
because
that's
what
you
know
I
sold
folks
do
upon
maybe
what
some
of
the
barriers
to
adoption
might
have
been
and
why
I
think
you're
in
a
great
position
to
actually
avoid
some
of
the
some
of
the
things
that
may
have
taken
some
of
these
languages
out.
You
know,
maybe
it's
timing.
Maybe
it
was
just
far
too
early
in
the
80s.
B
Maybe
it
was
that
a
lot
of
these
were
coming
out
of
ivory
towers
and
the
they
weren't
in
the
hands
of
practitioners.
Maybe
there
was
a
lack
of
mechanical
sympathy.
You
know
just
being
able
to
make
those
machines
really
do
what
we
wanted
to,
because
in
some
ways
the
abstractions
that
were
being
introduced
were
actually
getting
in
the
way,
and
it
wasn't
just
that
they
were
runtime
overheads.
It
was
really
that
you
know
we
were
still
evolving
and
they
weren't
the
right
ones.
B
Certainly,
when
you
are
going
down
the
path
of
adopting
a
new
programming
language
and
you're
climbing
the
learning
curve,
it's
a
bummer
when
you
hit
the
spot
where
all
of
a
sudden
there's
no
support
for
the
things
that
you
want
to
be
doing.
So
you
know
not
a
lot
of
standard
libraries
or
certainly
pack
reap
oats
with
packages.
Some
of
the
problems
were
being
addressed,
as
you
saw
in
some
of
the
languages,
while
others
weren't,
and
obviously
that
could
be
a
problem,
could
make
things
error-prone
for
the
things
that
weren't
being
addressed.
B
But
here's
where
I
think
we're
at
tonight
and
why
I'm
just
so
excited
I
mean
in
the
terms
of
killer
apps
consensus,
which
you
know
we
don't
need
to
think
hard
about
what
it
really
means
anymore,
and
although
it
was
great
that
Leslie
Lamport
trying
to
give
us
something
entertaining.
Certainly
if
you're
trying
to
organize
dinner
in
a
movie
with
friends
that
you're
doing
text
messages
and
emails,
it's
really
hard
to
get
concerns.
As
we
know
it,
this
problem
is
obviously
diego
is
gonna,
do
a
much
better
job
than
I
am.
B
But
here's
a
big
thing.
It's
about
community
and
what
you
have
there
is
what
we
didn't
have
in
the
80s.
You
know
you
could
have
a
great
language
that
just
wasn't
being
picked
up
and
used
and
being
being
pulled
apart
and
actually
maybe
you
know
that
sense
of
people
trying
to
build
together
what's
needed
when
new
things
come
up
so
again,
I'm
so
excited
I'm,
so
excited
to
be
handing
over
to
Diego
I
just
had
one
last
slide
for
you
guys
down
there
take
on
those
challenge,
problems
march
towards
the
beat
of
the
drums.
C: Thank you, Yvonne. Raft is this project that I did for my PhD at Stanford. At Stanford I started a system in C++ called LogCabin, and after I graduated, that was still a research system. So I worked with Scale Computing to get it production-ready, and they are deploying it to production now-ish. I just recently joined Salesforce, where I'm starting to get involved in a bunch of infrastructure projects, and with part of my time I'll continue supporting the Raft community in whatever way makes sense.
C
So,
let's
see
hopefully
even
give
you
a
sense
for
what
consensus
is
for
I
will
be
digging
deeper
and
kind
of
giving
you
an
introduction.
But
one
of
the
things
we
did
in
raft
will
say:
look
this
taxes
thing:
it's
super
general.
It's
too
general
90
I'll
make
up
a
number
99
percent
of
the
time
what
people
want
when
they
want
consensus,
is
actually
this
thing
called
replicated
state
machines.
So
a
typical
consensus
system
is
going
to
look
like
this.
C: Sorry, that side of the room. In the replicated state machine architecture, each server is going to have three different things on it: a consensus module, which would be an algorithm like Raft or Paxos; a copy of the replicated log; and a state machine.
C: The state machines are deterministic, so if all the logs have the same series of commands, all the state machines will end up in the same state and also produce the same outputs to send back to the clients. Raft is really in charge of managing that replicated log, and doesn't really care much about what the state machine is. In fact, the raft-rs that you'll hear about next has a sample state machine that it ships with, but that's not part of the library.
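The determinism argument above can be sketched in a few lines of Rust: if two deterministic state machines replay the same log of commands, they end in the same state. The `Kv` type and `Command` enum here are hypothetical illustrations, not code from LogCabin or raft-rs.

```rust
use std::collections::HashMap;

// A hypothetical deterministic state machine: a tiny key-value store.
#[derive(Clone, Debug, PartialEq)]
enum Command {
    Put(String, String),
    Delete(String),
}

#[derive(Default, Debug, PartialEq)]
struct Kv {
    map: HashMap<String, String>,
}

impl Kv {
    // Applying a command is a pure function of (current state, command),
    // so identical logs always produce identical states.
    fn apply(&mut self, cmd: &Command) {
        match cmd {
            Command::Put(k, v) => { self.map.insert(k.clone(), v.clone()); }
            Command::Delete(k) => { self.map.remove(k); }
        }
    }
}

fn main() {
    let log = vec![
        Command::Put("a".into(), "1".into()),
        Command::Put("b".into(), "2".into()),
        Command::Delete("a".into()),
    ];
    // Two replicas replay the same log independently...
    let mut replica1 = Kv::default();
    let mut replica2 = Kv::default();
    for cmd in &log {
        replica1.apply(cmd);
    }
    for cmd in &log {
        replica2.apply(cmd);
    }
    // ...and end up in the same state.
    assert_eq!(replica1, replica2);
    println!("replicas agree: {:?}", replica1.map);
}
```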
C: So why is this interesting from a programming languages perspective? There are actually a lot of people who, when they go and learn a new language, the first thing they do is build a replicated state machine. That's weird, right? But look at what it's got. It's got networking; it does a fair amount of that. It's got this log, and the log has to be persistent, so that's access to disk. There are performance considerations. It is a really good test for a systems programming language, in user space; I don't think anyone does this in the kernel.
C: So, as I said, Raft is just an algorithm for implementing a replicated log. The property that most consensus algorithms give you is that the system will, at all costs, maintain consistency, and it'll make progress, it'll be available, if any majority of the servers are up. So in a three-server cluster, as long as you have any two servers up, that system is fully operational. It deals with fail-stop failures, not Byzantine ones: servers can crash and they can restart; messages can be slow, can be delayed, duplicated, and lost, but not, you know, corrupted.
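The availability rule he describes, progress whenever any majority of servers is up, is simple arithmetic, sketched here for illustration (not code from any Raft implementation):

```rust
// Minimum number of servers whose votes or acknowledgements form a quorum.
fn quorum(cluster_size: usize) -> usize {
    cluster_size / 2 + 1
}

// A cluster of n servers stays available through n - quorum(n)
// simultaneous fail-stop failures.
fn tolerated_failures(cluster_size: usize) -> usize {
    cluster_size - quorum(cluster_size)
}

fn main() {
    // The three-server example from the talk: any two servers suffice.
    assert_eq!(quorum(3), 2);
    assert_eq!(tolerated_failures(3), 1);
    // Five servers tolerate two failures.
    assert_eq!(quorum(5), 3);
    assert_eq!(tolerated_failures(5), 2);
    println!("3-server quorum: {}", quorum(3));
}
```

Note that even-sized clusters buy no extra fault tolerance: `quorum(4)` is 3, so a four-server cluster still only tolerates one failure.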
C: So that led to a different breakdown of the problem than people were doing before. In Raft there's leader election, log replication, and safety. I'll talk about leader election in more depth; I don't want to take all the time for the meetup, so I won't do the other two today. In leader election, the idea is that we want to select one of the servers to act as a cluster leader, and then, if that leader were to fail, we need to detect the crash and choose a new leader.
C: So leader election is pretty exceptional; it happens when you don't have a leader. In normal operation we're really only thinking about replicating the leader's log outwards. So, as I showed in that figure, the leader takes commands from the clients and appends them to its copy of the log, and then it's just trying to get the other logs to look like its own. So it replicates its log out to the other servers and overrides any inconsistency, just blindly. The third section, safety, kind of ties these two together, so that we can't have the leader blindly overriding other logs.
C: That's our leader: it's sending heartbeats and getting responses back. I've paused it now. A little quick, but that's what a normal leader election looks like in Raft: about a 200-millisecond timeout and then a round of requests. I'll go through it more slowly now.
C: So the next thing that's going to happen is that I'm going to kill off server two, and then we'll wait for another timeout period and watch another election go by more slowly. So I killed off server two, who was the leader in term two. This server, server three, is about to time out... so it timed out and it switched to term three. The terms are just an increasing number that kind of keeps track of time: we'll have a leader for term two, then we'll have a leader for term three; it might be the same server.
C: When you become a candidate, you grant yourself your own vote, and these messages are called RequestVote RPCs. So all of the communication is a request and then a response back. Server three is informing everyone else: hey, I've moved on to term three, you should too; and also, can I have your vote?
C: For now, servers just grant their votes first come, first served. That's the thing that we mess with in the safety section, which I won't be talking about today. And then, you know, the votes are granted to server three; it's gotten them, but it doesn't know it yet. Once it hears back from a majority of servers and gets a majority of votes, it becomes leader, and now those are just heartbeats it's started sending out.
C: So that was good-case leader election. Let me show you bad-case leader election, what we call a split vote. So here, servers one and five, on the top left, are going to happen to time out at the exact same time. They become candidates in term four, they vote for themselves, they request votes from the others, and it just so happens that they each get one more vote. Server two is still down, so it can't reply. So server one has two votes, server five has two votes, and neither has a majority.
C: Yes, did you know that before? Well, that's cheating. Okay, so we thought about this, me going into my advisor's weekly hour-long one-on-one, like: what are we going to do about this? Clearly, if we had these two communicate, they could go and exchange votes with each other, and we know that one of them ought to win.
C: But we finally decided that the easiest thing to do is to just wait for another timeout, because that's already behavior we understand: waiting for a timeout and going through a new election.
C: So all these timeouts are actually randomized. Any time you hear from a leader or you grant your vote, you reset your timeout to somewhere between halfway and all the way around the circle, and if that random range is big enough, it makes it extremely likely for one server to wake up and collect votes first. So, most likely, this gets resolved in the very next term.
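The randomized-timeout trick he describes can be sketched like this. The constants and the tiny PRNG here are illustrative only (a real implementation would use a proper RNG such as the `rand` crate, and actual Raft implementations pick their own ranges):

```rust
use std::time::Duration;

// A tiny deterministic linear-congruential PRNG, just to keep the sketch
// self-contained; do not use this for anything real.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

// Pick an election timeout uniformly in [base, 2 * base), i.e. "between
// halfway and all the way around the circle". If this range is much larger
// than the network round-trip time, one candidate almost always wakes up
// first and collects votes before anyone else times out.
fn election_timeout(rng: &mut Lcg, base_ms: u64) -> Duration {
    let jitter = rng.next() % base_ms; // 0..base_ms
    Duration::from_millis(base_ms + jitter)
}

fn main() {
    let mut rng = Lcg(42);
    for _ in 0..5 {
        let t = election_timeout(&mut rng, 150);
        assert!(t >= Duration::from_millis(150) && t < Duration::from_millis(300));
        println!("timeout: {:?}", t);
    }
}
```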
C: Server three times out quickly enough, and it becomes leader in term five. So, in practice: I did a bunch of evaluation, and my conclusion is that in practice you want that random range to be about 10 times your round-trip time. If you've got that, then split votes will be pretty rare, and they'll get resolved pretty quickly.
C: Okay, so just to review leader election: we use heartbeats and timeouts to detect crashes. Leaders send heartbeats periodically, and the other servers are timing out. The timeouts are randomized, to avoid split votes and to resolve the split votes that do happen. Candidates need a majority of votes to become leader in a particular term, and servers may only vote once per term, which guarantees that we're only going to have one leader per term. The whole rest of the algorithm really deeply depends on this property.
C: Okay, good things to say about C++. I promised this would last a little longer than two bullets. It is fast, right, and it's predictably fast: you don't pay for what you don't use, and you can kind of guess. I know how long a function call is going to take; you know, within a few milliseconds, I know how long every operation in the language will take.
C: There's no runtime, so there are no GC pauses, so it's quite predictable. I was pushing Raft timeouts down; I think I had them at six to twelve milliseconds on a gigabit network, and things were actually stable, which you probably can't say for many higher-level languages.
C: You can also get pretty nasty, go low-level, with the systems aspects of it. So one thing I didn't talk about with Raft is that the log can't grow forever; eventually you have to delete some entries from it, and so LogCabin uses snapshotting, where it needs a consistent view of the state machine for an extended period of time. Well, since this was C++, I just called fork, and the child process gets this copy-on-write consistent view of the whole address space automatically, thanks to the OS.
C: That's something you can't really do in a high-level language. At the same time, I'll give some credit to C++0x, or 11, whatever you want to call it, where unique_ptr, you know, the move semantics it introduces, makes memory leaks pretty hard to produce. It's not really me pairing all my news with deletes; most of that is handled by the language, by the libraries. Good.
C: Good and bad: LogCabin is pretty much self-contained. It doesn't have many dependencies other than protobuf, when it comes to serializing data, and gtest for unit testing. So that means it has its own event loop based on epoll, it has its own RPC system, it has its own pretty much everything. That's nice in some ways, right? It has few dependencies, and few libraries that can bite you by changing, so it's pretty easy to debug.
C: If you find a function call, you can find the source code really easily; it's all in the same style, all thoroughly tested, and I'd say I learned a lot by doing this. But, you know, that was in a university setting; it may not be the most practical way to write software. So why? Well, it's hard to depend on libraries in C++, for a number of reasons. There's no standard packaging system, no standard build system, and there's no package ecosystem, really. Some people depend on Boost, and that does a few things for you.
C: The libraries you do find, or code you do find, all use different subsets of C++. LogCabin is in C++0x as of GCC 4.4; it uses exceptions; it does not use lambdas, which didn't land until 4.6. If you want to depend on a library, and that library uses exceptions and your code doesn't, that's a problem. Etc.
C: Some libraries go really heavy on shared_ptr, unfortunately; it could be boost::shared_ptr or it could be std::shared_ptr, but this is a reference-counted pointer, and if the library's doing that and you didn't want to use reference counting, it can spread kind of virally. So it's hard.
C: At least Linux is clear on what the guarantees are. All of these things also mean that, as much as I want to, it is hard to extract LogCabin's Raft implementation from LogCabin the network service: what packaging system am I going to use, what API do I export, do I have to worry about ABI compatibility, blah blah blah.
C: I think Rust is a huge improvement on all this stuff. Cargo for packaging means I don't really have to deal with build systems much when I pull in a library; crates.io means I can find libraries: they exist, they're out there, they're even in a standard place. And the rich type system means that these thread safety issues...
C: ...on thread safety, because this stuff's hard. You know, in my experience these bugs are rare, but when they happen, they're pretty hard to debug. So LogCabin uses a monitor style, and (I'm sure we'll put the slides up) I have a link to the paper that introduced that. It's an old paper; Yvonne will be excited.
C: That basically means there's one mutex per object, for the objects that are monitors, and all public methods hold that mutex the whole time you're in the method, unless they're blocked on a condition variable. So this is a synchronization thing, and you don't have to worry much as a caller. It's a good strategy, but there's no language support for it, and so, I guess, it's just really hard to enforce.
C: Let me give you an example of a bug, a real bug, that would sometimes cause an hour-long hang on shutdown. This is the main function for a thread. It says: while not exiting, grab the lock, do some stuff, wait on a condition variable. I must have refactored this code or something, and accidentally I'm accessing that exiting variable without holding the lock. It's just a boolean, so it doesn't matter, right? I mean, that one is easy to spot on the slide.
C: You can find some videos on the Raft website where I go through the log replication and safety aspects of Raft. And then, if you want a whole lot more info, there's the Raft paper; we tried really hard to make it readable. And I wrote a 255-page book that talks about things like: how do you change the members of the cluster? So if you want to go from a three-server cluster up to a five-server cluster, you're changing the definition of a majority, and that has to go through the consensus.
C: So, in the LogCabin implementation, I'd say C++ was a mixed bag. The performance is awesome; the ecosystem that it lives in is pretty siloed, and so LogCabin is self-contained; and the memory and thread safety bugs, well, I think we're past most of them, but they can be annoying. And I guess, in terms of Rust, I'm really excited to see it.
C: You know, it's at 1.0, it's a language that's getting quite stable. I think there's more work to do on the libraries, but I want to see distributed systems help push those forward. And raft-rs, of course: I'm excited to see that turn into a production-ready implementation that we know for sure has no memory bugs.
C: So there are a couple of things you definitely need to do well. The thread safety, I think, will be free, in that Mio's Rust-level interface will describe how to use it correctly; because, ultimately, even if you have an event-based network system, you may not have a thread-based rest of your system, right? It was very hard in LogCabin to communicate between those two aspects. The other thing is, you know: there's no CLA, it's MIT-licensed, it's all community-guided, and we really want to be your first distributed system. So if you're a virgin, come on, join us. And ask questions during the talk if you want, because I don't like talking for 50 minutes straight. We have lots of friends, especially on the IRC channel; it's the friendliest IRC channel I've ever been on.
I: So let's take a quick big-picture look at some libraries that we use, and a little bit of history. So, a little bit of history: I started working on it in Yvonne's distributed systems class, on the 0.9 nightlies, so pretty much half my week of working on it was fixing the things that had happened in the last week. I learned very quickly to update on Friday night and then work on it all Saturday and Sunday. And crates really weren't around then.
I: After a few finals, we finally got back to working on it, and people started hacking with me, like Dan, who you'll meet later, and we had a few other contributors. That was really cool, as someone who was just kind of getting into writing open source software and actually making libraries, instead of little executables that a couple of people would download and run.
I: So we took a lot of time to think about how we were going to make this one, because we actually want people to use it, not just look at it and be like: oh. So we want it to be fast. We don't have an opinion about how you write your code or how you architect your system; we want it to fit into how you want it to work. And we really want it to be correct, per Diego's paper.
I: We noticed that a lot of the Raft implementations online, and on the Raft website, were just key-value stores, and we don't want to lock you into anything. So we've made some choices that will hopefully later enable things like Python, or C, or something, to start calling into our code. So we picked Mio and Cap'n Proto. We have a Cap'n Proto guy here, which is pretty cool; if you want to learn more about it, talk to him. We actually had to fork it, though, because it didn't have good async support at the time.
I: That's getting better, though, and that's an ongoing story; we really like Cap'n Proto. And Mio was a cool pairing with it, because Mio is really low level, and it was much, much better than Node to play with, because you actually got to play with the system and not have to deal with callbacks, because callback hell is hell. I really like tooling, that's a big thing for me, so I did a lot of work on making sure that we have automated testing, and making sure we have documentation...
I: ...that's always up to date. Huon actually took one of my blog posts and made it into an awesome little application, which is great; you should use it if you're not. And we started using homu, which is bors; if you've contributed to Rust, you know what bors is. It keeps master green and makes sure that someone always reviews your code, even if that's you (but generally we don't do that). So: it's a pretty simple diagram, just like you saw Diego do.
I: So we've built the client, which has a really nice, easy API. You just spawn a client, you tell it the nodes in the cluster (it doesn't have to be in the cluster, so your client could be anywhere), and you can get immutable or mutable access. So you're probably wondering: what are these messages? What am I passing around, right?
I: You can pick whatever you want. You give us some bytes, and we ship them off to the state machine. Your state machine, which you implement, knows how to handle those bytes, and that could be Cap'n Proto buffers, it could be serde, it could be bincode; we don't care, you do whatever you want. And then you get a response back. These calls are blocking, which is a little bit weird after working with Mio.
I: The server is a big Mio reactor that your clients talk to. You're never going to touch this, other than to run it, and running it is: you give it an ID, the address, and your implementations. And don't use unwrap in production. There's no Raft-specific logic in the server implementation; all it is, is message handling, and acting on what we get back from the consensus module. This was done so that we could test the consensus module, because we really want to have really robust testing of the consensus module...
I: ...which you will never touch, unless you're contributing, which you should; it's pretty fun. Most of the calls look a little bit weird, because we don't actually return anything. What you do is, when we pass it in, we pass in this actions structure, and then that comes back out of the function call, and that way we can act on it in the server in different ways. There's no I/O, nothing, so it's really kind of safe.
I: If you want to start playing with writing a log: Dan Burkert has started to work on a write-ahead log (it's been a bit of a yak shave; I think he might actually talk about it), but that would be a fun project, and we're hoping to improve the API, which you'll see here. It's basically just a bunch of really simple calls that return Results, so you could go and implement all of these, and you'd have a log.
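The "bunch of really simple calls that return Results" can be imagined as a trait along these lines. The trait and method names here are made up for the sketch; they are not the crate's actual log interface:

```rust
use std::io;

type Term = u64;
type LogIndex = u64; // 1-based, as in the Raft paper

// An illustrative persistent-log interface in the spirit described in the
// talk: simple fallible calls, nothing Raft-specific leaking through.
trait Log {
    fn append(&mut self, term: Term, entry: &[u8]) -> io::Result<LogIndex>;
    fn entry(&self, index: LogIndex) -> io::Result<Option<(Term, Vec<u8>)>>;
    fn latest_index(&self) -> io::Result<LogIndex>;
    // Discard entries at `from` and beyond (a new leader may overwrite them).
    fn truncate(&mut self, from: LogIndex) -> io::Result<()>;
}

// A volatile in-memory implementation, useful for tests; a real one would
// be a write-ahead log on disk.
#[derive(Default)]
struct MemLog {
    entries: Vec<(Term, Vec<u8>)>,
}

impl Log for MemLog {
    fn append(&mut self, term: Term, entry: &[u8]) -> io::Result<LogIndex> {
        self.entries.push((term, entry.to_vec()));
        Ok(self.entries.len() as LogIndex)
    }
    fn entry(&self, index: LogIndex) -> io::Result<Option<(Term, Vec<u8>)>> {
        Ok(self.entries.get(index as usize - 1).cloned())
    }
    fn latest_index(&self) -> io::Result<LogIndex> {
        Ok(self.entries.len() as LogIndex)
    }
    fn truncate(&mut self, from: LogIndex) -> io::Result<()> {
        self.entries.truncate(from as usize - 1);
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut log = MemLog::default();
    log.append(1, b"x=1")?;
    log.append(1, b"x=2")?;
    assert_eq!(log.latest_index()?, 2);
    // A new leader may truncate conflicting entries before overwriting.
    log.truncate(2)?;
    assert_eq!(log.latest_index()?, 1);
    Ok(())
}
```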
I: This is where you'll handle those bytes that you send from the client, so you can do whatever you want. We have examples with get, put, and compare-and-swap. Someone was talking to me earlier about watch, so that you could watch something; you could totally put all of those in here. The watch might be a little bit hard, because you'll actually have to have network calls and stuff, but that might be interesting.
I: It is also responsible for retrieving the log, combined with log compaction, which we don't have yet; it's in process, and I don't think we're going to fork like Diego did, but I don't know, that might be the best choice. And again, this is not set in stone. It's pretty simple: you can apply, which gives you mutable access, so that's where we do things like puts, and these all go through the consensus module and the log, so they're all auditable; you can see where they are.
I: The query is read-only, so it doesn't go through the log, and it's very fast to get a response back. So you should only be using apply when you actually need mutable access, because it's going to take a lot longer to go through. And then we have snapshot and restore snapshot, which aren't implemented yet.
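The split described above, mutating `apply` calls routed through the replicated log, read-only `query` answered locally, plus the not-yet-implemented snapshot hooks, could look roughly like this. The trait and method names are illustrative, not the crate's real interface:

```rust
// An illustrative state-machine interface matching the description above:
// `apply` gets mutable access and is reached through the replicated log;
// `query` is read-only and answered locally without a log round trip.
trait StateMachine {
    fn apply(&mut self, command: &[u8]) -> Vec<u8>;
    fn query(&self, query: &[u8]) -> Vec<u8>;
    fn snapshot(&self) -> Vec<u8>;
    fn restore_snapshot(&mut self, snapshot: &[u8]);
}

// A toy counter state machine: "inc" commands mutate, queries read.
#[derive(Default)]
struct Counter {
    value: i64,
}

impl StateMachine for Counter {
    fn apply(&mut self, command: &[u8]) -> Vec<u8> {
        if command == b"inc" {
            self.value += 1;
        }
        self.value.to_string().into_bytes()
    }
    fn query(&self, _query: &[u8]) -> Vec<u8> {
        self.value.to_string().into_bytes()
    }
    fn snapshot(&self) -> Vec<u8> {
        self.value.to_string().into_bytes()
    }
    fn restore_snapshot(&mut self, snapshot: &[u8]) {
        self.value = String::from_utf8_lossy(snapshot).parse().unwrap_or(0);
    }
}

fn main() {
    let mut sm = Counter::default();
    sm.apply(b"inc");
    sm.apply(b"inc");
    assert_eq!(sm.query(b"value"), b"2".to_vec());

    // Snapshot and restore round-trip the state, which is what lets the
    // log be compacted once a snapshot exists.
    let snap = sm.snapshot();
    let mut restored = Counter::default();
    restored.restore_snapshot(&snap);
    assert_eq!(restored.query(b"value"), b"2".to_vec());
    println!("counter = 2");
}
```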
I: So maybe one of you can help us with that, or we'll work on it. We built raft because we want to build things together with you, so we're working on getting to 1.0, and we have a few things on the go.
I: Membership changes, log compaction, snapshotting: the things in Diego's paper that we haven't done yet need to be done before we can hit 1.0; that's the law. And we really want to improve robustness in the client and in the server. We want to make sure that we don't have any crashes that might happen and surprise you; this thing is supposed to be crash-free, right? In the future, some cool projects would be C bindings, because we don't really have a huge runtime or anything, so that shouldn't be a huge difficulty.
I
So get involved: come over to our GitHub repo. It's not hard to find; just look for Hoverbear on GitHub and you'll find it. Break it, improve it, yell at us (that's okay, just be nice), and join us on IRC; we're on the #raft IRC channel. We're all very friendly people and we won't bite.
I was going to do a demo, but I don't have my computer, so maybe you can demo it yourself, because it's not that hard to demo.
I
If you want to play with a demo, you can just clone the repo; there's an experiments folder. It has an automated playbook to deploy a built raft cluster over Ansible, and we have a tmux dashboard that you can play with, with some simple macro commands, just to play with the algorithm. You can shut down nodes with Ctrl-C. It's nice and easy, and yeah, have fun with it.
F
Everything needed to make writing distributed systems in Rust better. So I'll be talking about why you would want to do that, why Rust is a good programming language for it, and some of the things that it's not yet good at, mostly around libraries. So that's where we're headed. I'm dcb on IRC and danburkert on GitHub, so you can find me there.
F
Okay, so, distributed systems. It's a big topic, right? There's no way we're going to cover it all tonight in the ten minutes I have. I've bolded the topics I do want to cover. Not that security, encryption, performance, and all the other things aren't important, but in the case of security and encryption, I don't think I'm the person to be talking about that, and in any case we've already had whole Rust meetups on that, so you don't want me talking about it. Performance:
F
You know, if you make your language faster for writing systems programs, it will benefit distributed systems. But I will be talking about I/O and networking, because those are so intrinsically intertwined with distributed systems, and I'll be talking about operations, kind of ops things. I'm talking about this because it's something that I haven't seen a lot of people talking about yet in Rust. I think it's coming, it's a matter of time, but nobody's talking about it yet. So, networking in Rust: today you have kind of two options.
F
If you want to do networking, you have the std::net package, which is a really nicely designed blocking, thread-based library, and this code block on the left here is taken directly from the standard documentation. If you know a little bit of Rust, it's pretty easy to follow: we have kind of an iterator of TCP sockets coming in from a listener, so listener.incoming(), and you get an iterator of streams.
F
You open up the stream, the TCP socket, you give it to a thread, and the thread handles it. Looks pretty nice, pretty good: you spawn a new thread per socket and everything's good. This will be very fast and scale pretty well to a lot of threads. I don't have hard numbers, but you can do thousands of connections with a model like this on a reasonably hefty machine with a modern Linux.
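The blocking, thread-per-connection pattern being described follows the shape of the example in the std::net documentation. This sketch (a toy echo server; the round-trip helper is added here just so the behavior is observable) shows that shape:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Each connection gets its own thread; reads block until data arrives.
fn handle_client(mut stream: TcpStream) {
    let mut buf = [0u8; 512];
    // Echo whatever we receive until the peer hangs up.
    while let Ok(n) = stream.read(&mut buf) {
        if n == 0 {
            break;
        }
        if stream.write_all(&buf[..n]).is_err() {
            break;
        }
    }
}

// Helper for demonstration: start a server, send one message, return the echo.
fn echo_round_trip(msg: &[u8]) -> std::io::Result<Vec<u8>> {
    // Port 0 asks the OS for any free port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    thread::spawn(move || {
        // incoming() yields an iterator of streams, as in the std docs.
        for stream in listener.incoming() {
            if let Ok(stream) = stream {
                thread::spawn(move || handle_client(stream));
            }
        }
    });

    let mut client = TcpStream::connect(addr)?;
    client.write_all(msg)?;
    let mut reply = vec![0u8; msg.len()];
    client.read_exact(&mut reply)?;
    Ok(reply)
}
```

Note how little ceremony there is: the blocking reads hide all of the waiting inside the per-connection thread.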
F
On the other side, here we have a code snippet from mio. Mio is a library to do non-blocking, evented I/O in Rust, and the code block on the right is not at all comparable to the one on the left: it's doing far less, and it's far more complicated. That's just kind of the state of the world with mio, and that's actually going to be a lot of this talk, because I know a lot of people are interested in trying out mio.
F
A lot of the challenges we've had to face in raft have not actually been raft challenges or distributed-systems challenges. It's been: how the heck do we get this library and that library to talk to each other, and do it in an efficient way? Early on in raft, we decided to go with mio and non-blocking, evented I/O, and I don't know that we had a great reason; it was kind of the shiny new thing on the block.
F
So, non-blocking sockets invert your program flow, and it's an entirely new way of thinking. Whereas before, when you try to read from a socket, you will always get back the data, it just might take an indeterminate amount of time; but you never see that time in the thread. The thread just goes to sleep magically, and another thread wakes up and takes over. With non-blocking I/O, it's different.
F
You have to handle the case where there's no data from that socket to be read: go do something else for a while, then come back and try to read again. And that's the whole trick. You can have a set of ten, or a hundred, or 10,000 sockets, and as long as you're trying to read from each of them, none of them is blocking the thread.
F
That's fine, which raises the question: how do you know when to read from a socket, if it doesn't block and just gives you back an error saying "no, I don't have any data"? Well, you can't just go down the list and ask them all over and over again. So the operating system gives you some tools for this, and mio is basically a wrapper around those operating-system tools that tell you: hey, go look at this socket now.
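That "no, I don't have any data" error surfaces in Rust as `io::ErrorKind::WouldBlock`. A minimal stdlib-only sketch (no mio involved; the demo function here is purely illustrative) of handling it on a non-blocking socket:

```rust
use std::io::{ErrorKind, Read, Write};
use std::net::{TcpListener, TcpStream};

// Try to read from a non-blocking socket once; distinguish "no data yet"
// (WouldBlock) from real data and real errors.
fn try_read_once(stream: &mut TcpStream, buf: &mut [u8]) -> std::io::Result<Option<usize>> {
    match stream.read(buf) {
        Ok(n) => Ok(Some(n)), // got n bytes
        Err(ref e) if e.kind() == ErrorKind::WouldBlock => Ok(None), // nothing yet
        Err(e) => Err(e), // a real error
    }
}

// Demonstration: a fresh connection with nothing sent yet reports WouldBlock.
fn demo() -> std::io::Result<(bool, Vec<u8>)> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let mut client = TcpStream::connect(addr)?;
    let (mut server_side, _) = listener.accept()?;
    client.set_nonblocking(true)?;

    let mut buf = [0u8; 8];
    // Nothing has been sent, so the first read reports "would block".
    let first = try_read_once(&mut client, &mut buf)?;

    server_side.write_all(b"hi")?;
    // Spin until the bytes arrive; a real program would wait on epoll/mio
    // instead of burning CPU in a loop like this.
    let mut data = Vec::new();
    while data.len() < 2 {
        if let Some(n) = try_read_once(&mut client, &mut buf)? {
            data.extend_from_slice(&buf[..n]);
        }
    }
    Ok((first.is_none(), data))
}
```

The busy loop at the end is exactly the "ask them all over and over" problem; epoll (via mio) exists so you can sleep until the kernel says a socket is ready.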
F
So let's build something, right? Let's look at these APIs and dissect them. We're not going to start with something as complicated as raft. What we're going to do is build a simple key-value store, something very, very simple. So, a little telnet session: put my-key some-value, and the server sends back OK; get my-key, and it sends back the value that I just put; get another-key, no value; a bogus command sends back an error.
F
Telnet doesn't actually have the arrows; I put those in to show what I'm sending, and the other stuff is what we're getting back. If you are following along at home or have a laptop out, go to github.com/danburkert/simple-kv; server.rs has all the code that we'll be taking a look at here.
F
We have a TCP listener. This particular TCP listener is a mio-branded TCP listener, as opposed to the one we saw earlier, which was std::net, and I'll talk about how that's a little bit different, although it looks very similar. And we have a slab of connections. A slab is essentially just a bag; it's a container for connections, and you can ask the container for a specific connection given a token, which is kind of a key.
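A slab can be sketched with nothing but the standard library. This toy version (illustrative only; the real slab crate mio uses manages free slots more efficiently) hands back a token on insert and looks connections up by that token:

```rust
// Toy slab: a bag of values addressed by small integer tokens.
// Illustrative only; the real slab crate tracks free slots more cleverly.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Token(usize);

struct Slab<T> {
    entries: Vec<Option<T>>,
}

impl<T> Slab<T> {
    fn new() -> Slab<T> {
        Slab { entries: Vec::new() }
    }

    /// Store a value, returning the token that names its slot.
    fn insert(&mut self, value: T) -> Token {
        // Reuse a vacated slot if one exists, otherwise grow.
        for (i, slot) in self.entries.iter_mut().enumerate() {
            if slot.is_none() {
                *slot = Some(value);
                return Token(i);
            }
        }
        self.entries.push(Some(value));
        Token(self.entries.len() - 1)
    }

    /// Look a value up by token, like fetching a connection in the server.
    fn get(&self, token: Token) -> Option<&T> {
        self.entries.get(token.0).and_then(|slot| slot.as_ref())
    }

    /// Drop a connection, freeing its slot for reuse.
    fn remove(&mut self, token: Token) -> Option<T> {
        self.entries.get_mut(token.0).and_then(|slot| slot.take())
    }
}
```

The token is what the event loop hands back later, so a cheap token-to-connection lookup is the whole point of the structure.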
F
So, mio is evented: it gives you an event loop abstraction. This slide is way too small to read, and if you could read it, I'm not sure it would make sense to you unless you already had quite a bit of mio experience. But basically what's happening here is that we implement an interface for our server type. Our server type is this struct; we implement a trait, an interface, for this server type that has a function called ready, and the mio event loop will call ready on our handler.
F
The trait we're implementing is called Handler. The event loop will call ready on our implementation with the token and the event set for a specific socket that you've registered. So, for instance, our key-value store is just sitting there waiting for connections to open, for clients to connect, and then it waits for bytes to come in that it can read from the socket. It parses the bytes, figures out whether it's a get or a put, does the operation, and then sends back the response.
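The Handler/ready control flow can be mocked up without mio itself. This is a hedged, stdlib-only sketch: the names Handler, Token, and EventSet mirror mio's 2015-era API, but the types here are stand-ins, not the real ones.

```rust
// Stdlib-only mock of mio's dispatch pattern: the event loop hands the
// handler a Token (which socket) and an EventSet (what happened).
// These types are illustrative stand-ins for mio's, not the real API.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Token(usize);

#[derive(Clone, Copy, Debug)]
struct EventSet {
    readable: bool,
    writable: bool,
}

trait Handler {
    fn ready(&mut self, token: Token, events: EventSet);
}

// A toy server that just records which sockets became readable.
struct Server {
    readable_tokens: Vec<Token>,
}

impl Handler for Server {
    fn ready(&mut self, token: Token, events: EventSet) {
        if events.readable {
            // In the real server this is where we would read bytes,
            // parse a get/put, and queue a response.
            self.readable_tokens.push(token);
        }
        if events.writable {
            // And here we would flush the connection's write buffer.
        }
    }
}

// A fake "event loop" driving the handler with a batch of notifications.
fn run_events(handler: &mut Server, events: &[(Token, EventSet)]) {
    for &(token, set) in events {
        handler.ready(token, set);
    }
}
```

The inversion of control is the thing to notice: the server never asks "may I read now?"; the loop tells it "token N is ready".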
F
Ready will get called with the token corresponding to that TCP listener, as well as an event set, and the event set is basically just a bit mask that tells you it's readable, or writable, or an interrupt, or something like that. And the story for normal sockets is very much the same: you register them with the event loop, and down here you can see what that register call looks like. We look up the connection out of the connections collection, and we register it.
F
Sorry: we register the socket from that connection, with the token, the events we're interested in, as well as some polling options. If you're interested in those, definitely check out the mio documentation, or Carl Lerche gave a great talk at RustCamp a couple of weeks ago that's online. One of the biggest things you have to deal with when using non-blocking I/O, as opposed to blocking I/O, is buffer management.
F
When you're reading from a socket that's going to block you, you have an implicit buffer there, kind of on the stack frame. When you move to an evented model, you have to make that buffer explicit. So with every single connection, we now add a read buffer and a write buffer. When we read bytes from the connection, we read into the read buffer, and when we write bytes out, instead of writing directly to the socket, we serialize to the write buffer, register an interest in writing to that socket, asynchronously get notified later by the event loop, and only then copy the bytes from the write buffer onto the socket. Pretty complicated; way more complicated than just saying read or write. The write buffer is the more obvious one: you need a write buffer because when you decide to write is not when you get to write. You only get to write when you're notified that you can write.
So you have to have somewhere to store that message. The read buffer is a little bit more subtle. Typically, or really always, in a distributed system you have some kind of protocol that you're conforming to, whether it's HTTP, or Cap'n Proto RPC, or SOAP, or XML, or whatever; you have some protocol that you're trying to speak. And typically, if you have only received half of your message, that's not great from a protocol standpoint.
F
So that gets to the next point: how do you deserialize? This is again quite small, but even if all you can do is read the comment, I'll explain it; you don't really need to be able to read the code here. What this code is doing is looking at the read buffer for a connection and scanning over it for newline symbols.
F
Remember
our
protocol
that
we're
implementing
here
is
a
line
based,
get
key,
put
key
value,
it's
very
simple,
so
it
scans
over
the
read
buffer
for
a
new
line
takes
that
slice
of
the
bytes
and
then
turns
it
into
a
door
get
or
a
read,
pretty
simple
right,
we're
just
scanning
through
input
for
new
lines.
The
standard
library,
of
course,
has
something
that
can
do
this
off
of
the
standard
file
and
network
abstractions.
F
But those things don't work very well with non-blocking sockets, because if you're reading bytes from that socket and all of a sudden it returns an error ("I don't have any bytes, I would block, so I'm giving you back an error instead of bytes"), the standard types will basically just throw the data you've already read on the floor and give you back only the error.
F
So
if
your
line
is
100
bytes
long
and
your
sockets
only
has
50
bytes
received,
you
read
out
of
it,
you
have
50
bytes
and
an
error.
Well,
it
just
throws
that
is
on
the
floor,
gets
you
back
the
air
and
so
that
that's
not
obviously,
that
won't
work.
So
we
have
to
implement
our
own
logic
to
do
all
this
D
serialization
this
code
is,
you
know
it's
not
very
long.
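The newline-scanning step can be sketched as a pure function over the read buffer. This is an illustrative reconstruction, not the actual simple-kv code: it pulls complete lines off the front of the buffer and leaves any partial line (the 50-of-100-bytes case) in place until more bytes arrive.

```rust
// Illustrative sketch of read-buffer deserialization, not the actual
// simple-kv code: extract complete newline-terminated commands and keep
// any trailing partial line in the buffer until more bytes arrive.

#[derive(Debug, PartialEq)]
enum Command {
    Get { key: String },
    Put { key: String, value: String },
    Invalid,
}

fn parse_line(line: &str) -> Command {
    let mut parts = line.trim_end().splitn(3, ' ');
    match (parts.next(), parts.next(), parts.next()) {
        (Some("get"), Some(key), None) => Command::Get { key: key.to_string() },
        (Some("put"), Some(key), Some(value)) => Command::Put {
            key: key.to_string(),
            value: value.to_string(),
        },
        _ => Command::Invalid,
    }
}

/// Drain complete lines from the read buffer; partial trailing data stays put.
fn drain_commands(read_buf: &mut Vec<u8>) -> Vec<Command> {
    let mut commands = Vec::new();
    while let Some(pos) = read_buf.iter().position(|&b| b == b'\n') {
        // Split off one full line, including the newline.
        let line: Vec<u8> = read_buf.drain(..=pos).collect();
        let line = String::from_utf8_lossy(&line);
        commands.push(parse_line(&line));
    }
    commands
}
```

Because the partial line is retained, the "50 bytes and a WouldBlock error" situation costs nothing: the next read appends to the buffer and the line completes.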
F
So, given that it's so much harder to do this, at least with the abstractions that mio provides, why do we use non-blocking I/O, or why are people interested in it? The primary thing is scalability. If you have a server that you want to scale to hundreds of thousands of connections, it's not possible, or at least not great, to have to spin up a thread per connection. There are overheads associated with threads that can be more than with the non-blocking approach. There's overhead associated with non-blocking connections too, right?
F
We
had
to
use
to
give
that
explicit
buffer.
In
this
case,
sometimes
you
can
get
around
that,
but
usually
you'll
have
to
have
something
like
that,
but
the
overheads
with
threads
can
be
even
greater,
including
scheduling
overhead
one
of
the
interesting
things
about
using
my
oh,
that
we've
discovered
with
raft
is
that,
because
it's
explicitly
single
threaded,
we
don't
have
to
have
any
synchronization
on
access
to
what
otherwise
would
be
shared
state.
F
I
think
for
I
think
everyone
kind
of
who's
studied
this
understands.
It's
the
way
forward
for
non-blocking
I/o
is
that
we
need
better
abstractions,
and
this
is
kind
of
a
call
to
arms
for
the
community.
There's
already
people
working
on
this
really
smart
people
who
have
done
way
more
network
programming
than
me
so
I'm
not
like
trying
to
say
anyone's
doing
anything
wrong,
I
just
think
more
attention.
It's
always
good
and
more
and
getting
people
kind
of
rallied
behind.
Something
would
be
good,
of
course,
there's
no
right
answer.
F
There's
different
abstractions,
there's
futures,
there's
futures
and
streams,
there's
co
routines,
there's
call
backs
there's
the
event
loop,
which
you
could
claim
is
as
an
abstraction
of
itself
and
what
is
the
right
choices?
Probably
you
know
there's
no
right
answer
for
all
applications
or
all
libraries,
but
it's
definitely
important
to
keep
in
mind
that
an
async
abstraction
has
very
far-reaching
consequences.
F
Diego
is
talking
a
little
bit
about
how,
if
you
use
reference,
counted
smart
pointers
in
c
plus
plus
they
tend
to
be
viral
and
take
over
your
code
base
cook
async
abstractions
are
very
much
the
same
thing.
So
if
you've
ever
used,
Nettie
or
something
and
on
the
JVM,
like
that,
all
of
a
sudden
every
single
function
call
you
have,
has
a
neti
future
type
in
its
method
signature.
F
So
these
things
you
know,
there's
a
lot
of
thought
that
needs
to
be
and
design
the
needs
to
be
put
into
these
things.
It's
relatively
simple
to
go
out
and
design
a
future
abstraction
and
hook
it
up
to
e
pole
or
whatever,
but
making
sure
that
all
the
libraries
in
the
ecosystem
fit
together
with
that
is
the
challenge
and
I
think
we
need
to
be
thinking
about
that.
I
did
not
call
out
any
specific
projects
on
here
specifically,
but
there
are.
There
are
quite
there's
multiple
implementations
of
features
and
streams
hooked
up
to
two
I.
F
Oh
there's,
even
multiple
co,
routine
implementations,
although
I
think
they're
all
sharing
somewhat
of
a
common
base
and
they're
all
very
interesting
check
them
out
there.
They
should
be
relatively
easy
to
find
if
you
go
to
like
the
discuss
or
the
reddit
page
or
ask
on
IRC,
so
backing
up
a
little
bit
going
to
move
on
to
the
operations
things
I
was
kind
of
hinting
at
right
now,
and
this
is
totally
understandable.
F
Rust
is
somewhat
of
a
young
excuse
me,
young
ecosystem
and
there's
not
a
lot
of
people
talking
right
now
about
tools
that
make
it
possible
or
easier
or
more
sane,
to
run
rust,
apps
in
production
and
run
rough
stops
for
a
long
time.
So
things
like
collecting
metrics
things
like
reporting
those
metrics
things
like
tracing
and
debugging.
The
debugging
story
is
pretty
good,
I
mean
we
have
gdb
and
LD
b,
and
you
can
remote
debug
via
that.
But
these
other
things
there's
basically
no
answer
and
I
might
be
wrong.
F
If
you
know,
if
you
need
things
to
look
at
there's
great
models
in
other
communities,
one
like
four
metrics,
for
instance,
1,
I'm
familiar
with-
is
there's
a
JVM
library
called
metrics
I,
think
yeah
Kota,
hell
metrics,
which
is
pretty
good,
and
if
we
had
a
rust
equivalent,
that
would
be
excellent
reporting.
You
know
we
need.
We
need
connectors
for
graphite
in
for
getting
out
there.
You
know,
there's
10
different
visualization
and
metric
collecting
projects
out
there
tracing
stuff.
F
You
know
you
can
go
all
the
way
from
actually
so
the
the
very
coolest
things
going
on
in
tracing
as
well
as
metrics
are
coming
out
of
browsers.
So
Firefox
has
some
really
excellent
memory.
Tracing
things
coming
out
of
it.
Servo
has
a
project
called
heap
size
I
believe
that
I'm
very
hopeful
is
going
to
grow
and
become
kind
of
the
standard
go
to
for
this
type
of
thing,
at
least
for
heap.
Metrics
Chrome,
for
instance,
has
a
tracing
dashboard
that,
if
you've
never
seen
it
I
encourage
you
to
look
at
it.
It's
absolutely
fantastic.
F
I
don't
know
if
firefox
has
an
equivalent
I
should
ask
some
of
the
experts
here,
but
I
know
that
Firefox
its
memory
trip,
tracing
and
tracking
stuff
is
equally
as
high
quality
yeah.
So
those
are
just
some
ideas
of
things
that
rust
currently
has
no
kind
of
solution
for
and
that
are
pretty
critical
when
you're
trying
to
go
out
and
production
eyes
distributed
system
yeah
special
thanks
to
everybody.
Who's
contributed
to
rust,
to
everybody's,
contributed
to
my
oh
and
thanks
to
andrew
and
the
rest
of
the
raft
crew.
H
Cool, yeah. So I'm going to talk a little bit about my experience building a distributed system using raft in a safe language. Specifically, we forked HBase a while ago at my previous company and actually replaced the HDFS write-ahead logging with a raft-based model. A lot of cool stuff happened, and we learned a lot of stuff along the way. But the real idea here is: we've covered what raft is, the raft project with Rust, and the things that we need to make the raft project with Rust a lot better.
H
I
kind
of
want
to
come
at
it
from
the
other
direction
and
be
like
here's
some
like
another
way.
You
can
kind
of
look
at
these
technologies
like
raft
and
some
of
the
advantages
of
using
a
language
lake
rest
over,
say,
job,
which
is
language
that
I
had
to
do
this
in
and
maybe
some
lessons
that
I
learned
to
do
it
better
next
time.
So
yeah
as
I
said
you
know,
the
point
of
this
talk
is
to
first
of
all
say
that
you
know
you
know.
H
I
would
basically
argue
if
you're
architecting
a
system
identically
in
C++
and
Java
you're,
probably
doing
it
wrong,
at
least
when
you're
dealing
with
something
that's
going
to
be
like
close
to
the
persistence
layer,
ie
close
to
syscall.
So
please
keep
keep
all
of
these
those
criticisms
in
mind
and
they
don't
apply
to
any
of
the
C++
stuff.
So
really
quickly
raft
is
like
I
said:
Diego
defined
it
for
now
I'm
just
going
to
simplify
it
to
say
it's
a
way
to
get
a
whole
bunch
of
processes
whatever.
That
means
to
agree
on
stuff.
H
The
obvious
reasons
why
raft
is
a
good
idea
is
well,
it's
there's
all
the
performance
stuff
so
compared
to
like
other
consensus
based
systems,
you
don't
have
to
do
a
lot
of
our
pcs.
In
fact,
people
used
to
argue
that
it's
a
minimum
number
of
our
pcs-
that's
not
really
true,
but
for
practicality.
It
is,
but
it
also
has
this
notion
of
kind
of
like
passive
replication,
as
opposed
to
active
replication.
There's.
H
Actually,
this
great
paper
I'll
link
in
the
back
called
like
vive
la
différence,
where
they
talk
about
the
whole
classes
of
consensus,
algorithms
and
how
some
are
more
passively
replicated.
Like
arguably
view
stamp
replication
is
more
passively
replicated
than
even
raft
is
versus
an
actively
replicated
system
like
a
single
decree
pack
so
system.
H
So
generally,
when
people
build
like
a
multi
master,
active-active
pack
so
system,
when
you
put
an
RPC
into
that
consensus
system,
it's
in
memory
on
all
those
machines
and
that
actually
makes
sense
because
any
of
those
machines
conserves
reads
in
or
any
other
moves
machines
can
take
rights.
With
raft
the
mutability,
the
mutation
only
happens
on
the
master
and
serving
reads
off
the
master.
Is
you
know
it's
pretty
limited
when
you
actually
do
that?
H
So, as a result, the overhead on each system is generally like one-third versus an actively replicated system, and frankly, it's way easier to implement a raft-based system than a single-decree Paxos system. Having seen how these things are implemented and tested, I basically don't trust that anyone has actually ever done it right, period.
H
So
another
reason
why
raft
is
like
a
lot
better
than
Java
is
what
skeletor
is
pointing
out
being
in
the
Java
ghetto
like
for
building
distributed
systems
is
basically
the
worst
thing
in
the
world.
Even
if
you
ignore
you
know
the
long
GC,
pauses
and
just
random
pauses,
that
Java
makes
which
will
cause
failure,
detection
and
recovery
to
be
slow.
We
can
get
the
way.
That
is
a
problem.
Basically,
what
a
persistence
layer
is
is
the
thinnest
wrapper
around
unix
system
calls
that
you
can
make
that's
literally
what
that
thing
at
the
bottom
does.
H
So if you have a language like Java, which can't really make system calls in any reasonable way, and you have to go through this high level of abstraction, now you have a high-level abstraction making system calls in the thinnest thing you can make, which is not a good situation to be in. So the real reason why Rust ends up winning here, versus all the other systems languages, isn't just the things that we're all proud of, not just the GC stuff.
H
It's
just
you
know,
what's
going
on
under
the
hood,
if
we
were
to
imagine
this
and
like
a
language
like
java
script
or
even
a
language
like
go,
arguably
people
could
say:
oh
I
know,
what's
going
on
in
the
hood,
I
can
kind
of
trace
what's
going
on,
but
you
really
can't
write
like
those
go.
Libraries
that
are
doing
system
calls
they're
all
unsafe.
It's
not
a
language
set
up
to
allow
you
to
use.
H
As
you
know,
there's
that
unsafe
keyword,
that's
there,
you
know
rust,
is
fundamentally
providing
the
right
level
of
protection
for
those
types
of
system
calls
while
not
getting
in
your
way.
So,
for
instance,
like
the
mio
library
that
we're
they
were
just
talking
about,
it's
really
like
a
very
thin
layer
around
epoll.
So,
ok
anyway,
so
that's
kind
of
some
high
level
stuff
I
wanted
to
get
into
like
a
real
system.
H
So
I'll
tell
you
about
this
is
my
built
and
what
we
did,
but
first
to
do
that
I'm
going
to
have
to
tell
you
a
little
bit
about
HBase.
So
this
is
what
H
basis
architecture
looks
like.
The
interesting
thing
is
on
the
bottom.
We
have
this
HDFS
thing,
which
is
that
Hadoop
file
system
and
that's
where
all
the
data
goes,
and
then
we
have
the
zookeeper
thing,
which
is
another
multi
packs
of
space
system
similar
to
raft
and
that's
where
the
HBase
system
stores
kind
of
what
the
state
of
the
system
is.
H
So
when
a
server
goes
down
its
zookeeper,
that
is
used
to
alert
the
system
that
it
has
to
do
some
type
of
cluster
repair
on
the
other.
Interesting
thing
about
this
slide
is,
if
you
look
at
the
three
components
in
the
region:
server,
homm
store
and
right
ahead.
Log
we're
going
to
quickly
talk
about
those,
but
basically
this.
H
These
are
the
main
components
of
this
HBase
database
and
where
data
is
stored
and
basically,
we
have
a
right
ahead
log
that
normally
writes
all
of
the
RPCs
coming
into
the
database
on
to
HDFS
every
one
small
just
like
we
were
talking
about
with
rust.
Oh
and
sorry,
it
writes
the
right
head
log
and
and
writes
the
entries
to
this
in
memory
store.
Eventually,
the
mem
store
fills
up,
so
we
take
a
snapshot
and
write
it
to
an
h-file
just
like
we
snapshot
in
raft.
H
So,
if
you,
this
is
actually
a
Hadoop
vendor
bragging
about
how
complicated
the
stack
is,
but
the
the
if
you
look
at
this
bottom
portion
down
here
with
the
file
system
and
the
no
sequel
and
the
all
that
stuff
I
counted,
there's
like
something
like
12
components
like
12,
different
damon's
that
you
have
to
run
actually
do
that.
So
we
are
big
goal,
was
to
kind
of
simplify
that
bottom
layer
because
well
besides,
Samuel
Jackson
would
shoot
me
for
a
12
components
towards
layer.
H
Like
I
said
that
thing,
that's
responsible
for
getting
your
bites
on
to
disk
should
be
kind
of
the
minimum
most
manageable
service
that
one
really
can
make.
So
that's,
basically
everything
we're
going
to
talk
about
now
is
how
we
made
HBase
simpler,
using
raft
to
do
that.
So
one
thing
is,
as
we
pointed
out,
there
is
this.
So
this
is
the
data
tracing
through
HBase,
so
you
can
see
a
client
writes
data,
some
cluster.
It
gets
sent
to
one
of
the
H
region
servers.
H
So
a
region
is
a
portion
of
a
table
so
that
region
server
figures
out
where
the
mutation
should
be
applied
to
what
region
and
then
that
data
gets
written
to
a
men's
store
and
then
eventually
this
H
log
thing.
So
when
the
mem
store
gets
full,
it
writes
it
out
to
a
store
file
in
HDFS,
and
we
said
oh
well,
that
kind
of
sucks,
because
if
I
can't
write
to
HDFS
and
my
server
crashes,
so
we
said
okay
well,
let's
just
write
these
to
local
disk.
H
For
now,
we're
already
gonna
have
to
have
local
district
raft
anyway,
and
eventual
will
get
it
to
HDFS,
but
we'll
hide
that
from
the
user,
then.
Finally,
this
H
log
thing
that
right
ahead
log
is
normally
general
de
chiefess,
that's
literally
where
our
raft
quorum
wet.
This
is
where,
like,
instead
of
writing
to
a
right
ahead
log,
a
reading
sir,
would
receive
a
write
and
write
it
to
a
set
of
rap
cohorts.
When
the
cohort
said
it
was
durable,
it
was
just
like
written
to
a
lock,
so
that's
the
only
architectural
change
that
we
made.
H
Oh, and this is the part where I do compaction, right, and then I go through raft compaction; but that's different from this notion of compaction up here, where I was flushing the MemStore to the HFile and all of that type of stuff. What I eventually realized, and this is an important lesson when learning to use raft, is that when you first start using raft, it's really easy to build your whole system around it and architect your code around raft's notion of compaction and raft's notion of snapshots.
H
In
fact,
we
even
architect
at
our
code,
around
rafts
notion
of
leader,
not
realizing
that
there's
a
big
difference
between
a
node
thinking
that
it's
a
leader
and
a
node
actually
being
a
leader
which
is
kind
of
an
intricate
thing.
To
talk
about
here,
but
what
we
realized
is
that,
let's
see
here,
see
I
think
I
skipped
a
slide
there
we
go.
H
We only care about the final state of the database. So for us, what we were able to do is, instead of building complicated membership logic to change membership, and instead of focusing on how we take the in-memory state and snapshot it out, what we tried to focus on is: how do we write out, in essence, a columnar file, and compact, completely externally from the raft system itself? So our raft implementation really didn't need membership changes.
H
There's another thing. When we first designed our system, we kind of designed it around raft, and that was the major thing, and then someone said: well, I think what we want to do is have multi-row transactions, and we know this works because Spanner did it, and how hard could that be? Google isn't that smart, right? And then, when I read the paper for the third time, and ended up talking to some Spanner engineers,
H
What
I
realized
is
is
that
whenever
they're
doing
anything
complicated,
it's
all
two-phase
commit
under
the
hood
anyway,
I
don't
know
if
you
guys
remember,
two-phase
commit
they
make
the
big
problem
with
two
phase
commit.
Is
that
there's
some
coordinator?
He
needs
to
be
made
reliable.
Somehow
I
think
in
Google's
case.
They
then
shove
that
reliability
back
on
to
their
multi
pack,
so
system,
which
we
kind
of
think
of
like
doing
like
graph,
but
that's
probably
not
the
best
way
to
do
it.
H
It's
definitely
the
most
expedient
way
to
do
it,
but
then
the
meta
lesson
here
is
that
you
know
just
like
rat.
Your
system
isn't
raft
it
uses
raft.
Even
if
your
system
heavily
uses
Raph
to
store
data,
it's
probably
not
the
end-all,
be-all
consensus
algorithm
for
all
time.
Actually
two
phase
commit
is
way
better
for
a
whole
bunch
of
things
and
I'm
sure
Diego
would
be
glad
to
tell
you
how
to
integrate
two-phase
commit
with
raft.
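For anyone who doesn't remember two-phase commit: the coordinator first asks every participant to prepare, and only if all of them vote yes does it tell them to commit. A toy, in-memory sketch follows; it is purely illustrative, since real 2PC needs durable logging and a reliable coordinator, which is exactly the problem described above.

```rust
// Toy two-phase commit, in memory only. Illustrative: real 2PC must log
// decisions durably and survive coordinator failure, which is the hard part.

#[derive(Debug, PartialEq)]
enum Vote {
    Yes,
    No,
}

trait Participant {
    /// Phase 1: can you commit this transaction?
    fn prepare(&mut self, txn: &str) -> Vote;
    /// Phase 2: the coordinator's decision.
    fn commit(&mut self, txn: &str);
    fn abort(&mut self, txn: &str);
}

struct Node {
    accepts: bool,
    committed: Vec<String>,
    aborted: Vec<String>,
}

impl Participant for Node {
    fn prepare(&mut self, _txn: &str) -> Vote {
        if self.accepts { Vote::Yes } else { Vote::No }
    }
    fn commit(&mut self, txn: &str) {
        self.committed.push(txn.to_string());
    }
    fn abort(&mut self, txn: &str) {
        self.aborted.push(txn.to_string());
    }
}

/// The coordinator: all participants must vote yes, or everyone aborts.
fn two_phase_commit(participants: &mut [Node], txn: &str) -> bool {
    let all_yes = participants
        .iter_mut()
        .all(|p| p.prepare(txn) == Vote::Yes);
    for p in participants.iter_mut() {
        if all_yes { p.commit(txn) } else { p.abort(txn) }
    }
    all_yes
}
```

The single point of failure is plain in the code: `two_phase_commit` itself must not crash between the votes and the decision, which is why systems like Spanner back the coordinator's decision with a consensus group.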
H
So that's that. Finally, moving forward: oh yeah, one more thing. There's a database called Treode that uses something called mini-transactions, which is probably worth checking out. Anyway, I actually think Skeletor is not overstating his case here.
H
Almost
all
of
the
performance
guarantee
advantages
we
had
overstock
HBase
had
to
do
solely
with
our
our
serialization
and
RPC
story.
In
fact,
if
I
could
give
Dan's
talk
over
again,
I
probably
would
because
those
concerns
pretty
much
dominated
everything
and
we
need
way
better.
Abstractions
here,
I've
been
trying
to
work
on
a
library
called
eventual
there's
some
stuff
with
co
routines.
H
The
scary
part
about
this
is
the
story
here
in
Java
is
way
better
than
in
rust,
which
is
actually
quite
surprising,
because
we
should
be
able
to
smash
the
hell
out
of
any
type
of
performance
you
get
with
Java.
The
reality
is,
if
you
like,
throw
some
Neddy
and
some
like
jet
Lang
at
some
at
code.
You
know
it's
pretty
easy
to
reason
about.
What's
going
on
the
codes,
not
very
difficult
and
it'll
go,
you
know
not
at
the
theoretical
speed
of
the
hardware,
but
pretty
close.
H
Nonetheless,
you
know
I
think
you
know
even
from
basic
calculations
as
the
size,
the
payloads
is.
We
have
get
smaller.
Our
advantage
should
increase
more
and
more,
but
I
mean
I
overall,
when
building
a
distributed
system.
We
really
need
to
get
these
fundamentals
right.
We
need
to
make
them
a
lot
better.
One
other
thing
that
I
wanted
to
talk
about
really
quickly
is
basically
all
of
operating
system
design,
which
means
all
of
database
architecture
is
about
to
be
thrown
on
its
head.
H
So
that's
the
other
thing
that
we
really
need
to
think
about,
basically
with
non-volatile
dims
and
all
this
type
of
thing,
which
it
sounds
like
a
non
sequitur.
But
this
is
actually
maybe
core
to
how
databases
are
designed.
You
should
definitely
check
out
the
work
in
the
linux
kernel
around
Dax
and
xip,
but
basically
on
almost
all
contemporary
databases
in
the
last
15
years
are
designed
around
the
page
cache.
H
If
you
look
at
like
MongoDB
and
all
these
guys,
they're,
basically
leveraging
a
map
for
fun
and
profit
and
that
whole
strategy
of
database
architecture
is
going
out
of
the
window
and
we
need
new
answers
there.
So
listen
to
the
baby.
So,
on
the
last
bit
you
know
I
just
wanted
to
point
out.
You
know
I've
been
kind
of
quickly
as
quickly
as
I
can
trying
to
cover
these
kind
of
huge
topics
and
what
I,
I
think,
the
overarching
point
of
the
the
talk
is.
H
Is
that
it's
these,
like
these
sharp
edges
and
details
that
are
going
to
dominate
your
system
right.
So
when
you
first
start
off
developing
a
distributed
system,
you
start
drawing
on
a
whiteboard
and
you
draw
boxes
and
you
draw
arrows
to
the
boxes
and
then
eventually
you're
like
perfect
system,
and
you
end
up
with
something
like
like
that:
I
guess.
But
the
reality
of
the
situation
is
that
really
all
that
matters
is
iam
is
the
the
details
of
the
implementation.
H
H
Unless you really think, "okay, I've got the architecture of my system, I've kind of drawn out in my head how all the code works", and then really figure out in detail how it all fits together; because if you just take a raft server and say, "I cast raft on my replicated state machine, and now I have a reliable system", I don't think you're going to get ideal performance. That's not saying it's not an important tool; it's just that we shouldn't stop there.
H
You
know
raft,
is
creating
the
resurgence
and
a
capability
for
us
to
use
these
tools,
which
is
awesome,
but
we
really
need
to
understand
them,
and
the
real
nice
thing
about
raft
is
that,
theoretically,
you
can
understand
it
or
at
least
reach
some
level
of
detail
and
I.
Think
that's
one
of
the
reasons
why
I
found
it
be
so
important
anyway,
thanks
to
the
ramp.
A
Okay, all right! Well, thank you so much! This was great; you all are amazing and beautiful people. Thank you so much for speaking. I believe we have the space for a little bit, so feel free to hang out, have some more drinks, mingle, talk. If you are on the Internet, I'm sorry, but hang out on #rust; they're all amazing and beautiful people there too. So anyway, thank you so much for coming, and have a wonderful night. All right, good night, see you next time.