From YouTube: Rust and Tell Berlin - April 2020
Description
https://berline.rs/2020/04/28/rust-and-tell.html
Rust & Tell Berlin, the monthly event to share ideas and learn about new things in and around Rust, went fully online for the first time.
#1 00:05:44 - Dev Diary: Writing a Clipboard Manager with Rust by Tymoteusz Jankowski
#2 00:22:27 - Project Spotlight: Maelstrom Matrix Server in Rust by Chris Bruce
#3 00:47:46 - Artillery: Fire-forged Cluster Management & Distributed Data Protocol by Mahmut Bulut
Bonus: 01:22:58 - Ryan explains `ManuallyDrop`
Code example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=8004f1ac04b47c6db4ad12af74782a06
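The playground link above accompanies the `ManuallyDrop` bonus segment. The exact snippet isn't reproduced in this description, but a minimal sketch of what `std::mem::ManuallyDrop` does (suppress a destructor until you choose to run it) looks like this; the `Tracked` type and drop counter are illustrative, not from the talk:

```rust
use std::mem::ManuallyDrop;
use std::sync::atomic::{AtomicUsize, Ordering};

// Count how many times `Tracked`'s destructor actually runs.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Tracked;

impl Drop for Tracked {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn drops_so_far() -> usize {
    DROPS.load(Ordering::SeqCst)
}

fn main() {
    {
        let _plain = Tracked; // plain value: dropped at end of scope
    }
    assert_eq!(drops_so_far(), 1);

    {
        let _wrapped = ManuallyDrop::new(Tracked); // destructor suppressed
    }
    assert_eq!(drops_so_far(), 1); // still 1: the wrapped value leaked

    let mut m = ManuallyDrop::new(Tracked);
    // Opt in to running the destructor exactly when we choose.
    // SAFETY: `m` is never used again after this call.
    unsafe { ManuallyDrop::drop(&mut m) };
    assert_eq!(drops_so_far(), 2);
}
```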
A
And I'm going to assume that Zoom is working right now, if anybody can give me a thumbs up. So, it's working, yes, thumbs. We have some thumbs, so we're gonna get started. Welcome everybody to another edition of Rust and Tell, the second one that we've done online, and we are enjoying this format, despite the reason why we have to do it. It was a lot of fun last time and we expect it to be a lot of fun today as well.
A
I assume we have people in Berlin and also maybe people joining from outside of Berlin. So if you are joining us for the first time at Rust and Tell Berlin, because you're not normally here, then welcome, it's great to have you; and of course, to all of our regular visitors, welcome back. Real quick about ourselves: I'm Ryan Levesque!
B
Yeah, and I'm the co-host. I'm a software engineer; I've been doing Rust for a year now, and I'm self-employed. So here we go. I am always happy to talk about Rust, so maybe approach me on LinkedIn or on other boring social media sites. Yeah, happy to have you here, welcome.
A
And a little bit, for those who have not joined us before, about what Rust and Tell is all about. This is a meetup mainly about Rust, but really we want anybody, from beginner to expert, to come and share struggles, ideas, hacks, projects, funny stories, anything and everything related to Rust. No matter how beginner or advanced you are, it doesn't matter.
A
We want to hear from you. Really, we are trying to build a space for a community to learn and grow together. Everybody should feel welcome; if you don't feel welcome, please let us know, and we will try our best to make sure that we improve in the future. And really, what that means at the end of the day is that we want you to come speak here at Rust and Tell, and of course now, with us being online for the foreseeable future, it might be even easier than before.
A
We really say we want you, because we want each and every one of you to feel comfortable enough to speak here. It doesn't matter how often you use Rust, or how long you've been using it for. You might think that you're a total newb; well, we all are, so that's totally fine. Come tell us about your experiences. We really want to hear from you.
B
We follow the Berlin Code of Conduct, which might not be the worldwide code of conduct. What it basically means is: we want to have a nice time here. There is no reason to harass anyone, or to get harassed, or to complain about unsafe code or newbie code or whatever you think it is. This is just a space to have fun and to exchange. We all use our free time here to hang out and chat, and to be like a better version of ourselves after this group meeting.
B
So if you feel like you don't fit in here because you're not advanced enough, that shouldn't be the case, and if you feel like you're getting harassed or anything, please contact us. We take this very seriously. So, anything you don't like about this meetup, please approach us and we will deal with it.
A
We have three talks tonight that we will be going through. This reminds me as well: we are recording these talks, and so it might happen, if you're in the Zoom tonight and you put on your camera, that you might be recorded. So if you don't want that, make sure not to turn on your camera, or you can watch from YouTube, or whatever you want to do.
A
Just so everybody is aware. So, the three talks tonight that we have: the first one will be from Tim, who will be talking about writing a clipboard manager in Rust; then we have Chris, talking about a Matrix server; and then Mahmut will be back again, talking about a cluster management and distributed data protocol called Artillery, written in Rust. So we should have a good night, and if you have any questions, feel free to write us in the chat and we will make sure to help you out. So thank you, everyone, and let's get started with the first talk.
C
Okay, okay, so hello everyone. First I'd like to thank you for making this event online; I really think we need more of this in the future. So thanks. Okay, now to the talk. Today's talk is a dev diary: writing a clipboard manager in Rust, and here's the agenda. First I talk about the goals for a clipboard manager, then I go through the process of picking a GUI crate for it, then I describe a few encountered issues, and lastly a few random slides.
C
You can have three approaches to that. One is the native approach, the second one could be cross-platform toolkits, and the last one is the Rust-centric approach. I briefly described all of them, and I picked the Rust-centric approach. So what is the Rust-centric approach all about? You write Rust code, but the outcome application looks a bit different from the rest of the platform.
C
Okay, so once I picked the approach, I had to pick crates: GUI crates for Rust. I found these ones, and after a brief review of all of them, I found that I would like to work only with druid or iced, mostly because of the activity in those projects. So I checked the GitHub activity, which looks like this for Conrod and the rest of the projects.
C
So at this stage I played around with druid and iced. Both APIs look very convenient, and I had a really good time working with them. They are also quite similar, but I finally decided on druid, because it seems to me more like a team project, instead of a one-man project like iced. Okay, so no matter what I told you so far, I recommend you do your own evaluation in case you want to write a program, because I'm a bit biased.
C
So my choice was druid, but yours may be not. Okay, so let's talk about the encountered issues. Shortly after I picked druid, it turned out that it doesn't support tray applications, which is usually how you write a clipboard manager. But it's not a big deal, because I thought that I could build that part on my own in the program, because a tray program consists of, you know: it runs in the background, it allows you to register a hotkey, and it supports a tray icon.
C
I don't care about the tray icon, and for hotkey registration there are also crates, so this should be okay. So my concept for the program would be something like this: first you register a hotkey handler, which is responsible for showing the GUI, and in a loop you record the clipboard content. Okay, so let's start developing, and we've got roadblock number two, so yeah.
C
It turns out that the launcher, which is a druid thing, couldn't be run twice or more, and you actually run it when you need to initialize your window, so that happens quite often. So it's kind of a problem, but let's see what else we have. Then it quickly turned out that the hotkey registration crate doesn't support key combinations, and it also doesn't support Wayland, which was my assumption at the beginning.
C
Okay, but it's not a problem again, since there is another crate, inputbot, and thankfully it works pretty well. I mean, it solves the problems of the previous one, but it requires sudo to execute. Actually, Wayland requires sudo to execute that, but yeah, okay, it's not a problem; we can live with it and probably maybe fix it one day. Then I found this one, and it was a bit tough, because it turns out that X11 clears the clipboard...
C
...when the owning program is closed. I mean, let's say you have, I don't know, a Firefox window; you select a text there, you copy it to the clipboard, and then you close the Firefox window. The clipboard item is cleared, and yeah, so this is another problem, and it occurs when you want a user to select an item from the history. Okay.
C
And at the end, I found that there is some kind of inconvenience when you try to use it in the GNOME environment, because GNOME prevents windows from popping up into focus. But it's minor; you can work around it with, I don't know, the Steal My Focus extension, for example. Okay. So at this stage I started to get a little depressed, because I thought that this would be a weekend project, for example, and everything around me told me that I would have to fix the ecosystem around it first.
C
So, revisiting my initial concept: instead of putting everything into a single process like before, I thought that maybe I could split it into two processes. One process would be responsible for recording clipboard content, and the second process would be responsible for showing the GUI. And how does it work? Yeah, it works pretty well, because we don't need all the mentioned...
C
We skip all the encountered problems, so this is great. For example, why don't we need hotkey registration? We can delegate that to, for example, the window manager or desktop environment, which has built-in hotkey registration, so why not use that? Okay, so a quick recap: I was able to deliver an MVP, I published the code on my GitHub, and I learned a lot; for example, that it's always good to reuse.
C
Yeah, so in the project there is a to-do list, so I'm not sure. I haven't really explored other clipboard managers, I don't know, because there are plenty of them on the market. So I don't know, I'm not sure; I'm open to any features. So the MVP works, I use it, and about screenshots, right: actually, I'm using this clipboard manager on my computer right now, so I could even show it, yeah.
E
You may ask: what is Matrix? We have it going today here, which is cool, but Matrix actually is a messaging platform, and it's decentralized. So right now it uses federation to share messages between what they call home servers. They are doing a lot of work on p2p, so you actually can run a home server in WebAssembly and be a complete node yourself, and so there is not even a federated server that you need to connect to. In that case, you connect to other p2p peers and exchange messages. And it has just enabled end-to-end encryption.
E
One of the coolest things that really attracted me to Matrix was bridging. It has a lot of support for it, and I think maybe this is what it was originally designed for: to take a lot of different, disparate networks of messaging platforms and combine them all into one. So there are bridges for WeChat, there are bridges for Discord, Slack, you name it. It's a pretty interesting concept, and then, when you really start to look at some of the p2p features down the road, it really amounts to something pretty neat.
E
Also, it's open. I don't know about you, but I have SMS, I have iMessage, I have Discord, I have Slack, multiple Slacks, I have email; I have so many streams of messages coming in, and most of those platforms are all closed. And it really sucks that I have people on WhatsApp, which I hate, and I want to move them over to Signal, but it's a platform switch for everybody. So I'm really hoping that in the future we get maybe something like Matrix, where it is an open standard.
E
It is not centralized, and the open protocol enables a bunch of bridging to work. So I think this is one of the things: if you think about the internet, email is very federated, which is a pretty good system, but you see the centralization of it. And a lot of the things about Matrix are really this kind of Web 3.0, if you will, which is more of a peer-to-peer, decentralized web, and so I really think that we need to get back to that.
E
So if you look at the Matrix stats (this is a bit of an old slide that I picked up somewhere), right now they have about 14.7 million global accounts. They do five million messages a day. They have 4.3 million chat rooms; there are forty thousand federated servers, which is, yeah, kind of mind-blowing. They do about 3,500 messages out per second, and 500 projects are developing with Matrix.
E
So if we look at the landscape, there are quite a few home servers, and in Matrix land the home server is essentially kind of where your account lives: you create an account there, and then it federates. So if I'm on one home server called matrix.org, and you're on a home server called, you know, rust-and-tell.org, and we want to exchange messages or share the same chatroom...
E
They also have SQLite; it's built on the Twisted framework, and, you know, lately I think they've had a lot of scaling issues, especially with the COVID stuff going on; they've seen just a huge increase in usage, and, you know, when you're on the matrix.org home server, you can actually feel the slowness. You know, the Python reference implementation, Synapse, has been built up from day one, so it's a pretty large code base, and it's evolved over time. So I think, you know, as a developer...
E
You
always
know
you
build
at
once
the
first
time
it's
kind
of
packed
together,
but
you
always
know
that
you
can
do
it
so
much
better.
If
you
build
it
a
second
time
and
so
that's
what
they
started
to
do,
they
started
to
build
a
more
scalable
server
called
dendrite,
which
is
I.
Think
you're
also
seeing
a
theme
here
with
dendrite
synapse
those
types
of
things,
but
a
dendrite
is
a
go
server
or
threatenin
go
it
uses
Postgres
in
sequel,
it's
a
very
heavy
microservices
architecture.
E
In fact, you know, all the major services, like chat rooms, room management, user management: they're all separate Postgres databases altogether, and so, you know, it's been kind of designed from the ground up as a very sort of microservices architecture. It's a bit complex. It's still under heavy development; it's not ready for any kind of use yet, and they've actually been using it a lot more as a playground for hacking on things. So a lot of the p2p stuff is kind of hacked on there.
E
You
know
it's
built
on
iron,
it's
not
asynchronous.
Obviously,
a
relatively
new
feature.
You
know
it's
just
dated
a
little
bit
and
they
also
built
a
lot
of
heavy
modules
that
sort
of
abstract
a
lot
of
stuff
away.
So
they
have
this
whole
complex
kind
of
micro
or
macro
language
to
build
API
endpoints,
and
things
like
that-
which
is
probably
good
in
some
case.
E
E
Again, it's brand-new, just a couple weeks old; it's a very hackish approach. So it's very much about kind of just making things work, not a whole lot of structure yet; it's evolving. It's pretty cool to see. It's kind of a one-guy show that's gotten a lot of traction recently, but again, I think, a heavy use of the existing Ruma modules is kind of what this whole thing's built on.
E
So there's a little bit of legacy there, maybe a lot, but a little bit. And so: Maelstrom. So obviously, I'm building my own home server in Rust; I just started the project a couple weeks ago. It's written in Rust, trying to be storage agnostic, so we actually have a storage trait; we want to be able to plug in anything: Postgres, SQLite, sled. That's a lot of the sort of trait design we are really thinking about.
E
You
know
you
don't
necessarily
need
a
micro
service
if,
if
you
really
wanted
to
abstract
away
certain
services
into
different
storage
structures,
like
maybe
Kafka
or
something
like
that,
you
could
just
create
a
new
storage
tray,
because
our
thinking
again
we're
right
at
the
beginning
of
this.
So
a
lot
of
architectures
has
kind
of
been
thought
about,
but
you
know
I
think
a
lot
of
stuff
alone
covers
would
go
it's
built
around
acting
sweb.
You
know,
I'm
glad
the
project
came
back
from
what
looked
like
certain
death.
It's.
E
It is still fast, super fast, and when I did some benchmarking against Rocket and warp, it's still like two to three times faster. And I know, you know, benchmarks... The web framework is probably not going to be the hot path in this thing, but I think, as you'll see, everything that we're trying to do is really performance focused. Again, this project is just a couple weeks old.
E
You know, there's a lot of emphasis on code architecture and just a good structure around how we're laying this code out, and a lot of heavy performance considerations. So you'll see that our project goals are really about a fast and scalable solution for a large user base. What I would really like to see is Rust replacing some of these, you know, performance-critical areas, and so I've used Rust servers for a lot of things.
E
You know, in sort of a performance-critical area, I'm just blown away by how scalable it can be without any effort, and how minimal the resources it uses are. You know, things like doing 6,000 requests a second on a $15 DigitalOcean box, while it only uses, you know, maybe 10 megs of RAM: it just blows my mind. I personally have done back-end development for a long time, and so I think it's really super cool that, with little effort, you can build some very highly scalable...
E
...low-overhead systems. So, the project goals: obviously fast, I think that's a big thing. Storage agnostic: again, I think we should be able to plug in SQLite, sled. So, one of the other contributors, they have a self-hosted server; basically, it's a box that you plug in and it encrypts everything; it runs like a blockchain if you want. And so their goal is to run this on kind of Raspberry-Pi-equivalent hardware. So we're really looking at, you know, sled or SQLite on an embedded platform, or Postgres or other things...
E
...when you want to scale it up on, say, a Kubernetes cluster. Efficient: again, I think when you develop for the cloud, you can get into this mindset that you don't have to be that efficient, but those costs add up, and so it may be a bit of a hindrance. But, you know, everything that we're really focused on is efficiency, so you'll see a lot in the code that we use Cow, copy-on-write semantics, and we do a lot of, you know, borrows when we pass things around.
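The Cow (copy-on-write) pattern mentioned here can be sketched briefly; the `normalize` helper below is a hypothetical example of the technique, not code from Maelstrom:

```rust
use std::borrow::Cow;

/// Hypothetical helper illustrating copy-on-write: normalize a Matrix-style
/// user ID to lowercase, allocating only when a change is actually needed.
fn normalize(id: &str) -> Cow<'_, str> {
    if id.chars().any(|c| c.is_ascii_uppercase()) {
        Cow::Owned(id.to_ascii_lowercase()) // had to copy and rewrite
    } else {
        Cow::Borrowed(id) // already normalized: zero allocation
    }
}

fn main() {
    let cheap = normalize("@alice:example.org");
    let copied = normalize("@Alice:example.org");
    assert!(matches!(cheap, Cow::Borrowed(_)));
    assert!(matches!(copied, Cow::Owned(_)));
    assert_eq!(copied, "@alice:example.org");
}
```

The payoff is that the common case (input already in canonical form) passes straight through as a borrow, which is exactly the kind of allocation-avoidance a performance-minded server wants on its hot paths.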
E
So, you know, for me, as someone who hasn't done a whole lot of high-performance Rust, it's a great project, because, you know, one or two of the other folks that are working on it are very performance-minded; they've done a lot of stuff, so I'm learning a lot. And I think this is one of the things that attracted me: what does high-performance Rust look like on a back-end service? So it's pretty cool. Again: clean, non-legacy code architecture.
E
That's really what we're looking for: trying to not just take what's out there and implement it, but really look at making this something that's easy to maintain and that hopefully welcomes new users. And then, you know, call me crazy, but I've done a lot with IoT in the past, and I think that there's something here. So Matrix had this concept of kind of being something for IoT; you literally see IoT in some of the documentation. It really hasn't panned out.
E
But when you start to look at kind of this p2p aspect of it, I think there is some benefit there in the way that the message passing works, and the way that they build a DAG to sort of merge different messages on different home servers. So it's partially an interest here; I don't know if it'll ever pan out, but kind of one of the things I'm thinking towards is, you know, something that may be good as, like, a distributed p2p MQTT-type replacement.
E
So let's just have a quick look at the code. Let me share my screen here with the code, and I'll just give you a quick overview of the project. I won't go too deep, but I think you can see what we're really starting out to do here. I hope everybody can see this. So again, this is obviously the GitHub; I'll just give you a high-level thing.
E
You would basically just call models, if you will, and render the output. Models here are... again, this is a new product, so we're still pretty early, but models here, you know, obviously represent sort of the data structures in the system. There are a lot of data structures, and there are a lot of endpoints; you know, the REST interface for a Matrix server is huge, so it's a lot of work to do. And the DB: what I think is kind of cool is just the storage trait that we've done.
E
You know, we've basically created a very simple store; you know, we basically wrap the postgres or SQLx crates for the Postgres side. You can pretty much do anything with a storage trait. What's nice about the store trait is that we have these, you know, sort of simple methods, like, you know, get a device, set a device, things like that.
E
You know, you create a structure, and then, right before your unit test, you just set the value that you want returned, you know, your expected return value from the handler, and it makes it really nice to be able to mock this out. And so, from the get-go, since we're doing the store trait, it makes it pretty easy. What else can I say here?
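The workflow described here (a storage trait plus a stub that returns a preset value before each unit test) can be sketched roughly as follows; all names (`Store`, `Device`, `get_device`, `device_name`) are illustrative assumptions, not Maelstrom's actual API:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Device {
    id: String,
    display_name: String,
}

// Handlers depend on this trait, not on Postgres or SQLite directly.
trait Store {
    fn get_device(&self, id: &str) -> Option<Device>;
}

/// Test double: yields whatever the test configured, no database needed.
struct MockStore {
    device: Option<Device>,
}

impl Store for MockStore {
    fn get_device(&self, _id: &str) -> Option<Device> {
        self.device.clone()
    }
}

/// A handler written against the trait works with any backend, or a mock.
fn device_name(store: &dyn Store, id: &str) -> String {
    store
        .get_device(id)
        .map(|d| d.display_name)
        .unwrap_or_else(|| "unknown".to_string())
}

fn main() {
    let mock = MockStore {
        device: Some(Device { id: "dev1".into(), display_name: "laptop".into() }),
    };
    assert_eq!(device_name(&mock, "dev1"), "laptop");
    let empty = MockStore { device: None };
    assert_eq!(device_name(&empty, "dev1"), "unknown");
}
```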
E
All right, so let's see. So, yeah, help: I mean, part of my whole thing is, I think it's a great time if you ever want to get into an open source project, if you want to get into, you know, API or server-side development. It's a great project. It has a great spec already written; they have complete OpenAPI docs; it's a well-documented spec, so it's pretty easy to implement. We're trying to be as beginner friendly as possible, so we have a Matrix channel.
E
I spent a lot of time just helping people try and get through their first commit. There's a ton of work; again, there are a lot of endpoints, so there are lots of things to pick off. I think I copied all the REST endpoints over to issues in GitHub, and it's like a hundred endpoints. So lots of easy things to pick off, and it's brand-new, so I think it's kind of cool: you know, it's hard to come into a project that has been established and make a, you know, meaningful dent.
E
You know, the barrier is a little bit higher, but I think with a brand-new project, it's great, because there's so much stuff to do; you get to be involved in the architecture. So I'm hoping that some of you will be interested in checking this out, and you can check it out on GitHub at maelstrom-rs/maelstrom, and then you can reach me here. My company is hiring: if you're in Germany and you want a Rust job, feel free to reach out to me; that's my email and Twitter handle. Questions?
E
You know what you need to implement, and the models that are returned from the storage trait, or that are sent to the storage trait, are there. And so, because we define the whole interface between what the storage layer needs to get and set and the rest of the application, it becomes pretty simple. And then, what we do in the beginning is look at the connection string, and then we initialize the correct storage. So in your initialization function, you create whatever struct or structure you need for this.
B
Okay, cool, and we have a few more questions. I will grab them first from the Riot chat. So, someone asked for the project URL; maybe you can post it afterwards again in Matrix or in the Zoom chat. And then the next question is: is the Matrix protocol all REST, or is there some WebSocket front-end?
E
No, all REST, surprisingly; very simple. And for some of the p2p stuff, they're looking at doing it over CoAP, which would be something new, very lightweight, but it's all REST. The hard part really is in the DAG, the version control, if you will. So most of the systems use, you know, a directed acyclic graph, sort of very much like git, but all these home servers are keeping track of their local copy of the chatroom, and so that's all merged. So that is really the hard part, and they do it quite well.
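The DAG-based merge described in this answer can be illustrated with a toy model; the `Event`/`merge`/`tips` names are invented for the sketch, and real Matrix state resolution is considerably more involved:

```rust
use std::collections::BTreeMap;

// Toy model of the room DAG merge: each event names its parent events, and
// two home servers' local copies are merged by taking the union of events
// keyed by event ID. This only shows why a DAG (not a linear log) is needed.

#[derive(Clone, Debug, PartialEq)]
struct Event {
    id: String,
    prev: Vec<String>, // parent event IDs
    body: String,
}

fn merge(a: &[Event], b: &[Event]) -> BTreeMap<String, Event> {
    let mut dag = BTreeMap::new();
    for e in a.iter().chain(b.iter()) {
        dag.entry(e.id.clone()).or_insert_with(|| e.clone());
    }
    dag
}

/// Forward extremities: events no other event points at, i.e. the DAG's tips.
fn tips(dag: &BTreeMap<String, Event>) -> Vec<String> {
    dag.values()
        .filter(|e| !dag.values().any(|o| o.prev.contains(&e.id)))
        .map(|e| e.id.clone())
        .collect()
}

fn main() {
    let root = Event { id: "1".into(), prev: vec![], body: "create".into() };
    // Both servers saw the root, then each appended a message concurrently.
    let a = vec![root.clone(),
                 Event { id: "2a".into(), prev: vec!["1".into()], body: "hi".into() }];
    let b = vec![root,
                 Event { id: "2b".into(), prev: vec!["1".into()], body: "hey".into() }];
    let dag = merge(&a, &b);
    assert_eq!(dag.len(), 3);
    // Two tips remain: the fork that state resolution would have to reconcile.
    assert_eq!(tips(&dag), vec!["2a".to_string(), "2b".to_string()]);
}
```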
E
They do a long-poll process, so it's still GET requests; it's not WebSocket, and, you know, it's pretty responsive; it looks good. So it's very simple. That's the beauty of the REST interface: it's super simple. This is why I think it's a good project for anybody that wants to get involved in something: it's not complex; you know, it's pretty simple and straightforward. Okay.
E
Okay, so the first question: so, Conduit. Number one was sled, because I think they kind of wanted a batteries-included, sort of turnkey thing that somebody who just wants to run their own home server can use. And again, you know, the guy that does it is super cool; he and I, you know, chat a lot, and, you know, he's just looking for kind of a hacked-together system. He relies heavily on Ruma, which I think is good.
E
It's been a lot of work evolving those libraries, and we're more of a, you know... we're trying to appeal to both sort of a large-scale implementation as well as an embedded solution, so again, I think we want to be agnostic. Sled looks super cool, but it's not 1.0 yet, and, you know, I think we'll implement a storage trait for sled kind of when we get a little bit more solidified on the data structures.
E
Now, in terms of Ruma: yeah, we are trying to use a limited part of Ruma, and we've been working with the maintainer of Ruma, which is this guy Jonas, a super, super cool guy that's doing a lot of work to maintain it, and we try to give back. And so, because we're performance minded, we've been trying to push back some sort of things, like, you know, using Cow, or copy-on-write, for some of those things. I'm trying to stay away from some of the macro stuff. The macro stuff is really neat.
E
It'll save you a lot of code when you're implementing API endpoints, but it's also kind of black magic, in a way, the way it's structured; there's kind of a lot of stuff that's hidden from you. And not that that's a bad thing, but what we're trying to do is make even our API layer, or the server side, able to be easily replaced if we want, and so, you know, we're just trying to make it more accessible.
B
Amazing, this was really, really interesting. There are a few more questions in Zoom, but, I think, yeah, there are three more; maybe you have time to answer them in Zoom. Like, there's somebody asking about no_std and no-alloc for embedded; maybe you can give a quick answer there, and then you can answer the rest in Zoom if you want to. Okay.
B
Okay, people, we have a little break now, and we have the tradition that we break out into breakout rooms. So we will have, em, six rooms with, em, ten people each, and you will get assigned to a random room. In case you want to talk and chat with people, feel free to activate your phone or your, em, camera; if not, just don't join the room, or if the room doesn't talk, that's fine too. But I think Jan-Erik is going to create the rooms now and will assign you to a random one.
B
Thank you for sticking around. Based on the number of participants, we are all back; maybe we lost two people. That's a good ratio. I think we are ready, right, Ryan? I think so, yes, cool. And someone said the breakout rooms were too short; that's a good sign, I guess. We can try next time to make them a bit longer. Also, feel free to exchange Zoom credentials, and then you don't even need to hang out afterwards in the Zoom rooms; and we'll try to make them a bit longer the next time.
D
Okay, thanks, cool. So today I'm going to talk about... ah, first: everyone should be okay, and everyone, I hope, is feeling good and safe and healthy at home. And today I'm going to present Artillery. This fancy name is just from the description of the repository, but it's basically for the things that I need at work, and also things that I need to use for...
D
...distributed storage systems. So, who am I? I'm a data processing engineer by day, currently working on experimental software, and I am currently lead of the localization team in the Rust community. And with that, let's start. So, distributed systems: yes, this is the topic of today, and everybody talks about distributed systems. This is very important.
D
Everyone
to
give
you
some
ideas
about
it
and
everybody
has
some
knowledge
about
it,
but
is
it
easy
so
making
distributed
systems
are
not
semi
hard.
From
my
point
of
view,
it
is
hard,
so
this
is
by
definition.
It
is
like
that
everybody
talks
about
it.
So
what
I
can
say
is
that
we
are
going
to
go
through
from
ground
up
how
we
do
that.
How
can
you
do
the
service
discovery?
How
can
it
form
the
clusters
after
that
and
then
do
the
data
replication?
Are
you
rotate
or
application,
which
is
basically
distributed?
D
So this is the first thing I need to give before diving into service discovery; let's talk about this a little bit. How can I share the configuration? When we talk about the configuration of instances: how do we share it, how do we discover the network topology? Most of the time, what we do is we have a central shared state store. We build up some kind of central shared state store; you call it Consul, you call it something else...
D
You
call
it
etcd
and
you
basically
deploy
it
as
with
all
other
services
together
in
the
same
network,
and
you
actually
register
these
instances
or
the
service
descriptions
to
this
key
value.
Store
this
central
service
change,
the
configuration
or
network
topology
in
the
local
link
at
the
same
local
network
and
be
writing
a
web
api
with
hash
table.
Lookup,
so
it
is
basically
right.
All
these
centralized
services,
like
I,
mean
the
previous
slide.
I.
Think
Reese
said
that
the
future
is
as
decentralized
I
totally
agree.
This
is
correct.
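The centralized pattern being described (register instances in a key-value store, discover them with a hash-table lookup) boils down to something like the toy registry below; the names are illustrative, and a real deployment would use Consul or etcd rather than an in-process map:

```rust
use std::collections::HashMap;

// Toy version of the "web API with a hash-table lookup" the slide pokes fun
// at: a central registry mapping service names to instance addresses.
#[derive(Default)]
struct Registry {
    services: HashMap<String, Vec<String>>, // service name -> instance addrs
}

impl Registry {
    /// An instance announces itself to the central store on startup.
    fn register(&mut self, service: &str, addr: &str) {
        self.services
            .entry(service.to_string())
            .or_default()
            .push(addr.to_string());
    }

    /// Clients discover instances with a plain hash-table lookup.
    fn discover(&self, service: &str) -> &[String] {
        self.services.get(service).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let mut reg = Registry::default();
    reg.register("storage", "10.0.0.5:5432");
    reg.register("storage", "10.0.0.6:5432");
    assert_eq!(reg.discover("storage").len(), 2);
    assert!(reg.discover("missing").is_empty());
}
```

The speaker's objection is precisely that this store is a single, centralized dependency that every service must reach, which is what the zeroconf approach below avoids.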
D
What we are doing most of the time in production is basically deploying some kind of small service that is actually doing some kind of configuration distribution. This is wrong, again, in my opinion; these are all subjective opinions, in my eyes; this is my opinion. And it feels like every single deployment pulls in something like this; it feels like they just deploy a hash table. And then, do you think that bringing this ancient tech back into the foreground is a good thing?
D
So all the distributed systems engineers looked at that, so they changed it a little bit, put some mesh on top, and they are serving it to your face. So this is how it is going to work. So, zeroconf is the thing: 20-year-old stuff. Yes, nearly, and maybe more than that. The network is unreliable; planet-scale systems are not built for one network; they're based on rendezvous. And I think rendezvous is a good thing on the local network, and...
D
...zeroconf in combination with networks is a good thing also, and service discovery should conform to rendezvous style, the rendezvous mentality; that form is the good form, I would say. So how does it work? There is something called the Bonjour protocol. Let's say an instance is serving at 1337, and it also speaks a protocol that some other server serves too. In addition to the Bonjour protocol, they are actually in the local area network, and they are saying: yeah, I mean, I'm serving this, I'm serving these services, and stuff like that.
D
I mean: this is my IP; you already know it from the destination and source and stuff. So, if your software is aligned with this: let's talk on that protocol. This can be broadcast, from one server to all the servers, or in UDP style: anycast, from one server to one server, directed, or hop-by-hop based. This can work quite well without any configuration server that is running around.
D
So where was this thing used? From ubiquitous computing to wireless sensor networks. We are not using it that much in clouds. Why? Because we are using something called Kubernetes, and I put an asterisk there, because Kubernetes' DNS, service meshes and stuff like that are handling these kinds of things. I mean, you might not need that. But let me tell you again: DNS is also kind of centralized, even if nobody wants to admit that this is the thing. But if you still want full-blown zeroconf...
D
...then use Kubernetes with the host network. This is just a single line; it should just work. For example in Amazon, in AWS, there is something called a CNI driver, so you can deploy the CNI driver, and with the CNI you can deploy that zeroconf network inside EKS and stuff like that. On hosts, you can use the host network to send broadcast messages.
D
The host network is basically there for that, and you can share the network topology without an intermediate third-party server or funky daemonized web applications. Yeah, like I've said before, I call them funky daemonized web applications; I don't think this is needed, and it shouldn't be needed at all. So at this point, when we come to that point and we know how this thing works and how the configuration is distributed, we can say that we established a network at the link-local level, so in one single data center.
D
An amount of nodes is already in an established kind of mesh, as they call it nowadays. So this is the good part of it: we know that this is one single data center. If you want to go across data centers, you need data center discovery. If you don't care about the broadcasts, or if you want to do some kind of optimization for lookups, you can use a DHT.
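A DHT-style lookup can be sketched with a consistent-hashing ring: nodes sit at hashed positions, and a key is owned by the first node clockwise from its hash. This is a minimal illustration using the standard library's hasher, not Artillery's actual implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

/// A toy consistent-hashing ring (illustrative, not Artillery's code).
struct Ring {
    positions: BTreeMap<u64, String>,
}

impl Ring {
    fn new(nodes: &[&str]) -> Self {
        let positions = nodes.iter().map(|n| (hash_of(n), n.to_string())).collect();
        Ring { positions }
    }

    /// A key is owned by the first node at or after its hash,
    /// wrapping around to the start of the ring if necessary.
    fn lookup(&self, key: &str) -> &str {
        let h = hash_of(&key);
        self.positions
            .range(h..)
            .next()
            .or_else(|| self.positions.iter().next())
            .map(|(_, node)| node.as_str())
            .expect("ring is empty")
    }
}

fn main() {
    let ring = Ring::new(&["10.0.0.1", "10.0.0.2", "10.0.0.3"]);
    // The same key always resolves to the same node, with no central server.
    assert_eq!(ring.lookup("user:42"), ring.lookup("user:42"));
    println!("user:42 -> {}", ring.lookup("user:42"));
}
```

The point of the structure is the lookup optimization mentioned in the talk: any node can locate a key's owner locally, without asking a coordinator.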
D
So this is one of the papers, one of the most prominent papers, about using a DHT to discover the other data centers and their topology. And this is actually a good thing, because at the data-center level, I mean at the cloud level, it's nice; it's not privacy-endangering, I would say.
D
This actually makes everything testable, and it also kind of helps me to find out IPs in commodity-hardware deployments and stuff like that. So at that point we have the IP information, I would say, and I intend to form a cluster. So I need to form something that makes nodes communicate: the upper-level protocol that goes up to the application level is kind of a necessity. So, to form clusters, I need something.
D
I have some requirements in my mind when I'm doing this. It should be very flexible, so that every single user who develops some kind of queries, you know, a database system, could also use this; or some kind of peer-to-peer application, or whatever you want to call it, that is also a good thing that can use it, I think. Forming a cluster and receiving membership events is needed to make the communication medium for nodes: members can go down, members can join.
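The membership-events requirement can be sketched as a small event type plus a fold over the event stream. The names here are hypothetical, chosen for illustration; they are not Artillery's actual API.

```rust
use std::collections::HashSet;

/// Illustrative membership events a cluster layer might emit
/// (hypothetical names, not Artillery's actual API).
#[derive(Debug, Clone, PartialEq)]
enum MembershipEvent {
    Joined(String),
    Left(String),
    Down(String),
}

/// Derives the current live view of the cluster from a stream of events.
fn apply_events(events: &[MembershipEvent]) -> HashSet<String> {
    let mut members = HashSet::new();
    for e in events {
        match e {
            MembershipEvent::Joined(n) => {
                members.insert(n.clone());
            }
            MembershipEvent::Left(n) | MembershipEvent::Down(n) => {
                members.remove(n);
            }
        }
    }
    members
}

fn main() {
    use MembershipEvent::*;
    let view = apply_events(&[
        Joined("node-a".into()),
        Joined("node-b".into()),
        Down("node-a".into()),
    ]);
    // Only node-b survives: node-a joined and then was marked down.
    assert!(view.contains("node-b") && !view.contains("node-a"));
}
```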
D
Also, it is needed for database systems, so lookups for replicas and things like that. Heartbeat messages between the instances and stuff like that are nice, for any other service too, actually, not only data services or data systems, to be honest. So the protocol should be flexible. I mean, in the paper it says that there are four states, and the replication...
D
The replicated state machine has four states to actually distribute the work, but it's not like that in reality. Everything in academia is like pink glasses: everything is beautiful, but in reality it's not like that. So you need to alter the existing paper implementation to conform to the actual use cases. So probably one of the users will come and say that they want to send some kind of custom packages, custom messages. This should allow that, and it does allow that; this was a requirement. And the network shouldn't be congested with packets.
D
Broadcast storms, congestion and all the other things should be considered during the design, and I need fewer packets and more work. So this is related to congestion: bounding the broadcast traffic, so it should actually slow itself down at some point. So I was thinking about what the algorithm can be, what the protocol can be, how I can implement this, and I said: epidemic. It's very well known; everybody knows it, actually.
D
Let's call it gossip, because you can say it is a rumor-mongering protocol. This is the thing that actually helps you to form the cluster and get notified of the membership changes in the network and stuff like that. So I implemented this as a fully customized epidemic-based membership, and I also implemented user-defined events, so that you can send any payload, up to your router's fragmentation scheme, to other orchestration systems.
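The epidemic dissemination idea can be simulated in a few lines: each round, every node that knows the rumor pushes it to one random peer. This is a toy model of gossip, assuming a tiny deterministic PRNG so the sketch needs no external crates; it is not Artillery's membership implementation.

```rust
use std::collections::HashSet;

/// A tiny deterministic PRNG so the sketch needs no external crates.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self, bound: usize) -> usize {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 33) as usize) % bound
    }
}

/// Push gossip: every informed node tells one random peer per round.
/// Returns how many rounds it took until all `n` nodes heard the rumor.
fn gossip_rounds(n: usize, seed: u64) -> usize {
    let mut rng = Lcg(seed);
    let mut informed: HashSet<usize> = HashSet::new();
    informed.insert(0); // node 0 starts the rumor
    let mut rounds = 0;
    while informed.len() < n {
        let speakers = informed.len(); // snapshot at the start of the round
        for _ in 0..speakers {
            informed.insert(rng.next(n));
        }
        rounds += 1;
    }
    rounds
}

fn main() {
    let rounds = gossip_rounds(64, 42);
    // Epidemic dissemination typically converges in roughly O(log n)
    // rounds, which is why it stays cheap as the cluster grows.
    println!("64 nodes informed after {} rounds", rounds);
    assert!(rounds >= 1);
}
```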
D
So maybe you know that I am also working, with a bunch of cool people, on Bastion, and you can pick it up with Bastion: you can just put it inside Bastion, and you have an underlying, fully reliable, AP (available and partition-tolerant) cluster, with custom message send events. And you don't need to call someone when it fails, because you wrote the code that is there, and it's not going to crash down. That is the good part of it.
D
So when I rewrite it, I think that I should make it well tested, because I haven't used it in an actual workload, and I need to do more checking there, with model checkers and possible worst-case scenarios. One of the things that I'm looking for is that Monte Carlo methods and stuff like that will check the system, and I'm working on that: I check the worst-case scenarios, not the best-case scenarios. I mean, stochastic methods are shiny. Meanwhile, I released kaos.
D
It's a testing harness called kaos; that's how "chaos" is spelled in my language. Somebody had already taken the "chaos" crate name, so I think that was the reason I put it in my language. This enabled Artillery to be tested: it just randomly injects faults at points that you define, and these are the very important points that might actually crash the system.
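The fault-injection idea can be sketched generically: a fail point that deterministically fails some calls, used to exercise the retry/error paths of an operation. This is a hypothetical illustration of the technique, not the `kaos` crate's API.

```rust
/// A hypothetical fault-injection point (illustrative; not the `kaos`
/// crate's API). It fails deterministically based on a call counter,
/// so a test can exercise the error path on purpose.
struct FailPoint {
    calls: u32,
    fail_every: u32,
}

impl FailPoint {
    fn new(fail_every: u32) -> Self {
        FailPoint { calls: 0, fail_every }
    }

    fn check(&mut self) -> Result<(), &'static str> {
        self.calls += 1;
        if self.calls % self.fail_every == 0 {
            Err("injected fault")
        } else {
            Ok(())
        }
    }
}

/// A store operation instrumented with a fail point, plus the retry
/// loop that the injected faults are meant to exercise.
fn store_with_retry(fp: &mut FailPoint, retries: u32) -> Result<&'static str, &'static str> {
    for _ in 0..=retries {
        if fp.check().is_ok() {
            return Ok("stored");
        }
    }
    Err("gave up")
}

fn main() {
    let mut fp = FailPoint::new(2); // every 2nd call fails
    assert_eq!(store_with_retry(&mut fp, 3), Ok("stored"));

    let mut always_failing = FailPoint::new(1); // every call fails
    assert_eq!(store_with_retry(&mut always_failing, 2), Err("gave up"));
}
```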
D
So this thing is also open sourced; meanwhile I worked on that and then published this crate.
D
Broadcasting is not working on some stacks, to be honest; that was one of the reasons, I think, especially at the local level on some setups, but yeah, I mean, I overcame that with an anycast method. So this is from one of the old videos. As you can see, our membership events are being sent to each other, and sometimes a random increment of the membership states is pushed down, forming...
D
...the clusters, like that. Ultimately it comes down to the mentioned membership level, and in the future, distributed actors become just this: they exchange messages. And one of our members in Bastion, I just learned, wrote some kind of binary serialization in another language, so I mean, if he wrote this...
D
So next up was the data replication. I just passed over it, but this is the part that actually got funky. I implemented this in two weeks, and yet another fresh protocol is coming up for data replication; I'm just going to present two protocols at the same time here that are going to do data replication. So we formed the cluster; right now everybody knows everyone in this local network.
D
Now we come to the point where we need to share, like, huge data or objects: big blob objects. And we need to work on these things, read and write. So we start again with some requirements: nodes are aware of each other, they are doing rumor mongering, the network is in place, the topology is in place, like I told you before. We need to replicate data between the nodes.
D
Let's have strong consistency, please, because people tend to want strong consistency, and they are using Raft, the strong consensus, as consistency for data replication. That is yet another topic; I'm going to come back to it: don't use consensus for that. It isn't meant for data replication, but just for agreement. That is a whole other thing to learn. Read strategies should be one-or-all style, maybe. One downside is the cost against the other, consensus-based approaches, but this actually amortizes it, I think.
D
The protocol I'm going to mention soon is likely to replicate state machines; the protocol should have lightweight replicated state machines, and logs should be ordered. This was what I wanted to have; we probably don't need old-style total orders. And everybody keeps saying: yeah, I mean, yeah, Raft. Yeah, but it's not for data replication. Are you saying Paxos? But again, not for data replication. No, not for that. It's just for consensus: you want to have agreement on a value. It's not for replicating a value.
D
With those, you are just replicating a log; you are doing something else over it. So both of them are not actually meant for actual data replication. And when I think about what workload I'm probably going to get, I researched it a little bit more, and most of the workloads are mixed workloads: 60% reads, 25% updates, 10% inserts and 5% scans. This comes from the Yahoo! benchmark (YCSB) and its definition for Cassandra and Cassandra-likes.
D
That was the mixed workload. I read a bunch of papers between these two slides, and seeing this graph, it says that even for ten times more update operations, the throughput doesn't change that much; in a sense, that was actually expected, given the percentages below. So I am like, okay, so most of the workloads are read-only, read-mostly, sorry, and it's hard to make things like read-mostly workloads available with low latency.
D
In these circumstances, especially in an asymmetrical geographical distribution, this is what I discovered, and then I am like: oh yeah, chain replication, best thing ever. And yeah, it's probably been there for a long time. Oh god, I mean, somebody has probably implemented this already, no? Yeah.
A
D
It's basically what you see here. On the left-hand side is normal, ordinary chain replication: queries are run at the tail, and updates arrive at the head of this linked list of a node chain. The last mode is the apportioned-queries mode, the CRAQ mode. Yeah, I mean, I'm not endorsing anything; its name is just CRAQ.
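The basic chain-replication shape described above can be sketched in a few lines: writes enter at the head and propagate down the chain; reads are served by the tail, so a read only ever sees fully replicated data. This is a toy illustration, not Artillery's implementation (and it omits CRAQ's per-node clean/dirty versioning).

```rust
/// A toy chain-replication sketch (illustrative only, not Artillery's
/// code): each node in the chain holds a copy of a single value.
struct Chain {
    nodes: Vec<Option<String>>,
}

impl Chain {
    fn new(len: usize) -> Self {
        Chain { nodes: vec![None; len] }
    }

    /// Writes enter at the head, then push node-by-node to the tail.
    fn write(&mut self, value: &str) {
        for node in self.nodes.iter_mut() {
            *node = Some(value.to_string());
        }
    }

    /// Reads go to the tail: the last node in the chain. By the time the
    /// tail has a value, every node before it has the same value, which is
    /// what gives chain replication its strong consistency for reads.
    fn read(&self) -> Option<&str> {
        self.nodes.last().and_then(|v| v.as_deref())
    }
}

fn main() {
    let mut chain = Chain::new(3);
    assert_eq!(chain.read(), None);
    chain.write("blob-v1");
    assert_eq!(chain.read(), Some("blob-v1"));
}
```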
D
So I implemented this in two weeks; I have implemented that, done. And it has a very good read throughput in the single-threaded synchronous workload. Right now it is lock-free except for one lock that I need to remove, which I actually forgot about in the implementation. So it is doing what it's supposed to do, and it has Rust's safety, concurrency and speed, the best combo, as I realized one more time, because it was a neat implementation: I just wrote the wire format and then just slammed in the things that I needed to do.
D
And right, I made backups where needed for the state machine, for the checkpointing; these are the things that I still need to do. I think when I have time I will do that, and benchmarks. This is the synchronous one. I mean, I hate benchmarks; every single time the topic of Rust comes up, there are benchmarks. You can see a bunch of other graphs about the benchmarks of this replication algorithm on my blog.
D
So, to sum up: Artillery is a state-of-the-art, I think, going-to-be distributed data library, and it is becoming that. It is not a blockchain library or a networking library; it has fewer dependencies than you can imagine; it exploits the latest research techniques and other modern approaches designed for database systems and peer-to-peer systems and stuff like this, mostly data systems, to be honest. You could say you cannot make pure distributed databases with the existing tooling out there in Rust. It's planet-scale, because this is the algorithm that Google uses, that Riak uses, that Heroku uses, and stuff like this.
D
So these are some of the hardcore algorithms and protocols out there, blazingly fast, let's say. If you're interested, read Artillery's in-progress documentation; whenever I have time, I'm trying to write it. You can contribute to the development; there are identified issues, and as soon as I have time I can open some issues, organize my mind and open them. You can also sponsor me; please sustain the open-source work. And thanks for listening to me.
D
For artillery-core, I didn't upload the version yet; I'm going to upload it as soon as I have time, so just pull it from the Git repo. Everything is separate, even service discovery: you don't need to use, for example, the service-discovery parts. All these cluster-forming protocols are separate modules, so they are not dependent on each other. There's nothing coming up as a dependency to your system: there's crossbeam, nothing more. Obviously, a couple of pieces: kaos and Bastion. Okay, yeah.
B
D
For this also, it is a problem, because DHTs, as you know, for privacy-related things, are not a very good thing. So a Mallory, or, for the security engineers out there, some Eve, like an eavesdropper, can actually listen to the network, join as a fake node and do some bad shenanigans. Yes, that is possible. You could check for fake entries, or sign the zeroconf packets; that might work, but yeah, I didn't do that.
B
D
I'm a hundred percent sure that DNS service discovery is going to be something like that, and that production deployment is much more reliable than anything else. But data systems, most of them, if you know some companies, or these orchestration schemes, behave most of the time like a StatefulSet or a DaemonSet, so I mean they are mostly affined to the node.
A
I'll share it real quick, just for posterity's sake. Here it is; it's a very long link, sorry for that, but hopefully those that are in the chat and stuff can just copy-paste it. If you're watching this later on, then, well, hopefully we'll paste it in the description of the video, wherever we post it. Okay, let me stop sharing that and move on over to here.
A
All right, can everybody see my screen? Hopefully everyone sees the playground. Awesome. So I've been playing around with a bunch of unsafe code lately, and I learned an interesting thing about dropping that I thought I would talk about briefly with everybody. This is a super contrived example, and the first thing we should talk about is: if you don't need to use unsafe code, don't use it. So hopefully you come away from this example learning a little bit more about drop, but at the end of the day, this particular example...
A
...you would never actually do it this way in real life, because there's no need to do it this way, but I couldn't come up with a better example that didn't require ten minutes of explanation. So what we're going to be doing today is looking at the structs Foo and Bar here, and Foo and Bar are both structurally equivalent to each other. So Foo is just composed of these two numbers here, u8s.
A
So when these things are destroyed, when they're dropped, they're going to print out that they're being dropped. And the first one that we're going to look at real quick is this normal function: we create a Foo, we create a Bar, and then we're going to see in what order they get dropped.
A
So if we execute that real quick, then you can see down here at the bottom, we're calling normal, and it drops Bar and it drops Foo. And if we look at normal there, you can see they get dropped in reverse order. So the first lesson for today is that, depending on the order in which things are instantiated in a function, they will be dropped in the reverse of that order. So we created Foo first and then we created Bar, and Bar is dropped first and Foo is dropped second: they get dropped in reverse order.
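The reverse drop order described here can be reproduced in a few lines. The struct names match the talk; the rest is a minimal reconstruction of the playground example, using a shared log instead of println! so the drop order can be checked programmatically.

```rust
use std::cell::RefCell;

// A shared log so the drop order can be observed programmatically.
thread_local! {
    static DROP_LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Foo(u8, u8);
struct Bar(u16);

impl Drop for Foo {
    fn drop(&mut self) {
        DROP_LOG.with(|l| l.borrow_mut().push("foo"));
    }
}

impl Drop for Bar {
    fn drop(&mut self) {
        DROP_LOG.with(|l| l.borrow_mut().push("bar"));
    }
}

fn normal() {
    let _foo = Foo(1, 2);
    let _bar = Bar(3);
    // Locals are dropped in reverse declaration order: bar first, then foo.
}

fn main() {
    normal();
    let log = DROP_LOG.with(|l| l.borrow().clone());
    assert_eq!(log, vec!["bar", "foo"]);
}
```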
A
C
A
Awesome, we've learned that today. So now we're going to look at something else here: this forget function. What forget does is it creates a Foo and then it transmutes the Foo into a Bar. If you're not familiar with std::mem::transmute_copy: basically, what it does is you pass in a reference to something, and it just says, hey, that thing you passed me...
A
...is now this other thing. It basically memcopies the bytes onto the stack, so effectively what it does is it takes all the bytes in memory that compose that Foo, and it just says: those bytes, I'm going to copy them over and treat them as if they were a Bar instead. And because Foo is just two bytes, and Bar is a u16, they're the same thing.
A
We don't want it to drop. Some situations where you might run into this: let's say you're implementing your own smart pointer or something, and you want that smart pointer to move from being a smart pointer to one thing to being a smart pointer to another thing, so you need to transmute it. But let's say you're doing reference counting or something like that, something that runs every time...
A
...it's not the wrong thing here, because all we're doing is printing out something, but you can imagine having complex logic in your drop function, where you really only want to call the drop logic once per logical value. So what we're going to do here is create the Foo, then the Bar, and after we create the Bar, we call std::mem::forget, and that says: hey, don't call the drop implementation on the Foo; just forget about it, forget that it ever existed.
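The effect of std::mem::forget can be observed with a drop flag; this is a minimal sketch in the spirit of the talk's example, not the playground code itself.

```rust
use std::cell::Cell;

// Counts destructor runs so forgetting can be observed.
thread_local! {
    static DROPS: Cell<u32> = Cell::new(0);
}

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

/// mem::forget takes ownership and never runs the destructor.
fn forget_one() {
    let v = Noisy;
    std::mem::forget(v);
}

/// For contrast: letting a value go out of scope does run Drop.
fn drop_one() {
    let _w = Noisy;
}

fn main() {
    forget_one();
    assert_eq!(DROPS.with(|d| d.get()), 0); // destructor never ran
    drop_one();
    assert_eq!(DROPS.with(|d| d.get()), 1); // normal scope exit ran it
}
```

Note that `mem::forget` itself is a safe function: leaking is not considered memory-unsafe in Rust.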
A
We change the Foo into a Bar, and we just forget that the Foo ever existed. So when we run this, it's not actually going to drop the Foo, and you can see here: calling forget, we only drop Bar. Foo is just forgotten about and never gets dropped, and that's really great.
A
You might see this commented-out thing here where we panic; just keep that in mind. There's another way to implement this: instead of using std::mem::forget, you can do the same thing using std::mem::ManuallyDrop, and what that does is it's just a simple wrapper around things, and it says: don't automatically drop this thing when it goes out of scope...
A
...I will tell you exactly when to drop it, or maybe I won't tell you at all. So we wrap the Foo here in a ManuallyDrop and we never drop it, and so effectively these two things are equivalent to each other. And if I go up here and run manual_drop instead of forget, we should see the same thing: we're only calling drop on Bar. So we've done the same thing: we've forgotten about Foo, we've changed it into a Bar, and we only drop Bar.
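The ManuallyDrop behavior can be demonstrated the same way; again a minimal sketch in the spirit of the playground example, not the playground code itself.

```rust
use std::cell::Cell;
use std::mem::ManuallyDrop;

thread_local! {
    static DROPS: Cell<u32> = Cell::new(0);
}

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

/// Wrapped in ManuallyDrop, going out of scope does NOT run Drop.
fn scope_without_drop() {
    let _v = ManuallyDrop::new(Noisy);
}

/// When we do want the destructor, we invoke it ourselves. This is
/// unsafe because calling it twice would be a double drop.
fn explicit_drop() {
    let mut w = ManuallyDrop::new(Noisy);
    unsafe {
        ManuallyDrop::drop(&mut w);
    }
}

fn main() {
    scope_without_drop();
    assert_eq!(DROPS.with(|d| d.get()), 0); // never dropped
    explicit_drop();
    assert_eq!(DROPS.with(|d| d.get()), 1); // dropped exactly when asked
}
```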
A
So, okay, why do I care? What's the difference between these two? They can be used interchangeably, right? If I just want to not run the destructor for one thing, I can use std::mem::forget, or I could use ManuallyDrop. Well, they're subtly different in different ways, and it turns out that most of the time you probably want to prefer using ManuallyDrop, and the reason for that is panics.
A
So we're going to go back up here and run forget again, and this time we're going to panic right after we transmute to Bar. One thing that you need to know about panics is that, when something panics, it unwinds the stack, and what that means is it calls the destructor for all the local variables in your stack frame. So when we panic here, we never reach...
A
...this forget call; we're going to run the destructor for Foo and Bar. So let's run this, calling forget again, and you can see Bar and Foo both get destroyed. So, you know, if there were some kind of logic in there that required that we only run the destructor once per logical value, we're out of luck here.
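The interaction between panics and forget described above can be observed directly with std::panic::catch_unwind; a minimal sketch, not the playground code.

```rust
use std::cell::Cell;
use std::panic;

thread_local! {
    static DROPS: Cell<u32> = Cell::new(0);
}

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

/// Panics before a mem::forget call could ever run. Unwinding the
/// stack runs the destructors of locals that are still in scope.
/// Returns true if the closure panicked.
fn panicking_forget() -> bool {
    panic::catch_unwind(|| {
        let _v = Noisy;
        panic!("boom"); // the forget call "below" is never reached
    })
    .is_err()
}

fn main() {
    assert!(panicking_forget());
    // The destructor ran during unwinding, even though the code path
    // intended to forget the value: this is the hazard with mem::forget.
    assert_eq!(DROPS.with(|d| d.get()), 1);
}
```

With ManuallyDrop, the same panic would leave the destructor un-run, trading a possible double drop for a possible (safe) leak, which is the point the talk makes next.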
A
With ManuallyDrop, like this, we're only going to drop Bar. And why is that? Because we basically said, up here, that we just want to forget about Foo, and we've done that even before we've gone ahead and done the transmute step, so it doesn't matter. Now, what's the downside of this way? Well, we might leak memory: we've only called the destructor for Bar here; we haven't called the destructor for Foo.
A
Maybe that's what we want in this situation, but if instead we panicked right here, then we would potentially leak the memory for Foo: we would panic right here, Foo is wrapped in this ManuallyDrop, we unwind the stack, but the destructor, the drop implementation for Foo, won't get called. And if that's responsible for cleaning up memory or something like that, we're out of luck, and Foo will never be cleaned up. And for some things...
A
...that's not what we want. But at the end of the day, leaking memory, or, you know, consuming more memory than we need to, is unfortunate but safe, whereas running destructors twice is potentially unsafe, and so that's why you most likely want to prefer ManuallyDrop here. And I'll just end with the fact that if none of that made sense to you, or if this seems horribly confusing, that's fine: this is unsafe code, and unsafe code is supposed to be hard. So just don't use unsafe code.
A
By the way, I'm happy to answer questions on that; I don't think we'll take questions in the chat, but it was really great having the three speakers today. So thank you to all of our speakers, thanks again to John Davic for helping out with everything, thank you to my co-organizer Bastian for all of his hard work as well, and we really appreciate everybody showing up today. Any last words, Bastian? Yeah.
B
I'm happy with how all this goes. I'm really looking forward, now, to hosting it here each month for the next while, and it's also fun to have more people, not just from Berlin but from all over the place. So thank you all for, um, joining, and if you have any suggestions for how we can make this even more fun and interesting, please feel free to reach out.