Description
Originally recorded during the Lisbon Hack Week from May 21-25, 2018.
A: Good morning. I'm Ali Shoker; I am one of the co-authors of delta-based CRDTs and pure operation-based CRDTs, and I'm based in Braga. We work together at INESC TEC, which is an associate laboratory, a research lab spanning a lot of disciplines like security, distributed systems, formal methods, robotics. We are around 1,000 researchers, and our group is in Braga. We are devoted to developing pure operation-based CRDTs and delta-based CRDTs, covering all the things you have mentioned: garbage collection, efficiency, migration, and so on. My talk today will be on as-secure-as-possible eventual consistency. Basically, I'm going to focus on eventual consistency using CRDTs to reconcile conflicts, made as secure as possible using Byzantine fault tolerance.
A: You probably know it's become a buzzword recently because of blockchains. I was raised in the BFT community; I did my PhD on it in 2009. My talk will only touch upon the system: I'm going to describe an overview, since it's quite complicated to cover in 15 minutes. First, I should make some acknowledgments.
This work has been funded through three projects, including LightKone, a European project, and TEC4Growth, a national project. And these are my co-authors on this work: Houssam Yactine, who is my PhD student, and Carlos Baquero, whom you know from CRDTs.
So, just to motivate the talk: we all see these kinds of messages when systems fail, but the point is that this is no longer acceptable in this era. We can't accept this kind of thing, and we should know the reasons behind failures in order to fix them.
A: Okay, so I think that many of the engineers in industry don't know, or didn't know, about Byzantine fault tolerance. I think this has started to change, but previously they didn't, and I think we need this kind of culture, to educate engineers about Byzantine fault tolerance, because systems will break if we don't consider this fault model. So, who is familiar with Byzantine fault tolerance?
A: So let me try to describe what the problem is and what I will be proposing as a solution. First, let's assume this setup, which is an eventually consistent system. We have a couple of servers, eventually consistent replicas, communicating through reliable causal broadcast between them. Clients usually access a single node and get their reply immediately, without synchronizing with the other nodes, and the replicas synchronize in the background.
A: This is the CRDT model. Now, in this kind of setup, I would say that no one was claiming the system will converge no matter what; it will not. It depends, actually, on the fault model that you are using. Currently, all these systems assume the crash-recovery fault model. But suppose, for example, that these servers are not applying the operations correctly, whether the operation is an increment, a decrement, an add, a remove, or a merge. If a server is not behaving well, you will never reach convergence, okay?
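To make that convergence failure concrete, here is a toy sketch (our own illustration, not code from the talk): two replicas deliver exactly the same operation history, but one of them is Byzantine and mis-applies operations, so the states diverge even though broadcast delivery was perfect.

```python
# Hypothetical sketch: a correct replica and a Byzantine replica apply
# the same delivered log of counter operations, yet never converge.

def apply_correct(state, op):
    """A correct replica applies operations faithfully."""
    kind, value = op
    return state + value if kind == "inc" else state - value

def apply_byzantine(state, op):
    """A Byzantine replica may do anything, e.g. flip increments."""
    kind, value = op
    return state - value if kind == "inc" else state + value

log = [("inc", 5), ("inc", 3), ("dec", 2)]  # same history at both replicas

honest, faulty = 0, 0
for op in log:
    honest = apply_correct(honest, op)
    faulty = apply_byzantine(faulty, op)

print(honest)  # 6
print(faulty)  # -6
# Same delivered history, divergent states: convergence fails under
# Byzantine faults even with reliable causal broadcast.
```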
A: So what we are doing is integrating this with a BFT cluster. We create a BFT proxy on each server, and we push the history of operations to the BFT cluster to validate it and get a certificate to send to the client. So what we are doing now is certifying the history that is common across all these servers. In this way, everything we add works in the backend.
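As a rough sketch of that backend idea (the function, data layout, and quorum rule here are our own assumptions for illustration, not the actual protocol code), the cluster can certify the longest log prefix on which a quorum of replica logs agrees:

```python
# Hypothetical sketch: certify the common prefix of the replicas' logs.

def certified_prefix(logs, quorum):
    """Return the longest prefix agreed on by at least `quorum` logs."""
    prefix = []
    for depth in range(min(len(log) for log in logs)):
        candidates = [log[depth] for log in logs]
        # The entry certified at this depth must appear in a quorum of logs.
        entry = max(set(candidates), key=candidates.count)
        if candidates.count(entry) < quorum:
            break
        prefix.append(entry)
    return prefix

logs = [
    ["op1", "op2", "op3"],
    ["op1", "op2", "op4"],   # this replica diverges at depth 2
    ["op1", "op2", "op3"],
]
print(certified_prefix(logs, quorum=3))  # ['op1', 'op2']
```

Only the certified prefix would be covered by the certificate sent to clients; the divergent tail stays uncertified until the replicas catch up.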
A: We are not creating any consistency at the front level. So I'm going to give a brief overview of Byzantine fault tolerance and eventual consistency and how they work (I will be very fast here, because you know them), then our protocol, which we call ByzEc (though we hope to find a better name), and then I'm going to show the trade-offs, the discussion, and future work.
A: So what is Byzantine fault tolerance? The Byzantine fault model is actually the strongest fault model, because a Byzantine player can do anything you could imagine. Okay, so if I own a node, I can tamper with its memory, remove things and add things, and I can even behave correctly. The approach usually followed by Byzantine fault-tolerant protocols, and this is historical, since 1982 or even before, since Lamport's paper, is to use state machine replication with majority consensus.
A: It's a similar kind of thing to Paxos, if you know Paxos. The idea is to have a set of replicas, for example three or four replicas for each datum, and then you guarantee that the write quorums and read quorums overlap. This is actually what is correct in similar systems like Paxos or Raft, but with Byzantine faults you need the intersection to contain correct members at any time. It's not enough to have a single node in the intersection.
A: That's why you usually see in the proofs that, if you assume you have f Byzantine servers, you will need 3f+1 servers. So for one faulty server you need a total of four servers to tolerate that one fault. The very famous protocol is PBFT, developed by Castro and Liskov around 2000, and there are a lot of protocols besides it (I developed other protocols myself), but this is really the most robust and famous one, and it was the first one.
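The 3f+1 arithmetic can be checked directly (a small illustration of ours, not from the talk): with quorums of size 2f+1 out of n = 3f+1 replicas, any two quorums overlap in at least f+1 replicas, so even if all f faulty replicas sit in the overlap, at least one member of the overlap is correct.

```python
# Minimal check of the 3f+1 quorum-intersection bound.

def min_intersection(n, quorum):
    # Worst case: the two quorums overlap as little as possible.
    return 2 * quorum - n

for f in range(1, 5):
    n = 3 * f + 1          # total replicas
    q = 2 * f + 1          # quorum size
    overlap = min_intersection(n, q)
    assert overlap == f + 1  # overlap always exceeds the f faulty nodes
    print(f"f={f}: n={n}, quorum={q}, min overlap={overlap}")
```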
A: Actually, you can consider it the first seminal practical work; it's the first and, in a way, the last. Regarding the challenges in this model, why is it different from other models? Because it's impossible to distinguish between a Byzantine node and a slow node, and you can imagine how much that matters on an Internet platform. It's also impossible to distinguish a well-behaving Byzantine node from a correct node. A server could behave correctly for a long time.
A: So you could never catch it. And finally, independence of failures: you assume failures are independent among the replicas, but the replicas should be deterministic, which is kind of controversial. This is how it usually works: if you know N-version programming and that kind of thing, you could implement the same protocol on different operating systems, in different programming languages, on different hardware; you can go to the extreme. But I don't think this is what's done in practice, because it's quite costly.
A: So these are basically the challenges. For eventual consistency, I'm going to assume you know more or less what it is, which will help you understand what's going on. Basically, as I've explained before, you have servers, replicas, with reliable broadcast between them, and clients can access a server and take the reply back immediately, without synchronization.
A: This model assumes that, okay, you're going to get some conflicts and you're going to resolve them, using CRDTs, for example, in our case, and the application should accept stale values, reads in the past. So why not just use PBFT, for example, as the BFT protocol together with eventual consistency?
A: The answer might be obvious. First, because PBFT is blocking, which means that any request from the client will need to visit all the servers in the cluster before the reply gets back to the client. The second thing is that it assumes nodes are deterministic, and it requires total order. In the eventual consistency model you are non-blocking, with no agreement: you actually execute things immediately on the server and then think about how to resolve conflicts later. So that's a contradiction. And you have partial order.
A: You don't have total order, which means you have concurrent operations: at the same instant, for example, different servers may apply different operations. It might look non-deterministic, because you are actually executing different operations. This might be confusing: the replicas themselves are deterministic, but the input is different. You are applying different operations because they are concurrent. So that's why you couldn't simply use PBFT.
So our approach is this. In one of the papers, someone from Microsoft, I guess, wrote a blog post complaining about the complexity of BFT protocols, and indeed that's true, but we should not go into that area. That is the wrong way to go, and we had some experience reported from these guys in the US.
A: They also tried to implement this, to change the protocol to make it an eventually consistent BFT protocol, and they noticed that because of the complexity they couldn't implement it; even the specs were vague. So here we follow another approach in ByzEc: we try to keep the different layers modular. We keep the eventually consistent layer as it is, and we just plug in the Byzantine fault tolerance layer, and the rest works as usual.
A: Then the Byzantine fault tolerance works only on the history, the log. And as you know, in Byzantine fault tolerance, as we said, you should have total order at some level. That means you should push the same messages to the Byzantine cluster at any time, which means you need a consistent offset here: HSN. So you actually have a log, and the top of the log will be different across the replicas.
A: But up to some point back in the history, past some offset, the log should be the same. And even if the order is different, if you execute the same operations, even in different orders, you will get the same reply, the same state, because that is what CRDTs guarantee. OK.
The operations are commutative. So this is basically the idea that we have developed. And the last thing is that it's modular, which makes it easy to test and maintain.
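The commutativity argument above can be sketched with a toy PN-counter (our own illustration, not the system's code): the same set of operations applied in different orders yields the same state, which is why only the contents of the certified log matter, not its exact order.

```python
# Hypothetical sketch: PN-counter operations commute, so two replicas that
# apply the same operations in different orders reach the same state.

ops = [("inc", 1), ("inc", 4), ("dec", 2), ("inc", 7)]

def apply_all(state, operations):
    for kind, value in operations:
        state = state + value if kind == "inc" else state - value
    return state

replica_a = apply_all(0, ops)                   # one delivery order
replica_b = apply_all(0, list(reversed(ops)))   # a different order

assert replica_a == replica_b  # same operations, any order, same state
print(replica_a)  # 10
```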
So this is one of the well-known complexities of Byzantine fault tolerance: it's not easy to test the system, and the same goes for integration. With this kind of system it's more practical: you have your running system, you keep it running, and then you plug this in and test it. So you don't need to change the protocol that you are running on your servers.
A: You just plug it in, you test it, and then you can remove it, and your system in the foreground keeps working perfectly. And there are also other options, which I'm going to discuss now. So why does ByzEc excel? Because you care about security, but you can't give up availability, so you can't use PBFT, because it's strongly consistent. In this setting you really care about availability in the first place, but also about security. Okay, and here there is a kind of question, or debate.
A: If you are giving the client a certificate on the history, how does that help the client? It's not fully secure. Okay, so that's why we are saying it's as secure as possible, and this is the best that you can do, because you can't compromise availability; that's not an option. The first priority is availability, and then you care about consistency within eventual consistency. Why? Because the servers themselves, the system, won't converge with CRDTs if some of the replicas are Byzantine. I can give you an example.
B: But the operations are digitally signed; in a lot of implementations they are digitally signed. A Byzantine node can only censor some of the operations, and if they're causally chained, it would have to censor entire chains of causal history. But it wouldn't be able to forge or change the state, right?
A: Yeah, but I'm not talking about forging the state; I'm talking about executing the operations, not the state. Because in this model, the eventual consistency model, the different replicas don't synchronize on the state. They just disseminate operations, and you trust everyone to execute the operations. And if the execution is correct, we know that all the replicas will converge, not because they exchange state in this case, but because they exchange operations.
So in this model only the servers hold state. And the servers in this case could be many servers: this eventually consistent group is four here, but it could be 100 servers, okay, or proxy servers, or browsers if you want. But the end client is interested only in the reply, like reading this state, which is four here. The client doesn't need to hold the state, only the reply. Everything happens in the replicas.
A: And you care about your legacy system, and this is the point that we've made: you can still run your system, and you test your new plugin, probably online if you want. And you care about your clients. In the approach that we have decided on, as I've told you, in the replicas, the basic method is to push a kind of certificate on the history to the client.
Now, for example, clients might want different levels of security. Some clients say: I don't need any security; I want the basic eventual consistency model as it works now; I don't care about Byzantine faults. Okay, so you can forget about certificates; you can always read anything. A very conservative client says: no, I want a certificate for every operation; every operation should be correct. So here there is a knob to tune: every client could ask for a certain level of certification. For example: I can accept 1,000 operations without a certificate, but I can't accept more.
A: That is, I can tolerate a log of 1,000 operations being faulty, or probably suspicious. And every client, for example, on the same state, can have a different option. So you can still use the eventual consistency model even with this plug-in, and you can still get the Byzantine fault model if you are very conservative, as if you were using a Byzantine fault-tolerant protocol, and the whole spectrum between them.
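The per-client knob described above could look something like this (a minimal sketch under our own assumptions; the class, method names, and policy are hypothetical, not the system's API): a client optimistically accepts up to a configured number of uncertified operations, and blocks once that budget is exhausted until a certificate covers the pending history.

```python
# Hypothetical sketch of a tunable client-side security policy.

class TunableClient:
    def __init__(self, max_uncertified):
        self.max_uncertified = max_uncertified  # 0 = fully conservative
        self.uncertified = 0

    def on_reply(self, certified):
        if certified:
            self.uncertified = 0       # certificate covers the history
            return "accept"
        if self.uncertified < self.max_uncertified:
            self.uncertified += 1      # tolerate it, optimistically
            return "accept"
        return "block"                 # wait for the BFT backend

relaxed = TunableClient(max_uncertified=1000)  # plain eventual consistency, mostly
strict = TunableClient(max_uncertified=0)      # certificate on every operation

print(relaxed.on_reply(certified=False))  # accept
print(strict.on_reply(certified=False))   # block
```

Between the two extremes, any budget gives a point on the availability/security spectrum the talk describes.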
A: For example, one fault type: assume that there's a bug in the system; this is one case. Another thing: in this system you might have different implementations, you might have two different operating systems, so you might have this bug; this is the common case. There might also be an attack from the outside to compromise one server. Okay, so you could get different numbers, and you replace one version with another version, okay.