From YouTube: IETF101-DINRG-20180319-0930
Description
DINRG meeting session at IETF101
2018/03/19 0930
https://datatracker.ietf.org/meeting/101/proceedings/
A: Good morning, this is the proposed DINRG meeting, on decentralized Internet infrastructure. If that's not what you're interested in, you're in the wrong room. If you want to be here and you can't find a seat, there may be a few empty seats, so could people maybe raise their hand if they have an empty seat next to them? Mostly in the front row, a few in the back.
A: Okay, it's great to see so many of you; thanks for coming. Before we do anything, we need a note-taker for this session, and we cannot really start without having one. It's a really important function, and the job is mainly to keep track of the discussion. You don't have to write down everything that is in the presentations; just keep a good record of the discussion. Thank you very much, Chris.
A: I think Alicia is actually in the room — she's trying to find the room, I think. Okay, just quickly, if you haven't seen it: we had a full-day interim meeting a few weeks ago at the NDSS conference in San Diego, and that was actually a pretty interesting meeting. The presentations came from quite different backgrounds — key management, secure online payments — really different ideas, and it was also quite interactive. You will find all the material in the datatracker.
D: So this function can be a very interesting tool for some use cases. But I divide this into use cases and drivers, because when we were working on this material, we figured out that there are some issues that are actually central, core: without them, I think this will not work, or it will jeopardize going decentralized, so I call those drivers. Just to start: this group is called Decentralized Internet Infrastructure, but the literature long ago made a distinction between decentralized and distributed, so I'm not sure.
D: Are we talking about decentralization or distribution, or maybe something in the middle? Maybe for some functionalities we actually mean decentralization, and for others we mean distribution, but they are slightly different things. Okay, so decentralization is a kind of centralized clusters, where we have cluster heads and they have quite some autonomy, but they still need to talk to each other: they need to discover each other, they need to interact. Fully distributed is quite a bit more complex, but its stability is quite high, and it leads to other requirements like self-organization, etc.
D: So they are slightly different things, and as we go over the slides, you might think that maybe this is more decentralized, maybe this is more distributed. So why, in my opinion, are we here? Because we need to be; there is no way around it. And this picture — I think lots of people know it: there is a huge amount of traffic, and energy is constrained.
D: Scalability is a problem. We have IoT, with billions of new devices able, or wanting, to communicate. Are we going to manage all of this in a centralized way? Not in my opinion. The Internet by itself is decentralized, right? Routing is decentralized, DNS is fully decentralized. But because of some issues in daily operation, some of these operations are moving from decentralization to centralization, and this is raising some issues. Okay, this is kind of a general picture of topics that I think we could tackle.
D: Some are more widespread, others are more localized, but they are things like wireless local communications, data sharing, data production, IoT, etc. One first use case that, at least in my opinion, makes sense: I worked before at an operator, and I was always saying we need to make our clients happy, so we need to personalize our services — and if we want to personalize our services, we need to be aware of what the users want, what they need.
D: Another use case is what I call edge networking — not edge computing, but edge networking. That is, there are some operations in the network that can be co-located at an edge, because the information that is needed to do the networking functionality is there. The users are moving around these edge devices, data is coming from these edge devices, so we can actually classify the data when it enters the network, not just redirect the data to a central point to be classified. Self-healing should be done at the edges, not really at the core.
D: So this is a kind of use case that can be useful, for instance, to manage D2D communication or to manage user mobility. Up to this point I'm not saying anything new; I'm just recalling that there are some functions that we can do in a decentralized or distributed way. For instance, everyone remembers DiffServ, and RSVP is something like that, right? This kind of thing — the classification of traffic — can be done at the edges; it was meant to be done at the edges.
D: For some reason it is not, because of administrative issues, and operators want to control the network. Another thought: I said edge computing, okay, but I can generalize this. This is not only for IoT; it's for anything — for our personal data, even these photos that I take and put on Facebook.
D: It would be interesting not to centralize this in an entity — Facebook, Google, Amazon AWS — but just to put it in my personal cloud, what I call here cloudlets. If everyone has their personal cloud at home, how are these personal clouds going to be organized? If I want to share data with some colleagues and friends, how am I going to do that? Because my data is in my home — so how am I going to discover other personal clouds, and how am I going to exchange this data?
D: Okay, but this is a kind of scenario that allows us to solve some of the problems we currently have, namely that people do not own their data: you're losing control of your data, and you're losing control of your privacy. So putting your data in your personal cloud, which is this decentralized or distributed system, will solve that problem. The other thing, based on this: why do we need decentralization/distribution?
D: Because, for instance, we should place the data where it makes more sense. If I have a bunch of users here in this neighborhood in London trying to get information about the IETF, why should this information be on a central device somewhere — I don't know where, in the US or something? It should be here.
D: So we should replicate the data here, and if we do not replicate the data, maybe we'll replicate the computational functions. So there are lots of distributed functions, or functions that can be distributed, in this kind of scenario. If you are more into the wireless part, these are three examples. Like I said, this is not an exhaustive list of use cases, just food for thought. I selected these; the first one I named D2D opportunistic communication, and we can generalize this and call it just opportunistic communication.
D: It means any communication over a wireless network — a multi-hop wireless network. If you do that, how do you do, for instance, your ID verification? If someone wants to contact you, how do I know that this person is really who they claim to be? Okay, a certification authority — but you don't have access to the Internet, so how do you do that? So that's a problem. Cooperative relaying is also something that by nature is decentralized; spectrum allocation is localized but decentralized. And something that is important for operators is multi-cell wireless provisioning.
D: Before coming to this slide, I'd like to ask one thing. In all these use cases — and I have others in mind that I didn't place in the slides because of time constraints — what are the issues that are always present, that you should look at if you want to do decentralization/distribution? The first one, which actually is not here because I think it is obvious, is ID management.
D: If I want to start communicating with someone, I have to make sure that this person is who they really are. The current way is to use certification and authentication, but in some scenarios that may not be the most interesting approach. For instance, Named Data Networking uses a hierarchical naming scheme where the data packets are signed and signatures are verified in a chain. So this is an example of a kind of decentralized approach for this, and it actually can be used in an opportunistic network by itself, because I don't need to go to a centralized server.
D: I can just ask my neighbors for the keys — the keys for the nodes. So based on that, I think — at least in my opinion, and I shared these thoughts with Dirk; these are kind of raw thoughts, but I lay them out for you — the issues are trust management, cooperative incentives, and consensus. Why? Because even if I know everyone in the room, I will not communicate with all of you at the same level.
D: I have several different trust levels with each one of you, so I'll use this trust level to set up my communication. This is why trust management is quite important, and this is based on reputation, this is based on credentials — which goes back to ID management. But this may not scale, because if I try to come up with trust circles on the fly every time we're in different locations, it's going to be like a nightmare.
D: So maybe I want to go to cooperative incentives, in the sense that I want to communicate with people, or exchange data with sensors, that I actually do not trust, but I have some incentive to do so. And with this I can go into things like virtual currency: I have to get some reward to do something. If I want to share my bandwidth with someone, or to share my data, I need to get some reward back. I don't trust you, but I need something.
D: If you give me some reward, then I give you what I want. This idea of virtual currency is from 2011 or 2012, much before this buzz about blockchain, but it could be implemented that way — in a decentralized way — because this idea of virtual currency was not centralized; it was decentralized.
D: Finally, even if you are in a very, let's say, trustful environment, or if you have the incentive to cooperate, there are a lot of situations where, if you need consensus, things end up centralized. Imagine an operator network where you have different network functionalities distributed that are actually doing conflicting things: one network function wants to save energy, but the other wants to do load balancing and does not care about energy. How does that work? This can be done by just having an orchestrator — a centralized one; that's the classical approach.
G: Hey, thanks. So the Stellar Consensus Protocol, which is a draft that we've submitted the zeroth version of — I talked, like a year ago I guess, in Chicago, a lot about the motivation and the model, and in the interim meeting we had a bunch of motivation as well. So this talk is really about getting into how the protocol actually works.
G: (I keep tripping over the microphone stand and pressing that — okay.) So obviously the big problem in an open system is that traditional Byzantine agreement protocols assume majority voting or supermajority voting, and that completely breaks down in an open system, because you have these Sybil attacks, where the bad guy joins a hundred times and can kind of overwhelm the number of good guys. So the idea in SCP is that instead we're going to determine quorums in a decentralized way, based on the participants' trust.
G: So we say: if capital V is all the nodes in the world, then for every individual node v participating in this protocol there's some set of sets that v would accept as a quorum; we'll call them little q1 through little qn here. So we have this function Q(v), which basically gives all the sets that v would accept as a quorum, and we'll say that v has to be in each of these little q's.
G: But the thing is, each of these little q's is not actually a quorum; it's what we call a quorum slice. A quorum has to kind of transitively satisfy the dependencies of all its members. So the key definition in this whole protocol is that a quorum is a set of nodes U that contains at least one slice belonging to each of its members: every v in the quorum has some quorum slice q that is a subset of U.
G: So this works for consensus, but it requires an assumption, and the assumption is that there's sort of transitive overlap of trust if you follow people's dependencies. There are some analogies to build an intuition for why this might work. One is if you consider transitive reachability on the Internet. Imagine two networks that both speak, you know, IPv4 and IPv6, and they're both equally RFC-compliant, but they're not reachable from each other: no node on the left network can reach a node on the right network.
G: Well, they both obey the Internet Protocol, but one of these networks is going to contain Google and Amazon and Apple and Microsoft, and so we're going to say: well, Stanford's on that one, so that one is the Internet, and the other one, even though it speaks the same protocol, that's not the Internet! So that's the notion we're trying to capture: overwhelming consensus about something without centralized control. It's just that everybody kind of agrees on what the Internet is.
G: Another example would be the rough agreement on who constitutes a tier-1 ISP. If you asked everybody in the world which networks they would be glad to get transit from or peer with, and then you sort of transitively follow that, there will be a large amount of overlap, even if everybody doesn't agree on what a tier-1 ISP is. And similarly, every browser doesn't have exactly the same set of certificate authorities, but there's a lot of overlap.
G: Each node has only one quorum slice here, so you can see that {v2, v3, v4} is a quorum, because it contains a slice of each member. But {v1, v2, v3} is a quorum slice for v1, yet it's not a quorum, because v1 says "I would believe a quorum with v1, v2, v3," while v2 and v3 are saying "we'll only believe a quorum if v4 is also a member." So the smallest quorum that includes all of these is the set of all nodes in this example.
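The quorum definition in this example can be checked mechanically. Here is a minimal sketch (the helper name and the `slices` mapping are mine, not from the talk; the slice assignments follow the slide's single-slice-per-node example):

```python
# A set U is a quorum iff it contains at least one quorum slice of
# every one of its members (the key definition from the talk).
def is_quorum(candidate, slices):
    return all(
        any(s.issubset(candidate) for s in slices[v])
        for v in candidate
    )

# Single-slice-per-node example: v1's slice is {v1,v2,v3}; v2, v3, and
# v4 all use {v2,v3,v4} (assumed here to match the slide).
slices = {
    "v1": [{"v1", "v2", "v3"}],
    "v2": [{"v2", "v3", "v4"}],
    "v3": [{"v2", "v3", "v4"}],
    "v4": [{"v2", "v3", "v4"}],
}

print(is_quorum({"v2", "v3", "v4"}, slices))        # True
print(is_quorum({"v1", "v2", "v3"}, slices))        # False: no slice of v2 fits
print(is_quorum({"v1", "v2", "v3", "v4"}, slices))  # True: smallest quorum with v1
```

Note how {v1, v2, v3} fails only because v2's (and v3's) slice reaches outside the set, exactly as described above.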
G: So of course we have to actually represent quorum slices in this protocol. Mathematically we could choose any set of sets of nodes as quorum slices, but we want to be able to represent this compactly, so we do the following, which seems to capture a lot of real-world examples.
G: An SCP quorum set is a threshold k out of n members, where any individual member is either a public key directly or an inner quorum set. So basically you could say: I want to believe, you know, three out of the following — public key A, public key B, public key C — or, you know, two out of three of public keys D, E, F, and so on. And you can go two levels down.
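The nested k-of-n structure just described can be sketched as follows (the `threshold`/`members` field names and keys are illustrative, not the draft's wire format):

```python
# Evaluate whether a set of nodes satisfies a (possibly nested) SCP
# quorum set: count a hit for each member public key present, and
# recurse into inner quorum sets, which count as one hit if satisfied.
def satisfied_by(qset, nodes):
    hits = 0
    for m in qset["members"]:
        if isinstance(m, dict):       # inner quorum set: recurse
            hits += satisfied_by(m, nodes)
        else:                         # a bare public key
            hits += m in nodes
    return hits >= qset["threshold"]

# "3 of {A, B, C, 2-of-{D, E, F}}" -- keys purely illustrative:
qset = {"threshold": 3,
        "members": ["A", "B", "C",
                    {"threshold": 2, "members": ["D", "E", "F"]}]}

print(satisfied_by(qset, {"A", "B", "D", "E"}))  # True: A, B, inner set
print(satisfied_by(qset, {"A", "D"}))            # False: only 1 hit
```

This mirrors the talk's example: the inner 2-of-{D, E, F} set behaves like one composite member at the outer level.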
G: So this makes sense for, say, IBM — in our current deployment of this, IBM is running like 8 nodes around the world. So you might want to say: well, I'll trust, like, 3 out of these four companies, but one of them is IBM, and I'll wait for, you know, six out of eight nodes from IBM or something — figuring that those would be correlated in terms of maliciousness, but you want to be able to tolerate one of IBM's data centers being compromised.
G: So the key idea in this protocol is that nodes vote for statements by issuing these signed messages, but every vote has to specify the acceptable quorum slices of the voter. And so, as you collect these votes, you learn of a quorum, because you see a vote and you see what that node's quorum slices are.
G: And then that kind of expands the set of other votes that you need, until you've got a closed set. And obviously a well-behaved node is not allowed to vote for contradictory statements — we'll get to what you can vote for later on. But as you're collecting these votes, there are two important thresholds that have meaning in the protocol.
G: The first is quorum threshold: if a node v sees a bunch of nodes all voting for statement a, and that set contains a quorum that includes v, then we'll say that statement a has reached quorum threshold at v. Or it could be a situation where node v notices that every single one of its quorum slices contains at least one node that has voted for some statement a, and then we'll say we've reached blocking threshold.
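The two thresholds just defined can be sketched in a few lines (helper names and the greedy fixpoint are mine; `slices[v]` lists node v's quorum slices as sets, and `voters` is the set of nodes seen voting for the statement):

```python
def reaches_quorum_threshold(v, voters, slices):
    # Look for a quorum containing v inside `voters`: repeatedly discard
    # nodes with no slice wholly inside the remaining set. The nonempty
    # fixpoint, if any, is the largest quorum contained in `voters`.
    q = set(voters)
    while True:
        bad = {u for u in q if not any(s <= q for s in slices[u])}
        if not bad:
            break
        q -= bad
    return v in q

def reaches_blocking_threshold(v, voters, slices):
    # Every one of v's slices contains at least one voter.
    return all(s & voters for s in slices[v])

slices = {"v1": [{"v1", "v2", "v3"}],
          "v2": [{"v2", "v3", "v4"}],
          "v3": [{"v2", "v3", "v4"}],
          "v4": [{"v2", "v3", "v4"}]}

print(reaches_quorum_threshold("v2", {"v2", "v3", "v4"}, slices))  # True
print(reaches_quorum_threshold("v1", {"v1", "v2", "v3"}, slices))  # False
print(reaches_blocking_threshold("v1", {"v2"}, slices))            # True
```

In the last call, a single voter (v2) blocks v1, because v2 sits in v1's only slice — matching the intuition developed next.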
G: Assuming those nodes are honest, v will never see a quorum vote for a contradictory statement, because that quorum would have to contain one of those quorum slices, which would contain a node that's voted for a. And so I'll say that a node v ratifies a statement a if the statement a reaches quorum threshold at v — meaning v is in a quorum where every single node has voted for this statement a — and I show in the white paper, in the proof of the protocol, that the system will not ratify contradictory statements.
G: ...as long as you have this property of quorum intersection despite ill-behaved nodes, or QIDIN. In other words, if you conceptually delete all the nodes that deviate from the protocol — from everybody's quorum slices and from the set of nodes — every two quorums still have to intersect. And if they don't, well, then no protocol could actually guarantee safety. Now, this isn't a particularly intuitive notion, this idea of conceptually deleting the ill-behaved nodes.
G: But the point is that it's a necessary property to guarantee safety, and so if the protocol can guarantee safety whenever this holds, that means it will also be safe under any other model — in other words, it's kind of optimal in terms of safety. You can think of it like presenting an encryption algorithm: someone might say, well, your encryption algorithm is secure against an adaptive chosen-ciphertext attacker — that seems like a weird kind of attack, who's going to pull that off? — but we do that because we think it captures all the other kinds of attacks that might happen. So the idea is that this weird requirement is supposed to capture any other, weaker failure model that you might have.
G: So the way these vote messages happen is that nodes send around these statement-and-signature pairs. The statements themselves contain the public key of the sending node; the slot index, because you run this protocol a series of times to agree on log entries, if you will, if you're updating a state machine; and the SHA-256 hash of the quorum set — because these don't change that often, you don't want to communicate the whole thing.
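The fields just named can be sketched as a message structure (this is illustrative; the helper and field names are mine, and the real wire format in the draft differs):

```python
import hashlib
import json

def quorum_set_hash(qset):
    # Name the quorum set compactly by its SHA-256 hash; peers can ask
    # the sender for the preimage if they don't have it cached.
    blob = json.dumps(qset, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def make_statement(node_pubkey, slot_index, qset, pledges):
    return {
        "node": node_pubkey,              # public key of the sending node
        "slot": slot_index,               # which log entry is being decided
        "qset_hash": quorum_set_hash(qset),
        "pledges": pledges,               # the votes/accepts being made
    }

msg = make_statement("PK_A", 7,
                     {"threshold": 2, "members": ["A", "B", "C"]},
                     ["vote nominate x"])
print(msg["slot"], len(msg["qset_hash"]))  # 7 64
```

Hashing with `sort_keys=True` makes the name deterministic for the same quorum set, which is what lets receivers cache preimages.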
G: It's more compact just to name it by a hash, and you can just request the preimage of the hash from the sending node if you don't happen to have it cached. And then there's a statement in which you pledge a bunch of votes — we'll get into what these statements are in a second. But basically the big picture here is that you're sending around these votes along with quorum slices, and several things will happen.
G: So let's say you're trying to decide between some statement a and its opposite, not-a. Well, in the beginning nobody has voted, so everybody could vote for a, or everybody could vote for not-a; we say the system is bivalent, because either outcome is possible. Then at some point maybe some node ratifies statement a, and at that point, assuming you have this quorum-intersection-despite-ill-behaved-nodes property, no node will ratify the contradictory statement not-a.
G: So, in a sense, a is the only possible value the system could agree on, and if every node learns that the system is in fact a-valent, then they'll basically decide: well, we've agreed on a, because we know we're not going to vote on any other possible value. But at any point along the way you could potentially get stuck. For example, a node that accidentally voted for not-a, even when everybody else votes for a, will not be able to vote for a.
G: So it will not be able to ratify a. Or it could be that a node kind of unknowingly ratified a, but then some other node in its quorum that also voted for a crashed after voting, and the vote message got lost or something. So basically you'll never learn that the system is in fact a-valent, and the result is that we're kind of stuck.
G: So an important question is: how do you actually know? Just because you've seen, say, a quorum vote for a — say these t nodes in your quorum — great, you know the system's a-valent. But how do you know everybody else is going to learn that the system is a-valent? The traditional solution in Byzantine agreement protocols is to say: well, if enough nodes tell you something—
G: —you can kind of believe them with no loss of safety. This turns up a lot in PBFT, for example, one of the more popular Byzantine agreement protocols — but that's a centralized protocol. Unfortunately, this completely breaks down in this open model, because what happens here is that from, say, this node's point of view, v_{n-1}, it says—
G: Now, in this decentralized case, you have to treat failure as a kind of per-node thing, and each node has to look out for itself and say: you know what, I'm not willing to believe anything unless I'm actually a member of a quorum that voted for this statement. So, okay, there are a couple of problems here, which kind of motivate why we need to do something fancier than just a straight-up vote.
G: How do you agree on a statement after you voted against it? And how do you know, once you've agreed on a statement, that everybody else will agree to that statement? So suppose that you're node v1, and you notice that in every single one of your quorum slices there's another node that's willing to say the system is a-valent — it saw a quorum vote for a, or for whatever reason it thinks a is the only possible output value.
G: In other words, this statement that the system is a-valent has reached blocking threshold for v1, because it's in every single one of its quorum slices. So at that point, either it's true that the system really is a-valent, or v1 is not a member of any well-behaved quorum — in which case it's not guaranteed liveness anyway; the bad guys may be able to get it stuck, so it can't make any progress.
G: You could have well-behaved nodes that accept diverging statements, which would just be bad, and also we have some nodes that previously couldn't ratify a statement that can now accept it, but there's no guarantee that all the nodes will be able to accept the statement. So the solution here is that we have to hold a second vote — on the fact that the first vote succeeded.
G: So basically you have a second vote on the fact that you've already accepted the statement a, and this solves both of the problems on the previous slide. It solves safety, because now we need quorum threshold for this ratification — there's nothing about blocking threshold here. It also solves the problem of honest nodes in an honest quorum being unable to accept a statement that other nodes have accepted. Why? Because there are nodes in a well-behaved quorum that might vote against accepted statements, but they won't vote against the fact that those statements were accepted.
G: In fact, one of the things that I've proven in my white paper is that if you have a single node in a well-behaved quorum that confirms a statement a, then eventually all the nodes will confirm the statement. So that gives you this notion that, yes, we're done; the system is actually going to agree on a. Basically, this is the main building block of the SCP protocol: this federated voting process, where you start in kind of an uncommitted state, and then there's some statement a that you think is valid.
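The federated-voting progression described above (uncommitted, then voted, then accepted, then confirmed) can be sketched as a small state machine. This is a minimal sketch of the state transitions only; the threshold detection is left to the caller, and all names here are mine:

```python
# One node's view of federated voting on a single statement.
class FederatedVote:
    def __init__(self):
        self.state = "uncommitted"

    def vote(self):
        if self.state == "uncommitted":
            self.state = "voted"

    def on_quorum_threshold_votes(self):
        # First vote succeeded: a quorum voted for the statement.
        if self.state == "voted":
            self.state = "accepted"

    def on_blocking_threshold_accepts(self):
        # Every slice holds a node that accepted: safe to join them,
        # even if this node voted the other way.
        if self.state in ("uncommitted", "voted"):
            self.state = "accepted"

    def on_quorum_threshold_accepts(self):
        # Second vote (on the acceptance itself) succeeded.
        if self.state == "accepted":
            self.state = "confirmed"

fv = FederatedVote()
fv.vote()
fv.on_quorum_threshold_votes()
fv.on_quorum_threshold_accepts()
print(fv.state)  # confirmed
```

The `on_blocking_threshold_accepts` path is what lets a node that voted for not-a still accept a, as discussed earlier.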
G: So you vote for a, and then when the statement "I vote for a" or "I accept a" reaches quorum threshold, you accept a, and then you have the second vote, where you confirm a. At that point you can assume the system has confirmed that a is true, without sacrificing safety or liveness. And conversely, of course, you could vote against a.
G: If you locally confirm the statement a, then you know that — provided the system is not in some terrible state where you don't have quorum intersection despite ill-behaved nodes, and you actually have a working, honest quorum — all the other good nodes will also accept this statement a. So with that said, once you understand that building block, the rest of the protocol should, ideally at least, make some intuitive sense. I know this is a lot of material to plow through.
G: So the first phase of the protocol is that you have to decide what value you actually want to agree on. For that, what nodes do is vote to nominate particular values — and for our purposes here, a value is basically just an opaque byte string.
G: So they send these nomination messages around, which contain votes to nominate particular values and also contain lists of values that have already been accepted — meaning they've progressed to the second stage of the federated voting protocol. There's an optimization where not everybody initially nominates something, just to make it more efficient, but conceptually everybody could nominate a value; what's important is that as you receive values, you re-nominate those values.
G: Then, as you accept values, you move them from the voted vector to the accepted vector, and you keep broadcasting until one of these accepted values reaches quorum threshold — and then you've actually confirmed a value as nominated. At that point you stop voting to nominate new values, but you keep re-nominating the ones you've already nominated. So eventually you might actually have one or multiple values end up as confirmed nominated.
G: The thing is, you don't know when the convergence has happened. So you take your best guess — okay, we've probably converged on x — but it may be too early, and so you need to do something to guarantee that this will still be safe. For that, the protocol moves to a second phase that involves balloting, and — if you've seen Paxos — this is kind of a generalization of Byzantine Paxos. So what is a ballot?
G: Well, a ballot is basically a pair of a counter and a value, and the idea is that initially the value you choose will be whatever the output of your nomination protocol was. These ballots are totally ordered, and the counter is the more significant part — so (1, anything) is less than (2, anything) — and then the value is the tiebreaker. Nodes are allowed to vote to commit or to abort ballots, but not both; that's contradictory.
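The ballot ordering just described maps directly onto tuple comparison; a minimal sketch (representing values as strings here is my choice, the talk treats them as opaque byte strings):

```python
# Ballots as (counter, value) pairs: the counter is the more significant
# field, and the value breaks ties -- which is exactly Python's
# lexicographic tuple comparison.
b1 = (1, "zzz")
b2 = (2, "aaa")
b3 = (2, "bbb")

print(b1 < b2)  # True: counter dominates, even though "zzz" > "aaa"
print(b2 < b3)  # True: equal counters, value breaks the tie
print(sorted([b3, b1, b2]))  # [(1, 'zzz'), (2, 'aaa'), (2, 'bbb')]
```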
G: You can only vote one or the other. And if ever the federated voting confirms that some ballot b is committed, then it becomes safe to output the value in it — I'll write b.x for the value in a ballot b, and b.n for the counter. So this is how we finish and actually choose a value.
G: There's an important invariant, though, which is that you can't vote to commit a ballot unless you first prepared the ballot. And what does it mean to prepare a ballot? You've prepared a ballot b if you've confirmed a whole bunch of abort statements — namely, you've aborted every ballot b_old that is less than b and that has a different x value. So basically you first have to prepare b.
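The prepare invariant can be sketched as a predicate over the ballots a node has seen (function and argument names are mine; ballots are (n, x) tuples as above):

```python
# Ballot b is prepared once every lesser ballot carrying a different
# value has been confirmed aborted via federated voting.
def prepared(b, seen_ballots, confirmed_aborted):
    n, x = b
    return all(old in confirmed_aborted
               for old in seen_ballots
               if old < b and old[1] != x)

seen = {(1, "y"), (1, "x"), (2, "x")}
print(prepared((2, "x"), seen, {(1, "y")}))  # True: only (1,"y") conflicts
print(prepared((2, "x"), seen, set()))       # False: (1,"y") not yet aborted
```

Note that (1, "x") needs no abort: it is lesser but carries the same value, so it cannot contradict committing (2, "x").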
G: Okay, so basically you work through these two phases of the protocol: first you have to prepare ballots, then you vote to commit ballots, with this confirm message, and then once you've committed ballots, you can help with this externalize message, which helps other people.
G: You're done at that point, but you issue this final message that helps other people who may have fallen behind quickly catch up, because you can kind of say: actually, it's as if I have a trivial quorum slice, because I've already seen my quorum, so you don't even need to worry about my dependencies. And, you know, now that I've laid the groundwork, next time I can actually get into more details of the protocol.
I: You mean the description of quorum slices in the draft? Sorry — Nick Johnson, Ethereum Foundation. The description of quorum slices in the draft was quite opaque, but your description on the slide was quite clear, so may I suggest rewriting it a bit?
G: You just have to satisfy any k — you have a threshold you have to satisfy: you have to have yourself plus that many at the current level. For example, if your threshold is five, then you could have three nodes in a validator set, plus two satisfied recursive SCP quorum slices, and that would reach five.
G: I assume that, basically, the nodes depend on one another — say, like, I'm not going to agree to something unless Google and Amazon and, like, VeriSign do, because I think those are all important, or three out of these four companies, or something. And what I assume is that if you take the transitive closure of these trust dependencies, there will be overlap. So you take any two people: they might not know each other.
G: I mean, trust is kind of a weird word — you might not trust these people individually; what you trust is the conjunction of a bunch of them. I might say: you know what, what I trust is actually the conjunction of these three big companies, plus the EFF and the ACLU. Once all of those organizations say the same thing, I'm willing to accept it, and so I depend on each of these individual ones.
G: ...a bunch of these messages around, and right now what Stellar does is not especially intelligent — it's sort of more reminiscent of Gnutella; it's just a flooding protocol, so you may get multiple copies of the same message. So if there's work in the IETF or IRTF that we could use off the shelf as a peer-to-peer, multi-hop multicast protocol, that would be useful. Do you think there is?
D: ...themselves, and to set up infrastructure to do this. One use case could be this consensus: they are able to discover themselves, they are able to synchronize their state, and then they exchange some arguments — and the exchange of arguments could use this consensus; it could be this already.
G
I feel like we might be targeting things slightly differently. This is a protocol that — I'm imagining SCP is something you would probably not run on a bunch of IoT devices; rather, it would be a way of having, you know, all the CT logs, all the authorities, combined together into a single log or summary. They'd be sort of known, trusted, available entities on the internet who would be participating in this.
H
A
C
K
B
K
L
K
Okay, yeah. So I'm Colin Man from Stanford University — sorry, can you go back one slide to the title slide? — and online with me are Jean-Luc Watson, also from Stanford, and Sydney Li from the EFF. So today we're going to talk about distributed authentication and authenticated mappings. And what are authenticated mappings, you may ask? Next slide, please. So essentially what we asked was:
K
K
How do you get secure domain lookups — domain resolution — and how do you get secure identity verification? So next slide, please. The solutions we have right now on the current Internet are trusted key servers for encrypted email, HSTS policies and preload lists for secure connections, DNSSEC for domain lookups, and CA trust chains plus Certificate Transparency for verifying identity. Next slide, please. But these all have their own problems: trusted key servers are susceptible to, e.g., man-in-the-middle attacks, and HSTS is susceptible to downgrade attacks.
K
You know, there's DNS poisoning, and there's a single point of failure for CA trust chains. What we saw here is that there's really a need for a general authenticated mapping. In the first case, what we really have is a mapping from an identity to a public key, for encrypted email. In the second case, it's a mapping from an identity — a domain — to a policy. In the third, it's a domain name to an IP. And in the fourth one,
K
it's a domain to a certificate. So really, the underlying problem here is all about mappings. Next slide, please. So the idea is whether we can derive a scalable solution that's going to work for any kind of mapping. Ideally we want it to be distributed, so that there's no single point of failure, and we want something like a global state database.
K
So some of the properties we want — can you go to the next slide? — some of the properties we want: we want the append-only property.
K
The transitions need to be authenticated — next slide — and the last thing is that it has to be transparent. We kind of took a leaf from CONIKS, for example, where you have to be able to see exactly what's happening, and everyone has to be able to see the exact state of the mappings at all times. So, looking at how we want to design this, the first thing is that we want to bootstrap our—
K
We want to bootstrap our network from existing PKI. We don't want to just invent something that's completely separate; we want to be able to use existing trust networks. So next slide, please. The idea is that you should be able to use an existing certificate in order to certify a mapping in this authenticated mapping system, and then maybe from there we can use new keys — or keys in this new mapping — to enforce transitions afterwards.
K
So, in terms of the actual consensus protocols underneath, we evaluated a couple of different options. The first one is just a Byzantine fault tolerant cluster; the problem with that is that participation is limited, and a uniform set of incentives undermines the security. And then the second thing we looked at is maybe building on top of a proof-of-work or proof-of-stake system.
K
K
Next slide, please. The third option was federated Byzantine agreement. We actually worked with David Mazières, the previous presenter, to talk about possibly overlaying our framework on top of the Stellar Consensus Protocol, because Stellar is based on existing trust relationships, and that's the kind of protocol we're looking for: we can basically take an existing trust network that already exists in the world, plus some kind of existing public key infrastructure, and use that to bootstrap our authenticated mapping system.
K
So right now I think federated Byzantine agreement seems like the best choice for a mapping system like this. Next slide, please. In terms of how to actually create a framework that would be easily usable and able to support these well-formed transitions, we thought through how a protocol and an interface would look for this kind of transition.
K
So, next slide. To do that, we actually went through two examples and thought about how the mapping works, what you really need in order to provide a good mapping, and what kind of semantics are needed. The first example is PGP.
K
If you're doing email encryption, what you really want is to be able to map aliases to public keys. An alias could be an email address or a domain — those are the two we have in mind right now — but essentially you need to be able to map that to a key so you can encrypt information to it. So, on creation of the entry, what we really want to do is check that some kind of domain authority can verify the identity.
K
Second, maybe we want to do recovery or something. And then the second example is binary hashes. Mapping download URLs to their hashes — their checksums — is also something you want authenticated, because when you download a binary you don't really know if someone in the middle is changing the page to show a different checksum. You need some kind of mapping from the URL to the hash.
K
So for these semantics, what we want, again on creation, is for the domain hosting the URL to sign the entry; and then on an update, we want the same domain signing the update. So — next slide, please — some observations we have: essentially, on creation and update, we want some kind of validation based on local state.
K
So, for example, if you were running this on Stellar, each node would look at the update and run these validators — check that whatever entry is being created for a particular mapping is valid — before voting yes on it. In that way we can always maintain the invariant that everything in the cluster is valid, and when you look up a value, it's going to be valid.
K
So next slide, please. One example here: if one of the mappings was PGP keys — right, emails to PGP keys — then the mapping would be named something like email-pgp. And you have here the create validator, with one operation, the domain authority signature: that's what you need to create the entry. And then an update validator, with the signature from the previous key, or N-of-M signatures that are specified inside the operation.
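The create/update semantics just described can be sketched as two small predicates. This is an illustrative sketch, not the draft's actual interface: "signatures" here are stand-in string tokens rather than real cryptographic signatures, and all the field names are invented for the example.

```python
# Illustrative sketch of an email -> PGP-key mapping with a create
# validator (domain-authority signature) and an update validator
# (previous key, OR N of the M recovery signers the entry chose).

def create_valid(entry, presented_sigs):
    # Creation requires the domain authority's signature over the entry.
    return entry["domain_authority"] in presented_sigs

def update_valid(entry, presented_sigs):
    # Update requires either the previous owner key's signature, or at
    # least n of the m recovery signers set as per-entry parameters.
    if entry["owner_key"] in presented_sigs:
        return True
    n, signers = entry["n"], entry["recovery_signers"]
    return len(set(signers) & set(presented_sigs)) >= n

entry = {
    "alias": "alice@example.com",          # hypothetical entry
    "owner_key": "alice-key-1",
    "domain_authority": "example.com-authority",
    "n": 2,
    "recovery_signers": ["jean-luc", "sydney", "bob"],
}
```

A node would run these checks against its local state before voting yes on a transition, as the talk describes.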
K
That's all — and you can see the struct that we came up with down here as well. Next slide. So, in terms of what validators are — I talked about this a little bit earlier as well — essentially a validator is a collection of operations that are enforced. There's a link to the GitHub repo in this presentation, if you want to go through and take a longer look at what the interface looks like.
K
If you're mapping a domain name, then obviously the authority is going to be that domain. The idea of this mapping is that there's always going to be some kind of owner who can be verified — so that's the owner signature — and then there are event signatures as well. And here's an example of an entry update; it's also for PGP keys.
K
So I can update my email mapping with a new public key, and the idea is that there are also update operations that I can put in. Some of the operations that are set on the mapping are going to need to be parametrized by the entry itself. For example, whoever created the PGP key mapping doesn't know which N-of-M signatures I actually want to require for my entry; that should be something I'm able to set myself.
K
So the idea here is that you can actually put in parameters. I can say: okay, well, I trust Jean-Luc and Sydney — if two signatures, one from each of them, are present, then that's also a valid transition. So the actual operations are set by the mapping, but the parameters are provided on an entry-by-entry level. Next slide. And another thing here — some of you might be thinking—
K
The second thing is that it's easy to use, because all the operations are already well defined, so it's easy to create a new mapping depending on the use case. Maybe I'm writing an encrypted messenger and I want to create a public key directory for my users — it's really easy for me to just pick the operations I need. It's less error-prone.
K
I
K
You mean someone who's just spamming — creating too many mappings? Precisely, yeah. I think that's something we still need to think about. In the early phases maybe we will restrict it a little bit, but we're also open to any ideas on that.
N
K
O
E
So, professors are known to misbehave, you know. Normally we make the lecture slides either during breakfast, or otherwise we use ten-year-old slides from last time. This time, unlike anything else, I literally finished them while they were sitting there on the floor — so I'm a little nervous, and it's not mine.
P
H
H
H
H
An interesting feature of smart contracts in Chainspace is that smart contracts are composable. This means that procedures in one smart contract can call procedures within another smart contract, so it allows the creation of libraries, where these libraries can be used as utilities to compose higher-level smart contracts.
H
H
Once a transaction is accepted into Chainspace, all its input objects become inactive and all its output objects are born — that is, they become active. So, for example, imagine that the object is Alice's bank account, which has a balance of 10 coins. Now Alice submits a transaction to Chainspace to transfer five coins to another bank account. If the transaction is accepted, then the object corresponding to Alice's previous bank account, with a balance of 10 coins, becomes inactive, and a new object is created which represents the current balance of five coins, and it becomes active.
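The consume/create semantics of the Alice example can be sketched as a toy ledger. This is a minimal illustration of the object model described in the talk, not Chainspace's actual implementation; the class and field names are invented.

```python
# Toy sketch of the Chainspace-style object model: accepting a
# transaction deactivates its input objects and creates active outputs.

class Ledger:
    def __init__(self):
        self.objects = {}   # object id -> {"value": ..., "active": ...}
        self.next_id = 0

    def create(self, value):
        oid = self.next_id
        self.next_id += 1
        self.objects[oid] = {"value": value, "active": True}
        return oid

    def apply(self, inputs, outputs):
        """Consume `inputs` (object ids) and create `outputs` (values)."""
        if any(not self.objects[i]["active"] for i in inputs):
            raise ValueError("input object already consumed")
        for i in inputs:
            self.objects[i]["active"] = False
        return [self.create(v) for v in outputs]

ledger = Ledger()
alice = ledger.create(10)                       # Alice's 10-coin account
new_alice, bob = ledger.apply([alice], [5, 5])  # transfer 5 coins
```

After the transfer, the 10-coin object is inactive and can never be consumed again, which is exactly what makes the transaction graph acyclic later in the talk.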
H
H
Checkers are pure functions that only output a binary value: accept the transaction, or reject the transaction. But the key property here is that a checker does not require any secret information; all it needs is the necessary information to empower it to validate whether a transaction should be accepted or rejected.
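A checker in this sense is just a pure predicate over public transaction data. The conservation rule below is an illustrative policy invented for the example, not Chainspace's actual checker API.

```python
# Sketch of a checker as a pure function: it sees only public data
# (no secrets) and returns accept (True) or reject (False).

def coin_checker(input_values, output_values):
    # Accept only if value is conserved and all outputs are positive.
    return (sum(input_values) == sum(output_values)
            and all(v > 0 for v in output_values))
```

Because it is pure and needs no secrets, any node can run it to validate a transaction it did not create.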
H
H
So, to clarify this, here is an example of an application that could benefit from this kind of distinction between checkers and smart contracts. Imagine that a person wants to run an e-petition among the citizens of London, but wants to do it in a privacy-preserving fashion: that is, they do not want to know the precise identity of the citizens of London, yet they want to be sure that participants are actual citizens of London. So how can this be implemented in Chainspace?
H
Well, the user locally runs this transaction, which is basically casting a vote, by submitting input objects and the secret information — which in this case is their ID, perhaps a credential that they are a citizen of London — and creates an output object. Then they submit the input and output objects, plus a zero-knowledge proof that they are a citizen of London, to the checker; the checker verifies, and accepts or rejects the transaction, which casts the vote.
H
H
H
We achieve this using an existing Byzantine fault tolerant protocol, which guarantees safety — that is, as long as no more than f of the 3f+1 nodes within the shard are dishonest, a common sequence of actions will be agreed — and the other property is that of liveness: this agreement will eventually be reached.
H
H
So here is an example. Imagine that Alice wants to book a hotel room as well as a train seat, but the hotel room objects are handled by shard one and the train seats are handled by shard two. Alice wants both the room and the train seat to be booked, or neither of them. Chainspace achieves this using an atomic commit protocol, which means that a transaction is only accepted if all the concerned shards agree that it should be accepted; otherwise it is rejected.
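The all-or-nothing behaviour can be reduced to a one-line commit rule. This is a toy sketch of atomic commit, not the actual S-BAC protocol (which also handles locking and Byzantine shard members); the shard names are invented.

```python
# Toy atomic-commit rule: a cross-shard transaction commits only if
# every concerned shard locally votes to accept it.

def atomic_commit(shard_votes):
    """shard_votes: mapping shard name -> bool (local accept decision)."""
    return all(shard_votes.values())

# Alice's trip: both shards must accept for the booking to go through.
both_ok = atomic_commit({"shard1_hotel": True, "shard2_train": True})
one_fails = atomic_commit({"shard1_hotel": True, "shard2_train": False})
```

In the failing case neither the room nor the seat is booked, matching the example above.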
H
H
So, given this threat model, Chainspace offers the following four properties: transparency, encapsulation, integrity, and non-repudiation, and I am going to discuss these one by one. Encapsulation is related to smart contracts, which we've already discussed: the key idea here is that a smart contract cannot interfere with the objects created by another smart contract unless that contract allows it. The other property, integrity, we also discussed when we talked about the S-BAC consensus protocol: Chainspace will only accept valid and non-conflicting transactions.
H
H
The way objects and transactions have been designed in Chainspace naturally leads to a directed graph: a transaction creates certain objects, those objects become active, then another transaction comes along and consumes those objects, and so on. So you can imagine that this is a directed graph. This graph is also acyclic, because, remember, we said that once an object is consumed by a transaction it becomes inactive, and no future transaction can use that object again. So this forms a directed acyclic graph.
H
Now, every transaction has an ID, which is the hash of all the input information that goes into that transaction, and every object also has an ID, which is the hash of the transaction's ID and the object itself. So I hope it is now clear that, given an object and its ID, it is possible to verify all the history that led to the creation of this object.
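The hashing scheme just described can be sketched directly. The serialization choices below (sorting input IDs, joining with `|`) are illustrative assumptions, not Chainspace's actual encoding; the point is only that object IDs chain back through transaction IDs, making lineage verifiable.

```python
# Sketch of the ID scheme described above: a transaction's id hashes its
# inputs, and each output object's id hashes the transaction id plus the
# object itself, so an object's full history is recomputable.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def transaction_id(input_ids, payload: bytes) -> str:
    return h("|".join(sorted(input_ids)).encode() + payload)

def object_id(tx_id: str, obj: bytes) -> str:
    return h(tx_id.encode() + obj)

tx = transaction_id(["a1", "b2"], b"transfer 5 coins")
out = object_id(tx, b"alice balance = 5")
# Recomputing from the same history yields the same ids.
```

Anyone holding the object and its claimed history can recompute the hashes and detect any tampering along the chain.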
H
H
H
We also developed a Python contract simulator environment to enable developers to locally write and test Chainspace smart contracts and checkers, without the need to actually run them on the Chainspace infrastructure. All our code is available as open source software on GitHub. We evaluated the performance — especially the scalability and latency — by deploying Chainspace nodes on AWS.
H
We first evaluated the scalability claims of Chainspace: in particular, here we are looking at how the throughput of Chainspace improves or degrades as we add more shards to the system. On the x-axis you can see the number of shards, with four nodes per shard, and on the y-axis you can see the throughput in transactions per second. As you can see, Chainspace offers linear scalability.
H
One thing I'd like to mention here is that we used the BFT-SMaRt implementation of BFT, which is based on PBFT and has the same communication complexity, that is O(N²), so increasing the number of nodes per shard in our case will not lead to a dramatic improvement in throughput. However, there are recent proposals which are more efficient in terms of communication — for example ByzCoin, which was published at USENIX Security in 2016.
H
H
So here on the x-axis you can see, for the different loads, the client-perceived latency in milliseconds, and on the y-axis you can see the fraction of clients to whom this latency corresponds. We tested this latency under different system loads, and we found that even under high load — that is, 200 transactions per second being handled by Chainspace — the latency is about one second for 50% of the clients and about two and a half seconds for all of them, which is quite low.
H
So I don't have time to go into the details of the applications that we built on top of Chainspace, but you can read about those in the paper, which was recently presented at NDSS. In particular, we implemented a privacy-preserving smart metering application on top of Chainspace, and also a privacy-preserving platform for decision making, and the paper also includes benchmarking and evaluation information for these two applications.
H
H
The third question is how we map nodes to different shards. Is it just a random process, where a coin is tossed and a node ends up in an arbitrary shard, or can we use a more intelligent policy for mapping nodes to shards? And finally — and this is a question that is shared by all open peer-to-peer systems — how do we incentivize nodes in Chainspace to continue to participate in the system and remain honest?
H
So
to
conclude,
I
presented
chain
space,
which
is
a
smart
contracts
platform.
It's
two
contributions
are
number
one,
it
is
scalable
and,
secondly,
it
supports
privacy-preserving
transactions,
it
is
kill,
it
is
scalable
because
of
sharding
and
it
offers
privacy
because
of
the
distinction
between
the
smart
contract
part
and
that
there
is
a
checker
part
which
is
only
run
on
nodes.
H
G
G
G
G
H
—would be a bottleneck, for example, if you had only one shard or two shards in Chainspace with a very large number of nodes per shard. But if you have a reasonable number of nodes per shard, then you can achieve scalability by increasing the number of shards rather than the number of nodes within each shard.
G
Yeah, I guess I'm confused about what you're trying to optimize: transactions per second, which you might want to optimize more. It would really be extremely inefficient to run PBFT with 100 nodes — it would be insane — but you could still run it once a day and agree on, you know, 40 million transactions as a batch, right? The PBFT cost itself can be amortized over arbitrarily many operations. So it may have latency, but if that's becoming a throughput bottleneck, then you may want to look at batching.
G
O
I'm from the Internet Institute. I had a question going back to some of the issues that you raised going forward. From the paper it kind of seems that, in the end, you're sort of the benevolent dictators, because you assign the shards. But how do you then go to an actual open infrastructure model that is properly trustless? Because that doesn't seem possible with the current mapping.
A
H
H
Q
Q
H
H
Information about Chainspace shards — the whole infrastructure — would be available in some directory service somewhere, and in a rich ecosystem these would maybe be well-known entities with different levels of trust. So a smart contract creator can specify which shards it wants to allow to handle its objects.
Q
If I don't like any shard, can I make the system create one on demand? I mean, suppose I won't agree with this mapping — I don't like any of the shards available, for whatever reason — can I create the shard, or ask the system to create it independently? Or is this something that is decided—
H
Q
R
H
The key takeaway of the evaluation that I showed, I would say, is not to focus on the absolute number of transactions per second that Chainspace is able to handle — and in fact, in general, that's not a very good metric to assess the evaluation of a system. For example, there are systems that report thousands of transactions per second in their paper, but then they also have hundreds of nodes in that system, right?
H
D
E
E
E
E
So, who submitted this proposal, and how we got to collaborate with Operant Solar — that's a story I'm going to tell you later. Next slide. Okay, a little background about Operant Solar; I assume that nobody in the room has ever heard of it. This is a startup. California had this sunshine program promoting the deployment of solar systems, and in California, of course, this is a big business.
E
E
E
So it's not a very expensive system — it's a gateway with a low data rate, but more than adequate for the solar network to communicate reliably and securely with the power grid system. The most unique feature of this system is that its security is implemented with NDN, Named Data Networking. I see Jeff Thompson sitting in the back, and he has actually been helping with the development of this system. They actually have operational parts: they have done successful demonstrations at a much larger scale.
E
Now, the question is why they implemented it over NDN. I'll tell you the story — next slide. So one day, about two years ago, I got a call from some strangers: "Can we visit you to learn how to use NDN?" I was surprised: how did you find NDN, when only a few people under the sun have heard that name? Up to that point, they wanted to build this rooftop solar system as a mesh network, and then they looked at—
E
—the existing protocol stack. They felt it was way too complex, and somehow Googling pointed them to NDN. So they came down and visited us; they immediately liked the idea, and I'll spell the whole thing out in the end — you can see the difference. We haven't come to the IETF to give an NDN talk yet; we will, but not now. Next slide.
E
So what's new in this new proposal? We're hoping, working with them, to build the system and move it into operation. What's new this time is that we want to use a blockchain-based system to build distributed transactions. Everyone knows that power system security is very important, and the solar is attached to the power system.
E
Now, ensuring security in everything is just a top priority. What's good about utilizing distributed transactions is that they basically create a permanent record, readily accessible to everybody, both for logging your energy transactions as well as other incidents in the system. You know, the user — the home owner — wants to know how much the home contributed, and whether the power system actually credited them with whatever contribution the home made; and there are all the other incidents that you might want to make available as well.
E
So it's right there, and therefore, of course, we have the expectation that what we make available will be a big improvement over the existing system. I don't know all the details, but I have heard hints that there are all sorts of issues with keeping track of the records — whether human errors or other things — and of course there can be malicious actors changing records.
E
So if we utilize blockchain technology, we hope that we can really overcome the existing problems. Now, we're not using the blockchain as one chain. Instead, there's this thing called IOTA — I'm pretty sure many people have heard of this, especially in Europe; I think it's better known in Europe than in the U.S. — which is billed as a next-generation blockchain. That's not our claim; that's what the IOTA people claim, so we just borrowed it.
E
So you can see that this can operate in parallel, and therefore it is more scalable. They claim that it has a lower proof-of-work workload, and that it can have offline periods while still adding transactions when it reconnects. About the blocks: the gray blocks at the edges — the tips — are the new blocks being added; the red blocks are those that have been verified by some, but not completely proven.
E
The green ones want a complete proof: the red ones have been endorsed by some but not all, while the green ones have actually been endorsed, directly or indirectly, by all the later nodes — that is, the system has accepted them as permanent transactions. So this is how IOTA works. Next slide. Basically there are three steps: when you want to make a transaction, you sign the block, like you normally do even in a blockchain; then, for tip selection, among the blocks that have not been entirely proven—
E
—you pick one — actually, you have to pick two — and add yourself onto them; and how you pick is where the math comes in, which I'm not going to cover. Basically, if you do this MCMC — Markov chain Monte Carlo — selection, then you can see that it keeps the mesh coherent.
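The flavour of tip selection by random walk can be shown in a few lines. This is only a toy sketch in the spirit of the MCMC selection mentioned: the real IOTA walk weights each step by cumulative approvals, which this sketch deliberately omits, and the DAG here is invented.

```python
# Toy tip selection: walk forward from a starting transaction toward
# transactions that approve it, until reaching a tip (a transaction
# with no approvers yet).
import random

def select_tip(approvers, start):
    node = start
    while approvers.get(node):
        node = random.choice(approvers[node])
    return node

# genesis <- a <- b ; genesis <- c   (so b and c are the tips)
approvers = {"genesis": ["a", "c"], "a": ["b"]}
```

A new transaction would run this walk twice to choose the two tips it attaches to and endorses.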
E
But the original IOTA still has this so-called reduced proof of work. Since everything is anonymous, they still need the proof of work to admit you: in addition to attaching to two previous tips, you still have to do this proof of work to add yourself onto the chain. Next slide. So now, what we propose to do is to get rid of the proof of work entirely for this kind of system.
E
It is very appropriate for the use case here, because when your home contributes into the power grid, you want to know exactly how much you contributed — that it's not credited to your neighbors. So the use case itself requires identity, and with an authorized identity there is proof of how much each home has actually contributed. And the fact is that the Operant boxes are already running—
E
—NDN, and NDN already gives every node an identity, together with a certificate. For people who haven't heard about NDN: NDN essentially names the data, and uses data names to network. Therefore every node and every packet has a name, and every packet carries a signature, so by design NDN packets are cryptographically secured — and that is what becomes the transaction.
E
So there's a strong synergy between the existing NDN product that they have and our proposed proof-of-work-free distributed ledger, and the plan is to try to integrate these two together. Next slide. So how does this Operant Solar-based distributed transaction work? You do the same thing: MCMC selection from this IOTA-style distributed mesh — you pick two tips — and then you create and sign the data packets.
E
That's it. The beauty, besides that, is that NDN comes with names and with security already, by its nature, and NDN has multicast delivery built in. In addition to that, there is another NDN utility for distributed dataset synchronization.
E
Normally we just call it NDN Sync. You can think of it this way: in the current protocol stack, IP gives you unreliable datagram delivery, and you have applications that want reliable data delivery, so you have this so-called transport layer — TCP has been that, and now we're working on QUIC — to bridge the gap between what the network delivers, that is datagrams, and what the application wants.
E
We still have the same kind of gap in an NDN network, because the network delivers data packets and what applications want is reliable delivery — and in the majority of cases, when you look closely, the application's state is really not just one client talking to one specific server; think about servers being replicated, and other cases. So in the end, what you want is something that allows a group of entities sharing the same application to maintain a consistent dataset.
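One common way to let a group converge on a shared dataset is for peers to exchange version vectors and fetch whatever they are missing. This sketch mirrors the spirit of that idea only; it is not the interface of any specific NDN sync protocol, and the data shapes are invented.

```python
# Hedged sketch of dataset synchronization via version vectors: each
# producer numbers its items, and comparing vectors tells a peer
# exactly which (producer, sequence) items it still lacks.

def missing_items(local_vv, remote_vv):
    """Version vectors map producer -> highest sequence number seen.
    Return the (producer, seq) pairs the local side is missing."""
    wanted = []
    for producer, remote_seq in remote_vv.items():
        local_seq = local_vv.get(producer, 0)
        for seq in range(local_seq + 1, remote_seq + 1):
            wanted.append((producer, seq))
    return wanted
```

Two peers that repeatedly exchange vectors and fetch the reported gaps eventually hold the same dataset, which is the consistency property the talk is after.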
E
I believe my collaborator at Operant Solar did a little demo to show that their mesh network actually works, but that's not exactly related to this topic. So, last slide — we uploaded it to the proceedings. This would be the demo, but we don't have time to show it here. This is the end of my talk. Thank you.
E
G
G
E
G
Is that permanent log important for the power grid? This is the question. You keep a record of things so you can really go back over issues — but are we worried about Russians bringing down the power grid, or are we worried about people not paying their electric bill, or something like that? What's the actual threat?
E
J
A question occurs to me: so you said that this takes less power than typical blockchains — if you look at a lot of the voting blockchains, they take way too much power. I assume your client, being an energy company, has some power requirements. Have you looked at comparing multiple solutions in order to figure out which one actually consumes the least amount of power — and who's paying?
E
The Operant Solar gateway is actually connected directly to the power grid. What they measure and collect is really about the rooftop solar generation; it does not depend on the solar generation to operate — that's one thing. But secondly, I want to be honest that we haven't done a computed comparison; just intuitively, if you do a simple signature verification, it should be cheaper than any of the proof-of-work schemes, or those additional things like proof of stake that were developed recently — but that, I think, has other issues.
E
T
One simple question: a lot of our IoT devices have limited CPU ability, limited battery, and limited memory. I don't know whether this solution you proposed can be used for such limited-capability IoT devices — that is, how does your solution cover this?
E
I'm not talking about a general solution for every IoT device under the sun; this is a specific solution to secure the solar data collection system. But I would like to say that we do have work on using NDN to support IoT — there's code for that, and the person sitting in the back probably knows more than I do.
T
A
A
S
A
U
U
The problem statement is: what if LISP xTRs didn't need to depend on a third party? LISP is a protocol architecture that's been around for about a decade. It's an overlay solution that maps EIDs to locators, so EIDs can remain fixed while locators change. An xTR is a data plane component of LISP, for those of you that don't know LISP. So what if your xTRs didn't need to depend on a third party — in other words, a third party that operates the mapping system between the EIDs and locators? What if the xTRs—
U
—could multi-home and inform each other when there are changes in an RLOC, a routing locator? A locator is a topologically significant address that's in the underlying routing system. What if the xTRs could be their own mapping system, in a peer-to-peer fashion? Let's build a purely democratized and decentralized control plane — that's what this talk is about. So, throughout the presentation, wherever you see green we're talking about endpoint IDs — these are not topologically significant — and wherever you see red we're talking about routing locators.
U
U
Okay, so it turns out that the mapping system is very much like DNS: it's in the infrastructure, so you depend on it. You depend on it, you might have to trust it, you usually do trust it, and it usually gives you good answers, but that's not always guaranteed. But it turns out that a lot of topologies show that maybe this south and east site has backdoor connectivity, just like the north and the west site; maybe there's some wireless infrastructure there. And what happens is, let's see if I can do this fast.
U
What happens if the links to where the third party is go down? Now, it turns out that the sites could still talk to each other, but they don't have a mapping system. They don't even have DNS either, right? We still want them to function, we still want them to run, we still want them to move and multi-home, okay.
U
Okay, so how do we take this model and make it distributed? Can we have a decentralized map server? What if each xTR itself was a map server? What if each xTR could map-register to each other xTR? The mapping system would be synchronized because everybody is talking to everybody, very similar to a routing protocol, but it's really over the top. Okay, and the xTR could actually be a map resolver for itself, so when it sends a map-request, it doesn't have to go outside.
U
So let's define how this decentralized mapping system could look. A consolidated mapping system is identified by a multicast group address. Okay, let's say that the map server is a multicast group, not an individual IP address. The xTRs that are part of the mapping system join the same multicast group, and the map-registers are sent to the group, which means all xTRs receive all mappings.
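The idea just described, every xTR being its own map server with registrations fanned out to the whole group, can be sketched as follows. This is a hypothetical illustration, not the actual lispers.net code; the class and field names are made up.

```python
# Sketch: every xTR keeps its own copy of the mapping database, and a
# map-register sent "to the group" is delivered to all members, so the
# copies stay synchronized.

class XTR:
    def __init__(self, eid, rloc):
        self.eid = eid          # endpoint identifier (stays fixed)
        self.rloc = rloc        # routing locator (may change)
        self.mappings = {}      # local copy of the shared mapping database

class MappingGroup:
    """Stands in for the multicast group that identifies the mapping system."""
    def __init__(self):
        self.members = []

    def join(self, xtr):
        self.members.append(xtr)

    def map_register(self, sender):
        # One logical send; the group delivers the mapping to every member.
        for member in self.members:
            member.mappings[sender.eid] = sender.rloc

group = MappingGroup()
xtrs = [XTR(eid, rloc) for eid, rloc in
        [("1", "10.0.0.1"), ("3", "10.0.0.3"), ("5", "10.0.0.5")]]
for x in xtrs:
    group.join(x)
for x in xtrs:
    group.map_register(x)

# Every xTR now holds the same database, so it can be its own map resolver.
assert all(x.mappings == xtrs[0].mappings for x in xtrs)
```

The point of the sketch is only the synchronization property: after everyone registers, any member can answer a map-request locally.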
U
It's efficient distribution when the underlay supports multicast, because rather than one router sending N packets to all the other xTRs that are part of a consolidated mapping system, it sends one message and the underlay delivers it efficiently with IP multicast. When IP multicast is not available in the underlay, we can use head-end replication. We have something in LISP called signal-free multicast, where we use the mapping system to store (S,G) entries, and we find out who the receivers are and where the members are located through the mapping system.
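The head-end replication fallback just mentioned can be sketched like this: the mapping system holds an (S,G) entry listing the member RLOCs, and when the underlay offers no IP multicast, the sender itself replicates one logical packet as N unicast packets. All names, addresses, and structures here are illustrative, not from any real implementation.

```python
# Sketch of signal-free multicast lookup plus head-end replication.
# The mapping system stores (source, group) -> member RLOCs; "*" is a
# wildcard source.
mapping_system = {
    ("*", "224.1.1.1"): ["10.0.0.1", "10.0.0.3", "10.0.0.5"],
}

def send_to_group(source, group, payload):
    """Look up the (S,G) members, then head-end replicate as unicast."""
    rlocs = mapping_system.get((source, group)) or mapping_system[("*", group)]
    # One logical multicast send becomes N unicast packets at the head end;
    # the replication cost is paid by the sending router, not the network.
    return [(rloc, payload) for rloc in rlocs]

packets = send_to_group("10.0.0.9", "224.1.1.1", "map-register")
assert [rloc for rloc, _ in packets] == ["10.0.0.1", "10.0.0.3", "10.0.0.5"]
```

With native IP multicast the same delivery happens from a single send; the sketch only shows where the fan-out moves when the underlay can't do it.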
U
So we could actually iterate and use the LISP data plane to simulate multicast on the overlay if the underlay doesn't support it. So here's an example of how it could work. The guys on the west-hand side are part of the same mapping system; they're the EIDs 1, 3, 5. The ones on the right-hand side are 2, 4, 6; they're completely separate. Let's say they have no connectivity to the Internet, but they can still talk to each other. This would be an example of how it would work: they're all part of the mapping system.
U
They each have their EIDs 1, 3, and 5. They register to 224.1.1.1, which is a multicast group. They're all joined to the multicast group, so they all receive each other's registrations, so they all have a synchronized copy. The right-hand side is the same thing, just using IPv6.
U
Of course, you don't have to have separate IPv6 and v4 mapping systems; they could all be merged, and you could have different types of EIDs. So, benefits: xTRs only depend on each other, and they already depend on each other because they want to talk to each other. There's no third-party trust or dependency that exists. The map-request lookup has very low latency.
U
It's low latency because it's just a map-request message that you can loop back to yourself, or, if your implementation wants to access the data structure directly, it could be even faster. xTRs can build and send one map-register for N xTRs because of the multicast routing capabilities. Also, everybody in some consolidated mapping system will discover each other just by joining the multicast group, so resource discovery is automatic. Management is simplified by accessing one xTR to get all mappings, kind of like a link-state database.
U
U
U
The high-level use cases that we think this will be useful for: to be as consistent at the network layer with decentralization as we are with cryptocurrency applications. So if peer-to-peer networking is going on for cryptocurrencies, we think we could do it at the network layer as well. Emergency networking, or mesh networks, is very important; you know, what if you lose power amidst hurricanes, earthquakes, or whatever? This is very useful. Wi-Fi Direct on cellphones can make use of this sort of technology, and people could still move around, okay.
U
You know, when there's an emergency you're moving around quite a bit, and you may have LTE or 5G access in the future, but you may not, so you're hopping around between Wi-Fi hotspots, and so your IP address is changing, your RLOCs are changing. You want that binding to be known to everybody that's relatively in signaling range, so you can keep talking to each other.
U
We think this is going to be good for plug-and-play VPN networking, right? There are a lot of container-based microservice, micro-segmentation type applications that just want to come up, do an `ls -l`, and shut down, and they may want to talk to each other. So rather than having to configure, or go really far out to get your mappings, which requires additional latency, you could do this stuff locally. Space networking on software-defined satellites.
U
You might want to do things up in orbit, because the distance from space to down here is pretty far. Shareable-economy apps, just in general, I guess. I think the comment was made earlier that you want a sort of reward for doing work, so you want the cost of running the network to be low, so you want to be able to share bandwidth, CPU resources, all this, when you don't have Internet connectivity but you do have network connectivity.
U
So we did an implementation of this, and here's just a brief demo. We had three containers, each running the lispers.net xTR; that's my implementation. They were running in Docker containers, but the Docker bridge was not doing multicast, so we wanted to show that the underlay did not support multicast, so the xTRs are doing head-end replication in their data plane. And the xTRs are registering both an IPv4 EID prefix as well as a name EID.
U
If you wanted to, in this emergency situation where you didn't have DNS, you could put names in the mapping system, and the names could be resolved to EIDs. So this is just an example. M1 is one of the containers; the first thing you see is EIDs being registered: 1/32, which is M1's EID, and its name. You see 2 and 3 are also there. If I just look at the next one, they're all just going to be synchronized.
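The name-EID idea in the demo, resolving names without DNS by storing them in the same mapping database, can be sketched as a two-step lookup. The keys and values below are made up for illustration and are not the demo's actual output.

```python
# Sketch: the mapping database holds name EIDs alongside address EIDs, so a
# name resolves to an EID, and the EID resolves to an RLOC, both via
# map-requests that never leave the local xTR.
mapping_db = {
    "name:m1": "1.0.0.1/32",   # name EID -> address EID
    "1.0.0.1/32": "10.0.0.1",  # address EID -> RLOC
}

def map_request(key):
    """A lookup an xTR can loop back to itself, since it holds the database."""
    return mapping_db[key]

eid = map_request("name:m1")   # resolve the name, no DNS involved
rloc = map_request(eid)        # then find where that EID currently is
assert (eid, rloc) == ("1.0.0.1/32", "10.0.0.1")
```

If the device moves, only the EID-to-RLOC entry changes; the name-to-EID binding stays fixed.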
U
Now you see this special entry here called (0/0, 224/4). That's actually the mapping system saying it allows anybody to join any IPv4 multicast group. It turns out, since this mapping system was running on 224.1.1.1, that the 224.1.1.1/32 entry contains the information of all the members of the group. This is part of signal-free multicast. What it means is that 224.1.1.1 has the three routers here, and those are the RLOCs of the underlying network.
U
Those RLOCs could be IP addresses on Bluetooth interfaces that allow three phones to talk to each other. Okay, so that's what's in the mapping system. And this is in one of the routers, showing a particular forwarding entry: when the mapping comes down from the LISP control plane to the data plane, a packet that matches this entry will be replicated to those three unicast addresses.
G
U
U
H
U
U
P
U
Your application is going to want to know that you want to talk to an IP address that's running behind that xTR, and since you start sending packets to it, what you're going to do is try to find the routing locator associated with that device, right? That thing must already be registered to the mapping system and must be authenticated to the mapping system.
U
Not trust, no. The way authentication works with map-registers is: when you send a map-register to a map server, you sign the map-register, and then the map server verifies the signature by looking up the public key, and it looks up the public key in a mapping database, which is its own. So you could say that's decentralized.
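The verification flow just described can be sketched as follows. This is purely illustrative: HMAC stands in for the real public-key signature scheme just to keep the sketch self-contained, and the database layout is an assumption. The key point modeled is that the verifier fetches the key from its own copy of the mapping database rather than from a third party.

```python
import hashlib
import hmac

# Each xTR's local copy of the mapping database also carries key material.
mapping_db = {"eid-1": {"rloc": "10.0.0.1", "key": b"registered-key-material"}}

def sign_register(eid, rloc, key):
    """Sender signs its map-register (HMAC stands in for a real signature)."""
    msg = f"{eid}->{rloc}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_register(eid, msg, sig):
    """Verifier looks the key up locally; no third party is consulted."""
    key = mapping_db[eid]["key"]
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

msg, sig = sign_register("eid-1", "10.0.0.1", b"registered-key-material")
assert verify_register("eid-1", msg, sig)
# A message the sender never signed does not verify.
assert not verify_register("eid-1", b"eid-1->10.0.0.2", sig)
```

In the real protocol the signature would be asymmetric, so registering a public key does not reveal a signing secret; the sketch only shows the lookup path.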
U
U
There's multiple ways of handling that in the protocol. Right now the standard says you send map-registers periodically; they're sent at one-minute intervals. If you think that's too slow, because you want better convergence, there's a want-map-notify bit in the map-register, where you get a map-notify response to the map-register; that's basically an acknowledgment. And when you're in that mode, it means it will be retransmitted much faster until you get a map-notify.
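The two refresh modes just described, periodic registration versus acknowledged registration, can be sketched like this. The retry limit and the acknowledgment model are illustrative, not the timings from the spec.

```python
# Sketch: with the want-map-notify bit set, the xTR retransmits the
# map-register quickly until a map-notify acknowledgment arrives, instead of
# waiting for the next periodic one-minute refresh.

def register_until_notified(server_acks_on_attempt, retry_limit=10):
    """Retransmit the map-register until the map server sends a map-notify."""
    for attempt in range(1, retry_limit + 1):
        # ... send map-register with the want-map-notify bit set ...
        if attempt >= server_acks_on_attempt:
            return attempt    # map-notify received: stop retransmitting
    return None               # gave up without an acknowledgment

assert register_until_notified(server_acks_on_attempt=3) == 3
assert register_until_notified(server_acks_on_attempt=99) is None
```

The trade-off is the usual one: acknowledged mode converges faster after an RLOC change, at the cost of extra control-plane messages.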
U
C
There is a slide at the end of the agenda deck about this, but we had more requests for time on the agenda than we had time available. So we've had some people advising the possibility of having side meetings, and so I'd definitely take it to the mailing list. I know that Bo Young has expressed interest in discussing PKI and blockchain and their similar applications. So please, as I said, take this to the mailing list.
A
Right, for the rest of the meeting we wanted to get a bit of feedback from you. So we heard, you know, a couple of different interesting presentations today, and also at that interim meeting, ranging from ideas for use cases to particular solutions, to pointing at problems with, for example, certain ledger technologies and different consensus mechanisms.
A
I
And Nick Johnson of the Ethereum Foundation, regarding internetworking; there was also Polkadot, which is an effort by Parity Technologies, and Plasma. The challenge largely is that currently they're so diverse and differ so much in capabilities that it is difficult to build a general-purpose system for consensus, but I think that if we can make progress towards that, this would probably be a good one to continue.
C
One thing worth pointing out is: we're not really a blockchain group, right? We're really trying to deal with decentralizing Internet infrastructure, and there are various technologies which may or may not be used. Blockchain is an obvious one, in which there's a lot of interest, but we're probably not working on blockchain protocols per se; we're interested in consensus protocols because there are these scaling issues when you're talking about something at Internet scale.
C
I
C
And what we're seeing is a lot of proposals being made about the use of blockchains where they're not really dealing with the consensus aspect of it very thoughtfully, frankly. And, you know, I also chair an IETF working group where, in a lot of ways, a permissioned blockchain is basically a transparency log, right? Yes.
I
C
U
Cedeno here. I assume the RG cares about interoperability at any level, even though you don't want to do blockchains. I'm thinking of the other thing that's probably useful: today a wallet supports, or tries to support, all blockchains, and that's not very efficient. So do you think we could standardize a protocol between wallets and blockchains, so the blockchains could use the same interface? And this doesn't mean a RESTful interface, which everybody seems to want to use.
G
G
You know, the HSTS preload list or something. So I think one thing to work on is taking things that currently are a pain to do and making them sort of self-serve, so that you can, you know, basically do the equivalent of Let's Encrypt, but for all these other things you may want. Like, maybe you want, yeah, you know, HPKP; you want to be on the HPKP preload list. There should be some way to get that done for your domain name.
V
I'm from the University of London. I think this group is really good at the moment in terms of the consensus side, because we're doing some research on proof-of-work algorithms, and some of the economic research we've done shows, if you look at the ASIC miners involved in SHA-256 hashing, just the capital expenditure on those.
V
If you take the daily hash rates in Bitcoin, at about ten grand each, it would be about four billion US dollars. So the sooner consensus algorithms sort of settle on something that is used more ubiquitously, rather than just scrypt hashing or SHA-256 hashing, I think that's something really positive, because the growth curve for the purchase of ASIC miners has been exponential since the beginning of 2017.
E
Hi, it's me again. I have a question for this group: if you were creating it now, would it be called a blockchain research group? And the answer would be no. So therefore I think we should focus on decentralized infrastructure, and not on blockchain specifically; that can be a potential tool that may be utilized, but that still deserves investigation. Otherwise, I think for decentralized infrastructure there are other potential solutions, and in this group I would suggest we explore and not limit ourselves to blockchain.
O
This is Marios. Actually, I will follow on to what Alicia said. For me, I was here today because I thought there were incredible problems when you start decentralizing the Internet. I think the idea of the trust and the distribution of the authentication, the consensus, is fantastic, but there are still people out there who think that the only way that you can make networks very efficient is to have them under a certain,
O
you know, completely non-distributed infrastructure. So is it going to be part of this group also to see what the other problems are, in terms of management, in terms of, you know, performance, in terms of all kinds of things that are very important when you start, you know, blowing up something that is centralized and it becomes a bunch of islands talking together?
C
A
That's a good question; it's not easy to answer, because, I mean, there are certain, you know, networking or communication systems, like ICN, for example; I mean, they have their own group, and they'll probably deal with the management and self-organization features for their system. So one way I can see it is: for everything that has to do with distributed computing and consensus, there could be some kind of focal point in this group that takes the requirements from all these different networks, and what have you, and discusses this here.
O
I actually thought that there was going to be more. Like, yeah, maybe there's stuff done in other groups, but I think that's becoming the problem: there are a lot of pockets of people thinking about decentralization in other groups, and they thought this one would probably be the focus and then have more. And I agree that the consensus stuff is very important, but when I think about it, I think also of other things. And again, if the work is done in another group, there was a nice one yesterday.
O
A
A
So I think what will not work, I mean, is if we make the master plan on how we organize decentralized systems and how this would all work, and control this. I mean, this is really contribution-driven. So, for example, if there's a topic that has to do with decentralization and has some interesting research questions, please bring it here and we can discuss it. Okay.
C
Hi, Shuler from Intel. I guess I showed up here, I've shown up here a couple of times now, because, well, I agree that the consensus algorithms are really important, but I think the motivating factor for me is that when I look at what's going on in the IoT, there are two things that I think we should be addressing. The first is that, yes, we're an organization that worries about connectivity, but increasingly there are lots of things that are disconnected, either intermittently or for long stretches of time.
C
So I liked Dino's presentation, in that there's this recognition that, you know, not everything is connected all the time. And so how do you bootstrap? Particularly, I mean, yes, the federation and the directory is super important, finding things and what their attributes are. But then, coming back to some of these talks earlier about trust: how are we going to trust the things that we found, whether they're directories or whether they're nodes? And so here's
C
my second point, the other reason why I showed up, which is that the Internet of Things creates a tremendous amount of data. Thinking about managing not just nodes and the trustworthiness of nodes: can we somehow, through chains of trust, trust the nodes that in turn create the data? And this goes back to Alicia's talk. You might be using brokers to attest to the authenticity and the trustworthiness of the data, and so I think that the solar example is quite interesting.
C
Because once you have trustworthy data, you can now do analytics on the data, maybe trustworthy analytics, and, getting back to critical infrastructure, you've got trustworthy decisions that you're making. So those are two things I would love for us to work on: the disconnectivity and then this chain of trust. So that the things that get actuated in the real world, particularly for critical infrastructure, are things we can trust, so that, you know, things don't break. So that's why I'm here; I'm hoping that we engage.
C
Oh, so, back to the point that was made before I came to the mic: maybe we don't have clearinghouses, but clearly there's a broad collection of things we could work on. This could be a great place for people with such disparate interests to come together just to coordinate what they're doing. So maybe it's not a clearinghouse so much as a coordination mechanism.
G
David from Stanford. I agree with what you said; I just want to be pedantic that blockchains cannot attest to the authenticity of data. They can maybe attest to the immutability of data, but I actually think it's important to always keep that in mind, because somehow there's just, like, mission creep, where people start bringing up blockchains as if they'll solve everything. Good point. So.
C
I'm the IRTF chair, and I think that, well, let's make sure that we use the high-value face-to-face time really well. So it's a great time for understanding fundamental problems, really focusing on hard stuff together, learning to talk about hard stuff together. You can put a clearinghouse on the wiki or whatever, but I am very tough on the RGs, because I don't want them to have such an assortment of stuff that allows you to sort of drift off when it's not your topic. I really want everybody in the room focused on.