From YouTube: Ethereum 1.x Afternoon [Day 3]
A: So the gas model is similar to the EVM. We have WebAssembly opcodes just like we have EVM opcodes. Each of these opcodes has a cost, and when the contract gets executed, the cost of the execution is the sum of all the opcodes that were executed. So that's how we charge gas for a WebAssembly, or eWASM, execution. So we have options; I'm going to go over five options, and then that's the whole presentation.
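The gas rule just described (total cost equals the sum of the costs of the opcodes executed) can be sketched as follows; the opcode names and cost values are illustrative placeholders, not the real eWASM cost schedule.

```python
# Hypothetical opcode cost table; the names and numbers are illustrative,
# not the actual eWASM/EVM schedules.
OPCODE_COST = {"i32.add": 1, "i32.mul": 3, "local.get": 1, "call": 5}

def execution_cost(trace):
    """Gas charged = sum of the costs of the opcodes actually executed."""
    return sum(OPCODE_COST[op] for op in trace)

trace = ["local.get", "local.get", "i32.add", "call"]
print(execution_cost(trace))  # 1 + 1 + 1 + 5 = 8
```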
A: Option two is metering injection. This is what we're doing right now. It is an optimization based on something we know: WebAssembly has a specific structure where we can only jump to specific locations, and we know that certain chunks of opcodes, certain sequences of opcodes, must execute in sequence. So for each of these (I guess they call them basic blocks) we inject metering. We already have a few implementations of this injection, and in this example we injected these two WebAssembly instructions. This is WebAssembly code, by the way, and what it means is: `i32.const 3`, `call $useGas`. So we're using gas for the whole of the next three opcodes, the ones under those yellow ones, and then this becomes the eWASM that we execute. So now we're only charging gas one time instead of three times, and perhaps the block might be longer. So this is an optimization. Can we do better? That's the next question. Maybe; I hope so. Okay.
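The injection scheme just described can be sketched over a simplified opcode list; the cost table, the block-terminator set, and the `useGas` marker are illustrative assumptions, and a real injector works on actual Wasm bytecode rather than strings.

```python
# Sketch of metering injection: split code into basic blocks (straight-line
# sequences that must execute together) and prepend ONE useGas charge per
# block, instead of charging per opcode at run time.
OPCODE_COST = {"local.get": 1, "i32.add": 1, "i32.mul": 3}
BLOCK_ENDERS = {"br", "br_if", "return", "end"}  # illustrative terminators

def inject_metering(ops):
    """Return a new opcode list with ('useGas', n) inserted before each basic block."""
    out, block, cost = [], [], 0
    for op in ops:
        block.append(op)
        cost += OPCODE_COST.get(op, 0)
        if op in BLOCK_ENDERS:            # block boundary: flush the block
            out.append(("useGas", cost))
            out.extend(block)
            block, cost = [], 0
    if block:                             # trailing block with no terminator
        out.append(("useGas", cost))
        out.extend(block)
    return out

metered = inject_metering(["local.get", "local.get", "i32.add", "end"])
# One useGas(3) charge instead of three separate per-opcode charges.
```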
A: So this idea of upper-bound metering came up. At deploy time, we would like to be able to statically analyze the contract and find an upper bound, for any execution path, for all execution paths, on how much gas this contract can use, and at call time we just charge this gas. Now, I realize that sometimes the actual usage might be much lower.

Okay, so this upper bound may be larger than the average case, for the average contract execution, because we might exit early, like right away, because our input size is too small or too big, and so we might overcharge people sometimes. But it's simple: for a given contract we just charge gas once, and it's more efficient because we're not constantly charging gas. So it would be sort of elegant to have this. So the question is: how can we do this sort of computation, this static analysis? It turns out...
A: No, unfortunately, we can't. For a given contract, there's no effective procedure to do this sort of upper-bound metering, because we don't even know if we can bound the runtime of an arbitrary contract. This is, I guess, the decision problem, the halting problem: we can't decide whether a given arbitrary piece of code will halt. For specific examples we can decide, but for arbitrary examples we do not have an effective procedure.
A: Consider a Vyper-like model, if we used it for eWASM, where we have no recursion and everything is bounded: every loop has a loop variable that just acts as a loop variable and nothing else, and gets incremented. Then we can put an upper bound on things, and maybe there are some other things we would have to do, but this is just an idea to think about. There is some elegance to it, I think, because for each contract we just have an upper bound and that's what we charge. Is the EVM currently Turing complete? The yellow paper says it's quasi-Turing complete. I guess maybe the language itself is Turing complete, but with the gas limits you can't enumerate an infinite set, so in some sense it's not Turing complete. So this is just an idea that's been around for a long time, but it's just an idea.
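The bounded-loop idea can be illustrated with a toy calculation: if every loop has a compile-time iteration bound and there is no recursion, a worst-case gas figure is computable at deploy time. The cost numbers and function names below are made up, and a real analysis would also have to handle nesting and calls.

```python
# Toy upper-bound metering for Vyper-style bounded loops.
def loop_upper_bound(iter_bound, body_cost, loop_overhead=2):
    """Worst-case gas for a loop with a static iteration bound."""
    return iter_bound * (body_cost + loop_overhead)

def contract_upper_bound(loops, straight_line_cost):
    """Contract bound = straight-line cost + the bound of every loop."""
    return straight_line_cost + sum(loop_upper_bound(b, c) for b, c in loops)

# e.g. two loops bounded at 10 and 100 iterations:
print(contract_upper_bound([(10, 5), (100, 3)], straight_line_cost=40))  # 610
```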
A: Okay, so this formula will grow exponentially fast in some cases, so there might be some pathological code where this formula of the input parameters and state will just be arbitrarily large; we'd be better off just computing the contract and not the gas cost. But in some cases, for some precompiles, this might be very reasonable. Like this keccak example: we might use a formula.
A
We
have
some
ideas
of
how
we're
gonna
do
this,
maybe
some
symbolic
execution,
maybe
some
what
I
call
interval
arithmetic,
so
we
execute
explore
execution
paths
came
for
a
framework.
Does
this
and
I
might
build
something
that
does
this
too?
On
pi
web
assembly,
which
I
wrote
or
cpp
web
web
assembly,
which
I'm
writing
and
yeah.
C: Yeah, the K framework team is a fan of that option. So, a question about this keccak plot: let's say for input length 3000, did you try all possible byte sequences of three thousand bytes? No?
A: We understand the structure of keccak, and we're saying that the input is arbitrary, it doesn't matter; they're just doing XORs and bit shifts and things like that, and they don't really care about the values. So we have this sort of previous knowledge of how the function works. But if someone's deploying something new, then we don't have this sort of domain knowledge, so we can't really do that.
D: So I think we could actually maybe combine these ideas: the ideas of the injection metering and the Turing-incompleteness in this sort of function. You basically try to do the static analysis first; if it fails (there's some criteria for failure)...

D: If it succeeds, then you basically get the best situation, and maybe you even kind of reward people for it, because if they know how the static analysis works, they'll probably try to make whatever contracts they write amenable to the static analysis to get some sort of benefit. If, however, they don't care, or they don't know how to do that, they just fall back to the easiest option, which is the injection metering or whatever. What do you think about that?
A: Yes. I've had this idea that we might have two classes of contracts: one where the user might request... sorry, the class where they request this upper-bound metering, and then they save on gas because they do this ahead-of-time metering. When they're deploying their contract, they'll put some sort of flag to, you know, do this, and then we would have to bound the algorithm that does this sort of static analysis.

A: You know, if there are only a few execution paths that it can actually go through, then it can do it in a reasonable amount of time, and then they can save: they can amortize that cost over all the executions in the future. So we would have some flag at deploy time that says, yes, do this. Is it worth it? I don't know. Is this metering so expensive? Maybe just the injection is fine. My sort of dream... I didn't finish my talk; there was actually one more point. Option 5 is a metering coprocessor.
A
Imagine
that
eventually
this
will
be
done
in
hardware
and
that's
why
I
don't
want
to
back
ourselves
in
the
corner
with
metering
injection
where
we
must
do
metering
injection
I'd,
rather
it
be
sort
of
user
users,
decide
how
they
want
a
meter
and
if
they
want
to
invest
sort
of
resources
in
this
ahead
of
ahead
of
time
stuff
or
if
we
want
flags
and
things
like
that,
so
because
eventually
it
might
be
done
in
hardware
anyway.
So
that's
it.
A
That's
what's
happening
now:
I
think
that
there's
some
extra
compute
time
to
do
the
metering
and
that's
that
might
be
too
expensive
I
don't
know.
Is
it
too
expensive?
That's
a
good
question,
but
yeah
that's
the
interpreter.
The
VM
is
doing
it
now
and
we're
doing
the
basic
black
one
now
for
our
test
net.
Then.
A: So, re-metering, on the top right: for options two, three and four we would have to, you know, go through and redo things. We would have to re-inject new metering, and we would have to redo our static analysis based on the new costs, because maybe the worst-case path might be different based on the new gas costs. So we have to redo everything if we change gas costs. There's an overhead there, too, yeah.
H: Yeah, I guess this follows up on that. One thing you could do is not try to do the static analysis over the whole program, but do it over, let's say, a particular function, and then you can see: oh, this loop always repeats this many times; I don't need to meter it per opcode, I can do it per block instead. Yeah.
I: With, say, JIT optimization, and optimization in general, how do you predict what the gas cost is going to be at the end of the execution? What's the total gas used at the end of the execution? How do you determine that with all the optimizations?
A: No, you can. Once you have that upper bound, it's fixed, it's consensus; then you can do whatever optimizations you want. You could even not execute the Wasm at all: you could execute a hand-written, hand-optimized FPGA implementation of, say, Keccak-256 if you want, whatever, as long as you're charging that gas.
D: Okay, so we're back, I think. I just forgot to mention this before I go into the snapshot sync discussion. First, I have made an addition to the framework. This is the framework document on state management, state size management, so I added another item, number 3, here.
D: So in my naive understanding, you needed 64 hashes; in a more refined understanding, it's 32 hashes, which is, you know, 1,000 or 2,000 bytes, and it takes 70K or 140K gas to pay for as transaction data. So I wanted to clarify: this is in the situation where you read the data (you basically want to prove that you're reading the correct data), and also the case where you actually write it, right?
H: You want to prove the old value, yeah. In the simplest case, where you have one value, you would need 32 hashes to hash it back to the root, and then, in order to update it, you can reuse the same Merkle path, because all of the side branches didn't change in the process. So that I understand; but more complex scenarios make the situation much worse, fast.
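The "32 hashes" point can be sketched for a binary Merkle tree of depth 32: verifying one leaf needs one sibling hash per level, and an update can reuse exactly the same siblings because no side branch changes. This is a minimal sketch with sha256 standing in for the real hash, and the function names are illustrative.

```python
import hashlib

def h(a, b):
    """Hash two child nodes into their parent (sha256 as a stand-in)."""
    return hashlib.sha256(a + b).digest()

def root_from_proof(leaf, index, siblings):
    """Fold a leaf up to the root using its Merkle path: one sibling per level."""
    node = leaf
    for sib in siblings:
        node = h(node, sib) if index % 2 == 0 else h(sib, node)
        index //= 2
    return node

def updated_root(new_leaf, index, siblings):
    """An update reuses the identical sibling list; only the path itself changes."""
    return root_from_proof(new_leaf, index, siblings)
```

For a tree of 2**32 leaves, `siblings` has 32 entries of 32 bytes each, which is where the "1,000 bytes" figure comes from.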
D: So therefore one of the possible designs, the first path, or a combination of paths, should be to reduce the cost of transaction data, or to increase the block gas limit by, let's say, a factor of 20, as you suggested, Remco, a factor of 20; or you do a combination of the two: you slash the cost and increase the block gas limit at the same time. So that's what I was going to say about the framework; I added it in. Well, let's see; I'll keep iterating through this.
D: This is probably a better illustration: this is the top of this giant Patricia Merkle tree, which is hexary. Hexary means that each node can have up to 16 children, and everything is rooted in the state root. So when a new node starts, it has a state root, which is retrieved from the block header, and the first thing it does is use one of the protocol messages.
D: So you can see here there are messages in the eth/63 devp2p protocol. One of them is called GetNodeData, and it takes one parameter, which is the hash of a node in this Patricia Merkle tree. If another peer responds, it responds with the corresponding NodeData message, which carries the RLP representation of that node.
D: So, obviously, when you receive that response, you can immediately verify that it is exactly what you asked for, because you can hash the RLP and compare it with the hash that you asked for. So it's immediately verifiable. Then, if you look at the picture, imagine that you first reconstructed the node at the left, and what it gives you is an array of up to another 16 hashes; this is probably the better picture.
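That verification step can be sketched in a few lines: the request key is the hash of the node's RLP encoding, so the receiver just re-hashes the reply and compares. sha256 stands in for Keccak-256 here to stay stdlib-only, and `node_key`/`verify_node_data` are hypothetical helper names.

```python
import hashlib

def node_key(node_rlp: bytes) -> bytes:
    """A trie node is addressed by the hash of its RLP encoding."""
    return hashlib.sha256(node_rlp).digest()

def verify_node_data(requested_hash: bytes, response_rlp: bytes) -> bool:
    """Accept a NodeData reply only if it hashes to exactly what we asked for."""
    return node_key(response_rlp) == requested_hash

rlp = b"\xc2\x01\x02"  # some node's RLP encoding (illustrative bytes)
assert verify_node_data(node_key(rlp), rlp)
assert not verify_node_data(node_key(rlp), b"bad data")
```

This is why fast sync can reject a misbehaving peer immediately, per response, rather than after downloading everything.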
D: Imagine that you had a root, and then, after the first response, you've got this first node with 16 hashes, each of the hashes being 32 bytes. Then you issue another 16 GetNodeData requests, potentially to multiple peers, and what you get back are the things on the second layer. For each of these requests you're expecting to get those other nodes, and so forth; and eventually, you know, you can use parallelism here by requesting multiple things from multiple nodes, and you can do pipelining.
D: What you want to do, so eventually, is sync the whole tree. Let's get to the point: the Merkle tree is reconstructed from the root to the leaves, but what you're really interested in is the leaves; you're interested in the accounts. Because if you only had the leaves but not the tree, you could always recalculate the tree; the tree is just deterministically calculable. And, as I said before, each NodeData result is instantly verifiable to belong to the tree, so if a peer starts sending you bad data, you can figure it out pretty quickly, and maybe ban them and then disconnect from them, and things like this. But the biggest problem with this mechanism is that the traffic you require to transmit the tree is several times bigger than the leaves, because there's lots of hashing in there. The good thing about it, though, is that there's no preparation needed.
D: Okay, let's now look at the warp sync in Parity. I don't have a picture for this because I didn't have time, but let's highlight the differences. In order to save on the bandwidth usage, the Parity warp sync only transmits the leaves: it takes all the leaves and then packages them into chunks of four megabytes.
D: And then you can just receive all the chunks. The unfortunate bit about it is that the verification of the chunks can only happen after you've received all of them. So you receive all the chunks, you have the leaves, then you rebuild the tree, arrive at the root, and only then do you know whether the chunks are actually good or not.
D: So you have to traverse the Merkle tree down to the leaves to be able to pre-package them as chunks, and at the moment that's an I/O-intensive operation: it takes about four hours to generate the chunks for the current mainnet. That is due to the discrepancy between how the data is stored and how the data needs to be presented to the peers.
D: So Parity is currently researching a new mechanism called fast warp sync, which solves slightly different problems, as we noticed yesterday. Okay, so let's look at this little thing in the corner, the top right-hand corner. So each block has its... like, each block...
D: So, as far as I understand, the fast warp sync solves this problem by enabling you to fill the holes. Essentially, you say: okay, fine, if you don't have the previous chunk, give me whatever you've got, the latest chunk you've got. Then what will happen is that you will construct as much as you can of the Merkle tree of the past and the Merkle tree of the present, and you sort of compare them.
D: You compare these two Merkle trees interactively with your peer and try to figure out which paths in the tree have changed since you received the data, and then, interactively, using the same mechanism, you fill the holes in your snapshot. I don't know exactly how this is going to work, but I understand the general idea of it, so that would allow you to potentially catch up.
D: Even if you had slightly older information, you could still try to catch up with the network. Again, I don't know the details; I think the implementation has been started, but we will see. But again, the same problem remains here: the verification of the chunks can only happen when all the chunks are received. As far as I understand (I asked Frederik about it yesterday, and he confirmed it), this is still an issue.
D: Some of you are familiar with this, but essentially, if you look at this picture: the first level is basically the values of the first nibble (a nibble being the first four bits of the Keccak hash of the address). Then the second level contains 256 roots, and each root corresponds to a subtree. Then you go to level 3, which has 4096 roots, corresponding to 4096 subtrees in the Merkle tree.
D: So now let's stop at that point, at level 3, and look at one of these roots. This root corresponds to a subtree, and this subtree can be constructed from all the accounts that have a specific 3 nibbles at the start of the hash of their address. So let's say that we have 11D: if we query all the accounts whose hash starts with 11D, then this is going to be the contents.
D: This is going to correspond to our chunk. So our chunk is essentially everything that goes into that subtree of the whole Merkle tree. This is one chunk. As we know, on the third level we have 4096 roots, which means we can have 4096 chunks. Okay, and now let's do some math.
D: So, in order to satisfy the property that each chunk is verifiable on its own, there are two ways to solve this. First of all, because each chunk is a distinct subtree, we can use the Merkle tree for what it is good for: with each chunk, we provide a proof which convinces us that this chunk actually belongs to the tree, if we know the root. And if you have three levels, the proof at each level is about 480 bytes.
D: Alternatively, instead of having each chunk carry its proofs, we can download all the proofs ahead of time, which means we simply download the first three levels of the hash tree, so that we already know the proofs for all chunks ahead of time. Then the peers don't need to send us the proofs; we can verify the chunks already. So this is actually...
D: If it's too big, then we can add a level 4. We do the same logic, but with level 4 rather than level 3, so that instead of 4096 chunks we're going to have about 64,000 chunks, and each chunk will be about 200 kilobytes. Then again we can carry the proofs or we can download some data upfront; it's up to us. And then there's the question of the storage of large contracts.
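The chunk arithmetic above can be checked directly: fixing the first k nibbles of the hashed address splits a hexary trie into 16**k subtrees, and each branch node on a proof path contributes 15 sibling hashes of 32 bytes, which is the roughly 480 bytes per level mentioned. The function names are illustrative.

```python
def num_chunks(levels: int) -> int:
    """Fixing the first `levels` nibbles yields 16**levels subtrees (chunks)."""
    return 16 ** levels

def proof_bytes(levels: int, siblings_per_node: int = 15, hash_len: int = 32) -> int:
    """Proof size: 15 sibling hashes of 32 bytes per branch node on the path."""
    return levels * siblings_per_node * hash_len

print(num_chunks(3), num_chunks(4))  # 4096, 65536 (the "about 64,000")
print(proof_bytes(1), proof_bytes(3))  # 480 bytes per level, 1440 for 3 levels
```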
D: So this whole scheme allows you to download the main account tree, and what's left to do is to download the storage of all the contracts, because, as some of you may know, for contracts the actual leaf contains the storage root for that contract. That allows us to use a similar procedure, but maybe with a smaller number of levels, for large contracts; let's say for things like IDEX or EtherDelta.
D: Where we have millions and millions of storage items, we could do a similar procedure, maybe with two levels or something like that. But if you have very small contracts, you just bunch them up into one big chunk and send them all together, because in that case, if you know that your chunk contains the data of, say, 4000 contracts, you can just verify them straight away, or throw them away if it's a bad chunk. There could be more sophisticated approaches to apportioning the chunks, but I am not...
D: So if we decide to send the chunks with the proofs, you send the chunk, which contains all the leaves of the subtree, together with the proofs. Then what the receiver does is: it takes the leaves, reconstructs the tree, hashes the tree, and verifies the proof. If the proof belongs to the state, it keeps it; if it doesn't, it throws it away and disconnects from the peer. Okay.
J: Is it better now? Okay. Can you say that it's a modification of the fast sync algorithm, but instead of asking for just one level further, you ask for some number of levels? But you don't need the entire subtree: if you ask for four levels, you only need the fourth level, not the intermediate levels, because you can calculate those yourselves. No? Yes?
D: Well, I'm suggesting that, basically, the chunk just goes all the way down to the leaves; the only hashes you would ever send are the ones that are used for the proofs, to prove that the chunk is under the root, that it belongs to the state, yeah. But, of course, what you're saying is that it could be brought to what I'm suggesting by setting the depth to whatever, yeah.
K: Okay, so they put me up really quickly for a quick talk on devp2p discovery: v5, currently v4, and just the rationale for why this discussion is coming about and what we could possibly do, which would actually match very well with the previous discussion with Alexey. So today you do discovery, you connect to all the nodes: you start with your boot nodes, then you expand the number of nodes you talk to, you fill Kademlia buckets, and then you connect to a number of them over RLPx, so you exchange your public keys.
K: You do a handshake: you have a hello message where you talk about what subprotocols are supported on both sides. Eventually, you find out that some of them have the same protocols you wanted to talk about, and then you exchange, for example, an eth status message, and you find out that actually they are ETC nodes, so you can't actually do anything with them. So there's quite a bit of churn, and you do quite a few manipulations to get to a stable number of nodes that you talk to, today.
K: That's a problem for subprotocols which are not very common, like LES or Whisper, and it's kind of resolved by having boot nodes for those particular protocols, so it's easier for you to get started. If you just try to do LES with random nodes, then you don't get that many nodes.
K: Some eth clients will have some range of blocks, but some others won't, so you'll need to find a way to connect pretty quickly to the right node with the right blocks, and this goes well with what you were saying about fast sync: how do I find, in the network, the bunch of peers which I want to talk to? At this point, I don't want to connect to everybody on the network just to find out there are only five of them.
K: So how do we add capabilities when we do discovery? We just add more information at the discovery level: we say which subprotocols we support, and maybe even more metadata that would be specific to your protocol, like what range of blocks we support, today. There are a few considerations. We don't want to create islands. This is taken from the mystical I project; you can see, for example, islands on the right here. So it's easy for you...
K: If you specialize the protocol too much, you create a bunch of nodes that don't talk to anybody else, because they just aren't curious, for example, so we'd be keeping to ourselves. Another thing that's kind of neat about discovery is that it's very generic and not tied to a particular subprotocol, so you don't want to kill that encapsulation by having those subprotocols neatly arranged inside discovery. So a simple possible fix would be: we create two new types of messages in this particular discovery.
K: We can list, for each subprotocol, for example, the chain ID and the range of blocks stored, and you would want to send that on a regular basis as part of the pings and pongs, as you do today, so that you would be able to see what changes over time, because it's possible that you start storing more blocks, or you start dropping more blocks, depending on what you're doing. So here's an example of a v5 ping packet that I made up two months ago.
K
And
we
could
do
more,
so
the
second
you
add
metadata
to
a
ping
package.
You
can
do
a
lot
more,
isn't
gonna.
Add,
like
your
hashtag
about
your
custom
note
you
could
create
even
metadata.
That
would
be
useful.
That
you'd
like
to
share
so
you'd
like
to
say
well,
I
only
support
this
range,
but
I
know
that
don't
support
that
other
range.
It's
kind
of
useful
for
me,
so
just
go
to
those
other
nodes
in
a
little
helps
you
get
your
fast
sink
to
the
next
step,
for
example.
K: So these are just ideas of what would be possible. What's kind of neat about this is that it leaves it to every subprotocol to define what they want to do. If I go back to the v5 slide, you can see LES has kind of its own little things, which are now in the hello/status message that you have when you do LES: whether you're relaying transactions, something like that. So each subprotocol would have a way to encode additional metadata.
E: Can you tell us more about, I guess, the Kademlia layer? Is there a different use of Kademlia in v4 and v5? No?
K: Okay, so Kademlia is just about making sure that you have some level of fairness on the network, right? You want discovery to be as plain as possible; that's why I came up with this slide about avoiding islands. The way you do it is you create 16 buckets of 16 peers, and there's a distance algorithm.
K: It compares your node ID with the peer IDs to decide which bucket they go into. This allows you to create a hashing function that, you know, puts a random selection of peers in your buckets, so that no islands are created based on capabilities or anything like that. We absolutely want a way to keep this as homogeneous as possible, because you don't want to create a situation where you have a network partition because everybody decided that fast sync is much better.
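The distance rule being described is, presumably, Kademlia's XOR metric. A toy sketch of how a peer's bucket is chosen (function names and the 256-bit ID width are illustrative assumptions):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia distance between two node IDs is their bitwise XOR."""
    return a ^ b

def bucket_index(our_id: int, peer_id: int) -> int:
    """Bucket = position of the highest differing bit (log2 of the XOR distance)."""
    d = xor_distance(our_id, peer_id)
    if d == 0:
        raise ValueError("a node does not bucket itself")
    return d.bit_length() - 1

# Half of all random peer IDs differ from ours in the top bit, so the
# farthest bucket covers ~50% of the ID space:
print(bucket_index(0b1000, 0b0001))  # -> 3 (top bit differs)
```

Because the bucket depends only on hashes of IDs, not on advertised capabilities, peers land in buckets pseudo-randomly, which is what prevents capability-based islands from forming.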
K: "We're only going to talk to the v5 peers that have this particular set of ranges," right? That might happen if we're not careful in the way we implement it: nodes will select which nodes are really interesting, and they'll start disconnecting from those that don't have the right attributes.
K: Right now, that's why discovery is very agnostic and does not actually give you that much: it's just giving you pings and pongs and IDs and all that; it doesn't actually tell you anything about the capacities of the node. If we add it in the pings, it might actually be too aggressive, and people may be able to start, you know, filtering which nodes they start to talk to.
K: I came up with this a while ago; those people over there put me up to it, so they can answer you in a bit more detail, but I think it's nowhere near ready. The problem is that chain pruning is creating these issues: having a homogeneous network of nodes won't be the case anymore (it isn't really the case today anyway), and you'll have different nodes with different functions, and they'll start having a lot more different behaviors because they won't have the whole chain.
D: So yeah, the idea was, I mean, the premise was that if you use Kademlia, then your network becomes too structured, and that poses the risk of eclipse attacks, where you can basically surround a node with your own puppet nodes and then just let the node believe whatever you want it to believe.
K: I mean, truthfully, that makes sense, right, as you discover more and more nodes. That's why we are using this Kademlia routing: you're trying to make it so that you're not going to connect to every single node in the network; Kademlia allows some of them. But it has the issue that, I mean, discovery itself is completely unrelated to how you can actually use those nodes in a peer-to-peer protocol. It's just trying to create an element of a homogeneous environment for everybody to participate in. So I'm not answering your question, sorry, but...
I: Yeah, regarding eclipse attacks: all of Kademlia is known to be vulnerable to them, but the S/Kademlia white paper describes the range of attacks that are possible and how to mitigate them, so they're relatively well understood. I haven't read the paper that you mentioned, but I imagine that the attacks presented there were just general attacks on Kademlia, so they might be relatively well understood and mitigated. And the second point is that, yeah, this would possibly require separate discoveries for each purpose.
D: Yes, basically, I'm just imagining now that if we were basing our fast sync (or whatever sync) capabilities on discovery, then obviously there's no way to verify what a node advertises: "yes, I have whatever warp sync data, or I have this thing or that thing." There's no proof that it's actually going to give you something good. But anyway, I guess... I think...
L: Better, yeah. So the attack on Kademlia is basically, as you know, that there's a distance function, and you can use that information to fill the buckets, especially because our buckets are quite small (16). So, for example, you know that you won't be able to fill the last bucket, because it's all random nodes and it covers 50% of the ID space, whereas with the first buckets it's very easy to tune your distance, because you can actually calculate it from the hash of your node and maybe mine IDs for it.
L: We fill all the first buckets, and this way we can eclipse the node from some of the functions that it wants to use. That's the generic attack, and it was quite a serious attack, actually, so it had to be fixed in one of the releases (I can't remember the number), but the fix was kind of a workaround, for example, taking into account the IP addresses in the buckets.
B: I think also that attack goes against a certain node ID, so the attacker has to already be online from the start, attacking you. And it seems like you would already have a few peers in your bucket; it's not like you have no peers when you find a new one, and then when you start, if you do a lookup every time, they can't just have already attacked you beforehand.