From YouTube: Eth2.0 Implementers Call #14 [2019/3/14]
Description
A
B
Dimitri and Paul: the spec is updated to 0.4.0, which was just released, and the 0.5 fixes are coming.
D
We implemented a data provider inside our client to output information to CSV or JSON files. We started profiling our code to look at performance bottlenecks and visualize the call graph. And lastly, we're working with the folks from Whiteblock and with Antoine Toulme on getting a testnet set up and implementing the Hobbits wire protocol.
E
Blocks are correctly proposed and processed throughout their lifecycle, and, you know, peers can concurrently process them without drifting far apart. We fixed a few bugs here and there, mostly related to boundary conditions, and we also started testing multi-node setups. By multi-node setup I mean there are four validators connected — you know, there are other validators in the mix with a beacon node — and then we made sure they sync with one another and that the validators can fail over without being halted.
E
So we will have more updates on the test results next time. We also implemented the LMD GHOST framework; we have some preliminary benchmark numbers, and we want to optimize it. So we're looking at the fork-choice benchmark reports in the series, since they did a great job on that. Other than that, yeah — basically just testing, yeah.
G
So we've kind of taken a break from working on the actual client, and we're currently looking into implementing libp2p in Swift, so we can start working on networking stuff. Cool.
A
H
Priorities: so right now we are mostly focusing on testing, and I think we will continue to do that, at least for the near term. The rationale is that we still plan to do a lot of refactoring of the runtime, but we kind of want to make sure that our refactoring is correct and that we don't accidentally break things. So of course, matching the spec and the state tests is our end goal, but right now we are just trying to do —
H
Unit tests. So we're basically just manually taking the latest spec file, trying to run it, and comparing the output: we hook a function up to the output of the Python spec and compare it against our own code. It mostly works. And — this is not related to testing, but the other thing we are trying to do is move the adapter layer for our state objects out of the execute-block path.
H
I
Hey, sorry — yes, nothing, like, major. We pretty much scaffolded all of our, like, auxiliary stuff inside the client: the database, the p2p stuff, and all that other good stuff. We pretty much finished up all our state transitions, and we're gonna start, like, actually testing everything with a fake Eth 1.x deposit contract. And we're pretty close on getting libp2p done in JavaScript.
I
C
The simulator works well with hundreds of validators, but with thousands of them the simulation becomes insanely slow. This is basically because everything is running in a single thread so far. The next step would be to make it work with, let's say, fifty thousand validators and do some benchmarks on these big numbers; this is a work in progress. So far, doing benchmarks on a small number of validators, we have found several bottlenecks and already sorted them out. These bottlenecks are not even related to the spec —
C
Some were issues with our own implementation. For those who are interested in trying out the simulator, I'll post a link, and feedback would be much appreciated. Also, we started to get integrated with the test suites: we made a small PR that fixes the shuffling test generators, and we also encountered an issue with the shuffling swaps — yes, we will talk later about that. So that's pretty much it. Great.
A
J
Yep. So we've been building out our runtime; I'm working on syncing. We've been trying to get kind of a phase 0 wire protocol in, very roughly, aware that it's a moving target. We did a bunch of benchmarking, so we made a document — it's in the PM repo, in the issue for this call — which has a whole bunch of information, and it's useful to people who are trying to optimize. We did 16k, 300k, and 4 million validator benchmarks.
J
K
Let's start with libp2p: we've been working on a native libp2p implementation to replace the Go daemon. Parts of that are done; we are considering bounties to progress further, and we're working on refreshes and fixes to discovery and on Whisper testing. Now, regarding libp2p in the beacon chain, which is using the libp2p daemon:
K
So right now we use RLPx, but we also have a simulation that kind of works using the libp2p daemon, and we will provide an option to select either RLPx or libp2p for the testnet and sync for the beacon chain. We have an open question on the handshake: we are using an RLPx-style organization for the wire protocol that we implemented, and we want to kill RLPx eventually. Regarding the state:
K
In the past month we froze our spec at 0.3; we moved to 0.4 this week, and we are eagerly waiting for 0.5 so that we are able to compare all block hashes with the executable spec. We've been working on hash-root optimization and on removing quadratic behaviors that we found in the beacon state processing. And in terms of preventing bugs, we have been using the type system to create distinct Epoch and Slot types and have the compiler tell us when we are mixing the two.
K
As you know, last week I did a talk at EthCC on testing and simulation, so I will post the slides on the sharding Gitter channel, like, in two minutes. And lastly, a big item for the week was integrating naive LMD GHOST into our simulation, because before we were using a kind of fork choice called "latest resolved block" — like, the latest block we received, we considered it the one to use — and it's working fine.
K
A
F
A
L
M
Besides the Trinity implementation, there are two interface updates to the deposit contract. So, if you're using the latest deposit contract: in the deposit contract repository, two PRs have been merged this week. One, PR 21, updated the Deposit event interface, and the other updated the Eth2Genesis event interface.
M
Another thing I want to give an update on: since there's a proposal about changing the hash function to SHA-256, and Vitalik is very supportive, a SHA-256 built-in function has been added to the latest Vyper release. So yeah — if that's something we are considering, it's okay to change it in the contract. Thank you.
A
Great, I think I got it. Okay, next I'm going to read a quick update from Raul. He says: we're working to add deprecation notices to the areas of the spec that are outdated; we're making significant strides on the new docs — libp2p I/O — and writing a non-normative walkthrough of the libp2p stack that everyone can use as a reference; and we're engaging in various debates on GitHub, where Raul is generally very responsive.
J
Yeah, that's right. So we kind of had this problem where, when processing a deposit, we want to see if a public key exists in the validator registry or not. So we were keeping a map — building a hash map of pubkeys to validator indices — and when we were doing that, we found that, you know, converting to bytes was really slow, and then we were like, oh well —
J
Why don't we just store it, instead of storing it as, like, the point? For our BLS library we stored it as uncompressed bytes, which made the hash map really fast, but then made our SSZ serialization really slow, because then we had to compress it. So we kind of found an interesting thing there, and I spoke to Danny about it. I was just kind of wondering why we're compressing the public key bytes when it seems like everyone's gonna need them in uncompressed form anyway.
N
Right — especially given that the uncompressed point doesn't actually get changed or accessed, it might even be reasonable for a client implementation to basically, as soon as you get the compressed point, hash it, store the hash for lookup purposes, and then immediately decompress it and store the uncompressed version.
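The suggestion above can be sketched as follows. This is illustrative only: `decompress` is a placeholder for a real BLS point decompression, and the data-structure names are invented. The point is that registry lookups happen on a cheap fixed-size hash of the compressed key, while consensus code only ever touches the already-decompressed point.

```python
# Hash the compressed pubkey once for cheap registry lookups, then keep the
# decompressed point so nothing pays the decompression cost twice.
import hashlib

def decompress(pubkey_compressed: bytes) -> bytes:
    # Placeholder for real BLS point decompression; here we just tag the bytes.
    return b"uncompressed:" + pubkey_compressed

registry: dict[bytes, int] = {}        # hash(compressed pubkey) -> validator index
uncompressed_points: list[bytes] = []  # validator index -> uncompressed point

def process_deposit(pubkey_compressed: bytes) -> int:
    key = hashlib.sha256(pubkey_compressed).digest()
    if key in registry:  # O(1) membership test on a fixed-size key
        return registry[key]
    index = len(uncompressed_points)
    registry[key] = index
    uncompressed_points.append(decompress(pubkey_compressed))
    return index

i0 = process_deposit(b"\x01" * 48)
i1 = process_deposit(b"\x02" * 48)
assert (i0, i1) == (0, 1)
assert process_deposit(b"\x01" * 48) == 0  # repeat deposit finds the existing validator
```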
N
J
N
One more general thing to keep in mind, by the way, as we start exploring different efficiency trade-offs like this: in this case I think it might actually be fine — for the beacon chain it might not be too bad — but it matters especially once we go into the shard chains.
N
Basically, we do have an important task of making sure that running a validator is marginally cheap, because if it's not, it's going to significantly eat into a validator's profits, which could both really hurt participation and encourage things like stake pooling.
N
J
Yeah. Something that I did in the benchmarks was bench on my laptop and on my desktop, and it's quite interesting to see the difference between the two. It's starting to look like — just the way that we've got it written — the more cores you have, the more it improves the speed for you. So, something to think about.
N
Actually, the other thing: I think we really should avoid the trap of running benchmarks on powerful hardware, because we don't want to repeat the mistake of Eth1 and end up with really high spec requirements. It would be good to get numbers on, like, $200 laptops.
N
P
I
A
Yes, yep. Thank you. If you haven't taken a look at those benchmarks, do — they're pretty exciting and a little weird. Okay. Next up is leap seconds and time drift. I believe Justin added a note about how leap seconds are handled in UNIX time and how we're conforming to UNIX time, I believe, and there were a couple of questions on why, and what exactly is happening there — and maybe a discussion on time drift in general. Justin, can you give us just a quick summary of that?
R
Yes. So basically we need some sort of notion of time in Eth2, and I guess we have two options. One is kind of UNIX time, which subtracts the number of leap seconds, and then there's this other thing, which is much more esoteric, called International Atomic Time or something like that. Now, I guess there are several reasons why we're favoring UNIX time over this atomic time.
R
One is that it's much more commonly accessible from programming languages, so I guess ease of use for the programmers is one thing. The other thing is that it's compatible with Eth1 timestamps — I believe Eth1 uses UNIX timestamps. And you have a nice thing: it provides a nice invariant that, at basically midnight UTC, the —
R
The slot number is going to be a multiple of fourteen thousand four hundred, I believe. So one of the things that we've done with the Eth2 genesis is to basically have the genesis be at midnight UTC, and so this invariant would remain over time if we take the leap seconds into account. I guess by having this invariant, the trade-off is that we lose the invariant that every slot is exactly 6 seconds, or whatever constant we set it to in the future.
S
I'm just gonna say that, as far as programming ease — I mean, a lot of us are just, you know, literally setting six-second timers and letting the slots tick. I mean, so that sort of surprised me: you said that we were accounting for the leap seconds. So are we literally just gonna have to use a system clock or something like that? Is that what you're suggesting here?
A
N
K
N
N
I think if you just look at how Google, for example, adjusts the timestamps for a network when there's a leap second — you can probably find it. And, I mean, this is both something that probably should be implemented — I mean, unless people come up with reasons why it's not a good idea or why there's something better — so I'd definitely like to encourage people to read the —
N
Comments on it. The second point: the kind of practical effect of that is that it makes some NTP attacks not effective unless they're coupled with at least 33% attacks on the underlying systems or on the group of stakers. And then, as far as 51% attacks that kind of push time forward go, that's not really —
N
The kind of attack that's a bit more dangerous, that I've thought about, is a 51% attack where, instead of being two minutes in the future, you're, like, two seconds in the future, and then you try to sort of pull a portion of the honest nodes along with you. And that is the sort of thing that probably should be analyzed more. As I said, though, it's technically just as doable under current proof of work as under proof of stake.
S
N
S
D
N
T
S
S
A
S
S
A
S
N
We don't have special reasons to be more concerned about 51% attacks that have to do with timing than we did before, but with proof of stake we do want to take the timestamp problem seriously, because proof of stake does kind of depend on timestamps more. Like, if you can get people to believe wrong things about the time, you can wreak more havoc more quickly.
N
O
N
U
Just to comment on the implementation of the simulator that I did: for the simulator I implemented local timers for each one of the nodes, kind of simulating a UNIX system, but I also implemented some kind of global synchronization of this local timer every five seconds — five seconds being a parameter that can be customized in the configuration of the run. So maybe this is some kind of combination that, yeah, mixes both UNIX time and wall-clock time. And I also saw some kind of weird —
U
Yeah, yeah, yeah. So that's what I said: at this point, for most of the runs, I set a global synchronization of five seconds, assuming that most benevolent, non-attacking nodes would have a maximum delay or time difference of five seconds — which is also smaller than the slot time — and so that could allow for more or less —
A
V
And it does the encryption and the authentication by default. Also, our AWS instances are very small, so we overloaded the CPU, but we didn't have bigger problems with the integration. And we are using the QUIC implementation by Protocol Labs — they are still working on it. There is this feature of a zero-round-trip handshake, which we wanted to check, but it's not available yet; they are working on it, but we think this can also improve performance.
V
Q
N
In phase zero it's kind of less elegant in that we defined get_permuted_index, and then we defined shuffle, and then we take a slice of the shuffle. So in some ways the spec as written kind of combines two inefficiencies: it doesn't get the efficiency of doing the shuffle with the optimal algorithm, and then it also has an inefficiency because you're doing a complete shuffle and then taking a slice out of it.
N
So, I mean, I personally actually find it's not just the kind of forward-versus-backward thing. It's also just about the kind of aesthetic of whether you want to do a shuffle and then take a slice, versus do a call to get_permuted_index for a range of indices. You know, I'd argue there are aesthetic reasons to prefer the get_permuted_index-for-a-range-of-indices approach.
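The contrast above can be sketched with a toy permutation. Note this is illustrative only — the permutation here is a simple affine map, not the spec's swap-or-not shuffle — but it shows the shape of the trade-off: computing the full shuffle and slicing does O(n) work, while calling a per-index permute only for the indices you need does O(k) work for a committee of size k.

```python
# Toy contrast of "full shuffle then slice" vs "per-index permute".
def permute_index(index: int, n: int, seed: int) -> int:
    # Any bijection on range(n) works for the illustration; this is an affine
    # permutation (stride 7, which must be coprime with n), NOT the spec's.
    return (index * 7 + seed) % n

def full_shuffle(n: int, seed: int) -> list[int]:
    return [permute_index(i, n, seed) for i in range(n)]

n, seed = 10, 3
committee_via_slice = full_shuffle(n, seed)[:4]                      # O(n) work
committee_via_calls = [permute_index(i, n, seed) for i in range(4)]  # O(k) work
assert committee_via_slice == committee_via_calls
```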
O
N
F
N
It's 77479. Another spec formatting thing — after this we'll get to a real research update — is that currently, in a lot of places, we're doing hash_tree_root taking one argument, with the kind of implication being that from the one argument you can infer what the type is. But there are some cases where you can't infer the type from the argument, and probably the most egregious —
N
One of those is lists, because you can't distinguish between a static list and a dynamic list. So one possible solution to this inconsistency in hashing would be to make it so that hash_tree_root just always, consistently, takes both an object and a type. The other approach is that we basically add wrapper classes for static lists and dynamic lists and, yeah, apply them more consistently, you know.
A
N
A
N
N
So the idea, basically, is that whenever you have a Merkle proof — and Merkle proofs are kind of redundant: they share hashes, or one of the hashes can be computed from the other hashes — it removes all of the redundancy and just includes the minimum possible data that you need to compute everything. And, code-complexity-wise, it's actually really not hard at all.
N
So that's the first thing. And then the second thing is that, basically, there's an algorithm called get_generalized_indices, and the idea here is that you can represent an arbitrary SSZ hash tree — like, the hash tree of an arbitrary object that you use to compute the hash tree root — as a binary Merkle tree where the depths at different locations might be different. And so you can represent a path — a path basically being a function —
N
That says: oh, given a block as an input, return the public key of the hundred-and-ninety-seventh validator, or return the length of the list of open challenges, or something like that. So it takes a path and it turns it into a kind of generalized index in the Merkle tree, which basically says where you go left or go right — kind of expressing how deep you go — expressed as a number. And from there —
N
You have a function that you can use to compute the shuffled committee at some point in the future, and then from there you can ask for future block headers, and then you can verify future block headers by basically verifying the persistent committee. And the data complexity of this is something like about 80 kilobytes every nine days, plus a couple of hundred bytes per block. So it's actually really nice.
N
I'll stop by saying that, basically, it does require a kind of minimal stub of a portion of phase one, which is, like, some kind of shard block header structure. Even if the data is empty, that's fine, but shard block headers need to exist, and people need to be signing them, and they need to be crosslinked, I know.
N
W
So, I was looking at the composition of a block. Right now there's a beacon block with a header portion, and then there's the beacon block body. In the recent work there's a mention of the beacon block header, which is actually not defined or specified anywhere else, and it says it's for, like, light-client friendliness — so it's possible for people to propagate block headers without having too much data. I kind of like that approach, because it's really easy for me —
W
N
N
You can create another SSZ object where parts of the object are replaced by their hash tree roots, or where, alternatively, hash tree roots are replaced by the thing that they stand for. And part of the reason why that's useful is — let's say we're talking about an object that's part of the state, or part of the block — even though, as technically written in the protocol, blocks don't have states, they only have state roots.
W
I mean, my biggest thing is the data: getting proofs and all that would be great, but I think the first version of a client in its lifetime just needs to do a good job at replaying blocks and headers over the network, and maybe have a way to spot-check that it's not being served garbage.
W
F
R
The quick research update I wanted to share is basically an improvement to the challenge game in the custody bit scheme. So, the custody bit scheme: in the optimistic case, where there's no challenge, you basically get it for free — it's just one extra bit in the attestations — and it's really, really nice. But in the worst case there's this challenge game that happens, and it turns out that this meant that the phase one spec kind of grew much more complex than we would have liked.
R
So the good news is that we have a new challenge game which drastically simplifies the communication complexity when a challenge happens. It's looking like we can basically have a challenge which is just a two-step thing, where there's a challenge and a response — so it's, like, a single round of challenge-response — which will allow us to do things like simplify the way we do incentivization and the way we handle the mechanics of the challenge. So hopefully that will allow us to have, you know, the phase —
A
Great — okay. Next up I'd like to talk about, or give us a chance to talk about, the network spec. I know Matt Slipper posted a PR on the specs repo, and I know that's very active right now, so the conversation is mainly happening there, and there's good feedback there. We don't need to go into it too deeply right now, but I did want to give just a minute of time if anybody has some quick comments, questions, feedback, etcetera.
X
S
Well, yeah — we started working on kind of a lightweight, proof-of-concept type of wire protocol. We're calling it "Hobbits", or "there and back again", and the idea is to kind of just create something that works right now. We've just about finished implementing it; Matt's working on that with me, Antoine is as well, and there's, like, a few people contributing to it right now. But I just wanted to kind of get you guys' feedback and, like, see what you think about that thing.
Y
We're just hearing a lot of feedback from the people creating the clients that there's not a lot of activity going on as far as, like, testing peers talking to each other. And I know that there's a lot of work going on in libp2p to figure out a lot of things regarding finding the peers and different parts of that higher-level network stack, but a lot of these are, like, maybe in the application layer — details of the application layer.
Y
So we were thinking that maybe there should be, like, a different layer at the bottom, which is just, like, a simple, simple wire protocol that's extensible enough to allow people to start iterating and start communicating. And so what we've done is: I've created an EBNF grammar, which is basically — it's kind of, sort of, inspired by HTTP, but it's very, very minimalistic and narrowed to the use case of sending, like, binary payloads in an RPC manner. I looked at the specification —
Y
So SSZ, as far as I can tell, is basically very, very similar to BSON as far as the types it supports, but it has this additional tree-hash type. And so my concern there is we're moving in a direction where you're sort of putting application schema into the protocol and you're making a sort of one monolith. Now, over time, the types of data structures that might need to be supported will evolve, and it's just nice to be able to negotiate some of these things —
Y
If you have to iterate over time with different clients that may not be at the same level of being up to date. So what we're thinking is: BSON provides the basic primitives you want, like integer, float, things like that, but what you also get is a binary data structure, and inside these binary data structures — which, you know, can be tagged, like, with names — you can bake in your SSZ if you really want to, or any of your actual application-specific data payloads. But I don't see —
Y
N
Y
Right, so there's a lot of experimentation that remains to go on about what the most optimal serialization format is. But in the meantime, you know, maybe we can get rolling faster if we have a simple protocol and figure out the data structures as we're iterating, as opposed to specifying the protocol completely up front.
Y
Y
It's like — all the while, this is getting conflated with the whole libp2p thing, where it's like: that's gonna solve everything, so everything's just gonna be implemented through that. But this could be very simple. It's like: oh, all you've got to do is parse this simple thing, and here's an EBNF grammar — so it's easy to understand what to do and you can start interacting. As opposed to: oh well, you have to just install that p2p library — which doesn't make any sense. Yeah.
S
So, go ahead — just like with devp2p: devp2p-style stacks are really designed specifically to accommodate the application layer. The wire protocol should be something completely separate. Like, RLPx, stuff like that, is kind of just a mechanism to establish a handshake and kind of define the rules of engagement for communicating with those peers, and then layers on top of that are responsible for adding additional logic for how those messages are formatted — all of that application logic that says, like, "this message means this".
S
S
S
Yeah — all that matters is that we have these messages and these bytes; all that matters is that each client agrees that this particular byte sequence correlates to this particular action or piece of application logic. The wire protocol itself doesn't need to account for any of that, so it should be lightweight. We shouldn't have to worry about TLS; we shouldn't have to worry about, like, any of those additional features. You know, those are just gonna bloat the protocol itself, and it's gonna take much longer to try to get something out.
S
S
Z
G
X
But the reason why — like, if you look at, for example, the messaging specification: it basically just has a version bit that represents your compression protocol — your compression scheme, your encoding scheme — and then some encoded body that can be SSZ or whatever. The fact that we're using libp2p does kind of matter here, because that's going to determine the way that we negotiate which compression and encoding formats we're going to be using, right?
X
S
So we were talking about starting a working group. I mean, there are about eight of us working on this stuff right now, and rather than, like, kind of, you know, duplicating any effort — yeah, we should probably just start a working group that we can all contribute to, and discuss these things together. Yeah.
A
T
I didn't really know when to jump in properly, so I just wanted to quickly circle back to that earlier question of, like: can't we just have, like, a simple sort of protocol? That's the thing that originally kind of got me started, but you guys already kind of reached a reasonable conclusion. So I guess what I wanted to say is that basically we've voiced this concern many times.
T
That said — basically, I think that it's totally fine to represent consensus objects, whenever they're transferred, using SSZ or any other serialization format; that might actually be the best choice. But to, you know, encode, say, the message frame that carries the RPC command, or anything like that — something simpler can totally be useful. And I think Matthew is kind of right in saying that it does matter a bit —
T
You know, whether Eth2 is relying on libp2p, because that already defines how these RPC commands are encoded on the wire and things like that. So I think that the high-level Eth2 wire protocol specification really only needs to include the names of these RPC messages and then their sort of payloads and how those are encoded — and I think that's pretty much what it does, so it is kind of alright right now. Yes.
Y
So I just wanted to mention a few more things I didn't have a chance to finish. So, the wire protocol specification — we would like feedback on it — but the thing is, all it's really responsible for right now is, like Antoine was saying: identifying the names of the commands, negotiating compression, and then there are two separate parts, headers and payload. These are just basic protocol optimizations: if you put everything in one serialized payload, like in one SSZ blob, then that means the whole thing has to be pretty much decoded before —
Y
Maybe
you
can
partially
do
it
perhaps,
but
still
like
you
have
to
do
a
lot
of
work
to
like
look
at
partial
or
the
whole
thing,
and
so
you,
these
little
optimizations
and
protocols
where
you
have
in
the
first
10
bytes
or
first
20
bytes.
All
the
information
you
need
to
make
a
decision
sometimes
makes
a
difference,
because
you
don't
even
have
to
realize
payload.
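The framing idea above can be sketched concretely. This is a hypothetical layout invented for illustration — it is not the actual Hobbits format — but it shows the property being argued for: a small fixed-size header carries everything needed to route or drop a message, so a node can decide what to do with a frame before touching the payload.

```python
# Tiny fixed-size frame header: inspect the first 8 bytes, skip the payload.
import struct

HEADER = struct.Struct(">BBHI")  # version, compression id, command id, payload length

def encode_frame(version: int, compression: int, command: int, payload: bytes) -> bytes:
    return HEADER.pack(version, compression, command, len(payload)) + payload

def peek_header(frame: bytes):
    # Only the first HEADER.size bytes are examined; the payload is untouched.
    return HEADER.unpack_from(frame, 0)

frame = encode_frame(1, 0, 17, b"\xaa" * 100)
version, compression, command, length = peek_header(frame)
assert (version, command, length) == (1, 17, 100)
assert len(frame) == HEADER.size + 100  # 8-byte header + payload
```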
X
Y
T
Something to look into: on the devp2p side, we've had, like, streaming messaging and things like that for kind of a long time, and I can kind of tell you that we haven't actually needed it very often. The way we do it in go-ethereum, in the devp2p implementation — the way we handle it is that, basically, all of the message decoding is streaming; we process —
W
That's actually a point that we want to make, which is: there is an excellent PR from Matt Slipper — a method that would allow exchanging data between clients, with the required RPC-level methods and some intents behind them. But it's actually been pointed out — that's in that document, in one of the comments — that gossip would be used to propagate blocks and headers, and that seems to be kind of a magic bullet that people just want to use for the p2p propagation.
W
W
And those actually have to do with the rest of your domain lifecycle and your consensus and your block time, right? Because, if anything, whether you're pushing or pulling — depending on how fast we can see propagation on the network — the block time of six seconds is going to be interesting, right? So, I mean, the performance of that networking layer is really, really critical, and maybe even some of the, you know, tricks you can play with syncing and all that. And I think, too —
W
So if we talk about gossiping — just making it really noisy — we could do that. And because it's bare-bones, it's more open to iterative development and changes, whereas most of the updates on this call so far have been people trying to implement libp2p in a fashion that they can leverage it. So —
S
S
S
W
T
So I guess it would be really beneficial at this point to just get together and define this, like, minimal, insecure, unencrypted wire-protocol transport mechanism that allows you to do cross-client testing. And I think this is something that, you know, the implementer community should look into actively, to just kind of make something work. It doesn't have to be perfect; it doesn't have to be the world's, like, best and most secure and most optimal transport.
T
You know, it can just be something where, basically, you could do basic interoperability testing, maybe on, like, a testnet — and that's totally fine to define. And then, you know, there's gonna be so much time later to figure out what the actual perfect transport mechanism is — and, you know, there are literally, like, hundreds of options available for that — and I don't think it's the right time now in the implementation cycle to worry about what the final transport protocol is going to be, or what the final sort of, you know —
T
S
That's what I wanted — thank you for saying that; that's exactly what I was gonna try to say. Like, what we're building is not trying to, like, override anything. We want to get something out the door very quickly that'll allow us to actually start doing network testing and getting clients talking to one another, rather than operating on these, like, simulations and these, like, monolithic applications. This will help us push the ball forward; it's something that we can implement, like, right away.
S
T
This is also some lessons learned that I can share from, you know, the development of Eth1. The way that Eth1 was developed, throughout the entire PoC series there was no RLPx — there was nothing. All we had was, like, unencrypted TCP connections carrying RLP, and that took us, like, really, really far, and then we only added encryption, I mean, just, you know, kind of late, before the launch.
T
But it's definitely, you know, something that got us really, really far: to just find the most minimal thing possible and then, you know, take it from there. So I think this is something that I would really like to see: to have this, like, super super basic protocol that you can implement in, like, three days, yeah.
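For what it's worth, the kind of three-day, bare-bones protocol being described could be as little as length-prefixed payloads over plain TCP. The sketch below is a hypothetical illustration; the 4-byte big-endian prefix and the function names are my own assumptions, not anything specified on the call:

```python
import struct

# Hypothetical bare-bones framing: no encryption, no handshake, just a
# 4-byte big-endian length prefix so a peer knows where each message ends.

def frame(payload: bytes) -> bytes:
    """Wrap a payload so it can be written to a plain TCP socket."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(buf: bytes):
    """Split one framed message off the front of a receive buffer.

    Returns (payload, remaining_bytes).
    """
    (length,) = struct.unpack_from(">I", buf, 0)
    return buf[4:4 + length], buf[4 + length:]
```

Two clients that agree only on this framing, plus a shared message encoding, could already exchange blocks and attestations on a testnet; encryption and transport negotiation can be layered on later, as happened for eth1.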
T
Not so much on that. I've actually not really been busy with discovery the last two weeks, because we tried to push the eth protocol version 64 discussion forward. So that is something that mostly concerns eth1, and you're gonna maybe hear about it tomorrow on the all core devs call. We did revamp all of the eth1-related specifications.
T
For that, there is an issue open on the devp2p repo that kind of tries to explain the problem, and if anyone's interested in fun UDP traversal problems, you know, you can come there and help out. Otherwise, hopefully we're gonna have a solution in time for, like, next fall or something, I don't know. So that's the current update: we started implementing, and we're starting to hit, like, the first sort of real-world issues with the whole thing.
A
Okay, on networking: yes, there's value in getting some of these minimal protocols out, and if a team or two want to drive that forward and want to begin experimenting on interoperability in that respect, go for it. I don't think that clients should experiment with interoperability on the wire until they're conforming to the state tests, which I plan on releasing today, but for v0.5.x.
Z
This will be short. I'm glad that there's other people working on the actual wire protocol part, because other people are more knowledgeable than me on that. I've just been, like, digging into serialization, sort of more at the application layer, but with thoughts towards the network layer. I ran a bunch of benchmarks, and there's data that you can go look at; it's pretty extensive, but the gist is that RLP is actually really terrible for eth2 data structures.
Z
I think what was interesting was what Felix said earlier about rarely actually needing to index into some of these data structures. However, what I'm currently leaning towards (and I don't know enough about this yet to have a really strong opinion) is pushing to modify the SSZ spec to include the SOS-style offset pattern, so that we can have a serialization format that also works as a contract ABI. I believe that at least those two things combined together give us reasonably compact messages that can also be used to talk directly into contracts, and that give us fast indexing into data structures, which at the application level may not be useful; but inside the context of, like, the EVM or eWASM or whatever (and I say that in the most generic sense), being able to reach into these things and grab the data that you need in those contexts is actually useful.
Z
Sorry, size: so the resulting size is, like, a 9 or 10 percent size reduction in overall messages. Similarly, to get that 9 or 10 percent you also gain the ability to do streaming encoding and decoding, but you lose the ability to dynamically index into a data structure without decoding it, at least all the way up to the point where your index is. So there's trade-offs across the board.
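To make the trade-off concrete, here is a toy offset-table encoding, loosely in the spirit of the SOS-style pattern discussed above. This is an illustrative sketch under my own assumptions, not the actual SSZ or SOS wire format: fixed-size offsets at the front point into the variable-size field data, so a reader can jump straight to field i without decoding the fields before it.

```python
import struct

# Toy offset encoding, NOT real SSZ/SOS: a table of 4-byte little-endian
# offsets (one per field) followed by the concatenated field bytes.

def encode(fields):
    """Serialize a list of variable-size byte fields with an offset table."""
    header_size = 4 * len(fields)
    offsets, pos = [], header_size
    for f in fields:
        offsets.append(pos)
        pos += len(f)
    header = b"".join(struct.pack("<I", o) for o in offsets)
    return header + b"".join(fields)

def get_field(data, i, n_fields):
    """Read field i directly via its offset, without decoding the others."""
    (start,) = struct.unpack_from("<I", data, 4 * i)
    if i + 1 < n_fields:
        (end,) = struct.unpack_from("<I", data, 4 * (i + 1))
    else:
        end = len(data)
    return data[start:end]
```

With length prefixes instead of offsets you could stream-encode each field as it arrives, but reading field i would require walking past every earlier field first; the offset table inverts that trade.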
A
Okay, we are at the end of the call. We are meeting in person, for whoever happens to be in Sydney, on April 9th from 9 to 5. I will share the location; I'm gonna send a doc around so we can get a head count and figure out what we want to work on that day. That is the middle day of the hackathon, so we won't miss the opening or closing ceremonies, and then the actual EDCON starts the 11th.
P
On the shuffling: so basically we talked about it today, and the way it looks from our perspective is basically that we schedule a handler, and it looks at, like: what's the time now, and what do I have to do? So if we're changing the shuffling around, that's, like, the question that it has to answer, and we have to answer it in two ways: like, what do I have to do right now?
P
And, like, what am I going to have to do in the future, so that I can prepare by, for example, syncing some shards, right? Yeah. One thing there is that what I have to do now might change, just like that, if we have these deep reorgs, so that's something to consider as well in selecting which, which seed for the shuffling we should use, mm-hmm, right.
N
It also has the advantage that I think we save, like, a tiny amount of hashing, maybe something like 10%, because if we're calculating the committee members for some committee, then at least in the first few rounds of hashing they'll be concentrated in, like, a fairly small set of ranges of the full validator set, and so the hashes needed for the shuffling are gonna be shared. But that's, like, maybe a 10% gain.
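The locality argument here depends on being able to compute a single shuffled position on its own, which swap-or-not style shuffles allow. A rough sketch of that idea follows; the round count and the exact hash layout below are illustrative assumptions, not the spec's constants:

```python
from hashlib import sha256

# Illustrative swap-or-not shuffle: computes where a single index lands
# without materializing the whole permutation, so a client asking
# "who is in this committee?" only hashes around the positions it needs.
# ROUNDS and the hash-input layout are assumptions, not spec values.
ROUNDS = 16

def shuffled_index(index: int, count: int, seed: bytes) -> int:
    for r in range(ROUNDS):
        rb = bytes([r])
        # Per-round pivot derived from the seed.
        pivot = int.from_bytes(sha256(seed + rb).digest()[:8], "little") % count
        # Each index is paired with its "flip" partner around the pivot.
        flip = (pivot + count - index) % count
        position = max(index, flip)
        # One hash covers 256 positions, so nearby positions share hashes.
        source = sha256(seed + rb + (position // 256).to_bytes(4, "little")).digest()
        byte = source[(position % 256) // 8]
        if (byte >> (position % 8)) & 1:
            index = flip
    return index
```

Because each round is an involution, the function is a permutation by construction, and committee members that start out close together draw on the same 256-position hash blocks in early rounds, which is roughly where the shared-hashing saving described above comes from.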
P
That holds across, for example, which things are fixed from the finalized block or which are fixed from a justified block, and so on. In the future we would understand a lot clearer which ones we can rely on to be maintained. For example, you know, a committee won't change for an epoch from, from justification; like, we can reverse engineer that from the spec sometimes, but if these things were spelled out, we would be much more comfortable relying on them for optimizations and the like.