From YouTube: Eth2.0 Implementers Call #10 [2019/1/17]
A: So let's go ahead and get started; we'll start with client updates. I know there are some people on this call that haven't been here before, so let's do client updates, and if you've been on the call before and your client comes up and you want to introduce yourself, please do that. And then afterward, if you were not involved with a client and want to introduce yourself, you can do that then.
B: Sure. So we've been implementing the gRPC API in the spec. We've started our validator service, which is pretty cool, so that's coming along pretty well. We're focusing on the runtime and IPC at the moment, and then doing sync should be a bit later. One of our guys, Mehdi, has started adapting — looking at the Eth 1.0 C++ fuzzer to try and point it at Eth 2.0, so that can hopefully benefit everyone. We're also looking into, as a research thing, how we can do hashing to G2 in Milagro.
G: Yeah, we've been working on the implementation from scratch since about a month ago. This is because previously we worked inside the EthereumJ codebase, which is licensed under GPL, and we want the new client and the new library licensed under more permissive licenses, and there is no feasible way of extracting the code from the EthereumJ codebase and just moving it to some separate repository and setting up a different license.
G: So we started from scratch, and we're making pretty good progress so far, and we're aiming to release a standalone, self-contained client in, like, the first week of February. So far we have implemented BLS, simple serialize, the database infrastructure, and all the core beacon chain structures, and the consensus is almost done.
G: Now, the things that are left are the fork choice rule, the validator service, and the proof-of-work chain deposit integration, and I think we will manage to get this implemented by the end of January. I'll post a link to the repository now. There is no license set up for this repository yet — I need to be careful here — but we are choosing between Apache 2.0 and MIT, or the third option is to license it under both licenses, at the client's choice. So.
G: We were thinking about it, but there are about 70 contributors to EthereumJ. I don't think it's feasible to contact them all and — I mean, yeah, it might be feasible, but it would require a lot of work at this point. And I don't even know if all of them are reachable at this time, so that could be a big issue.
I: We laid out the scaffolding for our libp2p interface, and we shifted to a micro-services architecture. We think, like, long term that will be beneficial for projects like Infura, who kind of want to host, like, super-nodes, and we're continuing our conversation with Harmony about kind of combining our efforts on a Java client.
J: We've sort of been getting a little bogged down with, like, spec changes, so we've been spending a lot of time kind of architecting and doing the supplementary work on simple serialize and BLS, and they're coming along. They should be done pretty soon, and then we can integrate them — actually, like, start using them in the beacon chain client. And due to the spec changes, we decided internally to kind of do an update every two weeks, unless something drastic happens in the spec.
J: That way we can actually get some progress going consistently for, like, two weeks straight, then do an update, and continue on that way. Outside of that, we started looking into js-libp2p to kind of see if there are any holes that we can plug, because there is a bit of a discrepancy in feature sets between that one and the Go version, for instance, so we're looking to try to help them out and fix some of their stuff up. That's about it for us.
K: Hi. So we moved, at a certain point, all the state transition code over to the Trinity repository, and here's an overview document that I'm posting to the chat. Most of the modifications are centered on the container data structures and beacon chain block processing, and also the API refactoring, and we finished the tree hashing functions. And Yannick proposed the test format, and we can discuss that later, after the regular updates.
L: It's currently pretty slow — around, like, 800 milliseconds to select a new head — and I think it was mentioned that things should be around 50 milliseconds with a hundred thousand validators, so we're aiming to optimize that over the next update. And thanks to Terence from our team, we removed shard committees from the beacon state, which took a while, because that was basically everywhere in our code base.
L: Another cool thing I want to share with you guys is we have a YAML test for a full state transition without epoch processing — so everything related to slot processing and also block processing is done. So I'll link you guys here in the chat — oh, Terence already sent it, thank you Terence. Yeah, so you guys can check that out and play around. We can basically simulate, like, hey: we want sixty-four slot transitions, and I want you to skip these blocks at these slots.
L: Simulate, like, a proposer slashing; simulate, like, an exit — so you can do a bunch of different simulations and see what happens in the end. We want to use this for, like, our end-to-end suite. And yeah, Mo has been really useful in building out the simulated backend, and it has been really good for us so far. Aside from that, we really want to wrap up the fork choice rule and make it optimal. So that's it.
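The kind of YAML-driven simulation described here — run N slot transitions, skipping block processing at certain slots — can be pictured with a tiny sketch. The field names and runner below are purely illustrative, not the actual test format or any client's API:

```python
# Hypothetical sketch of a skipped-slot state-transition test case:
# run `slots` slot transitions, and only apply block processing at
# slots where a block was actually proposed.

test_case = {
    "slots": 64,              # number of slot transitions to simulate
    "skip_slots": [10, 20],   # slots at which no block is proposed
}

def run_simulation(case):
    state = {"slot": 0, "blocks": 0}
    for slot in range(case["slots"]):
        state["slot"] += 1               # per-slot processing always runs
        if slot not in case["skip_slots"]:
            state["blocks"] += 1         # block processing only when a block exists
    return state

print(run_simulation(test_case))  # {'slot': 64, 'blocks': 62}
```

A real test would then assert properties of the resulting state (e.g. the expected number of processed blocks) rather than just counters.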
N: Hey, so I'm Stan. I'm between jobs; I work on random projects, and one of those random projects happens to be Lighthouse. I've been contributing for about two weeks. I'm trying to get up to date with the spec by attending the calls and reading it up. I'm looking at some issues, and I pushed some — two tiny PRs — into the spec repo. I intend to work more with you guys on getting this thing landed. Thank you.
O: A quick one, because I also just, like, recently joined. I'm Christoph; I work on the Trinity team, and, as you may know, Trinity is also an Eth 1.0 client. So my main work is not so much spec-related, but more on making sure that we integrate things smoothly to work alongside everyone, so Trinity will continue to serve both chains.
P: I spent some time working on improvements to — or coming up with ways we could still optimize — the LMD GHOST implementation, some of which I put into that file in the ethereum/research repo; it's in the ghost folder. And I think Prysmatic has already been doing some updates based on those ideas. So what I'm really looking forward to is seeing more of the client teams actually implement GHOST and actually try running a chain with a very large number of validators, and just seeing how long all the state transitions take.
P: There's that one GitHub issue — the idea that I just published a couple of hours ago today — where basically what I'm suggesting is that we commit the list of active validator indices into the beacon state, and we keep a recent history of those roots in the same way.
P
We
keep
erasing
history
of
Iran
down
X's
around
and
that
that
would
then
meet
and
then
what
we
do,
the
shuffling
for
across
winged
committees
and
persistent
committees
around
that
and
that
calculate
committee
is
even
if
or
even
for
our
light
clients
that
only
have
via
black
headers
and
they
can
and
they
they
would
be
able
to
do.
Everything
else
was
just
Merkel
branches.
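The "everything else with just Merkle branches" step rests on standard branch verification against a committed root. Below is a minimal, generic sketch of that primitive; the hash function and left/right ordering convention are illustrative assumptions, not the spec's exact conventions:

```python
# Verify a Merkle branch: the primitive a light client with only block
# headers would use to check data committed in the state root.
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def verify_merkle_branch(leaf, branch, index, root):
    """Walk from leaf to root; at each depth, a bit of `index` picks left/right."""
    node = leaf
    for i, sibling in enumerate(branch):
        if (index >> i) & 1:
            node = hash_pair(sibling, node)
        else:
            node = hash_pair(node, sibling)
    return node == root

# Build a tiny 4-leaf tree and check a proof for leaf 2.
leaves = [sha256(bytes([i])).digest() for i in range(4)]
l01 = hash_pair(leaves[0], leaves[1])
l23 = hash_pair(leaves[2], leaves[3])
root = hash_pair(l01, l23)
assert verify_merkle_branch(leaves[2], [leaves[3], l01], 2, root)
```

With the active index roots committed in the state, a branch like this is all a header-only client needs to learn the committee inputs.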
M: One of the main things has been light clients. I've started thinking about those; I guess historically I've kind of ignored them somewhat, and they're harder and more subtle than I thought, and the design space is quite large, to an extent. So I'll be reviewing, you know, the various proposals by Vitalik and trying to improve them. And in terms
M: of logistics around the GitHub repo, I'm going to try and move things a little bit faster — so trying to close down all the issues that have been addressed, and trying to move faster on the PRs. And I think one thing which might be a good idea is to avoid working directly on master, so that we have, for example, like, proof-of-concept releases that are spaced out by, let's say, six weeks, and then we work on some sort of scratchpad branch that is ahead of the releases.
M: Another thing is VDFs — a lot of progress has happened there. We're moving forward on some of the studies around VDFs, so a lot has happened, and we will have a VDF day on February 3rd, and there are other VDF activities on the days before and after. So quite a bit is happening behind the scenes, and there will be even more stuff happening around those days, so I'm hoping to write an update communicating all the progress that has been made sometime soon.
Q: So we're working on a protocol to aggregate BLS signatures — on the order of thousands of BLS signatures in basically one or two seconds. In simulation it works, and we've done an implementation in Go, but we're going to test on 3,000 nodes, and we expect to have something in two seconds, or something like this. Okay.
R: Sorry if my voice is a bit broken. So yeah, the idea is to be able to try it out, because the simulations produce an awful amount of data — it has five levels of verbosity — so I have been working on a way to visualize the data, and here is what I came up with. This is the result of one small run that I did today, one hour ago more or less; it runs on MPI ranks, and it simulates 64 nodes.
R: So it's a small network, and here you can see the network graph that is produced. You can see that the light nodes here are nodes that have many peers, and the dark blue nodes are nodes that have fewer peers. In the configuration file one can decide the minimum number of peers that each node will have, and then here you can see a figure that shows the number of peers for all the nodes that are being simulated.
R
I
don't
know
if
this
is
representative
of
a
VR
client
at
this
point,
but
I
think
for
this
run,
I
said
because
it's
a
small
network
I
set
the
minimum
number
of
peers
two
or
four
nodes,
and
then
here
you
see
the
list
of
notes.
So
when
you
click
on
one
of
those
and
you
can
see
basically
the
main
chain
at
this
point,
none
of
this
visualization
part
is
it
showing
anything
of
the
beacon,
shine
or
anything
of
the
charts
decide
to
start
working
on
this.
R: But basically you can see all the blocks — number, hash, the parent and miner, and the time it was mined — and you can see the entire chain. You can also see the uncle blocks; in this case we have three for this simulation, which is about 55 blocks long. And then here on the bottom you can see the peers of this node, and when you click on any of the peers, you can see again their chain, which should be the same chain.
R
You
can
also
see
the
anchor
blocks
for
this
change
will
be
CSS
design
and
so
on
for
all
the
nodes
yeah.
So
this
is
more
or
less
what
I
have
been
working
on
for
the
last
two
weeks,
if
you
click
on
the
bottom
of
the
of
the
page,
if
it
guides
it
to
the
repository,
so
it's
it's
available
online
here
you
can
see
all
the
code
and
how
to
run
it.
I
just
added
the
instructions
on
the
wanted
finish.
What
I
have
been
working
on,
but
I
would
probably
add
instructions
for
all
the
distributions.
R
I
mean
it
doesn't
change
much,
and
so
this
is
again
for
a
blockchain
that
is
not
charted
at
this
point.
I
have
other
versions
that
include
the
beacon
shine
and
the
charts.
Those
are
private.
I
haven't
added
those
into
this
repository.
Yet
my
plan
is
to
lie
slowly
start
adding
those
features
inside
this
responsibility
and
also
adding
nice
ways
to
visualize
the
the
results
of
the
of
this
Malaysian
yeah,
so
I
think
that's
pretty
much
it
I
will
share
the
link.
R: Yeah, so there are several things that were asked in the previous calls. One was about, you know, what happens when the number of validators is not as large as one would desire, and how fast the crosslinks will happen in that case. Another question — I think it was posted in one of the simulation threads — is about simulating uncle rate versus number of transactions per block, so that could... yeah. I think there are several other things.
H: I have a question regarding research — is it a great time to ask? [Yes, absolutely.] I've seen a few posts on CBC Casper recently, and those are very insightful — thank you for that. As a client implementer, what should we be aware of, and do you think any of these structural changes will go into phase zero?
P: So I think definitely not phase zero at this point, given that we're trying to finalize it, and, like, any further changes to the spec that aren't critical to phase zero at this point — that's for some time later. I think there are still some details that are in research mode, around finding ways to increase efficiency.
L: Sorry — I just wanted to ask Vitalik to clarify a little bit more about the fork choice optimization, for people here that might not be as familiar with LMD GHOST. I think the big culprit for, like, you know, the inefficiency is counting the votes and figuring out the block ancestors, but I was just wondering, you know, what you had in mind. Yeah.
P: So one of the largest optimizations that I had in mind is basically that, instead of treating every single validator as a separate unit, you would still store a list of the most recent hash every validator voted for, but when you calculate the fork choice rule, you would treat everyone who voted for a particular block hash as a kind of single unit. That reduces the number of units you have to worry about in the calculation from potentially, like, many thousands to a few hundred, which makes things a lot easier.
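The vote-aggregation idea described here can be sketched in a few lines: instead of iterating over every validator during fork choice, group validators by the block hash of their latest message and work with per-block weights. Names and structures are illustrative, not any client's actual API:

```python
# Aggregate latest messages: collapse per-validator votes into
# per-block-hash weights, so fork choice walks over ~hundreds of
# distinct targets instead of thousands of validators.
from collections import Counter

def aggregate_latest_messages(latest_votes):
    """latest_votes: dict of validator_index -> block hash of its latest vote."""
    weights = Counter()
    for block_hash in latest_votes.values():
        weights[block_hash] += 1   # one unit of weight per validator
    return weights

votes = {0: "A", 1: "A", 2: "B", 3: "A", 4: "B"}
print(aggregate_latest_messages(votes))  # Counter({'A': 3, 'B': 2})
```

In a real client the weight added would be the validator's effective balance rather than 1, but the collapsing step is the same.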
P: Also, there are some small tweaks in terms of how you actually calculate the GHOST fork choice rule. So there's this binary-search mechanism for finding the highest block — the most recent block — that still has over fifty percent support, and that's something that I've managed to speed up from O(n).
T: So yeah, thank you very much. In my post I listed four different points. I don't want them to be addressed now, but I want people to start thinking, and perhaps point me to a part of the specification which already answers those questions; but I also present my initial thoughts about how to kind of improve the state of things. Number one — some people already talked about it —
T
If
finality
gadget
on
the
work
chain,
which
is
which
is
going
to
be
possible,
how
soon
is
going
to
be
possible
after
the
introduction
of
the
beacon
chain,
so
I've
seen
some
post
on
the
research
research,
but
I
didn't
get
into
the
much
of
the
detail.
How
like
what
are
the
details?
So
second,
is
the
tapering
of
their
wave
of
work
rewards.
So
essentially
it's
I,
don't
think
it
has
been
figured
out
yet.
So
how
are
we
going
to
stop
the
the
pre-work
rewards
inflating
the
the
supply
in
the
you
know
to
point?
T
Oh
because
then
it
will
be
sort
of
competition
between
between
the
miners
in
proof
of
work
and
then
validators
and
proof
of
stake
in
terms
of
like
over
the
supply
ether.
And
here
my
suggestion
is
kind
of
not
very
thought
through-
is
that
if
we
implement
the
finality
gadget
as
described
in
first
point,
then
it's
possible
to
tie
the
miner
in
reward
with
the
remaining
ether
supply
on
the
proof
of
work
chain
and
since
the
the
clients
proof
work
lines
will
have
to
start
watching
the
beacon
change
to
implement
the
gadget.
They
will
also.
T: The third point is the risk of the beacon chain validators essentially coming in, getting their deposits in, launching the beacon chain, and then not letting anybody else come in — so essentially becoming the kings and queens of Eth 2.0. At the moment, I think with the current spec it's designed so that they have to vote on, let's say, the proof-of-work block. But what if they don't vote — is there any penalty for them basically just sealing the door once they come in?
T
And
here
one
of
the
ideas
I
had
is
it's
to
basically
tie
the
the
validator
reward
in
the
proof
of
stake
with
the
with
the
fact
that
they
actually
letting
other
people
in.
So
it
means
that
if
they
stop
letting
anybody
in,
then
it
they
don't
receive,
rewards
of
course
it.
It
has
a
problem
that
if
the
proof
of
work
chain
legitimately
completely
holds,
then
that
means
that
the
validators
will
stop
getting
their
rewards.
T: So the reason why I wanted to make this point is because there are talks — you've probably heard about the talks of changing the proof-of-work algorithm — and some of the arguments are predicated on the fact that, you know, if the miners are unfriendly in the future, they might prevent the proof-of-stake launch. I've been trying to dig into this argument, and actually I convinced myself personally that this is not the problem, because you can actually create pretty much perfect censorship
T
Resistance
for
the
deposit
and
one
of
the
ideas
are
described
in
detail
there.
It
needs
some
details
to
be
figured
out
and
the
same.
At
the
same
time,
this
idea
also
allows
the
individual
depositors
to
prevent
the
reorg
attacks
because
they
choose
the
you
know
they
can
wait
efficient
time
before
they
reveal
their
deposits
to
the
beacon
chain
so
that
you
know
they
can
prevent
the
attacks.
So
I
looked
at
also
another
question:
I
looked
into
the
specification
around
the
deposits
truck
data
structure
and
I
saw
that
there
is
the
the
in
deposit
data
structure.
T
There
is
this
branch,
which
is
the
array
of
hashes
and
in
index
and
deposit
data,
so
I'm
wondering
like
we
just
I
assumed
this
is
the
miracle
proof,
but
when
the
data
changes,
the
miracle
proof
also
have
to
be
changed.
So
I,
just
wonder
how
this
is
going
to
work
anyway.
So
that's
it
from
me.
Don't
want
to
take
so
much
time,
but
any
discussion
is
welcome,
but.
P: So the reason why that's not an issue is that — first of all, the Merkle root that exists within the state of the beacon chain does not change while in the process of executing a single beacon chain block, and so whoever creates the block can update the Merkle branches if they have to and then just include them, because it's all public data, and so anyone can construct the Merkle branches based off of anything they want to.

P: But also the Merkle roots are constant for periods of 1024 blocks, so in general I don't see too much to be worried about from that point of view.
T: Okay, so you're saying that within this one thousand — or the two thousand — blocks, there is no need to reconstruct the proof, because you would just take a snapshot every, whatever, 2000 blocks; and then, if you pass that period, then you have to reconstruct the proof and put a fresh one in there, right?
A: Does that premise — does that require new validators? Like, if we reach some sort of equilibrium where, given the risk-reward profile of phase zero or one, no new validators are joining — does that mean that rewards slow down at that point, and does it require some sort of continuous flow? I don't know; I'm not sure if I understand the function. Yeah.
P: I don't see why the issue is especially greater for the proof-of-work portion of the state transition, as opposed to just regular operation of the proof-of-stake chain; because if at some point in the future there is some set of active validators, and that set of active validators decides that they want to, in effect, hoard the beacon chain rewards for themselves and not accept anyone else joining, then they will have the ability to do that.
P: That is interesting — like, if we require the indices to keep increasing, or if we require them — or the order in which the Merkle proofs get included — to be sequential, which is something we were considering, then it could potentially make sense to just add a rule that says that a beacon chain block should be considered invalid.
A: I do want to make a couple of comments about the counterfactual one. One is that it messes with the initialization count: if we use that from the beginning, you don't have that firm count for the kind of chain-start threshold — if everyone's kind of actually doing it — and I think it's harder to construct the initial validator set; and also the mechanism is tougher for preventing double deposits, or, like, all deposits over some amount, right now.
R: Yeah, just a comment on the small-number-of-validators issue: this is something that I want to study later on. It would be interesting to evaluate the case of a smooth transition in the number of shards. So we are right now working on a chain that has basically one shard, and instead of jumping to a thousand shards right from the start, maybe it would be interesting to see if it makes sense to do it gradually.
A: Okay, thank you. Anything else on this kind of range of Alexey's comments on the proof-of-work-to-proof-of-stake transition? And Alexey, there was an explicit mechanism added to the spec in the past couple of days: instead of just voting on the deposit root in the contract, it votes on a combo of the deposit root and a proof-of-work block hash, and so the finality gadget is explicitly in there to be used at this point. Okay.
A: We will move on to the next item on the agenda, which is the test formats discussion. Yannick has two new YAML test formats that have been posted; hopefully you've been able to take a look. Yannick, do you want to just give us a quick rundown on that? And then, if anybody has any comments now, you can take them; otherwise, we'll try to get these kind of, like, thumbs-upped or approved in the next day or two, so people can begin using them. Yeah — take it away, Yannick.
U: Yeah, I think they're pretty simple. We can start with the simple serialize tests. We have basically three kinds of tests, and the first one is where everything is valid: we specify the type of the thing we want to serialize, then the value, and then the serialized value in bytes, basically a hex string. Then we also have a test for values that have been serialized invalidly — for example, byte strings that are too long or too short — where we do not specify the value, but only the SSZ string and a short description of
U: why it is wrong. And there's a third kind of test where the value to serialize is wrong — for example, if you have an integer that is out of range, or a byte string that's too long — again with a description, and then we only specify the value and not the SSZ string. Types, where possible, are specified as strings — for example bool, uintN, bytes, addresses. For lists, we specify them as a YAML list with a single element that specifies the type of the elements of the list, and for containers —
U: Yeah, well, for most values we have YAML-native things which represent these values, and I think all the rest is pretty straightforward. And for the tree hashing tests, we basically use the same type definitions, the same value definitions, and then we just add, yeah, the tree hash of the value. Yeah — any feedback on that?
U: Basically, the two directions: one is, we have a value and we can't serialize it because it does not match the type — for example, an integer: if you want to serialize 500 as a uint8, that's not possible; or if you want to serialize 50 bytes as a bytes32, it doesn't work. And the other way around is if you get the serialized string and you can't deserialize it properly. [I see, I see.]
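The two failure directions Yannick describes — a value that can't be serialized because it doesn't match the type, and bytes that can't be deserialized back — can be shown for a single toy type. This mimics the shape of the tests, not any real SSZ library:

```python
# Toy uint8 codec illustrating the valid case plus both invalid
# directions used in the SSZ test format.

def serialize_uint8(value: int) -> bytes:
    if not 0 <= value < 256:
        raise ValueError("value out of range for uint8")
    return value.to_bytes(1, "little")

def deserialize_uint8(data: bytes) -> int:
    if len(data) != 1:
        raise ValueError("wrong length for uint8")
    return int.from_bytes(data, "little")

# valid case: type + value + expected serialized hex string
assert serialize_uint8(5).hex() == "05"
assert deserialize_uint8(bytes.fromhex("05")) == 5

# invalid value: 500 does not fit in a uint8
try:
    serialize_uint8(500)
except ValueError as e:
    print("serialize failed:", e)

# invalid serialization: two bytes can't decode as a single uint8
try:
    deserialize_uint8(b"\x00\x01")
except ValueError as e:
    print("deserialize failed:", e)
```

A test vector file then lists exactly these three shapes: (type, value, expected bytes), (type, bad bytes, description), and (type, bad value, description).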
A: On some of these hex strings — I just want to make sure that we're not making something difficult here. I think the issue might have been maybe trying to parse the hex strings in, like, ambiguous contexts, and if that's the case, this isn't really an issue. Kristof, do you know, given the knowledge of —
C: [Partly inaudible] I'm just wondering about the container format and how the values are encoded — like, what our configuration for decoding is, because the decoder has to know the types of the container fields up front, and we take that from the type definitions at the beginning.
F: libp2p will produce some tools — Wireshark dissectors, if I remember — to be able to analyze whatever format we end up using for the peer-to-peer networking, be it SSZ or protobuf or Cap'n Proto or something; but as of now we will use SSZ, I guess. And there is a discussion, open or maybe closed, on the repo about that, so you can read the whole history of the discussion, with lots of input from Alexey.
T
Well,
my
current
opinion
on
this
is
that
I
agree
with
way
that
if
there,
if
this
format
is
going
to
be
wrapped
into
the
other
packaging
like
late
p2p,
then
that
packaging
would
add
necessary
prefixes.
So
you
don't
need
to
design
as
a
Z
for
that
purpose.
So
in
that
that
is,
it
means
that
it,
you
need
to
optimize
for
something
else.
If
it's
gonna
be
used
as
an
input
for
hashing,
then
I
wouldn't
use
prefixes,
as
I
said
before,.
G: We already have a hashing algorithm, right? I mean, the hashing algorithm that is used in the consensus part of the spec — it's also described in the simple serialization spec, in the tree hash section. So do we even need any other hashing so far, which would work with streaming?
N: I see. So for me that immediately shows a sign of a problem — there might be a problem with this, because imagine you have a data structure that has many different fields of the same type, and for whatever reason, at some point, for example, you need them to switch places.
N: In that case you might get into bugs where you have the same field of the same type, but in a different place within the data structure. And for communications it might not be necessary to cut out the named fields, but if you were to do that, you might run into problems of two different structures that look the same.
F
Normally
is
sorry,
nobody
sec,
we
should
have
some
kind
of
spec
version,
and
so,
when
you
communicate
with
someone,
you
also
spent
the
SSD
version.
You
are
speaking,
and
so
you
agree
on
the
field
orders
and
the
content
that
way.
So
it's
kind
of
a
schema
of
of
the
the
communication
is
written
in
the
spec
with
a
certain
version.
A: — PR 139, which Justin recently approved. Generally, the arguments laid out favored slightly the direction of little-endian, and it also helps Substrate, because they only support little-endian. Does anybody feel very strongly about merging this pull request that has been sitting there?