From YouTube: Eth2.0 Implementers Call #9 [2019/1/3]
B
We've finished up our fixed-size number library; it's pretty much complete. There are a few tweaks when you have fixed sizes in the test cases, but it works. So we're going to be importing the new integer types into the beacon chain, probably at the end of this week or early next week, so we can be as close to the spec as we can, especially with types.
B
Outside of that, we started getting a couple of contributors outside of ChainSafe, which is nice, so we have like three people right now contributing actively, everywhere from issues to pull requests. So yeah, that's pretty much what's been going on so far, and we've been working on our BLS a little bit more. Great.
C
Hi, so in Nimbus we kept in sync with the latest spec changes. I think we're like one-to-one with the spec regarding state, so we now have a state simulator and we can transition between states in the simulation at one epoch per second; the next step is the fork choice rule.
C
We also have a network simulation working, so it runs in real time with machine-to-machine communication, and the next step for that is persistence. Also, the test-generator repo that we had has been upstreamed to eth2.0-tests, so the shuffling tests will be updated there and all the new tests will be added there. We also focused on giving the community some way to reproduce and set up a Nimbus Ethereum 2.0 ecosystem on their machine.
C
So there is now a Vagrant container available and a tutorial — I will post the link in the chat box — which should allow you to easily set up everything you need to run Nimbus and Ethereum on your Linux or Windows machine. And lastly, we also have the development updates in blog post form, done at the end of December, so it's from five days ago, and you can get the detail on everything we were doing there. The link is also in the chat box.
D
Hi, so over the past two to three weeks we kept trying to sync with the specs and rework our codebase architecture. We also added more custom type hinting in the Python codebase, and once it's fully finished we can update the spec as well and make the spec more readable. Next week we are planning to give some input for the tree hashing test vectors — the py-ssz functions are almost finished — and also add more documentation in the Python codebase. So those are our updates.
E
We kind of completed our VRC interface for Artemis. We are starting work on gRPC, our BLS verification and hash tree root. We opened a minor PR that was closed and merged, #351 — that was a minor modification to the validator relay contract — and just yesterday we opened an issue regarding the validator records.
E
I think there was an update, probably like 28 days ago, where the validator balances were moved from the validator record — I'm not sure what they call it in Python, but I assume "object" — right into the beacon state. And yeah, so that's where we are. Ben is still producing "What's New in Eth2"; if you don't read that, you should. And yep, I think that's pretty much it for us for updates.
F
Yeah, hey, this is Terence from Prysmatic. So yes, things have been pretty slow due to the holidays, but we have accomplished a lot. Over the last three weeks we deprecated the old code to align with the current spec. We have implemented about 90% of the block operations, slot processing and epoch processing functions. We also finished implementing the function to process the deposit data, and on the library side we finished the implementation of SSZ and the tree hashing.
G
We've been a little bit slower — things have been slow over the holidays. We're both progressing on stripping out a lot of the old spec. Adrian's been progressing on Rust libp2p. We've got a PR waiting for tree hashing; we're looking forward to seeing those test vectors that were mentioned earlier, so we've been stalling some of the new spec stuff to try and avoid duplicating work. Also, we've been moving more towards RPC, and thinking about the architectural considerations around code being shared between components like validator clients and so forth.
H
Hey, yeah — we're the opposite of everyone else: we've taken the opportunity of the relaxing holidays to do some work, or I should say Wei has. He's been trying to update our previous implementation to the latest version of the spec. I think we've run into a couple of snags — like the change from Blake makes it a little bit more complicated for us; we have to introduce some abstractions that we weren't planning on doing at this stage.
H
The decision to not switch SSZ to little-endian basically means that our existing codec is the same except for endianness — so, same thing, we would have to introduce a level of abstraction we weren't really planning on. Overall, though, I think it's kind of fine, and we'll see where we go. If we can keep going on Substrate, that's still pretty awesome and we get everything else for free; but if we can't, then, I think...
J
Simplification of the status code logic: we used to have this somewhat complicated state machine with various validator status codes, and there were all sorts of edge cases when you do transitions. That's mostly gone — almost completely — replaced with timestamps in the validator records, plus some of what we now call status flags. There are only two flags, so it's quite simple. One of the things that we're looking to move towards is this idea of a locally computable shuffling.
J
In order to calculate the shuffling of a specific validator committee, you basically have to — it scales linearly with the size of the validator pool, and we wanted to scale better. Once we have this locally computable shuffling, it means that we can be light-client friendly without all sorts of light-client-specific infrastructure. It also means that we can simplify the beacon state: specifically, we had this — in my opinion rather ugly — shard_committees_at_slots data structure.
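A locally computable shuffling can be realized with a swap-or-not style permutation, which is the direction eth2 research was exploring at the time. The sketch below is illustrative, not the spec algorithm — the hash construction, round count, and parameter names are assumptions. The point is that the final position of a single validator index can be computed in O(rounds) work, independent of the validator pool size, instead of shuffling the whole list:

```python
from hashlib import sha256

def permuted_index(index: int, list_size: int, seed: bytes, rounds: int = 90) -> int:
    """Swap-or-not shuffle sketch: compute where one index lands without
    materialising the full shuffled list (O(rounds) per index)."""
    assert 0 <= index < list_size
    for r in range(rounds):
        # Round-specific pivot derived from the seed.
        h = sha256(seed + bytes([r])).digest()
        pivot = int.from_bytes(h[:8], "little") % list_size
        flip = (pivot - index) % list_size  # candidate swap partner (an involution)
        position = max(index, flip)
        # One pseudo-random bit, shared by both partners, decides swap-or-not.
        src = sha256(seed + bytes([r]) + (position // 256).to_bytes(4, "little")).digest()
        bit = (src[(position % 256) // 8] >> (position % 8)) & 1
        if bit:
            index = flip
    return index
```

Because each round is an involution on pairs (and `position` is symmetric in the pair), every round is a permutation, so the composition is too — which is what makes per-index computation safe.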
J
We
also
had
stuff,
like
the
persistent
committee
assignments,
and-
and
this
was
you
know
again-
infrastructure
for
like
clients
and
once
once
now
that
we
have
this
this
news.
Well
now
that
we
have
more
information
about
the
validators
with
this
time
stamps
and
we're
looking
to
have
this
locally
computable
shuffling
can
remove
that
we're
also
looking
to
potentially
remove
the
the
validator
registry
Delta
chain
and
and
rely
on
the
the
double
bash
mark
or
cumulated
that
was
that
was
added
recently.
J
So
that's
it's
not
a
simplification
which
is
nice,
I
guess
I've
also
tried
to
push
for
kind
of
a
clean
separation
between
phase
0
and
phase
2
and
phase
1.
So
a
few
bits
and
pieces
of
logic
and
and
constants
have
have
been
removed
and
I
guess
other
than
that
I've
tried
to
push
for
our
clean
apps
of
the
codebase.
J
Sorry,
not
the
code
base
of
the
the
stack
but
yeah
I,
guess
that
will
take
quite
some
time
because
we're
trying
to
do
it
in
in
such
a
way
that
is
not
one
big
and
pull
the
plaster
in
one
go
but
make
lots
of
small
pour
requests.
I
About the phase 0 case, one thing I was wondering a little bit is what the thinking is there, because I saw that some phase 1 things are being added as placeholders and stuff like this. From a spec-implementation point of view that slightly complicates things, and I think that if we're going to do phase 1 — I don't know, six months or a year after phase 0 — we will have learned many lessons by then and will want to change it further.
A
We go back and forth a little bit; it's the balance between trying to figure out what the future holds and trying to reduce the amount of spaghetti code that will be in these clients. I mean, with every fork you don't really get to get rid of logic and old code as you upgrade a blockchain, and so it's a little bit different from creating most systems.
A
So if we know something's going to be there, I'd like for it to be embedded in the data structure, but maybe we can make a more informed decision in February. That is, if before we launch the phase 0 beacon chain we have what looks like the beginnings of a robust phase 1 spec, then I'd say put the two data structures in there, and put the components of the data structures in there; but if it seems like a major unknown, then just leave it out.
K
I'll speak for the team. So the main focus is on the Stanford Blockchain Conference, where we're presenting a paper on BLS signature aggregation at scale. I know that there's some interest amongst those present in seeing the details of that, so the team's busy doing the tests and the write-up; hopefully in a couple of weeks we'll have something we can share ahead of the conference. I think that's pretty much it for today — unless you want to add anything? No?
L
Basically, yeah, we plan on running a large-scale experiment next week, so we should have some preliminary results by the end of next week or in two weeks. So far we've only run large-scale experiments on the simulator; now we're planning to do it on Amazon EC2 instances. The development is in pretty good shape now, and the experimentation phase is the next one.
M
So we have a first version of this emulator that is ready and running — we have tried it on our machines — and now we are planning to tackle two questions. The first problem that we would like to study with this emulator is the point that was raised in the previous call about how fast or how slow different shards will evolve in the case of having a low number of validators. So, what happens?
M
So
there
is
a
there's
already
a
an
article
paper
that
was
written
on
this
I
thought
it's
quite
it's
about
the
couple
of
years
old,
so
I
think
it
would
be
interesting
to
do
deceptive
conduct
more
take
the
data
and
in
the
case
of
Sheldon.
So
these
are
between
the
two
points.
I
would
like
to
study
with
this
image.
A
The evolution — you said the evolution of the shards in, I guess, a low-validator-count environment. Evolution in what respect? Generally, in a low-validator-count environment you still have kind of an even distribution across the shards. So what exactly are you planning on looking at there?
A
Let me say something, right. So, a few nuances there: there are always two shufflings of validators, so a validator always has two roles, one of which is a long-term role on a shard — I'm always committed to one shard at a time. If there are n shards, there's validator count over n validators on each shard, and the shards, regardless of the amount of validators on each shard — until you reach very, very low numbers — can build at the normal clip of one block per slot.
A
There
is
another
shuffling,
the
they're
called
shark
committees.
They
might
change
soon,
cross-linked
committees,
because
that's
what
they
do
they
cross
link
and
which
is
a
separate
set
of
validators
from
the
persistent
committees
which
are
building
the
shards.
These
valleys,
the
time
with
which
the
cross
links
occur,
can
become
longer
in
times
of
lower
validators.
So,
although
the
shard
chains
are
still
building
at
the
same
clip,
it
is
taking
they're
only
getting
a
finalized
reference
into
the
beacon
chain
on
some
slower.
A
So,
instead
of
maybe
one
scurry
pocket
might
be
once
every
2d
box,
or
maybe
once
every
40
bucks
and
so
again,
you're
degrading
the
performance
of
being
cross-linked
in
which
could
degrade
the
performance
on
crush
our
communication
and
it
it
might
do
some
interesting
things
to
the
four
choice
rule
of
the
shard
chains,
I'm
not
entirely
sure
certain
what
they're
but
the
pork
choice.
The
rule
of
the
shark
chain
always
starts
from
the
last
cross-linked
reference
in
the
beacon
chain,
and
so
you
might
have
you
have
these
longer.
A
You
could
have
these
longer
stretches
without
a
cross
link,
so
you're
more
dependent
on
the
fork
choice
in
the
shard
chain.
Then
then,
when
you're
quickly
cross
link
again
so
there
there
might
be
some
interesting
stuff
there
around
the
stability
of
the
fork
choice
rule,
but
but
these
shards
do
are.
It
was
still
able
to
build
at
the
same
speed.
N
Happy 2019 to everybody. I want to say thanks for all the useful and actionable feedback we've received on the roadmap. We, the libp2p core team, will be meeting in Porto in two weeks' time to, amongst other things, finalize the roadmap discussions, and of course you're invited to attend if you're around — just get in touch with me. This is all public, open to everybody.
N
We can gain speed on that in the next weeks, and of course supporting Ethereum continues to be a priority and a top-level focus for us. Other key topics that we're addressing are multistream 2.0, to reduce the latency of establishing connections and leverage certain functionalities of new transports that we're introducing, like QUIC, which allows for zero-round-trip negotiation. We're also going to be working on a plan regarding DHT 2.0, which basically introduces a number of new functionalities such as overlay DHTs, privacy, secrecy and other features.
N
We're also focusing on interoperability testing and visualization tools, and we're starting to hack on — well, actually, to discuss — packet switching in a bit more depth than we've done in the past. I'll post a link to our OKR sheets in the chat box shortly.
N
Then, on the py-libp2p front, we are waiting on the brand resolution to move the project into the libp2p organization.
N
I think that's going to pick up some speed as well in January. Also, we've got some news on the front of Go package management. This has been a bit of a pain point for a lot of downstream adopters — the usage of gx as a package manager — and we are going to be taking a spike this quarter to evaluate adopting go mod in general, and even perhaps replacing gx, if the hooks that go mod exposes allow us to bring in some features like content addressability and so on —
N
Some
guarantees
that
GX
gives
us
and
yeah
I
wanted
to
provide
a
follow-up
as
one
of
the
conversations
around
that
we
had
in
the
last
in
the
last
call
that
I
was
present
in.
Regarding
the
native
findings
for
the
Libby
to
be
teaming,
we
have
been
working
on
a
plan
with
Pegasus
as
well,
or
the
contributors
of
this
patch
to
to
support
combining
native
libraries
with
bridges
to
do
other
languages.
N
So
one
of
the
use
cases
that
we're
targeting
is
embedding
the
mid
me
to
be
demon
in
environments
such
as
iOS
I
know
that
status
in
were
easy.
2.0
workshops
in
Prague
somebody
from
status
said
that
this
would
be
desirable
for
them.
I
can't
remember
exactly
Queen,
so
I'm,
hoping
that
by
calling
it
out
in
this
column,
president
would
raise
their
hand,
because
I
think
would
be
able
to
support
this
very
shortly.
N
And
I
really
had
talked
to
to
to
the
Pegasus
guys,
because
they've
been
very
proactive
about
getting
this
patch
merged
and
we're
defined
to
find
a
plan
with
different
with
different
different
pieces
to
to
make
sure
that
this
is
done
in
an
orderly
fashion,
because
it
does
come
I
but
change.
A
change
in
architecture
and
the
demon
that
can
post
a
link
to
the
comment
where
it
that
plan
is
summarized.
That's
all
for
me
open
for
questions.
If
you've
got
any.
N
So, somebody from Status suggested that having a deployable form of libp2p for iOS environments — and particularly the libp2p daemon — would be useful as a first, you know, experiment in running libp2p in iOS applications. Basically, the idea that came up was: well, now that we have a daemon, and the daemon is exposing a local endpoint over IPC — and apparently iOS supports IPC and Unix domain sockets — then creating a native implementation...
N
We are planning to get better at that as well in 2019 — essentially becoming a spec-first project, which makes it easier for other downstream implementers to engage and to inherit the built-in validations we do. So one of our focuses for the next year is going to be mobile adoption as well, because this is important for a number of use cases — offline use cases — that we want to address. What's likely to happen — I do expect at some point —
N
We
will
be
seeking
a
swift
native
implementation
of
Libby
to
be
for
the
time
being,
while
Swift
and,
of
course,
the
Java
one
is
already
underway
for
the
time
being,
I
think
the
demon
itself
so
there.
So
the
problem
with
that
with
a
current
model
of
deployment
of
the
daemon
is
that
it's
basically
a
binary
right
I,
don't
think
you
can
run
a
binary
in
iOS
just
as
this,
so
by
compiling
it
down
to
a
native
library.
O
So basically the thing in the new year is that the effort on discovery v5 is split into two different efforts: one is getting the whole topic thing worked out, and the other one is the actual protocol. So I put Frank on the wire protocol — he's basically taking care of that now. I don't know if you've met Frank at Devcon, but he's also been working on the new test suite for the rest of devp2p.
O
Now that the test suite is done, he's moved on to actually taking care of the wire protocol tasks — just basically getting a preliminary spec in place. And when it comes to the topics: yeah, over Christmas not much has happened, but I'm still at the point where I'm basically trying out the simulations that — what's-his-name, the someone from brainbot did — yeah, exactly, Yannik. So Yannik did some simulations with OMNeT, and I've been trying to —
O
— actually, you know, get this to run on my laptop. So that's approximately where I am right now. I've tried to kind of keep my queue clear on the geth side so I have more time in the beginning of the year to really look into discovery version five, because so far it's always happened that I've just been swamped with geth tasks, and right now I don't have any big pending geth tasks left. So, yeah.
A
And again, keep us updated — we often work in the sharding Gitter. So if there's something specific that you want some feedback on, or input, or help with implementation, or anything like that, please do let us know, because this is really a top priority — figuring out this discovery protocol — because it's one of the components that is still not quite locked down at this point. Yeah.
P
One question that I have is regarding running beacon nodes — the incentives for running them. There's an active issue regarding that in the spec, and there are some concerns that Bruno brought up in the previous call. I'm just pulling up the spec right now — in which... we would have to design the validator clients in a particular way in order to preserve privacy, because otherwise I wouldn't be able to get privacy. Let me just search for that.
A
One reason would be: I'm a validator and I want to have my own direct connection to the network, to get the state of the world and sign messages. Another might be that I have some sort of service in which I have the state of the world and I provide information about the state of the world to others — either for free, altruistically, or even in some sort of paid model, if I needed some more incentive. Another reason you might run
A
one of these nodes is because you are running any sort of application — you know, a block explorer; Etherscan would run a node or many nodes. I mean, it's similar to: why do I run a current proof-of-work node? One small set of people do that for mining, and the rest are either altruistic actors in the network or people who have applications or whatever. So again, it's pretty much anyone that has a reason to be running
A
the protocol directly would be running one of these nodes — and again, the beacon node is really just an implementation of the protocol. The beacon chain is the core system-level stuff that I need to sync; the application chains are the shard chains, and so that's just kind of the core piece of infrastructure. Then I sync whatever chains are relevant to my needs, whether I'm a validator or a block explorer or any sort of application that might need to sync one of these chains.
A
7Ru rego asked a question in the chat: is there a reward for running a beacon node? There is not — so again, a beacon node (I don't know what the correct terminology is) is just like an Eth 1.0 node: there is no direct incentive for running one of these. So there's no direct reward for running one, but if you do want to become a validator —
E
I guess PR 317, which was the update that made the changes to that — I've been doing some preliminary reading. My understanding is that the moving of the validator balances into the beacon state is for a hashing optimization, yes? And is that language-specific to Python?
A
You know, we used to have two states: the active state and the crystallized state. The active state was small and had to get rehashed frequently; the crystallized state was very large
A
And
would
get
rehashed
every
epoch,
the
we
had
the
separation
because
we
were
using
a
flat
hash
and
really
had
no
ability
to
cache
the
components
of
the
state
that
had
not
changed
her
cash
fashion.
When
we
moved
to
the
SSD
tree
hash,
we
now
have
isolated
the
various
components,
the
arrays
and
the
objects
from
each
other
into
this
hash
tree,
and
so
you,
when
you
say,
update
just
the
balance
of
one
valid,
a
or
a
neighbor
to
rehash
most
rehash
the
data
structure.
A
Most of the components of the tree remain stable, and you have to do a relatively low number of hashes to do an update. This was generally perceived as good and fine when we were using Blake as the hash, because Blake hashes faster than keccak256. When we switched to keccak256, we observed that most of the validator record is not updated frequently, but the balances are updated every epoch — most of the balances, if not all of the balances, are updated via rewards and penalties — and so by moving the balances out, we've isolated
A
the large component of what needs to be rehashed into a smaller data structure, and so we're able to benefit from caching the hashes of the validator records a lot more and isolate the amount of hashing that has to be done. This was in an effort to reduce the loss in hash time — or rather, the increase in hash time — when we moved to keccak256. Does that make sense?
E
Yeah, totally — totally makes sense.
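The caching argument here can be made concrete with a toy binary Merkle tree — illustrative only, not SSZ tree hashing itself, and sha256 stands in for keccak256. With per-node caching, updating one leaf rehashes only the log2(n) nodes on its path to the root, which is why a field that changes every epoch (like balances) is cheapest kept in its own small structure rather than inside each large validator record:

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

class CachedMerkleTree:
    """Toy binary Merkle tree with node-level caching: updating one leaf
    rehashes only the log2(n) nodes on its root path, not the whole tree."""
    def __init__(self, leaves):
        n = len(leaves)
        assert n and n & (n - 1) == 0, "power-of-two leaf count for simplicity"
        self.n = n
        self.nodes = [b""] * (2 * n)      # 1-indexed heap layout: leaves at n..2n-1
        self.nodes[n:] = list(leaves)
        for i in range(n - 1, 0, -1):     # build internal nodes bottom-up
            self.nodes[i] = hash_pair(self.nodes[2 * i], self.nodes[2 * i + 1])
        self.hashes_done = 0              # rehash counter, for illustration

    def update(self, index: int, leaf: bytes):
        i = self.n + index
        self.nodes[i] = leaf
        i //= 2
        while i:                          # only the path to the root is rehashed
            self.nodes[i] = hash_pair(self.nodes[2 * i], self.nodes[2 * i + 1])
            self.hashes_done += 1
            i //= 2

    @property
    def root(self) -> bytes:
        return self.nodes[1]
```

For 8 leaves an update costs 3 hashes instead of 7; for a million-entry balance list the gap between log2(n) and n is what the optimization buys.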
E
Okay, so maybe I'm proposing — and I may open an issue for this — maybe a name change for the validator balances. Because we moved it out of the validator registry, there's, I think, sort of a naming collision between validator_registry and validator_balances; maybe something like epoch_validator_balances might be more appropriate.
A
Yeah, but they are the balances of each validator: there's a one-to-one relation between your validator index and your balance in this validator balance array. It's just that we moved it out of the validator data structure so that we can reduce the amount of hashing that needs to be done.
E
Okay, I've got it — but I'm open on the name changes. I don't know if "epoch" does what we need it to do; we need to explain them either way. Yeah, yeah.
K
There doesn't seem to be a lot of documentation about what the expected behavior of a block proposer is supposed to be — what a well-behaved block proposer does. I guess it's possible to infer it from the spec, because we've got the validity conditions written down, but as an exercise I went through what happens —
K
you know, how a block proposer is supposed to deal with deposit receipts from the main chain — and it took me quite a few hours to reverse-engineer it, to try and find out exactly what the proposer is supposed to be doing. I'm still not sure I've got it right, and it doesn't seem to be documented anywhere. So I've written a lot in the sharding Gitter
K
about what my conclusions are, and I'd be glad if somebody could check them and correct me where I'm wrong. But is there a plan for a document, or a place somewhere, where we just spell this out?
A
Yeah, we've talked about that in the past, and you're right: what a validator does is entirely implicit in what is valid on the
A
Beacon
chain
which
is
good
and
that
you
can
separate
these
two
things
but
I
think
that
a
an
accompanying
document
of
what
quote
an
honest,
validator
does
just
spelled
out
very
explicitly,
is
I.
Think
would
be
a
valuable
addition
to
the
spectra,
though,
and
maybe
we
can
open
up
an
issue
for
that
and
January's
probably
beginning
to
be
a
good
time
to
put
that
in
there,
and
maybe
it
was
a
little
early
before
just
in
that
spec
moving
a
lot.
A
...and providing it to the validator — providing essentially a block proposal to sign, similar to a proof-of-work miner: the node provides a proposal that the proof-of-work miner is supposed to try to hash. So the heavy lifting, I think, should happen in the node. The main information that needs to be passed along to the validator is enough information for the validator to decide if this is a safe or a dangerous message to sign. As for strategies for the validator to assess the validity of that information —
A
I think that's more of a trust relationship, and the validator maybe should be asking multiple nodes or something there, but I don't think that the signing entity should be doing much of the heavy lifting. Obviously there's a lot of design work there. I would say you're passing more of these block proposals in to sign — so, that makes sense, yeah.
R
There have been a lot of spec changes about having the beacon node send the information to the validator client. However, we think that this makes it a lot harder to decouple the two, in the sense that, you know, we want to be able to swap out the underlying beacon node; but if you're really dependent on one beacon node for everything you do — and in particular you don't really have that much data that you store locally in the validator client, and you're also not tied to the p2p network —
A
— so it can make good, informed decisions in the future, and then it can pass these signatures along to something that's connected directly to the p2p network to broadcast. Once you start putting p2p requirements on a validator, you've now increased the scope of this entity massively, and you've also now directly connected a validator with signing keys to the Internet and to a p2p network full of potentially malicious actors. So I think, even just from a security standpoint, you've now moved the validator out of a place of isolation into an incredibly risky place.
A
Again, back to the swapping: if the validator only asks questions about the state of the world and the state of things that it might sign, then whomever it's asking those questions of is very, very easily swapped, as long as there's a common interface to ask these questions. Once I've put in p2p requirements, processing requirements, all sorts of stuff, I've actually started to just build a node inside of the validator, when there are already robust node implementations.
A
Why am I essentially repeating all of this logic? One of the big things is the data requirements: for any information that the validator is computing proof-of-custody bits on, it essentially has to store data over a long period of time — it actually has to pull that data down — but a node is already syncing data from the world.
A
Others are there to write applications, to be block explorers, to be hobbyists — all sorts of things, people that aren't validators — and so a node has to be able to sync shards; if a node can't sync shards, it's not very useful. And if a validator also has to sync shards, now we have two different entities that have to be able to sync shards, which again sounds like duplication.
J
The other comment I have is on the honest behavior, and I think it's actually very simple, so I'd advise trying to just spell it out in, like, a couple of sentences. So basically, step 0 is just: apply the validity rules to the various blocks that you've received, and then you've got this block tree.
J
So you have various forks; then you apply the fork choice rule, so you get a single, canonical blockchain; and now your duty as an honest proposer is to build on top of the tip of this canonical chain. There are basically only two things you need to do from the point of view of the deposit roots, which I think was the point that was brought up. Number one:
J
you need to cast a vote for a deposit root from the Ethereum 1.0 deposit contract, and the rule there — I'm actually not sure it's spelled out in the spec — is that you want to vote for the latest one which is contained in a block whose height is 0 mod some power of two. So, for example, every 1024 blocks on the Ethereum 1.0 chain
J
you're going to have a corresponding deposit root for whatever you consider the canonical Ethereum 1.0 chain, and then you just vote for that. And then, as soon as you have the required threshold of validators who have voted with that specific rule, that becomes — right now it's called processed_deposit_root, but it will soon be called latest_deposit_root.
J
So you have this latest deposit root, and then the second thing you need to do is include deposit receipts from Ethereum 1.0 into Ethereum 2.0. You need to include them in order; you need to include up to 16 of them — that's specified in the MAX_DEPOSITS constant, which is equal to 16 — and you need to include them up to, basically, the latest deposit root that has been voted upon. And I think that's it — and I think we're currently missing a validity condition on the ordering of those deposits.
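The two proposer duties described here can be sketched as follows. This is a hypothetical paraphrase, not spec code: the helper names, the block and deposit dict shapes, and the 1024-block vote period are assumptions for illustration; only the MAX_DEPOSITS = 16 constant comes from the discussion.

```python
MAX_DEPOSITS = 16                 # per the discussion: at most 16 deposits per block
DEPOSIT_ROOT_VOTE_PERIOD = 1024   # assumed power-of-two spacing of votable roots

def deposit_root_to_vote(eth1_blocks):
    """Duty 1: vote for the deposit root of the latest eth1 block whose
    height is 0 mod the vote period, on your canonical eth1 chain."""
    candidates = [b for b in eth1_blocks
                  if b["height"] % DEPOSIT_ROOT_VOTE_PERIOD == 0]
    return max(candidates, key=lambda b: b["height"])["deposit_root"]

def deposits_to_include(pending_deposits, latest_voted_index):
    """Duty 2: include pending deposit receipts in order, up to MAX_DEPOSITS,
    and only up to the deposit root that has already been voted in."""
    eligible = [d for d in sorted(pending_deposits, key=lambda d: d["index"])
                if d["index"] <= latest_voted_index]
    return eligible[:MAX_DEPOSITS]
```

The ordering filter in `deposits_to_include` is exactly the validity condition the speaker notes is still missing from the spec.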
A
You know, there actually are a few others — there are a few other things. I mean, the proposals, the attestations, and the timing of when you're expected to do things with respect to a slot: like, you're expected to attest to the head of the slot actually halfway through a slot, rather than at the beginning. This is just kind of knowledge floating around, but not really stated anywhere — and again, there's going to be more.
A
Great. Okay, well, thank you everyone for coming. I know it was really close to New Year's and the holidays, but I'm excited to keep the momentum going. As always, when in doubt, reach out on the Gitter. We'll schedule one of these — I think two weeks out, if we don't have any conflicts. I'm generally out the next few days and will be a little bit slower in responding to things, but I'll be back in full force on Monday.