From YouTube: Eth2.0 Implementers Call #24 [2019/8/29]
D: Hi, so we were in Berlin two weeks ago, and now we have team updates about everything that we did in Berlin, if you want to read up on that. We have also updated our documentation to include all the main libraries that we developed while doing this Ethereum 2.0 work, and we reworked our build system. So now you can directly build an eth2 client from the nim-beacon-chain repo instead of going through Nimbus. So that's it for the support part.
D: We merged a big PR that introduced a lot of unit tests and working routines for the state tests, because so far, for state testing, we relied on simulations, and now the focus is just to make sure that we pass the tests for interop. But since the unit tests were based on the pyspec, we are confident that it should be okay.
D: One concern that we have is that CI takes a long time, so that is one thing we will have to optimize after interop. In terms of performance, we have some benchmarks on cryptography in progress, and we also measured the scaling of attestation processing and justification/finalization on a single node; on a typical laptop we can have 20-30 [unclear]. And lastly, on networking: we are working on the pure nim-libp2p implementation.
E
So
we
should
be
able
to
pass.
These
are
a
to
soon.
We
explained
on
the
engine
also
on
the
inter
oxide,
we're
working
on
in
networking,
so
Johnny
recently
developed
mantra,
which
is
the
intubation
of
Lighthouse.
There
Russell
admitted
peak
integration
so
that
we
can
aim
to
have
a
common
wire
protocol
to
communicate
as
well.
We
are
working.
E: Some of our folks just returned from ETHBerlin, where they had conversations with team Whiteblock and also team Handel, which is something we're looking into for the future: integrating the novel networking spec that they've developed. Also related: interop, which is taking place next week. If anybody has any questions about that, feel free to reach out to Joseph Delong. And finally, we have benchmarking in progress for BLS signatures and aggregation. If there are any questions, I'm happy to take them.

A: Great, thanks, Kevin. Thanks, Danny.
G: Hey guys, Terence here, giving the updates. So we've got a new DB and the new fork choice integrated at runtime. We have been testing this locally with a multi-node setup, using, I think, 64 and 128 validators, and it's working nicely so far, so we're just waiting on implementing initial sync before we move to deploy this in a larger test environment. From the API point of view, we've implemented regular and streaming gRPC endpoints, and we also enabled node discovery via Discovery v5, and that's working.
G: So the last pending item, like I said, is just initial sync, which is currently getting implemented, and we're also working on aligning our code base to v0.8.2 before interop. Also, we're planning and experimenting with the light client protocol that is currently being specified in the spec, and we're looking into slashing detection and prevention algorithms. So yeah, that's it.
H: Hey, Paul here. So we completed our upgrade to the latest networking spec, with syncing, and we're debugging that on a testnet now. We also found some bugs and made patches around go[-libp2p] compatibility: one of our team wrote a stream echo implementation, verified the lower levels (the handshake, SecIO, multistream and mplex) and, as mentioned, tested gossipsub.
H: So it's nice and efficient, and we'll work on that a bit more. We're building out more HTTP endpoints; we made a suggestion about the API that is moving forward on standardizing, and we're working to make sure that using Lighthouse is very ergonomic and very interop-friendly, so it's easy to spin up and join testnets and stuff like that. That's it from us.

A: Thanks, Paul.
I: The same goes for discovery; I hope it will be finished next week, as well as tests for interop, minor things like metrics and whatever. Also, we have made a new attestation pool, but for now it is only used by the fork choice and for choosing attestations for a proposer. And the bad thing is, we are not sure that anyone from our team will be able to go to interop. We have visa problems, but I hope there will be at least one of us there. And that's all, all right.
J: We're updating our test case parser to the v0.8.3 vector tests right now, and hopefully we'll be passing soon. About our py-libp2p package: we finished the SecIO module, and now we are testing it against go-libp2p. We have also done some improvements to py-ssz around the hash tree root cache, and also for the performance that might be one of our bottlenecks we're following it up. So yeah, we are trying to make it as fast as possible.
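The hash tree root cache mentioned here can be sketched roughly as follows. This is an illustrative toy with invented names, not py-ssz's actual implementation: the point is simply that, with cached internal nodes, changing one leaf only rehashes the O(log n) path to the root instead of recomputing all O(n) hashes.

```python
import hashlib

def hash_pair(a: bytes, b: bytes) -> bytes:
    # Merkle parent = SHA-256 of the two 32-byte children.
    return hashlib.sha256(a + b).digest()

class CachedMerkleRoot:
    """Toy Merkle root with cached internal nodes (implicit heap layout:
    tree[1] is the root, tree[n + i] is leaf i)."""

    def __init__(self, leaves):
        self.leaves = list(leaves)
        n = len(self.leaves)
        assert n and (n & (n - 1)) == 0, "power-of-two leaf count for simplicity"
        self.tree = [b"\x00" * 32] * (2 * n)
        for i, leaf in enumerate(self.leaves):
            self.tree[n + i] = leaf
        for i in range(n - 1, 0, -1):
            self.tree[i] = hash_pair(self.tree[2 * i], self.tree[2 * i + 1])

    def root(self) -> bytes:
        return self.tree[1]

    def update(self, index: int, leaf: bytes) -> None:
        # Only the path from the changed leaf to the root is rehashed.
        n = len(self.leaves)
        self.leaves[index] = leaf
        i = n + index
        self.tree[i] = leaf
        i //= 2
        while i >= 1:
            self.tree[i] = hash_pair(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2
```

An update then costs log2(n) hashes rather than n, which is why such a cache matters for hashing large validator registries.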
L: ...[It] provides some immediate scalability, basically using techniques kind of similar to Plasma, except putting more data on-chain. So you don't have to deal with operators and exit games and all of that complexity, and it would be relevant to execution environments as well.
C: So I guess on my side, I've started recently a very detailed review of the phase 1 spec, and there are potentially a couple of substantive changes to phase 0 coming out of this review. One is a fork choice fix, and the other one is the idea of having a universal kind of slashing condition for equivocations that would work for beacon blocks, shard blocks and attestations. So that's a nice complexity reduction.
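A universal equivocation rule of the kind described can be sketched like this. The structure below is a hypothetical illustration (the types and field names are invented, not spec types): one predicate covers all message kinds by checking that the same validator signed two distinct messages for the same type and slot.

```python
from typing import NamedTuple

class SignedMessage(NamedTuple):
    # A generic signed consensus message; the same rule covers beacon
    # blocks, shard blocks and attestations if they share this shape.
    validator_index: int
    domain: int        # message kind, e.g. 0 = beacon block, 1 = shard block, 2 = attestation
    slot: int
    message_root: bytes

def is_equivocation(a: SignedMessage, b: SignedMessage) -> bool:
    """One slashing condition for every message type: the same validator
    signed two distinct messages for the same (domain, slot)."""
    return (
        a.validator_index == b.validator_index
        and a.domain == b.domain
        and a.slot == b.slot
        and a.message_root != b.message_root
    )
```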
C: I am expecting the phase 1 spec to be feature complete, polished and tested by September 30th, but I think it would be premature to declare it frozen by then. The other thing I've been looking into recently is this new proof system, a SNARK proof system called PLONK, and the idea here is that you can have a trusted setup which is universal and updatable.
C: So, in practice, what this means is that the trusted-setup issue with a specific circuit doesn't really exist, because you have a single setup which is usable for all circuits, and it's updatable, meaning that you have a continuous ceremony and you just need one single person in that very long-running ceremony to be honest for the setup to be trustworthy. There was a previous construction called Sonic, but that had a huge performance hit.
C: Yes, and this project called Aztec is starting a ceremony in September, so I'm expecting that to kind of be the genesis of the PLONK long-running ceremony that people can build upon. One of the cool things about removing the trusted-setup issue is that it might be a tipping point for being able to use SNARKs at the consensus layer, and there are all sorts of places where they could be useful: data availability proofs, secret single leader election, witness compression, and so on, I guess.
C: The nice thing is that if we already have BLS12-381, then we already have the pairings, so at least from the point of view of verifying the SNARK there's very little incremental complexity. Most of the complexity would be on the prover side, which is, I guess, just algorithmic. So I think this is kind of an important moment in the history of SNARKs; yeah, quite excited about PLONK.
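For context on why verification comes almost for free once BLS12-381 is available (an editor's summary in generic form, not any scheme's exact equation): pairing-based SNARK verification reduces to a small, constant number of pairing-product checks of the shape

```latex
e(A, B) \cdot e(C, D) \cdots = 1_{G_T},
\qquad e : G_1 \times G_2 \to G_T ,
```

and the same bilinear map $e$ already underlies BLS signature verification, which checks $e(\sigma, g_2) = e(H(m), pk)$; so a client that verifies aggregated BLS signatures already has the key primitive a SNARK verifier needs.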
A: Great, thanks. I've also been digging into the phase 1 spec and finding new things in there. Something else I've been thinking about is whether you can have any sort of herd immunity on p2p networks, where not all messages need to be verified by all validators when being gossiped. I can't find any existing literature on the problem, but if anyone has thought about these things or knows of some good papers to check out, please share.
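One naive way to frame the herd-immunity question (purely an illustrative model, not anything proposed on the call): if each relaying node verifies a message only with probability p, an invalid message survives k independent relay hops unverified with probability (1 - p)^k, so even modest per-hop verification rates suppress invalid messages quickly.

```python
def survival_probability(p_verify: float, hops: int) -> float:
    """Probability that an invalid message crosses `hops` relays when each
    relay independently verifies (and would drop) it with probability p_verify."""
    return (1.0 - p_verify) ** hops
```

Under this toy model, with p_verify = 0.5 an invalid message has only a 1-in-1024 chance of surviving ten hops; the hard part the speaker raises, which this model ignores, is adversarial topology and correlated non-verifiers.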
L: In the past and currently, this is very difficult because you have to go through a bunch of epoch transitions, and every epoch transition has O(n) work, because you have to update the balance of every validator, calculate the new state hashes, calculate committees and all of those things. What this proposal does is basically go through the parts that have that O(n) work and restructure the protocol, using some techniques from the Vyper version of Casper from a couple of years ago, but in a more limited way, to try to remove that, so that you would only have either O(1) or generally a very small amount of work to be done in the epoch transitions that come at the end of empty epochs.
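The O(n) work being discussed looks roughly like this in pyspec terms. This is a deliberately simplified caricature with invented helpers, not the real `process_epoch`: the point is that every step touches every validator, which is what the proposal tries to avoid for empty epochs.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class State:
    balances: list          # one entry per validator
    seed: bytes = b"\x00" * 32

def process_epoch_naive(state: State, base_reward: int = 1) -> bytes:
    """Caricature of why a naive epoch transition is O(n) in the number of
    validators n: each step below iterates over the whole registry."""
    n = len(state.balances)
    # 1. O(n): update every validator's balance.
    for i in range(n):
        state.balances[i] += base_reward
    # 2. O(n): recompute committee assignments (toy shuffle by hashing).
    committees = sorted(
        range(n),
        key=lambda i: hashlib.sha256(state.seed + i.to_bytes(8, "little")).digest(),
    )
    assert len(committees) == n
    # 3. O(n): rehash the whole state with no caching.
    return hashlib.sha256(
        b"".join(b.to_bytes(8, "little") for b in state.balances)
    ).digest()
```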
B: During ETHBerlin I put some time into looking into [SSZ] partials and wrote a fair amount of code, and if anyone is interested in helping here, I'd be happy to discuss new ideas on how we serialize partials and how we compute these Merkle roots more efficiently, also in the context of partials. I'll share a link after the call. Thanks.
P: So yeah, we have been refining the specification now, and also the testing infrastructure, so we're working on that. What's probably more interesting, at least from a research point of view, is how this K model is going to help us later on to show safety and liveness properties of the protocol. This is something that I think we are excited about, but of course we're still trying to organize our ideas on that. So yeah, this is it.
A: Thank you. And on the deposit contract: I've merged in the compiler fix that was found by Runtime Verification. They planned on releasing yesterday; I'll check today if it has been released. That, and one PR on our side, is kind of the last thing before we do the final verification, so that's moving along well. Okay, I think we're done with research, right? Great.
Q: I'm just posting a link to a few things in the chat. We received several contributions that are going to be pretty interesting. There's a set of test vectors, especially, and these are important because, in contrast with previous efforts to assemble such test vectors, these ones are actually capable of decrypting SecIO traffic. The contributor also picked up this issue.
Q: He is also working on finishing this task by creating a traffic inspector, and these are, incidentally, the two protocols that we've chosen, so I'm hopeful that we should be having a sort of semi-mature set of test vectors for the teams that are going to be interop-ing in two weeks, so that they can debug potential interoperability issues. Another interesting one is, of course, the Noise handshakes; a contributor picked this one up, and the spec is evolving. We now have an implementation in Go.
Q: This is going to serve as the reference implementation, and as the spec evolves and we continue maturing it, we expect to modify the reference implementation in lockstep. Another interesting contribution we received was from Puja Lamba as well; he created a set of test harnesses and conducted some initial profiling on the reference implementation of gossipsub in Go.
Q: For message propagation testing, this is basically based on a tool developed by matrix.org that allows you to create networks of Matrix servers via a very nice UI, with drag and drop, and then set network characteristics, quality of service, traffic shaping and so on, simply by dragging and dropping on a canvas. We forked that and adapted it to create gossipsub containers, to wire them up, to establish quality-of-service rules and so on, and to simulate message propagation.
Q: We also have upcoming scalability testing on decentralized NAT hole punching, and on the JS side, critical improvements to adopt async/await, which was sort of a pain point for the team that picked up the Noise handshakes, because we wanted to deliver a JS version as well, but we faced some roadblocks there. We're also going to be working on visualizations and, of course, supporting this group, especially in the event of the interoperability testing.
A: The general idea is this: the API repo is going to have these APIs that are generally agreed upon and conformed to by a number of clients. These things are not consensus-critical; they're not network-critical. These are things for users, and the more that we can agree on some of these things, the better the user experience and the better the dev tools...
A: ...that we can probably build. But also, I fully expect the cycle around this kind of stuff is probably going to be some innovation, some local testing, some talking to different users, customers, etc., and then coming back and trying to conform on them. I do expect these to evolve over time, because we don't always know the needs of our users until we actually have users. That said, getting some of the core stuff, the core data...
H: Yeah, sure. I made a PR that proposes... it's still kind of in the meta stage, since the question with defining these APIs is: how exactly are we going to define them? So, Prysmatic have been working on defining theirs in protobufs, and I propose that we make a more abstract markdown definition of an API that provides a basic, rough HTTP guide, enough to get people started, and then we can use that core abstract API definition and make specializations from it.
H: So you can define a protobuf file that implements all the methods in it, or you can write out some method that maps that abstract definition to a concrete definition. And then, when we want to make a super strict API that consumers are going to love, we can make, say, a Swagger specialization of that abstract one. So that was my proposal; always keen to hear what people think. So, you know, jump on the PR and have a look. Yeah, bring it up here.
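As a toy illustration of the abstract-definition-plus-specializations idea (the endpoint names, routes and types here are made up, not the actual PR's contents): a transport-agnostic description of each method can be mechanically rendered into a concrete format such as an OpenAPI-style stub, while the markdown document stays the source of truth.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    # Transport-agnostic description of one API method.
    name: str
    method: str      # abstract verb, e.g. "GET" or "POST"
    path: str
    response: str    # abstract spec type, e.g. "BeaconBlock"

# Hypothetical abstract API (in practice this would live in a markdown doc).
ABSTRACT_API = [
    Endpoint("get_version", "GET", "/node/version", "string"),
    Endpoint("get_block", "GET", "/beacon/block", "BeaconBlock"),
]

def to_openapi_stub(api):
    """One possible specialization: render the abstract endpoints as a
    minimal OpenAPI-style paths dict."""
    return {
        ep.path: {
            ep.method.lower(): {
                "operationId": ep.name,
                "responses": {"200": {"description": ep.response}},
            }
        }
        for ep in api
    }
```

A protobuf specialization would be another such renderer over the same abstract list, which is the "map the abstract definition to a concrete definition" step described above.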
H: Yes, the API that I made is pretty loose, and I'm not even sure about... I called it the basic API, and I'm not even sure about the grouping of things. So what I was trying to get at mostly was the format in that PR, not necessarily the detail of the endpoints, but we might as well do both at the same time.
H: Yeah, so we were using Swagger originally, and then, once I had to go in and do a bunch of modification in Swagger, I found it was actually pretty painful unless you really get your head around the YAML specification. So I thought that if we have this kind of more human-language specification, in line with the rest of the specs we have, it might be a bit easier for us to iterate on.
M: I think another reason to do it this way (and I agree that this is a good way) is basically that if you specify it in one of these more concrete formats, you've already done a second piece of work, which is mapping the sort of world that the rest of the eth2 spec rests upon to this specific instance. We've already done the work of mapping, I don't know, BLS keys to strings, and I feel that that is really a separate piece of work that might be different depending on where you are with your public API; say you want to implement a similarly shaped API over some other transport, or whatever that might be. So I think it's much more useful to have a more abstract definition, aligned with what we have in terms of data types and so on in the spec, rather than some specific encoding protocol, or whatever you want to call it.
S: Hey, can you guys hear me? It's Preston from Prysmatic. Yeah, hey. So just a little background on what we've been doing: a couple of months ago, we saw this as an upcoming problem where we really need to start thinking about data APIs. People have been asking already, for our testnet: where's the block explorer? They really want to understand what's going on with their validator and within the network.
S: So right now it just has the eth2 phase 0 API that we're proposing, and we're happy to move this, you know, under a different name, because it doesn't make sense for us to necessarily own it. But that's kind of where we started, and we chose protocol buffers because it's a really rigid definition.
S: So it has the types, it has the ability for inline documentation, and you can really see what the schema looks like in terms of the request, the response, and also the HTTP routes can be defined there too. So you see the route and how the server is supposed to outline these services, so when you're going to implement it, you can really see all of that in one spot. Now, that might be premature in terms of, you know, "oh, we haven't decided the final thing", so maybe doing it in...
S: ...markdown is fine, but we chose this because of the generative properties: you can take this definition and generate your Swagger, or even generate markdown documentation from it. We put this out for feedback a couple of months ago; we mentioned it on the call, and we've got some feedback from Paul. What we'd like to do here is to agree on, not really so much the format of how we're defining these things, but the actual service routes, message objects and response objects.
T: Hey guys, it's Raul from Prysmatic. There were some comments also about it being unreadable if we do it this way; we don't think that's the case. I think everything is extremely well documented and, like Preston said, it's totally possible to have kind of a mirrored markdown object definition of that.
A: Okay, we have the interop retreat coming up, and Whiteblock and protolambda have put out a lot of these; they put out that survey. Thank you, everyone, for responding to it; it's going to help us be really productive at this thing. So, Zak or proto, would you like to discuss at a high level some of the things that are worth mentioning, and then we can talk a little bit about some of the stuff that, you know...
U: Yeah, sure. My internet connection is a bit spotty, so I may drop off. Yeah, proto and I have been collaborating, working on this interop survey; thanks, everybody, for responding. So it seems like most clients are up to date on the spec, or have agreed that v0.8.2 is going to be the version that we'll standardize on, so most clients didn't have an issue with updating, but there were issues in testing. The remaining issues had to do with things like Merkleization and SSZ, but they're pretty minimal.
U: A lot of people had bottlenecks with networking, but it seems like everybody either already has, or is very close to, implementing a functional libp2p stack, so good work on that, everybody. And SecIO has been implemented as well, so that didn't present any bottleneck. The only immediate bottleneck that we can foresee, just based on the results of this survey, is syncing.
U: I think that's probably going to be something that we want to iron out and define in a better way, to make sure that we know what we're doing once we get to it. The same goes for the wire protocol: we should probably iron out and review the existing wire protocol and make sure it's up to date. I personally haven't looked at it for a couple of months.
U: Also, for anybody who isn't aware, the networking spec has made pretty good progress, so everybody should be up to speed on what that presents, because it breaks down everything pretty nicely: what we're gonna do for interop and what we're gonna do for mainnet. What we're gonna do in these coming weeks isn't necessarily going to be reflective of what we're going to be doing for mainnet, of what the mainnet plan is, but it shouldn't require any additional development time from anybody.
U: Another thing we want to try to work out is how we effectively communicate between client teams. What we're talking about is setting up, within the interop event, specialized, client-agnostic teams based on certain topics, so everybody can help each other out with things like passing the existing tests.
U: Other topics are what our sync strategy is going to be, the wire protocol, and exchanging initial network messages. So we're trying to write some tooling that helps everybody along with this process and automates as much as possible. We're going to be sharing these results on Monday, so I don't wanna get too long-winded. So, proto, if you have anything to add, feel free.
A
That
it
seems,
like
maybe
half
the
teams,
don't
have
the
facility
to
even
be
gossiping
individual
authorizations,
not
aggregate
and
that
for
some
very
limited
types
of
tests
with
very
limited
number
of
nodes
and
validators.
This
will
likely
work
to
some
extent,
but
that,
if
we're
doing
I
have
any
sort
of
interesting
tests
and
networks
beyond
that,
we
do
need
to
be
a
gossipping
attestation.
So
if
that's
not
currently
part
of
your
network
wire
protocol-
and
you
can
you
can
gossip
box,
you
should
take
them
on
that
and
add
that,
if
possible,
I.
A: So, at a high level right now: some of the more subjective answers we'll probably pull out, but having a table of just where clients are with respect to features means we can make informed decisions on how to group clients in doing the full testing. This is something that, even though we might publish it on Monday, we'd like people to be actively updating as features and things are completed, or close to it, over the next week. Sounds good.
A: But in general, things are looking good, and I'm pretty confident that we're gonna have some interesting stuff, and at least two, maybe three, multi-client testnets that we can spin up and get some good stuff out of. So, anything else on this before I move on? There's plenty of planning and coordination and things that we want to do over the coming seven days; it's going to be an active conversation, and we'll get some different docs and things together on that.
A: And I'll share that being able to have a coordinated start is probably very important, so targeting that is useful. It looks like there are a few notes in there on how things can be done, and the way to start from a state was a little bit under-defined, which was pointed out, and there's an issue on that. But we can clear that up today or tomorrow.
A: One thing is that starting from a genesis state is obviously the bare minimum, and being able to coordinate on a couple of parameters to define a genesis state, which is, I think, the genesis time and the number of validators, with some underlying algorithm for generating them, is going to be very useful to spin up these systems quickly.
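The two-parameter genesis idea can be sketched like this. Everything here is a mock (the state shape and the key-derivation rule are invented stand-ins, not the actual interop scheme): the point is only that, given an agreed (genesis time, validator count) pair and a deterministic derivation rule, every client independently computes an identical genesis state.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class MockGenesisState:
    genesis_time: int
    validator_pubkeys: list

def make_interop_genesis(genesis_time: int, num_validators: int) -> MockGenesisState:
    """Build a deterministic mock genesis state from two agreed parameters.
    The hash below stands in for a real (e.g. BLS) key-derivation rule."""
    pubkeys = [
        hashlib.sha256(i.to_bytes(8, "little")).digest()
        for i in range(num_validators)
    ]
    return MockGenesisState(genesis_time=genesis_time, validator_pubkeys=pubkeys)
```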
A: That said, the initial debug path, if there's some sort of issue on the network, is not to just restart the network at a recent state and see what happens again; it's to pull down data, debug, run it against and check with the specs, and different things like that. So there might be some missed opportunity for doing some interesting things if we can't start from an arbitrary finalized state, but I'm not certain that we're gonna get to the point at which we'd want those during this. Am I correct, proto?
B: So, from the results of the survey: there were like seven different options listed in the survey, and there are even more options for how you could start a chain as well. For debugging, the real minimal thing you need is to be able to load a state; then, if we produce blocks in one client, or in one testnet, and run them in other clients to see if they have the same problems, this would be really useful to reproduce these bugs without repeating the complete network setup.
B: I think we have that already: loading a JSON thing, or, if you have the ability to output it as SSZ, we can debug it in the spec, we can debug it in other clients, so you can find where these bugs occur, and that works really, really fast, instead of waiting to have it reappear on a network or elsewhere. And dumping states is also really useful, but optional.
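The debugging loop being described, dump a state, load it elsewhere, replay the same blocks, can be sketched like this. The state-transition function and field names below are toys invented for illustration (real clients would load SSZ and apply the spec's transition):

```python
import json

def state_transition(state: dict, block: dict) -> dict:
    # Toy deterministic transition standing in for the real spec function.
    new = dict(state)
    new["slot"] = block["slot"]
    new["history"] = state.get("history", []) + [block["body"]]
    return new

def replay(dumped_state_json: str, blocks: list) -> dict:
    """Load a dumped state (JSON here; SSZ in practice) and re-apply the
    offending blocks locally, which is much faster than waiting for the
    bug to reappear on a live network."""
    state = json.loads(dumped_state_json)
    for block in blocks:
        state = state_transition(state, block)
    return state
```

Because the transition is deterministic, any client (or the pyspec) fed the same dumped state and blocks should land on the same resulting state, which is exactly what makes cross-client bug reproduction work.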
E: I agree with Raul's point, and I would extend it slightly with the formats and the types that would be synced up through the network once it's established; so whether that means the SSZ tests are also passing for whichever attestations or whatever info is being communicated. And I'm presuming some of that can be done in parallel, like the SSZ spec tests plus the smoke tests I previously mentioned.
W: But yeah, to answer your question, Raul: that was our plan in Artemis, you know, to do some initial tests with Lighthouse and, you know, ping Adrian to get involved. So yeah.
Q: I'm just thinking if that's maybe a procedure that we can use here, because, for example, we could create kind of like a HackMD that we share across all teams, where teams basically add the instructions to run their client, and just keep it as a running doc.
A: I know there's a lot of work to be done in the next week, a lot of time cleaning up the last things before interop, but if you do have a chance to just initially do some attempted peering and things like that, the more of that we can do before we get together, the better. Anything else you want to talk about on prospective interop stuff before we move on?
A: Okay. And this week: this is not the week to spend banging your head against something to try to figure it out yourself. If you have questions around the spec, questions regarding networking, anything that can aid you in getting where you need to be seven or eight days from now, please don't hesitate to reach out to me, reach out to proto, reach out to Whiteblock, reach out to other teams. You know, we want to help everyone get to where they need to be to make this next week productive.