From YouTube: Eth2.0 Call #63 [2021/5/6]
Description
A
The agenda is in issue 217, nothing crazy. Today: client updates. I put a slot in the agenda for discussion of the incident, if we have anything else to discuss. I think we've discussed this a ton offline and there are a lot of public updates and discussions around it, but this is a place in case anybody hasn't had one. Then Altair: general engineering progress, spec and testing planning. Then we're going to do research updates, spec discussion and any closing remarks.

A
Just a reminder: we can talk about some merge stuff here, but there's super active discussion going on in the Rayonism chat and there are still merge-specific calls on the opposite week of this. So we won't go too deep, but if there's interesting stuff to discuss, we can touch on it.
A
If you can give us a picture of where you stand on some of the merge progress and, more importantly for this call, where you stand on Altair, that'd be great. Starting with Lodestar.
B
Hey, so as far as Altair, we're still working on it. Let's see: we've added the new gossip and req/resp methods, and we've also been working on the light client side. Specifically, we're now able to generate sync objects and consume them, and to generate and consume state proofs.
B
I think the big open item for all of Altair, like being able to run a testnet locally or anything, is just updating the validator interaction.
B
Other than that, we've generally been adding more metrics to Lodestar. That's been really helpful in updating our Grafana dashboard. And that's it for us.
A
Great. In terms of the validator interactions, it's primarily additive as long as you have the data structures correct, and I guess a block producer could do the same role and just not pay attention to those sync committees; similarly, someone could just not pay attention to their sync duties. So you might even be able to stand up a testnet today. But great progress. Let's move on to Lighthouse.
C
Hello, Paul here. When it comes to Altair, we have our consensus changes awaiting review; we're adding some caching to our beacon chain for sync committees, so getting the fast verification of that down pat; and we've also got the network protocols under review. Then, when it comes to Rayonism, we're just passing the merge test vectors today. Generally we're aiming towards the 1.4.0 release.
C
It's probably going to be in the next few weeks, but it rolls in a bunch of features that we've been working on, including better Windows support, a big reduction in memory footprint, a doppelganger protection service, and reducing our outbound eth1 calls. It will also have the Altair structure definitions and the mechanics to choose between the two forks. That's all going to be included in that big release, so it's coming in a few weeks.
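The "mechanics to choose between the two forks" can be sketched roughly as follows. This is an illustration, not Lighthouse's actual code: the function and constant names mirror the consensus spec, but the fork epoch value here is just an example that would come from network configuration.

```python
# Sketch: pick which fork's parameters govern a given slot, based on the
# configured Altair fork epoch. Values below are illustrative examples.

SLOTS_PER_EPOCH = 32
ALTAIR_FORK_EPOCH = 74240            # example; set per network in config
GENESIS_FORK_VERSION = b"\x00\x00\x00\x00"
ALTAIR_FORK_VERSION = b"\x01\x00\x00\x00"

def fork_version_at_slot(slot: int) -> bytes:
    """Select the fork version that applies to a slot."""
    epoch = slot // SLOTS_PER_EPOCH
    if epoch >= ALTAIR_FORK_EPOCH:
        return ALTAIR_FORK_VERSION
    return GENESIS_FORK_VERSION
```

A client would use this kind of dispatch everywhere behavior differs across the boundary: block processing, gossip topic selection, and SSZ type choice.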
C
We're also planning to share some validator performance stats. It's not comparing clients, but you can compare a group of validators to the global average over some span of time. I'm just trying to get the broader staking community to be a little bit more aware of the details of how their attestation rewards work.
C
So they can come back to their clients with more info. Michael also has a PR open on the spec repo for attestation participation carryover. Oh, and that validator performance thing, I'll release it on Reddit or Twitter or something at some point soon, just a little spreadsheet. That's it for me.
D
Hi, so last Monday we released 1.
We have improved our Nimbus guide in a lot of areas following feedback, especially from Rocket Pool users, and we have a blog post called "Eth2 is green", which made headlines, especially on Twitter, and was well received, also on a lot of Discords. That was with Geth plus Nimbus plus Rocket Pool, running 10 validators.
E
Hey guys, Terence here. We finally merged the optimized slasher implementation, and that will go into a release in about a week and a half. The Altair fork test is finally passing, and now we're working on optimization of the sync committee, like Paul said. We're also working on networking and the validator-to-beacon-node interactions. And we participated in the merge testnet. Unfortunately, we had some consensus error, and that was confirmed using the merge spec test vectors put out yesterday. So thank you for that.
E
We failed at the execution payload's transactions, and we're looking into the hash tree root implementation of that. I've also started working on implementing the sharding spec, and I'll be asking questions on that as well. And yeah, that's it from us. Thank you.
A
Great. Yeah, in retrospect it was crazy that we tried to do that devnet without consensus vectors. I'm surprised it did as well as it did. Thank you.
F
Hey guys. We're pretty much caught up with the latest alpha spec release with respect to Altair. Reference tests are passing, except for a few Altair state upgrade test cases that we need to debug. All the sync committee related validator duties have been implemented. We integrated the Altair state upgrade logic so that we can transition across fork boundaries.
F
We added logic to update the ENR fork ID field at fork boundaries. We have sync committee subnet subscription updates wired through to update the new ENR syncnets field.
F
This isn't in an official release, but we went ahead and implemented version two of the getMetaData RPC request, which adds a new syncnets field. We're continuing to migrate Jim McDonald's proposed sync committee APIs to the standard spec and implementing those within Teku as we go, and we also had a community contribution for a custom REST endpoint to query peer gossip scores. That's it for me.
A
Great, thank you. Next up: two weekends ago we had the incident, where seventy percent of block proposals went offline. I don't know if there's anything to discuss here, just leaving room in case.
A
Okay. If you're listening in, Prysm has a great incident report, and a number of others have been discussing this, so you can probably easily find a lot of that. Thank you, and thank you to everyone that worked their ass off through the weekend on that one. Altair: so, general engineering progress. It seems like things move forward without too much issue. I did want to have another pre-release out now-ish.
A
I actually had eye surgery one week ago and was a little more optimistic about how much I was going to be able to read and do computer work in the past week. But many people from my team have stepped up; there are a lot of PRs out for, not a final release, but a wave of cleanups and testing, primarily a lot of testing to be added. So we are aiming to get a lot of that review done today and try to get a spec pre-release out tomorrow.
A
There is this item that Michael proposed. Before he put this out, we were essentially dropping an epoch's worth of attestation participation, which, as Michael pointed out and others agree, would probably not look and feel good from the perspective of validators losing rewards on that one epoch. A simple mitigation might be to just give a tiny bit of extra reward, but the slightly more complicated, though not very complicated, method which Michael proposed is to translate the current epoch's pending attestations in the state into participation flags at that epoch crossing. The primary downside to this is just that more testing needs to happen on that fork boundary.
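The proposed translation can be sketched roughly as follows. This is not the spec PR's actual code: the flag constants mirror Altair, but the attestation fields and the timeliness checks here are simplified stand-ins for the spec's full matching logic.

```python
# Sketch: instead of wiping the pre-fork PendingAttestations at the Altair
# upgrade, map each one into Altair-style participation flag bits so the
# final pre-fork epoch still earns rewards.

TIMELY_SOURCE_FLAG_INDEX = 0
TIMELY_TARGET_FLAG_INDEX = 1
TIMELY_HEAD_FLAG_INDEX = 2

def translate_participation(pending_attestations, num_validators):
    """Build a per-validator participation byte from pending attestations."""
    participation = [0] * num_validators
    for att in pending_attestations:
        # Simplified timeliness rules; the spec checks matching source/
        # target/head roots and inclusion delay in more detail.
        flags = [TIMELY_SOURCE_FLAG_INDEX]
        if att["matching_target"]:
            flags.append(TIMELY_TARGET_FLAG_INDEX)
        if att["matching_head"] and att["inclusion_delay"] == 1:
            flags.append(TIMELY_HEAD_FLAG_INDEX)
        for index in att["attesting_indices"]:
            for flag in flags:
                participation[index] |= 1 << flag  # idempotent bit set
    return participation
```

The alternative being weighed against this is the simple behavior: wipe the array and start the first post-fork epoch with empty participation.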
A
But looking at his proposed changes, I don't personally think it's extremely complicated. I think we could get a reasonable amount of test vectors in there without too much issue. While we enhance the testing over the next couple of days on that PR, I would suggest that if your client team hasn't taken a look, you take a look at that.
G
All right, so this would only basically affect attestations created in the last pre-fork epochs that get included in the first post-fork epoch, right?
A
Well, no, it's pending attestations in the state.
A
So one epoch of it will look like essentially empty participation and a minor reward drop.
A
That
was
my
gut
michael
does
have
a
pr
up
that
reuses
the
functionality
of
prosthetization
in
a
modular
way,
so
that
it's
primarily
just
for
using
functionality.
My
yet.
A
I mean, I guess from the financial analysis, it's probably a very small amount of money one way or the other, but a reasonable amount of dev time. So that's kind of a funny way to look at it, but it's also definitely probably more correct, in a sense, to just do the translation.
C
There's also, I guess, some complexity in breaking the invariant that if you get your attestation included in blocks in the chain, then you get rewarded, and that these participation metrics stay the same. So I guess there's a complexity that lives there too, if we choose not to address it in the spec.
I
You're
saying
like
there's
a
complexity
because
because
the
there
are
no
rewards
like,
but
even
if
they
wear
like
one
pock,
where
nobody
participated,
because
there
are
no
rewards
for
it.
That
wouldn't
do
very
much
right.
C
Yeah, I'm not saying that the financial difference is a lot, but it's just a thing where the system just has this epoch where you include your attestations and you don't get rewarded.
A
Expect
it
could
potentially
introduce
complexity
in
third-party
tooling.
That
is
making
some
sort
of
assumption
based
off
of
that,
but
I
I
would
imagine,
block
explorer
well.
I
A
A
Complexity,
that's
what
I'm
saying
I
could
imagine
a
block
explorer
that
uses
on-chain
attestations
to
cap
to
display
granular
rewards
and
that
getting
out
of
sync
with
what
was
actually
given,
because
what
is
given
is
very
is
is
bulk
and
so,
but
that's
that's
not
necessary,
but.
J
Yeah, I think one of Michael's points was that a lot of validators will actually be watching that particular epoch quite closely, monitoring their rewards, and noticing that they're not actually getting any might trigger a whole heap of users complaining on our Discord channels.
L
I think the issue is more that if someone sees something happen that's unexpected, they trigger a retrospective, they trigger dev time, they start investigating. And if they're not prepared for this to happen, they may devote hundreds of hours of engineering effort to try to figure out why they missed that epoch, and it turns out it's just because of a little bug that was never bothered to be fixed and that they didn't know about. I think that's real.
A
It's a question of complexity, and there are four or five teams here that are going to implement that complexity, so ultimately I'd like to hear more input from the various teams. It seems like the Lighthouse team is keen on adding this amount of complexity for it, but I'd like to hear what other people say.
C
I'm not sure any of us at Lighthouse would die on that hill, FYI, but we still stand by what we said, I think.
M
Yeah, I think no strong opinion from us either, Nimbus. I mean, we will have to maintain the boundary code between the two forks for a while anyway, and so on.
A
The current behavior is extremely simple: wipe an array, replace it with a different array. This is like a translation function, taking the state of the current array and then mapping it into a parallel state in the other array.
A
Okay. If you haven't taken a look at the open PR, please take a look at the open PR. It represents additional complexity to solve this minor issue and would certainly require a bit more testing to make sure that we can get that fork transition correct. Can every team please take a look at it? It's a simple PR; we just need to make a call.
A
Okay. Has anyone stood up an Altair testnet with the current specification locally, or even any sort of CI that stands up a testnet?
A
Okay. So I think we need to make a few more decisions, get these last cleanups through, and we will discuss the plan again in two weeks.
K
Yeah, I'm just a little concerned about timing if we're punting two more weeks before we make a decision. June is definitely kind of out, right? If we're going to run a six-week or so testnet at that stage, then we're sort of into mid-July, and we've got things going on on the eth1 network and so on; time starts slipping away.
K
I mean, should we start planning a date? Because there's always a latency between planning a testnet and getting everything ready and prepared for that, and so if we don't decide until everyone's ready, then we've got this sort of latent period.
A
We have these two public testnets that we can fork, but we should probably do a multi-client, short-lived testnet before then, would be my guess.
C
I'd need to chat to the team, the broader team, in order to commit to anything, but I'd really be leaning towards later rather than sooner, right: like the next month instead of this one.
C
Yeah, I think so. I would have to check with everyone else before I could commit.
A
Yeah. Actually, on the fuzzing effort and fuzzing infrastructure, what's the status on that?
J
So yeah, we're spending a lot of time patching up the fuzzers, basically incorporating the latest changes of the various clients that we have in there. That's taken a lot more time than I anticipated, but that timeline will probably work for us. I'm expecting us to have both the differential fuzzing and the coverage-guided fuzzing up by early June, so hopefully we can get a decent amount of fuzzing cycles in before, or around the time, we'll be forking the testnets.
A
I guess the nice part is that our validators could update both of their nodes at the same time to deal with Altair and London, rather than having to do one and then the other.
K
Cool, thanks for considering that. I think it helps just to have some idea of what a plan might look like, even if it's somewhat vague.
A
Okay, anything else on Altair?
A
Okay. On the spec side, can you let us know what the state of that testing release is?
N
Sure. So right now we have a lot of new features in the dev branch with respect to testing both Altair and the merge.
N
We don't have a release out just yet, but yesterday I made two different pre-releases, or pre-releases of the pre-release really, that enable all the client implementers to go ahead and test the merge and try the latest Altair changes. And then I expect, end of this week or maybe the start of next week, we will cut an official release.
A
Thanks, great. Any other spec or testing related items?
N
One more thing: I'm currently writing a proposal for a new way I would like to handle configuration within the clients. I am working on updating the specs to separate constants, configuration, and presets. The idea here really is to try and separate the things that are intended for test builds, these kinds of more static configuration things, from the more dynamic things that we change with almost every testnet: the fork versions, forward planning like the timing of these forks, and a few of the common configuration variables that we would want to change in testnets.
N
Right. So the idea here is that it's much less scary to change the configuration for different testnets, and you can rely on the runtime capability of clients for different local tests and testnets, and then settle on a few set presets that change more of the configuration for testing purposes. You won't have to go beyond those presets; you support them in the binary, so we can have compile-time configuration for a lot of things.
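The proposed split can be illustrated with a small sketch. The key and preset names here are stand-ins chosen for illustration, not the proposal's final layout: the point is that preset values fix SSZ type shapes and live at compile/build time, while config values vary per testnet and load at runtime.

```python
# Sketch: compile-time "preset" values (fix type shapes, baked into the
# binary) vs. runtime "config" values (safe to override per testnet).

MAINNET_PRESET = {
    "SLOTS_PER_EPOCH": 32,
    "SYNC_COMMITTEE_SIZE": 512,
}
MINIMAL_PRESET = {
    "SLOTS_PER_EPOCH": 8,
    "SYNC_COMMITTEE_SIZE": 32,
}

# Runtime config: the things that change with almost every testnet.
DEFAULT_CONFIG = {
    "GENESIS_FORK_VERSION": "0x00000000",
    "ALTAIR_FORK_VERSION": "0x01000000",
    "ALTAIR_FORK_EPOCH": 2**64 - 1,  # far-future until scheduled
}

def load_config(overrides):
    """Overlay per-testnet overrides on the default runtime config.

    Preset keys are rejected: they can only change at compile time.
    """
    cfg = dict(DEFAULT_CONFIG)
    for key, value in overrides.items():
        if key not in cfg:
            raise KeyError(f"not a runtime config key: {key}")
        cfg[key] = value
    return cfg
```

A testnet operator would then only ship a small config file (fork versions, fork epochs) and pick one of the two shipped preset builds.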
M
I mean, speaking of which, do we even want to maintain the minimal configurations anymore? I feel that they were kind of a hack, because we didn't know very much about performance back then, and now we do.
A
They're used extensively in python spec testing; it's very important for our CI, just because we can't wait the time it takes to run the mainnet configuration testing.
N
Right. So I think we should maintain the minimal preset. So we have mainnet and we have minimal; these things only change at compile time, and then we do specify which things are part of a preset. Clients can opt to define additional presets, but we don't require clients to define more than these two spec presets, and we just try and isolate the parts of the configuration that we do want to configure at runtime.
N
That's right. For the shape of the beacon state and many of the other types, these constants are very important to know at compile time in order to optimize for them.
O
Yes. Yeah, so just to quickly mention that we have set up a group for working on the standardization of client metrics. I would take this opportunity to thank the client teams for helping us on this effort. We are starting to make some progress on selecting a subset of metrics that is already implemented in the clients and trying to standardize those, so that we can track many things across clients.
A
Excellent. Okay, that tentative Altair timeline is the goal; let's work towards it. I'll talk to you all very soon. Appreciate everyone joining, and for all the updates and conversation. Take care.