From YouTube: Consensus Layer Call #88 [2022/6/2]
A
Okay, the stream should be transferred over. If you are in the YouTube chat, please let us know if you can hear us. This is issue 536 on the PM repo, Consensus Layer Call 88. We'll focus on the merge, talk a little bit about the reorg that happened on my birthday, and then open up to any other discussion people want to handle. Okay, cool. So, the merge: we have a number of agenda items.
A
First of all, can somebody give us an update on what's going on with Ropsten deposit tracking, and whether this is isolated to specifically what we're seeing on Ropsten or whether, instead, it's some more fundamental issue that we might see on mainnet or during the merge? Who has the update on this?
B
Paul from Lighthouse here. I know we had some problems on our end due to the really long block times, up to a minute or two, I think. We were voting on some old blocks, so it's not a threat for mainnet unless block times get to be more than a minute or something like that. We've got a PR up that solves this; it moves to a more dynamic approach to building the block cache, similar to what Teku does.
B
I think it's looking good on Ropsten now, so that'll be in the next release.
A
Got it, okay. And block times got really long on Ropsten because of the amplification of the difficulty in the prior week, and then it slowing down again.
A
Not
trashing
the
head,
it's
tracking
a
depth
and
then
we
have
to
agree
on
on
this
depth
and
the
depth
is
based
off
of
time.
Oh
time
stamps,
so
that's
going
to
be
at
least
the
the
the
grounding
of
the
reason
and
maybe
there's
some
sort
of
assumption
incorrectly
being
made
paul.
Was
there
some
sort
of
assumption
correctly
being
made.
B
Yeah, that's right; just, I guess, trying to avoid unnecessarily downloading blocks.

A
So you make some assumptions about block time, so you don't have to download all of them?

B
Yeah, that's right!

A
Oh, I got you. Okay.
B
Yeah, I think we had a tolerance factor. I can't remember what it was, but we were well outside of that.
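For reference, the consensus spec's own candidate-block check is roughly the following; the constants are real spec values, while `voting_period_start` here is just an assumed input (in the spec it is derived from the beacon state). The tolerance Paul mentions comes from the fact that depth is estimated in seconds rather than blocks:

```python
# Spec-style eth1 candidate check (close to is_candidate_block in the
# consensus specs). A block is votable if its timestamp places it between
# one and two follow distances behind the voting period start, where the
# distance is *estimated* as SECONDS_PER_ETH1_BLOCK per block.
SECONDS_PER_ETH1_BLOCK = 14
ETH1_FOLLOW_DISTANCE = 2048

def is_candidate_block(block_timestamp: int, voting_period_start: int) -> bool:
    lookback = SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE
    return (block_timestamp + lookback <= voting_period_start
            and block_timestamp + 2 * lookback >= voting_period_start)
```

If real block times blow out to a minute or two, as on Ropsten, recent blocks fall outside this window, which is consistent with clients voting on old blocks.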
B
It wasn't a spec thing; at least for us, it was a Lighthouse thing, the way that we were doing the caching. I think there might have been some other clients who were having issues as well, but I can't speak to that.
B
Yeah, that's right. I think what's worth mentioning as well is that eth1 voting, as we call it, has, for better or worse, always been a little bit wonky; it's got room to move, room to be wrong. So it's been a part of the code that hasn't received the same attention as, say, an incorrect state root might receive.
B
So
yeah,
it's!
I
guess
it's
worked.
It's
it's
always
worked
well
enough
for
mainnet,
and
it's
been
the
case
that,
when
block
times
get
really
weird
and
wonky,
it's
something
that
we
can
release
a
patch
for
so
yeah.
It's
kind
of
one
of
those
flexible
parts
of
of
the
code,
but
clearly
we
could
have
done
better.
A
Okay, okay. I think there's certainly a desire to simplify this mechanism and, in doing so, potentially make it a faster mechanism, now that we are much more tightly coupled to the execution layer. This isn't a merge discussion, but I think it's something that myself, Mikhail, and others want to spend some cycles thinking about, because when something has gone wrong on a testnet or even mainnet, it's been a likely culprit, a sign that there's some unnecessary complexity here.
A
Okay,
anything
else
on
robson,
deposit
tracking,
its
impact
on
robson
beacon,
chain,
sorry,
bobstone
or
any
of
the
or
main
net.
Are
we
good
in
this.
A
Okay,
great,
so
the
next
thing
is,
we
need
to
launch
a
sepolia
beacon
chain.
It's
generally
there's
an
issue
standing
on
the
pm
repo.
I
just
updated
this
morning
based
off
of
the
conversation
we
had
maybe
a
month
ago
about
keeping
it
the
validators
small
validators
that
small
and
generally
permissioned
and
utilizing
that.
A
I
think
we
just
need
to
agree
on
the
final
parameters
and
get
this
thing
launched
and
I
think,
there's
also
general
agreement
amongst
the
people
that
I've
spoken
with
that
launching
it
sooner
rather
than
later,
just
for
the
best,
so
that
we
are
just
generally
prepared
for
sepolio
perry
is
out.
I
think
perry
would
when
he
comes
back
and
help
us
finalize
some
configs.
C
Okay, not exactly that, but one thing we also mentioned on Awkward Devs is that we should launch the Sepolia beacon chain and run it through Altair ASAP, but not necessarily Bellatrix, so that it's in the same state as mainnet and we kind of work through the process.
A
Okay, so when Pari gets back, we'll propose a date and a config, approximately in the next couple of weeks, and get it up. I think the validator set size is going to be on the order of a couple thousand, and permissioned.
F
Can I just clarify, not on naming: have we decided on the Goerli/Prater merge before Sepolia, or Sepolia first? Because I've heard both recently.
C
So the rough feeling that I had was that we would do Goerli first, and the reason there was that we will get more data out of Goerli, because it's a network with more activity; there are more people running validators on Prater. I think maybe the only argument I could see for doing Sepolia first, if we did, is that we might do Sepolia when we're not quite ready, when we don't have code that's quite ready for mainnet, but we do want to get another testnet run in somehow. Because it feels like what goes on Goerli should probably be extremely close to what goes on mainnet, since it's what most users will use and test on. So the only reason I could see to do Sepolia first is if we want another run on a testnet with stuff that's maybe not quite ready to be finalized yet.
A
Yeah, I find that argument compelling, actually, just to be able to keep things moving and save Goerli for last. I mean, no matter what, the last testnet is going to have the code closest to what's run on mainnet, so I buy that argument. And we do do Goerli and mainnet shadow forks, which help us understand some of the things that come out of that. Obviously, something you mentioned was actually having a much more open validator set, and having validators and stakers actually test this stuff at scale.
B
I also like the idea of pushing Goerli to afterwards, to give us a little bit more time and make sure that the code is close to production.
C
Oh yeah, I don't think you'd want to guarantee that, but I think you'd want high confidence that it's as close as possible, you know. But we definitely would not frame it to users as: download this for both Goerli and mainnet.
A
Yeah, agreed all around; I saw a thumbs up as well. Was this touched on at All Core Devs last week? Not the order? No? Okay, we can maybe do a round of communications over the next few days and just see if we do want to swap. I guess nothing's been made quite official here, but nonetheless I think we should get the Sepolia beacon chain out soon, just so that it's out and ready for our use, no matter the order.
A
Okay, we'll circulate a suggested date and configurations, probably in about a week when Pari gets back, and we can kind of finalize that. Okay, something Pari pinged me about this morning was the Ropsten TTD and discussions around the choice of that. Tim, I think we need to choose that Monday?
C
No,
no,
we
need
so.
I
think
I
have
a
very
strong
preference
for
a
number.
So
the
thing
I
think
yeah,
the
thing
I
think
probably
makes
sense,
is
picking
a
number
and
we've
had
someone
on
our
team
at
the
ef.
Mario.
C
Look
into
that
communicating
that
number
with
the
folks
who
run
validators
like
on
the
client
teams
and
and
on
the
testing
teams,
making
sure
all
those
are
upgraded
and
then
basically
publicly
communicating
the
number
so
that
in
the
worst
case
you
know
it
doesn't
affect
the
network.
If
somebody
decides
to
mine
upload
the
wickfctd.
C
So
my
so,
I
guess
what
I
would
suggest
is
like
we
have
a
number
suggestion
and
and
like
some
hash
rate
assumptions
around
it
right
after
this
call,
we
can
send
it
to
all
the
client
teams
to
make
sure
that
there's
no
major
objections
and
then
once
once
the
latricks
hits
which
is
tonight
like
in,
I
think,
like
10
hours
or
so
from
now,
then
we
run
a
ttd
override
on
the
validators
that
are
controlled
by
by
client
teams
and
and
the
ef
and
then
tomorrow,
basically
like
exactly
24
hours
from
now,
we
publish
this
number,
so
everyone
has
a
chance
to
upgrade
and
then
obviously,
as
soon
as
we
published
number,
there
might
be
some
incentive
for
people
to
mine
towards
that.
C
We've
purposely
chosen
something
that,
like
gordy,
should
not
hit
by
next
week,
but
that
we
can
then
rent
hash
rates
to
accelerate
ourselves.
So
the
goal
would
still
be
for
it
to
be
hit.
You
know
sometime
late
next
week
but
like
given
the
current
hash
rate
on
gordy.
It's
it's.
It's
targeted
for
much
farther
than
that.
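To make the "number suggestion and hash rate assumptions" concrete: total difficulty grows at roughly the network hash rate per second, so a TTD targeting a given date is back-of-the-envelope arithmetic like the following. All numbers here are made up for illustration, not the actual Ropsten values:

```python
# Illustrative TTD selection from hash rate assumptions (fake numbers).
current_total_difficulty = 4.5e16   # hypothetical current TD on the testnet
network_hash_rate = 5.0e10          # hypothetical hash rate, hashes/second
days_until_target = 8               # aim for "sometime late next week"

# Total difficulty accumulates at roughly hash_rate per second, so:
suggested_ttd = current_total_difficulty + network_hash_rate * days_until_target * 86_400
print(f"suggested TTD ~ {suggested_ttd:.3e}")

# Renting extra hash rate pulls the hit date forward proportionally:
rented_multiplier = 3
accelerated_days = (suggested_ttd - current_total_difficulty) / (
    network_hash_rate * rented_multiplier) / 86_400
print(f"with {rented_multiplier}x hash rate, hit in ~{accelerated_days:.1f} days")
```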
C
Okay, yeah, yeah, right. So I think what I would suggest is: right now, there are client releases with a TTD in them for Ropsten. Whenever the next client releases happen, they should update that value to what the actual TTD ends up being, but I wouldn't rush releases just to have that value ASAP; we should just communicate that people need to do an override.
C
Oh
and
one
last
note
on
that,
I
shared
this
in
the
awkward
devs
chat,
but
I'll
repost,
the
hackmd
here,
I'm
using
like
a
hackmd
that
was
put
together
last
week
by
mariusz,
which,
which
kind
of
mentions
how
to
change
the
ttd
on
every
single
client.
I've
posted
it
in
the
chat
here.
C
If there's something that's missing or wrong for your client, please send me a message, and I'll make sure that the blog post does not have the wrong information. All the values in this post have the fake TTD, so that will be changed everywhere, but generally the command, the flag name, all that: it would be good if people double-checked that it's accurate.
A
Excellent, pardon me. Any other discussion points related to the merge?
D
This is unrelated, and I know Pari is not here, but I think it's worth bringing up: once Ropsten has merged, we can probably deprecate Kiln, just because I would love to save some money if we can. I don't think we have to come to a conclusion right now, but it's just something to note.
C
Is
is
it
like
a
significant
cost
to
keep
killed
up
to
the
main
net?
Merge
because
we
do
have
like
a
bunch
of
applications,
you've
deployed
on
it
to
test
stuff
and
they
might
not
already
have
deployments
on
roxton.
So
if
it's
not
like
a
significant
cost,
I
think
keeping
it
until
the
main
net
merge
and
deprecating
it
like
really
close
after
that
would
be
a
bit
better.
C
Is
still
up,
I
don't
even
know
that
I
think
we
can
deprecate
like
literally
today.
I
don't
think
yeah
yeah,
but
but
for
kill
itself
like
yeah,
there's
at
least
like
five
or
ten
big
applications
that
I'm
aware
that
I'm
not
sure
if
they're
still
actively
using
it
but
they've
deployed
on
it,
and
then
it
makes
it
easier
for
others
to
then
come
and
deploy
and
try
stuff.
So
I
I
would
keep
it
up
to
the
main
gun.
Merge.
F
The Ropsten beacon chain is showing zero new validators, zero pending validators. So it looks like the beacon chain can't agree on the deposit contract state at all.
A
The
fixes
on
lighthouse
and
prism
are
those
are
not
yet
released
onto
those
nodes.
A
Okay, let's circle back on that right after the call. I think we need to make sure that, if those fixes are out, they should be agreeing on a value at this point and inducting new deposits. And if that's not the case... I mean, if people can't come to consensus on this value, then deposits cannot be added to the beacon chain, and that's almost certainly what's going on here, which is not great. Can somebody investigate how many deposits were made to the Ropsten beacon deposit contract? Just curious.
G
We deployed one fix to Nimbus, but there might be more needed; we'll see. Our resident expert is on vacation, the eth1 data expert. Hey, that's almost a full-time job.
A
It's funny, the things that you design or redesign... I would have never expected that to be, you know, a major source of error, but it has been, and there are other, seemingly more complex things that work flawlessly all the time. I guess everything else is, you could argue, more elegant.
A
Okay,
so
there's
been
376
transactions
there.
We
can
clear
that
cube
pretty
quickly
once
we
begin
voting
correctly,
but
there's
that's
in
there's
also
the
follow
distance
to
contend
with
there.
I
think
that
we
can
definitely
if
we
can
patch
this
up
in
the
next
day,
then
these
validators
will
be
inducted
before
the
merge.
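As a rough sanity check on "inducted before the merge", with mainnet-style constants (assuming Ropsten uses the same ones), the queue itself drains fast and the follow distance dominates:

```python
# Rough deposit-queue arithmetic, assuming mainnet-style spec constants.
PENDING_DEPOSITS = 376
MAX_DEPOSITS = 16                # max deposits per beacon block (spec constant)
SECONDS_PER_SLOT = 12
ETH1_FOLLOW_DISTANCE = 2048
SECONDS_PER_ETH1_BLOCK = 14

# Once eth1 voting works again, deposits drain at up to 16 per beacon block:
blocks_to_clear = -(-PENDING_DEPOSITS // MAX_DEPOSITS)          # 24 blocks
minutes_to_clear = blocks_to_clear * SECONDS_PER_SLOT / 60      # ~5 minutes

# But each deposit only becomes votable after the follow distance, roughly:
follow_delay_hours = ETH1_FOLLOW_DISTANCE * SECONDS_PER_ETH1_BLOCK / 3600  # ~8 h
```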
G
Oh yeah, yeah, we...
C
Has it? Yeah, sent some stuff, I think. For Erigon it was every combo; for Besu it was every combo except Lighthouse; and then for Nethermind there was one combo, with Nimbus, that didn't work, but it's not clear for Nethermind if it's just because it didn't happen yet or if there was actually an issue there.
G
On the Nethermind issue, to be honest: we prepared something similar to Geth, where we wait for some amount of time and then prepare the payload, but that only masks the issue. We hide the issue, and it still needs to be fixed on the CL side. I think there was a problem with... I'm not sure exactly, but definitely not Nethermind; probably Erigon. And participation is lower right now, so I'm not sure what happened; maybe Pari or Marius will share details.
A
If there's any follow-up discussion on the shadow fork, let's take it to Discord.
A
You know, I think the core of this was an update to the fork choice that was deemed safe to roll out continuously, but then an additional bug in the spec was found that compounded that issue. Caspar, or anyone that's been close to this, do you want to give us... I think a lot of people here are familiar with it, but give anyone listening some familiarity with what happened a week ago?
E
Sure, yeah. So essentially the initial setup was that two blocks arrived at virtually the same time. Both blocks accumulated roughly the same weight, and then essentially the validators were roughly split in half between running the proposer boost fork choice and not running it, with slightly less than half running it, not the other way around.
E
So
slightly
more
people
were
not
running
it,
and
the
problem
was
that
then,
six
block
proposals
in
a
row
were
running
proposal
boost,
but
basically
because
of
this
known
bug,
where
proposers
don't
rerun
the
fork
choice
before
proposing,
they
essentially
falsely
attribute
the
boost
to
the
proceeding
proposal,
instead
of
just
looking
at
the
attestations
itself
and
that
way,
essentially,
because
we
had
six
block
proposals
in
a
row
running
the
proposal
boost
and
not
re-running
the
fork
choice
before
proposing
they
basically
extended
this
slightly
less
heavy
chain,
and
eventually
one
proposal
was
not
running.
E
the proposer boost and therefore saw the heavier chain as leading, and yeah, that kind of concluded the seven-block reorg. As Danny already mentioned, it's kind of an unfortunate situation where validators were split between running proposer boost and not running it, and on top of that, this bug of not re-running the fork choice. If proposers had been re-running the fork choice, this actually wouldn't have happened.
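A minimal sketch of the dynamic Caspar describes, with simplified types and made-up weights (the 40% boost fraction is the real spec value at the time): a proposer who re-runs fork choice sees the boost expire and picks the heavier tip, while one reusing a stale view in which the previous proposal still carries the boost keeps extending the lighter chain:

```python
# Toy LMD-GHOST head selection between two competing tips.
PROPOSER_SCORE_BOOST = 40  # percent of one slot's committee weight (spec value)

def choose_head(weights: dict[str, int], boosted: str | None,
                committee_weight: int) -> str:
    """Return the tip with the highest attestation weight, counting the
    temporary proposer boost for a timely block of the current slot."""
    boost = committee_weight * PROPOSER_SCORE_BOOST // 100
    return max(weights, key=lambda tip: weights[tip] + (boost if tip == boosted else 0))

weights = {"heavy": 1000, "light": 980}  # slightly heavier vs. boosted chain
committee_weight = 100

# Re-running fork choice at proposal time: the boost has expired, head = heavy.
assert choose_head(weights, boosted=None, committee_weight=committee_weight) == "heavy"

# Stale view falsely attributing the boost to the preceding proposal: head = light,
# so the proposer extends the lighter chain, matching the seven-block pattern.
assert choose_head(weights, boosted="light", committee_weight=committee_weight) == "light"
```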
A
Right, and that was our assessment when we rolled it out: that although this might have led to a split view on the order of a slot, it would quickly be resolved. But compounded with this additional bug on the timing of when to run fork choice, that assumption was totally broken.
A
So I think there's a bit of a discussion on, you know, if we are rolling out fork choice changes: one, ensuring that we do an analysis on the safety of how to roll it out, and whether that should be at a coordinated point, not necessarily a hard fork, but essentially telling everyone to update their nodes
A
For
this
event
and
to
enable
it
at
one
point
and
the
other
question
is
obviously
if
how
to
account
for
potential
unknown,
bugs
that
when
we're
doing
such
an
analysis,
one
of
I
think
the
primary
reason,
given
the
analysis
that
this
would
not
lead
to
long
term
splits.
Without
this
other
bug.
A
And
so
I
would
potentially
argue
for
being
able
to
do
it
in
the
future,
but
I
guess
the
one
thing
to
note
here
is
that
you
don't
have
to
manage
these
code
paths
in
perpetuity
like
a
like
a
hard
fork
when
there's
a
logic
change
you
kind
of
after
the
after
you
pass
the
epoch
on
the
next
release,
you
can
actually
do
a
you
can
eliminate
the
old
logic
entirely
with
the
fork
choice
changes.
A
I guess my recommendation would be, when we run into something like this again, just to make sure that we discuss it more here, and write down at least our analysis of why we're making the decision to roll out on a continuous basis or to roll out at a coordinated point.
A
Okay. So I also had a question about SECONDS_PER_ETH1_BLOCK. This is an estimation to get to an approximate depth; it doesn't actually have to be exact. So if there were three blocks in that range, you could still agree on the block, even though the depth wasn't exactly what you estimated. So the problem isn't actually related to the spec; that would just change the number of blocks that you're digging through, or the depth that you get to, but it doesn't change the ability for nodes to agree at that depth. It did used to be a precise block depth, if I remember correctly, but it was simplified to the seconds estimation for simplicity reasons; I'd have to pull up some old issues to see exactly why that was the case.
A
Well, think about it like this: if SECONDS_PER_ETH1_BLOCK is 15 and you want to get to about a thousand blocks deep, but on the network the actual time between blocks was 30 seconds, then you would only get to 500 blocks deep. But you can still come to agreement with each other, even if that average is off.
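In code form, the same example: every node applies the same timestamp cutoff, so even when the per-block estimate is off by a factor of two, they all land on the same (shallower) block and still agree:

```python
# The depth estimate from the example: 15 s/block, aiming ~1000 blocks deep.
assumed_seconds_per_block = 15
target_depth = 1000
lookback_seconds = assumed_seconds_per_block * target_depth  # 15,000 s

# If blocks actually took 30 s, the same cutoff lands only 500 blocks deep:
actual_block_time = 30
actual_depth = lookback_seconds // actual_block_time
assert actual_depth == 500

# The cutoff is a pure function of timestamps, identical on every node,
# so agreement holds even though the realized depth halved.
```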
A
And we were outside of that margin of error on block times. Okay, cool. Thanks for the reorg chat; we linked to Barnabas's visualization of the seven-block reorg. It's very good if you want to take a deeper look.
G
As for what we're using it for: one cool use case that has come up is that, if you're not running a validator and you're just running some random infrastructure and want to read your state at some point, you don't really need to run the full consensus protocol. So to that end, we've actually developed a little standalone application that uses the light client protocol to feed an execution layer with forkchoice updates and the blocks.
A
Okay. And I would encourage teams to send a person to All Core Devs for the next couple of All Core Devs, because I think we're going to continue to be talking about timing, testnet launches, and different things like that that we generally need agreement on from both sides. So if you can, please join us there, and I will encourage them to join us here.