From YouTube: Consensus Layer Call #74 [2021/10/21]
A
Okay, I have created an issue in the eth2.0-pm repo, which we did not rename in the great renaming because we plan on deprecating it. I haven't yet, but I made an issue named "deprecate this repo", so I've publicly committed to doing so by the middle of next month. I just need to figure out where to put the old information and make the move into the other pm repo announcement.
A
But until then, we're using an issue in the old repo: issue 239, call 74. Let's begin. Altair is happening. I know that everyone on this call knows, but if you're listening, Altair is happening. If you run a beacon node, validator or not, you must upgrade, unless you want to run the old chain; otherwise you must upgrade.
A
Nodewatch.io has been collecting the client version when possible, and I summed the Altair versions against the total. The doc I just shared with you shows the picture by node count, by node count that Nodewatch.io can find. Between the couple of crawlers, you know, there are some disparities, so this is not canon, but it's at least what we can see from this tool.
A
62 percent of nodes have upgraded, and depending on the client type it's a bit skewed: it looks like Prysm is at 58 percent, Lighthouse 73, Teku 86, and Lodestar 100 percent. For Nimbus, for some reason that crawler is not getting the client version; maybe it's being omitted, but I have not dug into that. So I think we're in reasonably good shape. That doesn't map to the validator weight, but that has increased: a couple of days ago it was less than 50, so I'll be keeping my eye on that through the weekend.
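The tally described above (summing Altair-ready versions against the total, overall and per client) can be sketched like this. This is purely illustrative; the version thresholds and data shapes are hypothetical examples, not the actual Nodewatch code or real release numbers.

```python
# Illustrative sketch: what share of crawled nodes advertise an Altair-ready
# client version, overall and per client. ALTAIR_READY values are examples.
from collections import Counter

ALTAIR_READY = {
    "Prysm": (2, 0, 0),
    "Lighthouse": (2, 0, 0),
    "Teku": (21, 10, 0),
    "Lodestar": (0, 31, 0),
    "Nimbus": (1, 5, 0),
}

def parse_version(v):
    """Turn 'v2.0.1' into the tuple (2, 0, 1)."""
    return tuple(int(p) for p in v.lstrip("v").split("."))

def upgrade_stats(nodes):
    """nodes: list of (client_name, version_string_or_None) from a crawl."""
    ready, total = Counter(), Counter()
    for client, version in nodes:
        total[client] += 1
        # A crawler may fail to read the version (as reported for Nimbus);
        # count those nodes as not-yet-confirmed rather than dropping them.
        if version is not None and client in ALTAIR_READY:
            if parse_version(version) >= ALTAIR_READY[client]:
                ready[client] += 1
    overall = 100 * sum(ready.values()) / max(1, sum(total.values()))
    per_client = {c: 100 * ready[c] / total[c] for c in total}
    return overall, per_client
```

As the speaker notes, numbers like these depend on which nodes a given crawler can see, so two crawlers can legitimately disagree.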
A
I guess we wanted to get on this call and make sure everything is fine with respect to Altair. Are there any issues or anything people would like to discuss as we move into next week?
B
Sorry: we launched an experiment comparing the results of both crawlers, Nodewatch and our crawler, running on the same node from the same moment. We let both run for 24 hours to see whether we get the same results or not, and we observed quite a number of nodes that differ. So we looked at their source code to see if they were using another technique to categorize nodes.
B
But actually it's exactly the same as what we do, so our conclusion was that most likely there are networking differences in the way we peer with nodes, and we started looking into it. We looked into the IP addresses of the nodes that we recognize differently, and we noticed that there was a bunch of nodes that we saw that they don't see, and a bunch of nodes that they see that we don't see, and that's basically the origin of the differences.
B
That's correct, and we noticed that there were a couple of small bugs in our case: when we peer a second time with a node that we already recognized in the past, but for some reason the connection drops, we were re-marking it as unknown. That was wrong, because we already knew the information of that peer from before. So we corrected that, but the numbers there were very small; they don't account for a significant part of the difference.
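The reconnect bug described above can be sketched minimally. The class and method names here are hypothetical, invented for illustration rather than taken from either crawler's codebase; the point is simply that a failed re-handshake should not overwrite a previously learned client identity.

```python
# Sketch of the bug: a peer we had already identified reconnects, the handshake
# drops, and the crawler re-marks it "unknown", undercounting that client.
# Fix: only mark a peer unknown if we never identified it.

class PeerRegistry:
    def __init__(self):
        self._known = {}  # peer_id -> client name

    def on_identify(self, peer_id, client):
        """Successful handshake: record the advertised client."""
        self._known[peer_id] = client

    def on_failed_handshake(self, peer_id):
        """Connection dropped before identification.

        Buggy behaviour would be `self._known[peer_id] = "unknown"`, which
        discards data we already had. setdefault keeps any earlier identity.
        """
        self._known.setdefault(peer_id, "unknown")

    def client_of(self, peer_id):
        return self._known.get(peer_id, "unknown")
```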
A
That
that
might
be
the
culprit.
So
let's
we
can
take
this
offline,
but
I
I
really
appreciate
you
looking
in.
Hopefully
we
can
get
some
lockdown
on
the
numbers
in
the
next
couple
weeks:
yep
cool!
Thank
you
leo
other
altair
related
items
as
we
move
into
next
week,.
A
Okay, we can run through some client updates and start with Lighthouse.
D
Hello, thanks Danny. We've been working to get the merge face-to-face branch merged into our main branch, splitting off little PRs and bringing them in, to slowly get it into master, or what we call stable. We've also been engaging in some discussion around the roles of payload building for the beacon node and validator client; we've been doing that on issue 2715 on the Lighthouse repo.
D
It seems that the current favored approach is for the BN to drive most of the process after the VC issues a kind of subscription-type message. If you're interested in implementing that, then checking out that issue is worth doing.
D
Michael Sproul has been doing continued work on client fingerprinting and diversity analysis, and we're also wiring up some significant bandwidth usage improvements for the next release. We've just been working through a couple of breakages in rust-libp2p, but I think we're pretty close to being done with that. That's about it from us.
A
Excellent, thank you. And Nimbus?
E
Hi. We released Nimbus 1.5.2, which is ready for Altair, yesterday. It's not urgent for 1.5.1 users, but the others need to upgrade immediately.
E
Otherwise, we made several improvements to the REST API: we increased the default limits so that you can make bigger requests, and we improved throughput, especially for people who request a lot of historical data. There are also improvements on the networking and peer selection side, so that you can run with fewer peers, cycling faster through peers to disconnect from the ones that are less useful to us.
F
Hey guys, Raul here. We definitely missed all of you at the face-to-face and at LisCon; nobody from our team was able to attend, but we're really looking forward to going to more events starting early next year. So, Prysm can now sync with the merge testnet. That's been a huge milestone for us, and we're pretty much all ready for Altair, so we've moved on to other things.
F
Aside from that, we have been completing checkpoint sync and starting to integrate support for Web3Signer, which is going to be important for client diversity. Code health is also top of mind, so we've been working a lot on improving the code health of the repository and reducing technical debt in this downtime between Altair and starting work on the merge.
F
Aside from that, we've also been thinking a lot about validator key management API standardization, and thanks to dapplion for the design here. We're looking forward to supporting the standard in Prysm, which will make it a lot easier to build multi-client web interfaces and installation interfaces. A lot of people rely on the Prysm web UI in a critical capacity, especially for onboarding, importing their keystores, and getting started.
F
So we foresee this being a huge boost in usability for everyone. That's it for us, thank you.
G
What we've also done, though not yet released, is implement the API release 2.1, which includes the new header for specifying the consensus version. What is not yet merged is support for the liveness endpoint for doppelganger protection.
G
We are ready to support publishing system metrics to a remote service, and we're mostly working on moving merge code from the merge interop branch to master, which is a long process. For the Pithos testnet, we fixed one bug related to the sync committee participation rate, and we rolled over a new Teku version on the testnet; it works well so far.
H
I
Anyway, oh yeah, this is, I would say, one of the reasons why we tried this. We thought that, well, if the hard fork path is faster, then we should not touch it anymore, but it turns out that's not always the case.
I
It initially felt like a great idea, but later we thought that, well, some optimizations and some similar things are a good counterpart. That's one thing. The other thing that we found really, surprisingly hard is the transition process: when you spin up a new runtime, it needs a lot of context from the old runtime, from the previous fork, and this is something that we really underestimated. So I think I'll just try to write a bit more about it.
A
That'd be great, thank you. Okay, moving on to merge discussion. Obviously we did not have this call two weeks ago because of the Amphora interop, and there have been plenty of write-ups about that.
A
Moving forward: based off of the discussions there, there were a number of alterations to the specs, primarily simplifications, and a lot to do with the engine API. I've been chipping away at that, and Mikhail just got back from vacation today, so we will both be chipping away at it, with a target of completing all of those changes by the end of October so that we can release a new stable target of specs.
A
But that is not what Pithos is targeting, and it will also continue to change over the next week. Moving out from there, we do plan on having kind of a new testnet target at the end of November based off of this stuff. Nothing is radically changing, and most of the core functionality is stable, if not a bit simplified in some of the communication.
A
Something that Paul brought up is naming. Right now we kind of call the whole thing the merge, but then we call the time at which the beacon chain upgrades its logic (but the merge hasn't happened yet) the merge fork, and we kind of call the point at which the transition occurs the transition process. It's a bit confusing, because the whole thing is called the merge: the merge fork, the merge transition process. There was a suggestion to maybe name the upgrade.
A
I don't know the proper path here on picking a name. It also collides with the naming process on the execution layer. There's a bunch of conversation in the merge-general channel. I don't think we're going to come to a conclusion today, but are there any thoughts to share that were not shared in the merge-general channel on naming?
J
A
Yeah, I think that's the path by default. We could pick a B name, but then we have to think about the interaction between that and the upgrade on the execution layer: does this envelop the naming scheme over there, or is it additive to it?
J
The more difficult question is what upgrades that involve the consensus and execution layers simultaneously will look like, and how they should be set up. Yeah, of course.
A
Because
we
also
very
well
might
have
upgrades
that
are
just
on
one
layer.
You
know
if
just
the
evm
changes
in
the
future,
so
beetlejuice
shanghai,
tim
said
no,
because
shanghai's
been
reserved
for
a
different
fork,
but
we
could
we
could
kind
of
keep
the
the
naming
independently
and
and
have
it
additive
as
the
sum
total
beetlejuice
serenity.
Thank
you
light
clam.
A
That's
also
not
my
intentional
spelling.
The
intention
of
the
spelling
is,
after
the
name
of
the
star,
not
the
mad
character.
A
Maybe we talk with the people on the other side of the aisle and see if we can come to at least a compromise on how these names are related. Then, if we do pick a star name, we can pick some nice ones and either do an emoji vote or bring it to the call and see if anyone has strong opinions.
A
Any other merge-related items? TL;DR: Pithos is up, people are iterating and making things more stable, specs are to be done at the end of October, and then we'll have kind of a new meta spec that targets the stable versions of things moving into November, with the intention of these being near-mainnet-ready specs, really only changing them if issues are uncovered between then and later.
C
I do believe that maybe one or two clients are not quite ready for transactions at the merge transition event; it certainly affects the state roots in some ways. So far it's been running well, and I think we can handle it. So if anyone would like some test ETH for transactions, please just reach out and we'll start distributing some.
A
Okay, any research updates that people would like to share today?
C
The push model is basically a new transaction type on the execution layer, plus an addition to the prepare-payload method, to be able to introduce into a block a transaction that is suggested by the consensus layer instead of taken out of the memory pool. The alternative is the pull model, where the consensus layer keeps track of withdrawal data, and the execution layer then allows you to mint, based on that commitment and on another transaction (not a regular transaction) to some special precompiled contract that can process the withdrawal.
A
Right. The former is probably, if you can get it right, maybe a more elegant design, but at first look it has a bunch of edge cases, especially when those withdrawals are headed towards smart contracts, which consume gas, and there's the question of who pays for the gas.
C
What
happens
that
kind
of
stuff
in
developed
context?
You've
been
thinking
about
this
in
two
ways
you
could
have
a
deposit
that
doesn't
trigger
the
evm.
It
just
increases
the
balance
so
that
you
don't
have
these
edge
classes,
but
then
you
also
probably
still
want
the
other
side
as
well.
So
you
end
up
with
two
types
of
transaction
or
maybe
some
kind
of
flag
within
the
transaction.
C
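The push/pull distinction discussed above can be sketched as a toy model. Everything here is invented for illustration (the class names, fields, and flow are not from any actual spec or client); it only shows the shape of the two designs: push credits the balance directly without invoking the EVM, while pull requires the recipient to claim against a consensus-layer commitment.

```python
# Toy model of the two withdrawal designs: push (system-level credit) vs
# pull (recipient claims against a commitment). Purely illustrative.
from dataclasses import dataclass

@dataclass
class Withdrawal:
    recipient: str
    amount_gwei: int

class ExecutionState:
    def __init__(self):
        self.balances = {}

    def apply_pushed_withdrawal(self, w: Withdrawal):
        """Push model: the consensus layer injects the withdrawal into the block.

        Balance-increase only; deliberately does NOT invoke the EVM, which
        sidesteps the who-pays-for-gas edge cases around contract recipients.
        """
        self.balances[w.recipient] = self.balances.get(w.recipient, 0) + w.amount_gwei

    def claim_pulled_withdrawal(self, w: Withdrawal, committed: set):
        """Pull model: the recipient sends a transaction to claim the funds.

        The claim is checked against the consensus layer's commitment set
        (real proof verification is elided here) and removed to prevent
        double-claiming.
        """
        key = (w.recipient, w.amount_gwei)
        if key not in committed:
            raise ValueError("withdrawal not committed by consensus layer")
        committed.remove(key)
        self.balances[w.recipient] = self.balances.get(w.recipient, 0) + w.amount_gwei
```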
A
Yeah, absolutely. And I think, once we get the merge spec stable at the end of the month, one of my priorities is to begin to specify and engage on the different designs for this.
A
Great. Well, thank you for joining. If you're at LisCon, enjoy, and we will talk to you all soon. Upgrade in six days! It's exciting. Thanks, everyone.