From YouTube: Consensus Layer Call #73 [2021/9/23]
A: Thank you. Members of RIG, members of the research team, and members of client teams have been working through data analysis and bug fixing around sync committee performance. Proto has quite the graph on that issue if you want to look, and the fact that all of the software, all of the nodes that we run, look blue at the end of that graph is a good sign. As far as I know, our sync committee troubles have been fixed.
B: So I can just briefly mention the problem that we had. It was kind of funny: we have this anti-DoS protection in libp2p which adds some extra filtering, because when people subscribe to gossip topics you could technically send lots and lots of subscriptions, fill up our subscription table, and use lots of memory. So we had a special filter there, and we kind of forgot to add the new sync committee topics to it.
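For context, libp2p gossipsub exposes a subscription-filter hook for exactly this anti-DoS purpose. Below is a minimal Go sketch of an allowlist-style filter of the kind being described; the interface shape is modeled loosely on go-libp2p-pubsub, the topic strings are abbreviated, and the whole thing is an illustration rather than any client's actual code.

```go
package main

import "fmt"

// SubscriptionFilter mirrors the hook being described: gossipsub consults it
// before accepting a peer's subscription, so a peer cannot stuff the
// subscription table with unbounded topics. Modeled loosely on
// go-libp2p-pubsub's SubscriptionFilter; illustrative only.
type SubscriptionFilter interface {
	CanSubscribe(topic string) bool
}

// allowlistFilter accepts only topics the node itself cares about. The bug
// described above amounts to forgetting to extend the allowlist with the new
// Altair sync committee topics at the fork.
type allowlistFilter struct {
	allowed map[string]bool
}

func (f *allowlistFilter) CanSubscribe(topic string) bool {
	return f.allowed[topic]
}

func main() {
	f := &allowlistFilter{allowed: map[string]bool{
		// Phase 0 topics ("..." stands in for the fork digest):
		"/eth2/.../beacon_block/ssz_snappy":         true,
		"/eth2/.../beacon_attestation_0/ssz_snappy": true,
		// Omitting entries like the two below reproduces the class of bug
		// discussed: sync committee subscriptions get silently filtered out.
		"/eth2/.../sync_committee_0/ssz_snappy":                      true,
		"/eth2/.../sync_committee_contribution_and_proof/ssz_snappy": true,
	}}
	fmt.Println(f.CanSubscribe("/eth2/.../sync_committee_0/ssz_snappy")) // true
}
```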
A: I guess one question is: do we have sufficient monitoring in place now? I know we have sufficient debug tools, but will we be aware of issues? Because I felt like this issue kind of sat for a while without anybody maybe being aware; I'm not sure if that's the truth.
A: Cool. Any other comments on the investigation, the issues found, or where we stand with respect to this feature? We're generally good. I know that Nimbus had, I think, a number of patches, and not just Nimbus. We don't have to go over all of them, but is there anything else we want to discuss here?
A: I believe we had a release of additional test vectors in that, and we wanted to fix the Altair sync committee performance, which came down to the wire. We did, and given that, we wanted to talk about a fork date today.
A: A fork date would imply mainnet-ready client releases no later than two weeks prior, maybe 16 days prior, so that we can do a blog post 14 days prior at the minimum to get people ready for an upgrade.
A: Tentatively, we talked about doing releases at the end of September, which is the end of next week, maybe doing a blog post a few days later, and doing an upgrade mid-October, maybe October 18th to 20th, something in that range. Where do we stand on that? Do we have the same amount of confidence, are we ready to do this? Or do we have new information and need to make a different decision?
C: Speaking for Teku: we're ready to go, but we would prefer to see three to four weeks of lead time, just because the big operators will take longer to upgrade, I expect, and there'll be more due diligence around it. I want to make sure everyone's got time.
A: So three to four weeks from releases plus blog post to the mainnet date, right? Yeah, okay. So y'all are still on "we can release pre-October", right on the cusp of September and October, and then a three to four week lead time from that.
C: For us, yep.

A: Got it. Others?
A: So let's pick a date. My calendar, I think, kicks me out every four weeks at the exact same time and I have to log back in; I think it's literally during this call, because it's happened another time. I'm trying to log back into my calendar, I apologize.
A: Okay, so we have the end of next week; that Thursday and Friday are September 30th and October 1st. So we could target our releases for the end of next week, and a blog post.
A: Okay, Wednesday the 27th. We can pick a precise fork epoch using Adrian's sweet tool right after the call and make a PR to the configurations.
A: Okay, to say it out loud again: end of next week, mainnet client releases; a blog post by the EF, and anyone else that wants to join in on that, on October 4th to discuss dates, upgrades, and client releases; then three and a half weeks from that point is October 27th, and over the next day we will select an epoch on that date.
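The epoch arithmetic behind a tool like that is simple: mainnet slots are 12 seconds, epochs are 32 slots, and genesis was at Unix time 1606824023, so epoch boundaries land every 384 seconds from genesis. A minimal sketch in Go, using the real mainnet constants (for reference, the epoch ultimately chosen for Altair was 74240, whose start maps to 10:56:23 UTC on October 27):

```go
package main

import (
	"fmt"
	"time"
)

// Real mainnet beacon chain constants.
const (
	genesisTime    = 1606824023 // Dec 1, 2020, 12:00:23 UTC
	secondsPerSlot = 12
	slotsPerEpoch  = 32
)

// epochStart returns the wall-clock time at which the given epoch begins.
func epochStart(epoch uint64) time.Time {
	return time.Unix(genesisTime+int64(epoch*slotsPerEpoch*secondsPerSlot), 0).UTC()
}

// firstEpochAtOrAfter returns the first epoch whose start is not before t.
func firstEpochAtOrAfter(t time.Time) uint64 {
	const epochSecs = slotsPerEpoch * secondsPerSlot // 384 seconds per epoch
	secs := t.Unix() - genesisTime
	e := secs / epochSecs
	if secs%epochSecs != 0 {
		e++
	}
	return uint64(e)
}

func main() {
	target := time.Date(2021, time.October, 27, 0, 0, 0, 0, time.UTC)
	fmt.Println(firstEpochAtOrAfter(target)) // first epoch starting on Oct 27
	fmt.Println(epochStart(74240))           // 2021-10-27 10:56:23 +0000 UTC
}
```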
C: Right, let's start with introductions: we've had a couple of new joiners. Reggio has been around for a few weeks and is in Australia and not here; Enrico joined last week, is in Europe, and is here, so welcome, Enrico. On the client side, we fixed an issue that came up on the Prater network, where we first saw it. There was a rare edge case where Teku would fail to produce blocks.
C: It was some kind of weird race condition where we would see an attestation first in a block and then later receive it via gossip, and then there were some other sort of weird conditions, and when it all coincided we failed to produce a block. It was rare on Prater, and we've not seen it on mainnet or had it reported, but nonetheless it's fixed in 21.9, which is the most recent release. Among other minor things, we've upgraded to BLST 0.3.5, which came out last week.
C: Adrian is astonishingly good at debugging.

A: Excellent. Prysm?
F: Yeah, hey guys. On the Altair front we're just chugging along; we have merged everything into the canonical develop branch, so that's done. Right now it's just a few minor bug fixes with the RPC endpoint and a few optimizations here and there, so so far it's looking pretty good on our end. We're also gearing up for the v2 release, which will be the end of next week and includes the reworked slasher.
F: We also reorganized the package structure to be more idiomatic, aligned all the metrics namings with the system standardization and all the good stuff, and, like Teku, we're also working on the merge spec support on a dedicated branch. And yep, that's it from us.
D: Hi, so we've been working on passing all the new tests, including the ones from BLST. Our last week was focused on making sure that Prater worked: we had issues regarding sync committee messages and also a low number of peers, which we are debugging or have debugged, and we want to do a release next week. So the timing is good to fix everything and make sure that our main release is rock solid for Prater and Altair.
E: Hello, Paul here. This week we merged two big PRs into our unstable branch. The first was weak subjectivity sync, or checkpoint sync: with a trusted API endpoint we can now sync from scratch to head in less than a minute, which is pretty cool; we've been playing with that and having fun.
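For flavor, here is a rough Go sketch of the trust-anchor step behind checkpoint sync: ask a beacon node you already trust for its finalized checkpoint over the standard beacon API, then sync forward from that point instead of from genesis. The URL is a placeholder, and this is only the first step of what the actual feature does.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// "finalized" is a valid block_id in the standard beacon API;
	// the host below is a placeholder for a node you trust.
	resp, err := http.Get("https://trusted-node.example/eth/v1/beacon/headers/finalized")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Data struct {
			Root   string `json:"root"`
			Header struct {
				Message struct {
					Slot string `json:"slot"`
				} `json:"message"`
			} `json:"header"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	// This root is the trust anchor: fetch the corresponding finalized state,
	// then sync forward from it rather than replaying history from genesis.
	fmt.Println("finalized root", out.Data.Root, "at slot", out.Data.Header.Message.Slot)
}
```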
E: We also merged batch BLS verification of attestation signatures, and that sees about a 40 to 50 percent drop in CPU load average on our Prater nodes, so that improvement is significant.
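Batch verification gets that kind of speedup from the standard random-linear-combination trick; the following is a sketch of the idea, not necessarily Lighthouse's exact formulation. With public keys $\mathrm{pk}_i \in G_1$, signatures $\sigma_i \in G_2$ over messages $m_i$, and fresh random nonzero scalars $r_i$, the $n$ individual checks $e(g_1, \sigma_i) = e(\mathrm{pk}_i, H(m_i))$ collapse into one:

$$e\Big(g_1,\ \sum_i r_i\,\sigma_i\Big) \;=\; \prod_i e\big(r_i\,\mathrm{pk}_i,\ H(m_i)\big)$$

This costs roughly $n+1$ pairings instead of $2n$, and the random $r_i$ make it infeasible for a batch containing invalid signatures to cancel out and still pass.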
E: That's on nodes subscribed to all subnets; it's still noticeable on other nodes, but much less so. We're likely pushing a release candidate of version 1.6.0 next week, and that'll include those fixes. Michael did some analysis on client distribution across validators by fingerprinting attestation packing characteristics; it got some attention on Twitter and was pretty cool. We're working on our implementation of the merge, getting ready for more testnets very soon, and Adrian Manning has been working on reducing the p2p bandwidth.
E: I haven't been paying close attention; I think he's trying to reduce gossip duplicates by reducing his mesh size or something like that. I'm not super clear on it.

A: Got it.
H: This is Solus from the Nina team. We are still focusing on the multiple-runtimes refactoring, and regarding Altair, we can sync the chain now; we still need some work on the validator side, and I think maybe in the next couple of weeks we will be back on Prater.
A: Okay, moving on to merge discussion. We are working very diligently, Mikhail, myself, and many others, on refining the initial interop specs for the merge. We expect a release tomorrow on the consensus side, which would be kind of a stable target for initial interop at the start of October.
A: Additionally, on the execution layer side we'd have that EIP stabilized, and there is a minimal version of the execution API, the engine API, that's being put through the wringer; we expect to stabilize it again in about 24 hours for release, so that we can all have a common target moving into October.
G: Thanks, Danny, for the update. I don't have anything to add here; I was just going to say something like that.
G: Oh yeah, right. I've started to work on this document. It shouldn't be that long: just spec versions, plus filling in the gaps that we have in the spec, like what to do with the random field on the execution client side, because we don't have any EIP for that.
A: Great. And if you all haven't taken a look: primarily what we're dealing with is upgraded types and this communication protocol from the consensus side, so it's an order of magnitude simpler than Altair, I would suppose, especially now that clients have kind of a standard fork mechanic path in the codebase. So take a look. Any questions about the merge, or comments or discussion points?
A: I'm not sure who is farthest along; hopefully that would be something relatively soon. Proto and others were thinking about actually mocking an execution engine, or mocking the consensus side, for testing of the API. Proto, do you want to talk about that a bit?
I: It can delay things, and so instead I would like teams to focus on sharing more tooling and trying to share testing. One of the things that we can share is a mock version of the beacon node and a mock version of the engine that conform to the specification, are lighter to run, and that you can test against, so that we can have all consensus clients work against the mock execution engine and all execution engines work against the mock consensus client.
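As a rough illustration of the mock-engine idea, here is a toy JSON-RPC stub in Go that accepts whatever engine call a consensus client sends and returns a canned success reply, so the client can be exercised without a real execution client behind it. The reply shape and the port are placeholders, not the engine API spec.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type rpcRequest struct {
	ID     json.RawMessage `json:"id"`
	Method string          `json:"method"`
	Params json.RawMessage `json:"params"`
}

func handle(w http.ResponseWriter, r *http.Request) {
	var req rpcRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	log.Printf("mock engine received %s", req.Method)
	// Whatever the consensus client asks, claim success. A real mock would
	// validate params against the spec and keep a little chain state.
	json.NewEncoder(w).Encode(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      req.ID,
		"result":  map[string]string{"status": "VALID"},
	})
}

func main() {
	http.HandleFunc("/", handle)
	log.Fatal(http.ListenAndServe(":8550", nil))
}
```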
E: Yeah, I totally agree. I had a lot of trouble last time trying to get things to match, because the serialization formats between the consensus clients and execution clients are quite different, just in the way that we serialize byte strings, hex strings, and stuff like that, so it'd be super useful. I think I published some test vectors last time; I'll do the same once I can get it up. But yeah, a mock thing would be really handy.
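The kind of mismatch being described shows up even for a single integer: execution-layer JSON-RPC spells quantities as 0x-prefixed hex with no leading zeros, while the beacon REST API spells integers as decimal strings. A tiny illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	n := uint64(12345)

	// Execution-layer JSON-RPC: QUANTITY values are 0x-prefixed hex
	// with no leading zeros ("0x0" for zero).
	elQuantity := "0x" + strconv.FormatUint(n, 16)

	// Beacon (consensus) REST API: integers are decimal strings.
	clInteger := strconv.FormatUint(n, 10)

	fmt.Println(elQuantity, clInteger) // 0x3039 12345
}
```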
B: Taking mental notes, and some real notes as well. Given that the execution API is kind of this private-ish API between the consensus client and the execution client, it shouldn't really be exposed to user applications, like normal user applications, and that means it could kind of live in a separate space in the execution client.

B: Now, for many, many reasons we chose REST to talk between the VC and the BN, and also for encoding purposes it would make a lot of sense if we continued to use REST between consensus and execution. But before making that PR: is there anybody that's going to loudly scream and object right now, or is there something worth considering? I think it would be a big simplification for the future if we want to focus on just one kind of encoding and one kind of API style.
A: Tommy, the primary pushback is that the execution layer already has a JSON-RPC interface. It's suggested that it be exposed on a different port for some sort of separation, but at the end of the day it's a compromise one way or the other in terms of the formats required, because the user API on the JSON-RPC side is not going to go anywhere. This has been debated a bunch; I just wanted to provide that for context. I'm not going to throw my hat in the ring too much.
E: My understanding is that we used an RPC because this type of communication is not suitable for REST. We can't really load-balance the execution node, because we need to stay in sync; eventually we're going to have a protocol where the execution node says "oh no, I actually don't know that block that you referred to", and then we have to push blocks to it, and we have to create this very RPC-style sync method between the two. That was my understanding of why RPC was chosen.
A: Right, some of it. But one of the primary reasons was that it's a user API that might need load balancing and all sorts of other nice things that fall out of a RESTful API, whereas this doesn't have the same requirement, which I think is what Paul's arguing, because it's more of a one-to-one relationship. But again, I'm not going to go too deep in here.
G: One more argument here is that REST is tightly coupled with the HTTP protocol, while JSON-RPC can be implemented on top of any communication protocol, like TCP or WebSocket. I guess REST could be done over WebSocket as well, but I don't think REST suits us well here, because REST is good at accessing and updating resources, while this communication protocol is more about updating and syncing state between two clients: it's not the case that each request and response corresponds to some logical resource.
A: From a practical standpoint of just getting this API ready to be used tomorrow, for people to build on and then initially do some interop with at the beginning of October, I think there's probably zero chance it's going to change. But if there is a compelling argument and you want to continue to have the conversation and try to change it after that, that's probably the path.
A: Yeah, and with the release tomorrow there will be at least a kind of minimal set of consensus vectors available on the consensus side, so keep your eyes peeled.
J: Yeah, on our side, I wanted to mention that our paper on the Ethereum 2 network crawler has been accepted for publication at the IEEE International Conference on Blockchain Computing and Applications, which should be held in Estonia in November. I will add the link in the chat. Also regarding the crawler: just a couple of days ago there was another client distribution analysis that came out, done by Michael.
J: I guess many of you already saw it. It's a completely different method, derived from block proposal data, and the distribution that came out was astonishingly similar to the distribution that we demonstrated with our crawler several months ago. So I think this somewhat validates the data that we have gathered and shown over the last few months.
J: I'm going to add the link to the Twitter thread so you can take a look at it. And the other paper that was accepted, concerning Ethereum 2 clients' resource utilization, is going to be presented next week at the Conference on Blockchain Research and Applications for Innovative Networks and Services. I'm adding the link to the program in case somebody is interested in taking a look; I think the registration is free. So that's the research update on my side.
D: Thank you. Regarding crawlers: there is one that underestimates Nimbus, or always connects to Nimbus nodes; I don't remember the team that...
D: Right, so one thing is that there is a difference: some clients allow crawlers to connect and then eject them, while Nimbus doesn't allow them to connect at all, for example, and there are different behaviors between clients that make the statistics a bit tricky sometimes.
A: Right, and I think Cerium is actually maybe having a very large Nimbus peer store and dumping it, so what you actually see from that is biases in how Nimbus connects and doesn't connect to clients over time. I believe that was my understanding of it when I spoke with them.
A: Yeah, and I'm actually surprised how close the Michael Sproul validator metrics were compared to the crawler metrics we've seen; I expected more asymmetry between the size of validator allocations and the nodes in the network.
E: I was just going to say that I was talking to the Cerium person as well, and they were dumping the Nimbus peer store; I think they had something like 40 percent of the network as Nimbus. Yeah, I think trying to crawl is very hard, and people need to really consider it, think very hard about it when they do it, and not just dump peer stores and cat peers.
A: Any open discussion, closing remarks, anything else people want to discuss today?
A: I would say we would only do a call, or some sort of formal gathering, in the event that we're having some sort of issue or hiccup with the Altair progression at that point, and then we would regroup again on the 21st of October, which would still be before the Altair upgrade, for any kind of final or emergency discussions then.
B: Cool. I would have one question, actually, for the client teams: we're still seeing a little bit of missed attestations and so on. I was curious what the latest investigations point to. I mean, orphaned blocks were one thing, so some of those things were fixed, but is there any news on that front, maybe releases that address potential issues in queueing incoming attestations, things like this?
F: Yeah, so on the Prysm front we have a bunch of optimizations that improve that, and they were released in the previous release, but those were actually part of the feature flag set. So basically, in order to use these optimizations a user needs to enable the flag, just so the user knows what they're doing. Now, with enough testing, for our v2 release those flags will be flipped into the default state, so most of the optimizations will be enabled, and we should definitely see some improvement there.
A: Yeah, so specifically what we saw, in some analysis by Barnabé, was that there are these zeroth-slot-of-the-epoch blocks that are late, and increasingly so, which causes issues with voting on that epoch boundary cusp. From what we can tell, using graffiti analysis, that is primarily the source of the drop we've seen over the past handful of months.
A: It did generally happen when Prysm validators were proposing at that epoch boundary, and what Terence was alluding to is that they have quite a few optimizations for that epoch boundary, one of which is just not waiting until just in time to do it: when you're prior to that boundary and you know roughly what to build on, you can optimistically just do that epoch transition, get the shuffling in place, and so on.
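A sketch of that "don't wait" idea: while still in the last slot of epoch N, run the epoch transition on the head state ahead of time, so a proposer at slot 0 of epoch N+1 already has the post-transition state and the new shuffling in hand. The types and functions below are illustrative stand-ins, not any client's code.

```go
package main

import "fmt"

const slotsPerEpoch = 32

// BeaconState is a stand-in for the real (much larger) state.
type BeaconState struct{ Slot uint64 }

// processEpochTransition is a placeholder for the real per-epoch work
// (justification, rewards and penalties, computing the next shuffling, ...).
func processEpochTransition(s BeaconState) BeaconState {
	s.Slot++ // advance across the boundary; the real logic is far bigger
	return s
}

// cache holds precomputed post-transition states, keyed by the head slot
// they were built on.
var cache = map[uint64]BeaconState{}

// precomputeAtBoundary runs the transition early, so a slot-0 proposal does
// not have to do the expensive work just in time.
func precomputeAtBoundary(head BeaconState) {
	if (head.Slot+1)%slotsPerEpoch == 0 { // head is in the last slot of its epoch
		cache[head.Slot] = processEpochTransition(head)
	}
}

func main() {
	head := BeaconState{Slot: 2*slotsPerEpoch - 1} // slot 63, last slot of epoch 1
	precomputeAtBoundary(head)
	fmt.Println(cache[head.Slot].Slot) // 64: ready before epoch 2's first slot starts
}
```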
E: I know Micah was looking into it as well. I think he found some cases where clients weren't packing attestations into blocks as well as they could; I think he's reached out to those teams, and they're aware and doing stuff about it.
A: Yeah, the nice thing is we have a required upgrade in late October, where any of these optimizations that have been kind of filtering out will all be enabled at that point. So that will be our data point as to whether what we've been putting in place on packing, and on these zero-slot and epoch-transition optimizations, is actually going to fix what we think it's going to fix.
A: Okay, cool. Thank you. We will regroup in the chats to figure out a fork epoch and get that up in the configs. Good work, everyone, and releases at the end of next week. Talk to y'all soon. Thanks, all. Bye. Thank you.