From YouTube: Eth2.0 Call #55 [2021/1/13]
B
Okay, happy new year. You know, 2020 was an interesting year on many accounts, but for ethereum I think it was a great success. Thank you all around. We have a lot of work to do and a lot of exciting things to accomplish this year. So here we are. I flipped it a little bit today: I think we'll start with client updates, then we'll talk about a proposed mid-year upgrade and some of the things there. I think there are a couple of proposals floating around.
We need to get this written up in a more concrete way so we can engage with it better, but there are some PRs in the repo that I'd like some input on; we can get to that a little bit later. Then I want to give you the broad view on Q1 R&D, which I think client teams can dip their toes into. We'll do some education here and gather some input, which will help set the stage for upgrades expected in the latter part of the year and early next year, and then we can talk about some general stuff and see where we're at. So let's go ahead and get started. We can start with client updates; teku, get started.
A
Sure, hey guys. So we merged in a new community-contributed feature: the option to load graffiti dynamically from a file at runtime.
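As a rough sketch of the idea (not Teku's actual implementation, which is in Java; the file name here is hypothetical), a validator client can re-read the graffiti file on every proposal so the value can change without a restart:

```python
# Minimal sketch: reload graffiti from disk each time a block is proposed.
from pathlib import Path

GRAFFITI_FILE = Path("graffiti.txt")  # hypothetical location

def current_graffiti() -> bytes:
    """Return the 32-byte graffiti field, padded or truncated as needed."""
    try:
        text = GRAFFITI_FILE.read_text().strip()
    except OSError:
        text = ""  # missing file: fall back to empty graffiti
    return text.encode("utf-8")[:32].ljust(32, b"\x00")
```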
A
We incorporated the recent REST API updates to the debug state endpoint, so we now return the requested state as JSON or SSZ, depending on the Accept header.
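For example, a sketch of that content negotiation against the standard debug endpoint (the base URL assumes Teku's default REST port; error handling omitted):

```python
import requests

BASE = "http://localhost:5051"  # assumed Teku REST API port
url = f"{BASE}/eth/v1/debug/beacon/states/head"

# Accept: application/json -> JSON-encoded BeaconState
state_json = requests.get(url, headers={"Accept": "application/json"}).json()

# Accept: application/octet-stream -> raw SSZ-encoded BeaconState bytes
state_ssz = requests.get(url, headers={"Accept": "application/octet-stream"}).content
```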
We reworked our validator client to optionally use the dependent root fields, which were added just before mainnet, to detect when duties need to be invalidated.
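The idea, in a minimal sketch (not Teku's code; names are illustrative): each duties response carries the block root the computation depended on, and cached duties stay valid only while the chain still agrees on that root.

```python
def duties_still_valid(cached_dependent_root: bytes,
                       head_dependent_root: bytes) -> bool:
    """Duties survive a new head only if the dependent root is unchanged."""
    return cached_dependent_root == head_dependent_root

# Usage: on each head event, compare roots and re-fetch duties on mismatch,
# since a reorg past the dependent root may have reshuffled assignments.
```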
A
We've reviewed the upcoming protocol upgrade changes and have started some refactoring to prep for it, and we've been doing some general cleanup and bug fixes to get performance tests up and running, some hardening of the p2p layer, and some code-quality-related refactoring. That's it for me.
C
All right, we've released a new release, 1.0.6. The most significant new feature is that we now have reproducible builds for arm devices. The other important fix in this series is that we've improved our subnet walking logic.
B
Nice. That, as superphiz called it, doppelganger detection, I think, is a fantastic feature. So from the command line, the default is some number of epochs in if you don't specify, and then you could drive that number in any direction, even down to zero, if you wanted to override.
C
Well, we have configuration parameters for advanced users, but so far our thinking is to make this really simple for end users, and we don't offer many options: either you have the protection or you don't have it. Gotcha. Our parameter is hidden so far, and we are also deploying this right now in production in a special mode where it produces only a warning. So our thinking here is that we want to make sure that there won't be any false positives.
B
Right, nice. And that override command-line param can have, in caps, UNSAFE. Cool, that's awesome.
C
So it will wait for another N epochs and we will see again, and so on and on.
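A minimal sketch of that behavior (not any client's actual code; `seen_attester_indices_by_epoch` is a hypothetical lookup of on-chain activity): watch the network for the configured number of epochs before signing anything, and either abort or, in the warning-only mode described above, just log.

```python
def doppelganger_check(our_indices: set,
                       seen_attester_indices_by_epoch,
                       wait_epochs: int = 2,
                       warn_only: bool = False) -> bool:
    """Return True once it looks safe to start validating."""
    # in a real client this loop advances with wall-clock epochs
    for epoch in range(wait_epochs):
        live = our_indices & seen_attester_indices_by_epoch(epoch)
        if live:
            if warn_only:
                print(f"WARNING: possible doppelganger for validators {live}")
            else:
                raise SystemExit(f"doppelganger detected for {live}; refusing to sign")
    return True
```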
B
Okay, cool, moving on. Appreciate it. Lighthouse?
F
Hello. So we've been coordinating as a team to determine how we can best apply ourselves to the phase 0 maintenance and also the sharding and merge work. I've written a blog post about this with our intentions, and we should be releasing it next week. Since the last call, we've published a few releases, with lots of fixes and improvements there. We've added a new system for monitoring validators; that's in a PR now.
F
So this is on the beacon node side. It has a list of validators, and it performs additional logging when your validator has a block included or an attestation included. It has a lot of prometheus metrics to do fine-grained tracking of your validators, like the delay in getting their attestations and blocks included. You can supply the beacon node a list of validators to track manually, or you can get it to detect them automatically via subscription calls to the API.
F
That's still in the PR, but we should see it soon. We're adding support for weak subjectivity sync, and this turned out to be quite nice, since it's making our database more flexible and generic. We've been reviewing the mid-year upgrade PRs and thinking a lot about how we can best support hard forks with minimal code duplication and complexity, and we're also thinking about how we can support the sharding and merge experimental work inside the lighthouse repo without, you know, jeopardizing the stability of the production code. That's about it from us.
B
Great, thanks paul. Lodestar?
G
Hey, so we've been working on making our beacon node more stable and more friendly to use. We spent some time speeding up our epoch transition and figuring out how to store fewer states to disk.
G
We also refactored our request and response code, which will help us revisit our syncing. Concurrently, we're working on updating the sync so we can download and process blocks in parallel; it's a lot better to do that. We're also now shifting some of our team to be looking at future hard forks, so we've been looking at how to update our database to more easily support hard forks and different types of data, and we are planning our next release for monday, with all of the things that we've done so far.
H
Hey guys, happy new year. In the last couple weeks we have made our slashing protection on the validator client side more performant; it's working better for a validator client that holds more than hundreds of keys. This is on track for the version 1.1 release next monday, and we also have a bunch of bug fixes that have gone into the release. Then, on the eth2 API implementation side, we are almost done with that now, with the networking stack being worked on in parallel.
H
We are also experimenting with sync committees and then reforming accounting with the participation-based changes, and we're also revamping our test utilities for better test setups. And that's it, thank you.
B
Okay, I believe that is everyone. So, I think we've talked about this across different channels: the current intention would be to do a minor upgrade to the beacon chain mid-year.
B
Early summer would be the target, which would include a couple of nice-to-have features and some cleanups in the way state and things are managed, to help with the maintenance of the beacon chain and maybe some edge cases, for example, processing empty epochs.
B
This currently is in the form of some PR proposals and stuff in what's called the light client spec folder, which would be renamed to the name of whatever this upgrade ends up being called. Generally, I think we want input, especially on some of the state reform stuff, from engineering teams, because this ultimately should make things cleaner, but it also might, depending on the path taken, represent technical debt. So we kind of want to find the right balance here. So please, please.
B
There are a couple of PRs up for review, accounting reform and global penalty quotient. These are probably the two more invasive things with respect to state management and validators, but I would love for you to take a look. The other big one, which is already in dev under that light client folder, is adding this notion of a sync committee, which is very similar to a beacon committee, but larger and longer-standing.
B
Proposers can include the signature of this committee, which just signs over the latest block, and what that does is provide light client support, kind of as a first-class citizen. In addition to the beacon chain changes to support that feature, there is a sync protocol file which demonstrates how you can use this to construct a light client protocol: a light client that reuses a lot of the components that we already have, the notion of committee subnets, aggregate signatures, that kind of stuff. It's really a big win.
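For illustration, a minimal sketch of how a light client might consume this (simplified from the draft sync protocol; the field names and the `bls_fast_aggregate_verify` stand-in are assumptions, not the spec's exact definitions):

```python
from dataclasses import dataclass

@dataclass
class LightClientUpdate:           # simplified stand-in for the spec's type
    header_root: bytes             # root of the header being attested
    participation_bits: list       # which sync committee members signed
    aggregate_signature: bytes     # BLS aggregate over header_root

def accept_update(update: LightClientUpdate,
                  committee_pubkeys: list,
                  bls_fast_aggregate_verify) -> bool:
    signers = [pk for pk, bit in zip(committee_pubkeys,
                                     update.participation_bits) if bit]
    # demand a supermajority of the committee before trusting the header
    if 3 * len(signers) < 2 * len(committee_pubkeys):
        return False
    # then verify one aggregate signature instead of many individual ones
    return bls_fast_aggregate_verify(signers, update.header_root,
                                     update.aggregate_signature)
```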
B
Good features to get in there. Beyond that, there are a couple of network iterations and fixes coming out, which I believe should be released prior to such a fork, and a couple of security fixes to the fork choice which are under some internal review right now. I will share those with you once we have concrete proposals, which should be in the next week or so, so that those too can be released prior to this kind of mid-year upgrade, and probably should be.
B
On all of that, I'm happy, you know, to open up for discussion here and also take the discussion to the repo. I think we need to figure out how to package all of this for discussion and review, other than just hammering out these PRs in the spec repo, which can obviously be a bit opaque. So in the next week or so I'd like to get this written down, kind of bulleted at least, in some form, so we can talk about it in aggregate better.
B
Any questions or discussion on that? This also serves as something of a warm-up: it gets our hands dirty in being able to upgrade the beacon chain in a production context. We're going to, you know, have to do that on testnets and then do that on mainnet before a couple of the more ambitious and larger upgrades, which would be the merge and sharding, further down the line.
G
Right. I guess one thing to add is that ideally we want to come to consensus on what we want to go into, well, I guess what we're calling hard fork one, fairly quickly, just so that we don't lose any time on moving forward on it. So, to run through the list of items that can potentially go in:
G
As I understand it, number one is the sync committee; number two is the incentive accounting reforms, where there's a lighter version and then a somewhat deeper version that changes the way that effective balances work, moves some quotients out, and makes it easier to process an empty epoch.
G
So that's three. And then there are also fork choice changes: two tweaks to resolve some of the issues that people have come up with, and published some papers about, over the last few months. I guess this will be written up; well, these things are already in the issues, and there are already some documents about the specific things.
G
Oh, we will help package that together into a more unified list, and it would be, I guess, nice to make a lot of progress on at least some decision making over the next couple of weeks.
B
Yeah, agreed. I think we set ourselves up for moving quickly if we can agree on this in the next couple, next two weeks, I guess by the end of january or so.
B
Yeah, yeah. If I haven't heard from you or from your team, especially on those accounting reforms, I will knock on your door pretty soon.
E
I had a question about that. I mean, part of the reason for doing the accounting reform is that we feel some of the processing in the epoch transition is a little bit heavy, but I was curious:
E
Have you looked at how much of that processing is really in the spec, like in the spec function, and how much of it is due to finality occurring, and therefore clients needing to prune trees, save databases, and things like this? Across the board, I mean, I know from nimbus that the difference between processing a block and processing an epoch isn't that significant, if we only consider the part in the state transition.
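One way to separate the two costs being discussed is to time the pure spec function apart from the persistence work; a rough sketch, where `process_epoch`, `state`, and `persist_finalized` stand for a client's own implementations:

```python
import time

def timed(label: str, fn, *args):
    """Run fn(*args) and print how long it took."""
    t0 = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {(time.perf_counter() - t0) * 1000:.1f} ms")
    return result

# hypothetical usage against a client's own functions:
# timed("spec epoch transition", process_epoch, state)
# timed("persist + prune on finality", persist_finalized, state)
```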
G
Yeah, I remember benchmarks from months earlier, so this could be a long time ago, but that said, at the maximum possible validator count it took quite a lot, something like six seconds, to fully run the epoch processing function.
B
Yeah, I mean, any numbers from clients on that would be great. Additionally, there are two other wins. One is that, throughout the epoch, the state is much more informationally aware about what's going to happen, which I think could help serve users and more sophisticated APIs, and it actually makes spec writing and spec reasoning simpler.
B
For example, there is an issue currently where the intended proposer reward is, quite frankly, incorrect. That was probably an oversight, but it is also because there wasn't a very clean way to reason about the rewards with respect to how things were configured, and that is one of the things here.
B
The way rewards, and the base reward, are handled in the accounting reform makes it very clear, as we extend and modify rewards in subsequent phases when there are additional validator duties, that you can reason about the relationship between rewards and issuance much more cleanly.
G
The other thing worth restating is that the deeper reform, the quotient reform, is particularly attractive because it greatly increases the efficiency of processing empty or nearly empty chains.
B
I guess another thing to note is that the validator set size is probably going to grow, so optimizations here are valuable. But you're right: a lot of what this reform does is take some of the way things are cached and optimized on epoch transitions today and actually just chuck it into the state as canonical.
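As a rough sketch of that direction (flag values illustrative, not the spec's): the state records a small set of participation flags per validator as attestations arrive, so the epoch boundary becomes a linear scan instead of replaying stored pending attestations.

```python
TIMELY_SOURCE, TIMELY_TARGET, TIMELY_HEAD = 1, 2, 4  # illustrative bit flags

def on_attestation(epoch_participation: list, attesting_indices, flags: int):
    # accounting happens as attestations are processed, not at the boundary
    for index in attesting_indices:
        epoch_participation[index] |= flags

def count_target_participants(epoch_participation: list) -> int:
    # epoch processing reduces to scanning one small array
    return sum(1 for f in epoch_participation if f & TIMELY_TARGET)
```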
F
To add to that, I would be surprised if the accounting reform saved that much time in the actual epoch processing, but I think it's got potential to save us time in tree hashing and some other kinds of edge cases; on the PR there's a little list of things that I think it might improve. Right.
E
Well, I can just briefly mention that I know from experience in nimbus that our slowest processing is definitely persisting the state on finalization and on epoch transitions; really, that just so massively outweighs everything else that it's what we're focusing on right now, with, you know, more clever ways of storing the state so that we don't duplicate as much information every time. But that's going to be highly client dependent, I suspect.
F
Yeah, the nice thing about the accounting reform is that it makes it a little bit easier to... I think it'll make it easier to optimize the database, because you're storing this kind of list of bits that's quite easy to reason about, versus a list of weird attestations that are, you know, really hard to compress. So I think it'll be handy in the long run.
B
Okay, so let's refine this conversation in the coming two weeks and plan on being able to make a decision on what's in here at the end of january. Again, any input, especially on this global penalty quotient, which is a little bit deeper surgery, maybe for some big gains, would be great.
B
Okay, moving on. I wrote in the agenda Q1 R&D, and this is really what's happening the next couple of months, next few months: to better solidify the path for the two larger upgrades that we'd like to do in the coming couple of years, one of which is the merge.
B
There are some prototypes, and miguel even has some local tests running on the merge stuff. In terms of client resourcing, I think there's stuff to dig into here which can help the R&D effort, help the spec effort, and help refine these. And then I think the goal would be, at the end of Q1, to have a pretty good handle on both, to have something looking like pretty good specs on both, and to be able, at that point, to make a decision on what to drive on more concretely in production engineering.
B
Obviously, at that point I think we'll be working towards this first mid-year upgrade, but after that mid-year upgrade we want to be able to put the pedal to the metal on one of these two major upgrades. So, point being, there's some R&D work to do there.
B
So, in terms of resourcing on your team: part of it, I think, probably continues to work on the phase 0 beacon chain and optimizations as we're looking towards this upgrade, and then part of the team should probably spend some time on education and digging into the R&D to flesh out some of this other stuff.
B
I think the teams here can help schedule something there, and I think, just in general: dig into what the resources are so far, start thinking about it, provide feedback, and think about what your team can do to engage with it to help refine both of those. It's a bit long-winded; essentially, we have merge work.
B
We have sharding work. Both are in heavy R&D, kind of in the early spec phases and testing; we're working actively to refine them and can use help, and at the same time we'll bring you up to speed. I know everyone's been pretty heads-down on shipping the beacon chain, and there's some fun stuff to learn about and start engaging with.
I
However, there are some open questions, mostly on the eth1 side, like the block hash stuff, right, and some of the opcodes.
I
I can share, I have, like, a document. Let me just drop it here.
I
Yeah, actually, what we have so far is the research post, first, and this hackmd dropped in chat with, like, the communication protocol. Actually, you can take this document and see what's required on the eth2 side to communicate with the eth1 engine.
B
Right. So we've, I think, up to this point been relatively quietly working in, you know, a working group and driving on this, and I think it's definitely time to compile what we have worked on into digestible information, and also kind of open up this working group, so we're not just working in a silo, and welcome more in.
B
There is a merge channel on the discord; it's probably a good place to ask questions and start digging deeper, and mikhail and others will work on making sure that everything's well documented, so we can move from there.
B
It would be good. There is some stuff that's going to need to be written, the kind of extension of some libraries, and so, yeah, that's definitely something to highlight: some work to do there.
B
Great, so we're going to work on getting a date together for a three-to-four-hour session where we can go over a lot of this material and share things, and until then, dig into what's there and ask all the questions.
So we can help get everyone up to speed and working on this stuff. Anything else here? I just kind of wanted to lay it out. I'm working on a big blog post, and I think paul, in his blog post, also talks about some of the different R&D efforts that are ongoing, so hopefully that'll help orient as well.
K
And I guess one note here, and it's more of a roadmap-strategic thing, is that basically there are two types of features that we have, you know, planned for the future. One is functionality, you know, things like the merge and sharding, and the other is security features, and we have, you know, four or five different security features.
K
We have things like data availability sampling, proof of custody, secret leader election, VDFs and whatnot, and I guess, you know, one of the realizations is that the security of the beacon chain is probably good enough.
K
You know, in the short and medium term it's not world-war-three grade; we can make it better with all these, you know, fancy cryptographic security features, but maybe we should be focusing on the functionality. And I guess, you know, from a research standpoint:
K
I think we want to try and keep, you know, keep making progress internally at the EF in terms of specking and prototyping, but maybe limit the expectation that implementers have to understand all this stuff, and be on top of things, and implement it in the short term. So I'm hopeful that this...
K
This separation of concerns between security and functionality will allow us to really go full steam on functionality when it comes to implementation, as opposed to having to worry about the more fancy cryptographic stuff, which can come after we've implemented the functionality.
B
Great. On networking, age put up an issue. In the way we do aggregate attestations, there's this optimization in the validations so that you ignore an aggregate attestation if it has the same exact aggregate, if it's essentially the same attestation but a different aggregator. This was actually a nice-to-have optimization that ultimately makes more work on the network today, because the message ID is actually related to the outer message and not the inner attestation, and so everyone ends up with different IWANTs and IHAVEs for these.
B
They look like the same aggregate message, and you halt the gossip on the repeats, but you end up wasting a bunch of energy on IWANT/IHAVE to recover. So take a look at age's issue; it explains it. Ultimately, the optimization is a nice-to-have, and it probably makes sense to remove that line unless there is some clean way to keep it, which, based on a little bit of back and forth between me and age, I don't know if there is.
B
Well, IWANT/IHAVE still works the same way it does with message IDs. I think you could do it if you hooked the message ID more into the application layer, but then all of a sudden you're mixing layers in a weird way, and you can't very quickly and efficiently calculate message IDs.
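A toy illustration of the mismatch (the hashing here is simplified; real gossipsub message IDs are derived differently): two aggregators wrapping the identical attestation still yield distinct message IDs, so IHAVE/IWANT treats them as different messages.

```python
import hashlib

def message_id(outer_message_bytes: bytes) -> bytes:
    # ID is computed over the outer SignedAggregateAndProof, not the attestation
    return hashlib.sha256(outer_message_bytes).digest()[:20]

attestation = b"identical-attestation-ssz"
outer_a = b"aggregator-1|signature-1|" + attestation
outer_b = b"aggregator-2|signature-2|" + attestation
assert message_id(outer_a) != message_id(outer_b)  # seen as two messages
```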
B
So take a look at that issue. If we don't have any brilliant thoughts on how to keep that nice-to-have optimization in there by next week, we'll probably put up a PR and remove that line. In addition to that, there's backward sync, weak subjectivity... yes, jacek, please.
E
B
Yeah, I mean, I think that's a valuable conversation to have; especially in some of these transient attestation channels, it might not be helping us that much. That said, I think if we reduce the number of message IDs that we include in IHAVE/IWANT, and rely on each node kind of randomly sending some of them, we greatly reduce the overhead there and still get a lot of the benefit.
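A minimal, purely illustrative sketch of that mitigation: advertise only a small random sample of known message IDs per IHAVE, trading a little recovery latency for far less control traffic.

```python
import random

def sampled_ihave(known_message_ids: list, sample_size: int = 5) -> list:
    """Pick a random subset of message IDs to advertise to a peer."""
    return random.sample(known_message_ids,
                         min(sample_size, len(known_message_ids)))
```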
E
Well, I mean, say there are 50k validators, a thousand validators per slot, whatever, right: a thousand messages, a thousand message IDs, and we select five of them. What benefit is there really if we select a thousand of them? That's a lot of traffic.
B
Right. If you select five of them, and everyone selects five of them, and you have 100 peers, there are 500 of them, but coming from all different directions. I guess that's not the right assumption, because peers aren't on different subnets, but there is potentially a play at getting some value out of it with some randomness. But look, I hear you. If somebody wants to quantify what IWANT and IHAVE are doing for us today on mainnet, that would help us make a decision here.
B
I'd love to have this conversation, because you're right: there might be unnecessary bandwidth usage here, and maybe removing it, or disabling it on certain subnets if not all, could make sense.
B
I think there might be one other standing minor network issue. Oh well, I don't know if, after our last call, somebody opened up an issue about error codes and error handling, but that's an ongoing debate.
B
Those couple of minor network adjustments I would expect in a minor release, rather than waiting for a mid-year upgrade, so keep your eye on that and provide any input on the spec repo if you have it.
B
Okay, general spec discussion: any questions or thoughts on anything in there that we want to talk about?
L
B
Yeah, I guess just a couple of things. One would be that testnets serve a couple of purposes for us: staging our engineering upgrades, and also providing a quality of service to the community to be able to test things.
B
I haven't put a lot of thought into this, and I haven't put any work into it. It's working, but that's about all I can say. Does anybody have any thoughts on this one?
L
I'm not hearing anything particularly negative about pyrmont. I mean, it does occasionally stumble, but this is fine; it seems to be serving those needs. I'm just trying to get a sense of clarity for planning. You know, people like infura want to know where they stand if they adopt pyrmont, whether they're going to have to adopt something else straight away, and stuff like that, right?
B
One way to potentially keep pyrmont in check is, as a function of the community that joins, we join in a similar ratio with validators we control, to keep something like 75 to 80 percent in our control. I don't think we've added any validators in a long time; we could consider doing so. That'll also bump the load up a little bit on the network, which would be good for testing
B
upgrades. The main problem is that, you know, a goerli whale troll could mess up pyrmont overnight without us being able to control that, but maybe that's not too much of a concern.
M
One minor point is that any large players, like exchanges or whoever wants to do testing, are not going to be able to use pyrmont today, in all likelihood, just because they'd want a test setup similar to what they will eventually run on mainnet. But my inclination there is just to keep them waiting, because, yeah, it doesn't seem like a super high priority. So when things get sorted, they get sorted.
B
Okay, maybe we investigate doing auto-deposits as a function of community deposits to keep the ratio good. Is anything wrong with that?
B
Oh, they can, but if you want to test ten thousand validators, well, it might be hard to get the goerli eth, and all of a sudden you might disrupt the network. And some of these exchanges, we know, do want to run such tests.
F
I think, if exchanges and these big players are actually expecting to have a really decent chunk of validators, then perhaps we should be open to spinning one up for them, just to protect our own interests as clients, by ensuring, you know, these good two thousand validators run well. So if there are any big players out there listening, they can approach me at least, and we can just spin you up a testnet. We used to do it all the time.
B
Okay, long live pyrmont. Consider us padding the deposits that client teams control as a function of new entries, so that quality doesn't degrade.
B
Great. Well, thank you everyone. Please have someone on your team take a look at the open PRs that we'd like engineering feedback on, please begin to dig into the sharding and merge resources to bring yourself up to speed, and we will talk to you all in two weeks, and then do some sort of educational information-exchange workshop shortly after.