From YouTube: Eth2.0 Call #66 [2021/6/17]
A
Here's the agenda: nothing crazy. Client updates, with a focus on Altair; we will discuss the point brought up by Adrian today on some of the subtleties of gossiping sync signatures; we'll talk about Altair planning, and thank you to Pari for joining to help us discuss and coordinate that; and then general discussion. The merge call was right before, so.
B
Sure, so we've updated to the alpha 7 release with the new gossip message ID changes and the new rewards for sync committees. That's all up and running, and we've kicked off the euronpili testnet with that; the details are in the eth2-testnets repo. We've also added prefixes to the node health API to make it work a bit better with Kubernetes in particular: we now say we're syncing on that API right after startup, until we've found peers.
B
Previously we had no peers, so we had nothing to sync and would say "hey, we're in sync", because we don't know anybody yet. So we're actually exposing that startup mode now through that API.
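
A minimal sketch of that idea, assuming the standard beacon API health endpoint (`/eth/v1/node/health`, where 206 means "syncing") and hypothetical `peer_count` / `sync_distance` fields; this is the shape of the behavior described, not Teku's actual implementation:

```python
# Sketch: report "syncing" on the node health endpoint until we have peers,
# instead of claiming to be in sync while we know nobody.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NodeState:
    def __init__(self):
        self.peer_count = 0        # updated by discovery (not shown)
        self.sync_distance = 0     # slots behind the chain head

    def health_status(self) -> int:
        # 206 = syncing, 200 = ready (mirrors /eth/v1/node/health semantics)
        if self.peer_count == 0:   # startup mode: no peers found yet
            return 206
        if self.sync_distance > 0: # still catching up to the head
            return 206
        return 200

STATE = NodeState()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/eth/v1/node/health":
            self.send_response(STATE.health_status())
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 5052), HealthHandler).serve_forever()
```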
B
That is the main thing. We've got a lot of fixes for discovery that have come out and a few tweaks to our gossip stuff; mostly that's fairly specific stuff, but a lot of learnings out of the previous devnets. I think that's us.
C
Hey guys, Terence here. We aligned to alpha 7, passing spec tests. We're working on the new validator RPC endpoints, mainly for the sync committee stuff, and we're almost done with that; we're also working on the networking spec, and that's almost done as well. So we're almost there.
C
I think we're on track to start a local interop testnet to test the fork transition by the end of this week or early next week, and if that goes well we will jump into the multi-client testnet with you guys as well. On the maintenance front, we are planning to release our eth2 API support in our next release, so that's very exciting. Other than that, just bug fixes, and then, yep, that's it. Thank you.
D
Besides that, we are working on the validator client. We have a PR sitting there, and we managed to control Lighthouse with Nimbus's validator client, so we are in good shape. In parallel to that, we are also adding more and more API endpoints to be in line with the eth2 API requirements.
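
That kind of cross-client control works because the validator-client-to-beacon-node surface is the standard eth2 Beacon API. A small illustrative sketch; the endpoint paths are from the public beacon-APIs spec, while the client code itself is hypothetical:

```python
# Sketch: a validator client driving any conforming beacon node purely
# through the standard Beacon API HTTP endpoints.
import requests

BEACON = "http://localhost:5052"

def get_genesis():
    # GET /eth/v1/beacon/genesis returns genesis time, validators root, etc.
    return requests.get(f"{BEACON}/eth/v1/beacon/genesis").json()["data"]

def get_attester_duties(epoch: int, validator_indices: list[int]):
    # POST /eth/v1/validator/duties/attester/{epoch} takes an array of
    # validator indices (as strings) and returns their attestation duties.
    r = requests.post(
        f"{BEACON}/eth/v1/validator/duties/attester/{epoch}",
        json=[str(i) for i in validator_indices],
    )
    return r.json()["data"]
```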
E
Hey, so on the Altair front we are still on alpha 6; we're in the process of upgrading to seven, but we've been fixing various performance and stability issues that have come up, either during our own internal ephemeral testnets or trying to interop with Teku's devnet.
E
As of right now it looks pretty good locally; we need to upgrade to alpha 7 and see what it looks like. Other than that, I've been doing some other optimizations. We're going to delete our pending attestation cache, because we've found out it's not very DoS-resistant, and we're looking for feedback or ideas on how to actually implement something like that.
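
One common shape for a more DoS-resistant version, purely as a sketch (bounded globally and per block root, with slot-based expiry; all names here are hypothetical, not Lodestar's code):

```python
from collections import OrderedDict, defaultdict

MAX_TOTAL = 4096     # global bound on cached attestations
MAX_PER_ROOT = 64    # bound per unknown block root
MAX_AGE_SLOTS = 1    # entries older than this are pruned

class PendingAttestationCache:
    def __init__(self):
        # block_root -> OrderedDict{att_id: (slot, attestation)}
        self._by_root = defaultdict(OrderedDict)
        self._total = 0

    def _evict_one(self):
        # Drop the oldest entry from the oldest non-empty bucket.
        for root in list(self._by_root):
            bucket = self._by_root[root]
            if bucket:
                bucket.popitem(last=False)
                self._total -= 1
                if not bucket:
                    del self._by_root[root]
                return

    def add(self, block_root, att_id, slot, attestation):
        bucket = self._by_root[block_root]
        if att_id in bucket:
            return False                        # duplicate, ignore
        if len(bucket) >= MAX_PER_ROOT:
            bucket.popitem(last=False)          # cap growth per root
            self._total -= 1
        elif self._total >= MAX_TOTAL:
            self._evict_one()                   # cap growth globally
            bucket = self._by_root[block_root]  # may have been evicted above
        bucket[att_id] = (slot, attestation)
        self._total += 1
        return True

    def pop_for_block(self, block_root):
        # Called once the block arrives: return and clear its attestations.
        bucket = self._by_root.pop(block_root, OrderedDict())
        self._total -= len(bucket)
        return [att for _, att in bucket.values()]

    def prune(self, current_slot):
        for root in list(self._by_root):
            bucket = self._by_root[root]
            for k in [k for k, (s, _) in bucket.items()
                      if s + MAX_AGE_SLOTS < current_slot]:
                del bucket[k]
                self._total -= 1
            if not bucket:
                del self._by_root[root]
```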
E
We stopped updating our fork choice head on every call; instead we're caching that result and only updating it after processing each block. As far as publishing Lodestar goes, we pulled our API client out into a separate package, which is pretty nice just to be able to query an eth2 endpoint, and published it. We also pulled our light client prototype out into a package, and we started releasing nightly builds and integrated all that into our Docker setup. So, yeah.
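
A sketch of that fork-choice optimization, assuming a hypothetical `compute_head` that does the expensive fork-choice walk; this is the shape of the idea, not Lodestar's code:

```python
from typing import Callable, Optional

class CachedForkChoice:
    """Recompute the head once per imported block, not on every query."""

    def __init__(self, compute_head: Callable[[], bytes]):
        self._compute_head = compute_head     # expensive fork-choice walk
        self._cached_head: Optional[bytes] = None

    def on_block_imported(self) -> None:
        # Invalidate once per block; the next read recomputes lazily.
        self._cached_head = None

    def get_head(self) -> bytes:
        if self._cached_head is None:
            self._cached_head = self._compute_head()
        return self._cached_head
```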
F
Hello everyone, Paul here. With regard to Altair, we've got alpha 7 merged into our Altair branch. It's working well and passing tests; we're following euronpili and we seem to be getting sync committee messages. We had a couple of issues with the new configs, but we sorted those out and, yeah, it seems to be working out. We're looking into it to try and find some bugs and weirdness, which I'm sure we'll find. Also, as of today we're patching an issue with our VC.
F
We had some problems with signing just around the fork boundary, so we're going to fix that. We're still finishing some caching stuff, and then we'll be working on merging our Altair branch into our primary branch at some point. Outside of Altair, last week we released version 1.4.0 and it went quite well, which was nice.
F
We dropped our memory from like six gig to like 1.5 gig, dropped calls to eth1 nodes by about 80%, and the users seem very happy, which makes us happy. Moving forward we're going to be pushing forward with Altair and hopefully cutting a version 1.5.0 release maybe end of this month or start of next month; we should have some CPU savings and a bunch of other features.
A
Excellent, thank you. On this recent devnet, Adrian: did it do the transition, or did it start in Altair?
B
It did the transition; that was some of the confusion I caused for Lighthouse. It's at epoch 20 because I forgot to enable the feature as we passed through epoch 10, when it was first going to happen.
B
Yeah, so that's gone through the transition, which the previous ones did as well, just at epoch 10.
A
Okay, great, good to hear that. Let's actually do planning first, and then we can talk about the sync committee signature broadcast timing and cache. Okay, so we're making progress. Adrian, thank you, thank you, thank you; I think hosting these devnets and doing some initial interop is invaluable.
A
I think most of you have met Parithosh, Pari, who is at the EF and primarily does eth2 devops related tasks, so you've probably run into him with testnets and other things. Pari is going to be joining our calls here and getting his hands dirty with various testnet things: helping set dates, helping pick and host configs, that kind of stuff, and maybe working through problems if they arise, like hitting epoch 10, forgetting, and then changing it to epoch 20. Pari?
A
In terms of planning, I think we'll do iteration on the short-lived devnets; Adrian and/or Pari, you can maybe pick and host another one or two over the next week or two if that seems valuable. And then, what's the current temperature on the earliest fork for one of our large testnets: is that a two-week target, is that a four-week target?
F
As in something like Pyrmont or Prater? Yeah, I would think that two weeks is probably too early. Just looking broadly, I'd definitely be more in a four-week kind of bracket, given those two options.
A
I think that's where I would fall on that as well. Terence, did I see you come off mute?
A
Four weeks puts us at the 15th of July. So let's state the intention of continued interop short-lived devnets, and maybe keep one running once a bunch of people have joined it, and do some testing, just sending random transactions and things like that. On the 1st, which is two weeks out, the intention would be to set a fork date for one of our large testnets, maybe in the two-week time horizon after that, so heading towards the 15th of July; and maybe, if we're comfortable at that point, setting some targets for a mainnet fork.
A
It very well may be that we get to the 1st, are comfortable setting a testnet fork, but then want to regroup on the 15th, which would be another call, before we actually fork one of those. The intention would be to fork one of those, then get to the 15th and, if things had gone well, set fork targets for the other testnets and for mainnet. At that point, I think we're definitely in the August, early-to-mid-August horizon for an earliest mainnet fork.
B
Yeah, I think that sounds about right. I think the key thing we need to get onto now, in terms of talking testnets, is getting users used to the fact that it's coming, and coming fairly soon, and that they will have relatively short notice to upgrade; because otherwise it's going to go very badly when the chain splits.
A
Yeah, and once we pick a mainnet target, I think we'll probably do a minimum of four weeks from announcing that to the mainnet fork, which is what the minimum has often been on forks on the other side of this thing. I will write a blog post, target release Monday, making it very clear that this is coming, and also talk with some of the EthStaker folks to make sure that they've begun discussing that within their community as well.
G
You guys are happy for us to disclose some of those bugs? So yeah, you might get a message from me over the next few days.
C
I had another question: are teams thinking about doing like a quick audit, just for the diff that you guys have for Altair?
A
It's not that this is correct, but it's certainly not done on a per-fork basis for most eth1 mainnet clients; and that's not saying, again, that it is correct, but they rely heavily on testing and testnets, and assume that their architecture is generally correct and that those processes are the things that are going to best find consensus bugs. Which is probably true from a two-week-audit perspective. But extra eyes is never a bad thing.
A
If
you
do
intend
to
do
that,
you
need
to
knock
on
somebody's
door
immediately
as
you're
all
probably
well
aware.
Auditors
are
extremely
backlogged.
A
Okay. Adrian, would you mind introducing the issue that you brought up this morning?
B
Yeah, so in the earlier testnets we found that we were losing gossip messages. Basically, the inclusion rate for sync committee signatures had been really low; with the message ID changes that jumped up to about a 70% inclusion rate, but we were still missing a bunch. It turned out that it's because the network was functioning really well: we were producing a block every slot and then immediately producing a sync signature.
B
Both nodes in the network were managing to process the sync signature before they actually imported the block, and so they wound up ignoring most of the signatures. It's just the same race condition we've seen with attestations: if you publish them when you first receive a block, and other nodes don't have any caching, "save for later" type behavior.
B
You get this race condition between the block and the sync committee signature, or the attestation, actually being ready, causing you to drop some. I've run an experiment this evening with delaying producing sync committee signatures to the four-second mark in the slot. We're not getting 100% participation yet, but we're missing entire subnets at a time, so I think what's happening is that we're randomly not getting an aggregator; I need to add one more log message to prove that, but I'm pretty sure that we're actually getting a perfect inclusion rate of signatures now and we just sometimes don't aggregate. So essentially that delay solves the problem, and we now just need to make a choice; I think there are three proposals on the table. One is: just don't publish signatures early when you get a block, always wait for the four-second mark.
B
The second was that we introduce a cache, so you hold on to signatures, if you don't have the block they're pointing to, until the end of the slot. And the third was the suggestion that there's a random delay after you get the block, between, I think, a quarter of a second and a second.
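
As a concrete sketch of the first two options (all interfaces hypothetical; this is the shape of each fix, not any client's implementation):

```python
import asyncio
from collections import defaultdict

SECONDS_PER_SLOT = 12
SYNC_SIGNATURE_OFFSET = SECONDS_PER_SLOT / 3        # the four-second mark

# Option one, sender side: never publish early, always wait for 4s.
async def publish_at_four_seconds(clock, network, slot, subnet_id, signature):
    elapsed = clock.seconds_since_slot_start(slot)  # assumed clock interface
    if elapsed < SYNC_SIGNATURE_OFFSET:
        await asyncio.sleep(SYNC_SIGNATURE_OFFSET - elapsed)
    await network.publish(f"sync_committee_{subnet_id}", signature)

# Option two, receiver side: hold signatures whose block we haven't seen
# until the block arrives or the slot ends ("save for later").
class SaveForLaterQueue:
    def __init__(self):
        self._waiting = defaultdict(list)           # block_root -> [signature]

    def on_sync_signature(self, sig, known_block_roots):
        if sig.beacon_block_root in known_block_roots:
            return "process"                        # normal validation path
        self._waiting[sig.beacon_block_root].append(sig)
        return "saved"                              # neither accept nor reject yet

    def on_block_imported(self, block_root):
        # Replay everything that was waiting on this block.
        return self._waiting.pop(block_root, [])

    def on_slot_end(self):
        # Short-lived: on the order of a slot, not an epoch.
        dropped = sum(len(sigs) for sigs in self._waiting.values())
        self._waiting.clear()
        return dropped
```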
A
Any of them would probably work; my gut would be to go with the second and introduce a short-lived cache, so that you can keep the dispersal of attestation messages, not have them all being blasted at exactly the same time, and reuse the mechanism that protects attestations from having the same issue. Obviously the cache, I think, should be much more short-lived: on the order of a slot rather than, I think, on the order of an epoch; and we ideally get to reuse similar logic from that structure.
B
Yeah, I mean, that's the way I lean as well, partly because I already have the attestation cache, so I can plug it in relatively easily. For any client who doesn't do that yet, that's obviously more work, so that's a consideration. And it just feels like it's a deterministic solution: you are never going to reject a signature just because you don't have the block yet.
F
But
you
can
get
can't
you
get
non,
you
can't
always
verify
the
signatures
of
the
things
that
you
put
into
that
case
is
that
is
that
true,
that
is
probably
true.
You
can.
F
Yeah, I'm not saying the cache is definitely a bad idea; I think the delay is also a good idea. If we know that we're going to have to cache these things, then why make every node on the network cache it, rather than just have the sender hold it for a little bit more?
F
That's
that's
one
argument,
and
then
another
argument
is
that
if
we
just
have
some
delay,
then
you
can
get
by
without
a
cache
and
if
the
cache
isn't
a
perfect
case
like
in
this
case,
we
can't
always
verify
the
signatures
going
into
it.
Then
there's,
like
you
know
the
case,
isn't
a
perfect
deterministic
solution
and
we
already
have
this
problem
with
attestations,
like
I
think
loads
dimension
to
the
start
of
it.
How
do
you
make
the
attestation
case?
Boss
resistant?
You
can't
same
with
this
one
you
can't.
A
Why does the randomness look much different than this? Because if I wait 0.25 seconds, or some random stretch around that, what does that actually buy me? Because I can still end up sending a signature to somebody who hasn't seen the block yet.
F
I guess, I mean, given that it's only 512 it probably doesn't earn you a whole lot. But just looking from a broader network perspective: you've got a message, and you know that the receivers all need like half a second before they can process it. Do you hold it on the sender side, or does every receiver hold it until they can use it? Yeah, I'm not sure that that's the strongest argument.
A
I
guess,
but
I'm
also
it's
also
network
latency.
It's
not
just
like
the
processing
of
the
block
into
into
the
into
the
actual
state
which,
unless
you're
telling
me
it's
like
almost
that's
almost
always
the
dominant
factor.
F
I would add one more point, which is basically that losing an attestation by and large doesn't matter; somebody will have seen it, and it's likely to get included anyway.
I
Would it be worse to introduce some sort of forgiveness mechanism in general? I mean, during the attestation discussions we were discussing whether the attestation inclusion delay should be increased in order to allow for more leniency on block-versus-attestation timing.
A
I mean, it's an option; it's a trade-off in terms of the optimal latency to follow the chain as a light client. If you added a minimum inclusion delay, so that it followed two slots instead of one slot, then you'd have 24 seconds, two 12-second slots, for your node to be able to follow the head, rather than...
B
I think one of the bits of data we're missing in this is that we don't have a decent-sized network where we can see how this actually behaves in the real world. I've got two nodes that are, you know, right next to each other in AWS; it's not really surprising they're getting the block all at the same time and it's all happening very fast, whereas even on Pyrmont and Prater we're seeing a much bigger distribution of when blocks arrive. And if you take "publish your attestation straight away", Lighthouse currently drops them.
A
I'd say, if possible, I'd like to solve this in the p2p spec, which can be done with minimal damage to moving towards spec freeze, especially on the state transition side. Nodes on the same network that have slightly different handling of this case would still be able to hang out and chat with each other.
A
So, if possible, I'd like to solve it with one of the two suggestions that just touch the p2p, rather than adding some sort of induced delay on the state transition side. If we want to gather more data, I don't think it's critical that we converge on the solution immediately right now, but we should probably pick something in the next five days.
F
So is the reason that we're not going to send them all at the four-second mark so that we don't flood the network? If that is the case, it's only 512 messages, right? So it just seems appealing to me to say: just send them all at four seconds, and implement a cache if you want as well.
A
There's
multiple
reasons
there
one
would
be
there
is
this:
you
do
want
ad
decisions
well
propagated,
so
that
aggregation
can
happen
at
that
eight
second
mark,
and
so,
if
you
have
the
information
that
you
need
to
send
your
message,
the
idea
is
that
it's
an
optimization
to
send
your
message
when
it's
available
like
when
the
correctness
is
available,
and
then
that
also
is
so
that
you
can
potentially
get
aggregated
at
a
higher
success
rate.
But
then
that
is
also
a
does
help
stagger.
A
But even then, even if you're not finalizing for an entire day: assuming that there were actually two forks that deep, of an entire day, and that those partitions of the network are actually communicating and not resolving, that fork is a more extreme issue in and of itself.
B
You do need to do all of the validations, like checking if they're in the sync committee, so you're at worst bounded on the number of validators.
B
Yeah, there is, but you can't validate that without the shuffling.
A
Okay, this issue was just uncovered a few hours ago, or at least since I was awake; let's continue chatting about it offline and try to aim for a p2p-related solution in the next few days.
A
Otherwise,
if
we
needed
to
get
some
better
data
on
larger
networks,
then
we
can
take
it
there.
Adrian
you
mentioned
something
as
an
aside
that
was
interesting,
that
you
were
not
getting
aggregators
you
weren't,
always
getting
aggregators
on
different
sections
of
the.
I
think
many
subnets,
I
believe
the
number
of
aggregators
was
the
target
number
aggregator,
which
is
actually
reduced,
and
that
might
be
an
error,
not
an
error,
but
it
might
not
be
a
good
thing.
So
do
you
have
any
other
data
on
that
or
should
we?
B
I think the key thing is to try it with a bigger set of validators; I've only got two thousand four hundred, so I'm seeing duplicates in the same subcommittee, which we're not expecting, and that makes it less likely that you get an aggregator. But yeah, it is definitely worth reviewing those numbers.
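
A back-of-the-envelope sketch of why duplicates hurt here. The constants are the Altair spec's; the modelling is a simplification that treats each distinct validator as one independent aggregation draw per subnet, so duplicates reduce the number of draws:

```python
SYNC_COMMITTEE_SIZE = 512
SYNC_COMMITTEE_SUBNET_COUNT = 4
TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE = 16

members_per_subnet = SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT   # 128
p_aggregator = TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE / members_per_subnet  # 1/8

def p_no_aggregator(distinct_members: int) -> float:
    # Duplicate members don't add extra aggregation draws, so fewer
    # distinct validators means a higher chance of an empty subnet.
    return (1 - p_aggregator) ** distinct_members

for distinct in (128, 64, 32, 16):
    print(f"{distinct:4d} distinct members -> "
          f"P(no aggregator) = {p_no_aggregator(distinct):.3g}")
```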
A
Okay, any other Altair discussions or discussion points? No? I think sharding is probably the main thing to share, right? Okay.
K
So where we're currently at is this updated sharding spec with a new state format. This makes it easier to track confirmation data and keeps it all in one place, to make a Merkle proof to the commitment easier; I'm generally happy with that piece of refactoring. Where there is more: we're thinking of changing the way shard proposers are tasked with their proposals.
K
Currently we split validators up into shard committees, and then out of these committees we select the proposers, keeping the proposers per shard separate for a time window of about a day; this helps the network layer, but within the spec there's not really much to it.
K
So we may even end up with a model where the proposers can just select data, grab data; they don't have to learn all the layer twos, they don't have to specialize as much. And then it's the builder that, after paying the fee, is incentivized to make the data available by publishing it on the shard topic. So we're looking into this change of incentives and how that could work.
A
For inclusion, it's just kind of the commitment to that data and proof that you can pay the fee; and so I think that would actually greatly reduce the bandwidth requirements, even in the event there's a competitive fee market or competitive landscape for getting transaction data into the shards.
K
Right. Then there is this discussion about firewalling MEV. Making it open like Flashbots, I think, is really good: we should try and encourage every validator to participate in a way where all these incentives are even; there's no validator with a lot more MEV, or that has to do a lot of extra steps to specialize.
K
And
then
it's
basically
the
data
trans,
the
data
transaction
that
first
offers
the
data
and
then
later
the
builder
publishes
the
data.
So
we
shift
this
availability
problem
towards
the
builder,
and
so
this
is
good
for
privacy,
since
we
don't
have
to
so
it's
like
if
we
have
this
very
critical
part
of
publishing
the
data
like-
and
this
is
a
larger
piece
of
data
on
the
charts
if
he
moved
it
away
from
the
federator
from
the
consensus
identity-
and
I
mean
it
can
still
be
the
same
person.
A
Cool. Proto is going to be working on a PR that highlights some of these changes soon.
J
I'm thinking about after the merge: these separate block builders sound like an alternative to the execution services, right? So they will just prepare the payload, the entire payload, for the proposer, and the proposer will just need to sign off on it.
K
The builder can still combine different dapps' data and fit it together in one block. And then we're thinking of a standard outside of the protocol, but something that dapps would use; think of it like ERC-20 in terms of which layer this lives on. We basically just need users of shard data to recognize which part of the shard blob is being used for which protocol or for which application, and so we need some kind of small header to say where to find the data of specific dapps.
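
Purely as an illustration of that "small header" idea (none of this is specified anywhere): a shard blob could open with a tiny index mapping application IDs to offsets, so each dapp can find its chunk without parsing the rest:

```python
import struct
from typing import Dict, Optional

def encode_blob(app_chunks: Dict[int, bytes]) -> bytes:
    """Pack {app_id: data} with a leading index of (app_id, offset, length)."""
    entries = list(app_chunks.items())
    header = struct.pack("<H", len(entries))   # number of index entries
    offset = 2 + 12 * len(entries)             # data begins after the index
    body = b""
    for app_id, data in entries:
        header += struct.pack("<III", app_id, offset, len(data))
        body += data
        offset += len(data)
    return header + body

def find_app_data(blob: bytes, app_id: int) -> Optional[bytes]:
    """Return the chunk for app_id, or None if the blob doesn't carry it."""
    (count,) = struct.unpack_from("<H", blob, 0)
    for i in range(count):
        aid, off, length = struct.unpack_from("<III", blob, 2 + 12 * i)
        if aid == app_id:
            return blob[off:off + length]
    return None
```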
J
Yeah, okay. So for the execution service it will not be an alternative, because it just allows for building blocks, but attesters will have to execute the payload anyway. So yeah.
K
Right,
so
it
is
similar
to
the
mainnet
separation
in
a
way
that
we
do
have.
Builders
like
this
data
still
has
meaning
it's
just
that
the
base
parts
called
layer,
one
doesn't
execute
the
data,
but
the
builders
will
still
want
to
execute
their
rollup
data
and
sequence
transactions,
whatever
they
want
to
do,
and
this
is
separate.
It's
a
separate
process
from
the
shared
proposal
rule
which
just
needs
to
select
data,
and
then
we
shift
this
incentive
from
of
the
availability
towards
the
builder.
K
L
Yes, just a quick comment: the work that I showed a couple of months ago about the resource consumption of the different clients will be published at the blockchain research conference in Paris in September, and there will also be a poster about the network crawlers that we have been working on. Talking about the new crawler: we developed a new version of the crawler that can run 24/7, gather data continuously, and chart it in a dashboard.
L
And
another
thing
is
that
with
paritos,
we
finally
have
a
first
minimal
set
of
metrics
that
are
already
existing
across
all
the
clients,
and
we
are
just
in
the
process
of
deciding
the
best
nomenclature
for
the
best
names
for
these
metrics,
so
that
we
can
have
some
first
standard
for
the
metrics
across
clients
and
that's
it
on
my
site.
A
Great
anything
else
related
to
spec
or
any
discussion
points
in
general
that
we'd
like
to
discuss
before
we
close.
A
Okay, we will aim to get this p2p sync committee issue resolved in the coming days, and we are also continuing to increase test coverage for Altair, so expect a new, very iterative release next week with additional tests. Thank you, really appreciate all the hard work, and excited to see Altair moving. Talk soon, bye.