From YouTube: EIP-4844 Breakout Room #2
A: Okay, we are live. Welcome everyone to the second 4844 breakout. I guess the goal for this call is just to get everyone on the same page about the progress on the implementation and on the KZG ceremony, and then take some time to chat about what we see as the biggest blockers or issues that we need to address on the EIP, and also to try to list out what types of skills would be helpful for people contributing to the EIP. Yesterday there were a bunch of people talking on Twitter about how important this is, and I think a few people already reached out.

A: But if there's a way we can better articulate what's needed and what's helpful, I think it'll help filter the different people who would like to help out. That should be it. I guess, to kick it off: Mofi, and I don't know if Michael is on the call, I don't see him, but yeah, Mofi, do you want to start and give us an update on the implementation and where things are at?

C: Hey, can you hear me?

A: Yeah, it's a bit quiet, but we can hear you okay.

C: Right now we have an implementation for the [inaudible] optimization, and this is something that we worked on a couple of weeks ago, where we're...

C: Okay, I'll take it, all right. So currently we have most of the spec now, and other than the couple of open issues that we need to resolve, right now we're working on just optimizing the implementation, making it as fast as possible. The point of contention there is the KZG blob verification: there's an open issue where we want to ensure that verifying blobs isn't a DoS vector. So there's been some work put into the back-end implementation to speed that up.

C: That's mostly where we are right now. And also, as a quick prelude to an announcement: we are working on a devnet that will be publicly available pretty soon, so we're looking forward to having external contributors joining the network and testing things out, because that's going to be really needed.

A: Nice. Anyone have questions, comments, thoughts on that?

A: I guess: do you have a link to the repo, or repos, that you and Michael are working off, to share here with folks?

C: Yeah, sure, I'll put that in the Zoom chat.

A: Awesome.

E: By the way, Mofi, you asked some questions about the verification code and why it's so slow and all that stuff. I tried to answer last week; I hope my answers made sense, but if they didn't, just ask again, or we can do a call, the two of us, to figure out in more detail how to optimize the code.

C: Yeah, thanks George. I did skim through them, but I haven't looked into it in detail. I should have more time next week to take a closer look at that.

F: Maybe it's worth briefly saying, because I also put it in chat: since we have this kind of 1559-like mechanism for blobs as well, we have a reasonably good understanding of the frequency with which transactions will come in, because the mempool can be very small for them as well. So you'd only expect to see one legitimate blob transaction, legitimate meaning that the commitment actually matches the blob sent with it, coming in every few seconds.

F: So the verification of the legitimate ones is not a problem at all, performance-wise. It's really about handling it if people spam you with transactions where the blobs just don't match the commitments, because then you can't even charge them for it; it's similar to an invalid signature. So it's really mostly about peer scoring, and making sure that you just don't allow one peer to send you multiple of those. That's at the core of the DoS issue.
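The peer-scoring response described here can be sketched minimally. This is an illustrative sketch only, not any client's actual code; the class name and the threshold are assumptions. Since the scoring discussed below is coarse (keep the peer or disconnect it), the sketch simply disconnects a peer once it has relayed a provably invalid blob transaction:

```python
# Illustrative sketch only: track peers that relay blob transactions whose
# blobs don't match their KZG commitments, and disconnect after a threshold.
# PeerScorer and max_invalid are invented names for illustration.
class PeerScorer:
    def __init__(self, max_invalid=1):
        self.max_invalid = max_invalid
        self.invalid_counts = {}
        self.disconnected = set()

    def report_invalid_blob(self, peer_id):
        # Called when a blob relayed by this peer fails its commitment check.
        self.invalid_counts[peer_id] = self.invalid_counts.get(peer_id, 0) + 1
        if self.invalid_counts[peer_id] >= self.max_invalid:
            self.disconnected.add(peer_id)

    def is_disconnected(self, peer_id):
        return peer_id in self.disconnected
```

With `max_invalid=1` this matches the either/or scoring described on the call: one provably invalid blob and the peer is dropped, so no single peer can make you verify many bad blobs.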
A: And as I understand it, though, there's just no peer scoring on the execution layer, right? There's no easy way. You need to be able to verify them quickly, because there's no granularity in the scoring: either you stay with the peer or you disconnect them. So you could disconnect a peer, but basically only after they've DoSed you, and ideally you haven't gone down by then.

C: Okay, just related to that: we do have peer scoring in the consensus layer, but there's a weird issue where you sort of have to defer verification of blob KZGs in consensus whenever the block headers are not available. In that case, if you are deferring blob verification, it's much harder to penalize peers if they do send invalid blobs, because you'd sort of have to keep track of which peer is associated with a blob. And I imagine that complicates the implementation of various consensus clients; at least that's what my experience has been implementing this in Prysm.

C: So as that's being gossiped, it is possible that you receive a sidecar associated with a beacon block that hasn't been observed yet, and in that case you want to defer processing of that blob sidecar rather than just simply rejecting it, because it might be incorrectly labeled as invalid.

C: If you defer that processing, then you need to keep track of which peers sent that sidecar in order to penalize them, and that's just one complexity with the implementation.
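A minimal sketch of the bookkeeping just described, assuming nothing about any client's internals (the names `PendingSidecars`, `defer`, and `on_block_seen` are invented for illustration): sidecars whose beacon block hasn't been observed yet are parked together with the set of peers that sent them, so those peers can still be penalized if the deferred validation later fails:

```python
from collections import defaultdict

# Illustrative sketch, not any consensus client's actual implementation.
class PendingSidecars:
    def __init__(self):
        self.sidecars = {}               # block_root -> deferred sidecar
        self.senders = defaultdict(set)  # block_root -> peers that sent it

    def defer(self, block_root, sidecar, peer_id):
        # The beacon block for block_root hasn't been observed yet, so we
        # can't validate the sidecar; remember it and who sent it.
        self.sidecars[block_root] = sidecar
        self.senders[block_root].add(peer_id)

    def on_block_seen(self, block_root, validate):
        # Once the block arrives, run the deferred validation. Returns the
        # set of peers to penalize (empty if the sidecar was valid or absent).
        sidecar = self.sidecars.pop(block_root, None)
        peers = self.senders.pop(block_root, set())
        if sidecar is None or validate(sidecar):
            return set()
        return peers
```

The extra state (the senders map) is exactly the "keep track of which peers sent that sidecar" complexity mentioned above; simply rejecting unknown sidecars would avoid it, at the cost of mislabeling valid ones.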
A: And yeah, I see you have a comment about the CL sync and the sidecars. Does it make sense to discuss this now? Yeah, I guess so, actually; important implementation stuff. Ansgar, do you want to take a minute and share your thoughts on that?

F: Right. So I think this is basically just a question that a couple of people had when we were discussing this in Paris. Basically, I think for now the plan is, as Mofi was saying just now as well, to have this sidecar architecture, where blobs are more or less gossiped independently from the beacon blocks between CL clients, and that can lead to all these differences...

F: ...where sometimes you get a blob and you haven't actually observed the beacon block yet, and the other way around, and everything. And there was some concern, and I think I only heard it secondhand: I think Proto was saying that some CL client teams had, I'm not sure, maybe he could say something, but there were people that basically raised some concern that this might introduce sync complexity.

F: Now, of course, the reason people came up with this architecture in the first place is that it's more cleanly forward-compatible with full sharding, because, basically, in the future we'll have to have blobs and blocks be separated anyway, since clients will no longer download all the blobs. So it's cleaner to already have that separation today, but it does front-load some of the extra complexity.

F: So if we want to really follow the strict minimum-complexity approach for 4844, there is a case to be made to just return to something where you bundle the blobs and the blocks after all, so that whenever you receive a blob, it comes with a block, and the other way around. Then there's no extra complexity around having one but not the other, and things like that.

G: I think that's fair. Long-term, with danksharding, you may need the separation; short-term, maybe. I think it's up to implementers to make the right call. Mofi, what do you think?

C: Yeah, I think that bundling them would simplify the implementation, at least in Prysm.

C: One concern I do kind of have: the advantage of keeping them separate is that it makes it easy and quick to drop invalid sidecars before we even process them. What I mean by this is, basically: if you observe a beacon block that is invalid, and you immediately later receive the associated sidecar...

C: ...you don't have to do expensive validation; you can just drop it immediately. And if we start bundling things, there is a network cost of transmitting the whole bundle, and therefore...

C: I guess you're shifting the cost. There's a cost involved with always transmitting the entire beacon block, and making sure that...

C: ...if it's invalid, then you've already incurred that cost of, you know, storing the whole block momentarily, which includes the sidecar. And yeah, I guess this issue can be solved with appropriate peer scoring, so maybe this is a non-issue, but that's basically my only concern here with doing this.

A: Yeah, I would tend to agree with that, and I feel like once we have maybe a devnet working and these other spec issues resolved, we can also bring this up on the CL calls, and in the meantime also get CL devs to look into it, assuming they have time, which is a very generous assumption. But yeah, I agree: if the current version works right now, it's not worth refactoring the entire sync.

A: Cool. Any other thoughts or comments on the implementations generally?

G: So we have these two forks of consensus and execution clients that may have some distance, in terms of diffs, from the latest merge work. So if people from these clients are listening, I'd like to hear your feedback about incorporating more of the latest merge work, and whether or not it's the right time to start rebasing.

A: Okay, yeah. I don't think there's anyone from Geth here, and Terence from Prysm told me he could join, probably for the second half of the call, so yeah, when he joins we can maybe ask him about that directly as well.

C: For things like the blob verification, we've basically just been using the library in Geth, and now that we also need some of that functionality in Prysm, on the consensus side, we sort of have to decide what's the best approach to using the same functionality across both implementations.

E: So, on that front, we are in contact with the blst (Supranational) people. I would say that there is progress, but this progress is kind of slow: over a month ago we sent them the requirements for what is needed.

E: I will report back with what I learn, but another thing I want to raise on this topic is that it might be a good idea to have some of the people more involved in the implementations of these things join such future calls with the Supranational people, to give a better idea of what is needed in terms of interface. Because, you know, I think I know what is needed, but maybe someone who is more involved with the actual stuff can give more insight.

A: Cool, yeah, that's really useful. So I know Marius from Geth had mentioned he had some thoughts on that, so he's probably a good person to reach out to, to join those calls, beyond, obviously, Mofi and Michael here as well. But yeah, on the last one of these calls he seemed to have some pretty strong opinions about it.

A: And I guess, generally, do people feel like blst is basically the best option, to adopt blst and make that better? Because I believe that's what all the CL teams use already, correct? And there's no other option really on the table right now.

B: The interfaces we need are a very thin wrapper around functionality that blst already has; I mean, around functionality that any BLS library will implement already. So since we're all using blst, it makes a lot of sense just to put those in.

A: Got it. Okay, I guess next up: Trent, I see you're here. Do you want to give a quick update on the BLS side of things? All right, sorry, not BLS, the KZG side of things.

D: Yeah, I was going to say: I could barely cover the KZG setup, I definitely can't cover BLS. But yeah, since we started this we're just doing kind of the same stuff. We have an audit coming up, not of the implementation, but of the design of the ceremony, with SECBIT, coming up soon, so we're preparing for that.

D: In a few weeks. I just shared a link to a bunch of resources, which has a link to one of the implementations, and specifically the calls, if anybody wants to catch up or is curious how far along we are. That's the main thing: we're preparing for the audit, and we have the next call next week on Thursday, 11:30 UTC.

D: There's a doc in there if anybody's curious about when we plan to start this: hopefully around Devcon. We'll have a period of closed contributions before that to test it, and then at Devcon, hopefully, we'll have some live contributions from the audience, and then it'll run for a few months. We also have some people starting to work on a couple of test sites.

D: We're hoping for ten thousand, which, depending on who you ask, would make it the largest trusted setup ceremony.

A: Okay, I see Terence has joined. Oh, can you hear us, Terence? Do you have a mic?

A: Yeah, there are two things that we discussed that we're curious to get your input on. The first is around the CL sync.

A: We were having a conversation that we've decoupled blob sync from block sync, to have it be kind of forward-compatible with the full sharding approach, but that might introduce more complexity on the CL side, and we were thinking that there might be value in potentially just re-coupling blobs and blocks at the syncing level for the first version of 4844, and then, you know, eventually making the sync more decoupled.

A: But generally, do you have any thoughts on that, and how much of a simplification would it be to couple them now? Is it valuable to do it, or should we try to front-load as much of the sync design as possible?

H: Right, we definitely had this conversation at EthCC, which I remember, and I am in favor of the coupling approach. I'm not too worried about trying to be the same as sharding on day zero. I think when we get to real danksharding, we need a hard fork anyway, so we can change it then; it's not that big of a change. And I think we can definitely ship 4844 slightly faster if we just couple them together. It's less of an engineering challenge that way.

A: And the second question we had for you: the diffs between the current prototype and master, with the merge work in Prysm and Geth, are starting to grow. When do...

H: Yeah, Proto asked me to help. I think I should be free in a few days; I'm just trying to finish some last-minute merge-related issues. So yeah, I should be free in a few days, and then I'm more than happy to help. Just send over the branch and I can rebase it for you; it shouldn't take me more than a few hours. Oh, okay.

A: Nice, so I think those are the two things for Terence.

A: Sweet. And I guess, yeah, the other thing I want to make sure we chat about: we have folks now, obviously on the Optimism side and at Coinbase, kind of working on this. But this is a pretty big EIP, and there's a bunch of folks... sorry.

A: By "this" I mean the implementations; there's a bunch of other work as well. But yeah, there's obviously a lot of work to do to get this implemented and tested in clients, and it seems like there's some interest from the community to help, and I guess I'm curious, from Coinbase and Optimism: what skill sets or tasks...

A: ...do you think it would be most helpful to have people help out with, ones that are maybe a bit independent from the work that you're doing, or that can be parallelized, if there are engineers who have some time and, yeah, experience that can help here?

C: I guess, yeah, I'll start with one that will be really useful once we have the devnet running: just having users on the network testing all sorts of scenarios, sending blobs, downloading blobs, ensuring that, you know, the current gas fee calculation sort of works in a dev environment. And yeah, we'd just like to have more participants on the EIP-4844 testnet.

C: That would be super useful. Another thing would be if people just take a look at the code, the various repos that I posted in the Zoom.

C: Maybe we can make these available somewhere like the community call agenda, but take a look at the repos and see if we can improve test coverage, particularly in Prysm, because a lot of the testing we're doing here is based on another repo that basically does interop between Geth and Prysm for testing. But it would be nicer to have more test coverage in Prysm that targets specific scenarios and ensures that, you know, the EIP is as robust as possible.

A: Got it. And I guess, in terms of actually implementing things, we have the Coinbase folks working on the Geth implementation and you working on the Prysm side. I see there's a bunch of client devs on the call, like, do we think... oh sorry, yeah, I mixed up Geth and Prysm and Coinbase and Optimism.

C: Yeah, I think it makes sense to get the spec as far along as possible first, because yeah, we are still making changes to it, particularly the gas price update rule; we're probably going to have a discussion later, right now, about how we're going to do that. Also, if we do decide to go ahead with bundling the blobs with the beacon block inside sidecars, that's another change that other implementers would have to do.

C: So it just makes sense to consolidate all the changes that we want, and once we get to a point where the spec is sort of stable, then we can start introducing more implementations.

A: Okay, that makes sense. So basically, I guess the two main things now are just testing on the devnet as soon as that's out, and then basically seeing if there's test coverage that can be improved in the current Prysm and Geth implementations. Those would be the two most useful things, right?

A: Yeah. And I guess, would it be helpful if someone comes along and they're an expert in Prysm or Geth, you know? Is that also helpful, to have more people working on those specific implementations, or is it just too many people on the same parts of the code?

C: No, I think that would also be helpful. There are like two or three major items I foresee in the next couple of weeks where two or three people can work on things independently without stepping on each other's toes. So yeah, I think that would be helpful, having experienced Prysm or Geth devs contributing to the implementations.

A: Okay, great. So I guess, if there are experienced Geth or Prysm devs listening, you can reach out to me, or, I guess, Liam, you also posted about this yesterday, so I'll put you on the spot here: yeah, if you're interested in contributing and you're not sure where to start, we linked...

A: I have notes for this call, so we linked a bunch of stuff there, and then the very first place to start is probably either the devnet or looking at the specs, and kind of diving deeper from there. Does that make sense?

A: Okay, so I guess, yeah, the last thing I wanted to cover today, and I think it should bring us right to time, is just basically our list of issues from last time, and we touched on some of these already, but not all. On the KZG library section, you know, we're still working on improving this.

A: We discussed the sync a fair bit, and I guess the last one is the fee market. And just to put some context here: right now, the current devnet implementations use kind of the naive fee market, with a hard-coded gas price for blobs all the time. This is not going to work. There was a proposal in the EIP for a more complex one that basically uses EIP-1559-style pricing for the fee market.
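For reference, the EIP-1559-style update that the proposal borrows its style from adjusts the base fee relative to the parent block only. A minimal sketch of that rule, with the constant from EIP-1559 (this is the regular execution-gas rule, not the blob rule under discussion):

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # from EIP-1559

def next_base_fee(parent_base_fee, parent_gas_used, parent_gas_target):
    # EIP-1559: move the base fee toward equilibrium using only the parent
    # block's base fee and gas used, both available in the parent header.
    if parent_gas_used == parent_gas_target:
        return parent_base_fee
    delta = abs(parent_gas_used - parent_gas_target)
    change = (parent_base_fee * delta // parent_gas_target
              // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if parent_gas_used > parent_gas_target:
        return parent_base_fee + max(change, 1)  # over target: fee rises
    return parent_base_fee - change              # under target: fee falls
```

A full block moves the fee up by at most 12.5% and an empty block moves it down by the same fraction, which is the per-block relative adjustment contrasted with the blob mechanism later in the call.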
A
On
the
last
call,
we
kind
of
discussed
moving
this
from
the
from
a
special
contract
in
the
state
to
the
block
header
yeah,
and
I
guess
I
was
curious
to
hear
a
from
people.
Like
you
know.
Does
this
general
fee
market
just
makes
sense?
Do
we
think
it's
good
enough
to
move
forward
and
b?
A
F
Yeah
sure
so
I
think
kind
of
with
regards
to
the
header,
I
think
basically,
everyone
agreed
that
it
might
just
be
the
more
practical
way
to
go
for
now.
The
only
person
disagreeing
with
vitalik,
incidentally,
but.
F
So
you
know
forfeiting
is
his
boys
here
and
I
think
on
the
mechanism
itself,
generally
kind
of
the
mechanism
proposed
by
the
ap
more
or
less
works.
F
The
only
reason
why
we
kind
of
by
for
a
while
now
it's
been
a
somewhat
open
research
topic
is
just
that
there
are
things
we
would
like
to
get
that
are
not
fully
provided
by
the
fee
mechanism,
but
they
are
more
like
nice
to
have
so,
basically
for
one,
it's
that,
while
this
works
really
well
for
something
like
blobs,
where
demand
is
relatively
slow
moving,
it
wouldn't
quite
like
perfectly
be
be
generalizable
because
for
like,
basically
sorry
stepping
a
step
back
like
this
would
be
the
first
time
that
we
introduced
like
a
two-dimensional
pricing
mechanism,
one
dimension
for
bobs,
one
dimension
for
normal
execution
roller
projects.
F
For
why
now
have
been
saying
that
they
would
really
like
to
have
like
a
standard
standard
for
doing
two-dimensional
pricing,
because
they
have
to
do
that
anyway,
because
they
have
to
price
layer,
two
guys
and
layer,
one
guess
with
inside
one
transaction.
Basically,
for
now,
all
roll
ups
basically
hand
roll
their
own
mechanism.
For
for
like
two
dimensional
pricing,
we
would
like
for
the
four
eight
for
four
mechanism,
basically
to
be
generalizable.
F
The
current
version
is
not
ideally
generalizable,
just
because
like
in
in
in
that
context,
kind
of
the
two
dimensions
would
be
much
more
fluctuating
and
because
they
share
the
same
gas
limit
that
that
might
become
a
problem
again,
not
a
problem
for
blobs,
just
a
problem
for
generalizing
the
mechanism
and
then
also
kind
of.
Similarly,
we
would
also
ideally
want
this
to
be
maximally
forward,
compatible
with
like
full
on
multi-dimensional
pricing
further
down
the
road,
but
I
think
on
both
of
these
counts.
It's
in
this.
F
It's
a
somewhat
similar
situation
like
we
when
we're
talking
earlier
about
bundling
blobs
and
blocks
on
the
cl
side,
where
we
might
just
want
to
be
practical
and
say
we
move
forward
with
the
minimum
working
version
for
now,
and
then
you
know
we
can
always
iterate
on
it
later.
So
I
think,
there's
still
some
effort
to
try
and
maybe
look
into
this
whole
kind
of
compatibility
with
layer
twos
just
because
they
would
really
like
that.
I
think
so.
F
Maybe
we'll
you
know
if,
if
we
come
up
with
a
slightly
alternative
design
within
the
next
month
or
so
that
would
include
that
I
think
well,
like
all
the
better,
but
for
now
we
can
just
you
know
we
can
just
work
on
the
basis
that
we
have
a
mechanism.
That
is
good
enough.
Basically,
sorry
that
was
a
bit
long,
but
I
hope
that
that
made
sense.
A
G
Right
systemat
mentioned
that
live
client
does
have
a
pr
open
in
the
eaps
repository
to
update
the
old
mechanism
to
a
new
mechanism
that
uses
a
header
field
instead
of
state.
G
The
adjustment
works
a
little
bit
different
and
I
think
there
are
some
subtle
issues
with
this
of
that
mechanism
and
I'm
not
entirely
sure
what
the
right
direction
is
to
correct
them.
With
this
blob
pricing
problem,
we
have
this
balance
we
can
make,
or
this
incentive,
whether
or
not
we
want
to
prefer
a
burst
of
blood
data
or
a
repeated
small,
smaller
burst.
So
if
we
go
over
the
target,
the
guest
price
or
the
fee
rises
and.
G
This
is
incrementally
more
costly,
and
so
small
bursts
right
now
are
more
expensive
than
grouping
all
the
the
blobs
together,
even
though
the
total
amount
of
throughput
after
the
end
of
the
the
example
is
the
same,
and
so
at
this
question
do
are
we
more
concerned
about
bandwidth
on
the
network
and
about
the
stability
of
the
bandwidth,
or
are
we
more
concerned
about
the
processing,
because,
if
processing,
I
think
it
might
actually
be
favorable
to
create
this
incentive
for
a
large
burst
of
blobs,
rather
than
this
more
stable
amount
of
blocks.
G
F
F
So
sorry,
just
briefly
mentioned-
I
think
one
of
the
concerns
with
on
the
pruning
side
was
just
that
it
might
be
not
not
ideal
to
basically
entry
in
specific
retention
like
specific
assumptions
about
retention
periods
in
the
pricing
mechanism
itself,
because
otherwise
this
is
basically
just
a
a
client
parameter
where
of
course,
I
don't
know
we,
we
we
like
to
give
some
defaults
and
some
some
some
recommendations,
but
basically,
if
you
want
to
run
a
cl
and
just
drop
blops
after
a
week,
you
can
do
that
or
if
you
want
to
keep
them
for
a
year,
you
can
do
that.
F
But
with
the
moment
we
kind
of
have
have
some
sort
of
like
finite
memory
set
in
in
in
in
the
block
pricing
mechanism.
Then,
of
course
we're
starting
to
enjoy
that
other
than
that.
I
think
it's
perfectly
reasonable
and
it's
also
not
not
too
complicated.
I
think
to
do
that.
A
I'm
just
gonna
say
I
agree.
We
probably
shouldn't
enshrine
some
specific
value,
but
we
should
price
the
fact
that,
like
they
are
like
temporary
to
some
extent
right-
and
it's
almost
like
you-
don't
want
to
enshrine
like
a
week
versus
a
month,
but
you
also
don't
want
the
mechanism
to
like,
even
implicitly
assume
they're
going
to
be
stored
for
a
year
if
that
makes
sense,
because
that
kind
of
nudges
clients
to
like
not
store
them
for
a
year,
which
is
what
we
want.
But
it's
I
agree.
G
G
I
However,
I
believe
that
the
counter
argument
there
is
that's
a
kind
of
latent,
like
remem
memory
of
historic
pricing,
is
completely
lost
the
noise
in
the
real
world,
like
so
in
a
theater
in
your
theoretical
scenario,
you
had
perfectly
even
throughput,
except
for
that
one
little
spike,
and
that
one
little
spike
causes
that
to
retain
kind
of
remember
the
spike
forever,
but
in
the
real
world
you
are
never
going
to
get
that
perfect
and
as
soon
as
you
have
any
kind
of
variance
that
little
tiny
spike
gets
lost
in
the
noise
like
right
away.
I
G
G
G
So
we
know
whether
or
not
we
are
under
bloated
targets
and
say
if
we're
over
the
targets,
we're
going
to
adjust
the
prices
upwards.
If
we're
into
the
target,
then
the
sorry
for
under
order
targets,
I
think
the
current
efp
makes
blobs
very,
very
cheap.
I
don't
I'm
not
exactly
sure
if
the
erp
is
correct
in
this
case,
but
let's
just
take
the
case
where
we
are
over
the
target
in
the
case
that
we're
over
the
target.
B
F
Reading,
I
think
I
think
that
that's
just
basically
so
it
doesn't
so.
I
think
the
the
pricing.
Basically,
the
difference
between
the
pricing
of
the
the
post
pricing
for
484
and
1559-
is
that
1559
basically
always
does
relative
adjustments,
so
it
it
doesn't
care
about
the
absolute
value
of
the
base.
Basically
just
says:
okay,
the
block
was
under
full
go
down.
F
The
book
was
over
full
go
up,
whereas
so
it's
always
like
just
you
know,
it
only
looks
at
one
last
block,
whereas
four
four
eight
four
four
does
the
the
exact
opposite.
It
has
like
this
infinite
time
horizon
where
it
just
says.
I
want
to
always
have
half
of
the
blob
space
filled
and
I
just
keep
track
of
historically
like
accumulating
over
all
history.
F
What
was
the
percentage
and
as
long
as
the
percentage
was
under
is
under
50,
then
basically
blocks
are
free
and
the
moment
we
are
over
50,
then
blobs
basically
cost
something,
and
that
price
keeps
like
keeps
going
up
the
the
further
we
are
about
50
to
basically
until
we
at
some
point
you
know,
get
pushed
back
down
to
to
50
or
like
there
could
be
some
equilibrium
where
we
know
we
are
51
or
something,
but
now
just
very
briefly
saying
like.
F
Why
does
it
not
really
matter
that
this
is
that
it
has
this
long
term
memory
and
I
think,
that's
kind
of
also
what
mica
was
alluding
to
because
of
this
mechanism.
We
will
always
end
up
in
a
scenario
where
we
are
close
to
50.
We
could
be
below
50
in
the
very
early
days
when
no
one
uses
blobs,
but
besides,
that
we'll
always
be
like
in
the
50
to
55
range
or
something
something
like
that
right
and
so
just
because
bob's
might
have
been
more
in
demand
in
the
past.
F
Something
doesn't
really
matter,
because
it
just
means
that
this
value
will
be
at
50
between
50
and
55.
So
the
the
worst
case
is
that
now
the
demand
is
only
50
and
it
or
51,
and
it
used
to
be
55.
So
there's
like
a
four
percent
difference
or
something,
but
that
that
really
doesn't
make
a
big
difference
and
it
washes
out
over
time.
So
so
it.
I
agree
that
maybe
it's
still
preferable
to
to
to
to
make
that
more
explicit.
F
But
there
can't
be
a
scenario
in
which,
like
the
the
historic
accumulator
is
at
like
90
or
something
because
that's
so
that's
the
entire
like
thing
that
the
the
kind
of
targeting
was
supposed
to
help
against
is
that.
G
F
No
but
but
but
it
kind
of
does
so
so
basically,
the
idea
is
that,
because
we
have
this
maximum,
that's
only
two
x,
the
average
anyway
like
we,
we
would
be
okay
with
it
to
sustained.
Oh,
like
okay,
yeah.
A
B
G
F
F
The
the
assessment
was
just
that,
basically,
this
inefficiency
is
there
like.
You
could
basically
just
because
I
mean
in
the
long
run.
We
don't
expect
this
to
be
to
really
be
the
case,
much
because
you'd
you'd
never
be
like
for
a
sustained
period
of
time
be
below
50,
because,
at
least
in
our
assumption
there
would
always
be
some
demand
for
blobs
so
that
it
would
be
used
like
before
we
get
dipped
down.
F
You
know
50,
but
in
the
early
days
it
could
definitely
happen,
and
so
we
have
this
slide
in
efficiency
that
we
basically
have
to
be
able
to
handle
storing
2x
the
amount,
the
average
amount
for
say
a
month
or
so
because
there
would
have
been
an
empty
month
and
then
a
double
month,
and
so
we
basically
have
to
store
two
eggs
for
that.
We
gain
the
simplicity
in
the
algorithm.
F
G
I
What
what
was
the
reason
behind
choosing
this
mechanism
instead
of
the
1559
mechanism
like
what
is
the
perceived
advantage?
They
seem
like
they'd
result
in
basically
the
same
thing,
but
this
one
requires
an
extra
header
field.
F
G
I
G
B
G
The
parent's
information,
the
parent
block
base
fee
and
then
has
this
lag
to
update
towards
the
new
base
fee
for
the
dating,
but
the
base
view
update
is
correct
and
it
uses
the
total
amount
of
gas
that
was
used
to
do
so.
So
this
is
the
second
header
field
that
is
already
available
for
your
regular
gas
to
be
able
to
do
this,
update
with
two
header
fields,
from
the
parent
block
to
get
and
compute
the
new
base.
Fifteen
expo
block
with
this
erp.
G
We
don't
have
such
information
that
captures
how
many
blobs
were
included
in
the
previous
block,
without
having
to
make
the
full
block
available
like
the
header
data
itself
is
not
enough
to
get
the
right
information
to
update
a
base
fee
in
the
same
way
that
eip-1559
would
do
so.
Instead,
this
mechanism
tracks
just
that
information,
the
amount
of
blobs
that
have
been
included
and
then,
instead
of
introducing
this
base
feed
that
needs
to
be
updated.
G
It
complete
it
computes
it
just
from
the
total
amount
of
data
that
has
been
included
by
keeping
track
not
just
of
the
last
parent
block
but
of
all
of
the
total
included.
Blobs
and
then
comparing
it
against
a
theoretical
target
based
on
the
the
block,
height
difference
and
the
number
of
blocks
that
are
the
number
of
blobs
that
you
go
into.
Each
block.
I
So,
with
the
short
version
of
that
beef
that,
if
1559
requires
the
transactions
from
the
parent
block,
this
does
not
require
the
equivalent
of
that
which
would
be
the
blob.
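The accumulator mechanism described above can be sketched as follows. This is an illustrative sketch of the idea, not the EIP's exact pseudocode: the constants are made up for illustration, and `fake_exponential` is the integer approximation of an exponential used in the proposal. The key property discussed on the call is that the fee depends only on the running total of included blobs versus the target times the block height, so no per-block blob count from the parent block body is needed:

```python
# Illustrative constants (not the EIP's real values).
TARGET_BLOBS_PER_BLOCK = 2
MIN_BLOB_FEE = 1
FEE_UPDATE_FRACTION = 8

def fake_exponential(factor, numerator, denominator):
    # Integer Taylor-series approximation of factor * e^(numerator/denominator),
    # as used in the EIP-4844 proposal.
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_fee(total_blobs_included, block_height):
    # Excess over the "half full on average" target, accumulated over all
    # history. While at or under target the excess is zero and blobs stay
    # at the minimum fee; over target the fee grows exponentially in the
    # excess, pushing usage back toward the target.
    excess = max(total_blobs_included - TARGET_BLOBS_PER_BLOCK * block_height, 0)
    return fake_exponential(MIN_BLOB_FEE, excess, FEE_UPDATE_FRACTION)
```

This also makes the "blobs are free while under target" behavior and the infinite-memory accumulator, both debated above, concrete: the `max(..., 0)` clamps away any under-target history, which is why a brief historic spike only matters through the residual excess it leaves behind.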
G
G
G
I
Is
this
formula
written
down
in
the
eip
at
the
moment.
D
A
Yeah. Just because we're basically hitting on time here, I feel like this idea around short bursts versus long-term history is something that we should probably get client team feedback on, especially on the CL side, along with the sync design.
A
That feels like the main thing here. I guess the other part, Ansgar, you mentioned around having L2s being able to use this as a pricing mechanism as well; it feels to me like once we have the preference from the CL teams, that's maybe the second thing to look at. Basically those are the two most important things to figure out for the fee market. Does that make sense?
F
I guess they would favor, of course, the short-term stabilization mechanism, but for that it's much more about how the two-dimensional pricing actually works. The way the EIP right now works is that it basically just translates the variable price into a variable amount of gas consumption, but then the gas within the transaction is accounted as normal.
F
That has some disadvantages that aren't really that relevant for 4844, but they would be more relevant for rollups. So basically, if we wanted to make this more rollup-compatible, that might mean we would have to slightly change the way the accounting works as well, not just this design choice.
I
So am I correct that this is not adjusting the gas price, it's adjusting the gas cost, the amount of gas that's used for a blob? Is that what you just said, Ansgar? Yes. Oh, I see. Yeah, I'm not a fan of that, but I'm running out of time, so I won't complain too much right now.
A
Yeah, that makes sense. I guess, indeed, if you think of it as interlocking constraints or something, I just want to make sure that what we present as the trade-off space for L2s is kind of what CL teams want to optimize for, because it's kind of crucial that CL teams are happy with this
A
if we wanted to implement it on L1. And then beyond that, I guess getting the blst additions in would be really helpful, and launching the devnet and having people look into that. And then finally, does it make sense to already schedule another one of these calls, or do people prefer to do this async? Oh, Karen.
E
I was just gonna say that this desync between the two specs right now is an actual issue, because with Hsiao-Wei we did the consensus specs for 4844 to be executable, and that brought a bunch of edits and differences. And right now the two specs are pretty desynchronized in terms of the KZG stuff, and I've been waiting to make an EIP PR to bring them in sync, but I'm not sure when to do that. So that was another topic I wanted to raise in this call.
I
Is the reason for not updating the EIP regularly just that it's too much hassle, so you wait until things are hammered out and then update the EIP, or is there some other reason that the EIP is lagging?
E
Yeah, that's the reason. It's like two code duplicates in the code base, but to change the second duplicate I need to go through the whole PR process, and so I was waiting to batch a bunch of stuff before I do so. But this is all related to the executable spec thing, so maybe after the ACD we can have a more productive discussion about it.
A
I think this is one of the best examples of why our process is broken. Anyway, I know we're already over time, but I think if you want to come, and Proto as well, on All Core Devs next week to kind of highlight that, it would be good, because I don't think this is the last time we have a feature that touches both layers. Yeah, so that would be really helpful.
A
And on timing, looking just roughly at the next couple weeks, the time I would propose would be Wednesday, August 17th at 14:00 UTC. If everyone here is happy with that, we can just put that down now; otherwise we can chat about it on the Discord. So, any objections to the 17th, 14:00 UTC?
A
Okay, no objections, cool. So I will see you all then, and yeah, let me share the notes in the chat here; I'll post them in the GitHub agenda as well. Thanks everyone, this was really good.