From YouTube: EIP-4844 Breakout Room #3
A: Okay, we are recording. This is the third 4844 breakout room. I'll share the agenda again in the chat. At a high level, there have been some updates in terms of the implementation on the devnet, and we'll start by sharing those and going over them. There's a bunch of things left to do in terms of work on the devnet, and I know over the past couple of weeks a bunch of engineers have reached out that they wanted to help, so we'll make sure to take time to cover those and see if there are some folks here who can help with any of these tasks. And then I want to make sure we spend a good chunk of the call talking about the higher-level design questions.

A: If we get all that done and we have some more time, I think there's some more stuff to chat about, like: if we have a second devnet, what features do we want, and whether there are any updates on the KZG ceremony side. But the KZG folks have a call every two weeks, so it's fine to leave that to there. Yeah, I guess Mofi, do you want to give us a quick update and maybe a demo of the devnet?
B: Hey Tim, yeah, sure. So we have a devnet out. This is going to be the first of hopefully only two devnets we're going to roll out for EIP-4844. Basically, there are a few things in the spec that we still need to, I guess, discuss and finalize, so hopefully the goal of this devnet is to let us test what we have, what we've already come to consensus on, and have something that the community can start playing around with and hacking on before we later decide on what we need for the spec. At that point we'll have a second devnet, and then we can go from there.

B: So the devnet we have running contains basically four validator nodes and four beacon nodes, and I wrote up a pretty handy guide, linked in the GitHub issue, to help onboard folks. Let me post it here on Zoom.
B: To connect to the devnet, all you need is basically the latest Geth and Prysm implementations of the spec, and I posted some configs you could use. It's not really geared towards folks that are not very familiar with running Geth or Prysm, I guess.

B: So we actually have, so Michael from Coinbase has this really cool repo that sets up the docker-compose, making it very easy for you to connect to the devnet. Let me post that in the chat; I think that'll be the easiest way to onboard users.

B: It basically takes the guide that I wrote and solidifies it into a couple of scripts. And I think, Michael, we still need to update the genesis in that repo, right? Yeah.

B: Right now, cool, so I have that repo pulled in. In my instance I have the genesis running, and pretty much if you want to start the devnet from my containers here, you can pull the repo I posted in the chat once Michael has updated the genesis files.

B: I think you also need to update the Geth image to point to the latest one, because it has the embedded Geth genesis as well. Once you have that set up, all you have to do is run docker-compose up, and it should start both Geth and Prysm, connect to the boot nodes we have set up for both execution and consensus, and you should be good to go.
B: You can use that to interact with blobs on the network pretty easily. For example, if you want to upload a blob file, or rather to send a blob transaction, you just use this command: you give it your Geth URL, a blob file, and a private key (hopefully no one forgets that one), and you can easily send blob transactions to the network. Same thing for downloading a blob that was sent to the network.
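For context, here is a minimal sketch of how a tool like that might pack a file into a blob, assuming the draft EIP-4844 layout of 4096 field elements of 32 bytes each, with 31 usable data bytes per element so every value stays below the BLS12-381 scalar field modulus. The function name and padding scheme are illustrative assumptions, not the actual tool's implementation.

```python
# Illustrative sketch: pack raw bytes into an EIP-4844-style blob.
# Assumes 4096 field elements per blob and 31 data bytes per 32-byte
# element; the real tool may pad or encode differently.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
USABLE_BYTES = FIELD_ELEMENTS_PER_BLOB * 31  # 126,976 bytes per blob

def pack_blob(data: bytes) -> bytes:
    if len(data) > USABLE_BYTES:
        raise ValueError("data does not fit in a single blob")
    blob = bytearray(FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT)
    for i in range(0, len(data), 31):
        chunk = data[i:i + 31]
        # Leave the first byte of each element zero so the big-endian
        # 32-byte value stays below the BLS12-381 scalar field modulus.
        offset = (i // 31) * BYTES_PER_FIELD_ELEMENT
        blob[offset + 1:offset + 1 + len(chunk)] = chunk
    return bytes(blob)
```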
B: So right now, one thing that's kind of missing, and we'll get into it later, is that while you can interact with the beacon chain network, you can't send any blob transactions without ETH. So feel free to ping me on Discord or Telegram if you need some ETH to get started with, and I can just send that to you if you want to start sending blob transactions.

B: Yeah, that's pretty much it. Any questions?
A: Yeah, and I guess you kind of hinted at this: there are a couple of things missing right now. One of them is obviously a faucet, or some less manual way to get ETH on the devnet; I think that would be pretty useful. And then beyond that, yeah.

A: Beyond that, sorry, I have a little list of other things. Beyond that, I guess, it would be good if there was a way to automatically use your blob transaction sending tool to kind of spam the network, or create a high load of blob transactions.

A: I feel like that would be good for seeing the nodes process that, and for being able to verify that when the blobs expire they're actually taken out of the network and removed from the beacon chain. And then I don't know if it would make sense to have an RPC endpoint as well, so that people who may want to interact with the devnet but not run the whole Docker setup are able to do that.
B: Yeah, the RPC endpoint, I think that's something we can definitely add in a day or two. We just need to harden it a little bit on our Optimism side to make sure people don't just spam the network. Yeah, awesome.
A: Yeah, that makes sense. And then I'm curious, I know there's a bunch of newer folks on the call: does anyone feel like taking one of these things, around either the faucet, a blob transaction spamming tool, or just testing the expiry of blobs?

A: Cool, yeah, that sounds great, and we can chat in the Discord.

A: Yeah, that would work. And I think Georgios is on the call; I think Paradigm has a faucet repo that might be open source. I don't know if that's 100% correct.
D: Yeah, I can probably get some nodes running as well, and I could take a look into writing a utility to just kind of spam transactions, or whatever would help.

D: You also mentioned it might be useful to have an RPC endpoint. I know Mofi just said that you want to harden it to avoid spam on the network, but is it useful to have an open one that we designate for spam?

A: I think it's probably best, if you're spamming the network, to actually run your own instance of the testnet and propagate the transactions that way. Okay, yeah, unless Mofi disagrees; that would be my gut feeling, rather than relaying it through an external RPC, yeah.
B: Yeah, I guess "spam" is not quite the right word; I meant trying to DoS our endpoints.

B: It's generally fine. The endpoint, once we have it set up, you can spam it; it's just that we will have some DoS protections, and if you really want to go that far, then, like Tim said, it's best to just run your own node and send transactions to that. Yeah.
D: Okay, makes sense. For that blob spammer, should it basically simulate a kind of realistic workload?
B: I would say try to trigger the limits we have in place, like, for example, the number of blobs that can be included in a block, or the total size of the blobs that can be included in a block. It would be really useful to be able to push or trigger those limits; that's one way we could make sure that the network keeps working even as those limits are being hit.
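A minimal sketch of what such a limit-pushing spammer loop could look like. The send_blob_tx helper and the MAX_BLOBS_PER_BLOCK value are assumptions for illustration (a real tool would wrap whatever CLI or RPC the devnet exposes, and the limit comes from the spec config), so treat this as a shape rather than an implementation.

```python
import os
import time

# Hypothetical helper: in practice this would shell out to the
# blob-sending CLI shown earlier or call the Geth RPC directly.
from blob_tool import send_blob_tx  # assumed interface, not a real package

MAX_BLOBS_PER_BLOCK = 16        # illustrative draft-spec limit
BLOB_PAYLOAD_BYTES = 4096 * 31  # one full blob's worth of data

def spam_full_blocks(geth_url: str, private_key: str, rounds: int) -> None:
    """Submit enough max-size blobs each slot to press against the per-block limit."""
    for _ in range(rounds):
        for _ in range(MAX_BLOBS_PER_BLOCK):
            payload = os.urandom(BLOB_PAYLOAD_BYTES)
            send_blob_tx(geth_url, payload, private_key)
        time.sleep(12)  # roughly one slot, so each burst targets a new block
```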
A: And I feel, in terms of... yeah, that's probably the main thing. If you want to make it a tad more realistic, there are probably going to be a handful of different contracts that mostly interact with the blobs. If you think of L1 today, there are, call it, maybe five to ten rollups; there's not a hundred, but there's also not just one.

A: And yeah, I'll post this here. I'm not sure how helpful it is for the spamming stuff, but Marius from the Geth team has a bunch of fuzzing repos and transaction-sending repos. I know that they don't work with blob transactions.

A: I don't know if it's easiest to extend something like that or just to write something from scratch, but yeah, in case that's helpful.
A
Sweet
yeah
and
I
think
yeah
the
the
I
think
at
a
high
level
if
we
can
get
the
faucet
up
just
like
some
spam
on
the
network,
an
rpc
endpoint
and
then
a
couple
people
running
the
devnet
and
also
trying
to
like
look
at
the
kind
of
blobs
expiring
and
making
sure
that
works.
A
That's
really
valuable,
I
think
the
other
part
that's
like,
maybe
not
as
urgent,
but
it's
if
we
have
a
second
demo
of
that
working
swords
like
documenting
things
better
and
that
can
be
as
simple
as
like,
if
you're
playing
around
with
this
and
like
you're
in
mophie's
hacking
d
and
something
is
like
wrong
or
could
be
better
documented,
just
like
gradually
adding
to
it.
That's
that's
always
helpful,
because
you've
hit
a
bunch
of
hedge
cases
when
you
have
different
people
running
this
stuff
and
yeah.
D: Okay, cool. I've got a few servers on Hetzner that are bare metal and have a fair amount of resources, but okay, cool, thanks.

D: I can take on another task, which is just to monitor the node and try to get a baseline of resource usage, and we can maybe publish that as a first step for how much you actually need for EIP-4844.
A: Have some real rollups on it? Yes. I think Turtle's not on this call; he said he was planning to look into doing this on the Optimism side, and I believe they still needed some changes to Bedrock to make it work. And I can also send it around to the other L2 teams to see if some of them have the bandwidth to deploy on it quickly. No, that's a good point. I guess, especially once we...

A: Yeah, okay. Anything else on the devnet?
A: Sounds good. Okay, next up, the thing I wanted to chat about was the fee market PR. I know Proto had left a bunch of comments on it. I'm curious whether you have a feeling, basically, of where we've landed, and whether we can go ahead and merge this, or whether there's still some work to be done.
C: So my understanding of the disagreement on the fee market is that it's a question of: are we targeting more of a long-term number of blobs, or are we targeting...

C: I mean, I guess they're both going to target a long-term number of blobs, but one is going to approach that target in a much slower way, whereas the 1559 mechanism is going to very quickly change the price of the blobs to reach that amount. And I think the original idea was that we didn't want to use the 1559 mechanism here again because of how quickly it moves within just a handful of blocks.

C: So on that front, I don't know if anyone has any other thoughts related to it. I don't think this PR changes the fee mechanism from what it was; it's really only removing the fee mechanism from the state contract.
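For reference, a minimal sketch of the 1559-style exponential blob pricing being weighed here, roughly in the shape of the draft EIP-4844 mechanism that tracks a running excess of blobs and prices them exponentially in that excess. The constants and function names are illustrative assumptions, not final spec values.

```python
MIN_BLOB_GASPRICE = 1
TARGET_BLOBS_PER_BLOCK = 8     # illustrative target
PRICE_UPDATE_FRACTION = 32     # illustrative smoothing constant

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def update_excess_blobs(parent_excess: int, blobs_in_block: int) -> int:
    """Excess grows above target and decays below it, block by block."""
    return max(parent_excess + blobs_in_block - TARGET_BLOBS_PER_BLOCK, 0)

def blob_gasprice(excess_blobs: int) -> int:
    """The price reacts exponentially to sustained above-target usage."""
    return fake_exponential(MIN_BLOB_GASPRICE, excess_blobs, PRICE_UPDATE_FRACTION)
```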
F: Yeah, I also just briefly wanted to give my plus-one to that as well. I think it makes a lot of sense, just because having it in the state is an excess source of complexity, and even if we're going to change the mechanism itself in the future, I don't see why it would make sense to wait and bundle them. So I think having them separate is the right way to go. Cool.
A: Okay, so let's do that. And then, we were going to discuss this on the CL call last week, but we got busy with the merge, so maybe we can bump it to next week's call and see if there are some more insights there. Because I think the core of Proto's concern was: is it better for the nodes to receive short bursts of blobs, or a more constant stream of them? And we wanted CL teams' feedback on that.
C: I think we should be able to also achieve that by tuning the constants.

F: Yeah, I'm also not 100% certain that it's strictly necessary, but it's already very helpful to get CL teams' feedback, just because, again, the way we're describing the distinction is a little bit hand-wavy for now. But yeah, we can talk about it offline.
A
And
I
think
I
I
think
the
thing
that
the
reason
why
we
did
want
to
reach
out
to
cl
teams
is
there's
an
argument
that,
like
maybe
actually
getting
like
burst
of
blobs,
is
like
a
bit
easier
to
process,
because
yeah
they're
like
easier
to
process
in
chunks
than
like
in
small
increments,
and
I
think
if
there
is
something
there
with
like
how
the
clients
work
that
can
at
least
help
like
cuts,
the
design
space
or
like
yeah
narrow
the
design
space
a
little
bit.
A
I
don't
know
terence,
do
you
have
any
thoughts
on
that
or
actually
yeah
and
enrico?
Is
here
as
well,
so
yeah.
E: Yeah, I don't have a strong opinion. I need to see some benchmark data, or even just play with it myself first, before I form an opinion on this.

E: Just the basics: the compression of the data, and how fast it is to verify and to validate the stuff.
G: Yeah, I just jumped into the topic and I need to wrap my head around it, so I need some time before forming this kind of opinion or asking questions, yeah.

G: By the way, sorry, just to add one thing: you mentioned that there's some syncing discussion taking place somewhere on GitHub or somewhere. Do we know where this is happening, so I can catch up?
A: I don't know that there's a discussion, and that was going to be the next thing, but basically we did discuss it in various places, and Terence wrote a doc summarizing the approaches. I guess we can move to that next if there's nothing else on the fee market. Yeah, just on the fee market: it seems like merging this PR, which is just about moving things to the header instead of the state, is something we should do.

A: It's unclear what the right design mechanism is, and we probably want to get some more data and potentially some more opinions from client teams to inform that. But it also doesn't seem like the most urgent thing, and getting the verification optimizations in first is probably better.
C: Sorry, I just wanted to say one last thing on that: if somebody could just go through and give a thumbs-up on it, so that we feel confident, because there was a change in how we're calculating the gas cost for the blobs.

C: It should have the same result, but it would be nice to have someone else look it over and thumbs it up, just to make sure.
A: Sweet, okay. On the blob sync: Terence, do you want to take a minute or two and walk us through your doc, either sharing your screen or we can pull it up? I've posted it in the chat.

E: Yeah, sure. If you just open the doc... sorry, yeah, okay, I was muted. So just open the doc, and I don't think we need to keep this long, so I can just quickly go through it.
E: So there are just two approaches we are considering right now. One is you essentially decouple the sidecar and the block, and that's how the spec is right now, so there are two different objects; and the other is tightly coupled, so you put the sidecar within the block. And there are essentially pros and cons, trade-offs, between the two around the coupling.

E: If it's not coupled, then it's likely more optimized and more extensible for the spec, because then for sharding we can essentially reuse the same functions. And if it's coupled, then it's better for the client, I would say. But honestly, I went through the two different approaches here and there, I thought about this for some time, and I don't think the difference is that drastic.

E: If you don't couple it, there's more code on the client side: you have to handle the queue, and you basically have to wait until you receive the blobs before you can process the sidecar and before you can run fork choice. But the changes are not that bad, because we kind of do this for attestations already today. Say you receive an attestation on the beacon chain; you can't really process the attestation until you get the block that the attestation is voting for, so it's a similar concept.

E: So I don't foresee that bad of a pushback from the client teams. But with that said, I do want to get more input from the client teams, because just my input is not enough; there are four other awesome teams out there, and they definitely should voice their opinion, I would say. So yeah, that's the TL;DR.
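A minimal sketch of the pending-queue pattern described above for the decoupled design: park whichever of the block or sidecar arrives first, and only run fork choice once both are available, mirroring how clients already defer attestations for blocks they have not seen. All names here (including process_block_and_blobs) are illustrative assumptions, not any client's actual API.

```python
# Illustrative pending queue for the decoupled block/sidecar design.
pending_blocks: dict[bytes, object] = {}    # block_root -> signed beacon block
pending_sidecars: dict[bytes, object] = {}  # block_root -> blobs sidecar

def on_block(block_root: bytes, block: object) -> None:
    sidecar = pending_sidecars.pop(block_root, None)
    if sidecar is None:
        pending_blocks[block_root] = block       # wait for the sidecar
    else:
        process_block_and_blobs(block, sidecar)  # assumed fork-choice entry point

def on_sidecar(block_root: bytes, sidecar: object) -> None:
    block = pending_blocks.pop(block_root, None)
    if block is None:
        pending_sidecars[block_root] = sidecar   # wait for the block
    else:
        process_block_and_blobs(block, sidecar)
```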
G: Yeah, just to go through it: I was thinking about what I see at the very end of the document, where you mention the blob sidecars. So this one is the change to actually be able to sync?

E: Yeah. So right now there is a request/response topic that basically allows your peers to request the sidecars by range. So one scenario we can think of is a node joining, say via checkpoint sync, whether it's from a finalized checkpoint or a weak subjectivity checkpoint, and that's usually going to be t minus a few days or t minus one week, something like that.
E: So it needs to basically backtrack and get the blob data, but that doesn't mean it has to backtrack and get the blocks, so that's kind of the asymmetry right there. If they were coupled together, you could just say, hey, we can just backtrack and get the blocks for the last month, easy. But now, since they're not coupled, you have to backtrack and get the blobs without the blocks. So that's the slightly odd part.
E: So yeah, on my end I'm also going to forward this to Paul from Lighthouse and everyone else, to try to get more feedback on the client side. But since everyone's really busy working on the merge, I don't think we'll form an opinion or a decision until maybe shortly after the merge, so yeah.
F: Yeah, I was just wondering, and this could be a stupid question, but would there be any sense in keeping them loosely coupled? Basically separate, but creating some sort of new wrapper, so that while you are at the head of the network they usually come in together, but then for data further back they're still in their separate form.
E
Yeah,
no
that's
what
I'm
actually
doing
on
the
code,
but
that's
kind
of
in
my
opinion,
like
implementation
detail,
depends
on
like
other
languages
may
handle
like
differently,
for
example,
for
go,
I'm
just
using
interface
for
it,
which
is
quite
nice,
so
on
the
code
level,
they're
pretty
much
treated
as
the
same
object.
But
the
point
is
that
you
cannot
do
one
thing
without
the
other
for
like
for
the
fourth
choice,
so
you
always
have
to
wait
right.
F: I know, but what I meant was that on the networking level you would basically have some new kind of structure, something like "block with blobs", and then you usually request that entire thing, so they just come in together. But yeah.
A: Okay, and then I guess the last kind of spec-level issue was around the verification optimizations. George, I know you've been spending some time looking at that and talking to the Supranational team.
G: Okay, so I talked with the Supranational team like two weeks ago or something, and we gave them a list of tasks that we need from them. That's not exactly related to the verification thing, it's more about the KZG library situation, but we gave them a list of functionality we need from the library, and they came back to us with some timelines and so on.

G: Then we discussed it further, scrutinized the stuff we had already sent them, and tried to minimize the work so it can come out as soon as possible, so that ideally client teams have a library to work with as soon as possible.

G: They're supposed to get back to us this week with a new deliverable list, but I haven't heard from them yet, so I don't have any more precise updates.
A: Okay. And then on the actual optimization side: I think the gist of the issue there was, Mofi, you implemented some of the original optimizations that were added to the spec, and they were not actually as performant as expected, and I know there was some back and forth about that in the Discord. But I'm curious, what's the status there?
B: So yeah, basically, and I think George pointed to this, the crux of the performance problem was that there are two major operations we can optimize when computing aggregate proofs. One of them is the modular inverse: we do a lot of them when evaluating the polynomials, and George pointed out that we should be batching the modular inversions, helpfully pointing to a resource on how this could be done. I've been taking a look at this, and this week that'll be my main thing to get back into. Related to that, one other thing I want to bring up is that we also noticed that computing the SSZ roots for the Fiat-Shamir challenges is expensive.
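For reference, a minimal sketch of the batch-inversion idea mentioned above (often called Montgomery's trick): replace n separate field inversions with one inversion plus O(n) multiplications. The modulus is the BLS12-381 scalar field used for blob polynomial evaluation; the Python here is an illustrative sketch, not the client implementation.

```python
# BLS12-381 scalar field modulus.
MODULUS = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def batch_inverse(values: list[int]) -> list[int]:
    """Invert many nonzero field elements using a single modular inversion."""
    # Prefix products: prefix[i] = values[0] * ... * values[i-1] (mod MODULUS)
    prefix = [1]
    for v in values:
        prefix.append(prefix[-1] * v % MODULUS)
    # One inversion of the total product (Fermat's little theorem).
    inv = pow(prefix[-1], MODULUS - 2, MODULUS)
    # Walk backwards, peeling off one inverse per element.
    out = [0] * len(values)
    for i in range(len(values) - 1, -1, -1):
        out[i] = inv * prefix[i] % MODULUS
        inv = inv * values[i] % MODULUS
    return out
```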
G: Yeah, on the SSZ evaluation part: I think we really don't need the entire Merkle-tree machinery that SSZ uses to get security here, so ideally we would just switch that whole hashing step to a straight-up basic hash function instead of computing the Merkle trees. I guess that also means we have to change the spec to be able to do that, and I haven't done it yet, but you're right that this is probably also a useless time drain.
B
Right,
okay,
so
yeah.
I
will
also
look
into
that
and
I
guess
we
can
discuss
the
details
offline
because
before
we
I
think
before
we
should
change
the
spec.
We
should
like
take
a
look
at
a
a
couple,
hash
functions
and
see
what
works
performance
wise
and
without
sacrificing
security,
and
now
we
can
go
ahead.
G: Yeah, that makes sense. My basic intuition would be to just use the underlying hash function that the Merkle tree uses, but instead of doing the whole tree, just hash the value directly. But yeah, we should talk about it offline and we can figure it out. That's a good point; I actually forgot about this.
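To make the suggestion concrete, a tiny sketch contrasting the two ways of deriving the Fiat-Shamir challenge seed: hashing the serialized blob bytes directly with the underlying hash, versus first merkleizing each blob into an SSZ hash_tree_root (the expensive path mentioned above). This is an illustrative comparison under the assumption that SHA-256 is the underlying hash, not a spec change.

```python
import hashlib

def challenge_seed_flat(blobs: list[bytes]) -> bytes:
    """Cheaper option discussed: feed the raw blob bytes straight into one hash."""
    h = hashlib.sha256()
    for blob in blobs:
        h.update(blob)
    return h.digest()

# The current approach instead merkleizes each ~128 KB blob into an SSZ
# hash_tree_root (thousands of 64-byte SHA-256 compressions per blob)
# before hashing, which is the cost being questioned above:
#
#   seed = sha256(b"".join(hash_tree_root(blob) for blob in blobs))
```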
A: Yeah, because that seems like the main blocker. If we can get that in for a next iteration, then we can get some benchmarks, which helps figure out the throughput with regards to blobs, and that might shape the fee market design as well. So yeah, that seems like a really valuable next step.
G: Okay, there's some conference in Stanford or something in two weeks.

G: ...to tell you that if you come, we can do some hands-on together and figure out the performance stuff more precisely.
A: Sweet, okay. I think that was the last big design-level issue in the spec. Is there anything else that people feel is really important to make progress on that we haven't discussed so far?

A: Okay, that's a good sign. In the case of the KZG ceremony side, like I said, they have bi-weekly calls now, so we don't have to rehash all of that, but it seems like they're getting audits started for the ceremony, both for the spec and the implementation.

A: So that's good, and I don't think we'll be blocked on that. And in terms of next steps generally: we had all these tasks about the existing devnets, so if folks can help out with those, that would be valuable, and then that means Mofi can start focusing on the verification optimization.
A
That
seems
like
the
main
thing
we
want
to
like
unblock
now
and
then,
based
on
how
complicated
it
is
to
get
the
fee
market,
we
might
want
to
have
a
second
definite
either
either
we
we
just
like
combine
those
two
in
the
second
devnet
or
maybe
we
like
launch
them
separately.
If
the
fee
market
is
a
big
other
separate
discussion,
we
are
just
getting.
The
optimizations
right
seems
like
the
core
thing.
B: I just wanted to add on to that: in the interest of progress, I'd really appreciate it if we could get more feedback on the fee market PR, so that we can merge it as soon as possible. And I know there are some things that we can probably still hash out a bit, but I'm just thinking from an implementer's point of view.

B: The bulk of the work is getting consensus on where the fees are being tracked, whether it's the system address or in the block header, and that's the most important aspect to me as an implementer. If we can get that merged, then we can iterate on the actual fee mechanism in the fee market later on. Yeah.
F: And just to add to that, by the way: that's one of the nice things about the fee market, at least, that it's purely a theoretical kind of question, so once we have a final decision there, it should be really lightweight either way on the implementation side. Whatever we land on, this should not be one of the challenging parts for the client teams.
D: In the last 4844 meeting, we had brought up that test coverage on both Geth and Prysm needed some work. Is that something that's still in progress or active, or that needs help?
E: Yeah, I can give an update on that. So, a few days after the last meeting, I managed to sync Mofi's awesome repo with our latest develop branch. It actually took me a while because there were so many conflicts, but I finally finished that. And as for our update: this Friday we will cut our v3 release.

E: With the v3 release, there will not be any code changes unless a critical bug is found post-merge. So I think after this Friday our codebase should be in a very stable place, and then, maybe over the weekend, I will resync again just to make sure the code is in the latest state. After I'm finished with that, I will ping you, or anyone else who's interested in contributing, and then we can start adding more unit tests.
D: Okay, awesome, thanks. I had one other comment about some of the perf stuff. I don't have a ton of context, but just a quick suggestion, or question I guess: are folks using flame graphs to measure performance, or how is that process working?
E: We typically do that on the running process, so yeah, we definitely do that. But I would say we'll do unit tests and we'll measure production performance, so we'll run the node and we'll run flame graphs just to see the latency, and look at the traces as well. We do everything, so definitely, anything you can help with, feel free to take it.
A: Okay, I guess the last thing before we wrap up: does it make sense to already schedule another call, or should we wait a bit? The reason for waiting is that about a month from now the merge is scheduled to happen on mainnet, so I feel like we could maybe schedule something today, but there's a world where we just cancel it.

A: That is, if it's really close to the merge. So I don't know, what do people prefer? And is it also worth waiting until we've had time to actually talk with the CL teams, or should we just schedule something optimistically, and if the merge happens on that day, we scrap it?
A: Okay, yeah, I like that. Let's wait until after the next CL call, see if we get time to discuss any of those issues on the CL call, and then we can potentially schedule something after that.

A: Okay, well, yeah, thanks again everyone, and yeah, talk to you all on the Discord. Have a good one. Thanks.