From YouTube: Ethereum Core Devs Meeting #134 [2022-3-18]
A
Good morning, everyone, welcome to AllCoreDevs number 134. A couple of things on the agenda today. I think by far the biggest one is Kiln, kind of going through what happened and making sure we figure out next steps from here. Then we have some updates on some Shanghai proposals: Alex has some updates on beacon chain withdrawals. There's also someone else, I'm sorry, I'm blanking on their name, who left a comment and wanted to discuss something about partial withdrawals. Then Proto had a bunch of updates about EIP-4844, and then, if we still have time at the end, I had a proposal about how we can harmonize the core EIP process and the executable specs that are being worked on. So to kick us off, yeah, Pari, I'll point to you, though maybe somebody else is in a better position, but someone kind of wanted to walk through, high level...
A
...what happened through the Kiln merge, and then we can probably hear from the different client teams, specifically about what went down on their side.
B
Sure, I can give a high-level overview. So we had the Kiln testnet launch: the proof-of-work portion of it launched last week on Wednesday, and the proof-of-stake beacon chain launched last week on Friday. The merge itself happened on Tuesday, a bit earlier than expected.

B
We had to delay the merge once by using the terminal total difficulty override flag, and that was unexpected, but that exercise seemed to have worked perfectly. All the clients respected it and we noticed no weird behavior from anyone.

B
However, once the merge transition actually happened, a few clients did have issues with block proposal and/or syncing. I'll let the individual client teams go into detail later on. Since then the network seems stable. I think there are still one or two clients that have some issues, but they all seem to be minor and should be fixed relatively soon.
A
Got it, thanks. One thing I'm curious about as well is the block explorers. I know there were some issues with the block explorers shortly around the merge, and they were lagging for a while. Can you give us a quick update, if you know what happened there?
B
This wasn't an issue on the previous testnets because the base fee per gas wasn't too high, but apparently on this one the number of transactions was high and the base fee was high, so it just overflowed. I think it's pretty much the same thing that tripped up Prysm that tripped up the explorer, but it's fixed now, and the explorer just takes a long time to sync. So I think we're still lagging by a day or something.
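The call doesn't state which field width the explorer actually used, so the following is a minimal, illustrative sketch of the class of bug described: an on-chain 256-bit quantity like the base fee per gas can exceed a fixed-width database column once activity spikes, even though it fit comfortably on quieter testnets. The column width and spiked value are assumptions.

```python
# Illustrative only: the exact column width the explorer used is not stated
# on the call; a signed 64-bit store is an assumption for demonstration.
INT64_MAX = 2**63 - 1

def store_as_int64(value: int) -> int:
    """Mimic the wrap-around a naive signed 64-bit store would produce."""
    return (value + 2**63) % 2**64 - 2**63

# A quiet-testnet base fee (in wei) fits comfortably and round-trips:
assert store_as_int64(7) == 7

# A hypothetical spiked base fee exceeds the column and wraps to garbage:
spiked = 25_000_000_000_000_000_000  # > 2**63 - 1
assert spiked > INT64_MAX
assert store_as_int64(spiked) != spiked
```

The fix on the explorer side is presumably to store such values in an arbitrary-precision or 256-bit-wide column rather than a machine integer.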
A
Okay, great. I apologize to everyone on the live stream: there was an OBS issue and only my voice was audible. So I'll recap what Pari said in 30 seconds, and we also have the Zoom transcript, which we can add to the notes to get the full version. Basically, Kiln launched under proof of work last Wednesday, and the proof-of-stake beacon chain went live last Friday.
A
There were a couple of issues around the merge, around block production and syncing, and we'll dive into the specific ones from clients right after, but the network was still finalizing and is still doing so today. There were also some issues with the block explorers, some integer overflow issues; these have been fixed, but the block explorers are still lagging in importing all the data.
A
Yes, I think actually we might maybe do that now. Like, I know Marius, Pari, and I and others have talked about shadow forking Goerli next week, basically. So I think the...
D
Yeah, I think, given that Kiln, at least as far as we believe the specs will be, is feature complete, and we've solicited a lot of the community to jump in on it, I'd like to just keep that as, quote, "the public testnet" right now, and continue to ask people to do full, you know, application deploys and things like that, I would say.

D
Certainly we should do another testnet, you know, run another transition, and maybe just call it a devnet and have an end of life. And I'm also an advocate for, you know, shadow forking Goerli and/or Sepolia every few days for the next few months, if we can automate that in a way that just shows us the latest builds continue to work and block production is happening and all that kind of stuff.
A
Right, and yeah, I'll feel more confident in this in about five or ten minutes, but as I understand it right now, none of the issues we found on Kiln are spec issues. They all seem to be client implementation issues. So to me that says, you know, we don't need a different public testnet which runs an updated version of the merge specs.

A
We need, as I understand it, clients to obviously fix the issues that they found, and we obviously want to test that on devnets, but yeah. Unless there's a significant change to the spec, it seems like we can just keep Kiln as is, and obviously have clients issue new releases which will work on the network.
A
And then there's a question in the chat about MEV-Boost. I don't think it's being used yet, no, and we've reached out to the Flashbots team this week to chat with them about that.
A
Cool. I guess, yeah, to dive into the client stuff a bit more, I'll start with, you know, the first one that comes to mind: there was a Prysm-Geth incompatibility around the encoding of values like the base fee. I don't know if anyone from Prysm is on. (I'm here, Terence.) Yes, great, so do you want to give us a quick overview of what happened?
E
Yeah, sounds good, thanks for having me here. So, high-level summary: the execution layer uses big-endian; the consensus layer consistently uses little-endian. We have this custom implementation for protobuf, so whenever we marshal and unmarshal a specific data field, we have to be careful to basically reverse the byte order, and we missed that for the base fee per gas field. Unfortunately we didn't catch it on the previous testnets, because the base fee was quite low and there weren't actually people reporting it. (Only the video is broken?)
E
Yeah, in the chat people are saying the audio is okay, yeah.
E
Good, okay, I'm going to keep going then. So basically, the previous testnets were unable to catch that because the base fee per gas was quite low. So yeah, I was actually quite happy to find this bug. As the corrective action, I posted a postmortem on Twitter; I'm sure most of y'all have seen it, but the high-level summary is that we will be upping our testing infrastructure.
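The mechanics of why this stayed hidden at a low base fee can be sketched in a few lines: any value under 256 fits in a single byte, where big-endian and little-endian minimal encodings are identical, so a missing byte-order reversal between the execution layer (big-endian) and consensus layer (little-endian) only corrupts multi-byte values. This is a minimal illustration, not Prysm's actual code.

```python
def min_bytes(x: int, order: str) -> bytes:
    """Minimal-length byte encoding of a non-negative integer."""
    return x.to_bytes(max(1, (x.bit_length() + 7) // 8), order)

# Values under 256 encode to one byte, so a missed byte-order reversal
# between big-endian (EL) and little-endian (CL) is invisible:
assert min_bytes(7, "big") == min_bytes(7, "little")

# Once base fee per gas exceeds 255, the encodings differ, and reading
# little-endian bytes as big-endian yields a wrong value:
fee = 1000  # 0x03E8
le = min_bytes(fee, "little")          # b"\xe8\x03"
assert int.from_bytes(le, "big") != fee  # reads as 0xE803 == 59395
```

This also explains Danny's point below about making sure shadow-fork activity pushes the base fee above 255.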
E
So right now we are working on a differential fuzzer for all the API endpoints, meaning that we are aiming to be one hundred percent compliant on all the marshaling and unmarshaling. And for our end-to-end testing, we are also adding a transaction generator, so we can make sure to send all the exotic transactions and make sure the base fee per gas does not remain low, and so on. So yeah, that's the high-level summary.
A
Thanks. And, Marius, I understand there was no issue on the Geth side, right? It was just the Geth-Prysm combination, but Geth itself was working? Is that exactly right? (Yes.)
F
And like, we saw this bug because Geth-Prysm didn't create any blocks, and we only noticed it because Geth-Prysm is such a large majority of the network; that's why we only saw it on Geth-Prysm. But it was probably also there on Besu-Prysm and Nethermind-Prysm. It has nothing to do with Geth: it's Prysm, whether Besu-Prysm or Nethermind-Prysm. Yeah.
D
Also, I will say, I was looking at the base fee on Goerli and I don't believe it's always above 255, so we might consider, when we do the shadow fork, making sure that there's sufficient activity on there as well.
F
One thing that I wanted to talk about was that I think we lacked insight a bit into which clients were missing slots and which clients were not proposing blocks or not attesting, and I think we need to up our game there a bit, so that we can spot the bad client, the odd one out, way quicker than we did on Tuesday.
A
Yes, agreed. I know there was an idea, I think it was Pari's, of maybe building devnets with each combination of clients, where each client in turn is the supermajority client, so that those issues show up loudly: they show up and the network stops finalizing.
B
Yeah, exactly. So one variation of the nightly build is just to have all clients together, and the second variation is to have a combination with every client as a supermajority, and we can just run these in parallel nightly. So if a supermajority client fails, it's a lot louder, whereas if it's just a minority client, we might not even notice an issue.
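A sketch of what generating that nightly matrix could look like: one devnet config per client, giving that client more than two-thirds of the validators so that its failure breaks finality visibly. The client list and the 70 percent share are illustrative assumptions, not the actual tooling.

```python
EL_CLIENTS = ["geth", "nethermind", "besu", "erigon"]  # illustrative list

def supermajority_matrix(clients, majority_share=70):
    """One devnet validator split per client, giving it > 2/3 of the stake."""
    configs = []
    for major in clients:
        minors = [c for c in clients if c != major]
        minor_share = (100 - majority_share) // len(minors)
        split = {c: minor_share for c in minors}
        # Give the majority client the remainder so shares sum to 100.
        split[major] = 100 - minor_share * len(minors)
        configs.append(split)
    return configs

matrix = supermajority_matrix(EL_CLIENTS)
assert len(matrix) == len(EL_CLIENTS)
assert all(sum(cfg.values()) == 100 for cfg in matrix)
# Every config has one client above the 2/3 finality threshold:
assert all(max(cfg.values()) > 200 / 3 for cfg in matrix)
```

Each config would then be fed to the devnet genesis tooling; a minority-client bug surfaces in exactly one of the parallel networks, as a finality failure rather than a quiet participation dip.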
B
Yeah, I'll look into that one as well. And one other nice thing: Jim from Attestant, who works on Vouch, updated ethdo, so you can now process a block or an epoch as soon as it comes out, and it will list out all the proposer indices that failed, or missing attestations, or sync committee participation.
A
Nice. I think there were also some issues on Kiln between Besu and Teku, if that's right. Does anyone from either team want to give an update?
G
Yeah, I can speak to that. There was a case that we weren't expecting, where the terminal block was finalized, and our logic to check that we were descending from a valid terminal block wasn't considering that the block being finalized was itself the terminal block. So there were some cases where Besu would basically just sit there at TTD.
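Besu's actual code isn't quoted on the call, but the class of bug described can be sketched in a few lines: an ancestor walk that stops at the finalized checkpoint has to test the terminal-block condition before the stop condition, otherwise it misses the case where the finalized block is the terminal block. All names here are hypothetical.

```python
def descends_from_terminal(block_hash, parent_of, terminal_hash, finalized_hash):
    """Walk ancestors; True iff the terminal block is on the path.

    Checking `terminal_hash` *before* `finalized_hash` handles the case
    from the call, where the finalized block IS the terminal block. A
    version that bailed out on finality first would wrongly return False.
    """
    h = parent_of(block_hash)
    while h is not None:
        if h == terminal_hash:
            return True
        if h == finalized_hash:
            return False  # walked past finality without seeing the terminal block
        h = parent_of(h)
    return False

# Toy chain: genesis <- A <- B <- C, where A is both finalized and terminal.
parents = {"C": "B", "B": "A", "A": "genesis", "genesis": None}
assert descends_from_terminal("C", parents.get, terminal_hash="A", finalized_hash="A")
```

With the checks in the wrong order, the node never accepts a valid chain and sits at TTD, which matches the symptom reported.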
G
So we've got a PR for that, and we also had some issues with backwards sync, for which we have a matching backwards-sync PR that is merging today. So we should have a 22.1.3 snapshot, which is what we are going to recommend using for Kiln. But yeah, that's the issue we were having and encountered, and there were a couple of reports of Besu basically sitting at TTD for that reason.
A
So it was basically a Besu issue; there was nothing on the Teku side, right? It was just a coincidence that Teku was on the other side. (Correct.) Okay, got it. And then I think Nethermind also had an issue or two. Is that correct?
H
Yeah, so the issue for Nethermind was that some nodes, after the transition, required a restart to make it work. After the restart they are working fine. We are still investigating it, and to make sure that it is fixed we need to experiment a bit with the transition.
A
And I know Erigon as well: I think you knew that Erigon would probably have some issues during the transition, but still ran it anyway. Do you want to give a quick update on, yeah, what you learned from this and where you're at right now?
I
So there was one issue in Erigon, actually related to endianness as well, that was fixed: we were incorrectly sending an invalid block hash for a valid block. But I think Teku was incorrectly ignoring our incorrect invalid-block-hash response and kept resending the same block, even though we had sent an invalid block hash.
I
Also, because Erigon was quite late to the party, I personally would like more time. We are still refactoring our sync code, and Kiln gave us quite a few issues; we discovered them during Kiln, and though they are maybe not theoretically blocking, I would still suggest not rushing the merge.
I
Take it slowly, spend more time on testing, more transitions, and things like that. And I personally would like to spend more time understanding how sync works on the consensus layer side: the difference between optimistic and non-optimistic sync, performance implications.
I
And think about the tests, because I'm worried about what we were discussing in the chat just recently: the performance implications of the consensus layer, when you have to sync something, sending every block to the execution layer. Is it okay? Is it not okay? Maybe it is okay, but I personally would like to spend some more time thinking about it and also testing it.
A
Yeah, thanks for sharing. Are there other client teams? I think those were all the ones that had issues on Kiln specifically, but did I miss anyone?
A
Okay, I guess not. So, in terms of next steps from here: obviously, I think it's clear to everyone that we need more testing infrastructure, and things like running shadow forks of Goerli and rerunning through the transition a couple more times. In terms of, you know, rushing the merge or not, and timelines, we probably have another month or so before...
A
...we need to make a call about whether we want to move to testnets, or whether, you know, we're not ready for that. So I feel like the next step is probably to spend obviously the next two weeks, and then possibly the next four weeks, improving the testing infrastructure.
A
Finding these issues and, you know, growing confidence in our implementations. Then we can probably make a call about whether we feel comfortable moving this through the testnets or not, and if not, then I think at that point we might have to discuss potentially pushing back the difficulty bomb. But yeah, I do think we probably still have about one month until we have to make that call, at a high level.
A
I think the bomb is going to start being, you know, noticeable around early-to-mid June. In July we'd probably have something like 14- to 15-second block times, which is high but not unmanageable, and by late July, early August, assuming things stay the same, you're probably looking at 17 seconds or more, and that starts to be a much greater delay, just considering the time it takes to generally go from testnets to mainnet, yeah.
F
So one thing that I'm thinking about is: I think we uncover a lot of bugs, but we don't really notice them, we don't really recognize them. Like, there was this sync aggregate attestation thingy, whatever...
F
...where that was sitting at like 60 percent or something, and then after two weeks someone decided to look into it, and it turned out that Prysm was just not sending something. And I think that is the bigger issue: we do trigger a lot of bugs, and I'm pretty sure that we had triggered the Prysm base fee encoding bug at least five times already, but we never recognized it as such.
F
So I think we should spend more time building infrastructure to recognize these bugs, and I would really urge all the client teams: if they see something funny or something interesting on a testnet or wherever, they should reach out to the clients that they think are affected and really look into it, instead of just saying, "okay, that was pretty funny, but if I restart my client then it goes away."
D
I think we could probably come up with something like ten or so key indicators that a testnet is healthy, right, and it's not just finality. Finality is good; that's what you hope to always see on mainnet, even if there's some sort of issue with a client here or there. But there are other things. Like, no one's...
D
...looking at the sync aggregate thing, because no one's really running light clients, so it doesn't really matter and just kind of falls by the wayside, but that's an indicator that something's not right. So there's that; there's the number of blocks per epoch, which should be 32 almost all the time.
D
There's finality; there's the percentage of attestations actually making it on chain, et cetera. So I would say most of our monitoring, and most of our kind of integration testing, should be looking at a number of these things rather than just finality. I think we rely a bit too much on the two-thirds metric there, which can obviously let a lot of errors through.
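The "ten or so key indicators" idea can be sketched as a simple health check that collects warnings beyond the bare finality bit. The specific thresholds here are illustrative assumptions, not agreed values from the call.

```python
def epoch_health(blocks_in_epoch, attestation_pct, sync_participation_pct,
                 finalizing, min_attestation=95.0, min_sync=90.0):
    """Collect per-epoch warnings beyond the bare 'is it finalizing?' check."""
    issues = []
    if not finalizing:
        issues.append("not finalizing")
    if blocks_in_epoch < 32:  # 32 slots per epoch; nearly all should have a block
        issues.append(f"missed slots: {32 - blocks_in_epoch}")
    if attestation_pct < min_attestation:
        issues.append(f"attestation participation low: {attestation_pct}%")
    if sync_participation_pct < min_sync:
        issues.append(f"sync committee participation low: {sync_participation_pct}%")
    return issues

# A chain can finalize (just over 2/3 participation) while still masking
# client bugs that the other indicators would expose:
issues = epoch_health(blocks_in_epoch=29, attestation_pct=71.0,
                      sync_participation_pct=60.0, finalizing=True)
assert len(issues) == 3  # finality alone would have reported nothing
```

Running something like this per epoch, per client combination, is one way to make "the odd client out" visible quickly, as Marius asked for earlier.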
A
Yes, agreed. And I know that, I don't think he's on the call, but Frederik from the EF has been trying to gather people from different teams to coordinate all of that. So yeah, hopefully we can just have more people from each team, but also be a bit more proactive in sharing the stuff that's being worked on, so that everybody's aware of what everybody else is working on.
A
Thanks. And maybe just to touch back on one thing that Andrew brought up, the thing around syncing and bulk-sending blocks to the execution layer: I'm curious generally how people feel about that. Is that something that can realistically be changed before the merge, or is it something that we might want to improve shortly after?
D
It's something that can already be leveraged: the execution layer can already use the information and do whatever it wants here. The consensus layer, if it's syncing and sending you bulk blocks, means that it is not at the head, and it literally doesn't have a good piece of information for you to decide how to...
D
You can wait until the consensus layer gets to the head and then do whatever sync techniques you like at that point, but there's kind of a bit of a chicken-and-egg problem here. You can lockstep-sync with them, or you can wait until they get to the head and then do whatever sync method you want, and there's sufficient information to be able to make either of those decisions.
D
...in some of the channels and stuff, if people want to discuss different design decisions.
J
I would like to quickly discuss the safe, unsafe, and finalized tags. So yeah, there is a PR opened on the execution-apis repo that just adds a finalized block tag to the Ethereum JSON-RPC, and on the last call we roughly decided not to have safe, or I was a bit uncertain about that. One of the things that we may do for the safe block tag is to use the justified block for it, as a stopgap until we get the safe head rule implemented in its full form, per the proposal by [name unclear] and Aditya.
J
Using the justified block is pretty cheap from the CL standpoint: it's just a matter of sending the justified block hash to the EL. It also brings the safe block closer to the head than the finalized one, and it's a truly safe block: it's not going to be reorged, assuming that there is an honest majority and synchronicity. But yeah, it's still not as close to the head as safe could be, as we previously discussed. So that's just the proposal, and I'm curious what people think.
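For concreteness, here is the shape of an `eth_getBlockByNumber` request using the proposed tag. This is a sketch of the payload only; whether a given client accepts `"safe"`, and what it resolves to, depends on the execution-apis PR under discussion (initially the justified block, per the stopgap above).

```python
import json

def get_block_by_number(tag, full_transactions=False, request_id=1):
    """Build an eth_getBlockByNumber JSON-RPC payload for a block tag.

    `tag` can be a hex block number or a tag such as "latest",
    "finalized", or, under this proposal, "safe".
    """
    return {
        "jsonrpc": "2.0",
        "method": "eth_getBlockByNumber",
        "params": [tag, full_transactions],
        "id": request_id,
    }

req = get_block_by_number("safe")
assert req["params"][0] == "safe"
assert json.loads(json.dumps(req)) == req  # serializes cleanly for an HTTP POST
```

A dapp that today polls `"latest"` and counts confirmations could instead poll `"safe"` for a reorg-resistant view, or `"finalized"` for the strongest guarantee.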
D
Yeah, I think it's nice too, because it gives the exchanges and such a chance to begin using this, and the algorithm can improve. Are you here? Can you chime in on how you feel about justified being the safe block currently?
L
I mean, there's definitely no downside to using that for now, I think. Yeah, as an update on that: unfortunately, it turned out that it is much harder than we thought to define a safe head with LMD GHOST. I'm still optimistic that we can do it, but yeah, it's definitely a good idea to have this intermediate solution.
D
It might be. If we had safe, justified, and finalized, where safe and justified are just kind of equivalent at this point, that does give an additional granularity of progressive confirmation, in a sense: you know, not finalized, but the assumptions behind this changing are much stronger than for just safe. But you're also kind of just giving users more choice, which may or may not be good.
C
I feel like if there are use cases, then exposing it seems like the right thing to do, and we can always just make it clear in the docs: hey, you should probably just use safe; if you don't know what you're doing, use safe; but we also offer justified and finalized or whatever. I feel like we can solve the problem of too many choices with good documentation.
C
I'm open to other terms instead of "safe" if people have them; I'm also fine with "safe".
J
Okay, so: finalized, justified, safe, unsafe, latest, with safe mapped to justified for now, and latest still unsafe.
C
We can finally deprecate latest. But I do think there's value in making it very clear that you should not be using this unless you really know what you're doing.
A
Okay, and to recap: so, finalized, and then justified, safe, and that's it, yeah?
J
Yeah. And yeah, it will probably sometimes be difficult to explain the difference between finalized and justified to end users; I'm not sure if it's going to be that useful, but anyway, having granularity is always good.
J
So, okay, let's have this. Also, a minor question: what should the EL respond with once it gets a finalized request before the merge? And likewise justified and safe before the merge. I think it should respond with an error, which will allow us to avoid any bugs or unexpected things happening if you request finalized before the merge. I can't think of anything meaningful to return other than an error, so yeah, I think an error is the preferable option, unless someone has another opinion.
D
I would just worry that someone has some sort of setup where they're trying to switch from confirmations to finalized, and then all of a sudden they go from thinking something ten blocks ago was effectively finalized to nothing in the chain ever being finalized, and I worry about edge cases there. I think an error is safer.
C
I'm thinking of developers who want to be merge-ready: you want to deploy your app before the merge happens, and you want your app to smoothly transition from pre-merge behavior to post-merge behavior with regards to finalized, justified, safe. I'm trying to think: do we have a good story to tell them, a good narrative for how they should build their apps? What should they do? You know, should they be checking difficulty equal to zero?
J
Yeah, I agree with that, but difficulty zero would mean that the transition has just started and is in progress; I mean, the first block with difficulty zero. The merge is considered finished once this first transition block is finalized.
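The decision rule being described can be written down in a few lines: zero difficulty marks the transition, and the merge is complete once the transition block is finalized. This is a sketch of the logic only; the inputs (head difficulty, whether the transition block is finalized) would come from the node, and how an app observes the second one is exactly the open question in this discussion.

```python
def merge_status(head_difficulty, transition_block_finalized):
    """Classify chain state per the rule described on the call."""
    if head_difficulty != 0:
        return "pre-merge"            # still proof of work
    if not transition_block_finalized:
        return "transition in progress"  # difficulty is zero, not yet finalized
    return "merge complete"

assert merge_status(12_000_000, False) == "pre-merge"
assert merge_status(0, False) == "transition in progress"
assert merge_status(0, True) == "merge complete"
```

An app would switch from confirmation counting to the `finalized` tag only in the "merge complete" state.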
C
Sorry, I had myself muted, and I had some closing comments. I feel like we should give users a clearer way, I want to use the word "safe" here, a reasonable way to find out when it's a reasonable time to switch to using finalized, one that doesn't involve them recording things that are erroring and then having to set up error handling that alters their code flow. Maybe we can give them a simple JSON-RPC method or something they can query that says "is now the time?"

C
"Is the merge fully complete?" or "is finalized available yet?" or something along those lines, just so that app developers don't have to put in these horrible hacks just to build good apps around this.
D
Yeah, there might be a good blog post there too, yeah.
J
Right. If we don't provide a graceful method call like this, one that will return that the merge has happened, they will have to rely on errors before the merge, and yeah, the absence of errors on getting the finalized block, as the signal that the merge has happened. I mean, from JSON-RPC alone it will not be possible to... well, you can look at the block header, right?
C
The difficulty will be zero during the transition, so you'll know you're in the transition, but you won't know the merge is complete, and you don't want to switch over the strategy in your app until after the merge is complete. So right now there's no way, other than just trying things and getting errors, then catching the error and changing your behavior based on getting an error, for an app developer to build something that changes behavior once the merge is complete.
F
Yeah, we can implement this; it's like a five-line change. I just don't really like the notion of having a new JSON-RPC call that is only important for, I don't know, maybe three weeks, and then, once we've really made use of it, no one needs it anymore.
D
Yeah, I'd prefer writing some pseudocode to show how people can decide this from the JSON-RPC, and then libraries can write a function if they want, you know, web3.js or whatever.
A
Okay, next up... I guess, yeah, before we move to the next thing: anything else on the merge itself, or Kiln, or testing?
A
Okay, next up: Alex has an update about beacon chain withdrawals, and we also have someone else, I'm sorry, I'm searching through the Zoom screens but there are literally too many people, someone from the Lido team who had a proposal for partial withdrawals as well. So maybe, Alex, if you want to go first, give an update on what you've been working on, and then we can go from there.
O
Sure, yeah. So last time we talked about this, essentially, I think there was a lot of demand for something to organize all the different threads, so that turned into a meta spec. I'm just going to share my screen quickly and we'll run through it.
O
Right. Can you guys see this? (Yes, we can.) Okay, so yeah, I'm not going to go through this in detail; if you want to read it, it's here. But essentially it just has some prose on how the withdrawals flow will go, and then links to specifications. At a high level, the consensus layer essentially schedules when withdrawals should happen and puts them into this queue, and then the consensus layer is also in charge of dequeuing withdrawals into execution blocks.
O
There's a specification for how that works on the consensus layer here, and there's a PR for the modifications to the engine API, because essentially, again, the consensus layer dequeues these withdrawals and they are shoved through the engine API to the execution layer. And then what I want to talk about today is essentially two different options here; there are two different EIPs. One of them we discussed last time, in terms of having a new transaction type to represent the withdrawal; the other option is essentially some sort of system-level operation.
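Whichever of the two routes is chosen, the payload being moved across the engine API is the same small record. As a sketch, the fields below follow the shape EIP-4895 eventually settled on (index, validator index, recipient address, amount in Gwei); the draft under discussion on this call may have differed, so treat the exact field set as an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Withdrawal:
    """System-level withdrawal operation, roughly per the EIP-4895 shape.

    Unlike a transaction, it carries no signature and no gas, and its
    balance credit cannot fail; it is produced by the consensus layer,
    not by a user.
    """
    index: int            # monotonically increasing, network-wide
    validator_index: int  # beacon chain validator being withdrawn from
    address: bytes        # 20-byte execution-layer recipient
    amount: int           # value in Gwei

w = Withdrawal(index=0, validator_index=1234,
               address=bytes(20), amount=32_000_000_000)
assert len(w.address) == 20
```

The debate below is then about where this record lives in the block (a new EIP-2718 transaction type vs. a new "operations" list) and whether it emits logs.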
O
That's far more involved, but essentially it's saying: okay, rather than have a new EIP-2718-style transaction type, we have this new type of thing called an operation, and that's where the withdrawals live. And the reason we want to do that is basically to firewall withdrawals off from user-level transactions, and there are probably some safety benefits there. But here's the catch: one thing we'd really like to have is some sort of logging.
O
So when a withdrawal happens, it'd be really nice if there were some way to just watch the execution layer and know that the withdrawal has actually occurred. And the point here is that if we go with EIP-4863, the previous EIP that we talked about, where it's a new transaction type, then we can reuse all the existing events infrastructure, logging, and EVM tooling; that's great.
O
If we go this other route, EIP-4895, which is maybe in some sense cleaner, more elegant, we basically have to recreate all of the receipts infrastructure, and that is a lot of work. So I essentially want input on this call: does anyone have any preferences on either route? Have you had time to look at these EIPs? Do you have any feedback?
O
So it doesn't have to be logging; I think it is pretty nice UX, but yeah, there are probably other ways to figure out that your withdrawal was processed as a validator. It's really just a question of what kind of facilities we want to provide validators.
A
...comment on that. Sorry. Oh, I was going to say: what's the argument against having some way to log this?
O
So yeah, I can just click through to give you a sketch. Is that loading? Okay, sorry, my connection seems very weird today. (That's okay.) Anyway, basically it's just adding a whole new field to the block, and because of that we can't necessarily directly reuse...
J
Right, right. And the spec says that a withdrawal operation must never fail; it's unconditional. So you may basically use these withdrawal operations themselves as a kind of log.
R
First of all, I just wanted to say to Mikhail: I think the spec does not say that it must unconditionally succeed. I think originally, with this text, if you had defined your eth1 contract recipient as something which would never ever accept anything, then you would never be able to withdraw, because no matter how much gas you specified, it could still fail. But I'm kind of... so...
R
Right, so both of these options are, they're both gasless, the kind where we just do it? These are both push-based? Good, okay.
D
I think that the system approach is much cleaner, and it's a very important operation, so we want to do it in a reliable fashion. The transaction option is just abusing the notion of a transaction, because it's not really a transaction: it's just a balance update, and it's not even a balance transfer.
I
If we want some logs on the EVM side, things like that, then it should be crafted specifically for this operation, because, yeah, I wouldn't have used the notion of a transaction for this.
O
Which is fine, but then I'd probably suggest going with this option-two route: we build out a new operation thing, but then just drop the logs, because I think it's going to be way too much to have a whole new receipts trie, plus tooling, testing, etc., to cover all that.
I
But then again, do we need access to this from the EVM side? Because if we don't, the observability will still be there: you can observe it by looking at the header and the withdrawals. As was mentioned, they cannot fail, right? So if you see the withdrawals in the header, then you see all of them. The question is whether you need this observability from the EVM side, as an EVM opcode.
O
Right. And I think part of where this came from is that we looked at the existing eth1 withdrawal credentials that had been deployed, and the only real thing anyone was doing was logging. But yeah, like Danny said, we've talked, I think, to some of the bigger...
D
...players, and it's been fun: Rocket Pool is really the only one doing the logging, and they said they hadn't actually expected code execution, but just kind of put that in there as best coding practice. So, you know, they're not even necessarily expecting the block...
R
So, as for my five cents, and I'm sorry, I'm not really up to speed on all the details, I agree that the transaction approach is pretty much abusing the notion of a transaction. However, the block body is now a list of uncles, which this spec says is going to be empty after the merge, plus a list of transactions, and there is a lot of code out there that parses bodies based on this, and the eth protocol, which has the capacity to request these pieces of information from another peer.
C
So the two options for where to put these system transactions in the block, I think, right now, are: just append to the end, or take over uncles. Previously Peter had argued pretty strongly against taking over uncles, and he strongly prefers appending new things to the end and just accepting the cost of the extra bytes.
D
A fixed amount per slot, so we could decide on a number. You know, it affects some of the UX here, but the exit queue is already bound to approximately four-ish per slot, so after you clear out, you know, maybe some large amount of withdrawals at the beginning.
D
And the consensus layer will have a bound on how much it's putting in here, because there will be, you know, a maximum cost of this operation on the system, and that number can be tuned accordingly.
A
It seems, and please correct me if I'm wrong here, it seems like we have rough consensus around the system-level operations approach, and it's a question of how rather than if. Does that generally make sense? Yeah, and no logging, right? And no logging. And I think, Alex, I don't know, I think the Etherscan people had said they were fine with either option as well.
D
So yeah, I think the one use case you don't really get is: I'm a validator, I turn on my node, I know my withdrawal index, and I want to ask if it happened or not, and where it happened, efficiently. Otherwise, yeah, you can scan, I mean, and they're sequential, so you can do a binary search to find where your receipt happened. Or actually, yeah.
D
You can actually know if your receipt happened very quickly, because you can look at the latest withdrawal, and if it's greater than your receipt index, then it has happened. So there are things you can do without logs to handle these cases you care about, outside of the EVM. Okay.
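Because withdrawal indices are sequential, both lookups described here are cheap. A minimal sketch, where the `blocks` list and its `max_withdrawal_index` field are hypothetical stand-ins for chain data:

```python
# Sketch of the two lookups enabled by sequential withdrawal indices.

def has_happened(latest_withdrawal_index: int, my_index: int) -> bool:
    # If the chain's latest withdrawal index is at or past ours, our
    # withdrawal has been processed (indices are strictly sequential).
    return latest_withdrawal_index >= my_index

def find_block(blocks: list, my_index: int):
    """Binary search for the first block whose withdrawals reach `my_index`."""
    lo, hi = 0, len(blocks) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if blocks[mid]["max_withdrawal_index"] < my_index:
            lo = mid + 1
        else:
            hi = mid - 1
    return blocks[lo] if lo < len(blocks) else None

# Toy chain: block n's highest withdrawal index is 4 * n.
blocks = [{"number": n, "max_withdrawal_index": 4 * n} for n in range(100)]
assert has_happened(396, my_index=100)
assert find_block(blocks, 100)["number"] == 25
```

The binary search is O(log n) over block history, which is the "scan" alternative to logs that the speaker is pointing at.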
Q
Yeah, just briefly wanted to ask, and maybe there's an answer: the...
D
Yeah, it's to differentiate. When we were going with the transaction method, and any sort of logging and that kind of stuff, it allows you to differentiate. It also would allow you to do, you know, a search like I just talked about more easily, because amount and address are not necessarily unique, given partial withdrawals.
A
I guess, okay, in terms of next steps: does it make sense to move the system-level version of 4895 to Considered for Inclusion, and have people, obviously, keep looking into that and figuring out the quirks around it, but just so, yeah, we can make sure we're all focused on the same thing?
D
Nope. Okay, on the partial withdrawals, I just wanted to say, yeah, I have this tracking issue on the consensus layer. There are three key features here: one is to fully withdraw exited validators; the other is to change credentials from BLS credentials to execution-layer credentials, such that you can perform withdrawals; and the third is a partial withdrawal option, which I'm working on in a PR right now.
D
I think that, from the perspective of the execution layer, all of them just look like withdrawals being dequeued into the execution layer, and so that complexity doesn't really matter. But from the perspective of validators and features, I think it's a pretty critical feature, to not put crazy pressure on validators exiting, for sure. So it is, and has been, kind of in the consensus-layer roadmap.
A
Right, and rtm, do you want to take maybe a minute to kind of explain what your proposal was?
M
Yeah, it's kind of obsolete now; the new push-based proposal is definitely preferable from the point of view of liquid staking protocols as well. Yeah, I'd just like to note that partial withdrawals are crucial for us, yeah.
D
Yeah, and I'm happy to... I haven't had a chance to read the proposal, but I do think that is a very nice feature, and it actually protects against a couple of weird withholding attacks that we should talk about.
D
Four is the number of exits per slot currently, so the number of withdrawals per slot would probably be on that same order. It depends on the partial withdrawal scheme, but I would say 4 to 16 is the realm of what we'd do here.
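As a rough sanity check on that 4-to-16 range (assuming mainnet's 12-second slots; the per-slot bound itself is still an open design parameter):

```python
# Back-of-the-envelope withdrawal throughput for a 4-16 per-slot bound.
SECONDS_PER_SLOT = 12
SLOTS_PER_DAY = 24 * 60 * 60 // SECONDS_PER_SLOT  # 7200 slots per day

for per_slot in (4, 16):
    per_day = per_slot * SLOTS_PER_DAY
    print(f"{per_slot} withdrawals/slot -> {per_day} withdrawals/day")
# 4/slot  -> 28800/day
# 16/slot -> 115200/day
```

So even the low end of the range clears tens of thousands of withdrawals per day once any initial backlog drains.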
C
If it's only four, or even 16, maybe we can put them into the header. But if you foresee that in the future it will be more, maybe to be future-proof, then yeah, it would be better for them to be in the body.
M
I could also say that, from the point of view of staking protocols, there might be a slight desire to be able to distinguish partial and full withdrawals, like maybe different addresses, but it seems to be too difficult to implement and not very crucial for us. Just to note, right.
D
Yeah, I mean, with the beacon root opcode, all of that becomes possible. It's a matter of that existing or not.
A
Okay, yeah, just to move on, because we only have 20 minutes and at least one big topic left. Proto, you had an update on EIP-4844 (never pronounced that right) for the shard blob transactions.
S
Exactly. Hello, everyone, I'll give very quick updates. So, since the last AllCoreDevs call, we have worked on consensus specs and execution APIs. We have built a meta-spec to link everything together; I think that relates to the proposal Tim is going to share, where we're trying to find a structure for this cross-layer EIP process.
R
Nice. So, do you have any TL;DR on, say, someone sends me a batch of 1000 blob transactions, every single one of them isn't valid, but they're constructed to be as hard as possible to verify: what ballpark are we talking about?
S
You want to penalize peers that are giving you this bad information in the first place, because this is provably bad, so it's objective, and it doesn't affect consensus; this is in the transaction pool, the step before that.
R
We do kind of have some IP limits, but yeah, we don't spend a lot of time scoring peers. We kick out a peer if it does something bad, but note that these are free.
R
I
mean
I
can
turn
up
a
thousand
node
ids
connect
them
to
a
guy
and
send
them
a
thousand
transactions
from
different
identities
for
free.
Well,
there
is
some
processing
power
because
we
need
to
generate
these
transactions
at
some
point,
but
then
I
can
reuse
them
and
they've
I
mean
the
people
that
I
sell
them
to
will
not
remember
them
indefinitely.
S
Look, I mean, no different from current transactions, so, like...
Q
Just, only because, of course, nodes would support having that locally in the mempool, so, like, they could just run their own staking nodes that they operate, or they could create a separate network for that. Basically, it's fine if we only add that support for mempools later. Of course it's not ideal, but this is not necessarily a constraint on the timeline.
A
Okay, anyone else have comments or thoughts on 4844?
D
I do think that benchmarks, or stress tests, from a number of different places are probably pretty important. You know, just the consensus layer decrypting one-to-two-megabyte blocks and then passing them to the execution layer; you're going from 20-kilobyte blocks to 90-kilobyte blocks as we are with the merge. I don't expect things to burst at the seams, but I do...
D
I do think that it's not unlikely that, once we get to one megabyte, little things we didn't expect start to operate in different ways, unexpected in bad ways. I don't think any of this is intractable or unsolvable, but I think it's good to investigate.
A
Okay, last on the agenda, I had something. So we discussed this, I think, on the last call, if not on the Discord after. Basically, the core EIP process is kind of reaching its limits with the merge, where we have a completely different process on the beacon chain than on mainnet, and we're starting to have proposals which clearly span across both. The two things we talked about today are good examples of that.
A
So it's quite hard to reason about what the entire spec for something should be, and how the different parts all work together. And, in parallel, there are folks working on an executable spec for the execution layer, which aims, over time, to complement or replace the Yellow Paper as a canonical spec for Ethereum. So I had a proposal, yeah, that I put together, about how we could harmonize all of this; I just shared it in the chat. At a very high level:
A
The idea is that we would keep core EIPs as the way to describe changes: provide the motivation and the rationale, list security considerations, and also just have an EIP number that's easy to reference within the community. We'd use these for both consensus-layer and execution-layer changes, but then, over time, basically move the implementation sections to the executable specifications rather than having them live directly in the EIP itself. And so, you know, the benefit we get there
A
Is
that
a
is
like
harmonized
across
the
beacon
chain
and
the
execution
layer
b?
You
can
link
both.
So
if
you
have
any
ip
like
you
can
chain
withdrawals,
you
can
just
say:
hey:
here's,
the
change
to
the
execution,
spec,
here's,
the
change
to
consensus,
specs
and
maybe
even
the
api
repositories
and
then
see.
There's
always
been
like
this
big
concern
with
like
that.
A
We
don't
have
a
lot
of
eip
editors,
so
we
want
it
to
be
easy
for
them
to
actually
review
eips
and
one
of
the
one
of
the
things
that's
actually
quite
hard
for
them
to
review
is
when
people
put
links
in
the
ips,
because
there's
a
bunch
of
dead
links
over
time.
It's
hard
to
assess
the
quality
so
by
having
links
out
to
just
the
different
specs
repo,
and
you
can
have
a
pretty
easy
to
enforce
rule.
A
That's
like
you
only
allow
links
to
you,
know
these
two
or
three
repositories,
and
and
and
just
blocking
the
ip
if
it
has
a
link
elsewhere
and
then,
if
you
know
the
eip
author
wants
to
add
a
whole
bunch
of
links
as
part
of
their
pr
to
the
to
the
specs
repo,
then
they
can
do
that.
But
it's
not
it's
not
like
blocked
in
the
eip
process.
A
I know, Greg, you had some comments about this. Is Greg still on the call?
N
Yes, yes. So yeah, Greg, you had some comments about this; I'll let you share them. I also put together an Eth Magicians link for people to discuss. Yeah, great.
T
A lot of this we'll just need to discuss as editors. We've only got about seven minutes left, so I don't think we can dig very deep. There are some good ideas there, but I think it's a lot more intrusion on the EIP process than we want to see, and in some ways it's making it harder.
T
So I actually don't expect that a core EIP could be a totally complete and accurate reference. When it's done, the network itself is ground truth, and so having one client that we can point to and say "we intend for that to be the actual reference" is great, but whether we try to pull that back into the EIP as a diff against a particular implementation doesn't really seem to help matters.
D
Enough. And Danny, you also have your hand up. Yeah, so just a quick follow-up on that, and then I have a quick comment. On the consensus layer, it isn't a full client; it is actually, you know, an implementation of the core state-transition logic, in very non-optimized ways, in ways that just expose what the logic should be, rather than the sophistication of the logic as it will be in a client. And so it can't run on mainnet; it also doesn't have networking interfaces and other things.
D
And then we can build test vectors off of it. So there's a spectrum of what you can do with it, and I just wanted to say that out loud. But I will say, I don't know if this helps our EIP editor problem; I mean, it just shifts the burden to a different, highly specialized group.
D
You
know
on
the
consensus
layer,
there's
a
handful
of
people
that
have
the
ability
to
review
these
types
of
specs
and
provide
rich
dynamic
feedback,
and
sometimes
there
are
pr's
that
are
open
for
a
very
long
time
because
it's
hard
to
take
the
time
to
to
dig
into
it.
So
I
I
it
may
be
it's
useful
in
getting
more
people
on
the
table,
but
I
don't
know
if
it
like
solves
the
eip
editor
problem.
I
do
think
it
maybe
solves
other
types
of
problems,
though.
T
So it's just not clear that this language is going to be the best way to actually say "this is what I want to do". To ask someone with an idea to improve the protocol, "oh, but first you have to figure out how to say it in this specialized language", that may or may not be the best way to express your idea.
K
I think that's a fair criticism. I mean, it's kind of a trade-off, right: a lot of things will be easier to express without having to describe the current state. That's a problem that comes up in EIPs a lot today: to say how you're changing something, you first have to define how that something works, and this codified process will improve that part of it. But you're right.
T
Yeah, I know just enough Python to hack a script together, or to read fairly simple descriptions of structures and stuff, but...
D
It's Pythonic, and it does not compose things in weird ways; it doesn't use complex, not even complex, just Pythonic type constructions, such that, you know, you have for-loops, you have variable assignments, and you have very simple data structures. Again, I'm not trying to make a claim one way or the other on the best ideas.
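For readers unfamiliar with the consensus pyspec, the style being described looks roughly like this: plain for-loops, assignments, and simple container types. This fragment is a simplified illustration of that style, not actual spec code:

```python
# Illustrative pyspec-style fragment: deliberately plain Python, with
# for-loops, assignments, and simple data structures, no clever composition.
from dataclasses import dataclass

@dataclass
class Validator:
    exit_epoch: int
    effective_balance: int

def get_exited_validators(validators: list, epoch: int) -> list:
    # Return the indices of validators whose exit epoch has passed.
    exited = []
    for index, validator in enumerate(validators):
        if validator.exit_epoch <= epoch:
            exited.append(index)
    return exited

vals = [Validator(exit_epoch=5, effective_balance=32),
        Validator(exit_epoch=100, effective_balance=32)]
print(get_exited_validators(vals, epoch=10))  # [0]
```

The deliberate plainness is the point: the code exposes what the logic should be, not how an optimized client would implement it.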
T
Yeah,
I
wouldn't
see
the
the
eip
process
close
out
and
then
this
this.
This
executable
spec
is
its
own
thing
and
I
think
it's
there'll
have
to
be
some
way
of
vectoring
over,
but
trying
to
do
both
in
one
process.
I
think
will
only
make
it
harder.
T
I mean, when I put in code, for instance, that code is often going to be Go, not Python, because I might have implemented the algorithm in Geth, and then I can put in code I've actually tested. And translating the code to Python, isn't that hard for the core group?
R
Yeah, well, initially I was going to say the same thing that I said originally, that it doesn't need to be a full-blown, you know, networked machine. But then I want to say that, you know, I've spent quite a lot of time reviewing EIPs and also implementing EIPs, and...
R
Which shows that the specification was underspecified; it was too vague. So I think it's good if we get closer to how it looks on the eth2 side, where it is code, where the author who put it down was actually forced to figure all these things out, and anyone who actually implements it can basically test his implementation against the reference implementation, or just transpile it into his language.
R
So
I
think
that
lowers
the
lowers.
The
amount
of
work
needed
to
be
done
at
five
different
client
implementers,
since
the
work
of
you
know
putting
it
down
in
the
code
is
only
done
once
in
the
like
translate
from
english
to
code.
It's
only
done
once
by
the
author
or
someone
else,
so
I
think
it's
good
to
move
in
that
direction,
but
there's
a
spectrum
yeah.
T
I'm just saying, if I have an EIP and I've completely implemented it in Go or C++ in a client, and then it's like, "well, that's all very well and good, but it's not an acceptable EIP until you translate it to Python", and it's like, but it's expressed here in Go, and the Go works. Do you want me to translate it to something else that I can't actually run and test?
A
But I guess what Martin is saying, if I understand correctly, is that that's not the case for, like, the median EIP; the median EIP is basically underspecified in the current format. And so this spec kind of forces you to at least fully specify it, or realize that you can't and that you need to do some more work, basically, yeah.
T
So the question gets more at the tail end, when it's actually working: how best to express that. Currently, slowly, the EIPs make it into the Yellow Paper, and the Yellow Paper is the canonical spec. We can change that, but trying to get the authors to write canonically up front is going to be hard; you're going to need an expert to work with the author to do that, and I think it's just going to be even harder to find that person.
A
We're already five minutes over time, so we can continue this on Eth Magicians. I shared the link; it's in the agenda.
A
Pasted all your comments from the agenda, indeed. Anyone have anything else they wanted to share before we head out? Yeah.
D
To echo what lightclient said, you can run the full Python implementation for the consensus-layer side, and, I think very importantly, the tests that you write for that implementation, for that Python spec, actually become the consensus tests. And so, when we have people build new features, they also write tests, and those actually become the reference tests. Whereas, I think, when we have many different clients implementing EIPs, we don't always capture all of the edge cases in reference tests, even when we're kind of cross-testing our implementations and stuff.
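The "tests become the reference tests" workflow can be sketched like this: a test written against the Python spec exercises a spec function directly, and the recorded pre/post states are what get exported as consensus test vectors for client teams. Everything here, the `process_slot` stand-in and the state shape, is a simplified assumption for illustration:

```python
# Sketch: a pyspec-style test whose pre/post states double as a
# consensus test vector. `process_slot` is a toy stand-in spec function.

def process_slot(state: dict) -> dict:
    # Toy state transition: advance the slot counter.
    new_state = dict(state)
    new_state["slot"] += 1
    return new_state

def test_process_slot() -> dict:
    pre = {"slot": 41}
    post = process_slot(pre)
    assert post["slot"] == 42
    # The (pre, post) pair is what would be serialized and shipped
    # to client teams as a reference test vector.
    return {"pre": pre, "post": post}

vector = test_process_slot()
print(vector)  # {'pre': {'slot': 41}, 'post': {'slot': 42}}
```

Because every new spec feature ships with such tests, the edge cases the author thought about land directly in the cross-client reference suite.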
A
Okay. I think, yeah, worth noting: Europeans, your time will shift before the next AllCoreDevs. We are not shifting the AllCoreDevs time, so it'll be at a different local time for you, if you live somewhere where daylight saving time isn't... yeah. Thanks, everyone, for coming on; thanks to everybody who didn't drop off halfway through after the merge stuff, and I'll see you all in two weeks.