Description
Agenda: https://github.com/ethereum/pm/issues/408
EIP-4396: Time-Aware Base Fee Calculation - https://eips.ethereum.org/EIPS/eip-4396
Description - Accounts for block time in the base fee calculation to target a stable throughput by time instead of by block.
A
Okay, let's just start then. The way I basically see the situation is that we have the merge coming, and afterwards I personally see us going down one of four different paths. We basically just have to choose which of the four we want to go down.
A
The first one would just be no change, so we don't include any EIP into the merge that would alter the base fee calculation, and then we'd probably do something in Shanghai, where we have more time to actually properly work on something. The second alternative that I see is what I'd maybe like to call Mikhail's proposal.
A
Although, as Mikhail was pointing out, he was only bringing it up as a thought experiment, I actually think it's worth thinking through, because it has interesting trade-offs as well. That is basically acting, during the base fee recalculation, as if there had been empty blocks in the missed slots. So you don't actually insert empty blocks, with all the complexity that would entail; in doing the calculation you just act as if there had been empty slots.
A
Basically, the base fee goes down even before, and the block you're building now acts as if there had been empty blocks in between, which has interesting trade-offs. And then the other two alternatives would be, first, the EIP proposal itself, which I would call time-aware base fee calculation, and then the fourth one, which I would call time-aware plus this buffer, which would make the mechanism much smoother, right?
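To make the four paths concrete, here is a rough Python sketch of how the update rules differ mechanically. Only the first function is the real EIP-1559 rule; the two variants are simplified paraphrases of options two and three as discussed here, with assumed constants, and are not the exact EIP-4396 specification (see the EIP linked above for that).

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # the EIP-1559 "one-eighth rule"
SECONDS_PER_SLOT = 12                # post-merge slot time

def update_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Standard EIP-1559 update: moves the base fee by up to 1/8 per block,
    proportionally to how far the block was from the gas target."""
    delta = base_fee * (gas_used - gas_target) // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return base_fee + delta

def update_with_synthetic_empty_slots(base_fee, gas_used, gas_target, missed_slots):
    """Option two ("Mikhail's proposal"): before the normal update, act as if
    each missed slot had contained an empty block, so the base fee decays by
    roughly 12.5% per missed slot. No real empty blocks are ever inserted."""
    for _ in range(missed_slots):
        base_fee = update_base_fee(base_fee, 0, gas_target)
    return update_base_fee(base_fee, gas_used, gas_target)

def update_time_aware(base_fee, gas_used, gas_target, block_time):
    """Option three (time-aware, in the spirit of EIP-4396): judge the block
    against a gas target scaled by the elapsed time, so a block that follows
    missed slots may be proportionally fuller without pushing the fee up."""
    adjusted_target = gas_target * block_time // SECONDS_PER_SLOT
    delta = base_fee * (gas_used - adjusted_target) // adjusted_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return base_fee + delta
```

Option four (time-aware plus buffer) is sketched further below, after the discussion of why the plain time-aware rule can only recover part of the lost throughput.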
A
Just briefly, before we dive into the pros and cons of the individual options: are there any other alternatives that people have thought about over the weekend, or have come up with, that they would want to include in this list of possible options?
A
Right, but I think it kind of makes sense to know what we could do. Okay, I guess the first question is: would we even want to do something? Because if we already say that we don't want to do anything, even if we have good options, then we don't even have to look more into the options. If we say we want to do something, we can always fall back to doing nothing if it turns out that all the options we have are not nice enough or something. So, yes.
A
Right, so Marius is saying, I think, that he's on a train, so it's a spotty connection, probably not good enough for voice. But he's saying that he's not a fan of doing anything, because it introduces extra complexity, it's a change, and he is not convinced that there's a need for it.
D
As I understand it, the problem we're trying to solve is that if not all blocks get proposed, each block that doesn't get proposed just takes away one block's worth of capacity from the system, because of the way the base fee adjustment algorithm works. Is there any other problem we are trying to fix, or is it just that one?
A
I think it's mainly that one. There's one additional small one: after a missed slot there's a small spike of the base fee before it then settles back at basically the same level. But that's the small one; it's just a little bit unsmooth.
A
UX-wise that's not ideal, but I think the main ones are indeed the missed throughput and the two consequences of that: one, how it potentially incentivizes those DoS attacks against individual validators, and two, that it can make the whole system react even worse in times of consensus issues. And I guess a third one, just in general, that it reduces throughput, although that we can correct just by increasing the gas limit.
F
So I'm currently in the train station, so I can speak a bit, maybe. I don't know, I wouldn't say that robustness actually decreases, because in that case the chain would keep on going.
D
I'm not convinced that the DoS attack is actually really dealt with by that change, because the most promising time to do a DoS attack is if you notice that there is a massive MEV opportunity and that you are the next-but-one block proposer. You are very strongly incentivized to DoS the next proposer, and then you can exploit the MEV opportunity. Yeah.
A
I mean, like we talked about on AllCoreDevs: that reduces the attacker set from everyone who could potentially benefit from Ethereum having degraded performance to only the immediately next block proposer. And something like this attack is not something you can just spin up willy-nilly when you see an opportunity; you'd have to prepare it very thoroughly in advance.
A
You'd have to map out the whole network and de-anonymize individual validators, all for the off chance that, like once a month, you produce a block and have this MEV opportunity immediately in front of you. So basically, I just don't think we have to worry at all about external attacks on individual validators, just about validators DoSing each other.
A
Right, so that's true: there will be an incentive that exists under all the different scenarios. If we don't change anything at all, then there's already an incentive, because if you just DoS the person in front of you, their transactions just pass through to you. But any of the proposed adjustments kind of leave that situation as is, and even the synthetic empty blocks solution would actually make it worse, because you could extract more value.
A
Just with a different mechanism: here you're not DoSing by creating especially hostile blocks or anything, you're just trying to kick individual validators offline. So even though there's another set of incentivized parties, namely the validators themselves, I still think that getting rid of this other set of potentially incentivized attackers would be helpful.
A
Yes, Matt was saying... but of course, I don't know, I think it's hard to grasp, right? It would be an extra consensus change for a very hard to assess risk. It could just be that this is completely theoretical and no one will ever bother trying to DoS validators. It could also be that people would just do it in order to cause chaos, and they would even do it if it doesn't hurt the throughput of the chain.
F
I agree, I think the incentive is already there. There's already a big incentive to just DoS the validators, and you're not really fixing that with anything, because if you propose a mechanism that increases the block size for the next block or whatever, then you've just increased the incentive to also DoS the next guy.
A
Yeah, I'm wondering what the incentives would be for external parties to DoS validators, and I'm just not that convinced that reducing throughput is all that valuable for an attacker, because it only reduces throughput by a couple percent, and we can correct that manually anyway with the gas limit. It would still be nice if an attacker were unable to have even this couple-percent effect, but I just don't think attackers would be motivated by this couple percent in the first place.
A
I think it's more likely that they're just motivated by trying to generally degrade things and, yeah, cause conflict within Ethereum. And if I, as a hobby validator, regularly have my block proposals prevented by some DoS attack, while Coinbase, because they have a beefy firewall, can just chug along and produce blocks, then I might be really annoyed and I might stop staking, and that decreases decentralization. So I think those are the motivations for DoSing.
A
Personally, my motivation for this EIP was definitely more on keeping throughput up during times of consensus issues, because during the interop event we talked a whole lot about scenarios where all the Prysm nodes go offline, or all the Geth nodes go offline, or all the Lighthouse nodes go offline. I personally very much expect, within the first year after the merge, at least one event where one of the client combinations can't keep up with the network, and that would kick a significant amount of validators offline, and I really, really want the chain to just not lose any throughput during that time.
E
Yeah, I guess if the concern is with one specific combo, that seems less of a problem, because you can swap clients out. If it's, I don't know, Prysm and Besu, right, you can swap out Besu for Geth or something like that. That seems like an easier fix than if it's an issue with, like, Prysm itself. And I don't know, it seems like the odds of an issue with just a single client are kind of lower.
E
Just given that we've had the beacon chain live for a year and we've had mainnet live for five years, and, you know, they both work and haven't had these kinds of major issues, or, like, compatibility issues, yeah.
A
No, I just wanted to say that I don't follow the beacon chain as much as I follow the execution chain, but on eth1 we had two consensus issues over the last two years, right, which only had minor impact, but one of them kicked all the OpenEthereum nodes off the network for a while. So these things do happen, and I think there was something on the beacon chain as well, where one client couldn't keep up for a while or something. I'm not quite sure, but okay, right, yeah.
A
That makes sense, yeah. But I mean, these things do happen. And the nice thing right now in Ethereum, which we're really about to lose: in proof of work, for one, miners are relatively centralized, for better or worse, and so they are relatively good at reacting quickly if they're on the wrong chain or something; but also, with the difficulty adjustments...
A
We have this really quick self-healing property, where sure, we have a little bit of reduced throughput for a short amount of time, but then we're usually back up and running. And given that almost all miners run the same client anyway, it's rare that a small portion, like 20 percent, splits off. So usually it all just works. In proof of stake this will just no longer be the case.
A
It's much more likely that we have a 20 percent drop-off, or a 30 percent drop-off, and there's no self-healing on any reasonable timeline; it just stays this way. Basically, from one second to the next, we just have a sustained 30 percent throughput loss on the chain.
E
Yeah, I agree, and I guess if this does happen, we want to be in a spot where we're also not making things worse with 1559, right?
E
Actually, well actually, even with your proposal, though: if this happens and we're losing like 30 percent, the best we can get, because of the slack factor, is blocks that are like twice as big. I guess that can bring us all the way up to... say we lost 50 percent, we'd have 200-percent-full blocks every other block. Okay, yeah, so your proposal...
A
Right, because if we do the simplest thing, where we don't have a buffer or anything, we would only recover part of the lost throughput. That's why, yeah, I really think something like this buffer would be nice, because then you actually could lose all the way up to 50 percent of proposals without any throughput losses. Or even consider Mikhail's proposal of the synthetic empty blocks: the nice thing about the synthetic empty blocks is that they also completely avoid throughput losses all the way up to 50 percent offline.
A
The disadvantage is that the synthetic empty blocks have other undesirable side effects that I think the time-aware proposal does not, and we can dive into them. But there are ways of trying to really not lose any throughput all the way up to 50 percent offline block proposals.
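To make the buffer idea concrete, here is a hypothetical sketch; the names and the exact cap are assumptions for illustration, not the EIP text. Because a block can hold at most roughly twice the gas target under EIP-1559's elasticity, a block following many missed slots cannot make up all the lost capacity at once. The buffer banks the missed slots and releases at most one extra target's worth per block, which is why throughput can be fully recovered for anything up to 50 percent missed proposals.

```python
SECONDS_PER_SLOT = 12
MAX_EXTRA_SLOTS_PER_BLOCK = 1  # cap so the adjusted target never exceeds 2x

def adjusted_target_with_buffer(gas_target: int, block_time: int, buffered_slots: int):
    """Return (adjusted_target, new_buffered_slots) for one block.

    Missed slots are added to a buffer instead of being credited all at
    once; each produced block may then run against up to twice the target
    until the buffer is drained."""
    missed = max(block_time // SECONDS_PER_SLOT - 1, 0)  # slots skipped before this block
    buffered_slots += missed
    credit = min(buffered_slots, MAX_EXTRA_SLOTS_PER_BLOCK)
    buffered_slots -= credit
    return gas_target * (1 + credit), buffered_slots
```

With up to half the proposers offline, every block that does land can run at the doubled target, so the buffer drains as fast as it fills; beyond 50 percent offline it grows without bound and throughput is lost, matching the limit described above.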
E
And if I understand... is Marius still here? Yeah, I'm not sure if you can answer if you're on the train. But is the main concern just that we're adding additional complexity, that we're changing this thing which already kind of works?
F
I have multiple concerns. We're changing something that kind of works already and that has been proven to work; there was a lot of work done in getting this mechanism to where it is, from Barnabé and others, I think. And I don't know, I really don't like changing this thing ad hoc. And it also just introduces another change into the merge, and the merge is going to be the most insecure point in Ethereum's history.
F
It's the point where the most things can go wrong, ever, and I would like to have as few changes as possible there.
A
I don't actually think that's unreasonable. I think it's definitely something where we could step back and talk about it. Yes, even under a consensus issue, and I think it's not unrealistic that we'll have one hopefully minor consensus issue between the merge and Shanghai. But if we say we will have some version of this in Shanghai, with more time to work on the best version of the proposal and really properly analyze all the edge cases, say...
A
Then it's really only about the time between the merge and Shanghai. And I mean, we always have some delays; I assume that the time won't be like three or four months, I assume it's more likely to be like six to eight months between the merge and Shanghai. But even that is a limited amount of time, and during that time it's not like it's a huge vulnerability.
A
It is basically a non-insignificant degradation of service during a consensus issue, but we can react by increasing the gas limit, right? So while there are no self-healing abilities, if we react relatively quickly and increase the gas limit, that kind of counters the throughput loss as well. Of course, that would probably take a couple of hours at least, but it's not like...
A
We'd have to live with a couple of weeks of reduced throughput. And if we really think that the DoS attacks are not that severe, and again, that's just not my area of expertise, then I think Marius's proposal of saying let's do this properly, only do it in Shanghai, and keep the merge minimal, I don't think is necessarily a bad idea.
A
I don't know, to me it seems to be right at the border: it's small enough that it might be good to still do it now, but also maybe not; maybe we should wait.
E
I would agree with you if there weren't already like 40,000 things that we're saying are gonna be in Shanghai. So, you know, my personal read right now is that something like the AMM fee market proposal, I see as very unlikely in Shanghai, just because we have a bunch of EIPs that are already much more specced out than that and have been waiting...
E
...you know, for probably close to a year already. So, more realistically, maybe something small like this gets done in Shanghai, because it's small and it's easy to articulate the value and whatnot. But I think if we need to do withdrawals in Shanghai, and if we have like four or five EVM-related EIPs that we want to all do, or do a subset of, then also having to do a small fix to 1559 seems reasonable.
A
Yeah, no, just for the record: I think, and that was part of the design choice of this EIP, that it's mostly orthogonal to all these other base fee changes.
A
I initially went into this with some more interesting ideas around how to make the base fee calculation more elegant, but I abstained from putting any of that into the EIP. Because basically, the way to think about it, I think, is that there's the signal for the base fee update that comes out of the fullness of the block, and right now we're debating how to modify that signal, by potentially also taking the block time into account. But then there's the question of:
A
Once you have the signal that the base fee should go up or down, how do you calculate the new base fee? Do you use this one-eighth rule that we have right now? Do you maybe use some additive rule instead? Some exponential rule? An AMM rule? All of these are basically just taking the existing signal.
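As an illustration of that separation between the signal and the update rule, here is a small sketch. The one-eighth rule is the actual EIP-1559 behavior; the additive and exponential rules are only sketches of the alternatives mentioned, with made-up parameters.

```python
import math

def signal(gas_used: int, gas_target: int) -> float:
    """The fullness signal: how far the block was from target, in [-1, 1]
    given EIP-1559's 2x elasticity."""
    return (gas_used - gas_target) / gas_target

def one_eighth_rule(base_fee: float, s: float) -> float:
    return base_fee * (1 + s / 8)       # current EIP-1559 update rule

def additive_rule(base_fee: float, s: float, step: float = 1e9) -> float:
    return max(base_fee + s * step, 0)  # fixed step in wei (made-up size)

def exponential_rule(base_fee: float, s: float) -> float:
    return base_fee * math.exp(s / 8)   # symmetric up/down moves
```

Any of these rules could consume a time-aware signal just as well as the current one, which is the sense in which the EIP is orthogonal to the choice of update rule.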
E
Right, and I think Barnabé is also looking right now at ways we could improve this signal. So, yeah, I suspect it's possible that that's another big thing that gets discussed; I'm not sure if we could put it in Shanghai, but either Shanghai or the one after.
A
I mean, it also seems, at least to me, like no one is... so, just really: how concerned are we about the external DoS (external meaning we ignore for a second the validators DoSing each other), and how concerned are we about throughput reductions during consensus issues? It sounded like people are barely concerned at all about external DoS attacks on validators, and only a little bit concerned about the throughput loss as well. Is that a good assessment?
B
I am more concerned about the throughput loss than the external DoS attack. On the time horizon that we're discussing, post-merge, there is a sufficiently high chance that there will be an outage of some kind on one of the consensus clients or one of the execution clients, and during that time it would be nice if we didn't have throughput loss.
A
Yeah, that sounds reasonable, actually. And how would you say, then, focusing on the throughput loss: how okay would you be with a solution of just saying we basically have to react manually? Well, I mean, okay, first question, about the gas limit, right, because the way to manually react, at least, would be by pushing up the gas limit.
A
We do not expect to have the EIP that removes this ability from the individual stakers and gives it into consensus control. I don't think we want this included; is that right?
A
Shanghai, okay. So then, during this time period, we would still have the gas limit control in the hands of the stakers. So what we can always do is, if we see a 30 percent throughput loss because some client combination, or combinations making up 30 percent of proposers, go offline, we could just basically tell people to manually push the gas limit up 30 percent. Of course, the disadvantage is twofold: one, it takes a while, I guess at least several hours, probably more like a day or so, before...
A
...this information reaches enough stakers that we can actually move the gas limit. And then also, secondly, once the participants come back online, there's a risk that, because there's some amount of people who want a higher gas limit anyway, the gas limit just stays sticky at the higher value and doesn't come back down. And then we have the problems of a too-high throughput, and then you can hard fork...
E
...the limit back down in the next upgrade, in a convenient way. But, yeah, I think you're right that that's a risk: we raise the gas limit 30 percent, it gets stuck there, and it becomes contentious to bring it back down.
A
Right, I mean, this doesn't sound too bad, right? And this is only in the scenario where we do have an issue, which is not guaranteed to happen. And I mean, how badly would people actually be affected? Because I always hear these kinds of things, like with the difficulty bomb, that even a few percent of slow blocks would be terrible for DeFi or something. I never believed this; I always feel like that's hot air. But how bad would it be?
E
To take the DeFi perspective, right: it's probably millions of dollars in, like, economic inefficiencies, and you can imagine the ripple effects. If Ethereum gets DoSed and it slows down, then people are upset about it, and there starts being negative coverage and whatnot. I think it would be a bad thing, but it would be a bad thing kind of because Ethereum is getting DoSed.
E
I think that's what would be happening, yeah. What's bad with the narrative, in a way, is: if you have a narrative and, say, the price starts going down, that actually creates demand for on-chain activity, because everybody is then trying to keep their collateralized positions from being liquidated, so everybody wants to use the chain at the same time. But then we already have less throughput, so it's just a compounding bad problem.
E
And also, to be fair, this is not the only case in which these things can happen, and applications, I think, are starting to work on improving their resiliency towards that. There was obviously the Black Thursday thing, when COVID hit and a bunch of people got liquidated and Maker had a bunch of issues, and I think, you know, applications are starting to do things like having longer time windows to post collateral and stuff like that.
E
So an issue on mainnet is not the only case where performance can be degraded for some of these applications. If there's a massive drop in price from something external to the chain, everybody's gonna want to rush in and do stuff, so these applications kind of need to be robust to that as well. It's obviously not a good thing, and you don't want it to happen.
A
I mean, especially right, because it would just basically mean a higher base fee. Say a 2x base fee would result in something like 30 percent reduced demand; something like that seems realistic. And we already have week-to-week variability of base fees of more than 50 percent, right? It wouldn't be super unheard of to have twice as high a base fee for a week; we already have that, yeah.
E
And I think the concern is not so much twice the base fee for a week, but more like 10 to 100x the base fee for 30 to, you know, 300 minutes. If there's this hours-long period of time where everybody is trying to use the chain ASAP, that prices a lot of people out, obviously, and then if some actions need to be taken within that period and for some reason you can't get in, the reason being you're not bidding high enough, that can be really negative. But I don't think a week of twice-as-high base fee is, I mean, it's bad, but it's not catastrophic for any use case.
B
So the thing to keep in mind is that for the last, what, two years, there has been some non-trivial percentage of users that would like to use Ethereum but cannot. That percentage will go up if we have 30 percent empty blocks, but fundamentally there will still be a class who can use Ethereum and a class who cannot, and that fundamental aspect won't change; just the size of each pool will change a little bit.
B
Yes. So I think the point you're making is basically that the sudden change can be unexpected: one minute users are able to participate and use Ethereum, and the next minute they're not, and they can't for a day, and that's extraordinary. But even that isn't unheard of: we've seen that behavior just due to something hitting the news, or some new product coming out or whatever, and you see the set of users that can afford to use Ethereum change by a huge margin, and one could argue that is functionally the same.
B
So whether we have a decrease in supply or an increase in demand, they're functionally the same overall. And we have seen times where an increase in demand suddenly appeared out of nowhere and lasted for an extended period of time. So I don't think what we're talking about, which is a sudden decrease in supply that lasts for a while, is very different in terms of broad economics, game theory, etc.
A
Not only have we seen this; I think it was in the news literally yesterday that we had the first full week of Ethereum being deflationary, which is because we had a week of really high base fees. And I don't know, right now the base fee is at 230, and we've had weeks where the base fee on average was at 30.
A
So that's almost a 10x increase, and while it's not nice, people are able to adapt and absorb this. So a 2x difference, in a very exceptional situation that we don't expect to ever arise, but that could arise maybe once during six months or so, for two or three days...
A
Right, so all of this does kind of point slightly towards us giving in to Marius. But does anyone still feel strongly that we should do this for the merge already, and would want to point us back towards that direction? Or is everyone either ambivalent or convinced that we shouldn't do it? Because I'm definitely ambivalent, and I'm fine with not doing it.
C
I'd just like to understand... oh, go ahead. I was gonna say I'm pretty sympathetic to the concern that 1559 has been vetted by many different people and it's concerning to change these mechanics, but I think we still have enough time to have the proper people vet these things. And I find it pretty concerning that people who are not part of the Ethereum ecosystem can attack the network for a relatively cheap amount.
A
Just for the record, by the way, I do think we've been hugely paranoid as an ecosystem about 1559, where we really treated it as this super black-magic voodoo thing with super complicated impact. And similarly, after the change, every negative thing that ever happened on Ethereum afterwards was blamed on 1559, without anyone ever bothering to even point out how a plausible mechanism could have worked: like, oh, it went up like this, it has to be 1559. And in reality...
A
So Matt, what you're saying, though, is that you're concerned about the external DoS issue, and for that reason you would still prefer us to do something for the merge already, to disincentivize this further?
B
It is another change, which is just one more thing that can have a consensus failure or a bug. And when you're going through and dealing with consensus failure after consensus failure after consensus failure, as we are while leading up to the merge and working on it and all this stuff, because it's really complicated, adding one more thing to that...
B
...is just yet another thing that can fail and delay and cause distractions and stuff like that. And so I think the idea is that literally anything would meet that bar: it's a change, things will change, tests will break, stuff is going to go wrong, because that's what happens when you change things. That's the way the world works: when you change things, things break.
E
So is there value in maybe trying to do the review of this in parallel to the merge work, and we can kind of see? Because if we do nothing now, obviously we're not going to do this for the merge, right? Maybe it's valuable to take Ansgar's proposal and have it reviewed by Barnabé, and potentially Tim Roughgarden if he still has the bandwidth for this stuff, you know, in parallel.
B
I would rather spend Barnabé's and Tim Roughgarden's time on something more long-term, personally, unless there's a good chance this will actually go into the merge. I'd rather they spend that same time working on the AMM fee market, for example, or on changing it so we do rate-based throughput instead of block-based throughput. Those are changes I think are way more valuable, though definitely out of scope for the merge.
C
I was just going to say I generally prefer to softly accept things and then, based on the output of the research or development, remove them, rather than, quote-unquote, work on them in parallel, because I feel like it's just a stronger signal that this is something that might happen.
C
It's not really that relevant, but I also don't understand Marius's point that there needs to be a change to the transaction envelope. I think this is only a change in the state transition and in the verification logic of 1559.
B
Yes, I guess it depends on your specific client; I think that's just dependent on the architecture of the client. I can imagine a client design where the consensus module is pluggable, and I assume most clients are like that, because they all have multiple consensus engines already, in which case it's just, you know, you instantiate a different class or something very early on, and otherwise no code is touched: instantiate A instead of B.
B
Okay, and so it will validate that the difficulty is in fact zero, or, sorry, undefined; so they can't validate the difficulty, because it's random. But you do need to remove the difficulty check at least, so there you're right: there will be a change in the block validation, because we have to remove the difficulty check at the very least.
G
Right, and there will also be a change in the block validation because we need to remove rewards; we need to deprecate some fields, right? So these are also changes. And my main question here is: what kind of work should we do before we can accept this EIP? I mean, do we need to do some analysis?
G
I guess yes. And what kind of engineering work, like testing, observing this new behavior on a testnet, artificially making missed slots and seeing how it progresses, or what else could be...
A
...on this list. Yeah, I think that kind of depends... I mean, not depends, but maybe it's related to the general question of timing here. Right, because I think the goal was to have the spec more or less frozen for the merge by the end of October. We're at the first of November now, is it? The first?
A
Yes, the first. So I mean, I guess spec freeze in general is kind of a soft target, where there might still be changes afterwards, and this, I guess, could come in later. But by what time would we have to make a final call on this being part of, or not being part of, the merge?
G
That's a good question. But this freeze for the end of October that we are currently working towards is not the last one; it's to initiate the next wave of engineering efforts around the merge. It's not like, you know, the final set of things that will go into the merge. As for when the last spec freeze should be expected, that's a really difficult question; it depends on what we'll see in November.
E
Yeah, but I do think we probably don't want any new features. This seems like it's maybe the last feature that we want in the merge, right? Obviously, after the spec freeze, if we find bugs and whatnot, we need to change the specs, but the general mechanism, I don't think, is going to change, and, you know, the block header and whatnot is not going to change.
E
But this seems like maybe the last feature change we want to bring into the merge. And that being said, it then becomes a trade-off: if we really want this, then obviously we'll add it in. I think, if we wanted to get this done, we should start researching and, you know, testing things kind of now, at least at the EIP level, and expect that once clients have implemented this latest spec freeze, which, I don't know, say takes like a month...
E
...then this would be the next thing they implement, so either very late this year or early next year. But I suspect, and I guess for the people here what this means is, we probably need to get this into a spot where it's spec-frozen as an EIP before, kind of, the holidays.
A
Right. And then what I'm wondering is, because I'm personally kind of optimistic that, working on this in parallel, we will probably get to some finalized version of the EIP that everyone thinks is the best thing we can do for the merge, and that is incentive-compatible with 1559 and everything. But the question then is, at that point we would come back to, say, AllCoreDevs or something and discuss this again, and then I feel like we'd fall back into the same discussion.
A
Is there a chance that people will actually want to include it, if it is ready in time and if the arguments don't change from what we discussed here today? That is basically my question. Because I would be more than happy to just continue working on this, working with Barnabé and whoever, and making sure it is ready in some form, but all of this only makes sense if there's a reasonable chance people will then want it in, and I'm just not sure, right?
E
Maybe it's worth spending a bit more time, say until the next AllCoreDevs. I don't think we can get Tim Roughgarden to comment on this in the next two weeks; I mean, we can try shooting him an email, but we can probably get Barnabé. I guess there are not a lot of people from the eth2 side here as well, so also getting their input, just to see, with regard to how the spec works and whatnot...
E
...you know, the benefits of this. Yeah, I don't know, it feels like maybe there's value in spending the next two weeks trying to get more eyes on this and more feedback, and then making kind of a call on AllCoreDevs in two weeks on whether this is something we potentially want to include.
B
It still comes down to the premise of whether we can and should change anything, and whether the threat is significant enough that we need to care. Even if we had, you know, 100 percent buy-in from Barnabé and Tim Roughgarden, and they were like, yeah, this is amazing, this will be great, this will fix all of our problems, I don't think that would change the outcome in any way. And so I'm hesitant to devote resources to getting that if it's not going to actually impact the outcome.
F
For me, it would change the equation a bit. For me, it's not only the... I'm probably in a tunnel right now, so you can probably not hear me.
F
For me, it's probably a combination.
F
It's a change in the algorithm that we don't really know if it makes sense, or if it even breaks something, or whatever it does; and it's a change. So if one of those would get resolved, and there's sufficient buy-in from others to say, okay, this is actually an issue that we need to solve, I would probably...
E
Sorry, one thing: I was talking with Danny; he couldn't make the call and sent me a couple of messages about this. One thing Danny Ryan feels kind of strongly about is that right now there's no disincentive in the eth2 protocol to not produce a block. There's obviously an incentive to produce a block (you get the transaction fees), but you're not penalized in any way...
E
...if you don't produce a block. Because of that, he would like to see something that takes missed proposals into account generally when calculating 1559. If people decide they want to grief by not proposing blocks, then you could at least make it easier for the next proposer: if the next proposer basically has to work against a raised base fee because of that, then you're just including fewer transactions, and that's not great. So I think it might be worth also talking with him in the next two weeks to see what's...
B
In a world of spherical cows, the opportunity cost is the penalty; it's equivalent to slashing. In the real world, I understand the psychology is different around opportunity cost versus actual cost, but mathematically, which is what we usually try to model things on, even though it doesn't match reality, they are functionally being slashed.
E
...if there's a proposal that we can come up with that's simple, kind of helps with the spec, and obviously doesn't have a huge engineering overhead, then we can make a soft call about that in like two weeks on AllCoreDevs. Does that seem reasonable?
C
Is there anyone... oh sorry, go ahead. I was just gonna say I thought that we were already under the assumption that we were doing the most minimal change possible, and so we're trying to determine: is that acceptable, right? And I guess, yeah, I was understanding what you're saying as exploring whether there are minimal changes, and then trying to figure out whether people are...
E
...you know, making sure that folks like, I think, Danny and Barnabé on the research side are on board and think it's generally a good idea; or, if there are tweaks to the current proposal that don't add a ton of engineering complexity but make it more robust, I think exploring those in the next two weeks makes sense.
B
So I haven't heard anyone here really indicate that they are strongly in favor of this; I think Matt is probably the most strongly in favor, from just the sentiment analysis I've done in my head. Meanwhile, I think every single one of the core devs is going to push back on this. I suspect that unless we have someone who is really passionate about pushing this through, it is not going to go through, just due to inertia alone.
B
Even if we do solve and address all the problems, I don't see it going through without a strong champion. So I guess the question I would like answered before we walk away is: is anyone in this room passionate enough that they really want to fight and die on this hill on core dev calls?
A
Right, and then maybe the last point here: I'm super willing to spend a little bit more time on this over the next two weeks, or week and a half even. I don't think it's lost time anyway, because it helps us understand 1559 better, so it would be useful even if we don't go ahead with it for the merge; I think the effort wouldn't be lost, it would go toward working on something similar for Shanghai. So I think it's reasonable to continue and then make a call on AllCoreDevs whether we want to move ahead or just stop. I think it would be helpful if maybe someone else, or maybe also myself, I don't know, could also spend some time during the next two weeks working out a little more clearly...
A
...the specific reasons for why to do it, right? Basically the DoS concern and the throughput-reduction concern, just exploring those a bit further, so that we have something written that clearly answers: why would we even want to do this? Because...
B
And do the two of you care enough about this project to work on it in lieu of working on any of the other thousand projects you could work on in Ethereum? I think that's the other question for the champions.
H
Yeah, this needs to be merged yet; I see that it is approved, but still to be merged.
E
Just, I guess, because we're coming up on time: the people it makes sense for you to speak with, Ansgar, are, at the very first, I would say, Barnabé and Danny. I think they can provide good feedback.
E
On the problem with the DoS concerns: unless, like, Marius is on the call here, and I don't want to volunteer him, my feeling is that all of the client devs who are most qualified to speak about that are going to be working on the merge implementation in the next few weeks, so it might be hard to get somebody. If Marius wants to volunteer himself, he can, but yeah, I think that's kind of the main thing.
E
I don't know, maybe asking in AllCoreDevs or something whether somebody on the client side is interested in kind of working on that, right? But...
E
...they just shifted their hard forks, so they have plenty of time on their hands. So I think, yeah, then just asking in the consensus dev call, or asking Danny who the right person is. So I think that still makes Danny and Barnabé the two first stops, and I'm sure Danny can find somebody on the consensus side who can look into that more.
A
Okay, so then, thank you. Thanks, everyone, for coming.
H
Yeah, just a couple of announcements before I let everyone go. There are two meetings coming up that may be relevant for people here. One is the State of the Ethereum Execution Layer Specification meeting with Sam Wilson, which is scheduled tomorrow; it's an open meeting, and it could be a very good discussion for knowing where the Ethereum execution layer specs are right now. That meeting is scheduled at 18:30 UTC. The other meeting is Merge Community Call #1, which is planned for November 5 at 14:00 UTC.
H
All right, thank you everyone for joining today. I'm gonna share the recording very soon, and I hope the next meeting is an AllCoreDevs meeting. If there is any decision to have any further call related to this particular proposal, like a breakout room call, I'll be happy to schedule one. Thank you, everyone, awesome.