From YouTube: Post London EIP-1559 Assessment (Breakout #12)
A
Okay, so thanks everyone for joining this post-London infrastructure call. I guess the goal here is mostly just to discuss what different people have seen since London has gone live and how we can adjust. I know that there have been a lot of conversations happening around how we handle the new fee mechanism.
A
How do we make sure that we're providing a good user experience to people? So I guess the first item on the agenda was an overview of the upgrade. To do this quickly: at least on the consensus-level side, everything worked as expected, so we don't expect to make any changes to the core protocol in the short term.
A
Regarding the fee market, we didn't see anything go wrong, or differently from what we would have expected from the testnets and from the simulations that we've done in the past.
A
I'm curious to hear from others on this call what they've seen, and it can be useful, if you share something, to mention what product, or at the very least what type of product, you're working on. I think wallets have had a lot of interesting experiences, but I'm just curious to hear from everybody what they've seen, and whether there are any issues they think are important that we should bring up.
B
I guess I'll go first. My name is Auston Bunsen, I'm the co-founder of QuickNode. We provide blockchain infrastructure to companies, and we're running OpenEthereum. This is probably very unrelated to the London hard fork, but I'm going to mention it just in case: we're noticing a lot of dropped peers after upgrading to the version of OpenEthereum that supports London. Again, probably unrelated, but just throwing it out there in case it is pertinent.
A
Okay, thanks, that's good to know. I'm not sure what that is. So I know OpenEthereum was being deprecated basically after London; I assume they still have people looking at the repo. So is there a specific issue that you've opened yet, or that you've seen on the repo?
A
Okay, yeah. If you do figure it out, I think if you can share the issue in the AllCoreDevs chat once you have it, that's really helpful. I know a lot of the OpenEthereum team has basically migrated to Erigon, so the devs are still working in the ecosystem, and that's probably something they should look at if it needs to be fixed.
A
If no one has anything burning: Barnabé, I saw you were on the agenda as having some data to present about the upgrade so far. Is that right?
C
Can you see? Yes? Yep, all right, cool. Disclaimer: it's been only a week, so these are really early impressions, and I didn't get as much time as I wanted to dig into much more data, but I hope these impressions help frame some of the discussions.
C
As a look back to the previous conversation we had on the podcast, I said I would be looking at three things. First, the gas used: the dynamics, to see when we are in full blocks and whether first-price auctions maybe come back on the table. The second thing is the base fee: what does the trace of the base fee look like, is it smooth, is it more oscillatory? And the last part is the oracles: are they doing the job well enough?
C
Are they tuned properly? So I'll try to say a couple of things on each point. On the gas used, I focused on the longest streak that I found as of yesterday. I don't know if another NFT drop happened in the meantime, but the longest streak of full blocks I found was around 35 blocks, which took the base fee from the low 30s to something like over 1,500 gwei. I'm plotting below the priority fees of the 1559 transactions that were included during that ramp-up; namely, I'm plotting the interquartile range, so the 25th percentile and the 75th percentile of priority fees.
C
I guess if we expected to see these first-price auctions on the priority fee, we would expect some kind of ramp-up that would be a bit smoother than it is here; the priority fees really get super high very quickly. My impression is that what happened is that legacy transactions are still very much dominating on the network, or at least they were at the time, and these transactions are sent with a high gas price, because we cast them into the 1559 format with full priority fees.
C
Their priority fees are high, and so in turn the 1559 transactions kind of copy that via the oracle and also send high priority fees. I see that Micah has his hand raised. Do you want to go ahead?
C
Yeah, at some point there were less-than-full blocks, so I took this streak to be... I can't really require 100%, because there's always a little bit of space, so I took it as the moving average being above something like 95%, so there was indeed maybe one block where the gas used was a bit lower than the others.
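The ramp-up C describes lines up with the protocol's update rule: each full block can raise the base fee by up to 12.5%. A minimal sketch of that rule, simplified to floating point (real clients use the integer arithmetic in the EIP-1559 spec):

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8
GAS_LIMIT = 30_000_000
GAS_TARGET = GAS_LIMIT // 2  # post-London target is half the gas limit

def next_base_fee(base_fee: float, gas_used: int) -> float:
    # The base fee moves by at most 1/8 per block, scaled by how far
    # gas_used is from the target; a completely full block gives +12.5%.
    delta = (gas_used - GAS_TARGET) / GAS_TARGET
    return base_fee * (1 + delta / BASE_FEE_MAX_CHANGE_DENOMINATOR)

base_fee = 30.0  # gwei, roughly where the streak started
for _ in range(35):  # 35 consecutive full blocks
    base_fee = next_base_fee(base_fee, GAS_LIMIT)
print(f"base fee after the streak: ~{base_fee:.0f} gwei")  # well over 1,500
```

So a 35-block full streak alone is enough to carry the base fee from the low 30s past 1,500 gwei, consistent with the observation above.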
C
Is that normal? We know that there's definitely a region of the system where things can happen that make the base fee look like this. My take on this at the moment, and this is more of a reasoned intuition than actual data analysis, is that it comes from the legacy transactions that are sent by users.
C
They would set the gas price to something that's maybe higher than the base fee but remains close to it, because the oracles they use tell them: okay, this is the ambient current gas price, and this is what you should set your parameter to. So you send a transaction which is close to the base fee, which means that includable transactions actually tend to clump around the current base fee.
C
Small upward deviations of the base fee then price out a lot of transactions, and if the base fee then decreases a little, you suddenly have a lot of transactions which are again includable. So you could observe these sorts of hiccups, due to the fact that these legacy transactions don't have a lot of margin to get into the system, or not as much margin as 1559 transactions have with the max fee.
C
So what happens is that any time the base fee rises, legacy users suddenly get priced out and have to wait quite a bit. So I see two scenarios: either they are included quickly, because the base fee is stable, or they are priced out and need to wait until the base fee comes back down. My take is that they seem to be paying more with their time than with their money. And actually, thanks to Blocknative.
C
I've seen this graph; Parama gave me a heads-up that it exists. It was posted in the geth Discord. These blue spikes are the pending time to inclusion for legacy transactions, and it does seem to be considerably longer than for most of the 1559 transactions.
C
I think it was quite a smooth experience but, as Tim noted in the notes for this call, I'm guessing that the priority fee oracle might be biased a bit upward, and most likely this is again due to the legacy transactions: when they got in, even if, as I said, they pay more with their time than with their money, there is still more range for them to overpay on the priority fee, and so perhaps this is biasing the fee history oracle a little.
C
I think, if we want to be sure of that, we could take a look at the inclusion delay as a function of the priority fee that was sent. That would maybe give us a clearer picture of what's happening, but it would be interesting to know how low we can go with the priority fee, and what kinds of guarantees we need on this fee to be included.
F
Yeah, I don't know for sure, but I think the four to five is some sort of minimum, depending on the setting. If I remember correctly, it was on the last call that we had discussed starting minimums, and four to five was in the range, I think it was two, three and four or something, if it pulls from the fee history. But if it's one gwei or something, we might be hitting a minimum; I'm not positive.
C
Yeah, so there might be a minimum, but I still believe there is some kind of dynamic here where the priority fee oracle, the fee history, is looking at the quantiles. The quantiles might be quite high, because a lot of people are overpaying a little with their priority fee, so it could be a combination of a hardcoded minimum plus the oracle itself.
A
Yeah, and if you had a lot of legacy transactions, these might push it. You could have a case where, say, only five percent of the block is 1559 transactions. Those transactions can get in with a one-gwei priority fee, but then, if you're looking at, say, the first decile, there's not even 10% of the block that's 1559 transactions, so you're pulling the priority fee from a legacy transaction, and that might explain why it's lower.
C
Right, so this was for the priority fee. The second parameter that relies on an oracle, a bit more crudely at the moment, is the max fee. The early guideline that we set was to say: okay, look at the current base fee, multiply it by two, add whatever you were going to propose as the priority fee, and this should be good enough, and if we get more data we will make it better. And this is some data, courtesy of Parama.
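The guideline C describes is just a one-liner; a sketch in gwei (the function name is mine, not from any wallet's API):

```python
def suggest_max_fee(base_fee: float, priority_fee: float,
                    multiplier: float = 2.0) -> float:
    # Early wallet heuristic: twice the current base fee plus the tip,
    # so the transaction stays viable even if the base fee climbs.
    return multiplier * base_fee + priority_fee

print(suggest_max_fee(100.0, 2.0))  # 202.0 gwei
```

Lowering `multiplier` to 1.5, as discussed below, only changes the default argument.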
C
Thank you for the data, which seems to be saying the following. What he's been looking at is how long a transaction stays viable, given that you set the max fee as a multiple of the current base fee. A 2x max fee definitely remains viable for 30 seconds, because there's no way the base fee can go that high, and 99-plus percent of the time it remains viable for a couple of minutes, even.
C
Even
but
it
seems
the
numbers
are
fairly
close,
even
for
lower
multipliers,
so
1.7
1.5,
even
1.3.
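Because a full block raises the base fee by at most 12.5%, one can bound the worst case for each multiplier directly; a sketch of that back-of-the-envelope calculation:

```python
import math

GROWTH = 1.125  # worst case: every block full, base fee * 1.125 per block

def blocks_until_priced_out(multiplier: float) -> int:
    # Number of consecutive full blocks before the base fee exceeds
    # multiplier times its value at send time.
    return math.ceil(math.log(multiplier) / math.log(GROWTH))

for m in (1.3, 1.5, 2.0):
    n = blocks_until_priced_out(m)
    print(f"{m}x survives at least {n} full blocks (~{13 * n} s at 13 s/block)")
```

Even 1.3x survives a few fully congested blocks, which is one way to read why the observed viability numbers for the lower multipliers are close to 2x.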
C
So,
in
a
sense,
what
this
seems
to
say
is:
maybe
the
market
isn't
so
unstable
that
we
need
to
have
2x?
Maybe
we
can
use
less
aggressive
max
fees
than
2x
default
of
1.5
seems
fairly
reasonable.
This
is,
of
course,
maybe
more
of
a
static
analysis
of
if
everybody
changes
their
max
v
to
something
else.
The
dynamics
could
be
different,
but
I
I
do
think
this
is
early
evidence
that
yeah
2x
might
be
a
little
high
go
ahead.
C
I'd have to check, but okay, I believe it does.
A
I guess the other thing I wanted to make sure we mentioned on the call is the fee history API. I know there was some issue where the return type for the oldest block was in decimal rather than hex, and that caused some problems, so geth released version 1.10.7 yesterday, where the return type of the oldestBlock in feeHistory is now a hex string. A few people had mentioned that this was causing issues, and it should be fixed if you use the latest geth release. And I guess that's pretty much what we had planned for the agenda, so I'm happy to leave the rest of the call for people's concerns or comments or anything you all want to discuss.
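For reference, this is the shape of an eth_feeHistory exchange after that geth fix, with oldestBlock returned as a hex quantity; all values here are illustrative, not real chain data:

```python
import json

# Request: fee history for the last 4 blocks up to "latest",
# with 25th/75th reward percentiles.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_feeHistory",
    "params": ["0x4", "latest", [25, 75]],
}

# Illustrative response body: note oldestBlock is now "0x..." hex,
# not a decimal number as in pre-1.10.7 geth.
response_body = """{
  "jsonrpc": "2.0", "id": 1,
  "result": {
    "oldestBlock": "0xc5043f",
    "baseFeePerGas": ["0x12a05f200", "0x14f46b040", "0x13ab66800",
                      "0x15010b300", "0x140dd7600"],
    "gasUsedRatio": [0.98, 0.42, 0.99, 0.37]
  }
}"""
result = json.loads(response_body)["result"]
oldest = int(result["oldestBlock"], 16)  # hex string parses cleanly
print(oldest)
```

Note that baseFeePerGas has one more entry than gasUsedRatio, since it includes the base fee of the next (pending) block.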
G
If I can jump in here: the only thing that I've noticed is that very low priority fees are getting included, for example 0.3 gwei, and I'm just wondering if anybody has any thoughts on why that is happening or how that might change, because previously we'd spoken about the minimum being, for example, one or two or, as Jake said before, possibly three to four gwei. So yeah, any ideas on why that is happening?
G
Yeah, so people from outside have been testing it. I don't know if Roman... yeah.
H
Well, the base fee would be kind of higher than I expected, but those transactions would actually still be included, yeah.
H
Well, I had... sometimes it took hours, but I had multiple times when it was like four or ten minutes.
A
That's really cool. I guess the reason we mentioned one or two gwei as an initial default is because that's the price that offsets the uncle risk for miners; it's kind of the economically fair price, if you want: if they include transactions on average for one gwei and then they get uncled every so often, they should end up net ahead.
A
So
I'd
be
like
I
wouldn't
want
to
like
see
like
defaults,
go
below
one
way
just
because
I
think
then
we
end
up
in
a
spot
where
like,
if,
if
we're
sending
transactions
which
are
on
average,
not
profitable
for
miners
to
include
that
that
might
not
be
great,
but
I
yeah,
I
guess
you
know
some
miners
might
be
willing
to
pick
up
those
transactions.
You
know
if
there's
nothing
else
in
the
transaction
pool
and
yeah.
I
I
guess
what
we're
seeing
is
kind
of
like
the
equivalent
of
like
before.
H
The way for me to reproduce this was to set the max fee per gas to the 10th percentile of the base fee over the past 100 blocks, plus the value returned by the eth_maxPriorityFeePerGas method. In this case transactions would usually be included in ten minutes or so, and the tip would be reduced to some value below one gwei.
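H's recipe can be sketched as follows; the inputs are hypothetical here (in practice the base fee list would come from eth_feeHistory and the tip from eth_maxPriorityFeePerGas), and the function name is mine:

```python
import statistics

def low_ball_max_fee(recent_base_fees, suggested_tip):
    # 10th percentile of the last ~100 blocks' base fees, plus the
    # node-suggested tip: a deliberately low-balled max fee per gas.
    p10 = statistics.quantiles(recent_base_fees, n=10)[0]
    return p10 + suggested_tip

# Illustrative input: base fees oscillating between 40 and 80 gwei.
fees = [40 + (i * 7) % 41 for i in range(100)]
print(low_ball_max_fee(fees, 2.0))
```

Such a transaction only becomes includable during base-fee dips, which matches the observation that inclusion took minutes to hours and the effective tip ended up below one gwei.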
D
Are you saying the effective premium was below one, or the premium you set on the transaction was below one?
D
Okay, so that suggests there might actually be a bug in geth then, and I would not be surprised if one of them screwed it up. My guess is they're sorting transactions by the premium on the transaction, not the effective premium, and they're not correctly excluding transactions when that doesn't get met. This should be looked into, to see if it's a bug in geth.
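The distinction D is drawing, the configured tip versus what the miner actually receives, in code (the function name is mine):

```python
def effective_priority_fee(max_fee: float, max_priority_fee: float,
                           base_fee: float) -> float:
    # Under EIP-1559 the miner receives min(tip, max_fee - base_fee)
    # per gas; a rational miner should sort by this effective value,
    # not by the raw max_priority_fee field on the transaction.
    return min(max_priority_fee, max_fee - base_fee)

# A transaction with a generous configured tip but a tight max fee:
print(effective_priority_fee(max_fee=100.0, max_priority_fee=10.0,
                             base_fee=96.0))  # 4.0, not 10.0
```

A pool that sorts by the raw tip would rank this transaction as if it paid 10 gwei, which is the suspected bug.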
D
If
not,
it's
probably
some
miner
running
a
custom
fork
or
they
screwed
something
up
either
way.
I
would
be
curious
if
someone
can
get
through
a
transaction
with
less
than
one
as
the
configured
premium
on
the
transaction,
and
that
would
indicate
a
different
situation.
Oh.
H
It
happened
also,
I
I
used
like
0.2
0.3
is
set
as
max
fee
per
gas
and
it
worked
yeah.
D
So,
in
the
case
of
when
you
set
that,
so
I
think
there's
probably
two
situations
here:
one
either
a
bug
and
geth
or
a
bug
in
miners
with
regards
to
internal
transaction
sorting
and
causing
them
to
not
correctly
do
the
rational
thing,
the
other
one
for
when
the
priority
fee
on
the
transaction
is
actually
set
to
lower
than
one
we
saw
this
long
time
ago.
D
Back
before
there
was
congestion
in
ethereum
when
you
could
just
throw
a
transaction
out
and
it
would
get
included
eventually,
because
there's
always
space,
there
were
some
miners
that
mined
and
transactions
that
were
below
like
what
was
profitable
for
them,
like
despite
everyone
knowing
that
it
was
unprofitable.
D
We
we
always
assumed
they
were
just
like
altruistic
miners,
who,
just
you
know,
didn't
really
care
about
the
money
just
wanted
to
make
ethereum
great
and
include
everybody
who
wanted
to
get
included,
and
so
you
could
like
do
like
a
0.5
or
0.1
even
transaction,
and
just
wait
several
hours
until
this
one
random,
altruistic,
minor,
would
show
up
and
include
your
transaction.
It
might
be
something
similar
or
it
could
just
be
one
of
the
miners
configured
something
wrong
or
they're
testing
things
yeah.
It's
interesting.
D
Maybe
I
don't
think
I
don't
think
any
miner
that
we
know
of
has
uncle
risk
that's
lower
than
significantly
lower
than.
A
Yeah, I'll definitely share this with the geth team and make sure that they look into it. Thanks for sharing the two transactions there.
J
I was going to bring up something else, just thinking out loud here, based on some conversations with users of Etherscan. It feels like there are two different groups of users when it comes to gas prices. There are the people who want to get their transaction through in a short, or reasonable for them, amount of time; those are the ones who want the current or previous kind of gas-oracle experience. And then, from the comments that I've seen in the agenda, and even for myself, there is another kind of user who would be okay with spending one or two gwei on the priority fee, with a max fee a few gwei higher than the current base fee, which gives them, say, a 90-plus percent probability of getting their transaction through within the next couple of minutes.
J
I don't know if others have a similar perception that there are two different groups, and that it kind of drives whatever gas oracle you want to show.
D
So the core problem here is that every user has a different time preference: some users have a high time preference, some have a low time preference, and when you try to factor that in, you end up with a far more complicated problem, and the UX becomes insane very rapidly. Basically, it turns it from being able to ask the user a yes-or-no question into asking the user to look at a 3D chart and say: where are you on this three-dimensional graph?
D
The reason what I have is, you know, low priority fee, high max fee, is because it simplifies the problem to just a boolean question, a yes or no for users, which is very easy. It just assumes that everybody has a time preference of "I want in right now or not at all", not because we think that is the dominant user, but because that is the easiest user to solve the problem for, and it gives a very comfortable user experience even when they fail: it gracefully degrades to "okay, I'll just come back later", which you can express to users by saying, hey, by the way...
D
You
know
transaction
fees
change
throughout
the
day.
You
might
want
to
try
again
it's
something
easy
to
communicate
to
users
where
it's
very
difficult
to
communicate
to
users.
Hey
here's!
A
three-dimensional
curve
of
all
the
possible
things
you
need
to
consider
if
you
want
to
include
your
transaction
and
so
yeah
so
you're,
definitely
right.
There
are
definitely
users
across
the
spectrum
that
have
you
know
different
time.
D
Preferences
different
price
preferences
and
the-
and
I
definitely
encourage
wallets
to
try
to
think
of
how
you
can
cater
to
those
different
users.
I
just
want
to
exercise
caution
of
building
like
super
complicated
uis.
That
users
see
first
like
as
long
as
the
first
ui
they
see
is
the
easy
one.
I
think
it
can
work
out
pretty
well
and
then
for
more
advanced
users.
You
know
they
can
use
things
like
we've
seen
these.
You
know
east
gas
station,
and
I
don't
remember.
D
Was
I
think
my
block
native
has
one
as
well?
They
have
all
sorts
of
different
pieces
of
information
you
can
see
and
then
you
can
also
look
at
like
base
fee
over
time,
and
so
you
can
say:
okay,
I've
noticed
the
base
fee
usually
drops
on
sunday
at
4
p.m.
The
base
fee
is
usually
at
its
lowest,
so
I'm
going
to
wait
till
sunday
at
4
pm
and
see
what
the
base
fee
is
and
then
try
it
then.
D
But
you
know
if
it's
a
if
there's
a
big
token
sale
going
on
or
an
ft
sale
going
on
at
that
time,
then
I'm
going
to
wait
until
5
pm,
like
this
things,
get
really
complex
really
fast.
So
just
a
warning
that
if
you
go
down
this
path,
they're
trying
to
build
a
ui
for
that
it
gets
really
complicated
incredibly
quickly.
K
Yeah, maybe... sorry, hi, it's Dom Vincento here. I'm a developer of MistX for the Alchemist community.
K
We
are
utilizing
a
technology,
that's
called
flashbots
for
those
who
don't
know
about
it,
it's
directly
sending
to
miners
the
transactions
and
we
are
having
a
hard
time
building
the
ux
around
the
base
fee.
Due
to
the
fact
that
the
flashbacks,
the,
when
you
send
a
transaction,
the
transaction
will
be
sent
and
signed.
Sorry,
it
will
be
signed
by
the
user
immediately,
including
the
the
max
augustine.
K
But
then
this
transaction
is
sent
to
flash
votes
and
retried
until
it's
included,
and
this
is
what
we
do.
This
is
the
habit
that
we
propose.
The
issue
that
we
are
having
is
that
to
on
the
ui
on
the
ui
and
as
a
ux.
We
need
to
estimate
the
transaction
fee
to
from
the
screen
to
the
user,
and
this
estimation
needs
to
include
the
basic
due
to
the
fact
that
we
are
submitting
at
every
single
block.
K
That
means
that
we
need
to
increase
on
display
the
on
display
potential
estimated
base
sheet
that
the
user
may
be
paying,
so
we
basically
need
to
show
the
max
fee
that
they
may
detain
to
remain
honest
about
it
and
to
remain
transparent,
but
this
is
this
is
a
big
problem
that
we
have,
because
this
on
display
only
shows
a
really
a
potential
really
high
fee
when
the
user
is
really
not
going
to
pay
that
max
fee.
K
This
is
going
to
be
a
very
rare
case
where
the
increase
is
going
to
be
at
every
block
for
the
next
20
blocks.
For
example,.
F
Yeah, I can't speak to exactly what the estimate is, but it's taking our best guess and then highlighting the estimated number, and then we also show the max fee too, as an FYI. It's one of the things we struggled with the most in the UI, trying to show that right: you don't want to show a super high fee that you're most likely not actually going to pay, but you also don't want them to be surprised by a high fee.
K
So users clearly and naturally understand that there are extra fees and network fees. Since we are sending that through Flashbots, we can't have that MetaMask window opening, so we are throwing everything in one go, in one place. And where we are being hurt today is that users will compare our prices and our fees with Uniswap, for example, and on Uniswap that will not include any base fee or anything that will only be shown later, in the next display.
D
I see, so this is the problem where the person who is submitting the transaction is not the same person who is paying for the transaction, and currently the transaction types that we support do not support secondary payers like we used to. It used to be that you could have a different person paying for the transaction versus who signed the transaction, which matters particularly with Flashbots, because you could submit a bundle where one person pays and one person doesn't.
D
We
have
talked
about
this
before
on
having
new
transaction
type.
That
would
as
a
couple
options
one
is
we
can
make
it
so
the
miners
can
choose
to
cover
the
base
fee
and
something
we
talked
about
and
it
almost
got
included,
but
we
withdrew
it
because
we
wanted
to
keep
the
initial
1559
simpler.
I
don't
think
there
are
any
strong
arguments
against
that
one.
So
if
people
have
real
use,
cases
like
it
sounds
like
you
do
for
making
it
so
miners
can
pay
the
base
fee.
D
Then
what
that
effectively
means
is
that,
from
the
flashback's
perspective,
you
would
submit
a
bundle
where
the
user's
transaction
had
a
base
fee
of
zero
or
sorry
had
a
max
base
fee
of
whatever,
but
that
would
be
covered
by
the
miner.
I
think
that
use
case
work
and
the
other
option
is
a
new
transaction
type
where
you
have
two
signatures.
D
Basically,
one
signature
of
the
person
who
wants
to
do
something
on
ethereum
and
another
signature
for
the
person
who's
going
to
be
paying
for
gas,
and
that
also,
you
know,
there's
no
strong
arguments
against
it
like
theoretically,
it
just
needs
a
champion
to
kind
of
push
it
through
the
process
and
work
out
all
the
details.
Both
of
these
things
are
on
the
table,
so
I
think
your
situation
we
can
do
better
in
the
future.
It's
just
this
initial
launch
didn't
have
either
of
those.
K
Yeah,
I
agree
that
will
help
definitely
both
of
those
proposals.
K
I
don't
think
it
will
solve
if,
if
you
want
the
users
to
pay
for
the
feed
at
the
end
of
the
day,
I
don't
think
that
would
be
solved
by
those,
but
definitely
that
will
give
us
room
for
for
for
extending
that
kind
of
options
and
start
thinking
of
another
way
to
make
profit
and
pay
for
the
user
base
fee
and
not
carry
them
away
like
he
does
today.
A
If
not,
I
guess
one
question
I
I
had
for
all
the
folks
on
here
is.
I
understand
that,
like
1559
was
the
first
time
in
a
long
time,
we've
had
such
a
a
broadly
impacting
change
to
ethereum
that
that
rippled
across
across
a
a
whole
lot
of
different
areas.
We
do
have
another
one
of
those
changes
coming
in
the
next.
I
don't
know
six
to
nine
months,
depending
on
how
things
go
with
the
merge.
I'm
curious.
A
If
people
here
have
anything,
you
know
that
they
they
would
like
to
see,
or
they
think
could
help
them
as
we're
working
on
the
merge
to
make
the
transition
smoother
and
to
offer
kind
of
a
you
know
the
the
best
experience
to
their
users
yeah,
like
things
I
I
don't
know,
either
things
that
we
didn't
do
in
london,
that,
like
you,
would
have
wished
or
things
that
you
thought
were
actually
quite
good
and
and
that
we
should
definitely
do
again
yeah.
L
Personally, I think these calls are great, and trying to bring in people from different levels of the stack helps a lot in terms of what to improve. I would say: try to stagger more, to give a little bit more time for each layer to implement whatever they have to implement, so the layers on top have enough time to adapt. For instance, having geth ship the fee history or other required changes just a few weeks before the upgrade made it really, really difficult to get to it.
L
Yeah, and focus a lot on making sure that testnets are really representative of what's coming up on mainnet. For instance, something that bit us after the upgrade was that we had tested everything thoroughly on testnets, but the base fee on testnet was ridiculously low, since blocks were usually not full, and so when we actually got a higher base fee on mainnet, some things started failing due to poorly set gas parameters.
A
That's useful. So you mentioned having more time for different layers of the stack to adapt: what do you think is the right amount of time from when we have a release of the JSON-RPC features that people can actually use, to going live on mainnet? Is it one month, two months, three months? Hopefully not six months.
D
So I'm just curious: we have a particular group of people here, wallet developers and whatnot, and I'm curious what you all think you need to do for the merge. I suspect that it differs greatly from what you actually need to do, and I think now, sooner rather than later, is a good time to start clearing that up. But I'm curious what people think is necessary from all the developers related to the merge.
D
In
theory,
the
the
merge
should
have
relatively
little
impact
on
integrations,
but
I
want
to
start
those
conversations
now
to
make
sure
we're
not
forgetting
something.
M
Yeah, this is Jen from Rainbow here. I guess...
M
I think about the merge, at least from a wallet perspective, as the tools underneath me maybe having to shift a little bit, but we're kind of relying on not too much changing from a UX perspective. We're also thinking of it as, oh, sometime in the future; once it gets more real, then we'll have these calls again with the different layers of who needs to change what.
A
That's a great point. Maybe: what's the point where it starts to feel real for you all? I understand it's farther in the future than when the clients start looking at it, because right now it's not even implemented in places like geth and whatnot, but what are the signs that will make it feel real for you? That would be helpful.
N
Bruno from Rainbow. Yeah, I just want to say that I think it's like a waterfall kind of implementation: first the client level, then having a testnet, then wallets can start playing with it, and then the apps and other people, in that order. Without having testnets, not just starting to think about stuff but actually doing any work is, you know, just theory, right?
A
When
you
say
test
that
so
one
one
challenge
that
we've
had
in
the
past
is
like
it's
easy
to
spin
up
new
test
nets.
You
know
like
like
a
merged
test
net,
for
example,
but
because
a
lot
of
people
actually
rely
on
gordy,
robston
and
rinkeby,
it's
it's
a
bit
harder
to
like
fork
them
until
we're
pretty.
You
know
pretty
far
in
the
process.
How
useful
is
it
for
like
you
all
when
we
have
test
like
new
test
nets?
A
Is
it
something
that's
like
easy
for
you
to
integrate
and,
like
you
know,
you
can
kind
of
start
prototyping
or
is
it
something
that's
basically
useless,
because
if
it's
not
gordie,
rinkeby
or
or
or
robston,
then
you
just
can't
really
do
much
because
of
how
your
infrastructure
is
set
up.
N
For
us
it's
not,
it
doesn't
make
a
difference.
I
think.
Okay,
like
you
know,
it
depends
like
aside
from
like
being
a
testament
or
a
new
testament
or
not
it
like.
It
depends
on
like
how
many
ranking
changes
in
like
code
like
json.
E
E
A
Cool, that's pretty much all I had. Anything else anybody wanted to bring up?
O
This is Michigan from Anchorage Digital. I know I kind of missed the boat on this, but I just wanted to voice support for what I think it was Dom, from the Flashbots discussion, was mentioning.
O
You
know
the
difficulty
of
predicting
the
fees
that
we're
displaying
for
our
customers
and
perhaps
including
the
new
kind
of
transaction
types
or
whatever.
The
ideas
are
that
we're
kind
of
coming
up
with
for
kind
of
solving
the
issue
of
you
know
a
a
long
time
for
where
base
fee
may
be
changing
drastically
between
when
transaction
is
initiated
to
submitted,
so
just
wanted
to
kind
of
reiterate,
support
for
that
got.
It.
M
I had just a few... I mean, I have to think about it a little bit more, but Barnabé, thank you for your notes and presentation. I'd like to digest your findings, especially that last slide you had with the 2x covering 100%, or 99 or whatever percent, of the time. Maybe it'll be easier if we follow up afterwards with some people, just to see if we could break it down even further, because I know that your numbers were for all time, and I'd like to distinguish between certain peaks and norms, like when things are flat versus when things are spiking. And then, Micah...
M
You were talking about... and it's funny, because from a wallet perspective we were kind of trapped here, trying to take into consideration the user's intent: distinguishing between what a user wants "now or never", versus "whenever", versus "I really want to get this in, it's got to be ASAP, and I want to keep trying until it gets in", that urgent-but-extended timeline, versus just now-or-never. And of course, now-or-never is much easier.
M
We
do
have
plans
in
our
ui
to
kind
of
be
like
hey
things
are
going
crazy
right
now.
It
might
be
better
just
to
try
again
later
sort
of
sort
of
thing
when
things
are
spiking,
but
we
would.
M
We were kind of hoping, I don't know if Etherscan or some other APIs are here, but we were kind of hoping that this could be more of a math problem that is solved by someone who could just give us the numbers that we want for these different scenarios, so that the user doesn't have to do the math, but the API can do the math, and we can just give them an appropriate suggestion based off of the intent that we read off of the user.
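Part of what's being asked for here already exists at the protocol level: the eth_feeHistory JSON-RPC method, added to clients around the London upgrade, returns recent base fees, gas-used ratios, and priority-fee percentiles that a wallet can feed into its own suggestion logic. As a minimal sketch (the endpoint and response parsing are left to the caller; `fee_history_request` is an illustrative helper, not an existing API):

```python
import json

def fee_history_request(block_count: int, percentiles: list,
                        newest: str = "latest", request_id: int = 1) -> str:
    """JSON-RPC payload asking for `block_count` blocks of fee history.

    The response includes baseFeePerGas, gasUsedRatio, and the requested
    priority-fee (tip) percentiles for each block.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_feeHistory",
        "params": [hex(block_count), newest, percentiles],
    })

# e.g. the last 10 blocks, with 25th/50th/75th percentile priority fees:
print(fee_history_request(10, [25, 50, 75]))
```

The `baseFeePerGas` and `gasUsedRatio` arrays in the response are exactly the raw inputs the estimation ideas on this call would consume.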
D
So it is possible to do the math. Like I mentioned, it's really like a three-dimensional curve, and if you know the inputs for that curve, you can do the math and tell the user: okay, this is what you should do. The hard part is getting those inputs from the user in a digital form. So a user who just kind of vaguely says, you know, 'I'm kind of in a hurry'...
D
That's
not
super
helpful
for
for
the
math
side,
like
turning
I'm
kind
of
in
a
hurry
into
like
this
is
the
digitization
of
my
time.
Preference.
That's
the
real
hard
part.
So
if
you
guys
can
figure
out
or
someone
can
figure
out
how
to
distill
a
user's
time,
preference
and
price
preference
relative
to
each
other
into
like
quantifiable
numbers,
then
we
can
definitely,
you
know,
put
together
a
formula
that
will
tell
them
okay.
This
is
what
you
should
do
based
on.
You
know
all
of
history,
and
you
know
what
we
know
about
the
ecosystem.
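One hedged sketch of the formula idea above, assuming the user's time preference has already been distilled into a deadline measured in blocks: since EIP-1559 lets the base fee rise by at most 12.5% per block, a fee cap that survives worst-case growth until the deadline is straightforward to compute. `suggest_max_fee` is a made-up helper, and the default 2 gwei tip is an assumption, not a recommendation from the call.

```python
def suggest_max_fee(base_fee_wei: int, deadline_blocks: int,
                    priority_fee_wei: int = 2 * 10**9) -> int:
    """Fee cap that stays valid even if every block until the deadline is full.

    EIP-1559 bounds base fee growth at 12.5% per block, so the worst case
    at the deadline is base_fee * 1.125 ** deadline_blocks.
    """
    worst_case_base = base_fee_wei * (1.125 ** deadline_blocks)
    return int(worst_case_base) + priority_fee_wei

# An urgent "next block" user needs a far smaller cap than a patient one:
print(suggest_max_fee(100 * 10**9, 1))   # 114500000000 wei (~114.5 gwei)
print(suggest_max_fee(100 * 10**9, 20))  # roughly 1056 gwei
```

The open problem from the discussion is unchanged: this only works once 'kind of in a hurry' has been turned into a number for `deadline_blocks`.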
D
All
these
things,
I'm
not
sure
if
that's
reasonable
or
realistic
at
all,
because
I
think
like
even
when
I
ask
myself
like
what
is
my
time
preference.
I
can't
put
that
into
a
number
like.
I
don't
know
what
my
time
preference
number
is.
I
just
know
that
you
know
I
kind
of
want
to
go
in
today,
maybe
or
like
you
know,
I'm
gonna,
I'm
gonna
go
to
bed
soon
and
I
want
to
be
sure
it's
in
before
I
fall
asleep.
M
Do you think it would be... So, I can understand trying to extrapolate a user's intent into actual inputs to a mathematical function.
M
I understand that, but I'm wondering if we can chop it up into, like, maybe three or four different categories or boxes in some way. I mean, yeah, I'd have to think about it a bit more, but if that math function is there, then maybe we can give a little bit more thought to how we translate user intent, or how we even get a signal about user intent. I think we have a few ideas about how to get the signal for a user's intent.
M
Then, yeah, maybe we can follow up on that.
D
You might be able to craft some, like, strawman users, where you just describe a particular person, and then you give that fake person some actual numbers to plug into these formulas, and then you say, you know: are you this person, or are you this person, or are you this person? That might be possible.
M
Yeah, yeah, I think that would be helpful. And I guess, from our perspective, I am glad that Barnaby had that slide about how 2x seems generally, on average, too high, because intuitively we also think that we should tighten the multiple that's placed on the base fee and, if anything, play around with the priority fee instead. Because, like, even if you're urgent... if you're urgent now-or-never...
M
You also want a tighter multiple, because you don't want to be potentially waiting around forever. But if you don't care that much, then it's kind of unfair to show you a huge range of prices that you could use; so it's better just to wait around, and maybe you might get dropped if it's really busy.
M
So yeah, I guess I'm just asking for, like, magical estimations that'll work. All right.
G
Yeah, I don't know if we have enough time, but... in Barnaby's slides you saw there was one example of kind of a normal scenario, where there was some variability in the base fee, but I think if you were to create a moving average, you would see oscillations in the base fee across that. And then there was the other scenario, with multiple consecutive full blocks, and the issue on the design side is, right: okay, which of these two scenarios are we currently in?
G
So
how
do
we
reasonably
estimate
whether
somebody
will
be
included
in
the
next
few
blocks,
based
on
which
curve
the
the
the
everything
is
currently
on,
and
it's
almost
impossible
to
tell
like
you,
don't
know
what
is
going
to
happen?
You
don't
know
if
the
trend
is
going
to
continue
upwards
and
it
will
take
a
long
time
for
that
period
of
time
to
pass
with
the
multiple
consecutive
full
parks
or
whether
it's
just
kind
of
the
normal
state
where
things
oscillate
up
and
down
and
with
that
on
the
design
side.
G
It's
really
really
difficult
to
make
a
call
and
say:
okay,
actually
we're
in
this
situation,
and
it's
going
to
take
x
amount
of
time
for
you
to
be
included,
and
it's
also
quite
difficult
to
flag.
Okay,
we're
on
a
we're
on
an
upward
trend
here,
and
we
don't
know
when
this
is
going
to
end.
It
doesn't
install
confidence
and
it's
quite
difficult
to
communicate
and
if
somebody
figures
out
how
to
communicate
these
potential
scenarios,
then
great
perfect.
G
But
I'm
not
sure
whether
it's,
whether
just
having
like
a
way
to
determine
which
of
these
two
trends
you're
on,
is
going
to
be
helpful
to
lots
of
people.
It
might
be
helpful
to
some
definitely,
but
I
think,
yeah,
there's
lots
of
communications
that
need
to
be
done
here
above
actually
determining
those
those
trends.
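Even without solving the prediction problem described above, a wallet can at least label which regime the most recent blocks resemble. This is a rough illustrative heuristic with guessed thresholds, not anything proposed on the call:

```python
def classify_regime(gas_used_ratios: list, full_threshold: float = 0.9) -> str:
    """gas_used_ratios: gasUsed / gasLimit for recent blocks, oldest first."""
    recent = gas_used_ratios[-5:]
    if all(r >= full_threshold for r in recent):
        return "sustained-spike"  # every recent block ~full: base fee climbing
    return "oscillating"          # usage bouncing around the 50% target

print(classify_regime([0.4, 0.6, 0.5, 0.7, 0.45]))    # oscillating
print(classify_regime([0.95, 1.0, 0.98, 1.0, 0.97]))  # sustained-spike
```

As noted in the discussion, this only says which regime we have been in, not when a spike will end, which is the genuinely hard part.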
D
So the core problem here is that anyone who can answer the question of which trend we are on can make far more money by going into finance than we can offer to pay them to help us, because essentially it's the exact same problem as predicting the future price of a stock or, you know, a commodity. It's an attempt to predict future demand for an asset, which comes down to keeping an eye on all the things that are happening in Ethereum, keeping an eye on the news, sentiment...
D
Analysis
like
when
we
get
these
bursts
they're
not
burst
because
of
something
that's
predictable,
they're
always
bursts
of
like
something
that
happened
like
an
event
occurred
in
the
world
that
resulted
in
all
of
a
sudden.
Everybody
wants
to
use
ethereum
now.
That
being
said,
some
of
these
events
air
quotes
around
that
are
seasonal,
and
so
there
really
is,
you
know
at
a
certain
time
of
day
every
day
the
fees
generally
are
lower
than
other
times
a
day.
D
So you look at the base fee over time and then, you know, plot by day, plot by time of day, plot by day and time, and we should see some very strong seasonality. But the ones like we saw earlier, which was like an NFT sale or something, those ones are just effectively random for anyone who's not, you know, an NFT buyer. And, you know, it's NFTs today; it was ICOs before that; it was CryptoKitties before that. Some event occurs in the world...
D
Elon
musk
tweets
something
and
it
triggers
it,
and
so
I
just
want
to
make
sure
everybody's
aware
that
it's
very
unlikely
we'll
ever
fully
solve
that
problem.
The
best
we
can
do
is
capture
the
seasonal
stuff
and
try
to
express
present
that
and
let
people
know
hey
we're
on
the
you
know
morning.
Uptrend
like
every
morning,
it
starts
to
go
up,
so
we're
gonna
estimate
a
little
higher
or
we're
on
the
evening
downtrend.
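The seasonal part is the tractable piece. A sketch of the 'plot by time of day' idea, with an assumed input format (in practice the timestamp and base-fee pairs would come from block headers):

```python
from collections import defaultdict
from datetime import datetime, timezone
from statistics import median

def base_fee_by_hour(samples):
    """samples: iterable of (unix_timestamp, base_fee_gwei) pairs."""
    buckets = defaultdict(list)
    for ts, fee in samples:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        buckets[hour].append(fee)
    return {hour: median(fees) for hour, fees in sorted(buckets.items())}

# Two fake days of data: cheaper at 04:00 UTC than at 16:00 UTC.
day = 86400
fake = [(4 * 3600, 30), (16 * 3600, 90),
        (day + 4 * 3600, 40), (day + 16 * 3600, 110)]
print(base_fee_by_hour(fake))  # {4: 35.0, 16: 100.0}
```

Using the median rather than the mean keeps one NFT-sale burst from distorting the whole hourly profile, which matches the point above that the event-driven spikes are effectively random.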
K
And one thing I've noticed is that it was especially difficult... I mean, it was really like: EIP-1559 went live, and some of the effects were discovered only after the go-live. Is there already, and I would just not be aware of it yet, or is it possible to have, a testnet which has all mainnet transactions replayed, maybe with a delay of 24 hours or something? That would allow us to see the impact of such upgrades.
A
That's a good one, yeah. The two challenges with that are: one, basically, the tech to replay transactions is hard. That's, you know, solvable, but we don't have a team that can help with that. The second part is the money, basically: because transactions on mainnet are worth something, we get different patterns and different incentives to send than on testnets. So it's hard to get, like, a perfect replay, basically.
D
And
to
expand
on
that
a
little
bit,
the
one
of
the
issues
replay
test
nets
is
that
they
very
quickly
fall
out
of
sync,
because
if
you're
replaying
under
a
different
rule
set,
some
transactions
will
fail
that
previous
that,
on
the
main
net
succeeded-
and
so
I
guess
I
guess
in
this
case
you're
sorry
when
we
talked
about
this
before
we're
talking
about
test
nets
for
future
changes.
Are
you
talking
about
tests
for
future
changes?
Or
do
you
want
just
a
a
test
net
that
just
replace
history?
K
What
I'm
thinking
is,
for
example,
for
this
go
live
if
a
week
before
erp
eip
1559
was
was
deploying
on
the
test
net
that
every
day
was
getting
the
main
net
transactions
with
the
delay.
K
We
will
have
seen
some
of
those
issues
that
we're
having
now
a
bit
earlier
with
our
new
ui
new
ux
and
and
and
in
fact
it
has.
I
understand
it's
difficult
right
understand:
it's
not
something
easy
to
do,
but
I
was
thinking
you
don't
really
need
to
reproduce
some
of
those
data
like
you,
don't
need
to
reproduce
just
the
from
or
the
addresses
or
anything.
K
I
think
that
what
matters
here
is
is
the
the
amounts
the
tokens
in
play
and
and
then
the
that
all
the
transactions
that
I
mean
we
got
the
same
flow
of
transactions,
some
number
of
transactions
in
the
system,
so
we
can
really
see
the
impact
that
those
would
have
on
the
gas
and
the
user
drop
into
accuracy.
D
The issue that we run into is that when the rules are different between the two chains, they very quickly fall out of sync with each other. You start with one transaction that fails on the testnet but didn't fail on mainnet, and that leads to the world state being slightly different, and then another transaction fails because the world state is different, and then another one, and this kind of balloons out pretty rapidly. We don't know how rapidly that happens.
D
It
probably
depends
on
specific,
so
the
rule
changes,
but
so
it
means
we
can't
just
like
spin
one
up
and
leave
it
up
forever,
because
eventually
the
world
state
will
differ
so
much
that
we
just
simply
doesn't
make
any
sense
to
replay
anymore
because
everything's
failing
that
being
said,
we
can
do
we
talk
about
doing
things
like
you
know,
having
a
daily
or
weekly
reset
or
something.
So
it's
just
so.
D
We
constantly
do
have
real
real
world
transactions
being
replayed
on
a
test
network
using
new
rule
set,
but
we
just
reset
it
periodically
to
make
sure
the
world
states
say
stay
in
sync,
and
that
is
an
option.
The
like,
I
said,
the
the
core
devs
generally
were
favorable
to
this
idea.
It's
just
a
matter
of
core
devs
are
overworked
and
we
have
to
choose,
and
at
least
for
london
we
decided
not
to
do
this,
but
we
did
talk
about.
D
You
know,
maybe
for
the
next
next
big
feature
fork
or
something
we
will
want
to
do.
This,
like,
like
I
said,
the
general
interest
is
just
a
matter
of
prioritization
yeah.
Thank
you.
E
Oh
yeah,
it
was
to
the
other
thing.
Basically,
it's
not
only
to
say
it's
not
only
personas,
basically,
where
you
need
to
make
preferences.
It's
also
the
transaction
type.
For
example,
I'm
the
persona
that
usually
doesn't
care
about
the
timing,
but
then,
when
it
comes
to
uni
swap
transactions
that
has
a
timeout,
then
it's
a
problem.
E
Then
you
want
it
fast,
and
I
once
made
a
post
on
magicians
about
that
that
we
should
make
a
way
to
signal
that
so
either
via
match
specs,
so
that
contract
authors
specify
that
on
nutspec
or
we
add
it
to
the
rpc
so
that
we
as
wallets
that's
also
good
for
the
user
experience,
because
then
we
have
less
cognitive
load
on
the
user
right.
So
if
they
don't
need
to
decide
it's
better,
but
it's
also
an
important
signal
because
then
also
it's
happening
for
a
lot
of
users.
E
They
don't
really
know
about
the
timeout
or
the
expiry
of
the
transaction,
and
so
basically
the
the
transactions
can
also
signal
that
they
want
in
very
fast.
It's
not
only
personal.
A
We're already a bit over time, but we're still here, so: any final questions or comments?
A
Yeah
and
then
relatedly
does
it
make
sense
to
have
another
one
of
these
calls
probably
not
like
two
weeks
but,
like
I
don't
know,
does
it
help
people
in
like
a
month
or
like
when
people
have
had
more
time
to
to
like
dig
into
this
yeah?
We
don't
have
to
schedule
it
now,
but
yeah.
The
people
generally
want
another
one
of
these
calls
about
1559
and
if
so,
when
would
be
like
the
right
timing.
A
Cool, yeah, okay, so let's aim for roughly a month from now. It's probably easiest to have it be literally four weeks from now, when it's not All Core Devs or something, yeah. So I'll make sure to set that up.
A
Fee market, yeah. There might be... yeah, so we might rename the channels on Discord to have it be just 'fee market'. So if you can't find a 1559 channel: 'fee market' is basically the same thing. It might make sense, I guess, just given this and that we're having another call in a month, so I'm hoping to change that, yeah.
A
Cool. Well, yeah, thanks a lot, everybody, for coming on, and I'll share the information for the next call when it's all set up.