From YouTube: EIP-1559 Gas API Call (Breakout #10)
https://github.com/ethereum/pm/issues/328
B
Yeah, okay, so we are recording. Thanks, everybody, for coming. This is a call to talk about the gas API and how 1559 affects it. If we have time, we can cover any other questions or concerns that folks here have. Trent already shared the agenda in the chat here.
B
Basically, the main topic of discussion today is: what do we do to return the priority fee in the JSON-RPC API? There was already some discussion about that on the issue. Before that, though, I think Trent put in the agenda a presentation by gas API providers. I don't know if there are folks here who've actually prototyped or looked at what a gas price oracle can look like post-1559, but if anybody wants to share that, it's usually pretty helpful to just start off by looking at something. Otherwise we can go right into the API.
C
Yeah, I see there are some Etherscan people here, or if anybody else wants to just jump in, go ahead.
D
Yeah, hi, I'm from the geth team. I can talk about what we have as a gas price oracle now, if someone is not familiar with that already — or does everyone already know that?
B
I think it would be pretty valuable — it was at least valuable for me yesterday and the day before to understand it better. So yeah, I think walking through what you have now and how it's changing under 1559 makes sense. I know that you and Peter posted some comments as well, but just to make sure we're all on the same page.
D
Yeah, okay, I won't go into very fine details, but it's pretty simple, actually. What we had for a very long time for regular transactions was that we basically took the past — I don't know how many — blocks. Actually, it depended on whether you were running a full or a light node: if you were a full node, the gas price oracle took the last 20 blocks.
D
So it's quite a lot, but if you were running a light client it was maybe two. What it did is it took the few smallest gas-priced transactions of each block and found not the median but slightly below that — if we put them in descending order, maybe the 60th percentile or something like that — and just returned that as a suggestion. And what we are currently planning — at least the latest kind of team consensus — is that we are going to keep this mechanism and use it; I mean, we feed the effective miner rewards into it, so that's what it will actually use. This will be a suggestion for the tip, the max priority fee per gas.
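The oracle described above — collect the few cheapest tips from each recent block, then pick roughly the 60th percentile of that sample — can be sketched as follows. This is a minimal illustration, not geth's actual implementation; the block format, function name, and fallback value are assumptions.

```python
def suggest_tip(blocks, lookback=20, per_block=3, percentile=0.6):
    """Suggest a priority fee from the cheapest tips that recently got mined.

    For each of the last `lookback` blocks, take the `per_block` smallest
    effective tips, pool them, sort ascending, and return the value at
    roughly the given percentile of the pooled sample.
    """
    samples = []
    for block in blocks[-lookback:]:
        # cheapest tips that still made it into the block
        samples.extend(sorted(block["tips"])[:per_block])
    if not samples:
        return 1  # assumed fallback (e.g. 1 gwei) when there is no history
    samples.sort()
    idx = min(int(len(samples) * percentile), len(samples) - 1)
    return samples[idx]
```

For a light client, the same function would simply be called with a much smaller `lookback` (the transcript mentions maybe two blocks).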
D
For the fee cap — the max fee per gas — we suggest this tip plus twice the current base fee. It's still a good question how many blocks we should take, and it might depend on certain situations. I also had this proposal that I just posted this morning: it might depend on whether there's congestion right now or not.
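The two suggested fields can be written out as a small helper. A sketch with illustrative names: the factor of two gives headroom for the base fee to rise (it can grow by at most 12.5% per block, so doubling covers several consecutive full blocks).

```python
def suggest_fee_caps(tip, base_fee):
    """Turn a tip suggestion into the two 1559 transaction fields.

    maxPriorityFeePerGas: the suggested tip itself.
    maxFeePerGas: tip + 2 * current base fee, so the transaction stays
    includable even if the base fee climbs for a while after submission.
    """
    return {
        "maxPriorityFeePerGas": tip,
        "maxFeePerGas": tip + 2 * base_fee,
    }
```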
D
We could iterate through the recent blocks and offer different priority fees depending on how urgent it is for you, and maybe this could also be a nice signal for the users to see whether there's congestion or not. I can dig up the link — it's in the 1559 fee market channel. But basically, this is what we want to do: we want to just use this.
D
This
take
take,
take
the
minimum
or
close
to
minimum
tips
of
of
recent
blocks
and
offer
something
below
the
median
india.
So
that's
what
we.
D
B
Yeah, thanks for sharing. Again, on the issue, I think the main concern about the current geth implementation is that if there's a spike in usage, it will likely be short-lived, and the 20-block window is almost remembering too much — looking at too much history — whereas under 1559 things will probably happen
B
much quicker. If there's a spike, it's likely going to be something on the order of less than 10 blocks, and if you're looking back at 20, you might have users overpay slightly.
D
I'm not so sure about that, actually. If there's a spike, the spike is short-lived. So if you take only the recent blocks — if you accommodate yourself to the spike — then you will pay a lot and get in earlier; and if you take the longer history, then you will find a tip that has usually worked in the past. What will happen then is that you will wait out the spike and get in somewhere on the descending edge of the spike.
B
Obviously, if there's no spike at all — if the blocks are pretty constant — it would also work pretty well. If there has been a spike in the last 20 blocks but it's kind of over, it would probably fail. And if there is a spike happening right now, then that means you send your transaction and it just kind of has to wait until the spike is cleared to be included again. Is that roughly right?
D
Well, yeah, if we use a constant setting — I mean a constant setting for how many blocks we look back — then yes, that's right. And thanks for linking my proposal; I think that kind of addresses this, but this is just putting up ideas right now. Okay, so that's what we have now.
B
Got
it
micah
your
hand
is
up.
E
So
I
just
want
to
reiterate
my
broken
recordness.
Most
people
here,
probably
already
know
what
I'm
gonna
say,
but
I'm
going
to
say
it
again
for
the
new
audience,
I'm
generally
against
any
sort
of
priority
fee
estimation.
That's
not
just
what
do
we
believe
the
miner's
min
value
is.
E
The reason for this is that it's kind of self-reinforcing, getting people into these auctions and bidding wars. In most cases it's probably unnecessary, and in the cases that remain, it often can hurt the user as much as it helps them. So — unless we're writing oracles specifically for very advanced users, like bot authors and such, which I don't think any of us are — I really think that for the premium we should just be saying: hey,
E
We
know
that
miners
will
accept
a
premium
of
one
or
two
or
three
or
whatever,
and
that's
unlikely
to
be
changing,
and
so
this
is
what
you
need
to
set
the
premium
to
and
that's
it
like.
I
do
not
think
we
should
be
incentivizing
or
incentivizing
encouraging
and
helping
people
get
into
these
gas
auctions,
because
they're
just
they're
going
to
get
themselves
hurt,
like
things
are
going
to
go
wrong
like
it's
just
for
the
end
user.
It
doesn't.
D
Yeah, I kind of agree, but this is why I'm saying that sometimes it makes sense to look further into the past and say: okay, this is the minimum that has ever worked, and suggest that. But you are talking about using a constant, basically, and the ether price is changing — miner preferences, the technology, a lot of things can change.
E
Yeah,
so
I
think
the
we
we
do
need
to
have
it
be
dynamic,
but
that
dynamicism
should
be
over
like
really
long
time
scales
like
we
don't
I
don't
yeah.
I
want
to
be
cautious
here,
because
it
is
possible
that
there
is
a
little
bit
of
incentive
for
miners
to
actually
have
dynamic
base
fee
or
sorry
dynamic
premium
or
priority
pricing
based
on
current
mev
rewards.
E
We think the clients miners are using have just a command-line option to set your minimum priority fee, and we believe most miners are just setting that minimum to something. And we have seen, over the last ten thousand blocks, that 95 percent of miners have been below two — or have mined a block with a transaction below two —
E
so: set your priority fee to two. And I want to be careful not to get into trying to be too dynamic, not trying to adjust hyper-fast to what we think miners might be changing, because most of the time when that changes, it's just due to a very short-term congestion spike and does not last. So I do think it should be dynamic; we just shouldn't —
F
The value probably needs to be dynamic, but the issue with looking at, let's say, past records of what people have been bidding is that we might be too slow to actually catch that a spike is happening — in which case, while the spike is happening, you're still recommending the minimum tip to users, and at the end, when the spike is over, your indicator will still be trailing these high values, and it might not be that useful. But we do have an objective source that we get for free from 1559 itself; we don't need to look at what users are doing.
F
We
can
simply
look
at
how
full
the
blocks
are,
or
maybe
like
the
two
or
three
recent
blocks,
and
if
we
see
that
two
or
three
blocks
in
the
in
in
sequence
or
even
the
previous
block
was
full,
then
we
know
that
we
are
in
one
of
these
spike
regimes
and-
and
we
don't
need
to
wait
to
see
users
increasing
their
tips
because
they
might
not
do
that
first
by
themselves.
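This fullness-based congestion signal could be sketched as below. The three-block window and 90% fullness threshold are assumptions for illustration, not parameters from any client; under 1559, "full" means gas used approaching the block's elastic limit (twice the target).

```python
def is_congested(blocks, window=3, full_ratio=0.9):
    """Objective spike detector: report congestion when the last `window`
    blocks were all (nearly) full. Uses only on-chain data, so it does not
    lag behind users' bidding behavior."""
    recent = blocks[-window:]
    if len(recent) < window:
        return False  # not enough history to call it a spike
    return all(b["gas_used"] >= full_ratio * b["gas_limit"] for b in recent)
```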
F
They
might
rely
on
wallets,
which
would
do
that
for
them
and
and
second,
even
if
we
wait
for
this
with
the
parameters
that
are
set.
Looking
back
20
blocks
and
looking
at
the
percentile,
it's
not
clear
that
you
would
catch
immediately,
that
the
spike
is
happening
and
you
can
really
do
get
it
quickly
enough
by
looking
at
the
at
the
gas
usage
in
the
block
itself.
So
I
I
would.
F
Well, I think, if you're going to react at all — Micah recommends not reacting at all, and that's definitely a valuable position — but I do think it might be valuable for users to have at least some kind of indication that something is going on. So if you do want this indication, I think relying on the gas used by the previous block, or the previous two or three blocks, would be more accurate than relying on more subjective price points, such as what the users are currently doing.
D
Well,
yeah,
so
this
is
why
I
propose
that
we
should
like
return
a
series
of
suggestions
depending
on
how
urgent
it
is
and
yeah,
so
the
users
could
decide
whether
they
want
to
like
find
the
fight
fight
for
for
for
priority
or
not
and
yeah.
It's
also
good
to
see
whether
there's
actually
had
something
happening
right
now,
but
yeah
always
suggesting
like
to
to
to
to
jump
on
the
spikes.
I
I
don't
think
that's
a
good
idea
offering
it
as
an
option
that
that
might
be
good.
E
Yeah,
so
just
to
reinforce
the
barnaby
says:
if
we
are
going
to
do
reactive,
gas
pricing
to
congestion,
we
should
definitely
use
the
fullness
of
previous
blocks
to
identify
congestion.
Similarly,
when
we're
trying
to
determine
what,
like
the
95th
percentile
minimum
is,
if
we
decide
to
go
with
that,
we
should
use
that
same
block
fullness
to
filter
out
minimums
like
we're,
trying
to
figure
out
okay.
What
what
do
we
think
95
percent
of
miners
have
set
their
min
to?
E
We
should
first
filter
out
any
blocks
that
were
full
or
sorry
any
blocks
yeah,
so
any
boxer
full
filter
those
out
and
don't
count
them
at
all
to
get
those
numbers.
So
that
way
we
are
seeing
just
the
minimums
we're
not
seeing
the
congestion
times
separately
the
thing
to
keep
in
mind,
I
think,
with
this
debate
of,
should
we
be
reactive
or
not?
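The filtering idea — estimate miner minimums only from non-full blocks, since a full block's cheapest tip reflects bidding pressure rather than the miner's configured floor — might look like this. A sketch; the fullness threshold and 95th-percentile choice mirror the numbers mentioned in the discussion, and the block format is illustrative.

```python
def estimate_miner_min(blocks, full_ratio=0.9, percentile=0.95):
    """Estimate the minimum tip most miners accept.

    Takes the smallest tip of each NON-full block (full blocks are skipped
    as congestion artifacts), sorts them, and returns the value at/below
    which ~95% of the observed per-block minimums fall.
    """
    mins = [min(b["tips"]) for b in blocks
            if b["tips"] and b["gas_used"] < full_ratio * b["gas_limit"]]
    if not mins:
        return None  # every recent block was full: no clean signal
    mins.sort()
    return mins[min(int(len(mins) * percentile), len(mins) - 1)]
```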
E
I
know
it
sounds
weird
to
introduce
a
feature
that
we
don't
want
people
to
use,
but
if
we
introduce
them
in
a
way
that
everybody
uses
them,
then
they
become
not
useful
anymore,
like
they
no
longer
serve
a
purpose.
We
very
much
need
to
introduce
this
and
one
way
to
achieve
that
is
by
having
like
this
concept
of
transaction
priority,
like
kind
of
fast,
medium,
slow
or
whatever,
where
the
fast
is
saying.
Yes,
I
want
to
be
reactive
and
as
slow
as
saying.
No,
I
don't
want
to
react.
E
One
caveat
with
that,
though,
is
that
I'm
worried
that
compared
to
the
base
fee,
you
know
if
the
base
fee
is
100
and
the
fast
medium
slow
is
like
two
one.
Two
and
three
like
everybody
will
always
choose
fast
and
now
we're
back
in
that
same
situation,
where
everybody
is
choosing
fast,
at
which
point
it
is
no
longer
helping
anybody
because
everybody's
following
the
same
strategy
like
in
order
for
this
to
work,
we
need
people
to
be
following
different
strategies.
If
everybody
follows
the
same
strategy,
the
strategy
stops
working.
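The fast/medium/slow idea under discussion could be sketched roughly like this. This is entirely hypothetical — no client implements it this way; in particular, the congestion opt-in for "fast" and the interpolation for "medium" are assumptions made for illustration.

```python
def tiered_suggestions(miner_min, congested, spike_tip):
    """Hypothetical priority tiers:
    slow   - never reacts; always the believed miner minimum,
    fast   - opts in to bidding with a higher tip during congestion,
    medium - halfway between the two (an arbitrary interpolation)."""
    fast = spike_tip if congested else miner_min
    medium = max(miner_min, (miner_min + fast) // 2)
    return {"slow": miner_min, "medium": medium, "fast": fast}
```

Note the caveat from the discussion still applies: if every wallet defaults everyone to "fast," the tiers collapse back into a single strategy and stop being useful.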
A
So
one
question
I
want
to
ask
is
a
bit
tangential
to
the
discussion:
is
that
we're
kind
of
trying
to
solve
the
whole
gas
price
suggestion
problem
before
we
actually
see
how
the
network
behaves,
and
so
my
personal
two
cents
would
be
that
so
the
current
model,
that
is,
that
get
it
get
implemented,
is
essentially
just
continuing
the
old
algorithm,
and
I
completely
agree
that
this
might
be
completely
unsuitable
for
certain
tasks
or
certain
scenarios,
but
it
kind
of
worked
until
now.
A
So
wouldn't
it
be
kind
of
prudent
to
wait
until
mainnet,
actually
forks
over
and
see
how
the
base
fluctuates
and
how
tips
fluctuate
before
we
try
to
solve
this
problem.
So
I'm
kind
of
the
only
thing,
I'm
afraid
of
is
that
we're
coming
up
with
a
solution
to
the
wrong
problem,
because
we
don't
know
what
the
problem
gets
until
the
fork.
A
Yeah, of course, but essentially, if we continue our current algorithm, then at least we know how wrong it is. Micah had a really nice example: if the base fee is a hundred and the tips are one, two, and three, then it doesn't really matter — and this is exactly the problem: we don't know how the tip will fluctuate in comparison with the base fee.
H
Yeah, for me it was just coming back on Micah, but he kind of answered it. The big one for me is that I personally believe I would rather have people polling the nodes to figure out a gas price than a third-party API, and in that case we're always going to have to be competitive to some degree — there has to be some level of competitiveness there.
H
Obviously
the
issue
being,
naturally
that,
like
we're
gonna,
run
to
the
same
same
problem,
we
have
now
everybody
just
competing
for
astronomically.
High
prices
is
a
problem,
but
in
the
case
where,
like
you
know,
we
have
products
that
we
use.
If
we
don't,
we
try
using
the
node
and
we
actually
had
to
switch
off
of
geth
and
open
a
theorem
just
like
we
couldn't
rely
on
the
node
for
the
gas
price
anymore
and
now
we're
using
a
third
party,
which
is
not
what
I
want
to
be
doing
right
so
like.
H
E
So I think there is a simple thing we can do that has a good chance of working for launch, and then we can re-evaluate once we have more data: encourage the client devs to have a hard-coded default for the min priority fee that miners use, and a hard-coded default for the priority fee that gets returned if you ask for a gas price recommendation — and make sure those two are the same thing.
E
If we can get all the clients to kind of just agree — hey, for miners, this is the min by default, and our users will get this same min — then I think we have something that can work out of the gate. And my guess is that most miners are probably going to run stock out of the gate and similarly watch and see before they crank up their numbers. So we can set that to one, we can set it to two, we can set it to five, you know.
E
We believe that one or two is probably the right number — or say it's five, just because around launch that will probably be inconsequential compared to the base fee anyway, so people will mine at five, and it means it's less likely that miners are going to manually adjust it. Again, that requires all the clients kind of agreeing: hey, this is our launch number, just to feel things out. But I think it's really simple, and it gets us to a point where we have more data.
A
So
a
counter
argument
to
that
would
be
that
currently
the
gas
prices
fluctuate.
I
mean
I
have
no
idea
what
it
is
currently
lasts.
A
couple
of
days
ago
it
was
around
30
a
week
before
that
it
was
around
100..
So
you
have
a
quite
large
fluctuation
with
me,
which
means
that
the
node
has
to
fluctuate
along
with
the
gas
price.
Otherwise,
your
the
transaction
you
make
will
never
get
included.
A
No,
I'm
I'm
talking
about
internal
both
that
one.
If
I,
if
you
want
to
submit
a
transaction
by
get,
then
your
assumption
is
that
the
transaction
will
go
through
reasonably
fast
now,
if,
if
cat
will
always
tell
you
that
the
tip
is
to
peek
away
and
the
base
keys
whatever,
then
probably
when
others
are
paying
100
feet
away
for
the
tip,
I
mean
good
luck
with
your
two
gigahertz.
B
Yeah-
and
I
think
so,
this
is
yeah-
the
failure
mode
of
heart,
basically
hard
coding.
The
base
fee
works
only
when
there's
not
a
spike
right.
So
what
you're
the
trade-off
you're
saying
there
is
like
you,
won't
you're
guaranteed
to
like
not
overpay,
when
there's
not
a
spike,
but
if
there
is
a
spike
you'll
be
way
underpriced
and
then
you
need
some
other
way
to
to
estimate
what
the
right
base
right.
The
right
priority
fee
is.
E
Yeah
exactly
and
the
caveat
there
is
that
we
expect
spikes
to
be
both
rare
and
short-lived,
and
so
for
users
that
are
just
using
the
default.
They
will
probably
still
get
through
like
as
long
as
you're
setting
like
base
feed
times,
two
or
whatever,
like
this
common
people.
Talk
about
you'll,
probably
get
through
in
almost
all
cases
like
it.
Just
might
take
you
until
at
the
end
of
the
spike
and
the
spikes,
like
you
know,
seven
blocks
or
whatever
and.
B
Not
just
the
spikes
yeah,
there's,
there's
two
cases
where
you
won't
get
in
it's
one,
if
there's
a
spike
and
two,
if
there's
a
high
mev
transaction-
and
this
is
why
selling
a
constant
is
a
bit
harder.
Barnabay
has
made
some
some
some
graphs
about
this,
but
basically,
if
you
know,
if
a
block
has
a
really
high
mvv
transaction,
the
opportunity
cost
of
being
uncold
is
is
quite
high.
So
it's
it's
kind
of
unlikely
to
include
anything
with
this
kind
of
hard-coded
tip.
B
It still makes sense for miners to include those transactions if the tip is high enough, but the top 25% probably just won't include transactions with a low tip. So that's the other case where you're just kind of missing out. Last time I checked, about 35-40% of blocks have MEV. So that means, statistically, if you're really unlucky, you send your transaction,
B
The
block
has
a
ton
of
mvv
in
it,
but
then
you
know
the
block,
after
probably
doesn't
have
a
ton
and,
and
you
get
into
that
block
but
yeah.
It
is
a
case
where
like-
and
I
don't
think
the
current
gas
price
oracle
can
really
pick
it
up.
B
Like
it'll,
probably
pick
up,
you
know,
what's
the
sort
of
average
longer
an
average
and-
and
you
know
looking
at
it
right
now,
it
would
be
like
two
way
would
compensate
for
the
uncle
risk,
accounting
for
like
something
like
the
75th
percentile
of
mev,
but
you're
yeah
you're
not
going
to
be
included
in
those
blocks
where
there's
like
a
10,
eth
front-running
opportunity.
E
Yeah
with
the
again
the
cap,
my
caveat
here
is
that
we
we
should
do
this
as
a
launch
thing,
with
plans
to
change
it
in
the
future,
and
the
reason
I
think
this
is
fine
is
because.
E
Okay,
so
the
reason
I
think
this
is
fine
is
because,
on
launch
day,
I
find
it
very
unlikely
that
all
the
miners
are
going
to
all
of
a
sudden
have
super
advanced
pricing,
gas,
min
pricing
strategies
already
coded
into
a
patch
for
geth
or
whatever
minor
they're
running,
even
without
having
any
data
on
one
five.
I
just
like
us
like
remember:
miners
are
going
through
this
exact
same
process
as
we
are
where
they
have
no
data.
They
have
no
idea
how
things
are
gonna
work
out
in
the
wild.
E
They
don't
have
the
guest
code
to
work
on
yet
so
they
can't
even
start
their
patch
until
after
we
get
our
release,
candidates
out
and
so
like,
if
we
just
plan
on
having
this
like
this
is
our
kind
of
launch
thing
to
get
to
gain
more
data,
and
we,
you
know
in
a
couple
weeks,
we'll
change
it
or
in
a
month
we'll
change
it.
I
think
that's
safe,
like
I
don't
think
we
have
to
worry
too
much
about
like
a
large
percentage
of
miners
having
hyper
advanced
gas
pricing
strategies
on
launch
day.
I
So
I
I
I
want
to
bring
up
a
point
which
is
like
on
launch
day
on
the
day
of
the
fork.
Most
people,
most
clients
who
are
sending
transactions
are
probably
going
to
continue
sending
legacy
transactions
until,
like
the
market
is
stabilized
or
they'll,
gradually
roll
that
out
to
something
and
those
folks
are
gonna.
You
know
many
of
them
still
rely
on
the
e
gas
price
api
and
assuming
that
still
exists-
and
you
know
at
least
is
backwards,
compatible
and
continues
to
return
the
same
implementation
for
legacy
transactions.
I
That
means
that
folks
are
essentially
going
to
be
like
the
majority
of
the
market
is
going
to
be
sending
legacy
transactions
with
max
fee
set
to
max
fee
and
max
priority
fees
set
to
the
same
thing,
which
I
believe
means
that
the
majority
like
unless
we
are
committed
to
like
breaking
eve
gas
price
and
getting
rid
of
that
api
all
together.
We
are
the
fact
that,
like
clients,
you
know
guest
is
going
to
be
de
facto
making
pricing
recommendations.
Anyways.
Is
that
correct.
A
Is
correct
almost
so,
it
doesn't
really
matter
what
you
have
to
eat
gas
price
or
not.
Guest
price
10
point
because
legacy
transactions
still
only
have
one
gas
price
fields
which
get
interpreted
as
both
the
different
I
mean
as
both
these.
So
as
long
as
you're
sending
a
legacy
transaction
doesn't
matter
how
you
estimate
the
gas
price,
it's
still
going
to
burn
block.
E
So
I
think
I
see
the
difference
here.
I
think
peter
is
talking
about
people
who
send
their
transactions
unsigned
to
geth
and
then
guest
fills
them
in
and
signs
them
and
submits
them.
I
think
yuga
and
other
people
are
talking
about
people
who
ask
geth
for
the
gas
price,
and
then
they
fill
out
their
own
transaction
in
a
script
or
an
external
service,
sign
it
and
then
give
that
to
guest
to
submit
to
the
chain.
A
Yes — for that, there was a PR to the spec repo or whatever about the new RPC endpoint, so we did introduce that. We do have the — I don't know what it's called in the EIP — endpoint to actually return just the tip, and then, okay, we have a separate endpoint.
A
So
if
you
want
to
submit
1559
transactions,
we
do
have
a
separate
endpoint
to
specifically
give
you
a
tip-
and
we
do
not
have
an
end
point
to
give
you
a
fee
cap,
because
it's
so,
if
you
don't
specify
you
will
just
default
to
the
tip
plus
to
the
base
keys.
I
No, no worries at all. I guess the only point I'm making is that it's clear that clients are each going to make recommendations — there's no way around that, because there are many, many people who rely on these APIs, on the eth_gasPrice API specifically. So the community is de facto making a recommendation about how to price 1559 transactions, because legacy transactions can be interpreted as 1559 transactions. That ship has essentially sailed, I think.
A
And then, if you have a network with 90% legacy transactions, you need to create your 1559 transactions in a way that they can actually compete with the legacy transactions. Because if the legacy transactions are paying 10x the tip, then it doesn't matter how nice an algorithm you come up with for the 1559 transactions: they won't get included, because they're just always underpriced compared to the legacy transactions.
B
So
I
guess
one
thing
I'd
be
curious
to
hear
kind
of
people's
thoughts
on
this
greg
you
kind
of
mentioned
earlier.
You
know
you
see
it
as
like
a
bad
thing
to
query
like
a
third-party
service
to
get
more
precise
gas
price
estimates.
At
the
same
time,
it
kind
of
feels
like
a
separation
of
concern.
Issues
like
where
you
know
guess
gets
like
main
functionality
is
not
to
be
like
a
gas
rights
oracle
right,
it's
to
be
a
node
and
to
submit
some.
B
You
know
reasonable
estimate
for
the
gas
price
and
it
does
feel
like
you
know,
1559
has
like
a
much
broader
design
space
for
like
gas
price
oracle.
So
I
I'm
curious
like
what
what
people
feel
you
know
like
if
geth
has
like
this
good
enough
kind
of
backwards
compatible
solution,
that's
like
not
optimal.
In
all
cases,
you
know.
B
Does
it
make
sense
to
have
folks
like
eat
gas
station
gas
now
and
what
not
be
the
ones
who
kind
of
you
know
come
up
with
like
fancier
apis
that
do
look
at
the
block
history
that
that
do
help
with
this
use
case,
like
I
guess
I
know
more
granular.
If,
if
you
want
like
use
cases,
I
don't
know
if
people
have
thoughts
on
that
rick,
I
see
your
hand
is
up.
L
Hi, yeah, for me personally, I feel like geth is the best place to put an oracle, because everything already kind of needs it — geth itself needs it — and it's a point I can kind of trust. If a person is trusting Infura, they're going to continue trusting Infura; it's weird that, in order to do anything, I trust Infura and now also some gas-price-something-something — especially when all the data is sitting in memory in geth.
L
It
has
to
for
other
purposes
anyways,
and
so
that's
kind
of
my
hope
is
that
I
mean
at
some
point
I
saw
somebody
else
recommend
it
as
well
as
even
like
a
histogram
or
something
of
gas
braces,
but
it
seems
like
there
should
be
some
way
to
like
bubble
up
information
in
a
call
that
they
can
be
used
by
a
more
clever
oracle.
L
Even
if,
if
geth
doesn't
want
to
be
the
final
call,
if
they
can
bubble
up
enough
information,
that's
sitting
there
literally
in
memory,
it
doesn't
have
to
hit
the
disc
or
anything
in
my
mind.
So
that's
kind
of
my
take
on
that,
like
in
ethers,
when
you
connect
to
something
you
connect
to
something.
If
you
call
get
gas
price,
it's
not
going
to
start
then
trusting
some
some
other
service
for
the
gas
price.
J
Yeah,
I
agree
with
rick
in
that
it
would
be
great
if
geth
could
solve
95
of
the
cases
and
we're
mentioning
that
we
still
haven't
figured
out
exactly
how
to
solve
the
difficult
cases
like
the
bot
writers
or
the
traders
or
people
who
need
to
get
in
during
a
spike.
I
think
that
would
be
the
place
where
we
would
rely
on
gas
now
any
gas
station
or
more
complex
casper
gas
price
oracles,
but
for
the
average
user
I
would
love
if
get
can
provide
the
whole
solution.
A
Yeah,
so
an
interesting
question
from
from
get's
perspective
is
essentially
currently
provides
one
api
endpoint
now,
given
that
1559
will
arrive,
let's
say
we
will
have
two
api
inputs,
one
collected
transactions
and
one
for
1559
transactions.
A
Now
our
assumption
up
until
this
point
is
that
get
is
kind
of
work
operates
in
this
headless
mode,
where
an
external
app
just
does
that
to
somebody's
transaction
and
then
gat
needs
to
figure
it
out
now.
From
this
perspective,
I
don't
think
we
can
make
it
much
smarter
now.
I
think
it
will
show
suggestion
to
maybe
have
an
additional
api
endpoint,
maybe
automatically
additional
part.
A
It could be smarter and look at various metrics and try to give some options to the user — but for such an API, essentially, you need something in front of geth that can actually show this to the user, or make heads or tails of the recommendations and the variations, and then the user picks one.
D
Yeah,
I
I
the
way
I
imagined
my
suggestion.
Yes,
so
this
is.
This
is
why
I
think
it's
a
good
thing
if
the
like
more
flexible
thing
is
like
a
general
generalization
of
the
default
thing,
and
we
should
definitely
leave
the
default
default
api
and
I
also
want
it
to
work
more
or
less
the
way
it
did
work
before,
because
yeah
it's
better
to
not
break
things
that
already
exist.
J
Can we get a confirmation of what Barnabé asked in the chat, about what exactly this gas price API would be returning? Is it going to be base fee plus the geth estimation of max priority fee?
A
So currently the gas price workflow within geth just looks at the past blocks and tries to see, for each block, what were the minimum three tips actually paid to the miner, and then, based on that, it takes the 60th percentile — so it essentially tries to take, if not the smallest tips within the blocks, then something very close to the smallest tips.
D
Yeah,
but
I
think
the
best
question
was
that
what
will
the
old
guess
at
gus
price
api
recommend.
A
So
essentially,
I
was
saying
that
internally
get
calculates
a
recommendation
for
the
tip
and
then
for
the
old
east
gas
price.
We
just
add
the
current
base
fee
for
that
to
that
tip,
and
essentially
that
way
the
basically
gets
burned
and
the
the
tip
that
the
miner
gets
will
be
more
or
less
what
the
miners
were
getting
in
the
previous
blocks.
So
the
miners
should
be
happy
with
that.
So.
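The backwards-compatible eth_gasPrice behavior described here reduces to a one-line rule (function name is illustrative):

```python
def legacy_gas_price(base_fee, tip_suggestion):
    """eth_gasPrice for legacy transactions: current base fee plus the
    internal tip recommendation. A legacy tx priced this way burns the
    base fee and leaves the miner roughly the historical tip."""
    return base_fee + tip_suggestion
```

Since a legacy transaction sets both 1559 fields to this single value, the miner's effective per-gas income is the returned price minus whatever the base fee is at inclusion time.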
J
Thank
you.
Can
I
ask
you
for
a
quick
follow-up
on
that?
What
is
going
to
be
great
behavior
if,
if
it
sees
a
transaction
on
the
mempool
with
a
base
fee,
that's
below
the
current
block?
Is
it
going
to
keep
it
on
the
member?
Is
it
going
to
drop
it.
D
Yeah
just
real
quickly
yeah,
so
I
don't
want
to
again
go
into
details,
but
yes,
we
do
keep
if,
if
there's
so
so
we
do
do
keep
transactions
in
the
manpod
that
are
currently
not
includable.
If
they
have
a
high
fee
cap
because
then
they
will
surely
become
includable
really
soon.
So
so
what
we
do
is
that
for
most
of
the
pool
we
have
this,
we
recalculate
the
actual
binary
reward
based
on
the
current
base
fee,
the
latest
base
fee
and
we
prioritize
transactions
based
on
that.
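The effective miner reward used for this ordering follows from the 1559 fee rules: per gas, the miner receives the tip, capped by whatever is left of the fee cap after the base fee. A sketch with illustrative field names:

```python
def effective_miner_reward(tx, base_fee):
    """Per-gas amount the miner would actually receive if the transaction
    were included at the given base fee. A negative result means the fee
    cap does not even cover the base fee, i.e. not currently includable."""
    return min(tx["max_fee"] - base_fee, tx["max_priority_fee"])
```

Re-running this against the latest base fee is what lets the pool reorder itself as the base fee moves block to block.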
D
But
there's
like
a
little
space
reserved
for
those
transactions
that
would
like
fare
very
badly
in
this
comparison,
but
still
have
a
high
fi
cap
or
max
fee
and
therefore
they
are
worth
keeping
so
that
because
they
will
be
includable
in
the
next.
I
don't
know
five
blocks,
probably
so.
Yeah.
A
Perfect,
thank
you
essentially
about
previously
so
currently
transaction
pool
maintains
4000
transactions
and,
with
this
update
at
1559,
we
added
another
1000
transactions,
whose
purpose
is
to
be
those
transactions
which
cannot
currently
be
executed
because
the
base
fee
overflows
or
underflows
or
whatever,
but
but
otherwise,
they
kind
of
look
good.
F
And
the
reason
I
was
asking
is
because
there
is
the
intuition
that
legacy
users
who
are
sending
the
old
format
transaction
will
always
be
grossly
overpaying
because
they
have
their
max
fee
equal
to
the
max
priority,
etc.
But
actually,
if
you,
if
your
api
returns,
the
base
fee
plus
an
estimation
of
the
max
priority
fee
and
if
the
max
priority
fee
of
this
legacy
user
is
really
large
over
time
base.
F
Fish
should
kind
of
try
and
compensate
for
that,
and
basically,
we
will
sort
of
match
the
the
price
levels
that
these
legacy
users
are
sending
initially,
which
means
that
once
that
happens,
the
actual
priority
fee
that
these
legacy
users
are
sending
will
should
should
be
pretty
small
and
should
be
once
again
close
to
the
minimum
that
miners
would
accept
and
so
legacy
users
are
actually
a
bit
hampered
by
this,
because
they
are
recommended
prices
which
are
close
to
basically,
which
means
that
any
small
fluctuation
of
ones
of
the
base
fee
means
they
are
priced
out.
K
I'd like to ask a follow-up question to Peter's comment about the mempool structure. Currently the mempool is divided into two parts, the queued and the pending. Did I understand correctly that there is now going to be a new component of the mempool that contains these high-max-fee transactions for which the base fee is insufficient in the current block?
D
Yeah,
so
this
is
a
different
division,
so
cued
and
pending
that's
like
per
account
thing
and
it's
about
the
ordering
of
of
like
sequential
transactions,
but
so
there's
there's
like
a
big
heap
for
all
the
or
we
had
one
big
heap
for
all
the
the
remote
transactions
and
and
yeah.
D
So
that
was
so
that's
that
that
that
priority
heap
was
for
for,
like
eviction
of
underpriced,
very
low
price
transactions,
and
this
is
what
has
changed,
and
this
is
now
that
works
that,
yes,
if,
if
if
it
falls
out
from
one
queue
that
is
based
on
current
miner
reward,
then
it
still
has
a
chance
to
stay
in
the
second
queue.
That's
based
most,
that
is
just
based
on
vcap
on
max
fee.
So
yeah
and.
K
This is a new queue. Does this additional queue consume additional slots, beyond the 4,000 we have now for the...
D
K
D
We did not want to break the existing situation, so we raised the mempool's size slightly. Now we have 4,000 slots sorted by current miner reward and an extra 1,000 sorted by fee cap, which is, I think, affordable. It is also guaranteed that it will not work any worse than before, at least if the code is not broken or something, yeah.
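A toy sketch of the two-queue idea described here (this is an illustration of the concept, not geth's actual implementation; the capacities and the reward formula are simplified): a primary pool ranked by current miner reward, plus an overflow pool ranked purely by fee cap, so that transactions priced out by the current base fee can survive.

```python
import heapq

def split_pool(txs, base_fee, main_cap=4000, overflow_cap=1000):
    """txs: list of (max_fee, max_priority_fee) tuples.
    Keep the currently-executable txs with the highest miner reward
    (min(priority_fee, max_fee - base_fee)) in the main pool; txs
    that don't make it there may still survive in an overflow pool
    ranked purely by fee cap (max_fee)."""
    def miner_reward(tx):
        max_fee, priority_fee = tx
        return min(priority_fee, max_fee - base_fee)

    executable = [t for t in txs if t[0] >= base_fee]
    main = heapq.nlargest(main_cap, executable, key=miner_reward)
    rest = [t for t in txs if t not in main]  # fine for a toy example
    overflow = heapq.nlargest(overflow_cap, rest, key=lambda t: t[0])
    return main, overflow
```

A transaction whose max fee is below the current base fee is not executable, so it never enters the main pool, but its fee cap can still keep it alive in the overflow pool until the base fee drops.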
A
One slight clarification, or precision, that I wanted to make: this queue split isn't really a split, and isn't really introducing any new queues.
A
B
Any more questions on the gas price oracle? There was one other thing on the agenda that I just want to make sure we get to. We have 10 minutes, so it feels like a natural transition.
C
E
I've seen a lot of people comment that they want to avoid centralized oracles, which I am 100% on board with. I think the thing to keep in mind is that we need to drop our understanding of the old system and think about the new one. In the old system, in order to build an oracle, you basically needed to monitor the pending pool, have access to large amounts of data and to the flow of transactions. It was really complicated.
E
These new oracles should be mostly implementable as just a JavaScript library; it'll be like three functions long, and you can just copy and paste it into any piece of code. We can have gists that have them, there will be GitHub repos that have them, et cetera. You don't need this high-frequency data access.
E
Once we return those two pieces of data, though, everything else should be calculable with a small JavaScript library. You don't need more data than that, like you used to, and so I don't think we need to worry about centralization of oracles like we see with GasNow or whatever, because the oracle is simplified so much that it fits in a library.
E
As long as we have the data we need from the clients. I would much rather see these endpoints in the clients return the data that we need, and this is what RicMoo was talking about: there is some data we do need from the clients, and we need endpoints to get it, like, for example, a histogram of miner priority fees. But once we have that data, every wallet can use their own library; they have their own little oracle, and they can tweak it and tune it.
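As a rough illustration of how small such an oracle can be (my own sketch, not any wallet's actual algorithm; the percentile and headroom parameters are arbitrary choices): given the latest base fee and a list of recently accepted miner tips, a fee suggestion is just a percentile pick plus some base-fee headroom.

```python
def suggest_fees(base_fee: int, recent_tips: list[int],
                 percentile: float = 0.5, base_fee_headroom: float = 2.0):
    """Toy fee oracle: pick a priority fee at a percentile of recently
    accepted miner tips, and give the max fee enough headroom over the
    current base fee to survive several consecutive full blocks (the
    base fee can rise at most 12.5% per block)."""
    tips = sorted(recent_tips)
    priority = tips[int(percentile * (len(tips) - 1))]
    max_fee = int(base_fee * base_fee_headroom) + priority
    return {"maxPriorityFeePerGas": priority, "maxFeePerGas": max_fee}
```

Any unused portion of the max fee is refunded, so generous headroom costs the user nothing unless the base fee actually rises that far.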
A
Yeah, I kind of agree that that is a nice approach, just exposing data. One thing I still wanted to highlight is that the base fee is exposed already, because it's part of the block headers, so you can always retrieve the base fee of the current block.
A
If you just retrieve the header, you have the base fee and you can see whether the block is full or not. So if you must calculate the base fee for the next block, you could, but I don't think anyone wants to estimate that close to the limit.
E
I think some will. It'd be nice if we could just have the endpoint, because calculating the base fee for the next block is kind of complicated: you do need the full transaction list, or you at least need the gas used for the block. If you have the gas used for the block and the base fee from the previous block, it's already there.
E
B
I tend to agree that, over time, because the estimation used to be so complicated and now becomes simpler, it probably makes sense for wallets to write some of it themselves. But I do appreciate that this is a transition and you want things to be smooth, so yeah, I feel like that's...
B
...probably something we'll gradually see happen. Maybe one thing I can follow up on is how we actually provide this kind of base implementation in JavaScript that helps you do a good estimation and shows people that it's not rocket science, and that we can do it quite easily. Since we only have five minutes left, though, and this is kind of related to the same topic:
B
A few folks asked about having a JSON-RPC endpoint for the next block's base fee. I just wanted to check, both with the people here and with the geth team, how valuable and easy that is, because it is easy to calculate in a way, but you do need to actually look at the spec for 1559.
B
A
M
A
E
I would be happy to work with RicMoo to make sure that ethers.js can calculate the base fee of the pending block from the latest block's base fee. I think it's simple enough that once JavaScript has it, you can just copy that into whatever your language of choice is; it shouldn't be too hard. It already exists in Python.
L
Yeah, currently what I've been doing in my application of EIP-1559 is I just get block negative one and take the base fee of that. My one concern: is this get-pending-block new, or is it something that came exactly with 1559? Because part of ethers right now detects whether or not the network supports EIP-1559 by checking whether the previous block has a base fee on it.
E
L
A
L
...number. You are retrieving block minus one, so that's, I mean...
A
L
A
Yeah, so at least in geth, minus two is the pending block, but I don't know if you can actually pass it. Okay, let me just check which endpoint.
E
M
A
Well, I mean, fine, I do understand it: sure, the base fee is probably five bytes and the header is 500, so from that perspective, yes, you do waste a lot of data. The question is whether that is too much or not, and it's a valid question. So I'm not saying we should not add a base fee endpoint; I'm just saying that we can do it currently too, so it might be worthwhile to see how people use it and then add the endpoint that's actually...
B
...needed. We have two minutes left; any other quick concerns that people wanted to bring up?
N
I just had a quick comment. I'm not sure what the plan is after this, but I was struggling to follow along in some parts, so could someone give me a summary? It sounded like there are going to be certain phases. There is still a little bit of debate about exactly what the geth client will be providing, and it sounds like the gas station APIs will also potentially be providing some extra fancy features, or not. As a wallet,
N
we would still rather prefer to be able to get information easily and digestibly, with rich content, from an API, if that's possible from geth; we don't want to have to constantly be polling each of our clients for the last x number of blocks. So it'd be great if both were provided, from an API standpoint as well as from the clients directly. But yeah, if you could summarize what the different phases are for rolling out, that would be great.
B
Sure
so,
right
now
or
oh
no,
like.
A
One thing before we close this call: I think Michael mentioned that it would be beneficial for geth, or Ethereum clients in general, to expose certain past historical data,
A
histograms, say, of who's been paying how much to which miner. Providing a gas oracle that works on top of these is kind of hard for geth, because it's an API that we cannot just change afterwards; if somebody relies on it, we'd be breaking them across the network. However, an API that just provides data that others can build upon can remain stable.
A
So if we just provide an API that returns a histogram of priority fees paid, at worst nobody is going to use it, but we don't need to change the API; it cannot be wrong. So I think it might actually be a really good idea to expose this information. Then anyone can build a gas oracle on top if they want something custom, and if something turns out to be nice and stable, we can also ship that within geth.
A
A
E
Yeah, just to keep it quick and tie things up: my recommendation is that, like I said, geth returns just some data. Some of this is already returned, so I'm just going to try to be all-inclusive here. That data would be: the base fee of the latest block, the base fee of the pending block, the fullness of the latest block, the fullness...
E
I guess, yeah: so the fullness of the latest block, the base fee of the latest block, the base fee of the pending block, and then a histogram of the minimum, the lowest gas price accepted, over the last n blocks, with full blocks filtered out. That "full blocks filtered out" part, I think, is critical for getting the most useful data here, and I think with that data anyone can build an oracle.
E
You should be able to build most of the types of oracles I've seen people propose with just a handful of lines of code in any language.
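A sketch of the filtering idea just described (my own illustration; the 95% fullness cutoff and the input shape are assumptions): collect the lowest tip each miner accepted per block, skipping full blocks, whose minimum reflects congestion rather than the miner's true floor.

```python
def min_tip_histogram(blocks, fullness_cutoff=0.95):
    """blocks: list of dicts with 'gas_used', 'gas_limit', 'base_fee'
    and 'gas_prices' (effective gas prices of the block's txs).
    For each non-full block, record the lowest priority fee the miner
    accepted; full blocks are skipped because their minimum tells you
    about congestion, not about what the miner would accept."""
    minima = []
    for b in blocks:
        if b["gas_used"] >= fullness_cutoff * b["gas_limit"]:
            continue  # full block: minimum tip is not informative
        tips = [p - b["base_fee"] for p in b["gas_prices"]]
        if tips:
            minima.append(min(tips))
    return minima
```

The logic is that in a non-full block the miner had spare room, so the cheapest included transaction reveals the lowest tip the miner was actually willing to take.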
L
Quick idea: along with the histogram of gas prices, maybe also a histogram of full blocks, if that even makes sense, so that someone can know how full blocks are.
E
I think it's definitely useful and interesting data, and I can imagine someone wanting to write an oracle that takes that into consideration: oh, you've noticed there's a lot of volatility in block fullness lately, so we're going to change our strategy. So let's add it in; a stretch goal would be a histogram of block fullness over n blocks.
L
A
L
A
...idea for the concept, yeah. I think it would be super nice if we could just write up a small brain dump of what we would like to see, and then we can see how we could expose the whole thing, because I guess gathering all that data and exposing it is not particularly complicated. It's more about figuring out what data we actually want to expose, and in what format.
E
B
Yeah, okay, sure, I'll do that. I'll send you something; I'll post it in the 1559 fee-market channel on Discord, and if folks want to comment there, that would be really valuable. I'll put together a HackMD or something that anyone can edit. Great. This was pretty helpful, and I suspect we'll probably have another one of these calls in a few weeks, once we actually have 1559 on a testnet.
B
That might also make things a bit more concrete. In the meantime, if you do want to play in a very experimental way, which I think is fine, we do have a devnet called Calaveras that's up and running; there's a spec for it in the GitHub specs repo. Let me just link it here in the chat.
B
If anybody wants to check it out, there's very basic RPC support and whatnot, but it allows you to send transactions, and if you have your own tooling to play with them, that could be useful.
A
L
B
The explorer is there already; I don't know about the RPC node. Okay, yeah. The explorer is linked in the spec, and there's an ethstats and a faucet as well.
B
Okay, last quick question: what's the parameter to sync geth? Oh, perfect.
B
Cool, okay. Well, thanks everybody, and talk to you all, or at least part of you, in the coming weeks.