From YouTube: Ethereum Core Devs Meeting #56 [2019-03-01]
A: Let me just pull up my update real quick. The Cat Herders are going to publish a more official blog post that gives more details on it, but I wanted to clear up some things about the ProgPoW audit from the perspective of the Ethereum Cat Herders, who were designated to try to organize one and also to get signals from the community on what was going on with the audit. So, first of all, there are two components to the audit.
A: …than the other in an unfair way. Whiteblock has a bounty out right now on both Bounties Network and Gitcoin to perform that work; that's to try to raise funding to do benchmarking on ProgPoW. The other component of the audit is to see how long it would take an ASIC company to actually build a ProgPoW ASIC, and how performant it would be compared to a GPU — by a factor of like 1x or 2x or 10x.
A: So we can see if it's even going to be worth it to implement, or if someone is just going to make another ASIC in three months and all our work will be for nothing. Since the work isn't entirely done yet, I feel like that would be a more major factor contributing to whether or not to go forward with ProgPoW. As far as signals from the community, we have what I'm calling a hash vote, but I think there's a better name for it.
A: There's also a coin vote that is also overwhelmingly yes. However, I want to stress — these signals, and also a third thing, I guess, which would be the unofficial Twitter poll that the Cat Herders ran — all of those things are just signals; they're not anything that's going to be a deciding factor. Oh, Lane posted the link — thanks, Lane. None of those are going to be a truly deciding factor; it's more just figuring out which types of folks, which stakeholders, think certain things.
B: I don't know how it's calculated — I don't know if that's counted by number of blocks or if it's hash power; I guess it equates to the same thing. If you take a large enough sample, then hash power is the same as number of blocks. So yeah, that sounds accurate. I think it's 50 percent of hash power, yep. That makes sense.
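The point made here — that over a large enough sample a miner's share of blocks converges to its share of hash power — can be illustrated with a tiny simulation. The pool names and shares below are made up for illustration; nothing here comes from the actual signalling data discussed on the call.

```python
import random

def simulate_block_shares(hash_shares, n_blocks, seed=0):
    """Simulate block production where each block is won with
    probability proportional to a miner's share of hash power."""
    rng = random.Random(seed)
    miners = list(hash_shares)
    weights = [hash_shares[m] for m in miners]
    counts = {m: 0 for m in miners}
    for _ in range(n_blocks):
        winner = rng.choices(miners, weights=weights)[0]
        counts[winner] += 1
    # Fraction of blocks each miner actually produced.
    return {m: counts[m] / n_blocks for m in miners}

# Hypothetical pools and hash power shares:
shares = {"pool_a": 0.5, "pool_b": 0.3, "pool_c": 0.2}
observed = simulate_block_shares(shares, 100_000)
```

With 100,000 simulated blocks, each pool's observed block fraction lands within about a percentage point of its hash power share, which is why counting blocks and measuring hash power give the same signal at scale.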
A: So yeah, that other company — they're going to get back to me hopefully by early next week, and then we would start either getting a Gitcoin grant together or start looking for funding like we did for Whiteblock, and see what that's all about. So that's the update. Charles, Pooja, Joseph — is there anything I missed?
C: I just wanted to say that I've seen it somewhere — there is another interpretation, which can potentially be useful, of this 55% number, which is the turnout for the miner signalling. Somebody said it, but I think it's a good one: essentially, if those 55% that turned out all voted in favor, that gives you essentially a lower bound on how many GPUs are currently mining on the network. If you think about it — does that make sense?
B: I was going to ask if you wouldn't mind talking a tiny bit more about the funding question. I know there's been some question about this. Like, I know there was at least one application — I believe Andrea submitted one to the EF — and I'm just a little unclear on this myself: what was submitted? Is the official statement on the part of the Foundation that they won't be financing any part of this? And if so, what's the plan? Anything you could say about funding would be helpful. — Yeah, absolutely, so:
A: It's very important that Andrea get his funding for ProgPoW if we decide to go forward with it — or, in my opinion, even if we don't, just because it's starting to trend in a way where we would go forward with it. I think it's important that he continue to work, because he's been working without pay on ProgPoW, and I don't think that's sustainable. In fact, I think he stopped working because it was not sustainable.
B: I was just wondering if there's any color there on whether that was more of a political decision or a technical decision. If it's a technical decision, fine — you know, that happens all the time; that's the grants process. But, you know, if it were the case — hypothetically, again, I don't have any inside information either — that the EF is saying "we don't want to fund this, we don't want to fund things like this, because we don't want to take a stance," I think that would be helpful for the community to know. So, I don't know — maybe Hudson?
E: The grant was for two developers to work on the ProgPoW implementation in ethminer, and also to work on a new version of the stratum protocol that would help with switching to any other — any time we need to make a proof-of-work switch. And there were, I think, two more tiny items that were also included, related to mining infrastructure. No?
D: I wouldn't read it as political. I wasn't involved with this, but I know that in the past three to four months they have been kind of going back to the drawing board, making sure that they have their priorities straight and really trying to ensure that they're giving grants to things that provide distinct value that wouldn't exist if they did not fund them. And I think — maybe my interpretation would be — there's a lot of ProgPoW work going on, and maybe the assumption is that it would be going on with or without the grant.
C: So I don't think it's political at all. And the second thing I wanted to point out is this: if you think about what the Ethereum Foundation should actually be funding, I think it should be funding the things that otherwise cannot be commercialized or used in a profitable way — so, essentially, public goods. If you think about it, you might conclude that some of this work could actually be profitable for some people, so therefore they might try to find funding elsewhere. But this is my personal opinion.
B: Thanks, Alex — that's helpful. Yeah, I agree: let's not read too much into the EF grant, I guess, then. The question on my mind is just: what are the other options on the table? I know we have at least two funding requests open — I think they're both on Gitcoin, right? One is for Andrea's work and the other one, I believe, is for Whiteblock. (Oh, that's correct, yeah.) So just making people aware of those would be good, I think. And the—
C: And I would say the last thing about my thoughts on this — I did read the Magicians thread about the funding. It was about whether the Ethereum Foundation funded it, but there are other ways to think about it. If you look at the carbon vote — I did actually look through the carbon vote, where people were voting with their ether — you could see that the amount of ether that voted for ProgPoW was about 3 million ether, and I looked at where it came from.
C: It was basically very large amounts — like up to half a million, 200,000 ether. So essentially there were about 20 different votes, which probably came from about three entities, and surely this basically reflects that these huge entities actually have an interest in this project. So maybe we should ask them to fund this.
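The kind of analysis described — grouping on-chain votes by sender to estimate how many distinct entities the voted ether represents — can be sketched like this. The addresses, amounts, and the 250,000-ether threshold are all hypothetical, purely for illustration; this is not the real carbon-vote data.

```python
from collections import defaultdict

def aggregate_votes(votes):
    """Sum voted ether per sender address.

    votes: list of (sender_address, ether_amount) tuples — e.g. parsed
    from transactions sent to a carbon-vote contract."""
    totals = defaultdict(float)
    for sender, amount in votes:
        totals[sender] += amount
    return dict(totals)

# Hypothetical vote transactions (not real data):
votes = [
    ("0xaaa", 500_000.0), ("0xaaa", 200_000.0),
    ("0xbbb", 300_000.0),
    ("0xccc", 150_000.0), ("0xccc", 100_000.0),
]
totals = aggregate_votes(votes)
# Addresses whose total stake crosses an (arbitrary) size threshold:
large_entities = [a for a, t in totals.items() if t >= 250_000]
```

Sorting `totals` by value would show exactly the pattern described: a handful of addresses accounting for most of the voted ether.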
G: The best thing they can do is just speed up their process. Honestly, I've had a grant in there almost two months and they can't just let us know yes or no, so we can move on knowing whether we have funding or not. Not everybody can hang out for months waiting to know whether to go get a job rather than help Ethereum.
C: Well, I remember some time ago, when we had these really huge discussions on the Gitter channel, I had been asking the question: what is the success criterion for ProgPoW? And I never got an answer. Actually, what was just said would potentially be the answer, because whenever you try to pin it down, there's always something like: "oh, but the GPUs are also ASICs," or "you know, they probably can be created, but in three years," and this kind of thing.
G: We were looking only for technical problems, like holes in the algorithm — not whether the algorithm was going to have a certain percentage of effect. I think we knew we're in an arms race with the ASICs, and at some point we'll either decide to maintain the race or decide the ASICs win. But for the next nine months we decided we're going to do this unless there's a technical problem with the algorithm — not because somebody thinks that maybe the ASICs can win the battle over some period of time.
C: I would also say — a comment that I also read recently — the question I've also been asking is whether we would like to keep fighting ASICs, or whether we just do one attempt and then we stop there. Because I heard somewhere a sort of call: "OK, because we have the knobs" — ProgPoW has lots of knobs, like six different parameters that we can change. But then my question is: are we actually going to do this?
A: Yeah, that's a good point. And also, when you say "working group" — that's kind of what the Cat Herders are doing, so we don't have to have it on future agendas until we have the audit done. It was more community-requested that we have it on the agenda, so I decided to put it on there, because it's something that people want to hear about, and it is on a technical level, at least in some ways.
A: Cool, so the next item is the Istanbul hard fork roadmap — and I forgot to add something else in there: the hard fork coordinator role. So Afri is no longer the hard fork coordinator, and a very good question came up, actually: how he became hard fork coordinator. And it was discovered that — I thought that I had talked about it in a meeting, but I hadn't — so, technically, I just decided.
A: He became hard fork coordinator along with the rest of the Cat Herders — we were all just like, "oh, Afri wants to do it, that's great" — but then we didn't really talk about it in here. So we should probably talk about that now. Does anyone want to — or, what do we think about having a hard fork release coordinator? It seems, from previous discussions, that everybody wants it, but is there anyone who's against it?
B: Before we get to that, Greg — I agree with Hudson's question. It would be helpful to talk about what the role is and whether we all think it's worth having. I'll just share one quick thought: I think there's value in this, and the reason is that the number of teams and the number of individuals working on even just the Ethereum 1.0 work stream — not even to mention the Eth 2.0 stuff, though of course that as well — has grown quite a bit, right? The size of these calls has grown; the complexity—
B: The coordination work required in getting everyone on the same page and getting these hard forks to happen has grown as well. I do think — and hope that I am expressing the consensus of this group when I say that — we'd like to do hard forks, or upgrades I should say, more often, and I do think that the coordinator role can help a lot with that. So I see value in it, but I'm curious as well if anyone disagrees.
C: So my question about the hard fork coordinator role is whether this person, or group of people, is going to be concerned only with the actual hard fork — meaning something which starts, let's say, one month before the block of the hard fork and then ends on the day of the hard fork — or is it more loosely defined, spilling over to EIPs and all sorts of stuff? So where does the barrier between, kind of, EIP coordinator and hard fork coordinator stand?
A: And just to answer your question: they would not be an EIP editor. They would obviously have to read EIPs and coordinate some of the core EIPs that are going into, like, the meta EIP for upcoming hard forks. But in general, a hard fork coordinator, in my mind, would be someone who, between now and the next hard fork, decides hard dates for deadlines: submitting EIPs for consideration, deciding on those EIPs, implementation and testing, and then, finally, what day the hard fork would be.
A: Of course, they wouldn't be a dictator in this regard, but they would be the one to come up with suggestions or different options to bring to the table, because no one really has time to do that so far. Also, if there's any kind of disorganization — like confusion over EIPs, or confusion over what people want — they can sift through core dev meetings and online discussions and kind of filter through that stuff.
A: So, in general, a few people have come to me and said they want to help, so I'm going to collect a group of those people. A lot of people actually suggested that we do a community vote on who should do it. I don't know if that's the best route or not — I wanted to hear opinions on that: whether it should be the core devs deciding who the core dev release manager is, or the community deciding who the release manager is.
A: No, there's not — it's more like people understanding whether there's going to be more than one person. And then, really, I mean — if the core devs don't really care, then they can delegate it to the Cat Herders, and the Cat Herders can pick some people. Okay — either split the role, or have one single person do it.
A: We have a blog post out that we released — did we release that yesterday? Everyone who's a Cat Herder in this room: when did we release that blog post that explained what the Cat Herders are?
C: No, that's okay — I don't expect all these questions to be answered straightaway, because it requires a bit of figuring out. Yeah, so I just wanted to — for example, a more concrete question: you probably saw the GitHub issue where I was proposing a couple of changes to the actual process, and I wonder what people think of this.
C: So I would rather, you know, get some reviewers appointed — potentially people who are not even on the calls, but who want to review the changes — they can just review them and explain in the comments what they found, so we can iterate quicker over all these things rather than having things lying around. Hopefully.
C: By "reviewer" I mean that, for each change, you basically somehow pick the reviewers who probably want to do the review — and those reviewers are not necessarily the core devs, who are kind of the initiated, I would say, on this call, but somebody else who actually really wants to do it. And then the second thing I wanted to propose — something I already said before — is that we need to revisit the assumption that we have to bundle lots of updates into one big release.
H: One of the things I'd like to point out is: if there's a schedule — you're going to have this done by then, we're expecting this to be done by this point, this will be done by that point — I think some of those roadmap dates are going to help us, you know, get to the point where we get these things reviewed, get them in on time, and make a decision to put something in or pull it out.
H: So I think, you know, there are some changes already coming that are going to help address some of these issues, if we actually follow the dates and if the roadmap is still valid. But as far as my opinion on multiple smaller releases: it's going to make things take even longer, because there's a lot of lead-up and preparation that goes into getting these releases out, from what I've seen — if we do multiple rounds of smaller things, it's going to be a lot more inefficient.
C: Thank you for this comment, but I think this is based on the assumption that we continue to do things as we did before — and this is why I put another comment on mine. So what I also propose is: first of all, we introduce higher standards on the EIPs — we really need to require some kind of proof of concept, and probably pre-generated test cases.
C: So we don't leave it for later. And also, my proposal about appointing the reviewers might help as well, because I think a lot of people assume that, you know, we have to sit on a change for months before it actually gets in. But, as we did a bit of a retrospective on this during the workshop, we don't actually have to sit on it for a month: if you find the reviewers quickly, the implementation doesn't actually take that long.
E: Yeah — I think most of the people that were involved in it agree. It was originally proposed as a kind of counter-argument to ProgPoW, but in the end we realized it's not very effective, and considering how much time it takes to do this kind of proof-of-work change, and the related issues with that, I don't think it's really possible to have done it anytime soon. So yeah, to clear it up, I would like to mark that as rejected.
A: Next up: working group updates. I think the main people who are here for that — the Ewasm folks are here, and the state fees folks are here, I believe — and then maybe, if Peter has any update on the state pruning; I don't know if there'd be an update on that. But let's start with state fees: that would be Alexey, who just released part six of his really cool, in-depth articles of reflections from the Stanford working group. So, Alexey, if you want to give an update on that — and then I had some comments.
C: Essentially, the main thing that I carried out — or we carried out — of the Stanford workshop is that there are four main problems: performance problems that come from the large state and the growing state. The first problem is the failure of snapshot sync; the second problem is the duration of snapshot sync.
C: And how we're planning to address some of them — for example, the first problem, which we deem (or I deem) the most critical one, can actually be solved by introducing a better sync protocol, and I know there's already some work going on about that in Parity and go-ethereum; they have different names for it. Also, Andrew is now working with me on Eth 1x; he's also working on modeling and documenting our version of this protocol.
C: He came up with a cool name for it: it's called Red Queen — which is a Carroll reference, not to be confused with the Queen of Hearts; they're actually two different things. The Red Queen is the one who basically says that, in order to even stay in the same place, you have to keep running — and this is the idea: you have to keep following the head of the chain.
C: So when you sync, you actually try to chase the head of the chain while you're syncing. Hopefully, all these efforts converge once we start having some specification. I don't think we need a hard fork for that, but we will need some coordination in terms of the peer-to-peer protocol.
C: Stateless clients is the idea that the blocks we are propagating through the network are basically augmented with some more information, which gives you the state subtree — with all the hashes that essentially provide the proof that the data the transactions are reading does indeed belong to the state — and then the updates the transactions perform result in the new state root.
C: So, essentially, by having simply the blocks and the state roots, and this augmented data — we call them block proofs — we will be able to execute transactions without even having access to the state. This has previously been researched a little bit, but not so much. I also wanted, like one year ago, to do some data analysis on how big these block proofs would be. I know that Vitalik has done some analysis, and others did as well.
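The stateless-client mechanism described above — proving that the data a transaction reads belongs to the state committed to by the pre-state root — can be sketched with a toy binary Merkle tree instead of Ethereum's actual hexary trie. Everything below (the account encoding, the tree shape, the function names) is illustrative, not the real block-proof format under discussion.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over the leaves (odd levels padded)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from one leaf up to the root — the 'block proof'
    a stateless node would receive for a single state read."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    """Check that `leaf` belongs to the state committed to by `root`."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Toy state: four accounts, committed to by a single root.
accounts = [b"alice:100", b"bob:50", b"carol:7", b"dave:0"]
root = merkle_root(accounts)
proof = merkle_proof(accounts, 1)          # prove the read of bob's balance
assert verify(b"bob:50", 1, proof, root)   # a stateless node can now execute
```

The size question raised in the transcript is then just: how many sibling hashes, and how many bytes of read data, does a real block's worth of such proofs add up to.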
C: But this time I wanted to do it more seriously, meaning that I've almost finished doing a proof of concept for this, to make sure that my estimates are correct. So I will publish the details about how big these things are, and we will see if this is something which we could combine with the state fees.
M: I guess the main challenge in all of this is definitely going to be whether or not the cost of having potentially much higher bandwidth is worth it. And as part of that, I guess, a couple of questions. One of them is: how would we actually do gas costs for the bandwidth? Would we still have a fixed gas cost per access, like a storage slot?
M: Would we try to do gas costs based on the length of the Merkle branches, to try to incentivize having smaller trees and incentivize reusing accesses? Would we try to do something else? And otherwise, if it becomes too big, what actually is the cost of the data? You know — I know we decided at the workshop that calldata is probably kind of overpriced by now, but what would the new price be? Is that a number that we're okay with?
C: Okay, thank you for that. So, to answer the question about the cost: yes, definitely — according to the current idea, these extra proofs will have to be paid for by the transaction sender, and in this case the payment would go to the miner rather than being burned, because, essentially, introduction of this system relieves everybody — all the nodes apart from the miner — from actually having the state.
C: So everybody else doesn't even need to have, let's say, the cache of the trie in memory, because they can simply verify the proofs and then update their database. Even if they have the entire state in the database, they don't actually have to cache it in memory. So that means that everybody, apart from the miners, will get a huge boost in terms of performance and in terms of resource consumption. And I would charge—
C: —the node not in terms of the length of the branch, but basically per byte of the proof. So currently, as I'm working on this little proof of concept, I'm actually going to calculate the two parts of the proof: the actual hashes that accompany the thing, and the data. The data is essentially what you have read during a transaction — assuming we might charge for it differently. And also, if the proofs turn out to be super big, what we could also do—
C: —we could use something like STARK proofs or SNARK proofs to compress them, in such a way that there would be a fixed size per block for all the proofs — but you would not be able to compress the data this way. So I want to explore this, maybe within the next week or two, and publish some data, so we could have a discussion about it. It's not decided yet that this is the pivot we're going to take, but it is a potential pivot.
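The per-byte charging scheme being floated — the two parts of a proof (accompanying hashes vs. data actually read) priced independently, paid by the transaction sender to the miner — might look like this in outline. The gas prices here are placeholders, not proposed numbers from the call.

```python
# Placeholder prices, purely for illustration — not proposed gas costs.
GAS_PER_HASH_BYTE = 3
GAS_PER_DATA_BYTE = 1

def proof_charge(hash_bytes: int, data_bytes: int) -> int:
    """Charge the transaction sender per byte of block proof, with the
    two parts of the proof priced independently: the sibling hashes
    that accompany the reads, and the read data itself."""
    return hash_bytes * GAS_PER_HASH_BYTE + data_bytes * GAS_PER_DATA_BYTE

# e.g. a proof with 10 sibling hashes (32 bytes each) and 640 bytes read:
charge = proof_charge(10 * 32, 640)  # 320*3 + 640*1 = 1600
```

The SNARK/STARK idea mentioned next changes only the first term: the hash portion collapses to a fixed per-block size, while the data term stays proportional to what was read.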
M: I guess, just kind of zooming out more broadly: has there been progress on figuring out what the costs of bandwidth data are? Proof data of this kind seems like it would have the exact same cost as transaction data. So do we have better ideas on how much that theoretically should be charged, or no?
C: I've thought about it, but not for a very long time. As far as I can tell now, it's very challenging to try to swap the hexary trie for a binary trie. That's why I hope that if we get some sort of STARK or SNARK proofs, we might sidestep this altogether — but yeah, I haven't figured it out.
M: I'm a bit more optimistic on that, I guess. My main concern, aside from all of these technical issues, is with the zero-knowledge proving things in general: each general-purpose prover has an overhead of a factor of anywhere between, like, 100 and 100,000, depending on how ugly the computation is and on the hash functions that we use.
B: On the update — a lot of the kind of stuff you just shared, I don't recall seeing in the latest Fellowship of Ethereum Magicians thread, the latest thing you posted on storage management and state fees. Just wondering if there's a more recent thread. I have a lot of questions and ideas as well, but rather than take up a lot of time on this call, I was thinking that'd be a good forum. So where should we have that conversation?
C: So I only started working on this like three days ago, so I haven't had time to publish anything yet — but I will hopefully publish something within the next few days, because I really want to finish this coding first, and then I will write something up. I brought it up on this call because it's important to know about the potential upcoming pivot, but I simply didn't have time to put it down in writing yet.
C: This is the challenge at the moment, and I think somebody from Parity or go-ethereum could correct me, but at the moment there are some prototypes in the code; it's just not written down as a spec or a model. That's why one of the things we're going to do with this Red Queen work is actually try to do the spec as well as the modeling.
K: At least for the Geth code, there's a pull request — I'm not sure whether it's a pull request or just my branch, but I can share a link to the code itself. I mean, it's fairly simple — you get the idea, more or less — but there are still a few corner cases I haven't solved completely. And, honestly, I wrote that code a year ago and I haven't touched it since, because something always came up.
C: I was talking about — so we have this thing called eth/63, which is currently used to essentially shuffle the data around the Ethereum network; by the data, I mean the headers, blocks, transactions. So, for each of these types of data, there is normally a pair of what I call operations — like GetBlockHeaders, GetBlockBodies, and so on — and then there are also the announce messages. So what I mean by changing the peer-to-peer protocol—
C: —is actually adding more operations to that: for example, to get a range of leaves, or something like that. Adding some operations means that we will need to upgrade the protocol so that the clients can all understand each other, and these operations will be dedicated to supporting this new, advanced sync protocol — which pretty much never fails, hopefully.
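The request/response-pair structure being described — and the idea of extending the wire protocol with new message codes in a version bump — could be sketched like this. The header/body codes follow the eth/63-style layout, but treat the exact numbers as illustrative; the leaf-range messages are purely hypothetical, just to show the kind of extension being discussed.

```python
from enum import IntEnum

class EthMessage(IntEnum):
    """Message codes in the style of the eth wire protocol:
    request/response pairs, where each response code immediately
    follows its request code."""
    # Existing eth/63-style pairs (code values illustrative):
    GET_BLOCK_HEADERS = 0x03
    BLOCK_HEADERS = 0x04
    GET_BLOCK_BODIES = 0x05
    BLOCK_BODIES = 0x06
    GET_NODE_DATA = 0x0D
    NODE_DATA = 0x0E
    # Hypothetical additions for a leaf-range sync protocol:
    GET_LEAF_RANGE = 0x11
    LEAF_RANGE = 0x12

def response_code(request: EthMessage) -> EthMessage:
    """Map a request code to its paired response code."""
    return EthMessage(request + 1)
```

Because both sides must agree on the code table, adding `GET_LEAF_RANGE`/`LEAF_RANGE` is exactly the kind of change that requires negotiating a new protocol version between clients, as the speaker notes.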
C: Great — sorry, if I may, I wanted to throw something in for the Ewasm update as well. So, what I've been thinking about in terms of Ewasm — after the workshop, and also before that — is: some of you might remember that in the first proposal of state fees (or "state rent", as it was called) I was actually suggesting linear storage, which is a new type of storage.
C: It doesn't exist yet in Ethereum, and one of the reasons why I really liked it is that Ewasm essentially operates on a linear memory, and the idea was that you can do the memory mapping from the storage into the memory in a very straightforward way. I still think it's a good idea, and I'm currently trying to research this as well — and maybe, as part of the whole state management thing, I will propose this linear storage again, but in a different guise, maybe more integrated with Ewasm.
C: Yes — so the current idea is that, for example, we introduce a new type of contract that will have Ewasm code instead of EVM code, and storage which, instead of being a mapping, is essentially an array of bytes or words; and, say, you do some sort of Merkle Mountain Ranges, or Merkle trees which are friendly to expansion. So that basically means that whenever we execute Ewasm, we just map the relevant part of the storage into the memory.
C: And then you can have this really great benefit, because then you can use all sorts of libraries — let's say red-black trees, or some sorts of data structures, whatever have you — because all these libraries are written on the assumption that you have a linear memory, instead of, like, the Ethereum storage. So I see it as a potential for code reuse.
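A toy model of the linear-storage idea just described — contract storage as a flat byte array that maps straight into an execution memory and back, the way Wasm linear memory works — might look like this. All the names here are hypothetical, and a real design would commit to the storage with Merkle Mountain Ranges or similar, which this sketch omits entirely.

```python
class LinearStorageContract:
    """Toy model: storage is a flat bytearray, so 'loading' it into
    execution memory is a straight copy — no key-by-key translation
    between a storage trie and memory."""

    def __init__(self, size: int):
        self.storage = bytearray(size)  # persisted linear storage

    def execute(self, fn):
        # Map storage into 'memory' (here: just a working copy),
        # run the contract function against it, then write it back.
        memory = bytearray(self.storage)
        fn(memory)
        self.storage[:] = memory

def bump_counter(memory: bytearray) -> None:
    """Contract code that treats bytes 0..7 as a big-endian counter —
    plain linear-memory code, the kind existing libraries assume."""
    value = int.from_bytes(memory[0:8], "big") + 1
    memory[0:8] = value.to_bytes(8, "big")

contract = LinearStorageContract(size=64)
contract.execute(bump_counter)
contract.execute(bump_counter)
counter = int.from_bytes(contract.storage[0:8], "big")  # 2
```

The code-reuse point is visible even in the toy: `bump_counter` knows nothing about a storage abstraction — any library written against flat memory (red-black trees included) would work on the mapped region unchanged.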
E: They will probably act as imported functions, in new APIs maybe. But the other aspect of it is how you actually want to interact between contracts, because both of them are written, in the end, in WebAssembly, and that possibly provides some more efficient or more direct ways of calling one module from the other. So this opens up another set of possibilities. I'm trying to keep it at a high level.
C: Go ahead — yes, I would say that the in-person meetings are worth it, but I wouldn't do them very often, because there is a certain amount of work involved, you know — there's all this traveling fatigue people have, and some people feel that it's actually not very inclusive when people gather around in person. I would almost say that, for example, with the workshop that we did at Stanford, I was skeptical before I came, but afterwards it turned out to be a very, very useful thing to do.
M: In terms of making them more inclusive, if they have to happen, bouncing them between continents is also helpful. So, I think the last one was in San Francisco, and there'll be some people descending on Australia — both of which are not very convenient for people from Europe. But then, at least on our 2.0 calls—
M: —we've been getting someone constantly — yeah, Jim — badgering us about having one in Barcelona. So it might make sense to do something in Barcelona at some point, and then I'm sure lots of people who would otherwise find it inconvenient would end up coming to that one.
A: Okay, so it sounds like there is an appetite for future meetings, but we may need to have them with enough lead time. So anyone who wants to take the initiative on planning those meetings can go ahead and do it — even if it's more, I guess, ad hoc and not, like, a really, really big one. They can at least give the opportunity, and then people can attend. And I know EDCON's pretty well attended.
E: Well, yeah — there was a small change to the blockchain tests recently, which aligned the configuration with what actually happened on the mainnet yesterday: Constantinople and the Constantinople fix were activated on the same block. Previously, the blockchain tests for this case were a bit different, and we changed that — so some set of tests are different now, and they were uploaded, I believe, yesterday.
K: Hello — so, update-wise, I think the two more interesting updates are that we finally managed to merge in all the changes that slim down the databases and cut out a lot of redundant data, so we're down by about 16 gigs compared to our previous version, which kind of puts us in the same ballpark as Parity.
K: Apparently we also managed to shave off quite a bit from the sync times, and yeah, I guess those are mostly the success stories. Apart from that, we're mostly working on the networking bits — the discovery protocol and the light client — and essentially trying to pull off the whole historical state pruning, but that seems to be a bit tough to debug. And that's about it.
O: Yes — so the big news this week is that we released Pantheon 1.0. The big improvement there is cutting our archive sync time down — we cut it about in half, which brings us to about half the speed of Geth on Ropsten, with more to come in future releases. But that was a pretty big one for us. We'll be continuing to work on performance, reliability and fast sync, and we also want to get involved with the work on the fast/warp sync POCs.
C: If you have that sort of memory — but that would result in ultra-fast sync; sorry, ultra-fast archive sync. So I haven't tested it yet, but I will do that afterwards. At the moment I'm mostly using Turbo-Geth — this is basically the workhorse for all the data analysis, which is pretty awesome — but yeah, I haven't caught up with the other things yet.
D: This is something that we want to see, and we're beginning to ramp up the efforts in parallel. Vitalik added an issue in the specs repo — I think Lane actually commented close to that as well — where we're beginning to discuss this. It's just a large design space, and a good time to start narrowing down.
B: I think there was a little bit of debate on the issue, specifically about EIPs that were proposed for Istanbul — Alex put three of them forward, but then Alexey had also said: before we do that, why don't we talk about some of the higher-level questions that he brought up earlier. So I just wanted to acknowledge that the EIPs were there; I don't know that we need to talk about them necessarily today.
C: Well, this comes back to my two suggestions about the process: making it quicker by appointing the reviewers and, essentially, by potentially making the releases shorter — so we don't make everybody a hostage of everybody else. So those are basically my two suggestions, and I think, generally, we might also do more reflection on the past — a retrospective on whether we did the best we could in the previous two releases.
A: Okay, we might discuss that more in the next meeting. But for the moment, everybody be thinking — maybe not putting them down on that meta EIP yet, but thinking — about the different EIPs that you would want to go into Istanbul, and we can talk about those. Feel free to put them on the agenda that I'll be putting up pretty soon. And yeah — I think that's all. Is there anything else anybody had?