From YouTube: Ethereum Core Devs Meeting #79 [2020-1-24]
B: I can probably try, because we cannot go very deep into the technical details anyway. I would be able to answer probably most questions, and I will also explain what Wei was saying about how it's related to account versioning, the EIP for Berlin. For example, he said that he wants it withdrawn from the Berlin push today. So you want me to start to explain? Yeah.
B: It works in conjunction with other things, like account versioning and repricing of the opcodes. So if you look at these three things together, then it starts making sense, because on its own, any one of these three things (UNGAS, account versioning and repricing) doesn't really have a lot of merit. But if you put them together, then it actually starts making sense. And then we discovered, okay...
B: ...that we might be able to not reprice any specific opcode to accommodate stateless Ethereum, but instead simply meter the gas which is used for the witness separately. That would be, I would say, a much better design, because it has the proper separation of concerns. So, in short, the UNGAS proposal itself is the suggestion to remove the ability of any EVM code to observe the notion of gas entirely, which is currently possible through three different mechanisms.
B: Maybe there are more, but we didn't discover any more yet. The first mechanism is the opcode GAS, which puts on the stack the amount of gas remaining in the current frame. The second mechanism is that when you do a CALL or a call-like instruction, you can specify how much gas is forwarded to the next call, so this is another point where you observe it. And the third point of observation is when you hit the out-of-gas exception.
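The three observation points B lists can be sketched in a toy interpreter. This is an illustrative Python model only, not client code; every name here (Frame, op_gas, the costs) is an assumption for the sketch.

```python
# Toy model of the three ways EVM code can observe gas today.
# All names and numbers here are illustrative, not from any real client.

class OutOfGas(Exception):
    """Raised when a frame exhausts its gas (observation point 3)."""

class Frame:
    def __init__(self, gas):
        self.gas = gas
        self.stack = []

    def op_gas(self):
        # Observation point 1: the GAS opcode pushes the remaining gas.
        self.stack.append(self.gas)

    def op_call(self, gas_limit, charge=100):
        # Observation point 2: the caller chooses how much gas to forward.
        self.gas -= charge
        forwarded = min(gas_limit, self.gas)
        return Frame(forwarded)

    def use(self, amount):
        if amount > self.gas:
            # Observation point 3: only this frame reverts; the caller
            # keeps running and can see that the child ran out of gas.
            raise OutOfGas()
        self.gas -= amount

frame = Frame(1000)
frame.op_gas()
print(frame.stack[-1])  # 1000: code can read its own remaining gas

child = frame.op_call(gas_limit=500)
try:
    child.use(600)
except OutOfGas:
    print("child reverted; parent gas left:", frame.gas)
```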
B: What happens with the current EVM semantics is that it causes the current execution frame to revert and essentially cancel all its state changes, but it does not revert the frame which called the current frame. So, essentially, by doing this, somebody could observe the gas; it might not be very useful, but anyway. So what Wei is proposing with UNGAS is essentially three main changes. The first is to disable the instruction GAS.
B: Secondly, to stop the call-like instructions from forwarding a chosen amount of gas; the behavior should be to forward the entire gas all the time. And the third change is essentially to change the semantics of the out-of-gas exception so that, instead of reverting the current frame, it would revert all the frames and essentially the entire transaction. So the entire transaction immediately fails upon the out-of-gas exception.
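The three UNGAS changes can be put next to the toy model above. Again a hypothetical sketch: the class and exception names are made up for illustration.

```python
# Sketch of the three UNGAS changes described in the call:
# 1. the GAS opcode is disabled,
# 2. calls always forward all remaining gas,
# 3. out of gas aborts the entire transaction, not just one frame.
# Illustrative model only.

class TransactionAborted(Exception):
    """Out of gas now fails the whole transaction (change 3)."""

class UngasFrame:
    def __init__(self, gas):
        self.gas = gas

    def op_gas(self):
        # Change 1: gas is no longer observable from inside the EVM.
        raise TransactionAborted("GAS instruction disabled")

    def op_call(self, charge=100):
        self.gas -= charge
        # Change 2: no gas argument; the callee gets everything left.
        return UngasFrame(self.gas)

    def use(self, amount):
        if amount > self.gas:
            # Change 3: nothing for a caller frame to catch and inspect.
            raise TransactionAborted("out of gas")
        self.gas -= amount

frame = UngasFrame(1000)
child = frame.op_call()
print(child.gas)  # 900: the full remainder, the caller cannot choose less
```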
B: So, these three changes. I probably won't be able to debate the technical merits of all of this, but maybe we could use this just to sort of warm up, so everybody can research it on their own and sink into the idea, and now I'm going to explain why it makes sense to view all three changes together. Essentially, this particular change to the EVM is very disruptive.
B: Essentially, if you try to implement it as it is, this particular semantic change will break pretty much everything, and therefore Wei is proposing to roll it out together with account versioning. But account versioning not in the sense that we're going to have increasingly many account versions: we will only ever have two versions and no more. There will be one version which is before UNGAS, which he calls the legacy version, and a second version which is after UNGAS, and that's it, there will be only two versions.
B: So essentially it's like a bit which marks which one of these applies. As you can imagine, in the legacy version the semantics don't change: everything works as it works now, gas is observable, everything is okay. In the future version, which is introduced together with UNGAS, essentially the rules of UNGAS apply. There is also the way that account versioning is proposed.
B: The contracts which are deployed from legacy code that is still in use, for example some sort of factories, will also inherit the legacy flag and therefore will also be using the legacy rules. So everybody can probably notice that this is a way to potentially completely circumvent the new version: you can simply deploy a very generic proxy and just keep creating contracts through it after UNGAS is introduced. So how does this get addressed?
B: This is where the third change, repricing, comes in. What Wei is suggesting is that once we've introduced the second version, initially it probably will see very, very little use. But then, after that, we start repricing opcodes, but not in a way where we just choose one opcode and reprice it.
B: I think the miners will not be able to do that in the general case, so they will not be able to distinguish them, and they will just apply a uniform gas price. And then, of course, it raises questions like: how are we going to reduce the gas cost of the operations which are already at, say, two gas? This is where we have two alternatives: there is the particle gas cost idea, and Wei has his own proposal, which he thinks is simpler.
B: Simpler. Essentially the idea is that when you do the CALL instruction, you artificially bump the available gas, like times ten or something, and then when you return from the call, you reduce it again. So you create the illusion that everything is actually cheaper. That is basically the summary. And for stateless Ethereum, the reason why it would help us is that we will be able to meter the witness size separately from everything else.
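The gas-inflation trick B attributes to Wei can be sketched as follows. The factor of ten comes from the talk ("times ten or something"); the function names and arithmetic are illustrative assumptions.

```python
# Sketch of the repricing alternative: at a call boundary, multiply the
# available gas, and divide it back on return. Opcodes at legacy prices
# then look ten times cheaper relative to the caller's budget, without
# editing any individual opcode price. Illustrative model only.

SCALE = 10

def enter_call(caller_gas):
    # Gas visible inside the callee is artificially inflated.
    return caller_gas * SCALE

def exit_call(callee_gas_left):
    # On return, deflate what is left back into the caller's units.
    return callee_gas_left // SCALE

caller_gas = 1_000
inner = enter_call(caller_gas)   # 10_000 "cheap" units inside the call
inner -= 2_500                   # callee work charged at legacy prices
caller_gas = exit_call(inner)    # 750 real units remain
print(caller_gas)
```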
B: And so we would apply the out-of-gas exception in the case where either the execution consumed the entire gas or the witness consumed the entire gas. I haven't worked out all the details, but I think it will greatly simplify the design, and it will help us not to mix everything together.
B: So that's my summary. And as I mentioned, Wei, during the workshop, had a little presentation on Wednesday where he was describing which EIPs are currently already sort of accepted (well, not actually accepted, but proposed, or pre-selected) for the next hard fork, and that includes account versioning, EIP-1702. He mentioned that his intention is to withdraw it, so that it could be included in a later release, together with UNGAS.
B: Yes, but at the moment we don't envisage this. I'm happy to try to explore whether this is going to be too tempting not to do. I think the reason why Wei specifically said that is because one of the criticisms of account versioning was that we're going to end up with a hundred different versions that we all have to support, which is a very valid criticism: we don't want to do that and keep all this functionality forever.
F: Maybe this is not even a cross-version call, just a version-2-to-version-2 call. So you were saying that if I call another contract, everything gets forwarded, all the gas, in the new version? Yes, correct. Okay, but this kind of means that, if I'm writing the host contract, the origin contract that would call out to something else, then how can I make sure that, if that something else has a bad call or is messed up or whatever, I can still finish?
B: So that's why I thought for a long time that the prerequisite for this change is actually some kind of formal semantics, and this is what I have currently started to work on, so that we can analyze this really rigorously, because the chances that something is not getting covered are very high in this particular change. The description is very short, but actually it's very complex.
B: Exactly. So if there is a hard rule that any transaction either runs version 1 or version 2, then the argument about repricing doesn't work anymore, because miners could very easily distinguish what is going on and they can apply different gas prices. I completely agree with that. So the repricing doesn't work if cross-version calls are not allowed.
G: But I had one question about the stateless Ethereum side. If the versioning doesn't work the way Wei describes, and you can't have that separation without breaking a lot of things, then do we still get what we need for stateless Ethereum, the benefit out of it, or does the backwards incompatibility kind of remove that part?
B: If the separation of the versions does not work, that simply means that account versioning isn't going to work at all. It means that everybody will probably be using version 1, and there's nothing you can do about it. You have to design the entire stateless Ethereum on the premise that everybody is still going to be using version 1, so you will not get any of the benefits.
B: So essentially the plan with UNGAS, the reason why it would help, is that if we start to economically push out version 1, and we know that its usage becomes, let's say, negligible, then for the purposes of pricing the witness we can simply ignore anything that happens with version 1, because it's going to be super expensive, and we design our pricing logic based on version 2. So we know that, okay, version...
B: ...1 pays so much that it will definitely cover any kind of witness that will arise, and for version 2 we know that we can do UNGAS, and that's why we can price it more correctly. However, if the version separation doesn't work out and we cannot price things out, then this logic is not going to apply anymore, and we have to design based on version 1, which is basically where we lose all the benefits.
B: Yeah, the reason why account versioning was pushed out of Istanbul is because we realized it's not useful yet, and I think on its own in Berlin it's also not useful yet; there is nothing that requires it as a prerequisite. But if we have something which actually depends on it critically, then, you know, it could be done.
B: It was simply going through the EIPs in Berlin and essentially saying which ones are already implemented in Parity and which ones are easy, and essentially the conclusion was that all the Berlin changes could be implemented in Parity, or whatever your fork of OpenEthereum is, without major re-architecture, so it actually could be done in a low-resource mode; and there wasn't really a lot of information about UNGAS at all. So I would rather actually do something separate for that.
B: At this moment we are not there, and we will go into refining specifics, because, as I said, we have just opened a can of worms and we need to address all the questions and everything. Whenever we have some new information, we'll present it. If we don't have new information, we will probably just keep researching.
F: We have 25 to 50 peers, depending on the client, and whenever you receive a new block, you send that block in its entirety to, I don't know, four, five, six, seven of your peers, and you just announce it to the rest. The reason why this is good is because, if a block has 50 kilobytes, then the announcement is only saying:
F: "If you want it, come get it." And then, usually, what clients do is that whenever they get a block announcement, they wait about half a second, and if the block doesn't arrive from any other peer, then they actually go and retrieve it. This is how block propagation works. It works really well; we can probably make some minor optimizations, but it's fine. Now, if...
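F's block-propagation scheme can be sketched as follows. The peer counts and the half-second grace period come from the talk; everything else (names, structure) is an illustrative assumption.

```python
# Sketch of block propagation as described: push the full block to a
# handful of peers, announce only the hash to the rest, and let the
# announced peers fetch it after a short grace period if nobody pushed
# it to them first. Illustrative model, not client code.

import random

PUSH_PEERS = 5        # "four, five, six, seven of your peers"
GRACE_SECONDS = 0.5   # wait before actively retrieving

def propagate(block_hash, peers):
    pushed = random.sample(peers, min(PUSH_PEERS, len(peers)))
    announced = [p for p in peers if p not in pushed]
    return pushed, announced

def on_announcement(have_block):
    # Receiver side: only fetch if the block did not arrive within
    # the grace period from some other peer.
    return "fetch" if not have_block else "ignore"

peers = [f"peer{i}" for i in range(25)]
pushed, announced = propagate("0xabc", peers)
print(len(pushed), len(announced))  # 5 full copies, 20 cheap hashes
```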
F: Let me check the number... around one megabyte per second. They are doing one megabyte per second of download and upload just to shuffle the transactions around, and there's absolutely no point. The way it's currently implemented is that, since there is no announcement mechanism and no retrieval mechanism, we just send each transaction to everybody. So if I have 500 peers, I will send that single transaction to 500 different places. The problem is that, if I have 500 peers, I will also receive the same transaction from 500 different places.
F: So it's a huge waste of bandwidth, and essentially this EIP is kind of tiny. All it does is define a few more network packet types. One of them is announcing a transaction, so that, the same way that we can announce a block by hash, we can announce a set of transactions by their hashes. It introduces a retrieval request so that, if I know of transactions by hash but don't have them, I can retrieve them, and of course the corresponding reply:
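The announce-then-retrieve flow the EIP adds for transactions can be sketched like this. The message and class names are illustrative, not the wire protocol.

```python
# Sketch of transaction announcement: instead of pushing every
# transaction to all peers, a node announces hashes and serves
# retrieval requests, so each peer downloads a given transaction
# roughly once instead of 500 times. Illustrative model only.

class Node:
    def __init__(self):
        self.txs = {}          # hash -> transaction body
        self.requested = set()

    def on_announce(self, hashes):
        # Ask only for hashes we have never seen or requested.
        want = [h for h in hashes
                if h not in self.txs and h not in self.requested]
        self.requested.update(want)
        return want            # would go out as a retrieval request

    def on_delivery(self, tx_by_hash):
        self.txs.update(tx_by_hash)

node = Node()
print(node.on_announce(["h1", "h2"]))   # both unknown: fetch both
node.on_delivery({"h1": "tx1", "h2": "tx2"})
print(node.on_announce(["h1", "h2"]))   # announcements from other peers cost nothing
```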
F: "Okay, here are the transactions." My hunch is that this should cut down the global bandwidth usage of Ethereum by at least an order of magnitude. The EIP does have some minor details in it; feel free to read it, I don't want to go into those here. But yeah, our question is whether anybody opposes this. I know that we've talked with another client team and they like it, and as far as I know the Trinity team also responded.
F: So just another quick note: if you look at the EIP, we also linked a pull request that implements it in geth, and that pull request is huge, I mean it's a thousand or almost two thousand lines of code. I think it's important to highlight that supporting this EIP takes maybe about 20 lines of code. The rest of the 1000-plus lines is all about the plethora of optimizations that the EIP enables, so that we can essentially cut down on so many things.
F: Yeah, so this is the same as with eth/64. The devp2p protocol supports all nodes running arbitrarily many versions of these subprotocols on top of one another, and, for example, Parity didn't implement eth/64 yet (I'm not sure whether they plan to), but we have absolutely no plans of dropping support for either eth/63 or eth/64.
F: Well, not really, no. I think the reason we don't have it is that eth/63 goes back to around when Ethereum launched, so you could assume that everybody implements eth/63, because that was the first official version, and then the first upgrade to it was made by us last year in November. So this is the first time we have actually made an upgrade to it.
F: As for old protocols, I know that before 63 there were 61 and 62, but 61 was only implemented by us, I guess, and C++. But since C++ Ethereum is not the biggest rage now (I think they even implemented 62 and 63), eventually we just dropped support for the older ones. Nobody was running 61 and 62 anymore, so we just moved them out of geth, and I think the same thing can be done again if Parity ever upgrades to 64, 65, and so on.
E: An extra rule in the validation: I pointed out that there's a code segment that exists before the BEGINDATA and after the BEGINDATA, and as part of the validation I limited the size of that to the contract size that can be stored on the blockchain. There are a couple of ways you can work around this blockchain storage limit. You can do it in the init code, although I don't know how you can make that terribly useful, because you're still limited by building it in memory.
E: The second way, which we're actually seeing on the network, is where you pass it in as part of the transaction. I'm already seeing rollups that are in excess of 32K and now at 128K; those are mostly data, storing all the information about the particular ZK rollup that they're doing. So that's the first major change that I put in. The second one that I put in...
E: ...is to specify the particulars of the header. There was a previous recommendation for what the header is, and I adapted it: it's the hex EF (which is 'ï', the i with two dots, in Latin-1) followed by 'evm'. And to clarify: when you start running a validated contract, you start at PC equals 4, and all the EXTCODECOPY operations stay relative to index zero. Those are the two major revisions that I made to try and address some of the concerns that I've heard, and I want to know, at this point...
E: Validation is not running any code. In order for the validation to run, you must have the header in the code, the 0xEF, 0x65, 0x76, 0x6D that are the first 4 bytes of your code, and it requires the versioning spec: the blockchain must turn on versioned code, so it must be a versioned account that it's run against. If either of those isn't true, the validation does not run. Okay.
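The validation gate E describes can be sketched as a static check. The 4-byte values follow the discussion above; the function names, and treating the versioning switch as a boolean, are illustrative assumptions.

```python
# Sketch of the validation gate: code must start with the 4-byte
# header (0xEF followed by 'e', 'v', 'm'), the chain must have
# versioned code enabled, and execution of validated code starts at
# PC = 4, right after the header. Illustrative model only.

HEADER = bytes([0xEF, 0x65, 0x76, 0x6D])  # 0xEF + "evm"

def is_validated(code: bytes, versioning_enabled: bool) -> bool:
    # Validation runs no code: it is a static check on the bytes.
    return versioning_enabled and code[:4] == HEADER

def start_pc(code: bytes, versioning_enabled: bool) -> int:
    # Validated contracts begin executing after the header.
    return 4 if is_validated(code, versioning_enabled) else 0

code = HEADER + bytes([0x60, 0x00])  # header, then PUSH1 0x00
print(is_validated(code, True), start_pc(code, True))    # validated, starts at 4
print(is_validated(code, False), start_pc(code, False))  # no versioning: plain code
```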
E: Right now the instruction 0xEF is not a valid EVM instruction. So if you tried to load a contract with the validation header right now, it would happily be stored in code; come time to execute, you would quickly fail out, because the first PC seen is an invalid instruction under the old sets of instructions. And so, if you tried to put a validated contract out there today, you wouldn't get anything out of it.
E: I added another step in there: as you're validating the code, once you get past the contract size limit, whether that's 32 kilobytes or whatever the limit ends up being, you stop validating; it's not valid anymore. Because beyond that you really can't figure out where the BEGINDATA is, since that BEGINDATA byte could be hiding inside a multi-byte instruction, like the payload of a PUSH.
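The point about a byte hiding in a PUSH payload is easy to see in a linear scanner. A hypothetical sketch: the BEGINDATA value is a placeholder, and the scanner is illustrative rather than any client's implementation.

```python
# Sketch of why a marker byte can hide inside an instruction:
# PUSH1..PUSH32 carry 1 to 32 immediate bytes, so a linear scan must
# skip those payloads, and a BEGINDATA byte occurring inside a payload
# is data, not an instruction. Illustrative scanner only.

PUSH1, PUSH32 = 0x60, 0x7F
BEGINDATA = 0xB6  # placeholder value for the proposed marker

def find_begindata(code: bytes):
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == BEGINDATA:
            return pc
        if PUSH1 <= op <= PUSH32:
            pc += op - PUSH1 + 1   # skip the immediate payload
        pc += 1
    return None

# A BEGINDATA byte buried in a PUSH2 payload is NOT the data marker:
code = bytes([0x61, BEGINDATA, 0x00,   # PUSH2 0xB600
              BEGINDATA,               # the real marker
              0xFE])
print(find_begindata(code))  # 3, not 1
```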
E: Going by the process model, at this point, to be eligible for inclusion: it's not committed, so someone would need to implement it, and, most importantly, do the implementation and the reference tests. Yeah, there's a champion; I think it would fall to me to get it implemented and get the reference tests written, and then it comes back again once there are reference tests and everyone can look at it, for security, before it's generally right for all clients to implement it.
C: I was reading through the motivation. The second motivation is that we don't need to do JUMPDEST validation, and the third one is to improve JIT, and those are practical in my opinion, although for the second one I don't think there's a big improvement, because it's a pretty fast operation. But the first motivation, about the evolution of the EVM, strikes me as kind of vague. Could you elaborate a bit on the practical impact of that?
E: So the biggest thing that this will unlock is the ability to add new multi-byte instructions. The only multi-byte instructions we have right now are the PUSH series, which are there to push a bunch of data that's part of the byte stream. The reason why we need to validate the code before we can add any new multi-byte instructions is the jump destinations. For example, if you had an old invalid instruction that became a multi-byte instruction, and there was a JUMPDEST inside of what that multi-byte instruction would consume, all of a sudden...
E: ...you would have changed things: code that was once valid to be jumped into is suddenly now invalid, because you added a new instruction where there was a bunch of noise data that you could previously have jumped into, and now you can't. That was one of the test cases that was brought up in response to EIP-615, Greg's old proposal; that was one of the test cases that was critical and was stopping it from going forward.
E: So, by putting the validation in, we can make sure that invalid codes aren't there in the code segment, so we don't have a situation where, say, an error suddenly makes the code unusable. If you add multi-byte instructions later, you don't have to worry about going back to validated contracts to say: oh, by the way, the jump...
E: ...is there just to improve numbers. But my concern is: how do we keep all already-deployed contracts still working? We want to keep them working as they are, under the expected rules, and not all of a sudden invalidate what's out there. We can't just say "well, don't do this", because between the time we announce it and the time we deploy it, we should expect people will deploy broken contracts to try and exploit it.
B: I guess it's usually pretty difficult to estimate the semantic impact of a change like this, because, as we know, there will be a lot of interesting edge cases. So I would like to spend more time looking at this somehow; this particular thing sort of caught me unawares, but yeah.
B: So it's very interesting, but I'm also wondering how this is going to play together with account versioning and UNGAS as well, and whether the path that we're going down is good, because this is actually just the first step on the path, right? Whether this path entails us having multiple versions introduced, because it does, and then how this sort of interplays.
E: So this enables multiple models. We have, you know, versions where there's a version zero, and version one is version zero plus these opcodes. The particular future that I would prefer is one where we don't really run separate versions of the EVM: if you run in validated mode, it just unlocks more operations. You still have all the old ones at your disposal.
B: So what I would say is that we should probably identify... The advantage of bundling everything together, of course, is that you only ever have two versions, but obviously the disadvantage is that you end up with a huge release, a huge hard fork, which you don't want. So maybe what we could try to do is figure out a path where we could still have two versions, but we can keep adding stuff to the second one. So what could, what does the...
B: Obviously, there needs to be a proper order of adding things: what is the first thing that needs to be added, what could be added next without ruining that version, I mean, as a backwards-compatible change to that version. It looks like this one, for example, could probably be one of the first, because it actually establishes a better platform for backwards-compatible addition of new stuff, I would say. But I would look at adding things from this perspective.
A: He says he's kind of negative about an implementation that requires looking up a bunch of historical headers to decide which VM version applies to a given header. And, to be clear, this is related to the time-based upgrade transition, so using the timestamp along with a block, rather than just the block number like we do now. Yeah.
E: The recommended change that might mean we don't have to do this complex two-phase firing is: we just require that all ommers have a timestamp equal to or less than that of the header they're included in. So the question there is what that would impact network-wise, and whether that would address the problems, so that we can just fire on a specific time.
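The ommer rule and the fork-by-timestamp check can be sketched together. The activation timestamp is a hypothetical value; the function names are illustrative.

```python
# Sketch of the rule discussed for timestamp-based forks: an ommer is
# only valid if its timestamp is at or before the timestamp of the
# header that includes it, so a block and its ommers can never
# straddle a time-activated transition. Illustrative model only.

FORK_TIME = 1_700_000_000  # hypothetical activation timestamp

def ommer_valid(ommer_ts: int, including_header_ts: int) -> bool:
    return ommer_ts <= including_header_ts

def rules_for(header_ts: int) -> str:
    # Activation depends only on the including header's timestamp,
    # with no look-back over historical headers.
    return "post-fork" if header_ts >= FORK_TIME else "pre-fork"

print(ommer_valid(FORK_TIME - 1, FORK_TIME + 10))   # older ommer is fine
print(ommer_valid(FORK_TIME + 20, FORK_TIME + 10))  # ommer from the future: invalid
print(rules_for(FORK_TIME + 10))
```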
E: So say there's an ommer whose block is before the transition while the including block is after the transition, and there are some really expensive changes, like a change to the proof-of-work algorithm as an example: it gets kind of choppy at that point. So we want to... but then, because when you validate an ommer, you just validate the header, no...
E: So Jason's proposal is just that, instead of looking back at blocks to see if it's been activated, we simply branch on a time, and the one change to the protocol, to solve some of the concerns that were voiced, was that we would require that ommers be at or prior to the time in the block for the block to be valid.
J: It can be, if you have time synchronization between two different miners, and one miner is just one second off: it will just broadcast its block, and the other miner will pick it up and, milliseconds after, will find its own block. It will publish with that ommer, and it will be still valid, because obviously consensus is in part about agreeing on what the actual time is.
E: Because all the clients right now accept about 15 seconds into the future, from what we see; it's not like Bitcoin, where they allow two hours of skew. I think our clients give only about 20 seconds. So if we codify that in the ommer rule, then it's something we can point to pretty efficiently, as well as why the clients picked this magic number.
A: Yeah, I was going to say: if it's possible to get some of those arguments onto, like, the Eth Magicians thread, I think that would be valuable as well. It seems like there are a lot of edge cases and whatnot to think through, and getting more eyes on that is probably a good thing to do. Yeah.
F: I guess that was mostly my concern too. I mean, it's nice if somebody ported it to Go; I'm just curious about how stable it is, how reliable it is. Because if it goes down to assembly... I know Go assembly is a bit more simplified, but still, if it is assembly-optimized, then that's probably something that we cannot meaningfully review. I mean, I barely understand it; maybe some day I could, but I could definitely not review it or validate that it actually does what it's supposed to do.
M: In the assembly parts there is only very basic arithmetic, like how you do the multiplication or the modular reduction of two numbers. Everything else is normal code, which you can read, and which uses the same optimizations which I have in Rust, plus or minus some specifics of Go, in terms of, like, no reallocation of variables: we reuse them and don't trigger garbage collection. But yes, it's not yet portable.
F: Oh yeah, if you have a link to the actual thing, I would really appreciate it.
H: I have an extra question, and it is this: is the code that is going to be given to the various implementers going to be independently audited by an external auditing company, or otherwise independently verified in some manner? Is that planned?
M: So, for now, we rely on, or at least I rely on, the fact that there are a few implementations done by different people, which use different approaches to how you would implement it, in different languages, and as soon as they all arrive at the same output on the same input data, on some set of predefined test vectors, they all agree on the same results. It's a kind of testing by diversity of implementations, I would say, and it is already very good.
G: I do have an update on the thinking about the timing of deployment for it. Okay, yeah. As I've been reading more about it, one of the concerns is miner collusion and then manipulating the fee in some way, and the more I thought about it, the more I think that the order in which this should be done would be to have ProgPoW done first and then have EIP-1559 as a way to combat that, which I know is a giant can of worms, but that is where I am.
G: So if a bunch of miners wanted to get together and say, "hey, let's increase the miner tip we would allow and then have the base fee slowly be chipped away", that would be an actual attack that they could do. The reason it's hard to do now is that you don't really gain a lot currently, and, as far as miners go, it is difficult to coordinate among miners; but ASICs have a harder... have a, like...
G: If ASICs are harder to get, then there is this possibility that market power could be developed among them, and I mean that in the economic sense: they would be able to enact prices that a free market otherwise wouldn't allow. And the way to get rid of that would be to make it easier for people to get into mining and not have that barrier. There are two ways to get market power.
N: I can give a little brief about it. The first meeting was conducted on January 15th, and the participation was quite decent. The best part of the group was that about half of them have actually pushed, or tried to push, at least one EIP at one point in time. So the discussion was around the present concerns with the current EIP process.
N: We discussed a few suggestions on how to improve, and those have been documented. I'm going to share the link to the document for those who could not attend the meeting and are interested to know what we did. The video and the notes have been uploaded to the Ethereum Cat Herders GitHub: we created a repository and we are going to maintain it for every forthcoming meeting, with the video and the notes there. The next meeting is scheduled for 29th January at 1500 UTC.
N: There is a Telegram group for EIPIP. It is not published, for spam-bot reasons, so people who are interested can reach out to Hudson at hudson@ethereum.org, or just drop a note to Ethereum Cat Herders on Twitter saying that they are interested to join the Telegram group, and I will take it from there. So yeah, that's it from the update side. If anybody has any questions, I can answer.
F: Essentially, when you retrieve a block on the RPC APIs, if the block is the pending block, then the spec states that the number field in the result should be null, and, for my part, not just the number: the same goes for the miner, the hash and a few other fields. Now, for the miner it obviously makes sense to leave it as null, because there's no miner involved; the same goes for the hash, since you don't really know what the hash of the pending block will be, so it doesn't make any sense to return it.
F: However, it also states that the number should be null, and I think that's actually wrong, because, the same way that the pending block actually has a parent hash which references a real block, if it has a parent then we know specifically what the number of the pending block is. Of course, by the time it gets included, maybe there will be three other blocks and whatnot.
F: So of course you cannot rely on this block being final, but we know that this thing should have number such-and-such. Furthermore, since transactions can access the number of the block that they are executed in, it also means that transaction execution can depend on this number. So if we just say that pending blocks don't have numbers, that's kind of a little white lie, because transactions executing inside do have access to that number. So I don't think we should hide it from the end user.
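F's argument is that the pending block's number is fully determined by its parent. A minimal sketch, with an illustrative head number:

```python
# Sketch of F's point: the pending block's number is parent + 1, so
# returning null on the RPC hides a value that code executing inside
# the block (the NUMBER opcode) can already see. Illustrative only.

def pending_block_number(parent_number: int) -> int:
    # The pending block always builds on the current head.
    return parent_number + 1

head = 9_300_000  # hypothetical current head
pending = {"parentHash": "0x...", "number": pending_block_number(head)}
print(pending["number"])  # 9300001 rather than null
```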
F: So, essentially, this was something that we "fixed" in the last geth release, I mean, fixed it by turning it into null, and some people already started complaining that it broke one of their tools. We investigated, and we kind of decided that, although our latest release conforms to the spec, I think the spec should be fixed and we should revert that change.
F: There are not; there should be, I guess. The problem with these conformance suites is that you need to actually set up a meaningful chain so that you can fire RPC requests against it, and you need something like that in a cross-client, portable way. Way back, I think Fabian, for the wallet, wrote one, but that was even before the Frontier launch, so it just tested a subset of the RPC calls as they were back then.
E: Yeah, because that goes to the second question I had: I know a group is trying to spin up work with the OpenRPC stuff. Is this something that we should own, or should we encourage the group that's spinning it up to own it and run with it, and we fully adopt it? It's more of an open discussion question; I don't have the answer, I'm looking for one.
F: So, me personally, I'm not a fan of OpenRPC, because... I think OpenRPC is essentially kind of a clone of Swagger. Maybe there is a new name for it now, I don't know what the name is, but essentially it's a standard for specifying a RESTful API.
F: That thing has been kind of embraced by a lot of industry players, big players, and it's been pushed really hard, so we can assume that Swagger is kind of a standard. OpenRPC, however, positions itself as the spec for JSON-RPC, but really it's just a team that put it together for Ethereum.
F: So the reason I'm not a fan of it is because, if it is specifically made for Ethereum and nobody else uses it, then it's not really a standard.
G: After going through the call, going through all the awkward stuff, I think we have the recipe for doing it right, which is: we have groups that are willing to dogfood it, that are working on it and have pain points, so it's almost like doing it in-house, except "in-house" here means in-protocol. That's as opposed to making a standard by having a group of people go to another room and then decide "hey, this is how the standard should be".
A: Okay, I had some weird echo there. So, that was the last thing on the agenda. We don't have the notes from the previous call yet, so we can't really review the action items, unless anybody remembers anything important that we should follow up on. Otherwise, if people have anything else they want to discuss in the last four minutes, we can do that.
A: Okay, I guess one thing I would want to bring up, but probably for the next call two weeks from now: it seems like we have a lot of stuff that's kind of in flight for Berlin, and then some stuff that's starting to shape up for London. So it might be good to have, like, a discussion of what could be ready to ship when, and how early or late all of these proposals are. I don't know if anybody else thinks that would be valuable.
A: Cool, so yeah, maybe we can kind of have that as an action item for the next call: over the next two weeks, to just try and follow up with the teams, and, if you are the champion for an EIP, try and get a feel for how far along things are and what a realistic timeline is to get them done. Then, based on that, we can maybe see what can be bundled together, what should be standalone, and what the dependencies are across the various initiatives.