From YouTube: EOF Implementers Call #6
A: Hi everybody, welcome to the EOF implementers' call number six, formerly known as the EOF breakout room. We decided last week to start doing this on the weeks alternating with All Core Devs, for at least the next couple of months, to try and get a hold on EOF and figure out what the game plan for trying to get a Cancun inclusion is. So, let's start with some client updates. Does anybody want to share what they've been up to since All Core Devs last week, or maybe a little bit before that?
B: Doing mostly cleanup, and taking some time to do some optimizations now that we've got it correct; we can see if we can do it faster. One of them has to do with the PC handling and another one has to do with counting dead code. So nothing too exciting or groundbreaking.
A: Great. Erigon — is anybody here?
A: Guess not, okay. On the Geth side, it's also pretty similar: I spent some time trying to clean things up. I haven't really done much performance work on the PR; mostly I've been extending the cleanup to improve errors and logging, and that partly came out of this proposal for the reference test format, which we'll talk a bit about in the testing section. But that's mostly it; not a ton of new things on the Geth implementation.
E: Yeah, can you hear me? Okay. Nothing substantial has happened since the new year for us either. We did put up a draft PR of an implementation at the end of last year and reported the last issues we found, but that's basically it for us.
A: Okay, spec updates. Are there any things from Ipsilon that you guys wanted to bring up and talk about? There are a couple of things that Danno listed that we can chat about, but is there anything else you have on your mind?
B: Yeah, so back before we got rid of the PC operation, it made sense to say that PC equals zero inside the execution at the beginning of the code section. Now that we don't have a PC opcode or a direct jump, there is no way to introspect or interact with the absolute position of the PC inside the spec, so I would like it removed: execution just starts at the beginning of a code section, without specifying the PC.
B: It saves me from having to, in the degenerate case of 1024 different containers, slice up the array a thousand times; I just use the same array and pass it around. So it's helping.
F: Yeah, so EIP-4200 still refers to PC and how the PC is modified, but the intent there — and I think the same applies to the other EIPs as well — isn't about what the PC opcode returns; it's rather about the internal PC representation.
F: I mean, in EIP-4200, if you want to eradicate PC as a term entirely, then I think we need to come up with something else. I didn't have a further point, but yeah.
A: I guess it only matters if PC ever becomes something that can be introspected again, because right now, the way I understand it, you have implemented the EOF spec; it's just that there is this guiding principle of PC equals zero that states that execution should start from the first byte of the code section. But even though your PC is an absolute value across the container, you still get the same outcome.
A: Okay. I personally don't have a problem either way, so if you guys are okay with removing it, I think that's okay.
F: I would suggest that if we remove the PC-equals-zero wording, then somewhere we should explain explicitly that it is implementation-defined, and every single instruction that changes the PC has to be described as doing so relatively. Maybe this belongs under security considerations; I'm not sure that's the best section for it, but I think it has to be made clear.
B: The trace specification — the standard tracing — may impact that; I'm looking at it now, hearing the discussion. That's the next point I brought up: in EIP-3155, our standard trace output has a PC value, so we would have to do some math to get back to PC zero.
A: My intuition is that relative to the code section is preferable; it seems much clearer what's going on when reading the execution. I don't think that would cause your optimizations to be thrown out the window, because you can still calculate it in some branch: if tracing is on, then calculate this.
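A minimal sketch of that branch, in Python: converting the interpreter's absolute PC into a section-relative PC only when tracing asks for it. The section layout below is a hypothetical illustration, not taken from any client.

```python
# Hypothetical helper: map an absolute PC inside an EOF container to a
# (code_section, relative_pc) pair for trace output. `section_offsets`
# lists the container byte offset where each code section starts; this
# layout is illustrative only, not any client's real data structure.

def relative_pc(abs_pc: int, section_offsets: list[int]) -> tuple[int, int]:
    """Return (section_index, pc_within_section) for an absolute PC."""
    idx = 0
    for i, start in enumerate(section_offsets):
        if start <= abs_pc:
            idx = i          # last section starting at or before abs_pc
        else:
            break
    return idx, abs_pc - section_offsets[idx]

# Two code sections starting at container offsets 19 and 42:
offsets = [19, 42]
assert relative_pc(19, offsets) == (0, 0)   # first byte of section 0
assert relative_pc(45, offsets) == (1, 3)   # fourth byte of section 1
```

Keeping the conversion in the tracer, behind the tracing-enabled branch, would let the interpreter keep its single flat bytecode array.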
B: You know what block it's coming from, so you could define it from that. But it would also be useful, when you call between contracts, to know you're calling between an EOF and a legacy contract, if you see this section index appear and disappear.
B: Those are my two questions. So, no change on the spec — I'm fine with that — but the tracing spec needs to have some clarity somewhere. I don't know how to add that.
A: Oh, it's stagnant, so fortunately Martin and Marius can change it. Okay, that's something that we should work on. I have a couple of other things I stuck on here. I know that a big outstanding spec question is figuring out how to support init code arguments. I don't know if there has been any other thinking about how we should deal with this, how we should resolve this problem. We don't have to go into a super long debate here, and I do have another section.
F: We started to work on how to do creation. We started a document last week to bounce some ideas around internally, and we want to share it as soon as possible. There hasn't been too much work on it in the last week, but my hope is that we'll actually share it early next week, and that could be a discussion point, especially for the interop. There have been some interesting ideas on how to actually support immutables, and it seems like a solvable issue.
A: Okay, great; let's talk a little bit more about that when we get to the interop plans. Then the last spec update — I had a bit of a question here. We have some more time now: does it make sense to work on some unified EIP at this point, or do you think we should just continue forward with the multi-EIP approach?
B: It's a whole lot easier to read a unified EIP. Do we go the meta-EIP approach and have it copy the others, or do we deprecate the other EIPs if we go that route?
F: Probably not — do you mean that we're going to have a brand new unified EIP, copying that entire spec text, or are we saying that for now we're just going to rely on the unified spec, and once everything is, you know, 100% clear, then we're going to copy stuff back into the EIPs for adoption? What is the suggestion?
F: One of the reasons the unified EIP is nice is that it doesn't need to be concerned with all those other required sections. The motivation can be quite big, plus the rationale and the security and backwards-compatibility stuff.
F: All of those could easily double the text. But also, the current EIPs have the Python code, which we discussed may be obsolete at this point, so even the current ones could be cut back. Personally, I think the least amount of work for now — at least for this month, during the interop — is just using the unified spec and not caring about any of the other EIPs and all that.
A: Yeah, I was getting the feeling that people preferred working off the EIPs, and to be fair, the EIPs are more concise. I've realized that there are some small things that are overlooked in the unified spec; the contract creation aspects, I think, are under-specified. Of course, we can specify these things in the unified spec in a good way; it's just that it's not quite as clear as the EIPs, and everybody tends to just fall back to the EIPs and say, okay...
A: ...well, what does the EIP have to say about this? In that case, we're kind of losing the advantage of the unified spec, because I think the behavior should be: if the unified spec is not clear, then we should update it to be clear, rather than falling back to the EIPs and then, at some point in the future, trying to port all of the changes in the unified spec back to the appropriate EIPs. I don't think that's the flow we've followed yet.
A: Anyways, we don't have to make a decision on this. I just think that, as things continue to change, it is better to have all the changes in a single place. This is something that has been difficult over the last two months for implementers, and for testers, trying to make sure that everybody is agreeing on the exact same things. So it's something we can consider.
A: Okay, testing updates. Ipsilon guys, do you have anything to share?
G: I have some stuff. Maybe quickly about the spec updates — I didn't say this before — there were two changes from my side. One was just to rename the data stack to the operand stack; that was a pull request to some of the EIPs, and it has been merged. And there is a follow-up, listed in the agenda issue for this call, that changes some of the runtime checks to asserts, because the conditions are already guaranteed by the stack validation. So there's a pull request for that.
G: It hasn't been merged yet. On the testing side, we were focusing on comparing mostly the Geth implementation with the evmone implementation using state tests, and as of today, I think there's only one issue remaining; the implementations agree on the rest. The issue is in CREATE, as usual: what the state test expects versus what the Geth implementation does in case the init code is invalid.
G: The check should come before that, and the nonce should not be updated, but somehow the test explicitly expects this. So maybe there's some decision made that I'm not aware of, but I'm still of the position that it should be the other way around.
G: Yeah, so if that is resolved, I think we're in a pretty good position right now. I think more tests are coming this week or next week for other test cases, but the two big chunks of the existing tests are pretty good right now.
A: Okay. There's been some talk about having a reference format for EOF tests.
A: This is about as bare-bones as you can get for a filled test fixture. It really just has the code and then the results we would expect for each fork. The maybe-big question here: there was this idea of standardizing the errors that we have from the parsing and validation code.
A: This puts some maybe unnecessary constraints on implementations, because certain checks would need to be done in certain ways to be in conformance, and maybe that is overbearing for implementers. But I do think that having the ability, as a tester, to say "I am writing a test, and this is the error that I'm writing this test to cover" is useful, beyond the general binary pass/fail output.
B: So I like enumerating the expected exception in there. I would prefer to see a text string instead of a number, so I don't have to look up a table and can tell it from the test directly. What Besu does: there are a couple of other test scripts inside of the reference tests that specify the exception. We don't map the exceptions right now; we just verify that the exception occurred and use that as the failure signal, but we could easily do the mapping and make sure we get the same failures.

I do like the idea of trying to say what should fail, and trying to make a test fail in only one way. Several years ago, one of the pairing precompile tests had two ways to fail; an implementation was basically only getting one of them correct and wasn't checking the other way, and that was the only test case that checked that pairing or something. And because none of us are crypto masters who know what those checks are...
B: ...we didn't know that the check was invalid. So enumerating in the test how we expect it to fail is, I think, valuable to those who aren't, say, PhDs in cryptography; it can tell you what all the failure cases of an operation are. So I see value in enumerating them, even if they're not directly used.
A: Yeah, I guess I can update it to use strings. My initial hesitation was that each language has its own way of writing what errors are, and each code base has the same thing, and I felt it was maybe overbearing to say "your text needs to look like this." But the way I ended up implementing it, it's actually outside the main code base for Geth.
A: It's really just a table mapping these Go error types to integers, so I can just update that to some standardized text output. That's totally fine.
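As an illustration only — the error names and standardized strings below are hypothetical, not Geth's actual error values or the proposed test format — such a mapping table might look like:

```python
# Hypothetical table mapping client-internal validation errors to
# standardized error strings that a reference-test fixture could carry.
CLIENT_TO_STANDARD = {
    "errInvalidMagic":   "EOF_InvalidPrefix",
    "errUnexpectedEOF":  "EOF_UnexpectedEndOfFile",
    "errInvalidVersion": "EOF_UnknownVersion",
    "errMaxStackHeight": "EOF_MaxStackHeightExceeded",
}

def matches_fixture(expected: str, client_error: str) -> bool:
    """True if the client's internal error maps onto the fixture's string."""
    return CLIENT_TO_STANDARD.get(client_error) == expected

assert matches_fixture("EOF_UnknownVersion", "errInvalidVersion")
assert not matches_fixture("EOF_InvalidPrefix", "errUnexpectedEOF")
```

Keeping the table outside the client's main code base, as described above, means the standardized strings can change without touching the validator itself.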
F: This is a really tough topic, but for a long while I was interested in whether we could introduce error codes for why execution stopped, because that could help in comparing the different implementations, and could help in establishing better coverage — or at least understanding the coverage — of the tests.
F: But one big downside is: if we do that, then a lot of these different conditions under which something can abort have to be described way more precisely than they are today — one good example is the discussions around EIP-3860 and the init code — and of course that would be more prone to consensus bugs. I wonder if this step is a step towards that.
B: It depends how hard we drive the standardization. If it's a name used in tests in an advisory way, I'd say we just go ahead and do it, and clients can use it or not. The tricky part comes in if we need to fail in a specific way — when failing with the wrong failure itself becomes a failure. That's where, you know, the spec allows for multiple failure paths, and that's where it gets tricky: if five things fail, I don't want to have to say that we need to check them in a certain order; I just care that one of the five is returned. So if we just put it in the test and say we expect it to fail this way, I have no problem with that.
A: It may turn out that it's pretty straightforward for us to do this error propagation and agree on certain errors, and maybe that's a signal that we should look more into doing this at the EVM level. It could also go the other way, where we realize that this is hard, and maybe the binary pass/fail is preferred.
H: Yeah, so in at least a couple of the tests that I've written, I've seen cases where it's totally possible to get two different errors from the ones listed. So maybe, if we have that scenario — where it's impossible to narrow down to a single error, because it depends on the client and on the order of the checks it does — we can have an "either" error for very special test scenarios.
H: Yeah, I mean, that could be one suggestion, but I'm not sure if we'd just be lowering the coverage of these tests by doing that. I know it's not ideal, but I've seen at least one test case where it's impossible to determine the exact error, because you would need to know the implementation to see which check goes first.
H: I could give an example, but I don't quite remember now. I just know there is at least one test case where it's impossible to know which exact error you're going to get.
A: Yeah, well, right now there is this kind of confusing error, "unexpected end of file", and that is something that probably needs to change, but it kind of overlaps with other errors. You might have a magic that's invalid because you only have the EF byte, and some might say that's an invalid magic, depending on how you check it; others might say "I was trying to read something and I ran out of things to read", so it was an unexpected end of file.
I: Yeah, but in this particular case, I think that if you have EF without the 00, then you shouldn't be running the EOF checks in the first place, because this is not EOF code; it should just be treated like normal code, and fail the way normal code fails when it hits an EF.
B: There's another way we might have multiple errors. Let's say there are five ways the code section can fail, and I have a test case with five code sections, each failing in a unique way. Do we then mandate that the client evaluate the errors in byte order, in linear order? I think it's going to be impossible to say there's always going to be one particular error.
H: I would argue that that's not a good test case, because you want to test one section, one particular thing, at a time. It was more like this example I thought of: I wanted to make a test case where you overflow the stack, so we'd have a stack overflow error in the list, but to do that we would have to declare a too-large max stack height. So which of the two errors would you list? If you want to overflow the stack, you have to have a high max stack height. It's those kinds of very specific things.
I: For example — never mind — we check that EOF code, so basically any code that starts with EF00, has a minimum of, like, 20 bytes or 19 bytes, or whatever the minimum valid EOF code could be, and this is the first thing that we do to validate the code, because it does not make sense to go through parsing the code if we know it is smaller than any valid EOF code out there. And this probably is not implemented the same way in other clients. Other clients would go through things and might end up with a different error than this one. So we would say: oh, this is not valid EOF code because its size is smaller than the minimum valid EOF code, but the other client failed because, for example, the version is 02 instead of 01, and 02 is not a valid version for EOF.
D: In my opinion, if we're going to do this, we need to be extremely sure that each test case is isolated and doesn't touch any other test case. So if we have truncated code, or very small code, we should make sure that at least the prefix — whatever is included in the data — is correct, and the only invalid thing in that test case, say in the example I gave, is the code being incomplete.
A: Yeah, that makes sense. So one way we can use this is as a tool to help us understand whether the tests we're writing are testing the things that we expect. The thing that we really care about when we look at whether clients are passing these tests is the binary output: is there an exception, and did the client throw an exception. That way we don't get stuck on this whole "are clients doing things in the right order" question. But as a test writer, still being able to say "I am expecting this test to fail this way", and having that in the filled test — maybe, as a client runs those tests, the test says it should fail because of stack overflow, but instead it fails because there were not enough code sections or something. That can be a signal to implementers that, you know...
A: ...maybe there is something to look at more deeply. It could also be a signal for test writers that maybe the test they wrote is not testing exactly what they think. This is not going to be a perfect system; it's not going to be something where one side — either the client or the test filler format proposal — is perfect in a vacuum.
A: It's something that's going to have to go back and forth a little bit, but if people are okay with this, then I think we should just move forward with having these enumerated errors, see how it goes, and not get too hung up if the error a test is requesting is different from what the client is throwing.
F: Yeah, I would be in favor of giving this direction a try, as long as we're able to iterate quickly and roll back. I think in an ideal case it would be nice to actually have all of the error conditions for the EVM properly specified, and to test the order of everything. But yeah, that's a really long-term goal.
D: Also, there's one more thing: there are three aspects of the code that we should test. There is the parsing of the header and of the code itself, there's the execution of the code, and there's the deployment of EOF code. For the parsing itself we can do things in a different order and we can have different errors, but the result should be the same: a valid piece of code is a valid piece of code, no matter what the implementation says. So for the parsing we can be a bit loose about the error messages, but for execution and deployment we should be strict; there are no other cases there — those are consensus software, not just parsing.
A: I think we should talk about the interop; we can keep talking about these testing formats in the EVM channel. There are these filled tests: if people want to start looking at implementing some sort of test runner in your clients, it should be really similar to the EOF parse tool that you guys mostly have already implemented.
B: For the other Besu devs that were involved in the EOF work — they said put it in Cancun, so I put mine in Cancun, but I can alias the tests to Shanghai-plus-EOF and have that go to Cancun, because right now, in the Besu code, that's the only thing they can code against. So I'm flexible, but that's where we're at.
A: Okay, let's just put it in Cancun. I can rebuild the tests and they will be forked by Cancun. So let's do that.
G: Yeah, I think they should, unless there's some mistake. The tests, as they are written, also have an expectation section that is not visible in the final file, but it's in the source file of the test. So I think the tests are not 100% blind — it's not that you put some code in, just execute it, and whatever the EVM returns is your test; there are expectations that are checked against. If there's something that you think is wrong, that might be a mistake in the test or something like that, but they usually should reflect what we're actually testing.
I: Can you point me to the place where this expectation is laid out or mentioned?
A: I guess, from my point of view, I'm a bit apprehensive about it, because it does feel like we're starting to reach the edge of what we can comfortably do in a single fork. We have a lot more time now, and maybe that affords us a greater ability and range of changes, but yeah, not really sure, Paul.
G: Yeah, pressed the wrong button. How I feel about it is: the current spec and the current test suite are like EOF 1.0, and there are some additions that I think we can consider later, mostly after testing. Once everyone is on the same level about testing and we're confident, I think there are three or four additions that can be added to the current EOF that are backwards compatible with what the clients have.
G: Examples are the unlimited swap/dup version; the jump instruction that we removed; the notion of a non-returning function, which affects the stack validation and can enable some optimizations on the compiler side, I think; and the fourth one is what someone suggested, I think yesterday or two days ago, about the delegatecall issue.
G: So we can also add this one in. I'm not sure this should be a commitment to ship it, but we have some basic ability to revert implementations to the stable 1.0 version without too many issues, and we can always revert to the snapshot of tests and so on. So if we have time, I would go this way, and also work on the EOF 2.0 — whatever — spec with some additional changes.
J: Yeah, I think during the last ACD call it was mentioned that there are some hacks required for EOF in Solidity. So to my mind, that's the priority we need to address — is it due to constructor arguments or something else? I don't remember exactly, but I would prioritize the Solidity implementation authoring EOF so that there are no hacks, or only minor hacks, in Solidity.
E: I don't know why I didn't bring it up earlier, but there's one more thing, which would be relaxing the stack validation, because currently it prevents code deduplication in cases where I think it doesn't have to, if we make the algorithm more complicated but still linear in runtime in the number of opcodes. I haven't thought it through, but I may be able to suggest a change there which would allow for more code deduplication — for example, to jump to the same destination from positions with different stack heights.
E: That's the issue — not sure what the general feeling there is. That's something that is really a change and can't be added later, whereas this swap/dup stuff, I agree, can be added later with less effort.
B: We need to tread carefully on the stack validation, because it is critical to some transpilation issues. In the case that you outline, where you jump into something terminal and only the top X stack components matter — there are mappings, like coloring maps and register swaps, that can be done that we would lose if we get rid of stack validation. But if we're taking what would be invalid and moving it to something that would be valid...
B: ...that's like a soft-fork fix, and that can be added later. What can't be added later is taking something that was valid and making it invalid; that's where we have to bump major versions. So adding things like SWAPN/DUPN — those can be soft-fork-added in a 1.1 and don't have to be done in the first version, because it's just increasing the scope of what's valid, rather than moving stuff from valid to invalid.
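That widening-only property can be sketched like this (the opcode values and sets are placeholders, not real assignments): a 1.1 validator that only adds opcodes accepts everything the 1.0 validator accepted.

```python
# Hypothetical opcode sets: v1.1 only ADDS opcodes, never removes any,
# so every container valid under v1.0 stays valid under v1.1.
VALID_V1_0 = {0x01, 0x02, 0x50}           # placeholder v1.0 opcodes
VALID_V1_1 = VALID_V1_0 | {0xB0, 0xB1}    # v1.1 adds swap/dup-style opcodes

def code_valid(code: bytes, opcodes: set) -> bool:
    """A toy validator: every byte must be a known opcode."""
    return all(b in opcodes for b in code)

old_code = bytes([0x01, 0x50, 0x02])
assert code_valid(old_code, VALID_V1_0) and code_valid(old_code, VALID_V1_1)
# New code may be valid only under v1.1 — a pure widening of the valid set.
assert not code_valid(bytes([0xB0]), VALID_V1_0)
assert code_valid(bytes([0xB0]), VALID_V1_1)
```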
A: Yeah, okay. I want to switch gears and talk a bit about the time that we're going to have in person in a week and a half or so. What are client teams' plans for the interop in general? I've been talking to Tim a bit, and it seems like it would be really ideal if we had a separate track where we sat together and really worked through some of these EOF things.
A: Do we have enough things to work on? Are people going to be able to work on EOF, or do you have other responsibilities you're going to need to be working on, with respect to 4444s and Shanghai? I'm not sure what people's availability is — I am personally able to work only on EOF that week, but I don't know what other people's responsibility load is.
I won't be able to work on EOF the whole time. I will be there, but I will also be working on making sure timestamp forking works correctly, and probably devnet stuff, but that's mostly it.
A: Okay, yeah, Tim was also asking about this. Daniel, I think it would be great to have you at some point. I think we should spend some time and figure out what we want to do with EOF that week, to know when the best days for you to come would be. I don't know if there are certain days — we can talk about it offline — but if there are certain days that work better for you, obviously let us know.
A: Okay, that sounds good. I talked to Tim a little bit as well, and he wants to do a session where we talk about EOF 2 — mainly about these ideas that Vitalik is having. So that will definitely be something; we probably want to do it later in the week, so we can spend some time together and flesh some ideas out beforehand.
A: We could continue talking about this async. I think the biggest thing I want to talk about before we leave this call is: what types of things do people want to have ready when we show up there? We already sort of have pretty good interoperability of clients. Are there any things that we can prepare in the next week and a half that will put us in a good place for getting a lot of work done at the workshop?
I: Are the tests final? Because if they are, then clients should be working on making sure they pass all the tests, so we can have some type of devnet. I believe this is one of the best ways we can go into that week.
G: Speaking of tests, I think they're not final yet. There are still tests in preparation — mostly to add new test cases — which haven't landed yet. And, as I mentioned, I think there's one issue with the tests, and one of the implementations that we have to correct as well.
A: All right — are some of these tests merged now into the tests master branch?
G: To the main branch — yeah, not everything has landed there yet.
A: Okay, so if I can make a proposal for a game plan until we arrive there: maybe first we come up with a list of the tests that are released and make sure that clients are passing those tests. That would be a good place to start from, just knowing that we're all implementing the same things. And then maybe I would make the request that we also implement the reference test tool.
A: That could be a good place to expand upon in person: figuring out how to deal with these error codes, if we're going to deal with them, and maybe trying to do some fuzzing based on that. I think if we came in with those two things, that would be a pretty good place to start. Does anybody disagree, or have other things they would like to add to the list?
B: That does sound reasonable. We also need to have discussions on things like delegatecall basically circumventing the restrictions; we didn't have time for that on this call, but I think that's also on the critical path for EOF v1 — or we can kick it to later EOF versions. All right, yeah.
A: So I guess I'll write a document and share it with everybody about some milestones we want to hit before we get there. Maybe one other thing I would also add is just to spend some time and think about what the things are that need to be solved for EOF v1, and then we can start out the week, sit down, talk about those things, and really hammer through any spec issues that still exist.