From YouTube: Breakout room meeting #6 (Pt.2)
A
...is worse, and I'm not sure exactly what the concern is with account abstraction. There was some similar issue where having 2929 made it less bad, so I think it is kind of a prerequisite. So if we push that back out by two or three months, then we're basically pushing out 1559 and, like, the other big things we want in future hard forks by a while.
B
I don't think that's what was possible. Well, I was just saying that there were a couple of reasons: we want to get 1559 in, and these are things that are prerequisites. I understand this, but if we have to push 1559 back three or four months just to do things right the first time, I don't think this should really be a debate. I feel like we should focus on doing things right rather than taking shortcuts.
E
Yeah, I mean, I think you can leave the whole 1559 and account abstraction argument to the side and just talk about: okay, does the community feel it's worth it to leave this DoS vector out there for n more months? And for what value of n is it worth doing the 2718 thing right the first time?
F
What's the argument here? What is the cheat, and what is doing it right?
A
Is doing it right? I think Matt's question was (Matt, correct me if I'm wrong): why are we doing 2718 with RLP encoding?
E
Yeah, wait. Was your argument that we should do it with SSZ, or that we should do it in a way that is...?
B
The way that's compatible with SSZ is to do it with SSZ. I think that Micah kind of realized some of these things that make it difficult to support it in this nice way where we can have both. And so, if that's the way, and if n, the number of months we're okay with the DoS vector living for, is six months, then I don't understand why we wouldn't just go ahead and do SSZ.
F
It's kind of never-ending; it's a very large hole once we start digging. So we do the transaction in SSZ; oh, then we want to do the transaction root in SSZ; and hey, we actually want the whole block in SSZ. It's this big snowball that keeps accumulating. At some point we need to just draw the line and say: okay, we do this as the SSZ-ish piece, but we don't do all of it right now. Yeah.
D
Yeah, I think that would be the lowest-technical-debt path. Like I indicated earlier, the big pushback is that there's an ever-growing fear about the DoS vector. I think people are afraid that this is one of those things where it's a combination of: the attackers don't listen to these calls super closely, so they're not aware of it, and also there's no easy profit in it.
C
Yeah, I've kind of just viewed it as trade-offs all the way around. Even going the route of purely doing everything right the first time also has trade-offs for some of these other things. So the idea that somehow we can get through purely doing everything right, I just think, doesn't really work in practice when we're trying to ship at least something.
B
Yeah, I mean, I don't want to throw Felix out there, but I talked to him a little bit out of band and he wasn't sure if putting SSZ in networking first was the right play. I don't want to take up too much time on this debate, which just seems to continue raging, but it's something that bugs me about going down this path: building something that is going to need to be changed in the future.
B
It seems like there is fatigue, and it seems like there is a desire to ship something, but I don't think that's the right motivation behind a hard fork. I think a hard fork should do its job, and at this point in Ethereum's life I think we should just do things right, without the expectation of changing it in a year, or 18 months, or whatever that period is, to put SSZ in.
H
Okay, so now that the Geth team has also joined, let's come back to the agenda of today's meeting. Today's breakout room meeting was proposed to get consensus on the encoding options proposed for 2718, the typed transaction EIP. There was some discussion on the Discord channel around the options; those options are listed in the agenda, along with some questions that we would try to get answers to today, proposed by Micah: how do we hash typed transactions?
D
So, at a high level: the current specification for 2718 is incompatible with the plan B that Péter proposed. That's the big topic that we need to discuss. I think the other two, which we can probably start with because they should go quicker, are: what is the hashing mechanism for typed transactions? I realized that this isn't actually specified anywhere.
D
If anyone disagrees with that, we can discuss it right after I'm done with this intro. If not, I can just put that into a spec somewhere. I don't know which spec it will go in, but it'll go in a spec somewhere, probably the networking protocol specs, since I think the networking protocol is where this actually shows up. Can you guys see my screen, by the way?
D
Currently, the legacy transactions are... and according to the spec, legacy transactions will change to be... oh shoot. Do you remember, lightclient? I had it written down.
D
Each of those inner lists is a transaction. The spec as currently written changes that (again, not entirely intentionally) such that you get an RLP list of byte arrays, and each byte array is a transaction. Some of them are RLP encoded and some of them are typed transactions, meaning they've got a concatenation of the transaction type followed by an RLP encoded list.
D
This creates complexities, specifically if we want to go with Péter's plan B solution, because we can't magically backport this: anyone who's speaking the old networking protocols will be sending a list of lists, not a list of byte arrays, and anyone who has upgraded will now be expecting a list of byte arrays, not a list of lists. So, if we want to stick with Péter's plan B, we most likely need to change to some other wire encoding; most likely it will be a combination where legacy transactions will just still be a list, and then typed transactions will be something else.
D
Maybe a list, or maybe that's the third topic, which is going to require a little bit more debate, I suspect: how do we encode the typed transactions? Currently they are wire encoded as this opaque byte array. So basically, a transaction list over the wire is an RLP list of things, and typed transactions in that list would just be a byte array where the first byte is the transaction type and all the remaining bytes are an RLP encoded list, which is the transaction.
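As a rough illustration (not the normative spec text), the opaque-bytes format described here, one type byte concatenated with the RLP-encoded payload, embedded as a byte string inside the outer transaction list, might be sketched as below. The field values and the type byte 0x01 are placeholders, and the RLP encoder is a minimal one written for the sketch:

```python
def rlp_encode(item):
    """Minimal RLP encoder for byte strings and (nested) lists."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # a single small byte encodes as itself
        return _prefix(len(item), 0x80) + item
    payload = b"".join(rlp_encode(x) for x in item)
    return _prefix(len(payload), 0xC0) + payload

def _prefix(length, offset):
    if length < 56:
        return bytes([offset + length])
    n = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(n)]) + n

# Hypothetical transaction payloads; the fields are placeholders, not real ones.
legacy_tx = [b"\x01", b"\x0a", b"\x52\x08", b"\xaa" * 20, b"\x05", b""]
typed_payload = [b"\x01", b"\x0a", b"\x52\x08", b"\xaa" * 20, b"\x05", b""]

# Opaque-bytes envelope: one type byte concatenated with the RLP payload.
typed_tx = b"\x01" + rlp_encode(typed_payload)

# On the wire, the outer transaction list then mixes a legacy item
# (an RLP list) with a typed item (an RLP byte string).
wire = rlp_encode([legacy_tx, typed_tx])
```

The point of the shape is visible in `wire`: the legacy item decodes as a list, while the typed item decodes as an opaque byte string whose first byte names the type.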
D
So, if we open our minds up to that, we could then encode transactions as just a list, where the first item in the list is the transaction type, and the second item in the list is another list which is the rest of the transaction. Or we could just flatten it all, so it's just a list where the first item is the type byte.
D
The problem with these two is that they kind of conflict with legacy transactions, because a nonce, which is the first item in this list for legacy transactions, can be one, which is a valid nonce. And so we would have to do more complex things, which kind of defeats the purpose of 2718 a little bit, which is to try to avoid having to do things like: okay, well, if the second item in the decoded list is a list, that means this is a typed transaction.
D
If the second item is a number, then it means it's a legacy transaction; we're trying to get away from that with typed transactions. So these two, to me, feel like they're kind of going against the ethos of this whole typed transaction concept, but there are options on the table if we decide that makes more sense. The last one that's been proposed is that we actually have a sort of...
D
This does not have the conflict problem, because you'd first read the transaction type byte and then use that to determine how to decode, or how to interpret, the next list item. And for legacy transactions we could either have some magic type, or we could just say: if there is a list that is not prefixed by, or doesn't come immediately after, a single-byte item, then it's a legacy transaction.
D
So these are the current options that have been proposed that are semi-viable.
D
It's sad that I didn't type these out here, but the two options would be kind of like this, where we just say a legacy transaction is the RLP encoding of the transaction, and it shows up as a byte array, not as a list, inside the outer transaction list.
D
Again, this is not compatible, I believe, with Péter's plan B, and so if we decide to do that, it cleans some things up, but it means we cannot go with Péter's plan B, where people don't implement all the network protocol versions before Berlin, which would seem to be a big blocker. The other top option, I think, is to have legacy transactions just continue to be encoded exactly as they are, which means they will not look similar in the transaction list.
D
And then, you know, branch off legacy transaction handling versus typed transaction handling, instead of just having a single switch statement on transaction type. We do the special case where, if the first byte is 0xf8, 0xf9 or 0xfa, we treat it as legacy, and if not, then we treat it as new. So it's a little uglier code, but it's the least change.
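That special case can be sketched as a branch on the first byte of each item in the outer list. The ranges below are the standard RLP first-byte ranges; the branch itself is only an illustration of the idea, assuming (as 2718 does) that type bytes stay small:

```python
def classify_tx(first_byte: int) -> str:
    """Branch on the first byte of a wire item (illustrative sketch).

    Standard RLP first-byte ranges:
      0x00-0x7f  a single byte that encodes itself
      0x80-0xbf  a byte-string header
      0xc0-0xff  a list header
    """
    if first_byte >= 0xC0:
        return "legacy"   # bare RLP list; typical legacy txs start 0xf8-0xfa
    if first_byte <= 0x7F:
        return "typed"    # a small transaction-type byte
    return "invalid"      # a byte-string header fits neither option here
```

The "little uglier code" observation is visible here: the legacy case is detected by an RLP structural property (a list header) rather than by an explicit type byte.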
E
And that's for keeping the legacy transaction format as is, so that we can continue to do option B?
E
Well, because if you change the way legacy transactions are encoded, then all the messages, like, you know, the new-block message, which will contain this list of transactions, will have legacy transactions that are different. And so, if you don't implement this new eth protocol version, you're just going to reject all of those messages, because they're not going to have valid transactions in them.
I
But can you revert back? Basically, in this example, you just concatenate the byte array for the typed transaction, and for the legacy section we can use a list as it is. Basically, only when a typed transaction is included in the list of transactions do you include it as bytes, as RLP bytes.
I
Okay, okay, scratch that. I'm somehow confusing the last option and this option. Sorry.
D
Yeah, yes, sorry. So these are two options for legacy, and then these four are for typed transactions. And so the question is: do we want legacy transactions to look kind of like the typed ones? Like, if we choose opaque bytes for typed transactions and we go with this, then these two look very similar: they're both just byte arrays. You can switch on the first byte, which is kind of convenient and nice. However, again, it would be incompatible with legacy versions of the network protocol.
D
Just to kick things off, my preference is... so, I dislike these two, because transaction type one and nonce one are both valid, and it means we can no longer do simple switching. We have to do things like: okay, we'll look to see if the second item is a list, or look to see how many items are in the list, to determine the type.
D
So I'm not a fan of these two personally. My favorite is this one, because this encoding matches the way we are currently doing it for blocks, and that gives me a slight preference. Also, I'm not personally a fan of lists that have mixed types, and while this list already does have some mixed typing, I really feel like these two should be enveloped together. That's just kind of my professional... okay.
E
And the opaque bytes, does that help us with transitioning to SSZ in the future? Is there anything that the heterogeneous sequence would make...?
D
I don't know. So, if this whole transaction list switches to SSZ hypothetically, then this piece would now just be SSZ by nature of that, whereas if this whole thing switches to SSZ, then this doesn't necessarily change to SSZ. But this format does allow us to have this piece be SSZ without the list being SSZ. So, depending on how we do this transition to SSZ over time, opaque bytes may or may not help.
B
One of the things that I like about the RLP heterogeneous sequence, though, is that, in my view, it is a similar decoding pattern to what is currently being used: you have some list, you have some array of bytes, you give it to a transaction decoder and you say "decode a transaction", and now your transaction decoder runs the exact logic that we've already defined in 2718 and says: okay, let me look at the first byte. Is it less than 0x7f? And if it is, it's a typed transaction.
B
So let's just go ahead and decode it, and then we'll return the rest of the bytes that weren't decoded as the transaction to the outer decoder, to continue looking for transactions. This is a little bit different, and will require some changes for opaque bytes, because you actually have to decode the byte array first and then you have to pass that decoding in.
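The "decode one item, hand the rest back to the outer decoder" pattern described here could be sketched as a function that splits one complete RLP item off the front of a buffer. This is a simplification (a real client streams and validates much more), but it shows the shape of the loop:

```python
def read_item(buf: bytes) -> tuple[bytes, bytes]:
    """Split one complete RLP item (header + payload) off the front of buf,
    returning (item, remainder) so an outer decoder can keep going."""
    first = buf[0]
    if first <= 0x7F:                      # single byte
        return buf[:1], buf[1:]
    if first <= 0xB7:                      # short byte string
        n = first - 0x80
        return buf[:1 + n], buf[1 + n:]
    if first <= 0xBF:                      # long byte string
        k = first - 0xB7
        n = int.from_bytes(buf[1:1 + k], "big")
        return buf[:1 + k + n], buf[1 + k + n:]
    if first <= 0xF7:                      # short list
        n = first - 0xC0
        return buf[:1 + n], buf[1 + n:]
    k = first - 0xF7                       # long list
    n = int.from_bytes(buf[1:1 + k], "big")
    return buf[:1 + k + n], buf[1 + k + n:]
```

An outer decoder would call this repeatedly, classify each returned item, and stop when the remainder is empty; the length checks that RLP headers carry are exactly the tooling being discussed.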
B
My main experience is using go-ethereum's decoder, and the way that typically works is that when you're trying to decode some RLP into a transaction, you kind of have to define this function which decodes what the RLP is. Because, I mean, this is RLP, obviously: this is a list that has some lists in it, it has some integers in it, and so it can be decoded into an arbitrary RLP sequence. But to put that into a structure that we can use in our program, you know, that's defined...
B
Because you need to make sure that the byte length specified for the opaque bytes is correct, and so you either have to read it all together in one pass and then go back and decode, or you have to keep track of how many bytes you've read and then compare that, to make sure that the initial prefix was correct. With the heterogeneous sequence, you just use all the tooling that's already in RLP to make sure, because RLP decoders are already checking, or should be checking, that the list prefix's length is correct.
D
I believe Geth has a stream decoder; it's not a batch decoder, or whatever the opposite of a stream decoder is. Yes.
E
Okay, yeah. So we use the stream decoder as well. Right, let's say we do the opaque bytes thing. Yes, when we call read-bytes it's going to look at the length of that and read all those bytes in, but we would have had to read them in anyway, and then, at that point, we switch on the first byte.
F
I see. Would you just check if this is a list or a byte array, and if it's a list, use the legacy decoder, and if it's...?
D
Yeah, so a list will start with a different prefix byte than a byte array.
E
Yeah, and also doing a second pass here over the bytes is not really a big issue when we're talking about sending these things over the network, right? It's a very small amount, relatively speaking, of the processing of this whole system of sending these things across to other clients. I'm not sure we should be optimizing for "oh, we have to read this again, we need to look back and read this."
E
RLP heterogeneous ends up being smaller, if you scroll down to what Matt wrote.
D
I believe people have talked about this in the past, and the consensus, if I remember correctly (someone please correct me if I'm wrong here), the consensus I vaguely remember hearing was that if we start caring about bytes-on-the-wire size for transactions, there are other ways that we can get much bigger wins.
B
Yeah, I don't think the overhead is that big a deal. To my mind, the biggest concern would be: is it desirable to have to read the transaction bytes into a buffer and then deserialize from there, or would we, you know, go through it in a different way, where we're decoding and keeping track of the length of the byte array at the same time?
B
So that means that we will have RLP list items, which will be the legacy transactions, and we will have these byte arrays, which will be the typed transactions. So those will be different types, but at least the number of elements will be equivalent to the number of actual transactions.
D
So I can go update all the various specs, probably tomorrow, since it's late here, with those conclusions. I don't know if you guys want to stick around to debate what lightclient brought up earlier or not. I can, if you guys want.
D
You're not happy, so I don't mind it. I think, for me, the frustration is that I keep thinking that we've come to consensus and we won't have to have breakout rooms, because I know these are expensive and a burden on everybody.
E
Yeah, we did go out to the typed-transactions channel. The problem, I think, is the medium: instant messaging causes you to take shortcuts in the representation of your ideas, and then, as soon as we move to GitHub, for example, Matt's like, "oh, here's this really clean type of thing," and we're all like, "oh yeah, we get it."
E
So I guess this is something that is kind of related: when I'm implementing this, I realize, oh, if we can deserialize the typed transactions, do we keep them around waiting for the fork block? I don't want to do that; I want to implement it like, okay, we're gonna...
E
Well, actually, I think the optimization would be to decode them and keep them around, and babysit them to make sure they can't be mined or whatever. But the easier thing to do, implementation-wise, is to just reject them until the fork block passes locally and you have now switched to Berlin. So that way, what happens is, yeah...
D
Like, if you switch the way you process at the fork block, and then it gets uncled and you go back to a pre-fork block, do you then go and drop all those things that you just decoded, or...?
F
The rollback problem? Oh, I don't think you would do that. But the problem, I think, would be if we enable it so that we start sending out and accepting transactions that are not actually eligible for inclusion in a block: then there will be lots of traffic and notifications and passing around of these transaction hashes.
F
But no, I mean, you can't open acceptance without having somewhere to actually dump the transactions. You have a source but no sink to put them in, and they will just take up memory.
D
You're saying, like, you've got your pending pool and you want your pending pool to be sorted, so that if you're a miner and you mine a block, you can just pull from that pending pool. But you're saying, oh, we can't actually put these in there because they're not valid yet, and we don't currently have a place to put not-valid-yet transactions in any client. Yeah.
E
So that's why I'm saying the way I was planning on doing it was just like: okay, until the fork block passes, or whatever our data structure for determining what consensus rules we're currently on says, that will also affect being able to accept transactions over the wire, like gossip transactions, and also RPC transactions.
D
Yeah, so something to keep in mind, just in the back of your head for this, is that I'm hopeful that one day we will get the ability to do timed transactions, which are transactions that are not valid until a certain time, and I'm curious if we would handle them the same way, where basically we say, yeah, they're not valid, don't send them to me until that time has passed, and basically it's up to clients to keep them in local pools or whatever until the timestamp is reached.
F
The sinks are either blocks, so once it's seen that this has been included in a block it can drop it, or the other thing is to throw it on the floor. And there are different levels: the pending pool has different guarantees for how many it will keep in memory than the queue. But eventually, when things go on with too many transactions, things get dropped on the floor, and things that will not be valid until some time in the future, such as the transactions you just mentioned, or a transaction which has been announced but where we're missing the previous one, yeah, will wind up in the queue and get dropped at some point.
F
Yeah, so someone would have had to mess with timestamps, probably. Also, we have synthetic test cases for this behavior, to see that go-ethereum behaves correctly, but I don't think it's something that would happen naturally.
D
Out of curiosity, do we have integration tests for those scenarios, to make sure all the clients behave the same way?
B
I've got a question on the networking side of this. If we go with Péter's option B, where we just backport some of these changes to older wire protocols, what's going to happen if I am connected with my peer on, you know, eth/64, and I send a typed transaction to them, and they haven't updated their client to support the correct handling of that? That is typically grounds for a disconnect, right?
D
I think this needs to be a consensus agreement, because if you do propagate that, then we can have a network split like I just described, but if we say that no client will accept any typed transaction prior to the Berlin block, then you're going to get disconnected. Even if we're out of sync on Berlin by a block or two with each other, that's fine, because if you haven't upgraded your client within two blocks of Berlin, you're going to get these things in two blocks anyway.
H
Cool. So, for today's meeting, thanks to lightclient, I have already added the decision items that we made today to the agenda itself. As for next steps, I believe Micah will be adding specs to the networking repo.
D
Yeah, I will update the appropriate EIPs with these decisions tomorrow. Hopefully.
H
Okay. And all the clients, Geth and basically OpenEthereum, those that participated in the meeting today, are in sync with the decision. We do not have an estimated date and duration for implementation from the clients yet, but they will be working on it.
H
Okay, so just from the agenda point of view, I'm going to add this recording as an unlisted version, so people who are actually involved in this implementation can give it a look, because this had quite a good conversation, and with the screen sharing I think it will add value there. And if we find any need of further future meetings, we can arrange them accordingly; in particular, I do not mind having breakout rooms frequently if it helps the decision making like the one we came across today. Real quick: I'm happy to arrange these calls.