From YouTube: Breakout room meeting #5
Description
B
A
C
Sense, but I feel like some of my concerns would be best voiced to the go-ethereum team, to hear what their thoughts are.
B
D
C
C
Okay, so clearly we want to talk about SSZ, but the first thing I wanted to ask was: what is the value that we get out of completely eradicating legacy transactions from the protocol? I know that we've kind of gone back and forth on this, and I think that having a typed transaction that would allow people implementing signers and wallets and those kinds of things to not have to worry about legacy transactions is valuable.
C
But in terms of the protocol level, saying we're going to have this type-zero transaction whose signing format is the same as the legacy transaction: I don't know what the real value of this is. From my point of view, implementing it in the go-ethereum code, it's just going to add a lot of complexity, and so I wondered if other people had any thoughts on this.
B
The impression I got, and I do share this, this is just my personal view, not as a client author. I have an imaginary client that I've written in my head, and it's perfect and whatnot, and I would share their views in my imaginary client, so I kind of understand where they're coming from. But I believe the impression I got from other client devs was that their concern is having a transaction in a block, and gossiped over devp2p, having them all be…
B
C
C
C
It's kind of like that, but it's a little bit more: the fact that we're changing the definition of an existing structure within the transaction. Adding a new transaction type has been fairly lightweight in general, because, you know, we can decode those RLP bytes into the typed transaction, and then within the state processing logic we can just say: okay, if we have this typed transaction and we are not past the Berlin hard fork…
C
…this is an invalid block. But it's a little bit more complicated if we are changing the legacy transaction to say: okay, now, even though these signatures are also valid for this transaction, because the signature format is malleable, if we want to say that type-zero transactions have to have this type-zero byte, otherwise they are no longer valid after the Berlin fork, then for the serialization we kind of have to have this awareness, or we have to have an additional type within the codebase.
C
We have to have a type 0 and a type 1 which decodes to type 0, which, if you're following, basically means: this is a Berlin legacy transaction with type zero, and all of these transactions are valid after the Berlin hard fork, while the other ones aren't. And so, to me, it's like, you know, we're jumping through a lot of hoops and I'm not sure that we're really saving a lot of effort there.
E
C
F
F
But what if we say that for any new transaction type the first byte must be less than 0x7f, but we also specify that 0xc5 and 0xc8, or whatever the two cases for the current ones' first byte are, those are like a transaction type too, and they can't be…
B
I believe the reason is, and if I'm misremembering, someone correct me here, that it's actually a pretty large range. It's not just c5 and ca; it's like c5 up to, you know, something big. It's not just a switch statement where you put two entries in; it's like, you know, you need 40 entries, kind of thing.
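The range B is describing can be sketched as follows. This is an illustrative Python sketch, not from the call; the exact hex values in the speech above are approximate, but the structural point is that legacy transactions are RLP lists, whose prefix byte spans the whole 0xc0..0xff range, which is why EIP-2718 reserves 0x00..0x7f for transaction types instead of trying to enumerate the legacy prefixes.

```python
def classify(payload: bytes) -> str:
    """Distinguish an EIP-2718 typed envelope from a legacy RLP tx.

    An RLP list prefix is any byte in 0xc0..0xff (not just two
    values), so types are restricted to 0x00..0x7f and the two
    ranges can never collide.
    """
    first = payload[0]
    if first <= 0x7F:
        return f"typed transaction, type {first}"
    if first >= 0xC0:
        return "legacy RLP transaction"
    raise ValueError("not a valid transaction payload")

assert classify(b"\x01\xf8\x00") == "typed transaction, type 1"
assert classify(b"\xf8\x6c\x01") == "legacy RLP transaction"
```

With the alternative F floats, every legacy list prefix would itself have to be treated as a reserved "type", which is the 40-entries-in-a-switch problem B mentions.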
B
F
D
I've got a clarifying question about the complexity of implementing this. Do you see that complexity as something that is temporary? I didn't fully understand, so pardon this question, but do you see that complexity as something that is temporary and would go away at some point, or something that just stays, because we always support the zero transaction type or something like that?
C
Yeah, I mean, I think that as long as we are building clients that can fully sync from genesis, this is a purely additive change. I don't think this is something that we can remove, and so, to me, that's kind of why we should continue to keep legacy transactions as they are: because they are completely outside the range of typed transactions.
B
It's the age-old problem of: we can never delete code in a client, because we always have to sync from genesis.
B
If we ever do something like regenesis, then that problem goes away, and part of me dreams of a day when we do regenesis and we can just delete massive swaths of dead, bit-rotted code. This is an opportunity to kind of add to that pile of bit-rotted code that, hopefully, I dream of one day deleting, but I recognize also that that dream may never come to fruition, in which case, as the client devs are indicating, this is just an additive change.
E
C
C
C
E
C
Yeah, so they can sign things and generate the wrapped legacy. But if it's the type-zero wrapped legacy as it's defined in the EIP-2972 spec, then they would have to have a signature format that's not the same as the signature format that's going to be defined, as I…
B
B
B
So it's slightly different, but I think they'll help facilitate this conversation.
D
Something that might help me understand, or at least wrap my head around this discussion and what we're trying to figure out, is understanding what our ideal end state is, the one we want to get to. Because that's the thing I'm at least currently lost on: there are just a lot of weeds here that it feels like we're getting lost in.
D
Good, so we've got one complexity piece, which is that we know we can't just drop the legacy one quickly, or even remotely quickly: there is some amount of hardware and software out there that just will not be able to upgrade quickly. And so, whatever we do, we have to have a situation in which, even when the new thing gets implemented, things that are submitting the old type of transaction can continue to do so for at least some decent amount of time.
D
D
B
Yeah, so they're signature-compatible. Right now there are two proposed types, because we actually already have two: we just bit-pack them in funny ways to make them look different, right? That's the pre-155 one and the post-155 one. So at this point we're saying: let's split those out and actually have two separate things, so type zero and type one is the current proposal.
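The "bit-packing in funny ways" B refers to is the EIP-155 convention, where the chain id and the signature's y-parity are folded into the legacy `v` field. A minimal sketch of unpacking it (illustrative, not from the call):

```python
def split_v(v: int):
    """Recover (y_parity, chain_id) from a legacy signature's v.

    Pre-EIP-155:  v = 27 + y_parity              (no chain id)
    Post-EIP-155: v = 35 + y_parity + 2*chain_id (replay protected)
    """
    if v in (27, 28):
        return v - 27, None          # pre-155: no replay protection
    if v >= 35:
        return (v - 35) % 2, (v - 35) // 2
    raise ValueError("invalid v value")

assert split_v(27) == (0, None)      # pre-155
assert split_v(38) == (1, 1)         # mainnet (chain_id = 1)
```

These are the "two transactions that already exist" which the proposal would surface as explicit type zero and type one instead of inferring from the value of `v`.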
D
Okay, so we've got these types that just reformat the legacy to look like the new format, with some kinds of compromises that we have to make in there, right? Like, the hashing scheme doesn't include the type, and we don't love that, but it's probably okay. Is the consensus there that, even though it's not ideal, it's probably okay?
B
Yes, and we can actually assert that it is okay, just due to the fact that we do not allow types that collide, because all our RLP lists start with a certain range. We've just…
A
B
D
So that's one desire, or, well, that's a requirement. The benefits of 2718, you know, are somewhat, I'm going to use the very loaded word, obvious here, just in that we like this idea of being able to introduce new transaction types without doing stupid stuff like counting how many fields are in the RLP list or whatever. And then, sorry, the kids are just dumping out toy bins downstairs and now I'm losing the thread. Other requirements or goals here, like…
D
Are we wanting to get rid of the legacy type long term and set ourselves up for that? Because that's the piece that having the new format seems to move us at least towards.
B
I think that there are two camps there. One camp would like to see legacy go away eventually. I think there's another camp that doesn't feel comfortable ever saying that a paper wallet generated a day after genesis will not work at some point in the future, or, sorry, a paper signature, where you just wrote down a signature. And so that's highly contested. I think there are people who would like that, but I do not think we have consensus on it.
D
That's fair, and that's something that we don't have to decide today. I do think that we are…
D
I don't think that we can bank on it exactly, but I think there is gaining momentum towards any solution that gets us to where clients can drop the old fork rules: whether that's regenesis, or whether that's just protocol upgrades that let us, you know, snap-sync to head, checkpoint a header somewhere along the way, and just ignore the fact that there's that much history sitting behind it. I'm not saying that we can bank on that.
D
But I do think that the appeal there is large, and we're likely to see that come to happen.
D
Yes, except I'm kind of trying to think about this in the responsible way. Like, I could do it, but what if geth started dropping all of the old blocks and everything, right? Because you still need the old fork rules to validate the old blocks, and the eth protocol doesn't function healthily if large portions of the clients drop the chain history. And part of eliminating all of those old history pieces is also forgetting about the history itself, because you can't validate it anymore.
D
So yes, you could do it today, but you guys wouldn't feel comfortable doing it today, right? Because then the eth protocol would not be healthy.
B
That does bring up a good point, though: if we introduce wrapped legacy and we say blocks after the fork block cannot contain unwrapped legacy transactions, then at least some of the clients could choose to make that decision, and they would not have to deal with anything other than typed transactions.
C
B
This is the situation where, if we say what was proposed in the meeting last time, that after the fork block all legacy transactions must be wrapped, then it will not be a valid block if it contains an unwrapped legacy transaction, that is, a legacy transaction in the legacy format. And in that scenario, some clients can choose to not even have those old pre-fork rules, and they can still validate new blocks.
B
G
B
Yes. I think the question that was brought up before you got here, Martin, just bringing you up to speed real quick, was: do we actually gain a technical-debt reduction by saying unwrapped legacy transactions are no longer allowed in a block? In the previous meeting, I think, people were saying there's a belief that we do, and someone questioned whether that's actually true: does the complexity of a client go down by disallowing unwrapped transactions in a block after the fork block?
G
I think it does, and I think Peter suggested this approach for reasons of, basically, cleanliness: when we do the derive-sha list, we don't have to have particular code paths that check, on each individual item, "is this this or that" and do different types of encoding or decoding in various places. So the reasoning is that it's better to just say: okay, now we do all the transactions like this: first the type, then the opaque blob or whatever.
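The shape of the two code paths G describes can be sketched roughly like this (illustrative only; geth's actual DeriveSha logic differs in detail, and the field names here are made up):

```python
def leaf_mixed(tx: dict) -> bytes:
    # "Two and a half types": every call site that touches transaction
    # bytes has to branch on legacy vs typed.
    if tx.get("type") is None:
        return tx["payload"]                    # raw legacy RLP list
    return bytes([tx["type"]]) + tx["payload"]  # EIP-2718 envelope

def leaf_uniform(tx: dict) -> bytes:
    # Everything wrapped: one code path, the type byte is always there.
    return bytes([tx["type"]]) + tx["payload"]

wrapped_legacy = {"type": 0x00, "payload": b"\xf8\x6c"}
assert leaf_uniform(wrapped_legacy) == leaf_mixed(wrapped_legacy)
```

Peter's suggestion, as relayed here, is that making every transaction look like `leaf_uniform` for consensus purposes removes the branch from every place that builds the transactions trie or encodes for gossip.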
G
Otherwise we would have this kind of ambiguity where we basically have two and a half types of transactions, or whatever it is, and at that point Peter suggested: why not, for consensus, just wrap them, and everything is typed?
D
So I think the core of what we're talking about right now comes from clients' experience implementing something, and not necessarily seeing that complexity go away.
C
Yeah, I was definitely making the assumption that we were going to continue using the encode-RLP function to just return the, you know, non-conformant transaction encoding, which would be the transaction type concatenated with the actual RLP bytes. And so I think that if we do not go with that, then it probably isn't that positive to have all typed transactions.
D
So I think that what we're looking at here is something that we're probably not going to resolve on this call, and please tell me I'm wrong if you think I'm wrong, but we're kind of into implementation now. I guess what we're looking at is conceptual implementation details across clients, about whether or not this reduces complexity at a high level.
D
It seems like there's a relative consensus that it should reduce complexity, or that we think it'll reduce complexity, or that this is the type of thing that reduces complexity. So I think, let's just maybe pay attention to whether anybody else feels this doesn't reduce the complexity, or something like that. But I feel like this is more of an architecture thing: each client may do it a little bit differently, but at a high level, not having to do that…
H
B
B
The question is: how do we format these new transaction types? We now have the 2718 mechanism for having different types of transactions, and they do not all need to have the same encoding, and they do not need to have the same structure; we can do whatever we want. We have kind of a greenfield space to work in. Given that, is now the right time to introduce SSZ, if ever there is a right time to introduce SSZ? I think there's…
D
C
D
So I think I can answer that, or I have an answer for that, which is that SSZ, in and of itself, is its own spec that this spec can build on. I guess the argument is that we could define this relatively simply without SSZ, so what's the value of adding SSZ? Is that maybe a restatement of the question?
D
So I think the idea here is that SSZ captures a bunch of stuff in one spec that we can then reuse, and so it's a sort of reusable specification tooling. It gives us, you know, serialization and hashing and encoding, the whole thing, without needing to define all of those things within the spec itself. All that we have to do is define a schema, if…
B
If I were to rephrase what you just said, would that basically be that you believe SSZ gives us value long term, that we should be using it in general in Ethereum clients, and that now is as good a time as any to introduce it?
C
B
Which
are
you
talking
about
because
if
the
2930,
my
proposal
for
that
one
and
whatever
the
other
one
was,
is
to
be
pure
ssc?
Okay,
sorry
for
29
30,
it
would
be
pure
ssc.
If,
if
I
had
my
my
way
with
no
care
in
the
world
for
what
anyone
else
thought,
I
would
probably
go
with
ssc
for
2930
and
for
29
72
or
27
92
whatever
it
is
that
one
I'm
much
more
on
the
fence
on
because
you're
right,
because
the
the
signature
part
we
can't
change
for
the
wrapped
legacy
transaction.
C
D
B
Would you say, Martin, if I were to rephrase it, that we're saying SSZ might be good, you're perhaps undecided, but not now? Yes?
G
G
D
G
G
G
Yeah, I mean, I don't have a plan for how we introduce SSZ into the codebase, but it just, yeah, strikes me as quirky to add it in this way. I tried to ping Peter and Felix about this; I'm curious to hear what their thoughts are about it, because those are my personal feelings right now.
D
So one of the ways that I like gauging people's agreement or disagreement is, essentially, how strongly they hold it. Earlier you said that your objection is not strong, and so I guess my curiosity here is: how strong is it? If Felix and Peter are in roughly the same boat, or if we just kind of ignore their existence for a moment as a thought experiment, and there's not any other strong objection, are you inclined for us to continue discussing and debating it, or are you okay putting it down? Sometimes it's good to understand where disagreement lies.
D
G
I don't think so. It feels to me like it would be better to have this discussion with either Peter or Felix on it, especially Felix, since he is, yeah, the most protocol-and-RLP-and-encoding person on the team, and it's, yeah, him who should be on this discussion and speak on behalf of geth, not me.
D
Oh, and also, you missed context at the beginning: we knew that we didn't have good representation of those that might be opposed to SSZ, so we're definitively not making a decision in their absence.
B
So part of my, again, unicorns-and-ponies futureland is that I do have a mild preference for SSZ, just the way it's laid out, the style. And my hope is that one day, somewhere in the distant future, we don't use RLP for any of the transactions; we eventually have everything as new typed transactions, and we also do regenesis, so we can delete all the dead, bit-rotted code from past forks, and we live in this magical world where SSZ is everywhere.
B
In order to get to that hypothetical world, or to have a chance of getting there, we do need to start moving things over to SSZ, particularly newly introduced things. If we want to live in a world of SSZ only, then new things need to be SSZ at some point; at some point, we need to start saying: okay, new things are going to be SSZ.
B
B
And so I can definitely appreciate that, if that's what people are worried about. Maybe: do people think that an SSZ-only future is remotely possible, or do we think that we're going to be with RLP, at least to some extent, in transactions and in blocks forever?
B
B
D
C
B
I believe the only place we have to have RLP is in supporting transactions that were signed at some point in history. I believe everything else is, in theory, malleable over time. If we ignore having to support all of history, if we imagine some sort of future regenesis, then I believe the only thing that's absolutely required is supporting legacy signatures, since that does seem like something that I do not think we're ever going to get consensus on dropping.
D
The remaining big places that RLP lives, I feel, are at the actual wire protocol level itself, both in devp2p and in discovery v5, the base protocol, as well as the headers themselves and the way that header hashes and block hashes and things are derived.
B
Yes. I believe from the last meeting, the AllCoreDevs call, there is consensus on 2972, and there is consensus on 2930 and 2929, and, because of those, on 2718. So I think all of those are going in. However, in order to actually implement 2972, we have to define it, and currently it's defined with SSZ, and there were some concerns expressed about that.
B
So either we need to update the spec, which, if we decide that's the right way to go, we definitely can change to be RLP. Changing the spec to be RLP is probably the easiest part of this; making the decision to change it to RLP is, I think, the hard part. So you are correct: this does not need to be a blocker.
B
E
Yeah, and it doesn't have to be, and that EIP can also specify, you know, where and how we want to introduce it. Do we want to do it in completely new structures, or are we okay with, you know, mixing it in to make Frankensteins or something? Obviously don't use that language, but yeah. I think that we don't want any individual EIP author to be in the position of: "I want to use SSZ, but this is just going to create this whole flame war on my EIP."
B
Right, so the one problem we've had when we tried that strategy in the past is that core devs are hesitant to sign off on an EIP, or implement an EIP, or commit to an EIP, before it has a use. We actually tried this with 2718, interestingly, where we said: hey, let's introduce a new transaction type mechanism, but we don't actually introduce any new typed transactions; we just adjust the structure. And there was pushback saying…
B
Well, I mean, sure, that sounds like a good idea, but until we have a transaction that actually uses it, we don't want to commit to it, we don't want to implement it, we don't want to, you know, discuss it, we don't really want to put our brainpower into it. And so I'd worry just a little bit that that might essentially repeat itself.
E
Well, maybe people didn't bring this up at the time, but I think bringing up EIP-1559 transactions would have been a great thing to do there. I don't know what the time dependency was, when they were created, but yeah: saying, look, we have these EIP-1559 transactions, and we're probably going to think of more.
E
You can look to the past for examples as much as you look to the future, and if it seems like the process is not very amenable to thinking about future cases that haven't been thought up yet, well, this is, you know, a past case that was blocked on SSZ being introduced. I think the past cases would work.
G
Yeah, actually, I got some opinions from Peter and Felix now, and they seem quite not opposed to SSZ, so, yeah, just wanted to make note of that.
G
So they would be quite okay with rolling it out, starting to use SSZ in the consensus objects, but not rolling it out in the network messages right now.
D
D
Are there other people on the call who have other objections, or hesitancies, or some other word that's less loaded, about introducing SSZ?
B
So there are some minor benefits; I tried to enumerate them and posted them somewhere, so feel free to look it up later. One of them is, because of the way SSZ is encoded…
B
B
In particular, this is useful for transaction validation: the transaction data is variable length, but the signature is fixed length, and so if you put the signature up front, then when you're SSZ-decoding, you can pluck out the signature and validate the transaction without actually having to have an SSZ decoder at all. I acknowledge this is minor; it's one of those kind of "that's nice" things. It's not huge.
B
Sorry, when I say validate, I mean just validate the signature: you could say this transaction's signature is valid before you actually do any decoding. You are correct: you cannot fully validate the transaction itself, just the signature validity. Now, interestingly, I think for a transaction the data field is the only one that's variable length; that's correct for just a legacy, normal transaction. And so if that one is the only variable-length field and it's at the end, then you actually can also validate the nonce…
B
…the account balances, all that stuff, and the only thing you can't do is actually execute the transaction, without an SSZ decoder. Again, a minor benefit; I'm not claiming otherwise.
D
SSZ tree hashing is very valuable, because it means that the hash represents a Merkle tree, which means that when you're transmitting something over the wire, specifically in something like a UDP-based network that has limited packet sizes, you have a hash that is natively good at splitting the structure up into arbitrarily small chunks with Merkle proofs, and that is very powerful.
D
Admittedly, transactions are typically small, but some of them are large, and being able to split them up across multiple payloads, provably, with the native hash that the protocol uses, is pretty valuable.
D
D
C
Yeah, I definitely think the advantage of SSZ is in the merkleization function, but, you know, I don't know; to me it just feels like we should look more at using it for what it's really good at, in things where we can really take advantage of it. Whereas with this transaction type, in my mind, if we say: you know, we'll do SSZ, but let's do it on something in the header field, and we can do this RLP transaction, then we can ship Berlin in the near future.
C
D
Sorry, I'd like you to back up that statement, that it's going to push it out, because I still don't see where the complexity or the time delay comes from.

I suspect we add at least a month or two to Berlin, just off the top of my head. It's going to take longer to test, it's going to increase complexity. You know, I don't have a feel from this discussion for how worth it that is, but I think, yeah, for sure there's a one-to-two-month delay for Berlin, which…
D
C
Yeah, agreed. If the team is okay with using the Prysmatic implementation, I think that will improve things, because I assume that implementation is significantly tested and fuzzed. But my feeling is that they would want to implement this natively, in which case they would have to test and fuzz it themselves.
E
Also, SSZ is, forgive me if I'm incorrect, not a widely used serialization format in many fields, right? It was kind of invented in our field, correct? Right, and so I don't know, off the top of my head, if there are any Java implementations, so we'd also have to do the work of rolling our own for that.
F
So I need to leave in about one minute, but I wanted to bring up one more question about the wrapped legacy transactions. I'm not fully convinced that the extra wrapping gives any major benefit, because you already have to make a distinction.
F
It's not like it's fully SSZ anyway, but I think what at the bare minimum should happen, if you agree on SSZ, is that the receipt format has to be mandated to be SSZ, and any new transaction format has to be mandated to be SSZ-defined.
F
But I'm just not entirely sure that this wrapped stuff is all that important, because, as Micah figured out last week, that offset, the 65 and the zeros, would change if you actually encode the entire content, including the transaction type, as SSZ. And, yeah, I think clients already have to have this special behavior anyway.
F
So I'm not sure if it's not just enough to say that, if the first byte is outside of the range 0 to 0x7f, then it's the old one. I'm just not fully convinced we need the new transaction wrapping, but I would definitely say that we should enforce that the receipt is SSZ and any new transaction type is SSZ.
B
A
B
C
The way that it's implemented now, in geth and, I assume, most clients, is that you ask for the RLP bytes of the first transaction, then you hash it, then you insert it into the tree, and you go through all the transactions like that. But if you have both typed transactions and legacy transactions, then when you ask for the RLP bytes of a typed transaction, technically speaking, it should not return the type concatenated at the front, because that's not RLP-conformant, and so you would have to have an additional branching path there.
C
B
C
So right now it's done at the same layer: if you ask for the RLP bytes of a typed transaction, it returns the type concatenated with the RLP bytes. But what we're discussing right now is whether that's appropriate behavior, because that's not conformant RLP. And if that ends up being the way that we go, it's going to add more complexity to the rest of the codebase: everywhere we have to interact with transaction bytes, we would have to fork on "is this a legacy transaction? If so, just output the RLP bytes."
E
B
So, if I can just summarize where I think we're at: it sounds like most people are in favor of wrapping, and only wrapping. So, as of fork block X, only wrapped legacy transactions will be allowed.
B
An unwrapped legacy transaction will not be allowed, because of that forking behavior we just discussed. It sounds like perhaps Axtec is not 100% convinced, but he has to go, unfortunately. It sounds like…
B
I'll take the no response as "that's true." So perhaps, if you have further concerns, Axtec, we can discuss them later, when you're available. And then, secondly, for SSZ versus RLP: it sounds like no one is against SSZ in eth1, at least not that I've heard, and most people are in favor of SSZ to some extent in eth1, and the real contention here is: do we introduce it with 2972, or do we introduce it with 2930, or do we introduce it with 1559?
B
J
I
D
A footnote to that: we currently have a legacy transaction, and we know that we have to support it for a long time. So if we introduce another transaction type that still holds on to this RLP stuff, and we do actually want to get rid of the RLP stuff, then we're adding something that we're going to end up having to support for a very, very long time that is on RLP instead of SSZ.
E
Yeah, and also another point, another clarification: I think that one of the ways some of us would like to see SSZ talked about in the future is, like, yeah, let's seriously consider bringing it in, but for something that it's really, really good for and that uses all of its properties nicely.
B
Right, yes, just some clarification. I think, hopefully, Piper, you can answer this.
B
D
I'm not exactly sure; hold on a second. So the benefit comes from: right now, the hash that we use to reference transactions makes it so that we can't very easily…
D
…take advantage of any kind of partial transmission of a transaction, because the hash is over the whole thing.
D
So the advantage shows up when we start using the hash tree root to reference things, instead of the keccak of the RLP bytes, and depending on where we introduce SSZ is, you know, where that advantage shows up. But it shows up nicely when we want to do things like transferring partial pieces of objects themselves. Like, right now we can provably transmit part of the transactions of a block, because we've got that…
D
You
know
that
tree,
but
in
effect
it
doesn't
actually
give
us
anything
because
rarely
because
we
can
already,
we
typically
already
know
the
hash
of
the
transaction
that
we
want.
And
thus
we
know
how
to
validate
it
when
we
get
it,
whereas
when
but,
but
we
can't
split
that
object
up
into
individual
pieces,
we
can't
break
it
up
into
small.
You
know
half
of
the
transaction
and
transmit
only
part
of
it,
because
we
don't
have
a
way
to
provably.
D
B
D
D
D
…in a single UDP packet, and thus they have to be transmitted in multiple packets, and today there's no way to do that provably. If all you have is the transaction hash, there's really no way to do that provably, unless you have a secondary hash that somebody has generated, whose merkleization is something other than just the keccak of the bytes.
B
D
Makes sense, because if you've got to split it up across 10 packets, then there are a lot of griefing vectors that show up: I can send you nine of them and then not respond with the 10th, or I can send you all 10 of them but corrupt one of them, so that by the time you get the tenth and you put them all together…
D
B
D
Sorry, I didn't hit the button. It's because the hashes that we use at the protocol level are the keccak of the bytes. Sorry, dropping my kid off at school. And so we use those hashes as the reference in protocols for saying: hey, please give me this transaction. But that hash, all it does is tell us about the total data.
B
D
B
Right now, as the EIP is written, we actually have, well, I think I'm doing this wrong, now that you've explained this: I just take the SSZ bytes and then take the keccak of them to get the transaction hash. It sounds like that is not correct, right? I need to take the SSZ bytes and then do something else with them to get…
D
B
D
I mean, it's sort of advantageous if you ever want to talk about partial pieces of the object without needing to include the entire thing, which happens a lot in, you know, networking-protocol-level type things, or needing to make proofs about the inclusion of a log in a receipt, or getting the actual log from the receipt, without transmitting the entire receipt itself.
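A minimal sketch of that kind of partial-object proof, with sha256 standing in for the SSZ hash function and a "receipt" modeled simply as a list of logs (the names here are illustrative, not the actual SSZ spec):

```python
import hashlib

def h(data: bytes) -> bytes:
    # sha256 as a stand-in for the SSZ hash function
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Root of a simple binary Merkle tree over the chunks."""
    layer = [h(c) for c in chunks]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # pad odd layers by duplicating the last node
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(chunks, index):
    """Sibling hashes needed to verify chunks[index] against the root."""
    layer = [h(c) for c in chunks]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        proof.append(layer[index ^ 1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof, root: bytes) -> bool:
    """Check one leaf (say, a single log) without the rest of the receipt."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# A toy receipt with four logs; only log 2 plus two sibling hashes are
# needed to convince a verifier who holds just the receipt root.
logs = [b"log-%d" % i for i in range(4)]
root = merkle_root(logs)
proof = merkle_proof(logs, 2)
assert verify(logs[2], 2, proof, root)
assert not verify(b"forged", 2, proof, root)
```

The contrast with keccak of the raw bytes is that a root like this lets a verifier check any child piece in isolation.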
C
Yeah, I mean, I'm definitely going to take note of that, and I'll just start with that, Piper: I'm not sure if the go-ethereum team is aware of that, since, you know, this is a lot more hashes that have to be calculated if we're merkleizing a transaction now, if the transaction has a lot of data.
B
D
And the magic part here is that if we actually take on eating this elephant, and we eventually update things like headers to be SSZ, and we update things like the transaction root and the uncle, or ommers, root and those things to be SSZ lists, then we update this to essentially be one big SSZ object.
D
Then there's sort of this synergy between all of those being SSZ objects, in that the header hash, the hash of the block, becomes a Merkle root from which you can make a proof about any kind of child piece that's hiding underneath it, potentially even reaching back into past blocks, because it uses the hash of the parent. And so you can do really nice things.
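That reach-back through parent hashes can be sketched as follows; the two-field "header" and its field names are purely illustrative, with sha256 again standing in for the SSZ hash function:

```python
import hashlib

def h(a: bytes, b: bytes = b"") -> bytes:
    # sha256 as a stand-in for the SSZ hash function
    return hashlib.sha256(a + b).digest()

# Toy header whose root commits to exactly two fields:
# the parent header's root and a body root.
def header_root(parent_root: bytes, body_root: bytes) -> bytes:
    return h(parent_root, body_root)

# A short chain of three headers.
body0 = h(b"body-0")
root0 = header_root(b"\x00" * 32, body0)   # genesis has a zero parent
body1 = h(b"body-1")
root1 = header_root(root0, body1)
body2 = h(b"body-2")
root2 = header_root(root1, body2)

# Holding only root2, a verifier can be convinced of body0, two blocks
# back, by checking a chain of openings supplied by a prover:
assert header_root(root1, body2) == root2          # opens the head
assert header_root(root0, body1) == root1          # steps to the parent
assert header_root(b"\x00" * 32, body0) == root0   # reaches the ancestor's body
```

Because each root commits to its parent's root, proofs compose across blocks without any trusted intermediary hash.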
D
You just get this really nice Merkle structure that you can make proofs about, being able to reach down into these individual objects. Like you said, reaching down into an individual log that's hidden down in a receipt, straight from the header hash itself, which is really powerful when you want to do things like transmitting.
B
Ironically, the big use I've seen for really, really heavy transactions is because we have to supply this giant proof-struct thing to, like, do storage proofs, for example: you have to prove the block and you have to prove the account and you have to prove the storage, because you can't pluck things out easily.
A
Sounds like we have made good progress today and tried to collect answers to the questions, the goals that we started with. So I have just one last question, maybe, because we could not get clarity on this part, like when to start using SSZ: do we want to keep the discussion going in an async way on Discord, or do people think that there is any use in having another breakout room before the AllCoreDevs meeting, like maybe on next Monday?
D
I mean, I think it's probably good to take what we learned here, which is that it seems like there is enough support and not a lot of strong opposition to using SSZ, and then to essentially pose that and say: does anyone disagree with this? Are there people whose opinions haven't been heard, and we'd like to hear them?
D
So we're sort of edging towards that, and to ask that open question to continue to suss out whether or not there are any hidden pockets of concern or opposition before potentially moving forward with it, as well as the same question about potential delays to the hard fork if we choose that, and figuring out whether or not those are real, whether or not the go-ethereum team is going to be okay using Prysmatic's implementation, and some of those details.
B
I think that is a concrete question that we can hopefully get answers to. I don't know if anyone wants to volunteer to ping each of the teams async and say: are you willing to use an eth2 client's SSZ implementation, and are you comfortable with that from a security standpoint, or will you feel the need to, like, do massive amounts of testing and maybe write your own? Because I think that's probably one of the bigger deciding factors.
C
D
I think I'm game for reaching out to the different client teams over Discord and asking that question. That seems like a good place, because we'll also have the various eth2 client teams present there, who can maybe answer questions about their implementation, if any of the teams, you know, have questions.
E
H
Yeah, I mean, we need to check, to be honest, for the last implementation, but yeah, the concrete current answer is it should be okay, but we need to check that. We could even implement it ourselves, because for SSZ you need to encode the transaction and you need to encode the receipt. Maybe we can do some custom solution just so that we don't block inclusion, let's see, but the best option for us is of course taking the one that's already made in the Rust client for eth2.
A
So I will have this recording uploaded on katarda's channel, for people who missed the meeting and would like to follow the conversation; I hope that will be helpful. And yeah, if I understand it correctly, the next step is that Piper will be reaching out to the client teams, and further discussion will take place in the Discord channel itself, correct? Cool. So, yeah, any last words before we close the meeting today?
H
Yeah, I think for SSZ it is better if we focus on the benefits, because if you just say we're adopting SSZ because it is better, it doesn't mean a lot. "Let's just include some fancy new tech" is problematic if that's the only thinking behind it.
H
D
Thank you for that. I think I've done a very poor job in the past of actually explaining any of the benefits, probably from being steeped in it a little bit too much.