From YouTube: Eth2.0 Implementers Call #13 [2019/2/28]
And we will get started. Okay, so we will start, as we did last week, with testing updates. I can get that started. I believe since the last call I shared a very simplified fork choice test format. It's very abstract, in the sense that it's not really talking about a lot of the data structures we have; it's more just building a block tree, saying what the recent votes are on that block tree, and saying what the head is.
The idea is: specify a state — the starting state — specify blocks to transition, and then specify a resulting state, or a resulting subset of the state, if we're just testing some particular portion of it. I know Yannick — you seem to have been cut off there. — Oh, can you hear me now? Do you hear me? — Yeah. — Okay, sorry about that.
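The test shape described here — a starting state, blocks to apply, and an expected (possibly partial) resulting state — can be sketched roughly as below. This is an illustrative sketch only: the field names (`pre`, `blocks`, `post`) and the toy transition function are assumptions, not the actual test format.

```python
# Hypothetical sketch of an abstract state test: a starting state, a list
# of blocks to transition through, and the expected resulting (subset of)
# state. Field names are illustrative, not from the spec.

def run_state_test(case, state_transition):
    state = case["pre"]                      # the starting state
    for block in case["blocks"]:             # blocks to transition through
        state = state_transition(state, block)
    # Only compare the fields the test specifies: "post" may be a subset
    # of the full state, for testing one particular portion of it.
    for field, expected in case["post"].items():
        assert state[field] == expected, f"{field}: {state[field]!r} != {expected!r}"

# Toy transition: each block just bumps the slot and records its root.
def toy_transition(state, block):
    return {**state, "slot": state["slot"] + 1, "latest_root": block["root"]}

case = {
    "pre": {"slot": 0, "latest_root": None},
    "blocks": [{"root": "0xaa"}, {"root": "0xbb"}],
    "post": {"slot": 2, "latest_root": "0xbb"},   # partial expected state
}
run_state_test(case, toy_transition)
```

The point of the partial `post` is that a test exercising, say, only the fork choice bookkeeping does not need to pin down every field of the state.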
Cool. Anyway, those are the state tests and the fork choice tests, and they are big for interoperability. So if you haven't taken a look at those, take a look and comment. Yannick, I know you had some comments, and we can talk about them now, or I can hit them offline and we can focus on that.
...that test into the eth2 reference tests repo — yeah? — Yep, okay, sorry, yeah. We can certainly do that. I can open the PR in the next couple of hours. — Right on, okay. Because the only thing I was worried about is that it is GPL, and — I'm not a lawyer or whatever — but I was worried about any kind of copyleft issues.
It's, like, particularly useful for the f-test repo, I think; it's a little bit more like internal sanity testing at this point. — I see. Okay, great. Any other testing updates? — Yeah, I've also discussed with Zach Cole from Whiteblock this morning.
They're currently open-sourcing part of their tests and simulation, and I will link to his repo where he has some kind of structure to launch the Dockers of various clients, and he has example configurations for Ethereum, EOS, Bitcoin.
Great. Now we're going to quickly run through client updates. How about PegaSys starts? — Alright, this is Steven Trevor from PegaSys. We've made quite a bit of progress over the last couple of weeks. We're in the process of upgrading to the 0.4 version of the spec. We have simulated blocks from a mock network adapter kind of running through the client, so we are running a closed loop.
In that loop we're executing the state transition and, at the moment, executing a simple fork choice rule, but we're ready to plug in LMD GHOST. Ben has been instrumental in helping us get BLS set up and running. Like you mentioned, we're still waiting on some integration tests for it, but we have BLS with Milagro working for the most part. A couple of our next steps: we're working with the other research team at PegaSys integrating Handel, which is their signature aggregation.
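For context on the fork choice rule mentioned here, a minimal sketch of LMD GHOST — illustrative only, not PegaSys's implementation, and the data shapes (plain dicts, unweighted votes) are simplifying assumptions rather than the spec's structures:

```python
# Minimal LMD GHOST sketch: starting from a justified root, repeatedly
# descend to the child whose subtree carries the most latest-message
# (vote) weight. Votes here are unweighted, one per validator.

def lmd_ghost_head(children, latest_votes, root):
    """children: block -> list of child blocks.
    latest_votes: validator -> block they last voted for (latest message)."""
    def subtree_weight(block):
        # Votes directly on this block plus votes anywhere in its subtree.
        w = sum(1 for target in latest_votes.values() if target == block)
        return w + sum(subtree_weight(c) for c in children.get(block, []))

    head = root
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head

# Tiny block tree:  A -> B, A -> C, C -> D
children = {"A": ["B", "C"], "C": ["D"]}
votes = {"v0": "B", "v1": "D", "v2": "D"}
assert lmd_ghost_head(children, votes, "A") == "D"  # C's subtree outweighs B
```

A real client would weight votes by effective balance and cache subtree weights rather than recomputing them recursively.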
This week, I think we are still working on syncing with the latest spec — most of us are also part of the research team, so we try to sync with the latest version instead of the previous snapshot releases. And on the network side, we are testing out the p2p network with our current Trinity technology stack, and also Kevin on our team is working on something with the Go side.
We've been making a lot of progress on getting our testnet up and running. We're pretty much complete with the 0.4 spec, so — small bug fixes — and just really trying to get a simple testnet out with, like, eight validators and the beacon node running, just to make sure that we can advance the chain indefinitely. I think we'd advise everyone to be super careful when it comes to committees — like, fetching committees optimistically. If you're going to do that at the start of an epoch, there are a lot of weird boundary conditions.
I
So
there
are
lot
of
parts
in
the
in
the
spec
that
specify
like
currently
Park
next
epoch
previous
epoch.
So
if
you're
like
it's
really
easy
to
run
into
really
scary
bugs
that
just
don't
like
that,
you
just
crash
at
specific,
be
parks.
So
that's
something
we've
been
running
into.
But
aside
from
that,
we've
been
just
optimizing
that
adding
more
caching
layers
to
items
in
the
repo
caddy
things
to
move.
We
finished
implementing
initial
sync
of
our
beacon
nodes
to
say
block
since
last,
finalize
ones
and
mic
state.
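The epoch-boundary bugs mentioned above often come down to the "previous epoch" computation near genesis. A tiny illustrative sketch (the constant is simplified, not the spec's actual value):

```python
# Illustrative epoch-boundary helpers. The classic trap: at epoch 0 there
# is no previous epoch, so a naive `current - 1` underflows (or, with
# unsigned integers, wraps around to a huge epoch number).
SLOTS_PER_EPOCH = 64  # simplified constant, not the spec's value

def slot_to_epoch(slot):
    return slot // SLOTS_PER_EPOCH

def previous_epoch(current_epoch):
    # Clamp at genesis instead of underflowing.
    return max(current_epoch - 1, 0)

assert slot_to_epoch(0) == 0
assert slot_to_epoch(63) == 0     # last slot of epoch 0
assert slot_to_epoch(64) == 1     # first slot of epoch 1
assert previous_epoch(0) == 0     # no underflow at genesis
assert previous_epoch(5) == 4
```

Committee assignments fetched optimistically at an epoch boundary need the same care: the lookup must agree on which of current/previous/next epoch it is keyed to.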
We were having some trouble with, like, assignment fetching and the committees changing at some point, so Danny helped us out a lot. But yeah, over the next few weeks we're just going to be prepping more for that — like some sort of release. — Yeah, great, and I was happy to work with you all to find some of those bugs yesterday; there's a couple of PRs, and there's another pending.
That one we found I'm going to go fix up — but thank you, very exciting.
Hi. So, similar to Prysmatic, a lot of good work towards a simple testnet. We also had a lot of epoch and slot issues that we debugged, and that we are still debugging. In terms of spec, like we said two weeks ago, we are targeting 0.3 instead of 0.4 to reduce the impact of changes, and we now have a full initial sync implementation for the beacon chain, and a block pool to deal with missing blocks to request from other nodes.
In terms of protocols, we have been working on our libp2p implementation — on issues that benefit both eth1 and eth2 — and discovery. Regarding libp2p, we also made progress in parallel on cryptography, because it relies on secp256k1 and Ed25519 curves for various handshakes, and we implemented all that.
So we will be able to use full libp2p in the future. And besides that, we have been working on the documentation generator, so that we can have proper documentation when it's time to release, so that the public can play with it. In terms of to-dos, we have BLS multi-verify in the pipeline, and for the testnet the main pieces are traces. Also, like I said in the agenda — shouting it out — I will be doing a presentation on Tuesday at EthCC on Ethereum tests and simulation.
So I will be focusing on testing, because everyone is still working towards the testnet. And for simulation, we still have a lot of projects we can use: what Whiteblock has been doing, what ConsenSys has been doing with Wittgenstein, what Leo has been presenting so far from the supercomputing side, and of course people can also play with the proof of concept that Vitalik did in the eth2.0 research repo. Oh, one thing that would be super helpful is if I could refer to all implementer projects.
We're pretty much done with BLS in regards to porting it over — we were using Herumi's library. We've had to make a couple of changes to it, just to make it work for us, and that's almost done. We have it on the architecture we're using, and we think we have a branch running with it as well. We finished up all the state transitions; we found a few bugs in epoch processing and we've been kind of working through those — and thanks for your help there.
Then we've been playing around with a general simulation scenario with just four validators, and found a number of bugs in the implementation and fixed them. We've also found a couple of issues in the spec — but our spec is not up to date, and those issues are already fixed in the latest version. So now one of the things we are working on at the moment is updating to the 0.4 spec version as well.
Importing blocks, because of the BLS verification, is — well, we figured out it's not super slow, but it is still not very fast. In a benchmark for a prototype testnet, without signature verification we can get, like, over a thousand blocks per second, but we were a little bit shocked that we can only do ten blocks per second with it. But I guess it would be fine for a real net.
That's because what we will have is not a purely local network, and we will have a lot of validators anyway — but this is something we spent some time looking into. And right now we're updating our spec to the newest version. So we've got a standalone implementation of the committee and shuffling, but we haven't integrated it into the actual state transition function yet.
Yeah, I think we will also start looking into those test vectors, which we haven't before. So that's it for us. — Okay, thank you. And I think, although we will get some speedups in signature verification over time, there is going to be some order-of-magnitude difference in being able to process blocks, because of the simple operation of checking signatures.
So I updated to the spec 0.4 version; currently working on a bit of refactoring, so we can clean up the implementation so it's no longer a one-to-one copy of the spec. And then still looking into networking — we found someone who already did libp2p headers in Swift, so I'm going to try and see if that works for us, so we can use that. Then we finally managed to get the BLS library compiling properly, but we're having a...
Hey, yeah — so, like everyone else, we've been doing some internal updates and optimizations. In particular, we've been caching committee info, which has got us to do epoch processing with 16,000 validators in about 1 millisecond with pre-built caches. We're trying to work towards building, you know, sub-1-second block imports. We've also been building the fork choice test framework that you were talking about at the start, Danny — so we have, like, a working version of that, which will be modified once the actual framework comes out.
Before we move to research: Mike from libp2p is here — I know a number of you met him at Devcon; he works with Protocol Labs. You want to do a quick intro, Mike? — Yeah, sure. My name is Mike Goelzer, I'm from Protocol Labs. I work on libp2p, along with Raúl, who's also on this call, and I'm just joining to get a better sort of...
I'm happy to start. Alright, so: I created a meta issue on GitHub called "miscellaneous spec changes tracker", basically to keep track of some of the remaining changes that we'd like to make for phase zero. Hopefully that will be the last such meta issue. So it's partly for visibility, but also just for keeping track of things.
I've started thinking a little bit more about moving towards an executable spec, as opposed to a spec with, like, human words. So I think executable specs will help kind of converge everyone and help with testing. One of the things I did yesterday was submit this 10 ETH bounty for an SSZ spec in Python and Go, and Diederik, aka protolambda, seems to be working on that, which is fantastic. And, you know, there's this kind of constraint on the line count, and so this kind of forces a simplification mindset.
One of the topics I want to bring up again is the sha-256 versus keccak256 design decision. Basically, to simplify this: on the pro side of things there's the interoperability, because many different blockchain projects are moving to sha-256; and then on the con side of things there are security concerns. So one of the things that I realized recently is that it's quite likely that, in the context of the standardization for BLS — you know, for the hash-to-G2 — we would need standardization on the hash function.
So in that sense, keccak256 is a bit of a dead end. On the security side of things, I think the main thing we want to make sure of is that there is no deal-breaker, and one of the pieces of good news here is that the hash function that we choose today doesn't necessarily have to last decades — it's possible that, you know, ten years is a sufficient kind of lifetime that we're looking to have.
So one of the things that we're looking to do is basically ask a Stanford cryptographer what he thinks about the security of sha-256. The two things that are known: one is this length extension attack, which seems mostly an annoyance, because in some cases you need to put mitigations in to remove that attack; and then the other thing is that there are these kind of mostly academic security reductions — so, like, small attacks on sha-256 which are in no way practical, but kind of not ideal — and not in keccak256.
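The "mostly an annoyance" characterization of length extension is because it only bites naive keyed constructions, and the standard mitigations are cheap. A small sketch of the mitigated shapes (the `secret`/`message` names are illustrative; this shows the mitigations, not the attack itself):

```python
# SHA-256 length extension matters when someone uses sha256(secret || msg)
# as a MAC: an attacker can extend msg without knowing secret. The usual
# mitigations are double hashing (as Bitcoin does) or HMAC.
import hashlib, hmac

secret = b"k" * 32                      # illustrative key
message = b"pay 5 ETH to alice"         # illustrative message

# Vulnerable shape (do not use as a MAC):
naive = hashlib.sha256(secret + message).hexdigest()

# Mitigation 1: hash the hash. The outer hash hides the inner chaining
# state, so the attacker has nothing to extend from.
double = hashlib.sha256(hashlib.sha256(secret + message).digest()).hexdigest()

# Mitigation 2: HMAC, the standard keyed construction.
mac = hmac.new(secret, message, hashlib.sha256).hexdigest()

assert len(naive) == len(double) == len(mac) == 64   # all 32-byte digests
assert naive != double and double != mac
```

Sponge-based functions like keccak256 don't expose their full internal state in the output, which is why they don't have this particular annoyance.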
...clients to evaluate the fork choice rule — so, you know, I might need a little bit more time on my client's side. — Great. So on the sha-256 one, we just wanted to give you an update on where we're at. I know some of you weighed in a little bit on that, but we want to make sure to get kind of a wide set of opinions on that, or at least technical perspectives.
And I guess it might have, you know, one or two more years' additional lifespan over sha-256, because it was invented a bit later. So one major kind of desideratum for hash functions is, unfortunately, easy executability inside of the existing eth 1.0 chain, because the deposit contract needs to be able to generate the Merkle branches. So we haven't thought of, like, SHA-3, the standardized version.
...hash function, then Blake might even be the best one, because I know someone already wrote an EIP, and I think might have even contributed some code toward it. So basically, I mean — just, you know, adding primitives like that doesn't decrease the security of the EVM at all, and it's not a contentious issue or anything like that. It's just, you...
...know, if any particular hash function is needed, it could be added. I do agree, though, that it might actually take some time, because of, you know, how slow the whole fork process is and everything. Right. And one thing to highlight is that sha-256 — beyond just a couple of blockchains — is really becoming a standard that people are beginning to use. So one of the arguments is this interoperability thing, whereas SHA-3, even...
We'd still — even if we wanted to support multiple hashes, we'd still need to decide upon the hash that will be used inside the deposit contract, and that is, in effect, forever, right? — But it doesn't have to be the same as everything else. The deposit contract could just use sha-256, which the EVM has as it is, and we can just use something else for everything else we do, and as long as we...
Okay — do we know what hash Cosmos uses? Because, like, I guess the other consideration here is that, for any chain that's using, like, any of these...
Even if we decide to go with, like, a self-describing hash — to go multihash — we still need to decide on some subset of hashes, right? — Yeah. Regarding which, I think it was supposed to be a way to ensure upgradeability: like, we say, okay, we use multihash, but we only use sha-256 or SHA-3 or Blake. But if, like, five years from now we want to upgrade the hash, for whatever reason, it's easier when...
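For readers unfamiliar with the multihash format being debated: it prefixes every digest with a code identifying the hash function plus the digest length, which is what makes hashes self-describing and upgradeable. A simplified sketch (the codes follow the public multihash table, e.g. 0x12 = sha2-256; the helper itself is illustrative):

```python
# Simplified multihash-style encoding: <function code><digest length><digest>.
# Readers can tell which hash function produced a value without out-of-band
# context -- the upgradeability property discussed above.
import hashlib

CODES = {"sha2-256": (0x12, hashlib.sha256)}   # 0x12 per the multihash table

def multihash(name, data):
    code, fn = CODES[name]
    digest = fn(data).digest()
    return bytes([code, len(digest)]) + digest

mh = multihash("sha2-256", b"hello")
assert mh[0] == 0x12                            # function code
assert mh[1] == 32                              # digest length
assert mh[2:] == hashlib.sha256(b"hello").digest()
```

The cost, raised later in the call, is that every stored root grows by the prefix bytes and loses the neat power-of-two sizing.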
...we are already using multihash, because it takes care of versioning and other things like that — which is also an issue in SSZ, for example. — I don't think it would be too hard, because everything that is signed — like, for example, right, there is already this signature domain, which encodes the fork IDs. So there's already...
Well, I guess the other issue is that it doesn't really make sense — like, the place where multihash makes sense is when you want to be able to support objects that other people create with different hash functions. But here we're talking about blocks, and if you have to kind of...
What that basically means is that you actually have to keep all four state roots calculated for every block — which increases your work by a factor of four — so that you can verify any one of them. — But it solves... well, it could go some way to solving the interoperability with Polkadot, for example. — Well, no — like, I feel like we've been using the word interoperability without really anyone thinking about...
...you know, where the interoperability issue is. Because I don't feel like the interoperability issue exists because we don't have, like, IPFS-compatible prefixes somewhere. So, for example, the interoperability between eth2 and eth1: the only issue there is that eth1 currently only has efficient support for sha-3 and sha-256 — or keccak256 and sha-256, right. — Well, yeah.
What's the resistance to using the multihash format? Like, I get that there's another workaround; it just seems that it's just a little bit more standard. — Basically, one of the really nice aesthetic things that we have in the current spec is that pretty much everything is a power of two, and once...
...you know, to support arbitrarily many hash functions and be able to, you know, tell them apart — right — the way to be interoperable with Ethereum is to just support, and know, that there's a hash used on Ethereum, and use that, rather than having to, like, arbitrarily be able to read it on the fly. — Yeah, I think having a single hash is...
...simpler. It's just that we don't want to paint ourselves into a corner in the future. So even if we don't use multihash, let's just make sure that if we need to change the hash function in the future — for STARKs, for example — we are able to do so without re-engineering everything. — Yeah, I think we are, and I'll describe again, like, how that would happen, right.
So — first of all, if we have a hard fork to change the hash function from function one to function two — let's say, for example, from, like, md5 to sha-1, just to use totally stupid examples — then up until block 1 million everyone is, like, happily chugging along and using md5 for their hash, and then after — let's say 1 million is the fork block — at block 1 million there would be a state transition added... well, not even a state transition.
It would just say: oh, from now on we're checking the state roots, and we're checking block roots and everything, using sha-1 instead of using md5. And so clients would need to kind of very quickly burn through basically rehashing the entire state — which they could probably do in something like half a minute — and that would be kind of temporarily disruptive, but it's only one time, and then from there on any blocks after the transition would just be hashed with sha-1.
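The fork-based switch described here can be sketched as a simple slot-conditional hash selection. The md5/sha-1 pair mirrors the speaker's deliberately "stupid" example; a real switch would of course use serious candidates, and the fork height is an illustrative constant:

```python
# Sketch of a hard-fork hash switch: objects from before the fork height
# keep the old function; everything at or after it uses the new one.
import hashlib

FORK_SLOT = 1_000_000   # illustrative fork height

def hash_for_slot(slot, data):
    if slot < FORK_SLOT:
        return hashlib.md5(data).digest()    # old function ("stupid" example)
    return hashlib.sha1(data).digest()       # new function after the fork

pre = hash_for_slot(999_999, b"block")
post = hash_for_slot(1_000_000, b"block")
assert len(pre) == 16                        # md5 digest length
assert len(post) == 20                       # sha-1 digest length
assert hash_for_slot(999_999, b"block") == pre   # deterministic
```

The one-time cost is rehashing the existing state under the new function at the fork boundary; after that, the slot alone tells every client which function applies.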
The one corner case that you'd have to cover is: if an attestation got created at a slot before block 1 million and it got included after 1 million, then basically, even though it gets included in the kind of changed-over era, it would still be using the md5 hash function. But there already is infrastructure for handling that case, which is basically — anything which is BLS-signed has this domain attached, and...
Okay, to recap before we move on: we're going to get kind of a security briefing on sha-256. There seems like there's a little bit of a motivation to try to get Blake into eth1 — I don't think that getting Blake into eth1 within this calendar year is... I mean, maybe there will be another fork at the end of this calendar year.
Let's move on — more research updates; we're still in research. Vitalik, do you have anything you want to update us on? — Yeah, actually. So, things I've been working on: first of all, there is phase one. I wrote up an updated PR for the proof of custody game, which is number 682, and the main thing there is that the game is basically the same as it was before, except with a couple of fixes, but I managed to move the state objects kind of outside of the validator record.
So with this one, you can, like, click on phase one directly from the readme in the Ethereum 2.0 specification section. The one PR that hasn't — I mean, 682 hasn't really been merged in yet, but honestly it probably should be, given that it's been reviewed with a bit of feedback, to be, like, as mature as the rest of the document. But I think in general — just to go through it a bit:
There are two major sections to it. The first section is shard chains and crosslink data, which basically describes how shard chains work, how persistent committees work, who can create shard blocks, what the shard state transition looks like, and also what the data is that should go into this shard crosslink data root field, which we're currently leaving empty during phase...
...one — sorry, during phase zero — and, we also talked about this before, the fork choice rule for shard blocks. And then the big section two is updates to the beacon chain, and that basically adds the proof of custody game: both the one-round interactive branch challenges and the multi-round interactive proof of custody challenges. So, in terms of what still could be done for phase one, I opened up a checklist, which is issue number 686.
So there's one, two, three, four... seven items there. The first is basically: are there kind of substantial simplifications to the proof of custody game that we can do? It might be possible — I've tried for a week and haven't really found any yet. Number two is grammar fixes and edits. Number three is naming changes. Economic review is number four, which is basically just verifying that the rewards and penalties kind of make sense and do what they should be doing.
Number five and six are basically optimizations that allow us to reuse Merkle branches — reuse a large part of Merkle branches — and reuse attestation data to kind of challenge for much larger pieces of data at the same time, with relatively low efficiency losses. So, like, on net it's a potentially huge efficiency gain. And then number seven is another one of those kind of very separated-out, farmable research problems.
It kind of feels very similar to the "design a shuffling function" problem, except this time it's "design a mixing function", where the goal is to be friendly to multi-party computation. So those are kind of the phase one things; people are encouraged to read, encouraged to comment, and so forth. For phase two, I wrote a document on that.
So basically issue number 702, which is kind of much more preliminary, exploratory — which is things that we even need to just decide on for phase two — and I have a bunch of items there. Another thing that I mentioned is eWASM, and I started a brief discussion on how it would somewhat fit in, and maybe what the eWASM interface would look like.
So that's, like, a discussion that's worth starting to participate in, I guess. Beyond phase two, I also brought up a document where I made my proposal for what CBC might look like — much more concrete — so that's number 701, if people want to look through that, they can feel free to. There are also smaller improvements that I figured out. So one of them is — where is it now — number 685, which changes how validator balances are stored. It's there to provide two benefits.
...code, and in terms of, like, English bullet points — and it seems to have worked well, and I think we can make some PRs to turn the phase zero spec back into something closer to that. So we kind of keep it exactly the same in terms of meaning, but rewrite it.
Great — so the phase one stuff is beginning to enter into that kind of iterative design process like we had in phase zero. So if you want to get your hands dirty and take a look at it, and even write some code, know that although the core functionality is there, it's going to be continually kind of reworked and ironed out — it's, let's say, not fully baked yet.
Before we move on, I had a question for Justin: are there any, like, GitHub issues about the fork choice?
...further by a factor of two. But, you know, otherwise — that's something that I wonder: whether this, like, shuffle code should maybe equally live somewhere, as it's somewhere else, in an appendix. — Yeah, it would be handy to see the optimized stuff, because — like, I know that the spec is not supposed to be optimized, and mostly you can figure it out, it's just normal programming stuff — but that shuffling algorithm is pretty intense. So if someone's already worked through it, it would be pretty cool to share. — Yeah, yeah.
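For context on why the shuffling algorithm repays optimization: the swap-or-not construction the spec uses lets you compute where a single index lands without materializing the whole permutation. A simplified illustrative sketch of the idea (not the spec's exact hashing, round count, or parameters):

```python
# Simplified swap-or-not shuffle sketch. Each round picks a pivot and may
# swap an index with its "mirror" around the pivot; the swap decision bit
# depends on max(index, mirror), so both ends of a pair see the same bit,
# which makes each round an involution -- hence the whole thing a permutation.
import hashlib

def _h(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest()[:8], "little")

def permute_index(index, size, seed, rounds=10):
    for r in range(rounds):
        rb = bytes([r])
        pivot = _h(seed, rb) % size
        flip = (pivot + size - index) % size          # mirror of index
        position = max(index, flip)
        if _h(seed, rb, position.to_bytes(4, "little")) % 2:
            index = flip                              # this round swaps
    return index

size, seed = 100, b"round-seed"
perm = [permute_index(i, size, seed) for i in range(size)]
assert sorted(perm) == list(range(size))   # it really is a permutation
```

The single-index form is what makes "which committee am I in?" answerable without shuffling the entire validator list, which is the optimization clients care about.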
There are obvious optimizations here to do, which are all centered around splitting up — because apparently the spec is written with this beacon state as this big monolith. But one optimization that we will be doing is to split up that beacon state into more independent operations that maybe can be done in parallel, or whatever — but at least they can be applied partially, and that would help a lot with these rewinds. Alright.
Like, just having log-n copies of the state is one way to do it; but then, if you're using some journaling trick or whatever, then as long as you can bound memory consumption, it'll be fine. As you do that exercise of splitting things up and seeing what can be parallelized, please report back your findings.
...the specification requires keeping a key pair associated with each node. The network default key pair is an RSA key pair; I believe it's possible to change it, but there does need to be some concept of a key pair — a network-identifying key pair associated with a particular node on a libp2p network — because all of the peer book, and basically the identification logic in the libp2p library and the spec, rely on that key pair. So we can reuse key pairs; it just has to be some pair from some encryption algorithm.
Understood. No, I just think — I'm a bit unsure, in general, with this specification, at which level it is actually sitting. I mean, is this supposed to be, like, a complete specification of the entire protocol, including, you know, basically which bytes go where on the wire, all the way up to, you know, which message types include which consensus objects?
I added a little bit of additional context in there about libp2p, because I'm not sure what the sort of collective familiarity is with libp2p. Yeah, it extends a little bit — so the fundamental goal of the spec is to provide: these are the bytes that go over the wire; these are, well, the equivalent of endpoints that those messages go to; as well as, like, how they are transported, compressed, etc. — with an additional bit of info about libp2p on top.
The other thing that could be useful would be to more clearly separate — I mean, there's already a separation between the protocol that is spoken between two individual nodes, that is probably going to be used for syncing, and the broadcast. I feel like maybe it would be kind of nice to even make these, like, totally separate specifications: so have the sort of "what goes on the broadcast channel" be one spec, and then have, like, the actual sync protocol as an independent specification.
Sorry, this is Raúl. I just want to make a point there, because this spec was brought to my attention just a few minutes ago, before joining the call, so I wasn't able to weigh in. But there are some misunderstandings as to how messages are carried — particularly the fact that each message is not prefixed with the protocol itself. You negotiate the protocol only once, when you open each stream, and that stream is then tagged with that protocol.
All the messages are going to be carried across libp2p's multistream — likely the format is going to be a bit more complex than what it has there; there's an upfront negotiation of that. And I think maybe this shouldn't actually be included in the spec, because the thing that we need to agree on, maybe at this moment, is more like...
...a high-level message structure; and then exactly how these things are going to be multiplexed on a single connection, and how the whole negotiation and versioning and everything works — I guess that can be left up to a later specification, or to the libp2p multistream spec, I think. — Yeah, I agree with you, Felix. I think ideally you'd be able to reference a well-formed, well-written, well-backed reference in the new libp2p docs, which we're working on; so hopefully we'll be there soon. Okay, yeah.
Frank had a question regarding ENRs: if nodes are discovered using ENRs, the information is already available, which means the client would be identified by an ENR. If libp2p multiaddrs are used, there would be two vehicles; the addresses used are an implementation detail and not a protocol requirement, is the question. So I guess the question is where ENRs versus multiaddrs come into play. — Oh yeah, I can help with this. So Frank isn't here — I think he's listening on YouTube, but he's not in this call.
I hope this isn't the end of the meeting, because Frank and I had some more things to say about this stuff. The ongoing debate also seems to be about the serialization formats — and this is actually a pretty big chunk of the debate on the issue: it has been about the serialization format. So it would be kind of nice if we could, you know, settle this once and for all. — Serialization is next up. Any other comments on the wire protocol from anyone?
So yeah — I feel like, you know, from my point of view, it looks like the sane option seems to be to just say that, you know, consensus objects should be encoded using SSZ basically everywhere, and then, you know, how the actual messages are encoded is a bit up to anyone's opinion, I guess. So it would be kind of nice to hear people's opinions.
...it's a free-form type of thing, just like JSON, so it means you can just, you know, put a message there, and then, sort of — you don't actually have to know what type of object is contained in a message in order to decode it. And with SSZ that's a bit different, because it's basically just, you know, a collection of bytes that doesn't necessarily have any sort of self-describing structure — at least as far as I know. — You understand right; it does not have a self-describing structure.
One thing, especially regarding the SSZ spec, that — I don't remember who — put forward, is that with SSZ, with a little work, you can have a specific byte offset, so you don't have to read and decode the whole thing to get a specific field — and that was one thing that is really missing from RLP.
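The fixed-offset property being described can be sketched as follows: in an SSZ-style container of fixed-size fields, every field's byte offset is computable from the schema alone, so a single field can be read without decoding the rest. The container below is illustrative, not an actual eth2 type:

```python
# In a fixed-size SSZ-style container, field offsets follow from the schema
# alone, so one field can be read straight out of the buffer.
import struct

# (field name, struct format) -- all fixed-size, little-endian; illustrative.
SCHEMA = [("slot", "<Q"), ("proposer_index", "<I"), ("graffiti", "32s")]

def field_offset(schema, name):
    off = 0
    for fname, fmt in schema:
        if fname == name:
            return off, fmt
        off += struct.calcsize(fmt)
    raise KeyError(name)

def encode(values):
    return b"".join(struct.pack(fmt, values[name]) for name, fmt in SCHEMA)

def read_field(buf, name):
    off, fmt = field_offset(SCHEMA, name)
    return struct.unpack_from(fmt, buf, off)[0]

buf = encode({"slot": 12345, "proposer_index": 7, "graffiti": b"\x00" * 32})
assert read_field(buf, "proposer_index") == 7     # no full decode needed
assert field_offset(SCHEMA, "graffiti")[0] == 12  # 8 + 4 bytes before it
```

With a schemaless prefix-encoded format like RLP, finding a field requires walking the preceding items, which is the contrast being drawn here.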
...a network analyzer, yeah. So, to that last comment: a Wireshark dissector has been, you know, a desired feature for quite some time, and it is definitely something that I would like to see. But currently the libp2p core team doesn't really have the capacity to work on Wireshark dissectors at the moment, so I'll be happy to engage with anybody who needs some guidance to work on this and to maybe get a head start. I have implemented Wireshark dissectors in the past, so definitely happy to talk about this.
And there are some ideas — there are some things that need to happen in libp2p before we can facilitate creating dissectors that can tap into the messages and decrypt them, because, as you know, libp2p encrypts data by default. So there is certain instrumentation that we need to provide — we need to expose secrets to things like Wireshark dissectors and other tools in the future. So, happy to talk about that offline.
...you can see the high-level structure of the message without necessarily knowing which field is what, and this is something that is, to my knowledge, impossible with SSZ. But that doesn't mean that I'm, like, massively in favor of using RLP for this, to be honest — maybe that could be kind of overkill.
S
Funny, let's say, because in July, when we were discussing the serialization format, the main critique of RLP was that you didn't have any kind of schema, and we couldn't, like, know the offset without reading and decoding the whole thing. So I guess it's either a plus or a minus, and we need to know what stance we take on that. So the offset thing is kind of useful for, I don't know, like, a bunch of attestations as an input, and then being able to quickly find a specific one without necessarily decoding the whole thing. And then, you know, for this type of thing, having byte offsets is very useful. But for p2p messages, usually you'd have to decode the whole thing anyways, because you're likely gonna deal with every part of it, and being able to quickly access a tiny portion of the p2p message isn't something that has come up a lot in the past.
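The attestation-lookup case can be sketched like this (the element size is hypothetical, not the real attestation encoding): in a list of fixed-size elements, element i lives at offset i * size, so one item is readable without touching the others.

```python
ELEM_SIZE = 12  # hypothetical fixed size of one encoded attestation

def nth_element(encoded: bytes, i: int) -> bytes:
    # Jump straight to element i; no decoding of elements 0..i-1.
    start = i * ELEM_SIZE
    return encoded[start:start + ELEM_SIZE]

# Four dummy elements, each filled with its own index byte:
data = b"".join(bytes([i]) * ELEM_SIZE for i in range(4))
assert nth_element(data, 2) == bytes([2]) * ELEM_SIZE
```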
U
I would counter that on embedded devices, and on any kind of resource-restricted devices, it's actually pretty common that you want access to just a subset of the message, in order to gain access to some limited sort of functionality, maybe not the full thing, but to be able to run a specific part of, or use a specific part of, the information. On limited devices it's actually a big asset to not have to go through too much decoding in order to get there.
U
S
But I'll just add a trivial example here, like memory usage, right? If you have to decode stuff, you usually have to put it in a separate memory location, or you have to build up a complex structure that points at where the data is before you can use it. So you quickly just grow the amount of stuff you need to know before you can use the message.
S
S
In the past, in the context of transaction signing on a hardware wallet, there was an implementation by Nick Johnson for sort of an RLP processor that could essentially grab useful bits out of a transaction while receiving it. But in order to do that, you actually need to be able to decode in a streaming fashion, so we have a streaming implementation of RLP available. So that's, like, there for us.
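That streaming idea can be sketched as follows (a toy generator over the short-string RLP form, not Nick Johnson's actual processor): items are yielded as soon as their bytes have arrived, so at most one partial item ever sits in the buffer.

```python
def stream_items(chunks):
    """Yield complete short-string RLP items (payload < 56 bytes) from a
    stream of byte chunks, buffering at most one partial item at a time."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        while buf:
            prefix = buf[0]
            if prefix < 0x80:              # single byte is its own item
                yield buf[:1]
                buf = buf[1:]
            elif prefix <= 0xb7:           # short string: 0x80 + length
                need = 1 + (prefix - 0x80)
                if len(buf) < need:
                    break                  # wait for more bytes
                yield buf[1:need]
                buf = buf[need:]
            else:
                raise NotImplementedError("lists/long forms omitted")

# Items come out as their bytes arrive, across arbitrary chunk boundaries:
chunks = [b"\x83c", b"at\x83", b"dog"]
assert list(stream_items(chunks)) == [b"cat", b"dog"]
```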
U
It's possible to do this kind of thing, but in general, I mean, for most implementations, what they're likely gonna end up doing is just checking how long the message is, then receiving the whole thing into one buffer, and then working off of this buffer. If you're working in a very resource-constrained environment, where you don't actually have enough memory to even fit one single network message into memory all at once, I guess, yeah, you'll have to come up with a workaround.
S
It seems to be the core issue around these two, the serialization formats, because we keep going back to it. I don't think we're gonna decide today. With respect to the discovery protocol v5, I know that RLP is currently baked into it. Is that decision, I mean, is that some sort of technical requirement in there, or does it not really care what serialization it's using? So it is actually kind of a, I mean, I'm not sure, are we gonna talk about this v5 in general?
S
A
S
The ENR aspect, we've also used RLP there, and it's actually required. Well, it's not, well, it doesn't have to be RLP specifically, but the way this works is, because, you know, it's a collection of key-value pairs, you kind of need something that can represent that. But I guess you could even build that with SSZ if you, if you absolutely want to, right, because you really need just, like, a variable-length list. So, okay, thank you.
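The key-value point can be sketched independently of the codec (this is illustrative, not the actual ENR encoding rules): a record is just a flat sequence of alternating key/value byte strings with sorted keys, which any variable-length-list serializer could carry.

```python
def encode_pairs(pairs: dict) -> list:
    """Flatten a key/value record into an alternating, key-sorted list,
    the shape ENR uses inside its outer list."""
    out = []
    for key in sorted(pairs):              # keys are kept sorted
        out.append(key.encode())
        out.append(pairs[key])
    return out

def decode_pairs(items: list) -> dict:
    it = iter(items)
    return {k.decode(): v for k, v in zip(it, it)}

record = encode_pairs({"ip": b"\x7f\x00\x00\x01", "id": b"v4"})
assert record[0] == b"id"                  # keys come out sorted
assert decode_pairs(record) == {"id": b"v4", "ip": b"\x7f\x00\x00\x01"}
```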
S
Let's table serialization for the rest of the call. We're actually at an hour and a half, but if you could give us an update on v5 and status and things moving there, that'd be great before we close the call.

Okay, yeah, so what we've been doing is, basically, we have now published a sort of draft of the specification over in the devp2p repository. We're still working on it. Just now I am rewriting the requirements document, and I will add the topic discovery specification, as well as, you know, the spec in there, probably next week. And then at that point we'll have pretty much a complete specification, but it's gonna have a few rough edges at the moment. I'm also working on implementing the whole thing, so I'm, you know, I don't know, maybe like 40% done or something. So I do have the basic structure in place, but I don't have any sort of code to share yet. So that's that. And yes, specifically about that repo.
S
S
S
You know, I don't know, it's kind of hard for me to understand why people are so confused about it, but for us devp2p has just always been the name for, like, whatever p2p system Ethereum is using at the time, and I will just continue seeing it that way. So this is also why we've, you know, just started specifying the new stuff in that repo, because for us this is, like, the repo for Ethereum p2p.
A
S
If another place of discussion is needed, or another place for specs is needed, then we'd be happy to move our stuff there. I guess before, it has just kind of been a bit weird, because we feel like there is this repo and, you know, basically the main purpose of this repo is to discuss and specify Ethereum p2p things, so I guess that's
A
why we are asking to maybe consolidate everything under devp2p, because it's a place for that. Yeah, that's fine with me. We can maybe just make a tag related to... oh yeah, there's also now, I think there's now a feature on GitHub that allows transferring issues. So if you do want to transfer any issues there, or if anyone needs write access to that repo, we can give that out. At the moment there's, I think, like three or four people who have write access to that repo.
P
S
Topics and discovery in general... wait, so are you talking about pub/sub topics now, or discovery topics? I guess both. Well, yeah, I guess they're distinct, right? Because if I discover someone's interested in shard one, then I could use some subset, some, like, extended set of shard one pub/sub topics. So you're right, they are distinct. Yeah, so this is, yeah, this is basically the thing. So I think, like
A
the discovery topic structure is a different thing from the gossipsub topic structure, because, as long as, you know, you and all the other participants agree on, like, what broad range of things to talk about, tagging the message with any topic is fine, as long as it's gonna reach you. So I think
A
the main reason why we're even talking about having a sort of topic-based index in the peer discovery is because it's gonna make it easier for larger networks, where maybe you're not going to be interested in everything every other node has to say. But, you know, as soon as you get connected to a certain good set of peers that are roughly interested in the same things, you're pretty sure to receive all the relevant broadcasts. Oh okay, cool, yeah. Thank you for your answer, and that kind of sorted out something else for me.
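The distinction can be sketched with hypothetical topic names (these strings are illustrative, not from any spec): one coarse discovery topic per shard for finding peers, and a finer set of gossipsub topics under it for the actual broadcasts.

```python
def discovery_topic(shard: int) -> str:
    # Coarse-grained: advertised in peer discovery to find interested peers.
    return f"shard-{shard}"

def gossip_topics(shard: int) -> list:
    # Finer-grained: the pub/sub topics a node subscribes to once connected.
    base = discovery_topic(shard)
    return [f"{base}/blocks", f"{base}/attestations"]

assert discovery_topic(1) == "shard-1"
assert gossip_topics(1) == ["shard-1/blocks", "shard-1/attestations"]
```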
A
F
We aren't certain of a date yet. We're thinking maybe the middle day of the hackathon, which is the ninth, so that we don't detract from the opening and closing ceremonies by pulling a ton of us out of the hackathon. But we still need to figure out location and details around that, so I will keep y'all updated on that.