From YouTube: Eth2.0 Implementers Call #4 [9/27/2018]
A: Someone from Prysmatic, go.

C: Hey guys, I can get started — [inaudible] from Prysmatic. A lot of updates. We're basically on our way towards creating some meaningful demo of the entire workflow: so, having, you know, an initial genesis chain starting, and advancing through attestations and proposals. We have merged a lot of stuff recently; we expect our public [inaudible].
C: We merged a PR so we're able to stream, to validator clients, their assignments — shard IDs and basically their validator index — at every single cycle transition, and you're able to request a subset of public keys. So say you're some third-party application and you want to see the validator assignments for n public keys — you can fetch those as well. We stream those to validator clients that are connected via our RPC.
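The lookup being described — serving assignments for just a requested subset of public keys — can be sketched in a few lines. The data shapes and names below are illustrative assumptions, not Prysmatic's actual RPC types:

```python
# Illustrative sketch of serving validator assignments for a subset of
# public keys, as described above. The Assignment shape and function names
# are assumptions for illustration, not Prysmatic's actual API.
from dataclasses import dataclass

@dataclass
class Assignment:
    public_key: bytes
    shard_id: int
    validator_index: int

def assignments_for(pubkeys, all_assignments):
    """Return assignments for only the requested public keys, e.g. for a
    third-party application watching n validators."""
    wanted = set(pubkeys)
    return [a for a in all_assignments if a.public_key in wanted]

# Toy assignment table for one cycle transition.
table = [
    Assignment(b"\x01" * 48, shard_id=3, validator_index=0),
    Assignment(b"\x02" * 48, shard_id=7, validator_index=1),
]
subset = assignments_for([b"\x02" * 48], table)
```

A real client would receive these over a streaming RPC at each cycle transition rather than querying a local list.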
D: Yeah, sure. We also kept up with the 2.1 spec: we updated the FFG rewards and [crosslink] rewards. We also implemented the proposer's attestation check during block verification. We also implemented the attestation service for the beacon node — its job is just to aggregate attestations and then save the aggregated attestation to the local DB. And yeah, that's pretty much it.
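The attestation service just described — aggregate incoming attestations for the same target, then persist the result — can be sketched as below. Real aggregation combines BLS signatures; this toy stand-in merges participation bitfields only, and the dict fields are illustrative, not the spec's:

```python
# Minimal sketch of an attestation service for a beacon node: merge
# attestations that vote for the same (slot, shard, block) target and
# save the aggregate to a local DB (a dict stands in for the DB).
# Field names are illustrative; real aggregation also combines BLS sigs.
def aggregate(attestations):
    """OR together the participation bitfields of attestations
    that share the same target."""
    merged = {}
    for att in attestations:
        key = (att["slot"], att["shard"], att["block_root"])
        if key not in merged:
            merged[key] = dict(att)
        else:
            merged[key]["bitfield"] |= att["bitfield"]
    return merged

db = {}  # stand-in for the beacon node's local DB

def on_attestations(batch):
    for key, agg in aggregate(batch).items():
        db[key] = agg

on_attestations([
    {"slot": 1, "shard": 0, "block_root": b"aa", "bitfield": 0b0001},
    {"slot": 1, "shard": 0, "block_root": b"aa", "bitfield": 0b0100},
])
```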
G: So we're also working on implementing SimpleSerialize in [JS], but we're expecting to finish it within two weeks — so by the next call — and we'll have it available as an NPM module. We're still kind of working on R&D for gossipsub, pairings, BLS and the VDF libraries that we're working on. We've created a bunch of issues to get people more involved, as suggested. And yeah, that's about it. Thank you.
H: It's using Milagro. We took it and made it a bit more of a standard crate and wrote a bunch of tests for it. It's passing them, but we could use some professional cryptographers to have a look at it and make sure it works. On top of that, we've been working on SSZ, as we said — we've got our serialization [done].
A: Great. On the Python side, we spent some time working through the rewards and found a few different bugs in the spec, which have been fixed via the PRs that I think you've probably seen at this point. We also did some benchmarking, for which we have some results.
I: Harmony — yeah, no problem. We have finished our work on block proposers and block processing. In some places our implementation is a bit not aligned with the spec, especially in its database schema part, but in general we have a high-level design and some details that are implemented from the spec. We are working on attestations now, and we have updated our roadmap: the next things will be Casper [FFG] with finality and [BLS] signature aggregation. So, yeah.
B: Sure. [On the research side], we fixed another couple of bugs, and aside from that, I think in one of the research threads I raised the suggestion of changing the fork choice rule from being immediate-message-driven to being latest-message-driven, and I think I gave some of the arguments in there. Actually, let's see if I can just find it and paste it.
B: Also, another thing that we talked about yesterday was whether we're going to do a kind of two-layer beacon chain attestation aggregation. Basically, with the current spec, I think in the average case we figured out that the minimum peer-to-peer network load was something like 50 kilobytes a second — and in reality that's multiplied by all of the various peer-to-peer inefficiencies; in the worst case it would be 500 kilobytes a second.
B: And given that there's not gonna be [much] computation, that would basically be the byte limit. Okay, so — yeah. So imagine, you know, 4000-byte blocks in every shard, and then multiply by 1024: that's like four [megabytes] every 15 seconds, or something like that, possibly even less.
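The arithmetic here checks out directly. A back-of-the-envelope sketch using the rough figures mentioned on the call (these are discussion numbers, not spec constants):

```python
# Back-of-the-envelope check of the bandwidth figures from the call.
# All parameters are the rough numbers discussed, not spec constants.
block_size_bytes = 4_000   # assumed block size per shard
num_shards = 1_024         # number of shards discussed
interval_seconds = 15      # rough block interval mentioned

total_bytes = block_size_bytes * num_shards          # 4,096,000 bytes
megabytes_per_interval = total_bytes / 1_000_000     # ~4.1 MB per interval
bytes_per_second = total_bytes / interval_seconds    # sustained load

print(megabytes_per_interval, bytes_per_second)
```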
J
Maybe
now
is
a
good
time
if
I
can
ask
about
phase
zero,
which
meaning
added
to
the
the
wiki
and
I
hadn't
heard
the
term
before.
But
can
you
all
link
the
wiki
page
you're
talking
about.
A: Basically, there was the beacon chain with validator [attestations], but the crosslinking of the actual shard block hashes is stubbed. KC and I were discussing and realized that KC was not aware of the potential term "phase zero", and that likely more of the community was also not aware of it. So I added that to the wiki. Okay — if you get a question regarding phase zero...
B: Hold on, let me go grab the [link] — the "beacon chain without shards" [page]. Oh, I see — yeah, so I think the idea, like what I just suggested — maybe we just came up with the idea independently, but it basically is a way of implementing phase zero.
B: Yeah, like, to me it intuitively makes sense that it would be a kind of safe launch strategy: start with launching the [chain] with everyone downloading everything, and then we can, on a kind of sliding scale, make more and more nodes, quote-unquote, "[specialized]" over time.
B: And realistically, right, if we end up having large staking pools, then those large staking pools are probably going to end up — if they have, like, 1% of all of the ether, then they're going to be getting called into every shard anyway. So they're going to have to have the data from all of the shards, so they might as well just run a super-full node.
L: Yeah, so we've also confirmed the power usage ballpark of around seven watts, and hopefully we'll set up a grant with them shortly so that they can do some more work. We've also got a quote from Obelisk — one of the companies that is, you know, potentially going to help us design and manufacture the VDF ASICs — and it's a quote for an initial viability study.
L
We
also
have
a
team
in
the
UK
working
on
on
the
proven
aspect,
so
the
vdf
has
two
parts:
it
has
the
evaluator
and
the
prover
and
the
prover
is
not
as
latency
sensitive,
actually
it's
more
limited
by
throughput
and
so
we're
looking
at
the
kind
of
hardware
that
would
be
most
appropriate
to
implement
approver.
There
I've
also
had
various
calls
with
with
Intel.
So
three
three
guys
from
Intel
are
very
much
interested
in
the
PDF
ASIC.
Two
of
them
are
very
experienced
engineers.
L
Very
interesting
remarks
so
that
that's
been
good
in
terms
of
where
I'd
like
to
be
in
the
coming
months,
I'd
like
to
try
and
wrap
up
the
viability
study.
My
my
inviability
estimate
right
now
standards
around
75%,
so
it's
been
gradually
increasing
over
the
weeks
from
one
point
and
hopefully
maybe
only
19.
We
could
have
some
sort
of
initial
test
net
or
CPU.
L: I mean, another thing where I have some uncertainty is the design side of things. So it's possible that, you know, there could be some new breakthroughs as to how people do multiplications and things like that, so I want to try and gauge what the researchers think there is in terms of fast formulations. Another thing is going to be the cost of fabricating the VDF rig in the latest proposal.
L
Exactly
okay,
so
none
of
having
direct
in
protocol
rewards-
and
you
know,
buying
an
ASIC
rig-
would
be
investment
because
you'd
get
these
rewards.
We
scrap
the
rewards
that
are
given
internally
to
the
protocol
and
we
give
these
this
Hardware
for
free,
and
we
need
only
one
single
person
to
run
this
hardware
and
to
be
online
I
feel.
L: I think it's gonna take quite some time — I'd say at least 18 months. So the nice thing is that, quite likely, the protocol layer can survive on RANDAO. It just means that the security analysis of the protocol will be harder, or kind of more hand-wavy, and we might have to have these security margins in the various parameters that we choose.
L
So
when
we
have
this
kind
of
this
upgrade
with
the
vdf
we'd
be
able
to
even
make
the
the
whole
protocol
more
performance
by
removing
the
margins
or
just
making
it
more
robust
by
by
keeping
the
margins
and
having
this
margin,
I
guess
a
lot
of
the
value
is,
in
my
opinion,
at
the
application
area,
where
we
exposed
a
code
for
strong
randomness,
and
you
know
if
that's
delayed,
if
that
you
know
that's
only
meaningful
really
for
phase
2
plus.
So
in
terms
of
timing,
this
there's
no
rush.
In
my
opinion,.
B
Not
yet,
but
it's
something
that's
a
very
simple
to
include.
Well,
there
has
been
a
spec
for
R
and
L
in
other
contexts,
but
like
a
spec
for
how
R
and
L
would
be
integrated
specifically
into
the
Casper
2.1
spec.
Not
yet,
but
it
is
no.
It
is
very
easy
to
include
right,
like
the
basic
idea
is
just
that
you
start
off.
By
do
you
require
every
validator
at
the
positive
time
to
provide
a
random
seed
and
then
every
time
they
make
a
block,
they
just
kind
of
unwrap
one
layer
of
the
hash
onion.
B
The
block
would
I
do
basically
I
mean
it
would
yeah.
It
would
check
that
the
unraveling
of
the
hash
onion
is
correct.
So
basically,
if
you're,
if
the
previous
saved
round
out
seed
for
that
validator
is
X,
then
if
you
provide
your
end
out
preimage
why
it
would
check
the
hash
of
y
equals
x
and
then
it
would
change
you're
like
pre-commitment
from
X
to
Y.
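The hash-onion mechanism just described can be sketched directly. This is an illustrative toy, not the spec's actual field names or hash choice — SHA-256 stands in for whatever hash the protocol settles on:

```python
# Toy sketch of the RANDAO hash onion described above. SHA-256 and the
# function names are illustrative assumptions, not the spec.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_onion(seed: bytes, layers: int) -> bytes:
    """At deposit time the validator commits to H^layers(seed)."""
    out = seed
    for _ in range(layers):
        out = h(out)
    return out

def verify_reveal(commitment: bytes, preimage: bytes) -> bytes:
    """Block processing: if the saved commitment for this validator is X
    and the block reveals Y, check H(Y) == X, then Y becomes the new
    commitment (one layer of the onion unwrapped)."""
    if h(preimage) != commitment:
        raise ValueError("invalid RANDAO reveal")
    return preimage

seed = b"local entropy"
commitment = make_onion(seed, layers=3)
# First block: reveal one layer under the commitment.
reveal = make_onion(seed, layers=2)
commitment = verify_reveal(commitment, reveal)
```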
L: The other thing which I would like to see addressed somehow — but it's unclear to me what the best solution is — is: what happens if, as a block proposer, you propose a block and you reveal some piece of local entropy, but your block doesn't get into the canonical chain for some reason? Is there a way, you know, to have multiple reveals next time round, or to cancel this reveal? That's an open question to me.
J: Scrolling up — so down here, the blocks start to finalize, and I guess they're finalizing, like, one cycle length back. These are the beacon blocks, and the pink ones are shards. So then, once this crosslink is finalized, then this length of shard chain is finalized — not this one, then this one. And there's a bug, where you'll see, once it gets up to [here], the shard length that's pink — the finalized [section] — is [stuck at] eight shard blocks long.
J: Then it sort of stops working there — one, two, three, four, that's it — and I think, yeah. So the timing right now is exactly five seconds per beacon block, and yeah, I know that there's the bug where the finalization is not proceeding up correctly; I'll fix that. This is another [work in progress]. That's about it. Yeah.
B: Exactly — like, you can ask people, but then that's not [enough]; people don't really know. The other problem is, yes, people don't really know yet what the staking experience is like. They don't really have the experience of "here it is, this is what it actually feels like to run a node" — having a node eat up a bit of your bandwidth, go offline some of the time, and so forth. So...
M: Rocket Pool recently published a pretty comprehensive blog post — [not sure if] people saw that — with a lot of numbers from their kind of alpha-or-beta testnet: you know, how many people staked, how much ether was earned, etcetera. I was just interested in whether we might want to do something like that ourselves. I'll find the link and share it here. Thank you.
A: Okay, when we get to the timing analysis we'll go back to that graph, but next I'd like to introduce Raul from Protocol Labs. So the next point of the agenda is to discuss the libp2p daemon — work on it has begun, but it's still kind of in the proto phases — and Raul will take it from here. Yeah.
R: Thanks everybody, thanks for hosting me — I'm happy to see this cross-pollination between communities. This is Raul, I'm a core engineer on the libp2p team here at Protocol Labs. I also have a background in [inaudible], so I'm hoping that'll be useful here. Danny, do we have like five to ten minutes for me to do a quick introduction to the daemon? Yeah.
R: Essentially, [to enumerate] the features: what the daemon does for you is, it takes care of connection management, stream management, multiplexing, security negotiation and so on, and essentially you get streams — raw streams — back, where each stream maps to a libp2p stream with a specific peer over a specific protocol, and you're also able to send control messages back and forth to the daemon.
R: [The interface] is UNIX domain sockets. So essentially, each stream with a peer over a specific protocol maps over to a UNIX socket — UNIX domain sockets, sorry — but on the roadmap we also have generalized transports, so that will be developed a bit later on. The daemon itself exposes a control endpoint through which you can essentially ask it to open connections and streams with peers, and for each stream, the daemon gives you back a dedicated UNIX socket for that particular stream.
R
So
it's
actually
all
reads
and
writes
from
into
that
socket
through
as
reads
and
writes
on
these
treatments,
so
I,
essentially
your
application
in
this.
In
this
case,
each
student
annotations
would
attach
protocol
handlers
on
top
of
those
streams,
so
you'll
be
able
to
interact
with
peers,
essentially
by
doing
Stata
died
right.
So
we
are
aside
from
all
this,
which
is
the
core
and
the
basics
of
the
of
the
dynamics.
R
Years
from
section
you
think
neck
chest
appears
opening
streams,
negotiation
negotiating
protocol
system
and
stashing
protocol
members,
we're
also
working
on
exposing
different
subsystems
of
the
p2p,
like
the
DHD
relays
absorb
and
so
on.
So,
essentially,
applications
will
be
able
to
to
get
the
values
of
the
DHT
to
find
those
to
subscribe
to
topics
on
pops
up
to
gossip
and
so
on.
So
this
is
just
the
demon
itself,
but
in
essentially
your
application.
So
particularly
if
Eve
two
implementations
I'm
particularly
The
Shining
pork,
that
Kevin
has
been
working
on
and
collaborating
with
em
as
well.
R: I know that Kevin has already expressed interest in developing a [Python] binding, as well as expressed interest in developing a Java binding. So essentially, what a binding does: it is a very lightweight library that basically encapsulates — exposes — the control API in an idiomatic and clean manner for that particular language, and allows applications, and particularly implementations of Eth2, to attach protocol handlers in an idiomatic manner for whatever that language's mechanism is — for a particular [language it] could be callbacks, it could be coroutines as well.
R: [We want everyone to] be able to start working as early as possible, so — we record everything, so even the ideation and the conceptualization, and basically the roadmap for the daemon, is being worked on in the open. I've just posted in the chat a pull request for the roadmap, which has already been approved by several team members, so I encourage you to take a look. Going through the roadmap: it's divided into short term and medium term, and I think all the features that Eth2 and [sharding] need from the daemon are included.
R
I've
made
an
effort
to
include
that
in
the
short
term
roadmap,
so
we're
actively
developing
and
of
course,
we
are
happy
to
have
you
go
through.
The
road
map
to
you
know,
point
out
different
features
that
you
like
in
there.
We
accept
all
kind
of
contributions,
of
course,
and
of
course
firstly
I'll
be
happy
to
support
you
in
your
development
and
tests
of
Libby
to
be
for
us,
a
protocol
labs.
R
We
have
made
supporting
the
indian
community
a
priority,
so
I'll
be
acting
as
the
point
person
for
everything
that
you
need
from
the
the
BJP
team.
So
if
you've
got
any
issues,
questions
suggestions,
you
can
open
them
and
github
in
any
of
our
oppose
the
ones
that
is
concerned
for
that
particular
issue,
and
just
mention
me
so
that
it
comes
to
my
attention
and
I
can
track
it
and
of.
R
The
the
sharding
Paul
we're
trying
to
sort
out
as
well
what
the
network
flows
would
look
like
together
and
they
are
also
engaging
with
the
ones
will
be
appearing.
One
point:
no
teen:
it's
also
developing
a
lippy
to
be
base
proof
of
concept
of
whispered
version.
Six
and
yeah.
So
just
wanted
to
say.
There's
a
lot
of
things
happening
here,
I'm
hanging
out
in
your
in
your
guitar.
You
can
find
me
there
as
well
we'll
take
and
just
thing
me
whenever
you
have
a
question
so
I'm
happy
to
take
questions
now
as
well.
I
hope.
S: Actually, I do have one question — it's regarding the endpoints that you open up to the world later on. One issue we're having [right now] is that a lot of people run their [nodes] only on weird ports, and there's no masquerading support. So you can't really — if you have a hostile network, it's kind of difficult, yeah.
R: Yeah, yeah, I see — I see what you mean. Yeah, of course — so there's a variety of ways that we can ensure connectivity. One of them is hole punching from [behind] NATs; another one is the circuit relays. So — [libp2p] has the concept of nodes being able to hand over, to pipe through, connections to other nodes, so this is another possibility, if, you know, [a relay is] free.
A: Everything so far is pointing to "yes, we will be using libp2p". I don't — we have not made a final decision, as in, we haven't... Last time, I think we were — the research is pointing us in a direction more and more, and we were at, say, 90-95 percent sure. I am fairly confident that this is the solution for us, but again, we haven't, I don't know, taken a vote or whatever. Yannick, Kevin — are you getting closer and closer to giving your blessing on this, or what?
P
So
I'm
not
quite
sure
how
to
answer
such
a
question.
I
think
I
didn't
really
make
any
progress
since
two
weeks
on
that
question,
I
would
say
that
even
if
the
higher-level
protocols
don't
work
out,
then
we
can
still
use
the
lower
levels
of
the
p2p
stack,
so
I
think
getting
started
on
implementing
upto
some
kind
of
vegetal
ap
to
be
should
be
yeah.
So.
R
And
I
do
I,
do
want
to
stress,
transform
a
side
that
we
are
completely
open
to
a
gossip
service
is
an
evolution.
We
have
other
algorithms
that
we're
exploring
as
well
for
gossip
dissemination
and
so
on
or
membership.
A
few
other
things
same
thing
is
happening
on
the
DHD
level.
We've
got
a
research
group,
that's
dedicated
to
different
challenges
on
the
DHD
in
terms
of
security,
foster,
lookups
scoring
and
so
on.
Q: Right — sorry — for me, my conclusion is that at least the lower levels of the [libp2p stack are] pretty good to use, and for the gossipsub itself, I think we're still doing the testing for that part — sorry for the late progress — but with the [daemon] itself, I'm pretty confident, and maybe, if we want specific features, we can add something or do some modification to it. Yep.
A
At
this
point
is
that
aligning
and
working
with
with
p2p
is
probably
a
net
gain
for
the
EPOC
versus
women
through
the
p2p
protocols
as
well,
and
so
I
think
that's
likely
the
right
direction.
Moving
forward,
especially
you
know,
they've
been
very
open
and
excited
to
collaborate
with
us.
So
you
know
I
got
a
boost
for
us.
Q
Yeah,
oh
yeah,
big
things
to
Rose
Brow's
introduction
to
the
gold.
A
plea
to
the
demon
now
I
think
we
can
give
it
a
try.
I
mean
I
mean
for
every
ending
for
every
language
we
can
yeah
I
mean
at
least
we
can
try
and
and
for
the
language
wish
they'd
be
too
late,
p2p
the
didn't
support.
Previously
we
can
start
from
the
goal:
epitome,
demon
and
boy.
R
Yeah
yeah,
so
I'll
be
I'll,
be
hanging
out
in
getter
if
you're
intending
so
as
I
said,
we
are
going
to
be
working
on
a
spec
for
the
demon
itself
so
that
it
would
be
super
easy
for
binding
implementers
for
people
who
want
to
come
in
bindings
to
pick
up
the
concepts
and
exactly
how
these
to
happen,
how
the
sake
dynamics
needs
to
play,
and
so
on
so
and
of
course,
I
mean
I'm
gonna
be
hanging
out
in
guitar
and
you
can
open
issue
so
network.
We
are.
A: Right, thank you so much, okay. The next thing is: we have some preliminary results on block processing. The Lighthouse and the Python beacon chain implementations did some quick analysis to sanity-check our estimates on being able to process signatures at that scale. I just shared the link.
A
The
V
in
the
results
were
very
much
within
the
bounds
of
reason
and
are
solidified
or
lint
credit
to
our
initial
estimates,
and
it
looks
like,
as
long
as
we
can
figure
out
the
aggregation
on
network
layer
that
these
aggregate
signatures
are
going
to
serve
our
purposes,
even
in
the
extreme
case
where
Olly
is
validating,
lighthouse
contributed
Paul.
You
have
any
comments
about
the
results.
H
Really,
the
only
thing
that
I
would
like
to
know
to
make
sure
that
fully
accurate
is
that
the
the
Milagro
library
were
using
is
is
fit
for
purpose.
We
don't
have
any
test
vectors
for
Baylor,
so
it's
just
you
know
an
early
trust
in
so
I
know
it,
but
that
I
would
be
called
and
I
made.
Maybe
I'm
Justin
Drake
those.
H: Yeah, okay, cool. But apart from that, we're pretty good. We're using concurrency for attestation validation. I thought it was interesting that if we had, like, 10 million [validators], it's 0.06 seconds to evaluate a block; then if we get to 100 million, it's not a ten-times increase. I'm not exactly sure what that is, but I think it might be overheads due to threads — I think the way I'm doing it is using the Rayon library in Rust.
A: Great, yeah. And I'll put up a [writeup] with a standardized format, kind of in the vein of what Paul and I have posted. So if anybody — any other teams, as they get to that point — wants to just post some sanity-check block processing numbers, that'll be useful. But good results over there, on the whole. Cool, so the next thing on the agenda is testing.
A: If anybody has reviewed those and has some comments, please speak up. If you haven't, please have someone from your team look these over, because this coming week or two, we have a new test repo for the unified tests, and we're going to start putting some tests in there under this format.
O: [Alexey] here, hi. Yes, so basically, I was thinking about — I looked at the simple serialization, unfortunately, a bit late, just after the last meeting, [which] means that a lot of people [had already] started working on it. But the thing I've noticed straight away is that it's essentially impossible to derive sufficient structure from the serialized stream when you're just looking at [it] and you don't have the schema information. Essentially — that might be okay.
O
With
this,
when
I
was
trying
to
optimize
to
a
guest,
but
actually
is
there
is
a
wire
format
is
pretty
good
because
you
get
first
of
all,
you
have
this
length
prefixes,
which
allow
you
to
pre-allocate
the
buffers
and
also
it
allows
you
to
derive
sufficient
structure
before
you
you,
without
even
looking
at
the
schema.
So
you
know
how
many
items
there
are
you
know
like
where
they
begin
and
end
and
so
for
the
hashing
itself.
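The wire-format property Alexey points out — length prefixes let you walk the stream and size your buffers without a schema — can be shown in a few lines. The encoding here is a toy with 4-byte big-endian prefixes, the general shape being discussed rather than SSZ's exact layout:

```python
# Toy length-prefixed wire format illustrating the point above: item
# boundaries are recoverable from the stream alone, and each prefix says
# exactly how much buffer to allocate. Not SSZ's actual layout.
import struct

def encode(items):
    """Each item is preceded by a 4-byte big-endian length."""
    return b"".join(struct.pack(">I", len(it)) + it for it in items)

def split(stream: bytes):
    """Recover item boundaries with no schema information."""
    items, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from(">I", stream, i)
        items.append(stream[i + 4 : i + 4 + n])
        i += 4 + n
    return items

wire = encode([b"attestation", b"block"])
parts = split(wire)
```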
O
I,
the
basically
having
the
length
is
actually
a
bad
idea,
because
it
requires
you
to
have
that
buffer
before
you
start
hashing.
So
for
that,
I
would
suggest
just
to
have
some
format
which
doesn't
have
a
prefixes,
so
you
can
actually
stream
into
the
into
the
hash
function
like.
If
it's
a
ket
check,
then
you
can
essentially
use
that
property
of
the
of
the
what's
called
the
sponge
yeah.
O
So
imagine
that
if
you
need
to
hash
a
huge
huge
hash
tree,
then
you
can
actually
start
streaming
from
the
leaves
and
then,
as
you
go
up,
you
kind
of
have
like
one
stream
per
level
and
then
you
can.
You
can
actually
hash
the
whole
tree
very
efficiently,
because
at
the
moment
you
all
need
the
buffers
at
each
level
and
it's
Revilla
it
pretty.
It's
pretty
memory
intensive.
O
So
my
suggestion
is
to
basically
split
up
the
the
the
serialization
format
and
make
them
optimized
for
their
respective
uses
and
I
would
say
that
it's
an
unfortunately,
simple
and
serialize
doesn't
fit
any
of
those
requirements.
Basically
it
it's,
it's
not
the
optimal
for
any
of
the
two
categories.
So
that's
what
I
was
gonna
say.
O: Okay, so you can actually — yeah, essentially, if you really want to do that, you can actually add them as suffixes, not as prefixes, so that you can have the same kind of non-duplication thing; but because you added it as a suffix, you can compute it without pre-allocating a buffer, yeah.
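Alexey's suffix idea can be sketched the same way: keep the lengths, but write them after the payload, so the writer can stream bytes out (and into a hash) without assembling the full buffer first. The exact layout below — a length table plus a trailing count — is an illustrative assumption:

```python
# Toy encoding with lengths as a suffix rather than a prefix, per the
# suggestion above: payloads first, then each item's 4-byte length, then
# a final item count. Layout is illustrative, not a proposed standard.
import struct

def encode_suffix(items):
    """The writer can emit (and hash) payload bytes immediately;
    the length table is only known-and-written at the end."""
    payload = b"".join(items)
    lengths = b"".join(struct.pack(">I", len(it)) for it in items)
    return payload + lengths + struct.pack(">I", len(items))

def decode_suffix(buf: bytes):
    (count,) = struct.unpack_from(">I", buf, len(buf) - 4)
    table = buf[len(buf) - 4 - 4 * count : len(buf) - 4]
    lengths = [struct.unpack_from(">I", table, 4 * i)[0] for i in range(count)]
    items, i = [], 0
    for n in lengths:
        items.append(buf[i : i + n])
        i += n
    return items

buf = encode_suffix([b"abc", b"defgh"])
```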
O: Another thing is that when you use this as an input for hashing, you never have to deserialize — it's only one-way. So that's why you don't care; the only requirement is that it's very easy to produce, and that it's also unique — [injective] — so you never need anybody to actually deserialize that data.
B
So
one
other
reason
why
I
like
it
would
be
a
nice
to
have
the
same
format
for
both
as
based.
Is
that
for
like
say,
the
Python
client,
and
this
would
probably
apply
to
clients
and,
like
other
soul
languages
as
well,
like
converting
from
one
serialization
format
to
another,
is
like
a
source
of
a
huge
amount
of
overhead,
and
so
it
would
be
nice
if
you
can
like
have
something
that
you
actually
just
treat
us
under
I.
Just
treat
us
a
blob
of
data.
O
Well,
first
of
all,
the
conversion
will
only
happen
in
one
way,
as
I
said,
because
nobody,
nobody
nobody,
passes
a
hash
format
around
exactly
so.
The
conversion
will
only
be
implemented
from
a
wire
format
into
the
whatever
it's
called
the
zero,
the
one,
but
it
could
also
be
done
very
efficiently
because
the
you
essentially
you,
take
the
length
three
sixes
and
push
them
into
the
end
or
something
like
this.
I
we
can.
Actually
we
can
look
at
this.
It's
interesting.
O: You see, so when I was reimplementing the Patricia tree hashing in Turbo-Geth, one of the things was that when you have a string of bytes, depending on whether it is less than 56 bytes long or more than [56] bytes, you've got a different size of the actual prefix, so...
B: You know, so SSZ — it exists at this low level; it does not even need to exist at the transaction level, right? Because, with abstraction, basically, the way that blocks will be divided into transactions is something completely different, which is the format where you basically have a bunch of [chunks], and each [chunk] is like 256 bytes or whatever. Then the first byte tells you where the separators are, and then the format of a transaction...
B: No, no — what I'm saying is that, basically, the other alternative to all of this is that, instead of having a hashing format, we basically hash the data structure as a Merkle tree. So we would have a standard that says: if you see an array, then you first make a Merkle tree hash; or if you see a variable-sized array, you first make a Merkle tree hash from the variable-sized array; and then, as you add the higher levels, you just pretend that it's some bytes32.
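The alternative being sketched — hash the data structure directly as a Merkle tree, replacing any array by its Merkle root and treating that root as a bytes32 at the level above — might look roughly like this. This is an illustrative recursion with SHA-256, not the tree-hash algorithm the spec eventually fixes:

```python
# Illustrative recursive tree hash per the discussion above: arrays become
# Merkle roots, and roots are treated as bytes32 one level up. SHA-256 and
# the padding/encoding choices are assumptions, not the spec's algorithm.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Pairwise-hash 32-byte leaves up to a single root."""
    nodes = list(leaves) or [b"\x00" * 32]
    while len(nodes) > 1:
        if len(nodes) % 2:                 # pad odd levels
            nodes.append(b"\x00" * 32)
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def tree_hash(value):
    """If you see an array, Merkle-hash its elements first and treat the
    root as a bytes32 at the higher level."""
    if isinstance(value, (list, tuple)):
        return merkle_root([tree_hash(v) for v in value])
    if isinstance(value, bytes):
        return h(value)
    if isinstance(value, int):
        return h(value.to_bytes(32, "big"))
    raise TypeError(f"unsupported type: {type(value)}")

root = tree_hash([1, 2, [b"nested", b"array"]])
```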
O: You see what I'm trying to say — another, last comment is that the reason why I suggested that is because I'm looking at this from the perspective of: what have we done wrong in, let's say, Ethereum, and the things that we know now that we can do differently. So if we do the same thing as we did in Ethereum, it might be one of those things — because I can see now which things are actually creating the inefficiencies at the foundational level, and the serialization — the unified serialization format — is one of them.
B: Also, regarding the hashing-and-serialization thing, another thing to keep in mind is that, like — the example Alexey mentioned about the Patricia tree — that's probably not going to exist here, because if we're using this [kind of] Merkle tree, then all the hash [inputs] are a nice, clean 64 bytes. So in this case, the serialization is in a much smaller number of places, and there's kind of significantly less of it, in an absolute sense.
O: Like — not [that] you can't fit it in memory, but I'm saying [you can compute] the hash tree much more efficiently; then you don't [hold] so much of it in memory. So it's a trade-off, and — look, if you have a very efficient computation of the tree, then you free up memory for other things in your process.
B: Sure, I guess, at this point — so for the beacon chain specifically, I think there basically isn't really any choice other than keeping the entire 400 megabytes in memory, and the reason, basically, is that we have all these rewards and incentives that are going to be adjusting pretty much every single validator's balance every time there's a crystallized-state recalculation.
A: The idea right now is: the chain launches, people can deposit from eth1.0, become validators, and [stake]. But at that point, on the beacon chain, they have no way to exit — because the shard chains don't exist, and you can only exit to one of the shard chains. So early adoption is really for enthusiasts that want to get their hands dirty, because there's kind of this unknown time limit before they can get access to their funds again. That said, there's also — because a smaller amount of eth will show up due to the risk profile...