From YouTube: Eth2.0 Implementers Call #18 [2019/5/23]
A
Hey, hey Jose, Rafael, Gomez. Gomez is the only person in the chat, okay, cool! Thank you, everyone, for coming. We skipped a week; it's been three weeks because a lot of us met in New York last week, discussing some interop stuff, and that's on the agenda. Today we will talk a little bit more about some of that and potentially about the interop in September. Okay, so I just shared the agenda with you, or everyone has it. We will start today with testing; a lot of stuff is in the works to be released shortly. Proto?
B
Sure, yes, great. Okay, so things slowed down a little bit with the New York City week, although we did get together with the implementers and started talking interop and all these things. In the meantime, we got block operations in the tests repository, epoch transitions also in their separate files, and we got the sanity tests right, which is great.
A
Make them available; I think we definitely agree. They either should be dumped dynamically into the tests repo, or live maybe in a separate repo of their own as configuration files. On either of those potential paths they'll be available without having to pull the entire spec repo.
D
So we will be having the constants directly in our code base and we will not load them from YAML, so you'll find that we are providing a frame to the spec. But this is something that we are thinking of for the future, especially once we have actual users, because we don't want them to have to recompile Nimbus every time if they want to do a private chain. The main issue is with constants that are used for array sizes, like shard count; the others are not really an issue.
E
On our end, we have a config TOML file. You can swap out the network adapter, even the type of timer, switch ports, specify your output parameters, and then all of the constants too; I just threw a link in there. So you could literally just write a short script to translate from YAML to TOML and do it that way, just another option.
B
So, to elaborate on the current approach, to support compile-time constants while also being dynamic enough to support both configurations: in our Go client there is a source file for each of the configuration files, and build constraints tell the compiler which constants to use; they are unified at runtime so that you don't see the difference.
A
Something that also accompanies this is the configurable constants. I have a branch with a validator, kind of a tool that takes a YAML file and says whether it's correct or not. So that will help with some of these configuration things as well. Regardless, these constants will likely live in a test repo or a separate repo, so that they can be easily pulled in by others. Other than that, I think we can move on for today.
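The kind of constants-preset checker described above can be sketched minimally as follows. This is purely illustrative, not the actual branch: the constant names and the expected types are hypothetical, and a real tool would first parse the file with `yaml.safe_load` (PyYAML is not in the standard library, so this sketch starts from an already-parsed dict).

```python
# Hypothetical sketch of a constants-preset checker. Assumes the YAML file has
# already been parsed into a plain dict; the constant names below are examples,
# not the full spec set.

EXPECTED = {
    "SHARD_COUNT": int,
    "SLOTS_PER_EPOCH": int,
    "GENESIS_FORK_VERSION": str,
}

def check_preset(preset: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for key, typ in EXPECTED.items():
        if key not in preset:
            problems.append(f"missing constant: {key}")
        elif not isinstance(preset[key], typ):
            problems.append(
                f"{key}: expected {typ.__name__}, got {type(preset[key]).__name__}"
            )
    for key in preset:
        if key not in EXPECTED:
            problems.append(f"unknown constant: {key}")
    return problems
```

For example, `check_preset({"SHARD_COUNT": 8, "SLOTS_PER_EPOCH": 8, "GENESIS_FORK_VERSION": "0x00000000"})` returns an empty list, while a preset with a missing or mistyped constant returns a message per problem.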
G
Yeah, it's working without discovery for now, and we're able to just set up direct connections at this time. The next steps are to actually have them peer together and have them produce some output, so that we can check that they are able to sync, and then we can start having more and more sanity checks as part of that. So that's what I'm working on at this time.
G
So the framework that I use is built by Whiteblock and it's actually pretty powerful. You can do packet loss, you can have network delays, things like that. So it's easy for you to recreate kind of real-world simulations. I think we're a bit early, though; not everybody has networking nailed down to the point where we can push for everyone.
A
There's an initial fuzzing effort underway, targeted at the pyspec stack initially, led by Guido. There's not a lot of fuzzing on each one yet, and over time we'll be integrating clients and such, but at first we're going to target just these more minimal spec implementations.
I
So we finished the block processing in the 0.6 spec and we have it passing, with a basic implementation running. One potential issue we have is the BLS implementation. We found that there are some rare cases where, using the same library, a signature over a message will be deemed invalid even though it is valid, using the same lib. So that's kind of a really bad issue, and we haven't thought of anything yet, but we are still working on trying to figure out how to resolve the BLS implementation.
C
So, what else: we have almost finished our work on the networking stack. We have implemented the wire protocol and regular sync; the sync also has an online mode where it just works with the new blocks and attestations that have been proposed by the validators. It uses devp2p as the transport layer for now, we have introduced a couple of messages to notify about new blocks and attestations, and we have a command-line interface to run a node.
C
So I'm not sure whether there is somebody who does it that way. Also, I've created a short write-up about benchmarks; I posted the link about an hour before the call. A couple of takeaways from the benchmark: there is an interesting thing where one method from the spec blows up, because it has a lot of small calls in it. You may just check the write-up and see why it happens.
D
We still continue working on libp2p. On the crypto side, we had kind of a mix-up because we had someone implementing a compile-time Keccak-256, but at the same time we switched to SHA-256, which is working fine. On the test side, we have the shuffling mainnet tests integrated and passing, and the same for the BLS mainnet tests. For reference, we are using Milagro.
D
So if you have issues regarding Milagro, feel free to contact us and we'll give you a link to our implementation. The minimal preset, to be able to switch constants, is in progress, and we don't see any issue, so it should be done this week. Now a bit of an update on Ethereum 1. Since two months ago we have a new joiner who's working on the networking part of Ethereum 1 and everything that can be reused for Ethereum 2, and he worked on fuzzing the discovery protocol.
D
So hopefully we'll be able to reuse that in Ethereum 2 as well. On the documentation side, we have a doc generator for the repo; it should be usable for all languages, not only Nim. The next thing regarding documentation is internationalization: the first language will be Korean, because they are very excited about the project. Another to-do is automatically generating the API reference and miscellaneous items.
K
Yeah, so we pretty much finished aligning the core processing to version 0.6.1 of the spec, and the next steps we're taking are how to properly use it in our tests at runtime and then figure out how to properly optimize it. In parallel we are also working towards updating SSZ and the tree hashing. We're also investigating BLS alternatives, investigating how to incorporate the test vector data in Prysm, and also looking into collaboration with Whiteblock, which is covering discovery v5 for p2p. Yeah, and that's the update. Thank you.
L
Yeah, so we were working on getting compliant with 0.6.1, and currently we have the SSZ tests passing and also our BLS tests. We switched from the Hermes implementation to Milagro and that's working for us, and in the coming weeks we're going to be implementing, or bringing in, the state transition tests and shuffling.
M
Also, sorry, I just want to make a quick general announcement. After working and chatting a lot with the Yeeth team about working together, we're going to convert a bunch of Lodestar components that are difficult for them into WebAssembly. So we're going to be moving over a lot of our core components, like SSZ, and we're going to try BLS as well, into native WebAssembly, so that the Swift team can eventually use them.
N
Hey everyone, Lighthouse has been making some good steady progress over the last couple of weeks. Paul and Michael have been getting us up to date with 0.6.1; they've got a big PR that's very nearly done. We've also implemented a parser for the Ethereum Foundation tests, and we're trying to make sure that any failure messages we get from them are also very useful. In doing that, we've also put in provisions for fuzzing in the future, and we're glad to say that Lighthouse is passing
N
the vast majority of these tests as well. On the networking side, Adrian is making steady progress with discovery version 5 and he's close to an initial implementation in Rust for the p2p. I also published a proposed REST API between the beacon node and validator client, which has largely been approved but has ongoing discussion for later in this call. In other news, we've refactored our database wrapper, preparing for some optimizations with state storage in the future, and next week we're going to be starting to expose some components of Lighthouse. And that's us.
O
So these two weeks we are adding more functionality into our potential validator and adding a normal block syncing process in the devp2p internals, and in parallel we are making progress on the discovery v5 implementation as well. Jannik has implemented the ENR, sorry, the node record, definition and the encryption and decryption functions. Plus, on the deposit contract side, we've started to update the deposit contract with the review prepared from the formal verification findings. So thank you, Mr. Park.
L
Hold on, I can find the document and I'll paste it into the Telegram, one second. Basically, the idea is that it builds on the proposal that I made a couple of weeks ago, that I presented on the last call, except here, unlike the earlier model, the system no longer maintains a fairly big array of state for every execution environment for every shard.
L
So one of the consequences, for example, is that because the big state isn't part of consensus anymore, the big state basically being people's actual accounts, contracts, or whatever kind of larger pieces of data people want to keep track of, we don't need to have one standardized rent scheme. Instead, rent just becomes something that individual execution environments need to figure out for themselves, and that gives us a lot of freedom to come up with different schemes involving hibernation
L
and waking. So, for example, it lets us make this major simplification that the bit fields themselves aren't subject to rent, and we don't need to have anti-replay bit fields if the bit fields get hibernated, and things like that.
So the proposal basically says that inside of every shard you have this list of 32-byte hashes, where each entry is a 32-byte hash meant to be the state root for one of the execution environments.
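The layout just described can be reduced to a toy model: the per-shard state is only a flat list of 32-byte roots, one per execution environment, and applying a block for environment `i` replaces entry `i` and nothing else. This is an illustration of the shape of the proposal, not the actual spec; the class and method names are made up.

```python
import hashlib

# Toy illustration (not the actual spec): per-shard state as a flat list of
# 32-byte state roots, one per execution environment. Executing a block for
# environment `index` only swaps out that environment's root.

def hash32(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ShardState:
    def __init__(self, num_environments: int):
        # Every environment starts from an empty (zero) state root.
        self.ee_roots = [b"\x00" * 32 for _ in range(num_environments)]

    def apply_block(self, index: int, block_data: bytes) -> None:
        # Stand-in for running the environment's WASM code: fold the block
        # data into that environment's root, leaving the others untouched.
        self.ee_roots[index] = hash32(self.ee_roots[index] + block_data)

state = ShardState(4)
before = list(state.ee_roots)
state.apply_block(2, b"transfer: alice -> bob")
changed = [i for i in range(4) if state.ee_roots[i] != before[i]]
# Only environment 2's root changed; the consensus layer never sees the
# "big state" behind that root.
```

The point of the sketch is that consensus only commits to one hash per environment; everything behind each root (accounts, contracts, rent) is the environment's own business.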
L
So the advantage of this model, aside from protocol simplicity, is that it allows different execution environments to experiment with, number one, different types of Merkle trees. This could be hexary trees, sparse binary trees, whatever, but they could even experiment with things like SNARKing and STARKing Merkle branches, and it could involve accumulators that don't involve Merkle trees at all.
L
Another really nice advantage of this is that it creates a very natural place for the existing eth1 environment to kind of slot itself in, which is basically that if we can write an eth1 stateless client in WebAssembly, then we can just stick that into one of the execution environments, and the system will kind of live inside of there with all of the same rules that it had before, which is really nice.
L
So inside of the new proposal document, it's definitely simpler, but I basically did the same thing that I wrote up before, which is just what the actual protocol-level spec for it would be, and a very simple example of how you would implement in-shard eth transfers on top of it, and that section turns out to be even simpler than it was before. So, if people are following along the document, there are a few sections.
L
There is an introduction, out of scope, standardized Merkle proofs, beacon chain changes, then shard processing, and then implementing in-shard eth transfers, a section that describes how you implement just a simple execution environment for people sending eth to each other on top. So it's weird, because even though this design is even more abstracted in some sense, it still, at least from my experience writing this, feels like the experience of actually writing a concrete execution environment that people would use on top of it.
L
So I guess, people, feel free to take a look at that. It also talks about cross-shard execution, how generic fee payments and markets are going to work, and so forth. I know I already presented about this at the workshop a week ago, but if anyone has any more questions, I'm happy to answer about that first.
L
Another unknown is basically the challenge of fragmentation: what if there are many different environments? Does that mean that we have to make it easy to jump between different environments? Does that mean that it would become harder to upgrade, or would it become easier to upgrade? The main kind of proposal that I've heard for mitigating it is this:
L
We could do something like, for example, when we first launch phase 2, basically make sure we launch with one execution environment and possibly even only allow one for some time. Or, alternatively, if we want to be able to upgrade it, then we could just kind of have a social contract where we agree that doing mutability forks on the code of that execution environment for some period of time is, you know, okay for protocol governance to do.
L
You know, basically just because pools would handle that responsibility way better than individuals would. And so we need this two-layer structure where there's one class of node called a relayer. The relayer would grab transactions from users; it would add Merkle proofs, it would package them up, and then relayers would make proposals to proposers, and proposers would choose one of the blocks from a relayer. The relayers pay the proposers, and the transaction senders pay the relayers.
L
So in terms of risks, I would say fragmentation, and us not having a good strategy for ensuring that upgrades can continue to happen in the context of this design, is probably one. A second one is: would this kind of market design end up centralizing in unexpected ways? Another very specific issue is this:
L
what if the thing they want to do is just censor specific execution environments, or, within execution environments, censor Merkle branches that touch one particular object? I mean, for the second one I feel like there are ways to get around the issue, kind of inside of the execution environment. So you can do fancy things, like creating code that regenerates Merkle branches as things get added to a block.
L
So it should be possible to design execution environments that feel sort of all-or-nothing in terms of what you accept, but there is still this ability for 51% cartels to just agree to not accept blocks from entire environments that they don't like. There are mitigations for that, but to what extent the mitigations are going to work is something we probably need to talk about more on this topic. Another thing I've been working on recently, and I mean this is totally not final:
L
the code is not done. So the idea behind SSZ partials is Merkle proofs for those parts of an object that you care about, but not the entire object, and it comes with a Python wrapper that lets you manipulate it. The thing that I haven't written yet is the functionality for updating it, but that probably would not even be that difficult.
L
The idea here is that, as I said, if you have one of these objects, you would know that you are able to do whatever work with it that you are able to do on the original object, and this makes it very easy to write light client protocols, because probably the majority of light client protocols are basically going to be code that would run as if it were a full node, except on a partial object.
L
On Matt Garnett's question: a thing that you can do is that one execution environment can generate a receipt, and that receipt can be verified by the other execution environment. So you could do cross-execution-environment transactions the same way you do cross-shard transactions, and if those cross-execution-environment transactions are within the same shard, you can get them done at the base level, and we could look into coming up with ways to add that functionality if we want to.
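The receipt pattern just described can be sketched as follows. Everything here is hypothetical (the receipt shape, the commitment scheme, the function names); it only illustrates the flow: environment A commits to a receipt, and environment B checks the claimed receipt against A's published root and guards against replay before crediting.

```python
import hashlib
import json

# Hypothetical illustration of cross-EE transfers via receipts. Environment A
# records a receipt and commits to its receipt list; environment B accepts the
# transfer only if the receipt checks out against A's published root.

def commit(receipts) -> bytes:
    # Stand-in for a Merkle root over the receipt list.
    return hashlib.sha256(json.dumps(receipts, sort_keys=True).encode()).digest()

# EE A: burn 5 units and emit a receipt.
a_receipts = [{"to": "bob@EE-B", "amount": 5, "nonce": 0}]
a_root = commit(a_receipts)

def claim(receipt, claimed_receipts, a_root, spent):
    """EE B: verify the claimed receipt against A's root, then credit once."""
    if commit(claimed_receipts) != a_root or receipt not in claimed_receipts:
        return False  # bad proof: receipts don't match A's commitment
    key = (receipt["to"], receipt["nonce"])
    if key in spent:
        return False  # replay protection: each receipt credits only once
    spent.add(key)
    return True

spent = set()
ok_first = claim(a_receipts[0], a_receipts, a_root, spent)   # True
ok_replay = claim(a_receipts[0], a_receipts, a_root, spent)  # False
```

In the same-shard case both roots live in the same shard state, so the check can happen in the same block; across shards it rides on whatever cross-shard root delivery already exists.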
H
Since we talked last week a little bit about potentially letting execution environments in the same shard mutate another execution environment's state: I don't know if that's still something you think might be possible, or if you've thought of some reason why that's not a good idea in the meantime.
L
The spec doesn't have that yet, but it's certainly theoretically doable. The idea would basically be that there would be a function in the EEI for executing environments: you just have code that says jump over to this other execution environment and apply a block with this data inside of it. That seems totally doable.
L
So we want to make it simple to run multiple execution environments inside of a block, even one after the other, which seems like a goal. It's not written in the proposal yet, but it's definitely our goal that it would be implemented, and it's not hard to implement. Then a block definitely could be modifying multiple different portions of the state, but that's not that bad, right? Basically, what that means is that in the blocks there are partials.
P
If you're making a smart contract language, you'd have to write the compiler in a way that allows smart contract authors to pretend there is a synchronous interaction going on, when actually there isn't. Having all that complexity from jumping between execution environments and processing multiple execution environments per block is the kind of thing that can get messy real fast, so it might actually raise the complexity.
L
I guess fundamentally I definitely agree, and I think the whole thing that's wonderful about this proposal is that it lets you not have those things in the consensus layer and just figure them out in compilers and layer 2 later. The only reason I could see it being a good idea to allow execution environments to dynamically call others is if we really, really care about the goal of making it very difficult for 51% cartels of block proposers to censor specific execution environments.
P
You know, defining the one true meta execution environment that then basically executes dynamic execution environments, at random or something; you can just move that problem one layer down. You don't have to have any consensus change; you can just say this is the one true environment.
L
Yeah, yeah, and there are actually a lot of things that you can do, right? For example, if we set it up so that there's one environment that has a lot of network effects, that environment could have code that leaves at least one quarter of the block space to the other execution environments.
L
Now, the one actual issue about partials that does annoy me a bit is that, with our current approach for serializing lists, the path toward a value depends on the length of the list; it isn't just a static thing. I have a proposal for changing how hash-tree-rooting of a dynamic list works that might fix that.
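The annoyance just described can be shown with a toy depth calculation: if a list is Merkleized at its current length, the depth (and hence the generalized index) of element 0 changes as the list grows, whereas padding the leaf layer out to a fixed maximum keeps the path static. This mirrors the general idea only, not the actual proposal; the function names are made up.

```python
# Toy illustration of why length-dependent Merkleization makes Merkle paths
# unstable, and how a fixed leaf-count limit keeps them static.

def next_pow2(n: int) -> int:
    p = 1
    while p < max(n, 1):
        p *= 2
    return p

def depth_at_current_length(length: int) -> int:
    # Tree depth if we Merkleize just the current elements.
    return next_pow2(length).bit_length() - 1

def depth_at_fixed_limit(limit: int) -> int:
    # Tree depth if we always pad the leaf layer out to `limit` leaves.
    return next_pow2(limit).bit_length() - 1

# With length-dependent trees, element 0 sits at depth 1 in a 2-item list but
# at depth 3 in an 8-item list, so its Merkle path depends on how long the
# list happens to be at proof time:
d2, d8 = depth_at_current_length(2), depth_at_current_length(8)

# With a fixed limit of, say, 1024 leaves, the depth is 10 no matter the
# current length, so proofs and generalized indices stay stable:
fixed = depth_at_fixed_limit(1024)
```

The cost of the fixed-limit approach is a constant number of extra zero-padding hash levels, which is why it trades a little hashing for static paths.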
L
We also, I think, want to potentially rewrite the SSZ partials implementation to use types in a different way, because right now the way types are written is kind of ugly. But otherwise it's a thing that exists, and clients probably could start with the partials; we could even start building test cases for them fairly soon.
L
So the third thing for me is that I put up the kind of most basic PR for data availability proofs; it just links things. I mean, the basic idea is just data availability proofs as we've done them for a long time, or as we've written in the paper, and the idea has not really changed there. It links to implementations of binary-field FFTs that you can use to compute the erasure coding and to interpolate fairly quickly.
Q
We have a manuscript which we feel comfortable sharing, which I'm posting in the chat right now; we submitted it to USENIX and we're waiting for feedback on that, which will probably take one or two months. Outside of that, and this is already a few weeks old, we published a little blog post series on PCPs.
S
We've finished running our initial test series, which was defined within the p2p test document on our GitHub. I can't drop a link here, but I can add it as a comment on the GitHub. We're going to release the data tomorrow; right now we're just working on presenting that data in a way that's human-readable.
S
That's all I'll say for now, because I don't want to go too far into detail until we have our comprehensive data available, which will be released tomorrow; I will share that when the time is appropriate. Last week we worked with Felix: we deconstructed discovery v5 and identified...
T
Hi everyone, I'm filling in for Raul. I asked him before this call if he had any updates and he said no; he wasn't trying to do any major update this week. So I'll just throw one thing into the discussion: we've put a lot more effort into specs for libp2p, because it's a big barrier for implementers. So we've got a guy working on that full time now.
S
Well, one thing we'll probably want to do, after we've run all these tests, is identify and spec out what the actual implementation is going to be, something general enough for everybody that hasn't implemented it yet to reference, because I think it's important that all of the clients implement things in the same manner, to prevent any additional or unnecessary complexity.
S
So: what are the average message sizes going to be, what is going to be the max size, what are the metrics we're testing going to look like, what are those values going to be? Because the initial tests that we're running right now are good, but they're not necessarily going to be indicative of what's actually going to occur within the production network, because it's just the bare minimum protocol that we're testing; we're not using any SecIO and there's no TLS.
S
And I've heard that that adds a significant amount of computational overhead, which has some performance effects. So we'll probably want to revamp these tests and take a more realistic approach after we get this initial data out the door, based on those results, and I think we should come to that decision as a group.
E
Like I had kind of spelled out, this design where basically you have, say, eight clients, you break them up into a bracket system, clients pair up, and they get to where they can talk to each other using basic TCP connections: just a static node topology, no discovery.
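The bracket idea described above can be sketched as a pairing schedule: in each round, connected groups are paired off and a representative from each side does a "can they talk?" check over plain TCP, merging the groups until everyone is in one component. This is illustrative only; the client names and function shape are examples, not a real harness.

```python
# Illustrative sketch of a bracket-style interop schedule: in each round,
# groups of clients pair off, one representative per group is tested against
# the other over a plain TCP link, and the groups merge on success.

def bracket_rounds(clients):
    """Yield the list of (representative, representative) pairs per round."""
    groups = [[c] for c in clients]
    while len(groups) > 1:
        pairs, merged = [], []
        for i in range(0, len(groups) - 1, 2):
            pairs.append((groups[i][0], groups[i + 1][0]))
            merged.append(groups[i] + groups[i + 1])
        if len(groups) % 2:
            merged.append(groups[-1])  # odd group sits this round out
        yield pairs
        groups = merged

rounds = list(bracket_rounds(["lighthouse", "prysm", "nimbus", "lodestar",
                              "trinity", "artemis", "harmony", "yeeth"]))
# 8 clients need 3 rounds: 4 pairs, then 2, then 1.
```

The appeal is exactly what's argued in the text: with a static topology and no discovery, any failure in a round points at the two implementations in that pair rather than at the network layer.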
E
No additional latency is added, and you keep pairing up until all the clients can actually talk, and that shouldn't be that much code to do. It basically gives us a baseline to work from, and from there we can either decide to start adding additional nodes until we see it break, or try
E
different topologies, or add bad actors, or you could just go ahead and switch and say: okay, now that we've got this working, let's try it with libp2p. The reason why I think it's useful to do it this way is just that it really allows you to isolate the unknowns. I feel super confident that we could get stage one done; we could say that we have the nodes talking.
E
It eliminates a lot of variance and confounding factors, doing it that way. So I've heard sort of a mixed response to that. Some of it might have been due to the fact that some people might have thought I was saying not to use libp2p; that's not what I was saying. So I'm just curious what the general sentiment on that strategy is. I can put a link to it.
V
Yeah, so from our perspective: we were trying to figure out that if we have libp2p as a base framework, and that's something we're going to use in production, then if we strip out all the higher-level protocols we simply just have a transport, which is TCP, and we have multistream-select, or the multistream kind of functionality. For the first level of testing, instead of setting up raw TCP connections, can't we just use basically the libp2p basics?
V
Yes, my understanding is that libp2p at the moment is, for the parts that are already implemented, interoperable: Rust, Go, and JavaScript, without the high-level protocols, already interop. So it's not something we need to test. If we just have raw capability with multistream-select, you should have multiple different clients being able to at least talk to each other, and then a very simple wire protocol on top of that. So the message formats are already set up for us.
M
I'm not too sure about this because I haven't got there yet, but I'm not too sure how well the daemon will interact with the in-browser stuff that we're working on, like our client trying to get the actual beacon chain itself in the browser. So I don't know how that's going to play out. That's why we've been just heads-down on the p2p.
S
Yeah, well, it wouldn't just be that; it would be things like consistency and availability and the process of relaying messages throughout the network, and also the implementation.
E
The only thing it's lacking is that it still uses protobufs; I didn't rip that part out yet. But that was back in January.
R
Yeah, one thing I want to bring up: I think the most complicated part is not the bindings themselves, because they just use protobuf to serialize and send a message to the daemon through the TCP socket or UNIX socket. I think the troublesome part is when you need some feature, like you want to disconnect a peer who is trying to connect with you, and currently the daemon
R
doesn't support something like this, so we need extra support or modification to make these things work. So I think that's the most troublesome part.
R
From my point of view, I think the functionality in the daemon is sufficient to do the internal testing, at least for the beginning. We can connect and disconnect and send and broadcast things; from our understanding, it is enough. I mean, if we want to write the native bindings to the libraries, that is a lot more work than using the daemon.
S
So another thing some people have been having issues with is peers that are added to the routing table without any routable IP address in libp2p, in their Kademlia implementation. I think that Felix's discovery v5 is going to be able to account for that, because you can add arbitrary node metadata.
S
But that's also what we're intending to do with these tests: we start with the minimum protocol, and then we begin to add other components; we test each one individually and we compare them to the baseline. That way we understand exactly which ones are going to result in some sort of performance degradation, and then we understand what needs to be optimized.
S
We have a shared channel between Whiteblock and Protocol Labs as we're proceeding with these tests. So it's good to have feedback from them directly, and also their input, to just make sure we're not doing anything incorrectly. So if we get weird results or data, it was a collaborative effort; it's not any one particular person's mistake.
V
I just want to clarify that we're not actually using that many libp2p things. I think pretty much only two things we're using: we have a transport, which is just TCP; we have optional encryption, so we can just take that out; and we have multistream-select and multiplexing, so essentially when we connect to a single node we set up multiple streams based on the protocols. Those are the only things of libp2p we're essentially using.
A
We need a pub/sub, and gossipsub, via our initial testing many months ago, was the best thing that we found. If it needs to be modified, or if we need to take a different approach, we still need the pub/sub capability. So if that is broken in some way, we don't necessarily need to break out of libp2p, we can just solve it there, but we should get the test results and data on how that works before we make this call.
A
My opinion is to define that very minimal set, as Adrian was suggesting, of components to just talk to each other in libp2p, rather than something outside of it. I know that somewhere on the order of half the teams don't have libp2p right now, and that is likely to change to some extent over the next few months, and we can't forget that half of the teams do have libp2p and are already using it.
A
The intention is to release something at the end of May, I guess in approximately a week's time, and then release the frozen spec at the end of June, and that is what we will do. The tests that are being released are a roughly set number of additional tests on 0.6.1, and then we'll probably stop releasing stuff on the 0.6 branch and subsequently be on the next one, I guess. So let's continue the conversation in all the channels.