From YouTube: Eth2.0 Implementers Call #20 [2019/6/27]
Description
The original livestream failed due to technical difficulties. This was recorded locally and uploaded later. Apologies for the local recording software not displaying the names and highlighting the speaker. Should be back to normal next week.
Agenda: https://github.com/ethereum/eth2.0-pm/issues/51
A
The intention here is to be feature complete — not to just be meddling with things to make them cleaner. We're trying to clean everything up beforehand and really get it into a stable place for implementers, auditors, formal verification people, fuzzing, etc. to dig in. Obviously, for three out of the four of those, the intention is to find issues. So if we have issues, and they're relatively minor bugs, we're going to be releasing minor releases. We're also going to continue our testing efforts, so we're going to be releasing —
A
You know, 0.8.0-t1, t2, if we're incrementing just on testing and not substantive things — that would just be an increase in the test vectors. If the increase in test vectors found some minor bug, we would fix the minor bug and release it as a minor version. For the audit and formal verification stuff, there might be some structural things that come up; there might be some deeper change where maybe some sort of additional abstraction is warranted for X, Y, or Z.
C
A quick two or three things: so there's this PR open, but it basically aims to complete the spec test coverage. There are these two open issues remaining — how we formalize the finalization, and how we deal with [inaudible]; it's really just ways of representing data — and I hope to complete these remaining edge cases soon. All the other tests are complete, so we get much higher coverage of the spec.
C
Real quick on the ongoing fuzzing: we've been trying to improve how we move on from our initial set of states to more than just that first set of states. The difficult thing is that it's not like a virtual machine, where there are many, many different input states. The input states here are relatively sparse, because there are all these invariants that have to be met by the state. So what we do is we create input states, then run the first block transitions, and then, when there's a valid post-state, we continue from there. So we expand and expand the set of output states.
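The corpus-expansion approach described above can be sketched roughly as follows. Everything here — the toy state, the `apply_block` stand-in, and the mutation strategy — is a simplified illustration, not any team's actual fuzzer:

```python
import random

def apply_block(state, block):
    """Toy stand-in for the beacon state transition: rejects inputs
    that violate an invariant, otherwise returns the new state."""
    if block < 0:  # pretend negative 'blocks' violate an invariant
        raise ValueError("invalid block")
    return state + block

def expand_corpus(seed_states, rounds=100, rng=random.Random(0)):
    """Start from a sparse set of valid states and grow the corpus:
    every state reached via a *valid* transition is itself a valid
    input state for future fuzzing rounds."""
    corpus = list(seed_states)
    for _ in range(rounds):
        state = rng.choice(corpus)
        block = rng.randint(-5, 5)  # mutated/random input
        try:
            post_state = apply_block(state, block)
        except ValueError:
            continue  # invalid input: useful for bug-finding, but not added to the corpus
        corpus.append(post_state)  # valid post-state: keep it as a new starting point
    return corpus

corpus = expand_corpus([0])
```

The key point mirrored from the call: because valid states are sparse, random generation alone won't find them, so the fuzzer grows its state set from valid post-states instead.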
E
Hello — so yes, we just joined, yes, hi. Some things: we had a retreat and had a great week [inaudible] to discuss and play around with the new Python libraries. We plan to migrate from asyncio to trio, which is another Python async library. And the most important recent progress is that Alex has a huge PR for version 0.7.1, and also, since the spec freeze is coming, we plan to bump to version 0.8 all together. Yep, I think that's all.
G
We have updated our client to the latest spec, 0.7.1 inclusive. As usual, we are passing all tests from GitHub, and our benchmarks showed no significant performance changes since the 0.6 version of the spec. Also, we have released the validator RPC server part; we will add the client part soon, in a future version. And we are working on the libp2p minimal implementation — we are porting it to Java, and we have finished the second part and are moving forward. Next, we are going to add persistence to our client.
H
Yeah, we've been building the few last kind of stubbed pieces of the client — things like getting valid eth1 data for creating a new block, getting our deposit processing actually working with the real Merkle tree, and syncing: getting a real sync going between the network and a chain. We're still in the process of moving to the 0.7.1 spec, and we're also working on getting a benchmarking chassis set up. Yeah.
H
So, AssemblyScript — yeah. So we have kind of a rough implementation of LMD GHOST; we haven't integrated it into the code yet, it's still a PR. And we are also thinking about doing SSZ — rewriting SSZ in AssemblyScript. I think the blocker there is a SHA-256 implementation, so those are still works in progress. I think they're going to take a little bit of time, but we're still working on it — that's still, I guess, a priority. Cool.
F
Beyond benchmarking improvements to the client itself, we notably spent a bunch of time working on transforming the YAML files, because they use hex strings to represent binary data instead of base64. So there were just a lot of hiccups, basically, based on that, but things are good now. Aside from that, we put together a central repo for Ethereum 2.0 API schemas that we'll be sharing — there's going to be a chat about this on this call, so we don't want to take time away, but the details are over in the chat. Yeah.
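For context on the hex-versus-base64 issue: the same bytes serialize to very different strings under the two encodings, and a consumer expecting one format will fail on the other. A minimal illustration (not the actual test-vector tooling):

```python
import base64

raw = bytes([0xDE, 0xAD, 0xBE, 0xEF])

hex_repr = "0x" + raw.hex()                 # hex string, as used in the YAML files
b64_repr = base64.b64encode(raw).decode()   # base64, as some consumers expected

# Decoding must match the producer's choice of encoding:
assert bytes.fromhex(hex_repr[2:]) == raw
assert base64.b64decode(b64_repr) == raw
```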
I
Yeah, so the goal here is: we just want to get feedback and sort of collect together these API schemas, so that people wanting to build on Eth 2.0 have one place to go. This could be upstreamed into the spec repo, or live here — we don't really have a preference right now, but we wanted to start getting feedback on the idea. So, like Errol said, don't take up too much time as the call goes down the agenda, but let us know afterwards. Thanks.
J
Okay, so my update is shorter now, due to my teammates beating me to the punch and upgrading from v0.5.1 to v0.7.1 of the spec. So I think that's [inaudible] — yeah, I think that's the first time we've actually been, like, up to date with the spec, so, you know, interpret that how you will, but they really did a good job. So that was awesome. We are also working on some stuff with deposits — tweaking that whole process, incorporating some feedback received from the Hobbits spec; there were some good comments about —
J
You know, like, in some cases it needed to be a little bit simpler for its purpose, and in some, more modifications to match other implementations, so that it's less work to use it — and then also matching —
J
— you know, the actual real wire protocol. And really, a lot of credit goes to Dean and Rene for volunteering their time to rewrite the spec. It was a little bit spread out over some documents, and they both kind of took all of the information, incorporated it, and made a much better version of it. So that was cool, and I think that's pretty much it.
K
So, last week we also updated to the 0.7.1 tests, which — we were really happy to see all the bug fixes, and we were able to remove our workarounds, so that's really great. And this week, we did some fixes for our drivers in RocksDB, so it's more stable now, and we did a major overhaul of our [inaudible] library, which still hasn't been merged, but we had it in a library, and the [inaudible] network stack is doing the work. So that's it for us.
A
Yeah — thanks for finding some of those bugs, Lighthouse.
B
Hi everyone. We're at the moment passing all the 0.6.3 tests, and we've decided not to keep up to date with 0.7.1: we're going to wait for the spec freeze and then jump straight to that one. And since the spec freeze is happening, we've also decided to start doing releases, so we're targeting a version 0.0.1 release of Lighthouse next month, which will, of course, still be just for developers and researchers.
B
So, instead of doing all the spec updates to 0.7, we've been working on things like the reduced-tree fork choice that was discussed on ethresear.ch, and we're already seeing some good speed improvements with that — around, you know, sort of five times faster than our previous implementation, without any significant overheads, which is great. We haven't got any direct benchmarking to show that yet, but we should expect that soon. On the networking front, we've been making some great progress with our libp2p implementation, and especially discv5, discovery version five — we're proud to say —
B
— we have discovery version five running in Lighthouse at the moment, doing discovery, although it's just an initial implementation for our purposes — it's not the full spec yet. And also, we've been having a chat with the Apache Milagro maintainer, and we're going to start pushing some fixes and some stuff up to them as well, because that's our core BLS library. Yeah.
D
Now, beyond the spec work, we continue working on our networking library — we forked the Nim [inaudible] library, and we're adding more and more functions to support peer-to-peer networking. We also launched a libp2p-daemon-based testnet last week, so now we have testnet 0 based on our [inaudible] and testnet 1 based on libp2p. We will do a blog post — probably not this week, but maybe two weeks from now — to explain how to install everything; we are still ironing out some details.
N
No — so, no real updates from us. We kind of stopped working on it while the spec was still rolling. I think we did some refactoring stuff since the last time I was here, and then, once the spec is frozen, I'm gonna try to catch up to 0.7-point-whatever faster than Artemis did, just to, like, get some clout there. But yeah, that's pretty much it. — Challenge accepted! Y'all wanna make [inaudible]?
O
Yeah, so we started this formal modeling of the state transition function about a month ago, and, yeah, so far we've been trying to understand the details and the rationale underlying it under the hood, and then, you know, we're starting to actually model it in the K framework. Yeah — that's what I have right now. Cool.
H
So, let's see — on the phase one side, I wrote up a small, definitely incomplete checklist of things that we might want to consider changing, or at least that we'll have to decide on for phase one. I think it's the most recent issue in the issues list at the moment. So the big ones there that I can think of: one of them is just a shorter block time.
H
I also want to consider removing the attestation list and basically only having one attestation object — or at least pushing the data from the one attestation object up into the header — the reasoning basically being that I'm not really convinced there's a particular need to have space for more than one attestation, because the set of things that we're using these shard attestations for is much smaller than the equivalent set for beacon chain attestations. And there's a couple of other smaller ones. So, I guess —
H
— if anyone wants to take a look at that list: none of it's urgent, but once the 0.8 spec is frozen, I do expect that we'd want to move full steam ahead on getting the phase one spec finalized, so it's definitely good to start looking at. So that's phase one. On the phase two side —
H
— the main thing is that I've been talking to the phase two research people, people on ethresear.ch, trying to figure out — they've basically helped me figure out how fee markets would work, and some of the issues around batching transactions. And this is more on ethresear.ch: batching transactions, how to make sure the scheme is censorship resistant, and how to make sure we actually get the efficiency gains from batching, and so forth.
H
So, I think the kind of concrete possible changes to the basic execution environments — or, sorry, the basic phase two spec — that seem likely: one of them is changing it so that you can have multiple beacon chain, or multiple top-level, transactions in one shard block. One of them is —
H
— allowing larger shard execution environment states. So instead of 32 bytes — you could still have 32 bytes, but you could pay more and potentially go all the way up to something like 32 kilobytes. So basically, the upper limit is that it's something that still needs to be small enough to fit into a beacon block for a fraud proof, but otherwise it can be larger — and it being larger has a lot of really nice benefits. So, like, you can have some level of proof batching happen between blocks.
H
You can have some level of batching happen even if there's — or, you can have multiple transactions that get created independently both getting included, without either of them breaking, as well as some other things. So that's nice. By the way, Karl — are you by any chance on the call?
H
Well, if not, then: Karl has been doing some wonderful thinking around taking plasma-like ideas and applying them to a context where data just gets published on chain, and it turns out that you can do that to do some really nice things — like potentially doing cross-shard transactions much more easily, improving efficiency a lot. Theoretically, in the normal case, you might not even need to publish Merkle proofs into the chain.
H
You can achieve extremely fast de facto confirmation times for any application, even if the individual shard block times are still longer — like four seconds, or even eight, or whatever. So that's on ethresear.ch — that's something I'm also potentially really excited about, because it lets us create basically user-experience equivalence to all these more centralized platforms, without us actually being more centralized. So —
P
Yeah, so I only have just one update, basically. I was at Zcon, and Vitalik was at Zcon, and there was this excitement for a new curve which was introduced with Zexe, called BLS12-377. So it's kind of similar to BLS12-381: it has the same embedding degree of 12, but a slightly different bit size of 377, and the reason for this excitement is that you can do efficient proof recursion —
P
So there is a little bit of work to take the existing implementations and port them over. The other downside is that it has a cost in terms of hash-to-G2, so that becomes a bit more expensive. So I think, at this point in time, pragmatically speaking, we're looking to stick with BLS12-381, which has more maturity, more infrastructure, more testing — and by sticking with BLS12-381, we also, you know, can meet the DevCon suggestion of launching the deposit contract during a public ceremony.
P
So I guess it will be interesting to see how this space kind of evolves in the future. I mean, it's mind-boggling how much improvement we're seeing over time, and I wouldn't be surprised if there are new suggestions that come up, you know, this year and next year. So maybe, you know, during phase one or phase two, we could potentially evaluate a change to a new curve, but I'd say, in the short term, stick with BLS12-381.
P
You know, it does have a little bit of weight in the space, and momentum, so it's possible, you know, that the mere fact that we do go ahead with BLS12-381 might be enough of an incentive for others to come in. One thing that was, you know, voiced during the standardization meetings is that other people also want, you know, no fragmentation, and cohesiveness. So we'll see what the next meeting comes up with — which is in a bit less than two weeks — but yeah, for sure.
H
Although what that does sound like is that there will be some level of fragmentation, especially going further into the future, whether anyone wants it or not, basically — because you'd expect to keep on finding better and better curves that have more and more capabilities. And so you start with one that has one level of full recursion with pairings, and you might find one that has two levels of recursion; eventually, someone might find a cycle, then someone might find a more efficient curve with a cycle. So that's, I guess — there's —
P
So, one of the things we're trying to do with the standardization effort is to have the notion of a cipher suite — so, a little bit of metadata which specifies, you know, which curve you're using and which hash function. And so I guess this is maybe a good test of the robustness of the cipher suite, you know: how well does it work with the existing curves that we know of? The IETF standardization effort is not just for blockchain projects, so they will be interested in standardizing all the various meaningful options.
Q
Yes, just a very quick note — so, cheers from Barcelona. I have been contacted by the StarkWare team, and they showed some interest in working with the simulator, and the idea would be to study how various network parameters are affected by block size. This is in the context of the Ethereum improvement proposal EIP-2028, so I just wanted to share that. Thanks.
P
The last month has been really busy. Probably many of you have seen that we released a tool called Scout, which is a black-box prototyping environment for phase 2 execution. It uses wasm internally, and it was based on Vitalik's phase 2 proposal. There is a research post explaining — well, introducing Scout and giving some background; just look for "Scout" on ethresear.ch, and the code itself can be found on —
P
— they have to get a witness for a state, they have to verify that witness, and then they need to apply the changes on it. So our first goal right now is to prototype this witness verification. One way to do that is using SSZ partials; we haven't got that implemented yet, but that is one of the next steps, and the main outcome we hope to get out of this —
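SSZ partials are essentially Merkle multiproofs against an SSZ hash tree. As background, here is a minimal single-leaf Merkle branch check in the style of the spec's `is_valid_merkle_branch` — a toy example, not Scout's actual code:

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def verify_merkle_branch(leaf: bytes, branch: list, depth: int,
                         index: int, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path.
    `index` is the leaf's position among the 2**depth leaves."""
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:           # our subtree is on the right at this level
            value = hash_pair(branch[i], value)
        else:                          # our subtree is on the left
            value = hash_pair(value, branch[i])
    return value == root

# Build a tiny 4-leaf tree and verify leaf 2 against its root.
leaves = [bytes([i]) * 32 for i in range(4)]
l01 = hash_pair(leaves[0], leaves[1])
l23 = hash_pair(leaves[2], leaves[3])
root = hash_pair(l01, l23)
ok = verify_merkle_branch(leaves[2], [leaves[3], l01], depth=2, index=2, root=root)
```

A witness for a state is a set of such branches; verification recomputes the state root from the leaves the execution touches, before applying changes.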
R
I'll go ahead — cool, cool. So, I guess, number one: I worked on a wiki this past week, and so I'm actually posting it here in the chat. This covers a lot of the glossary terms, a lot of the material, a lot of the current conversations, and basically consolidates all the info so far on phase two in one spot. And so I'd like to get this on the Ethereum GitHub wiki, but I'm not sure the best place to put it up — so maybe, Danny, if you have a suggestion there.
R
Awesome, yeah, that works. Other things on our front: you know, we've been collaborating with the Ewasm team on various things. So one thing that we've been doing is trying to support Scout, and so we've been working on implementing SSZ partials in Rust and helping with that effort. Also, we continue to dive into kind of the theory and some of the ideas behind the relay markets — I think, you know, Vitalik talked about that, and there's a discussion on ethresear.ch — so we're kind of thinking about that —
R
— a little bit deeper, and there's been good conversation there. Another thing is what Alex just mentioned: we are looking to help basically get a phase one testnet up that can support a certain number of shards, that we can integrate Scout and the basic execution engine into, so we can start having playgrounds with execution environments where a number of assumptions can be tested and benchmarked and explored. Also, on our front, we're in kind of a transitionary phase.
S
So, we had submitted the [inaudible] paper to Usenix, and we got accepted to phase two, with some comments to take into account. So there will be a new version of the paper produced — without any change to the actual results, just something easier to read, I would say. So that's the paperwork. And something that we're working on right now: we're looking at how to use Hobbits for [inaudible] as a way to execute [inaudible] on any shard.
T
We basically — we don't have any update on grants yet; we're very close to making a couple of grants, some in conjunction with other funding sources, some on our own, so we'll probably have an announcement about that next week. But all of them are aimed at building libp2p implementations in the languages that all of the client folks on this call need. So these should be encouraging announcements, and I —
T
— think they'll bear fruit by September, when we need them. The second thing — and then people can ask questions, or whatever, afterward if they want — last time there was a question about TLS, and I think I didn't answer the question well, because I didn't fully understand it. But I believe the question was sort of along the lines of: what security is being provided by TLS, versus what does the application layer need to provide? And so, assuming that's a correct understanding of the question —
U
Of course. So, transport-level security is necessary, first of all, to not be subjected to man-in-the-middle attacks — that could be a node that's being impersonated — and to be able to authenticate the peer that you are interfacing with. If you, for example, have a public key for that peer, then by authenticating them when initiating the connection, or handshaking, you're able to certify that you're really speaking to them.
U
Of course, for, you know, all kinds of reasons — to avoid observability and censorship, and so on — encryption is necessary as well at the transport layer. And, of course, the application itself needs to use crypto primitives to, for example, sign specific pieces of data, assume specific roles, or —
U
— it helps a lot as well with censorship resistance. You can very easily imagine a transport — when HTTP/3 is deployed in real life — you could easily imagine a transport that mimics HTTP/3 by using QUIC with TLS 1.3 over port 443, for example. So, therefore, it would be very difficult to censor.
U
There are teams — we're actually probably going to be funding a team as well — to implement some primitives that are lacking in a JavaScript environment, to be able to conduct Noise handshakes there, essentially. So that would be — and, like, if you want, I can go down into why we're interested in Noise and its handshakes, but basically we're probably looking at a system where we can conduct the IX or the IK handshake [inaudible].
U
Yeah, yeah, it does. So one of the clear benefits that I personally am very excited about is that it allows us to send push data on the first message. As the handshake is being completed, or as it's going through its different steps, any push data — or any accessory data — that is conveyed in any of those handshake messages acquires different levels of security, based on the state of the handshake.
U
So you can imagine, on an IX handshake, for example, on the first initiator message to the peer, that accessory data — or that push data — would be plain text. But then, if the other peer wants to push back — so, if the responder wants to send any data — then on that second message, on the response, there's already enough cryptographic material to secure that push data, so it would be encrypted.
U
So it makes for a very elegant design, I'd say. TLS 1.3 also provides this capability, but, for example, in Go it's not — I don't think the Go SDK is capable of sending zero-round-trip data yet; it should be on the roadmap. But Noise does already, and there is a variant of QUIC as well that uses Noise for its handshakes — it's called nQUIC. So, I mean, I do see some very interesting developments there, and major adoption by projects that definitely have, you know, serious reputations — so I'm pretty confident with this. Okay.
U
I would say — sorry — in the early days of libp2p, we definitely wanted to move away from SecIO. That said, SecIO is pretty trivial to implement, and for a best-baseline interoperability across the libp2p implementations, you want to implement SecIO, because this is what has led to the implementation support — and, for example, for the JS implementations, that's exactly what we're doing. Not all programming languages support TLS 1.3 yet.
T
I would just add one more thing: SecIO, as of right now, has not been security audited — that'll actually probably change by the end of the year — but these other protocols, like Noise, I believe, have undergone something like a formal verification, and TLS 1.3, obviously, is an IETF standard. So, you know, there is that to consider as well.
U
Correct. So, in parallel with all of this, there is an ongoing [discussion about] the architecture of multistream. Right now, the selection of the encryption channel is being conducted in plain text, which is not great, but it does allow for that upgradability: peers negotiate what secure channel they want to adopt for that connection. This will probably move to the multiaddr, as a component of the multiaddr.
U
So you can imagine a multiaddr like: /ip4/, then the IP address, /tcp/, the port number, /secio — or /noise, or /tls-1.3. So that would allow peers to directly initiate a secure channel without having to conduct any plaintext negotiation out in the open, which makes the system prone to deep packet inspection and censorship.
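The multiaddr scheme described above is a self-describing sequence of protocol components. A toy parser, for illustration only — real clients use the multiformats libraries, and the exact names of security components such as /tls or /noise were still under discussion at the time:

```python
def parse_multiaddr(addr: str):
    """Split a multiaddr string into (protocol, value) pairs.
    Protocols that carry no value here (a security channel like
    /tls or /noise, or /quic) get value None."""
    valueless = {"tls", "noise", "secio", "quic"}
    parts = addr.strip("/").split("/")
    components, i = [], 0
    while i < len(parts):
        proto = parts[i]
        if proto in valueless:
            components.append((proto, None))
            i += 1
        else:
            components.append((proto, parts[i + 1]))
            i += 2
    return components

addr = "/ip4/192.0.2.1/tcp/9000/tls"
components = parse_multiaddr(addr)
```

With the security choice in the address itself, a peer can dial straight into the right handshake instead of negotiating it in plaintext.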
U
From the point of view of libp2p, we would basically be implementing and adopting, probably, a TLS stack — a library — in each language. You probably want to do that — like, you want to adopt an SDK, and you want to make sure that the language has support at that point for TLS 1.3. So yeah, we are likely, as a user of TLS 1.3, to be, you know, vulnerable to anything [inaudible].
J
I have a quick question. So, I think Mike mentioned in the beginning that y'all are providing a lot of funding and support for new implementations, which is badass, because that obviously helps several of the teams. I was curious about testing.
J
What's the status on that — what do we need to do in order to ensure that, you know, libp2p and the gossip protocol are production-ready? I know, like, our timeline for eth2 might be slightly different than libp2p's, and so I'm curious what your thoughts are on that, and whether it's just gonna be a grant or not. Yeah.
T
So, we think of the testing in two different — there are two aspects of it. One is the interoperability testing between the different languages, and that's an area where we are very interested in making a grant. We have a rudimentary system called [inaudible], which can orchestrate interoperability tests, but we would need some help — I mean, somebody with some time — to actually turn it into a proper interop test suite, which could also be used to validate that a particular implementation meets minimum requirements to be called [inaudible] —
T
— whatever that means exactly. The other side of testing is what I think you're getting at, Johnny — like, sort of production-readiness testing: basically, integration tests of the whole system, to get data on performance, and —
T
— whether it falls over or not. And for that, we've built a system that we call Testlab — you can look at it, it's at github.com/libp2p/testlab — and basically, what it is, is an orchestrator built on top of Nomad, for those of you who are interested in container orchestrators, and what it does is spin up large numbers of nodes — like a thousand nodes, probably, and in the future it could probably go beyond that. And that's our plan for testing.
J
If we could, you know — what's performant for us may or may not be what's performant for y'all; maybe y'all have higher requirements than we do — but it would be nice to be able to do sweeps on things like, you know, message rates, packet size, and bandwidth limitations — like, how fast we need actual gossip messages to propagate through the network — just so that we're aware of where things break down, because there's always — there's always options, you know.
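The kind of sweep being asked for can be organized as a simple grid over the parameters mentioned. A skeleton, where `run_simulation` is a hypothetical hook into whatever test harness is used — the model inside it is purely illustrative:

```python
from itertools import product

def run_simulation(msg_rate, packet_size, bandwidth_kbps):
    """Hypothetical harness hook: returns mean gossip propagation
    time in ms. A dummy linear model stands in for a real testnet run."""
    return packet_size * msg_rate / bandwidth_kbps

# Sweep the grid and record the combinations that blow the budget.
grid = {
    "msg_rate": [10, 100, 1000],        # messages per second
    "packet_size": [512, 4096],         # bytes
    "bandwidth_kbps": [1000, 10000],    # per-node cap
}
BUDGET_MS = 400
failures = [
    combo
    for combo in product(*grid.values())
    if run_simulation(*combo) > BUDGET_MS
]
```

The output of a sweep like this is exactly the "where things break down" map the speaker is asking for: the boundary in parameter space where propagation exceeds the budget.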
J
Okay, dude — I just wanted to say one last thing, real quick: it obviously makes sense that they want to do the interoperability kind of stuff first, but we have this deadline of, you know, January 3rd. You know, maybe we could all talk about how everyone feels — like, realistically: can we play out some different scenarios and see, with these tests, with a long-running testnet — like, how realistic is January 3rd, really?
A
January 3rd was a suggestion — okay? Obviously, it's a nice target, but it's not — that's not a deadline, and I want to tell the reporters that are listening to this: that is not a deadline. And that is something that is more in the hands of implementers than it is even in the hands of the researchers. So, you know, that was purely a suggestion, and something that is maybe feasible — but, you know, I expect the last mile to be long.
J
It's just — I'm not one of those groups, but, like, you know — obviously, I'm a model of sensitivity here — but I would imagine that groups that get, you know, their funding from the EF, they hear "January 3rd" and they're like, "Okay, we have to do everything possible," you know, not to miss that. And it's just — you know, I think the focus should — you know, it's obviously good to have a push date, but, like, no one should be like —
P
I mean, the January 3rd suggestion was very tentative, and I think it was mostly to try and avoid the December holidays. So basically, it would be: we wouldn't launch before then — January 3rd, or thereabouts, could make sense. What I have done is survey some of the implementers, to ask them if they think they will be production-ready in 2019 for a launch on January 3rd, and two of the teams have responded positively, with optimism that it is possible. So, at the end of the day, we only need, you know —
J
I would — I don't believe you reached out to us, and I'm curious exactly what we're defining "ready" as. Like, are we saying that there's going to be a three-month-long multi-client testnet starting in September, you know, so that we can kind of sort out any bugs that are found? And, if so, that means that that multi-client testnet is gonna run flawlessly for three months and then we immediately launch — it just seems improbable.
J
So,
like
that's
just
my
thinking,
I
think
all
of
us
could
push
really
hard
and
make
January
third,
but
it's
just
it's
dangerous.
In
my
opinion,.
L
Can I say something? Yep — okay, cool. So, we've been working with Prysm and a few different other teams; Renee's been working on implementing Hobbits. We're planning on doing kind of an impromptu meeting in Toronto next week, for anybody that's around — I think it's gonna be Preston and Renee, Dean, Greg, the ChainSafe guys; I think Anton might be joining us.
L
So
if
anybody
else
is
interested,
we're
going
to
kind
of
start
ironing
out
some
of
the
networking
stuff
and
trying
to
come
up
with
somewhat
of
a
loose
specification
for
what
that
stack
is
gonna.
Look
like
and
next
we're
gonna
try
to
move
on
to
do
some
research
in
terms
of
like
data
sync
like
peer
discovery,
etcetera.
So
anyone
wants
to
join
us.
Please,
please
do
it
we'll
be
in
Toronto
next
week.
A
Greg from ChainSafe suggested that we make a wholesale move of communications to Discord. Primarily, we communicate in one Gitter — there are, you know, maybe two or three Gitters that we communicate in — and then there's, like, tons of fragmented Telegram communications and emails and things like that. The proposal is to have a more unified place to talk.
A
One of the main downsides that I've seen is that it's just a little bit more overhead to come in, eavesdrop, participate, because you do have to create a username and log in. So, I think the minimum bar for me is to bridge the current sharding Gitter to, like, the general room on this Discord. Otherwise, it seems like people are generally very positive about this — would anyone who's not like to speak up?
A
Right — I don't intend to download it; I'm kind of at the max anyway. Let's do it — I think this is good, I think it makes sense. I know proto said that he would help us do that, so, proto, if that's — cool. And then, proto has a proposal for standardizing, I think, graffiti usage in testing. Proto, do you want to give a quick word on that?
C
Sure. So, the idea here is that you can use this one field that can contain any data in the block body — use it for debugging during testing. So you can put, like, this little meta information in it about, like, what kind of client is running, or producing the block, where the client is located, for how long it has been running — these kinds of meta information, metadata — and then be able to easily debug, like, large amounts of blocks.
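The field being described is the 32-byte graffiti slot in the beacon block body. A sketch of packing client metadata into it — the `client/version/region` layout here is made up for illustration, not any proposed standard:

```python
def encode_graffiti(client: str, version: str, region: str) -> bytes:
    """Pack short client metadata into the 32-byte graffiti field,
    truncating if necessary and zero-padding to exactly 32 bytes."""
    text = f"{client}/{version}/{region}".encode("utf-8")[:32]
    return text.ljust(32, b"\x00")

def decode_graffiti(graffiti: bytes) -> list:
    """Recover the metadata fields from a 32-byte graffiti value."""
    assert len(graffiti) == 32
    return graffiti.rstrip(b"\x00").decode("utf-8").split("/")

g = encode_graffiti("lighthouse", "0.0.1", "eu-west")
```

Because graffiti is consensus-ignored, clients can agree on a convention like this purely for testnet debugging without touching the spec.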
A
— figure out the minimum requirements to be production-ready. Some of this is a little fuzzy, because there are a lot of unknown unknowns that we're going to run into in the next four, five months, but it's probably worth at least beginning to enumerate the knowns there. So, why don't we take that to an issue in eth2.0-pm, Rico, and we'll just start the list, and then, from there, we can engage in the conversation.
A
Yeah, yeah — but, like: how long do we need a testnet before we feel comfortable? Do we need to do some incentivized testnets before we're ready? What sort of performance metrics and stress testing do we need beyond the network layer? Things like that. There are a bunch of things that maybe we can't quite enumerate today, because we haven't hit them, but there are some things that we should.