From YouTube: Eth2.0 Call #40 [2020/5/27]
A
Okay, testing and releases: we released v0.12.0 last week. This took me a lot longer than I thought it would, and I apologize for any delay there. We upgraded BLS to the latest IETF standard, fixed some rewards and penalties, and made a slight rewards-and-penalties modification to ensure that if you are performing optimally during a leak, you don't lose anything, as was intended. Other than that, some networking changes based off of a continued feedback loop with the testnets and clients. I know y'all are working on this; let me know if you have any issues. There's a big refactor in the rewards and penalties and how they are tested, in a much more granular fashion. We found some bugs in the Serenity spec when Proto was updating, and I do think that we have much better coverage at this point. So hopefully we won't see any more issues in how rewards and penalties work, but if we do, we'll handle it.

Okay, so we also have Alex's fork choice tests from TxRx. I did not end up getting those into the v0.12 release, but they are at the top of my list to get into v0.12.1, so those are coming. They look good; it's just a matter of figuring out the proper way to combine this external repo with the testing and release-vectors flow. So that's coming.

I don't have a good update on Stethoscope, the network testing that was being worked on. Proto, do you have any updates there?
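The leak behaviour described above can be pictured with a toy model (all names and numbers here are invented for illustration; this is not the eth2 spec's actual reward code): during an inactivity leak, attestation rewards are cancelled, so an optimally performing validator nets zero instead of slowly bleeding balance, while absent validators are still penalized.

```python
# Toy model of the rewards/penalties tweak described above.
# Names and numbers are illustrative, not the eth2 spec's.

BASE_REWARD = 100  # per-epoch reward unit for a correct attestation

def attestation_delta(attested_correctly: bool, in_leak: bool) -> int:
    """Net balance change for one validator in one epoch."""
    if not in_leak:
        # Normal operation: correct attesters earn, absent ones lose.
        return BASE_REWARD if attested_correctly else -BASE_REWARD
    # During an inactivity leak, rewards are cancelled, so an optimal
    # validator nets exactly zero; only non-participants are penalized.
    return 0 if attested_correctly else -BASE_REWARD

assert attestation_delta(True, in_leak=True) == 0
assert attestation_delta(False, in_leak=True) < 0
```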
B
No big updates, really. So we have these two different paths: Rumor, the tooling for network testing, which you can already use and which is more advanced, and then the Stethoscope project, which tries to tie it together into test vectors and integrated clients. That's a bigger effort to attack some larger milestone. Okay.
A
I'm
hoping
we
can
get
something
out
to
y'all
relatively
soon.
I
know
there
was
a
number
of
last
minute
bugs
some
around
genesis
and
some
around
some
just
basic
networking
prior
to
the
witty
watch,
which
we
can
talk
about
in
a
minute
a
couple
days
ago.
So
any
type
of
this
network
testing
right
aid-
oh
hey,
electron,
just
wanted
to
know
status
of
stethoscope.
If
there's
any
progress
there
and
what
we
might
expect.
A
All
right
well,
we'll
get
okay.
D
Okay, not much of an update. I've been on vacation for the last week and a half or so, but yeah, nothing much has happened since the last networking call update.
A
Yeah, okay, thanks. I was saying that we did run into some last-minute bugs launching the Witti testnet, so getting some of these test vectors out relatively soon would be, I think, really useful. But thank you; yeah, we can talk offline.
A
Great. I'm going to swap to client updates and testnets, and go ahead and let Afri talk about Witti and the others, and bring up any questions or comments about testnets and such.
E
Okay, sure, I'll give it a shot. We did a couple of attempts to relaunch the Schlesi testnet. The Schlesi testnet revealed a couple of issues and eventually lost finality, and I shut down my last validator today, so the Schlesi testnet is officially dead now. And I did a couple of attempts to launch a new testnet.
E
Once again we had a different genesis time for two clients, but in the end we figured it out, and we just launched yesterday night, I believe. We even had to deploy a last-minute hotfix on one of the clients, but in the end we have almost perfect liveness on the network. So we have this network up now, and people are encouraged to break it. That's the state of Witti as of today.
A
Yeah, cool. I think client teams are generally curious about the plan for the coming weeks and months. Maybe you and I can work on at least some bullet points on what the plan is, with testnet restarts, doing larger testnets and that kind of stuff, before we do a larger public launch.
E
Yeah, so I envision that we try to launch as many testnets as possible: whenever we see a reason to launch a new testnet, either because the old one fails or because we have a new version of the spec. Let's rather have more small, fast-failing testnets before going into the more stabilized phase. Sorry.
A
Great
one
item
came
up.
If
anyone
wants
to
take
a
look
here,
it's
primarily
it's
a
adding
the
ability
to
tune
when
what
time
of
day
genesis
happens.
If
afree
continues
to
launch
test
nets
with
main
net
configuration,
it
will
happen
at
2am
constantly
at
his
end.
I
know
australia
that
you're
really
happy
that
it
happens
at
2am
every
time,
but
for
the
sanity
of
of
aphro.
We
might
consider
adjusting
it.
A
Take a look at that issue. But yeah, we'll get some more concrete notes out on the testnet trajectory, or at least the tentative testnet trajectory; that was a lot of alliteration. Okay, cool, any other thoughts on this before we move on? Thank you, Afri. There was a heroic last-minute effort on your end, and from many of the client teams, to get it working, but it looks good now.
F
Hey all. So, past few weeks: we published a new version of our BLS library that uses the updated Herumi code compiled to WASM, so hopefully, if you needed that, it's available. Now we're basically just going through the 0.12 updates, and working on updating gossipsub to v1.1 as well. I'll be kind of curious to hear whether everyone else is working on that as well, but yeah, that's basically it for us.
A
Yeah, and I know Age has been looking into it. We can talk about that a little bit later in the networking section, if people want to share some notes on that. Thanks. Next, Nimbus.
G
Hi. So on the networking side we have been investigating, and solved, interop issues with Lighthouse. We had some parsing issues with the prototype, and we published fixes this morning. We updated the multinet script in the multinet repo, so that you can create a network with a 50/50 split of Lighthouse and Nimbus validators to investigate interop issues more easily.
G
We also added the Snappy fixes, and we realized that gossipsub assumes we can validate an attestation immediately when we receive it, but in reality we have to wait for the block to be available, and sometimes attestations are received before the blocks.
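The situation described above, an attestation arriving over gossip before the block it votes for, is commonly handled by deferring validation until the block shows up. A minimal sketch of that idea (all names are invented for illustration; this is not any client's actual code):

```python
from collections import defaultdict

class PendingAttestationPool:
    """Holds attestations whose target block hasn't been seen yet."""

    def __init__(self):
        self.known_blocks = set()         # block roots we have imported
        self.pending = defaultdict(list)  # block_root -> queued attestations

    def on_attestation(self, block_root, attestation):
        """Return attestations ready to validate now (possibly none)."""
        if block_root in self.known_blocks:
            return [attestation]                    # validate immediately
        self.pending[block_root].append(attestation)  # defer until block
        return []

    def on_block(self, block_root):
        """When a block arrives, release attestations waiting on it."""
        self.known_blocks.add(block_root)
        return self.pending.pop(block_root, [])

pool = PendingAttestationPool()
assert pool.on_attestation("0xabc", "att1") == []        # block unknown: queued
assert pool.on_block("0xabc") == ["att1"]                # block arrives: released
assert pool.on_attestation("0xabc", "att2") == ["att2"]  # now immediate
```

A real client would also bound the queue and expire stale entries so the pool can't be used to exhaust memory.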
G
Otherwise, regarding testnets: we have a "make witti" target added to the Makefile, so now you can connect to it easily. We were investigating a syncing bug in Schlesi, but we are moving to Witti to investigate that. Also, one of our testnet investigations is regarding the few-peers issue, where Nimbus only had a couple of peers compared to other clients. On the crypto backend side, we have the BLS changes merged in a crypto backend, and we have a toggle.
A
Nice, exciting. And does Nimbus currently run into the same syncing bug on Witti?
G
Basically, one of the syncing bugs is that when we are within the last 100 blocks, we change the syncing algorithm, and we stay behind. Got it.
H
Hey, that's me. So we've started doing official releases of Teku every two weeks, with release notes. Teku's been updated with options to run on the new Witti testnet. With respect to v0.12, the gossipsub v1.1 jvm-libp2p implementation is almost done, and the BLS updates are under review.

Our OpenAPI schemas have been updated to work more easily with code-generation tools. We've updated our eth1 data-handling logic so that we no longer need to query for world-state data, which means we can run against fast-synced eth1 nodes without worrying about missing world state.

We've added peer filtering based on ENR fields, there's some ongoing work to improve attestation processing, and we fixed some jvm-libp2p issues related to handling large stream packets. And that's it for me.
I
Hey guys, from Prysm here. Yeah, we are complete with 0.12. We've been running testnets, and so far so good: we're seeing 100% participation on different testing configurations for v0.12, so we're hopefully planning a testnet restart for Topaz with that config version. Next we're working on releasing a lot of memory-improvement features into master; we have a lot of experimental features with different degrees of stability that we're trying to get really solid and make the defaults.

Also working on optimized attestation aggregation. Our slasher has been making good progress; it's playing a lot nicer with the beacon node. Previously it would overwhelm beacon nodes and make it difficult for them to also handle validators, but now things are getting better on that front. Fuzz testing is going well, and our audit from Quantstamp is in progress; it's also going pretty well, so we're excited to see some preliminary results from the audits. Yeah, that's it from us.
A
Cool. Do you know the range of history over which you can find slashable attestations with the slasher? I knew it was on the order of a few days; I was wondering if that's longer now. Just curious.
I
Yeah, so we've been working a lot on the lookback. We're able to do historical detection just fine; we can go all the way back to genesis and find slashable offenses, so that's been going pretty well. We're actually having some trouble with real-time detection, dealing with making sure we get all the right attestations from the beacon nodes and that sort of stuff, but historical detection has been going really well. Okay, yeah, cool, thank you.
J
Hey everyone. So on our consensus code we've started an audit with Trail of Bits. The audit is being done in two parts: the consensus code and then the networking code. We've delayed the networking code audit because we still need to finish some of the underlying protocols and harden them before we actually get an audit of it, so that's going to be a little bit delayed.
J
We've been doing a lot of debugging with the Witti testnet launch, expanding some of our APIs to help with some of the metrics and logging in Lighthouse. With the Witti testnet we found that our discv5 implementation is chewing heaps of CPU.
K
A quick question on the rust-libp2p stuff: how much has it diverged between your branch and, I guess, Parity's libp2p?
J
Yeah, so we were maintaining our own fork and using that, because it had some gossipsub fixes and changes as well as discv5. One of the issues with discv5 is that it was architected to be built inside of libp2p, so what we've done recently is ripped out discv5 so it's now its own standalone thing, which allows us to rebuild it to be more concurrent and use more CPU cores.

It will be a bit nicer; it's not actually going to be put into libp2p anymore, so we've ripped that out, and all the gossipsub stuff has been merged down into master, apart from the new gossipsub 1.1, which is still being developed. So fundamentally, at the moment, Lighthouse uses the official rust-libp2p, everything on master from that. Anything else we've used is either in its own repo or somewhere else.
L
Hey everyone. Let's see, the big thing is we've added a full-time contributor; I think Grant might be on the call. So that's exciting, and will help move things along a little faster.

The latest code update is that we merged in a big networking PR to implement the latest p2p protocols. That was a bunch of bug fixes to py-libp2p itself, then general improvements to stability, and then all the gossipsub and request/response protocols there.
A
Cool, thank you. And from Nethermind: they've built code for attestations, and this is now pending merge into master. Tomas has built the code for connecting the Merkle trees with the database; previously they were only testing things in memory. Tomas is also testing multiple-validator onboarding soon with the Merkle tree code, so I think this is like a Merkle-backed state. On other items, they're still updating some code for v0.11, and moving a junior dev full-time onto eth2 to ensure the tests are running properly.

They're expanding Mothra for C# to have better control over the connections. They have a senior dev with experience in both Rust and C#, so maybe look for those improvements in June. Tomas is working on eth1-to-eth2 connectivity for the whole flow, but probably won't be able to join Witti before the end of June; we'll see, maybe. Thank you, Tomas, for doing god's work on the 1559 call. Cool, great, so we will move on to research updates.
M
We'll go with TxRx. So since we met last time, Dmitry has completed two write-ups on Discovery v5. The first one is called "Comparing discovery advertisement features for efficiency: ENR attributes and topic advertisement"; the other is "Discovery peer advertisement efficiency analysis". Real sexy titles. They're great write-ups that cover and simulate some of the efficiencies and inefficiencies of Discovery v5, and one of them is an ethresear.ch post.

The other one is a HackMD; there's probably going to be an ethresear.ch post coming soon for that, but the HackMD is available for y'all. Last time I misspoke about Onotole: Alex is developing Onotole, which is a pyspec transpiler.

It's not used for transpiling fork choice tests, but he is working on fork choice tests as well, separate from that effort, and he's been moving on that. Onotole has been transpiling into Kotlin code, so it's taking the pyspec and transpiling that, and Mikhail is integrating that transpiled Onotole Kotlin code into Teku, in our TxRx repo, not into the Teku master or working branch. Jonny's done some testnet analysis; he had a pretty entertaining tweet.

It shows some of the breakdown of the Topaz network, and he's created a prototype tool for testnet analysis that is auto-generated with a CI job. That's all based around the network-monitor work he's been doing, and right now we're working on upgrading the network agent to Age Manning's discv5 code so that the crawl will be more efficient. We also released a write-up on probabilistic cross-shard transaction simulation, and it shows some interesting results on the effects of network throughput; I'll drop a link to that in the Zoom chat.

The research for that is going to keep going, but we're going to analyze some eth1 data, which we're in the process of doing, to make the simulations for cross-shard transactions more realistic, and we're moving towards working on an alpha cross-shard spec in the coming weeks.
C
Cool, I'll go next. So Quilt has been working on exploring account abstraction; I've kind of explained this in the past.

It's kind of twofold. One, we're doing the work on geth right now, and one of the things is that we'd like to get this into eth1. If it doesn't move forward on eth1 because the changes are too heavy, the idea is to bring this in as one of the guiding structures of the account model that is built for eth2 as a whole. So we've been doing this with respect to eth1 to get some tangible data from what we currently have. Right now we are actually integrating all the pieces, and we should have the basic MVP that was specced out finished pretty soon, within the next week or two. Part of that, too, is we've done work on building a tool to DoS the design.

So we can begin looking at how the transaction mempool, if we pursue account abstraction, can remain defensible with regards to some of the new constraints that are added with it. So that's where we're at.
N
If there's nobody else... yeah, please. I would also give some updates from the Ewasm team. There would be, I think, two relevant updates I can give here.

The contract itself would serialize itself, because it can read its own storage, and would send all that data through the receipt model; on the receiving side there would be a need for a factory, which can then create this contract and set the storage, recreate the storage.

The reason we did this is that we wanted to see whether there's a need for protocol-level support for yanking, and it seems that if we indeed want yanking to be useful, then there is a need for protocol support, because there are so many overheads. One of the main overheads here, apart from the need to read the storage one by one in the EVM, is that one needs to serialize everything into memory, keep it in memory, and then save it as a receipt, and memory is quite expensive on the EVM.

Yeah, so we do plan to prototype the protocol-level yanking support and then publish this variant too, the one which talks about yanking. Another piece of work we have been looking into is regarding caching, and this is basically on phase 2.

If there were some kind of standard witness format, then there could be a potential to introduce what we call a shard committee cache, where there would be a cache for the transaction witnesses. So for whatever is incoming, there would be a short-lived cache, a few epochs long, which would reduce the transaction witness size. But the block witnesses included in the shard blocks won't be affected by the cache; those would always be full witnesses.
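The cache described above can be pictured as a map from witness chunks to the epoch they were last seen in, evicted after a few epochs; an incoming transaction witness then only needs the chunks the committee doesn't already hold. A rough sketch under those assumptions (all names and the lifetime constant are invented for illustration):

```python
CACHE_LIFETIME_EPOCHS = 2  # illustrative "few epochs"

class CommitteeWitnessCache:
    """Short-lived cache of witness chunks, expired by epoch age."""

    def __init__(self):
        self.seen = {}  # chunk -> epoch last seen

    def filter_witness(self, chunks, current_epoch):
        """Return only the chunks the committee doesn't already hold."""
        # Drop entries older than the cache lifetime.
        self.seen = {c: e for c, e in self.seen.items()
                     if current_epoch - e < CACHE_LIFETIME_EPOCHS}
        needed = [c for c in chunks if c not in self.seen]
        for c in chunks:
            self.seen[c] = current_epoch
        return needed

cache = CommitteeWitnessCache()
assert cache.filter_witness(["a", "b"], current_epoch=10) == ["a", "b"]
assert cache.filter_witness(["a", "c"], current_epoch=10) == ["c"]
assert cache.filter_witness(["a"], current_epoch=13) == ["a"]  # expired
```

As the speaker notes, this would only trim transaction witnesses; block witnesses would stay full so blocks remain self-contained.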
N
Yeah, I think only transaction processing. So we had just a rough idea sketched out, and we bugged Danny to give some feedback, and got some great feedback from Danny, but we haven't finished updating this rough idea to make it more useful based on that feedback. We do plan to share it soon, though.

After this discussion we also realized that maybe this idea could be applied in some different fashion to eth1.x or stateless Ethereum, so probably we can also look into that.
A
Cool, thank you. On the spec side: now that v0.12 is done, Hsiao-Wei, myself, and a couple of others on our team are spending a lot of time making sure that phase 1 is well tested, well structured, and has all the components that you need to do prototypes and implementations.

This includes finishing the work-in-progress validator doc, adding much more test coverage, working through any bugs so you don't have to, and even building out the base of the networking spec.
A
Great, next is networking. On having a separate networking call: I might consider doing one soon if there is demand. As Cayman mentioned, there are a few people working on v1.1 implementations of gossipsub. Are there any questions anyone wants to pose to the group, or any particular learnings or gotchas we've run into that are worth sharing?
K
I guess the question to decide is whether we should support 1.0 or not at all, even though in theory they're compatible, right? The current networking spec says 1.0, for example.
A
Oh yeah: clients must support gossipsub v1, including the v1.1 extension, because v1.1 is really just additional components on top. The intention was to write it in such a way that you must support this; I can clarify that if there's any confusion in the way it was written. For the time being, I think we're going to see, at least on the testnets for the next few weeks, a mix, which is a reasonable test, but there's no need, I think, to go to production without it.

Yeah, and based on the testing results by Protocol Labs, it's a no-brainer to require it.
A
Yeah, cool. Use the networking chat in the Discord, or one-on-one chats, if you have questions as things come up; David from Protocol Labs is also very willing to help if there are any issues on the v1.1 upgrade. Does anyone have v1.1 running on testnets? Prysm, I guess, would be the only one.
O
So, yes. We've been discussing a change to the discv5 wire protocol over on Discord, and I made a very detailed proposal for it. The idea is to redo the outer-layer wire encoding one last time before finalizing the spec, basically. So that's one thing.

I have received a lot of thumbs-up on this proposal, but we haven't really made a decision yet, so I would like to get a decision in pretty soon, because I kind of want to release the final version, sort of the initial final version, of the spec.
O
It was already linked by Joseph earlier, and we are still sort of discussing things out there. The mechanism that he proposed, based on basically Dmitry's write-ups, is reading other nodes' Kademlia tables using a special operation which can sort of pre-filter the nodes on the server side. The proposal is kind of a short-term gain, I guess, but we still need to answer the fundamental question of whether this is really a sound proposal or not. One thing that I'm still worried about with this proposal is that there's an issue with it: you cannot find 100% of nodes with this. You can only find a certain percentage of the network, and that percentage grows lower the larger the network gets, which is kind of... yeah.
O
There's this idea to reuse the QUIC packet format, and I haven't heard back, so I don't know if this discussion is still ongoing or not.
O
Yeah, I mean, I'm not in a super big hurry. It's more that I just kind of want to make a decision on how to move forward with the basic wire protocol this week, because I still need to update the actual spec with all the changes, and track the implementation, and so on. So it's really only the first step now. But thanks.
K
Yeah, we're kind of reading the QUIC spec right now to figure out whether it's feasible.
A
One related thing from the libp2p side: at least last we checked, there's not a lot of widespread TLS 1.3 support across many different languages, and TLS 1.3 is something that we're interested in integrating into the libp2p spec over time, but that was the main blocker. So there might be a similar issue with requiring QUIC on the discovery today.
O
So it's not really that. The plan, as far as I understand, is not to run the discovery over QUIC, because QUIC does way too much for that. It's more to basically encode the discovery packets so that they look like QUIC packets: basically masquerading the traffic as QUIC, because it would mean that we don't have to worry so much about it getting, I don't know, blocked by a firewall or something. Although I still think that this whole firewall discussion is a little bit of a... yeah, I'm not really sure what to think about it, to be honest.
O
I have one more thing, though. We are starting to work now, on the Geth team, on a sort of official test suite for the discovery, and I'm hoping we'll have it done in two or three weeks. So we'll have a basic test suite that you can run against any client to cover some common scenarios, because we've already seen some implementation issues in the current testing.
J
I've got two questions. The first one: there were other changes that you were talking about a while ago. One was, if I recall correctly, that in the FINDNODE request, instead of doing multiple requests to get all the buckets, you'd group them and have multiple distances in the FINDNODE request. Is that...
O
...still on the table? Yes. So the changes that are coming to the final version are going to be... one sec, I can give you the list real quick. The idea is that in the final initial version, whatever we want to call it, like version one, or 5.1 or whatever, we're going to have a complete description of the session cache.

There's going to be a change to the version number, which will be version number one instead of five. There will be this multi-distance FINDNODE request. We will have the connection pre-negotiation request, for now called TALKREQ; this thing is totally optional, and if you don't use it, it's basically going to be five lines to implement. It's not a big change, but we need it for another protocol, so that's why it's going to come in.
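For context on the multi-distance FINDNODE change: discv5 sorts peers into buckets by the log2 of the XOR distance between node IDs, and the change lets a single request name several bucket distances at once instead of one round trip per bucket. A sketch of the distance computation and the request grouping (the message shape here is illustrative, not the exact wire encoding):

```python
def log2_distance(id_a: int, id_b: int) -> int:
    """Kademlia log2 distance: bit length of the XOR of two node IDs."""
    return (id_a ^ id_b).bit_length()  # 0 means identical IDs

def findnode_request(distances):
    """One request carrying several bucket distances (illustrative shape)."""
    return {"type": "FINDNODE", "distances": sorted(set(distances))}

# Instead of three round trips for buckets 255, 256, 257, send one request:
req = findnode_request([256, 255, 257, 256])
assert req["distances"] == [255, 256, 257]
assert log2_distance(0b1000, 0b1001) == 1
```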
J
Yeah, the other question that I had, and I don't want to derail you: if eth1 is moving across to discv5, are you, in eth1, going to connect peer-to-peer using ENRs? I know that we've had this discussion, but ENRs could potentially cause a bit of havoc in usability.
J
So the question is: in eth1, in order to connect two eth1 nodes, if you're using discv5, do you use ENRs?
O
At the moment we mostly just use the DNS-based ENR distribution mechanism that we have for bootstrapping, and that basically means that we just have a reference to that DNS name and its public key in the source code, and then the boot nodes and stuff are just fetched by DNS. When we move to discv5 it's basically going to be the same; we're definitely going to keep the DNS, and we're probably going to set up a new DNS tree for discv5 to find the entry points. In general you can still use the old enode URL format, which is great but inflexible, and you can use the ENR text form as well if you want, but I know that it's not the most convenient if you want to just type something in.
O
Yeah, they have the pubkey, but they are not signed. The thing about this is that the ENR is not the greatest when it comes to, you know, sysadmin stuff, because they are signed: you cannot change the content of the ENR, ever. This is very nice for certain things, and it's definitely good when there are machines talking to each other.
O
We should probably all agree on multiaddr as the readable format, but when it comes to pasting nodes into code or something, I would still go with the ENR, because it allows you to basically prove to everyone else that you are not cheating and that this node is really who it says it is; they give you this info and...
O
Yeah, you must have the public key. I am also thinking about, though this is more of a long-term item and I don't want to hold up the call too long, adding an ENR identity scheme using Ed25519 keys, because it's going to be much faster during the handshake.
P
Got it. We actually tried to implement the initial version of Discovery v5 to accept both enode, or rather multiaddress, and ENR records, but this proved to be too difficult. It's a little bit hard for me to cite the reasons, but you just need to have the signed ENR record of all the nodes that you are talking with.
P
Yeah, yes, exactly, that's what I'm talking about as well. What I envision here is that perhaps it would be necessary, when you connect to the bootstrap node, to ask it about its own ENR record, which will be signed, because if you don't know the signed ENR record of the node you're talking with, you run into these issues where you cannot reply properly to some other messages.
J
Yeah, I'm trying to think of the reason as well. I agree with you; I think I tried to do it as well. Off the top of my head: we send a random packet, and then, responding to the WHOAREYOU, you need to know their ENR at some point, the signed version, and you run into trouble.
A
Yep, cool, we'll check that out in the next day or so. And again, that multiaddr must have the pubkey associated with their discovery identity. Okay, cool, anything else here? New design goal: make sure that we can have a multiaddr on the command line to initiate these conversations on discovery. Afri will be very happy.
A
Okay, cool, no more items there. General spec discussion: I know you all are working on v0.12; please don't hesitate to reach out on issues or in direct conversation if you run into anything. But are there any other questions or comments here?
A
Okay, great. General open discussion: anybody have anything they want to talk about?
A
Thank you, everyone, really appreciate it. Excellent work on all the test networks, and talk to you all very soon. Bye, guys.