From YouTube: Eth2.0 Call #34 [2020/2/27]
A
Okay, the stream is switching over. If you're on YouTube, hit me up in the chat box. Okay, here's the agenda: issue 129.
A
Same as usual: testing and release updates. I'm a little bit behind — ETHDenver, and then the related ETHDenver flu, set me back a little bit, so I am behind on getting that release out. It has some stuff that came out of the Least Authority audit, which should be shared publicly very soon, as well as a bunch of little things that we've been catching, especially on the networking side, over the past few weeks. That should be out in the next few days, which should be the stable post-audit target we've been looking for.
A
Along with that, I believe Alex from TXRX has started, or is starting, on some fork choice tests — migrating some of the fork choice tests he has been working on from Java into the pyspec. Hopefully we get some of that out soon. In addition to that, I did a huge pass on the testing to get basic testing in place for phase 1, which also involved doing some fun stuff to get testing working across forks properly in many cases. That PR is up and should be in soon. That's primarily it — is there anything on your end?
B
Hello. So during ETHDenver I continued work on the networking REPL. With this thing I hope to test clients' networking functionality, so it has discv5 and RPC as well, now working against this version — this is against Lighthouse. I'm still working on testing whether the compression of RPC works well; it kind of relates to the open PR.
A
Yeah, cool. I think generally people have not cared that much, or have been positive in response to that, so I do want to get that out in the next release. But if you do feel strongly about it, now is the time — or after the call, in that PR — if you want to speak up.
C
Yeah, Midi's not around, but I can give an update on the beacon fuzz work. — Yeah, cool, we're interested. — Okay, so we recently published a blog post which gives a lot of the progress update; it's on our website, and I'll link it here after I finish giving an overview of it. I guess the main points are that we found an interesting bug — a deposit with an invalid Merkle proof — in Nimbus, which they patched pretty quickly.
C
Definitely by the next update we'll have successfully integrated Prysm. We had to use a fork of go-fuzz-build which allows us to use shared library compilation and symbolic linking, but that has some issues, because Nimbus also uses Go in its Go wrapper library. So we expect to have Prysm supported in the fuzzer in master.
C
Hopefully by next week. We've also improved some of the tooling so that we can programmatically generate corpora from the test repo, provided we give it a specific spec version, which helps us build these things faster.
C
We've got a better build process — we've updated the makefile — and the ongoing and next steps are to start including the epoch state transitions, Java integration, and an update to eth2 spec version 0.10.1.
C
Oh yeah — the issue is using multiple different Go libraries, so there are issues there.
D
Yeah, it's just that we shouldn't have Go as a part of the normal build. We just have a daemon running as a standalone process — the Nim code shouldn't be tainted by Go — so I'm just a little bit surprised, but we can take it offline, yeah.
A
Okay, client updates. I'd like to hear the details of what's been going on, but also with an eye toward multi-client testnets in the coming month — I'd like to hear the biggest bottleneck.
A
What y'all are currently doing to address it — and we should probably be seeing some similar issues across clients, so if there are things to share or feedback to give each other, please chime in. We can start with Teku.
E
Yup, I can speak for Teku. We're making progress on sync: we're connecting to and downloading blocks from the Prysm Sapphire testnet.
E
We haven't yet caught up to the chain head — we're working through performance and reliability issues that are slowing us down, so there's more work to do, but it's exciting that we are downloading blocks from the live testnet. We've also been doing some work related to deposit processing; we should now be correctly processing pre- and post-genesis deposits.
E
As far as bottlenecks, one issue that we definitely need to address is storage. We're using kind of an absurd amount of disk space right now, and we haven't really started looking at optimizing this at all, so if people have ideas, that would be interesting. We floated some ideas around: maybe using a trie-backed state storage so that we're only storing diffs between states, or maybe a state snapshot strategy where we only store a few states and basically rebuild them as needed.
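The snapshot strategy floated here — keep only periodic full states and rebuild intermediate ones by replaying blocks — can be sketched roughly as below. This is an illustrative toy, not Teku's code: the "state" is just a dict and `apply_block` stands in for the real state transition.

```python
# Sketch of snapshot-based state storage: keep a full state only every
# SNAPSHOT_INTERVAL slots and rebuild intermediate states by replaying
# blocks on top of the nearest earlier snapshot.
SNAPSHOT_INTERVAL = 32

def apply_block(state, block):
    # Stand-in for the real state transition function.
    return {"slot": block["slot"], "total": state["total"] + block["payload"]}

class SnapshotStore:
    def __init__(self, genesis_state):
        self.snapshots = {0: genesis_state}   # slot -> full state
        self.blocks = {}                      # slot -> block

    def on_block(self, block, post_state):
        self.blocks[block["slot"]] = block
        if block["slot"] % SNAPSHOT_INTERVAL == 0:
            self.snapshots[block["slot"]] = post_state  # store a full state

    def state_at(self, slot):
        # Find the nearest snapshot at or before `slot`, then replay blocks.
        base = max(s for s in self.snapshots if s <= slot)
        state = self.snapshots[base]
        for s in range(base + 1, slot + 1):
            if s in self.blocks:
                state = apply_block(state, self.blocks[s])
        return state

# Usage: process 99 blocks; only a handful of full states are stored.
store = SnapshotStore({"slot": 0, "total": 0})
state = {"slot": 0, "total": 0}
for slot in range(1, 100):
    block = {"slot": slot, "payload": 1}
    state = apply_block(state, block)
    store.on_block(block, state)
rebuilt = store.state_at(99)
```

The trade-off this illustrates is exactly the one discussed on the call: disk usage drops to one full state per interval, at the cost of replaying up to `SNAPSHOT_INTERVAL` blocks on load.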
E
If people have ideas about general storage schemas, we'd be interested. And Jim, if you're online, I think you had some issues related to eth1 data management — I don't know if you want to speak to that.
F
Yeah, I'm online. One thing we've been having problems with is trying to process deposits efficiently. For that, we want to just use event logs instead of fetching blocks and then extracting the deposits in them when starting up Teku.
F
However, when we do that, there's an issue of being able to miss a genesis block which might have no deposits but an eth1 timestamp that triggers genesis. That's an edge case we've been working on for the past week, but I'm curious if any other clients have thought about that scenario, or have already fixed it — and if anybody has fixed it, I'd really appreciate knowing how they did it.
F
So basically the issue is getting deposit events and using deposit events, because otherwise, to do it really efficiently, you're hitting the eth1 node for each block after your deposit contract deployment block — and then being able to trigger genesis on a block that doesn't have any deposit events, due to its timestamp.
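The edge case described — genesis triggered by a block's timestamp even though that block carries no deposit events — can be illustrated with a small simulation. The block/log shapes and the `MIN_GENESIS_TIME` / `MIN_DEPOSITS` thresholds below are simplified stand-ins, not the real eth1 JSON-RPC types or mainnet constants.

```python
# Deposits are discovered via a single event-log query (cheap), but genesis
# can be triggered by a block that contains no deposit events at all, so the
# block headers still need to be walked for their timestamps.
MIN_GENESIS_TIME = 1_000
MIN_DEPOSITS = 2

def find_genesis_block(blocks, deposit_logs):
    deposits_by_block = {}
    for log in deposit_logs:                  # one eth_getLogs-style query
        deposits_by_block.setdefault(log["block"], []).append(log)
    total = 0
    for block in blocks:                      # headers scanned for timestamps
        total += len(deposits_by_block.get(block["number"], []))
        if total >= MIN_DEPOSITS and block["timestamp"] >= MIN_GENESIS_TIME:
            return block                      # may itself have zero deposits
    return None

# Usage: all deposits land early; genesis triggers on a later, empty block.
blocks = [
    {"number": 1, "timestamp": 900},
    {"number": 2, "timestamp": 950},
    {"number": 3, "timestamp": 1_050},  # no deposits, but crosses MIN_GENESIS_TIME
]
logs = [{"block": 1}, {"block": 2}]
genesis = find_genesis_block(blocks, logs)
```

A logs-only implementation that skips straight from event to event would never visit block 3 here, which is exactly the miss being described.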
A
So is it that the only blocks you're aware of are those with deposit events?
F
It's the existence of a block — you mean just fetching one block, right? Yeah, it is definitely able to do that, but in that scenario we would be asking for basically each block after the eth2 deposit contract deployment block.
A
I would definitely reach out to Paul after the call — I think he's been deepest in this.
B
Just to chime in: we have been looking into state storage and how to reduce the storage size for Lighthouse, and we figured out that the validator registry is the biggest part. You can take a hybrid approach — you don't do full tree storage, because it's much too detailed.
B
You don't want to store binary tree nodes all the time. What you can do is store the finalized state in a flat manner, and whenever you load it, convert it to a tree. Then, for the hot part of the state, you maintain these diffs of the tree and store them as diffs instead.
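The hybrid scheme being described — one flat full copy of the finalized state, with hot states stored only as diffs against their parent and reconstructed on load — can be sketched with a toy dict-based "state" (the real scheme operates on a Merkle tree, which this deliberately does not model):

```python
# Toy hybrid storage: the finalized state is kept as one flat full copy;
# each later state is stored as a diff against its parent and rebuilt on
# load by applying the diffs in order.
def diff(parent, child):
    return {k: v for k, v in child.items() if parent.get(k) != v}

def apply_diff(parent, d):
    merged = dict(parent)
    merged.update(d)
    return merged

class HybridStore:
    def __init__(self, finalized_state):
        self.finalized = dict(finalized_state)  # flat full copy
        self.diffs = []                          # ordered diffs for hot states

    def push(self, new_state):
        base = self.load(len(self.diffs) - 1) if self.diffs else self.finalized
        self.diffs.append(diff(base, new_state))

    def load(self, index):
        state = self.finalized
        for d in self.diffs[: index + 1]:
            state = apply_diff(state, d)
        return state

# Usage: only the changed fields are stored for each hot state.
store = HybridStore({"slot": 0, "balance": 100})
store.push({"slot": 1, "balance": 100})   # only "slot" changed
store.push({"slot": 2, "balance": 99})
latest = store.load(1)
```

The point is the same one made for the validator registry: when most of a large state is unchanged between slots, each diff is tiny compared to a full copy.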
H
One sec before we move on — Meredith, you mentioned you're implementing a separate signing interface. Can you speak to exactly what you're implementing there, or can you share a link to what that is?
E
Yeah, someone actually dropped me a link to an EIP. Let me see if I can find the actual specification — I'll find it and drop it in the chat.
I
Yeah, so internally we're calling it Eth2Signer — standalone signing as a service. If you look at the PegaSys engineering GitHub, there is an eth2signer repo, but it's just started just now; we haven't made a huge amount of progress just yet. But it would be good to get some common interfaces on this stuff, if anyone else is interested.
I
Yeah — backend and front-end interfaces, for sure.
K
Back to Trinity — yeah, hey everyone. Things have been a little slow, mainly just working on spec updates.
K
We have some work on stability of our libp2p library, updates to fork choice, and some integration with Milagro. I really can't speak to bottlenecks like we've been discussing at the moment, but I imagine we'll have all the same issues everyone else has had.
A
Cool, thanks. Nimbus?

L
We created auto-detection and reporting of skipped tests, because we realized that when refactoring the repo we sometimes forgot to re-enable tests, and sometimes that only shows up in fuzzing. So that should now be caught much earlier. We have a BLS signatures implementation ready.
L
More than a year ago we had a bounty program, and that was how we maintained the eth1 client at first. We've restarted this bounty program, and the first two bounties will be on improving the test runners so that they can be used with Nimbus, and on an HTTPS server so that it can be used for collecting metrics and for the eth2 API.
L
Now, on the networking side, we have new code to manage peer lifetime — peer disconnects and replies. We had a significant focus on discovery in the past three weeks; we have some issues on Windows, maybe on NAT traversal, and all of those issues manifest as finalization issues.
L
On the speed side, we have implemented lightweight stack traces, and this improves both compilation and runtime of Nimbus by about 2x — because we enable stack traces, and they take a significant toll on the binaries, since they prevent lots of compiler optimizations.
L
We have created specialized infrastructure to test finalization issues, because, as I said, discovery problems manifest as finalization issues — but so do speed issues, for example when we have too many nodes on the same machine. The way we detect that is by finalization, and we want to know whether the finalization issues come from the spec, from speed, or from networking.
L
On eth1, we are pleased to say that we pass all the transaction tests — the same ones as Geth and Parity — and we have our EVMC host implementation done. The next step is to allow fuzzing of our own EVM implementation with the same tools as EVM1. In terms of bottlenecks:
L
The logs will be impossible to manage by hand for debugging, so we will need some kind of parser to deal with all of those, and for the volume we just rotate the logs every four hours to keep it manageable and not flood our AWS instances.
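Nimbus's own tooling is in Nim, but for reference, the same "roll over every four hours, keep the volume bounded" setup can be done with the Python standard library (the file name and retention count here are illustrative assumptions):

```python
import logging
import logging.handlers
import os
import tempfile

# Time-based rotation with the stdlib: roll the log file over every 4 hours
# and keep only the 6 most recent files, i.e. roughly one day of logs.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "node.log")

handler = logging.handlers.TimedRotatingFileHandler(
    log_path, when="H", interval=4, backupCount=6
)
logger = logging.getLogger("node")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("finalized checkpoint updated")
handler.flush()
```

`backupCount` is what keeps disk usage flat: once six rotated files exist, the oldest is deleted on each rollover.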
L
For fork choice, we have both: one implementation straight copy-pasted from the spec, and one implementation based on proto-array. The idea is to make it an almost independent module so that it's easier to fuzz. In terms of testing, we use the same approach as Lighthouse, which is to create some kind of interpreter that says: okay, push a block at that slot, push another block at that slot, now run process_slots, and then now run the fork choice.
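That interpreter-style test harness can be sketched as a tiny op runner. To keep it self-contained, the "fork choice" below is a toy rule (highest viable slot wins, ties broken by root) standing in for the real algorithm — only the op-driven shape of the tests is the point:

```python
# Sketch of an interpreter-style fork choice test: each test case is a
# list of ops applied in order to a store, mirroring "push a block at
# that slot / run process_slots / run the fork choice".
def run_ops(ops):
    store = {"blocks": {}, "time": 0, "head": None}
    for op in ops:
        if op[0] == "block":                 # ("block", root, slot)
            _, root, slot = op
            store["blocks"][root] = slot
        elif op[0] == "slots":               # ("slots", n): advance time
            store["time"] += op[1]
        elif op[0] == "head":                # ("head",): run the fork choice
            viable = {r: s for r, s in store["blocks"].items()
                      if s <= store["time"]}
            store["head"] = max(viable, key=lambda r: (viable[r], r))
    return store

# Usage: a test case is just data, so the same cases can drive two
# implementations (spec copy-paste vs. proto-array) or a fuzzer.
store = run_ops([
    ("block", "a", 1),
    ("block", "b", 2),
    ("slots", 2),
    ("head",),
])
```

Because each test is plain data, the same case list can be replayed against both fork choice implementations and diffed, which is what makes the module easy to fuzz.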
A
Anybody else running into issues with massive amounts of logs and writing a custom parser, and have any advice there?
C
We kind of dump ours all into AWS and have AWS consume them, so there's nothing fancy to help there. But you did mention NAT traversal — I just wanted to ask what kind of NAT traversal techniques you're using.
I
Mamy, could I ask a couple of quick questions? You mentioned spec v0.10.2 — this is not an official release, right? What do you mean by this?
L
No — it's draft five, so the one that is current. And the vectors I'm waiting for are for fast_aggregate_verify, because some of the test vectors that are expecting wrong signatures are actually correct.
I
Yeah — if you re-download the test repo, they're correct in the latest tar.gz. That confused me: ours was cached, and we didn't change the version number on the test repo, so if you've cached it previously they're wrong, but the latest version is correct.
A
I'll check it out. I believe they're actually incorrect in the code and the files in the repo, but they're correct in the tar.gz associated with it — which was maybe a confusing decision to make.
H
Speaking further on the BLS front: as was mentioned, there's a new PR on the hash-to-curve repo. Although we were supposed to have finalized BLS, there have been some complaints as to the efficiency of this, particularly on low-power devices, amongst a few other things. So there's a new PR — I can link to it quickly — open. It only affects hash_to_base, which is now called hash_to_field, which is the first part of the hashing into the curve.
H
It
should
be
a
relatively
minor
change,
and
the
people
seem
very
certain
that
this
is
the
very
final
version.
I
think
it's
worthwhile
making
the
change
to
try
avoid
a
ketchup
v2.
H
So
I
I
think,
provided.
H
Until
we
launch
mainnet,
I
think
it's
advisable
to
try
make
changes
to
to
follow
the
bls
back.
I
do
really
think
that
this
is
the
final
one
I
think.
Having
spoken
to
the
the
authors
of
of
these
specs.
H
Yeah — nothing came up at the last quarterly meeting; there were no perceived issues, but this was something they realized internally.
H
The quarterly meetings aren't when it's officially standardized — they're more just to bring it out to the public as a point to get feedback on all these kinds of changes. So I guess maybe this is the result of the last meeting in some weird roundabout way, but it certainly was not the intention.
L
Yeah, thanks. So you were talking about hash-to-curve being finalized, but regarding the BLS signature spec itself, which depends on hash-to-curve and is a separate spec — how stable is it?
H
To the extent that I know, it is 100% stable — well, with two minor caveats, I guess. One is that there are still no test vectors, so I do expect it to change to add test vectors. As part of some of the BLS precompile work I've been doing lately, we've generated some test vectors, so maybe we can leverage those.
H
The other thing which may change is if the draft officially expired for BLS, because there haven't been any changes to it for over a year now, so there would have to be, I think, a version bump. I'm not exactly familiar with the intricacies of versioning under these standards, but there would be a version bump. I don't expect any changes on that front, though — at least none that I can see on the horizon.
A
Okay,
moving
on
prism.
O
Oh hey guys, so yeah, we have a bunch of updates. We are working on the slasher service, which is to slash double votes and surround attestations. The latest progress on that is that a listening node was able to detect a surround vote and then include the slashing object in a block.
O
So the next thing to verify is that the slashing actually happened and the validator gets ejected — we're working on that. I'm also working on a state management service, where we store hot state (i.e. post-finalized state) on a per-epoch-boundary interval and basically do playback on that, and cold state, which is state before the finalized checkpoint, on a per-user-defined interval.
O
This design was highly motivated by the Lighthouse design — so props to them for being the pioneers on that — and the implementation is mostly done; I'm just working on micro-optimizations.
O
On subnet subscriptions, we're also working on better block fetching for syncing, and we're thinking, in the back of our heads, about how to use less memory during initial sync. We also updated the attestation wait time from one third of a slot to basically right away when the node sees the block. In terms of bottlenecks, I would say our biggest bottleneck today is that we subscribe to all the subnets for the committee IDs, so we have tons of unaggregated signatures to verify.
O
I just took a profile before this call, and it looks like 30% of our runtime is verifying unaggregated signatures. So, like Danny said, that's not sustainable, and we're working towards solving that. Yeah — that's my update.
A
Thanks, Terence. Maybe Age can speak to that a little bit — I think he's also trying to tackle that hurdle right now. Okay, any comments for Prysm?
C
Yeah — if you're subscribed to all the subnets, what checks do you have for validating attestations before re-propagating them across gossipsub?
O
We basically implement what's in the spec today, which I believe does check the signature. Checking the signature is the heaviest part, so the debate is whether you should check the signature before you re-propagate — and I think we do check the signature today.
O
We're not using the latest one; we're using the previous implementation. I think it has been updated in version 0.10, which we're not on yet.
D
One more question, sorry, out of curiosity: do you cache?
O
We have an LRU cache on top of the DB for the state, and that cache gets hit pretty often, given it's the target state and we use it a lot. So yes — it is cached.
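An LRU cache in front of a state database, as described here, can be sketched with the stdlib `OrderedDict` (the dict-backed "DB" and the capacity are illustrative, not Prysm's actual storage):

```python
from collections import OrderedDict

# Sketch of an LRU cache in front of a state database: recently loaded
# states stay in memory; the least recently used entry is evicted once
# the cache is full, and misses fall through to the DB.
class LRUStateCache:
    def __init__(self, db, capacity=4):
        self.db = db                  # fallback store: root -> state
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, root):
        if root in self.cache:
            self.cache.move_to_end(root)      # mark as most recently used
            self.hits += 1
            return self.cache[root]
        self.misses += 1
        state = self.db[root]                 # slow path: hit the DB
        self.cache[root] = state
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return state

# Usage: repeated loads of the same (e.g. target) state become cache hits.
db = {f"root{i}": {"slot": i} for i in range(10)}
cache = LRUStateCache(db, capacity=2)
cache.get("root1"); cache.get("root1"); cache.get("root2"); cache.get("root3")
```

This is why the cache "gets hit pretty often" for a target state: fork choice keeps asking for the same few roots, which is the best case for LRU.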
P
So we've got a thread of that working. We still only have blocks — no attestations or anything yet — and we're still on version 0.9.1 of the spec or something, but it's good to be able to use that Mothra library as a quick leg up to get that started. So next up, I'm going to try to update the version to 0.10.1.
P
Changes like getting rid of the signing root will, I think, make it easier to finish off the rest of the stuff, like attestations.
P
We're probably behind other people here. Although I did look, I'm not sure what spec level other people are up to — I think the Prysm testnet I was looking at still said 9.3 on the website, but I don't know if that's correct.
O
Yeah — we're still on 9.3 on the testnet, but our client actually runs 0.10.1; we just haven't updated the number yet.
P
Okay, that's cool. So yeah, I'm probably going to go for 0.10.1, because it seems to have some improvements for the rest of the development. Once I've got that in there, it'll be good to try to get it working in interop with one of the other clients — I know there are some guys from Teku locally in Brisbane, so that's probably going to be my target, but I'm happy to talk to some other people and try to get some interop working.
C
Hey everyone — I'll try to make it relatively quick. First and foremost, we've added two new developers to Lighthouse, so welcome to Diva and Adam. I'm sure you'll be hearing from them very shortly.
C
Over the past two weeks, we raised a 4k-validator testnet for ETHDenver, which turned out to be quite useful for some developers and researchers to prototype with. It's been running for about 93,000 slots and we basically haven't touched it — it's just running smoothly. We kind of just had it as a test for the Denver hackathon.
C
While at ETHDenver, Michael from our team met up with Proto and implemented the Merkle-tree-based storage system for the validators field in the beacon state. It's shown some pretty significant reductions in tree hashing time, but an increase during rewards and penalties, so we're still deciding whether to adopt the approach throughout the client.
C
We did a project-wide sweep of temporary heap allocations, because we were finding we were using a ridiculous amount of memory — more than what we needed. We've actually reduced our memory footprint by about half again, so since the start of the month we're down to about a quarter of what we were originally using. We're still using two to four gigabytes of RAM for a beacon node on a 100k-validator testnet.
C
The reduction of heap allocations and memory usage also gave us a 30% improvement in block processing time, which is pretty good — that'll also help our syncing speeds. We're also in the process of refactoring our BLS library so that at compile time you can choose to use either the Milagro or the Herumi implementation; we'll still go through and benchmark which is faster. In terms of interoperability:
C
Interoperability is kind of one of our main focuses in the very near future. The bottleneck to getting there is that we're pretty much in the process of upgrading Lighthouse to what we're calling version 0.2.0, and this is going to be pretty much feature complete for mainnet launch — so that means it'll include the attestation aggregation strategy, Noise, and Snappy compression.
C
So
we
pretty
much
have
most
of
that
implemented,
there's
still
a
bit
of
code
to
go,
but
we
probably
need
to
go
into
a
fair
amount
of
testing
before
we
merge
that
to
master.
But
once
we
have
that
merged
into
master,
we
will
be
ready
to
do
interrupt
with
everybody.
We
assume
so
as
soon
as
we
get
that
merged
down
we're
going
to
be
we'll.
C
Have
this
we're
going
to
start
up
like
an
interrupt
test
net,
which
will
be
a
long,
lasting
test
net
that
we
hope
other
clients
can
join,
but
we'll
also
try
and
join
other
clients
test
nets.
So
the
first
thing
is
just
kind
of
finishing
off
the
testing
of
that,
and
I
guess
the
last
thing
is
that
we've
kicked
off
a
process
to
build
a
kind
of
like
a
ui
front,
end
for
our
validator
client
and
that's
currently
in
like
the
research
phase.
C
So
we'll
probably
be
updating
everyone,
as
that
kind
of
develops,
cheers
cool.
C
So I guess, yeah — we have actual performance bottlenecks, like the RAM we've been targeting, and I think we've fixed and tracked down most, if not all, of our deadlocks. In terms of actually getting to an interop testnet and testing with other clients, it's just a matter of finishing off the last bit of code and then thoroughly testing it internally before releasing, because we don't really want to start a testnet and then realize...
C
Oh,
we
need
to
change
something
then
restart
the
test
net.
So
pretty
much
that
right
and.
C
Yeah, yeah — in reality that's probably not going to happen, but we still don't want to have that problem, yeah.
A
And
can
you
give
us
any
details
on
your
strategy
to
find
validators
of
particular
attestation
subnets,
given
the
e
r.
C
Yeah,
so,
as
I
was
saying
on
that
thread,
the
original
plan
was
just
to
so
what
pretty
much
we
when
a
validator
kind
of
subscribes.
We
know
in
advance
when
it
needs
to
subscribe
to
a
to
a
subnet.
So
we
we
we've
given
ourselves
kind
of
like
an
epoch
leeway,
so
we
know
an
epoch
in
advance
when
which
subnet
we
kind
of
need
to
subscribe
to.
C
So
the
initial
plan
is
to
use
this
v5
and
just
kind
of
search
for
random
peers
and
just
try
and
and
when
we'll
only
connect
to
ones
that
have
the
subnet
in
the
in
our
field,
but
failing
that,
if
that's
too
slow
or
if
that
doesn't
give
us
results,
it'll
be
dependent
on
the
number
of
nodes
on
the
network
that
are
validators
versus
the
number
of
nodes
just
sitting
there.
That
aren't
subscribed
to
any
of
the
any
of
the
subnets.
C
So
the
other
solution,
which
is
what
alex
suggested,
is
to
just
crawl
the
dhd,
which
isn't
too
difficult
but
it'll,
be
a
different
kind
of
search
where
we
kind
of
just
ask
all
the
all
the
peers,
all
the
other
peers
that
they
know
about.
That
have
this
particular
field
in
an
enr
and
if
you
specify
return
me
at
least
three
of
them,
then
the
query
hopefully
shouldn't
take
too
long,
definitely
not
at
least
an
epoch.
I
imagine
so
that
that's
the
second
strategy,
if
we
need
it
right.
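The first strategy — pull random peers from discovery and keep only those whose ENR attnets bitfield covers the needed subnet, stopping once enough are found — can be sketched as below. The peer records are simplified stand-ins for real ENRs, and the field name `attnets` is borrowed from the eth2 networking spec.

```python
import random

# Sketch of ENR-based subnet peer search: shuffle the known peers (standing
# in for random discovery results) and collect the first `wanted` peers whose
# attnets bitfield has the bit for our subnet set.
def find_subnet_peers(peers, subnet_id, wanted=3):
    found = []
    candidates = list(peers)
    random.shuffle(candidates)            # discovery returns peers in random order
    for peer in candidates:
        if peer["attnets"][subnet_id]:    # bit set => subscribed to that subnet
            found.append(peer["id"])
            if len(found) >= wanted:
                break                     # stop early once we have enough
    return found

# Usage: 20 peers, each subscribed to exactly one of 4 subnets.
peers = [{"id": i, "attnets": [i % 4 == s for s in range(4)]} for i in range(20)]
result = find_subnet_peers(peers, subnet_id=2, wanted=3)
```

The efficiency concern raised on the call shows up directly here: if only a small fraction of discovered peers are validators with the bit set, many random candidates must be filtered before `wanted` matches are found — which is what motivates the targeted DHT crawl as a fallback.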
C
Yeah,
so
if
ryog's
happen,
the
validated
client
will
detect
it
because
it
kind
of
polls
all
the
time
for
its
duties,
and
so
it
will
resend
a
subscription.
So
we
have
like
a
a
service.
That's
looking
after
all
these
subscriptions
on
the
beacon
node,
so
it
will
update
and
realize
that
it
it'll
change
which
which
subnets
it
needs
to
connect
to.
C
So
if
a
re-log
happens,
it'll
kind
of
readjust
itself,
we
may
have
less
time
than
an
epoch
in
some
circumstances
like
if
a
validator
just
connects
and
needs
to,
you
know,
perform
an
attestation
on
the
next
slot.
Well,
in
those
circumstances,
obviously
not
gonna
have
enough
time
to
track
down
peers,
but
in
a
long
running
scenario
we
we
in
principle
should.
A
Something
also
to
consider
is
that
the
agitation
subnet
subscriptions
in
enrs
are
relatively
stable
on
the
order
of
the
day,
and
so
there's
also
the
chance
to
kind
of
like
pre-walk
the
dhc
and
have
information
that
you
think
is
correct
and
is
very
likely
correct.
Locked
and
loaded.
C
What do you mean? Yeah — I mean, it'll be statistical, based on the number of peers. As long as there's at least a thousand validators, it shouldn't be too difficult to find those thousand validators across any subnet that you need, but I guess we'll find out in practice. Correct.
A
Okay,
any
jim
did
you
have
a
question
or
thought
for
my
house.
I
saw
you
I
made
earlier.
A
No,
I
just
I
saw
you
on
mute
towards
the
end
of
when
age
was
speaking.
It's
wonderful.
F
I
was
curious
like
which,
like
spec
version,
you
guys
are
like
gonna
go
for
for
your,
like.
That's
not.
C
I
think
we'll
be
going
for
10.1
cool.
Thank
you
another.
Another
quick
thing
is
that
we've
we've
implemented
noise
and
we're
testing
that.
So,
if
nimbus
nimbus
said
they
needed
a
testing
partner
to
check
out
their
noise,
then
we
were
happy
to
that'd
be
interesting
to
interrupt
with.
A
Great
okay,
lodestar.
Q
Hello. So, over the past few weeks: we've upgraded our BLS implementation to the 0.10.x release — 0.10.1, I guess — and we've cut a new release of that, based on Herumi's implementation compiled to wasm. Everything else in our repo is still at the 0.9 level; we have some fork choice things that we're still upgrading, and our networking, I'd say, is probably our bottleneck at this point.
Q
So
we
have
a
noise
implementation
that
we're
interopping
with
go
at
the
moment.
I
don't
know
exactly
what
the
status
is,
so
I
don't
want
to
offer
us
up
as
a
testing
partner,
but
that's
in
progress
and
we
have
a
pr
open
for
snappy
compression
and
we're
going
to
begin
working
again
on
disk
v5
we're
about
halfway
through
and
we
had
stopped
work
for
a
while
and
now
we're
gonna
get
back
to
that.
Q
Some
other
things
we
merged
in
this
new
ssc
implementation
that
we
had
been
working
on
for
a
while
and
just
merging
it
in
no
changes,
kind
of
sped
up
our
state
transition
by
roughly
10
to
100
x
and
then
just
lightly
memoizing.
A
few
functions
spread.
It
up
another
10
to
100.
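"Lightly memoizing a few functions" here means caching pure, repeatedly re-computed functions. A generic illustration with the stdlib (the function below is a stand-in, not Lodestar's actual code, which is TypeScript):

```python
from functools import lru_cache

call_count = 0  # tracks how often the underlying computation actually runs

@lru_cache(maxsize=None)
def shuffled_index(seed, index):
    # Stand-in for an expensive pure computation over (seed, index) —
    # the kind of function that gets called with the same arguments
    # over and over during a state transition.
    global call_count
    call_count += 1
    return hash((seed, index)) % 1_000_003

# Usage: repeated lookups with the same arguments hit the cache, so the
# body above runs only once for this (seed, index) pair.
first = shuffled_index("seed", 7)
for _ in range(1_000):
    assert shuffled_index("seed", 7) == first
```

Memoization like this is only safe for pure functions, which is why it applies so cleanly to spec helpers keyed on immutable inputs like a seed and an index.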
Q
We'll probably stop there for now, because we don't have the best data to benchmark against, but I think once we start really syncing a bunch of blocks, we'll have something we can test against and take it a little further.
A
Okay — other client updates, or...?
J
Sure,
okay,
so
yeah
we
were
at
east
denver
and
we
worked
with
ethicia.
We
made
a
an
ee,
it's
called
simply
kind
of,
and
that
was
just
kind
of
like
a
rudimentary
execution
environment
to
see
kind
of
like
what
some
of
the
components
are
and
we're
using
that
to
kind
of
like
inform
some
research.
J
We have two pending write-ups right now. One from Mikhail, around the safety of an eth1-to-eth2 bridge — the eth1-to-eth2 finality gadget — which should be out today if it's not already out. And another write-up on discv5 from Alex is imminent.
A
Proto
has
put
together
a
draft
phase,
2
spec.
If
you
want
to
take
a
look
at
that
inspects,
your
post,
pr,
open
other
research
items,
you
want
to
go
over.
M
I
can
give
a
summary
for
the
football
team
sure
so
we'll
will
is
currently
on
the
plane
as
the
sun's
gone
yeah.
We
basically
spent
last
week
at
sbc
and
had
many
productive
discussions
there
and
synced
up
with
the
other
research
teams
we
struggled
before,
but
before
sbc
we
published
the
eath
research
post
on
on
our
vision
for
phase
two.
If
you
guys
want
to
have
a
look
on
that,
that's
basically
all
of
our
thoughts
around
phase
two.
M
In
our
opinion,
there
are
no
real,
like
blockers
left
for
phase
two,
it's
still
a
huge
design
space,
but
the
approach
that
we
plan
on
on
taking
is
to
just
like
basically
take
like
the
minimal
implementation,
minimal
phase,
two
spec
implementation
that
we
see
and
that's
basically
also
what
proto
has
been
started.
Writing
the
specs
on.
M
They
came
from
the
same
discussions
around
that
there
and
we
want
to
shift
our
focus
for
now
on
like
implementing
that
and
then
with
a
target
of
having
like
a
minimal
like
mvp
version
of
a
phase,
two
implementation
done
so
that
we
can
then
iterate
on
that
and
compare
it
with.
Like
other
more
involved
research.
That's
still
to
be
done
there,
but
but
yeah.
M
Our
next
few
months
will
be
like
mostly
focused
on
getting
this
minimal
face
to
implement
it
and
then
a
few
other
small
updates
sam
and
I
have
been
looking
into
this
whole
question-
around
dynamics
that
access
aesthetic
state
access.
Some
of
you
might
have
heard
of
that.
That's
this
dsasa
topic
account.
Internal
preference
is
the
to
likely
go
with
ssa
for
phase
two.
We
have
been
looking
a
little
bit
into
feasibility
there.
M
I've
I've
been
specifically
looking
at
existing
popular
et1
projects
to
see
like
how
easily
those
like
similar
use
cases
could
be
implemented
in
a
like
ssa
phase,
two
world-
that's
looking
great
so
far,
and
sam
has
been
working
on
for
the
last
few
weeks.
Last
two
weeks,
specifically
like
on
a
a
so-called
like
taint
analysis
tool
for
for
solidity
integration.
M
So
so
the
idea
here
is
to
to
have
have
a
tool
that
it's
basically
doing
like
an
optimization
pass
over
yule
and
it's
checking
contracts
for
dsa.
So
if
we
were
to
go
with
with
like
purely
ssa,
then
we
had.
We
would
have
need
like
developer
tooling,
around
right,
detecting
dsa
and
basically
so
that
would
just
like,
as
any
syntax
check
it
would
just
like
highlight
code
parts
of
your
code
that
would
that
use
dsa
patterns.
So
you
can
correct
that
or
overwrite
the
this
check.
M
If
you,
if
you
really
know
what
you're
doing
that's
really
looking
great
so
far
as
well,
here's
a
write-up
on
each
research
on
that
and
that's
basically
done
as
a
mvp
for
now
and-
and
I
I
think,
we'll
have
like
a
more
comprehensive
write-up
on
the
whole
ssa
dsa
topic
as
soon
as
we
have
a
quick,
clear
picture
on
all
of
the
remaining
questions
there
and
then,
as
a
last
update
from
our
sides,
I
mean,
as
just
already
said,
matt
and
johnny
had
like
a
new
release
of
the
east
tool.
M
H
Having had quite a few discussions about the whole SSA/DSA topic at SBC: it's a very interesting change, and I think it's definitely the right move. It simplifies a lot of the harder problems — or at least it's easier to come up with a solution.
H
But
it
does
mean
changing
the
standard
pattern,
and
this
comes
up
with
the
the
the
tainting
and
whatever
oscar
was
talking
to,
but
I'd
just
like
to
encourage
people
to
to
stay
up
to
date
with
us,
because
I
think
it's
important
that
we
get
as
many
items
as
possible
on
whether
this
is
a
feasible
direction
to
go
on
to
go
towards
for
for
how
we
do
state
in
the
future.
H
From
what
we
discussed,
we
couldn't
see
any
major
design
patterns
that
are
that
we
prevent
by
switching
over
to
ssa,
and
certainly
it's
worth
it
for
the
because
the
simplifications
that
can
be
made
but
yeah.
I
just
would
like
to
advise
people
to
just
to
follow
up
to
follow,
what's
happening
here,
because
I
think
it's
an
interesting
design
space
and
it's
important
to
fully
understand
the
decisions.
We're.
A
Okay. We will have a networking call on Wednesday, a week from yesterday — I pushed that back, sorry about that. But are there any pressing networking items we can bring up?
D
There was a PR pushed for segregating it.

G
Hello, Hsiao-Wei here. I'm thinking about whether we can have something on March 6th — there's a potential poll on the eth2 discord, in the general channel, where I'm trying to check people's opinion on having the session on the 6th. If you are interested, please DM me or put some signal on the eth2 discord. And I know the Görli testnet team might want to organize some events, so that discussion is also on discord.
B
EthLondon — there's a hackathon in London this weekend, and I invite everyone who is around to co-work, or just talk about what you can do.
A
Great — keep up the good work. We'll do this call in two weeks, and the networking call next week. I've got a lot of spec work to do, so I'm going to get to it. Talk to y'all soon — thanks, everyone.