From YouTube: Eth2.0 Call #54 [2020/12/17]
A
There are some minor features being worked on that we expect to come out in something like a minor hard fork in early-to-mid next year. As for the actual discussion of what's going to go in there, I'd say let's wait until January; we can spend some time making the intentions clear, and then we can debate and go back and forth a little bit.
A
I think the low-hanging fruit here is a little bit of accounting reform on how attestations and related objects are stored in state; adding a sync committee (a light-client sync committee), which reuses some of the generally similar committee functionality, both at the networking level and in the spec, to add light-client support; and some other small stuff. But again, I think our time will be better spent formalizing some of that, writing it down, and getting people up to speed, and then we can discuss it in January.
A
Additionally, there's ongoing work in the spec on...
B
I should interrupt on those specific issues: there are GitHub issues and pull requests in the eth2 specs repo that you can look at already. On the accounting-reform stuff, if you go to the pull requests section of the eth2 specs repo, the two most recent ones are from myself. I forget the numbers, but one says something like "accounting reform" and the other says "quotient reform", or something like that. I highly recommend client developers look at them and at least familiarize themselves with the proposals and comments.
A
Right, and some of what's in there moves some of the state accounting to the way it's generally optimized on epoch transitions anyway. So I think this would be a net benefit; it helps with things like empty epoch transitions and such, which are currently probably a DoS vector. Cool, so that's ongoing work. There's also work to make the testing a little bit more sustainable.
A
I want to use a little bit of a different model, and I expect that to come out early next year. I didn't really want to pull the rug out from under you on testing infrastructure right near launch, but that's coming.
A
Let's see, anything else on testing and releases? Mehdi, you got anything for us?
C
Not much; no crashes lately, which is good. I guess we had to fix our fuzzers after a few client updates messed them up. We still have one small issue with honggfuzz (the mutation-based fuzzing engine, not the structural one): it's spitting out way too many false positives to be useful, so we basically need to implement some timeouts to give it enough time to initialize all the clients. But yeah, it's been good. We've been working with Pari on deploying the fuzzers to the AWS infrastructure.
C
He's got some great Ansible and Terraform scripts. But yeah, pretty quiet, I'd say: no crashes to report, which is good.

A
Good, thanks.
D
I think I got some feedback from the clients and the other implementers, so I think they will continue to implement against our test vectors, but the interface might change if I find something new. But basically, the tests will focus on the three significant bytes and the verifications.
F
Hi everyone, from the Teku side. We've been focusing on optimizing epoch processing; creating the first block of the epoch is still slower than the others, so there's potential for further improvement, but it's significantly better than it was, and we're not seeing blocks get orphaned on Pyrmont anymore. Attestations for the first slot are also now much more accurate, because block import times are faster.
F
We've modified the eth1 genesis block search to be more tolerant of missing historical blocks, we've reduced the backing tree's heap consumption, we've significantly sped up SSZ serialization and deserialization, and we fixed a bug which caused Teku to produce invalid attestation aggregates. That's about it.
G
Hey guys, Terence here. So we've added database backups at runtime for both the beacon node and the validator client. We also now support reading graffiti from a file on the validator side, and this was highly requested by users. We have peer scoring enabled by default, we updated to Golang 1.15.6, and we improved disk I/O for the validator client, which results in safer slashing protection, which is great.
G
Last but not least, we've added an eth1 fallback option for the eth1 node, and this was also highly requested by our users. And that's it. Thank you.
A
I'm curious about the fallback option: is it triggered when the API connection fails, or does it also try to detect whether the main node has stopped following the head?
A
Cool, yeah. I wonder if maybe you could look at peer count or some other signals that might indicate it's unhealthy. I was just setting that up the other day and didn't know exactly what was going to happen. But thank you. Lighthouse?
C
Hello. First of all, apologies for missing the last call two weeks ago. We've been working on enhancing our slasher, so we fixed a proposer-slashing bug, and we added an option for users to broadcast their slashings.
C
I think, like a lot of people, we've had some eth1 client issues: Geth seems to work really well, but everything else seems to struggle. So we also added an eth1 fallback node a while ago, and we now have a manual flag for users to purge their eth1 cache for dodgy clients. We also simplified the beacon node lock files; we're now using OS locks.
C
We've almost completed the standard HTTP API; the server and events are implemented now. I went and corrected some late attestations. We've been testing and analyzing gossipsub data, metrics, and scoring, and we found a little discrepancy in the expected message propagation, which will be corrected very soon in a future PR. And yeah, we're currently working on reducing our block proposal...
C
...time: we've cut it down from about 500 milliseconds to about 100 milliseconds. Paul and Benedict are working on a beacon node fallback, kind of similar to the eth1 fallback, giving users the ability to specify multiple beacon nodes. And we've started experimenting with a new scoring parameter to essentially help detect censoring nodes. That's about it.
E
Great. Hi, so yesterday we released our version 1.0.4. We didn't announce it, but we will be announcing it soon (some users found it already, though). The biggest change we had was properly handling termination signals: for example, making sure that the slashing-protection database is correctly flushed to disk and closed.
E
We also added a new performance score so that you can check whether your system can keep up with the requirements of the eth2 chain. Also, importantly, we fixed a bug where we didn't fully aggregate: we received aggregates and forwarded them to the network, and we ended up getting a bad score. And we also fixed an issue in gossipsub which led to slightly worse attestation effectiveness.
E
Right now we're focusing on a couple of performance bottlenecks. One is disk I/O: we have some I/O when we do attestations and block proposals that we would like to reduce. Another is accelerating the state transition by having something called a state diff, instead of storing the whole state. And lastly, multi-threading: we have the primitives done, and we plan to refactor and verify the benefits from multi-threading at two different levels, maybe three weeks from now.
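The state-diff idea mentioned here can be sketched roughly as follows. This is a hypothetical illustration, not Nimbus's actual storage format: states are shown as flat dicts and the function names are made up. The point is just that storing a diff against a recent full state is much smaller than storing the whole state, at the cost of a reconstruction step on read.

```python
# Hypothetical sketch: store a diff against a base state instead of the
# full state, and rebuild the full state on demand.
def make_diff(old_state: dict, new_state: dict) -> dict:
    """Keep only keys whose values changed (or were added)."""
    return {k: v for k, v in new_state.items() if old_state.get(k) != v}

def apply_diff(base_state: dict, diff: dict) -> dict:
    """Reconstruct the newer state from a base state plus a diff."""
    merged = dict(base_state)
    merged.update(diff)
    return merged
```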
H
Hello everyone. So over the past few weeks: it turns out we hadn't implemented two endpoints of the standard API, so we finally got around to implementing those. We implemented and integrated the BLS batch verification, which really helped us, and we've been going through profiling the beacon node and finding low-hanging fruit. Some things are as simple as, you know, in libp2p, getting peers in JS takes a while. So just doing our...
A
Hey, did I lose Cayman, or am I disconnected?
H
Yeah, sorry. The last thing you heard was BLS batch verification, profiling, how to get peers. Right, so we're trying to get some final things together that are really stopping us from having a beacon node that's responsive all the time. And we've decided to cut releases every two weeks, so we're just going to be on that kind of release schedule.
H
Yes, so it works currently, I will say; you can pull it down from either master and build it, or from npm. But one thing: it will kind of pause every few minutes. It will stay on the head, but it will not be responsive at all times, so we're working through those last things.
B
There's also one of the very recent ones in the PR list that includes the beacon chain transitions, and there's a separate document that the PR links to, which has been around for a while, that basically talks about how the data availability sampling works. In the meanwhile, Dankrad has been doing some excellent work on optimizing proof generation, and it's looking like generating the inclusion proofs (or the correctness proofs) for branches of a block is going to be much more practical: possibly somewhere around eight times faster or more than we had thought before. That's potentially good news, because it reduces the headaches that we might expect around block proposing; block proposing would just be a significantly simpler process.
B
I think one really important thing is that you need to have a library that exposes BLS operations, multiplication and addition being the most important ones. And ideally you want this fast linear combination, aka multi-exponentiation, which basically just takes a whole bunch of points and a whole bunch of coefficients, multiplies each point by its coefficient, sums them all up, and gives you the output.
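The linear combination described here can be sketched in a few lines. This is a toy illustration only: integers modulo a small prime stand in for BLS12-381 group points, the modulus and all names are hypothetical, and a production library would use an optimized algorithm such as Pippenger's bucket method rather than this naive loop.

```python
# Toy sketch of a linear combination (multi-exponentiation):
# sum of coefficient_i * point_i. Integers mod a prime stand in for
# actual BLS12-381 curve points; this is the naive O(n) version that
# optimized libraries beat by up to an order of magnitude on large inputs.
TOY_MODULUS = 2**31 - 1  # hypothetical stand-in, not a BLS12-381 parameter

def linear_combination(points, coefficients, modulus=TOY_MODULUS):
    """Return the sum of point * coefficient over all pairs, mod modulus."""
    assert len(points) == len(coefficients)
    total = 0
    for point, coeff in zip(points, coefficients):
        total = (total + point * coeff) % modulus
    return total
```

With real group elements, `point * coeff` becomes a scalar multiplication and `+` a point addition; the structure of the loop is unchanged.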
B
Basically, there are optimized ways of doing that which are potentially up to something like 10 times faster than doing it the naive way, for large inputs. So if you have libraries that do those things, great; and if you don't have the fast linear combinations, you could also just code it yourself, which is actually not that hard. And making a Kate commitment for a piece of data is really just a single step.
B
You have a pre-existing set of points, which is basically an FFT of the trusted setup (you can pre-compute it fairly easily), and you just do a fast linear combination of the setup with the block: the first point multiplied by the first piece of data, plus the second point multiplied by the second piece of data, and so forth. So committing to data is easy, and verifying an inclusion proof...
B
...is also easy: basically it's just a single pairing check. Generating all the inclusion proofs is the thing that Dankrad has been working a lot on optimizing, but the good news is that it's still only something like 250 lines of Python code or so, so you can probably just take the Python code and translate it. So my opinion is that it really should not be all that difficult, actually.
B
Correct, so the FFT is needed in a couple of places. One place is generating all the inclusion proofs for a block: to generate n inclusion proofs in n log n time instead of n squared time, you need the fancy algorithm, which Dankrad...
B
...created by adapting an earlier construction from Dmitry and one other person, and which uses FFTs internally. The second thing is that the Kate trusted setup you normally see comes in the form of a sequence of powers, but in order to just commit to a block, you want one that comes in the form of a sequence of evaluations, and converting from one to the other also requires an FFT.
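For anyone picking this up, the FFT itself is small. Below is a minimal sketch of a radix-2 FFT over a prime field; the modulus 337 and the 8th root of unity 85 are toy parameters chosen for illustration, not the BLS12-381 values. The same recursion works over group elements (as needed for converting the trusted setup), with the coefficient arithmetic replaced by point operations.

```python
# Minimal radix-2 FFT over a prime field. Evaluates the polynomial whose
# coefficients are `values` at the successive powers of `root`, a
# primitive n-th root of unity mod `modulus`. Toy parameters, not BLS12-381's.
TOY_MODULUS = 337  # small prime; 337 - 1 is divisible by 8
TOY_ROOT = 85      # primitive 8th root of unity mod 337

def fft(values, root, modulus=TOY_MODULUS):
    n = len(values)
    if n == 1:
        return list(values)
    # Recurse on even- and odd-index coefficients with the squared root.
    even = fft(values[0::2], root * root % modulus, modulus)
    odd = fft(values[1::2], root * root % modulus, modulus)
    out = [0] * n
    power = 1
    for i in range(n // 2):
        term = power * odd[i] % modulus
        out[i] = (even[i] + term) % modulus
        out[i + n // 2] = (even[i] - term) % modulus
        power = power * root % modulus
    return out
```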
E
Okay, so in terms of the state of the BLS libraries that we can use: I remember from my very first talk with Supranational back in April that the multi-scalar multiplication was one of the things they wanted to look at, but I don't think it's ready yet. However, ConsenSys has libraries (gurvy and gnark, for SNARKs and BLS12-381) with a multi-scalar multiplication that is also multi-threaded and heavily optimized, and the ZEXE project also has a library in Rust.
B
Oh, excellent. Looking at the libraries that are being used for SNARKs definitely seems like a potentially fruitful route to take. But one thing I want to stress is that there definitely is value in working on the Kate pieces sooner. The reason, basically, is that we really want to get statistics on what the runtime of one of these things is in practice; if we have that statistic, I think it significantly de-risks the whole thing.
I
Also, I'm very happy to create a little intro to the FFT-on-groups stuff, like the method to create all the proofs. So as soon as anyone says they want to start working on this, I would create something to get them on board.
B
I think that's the main new stuff happening on the research front. No, I can't think of anything else, though.
A
Okay, anyone else?
J
Related to data availability sampling, I guess one aspect is possibly slightly modified DHTs with new properties like low latency, and one of the things we're looking to do is hire a networking expert who could maybe help us at the foundation investigate these issues.
A
All right, any other updates?
A
All right, great. I might have had my mute backwards there; any other updates? All right, great. We had a networking call one week ago, and there's a to-do list that I need to get to. Thank you for taking notes, Ben. Are there any other networking components people want to discuss today?
A
Okay, and general spec discussion: like I said, there's plenty of movement on the spec repo. I think it makes sense to have at least one person on your team keeping up with the various developments and engaging, and we may well have plenty of good stuff to talk about for maybe a first hard fork. Like I said, I think it'd be a good time to get on some calls and make sure we write down a lot of this stuff, do some knowledge transfer, and get people up to speed, because there are plenty of engineering and R&D tasks to begin to tackle here.
B
Yep, makes sense. For anyone listening, the quick update is basically that there are multiple small tweaks to the fork choice rule that we have been mulling over in order to address the issues that some academic groups have found with the fork choice rule: basically, some theoretical issues with liveness. The ideas on exactly what to do have been around for a couple of months now, and we're just pushing them forward to something close to a ready-to-go stage.
A
And kind of underneath that is a more general contemplation of liveness, because, as you're aware, the fork choice is LMD GHOST, but it then has a number of these tweaks and fixes to make it more live under the attacks found, which is a game of whack-a-mole, which is maybe not ideal.
A
Okay, anything else at all that people want to discuss today before we take a break, at least for this year?
A
Yeah, yeah, you can start us off, Vitalik. Can you start us off with a 10-minute diatribe?
A
Thank you, excellent work as always. Happy to see mainnet very stable. Have a great holidays, and talk to you all soon.