From YouTube: Eth2.0 call #39
Description
A: A lot of my focus has been on v0.12. The last thing really in the queue there is getting the upgraded BLS, including the draft 7 version, into the spec, along with the new test vectors I'll be looking at today. We're trying to get this out in the next couple of days. I know it's a little late, and we've been the blocker; there have been a number of things going on, but getting this BLS in is the last thing there.
A: Other than that, there are a number of networking updates and modifications that came out of the networking call. Thank you, everyone, for the input and review. Some increased testing is also going to come out of that: there were some corner cases, especially around handling multiple different operations in blocks, that could result in modifying state in the middle of state transitions, so you might catch some new bugs on your end. Cool.
B: Sure, if that's all right. It's been a little while, so here goes. We actually pushed a blog post last week that details all the stuff we've been busy with; I'll post it in the chat now before I forget. It's been pretty good. We've made a lot of progress on the structural fuzzing: we implemented and derived the Arbitrary trait on our eth2 types, so we can now produce well-formed instances of custom types from raw byte buffers.
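The idea behind deriving Arbitrary can be sketched in a few lines: treat the fuzzer's raw byte buffer as a deterministic recipe for a well-formed value, so no input is ever rejected by the parser. The container and field names below are illustrative, not the actual beacon-fuzz types:

```python
import struct

class Checkpoint:
    """Toy SSZ-style container: epoch (uint64) plus a 32-byte root."""
    def __init__(self, epoch: int, root: bytes):
        self.epoch = epoch
        self.root = root

def arbitrary_checkpoint(buf: bytes) -> Checkpoint:
    """Map an unstructured fuzzer buffer onto a well-formed Checkpoint.

    Every input decodes to some valid instance, so the fuzzer spends its
    time exploring semantics rather than being rejected by the parser.
    """
    buf = (buf + b"\x00" * 40)[:40]        # deterministic pad/truncate
    epoch = struct.unpack("<Q", buf[:8])[0]  # first 8 bytes -> uint64
    return Checkpoint(epoch, buf[8:40])      # next 32 bytes -> root
```

The same mapping idea is what the Rust `arbitrary` derive automates across nested container types.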
B: That's a huge improvement in our fuzzing coverage, and it has already allowed us to identify an integer underflow in an upstream dependency, the snappy crate that we use. The maintainer confirmed the bug and we pushed a PR, but it's yet to be merged. Over the last few weeks we've also been working with Jacek and the Nimbus team, who have been really great to work with. They raised a bunch of issues, some of which could have been exploited off the wire, and there are more hardening opportunities. I guess the main ones were these.
B: There was a bug in Teku when decoding bitlists without an end-of-list marker, and in Nimbus there was a segfault and, if I recall correctly, a stack overflow in the final-updates function. We updated the beacon-fuzz trophy list; we're now up to 18 unique bugs, which is pretty cool. We've also made some good progress with the Golang integration.
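For context on the bitlist bug mentioned above: SSZ bitlists carry their length in-band as a single sentinel bit appended after the data, so an encoding whose final byte is zero has no end-of-list marker and must be rejected. A minimal sketch of a conforming decoder (illustrative, not any client's actual code):

```python
def decode_bitlist(data: bytes) -> list:
    """Decode an SSZ bitlist: bits are packed little-endian within each
    byte, and the highest set bit is a sentinel marking the list length."""
    if not data or data[-1] == 0:
        # The final byte must contain the sentinel; otherwise the
        # encoding has no end-of-list marker and is malformed.
        raise ValueError("bitlist missing end-of-list marker")
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    while bits[-1] == 0:   # discard zero padding above the sentinel
        bits.pop()
    bits.pop()             # discard the sentinel itself
    return bits
```

For example, the byte `0b00001011` decodes to the three-bit list `[1, 1, 0]`, and `b"\x01"` is the empty bitlist.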
B: So just a quick reminder for everyone: we've been experiencing a lot of issues integrating both Serenity and Prysm, and we've actually lined up a call with Prysm in a few hours to see how we can better collaborate on this going forward. If you go through the blog post, you'll see that we're proposing a new architecture for beacon-fuzz; it's all outlined there. Please feel free to give us feedback, we're super keen to hear from everyone on this. Basically, we're going to move away from C++ and implement the FFI parts in Rust.

B: I don't necessarily want to go into details here, but we're breaking beacon-fuzz down into three separate tools: eth2fuzz, which will do coverage-guided fuzzing, leveraging the structural fuzzing we've been working on to generate interesting samples; eth2diff, which will let us replay those samples across all the implementations using the nice facilities you guys have been building (zcli, lcli and so on); and finally the FFI bindings, which are the core of the differential-fuzzing part.
B: That last one is beaconfuzz_v2. There's a nice diagram at the bottom of the blog post if you're interested, so please check it out; we're super keen to get some feedback from everyone. We'll also be pushing Docker images so that the community can help find bugs. A lot of people are asking how they can contribute to eth2, and that might be an interesting experiment; kudos to Justin Drake for the suggestion. We should be wrapping this up next week, I guess.
B: Finally, we've started playing with Lodestar. I think we caught a few type errors on the SSZ side, but we're not really JavaScript experts, so I might reach out to Cayman next week to discuss this further, because most likely these are already caught by the calling package. Yeah, that's pretty much it.
C: Good. Yes, there was a discussion three weeks ago about some beacon states that shouldn't be trusted; it involved everyone, but apparently, last time I looked, it was actually about states that couldn't occur in the normal functioning of a client. So what's the latest news on that? Was it modified?
B: Oh yes, so that's the reason why we're splitting things up, so that we can avoid this confusion. You probably saw an issue or a PR from Danny; I don't think it ended up being merged into the specs repo, but we will probably have to define the syncing of beacon states at some point. In my opinion, the whole concept of beacon states as trusted inputs is an assumption we might not rely on for too long.
B: In fact, we found a couple of overflows ourselves where the inputs weren't, you know, invalid beacon states per se. The spec has actually been clarified, I think a week or two ago: now, if you overflow in a state transition, it's clear that the state transition is invalid, which is good to see. The structural fuzzing helps us mutate beacon states better, so that we can have not only valid SSZ containers, even if the contents are nonsense, but also valid beacon states as per the spec. That's one of the reasons we split the toolset we have into three separate tools. Those conversations were super interesting in my opinion; they raised a lot of interesting thoughts, so thanks everyone for being involved. And I guess one of the issues we had as well was around syncing.
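The overflow clarification can be illustrated with a toy balance update: rather than wrapping or saturating, a uint64 overflow now renders the whole state transition invalid. The function and state shape here are hypothetical, not the spec's actual code:

```python
UINT64_MAX = 2**64 - 1

def increase_balance(state: dict, index: int, delta: int) -> None:
    """Toy balance update: under the clarified rule, uint64 overflow
    neither wraps nor saturates; it invalidates the whole state
    transition (modelled here by raising)."""
    new_balance = state["balances"][index] + delta
    if new_balance > UINT64_MAX:
        raise ValueError("uint64 overflow: state transition is invalid")
    state["balances"][index] = new_balance
```

A block whose processing raises anywhere is rejected wholesale, which is exactly the behaviour a differential fuzzer needs all clients to agree on.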
A: And I guess to clarify on the syncing: there are probably two ways to sync a network safely once the network has run for more than, say, three weeks. One is to have a checkpoint, a root for a given epoch and block, sync from genesis, and then make sure that the checkpoint you reach matches the checkpoint root that you had. That wouldn't involve having to get an untrusted state from anywhere. But there's a much better UX around actually just starting from a state: if you have this checkpoint that you want, you want to just start from that state. At that point you are actually inputting a state, and there's an avenue to input a state into your system. Generally you've got it from, say, a trusted source, but there's a small likelihood that you got it from a nefarious "trusted" source, and then you have some sort of tainted state that you're inputting into your system. I think that's the nuance there.
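The first strategy above can be sketched with a hash-fold standing in for real block processing (all names illustrative): replay from genesis, then accept the chain only if the root you arrive at matches the trusted checkpoint root, so no untrusted state is ever imported.

```python
import hashlib

GENESIS_ROOT = b"\x00" * 32

def chain_root(blocks: list) -> bytes:
    """Toy stand-in for block processing: fold each block into a running
    hash, yielding the head root after replaying the whole chain."""
    root = GENESIS_ROOT
    for block in blocks:
        root = hashlib.sha256(root + block).digest()
    return root

def sync_from_genesis(blocks: list, trusted_checkpoint_root: bytes) -> bool:
    """Replay from genesis and accept the chain only if we arrive at the
    trusted checkpoint root; reject it otherwise."""
    return chain_root(blocks) == trusted_checkpoint_root
```

The second strategy skips the replay and imports the checkpoint state directly, which is where the tainted-state risk described above comes in.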
E: This is Cem from Teku. In the past couple of weeks we added snappy compression over gossip and RPC. We also added support for ping and get-metadata. We're now randomly subscribing to persistent subnets, we've majorly reduced our memory usage while syncing, and lastly we now have built-in support for syncing Schlesi. That's it.

A: Cool.
F: ...bugs and syncing bugs and what have you. It's still not really stable, but hopefully over the next few weeks we'll nail down some of that node-level stability. It still looks like our discv5 isn't running, so we are stuck with our bootstrap peers, and that's not really sustainable; different things like that we're still working through.
C: Hi. Like Lodestar, we had multiple sync fixes in the past three weeks, in particular for snappy, which had a compatibility issue with Lighthouse. We now have a single make schlesi target to connect to Schlesi, and Schlesi sync is working slowly but steadily. Right now the main focus for Schlesi sync is working on performance, in particular on Windows.
C: Besides that, we have worked a lot on multiple memory leaks that were preventing our testnet from lasting for a week, some coming from p2p, some from a block-caching system, and we added several memory-tracking tools. Also, since we are now focusing on bug fixing, we have new tools to debug discovery and to debug the topics and messages received.
A: Thanks. And this probably goes without saying, but once we have fast state transitions, it seems like the next big culprit is memory usage. I know everyone has kind of been attacking this from different angles, and there are a lot of pretty solid strategies to go off of now, so you don't have to do this alone.
G: Hi everyone, not a huge update this week. Mainly we've been continuing our port to the Trio async framework. We have updated to the latest beacon node and validator APIs, which was good to get into place. We've also made some progress in bringing more full-time contributors onto the project, which should really help out with everything we have going on.
I: Hey guys, Terence here. Over the last few weeks we've been working on Topaz maintenance, fixing UX bugs and addressing user feedback as it gets reported on the Topaz testnet, and also fixing bugs as they're reported on the multiclient testnet; nothing really substantive in that regard, just typical bug fixes. We are fully aligned to spec version 0.11.1, and we're working on aligning to version 0.12 right now. Peter has been doing great work on optimizing initial sync: his latest experiment resulted in 100 blocks per second during initial sync, and this is without attestation signature verification, so we still need to optimize signature verification for initial sync, basically what Lighthouse is doing. Shay has been doing great work on the slasher and slashing detection. Eric reported to us that, wow, his validator was earning more money than the rest; it turns out his validator had included a slashing object in a block.
I: That means our backend slashing service is working, and that means slashing on the Topaz network is working, so it's pretty exciting. Lastly, we've been running the client through readiness tests, in particular stress tests and the inactivity-leak finality tests. We updated a few internal metrics for better validator monitoring, and we're running 16,000 validators across twenty nodes with no issues on that so far, so we're starting the inactivity-leak finality tests within a few days to see the outcome. So, yeah.
B: We've wrapped up the slashing protection. I spent a lot of time handling concurrency and atomicity guarantees for database transactions, and we've also changed the directory structure to better suit updates in the future. We've been running several 16k-validator testnets over the last couple of weeks; we've seen two panics, one from an upstream package and one from our code, and both have been fixed.
B: You were asking about memory usage, Danny; I've seen some great improvements there. We're looking at 300 megabytes of RAM for a node running both the beacon node and the validator client with 2k validators, which is quite cool. We've been fixing consensus bugs as well, which were identified by Justin Drake, and we've removed almost all parallelization from our state-transition code; it's not really needed anymore, since we can do batch BLS verification.
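For intuition on why batch verification removes the need for per-signature parallelism: instead of n independent checks, the verifier draws random coefficients and checks one combined equation. The sketch below uses plain modular arithmetic as a stand-in for the BLS pairing check; it illustrates only the random-linear-combination idea and is not the real BLS API:

```python
import random

# Toy additive group of prime order Q; a real implementation would use
# BLS12-381 pairings, this just shows the batching trick.
Q = 2**61 - 1

def sign(sk: int, msg_hash: int) -> int:
    return (sk * msg_hash) % Q           # toy "signature"

def verify_one(pk: int, msg_hash: int, sig: int) -> bool:
    return sig == (pk * msg_hash) % Q    # pk equals sk in this toy group

def verify_batch(pks, msg_hashes, sigs, rng=random):
    """Check all signatures at once: draw random coefficients r_i and
    test sum(r_i * sig_i) == sum(r_i * pk_i * h_i) in one equation."""
    rs = [rng.randrange(1, Q) for _ in sigs]
    lhs = sum(r * s for r, s in zip(rs, sigs)) % Q
    rhs = sum(r * pk * h for r, pk, h in zip(rs, pks, msg_hashes)) % Q
    return lhs == rhs
```

The random coefficients stop an attacker from crafting two wrong signatures whose errors cancel, which is why the combined check is (with overwhelming probability) as strong as checking each one.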
B: We also finished implementing the full gossip verification logic, which surfaced a bunch of issues, and we're really looking forward to seeing how everyone handles those. Since I see talk about Schlesi: we've defaulted Lighthouse to run on the Schlesi testnet, so the spec constants and the genesis state are now baked into our binary. As for other misc updates, we moved our discv5 implementation into a standalone Sigma Prime repo, and we've been upgrading the entire code base to stable futures.
B: That means bumping all the Lighthouse dependencies to the latest versions, and we're almost done with this massive upgrade that Adrian has been busy with; we're hoping a bunch of issues will be fixed in Lighthouse once these changes are incorporated. We've also been working on RPC error handling, which is being integrated into our peer-reputation system.
J: Yes, sure, thank you. I can talk a little bit about the multiclient testnets. A lot of stuff happened in the last three weeks since our last call. I made multiple attempts to create a multiclient testnet that unfortunately failed, mainly due to network fragmentation, but also due to beacon nodes disconnecting and rejecting each other for rate-limiting and other reasons. Then, if I remember right, there was a problem where the calculated genesis time could be less than the minimum genesis time, so we had different genesis times calculated in Prysm and Lighthouse.
J: But then again, as I said, the client teams are super helpful and responsive, and they're doing an amazing job fixing bugs; eventually, after fixing the most important parts, Schlesi has now had almost perfect finality for more than a week, and I think everyone is surprised how stable this network is running. After a couple of days Teku joined Schlesi, since they managed to successfully connect to and sync the network first, and now I also know they run validators on it. So we have three full clients on Schlesi right now.
J: I know that the Nimbus client is also synchronizing. Prysm still experiences networking and sync issues, but I know the team is very close to fixing them. I didn't manage to get Nimbus to the Schlesi head yet, but I know it synchronizes and connects; maybe Proto has more details, because he mentioned earlier today that he managed to do a full sync. I also know that Lodestar managed to connect and synchronize at least some epochs on Schlesi, but I didn't test that out.
D: Right, so I've been experimenting with Lodestar and Nimbus, the clients that are relatively new to the testnet. Lodestar has made some great progress in stability and they're indeed syncing many blocks now, and Nimbus is very close to the head of the chain, about a hundred blocks' distance, which is where the sync mode changes.
A: There are a number of networking changes; most of these are very minor. I think the big thing is going to be the support for the new BLS. I know that we just got it in Herumi, and I'm not sure about the state of the Java implementation and the Milagro implementation. Has anybody looked into those yet? Yeah.
A: The Python is also updated, and we will have those test vectors output. So I think a coordinated start right at the beginning of June, on Witti or some sort of larger coordinated testnet, will make a lot of sense, and I think we'll even be able to do some smaller test runs in the weeks before; but I guess that'll be more on the client end, to digest how much this v0.12 is going to take. Cool. Any other conversation on testnets? Questions, comments, thoughts?
L: Yeah, we have quite a bit to report, because we haven't been reporting as frequently on these calls; the last time, I believe, was a few months ago.
L: So this first variant uses receipts, which are generated on the sending shard and need to be submitted on the receiving shard. The simple examples we have regarding this are tokens, two kinds of tokens, one of them wrapped tokens; with that example it is possible to accomplish having Dai, for example, on all the different shards. Now, on to the next steps.
L: If you look closely at the specification, in one of the appendices there is something called rich transactions, and we also used that to devise yet another version of yanking; we only realized retrospectively that it is a kind of yanking. So anyway, I think we can either look into yanking next, or into something based on the ETH transfer objects, which I believe were mentioned by Casey as part of some phase 2 ideas earlier on. At this point I also want to emphasize one of the main reasons for this.
L: But eventually we do expect that the more useful designs would mean much larger changes to the EVM, where it may not make sense anymore to keep the EVM, because, at least historically, the eth1 community has been really reluctant to accept radical EVM changes. And if you would do radical EVM changes, then you already lose the benefit of the existing EVM tooling.
L: One engine that was Ewasm-compatible was an engine called SSVM, but it doesn't really bring any speed benefits over wabt, which was our main engine so far. I also mentioned Fizzy a few months ago, probably in February, which is an interpreter written by the Ewasm team, and I'm happy to report that just today we managed to release version 0.1.
L: Okay, now coming to maybe a really interesting part. As part of the benchmarking, we have been looking at basically all the different precompiles which currently exist in the EVM on eth1, because if we were to propose a Wasm-based system, we wouldn't want to keep the precompiles. We have reported previously that for the elliptic-curve precompiles we got quite good results; that was with bn128 and the 25519 curve.
L: We got quite good results with those, including pairings, but it required what we call big-integer host functions. You could say those are kind of similar to precompiles, but they are much more primitive operations than the precompiles which exist on eth1. An example would be 256-bit addition, because Wasm doesn't have it; but we also proposed Montgomery multiplication on 256-bit numbers in this big-integer API.
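A Montgomery multiplication host function is attractive because it replaces the expensive modular division in `a * b mod n` with shifts and masks. A minimal sketch of the standard REDC algorithm (illustrative, not the proposed Ewasm API):

```python
def montgomery_redc(t: int, n: int, n_prime: int, r_bits: int) -> int:
    """Montgomery reduction (REDC): returns t * R^-1 mod n, where
    R = 2**r_bits and n_prime = -n^-1 mod R, using only shifts/masks."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask   # m = t * n' mod R
    u = (t + m * n) >> r_bits               # t + m*n is divisible by R
    return u - n if u >= n else u           # single conditional subtract

def montgomery_mulmod(x: int, y: int, n: int, r_bits: int = 256) -> int:
    """Compute x * y mod n for an odd modulus n < R via Montgomery form."""
    r = 1 << r_bits
    assert n % 2 == 1 and n < r
    n_prime = (-pow(n, -1, r)) % r          # precomputable per modulus
    x_bar = (x * r) % n                     # into Montgomery form
    y_bar = (y * r) % n
    prod_bar = montgomery_redc(x_bar * y_bar, n, n_prime, r_bits)
    return montgomery_redc(prod_bar, n, n_prime, r_bits)  # back out
```

In practice the conversions in and out of Montgomery form are amortized across a whole chain of field operations, which is where the speedups reported above come from.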
L: With these we were able to achieve really good speeds on bn128, and in the past month we have been looking into BLS12-381, and again we managed to reach speeds very close to native. So first, and all of this is on interpreters, we looked at just a basic BLS12-381 field implementation in Rust, which didn't really produce the speeds we expected, or at least hoped for. Just some rough numbers here.
L: The library we have been using for bn128 managed to implement support for BLS12-381, and with that code, with some more optimizations on the big-integer API, we have been able to bring the speed down from 500 milliseconds to close to 14; and I think with one more set of optimizations we were able to get close to 8 milliseconds, which is around half the speed of native, I would say.
M: We've been looking more into homomorphic encryption. One thing we discovered is that there's a use case for it in private information retrieval, so we'll probably post more about that soon. I also published a post this morning, basically an open call to cryptographers to see if they can solve our polynomial-commitment problems.
M: And I have a kind of meaningful follow-up that I haven't posted yet, which basically reduces the frequency of key revealing, so you don't have to worry about keeping track of revealing; it's just that if you don't reveal on time, then it's invalid. So between those two things, it seems like we can cut the complexity of the proof of custody by more than half.
M: Maybe by two thirds or three quarters, yeah, which is awesome. One thing where it turns out we got a bit unlucky as well: we went down this rabbit hole of trying to get self-verifying proofs of custody based on the Kate commitments, and there are efficiency and uncertainty concerns; it would also bind us to using Kate commitments for the verification. So there are challenges in going down that path that don't exist if we just go down this kind of 0.001-bit approach.
N: Great. On the eth1/eth2 merge research: Mikhail has just released an ethresear.ch post about that, I think shortly after our call last time; no, actually probably about a week ago. He has started working on a draft eth1-to-eth2 communication protocol, and he's kind of working on a PoC for phase 1 as well.
O: Yeah, there's an RPC spec in the works; we're still kind of discussing the documentation and the list of RPC calls with Peter. We had a meeting this morning, and I think we just shared the document with you, so I was going to ask for your input on it. Overall, I would say the skeleton is already there, but we still need to iron out a couple of things with Peter.
P: Hey, sorry I couldn't really... sorry for not participating in the networking call last time. As I said, I'm still working on the sort of expected updates to the discv5 spec, which will improve the performance a little bit and also resolve this one error message that, I guess, if you've been running it on a testnet, you're also seeing.
P: Basically, I'll have something ready for feedback in the next couple of days, and then it would be kind of nice to get into a bit of a conversation with all of the implementation teams to figure out what is going to be the path of least resistance for upgrading, because the discovery upgrade can be kind of complicated. We have to figure out whether we actually want to try to do some kind of soft update or just, you know, basically live with it.
P: I guess it's going to take approximately, I don't know, one more week at least to get the actual spec done, and then I can assist people with the implementation. There's going to be one bigger change to the packet format, but otherwise it's basically going to be fine.

A: Okay, all minor changes. Cool.
A: Okay, general spec discussion. I have seen those phase 1 PRs with the bugs; certainly the testing on phase 1 is minimal currently, so I really appreciate those, but I've been prioritizing the v0.12 work, and I'll get to them soon. Other spec items? I imagine this discussion, which has been pretty quiet for the past six months, will get a little more lively as we move into phase 1 implementations.
A: Right. So, Joseph, what is the state of the fork choice tests that you all have been working on? Are these generated off of the pyspec, or generated off of the old Harmony implementation? What's the format that they're output in, and given that format, is this something that we can incorporate into the canonical vectors for the next release?
N: I'll let Alex, who's likely on the call, speak to it, but essentially what it does is read the pyspec and then transpile it to generate the tests. Alex, are you on?
Q: Hi. So, about the fork choice tests: just before this call I made a PR to fix the phase 0/phase 1 zero-signature issue. I put it up, and I hope that people who are interested can take a look, like within 24 hours, so we can generate the fork choice tests. Sure. Thank you.