From YouTube: Eth2.0 Implementers Call #11 [2019/1/31]
A: Cool, so I guess the spec is generally feature complete. There are a few open items, like the pointwise shuffle, standardization of BLS, and one minor thing around deposits, but it's approaching its final form with respect to features and stability. We're going to do one release per week through February as we continue to find bugs and clean up the delivery, and then we'll slow down the release cycle in March, as I mentioned.
C: First, we have decided on the conception of our initial release. It will be a beacon chain emulator, where it's possible to change parameters like the epoch length, slot duration and so forth. It will run as many validators as a JVM instance can support with a given amount of memory, and it's difficult to estimate that number.
C: But it's going to be thousands of validator instances, and it's very exciting to see how such a big number of validators work together. Almost all components of a stand-alone beacon chain are ready at the moment, and there is work to do on assembling them into one client. And, right, let's cover the code base with tests. This is one of the biggest issues so far. We have started to work on it, and we're also trying to start actively collaborating with the testing team; at the moment we're stuck on a licensing issue.
D: So when we launch the validator client, we wait for the ChainStart log, and once there are enough deposits the validator will receive the log and then kick off the beacon chain. So we work on that routine, and that part is working. We're also working on using a simulated backend to advance the beacon chain, so this will create some sort of end-to-end test, which will be really nice before the testnet launch. We're also using a keystore for preserving validator keys, and we're working on the attestation proofs for proposers.
E: Related to Shasper, we're mostly doing housekeeping, so basically refactoring and making the code cleaner. I finished the refactoring of the storage so that we set the best block together with the resulting block; that way we can invoke the fork choice first and only afterwards import the block. We're also working on migrating our code base to Rust edition 2018, and we're doing some refactoring of the Substrate RPC engine so we can reuse more code in the Shasper crate. So that's just about it.
H: It's not too hard. All you have to do is: if you have an array of n objects, then you pre-hash those n objects, and then, roughly in that order, hash everything below them, everything before them on the tree, and keep that up to date. You would just kind of make wrapped setters and getters that also adjust those nodes and recalculate the new hashes. And to really optimize, you can make a somewhat better way of doing n updates at the same time.
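The wrapped-setter idea described above can be sketched in a few lines: keep the whole binary hash tree in memory, and have the setter re-hash only the path from the changed leaf to the root. This is a minimal illustrative sketch, not the actual client code, and it assumes a power-of-two leaf count for simplicity.

```python
import hashlib


def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()


class CachedMerkleList:
    """Full binary hash tree over a fixed-size list of leaves,
    re-hashing only O(log n) nodes when one leaf changes."""

    def __init__(self, leaves):
        n = len(leaves)
        assert n and n & (n - 1) == 0, "power-of-two leaf count for simplicity"
        self.n = n
        # tree[1] is the root; tree[n : 2n] hold the leaf hashes.
        self.tree = [b""] * (2 * n)
        for i, leaf in enumerate(leaves):
            self.tree[n + i] = h(leaf)
        for i in range(n - 1, 0, -1):
            self.tree[i] = h(self.tree[2 * i] + self.tree[2 * i + 1])

    def get_root(self) -> bytes:
        return self.tree[1]

    def set(self, index: int, leaf: bytes) -> None:
        # Wrapped setter: update the leaf, then re-hash only its ancestors.
        i = self.n + index
        self.tree[i] = h(leaf)
        i //= 2
        while i:
            self.tree[i] = h(self.tree[2 * i] + self.tree[2 * i + 1])
            i //= 2
```

A batched version could collect several dirty leaves and re-hash each shared ancestor once, which is the "n updates at the same time" optimization mentioned.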
L: And, you know, as mentioned, we've been trying to work with the others, with Harmony, and kind of just working through some licensing issues, and, you know, kind of understanding everyone's needs, and so I think we're pretty close now.
A: Thank you. And I believe Py-EVM plus Trinity is a full client, and the EVM implementation, I imagine, is somewhere on the order of an order of magnitude slower. But because, in the real world, a lot of what it does is network I/O, it's beginning to be moderately performant. You know, beginning to be performant, not totally performant yet, right.
H: On the research side, I guess now that phase zero is entering, basically, small-change and bug-fix mode, it's first of all important to keep up the momentum and make sure that we have everything ready for phase one by the time clients are ready to develop it. So on that side, the main parts of the spec that remain revolve around BLS signatures.

I'll try to specify that soon, so that's one part, and then the other part is actually figuring out the proof-of-custody game. The latest update on that is that the game that's been sitting around, kind of lying in a half-finished form in phase one, has a weakness: if the data is available, and you calculate the proof of custody, and then you calculate the bit that someone should have made, and it turns out that that bit is wrong...

Then you need to do something like sixteen rounds of asking for one more level of the branch before you can figure out whether or not they actually did something wrong. Which basically means that it becomes easy to extend someone's withdrawal by sixteen rounds of whatever the challenge messaging period is, and that whole burden could be imposed even on innocent validators, which is not nice.

So I've come up with a way to reduce that to something like four rounds, while still retaining the same amount of data going on chain, so I'll probably write that up. Another important thing to think about is the game-theoretic issues. The main challenge with proofs of custody is that it's a different kind of game from all the other challenges that we have, because for all of the other slashing conditions...

If you slash someone, then the beacon chain can immediately figure out whether you're right or wrong: if you're right, then they get slashed, and if you're wrong, the message is not even valid. But here we're talking about forcing burdens of responding to challenges even on innocent validators, and we're also talking about the possibility of denial-of-service attacks around that.

The possibility of denial-of-service attacks around malicious validators making fake, unanswerable proof-of-custody challenges for themselves, and trying to kind of crowd out any challenges made by someone that actually knows that they're being malicious, along with some other issues. So actually figuring out the economics, specifying that, and making sure that it's robust is one challenge, and then specifying the erasure coding is another challenge. And that's something where we don't even need to decide which phase it goes into, because it's an extremely separate and parallel process, which is great.

In phase one, the research challenges, spec-wise, are actually relatively small. Phase one is mainly, I would say, a peer-to-peer network engineering challenge, and, realistically, an engineering challenge of pretty unprecedented scale, because there just haven't been sharded blockchains.

I understand the challenge there, and this is something that a lot of people will need to work quite hard on. But the good news is that phase 2 is not a very significant networking challenge once you've already done phase 1; on the other hand, phase 2 has more research challenges. So, from the point of view of getting phase 2 launched and usable faster, it seems like there's a lot of parallelism...

I think we can do between those two. And as far as what phase 2 actually contains, I'm probably going to write up some posts about, basically, what concretely can be figured out, what has been figured out, and what needs to be specified more. It's probably still too early to actually start writing a spec, because there's still room for idea refinements and for choosing between multiple proposals.
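The sixteen-versus-four round count above follows from the branching factor of the interactive challenge: if each response lets the challenger narrow the disputed range by a factor b, isolating one position out of N takes roughly log base b of N rounds. A minimal sketch of that arithmetic (the 65536-position dispute size is an illustrative assumption, not a number from the call):

```python
def challenge_rounds(n_positions: int, branching: int) -> int:
    """Rounds an interactive challenge needs to isolate one disputed
    position out of n_positions, when each response narrows the
    disputed span by `branching`."""
    rounds = 0
    span = n_positions
    while span > 1:
        # Each round shrinks the disputed span by the branching factor.
        span = (span + branching - 1) // branching
        rounds += 1
    return rounds


# With binary narrowing, a 65536-position dispute needs 16 rounds;
# widening the branching factor to 16 cuts the interaction to 4 rounds,
# at the cost of publishing more sibling data per round.
binary_rounds = challenge_rounds(65536, 2)
wide_rounds = challenge_rounds(65536, 16)
```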
I: Yeah, I would agree with Vitalik that, from the research perspective, phases zero and one are very much under control, and we have, you know, lots of time for phase two in terms of research. So I'm not going to spend too much time on the phase two research in the short term; I'll try and focus on getting the phase zero spec finalized as soon as possible.

I'll try to find the remaining bugs (I still think there are quite a few non-trivial bugs lurking around) and find simplifications that will make everyone's life easier and cleaner. Specific things that I've been thinking about a little bit for phase zero: one is trying to standardize the BLS12-381 curve across multiple blockchains. So I've approached the blockchains I knew went with BLS, which includes Zcash and Chia, and I learnt yesterday during the conference that Dfinity and Filecoin are also interested in that curve, and it sounds like people are interested in standardization. The standardization specifically includes hash-to-G1, hash-to-G2, serialization, and generators. So I think that's a nice effort to try, and it will likely mean that we would make relatively minor changes to the currently specified scheme.
I: For example, okay, Dfinity; so I can ask them and see what they've done. Sure, yeah.
I: We haven't really specified, right now, the networking topology for phase zero, especially when it comes to how we would handle the aggregation for attestations. I think one idea that looks, you know, very simple, which doesn't require a fancy sharded network, is to just have a monolithic gossipsub channel for everyone, and hopefully there won't be too many... well, if there aren't too many validators in phase zero, then that might actually be a workable solution. If not, we could consider having a sharded one.

Feedback from the AMA is that people are looking into ways to try and make beacon ETH, as it's called, somewhat more transferable, or saleable, marketable, or, you know, fungible (Zak, with big words, yeah). So one of the ideas is to try and make validators able to change the withdrawal credentials, so that would effectively mean that they could sell the whole balance to some other party, such as a centralized exchange.
R: Basically, you've seen the statistics of the Ethereum network, and I just wanted to compare, more or less, a couple of things with one of the simulations that I have done this morning. This simulation in particular simulates 256 nodes in the peer-to-peer network. So it's a small simulation, yet it's still becoming a little bit hard to visualize when you increment the number of nodes, so this is something I have to work on. What you see here is the peer-to-peer network, for those that were not on the call.

The colors indicate the number of peers, so green colors mean nodes with many peers, and if you look at the sides you have dark colors, meaning nodes that have few peers. The minimum number of peers for this particular simulation is 4. Here are all the nodes, from 0 to 255.

Now, I don't know if this is representative of the real network, but this is something that we can change in the configuration for the simulation. I can simulate with two peers or with 20 peers, as much as we want, and this is something that we can definitely change and do different experiments on.

Another thing that I have added is the ability to change the number of messages that are sent at every broadcast. So, for example, even if you have 14 peers, I can limit the number of messages in a broadcast to, let's say, 10, and then, when you receive a new block, for example, you will broadcast this information to only 10 nodes among the 14 you have. And of course, if you have fewer peers than that, the engine will transmit to all of them. I have played with this.

Okay, so that is one thing. The simulation also produces statistics about the time to produce a block. On the x-axis you see the block number; this simulation covered about 4000 seconds, which is a bit more than one hour, and the whole simulation ran in about 40 seconds. And then here you see the time to produce each block, which you can compare a little bit with the figure that you see in the stats of the Ethereum network.
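The broadcast cap described above is easy to sketch: when the fanout limit is below the peer count, a node relays a new block to a random subset of its peers rather than all of them. This is an illustrative sketch with hypothetical names, not the simulator's actual code.

```python
import random


def broadcast(peers: list[int], message: str, fanout: int, send) -> None:
    """Relay `message` to at most `fanout` randomly chosen peers;
    if the node has fewer peers than the cap, send to all of them."""
    if len(peers) <= fanout:
        targets = peers
    else:
        targets = random.sample(peers, fanout)
    for peer in targets:
        send(peer, message)


# A node with 14 peers and a fanout cap of 10 relays to exactly 10 of them.
sent = []
broadcast(list(range(1, 15)), "block-42", fanout=10,
          send=lambda peer, msg: sent.append((peer, msg)))
```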
R: Now, if you compare with this figure here, you can see more or less the same spread: some blocks are produced every 2 or 4 seconds, then you see others that take 79 seconds to be produced. So this particular simulation produced similar statistics, and you see that the average block time here is 17 seconds; in the case of the mainnet it's about 19 seconds. So it's not that different.

The thing is that, for mining, I use a kind of random function and aggregate the random solution among all the peers in the network, and so sometimes, you know, none of them finds the solution. It's not really mining, but sometimes it takes a long time for any of them to find a block.

Yeah, okay, so these are the main chain blocks. Sorry, okay, moving to the next figure. This is the uncle rate, so each bar in the histogram represents the number of uncles per 25 blocks. It's similar to the figure that you see here for this network: most of them are between zero, one or two uncles every 25 blocks, and that's more or less...

What you see here is just one hour of simulation. I have other simulations that are longer, two hours or so, and you see a little bit more variance, but in general the number of uncles is quite low, and it's more or less similar to, you know, what we see on the Ethereum main chain. Now, as I said, I have played with the number of peers and with the number of messages per broadcast, and that definitely increases the...

Uncle rate. I have other simulations, but I don't want to make a comparison between multiple simulations today, because I don't want this to take too long; I just wanted to show you the features. But this is something where we can play with the different parameters to produce different uncle rates.

Now, this figure, I don't know, I didn't find a better way to plot it; as it is right now it doesn't give much information. Basically, the x-axis is the nodes and the y-axis is the last block that has been mined, and so basically what it shows is that all the nodes are at the same level, you know, at block 200-something. The reason we plot...

This figure is because, by accident, some nodes, as I said, were isolated and they were not advancing in the chain, and so that's why I plotted this figure: to see if there is any chance any node is getting stuck at some point and doesn't receive the updates from the chain. Yeah, it looks a bit flat, but, you know, at least it shows that all of them are more or less on the same block.
R: Moving to the nodes: I want to show that you can click on any node in the simulation, and you can see all the information about the node. You can see, basically, the address. This is completely useless, but, you know, it randomly generates an amount of ether that is going to be used later on to become validators and, you know, put up your slashing money, etc.

A node can be a miner or not; some of them are miners, some of them are not. This is a parameter that you can set in the simulation: what percentage of the nodes in the network are miners and what percentage are not. And then you see the main chain, and so you see again the hash, the parent, the miner, so we know who mined the block and the time it was mined.

So this ran almost 4000 seconds, a bit more than one hour. Then you can go all the way down; you see all the blocks that have been mined, you can see all the uncles, and then you have a figure here that shows the block delay. The block delay is more or less what you'd call the block propagation here.

The block was produced somewhere, and the delay is at most one second. The reason it's one second flat in most of the cases, and not milliseconds, like 900 milliseconds or so, is because the time-step granularity for this simulation is one second right now. This is something that can be changed in the future; we can go with a finer granularity, let's say 100 milliseconds or so, but at this point the time granularity is one second, so that's what we see here.

Some of them are two seconds, which is what we saw before, sometimes up to four seconds. Okay, now there is another figure that shows the messages that are transmitted at every single time step: in red are the messages that I send and in blue are the messages I receive. You will see that most of the time I send messages.

So this is an uncle block, and so I send two messages per peer, and that's why you see 14 messages sent at the same time, and in the other curve you see how many messages I received per time step. This is just to have information about how the network is behaving. There are some cases, which you can see here, where a significant amount of time elapsed without any message being sent.

This is, for example, one of those blocks that took maybe one minute or so to be produced. And I can click on any of my peers and I will see again all the information about the uncles, the blocks, the block propagation, the messages transmitted and so on. You can see in this case you have seven messages per time step, or eight messages. Indeed, if you count the peers, you have one, two, three, four, five, six, seven, eight, so eight peers.
R: After the start of the simulation, the source is node 193, and so on and so on, and you can see pretty much all the information about everything that the node has seen, received, or sent during the simulation. You can see this for absolutely any node: you can click on any node and see a very detailed log. This is great for debugging, as you can imagine, and it gives us quite an amount of information, and this can be useful for post-processing to get more statistics.
N: I think there's a livestream for SBC, but correct me if I'm wrong; it would make me happy if that worked. There are livestreams from SBC, like all the talks are being livestreamed. Okay, if you have a URL, that would be great. I understood that they record them, I don't know, but yeah. Okay.
S: That's... yeah, I was expecting to be introduced after Gary. Well, you may know me from the schematic chart, or diagram, of the beacon chain spec. (Oh, cool, yeah.) Yeah, so I worked on that earlier this week. And you may also know me from EthSingapore, if you remember me; I worked on a sharding client there, just a hack, and then after the holidays I continued with a bit of work on it. I have implemented the GHOST fork choice and some of the state machine, but haven't implemented all the rules.
H: As it is, you have to shuffle the values at every position in one go, which is not nice if you just want to know whether or not you're the proposer, or just want to know some particular committee. So there are three main alternatives. One of them is this number-theoretic shuffle, which just uses the x-cubed-plus-k permutation, where we have random k values that get selected based off of the hash that comes from the seed.

The second is the Feistel shuffle, which came a bit later, and the third is that we reached out to Stanford academics and they told us about some provably optimal design. Let's see if I can actually... we're going to grab it and send it over to you, and you can probably push it into the chat. Hold on.

So this is designed to be provably, kind of, uniform, at least in the case of small sets, including very small sets of switches. So I have run some tests on the prime shuffle and the Feistel shuffle, and it turns out the Feistel shuffle is basically broken, and I think I know the reason.

But that's just one test, and the reason why I want to talk to academics about these is to get the two on the same page, so that we can see what more rigorous tests we can do on it. So one approach is starting from the practical approach and seeing either if you're validating it or making it better.

The other approach is to start from this existing academic thing, which is provably either optimally correct or close to optimally correct, but it's, like, insanely complex, and it's probably, even in its current form, just too slow for our purposes (the prime shuffle is already on the upper edge of what I think is acceptable for our purposes), and to see if we can kind of start from that and simplify things to bring the efficiency up more while keeping all or most of the safety.
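The pointwise number-theoretic permutation mentioned above can be sketched in a few lines: any single position can be mapped without shuffling the whole list. For x maps to (x cubed plus k) mod p to be a bijection over 0..p-1, p must be chosen so that gcd(3, p - 1) = 1; the modulus and key below are illustrative assumptions, not values from the spec.

```python
from math import gcd


def cube_permutation(x: int, k: int, p: int) -> int:
    """Pointwise permutation x -> (x^3 + k) mod p.

    This is a bijection on 0..p-1 whenever gcd(3, p - 1) == 1, so a
    single position (e.g. one validator's committee slot) can be
    computed without materializing the full shuffle."""
    assert gcd(3, p - 1) == 1, "x^3 is only a permutation when gcd(3, p-1) == 1"
    return (pow(x, 3, p) + k) % p


# Illustrative parameters: p = 11 (gcd(3, 10) = 1), key k = 7.
mapped = [cube_permutation(x, 7, 11) for x in range(11)]
```

In a real design the key k would be derived from the epoch seed, as the speaker describes; here it is just a constant.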
A: Last week, or two weeks ago, Alexey had a number of comments, and I said we were going to have it on the agenda this week. Alexey's not here, but I did want to ask if anyone has any follow-ups on things like chain start. Of the number of things, I think what we did was up the minimum threshold for the chain start. We talked about it, no?
H: Sure, okay. Actually, in November, yeah, we talked about it, and I remember we were undecided, but basically Alexey's concern was that you could have a kind of majority takeover attack happen with 500k ether, because there are totally cartels that have that amount.
D: Like you mentioned, we're targeting a testnet launch this quarter, and you guys say that, you know, it's mostly just little bug fixes, but there are also some non-trivial bugs. At what point do you think the research team will be comfortable saying, hey, we're stopping at this commit hash, we're going to be targeting this release, and anything beyond that we don't promise? Do you envision a good candidate for that in a few weeks, or four?
A: Yeah, and I just want to be clear: I think targeting your client speaking to other versions of your client is probably a more sane goal in Q1. If people do have clients that are speaking to each other, then that's great, but again, I think demonstrating internal networks within a singular client is more sane, and it's just going to keep you, one, from having the spec have to harden entirely, and two...

There could be a lot of time wasted until we iron out some of these bugs, consensus bugs especially, before we have a testing framework like consensus testing in place. I think that it's probably going to be very much a headache. So I don't expect us to say, okay, in two weeks or four weeks, this is exactly what all these tests should be targeting, but you should be targeting these releases internally and beginning to have networking internally within your clients. Cool.

I would imagine that the release today has critical bugs, in that, like, some type of epoch transition or some type of an unexpected validator entering would probably cause it to crash. So knowing that, you know, I'm not willing to say target this, because I think it would certainly crash.

We plan on bringing in more people doing formal analysis, both from the software side and on the algorithmic, theoretical side, with respect to any type of audits. Beyond that, I'd like to court some people and bring some people into this process, but we're not there, and I'm not going to put a timeline on when we're going to be there, not before we get to a more stable spec and, you know, ironed-out networking, et cetera.
S: There are a few other optimizations. You can also just approach it differently: you process this on the go. Through every block you process, you process the attestations around it. So if you have this processing going on every block anyway, you can just modify the state incrementally: for this one block you remove the previous attestation, and for this one block you add...

The latest new attestation that was added. So all you have to do is change the weights, you know, sort of like a DAG: you can do this in an optimal way and just go from your added block back to the initial source of the tree, or the DAG, and then walk back along the path to the most optimal, or the most heavily weighted, target...

So keep the weights in memory and don't recalculate them every time; you can just do it in linear time, really. You just store, for the hash of every block, the weight and the best child and the best future target, which is like the head block. And then, every time you add a block, you basically modify the weight of the block it was added to, you propagate the weight back to the source of the tree, and then, when you want to find the best target, from the source of the tree...

You find the first node that has this weight, and you get the most optimal target block immediately. The only problem is when you get new weights: I mean, a validator attests to a second, later block, and his weight needs to be removed from the previous attestation, and this is a little bit more complex, but it's also linear.
R: Yes, I have a comment. At some point we were discussing also having a meeting in April, and one of the options there was Barcelona, and for me it's easier to book a room earlier than at the last minute. So I was thinking, even if, you know, at the end we decide to do it somewhere else, I can just cancel the reservation, but perhaps it would be good to start looking into that. So I don't know if I should maybe start making arrangements, or if we can...
A: Ok, so we are going to be in Sydney, and so is Lighthouse, so that's definitely a possibility. But offline I'll kind of begin talking with some of the teams to see if there will be representatives out in Sydney, and if not, then we will begin to discuss where and when. You know, I don't have any sort of date, so I wouldn't recommend anyone pre-book a hotel yet.

Great, I think I'm going to add a new agenda item next week, or for subsequent meetings, which would be testing. There's enough work going on there that we should do a section, a little update. Beyond that, I think two weeks from now makes sense, but I need to look at my calendar and make sure there's nothing conflicting; I'll let y'all know in the chat. As always, hit us up in the Gitter, and make issues as you find bugs.