From YouTube: Eth2.0 Call #26 [2019/10/24]
Description
A: We've sorted out some of the consensus issues that we found during interop, along with the networking updates that were also primarily interop-driven. Those include chunking list responses for better streaming — or rather, enabling streaming — modifying some of the message structures to better facilitate sync, and some other clarifications we talked about. Some of this came up during research and just general conversation, but we are staging an upcoming semi-major release.
A
There
are
a
number
of
PRS
that
are
under
review.
Some
of
these
things
are
stuff.
That's
come
out
of
audits,
for
example,
hardening
of
the
fourth
choice
rule
against
some
of
the
attack
vectors
found
by
booyah,
but
also
some
other
stuff,
which
are
some
substantive
changes
to
pay
zero.
With
respect
to
removing
the
cross-linking
scaffolding
such
that
we
can
continue
to
development
while
we
iron
out
the
actual
direction,
we're
gonna
take
on
phase
one,
so
that's
less
up
under
D
right
there.
E: We modified the fuzz targets last week to allow clients to essentially load the beacon state from file, so we now have a pre-processing function that uses a state ID reference, and we're essentially passing the relevant state to all of the fuzz targets. So this was a decent refactoring of the way we hand over the corpus, for both the beacon state and the block.
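A minimal sketch of that kind of pre-processing step, assuming a toy corpus layout (the entry format and the state table here are hypothetical stand-ins, not the actual fuzzer's code):

```python
# Hypothetical corpus layout: instead of serializing a full beacon state
# into every corpus entry, the entry's first byte references one of a
# few pre-loaded known-good states, and the rest is the block to apply.
KNOWN_STATES = {0: "genesis-state", 1: "post-epoch-state"}  # stand-ins

def preprocess(corpus_entry: bytes):
    if not corpus_entry:
        raise ValueError("empty corpus entry")
    state_id = corpus_entry[0] % len(KNOWN_STATES)  # resolve the state ID
    return KNOWN_STATES[state_id], corpus_entry[1:]

# The fuzz target then runs the state transition on (state, block).
state, block = preprocess(bytes([1]) + b"block-bytes")
```

Keeping the states out of the entries keeps the corpus small and lets every target share the same pre-generated states.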
E: So far we have Nimbus, pyspec and Lighthouse on these fuzzers. Pyspec is very slow, so we're probably gonna want to remove it when we move to production in the project infrastructure. We're also tweaking the fuzzer to ensure consistent behavior across different implementations when returning empty bytes as opposed to an error, like panics. We're also working on adding the epoch state transitions, so currently looking at process justification and finalization, process crosslinks, process final updates, and so on.
E: This should be fairly straightforward, since these functions only take a beacon state as input. We're also exploring creating custom mutators, which are basically passthrough plugins, to enable structure-aware, mutation-based fuzzing. This would help greatly with coverage. Another alternative that we could potentially use is leveraging libprotobuf-mutator, which would essentially help us translate the SSZ object into a protobuf and back. So first, to make an informed decision, we need accurate coverage measurement, so we decided to focus on that.
E: That's something that we're currently working on. This would essentially allow us to generate arbitrary typed beacon states, rather than, you know, sticking with known-valid states. We're also adding support for more beacon state inputs, by essentially adding the post-states into the list of valid states that we feed in as seed corpora.
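The consistency goal described above — treating "empty bytes" and an error as the same rejection outcome — is the heart of differential fuzzing. A minimal sketch with two stand-in implementations (nothing here is the real beacon-fuzz code):

```python
import os
from typing import Optional

# Two stand-in implementations of the same function: one signals
# rejection with None (an error), the other with empty bytes.
def impl_a(data: bytes) -> Optional[bytes]:
    return None if len(data) < 4 else data[:4]

def impl_b(data: bytes) -> Optional[bytes]:
    return b"" if len(data) < 4 else data[:4]

def normalize(result: Optional[bytes]) -> Optional[bytes]:
    # Map empty bytes and errors to one rejection outcome, so clients
    # are only flagged when their accepted outputs truly diverge.
    return result if result else None

def agree(data: bytes) -> bool:
    return normalize(impl_a(data)) == normalize(impl_b(data))

# Driver: any disagreement on a random input is a consensus-bug candidate.
for _ in range(1000):
    assert agree(os.urandom(os.urandom(1)[0] % 8))
```

Without the normalization step, the harness would report spurious "consensus failures" every time one client panics where another returns empty bytes.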
F: I also raised an interesting attack this morning, about how we can potentially obtain a dishonest majority in the first two epochs after genesis. It's easily patched, though. I'm also working to build genesis off Prysm's deposit contract; I'm pretty close, but still not quite there — I believe tomorrow or tonight. We've also added slashing protection for the validator client.
F
You
start
with
a
nice
and
tight
scheme,
co-found
some
other
solid
optimizations
in
the
raspy
LS
library
and
the
same
time
we're
also
building
some
basic
rust
findings
to
Harumi
to
see
if
it's
worth
us
switching
over
to
and
what
might
affect
people
trying
to
do
interrupt
is
we're
gonna
change.
The
way
that
you
run
lighthouse
so
we're
gonna
presently
evaluate
a
client
and
beacon.
No
two
separate
binaries
they're
gonna
move
them
under
the
one
binary
called
lighthouse,
so
it's
gonna
feel
a
lot
more
like
Perry
with
subcommands,
and
that's
it
from
me.
Oh.
J: So we have made some BLS signature improvements, which one of our devs has been working on, and that's been going well. Actually, the Noise JVM implementation is complete now. We're working on the implementation of the handshake for our client; we're mostly doing code cleanup and handing off to our dev team. We're also working to merge with Harmony — that's just kind of going through the HR stuff — and we're implementing some benchmarks.
D: If I can just add that what we've been working on specifically — implementing the new hash-to-curve standard — is more or less complete, and I'm more or less happy-ish with the performance. If any teams want to take a look at it in Java, it's on our Artemis GitHub as a draft pull request at the moment, so feel free to take a look and give any feedback.
K: Oh yes, yeah. As was just said, we are figuring out the merge with Artemis, which both teams are excited about; this is one of the priorities at the moment. We also continue to work on the fork choice tests — tests for our implementation of the fork choice spec — and actually it's almost done.
K
I
guess
we'll
be
finished
next
this
week
and
it
probably
would
be
a
good
start
to
to
make
some
test
vectors
and
share
them
with
the
community
and
have
focused
as
on
the
shared
repository
as
well
as
tested
as
we
have
a
four
state
transition
part
also.
We
are
now
finishing
our
discovery.
The
five
implementation
it's
already
done,
but
now
working
on
tests,
test
coverage
and
our
current
goal
is
to
get
interrupts
with
guest
implementation
and
yeah.
L: Can you guys hear me? Yeah, so we've been hardening up our testnet, fixing bugs along the way. We also have an experimental PR for removing the shards and crosslinks from our codebase — I'm a big fan of that change — so we're fixing the unit tests for that, and the next step is to try it out in the testnet. We also started implementing the naive aggregation strategy as well.
L: We're also implementing the proto-array fork choice optimization, which was inspired by the Lighthouse work we heard about at Devcon, and then we put it up for testing. We started working on end-to-end testing, and then we'll be going into our fuzz testing strategies. And yeah, that's about it.
M: Yeah, so this past week we brought on a new team member named Yen; he's been contributing for a while and we finally brought him on more officially. We have a lot of small things in progress; the name of the game is optimization, so we're optimizing our state transition logic — kind of refactoring it, pulling it out into a separate package.
O: All right. First I will mention that Mamy has created an organization on GitHub called eth2-clients, and the idea of this organization is that it would be a suitable place to store the scripts that we developed during interop to run test matches between the multiple clients. It could also host other useful projects, such as the low-level gossipsub chat, so we can test the libp2p implementations for conformance.
O
The
teams
should
have
received
invite
for
this
for
being
admins,
and
once
you
are
running
you
can
add
more
people
from
your
team.
So
moving
on
to
updates
we've
been
working
on
it,
one
integration.
We
are
pretty
much
wrapping
this
up
and
we
are
looking
forward
to
participating
in
a
shared
test
net
with
a
contract
deployed
on
Garlin
right
now.
We
are
yet
implemented
the
latest
spec
0
84,
but
I
guess.
O
The
consensus
here
will
be
that
the
cross
links
simplification
will
be
included
in
this
share
test
net
I'll
be
expecting
feedback
from
everyone,
otherwise
we'd
be
now
so
adding
a
lot
of
metrics
to
numbers
and
we
plan
to
have
a
public
graph
Anna
instance.
Once
the
test
net
is
running
and
we'll
be
running,
probably
like
80
or
100
nodes
on
a
server
cluster
and
you'll
be
exposing
the
metrics
from
that
they've
been
significant
progress
in
our
native
live
p2p
implementation.
P: So after Devcon, our team is working on making our libp2p implementation more complete, and Alex has a PR about making different protocols on the libp2p side, and it's being merged. We also fixed the current attestation syncer in Trinity, and worked on the Python async module migration.
A: There's been a lot of movement and discussion around the phase one proposal. Vitalik and I discussed this at length in various workshops at Devcon and in the forum posts online. Essentially, it makes a trade-off to have fewer shards, at least to start, but with the ability to crosslink — in the best case — every shard every slot, to facilitate single-slot cross-shard communication. I don't think we need to go super in-depth into it today.
A: The primary implication is that it changes some of the phase 0 machinery that we had in place for crosslinks. In retrospect, pretty obviously, we were prematurely putting that in the spec anyway. So I have a phase 0 spec update PR that just removes crosslinks altogether from phase zero. Funny enough, crosslinks — and the updating of crosslinks in the rewards calculation — were one of our biggest sources of consensus errors at interop, so it'll be nice to just remove them anyway. So anyway, there's this PR up for review.
A
It's
been
under
review
for
about
a
week
and
I
think
we're
very
close
to
merging
it
in
when
we
get
to
the
test
net
discussion,
we
can
discuss
the
implications
on
test
nets
which
there
should
be
testing
in
test
sensing
things
like
that.
Beyond
the
phase,
one
modifications
and
things
are
there,
any
other
research
updates
that
anyone
wants
to
share.
M: A lot of it is basically just hard problems about how we pack bytes together. Otherwise, you know, I don't see any super fundamental problems. Maybe the more fundamental problems — well, not even fundamental, but just difficulties — are more the phase-two-related ones, like how to actually do guaranteed cross-shard movement of ETH between shards, for example.
H: One update, which is pertinent to phase zero I guess, is BLS signatures. On the standardization, I'd say we're in a very good place. The spec really hasn't changed for the last several months, with the exception, I guess, of a note about a very minor security bug, which should be a very easy fix — like a one-line change, right.
H: Wahby, who is one of the authors of the hash-to-curve function that we're using, is doing an amazing job of taking ownership of the standardization. Since then, lots of polishing: he's also addressed various patents, or possible patent infringements, and suggested workarounds for those. And in November, between the 16th and the 22nd, there's going to be an IETF CFRG meeting with the various people involved in the standardization, and at that point I guess the spec can, you know, be considered [stable].
H: Another update on the BLS stuff: there's this library called Herumi, and it's authored and maintained by Shigeo Mitsunari, who came to Devcon and met some of the Prysm guys. It turns out that this library seems to be significantly faster than the other libraries, like Milagro and the ZCash library. A recent benchmark from the Lighthouse people suggests that it's 2.4 times faster than Milagro, so we're in touch with the author, and we're considering a possible grant around the library.
H: A final update on the BLS stuff: there's been a very interesting paper recently by Mary Maller, where she describes a technique to aggregate signatures in such a way that the aggregated signature is cheap to verify. So when you have n distinct messages, regardless of n, you only have to pay two pairings to verify the aggregate signature, plus roughly 2n exponentiations. That may or may not be something that's relevant for layer one, but it's still very exciting, and something to keep in mind for layer two.
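For context (my gloss, not stated on the call): with signatures in $G_1$ and public keys in $G_2$, standard BLS aggregate verification over $n$ distinct messages costs $n+1$ pairings, which is what makes a constant two-pairing scheme notable:

```latex
% Aggregate n signatures by summing them in G1:
\sigma_{\mathrm{agg}} = \sum_{i=1}^{n} \sigma_i
% Standard verification: one pairing on the left, n on the right,
% i.e. n+1 pairings total; the referenced technique needs only two.
e\!\left(\sigma_{\mathrm{agg}},\, g_2\right)
  \;\stackrel{?}{=}\; \prod_{i=1}^{n} e\!\left(H(m_i),\, pk_i\right)
```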
H
Mean
also
in
terms
of
quick
updates
for
the
deposit
contract,
for
more
verification
should
I
guess
and
maybe
in
a
couple
weeks.
So
there's
a
there's
still
one
minor
point
regarding
removing
some
of
the
safety
checks
and
we
should
get
kind
of
a
final
okay
on
those
within
a
couple
weeks.
There's
also
kind
of
discussions
about
how
we
want
to
design
the
website
to
make
the
deposits
so
I
still
still
I
guess
significant
work
to
be
done
before
we
want
to
deploy
the
deposit
contract.
H
You
know
we
want
to
do
lots
of
testing,
make
sure
that
the
UI
is
good
and
I
guess
we
also
don't
want
to
be
in
a
situation
where
validators
make
deposits
too
early
on
and
they
have
funds
and
the
deposit
contract
without
being
able
to
use
them.
So
I
guess
there's
also
no
significant
rush
to
deploy,
at
least
until
maybe
we
have
sort
of
public
cost
clients,
instant.
H: Sure, yeah, it's Herumi — h-e-r-u-m-i — and the specific pairing library is called MCL. The author is also the author of an optimized x86 assembly JIT, and he's basically using this to get the performance that he got.

Alright, thank you.
M: Yeah, so we're setting up a monthly light client call — like a working group, or a task force of sorts, for light clients. The goal is to make sure that light clients don't get left behind, and really I think it's to coordinate bringing light client tech from phase 0, phase 1 and eventually phase 2 into production. So we're gonna be exploring research and development updates and open questions, and there's a lot of technical problems to work through.
M
A
lot
of
the
issue
is
social
and
so
having
like
a
regular
meeting
where
we
can
kind
of
just
all
sink
through
the
sink
through
the
bigger
issues
and
coordinate,
will
be
really
helpful
and
so
we'll
post
on
all
the
relevant
channels.
In
the
next
few
days
like
when
we
have
a
solid
agenda,
but
we're
think
we're
gonna
be
targeting
in
two
or
three
weeks
for
the
first
call
and
we'll
be
collecting
community
feedback
and
all
that
cool.
It.
R: Radius estimation isn't really all that well defined in the spec right now, and that's because, you know, we don't have a super good solution for this. I expect we will find a solution to this problem when we actually implement it a second time. Like, we have an implementation of the radius estimation, but the code is — it's horrible — and it's actually not really clear that this is a workable solution yet.
R: I haven't even put that in the spec. Thanks, by the way, for posting the two references on the proof-of-work and what it addresses. This is a pretty old idea — it was even present in the Kademlia system — and I think by now we basically have two options for it: we could use Equihash, or we could use Cuckoo Cycle. I've been playing around with Cuckoo Cycle, sort of integrating it into the protocol and the code.
R
Not
super
happy
about
this
change.
To
be
honest,
so
it's
I
think
it's
still
we.
What
I
would
really
like
to
have
is
like
some
more
solid
feedback
from
other
people
who
are
more
knowledgeable
about
who
work.
What
just
in
general
I
mean
I
I,
guess
we
we
just
kind
of
have
to
make
a
decision
whether
we
actually
want
to
work
in
this
protocol
or
not.
So
this
is
kind
of
like
the
big
thing
it's
still
out.
R
I
The
proof-of-work
topic
I
can
just
quickly
mention
that
from
statuses
side,
one
of
the
biggest
reason
we're
prisons
were
that
meaning
whispers.
Basically,
that
has
a
spam
mechanism.
Spam
prevention,
listen,
it
doesn't
quite
work
simply
because
node
power,
so
the
node
doing
the
work
honestly
is
almost
always
going
to
be
underpowered
your
listen
to
an
attacker
which
makes
it
very
much
useless.
So.
R
It's
a
bit
different
in
with
with
this
type
of
thing,
because,
like
the
or
the
discovery
or
more
general,
this
is
this,
isn't
just
for
discovery,
release
more
like
in
general.
Like
do
we
want
our
notes
to
like
add
proof
of
work
on
their
identity,
it's
kind
of
something
that
like,
if
you
have
it,
the.
What
this
actually
prevents
is
basically
attackers
choosing
their
unknown
identity
in
an
in
an
arbitrary
way
too,
because
a
lot
of
things
in
discovery,
but
also
in
general
of
peer-to-peer
algorithms
rely
on
just.
R
Having
the
node
ID
space
sort
of
like
uniformly
distributed
and
attackers
can
actually
influence
the
distribution
by
choosing
their
node
IDs.
So
if
you
add
proof-of-work,
then
basically
an
attacker
would
have
to
perform
true
4/4
many
many
times,
whereas
a
node
that
just
doesn't
care
about
its
ID
and
just
once
a
random
ID,
it
would
have
to
perform
the
proof
of
work
one
time.
So
it's
a
bit
different
than
with
the
whisper
system
where
you
have
to
put
to
work
on
every
single
message.
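The asymmetry described here — an honest node pays the cost once, while an attacker targeting many chosen IDs pays it per ID — can be sketched with a toy hash-based ID scheme. The difficulty and encoding are illustrative assumptions, not discv5's actual format:

```python
import hashlib
import os

DIFFICULTY_BITS = 12  # toy difficulty so the search finishes quickly

def node_id(pubkey: bytes, nonce: int) -> bytes:
    return hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()

def valid(nid: bytes) -> bool:
    # An ID is valid if its leading DIFFICULTY_BITS bits are zero.
    return int.from_bytes(nid, "big") >> (256 - DIFFICULTY_BITS) == 0

def mine_id(pubkey: bytes) -> int:
    # An honest node pays this search cost once and accepts whatever
    # (uniformly distributed) ID it gets. An attacker who wants IDs
    # clustered in a chosen region must redo the search per target ID.
    nonce = 0
    while not valid(node_id(pubkey, nonce)):
        nonce += 1
    return nonce

nonce = mine_id(os.urandom(33))
```

Because the hash output is effectively uniform, valid IDs remain uniformly spread over the ID space, which is exactly the property the routing algorithms rely on.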
R: No, so basically what we did is mostly: we just ran the clients against each other, and then one of us did most of the work, really, which was basically checking in his code where things went wrong, and that led to a couple of corrections in the spec and a couple of corrections in the Go implementation. So right now it's been an interactive debugging kind of thing: for tests, you can just run the implementation, send the ping message, and then see.
R: Then, if you do get a response, you can see if you can decrypt it. But I am very certain that both Rust and Go are now compliant with the spec — like, a hundred percent. So if you try either implementation and it works with yours, then we're good, basically. Okay.
C: Yeah, it's pretty brief. We released the repo of our testing methodologies for what we're doing on the p2p side, so we're laying out basically what we're gonna do — the specific methods, and just some metrics that we hope to collect. Drop by both the discussion and the repo; we've already got some good stuff going there, but we'd love to have more people. So if you have any thoughts, or would like to see how we're testing things, please join us there.
A: So there's a lot here — a lot of things going on that affect when and what types of testnets happen. At this point, one of the big things is the spec update, the pending v0.9.0, which will include this modification to the state transition function. It is almost entirely simplifying, in that it's cutting a few things out.
A: I think Terence said he has gone through it, and when I was writing it, it felt like it wouldn't be too bad on your end — but we shall see. I did have a few conversations with different teams, and it seemed like the general desire and consensus was to get these changes out in the spec and get them integrated into clients before we do some sort of larger, more orchestrated multi-client testnet.
A: So that's my general understanding of where things lie: there's plenty of work to be done with respect to single clients moving forward towards testnets while we get these phase 0 changes integrated. Are there any additional thoughts, or any opposing thoughts, on testnets in general?
S: We had so many issues around using [the eth1 deposits]; sometimes it doesn't work. There's enough tricky stuff, you know, inside the client — there were some issues there way back when we did it the first time. So if we can just accept that it's not very interesting to test that particular functionality at this point — I just think you waste a little bit of time — and if we can go [without] them, I'm good with that. And I have about 2000 [Görli ETH] for that.
A: To the question of whether it's one deposit contract or also a pool: it's essentially one per testnet, connected to at least one chain. There's also discussion about spinning up fresh eth1 testnets; I think just generally connecting to Görli is probably simpler than doing that, but it's definitely something we can consider, especially when we actually move towards public testnets. And maybe for an incentivized testnet a separate net might work better, because we can better allocate the ETH that can participate — but that's a little bit down the line. Zahary, you have your hand raised.
O: One idea that I'd like to share is that we are planning to do our testnet in the following way: we will use the validators from the mocked start to kickstart the network immediately, and the very first block will reference some current block in Görli. So the validator contract could be used to add additional validators, but the network is already running.
L: Another few things to discuss regarding the config. So far we actually ended up increasing the ejection balance, just so that for once we can test that validators can be ejected. The second thing we're seeing is that there are people trying it out, but people ended up leaving the client not running for too long — or they go offline for too long — and the inactivity threshold we have is a little too low, and then it starts to hurt our finality.
S: There are a couple more things here. It's not really enough to just get nodes going; you also need to monitor them and make sure that they're running okay. So I know Zahary mentioned that you want to run a testnet, for the nodes that you run yourselves. We can definitely run Prometheus as part of our monitoring — would that be enough, or is there specific tooling that we should develop to help with monitoring testnets? What's the thinking here?
O: We're building a few monitoring applications — we have the gossipsub sniffer, and you can view these things — and I expect eventually there will be a community that steps up, from block explorers and stuff like this, that will also have similar things, but, say, on the web, right. So those are probably worth investigating.
S: What is Spinnaker? Sorry, my voice is just [gone]. So Spinnaker is a CI/CD pipeline integration that takes a new build and has a workflow for that build. For example, you can test it in isolation in three to six nodes, look at the stats, get the results, see if it's good, see if it's passing the tests anyway, and then it makes it possible to replace the existing nodes in a testnet with the new version, in an automatic failover manner, without downtime.
S
So
if
we
had
six
clients,
six
different
since
notes
per
clients
and
spinnaker
workflows
for
each
of
them,
every
time
you
push
a
new
docker
image,
it
would
be
built.
You
know
picked
up
the
test
in
isolation
using
the
tooling
from
photo,
for
example,
and
making
sure
it's
working
like
good
stuff.
And
then,
once
you
have
like
the
green
light
from
different
programs,
you
can
introduce
it
into
the
test
net
and
it
becomes.
An
automated
pipeline
makes
it
much
easier
for
people
to
manage
that
over
time.
K: You know, you're going too fast for me. One is, it doesn't have to be the same testnet. Even if it's part of the testnet, it doesn't have to be the majority of the testnet; it can just be that a client decides to do it that way, right, and it doesn't affect the whole. It shouldn't be the be-all and end-all for a testnet: if you happen to be using the same contract, right, and you're running independently, you should not be at all penalized or have to interact with Spinnaker at all.
S: Keep in mind as well, I think you need both. If you want a painless approach to doing things automatically — getting updates by email about whether your stuff is working or not — it's much easier to have a continuous deployment perspective, where you can test things all the time. You can also do crazy experiments and see if your experimental PR is working the way you thought, or if it's breaking the network, right. And by the way, I'm not inventing anything; it's what Prysm uses today.
L: So then the image deploys to the cluster, and what we do is redirect 10% of the traffic to the image, and then we have to ask ourselves what the results are — measuring the metrics to compare the baseline between the new image and the old image. Then we'll do some analysis, and if that passes, we can direct more and more and more traffic to that new image. Yeah.
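The ramp-up described — 10% first, compare against the baseline, then widen — can be sketched as a simple promotion loop. The metric, tolerance, and step sizes here are illustrative assumptions, not Prysm's actual pipeline:

```python
def canary_rollout(baseline_error_rate: float,
                   canary_error_rate: float,
                   steps=(10, 25, 50, 100),
                   tolerance=1.1) -> int:
    """Return the percentage of traffic the new image ends up serving."""
    shifted = 0
    for pct in steps:
        # Promote to the next step only while the canary's metric stays
        # within tolerance of the old image's baseline.
        if canary_error_rate > baseline_error_rate * tolerance:
            return shifted  # halt; keep remaining traffic on the old image
        shifted = pct
    return shifted
```

A healthy canary (error rate at or below baseline) walks all the way to 100%; a regressing one is stopped at the last safe step.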
A: Yeah, pretty much. Once we have any amount of significant load on the testnet, we're going to need an aggregation strategy. Up until that point — as long as you have a single channel in which everything's being gossiped — you don't strictly need an explicit strategy other than "aggregate locally and include in blocks." But I think the intent is to get some version of this naive strategy integrated, and when multi-client testnets come around, I think that certainly should be in.
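"Aggregate locally and include in blocks" can be sketched as grouping attestations by identical attestation data and OR-ing the participation bitfields together (BLS signature aggregation is elided, and the data encoding is a stand-in):

```python
from collections import defaultdict

def naive_aggregate(attestations):
    """attestations: iterable of (attestation_data, bitfield-as-int) pairs."""
    groups = defaultdict(int)
    for data, bits in attestations:
        # Identical attestation data -> one aggregate;
        # OR merges the participant bitfields.
        groups[data] |= bits
    return dict(groups)

atts = [("slot1-rootA", 0b0001), ("slot1-rootA", 0b0100), ("slot1-rootB", 0b0010)]
agg = naive_aggregate(atts)
```

A real implementation would also carry the aggregated BLS signature alongside each bitfield and use the bitfield to avoid adding the same validator's signature twice.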
A: We should move towards that pretty much as soon as we can. On any of the interop-versus-mainnet components, we should be moving towards mainnet as soon as possible. This includes aggregation strategies, SecIO versus Noise (I hope Noise can [land]), and whatever else is in there. So yeah, we need to get that merged soon, tested, and onto the testnets.