From YouTube: Config Working Group 9/19/2019
A: All right, so good morning. I have a design proposal to talk about this week. The problem we're looking at is config distribution status. At a very high level, this means that when someone creates a virtual service or any other Istio config object, there's no feedback on when that config is actually being applied to their traffic. This is something that, as a human user of Istio, you're not likely to run into, because Istio usually tends to distribute config fairly quickly.
A: So if you're creating the virtual service by hand and then sending traffic by hand, you're going to see what appears to be pretty much instantaneous activation. However, when you try to automate over Istio, there are multiple DevOps use cases: Knative is called out, and our own test framework is called out as one of the use cases here. What happens is that you end up having to write arbitrary wait statements, and it becomes very difficult to know how long to make the waits in Knative's use case.
A: In particular, they had scenarios where, when a cluster or a mesh was under a lot of load, it would take a long time, maybe up to 15 seconds, for config to be distributed. But in scenarios where it's not under load, it would take less than one second, so accounting for that variability in the waits got very complicated. Currently they have a workaround, which is highlighted here, where they send sample traffic.
A: They create some sort of pseudo listener within the virtual service, send sample traffic to it, wait for that sample traffic to start succeeding, and then allow production traffic onto the virtual service. We really don't want our users to have to go to these kinds of great lengths to leverage Istio at scale in a DevOps environment, and we certainly don't want to have to keep doing that in our own tests, which you'll see if you work with the retry package under the Istio test packages.
A: The whole purpose of that package is sending config out in an end-to-end test, waiting an arbitrary amount of time, and then running a test against that config; if it fails, you just try again a few times. This means that our tests will often take a very long time to execute, which does cause developer pain. More significantly, when we were doing the code mob work a few months back, we identified that a substantial number of the test flakes occurring in our CI/CD pipeline were related directly to the retry package. If the cluster was under load, which is common, say, in the lead-up to a release, we would see that config took longer to distribute, and sometimes the timeouts were not set appropriately, or were set in such a way that under load the tests would begin to fail at about a 50% rate.
A
This
is
a
critical
time
for
the
project,
so
it's
really
the
worst
possible
scenario
for
a
CI
CD
failure,
and
we
would
like
to
propose
a
solution,
a
way
that
our
tests
and
our
users
can
have
visibility
into
the
distribution
of
config
across
our
system.
I
want
to
pause
and
see
if
anybody
has
a
question.
A: We need to come up with an iterative solution for this, because this is a big user pain in the community today. It would be easy to put together a proposal for a very fully fledged Istio feature that could take about nine months to execute on, and our users would not really be pleased with that wait time.
A: We create an istioctl command called wait. It will work very similarly to the way kubectl wait works: essentially, once you change a particular piece of config, you can call istioctl wait against that config, and it will watch some data that it gets from Galley (we'll cover that in the implementation section) until that config has been distributed to a certain ratio of Envoy proxies. The ratio will be configurable via a command-line flag; by default it's a hundred percent, which you may want to lower in some high-scale scenarios.
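A minimal sketch of the polling loop such a command could run, assuming a hypothetical data source; the function names, flag semantics, and target name here are all made up, and the real command would get its data from pilot and Galley as described below:

```go
// Sketch only, not Istio's implementation: poll until a target config has
// been acknowledged by a configurable ratio of proxies, or the context ends.
package main

import (
	"context"
	"fmt"
	"time"
)

// ackedRatioFunc is assumed to report the fraction of Envoy proxies whose
// last acknowledged nonce corresponds to the target config version.
type ackedRatioFunc func(ctx context.Context, target string) (float64, error)

func waitForDistribution(ctx context.Context, target string, threshold float64, acked ackedRatioFunc) error {
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		ratio, err := acked(ctx, target)
		if err != nil {
			return err
		}
		if ratio >= threshold {
			return nil // enough of the mesh has acknowledged the new config
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up waiting for %s at %.0f%% distribution", target, ratio*100)
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Stub standing in for the real pilot/Galley data source.
	stub := func(ctx context.Context, target string) (float64, error) { return 1.0, nil }
	if err := waitForDistribution(ctx, "virtualservice/reviews", 1.0, stub); err != nil {
		fmt.Println(err)
	}
}
```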
A: Currently we don't really have anything that tracks, in a detailed way, what version of config is where; more specifically, nothing tracks the provenance of config from a virtual service in the Kubernetes CRD space, into Galley and MCP, then across to pilot, and then out to Envoy. However, pilot does have a really nice feature that is already used in istioctl: if you run istioctl proxy-status, what we're doing behind the scenes is hitting a debug endpoint on pilot that returns a list of all the proxies it's aware of and the most recently acked, or acknowledged, nonce for each of those Envoy proxies (nonces are part of the xDS and MCP protocols). I'm proposing that we make those nonces meaningful. Currently the nonce is a randomly generated number within pilot, and Galley actually sends its own nonce, which today is not related to pilot's nonce.
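A rough guess at the shape of the per-proxy data such a debug endpoint returns; the field names here are illustrative only and not pilot's actual JSON schema:

```go
// Sketch of the kind of per-proxy record a pilot debug endpoint could return.
// Field names are invented for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

type proxySyncStatus struct {
	ProxyID    string `json:"proxy"`       // e.g. "reviews-v1-abc123.default"
	SentNonce  string `json:"sent_nonce"`  // last nonce pilot pushed to this proxy
	AckedNonce string `json:"acked_nonce"` // last nonce the proxy acknowledged
}

func main() {
	raw := `[{"proxy":"reviews-v1-abc123.default","sent_nonce":"42","acked_nonce":"42"}]`
	var statuses []proxySyncStatus
	if err := json.Unmarshal([]byte(raw), &statuses); err != nil {
		panic(err)
	}
	for _, s := range statuses {
		fmt.Printf("%s synced=%v\n", s.ProxyID, s.SentNonce == s.AckedNonce)
	}
}
```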
A: Galley's nonce is an incrementing int64, so I'm proposing that pilot begin to leverage Galley's nonces, simply passing them along the chain when it sends config. After talking with the networking team, I believe this would need to be in prefix form, so the Galley nonce would effectively be the prefix of the pilot nonce, with the pilot nonce being longer. Then, internal to Galley, we will track each of these. When pilot or Envoy first requests config, it requests with no nonce; pilot requests config from Galley, and if Galley responds with a particular nonce value, then when pilot sends that information on to Envoy you'll see the same value maintained, whereas today the same value is not maintained across the communication protocols.
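A toy illustration of the proposed prefix relationship; the nonce formats are invented for the example and are not what Galley or pilot actually emit:

```go
// Sketch: if pilot reuses Galley's nonce as a prefix of its own, a client can
// tell whether a proxy's acked config descends from a given Galley snapshot.
package main

import (
	"fmt"
	"strings"
)

func descendsFrom(galleyNonce, pilotNonce string) bool {
	return strings.HasPrefix(pilotNonce, galleyNonce)
}

func main() {
	galleyNonce := "1047"     // Galley's incrementing snapshot nonce (format assumed)
	pilotNonce := "1047-7f3a" // pilot appends its own per-push suffix (format assumed)
	fmt.Println(descendsFrom(galleyNonce, pilotNonce)) // true: same snapshot lineage
}
```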
B: From Galley's perspective, it is possible, because the Galley pipeline actually creates snapshots of configuration. We can assign a single number to a snapshot, saying this is the number for the whole snapshot, and at the MCP layer you can create nonces for individual streams based on that prefix, for example. So yes, that works. However, that modeling of the snapshot needs to be preserved on the pilot side, where pilot needs to refrain from publishing until it receives all the collections for that given prefix.
A: So then, when a user runs the istioctl wait command, what it will effectively do is reach out to all of the pilots, the same way proxy-status does, to get all of the different versions for each of the different Envoy proxies. Then it will compare the set of those versions against Galley, asking: does this version...?
A: Great point. The Merkle algorithm that we select is going to have to be order-independent, which means that as long as two Galleys converge on a single version of config, they will have the same hashes; it doesn't matter what order they received the events in. I have that in a different doc, but it's not linked here, so I will look it up right now. Effectively, if you have two Galleys at version A and there are two changes, and they arrive in order 1, 2 on one Galley and 2, 1 on the other...
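As a sketch of the order-independence property only, one simple commutative combiner XORs the hashes of the individual resources; this is not necessarily the Merkle scheme the proposal would actually use:

```go
// Sketch: an order-independent combined hash. Because XOR is commutative,
// two Galleys that converge on the same set of resources produce the same
// digest regardless of the order in which they processed the change events.
package main

import (
	"crypto/sha256"
	"fmt"
)

func combinedHash(resources []string) [sha256.Size]byte {
	var out [sha256.Size]byte
	for _, r := range resources {
		h := sha256.Sum256([]byte(r))
		for i := range out {
			out[i] ^= h[i]
		}
	}
	return out
}

func main() {
	a := combinedHash([]string{"vs-reviews:v2", "dr-reviews:v1"})
	b := combinedHash([]string{"dr-reviews:v1", "vs-reviews:v2"}) // reversed order
	fmt.Println(a == b) // true
}
```

XOR is the simplest order-independent combiner, but a real implementation would also need to guard against duplicate entries cancelling each other out.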
A: From istioctl's perspective, when we query a Galley about the contents of a particular version of config, Galley can return a 404, in which case we will resend the request to the next instance in line. Incidentally, as long as we're using Envoy proxy on the control plane, we can actually set that retry up within the proxy itself, so that logic doesn't need to live in istioctl. Okay.
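A sketch of that client-side fallback, assuming a hypothetical HTTP path and Galley addresses; as the speaker notes, the retry could instead live in the Envoy proxy on the control plane rather than in istioctl:

```go
// Sketch: ask each Galley instance in turn for a given config version; a 404
// means that instance no longer holds the snapshot, so try the next one.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func fetchSnapshot(galleys []string, version string) ([]byte, error) {
	for _, base := range galleys {
		resp, err := http.Get(base + "/snapshots/" + version) // path is hypothetical
		if err != nil {
			continue // instance unreachable, try the next one
		}
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		if readErr != nil || resp.StatusCode != http.StatusOK {
			continue // includes the 404 case described above
		}
		return body, nil
	}
	return nil, fmt.Errorf("no Galley instance could serve version %s", version)
}

func main() {
	if _, err := fetchSnapshot([]string{"http://galley-0:9901", "http://galley-1:9901"}, "1047"); err != nil {
		fmt.Println(err)
	}
}
```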
A: So the desire is to produce value in the 1.4 timeline. I recognize that istioctl wait does not solve all of the problems we would like solved; ideally, this data will eventually make its way into the Kubernetes CRD status for better visibility for our customers, but that did not strike me as something I could achieve in the next three weeks, so this is sort of what I felt we could accomplish in the time frame we have. I appreciate your thoughts and comments.