From YouTube: Istio User Experience Working Group November 5th, 2019
B: All right, so I'm demonstrating istioctl wait. It's an experimental command released in Istio 1.4, and essentially what it allows you to do is wait for your configuration to be distributed out to all of the proxies before proceeding. This is usually not of interest to human users who are doing what I'm doing here manually, applying config and then doing something with it, because by the time you get around to passing traffic to your config, it's probably been distributed.
B
However,
for
for
a
lot
of
our
automated
use
cases
like
Kay
native
and
spinnaker,
it's
actually
very
critical
that
they
know
that
a
virtual
service
is
ready
to
receive
traffic
before
they
send
the
traffic
to
it.
Not
having
this
feature
has
resulted
in
a
good
number
of
500
errors
to
their
users
and
has
resulted
in
them
finding
some
really
ugly
workarounds
as
to
how
to
wait
for
config
to
be
distributed.
So
what
I'm
going
to
do
in
this
command
is
I'm
going
to
apply
a
change
to
my
entry
point
virtual
service.
B: It's a pretty small change, just changing the weights of the various subsets, and then I'm going to run an istioctl wait command. You'll notice that I'm using a particular namespace here; this is from our perf cluster. I'm looking for a virtual service, entry-point, and in particular this is a very large cluster, so I'm also going to be demoing the use of the threshold parameter in very large clusters.
B: You've almost always got one or two pods that are lagging for some reason, maybe they're in the process of being scaled down, but the threshold parameter allows us to say: hey, when 99% of all of our proxies have this config, let's move forward; that's probably good enough. Before I run it, any questions?
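The invocation described above might look roughly like the following sketch. The file name, resource name, and namespace are placeholders (the demo's actual names aren't given), and the flag spellings are assumptions based on Istio 1.4's `istioctl experimental wait`, so check them against your version:

```shell
# Apply the updated entry-point VirtualService (shifting subset weights).
# The file name is a placeholder for the demo's config.
kubectl apply -f entry-point-virtualservice.yaml

# Block until the new config has been distributed to the proxies.
# On a large cluster, --threshold 0.99 proceeds once ~99% of proxies
# have acknowledged, tolerating the one or two pods that are lagging.
istioctl experimental wait --for=distribution \
    virtualservice entry-point.my-namespace \
    --threshold 0.99
```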
B: Oh, and also I have the -v flag turned on. -v is an undocumented flag on the wait command that gives you an idea of what it's doing under the hood. Wait is really boring without the -v; it works really well, it's effective, but it's not cool for a demo. So -v lets us take a peek under the hood at what's going on. So here goes... all right, and we're done, in the time that it took those lines to output.
B: There are two proxies out there that don't have it for some reason, and in this case it's because I've specifically misconfigured them so we could demonstrate the threshold parameter. I'll go ahead and demo what it looks like if it times out: I'll remove our threshold statement, which means that by default we're waiting for 100% completion, and I'll add a timeout flag. The default timeout is 30 seconds, which also makes for a boring demo, so I'll set it to two seconds.
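A sketch of the timeout variant just described, with the same placeholder names as before: dropping --threshold reverts to waiting for 100% distribution, and --timeout shortens the default 30-second wait so the failure shows quickly:

```shell
# No --threshold: wait holds out for 100% of proxies to ACK the config.
# The two intentionally misconfigured proxies never will, so this
# times out after 2 seconds instead of the 30-second default.
istioctl experimental wait --for=distribution \
    virtualservice entry-point.my-namespace \
    --timeout 2s
```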
B: To the question: it is looking at all namespaces. The way that wait is implemented really treats configuration as a black box. It doesn't expect that it can know which proxies are supposed to get which config; it just waits. Essentially, it needs to know that all proxies have either acknowledged this specific config, or a version that came after it, in Pilot's terminology.
B: Unfortunately, given the way that Pilot config transformation works, a white-box transformation, where we know the meaning of individual resources and which proxies each individual resource is going to go to, really is not a viable solution in the short term. If we wanted to take three or four releases to implement something along those lines, that's feasible technologically, but the complexity to do it is substantial.
A: It knows which ones have responded? It could keep per-namespace counts. I'm thinking about two cases that I want to ask about. I've seen people who have authorized only this one namespace of a cluster; they shouldn't even know how many Istio pods are running in other namespaces. And I've seen split horizon, where there are two clusters on the same Pilot.
B: It would be all, yeah, all of the proxies that are known to the Pilot in the Kubernetes cluster you're running against. Additionally, as far as permissions are concerned: just like all of the other istioctl debug commands, this command does require essentially root privileges on the cluster; you have to be able to exec within the Pilot pod. So there's really not a permissions problem there; it simply means that if you have restricted privileges, you're not going to be able to run this command.
B: I'm very hopeful that the debug API will be as helpful in this. Additionally, in the 1.5 timeframe, Galley will be plumbing this information out into CRD status as part of some efforts in the config working group, and at that point you will be able to look directly at the VirtualService CRD to understand its deployment status without having to run a special istioctl command. If you want to wait on it, you can use kubectl wait, which is kind of the industry standard.
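Once that status is plumbed into the CRD, the standard kubectl mechanism could be used directly. A sketch, where the condition name "Reconciled" and the resource names are hypothetical, since they depend on what Galley actually writes into status:

```shell
# Wait on the VirtualService's own status condition instead of a
# special istioctl command. "Reconciled" is a hypothetical condition
# name; entry-point and my-namespace are placeholders.
kubectl wait --for=condition=Reconciled \
    virtualservice/entry-point -n my-namespace --timeout=60s
```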
B: I am not aware of any extension points in that command. However, I have seen that if you implement status in your CRD, kubectl will be able to wait on that status, so it's not really doing anything special for you at that point. There are just a number of best practices related to administering a cluster: if you want to understand its health, you look at the status of the resources in the cluster. So it goes beyond just wait.
B: That is an excellent question. I suspect that we will need to make that configurable by the user, somewhere within Galley config. By default we would say 100% is required for completion, but on an incomplete status I would expect that we have some sort of subfield that describes the ratio, that says, you know, 2,399 out of 2,400 proxies have acknowledged this.
C: That's exactly where I'm having issues: the command would not be able to do a per-namespace check, because my namespace may have only one or two services, and a lot of my services are outside of my namespace. So you could easily get false results, because the configuration is not propagated to some other namespaces.
B: Correct. So ideally, if we could design Pilot from the ground up with this in mind, we would have had a config transformation platform that Pilot uses, one that automatically tracks every transformation it does back to the input data it used to create that transformation. That would give us complete knowledge and the ability to say which config is supposed to go where, that sort of thing. But at this point, that's a year-long retrofit on top of what we have.
B: This is going to block in cases where there are errors in xDS. It is also going to block in cases where there's latency; that's, for instance, what we've seen a lot from the Knative team: they have changes that they're making programmatically, and there's maybe one or two seconds of latency on a very large cluster for that change to propagate. For that time, the traffic they send to that virtual service receives 503s, which is not desirable.
A: So yeah, I had difficulty when I was writing my analyzers, because I didn't know all of the utility methods of Pilot. I didn't know Pilot had a method to do this or to do that, and it was super tricky to find them; and when I did find them, sometimes they were unexported, with lower-case identifiers, because they were just internal to their package. But I wanted the same logic, and just capitalizing someone else's method introduces a whole bunch of risk and debt you don't want.
D: I'd really like that, Jason; I think that's what we should do. I mean, we have experts in the networking or security working groups who wrote some of the logic in Pilot, and you can consult them, and whenever you write an analyzer and need something, move it to a common place. Trying to do it beforehand, as in hunting around and finding things, will be really difficult.
E: I think this makes sense as well, just because moving all of the logic into a common package probably doesn't make sense; that's probably more trouble than it's worth in a lot of cases. But I'm kind of hopeful that the things that are particularly painful, or complex enough that it makes sense to share them, will kind of reveal themselves, either as we're writing analyzers or as we're trying to update them: okay, this is way too painful, let's trim it this other way and factor it out.
E: The way I've been thinking about it is that for simple cases, it's probably easier just to have the simple logic in the analyzer, rather than trying to abstract out the common code. That's my gut feeling; I don't actually know very much about the Pilot code, and I don't know how difficult the refactoring actually would be. But the way I'm thinking about it is that for simple cases you just follow the documentation, and what the doc says is supposed to be how things work.
D: Break them, exactly, that's all! So if you have common code, and you write unit tests for the common code, and then you have analyzer unit tests utilizing the common code, I am not afraid: if Pilot moves and breaks something, hopefully the PR submitter contacts the people whose unit tests were broken, right?
E: That's something we definitely have on our radar, right. We know that at some point analyzers are going to need to be version-sensitive, and not all of them, and they'll develop at different rates; but we have it kind of on our radar that we need to add something to analyzers that basically says: okay, this particular analyzer handles this version range, basically.
A: We want to be pinned to the current version. I think we want istioctl analyze to only analyze a control plane at the same version, but it'd be fun to be able to say: yeah, I'm using a 1.5 istioctl, I want to analyze a 1.4 cluster. That seems really hard to write, because it means we need to keep all of the old logic around.
D: I think maintainability and version awareness are always going to be in conflict with each other, as a whole; that's been the history of the community. We've sidestepped the version inertia issue so far by just tying everything to one thing, including istioctl. So I would say istioctl, overall, has this problem of version awareness, right.
D: So if you don't make it version-aware, what's the downside? Let me talk about that. If I am a user on Istio version 1.4, I go and grab istioctl for 1.4 and I get all the analyzers for it. I could go to master and grab something which is much further ahead, but now I have incorrect error messages or warnings.
B: A lot of the motivation here is that we're relatively late to the game in adding useful analysis tools to istioctl; I mean, you all know, because many of you developed your own, because we weren't moving fast enough. And so we're looking at how to bootstrap this: we've got relatively few analyzers ready, but a framework ready, in 1.4. We would love to be able to have a fast cadence and in a month say, hey...
E: I don't know that we would necessarily do a patch release just for analyzers, although maybe we could; but bundling them in alongside whatever else we're pushing out makes the release process a bit more painful for us, probably, because we have to keep backporting stuff to the release branch, or whatever the head is. Release branches are another way to get things to customers faster without actually having them just do a build off of master.
F: What I'm hearing is that the mechanisms here clearly differentiate where we want to be from where we currently are. So we may even want some version-skew policy in istioctl, and that would apply to analyzers as well as upgrade and other commands, but that's a larger effort. So until that's in place, you would have to just take analyzers as a best effort: sometimes you can backport them, sometimes you can't; we don't make firm guarantees.
E: Yeah, I definitely have ideas around where I want analyze to go in 1.5. I don't think any of them are going to come as a shock to anybody; it's just kind of continuing progress in the directions we're already going. I guess one question is: do we want to actually merge analyze into validate in 1.5, right?
E: Yes, I guess that's another question we can talk about another day, or discuss more; maybe we've already discussed some of it: whether we keep istioctl validate and basically just pivot, so that what was experimental analyze is now validate. We'd basically take analyze out of experimental and put it into istioctl validate, or we deprecate istioctl validate in favor of istioctl analyze.