From YouTube: Istio User Experience Working Group, September 29, 2020
Description
- User report: Istio upgrade difficulties, with Casey Robertson
- Troubleshooting API
- List control planes issue
A: Cool. Being recorded, I invited anybody from our team that wanted to join. Joey, who joined us, works with me. I don't know who else is able to, the usual conflicts and things, but I figured the more perspectives, the better.

B: …

A: Sure, okay. So I'm Casey Robertson. I work on the platform engineering team, one of several we have at Mindbody. We're a SaaS provider of software to run fitness businesses: CRM, billing, payments, etc.
A: We had an architect at the time who had come from Google and had a lot of experience and recommendations around Kubernetes, and that kind of brought us down the Istio path, so we've been on Istio, we're using it.

A: I honestly can't remember the first version we started using, maybe 0.8 or 0.9, in GCP. We used to deploy it using the old Helm chart methodology, and then earlier this year, in January, we completed a migration, partly due to some macro-level company and direction changes, moving what we had on GCP and GKE into AWS and EKS.
A: I believe it was whatever the first version was that released istioctl for the install and upgrade, we moved to that. So it was like 1.3 or 1.4, and we've been upgrading that way ever since. We have a container-based pipeline in Azure DevOps that builds a container and then runs the istioctl commands, whether we're applying manifests or running an upgrade.
A
So
that's
kind
of
our
deployment
process.
Then
we
have
a
separate
chart
or
a
istio
chart
policy
chart
replying
policy,
that's
kind
of
our
deployment
style,
so
we're
currently
on
1.6.9
in
all
environments,
currently
running
eks
1.16
we're
in
the
process
of
upgrading
to
1.17.
So
that's
our
kubernetes
environment.
A
Let's
see
pretty
modest
clusters
like
right
now,
anywhere
from
2
to
12
nodes
depending
on
environment,
kind
of
like
a
test,
alpha
environment
and
then
dev
stage
prod,
and
then
each
of
us
engineers
on
the
team
have
a
or
on
aws
accounts,
we'll
have
similar
infrastructure
in
our
own
accounts
for
blowing
things
up
and
testing.
A
We
do
use
the
istio
ingress,
so
we
have
one
ingress
defined
that
creates
the
load
balancer
in
aws
for
us
and
as
far
as
the
developers
are
current
concerned
with
their
touch
point,
they
each
of
their
application.
Repos
has
a
virtual
service,
helm,
template
defined
and
that's
where
they
put
in
routes
and
anything
particular
to
their
virtual
service
and
that
gets
deployed
with
their
application
and
it
ties
into
a
common
https
gateway.
So
we
have
a
single
gateway
that
all
the
virtual
services
bind
to.
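As a rough illustration of that pattern, a single shared Gateway with each application's VirtualService binding to it by name. The names, hosts, and ports here are made up for the sketch, not Mindbody's actual configuration.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: shared-https-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway       # selects the Istio ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: edge-cert  # TLS secret created out of band
    hosts:
    - "*.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
  namespace: my-app
spec:
  hosts:
  - "my-app.example.com"
  gateways:
  - istio-system/shared-https-gateway   # bind to the common gateway
  http:
  - route:
    - destination:
        host: my-app.my-app.svc.cluster.local
        port:
          number: 8080
```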
A
We
do
that
they
don't
have
their
own,
they
there's
a
monolithic,
dev
cluster,
but
none
of
the
none
of
the
developers
use
istio
directly.
That
way,
yeah,
at
least
in
our
experience
that
they've
gotten
pretty
used
to
the
process
of
using
the
virtual
services
and
understand
that
their
you
know
their
routes
to
their
kubernetes
services
and
how
that
works.
A
A
lot
of
the
rest
of
it
is
is
pretty
opaque
to
them,
and
I
think
just
just
based
on
time
and
what
they're
focused
on
everything
are
kind
of
taken,
the
role
of
not
abstracting
it
per
se,
but
trying
to
make
that
process
as
easy
for
them
as
possible.
Some
teams
do
a
few
more
experiments
on
their
own.
A
There
are
a
couple
teams
that
are
applying
they're,
not
messing
around
with
their
own
policies
and
doing
things
like
fault,
injection
and
whatnot,
on
their
namespaces
to
try
that
out,
but
for
the
most
part
they
just
define
the
virtual
services
and
they've
never
run
istio
ctl
or
understand
what
it's
doing
or
what
gets
built,
etc.
A
Yeah,
like
the
whole,
the
whole
mesh
and
even
kubernetes,
even
out
even
after
this
time
is
still
pretty.
You
know
again,
depending
on
the
fluency
of
the
team,
is
still
kind
of
a
black
box
to
them.
A: Okay, then one more thing. It's not particularly related to upgrading, I guess, but it's partially one of the ideas I saw somebody mention, I think in the Flagger repo. Flagger is a tool by Weaveworks that does canaries, A/B, blue-green, et cetera.

A: We have experimented with that; we're not using it anywhere currently, but we like the idea a lot. Somebody had mentioned the possibility of something like Flagger integrating with the upgrade process, or something to help handle it, because the docs mention the idea of doing a canary of the control plane when you do an upgrade. Somebody talked about maybe a product like Flagger doing some integration to handle that kind of thing a little more transparently for the user, so essentially canarying the mesh itself, yeah.
A: So that's kind of all of our background of what we're running. Then, as far as issues, I kind of came up with things off the top of my head about the upgrade. I think one thing about going to 1.6 was, and some of this was on us, that we didn't realize, or got lazy, or didn't notice when we upgraded that some of these things get deprecated or moved on from.

A: You know, mentally you think the old things are going to go away and get cleaned up, and they're still hanging around. Like when I was doing the upgrade to 1.6, it dawned on us that the mTLS experience had been simplified a lot since 1.4, where it's on by default and you don't have to explicitly define it in a MeshPolicy or anything, and I think somehow that had escaped us. We missed it.
A: Interpreting the documents, we also had ended up having a couple of old Gateway objects sitting around from previous upgrades, and I think that speaks to maybe something around checklists, or a way to figure out, for lack of a better term, which objects are still quote-unquote being used or referred to. If there were some way to reconcile that nothing is referring to object X, or like, hey, you defined a gateway, but there's nothing behind it.
A: Yeah, we usually do one at a time. The actual pipeline, or the logic in the container, just ends up being a bunch of batch scripts, honestly. It's kind of monolithic in the sense that it's just going to march through the environments, so we put in some gates.

A: Honestly, some manual gates at this point, because, and this is one of the things I'll come to, our testing story is slim to none on any of this. It's a lot of "hey, what works?" I guess that's something we could look for advice or guidance on, and that we need to look into as well: what are some of the things people are doing to decide that an upgrade went okay, rather than just, you know...
A
The
new
real
commander
reading
isn't
dead
or
nobody's
screaming,
or
you
know
whatever,
like
hey
your
upgrade's
good
you're
good
to
go
nothing's
non-functional
but
yeah
so
just
manual
gates
at
this
point
and
then
it's
up
to
like
the
the
engineers
to
vet
the
upgrade.
But
we
don't
have
much
much
of
anything
automated
at
this
point
for
testing.
C: …

A: We've tended, just because of being kind of spread thin, to generally wait until the next version has had a few releases, and even then it's on an as-we-can basis. So when 1.6 came out, well, we usually wait for a couple of releases, and this time we just happened to be on 1.6.9 by the time we got around to upgrading. So it does take a little time for us, but that's more just due to a backlog.
A: I think we normally just like to wait a couple of versions. Generally, and I'd have to look at the releases, I think we probably do same-version upgrades, say 1.5.4 to something else, or 1.5.2 to something else, maybe a couple of times in the release cycle, unless somebody happens to see something that's impacting us, or a bug fix or something like that we really need. But I'd say at most we upgrade once, maybe twice, within a version and then just wait for the next.
B: I was intrigued that you said you were looking for a checklist about upgrading, because I've wondered the same thing, whether we should provide one. Did you end up making your own checklist of things that you do before upgrading?

A: …

B: I would be curious to see what a user's checklist looks like.
A: Okay, yeah, absolutely, we can share that for sure. We write some wiki docs as we go through these; we try to write documents down to help each other out, obviously give ourselves reminders, and not get siloed off. I don't want to just become the Istio upgrade guy, and Joey doesn't want to be the Kubernetes upgrade guy, and things like that.

A: So we try to round-robin these, to have each person give it a go, and then we'll pair on it if somebody's unsure. But yeah, we will definitely share that. Oh, one thing I do like, and again this could just be from not looking into it deeply enough: I know when you run the istioctl apply and it executes an upgrade...
A: ...if you're using a new version of the client, it does have some nice output at the beginning about objects that are deprecated or are going to get upgraded; there are some warnings and things. One thing I wasn't sure about, or couldn't remember, is whether there's a way to dry-run those and see them, or is it only going to vet it at run time?
B: Yes, the install has, I think it's --dry-run, of course, and there's also istioctl analyze, which should give you deprecation warnings, among other things that it warns about. Do you use istioctl analyze?
A: It wouldn't hurt to just have that run any time we do an upgrade, or, frankly, almost any operation. At the very least we could run a test against it, or have somebody check it before an operation runs, if we did have...
B: …

A: Okay, okay, that makes sense, yeah. I imagine that's difficult. So, for example, one of the things that changed, I think, off the top of my head, was the way the peer authentication worked, like the structure of that YAML file, and the API changed.
A
So
we
had
to
change
a
couple
of
those
during
the
upgrade
to
1.6,
and
this
is
probably
just
ignorance
on
my
part
on
the
inner
workings
of
kubernetes,
but
is
that
are
things
like
auto,
auto
upgrading
those
types
of
objects
just
to
fraught
with
issues
and
the
fact
that
we
apply
them
from
a
separate
source?
Just
is
never
going
to
make
that
feasible.
You
know
because
we
have
the
ammos
in
a
repo
and
then
like
istio
knows
this
object
is
changing
of
the
spec
and
I
guess
there's
no
way
to
backport.
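For reference, this is roughly the kind of change being described: the alpha-era MeshPolicy for mesh-wide strict mTLS versus the PeerAuthentication resource that replaced it. A sketch from the Istio security API, not Mindbody's actual manifests.

```yaml
# Old (alpha) authentication API, pre-1.5:
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
# New API, 1.5 and later:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # istio-system scope makes it mesh-wide
spec:
  mtls:
    mode: STRICT
```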
D: I believe I actually provided this feedback when we changed from the old authentication API to the new PeerAuthentication API.

D: Yeah, but you're absolutely right, because when I look at our upgrade notice, or upgrade documentation, today, we don't really tell you that you need to run the analyze command. It's more like...
C: So yeah, if you run it against your cluster and the file, it will take the file as a patch over the config that's in your cluster and produce what-if results: if I applied this, these would be the warnings inside your cluster.
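That usage looks roughly like this; the file names are illustrative. With --use-kube=false, analyze inspects only the given files and needs no cluster access at all.

```shell
# Analyze the live cluster plus a local file treated as a pending change:
istioctl analyze -n my-app new-virtual-service.yaml

# Analyze local files only, with no cluster access:
istioctl analyze --use-kube=false gateway.yaml virtual-service.yaml
```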
C: …

A: Okay, fair enough, yeah; the documentation is true. Okay, so that is a good segue into one other one, which was the documentation site for Istio. One thing we struggled with finding, coming from using Helm charts a lot for applying things into our clusters...

A: You know, most of those projects will have a huge default list of the YAML values for everything, all the knobs, essentially, for that object, and the Istio documentation was pretty good in that regard. But then, when you would drill into certain things, say one of the add-ons like Prometheus or Kiali or something, and you wanted to know all the options under there, it was not easy to figure out, because of the way the docs appear to be generated.
A: It just takes this add-on resource spec or something; you click into that, and it just says, oh well, it's just going to inherit whatever this add-on supports. You can't keep drilling into everything you can apply there, and depending on what the object was, it became difficult to know all the options without maybe going into GitHub and then trying to figure out all the spec you could put in there. So maybe something on that.

A: So I guess in terms of discoverability, I liked the experience better with the Helm part, because the values, by their nature, let you see everything more easily. It was a little more difficult for some of the add-ons for Istio, but I know, reading after the fact, that probably due to some of this, things like Kiali and whatnot, the way we installed them this time around doesn't appear to be the preferred way anymore anyway.
C: …

A: We're both, so, we do separately install Prometheus and Grafana in the cluster. We're kind of in a weird state organizationally.

A: We realize that Prometheus is the cloud-native solution, and it's been nice to leverage things like the kubernetes-mixin, which gives you some community-vetted or best-practice-type alerts and things for Kubernetes. But at the same time our entire dev org, and a bunch of the rest of the company, is heavily invested in New Relic, and so we're kind of in this...
A: ...state where, again being pretty spread thin on a lot of stuff, we're having the debate right now between Prometheus and New Relic. I mean, we have New Relic agents on the nodes already shipping a lot of similar information from Kubernetes, so we're kind of debating that. So yes, we do plan to keep it, probably, for Istio, if for no other reason than that on this past upgrade we somehow rediscovered Kiali.
A
I
don't
know
if
we
broke
it
before
or
just
hadn't
gone
in
there
or
something,
but
it's
super
cool
and
useful
to
see
sometimes
sometimes
making
the
mental
map
of
all
the
the
objects
in
the
cluster
and
then
being
able
to
see
it
visually
like
the
flow
is,
is
tremendously
helpful
for
troubleshooting.
A
You
know,
because
by
the
time
all
your
traffic
comes
in,
you
got
you
know
dns,
then
cloudflare,
then
an
elb,
then
an
ingress,
then
a
service.
Then
you
know
it's
like
where
where's
my
stuff
going
wrong.
So
that's
been
helpful
and
we'll
probably
keep
prometheus
in
there
too,
at
least
for
the
time
being,
because
if
we
plan
to
leverage
flagger,
which
we'd
still
like
to
I
know,
flagger
can
look
at
an
external
prometheus,
but
especially
if
we
plan
to
drop
it,
I
think
it.
I
think
it
looks
at
the
one
in
istio.
A
Maybe
it
installs
its
own.
I
can't
remember
just
to
scrape
stuff
up
from
the
from
the
mesh.
A: Okay, one more thing; I guess this may be more feedback. We noticed this, and it was one of the motivations for upgrading, other than keeping up with versions: we had a weird issue on the 1.5 release that we could never figure out. Fortunately it seems to have gone away with 1.6, but the built-in Prometheus was just crashing constantly; the pod was just crash-looping. I don't know what happened with that.
A: It's kind of not super useful feedback, because I don't have a ton of details. We tried bumping versions, giving it more resources, different things; for some reason it got into a weird state. I don't know; we kind of chose to blame EKS, because we don't love EKS that much. We were kind of all-in engineering-wise; as platform people we liked GCP and GKE a lot and weren't very happy about going to AWS and EKS, but it is what it is.

A: So I don't know if anyone else has seen or had issues with the built-in Prometheus having problems, but it seems happy now, knock on wood.
C: …

A: You just consume it at that point, okay, yeah, or, for me, it consumes your stuff. Okay, that makes sense. Let me see if there's anything else I had written down in notes.
A: No, I think that was mainly... oh, one thing, if it helps, just to add more color to our install: for all the rest of our infrastructure we're a Pulumi shop, I don't know if you've heard of them. We started out with Terraform when we were on GCP; Terraform's a great tool.

A: You get all the power of a general programming language and everything to define your infrastructure, which has been great. One of the things related to Istio is that we create the secrets for the certificates that the gateway, or the ingress, uses. So, just a minor point of how we use it, and then we just point...
C: So I have a question: you mentioned the CI/CD system for maintaining your kind of pipeline for promoting a...

A: …
A: Okay, yeah, that's probably when we're going to look into it. I think I noticed it when I was doing the reading and documenting for the 1.6 upgrade; I noticed that reference for the first time, that you could do that, and so yeah, I'd like to explore that more in 1.7 and see what that process is like.

A: Like how well we could automate that. In terms of deploying, is that just deploying another set of istiod pods, or is it duplicating everything? How does that work?
C: It'll duplicate just your control plane, so you'll run a second istiod, and then you move over your data planes to the newest istiod as you see fit, on whatever schedule you like. Okay, the one caveat that I'll give is that in 1.7, if you're using the Istio ingress gateway in istio-system, it is not revisioned, so it does get a hard cutover during a canary update. I think we're targeting a fix for that in 1.8.
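The canary flow being described looks roughly like this; the version numbers, revision name, and namespace are illustrative, and the exact uninstall command varies by Istio version.

```shell
# Install the new control plane alongside the old one, under a revision name:
istioctl install --set revision=1-7-0

# Point a namespace's sidecar injection at the new control plane:
kubectl label namespace my-app istio-injection- istio.io/rev=1-7-0 --overwrite

# Restart workloads so they pick up sidecars from the new revision:
kubectl rollout restart deployment -n my-app

# Once everything has moved over, remove the old control plane
# (experimental command; check the docs for your Istio version):
istioctl x uninstall --revision default
```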
A: Honestly, I think it's generally gone pretty well. Man, I wish I could remember; I have some vague memory of an issue we had. I mean, okay, one thing that has bitten us before, which isn't necessarily Istio-related, and it's like a joke amongst all engineers, is, who...
A: ...didn't renew the certificate, you know? Something somewhere had expired. So I guess any discoverability around that would be cool, whether by Kiali or some other method. We use the self-signed stuff internally for the mTLS, but at the edge, for the inbound traffic from the internet, those certs live on the ingress, and even just something minor to help alert or check for that would help. I suppose there's something in Prometheus or some other way, but yeah, certificates expiring always scares me, because this has happened.
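A minimal, self-contained sketch of that kind of check, using openssl against a PEM certificate like the one stored in a gateway's TLS secret. The file paths and the 45-day threshold are made up for illustration; here a short-lived self-signed cert stands in for the real one.

```shell
#!/usr/bin/env sh
# Create a short-lived self-signed cert as a stand-in for a gateway secret.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
  -days 30 -nodes -subj "/CN=gateway.example.internal" 2>/dev/null

# openssl exits 0 only if the cert will NOT expire within the given window.
if openssl x509 -checkend $((45 * 24 * 3600)) -noout -in /tmp/cert.pem >/dev/null; then
  echo "certificate ok"
else
  echo "certificate expires within 45 days"
fi
```

In a cluster, the same check could run against the cert pulled out of the Kubernetes TLS secret with kubectl before each upgrade.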
A
You
know,
we've
got
billions
of
certificates
all
over
the
place:
cloudflare
akamai
internally,
load,
balancers,
etc.
So
I
guess
just
anything
there,
but
yeah.
As
far
as
any
upgrades,
I
don't
think
we've
caused
a
caused
a
hard
down.
I
think
the
only
time
I
recall
it
did
go
down
as
well,
and
this
is
just
process
on
our
side.
A
Somebody,
I
think
somebody
thought
they
were
in
their
their
cube
context
or
something
and
they
were
screwing
around
with
istio
and
they
just
deleted
everything
or
something,
but
they
were
in
staging
or
prod
or
something
I
can't
remember
which
it
was,
but
it
was
like
all
of
a
sudden.
It
all
just
was
gone
because,
but
that's
but
that's
on
us,
you
know
it's
making
guard
rails
on,
you
know
hack
and
your
and
everything
so
yeah,
I
think
otherwise.
Otherwise
it's
gone
fine.
C: …

D: That's great. For your upgrade problem, the secrets for the ingress gateway: those are secrets managed by yourself, right? Because that's...

A: …

D: I assume it's the outer secrets; the inner secrets are managed by istiod, correct?
A: If things can parse that, or if there's a metric, it's probably something we could surface if we dug into Prometheus or New Relic a little more. We could probably either directly query the cert and check for the expiry or something; I just wasn't sure if there was something built into istioctl so that every time it ran, it just threw a warning.
E: Okay, it would be good to have that in the doc. Okay.
A: You know, meshes on top of multiple clusters and things; maybe those would be more useful for them at that point, which I'm excited to do, and I hope we get there. We want to try to.

A: This is kind of a little bit of a greenfield environment for Mindbody, in the sense that this is where we're trying to build our quote-unquote modern services. There are not as many migrations at this point; it's net new, which is lucky for us. But yeah, things like multi-cluster are on our radar for 2021, because we don't really have a good story at the Kubernetes level right now.
A
For
you
know,
each
of
our
environments
is
a
single
cluster
right
now
we
have
a
few
node
groups,
but
one
cluster
and
we'd
like
to
look
at
things
to
make
our
our
upgrade
processes
and
then
just
the
clusters
in
general,
more
resilient,
so
we're
gonna
have
to
look
at.
You
know
things
like
the
multi-cluster
mesh
and
things
of
that
nature.
B: Well, thank you for your report, Casey.

A: Sure.

B: No problem; it's good to get real feedback from users who are struggling, because it not only motivates us but gives us ideas for how to improve our process. Cool. So we have another 20 minutes. Casey, if you send your checklist, I will link to it, so people who come back and look at our internal document will see that, and I also took notes that I'm going to add, the way Lynn is adding right now. So, other things that the group must discuss today.
B: …

A: Kidding, okay, well, I probably will drop off, but I really appreciate it. It's really nice. I've worked in enterprise IT-type stuff for 20-plus years, but only very recently in web, open-source-type stuff, and I really like this aspect of it. You would never get this dealing with closed-source vendors and things. It's really cool to be able to give direct feedback on the product like this, so I appreciate it a lot.
B: The time, yeah, I was confused, but for you that would be noon Eastern.

C: …

B: Right, and I'm not sure if I can host this myself next week, because I might have jury duty, so I will tell Shamsher that next week is not going to happen. But do you think we could, later in the month, have at least one meeting a month to try to bring in people from other parts of the world?
C: …

D: So maybe we can start with a frequency of once a month first, and if we get a lot of people, we can increase the frequency. I just didn't want to change it to lunchtime, with the inconvenience for some of us who join regularly, and then have nobody show up. We had that within the Networking working group meeting for the longest time, and then we changed back to, I think, 2 p.m. Eastern, and then one of the leads elected was from India, so we moved back to lunchtime for us, which is good for that purpose, because that lead does join every single time, but I think he's the one person from Asia who joins.
D: So if we have people consistently showing up, I don't mind, but I just didn't want to move it for nothing.

B: I agree. It's just that I've never met Shamsher, and I was hoping to get him into at least one meeting. I wouldn't want to give up my lunches either. Okay, I will figure out which Wednesday that's going to be. So the next item I wanted to mention was: we have a P0 for a troubleshooting API, a design and maybe an implementation.
B: Good enough for me. So one of the items on our roadmap is having a way to upgrade your existing mesh using the settings that you'd already applied, and I have a PR that uses this awkward flag, profile-from-mesh, that I think Mandar suggested, possibly in jest.
B: I was wondering if anyone had thoughts on how this should appear in the command line. So the idea is, usually when you do an istioctl install you give it a profile, and maybe a revision if you're upgrading in place. Profile-from-cluster sort of says: don't use one of the compiled-in profiles like demo or default, but just use whatever customization you used last time. Any thoughts on what this should look like? The implementation is straightforward; it's just, how should we expose it?
B: …

C: …

D: So what does this do? I just want to make sure I understand: is this to reuse the profile on the mesh and then install another one?
B: Installing Istio puts the IstioOperator CR that was used by istioctl, or could be used by the operator, onto your cluster, to remind you of what you have, and this would use that instead of what's compiled into istioctl. Currently it's not being used for much of anything other than verify-install.
B: Although I should probably mention this great idea I have for listing control planes. One of the items that I'm still designing for is to list the control planes, and my new proposal is for cloud operators; this is not something users would be able to do. But if you installed your cluster yourself, I'm proposing a new istioctl manifest list that lists all of those operators that were created when you did the istioctl install, and tells...
D: Okay, I see, so it could be pretty convenient. I guess the only confusion I had, if I look at this from a user perspective, is with this flag, profile-from-mesh: it wasn't clear to me whether it would just reuse the profile I had in my current cluster, or not only the profile but also all the customization I had on top of that profile.
B: First, does it belong as an option on istioctl install? Should it be a manifest command? How can we let the user sort of know? I mean, it could be settings-from-mesh, or previous-settings, or... do we have any thoughts on that?
C: So, Ed, I've been a big supporter of this idea in the past, but I have to say that talking with Casey just now gives me just a touch of hesitation. The motivation here is that your upgrades are non-destructive. Right now, if you just run a very simple istioctl upgrade command, assuming you installed with a whole bunch of flags, you not only get an upgrade, you get all those flags unset, which is probably not what the user intended when running istioctl upgrade.

C: On the other hand, I feel like the right solution to this problem is to use CI/CD for deploying Istio, the way that we just heard Mindbody does, because it gives the full holistic experience.

C: I just wonder if this isn't about 30 percent of what our users need, and whether it might not be better to just publish a few blog posts really showcasing CI integrations and encouraging our users to go that way.
B: So there are a couple of other things I could do within this PR. One is that I could incorporate your previous settings into the "are you sure?" prompt. Right now it says: you're installing 1.7, are you sure? It could say: you're installing 1.7 and you're changing the value of auto mTLS, are you sure? Or it could say: you're changing the value of auto mTLS; do you want to use the old value or the new value?
B: …

E: I would say, first of all, what does Helm do? My vague memory is that Helm does what you described: it tells you the diff, but it's not...

E: ...the behavior. So I'm just saying that blindly; I do think the warning sounds...
D: My feeling is that this would be more useful for people doing testing. I think people deploying in production probably want to know exactly what they are deploying. They are most likely using a CI/CD pipeline instead of relying on the runtime configuration of the cluster at that moment, because that could be...
B: I had one other question I wanted to raise to the group. Shamsher and I got into a discussion on GitHub, where he made a PR to get rid of the use of the term "sub-commands", and I was wondering, for the folks here: when you see the commands of istioctl, like install or analyze, do you think of those as being commands or sub-commands?
B: I thought sub-command was the term; that seems to be the term used in Go and in Cobra, but Shamsher was sure that install and analyze are commands. Do we have any thoughts on that?
D: Well, I guess I always think that they are commands. A sub-command would be more like proxy-config secret, the second thing.
C: Sub-command is a term in Cobra, which is what we use for generating our CLI. From that perspective, almost everything we do is a sub-command: experimental would be one of the few top-level commands that we support, and everything below it is a sub-command. We're really treating the top-level commands more like categories, or, not exactly, namespaces. And I think, Ed...

C: You have a PR out for sort of reconciling istioctl commands into groups by what role would be executing them, or what role they require in terms of privileges, yeah.
B: You know, I gave Shamsher a hard time, because I saw web pages that said things like "go build" and "go get" are different sub-commands, and so I was thinking he was way off base, but it looks like Lynn agrees with Shamsher. What do we think for the long term? You're right, Mitch, we want to reorganize everything, but I want to make sure I didn't give too hard feedback on the PR that's out.
D: So to me, install and manifest are our top-level commands. I mean, if you think about sub-commands from the Cobra perspective, yes, they are sub-commands, but if you're a user just using istioctl, these are the top commands, because they represent the most important functions of istioctl.
B: Fair enough, and I can approve Shamsher's change, which is purely grammar and also removes discussion of commands that we've removed. So, thanks, Mitch. We have five minutes; I don't know if we have time for either one of these items. Should we put them off until next week?
C: Oh, okay, yeah, next week is probably good, at least for personas. That's going to take a longer conversation than five minutes.
B: All right, I'm going to say two things then. First, if the meeting doesn't start next week, it's because I have jury duty, and I will put Mitch's Google Hangout in the link here if that happens, so that everyone will know, but I may not be able to change any calendar entries you already have.
B: The other thing is that I've been hard at work writing a proposal for listing control planes, which is a long-standing item; I think it's P1, and I'm proposing three commands. So I'm going to ask people, although we don't have time to review it now: if anyone wants to read it ahead of time and make comments for the next time I present it, here's...
B: ...what I'm proposing. First, istioctl manifest list, which would list, well, normally you're not in a canary, so it would just say the revision you're using and what profile it came from, and it will also say if you've done any customizations on that profile, which I think is just handy for people to see what they have when they sit down. But then I also wanted to mix in the number of pods and the age from Kubernetes here, and this is an operator's...
B: ...thing. I think this would be a nice way to list it. The listing of the sidecar control planes is something that we've struggled with. We have istioctl version, which tells you what version each sidecar is using, but it can be confusing.
B: So here's what I'm proposing: istioctl proxy list, which can show either by control plane or by namespace. If you show it by control plane, it looks very similar to the previous thing, but of course you can't see the settings, because you might not have access to them; it might be a central istiod. So it tells you where the control plane lives and what data plane namespaces are set up to be injected for that revision.
B
How
many
pods
are
in
the
those
namespaces
and
how
many
sidecars
are
stale
and
I'm
still
side
cars?
It's
a
new
concept,
and
those
are
the
side
cars
that
when
you
run
this
to
cuddle
analyze,
it
says
the
proxy
image
is
old.
It
doesn't
reflect
what's
in
your
injector,
maybe
that's.
B: …

F: A stale image. Maybe we don't want to use the word "stale" at all, though, because when you said stale, I immediately thought the same thing you did, Mitch.
B
So
that's
a
good
point,
so
we
need
a
better
term
for
that
than
stale.
I
will
say
that
I
was
thinking
that
we
might
consider
adding
a
column
for
that
dynamic,
stillness
in
addition
to
the
old
image
stillness
so
anyway,
listing
by
control
plane
is
nice.
At
least
this
is
helpful
if
you're
doing
a
canary
to
know
what
you've
got
out
there
and
which
namespaces
are
which
listing
it
by
namespace.
B
Also
nice,
I'm
not
sure
if
we
need
both
or
if
one
of
these
is
better,
we
should
get
rid
of
the
other
one,
and
this
just
lists
all
of
your
namespaces
and
which
revision
is
set
to
inject
on
it
and
the
information
we
saw
before
and
then
something
I've
been
hearing
a
lot
about
from
users
in
my
ibm
team
is
that
users
want
to
freshen
those
sidecars.
B
So
this
is
kind
of
something
of
a
sketch,
but
if
we
sort
of
have
images
that
are
old,
that
we've
sort
of
identified
with
this
command
istio
cuddle
proxy
update,
sidecar,
we
appeared
a
proxy
list.
Update
sidecar
or
update
would
either
take
a
single
resource
like
a
deployment,
an
entire
namespace
or
maybe
an
entire
cluster.
B: …

C: Yeah, I mean, for a very large deployment that might be acceptable, but you do have deployments that scale down to one, or other deployments that are running at very near capacity, for which you really want to respect the rollout strategy, which would be: create a new item on the new revision before deleting the old item on the old revision, if that makes sense.
B: So would it be better, maybe, to output just informational stuff? This currently doesn't output the names of the pods, but there could be either an option on this, or a separate command, that sort of says: these deployments are old, if you want to pick them.
C: I think we recommend restarting the deployments altogether, and we can even do it for them if we want. What we then need to invest in, or investigate, is that handful of scenarios where they're not using deployments, where they're using jobs or stateful sets or something along those lines. Yes...
C: ...what a reasonable way to target a portion of that workload is, and I don't know what that is yet.
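The restart being recommended is the standard Kubernetes rollout mechanism, roughly as follows; the namespace is illustrative.

```shell
# Re-create a deployment's pods so they are re-injected with the
# sidecar from the currently configured revision:
kubectl rollout restart deployment -n my-app

# Jobs and bare pods have no rollout machinery, which is the gap
# being discussed; they have to be deleted and re-created instead.
```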
B
Yeah,
it's
a
big
trouble
for
manual
injection
as
well.
I
thought
I'd
put
it
out
there
to
see
if
people
liked
it
all
right
so
we're
over
time.
I
will
refresh
this
based
on
what
I
just
heard
and
also
refresh
the
meeting
notes,
based
on
what
we
heard
from
casey
from
mindbody,
and
hopefully
I
will
see
you
next
week.
Thank
you.