From YouTube: Istio User Experience WG September 22, 2020
Description
- Report from User Empathy Session
- Automatically upgrade data plane?
- istioctl analyzer warns on namespace not injected
- istioctl version and proxy-status to use XDS
- Better status on Virtual Service
B
So I went through my notes. The meeting was to see how the people who are adopting Istio feel about how it's going, and to ask us for the features they want. Most of the stuff seemed to be about capabilities, and I wasn't sure how this working group could address them. They're mostly security and networking things, but two of the items stood out to me.
B
I mean, we have RBAC security on a lot of stuff, but it's hard to know what security you yourself have once you sit down at istioctl, and then, if you're some kind of administrator, what security you have given to particular users.
B
So this seemed like an area where we might need help from the security working group, but we should consider coming up with a way to say who can do what. I think the security working group is more focused on improving security, but we might have a lot to say about letting people understand what they have applied in terms of modifying the control plane itself: who's allowed to do that.
A
And I think, you know, we have that personas document that Steve started, and I've been spending a lot of time reading through it and thinking about how to revise it in terms of not just personas but also roles.
A
I think that we should probably include in it personas with very restricted privileges; a persona with a very specific role within the mesh would be useful.
B
I think that makes sense, and I've heard similar things from Lynn, so we should see what we can do to either improve this or get Stephen back here to talk about it some more. We have not talked about it in a month or two.
A
Yeah, it feels quick, it feels early, but at the same time I know that the 1.8 cycle is not going to last much longer. So yes.
C
Yeah, so maybe you can start with an initial pass. I can take a look before our meeting next Tuesday, and then I will have you talk about it. I just want to make sure it aligns with what we see from our clients too. I think it probably will align, based on what we hear on Friday; it seems that they have similar pain points and roles.
B
I don't know if anyone from Kiali is still attending these meetings, and I know Kiali has their own monthly meetings that I have not attended recently; we probably should try to sync up with that meeting or get them here. Kiali does a great job showing the traffic that's actually happening, but it does not show what is blocked versus what just isn't being exercised. And some people at the empathy session were saying: well, we come from a background where we had a dedicated security team, and now we're on Istio.
B
I have not, and I sometimes get confused between what is being done with Calico and networking at the Kubernetes level versus what's being done in Istio. Nothing really shows it all together. Is anyone here from a security background, or would it be better for Mitch and I to take this to the security working group, ask them to do this, and beg for help?
B
Yes, you want to have the most restricted permissions in which your application still functions: not just barely enough, but restricted so that even if some cluster goes rogue, or some microservice goes rogue, it can't do anything.
B
Exactly. So I feel like at a minimum we should make an issue for this and ask the security working group to think about it. But because it's mostly a Kubernetes issue and not an Istio issue, it's hard to say; people should be solving these things with Kubernetes networking, not so much with Istio. I mean, we do egress gateway stuff, but that's about it.
C
So are they complaining that Istio security is enforced or not enforced and they couldn't visualize it in the dashboard, or are they complaining that with a Kubernetes network policy you don't allow the traffic to go through and can't visualize that in the Istio dashboard? I'm just trying to understand: is this a service-mesh-specific problem, or is this a generic problem for Kubernetes?
A
I think that this is a service mesh problem, because Kubernetes solves the north-south problem to a large degree; Istio is relevant there, but not as valuable. Our forte is going to be east-west, where we say, hey, the only service that needs to talk to the checkout service is the cart service, and nothing else needs access to it, because it's a sensitive service. That's east-west control that Istio RBAC pretty naturally has influence over.
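That kind of east-west rule maps onto an Istio AuthorizationPolicy. A minimal sketch, assuming hypothetical names (a `shop` namespace, a `checkout` workload, and a `cart` service account):

```yaml
# Hypothetical example: only the cart service's identity may call checkout.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: checkout-allow-cart-only
  namespace: shop
spec:
  selector:
    matchLabels:
      app: checkout              # applies to the checkout workload only
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/shop/sa/cart"]  # cart's mTLS identity
```

Once any ALLOW policy matches a workload, requests from identities not listed are denied by default, which is what gives the east-west lockdown described above.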
C
They are already using network policy, and they are scoping their teams through namespaces, and that's what they're comfortable with. So when they first move to Istio, RBAC is not going to be the first thing they look at; they would gradually move to RBAC, but it's not going to be day one or day two.
B
They asked what the roadmap was for a web UI, so that other teams that are not familiar with Kubernetes and Istio can see a dashboard of what is talking to what. They didn't mention Kiali here, but Kiali has a dashboard for what is talking to what, and maybe that's enough. We just don't mention Kiali a lot in our documentation; maybe we should tell this company: will you use Kiali to see what your microservices are doing?
A
I noticed that a lot of our users are unaware of canaries and are either not using them in production or simply don't know that they exist and that they should be using them.
A
Istio upgrades. That's something we should look at, both in terms of how we're communicating it to our users and how we're making sure that they know to do it. How can we maybe discourage in-place upgrades and provide a warning: hey, this is unsafe, use canaries instead? Something like that. That's the communication side; and then understanding what the friction or the hesitation is.
B
For Friday, my view of this is that the next step for helping them understand that is an istioctl command to list their control planes. We had a lot of arguing about that in 1.6 and 1.7, because of course your control plane might be remote, but I think we agreed for 1.8 that we're going to look at the injector config map and provide commands to let people see which namespaces have which injectors, and just have a command that says: all your namespaces are injected by your master control plane.
A
I think it'll be valuable, and that will move the needle on experience there. I think perhaps one of the other difficulties is that when we talk about canaries, we are thinking about running n control planes, where n could be absolutely any number and you can have all sorts of complex configurations; but for 99% of our users, what they're worried about is having two control planes, which is a much simpler scenario.
A
So having some istioctl commands that work only for canarying a control plane: you have one that's master, you bring one in, you move traffic over to it, or all the proxies over to it, and then you shut down the old one.
A
Those commands would not work for users that want to have five or seven control planes out there for various reasons, but I do think that it would help users transition from the in-place upgrade, which makes very intuitive sense to our users, to this more flexible model. Right now it's an infinitely flexible model, and I think that's a little bit overwhelming.
B
Right, so we want to recommend a best practice that's sort of: install a canary, move one namespace at a time to the canary, and then make the canary the new master. We want to organize that command around that story, so we have a better story for how to do it.
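The pieces of that story already exist as revisions. A minimal sketch, assuming a hypothetical revision name `canary` and namespace `foo`:

```yaml
# Hypothetical sketch: install the canary control plane under its own revision...
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: canary-control-plane
spec:
  revision: canary          # istiod and its injector are tagged with this revision
---
# ...then point one namespace at it (replacing any plain istio-injection label).
apiVersion: v1
kind: Namespace
metadata:
  name: foo
  labels:
    istio.io/rev: canary    # sidecars injected here come from the canary istiod
```

Moving namespaces one at a time is then a matter of flipping this label and rolling-restarting the workloads, which is the story the command would wrap.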
C
Yeah, so I think you're right, a lot of users are not so much aware of control plane canarying. I know for IBM, I've talked with a few users, and all of our users are looking at a managed Istio solution. So this would not be something they would be interested in taking care of, because they would trust the cloud provider to do that for them, and also because of the fact that we support both in-place upgrade and canary upgrade.

The other key factor, I think, is that most of our users are not complaining about the control plane update; that actually goes pretty smoothly. Most of the complaints we heard are about the gateway update: the gateway has downtime, and the canary capability is only on istiod and not on the gateway. So for our users it's not useful.
D
I just found that interesting. I think it's valid; we should have a better way to do it, but it's interesting that we didn't get a lot of issues about it previously.
C
Right, exactly: that's why we started sharing this. I think it's also partly because a lot of the users are getting into production and they do a lot of upgrade testing themselves, and that's how they find out it's taking 10 to 20 seconds of downtime for them, even though they already have multiple replicas of the ingress gateway during the upgrade.
C
I think Martin added what the community calls revisions, so the Istio operator also supports revisions. The way it works is you can have one revision just for your Istio control plane and your default gateway, and then you can have a different revision focused on your custom gateway, and that way you can fully control your custom gateway upgrade, while our managed solution provides the control plane and default gateway updates for the user.
D
To some extent we have a monolith again, because there's a single IstioOperator file that we install. Really the gateway should be completely independent; it probably shouldn't be an Istio system component, it's really just like a normal user application. But because istioctl has this monolith approach, we've kind of gotten away from that. I do think it's valuable to have it be split and make that more common; maybe not the default, because everyone's so used to just spinning up everything at once.
D
But best practice, certainly, I think, is just to have them split.
B
That makes sense. I want to ask another question: if there's a managed Istio, will there be canary control planes? I know some of us are planning managed Istio without canaries, but I would think that some providers might want to have a managed-plane canary, where you might have two managed planes and you're choosing which namespaces get which. I think about my managed Kubernetes, where I apply new versions to the workers and to the API server independently.
C
I think for us it will be more useful when we provide different managed-service solutions. We have a managed service today where we run istiod inside the customer's cluster, and we're interested in running istiod external to the customer's cluster. So when we think about that transition, I think the canary makes sense: based on the canary, we could potentially transfer the sidecars that are on the old istiod to the external istiod.
D
One thing that's kind of unrelated to that comment is some of the feedback we've got, and I don't remember if it was in that meeting or if I've just heard it from other users: the burden of labeling the namespaces is challenging, because it flips the control from the admin to the namespace admin. Previously, Istio could be kind of transparent: the system admin just upgrades it and everything is good to go.
D
So what we've heard from some users, and I think it would be the ideal, is: if you don't care about what revision you're using, you just slap on the istio-injection=enabled label, and someone will configure in a centralized place where that goes.
D
And if you do care, because you want to be cutting edge or you want to test out a version, then you can use the istio.io/rev label to override it. But right now, basically, the idea would be to give centralized control over the revisions, rather than decentralized.
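A sketch of that split (namespace names hypothetical; the `istio.io/rev` value depends on what revisions are installed): one namespace defers to the centrally chosen default, the other pins a revision explicitly:

```yaml
# Hypothetical: this team doesn't care which revision; the platform admin
# decides centrally which istiod the plain injection label maps to.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    istio-injection: enabled
---
# This team opts in to a specific revision, overriding the default.
apiVersion: v1
kind: Namespace
metadata:
  name: team-b
  labels:
    istio.io/rev: 1-8-0
```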
B
I mean, the limitation is that you can't specify istio.io/rev= for your master control plane.
D
Oh yeah, that's not fixed; I mean, we could fix it.
D
This is an easy problem to solve, not taking into account tooling and istioctl and helm and all that stuff. I think we just need to make a webhook that matches that label and points it to wherever you want. How we expose that to users and how we hook it into the operator is the only hard part; the actual Kubernetes config is pretty trivial to do yourself, I think.
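A minimal sketch of such a webhook (names hypothetical; the real injector webhook carries more match rules plus a CA bundle): a MutatingWebhookConfiguration whose namespaceSelector catches the plain injection label and routes it to whichever istiod the admin designates:

```yaml
# Hypothetical fragment: route istio-injection=enabled namespaces to a chosen istiod.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector-default
webhooks:
- name: sidecar-injector.istio.io
  namespaceSelector:
    matchLabels:
      istio-injection: enabled   # the "I don't care which revision" label
  clientConfig:
    service:
      name: istiod-1-8-0         # admin points this at the default revision
      namespace: istio-system
      path: /inject
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

Centralized control then amounts to editing this one object rather than touching every namespace.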
B
I agree with you that it's nice to just do the labels explicitly and not make people learn an istioctl command for labeling, and the analyze command already sort of gives you the kubectl command to do the labeling, so I think we're in good shape. That was all I had to say on the empathy session. Do we have any other comments from people at the empathy session, of things that we should be looking at as a group?
C
Yeah, so just on this point, not from the empathy session, but we did hear from our users: some of them were complaining to us that upgrading the data plane manually is too troublesome. Some of them even asked whether we would be willing to support something like this: if they label their pods in a certain way, we would automatically upgrade the data planes for them whenever the control plane gets updated; and if they didn't have such a label, then they would manually do rolling upgrades of the data plane.
B
It was going to be a command where you'd say restart either a deployment, all the deployments in the namespace that had our sidecar, or all of the deployments in all namespaces with the sidecar. So there was going to be a command line for that. Do you think that would be sufficient, if after they did istioctl install it said: now run this istioctl data-plane restart? Or is that not good enough?
C
I think that helps, but they still have to initiate the command. If you think about the managed environment, the control planes are not upgraded by them; moving between fixes, from 1.6.0 to 1.6.1 or 1.6.9, is all provided by the cloud provider, and then they are the ones who have to manually upgrade the data plane. So even though it's one command, they still have to do it.
C
Yeah, cloud providers normally don't touch the data plane, because they don't know which workload is critical, and that's why I think an annotation would be helpful. They know: these are the services that they just want automatically upgraded; these are super critical, and they really want to upgrade those at a convenient time when they are actually.
A
Working. I think the question of upgrading the data plane is really one of orchestration and of continuous deployment, and I wonder whether the solution here is to provide easier ways for them to integrate data plane rollouts with their existing continuous deployment system.
A
And this is something that we struggle with a lot in user experience. Istio is not an end-to-end solution for anything; it's meant to be part of an ecosystem, and so there are things that it doesn't do that our users absolutely need, and that makes it really difficult to demo effectively. John, I know that you ran into this with the add-on problem: for a demo, what I want is an install istio command, and then, poof, I've got everything that I need.
A
I can show off all the bells and whistles and all the features. But for production, what we need is to stay in our lane, effectively: to handle only service-mesh-relevant concepts, and delegate orchestration to Kubernetes and continuous deployment to whatever their particular solution is. I'm not sure how we balance those two needs. Does that make sense?
B
Some users want it managed for them: I want it managed for me, so I don't care if my app is down for 10 seconds a week. Other users want to control that in a particular window, and there needs to be some way to do that. So the idea of having an annotation that says automatically do it, or warn me, or ignore this problem, would be awesome.
C
Yeah, so maybe one thing is that you allow the user to label a namespace, or at the pod level, to say these are the ones I want to automatically upgrade, and then we provide an istioctl command, or expand a wonderful existing command, so that all the user needs to do, every day or every other day, is run that command, and it will scan for those pods or namespaces.
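A sketch of what that opt-in could look like. The label name here is purely hypothetical (nothing like it existed at the time); the idea is just a marker that a scanning command would honor:

```yaml
# Hypothetical opt-in label; a scanning command would roll sidecars only
# in namespaces (or on pods) that carry it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    istio-injection: enabled
    example.istio.io/auto-dataplane-upgrade: "true"  # hypothetical marker
```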
B
I think that's great. Let's write that up and talk about it next week; I can take a look at that.
B
Yeah, good enough. So I wanted to move on; I'm adding a couple of items, and I see Howard has added one. I just had a short question about an issue Lynn filed for us: the warnings that we give for a namespace not injected for Istio injection.
B
So I guess the question is: it does seem that we're warning at much too high a level for this. What should we be saying for all namespaces? Should the cloud provider be putting istio-injection=disabled in those namespaces to get rid of this message? Is it istioctl's, or the analyzer's, responsibility to not look at particular namespaces regardless of an annotation?
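For reference, the opt-out being discussed is just a label on the namespace; a minimal sketch (namespace name hypothetical):

```yaml
# Hypothetical: explicitly marking a namespace as intentionally uninjected,
# so tooling can distinguish "opted out" from "forgot to label".
apiVersion: v1
kind: Namespace
metadata:
  name: logging
  labels:
    istio-injection: disabled
```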
E
So I took a look at this. This was discussed when the analyzer was written. One improvement I think we could have is that right now we put warn and info all to standard out.
E
So that's why users see this message along with the info. What we probably can do is put warn on standard error, so if they pipe standard error to /dev/null, they won't see the warnings, just the info. That could reduce the number of warning messages that they see.
D
Isn't it the opposite? You'd want to filter out the warnings, not the errors? Errors are the most useful, so we always want the errors; we sometimes want the warnings; sometimes we want the info; and we never want the debug. I mean, sometimes maybe debug; I don't even know if we have debug. But I don't see why we'd want debug and not errors.
E
Yeah, I was thinking about the errors here. The issue is that we have too many warnings, so how do we want to reduce the warnings?
C
I actually have no intention for them to have Istio injection, so I don't really need that warning. But what tripped me up is that at the end of a bunch of warnings and info there's an error, "Analyzers found issues when analyzing all namespaces", and honestly, that's all I saw, and that's all that matters to me.
B
So yeah. kube-system is intentionally blocked and you don't see it here. I had suggested two years ago that any namespace ending in -system should be blocked, but people said no; by people I mean Jason Young. So it's possible we can write an ad hoc rule for certain names not matching. The problem is, when you say all namespaces, people expect all namespaces.
B
First,
I
want
to
make
sure
I
can
get
an
owner
for
this
jason.
Would
you
like
to
take
this
one
on.
B
It
should
be
pretty
easy.
I
will
just,
I
think,
the
things
we
both
agreed
on
were
changing
these
levels.
If
I
have
any
other
good
ideas,
maybe
just
do
them
and
we
will
review.
B
I think we emit an error if there's anything in here which is not good, and I think the reason we do that has to do with the exit code we return; I think we return a value for scripts saying it's not perfect. So we might need to adjust that as well.
B
I have a philosophy question. We had discussed that the XDS-based versions of the proxy-status commands be the defaults for 1.8, and we even had one of our users who doesn't come to this meeting implement this; but I realized after that that it might break our compatibility with older control planes. Do we have a philosophy on what we should do in this case?
B
Many users are going to have a mixture of 1.8 and 1.7 when 1.8 comes out. I think 1.7 is supposed to work with this new XDS and tokens, but in testing it I found some weirdness. It could be how I installed it, so I just wanted to know what people thought the best policy was.
B
I reached out to him; he said he might show up. He's had it for two weeks, so maybe I'll reach out again. I think that's all I have. But Howard, what do you have? What's this item, better status on Virtual Service?
D
Yeah, so this is very vague, because I don't know much more beyond the vagueness. But I've been writing up a doc on figuring out issues with TLS, like mismatches and stuff, and it was one of those things where I was kind of embarrassed to write the doc, because in order to figure this out you have to go dig into all these obscure areas, when it seems like really I should just be able to look somewhere, maybe the status or otherwise, that says: hey.
B
So, new for 1.8, analyze is going to deal with duplicate matching conditions, and that is something I would love to do for 1.8. What we wanted to do was provide some commands to help make it easier to do this, by fixing authn and describe, which go out to pilot and ask things, and I think the security group was going to help us with that.
B
I would prefer to do what you just said, which is to have analyze really do it by looking at the TLS settings, but I am afraid, first, of my inability to do it correctly, and secondly, of duplicating the logic that is already in the data plane and control plane. Do you think you, or someone, could come up with something that looks at Istio resources and can tell you if the TLS is correct without asking pilot live?
D
Right, so that's similar to what I was thinking: we can add all these heuristic rules, but it's never going to be enough to give us good depth. We really need to actually modify the pilot code to emit additional information, which is then piped through to the right place.
D
To have fully in-depth status messages. The same applies to TLS and virtual services, and I suppose other resources as well. I don't know how we would do that, but it seems like that is the way to go. Otherwise, we can always add something like: oh, this virtual service has a duplicate route, which I figure is what your check added, which is great, but.
B
If we do this for 1.8, then we would have the code, and then in 1.9 we could try to move the code into an analyzer. Is that too aggressive, or not aggressive enough?
D
You
know
like
info
about
why
it's
doing
what
it's
doing
and
then
correlating
that
back
to
resources
and
then
emitting
that
status.
Like
that's
very
challenging
right,
I
really,
I
don't
even
know
what
that
would
look
like,
but
I
do
think
it
would
be
valuable.
B
I asked them to put it in for authn as well, and they said no, I had to do it myself because they're too busy. But it would be nice if the authn and the TLS stuff could both go to the Envoy config and to something that the analyzer could pick up.
B
Currently the analyzer doesn't ask pilot any questions; it looks at its own snapshot. So this would either be breaking that convention, or copying the logic so that the analyzer actually generates the exact same XDS, then throws it away and saves only the problems it encountered.
D
Yeah, I mean, the same issue applies to my suggestion as well, but one of the differences with what I have been thinking about is that you're talking about generating the XDS and then analyzing the XDS. A lot of the issues I've been looking at are things where I have a virtual service, but it just gets filtered out for one reason or another, so it's never actually in the XDS, and I want to know: what happened to my route? Where did it go? Why was it ignored?
D
And so we really need to hook in at a higher layer than the XDS itself, but it still has a similar problem where it's kind of per-proxy, because you have different gateways which may have different results, and it could be for sidecars too.
C
So I have a dumb question. I thought, Mitch, you had a similar feature on this that was turned experimental before one of the releases, maybe 1.6?
B
Taking the old 1.5 XDS endpoint that told you if a pod is TLS or not, and restoring that and putting it under the experimental XDS commands. But there was a second thing that I talked about, which is when we generate Envoy config and send it via XDS: we could generate that config, say, for a synthetic pod in every namespace, and see if any weird exceptions were coming out of the code, and then we could use that to make things.
B
We
sort
of
look
for
problems
and
annotate
crs
with
the
problems,
and
we
could
put
that
in
the
analyzer,
so
the
analyzer
would
itself
part
of
the
one
of
the
analyzers
would
be
the
pretend
I'm
a
synthetic
pod
analyzer
that
just
sort
of
says.
If,
if
a
pod
was
in
this
name
space,
what
would
happen.
B
Would you get an error trying to generate the initial XDS for that pod? That could be an analyzer.
A
I think in this case I agree with John that we can get to the same result for our users, in terms of mismatched config, without dropping down into XDS-level configuration. And Lin, to your point: if we do implement that inside of an analyzer, then it will automatically get written into the status field as part of the existing alpha-level feature.
C
A lot of us have to run multiple tools to find out what might be wrong, and sometimes I even have to ask people like you and John and others on the Istio project, which is embarrassing: I've worked on this project for three years and I still sometimes can't figure out what's going on. So it's just hard for our users.
A
Yeah, and to that end, we do have plans in 1.8 to begin work on the trace tool, and the goal of that tool is very much to answer the "why" question, rather than to solve individual problems about TLS mismatches or things like that. It's asking why a certain thing is happening to traffic. We're in very early days in the design of that; I'm trying to collect feedback from users on what would be a helpful format for it.
B
And I've been playing with something like that as well, as I think you know, Mitch. I have this tool, and the dream is that it's going to work like this: it's going to take two pods, add an ephemeral httpbin container to one pod and an ephemeral sleep container to the other pod, and run some traffic back and forth between them.
B
With
some
offset
record
what
really
happened
so
that
we
would
say
you
know
if
these
two
pods
ever
communicated,
they
would
use
tls.
B
The client side would make three attempts, but the receiving side would only hear one of the attempts, because there are some retries happening and some auto-mTLS, something like that. I've had some difficulty writing that so far, but I think that's what I want for 1.9, and maybe sooner: a real trace route that really says what happens if these two pods start talking. It's able to inject the behavior into them using something like ephemeral containers, so we don't ever need to think about restarting them to get this information.
B
Yeah, no, I started with the first implementation because I wanted to see what would be easy; a more disciplined person would start first with what it should look like, rather than what's easy. But my thought was I would make something, then test it and battle it myself, and get early feedback to give to you, Mitch.
C
Makes sense, thanks, Ed, yeah, makes sense. I think users are going to need both. First of all, they're going to look at the status fields to figure out which one they need to look at; second, the trace tool, to show them the actual behavior, so they can validate and find out that's not really their intended behavior. That would help them figure out why it's wrong.
B
Yes, I totally agree. So let me reveal one of my secrets, Lin: httpbin has a delay URL; you can say delay by one second or ten seconds. So in this first version of the tool, it makes a request to httpbin for, like, eight different delays, and it sort of tells you which ones Istio caught, because the Istio timeout was shorter than the real delay. So it sort of tells you what's really happening, and it's quite easy to figure out.
B
You know where Istio's timeouts are just by honing in on them, using something like Newton's method to find where the Istio delay starts and where the user delay ends. So there are all kinds of cool things we can do once we have a framework like that to play with.
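The honing-in idea is easy to sketch; the transcript says "Newton's method", but plain bisection does the same job. A minimal sketch (assumptions: the mesh's route timeout is a hidden threshold, and `call` stands in for an HTTP request to an httpbin `/delay/<seconds>` endpoint through the mesh, returning whether it succeeded):

```python
def probe_timeout(call, lo=0.0, hi=30.0, eps=0.05):
    """Bisect for the smallest delay at which the mesh cuts the request off.

    `call(delay)` returns True if a request delayed by `delay` seconds
    made it through (i.e., the mesh timeout is longer than `delay`).
    """
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if call(mid):
            lo = mid   # request survived: timeout is above mid
        else:
            hi = mid   # mesh timed out: threshold is at or below mid
    return (lo + hi) / 2

# Simulated mesh with a 10-second route timeout: longer delays fail.
mesh_timeout = 10.0
estimate = probe_timeout(lambda d: d < mesh_timeout)
assert abs(estimate - mesh_timeout) < 0.1
```

Against a real cluster, `call` would issue the httpbin request and check for a 504 from the sidecar; the bisection itself is unchanged.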
B
Okay, well, we've reached the top of the hour. So next week we have two exciting things: Mitch has agreed to talk about personas and roles, and we have a huge report from Casey Robertson, who put this on.
A
Yeah, the context for that is that Casey mentioned on Twitter that they had a great deal of difficulty related to upgrades, and that they would like to see some sort of validation on their upgrade before running it; but they were fairly vague on the details. You know, it's 140 characters or less. So I invited him to come and share the details with us, so that we could understand the use case and what he would like to see.