From YouTube: Istio User Experience working group, August 18, 2020
Description
- Wiki instructions for using experimental XDS-based istioctl command variants
- Unsharded view of XDS
- Inventory/analysis of istioctl commands
A: So, thank you, everyone. This is the user experience meeting. Last Friday we presented our roadmap to the TOC, but we sort of went over time; there was not enough time to do the whole thing. The feedback I got on the roadmap is that what we agreed on last week was not everything they wanted. They liked Mitch's user story, and we gave them a full list of the work items we wanted to do, with rankings, but they didn't feel that we were telling them why we wanted to do them.
A: So I started putting single words or phrases on each item, such as "correctness" for the federated view, or "learnability," in front of these items, and I think on Friday we're going to have to go back with something better than that. So I'm trying to finish up and come up with the right words to get people to understand why we're doing these things.
A: So I think we all agree; we all agreed on these items last week, and what I think is new is just how we're going to tell the TOC about them, all right? All I wrote here was "federated view of XDS events." I even forgot to link this one. I made it a P0, and they didn't know what that was. It's for the correctness of the proxy-status command.
A: We just need to be able to put all of that into this chart. Do we want to go through this line by line, or do you just trust me to do this before Friday, and that I'm going to do a good job? I just want to make sure I'm not trying to make everyone do this in my terminology. Or maybe, Mitch, you and I can do this.
B: I have some thoughts. If we want to do it now, that's fine! If we want to do it kind of offline, I can do that as well.
B: I think we can group a lot of things under the federated view of XDS events, this idea of getting all of our troubleshooting commands working across all supported topologies. That covers the troubleshooting API; that covers this security credentials stuff here. I think about half of what we're doing can probably be grouped under that use case.
D: Mitch and I just had a conversation about the VM scenario: proxy-status with the --file parameter. It's just critical to translate these into how users are consuming them, like the federated view of XDS events, I assume.
A: Yes, so I will refactor this item. Mitch, is there anything we want to discuss together right now about these items? I'm happy to put in the commands being affected.
B: Well, like I said, I think there's value in grouping a lot of them together. We've got maybe 30 issues linked here; rather than telling 30 different stories in terms of usability, tell the higher-level stories, such as "all commands work across all environments." Like I said, I think we could group about half of them under that particular heading.
A: And that makes sense. So you think I should make epics for these higher-level stories, then link the items, or use the little GitHub checkboxes to help me manage it in GitHub, and then put the stories here for presenting to the TOC?
A: Yes, so I will link each of these individual work items to one of these higher-level stories. One of them is finishing the refactor for central istiod; we had a lot of items about that.
A: I think Mitch and I can handle that on our own. I think the items that we agreed to do can all easily be grouped into three or four of these higher-level stories, and I can make issues or epics for them in GitHub and link them all up to the items with the priorities that we set here. I can make each high-level story's priority the same as the highest priority among its individual work items.
A: So the next thing I wanted to talk about was the instructions that I wrote last weekend for using the experimental istioctl command variants. Not everyone is aware of this, but I wrote these instructions in the wiki, rather than trying to get them through docs and release notes, because I feel there's stuff here that we should be trying out before we expect everyone to read it.
A: So I encourage everyone to do it. The instructions now seem to work with what's been merged. I'll just go through this briefly, because I think it's important.
A: The first step is how to make certificates for using the XDS-based commands. Well, first, the background, which is not here: the proxy-status and version commands have a new implementation for central istiod based on XDS, and we're trying to move everything over to that for 1.8; we're trying to get rid of the old ones. So the first thing is that, to use the new commands, you need certificates, and there's a clunky way to make them using the Makefile.
A: This tells how to do it, and I put an agenda item on the networking meeting for Thursday to figure out how to simplify this: should istioctl be simplifying it, or should environments be simplifying the Makefile? So you run these commands to make the certificates.
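As a rough sketch, the Makefile steps being discussed look something like the following (target and path names follow the Istio repo's tools/certs tooling of that era; treat this as illustrative, not authoritative, and check your checkout):

```shell
# From an istio/istio checkout; the cert tooling lives under tools/certs.
cd tools/certs

# Generate a self-signed root CA (produces root-key.pem, root-cert.pem).
make -f Makefile.selfsigned.mk root-ca

# Generate intermediate certs for a cluster; output lands in cluster1/.
make -f Makefile.selfsigned.mk cluster1-cacerts

# The clunky part mentioned below: the cert and the chain come out in
# separate files and have to be combined by hand for the client.
cat cluster1/ca-cert.pem cluster1/cert-chain.pem > cluster1/client-chain.pem
```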
A: The next step: if you have central istiod, you expose it using a Gateway and a VirtualService on a particular port with TLS passthrough, so that you can expose to the outside world the HTTPS that istiod already does. Then, using your new certificates, you use the experimental istioctl x commands with all of these options: the XDS address, the authority, and the certificates. And then you use the new configuration-file feature that we dark-launched to set those options.
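A minimal sketch of what that exposure could look like, assuming istiod's standard XDS port 15012 and the istio-system namespace (hostnames here are invented for illustration, not the exact wiki content):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istiod-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15012
      name: tls-istiod
      protocol: TLS
    tls:
      mode: PASSTHROUGH   # istiod terminates TLS itself
    hosts:
    - "istiod.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiod-vs
  namespace: istio-system
spec:
  hosts:
  - "istiod.example.com"
  gateways:
  - istiod-gateway
  tls:
  - match:
    - port: 15012
      sniHosts: ["istiod.example.com"]
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15012
```

With that in place, the experimental commands would be invoked roughly as `istioctl x proxy-status --xds-address istiod.example.com:15012 --authority istiod.istio-system.svc --cert-dir ./certs` (flag names as of the 1.7-era experimental variants; verify against your istioctl build).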
E: Hey Ed, just a quick comment: I know some of the complexity around this was getting a certificate. In very recent code we actually added that you can use the JWT token directly now, so that may help you a bit.
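For reference, the JWT path John mentions boils down to requesting a service-account token through the Kubernetes TokenRequest API. The shapes below are sketches (the service-account name is hypothetical, and `kubectl create token` only arrived in later kubectl releases, 1.24+; older clusters have to hit the subresource directly):

```shell
# Modern kubectl (>= 1.24): mint a short-lived token for a service account.
kubectl create token default -n istio-system

# Older clusters: call the TokenRequest subresource directly;
# the token comes back in .status.token of the JSON response.
kubectl create --raw \
  /api/v1/namespaces/istio-system/serviceaccounts/default/token \
  -f - <<'EOF'
{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest"}
EOF
```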
E: At least in the Makefile I looked at... I forget where I saw it; I think it was from your PR, where you're making the token. So it's that TokenRequest thing via kubectl, and then...
A: The Makefile puts the certificate chain and the certificate in different files, and we just have to combine them together, and that's really clunky. We either need to create targets, like you said, for the JWT token, or a target that combines the certificate chain and the certificate, or we need to get the client library to handle what the Makefile makes. I just want to talk about that at the networking meeting.
F: Right, I think I agree with John's approach, because in the VM onboarding docs we're updating, it used to use the make command to get the certs; now we're moving to the TokenRequest path.
D: I have a question. Yeah, I'm sorry, I didn't keep track of this closely; I couldn't get the token working and then I got sidetracked onto some other stuff. The issue with the key and certs to bootstrap the VM was that, when you generate the key and certs, you can't be just a namespace admin of any namespace.
D: Right, so John, that was my concern with this. If we are falling back to the key-and-certs approach, we're saying that in order for a user to onboard, to run istioctl commands for the data plane in a central istiod environment, the user would somehow have to be able to retrieve the key and certs, which are only accessible to the mesh admin.
A: The cloud provider has to do this and then send the user the output of this command, right? So the user only gets three files, not the files that came from here. So, in these instructions...
A: The first two pieces, this piece and this piece, are things that only the cloud provider can do, and then, once you've done those things, my instructions say to log out of Kubernetes. The stuff from here down you can do without any Kubernetes access at all.
D: The cluster is probably less of an issue, because it's not so unique for each istiod cluster, but the keys and certs are very much unique. Yeah.
A: So I was imagining something like, you know, IBM Cloud: you log in to IBM Cloud and you ask for your Kubernetes cluster config-file junk. That was my mental model. If John Howard's thing is better, I will try it out.
D: Okay, yeah, please let us know. I think the token approach requires less permission to get the token, whereas the keys and certs require much higher permission.
A: Okay, so the next item is the thing we talked about for the roadmap: the unsharded view that's needed for istioctl ps.
A: I forgot to link it here, but do we think that is going to be sufficient to allow istioctl to see all of the endpoints, and to do istioctl ps and wait?
C: I don't know if it's going to be sufficient; there are additional things we probably want to do. It's kind of a fallback, worst-case scenario, because it relies on events from the API server, which are not very reliable. The API server may hold them as long as it needs. The most important reason I added this is that it allows us to report events back to the pod.
C: In a multi-cluster scenario, if you have a pod in a random cluster, you will be able to see all the events, NACKs, and other things associated with that pod.
C: So do we still want to do direct connections? Yes; the problem is that for the reconnection we also have corner cases. So I believe the combination of the events plus direct connections is sufficient for most users. I mean, the debug tool is not supposed to be 100 percent accurate. The system is self-recovering: even if the eventing drops something, we reconnect every 15 or 20 or 30 minutes.
C: I don't remember the current status, but even if the system hits one of the corner cases it will recover, and it will be eventually consistent. It's just that you may have some special cases where you miss some endpoints. That's really the only problem.
C: To be clear, right now the PR doesn't have the sync itself; I'm still working on it. What's important, what you'll be able to see, is that connect and disconnect show up in kubectl describe, and you will see all the connect and disconnect events across all clusters.
C: Okay, so there are two ways to get the status. One is to watch the events, and then you should see in real time when endpoints connect and disconnect, and keep track of them. The other is to actually get this information initially; I added code for istiod to watch the events itself and build the database. That's not yet completed; right now the purpose of the PR is to test it, so we see how bad it is in terms of performance and scalability.
A: The other way to do it, Costin, is to have us do it; UX may not be able to do it as deeply as you want. My thought had been, instead of exposing istiod directly, maybe expose some front end that goes back and queries all of the...
C: So Ed, if we had a way to query all these istiod pods, we would do it; we would not have this problem. The problem is that we do not have a very good way to find them all, because with multi-cluster you have too many scenarios. If you are willing to have some code that connects to all these istiods, just put it in a util package, and each istiod itself can connect.
B: So we've bumped into this problem before, and it seems like, whether we're talking about a pub-sub system that they all use, or connecting directly to all of them, or eventing, the same problem comes into play. The question we're asking is: which istiod instances are in scope for this aggregation? Like, if you're aggregating, let's say, status data, do you take absolutely every instance of istiod that's running in the cluster? Is that enough, or are there other clusters?
A: You saw my wiki instructions to expose this VirtualService for istiod. You could imagine that this destination is a subset of an istiod that is unique to my particular setup. So I'm sort of saying I'm going to expose a subset for Coca-Cola, and I'm going to expose a subset for Pepsi, that kind of thing. I see that as being sort of fine, if I can come up with a way to do it.
C: So if you have a single central cluster, it's fine, it's not a problem. And again, if you have code that is connecting to all these istiods, all I'm asking is to link that code into istiod itself, because besides debugging we also want these istiods to synchronize with each other, to be able to get, in real time, information that the other istiods have.
C: For example, if a NACK happens, we want all the istiods to know that that particular config is bad, so they don't push it. We want all these istiods to know what other istiods exist in the system, what loads they have, and how many endpoints they have, so they can rebalance. We want, when an endpoint connects to any istiod, the other istiods to find out immediately, so we don't have latency. So there are many networking-related features that depend on, or can be improved by, each istiod being able to connect to all the other istiods.
C: That's why, in the simple case, I think your code will work perfectly. If you're only concerned with solving the problem for one central istiod cluster, then it's fine, and it's a perfect start, as long as it's in istiod itself. But if you have multi-region, for example a cluster per region, then how do you discover all the istiods from the other clusters? And you have scenarios where you run istiods on-prem, because you have some for fallback.
A: We pass that in as an argument, maybe through a header, and we use some header routing to get it to the right instance, if it were a single cluster, or to the right data center, or we have a data-center flag on proxy-status. If it's too hard, if it's a general multi-cluster computing problem that we haven't solved, I'm willing to make ps do less.
C: That's another option. If you're okay with restricting ps a bit, then I think the eventing will work perfectly for you, because then you're saying you are okay with the proxy status of all the proxies that checked in during the last half hour.
B: This won't work for central istiod, right? Getting events would presumably mean getting events from the cluster that the control plane is running in, no?
C: No, no, no. The way it is implemented, you get events from the cluster where the pod is running, because you need to associate the event with the pod. So in reality, if you are in a namespace, if you just do kubectl describe on each pod, you will see where each pod, individually, is connected, and that is very reliable. So if you actually know a pod, or the list of pods, you can find exactly where they are connected.
C: Or a pod, yes, the application workload, exactly, because that's a part where there is no problem. You do kubectl describe on the pod; Kubernetes retains the events associated with the pod, so you can see that the pod pulled an image, started, and passed the readiness check. You'll also see that it connected to a particular istiod, and you can actually get the istiod where it's connected.
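Concretely, under this proposal a namespace user would check a workload's control-plane connection with nothing more than a describe (the pod name and the istiod-sourced event lines below are hypothetical, sketching what the PR would emit; the kubelet lines are ordinary Kubernetes events):

```shell
kubectl -n bookinfo describe pod productpage-v1-abc123

# Events:
#   Normal   Pulled     2m  kubelet  Container image pulled
#   Normal   Started    2m  kubelet  Started container istio-proxy
#   Normal   Connected  2m  istiod   Connected to istiod-7d4f in cluster-east  (hypothetical)
#   Warning  NACK       1m  istiod   Rejected listener configuration           (hypothetical)
```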
C: It's very useful, because again, as a user you just need to do kubectl describe on the pod, and you can probably also get the NACKs that that particular pod received. I think that's the most important use case: you query, in your namespace, how all the pods are connected, and where.
C: And what I was saying about restricting ps: right now ps is giving you all the pods across the whole mesh, and that has some security and privacy implications if you are a namespace admin, because with central istiod the premise is kind of that you are splitting. So I think we should restrict the semantics of ps to mean one namespace, and the pods that are currently running.
C: Yes, but it's getting a bit more complicated and complex. If we are going to change the semantics, let's change it to something that's easy to support, because a namespace is a unit of granularity. We can extend it to multiple namespaces as a tenant unit when Kubernetes supports that.
B: Okay, one more question regarding ACKs and NACKs, the way that we're using them here: are NACKs permanent?
E: After it, it's good to go, right? Actually, no, that's not true; we can push partial resources.
C: I think we also need to be very careful about not overwhelming the system. We definitely can send the NACKs to the pod, so you will see that that pod got a NACK. The question is: how do we revert it? How do we delete that event? There is a delete operation, and we can also send an ACK, meaning "hey, everything is okay for this pod," when it is resolved. But we cannot send all the ACKs all the time.
C: Yeah, probably we don't want to delete. We probably want to send an ACK, but only if we previously sent a NACK. That's easy if you are within a connection, because you know that you sent a NACK. But if you reconnect, then we don't have this information. However, if you reconnect, istioctl may be able to interpret the stream to say: a NACK, a connect, and no additional NACK means it's okay.
C: It's not very straightforward, but I think the logic works. Basically, we send a NACK, and within the same connection we keep track that we sent a NACK, and when it is resolved we revert it. But John is right: it's probably per resource, and we probably need to include additional information in the NACK, so the user knows what to do with it.
A: Okay, great, and thanks for coming to the meeting, by the way, Costin; I think this really helps. So, no one has reviewed my PR for verify-install. I think it may be too late for 1.7, but maybe 1.7.1. It's totally needed, especially on Kubernetes 1.18, so I'm hoping someone can review that.
A: The commands that I never use, I marked in orange, and I indicated whether each had documentation. I wanted to see what the people on this call think are the bad commands and the popular commands, so that we can focus our effort on improving the good ones and getting rid of the ones that no one is currently using.
C: Did you color-code or classify them as cluster admin versus pod user?
A: Next week I'm hoping to do that. Next week I have to get back to this work-in-progress item to separate them, and then break these all down by admin versus workload user.
A: So I was hoping that, before we do that, we would look at whether there are any commands we can just get rid of completely.
A: So kube-inject can take local files. Yes, so it could be that the same process that we used to get certs could get those config files too, and give the user a bundle of things, including what they need for kube-inject.
C: Yes, we can definitely return the file over XDS. Since we have a connection to each istiod, we can get them; but we cannot use the Kubernetes API to connect to the system and get them, that's my point. So we need fixes: basically, over XDS we can return them, whatever files we use.
C: And there is also the larger item of whether we want to actually let users specify an injection template in their own namespace, in which case we need to modify the injector to actually inject the files with config, and so give back to the user the control they basically already have with kube-inject, where they can use local files.
C: It was just an example of why it's important to first split them, and to understand the implications of what that means. Okay.
A: You're right, I'll move all this discussion I just typed into the other document; right now I just thought I would go through and ask people first. So yeah, let's go through your list. Everyone loves analyze. People have been asking for this authn check, so I'm bringing it back. Convert-ingress I haven't used since...
A: ...since it graduated; it is mainline, but I think Ram told me people still use it. I have not used it in a year.
C: And also things have changed. At that point we were deprecating, or thinking about deprecating, ingress; now we are supporting it again, and we also support the Kubernetes gateway. John, are you...
E: That's what I mean: it has been supported for quite a while; I just made some improvements to it, and yeah, it's supported. Well, there are two different things: there's Ingress, which has been around for a while, and then there's the new Kubernetes gateway stuff. The new Kubernetes gateway stuff, yes, works on all Kubernetes versions, but you have to install a separate project's CRDs, and it's alpha; no one's using that, so we should basically ignore it for this conversation. Okay, but also...
C: One more thing: we have to follow the process, and the process is deprecate and then remove. Okay.
C: Yeah, dashboard is interesting, because again it doesn't make sense for central istiod, so that's an important one to deprecate and kill. If... well, people seem to love it.
D: No, no, I think dashboard is useful. The whole purpose of dashboard is for people who can't remember the port-forward command. I know you're going to laugh, but I always have to look up the doc to remember those commands for the dashboards. It's really helpful for people to launch this for testing purposes, but...
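For context, `istioctl dashboard` is essentially a memorized port-forward. The manual equivalents it saves users from are roughly the following (service names and ports assume the demo-profile addons in istio-system; adjust for your install):

```shell
# With istioctl:
istioctl dashboard prometheus
istioctl dashboard grafana

# The kubectl commands it stands in for, roughly:
kubectl -n istio-system port-forward svc/prometheus 9090:9090
kubectl -n istio-system port-forward svc/grafana 3000:3000
```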
A: I can deprecate it if you want, yeah.
C: Let me ask you another question: what is the status of authentication for all those things? I mean, do we have JWT authentication, some form of authentication like we added for XDS, if we configure those things behind the gateway and...
E: That's not our concern. Those are user applications now.
B: I do like the idea of making them a little bit more intelligent, though. Like perhaps replacing prometheus with a single telemetry command that does some logic around detecting where our mixerless config is sending its telemetry data, and attempting to port-forward to that, to support setups beyond just the default.
C: My concern here is that, first, it's not a user command, and I think we need to focus mostly on users. Users will not have access to the Prometheus namespace, will not have access to Zipkin, and we are encouraging a very bad practice of requiring debug tools to do high-privilege operations.
C: Even for an admin: if I want to access Prometheus, I shouldn't have to have kubectl access to the Prometheus pod. So I totally agree.
A: Costin, what I want is for dashboard prometheus to still work. So what I'm proposing is that, when the cloud provider sets up your environment with Prometheus, they create some way to get to it, be it a virtual gateway or something, and they put that in a config map that the user can see in the user's own cluster.
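A hypothetical shape for that provider-published config map; every name and field below is invented to illustrate the proposal being floated, not an existing Istio convention:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-addon-endpoints     # hypothetical name
  namespace: default              # visible in the user's own cluster
data:
  # Provider-exposed endpoints the dashboard command could look up
  # instead of port-forwarding into the control-plane cluster.
  prometheus: https://prometheus.mesh.example.com
  grafana: https://grafana.mesh.example.com
```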
C: A config map in each namespace? Again, keep in mind the difference between... maybe there needs to be a notion of "I'm a bookinfo developer," basically.
B: It could be helpful to think of this not as a command that is designed for troubleshooting. This is not the way we would recommend that anyone continuously access their Prometheus system in a production environment. Instead, it's a helper command that is useful when starting off with Istio. It's good as far as it goes, but it's not designed to be used permanently, or for troubleshooting, or anything like that.
B: Those are both valid users of our system, Costin. We definitely need to increase the focus on the user, like the application namespace owner, that sort of thing; increased focus there is good, but removing existing functionality for other users is not how we go about doing that. Right, just add new commands for that particular user type.
A: Our demo used to be, you know, all bookinfo. Now you can bring up Prometheus and Grafana: sure, you're installing the demo profile, you've installed them, they're all there. Now we're going to say, "but we're not going to tell you how to find that stuff; you're going to find it on your own, with kubectl."
C: Sure, but it doesn't have to be an istioctl command. It's a demo of how not to do things, basically. So if you want to have a demo, you should have a big font saying, "this is not something you would do in production," and run as root, and so on. So again, we are focused on demo, not on production, and...
A: I think we're promoting some functionality. Remember we have that new dark-launched istioctl defaults file. It could be that the prometheus command just sets up a port-forward to something that's in that defaults file, so it literally just looks up something on your local file system and acts on it.
C: How would it even be implemented? How do you even know which Prometheus it is? It can be in any namespace; it can be running as another user; it may be in a remote cluster, because that's also possible.

A: I'm suggesting that your cloud provider, when you install Istio, gives you a configuration.
E: I'm pretty fine with keeping this command, but I think what you're proposing is far off and not accurate, because, first of all, a cloud provider may not even provide Prometheus. They may provide 15 different Prometheuses that are doing all sorts of federation and other things, or anything in between. Prometheus is not part of Istio, and we do not dictate how a user connects to Prometheus.
C: How about having an istio demo command, which is, as you said, useful for people who do demos, and then you can have the dashboards there? We keep the code, we keep everything that is demo. And then, for production users and what we want to actually support and tell users of central istiod, we have a clean istioctl, or istio-user, or whatever you want to call it, that is focused on things we can do for central istiod and the managed use cases.
D: And you can debug things if your VirtualServices or DestinationRules don't work? I don't...
B: It largely breaks if... oh, you're saying if the admin port is not on. Well, that's how config_dump works, so all of our troubleshooting tooling is kind of based around that being functional.
A: And I have some items that I want to get to later in the week, or later in the month, about how to make EnvoyFilters based on looking at that config dump, but I think we've spent way too long on dashboard. Okay, keep going: deregister, done.
A: Yep, okay. So, we use help and install all the time. Who needs help? I use help all the time.
B: We have to remember that we asked everybody in the 1.4-to-1.5 time frame to stop using Helm and start using manifests. So let's not move off of it too quickly.
C: Yeah, but now we are going to ask them to move to central istiod, which is kind of what manages it... I don't know. Okay, whatever.
D: Yeah, by the way, from ServiceMeshCon yesterday: apparently there are a lot of users asking about Helm, and a lot of users asking when they can go back to Helm. They don't like using istioctl commands to install.
C: As soon as someone adds a test and moves it forward. I mean, it's working perfectly fine.
A: Okay, in the interest of time I'm not going to go through the experimental commands, unless everybody begs me. Nope. So we will talk about them again next week.
D: Okay, you do have a few commands... upgrade, that's not experimental. All those are experimental too?
D: It is the same. The only difference is that the upgrade command actually gives you warnings if you have certain configuration that may not be meaningful.
A: A lot of the stuff for the checks might have been based on the istio operator, and that PR that I asked people to verify might fix those problems. But anyway, we're at the top of the hour; I'm going to let everybody go. Thank you so much, and we will talk next week.