From YouTube: Istio User Experience Meeting July 21, 2020
Description
1.7 UX items outstanding for July 21 Feature Freeze
Istio.io Documentation pages owned by UX
Review Working doc: UX Requirements for Networking 1.8
Review RFC “Better way to list useable Istio control planes”
A: And I think you're muted.
B: Okay, how about now? Sounds good? Oh good, yeah. That was awkward. So welcome, everyone. The feature freeze is today; there'll be a build cut tonight. Dancing day is next week. I have three PRs that are waiting for a review from this working group that should go in 1.7. Does anyone else have any PRs that need a review from us?
B: Okay, great. The ones I have, I'm hoping people look at: one to flag deprecated and removed types. This was done for the Mixer people. It now also will warn you if you have the CRDs for the old RBAC that no longer works. Yeah.
A: I approved that PR, I think. Looked good. Excellent.
B: So all that's left is the kube-inject PR. Perfect. And that one was tiny; that was just Greg and some networking folks complaining about not seeing an error message. So I just make the error message show up later: sort of save it in a slice and print it at the end.
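The deferred-error approach described here can be sketched in a few lines of Go. This is a minimal illustration of the idea (save warnings in a slice, print them after the normal output), not the actual kube-inject code; all names are invented for the example.

```go
package main

import "fmt"

// processWithDeferredWarnings collects warnings encountered while
// processing items instead of interleaving them with normal output,
// then hands them back so they can be reported at the end -- the
// "save it in a slice and print it at the end" pattern.
func processWithDeferredWarnings(items []string) []string {
	var warnings []string
	for _, it := range items {
		if it == "" {
			warnings = append(warnings, "skipped an empty item")
			continue
		}
		fmt.Println("injected:", it)
	}
	return warnings
}

func main() {
	warnings := processWithDeferredWarnings([]string{"pod-a", "", "pod-b"})
	// Only after all the normal output do the warnings appear.
	for _, w := range warnings {
		fmt.Println("warning:", w)
	}
}
```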
B: Some of our 1.7 items are retargeted for 1.8, because I told Schweitzer they're not getting done. Most of the items that are on this list we share with someone else, and when we have the 1.7 review, I'm going to say that it's a bad practice to have items shared by two working groups, because no one knows whose job it is to assign someone to fix it, or how important it is.
B: I have a... I think this is the analyze one, that I have a PR for. It's the only blocker there. And I didn't put the titles here, but there were a lot that ran with telemetry, and telemetry did not make many of its 1.7 goals, apparently, so probably all of these will be retargeted.
D: Actually, it is done as far as I know, unless you guys need something else there. So the PR is committed and appears to be working.
B: Okay, and yeah, those were some items that we were only really testing; we didn't really do it. So thanks to Environments for actually doing that. Another one, I don't know the status of: is this the one that shamster from Red Hat did, to issue a warning if the installed version is different than the CLI version? Is he on the call?
B: Okay, so I'll have to follow up and see what the status of that one is. And Jason, are you on the call? Yep? What's the deal with this one, line item one?
A: Yeah, so Xiaopong, our intern on the team, is working on that. So I think, yeah, he has a PR for that which Clay and I are reviewing and providing feedback on. So, is it, Marcus, a 1.7 requirement or feature?
B: It was marked as 1.7, I think. So should we retarget it, or should we try to get it done?
A: From my personal opinion, I don't think it's a strong requirement for 1.7. I would mark it as 1.8, because our target is to finish it by the end of the intern season. So yeah, I think it would be too tight a schedule, like I said.
B: Fair enough, thanks. So one item that might interest people, and it's certainly not going to get done for 1.7, it's getting moved: telemetry wants a dashboard for monitoring the Envoy wasm extensions. They don't, I think, have a document.
B: So, as you know, these meetings are all recorded. I wanted to call attention to the fact that I actually uploaded the last ones, which I had not been uploading since corona, and I put them in our little playlist. The way that the Istio channel is organized is that each working group gets a playlist, so that their meetings can all be together, in addition to being under the main Istio channel.
B: So now that the build is over... I mean, our features are frozen and testing is coming, we want to focus on the documentation. So Frank Budinsky, who's the docs lead, has divided up all of the documentation on istio.io by working group and then assigned it, sort of, to us. So we have 33 documents; that's the second highest.
B: So this represents all of the pages that are manually created in our documentation, and what he has set up is pretty interesting: when you click, it brings you to the documentation, and it shows, per page, whether there is a page test. What this means is: is this documentation automatically being tested? So some of our documentation is automatically tested, and the commands that people run are actually in annotations on the markdown, and that is turned into a script that gets run.
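The annotated-markdown idea can be illustrated with a small Go sketch: scan a markdown document for fenced blocks, pull out the command lines, and hand them to whatever emits the test script. This is only a sketch of the concept from the meeting; the `$ ` prefix convention here is an assumption, not istio.io's actual test-framework syntax.

```go
package main

import (
	"fmt"
	"strings"
)

// extractAnnotatedCommands pulls shell commands out of markdown fenced
// blocks, keeping only lines marked as commands (here, a "$ " prefix)
// and dropping the expected-output lines around them.
func extractAnnotatedCommands(markdown string) []string {
	var cmds []string
	inFence := false
	for _, line := range strings.Split(markdown, "\n") {
		switch {
		case strings.HasPrefix(line, "```"):
			inFence = !inFence
		case inFence && strings.HasPrefix(line, "$ "):
			cmds = append(cmds, strings.TrimPrefix(line, "$ "))
		}
	}
	return cmds
}

func main() {
	doc := "Intro text\n```bash\n$ kubectl get pods\nNAME ...\n```\n"
	for _, c := range extractAnnotatedCommands(doc) {
		// each extracted command could be emitted into a generated script
		fmt.Println(c)
	}
}
```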
B: I think every time the documentation gets built, or maybe every time Istio gets built, I'm not sure which. But so, if anyone wants to write automated tests, or learn about this framework for doing it, write some. I promised some people that I would contribute several tests at the beginning of the cycle, and I had not started them yet. So I'm going to be getting into this later on this week, and I encourage everyone who wants to write tests to do that.
B: We tried, together with Liam, to propose an API that was going to be used as a replacement for a lot of the troubleshooting commands, and istiod was going to offer it. We got sort of shot down by Costin in networking, or he agreed, but then it was late. So Mitch and I have started working on this: what we expect networking to sort of do for us. And so the first thing I would encourage is everyone in this group to look and see if the things that we're asking for, to do what we think our tasks are, are what you need for your stuff. I might not be aware of what you feel istiod needs to provide for your stuff. Liam or Mitch, do you want to walk people through what we're already asking, so they get an idea of what stuff they should be asking for?
C: Yeah, sure. So our top requirement is the federated view of xds events, and that is essentially saying that if we're going to treat the xds endpoint over a load balancer as a debug service that we can access, then it needs to respond on behalf of the whole service. Currently, if you submit a debug request over xds events, you will get a response from whichever istiod you happen to land on, and the response will be completely different from one istiod to the next.
C: So we need that to be like an aggregated view from all instances of istiod. This is kind of the implication of having a stateful service, so there's going to need to be some improvements there. I want to warn people that that only affects...
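The federated-view requirement boils down to merging the per-replica debug state into one answer. A minimal Go sketch of that aggregation, under assumed data shapes (the proxy-to-status map is invented for illustration; it is not istiod's actual debug response type):

```go
package main

import "fmt"

// mergeSyncStatus combines the debug view returned by each istiod
// replica into one federated view, so a caller behind a load balancer
// sees every proxy regardless of which replica it happened to land on.
func mergeSyncStatus(perInstance []map[string]string) map[string]string {
	merged := map[string]string{}
	for _, view := range perInstance {
		for proxy, status := range view {
			// each proxy is connected to exactly one replica,
			// so there is no conflict to resolve here
			merged[proxy] = status
		}
	}
	return merged
}

func main() {
	replicaA := map[string]string{"productpage-1": "SYNCED"}
	replicaB := map[string]string{"reviews-1": "STALE"}
	merged := mergeSyncStatus([]map[string]string{replicaA, replicaB})
	fmt.Println(len(merged)) // both proxies visible in one response
}
```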
C: Right, and the way that we do that right now is by port forwarding, which eventually we'd like to stop doing; we'd like to be able to treat this like a service. So there's kind of two reasons there. One is for central istiod support, as I'd mentioned: when you're running in sort of an environment that you don't have access to, port forwarding is not an option.
C: I'm thinking of users like Kiali, who would love to have the output, say, of proxy-status or of istioctl wait, and be able to reflect that in their user interface. They're not going to run an istioctl command under the hood and, like, drop down to bash level to parse what they've got and then bring it back up into the UI. I need a real API to interact with, and so that's kind of the blocker for both of those use cases. I'll move quicker through the others.
C: Our authz for xds events: we have authn, you have to provision a certificate to get xds events, but the authz is unclear. It's not clear, once I've provisioned certificate materials, which events I should have access to and which events I should not have access to. We need a clear story there, so that workload operators can run these APIs without any special permissions to the control plane and get back only results that are relevant to them. Same on the other side, authn: right now, the certificate provisioning is a bit hairy.
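The authz requirement, "get back only results that are relevant to them," amounts to scoping debug results by what the caller's credentials cover. A sketch in Go under invented data shapes (neither the entry map nor the namespace-allowlist is Istio's real API; they are only here to show the filtering idea):

```go
package main

import "fmt"

// filterByNamespace returns only the debug entries a caller is entitled
// to see, given the set of namespaces its credentials cover. Entries map
// a proxy name to its namespace.
func filterByNamespace(entries map[string]string, allowed map[string]bool) map[string]string {
	out := map[string]string{}
	for proxy, ns := range entries {
		if allowed[ns] {
			out[proxy] = ns
		}
	}
	return out
}

func main() {
	entries := map[string]string{"pod-a": "team-1", "pod-b": "team-2"}
	// a workload operator for team-1 sees only team-1's proxies
	visible := filterByNamespace(entries, map[string]bool{"team-1": true})
	fmt.Println(len(visible))
}
```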
C: We also need to be able to list control planes that are present in a given cluster. Networking and Environments have advised us that this is not exhaustively possible, and that's fine; this is sort of a best-effort scenario. But we have two ways that we need to be able to look at this.
C: Let's see... the xds events that we are using right now for the experimental istioctl commands are not documented, and we'd like to see that happen very soon. It sort of feels like an alpha-level interface that could get changed out from underneath us, and before we invest too much more heavily there, we'd like to see a lot more concrete detail, both in documentation and in support guarantees.
C: The way that we're connecting to xds events doesn't make a lot of sense. If you look in the PRs that Ed has done a great job pushing through, the connection parameters are very confusing. We're actually spoofing a proxy: we're giving ourselves a proxy ID as though we were a regular proxy connecting to istiod, which is of course not the case. And so when we go to make this API accessible to our customers, we can't have that. We need connection parameters that actually make sense for the operations they're trying to apply.
C: That's the UX rationalization. What am I missing... oh yeah, the debug authz endpoint needs to have a replacement. I think, Ed, we're looking for a replacement on the xds endpoint, right? We're not looking for the re-addition of the debug API. Correct. So, and this...
B: This is the describe command that Rom and I worked so hard on for 1.4. It used to print all this great stuff about what was TLS and what was not TLS. We need to restore that information that was removed in 1.5.
C: Yeah, so this is what we'd like to see with regard to xds events in 1.8. All of these, or almost all of these, we are not able to accomplish on our own.
B: Well, as we do, when we come back to do our 1.8 commitments, we will make sure that we have the stuff on here to make them happen this time.
C: Yeah, and I would say, as we progress through 1.8 design and development, if we do take dependencies on networking that are not on this list, please raise them to our attention early, so that we can make sure those are tracked as a part of the official release and have visibility. That's mostly on me for not doing that earlier in this release, and I think it has resulted in some difficulty for all of us. So I think we can make that better this next time.
B: Okay, the phone shouldn't ring anymore. So I wanted to get some early feedback on a better way to list Istio control planes. We had promised to do this for 1.7; we did not. We were confused about what was our job, what was Environments', what was Networking's. So I created this RFC, and I'm going to get everyone, all those groups, to sort of agree on it. It has two pieces: a sort of user side, and a local control plane administration side.
B: So most users who are using Istio today are installing it themselves with istioctl install. When they do that, they're allowed to have multiple installations of control planes, such as canaries, but they might forget what they have installed.
B: Okay, so when we list our control planes... So first, I installed four control planes myself using istioctl install. I did a standard control plane with very few parameters. I did a control plane as a canary.
B: I did a control plane as being a multi-cluster client for a remote plane, and I did an install of a control plane with central istiod administration. So everything that you see here was done using the existing commands. And you may not know, but every time we create a cluster... So we have two ways to install. One is, you've installed the operator, the standalone operator for Istio; you make an IstioOperator resource and you apply it.
B: The operator then stands up the control plane for you. The other is, if you install with the command line, we install everything, and then, in addition to installing everything, we write out the IstioOperator that we just installed, like a bill of materials. In theory, you could have the actual operator take over: you can install the real operator and it would start trying to keep things in sync, although I don't know how well that works.
B: So this is great; it tells you what you have installed. It doesn't tell you if anyone's using those installations. So what I tried to add in this command is the following: the status field on the IstioOperator CR. This is currently only set if you're running the operator pod deployment to manage it, and that tells you if it's in the process of installing, or finished.
B
If
you
just
install
this
your
cuddle,
you
just
get
an
unknown
here,
a
list
of
how
many
pods
are
in
istio
d
for
this
setup
and
that's
of
course
the
pilot
can
only
be
counted
if
it's
on
your
local
cluster
and
you
have
access
and
permission
to
look
if
you're
running
on
a
central
sud
that
same
column
is
listing
the
endpoint
to
central
stod.
B
I
just
thought
that
was
cool
to
reuse.
That
column.
The
version
would
normally
say
one
seven,
although
this
was
a
private
build
for
me.
B
But
then
I
added
the
number
of
proxies
that
are
using
that
control
plane
and
that's
the
same
information
you
get
now
from
istio
cuddle
virgin
in
theory.
You
could
run
this
your
color
version
on
each
revision
here,
but
this
brings
it
all
up
in
one
piece
which
is
nice
and
then
gives
you
the
injector
label
that
a
user
would
use
if
they
wanted
this
particular
control
plane
to
inject.
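The per-revision proxy count being proposed can be sketched as a simple grouping: given which revision each proxy is connected to, count proxies per control plane. This is an illustrative Go sketch under assumed data shapes, not the listing command's implementation; the revision names are examples.

```go
package main

import "fmt"

// proxiesPerRevision counts, for each control-plane revision, how many
// proxies are connected to it -- the number the proposed listing
// command would show alongside each installed control plane.
func proxiesPerRevision(proxyRevisions map[string]string) map[string]int {
	counts := map[string]int{}
	for _, rev := range proxyRevisions {
		counts[rev]++
	}
	return counts
}

func main() {
	proxies := map[string]string{
		"productpage-1": "default",
		"reviews-1":     "default",
		"ratings-1":     "canary",
	}
	// e.g. the default revision serves 2 proxies, the canary serves 1
	fmt.Println(proxiesPerRevision(proxies))
}
```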
B: So I have questions, sort of, about that. The other question I have is about getting the information in terms of version and number of proxies. These two items required me, in the current implementation, to go out to the insecure xds port.
B
So
what
carson
has
been
doing
is
he's
been
having
1512
offer
xds
securely
with
certificates
and
1510
offer
xds
to
anyone
who
cares
to
connect,
and
when
I
saw
istio
with
when
I
installed
two
control
planes,
each
one
would
have
different
security,
so
I
didn't
do
the
work
that
would
be
needed
to
use
the
secure
channel
and
I
don't
know
if
the
insecure
channel
is
going
to
be
enabled
by
default,
and
I
certainly
think
it'd
be
a
mistake
to
base
this
on
that
insecure
channel.
B: Istiod shows up here, but the number of proxies maybe is going to be unknown, because maybe central istiod won't tell you how many proxies there are; or maybe it will be known, because sync status does work against central istiod. So I think that it might actually be able to say the true number of proxies for central istiod. So let's say you installed Istio from IBM in a central way, and, say, Microsoft or Red Hat came up with a version, with different injector labels.
B
If
you
wanted
ibm
to
manage
your
namespace
or
microspace,
I
think
this
would
all
work
minus
the
security
stuff,
because
I
haven't
implemented
how
to
select
which
certificate
bundle
to
use.
B
So
when
I
was
doing
that,
I
realized
that
most
of
those
columns
only
are
of
interest
to
the
administrator.
So
I
proposed
that-
and
I've
implemented
it
now,
just
as
a
flag
on
the
admin
command.
But
I
think
that
we
would
want
a
client
based
version
that
just
tells
you
what
injector
label
to
use
for
all
of
your
control
planes.
B
So
this
is
pretty
nice.
If
you
want
default
to
manage
your
control
plane,
you
said:
injector
label
just
label
your
namespace
with
this.
If
you
want
your
canary
or
whatever,
you
would
just
label
it
with
that.
This
should
make
it
really
simple.
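The "just label your namespace" flow can be shown concretely. `istio.io/rev` is Istio's revision label and `istio-injection=enabled` its default injection label; the helper wrapping them is an illustrative sketch, not istioctl code.

```go
package main

import "fmt"

// revisionLabel builds the namespace label that selects which control
// plane revision injects sidecars into that namespace.
func revisionLabel(revision string) map[string]string {
	if revision == "" || revision == "default" {
		// the default control plane is selected with the plain injection label
		return map[string]string{"istio-injection": "enabled"}
	}
	// any other revision (e.g. a canary) is selected by name
	return map[string]string{"istio.io/rev": revision}
}

func main() {
	fmt.Println(revisionLabel("canary"))  // label a namespace to use the canary control plane
	fmt.Println(revisionLabel("default")) // or the default one
}
```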
B: So the code is not interesting, but I think that we could easily have this, as long as we sort of have... as long as Costin likes it, or maybe Environments and us together, on this idea that it's going to be based around the IstioOperator custom resource. And if we can nail down how security will work, to actually contact xds for more than one control plane at a time, I think we have smooth sailing for this item.
B
So
I
I
some
people
have
already
were
told
about
this
before
this
meeting.
Many
of
you.
This
is
the
first
time
you're
hearing
about
it.
Please
add
comments
to
this
document
so
that
we
can
make
this
be
one
of
our
features
for
one
eight.
B: Very cool. Ed, thank you; Mitch, but others have been very silent during this meeting. Maybe everyone's heads are down finishing up their 1.7 PRs. Does anyone have any items for this week, or proposals for things we should do next week?
B
Does
anyone
want
to
hear
me
beg
them
to
do
automated
testing
for
the
items
on
our
our
working
group
owns.