From YouTube: Istio User Experience working group meeting May 5 2020
Description
Istio User Experience working group meeting held May 5 2020
A: So hello, everyone! The big item is our 1.7 roadmap; I want everyone to sort of agree to that, or make me stop before I've gone too far. Also, I want to get any feedback on the sidecar bootstrap command - that comes out of some work on the PR that Shriram brought in from IBM on bringing VMs into the mesh - and then I want to talk about the testing. I don't think there'll be any time for status updates, except for the fact that that is one of the things that's a P0.
A: We agreed last meeting that for 1.7 we want to focus on this troubleshooting API; it's gonna fix so many problems we have now with security and multi cluster, and we're gonna continue to harden the stuff with dual control planes. So there was a meeting some of us had just to get this under control - myself, Mitch, and Liam - on the troubleshooting API. I've now made a work item for that, and I'm hoping we make it P0 for 1.7.
A: Currently that issue just points to the architecture and API that was proposed last year, so let me just link to that to remind everyone what we're talking about. So istiod supports this debug interface for things like proxy status - are other proxies acting correctly - that we use for some of our subcommands. So last year we had proposals from Romain for this API.
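(For context, a minimal sketch of what those subcommands rely on today, assuming a default istio-system install and istiod's usual 8080 debug port; the endpoint shown is illustrative:)

    # istioctl proxy-status summarizes per-proxy sync state gathered from istiod's debug interface
    istioctl proxy-status

    # Roughly the same data can be pulled by hand from a single istiod instance:
    kubectl -n istio-system port-forward deploy/istiod 8080 &
    curl -s localhost:8080/debug/syncz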
B: ...each other, and that's sort of doing it on a different path. I think those two contradict each other, unless we're gonna do it on a different path, right, because the current API doesn't. Or do you just mean - do you mean backwards compatibility there, or do you mean it should cover the same use cases?
A: But we want to be able to do these shorter ones, because of - because of exactly that - and we don't want to do that when we're expecting, you know, that istiod is running in another cluster or it's running on a VM. We can't exec onto all the instances; all we have is an endpoint, so we can't use any strategy that's not...
A
It
has
charted
results.
We
want
this
config
down
functionality
so
that
there's
no
need
to
executive
sidecars
to
improve
our
security,
and
we
have
to
you
have
to
work
and
the
caveat
is
we
also?
That
means
we're
going
to
be
able
not
only
to
implement
all
this,
but
there's
a
little
piece
that
the
injector
will
need
to
do
as
well,
because
right
now
the
injector
just
tells
what
revision
is
a
pod
has
been
ejected
by
it.
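(A quick way to see that today, assuming the injector stamps injected pods with the istio.io/rev label as described; the namespace is just an example:)

    # Show which revision's injector handled each workload pod
    kubectl get pods -n default -L istio.io/rev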
A: Or it could be attached to the IstioOperator resource that we now write out, or it could be attached to the pod, but someone needs to give us a declarative endpoint that we can talk to. So, in addition to agreeing on this API, we are gonna need an implementation of it; I figure we're coding for that as well.
A: We have to agree about security, and I wanted to ask about how this would work in multi cluster. So if we're in a single cluster, it makes great sense to put this somewhere in the Kubernetes API and expose it that way. If we are running multi cluster - and we are not the control plane cluster - do we want... I think we want this to work even if we don't have access into the cluster that's running the control plane.
C: It sounds like you're talking about a fallback plan for exposing the API, in which case I think we would more or less just kick that to the user. We create a service that no one has access to and tell them that if they want to configure access, they have all of the tools of Istio - authn and authz - at their disposal.
D: My issue with this proposal from the beginning is, you know, it's from the UX working group, but I have still not seen - I would love to see - a very clear path of what exactly the change to the user experience will be. We keep focusing on the implementation details, like gRPC or whatever, but at the end of the day that's all worthless if we don't know what's changing for the user, and I personally doubt that we need it.
D: If we make the claim that we're doing all this work for a security change, we need to be very explicit on what the security requirements are, because you just said, like, maybe we'll let them stand up Istio RBAC, maybe Kubernetes RBAC. I mean, if the motivation for this is security, then we need to be very clear on what the security model of this is, yeah.
C: That's a good point: there's not a lot in terms of multi cluster diagnostics that don't work today that would work with v1 of this implementation. Specifically, the part that's hard from multi cluster is contacting proxies. There is a design present in Romain's original document, involving communication between the proxies and pilot, that I think has some value, but it's going to take more than this release to accomplish. Does that make sense? So we do...
E: We do have more room - we can have features that span multiple releases. I mean, generally we break - like we've broken them up in the past into analyze and then implement, phase one and phase two. We did this with the build system, which is still, like, in implementation, and we're almost done, hopefully today, I don't know, but I think that is an example that can be followed. It doesn't have to be all done in 1.7; just bite off the part you feel you can handle.
A: We support port-forward for Kubernetes, and then you use something like this to talk to it; and if it's VMs, you can have some kind of tunnel; or if you're running Istio as a service and it's just not running in the same cluster, they don't give you access to it - something like this, where you set up maybe a reflector in your cluster that you manage, and you port-forward to that, and that reflector knows how to talk to this debug API, in the same way that a regular pod knows how to talk to XDS.
A: So I had drawn some pictures up and sort of showed some others, just imagining a hack - doing this without any changes, right, so you couldn't touch istiod at all; you were just trying to make this happen on 1.6. You could sort of have a pod running in your cluster that you could exec into; you could have a pod running in the cluster with the remote control plane that sort of does the stepping through all the shards. This is fairly simple.
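(A minimal sketch of that kind of hack, assuming istiod pods carry the app=istiod label, expose the debug interface on 8080, and have curl available in the image; purely illustrative:)

    # From a context that can reach the control plane cluster, step through
    # every istiod "shard" and pull its sync status.
    for pod in $(kubectl get pods -n istio-system -l app=istiod -o name); do
      kubectl exec -n istio-system "$pod" -- curl -s localhost:8080/debug/syncz
    done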
A: If you trust this link, you could also - if you had some kind of all-shards endpoint, where these guys were already sort of doing the stepping through the cluster on their own - that would sort of work. So I imagined, you know, doing this as a proof of concept in a small amount of code, and then, when we - if it was... but then I talked to Liam, and when I talked to everyone else, they said, well, we should do it correctly.
A: You mean like revisions, or what? So we wrote out - whenever we did the install, we wrote out the IstioOperator when we did the install. So when you remove, probably you're removing - I mean, today or not, but if we put this in 1.7: when you installed 1.6, you've got this operator there. So what we're removing is essentially one of those things that we previously installed.
A: Okay, that sounds good. I had done an implementation of this for 1.5, and it didn't work, because in 1.5 the first thing we did was create the namespace, and so the first thing we deleted was the namespace, and then everything else was already deleted and it was complaining, and then it deleted the CRDs and then it messed up multiple control planes. So a lot of this is about making the experience not be surprising if you have two control planes and remove one of them, and you're gonna...
A
Do
that
a
lot
right
for
rolling
back
and
probably
for
canarian
stuff
so
and
what
we
haven't
talked
about
and
could
is,
if
you
like
your
canary,
are
you
supposed
to
then
upgrade
your
main
control
plane
with
the
same
parameters
as
the
canary
or
you
sort
of
say,
the
canary?
My
new
master
and
delete
the
old
master
people's
opinions
are
on
that
from
the
X
point
of
view,.
A: It's related, right? So if you're doing canarying of your control plane, you've done the install of the master, you've done the install of the canary, you've tested on the canary; now you want to delete something. Do you want to delete the master and make the canary the master? Do you want to...
C: Maybe we have a task-oriented guide that says, you know, step 1: relabel the master to legacy; step 2: relabel the canary to master; step 3: delete legacy - or something along those lines, where they have a lot of control of what's going on, because with big changes like this, you don't want it happening automagically.
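(A hypothetical sketch of that kind of task-oriented flow, assuming revision-labeled namespaces; the revision names, namespace, and the cleanup step are examples only, not an agreed design:)

    # 1. Point the workload namespace at the canary revision and restart workloads
    kubectl label namespace default istio.io/rev=canary --overwrite
    kubectl rollout restart deployment -n default

    # 2. Once everything runs against the canary, delete the old control plane,
    #    e.g. by regenerating its manifest (one possible approach)
    istioctl manifest generate --set revision=legacy | kubectl delete -f -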
A: Currently we don't have any logic for that, but we could - we could do this logic; it should not be impossible. And I'm not sure which group should be doing the design of this. I can design how I think it might work, but we should probably iterate on it a few times over the coming meetings, and we need to get Costin to agree to it, because when I talked to him yesterday on Slack he had different opinions than I had about some things.
A: You know, which is - which is a waste. If you supply the pod argument, we should ask just the one; that's pretty easy, and we need to do that for the troubleshooting API, so I've assigned myself that item and made it a P0, because Costin was very adamant yesterday that it should work that way. I wanted to talk a little bit about this.
A
So
the
idea
is
we:
if
we
had
a
lot
of
control
plants
running
like
every
tenant
has
their
own
control
plane
or
we
have
some
failed
canary
experiments
running,
can
be
a
real
hassle
to
ask
all
of
them
when
we're
waiting,
because
you
don't
want
to
wait
for
what
you
sort
of
need.
Wait
works
great
if
there's
a
single
control,
plane,
waiting,
Brooks
good
enough,
there's
just
two
and
they're,
both
in
the
same
cluster.
If.
A: If they're not in the same cluster, we don't want to ask - we don't want to inventory all the control planes by asking every pod "who injected you?"; we need to know what the control planes are for wait. And then I had this thought: suppose we're running a client in the canary and a server in the master; you apply a destination rule, and you have a script to wait, to test it...
A: So currently it just works, and it also takes the revision, so if you know what you're doing you can get slightly better performance. But what I had learned yesterday, and didn't know, was that now there's a way to make destination rules apply only to one control plane. It would just be good to have an understanding of what this should do - what the user expects it to do.
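(For reference, a rough sketch of the wait flow under discussion, assuming istioctl's experimental wait command; the destination rule name is an example, and the revision flag spelling is assumed from "it also takes the revision":)

    # Block until the destination rule has been distributed to proxies
    istioctl experimental wait --for=distribution destinationrule bookinfo-ratings.default

    # Restrict the wait to a single control plane
    istioctl experimental wait --revision canary --for=distribution destinationrule bookinfo-ratings.default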
A: We can ignore that, because I don't think that's important - maybe it is; it really surprised me - but the thing is that when you apply a destination rule you don't have to wait for your master to see it, because only the client needs to know about destination rules. This shouldn't be a problem; there are only a few istiods that we need to ask about this.
A
The
problem,
I
think,
is:
if
one
of
your
control
planes
has
gone,
has
seized
up
its
timing
out
that
kind
of
thing:
if
a
control
plane
is
bad
and
some
pod
uses
it,
then
it's
bad
to
wait
for
it.
So
we
want
to
make
sure
that
it's
at
least
hard
enough
that
the
command
is
hardened
enough,
that,
if
we're
using
weight
that
we
don't
give
up,
because
some
failed
canary
is
keeping
us
from
noticing
the
things
that
don't
require
that
failed
canary.
C: Yes, all right - I see the revision flag is present. That seems like a great way to avoid waiting on a canary. As far as the sort of intelligence where a destination rule would only wait for control planes that have clients that will leverage that destination rule - wait is not a smart system like that. It doesn't know whether a particular config is relevant for clients or relevant for servers. It simply says: has the config, at the version this destination rule was created at, been distributed to each proxy or not? I agree.
A
The
only
reason
I
bring
this
up
is
that
Costin
and
I
had
a
disagreement
about
what
happens.
If
you
leave
revision
off.
If
you
apply
revision,
it
works
exactly
as
we
want.
If
you
leave
it
off,
it
asks
all
the
control
planes
and
the
only
question
is
is
that
we
just
need
to
say:
yes,
that's
our
that's
our
opinionated
viewpoint
that
easily
revision
off,
we
always
ask
all
control
planes
and-
or
we
have
to
say
no-
that
we
find
the
correct
control
plane.
A: ...SSHing onto a VM and running docker to run a sidecar there. I encourage everyone to sort of look at that and review it. I have reviewed it several times; there were several strong concerns. My hope is that we either like it or don't. I know Shriram likes it, and it comes from someone who hasn't contributed before, that I know of, so we want to get this person to enjoy the experience of working with us. Her name is - "security insanity"? That's not a real name; I forgot what it is, it starts with something like that.
C: What I would like is - we've got a pull request out for this doc that should be ready in time for the community testing day on - what is that, Monday? I'd like to wait until that doc is fully available and approved, so that we can follow those steps and make sure... like, if you're following steps that I give you interactively, that's sort of cheating. Does that make sense? That doesn't make...
A: ...istioctl manifest apply --set revision=canary, and it should install one. Note that it didn't say "are you sure?" - that's because it never says "are you sure" if you do a --set. I'm not sure if that's still the correct behavior, but that's how it works now. So we can see John Howard's animated UI, which is awesome.
A: "Installation complete." Now, when I do the get on the IstioOperator, we can see that there are two Istios; we can see the revision label right here. This gives you an idea of what you can supply to the --revision flag in istioctl. So if I look at the pods, we can see that I'm running two istiods, and we're only running one gateway, which is, I believe, how we do it - I'm not sure if that's right, but it's always been that way for me; I haven't done a lot of actual testing of the gateway in the canary.
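(A rough reconstruction of the demo commands being shown, assuming an istio-system install; nothing here is captured verbatim from the recording:)

    # Install a second control plane under the "canary" revision
    istioctl manifest apply --set revision=canary

    # Two IstioOperator resources are now recorded, one per revision,
    # each carrying its revision label
    kubectl get istiooperator -n istio-system

    # Two istiods are running; only one gateway is deployed
    kubectl get pods -n istio-system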
A: We should see all of the Bookinfo pods here. Currently it does not tell you - and maybe it should - which revision reported this information; I should ask about that. Do we think it should? We didn't make any of these commands take a -o to supply different output formats; do we want it to show the revision here? I don't know. But you can see that the number of responses to proxy-status is 15. If I do this with revision canary, it should show only the canary ones.
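(The commands in question, roughly as demoed; the --revision filtering behavior is exactly what is being debated here:)

    # All proxies known to every control plane (15 entries in the demo)
    istioctl proxy-status

    # Intended to show only proxies attached to the canary control plane
    istioctl proxy-status --revision canary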
A
It
shows
master
ones,
so
is
this
the
behavior
we
want
to
so
I
think
if
you'd
leave
off
your
vision,
showing
all
the
revisions,
it's
great
I
think
if
you
are
only
debugging
a
particular
control,
plane,
I
think
that's
great
I'm,
not
super
excited
about
having
to
put
revision
default
in
because
I
wanted
to
sort
of
match
what
you
see
when
you
do
this
to
you
operator,
which
doesn't
put
a
revision
here.
What
do
we
think
of
this
behavior
and
has
anyone
else
tried?
It.
A: It's easy to fix. That "default" thing caught me by surprise, because early versions had simply not put a value in the revision label on the master, and I had sort of asked for that so that I could distinguish them easily with a single query - that was done nicely by the environments people - but then they started using the word "default" wherever it was blank, and I'd simply never revised my work to handle that. Should it be consistent? Yes, it should be consistent.
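(The kind of single query being referred to, assuming the master now carries istio.io/rev=default while the canary carries its revision name:)

    # Separate the control planes by revision label in one query
    kubectl get pods -n istio-system -L istio.io/rev

    # Or select only the canary's pods
    kubectl get pods -n istio-system -l istio.io/rev=canary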
A: Verify-install - if you do it like this, it checks to see all the resources: are they still there, did you delete them, are they running or not, this kind of thing. And if you do that with revision canary, it will check the same ones, and it does that by sort of going to the cluster and getting that previously installed bill of materials, running the generation code that produced this list again, and then running the code that we had in previous versions of Istio.
A
Remember
in
earlier
versions
of
this,
do
you
were
expected
when
you
installed
to
save
a
copy
of
the
mo
manifest
you
installed
with
to
use
this
verify,
install
command,
which
was
a
huge
pain
because
everybody
forgot
it
that
or
what
options
they
had
used
when
they
installed?
So
this
is
supposed
to
be
a
big
improvement.
All
you
need
to
do
is
tell
it
your
revision
name
and
it
just
sort
of
works,
but
there
may
be
some
some
holes
in
the
difference
between
the
empty
and
the
revision
field.
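(The old and new flows as described; the exact flag spellings here are assumptions, not confirmed from the recording:)

    # Old flow: you had to keep the manifest you installed from
    istioctl manifest generate --set revision=canary > canary-manifest.yaml
    istioctl verify-install -f canary-manifest.yaml

    # New flow being demoed: the revision name alone is enough
    istioctl verify-install --revision canary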
A: Maybe there should be, but we sort of designed and implemented this, and then, while we did that, the way these manifests worked and the way the templates worked was changing. So there may be some things that either we fix right before the release or we don't fix; I think they're going to be very small.
A: Yes - we, you and I, or possibly you and I and Liam, should have some kind of proposal that we should get Costin to put his checkbox on. So what we need to do is maybe - so the work item that I had proposed was to make this document and get it approved. But even before that, we have a document that we need to get approved.