From YouTube: Istio User Experience working group, June 2, 2020
Description: Istio User Experience working group meeting held June 2, 2020
A: So I had two items to talk about this week. Both are related to central Istiod and the refactoring for when the control plane is not present. Let me go through those. I wrote them up as works in progress, but since that's what I have, I don't know if I should mark them as in review; let's mark them in review and talk about them.
A: Do we have Costin here? I see no, so this won't be a full review. This will be to make sure this document is the right thing, for when we get Costin to agree with it. When we talked about central Istiod, Costin made the point that, well, some of our commands need access to the control plane and some do not.
A: When you install or upgrade a control plane, you need access to the control plane; when you do something like kube-inject, you do not. I had done this analysis of which commands talk to the control plane, but I hadn't explained the reasons or what they are. So I just made a simple table breaking down all of the istioctl commands by the expected user privilege.
A: These are the commands that admins use; it's one page, and the functionality is all over the board. So there's controlz, which opens an admin dashboard; install, which is a main command; manifest apply, which is hiding under manifest; and manifest diff, which doesn't really require permissions but is useless without them.
A: So the first big question, and I don't know if we have Stephen here; he was really pushing us to put things into roles or personas, which is very similar to kubectl, and he wants to break things down. We don't have a way to indicate what these commands are. We should certainly add something to the short description for commands like operator init; obviously, that's not something a user normally does.
A: kubectl does not; there's nothing in there that knows about your role, except impersonating a user. I mean, there are commands to give yourself fewer roles, so as the admin of my clusters I can impersonate a different user or a service account, but there's nothing that makes the help text hide things from me based on what I would be allowed to do.
B: I'm thinking we need a config to hide all these admin commands, especially since, as cloud operators, we don't expose install or upgrade, and we don't want people using controlz, because they can't; having a bunch of commands you can't touch is frustrating. So either at compile time we generate a special user istioctl, or there's some kind of flag or environment variable for this. I think it would help; it'd be frustrating to see that there's a manifest migrate that you can't use.
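A minimal sketch of what that environment-variable approach could look like, assuming istioctl's cobra command tree; the variable name and the list of admin commands below are illustrative assumptions, not from any actual PR:

```go
// Hypothetical sketch: hide admin-only commands when an environment
// variable is set. istioctl is built on cobra; the variable name
// ISTIOCTL_HIDE_ADMIN_COMMANDS and the command list are assumptions.
package main

import (
	"os"

	"github.com/spf13/cobra"
)

// adminCommands is an illustrative set of commands a cloud operator
// might not want to expose to end users.
var adminCommands = map[string]bool{
	"install":  true,
	"upgrade":  true,
	"manifest": true,
	"operator": true,
}

// hideAdminCommands marks admin commands Hidden so they disappear from
// help output; an operator-built istioctl could instead remove them.
func hideAdminCommands(root *cobra.Command) {
	if os.Getenv("ISTIOCTL_HIDE_ADMIN_COMMANDS") == "" {
		return
	}
	for _, c := range root.Commands() {
		if adminCommands[c.Name()] {
			c.Hidden = true
		}
	}
}
```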
A: This work was done, again, because I'm terrified that we're not going to get the central Istiod stuff that has to be done for 1.7. So here I'm creating my analysis of what the current commands that talk to the control plane are. These are the commands that are going to break under central Istiod until we fix them. The good news is I've been working on proxy-status, and I have a PR for most of proxy-status using the new way.
A: Talking about the changes, old versus new: in 1.6 we have this debug endpoint, syncz, that we use for proxy-status, and in 1.7 we have these new replacements. So where for 1.6 we're doing an HTTP GET of this endpoint, we're going to do gRPC, or at least some combination of connections, NACKs, and maybe ACKs. Connections is already working in Istiod; NACKs may or may not be (Costin says it is, but I couldn't make it work); and ACKs are not there yet for proxy-status.
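For reference, the 1.6-era flow is just an HTTP GET against Istiod's debug port; a minimal sketch, assuming the debug port (8080) has been port-forwarded to localhost:

```go
// Sketch of the 1.6-era approach: plain HTTP GET of Istiod's syncz debug
// endpoint. Assumes `kubectl port-forward -n istio-system deploy/istiod
// 8080:8080` is running; istioctl performs the equivalent internally.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8080/debug/syncz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The body is a JSON array with one entry per connected proxy,
	// which proxy-status formats into its table.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```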
A: So I didn't have a chance to hold his feet to the fire, but we should do that. And then of course we have the describe command, which currently does not use a special endpoint, but until recently it used the authenticationz endpoint. That was taken out, because the authenticationz endpoint from Istiod is no longer correct; it was returning incorrect information. So we, probably me, need to come up with an XDS event for that as well, with a similar effect.
A: And of course, if we have any new commands, they might also need this, but that's what I have now. The other problem, of course, is that this stuff talks about the data format once the connection has been established; there still seem to be questions about how that connection is going to be established. I posted this slide to Networking this morning and haven't got an answer. So when istioctl wants to find out about something like proxy-status, I believe Costin told me that I could talk to the sleep pod sidecar, and have the sleep pod sidecar do an additional XDS request on the connection it's already been using to talk to the control plane; I should not attempt to do a direct connection from my client into the control plane the same way that the pod does. Although I could do this, and I've got the code, I don't believe the code exists for the sleep pod to forward a request.
A: So those are the items about how this might work. Of course, we want to move away from this to a debug API that is RESTful, and maybe more like what a script could use, with simple JSON. But I think underlying that is going to be this XDS event mechanism, which is why we've been digging so deeply into it. A lot of this is around central Istiod, and I think a few of us are still not exactly clear on specifically what that means.
C: So basically, in 1.6 we introduced the central Istiod flag. We also introduced the istiod-remote chart. The notion is, if you look at the multi-cluster model we have today, we're not really running a single control plane. If you ever follow through the multi-cluster shared control plane document, you will realize we're running not only Istiod in the first cluster; we are also running Istiod in the second cluster. So central Istiod is really about extracting that. There is no Istiod functionality on the remote cluster, so it's going to rely on the Istiod of the left cluster to serve as pilot: to be able to do XDS serving, to push the configuration to the sleep pods and whatever pods are in the mesh; to be able to do the Citadel function, to mint the certificates and distribute the root certificate; and also to be able to do sidecar injection and all of that via webhook configuration. So on the right-side cluster, the configuration is really lightweight. You would have the base chart.
C: If you are familiar with the Helm charts we have: you would have the base chart, which installs a bunch of CRDs, and it inserts a bunch of endpoints and services that need to point back to the first cluster on the left side. It also installs the istiod-remote chart, and what that does is configure the validation webhook and also the mutating webhook.
C: And if you scroll down just slightly, there's another diagram, which I think IBM is really interested in. It's similar to this diagram, but you don't run workloads on the first cluster; you only run workloads on the second cluster. So the first cluster serves as the main management cluster: it runs Istiod, and it would have, say, a mesh admin administering the cluster. The second cluster would then focus on the users' workloads and everything.
C: In this diagram, the user would only interact with the second cluster, and for the istioctl commands Ed was showing, the user would be given the second cluster, on the right side, and they would continue executing their istioctl commands there to help them do things.
C: Yeah, sure. So, back to your proposal: I do think it makes sense to separate the istioctl commands between the admin commands and the user-focused commands. And for some of the commands that focus on installation, I don't even know if it's appropriate for them to be part of istioctl at all, or whether they definitely need to be admin commands. Because if I look at kubectl, the installation of Kubernetes itself is not part of kubectl, and kubectl actually does an interesting thing.
C: They group the commands into basic commands for beginners and basic commands for intermediate users. Then they also group some of the advanced commands, and they made it clear that the cluster management commands are for the admin of the Kubernetes cluster. So they have some type of separation based on the level of knowledge people are expected to have of kube usage, and also on whether it's a management command versus a user command. So I think that's a good move for us.
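A sketch of what kubectl-style grouping could look like for istioctl, using cobra (which istioctl already uses); the group names and command assignments here are illustrative assumptions:

```go
// Hypothetical sketch of kubectl-style command grouping for istioctl help
// output. The groups and command assignments are illustrative only.
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "istioctl"}

	// Tag each subcommand with a group, the way kubectl sections its help
	// into "Basic Commands (Beginner)", "Cluster Management Commands", etc.
	groups := []struct {
		name string
		cmds []*cobra.Command
	}{
		{"Basic Commands (User)", []*cobra.Command{
			{Use: "proxy-status", Short: "Retrieve proxy sync status"},
			{Use: "describe", Short: "Describe Istio config for a pod"},
		}},
		{"Mesh Management Commands (Admin)", []*cobra.Command{
			{Use: "install", Short: "Install the Istio control plane"},
			{Use: "upgrade", Short: "Upgrade the Istio control plane"},
		}},
	}
	for _, g := range groups {
		for _, c := range g.cmds {
			c.Annotations = map[string]string{"group": g.name}
			root.AddCommand(c)
		}
	}

	// Render help sectioned by group instead of cobra's flat listing.
	root.SetHelpFunc(func(cmd *cobra.Command, args []string) {
		for _, g := range groups {
			fmt.Printf("%s:\n", g.name)
			for _, c := range cmd.Commands() {
				if c.Annotations["group"] == g.name {
					fmt.Printf("  %-14s %s\n", c.Name(), c.Short)
				}
			}
			fmt.Println()
		}
	})
	_ = root.Execute()
}
```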
C: It's just one of many, yeah. So this is definitely where people want to move for a single control plane; the first diagram is where we want to move for a single control plane. Certainly a user could run multiple control planes, with each control plane having its own Istiod; that's kind of like the replicated control plane model for multi-cluster. That model still exists, but this is what we want to move the single control plane model to, and it also provides a model for people who want to run the Istio control plane in a management cluster.
A: The PR does a single command, proxy-status, for all the pods. Currently it talks directly to Istiod, versus another pod; the istioctl talks to localhost. So it's talking sort of directly to Istiod, but Costin says, well, you can port-forward to a workload pod and have it ask on your behalf. So let me just show what I have, to give people a taste of this. We don't want this version to be the final way to do it; this is heavy-duty. It took me more than eight hours to write this PR.
A: Before the code, let's just show what the output is. What this PR does is add a second proxy-status command that's experimental, which has a few different options; here is the endpoint option, which is sort of how to talk to Istiod. It's not going to use port-forward, walking through all of the Istiod pods and coalescing; it's going to talk to a single Istiod.
A: The output is somewhat the same. I dropped the version column, which got too big, and I haven't yet implemented the actual ACK and NACK here. The only thing new is this mesh field. It's a new column that says which mesh each of these pods is in, because of course there can be more than one pod talking to the same central Istiod, from more than one mesh. Probably this should be called mesh, or maybe it should be called cluster; it's unclear. It's called mesh, though, in the protocol itself. So how did I implement this function?
A: We're using this Dial helper function, which makes a gRPC client, so it expects that we can talk directly. It may be that in the future there's a port-forward to the workload pod and then a dial to localhost; right now that port-forward to the workload pod is being done out of band, by me running a command in another window. After the connection is established, we send the requests, DiscoveryRequests like this one.
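A minimal sketch of that flow, assuming an out-of-band port-forward of Istiod's plaintext XDS port (15010) to localhost; the node ID and the debug type URL are illustrative assumptions, and the PR's Dial helper additionally handles TLS and auth:

```go
// Hypothetical sketch of the "new way": dial Istiod over gRPC and send a
// DiscoveryRequest on an ADS stream instead of GETing /debug/syncz.
package main

import (
	"context"
	"fmt"
	"log"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	discoveryv3 "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
	"google.golang.org/grpc"
)

func main() {
	// Assumes `kubectl port-forward -n istio-system deploy/istiod
	// 15010:15010` is running in another window, as described above.
	conn, err := grpc.Dial("localhost:15010", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ads := discoveryv3.NewAggregatedDiscoveryServiceClient(conn)
	stream, err := ads.StreamAggregatedResources(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	// A debug type URL asks Istiod for sync status rather than real Envoy
	// config; the exact URL and node ID format here are assumptions.
	if err := stream.Send(&discoveryv3.DiscoveryRequest{
		Node:    &corev3.Node{Id: "sidecar~1.1.1.1~debug.default~cluster.local"},
		TypeUrl: "istio.io/debug/syncz",
	}); err != nil {
		log.Fatal(err)
	}

	resp, err := stream.Recv()
	if err != nil {
		log.Fatal(err)
	}
	// Each resource unmarshals into a per-proxy sync-status record that the
	// command formats into the table shown earlier.
	fmt.Printf("received %d resources of type %s\n", len(resp.Resources), resp.TypeUrl)
}
```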
A: We get back the response, which comes in the form of these Envoy configuration nodes; we just unmarshal them and print them pretty much in the usual way. Because I don't have the NACKs yet, I'm just printing the instance name and namespace; that's the stuff you saw in the first column, along with this mesh ID, which was not part of XDS. And that's basically it; that's the whole thing.
C: Question one is on the remote cluster. If I'm executing the istioctl proxy-status command, and I guess proxy-status is either overall for the cluster or for a particular pod, would istioctl exec into the pod on the remote cluster, which would establish an XDS connection back to the Istiod on the main cluster, and then be able to retrieve the status? Is that right?
A: There are two variants of proxy-status: there's the one with no pod name, which lists all the pods, and then there's the one where you ask for a single pod, and it compares the configuration as well. In the first case, which is all that you saw in that PR, I believe Costin expects istioctl to port-forward into the sleep pod, and there will be, on the sleep pod, a port that will be used for forwarding XDS requests to the control plane.
A: In addition to doing the first thing, I'm not going to ask for all of the connections; I'm going to ask for the XDS configuration of that pod. So I'm going to ask Envoy itself for a config dump, and the code that is currently in istioctl will already compare the two. So we have code to compare a config dump with something from pilot; the only thing that's new is that we're going to ask pilot for the config dump using XDS, rather than using this debug endpoint.
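A minimal sketch of the Envoy side of that comparison, assuming the sidecar's admin port (15000) has been port-forwarded to localhost; istioctl actually reaches the admin endpoint differently, so this is illustrative only:

```go
// Sketch: fetch Envoy's own view of its configuration from the sidecar
// admin endpoint, the input to istioctl's existing compare code.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// e.g. `kubectl port-forward pod/sleep-xxxxx 15000:15000` run separately.
	resp, err := http.Get("http://localhost:15000/config_dump")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	dump, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The new part is obtaining pilot's side of the diff over XDS instead
	// of the /debug endpoint; this JSON side stays the same.
	fmt.Printf("config_dump is %d bytes\n", len(dump))
}
```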
C: I was going to say, from your sleep pod's perspective, you can only configure your sleep pod to point to one control plane, right? So on the left side, if I'm running multiple revisions of Istiod, say I have canary and base, at any point in time the sleep pod is only pointing to one particular version of Istiod, regardless of whether you have multiple instances of Istiod, right? So at most the sleep pod knows which Istiod is beneath it.
A: There are two problems. There's multiple instances within one deployment, so it's one logical control plane that's just been horizontally scaled, as Kubernetes does, and then there's the problem of actually running two revisions of the control plane and not having the ability to select between them when you run that command. For the first problem, the problem of being sharded across multiple pods in a single control plane: Costin has promised, as part of this diagram, these Istiods talking to each other as peers to make that happen.
A: In this case, I couldn't figure out how to talk to Istiod to ask it. So the question is: if we're going to use method B, going through an existing pod, what do we do if there's none? But if we do method B, there's going to be some pod in some namespace that's only pointing to a single control plane. So when running these commands, you may need to give them a namespace argument or a pod argument, to let istioctl know which canary control plane you mean.
A: I have questions outstanding to Environments and Networking about this. One possibility is: well, we install an Istio operator for each control plane; that's how I was always imagining it worked. Even if you're not able to talk to the control plane yourself, there's an Istio operator that sort of says the user has a main and a canary; maybe they're both there.
A: But I'm going to be taking this slide tomorrow to Environments, and if I get no satisfaction from them, to Networking on Thursday, and I might keep asking for issues to be filed as P0 or P1 for all of the pieces needed to get istioctl to do the full story when it's more complicated than this picture. This picture shows one instance of Istiod and one control plane, but you can imagine there being clusters all connected to the same Istiod, and multiple instances of this control plane.
B: And thanks for tackling this; it looks like quite a complicated problem, and it certainly seems like we're not getting the traction we need from the other working groups to resolve it. But I really appreciate the way that you've approached this and how thorough you've been in making sure that we're getting answers. Thanks, Mitch.
C: Yes. So basically you're saying, in this diagram, what if there's a third or a fourth cluster, right? Yes, I assume so. Even though most of our testing on it has been done with two clusters right now, I don't see why you couldn't; I believe the code is also written assuming you could potentially have more than two clusters.
C: That's a great point; it depends. If you look at our single control plane multi-cluster deployment model, we actually assume the user would replicate the VirtualServices and DestinationRules and all the networking resources across the multiple clusters through a CI/CD system. So it's not automatic, because you have to remember there is no pilot, no Istiod, on the remote cluster that's watching for the config changes and would be able to propagate the configuration changes.
A: I would love to have a longer conversation; I know that we don't have time today. But specifically around the relationship between the resource repositories, that is, the Kubernetes API servers, and central Istiod: I'm curious where we need to not only hear that an object was changed but actually write a change back to the object. I'd like to get your opinions on that.
A: This picture is Lin's picture, and I showed you Costin's picture. I made my own picture as well, which is this monstrosity, trying to come up with a picture that shows the complexity that might really be present. Here we have three clusters, one, two, and three, and two users of istioctl. Cluster two is talking to a central Istiod, cluster three is talking to a local Istiod, and cluster one is talking, in one namespace, to a central Istiod, but it's in the middle of a canary. So it tries to show all of the domains of permissions. It does not show that central Istiod might be implemented with multiple pods that need to be sharded, like Costin's picture does, but it shows what I think matters from a user interface point of view.
A: Here we have a user I'm calling Knuth, who is administering some namespaces but not others; Knuth is administering the istio-system namespace in cluster three, and Bob is a user of that namespace, but they're sort of both users of cluster one, and neither one is an administrator of the central Istiod. Those are the problems that we need to solve cleanly to make this work. People need to understand, given their permissions, what they can do.
B: There's a new dimension to add to that, unfortunately, and that is multi-tenancy, which it appears we're pursuing in 1.7. So not only do they have different directions they'll connect from; giving Bob information that only Knuth should have will not be allowed.
A: Exactly, and I think this document talks about that a little bit. But some important feedback I got from Costin was that all of our commands, which currently are cluster-based, should be mesh-based.
A: So when I do proxy-status and it lists all of the proxies, Costin's claim was: well, since this particular instance of central Istiod for this company is administering both production on cluster 1 and some namespaces on cluster 2, proxy-status should list this pod and this pod and this pod; and that mesh column, which may mean cluster or maybe mesh, I don't know, there needs to be some column that sort of says that productpage on cluster 1 and productpage on cluster 2, in the prod namespace, are different.