Description
The Istio User Experience working group meeting, held via WebEx on July 14, 2020.
A: I've spent the last week working on the XDS-based istioctl commands for version and proxy-status, and I got frustrated with the large number of command-line options for them, so I decided to implement a customer request: a config file for istioctl. I would now like to walk everyone through what I implemented, so we can see whether this is what is needed and the right solution for it.
A: Our users' commands shouldn't change. The newly required parameters should just sort of be there, and other flags are also going to just be there. It should be easy for someone who's used kubectl to set those defaults, it should be easy to override those defaults, and it should be easy to implement, because the code freeze is soon and I actually want this in. So I propose that there be a config file located at .istioctl/config.yaml, similar to the kubeconfig file you see with Kubernetes.
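The proposal can be sketched concretely. This is a minimal illustration only: the key names shown below (istioNamespace, xds-port) are assumptions chosen for the example, not a confirmed istioctl schema.

```shell
# Create the proposed per-user config file. The key names below are
# illustrative assumptions, not a confirmed istioctl schema.
mkdir -p "$HOME/.istioctl"
cat > "$HOME/.istioctl/config.yaml" <<'EOF'
# Defaults picked up by every istioctl invocation
istioNamespace: istio-system
xds-port: 15012
EOF
cat "$HOME/.istioctl/config.yaml"
```

With a file like this in place, existing commands keep their exact shape; the defaults simply come from the file instead of repeated flags.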
A: Context: as with Kubernetes, people often keep multiple config files around and use KUBECONFIG to point to them. I put the same functionality in, so I should be able to point to one configuration file and do proxy-status against one control plane instance, and be able to do the same thing with another file, running status against another instance.
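The multi-control-plane workflow described above can be sketched in KUBECONFIG style. The variable name ISTIOCONFIG, the file paths, and the key names here are all assumptions for illustration, not confirmed istioctl interfaces.

```shell
# Keep one istioctl config file per control plane, KUBECONFIG-style.
# ISTIOCONFIG, the paths, and the keys are assumed names for illustration.
mkdir -p "$HOME/clusters"
printf 'istioNamespace: istio-system\n' > "$HOME/clusters/prod.yaml"
printf 'istioNamespace: staging-istio\n' > "$HOME/clusters/staging.yaml"

export ISTIOCONFIG="$HOME/clusters/prod.yaml"
# istioctl proxy-status          # would run against the prod control plane

export ISTIOCONFIG="$HOME/clusters/staging.yaml"
# istioctl proxy-status          # same command, now against staging
echo "$ISTIOCONFIG"
```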
A: And if you didn't want to use a config file, I took advantage of the Viper feature allowing any of those parameters to be environment variables. So let's say your Istio namespace was called not-istio-system: you would just set that, and from then on it would be the default for the rest of the work that you were doing.
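A sketch of the environment-variable path, assuming the common Viper pattern of a prefixed variable per flag; the exact variable name below is an assumption about how the istioNamespace parameter would map, not a confirmed name.

```shell
# Viper can read any of the same parameters from the environment.
# The exact variable name below is an assumption about how the
# istioNamespace flag would map to an env var.
export ISTIOCTL_ISTIONAMESPACE="not-istio-system"
# From here on, commands such as `istioctl proxy-status` would
# default to that namespace without any extra flag.
echo "$ISTIOCTL_ISTIONAMESPACE"
```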
A: We don't really want to replace the commands that are there right now, because these new XDS ones are kind of only alpha-level in my opinion, but explaining that is too difficult, especially since in 1.8 they might become the standard. So I have an additional setting called prefer experimental, and if a user sets that, the experimental commands appear instead of the current 1.6 implementations.
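A hypothetical sketch of that toggle as a config-file entry; the key name is an assumption based on the discussion, not a final schema.

```shell
# Hypothetical sketch of the "prefer experimental" toggle; the key
# name is an assumption based on the discussion, not a final schema.
mkdir -p "$HOME/.istioctl"
echo 'prefer-experimental: true' >> "$HOME/.istioctl/config.yaml"
# With this set, plain `istioctl version` would dispatch to the
# `istioctl experimental version` (XDS-based) implementation.
grep 'prefer-experimental' "$HOME/.istioctl/config.yaml"
```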
A: Liam had asked for there to be a set command to set these defaults, but I didn't implement that, because the Viper machinery for writing parameters back was blowing away all the comments, and the comments were almost more useful than being able to set it. I could be wrong about that. I actually have the change, but it's not in the PR that I'm trying to get merged.
A
So
I
will
and
the
implementation
says
viper
and
it's
small,
so
I'd
like
to
give
a
demo
of
it
now.
A: Currently I have no istioctl config, and if I run istioctl x version, it will go out to the control plane and use these settings. So when you don't set the XDS address but use the default XDS port, it uses Kubernetes to find Istiod, but it will be passing in these certificates.
A
So
it's
really
going
to
use
this
the
secure
1512
port,
but
it
will
pass
in
these
certificates.
If
these
were
invalid,
it
wouldn't
work.
You
can
see
that
it
sort
of
gives
the
version
here
if
I
set
it
to
a
different
thing.
So
now
I've,
given
it
the
insecure
configuration.
A: There is one thing I'm not super happy with: the fact that port appears here, because it is not in the config file; that is the default for that setting. So it might be that we need to get rid of it.
A: I believe that if the user specifies both the XDS address and the port, it should complain, but I can check that right now. I don't currently have Pilot running, so it gives me this error: connection refused.
B: I think maybe on the config list command it would be good to put defaults in brackets, so that it's clear that a value is not set and will not apply in terms of precedence.
A
Right,
I
it
was
it
sort
of
picks
it
up
from
this
default
here
so
and
some
of
the
settings
I
didn't
I
didn't
put
in.
If
you
look
at
the
pr
you
can
see
that
the
list
of
things
that
are
there
is
explicit
as
part
of
the
pr
and
the
code
has
to
do
it.
So
something
like
the
timeout
I
didn't
put
in
because
I've
never
used
it.
It's
easy
to
add
and
remove
them.
A
It
thank
you.
I
made
it
a
green
document
here,
but
I
could,
if
you
guys,
approved
it,
I
can
make
it.
I
could
go
back
and
make
it
blue
and
you
could
approve
it
again
and
we
could
ship
it
and
it's
the
pr
is
really
small.
If
we
look
at
it,
it
turned
out.
Istio
already
uses
viper
and
viper
is
already
extremely
powerful
and
not
full
of
garbage.
So
it
says
it's
large,
but
it's
not
really
that
big
most
of
the
largeness
is
that
new
configuration
command.
A: But when a release is made and it's not a daily build, it will be different. The implementation is the same, and the client code is very similar, except for the whole XDS mechanism that we developed for the version command. I hope that people can look at that PR and just merge it.
A: Thank you for the comments that you made, especially on that PR. I did pretty much everything you said, except for reporting an error to the user if their pod doesn't exist. With the existing proxy-status command, if I ask for the details of a particular pod (this is the old proxy-status), it says they all match; the new one will work the same.
A: Yeah, the old code would give a 400 or 404 if the request to Pilot was malformed. There's no way to do that with the current extension stuff that is there, and fixing that turned out to be a huge bear. So I'm going to tell networking it's their job to fix when I talk to them on Thursday, and I hope that we can make the PR for that sort of just go.
A: So there are two error conditions. One is a malformed or missing proxy name. The other is a proxy name that doesn't exist, which should be like a 404, but that never actually occurs, because the client always goes to Envoy first to pick up the Envoy config, and if there's no such pod, it fails there. So it's actually hard to test for an invalid name; you could possibly use grpcurl to test it.
B: That's fine. Can we just make sure that we're tracking these things that are lacking from networking, and make sure that they end up as a P0 on the networking roadmap for 1.8?
B: Yeah, I think an issue, and then, as we start talking about the roadmap, we'll make sure that they are aware of it and that it gets put on their roadmap.
A: Thanks, Ted. While I'm showing all this great stuff, I wanted to show the PR for more output columns, with changes in how Pilot generates data for Envoy, and even before that, the output of...
A: It adds the domains that are being listened to and the particular matches; if something has more than one match, it shows that, and if there's a virtual service involved, it shows which one. If a user does this on a pod in the mesh, they see the same stuff. Of course it gets much messier, because these names are so long, and I didn't want to try to make them smaller. The matches, of course, are usually everything, but sometimes they're big; and the virtual service is here, and also the destination.
A: And you can see a new column; that's all for that one. So anyway, I had gotten a bunch of feedback from people, and it looks like it may have passed review. I just want to make sure everybody knows what's going on, because it has not yet merged. So that's what I have. Mitch, I can show your proposal now, or would you like to take over the screen sharing?
B: No, you can go ahead and open it up; there are only a handful of improvements from what we discussed last week, and they are at the bottom. So, two weeks ago there were some performance concerns raised about the CSDS API, based on the fact that it allows multiple config dumps to be collected simultaneously.
B: I've spun up a testing cluster using isotope, in the mixologist repository, which is where we run all of our Istio performance tests. I used the same script that the perf testing group uses to set up the cluster and to put the cluster under load. For reference, it's a service graph that includes 20 services per namespace. I don't recall what the actual data-plane load is, although data-plane load is a little bit irrelevant to the conversation.
B: What's particularly interesting is that every namespace has a config changer that is issuing a config change at a rate of, I believe, one per second. So at the top, when I scaled up to 25 namespaces, that means 500 proxies and 50 config changes per second having to get pushed out to those proxies: pretty heavy load.
B: You can see Grafana snapshots throughout, showing the impact on CPU, config distribution time, memory, and goroutines. In most of the experiments I ran one config dump, then five config dumps, then 50 config dumps, and no matter how I loaded down the Istiod instance (by the way, only one instance of Istiod was serving all of that config to all of the proxies), we were not able to find any impact on CPU utilization or config distribution time.
B: So it's a little bit difficult to say whether the spike came from running the config dump command or just from regular processes inside Istiod. The spike was not higher, or longer in duration, than other spikes observed over the previous two hours, and that's the last link in the doc there, if you want to look over those charts.
B: Yeah, so where you see my blue stripes on the CPU panel, you do see some CPU spikes there, but as you can see, they're not outliers, and those are the only three times I ran the proxy-config command on this particular cluster. So those other spikes represent just normal operations within Istiod itself.
B: I'm not especially optimistic that networking is able to be convinced; however, I think that we've shown we've done our due diligence. Now we can take this and show that we're not irresponsibly creating a new performance hole. The only remaining section is just, well, the other thing: we've already agreed with networking not to implement this in 1.7, so if we were to move forward with it, it would go into 1.8.
B: The envoy-tools repo actually has a PR at the moment (I know, Ed, I sent this to you): they're going to be publishing their own tooling as a client to the CSDS service, so that for any Envoy XDS server you can take one generic tool, point it at the server, and get this sort of status and config dump information.
B: So I've also been working with them on that PR and making sure that it's useful, my intent being that it should be useful against Istio in the near future.
A: So it's an ad hoc request with a sort of standard response, and then, if networking likes this, we will do a complete Envoy-style request and response, and then we can get rid of our ad hoc requests.
B: I have sent this to Costin individually, because I think he's the only person who had concerns; I don't think it was collectively networking. So he's aware of the work. That's about as much as I can say about it.
A: Okay, I have no concerns about it. I think, over to you.
B: Yes. Looking at it today, the tool (or at least that particular command of the tool) was built by an intern at Google, and today it is very particular to Google Cloud's implementation of Traffic Director. They've already confirmed that they're open to pull requests. The thing that would differentiate it, from Traffic Director to Istio to maybe an IBM implementation of an XDS service or somewhere else, is the authentication each of us supports.
B
You
know
slightly
different
authentication
schemes
and
provisioning
your
certificate
material
things
like
that
are
a
little
bit
different
per
platform.
So
once
we
have
a
good
story
around
how
we
get
credentials
provisioned
for
istio
in
1.8,
which
may
be
what
we
do
in
1.7,
I
can
modify
their
tool
to
be
compatible
with
what
we're
doing.
A: Okay, well, thank you. I hope you can come back in maybe two weeks and show us their output against an Istio cluster, compared to ours. Maybe there are fields they have that we lack that would be valuable.
B: I don't know; I'll take that as a follow-up as soon as I've got my service up and running. I think, Ed, I've shown you the code; it's not there yet, it's not ready to be merged, which is fine because master isn't on 1.8 yet. The earliest it could merge is Wednesday, but within a week or two it should be there. It's just a few rough edges here and there.

A: Okay, excellent.
A
So,
from
a
user
experience
point
of
view,
I
of
course
approve
it,
but
that's
I'm
only
approving
it
from
the
user
interface
point
of
view.
Carson's
objections
were
not
about
the
interface
but
about
the
committing
to
something
that
might
be
heavy
in
terms
of
network
or
cpu
yeah.
So
I
can't
approve
it
in
terms
of
that.
A
Also
I
noticed
rom
has
joined
us
ron
were
you
here
at
the
beginning,
when
I
gave
the
demo
of
config
files
for
istio
no,
I
joined
about
like
25
minutes
late,
oh
because
I
was
I
should
have.
I
should
have
called
you
to
show
it
to
you,
but
I'll
give
you
the
recording.
I
think
that
it's
going
to
help
a
great
deal
with
the
user
experience,
especially
around
central
stod.
Rom: And I saw the comment about additional columns. It satisfies all the original requests that I had, so I'm good with that.

A: Thank you, Rom. Rom, you had one other request, which was to restore some functionality to istioctl describe. I'll be asking the networking folks about that at the networking meeting, because I've been frustrated with that as well. I wanted to bring up one more thing.
A
For
everyone,
so
this
is
our
favorite
tools
in
hub:
let's
pull
up
just
the
zenhub
for
user
experience
and
one
seven.
A
So
these
are
the
items
that
we,
our
group,
is
either
fully
or
partly
concerned
with
for
one
seven,
which
has
our
freeze
in
a
week
mitch.
This
is
yours
right.
The
perf
test.
A: We are not fully on the hook for this tool to detect and warn about Mixer usage; everybody's doing it. I wanted to add this to istioctl analyze. I tried to do it this weekend and couldn't figure it out, because I can't tell which resources are the ones that are going away and which ones are staying.
A: We were able to get security in when we talked directly, but the mechanism for creating the certificates is still out of band, and it looks like we're not going to have any other mechanism for 1.8. I'm trying to talk to the VM people; we might be waiting until 1.8 for making these certs. It might be that if you want to use istioctl against your Istiod, you get the certs from your CA provider. Mitch?
A: No, it will not. All right, so we'll update this and say this item will slip if you are on central Istiod; it's still working for a local Istiod. We have decided to move anything Mitch was doing with regard to CSDS, and for 1.7 we're going to focus on this: my almost-CSDS version is working, and describe is working, although the improvements we wanted aren't going to be there. That's the thing I mentioned I'm going to be talking about with networking.
A: Okay, well, I believe this concludes our meeting. Next week is code freeze, so feel free to reach out at any time if you are working on something for that. As soon as the code is frozen, I'm going to pivot back to writing these documentation-based tests, which I have been behind on.