From YouTube: Istio User Experience Working Group July 9, 2019
Description
Istio User Experience Working Group meeting held July 9, 2019
A: So my hope for this meeting is to align the various documents we have for the 1.3 and 1.4 multicluster user experience. We have a new document that no one has looked at yet, the Istio multicluster operator experience document, and the author is here to walk us through it. I wrote the Istio install experience document before the operator CLI document was written, and we should try to align those things.
A: So everyone knows: we've had this multicluster stuff for a while, and we have good support for installing multicluster with Helm, but we've been lacking an experience for maintaining one, and I'm hoping Jason can talk about that.
A: We also have this new installer with an operator, which has a great deal of capability, and we have heard all about canarying between multiple Istio versions. But less has been said about what the simple experience of a user just upgrading looks like.
B: So the problem I'm trying to solve is that there are a lot of different pieces for multicluster floating around: there are scripts, there's some active documentation, and sometimes they're aligned, sometimes they're out of date. It's not easy to set any of this up. Multicluster is inherently difficult, but we don't do a great job of making it easier.
B
Can
we
fix
that
problem
I
start
to
fix
that
problem
by
including
some
multi
cluster
support
into
this
operator,
so
as
being
proposed
as
a
way
to
manage,
install
and
upgrade
the
kind
of
I
think
the
most
important
parts
of
this
report
are
the
restrictions
prerequisites
on
requirements
for
the
1.3
time
frame.
So
this
is
really
two
months.
What's
the
minimal
amount
that
we
can
do
in
the
operator
to
have
some
form
of
multi
cluster
to
make
some
progress?
I,
don't
think
anybody's
will
be
completely
satisfied
where
we
end
up.
B
We
can
make
longer
term,
but
but
just
make
some
incremental
progress
can
we
can
we
come
to
some
minimal
agreement,
but
we
think
might
be
viable
and
doable
in
the
near
term.
So
that's
one
place
getting
feedback
from
various
groups
would
be
useful
is
what
do
we
think
is
well
here?
What
everything
is
realistic
in
terms
of
the
requirements
and
restrictions.
A
B
B: For example, one point I don't think we have complete agreement on is the network topology of a multicluster setup. Ideally, we could do multi-cloud, multi-network and stitch everything together (Istio is capable of that), but maybe we don't start there for the minimal version of the operator.
B: One important thing to note here is what I mean when I say operator. The operator is the server-side controller that's running in a cluster, or in one of many clusters, but it can also just be a human operator. There's also the operator project, which has the notion of a CLI, and there are two forms of that: a template-generation form of the CLI, which is a replacement for helm template, and then an installer mode.
B
On
the
same
API
and
the
same
conventions,
but
the
the
mechanism
would
be
different
between
them.
Interestingly,
the
operator
CLI
forms
could
be
a
steel
cuddle.
I
think
you
could
make
a
good
argument
for
that
that
we
need
to
work
through
the
details
about
how
we'd
actually
manage
where
code
lives
and
the
release
process,
but
conceptually
that
operator
CI
was
very
similar
to
other
proposals
as
that
screw
Panetta
written
like
what
there's
this
to
cuddle
MC
doctors
in
this
to
cuddle,
mesh,
fi,
doc,
I
think
there's
another
one
that
they're
all
reference
at
the
bottom.
B
A
B
B: If you were to logically break down what a controller does into a few steps: it's going to collect information, it's going to observe the state of the world, it's going to compute some diff, and then it will actuate against that. You should be able to do the same thing with a CLI tool: run a command that says "tell me the status", and maybe even a diff command that tells you what it would change.
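The observe/diff/actuate breakdown described above can be sketched as a tiny reconcile loop. This is purely illustrative of the idea (the dict-based state and function names are not any actual Istio API):

```python
# Minimal sketch of the observe -> diff -> actuate loop a controller
# (or a one-shot CLI command) would run. All names are hypothetical.

def observe(cluster):
    """Collect information: the current state of the world."""
    return cluster  # stand-in; real code would query the API server

def diff(current, desired):
    """Compute what would have to change to reach the desired state."""
    changes = {}
    for key, want in desired.items():
        if current.get(key) != want:
            changes[key] = want
    return changes

def actuate(current, changes):
    """Apply the computed changes. A 'status' or 'diff' subcommand
    would stop before this step and just print the pending changes."""
    updated = dict(current)
    updated.update(changes)
    return updated

current = {"istio-version": "1.2", "gateways": False}
desired = {"istio-version": "1.3", "gateways": True}
pending = diff(observe(current), desired)
print(pending)                    # what a diff subcommand would report
print(actuate(current, pending))  # what the controller would converge to
```

The point is that a CLI tool and an in-cluster controller can share the first two steps and differ only in whether the last one runs.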
B: I think I say this at the top of the proposal, but maybe I didn't spell it out clearly: this is focused on the CLI, so we don't need to persist any of this in a cluster. Your source of truth for your list of clusters, and your source of truth for your Istio control plane configuration, can all be local, driven from source control, driven through CI/CD.
B
If
you
there's
variants
of
this,
so
if
you
had
the
operator
running
in
the
in
the
in
the
cluster,
that
spec
would
be
a
CR,
maybe
the
cluster
would
be.
Is
the
cluster
M
will
be
a
CR
and
then
running
sto
cuddle
check
would
call
some
service.
It
would
cross
reference
the
the
CRD
that
a
user
created
so
similar
to
keep.
B: Yeah, so istioctl kube-inject kind of supports this already, where you can run everything with local files, but you can also point it at your cluster and use the in-cluster config map. It might be similar to that. There are lots of options we could go with here.
B: That would be great, because that's like the number one problem that people run into, at least not with this topology but with the gateways topology (which is being renamed). The number two problem people run into is that they don't set up their DNS properly, and so their gateways are all broken. So that would be another check to add, and what I'd suggest is describing the checks you want to add for the various models in that doc.
B: That's part of it. I mean, there are two different ways to install multicluster: you can install with CoreDNS or without it, and with CoreDNS you get the global DNS resolution. This proposal isn't about the variants which require CoreDNS. Yes, but if people were doing CoreDNS I think that'd be a great check, because those are the two things that always stump people; they stumped me for ages. It'd be nice to be able to run one command and be told my certs are good.
B
Is
good,
you're
good
here
separate
with
crowd
and
the
third
check
would
actually
be
to
run
like
a
pod
on
each
of
the
clusters
and
make
sure
they
have
connected
without
doing
a
really
good
firm
check
if
you
want
advanced
features,
so
these
are
just
like
brainstorming
ideas.
I
can
add
them
to
the
dock.
To
this,
and
if
you
like,
oh
yeah,
that
would
be
I,
don't
know
where
we
yeah,
we,
we
can
add
it
and
ship
it
around
to
the
dock.
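The three checks just brainstormed (certs, DNS, cross-cluster connectivity) could hang off a small check-runner like the one below. The check names and the pass/fail logic are illustrative stand-ins, not the actual istioctl implementation:

```python
# Sketch of a multicluster diagnostic runner for the three checks
# discussed: shared certs, gateway DNS, and cluster connectivity.
# The cluster dict fields are hypothetical.

def check_certs(cluster):
    """Clusters must share a root of trust for cross-cluster mTLS."""
    return cluster.get("root_ca") == "shared-root"

def check_dns(cluster):
    """Gateways are unreachable if DNS for them is not set up."""
    return bool(cluster.get("gateway_dns"))

def check_connectivity(cluster):
    """Stand-in for launching a probe pod in each cluster."""
    return cluster.get("reachable", False)

CHECKS = [
    ("certs", check_certs),
    ("dns", check_dns),
    ("connectivity", check_connectivity),
]

def run_checks(cluster):
    """Return a pass/fail result per named check."""
    return {name: fn(cluster) for name, fn in CHECKS}

cluster = {"root_ca": "shared-root",
           "gateway_dns": "gw.example.com",
           "reachable": True}
print(run_checks(cluster))
```

A real tool would print each failing check with a remediation hint, which is exactly the "one command says my certs are good" experience asked for above.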
B: But that's exactly the kind of thing, that's the spirit of this that I think other people have tried to capture too: you should be able to run one or two commands and get pretty far. We shouldn't have to send you a one-off script, or have you paste a bunch of bash commands, for the pretty common cases. Anyway, I completely agree. And I'm curious why you want to start with flat networking, because that requires replication of the-
B: I'm curious why you want to start with that. Can you repeat the last part of what you said? Something about IPs? Yes, you have to replicate the services, right, to get good IP resolution. My understanding is that we don't need to replicate the IPs; we need to replicate the Kubernetes Service for DNS.
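Replicating the Kubernetes Service for DNS, rather than the endpoints or IPs, can be done with a selector-less stub Service: the name resolves in the local cluster's DNS while the traffic itself is routed by Istio. This fragment is an illustrative sketch of that pattern, not output from any Istio tool:

```yaml
# Hypothetical stub Service replicated into each cluster so that
# "reviews.default.svc.cluster.local" resolves in local DNS.
# No selector: there are no local endpoints; Istio routes the traffic.
apiVersion: v1
kind: Service
metadata:
  name: reviews
  namespace: default
spec:
  ports:
  - name: http
    port: 9080
```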
B: You replicate the Service and then you get a working system. But there's no way currently to do that replication automatically. Were you planning for the operator to do that replication? Okay, yeah. So this goes back to the point that nobody is going to be universally happy with whatever goal we pick. I'm not saying what's here is the right one; it's not perfect.
B: I think there are going to be some manual steps in the initial version that people will have to do to tie everything together. Are you going to go with the two models, right: one where you replicate the service entries, and another model where you replicate the services themselves? When we replicate the service entries, you don't have to do that; that's a choice you opt into. Versus with the services, you have to do it: if we don't replicate the services, the system doesn't work. So I'm wondering what the rationale is for choosing that first.
B: Maybe you can make the argument that, for the initial version of this official multicluster support, it's okay to have alpha-quality multicluster, because you're trying to get all the steps together. The thing is, the testing that exists for all of the different cases is pretty weak, but saying the gateway model is weaker than the others is not really valid.
B: People have actually tried the gateways model, because flat networking doesn't work for most environments. The reason is that you need to configure the pod network, and a lot of environments don't allow configuring the pod network at all, and the pod networks can't overlap in the flat networking model. These are all things to consider when going through the implementations.
A: What I'd like is for the commands to work with any topology. It would be nice if the command had, you know, type=gateways or type=flat, and then, if you try something that's just not implemented in this version of Istio, it tells you so, but the options are all there. That way we've made sure we've designed all the CLI options and CRD options up front, and when we're ready to implement them, the hooks are all there.
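The proposal above (design the full option surface now, fail loudly on the unimplemented topologies) can be sketched like this; the topology names and function are hypothetical, not a real istioctl flag:

```python
# Sketch of a topology flag whose full option surface is designed up
# front, while unimplemented choices fail with a clear message.
# The names here are illustrative only.

DESIGNED = {"gateways", "flat"}   # full designed option surface
SUPPORTED = {"gateways"}          # implemented in this (hypothetical) version

def join_clusters(topology: str) -> str:
    if topology not in DESIGNED:
        raise ValueError(f"unknown topology: {topology!r}")
    if topology not in SUPPORTED:
        raise NotImplementedError(
            f"topology {topology!r} is designed but not implemented "
            "in this version")
    return f"joining clusters with {topology} topology"

print(join_clusters("gateways"))
```

The benefit is that scripts written against the designed flags don't change when a topology graduates from "designed" to "supported".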
B: Nice. And what we really want to avoid is any kind of refactor later, because the CLI becomes an API, and basically what it's going through here, I think, amounts to API review. Okay, maybe we're not at that point yet. My intent with this doc was to get agreement on the scope, and then maybe we have a follow-up where we actually do a proper CLI API review of this.
A: So the CLI should be a good replacement for the documentation. I didn't have a chance to incorporate this, but I had previously written a document, which I linked to somewhere down here: a proposal for istioctl multicluster. The very first command I came up with was info, which just lists, you know, whether you're using gateways or a remote control plane. This is all just napkin-quality stuff.
A: It's just to tell you what you currently have and let you see what other things you've already connected to, which turns out to be super important, because you sit down at a cluster, or you come from another installation and you're used to one kind. You want these initial commands that tell you what you have, maybe linking to the Istio documentation, to get you started. For me, that piece is really important. Then the commands to join these control planes are great ways to replace the hand-written scripts.
A: That's what people have right now. But it makes me nervous when we say, hey, Istio is going to do one thing based on how it's configured, you know, something that Helm then puts into Pilot for a lot of stuff, and whether or not we have Istio DNS to resolve things like *.global, but there's no way to see that here. It would be good if we mapped out the scope for all of our different kinds of networking and then just implemented our favorite kinds.
B: In istioctl or the operator. So what I'm hearing is: I have commands now to do the minimal amount, to install and check that things look okay, and maybe we need some additional diagnostics, like an info sort of command, as part of that minimal support, either in the operator itself or in the multicluster part of it. I guess anything with multicluster probably requires some notion of a list of clusters.
B: That list could take many forms, so it seems like it would be important to have some formal definition of it, even if it's not the final form. Those would be the building blocks, and we can extend them to support some of these other use cases. One question for you, Jason: I guess since this is alpha quality, we're not locked into a big API.
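A formal definition of the cluster list could start as small as a named record per cluster. This sketch only illustrates the "building block" idea; the field names are made up for the example, not a proposed schema:

```python
# Minimal sketch of a formally defined cluster list. The fields are
# hypothetical; the point is only that the list has a defined shape
# that other commands (info, check, join) can build on.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cluster:
    name: str
    context: str              # kubeconfig context used to reach it
    network: str = "default"  # which network/topology group it joins

@dataclass
class ClusterList:
    clusters: List[Cluster] = field(default_factory=list)

    def names(self) -> List[str]:
        return [c.name for c in self.clusters]

mesh = ClusterList([Cluster("east", "gke-east"),
                    Cluster("west", "gke-west", network="net-2")])
print(mesh.names())
```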
B: We could organize that into alpha- and beta-level feature stages. So for this kind of initial work we get a base istioctl alpha command and have all these commands under there. We can do an initial review, and then, as we get feedback and figure things out, we're okay breaking them, because it's considered an alpha-level feature (though we try not to), and then at some point we decide it's solid enough.
C: Just one comment on istioctl: there may be some overlap between the multicluster commands and the operator commands. I'm not sure exactly what that overlap would be, but we may be in a situation where parts of the operator CLI are at a different maturity from the multicluster parts. So I guess we just need somebody to be able to manage that.
B: For the control plane API, yes. I mean, you could support multiple parallel versions of the API, so there could be a control plane API in an alpha version and a beta version, where the alpha version has the multicluster support in it and the beta version doesn't, and then whatever commands we run... You could pair that with the istioctl command: you do istioctl alpha, and that's going to recognize the alpha APIs.
C: On "experimental": I'm not sure if that's the understanding among Istio users, but to me experimental kind of suggests that the feature may or may not go forward. Whereas, for example (I don't know about multicluster specifically, but at least for the operator CLI), I think we would need something that's "alpha", which to me indicates more strongly that this is something we do intend to release, it's just not finished.
B: I don't know how we differentiate between the two, because even with Kubernetes alpha, things get killed. So there are basically two levels, right: there's alpha, which means we're not committed, and there's beta, which means we are committed. Our implementation in istioctl right now uses experimental, and that has committed us to using experimental for alpha features. So I don't see why we would need a beta as well.
B
It
doesn't
seem
to
make
a
lot
of
sense,
so
it's
what's
removed
experimental
prefix,
the
command
becomes
it
and
naturally,
I
think
it
depends
on
what
you're
referencing.
So
if
you,
if
your
beta
commands
or
referencing
data
API
s,
then
that
might
be
useful,
and
it's
today
with
sto
are
our
GAAP
is-
are
still
called
alpha
for
technical
reasons,
but
eventually
we
would
have
AP
is
and
all
three
stages,
so
you
might
want
to
steal,
cuddle
or
the
operator
CLI
to
be
aware
of
that.
B: We certainly can. I get the alpha/beta distinction, but why beta versus released? I get that it basically means it can't change. Yeah, I would probably debate for a while what the right breakdown is. I mean, maybe beta doesn't make sense, or maybe beta is an indication of the CLI quality, that you shouldn't build things against it, or-
B: Well, it's going to be promoted or disabled at some point. This is different from the fact that our CRDs are alpha; I don't know why that is, but we're working on finalizing those. I think that's a separate, orthogonal discussion. I think experimental solves a problem, and it works fine at the alpha stage.
A: If we feel that some experimental commands are really experimental and might go away, and others are on a track to success, I think it should be sufficient, when you run istioctl experimental help, for it to say, you know, "alpha" or "beta" after some of these commands, if we feel people are unsure about that. I don't think that changing the subcommand name helps any, because it makes everyone change their scripts. Ideally, they should only have to change their scripts once.
B: I think the experimental distinction is sufficient; I think we agree on that, that we have some freedom to experiment with these commands. At this point we still would have to think through how this works with the operator CLI: whether the operator CLI is istioctl and we just share code, or reuse the APIs, and at what level of commonality.
A: People have said that the operator, when running on your local machine, is a subcommand of istioctl. People have also said it's its own CLI; I don't think that's been decided. It can either run continuously, or it can be run, you know, remotely. I had a different question about the installer and the operator: every time I talk about the installer, Costin points out that you can canary Istios, and I was-
A: It's a separate topic, but how does it interact with this? Suppose I have two Istio control planes, or I want to. Obviously, I should be able to install multicluster in only one of my two control planes while I'm testing, which means I could test multicluster, or I could say: here's my Istio, and here's my test cluster. Does it work both ways, so that I can then go to multicluster?
B: Short term, though: if the question was what's the status of dual control planes, I think that is the right thing to do. The next question is whether the operator will support dual control planes, or be dual-control-plane compatible; Martin can comment on that further. And then the third question would be: what does that look like for multicluster? I'll let Martin talk about the operator and dual control plane.
C: Yeah, I can quickly answer that. So yes, it's absolutely intended to be compatible with dual control plane. I think we're really following the lead of the installer in this area, and there is definitely a plan there to solidify and just be more explicit about what's involved in the steps required for canary and dual control plane. You know, Costin is leading that effort, and I think once the steps are more spelled out, we'll just take that on into the operator.
A: First, of course, to test something I have to take my workload and say: listen, I want you to stop using my single-cluster control plane and use the multicluster control plane, and then, if it doesn't work, I want to roll it back. That is pretty simple. Then I realize what went wrong: I had been using gateways and I told it to use my single-network multicluster setup, or I had screwed up istioctl, or some other thing.
B: If this is under experimental, we could have some notes here that, longer term, it should be multiple- and dual-control-plane aware, and we'll have to tweak the commands to address that. So maybe we need to put some more time into future-proofing the CLI and the APIs, but I think we have some leeway because it's under experimental. And I get that, but if it's under experimental and we change things, there's really a lot of pain.
B: Once we know how it's supposed to work, I think that'll be a lot easier to do, but in my mind there are a lot of unknowns in terms of what the actual final topology would look like, and exactly how we would do dual control plane with the operator and multicluster. So I'm finding it difficult to come up with the right extensible knobs to put in right now.
B: Okay, so I don't know if this doc is the right place to do that, or whether we work with Ed on creating another doc, or amend an existing doc, and focus on restricting it to the next quarter or two: here's the result, here are the commands we think we can actually support, with more details there. It'd be more of a reference and, you know, a user guide.
B: Either way is fine by me; you can fold mine back into one or the other, it's up to you. I just know that there are one or two docs out there, and I'd like them unified into one doc, because, you know, we've had this divergence problem before and it's really painful, so it'd be nice.
B: It'd be nice if we could just solve it up front by not having a bunch of different implementations. If you want to make the canonical version of what the API should be, I can take what was rough and describe what the commands do in a little more detail, and then maybe that's the doc. I don't know, Ed, do you have any preference?
B: Well, at least from my perspective, I think from the environments working group's side it makes sense. Logically, Ed, you know, leads the usability working group and can comment on the usability aspect, but I think from a strategic point of view it makes sense, and also as a technical point.
A: On usability, what I'm thinking is, you know, just a doc with these commands, and then maybe taking, for example, our existing multicluster documentation on our website and seeing if the new commands would do the job better. So if it took 15 commands before to set up multicluster, now it would take two; and if we had a completely different set of commands for the gateway approach, we would also take those down to two, but with different options than it took for the CoreDNS approach.
A: I just want to make sure that I can write documentation that's almost exactly the same in terms of what the steps are, whether easier or harder. And I want to make sure that any files, like lists of clusters, can be persisted in a config map or something, so that if I come in and add a cluster and then go home, and the night-shift person comes on, they can see what the clusters are.
A: Oh, I wanted to ask about MCP. Are we expecting that to be part of this in this time frame?
B: Partially. I think in this time frame it's whatever we currently support, but documented and with a decent CLI in front of it. Pilot can get configuration from Galley with MCP, so we can use that, but Pilot getting endpoints from Galley with MCP I don't think is ready yet, so we might not use that by default.
A: I want to make sure that if that doesn't happen, there is a command-line way around it, if it's two clusters with different control planes, like different departments running them. And I also wanted to make sure that, if we were doing that in the same time frame, our commands like check would maybe incorporate them. So suppose these two clusters are doing MCP and it's working great, and then something goes wrong.
B: If we would like to really make progress and have something that's production-ready, focusing on a smaller number of cases and making them work really well might be a good idea. But that means saying no to some things. Saying no doesn't mean it's not possible; it just means you might have to take some additional manual steps to set things up. I'm not sure where this particular use case falls in that spectrum.
A: That's fair, and maybe just in our own internal documentation, for stuff like the cluster context, we should, you know, indicate that it has that requirement, and that there needs to be a second way to do it as a future item; we don't have to implement or even design it yet. But we need to know whether that cluster context is going to do work on both clusters, or even just list the clusters.
B: The simplest thing to do, in my mind anyway, is the list of secrets. There are some other options we could consider longer term: there's the cluster registry API, and there's also the Federation v2 API. I don't know the status of those, or what their trajectory is in the open source community, but they definitely have a cluster list with similar problems to the ones we face with the secrets.
B: It would need privileges. If you install the operator in a single cluster, it needs privileges to make calls against the different Istio control plane components, so the kubeconfig in the secret list above would need to have that administrative scope. There's an additional list that is used with the flat network, which is what Pilot has access to; that one just uses Pilot's service account.
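The "list of secrets" approach stores one kubeconfig per remote cluster as a labeled Secret in the control-plane cluster. A sketch of what one entry might look like follows; the label and layout echo Istio's remote-secret convention from this era, but treat the specific names and scoping as illustrative:

```yaml
# One Secret per remote cluster, discovered via the label.
# The kubeconfig's credentials carry whatever scope the consumer
# needs (administrative for the operator, narrower for Pilot).
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-west
  namespace: istio-system
  labels:
    istio/multiCluster: "true"
stringData:
  west: |                  # key = cluster name, value = kubeconfig
    apiVersion: v1
    kind: Config
    clusters:
    - name: west
      cluster:
        server: https://west.example.com
    # contexts/users with the appropriately scoped credentials
```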
B: Pilot's config would be an MCP endpoint for user config, and a list of MCP endpoints for remote service registries. That might be a CRD, or maybe there's a secret whose contents are just an MCP address, with n of those secrets, and then Pilot would no longer be consuming kubeconfigs.
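That future shape, where Pilot consumes MCP addresses instead of kubeconfigs, might look something like the following. The kind and every field name here are entirely hypothetical, sketched only to illustrate the idea:

```yaml
# Hypothetical resource: a list of MCP sources for Pilot.
# Neither the kind nor the field names are an actual Istio API.
apiVersion: example.istio.io/v1alpha1
kind: MeshConfigSources
metadata:
  name: pilot-sources
  namespace: istio-system
spec:
  userConfig:
    address: mcp://galley.istio-system:9901     # local Galley
  serviceRegistries:                            # n remote registries
  - address: mcp://galley.west.example.com:9901
  - address: mcp://galley.east.example.com:9901
```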
D: So we have two different kinds of secrets that we need, and one of them is not secret at all, which is the gateway IP address. It's not really secret; it should be public. So something to think about is how to build that information and store it in an appropriate way.
B: Right, the Pilot list of secrets is more of an implementation detail that should, I think, go away, because with MCP there'd be some other CRD, like a Pilot component config, or maybe a mesh config, that would have the list of MCP endpoints. But that's not going to happen in the next quarter.
B: It was originally an unsorted list of enhancements that we could make, some of them implementation details, some of them different use cases. I then categorized them by subsequent milestone, so I think I have a 1.4 and a 1.5 time frame, further down. Yeah, there we go. We can shuffle those around based on what we think is going to land sooner and what we think is more important. Maybe that's something the environments working group helps drive.
A: We're almost done. I had one question about the install and the replication of the CRDs. Do we think there's any way to use this tool to help an operator move cluster lists around? You know, I've been asking for a command to list the clusters that are available. Can we list them as YAML, in a way that they could be applied to another cluster? Could I mail the output of a list-clusters command to another operator, who could apply it, and then we would be talking together?
B: If you had a single operator and a single install CR in one location, then you could dump that to a file in YAML format and then reapply it, like recreate your cluster, or create a testing version of your cluster. If you added the replicated-cluster option to it, you could create that replicated cluster with the same mechanism. I don't know if we'd recommend that or not, but I think it's doable.
B
I
think
that
the
key
there
is
whatever
format
I
think
it
means
having
some
some
well-defined
API
for
the
cluster
list
and
be
able
to
dump
it
out
either
as
a
list
of.
If
you
did
a
list
of
kubernetes
secrets,
you
can
put
that
to
a
file
and
you
can
apply
it
same
thing
with
the
cluster
registry
and
then
the
Federation
v2
API.
Some
of
them
require
insalaum
installing
the
CRD
definitions,
so
the
APRI
requisite
step
there,
but.
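The dump/mail/re-apply flow discussed here boils down to serializing a well-defined cluster list to text and reading it back unchanged on the other side. A round-trip sketch (JSON stands in for the YAML wire format to keep this self-contained, and the record fields are hypothetical):

```python
# Round-trip sketch: dump a cluster list to a portable text format,
# mail it to another operator, who loads it and gets the same list.
# JSON stands in for YAML here; the fields are hypothetical.
import json

def dump_clusters(clusters):
    """Serialize the cluster list (the 'list-clusters -o yaml' side)."""
    return json.dumps(clusters, indent=2, sort_keys=True)

def load_clusters(text):
    """Re-apply on the receiving side: parse the list back."""
    return json.loads(text)

clusters = [
    {"name": "east", "gateway": "34.66.0.1"},
    {"name": "west", "gateway": "35.20.0.7"},
]
text = dump_clusters(clusters)
assert load_clusters(text) == clusters   # lossless round trip
print(text)
```

With a well-defined schema, the same dump works whether the backing store is a list of secrets, the cluster registry, or Federation v2.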
A: Okay. Thank you, Jason, for this detailed walkthrough and proposal. I think this is all wonderful; I look forward to collaborating with you. I think we're at the end of our hour. If anyone wants to speak at the next meeting in two weeks, let me know, and please put your feedback in this document and any other ones from this week's agenda. Do we have any final items before I end the call?