From YouTube: Istio User Experience working group May 19, 2020
Description
Istio User Experience working group held May 19, 2020
A: So when you were here two weeks ago, we talked about the troubleshooting API that was going to handle all of these istioctl commands that currently use the debug API: an implementation that was unsharded, allowing anyone who knew the REST endpoint of an istiod to talk to it with or without Kubernetes, plus integration with Kubernetes aggregation, so that we could very easily get this data no matter where your pods were running.
A: We found that our proposal was similar in spirit, if not in design, to Costin's proposal for XDS events. Now, Costin's proposal, XDS events, is somewhat a wrapper around using XDS for all the things, and those things are somewhat undefined; we'll get to that. But Costin and I had a meeting with Mitch and the Environments folks yesterday to talk about that. He challenged us to use his new API.
B: Seemed like a fair summary, Mitch? Yeah, I think so. In particular, he was interested in the commands that we were not targeting, the ones that require access to the proxy, and how we run those, which we weren't planning on doing in 1.7; they seem to be a higher priority for the Networking and Environments working groups. So he had a number of ideas.
A: And an almost total rework of the way that most of the commands are done. The output can stay the same, right, but the method of getting the data from the control plane is going to be different. And my thought is that, since it's different, we shouldn't even dare try to rewrite the commands in a single PR; we should almost certainly have a PR to add the new ways and then see if they're working. I tried to write down a style guide for the things that Costin sort of asked for.
A
So
let
me
show
you
what
I
think
he
said.
First,
he
had
a
user
experience
concept.
That
I
think
is
a
good
one.
We
should
start
anyway
that
the
output
of
is
to
cuddle
should
cover
a
mesh
rather
than
a
single
cluster.
So
if
you're
in
a
multi
cluster
single
mesh
environment-
and
you
ask
you
for
proxy
status-
it
should
probably
list
all
of
the
pods
and
that
just
the
ones
on
some
cluster
you're
connected
to
I.
Think.
A: Makes sense? So the idea is to see if we can do that. Of course, this brings up the possibility that there might be some ambiguous things: the default namespace in cluster one and the default namespace in cluster two are different namespaces, and currently all of our reports show just the pod and the namespace. We might need a new column. That's the basic idea.
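For illustration, a sketch of how such mesh-wide output might disambiguate identically named namespaces; the CLUSTER column, the cluster names, and the pod names here are invented, not an agreed design:

```
$ istioctl proxy-status
NAME                               CLUSTER   CDS      LDS      EDS      RDS
httpbin-66c9c6648c-4lmzr.default   east      SYNCED   SYNCED   SYNCED   SYNCED
httpbin-7f4c8ff8f9-m2pfq.default   west      SYNCED   SYNCED   SYNCED   SYNCED
```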
A
And
the
second
point
is
that
things
may
not
even
require
kubernetes.
So,
for
example,
you
might
just
not
have
to
ask
communities
where
a
pot
is
or
where
a
sto
is,
and
we
might
take
parameters
to
find
them
so
that
mr.
Kyle
can
talk
directly
to
sidecars
or
to
Central
St
Oggy
central
sto
D
is
the
word
we
use.
Ruin
is
Tod,
is
running
sort
of
outside
the
control
plane
of
your
are
outside
outside
your
cluster
managing
multiple
systems,
either
on
some
other
kubernetes
cluster
or
completely
independently
of
any
kubernetes
cluster.
A: And decreasing the coupling, making the commands less coupled to the implementation. So the suggestion was: instead of going to these debug APIs and expecting the debug APIs to continue being supported by pilot (and then we have trouble if they stop supporting something), we should follow these things, which may be more stable: the pod metrics; this new mechanism that we'll talk about for getting XDS events, possibly; and, this seems to be contentious, if there are no pods, some way of finding this XDS event mechanism.
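For context, the coupling being described looks roughly like this today: istioctl port-forwards into the control plane and scrapes unversioned debug endpoints. The port and path below reflect my understanding of the 1.6 layout and may differ by version:

```
kubectl -n istio-system port-forward deploy/istiod 8080 &
curl -s localhost:8080/debug/syncz    # the data behind `istioctl proxy-status`
```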
B: That last point is the most important point from Costin's and John's perspective. We need to do this from the perspective that either there is no control plane, or we have no access to the control plane, and we're asking what our Envoy sidecars in a particular namespace are doing. The motivation here is manyfold, really, but the primary idea is that we can use... oh hey John, you've joined, so he might be able to speak for himself here, but I think that the primary motivation is an administrator who only has access to his own namespace.
A: The troublesome item is that it currently seems to just be an envelope, unless I'm missing something. So he talks about, now that we have this, there are going to be topics, and he says they'll probably be these topics, and maybe some others. But I haven't found, and I haven't looked yet at the full PR, but in this design doc he doesn't say what the topics are. We need to see if zero or more of these topics are in the first PR.
A: He says they're coming in phase 2, and then, if there's anything we need, we need to add it and get it implemented under this mechanism. So something like proxy status might be able to work perfectly with NACKs and connections, assuming that they do what I think they do, but maybe not some other commands, like authz, which sort of tells you whether mutual TLS is going to be used.
C: How do we... can we start? So that we all better understand kind of how the implementation works, could you kind of go through how a proxy status would work in this architecture? Because we're going to need to know not just the Envoy, not just the pilot or istiod that that Envoy is connected to; we're going to need to know all of them, right?
A: There's a great picture here, and I must admit that I don't understand it. So here we see multiple istiods, four of them, acting as shards, and we see them talking to each other, and we see them communicating via these gateways. What Costin seemed to say was that somewhere in this picture is an endpoint that knows about all of the events within a few seconds, so it could give us an unsharded view. Hopefully that's true, and then we would just sort of talk to that.
A: Control plane revision: we stopped having that command, and it may be that some commands we should no longer have. One of the frustrations is we don't really know what our users are using. There are some commands that have sort of sat in experimental and never graduated; no one's asking us to graduate them; maybe we should get rid of them. There are other commands where we don't really know who's using them. So if anyone has an idea, for 1.7 or 1.8, for how we can find out what commands people are really using, that would be great.
B: I've had some thoughts on ways that we could collect that information while continuing to respect privacy. My concern is, I feel like I need someone to sign off on it; any time we're collecting data about users, that's a big change from previous iterations, from previous policy within the Istio project. Maybe it's something that we should ask the TOC about. But at a high level: if we essentially hash the command that they're running, and then run a Google search for the command...
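A minimal sketch of that hashing idea, with details the speaker left open filled in by assumption (the normalization rules and SHA-256 are my choices, not a proposal from the meeting): only the shape of the command would ever be reported, never user-supplied values.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// commandDigest reduces an istioctl invocation to a privacy-preserving
// fingerprint: it keeps the subcommand and flag names, drops flag values
// and positional arguments (which may contain pod names, namespaces, or
// file paths), then hashes what remains.
func commandDigest(args []string) string {
	var shape []string
	for i, a := range args {
		switch {
		case strings.HasPrefix(a, "-"):
			shape = append(shape, strings.SplitN(a, "=", 2)[0]) // drop "=value"
		case i == 0:
			shape = append(shape, a) // keep the subcommand word only
		}
	}
	sum := sha256.Sum256([]byte(strings.Join(shape, " ")))
	return fmt.Sprintf("%x", sum[:8])
}

func main() {
	// Invocations differing only in user-specific values hash the same.
	fmt.Println(commandDigest([]string{"proxy-status", "httpbin-abc.default"}))
	fmt.Println(commandDigest([]string{"proxy-status", "reviews-xyz.prod"}))
}
```

Note that flag values passed as separate tokens (for example "--namespace prod") are also dropped by this scheme, since only tokens beginning with "-" and the leading subcommand survive.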
A: That's interesting. I thought of something lower level, like just having an istioctl stats file that you could report with: you used proxy-status 10,000 times and proxy-config twice, or whatever it is. If anyone has the time to work on something: one thing that's been frustrating is not knowing where our group should put our efforts. But for 1.7, and maybe I'll just go for 1.7, the big thing is that this central istiod notion is coming.
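A sketch of what such a local stats file might hold; the path ~/.istioctl/stats.json and the JSON shape are invented for illustration, and nothing would leave the machine unless the user chose to report it:

```json
{
  "proxy-status": 10000,
  "proxy-config": 2,
  "analyze": 137
}
```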
A: Certainly in multi-cluster single-mesh you're going to have people who have access only to a particular cluster and cannot port-forward in. So the big security improvement that we've already implemented (John Howard did it for us) is that we don't exec into istio-system anymore, we just port-forward in. That's great, but we want to get rid of even that.
A
We
don't
want
to
require
anyone
using
is
to
cuddle
to
be
able
to
enumerate
pods
report
forward
into
SEO
system,
and
they
might
not
even
have
it
as
to
a
system
because
they're
using
an
sto
control
plan,
that's
provided
by
a
vendor
and
you
ordered
sto,
comm
or
something
so
to
read
us
on.
All
of
our
is
to
cuddle
intervals,
to
be
able
to
at
least
optionally
work
with
that,
and
I
talked
a
lot
with
Costin
about
different
techniques
of
how
to
do
that.
A: So let me talk about sort of what I want us to agree on, so that I can present this to the TOC for part two of our roadmap presentation. One is, I want to eliminate or reduce our dependency on the debug API of istiod, and on Kubernetes itself for listing pods. I haven't exactly said how much; hopefully a hundred percent, and as a P1 priority, but not a must-ship. If there's some command that still does it, we can deprecate that command and keep it in. How does that sound to people?
A
Like
it
Orkut,
and
does
that
so
I
have
this
style
guide
which
we
can
go
through
later
on
on
the
call
and
if
there's
anything
dumb
and
now
we
can
get
rid
of
it
for
p0
x',
for
this
new
approach,
I
have
a
1,
p,
0,
that's
just
to
give
a
control
plane
address.
This
is
for
the
case
where
maybe
don't
have
a
kubernetes,
and
you
just
want
to
say
hey.
You
know
my
SEO
is
running
at
this
address.
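In other words, something like the following, where the flag spelling is hypothetical (later istioctl releases grew an --xds-address flag on some experimental commands, but nothing here should be read as a committed interface):

```
istioctl proxy-status --istiod-address istiod.example.com:15012
```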
A: So that's the technique of how to find istiod: if you supply this, that's where it is. The question is, though, if you don't supply this, how do we find it? I mentioned several things, and we should talk about them. In fact, I probably should mention it in here, because I think it's important enough: how to find istiod. One is this P0, to explicitly say it, but that's just an internal debugging feature that I don't think people will use. The other is to find it.
A: You know, when I went in yesterday to the meeting with Costin, I was like, well look, we could put it here, we could put it here, or maybe we could even start up a pod and look at its config; we could do this. It was felt this was too low-level, so maybe it shouldn't be in here, but we sort of need to do it. So I'll just make it a P0: a new way to find istiod.
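To make the candidates concrete, here is a sketch in Go of the kind of lookup order being debated. The type, the field names, and the helper functions are stubs I invented; none of these steps is committed design:

```go
package main

import "fmt"

// Options holds the inputs istioctl would have; field names are hypothetical.
type Options struct {
	IstiodAddress string // explicit --istiod-address style flag
}

// Stubs standing in for real lookups.
func lookupIstioConfig() (string, bool) { return "", false }
func discoverViaKubernetes(o Options) (string, error) {
	// 1.6-style: locate istiod through the Kubernetes API and port-forward.
	return "istiod.istio-system.svc:15012", nil
}

// resolveIstiodAddress sketches the candidate lookup order discussed above.
func resolveIstiodAddress(opts Options) (string, error) {
	// 1. Explicit address (the internal, debug-level P0).
	if opts.IstiodAddress != "" {
		return opts.IstiodAddress, nil
	}
	// 2. A local Istio config file, analogous to a kubeconfig
	//    (see the strawman later in this discussion).
	if addr, ok := lookupIstioConfig(); ok {
		return addr, nil
	}
	// 3. Fall back to finding istiod via the cluster.
	return discoverViaKubernetes(opts)
}

func main() {
	addr, _ := resolveIstiodAddress(Options{})
	fmt.Println(addr)
}
```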
A: So the user... if the user has installed Istio in the current way: when a user installs Istio, istiod is running in istio-system, and the user who installed it has admin authority over his cluster, so he had permission to do it, and he has permission to call istiod where he has installed it. The problem is that in 1.7, networking has proposed...
A
That
is,
do
you
not
need
to
run
in
the
sto
system,
namespace
or
even
on
the
cluster
at
all,
which
means
that
when
one
seven
hits
this
will
not
work
and
there
is
a
new
way,
so
the
hope
it
come
up
with
a
way
that
works
both
with
people
who
have
installed
themselves
in
the
one
six
way
and
who
are
using
an
out
of
control
plan
way,
such
as
centralized
DoD
or
single
control.
Multi
cluster
configurations.
A: Some of these very much overlap with the stuff we've just talked about: making proxy status work better when we don't know the revision is very similar to making proxy status work when we don't know where istiod is. The commands to list the control planes might be different, depending on whether we decide that we won't have a config file the way kubectl does, or whether we want to have some things on the cluster, like a manifest of control planes. These items are largely unchanged from what we discussed two weeks ago.
A
The
other
thing
that
I
want
to
get
this
group
to
sign
off
on
is
these
items
which
had
been
p0.
This
troubleshooting
API
I
want
to
push
to
1/8
and
make
it
just
be
p2
for
for
this
release
and
whether
it
gets
pushed
and
in
what
format.
I
don't
know
about
these
musts,
so
we
almost
want
to
get
rid
of
these
mosques.
I
think
they're
actually
recorded
in
this
work,
meaning
if
so,
maybe
we
can
get
them
out
of
here.
So
we
don't
confuse
the
TOC
when
they
look
at
this.
A
You
know
a
go
package
to
be
ready
for
it.
We'll
probably
want
to
put
the
code
that
we're
writing
for
these
new
styles
of
flattening
as
Zod
and
the
current
commands
in
a
package
and
I'm
just
gonna
suggest
making
that
package
general
enough
that
we
could
put
stuff
on
top
of
it.
That
would
be
a
troubleshooting
API.
What
do
you
think.
B
In
pursuing
the
proxy
commands
before
this,
the
control
plane
commands
I'm
a
little
bit
concerned
that
we're
we're
building
a
lot
in
an
area
that
we
don't
have
a
good
deal
of
understanding.
You
know
this
is
something
that
you
and
I
became
aware
of
24
hours
ago,
we're
also
building
it
on
top
of
a
design
doc
that
is
not
approved
yet
and
is
being
implemented
as
experimental
in
1/7.
I
feel
like
we're,
building
a
lot
on
on
a
very
uncertain
foundation
here,
so
should
we
reduce
it
to
less
than
p2?
B: Do you put much weight on this eventing thing that Costin has?
F: The eventing is just a small part that would help implement, like, the proxy status command only. All the other stuff, like proxy-config, is already supported today, and the eventing stuff is actually a very small change to add, so I don't think it's too high-risk.
B: Can you elaborate there?
F: The eventing sends events about connections and disconnects, that sort of thing, which is what proxy status needs. Things like proxy-config, the Envoy config, that's already supported; it's not eventing, that's just normal XDS generation, which is basically what pilot does. So that's certainly not going away, ever.
A: So I went through the commands as part of this exercise, most of the commands, maybe not all the experimental ones, to tell whether they talk to Kubernetes, the pod's Envoy, or the control plane. So proxy status talks to the control plane; verify-install might talk to it, just to make sure it's healthy; version talks to the control plane; describe talks to the control plane; and I believe wait talks to the control plane. So those certainly all need to be reworked for this mechanism.
B: In 1.6, the things that we added for status, including background analysis information as well as distribution status, write to Kubernetes CRDs. That only works for one control plane per cluster. If you have two logical control planes, two revisions, running within a cluster, then at most one of them is going to write status, and the other should not. In order to access status information from the secondary control plane, the one that is not writing it to Kubernetes, we'll need some other API, and this is where we were going to be doing that.
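For reference, the 1.6-era distribution status is what backs the experimental wait command, which blocks until a config change has propagated to the sidecars; to the best of my recollection the syntax looks like this:

```
istioctl experimental wait --for=distribution virtualservice bookinfo.default
```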
B: One of the pieces of pushback we received was that you need to be able to see status data from the secondary control plane. For instance, if you're upgrading, your primary is writing to status and your canary is not, so you want to see what sort of view of status the canary has, which is going to be different from the primary's. That would be done through this control-plane-level API.
A: So one of my frustrations with the debug API is that, since only istioctl was using it, it broke. istioctl describe used to tell you not only whether your pod had strict or permissive TLS, but whether, by default, clients of it would use TLS or plaintext. And that feature of debug, telling us which clients were expected to be using TLS, sort of broke when we went from the old-style authentication policy to peer authentication.
A
My
that
kind
of
brittleness
is
a
problem,
and
when
my
my
question
for
getting
rid
of
debug
is
both,
how
do
we
know?
How
can
we
make
the
things
that
we
used
to
have
in
debug
either
be
gone,
or
that
we
can
use
so
that,
when
this
happens,
we
can
sort
of
tell
and
if
they
are
being
fixed,
are
they
being
fixed
through
these
topics?
In
the
event
API
like
when
a
destination
is
rule
is
created,
that
sort
of
tells
us
whether
TLS
will
be
used
or
not?
A: So definitely all of the new commands are going to need an integration test, and that needs to be performant, and that's going to be especially important for some of these multi-cluster and centralized-istiod things. And I should probably ask if anyone on this call knows: when we run integration tests, will there be an integration test for a sort of out-of-cluster istiod, so that we can make sure all of our stuff works in all those cases? What sort of deployment architectures are going to be available in the integration tests there?
A: So I've added a P0, which I think is going to fall on all of us who are writing these commands, so certainly myself, Mitch, and probably Liam for anything that is related to VMs or VM-specific commands (I just penciled you in there, Liam): that all of these new commands we write, the ones that find the data in different ways and find istiod in different ways, be tested as integration tests.
A: We didn't really create a better user experience, we just allowed multiple control planes to have a command line, and with 1.7 it looks like we may be in a similar cycle, where we're just trying to get all of our existing commands to support these new things that are coming down the road, for central istiod and XDS eventing. It would be ideal to talk about some other things too, and I tried to come up with a style guide to represent both our thoughts and the thoughts that networking has for what we should do.
A
So
the
first
one
was
that
we
should
be
thinking
about
making
making
things
cover
the
entire
control
plane,
so
going
back
really
and
taking
our
commands,
which
were
cluster
based
and
sto,
zero
and
SDO
one
and
B
thinking.
You
know
if
his
tio2
is
very
much
based
on
not
clusters
but
a
Multi
cluster
single
control
plane.
All
that
would
look
and
the
other
item
that
Costin
really
wanted,
and
we
also
have
not
done
the
homework
on
and
we
should
kick
ourselves
for
not
doing.
A
It
is
stop
reading
off
the
features
that
require
admin,
access
versus
the
features
that
just
require
access
to
the
workload
namespace,
so
you're
no
longer
in
a
world
where
people
have
their
own
kubernetes
clusters,
people
who
are
running
micro
services
and
production
have
access
to
the
whole
cluster,
there's
a
lot
more
multi
tenant
or
auerbach
stuff
in
place.
We
want
to
be
thinking
about
both
our
limitation,
taking
as
a
few
permissions
as
well,
but
just
the
structure
of
the
commands.
A: Do we have any thoughts on that, on the structure of sort of how our commands are? Probably that kind of breakdown is going to be needed for an API like the one Mitch is talking about as well. Most of the features people expect in such an API are features for debugging their own pods, but occasionally you need to debug pilot itself as well.
A: So I have this model that I have been thinking about, of how users interact in a complicated world. This is the model that I internally use when I'm thinking about it, when I'm imagining what a user does. So first, let me imagine that there are three clusters. Some clusters have an istiod running in istio-system on the cluster. Other clusters have an istiod running outside the cluster, as a service. And there's an advanced user, whom I'll call Knuth after the famous professor; he's administering production.
A: I want both of these users to have a good experience with Istio: to know what they have, to know that they have some central istiods that they're paying for, and to know that they have some istiods they've installed themselves. How can I allow them to work together to do things, to get at the istio-system installation that Knuth is administering...
A
That
bob
is
using,
we've
never
had
a
miss
do
config
before
is
we've
always
used
a
kubernetes
cube
config
to
find
the
clusters
that
is
teo
talks
to
and
I
realized
that,
if
is
Zod,
is
running
outside
of
a
cluster
one
acute
config.
This
is
just
a
strawman
for
how
one
might
look,
but
let's
think
about
this
and
come
back
in
two
weeks.
If
you
think
this
is
right,
I'll
I'll
make
a
publicly
editable
version
of
this
and
put
it
in
the
slag
for
everyone
to
see.
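Since the strawman on screen is not visible in the transcript, here is an invented sketch of what a kubeconfig-style Istio config might look like; every field name, address, and context name is hypothetical:

```yaml
# hypothetical ~/.istio/config, modeled on a kubeconfig
contexts:
- name: prod
  istiod: istiod.vendor.example.com:15012   # central istiod, vendor-hosted
- name: prod-canary
  istiod: istiod-canary.vendor.example.com:15012
- name: dev
  cluster: kind-dev                         # 1.6-style: find istiod via the cluster
  namespace: istio-system
current-context: prod
```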
A: Would this be useful? I was imagining a sort of user in this system, who's administering multiple systems, having a command that lists which control planes he has access to, both ones he's paying for and canaries he's running. Because if you start canarying control planes, it becomes hard to remember what you've got: one guy installs the canary, another guy is testing it, and tomorrow one of them is going to be promoted or rolled back. A command like kube contexts; a command to add a central istiod.
A: Maybe it mirrors the Kubernetes contexts completely. Maybe installing Istio only adds an entry to this list; maybe adding a canary only adds it to this list. Then we have this idea that we're going to roll back or roll forward a canary and make it be the master for a plane, and that sort of affects this list as well, right? I want to promote my canary to master.
A
Then
we
have
this
idea
well
I'm
running
I'm
running
some,
my
companies
provided
me:
they
purchased
a
sto
control
plan,
that's
being
administered
remotely
and
I'm
sort
of
kicking
the
tires
I'm
trying
it
out
and
I've
forgotten.
What
I've
done.
These
commands
sort
of
tell
you
in
the
same
way
that
you
know
that
cube
cuddle
can
think
commands.
Tell
you
which
clusters
you've
been
talking
to
lately.
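Continuing the kubectl analogy, the commands being imagined might look something like this; all of these names and flags are invented, and none of them exist in istioctl today:

```
istioctl context list                  # control planes you know about
istioctl context add prod-canary --istiod istiod-canary.example.com:15012
istioctl context promote prod-canary   # make the canary the master
```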
A: Think about yourself: if you ever got stuck with this, would you want this? And if so, we can put it in. Currently, the way that istioctl talks to pilot, the thing that we need to remove because it doesn't work with centralized istiod, is that we used to exec into istiod after enumerating the pods; now we port-forward, again thanks to John, which speeds things up but still requires all of this stuff. And what I showed Costin was, well...
A
If
it's
DoD
is
on
a
public
endpoint,
we
can
just
talk
to
it
on
a
flat
map
or
go
through
every
at
the
end.
If
this
DoD,
though,
is
running
on
a
cluster,
we
can't
easily
reach
it.
Have
a
VPN
into
kubernetes
cluster,
so
want
to
port
forward
to
it
the
sort
of
way
we
already
have,
but
you
can
only
port
forward
to
a
single
pod,
and
it's
done
so
I
want
to
port
or
do
the
uncharted
view.
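The port-forward in question is the standard Kubernetes one; note that even forwarding to the service lands on a single istiod pod, which is why a sharded control plane would give only that shard's view (the port and path below are per my understanding of the 1.6 layout):

```
kubectl -n istio-system port-forward svc/istiod 8080:8080
curl -s localhost:8080/debug/syncz    # reflects only the pod the forward hit
```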
A: Maybe next meeting, in two weeks, you can sort of tell us how, in your architecture, you think you will be talking to this event stuff, and whether it is sufficient for your needs. Sounds good. We can make that happen, and the people who are writing things won't break anything that you've added for your use cases.