From YouTube: Istio User Experience WG, September 1, 2020
Description
- 1.7 UX retrospective
- Validation Webhook Warning
- XDS event aggregation
A: Hello everyone, so we have several things today. We have the retrospective we want to do. Lynn can talk about her ServiceMeshCon talk, "Service Mesh Is Still Hard," which should be very interesting to everyone interested in the user experience. John has a short thing on the validation webhook warning and the possibility of doing warnings in the webhook. We also have two others that might slip: reorganizing the istioctl commands, and XDS event aggregation. Are we happy with this order, and then just ending at the top of the hour? Or do we want to reorder anything here?
A
Excellent,
so
what
we
do
every
release?
Is
everyone
edits
this
document
for
10
or
15
minutes
on
mute,
and
then
we
go
back
and
read
what
we
thought
were
the
conclusions
of
this
document.
I've
put
a
link
to
this.
In
the
slack
I
would
say
we
should
spend
the
next
10
or
15
minutes
until
people
slow
down
just
sort
of
entering
this,
and
we
can
see
what
we
thought
about
our
last
release.
How
does
that
sound
begin?
Now?
Please.
A: And are we ready to...

A: Okay, so after we type these, we try to say them out loud. The first shout-out is for me, but I don't want to say it myself. Can the people who wrote these say them?
B: Yeah, I'll read that out. Ed, you pushed through all kinds of ambiguity and disagreement to get us an actual, concrete implementation of XDS-based debugging in 1.7, which was quite the hat trick.
C: Yeah, I wrote the next one. Basically, Clay proposed the analysis API earlier this year, and Mitch helped push that into the API repo. I did some final work, so it was a collaboration among the people in this working group.
A: Excellent. And I don't know if Xenon is here, but I was so happy to get istioctl uninstall in at the very end. Next: what went well?
C: I wrote the first one. We had several interns, at least two, who worked on UX features in the past few months, and some of them also expressed interest in continuing to work on Istio. So yeah, that was a good effort.
B: Sorry, yeah. It was nice not to have as much conversation happening in the pull requests, as much uncertainty after the work was committed. It was unfortunate how much friction we did experience in trying to get troubleshooting done, but it was nice that that friction happened in the context of design docs, before all the work to implement had been done.
B: And yeah, istioctl test coverage is getting better release over release, so I think we should be seeing fewer and fewer backwards-compatibility breaks.
B: Oh, that's also me, sorry, I put all of mine in a row. The VM stabilization feature really took usability into consideration, and I saw several other working groups very actively considering usability before pushing their features out, in a way that I haven't seen in previous releases. So that was really nice.
A
I
wrote
my
first
documentation
test.
I
want
to
give
a.
I
didn't
want
to
shout
out
to
frank
because
he's
not
in
our
group,
but
that
framework
worked
well
and
I
think
it's
going
to
be
a
good
place
to
put
tests
in
addition
to
the
integration
area.
A: We also did not socialize or implement the idea of what it means when we're done with a canary release. There's an issue for that for 1.8; I think we're going to move forward on it this time, but that was a big frustration. And the last topic is: what should we completely stop?
B: Yeah, as you mentioned, Ed, we had some friction around XDS eventing. In particular, several of the things that we were told it would support, it did not support by the end of the release, which caused us to slip several features. So I think the moral of the story for 1.8 is: when we have dependencies on other working groups, we actively drive those dependencies and implement them ourselves if necessary.
A: Excellent. Well, thank you everyone for participating in the retrospective. The next item is Lynn's. Lynn, can you share your screen for this?
D: Yeah, so I will try to run through this a little bit quicker, just the content relevant for this team. About the ServiceMeshCon talk: you probably know it's a paid conference, so the slides and the video should be out in early September, which is now, actually, so maybe in a week or two. This is actually the first talk we did together with Boyan.
D
We
never
had
any
talk
with
with
anybody
from
from
linkedin
in
the
past,
so
this
is
like
the
first
historical
jewel
in
tag.
So
one
thing
I
want
to
mention
quickly
for
this
talk.
We
are
most
mostly
focused
on
service
mesh
trade,
of
how
people
are
using
service
mesh
and
what
are
the
pain
points
for
service
measure,
and
the
talk
was
mostly
focusing
on
the
complexity
of
the
service
mesh,
not
so
much
focusing
on
how
much
cpu
and
memory
we're
consuming.
D: Not so much focusing on latency either, because we believe the machine can work harder; the complexity for people to wrangle the mesh, to be able to adopt and learn the mesh, is the biggest thing for the user. So that's what this talk mostly focused on: the user's human cost, and who pays that cost. Some of you probably read my most recent blog on external istiod, the external control plane, and some of the personas.
D: We believe the persona that's really important in the service mesh domain is the platform owner: the team who owns the service mesh and sets the strategy for service owners to adopt it. The developers are whoever owns the individual services that are going to run in the mesh. And external istiod introduced an additional role for people who provide Istio as a service, which is why we added the mesh operator role, who is in charge of installing and upgrading the control plane.
D: I'm not going to go through the Linkerd examples, because, honestly, I thought their example was pretty boring. Basically, they talked about how you do zero configuration, how much changed from Linkerd 1 to 2 (basically, you do nothing), and how they have dashboards built in and supported in Linkerd 2, where they didn't have any...
D
They
didn't
have
much
building
in
one,
and
then
this
is
their
nash,
dashboard,
which
I
think
we
pretty
much
do
everything
they
have
for
the
issue
example,
I
was
scratching
a
lot
of
my
head
when
I
tried
to
build
the
slides,
because
I
actually
think
it
still
improves
a
lot
since
I
started
working
on
istio
like
three
years
ago.
It's
actually
a
lot
easier
to
use.
Now
I
was
giving
example
of
the
concepts
a
little
bit
harder
for
users
to
learn.
I
think
not
just
because
of
the
concept
itself.
D
It's
also
about
the
interaction
of
the
concept
like
in
this
case.
I
was
giving
an
example
of
a
consumer
and
producer
side
using
on
the
producer
side,
if
you're
actually
using
external
services,
you
have
to
worry
about
all
these
concepts
and
also,
most
importantly,
you
have
to
worry
about
how
these
concepts
connect
to
each
other,
and
then,
on
top
of
that,
you
have
to
worry
about
how
you
are
actually
scoping
your
resources
through
export,
2
and
psycha.
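As a minimal sketch of the two scoping mechanisms mentioned above (all names invented for illustration), a resource can limit its own visibility with exportTo, while a Sidecar resource limits what a namespace's workloads can see:

```yaml
# Hypothetical illustration of the two scoping concepts discussed above.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: producer-routes
  namespace: producer
spec:
  exportTo:
  - "."                    # visible only within the producer namespace
  hosts:
  - producer.example.com
  http:
  - route:
    - destination:
        host: producer.producer.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: consumer
spec:
  egress:
  - hosts:
    - "./*"                # services in the consumer namespace itself
    - "producer/*"         # plus whatever the producer namespace exports
```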
D: These are not small concepts that users have to learn and get through the learning curve with. And then, if service A happens to also be exposed on the gateway, they have to learn a bunch of concepts related to that too. I gave a shout-out that our team actually did a really good job helping maintain the boundary between the platform team and the service owner team (the developers who operate their services in the mesh) by using virtual service delegation.
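For readers unfamiliar with the delegation pattern, here is a hedged sketch (names invented): the platform team owns the gateway-facing root VirtualService and delegates a path to a VirtualService owned by the service team, so each team edits only its own resource:

```yaml
# Hypothetical example: the platform team owns the gateway-facing route.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: root
  namespace: platform
spec:
  hosts:
  - shop.example.com
  gateways:
  - platform/public-gateway
  http:
  - match:
    - uri:
        prefix: /orders
    delegate:              # hand routing for /orders to the service team
      name: orders
      namespace: orders-team
---
# The service team owns the detailed routing; a delegate VirtualService
# sets no hosts or gateways of its own.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
  namespace: orders-team
spec:
  http:
  - route:
    - destination:
        host: orders.orders-team.svc.cluster.local
```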
D
And
then
I
talk
about
how
I
had
a
hard
time
with
the
tcp
and
the
host,
so
this
is
interesting.
I
said
john.
I
think
you
know
my
story.
I
had
a
real
hard
time
while
I
was
working
on
external
sjod
that
it's
not
obvious
to
me
that
when
you're
using
tcp,
even
though
we
allow
hosts
in
the
gateway
resource
but
when
tcp
is
used,
the
hosts
are
not
really
being
used.
D
So
I
had
a
simple
virtual
service
and
gateway
resource
like
this,
and
I
find
out
that
didn't
work
at
all
and
it
turned
out
because
I
was
using
the
tcp
protocol
and
the
host
wasn't
really
applied
when
tcp
is
used,
but
we
didn't
have
any
errors.
So
it
took
me
a
while
to
figure
this
out
and
I
think
I
asked
john
and
frank,
multiple
users
to
kind
of
think
out.
What's
going
on
so
when
you
find
a
bug,
that's
when
service
mesh
is
really
really
hot.
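The slide itself isn't reproduced in the transcript, but the trap described looks roughly like this (a hedged reconstruction with invented names): a TCP server in the Gateway carries a hosts entry, yet plain TCP gives the proxy no SNI or Host header to match, so the value is silently ignored:

```yaml
# Hypothetical reconstruction of the scenario described above.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP        # plain TCP carries no SNI/Host to match on,
    hosts:                 # so this entry is silently not applied
    - db.example.com
```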
D
That's
when
you
know
you
have
to
kind
of
we're
looking
at
the
code
and
the
bug
are
the
maintainers
on
the
project
like
in
my
case,
I
actually
find
a
bug
with
sni
host
based
routing,
and
I
work
with
john
very
closely
to
actually
fix
the
scene
1.7,
but
as
a
user
who
stumped
onto
trying
to
do
this
simple
scenario,
I
mean
the
plan,
the
the
yama
was
perfectly
just
nothing
works.
It's
a
it's
actually,
a
much
bigger
learning
curve
to
figure
out
what
is
exactly
going
on
and
and
even
propose
affects
them.
D
The
other
thing
I
I
sort
of
share
with
the
story.
Some
of
you
know
that
I
wrote
a
book
about
istio
explained
last
year
and
as
part
of
the
book,
it
was
like
a
really
three
services
and
consume
external
services,
so
it
was
really
simple.
D
So
the
the
whole
thing
we
were
trying
to
do
is
teaching
people
how
istio
can
be
simple
right,
so
we
added
psychoproxy
to
each
of
the
services
and
we
tighten
up
the
security
and
then
we
actually
find
out
the
first
thing
we
find
out
that
health
check
was
not
working.
The
moment
we
tighten
up
security,
I
think
this
is
fixed
now
with
the
http
pro,
but
back
then,
when
this
wasn't
fixed,
it
was
really
hard
for
the
user,
because
all
the
services
went
down
the
moment
he
enabled
me
to
trs.
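For context, the fix referred to is Istio's probe rewriting, where the sidecar injector rewrites HTTP health-check probes so the kubelet's checks still succeed once mTLS is enforced. A minimal sketch (pod and image names invented):

```yaml
# Hypothetical pod snippet: ask the injector to rewrite HTTP probes so
# kubelet health checks keep working after mTLS is tightened.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    sidecar.istio.io/rewriteAppHTTPProbers: "true"
spec:
  containers:
  - name: app
    image: example/app:latest
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
```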
D: The other thing I found out is that, because my services were already communicating using HTTPS, if I did header-based routing on some of my services, it didn't work, because the traffic was encrypted. So I had to modify the code to downgrade to HTTP for communicating with the other services. That took me a while to find out, because it wasn't obvious; a lot of our documentation was more about telling people, hey:
D: If you need to move your services to the mesh, you need to make sure your deployment YAML and your service YAML fit the requirements (we have a page on deployments and services), but not so much about the fact that, when you want to fully leverage your mesh, you have to make sure the traffic among your services is plain traffic, so that you can do header-based routing and the proxy can actually interpret the requests.
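To make the constraint concrete, here is a hedged sketch of a header-based route (hosts and subsets invented, in the style of the Bookinfo sample). The match only fires if the sidecar can parse the request, meaning the application sends plain HTTP; the mesh can still encrypt hop-to-hop with mTLS:

```yaml
# Hypothetical header-based route: the sidecar must be able to read the
# request, or this match never fires.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```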
D
So
I
think
this
is
the
key
thing,
so
the
one
I
always
mentioned
earlier
was
more
setting
the
stage,
and
this
is
the
key
thing
that
we
want
to
say.
You
know:
how
do
we
actually
make
service
mesh
boring
that
we
don't
have
to
talk
about
it,
because
now,
every
single
conference
with
every
single
you
know
different
user
groups?
There's
always
a
lot
of
hypes
about
service
match
and
also
part
of
the
reason
was
it
was
hard
when
people
try
stuff.
D: So I was thinking: what if we could actually provide an interesting tool, based on where we are with our mesh, to help users analyze? It's kind of like our analyze tool, but maybe with an expanded purpose. By the way, I didn't use istioctl in the talk, because the talk was generic to service mesh, so I didn't want to just use Istio there; but the thought is about using a specific service mesh CLI to be able to analyze based on the user's scope.
D
So
the
user
could
say
you
know
analyze
this
particular
namespace
or
maybe
analyze
the
combination
of
a
few
namespaces
and
then
give
us
a
like
a
recommendation
of
you
know
whether
service
measure
would
be
good
to
have
like
in
our
book.
When
I
go
through
all
these
three
services
within
the
trade
name
spaces
and
in
fact
they
were
using
the
same
language.
So
if
it's
not
writing
the
book
for
the
purpose
of
tutorial
teaching
other
people,
it's
probably
not
was
an
effort
to
do
service
mesh,
giving
all
the
services
are
using
java.
D: And we think it would be really interesting to provide that as part of analyze. Even though, in this example, this particular service may be using automatic protocol detection to detect that it's using HTTP, the tool would actually be able to recommend to people, hey, it would be good practice to manually declare the protocol, and give you that alert. Even though our automatic protocol detection could help users onboard with zero code changes, I find that, most likely, that's actually not possible.
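For reference, manually declaring the protocol in Istio is done by naming the Service port with a protocol prefix, or, on newer Kubernetes versions, by setting appProtocol. A small sketch with invented names:

```yaml
# Hypothetical Service: declare the protocol explicitly rather than
# relying on automatic protocol detection.
apiVersion: v1
kind: Service
metadata:
  name: catalog
spec:
  selector:
    app: catalog
  ports:
  - name: http-web         # the "<protocol>-" prefix declares the protocol
    port: 8080
    appProtocol: http      # alternative field on recent Kubernetes versions
```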
D: I think the main hurdles are: can your headers be propagated? Can you actually run your services with a sidecar? Also, are your services doing retry, timeout, and secure communication themselves? Because if you are going to leverage the mesh to do those, it's best to actually have them disabled in your code, so that you avoid a lot of surprises.
D
Like
I
I
mentioned
earlier,
I
was
doing
secure
communications
in
my
own
services
and
then
the
moment
I
had
the
moment.
I
had
a
head-based
routine.
It
was
just
not
working
so
that
those
are
the
examples
that,
in
order
to
fully
leverage
the
mesh,
sometimes
you
do
need
to
disable
these
functionalities
and
rely
on
the
mesh
to
provide
these
functionalities
to
you
and
the
other
thing
interesting.
I
thought
it
would
be.
D
You
know
you
know,
what's
going
on
with
your
own
service,
if
you
actually
develop
the
service,
but
a
lot
of
times,
you
might
be
using
images
from
other
people,
maybe
in
docker
hub
or
some
other
repository.
Now
the
questions
comes
also,
you
know,
can
you
actually
run
the
the
the
container
image
in
the
mesh?
Can
you
is
the
container
be
able
to
fully
leverage
the
mesh,
so
there
could
be
certain
levels
of
how
compatible
they
are
with
the
service
mesh
and
one
example
I
knew
was
last
thanksgiving.
D
I
was
helping
our
user
debugging
a
problem
that
the
moment
they
put
a
psycho
onto
zookeeper
and
then
they
find
out
the
the
leader
election
of
zookeeper,
complete
broke
and
their
zookeeper
just
keep
doing
reboots
for,
for,
like
you
see
in
this
in
instances
for
a
crash
loop
back
off
for
their
parts.
So
this
is
an
example
of
some
of
the
container
image
out
there.
If
you
throw
a
sidecut,
it
may
not
even
work
at
all,
so
it
would
be
like
completed
function
broken
for
for
this
application.
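One common mitigation for cases like this (not discussed in the meeting; offered here as a hedged aside) is to exclude the application's peer-to-peer ports from sidecar interception using Istio's traffic annotations. The port numbers below are the conventional ZooKeeper quorum and election ports:

```yaml
# Hypothetical pod snippet: keep ZooKeeper's quorum and election ports out
# of the sidecar's iptables redirection so peers can talk directly.
apiVersion: v1
kind: Pod
metadata:
  name: zookeeper-0
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "2888,3888"
    traffic.sidecar.istio.io/excludeOutboundPorts: "2888,3888"
spec:
  containers:
  - name: zookeeper
    image: zookeeper:3.6
```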
D
The
other
thing
we
talked
about
is
metrics
as
part
of
service
mesh.
It's
really
cool
like
in
istio.
I
can,
if
I
know
which
application
I
like,
for
instance
booking
for
I
want
to
gain
visibility.
I
mean
I
can
have
all
that
visibility.
I
want,
but
it's
actually
pretty
hard
if
you
want
to
do
it
at
the
match
level
so
like.
D
So
I
think
these
are
just
different
thoughts,
how
we
can
make
service
mesh
more
easier.
Most
importantly,
I
think
to
me
I
feel,
like
the
most
hard
part
portion
is
not
not
only
the
service
mesh
framework
itself
like
istio,
it's
really
about
dealing
with
different
type
of
applications,
and
you
know
whether
it
was
the
effort
to
move
to
the
mesh
of
whether
the
application
has
been
running
successfully
with
cyca
or
whether
the
application
has
been
successfully
enabled
fully
for
service
mesh.
Those
are
the
harder
things
beyond
just
the
base.
A
I
think
this
is
a
great
presentation
lane.
I
myself
have
wondered
about.
You
know
a
tool
that
would
analyze
your
current
usage
before
histo
and
then
apply
it
and
generate
some
recommendations,
so
it
might
be
valuable
for
this
group
or
other
groups
to
come
up
with
some
proposals.
A
You
know
when
I,
when
I
I'm
trying
to
apply
sto
manually,
I
I
created
the
add
to
mesh
command
and
then
I
will
add
you
know
one
part
at
a
time
to
see
you
know
when
it
starts
breaking
would
be
very
easy
to
automate
those
that
I
do
manually
to
to
do
that
and,
to
sort
of
you
know,
incorporate
wireshark
and
sniff
the
traffic
and
incorporate
chaos
engineering
and
see
how
much
reliability
is
already
built
into
the
pods.
D
Yeah,
I
know
I
I
think
maybe
we
could
potentially
start
small
like
for
some
of
the
application
and
services.
We
know
they
already
have
a
problem
like
we've
done
a
lot
of
study
with
like
readys
to
keep
her
and
some
of
the
other
things.
Maybe
we
could
you
know
if
we
can
just
present
what
we
know
for
the
user,
because
I
can't
remember
how
many
times
people
ask
me
about
reddish
and
some
of
these
applications.
D
You
know
I
have
to
dig
through
our
istio
io
or
discuss
to
find
out
for
people,
so
we
can,
you
know,
present
them
using
the
analyzer,
even
though
we
can't
do
like
100,
if
we
can
do
a
reasonable
starting
percentage
and
gradually
grow.
I
think
it
would
be
super
helpful
too.
E: Yeah, a question: in a hypothetical world where Istio is perfect, is there still a need for this thing? Or is it just transparent: you put the sidecar everywhere and things magically work? Is this working around bugs, or is this an inherent service mesh need?
A: That's true, but what I found is that people are using, say, some Java library that's already doing TLS; it's already speaking TLS 1.3 and managing all that stuff itself. And Istio comes in and ruins things, because it doesn't know how to get out of the way of that. So we want Istio to get out of the way, to back off to TCP or something to get metrics; but we also want, before you apply Istio, a way of saying, well...
E
Right
yeah,
I
think
that
makes
sense.
I
think
that,
in
my
opinion,
though,
like
the
only
reason
that
we
should
not
have
a
sidecar
is
for
high
performance
cases
where
you
really
need
to
squeeze
things
out
just
because
your
application
is
doing
tls
or
it
has.
You
know
you
have
some
library
that
does
reliability,
and
maybe
you
have
good
metrics.
E
It
doesn't
mean
that
east
just
doesn't
still
provide
value
right
now
we
do
a
bad
job
in
some
places
like
zookeeper,
where
we
break
things
so,
but
this
is,
you
know,
assuming
we
fix
those
cases,
but
I
do
think
you're
right
like
it
would
be
useful
to
say:
hey.
This
service
is
already
sending
tls,
so
you
can't
use
you
know
all
these
other
features.
D: Yeah, it's more about alerting people based on the state of the mesh today. Certainly, John, like you said, if everything worked perfectly, we wouldn't need to warn anybody. It's more about: for a given version of Istio, if we run the analyzer, we report people on these problems, based on what they want to add to the mesh or on the fact that they already added it to the mesh, because I think the cost of finding the problem and finding a resolution is super high within Istio.
A: So, of course, one of the reasons we never put the work into analyzing that automatically and warning people is that it's so much better just to do what we actually did do, and solve it. But it would be good to make it easier for us to know which features of Istio people are still having trouble with.
D: So how about this: maybe we can take this offline in a document, listing the things we could potentially realize through tooling, based on where Istio is today.
A: Thank you for presenting that; this was great, Lynn. Now, John, do you want to talk about the validation webhook warning?
E: Yeah. So basically the short summary is that Kubernetes added a new feature for warnings on webhooks. Today we can say either this config item is good, or this config item is bad, and reject it. Now we can also give a warning and say, you're doing something we don't recommend, but we're not going to actually block the config from being created.
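For reference, the Kubernetes mechanism being described is the warnings field on the admission review response, available since Kubernetes 1.19. A minimal sketch of the response body, shown as YAML for readability (the webhook actually returns the JSON equivalent):

```yaml
# Sketch of an admission webhook response that allows the object but
# attaches a warning; the API server relays it and kubectl displays it.
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<uid copied from the request>"
  allowed: true
  warnings:
  - "virtual service references gateway \"foo\" which does not exist"
```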
E
So
I
think
this
is
super
useful.
We
can
add
a
lot
of
stuff
there,
so
the
stock
is
basically,
I
was
trying
to
get
a
list
of
things.
We
want
to
add
warnings
for
right
now.
I
just
have
a
small
list
because
I
didn't
think
that
hard
about
it,
but
we
can
expand
it
and
then
we
just
need
to
do
some
fairly
small
amount
of
work
to
you
know
refactor
things,
so
we
can
plumb
these
warnings
through.
I
think,
and
that's
pretty
much.
It.
F
These
warnings
get
the
user,
gets
these
warnings
as
soon
as
the
applied
config
like.
So
if
they
apply
like
a
virtual
resource,
that's
referring
to
a
gateway
that
doesn't
exist.
They
would
immediately
get
a
warning.
E: Yeah. Actually, that was a good point, Mitch. I generally agree with you, but I was thinking about it this morning: part of the reason we can't do that in validation is that validation has to be stateless, right? But a warning doesn't necessarily need to be. We could technically warn on cross-resource stuff, I think.
E
Yeah,
so
I
don't
know
off
the
bat.
If
it's
a
good
idea,
we
should
probably
ask
kubernetes
what
they
think
as
well,
but
it
seems
like
it
might
be
possible.
A: I can think of some warnings that are single-item warnings that I would love to put in this way. Which reminds me, I have a pull request waiting for approval, analyze duplicate virtual service matches, and it has these Infos that could easily become warnings when the resource is applied.
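To illustrate the class of problem that analyzer catches (a sketch with invented names): HTTP match rules in a VirtualService are evaluated in order, so a duplicated match means the later route can never be reached, a natural candidate for an apply-time warning:

```yaml
# Hypothetical VirtualService with a duplicate match: the second /api rule
# is unreachable because the first always wins.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
  - api.example.com
  http:
  - match:
    - uri:
        prefix: /api
    route:
    - destination:
        host: api-v1
  - match:
    - uri:
        prefix: /api     # duplicates the rule above; never matched
    route:
    - destination:
        host: api-v2
```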
A: Thank you. Was that everything about the webhook?
E: Yeah. Also, just for you, since I know you've looked at EnvoyFilters a lot: I think there are a lot of things in EnvoyFilters that are probably warning-type things that maybe don't make sense as errors, so this could help there as well. That's pretty much it. And I don't want to do this myself, so if someone wants to actually do the implementation, that would be amazing.
E: We have to verify that. If that's the case, then great: we just implement it, and then once people are on newer versions they get these warnings for free. If...
C: Okay, because I think you also mentioned the CRD deprecation warning; I don't think that will be supported if the user's Kubernetes version is lower than 1.19.
C: Okay, is there a bug tracking this? If one hasn't been created, I can create one just to track it. It doesn't have to be this release if we don't have time, but yeah, we'll do that.
A: I don't think we have time to do the persona and role reorganization, so let's start on XDS aggregation.
B: If you remember, in 1.7, when we originally started using XDS events, we were told that we would be given one XDS channel that would carry events for the entire control plane, and that did not happen, due to some technological constraints and some time constraints on 1.7.
B
So
austin
has
asked
that
we
take
the
lead
in
implementing
this
in
one
eighth,
and
so
this
is
my
first
attempt
at
a
design
for
that
in
particular,
what
we'd
like
to
do
is
use
the
cloud
events
api,
which
is
a
cncf
technology
that
is
message,
bus
agnostic
in
order
to
share
data
between
instances
of
sdod.
B
As
long
as
this
cloud
event
is
compatible,
it's
really
more
or
less
a
configuration
change
along
with
implementing
something
to
set
up
the
message,
bus
itself
and
authenticate.
But
all
of
our
interactions
with
the
message
bus
should
be
generic
enough
that
you
can
do
it
with
just
about
any
implementation.
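As a rough illustration of the bus-agnostic idea (the event type and payload are invented for this sketch), a CloudEvents envelope carries a small set of standard attributes plus opaque data, so the same event could ride on NATS, Kafka, or any other transport:

```yaml
# Hypothetical CloudEvents envelope (JSON on the wire; YAML here for
# readability) that one istiod instance might publish for its peers.
specversion: "1.0"
id: "b7c1e2d4-4a7e-9b1a-000000000000"
source: "/istiod/istiod-7d8f9c-abcde"   # which control-plane instance sent it
type: "io.istio.xds.proxy-connected"    # invented event type for this sketch
datacontenttype: "application/json"
data:
  proxy: "sidecar~10.0.0.12~app-1.default~default.svc.cluster.local"
```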
B: All right, we'll take that as a yes. One of the other objectives is that we would like this to be as simple as possible from an orchestration perspective. You're all familiar with the work that's happened in the last year to centralize all of our operations into istiod, so that our users aren't having to run nine different services (I don't think it was ever quite nine, but many), each of which has its own logs, its own problems, and its own configuration.
B: That has the advantage that we can actually run it inside the istiod process itself, to again minimize the number of things that our customers need to spin up. So my proposal is that, in open source, we use NATS to implement the CloudEvents API, and that we share data between istiod instances using these cloud events. That will enable things like the centralized Istio debugging that we've been trying to get unblocked.
B
Our
debug
api
will
work
that
way.
Koston
also
mentioned
that
networking
has
a
few
features
that
they
would
like
to
leverage
this
as
well,
so
they
would
be
able
to
build
on
top
of
this.
In
addition,
I'm
trying
to
think
if
there's
other
details.
E
I
thought
the
whole
point
of
cloud
events
is
that
it's
a
generic
api
and
we
just
implement
the
api
and
then
they
can
plug
in
whatever
they
want.
Why
do
we
need
to
invent
gnats.
B
So
we
don't
need
to
invent
gnats
the
you're
right
that
cloud
events
is
an
api,
but
in
open
source.
If
we're
going
to
leverage
that
api,
we
need
to
have
an
implementation
right.
There
needs
to
be
some
message
of
us
behind
those
calls,
so
this
would
support
users.
Well,
we.
B: This would not be the famous bundling. This would not be us setting up a deployment for NATS, right? It all happens within the istiod deployment.
E
Make
sense
yeah,
but
I
don't
see
what
like
I've
we,
I
don't
think
we
want
to
take
a
dependency
on
gnats
right,
like
I
don't
think,
ibm's
going
to
use
gnats
where
google's
not
going
to
use
gnats.
It's
just
going
to
be
like
this
worst
common
denominator.
No
one's
going
to
want
to
use
it,
but
then
we
still
have
to
maintain
it,
and
now
it's
no
longer
just
this
generic
pub
sub
stuff,
like
that,
I
mean
a
normal
user
like
on
a
single
cluster
and
non-central
easter
egg.
They
don't
need
naps
at
all
right.
B
That
will
not
be
the
way
that
things
work
moving
forward.
All
debugging
happens
over
xds
and
all
xds
happens
over
the
load.
Balancer
right.
There
is
no
connecting
to
sdod
on
a
backdoor.
That's
going
away.
A
So
if
there's
only
one
stod
pod,
there's
no
need
for
any
sharing.
If
there's
two
s2d
pods
hidden
behind
a
single
endpoint,
the
client
can't
choose
which
one
to
connect
to
so
the
two
pods
are
supposed
to
be
able
to
exchange
enough
information
so
that
no
matter
which
pod
clients
get
they
get
the
same
answer.
A
So
I
am
not.
I
have
not
read
this
document.
A
I've
heard
about
kubernetes
events
and
config
maps
and
mats
and
message
passing
and
stuff
so,
and
I
don't
know
about
performance
or
any
of
that
stuff
it'd
be
great
to
know
to
be
able
to
offer
a
reasonable
opinion
without
having
to
be
an
expert
in
all
those
things.
B
Yeah,
that's
fair
enough.
I
can
add
a
diagram
to
help
with
that.
John.
Do
your
question
about
about
having
this
as
a
plug-in.
None
of
the
other
plug-ins
are
required
for
the
operation
of
istio
right.
You
can
run
istio
without
recording
telemetry
data.
You
can
run
insteo
without
having
rafana
or
kiali
present
and
istio
will
work.
Just
fine
in
terms
of
debugging
istio.
There
will
istio
will
not
work
without
some
implementation
of
a
message.
Bus
present
so.
A
Reasonable
mitch,
I
I
want
to
point
out
that
the
debugging
api
doesn't
need
to
be
correct
or
fast,
so
if,
if
one
stod
picks
up
three
clients
and
then
a
few
milliseconds
later,
the
other
one
is
asked,
tell
me
about
clients
and
it
returns
an
answer.
That's
five
seconds
old.
It's
no
big
deal,
so
I
would
go
for
the
easiest
one.
A
Okay,
I
think
we've
reached
the
top
of
the
hour
so
next
week
we
will
talk
about
our.
We
will
return
to
organizing
the
commands
and
breaking
things
apart
based
on
personas
and
if
you're
a
cloud
provider
or
not.
Thank
you.
Everyone
have
a
good
holiday.