From YouTube: Istio User Experience working group June 23 2020
Description
Istio working group meeting, User Experience group, held June 23 2020
A
So I had a small item I wanted to run past everyone. I have new columns for proxy-config routes and clusters. Let me go slowly.

A
So several people have complained about how little you see when you do the proxy-config summary commands for routes on the ingress. You see very little; you just see this, which is completely useless. I had written some code before to display more information, and I imported that code and made a PR out of it. So with this PR, the information you'll see is much more interesting.

A
It also tells you which virtual service the match came from, which is great. That's sort of how it looks on a typical ingress controller.

A
When done on a regular mesh pod, there is quite a bit more information in the same columns. The match is pretty boring usually, but you can see this internal name here comes in very handy, and all of the... sort of the domains that it handles. So it looks pretty nice. For a while I'd had a column to display where it went to, but I got rid of that because it became too wide. Anyway, I encourage people to review it.

A
There's also, just like the virtual service column here, a destination rule column for clusters, and that should help people. People have been asking me: is my destination rule being applied or not? You never had a... you could use the describe command on an individual pod, but this is maybe a different, easier way to see if it's, you know, fine or not.
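For context, a minimal sketch of how a column like that can be derived, assuming the go-control-plane route types and the "istio"/"config" filter-metadata convention that shows up in Envoy config dumps; the exact fields the PR reads may differ from this illustration:

```go
// Sketch: recover the originating VirtualService from a dumped Envoy route.
// Istiod stamps generated config with filter metadata under the "istio" key,
// whose "config" field holds a path such as
// "/apis/networking.istio.io/v1alpha3/namespaces/default/virtual-service/bookinfo".
package main

import (
	"fmt"
	"strings"

	routev3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
)

// configSource returns "namespace/name" of the config that produced the route.
func configSource(r *routev3.Route) string {
	istioMeta := r.GetMetadata().GetFilterMetadata()["istio"]
	if istioMeta == nil {
		return "" // route did not come from a user config object
	}
	path := istioMeta.GetFields()["config"].GetStringValue()
	parts := strings.Split(path, "/")
	if len(parts) < 3 {
		return path
	}
	return parts[len(parts)-3] + "/" + parts[len(parts)-1]
}

func main() {
	fmt.Println(configSource(&routev3.Route{})) // prints nothing: no metadata set
}
```

Clusters carry the same style of metadata, which is presumably what feeds the destination rule column.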
A
So I wanted to bring up my hobby horse, the dependencies of the central istiod refactor. This is the thing that I have been spending my time on.

A
I made a small change to the picture earlier and I want to show everyone that picture change first. Also, I know Lin's on the call and she's going to be showing some document changes, and those documents don't include the istioctl part; it'd be good to have at least an appendix on Lin's document with this. This is the latest, so we have sort of two routes that these commands can go.

A
For the control plane, it's best if a command goes to the exact control plane that the sleep pod does, and we'd always envisioned it going through this sort of reflector on the sidecar itself; there's a PR for that. Then, if you ask for the proxy status of all of the proxies: last time I presented this, we saw istioctl talking directly to each one of these, and we decided that that's not going to be feasible, because their IP addresses will be private.

A
Behind this ingress, we usually have no way to see them, so the story is going to be: even if we know the xDS address of istiod, we're going to talk to one of them and it will somehow push or pull that information from the others. So this, I think, is the picture for how it has to be done. I don't think we can get away with getting rid of any of these gray arrows; istioctl will always talk to the system.

A
There's the unsharded view issue that we talked about an hour ago. We need an owner for this, so let me bring up the unsharded view. So this is the problem: if istiod is sharded into these multiple shards and you run proxy-status, how do you get the proxy status for all of the shards?

A
If this was HTTP, I could imagine the implementation Costin gave me, using ClusterLoadAssignment, having the pod call its siblings. That would be great, combining the answers.

A
I'm a little unclear how to do it in terms of gRPC. I'm hoping that Costin can take over this item, triage it, and provide at least an implementation guide, so we can figure out, when we're streaming, this unsharded view of xDS.
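A minimal sketch of the fan-out-and-merge idea for the sharded case, assuming each istiod shard keeps serving the /debug/syncz endpoint that proxy-status reads today and is reachable at some per-shard URL; the port and the JSON shape are assumptions here:

```go
// Sketch: aggregate proxy sync status across istiod shards by querying each
// replica and concatenating the per-shard answers.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func aggregateSyncz(shardURLs []string) ([]json.RawMessage, error) {
	var merged []json.RawMessage
	for _, base := range shardURLs {
		resp, err := http.Get(base + "/debug/syncz")
		if err != nil {
			return nil, fmt.Errorf("shard %s: %w", base, err)
		}
		var part []json.RawMessage
		err = json.NewDecoder(resp.Body).Decode(&part)
		resp.Body.Close()
		if err != nil {
			return nil, fmt.Errorf("shard %s: %w", base, err)
		}
		merged = append(merged, part...) // each entry describes one proxy
	}
	return merged, nil
}

func main() {
	statuses, err := aggregateSyncz([]string{"http://localhost:15014"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("%d proxies reported\n", len(statuses))
}
```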
B
Yeah, so there's some discussion also on Slack on this topic. One thing we mentioned in a previous UX meeting, I think, was that we can reflect those events into the Kubernetes events feed, and that's a very simple change. And if you have access to the API server, you can get a stream of Kubernetes events, and inside them we can put the same information that you would get over xDS.
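A minimal sketch of what reflecting xDS ACKs and NACKs into the Kubernetes events feed could look like, using client-go's event recorder; the namespace, reasons, and messages are invented for illustration:

```go
// Sketch: emit a Kubernetes Event when a proxy ACKs or NACKs an xDS push.
// Names like "XdsPushNack" are illustrative, not an agreed-on schema.
package xdsevents

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/record"
)

func newRecorder() (record.EventRecorder, error) {
	cfg, err := rest.InClusterConfig() // istiod runs in-cluster
	if err != nil {
		return nil, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	b := record.NewBroadcaster()
	b.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events("istio-system"),
	})
	return b.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "istiod"}), nil
}

// onPushAck would be called from the existing call site that generates the
// xDS event, as described above.
func onPushAck(rec record.EventRecorder, proxyPod, ns string, nacked bool) {
	ref := &corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: proxyPod}
	if nacked {
		rec.Event(ref, corev1.EventTypeWarning, "XdsPushNack", "proxy rejected config")
		return
	}
	rec.Event(ref, corev1.EventTypeNormal, "XdsPushAck", "proxy accepted config")
}
```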
B
Except for that, this would be a lot easier, so I understand. So again, what access you can have is different than full access. I mean, it's obviously a requirement, or reasonable, to not have access to modify pods and do all kinds of stuff, but with RBAC permissions... This is not for the short term, I mean for 1.8, until we have it, because what we want to do with xDS aggregation is pretty complicated, especially with multi-network and other things.

B
If not, we can reflect them... I don't know, there are other ways to push them to other pub/sub systems. I'm very happy to add code to integrate with CNCF eventing, and that will allow us to push them to whatever: NATS, if you want, or any other mechanisms that are supported by CNCF eventing.

C
So, Costin, I'm pretty concerned about changing the entire payload and protocol that we're using for communicating this data with two weeks of development left in the release.

B
We are not changing either one. The design was that we want to use eventing, just like CNCF CloudEvents; we adopt the same semantics. We define what's inside the event and the NACK. Again, we still want to clarify what we put in the NACK and what we don't, but how the event is transported was always supposed to be flexible, and possible to implement in multiple ways. I can tell you xDS with security...

B
It seems like a trivial change; I mean, it's just three lines of code, mostly. You know, the call into the Kube API to generate an event: we have it, it's already extracted. There is a call site where it generates the xDS event; at that particular point we can insert additional writes, basically: write to Kubernetes events, write to CNCF, write to whatever you want. It's not really...

A
If the future is xDS events, and I already have a pull request for proxy-status that uses them, I'd almost rather harden that pull request and do a hack for that. That's...

A
Fine, you can, you can get... yeah, sorry. It may be that if we have a third party providing central istiod, you can't see all the pods, because you don't... In 1.7 you wait till 1.8 for that; I'm willing to let it slip, yep. But I want to understand if, you know, we're going to continue down the road that I went down in this item, or if there's any possibility that we should try to do this ourselves.

A
If I should try to put this in pilot or something.

B
It would be wonderful if you can put it in pilot. As I mentioned in the past, the difficulties are around, you know, multi-network and other things, not necessarily connecting; you already have code to do port-forwarding.

B
If you do port-forwarding from istioctl, you can get the istiod pods and then you can port-forward the unsecured port, because that's where I'm stuck, really: it's an authentication and security model for istiod.

B
So, because I don't want... I cannot connect to the secure port. I mean your PR: you saw, in your PR, it's wonderful, it works, but it's not really something we want in production, an insecure connection. But if you're going to do port-forwarding to the port 15010, you have Kubernetes security providing the encryption and everything else, and it's a viable solution.
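A minimal sketch of that flow, using the standard client-go port-forward machinery against istiod's plaintext xDS port 15010; the pod name and namespace are placeholders:

```go
// Sketch: port-forward localhost:15010 to an istiod pod, the way istioctl
// could reach the control plane without an exposed insecure endpoint.
package main

import (
	"fmt"
	"net/http"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/portforward"
	"k8s.io/client-go/transport/spdy"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The istiod pod name would normally be discovered via a label selector.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("istio-system").
		Name("istiod-xxxxx").SubResource("portforward")

	transport, upgrader, err := spdy.RoundTripperFor(cfg)
	if err != nil {
		panic(err)
	}
	dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, http.MethodPost, req.URL())

	stopCh, readyCh := make(chan struct{}), make(chan struct{})
	fw, err := portforward.New(dialer, []string{"15010:15010"}, stopCh, readyCh, os.Stdout, os.Stderr)
	if err != nil {
		panic(err)
	}
	go func() { _ = fw.ForwardPorts() }()
	<-readyCh
	fmt.Println("xDS reachable at localhost:15010 over the API server's encrypted tunnel")
}
```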
B
I'm not interested in which design you want here, because from my point of view... I never said that xDS is the only way to get events. I always said that we want to integrate with eventing systems.

C
The question is, we need one consistent eventing system that we can integrate with, whether we're talking about central istiod or VMs or any other environment. What we heard from you back in June, or in May rather, was that that was going to be xDS. If it's going to be something else, we need to know what it is, and we need to be able to build on top of it more or less this week.

A
So let me give you an overview of the problem before we talk about the solution. The commands that talk to pilot through debug are proxy-status, two kinds, with or without the pod, and the version command: the version of the control plane and of the sidecars.

A
Okay, so I've got... version is no longer exec'ing; proxy-status, they're all port-forwarding. We're moving in the right direction, but finding the proxy status of a particular pod...

B
Yes, that's the reason to have the eventing system in place: you don't have to connect to anything, you can just subscribe to a pub/sub or whatever equivalent you have. And that's why I think it's very important to continue to say that we support integration with arbitrary eventing systems, not that xDS is the one and only event that we support.

B
My proposal was to just use Kubernetes events in 1.7. I mean, it's relatively easy to add the code to generate the Kubernetes event, and then, if you have access to the event feed, which is a lower privilege than port-forwarding, you will see all those things.
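For illustration, consuming that feed only needs watch access on Events; a sketch, reusing the hypothetical XdsPushNack reason from the recorder sketch above:

```go
// Sketch: watch the Kubernetes event feed for xDS push events, filtered to
// one reason; "XdsPushNack" matches the illustrative recorder above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := client.CoreV1().Events("istio-system").Watch(context.Background(), metav1.ListOptions{
		FieldSelector: "reason=XdsPushNack", // only config rejections
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Printf("%s %v\n", ev.Type, ev.Object)
	}
}
```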
D
Yeah, can you maybe educate us: what is the access to the event feed? And also, I probably only want the events for a particular istiod, because imagine that a cluster might run an istiod for 10 different deployments; I probably only want the events of one of them.

B
I'm afraid it will not work. I mean, that's the whole point of the status and things: to talk with all of them, so...

A
Let me... so let me make another point here. Let me show the picture. In this case, in this picture, there's only one user pod, the sleep pod.

A
If we did this thing we talked about, to reflect into its existing connection, then everything would work great. If there's one pod, it would talk to the only one that mattered. If there were more, we could talk to all of them; we could talk to each one until we had a representative sample.

B
Yes, you could talk with all of them, yes, to get the status of all the important ones, yes.

A
To do that, we would need this arrow to be implemented, this arrow, and my question is: I know there was a PR, maybe, for that. Will that be ready?

A
I... a secure tunnel is going to be super important, maybe, but I mean... well.

B
It would be wonderful to have it, but I think we can... I mean, it's not worse than what we have today.

A
So if this could be a priority, it would really simplify things. First, it would simplify all of my unsharding things, I think; maybe... I mean, I would rather have unsharded, but I could live with that. It would help me deal with the security items. So I have this other item for security: how am I going to... how is istioctl going to talk to the so-called central istiod?

B
Iris... I think Iris did a lot of work on this area, I mean, or...

E
I think with the... oh, I don't think you can assign it to a team. You have to do an individual or add the label.

B
Oh yes, but in... so there are two separate problems. One was the issue of improving the UX for VMs, or how do you provision VMs, and I think what John said is the current agreement, as far as I know: to use the token. And there was discussion about istioctl being able to automate the provisioning of a VM, so someone working on VMs was supposed to have a way to do istioctl generate certificates.

A
This connection right here, from istioctl to the ingress, if you talk directly, is going to need the same security that is needed for a VM.

B
No, no, you connect to xDS, you connect to regular xDS. You will have some credentials; you should have, I mean, some SPIFFE identity, but it's nothing special. I mean, it's just like any VM running in your system. If you configure the credentials to be in your system, you have that.

B
Oh yeah, isolation, yeah, that's obvious. But I'm not saying any proxy; I'm saying some proxies are more equal than others.

B
Absolutely. So isolation will make sure that if you go in the example namespace, you will not be able to see anything except your namespace, but if you go with a credential into your system, or with policies that give you permission to see everything, you should see everything. And that will be true for VMs and everything else. I mean, it's not specific to istioctl or to this use case; it's something that we want to do in general.

A
So, Costin, I want to do a one-week sprint where I either work on this path, port-forwarding into the pod and then letting this reflection happen, or I do the path talking directly to the xDS. Which one of these should I sprint on, the right path or the left path?

C
I think the only dependency we have there is that there's a URI, some address that we can hit that will get us to istiod, right? Usually an ingress gateway, the way that we deploy things, but if that's a Google load balancer, or exactly the VM thing, that also works. Yes, for this week's sprint, I can try the right path.

A
And that is going to require... again, it'll require the... so.

B
Yeah, the identity of all istiod servers should be available by listing... by looking at the CDS response. So...

A
And when will I... when will we have a test PR for this? This is gonna unblock me, if I could get this.

A
Okay, so I think... I think I have a plan for this week on those items.

A
That handles the grey arrows. We have at least two other problems. So the proxy-status that I wrote lists the proxies; it does not tell whether they're ACKing or NACKing. Costin, I believe, thinks he's waiting on me, from what I heard him say at the work group leads; I thought I was waiting on him. But Mitch told me today that, instead of solving it this way, we should solve it using the Client Status Discovery Service, a new Envoy thing. Have you guys looked at that?
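For context, CSDS is the Envoy status API (envoy.service.status.v3) through which a server reports each connected client's config sync state. A sketch of how a CLI might call it, assuming istiod served CSDS on its gRPC port, which at the time of this meeting was only a proposal:

```go
// Sketch: fetch per-proxy ACK/NACK state over CSDS. Assumes a server
// implementing envoy.service.status.v3 is reachable at localhost:15010,
// e.g. via the port-forward shown earlier.
package main

import (
	"context"
	"fmt"

	statusv3 "github.com/envoyproxy/go-control-plane/envoy/service/status/v3"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("localhost:15010", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	csds := statusv3.NewClientStatusDiscoveryServiceClient(conn)
	// An empty request asks about all known clients; NodeMatchers can narrow it.
	resp, err := csds.FetchClientStatus(context.Background(), &statusv3.ClientStatusRequest{})
	if err != nil {
		panic(err)
	}
	for _, cfg := range resp.Config {
		fmt.Printf("proxy %s:\n", cfg.Node.GetId())
		for _, per := range cfg.XdsConfig {
			// ConfigStatus covers SYNCED / NOT_SENT / STALE / ERROR.
			fmt.Printf("  %v\n", per.GetStatus())
		}
	}
}
```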
B
Yes; thanks, Mitch, for finding it. So: me waiting for you, and you waiting for me. What I was waiting for is for the UX working group, and Mitch, and you to decide what proto you want to use. That's a perfect proto! I mean, I'm very happy if you pick this one.

B
The main problem is, in my PR, what we have right now in istiod is just a boilerplate to send an event with some... I think I put the node in, or some crap that is not really useful... well, maybe it is. So the idea is that each event will have a proto that is well defined and stable, and we treat it as an API, and you can rely on it.

C
Yeah, so I will just... I will publish what I've got to you early and often, so that you can redirect me when I go down the wrong path. That's...

A
So I think this will be good for telling us if the sidecars are ACKing and NACKing, which is great for proxy-status. For the version command, we'd like to list the Istio version that the sidecars are running, and I think the connections API that we've been using before might do that. So the question, though, is how they sort of interact.

B
Let me think about it: does... can you scroll back to the previous one, with the proto? Let...

A
The client... this one?

B
Yeah, yeah, there's a... oh yeah. So if we... we could generate this proto. So basically, when the sidecar connects, it will get LDS, and this event will be generated. So if you are listening for this particular event, you will know; you can probably store the version somewhere, right?

B
I was trying to see if we have any metadata included in this, because the version is passed from the sidecar to pilot as metadata. If we can pass this metadata in the event, you are golden, because this event will give you both the version and the NACK status.
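For reference, the proxy reports its version in the node metadata sent on connect, under the ISTIO_VERSION key; a tiny sketch of pulling it out of a node, whichever proto carries the node:

```go
// Sketch: read the proxy's Istio version from Envoy node metadata.
// ISTIO_VERSION is the metadata key the sidecar sends to pilot on connect.
package meta

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
)

func istioVersion(node *corev3.Node) string {
	md := node.GetMetadata()
	if md == nil {
		return "unknown"
	}
	if v, ok := md.GetFields()["ISTIO_VERSION"]; ok {
		return v.GetStringValue()
	}
	return "unknown"
}
```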
A
So if I do... oops... if I do istioctl version, it lists three things, so...

B
Yes, yes, yes. And the version is passed, so pilot has access to the data plane version, has access to the version that Mitch was mentioning: what the current version of each of them is. All of them could be made available in the proto. Maybe... maybe let's not have consensus on that proto until we confirm that it has all the support we need.

A
We might already have that in the istiod connections...

A
...the node. So I may just continue doing that in addition, and then use this discovery status for the NACK and ACK status.

B
That's fine as well. I trust you and Mitch to pick whatever proto you need for your command, because I'm not familiar enough with it. I mean, whatever you put in the proto, I'm happy.

B
My use case is somehow to get a proto where eventually I have the IP address included, and that's one of the metadata fields. So I only need one field in this whole thing.

A
So we have 20 minutes and two items left on the agenda: Martin's and Lin's. How long is yours? Go ahead, Martin.

F
Okay, so there's actually really two parts to this RFC, and the genesis of it was just the desire to improve the quality of logging and error messages that we're getting currently, because a lot of support cases, both internally and externally, I see have their roots in just, you know, unclear or overly verbose or just missing logs.

F
So that's where this work started, and somewhere along the way there was also a proposal for potentially refactoring the existing logging API; rather than just having it as a cleanup exercise, to refactor the existing logging API to better combine, like, some of the metric aspect of error and warning events, and also make the logging messages more maintainable going forward, because right now it's difficult to enforce every single...

F
...you know, error or warning message that the developer writes. And so a second thing that came out of this is a proposal to extend the existing log package that we have in the Istio packages, to add some optional functions to scope to support that. And, Ed, if you could maybe scroll down a little bit to the retrofit proposal... yeah. So this is a v2 of the package API, which originally had, you know, quite a lot of feedback.

F
I tried to incorporate all that feedback, which I left as comments at the top, into the second version of the proposal, which is what's in the document right now, and the main thing I think I took away from the comments was not to force people to do anything.

F
So, you know, what's on the table right now is optional. There's no required API change; logs and scopes will continue to work as they do right now.

F
And I think the main proposal on the table where I think there is still some contention is this idea of a dictionary. So what this is saying is that, rather than capturing all the logging information and metric information at the call sites, what we'd like to try to do is have that live in some file where basically all these user-facing messages are enumerated, and this file comes under the purview of, you know, folks that have more of an interest in user-facing stuff, like the UX group and docs.

F
Perhaps. And the idea is that, you know, we would slowly go through the existing body of errors and warnings that are there in the logs and gradually retrofit the most important ones to this kind of format, where, you know, the log site mostly points to information about the error or the condition, which is contained in this one file, this error dictionary. And besides the content, the user-facing text, I also included some proposal for hooking in things like metrics as part of the dictionary definition. If you could go down a little bit... so we still have the same definition of a metric.

F
It's just that now there's a list of metrics that's part of the error dictionary definition, and when the actual log message occurs, like scope.Errorf or whatever, having this error struct actually be passed in would result in...

F
...you know, this event actually making its way to any metric listener that would be consuming these events. So the metric listener would actually get the context labels, and also the entry from the error dictionary, and, you know, it could basically increment the metric or whatever. So again, this is optional, but this is just showing a way that you can combine a metric handler and a metric definition with an error dictionary.

F
So maybe just scroll down a little bit more, at least to registering a sink. So targeting is more just, like, talking about scope, but I think that's relatively vague at this stage. But in terms of just how this is actually communicated to users...

F
This is, again, sort of an optional plug-in framework, where you could have, like, a log handler, a log listener that receives callback notifications of error and warning log messages, and receives all the information from that call site, including any context labels, etc., and the dictionary error definition for that particular event.

F
And it's able to do something like, you know, use zap or something like that to log to some output. Similarly, you know, a metric handler can define a callback to, you know, increment a metric or whatever.
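A compressed sketch of the dictionary shape being described; every identifier here is invented for illustration and the RFC's actual API will differ:

```go
// Sketch of the "error dictionary" idea: user-facing messages enumerated in
// one reviewed file, with metrics attached, instead of ad hoc call sites.
package logdict

// Entry is one reviewed, stable message definition.
type Entry struct {
	ID      string   // stable code, e.g. "IST0042" (made up)
	Text    string   // user-facing message template
	Metrics []string // metrics to bump whenever this entry is logged
}

// A record in the dictionary file that UX/docs folks would own.
var OutOfDateProxy = Entry{
	ID:      "IST0042",
	Text:    "proxy %s is running an out-of-date config",
	Metrics: []string{"pilot_proxy_out_of_date"},
}

// Listener is the optional plug-in hook: a zap-backed output sink and a
// metrics sink can both register and receive every dictionary-backed event.
type Listener interface {
	OnEvent(e Entry, labels map[string]string, args ...interface{})
}

var listeners []Listener

// Report would sit next to the existing scope.Errorf and fan the event out
// to every registered sink, carrying the dictionary entry and context labels.
func Report(e Entry, labels map[string]string, args ...interface{}) {
	for _, l := range listeners {
		l.OnEvent(e, labels, args...)
	}
}
```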
F
So, right, I think that's pretty much the proposal, and I'm looking for an approval for it. The state is, you know, I've passed it around, and I've talked with a bunch of people over it, and I think there's generally good consensus.

F
I believe that Costin has some pretty strong objections still. So, you know, I think that... I don't know if you want to talk about it, Costin, or...

B
I think most important is to have the users, the most users of Istio, be happy. The person who is running istiod... I mean, it's not really something that... I think they'll be better off using the same tools that the users are using. So if all the information that would be expressed in logs is accessible to the user... I mean, of course, with the filtering isolation, so you only see stuff for your namespace and so forth.

C
So I think, Costin, it's helpful to think of logs the same way that we're thinking of xDS events. If we want to take these logs and plumb them to xDS events, that's just a matter of a few lines of code and plumbing. However, xDS events don't make it so that logs are irrelevant; we wouldn't make it so that we never write logs. They still have their own independent value. But if we also want them on cloud pub/sub or something else, we know how to get them there.

B
A structure like a crash dump or whatever probably should be an event. We have metrics as well, which, again... if you automatically increment a metric that you got a packet, you got a connection, that's probably not a log and probably not an event either. And I think the problem is that people do not think carefully about each particular use case, as they should, because for events we need to think about the proto, the discussions that we just had about how we put status; for metrics, it's something that, you know...

B
How does it show in Grafana? It needs to be stable and so forth. And for logs, usually, it's whatever crap people put, which is the reality today. I'm happy to see more structure in this, but I don't want it to be the main thing that we are doing, because then we get in a mess anyway. That's my opinion. Sorry.

B
Agreed, but the super user will have access to all the information that the user has, and the question is: is there anything that we put in istiod's logs that is important enough that we would not send to the user? I mean, if something is wrong with your config, if you have anything, it's something you would inform the user about as well, no?

D
Right, I got your point; I agree with that. So my point is this: if this is not going to help users debug their problem, in the end you would have to tell the user, you know, that they have to contact their support or admin to pull logs out of istiod. I agree, I mean, if this is a useful error for the user, we should provide it in an informative way, to notify the user without bugging their support team or admin to get the logs.

C
The common user to be supported is going to be that workload administrator who does not have any backdoor access to the control plane and will never see istiod's logs. That's our most common user. But that's not to say that we don't also want to support the administrator of istiod, and actually there are a few other personas defined in that document. So the impetus, I think, is on us to get that document pushed through and approved, with Steve's help, and that will help these conversations move a lot more smoothly, I think, if...

B
I can add one more point. I mean, the reason I have strong feelings about this is that I have a lot of PTSD from working with privacy teams, because logs, usually, at least at vendors like Google, need to be approved for privacy; they need to be scrubbed; they need to be, you know, multi-tenant. It's a huge amount of pain to care for, I mean, privacy information, and a P0 "the sky is falling" because you logged something that you shouldn't. So I have some, you know, history with this.

F
Yep, yeah, well, yeah. So I do have some ideas around that, but, you know, I've already eaten 15 minutes of the time. So maybe we could just have some follow-on discussion, maybe in Environments or something, in the Slack channel. But yeah, currently this is definitely oriented towards the admins; that is for sure.

F
I haven't really thought much about what to expose to users, and, you know, like, how to... other than targeting, which was my attempt to introduce some idea of scope, I haven't talked much about how to restrict things to users versus admins. It's definitely admin targeted.

E
Thanks, Martin. We just want to make sure we still have the logs that we currently have; otherwise we're never going to be able to debug a complicated issue again.

F
Yeah, so definitely this is not removing anything that we have. It's more about just taking the ones that are commonly used, and that possibly require metrics around them, and just, you know, having a little more clarity around them by having more review, like from other work groups, and, rather than it being a little bit ad hoc as it is today, just introducing a little more discipline into it.

A
Yes, sir, yes. Thank you for your hard work on this. We definitely should... I need to reread it. We should follow up on this either in Environments or in Slack. Lin, we used more than the 10 minutes we said we would for your item; tell me if we should move it to next week or if we should review it separately.

D
Yeah, that's fine! So basically I just want to let everybody know, you know, I provided some updates to this document. John kind of inspired me to, you know, think about how we do this in the most simplified way, given the fact that we can already run istiod from a laptop. So I went for having, like, deployment models: one is a new deployment model, and two and three are some of our existing deployment models, more towards multi-cluster.

D
So I encourage everyone to take a look, and I will probably talk a little bit more in detail in the Environments working group tomorrow as well, so I don't have to, you know, present the whole thing here. But do let me know if you guys have any concerns with it; feel free to comment on the doc.

A
Thanks, Lin. Okay, everyone: we've started doing this every week, and it's a good thing that we did, because we certainly had a whole hour's worth of material today. I will see everyone next week. My hope is to make progress on the central istiod troubleshooting commands this week, and I may be bothering many of you on this call with that, and also some of the control plane upgrade stuff in Environments tomorrow. All right, thank you, everyone. Thanks, Ed, thanks!