From YouTube: Policies and Telemetry WG Meeting - 2020-08-12
A
Okay, maybe we should just go ahead and start. I put a couple of items on the agenda up front, but I think the big thing that needs to happen is planning for the 1.8 release.
A
So I wanted to spend the majority of the time trying to flesh out our planning there, because I think we didn't do such a great job with 1.7, so we should try to do a better job here in 1.8. So I put something up that came up in the Networking working group, and I should probably fix this: it isn't tracing, it should say logging.
A
There was a question in the Networking working group about configuring access logs and using them on a per-proxy basis, and so I took an action item out of that to sort of figure out what we're going to do with the telemetry API, and we already had an RFC.
A
I know, Mandar, you had asked to see if we could just get agreement on the motivations, so I wanted to point to the requirements section of that RFC and see if that was sufficient, or if you think we need more detail there, or more agreement, or a different way of arguing it. So I wanted to put that up for discussion.
B
Okay, so yeah: I think the per-workload override is where it gets tangled up, or gets very closely related to other similar APIs in Istio, including EnvoyFilter. What are the per-workload override and customization semantics, and how do they look? I think the other...
B
Yeah, the other requirements seem fine to me, but the part that I would like to get agreement on, kind of across Istio, is how we deal with these overrides.
B
How do you specify a default, how do you override those defaults, and what happens when there are multiple things that match? These are all questions that we have either answered in different ways in different APIs, or sometimes left to implementation and then kind of retroactively encoded the implementation as API.
A
So it seems pretty clear that we don't want to invent a new one, right? Correct. So is the question which of those other conventions we converge on?
B
Okay. And the telemetry API specifically should concentrate on the other aspects, and when we solve the override piece, it should apply equally to whoever needs to use it, including this API. So yeah: what I'm suggesting is that we split this into two parts, because I think the override part can be a long pole, and we don't want the rest of the API to get stuck in that discussion.
F
There is a kind of unification of semantics here, rather than spreading all the configuration around in the mesh config, right, and I think that's worthwhile. To get through review faster, getting the rest of that done, even if it still lives in the mesh config, would still be a valuable exercise, because right now it's kind of hard to figure out all the telemetry pieces that you need to configure, and there are some things where you just end up relying on EnvoyFilter.
F
If
you
want
to
use
or
change
the
sampling
to
like
only
sample
logs
on
500s,
then
you
end
up
using
envoy
filter
anyway.
So
I
think
it's
worthwhile
to
separate
it
out
personally.
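The 500s-only logging case mentioned here is exactly the kind of thing that currently falls through to EnvoyFilter. As a rough sketch of that workaround (the resource name, namespace, labels, and log path are illustrative, and the exact Envoy type URLs depend on the proxy version), attaching a status-code-filtered access log to one workload looks roughly like:

```yaml
# Sketch only: log responses with status >= 500 for one workload.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: reviews-5xx-logs        # illustrative name
  namespace: default
spec:
  workloadSelector:
    labels:
      app: reviews              # per-workload targeting via labels
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /dev/stdout
            filter:
              status_code_filter:   # only emit entries for 5xx responses
                comparison:
                  op: GE
                  value:
                    default_value: 500
```

This is precisely the sort of low-level surface the proposed telemetry API is meant to replace with a first-class option.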
D
Thinking about it: once we have made the mesh config API more palatable to what we want, then for whatever the next-level API is, what should the override semantics be? I think if you leave EnvoyFilter aside, every other config, like the Gateway resource, or PeerAuthentication, or RequestAuthentication, works in the same way, where it overrides whatever is given in the mesh config directly, right? And I don't think any of them does a merge.
B
Well, no: the issue comes in when you ask how you actually select a particular workload. So, basically, if two of these resources select the same workload, what is the multiple-inheritance story here? Do we define it or not? What if you have a resource in the mesh config, one in the namespace, and one that is workload-specific?
B
Which one applies, right? EnvoyFilter does it one way, and the Sidecar API does it another way, and we have to choose which way we go here. So this is the telemetry API, but I expect other APIs that also apply to the workload, but don't necessarily do telemetry-specific things, to have the exact same concerns and requirements that I want to apply here.
B
So I think that's what we need to solve in the override API, and the specific requirements have been...
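To make the open question concrete, here is a purely hypothetical sketch of the three scopes that could all match one workload. The group/kind, field names, and namespaces are invented for illustration; this is the shape under debate, not a ratified API:

```yaml
# Hypothetical mesh-wide default, installed in the root namespace:
apiVersion: telemetry.istio.io/v1alpha1   # invented for illustration
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - disabled: true
---
# Hypothetical namespace-scoped override (no selector, whole namespace):
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: ns-override
  namespace: default
spec:
  accessLogging:
  - disabled: false
---
# Hypothetical workload-scoped override via label selector:
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: reviews-override
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  accessLogging:
  - disabled: true
```

The unresolved questions are exactly which of these wins when all three match a pod, and whether narrower resources replace or merge with broader ones.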
D
Yeah, makes sense. One quick question to both you and Doug here: I know we are talking about workload-level attachment, which can be via selectors or whatnot, but for telemetry, for a few cases, does host-based attachment make more sense? What I'm thinking is: do you want, as a user, to configure telemetry for a particular service, and by service you mean a host here, rather than saying...
D
So if I, as a user, want to say I want uniform telemetry for reviews.default.svc.cluster.local, right, is that a better API? And then all the clients and the servers, whenever they emit the telemetry for that particular service (which is like the canonical service that Mandar was describing), it has the same properties.
D
I guess, yeah: what is more important, a particular client or a particular server emitting the same telemetry across all services, or uniform telemetry across all workloads for a particular service?
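The two attachment models being contrasted could be sketched like this. The field names are hypothetical, mirroring the discussion rather than any existing API:

```yaml
# Workload attachment: config follows the pods selected by labels,
# regardless of which service name clients used to reach them.
spec:
  selector:
    matchLabels:
      app: reviews
---
# Host/service attachment: config follows the service name, so every
# client and server emitting telemetry for this host gets the same view.
spec:
  host: reviews.default.svc.cluster.local
```

Workload attachment favors the owner of a deployment; host attachment favors uniform views of a named service across all its callers.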
F
I'm not sure how useful my input is here, but for us, we would want to have cost attribution in line with the service, and so we really do want telemetry. A lot of the reason we turn off telemetry is to reduce the load and cost, and so I'd want that on a workload basis; I don't want that on a listener or edge basis.
B
So one concrete example of this is when you have request classification, right, with an OpenAPI spec. It would actually make sense, especially for request operation, which is already part of the label, but even if it were not, for the metric to be uniform regardless of whether the source is the client or the server.
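For reference, the request-classification mechanism referred to here is, in Istio of this era, configured by inserting the `istio.attributegen` plugin ahead of the stats filter so that both sides can populate `istio_operationId` the same way. A sketch loosely following the Istio metric-classification docs (type URLs, the operation name, and the match condition are illustrative and vary by version and application):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: attributegen-example    # illustrative name
  namespace: default
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: istio.stats
    patch:
      operation: INSERT_BEFORE   # classify before stats are emitted
      value:
        name: istio.attributegen
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
          value:
            config:
              configuration: |
                {
                  "attributes": [{
                    "output_attribute": "istio_operationId",
                    "match": [{
                      "value": "ListReviews",
                      "condition": "request.url_path == '/reviews' && request.method == 'GET'"
                    }]
                  }]
                }
```

Applying the same classification on both client and server sidecars is what keeps the two views of the metric consistent.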
B
We have not been thinking about client telemetry and server telemetry as an ownership thing; it's more of a measurement thing. It's two views of the same information, and if that's our continued thinking, then it makes sense for the views to align exactly.
B
But if we want to go with what Doug was saying, that no, I own the service and I want my client-side telemetry to be collected in this other way, then it...
B
Okay, so yeah: then we need to reconcile those two.
D
And by service config, in the previous point, do we mean more like canonical-service config now, or is it a host/service config?
B
No, we want them to use it, right, because it removes the ambiguity. Probably no one uses it right now, but if we center our API around it, and there is not that much mismatch between a host and that, then it should just be an easy transition. It will also bring things out explicitly: if one host is mapping to five different canonical services for some particular user, then it will bring that to the fore, and yeah, they'll have to reconcile it themselves.
B
We have bound to names before, yeah. So there it is: the client side is bound to the name, and the server side is bound to the workload, and maybe that is something we could kind of continue.
B
Okay, so I think we've spent like 15-20 minutes, and we do need to get to the release-planning stage. So we should make some more concerted effort on fleshing this out in the proposal.
B
And related to that, I just very quickly want to mention, Doug, if it's okay: I know you have the TCP metrics item coming up, but I just wanted to mention that there is an EnvoyFilter update API proposal, which has lots of comments, but the immediate things it resolves are: it adds replace semantics, and it also adds an enum for the attachment point.
B
So there is no issue of "oh, in the next version this filter is called something else and it broke me"; there is none of that. So this solves the canary case, because now you can override a particular filter by name without doing anything else. The semantics are clearer as well: sometimes you just cannot use proto merge, because proto merge doesn't support removal, so you just replace, and you get a new one. So that's what it is.
B
There is some discussion about the enums: the enums as proposed right now are sort of relative, so they only talk about partial order, and partial order with respect to what the control plane inserts by itself.
B
So if your filter depends on, let's say, the authorization filter, then you would say "insert my filter post-authz", and there is some discussion about whether it should be this partial-order type of thing, or an absolute-order type of thing where there are phases like decode and triage and auth, etc. But I expect that we will have enums one way or another, either this way or that other way, and then replace would be there. So I'm hoping we get through this, yeah, and right now...
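Under the proposed replace semantics, a canary-safe override of a single filter would look roughly like this. This is a sketch only: the RFC under discussion may land with different field names, and the replacement config body is reduced to a placeholder:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: stats-override          # illustrative name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: istio.stats   # keyed purely on the stable filter name
    patch:
      operation: REPLACE          # pure replace: no insert position needed,
                                  # the ordering work was already done
      value:
        name: istio.stats
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          # replacement filter config would go here
```

This is the distinction raised below: pure replace carries no ordering information, whereas add-or-replace must say where the filter goes if nothing matches.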
B
I think we could change that and say that we will add it. It creates a few other things, though. For example, pure replace doesn't need to specify anything about ordering, right, because it means that that work is already done someplace, and you just say: okay, now replace it with this thing that I gave you. Whereas add-or-replace means that if you can't replace, you need to specify where it goes.
B
No, and I agree: it is very, very complicated. But the function that we want to provide is also a bit complicated, and the other goal is that even when most of the things are left unspecified, it should still work. That's the goal, so you shouldn't really need to dig into all the settings.
B
Okay, yeah, cool. Thanks.
A
Okay, I put this next item on the agenda just because someone raised it in the Slack channel and I wanted to see what people think. Back with Mixer, it was almost impractical to generate TCP metrics for HTTP traffic as well, including connection events and things. I'm not sure that we're still in that state now that we've switched to v2, but I think this would be a big shift, because all of a sudden we'd have all these TCP metrics showing up for HTTP traffic.
A
So I wanted to see what people thought about generating those kinds of stats, whether we should keep what we're doing, or maybe report a subset. So I just wanted to raise that and see if there are any thoughts about, say, connections-opened and connections-closed stats for HTTP traffic.
H
Envoy already exposes some stats about the TCP open/close for HTTP requests, although they don't have the really rich workload information that we have for the Istio stats. So yeah, I think it would be useful, but I guess most people won't need it, or the basic Envoy stats would give enough visibility on this.
H
So yeah, and I think this is just a configuration thing, right? We just need to insert the network... oh, sorry, the stats filter, into the chain, yeah, right.
H
So the user might already be able to configure this with EnvoyFilter, I think, like inserting the stats filter into the HTTP chain. So I'm not sure.
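One possible shape of that configuration, heavily hedged: whether the TCP stats plugin behaves sensibly when inserted ahead of the HTTP connection manager is exactly what would need verifying, and the plugin config body is elided:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: tcp-stats-on-http       # illustrative name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: INSERT_BEFORE   # run connection-level stats before
      value:                     # HTTP processing begins
        name: istio.stats
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          # TCP stats plugin config would go here
```

If this works, it would give connection-opened/closed stats on HTTP listeners without any change to the defaults.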
D
So I'm guessing that, for TCP metrics for HTTP, open and close are more important compared to the actual byte histograms.
D
Exactly
and
I
think
that's
pretty
useful,
especially
if
you
want
to
see
how
connection
persistence
and
http
2
is
performing
overall.
From
the
connection
point
of
view.
B
We do get the downstream disconnect, right? There is a response flag for it, isn't there?
B
Oh, okay. I thought it does show up in our stats, as the response flag, with a response code of zero and a response flag of downstream disconnect.
A
Okay, so then the big thing to spend our last 20 minutes on, I think, is just ideas for 1.8. What do we think we should accomplish? What do we think we can accomplish?
A
I put some ideas out to sort of seed the conversation, but I'm more interested in what other people think we should have. So I open the floor to everyone to contribute ideas.
B
Yes, so: some new extensions, right? Fluentd has been requested, and we just need to provide a solution. Maybe that means a new extension; maybe it means something else. But regardless, we need to have a solution for Fluentd: Mixer had it and people were using it, so we need to find an alternative.
B
Yes, we do need a landing place. So how about this: let's put that specific thing on the agenda as well, creating a new landing place for these extensions, yeah.
B
I need memory to be a stat where I can look at it through gauges and look at multiple things, and it would actually be pretty useful to be able to either get some of the higher-value stats already, or do it on a more ad-hoc basis. So I think it's...
B
I really think we should put it back on the agenda and really commit to it. It's now slightly more complicated with the better transport security, and kind of: where does it fit? So we have TCP metadata exchange, HTTP exchange, BTS exchange.
G
Yeah, I thought there was a proposal to use metadata exchange to send those headers, but that sounded wrong. I think they changed to using some other side channel.
F
Hey Doug, there's the trace-context stuff: did you guys want me to generalize that out for all the tracers that support it for the 1.8 release, or do I just leave that for the OpenCensus trace exporter?
F
I'm fine either way, and willing to do the legwork on that. I just don't want to land it and then be taking it apart for your stuff anyway.
A
Yeah, I think that's a good question: whether we think we'll have agreement on both the workload-selection bits and the telemetry bits, and implement it in 1.8, or whether we should do some trace-context work for 1.8.
F
It doesn't seem like a blocker for most people. I've just seen questions in a few places around it.
F
I mean, I don't think it's blocking too many people from onboarding right now. So, okay: I was wondering whether it seems like a high priority. If so, then I'll get it in for 1.8; if not, then I'll just kind of leave it.
B
For that inline bytes: are you saying actually in the EnvoyFilter API itself, or are you saying for the actual transport?
I
The EnvoyFilter API has an inline-bytes field. Currently, if it's used, it crashes Envoy. I would love to actually put my Wasm code right in the EnvoyFilter CR, rather than having to mount files into the sidecar.
B
I think that with ECDS, and with the URI option, which is already supported, you should be able to do it, unless you have a special need such that you must include it in the EnvoyFilter; I just don't see why you would want to do that. So if you have an issue where this is being discussed, we could do that, but I just don't think putting one megabyte of base64-encoded content in an EnvoyFilter would be very useful.
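The remote-fetch path being pointed at here: instead of inline bytes, Envoy's Wasm config can reference the module by URI plus a sha256 checksum, so the module never has to be baked into the CR or mounted into the sidecar. A fragment sketch, with the URI, cluster name, and digest as placeholders:

```yaml
vm_config:
  runtime: envoy.wasm.runtime.v8
  code:
    remote:
      http_uri:
        uri: https://example.com/filter.wasm   # placeholder URL
        cluster: wasm_fetch                    # placeholder cluster name
        timeout: 10s
      sha256: "<hex digest of filter.wasm>"    # integrity check on fetch
```

The checksum makes the remote reference verifiable, which is part of why remote fetch is preferred over a megabyte of base64 in the resource.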
I
It would be wonderful to have some kind of command-line or web front end to say which pods and which workloads are running which filters, because with the EnvoyFilter workload stuff, which can't be queried via Kubernetes label selectors, it's very hard to figure out which pods are running a particular Wasm filter.
I
And which Istio, which Envoy version is running. I think I'm going to do that myself, but I was confused: I spent a long time trying to do AssemblyScript and figuring out which version of the Solo AssemblyScript library I needed, and all that stuff. So anyway, on the UX side, I would be happy to talk and collaborate on any kind of dashboards or command-line tools. What we want is just any kind of way to add and remove these filters without restarting the proxy.
D
Now, what's ECDS? Is that the extension config discovery service, or something like that?
E
I just wanted to make sure that this one gets slated; whether it's 1.7 or 1.8 doesn't matter to me. But, you know, Peter knows we've been going back and forth since 1.6, maybe even 1.5, with this protocol-sniffing situation, which is sometimes generating bad telemetry, and we don't really know why. It looks like the ultimate solution would be this...
E
...this issue here getting resolved. I mean, in general, all of the features that you guys are discussing are good, but we also need to make sure that existing telemetry is meeting people's requirements, right? Most of the questions we get in Kiali are "why does my graph look like this?", and it always comes back to: well, this is what the underlying telemetry is presenting. So, with all the progress, that's great, but understanding the telemetry that does get generated today is important.
E
I don't care which release. So, I understand from what Peter wrote, I believe in the comments, that you can get around this by disabling the protocol sniffing, and that's fine; but I guess if you need protocol sniffing, then that's not fine. Right now the default timeout is five seconds, right, which is really long, but for some reason, in certain situations, if you don't disable the sniffing, you're still hitting it. We can hit it pretty...
E
...often, in just that one demo app we have. We don't really know why it happens there, even...
C
Yeah, thanks for that. Thanks for the info.
B
No, no, but there is actually a way, right? And I think Yuchen has already done some work there: we can make the default timeout one hour, and then have a way to opt out of protocol sniffing on ports that carry server-first protocols or whatever, and that seems like a reasonable, easy solution. The API already supports exclusion by...
J
Port
yeah,
I
I
think
we're
saying
the
same
thing
just
differently.
I
just
mean
we
need
protocol
sniffing
still,
but
we'll
make
it
work
in
a
hundred
percent
of
cases
because
we'll
disable
the
timeout
so
you'll,
never
time
out
and
we'll
make
it
work
by
doing
what
you
described.
Okay,
yeah.
This
is
this-
is
part
of
the
environment's
roadmap,
so
we'll
hopefully
fix
that.
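The two knobs being discussed already exist in Istio, though the exact behavior should be double-checked against the docs for your version: the mesh-wide detection timeout, and per-port opt-out via explicit protocol selection:

```yaml
# meshConfig: setting the detection timeout to 0s disables the timeout,
# so sniffing waits instead of guessing (and mislabeling) slow peers.
meshConfig:
  protocolDetectionTimeout: 0s
---
# Explicit protocol selection: named ports are never sniffed.
apiVersion: v1
kind: Service
metadata:
  name: reviews
spec:
  selector:
    app: reviews
  ports:
  - name: tcp-custom    # "tcp-" prefix: treated as opaque TCP, no sniffing
    port: 9000
  - name: http-web      # "http-" prefix: treated as HTTP, no sniffing
    port: 8080
```

Explicitly naming ports for server-first protocols (databases, SMTP, etc.) is the opt-out path described above, and it also avoids the "unknown"/PassthroughCluster artifacts mentioned below.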
E
No problem. I mean, you just always have to remember why we do all this telemetry stuff: it's so that people can use it and see something that makes sense. And with this issue in particular, I can't tell you how many hours I've spent trying to explain to users of Kiali that the reason you're seeing traffic from "unknown", and the reason you're seeing traffic going to "PassthroughCluster", is because of this potential bug over here that we don't know why it happens. I'm not trying to sound whiny; I'm just trying to say it's a problem that real users are seeing.
A
Okay, we're about out of time, so I just encourage people to add other things they think are important to these notes, and then, to the other working-group leads: I think we're scheduled to present this to the TOC next Friday. I will be on vacation and unavailable, so one of you will be the lucky winner of getting to present this to the TOC. So, yeah...
A
I will do it. Okay. It's most likely that it'll slip by a week or two, given historical trends, so it's pretty safe to volunteer, but...