From YouTube: Extensions and Telemetry WG - 2021-07-14
A
Good, okay, so there is an agenda now.
C
Yep. And John, do we need to move your item up from the last meeting for discussion?
B
Yeah, I mean, this was Rama's proposal from some time ago, and we had a soft agreement on having the metrics published using the Prometheus JSON format and an HTTP callout.
B
I think we should either vote it up or down, right? If we vote it down, then Rama and whoever else is waiting for it can find some other solution, and if we vote it up, someone is going to work on it. So this is an occasion where, as long as we approve it, someone will actually deliver it.
E
Yeah, I'm still unclear. So what we want now is just to publish the metrics to some other server, because this user doesn't use Prometheus, right? They have some other method. But is that very common? I'm still struggling to understand whether we should take on the burden of doing this, and who else needs this functionality.
D
They have OpenTelemetry or Prometheus metrics, and that's how it's exposed. If you want to do something else, you would, you know, have something scrape it and then translate it. I know there's at least a Prometheus-to-Stackdriver adapter type of thing, so it feels a bit weird for us to go and add some custom metrics thing.
E
Yeah, I mean, basically we should have a standard way of doing this. Prometheus is already standardized. If you need something else, I would just say use OpenTelemetry and then create an exporter or a translation layer. Like John said, anything else is just unnecessary burden.
C
Refresh my memory, Mandara: this was because they didn't want to run a scraper in addition to everything else they had.
C
Well, John, does that address your concern? I mean, we would have one way of publishing, versus, I guess, pushing versus requiring pull.
C
Okay, John, did you want to talk through your doc? Let me share that.
D
I think it's a similar problem to solve, but actually the solutions will be entirely different for all three, because they were originally configured differently. So I've laid out some ideas here. I think some of them are fairly straightforward; some are a bit more complex. So if you want to go to the access logs one first, that one's...
D
So what that means is that, for the most common cases (just turning on access logs or turning on Prometheus), users would not need to go mess with the mesh config.
D
They can just create a Telemetry resource. For access logs, that's pretty much all we would need: they would just create the Telemetry API resource saying provider equals... I had built-ins/candidate here, but in the actual implementation, which I think might have merged already,
D
I think I called it envoy, but we can fix up the name if anyone has better ideas.
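Taken together, the shape being discussed is roughly this (a sketch only; the provider name envoy reflects the naming discussed above and may still change):

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  # Turn on Envoy access logs mesh-wide without touching mesh config.
  accessLogging:
  - providers:
    - name: envoy
```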
D
Nope? Okay, I will assume that means everyone agrees, which is good, because if we got hung up on access logs, we're going to have a big issue on metrics. Do you want to go to tracing next, maybe? I guess, sure, the next easiest one. Yeah, tracing's a bit tricky, because our current default is one percent sampling to the zipkin address, which for most users doesn't exist if they don't happen to install it.
D
So I believe our desired default is that there's actually no tracing enabled, and that makes the migration quite a bit more challenging, because we're actually trying to change the default behavior. Unless we decide that, hey, we still want one percent tracing and zipkin as the default behavior.
D
Yeah, well, only if they deployed it, it happened to be called zipkin, and it's listening on that port. But I think that's the standard port. No, it is a standard port, but deploying the system that way is just if you follow our samples, which may or may not be the case. Probably a bad practice too.
B
Yeah, so I think the risk that those users would have is actually fairly low.
B
As in, sorry, if we have it in the upgrade notes, then it is pretty clear what the equivalent config is that you need to install. We will just put it there saying: hey, if you are deploying zipkin like in the samples, then this is the Telemetry config that you need to put in to get that behavior back. And I suspect that's only going to be a small minority of cases.
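A sketch of what such an upgrade note might show, assuming a zipkin extension provider is defined in the mesh config (names are illustrative):

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
  - providers:
    - name: zipkin                 # must match an extensionProviders entry in mesh config
    randomSamplingPercentage: 1.0  # restores the old 1% sampling default
```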
D
The second option, and I think this is probably a really bad option, is we can make it so that users have to explicitly configure the tracer value, so they either have to explicitly turn it off or explicitly turn it on. That seems really bad, because that is going to...
D
You
know,
require
every
single
user
to
do
it,
which
is
pretty
annoying
the
other
one
which
is
kind
of
similar
to
what
you
guys
were
suggesting
is
kind
of
do
the
same
warning
thing
like
require
it
to
be
explicitly
set,
but
do
it
only
if
zipkin
service
is
actually
detected
so
that
way,
if
you
do
have
zipkin,
then
that's
fine.
You
just
know
that
you
have
to
make
a
small
config
change
and
it
will
give
you
a
warning
say
like
hey
you're
using
zipkin
set
tracer
does
again.
D
So you'd run precheck, and it would say: hey, you have a zipkin service; make sure that you use the Telemetry API instead of the current default. Yep.
B
Right, yes, but I think precheck is still, like, one level removed, right? It's kind of a more tactical, once-and-done type of thing.
B
Well, for migrations that have already happened, precheck helps them, and stuff in the future is going to use precheck as well, so yeah, I think precheck is necessary either way.
D
And if you run precheck, it doesn't know whether you explicitly configured zipkin as the tracer or you are using the defaults; they're not distinguishable at that level. At install time, we know if they set values.global.proxy.tracer, because I assume we'll still allow that setting for a few more releases, right? We're not going to force all users in 1.11 or 1.12 to use the Telemetry API; yes, we will continue supporting the values API for a little bit.
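For reference, this is the legacy values-based setting being discussed, shown here as an IstioOperator overlay (a sketch; the exact surrounding structure depends on how the user installs):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        tracer: zipkin   # the legacy setting that precheck would need to inspect
```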
D
Yes, so in that case we would need to be aware of what was set there, which I don't think... maybe we do, actually; it might be possible. I think we store it in the injector config map, so yep. But I think it will show the defaults, so you can't tell whether they explicitly set it or it's set because we have it in the values.yaml defaults.
E
So if a user did set global.proxy.tracer to a value which is not the default, even in that case they will need a migration, right?
D
No, that's... well, so long term we'll remove this whole values API, essentially. In the very long term we'll remove that, I think, but...
D
...we keep it for three releases or something, because it's pretty core functionality, and then we drop it. I don't think that's a challenging problem; I mean, we have a bunch of deprecations like that.
B
Yeah, I think that makes sense. I mean, I'm still kind of mildly opposed to adding things into istioctl install if we want to make precheck the tool, but...
B
...because it's going to have, like, lots of these tactical things, which will just needlessly add up there, right? Whereas with precheck, these are written as separate components; we just delete the file and it's done.
C
Well, I did have a question, though. I mean, the timeline we're talking about is 1.11 and 1.12. Do we think that's still the right timeline, given that we're at code freeze now and the APIs are still experimental? Is this the right timeline for this change, or should we bump it by one?
D
That's a good question. We could probably bump it by one. I wrote this a few weeks ago, so I was more optimistic back then, but...
D
Yeah, I mean, we could get the precheck in. I think the question I have is: do we want to be pushing users to the Telemetry API this aggressively, this early?
B
However, in the absence of inviting people to use the API, how are we going to get any feedback?
C
And so I just think, if we're going to change the default and suggest people use this, we should commit to making sure that makes the 1.11 cut and that it's findable. I don't know about a blog post, but sure; I don't know what the bar for a blog post is these days.
D
Yeah, so metrics, I think, is fairly tricky, because we have this extensive API through values.yaml which allows all sorts of customization. But fortunately we still want to keep the same defaults, where Prometheus is enabled automatically.
D
In terms of that, I think it's pretty simple. We can just add it as the default provider for metrics. If they want to remove it, they can set that explicitly to an empty list, or to whatever else they want, Datadog or whatever, as their default.
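A sketch of what overriding that default could look like for a user; the datadog provider name here is illustrative and would have to match an extensionProviders entry in mesh config:

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  metrics:
  # Replace the default prometheus provider, per the discussion above;
  # an explicitly empty providers list would instead remove the default.
  - providers:
    - name: datadog
```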
D
So that's no problem, but one issue we'll have is that we need to make sure that we don't end up with both the EnvoyFilters that they've configured and the new config that we're generating based on the Telemetry resource. And we also want to make sure that all the settings that we had in values.yaml still work. So if they set, like, I can't remember what's allowed, some config override or something, then that should also work.
D
So there are a few options we discussed, which I'm... actually, yeah, it doesn't look like I added them all to the doc. So, for most users, just enablement is the biggest thing.
D
So I put a bit of a blurb about that. I think that, just looking at the values enablement, we can add some moderately complex if statements to the mesh config to configure the default provider for various things, to preserve the functionality, and then we just remove the EnvoyFilters.
D
The tricky part is all the other customization, like the config override. I think we have a few options. One is we can just make it so that if they use config override, that's fine, but we just use the EnvoyFilters instead of looking at the Telemetry resource. We can also just immediately block the install, or warn that it has been deprecated, or something. I think the users of config override are fairly advanced users, and there are likely not too many, so we're probably not breaking, like, someone that's trying to install Bookinfo.
D
The other option is we could read the values.yaml into pilot and actually use the config overrides, like really the whole values.telemetry, within pilot, and kind of directly read that to configure the telemetry.
D
I wasn't even thinking of it as part of upgrade tooling; just in the Helm template, like in the Helm charts, we could say: if config override, then create a Telemetry resource. Just like we create the EnvoyFilter, but instead we create a Telemetry.
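A hypothetical sketch of that Helm-chart conditional; the values path and the resource emitted are assumptions for illustration, not the actual chart contents:

```yaml
{{- if .Values.telemetry.v2.prometheus.configOverride }}
# Instead of rendering the legacy EnvoyFilter, emit a Telemetry resource.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: prometheus-default
  namespace: istio-system
spec:
  metrics:
  - providers:
    - name: prometheus
{{- end }}
```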
G
So, yeah, the config override for Prometheus is widely used, and people use it to customize metrics.
B
Yeah, which is why I think that, like, precheck slash migration tooling would actually be better. It takes this and says: okay, here is what you should actually configure; and then lets them put that in their CI/CD and all that, right?
D
Okay, so would it make sense to have: if they have these customizations... The enable and disable case, which is what I expect of most users (actually, most users probably don't touch it at all), we can do with no problem. We can just configure the mesh config for that, and then for anything else we can just continue to create the EnvoyFilters and tell them that they should migrate off them.
D
I'm a bit worried about ending up with both a Telemetry resource and the EnvoyFilters and then getting double config applied, though.
B
No, I don't think everything can be done. The Prometheus part, maybe that can be done, right, because now we support overrides. But okay, let's separate Prometheus from Stackdriver.
C
Yeah, I mean, I think a lot of that pass-through is logging related, right, and we just don't have many knobs on logging right now. When that comes, that should be doing okay.
D
Yeah, I guess the other concern is when you actually do want to cut over. Like, I've been trying to avoid having some special logic about EnvoyFilters in the pilot code, where we know that this is the Prometheus EnvoyFilter or whatever. But we need a way to gracefully migrate over, and I don't think you ever want to have two stats filters or zero stats filters. And there's no way to, like, atomically remove the EnvoyFilter and add the Telemetry, right?
B
Atomically? Well, I mean, no, yeah. No, Kubernetes doesn't.
B
Yeah, I think that makes sense. We have similar concerns with even the Wasm API as well, right? Like, you have the new thing coming up, you still have your old filter deployed, and there is no atomicity, so you either get rid of everything and then you add the new one, or... actually, that's it, right? The option is to get rid of everything and then add the new one. Okay, so the cost of that is there is, like, a blip.
C
So, I mean, in some sense it's not that big of a deal, right? It's a 15-second interval for scrapes, or one-minute reporting, right? So given the time between the two things, you're not going to lose a reporting cycle.
D
No, I think that's good. Okay.
C
So, since 1.11 is wrapping up, I just wanted to add a spot in case there's anything we need to discuss about things that still need to be done for 1.11. Is there anything that anyone knows of that we need to address?
C
Okay, I guess not. So the next thing I wanted to bring up: it's time to present our plan for 1.12. In two weeks, teams are starting to present roadmaps for 1.12.
C
I think getting a roadmap together in the next two weeks is probably aggressive, and the next dates for presenting are in August, when I'm going to be out of the office. So I was hoping I could find someone that would be willing to take the roadmap; I will help get it together, but I won't be there to present. So, Madar?
B
So I will be there to present. I am out the second week of August, but as long as that's not the week when there is a TOC meeting, I think. Real quick, okay.
C
There's so much API work, including some of the stuff that we just discussed. Are there other things that we should be... like, are you still here? Is there rate limiting work? Is there Envoy-side work? What kind of things should we be adding as focus items for 1.12?
F
But okay, here's another one that's even older: HBONE telemetry and policy.
C
Do we need to... I feel like we keep putting it off, and that's all. I'll put it back on here: just general dashboard work, or support for multi-cluster scenarios.
D
I think we're still waiting for some lawyer to say yes, but I'm sure they're going to, if we bug them enough to finally give us an answer. Okay, but we don't need the upgrade necessarily; that's just a sample, and there are no really important features that I...
C
...saw. So, okay, I haven't seen any real big other issues with integrations.
E
Good. So, yeah, I mean, once these are done, if we provide OTel, we can have nice blogging and marketing around it, and possibly, you know, invite some of the other vendors, like Datadog and such, to say: hey, we have a better integration story; we provide OpenTelemetry formats for everything, and you can go and run with it.
E
I don't know the answer. I mean, customers are living with it right now. If it's supported in the Telemetry API, dynamic would be good, because otherwise we'll have a lot of disconnect: you would change something in the API which is not applied in Istio, and then you have to go ahead and restart all your workloads, right?
C
Okay, well, unless there's anything else, does anyone else have anything they want to discuss, or should we...