From YouTube: Policies and Telemetry WG 2018-09-26
B: We should probably get started, I think it's time. Hello everyone, thanks for joining. I guess this is the last meeting in September; it's crazy to think about that. There are a couple of items on the agenda. Before we start, is there anyone who wants to introduce themselves, or who is new to the meeting, or anything like that?
D: Sure, I mean, it's here in the doc. Basically, Kiali is already available as an option in Helm, but we're going to change things so that, where currently in the Istio demo you get service graph, we'll change it to include Kiali. And then the documentation: we currently have documentation on istio.io, as part of the telemetry walkthrough, that says to look at service graph; we'll change that to say look at Kiali. And then, as an addition...
G: So I think the only thing that would mean is that, in the past, Kiali has lagged behind Istio releases, in the sense that when 1.0 came out it took them several weeks to support it. Going forward, if that is the only place to get a service graph, it means they have to be in sync with the Istio release. We just have to make sure that whatever we release clearly works with it, right?
D: Yeah, that's a great concern. I'll add that to the doc as a concern and then see what we can say to mitigate it. I think one of the things the end-to-end test should do is verify that the telemetry Istio is putting into Prometheus is what Kiali is expecting, and that there's no mismatch there. If we have that, then we can see when things break, like when somebody goes and changes an attribute name.
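A minimal sketch of what such an end-to-end check could look like, assuming the standard Prometheus HTTP API; the metric names below are illustrative placeholders, not the actual set Kiali depends on:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Hypothetical e2e check: ask Prometheus which metric names exist and
// verify that the ones a consumer like Kiali depends on are present.
// The Prometheus address and the expected names are assumptions.
func main() {
	resp, err := http.Get("http://localhost:9090/api/v1/label/__name__/values")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result struct {
		Data []string `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, name := range result.Data {
		have[name] = true
	}

	// Illustrative names only; the real list would come from Kiali.
	for _, want := range []string{"istio_request_count", "istio_request_duration"} {
		if !have[want] {
			fmt.Printf("MISSING %s: an attribute rename would break Kiali\n", want)
		}
	}
}
```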
L: So, just to add a bit of context on the 1.0 release: there were some really big changes that came in before it, around all the configuration, and that's really why Kiali was so far behind at that point; it was quite a big task for the Kiali team to update to that. Unless there's something major coming down the pipeline, I don't really see Kiali being too far adrift of where Istio is. It was really just the size of that change that exacerbated the problem for 1.0.
D: So I think that sounds like an interesting thing to talk about in the future. I'm kind of seeing it as an integration, like we have with Prometheus and Grafana; it's a separate piece, just for the initial proposal and the initial change.
F: The one thing I did want to say is, you'll notice on the doc, we put in when we wanted to do this. We thought the October LTS release, the 1.1 release, would come out too early for this, so we're targeting the next LTS release after that, which I assume is going to be around the January/February timeframe. I just want to make sure people are aware that we're not looking to go in this October, but the following release after that.
I: Yeah. So it's mostly a set of guidelines: if there's an alpha feature in the product, it must be disabled by default, it must not affect the behavior of the rest of the system when it's turned off, and you need some basic documentation and a basic quality level. That's it. So if this stuff is done through different install options that are not selected by default, then I think it'd be easy to put it into the product.
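As a minimal illustration of those guidelines (a sketch with invented names, not Istio's actual install code): a default-off option that, when disabled, leaves the existing pipeline untouched.

```go
package telemetry

// AlphaOptions models install-time flags for alpha features. The zero
// value means everything is off, satisfying "disabled by default".
type AlphaOptions struct {
	EnableKiali bool // hypothetical flag name, used only for this sketch
}

// Addons wires up telemetry add-ons. When the alpha flag is off, the
// result is identical to today's behavior, so the feature cannot
// affect the rest of the system.
func Addons(opts AlphaOptions) []string {
	addons := []string{"prometheus", "servicegraph"} // existing defaults
	if opts.EnableKiali {
		addons = append(addons, "kiali")
	}
	return addons
}
```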
I: In fact, I think at that point, in the 1.1 release, we can put a note in the service graph docs saying we're expecting to deprecate this by 1.2. That'll show up in the release notes, and then people will be aware and life is good.
N: Sorry, this is Julian, I'm from LightStep. I checked in a couple of weeks ago saying that we were starting to look at exposing LightStep as a tracing option, in addition to Zipkin. We just have some PRs coming, waiting for the CLA to be processed; hopefully that happens soon. Thank you to Jeff, who was a great resource.
N: We reached out and had some questions. And also thank you to the people who built the Wavefront adapter. Even though these particular PRs are not building an adapter out (they're actually just exposing the LightStep backend through Envoy directly), we're also hoping to do an adapter as well, and having the Wavefront one there is great, because it's a great resource to look at to see how to build these. So thank you to whoever did that and the team behind it.
N: Yeah, so it's possible I did this wrong, but I had submitted a PR yesterday and it said, oh, you haven't signed the CLA. So then someone at LightStep with more authority than me signed it for our organization, and then there was a note that says it sometimes takes a couple of days to go through.
N: It's possible that something weird happened. I know that I initially submitted the PR and the commits were from the wrong GitHub email address, and then I changed it over. So it's possible that when I changed it over it didn't pick up, and I need to close it and reopen it. I can try that. But when I click on the CLA link, it still says, oh, we have nothing for you, so I'm not sure exactly how that process works.
B: Awesome. So I added this updates-and-recent-changes item because there was some discussion last week about some of the ongoing things with out-of-process adapters and others, and I know there was discussion in the TOC about a contrib repo or something along those lines. Maybe Mandar could just provide a summary of what happened there and how it relates to out-of-process adapters.
I: The procedure is basically that we will encourage people to advertise their adapters on istio.io. I'm going to work on this over the weekend to put in an even easier process to add yourself to istio.io, and that's basically it. If you're developing adapters, you can put them in your own repo, run your own CI systems for testing and all that kind of stuff, host your own website, etc. We will provide the discovery, and that's it.
P: So, to make this usable: I think initially we want to start with support for, first, mTLS for the adapter, and second, API token auth for adapters. So I think, yeah, the question is whether that's a necessary authentication mechanism or something we want to do at all.
G: Yeah, so we got rid of basic auth, but we have kept API key and token, because that is how many people use these kinds of things. And then I think your comment on that was whether we should or shouldn't still do it, and now we have solicited feedback from the community and no one has said anything.
G: Well, yes, but if someone doesn't want to implement OAuth, then an API key is the simplest way of enabling external parties to talk to you. Yes, it's not the best way, but it is a very commonly used way. So even though this is new code, they may not want to immediately support OAuth 2, and the additional complexity of an API key isn't that much. That's true, yeah.
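To show how little complexity a static API key adds, here is a minimal sketch of key validation in front of a hypothetical adapter backend endpoint; the header name, environment variable, and route are invented for illustration:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"net/http"
	"os"
)

// requireAPIKey compares a shared secret from a request header against
// the expected value. "X-API-Key" is a common convention, not anything
// Istio-defined.
func requireAPIKey(expected string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := r.Header.Get("X-API-Key")
		// Constant-time compare avoids leaking the key via timing.
		if subtle.ConstantTimeCompare([]byte(got), []byte(expected)) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	key := os.Getenv("ADAPTER_API_KEY") // hypothetical configuration source
	mux := http.NewServeMux()
	mux.HandleFunc("/report", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":8080", requireAPIKey(key, mux))
}
```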
G: But if you think about it, this is the trajectory we followed with all other features: the feature comes out, it's functional, it meets basic requirements, and then we do the performance work and make it better. So unless the performance is prohibitively bad in an end-to-end test, we should be able to reach beta. If it is bad, then we'll decide whether we reclassify it and continue to call it alpha, or not. Yeah.
Q: So I just wanted to mention, about the different adapter: I was able to pull it through, and we have the repository published, along with the Docker image and all the other configuration. There were a few gotchas because it was out of tree, so, as I mentioned earlier, I wanted to create a walkthrough for out-of-tree adapters. Should I just add a new issue on the istio.io repository for that, and...
B: Okay, and there is something I put on the agenda. I think two and a half weeks ago, at the last Policies and Telemetry working group meeting, we talked about how the Envoy stats collection was fairly expensive; I think it was like a ten percent overhead, plus you had to run this statsd-to-Prometheus collector. There have been PRs that have gone into the mainline branch, and I think are now being cherry-picked over to the release-1.0 branch, that completely eliminate this.
B: That replaces the previous process. They work now by exposing a port on Envoy that points at the existing Prometheus endpoint on each of the proxies, plus some config in the Prometheus add-on that knows how to scrape that directly. So we don't use annotations; the annotations can be saved for application scraping purposes. That should be a big win.
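A quick way to sanity-check that direct endpoint from inside a pod is to hit it over HTTP. The port (15090) and path (/stats/prometheus) are assumptions about what the merged PRs expose, so adjust them if your build differs:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Fetch Envoy's native Prometheus stats endpoint on a sidecar and dump
// the raw text-format metrics. Port and path are assumptions about the
// merged configuration, not values confirmed in this meeting.
func main() {
	resp, err := http.Get("http://127.0.0.1:15090/stats/prometheus")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body)
}
```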
B: A lot of what we do is drop a ton of the metrics coming out of each proxy that aren't needed or are conflicting, and focus just on things that let us do targeted queries about which versions of the pilot-generated config the Envoys have, so that we can track config rollout across proxies, etc. So that's sort of our approach going forward, and I think it has been paying off fairly well. And I think that's what the next call-out is somewhat related to.
G: The next bullet is actually that there was another optimization that just went into Mixer, which saved about 12% CPU across the fleet, and there is another one on the Mixer client which essentially disables delta compression in batching, and that is also saving about 10 to 12 percent. Have you looked at what it does to Mixer's CPU?
I: It's down quite a bit, yes. I wonder, though: have you looked at the implementation of the delta compression? Because it has a kind of fancy name, but it's not really compression. It should be a trivial amount of work; I'm surprised it's consuming that much CPU time, unless there's a bug.
G: In terms of actual time on the flame graph, it was consuming like 30 percent of the time, I believe. (Wow.) And if I have it right, it actually affects the total CPU in a nonlinear way; that's been my observation so far, but some of those measurements are still coming in for the second part. The first part is all measured, and we know that it saves CPU. Okay.
I: You're going from having 100 attributes on the wire to having three or four, so the upfront cost of doing this deduping, or whatever the delta encoding costs, should be offset by the cost of transmitting everything over TCP and doing all that kind of stuff. So, anyway, I still think there's a bug there. If you fix the bug, you might end up being able to save even more CPU time in total.
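For context, the delta encoding under discussion is conceptually just "send only the attributes that changed since the previous message". A minimal sketch of the idea (not Mixer's actual implementation or wire format):

```go
package main

import "fmt"

// deltaEncode returns the attributes that differ from the previous
// message plus the keys that were removed, which is all a receiver
// needs to reconstruct the full set. Conceptual sketch only.
func deltaEncode(prev, curr map[string]string) (changed map[string]string, removed []string) {
	changed = map[string]string{}
	for k, v := range curr {
		if pv, ok := prev[k]; !ok || pv != v {
			changed[k] = v
		}
	}
	for k := range prev {
		if _, ok := curr[k]; !ok {
			removed = append(removed, k)
		}
	}
	return changed, removed
}

func main() {
	prev := map[string]string{"source.name": "a", "destination.name": "b", "request.path": "/x"}
	curr := map[string]string{"source.name": "a", "destination.name": "b", "request.path": "/y"}
	changed, removed := deltaEncode(prev, curr)
	fmt.Println(changed, removed) // only request.path changed, so only it goes on the wire
}
```

Each step here is a map lookup, which is why a 30 percent share of a flame graph suggests a bug rather than inherent cost.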
G: I can point you to the code, and maybe there is something there, but after we did the tests, I believe Wayne has a PR where he just completely removed it from the codebase. I mean, we can always add it back; it's just a revert. So I'll follow up with you on that. (Yeah, I'd like to see it.)
G: And this is just a very here-and-now thing: there is a flaking issue with Mixer, where Mixer starts up with some, or all, or most configuration missing. Several people, including Scott and several others, have reported it; we have a good theory, and Peter was able to reproduce it. So we have a way forward. It's a P0, and the PR to fix it has already been sent out. Okay.
B: And we still don't know, at least as far as I know, what's actually causing it. There was one report on the mailing list, and a separate, distinct report in the GitHub repo, of people that were running, and after a little while of running they all of a sudden see instances with duplicated dimensions, which then manifests as Prometheus failing to add new metric data, because all of a sudden the dimensions don't line up.
B: Let's say I have a metric that has three labels on it, three dimensions: A, B, and C. You're getting those in for a while; then all of a sudden you get a metric in that has A, A, C, or A, A, A. So it has the same number of labels, but one dimension field is repeated with the same values, and I don't know what could explain that fully when I look at the code.
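To make the failure mode concrete: Prometheus client libraries reject a metric whose label set repeats a name, so once an instance starts emitting duplicated dimensions, nothing more can be recorded for it. A minimal reproduction of the symptom with the Go client (this mimics the reported state, not the still-unknown root cause in Mixer):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// A well-formed metric with labels a, b, c registers fine.
	good := prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "requests_total", Help: "demo"},
		[]string{"a", "b", "c"},
	)
	fmt.Println(prometheus.Register(good)) // <nil>

	// The same shape with a duplicated label name ("a", "a", "c") is
	// invalid, matching the duplicated-dimensions reports.
	bad := prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "requests_total_dup", Help: "demo"},
		[]string{"a", "a", "c"},
	)
	fmt.Println(prometheus.Register(bad)) // duplicate label names error
}
```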
B
At
that
point,
it
seems
like
there
Court
is.
It
seems
like
when
you're
getting
that
that
position
all
of
a
sudden.
Now
you
can
no
longer
do
anything,
so
the
only
solution
is
restarted.
The
pod
and
all
of
a
sudden
everything
starts
working
again
for
a
little
while,
so
I
have
not
been
able
to
reproduce
this
in
any
way,
I've
got
clusters
that
I've
had
up
for
months
that
I'm
running
data
through
I
haven't
seen
it
I
look
at
the
code
and
it's
kind
of
hard
to
follow
all
of
the
ways
that
everything
gets
created.
B: Because of that, it doesn't match, and then this manifests as Prometheus saying you don't match the schema, because previously you used labels A, B, and C, and now you don't have label B or something. It's not always the same in the reports; it's not like the same label is getting inserted twice, it just seems to be a sort of random selection.
G: Who else has seen this? I think we need more data. One more thing that caught my eye, and that I mentioned to Doug, is that someone also reported that the Prometheus endpoint itself also gets into a bad state, right? The scraper itself errors out. So the Prometheus endpoint, the one on port 42422: if you do an HTTP GET to it, that also starts saying, I don't know what to do.
G: So, on the metrics: it seems that there is sometimes a rejection of incoming instances by the adapter, but even if the rejection actually takes place, it should not corrupt the in-memory state of the metrics registry inside the adapter. It seems that is also happening, so Prometheus cannot scrape anything. Now, these things are all very random, but...
B: This crosses a lot of time, but it's happening on clusters that have been through upgrades, starting with 0.5 all the way up, so maybe there's old config somehow sticking around. I think there are lots of ways we could theorize, but I have yet to be able to reproduce it, and I'm hoping someone has some insight into how this could be happening. There might be something I'm just completely missing when I read through the code.
I: That does seem problematic. I wonder... no, I was going to say: what if, in particular cases, one of the fields is not available and we just end up passing whatever junk was in that same data structure to the adapter again? But that would be a once-in-a-while situation, not an always kind of situation.