From YouTube: Policies and Telemetry WG Meeting - 2018 12 05
A: It needs to have some value? Yeah, like, why does it need to have some value? Because the hostname and the other parts of the server are actually more relevant there, right. What namespace it belongs to, or what the name of the service entry is, and all that, is not really relevant, and we don't want to draw attention to that.
B: So if you don't say anything, then, depending on the config for the instances, it will become "unknown" in the metrics, and maybe that's okay. I don't know if you want to distinguish between "we couldn't find a value for it" and "this is a service entry or not"; that would be the only reason to set it to a value.
A: I think that, from a mesh-visualization perspective, probably not, because the destination is what really matters in those cases. But from a debugging perspective, yes. If you want to know why this traffic is going here, and the answer is "because of this service entry" or "because of this virtual service", then that information is useful. But to just know what is going on, what the actual destination is is more important.
B: The next thing, and I know this is related: there was a pull request that wanted to add gRPC status codes and messages to all the metric instances, and I sort of said, hey, hold on, let's talk about this. I don't know if people have looked through the PR. I pushed back on messages in general, because they have unknown cardinality; it's basically infinite. (Which PR? It's 10127.)
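The cardinality concern raised here can be made concrete with a toy sketch (hypothetical names, not Mixer code): a metrics backend creates one time series per distinct label-value combination, so a bounded label like a gRPC status code stays small, while a free-form message label grows without limit.

```python
# Toy metrics registry: one counter per unique label combination,
# mimicking how a Prometheus-style backend creates a time series
# per distinct label set.
class CounterVec:
    def __init__(self):
        self.series = {}  # label tuple -> count

    def inc(self, **labels):
        key = tuple(sorted(labels.items()))
        self.series[key] = self.series.get(key, 0) + 1

# Bounded label: gRPC status codes form a small fixed set.
requests = CounterVec()
for code in [0, 0, 14, 0, 13, 14]:
    requests.inc(grpc_status=code)
bounded_series = len(requests.series)  # only 3 distinct codes seen

# Unbounded label: free-form error messages are unique per failure.
errors = CounterVec()
for i in range(1000):
    errors.inc(message=f"rpc failed: conn reset, attempt {i}")
unbounded_series = len(errors.series)  # grows with every new message
```

Every new message text mints a new series, which is why the PR's message label was the part pushed back on.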
E: It will show up in the response code, yeah. So there is the actual response code, and then the gRPC code layered within that code, right. If your gRPC server sends an error, it actually responds 200 with a message containing the error. Yeah, and I don't care about the response code at the lower layer, because that's not very useful.
E: Using an expression in the instance config, instead of putting in "equals 0" or "equals 200", you could just say "is it OK?". You can't do that in Prometheus: once the data is in Prometheus, you can't ask "is this OK?". So maybe give it a choice; make it configurable whether the instance itself translates the gRPC code into an HTTP-style code, and let people make that choice.
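The point about deriving "is it OK?" before ingestion could be sketched like this (illustrative only; the label names and helper are made up, not the actual Mixer instance config): compute the OK verdict while building the metric labels, because a gRPC error typically rides inside an HTTP 200 and Prometheus cannot re-derive the verdict afterwards.

```python
# Derive an "ok" label at instance-generation time, before the data
# reaches the metrics backend. grpc-status 0 means OK; any non-zero
# gRPC code is an error even though the HTTP status is usually 200.
GRPC_OK = 0

def instance_labels(http_status, grpc_status=None):
    """Build metric labels, deciding response_ok up front."""
    if grpc_status is not None:
        ok = grpc_status == GRPC_OK       # gRPC verdict wins
    else:
        ok = 200 <= http_status < 400     # plain HTTP verdict
    return {"response_code": http_status, "response_ok": ok}

plain_success = instance_labels(200, grpc_status=0)
# A gRPC INTERNAL error arrives wrapped in an HTTP 200:
layered_error = instance_labels(200, grpc_status=13)
http_error = instance_labels(503)
```

Once `response_ok` is a label, a simple Prometheus query can filter on it without re-interpreting layered codes.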
E: The problem there is that our check cache was ignoring certain values you can put in the check key, so that led to false hits. The solution, at least for the short term, is to disable the source-ID check cache, and the long-term plan is to fix the values inside Mixer so that they are always consistent with the types we expect; after that, we're going to try to make sure that the cache actually works.
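The false-hit bug described here is the classic failure of a cache key that omits part of the request. A minimal sketch (toy code, not the actual Mixer cache; the attribute names are just examples):

```python
# A check cache keyed on only some request attributes returns false
# hits whenever the ignored attributes differ between requests.
class CheckCache:
    def __init__(self, key_fields):
        self.key_fields = key_fields
        self.store = {}

    def key(self, attrs):
        # Only the configured fields participate in the key.
        return tuple((f, attrs.get(f)) for f in self.key_fields)

    def get(self, attrs):
        return self.store.get(self.key(attrs))

    def put(self, attrs, verdict):
        self.store[self.key(attrs)] = verdict

req_a = {"source.uid": "svc-a", "destination.service": "db"}
req_b = {"source.uid": "svc-b", "destination.service": "db"}

# Buggy cache ignores source.uid: svc-b wrongly reuses svc-a's verdict.
buggy = CheckCache(key_fields=["destination.service"])
buggy.put(req_a, "ALLOW")
false_hit = buggy.get(req_b)   # "ALLOW" -- a false hit

# Fixed cache includes every attribute the check depends on.
fixed = CheckCache(key_fields=["source.uid", "destination.service"])
fixed.put(req_a, "ALLOW")
miss = fixed.get(req_b)        # None -- svc-b must be checked for real
```

Disabling the cache (the short-term fix) trades these false hits for extra adapter calls until the key covers every relevant value.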
A
Okay,
there
is
a
there
is
a
server
side,
cash
on
mixer
and
we
had
to
disable
it
in
105
for
survival
redness
issues.
The
question
is:
that
means
whatever
cash
whenever
it
was
hitting
the
cash
and
it
wasn't
calling
you
dr..
Now,
it's
not
going
to
do
that.
So
how
big
of
a
performance
issue
is
it
going
to
be
for
you?
Oh,
oh,.
B: There are some reports, and I can link them here in a second: lots of reports have come in from a couple of users about metrics having duplicate labels in Prometheus, and this stops all collection of metrics. We couldn't figure out how this was happening; it seemed to be impossible. But just this morning I was able to take the Prometheus client library, change one of their unit tests, and reproduce the issue. So I think, hopefully, within the next couple of days.
A: All I can say is stay tuned. Just so everyone is aware: this issue also has a poisoning effect on the registry. Once it happens, collection either stops or cannot proceed. It's not just one request hitting the issue and then it's done; once it hits this issue, metrics collection as a whole stops working correctly. So it is extremely important for us to find a solution.
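The "poisoning" behavior described, where one bad series breaks collection as a whole, can be sketched as a registry whose gather step validates every metric family and fails wholesale (hypothetical toy code, not the Prometheus client itself):

```python
# A scrape gathers every metric family at once; if any one series is
# invalid (e.g. a duplicated label name), the whole scrape fails --
# so a single bad series poisons all metrics collection.
class Registry:
    def __init__(self):
        self.families = {}  # metric name -> list of label-pair lists

    def record(self, name, label_pairs):
        self.families.setdefault(name, []).append(label_pairs)

    def gather(self):
        # Validate every family; one bad series fails the entire scrape.
        for name, series in self.families.items():
            for pairs in series:
                names = [k for k, _ in pairs]
                if len(names) != len(set(names)):
                    raise ValueError(f"duplicate label in {name}")
        return {n: len(s) for n, s in self.families.items()}

reg = Registry()
reg.record("requests_total", [("code", "200")])
reg.record("bytes_total", [("direction", "in")])
healthy = reg.gather()  # both families collected

# One series with a duplicated label name poisons the entire scrape:
reg.record("requests_total", [("code", "200"), ("code", "500")])
try:
    reg.gather()
    poisoned = False
except ValueError:
    poisoned = True  # even bytes_total can no longer be collected
```

This is why the duplicate-label reports were treated as a collection-wide outage rather than a per-request glitch.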
B: So, hey guys, thanks for the patience on that, I guess. The other thing I wanted to bring up: there's a blog on IP whitelisting, and there were a couple of issues raised on IP whitelisting, in terms of support for net.IP, and IP addresses as a value type. I think some of the work that you've been doing might actually touch on that, because we'll have consistent use of net.IP.
B: So I think one outcome is, and I don't know if anyone here uses IP whitelisting, but we need a task and an FAQ and sort of more documentation on how to use this checker, because apparently enough people are trying to use it. The other thing was the response codes: we return 404s when things aren't in the list, which might be okay if we're trying to hide the fact that we're using a list, but we also return a message.
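The inconsistency being pointed out, denying with 404 as if the resource didn't exist while also returning a message that reveals the list, might be sketched like this (status choices and function name are illustrative, not the actual listchecker behavior):

```python
# A list check that denies with 404 pretends the resource doesn't
# exist -- but attaching a "not in whitelist" message defeats the
# disguise. Returning 403 with the message is at least consistent.
def check_ip(ip, whitelist, hide_existence):
    if ip in whitelist:
        return 200, ""
    if hide_existence:
        # To really hide the list, the body must not mention it.
        return 404, ""
    return 403, f"{ip} is not in the whitelist"

wl = {"10.0.0.1", "10.0.0.2"}
allowed = check_ip("10.0.0.1", wl, hide_existence=False)
denied = check_ip("10.0.0.9", wl, hide_existence=False)
hidden = check_ip("10.0.0.9", wl, hide_existence=True)
```

Either behavior is defensible; the bug is mixing them, a 404 status paired with a message that admits a list exists.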
A: Yeah, so in our performance testing, the large-scale cluster testing, one of the things we saw was this issue of Mixer goroutine growth, and eventually it goes out of memory. It doesn't always happen, but it happens often enough to be a concern, and I found several different issues, listed in the notes. But essentially everything came down to excessive contention on some common resource, and one of the final things that we found was the Prometheus registry itself.
A: Since we don't really have any backpressure mechanism in Mixer yet, we keep accepting more work from the front while the Prometheus adapter has ground to a crawl, and then it's just a vicious cycle: goroutines continue growing, memory continues to grow, and we go out of memory. So I have a fix from our side on the way the registry is used, and we also have ideas about optimizing that further or making it better.
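The missing-backpressure failure mode, accepting work faster than a stalled adapter drains it until memory runs out, can be sketched with a bounded queue that rejects new work instead of growing (a toy model; Mixer itself is Go, and these names are invented):

```python
from collections import deque

# Unbounded intake: the queue grows without limit when the consumer
# (e.g. a slow metrics adapter) can't keep up -- the OOM path.
class UnboundedQueue:
    def __init__(self):
        self.items = deque()

    def submit(self, item):
        self.items.append(item)  # always accepted
        return True

# Bounded intake: past the limit, new work is rejected up front,
# pushing pressure back to the caller instead of into memory.
class BoundedQueue:
    def __init__(self, limit):
        self.items = deque()
        self.limit = limit

    def submit(self, item):
        if len(self.items) >= self.limit:
            return False  # shed load instead of growing
        self.items.append(item)
        return True

unbounded, bounded = UnboundedQueue(), BoundedQueue(limit=100)
for i in range(10_000):  # consumer is stalled: nothing is drained
    unbounded.submit(i)
    bounded.submit(i)
growth = len(unbounded.items)  # 10000 -- heads toward OOM
capped = len(bounded.items)    # 100   -- memory stays bounded
```

Rejecting (or blocking) at intake is the essence of backpressure: the front end learns about the slow backend instead of buffering the difference forever.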
A: A similar issue, I mean an excessive-contention issue, was also observed in Jaeger trace generation. It was attempting to create a random number, and in order to create that number it was attempting to grab a lock, and if you have five thousand goroutines all trying to do that, then that doesn't end well. So again, I think they were...
A: They were somewhat related; there was a common theme. One other thing I observed, and it may have been a secondary observation based on some other bad things already happening in the system, is the way the attribute bag itself is pooled. We don't create a new one; we put them back in the pool, and the way sync.Pool works...
A: Both those classes of goroutines are contending on the same lock to check something out of the pool, and that also led to enormous contention. So again, I did a local fix, but we'll have to revisit that fix once the other things are fixed and see whether it is still an issue. It does seem like we either should not use a pool, or should use separate pools for the different classes of goroutines. And again, the fact that we don't...
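The per-class pool idea could be sketched as follows. Python has no sync.Pool, so a lock-guarded free list stands in for it here (purely illustrative; the worker classes are invented):

```python
from threading import Lock

# A sync.Pool-like free list guarded by one lock. When every class of
# worker shares one pool, they all contend on this single lock.
class Pool:
    def __init__(self):
        self.lock = Lock()
        self.free = []
        self.checkouts = 0  # how many times this lock was taken to get

    def get(self):
        with self.lock:
            self.checkouts += 1
            return self.free.pop() if self.free else {}

    def put(self, obj):
        obj.clear()  # reset before returning to the pool
        with self.lock:
            self.free.append(obj)

# Shared pool: request handling and telemetry flushing hit one lock.
shared = Pool()
for _ in range(100):
    bag = shared.get()   # request path checks out an attribute bag
    shared.put(bag)
    rec = shared.get()   # telemetry path checks out its own object
    shared.put(rec)

# Separate pools: each class of worker contends only with its own kind.
request_pool, telemetry_pool = Pool(), Pool()
for _ in range(100):
    bag = request_pool.get()
    request_pool.put(bag)
    rec = telemetry_pool.get()
    telemetry_pool.put(rec)

shared_contention = shared.checkouts                        # 200 on one lock
split = (request_pool.checkouts, telemetry_pool.checkouts)  # 100 on each
```

With thousands of concurrent workers, halving (or better) the traffic on any single lock is exactly the contention relief the speaker is describing; the other option mentioned, not pooling at all, removes the lock entirely at the cost of more allocations.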
A: In addition to that, we have also decided that we should go for a simplified configuration model, and all that really means is that every adapter now gets to define its template very precisely. There is no mapping, no trying to match some other template or anything like that. If you are the Prometheus adapter and you need your input in a very specific form, well, that's what you declare and that's what you get. And then the new features that this makes possible are protocol capture...
A
So
we've
had
this
long
rectify
and
how
we
deal
with
a
telemetry
and
then
even
it
was
mostly
related
to
telemetry,
but
how
we
deal
with
a
service
directly
calling
something
or
directly
sending
some
telemetry
data,
and
how
do
we
capture
that?
How
do
we
insert
it
back
into
the
pipeline
so
that
we
can
do
the
normal
stuff
that
we
we
do?
A
So
that's
that's
kind
of
made
possible
or
easier
by
this
week's
mixer
we
to
model
and
then
payload
mutation
previously
because
mixer
was
like
since
today,
mixer
is
outside
mutating
payload
is
not
that
simple
right
we
actually
have
talked.
We
have
several
proposals
which
are
all
kind
of
very
complicated
of
either
having
a
stream
go
off
to
somewhere
else.
It's
muted,
didn't
comes
back
or
you
have
to
buffer.
So
having
mixer
functionality
inside
or
more
itself
means
that
you
can
now
mutate
payloads.
A
So,
even
in
many
ways,
mixer
now
becomes
this
very
configurable
and
flexible,
but
still
portable
filter
chain
extension
I
mean
which
it
was
before
too,
but
now
it's
kind
of
more.
It's
clearer
that,
yes,
it
is,
it
runs
inside
envoy
and
it
is
a
way
to
write
new,
add
new
functionality.
You
know
in
a
much
easier
way.
A: Protocol capture and payload mutation are completely new things that become possible. This is how the transformation pipeline looks. We now have this concept of an ingestion adapter, and all that really is is something that takes input from somewhere. So far, you can think of what we have today as the Envoy ingestion adapter, which is the Mixer client: it sits in the Envoy filter chain and takes its input there.
A
It
takes
as
input
whatever
on
what
has
to
offer
headers
and
other
metadata
and
things
like
that,
and
then
it
converts
that
into
attributes.
So
that's
the
first
kind
of
level.
That's
that's
today's
ignition
adapter
and
we
just
have
one
so
now
we
have
formalized
it
and
we
can
now
have
many
other
kinds
of
initial
adapters.
So
this
is
where
the
directive
Lea
Thompson
right.
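The ingestion-adapter idea, take whatever the transport offers and normalize it into the flat attribute vocabulary, might look roughly like this (the function and attribute names are invented for illustration; this is not the Mixer client API):

```python
# An ingestion adapter turns transport-specific input into the flat
# attribute vocabulary the rest of the pipeline understands. Today's
# single instance is the Envoy one: HTTP/2 headers -> attributes.
def envoy_ingestion_adapter(headers, metadata):
    """Convert Envoy request headers and metadata into attributes."""
    attrs = {
        "request.path": headers.get(":path", "/"),
        "request.method": headers.get(":method", "GET"),
        "request.host": headers.get(":authority", ""),
    }
    # Fold in whatever extra metadata the transport offers, namespaced
    # so it can't collide with the request.* attributes.
    for key, value in metadata.items():
        attrs[f"context.{key}"] = value
    return attrs

attrs = envoy_ingestion_adapter(
    headers={":path": "/status", ":method": "GET", ":authority": "db"},
    metadata={"reporter": "inbound"},
)
```

Formalizing this boundary is what lets other ingestion adapters exist: anything that can emit the same attribute dictionary can feed the same downstream pipeline.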
A: So it can contact backends that are outside, or it can do something locally. And then the additional part is the arrow that goes back: being able to rerun part of the filter chain, or the entire filter chain, is also going to be part of this work. Envoy supports some of this today, but it doesn't really support all of it; you can't actually fully rerun the filter chain.
A: So let me see, okay. This is an example of how the new system will look. There's the Envoy filter chain; we already have a precondition adapter, which is called once we decode headers, and that largely looks the same. Only the boundaries are redrawn here, with the caveat that now you can go into a loop and ask for it to be rerun and re-evaluated after a mutation.
A: The mutation adapter actually operates on the body, so it gets called back when more actual data is read, and then you can do something similar: you can read the body, send it off somewhere, get it inspected, and decide whether to go forward or not. And then the telemetry adapter happens at the log callback, which is at the very end of the request. On the right-hand side, we have this workload directly trying to go out to someplace.
A
So,
let's
say
in
this
case,
we
given
you
an
example
of
stackdriver
adapter.
So
it's
the
workload
is,
is
directly
colleague
stackdriver
in
the
in
the
stacked
up
or
format,
and
then
inside
the
filter
change
variable
to
capture
that
transform
it
into
something
and
again
the
second
part
is
all
configuration
driven
and
then
based
on
that
configuration
decide
what
other
adapters
need
to
be
called
transforming
back
into
whatever
stats
T,
and
in
this
case
we
also
show
a
yoghurt
want
and
then
they
go
to
the
respective
packets.
A
In
terms
of
user
model,
I
think
it
is,
they
are.
The
only
thing
new
here
is
that
there
is
to
a
pipeline.
Now
right,
you
can
have
some
circles.
This
is
how,
specifically
it
is
going
to
look,
and
some
of
this
is
evolving
and
of
course
we
would
love
input
here
of
like
what
people
think
and
all
that,
because
this
is
still
kind
of
somewhat
early
days,
but
essentially
mixer
itself
runs
as
a
library
inside
envoy.
A
It
has
these
ingestion
raptors
and
back-end
adapters,
and
then,
of
course,
all
those
can
also
be
outside
as
well
right.
Not
everything
needs
to
be
inside
to
begin
with
what
we
will.
What
we
will
do
is
we
will
implement,
mix
up
front
end
and
the
JPC
dispatch
right.
So
essentially,
we
will
have
mixer
functionality
and
the
ability
to
call
GBC
adapters,
all
part
of
the
mixer
filter
itself,
and
so
that
way
we
can
use
all
our
existing
DC
adapters
and
now,
rather
than
mixer
calling
them
on
were
concentrating
as
the
next
step.
A
So
actually,
if
you
look
at,
if
anyone
is
familiar
with
the
low
API
and
of
one
point
in
the
lower
filter
right
this,
this
does
borrow
a
lot
from
kind
of
that
model.
Generally,
then,
you
have
a
very
specific
way
to
reach
out
to
other
places,
so
network
connections,
because
that's
kind
of
purpose
of
one
more
so
on
what
manages
all
that
and
then
you
should
be
able
to
schedule
some
asynchronous
work,
some
timers
and
all
that.
A: What that means is that once this is upstreamed, we can just have stock Envoy as the proxy, and we will effectively use xDS, or some other similar mechanism, to send the WebAssembly code to Envoy, which it will load as needed. So there won't be any need to have a custom build of Envoy; we can just use the Envoy that we have and load the WebAssembly into it.
A
But
you
can
write
your
code
in
C++
or
go
or
rust
or
SMGs
and
several
others,
and
then
all
all
of
that
can
be
targeted
to
AB
assembly.
There
we'll
start
off.
Why
I
just
mix
it
in
C++
which
which
it
will
move
into
web
assembly
like
this,
but
eventually
we
can
start
moving
specific
filters,
write
specific
mixer
adapters
that
are
written
to
a
slightly
different
API
and
even
they
can
migrate
directly
into
on
port
and
of
managed
by
the
mixer
of
adapter
there.
A
We
will
have
adapters
that
actually,
with
some
change,
migrate
into
Envoy
itself
and
kind
of
get
more
than,
if
exactly
so,
that
that's
the
that's
kind
of
the
ultimate
picture
here
off.
We
start
with
the
stock
envoy,
and
then
it
and
all
be
all
these
new
behaviors
get
loaded
into
Envoy
through
web
assembly.