From YouTube: Policies and Telemetry WG 2018 09 12
B: That's where the work has stopped, so I think now we at least have two more things to investigate. Why is so much CPU being spent on scraping? If we realize that that is just the inherent cost of scraping, that that is the inherent cost of just serializing all that data, then I think we have to see what else we can do. However, we did have other charts previously that showed a lot of time spent before the promise, from the adapter.
A: It took me a second to get the agenda updated. On Monday we were seeing a lot of CPU — I think it was 10% overall or something — inside the Prometheus, and there are all sorts of other problems. When we initially added the statsd-to-Prometheus bridge, we weren't running in Kubernetes environments, and we were trying to use it just to get some stats about various Istio components from the proxies that sat in front of them. Obviously Istio has evolved since those early days, and we didn't really ever evolve the statsd-to-Prometheus collector, but there are stats inside of the proxies that we think are valuable — in particular the versions of LDS and CDS, so that we can track whether or not the versions skew. With Animesh, I think that was one of the prime motivating use cases. So the Envoy stats collection doc listed there has a proposal and some alternatives for how to maintain getting Envoy stats out of the Envoys and make them usable for querying and monitoring and alerting. So there was an initial exploratory PR, which actually got merged quicker than I was expecting. It does this by adding annotations to all the pods at injection time and configuring the Prometheus instance that we add in the add-on to drop some of the more high-dimensionality metrics that come out of the Envoys, so that we get a sort of targeted, small set of stats from each proxy that's processable.
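The mechanism just described can be sketched roughly as follows. This is a minimal illustration only, not the merged PR: the `prometheus.io/*` annotation keys follow the common Prometheus convention, and the port, path, and metric-name regex are assumptions for illustration.

```yaml
# Sketch: annotations the injector might add to each pod so the add-on
# Prometheus discovers the sidecar (prometheus.io/* is the usual
# convention; port 15090 and the path are assumptions here).
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "15090"
    prometheus.io/path: "/stats/prometheus"
---
# Sketch: on the Prometheus side, drop the high-dimensionality Envoy
# series at scrape time and keep only a small targeted set. The metric
# name regex below is purely illustrative.
scrape_configs:
  - job_name: envoy-stats
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'envoy_cluster_manager_cds_.*|envoy_listener_manager_lds_.*'
        action: keep
```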
B: Yeah, and that largely solves — or solved — some of this scalability problem. It does move the cost; it doesn't actually eliminate the cost, but it does so in a better way, where it actually distributes the cost rather than having this one part taking up everything. And it also adds filtering, like Doug said, so it's not just straight scraping everything from —
A: You know, all proxies in the mesh. So that's what this is about. And also, part of it is just simplifying the deployments: the statsd-to-Prometheus bridge was yet another thing that had to go somewhere, so it's nice to sort of remove that dependency from Istio in general.
A: So this isn't mentioned in that doc, but I'm working on something now that I should mention. The initial PR added annotations for scraping to all of the pods, and this works fine if you're not trying to scrape any application metrics as well. But when you are trying to scrape application metrics and you're adding annotations to the pods, they'll conflict. So there's work that needs to happen, which I'm doing today, to stop relying on the annotations for the Envoy metrics and instead set up a better config inside of Prometheus itself to find the Envoy stats port and grab the metrics that way — and so that way, applications that want to can set up their own annotations. Great point. So that's something that's not in the doc that's worth mentioning here, at least, and I think there's already one known use case where these overlap.
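A Prometheus-side job of the sort being described — discovering the Envoy stats port directly, so the `prometheus.io/*` annotations stay free for application metrics — might look roughly like this. A sketch under assumptions: the container port name `http-envoy-prom` and the `/stats/prometheus` path are illustrative, not necessarily what the actual change uses.

```yaml
# Sketch: scrape the sidecars by matching their stats port in Prometheus
# itself rather than via pod annotations. Port name and path are assumed.
scrape_configs:
  - job_name: envoy-stats
    metrics_path: /stats/prometheus
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: http-envoy-prom   # keep only pods exposing the sidecar stats port
        action: keep
```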
A: You know, one way is just through actually producing attributes inside of Mixer adapters — that's like the simplest way to add information via attributes — but that doesn't cover any information that's not already coming from the proxy, or that can be derived completely separately. If you want to capture information from the proxy and send it to Mixer via attributes, it's a more complex process where we need to agree on the dictionary for those attributes, we want to make that public, and we need to update the clients and everything to send that.
A: Yes, okay, yeah — there are sort of overlapping purposes there, yeah. Maybe we can discuss that in the PR, if we transfer it over, and see where that goes. The final doc was sort of a proposal about monitoring out-of-process adapters. Now that we're moving stuff out of process, I don't think the approach is going to be to recommend putting proxies in front of these things, but they are going to be running as other workloads that are sort of managed by the operators for the cluster.
A: So I was thinking that it might be nice to provide metrics that parallel the way other workloads have metrics generated for them in Istio — so you could see request counts and response times separate from the way Mixer internally tracks dispatch metrics. So I put together a proposal for a couple of ways that might be accomplished.
A: I don't have any strong feelings. I think one of the things that was important there was that we can't necessarily assume that the adapters will be running in process, or in the mesh — they might be outside, part of external systems. Say — I don't know — let's say AWS decides they want to expose an adapter interface; we can't rely on putting monitoring there that reports back to the cluster. So I think we want purely client-side generation of these metrics, and then it's a question of how do we generate them?
A: So anyways, I sort of talked about this at a high level there, so for that I'm actively seeking feedback and comments and alternative proposals. But I think we do want a way to monitor them that is distinct from just Mixer's internal monitoring — although maybe people disagree at that point too, but that's sort of the position that I started with. So I don't know if there are comments, thoughts?
A: On the metrics there — do we need to consume it? In other words, one of the proposals was you could attach a stats handler on the client-side connection for gRPC, which could then get the data, generate attributes, and send it through a report — and then in Mixer there's not a second consumption of config, I guess. Okay.
B: So, on a related note, there is another document — actually it's mislabeled here, I'll just fix the name, but anyway — it's the authentication for out-of-process adapters. So that document is also linked from here, and again, please take a look and do comment. One of the outstanding issues where we are seeking feedback is how many authentication mechanisms we have to support, and I think right now we have it narrowed down to just —
B: You still need — you always need to mount some secret, and the difference between an API key and OAuth then becomes, yeah, whether the same thing is used again and again. So an API key is used again and again, whereas with OAuth, you know, you would get a token, it would be automatically refreshed, and all of that will happen automatically, provided the other side also supports it. And so we're —
B: If there is a specially configured sidecar in front of that? Yes — then you do need to break the cycle, and you've alluded to that in "breaking a cycle for collecting telemetry." So we just have to make sure that, you know, we're on the same page there. Okay — Peter, is there anything different, anything else on that?
B: But okay — so what I think the criteria should be is that if it is something that's relatively vendor-specific, yes, it shouldn't be in istio/istio at all; so Stackdriver should be in one of Google's repositories. Okay. And similarly, the adapters that we have already accepted — because we didn't have a process — of course we would want to move them, or they should be hosted by the vendor. Yes. And we already have ways to include the documentation and put it on istio.io.
B: And then there are a few outstanding issues still — about integration testing, and whether there is anything such as a certified adapter, and what the certification process looks like. Alright, so those issues are still outstanding; those are all the bits that rest in that entry. But at least from where I stand, the vendor adapters should stay in vendor repos, and if a vendor adapter introduces a template which they believe is generally useful, then they can always open a PR against istio/istio, and then it will go through the normal process.
B: Potentially, right — something that's kind of semi-vendor, but still part of the standard Istio deployment. So, for example, the Prometheus out-of-process adapter could potentially go there, but then it could also just go into istio itself. Yes — so yeah, I don't know. So maybe, if everyone does fall in behind this, we... yeah.
G: I think there are two different criteria. The decision to make it out of process versus in Istio is a technical decision, and the decision whether it should be in istio versus a vendor repo is, you know, a decision based on whether or not it's open source, or whether or not it's specific to one particular vendor.
G: So I think that's a different question — the standard of maintenance for things that go in istio/istio, or things that are going into the istio org. And, you know, if we have a quality bar that certain adapters are not meeting — if they're breaking and nobody's fixing them — then there should be a policy in place for placing them on a list for potentially being deprecated, or putting them in some sort of attic.
B: Yeah, we would prefer that the endpoint vendor hosts it, unless it's something like Prometheus, which we decide is part of the Istio core. So, yes — if we decide that Fluentd is so common in an Istio integration that we just want it, then we'll just make it a part of Istio.

H: I'm also in the process of publishing an adapter for VMware Wavefront. It's going to be an out-of-process, out-of-tree adapter, which would be under the vmware organization. That's all — so yep. I think once I have that published, I'm going to add some documentation as to how to make an adapter out of tree, because right now I think the documentation only speaks about out-of-process adapters and not out-of-tree ones.
B: I think scalability testing for out-of-process adapters is something we deliberately haven't done yet. Yes — so once we do that, I think we can say that, which is why we're still... However, I would encourage the adapter authors — so, for example, Wavefront, and anyone else — to actually do scalability testing and open issues whenever things are not working out, because we do want to reach production readiness by 1.1. That's the goal; it's not a promise yet.
B: I think so — some of them. But there is the low-level testing, and we also need to do just a system-wide kind of scale test. So, for example, an out-of-process Prometheus adapter would actually be quite easy to scale-test with, because everything else is set up for Prometheus anyway — so, kind of see where that goes.
B: So Mixer — even the single Mixer is logically single, right? Mixer already has a horizontal pod autoscaler, which means there would be a number of Mixers that are doing this anyway. Yep — but now we have... so someone has already tested with one Mixer per node, and that it was similar to HPA was the general consensus.
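For context, the Mixer HPA mentioned here would look something like the following. This is a sketch only; the deployment name, namespace, and thresholds are assumptions rather than the shipped defaults.

```yaml
# Sketch: horizontal pod autoscaler for the Mixer telemetry deployment.
# Names and numbers below are illustrative assumptions.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: istio-telemetry
  namespace: istio-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-telemetry
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```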
A: With HPAs, right, depending — we might scale to five, but there might be 10,000 pods or something, right? So it's possible that we could overload, or create a high burden, whereas if you just distribute it out to the pods... So that's something I think we need to look at a little bit, especially. Doesn't that look more like sharding, though? Oh yeah, I mean —
A: This is a way of distributing, right? So, like — yes, you can think of it as sharding to each individual proxy, right? Like sharding on either the destination signature or... yeah. But yeah, I think that is something, as we discussed with Mixer architectural evolution, that we should call out. And are those —
A: Has that discussion started in any real way? No, no — I don't know. I think there will be... I think once we get another release under our belts, probably, there's just going to start to be a lot of discussion about Mixer-next and what that architecture looks like, and I think some of your instincts there will start to have to address things like that.
E: Yeah, well, let me keep going, and this will make more sense. Okay — these are workloads... or, sorry, these are apps. But notice the services down here, all alone; you don't see any services anywhere else, which is kind of weird, which is, I think, where you're coming from. The reason why you see this is because this edge has nowhere else to go.
E: Hopefully that makes sense, but I want to point out — here's the ratings service that we saw before, but notice now you can actually see that v2 went into ratings, and that was successful, and it made it over here to ratings v1. So this green line — it's flashing, but here, I'll fix it — the green line thing, okay. So reviews v2 — it made it, it sent to the ratings service, v1, and, you know, that was good. Here's where the fault injection lies, and you can see it there now.
E: You like this view? Okay, yeah — okay, so we're on to something. So we thought this was good, mainly because of this: you know how this ratings service — we needed to figure out how not to have that sitting out in no-man's land. But you're also going to see these services everywhere else, and we weren't sure if it's going to be too noisy, with your routing going everywhere, because now you've got your product page, your details service, your reviews service... No — yes.
E
Yes,
that's
that
that's
good
I'll
bring
up.
Is
it
noisy
right
so,
and
this
is
exactly
why
we
have
I
mean,
as
you
can
see,
the
in
the
URL
will
probably
have
some
type
of
UI
control
where
we
can
flip
this,
but
right
now
it's
just
a
in
the
in
the
car,
Deepa
Ram,
you
could
say
inject
service.
No,
it's
true!
That's
why
you
see
them
here.
Okay,
with
my
other
graph
here,
you're
gonna
see
pie
at
it,
set
the
false
and
that's
why
you
don't
see
them
everywhere
else.
E: Yeah, yeah. Now — so you want to see, like, a service-only graph, which is what we tried to show very early on, and even the previous Prometheus metrics were trying to show, but really, under the covers, they weren't showing it. It's the whole source-service issue, right? No —
A: I'm acknowledging that — I'm thinking, like, when the comments about noise came up. Looking at it, I do see the product page, the reviews, and the rating services are there, with arrows between them; now they're going through intermediate points. I was wondering if one way of reducing noise could be to take that and then just, say, show —
E: Now — so this is the workload view. Let me go back to here. The answer to the noise problem — that's what we're trying to rectify, where we're now trying to implement what we're calling drill-down views. So if I were to double-click here, that's gonna zoom me in, right? So I'm basically just looking at this subset of the mesh — looking at what's coming in and out of ratings. Because, yes — but now, obviously, there's nothing coming out of ratings...
E: ...in the Bookinfo demo, you know, so you're not seeing anything to the right. But the point is we can focus just on this specific node and ignore all the rest, and if we want to go back, then we'll have some way of going back to the main graph. We're not sure that's the right way to do it, but that's what we're thinking to help reduce the noise level: have an overall view, say, "Look, here's some red here — let me drill down into the red," double-click, and now I —
J: Yes, so my question is just whether you can, like, by selection, flip a subset. I think there was a question around this: instead of flipping everything — showing everything — can I just select one thing and explode that thing, and collapse that, like how they're rendering other charts?
J: Yeah, like that — so you can explode, or not, just that part. You know, just that part, not everything — that was my question: if you have the views, basically exploding the inter-version part, right? This could be selectable or something. That was what I was trying to understand. Oh —
E: I understand — that could... that will be something we can look into.
E: Yeah, yeah, I got you. When I change the graph type, that's a server-side request: we're generating the graph on the server side, and then we're returning it back and we're populating the Cytoscape graph with the proper data. So we are making some server-side requests. Now, if I do things like the traffic animation — whoa — this is client-side.
E: It is live data, yeah — like, this is live data, right, and the number of dots changes: you'll get more dots if your rate is higher, and the speed of the dots is, you know, the requests-per-second kind of thing. But the point is that we're animating this client-side. You know, things like hiding the badges here — that's client-side; changing what the labels are would be... right, this is all client-side, though our —
B: There was a checkbox that said "security" — yeah, is that from TLS? Yeah.
J: Cool — so the data is there, eh? Why — just, like, if someone gives you a gimmick of how to enable the collapse, and open and close, that would work, right? Because you have the data to render this thing right in the UI — it's just a behavior of transforming a node into a bunch of squares, or going back from that, right? Yeah.
A: One other thing I want to mention — I don't know if he's gonna throttle me for this — is I think that he is committed to giving a Mixer deep dive, or a mesh deep dive — maybe just a Mixer deep dive — at the next community meeting. So if you're interested, or have things that you would like to see covered, maybe shoot him a note, or just tune in and ask questions.