From YouTube: Policies and Telemetry WG 2018-08-15
B
We could characterize it, but so far what we've seen is that if we throw enough CPU at it, like massively over-provision, the problem does not occur. At any reasonable provisioning level, though, the problem can occur if there is enough traffic. So we need to do sampling or some backpressure or something like that to cut down on the volume, but it has to be at a reasonable level.
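The sampling being discussed can be sketched as a simple probabilistic filter in front of report dispatch. This is purely illustrative; `make_sampler` and `sample_rate` are made-up names, not Mixer's actual API:

```python
import random

def make_sampler(sample_rate: float):
    """Return a predicate that keeps roughly `sample_rate` of all reports."""
    def should_sample() -> bool:
        return random.random() < sample_rate
    return should_sample

# Keep roughly 10% of telemetry reports; the processing cost of the
# other 90% is avoided entirely.
sampler = make_sampler(0.10)
kept = sum(1 for _ in range(100_000) if sampler())
print(f"kept {kept} of 100000 reports")
```

A real implementation would sample per metric or per destination rather than uniformly, but the CPU-saving mechanism is the same: drop work before it reaches the expensive processing path.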
B
So we have seen this. I have the numbers, so I can show you the numbers from several different experiments, but I think the bottom line is that we have to implement something which, whenever it happens, handles it more gracefully than it is doing now. We should also be able to say: we don't need all the metrics, I don't want to use so much CPU; I want the metrics, but I don't want to collect each and every sample.
B
So in that one experiment it's 1,800 requests per second at ingress, and you have a fan-out factor of about two to two and a half. So you multiply that and get roughly 4,000 calls, and then we have two reports per call, so in total it's like seven or eight thousand reports per second. But they're batched, so it depends on whatever the batching setting is.
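Spelling that arithmetic out (the fan-out factor here is one point inside the two-to-two-and-a-half range quoted above, chosen to reproduce the quoted totals):

```python
ingress_rps = 1_800       # requests per second at ingress
fan_out = 2.2             # each ingress request produces ~2-2.5 backend calls
reports_per_call = 2      # two reports for each call, as quoted

calls_per_sec = ingress_rps * fan_out               # ~4,000 calls/sec
reports_per_sec = calls_per_sec * reports_per_call  # ~8,000 reports/sec
print(round(reports_per_sec))   # 7920
```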
B
So, well, I guess that's why I raised the issue. If you think that we can get this down significantly, so that it's not an issue, then fine. If not, we have to think about sampling and backpressure. And can we even do backpressure? Is there a way to report back, like "stop sending reports, I cannot handle these"?
B
I have a bunch of performance numbers for this, but anyway, I'm going to open an issue for it. There were other performance issues elsewhere which were kind of taking precedence; that's why I haven't cataloged this in an issue yet, but I will do that. I also have a flame graph for this, to see where we're spending time, and the 60% number from before comes from that experimentation.
C
Yeah, so the relative percentages don't tell us much. Sixty percent of the time is spent decoding the incoming requests and doing the instance processing to produce instances, and forty percent is in Prometheus. The absolute time is more interesting in this particular case: per incoming request, how much time is being spent doing the work?
C
Actually, the fact that it's 60/40 would seem to indicate that it's running normally. From that standpoint, that sounds about right to me for the amount of time you should be spending on either side of the fence. It's the absolute time that's the problem: why are we spending so much time in total? Is there a bottleneck somewhere? Are we doing something stupid?
B
Okay, so with those numbers, I will open issues for those two on the repo, or add to existing issues if I find something similar. And then I just had another, smaller agenda item. The second agenda item is that I'm again seeing, and I actually opened a bug about this, the "attribute index is not defined" error, which means that the mixer doesn't have that index in its dictionary.
B
Again, this is on 1.0.1, so we have to fix that, and then we have to put in something, something beyond code review, to make sure that this doesn't happen again. So the issue is that if mixer's global dictionary and the mixer client's global dictionary don't match, typically if the mixer client's global dictionary is larger than mixer's, then they can't communicate. Okay.
B
So I think an upgrade procedure is one answer, and that can be followed, but this is a brand-new installation where everything is already upgraded to 1.0.1, using the existing latest build of the 1.0 release branch. It doesn't add attributes, right, and yet it is returning that error, which means it's a coding error, right? Like a commit mismatch or something like that; essentially the proxy's mixer client is compiled with a different global dictionary.
B
So all checks and reports are failing if they contain that string. Yes, we should probably be more lenient. So, great. Whatever that string is, if the batch contains it, then it's just completely rejected with that same error, and the same thing happens in check.
B
On the client side, right, you use some attribute in an expression, or a string in an expression. That index doesn't have to be an attribute name; it's just an index into a big string dictionary, so it may be a value. Say you couldn't decode the value and it was just removed; then, if you have a check on that in an expression, you just reach an incorrect conclusion.
C
And, you know, well, hold on. What you're describing, though, is that there are two spaces, right? There are the attributes that are being sent by the proxy, and there is what the config is specifying. So if we just take the attributes that are coming from the proxy and say, "Look, you're giving me an attribute, I don't know what it is, throw it out, but continue," then, to Doug's point, if the config is not referencing that attribute, nobody cares.
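The lenient behavior being proposed can be sketched like this, assuming attributes arrive as (index, value) pairs: unknown indexes are dropped and remembered rather than failing the whole batch. This is an illustration, not Mixer's actual decoder, and the dictionary contents are made up:

```python
GLOBAL_DICT = ["source.ip", "destination.service", "request.path"]

def decode_attributes(pairs):
    """Decode (index, value) pairs; drop unknown indexes instead of
    rejecting the whole batch."""
    decoded, dropped = {}, []
    for index, value in pairs:
        if 0 <= index < len(GLOBAL_DICT):
            decoded[GLOBAL_DICT[index]] = value
        else:
            dropped.append(index)   # harmless unless config references it
    return decoded, dropped

attrs, dropped = decode_attributes([(0, "10.0.0.1"), (7, "mystery")])
print(attrs)    # {'source.ip': '10.0.0.1'}
print(dropped)  # [7]
```

The trade-off raised earlier in the discussion still applies: silently dropping a value that config does reference can lead an expression to an incorrect conclusion, so the dropped indexes would at least need to be surfaced.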
C
No, no, that's right. Okay, so, right: the pattern is, if it's not in the dictionary, then it's just a regular string. So you could have config that uses a particular attribute name, and it's working fine today, and then tomorrow somebody adds that string to the dictionary. The proxy starts sending it encoded with the dictionary, and now it no longer matches what the mixer sees, and the behavior is not okay.
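That failure mode can be sketched as follows. This is a deliberately simplified model, not the real wire format: each side holds a copy of the global dictionary, strings are sent as indexes when present in the sender's copy, and growing only the proxy's copy produces an index the mixer cannot resolve:

```python
def encode(word, dictionary):
    """Send a dictionary index when possible, else the raw string."""
    return dictionary.index(word) if word in dictionary else word

def decode(token, dictionary):
    if isinstance(token, str):
        return token                      # plain strings always round-trip
    if token >= len(dictionary):
        raise ValueError(f"attribute index {token} is not defined")
    return dictionary[token]

mixer_dict = ["source.ip", "request.path"]
proxy_dict = mixer_dict + ["request.my_header"]   # proxy's copy grew first

# Yesterday "request.my_header" was sent as a plain string and worked;
# today the proxy encodes it as index 2, which the mixer cannot resolve.
token = encode("request.my_header", proxy_dict)
try:
    decode(token, mixer_dict)
except ValueError as err:
    print(err)   # attribute index 2 is not defined
```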
C
Yes, right. Originally, when we had the gRPC bidirectional protocol, this wasn't an issue, because it was negotiated. Now, on every connection the proxy might be talking to a different mixer, so there's no opportunity for negotiation, unless you want to add extra RPCs for every single RPC.
B
Yes, so I think for now all we have to do is make sure there is some check, at either build time or certification time, right. Can we actually verify, given a mixer and a proxy, the dictionaries? We could just verify the indexes, whose is smaller and whose is larger, and that would be a kind of verification. The thing is, for that we would have to expose the dictionary on each side.
B
I think that that hard failure would be good, and then, to add to that, we should also do what Kuat suggested, which is to have an environment variable in the mixer client to limit the size of the dictionary, because if it goes wrong somewhere, then there is no recourse right now. This actually does offer a recourse, which is: okay, fine, thank you, this did not work, but here is how you make it work without recompiling; you set the environment variable and now it will work.
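The escape hatch being proposed might look something like this: the mixer client reads an environment variable and truncates its compiled-in global dictionary to that size, so an operator can recover from a skewed dictionary without recompiling. The variable name `MIXER_CLIENT_DICT_SIZE` and the dictionary contents are invented for illustration:

```python
import os

# Full dictionary compiled into this (newer) mixer client build.
FULL_GLOBAL_DICT = [f"attr.{i}" for i in range(200)]

def effective_dictionary():
    """Truncate the global dictionary if MIXER_CLIENT_DICT_SIZE is set."""
    cap = os.environ.get("MIXER_CLIENT_DICT_SIZE")
    if cap is None:
        return FULL_GLOBAL_DICT
    return FULL_GLOBAL_DICT[: int(cap)]

# Operator recourse: cap the client's dictionary to match an older mixer,
# so any newer entries fall back to being sent as plain strings.
os.environ["MIXER_CLIENT_DICT_SIZE"] = "100"
print(len(effective_dictionary()))   # 100
```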
B
And I know that this particular problem is not going to completely go away, you know, in a very fundamental way. And if we add a flag, then you have to support the flag for much longer than you have to support the environment variable, because when you remove a flag, there's a command-line error that can happen, whereas an environment variable is just based on the process environment. But if you tell...
B
Sure, you can set it, but it's there to address a problem. Like, I would have to change my script and remove an environment variable once we don't need it; but if it's a flag, then I have to go back and say, "Okay, this flag has been removed, I now need to change our stuff." I agree that we shouldn't put truly API things in environment variables, which we also do inside Istio. But in this case, because this...