From YouTube: Kubernetes SIG Instrumentation 20190905
Description
Meeting notes: https://docs.google.com/document/d/17emKiwJeqfrCsv0NZ2FtyDbenXGtTNCsDEiLbPa7x7Y/edit#heading=h.7klqzrjrluew
A
We don't have an owner for this item on the agenda, but I guess, as Han was just saying, the major contribution we want to get in is everything that gets us to a beta stage of metric stability and the Kubernetes metrics overhaul. That's primarily getting all these flags into the various binaries so that we can hide these metrics by default and ultimately move forward with the Kubernetes metrics overhaul. Is there anything else that people already know we would want to do for the 1.17 release cycle?
B
The problem right now is that it is possible for someone to mistakenly initialize a metric using the old Prometheus constructor. If they use the old, native Prometheus constructor and register it to the native global Prometheus registry, the metric actually just won't show up, so this is an easy error to run into.
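A minimal sketch of the failure mode being described; the metric name is made up, but the constructors and registries are the ones in question: the native Prometheus client versus the stability wrappers in k8s.io/component-base/metrics.

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"

	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// Mistake: the native Prometheus constructor, registered to the native
// global Prometheus registry. Kubernetes components serve /metrics from
// their own registry, so this metric silently never shows up there.
var oldStyle = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myfeature_requests_total", // hypothetical
	Help: "Requests handled by my feature.",
})

// Intended path: the stability-framework constructor (note the pointer to
// CounterOpts), registered through legacyregistry, which is what the
// component actually exposes.
var newStyle = metrics.NewCounter(&metrics.CounterOpts{
	Name: "myfeature_requests_total", // hypothetical
	Help: "Requests handled by my feature.",
})

func main() {
	prometheus.MustRegister(oldStyle)     // exported nowhere useful
	legacyregistry.MustRegister(newStyle) // visible on the component's /metrics
	oldStyle.Inc()
	newStyle.Inc()
}
```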
C
Yeah, I think I agree with that. I also wonder if the docs cover that; I'm pretty sure the KEP says this. Han and I were working on a blog post for the Kubernetes blog to talk about the metrics overhaul, so I should double-check to make sure it also says that.
B
Yeah, I have mixed feelings about it. Yes, there's a little bit more song and dance, but at the same time there's already a little song and dance, so I'm not sure the additional song and dance is going to make things simpler. That said, the metric instantiation phase is slightly different now, because the CounterOpts has to be a pointer to CounterOpts as opposed to the struct, since we're modifying the thing.
A
I think that's small enough that it's okay. I think the pointer is just a thing; people will just copy it and that's fine. It's the stability level I'm thinking of. Unless people actually want to make a metric stable, they shouldn't need to think about it, and if it's there, people will think about it. I feel like that's a burden I wouldn't want to put on people necessarily.
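To make that concrete, a small sketch (the metric names are hypothetical): the opts are passed as a pointer because the framework fills them in, and StabilityLevel can simply be left out, in which case the metric is alpha; marking something stable stays an explicit opt-in.

```go
package main

import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// The opts are a pointer because the framework mutates them (for example,
// defaulting the stability level).
var widgets = metrics.NewCounter(&metrics.CounterOpts{
	Name: "myfeature_widgets_total", // hypothetical
	Help: "Widgets processed by my feature.",
	// StabilityLevel omitted: the metric defaults to alpha, so most
	// contributors never have to think about it.
})

// Opting in to stability is an explicit, reviewed decision.
var widgetsStable = metrics.NewCounter(&metrics.CounterOpts{
	Name:           "myfeature_widgets_handled_total", // hypothetical
	Help:           "Widgets handled by my feature.",
	StabilityLevel: metrics.STABLE,
})

func main() {
	legacyregistry.MustRegister(widgets, widgetsStable)
}
```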
A
Like, the hurdle to introduce a metric should still be extremely low, because most metrics have a relatively short lifespan and they're super bound to the surrounding code. I don't know; I've heard people say this and I don't know how true it is. Actually, I think it's very true, though; I've removed metrics numerous times because of the surrounding code.
C
One of the things that I'm sort of excited about, as far as this metrics stability stuff goes, is that today we don't really change the contributing experience of adding a metric, right, since it defaults to alpha. People aren't really thinking about that.
B
Alpha metrics, however, don't have those guarantees, so you can do whatever you want with alpha metrics. They can be non-conforming and that's fine, because they are alpha metrics; you are experimenting with this thing. Also, the static analysis piece is a little bit more difficult, because we rely on a certain implicit structure to be able to parse the fields from the metrics and annotate all the metadata, and that breaks if you're creating a metric dynamically.
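To illustrate the kind of implicit structure being relied on (the exact behavior of the static-analysis tooling is an assumption here, and the names are made up): a package-level declaration with literal fields is easy to parse for metadata, while a dynamically assembled metric is not.

```go
package main

import (
	"fmt"

	"k8s.io/component-base/metrics"
)

// Statically analyzable: a package-level composite literal whose fields are
// constants, so a tool can read the name, help text and stability level
// straight out of the source.
var requests = metrics.NewCounter(&metrics.CounterOpts{
	Name:           "myfeature_requests_total", // hypothetical
	Help:           "Requests handled by my feature.",
	StabilityLevel: metrics.ALPHA,
})

// Hard to analyze: the options are assembled at runtime, so there is nothing
// for a static parser to extract.
func newCounterFor(component string) *metrics.Counter {
	return metrics.NewCounter(&metrics.CounterOpts{
		Name: fmt.Sprintf("%s_requests_total", component),
		Help: "Requests handled by " + component + ".",
	})
}

func main() {
	_ = requests
	_ = newCounterFor("myfeature")
}
```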
A
Yeah, I understand the technicalities around this, but I think Elena has a point. Even if we're not making it a required review, it would be nice if we could, and it'd be a totally different mechanism for how we do this, but for example, we now have the metrics stability framework and people must use it to instantiate metrics. Maybe we literally just get pinged on changes like this that involve our metrics instantiation, or something like that. There are more technical details that we would need to think about.
B
I mean, there are a couple of ways, right. One is you can do a prow task and then just grep the diff and do whatever. Another way is, basically similar to the way that we do the stable metrics, we can have another folder with all of the alpha metrics, so this would be an argument for not forcing metrics to have an alpha thing.
B
Where the owners of this folder are all the members of Kubernetes, so that basically people don't actually have to get approvals for this extra file. During your build you'll just generate a new version of this file, but a change to that file will auto-tag sig-instrumentation. Maybe that's an intermediate level. Yeah, it's possible.
B
Yeah, for the normal deprecation stuff, beta metrics have a different meaning, in that you can deprecate them faster. If you look at the official Kubernetes deprecation policy, it basically means how long you are committed to supporting this thing. So we could have a similarly defined thing; it wouldn't be hard.
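For what it's worth, the stability framework already carries a field for this; a rough sketch (hypothetical metric name and version) of how a deprecation target is declared on the opts, which the framework uses to mark the metric as deprecated and eventually hide it by default:

```go
package main

import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// Hypothetical metric and version. Declaring DeprecatedVersion lets the
// framework mark the metric as deprecated in that release and hide it by
// default in later ones, in line with the deprecation policy discussed.
var legacyWidgets = metrics.NewCounter(&metrics.CounterOpts{
	Name:              "myfeature_widgets_legacy_total",
	Help:              "Widgets processed by the legacy path.",
	DeprecatedVersion: "1.17.0",
})

func main() {
	legacyregistry.MustRegister(legacyWidgets)
}
```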
A
On the other hand, I think maybe that's even too late. The majority of huge problems that we've had with metrics in Kubernetes were super weird implementation metrics, right, like the reflector metrics, if people remember those. Those were never going to be a stable metric anyways, right? Yeah, that was a terrible one, and it twice produced a memory leak. Those would probably have been preventable if we had had a review for them.
C
I don't think it necessarily has to be required for the alpha stuff, but just making sure that we're getting pinged and we have the visibility, because right now it's very hard to track, sig-instrumentation being very horizontal. We can't keep track of the whole damn project on our own, so adding some automation to help out with that, especially because we're a small and short-staffed team, would be awesome.
A
I also think, if we had automation like that, even within Red Hat we are having these conversations about how our SREs can get involved in these things. I think that's a really cool thing too, that we could point out at the sig-instrumentation intro sessions or at the developer summit.
B
Another thing is that there are a lot of metrics pieces currently that live in packages shared between different components. As part of the migration I moved some of them into component-base, but it turned out that that wasn't all of them, so I just ripped out the framework stuff so that basically everything side-loads in and bypasses that entire thing. But if we wanted to migrate these things into component-base, then it would give a home to these shared metrics and there would be proper ownership over them.
A
Where OOMKilled may be a label, but it's only a zero/one type of metric and it only reflects whatever the last actual termination reason was, right. So if that was a long time ago, then it's still the active series, but it doesn't actually matter anymore. So, I think Elena put this on the agenda.
A
It came out of a Red Hat-internal meeting yesterday: we have a need to be able to monitor those not just on a cluster level, and ultimately most people who are interested in these are actually people running applications on top of Kubernetes. So it's roughly within the realm of cAdvisor, I would say, but it probably needs to be at a higher level, because when a container OOMs the container is gone, so we need to find a different level of aggregation. I guess that's kind of the thing that we need to agree upon.
A
...outage much faster, so yeah, that's kind of where we're coming from. Actually, we're already out of time, so maybe we can take this discussion either into our Slack channel or into the next meeting. We're definitely interested in figuring out something, and if anybody has suggestions I'm more than happy to hear about them.
C
If you have this happening and you want to be able to detect that it is happening with any given pod at any given time, in order to be able to alert on it so you can go and take action and say, hey, set a limit on this thing or something like that, then I would say the metrics that exist today aren't really enough.
C
So essentially something like adding a counter for that container, or being able to track occurrences of this via events; there are a bunch of different possibilities. But from a cluster operator's point of view, right now, with the existing metrics, there's basically a lot of manual correlation that you need to do. It's not obvious, based on the existing aggregated metrics, what you can get out of that.
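As one purely hypothetical shape this could take (nothing like this is implied to exist today): a counter vector keyed by namespace, pod and container, incremented whenever an OOM kill is observed, so an operator can alert on occurrences directly instead of manually correlating a last-termination-reason gauge.

```go
package main

import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// Hypothetical metric, sketched only to illustrate the idea discussed above.
var oomKills = metrics.NewCounterVec(
	&metrics.CounterOpts{
		Name: "container_oom_kills_total",
		Help: "Observed OOM kill occurrences, by container.",
	},
	[]string{"namespace", "pod", "container"},
)

// Called wherever the observing agent sees an OOM event for a container.
func recordOOMKill(namespace, pod, container string) {
	oomKills.WithLabelValues(namespace, pod, container).Inc()
}

func main() {
	legacyregistry.MustRegister(oomKills)
	recordOOMKill("team-a", "web-7d9f", "app") // example occurrence
}
```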
A
So the two possibilities that I see are either an aggregation per kubelet, or, and this could be more useful where namespaces are heavily used, say by multiple teams, an aggregation per namespace. Although that's not particularly...