From YouTube: SIG Instrumentation 20220303
Description
SIG Instrumentation Bi-Weekly Meeting March 3rd 2022
B
We have a couple of things on the agenda. The first thing is the annual report. We have this in process. The action items are on me, so I will take care of all of the to-dos on here. There are just some links I need to fill in and some checkboxes, but yes, I will get this submitted by the end of the week, and I will probably work on it today.
D
Yeah, I know, okay, but the idea is: how can we achieve that, then? Since we cannot update this metric, how can we achieve just adding a new label to an already stable metric? And the suggestion that Han proposed last time was to just add a new metric with the new label?
D
That said, the issue that I have with that is that there are a couple of ways we can proceed. We can keep the same name, but then the API would be different, and that's a bit complex to use. I don't think you can keep the same name.
D
This is the thing, yeah. If we have to come up with a new naming for this new metric, then it's problematic: it's hard to use, because it will be harder to discover or to comprehend the difference between the existing metric and the new one. As a user, it will be hard to really understand which one is the latest and which one is now recommended to be used.
A
You can never change it. You can't add labels. You can't do anything; it's stable. It is not going to change. That's what stable means, not "yay, it's good, it's graduated." That's not what stable means, and we do have a proposal to maybe change that naming so people aren't mistaking it for GA, but stable means the metric will not change.
A
So if you want to change it later, you basically have to, like, go through the year-plus-long deprecation process to remove it completely, and then you can do whatever you want with it.
D
Yeah, but I mentioned that now there is this discussion about keeping everything that is stable in Kubernetes, like not removing it, forever, yeah.
B
One, we can't change the existing stable metric. We just can't do that, because then our stability is not really stable, and then what is this machinery for? Creating a new metric is fine. I would keep it in Alpha for a while. Let's see if you need to add new dimensions to it or whatever; after it bakes for a really long time, I would consider promoting that to stable and deprecating the old one.
B
So while Jordan is pushing for that, it is not uncontentious, and therefore arguments can be made against it, but I suggest doing it the right way, which is introducing a new metric with the new dimensions that you want and baking it.
A
I mean, the solution is: if this was missing labels, it never should have become stable. Stable means it's not going to change. If there was confusion around what stable meant, such that people proposed it for stable and thought they were going to add labels to it later, maybe we need to update documentation or something to disabuse people of that notion. But stable means it will never change.
D
Why? We're trying to introduce them, but with an actual way to bound them, so I think Luis was working on an allow list for the label at the webhook level.
D
Kind of specifying which resources are critical and which ones we want to have dedicated SLOs for, so basically adding SLOs per resource instead of, I think, one for all of the webhooks.
D
Okay, so that's Luis.
A
Do you have any questions for us? Welcome to the SIG.
C
Oh, well, thanks. My first time on this call. Thanks, Damian, for introducing me and trying to help me work through this as an observer, first time here. A couple of comments. First, of course, on your discussion of stable: correct me if I'm wrong, but I think part of the questioning of stable was more about whether there are actual changes that are compatible even if you're stable, and I think we must have just been incorrect in thinking that it was possible to add a label without disrupting existing use of the metric.
A
It will never change, because there's no way... I mean, across minor release boundaries, maybe we could add labels in terms of just how API guarantees generally work; that would be consistent, at least, with them. But it's totally possible to have components on different versions, like with version skew within a cluster, and therefore that would break, because you could be scraping two different components with two different versions, and then they'd have different sets of labels. And so that's why those stability guarantees are very important.
A
It's like a compatible API change within the API server, where, you know, if the thing doesn't exist, it's never going to get serialized, and the API server always must be ahead of the other component versions, so everything is okay. It's not like that for Prometheus metrics. So that's why we have these rules.
B
It helps to think of it in terms of a database table. If you add a column to a database table, right, or if you try to create a row in a database with a non-existent column, that's going to explode. But if you try to create a row in the database and omit one of the columns, that's actually a valid SQL statement, right? You can insert without specifying all of the columns.
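That database analogy can be made concrete with a few lines of SQLite; the table and column names here are invented purely for illustration:

```python
import sqlite3

# A table standing in for a metric: each column plays the role of a label.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (verb TEXT, code TEXT, resource TEXT)")

# Omitting a column is a perfectly valid INSERT: "resource" is simply NULL.
conn.execute("INSERT INTO requests (verb, code) VALUES ('GET', '200')")

# Referencing a column that does not exist explodes.
err = None
try:
    conn.execute(
        "INSERT INTO requests (verb, code, user_agent) "
        "VALUES ('GET', '200', 'kubectl')"
    )
except sqlite3.OperationalError as exc:
    err = str(exc)

print(err)  # table requests has no column named user_agent
```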
A
Yeah, it's almost exactly the same way for etcd, though. We have the concept within API review of a compatible API change, where, like, we've added a new nullable field, right, and that's fine between versions. But the fundamental issue with metrics, why we cannot add labels, why there's no such thing as a compatible API change, is because you could be running two components on different minors in the same cluster, both being scraped by the same Prometheus, and if they're on different minor versions and they have different labels, then the metrics aren't compatible.
A
What I'm saying is you can't do it and make zero changes to your deployment. Like, you'd have to drop labels somewhere, or you'd have to change how things are deployed. You can't just be like: I have this component, it has this metric with this name.
A
It has this set of labels; and I have the new version, and it has this set of labels plus one. And you throw those into a cluster, and then you scrape both targets, and then you query for the time series in Prometheus: without doing additional things, they're not going to be comparable, right? You've got one with these labels, which are different from these ones, which is the problem that we're trying to avoid, the problem that we saw with cAdvisor randomly emitting different sets of labels and whatnot.
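A minimal sketch of that skew scenario, with a made-up metric name and labels (no Prometheus machinery involved):

```python
# Two components in one cluster, scraped by the same Prometheus.
# The newer minor version added a "resource" label to the same metric.
old_target = [
    {"__name__": "webhook_admission_total", "code": "200"},
]
new_target = [
    {"__name__": "webhook_admission_total", "code": "200", "resource": "pods"},
]

# A query by metric name matches series from both targets...
series = [
    s for s in old_target + new_target
    if s["__name__"] == "webhook_admission_total"
]

# ...but their label sets differ, so they are distinct, incompatible
# time series unless extra relabeling is applied.
label_sets = {frozenset(s) for s in series}
print(len(label_sets))  # 2
```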
A
This is why I'm saying I don't think we can ever have that case. It's not like a compatible API change, where you can just add a field that happens to be nullable and we can just pretend that it didn't exist in a previous version. Yeah.
C
Okay, the other thought I had was: it looks like the way to evolve metrics is basically to create a new one with a completely different name and eventually drop the old one. It doesn't seem like there's really a facility within Prometheus to handle it any other way.
C
Is there anything we can do in the future to make this more obvious? For example, require a version suffix or something on each metric name, so that at least the user is very aware of exactly what version he's using, and if he sees a v2, for example, he knows: do I need to move over to v2? What's this over here? Is it offering me more value?
A
I think that's against the Prometheus naming guidelines; I've never seen versions embedded in metric names before. I think there's a bit of an issue where Prometheus as a community hasn't really considered this sort of scale, because I've filed issues against them previously, asking, for example: can you add a facility to rename metrics? Because that would be very useful...
A
Like, to be able to declare: hey, I know that this metric said one thing in the last release, but now we've renamed it, and it used to be called this other thing; can you consider them to be the same thing at scrape time? I filed an issue requesting that as a feature and was basically told no: why would you ever want that? That sounds bad. And it's like, well, in Kubernetes...
A
We have all these versioning issues with metrics, and maybe we're just a very particular, special use case, and so they consider that sort of request to be, well, it's just going to affect Kubernetes, it's not going to affect anyone else. I'm not sure, but yeah, that is unfortunately sort of the response we've gotten back from them.
A
So, given that there's not really anyone setting the standards in our dependency, Prometheus, for how we maybe should be doing this, we're kind of on our own, and so we're making it up as we go along, but we're trying to do it in a way that works. I think it might be good for us to write clearer dev documentation, if there are any concerns about what it means to be a stable metric, with a warning in big flashing letters: if you make it stable, you cannot change the labels on it.
B
You can always use something in the namespace. The options struct that allows you to construct the metric definition has several fields: namespace, subsystem, name. And subsystem is like the component namespace; you could theoretically use it as, like, v2 or whatever. Just because the Prometheus community has not said anything, in our case I feel like it's probably fine.
B
If,
if
you
want
to
do
something
like
that,
like
repurposing
the
namespace
field
to
add
like
what
happens,
is
subsystem
namespace
and
the
name
get
concatenated
together
to
form
the
metric
yeah.
So
if,
if
you
wanted
to
do
something
like
that,
I
think
that
would
be
fine.
C
Again, just as an observer trying to follow this, I just wondered if there was something... I understand that Prometheus itself doesn't have any facility, but it's something to consider for Kube: if there was a way, as you mentioned, to use the namespace field, and for Kube we include, like, a "-v1-" or something similar, to give at least Kube more flexibility in the future. Yeah.
A
I don't think we should try putting random versions into metrics, because they're not going to be meaningful on a project-wide basis. Effectively, I think each component owner is going to be coming up with their own versioning, and I think that'll be very confusing to end users, particularly because we don't currently have documentation of all of the metrics emitted by Kubernetes. Effectively, all they've got is the help text that comes along with the metric; that's the only documentation.
A
I had a question for you based on something that you said earlier, specifically when I was talking about compatible API changes. Typically, what that means in Kubernetes is you have some serializable type, and the only compatible changes that you can make to that are that you can add new fields that are nullable.
A
You can't take away fields, because that would break things. And so when I was saying we can't add labels, because the fields wouldn't exist on the old metric, and you said, well, you could maybe drop fields and then just emit that as a null field: that wouldn't be considered a compatible API change from a Kubernetes API point of view, which...
A
What I was saying? Yes, absolutely, because we weren't suggesting dropping labels, right? In this case, they wanted to add a couple of labels, and I said, well, isn't that API-compatible? And I'm saying no, it's not API-compatible, because if you have two different components in the same cluster that are both emitting metrics, one's on the old version, one's on the new version, and the new version has an extra label.
A
It's fine from Prometheus's point of view, but it's not something that we permit per the metric stability policy. Yes, and we can't; but the comparison I was making was not "can Prometheus do it." It was: would the equivalent of a compatible API change exist in the world of metrics? Is it okay to just go and add a label later?
B
Yeah, I mean, the component owners are supposed to, like, own the metrics. Personally, I was on board with promoting the webhook thing to stable, and I also work on API machinery stuff; personally, I wouldn't have thought that we would have ever wanted to add resource labels to this metric, mostly because of cardinality issues.
B
So, I mean, what would almost make more sense is to create, like, a separate SLO metric, if you want to measure SLO stuff, and not even include all of the labels that you may have on the original webhook metric; maybe just make it a completely different one that measures exactly what you need for your SLO. Mm-hmm.
D
Cool, yeah. I think it's hard, for components out there, to know whether they would need at some point to add labels to it, because they might not have alerting awareness up to a certain point; they might not know how to really create the SLOs, or even create good alerting. So they might need a label at some point that they don't have yet, and I don't think...
A
Sorry to interrupt; we have two minutes left, so I just wanted to see if maybe there are some action items that we could take away from this discussion. One thing that I think would be good is to make sure that our documentation and guidelines are clear.
A
Another
thing
that
we
might
want
to
consider
doing
is
like
adding
something
to
prow
that
when
it
sees
something
happening
in
the
stable
metrics
folder
like
pops
up
with
a
little
comment
being
like
you
are
changing
stable
metrics
like
here's
the
documentation
for
that,
because
we
don't
have
anything
like
that
currently.
Well,.
A
No
I
know
it'll
prevent
them
until
somebody
from
physique
instrumentation
approves
it.
But
I'm
saying
like
for
the
author,
we
don't
have
like
a
little
like
comment
pop
up
in
the
same
way
that
like,
if
you
change
something
and
there's
a
label
applied
for
API
review,
it'll
be
like
this.
Pr
may
need
API
review.
Please
see
this
documentation,
we
don't
have
anything
like
that
for
stable
yeah.
A
You can't change the stable metrics, but I think it just might be a little bit more user-friendly to do it as a comment on the PR, because otherwise people are only going to see it if they look at the output of the verify job, and it's not going to be clear, for example, to people who aren't expecting verify to fail that, oh, by the way, this is a thing that's been changed.
A
That's what I'm saying. I'm saying, as soon as those things change, I think it might be a little bit more user-friendly, when we have something like that come up, to go ahead and, like, alert the user and have the comment on there, so that somebody, maybe a top-level approver who doesn't know about stable metrics or whatever, is like: oh, there's a stable metric change, I need to talk to Instrumentation, or whatever.
A
So I don't know if that folder, for example, uses, like, no-parent-owners or anything like that.