From YouTube: SIG Instrumentation 20210610
Description
SIG Instrumentation Bi-Weekly Meeting
June 10th, 2021
A
Hello, everyone, and welcome to today's edition of SIG Instrumentation. It is Thursday, June 10th, 2021. We have two items on the agenda today, and they're both from me. Oh no. So let me link the agenda in the chat and let's get started. First item: this one's a good one. I know that we've talked about this being important in the past, but we haven't really made any progress on it, so I would like to put out a call for help with metric documentation.
A
It would be really cool if we could take that automation and expand it a little bit: ensure that the help text is actually correct for everything, and also have a registry somewhere in the documentation of all of the metrics, with accurate help text. I think this would be really good, because right now trying to figure out what a given metric does is awful.
B
There's a couple of problems with doing it with the alpha ones; namely, the static analysis actually doesn't work for all metric types.
C
So we didn't want to force everyone. We implemented some cases that we wanted to handle and decided, okay, let's not force everyone to comply with those cases: how you declare it (is it a global, or is it in a function?), whether you can use const, or borrow some variables externally. We proposed requiring this for stable, because some metrics, like in the kubelet, are auto-generated, and handling or discovering those metrics would be pretty hard to fit into that kind of paradigm.
A
So, at the risk of letting the perfect be the enemy of the good: I don't really care that we get all of the metrics. I care that we get most of the metrics, because right now we have three of the metrics in our docs, and that is not enough. Oh, we're at four.
A
Excellent, excellent. So, not to be too silly about it, but we could use the static analysis to get more, and we could have docs for them, and then people would know. And then we could figure out the yak shaving for everything that gets missed, and we could add some sort of verification such that if new metrics get added, we already get tagged for reviews and whatnot, and people will get reminded that, hey, this is going to auto-generate some docs, so make sure your docs don't suck.
B
The static analysis is not super resilient right now, and it will break in bad ways, I think, if you try to parse something that... yeah, it would need some work.
A
So I'm confused by this, because isn't it already parsing all of the code?

B
No, it only parses more deeply if it detects that the thing is stable, and then it does a more granular parse.
B
I mean, we can definitely improve the static analysis stuff. Maybe we should file some bugs, if someone wants to take it up.
B
Yeah, I mean, the static analysis stuff could definitely be improved. We would need that before doing some of this stuff.
D
It needs some augmentation. If you're willing to define the problem, I'm willing to spend a little time to try to improve that. Oh yeah, yeah.
B
Yeah, we probably need documentation on how it works. Currently, there are two people who really know how it works.
A
It is, and mine is not. Okay, that's awesome. Is there anything else on that one? I mean, yeah: you said we would also need documentation for, like, how the...
A
Okay, cool, that's exciting. I really just want to see more documentation for metrics, because people are constantly asking me, "What does this metric do?" and I'm like, "I don't know," and the person who wrote it is gone, and they didn't write any documentation either. So there's nothing I can do about it.
B
The second piece after that would be to parse the auto-generated data and turn it into a table or something; have some script that turns it into a table.
C
Good to discuss. And, I don't think... do we have, from SIG Docs, what format they want to ingest? I assume each release comes with auto-generated command line instructions, so if we could hook up with the same process, exactly, yeah. First do it manually for the latest release, or, if someone wants to do that, technically static analysis could do a backfill for the previous releases, but then make it a regular, easy-to-continue process.
A
No, I'm sorry. All I said was that we've got lots of attendees today, which is very exciting, and I'm so glad to see everybody here. So I just added everybody to the attendee list.
A
So hopefully, when we get a bunch of stuff sort of sussed out for that, we'll have areas for all sorts of contributors to come in and actually check over metrics and make sure that the docs are up to date and whatnot, so it'll be a great opportunity for new contributors. Cool. If nothing more on that topic: we sent something to SIG Node, I discussed it there, and now I've brought it back.
A
So basically, I think that there was a proposal to add some stuff to the kubelet metrics resource endpoint, for both container restarts and also node start time, and there was a giant sort of bike-shedding in SIG Node over what "node start time" means, and according to the issue, the node start time wasn't really needed.
A
So we suggested that maybe, if we can avoid shaving that yak, we should. And it looks like, in the reply, Marek, you said that skipping the start time for nodes makes sense. So I just wanted to make sure that everybody here was aware, that we're all on the same page, and to ask if there are any concerns or questions about the discussion that's in SIG Node.
A
So my hot take on this is that we don't have enough stable metrics yet for people to say, "It's not stable, therefore there's no SLA," because there are like three or four. I think that every component needs to have stable metrics before we can say: if it's stable, then you follow this deprecation guideline; if it's alpha, then you can do whatever you want. Because many of these metrics that we've had for, you know, a dozen releases, people aren't expecting them to disappear, even though they are alpha.
A
I know that in the past, certainly, any time a metric is removed it has to have an action-required release note and some sort of steps for migrating off. But I think it's better, especially... do we have a guideline? I guess we are creating the guideline. Would it be fair to say that if a metric has been in kube for, say, two or more releases, there needs to be a minimum of a one-release deprecation period before removal?
B
We can lengthen it; we can say if it's been in kube for four releases, it needs to be deprecated for at least one.
A
We talked about doing that in the metric stability framework, and we basically ended up going into a, you know, "not gonna need it," and so we ended up not implementing it, I think. So there isn't currently a beta status; it's either alpha or stable.
B
We can. I mean, it wouldn't be that hard to introduce; the machinery is all there already. So if we have a rule, it would be relatively easy to introduce. But the problem is, if we introduce this thing, we have to go and mark a bunch of these things as beta, and it's not us that does that, it's component owners. And those component owners are not likely to mark something as beta; they're either likely to categorize something as, you know, stable or supported, or not supported.
F
Dude, I didn't actually look at the PR, but does the group that owns the component with this metric actually want to remove the metric in this case? Because we're really providing a minimum bar that people have to reach, right? In terms of: if you're stable, you can't remove it. We don't necessarily specify what you have to do; if it's alpha, you can provide whatever transition you want.
F
So it's just a minimum. It's not even a prescription.
A
So, Han, just to make sure that I captured what you said accurately: you're saying if it's been in for four or more releases, it should have a one-release deprecation period.
F
Well, how about this: what if we only allowed metrics to stay in alpha for a set number of releases before the component owner either had to remove them or promote them? Because, essentially, what you're saying is that an alpha metric that's been there for X number of releases acts like a stable metric, right? Because stable doesn't mean... well.
A
Two releases of deprecation, or something like that, at least. And then there's magical stuff with hidden metrics, so, yeah.
A
We need something to prevent, you know, perma-alpha or perma-beta metrics in kube. It's a little bit weird for us to only have alpha and, like, stable, I've found, because people will add metrics in, say, an alpha stage of a KEP, and then they'll try to graduate it to beta, and they'll be like, "Oh, but I thought I had to make my metrics beta." But they don't, because there is no beta metrics phase.
F
I would say that if we want to have a separate policy, like one release of deprecation required, then a separate stage makes sense to communicate that. And what I was going to say about sort of forcing upgrades after a certain number of releases is that then, instead of it being silent, with component owners silently taking on additional responsibilities, we make it an explicit decision by them; they have to do something.
B
The problem I have with that is that not all metrics are great for promoting to stable, because metrics can change with the shape of the code, and the thing that you are instrumenting can change, right? There can be more dimensions, or whatever, as development occurs. So it's feasible that you have a metric that's very tied to development cycles, which changes frequently enough that, basically, you can't really have any guarantees, right?
F
Something like... I remember all of those metrics about priority and fairness. Is that a reasonable example, where they want them just for development and testing and stuff? But okay, and...
B
Yeah, I'm not sure that we can force it. I'm okay with the beta.
A
We have to have some sort of terminal ground here, right? You can't have an alpha metric that's changing every release for six releases; it either needs to go stable at some point or it needs to get removed. I don't think that's an unreasonable ask. How many features stick around and change every release for six releases? Like, to give a, you know, priority...
A
Like
maybe
you
set
it
to
like,
I
mean
I
don't
know
if
this
is
a
guideline
that
we
can
actually
enforce
statically,
but
if
the
like,
the
shape
of
the
metric
has
not
changed
in
a
release
for
four
releases,
then
we
kick
it
out.
If
people
don't
like,
we
automatically
promote
it
or
kick
it
out
like
if
the
labels
are
changing
or
the
state
of
the
metric
looks
very
different
from
release
to
release,
then
that
resets,
the
cycle.
B
I mean, I proposed at one point a metric-definition HTTP endpoint, so that we could just compile these things once a quarter or something. It would be pretty easy to do.
B
No, it doesn't; it's mostly for documentation, like quarterly documentation.
A
So I'm hearing a couple of things being bandied about. One of them is the possibility of adding beta to metrics, and the other is a policy for how we deal with deprecation of all phases of metrics. Because we have one for stable metrics right now, but we don't have one for alpha, and we know that there are technically no requirements for alpha, but we could add guidelines that say: if an alpha metric has been in kube for four or more releases, please don't just remove it;
A
Follow this deprecation process. So I think that those are two separate things. Presumably, if we get a beta metric phase, then we would also need a guideline for that. So...
B
There's a simple way to solve this, which is: we can apply this going forward. So we just tag all the metrics with the current version number, and then, four releases later, they have to be promoted to beta.
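A rule like that is easy to check mechanically once each metric carries the version it was introduced in. As a sketch (the four-release threshold is just the number floated in this discussion, not a settled policy), using Kubernetes minor release numbers:

```go
package main

import "fmt"

// promotionDue reports whether a metric introduced in minor release
// `introduced` is due for promotion (or removal) by minor release
// `current`, under a "four releases in alpha at most" rule.
// Versions are Kubernetes minor numbers, e.g. 21 for v1.21.
func promotionDue(introduced, current int) bool {
	const maxAlphaReleases = 4 // the threshold discussed here, not a settled policy
	return current-introduced >= maxAlphaReleases
}

func main() {
	fmt.Println(promotionDue(17, 21)) // introduced in v1.17, now v1.21: due
	fmt.Println(promotionDue(20, 21)) // introduced one release ago: not yet
}
```

Resetting the clock when a metric's shape changes, as suggested earlier, would just mean updating `introduced` to the release in which the labels last changed.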
A
I don't think that we're retroactively making any rules. That has been the unspoken rule, or, I shouldn't say unspoken: that has been the spoken, informal rule. We have not written it down in documentation, but for every single metric that I've seen that's effectively been consumed by end users, we've always had some sort of deprecation policy around it, even prior to the metric stability framework. As I said, the problem is that everything is defaulting to alpha.
A
There's almost nothing in stable, so we can't say that 90% of metrics in kube have no guarantees and we can remove them willy-nilly. That would break everybody consuming Kubernetes, and that's never been the policy. We've never said, "Oh yeah, by the way, a metric can just disappear." In fact, we have gotten urgent, critical bugs where people said this metric has disappeared, and we moved heaven and earth to bring that metric back. So I don't think that we're retroactively adding a policy; I think that we're formalizing an existing policy.
B
This is also a call for a KEP, right? Basically, if anyone is interested in writing a KEP proposal for beta and some reasonable standards, I think that would be...
F
I was going to say yes; it would be fairly simple for us even just to provide recommendations, but not requirements, retroactively. So we can say, for alpha metrics: if you can provide this deprecation period, you should, as more of a recommendation. That way, at least, we have something to point to for new inquiries.
B
I mean, to be fair, the metric that they want to remove in the PR that I linked is in fact duplicated, and the other one is more granular, so it's better. But both metrics have been around for at least two years, and people depend on them. I mean, registered watchers just gives you the watch counts; it's going to be used pretty heavily, right? Whereas with the long-running gauge it's not as obvious that it includes the watch counts.
A
Okay, so I think what we should do: I think we should have a policy in the community docs, and I think we should make the policy what the informal guidelines have been for the past multiple years of me working in kube, and everybody else here working in kube. We need to figure out what the exact details are; I think we can probably hammer that out on a PR. Because we are at time: it is 10 o'clock.
A
My time. Han, do you want to take the action to start a draft of that PR, and then people can review what we're adding?