From YouTube: SIG instrumentation 20210121
SIG Instrumentation Meeting January 21st, 2021
A
Welcome to today's edition of SIG Instrumentation. It is Thursday, January 21st, 2021. Frederick, do you want to kick us off?
B
Sure — let me actually go backwards on our agenda, because this one's just an announcement, I guess. So, the Kubernetes prometheus-adapter has finally — I don't know exactly what the—
A
B
And yeah, those legal issues have been resolved, so that's really awesome. I think we've had this in a pending state for three months or so, so it's good to finally have it closed. And I believe our next point is actually somewhat related, because I think Marek did a pretty cool proof of concept using it — is that right?
A
You — you need to present, right? I'll give you co-host.
D
C
Yeah, we can see. Okay, so let me talk about the demo that I did. So, hey — to those who don't know me, I'm Marek; I'm one of the owners of metrics-server. One of the common questions is: why does kubectl top work the way it does?
C
That, yeah — it only shows CPU and memory and nothing else; why can't it show more, and all those questions. Usually our answer was: it works that way because it's a way for us to display the core metrics, which are the metrics we use for autoscaling, and we don't expose any more than that. And then last week I got the same question about GPU metrics — I think on a prometheus-adapter issue, exactly.
C
I finally got why we cannot do this: because no one ever started working on it — no one just did it. So what is the overall idea? Let's implement new — these are just temporary command names — let's implement kubectl top metrics (the name can be changed), which gives you a list of all the metrics that you have defined.
C
Those definitions already exist in the Kubernetes APIs; those are just the custom metrics. To have them, you need a prometheus-adapter that is configured — or any other adapter that serves custom metrics. So the only thing that I have is — yeah, this is third, but yeah — you have this API, and that's all, so you get all the information. Currently I have configured my adapter to serve some custom metrics.
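The custom metrics API the adapter serves here can be inspected directly through API discovery. A minimal sketch of what that looks like (assuming a cluster with prometheus-adapter or another custom-metrics adapter registered, and `jq` installed; the metric name `http_requests` is only an example and depends on the adapter's configuration):

```shell
# List every custom metric the adapter currently serves.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq '.resources[].name'

# Query one metric for all pods in a namespace; "http_requests" is a
# placeholder — substitute a metric your adapter actually exposes.
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
```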
C
Let me know if increasing it will help. And it adds metrics about pods. So let's see — there are some basic ones like pod CPU, but, I don't know, I could display threads; threads are important. So kubectl usually has a way to define your own custom columns: you can technically take any object in Kubernetes, define your own columns, and pick any field. I did something similar for kubectl top — so you define — kubectl top normally gets you—
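The custom-columns output referred to here is an existing kubectl get feature. For illustration (column names and JSONPath fields chosen arbitrarily):

```shell
# Render arbitrary object fields as columns with kubectl get.
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
```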
D
C
B
One question that I have — or, I mean, I think last time we talked about kubectl top, there was somewhat of a consensus to do only server-side printing, so that we end up doing something like kubectl get podmetrics and then just use server-side printing for this. I think this was even implemented. I have no problem with changing our direction, though.
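The "kubectl get podmetrics" path described here works today wherever metrics-server is installed, since PodMetrics and NodeMetrics are ordinary API resources in the metrics.k8s.io group; a sketch (assuming metrics-server is running in the cluster):

```shell
# Plain kubectl get against the metrics API resources; with server-side
# printing, the API server decides which columns to render.
kubectl get pods.metrics.k8s.io --all-namespaces
kubectl get nodes.metrics.k8s.io

# The same data via the raw API path.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```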
B
A
Awesome. So just to clarify — to ask the question — this is all implemented in the kubectl client? It's on the client side, not — okay.
C
No — no changes underneath. I can use the normal — sorry, let me start presenting. So underneath I just have a prometheus-operator with the adapter, where I edited the default configuration. So maybe one problem with server-side printing — one difference with top is that it's the only really custom command that we have in kubectl, compared to get or delete. That's the result of this: it's really custom, and really untouched for the last four years. I will — I removed that last time.
C
B
Yeah, I think that makes sense. As I said, I don't feel too strongly about it; it was just what we had previously thought about. Personally, I don't know how many people actively use the custom metrics API. I actually had this topic in a previous meeting, which we didn't get to, but I find it awkwardly non-ergonomic; I tend to always just use the external metrics API, because then I essentially get to query Prometheus exactly.
B
But I think this work is really cool, and I think we should try to have something like this, because I really like the user experience of it.
C
Yeah. So one benefit — I think there are two major things that are a benefit for custom metrics. First is that I only implemented top for pods, but you could have kubectl top deployment, kubectl top namespaces, and so on; I would just need to copy and tweak it, or make it generic at some point — so this comes out of the box. And the second thing — I don't know about the external API, I haven't checked — but for the custom API—
C
It's enough to know about the object to build the proper query. For the external API — my first idea, with custom columns, was having more of a DSL, similar to how the adapter defines how it translates the custom API into Prometheus metrics — having something more like that. But it's pretty tricky to use, to have on the user's side — like on your local machine — your own definitions of how you translate metrics. This is somewhat problematic without — yeah.
F
A
So, for those who are following along at home, I just clicked on this open KEPs link, which is on the first page of our agenda doc. So we've got seven open KEPs, and they're all in various stages, and I wanted to let people know: previously, in other release cycles, the enhancements team would come to us and check in on every single enhancement, and in the 1.21 release—
A
—that has become way too heavy-touch for them, because there are, I think, something like 200 enhancements being tracked in that repo right now. It's too much work for them, so they have asked all of the SIGs to instead come up with their own list of the things the SIG is going to work on, and the points of contact for each of those KEPs.
A
So we only have seven things; we're in pretty good shape. I'm also tracking the SIG Node KEPs, and there's like 50 of those, so we're mostly okay. But we've got a bunch of stuff here that we need to go through to decide what we're going to get done in the 1.21 release. I know that not everybody who has one of these KEPs is on this meeting, and I know that these folks, for example, have been targeting alpha in 1.21.
A
So I don't know that we necessarily need to go over that one, but for all of the other KEPs here — do we just want to go over them one by one? There's not that many.
E
A
E
My question is whether I need to make one of these. I don't know.
A
I think there should be a KEP, because we need to go and poke other groups about picking stable metrics, and we need to provide guidelines for stable metrics and whatnot. I think it should be a KEP just because there's going to be a bunch of stuff written down and a bunch of stuff discussed, and—
E
A
E
Yeah, I did make an issue for it — yeah. And you volunteered.
A
Oh, no — that issue is in the k/k repo. There needs to be one in k/enhancements if we're going to do a KEP.
A
Because it's not just going to be the API server that we're picking stable metrics for, right? It's going to be — we—
A
No, exactly — we need to tell the various other SIGs how to choose stable metrics, and tell them they should start doing that this release. If we don't have instructions for them to do so — like, I kind of think we should have a review process for it, right? I kind of don't want to just open the floodgates and let random SIGs pick things as stable without understanding what that means, and then we get stuck with bad metrics.
E
A
But, like everything else — you know, there are no stable scheduler metrics, and that's not a good user experience. So I think, if our goal is to go and do a bunch of stable metrics this release, we should coordinate that, in which case it has to go through SIG Instrumentation, so we should have a KEP. It doesn't have to be a super heavyweight KEP — I don't think there's any PRR review even required for this one — but we should have some sort of centralized tracking point for coordination.
A
F
People — I guess there are, like, two things. One thing is the guidance we want — actually, there are three things in my mind. One is that we want to give the people working on the different components guidance on which metrics should be considered stable. The second thing is: do we have a centralized place to check in all the stable metrics, if we want to promote them? The third thing is promoting one specific metric from—
E
A
B
So the two things that I think we should do: we should actually go through that process ourselves once, and once we've done that, we should send out an email to kubernetes-dev saying that this has now actually been used, and people—
B
If people think there are metrics that should be stable, then they can come to SIG Instrumentation and present them or something, and then we can work towards adding them.
B
My guess is that, even with the etcd object counts, we're probably going to have to revise almost every metric that goes stable in some way, right? Even apiserver_request_total — we already decided there are a couple of things we still want to change. So, I don't know, that's just how I see what we should be doing, and I don't really feel like that requires a KEP.
A
Yeah — I mean, there is a KEP already, right? But we've already marked it as closed and done, and it's kind of not done, because we haven't actually picked any of the metrics, and we've been getting a bunch of flak from other people: why are these lists empty? Where are the standards?
E
A
A
A
B
A
B
A
B
A
E
How about we do the testing before the next SIG Instrumentation meeting, to ensure that the framework does exactly what we expect it to do? And then at the next SIG meeting we appoint delegates to the other SIGs, to go to that SIG's meeting and make a brief announcement about what they would have to do in order to elevate their metrics to stable.
A
So, like, I think maybe what I've been saying is getting a little bit lost in the weeds. I am not excited about a KEP for the purpose of going and writing a doc or something like that — I have just done this, and it is quite heavyweight right now. That's not what I'm suggesting we do.
A
The issue is that, from a Kubernetes release and end-user standpoint, if the goal is to make a bunch of stable metrics in this release — which it is, and that is my understanding of the SIG's goal for this release: to have some stable—
A
—metrics — then there needs to be one centralized point for tracking that, so we can deal with release notes, and so the enhancements team knows who to talk to, and that kind of thing. And if we don't — you know, there's no other framework for tracking something like that, other than at least an issue in this repo.
E
A
No, no, I know that. I'm not talking about the specifics of end users contributing; I'm talking about the release team: they'll have no way to know that this is going on if we don't have a KEP, or at least an issue. It doesn't even have to be a new thing — we could just refer back to the KEP that already exists.
A
We are encouraging KEPs to make sure the SIGs — to ensure that they're picking things for this release, because this list is empty and we're getting a bunch of bugs filed about it. Which is true.