From YouTube: SIG Instrumentation 20200416
SIG Instrumentation Meeting - April 16th 2020
A
One thing, I think... Lily can speak for herself as well, but I think we were talking about one PR out there about reducing the cardinality of bucket metrics, or the buckets, where we were wondering if we could potentially solve the same thing with that as it stands. I think no, but I wonder if there is a possibility where we could do something like that. No.
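For a sense of why bucket cardinality matters here: each Prometheus histogram exposes one `_bucket` series per boundary (plus the implicit `le="+Inf"` bucket, `_sum`, and `_count`), and that count multiplies with every label combination. A rough back-of-the-envelope sketch, with made-up buckets and labels:

```python
def histogram_series_count(num_buckets: int, label_values: dict) -> int:
    """Series exposed by one histogram: one _bucket per boundary,
    plus the implicit le="+Inf" bucket, plus _sum and _count,
    multiplied by every combination of label values."""
    combos = 1
    for values in label_values.values():
        combos *= len(values)
    # explicit buckets + the +Inf bucket + _sum + _count
    return (num_buckets + 1 + 2) * combos

# Hypothetical request-duration histogram with 10 buckets
# and two labels: 5 verbs x 50 resources.
print(histogram_series_count(10, {
    "verb": ["GET", "PUT", "POST", "DELETE", "PATCH"],
    "resource": [f"r{i}" for i in range(50)],
}))
# 13 * 250 = 3250 series from a single histogram
```

Dropping even one label value, or a few buckets, cuts the series count multiplicatively, which is why bucket metrics are the usual target of cardinality-reduction PRs.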
B
It
would
because
okay,
so
the
way
I
think
about
labels
is
a
little
bit
different
from
the
fundamental
Prometheus
model
for
the
Prometheus
model.
Labels
are
not
actually
tied
to
a
metric
right
like
they're
like
like
the
underlying
data
structure
is
like
you
have
this
label
it's
some
string
and
there
are
n
values
of
that
label
and
it
doesn't
matter
which
metric
it
came
from.
B
However,
practically
speaking,
that's
not
really
true.
That's
not
how
people
interact
with
metrics.
Labels
are
scoped
right,
so
I'm
not
see.
B
Well,
the
new
combination,
the
label
values
plus
the
metric
name-
makes
a
unique
time
series.
The
metric
name
is
just
meta
label,
daddy.
That
is
true,
that
is
true
and
and
so
like,
yes,
and
so,
but
the
way
that
so
the
way
my
proposal
sort
of
abstracts.
That
thing
is:
if
a
label
is
considered
nested,
it's
already
going
to
be
scoped
to
that
metric
label.
Name
right,
like.
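This matches Prometheus's underlying data model, where the metric name is stored as the reserved `__name__` label and a series is identified by its complete label set. A minimal sketch of that identity model:

```python
def series_id(name: str, labels: dict) -> frozenset:
    """A time series is uniquely identified by its full label set;
    the metric name is just the reserved __name__ label."""
    return frozenset({**labels, "__name__": name}.items())

a = series_id("http_requests_total", {"code": "200"})
b = series_id("http_requests_total", {"code": "500"})
c = series_id("http_errors_total", {"code": "500"})
assert a != b  # different label value -> different series
assert b != c  # different metric name -> different series
assert a == series_id("http_requests_total", {"code": "200"})
```

So whether labels are "scoped" to a metric is a matter of convention and tooling, not of the storage model itself, which is the distinction being drawn here.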
A
I think the only part that I wasn't sure about is: I think you have the fallback mechanism, basically, where if you try to set some label to some value that is not whitelisted, it will go into some "unknown" value or something like that, right? And so I'm not sure that would be okay for a histogram, I mean. Obviously it's not that big of a deal, because the histogram... or, I don't know, I think...
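The fallback mechanism being described might be sketched like this; the allow-list contents and the `"unknown"` fallback value are illustrative assumptions, not confirmed details of the proposal:

```python
def bound_label(value: str, allowed: set, fallback: str = "unknown") -> str:
    """Clamp a label value to an allow-list so cardinality stays bounded;
    any value outside the list collapses into a single fallback value."""
    return value if value in allowed else fallback

# Hypothetical allow-list for an HTTP verb label.
ALLOWED_VERBS = {"GET", "PUT", "POST", "DELETE", "PATCH"}

print(bound_label("GET", ALLOWED_VERBS))    # GET
print(bound_label("TRACE", ALLOWED_VERBS))  # unknown
```

The concern raised for histograms would be that all rejected values land in the same fallback series, so their observations get mixed into one distribution.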
B
I did not think about that thing, but I do remember that issue where people wanted to get rid of some of the dimensions from one of the metrics, because there are tiny buckets. But yeah, sure, I think that would be super easy to do. Yeah. So if anyone wants to help out with the KEP or whatever, just, you know, feel free to make any modifications or whatever, send a pull request and add yourself, whatever, yeah.
E
I think Han was saying that... oh, sorry, go ahead. Oh yeah.
B
Sorry, I just... yeah, better? Okay, great. Yeah, I was just saying we have a deadline coming up, and if people want to get their KEPs in, there's three weeks. So probably, if you want to get a KEP in this release cycle, you would need to have it written, and probably have some people looking at it, by the next big meeting.
A
So
yeah
I
I
think
it
depends
on
the
size
and
everything
but
I'm
sure
I
think
if
anyone
wants
to
realistically
get
anything-
and
it
probably
needs
to
be
open
by
the
next
meeting
so
that
we
can
at
least
discuss
it
a
little
bit
and
then
obviously
do
reviews
and
potentially
merge
it.
But
what
I
wanted
to
say
about
like
a
closing
note,
but
the
first
topic
I
to
me.
It
doesn't
seem
all
that
controversial
I.
B
There's... we need to be able to turn off metrics, so yeah, I'd say those.
E
Yes, I think if we'd get all of that stuff done before enhancements freeze, then it should be fine, but we can work on it all simultaneously as well, trying to identify that list of metrics we want to be stable. But I don't think we should set any of them to stable until we have the rest of that stuff agreed.
E
Should we be sending out some communication about this, like on kubernetes-dev or something like that? Like, "hey, you know, propose some metrics to be stable, let's talk about them this release." I think if we're aiming on taking this GA this release, we're going to have to do that, because I just don't see another way. If there's not a KEP or some sort of communication to the rest of the project, I'm not sure how we're going to do this in some blanket form.
A
Yeah, I think that's also very much why we need to actually finish our KEP completely before we can mark any of them as stable, because at the end of the day, basically, we haven't finished all our work, but we're asking other folks to do work with that framework, more or less. I think it's gonna get only messier if we invite people to already start using it.
B
It's not that bad, right? I mean, these metrics exist, people depend on them, and you can't... I know it's an alpha, but really, if that one disappeared, if the request one disappeared tomorrow, you would hear about it. No, you would break literally everyone in the world. So, I mean, not marking it as stable is like saying: okay, we're just okay with breaking you for now, because we don't have a contract. But...
D
So, in the context of this particular discussion, I'm just curious whether there's already some thinking around having a maturity index for a particular metric. So if people see it, maybe they'll see a value that will be a reflection of how long the metric has been in use, how frequently it churns, how many dependencies it might have, something like that.
A
To explain the topic, though: basically, we've introduced a framework where we can mark individual metrics as alpha or stable. Alpha essentially means we can break anything at any point in time; we're making no guarantees around it. I think we never actually finished the entire discussion around beta metrics, but stable essentially means we need to go through a very specific deprecation model to remove something.
A
So
you
basically
the
idea
is
you
can
rely
on
these
metrics,
but
in
the
same
way-
and
this
is
kind
of
the
discussion
that
we
were
having-
we
need
to
make
sure
that
the
metric
that
we're
marking
stable,
because
it's
a
large
commitment
we
need
to
actually
ensure
that
it's
high
quality
and
that
we're
not
going
to
change
it
anytime
soon,
because
that
would
mean
we
need
to
go
through
the
deprecation
process.
Again.
D
Because I think that in general, mmm-hmm, I've seen this movie several times before; I'm just relatively new to the Kubernetes community. What tends to happen is we have certain habits around how we approach unknown, new, novel problems, and sometimes it's useful to simply know how much is known, or how confident we are in the information that's available. Then each individual consumer can make the decision about how dependent they want to be on that resource, and that allows for kind of an ongoing, continuous improvement in some respects.
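D's maturity-index idea might look something like this sketch; the inputs and the weighting are entirely invented for illustration, not anything proposed in the meeting:

```python
def maturity_index(months_in_use: int, changes_last_year: int,
                   dependents: int) -> float:
    """Invented scoring: age and adoption raise maturity, churn lowers it.
    Result is clamped to [0, 1]."""
    score = (min(months_in_use / 24, 1.0) * 0.5
             + min(dependents / 10, 1.0) * 0.3
             + max(0.0, 1.0 - changes_last_year / 5) * 0.2)
    return round(score, 2)

# Long-lived, widely-depended-on, unchanged metric:
print(maturity_index(months_in_use=36, changes_last_year=0, dependents=12))  # 1.0
# New, churning metric with one consumer:
print(maturity_index(months_in_use=3, changes_last_year=4, dependents=1))
```

Consumers could then decide for themselves how much to depend on a metric based on the score, rather than on a binary alpha/stable flag.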
E
Great idea, to me. Sorry, just keeping an eye on the time, because we only have six minutes left, and the community meeting is right after this one. We have, I think, three things left on the agenda: bug scrub, a bug that serathius mentioned, and fixing the calendar invite. Can somebody quickly... I don't know who put fixing the calendar invite on the agenda?
A
And the problem is, the meeting invite itself was created with a CoreOS Google account that does not exist anymore, yeah, and it's kind of a mess. I'm not exactly sure how to resolve it. I think we said that potentially someone somewhere within Red Hat might be able to have admin access to all ex-CoreOS accounts.
F
I think that the testing by hand... but I don't know, I don't have anything to discuss. It's just the idea of how we can ensure that all the changes to metrics are reviewed by someone from SIG Instrumentation. I just started it; I'm currently talking with some people from Prow, looking for their feedback: what are their ideas, how to implement it? That's all.
A
The things that you mentioned, I think, are fantastic criteria to mark something at whatever maturity we want it to be, or to say it must have these requirements or something like that. It doesn't have to be a hard rule, right, but at least thinking about these requirements a little bit more is definitely something that we haven't done enough, I think. So I'd love to continue this discussion, yeah.