From YouTube: Kubernetes SIG Instrumentation 20190613
Description
Meeting notes: https://docs.google.com/document/d/17emKiwJeqfrCsv0NZ2FtyDbenXGtTNCsDEiLbPa7x7Y/edit#heading=h.scsskcn8qcvc
A: Remember that this meeting is being recorded, so everything you say will stay on the internet for eternity. Okay, and with that, let's jump into our agenda. Our first item is the email thread that we had about basically finishing the metrics overhaul. I know that there was some pushback; the record of that is in the comment section of the PR. Then I went back and reviewed the KEP, and it looks like all the KEP approvers were SIG Instrumentation people, which is weird, since this basically ends up modifying other SIGs' binaries. Currently I'm working on this migration KEP, and I don't know if it makes sense to go through SIG Arch for these sorts of approvals, or whether it makes sense to go through each SIG individually. I mean, our KEP system isn't really designed to handle mass approvals from each component owner, right? It's kind of a pain. Yes, because if you want to get something implementable, then technically you would want all of the approvers to LGTM your KEP.
A: We did announce it in the community meeting. I guess we should have at least posted it on the mailing lists of the SIGs involved, and potentially directly told the SIG leads, maybe at least for the SIGs that own the binaries, as you said; that could make sense. Yeah, I think, well, I understand how SIG Arch could make sense.

A: I don't know if that actually solves the communication problem, though, right? Even if we do talk to SIG Arch, and SIG Arch says "this is fine, go ahead," it still doesn't solve the problem of users being informed, because that's why we didn't go ahead, or that was the concern, right? So, yeah. I mean, now it slipped, so obviously it's not happening in 1.15, I guess.
D: Yeah, I think it feels to me like deleting them at the same time as introducing the deprecation warnings, going "oh, we're introducing deprecation warnings for some, but we've also just deleted a pile at the same time," sort of feels like it's going to annoy some people. It doesn't feel good to me having to keep them till 1.18, but I think it's the realistic way if we want to stick to the guarantees that we put in place. I think that's the only way.
B: Because they've been implicitly stable, people have been implicitly treating them as stable. Now we are moving to an explicit model, and we are explicitly saying: these things you might have been treating as stable are not actually stable. We're giving you some leeway, but then we're going to remove them, as opposed to just removing them outright.
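The lifecycle being described, where previously-implicit metrics get an explicit deprecation window with warnings before removal, can be sketched roughly as follows. This is a toy illustration, not the actual Kubernetes implementation; the registry function, metric name, and two-release grace window are assumptions.

```python
import warnings

# Hypothetical rule: a metric deprecated in release X is removable
# only REMOVAL_GRACE_RELEASES minor releases later (e.g. 1.14 -> 1.16).
REMOVAL_GRACE_RELEASES = 2

def minor(version):
    """Return the minor release number of a 'major.minor' version string."""
    return int(version.split(".")[1])

def register_metric(name, current_version, deprecated_since=None):
    """Register a metric, warning (rather than deleting) while deprecated."""
    if deprecated_since is None:
        return "active"
    age = minor(current_version) - minor(deprecated_since)
    if age >= REMOVAL_GRACE_RELEASES:
        return "removed"  # past the grace window: the metric is gone
    warnings.warn(
        f"metric {name} has been deprecated since {deprecated_since}",
        DeprecationWarning,
    )
    return "deprecated"

print(register_metric("old_requests_total", "1.15", deprecated_since="1.14"))  # deprecated
print(register_metric("old_requests_total", "1.16", deprecated_since="1.14"))  # removed
```

Under this scheme, users see warnings for a full release cycle instead of metrics silently disappearing.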
A: What do you feel about reaching out to the SIGs that own the binaries and just asking what they think? We did deprecate them in 1.14, for sure; everybody can see this, we did communicate it, but we obviously didn't communicate it well enough. Let's see what they say if we were to remove them in 1.16. If those SIGs were to say that that's still too aggressive, then we can come back and redesign, say, or...
A: Maybe for framework kinds of things it's probably better to just verify with SIG Arch, because at the end of the day they're responsible for overseeing cross-cutting efforts in some way. So for stuff like that it could make sense to have SIG Architecture involved; there are architectural decisions that they make that in fact affect every single SIG, right? So that's not necessarily new to them.
A: I think for the metrics overhaul that probably was an oversight, and we should have confirmed with every SIG that owns a binary. But our mission is to put guidelines and things like that into place, right? And if we're enforcing those guidelines, then we're also just following what we should be doing. So I feel like that's just following our duty at that point, yeah.
B: That was it. So if you guys want to take a look at the draft KEP, it will be a work in progress; hopefully it will be up for real review in a week or so, and then maybe we can see if people have any objections, sign it off, and get it into implementable state by the next SIG meeting. Then I think Peter wants to present his logging thing.
A: Okay, go ahead. I mean, I can mention it. So you're talking about the comment that I wrote? Yeah, so basically I commented on the newest KEP, which is metrics validation and verification, although it's kind of blurry which KEP it really belongs to. Marking a metric as stable is a really strong guarantee that we're giving, because we're essentially saying nothing will ever change: label keys won't change, the metric name won't change. That's a great thing for users, but a terrible one if we've made a mistake. So I'm kind of thinking whether there should be an intermediate state, and the example that I gave was the API server request total and duration metrics, which we have found not to suffice multiple times, and which we have intentionally changed multiple times, because of things like scalability.
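The stability contract being described, that a stable metric's name and label keys are frozen, can be made concrete with a small compatibility check of the kind a metrics API review might apply. The schemas and the check itself are hypothetical, not the real apiserver metrics or tooling.

```python
# A stable metric's contract: its name plus its set of label keys.
# Renaming either one breaks every existing dashboard and alert query.
def breaking_changes(old, new):
    """Compare two {metric_name: label_keys} schemas; list contract breaks."""
    breaks = []
    for name, labels in old.items():
        if name not in new:
            breaks.append(f"metric {name} removed or renamed")
        elif not set(labels) <= set(new[name]):
            missing = set(labels) - set(new[name])
            breaks.append(f"{name}: label keys removed: {sorted(missing)}")
    return breaks

stable = {"request_total": ["verb", "code"]}
# Renaming the "code" label key breaks every query that selects on it:
proposed = {"request_total": ["verb", "status_code"]}
print(breaking_changes(stable, proposed))
```

Adding a new label key, by contrast, would pass this check, which is part of why the conversation below focuses on label values and cardinality rather than renames.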
B: Then, after a quarter or two quarters or whatever of the thing not changing, and people demonstrably showing that this is not going to change, you can go for a metrics API review and promote it to stable. If you want to add a dimension to a metric, you know, philosophically the way people have to look at it is different, right? Like, if you want to add a new dimension to request count total or whatever.
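The intermediate-state idea just raised, where a metric only becomes stable after sitting unchanged for a period and passing a metrics API review, could look roughly like this. The state names and the two-quarter threshold are illustrative assumptions, not an agreed design.

```python
# Hypothetical stability lifecycle for a metric: it starts ALPHA (free to
# change) and is promoted to STABLE only once both promotion criteria hold.
def promote(state, quarters_unchanged, review_passed, min_quarters=2):
    """Promote ALPHA -> STABLE after a quiet period plus an API review."""
    if state == "ALPHA" and quarters_unchanged >= min_quarters and review_passed:
        return "STABLE"
    return state

def deprecate(state):
    """A STABLE metric is never changed in place, only deprecated."""
    return "DEPRECATED" if state == "STABLE" else state

s = "ALPHA"
s = promote(s, quarters_unchanged=1, review_passed=True)  # too early
print(s)  # ALPHA
s = promote(s, quarters_unchanged=2, review_passed=True)
print(s)  # STABLE
```

The key property is that a mistake caught while a metric is still ALPHA can simply be fixed, while a mistake in a STABLE metric forces the whole deprecation cycle.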
A: Let's say we do have the API server request total in a stable state, and we realize there is a new label value that we would want. I don't know, say there's actually a new API being introduced, which would cause a new label value, right? How do we deal with that? There's definitely going to be another API.
B: Right, yes. I mean, one thing we should do in metrics API reviews is make sure that the labels are bounded, that the set of values is bounded, because we have definitely not been doing that, and that has repeatedly been a cause of memory leaks for us, absolutely. But besides that, as long as it's an enumeration...
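The bounded-label-values point can be made concrete: each distinct combination of label values is its own time series held in memory, so a label drawn from an unbounded set grows without limit, while an enumeration stays fixed. A toy counter, with made-up label choices:

```python
from collections import Counter

def observe(series, labels):
    """Count one observation per distinct label combination (one series each)."""
    series[tuple(sorted(labels.items()))] += 1

bounded, unbounded = Counter(), Counter()
for i in range(1000):
    # Bounded: "verb" is an enumeration -> at most a handful of series.
    observe(bounded, {"verb": ["GET", "PUT", "POST"][i % 3]})
    # Unbounded: a unique ID in a label value -> one new series per request.
    observe(unbounded, {"path": f"/api/v1/pods/pod-{i}"})

print(len(bounded))    # 3
print(len(unbounded))  # 1000
```

This is the mechanism behind the memory leaks mentioned above: series are never garbage-collected while the process runs, so unbounded label values mean unbounded memory.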
B: ...the memory footprint of metrics, right. So anything that blows up labels now has a bigger memory footprint, and we should be cognizant of this. But in general I'm not convinced that adding a label value is a backwards-incompatible change, right? Like, if you think of the labels as being fields on the database model.
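That database-field analogy can be checked with a toy example: when a new value appears for an existing label key (say, a new API creating a new resource type), existing selector queries keep returning the same results and aggregates still sum over everything, which is why it is arguably not backwards-incompatible, only a cardinality increase. The metric layout and values here are made up.

```python
# Toy time-series store: {(label key/value pairs): count}.
before = {(("resource", "pods"),): 40, (("resource", "services"),): 10}
# A new API introduces a new value for the existing "resource" label key:
after = dict(before)
after[(("resource", "widgets"),)] = 5

def total(series):
    """An aggregate query, e.g. sum(request_total), ignores label values."""
    return sum(series.values())

def select(series, key, value):
    """A selector query, e.g. request_total{resource="pods"}."""
    return sum(v for k, v in series.items() if (key, value) in k)

# Old selector queries are unaffected by the new label value...
print(select(before, "resource", "pods"), select(after, "resource", "pods"))  # 40 40
# ...while the aggregate simply grows to include the new series.
print(total(before), total(after))  # 50 55
```

Contrast this with removing or renaming a label key, which silently changes what existing queries return and is therefore treated as breaking for stable metrics.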
F: If you wrote out those scenarios and basically gave people an example of "this is how you add a metric, and this is how it evolves," and explained how those steps work, that would give them a bit more understanding that this is not a scary change and that we've thought about all these various transitions.
B: This is how our thing works, and we should do something to prevent this from breaking. I think that's important. And I think, you know, this is a reasonable group, and when people voice concerns about the timelines, I'm sure everyone here will be receptive to listening and finding a solution for them. Yeah.
A: Okay, anyway, I feel like we're actually pretty much on the same page, so I think what Peter said was a good point: maybe actually think through some of these scenarios and write them down, maybe in documentation or in one of the KEPs, just so that we think this process through a little bit more. I think that would be helpful. Sounds good. Okay, and I believe we're already at the end of our time. So unless there's anything anybody wants to talk about last minute... going once, going twice.