From YouTube: SIG Instrumentation 20220120
Description
SIG Instrumentation Bi-Weekly Meeting Jan 20th 2022
A
So welcome everyone to today's edition of the SIG Instrumentation meeting. Today is the 20th of January, and I will remind everyone that the meeting is recorded, so be nice to each other. I think we can kick off the meeting. Should I share my screen to share the agenda? Maybe.
C
Yeah, Damien is our new tech lead. Everyone give Damien a warm round of applause.
B
Yeah, as we can see, do we even have Francesco? I feel like he put an item on the agenda. That's fromani. He put an item on the agenda, and then I don't know that he's been able to attend, so it's just been kind of punted.
E
Alrighty, so I'm still trying to decide what my opinion on this is. Jordan and Wojciech are making some edits to the deprecation policies for API versions, but they came across the deprecation policy for stable metrics and are suggesting that maybe we as a group should reconsider our current deprecation policy, which is, I think, that a stable metric is deprecated for a few releases and then it can be removed. I just wanted to raise this because I don't think everyone was cc'd on here, and to get people's opinions. I think we should stick with the status quo unless we feel strongly that it should change.
C
I don't think that's necessary, because we're going to end up with blank metrics that end up not being used if some code becomes obsolete; metrics that don't functionally do anything, but that we're locked into keeping around forever, because we have a policy.
B
So one thing that I would maybe add, because this came up in a call earlier in the week: what do we do if, for example, we find a metric cardinality explosion? For example, that CVE that we had where people could put outrageous things in for an HTTP method, but that metric is stable. If our stability policy means we can't do anything to it, doesn't that mean we're stuck with a CVE? I think we need to be able to patch that sort of thing, or our guarantees need to be able to handle that sort of thing.
B
So I was wondering if we should also have a policy that stable metrics must have strictly bounded cardinality, or something like that.
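A bounded-cardinality rule like the one suggested here can be sketched as a small allow-list wrapper. This is a hypothetical illustration, not Kubernetes' actual cardinality-enforcement code: unexpected label values are collapsed into a single bucket, so the label can never explode a metric's cardinality.

```go
package main

import "fmt"

// boundedLabel returns the value itself when it is in the allow-list,
// and collapses anything else into a single "unexpected" bucket, so the
// number of distinct label values stays strictly bounded.
func boundedLabel(allowed map[string]bool, value string) string {
	if allowed[value] {
		return value
	}
	return "unexpected"
}

func main() {
	// e.g. an HTTP method label: only well-known verbs are kept as-is.
	methods := map[string]bool{"GET": true, "POST": true, "PUT": true, "DELETE": true}
	counts := map[string]int{}
	for _, m := range []string{"GET", "POST", "TOTALLY-BOGUS-METHOD", "ANOTHER-ONE"} {
		counts[boundedLabel(methods, m)]++
	}
	// At most len(methods)+1 distinct label values can ever appear.
	fmt.Println(counts)
}
```

With this shape of guard, a client sending outrageous method strings only ever grows one series, instead of one per unique string.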
A
But at the end of the day, even if the problem occurs, the unbounded cardinality will only be on a label, not the whole metric.
A
So I don't think we would have to remove the metric completely, although maybe it will lose a lot of explicitness if we remove a label, yeah.
B
Because otherwise, we can't touch them other than removing them after.
B
But I mean, in any case, I think part of the problem here is what we've previously discussed many times with our metric stability policy: it kind of loosely follows API stability, but we don't have a beta, and alpha and stable are kind of misleading because they don't really mean the same things as alpha, beta, and GA do for either features or for APIs.
B
So maybe we want to rename them and proceed with our proposal to change the way that metrics stability works, because it's confusing everyone.
B
I agree, I think we want to be able to remove metrics, because otherwise they're much more of an implementation detail. And maybe we say that you can't add stable metrics where the code could potentially be removed. As long as we have an API server serving requests, we're always going to have request duration, right? So that's not really a risk; we could realistically say that we keep that forever. But I can think of maybe other things that...
C
No, I haven't been able to respond to it. I've been bouncing around, but I have been thinking about it and I will respond to it.
A
I did see the email. There is also another possibility I saw when I went through the code of the registry that we are using for metric stability: we can hide some metrics. We...
C
Can deprecate? No, they get deprecated and they get hidden, and then you delete them. But you can re-enable them. The reason why we have this escape hatch is because basically we are giving people a one-release window to re-enable a metric, unbreak their ingestion pipeline, and then migrate to the next metric.
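The lifecycle described here (deprecated, then hidden, then deleted, with a one-release window to re-enable) can be sketched roughly like this. It's a simplified model, not the actual k8s.io/component-base/metrics implementation; the type, method, and the integer versioning are all illustrative assumptions.

```go
package main

import "fmt"

// Metric models the stability lifecycle discussed above: a metric is
// deprecated in one release, hidden in the next, and deleted after that.
type Metric struct {
	Name              string
	DeprecatedVersion int // minor version in which the metric was deprecated
}

// StateIn reports the metric's state at a given minor version. showHidden
// models the one-release escape hatch that re-enables hidden metrics so
// users can unbreak their ingestion pipeline while they migrate.
func (m Metric) StateIn(version int, showHidden bool) string {
	switch {
	case version < m.DeprecatedVersion:
		return "active"
	case version == m.DeprecatedVersion:
		return "deprecated" // still exported, but marked deprecated
	case version == m.DeprecatedVersion+1 && showHidden:
		return "deprecated" // escape hatch: one release to migrate
	case version == m.DeprecatedVersion+1:
		return "hidden" // registered but no longer exported
	default:
		return "deleted"
	}
}

func main() {
	m := Metric{Name: "some_component_metric", DeprecatedVersion: 23}
	for v := 22; v <= 25; v++ {
		fmt.Printf("1.%d: %s (with escape hatch: %s)\n",
			v, m.StateIn(v, false), m.StateIn(v, true))
	}
}
```

The key property of the escape hatch is visible in the model: the flag only changes the outcome in the single release where the metric is hidden; one release later the metric is gone either way.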
A
You're hoping to do something similar to what they're doing for the APIs: if a metric is stable and we want to deprecate it or whatever, we just hide it forever, and then, if any users need it at some point, they can always turn it on. We don't need to care about it anymore; they can just allow the use of this metric.
C
You need to, what? And you basically have to have this skeleton of a metric forever? That makes no sense, right? If webhooks disappear, we're going to retain the webhook metric, and then it's just going to be broken anyway, because if it's not instrumenting anything, it's actually not going to appear in the public API.
B
Han, perhaps what SIG Architecture is suggesting is that anything that is possibly going to be removed from Kubernetes should never become stable, because that violates our stable guarantees. If there's any possibility that a feature might go away, meaning that its metrics will go away, then they can't be stable metrics.
E
"I can't tell you how long this thing took, because we're not doing that anymore." It still seems useful to be able to describe something as stable, and clearly there is some confusion over using the word stable to describe something that isn't actually that stable.
C
You know, it's a polar thing; it's a qualitative attribute. You can rely on something more or less than something else, right?
C
From SIG Arch, for the beta stuff, they're down with it. We circulated the idea, and then basically I ran out of bandwidth to do it.
B
I think that was the only item that we had. I also forgot to remind us about the upcoming release dates, if people want me to do that.
B
Shall we super quickly go over the list of KEPs that we pulled in for the release? I can share my screen if you want, because I was wanting to put them in the...
B
Okay, sorry, one second.
F
A question about the stable metrics. Currently, I think there are two options: one is to delete some metrics, another is to modify or update some metrics. For deleting metrics, we hope we can delete some metrics later; but for updating or modifying some metrics, is it supported or not supported if the metric status is stable?
B
That's the question, yeah.
B
For example, we may decide that we want to do something with the metric stability stuff because we need to. So here are the five that I pulled: API server tracing, going to beta; Kubernetes system components log sanitization, which is being deprecated; OpenTelemetry tracing, which is hopefully going to alpha; deprecating klog-specific flags in Kubernetes components, hopefully going to beta; and contextual logging, which is a new thing from Patrick, which I think is maybe going to alpha, assuming that we do it for this release. I'm not sure if there's going to be consensus for that one.
B
Is there anything else we had? We discussed a lot of things, but a lot of them were question marks. So I don't know if we committed to metric cardinality enforcement.
E
So we originally had their KEP, and it involved both a contextual logging piece and a trace ID propagation piece, and we asked them to tackle only the trace ID propagation piece and not the contextual logging piece. I see, oh, so it's separate, okay. Yeah, they did respond to me; I need to get back to them. Okay.
E
I doubt that we'll be able to reach consensus by the deadline, but I'll keep working with them for now.
B
Okay, so structured logging, it's not going anywhere.
A
Yeah, okay. At the beginning, Shui planned it for 1.25. Okay.
B
Yeah, I suggested, I mean, since these are all just subdomains of metrics.k8s.io, I think we should just do it as a single KEP, because it's all basically the same thing, rather than having to track three different things.
A
Yeah, but maybe we don't want to graduate all of them to GA, because there are changes that need to be made to the APIs. For example, for the resource metrics API, we wanted to add a new field to it to expose the information from cgroups, because for pod metrics you don't have access yet to all the data from the pod itself, like the resource situation, memory usage, yeah.
B
That would make sense to me, but I wouldn't expect that to happen when we're graduating it. I would expect us to add that field, and then, once it's stable, we just basically take the beta API and rename it as v1, right? Because otherwise we're stuck with the stable deprecation policy, where we can never remove the thing. So I would definitely say we should not be adding fields going into GA; when we want to GA the thing, it should be done, like it's ready to go.
A
Possible. So what would be best? Should we graduate to GA and then do the change, or, I don't know, create a new version, like, let's say, v1beta2 or whatever?
B
It depends on what the change is. There's no reason to add a new API version if it's a compatible change. I don't know what the changes you're suggesting are, so I can't tell you off the top of my head if it's a compatible change, but typically adding a completely new field is always a compatible change.
D
So if you have an idea about changes, you should test them in beta, wait enough releases for feedback, and only then be sure that you're not making further changes. The original idea about going to GA was: okay, no one requested any changes, we can GA at this stage. But with changes, we need to let them cook and verify that everything makes sense.
B
Yeah, the API review is often separate from KEP review, but if you're adding any substantial new fields, you probably want a KEP, because you're going to need a feature flag to track the graduation of that field. I think we currently have the policy that any new field in an API should go through the KEP process, just because it can be really, really difficult otherwise.
B
Going through the whole graduation process helps us go through the steps and ensure that any operational things that we need to do get answered, and that we're very confident in the design and whatnot. We typically don't go and just change APIs without KEPs. There are some exceptions to that; for example, some fields were added in the past release from some of the work that Patrick was doing, and one of them...
B
Basically, the field was the wrong type, and so Patrick came back and asked, "Can I change the type of the field?" And we said no, you can't do that. That's a breaking change!
B
It's
like
a
beta
api
that
will
break
the
api
so
like
we
can't
do
that,
but,
like
I
think
that
we
could
do
is
we
could
immediately
deprecate
the
field
that
has
the
type
issue
and
add
a
new
field
that
we
want
people
to
use
going
forward
and
write
documentation
for
that,
and
so
like
that,
that
sort
of
thing
that
would
not
have
to
necessarily
go
through
like
we
wouldn't
need
a
whole
kept
for
that.
A
Then rename all these KEPs into what we really want to do. For example, for the resource metrics API, we want to extend it to also expose information about the cgroups; and for the others, it's more about the user experience while using the APIs.
D
Separate KEPs for specific changes or a specific API don't make sense to me, because all the changes are in the same code locations, with similar users and similar goals, the goals being better serving, for example, better serving autoscaling. You have the same users and the same reviewers, so I don't think there is any reason for a different audience.
B
Okay, sorry to cut you off, but we are currently at time. Do we have any last quick things, follow-ups we can do on Slack, anything?
D
I think the contextual logging KEP currently is not at risk, but the main blocker in discussion is performance, so I wanted to ping David, because contextual logging is related to tracing. If we want to invest in either of those, we should have a similar front for SIG Arch, to show that the instrumentation here is worth the cost of propagating the context, or changing the context.
B
I'll take a look at that one too; I think I'm the PRR reviewer. And then one last thing before we drop, sorry for going a couple minutes over: I just wanted to remind everyone of deadlines. We have a soft PRR freeze next week, one week from now, on the 27th.
B
So
that
means
please
have
your
production
readiness
review
questionnaire
done
by
that
point,
it's
kind
of
like
the
sla
for
the
prr
team
just
to
ensure
that
they
have
enough
time
to
get
through
all
the
reviews,
because
it's
a
small
team
and
our
enhancements
freeze
is
going
to
be
february
3rd,
and
that
is
when
we're
next
going
to
meet.
So
if
you
need
any
help
with
your
enhancements,
let
us
sink
asynchronously
in
the
slack
channel
or
on
the
mailing
list.