From YouTube: SIG Instrumentation 20211028
Description
SIG Instrumentation Bi-Weekly Meeting Oct 28th 2021
A
Hello,
everyone-
and
today
is
the
october
28
2021
edition
of
sig
instrumentation.
We
have
a
few
items
on
the
agenda
today.
We
will
start
with
action,
item
review
and
announcements.
A: The announcement is: code freeze is coming up. It is November 16th, only about three weeks away, so don't let that catch you by surprise. Action item review: I had been tasked with an ask to see if we can disable PRs or issues in repos like kubernetes/metrics or other things staged out of the main kubernetes repo, so people will stop filing issues there, because we found a bunch there.
A: There are currently no issues there from what I can see. I did talk to ContribEx, and they basically said it's not possible to disable PRs; that's apparently not a feature of GitHub. We could turn off issues, but that's not something they currently support with their GitOps configs or something like that. So there's a bit of a discussion about it, but no obvious solutions that we could turn on today. So that's that. Any questions?
A: Han, you said you had a couple of items for the agenda. Oh, I see, it's on there now. Cool, a bunch of metrics are getting promoted to stable. Awesome, we'll come back to that later. Next we have Sally with the kubelet OpenTelemetry tracing KEP.
B: Because I want to show you: recently, tracing was added to CRI-O, and I started a kubeadm cluster with CRI-O enabled, and then etcd and the API server, because they also have experimental tracing. But the chain was broken because of the kubelet. So that's really why I wanted to add the kubelet: I just wanted to see the chain keep growing, I guess. And so I was able to enable that. Okay, here we go, let me find the right tab.
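As a rough sketch of what "enabling tracing" in a component amounts to under the hood, assuming an OTLP collector listening on localhost:4317 (the endpoint and service name here are placeholders, not the demo's actual configuration):

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC to a local collector (endpoint is an assumption).
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("creating OTLP exporter: %v", err)
	}

	// Batch spans and tag them with a service name so they are easy to find in the UI.
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceNameKey.String("demo-component"),
		)),
	)
	defer tp.Shutdown(ctx)
	otel.SetTracerProvider(tp)

	// Any instrumentation in the process can now create spans via otel.Tracer(...).
	_, span := otel.Tracer("demo").Start(ctx, "startup")
	span.End()
}
```

With a provider like this registered globally, every instrumented code path in the process exports spans to the same collector.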
B: Cool, so yeah, I have it running, but I'll pull up the KEP.
B: It's KEP 2831. It has been reviewed a bit, but if you can take a look: I'm definitely not an expert on Kubernetes architecture, and I think you all know more about that than I do. So I have questions, like: where does it make sense? Alana, I did... Elena, I'm sorry.
B: I did take your feedback and the API review too, so that's in there and pushed now. And then, along with this, I also have the PR, and that has been reviewed more than the KEP, and you can find it easily, it's linked through the KEP. Okay.
B: Yes, yeah, so you can see the API server, CRI-O, etcd, and the kubelet. Like, say you want to look at this one: you can see, oh, if there was an issue, it looks like the kubelet is taking a lot longer than CRI-O. This is what I need you all to help with.
B: This group, I don't have the context for why this is going to, you know, change the world, like what problems this is going to solve. At KubeCon I watched the correlating data presentation, if you saw that; it's really great. There are a few applications where they showed some issues and how they solved them by correlating traces with metrics, and that was great. But is there anywhere we can go to get more of that? Like, who's using this, I guess, is my question, and why did we as a group land on OpenTelemetry?
A: A good question for David, I think, a little bit, but I would also add to that: there were at least two competing standards previously. There was OpenCensus and something else, and those got...
A: Or OpenTracing, right. So I think that, in part, it's just that this is the thing that the community has kind of centralized on and backed, and that is also why OpenTelemetry is a CNCF project. In general for Kubernetes, it's like "why Prometheus?" as well: well, it's a CNCF project, and in general Kubernetes has sort of focused on integrating with CNCF-supported technologies. So yeah.
B: All right, that's what I know too, and it makes tons of sense to me. But I'm also a member of this other group called Operate First, and there's a telemetry working group there. It's a collaboration with some research people from Tufts and Harvard and BU, and Red Hat is a supporter of it too. But it's really supposed to be out in the open; it's just new and it hasn't been yet.
B: But the researcher was asking me yesterday about this question, like, why are we tracing asynchronous or synchronous transactions? And what is this going to be used for? All of these questions that I couldn't answer. So I want to kind of get a discussion going here, maybe in another meeting, but it's something to think about. And then, with that group, Operate First, I have the opportunity to stand up a long-running Kubernetes cluster with tracing enabled the way that I have it here.
B: This is just a single-node kubeadm cluster, but is there anyone who would like to help me, by helping me set it up or getting some varied workflows running on it? The research people really want data, and they want to evaluate how we're using tracing and then get back to us and be like, "oh, you could...". These people have PhDs in tracing specifically, you know.
C: So personally, I think tracing is pretty broadly useful, mostly because Kubernetes is, you know, pretty distributed, and basically the cardinality of events is just too high to be digging around in logs for different...
C: Yeah, and you just need connective tissue sometimes in order to debug a problem, and...
C: Yeah, and basically tracing gives you that, and metrics only give you metrics per component. So that aspect is really only solved by distributed tracing.
B: And don't you think the real benefit of tracing is going to be like what that KubeCon talk was talking about: just correlating, looking at hints in the metrics and then being like, oh, that's where I need to look, and these are the traces, and then the logs are stamped with the trace. So correlating those things together, and that is what is possible with OpenTelemetry that's not possible if you use different tools for different things. So yeah.
B: Right, so yeah, in this cluster I have, it's just straight up nothing running except for a test deployment, and it's a mutating admission webhook. I can just throw the links in the docs, so you can... they're very reproducible.
B: So I figured this... oh, and, let me go back, as you can see, the API server. I can scale it too, to create some activity. I set this up a couple of hours ago. Yeah, let me go here and scale it. But so yeah, my idea was that would create a lot of activity with etcd and the API server, right.
B: So, because I started with CRI-O, I tried to... so here's the kubelet, and here are the operations with the kubelet, and I just wanted to see the kubelet with CRI-O. So I have the gRPC calls with the runtime service, like what is Status, for example, or PodSandboxStatus. Let me... oh no.
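For readers who want a concrete picture of where those runtime-service spans come from, here is a minimal sketch, assuming a CRI-O socket at /var/run/crio/crio.sock and the standard OpenTelemetry gRPC client interceptor; it is illustrative, not the kubelet's actual wiring:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI runtime socket; the interceptor starts a client span per RPC,
	// so calls like Status and PodSandboxStatus show up under the calling span.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithInsecure(),
		grpc.WithUnaryInterceptor(otelgrpc.UnaryClientInterceptor()),
	)
	if err != nil {
		log.Fatalf("dialing runtime: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	if _, err := client.Status(ctx, &runtimeapi.StatusRequest{}); err != nil {
		log.Printf("status call failed: %v", err)
	}
}
```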
A: It was under kubelet, you had just scrolled past it, under kubelet.
B: Oh okay, all right, let's see.
A: It's closer to the bottom.
B: Oh okay, Status, yep, yep, got it.
D: So if I can chime in and talk a little bit: one of the things that I think would greatly improve this, in terms of usability, is connecting these gRPC calls as part of a higher-level flow, right. So when I previously worked on something similar to this, I guess a few years ago now, one of the things that was useful to me was to put a parent span at the entry point of something the kubelet is doing. Okay, like, one thing the kubelet might do is start a pod, right.
D: The idea with tracing is, I can have a general problem that I know about, like, oh, my pod is taking a long time to start, and then I can see the sequence of events, right: oh, it went and called CreatePodSandbox, and then it went and called this other thing, PullImage, and that took 20 minutes. And that way you can... then you can also...
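A minimal sketch of the parent-span idea David is describing, with a hypothetical startPod helper and stubbed-out runtime steps; the real kubelet code paths differ, but the nesting is the point:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// startPod is a hypothetical entry point, not real kubelet code. The idea is
// one parent span per high-level operation so the runtime calls underneath
// (create sandbox, pull image, ...) nest as children instead of being roots.
func startPod(ctx context.Context, podName string) error {
	ctx, span := otel.Tracer("kubelet").Start(ctx, "StartPod",
		trace.WithAttributes(attribute.String("pod.name", podName)))
	defer span.End()

	if err := createPodSandbox(ctx, podName); err != nil {
		return err
	}
	// A slow image pull (say, 20 minutes) would show up as one long child span here.
	return pullImage(ctx, podName)
}

// Stubs standing in for the real runtime calls.
func createPodSandbox(ctx context.Context, pod string) error {
	_, span := otel.Tracer("kubelet").Start(ctx, "CreatePodSandbox")
	defer span.End()
	return nil
}

func pullImage(ctx context.Context, pod string) error {
	_, span := otel.Tracer("kubelet").Start(ctx, "PullImage")
	defer span.End()
	return nil
}

func main() {
	_ = startPod(context.Background(), "example-pod")
}
```

Because every step inherits ctx, the slow PullImage span shows up as a child of StartPod, so "my pod is slow to start" leads straight to the offending step.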
B: So you'd have the gRPC calls together in one span, yeah.
A: ...more information right now by instrumenting from the kubelet side. I strongly agree with David that what we're missing is the context, like how do these things all piece together, because right now we can't see what the entire lifecycle of a pod looks like, for creating a pod or this kind of thing. It requires more intimate knowledge in terms of how the kubelet schedules pods.
A: I think, right now, the information that we are getting here, there's not much benefit above and beyond the CRI-O traced stuff. We can see, okay, this thing is taking a long time, but not "this thing is taking a long time in the context of this". Or, in this case, most of these are taking a very short time, and so that doesn't... if there's a lag, for example, say there's a three-second delay between X and Y, that's probably more what we care about than the fact that this gRPC call was very fast.
B: Yeah, and I would love to have an application that I could tweak, to break something or to add latency, like add a timeout or a lock or something. I just haven't gotten there yet. And that's really good information, because now I'm going to go back through the code and look for those spots.
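A toy version of the kind of tweakable application being described, entirely hypothetical: an HTTP server whose artificial delay can be dialed up via an environment variable so the latency shows up in traces:

```go
package main

import (
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	// DELAY is a knob to simulate slowness, e.g. DELAY=3s to fake a lock or timeout.
	delay, err := time.ParseDuration(os.Getenv("DELAY"))
	if err != nil {
		delay = 0
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(delay) // artificial latency that should stand out in a trace
		w.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```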
A: Those of us that intersect with SIG Node, we know this code; that's probably me and David.
D: Right, if you want to show how someone would go from "hey, my pod's taking a long time to start" to "oh, image pulling is taking forever", it's really easy to see if you get that kind of trace. The other thing that can be a fun problem to debug is that the kubelet needs to report status for pods, and that can sometimes be the long tail of seeing a pod become ready, if you are spamming the API server with other junk or if you rate limit the kubelet.
D: I think it has a certain number of updates it's allowed to do per second, I want to say; Elena might know better. But if you lower that to something really unreasonable, you can see a big gap in there.
A: Yeah, the other thing I would add is, if you're looking to come up to speed, to get a little bit more context on this: in the last release we totally refactored, or maybe not totally, but we did a very large refactor in terms of how the pod lifecycle currently works. We did a little bit of unification, and I spoke about that at KubeCon. So if you go...
A: ...to the recording, there's a short section on that that talks about the...
B: Cool. Development cycle, yeah. And quickly, one thing that David suggested, now that we have two components, is to move the tracing configuration to the component-base repo, so that if the scheduler wants to add tracing, or the controller manager, then it will be a unified tracing configuration. So in the PR we're doing that too.
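For illustration, a shared configuration type in component-base might look roughly like the sketch below; the field names mirror the API server's existing tracing options but are an assumption here, not the merged API:

```go
// Package tracingapi sketches a shared tracing configuration that multiple
// components (apiserver, kubelet, scheduler, ...) could embed, so each one
// does not define its own copy. Field names are assumptions for illustration.
package tracingapi

type TracingConfiguration struct {
	// Endpoint of the collector the component exports spans to,
	// e.g. "localhost:4317". If nil, a default endpoint is used.
	Endpoint *string `json:"endpoint,omitempty"`

	// SamplingRatePerMillion is how many spans per million are sampled.
	// Keeping it low by default limits overhead on busy clusters.
	SamplingRatePerMillion *int32 `json:"samplingRatePerMillion,omitempty"`
}
```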
A: ...do that as a separate refactor, but I think everybody from the API review group was happy about that. So, David, I want to take a look at that.
A: Thanks again, awesome, thanks so much, Sally. Han, you want to take it away with a bunch of metrics getting promoted to stable?
C: Yeah, I just wanted to throw it on the radar. Basically, we have three different components, I think, that are getting proposed: the scheduler, I don't know, the controller manager, the API server, and there are three PRs. There are three issues, I think. Hold on, let me...
C: Yeah, so we have three CLs. Scheduler metrics: there are a few that they want to promote. I don't know, I think I had an objection to one of them, but...
C: Yeah, so if you want to take a look at these offline, probably a bunch of eyes before we commit a bunch of metrics to stable would be a good thing.
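For context on what promotion to stable means mechanically: component metrics are declared through k8s.io/component-base/metrics with a stability level, roughly as in this sketch (the metric itself is made up):

```go
package main

import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// exampleTotal is a made-up metric; the point is the StabilityLevel field.
// Once a metric is declared STABLE, its name and labels are covered by the
// metrics stability guarantees and can no longer change freely.
var exampleTotal = metrics.NewCounterVec(
	&metrics.CounterOpts{
		Name:           "example_requests_total",
		Help:           "Example counter used to illustrate metric stability levels.",
		StabilityLevel: metrics.STABLE, // was metrics.ALPHA before promotion
	},
	[]string{"code"},
)

func main() {
	legacyregistry.MustRegister(exampleTotal)
	exampleTotal.WithLabelValues("200").Inc()
}
```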
E: There was one condition for the migration from alpha to stable at first that was supposed to be different. Well, a metric needed to be used somewhere, I guess to be used widely, before being promoted to stable. Do you know how we can measure that?
A: So yeah, I think at this point it's mostly just going to be based on people self-reporting and saying, yes, we would like these metrics to be stable. Kubernetes can't call back to people's clusters and ask, hey, are you using this metric in a meaningful way? And I mean, even that would be useless in this case, because of course the metric exists; if it's there, it's going to get scraped, but that doesn't necessarily mean somebody's querying against it. So...
C: We should definitely ask people from the components. I mean, my comment here was basically saying this; I don't know if you can see it, but the component owners own them. They commit to adhering to the stability requirements. So I mean, if they believe that the metrics should be stable...
A: Or as a project. I think, yeah, I would say, you know, ask around at your place of work. I don't think we have any objections to making these stable; I mean, it's on the component owners to either object or accept, since they will be involved in the approval. It looks like we are at time.
A: I don't want to keep people late, but thanks, Han, for bringing this to the group. And I guess, for the newcomers, you can find us between our meetings on the #sig-instrumentation channel on the Kubernetes Slack. If you're not yet signed up, I believe it is slack.k8s.io.