From YouTube: SIG Instrumentation 20210318
Description
Bi-weekly SIG Instrumentation Meeting, recorded on March 18th 2021.
A
Hello everybody, and welcome to today's edition of SIG Instrumentation. It is March 18, 2021. We have some items on the agenda today, and I shared a link in the chat. Let's see what we've got going on. A reminder: test freeze is March 24th, that is next Wednesday, so less than one week from now. Another announcement: we have the 2021 contributor survey, which SIG ContribEx has asked us to send to everybody.
A
So we have not sent this to all Kubernetes users, because we don't want to hear from users; we specifically want to hear from contributors.
A
So if you are here, you are a contributor and this survey is for you. Please check out the link and fill it out if you haven't had a chance to already. We're very excited to hear from our wider contributor community.
B
Oh yeah, so basically we already have kind of an informal logging working group, and I was thinking that it would make sense to make a more formal structured logging working group, because it's been sort of a gigantic effort, and Elena and Marek have been working quite diligently at it with a lot of contributors. We already have the triage board, so should I just look into formalizing it?
A
Yeah,
so
I
agree
with
all
of
the
above,
and
I
would
add
to
that
that
I
mean
we
have
a
like.
You
know
this
is
kind
of
having
talked
to
a
few
folks.
This
is
kind
of
the
ideal
case
for
a
like
chartering,
a
working
group,
so
we
have
like
a
fixed
effort.
It's
got
like
a
target
end
date.
It
requires,
like
you,
know,
additional
participation
above
and
beyond
the
sig.
It's
not
necessarily
like
needs
to
be
shared.
A
Every
update
in
the
sig's
day-to-day
business
so
and
the
the
key
thing
is,
and
when
we
charter
a
working
group,
we
could
bring
in
a
bunch
of
like
new
leadership
and
folks
who,
like
have
basically
volunteered
to
steward
the
whole
thing
from
start
to
end.
So
I
think
it
would
be
really
great
to
put
out
that
call.
I
know
I've
spoken
with
some
folks
at
companies
that,
like
previously
haven't
necessarily
been
big,
instrumentation
contributors
but
say
they
would
be
interested.
B
Yeah, I'm down, I think it makes sense. But in terms of a lead, the people who come naturally to my mind are either Elena or Marek — but I suspect Elena...
A
I
don't
know
I
will
have
time
to
like
own
this
end
to
end,
so
I
I
would
like
to
see
you
know
like,
let's
put
out
a
call,
let's
see
what
happens
and
I
mean
that's
all
part
of
the
chartering
process.
If
you
know,
if
nobody
is
willing
to
share
it,
then
that
would
be
bad,
but
I
don't
think
that
I
have
time
to
own
this
end
to
end
because,
like
logging
is
not
part
of
my
day,
job.
C
Yeah, I'm currently getting out of one effort and hoping to invest more in structured logging, because it's definitely something that needs to mature. I think we should, and I want to get ready.
A
It sounds like, you know: I have interest in it existing, although I don't want to chair it. Marek also has interest in it existing and may have time in the future. I think that's probably enough to at least start investigating, so sounds good.
A
Like participating, helping — we'll make sure to send all the info to the mailing list.
B
Also, we have the attendees sheet right now, and people are filling it out. So if you are interested, maybe denote by your LDAP that you are interested in this working group, and we can send out invites or targeted information once we get the logistics worked out.
A
Cool, next is me. I got pinged on a PR for the website, for Stackdriver documentation, and I was like: I don't think we support this — we ripped out the end-to-end tests. But somebody was just trying to update the page to make it more accurate, and I think ContribEx or some other group came back and were like: this is assigned to SIG Instrumentation.
A
So
do
we
want
to
support
this?
Should
we
remove
the
page?
Like
I
mean
some
very
kind?
Community
individual
has
gone
to
update
it,
but
I
think
we
removed
our
end-to-end
test,
so
I
don't
think
it's
actually
supported
by
sig
instrumentation
and
it
would
be
better
for
that
to
not
exist
in
the
official
kubernetes
stocks.
C
I
think
that
original
idea
was
that
when
the
class
or
kubernetes
cluster
exists,
there
was
a
place
for
add-ons
and
first-party
integration.
To
start
just
ended
up
with
two
solutions.
One
stack
drivers,
elastic
search
stack
driver
is
mostly
like
not
maintained.
I
think
there's
still
two
people
that
maintain
elasticsearch,
but
with
the
application
of
cluster
and
like
not
really
huge
focus
on
vendor
documentation,
and
then
I
don't
think
there
is
a
big
yeah.
C
Yeah,
like
half
of
this,
I
think
I
drove
it
through
mentions,
like
other
integrations
or
other
solutions
and
they're
not
killed,
because
I
think
elasticsearch
was
not
completed
or
documented.
So
it's
yeah,
it
looks
like
google
or
like
driver
anything
else,
but
there
is
no
anything
else.
C
I
think
merrick
is
the
right
person
to
to
answer
that,
so
I
talked
with
the
person
who
altered
it,
travels
and
like
he
would
prefer
to
have
like
some
references
to
how
like,
but
if
it
like,
he
understands
that,
if
that's
like
vendor
specific
things
shouldn't
be
good.
He's.
Okay,
with
removing.
A
I'm just trying to look through — for those who are following along but maybe not looking at the agenda, here is the...
H
Yeah, I was just gonna say: I think actually the owner of these is the cloud-provider-specific SIG, no? Like, I actually kind of wonder why we were pinged on this in the first place.
H
Because I think the e2e test, or whatever it was — we did transfer a bunch of things to the Google Cloud SIG, or whatever it's called. I think there's one for every major cloud, right? Yeah.
A
I
think
we
transferred
they're
no
longer
separate
sigs
but
like
they
are
there's
like,
I
think,
they're
like
you,
know,
sub
projects
within
the
sig,
so
there
is
like
a
google
cloud
sub
project
within
sync
cloud
provider.
So
what
I
will
do
is
I
will
respond
to
the
docs
person
who
says
we
should
be
removing
the
vendor
specific
stuff
saying
yes,
we
are
okay
with
this
at
a
sig
level.
We
don't
think
this
is
us.
We
think
this
is
cloud
provider.
They
should
weigh
in.
How
does
that.
A
Cool, that's all I need for that one, I think. I can put a note saying I will follow up, and then next on the agenda we have D. Grisonnet.
F
Yeah, can you hear me? So, I wanted to start a discussion basically around pod metrics. For those who don't know, pod metrics are the API resource behind kubectl top pod, and a bug was reported to us two days ago.
F
I think that the values that were reported per pod were somehow drifting from the metrics from cAdvisor for the whole pod. The reason behind this is that, first, for prometheus-adapter, we were not taking into account the pause container that is started in every pod; and Clayton also mentioned something in the bug.
F
There are also some processes that get started, and the memory and CPU usage of these processes is charged to the pod's cgroups, but we never really took them into account in the pod metrics, so they are not really taken into account.
F
Whenever we see them, we skip them — for the pod, or even for autoscaling. And the current problem we have with the PodMetrics resource is that there is only a way to describe container-based metrics, and not all the other metrics we could have, like the ones for these other processes. So yeah, my question is:
F
Would
we
want
to
update
the
resource
type
to
include
some
kind
of
higher
level
metric
for
the
old
pod,
in
addition
to
the
one
for
the
containers
because,
like
we
are,
we
already
have
like
these
metrics
either
coming
from
the
advisor
for
premise,
adapter
or
in
the
case
of
matrix
server?
We
have
them
under
the
api
resources,
endpoint
of
cubelet,
so
yeah.
I
wonder
if
we
want
to
update
that
and
how
we
would
do
that.
C
So here, before we go more into it: what would be the benefit of adding this, of expanding this API? My use cases for this API are autoscaling, and I treat kubectl top as an afterthought — a way to look into the autoscaling pipeline, not as the main dedicated...
C
Thing
to
check
what
are
popular
resources,
because
you
can
always
look
at
grafana
or
any
like
your
monitoring
solution
to
check.
What
is
your
overhead.
H
I
I
get
what
you're
saying,
but
I
think
that's
not
as
obvious
to
end
users
right
like
I,
like
you
and
I
know
and
understand
this,
but
or
like
everyone
in
this
call
understands
this
right
but,
like
I
feel
like
I
understand
where
people
are
coming
from.
Like
I
don't
know
the
entire
context
here,
but
my
assumption
is
that
someone
opened
a
bug
against
openshift
or
something
and
said
wait
a
minute.
This
pod
actually
uses
more
resources
than
kubernetes.
Tells
me,
and
I
and
I
kind
of
understand
that
confusion
right.
G
So
I
want
to
clarify
one
thing
as
well,
because
I
was
confused
about
this.
The
the
pod
level
metrics
already
exist
in
the
summary
api
and
the
cubelet
resource
metrics
api.
This
is
about
the
actual
cluster
level
resource
metrics,
api.
C
Yeah
yeah
so,
for
example,
like
I
think,
stackdriver
already
at
some
point,
definitely
differentiated
those
two
values.
It's
for
more
for
prometheus
and
instagram
like
to
monitor
your
yeah,
to
build
a
monitoring
system,
not
and
not
something
that
concludes
into
the
basically
metric
server
like
this
is
like
the
amazing
metric
server
output
and
auto
screening
further
going
into
it.
Do
you
think
from
to
comment
on
your
on?
What
you
mentioned
is
that
I
noticed
there
is
definitely
lack
of
good
the
documentation
to
keep
ctl
top.
H
Yeah,
I
I
believe
the
the
last
contribution
might
might
have
been
even
me
changing
from
heapster
to
these
apis
and
I
think
back
then
like
60
li
or
I
don't.
I
don't
know
if
this
sick
even
exists
said
that
they
wanted
us
to
move
away
from
group
ctl
top
to
make
this
a
generic
thing
to
use
like
server-side
printing
eventually,
so
I
don't
know
what
what
that
sig's
idea
is
today.
A
So,
just
a
quick
time
check,
we
have
eight
minutes
left
in
the
meeting
and
we
have
a
demo
on
the
agenda,
so
I
want
to
make
sure
there's
enough
time,
for
that
is
this
something
that
should
spill
over
like?
Should
we
continue
this
discussion
in
the
next
meeting?
Can
we
maybe
continue
this
in
slack,
so
it's
a
little
bit
more
visible
as
well.
H
Yeah,
I
I
think
of
an
issue.
Inevitably,
we
need
to
kind
of
involve
six
cli
anyways
so,
like
we
probably
just
move
in
circles
here
so
yeah,
I
think
let's
go
ahead
with
the
demo
and
move
this
to
slack
for
now.
A
Awesome:
okay,
thanks
folks,
brian
the
floor
is
yours.
Are
you
gonna
need
to
share
your
screen.
E
I could start talking while we figure out the screen thing. So yeah, I want to show you a thing which I wrote, called kspan.
E
The
so
the
basic
idea
is
that
I
I
like
distributed
tracing
like
jager
and
I
spent
a
bit
of
time
trying
to
put
some
tracing
into
various
bits
of
kubernetes
and
then
this
occur.
This
idea
occurred
to
me
that
kubernetes
is
already
emitting
events
and
a
lot
of
the
times.
It
emits
an
event
at
roughly
the
same
point
where
I
would
want
to
span
so.
Can
we
do
that?
So?
Can
I
share
my
screen?
Yes,
I
can
share
my
screen.
E
I was trying to get rid of everyone's faces, because I know you hate looking at your own face — well, I do anyway. Okay, so this deployment's got two pods in it, and I'm just using this thing — we're not even gonna look at the pods — I'm just gonna change the version to make it generate events. And I've been doing a few of these before; these are the kind of events that we get.
E
This
created
container
scaled
up
scaled
down
whatever
and
like
that
coming
out
of
cubekiddleget.
E
Well,
a
they
come
out
in
the
wrong
order
b,
they're
impossible
to
understand,
and
anyway
so,
let's
make
a
change,
so
that
would
have
generated
some
events
and
if
my
program's
working
yeah,
we
have
some
spans
coming
in
to
jager.
So
I
got
a
few
runs
that
I
did
earlier
half
an
hour
ago.
Let's
just
update
that
okay.
E
Yes, I would go as far as to say I'm using a ton of heuristics.
E
So we can click on these things, and I pulled out some stuff — notably the message.
E
Thank you very much to those of you who are on the selection committee. What I really wanted to do was put this in front of a couple of people quietly and get some gentle feedback, and so on — but I was persuaded to come to your meeting and just show it to you fully.

A
Do you have a repo? I just want to take a look at the code.
E
Yeah, well — best efforts and heuristics. It's not an exact science. I could talk for quite a long time about what doesn't work so well in the event model — you know, I'm kind of taking lossy information and trying to put detail back in, and I feel dirty — but it looks awesome.
A
Yeah, for me, more from the sort of traditional operations or SRE perspective: one of the things that is really challenging is trying to make sense of events, and this gives a really nice narrative. Yes.
E
Basically, yeah — these events come with the wrong involved object. Can you read that? These things that say "scaled up replica set" and whatever — we need to correct them. The thing that we actually want to hear about is this object, but the event is issued with the parent as the involved object. And it's not even entirely clear — should that be fixed? Possibly, yeah, but you know, one person's bug is another person's breaking change.
E
So
well
I
mean
yeah.
It
doesn't
have
to
be
like
hard
quite
as
hardcoded
as
this
you,
you
can
sort
of
recognize
the
pat
you
know.
If
you,
you
sort
of
see
these
names,
then
that's
the
one
you're
interested
in
I
don't
know,
but
that
that's
the
th,
that's
the
only
place
in
the
code
where
we
kind
of
hard
code
deployment
and
stateful
set.
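The name-recognition heuristic described here might look something like this sketch — a hypothetical helper with a made-up message pattern, not the tool's actual code: a Deployment-scale event names the ReplicaSet in its message rather than in involvedObject, so the span is re-pointed at the object the message actually talks about.

```python
import re

# Hypothetical sketch: the ReplicaSet named in the event *message* is the
# real subject of the span, not the Deployment in involvedObject.
SCALE_PATTERN = re.compile(r"Scaled (?:up|down) replica set (\S+)")

def effective_object(event):
    """Return the object a span should attach to for this event."""
    if event["involvedObject"]["kind"] == "Deployment":
        m = SCALE_PATTERN.search(event["message"])
        if m:
            # Re-point at the child the message names.
            return {"kind": "ReplicaSet", "name": m.group(1)}
    # Fall back to what the event itself claims.
    return event["involvedObject"]

event = {
    "involvedObject": {"kind": "Deployment", "name": "demo"},
    "reason": "ScalingReplicaSet",
    "message": "Scaled up replica set demo-7d4f8b6c9 to 2",
}
print(effective_object(event))  # points at the ReplicaSet, not the Deployment
```

As noted in the discussion, this trades precision for coverage: it only works for the parent kinds and message shapes you explicitly recognize.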
A
Cool
well,
we
are
a
little
bit
over
time,
thanks
so
much
for
demoing.
This
is
super
great.
This
is
very
exciting.
It
sounds
like
community
interest
is
also
here.
Everybody
is
like
very
excited
about
this
and
thanks
everybody
for
attending
today
it
was
great
seeing
everyone's
faces
and
I
hope
to
see
you
all
in
two
weeks.
Good
luck
with
test
freeze
next
week.