From YouTube: SIG Instrumentation 20220804
Description
SIG Instrumentation Bi-Weekly Meeting Aug 4th 2022
C
So, do we want to finish an API? But if we later on discover that we need to restructure it, because otherwise we can't scale it, we're kind of stuck with it. That's kind of my question, I think.
D
I,
don't
think
we'll
be
State
necessarily
like,
even
if
we
stabilize
like
we
go
ga
with
this
Matrix
API
like
we
can
always
comment
with
a
new
one
later
on,
like
that,
wouldn't
be
an
issue
and
also
like.
We
have
to
consider
that
Matrix
server
is
not
the
only
implementer
of
this
API
like
we
have
many
projects,
that's
actually
implementing
this
API
and
yeah.
Maybe
the
problem
means
in
itself
is
that
metric
Server
doesn't
scale,
but
maybe
also
project
might
scale,
depending
on
their
implementation
and
I.
C
Well, one thing I would like to clearly define before we stabilize the API is the exact feature coverage. Right now there is no such definition: the API definition, like most Kubernetes API definitions, is the types it returns, but we don't impose any requirements or guarantees, like whether it supports label selectors.
C
Whether it supports field selectors, whether you can watch or can't watch, which, for me, is an integral part of an API definition, but right now it's not there. And defining these, again, puts limits on what we might be able to change in the future.
D
Yeah, I'm not sure we will need that much, to be honest. Like, I don't see the actual use case that we could have right now for this kind of feature.
C
It's
like
if,
if
you're
a
cloud
provider
or
like
somebody
who
wants
to
implement
metrics
API
provider
and
metrics
apis
I,
think
one
of
the
few
apis,
except
for
the
cloud
provider.
Things
were
there's
one
API
and
many
or
possibly
more
implementers
that
we
need
to
come
up
with
some
feature
parrot
or
something
like
the
API
can't
just
be
a
type
definition.
D
The
data
like
with
the
serving
of
the
data
is
always
the
same
like
it's
have
to
then
be
implemented
to
serve
it
and
to
access
the
metrics
wherever
they
want
to
access
them,
but
like
at
the
end
of
the
day,
it's
what
we
were
trying
to
do
when
the
API
was
created
because,
like
you,
can
have
many
internet
platform
as
the
backend
of
your
cluster
and
the
API
is
just
here,
the
middleware
to
selling
back
to
metrics
from
wherever
you
want
to
back
to
your
cluster
back
to
kubernetes
and
your
auto
scatter
I
I,
don't
think
like.
D
We
need
a
particular
contract
in
terms
of
how
the
data
itself
should
be
sent
back
to
kubernetes.
C
Well,
but
that's
right
now,
the
only
thing
we
have
right
now.
The
only
thing
we
have
is:
if
we
send
a
request
to
this
API,
you
get
back
a
Json
response
in
this
format.
Our
type
definition,
but
we
don't
have
any
like
definition
of
like
people
can
well
it
doesn't
support
metric
Server,
doesn't
support
watch
but
kubernetes
apis
in
general
support
which
people
might
expect
that
it
works
or
rely
on
it
in
one
implementation,
and
then
it
doesn't
work
on
the
other
one
same
way,
like
I,
think
Auto
Skiller
relies
on
label
selectors.
B
Metrics
API
is
an
aggregated
API
server
right,
yeah,
yeah
I
mean
look.
Endpoints.Supporting
watch
is
a
thing
in
kubernetes
like
you
can't
watch
Discovery
Docs.
C
But what I want is, if we graduate the API, since it's an API that is not implemented by Kubernetes itself but can be implemented by cloud providers and by Metrics Server and other components, that we specify at least something: if you implement this API, you should support this; you don't need to support watch; you should support label selectors, because everybody needs that for this API. Yeah.
B
We're happy to have contributors to the project, right? Like, we would be happy for anyone to get it into a much nicer state, especially performance. That's obviously a good idea; I think it's, like, not even deniable. So if you, or anyone you know, wants to jump in on this, we're happy to, you know, add people to approvers, or to escalate that process.
D
No, no. You have the resource Metrics API, which is the one that is implemented by Metrics Server, for CPU and memory and stuff like that. Then you have the custom metrics API, which serves any kind of metric about a particular Kubernetes object, so it's namespace-scoped. And then you have the external metrics API, which serves any kind of metric.
D
Those are the three metrics APIs that we currently serve and support, but we don't have that many maintainers for those.
B
That would be great to contribute to. Yeah, I mean, jump in if you guys want; if you guys need approvers or whatever, we're happy to escalate that process and make it faster than the normal timeline. I think the situation right now basically warrants that. So yeah, I mean, we're happy to work with you on it. Yeah.
D
Only
like
the
main
thing
that
we
should
solve
is
to
regroup
all
the
API
in
the
same
repository
like
currently,
we
have
like
the
Matrix
API
that
is
implemented
in
metric
server
and
the
two
other
apis
are
implemented
in,
like
a
library
that
we
have
in
kubernetes
six,
and
it
would
be
great
to
at
least
merge
all
of
them
into
one
repository
that
we
could
use
as
a
library
everywhere
and
like
that,
would
remove
dependency.
Two
metrics
servers
by
like
basically
all
the
project
that
currently
wants
to
support
the
resource.
D
So
that's
one
of
the
major
issue
that
I
would
think
need
to
be
solved
and
beside
that
yeah.
What
we
really
need
help
with
currently
is
to
graduate
all
these
apis,
or
at
least
like
see
an
actual
bus
of
update,
either
like
it's
by
improving
the
apis
or
just
graduating.
The
current
one
interest
start
thinking
about
a
new
API
that
will
like
improve
the
current
state.
C
Yeah, do we have any, like, document or...?
B
Okay, I guess we have time for the last subject. It's just a quick update: I'm planning on getting the extension for metric stability in by the next release. I have actually been improving the stability framework by extending the static analysis tooling; it supports summaries now.
B
So,
basically,
by
extending
the
static
analysis
coverage,
we
can
more
adequately
cover
the
entire
metrics
code
base,
which
is
going
to
be
necessary
if
we
are
going
to
start
linting
against
Alpha
Beta
And
stable
metrics,
which
is
going
to
be
larger.
A
number
and
more
diverse,
set
of
metrics
than
than
currently
exists.
B
So
I've
started
working
on
that
just
wanted
to
update
people.
It's
it's
gotten.
A
David, okay, I snuck another topic on. Just for anyone who wasn't here at the very beginning: kubelet tracing got in, so big thanks to Sally for her work on that. It took almost a year for that PR.
B
So
now
we
have
API
server,
node
and
theoretically,
a
TD.
If
we
ever
version
bump.
A
I
think
so
I
think
that
hopefully
should
be
part
of
the
documentation
for
cubelet
tracing
there.
There
are
guides
I,
think
the.
A
Yeah, yeah, we're still waiting on etcd.
B
I,
don't
think
we
can
rely
on
SE
to
version
bump
anytime,
soon,
okay,
the
state
of
the
LTD
communities.
It's
it's
a
really
bad
state
right
now,.
B
Yeah, well, the question is: if the tracing stuff is in 3.4, then yes, we can do your suggestion of asking for a version bump for OpenTelemetry, and that's fine. But if the change isn't in 3.4, I think we're going to be hard-pressed to get them to backport tracing changes into 3.4.
A
So
it
is
it's
three
five
do
we
know?
If
are
we
on
three
five
of
the
FCD
client
in
kubernetes,
we're.
A
And then, at that point, the protocol is stable, and we won't have to worry about telling people to use specific versions of the Collector or anything.
E
One
question
so
I
have
mentioned
the.
There
are
two
versions:
the
client
version
and
server
version
for
nsat,
so
before
in
my
class,
because
I'm
using
three
four,
so
I
just
change
the
ICD
on
the
port
version
to
a
newer
version,
so
it
doesn't
mean
I
need
to
change
other
thing
to
to
support
HD
tracing
or
something
it's
enough.
I
just
changed
the
sad
part
version
from
three
four
to
three
five.
A
So
three
five
does
have
tracing
you
have
to
enable
it
via
command
line
Flags.
The
only
issue
is
that
it
doesn't
let
you
control
the
sampling
rate.
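For reference, enabling tracing on an etcd 3.5 server looks roughly like the following. The flag names are the experimental ones from the etcd 3.5 documentation; check your exact version, and note there is no sampling-rate flag here, which is the limitation mentioned above:

```
etcd \
  --experimental-enable-distributed-tracing=true \
  --experimental-distributed-tracing-address=localhost:4317 \
  --experimental-distributed-tracing-service-name=etcd
```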
A
I'll make notes. And Catherine, do you want to collaborate together on working on these? Yes.
E
Yeah
and
currently
I'm
working
on
end
of
tracing
for
webhook
for
the
mutate
mutating
and
some
of
my
Hawks
here,
because
I
want
to
see
the
latency
for
each
webhook
and
find
the
web
Hub,
which
are
the
slowest,
are
very
slow.
E
But
I
so
I
can
know
okay,
so
there
are
several
different
kinds
of
API
requests,
for
example,
watch
and
the
other
cans,
so
they
have
different
values
for
words.
It's
very
big,
so
I
can
see
very
clear
to
understand
the
distribution
from
this
Trace
data
from
for
metrics
I'm
here.
To
think
why
some
values
are
very
big
foreign.
E
So currently, yeah, even I don't know the details about the API server. I know there are different kinds of API server requests, and I can see very clearly from the tracing graph that there are several kinds of requests: one whose value is over one minute, and the other data is less than one second.
B
But
but
if
you're
looking
at
strictly
the
web
hook,
latency
metric
the
web
hook,
latency
metrics
should
be
reliable
for
you
to
get
the
distribution
of
web
hook
lenses
right
because
it's
web
applications
aren't
going
to
vary
depending
on
a
watch.
Requester
list
request
right
like
the
web.
Hooks
are
web
Hooks
and
you
don't
watch
a
web
hook.
Request
right.
You,
your
Web
book
request,
always
just
occurs:
yeah
yeah,
it's
just
a
synchronous
request,
yeah.
Basically,.
B
But yeah, I mean, tracing is nice to look at overall request flow, yeah. But if you're looking for problematic webhooks, I would look at the webhook metric.
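The suggestion to use the webhook latency metric rather than traces for finding slow webhooks could, as a sketch, look like the following PromQL against the API server's `apiserver_admission_webhook_admission_duration_seconds` histogram. The metric name is the upstream apiserver one; adjust labels and window to your setup:

```
# 99th-percentile admission latency per webhook over the last 5 minutes
histogram_quantile(0.99,
  sum by (name, le) (
    rate(apiserver_admission_webhook_admission_duration_seconds_bucket[5m])
  )
)
```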