From YouTube: SIG Instrumentation 20200806
SIG Instrumentation August 6th 2020
B
Hi, so I'm Lori. I don't think I know most of you; I know Elana from a Kube friends call, and Han and I have talked a bit about things on Slack, so hi. I am a program manager by trade, and also through the work that I do in Kubernetes. I joined up around early April, and Paris Pittman had me look at PR velocity and how to speed that up.
B
She said, "Here's an issue for someone like you to take over and look at," and that led me to the big, broad, and magical world of many other things in Kubernetes, including the release team and the release cycle. So now I'm the program manager for SIG Release and the release team, and I'm also helping out with release engineering, so triage has been a hot topic.
B
I've been pitching to the chairs and tech leads what they might want to do to bring triage into the SIGs. There are a couple of SIGs already doing triage: API Machinery does, SIG Release is getting a process ramped up, and I think Cluster API does too; Cluster Lifecycle does triage, and I think SIG Docs as well. So I was asking SIG Instrumentation to triage, and maybe this is where you can provide some input from your experiences, like how you actually get work into your workflow. How do you schedule work?
C
Triage has been on our to-do list for a really long time, but I don't think we ever properly went through the process of doing it. Correct me if I'm wrong, Han; I think we had it on our agenda once, but I wasn't able to make it to that call. Did we actually do it that day? We did? Okay.
A
In API Machinery we do it twice a week, but the volume of PRs and issues is probably two or three times greater. We don't actually have a process for it right now; we probably should. Given the volume difference we probably don't need it twice a week, but weekly might make sense.
B
Yeah, let me pull up the link for that.
D
Okay, so let me share; I'll just put it on my screen.
B
A disclaimer: I haven't actually used this Triage Party tool in a triage session yet. It just got set up for the release team, and now we will figure out some details around how to set it up for our own use case. I've been talking to Thomas Strömberg, who created it, about the details and what that might look like.
B
Basically, he describes it as multiplayer triage. If we go to the link, you can see an example; yeah, that goes to the actual project. So here you go: here's the actual Triage Party setup for minikube, and there are different ways to actually prioritize work.
B
If there are particular things that you're looking out for, or certain language that you use to describe the stage a piece of work is in, you could capture that. The idea here is that it kind of looks like Jira, if you've noticed: lists of things that are in a certain status category. But it also brings in very Kubernetes-specific elements, like the way that we label, the commenters, and these other details.
A
No, we were planning on using the needs-triage auto-labeling thing. I think there was some KEP where basically a needs-triage label is going to be added automatically to PRs.
B
I'm not sure; as I said, I haven't actually pursued all the details of how you get set up, but we can find that out.
B
Yeah, what you can do is, there's actually a demo that Thomas gave at the June community meeting, and I can dig that up for you; I can also watch it again to answer those questions. I've just got lots of other things on my mind on this project, and that's one of the details that I would leave to someone else. But he has all of that, so we can.
C
I don't have strong opinions on it. I think we just need to get into a regular cadence of actually doing it, and the tooling we can figure out from there. I would start with as little tooling as possible and then see if tooling would be helpful for our process.
C
But that's just me; I don't really feel strongly about it. If people want to try out these tools, I'm totally up for that.
B
Do you want to see some things that I'm doing with the release engineering GitHub project board? It's nothing amazing, but it can give you some ideas around what you might want to do, and what you might need to do, to get your workflow in better shape. So here's a sample from release engineering: here they've cataloged their done-for-1.19 items. That's a lot, so it looks like they've been pretty productive. And then there's a blocked column.
B
This is more straightforward in a company, I think, than in an open source context, because there are different ways of work being blocked. Sometimes you need docs to make the work cross the finish line; sometimes you need a review; sometimes you need lots and lots of other things, with no known date of completion, to happen before your item gets unblocked.
B
So it seems to be a little messier in open source, because there are typically a lot fewer people minding process in a project. But for our purposes, we'd start with just a straight-up simple blocked column, meaning blocked because you need others to do things; then, for code, review and in-progress columns, so that you can track.
B
So here's a bunch of high-priority work. This is work that's been vetted through some triaging that's done manually and in person, in small groups; in this particular group it was Stephen, Tim, Sasha, and Jorge from the release team. And here are three themes of work that, if they were finished, would free up other work to be done. So we've identified those and I've created columns for them, and this might actually get folded back into one of these more general topics.
F
Can I ask a dumb question? Are these columns driven by GitHub labels, or do you manually move cards around in this UI?
A
This looks terrific. We have other items on the agenda, so we should probably move on.
B
Yeah, just one thing on these: when you have themed items like this, then ideally you could create one issue with the sub-items as a checklist, and then you have kind of an epic format in your workflow. That can help you triage as well: with the checklist you can watch the items as they get completed and cross them off, and they'd also be on your project boards, so you have two points of visibility for monitoring your own workflow.
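An epic-style umbrella issue like the one described above might look like this; the theme, sub-items, and issue numbers here are invented for illustration:

```markdown
## Umbrella: clean up release engineering docs

Tracking issue for one theme of work. Sub-items, crossed off as they finish:

- [x] Audit existing docs for stale content (#1234)
- [ ] Rewrite the branch management guide (#1235)
- [ ] Add a docs check to the release checklist (#1236)
```

GitHub renders the checklist and shows its progress on the issue itself, while the issue card also sits on the project board, which gives the two points of visibility mentioned above.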
B
I don't think so; I mean, no, I don't think so. I think it has to be in one place. You can also actually automate it, so that when work shifts from nothing being done to somebody picking it up and starting to work on it, it goes to an in-progress column; and then, when it's ready for review, automation can again kick that item from in progress to needs review.
A
Okay, I will poke around with it a little bit, and maybe somebody else can poke around with Triage Party, and then we can regroup at the recurring meeting.
A
Thank you so much for going over our triage options and nudging us to actually do a recurring triage thing, because we really need it.
B
I think the project board sounds like the simplest for you right now, just to get you going; and if your workload isn't that heavy, it's probably the way that I would go. Just keep it lightweight, with fewer columns than I showed you.
A
Next up, we have Yuchen and Solly; they're going to be showing us something that they worked on.
E
All right. So actually, Yuchen, are you around too? Do you want to pull up the demo video you prepped? But basically, the idea is this.
E
We have this problem where we have a lot of metrics, and we have metrics tooling configured to collect the metrics, but there are a lot of metrics that are either too high-cardinality or that we sometimes don't collect at the right resolution, and for whatever reason we've aggregated them or dropped them when they go into our collection pipeline. And so sometimes we want to go in and debug a particular cluster with these metrics.
E
But we don't have access to them at the fidelity that we need, and so we developed this new tool that basically allows you to connect to a particular Prometheus endpoint and directly run queries against the resulting data. So basically, instead of ingesting them into a Prometheus server, or some other tooling that can read the Prometheus format, it just loads them in memory, and then you can use PromQL.
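This is not promq's actual code, but the core idea, parsing a Prometheus text-format scrape into memory and querying it there rather than ingesting it into a server, can be sketched roughly like this; the scrape payload and metric names are invented samples:

```python
import re

# A stand-in for the text a live /metrics endpoint would return.
SCRAPE = """\
# HELP workqueue_depth Current depth of workqueue
# TYPE workqueue_depth gauge
workqueue_depth{name="crd_openapi_controller"} 0
workqueue_depth{name="non_structural_schema_condition_controller"} 2
apiserver_request_total{code="409",verb="PUT"} 7
"""

SAMPLE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+(\S+)$')

def parse(text):
    """Parse exposition-format lines into (name, labels, value) tuples."""
    samples = []
    for line in text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip blank lines and HELP/TYPE metadata
        m = SAMPLE.match(line)
        if m:
            name, raw, value = m.groups()
            labels = dict(re.findall(r'(\w+)="([^"]*)"', raw or ''))
            samples.append((name, labels, float(value)))
    return samples

def select(samples, name_re, **labels):
    """Keep samples whose name matches a regex and whose labels all match."""
    pat = re.compile(name_re)
    return [s for s in samples
            if pat.search(s[0])
            and all(s[1].get(k) == v for k, v in labels.items())]

samples = parse(SCRAPE)
print(select(samples, r'^workqueue'))  # both workqueue samples
print(select(samples, r'^workqueue',
             name='non_structural_schema_condition_controller'))
```

A real tool layers a PromQL engine on top of such an in-memory store; this sketch only shows the load-and-filter step.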
G
Hello everyone, I'm Yuchen from Google. I've been working with Solly and Han Kang on promq recently, and it's my first time joining this community meeting, so I'm happy to be here and demo our fancy tool, promq.
G
So basically, I recorded this video yesterday to make sure I won't miss anything important today, and I'll focus on the explanation. I have already started a Kubernetes cluster on GKE and set up the proxy.
G
So usually the way we access metrics information is to fetch it from the metrics endpoint. As we can see, from the metrics endpoint we get a full dump of the metrics information, but it's hard to grep through it, especially if you want to focus on some specific metrics.
G
Maybe you can do some further things with grep, but it's still awkward this way. If you want to fetch, say, the workqueue metrics, we can just add a regular expression, and yeah, we can get the information. Or you can go further and add more filters: say, we can grep the workqueue metrics with the name label set to non_structural_schema_condition_controller, or whatever. Yeah, we can get the information, but it is far from enough.
G
Basically,
we
have
to
know
how
the
metrics
dimension
looks
like
or
even
the
value
of
the
label
before
we
really
wanted
to
curate,
and
we
cannot
make
use
of
some
useful
promises.
Query
feature
like
aggregation
or
curate
a
range
of
data
or
do
some
functional
cap.
So
that's
the
reason
we
wanted
to
introduce
prom
queue.
I
have
a
prompt
q
binary
under
my
directory.
So
if
we
run
from
here
without
any
flag,
we
will
basically
get
the
same
information
they
as
they
cue
curl,
the
matrix
then
point,
but
in
json
format.
G
If we run it with the -l flag, which is short for list, we will fetch a list of the metrics. By default, promq uses the kubeconfig, the current kubectl config, so we don't need to set up a config specifically for promq. It's really helpful: we can just switch contexts with the config. But you can change the Prometheus target with the -t flag.
G
With the -q flag we can do queries, and here you can enter whatever arbitrary query you want. Say we want to query the apiserver request metric by request code: then we get the information in JSON format. We can go further: we can query the apiserver request count metric with the code label set to 409, and it's returned in JSON format. And if you don't like JSON, you can output in YAML format with the -o yaml flag, and then we definitely get the YAML result.
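The filter-then-serialize step just described can be mimicked on in-memory samples. The flags and exact output shape belong to promq itself, so this fragment is only an illustration with invented data:

```python
import json

# (metric name, labels, value) samples, as if parsed from a scrape.
samples = [
    ("apiserver_request_total", {"code": "200", "verb": "GET"}, 10.0),
    ("apiserver_request_total", {"code": "409", "verb": "PUT"}, 2.0),
]

def query_json(samples, **want):
    """Filter samples by exact label values and render the hits as JSON."""
    hits = [
        {"metric": name, "labels": labels, "value": value}
        for name, labels, value in samples
        if all(labels.get(k) == v for k, v in want.items())
    ]
    return json.dumps(hits, indent=2)

print(query_json(samples, code="409"))  # only the 409 PUT sample survives
```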
G
We also considered keeping the original Prometheus exposition format for the output; we may implement that later.
G
There's also a watch mode: it continuously fetches the information from the metrics endpoint, so basically it's like we are monitoring this metric with this query, which is very useful if you want to keep an eye on something. Another annoying thing about queries is that we can't really always remember the metric name correctly and type it directly without any typo, so we provide auto-completion support. So, say here, if we type... wait, wait, the video is too slow.
G
If we type "api", then the metrics with "api" as the prefix will pop up. The left part is the metric name, and the right part is the dimension of this metric. So here we can pick the metric apiserver_request_total. The auto-completion is done with an Earley parser, which is a top-down dynamic programming algorithm that fits perfectly for auto-completion, and we always store the metric dimensions, the labels and their values, in memory.
G
So we can create PromQL suggestions with this. Since we have already covered all the PromQL features in our Earley parser grammar, you can basically write whatever query, and we will provide you the right suggestions. So here, if we want to do an aggregation expression, we can start typing "sum" and we will get sum as the suggestion, and the next part after the aggregation operator is a metric.
G
So if we type "api" we will get the metric, and the next part is the labels of this metric, so we get the labels of apiserver_request_total; and the part after that is the potential values of the label. Here we choose "version", and then we can choose the potential value. See, it actually narrows things down.
G
It narrows down the possibility that you make a typo. And following the sum clause is the aggregation keyword, "by", followed by the list of labels, and there it goes, printing out the result of this query. I'll give one more example: we can also do function calls, if you want to rate the apiserver_request_total metric over five minutes.
G
We choose five, followed by the time unit, and yeah, there we go, we get the result in this graph. So that's basically all I want to share today.
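A `sum by (<label>) (<metric>)` aggregation like the one demoed can also be mimicked over a single in-memory scrape. The sample values here are invented, and real PromQL evaluates this over time series rather than one snapshot:

```python
from collections import defaultdict

# (metric name, labels, value) samples for one scrape.
samples = [
    ("apiserver_request_total", {"code": "200", "version": "v1"}, 10.0),
    ("apiserver_request_total", {"code": "409", "version": "v1"}, 2.0),
    ("apiserver_request_total", {"code": "200", "version": "v1beta1"}, 5.0),
]

def sum_by(samples, metric, label):
    """Roughly `sum by (<label>) (<metric>)` over a single scrape."""
    out = defaultdict(float)
    for name, labels, value in samples:
        if name == metric and label in labels:
            out[labels[label]] += value
    return dict(out)

print(sum_by(samples, "apiserver_request_total", "version"))
# {'v1': 12.0, 'v1beta1': 5.0}
```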
A
Cool, thank you, Yuchen. So yeah, we have this on a local fork, but we are going to merge it into instrumentation-tools, the SIG repo.
A
We have a failing test and we want to fix it before we do that, so hopefully today. So yeah, I hope people poke around and look at it.
C
No worries. This is more of an announcement than anything else. A couple of meetings ago we kind of had rough consensus around adopting the Kubernetes prometheus-adapter as a subproject of SIG Instrumentation, and two weeks ago I sent out the actual formal request for this; as per our governance, lazy consensus has succeeded. So that means we are now free to actually make the move, and the project is an official subproject.
C
Thank you so much, Solly, for starting this project and being willing to donate it, I guess.
E
Of course; just let me know when you need me to initiate the project transfer or whatever, so you can keep all the issues and stuff.
C
Fantastic, will do. Thank you so much.
A
Well, I think that's it for today's meeting; we are out of time. Thank you, everyone, for participating.