From YouTube: SIG Instrumentation 20210401
A
All right, welcome everyone. Today is April Fools' Day, 2021, and it is the biweekly SIG Instrumentation meeting, so welcome everyone. We have only a couple of things on our agenda today. The first is making webhook-caused critical failures more visible. Do you want to take this one?
B
Am I disabled? Should I rejoin? But anyway, I think the doc is properly publicly accessible, so you should all be able to open the doc. Yes, so.
B
So hi everyone, I'm from the GKE team. My team on GKE manages the masters running on GKE, and our team found that the webhook is very impactful: it's very, very useful and also a very dangerous thing for Kubernetes. An admission webhook can validate or mutate a request, but the permission to create a webhook is, I think, system-critical, or cluster admin.
B
I forgot, but yeah, a user can create a webhook to block critical requests like the kube-controller-manager and kube-scheduler lease renewals, which means that, for cloud providers:
B
A user can create a webhook that breaks the critical components which are hosted by the cloud provider. So it's very powerful, but it's also a very dangerous thing. There has been a lot of discussion about how to improve the webhook to reduce such risks, but I don't think there has been any fundamental improvement on the kube-apiserver side
B
or the API machinery side, because the design of the webhook is very complex. So I think what we can do in SIG Instrumentation is improve the metrics associated with the webhook. Currently there are two metrics: one is the webhook admission duration seconds, which records the latency of a webhook, and the other is the webhook rejection count, which records the number of rejections by the webhook. So for these two metrics:
B
We have the name label, the type label, the operation label, and the rejected, or rejection-code, label. With the name of the webhook we can identify the latency of a webhook or the rejection count of that webhook. However, we cannot see which object the webhook is operating on, that is, the object of the request that the webhook is handling.
B
So, for example, I want to know whether a webhook has a bad impact on the requests for the kube-controller-manager lease renewal, but from these metrics I cannot identify that. So the idea here is that we want to know which object, or at least what kind of object, the webhook is affecting. So previously, yeah:
B
There
is
a
resource,
related
labels,
so
like
like
the
resource
name,
the
sub
resource
et
cetera,
but
they
were
removed
from
1.11
because
of
a
higher
culinarity.
There
may
be
too
many
objects,
so
the
the
this
these
two
metrics
will
just
consume
too
too
much
memory
and
the
proposal
here
is
that
we
don't
we
don't
put
the
resource
label,
but
we
put
the
namespace
label,
which
is
also
very
helpful,
because
we
can
identify
that
whether
a
vapor
is
affecting
the
cube,
the
cube
system
name
space.
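The proposal can be illustrated with a rough sketch. The webhook name, operation, and namespace values below are made up, and the maps stand in for metric series purely for illustration; this is not the real apiserver metrics code.

```go
package main

import "fmt"

// seriesKey sketches (simplified) the label set the webhook rejection metric
// carries today: webhook name and operation, but nothing about the target.
type seriesKey struct {
	name      string
	operation string
}

// nsSeriesKey is the same label set with the proposed namespace label added.
type nsSeriesKey struct {
	name      string
	operation string
	namespace string
}

func main() {
	today := map[seriesKey]int{}      // series -> rejection count, current labels
	proposed := map[nsSeriesKey]int{} // series -> rejection count, with namespace

	// One rejection hits a lease renewal in kube-system; another hits an
	// ordinary workload in a user namespace.
	events := []struct{ name, op, ns string }{
		{"example-webhook", "UPDATE", "kube-system"},
		{"example-webhook", "UPDATE", "team-a"},
	}
	for _, e := range events {
		today[seriesKey{e.name, e.op}]++
		proposed[nsSeriesKey{e.name, e.op, e.ns}]++
	}

	// Without a namespace label both rejections collapse into one series, so
	// the rejection of the critical kube-system request is indistinguishable.
	fmt.Println(len(today), len(proposed)) // 1 2
}
```

With the namespace label, an alert can key on rejections whose namespace is kube-system instead of paging on every rejection of that webhook.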
B
So if we can detect this, we know that some webhook may be misconfigured and may be taking critical components down. So the proposal here is to add a namespace label, and I don't think the cardinality is a problem, because the number of namespaces is much less than the number of objects. In theory the number of namespaces is unbounded, but thanks to this KEP, the one called dynamic cardinality enforcement:
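The enforcement idea can be sketched roughly as follows, assuming the mechanism works by allow-listing label values and collapsing everything else into a single catch-all value. The allow-list contents and the catch-all name "unexpected" here are illustrative, not the exact implementation.

```go
package main

import "fmt"

// allowedNamespaces stands in for an operator-configured allow-list of
// namespace label values; the entries here are illustrative.
var allowedNamespaces = map[string]bool{
	"kube-system":     true,
	"kube-node-lease": true,
}

// namespaceLabel returns the value recorded for the namespace label: the
// namespace itself when allow-listed, otherwise one catch-all value, so the
// number of label values (and thus series) stays bounded no matter how many
// namespaces users create.
func namespaceLabel(ns string) string {
	if allowedNamespaces[ns] {
		return ns
	}
	return "unexpected"
}

func main() {
	fmt.Println(namespaceLabel("kube-system")) // kube-system
	fmt.Println(namespaceLabel("team-a-dev"))  // unexpected
	fmt.Println(namespaceLabel("team-b-dev"))  // unexpected (same series as team-a-dev)
}
```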
B
With this KEP we can configure the kube-apiserver to accept only a few allow-listed namespaces. So, for example, we can configure it to accept kube-system and other important namespaces only, so cardinality won't be a problem. And I also found another problem: for the duration metric, the webhook duration, the range of the histogram is 0 seconds to 2.5 seconds, but for some critical requests the timeout is larger.
B
Let's say, for example, for leader election the timeout by default is 10 seconds, and I think it has been lowered to 5 seconds in the most recent version, but with a maximum bucket of 2.5 seconds we can never observe the timeout. So the idea to address this second problem is to extend the buckets from 0 to 2.5 seconds to 0 to 2.5, 5, and 10 seconds.
B
So these are two simple improvements for the webhook-related metrics, and hopefully they can help us identify bad behavior on critical requests more easily. So yeah, that's it.
A
Yeah, I think that sounds reasonable. As I understand it, you're basically suggesting two changes: the bucket sizes and the label. Yes, and I think that last sentence is out of date.
A
Your observation on the buckets is actually a good one, and in fact I think we should probably update our instrumentation guidelines so that histogram buckets always end with the timeout value, because otherwise we end up with the same problem that you mentioned, which is that you can never observe a timeout.
B
Okay,
so
I
think
this
is
the
first
time
I'm
trying
to
add
something
to
the
oss
kubernetes.
So
would
you
advise
what's
next
step
so
do
I
need
to
create
a
an
issue
and
then
yeah?
I
would
definitely
send
a
code
to
to
improve
to
these
two
metrics,
but
do
I
need
to
create
an
issue
first
and
then
send
the
pr
and
then.
B
And then, after creating an issue, I assign it to myself?

A
Yeah, you can just assign it to yourself.

B
I see. And then I send it for review. Can I ask anyone from Instrumentation to review it? And also...
A
Yeah, well, thanks. Next I guess we have coffeepac with the fluentd-elasticsearch issue. Hello.
C
coffeepac here. So basically SIG Instrumentation owns a couple of cluster add-ons, and, apologies, I'm getting a new water heater put in. So we own a couple of cluster add-ons, one of which is the fluentd-elasticsearch one that I'm an owner for, and it's time to move it out. So the question is: I saw that SIG Instrumentation just got a repo created, instrumentation-utils, so my question was, do you want me to move it there, or do you want me to create a new repo? Because I don't really have strong feelings about this at all. This still does get quite a bit of usage.
C
I think the last time I checked it is getting something like 80,000 or 20,000 actions a day. Whatever an action is, I assume it's a tag lookup or an image pull, so I don't think we want to just get rid of it; it has some utility to the community. But it's got to get out of cluster add-ons; it's just got to get out of the k/k cluster add-ons.
A
Yeah, I'm okay with that; probably we should make a new repo for this. Okay, yeah, I can follow up with the repo folks and try to get one created. I'll take that as an AI. All right. There are...
C
Two, I mean. So the other one that this SIG owns is the fluentd-gcp add-on. I have never looked at that; I don't know what the state of it is, and I don't know if it gets used at all, but if we want to move it now, I'm happy to turn the crank twice. That's fine.
A
Okay, awesome. I created an AI for that and I will...
A
A little bit, yeah. Okay, I'll create the SIG repo for instrumentation add-ons; I'll try to do it by the next SIG meeting. metrics-server is the other one, but metrics-server has its own repo, though, right? I mean, it's still in cluster add-ons. Marek, do you know about this?
A
Okay, yeah, that sounds good. I will create the SIG repo for instrumentation add-ons. Awesome, thank you. We have one more thing, one more issue; it's not on here, but it's about the working group formation. So, Marek, do you have any news on that, or...
D
I'm still in the process of creation, and yeah, I need to figure out some things on my side to make sure that I have enough time to commit. I also discussed it with Tim Hockin; we were discussing our success criteria and what we want to focus on. I hope to have the process of working group creation started.
A
All right then: going once, going twice. Okay, then I will give everyone 13 minutes of your life back. Thanks, everyone, for joining; I will see you guys in two weeks.