From YouTube: Kubernetes SIG Node 20220816
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
B
Sure, yeah, I can start. Yes, I wanted to start by talking a little bit about the CRI stats KEP, and I have some questions there. So for background, I have an intern, Daniel, who's starting to work on this. He's going to be spending his summer working on this KEP. Daniel, maybe you can just introduce yourself so people know who you are.
C
Sure, hey, I'm Daniel, Dave's intern right now. I'm working on this KEP and on node runtime; I'm based out of Madison, Wisconsin, a rising senior in college. Yeah, enjoying working on this so far.
B
Awesome, thanks. Yeah, so I guess one of the things we wanted to bring up came up when we were looking through the KEP.
B
The current state of the KEP right now is that we have the current CRI stats supporting the summary API, but we are a little bit blocked, because we have the rest of the cAdvisor metrics in the Prometheus format, and that's not currently supported by CRI, so we need to find some alternative source for those metrics. That's the problem we're trying to solve right now in the KEP.
B
The current proposal we have is that we will collect those metrics via the CRI, and the CRI will expose a Prometheus endpoint; that's the current plan in the KEP. The way it works right now is that users who consume these metrics call the kubelet HTTP server to get them. So one idea we had was to proxy that endpoint to the CRI.
B
So basically, you would call the kubelet HTTP API, and it would redirect you to the CRI Prometheus endpoint. That's the first idea we had. I've been thinking about this a little bit more, and after chatting with Peter and Daniel a bit, I now have some concerns about that idea, in the sense that, especially on the containerd side, containerd already has a Prometheus endpoint, so it probably doesn't make sense to add another Prometheus endpoint.
B
So one of the things I've been thinking to propose instead is to add the metrics that are currently missing in the CRI, so add those new metric fields in the CRI, and then have the kubelet directly convert those metrics to Prometheus format. The benefit there is that this would be done once by the kubelet instead of by both of the container runtimes.
B
We're talking about the Prometheus metrics, yeah; the container stats, yes, but specifically the container stats that are present in Prometheus format in cAdvisor but not in the summary API. So there's this small subset of stats, the more advanced cgroup stats and such, that we're talking about here. Yeah, sure.
D
Yeah, and thanks, David, that was good background. A piece of my perspective: a while ago we had talked about doing it directly from the CRI to avoid the performance hit of sanitizing and passing that data through protobuf, and something David brought up was that there aren't that many more stats; it's like 10 to 12.
D
You know, uint64s, so the overhead might not be as much as we originally worried about. But that's currently an unknown with this approach, and theoretically there could be some reduction in performance, but it shouldn't be huge.
D
So that was some of the context which prompted us to coalesce around the idea of having the CRI directly emit it. I'm open to either direction. I think standardizing the metrics would help in actually functionally testing it, and it would make clear the responsibility of the CRI in needing to collect them, but not necessarily to emit them in whatever way it sees fit, which could standardize across CRI implementations.
D
If I were alone in making this decision, I probably still would have the CRI do it, just to avoid that unnecessary hop. But I understand the motivation, and I'm willing to find a way that we can all agree on and move forward. So I'm open and eager for discussion and for coalescing on an idea so that we can move forward with this.
A
So, David, in your proposal earlier you mentioned that we will add those missing stats, right; that's the first thing. The second thing: it sounds like there are two solutions. One is to add the CRI stats Prometheus endpoint, but the problem is that containerd already has a Prometheus endpoint, so you are concerned about introducing another Prometheus endpoint. So the other option you proposed, the kubelet one, is the part I didn't get very clearly.
B
So one alternative is that the same format would be provided directly by the CRI: containerd, for example, or CRI-O would serve that endpoint. That's one alternative, but then we need to figure out how to proxy that HTTP endpoint to the container runtime, so that users can directly use the same endpoint. So that's one solution. The other solution is that instead of /metrics/cadvisor today being served by cAdvisor, it's served from metrics from the container runtime that we convert internally in the kubelet. So from the user perspective, they continue calling the same endpoint, but instead of those metrics being served from cAdvisor, they're actually being served from the container runtime, and the kubelet converts those metrics from protobuf to Prometheus format. Does that make sense?
B
Yeah, so today, I think the /metrics/cadvisor endpoint actually doesn't have any machine info; it's all container info. For machine info, there's the summary API, which today we already have served from cAdvisor and CRI, and that's in the JSON format. So we have that today.
D
It is possible that there's some machine info in cAdvisor; I would double-check that, I was under the impression there was. Regardless, whatever formats cAdvisor is currently serving the node-level metrics in, it'll continue to do so. So if there are still machine metrics in the Prometheus format coming from cAdvisor, we would maintain that; we have no intention of dropping it.
D
We would just stop cAdvisor from collecting any container or pod metrics, so those would no longer be emitted, because it would not have them.
E
Yes, exactly. My point is that right now the kubelet, with cAdvisor as one of the components inside of it, is something that assumes it sees the whole picture of the machine, and that's not necessarily true. It might be limited to some set of CPUs instead of the whole system; runtimes might be restricted to some resources, and so on. So the picture which cAdvisor presents inside the kubelet might not reflect the actual machine.
E
So if we are doing proper container metrics, then let's also report the overall state of the runtime at that endpoint.
D
Those machine metrics will have to be collected from somewhere, and the whole process of this stats movement has been under the assumption that we're only moving the pod and container metrics. I think it's valid to reconsider cAdvisor's position in collecting the machine metrics for Kubernetes in general, but I think we should address that separately and work through just the pod and container metrics, which the CRI is more of a direct fit for fulfilling.
D
Sorry, just another thing to mention: the metrics that are in cAdvisor now we kind of have to consider a stable API, even though they were never really documented as such, because there are people relying on them. So we need to basically have metric parity in Prometheus format somehow. We can't just go ahead and drop those metrics from Prometheus, because we need to consider them a stable API.
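That parity requirement could eventually be checked mechanically. As a hedged sketch (the metric names and exposition snippets below are invented examples, not the real cAdvisor set), one could diff the metric families exposed by the cAdvisor path against those exposed by a runtime-backed path:

```go
package main

import (
	"bufio"
	"fmt"
	"sort"
	"strings"
)

// metricNames extracts the set of metric family names from Prometheus
// text exposition output, ignoring comment lines and labels.
func metricNames(exposition string) map[string]bool {
	names := map[string]bool{}
	sc := bufio.NewScanner(strings.NewReader(exposition))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		// A sample line looks like: name{labels} value
		end := strings.IndexAny(line, "{ ")
		if end == -1 {
			end = len(line)
		}
		names[line[:end]] = true
	}
	return names
}

// missing returns the metric names present in want but absent from got.
func missing(want, got string) []string {
	var out []string
	gotNames := metricNames(got)
	for name := range metricNames(want) {
		if !gotNames[name] {
			out = append(out, name)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	cadvisor := "container_cpu_usage_seconds_total{pod=\"web-0\"} 12.5\n" +
		"container_memory_working_set_bytes{pod=\"web-0\"} 1048576\n"
	runtime := "container_cpu_usage_seconds_total{pod=\"web-0\"} 12.5\n"
	// Metrics the runtime-backed path would still lack.
	fmt.Println(missing(cadvisor, runtime))
}
```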
F
I think my worry continues to be what the overhead is when we are talking about 250, maybe 500 pods on a node, and we're gathering the data over the CRI.
B
I think that makes sense, yeah. I think it's something we need to measure. With this whole effort, there is overhead, because before, cAdvisor is in-process collecting all these stats, and now this is done by an external daemon, and it has to go over gRPC. So whatever we do, whether we do the Prometheus endpoint here or there, regardless, the metrics will have to be sent over gRPC.
F
I think if we step back, we were trying to solve this for something like Kata, right? So maybe it makes sense to have an interface that collects these metrics, and if we find out that the overhead over the CRI is too high, then we use cAdvisor for collecting native container metrics but still allow a clean way to get them from the CRI, only when needed. My worry is that we don't want to regress the performance of gathering metrics for native containers just to handle special cases.
A
There's another reason we started this effort. Another reason is that cAdvisor is being used widely beyond Kubernetes, so it has become a bottleneck, because we cannot grow cAdvisor: every time we grow cAdvisor, even though we did a lot of refactoring in the past, we have to add that dependency, and every time we do this, it's painful for us.
A
A
So now, as we start to think about the CRI stats and the containerd and CRI-O work, we think: okay, maybe we can rely on the CRI to do those stats, the container stats, and for the machine and node stats we could refactor that library out into a common library. That would be the only thing Kubernetes links against, just the per-node Prometheus metrics; the rest stops, because cAdvisor has grown too big today.
A
A lot of people want to continue development in cAdvisor, but we kind of have to slow down, because we really worry about the stability of Kubernetes. So that's the original problem, but the progress was so slow; once we had the Kata use cases and the CRI work got prioritized, we revisited this effort. I just want to share the context.
B
Yeah, the dependency issue is definitely huge. For example, for containerd, recently we had to completely remove the Go module and include the source directly because of some vendoring issues. So there are constantly dependency-related issues there; that's definitely one big source of pain that we hope to fix with this effort.
B
But yeah, regardless, I agree that performance is definitely something we need to consider, and I don't think we'll have a good answer until we have some type of prototype that we can actually measure and play around with. So maybe we need to proceed, try it out, measure some things, and see how it works.
D
Yeah, I guess when David reached out to me and talked about re-proposing having it emitted over the CRI, I had an idea, because we have Daniel here, who is ready and waiting to do some of this implementation.
D
So my thought is: maybe we go forward with a proof of concept emitting the stats through the kubelet via the CRI, so the CRI passes the metrics up through the CRI protocol and then the kubelet emits them, and then we see whether we're regressing at all in performance. Maybe the protobuf serialization is not going to be as much of an issue as we're worried about, and there are just bigger fish to fry, and the performance cost is something like one percent or even basically undetectable.
D
Then we don't really have to worry. If there is a cost, then at least we have that data; we know. I bet a lot of the work that Daniel would have done on the containerd side to get those metrics in the first place would be useful in some future iteration, and we'll have some standard way of testing. We could even possibly have a standard way of testing in Kubernetes that the metrics we want to be present are present, which could be useful in the future as well.
B
Yeah, I think that makes sense. Without it actually working, we won't be able to see the performance; we need to try it out, I guess. Cool, thank you. So yeah, in terms of next steps: maybe what we'll do is not actually make the CRI changes directly, but have a fork with the proposed CRI changes and maybe some type of PoC in containerd.
B
That would be the goal, and we can try to measure the performance there, and depending on that, we can come back to SIG Node and report our progress.
D
Timing-wise, we're in a good position to do that too, because 1.25 is being finalized, and the KEP freeze for 1.26 isn't for, I don't know how long, a while from now. So we have a little bit of time to flesh it out, try it out, and see if it works; then we can pivot for the 1.26 cycle if we decide there's no way we can make the performance reasonable.
A
Okay, thank you, yeah. Okay, next one is Dikshita. Do you want to host, do you want to share anything?
G
No, I think I'm good; I have the link for the PR in the notes. So I just want to introduce myself: hi, I'm Dikshita. I work with David, Wengen, and Riven here on the GKE node group, and I have my first PR, for which I am looking for a reviewer. This PR is essentially for the KEP to promote the external credential provider from beta to GA, and that required only adding a new e2e test that uses the GCP credential provider to validate the external credential provider.
G
So I'm looking for a reviewer; I already have David and Riven on it, but I'm looking for another reviewer. This PR essentially just adds a new environment variable, and if that environment variable is set, it will install the GCP credential provider and pass the path to the binary on the node, as well as the path to the config to configure this binary. That is all. Once this PR has been reviewed, I'll be able to add the configuration in the test-infra to run my e2e test.
F
Once 1.26 opens; since it's a graduation, it looks straightforward enough, yeah.
A
So for the next one: I know he really cannot be here, and he basically just asked for review. I already reviewed his KEP and approved it. He sent out the API change; we discussed the separate API change, and the second PR hasn't been sent out yet. So maybe you have more updates on that one, since you reviewed his code.
D
Hey, I'm back. So this is part process question and part a plea for review. Basically, it's not clear to me what the best way is to get reviews from the general community when a PR is not explicitly part of a KEP. This is a bug that CRI-O encounters. It's not a very high-priority one, but it's kind of a paper cut for users; it's just a lot of annoying logs.
D
It's existed for a couple of months now, and I've tried various methods: I've tried poking people on GitHub and trying to get reviewers, and I feel I probably haven't gone about it the right way. I just want to know what the right way is, what a better way for me to have gone about it would be, so that we could have had feedback a little bit sooner, obviously.
D
Now it's the 1.25 freeze and it isn't something that's critical, so I doubt it would make it in for that, but I'd like to make it in for 1.26 and possibly backport it. So I just want to know for the future: all things considered, what is the best way to get these kinds of reviews?
F
We have too many things coming in and not enough people looking at them, so the only way will be to grow more reviewers, and as people come asking for reviews, trade with others who are also looking for reviews: I have a PR, so maybe I'll review your PR and you review mine, and we both grow as reviewers.
D
Cool, thanks, yeah, that helps. I will do those things in the future.
A
Thanks for bringing up this issue. Yeah, that's been a known pain for many quarters, so we need to grow more reviewers. So that's all for today.