From YouTube: Kubernetes SIG Node 20220719
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220719-170323_Recording_3390x1440
A
All right, welcome everyone to the July 19th Kubernetes SIG Node meeting. Going through today's agenda items, it's mostly looking for feedback on PRs, but add items to the agenda if you have any. First up was looking for reviews on the user namespaces PR.
B
So I think Rodrigo and Giuseppe can't attend, but I'm just relaying their message: they're looking for reviews. I'm reviewing it, and would appreciate it if someone from the containerd side reviews it as well — maybe Mike Brown or another containerd maintainer.
A
All right, that seems good. Next up was around the Windows CRI-only pod sandbox stats. Has anyone on the call been tracking this?
D
Yeah, so I added this item. I can give a description, or — David Porter has also been reviewing it a little bit as well.
D
So I'm just trying to add support for Windows to the CRI-only stats KEP. There was a field there for Windows pod sandbox stats. I initially just copied over the standard stats that we have for containers, but we were reviewing it and there's a bunch of fields missing that Windows can't fill out — RSS memory is an example of that.
D
Windows just doesn't have that metric because of the way memory is implemented, and so we were discussing on the PR whether or not we make some of the fields specific to Windows. I think I tried to summarize what the remaining questions are, but we'd like to get some more feedback from SIG Node on the direction that we're going here.
A
Yeah, I don't know if Peter's on the call, but one of the things I know I would find helpful is the original motivation: it was basically to stop depending on cAdvisor and let the CRI return the stats. Obviously, it was very skewed to what cAdvisor was giving us on Linux. I'm not aware of any update to the original KEP that proposed the stats that were pertinent to Windows — is my memory right on that, or are we trying to hash that out directly in the code?
A
Yeah, so I guess what I was wondering was: is there anything that was written down that covered the Windows side that maybe we hadn't seen? Or — David and Peter, since you both were driving this — would you want to treat Windows in line with this, or treat it as a separate evolution?
F
Yeah, I think the main idea is — we were discussing this on the PR — ideally, we would be able to reuse the same stats messages and such, because it would simplify the conversion code in the kubelet, where we wouldn't have to have a separate path to convert the stats for the summary API and so on.

F
So it would simplify things if we could reuse those structs. If that couldn't work and we needed Windows-specific stats, things could work too; we would just need a Windows-specific code path to convert the stats to the summary API and all the internal structures within the kubelet. That's how I see it. So if there is really no way at all to reuse the existing structs, and they don't make any sense on Windows, then I think we could do that.
D
Okay, so when I went into the CRI API for the new pod sandbox stats, there were two fields in there: there was a Linux pod sandbox stats, and it had a bunch of information in it, and then there was a Windows pod sandbox stats, and it was empty. So I went to add those stats to the Windows struct — and if we're going to reuse them, I think we need to rename from Linux to something else that's more generic. But I guess we can continue this conversation in the issue.
G
I think I had the same issue with the UpdateContainerResources API as well, where it was Linux-specific and I changed it to be more generic. Maybe we should look at all the places and then do it in one go.
E
And I do know, for at least the container resources structs in the CRI API, we have been diverging there, because we've needed to support more Windows-specific container setup too. So I think we need to maybe—
E
—take it case by case, whether we want a generic set of structures that are going to cover Windows and Linux or whether we want to diverge, and just make sure we're making the right decision for each one.
A
What I was struggling with was the in-flight work for Linux: there's an enhancement record, so I can look at a PR, I can look back in time and see how we felt about certain things. All I was trying to figure out was, rather than hashing this out directly in the implementation, did anyone see a benefit in just writing out a design doc on what is actually there on Windows? That would make it feel more planned versus ad hoc, which is what I view as happening right here.
A
Right, that seems like the right first step. Honestly, I feel like we were trying to make it so that when we make API changes we have a corresponding design record, and it helps a lot. So you can get that going now and go from there.
A
I just want to share something — a principle from the earlier Windows container discussions. Basically, for things like these stats that we export as a surface to the control plane, we want them to be as generic as possible, but there are certain things we want to leave to the local node to decide, because Windows containers naturally don't support cgroups, right — we tried to retrofit those concepts and abstractions onto the Windows node. I remember I discussed this kind of thing with Mark in detail in the past. So that's the principle we try to follow: we don't want everything consolidated into the same thing. Anything the scheduler and the control plane have to look at, we want to abstract; but certain things on the node side we may not want to. So I just wanted to share that here.
A
Right. So, on to the next topic — Brett, do you want to give an update on the SRO rename?
A
I think the email was good. Dawn, thanks for jumping in on that. I know Renault and I were able to catch up with Clinton as well, to make sure there wasn't really a scope change here or an expansion of scope. So I think at this point you're probably good to open the issue — in kubernetes org, or wherever it needs to be opened — to get the rename to happen.
A
If
you
can
just
probably
Slack
me
and
Don
directly,
so
we
don't
lose
it
in
a
notification
list.
That
would
be
good
because
I
think
you
do
need
a
plus
one
on
that.
When
that
happens,.
G
Yeah, I think the main thing we wanted to get clarity on was how UpdateContainerResources is handled by the runtime, and to give clear guidance in the KEP without being too specific — we don't want to micromanage it. But the main thing we want is that when the runtime receives UpdateContainerResources, it synchronously operates on it and then gives us a pass/fail result. That has been captured in the comments.
A
Okay, I thought in our—
G
Yeah, I think doing memory first — we are already doing that in how UpdateContainerResources is called by the kubelet, and I thought about adding that line as well. But I want to keep it a little bit more open for the runtime, because we can still control that from the kubelet. When you look at the update from the atomicity of a pod level, that's what we can control, and we can manage how the runtime does it.
G
As far as the runtime goes, it sees one request at a time, and if the request is amalgamated, we probably expect it to do memory first and then CPU. Once we're calling it and stating that we want a synchronous response, I'm already kind of putting that issue to rest, I believe.
A
Okay. So if anybody else wants to help take a final pass at that PR, that would be good. Just to level-set with you, I probably won't look at it until Thursday afternoon or Friday.

G
Oh, that's okay, yeah. I haven't made any major changes — there are no changes at all, really; mostly it's been rebasing whenever a few conflicts come up, and luckily there haven't been too many, one or two files here and there. But I'm kind of starting to get a little nervous that we're getting closer to the code freeze — I'm hoping that gets pushed out a little bit.
A
All right, well, I will try to get to it Thursday at the earliest. Thanks.
A
This week — so, the next item on the agenda was Jing with the local storage feature.
K
Great, okay, thank you. So this is a feature called local storage capacity isolation. It's been beta for a while and we do want to promote it to GA. This feature has a dependency on the kubelet being able to check the root filesystem usage where it is running, and we use cAdvisor to get the root FS information — how much disk is used, what the capacity is, and what is available.
K
However,
there
are
some
special,
like
a
system
in
certain
cases,
especially
related
to
kind
cluster,
rootless,
I,
think
Benjamin
Elder
bring
out
that
those
system
is
either
like
very
hard
or
almost
not
possible
to
detect
the
the
cracked
root
fast
system
usage
and
in
those
systems.
Currently,
since
this
is
a
beta
feature,
they
can
use
the
feature
gate
right
to
disable
this
feature
during
the
CI
testing.
K
But
when
we
move
to
GA
right,
we
need
to
remove
the
feature
gate
and
this
will
break
those
tests
and
since
I
think
this
is,
the
issue
might
not
easily
resolved
and
we
don't
want
to
block
GA
so
I'm
thinking
whether
we
can
add
an
option
in
kubernetes
side
right
so
to
basically
disable
or
enable
this.
K
So
that
is
represented
by
the
system
is
has
this
capability
and
if
we
call
the
feature,
we
call
this
the
option,
for
example,
enable
local
storage
capacity
isolation,
the
default
we
can
set
it
to
and
in
those
systems
that
they
don't
have
that
capability
right
to
get
the
correctly
root,
FS
information.
K
And
that
means
the
user
cannot
use
this
feature,
and
this
feature
how
use
interactive
features
by
setting
informal
storage
in
the
parts
back
similar
to
CPU
and
the
memory
request
and
limit,
and
we
can
check
whether
it
is
is
not
enabled
and
the
user
if
they
also
set,
requires
an
image.
The
part
creation
can
fail.
K
So
that's
kind
of
the
current
proposal
how
to
unblock
this
to
ga,
and
we
want
to
get
feedback
from
signal
side
right,
whether
it
is
something
we
can
proceed
by
adding
option
in
Google
set,
and
basically
it's
representing
whether
the
system
can
has
this
capability
of
supporting
this.
K
So, in certain kind environments — I think Ben mentioned rootless — they have to disable this feature for the CI testing. Otherwise the problem is, if it cannot get the root filesystem usage—
K
We
have
the
logic
the
kubernator
will
not
successfully
start.
It
I
think
I
post
a
link
here.
That
is
the
where
the
code
is
so
here
so.
K
During container manager start, we have the logic to check that we can get the root FS information from cAdvisor. If we cannot, it will return an error, and I think it will fail to start the container manager.
I
Actually, on this one, based on your suggestion: suppose some node has already been running for a while, and they want to install Kubernetes and containerd, and then they move to 1.25 — then the kubelet would just crash at startup sometimes?
K
It's possible, but it's a little bit hard to be really confident — like 100% sure — whether it is a temporary error where we cannot get the information, or whether the system just does not have this capability. I was also thinking that way earlier too, but I have this concern about whether we can be very confident that it is just a temporary error rather than the system not supporting this.
K
Yeah, so I think Benjamin will have more context about this, but he is out of office, and I can discuss it further with him. Also, I want to get some feedback from SIG Node about, if we can add the option, what the process is — can I just write the code to add the option, or is there some process for adding a kubelet option?
A
Well, I mean, the new option would probably end up in the kubelet config file, right? Or were you anticipating a different place to stick it?
K
Yeah, the kubelet config. I think before it was just a direct flag, and I see they recommend using the kubelet configuration file — or it can be a config option, whatever it is called.
K
I just feel that one of the reasons—

K
Yes, certainly, if we can automatically detect that the system does not have this capability and automatically disable it, I think that would work the best. Just in case there are some challenges in doing that, that's the option I'm thinking of adding to the kubelet. So I will discuss further with Ben whether we can reliably detect it; otherwise, yeah.
A
Yes — I was trying to think through the rootless and kind environments. I don't know which audience might be upset if I say this, but I didn't view those as production environments; I view them as test environments. And so, in that spirit, I was trying to think through: is there a production environment where I would benefit from not having this feature on?
A
Would those environments maybe have different filesystem layouts than what is needed by the feature today? And is that another thing we could use to detect whether the feature should be on or off — whether your host filesystem conforms to a proper layout, versus detecting whether we can read something else, I guess. So I could probably see cases where people would be okay with turning it off.
K
Yeah, I think I agree with all the feedback and suggestions. So here, like Ben mentioned, kind does this, and he also checked minikube. I never really tested those systems, and I also don't know whether any production environment is using those.
K
So it's not common, but there is at least testing on those environments.
H
Hey, I had one question. Let's say we disable this feature automatically on certain nodes — wouldn't that mean the ephemeral storage limit for the pod is essentially not going to get enforced?
A
It actually wouldn't get scheduled, I think, because the scheduler wouldn't schedule to that request.
A
Yeah, I just don't think scheduling would happen if a pod made a storage request. Basically, you would schedule pods that made no storage requests — just best-effort storage. But if you did make a request, then you probably wouldn't be scheduled, if my memory serves right. If you had a limit and no request, then yeah, I guess that probably would be scheduled and would do what you described.
K
Oh yeah, there are two aspects: one is the request, the other is the limit. For the request, the scheduler basically checks the capacity and whether the node has enough of the resources requested by the pod, and then does the scheduling. But the pod could still potentially use more resources than it requested, so we also have a limit. If you set a limit, that means the kubelet will check the usage and evict the pod if it exceeds the limit.
A
So you always have that risk window. Was that Deep who said that earlier? I was—
K
So yeah, that's still the current behavior for this feature. But I think there is another feature: if the filesystem has quota support, then it can use the quota, and I think that way it will make sure usage never exceeds the limit. That feature — I'm also checking right now — is Alpha, and I think it's trying to promote to Beta.
A
So I guess, like Dawn, I would prefer if we could detect whether it even would work, and then just — yeah.
A
Because we don't really give a high quality-of-service guarantee on it today. If we had to add a flag, I'd kind of do this similar to enabling debugging handlers or tracing handlers and some of those knobs. I don't think there's anything too onerous about getting that added to the kubelet config types.go, but—
K
Okay, so I think that's it for this feature. Thank you for the feedback. Just in case there are some challenges, I think I still need to talk about the option for adding it in the kubelet, and I think next week I will give a follow-up.
I
Basically, we cannot guarantee whether a cluster is a production one, so I really think that by default this feature should be on, with only some small cases — like minikube, and also kind clusters in their own CI — needing to disable it.
I
So this is why I don't want to overcomplicate the kubelet. If we could auto-detect it, I'm okay — we just say, here is where we cannot do it. And if we couldn't detect it, then add some knob — not in the kubelet itself, but somewhere — just for those exceptional cases, rather than making that the out-of-the-box behavior. We want this feature on out of the box, with only some config to disable it for those cases, instead of further complicating the kubelet.
K
Okay, so you are thinking maybe there's also some knob — not in the kubelet, but we can add some knob somewhere — just to disable this, and—
I
So we all understand the use cases are not real production clusters, yeah. We all agreed, I believe, early on — whether in Kubernetes or GKE, disk, without this feature, was actually the top-one issue, even more than quality of service. So that's kind of a big problem for us.
K
All right — okay, yes, I think I agree, and yeah, I will follow up with that. Thank you.
A
All right, well, thanks. Lastly, Peter: do you want to give more of an update on the CRI stats, beyond what we discussed earlier with the Windows topic?
L
Sorry — I actually missed the Windows topic; I had to be somewhere else — but thanks. So I wanted to check in about the work that needs to be done for the metrics cAdvisor endpoint. We had kind of reached a sort of consensus on the kubelet being a sort of reverse proxy for the CRI to emit the cAdvisor stats, and I wanted to check with everyone to make sure that that is the direction we want to go in.
L
I chatted with David Ashpole a bit, and he did have some concerns about the portability of the metrics: if containerd and CRI-O and any other CRI implementation that comes up is solely responsible for producing those metrics, then they could end up being inconsistent. So that was a concern of his, and I'm thinking—
L
—the alternative would be passing all of the relevant metrics up through the CRI via protobuf and then having the kubelet do the serialization for Prometheus. I'm leaning towards that having a performance concern, especially when migrating away from cAdvisor itself, so I'm leaning against it. But I just wanted to gain a broader consensus, talk about it, and see if anyone has any concerns about the approach.
F
I actually had some discussions offline with some folks in the containerd community about adding these metrics. The idea is basically to add a cAdvisor-analogous metrics endpoint directly on the runtime — directly on containerd or CRI-O — with the existing cAdvisor metrics, in the same schema and with the same labels, so that people can basically just use the same endpoint and continue to use their exact same Prometheus queries, dashboards, and so forth.
F
So
they're
a
little
bit
concerned
of
putting
this
into
like
core
container
D
from
the
containerdy
maintainers,
because
it's
it's
kind
of
very
specific
to
see
advisor
and
like
the
Kubler
format
and
actually
see
a
containerdy
already
has
a
Prometheus
endpoint
today.
So
this
is
kind
of
a
little
bit
duplicating
the
the
work
on
the
containerdy
side.
So
the
discussion.
F
—I had with the containerd maintainers is: maybe it makes sense as an external plug-in, so that it wouldn't — at least in the first version — be included in core containerd; it would be something that folks could install. The way I was thinking, at a very high level, how that could work is: it could be something that people install that talks to containerd to get those metrics, and then the kubelet could be configured to have its metrics—
F
The
advisor
endpoint
basically
redirected
somewhere,
and
that
could
be,
for
example,
the
cryo
Prometheus
endpoint
or
the
container
D
endpoint,
but
basically
I'm
saying
it.
It
won't
be
kind
of
an
OP
like
people
will
have
to
change
their
configuration
a
little
bit
to
install
this
or
configure
it
properly.
L
Yeah
and
and
from
what
I
was
thinking
it
sounds
like
you
know
that
direction
would
go
with
what
we
were
originally
thinking
of
having
the
cubic
beer
reverse
proxy.
Another
thing,
that's
worthy
of
note,
for
this
approach
is
that
there
may
still
be
a
slight
performance
concern.
L
I
talked
with
them
our
internal
monitoring
team,
and
we
were
given
advice
that
generally,
it's
not
super
idiomatic
to
have
the
metrics
be
scraped
through
a
redirect,
and
so
there
might
be
some
issue
there,
so
it
may
be
best
if
the
metric
server
was
pulling
directly
from
the
CRI
or
this
Plug-In
or
whatever,
and
the
cubelet
doing
the
relaying
would
only
be
for
some
user.
L
You
know
like
a
end
user,
who
was
using
the
cubelets,
the
the
Cuba
HTTP
API
directly
with
curl,
and
you
know,
relying
on
the
all
of
the
benefits
that
the
cube
API
provides
for
exposing
that
Port.
F
One thing that's worth calling out is, if it would be done directly in the runtime, as you suggest, versus the other options: some of the metrics today are things like disk-level stats that are actually used for pods — for example, for the storage isolation feature we just talked about earlier in the call. Today those metrics aren't traditionally tracked by the runtime.
F
So
the
runtime
does
not
fully
support
basically
right
now,
all
those
metrics
that
we
need.
So
if
it
would
be
something
that
we
would
add
on
top,
that
would
be
something
we
would
that
we
would
add,
because
it's
not
the
set
of
the
core
measures
that
we
supporting
the
CRI
right
now.
L
Yeah, so we basically have a couple of different directions we can go in here, and it seems like we are leaning towards the kubelet as a reverse proxy only when emulating the endpoint for the user, and then having the metrics server pull directly from the CRI or a plug-in, to reach the level of efficiency that we had with cAdvisor.
L
Yeah — I mean, as time goes on, it's possible there will be different ways that we want to branch. Some of the motivation for opening all of this and starting this whole process was pulling the stats from a VM, which cAdvisor isn't really capable of, or the way that CRI-O drops the infra container, which confuses cAdvisor as of now. So it's about allowing more customization within the CRI while trying to keep the same—
M
So, some of the work that we're doing within my team is looking at putting in basically a plug-in model — I'll put the RFC for that here — so that we can use a plugin to handle resources within the kubelet. My curiosity here is how that would affect the work we're doing there. Basically, the end goal for that is to take the existing resource managers — CPU manager, memory manager, topology manager, and device manager — and pull them out.
L
I think so — I don't know what level of integration you expect to have with cAdvisor, but from just briefly looking at it, it looks like there wouldn't be very much interaction with this change. What this change is doing is basically taking some of the things that cAdvisor is doing and moving them into the CRI, because the CRI has a better view of the information that cAdvisor is trying to collect — and it doesn't really have anything to do with actual management of these resources.
F
One thing that we should consider: I don't think there's too much duplication in terms of the core metrics, because those just come directly from libcontainer and are already pretty well standardized — libcontainer does the collection. But there's additional—
F
—stuff on top that is not collected today — things that don't come from the cgroups, like the total number of file descriptors or the load average of containers — that people expect in the Prometheus metrics and that we don't have implemented today.
C
That would enable these VM implementers to, you know, post up metrics that we describe as necessary for sandboxes.
I
Oh, sorry, I forgot to add this item. This topic has been discussed in SIG Node for a while, and we started with the proposal in a Google Doc; a few of us, including the approvers for SIG Node, reviewed that doc, and we recently converted it to a PR and it's merged, which is great. We have had many contributors in the past, and I wanted to capture the requirements for how to be promoted to reviewer and how to be promoted to maintainer in SIG Node, as the community actually does it, because we didn't publish that doc in the past. But we did some other things, like—
I
—identifying some sub-projects and promoting people to approver for those sub-projects. But now we finally have this written doc published. We have so many contributors — thank you, everyone, for your contributions. If you feel you are ready, please send us a PR against that written doc, and we will review it. I just wanted to raise it, because this directly affects the community.
I
That's all. Anything you want to add, Mrunal? And also — I think, yeah, the SIG chairs are the reviewers. Anything you want to add here? Kevin's not here.