From YouTube: Kubernetes SIG Node 20220215
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello, everyone, and welcome to today's edition of SIG Node. It is Tuesday, February 15, 2022. That's a lot of twos. Today we have a relatively short agenda in SIG Node, but I'd like to start by handing the floor to Sergey so he can talk about PRs.
B
Yeah, we have great progress on PRs, and I think most of it, kudos to Dims for cleaning things up. And Dims is going like, don't wait for a PR to rot, especially for PRs that must be rebased and haven't been looked at for a while, and we're merging a lot. So you can look at the statistics, it's quite good: we have 17 merged and 21 closed, which is very unusual.
B
So thank you, everybody, for doing reviews, and special thanks to Dims.
A
Yeah, Dims is rocking it. Thank you, Dims. Sergey, just to put some of these numbers in context for us: where does that 154 open PRs sit, sort of historically, for our open PRs? Is that better? Is that worse? I think it's better.
B
So, almost a minimal number historically; I don't remember it being much smaller than that.
A
Awesome, cool. Well, it's good to hear that. Let's see, next on the agenda, before we jump into things, I should probably remind folks about dates. We have a soft freeze coming up for SIG Node. So if you've checked the mailing list, you might see an email about this, talking about what our soft freeze is, to ensure that we can actually complete beta features on time and that we have enough time to review alpha features and GAs.
A
We ask that if you have a beta feature or a deprecation, it should be merged by this date, and if you have an alpha feature or a GA, you should have a work-in-progress PR open, preferably a reviewable PR if possible, so we'll have enough time to look at it. That date is coming up faster than you would expect.
A
Next
up
on
the
agenda,
we
have
rodriko
with
a
friendly
ping
for
the
username
space
cap
sounds
very
friendly.
Thank
you
anything!
You
want
to
cover.
C
Yes, I think that is basically it. I wanted to ping the reviewers, because none have reviewed yet, and ideally we're aiming for 1.25.
A
Awesome. Next up, we have Vinay's regular in-place pod vertical scaling update, but it looks like he won't be able to make the meeting, and it looks like Derek left comments saying that he has reviewed the first commit, left a number of comments, and is hoping to get through the rest today or tomorrow. Derek, do you have anything else to add for the recording?
E
Yeah, so I got through the first commit, which is where all the API types are, for folks who've been looking at it, and maybe for others who will see it in the recording.
E
The
the
pr
adds
changes
to
the
spec
fields
of
a
pod,
as
well
as
the
status
field
and
to
date
we
haven't
done
this
in
the
past,
since
this
new
machinery
is
added,
but
the
machinery
was
in
place
to
drop
fields
in
the
spec
when
the
feature
gate
is
disabled,
but
nothing
was
being
done
to
drop
it
in
status
so
just
to
call
out
in
the
future.
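The pattern Derek is describing could be sketched roughly like this; a minimal Go illustration with invented type and field names, not the real Kubernetes API machinery: when the feature gate is off, the gated fields must be cleared from both the spec and the status halves of the object.

```go
package main

import "fmt"

// Hypothetical gated fields, standing in for the real pod API types.
type PodSpec struct {
	ResizePolicy string // gated field in the spec
}

type PodStatus struct {
	ResourcesAllocated string // gated field in the status
}

type Pod struct {
	Spec   PodSpec
	Status PodStatus
}

// dropDisabledFields clears gated fields from spec AND status together,
// so a disabled feature cannot leak through either half of the object.
func dropDisabledFields(pod *Pod, featureEnabled bool) {
	if featureEnabled {
		return
	}
	pod.Spec.ResizePolicy = ""         // the part already handled in the PR
	pod.Status.ResourcesAllocated = "" // the missing half called out in review
}

func main() {
	p := Pod{
		Spec:   PodSpec{ResizePolicy: "RestartNotRequired"},
		Status: PodStatus{ResourcesAllocated: "cpu=2"},
	}
	dropDisabledFields(&p, false)
	fmt.Println(p.Spec.ResizePolicy == "" && p.Status.ResourcesAllocated == "")
}
```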
E
If
you
have
pod
api
changes
to
make
sure
that
we're
dropping
both
in
spec
and
status
when
the
feature
is
disabled,
so
vinay
you'll
see
those
comments
and
then
maybe
just
for
those
in
the
call.
Just
if
we
took
a
poll,
the
cap
was
very
open-ended
on
if
resize
policy
for
a
given
resource
should
be
mutable,
post
creation
of
the
pod-
and
I
didn't
actually
have
a
strong
feeling
on
why
that
can't
be
changed
or
not.
E
And
so,
if
folks
would
see
a
use
case
where
you
want
to
say
after
the
creation
of
a
pod,
no
it
actually
changed
my
mind.
You
can
change
the
cpu
field
dynamically
or
not.
Maybe
we
could
make
resize
policy
immutable
and
then
the
last
comment
was
in
the
existing
pr.
I
believe
quota
can
be
gamified
because
it's
not
taking
the
max
of
resources
allocated
versus
resources,
and
so,
if
the
cubic
can
never
relinquish
those
resources,
you'll
be
able
to
gamify
quota.
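Derek's quota concern can be made concrete with a small sketch; this is an illustrative Go snippet with invented names and plain millicore integers, not the real quota code: if quota only counts what a pod currently requests, a pod that was allocated more and never relinquished it is under-counted, so charging the max of the two closes the gap.

```go
package main

import "fmt"

// quotaCharge returns the amount a pod should be charged against quota:
// the max of what it requests and what the kubelet has actually allocated.
// Charging only the requested value would let a pod resized downward keep
// holding the larger allocation for free.
func quotaCharge(requestedMilli, allocatedMilli int64) int64 {
	if allocatedMilli > requestedMilli {
		return allocatedMilli
	}
	return requestedMilli
}

func main() {
	// Pod resized down to 500m, but the kubelet still holds the 2000m it
	// allocated: naive quota would count 500m; the max counts 2000m.
	fmt.Println(quotaCharge(500, 2000)) // prints 2000
}
```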
E
So
we'll
have
to
settle
that,
but
I
hope
to
get
to
the
cubic
changes
in
this
afternoon
or
tomorrow,
but
I
appreciate
venice
patients
so
anyway.
That's
all
I
have
if
folks
want
to
jump
in
on
those
topics,
feel
free.
A
Oh
there's,
two
more
so
before
we
go
into
marlo's.
Let's
go
to,
I
think
deep,
have
something
on
pod,
sandbox
condition
kept
next
steps.
F
Yeah
so
so
far,
I
addressed
the
initial
set
of
comments
from
you,
ilana
and
derek.
So
thanks
for
the
first
round
of
reviews,
I
clarified
some
of
the
use
cases
a
little
more
around.
F
You
know
the
things
that
this
can
surface
also
chatted
a
bit
with
sally
o'malley,
who
is,
I
think,
doing
a
lot
of
stuff
around
the
let
base
tracing-
and
you
know,
based
on
the
conversation,
just
kind
of
refine
the
use
case
a
little
more
to
highlight
like
how
the
pot
sandbox
conditions
will
surface
a
few
more
different
things
than
the
the
immediate
focus
of
the
tracing
kit.
F
So,
if
take
another
look,
that'd
be
great.
E
The use case is, the TL;DR though, that you don't feel the tracing stuff lets you satisfy your use case, and you think the conditions must be propagated? And then, did you think adjacent projects like kube-state-metrics needed to be updated in order to read this condition? How did you see this getting plumbed end-to-end?
F
Yeah,
so
the
overall
use
case
is
definitely
something
like
cube
state
metrics
that
reports
things
at
a
very
high
cardinality,
which
is
like
at
the
pod
level,
cardinality
and
basically
just
grabbing
it
from
the
pot
conditions
itself,
the
the
state
that
the
pod
is
in
and
surfacing
it
up
through
through
cube
state
metrics.
Basically,
it's
like
this
is
the
duration.
It
took
to
create
the
sandbox,
and
this
is
the
amount
of
duration
it
took
to.
You
know
complete
the
sandbox
creation
process.
Basically,
the
overall
scenario
is
pretty
much.
F
Is
that
right
now
the
pod
initialized
condition
it's
a.
It
has
different
meanings
depending
on
whether
they're
in
it
containers
present
in
the
pod
or
not,
and
so
when
init
containers
are
present,
it
actually
waits
for
all
the
way
till
the
end
containers
complete
before
it
says
that
you
know
the
initialized
condition
is
satisfied.
F
If
there
are
no
init
containers
present,
then,
as
soon
as
the
pod
is
scheduled,
pretty
much
as
soon
as
it
gets
started,
the
initial
initialize
condition
is
set,
so
there
were
reasons
to
do
it
that
way,
but
essentially
the
sandbox
conditions
kind
of
clarifies
that
condition
to
really
say
that
you
know
the
the
the
condition
of
the
pod
at
the
point
where
the
sandbox
is
initialized
is
satisfied
and
when
the
sandbox
is
actually
done,
setting
up
surface
basically,
both
aspects
of
it.
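The two-meaning behavior of the Initialized condition that Deep describes can be sketched as a small predicate; a simplified Go illustration, not the kubelet's actual logic: with init containers, Initialized waits for all of them to complete; without any, it is set as soon as the pod starts.

```go
package main

import "fmt"

// initializedConditionTrue mimics the described semantics of the pod
// Initialized condition (simplified): if the pod has init containers, the
// condition is true only once all of them complete; otherwise it is true
// as soon as the pod has started.
func initializedConditionTrue(initCompleted, initTotal int, podStarted bool) bool {
	if initTotal > 0 {
		return initCompleted == initTotal
	}
	return podStarted
}

func main() {
	fmt.Println(initializedConditionTrue(1, 2, true)) // false: init containers pending
	fmt.Println(initializedConditionTrue(0, 0, true)) // true: no init containers
}
```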
A
Yeah,
I
guess
I'll
I'll
add
to
what
derek
said
when
I
reviewed
this
sort
of
from
a
node
perspective
to
me
at
least
as
this
was
currently
envisioned,
the
adding
like
okay,
we've
got,
we've
started,
sandbox
creation
and
then
sandbox
creation
is
completed.
A
I,
it
just
seems
a
little
incomplete
right,
like
there's
this
entire
pod
life
cycle
and
we're
focusing
on
adding
conditions
for
just
this
one
step
of
it
and
there's
like
nothing
in
there
for
sandbox
tear
down,
which
is
maybe
relevant
and,
in
theory
like
something
like
tracing
will
like
cover
the
entire
life
cycle
now
granted.
You
know
tracing
might
not
give
us
a
deep
dive
on
every
single
pod
because
of
like
necessary
sampling
right.
A
It
can
be
very
high,
cardinality
data,
so
we're
not
going
to
necessarily
have
something
for
for
everything,
but
if
it
sort
of
depends
on
what
the
debugging
data
is
going
to
be
used
for,
but
just
the
way
that
you're
describing
some
of
these
use
cases
it
feels
like
you
know
there
might
be
some
great
scenarios
where
we're
going
to
use
this
data,
but
they
feel
a
little
bit
like
workaroundy
to
me
and
so,
like,
I
would
say,
you
know,
maybe
think
bigger
like
what
is
you
know.
A
The
actual
problem
we're
trying
to
solve
here
is
just
adding
these
two
conditions
like
is
this
going
to
be
a?
You
know,
give
a
mouse
a
cookie
situation
where
we
add
these
two
conditions,
but
then
we
want
to
add
this
one,
and
this
one
and
this
one
and
like
what's
our
overall
picture,
look
like
because
we're
definitely
not
capturing
that
full
life
cycle
with
this
cap.
I
think
and
the
other
question
is
you
know
if
that's
not
enough,
how
do
we
get
something
that
is
enough,
and
how
do
we
use
that?
A
So
I
would
just
encourage
us
to
like
make
sure
that
you
know
we're
thinking
through
all
of
those
alternatives
and
that
we're
like
convinced
that
you
know
yes,
this
definitely
is
the
best
option
you
know
like
here
is
the
like
80
20
use
case,
or
something
like
that
before
we
go
and
add
something
like
this.
F
Oh
sounds
good.
What
would
be
the
next
steps
here
like?
Would
you
like
me
to
clarify
or
like
refine
the
user
stories
a
little
more.
E
So I don't think we'll get this KEP implemented this release cycle, right? And, unfortunately, we didn't get consensus at the time, so we can keep iterating on the KEP.
E
I
agree
with
everything
alana
said,
and
so
my
recollection
on
your
use
case
was
you,
as
a
cluster
operator,
are
only
responsible
for
the
playground
in
which
a
container
can
launch
and
not
necessarily
responsible
for
the
success
of
that
container,
and
this
can
condition
was
going
to
let
you
distinguish
that
use
case.
E
I
guess
like
lana,
I
I
have
to
think
through.
If
that
alone
does
it,
but
I'm
not
sure
how
many
others
have
that
situation,
I
can
appreciate
being
a
container
distinction.
As
you
said,.
E
But yeah, even this discussion is helpful, I guess, to know that the tracing wasn't sufficient.
F
Right,
like
essentially
like
the
overall
use
case
we're
getting
at
is
like
we
are
in
a
situation
where
we
are
operating
like
a
multi-tenant
cluster
and
essentially
like
there's
the
ops
team
that
is
kind
of
responsible
for,
say
the
csi
plug-ins,
the
cni
plug-ins
that
are
that
are
going
into
the
platform
level,
and
then
there
are
specific
customers
who
are
launching
their
workloads
right.
F
So
essentially,
the
customers
want
the
ops
team
to
be
making
the
right
decisions,
and
you
know
in
in
terms
of
saying
that
hey
we
want
like
a
max
of
say,
30
second
slo
on
like
when
my
pod
gets
to
launch
after
it's
it's
scheduled,
and
so
this
would
be
mainly
geared
towards
you
know,
generating
slis
and
slos
around
how
long
the
sandbox
station
is
taking,
which
in
turn
is
dependent
on
the
csi.
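The SLI behind the 30-second launch SLO Deep mentions could be computed roughly like this; an illustrative Go sketch over hypothetical per-pod measurements, not any existing tooling: given each pod's latency from "scheduled" to "running", report the fraction that met the SLO.

```go
package main

import "fmt"

// sloCompliance returns the fraction of observed pod launch latencies
// (seconds from scheduled to running) that met the given SLO threshold.
func sloCompliance(latenciesSec []float64, sloSec float64) float64 {
	if len(latenciesSec) == 0 {
		return 1.0 // vacuously compliant with no observations
	}
	met := 0
	for _, l := range latenciesSec {
		if l <= sloSec {
			met++
		}
	}
	return float64(met) / float64(len(latenciesSec))
}

func main() {
	// Three of four hypothetical pods launched within the 30s SLO.
	fmt.Println(sloCompliance([]float64{5, 12, 45, 28}, 30)) // 0.75
}
```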
E
So
then,
maybe
we
could
talk
through
like
the
dependent
steps
in
the
pod
life
cycle
to
get
the
scenario
end
to
end
right.
So,
like
I
don't
know,
if
we
can
properly
capture,
did
the
image
get
pulled
yet
within
that
slo
time,
and
I'm
trying
to
think
of
other
requisite
steps
that
might
occur
in
cubic
recognition
to
pot
execution.
But
there
might
be
some
additional
conditions.
E
We
don't
have
that
for
you
to
meet
your
scenario,
you'd
want
to
identify
and
maybe
even
want
to
call
it
like
if,
if
the
resultant
pull
the
register,
you're
pulling
form
isn't
present.
What
what
happens
in
that
scenario?
For
you
as
well,
for
meeting
your
slo
so
yeah
either
way
we
can.
E
We
can
update
the
the
cap
a
little
bit,
but
if
the
overarching
theme
is,
I
want
to
provide
an
slo
on
time
to
pod
getting
bound
to
a
cube
with
the
time
to
pod
being
run.
And
then
I
want
to
report
on
that.
E
A
Is
the
sandbox
a
good
proxy
for
this,
like
that?
That
would
be.
My
other
question
is,
I
think,
like
we're,
trying
to
use
it
as
a
proxy
signal
as
an
sli,
but
I'm
not
actually
sure.
If
that's
enough
like
does
the
sandbox
fail
to
fully
set
up
if,
for
example,
like
a
pod
hangs
on
like
a
mount
or
something
like
that
because,
like
that,
that's
sort
of
a
classical
example
of
you
know
a
a
thing
that,
like
delays,
pod
start
latency.
F
Failing
like
how
would
these
conditions
look
like
in
those
specific
scenarios,
you
know
what,
if
the
mounts
are
delayed
versus
what,
if
they
fail
completely
and
same
with,
let's
say
introduction
of
like
a
cni
plug-in
where
the
ipam
is
taking
a
while,
as
well
as
a
specialized
cri
runtime
handler
like
say
if
the
operator
wants
to
introduce
micro
vms
in
their
cluster,
like
how
would
this
work
out?
F
Basically, so, yeah, right now the user stories are calling out all these scenarios, but, you know, happy to add, say, termination, if that is desired to be explored, and anything else.
A
Oh, I wanted to call out, because there was a question in the chat: when is kubelet tracing expected to be implemented? That's currently targeted for alpha in this release, 1.24.
E
Actually, Deep, one other question: there was a KEP that Mike Brown, I believe, went through, related to image pull authentication issues, and I guess in general to your overarching user story.
A
Awesome. Okay, let's move on to our final agenda item, from Marlow: a request for continued discussion regarding CPU management here.
H
Yeah,
so
we
still
have
the
old
dock,
which
I
guess
I
should
link
back
in
where
I've
asked
for
feedback.
Basically,
I'm
trying
to
figure
out
where
we
can
be
useful
in
general
in
the
community
regarding
updating
updating
documentation
to
handle
particular
use
cases
that
may
already
be
available
that
just
aren't
well
documented
and
then,
additionally,
to
figure
out
what
next
steps
are.
One
thing
that
has
broken
out
is
a
a
desire
for
a
plug-in
model,
whether
that's
with
at
the
cpu
layer
or
at
the
topology
layer.
There's
some
negotiate.
H
You
know
some
discussion
going
on
as
far
as
resource
management
so
that
people
can
do
handle
their
particular
use
cases
instead
of
continuing
to
complicate
the
kubelet,
but
that's
a
full
discussion,
I'm
not
saying
we
go
any
particular
direction
or
even
that
we
do
it,
but
that
discussion,
I
think,
is
useful
and
then
from
that,
maybe
we
can
pull
out
something
that
is
useful,
whether
it's
long
term
or
short
term
that
won't
break
the
current
functionality
and
kubelet
or
complicate
it
any
further.
A
Awesome,
do
we
want
to
get
any
feedback
now
or
have
you
sent
this
ping
to
the
mailing
list
as
well?
That
might
be
handy.
H
I
can
send
it
yeah.
I
should
send
it
to
the
mailing
last
year
right.
We
do
have
some
feedback
also
from
people
in
singapore,
on
the
other
time
zones
that
don't
like
this
meeting,
because
it's
two
o'clock
in
the
morning
for
them,
so
I
mailing
list
is
probably
a
good
call.
Thank
you
for
that.
A
Yeah
no
problem:
okay,
we
are
out
of
agenda
items.
Is
there
any
last
minute
business
or
shall
we
adjourn
for
today
we
can
get
half
an
hour
back.