From YouTube: Kubernetes SIG Node 20210504
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hello everyone, and welcome to today's edition of SIG Node. It is Tuesday, May 4th, 2021. Let's start with PR review. Sergey, what have you got?
B: Yeah, if you've been looking at what's happening with PRs, we don't have too many. We had a lot of cherry picks actually merged this past week, and beyond that there are not too many PRs being merged, so we're up to date. I think we're at minus nine from last week, so maybe going forward we can start burning this number down.
A: Yeah, I think I saw a lot of things get approved and merged in my GitHub notifications, so thanks, Renaud, for reviewing. The cherry pick deadline is this Friday, I think, and this is the last release of 1.18. So if you want something backported into 1.18, it's your last chance. I'll probably go and do a review starting tomorrow or Thursday, depending on KubeCon-effective jet lag.
A: Do we want to jump into what's going on with KEPs? I think I put four on the spreadsheet already, but we probably should go through the rest of them. I added to-dos and whatnot. Renaud, do you want to lead that? Share your screen.
C: Great. Can you see my screen?
A: Oh, I think the four that I added were the first ones, because that's one of those perma-beta things that got missed last release. Actually, let me see if I can dig up the link for you. Basically, we have to put everything into the spreadsheet for 1.22 this time. Let me dig it up.
A: So the ones from node that are already in there are huge page storage medium size support, sized memory-backed volumes, configurable grace period for probes, and swap support. We've got a lot of other stuff that I think we want to make sure goes in here. Some of it's not targeted for this release, so we can skip that. I can go ahead and add them all to the spreadsheet and set the milestones after this; I just want to make sure we're all aligned on what's going in, because almost everything here that I don't have a link for in the to-do column has a "we've got to update the KEP before the deadline next week" or whatever, and doesn't have a PR open yet.
C: Yeah, I think a lot of things still need to be done. I guess we can just tag the folks that are the primary here as a reminder and see where we get. At least we have this in a sorted order.
C: Yeah, we can do that, but given the number of items that have no PRs open, I'm not sure it's worth doing today. Maybe today we just remind everyone that, hey, the deadline is approaching, and then next week we make a pass at it. That might be more productive, yeah.
A: That might be the case. My only worry with that is that next week will be like two days out from the deadline, including that everything must be implementable, PRR has to be done, that kind of thing.
A: So for every KEP there are a few things that need to be done. The metadata for the KEP has to be up to date to target the current release, which means that, as part of that, CI will also make you ensure that you do the production readiness review if you're targeting a new milestone on the PR.
A: We have to set the milestone on the issue so that the release team can track the KEP, and they also have to be in the spreadsheet. The release team will ask to ensure that the production readiness review has been done, that there are specific goals for the milestone, and that there are test plans in place for each thing. So, basically, I would expect every single one of these KEPs is going to need at least one PR in order to update the milestone and whatnot.
C: Yeah, so anything ready here is green; the rest is red and yellow. If we have folks on the call, we can quickly do a check on whether things are on track, and at this point I think we can mark those yellow if someone says yes, they are working on it or are going to work on it, and then take it from there. Does that sound good?
C: Kubelet credential providers. Dims, are you on the call?
C: Okay, all right, so the next one. Sergey, dynamics?
F: Yeah, so for that one, I think we're pretty much good to go. The KEP was approved for 1.21 for an alpha implementation. The implementation PR slipped, but it's been open, and there were a couple of small comments that I believe have been driven to resolution in some Slack conversations, so hopefully that'll get merged any time soon.
A: Yeah, just make sure that you update the KEP to target 1.22 in the metadata, or else the release team will ask us what is going on.
F: Yeah, I have a PR open for that, which I put a hold on while waiting for the outcomes of some of these other discussions. Okay, great, so I'll do that today. Thank you.
C: All right, so the next one is CRI graduation, and one of the items here was adding a call to return the list of images that the kubelet ignores for GC. Peter had a comment on whether we want a separate KEP; I'm leaning towards yes. Thoughts, Derek?
C: All right, okay. Peter, you got it? I'm not sure if Peter is on the call, but I'll talk with Peter and make sure we do that. So I'm going to mark this one as yellow, but I'll work with Peter to get it to green this week.
A: Yeah, I just took a look at it when I was looking through the agenda, and PRR needs to be done. I saw there was a PR up for PRR, but there's no approver set, so I just put a comment on that.
D: Yeah, I'll look through the comment. I looked at it just now, before starting the meeting. I'll update that, and I'll also update the PR. There are two PRs here: one is PR #1883 that was merged for the KEP, which is the main KEP, and there is a secondary KEP which has got the PR.
D: So these two KEPs essentially define this feature; both of them go in or not. We're just wondering if there are reviewers available for this milestone.
D: As far as the state of the code is concerned, I had an implementation for this before, and then we changed directions on the design, so I'm making the API changes and porting most of the implementation that I already have. We just need to implement the node-local store for handling the case where the node restarts; then we need a source of truth.
A: Okay, yeah, so I'm a little confused about the second KEP. I know that there's been the one KEP for the in-place VPA that's been approved and just needs PRR. The separate one, I think, is entirely separate, right? So we should be tracking it separately, right?
D: The KEP is not on that, but let me just update the document with the KEP link here, because that would make it very clear.
D: Okay, so that is the KEP; that's the main core implementation. This was the one which went through design review with Tim Hockin, and we made some changes to the way we were going to change the API, essentially, and now I just have to port the old implementation to this new one. So this is what I want to see if we can track for the 1.22 milestone, and the CRI change goes along with it.
D: It is either both or neither, so that's not a big deal; the CRI changes are pretty small.
D: If we have a reviewer, we can make it. Great. Updating the PRR is... sorry, what were you saying?
D: Okay, then no, I haven't. Can you please share their email with me so that I can reach out to them? I know there is one other engineer from ByteDance who was looking to help as well. I'll do the core implementation, but I would need some help, because I got some additional responsibilities in a new role at work, so I cannot be fully on this one.
H: Yeah. This is Justin, from finance; I'm pretty interested in this feature, and I would like to give any help I can. Yeah, thanks.
D: Yeah, I think we have enough people. There is one more person from IBM who's also interested in this feature. There are a lot of people waiting for it, so I just want to get it done now; it's been a while.
D: So it sounds like we won't have trouble finding reviewers, and even people who can help with implementation, unit tests, and stuff, so we can call it green. I'll take care of the housekeeping; the PRR section is mostly housekeeping and doesn't require detailed design review, so I'll get that done today, and I'll set the PRR for both KEPs, the in-place API KEP and the CRI KEP. There are two KEPs enrolled in here.
D: Okay, and we can probably review this PRR section offline and then get it merged, so we can call it ready for 1.22.
A: And just for anybody, because I got a question: PRR stands for production readiness review, and it is an additional review that needs to be done on every KEP by a team separate from the node reviewers.
D: Separate from the node reviewers, okay. So which team would this be?
A: They have a channel in Slack, #prod-readiness. Basically, there's a kep.yaml file, and if you update the latest stage and the latest milestone in there, it'll automatically tell you to assign someone from that team, and it'll fail CI until you add the right file.
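For anyone new to the process, the kep.yaml metadata being described looks roughly like this. This is a sketch from memory of the kubernetes/enhancements template; the title, KEP number, and versions are placeholders:

```yaml
# Illustrative kep.yaml fragment (values are made up; field names follow
# the kubernetes/enhancements template as recalled, not verified output).
title: Example SIG Node Enhancement
kep-number: 0000
owning-sig: sig-node
status: implementable
stage: alpha               # the stage being targeted this cycle
latest-milestone: "v1.22"  # bumping this is what triggers the PRR check
milestone:
  alpha: "v1.22"
```

Bumping `latest-milestone` is the change that, per the discussion above, makes CI require a production readiness approver file before the PR can merge.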
D: Okay, and does the review involve a detailed knowledge of the design, or is it just a...
C: Right, so on the user namespaces PR, I see a couple more rounds of back and forth with Tim Hockin and Michael, but we still haven't identified anyone to take it, so I'm going to mark it red till we get more clarity. Dawn was going to get back and see if there are people from Google who could drive it. So the next one is liveness probe timeout.
A: Assuming that the KEP update gets merged, I think this one should be green. It just needs to be reviewed, but it's pretty straightforward; there aren't a lot of changes in the design or anything like that. It's just kind of the next phase.
F: Yeah, I can talk about that. I believe the current state is that there were a couple of questions from Tim Hockin about usability that needed to be resolved. They've reached a consensus in the PR comments; the KEP author is going to update those within the next couple of days and get that up, and hopefully we'll get the KEP approved for alpha.
A: Mark, are you submitting all of the SIG Windows KEPs under SIG Windows? Because there's a couple here. Basically, does node need to submit them to the spreadsheet, or have you already done that?
A: So that's what I thought, because they were all marked just SIG Windows in the repo. So yeah, good, I think we're good.
C: Okay, so the next one: node graceful shutdown. David and I met yesterday and we have a rough plan; hopefully we'll have something by the end of the week. I'm going to keep this yellow.
C: Okay, all right. So we do plan to add another feature flag, and...
C: So there is one, okay. I can probably poke Derek on this later; I don't want to force him to talk. The plan was that we'll probably introduce a new feature flag, and I don't know how that will work.
C: Okay, I'll keep this yellow and sync with them, yep.
C
So
c
groups,
v2
alpha
so
what's
happening
on
the
testing
side
is
like
giuseppe,
has
a
pull
request
that
is
passing
all
the
tests
and,
like
the
run
c
release
that
includes
all
the
fixes
is
also
like
coming
out
like
any
day.
It's
just
waiting
for
one
more
approval,
so
I
think
this
is
probably
a
green.
C: The KEP was already merged, but that was pre-PRR and stuff.
C: So, memory QoS for cgroups v2. David, are you on the call?
C: I know there was some more back and forth on it. Do you feel like we've reached an agreement on the direction there, or does it probably need more discussion?
K: There are a few questions left unanswered that I think need a little bit more input, so I don't think it's 100% fully closed.
L: Okay, okay, understood. I will check and update this document, I guess. Yeah.
A: Swap, I'd say, is probably yellow. I think it's close; we're kind of hammering out some of the API updates, but there have been no world-ending objections right now, so I'm optimistic. Okay.
C: Great, so yellow then. And I think the next one is Francesco again. Francesco, did you get to make progress on this one?
L: Okay, first of all, summary: yellow. We are definitely making progress. Kevin is giving comments on the PR for the KEP; we got a few comments, but nothing really big in either direction. So I will need to ask for PRR review, even though I'm not really sure what it will look like. I mean, we are adding a new option, so we'll see. I will ask for a reviewer. Let's see.
C: All right, thanks. So the next one is Mike Brown. Hey Mike, do you know the KEP status on this one?
J: All I really need to do is rebase. There was one small fix, a change that somebody wanted, and I could do some documentation as well, though.
J: The last time we talked about doing this, we weren't talking about doing it with, you know, a switch or anything; it was more of a "this is a bug and we're going to fix it and document it."
J: Let's review it. It should not, but I think I could use a little bit more, you know, help on the reviews and stuff.
C: So, the next one. There's a KEP; I'm just going to open it.
C: I know we discussed it with Kihiro last week. I'm going to keep it yellow. Derek, let me know if you have anything to add.
K: Yeah, for this one, Peter and I connected about it. I think the latest KEP is fully updated with what we discussed; we had kind of an internal sync. So yeah, I think the KEP is fully up to date, but it still needs a few more reviews, probably, so yellow for now. Reviews from Dawn, Derek, that type of thing, I'm guessing.
C: Seccomp by default. So, Tim Allclair pinged me about this today in relation to the PSP replacement work, so I'll work with Sasha to get it updated, and probably see how we can intersect with the PSP work. I just left a comment on that KEP that I think the changes for alpha would be very minimal and very low risk, so I'd like to push to try and get that into 1.22, and punt as many of the more complicated details around rollout to the beta discussion as possible.
C: Okay, awesome. Yeah, so we'll update the KEP, and I'll change it to green once we have an update there. Thanks.
A: This is net new, so it hasn't been previously discussed, and I haven't looked over the PR. But I'd suspect that, because this is a relatively large new thing, probably anything from here down is going to be at best yellow, probably red.
A: I think this one might be about changing timeouts, because there was this big PR from this person about being able to set a new grace period and whatnot, so I think that might be part of it. But the KEP is not really done.
A: Because PRR is not done, it's not fleshed out, it involves an API change and big kubelet behavior changes, and it hasn't been previously discussed.
C: Oh yeah, right, I got some data for Sergey, but we need more data. Local storage capacity isolation, fs retirement.
G: Actually, on that prior question, if it's okay: Tim, since you're joining us today, do you have any usage of Pod Overhead in the environments you're looking at that you could give feedback on?
A: That one, I think, has been languishing in perma-alpha, and I think Paco has volunteered to graduate it to beta, but there's no PR yet.
A: Tim added one more KEP to look at, which we hadn't previously been tracking; I added it to the bottom. Tim, can you give us the quick summary of this one?
I: ...and it is technically a breaking change, but it should be safe with the previous changes that went in. There are more details on the KEP itself.
I: No, the deprecation plan was already approved on the KEP; I think it was approved around 1.18 or something like that, and so this is just following up with the 1.22 actions. So yeah, I'm not sure if this is tracking all of the work that's happening; if this is just for the KEP, then you can probably ignore it.
A: Yeah, I'm not sure. I'll ask the release team what we're supposed to do in terms of multi-release deprecations. For example, I don't think we're making any changes for dockershim right now, so I don't think we need to do anything for that; but for this one, since there are changes that we are making, I'll follow up with them.
G: Yeah, so Tim, if I recall, we started this a while ago, so there were at least release notes in prior releases where we described what was happening. Are you wanting to act as the shepherd for this?
I: Yeah, I can definitely help out with this. It's a minimal change; I think it's literally just changing the default value on that feature.
A: Okay, awesome. I had one quick thing which I think keeps falling off, which is the developer guide audit request. We have a bunch of documentation in the community repo, and I believe SIG Contribex has asked us to look it over and make sure that it's all accurate. So if you're looking for a way to get involved with SIG Node, it would be super helpful to review that documentation, check that it's still accurate, and update it if it's not.
M: Hey everyone, hopefully this will be quick. A little backstory: I've been debugging an issue that relates to the introduction of etcd static pod storage requirements for kubeadm, so when you're doing a kubeadm init or kubeadm join since 1.20.
M: Your etcd spec includes storage requirements, which are sort of sensible; they're very small, sort of dummy storage requirements, 100 megs, which is actually practical for running etcd but doesn't really match what you would expect to be on a storage volume of a node. But it's a start, just being able to declare to the kubelet at runtime that there's a pod running that has critical storage requirements. What I'm seeing is that in some environments, sometimes in Azure, there's a thread...
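The kind of declaration being described (a static pod manifest advertising a small ephemeral-storage requirement to the kubelet) looks roughly like this. This is a sketch, not kubeadm's exact generated output; the image tag and value are illustrative:

```yaml
# Sketch of an etcd static pod manifest with an ephemeral-storage request,
# roughly what kubeadm started generating in 1.20 per the discussion above.
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.4.13-0   # tag is illustrative
    resources:
      requests:
        ephemeral-storage: 100Mi   # the ~100 megs mentioned above
```

At pod admission, the kubelet compares requests like this against the node's allocatable ephemeral storage, which is where the zero-allocatable symptom below bites.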
M: ...that's linked in the meeting notes. In other environments as well, folks are noticing that the ephemeral storage requirements are not being fulfilled, because the kubelet reports that the node allocatable ephemeral storage is zero.
M: So it seems like it's some kind of a zero value during kubelet bootstrap, and I just wanted to know if this debugging work that I'm doing collides with some other refactoring work. If it's easy to answer a simple question: should the node allocatable storage ever be zero? If anybody on here can definitively answer that, great; otherwise I'll work on it offline, but mostly I just wanted to see if anybody else is working in this surface area.
G: So I'm not aware of any change that would have impacted how ephemeral storage is tracked, I think.
G: I don't have it memorized what kubeadm does; it's probably host-path mounting where that storage is for etcd. But the thing we would need to check is basically, when the kubelet looks at...
G: ...the disk that it advertises as allocatable: if you're host-path mounting etcd onto a different mount point, then the kubelet won't know anything about it. So basically it just comes down to understanding your disk layout.
M: Right, well, that's a great tip, but the problem is that it's not consistent, and so it seems like a race in the way that the kubelet does admission. In the predicate admission code flow, if you know what I'm talking about, there's a part where, when pods are admitted, they're introspected for whether they meet the declared requirements. So if you've got a storage requirement, it does some math to make sure the declared storage is actually less than the available allocatable storage, and that's the particular condition we're running into.
G: If it's the race-type scenario you're describing, we'd have to check whether cAdvisor has discovered the disks at the time the kubelet was first reporting allocatable back. It's possible, with whatever set of du or df calls cAdvisor is doing (I forget the exact chain of calls), that there's a second status write that says what allocatable should be, but it might not be the first.
G: That's the other thing I would look at, because sometimes certain cAdvisor housekeeping loops might only run 20 seconds or so after the kubelet's startup, depending on just what's going on on that host.
M: That's the main question. I don't want to take up too much time, so I'll work on this as an investigation of a possible bug, and if I find otherwise, and it seems like there's some significant refactoring work, then obviously that would be something more like a KEP. That's it.
A: Because this is kubeadm related, I also tagged SIG Cluster Lifecycle on there. It looks like there are Cluster Lifecycle people on the issue, but those would probably be the other folks that would care about it, yeah.
M: That was my initial engagement, and they were skeptical that this wasn't a kubelet-originating issue, the idea that the ephemeral storage would race with the kubelet node allocatable. Anyway, thank you very much for that; I will certainly continue to engage with them.
G: The catch is that ephemeral storage is not, as you probably know, as easily enforced as, say, other cgroup-managed resources, and so I don't know what the goal is that SIG Cluster Lifecycle is trying to accomplish, but...
M: ...is to inform the kubelet about system-critical resource requirements, so that over time, during the lifecycle of the cluster, if there is contention across these particular vectors, whether it's CPU, memory, or storage, the kubelet understands that it needs to critically reserve N bytes of storage, because it's hosting etcd on this particular control plane node. That's the end goal, I think.
G: Yeah, okay. Just making sure people are aware that it's not as perfect as one would want, I guess. And maybe priority is another knob that group could explore, just letting pod priority try to influence access to resources. Totally.
I: Hey, I probably should have brought this to SIG Node a lot earlier, but in case you haven't seen it, we're working on a proposal to replace PodSecurityPolicy with a new mechanism based around the Pod Security Standards that we published a couple of years ago.
I: One of the key core tenets is that it's essentially unconfigurable; we wanted to give a good default out-of-the-box experience, with three profile levels: privileged, which is basically what it sounds like, totally unrestricted; baseline, which I'll go into more in a second; and then restricted, which is sort of our best-practices profile. So baseline is basically: if you create a pod and only specify the required fields...
I: Say you provide a single container with a name and an image and don't configure anything else. Baseline is trying to be that level of privilege, essentially, so that pod, by definition, must be allowed in the baseline profile; things that elevate privilege above that are denied, and things that drop privilege are allowed.
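As a sketch, this is the kind of minimal pod the baseline profile is anchored on; the pod name and image are illustrative, not from the proposal:

```yaml
# A pod with only the required fields set, nothing else configured.
# By the definition above, baseline must allow exactly this.
apiVersion: v1
kind: Pod
metadata:
  name: minimal-pod   # illustrative name
spec:
  containers:
  - name: app
    image: nginx      # illustrative image
```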
I: Now, it gets a little fuzzy around some things like volumes. We've already gone into a discussion around that; basically, on baseline we allow all volumes except for hostPath. And then capabilities, that's another one that's a bit tricky, since we don't actually define the default capability set in Kubernetes. So we had a bit of a discussion on Slack around what to do about this, and I've talked to Renaud offline as well. One option would be to say you just can't add any capabilities.
I: But the problem with that approach is that it breaks the pattern of saying "drop all capabilities and add back only the ones that I need" as a least-privilege approach, which, thanks to several folks from SIG Node, we realized is a pattern that is actually used, so we'd like to not break that.
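The drop-all, add-back pattern being referenced can be sketched in a container's securityContext like this; the capability added back is just an example of "only what the app needs":

```yaml
# Least-privilege capability handling: drop everything, then add back
# only what is required (NET_BIND_SERVICE here is an illustrative choice).
apiVersion: v1
kind: Pod
metadata:
  name: least-priv-pod   # illustrative name
spec:
  containers:
  - name: app
    image: nginx          # illustrative image
    securityContext:
      capabilities:
        drop: ["ALL"]                 # start from nothing
        add: ["NET_BIND_SERVICE"]     # add back only what the app needs
```

Disallowing all `add` entries outright, as the rejected option above proposed, would break exactly this manifest even though it ends up with fewer capabilities than the runtime default.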
I: Or make it configurable somehow. So the proposal that I just pushed, and this is still kind of open to debate, is to basically take the Docker default set and drop CAP_NET_RAW, because I think that's the most dangerous and the most contentious capability in that default set, and say that anything from that Docker default minus CAP_NET_RAW is allowed to be added to the pod explicitly.
I: This means that at the runtime level you can still configure an expanded capability set beyond that. So, for instance, if you want to use the full Docker default set with CAP_NET_RAW, that's still okay; you just can't explicitly add CAP_NET_RAW in the pod spec.
I: It also means that if you have a more restricted set of capabilities, then users under the baseline profile will be able to add additional capabilities on top of that, within this default set. So that's the current proposal. It's definitely not perfect, so I'd appreciate any feedback on it.
C: Questions? I think this is a good balance, Tim. It's not perfect, I agree, but maybe we can take another look at that list and see if there's something else we can drop as well; CAP_NET_RAW was the obvious one.
I: Yeah, sounds good. Another thing worth pointing out is that there's a versioning mechanism built into the proposal, so if, in a future release, we decide that a capability is dangerous and we want to drop it from the set as well, we can.
L: Very quickly, the thing worth highlighting is that during the review, Kevin gave a lot of good comments and pointed out that, rather than a new policy, he believed it's better to have a new option to fine-tune the behavior of the static policy, which is very similar to what Derek suggested. So we are doing it that way, but besides that, the behavior we want to implement is still the same. So just a heads-up that we are going to pivot this way.
L: It will not be a new policy, just a new option to fine-tune the policy, and Kevin said he may be able to contribute some features he was using with this. So that's it. About the demo: I'm going to record it and share it in the document and on Slack, so we'll see. Thanks.
C: Just one more time, everyone: please update your KEPs and update the doc with links.