From YouTube: Kubernetes SIG Node 20230801
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230801-170613_Recording_1554x1020.mp4
A
Hello, hello, it's August 1st, 2023. This is the SIG Node weekly meeting; welcome, everybody.
A
We have a few items. First is the PodReadyToStartContainers condition.
C
I ended up taking over this issue from Deep. I think he made an update to the KEP in 1.28 to rename the condition, and then I took over the implementation of that. At least I think it was implemented — the second alpha was implemented in 1.28 — and I'd like to try and push for beta in 1.29. I haven't heard much from the original author, so I'm sort of going with what I saw in the comments.
C
The one thing I saw for the beta was moving the API from a kubelet constant into an official API condition. I have a PR up for that for 1.29, and I updated the KEP with some of those details. I'd hope that it'd be okay to take this in 1.29. I was told to come to the meeting and see if there are any concerns or comments — please feel free to post on either the enhancement or the issue.
C
Yeah, so I sort of updated the things I saw from what was mentioned when I did the implementation for renaming the condition. There was one thing from Jordan — I think it was a recommendation — to move it from the kubelet constant into staging: if we're okay with going to beta, we want it to be a versioned API field in the condition. So that was at least the one part I saw.
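For concrete context on what is being promoted: once the feature gate is enabled, the condition surfaces in the pod's status alongside the other conditions. A hypothetical status fragment (field values are illustrative, not taken from the meeting) might look like:

```yaml
status:
  conditions:
  - type: PodReadyToStartContainers   # renamed from PodHasNetwork in 1.28
    status: "True"
    lastTransitionTime: "2023-08-01T17:06:13Z"
  - type: Ready
    status: "True"
```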
A
I was involved in the naming discussion for sure, but not much more than that. It would be interesting — Deep had a very good use case where he needed to know how long a certain stage takes. So maybe if you bring some feedback, it will be useful, because there are two items about feedback: first, any feedback and suggestions, if possible; and second, is it really working — is this field really helpful?
C
I mean, part of the reason why I wanted to take this KEP is that I have a separate one around conditions for stuck pending pods, and this is actually a pretty important condition for a lot of my users' use cases. A lot of times we have volume creation failures, and having that condition is really helpful for actually deducing failures in the pod lifecycle — and it's fairly closely related to another KEP.
C
I don't have as much of an operational aspect, but just as a user: being able to tell which pods are failing in container sandbox creation, which also includes network (CNI) setup and volumes.
A
It's a great use case. I don't know how you can express this use case, but maybe you can find some way to share it with the community. It will definitely simplify going into beta to be able to say: this is how it's useful, and this is how we used it before. Okay.
A
Yeah, I understand the use case. I'm just thinking: can we demonstrate how it was useful with specific examples? Because the feedback may be that we need a more precise condition, or that we need the condition in a slightly different place — that's the kind of feedback I'm looking for. The use case that you demonstrate may help us understand that this is actually what we want, and then we can continue with this field as it's defined now.
A
Okay, if there are no more comments on this topic, let's move on. I'm sorry, I don't know who "MC kdf" is.
F
Hey folks, that's me — I'm Carter, I'm with EKS. This PR is attempting to fix an issue that we've run into with kubelet, in which kubelet will update its lease object to no longer have — all right, I'm getting paged.
F
It will update its lease object to no longer have owner references after the generic garbage collector has already collected a previous lease. This happens when its node object has been deleted, which is a fairly common scenario when customers use Karpenter.
F
So this PR essentially does not allow kubelet to create a lease object unless it has set owner references. The change is actually in a common library that's owned by API Machinery, but I wanted someone from SIG Node to sanity-check the change, since the bug is happening in kubelet. It looks like somebody reviewed it this morning, but I would love someone from this side of things to take a look, if possible.
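The fix described above — refusing to (re)create a lease that has no owner reference — can be sketched roughly as follows. This is a minimal illustrative model, not the actual k8s.io common-library code; all names and structures here are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OwnerReference:
    kind: str
    name: str
    uid: str

@dataclass
class Lease:
    holder: str
    owner_references: List[OwnerReference] = field(default_factory=list)

def new_lease(holder: str, node_uid: Optional[str]) -> Optional[Lease]:
    """Build the kubelet's Lease, but only if the Node can be attached as
    its owner; otherwise refuse, so the lease is never recreated without
    an owner after the garbage collector has already cleaned it up."""
    if node_uid is None:
        # Node object is gone (e.g. deleted by an autoscaler such as
        # Karpenter): do not create or update an owner-less lease.
        return None
    ref = OwnerReference(kind="Node", name=holder, uid=node_uid)
    return Lease(holder=holder, owner_references=[ref])
```

The key design point is that the guard lives at lease-construction time, so the caller cannot accidentally publish a lease that the garbage collector would treat as orphaned.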
A
Yeah, I think Antonio made a comment that this is a common library, and we want to make sure that you're not breaking any other use cases that may rely on it — but I don't see any. Maybe — okay, we will need to review, yeah.
F
Yeah, so it's only used in two places within the upstream code base — kubelet and the API server — and this doesn't break the API server. I can't really imagine a scenario in which this would break someone, but that's obviously the risk involved here. There was another PR attempting to fix this bug that took a different approach, which I think is kind of subpar, so that author closed theirs this morning, I think. So anyway, reviews would be appreciated.
A
Okay — and you said that you were paged while I was speaking? I'm sorry. Yeah, I hope that we are not affecting EKS stability.
A
World-writable termination files. I don't know who this person is — what's the name?
G
That's me — hi, I'm Mark. So basically I was trying to fix a bug where kubelet creates files — empty files — that are world-writable by anyone running on the host machine, and that has been highlighted in various security audits recently. So I thought I'd make that my first Kubernetes contribution, and I started by just changing the file permissions from 666 to 644, but that created another issue where any container running as non-root couldn't write.
G
So eventually I changed it to inherit the ownership of the file from the security context of the pod spec. So that code is there, but I don't know how to move it forward, and I don't know if I need a feature gate or a KEP for that. That's why I'm here.
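The approach Mark describes — creating the termination-message file without world-writable bits and handing ownership to the pod's user so a non-root container can still write — might look roughly like this. It is a sketch under assumptions (function name and parameters are invented for illustration), not the actual kubelet code:

```python
import os
import tempfile
from typing import Optional

def create_termination_log(path: str, run_as_user: Optional[int]) -> None:
    """Create the (empty) termination-message file with 0644 instead of
    the historical 0666, then chown it to the pod's RunAsUser so a
    non-root container can still write its termination message."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.close(fd)
    # O_CREAT's mode is masked by the process umask; force 0644 explicitly.
    os.chmod(path, 0o644)
    if run_as_user is not None and os.geteuid() == 0:
        # Only root (the kubelet) can chown; inherit the user from the
        # pod's securityContext instead of leaving the file world-writable.
        os.chown(path, run_as_user, -1)
```

This also illustrates why the runtime cannot easily do it instead: the file must exist (and be writable by the right user) before the bind mount into the container happens.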
A
The issue is that it's world-writable — is there any actual problem with it being world-writable? Like, somebody could fake the message?
G
Well, it's not really a problem of someone faking the message. I think, from a security standpoint, the risk is that some malicious actor can use that file to persist their data, even if they attacked some service running completely outside of Kubernetes on the same host. Generally, on most security audits, if you have a file that everyone can write to...
A
Yeah, I'm just trying to understand the severity of this. Are there any comments from people on the call?
H
So I think — are you trying to set it based on the user? If you want to compute the user, it may make sense to let the runtime do it, because in many cases we don't know the user; it may come from the Dockerfile or the image.
G
If I understand correctly, that cannot be done by the runtime, because kubelet does the bind mount before the runtime actually takes over — and this is highlighted in the comment just above the changes.
J
Yeah, I agree with Mark's earlier comment. I think there are some restrictions, but I need to refresh my memory — off the top of my head I can't think of any, yeah.
J
I don't think so — at least for the termination message, I don't recall any, yeah.
A
We may break somebody, but I think if you state that this is totally unsupported and shouldn't be used this way, we can probably go ahead with trying to find a way to match the user.
J
Let me check — I believe recently someone changed the termination message to indicate why a container is being evicted or preempted, or whether it is just the system, so that it is more meaningful. So in some cases the actor writing this termination message can be different.
A
Yeah, the termination message is a very powerful tool, so I hope that you can resolve this.
A
I feel that for this kind of change — one that can break backwards compatibility and needs review from many people — we may want to do a KEP. I know that looks like a lot of overhead on the surface, but in reality it helps us answer all those questions: we have established approaches to answer them, rather than trying to come up with something ad hoc in a PR. So if you want to continue with this, I would appreciate it if you could start a KEP for it.
I
Hello, yeah. So I just opened this PR last week, and I wanted to see if anyone had any feedback, or whether it seemed like the right time. Basically, the idea is that for most CRI changes there needs to be some amount of consensus between the containerd and CRI-O communities, and so I felt it was appropriate to make that consensus official by adding CRI maintainers as approvers. I'm open to discussion about the precise set of people — I talked with Mike Brown about it, and also Sasha, and so I've set it as the three of us, but I'm happy to change or update that list. I just wanted to start the conversation and see what folks think.
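For readers unfamiliar with the mechanism being discussed: approvers for a directory are declared in an OWNERS file, so the change would amount to something like the following. The path and alias here are illustrative, not the actual PR contents:

```yaml
# staging/src/k8s.io/cri-api/OWNERS (illustrative)
approvers:
  - sig-node-cri-approvers   # hypothetical alias: SIG Node + containerd + CRI-O maintainers
reviewers:
  - sig-node-reviewers
```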
A
I can comment about the timing. We had this issue when we did the annual review: we discussed that we need to review the OWNERS files, but we decided to defer it for later, so we created this issue — and I closed it with a reference that this PR exists. It's definitely good timing, because we just wrapped up 1.28, so we have more people available to review things and think about OWNERS files.
A
That list has SIG Node approvers, and I think it also has people like Tim Hockin and somebody else — I don't remember — so it's extended to all the API approvers, approved more globally. But we can reuse this list for CRI approvers in a different sense.
J
Sorry, I don't know the history of that one — oh, you are talking about the API approvers in general. We have the SIG Node API approvers. We had this API approval initially because the impact is huge: if you really think about the pod API and the node API, a lot of it actually involves SIG Node. That's why, from the beginning—
J
We
are
insisted
to
have
a
know.
The
people,
but
not
the
entire
of
the
node
approver
like,
for
example,
initially
including
me
actually
later,
I,
don't
have
benefits
so
I
said.
Remove
me
from
that
one.
So
if
I
remember
that's
the
character,
so
so
we
we,
but
we
do
want
to
front
and
know
the
perspective
to
provide
the
API
feedback.
J
That's
kind
of
the
we
have
I
believe
Derek,
also
there
initially
and
the
tire
Academy
and
then
removed
first
myself
and
the
direct
remove
later
so
so
we
try
to
just
focus
on
know.
The
integration
know
the
perspective
to
provide.
So
we
start
from
there
and
then
I
believe
some
other
big
signal
sick
also
have
their
own
Representatives
they're
like,
for
example,
storage
and
network
also
have
some
of
those
representative
there.
That's
the
background
you
ask
for
so.
A
Yeah, okay. Well, I feel that now this list is used differently: it is used specifically for the CRI API, and the list of people in it is in question.
I
Yeah, I can definitely update my PR to use that list instead. For some reason I assumed it would also include the kubelet configuration API pieces and such, but if it's just CRI, then I can update it that way.
A
Okay, I feel that we have consensus that we want to move forward with this PR, and that we need more cleanups. I know that in cri-tools we recently had another promotion to approver, so every area is trying to make adjustments. We probably need to define more areas, and in this issue Parker also suggested more areas to split out. We need to be careful, though, not to split the code into too many small pieces — it's a real nightmare to maintain all those OWNERS files.
A
If not, let's move on to the next topic — maybe it's Sunny?
K
Yeah, hello — it's Sunette, actually; I'm new here. I wanted to give feedback on the PR that changes kubelet to not set the CPU CFS quota for Guaranteed pods. What we did is apply the patch on top of the latest minor release and test it in our environment. When we were using Guaranteed QoS for the application, the pod failed to start because of a huge-pages issue.
K
So
it's
it's
unable
to
allocate
huge
pages.
So
then
we
did
like
the
normal
release
without
the
patch
and
then
it's
working
there.
So
yeah
I
wanted
to
just
update
the
situation
here,
and
then
we
wanted
to
bring
more
attention
to
this
PR.
Then
we
also
like
would
love
to
push
this
one
and
then
help
this
with
this
yeah.
K
Basically,
that's
that's
it
and
like
I.
Have
a
question
like
is
how
what
can
the
timeline
can
be
for
this?
For
this
PR
I
know
it's
Set
long
term,
but
yeah
if
possible.
If
you
could
give
some
time.
My
thank
you.
A
Welcome, and thank you for being here. Answering the timeline question: right now we are in test freeze for 1.28, and I didn't find any evidence that this is a very critical issue — and it's not a regression — so we probably wouldn't backport it to 1.28 and below. So the earliest it can come out is 1.29.
A
Second
question
is
whether
you
want
to
disable
the
CPU
CFS
quarter
from
a
description
sounds
very
logical.
Is
there
any
comments?
Maybe
I'm
missing
something.
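For background on what "not setting the quota" means here: for a pod with a CPU limit, the kubelet derives a CFS quota from the limit, and the proposal would skip that for Guaranteed pods. A rough model of the arithmetic (the period value is the CFS default; the boolean flags are simplifications, not actual kubelet parameters):

```python
CFS_PERIOD_US = 100_000  # default CFS bandwidth period: 100 ms

def cpu_cfs_quota_us(cpu_limit_cores: float,
                     guaranteed: bool,
                     disable_for_guaranteed: bool) -> int:
    """Return the cgroup cpu.cfs_quota_us value the kubelet would set.
    -1 means 'no quota' (unthrottled), which is what the PR under
    discussion proposes for Guaranteed pods: rely on exclusive cpuset
    pinning instead of CFS throttling."""
    if guaranteed and disable_for_guaranteed:
        return -1
    return int(cpu_limit_cores * CFS_PERIOD_US)
```

For example, a 2-core limit normally yields a 200 ms quota per 100 ms period, i.e. the pod may consume at most two CPUs' worth of time; with the change, a Guaranteed pod would instead run unthrottled on its pinned CPUs.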
H
I think we discussed it maybe a couple of months ago, so it just needs review — and the issue that you raised needs to be addressed, right? I know Francesco was going to take a look again.
J
Can you — I forget, because this kind of problem came up a couple of years ago, but I think folks identified a kernel bug, and because of that we didn't set the quota for Guaranteed pods for a while, if I'm not wrong. Sometimes people say: oh, this problem is fixed.
J
This PR doesn't address that problem — the bugs are in the kernel; it basically just tries not to use the quota at all. So the PR doesn't fix the kernel problem, but I do remember people reverted that change back at some point — I just can't remember the details — and that's why this patch tries not to set the quota.
J
For the people who hit this bug: which kernel is it? Which kernel do you run in production, and does that kernel have the fix? The only reason I ask is that this has gone back and forth a couple of times, and I want to make sure that when we change it we don't cause another regression — because some people also demand that the quota be set, and we didn't clearly know before.
H
Could
yeah
yeah
so
I
think
Don
I'll
check
with
Martin,
but
I'm
pretty
sure
that
the
kernel
we
are
using
has
those
fixes?
This
may
be
a
further
optimization,
so
I
think
we
are
doing
a
lot
of
work
where
if
we
know
that
the
CPU
is
used
exclusively
by
a
pod,
then
on
the
Kernel
side
we
can
set
more
Flags
or
do
more
things
to
make
the
performance
better.
So
this
is
more
in
that
area
yeah,
but
I
can
definitely.
B
Oh yeah, hi, hello everyone. This was the issue I opened some time ago related to the eviction manager: the eviction manager checks only the disk usage of the living containers when deciding which pod to evict under disk pressure, so the pod holding the most total disk usage isn't necessarily the one evicted. That was the issue. Since I am new to the source code, I need some help or an approach for how it can be resolved.
A
To summarize the issue: the disk usage calculation doesn't include containers that have already completed, but those can still hold some disk usage, and yeah, that may cause some eviction-management problems.
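The gap described above can be modeled simply: summing usage over only running containers undercounts a pod whose exited containers still hold data on disk. This is an illustrative sketch with made-up structures, not the actual eviction-manager code:

```python
def pod_disk_usage_bytes(containers, include_exited=True):
    """Sum per-container disk usage for eviction ranking.
    The reported bug: counting only state == 'running' ignores
    bytes still held on disk by exited (completed) containers."""
    return sum(
        c["used_bytes"]
        for c in containers
        if include_exited or c["state"] == "running"
    )

# A pod whose footprint is dominated by an exited container:
pod = [
    {"state": "running", "used_bytes": 100},
    {"state": "exited",  "used_bytes": 900},  # still on disk
]
```

Ranking this pod by only its running container (100 bytes) instead of its true footprint (1000 bytes) is exactly the mis-eviction described in the issue.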
J
Do you want to start a KEP with a design? We know this disk management — cleaning up dead containers and images — is really bad; we did a bad job there. Do you want to start an enhancement here?
A
Yeah, okay — looking forward to a write-up. It's unclear how many changes we will need, and whether there are any breaking changes, but hopefully it should be something local.
A
Okay, if there are no more topics: I want to remind everybody to please work on documentation — the documentation deadline is coming up very soon. And the 1.28 release will be in two weeks, so brace yourselves. Thank you. Bye.