From YouTube: Kubernetes SIG Node 20230404
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230404-170524_Recording_640x360.mp4
A
Hi everyone, welcome to the SIG Node weekly meeting on April 4th, 2023. We have three items on the agenda today. So first up is Paco on the deprecation of the kubelet --provider-id flag. Paco, are you on the call?
B
Yes, this one is about un-deprecating some kubelet flags, because there are some flags that are not really deprecated in practice — the flag is used in some places, like the e2e tests, and maybe in other clusters — and so we want to un-deprecate the right flags. One example is --provider-id, and in the discussion Michael mentioned that we have a long —
B
It is a long story: we migrated those flags into the kubelet configuration, and some of them are deprecated but still there. In our e2e tests there are several flags for which we have been seeing the deprecation messages for a long time, and we are not sure if we have a plan to manage those flags — whether they will be deprecated and migrated, or whether some of them may be kept for a long period because they are still used somewhere.
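For context, the migration target discussed here is the kubelet configuration file. A minimal sketch, assuming the providerID field of the KubeletConfiguration v1beta1 API as the config-file counterpart of the --provider-id flag (the value below is purely illustrative):

```yaml
# kubelet config file, passed to the kubelet via --config; a hedged
# sketch of setting the provider ID here instead of --provider-id.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Unique instance ID that an external (cloud) provider can use to
# identify this specific node; illustrative value only.
providerID: "example-cloud://instance-12345"
```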
C
Yeah, thanks for looking into this one. I don't know — this particular flag looks like it is only used in a few places; at least I understand the reason people may ask for it. I don't know, because this is such a long story. The project to convert all those flags to the config file — it began more than five, six years ago.
C
So — if I remember correctly, all those cloud providers basically get that data from the provider, the provider ID, but the snag is that several ad hoc usages still remain — it looks like the kind cluster is using this one. So I believe this is where the risk, this concern, originally came from. The goal, what we want on this journey, is just to minimize the flags, because if you look at Kubernetes — if you look at the kubelet flags and config — we have a couple hundred of them; it's really not manageable.
C
So that's why the original goal was to convert those flags to the config file, apart from those that are particularly necessary. Obviously this one was treated that way initially because it all goes to the cloud provider and initially was not that useful — that's why, by default, the main code treats it as deprecated. I do see there are cases — kind clusters, there is that one; I don't see other cases there — but the goal is, if we don't want it deprecated, to migrate it to the config file and try to minimize what we are using as flags.
B
Maybe another one is the feature gates flag. I think we have had feature gates in the kubelet configuration for a long time, but many of us are still using the --feature-gates flag. I'm not sure if that is the preferred way, or whether we need to deprecate it, or remove the deprecation message, or just remove the flag. I'm not sure about this one; for the others, I think, yeah.
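A hedged sketch of the two mechanisms being compared — the featureGates map in the kubelet configuration file versus the --feature-gates CLI flag; the gate name below is a placeholder, not a real gate:

```yaml
# kubelet config file: the config-file way to set feature gates.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SomeFeature: true   # placeholder name, for illustration only
# The flag-based equivalent under discussion would be:
#   kubelet --feature-gates=SomeFeature=true
```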
A
So maybe then, in 1.28, we find someone — if someone wants to make a pass at it and, like, shepherd this forward. But meanwhile we merged this one to unblock.
A
All right, so I think we are good with that one. We can move on to the next issue on the agenda: Ian Coolidge, regarding specifying reserved CPUs and exclusive CPUs.
D
Yes, just to give everyone a little bit of a brief on this issue — I don't know if everyone's read the issue — but we found that, when specifying --reserved-cpus with the static CPU manager policy, still some workloads are getting — non-exclusive workloads are getting scheduled on the reserved CPUs. So I think the team acknowledged that this is a bug.

But now, going forward, I was thinking it might be really helpful to be able to specify CPU use in CPU groupings. You use it to kind of shield workloads from using some CPUs, but then maybe you also subdivide the workload CPUs into ones that are intended to be used with, like, kernel-level CPU isolation — you know, kind of all interrupts removed, all kernel threads removed and all that stuff — versus a kind of normal category of workloads, so that you have sort of partitioned the CPU group into three different sets.
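For reference, a minimal sketch of the kubelet configuration in which this behavior was reported, using the KubeletConfiguration fields that correspond to the flags mentioned above (the CPU list is illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The static policy lets Guaranteed pods with integer CPU requests
# get exclusive CPUs.
cpuManagerPolicy: static
# Config-file counterpart of --reserved-cpus: CPUs set aside for
# system and Kubernetes daemons. The reported bug is that
# non-exclusive workloads can still be scheduled onto these CPUs.
reservedSystemCPUs: "0-1"
```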
A
So I think Francesco is looking into the bug itself, and I think, at least in Red Hat, we have some use cases where we have some workloads where, when we pin the CPUs, we don't want any interrupts and stuff. So we actually do some extra work in the container runtime to disable scheduler load balancing and so on. Is that the kind of tweaks you're thinking about?
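The pinning mentioned here comes from the static CPU manager's exclusive-CPU assignment. As a hedged sketch: a pod only receives pinned CPUs when it is in the Guaranteed QoS class with integer CPU requests, along these lines (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      # Equal, integer CPU requests and limits make the pod
      # Guaranteed, so the static CPU manager gives it exclusive
      # CPUs drawn from the shared pool (not reservedSystemCPUs).
      requests:
        cpu: "2"
        memory: 256Mi
      limits:
        cpu: "2"
        memory: 256Mi
```

The extra interrupt and load-balancing tweaks described above happen outside Kubernetes, in the container runtime, on top of this pinning.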
D
Yeah, yeah, maybe — I'm not sure exactly what that refers to, but I think that's kind of roughly on par with what I'm describing, yeah.
E
Can I speak up here for a moment? We're working on a KEP right now to handle different types of cores and to be able to have a CPU controller basically have its own resource plugin, because currently with Kubernetes you either take the built-in CPU stuff or you have to do things like they're talking about with Red Hat, where you have to do stuff through the container runtime for CPUs. And so, number one —
E
Yeah, and I can put in our current ongoing work — we're fairly advanced — but I'd welcome anyone jumping in and making sure that it's what everyone needs, because it does handle the scheduling problem, right? Just because you have the ability to handle the kernel interrupts by doing stuff through the runtime, that may not be exactly what we need.
A
All right, thanks again. So the last topic on the agenda is user namespaces. Rodrigo?
A
Oh, Rodrigo, we can't hear you.
F
Good. So, with the last changes we merged — Cornelius's —
F
For the user namespaces KEP: currently the KEP scope is only for stateless pods, but we don't need to do any — yeah, any other changes in Kubernetes or the runtimes to support stateful pods. So what we want to do is just to add stateful pods into the scope of the KEP too. Like, well, the only code change that we need to do is to change the validation: right now, we're validating that no volumes are inside the pod when the pod is accepted.
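The pod-level switch involved is the hostUsers field this KEP added. A minimal sketch, assuming a persistent volume claim as an example of a volume the current stateless-only validation rejects (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-pod   # illustrative name
spec:
  # false = run the pod in its own user namespace rather than the
  # host's. The change discussed here is to stop rejecting such a
  # pod just because it has volumes like the PVC below, since
  # idmapped mounts now do the UID/GID translation in the kernel.
  hostUsers: false
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc   # illustrative claim name
```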
F
If we just remove that, everything works. So I wanted to propose to just deprecate the feature gate that we have today — that is, UserNamespacesStatelessPodsSupport — just remove the "stateless pods" part, and add a new, like, another new feature gate, UserNamespacesSupport, that activates support for stateful and stateless pods. I wanted to know what are — what are your thoughts on doing this? It is still alpha, and we propose to keep it still alpha.
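A sketch of how the proposed rename would look on a node, via the kubelet's featureGates map (at the time of this meeting the new gate is a proposal under discussion, not yet merged):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Existing alpha gate, stateless pods only, proposed for deprecation:
  # UserNamespacesStatelessPodsSupport: true
  # Proposed replacement covering both stateless and stateful pods:
  UserNamespacesSupport: true
```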
F
Code changes — basically, what we are doing is using idmapped mounts for the bind mounts, so stateless pods should work exactly the same as in 1.27, and the only thing that will change for 1.28, if everyone agrees, is the feature gate name, and we'll accept more pods, more workloads.
A
I think the reason we went down the path of splitting is because we knew for sure that, okay, stateless — we are good, we know the design, there are a bunch of use cases — and it can independently move faster towards graduation. So —
F
I don't think so, because initially we had, like, several phases in the KEP, and what we wanted to do was to support the stateless part without any kernel support — that's why we wanted to move it at a different speed also. But when we merged it in 1.25 we had to do several hacks; there are several limitations, and basically SIG Storage had concerns.
F
So what we did in 1.27 is basically merge it all with idmapped mounts, so we don't need to do any of the things that were worrying SIG Storage. We don't need to change the ownership; we rely on the kernel to do the ID translation now. I mean, before, we were doing it for the simple volumes ourselves — the kubelet was creating them with the proper host ID, host UID and GID and whatever. Right now we just undo that part and we let the kernel do the ID translation.
F
So, yeah, I don't think there's any of, like, what we thought about — yeah, "volumes are more complicated and will need kernel support, and so fewer users" — like, the user will need to upgrade the kernel and this and that. Well, we already have that for stateless pods too, because we couldn't move forward with our original idea.
F
Yes, that is — that wasn't in my to-dos; I had it for the beta graduation, like, as a blocker to beta graduation, but yeah, we can start the conversation earlier if you want to. I think it will be tricky, probably, because at least until the field is GA we cannot — like, the restricted policy doesn't let you use host namespaces, except for the user namespace, because that was always the case — but yeah, we'll probably need to do it, yeah.
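For context, the policy referred to is the Pod Security "restricted" profile, enforced per namespace through the Pod Security admission labels; a minimal sketch (namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps   # illustrative name
  labels:
    # The restricted profile forbids host namespaces (hostNetwork,
    # hostPID, hostIPC). The pod user namespace is the exception
    # discussed here: pods have always shared the host user
    # namespace by default, so hostUsers is not restricted yet.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```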
F
And, yeah, when is the right timing? Is it when it's beta, when it's GA, or whatever — do you know which? So you guys should join to discuss this.
F
There — okay, we can plan to, yeah, to go together. And are you going to KubeCon EU?
G
Hi, I had a quick question, sorry, about the stateless change. I was wondering — so currently we rely on a newer kernel to be able to — we do rely on the idmapped mounts being handled by it. If we change the scope of the KEP to include stateful pods, that means user namespaces would rely — like, it would require a new kernel; whereas, like, if we keep the separation between stateless and stateful, we could use an older kernel for, you know, the stateless ones.
F
No, no, no, no. Currently the KEP only supports stateless pods, and we already did the change in 1.27 to rely on idmapped mounts. So we require a new kernel for stateless pods too, because we couldn't move forward with our idea to, in those specific cases, not need the kernel features, due to the SIG Storage concerns.
A
So with that, we are at the end of the agenda. Do folks have any other topics they wanted to raise today?