From YouTube: Kubernetes SIG Node 20230321
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230321-170540_Recording_640x360.mp4
A
Hello, hello. It's March 21st, 2023, the SIG Node weekly meeting; welcome, everybody. We have a relatively short agenda for today, and I didn't want to include the usual things like stats, because we are in code freeze and test freeze, so not much is happening. Itamar, is it your first item?
B
Sure. So basically, I'm trying to start an effort to move the swap API into beta.
B
I saw that there's a list of requirements for that, and I've already made a POC that both introduces the new API and shows that it's already working, and that there is not much implementation work left to do. So I'm going to share a link in the chat here; this is the POC itself. It's also linked in the meeting agenda.
B
But there are a bunch of other questions regarding the other requirements for moving it into beta that I would like to discuss, if that's okay.
B
So, one example: if you look at the link that I just sent in the chat, this is the KEP that lists the requirements to move it into beta. For example, one of them is to support controlling swap consumption at the pod level. Now, my POC actually introduces an API that controls swap entirely at the container level, not the pod level, and I think this makes sense because it's congruent with the other resource allocations that Kubernetes has: for example, we define memory, ephemeral storage, and CPU.
B
All of that is at the container level, and so I wanted to implement swap limits at the container level only, not the pod level. So my basic question is: is this list up to date, and is it relevant? What do you think about the container level versus the pod level, for example? And also, regarding the other items on this list, maybe some of them aren't relevant anymore simply because they're pretty old. Yeah, I wanted to hear your thoughts about it.
C
So I think when we talked about the pod level, the initial idea was that there would be a knob that controls a percentage for all the pods, rather than having something directly at the pod API level. So we start with that, then we evolve, and then we figure out whether it makes sense to give that much control to each pod. Maybe it makes sense to give it at the QoS class level, and depending on which class you are in, you get access to some percentage of the swap.
C
So that was the original idea when this was written down. I mean, we haven't talked about this for a while, so I think this is a good time to kick off those conversations again.
A
In 1.27 the plan was to execute on the alpha-two milestone of this KEP. So we are in alpha, and we wanted alpha two, and the biggest question for alpha two was security and reliability: specifically, if there are memory-backed secrets located in swap, anybody who can take a snapshot of a disk can get the secrets out of the swap file. So we wanted to address this question first, and most ideas were around not allowing cgroup v1 for memory swap, because cgroup v1 has so little control over memory swap usage. So yeah, and there are more items on alpha two.
A
I think alpha two is way better at outlining what needs to happen. Beta is, as Ronald said, a little bit farther ahead. And as for the question for beta: specifically, for alpha we wanted to understand how eviction will work and how we can control memory between RAM and swap, and in beta, based on those observations and experiments, we wanted to decide whether we want to change the API to support percentage-based swap usage, or whether we want a per-pod or per-container definition of swap.
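For illustration only (the numbers and variable names below are made up, not from the meeting): a minimal shell sketch of how a percentage-based, proportional swap limit per container could be computed, in the spirit of the percentage idea discussed above.

```shell
# Hypothetical proportional swap calculation: a container's swap limit is
# its share of node memory applied to the node's total swap. Values are in
# MiB to keep 64-bit integer arithmetic in range; all numbers are examples.
node_memory_mib=$((16 * 1024))       # node RAM: 16 GiB
node_swap_mib=$((4 * 1024))          # node swap: 4 GiB
container_request_mib=$((2 * 1024))  # container memory request: 2 GiB

# limit = containerMemoryRequest / nodeMemory * nodeSwap
swap_limit_mib=$((container_request_mib * node_swap_mib / node_memory_mib))
echo "container swap limit: ${swap_limit_mib} MiB"   # 2/16 of 4 GiB = 512 MiB
```

Under cgroup v2 such a limit would map onto a per-cgroup swap knob; cgroup v1 has no equally clean control, which is part of the security concern above.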
A
So, if you're interested in helping: I do plan to reintroduce it in 1.28, and I think we need to execute on alpha two first and then get down to the beta targets.
B
I would say that I am very interested in pushing it into beta in 1.28, if that's possible, and I am willing to dedicate a lot of time to making it happen. So yeah, maybe we can just start by you looking at the POC, and maybe I'll revisit all of these other requirements on the list, and we can continue from there. And, Herschel, do you want to add something?
D
You know, actually, Sergey already answered the question I had about alpha two and beta, so yeah, we're good. I think we can continue to work together in the SIG Node channel, I guess, on the Kubernetes Slack.
A
So yeah, let's reconvene, and maybe for 1.28 we can change the scope for beta. If there will be many contributors, that's okay; we just expected one small contribution, or one contributor, for this release, and we didn't get it. So yeah, maybe in 1.28 we can increase the scope if there is much interest.
D
Okay, so if you have the link, please send it to us in that channel, and then we can kick-start things there.
C
All right, so I know we just graduated cgroup v2.
C
It went GA a couple of releases ago, but in Red Hat we're trying to move to v2, and as we are moving to the new pieces, what we are seeing is that a lot of the code is dropping support for v1. We found patches in the kernel that didn't work in some cases for v1; then systemd is not supporting v1 for some edge-case scenarios, and so on. So it makes sense to ask the question: how long do we want to support v1 on the Kubernetes side? Maybe this is a good time to kick off those conversations, like when do we deprecate it, because the distros and everything will be moving to v2 soon, and a lot of work is also happening on the kernel side to close the remaining gaps with v1.
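As a side note (not from the meeting): one quick way to tell which cgroup version a node is actually running is the filesystem type mounted at /sys/fs/cgroup. A small sketch:

```shell
# Classify a node's cgroup mode from the filesystem type at /sys/fs/cgroup:
# "cgroup2fs" means pure v2 (unified); "tmpfs" means v1 (legacy or hybrid).
classify_cgroup() {
  case "$1" in
    cgroup2fs) echo "cgroup v2 (unified)" ;;
    tmpfs)     echo "cgroup v1 (legacy or hybrid)" ;;
    *)         echo "unknown" ;;
  esac
}

# Fall back to "unknown" on non-Linux hosts or missing mounts.
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown)
classify_cgroup "$fstype"
```

This mirrors the check many node components do when deciding which cgroup driver paths to use.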
E
We're doing this as a group. The problem is, I don't know; I also don't know when containerd and CRI-O would drop cgroup v1 support.
E
One thing we need to consider there: like in the systemd discussion last week, we actually only say, okay, we support kernel version 3.18 and above. So in that case, unless we change that, we are stuck on that one, so I'm not sure we can make a plan; maybe at least an attempt would help us.
A
GKE is moving to cgroup v2 as a default, I think in 1.26, so we don't expect much trouble with that move, because mostly workloads are compatible. And I think if we try to be numbers-driven, it may help. So maybe you see GKE numbers; I mean, you can, yeah, look at Red Hat numbers, and then...
C
Yeah, yeah, definitely. We will still have to stick with v1 as a default, because I think we are using some v1-specific features: we are using a real-time kernel, and we are using some features that our customers depend on, and we found some gaps there, like v2 doesn't have those features that are in v1.
G
Yeah, I already mentioned that idea to you some time ago, but what we can do about removing support for version one is actually to remove all cgroup management from the kubelet.
E
You still need to answer the question, right? As a Kubernetes vendor, or as the open source community, we still need some way to answer. For the whole node stack, you have to say: okay, what do you support, what do you not support, and how do you configure it? You still need to do some work, so it's not that simple, even for us.
E
And then, we in GKE don't have that complex problem, but, as was just mentioned, with that cgroup v1 dependency we can't just not care, because there are many other people using it on Kubernetes. So yeah, that's the problem. We should decide a timeline and announce it as early as possible, I guess.
H
A timeline is necessary. Also, it's going to come with, you know, some kernel requirements, and we might have to extend the support timeline when we make this happen, right? So the people who stay on older kernels, older versions of cgroup, will still at least have some limited support for a little bit more time than would normally be expected under the three-releases policy. Yeah, yeah. I just wanted to add one thing: I think the big thing that I'm looking at is that systemd has dropped, or officially said that they will deprecate, cgroup v1 at the end of 2023. So if we look at that timeline, you know, I assume once the new distros upgrade to that version of systemd, you won't be able to fall back to cgroup v1.
E
But in the past, actually, a long time back, systemd decided to drop it, and we actually pulled them back to continue support. Okay, so I just want, maybe, for you to come up with a list, since you have more context here, to see what is missing, and then we could decide. So if we really need it, can we possibly, like last time, ask systemd to continue carrying the support for some time, right? So that's the kind of good thing.
C
To get a list of the kernel things we are tracking: I think some of those things actually make sense for us to upstream. We have it in CRI-O, but it definitely makes sense to do it in Kubernetes. That way, you know, we can share: okay, this is what we're doing, this is the gap, and this is the kernel work that is happening and when it's expected to land.
C
Yeah, yes, I think so. But even after that, right, we see some patches in systemd that are like: oh, okay, this is v1, we won't deal with it. For example, on a shutdown it's not doing a graceful wait for units anymore. We are okay, because we have the node graceful shutdown feature in Kubernetes, but if we didn't have that, then suddenly our DaemonSets and static pods would start getting killed without getting a SIGTERM. So patches of that nature have started landing in systemd, with notes saying: okay, this is a legacy controller.
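The node graceful shutdown feature mentioned here splits the kubelet's shutdown grace period between regular and critical pods (the KubeletConfiguration fields shutdownGracePeriod and shutdownGracePeriodCriticalPods). A sketch of the split, with example values rather than defaults:

```shell
# The kubelet reserves shutdownGracePeriodCriticalPods out of the total
# shutdownGracePeriod for critical pods; regular pods get the remainder.
# The 30s/10s values below are illustrative, not defaults.
shutdown_grace_period=30        # shutdownGracePeriod, seconds
critical_pods_grace=10          # shutdownGracePeriodCriticalPods, seconds
regular_pods_grace=$((shutdown_grace_period - critical_pods_grace))
echo "regular pods: ${regular_pods_grace}s, critical pods: ${critical_pods_grace}s"
```

This is what lets pods receive a SIGTERM and terminate cleanly even when systemd no longer waits gracefully for units on cgroup v1.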