From YouTube: Kubernetes SIG Node 20230502
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
A
Hello, hello, it's May 2nd, 2023, the SIG Node weekly meeting. Welcome, everybody. I want to restart this tradition of looking at active PRs. We ended 1.27 with 200 active PRs, and now we grew by 40 more, so now we're at 241. The trend is that we see more and more PRs coming in because the floodgates are open. So if you have energy to review or approve, please do so; merging in the beginning of the cycle is safer, because we will more likely find some bugs anyway. If you're interested in what was happening, click these links; they will lead you to the specific queries for this week. With that, I want to switch to planning. Ruiwen agreed to do a retro. Ruiwen, I made you a co-host; if you want to share your screen, you can do that.
B
Okay, cool, sure. Let me fix my audio. Okay.
B
All right, so here is a table of the KEPs that we tracked for 1.27, and also whether they have merged. During 1.27 we had 19 KEPs tracked, and out of them 13 merged. I tracked the status in each of the KEP issues to determine whether they have merged or not.
B
So let me quickly go through the merged ones: kubelet parallel image pulls is merged; support for memory QoS with cgroups v2 is merged; and this is a big one, in-place pod vertical scaling is merged; dynamic resource allocation; then support for user namespaces; and extend pod resources to include resources for DRA. Also, we have a couple of KEPs graduating to beta in 1.27, OpenTelemetry tracing and evented PLEG. And we have a couple of KEPs graduating to stable: downward API, seccomp, topology manager, gRPC probes, and configurable grace period for probes. And here is some history data you can see.
B
With that, we can discuss things that went well and things that could have gone better. I also pasted the things that could have gone better from the 1.26 retro, so we can discuss them afterwards and see if we have improved on any of those. So, yeah, we can start with things that went well.
B
I guess I will mention that in-place pod vertical scaling is finally in.
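For context, the in-place pod vertical scaling feature that merged in 1.27 (KEP-1287, behind the InPlacePodVerticalScaling feature gate) added a per-resource resize policy to the container spec. A minimal sketch of the alpha API as it shipped; the pod name and image are illustrative:

```yaml
# Sketch of the 1.27 alpha API (InPlacePodVerticalScaling feature gate).
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # resize CPU without restarting the container
    - resourceName: memory
      restartPolicy: RestartContainer  # memory changes restart this container
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
```

With the gate enabled, `spec.containers[].resources` becomes mutable, and the kubelet applies the change according to each resource's `resizePolicy`.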
B
Also, from the statistics, we have more KEPs tracked and merged. I would count that as something that went well.
A
Planning went way better this time.
A
We had all the right comments in the right places, and we did everything on time. I think maybe a couple of things went through the exception process, but it wasn't complicated.
B
All right, anything else? Okay, if not, we can go to the things that could have gone better.
A
Yeah, I think we merged in-place very late, so we found so many regressions and had so many discussions about whether we fix or don't fix a bug.
B
Also, Sergey, you mentioned that from the last retro we needed Mark's help during KEP planning to apply the milestones for KEPs. I guess that is still true for 1.28; we need someone to apply the milestones.
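For reference, milestones on kubernetes issues and PRs are applied through Prow commands rather than the GitHub UI, which is why this needs someone on the milestone maintainers team. A minimal sketch of what that helper would comment on each tracked KEP issue; the milestone name here is just the upcoming release:

```text
# Left as a comment on the KEP issue or implementation PR
# by a member of the milestone maintainers team:
/milestone v1.28

# And to take it back out if the KEP slips:
/milestone clear
```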
A
One thing: I think we tracked all the bugs for in-place update, but I don't see a lot of work happening right now to fix them, beyond Windows support, I think.
A
Yeah, that is true, but we discovered that we need a major refactoring after this KEP to improve how we do in-place update, and I don't see much work happening, so it may stay as is.
C
But
at
the
other
hand,
I
do
think
what
we
did
a
good
job
to
quickly
make
decision
not
use
because
Windows
to
block
that
progress
right.
So
so
that's
why
we
at
least
count
merge
that
change
a
little
bit
earlier.
Otherwise
we
will
last
minutes
to
merge
and
because
I,
basically,
the
the
white
implants
upgrade,
cannot
merge
before
so
many
times.
We
always
have
the
new
feature
block
that
led
to
PR
that
change.
C
So
so
this
time
we
basically
said:
okay,
if
it's
not
the
fundamental,
so
we
could
a
layer
down
scope
a
little
bit
for
that,
for
example,
for
Windows,
then
next
release
can
fix
Windows
anyways
Alpha,
so
we
can
detect
the
problem
not
last
minute.
It
has
a
problem.
No.
A
Yeah, I also want to mention that we broke the standalone kubelet twice during this release, once with some storage improvements and once with in-place update. Now we have presubmits for in-place for the standalone kubelet, but we don't have regular tests, so maybe that can be improved as well.
B
Actually, you mentioned it twice: one was for in-place, and the other time was for something else.
G
Okay, so I'd like to point out, I don't know, maybe it's usual practice, but anyway we should think about whether we can change it: most of the reviews and approvals were actually done in the last 10 days before the code freeze. So I don't know if it makes sense to spread the KEPs and implementation through the cycle more smoothly, because it happens every release.
E
I think one thing we can do is, once we're done with planning, we'll have a list, right, and then we can say: okay, we, as the reviewer and approver community, will try to focus on these two KEPs this week. That way, every week when we come back, we can check: did we make good progress? Did we review? Did we approve what we said we were going to approve? Is something blocking?
A
Yeah, but you know, it's not only about KEPs; we also had the regular work.
A
I remember in the last few days Paco fixed like three or four merge conflicts and flakes, I fixed two, and then somebody else fixed a couple. So yeah, there is a problem with many KEPs moving at the same time around this point.
C
Actually, I personally feel the KEP review this time was better than in previous releases. It was the implementation code review that actually lasted until the last couple of days and blocked things, because we kept finding new problems. But the KEP review, at least for me personally, was a little bit spread out, not at the last minute. Last time I did a poor job, I apologize, but this time I made a point of processing those KEPs a little bit earlier. Still, at the end, there are the dynamic changes.
C
The
people
so
kind
of
actually
a
little
bit
easier
but
cold,
because
there's
many
things,
it's
not
the
self-contained
by
signal
and
also
not
the
self-content,
the
bad
review
or
approval
and
the
author.
There
are
like
test
infrastructure
pre-submit,
all
those
kind
of
things,
so
that's
kind
of
either
more
Dynamics
changes
there.
So
the
code
actually
at
the
end,
I
even
don't
know
which
one
I
should
take
a
look.
I
I,
I
I
I,
admit
to
look
at
the
last
minutes
code
that
I
only
unlocking
the
minimum
time.
I
have
to
admit
yeah.
G
Well, I would propose that at the earlier stage of the release cycle we concentrate on the KEPs and implementations that didn't go into the previous release but were planned for it. They are probably more ready to be reviewed and approved at the early stages, because the rest of the people are busy with their own KEPs and implementations, which may not be ready yet, but those people who missed the deadline last time may be more ready.
I can add one thing I think also went well in this cycle: the testing caught many more issues after merge but before release, compared to prior cycles when we were kind of releasing things and not seeing problems until they were actually in production. For example, we made a couple of changes in pod lifecycle due to some job controller KEP, and we found two issues with it after we merged it, but one was found by testing and another was reported very early on, so we were able to get those fixed before 1.27 was released. So I think that's improved a lot.
C
So you suggest, because we do have some KEPs merged for a previous release where the implementation maybe didn't finally go through and there are PRs waiting, that we should focus on code review for those, since the KEP merged a couple of releases before.
A
Do you want to review what we talked about in 1.26, etc., to see if things improved, and then switch to planning?
B
Yeah, so these are the things that could have gone better from the 1.26 retro. I think we've talked about the first one, needing Mark's help: in 1.26 we needed someone to help with KEP planning to apply the milestones, and we did a better job this time. And also, last time, I think, we didn't enforce the soft freeze; I don't know if that's still the case for 1.27.
B
Yeah, and also from 1.26 we had more test failures, and I think David just mentioned that in 1.27 those test failures actually helped us catch issues, instead of catching them after the release.
B
From Derek last time: we struggled on bandwidth for external-facing changes, while doing better with internal things.
B
People reached out to discuss KEPs; it's hard to keep all the details of all KEPs; expand the list of people in domains to make approvals on KEPs. Yeah, I think now we have more chairs, and as Dawn just mentioned, I think we are doing a better job on this front as well in 1.27.
A
I think it's all positive, right? We still have a lot of things to do better, but so many things improved since 1.26.
B
Yep, okay, so I guess with that we can conclude the 1.27 retro. Thanks, everyone, for the discussion.
A
Yeah, now we want to switch to 1.28 planning. So typically we have a table, and we look at what we have in this table and see if we can assign people to it, so that somebody will be working on reviewing and approving things.
E
Do you see my screen? Yes? Okay, awesome. So I think what I'll do first is just pick items from Ruiwen's list that didn't make it, and the first one is sidecars, and we'll start inserting those at the top here, or move them here. I just made a copy of what was there from last time. Let's see if sidecar is already in this list: I see it under the old name, but I don't see the rename to sidecars.
E
All right, so sidecars, priority... All right, the stage is alpha, and we have author Sergey. Okay, who else is working on it? If you can, add other folks' names.
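For context, the sidecars item being planned here is KEP-753, which later landed in 1.28 as an alpha behind the SidecarContainers feature gate. A minimal sketch of the API shape that eventually merged; pod name and images are illustrative:

```yaml
# Sketch of the sidecar API planned for 1.28 (KEP-753): an init container
# with restartPolicy: Always keeps running alongside the main containers
# instead of having to exit before the pod starts.
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  initContainers:
  - name: log-shipper
    image: registry.k8s.io/pause:3.9  # stand-in image for illustration
    restartPolicy: Always
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```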
A
Yes, okay, I will do it offline. Okay, no worries. Okay, so from the approvers and reviewers perspective, we have Tim Hockin for API, and Derek is listed as the main approver from SIG Node.
E
Okay, thank you. This one is good. Let's take a look at the next one: there is node memory swap support. Hershel or Ryan, do you guys want to talk to it?
A
Last
time
we
wanted
to
do
alpha
2,
because
we
only
have
like
a
little
bit
of
resources.
These
days,
I
understand
that
more
people
want
to
participate,
so
we
can
even
go
to
Beta
and
there
is
an
open,
PR
movie.
D
We're proposing a beta 1 and beta 2 for the features.
E
All right, okay, let's go to the next one: fine-grained supplemental groups control.
E
Okay, pod conditions around readiness.
E
Then there's the AppArmor support. Not sure; do you know, Sergey, where it got stuck?
A
The KEP has a couple of comments, so somebody needs to go through the comments and get everything written down. One question we had is how to re-admit pods: if, let's say, AppArmor was disabled on a node, how does it affect workloads? Do we need to re-admit the pods, or do they just run as usual, but then without an AppArmor profile on this node's pods? It's this kind of question that we need to answer. Besides that, I think everything else is approved and the API is there; everything is there.
C
I think he found some owner, but I didn't see that the owner was available.
C
I think I've been chasing this one, but the problem is I never found that the new owner is available. I think he mentioned to me that he found another new owner, so we...
A
So if anybody wants to do an exercise of taking a feature to GA, please join this party. It shouldn't be too hard.
E
All right, what's the last one? Okay, graduate the kubelet pod resources endpoint. Swati, do you know about that one?
E
Sounds good, thanks. Francesco, I put you down, and I put Swati as reviewer; Kevin can approve, or if he can't, I can help there.
E
Yeah, so I think we got all of that done. Now we'll just review the previous list and see if something needs to move forward or can be punted. I think the first one here is the cgroups memory QoS. David, do you have any additional work planned here? Do you think we can do anything extra, or do we need to sit on beta for a release to get feedback?
L
Benchmarking: we'll do some benchmarking tests and try to see what we can do to push it to GA eventually. Okay.
E
All right, kubelet plugin model based on DRA; owners Marlo and...
E
All right. Ruiwen, anything remaining on the image pull, or are we done, I think? Right? Yeah.
B
Your
word
John
in
this
cycle,
I'll,
probably
just
to
add
some
more
tests
and
graduate
to
Beta,
are
yeah,
so
should
be
quite
straightforward.
There.
E
The
next
one
is
extend
the
power
resources
API
to
include
dra
Francesco.
You
want.
E
All right, okay, so we covered this one already. Okay, I'll just copy the data and the owner up there from here. Topology manager GA is done, right? Yeah.
E
Yeah,
this
is
done
in
place
resources
what's
next
here,
do
we
want
to
add
tests,
windows.
C
Yeah, and I think when they come back we'll see about the Windows phase, and also all those tests that we want to add. We could change it, but so far it looks like it stays in alpha.
E
That makes sense. Yeah, we'll just continue on the bug fixes and tests there. All right, so the next one is split stdout and stderr log streams. I know we didn't hear back from the author; we need to check again and see if they are interested in taking it forward.
E
And I think this is merged, right? The status hostIPs is a pod field, so, yeah, still to be done. Yeah. More granular probes: Mike Brown?
E
Okay, but I think this is a net-new feature anyway, right? So if we want to pursue it, do you have the bandwidth to work on it, I guess, is the question.
E
All right, so this one is folded into sidecars. QoS resources: Sasha, are you on the call? You guys are going to continue working on it, right? I know, like...
E
The next one: pod termination grace period GA. This is done, right? Ryan's item, I remember. Yeah. gRPC probes is done.
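For reference, the gRPC probes just noted as done graduated to stable in 1.27. They let the kubelet call the standard gRPC health service directly, without an exec shim; the port number here is illustrative:

```yaml
# Container snippet: kubelet probes the gRPC health service on port 9090.
livenessProbe:
  grpc:
    port: 9090
  initialDelaySeconds: 10
```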
E
CRI stats: David or Peter, do you guys want to give an update?
I
Yeah, for this one I think we want to continue with some more work on the runtime side, and then focus on testing, setting up test jobs for this. Sorry.
F
Yeah, I'd like to keep pushing for beta, but it's reliant on support on...
E
Sounds good, thanks. So, CRI image pull with progress notification: has anyone on the call talked about this one? I know there was interest, but we didn't hear from the author during 1.27.
E
So basically the call was about figuring out how user namespaces intersect with the pod security policies, and Rodrigo and Giuseppe are planning to work on the stateful support.
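For context, user namespace support (KEP-127) is opted into per pod through a single spec field; the stateful-support work mentioned above extends which pods can use it. A minimal sketch of the API, with name and image illustrative:

```yaml
# Sketch of the user namespaces opt-out (KEP-127): hostUsers: false asks
# the runtime to run this pod in a fresh user namespace (behind the
# UserNamespacesStatelessPodsSupport feature gate at the time).
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```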
M
Stability: Paco is still here; he is volunteering to do the storage piece. The code is there, except for the storage. But I would like to get in front of SIG Node to talk about an alternative authentication for images. I think we need to move keyring support from the kubelet down into the container runtimes, so that we're not passing secrets across the CRI wire. Instead, I think we should be passing, you know, policies and/or the user that we're supposed to be pulling for and the keyring. That makes more sense, right? I can do a detailed mock-up for it.
E
Yeah, I think, Mike, if you can, if you can do an update...
M
...an issue we've had in Kubernetes, but maybe we need to step back, unless you want to do it in two steps.
E
Sounds good, yeah. All right, should we go through these? Maybe node local... I don't know, we'll have to check. Add execution node affinity manager to support... does anyone know what that one's about?
E
Okay, fine-grained kubelet API authorization.
A
Yeah, I want to shout out a couple of things that are being worked on. First is dynamic node resizing. So the KEP...
A
It's a new one, okay, 953; it's about resizing CPU and memory requests and limits dynamically for the node. Okay.
H
It's about the situation when you have, say, memory added or removed, or a CPU goes online or offline. So it's theoretically doable, but it might open a can of worms in some scenarios.
A
Okay, so the KEP is small in semantic size, but on the implementation side it is huge, so the design discussion will take a while. Let's see if it will make 1.28.
A
There's another one: it's about an intelligent sleep as a post-start/pre-stop hook.
A
And
I
know
there
is
interest
I
remember
already.
Two
people
asked
for
environment
variables
from
file
this
one
and
another
one.
Is
it's
not
formulated
as
a
cap?
Yet
so
there
is
no
link.
There
are
just
issues
on
KK
and
another.
One
was
back
off,
backup
timeout
to
be
configurable.
E
If you can do that, that will be helpful, and then we'll see if there is actually bandwidth and capacity to take on more. Absolutely, thank you. All right, I think that's a good first pass. Folks, please update with comments here if something changes with regard to your ability to work on it during 1.28, so we can change things accordingly. We'll make passes over this again before we finalize it.
A
Okay, we had another item, but Lucy agreed to move it forward. We still have five minutes.
N
Yeah, let's just skip it for now so that everyone can go, and we can talk about it next week. It's not particularly urgent anyway.