From YouTube: Kubernetes SIG Node 20230228
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230228-180324_Recording_640x360.mp4
A
Hello, hello, it's the SIG Node weekly meeting, it's February 28, 2023. Welcome everybody. Today's agenda is very short. First I wanted to say that we've beaten all our records on total PRs on SIG Node: it's 248 now, which I think is an absolute maximum. Maybe two years back we saw the same, so we need approvers and reviewers. I think we have more active reviewers these days and we have many LGTM'd PRs, but some approvers are not around at this time. So yeah.
A
If you're interested in what's happening since last week, you can see that 36 PRs were created, and that's not even close to the number of PRs that were closed and merged, which is only 20. So we have 16 PRs of growth over the last week. And I wanted to remind you that code freeze is in two weeks, on March 15th. So if you want your feature, bug fix, or enhancement to be merged, please hurry up; you don't have too much time. With that said, I want to pass the agenda to Vinay.
A
He has the only agenda items this week. Vinay, take it away.
B
Yeah, hey, so the PR is finally merged; Tim LGTM'd it last night. I saw that it went in successfully, and I've been watching the tests so far, and then I came out to ski, so you can tell I'm really serious about it. Well, okay, there are a couple of follow-up PRs. I think one follow-up for sure, which Tim wanted: the name of the resize policy is going to be moved around a little bit and restructured, plus setting defaults. It's pretty close.
B
There is one integration test that's failing because it does not like when you patch it and the patch is a no-op patch: it still continues to patch and updates the resourceVersion, so I need to investigate what's going on. But besides that, that's the only outstanding issue for having that PR ready. And the other one is the GetPodQOS function that we have: I want to replace that with the QoS class recorded in the pod status.
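A minimal sketch of that replacement idea, assuming a hypothetical helper name: prefer the QoS class already recorded in the pod status and only fall back to recomputing it from the spec (for example with the existing GetPodQOS helper) when the status has not been populated yet.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// qosClass is a hypothetical helper, not an existing Kubernetes function.
// It prefers the QoS class already written into pod status and reports
// whether the caller still needs to fall back to computing it from the spec.
func qosClass(pod *v1.Pod) (v1.PodQOSClass, bool) {
	if pod.Status.QOSClass != "" {
		return pod.Status.QOSClass, true
	}
	return "", false
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{QOSClass: v1.PodQOSGuaranteed}}
	c, ok := qosClass(pod)
	fmt.Println(c, ok) // Guaranteed true
}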
B
Because we don't want to compute this; it's computed all across the code base. And that doesn't have to be in this release, since we're very close to code freeze, so we'll see: if people want to do it now, we can do it now, or later. But yeah, for this one, hopefully nothing will break; I'm keeping my fingers crossed and I'll keep an eye on it. But if you guys see any issues that I'm missing, let me know.
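On the failing integration test mentioned a moment ago: the complaint is that a no-op patch still bumps the resourceVersion. This is a hedged, general-purpose sketch of the client-side pattern of skipping a patch when it would change nothing; it is not the actual fix being investigated, and buildPodPatch is an assumed helper name.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

// buildPodPatch returns the strategic merge patch between the original and
// modified pod, or nil when the patch would be a no-op, so callers can skip
// the PATCH request entirely instead of bumping resourceVersion for nothing.
func buildPodPatch(original, modified *v1.Pod) ([]byte, error) {
	origBytes, err := json.Marshal(original)
	if err != nil {
		return nil, err
	}
	modBytes, err := json.Marshal(modified)
	if err != nil {
		return nil, err
	}
	patch, err := strategicpatch.CreateTwoWayMergePatch(origBytes, modBytes, v1.Pod{})
	if err != nil {
		return nil, err
	}
	if string(patch) == "{}" { // nothing changed
		return nil, nil
	}
	return patch, nil
}

func main() {
	p := &v1.Pod{}
	patch, _ := buildPodPatch(p, p.DeepCopy())
	fmt.Println("no-op:", patch == nil)
}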
B
C
I was happy this stuff merged. It's fun to blame Tim because he was the one who had to do the final approval.
C
The changes that you're talking about, though: the PR looks like it's one, it's one pull request, right? So are you describing them as two changes because you want to separate them, or can we just...?
B
It will be two, yeah. It will be two different commits so that it's easier to read: one is the name restructuring and the other one is setting the defaults. There is a bunch of generated files that came out of the restructuring, so it looks like an XL PR, but really it's not; a lot of the files are generated. I think we want to have it as part of this release. So, besides that, it should be fairly easy to review.
B
I already have the PR up there, but it's not ready to merge, because that one integration test is failing and I need to investigate after I get back from enjoying the powder that's out here. So yeah, that's the only outstanding thing that's critical for this release. The other one, GetPodQOS, is not really critical; I mean, it can come later. We're two weeks from code freeze, so maybe we should just do it later.
A
Vinay, since I didn't know that this PR was merged (I didn't check my notifications before the meeting): on the sidecar group we've been discussing splitting all the tasks into multiple stages, so I pasted the uber issues that we discussed. It also touches a similar code base, and I think once things are cleaned up, some of the ongoing PRs will need to rebase. It's not a big deal, it's just small changes, yeah.
A
So yeah, I just realized that somebody put agenda items under a future date, not under the current date. Lucy, I think I see you saying that it's already taken care of.
A
Okay, no worries. And Mikhai, do you want to talk about your PR?
D
E
Yes, yes, yes, I just moved it down to today.
E
Unfortunately I didn't discover that before in my testing, but yeah, it happens when the machines are loaded. What I found characteristic about the failures, which I ended up reproducing, is that in the containerd logs the kill task comes before the "start container successfully" message, or something like this. So I believe it's just about waiting a couple of seconds.
E
Typically the difference in time is like 10 milliseconds, so very small, but it's still possible, and then the error message isn't the one we expect for a killed container, but a different error. So as a quick fix I propose just to wait, but for a long-term solution I think it would be good to have a proper fix. I'm not very sure if this is a containerd issue; if you have some idea, then I can create an issue, open an issue in containerd anyway, so that the folks there look it up.
E
Okay, but yeah, so I will open one, but I think for a quick fix it would be good to manage it so that, you know, people's jobs don't fail. So yeah, please take a look.
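A hedged sketch of the "just wait" quick fix being proposed, assuming the test can poll for the expected state instead of asserting it once; checkContainerState is a made-up placeholder for whatever the real test inspects, not an existing helper.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkContainerState stands in for the real assertion: it would inspect the
// container status reported by the kubelet and return true once it matches
// what the test expects.
func checkContainerState() (bool, error) {
	return true, nil
}

func main() {
	// Poll every 500ms for up to 10s rather than checking a single time,
	// so a ~10ms ordering race in the runtime does not fail the test.
	err := wait.PollImmediate(500*time.Millisecond, 10*time.Second, checkContainerState)
	fmt.Println("condition met:", err == nil)
}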
E
Then another thing I'm working on in this iteration is the marking of pending and terminating pods as failed by the kubelet, and the initial implementation is now up for review, so please leave your comments. While working on this implementation I already have like three questions that we can maybe discuss now or during the review. The questions I ran into are, like: as you know, this whole thing started off from jobs, where we want to match the failed pods against the pod failure policy.
E
So we want them to be in a terminal phase, but what is characteristic about jobs is that they use finalizers, and I think we could restrict waiting to transition the pods to the Failed phase to only those pods that have finalizers. I think this should be easy, and this way we can save some QPS.
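A minimal sketch of that restriction, with an assumed helper name: only pods that are still held by a finalizer (for example the Job controller's tracking finalizer) need the explicit transition to a terminal phase, since nothing will observe the final phase of a pod that simply disappears on deletion.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// needsTerminalPhase is a hypothetical helper, not existing kubelet code.
// It says whether spending the extra status update to force the pod into a
// terminal phase is worth it: only when a finalizer keeps the object around.
func needsTerminalPhase(pod *v1.Pod) bool {
	if len(pod.Finalizers) == 0 {
		return false // pod will vanish on deletion; skip the extra QPS
	}
	return pod.Status.Phase != v1.PodFailed && pod.Status.Phase != v1.PodSucceeded
}

func main() {
	pod := &v1.Pod{}
	pod.Finalizers = []string{"batch.kubernetes.io/job-tracking"}
	pod.Status.Phase = v1.PodPending
	fmt.Println(needsTerminalPhase(pod)) // true
}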
E
So I would like to hear what you think about that, because otherwise the kubelet needs to forcefully transition the phase, and this may require an additional API call. So yeah, that's that thing. And another is that I observed that if the kubelet restarts (I sort of figured this out after restarts of the kubelet), then even if I have the pod in a terminal phase, it can flip back to Pending for a very brief period of time. I could fix it by not allowing it to flip back from Failed to Pending, but I was wondering if this is maybe a sign of something deeper. Like, have you seen such behavior? So again, a question: does it require a generic fix or a specific one?
C
E
From Failed to Pending. So in general I want to transition pending and terminating pods to Failed, and for that I use the eviction mechanism, but once they...
C
There's some noise in the room where I'm at, so: you went from a pod being Failed back to Pending, a terminal state to a non-terminal state?
C
Clayton was investigating some of those. I don't know; probably on this particular PR it'd be good to get him to take a look, because I know he was hacking away at other approaches to this problem independent of jobs, and that's who I'd want to align you with.
E
Yeah, so it would be good, maybe, to let him know about this PR. I have a repro, so yeah, I mean not like 100%, but if I just keep restarting the kubelet at random points, this will happen from time to time.
C
Yeah, so if you have a reproducer independent of the job use case, I think that would be really good. Sometimes we struggle to find concrete reproducer scenarios, but if you have one that's independent, getting a test case posted that just shows that specific problem would help a lot of us. So if you have that reproducer and you're willing to write that e2e, and it reproduces reliably, I think that would be super helpful.
E
C
D
I was also going to suggest: Clayton has some PRs that are up; you could try your test case with his PRs and see if the behavior is improved. I think that's where I would start, because he's greatly changed the way the pod manager is working.
E
It might be. I just transitioned the pods into Failed, but yeah, I could probably merge the two branches, mine and his, and check whether the problem still occurs. That's a good point. But I don't know his branch, so maybe someone could help me with that.
G
Okay, yeah, yeah, I just wanted to plus-one that. Like, I've been working lately on that PR as well, adding some tests, and we've made quite a few changes to the updates and other kinds of pod manager changes as well, so it's possibly fixed there. If not, though, I think we should have a separate PR for just that test.
E
Okay, so that's another question from me. And one more: while working on this (again, this was sort of started off by jobs and the job feature, where we only deal with restart policy Never), in that case the pods always transition to Failed sooner or later, at least from my observations.
E
But I noticed, while working on that, that if I have a running pod whose restart policy is OnFailure or Always, then it may never transition to Failed, even though it has a deletion timestamp.
E
So it doesn't really restart, and it actually gets deleted from etcd. But if you add a finalizer to it, so that it doesn't get deleted from etcd, then it stays: it never restarts, but it stays in the Running phase with its containers terminated. I think this is a bug, but let me know if I'm thinking about this correctly, and if so, should we also deal with that? If you agree this looks like a bug, then I will probably want to take care of it as well.
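A hypothetical reproducer sketch for the scenario just described, in case it helps with the e2e request earlier: a pod with restartPolicy OnFailure and a finalizer is deleted and then watched to see whether its phase ever leaves Running. The namespace, image, and finalizer name are illustrative assumptions, and the stuck-in-Running outcome is what the speaker reports, not something this sketch guarantees.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location and a throwaway cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:       "stuck-running-repro",
			Finalizers: []string{"example.com/keep"}, // keeps the object in etcd after delete
		},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyOnFailure,
			Containers: []v1.Container{{
				Name:    "sleeper",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Delete the pod; the finalizer prevents removal from etcd, so the phase
	// can still be observed after the containers are torn down.
	_ = client.CoreV1().Pods("default").Delete(ctx, pod.Name, metav1.DeleteOptions{})
	for i := 0; i < 30; i++ {
		p, err := client.CoreV1().Pods("default").Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			break
		}
		fmt.Println("phase:", p.Status.Phase) // reportedly can stay "Running"
		time.Sleep(2 * time.Second)
	}
}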
C
C
E
Okay, so possibly that's also my use case there. But again, this is something that I observed, not strictly needed for my work on jobs: it was surprising to see pods in the Running phase when they have finalizers and restart policy OnFailure, but they never get deleted because they have a finalizer, and yeah, they are in this weird state. So...
G
Yeah, I just want to say, like, I do think that it's a bug, because I think we expect all pods to eventually end up in a terminal state, so the fact that that's not happening does sound like a bug. I did chat with Clayton more about that, and I think Clayton also agreed that it's a bug. I think the issue is just that right now, like, we can delete the pod regardless.
G
If we know that the pod is terminal, we delete the pod from the API server regardless of updating the status, because updating it requires an additional call to the server: you basically need to do, you know, an update and then a delete right away. But I do think it's something worth doing because of the finalizer use case, for example, as you mentioned.
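A hedged sketch of the extra round trip being described: recording a terminal phase before deleting costs one additional status write. The helper name and the fake client in main are illustrative assumptions, not kubelet code.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// markFailedThenDelete is a hypothetical helper. It records the terminal
// phase first, so controllers holding a finalizer can observe it, and only
// then deletes the object; that is one more API call than deleting directly.
func markFailedThenDelete(ctx context.Context, client kubernetes.Interface, pod *v1.Pod) error {
	pod.Status.Phase = v1.PodFailed
	if _, err := client.CoreV1().Pods(pod.Namespace).UpdateStatus(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return client.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{})
}

func main() {
	ctx := context.Background()
	client := fake.NewSimpleClientset(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"}})
	pod, _ := client.CoreV1().Pods("default").Get(ctx, "p", metav1.GetOptions{})
	fmt.Println(markFailedThenDelete(ctx, client, pod))
}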
E
F
E
Cool, you mean on the...
A
Yeah, I approved the flaky e2e delay, but you probably need a separate issue to fix it. Yeah, okay.
E
A
Yes, okay, thank you. Thank you for finding it, yeah. If there are no more agenda items, I would suggest we go back to PR reviews and merging and approving, getting the numbers better.