From YouTube: Kubernetes SIG Node 20220531
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220531-170402_Recording_640x360
A
Okay, good morning, everyone. Today is May 31st and it's our weekly meeting. I hope everyone had a wonderful long weekend and is back to work. So, let's start today. I also see Sergey is here today, yeah, Eric is here, yeah. Okay, let's start. Kevin, do you want to start with your first topic?
A
Yeah, I updated it and I think that's good enough. We'll just make sure, once Sergey asks us to update that 1.25 tracking table, that all those kinds of things are included there, and also mark the milestone. Next one: I also updated our tracking doc and removed anything not targeting the 1.25 release before the meeting. So I think this one, it is for the checkpoint containers.
C
Hello, yeah, I wanted to do a friendly ping. We just got a "looks good to me" a few hours ago, so I think it's just missing the /approve for the KEP.
A
Actually, before the meeting I had already looked at that. Okay, I just wanted to bring this to the community so more of you are aware, because I also already said it looks good to me. So.
D
I'm looking at it now, but I think it looks good from my side, yeah. Thank you.
A
So, the first one I think we discussed last week. Basically it is just a PR to merge the two KEPs into one, so I reviewed it and already approved it, and I think for the rest of the stuff you are waiting for Derek's approval, right? Yes.
E
I think the last commit, the commit that's currently on top, in fact the top two commits, are the ones that need to be reviewed. Those are the fixes that Derek requested, and I believe all the changes that he has requested have been addressed. Derek, I don't know if you have had a chance to look at it.
E
If not, please do. I'll try to address them, but my window for working on this is running out. I have a conference coming up in the third week of June and I'm going to be heads-down with that.
F
Makes sense. There are two upstream PRs I want to get done this week, this one and the sandboxing one, so I have it open and I started to go through the code this morning, yeah.
F
As an update, Mrunal and myself reviewed the KEP and that has since merged, so it's just the implementation PR we need to take a look at.
A
I just want to call this out again. This one is actually a KEP I really care about; it is for debugging support, and I think it will be useful. Even right now, we are trying to reduce the scope to a smaller scope, right, so we can move forward. It has a milestone, but please pay attention to this one.
G
I can talk about this. This is actually a little bit of a follow-up; it's not exactly the same bug as last week. Basically, we found an issue where there can be cases where the kubelet reports an invalid status to the API server. So, when a pod terminates into the Failed or Succeeded phase, the kubelet updates the status in the API server, and it can actually update it with a terminal phase
G
but still report a Ready condition of true. We found that it takes a little bit of time before the kubelet reconciles the container status from the CRI and eventually updates the Ready status to false. However, in some cases, if the node actually goes away prior to that update being sent, the pods can get stuck in a kind of invalid state. What happens is the pod just gets stuck in a terminal phase but still reports Ready true, and this is kind of a big problem for networking.
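To make the state being described concrete, here is a rough sketch in Go of the invalid status combination (the values and surrounding code are illustrative, not from the actual kubelet):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The invalid combination described above: a terminal phase together
	// with Ready=true. Endpoint controllers would still treat this pod as
	// a routable backend even though it has finished running.
	stuck := v1.PodStatus{
		Phase: v1.PodFailed, // terminal phase: Failed or Succeeded
		Conditions: []v1.PodCondition{
			{Type: v1.PodReady, Status: v1.ConditionTrue}, // should be False
		},
	}
	fmt.Printf("phase=%s ready=%s\n", stuck.Phase, stuck.Conditions[0].Status)
}
```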
G
So we spent a lot of time on this, and we actually found a repro. We had this issue open in the past and we thought we had fixed it fully. This was kind of an issue after the 1.22 pod lifecycle refactors. We thought we fixed it originally, and we actually even added a regression test, but we found that the fix didn't fix the issue 100 percent.
G
We actually found that in the test that we added, if we lower the timeout a little bit and check the status more frequently (the problem is kind of like a race condition, so it's timing dependent), we were able to repro this problem in the existing regression test. So I tweaked the existing test where we reproduced the issue, and I have a PR out that fixes this issue.
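As a rough illustration of the tightened check described above (not the actual test; the clientset, namespace, and pod name are assumed), polling frequently and failing the moment a terminal pod is observed with Ready=true might look like:

```go
package e2e

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// expectNoReadyTerminalPod polls with a short interval so the race window
// (terminal phase still reporting Ready=true) can actually be observed.
func expectNoReadyTerminalPod(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		terminal := pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed
		for _, cond := range pod.Status.Conditions {
			if terminal && cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				return false, fmt.Errorf("pod %s is %s but still Ready=true", name, pod.Status.Phase)
			}
		}
		// Stop polling once the pod has terminated with a sane status.
		return terminal, nil
	})
}
```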
G
It would be great if people could take a quick look and review it to see if it makes sense. We're hoping to fix this issue and maybe backport it back to 1.22, since those are the versions affected. What the fix basically does is ignore the status from the runtime: if the kubelet considers the pod terminal, it explicitly overrides the Ready status to false.
G
So we ignore the status from the CRI, because we never want to report a Ready condition of true if the pod is considered terminal. So yeah, that's kind of the update there. The ask is for people to take a quick look, review the fix, and see if it makes sense.
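A minimal sketch of the override being described, using the core/v1 types (the function name and placement are illustrative, not the actual PR):

```go
package status

import v1 "k8s.io/api/core/v1"

// overrideReadyForTerminalPod forces the Ready condition to false for a pod
// that has reached a terminal phase, ignoring whatever readiness the CRI
// last reported, so a terminal pod can never be published as Ready=true.
func overrideReadyForTerminalPod(status *v1.PodStatus) {
	if status.Phase != v1.PodSucceeded && status.Phase != v1.PodFailed {
		return // only terminal pods get the override
	}
	for i := range status.Conditions {
		if status.Conditions[i].Type == v1.PodReady {
			status.Conditions[i].Status = v1.ConditionFalse
			status.Conditions[i].Reason = "PodCompleted"
		}
	}
}
```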
G
Yeah, Raven, do you want to add anything? Raven also worked on reproducing the test and looking at the fix with me. So.
D
I'm just catching up after I've been out of office, but do we want to keep running any readiness probes when the pod starts terminating, or do we stop running readiness probes as well? I remember we had a discussion about it a long time ago, yeah.
D
I'm not sure, but I think we keep running them and the kubelet will ignore the result.
H
So, the kubelet, pods in a terminating status or terminating phase should report readiness back up to the API so that the network controllers remove those endpoints from load balancers and such. My patch will re-enable the readiness probes on termination and report that status back up to the API earlier than the kubelet currently does it, which is upon termination of the pod.
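A rough sketch of the behavior change being described, with illustrative names (not the actual patch):

```go
package prober

import v1 "k8s.io/api/core/v1"

// shouldProbeReadiness sketches the change: keep running readiness probes
// while a pod is terminating, so a Ready=false result reaches the API
// server, and therefore endpoint and load-balancer controllers, earlier.
func shouldProbeReadiness(pod *v1.Pod, containerRunning bool) bool {
	if !containerRunning {
		return false // nothing left to probe once the container stops
	}
	if pod.DeletionTimestamp != nil {
		// Pod is terminating: before the patch this returned false, so
		// the API only saw Ready=false after full termination. With the
		// patch, probing continues and Ready=false is reported earlier,
		// letting endpoints drop out of load balancers promptly.
		return true
	}
	return true
}
```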
I
Hey, yeah, this is just a quick reminder on the sandbox ready condition KEP. I did have a quick chat with Derek over the weekend, and I think he was saying it's on his agenda for this week. So it would be great if I can get some more feedback.
F
Yep, sounds good. This one and the VPA one, those are my two priorities for the week. So thanks for the reminder.
I
Yeah, I had a generic question around the next steps on kubelet stability. I know there was this excellent meeting that I think Danielle led, but are there going to be some kind of defined efforts focusing on different areas going forward?
J
That's the plan. I was out with KubeCon and then work stuff last week, so I'm not sure if Derek has uploaded the video yet, but I need to go make notes from the video, because my head is full of stress. Yeah.
F
Exactly. Daniel, I spent a few days this week actually downloading every SIG Node meeting since, I think, the beginning of the year, so I'm in the process of uploading them right now, but they're all on my laptop at the moment. It's, oddly, a manual process.
F
The other thing we wanted to do, I thought, was, and I don't know, Mrunal or Ryan, if we had identified a time, to talk about experiences of swapping crun for runc. That was the other thing I knew we wanted to discuss.
B
Yeah, I think we can cover that next week, Derek. I have our internal team doing some deeper perf and scale comparison, so they're getting some charts together that we can use.
A
That's great, because Elena is out, and also we decided to hold it, right, because we wanted more experience here. So do we want to follow up next week also on this topic, like the next steps on it? Daniel, if you want to discuss more of what we wanted to do and share it with the community, I'm looking forward to following up with you two, yeah.
J
I'm just not very mentally with it today. Sorry.
F
I will ping you when the video is published, and then, yeah, we can share our own internal numbers with crun versus runc and the areas where it had an impact. I think that would be a great input, and then, if anybody else has experimented with that, I think we'd love to hear about it.