From YouTube: Kubernetes SIG Node 20220524
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
B
Yes, hello. I wanted to do a friendly ping here: the KEP has been open since early April and we're targeting 1.25, so it would be great to have a review so we can switch the KEP to implementable before the soft freeze that we have in SIG Node in a week or so.
C
Hey Rodrigo, I can review it this week; I was busy last week with KubeCon stuff going on. And if some folks from the containerd side can also help review the CRI changes, that would be useful as well.
B
Right. Also, if it helps: Mark Rossetti from Windows already reviewed and commented in the PR that the changes seem fine Windows-wise.
D
Yeah, sure. A little bit about this: there is a new issue that was recently found, and it's being discussed right now in collaboration with SIG Networking. Currently, if the kubelet marks pods as evicted, the IP address for them is in some cases still reported by the kubelet, so the Endpoints and EndpointSlice controllers keep publishing it and traffic is still being served to them.
D
It's not super clear to me yet whether this happens every time the pod is in a terminal phase or not, but it seems like this is a result of the 1.22 pod lifecycle refactor. There's a very small change there: when pods are marked evicted, the IP address for them is still reported, whereas prior to 1.22 it was not, and as a result the pod is still seen as ready to serve traffic. I'll add the link to the main difference between the new and old behavior right there in the doc.
D
So this is an issue. We pinged some folks from SIG Networking about it, and they're fixing the EndpointSlice controller and the Endpoints controller to change the logic of when they consider pods terminal. That PR solves the issue, and once we cherry-pick it back, probably to 1.22 and 1.23, it will actually be resolved.
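The controller-side fix being described can be sketched roughly as follows. This is an illustrative Python sketch, not the actual Kubernetes controller code; the function names and the dict-shaped pod objects are assumptions for illustration. The point is that the Endpoints/EndpointSlice logic should decide based on the pod's terminal phase, rather than relying on the kubelet having cleared the pod IP:

```python
# Illustrative sketch (not the actual Kubernetes source) of the controller-side
# fix: decide endpoint eligibility from the pod's phase, not from whether the
# kubelet happened to clear the pod IP after eviction.

def is_pod_terminal(pod: dict) -> bool:
    """A pod in the Succeeded or Failed phase (e.g. evicted) is terminal."""
    return pod["status"]["phase"] in ("Succeeded", "Failed")

def eligible_endpoints(pods: list) -> list:
    """Return pod IPs that should be published to an EndpointSlice."""
    return [
        pod["status"]["podIP"]
        for pod in pods
        if pod["status"].get("podIP") and not is_pod_terminal(pod)
    ]
```

With this rule, an evicted pod whose IP is still reported in its status is excluded from endpoints, which matches the fix described above.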
D
One of the larger things I think this issue raises, though, is that we don't have a very good contract or description of what exactly the pod lifecycle is, what the different phases are, and specifically what information is updated during each phase and what is not. It looks like a small change was made so that the IP address was reported after the pod was terminal, where before it wasn't, and that caused this issue. I guess there was no real contract in place, so the EndpointSlice controller was depending on that behavior, and then the behavior changed.
D
So one of the things this raises, and we've had some other discussions at KubeCon and in some other places where folks talked about it, is that what's missing is a well-defined contract between the kubelet and the API server, documented somewhere. I wanted to raise some discussion about what folks think is the best way to write it down, or to have some type of tests that exercise all the different lifecycle steps, so that we can avoid these types of situations in the future. I just wanted to bring this up and maybe get some thoughts from folks.
E
Yeah, so David Eads brought this up with me recently, over the past couple of days, and I found an issue with probes and readiness: on pod termination we're terminating all the probes, and so readiness doesn't get propagated back to the API during termination. I'm working on a PR for that. Generally, though, I do agree that we need to write documentation and test cases for all the pod lifecycle stuff; the readiness piece is something I'm fixing right now.
E
After talking to David and somebody else on networking, it looks like fixing the readiness probes on shutdown should fix their issue.
E
As part of the refactor, the probes are terminated on deletion from the API, so no probes are run on pod shutdown, whereas the readiness probe should probably still be running during that shutdown phase.
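The probe behavior being proposed can be sketched as follows. This is a hypothetical illustration in Python, not kubelet code, and `probes_to_keep` is an invented helper; it just encodes the rule described above, that on termination liveness and startup probes stop but readiness keeps running so a not-ready status can still propagate to the API server:

```python
# Hypothetical sketch (not kubelet code) of which probes keep running once a
# pod starts terminating: liveness and startup probes are stopped, but an
# already-running readiness probe survives, so "not ready" can still reach
# the API server during shutdown.

def probes_to_keep(probes: dict, terminating: bool) -> dict:
    """Map probe kind -> whether it should keep running."""
    if not terminating:
        return dict(probes)
    # During termination, only a readiness probe that was running keeps running.
    return {kind: running and kind == "readiness"
            for kind, running in probes.items()}
```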
A
To add on to what David just mentioned: we need better documentation and we need to more clearly define the contract. Another thing is that we recently started the testing coverage effort, right? So this should really be documented, and I'd add more test coverage. Just yesterday we tried to look at the reboot test and the shutdown test, because in the past we did have those tests, but over the years they have either been repurposed or removed.
A
So we need to make those more clear. And that's all from me. Did you raise your hand again?
D
Got it. So as a concrete next step, maybe what we can do is start some type of doc about the tests we currently have around the pod lifecycle and what they cover, and whether we need to add some other ones to capture that; then maybe we can convert those tests into documentation or something like that moving forward. Maybe that's something we should try to do during the 1.25 cycle.
F
We've got a state change diagram in some code, David, if you want to look at it. Yeah.
G
Another update on the In-Place Pod Vertical Scaling: the KEP merge is ready, and I think it's ready for your review. Were you planning to look at it? It's 1287 and 2283, the CRI KEP; I brought that CRI KEP into 1287 and merged them to have a single KEP, because at this point we are leaning towards putting both changes in one PR.
G
The other point I wanted to bring up: someone was reviewing the code as well and found a corner case where, if you do an in-place update of resources, say starting at resources A, then going to B, and then coming back to A, there was some code in there which would skip that. It's a bug in the way we were using the hash: it would do the A-to-B update, but the B-back-to-A update would not happen.
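The A-to-B-to-A corner case can be reconstructed in a minimal sketch, assuming a simplified model; this is not the actual kuberuntime manager code, and the class and function names are invented for illustration. If updates are skipped whenever the desired spec's hash has been seen before, reverting from B back to A is wrongly treated as already applied; comparing against the currently applied state instead avoids the skip:

```python
# Illustrative reconstruction (assumptions, not the real kubelet code) of a
# hash-based skip bug in in-place resize, and one way to fix it.

def resources_hash(resources: dict) -> int:
    """Stable hash of a resources dict, e.g. {"cpu": 1, "memory": 2}."""
    return hash(tuple(sorted(resources.items())))

class BuggyResizer:
    """Skips any resize whose hash was ever seen: A -> B -> A drops the revert."""
    def __init__(self, initial: dict):
        self.seen_hashes = {resources_hash(initial)}
        self.applied = dict(initial)

    def resize(self, desired: dict) -> bool:
        h = resources_hash(desired)
        if h in self.seen_hashes:      # BUG: A's hash was recorded at startup,
            return False               # so the B -> A revert is skipped
        self.seen_hashes.add(h)
        self.applied = dict(desired)
        return True

class FixedResizer(BuggyResizer):
    """Compares against what is actually applied, not a history of hashes."""
    def resize(self, desired: dict) -> bool:
        if resources_hash(desired) == resources_hash(self.applied):
            return False               # genuinely nothing to do
        self.applied = dict(desired)
        return True
```

The design point matches the discussion: a hash is fine for cheap change detection, but only when compared against the currently applied state rather than a set of previously seen values.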
G
Once we have the CRI change it would probably still have worked, but we should not be skipping it, so I made a small fix for that. I asked Derek to take a look as well; Lantao, if you can also please take a look.
G
It's just a couple of lines of change; I think you need to re-review the kuberuntime manager. There are two places in the code in the runtime manager: one is the hash, which I moved down, and the other is that I skipped a condition that checks whether spec resources equal status resources. We really don't need to block on that, because the right thing to do is to check whether the spec matches the current cgroup state, which is not reported right now; that's the chicken-and-egg problem that we have.
G
He responded to my changes; he had a bunch of items for me to fix, and those have been fixed. I'm waiting for his confirmation, and then I want to see the next steps: is it good enough to merge early and then flush out whatever issues remain, fixing as many of the to-dos as possible before we get to code freeze? It looks like we have a good runway. I'm going to be out in the second half of June for a conference in Austin, and preparing for that will also take a little bit of time.
A
Today I saw the update about the resizing subresource that you just mentioned here. Okay.
G
Yeah, I'm tracking all the issues that I know of and have classified them. I don't think we have any alpha blockers at this point, although there are many to-dos there that would be good to fix before we even go into alpha. We'll work on that in July; we have time.
H
Yeah, hey, just a quick update from me as well. Basically, I got a couple of reviews from Kitang and Ruben, and I think I also got an approval from Kitang, so at this point I'm pretty much just waiting for an overall review from Derek.
I
Yeah, how's it going, everyone? Can you hear me all right? I don't know if my mic is working correctly.
I
Hear you, okay. Great, awesome. So yeah, I was just coming over from security. We wanted to pick up one of these old issues that's been around for some time now, getting AppArmor over the line to GA, so I've taken a first stab at reworking Sasha's old KEP into the new format and updating it for the changes around the Pod Security Policy deprecation and things like that.
I
So I just wanted to swing by and get that in front of you all. If you wouldn't mind reviewing it and giving a sign-off: I don't think the workload is too heavy for it to need to be pushed off from 1.25, and I'd like to get it in, targeting 1.25, and get your thumbs-up on that. So yeah, if you have the time, I'd appreciate a review.