From YouTube: Kubernetes SIG Node 20210727
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello, everyone, and welcome to today's edition of SIG Node. It is Tuesday, July 27th, 2021. Sergey, do you want to start us off with PR status?
B
Yeah, definitely. So if you didn't watch which PRs merged this week, you can look at the agenda document; there is a nice summary. We're actually merging some PRs these days, which is good, and most closed PRs are self-closed, not rotten PRs. So it's a good state when we are not getting PR rot, and I'm super glad about it. So yeah, take a look if you're interested.
B
Obviously we are in code freeze and test freeze, so there will be not that many PRs being merged these days. So yeah.
A
I think a bunch of them are backports to the 1.22 branch too. So anything right now that is hitting master, if you want it to go into 1.22, must be backported, which I guess brings us to our next announcements for 1.22. Docs freeze is today, so if you have documentation for your PR, or rather for your KEP feature, that kind of thing, I believe your PR has to be merged by end of day today. So please make sure that you go over and are being responsive to SIG Docs.
A
If this is a requirement for your particular feature. The next thing on the agenda is the 1.22 burndown. We have two release blockers, so I thought it might make sense to talk about them a little bit. I can share my screen.
A
So we have two open. I will look at this one first. I believe this is a SIG Storage issue, but I threw SIG Node on here to make sure that we are keeping an eye on it.
A
This is the list. Peter, are you on the call? Peter Hunt?
A
Looks like no... oh, hi.
C
Yeah, I think it's perma-failing, and no, I don't know.
A
Great. And then I think that we may punt this one, and this one will probably get punted. So I think it's just those three that are still on deck, and we don't have PRs open from what I can see, so there is still work to be done. I think our tentative release date is next Tuesday... Wednesday; Wednesday, it's the fourth, so yeah, next Wednesday.
A
So there is still some work to be done, but please make sure that you're keeping an eye on these issues and PRs. Any questions about the 1.22 burndown or what's going on with release blockers?
D
A
Your audio is a little bit garbled. Sorry; I'll move this to next week.
D
Okay, can you hear me better now?
A
D
I selected the wrong microphone. So yes, this is about introducing checkpoint/restore. This was the basis, the basic discussion, about getting container migration working at some point, and so this is just the initial step to get an understanding of how it could look. And with Derek's and Mrunal's help, I think we are now at a point where the KEP is in a form where it could be merged, so that we can continue working on it.
D
So this is basically a document of, I don't know, of understanding how it should look once we start merging code. And one of the results from the discussion was: once the KEP is merged...
D
We want to do it in really small steps, because the proof-of-concept PRs I opened are really, really big. The goal for the actual code to get into the kubelet and so on is to do it in small steps, to see if it works as expected. In the proof of concept it worked, but it's big, so to make it easier to review, one of the results was to make...
A
And have you taken a look at what that might look like after the pod lifecycle refactor in 1.22?
D
No, no, I haven't touched the code in probably, I don't know, four months or so. I just focused on the KEP in the last few weeks, and I haven't rebased the code. Okay.
A
I would strongly recommend that, because Clayton basically rewrote the entire workflow, so it might... okay.
D
Yes, but from what we discussed, the existing PRs will be just a starting point, and I will rewrite everything I have done anyway. But good to know that it will be different from what I have currently. Thanks.
A
Great, thanks for introducing that.
B
Yeah, I think last time the discussion was about both the lifecycle and what will be stored on different nodes, if anything. Is it resolved now? Is it part of the KEP?
D
Yes, yes, so we have some ideas of how the pod lifecycle will look, and we went into a discussion about init containers and sidecar containers.
D
But I think one of the important results, from a lifecycle point of view, was that the checkpoint image will be stored in a registry. So we can transfer the checkpoint image; we do not need any local storage to move it from one node to another node.
D
We will use a registry as the storage back end. And the lifecycle, as it is currently described in the KEP, is basically: we can checkpoint a container, as long as it does not touch external devices like GPUs or InfiniBand or SR-IOV things, at any point once the init containers have finished running. And when we restore it, it will continue running from that point, on the same node or on another node.
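The checkpointability rule just described can be sketched as a tiny predicate. This is purely illustrative; the names and shapes below are hypothetical and are not the kubelet or KEP API:

```python
# Hypothetical sketch of the rule above: a container can be checkpointed
# at any point after its init containers have finished, as long as it
# touches no external devices (GPUs, InfiniBand, SR-IOV, and so on).

FORBIDDEN_DEVICES = {"gpu", "infiniband", "sr-iov"}

def can_checkpoint(devices: set, init_containers_finished: bool) -> bool:
    """Return True when checkpointing is allowed under the sketched rule."""
    return init_containers_finished and not (devices & FORBIDDEN_DEVICES)

assert can_checkpoint(set(), True)                  # plain container, init done
assert not can_checkpoint({"gpu"}, True)            # touches a GPU
assert not can_checkpoint(set(), False)             # init containers still running
```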
E
All right, I'm briefly skimming through the KEP. Is this a case where you're actually checkpointing the running container's image?
D
Yes. So yeah, we checkpoint the processes, the memory pages of all processes, and we take a diff from the OCI image. So whatever files have changed, we transfer, and whatever the process state is, so memory pages and everything, that's what we transfer from one node to another node, if you talk about migration; or, if you talk about reboot, it will be available after the reboot again.
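The checkpoint contents described here, a filesystem diff against the base OCI image plus the captured process state, can be modeled as a toy sketch. The dict-based representation and function names below are illustrative only, not the actual CRIU or kubelet implementation:

```python
# Toy model of the checkpoint described above: only the files that differ
# from the base OCI image, plus the process/memory state, travel between
# nodes; the base image itself is already pullable from the registry.

def make_checkpoint(base_image: dict, container_fs: dict, process_state: dict) -> dict:
    """Bundle (files changed vs. base image) + process state."""
    fs_diff = {path: data for path, data in container_fs.items()
               if base_image.get(path) != data}
    return {"fs_diff": fs_diff, "process_state": process_state}

def restore(base_image: dict, checkpoint: dict) -> tuple:
    """On the target node: rebuild the filesystem from base image + diff."""
    fs = dict(base_image)
    fs.update(checkpoint["fs_diff"])
    return fs, checkpoint["process_state"]

# Only /data/log changed, so only it rides in the checkpoint.
base = {"/bin/app": "binary", "/data/log": ""}
running = {"/bin/app": "binary", "/data/log": "42 requests served"}
ckpt = make_checkpoint(base, running, {"counter": 42})

fs, state = restore(base, ckpt)
assert ckpt["fs_diff"] == {"/data/log": "42 requests served"}
assert fs == running and state == {"counter": 42}
```

This is why no local storage is needed for migration: the diff and process state go through the registry, and the unchanged layers come from the base image already stored there.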
E
Okay, so related to this: I have a PR that's going on right now regarding resizing the pods in place, and there are a couple of things I've added as checkpoints as well, in the status. I think it might be worth skimming through the PR; I'll post the link in the chat.
D
Oh, maybe I... okay, this is the wrong one. I linked the issue of another KEP. So, I think each KEP needs to have an issue, so I opened the issue. And I first created the KEP, then the issue, and I think I should have put the link to the KEP in here, and now... to the issue... or... I'm confused right now, I'm not sure.
A
Yeah, I added a reference to it from the PR at some point, so the PR is 1990.
A
I'll put that in the notes. Any other questions on this?
A
Sounds like no. Moving along: Vinay, do you want to talk about in-place pod vertical scaling?
E
Yeah, this is just a quick status update. We got some feedback, I got some feedback, from the scheduling SIG, and they brought up some items that were overlooked earlier. So the changes to the scheduler are going to be somewhat bigger than the single-file, couple-of-lines change that I'd initially imagined.
E
So they were wondering... they're okay with either way, but they would like to have a separate PR for the scheduler changes, one that goes in, you know, in quick succession to the main PR that we have in work right now. Would that be okay with you or release management? Or do you prefer to do...
A
The risk is, for example, if it does not land and only, say, one of the PRs has merged, then we do have to back out anything that's merged. So it's either all got to go or none of it's got to go. So the earlier that it's merged in the 1.23 release cycle, the less likely that would happen, and probably the better.
E
Okay, so I'll take that as a maybe, as in you have no strong objection to doing that, and we might just break it out into a separate PR so it's easier for them to review, and they don't have to look at all the other code; it just goes on top of the existing one. And since we're planning to get it in pretty early, we should be good on that. I started working on the changes that Lantao suggested, and I'll probably have questions for him.
E
Will do, thanks. That was all from me, unless somebody has questions.
A
Going once, going twice. I think let's call it for today. Thanks, everybody, for joining, and good luck with getting everything in before code freeze.