From YouTube: Kubernetes SIG Node 20210803
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B
I'm happy to do it. I think someone kicked off the recording, and I was the first one to log in. It is Tuesday, August 3rd, 2021. Welcome to SIG Node, everyone. I think our agenda is pretty short today. Sergey, do you want to start us off with the PR update?
C
Yeah, so again, if you haven't been watching what's happening in our branches, there is a link to what we did. We created a bunch of PRs; many of them are just work in progress or attempts to run some tests. You may have noticed that some tests were failing, and we've been debugging them. So out of the closed PRs, most of them are just PRs that were opened for test runs, plus a couple of rotten PRs.
C
I mentioned them beneath the table, in case you want to pick up a rotten PR; one of them is a performance optimization, another is some JSON-related security issue. Anyway, out of the merged PRs, it's mostly cherry-picks from previous branches or test fixes. Again, because we are in test freeze, this is expected, and our number of PRs is growing.
B
Thank you, Sergey. I just snuck a couple of announcement things onto the agenda. For those who are not following: tomorrow, in theory, should be the 1.22.0 release, yay. For the node burndown update, I wanted to give a big shout-out to Dims, who I don't think is on the call, for having worked through the weekend to try to make sure that the containerd serial tests were in a passable state for the 1.22 release.
B
There's a bunch of us that have been working on getting the tests into better shape, and I sent an email with some details. But as of yesterday morning, all of the remaining issues in the 1.22 milestone that were assigned to node were closed. So we are go on 1.22, which is great. Any questions about the release or the freeze or anything like that? We should be unfreezing by the end of the week, which is exciting.
B
Sounds like no. Next item on the agenda: we have Matthias asking about reviewer and approver rights. Take it away.
D
...a SIG Node meeting, but recently I've been joining the SIG Node CI meeting on Wednesdays, and I love that meeting. I love the people there, and I'm super excited to help them. Unfortunately, we seem to be missing a lot of approvals; I'm just referring to the tests. I know there are some general SIG Node approvers, but for the tests we might need some more.
D
So I was saying, okay, why not propose myself, and just come here and ask what it would take for me to get these rights, only for the tests: for the three OWNERS files referring to the tests?
D
I don't know. I heard from Bob Killen that SIG Node has been working on a process to formalize what needs to be done and how to grant permissions, and that you were looking at more specific permissions instead of full SIG Node approval, which I'm not requesting at all. So I don't know; maybe Derek?
F
...approvers on e2e, particularly those that are driving the CI subgroup. So I would encourage both Elana and Sergey, and anyone else, to put their names forward. I have seen your work individually and don't have an issue with your reviews in that area. So I would say, put a PR forward, and I would encourage other participants in the group to do the same.
F
The list in that branch got out of sync at one point. But I think it's great that you're participating, and I encourage you to open the PR.
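(For context: the approver and reviewer rights discussed here live in OWNERS files, the YAML files that Prow reads to decide who may /lgtm and /approve changes under a directory. Below is a minimal, hypothetical sketch of the kind of change such a PR makes; the path and usernames are placeholders, not anyone actually proposed in the meeting.)

```yaml
# test/e2e_node/OWNERS  (illustrative sketch only, not the real file)
approvers:
  - existing-sig-node-approver    # placeholder for a current approver
  - new-ci-subgroup-contributor   # the person adding themselves via the PR
reviewers:
  - existing-sig-node-approver
  - new-ci-subgroup-contributor
labels:
  - sig/node
```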
A
Yeah, I agree with Derek, but I feel a little bit sorry about that: I just came back from PTO and put this on the agenda. We did talk about finally breaking out the SIG Node approval status and making it clearer, breaking it into finer granularity.
A
We only have a limited number of existing approvers, and also because Derek was on PTO, we haven't finalized it yet, so we are going to continue. But I'm totally okay with breaking out e2e, especially e2e node and common, basically because both common and node actually come from SIG Node from day one; basically they are part of the node component. So we can break out the e2e tests.
A
There's another thing: the community repo. I think Sergey and Elana both did a lot of work for the community, and that theme can also be broken out. For node, there are also some other things, like documentation, that can be broken out; the rest of the stuff we can continue to discuss. There are also NPD and cAdvisor, which we already break out. So there are more things we can break out, and then there's the core Kubernetes SIG Node ownership.
F
So, just to clarify: Elana, you've been leading that CI subgroup, and I assume there's no objection and you're also open to that. And then, Sergey, honestly, I would encourage you to do the same. If the CI subgroup meeting has other participants that they'd want to elevate, I'd look to that group to put forward additional candidates. But I think this is a great step in improving a lot of things.
B
Yeah, I certainly can't take all the credit; Sergey has been doing a lot of the work and showing up every week. So yeah, I can certainly do that. I know, Dawn, you also mentioned, I think, the community repo. I've had a PR open to add myself to an OWNERS file there for node for quite some time. So, exactly.
A
That's exactly it, and for that one, I did also talk to Sergey; both of you two can be on it. But just because we paused the process, we didn't finish it fully. This is basically what's top of mind; also, NPD and cAdvisor we actually treat differently, and we already have those things going. So that's what's top of mind that we can break out, but unfortunately we didn't finish the process.
A
So since Derek is here, and we all agree on the e2e tests and the community repo, I think let's just start from those two. Then we can continue behind the scenes with the existing top-level approvers, have the meeting, finish the process, and publish it to the community. So yeah.
B
Ben Elder and Aaron Crickenberger also both reached out to me and suggested that I apply for approver in test-infra. Are there any issues with that?
F
No, actually, I was going to raise that. That's the other one that is very useful, because for me personally it gets lost in a sea of notifications, but I know we occasionally have to go through and do that. So I would like test-infra to be uniform with the e2e node owners, because you often need to do both; that's another area I'd like to see get aligned. I guess: does that sound good?
A
But Derek, I believe that's more from SIG Testing. This is exactly where the misalignment was first created, because they have SIG Testing, and the SIG Testing approvals go through there.
F
In my mind, they go hand in hand, so I think if you have approver on the e2e node test suite, you should have approver in test-infra.
F
There are pockets of top-level approver that I think, Dawn, you and I had wanted to work our way through. I'm happy to do that in this public setting right now, and we can follow up on the rest with the existing approvers when we can get scheduled together.
C
Just want to mention that from the CI subgroup we also suggested Morgan apply for approver, and we just need to follow up with him again. And yeah.
G
Yes. So with the help of Derek and Mrunal, we've been reworking the checkpoint/restore KEP during the last few months, and I think we are now at the point where it's ready for additional reviews and, if everyone agrees, ready to be merged. The main thing we looked into is how to handle the checkpoint images.
G
For the images of the checkpointed containers, the idea, which came from the SIG Node meeting, is to not use local storage, as I implemented it initially, but to push checkpointed containers to a registry. To restore a container, we just specify the image in a registry, and then the container engine pulls the checkpoint image from the registry and restores the container from the checkpoint. The other thing we agreed on is to do the whole thing in small steps.
G
My current proof-of-concept PRs are like two or three really big pull requests, which are really hard to review, I suppose. So the goal is to start from the bottom.
G
Now that we have the KEP, we introduce checkpoint/restore from the bottom up: first in the kubelet, and once we have it working there, we go to higher levels, depending on the feedback we get from the community and how the feature is accepted by people, whether they need more of it, or if it's...
F
So, Adrian, I know when we were getting together internally at Red Hat to talk through the use case, there were a number of questions where I pushed back on your internal document: what happens if I have more than one container, what happens if I'm running it in a container, that type of thing. Just quickly looking at the KEP here, did you get those details put into the public doc, or am I missing that? That wasn't there? Okay, I hope I did.
F
But let me look, because the other thing, just to capture it: I thought in our discussions we were interested in being able to checkpoint a pod as a first step, but not necessarily restore that pod. I don't know about the broader community, but one area...
F
...that's of potential interest is being able to checkpoint a running pod for later forensics, to understand what was happening on that system. Conceptually, that would let you checkpoint a running application, stick the state of the application in some image, and then later, even potentially outside of Kubernetes itself, you could re-run from that checkpoint to see what was going on. That was interesting to us from a forensics standpoint, to work through some use cases like that.
F
I was curious if there are other lower-level use cases across the community that people might be interested in, where just basic checkpoint without restore might be useful. And hopefully that was captured in your update. Yes?
G
It's all there. I just had a look. It's definitely the updated version; it should be all there.
A
I think, Derek, the particularly interesting use case is a follow-up from last time. I think last time the concern also came from you and Mrunal, because the way we think about this is that the checkpoint and restore feature is useful, but the concept is that we're only doing the low-level part, and we are not involved with the cluster-level, live-migration things. I think that's where the concern came from.
F
The use case that I'm interested in facilitating, or thinking through, as one basic one, is that a cluster administrator can checkpoint an application without the application being aware that it was checkpointed, and then potentially reconstitute that application in a sandboxed environment, even potentially outside of Kubernetes. So you can imagine a flow where I could checkpoint an application by going to the node and saying "checkpoint the thing", getting the state captured in an image, pushing that to a registry, and then running a different crictl command or just a podman run command.
F
I don't care what you run to re-run that app, to maybe do forensics on it. So that's a very useful security flow which alleviates the need to do restore in that running cluster, which is not, at least from my perspective at Red Hat right now, a near-term priority, versus wanting to understand, in a security situation, whether someone has penetrated a system. That is where I know Mrunal and I were very interested in understanding a first step for checkpoint at a very low level as a use case.
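(A minimal sketch of what that forensic flow could look like from an administrator's shell, assuming a hypothetical checkpoint verb in crictl; the command names and flags are illustrative of the proposal being discussed, not a shipped interface.)

```sh
# 1. On the node, ask the runtime to checkpoint the running container.
#    "crictl checkpoint" is hypothetical here; the KEP's point is to add
#    such a checkpoint operation to the CRI so tooling like this can exist.
crictl checkpoint --export=/var/tmp/webapp-checkpoint.tar "$CONTAINER_ID"

# 2. Move the archive into a sandboxed forensics machine (pushing it to a
#    registry as an OCI image is the variant discussed above).
scp node01:/var/tmp/webapp-checkpoint.tar .

# 3. Re-run the workload from the checkpoint outside the cluster and inspect
#    it; archive-format compatibility between tools is exactly the kind of
#    detail the KEP still has to pin down.
podman container restore --import=webapp-checkpoint.tar
```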
E
And also, Dawn, this allows us to figure out any issues at the lower level, take an incremental step, and then build on top of it.
A
I agree, because last time I did think about that, and that's right: even without the cluster-level lifecycle orchestration, we can reproduce a production issue for debugging. But I hadn't come up with this security use case, so I'm glad we reached agreement that even without the cluster-level orchestration and all that kind of integration, with the feature owned only on the node side, there are use cases not just limited to debugging and reproduction.
F
Yeah, so I think what we at Red Hat would be interested in understanding is whether there are others in the industry who would be exploring, or interested in exploring, similar forensic analysis: if someone penetrated a Kubernetes cluster and was doing something with your pod that you weren't expecting, do you have similar use cases where you'd want to be able to detect that without the user inside that pod seeing it happen? Checkpointing and being able to do analysis: is that a use case that's of mid-term interest?
F
From a practical standpoint, Adrian, I think the low-level use case requires a checkpoint operation in the CRI, yeah. And from a goals perspective, I...
F
...want to make sure: is that something you had wanted to push in the 1.23 release as the thing that we, you know, had blocked on? Because right now we don't have an experimental section to place that checkpoint call, even though I know you've done some prototyping. I guess from my perspective, I'd like to see us be able to make incremental progress on this in 1.23.
G
Yeah, I think that sounds doable for the next release: the checkpoint-only implementation. I think not much is actually required on the kubelet side to get this working; most of the work is done in the container engine to do the checkpointing, and the kubelet would only trigger it. So...
G
I didn't; I just updated it for the more general questions. But I can update the KEP to mention this forensics workflow and highlight that we want to introduce only the checkpointing in the first step, and also make sure that I include it in the user stories, because I'm not sure, if I go over it right now, that I have done the user stories update. But I can do that, and then I can...
E
Yeah, and you can also send a mail to the SIG Node mailing list.
A
Okay, let me ask one thing while we are looking only at the checkpoint; I want to make sure I understand. Basically, most of the change is on the container runtime side, or we can even think about the even lower level: runc can do most of the work, right? So basically, today runc already has the support; with a proper kernel and a proper CRIU and runc installation, you basically already can do the checkpoint and restore.
A
So what we are doing here is just a hook to make sure we can send the signal to freeze everything, within either the pod or all the related containers, and do the checkpoint, but without restore. If you think about it, once we have that checkpoint, we already have container runtimes that can download those checkpoints, restore those containers, and run things. So that's how it covers the use cases; I just didn't understand that at first.
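(For reference, the low-level capability Dawn is describing is already exposed by runc on top of CRIU. A minimal sketch, assuming CRIU is installed and a container named "web" was started with runc; paths and names are placeholders.)

```sh
# Freeze the container's process tree and dump its state to disk via CRIU.
# This is the existing runc capability the KEP builds on; no Kubernetes involved.
runc checkpoint --image-path /var/tmp/web-ckpt web

# Later (or elsewhere), recreate the container from the dumped state.
# Run from the container's bundle directory, or point --bundle at it.
runc restore --image-path /var/tmp/web-ckpt --bundle /var/lib/web-bundle web-restored
```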
F
There were a bunch of issues just on checkpointing alone: what content is included in that checkpoint, is secret content included in that checkpoint or not, and then, if there were multiple containers in an application and those containers use storage, is there a particular nuance around restore that one has to be concerned about? The big issue, Dawn, for restore, that Mrunal and Adrian and I saw when talking through it, was...
F
...you almost want to be able to say, when you restore a pod from a checkpoint, that the images your container specs reference are not the same as in the first pod that you wrote, because you have to restore from the archive, essentially. Having a way to message that down through the pod spec, to say that only on the first run of this pod you use this container image, but not on restart boundaries, and that type of thing: those are details that we wanted to basically defer, because we saw a lot of value in just being able to ask the kubelet to do the freeze and get the checkpoint state in the interim, for security.
G
I can also add what the value-add from the KEP is. Basically, currently runc can do the checkpoint, but the container engines are currently not able to handle checkpointing containers which are running in a pod with shared namespaces, so that doesn't work.
G
So the value-add is basically that we agree that we want to add it to the CRI API, and once it's in the CRI API, I can do the work in the container engine and do the implementation based on this. From my point of view, the main value-add is deciding that we want to have it in the CRI API, and then I can continue to work there.
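(A minimal sketch of what such a CRI addition might look like, in the protobuf style the CRI runtime service is defined in; the RPC name and fields are illustrative of the idea under discussion, not the final KEP API.)

```proto
// Hypothetical addition to the CRI RuntimeService: a checkpoint-only RPC
// that the kubelet could call to have the container engine (which in turn
// drives runc/CRIU) dump a running container's state to an archive.
service RuntimeService {
    rpc CheckpointContainer(CheckpointContainerRequest)
        returns (CheckpointContainerResponse) {}
}

message CheckpointContainerRequest {
    // ID of the container to checkpoint.
    string container_id = 1;
    // Where to write the checkpoint archive on the node.
    string location = 2;
}

message CheckpointContainerResponse {}
```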
A
I totally get what you're saying. Earlier I was just wondering why, because I was thinking about restore: once you want to restore, you may find you need to change the checkpoint, that certain things are missing to properly restore. That's why I felt that if we just checkpoint without restore, we may not have the right API. But with Derek's plan, I thought about it more and realized we actually don't really need that.
A
This is enough for us to move forward, and anyway, this is not in the pod API, not the pod spec, so it's not a concern. My original concern was that if something popped up into the pod spec or whatever, the API would have to be solid; it would have to be end to end. So I was worried we'd have a half-cooked API. Yeah.
F
Particularly with the init containers, because you'd want to restore from the state of the init container having already completed, and there's a complexity there to reason through, versus the more basic use case: I was running a stateless application, I wanted to understand if it had an issue from a security-attack standpoint, I checkpoint it, and then I can go launch it in my other container runtime of choice, even outside of kube. I think that is very valuable.
G
Okay, thanks for the feedback. I will update the KEP and let everyone know. Thanks.
A
So this is one of the KEPs that is hopefully currently targeted for 1.23, right? That's the one we are trying to look out for. Sorry, do you have any questions on this one? On the other side, I realized maybe we need to talk about 1.23 KEP candidates, yeah.
B
I think that we should be careful. I mean, based on the retro we kind of did at a previous node meeting, I'd like to be careful that we don't overcommit, because I saw we were pretty stretched thin in terms of reviewing and approving resources. Also, if possible, I'd like to avoid everything landing on the day of code freeze again. I think we really need to stagger it, because it ended up with me and Dims and Clayton and a few others spending...
B
...you know, the entire four weeks of code freeze just chasing down seriously broken tests. Danielle found a really big bug as well, and it was really, really hard to try to pinpoint where the breakage started, because everything landed the same day. So yeah, I think let's not try to do too much, and let's not all do it on the last day; that's what I'd advocate for.
F
All right, cool, so we'll do that next week. If folks have particular topics they want to bring forward, let's do that. I think I liked how we handled this the last couple of releases, so hopefully we can continue that going forward. But is there anything else that we want to discuss? Otherwise we can give people back their half hour, which I will appreciate as I continue to catch up from the hole I've dug by being out.
B
Yeah, I put a note in chat: there are no additional agenda items, so unless someone has a last-minute thing, we can call it.
F
Awesome. I also believe there are probably a couple of weekly meetings that were recorded while I was away that I didn't see. I will work to get all the latest updates onto the YouTube channel this week, so appreciate the patience. Bye.