From YouTube: Kubernetes SIG Node 20211102
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hey everyone, and welcome to today's edition of SIG Node. It is Tuesday, November 2nd, 2021; I can't believe it's November. We've got a short agenda for today, in some part probably due to the fact that the agenda file went read-only; we're very sorry for that. The Google account it was stored on briefly ran out of disk space. It's editable now, we figured it out, no cause for alarm.
A: Sergey could not make it to the meeting today, so we don't have the table of what's going on with various pull requests, but I did take a look at the total number of active pull requests and we're up a few. I think people have been somewhat resource-constrained, but I know from last meeting's review it looked like we had done pretty well in terms of our sort of soft code freeze and making sure that beta stuff got merged, so that was good.
A: I have one announcement before we jump into the agenda, which is that code freeze is two weeks from now. It's sneaking up on us, so we probably want to make sure that alpha stuff is getting reviewed and that we're getting feedback in. I'm certainly going to try to prioritize some time on that this week; other reviewers, a heads-up to you as well. And for authors, please make sure that your thing is reviewable.
A: Make sure it's not marked work-in-progress and that everything that needs to be there is there, and if you need to ping reviewers, please do so in the #pr-reviews channel on the Kubernetes Slack. Any other announcements before I dive into today's agenda?
B: Yeah, if you're eligible, please vote in the steering committee elections too; the polls close in two days.
A: Yeah, that's a great announcement. If you haven't voted yet in the election this year, they are not sending out individual emails; you have to go and actually click on the link which was sent to the dev mailing list, so I'll add a link to that to the agenda. Thanks for the reminder, Mark.
A: And you should be eligible if you have more than 50 contributions per devstats in the past year, or if you applied for an exception, but I think exceptions have closed. So, okay, moving right on into the agenda: our first item is from username rata, the user namespaces KEP.
C: Take it away. Yeah, hello, do you hear me fine?
C: Cool, because my audio was not working a few minutes ago. Great. So yeah, we opened a KEP long ago, and we received a lot of proposals, a lot of ideas. I created a new proposal that incorporates almost all the feedback from the previous discussions, and I created some slides that I couldn't add to the agenda because it was read-only, but I can share the screen with you. I have like three or four slides to show the high-level idea of the proposal.
C: Great. So, as I was saying, we started a KEP about a year ago, and there were a lot of very valuable discussions, and we created a new proposal, mentioned in a GitHub comment in that PR, that incorporates all this feedback. I wanted to share this in the next slides and coordinate what the next steps are to iterate on this idea, or to agree, or see how to continue with this.
C: So, going straight to the proposal. I also have some slides, if you prefer, about what user namespaces are and why they are important, and some other background; if you want, I can jump to them. Let me...
C: Stop me whenever you want, but basically the idea is adding two fields to the pod spec. All the names are preliminary, of course, but one field would be about using user namespaces, yes or no, like a bool, and another field would be to improve pod isolation. This will make sense in the next slides, on why we have these two fields.
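[Note: a minimal Go sketch of the two pod-spec fields being described. Both field names here, UserNamespace and PodLevelIsolation, are hypothetical placeholders invented for illustration; the speaker says the proposal's names were still preliminary.]

    package api

    // Hypothetical sketch only; not the KEP's final API.
    type PodSpec struct {
        // ...existing fields elided...

        // UserNamespace opts the pod in to a new user namespace
        // ("yes or no, like a bool").
        UserNamespace *bool `json:"userNamespace,omitempty"`

        // PodLevelIsolation asks for a UID/GID mapping that does not
        // overlap with other pods (the "improve pod isolation" field;
        // see the phases discussed below).
        PodLevelIsolation *bool `json:"podLevelIsolation,omitempty"`
    }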
C: So basically we have, for example, phase one, which is user namespace support for pods without volumes, and I want to emphasize that "without", because pods that have volumes but don't share them with anyone else, like an emptyDir, a configMap, a secret, a downwardAPI, or a projected volume, can use it. The user namespace will have a mapping to user IDs on the host, and this mapping will not overlap with any other pod, so this will give us more isolation between pods.
A: I have a quick question: in the past release, we added support for running in rootless mode. Does that already cover phase one, pods without volumes?
C
No,
it's
it's
usually
in
space
is
a
word,
but
but
at
a
different
level,
with
rulers
usually
run
like
the
cube.
Let's
run
c
inside
the
username
space.
D: I would say that we will probably need to keep those two separate. Solving for the whole Kubernetes control plane to be rootless is a very different and way harder problem than running the workloads with user namespaces, and I think that itself has three phases; rootless is a whole different ballgame.
A: Yeah, the reason that I ask, Mrunal, is because there are a bunch of CVEs cited in the comments and the KEP where it's unclear to me if the issue is specific to the workloads being privileged, or if the components being privileged are the issue.
A: There's a bit of fuzziness there, so mostly I want to make sure that we're trying to solve the right thing and we're not necessarily mixing the two.
E: Yeah, so the kubelet needs to be able to start pods that require root on the host. When we explored running a rootless kubelet, you were already running only a subset of valid pod specs. So I agree with Mrunal, this is distinct, but yeah, good question to at least clarify: a rootless kubelet would not be able to execute a pod that requested the host's user namespace, or at least not if the runtime it interacted with wasn't rootless.
C: Okay, so in all the phases the idea is creating a user namespace that the pod will use. In phase one the idea is to support pods without volumes, because in pods where you don't share files, you can use different mappings for the user namespace that do not overlap with other pods, and you have more isolation; as you don't have to share files, you can do that. Of course, this doesn't work for all the workloads.
E: Rodrigo, I like this phase one, honestly. I don't know if Mrunal or anyone else wants to talk about it, but we had a similar capability in CRI-O itself that allowed you to run the pod rootless.
E: [inaudible question]
C: Not a particular use case, but we have a lot of pods with only this kind of volumes, and having a good level of isolation for them is nice, because we have a lot of web applications and things like that that access a shared database, and they don't have state persisted on the pods, only in the database and that stuff.
C
And
yeah,
I
think
also
this
phase.
One
was
something
that
was
mentioned
in
the
github
discussion,
but
also
what
you
what
you
said
about
the
cryo
annotation.
It's
something
that
gcp
that
I
think
joined
this
meeting
to
hopefully
also
mentioned
recently
like
two
days
ago
in
in
the
github
pr.
C
Yeah
these
faces
do
not
incorporate
the
cryo
learnings
that
gizabet
shared
in
any
way,
because
I
I
think
the
comment
was
scarce
for
me
to
understand
what
what
he
said,
but
I
kind
of
we
can
definitely
try
to
improve
it,
but
yeah
basically,
phase
one
would
be
parts
without
volumes
and
phase
two
would
allow
more
workloads
and
it
will
be
basically
very
similar
to
phase
one
just
that
when
the
part
has
a
different
kind
of
volumes.
D: [inaudible question]
C: No, no. Yes, very good question. The mappings in this proposal are picked by the kubelet, so the kubelet can easily guarantee that there are no overlapping pods, and there will be some UID space reserved for the host to use. The idea is also that, with this range given to the kubelet, the host can guarantee that there is no overlap with the user IDs used on the file system, using ID-mapped mounts.
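[Note: an illustrative Go sketch of the allocation idea just described: the kubelet owns the host UID space above a reserved block and hands each pod its own non-overlapping sub-range. All names and constants are invented for illustration, not taken from the KEP.]

    package main

    import "fmt"

    const (
        hostReserved = 65536 // UIDs below this stay reserved for the host
        perPodSize   = 65536 // one 64Ki-wide mapping per pod
    )

    type allocator struct{ next uint32 }

    // allocate hands out the next free, non-overlapping host UID range.
    func (a *allocator) allocate(pod string) uint32 {
        start := uint32(hostReserved) + a.next*perPodSize
        a.next++
        fmt.Printf("%s: container UIDs 0-%d map to host UIDs %d-%d\n",
            pod, perPodSize-1, start, start+perPodSize-1)
        return start
    }

    func main() {
        a := &allocator{}
        a.allocate("pod-a") // host 65536-131071
        a.allocate("pod-b") // host 131072-196607, no overlap with pod-a
    }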
C: It's something that I mentioned in the GitHub PR, but it's basically very hard. ID-mapped mounts are very nice, but they are file-system specific and you need super new kernels, and only a few file systems are supported, like ext4, btrfs, FAT, and XFS.
C: But I think that's the first place we can start using ID-mapped mounts, because we control the partition where the container runtime runs on the host and downloads the images. And if that partition is on a supported file system, and with a new kernel, then we can remove the storage and performance overhead of using user namespaces.
F: I just want to point out that, for the Kata plus gVisor use cases, this might actually be done at the container runtime level. Maybe you can correct me here if I remember it wrong, so we need to double-check: if this UID remapping is done by Kubernetes, maybe there's a conflict.
F
So
so
the
container
runtime,
I
think,
directly
also
common
here
and
and
the
mapping
actually
done
by
the
container
runtime.
Not
so
right
now,
if
kubernetes
pick
up
this
remapping
things,
and
so
there
might
be
conflict
with
the
container
rental,
so
the
both
container
replaced
the
devices
and
also
cutter
use.
Cases
so
may
might
be
conflict
here.
But
I
need
because
that's
the
couple
years
ago,
when
we
designed
because
that
time
user
naming
space
this
project
haven't
started.
F
Yet
we
have
so
we
are
doing
the
username
space
to
the
container
runtime
both
for
qatar
and
also
container
deposit
us
use
cases,
so
that
back
then
I
need
the.
I
need
to
refresh
my
memory
on
those
things.
Yeah.
C
Yeah,
we
can
look
at
that
later.
I
think
I
mentioned
in
the
give
github
vr,
but
I
can
mention
it
afterwards
also
why
I
think
this
should
work,
but
we
definitely
need
need
to
have
a
look
good
to
me
from
other
content
like
vm,
runtimes,
runtime
containers
and
things
like
that.
C: You will need to use the second field that we added to the pod spec. The mapping that we will expose to the container, we don't know yet, because we don't have the ranges defined, but it will be less than sixty-four thousand, because when we want pod-to-pod isolation, we want to give each pod a mapping according to the namespace or service account they are in, and those are cluster-scoped, not node-scoped, so we need to guarantee... yeah.
C
So
if
we
give
64
000
to
this
will
will
run
into
cluster
limits
very
shortly.
So
we
need
to
give
something
less
but
yeah.
The
advantage
is
that
is
that
the
mapping
overlap
will
overlap,
but
only
with
parts
in
the
same
name
of
service
accounts,
so
you
have
more
more
isolation
there.
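[Note: the arithmetic behind "we will run into cluster limits very shortly", as a tiny runnable Go check. 32-bit host UIDs divided into 64Ki-wide non-overlapping mappings leave only 65,536 distinct ranges in total, which is why the proposal hands out something smaller per mapping.]

    package main

    import "fmt"

    func main() {
        const uidSpace = int64(1) << 32 // host UIDs are 32-bit
        const perMapping = int64(64 * 1024)
        fmt.Println(uidSpace / perMapping) // 65536 distinct mappings total
    }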
C: It might not work for all workloads, because we need to expose a small number of user IDs to the container, and things like that. And if we give a mapping per service account or per namespace, you won't be able to share across service accounts or namespaces. Maybe that's not very common, but it's a limitation by design of this approach.
E: I was just trying to refresh my own memory related to phase three, which was: is there... Rodrigo, I was thinking: what we're effectively saying is that pods, by definition, would share a common user namespace, whereas today they don't have to share a common PID namespace. And so I was just trying to redo the mental exercise of asking...
C: Yeah, yeah. So basically, phase one is pods without volumes, or rather pods with volumes that are not shared; phase two is just a small twist over phase one; and phase three is more pod-to-pod isolation, where we try to not share the mappings unless there are some reasons that sharing might be beneficial, like sharing files or things like that between pods. But that restricts the number of workloads that can use it, because we don't have any other way around it.
C
This
is
the
the
basic
idea
I
think
yeah
here
is
the
list
of
some
vulnerabilities
that
are
not
possible
with
any
of
the
phases
like
phase
one
phase,
two
or
phase
three
that
will
be
like
not
applicable
or
completely
mitigated
like
the
first
cross-account
container.
C
It's
like
zero
root
in
the
house
is
not
mapped
to
the
container,
and
things
like
that,
and
there
are
several
vulnerabilities
in
coronet-
is
one
very
recent
like
about
the
subpath
volume.
There
are
always
vulnerabilities
with
sub-path
and
usually
mitigation
is
to
not
run
continuous
as
root,
because
root
has
all
the
privileges
on
the
host
and
can
read
any
file
and
have
like
attack
override
and
those
capabilities.
C
With
any
of
these
proposals
that
the
override
and
those
capabilities
you
don't
have
them
on
the
host
and
no
user
is
root
on
the
cost,
so
they're
mitigated
in.
In
that
regard,
I
think
that
is
yeah.
That
is
all
thanks
for
your
time.
G: Yeah, my only doubt is who should pick the range for the IDs. From my playing with CRI-O, I see the advantage if this logic is in the runtime, because it has a better view of, for example, the image used for the container, so it can pick a better range than what the kubelet can do.
G
For
example,
in
cryo
we
inspect
the
container
image
and
we
pick
a
range
that
honors
the
ownership
of
the
files
in
the
image,
as
well
as
the
files
the
users
defined,
the
ac
password
for
so
yeah.
This
is
this
will
be
my
only
well
dubbed
at
this
point
if
it
would
make
more
sense
to
have
this
logic
in
the
runtime.
G
One
issue,
though,
is
yeah
is
that
the
range
must
be
picked
at
pod
creation
time
and
they
and
and
still
it's
not
known
what
containers
will
be
added
to
the
pod.
So
at
the
moment,
the
limitation
we
have
in
cryos
that
the
inspection
can
be
done
only
for
the
body
image
which
it's
not
very
helpful,
but
so
yeah.
This
is
one
limitation.
I
yeah.
H: Yeah, if I remember correctly, containerd has a different behavior. For containerd, I remember the image download and unpack are separate: for download you just download the content, and on unpack, for actually creating a container, you create a snapshot. And if I remember correctly, each container has its own top-level snapshot, and you can do the UID mapping there.
H
So
if
I
remember
correctly
for
kinetic
it
can
be
per
part
and
port
container
and
the
ui,
the
mapping
can
happen
at
runtime
when
you
create
a
container.
So
I
just
want
to
point
out
there.
There
may
be
a
information
difference
for
different
runtime
and
focus.
If
I
remember
correctly,
it
might
be
possible
to
support
per
part
and
per
container.
F
I
think
I
think
the
container
do
you
have
to
do
right.
So
if
you
cannot,
then
how
devices
are
doing
this
one
because
device
actually
supports
power
level,
user
naming
space
and
the
mapping
those
and
the
container
increasing
that
powder.
So
we
have
to
pass
those
things,
but
this
is
why,
earlier
I
reached
that.
F: Yeah, okay, so we need to follow up, at least because I also remember when we proposed that for gVisor, a couple of years ago, Kata was doing similar things using similar logic. That's why we need to figure out that this is not a regression for those other users, right? For CRI-O, I think it may be similar, but it's not exactly the same; we also need to avoid a regression there. That's the only concern I have.
G
I
mean
you
can
specify
the
mapping
to
cryo,
like
the
user
can
specify
the
mappings,
so
this
wouldn't
be
a
like
a
breaking
change
for
the
implementation.
It's
just
what
makes
more
sense
by
default
like
I
see
the
advantage
of
having
in
the
runtime,
because
it
does
access
to
more
information
and
can
inspect
images.
C
No
phase
three
yeah,
I
think
the
the
problem
of
the
runtime
peaking,
but
I
think
it's
not
clear
but
only
relevant.
Sorry.
Let
me
organize
myself.
I
think
it's
not
clear
who
should
be
the
owner
of
the
mapping
for
phase
three,
because
it's
tricky,
but
it's
only
relevant
for
phase
three
right
because
because
for
phase
one
like,
if
we
agree
that
we
want
to
give
parts
without
volumes
different
mappings,
we
can
easily
do
it
today
and
we
don't
need
much
from
the
runtime.
C: And so, if there is general agreement on the idea for phase one and phase two, and we can define phase three as we go, I think that would be super beneficial, because we did a very detailed estimate of the work needed to get phase one and phase two done, and it's about six months of full-time work on that. So it's a lot. If we can split the phases, if this split makes sense for everyone, I think that would be great.
E: Rodrigo, I think I like the three phases you laid out. I particularly like phase one if we couple that with validation errors where users are using volumes that are beyond the supported list; I think that's obviously a nice benefit.
C: Right. I think, Mrunal... last year, in the initial version of this KEP, I think Mrunal was the approver, not sure if someone else was, but Mrunal, do you have time to...?
I: Feel free to ping me on some of that stuff too. We did a lot of cursed stuff a few years ago at Circle, and that would be useful for some stuff at VMware too. Oh great, can you share your...
A: Awesome. Okay, I think that's all for the topic of user namespaces. Let's move on to in-place pod vertical scaling. Vinay?
J: Hi. So sorry, my apologies; once I lost momentum on this around the last release, I never got it back. But I spent the last couple of weeks just catching up on the code and looking through some of the key feedback that we have that needed to be addressed. I think the major ones were from Lantao and one from you, and I've pinged Lantao.
J: I think I was looking to see how we can instrument the PLEG so that, when a resize is in progress for a pod, it should, you know, call GetPodStatus, get the latest from the CRI, and then update its cache. The problem is that even if we did that, it's not helping, because if there is a series of updates, let's say the memory is being updated and the CPU is being updated, we break it down so that we are not exceeding the pod limits.
J: That's been approved by the fit check when we admit the resize, so we do memory first and then we do the CPU. But if we have already successfully updated the memory, and we don't have the latest from GetPodStatus, then the problem is that we'll be writing back the old, the previous, memory values, which keep toggling back and forth. So, the code that I currently have there, in the update...
J: After calling UpdateContainerResources for a resource type, we call GetPodStatus and then update the cache; I think that needs to stay. So I just wanted Lantao to take a look at that code. I did make the change to define a separate resources type: we were using the v1 resource type in the container resources, in the container status that we have for the kube container status. I changed that. When we query back, when we get something from the CRI, we get either Windows or Linux container resources.
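[Note: an illustrative Go sketch of the ordering Vinay describes, with simplified hypothetical types standing in for the kubelet's real CRI plumbing. The point is only the sequence: apply one resource at a time, then re-read the runtime's view before touching the cache, so stale values never get written back and toggle.]

    package resize

    type resources struct{ memLimitBytes, cpuQuota int64 }

    // runtimeService is a stand-in for the CRI runtime client; the real
    // kubelet interfaces and signatures differ.
    type runtimeService interface {
        UpdateContainerResources(id string, r resources) error
        ContainerResources(id string) (resources, error)
    }

    func doResize(rt runtimeService, cache map[string]resources, id string, want resources) error {
        cur := cache[id]
        steps := []resources{
            {memLimitBytes: want.memLimitBytes, cpuQuota: cur.cpuQuota}, // memory first
            want, // then CPU
        }
        for _, step := range steps {
            if err := rt.UpdateContainerResources(id, step); err != nil {
                return err
            }
            // Refresh from the runtime instead of assuming the write stuck.
            got, err := rt.ContainerResources(id)
            if err != nil {
                return err
            }
            cache[id] = got
        }
        return nil
    }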
J: I translate them to just resource quantities and keep them there, and then this gets converted when it's used, before we check if we need to do an update. The other side is in kubelet_pods, when we are generating the API container status.
J: So that's one of the comments that you had. I looked at that function, and it is pretty much looking at what's currently there and storing the best available information into the container status for the API, what the user sees.
J: So I was wondering if you and Lantao would have some time to take a look at this this week, so that we can see if there are any more changes that need to be done.
H: Yeah, I think I can take a look as soon as possible. I'm not sure whether I can get to it this week, but at least early next week, I think.
J: Okay, sure, I think we can do that. I'll focus on the scheduler; I think Joshin from ByteDance is helping me with the scheduler side of things, and I'm going to follow up with him. There's some non-trivial work to be done over there, so I'll focus on that.
J: I'll make sure that gets done so that the PR is ready to merge; we'll target to merge it before the code freeze happens. And Elana, I think Wong Chin might need some help from you to figure out how to run the e2e tests that we have with the feature gate enabled. One of the comments that you had was that the tests should be checked in with the feature gate enabled; right now, if the feature gate is disabled and the tests are run, the tests will just return.
A: There is an alpha test job that runs everything with, like, it'll turn on all alpha feature gates, so that should be picking it up. But I think that you may have had a selector in your test that was causing it to not get picked up by that job. So I don't think there's anything particularly special about this that needs special infrastructure.
A: But if you wrote the tests as node end-to-end tests, as opposed to just standard core end-to-end tests, then they won't get picked up by those jobs. But from what I can see here, they look like standard end-to-end tests, so they should get picked up; it just depends on what test selector you put on them.
J: Oh, okay. So the test selector, I think it's... so, okay. Well, let me...
A: I think that there isn't one at all, which is part of the problem. I couldn't find the tests running anywhere; a lot of these tests look like unit tests.
A: Yeah, so we need to ensure that they're actually running as part of the e2e suite, and I don't think that they're actually getting picked up by the Ginkgo framework, from what I can see. There's a bunch of cases that are unit-test-style, matrix ones, but I don't see where the test actually gets set up.
J: Okay, so... I think what will help us is if we can get the selector we should use. I think what you're referring to is in the Describe, in the description; we need to add a new... yeah.
J: The Describe is there. I'll add it, or I think I'll ask her to add it. Okay, there was [sig-node] on it, but it's been changed in the last commit; it just says [Slow]. So I'm wondering if we should have [sig-node] there.
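[Note: an illustrative Ginkgo sketch of how an e2e spec advertises itself to test jobs through bracketed tags in the Describe/It strings, which is what the selector discussion above is about. The [Feature:InPlacePodVerticalScaling] tag is an assumed example, since the exact tags were still being settled; the import path matches Ginkgo v1 as used at the time.]

    package e2e

    import "github.com/onsi/ginkgo"

    // Jobs select specs by matching these bracketed tags; a spec without a
    // recognizable tag may not be picked up by any job.
    var _ = ginkgo.Describe("[sig-node] Pod InPlace Resize [Feature:InPlacePodVerticalScaling]", func() {
        ginkgo.It("resizes container resources without restart [Slow]", func() {
            // test body elided
        })
    })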
K: Yeah, hi. So I put it on the agenda last week, but I wasn't able to make it to the meeting, and I saw it was discussed during the meeting and there were two questions, and I just wanted to answer them here. One of the questions was whether the whole KEP enables only forensic container checkpointing, or whether it also allows implementing things like container migration in the future. I just wanted to say that yes, of course, it's possible to use this as a base to enable everything around checkpointing, restoring, and migration, if that is wanted, but we wanted to focus on just one single use case, the checkpoint feature, to make it easy to review.
K: That was the main goal; that's why we only talk about forensic container checkpointing. But it doesn't block us in the future from doing anything more complicated, more, I don't know, more advanced with checkpoint and restore. And the other question was about... I didn't understand it correctly from the meeting notes.
K: It was about how networking and storage are connected with containers, and I tried to answer it here and in the KEP pull request: basically, all external resources the container is using have to exist before the container can be restored. That's all; I wanted to basically resolve the open questions from last week.
L: Adrian, this is Cassie. I posted some more comments in the PR; could you take a look?
L: Yeah, so for the other question, regarding the outstanding questions about how networking and storage will work on restore: I didn't post those questions, but I have the same question. Because, okay, if you would like to restore the container from the checkpoint, and you would like to run it properly on another server, on another node, then there are some uniqueness issues that, you know, need to be taken care of.
L: Also some networking issues, you know; I guess that's what those questions mean. So have you thought about that? I think the point is, just checkpointing the container is probably not enough for a restore to work properly on another node.
K: Yeah, so this sounds like one of the questions I often get about checkpoint/restore, which is about networking. So the thing is, talking about established TCP connections, because that's most of the time the most difficult thing: if there are established TCP connections, we can migrate a container, but we need to have the same IP that the container, or the socket, is bound to. So we need to have the same IP address in the container as during checkpointing.
K: I think it would make more sense, and this is also supported by the checkpoint/restore tool, to just say: close all open TCP connections on checkpoint and restore, if we are at a point where we decide we want to migrate a container.
K: We accept that the client has to re-establish the TCP connection. So TCP is most of the time the thing where most questions are asked, because established TCP connections are possible to restore, but because the IP addresses have to stay the same, it's not always really doable; if you have an elaborate networking setup, then maybe you cannot have the same IP address.
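[Note: a hedged Go sketch of the CRIU knobs Adrian alludes to, via the go-criu RPC options that Podman and CRI-O use to drive CRIU. The field names are from memory of the github.com/checkpoint-restore/go-criu rpc package and may differ between versions; treat this as illustrative only.]

    package checkpoint

    import (
        "github.com/checkpoint-restore/go-criu/v5/rpc"
        "google.golang.org/protobuf/proto"
    )

    func checkpointOpts(keepConnections bool) *rpc.CriuOpts {
        opts := &rpc.CriuOpts{}
        if keepConnections {
            // Dump established TCP connections; restoring then requires the
            // container to come back with the same IP address.
            opts.TcpEstablished = proto.Bool(true)
        } else {
            // Simpler path: close open TCP connections and let clients
            // re-establish them after restore.
            opts.TcpClose = proto.Bool(true)
        }
        return opts
    }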
L: Yeah, yes; I think that answers the question a bit. I'm thinking about another usage scenario, not migration; I think it's very useful if, for example, we would like to create another container on another node from that checkpoint image. It's not migration; we would like to create a new container instance, okay, because of scalability requests.
L
Then
you
know
we
don't
want
any.
So
it
was
this.
There
are
some
unique
numbers
in
the
existing
container
right
when
you
do
the
checkpoint,
yeah.
K: Okay, so yeah, that's an interesting question, and we are already discussing this upstream; the same request is coming from Amazon, for virtual machine migration, basically. So if you have, like you said, unique numbers, things which create random numbers, and you do a checkpoint of the container and then create multiple copies of the container from this point on, all the containers have been seeded with the same random seed, and so the random numbers will be the same.
K: So there is a discussion currently going on at the Linux kernel level: how can we tell a virtual machine which is migrated, or a process or a container which is migrated, that it has been migrated and that it has to, I don't know, reseed its random number generators, and maybe drop its secrets so that they're no longer in memory, and recreate keys, things like that. And the problem is there is currently no solution.
K: Currently, no one has a good idea how to tell a VM or a process that it has been migrated in a way that is usable for everyone. So we would like to have an interface to tell processes, or even virtual machines, that they have been migrated, but there is currently nothing which does this. There was a discussion about this at Linux Plumbers Conference a few weeks ago; there's even a video, if you want to check it out.
L: Yeah, I think that's good, yeah. Actually, another question is, you know, if we use containerd, right, not everything is loaded into memory. So I'm not sure: when you take the checkpoint of the container, do you also checkpoint those files on the disk, or do you just checkpoint the memory?
K: So this is based on my work; I did this all for Podman and I brought this all to CRI-O, and the Podman and CRI-O checkpoints include the differences of the file system. So this needs to be implemented for containerd also, but no one has done it so far. But yeah, in the checkpointing I have done so far, we include it: we take the diff from the topmost read-write layer and store it in the checkpoint; we take all changed files.
K: We take all the dumped processes and use it to restore the container. So this is solved, but containerd needs to also implement it if they want the same feature, basically.
K: This has been going on for a long time; there are a lot of PRs all around. There is a big draft PR for Kubernetes which implements it basically end to end for the drain use case: you say kubectl drain, and on restart all containers are restored. And then there are CRI-O patches.
K: There are crictl patches, so everything exists and it's all linked, but it's all in draft state, because I'm waiting for the CRI API changes, which this KEP tries to introduce. As long as they are not approved, I cannot do the changes in CRI-O and all the other projects, as long as the CRI API is not updated.
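[Note: for context, an illustrative Go sketch of the kind of CRI extension this KEP is waiting on: one new call that asks the runtime to write a checkpoint archive for a running container. The KEP defines the real RPC and message names (upstream later merged it as CheckpointContainer); this signature is only a sketch.]

    package cri

    import "time"

    type CheckpointService interface {
        // CheckpointContainer checkpoints the given container into an
        // archive at location without stopping it (the forensic use case).
        CheckpointContainer(containerID, location string, timeout time.Duration) error
    }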
A: Okay, I think that's all we have for that, and that's the end of our list of agenda items for today, unless someone snuck another one on... nope, doesn't look that way. So, do we have any last-minute agenda items, or shall we adjourn for today?
L: I have a question; this is a follow-up on the discussion last meeting, so it's for the in-place resize PR, if we may still...
L: Yeah, hi. So I remember there was a discussion that, you know, your work, that PR, would be broken up into multiple PRs, so it's easier for people to review. Do you still plan to do that, or do you need any help to do that?
J: I think most of the PR is already reviewed, so I already rebased and squashed all the changes. Tim Hockin looked into the API and he's good with what we have. I think the next major thing is for Lantao to look at the node side, and there are a couple of changes for Elana and Lantao to look at and see if they make sense on the node side. On the scheduler side, the change is not very big.
J
It
looks
like
there's
some
more
work
to
be
done
and
we
may,
if
we,
if
joshin,
wants
to
create
a
separate
pr,
because
my
change
in
the
scheduler
is
just
like
couple
of
lines
or
something
but
there's
more.
That
needs
to
come
as
long
as
it
is
fully
ready.
It
can
be
broken
up
into
another
separate
pr.
E: [inaudible]
J: Yeah, that's unfortunate, yeah. I think I had it broken down into separate commits earlier on, but we missed the last release, and just after that, I think, Tim suggested we squash it because it had already all been reviewed, except for the specific feedback items that were there.
J: I think that's correct, but for a while I was maintaining five different commits there, and after we came close to reviewing it, we decided, okay, it's a good time to squash. It's in the comments somewhere; I can find it.
E: Yeah; personally, I find it very difficult to review your PR, and I was hoping to do that this week, because last I looked at it, it was almost like 20,000 lines of diff in the GitHub web interface. So I have to basically pull it down locally, which I probably will do, because I want to.
E: But yeah, it can just be difficult. Oh yeah.
A: In particular, splitting out those generated changes: anything that touches the pod spec is going to have lots and lots of generated changes that come with it, for the test fixtures. Ensuring that at least those are in a separate commit, so that people can look at the rest of the code; and often it's also nice to have the initial changes and then, in a separate commit, the generated changes, so that we can see them.
J: Okay, so let me do this: I'll take a look at it tonight to see if I can split it up. I think I still have enough context in my mind to tell which one goes where, and I'll essentially split it into four or five commits. I think at this point: one for the node e2e tests, and two for the generated code; I think there'll be two generated commits, one from the CRI side and one from the API side.
J
So
five
I'll
see,
if
I
can
split
that
up
that
way,
we
can
bring
it
back
so
that
it's
easier
for
everyone
to
review.
It
looks
like
people
want
to
take
another
look
at
this.
I
squashed
it
mainly
because
at
some
point
we
said:
okay,
it's
good
time
to
squash
it
after
a
lot
of
review
last
june
july,
but
we
didn't
make
them.
A: That sounds fine. Cassie, does that answer your question?
J: I'll do it tonight; I think I have enough context, because previously the API kept evolving on top of the kubelet changes, which was making things difficult. Now the API is not evolving, so I'll just take all the API changes and put them into... I'll just replace what we currently have; I'll push another one which will have four or five commits. I think it should not be more than that.
A: Okay, thanks. I think that's it for this meeting. So thanks, everybody, for joining, and I hope to see you next week. Cheers.