From YouTube: Kubernetes SIG Node 20220208
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: All right, welcome everyone to the February 8th SIG Node meeting. We have a couple of items on the agenda. I think the first item we can talk through is the containerd one.
B: Yes, yes, I'm happy to. Hello everyone, my name is Chutong. I work on GKE. I want to share a finding from a long-running containerd bug we have been investigating.
A: Dawn, are you able to make him a co-host? I right-click and I don't have the option to make him a co-host.
B: Okay, this issue is important. Let me start with a simple description. Customers see an error message, "failed to reserve container name", when they use containerd. This is specific to containerd, because the error message is returned by the containerd CRI plugin. When they see this message, the pods are stuck; they are not created successfully. This issue goes back about two years.
B: But with the growing adoption of containerd, we are seeing more customers reporting this problem, so it got our attention and we prioritized investigating this issue. Now we have a promising fix, and I want to share what is going on and what will happen next. So, the issue.
B: In that case, the subsequent CreateContainer call will just fail fast, because the kubelet is retrying with the same container name and the containerd CRI plugin will just say no, you are not allowed to do this, because the name is already reserved. So the problem is: why does the first, or initial, CreateContainer request take that long? From the issues we have seen so far, it is always associated with slow disk operations.
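[Editor's note] To make the fail-fast behavior described above concrete, here is a minimal Go sketch of a name-reservation index of the kind the containerd CRI plugin keeps; the types and method names are illustrative, not containerd's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// nameIndex reserves container names so two CreateContainer calls for the
// same name cannot race; a name stays reserved until it is explicitly
// released, normally when creation finishes or fails.
type nameIndex struct {
	mu       sync.Mutex
	reserved map[string]bool
}

func newNameIndex() *nameIndex {
	return &nameIndex{reserved: make(map[string]bool)}
}

// Reserve fails immediately if the name is already held, which is the
// "failed to reserve container name" fail-fast path the kubelet hits when
// it retries while the first, slow call is still in flight.
func (n *nameIndex) Reserve(name string) error {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.reserved[name] {
		return fmt.Errorf("failed to reserve container name %q: name is reserved", name)
	}
	n.reserved[name] = true
	return nil
}

// Release frees the name so a later retry can succeed.
func (n *nameIndex) Release(name string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	delete(n.reserved, name)
}

func main() {
	idx := newNameIndex()
	_ = idx.Reserve("pod_container_0")          // first CreateContainer holds the name
	fmt.Println(idx.Reserve("pod_container_0")) // kubelet retry fails fast
	idx.Release("pod_container_0")              // released once the first call returns
}
```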
B: Maybe the customer's workload is disk-I/O heavy. Recently we realized there is something we can optimize in containerd, which is to remove an unnecessary disk operation during the CreateContainer call path.
B
It
has
a
similar
kind
of
similar
improvement
or
enhancement
of
that
part
and
then
to-
and
we
did
the
same
thing
or
similar
thing
in
the
community
and
after
that
we
have,
we
can
see
even
during
this,
this
sort
of
cases,
the
create
container
request
can
return
fast.
B: So that's the status. If you guys, or any user, are seeing this problem, please route them to this GitHub issue and try the new version with the patch.
B: Also, I should state that there may be other, still undiscovered, factors contributing to this problem, because, like I mentioned, the disk operations can come from other parts, maybe pulling images. Any questions?
C: Chutong, when we discussed this earlier, I remember you mentioned that it's really difficult to reproduce this problem. Did we later find out how to reproduce it, and is it possible to add some regression tests?
B: Thank you for asking this, it's a great question. I forgot to mention: I tried to reproduce this. I tried to find a setup or experiment showing that Docker always creates pods faster than containerd in the disk-throttling case, but I couldn't find one. Even today I could not set up such an environment.
B: If I add too much disk I/O, both are just stuck there and no one can proceed. If I add less disk I/O, the result can be random, so I still haven't found the perfect balance point. So I'm hoping that, because customers complain about this and they have certain workloads, maybe their particular workloads make it even easier to trigger this.
C: Thanks. Another question, actually a follow-up question: even if Kubernetes receives this containerd "failed to reserve container name" error, the whole thing will settle down if given some time, right? Because the previous request will finish, given enough time. So what's the real customer impact here? It just takes longer and you receive some errors, and better error handling could actually solve this problem. But what's the real customer impact?
B: Right, yeah. I should also call out that if you set the restart policy to Always or OnFailure, the pod will be created eventually, given enough time. But some customers use restartPolicy Never, so they will see a failed pod. That's one case of customer impact. Another case is that they are claiming the pods were created faster with Docker.
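[Editor's note] As a hedged illustration of the customer impact just described, here is a tiny Go sketch of the retry decision; the helper name is made up, while the values are the pod-level restartPolicy values from the Kubernetes API.

```go
package main

import "fmt"

// RestartPolicy mirrors the three pod-level values in the Kubernetes API.
type RestartPolicy string

const (
	RestartPolicyAlways    RestartPolicy = "Always"
	RestartPolicyOnFailure RestartPolicy = "OnFailure"
	RestartPolicyNever     RestartPolicy = "Never"
)

// retriesAfterFailure captures the behavior described above: with Always or
// OnFailure the kubelet keeps retrying, so the pod eventually comes up once
// the slow CreateContainer call completes; with Never the failed attempt is
// terminal and the user is left with a failed pod.
func retriesAfterFailure(p RestartPolicy) bool {
	return p == RestartPolicyAlways || p == RestartPolicyOnFailure
}

func main() {
	for _, p := range []RestartPolicy{RestartPolicyAlways, RestartPolicyOnFailure, RestartPolicyNever} {
		fmt.Printf("restartPolicy=%-9s retried=%v\n", p, retriesAfterFailure(p))
	}
}
```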
A: Hey, hi, this is Derek. I appreciate you bringing up more operational experience for these types of issues, given the move we're making in the SIG to finish the deprecation of dockershim. Speaking from some of our experience at Red Hat, we have tackled similar issues around disk I/O contention, and I know Peter Hunt and Mrunal had explored this as well for CRI-O. I don't know if you want to talk through whether you've seen similar impacts and whether you did similar remediations.
E: Hey, yeah, we did. We have actually seen a bunch of these. There's also a class of issues we find when SELinux is enabled, because for every new container, if the SELinux label is chosen by CRI-O, then the rootfs is relabeled, and that can take a long time; or not the rootfs, the volumes get relabeled so that the pod has access to them. So we actually implemented some workarounds in CRI-O to save the work that we did on an original request and, kind of, stall the kubelet polling on this to work around the issue.
A: Awesome, well, thanks for sharing that, Peter, and thanks for bringing up the related BZs, because I think, as a community, we all get better when we figure out how to mitigate these issues when they're found, and at the right layers of the stack. So, is there anything else we wanted to raise on this particular topic?
F: Yeah, I just wanted to bring this issue to your attention. It's an old-but-new issue: the problem of the scheduler and the kubelet sometimes being out of sync on how many resources are available on the node. Recently there has been a change in, I think, what the kubelet considers as available resources with respect to terminated pods.
F: The change was that in kubelet admission, those resources are still considered as used until the pod itself completely terminates. So basically, we update the pod status to finished, or completed, or basically terminated, before we actually make its resources available for other pods. That change was actually patched back. There is a reason for the change; I think Clayton made it.
F: It solves another problem, but it did cause issues related to, again, race conditions with the scheduler. The reason I'm bringing this to the SIG is, first, to make you aware that this is an issue.
F: The second is that, in retrospect, these things related to admission should probably be coordinated with SIG Scheduling, because, again, we want the kubelet and the scheduler to be on the same page about what is available on the node and what is not. And the third is to discuss how the scheduler should treat terminated pods: when should it assume that the resources of terminated pods are available for other pods?
F: So yeah, let me know if you have any questions, or if you would like to just look at the issue and comment on it. I didn't see any of the SIG leads commenting directly on the issue other than Clayton, so I just want to make sure that you're aware of that one.
C: Oh well, even back in the design phase we talked about this. There are always these asynchrony issues; this is just one more of the same kind, right? The master side expresses the desired state, but reclaiming those resources takes time.
C: Those kinds of things have been there from day one, by design; distributed computing always has this problem, and disk can be even worse, right? When you reclaim disk space, you say: oh, I'm evicting this pod and taking that resource, but deleting those things and removing the related logs and disk usage actually takes a really long time. We know those things. This is why, in the end, you need an actuator to move the system into the desired state.
C: And the actuator is the kubelet. We always talk about Kubernetes at a high level, but in reality, if you really think about it, the fine-grained scheduling decisions happen down in the kernel. At the high level, for cluster management, there's the scheduler, which does the cluster-level scheduling, but then there's the kubelet at the node level, and maybe also some other components.
C: There may also be some plug-ins, like resource-management plug-ins, doing those kinds of things, so those kinds of problems are always there. The thing we need to be careful about when we design these things is that we don't want a situation where the scheduler makes a decision and sends it to the node, the node rejects it, and the scheduler sends the same request back to the same node only to be rejected again; that kind of loop is the real problem.
C: But sometimes there are these out-of-sync situations; we need to keep them within a certain threshold, and some amount of it is acceptable.
F: But here, in this case, it should be easy to solve, because the kubelet is the one updating the pod status. It could update it only after it considers that these resources are available for scheduling.
F: So why is the kubelet declaring the pod terminated while it is still not making its resources available for scheduling? What the scheduler does right now is, basically, once it receives an update that a specific pod assigned to node X is terminated, it assumes: okay, this pod does not hold any resources on the node.
F: I can reuse those node resources to schedule another one. So the kubelet is getting ahead of itself by declaring that the pod is terminated while still holding some resources reserved for that pod. And so the ask here is to have a pod status that is easy for the scheduler to consume, to know that the kubelet has, you know, released all these resources, so they're available for the scheduler to consider for future pods.
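[Editor's note] To illustrate the scheduler-side assumption being described, here is a minimal, self-contained Go sketch; the types and values are simplified stand-ins, not the real scheduler code. Once a pod reports a terminal phase, its requests are treated as free, even though the kubelet may still be cleaning up.

```go
package main

import "fmt"

// Simplified stand-ins for pod phase and requests.
type podPhase string

const (
	podRunning   podPhase = "Running"
	podSucceeded podPhase = "Succeeded"
	podFailed    podPhase = "Failed"
)

type pod struct {
	name     string
	phase    podPhase
	cpuMilli int64 // requested CPU in millicores
}

// requestedCPUOnNode reflects the scheduler-side view described above: a pod
// in a terminal phase is assumed to hold nothing on the node the moment the
// status update arrives, which is where the race with kubelet cleanup begins.
func requestedCPUOnNode(pods []pod) int64 {
	var total int64
	for _, p := range pods {
		if p.phase == podSucceeded || p.phase == podFailed {
			continue // treated as already released
		}
		total += p.cpuMilli
	}
	return total
}

func main() {
	pods := []pod{
		{name: "web", phase: podRunning, cpuMilli: 500},
		{name: "batch-done", phase: podFailed, cpuMilli: 500},
	}
	// Prints 500: the terminated pod no longer counts toward the node,
	// even if the kubelet has not finished reclaiming it yet.
	fmt.Println(requestedCPUOnNode(pods))
}
```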
A: I think I have to check myself a little closer, but I would bet terminated pods still hold resources on a kubelet; the resources they hold just aren't necessarily what we would think. They can still hold disk, and their logs are still present.
A: A terminated pod shouldn't be holding CPU, but terminated pods still have their resources present on the node for later inspection, some of the artifacts that Dawn had referenced. I was trying to think whether there is a situation where a terminated pod could be having its CPU held, but it's not coming to mind. I guess what I would want to do is just check whether the out-of-CPU filter check here is filtering out terminated pods or not, because they're not actually holding CPU, but they could definitely be holding disk.
C: Yeah, it shouldn't hold the CPU. If it's terminated and still holds the CPU, there must be some bug.
G: And so you probably also need to check whether it's actually completed. If the phase is just Failed or whatever, it can still be cleaning up, because the priority there is reporting that status back while we do the cleanup, especially if you have a CRI that's being particularly slow.
F: So Clayton is suggesting that we... let me see.
G: Right now they happen early, partially as a performance optimization for external systems, I think.
C: If you look at the history, many times I have complained that the scheduler is not usage-aware; the scheduler only looks at the requests and the machine capacity, not at the real node usage.
C
So
so,
let's
make
this
kind
of
the
even
we
look
at
the
usage.
You
still
have
the
because
the
async
raise
some
of
the
things,
but
it
will
be
way
better
than
today,
because
today
we
didn't
look
at
that
like
that,
we
never
try
to
attempt
to
consider
reclaim.
Sometimes
we
claim
resources
take
a
really
long
time
and
this
problem
at
the
earlier.
I
remember
the
first
cubicle,
the
small
cube
account,
and
I
I
reached
my
my
this
is
one
of
my
wish
list.
C: Many things have happened since, but this one in particular, a usage-aware scheduler, is still on that list.
F: So what I'm saying here is: if you can, please take a look at the bug and suggest what status we should look at on the pod for the scheduler to have the smallest chance of getting into this race condition. What status should we look at to consider that the pod has failed and released its resources?
A: The more likely thing is that the list of active pods being evaluated on the kubelet when doing its admission check has a bug that says CPU is still being held when the pod is terminated. We need to look into that a little closer, but that active-pods list, or whatever list gets propagated to the admission handler in the kubelet, might need to filter out resources that can't be claimed once all containers are stopped, in this case CPU, whereas disk is something we might need to make a different decision on.
A: So if the key thing is the out-of-CPU thing, I think there's likely a bug that we could chase down on the kubelet side, so that its admission check is not going to run into this.
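[Editor's note] A rough sketch of the filtering Derek is describing, with simplified, made-up types rather than the kubelet's real admission code: when summing up CPU requests for admission, pods whose containers have all stopped should not count, even if their disk artifacts are still on the node.

```go
package main

import "fmt"

type podRecord struct {
	name              string
	allContainersDone bool  // everything has exited; only logs/disk remain
	cpuMilli          int64 // requested CPU in millicores
}

// cpuHeldForAdmission sums CPU requests for the admission check, skipping
// pods whose containers are all stopped: they may still hold disk, but they
// should not cause an out-of-CPU rejection for an incoming pod.
func cpuHeldForAdmission(active []podRecord) int64 {
	var total int64
	for _, p := range active {
		if p.allContainersDone {
			continue
		}
		total += p.cpuMilli
	}
	return total
}

func main() {
	active := []podRecord{
		{name: "running", allContainersDone: false, cpuMilli: 1500},
		{name: "terminating", allContainersDone: true, cpuMilli: 1500},
	}
	allocatable := int64(2000)
	incoming := int64(400)
	held := cpuHeldForAdmission(active)
	fmt.Printf("held=%dm, admit=%v\n", held, held+incoming <= allocatable)
}
```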
F: Right, and that makes sense. I saw two or three issues opened related to this recently in open source, and within GKE internally we've gotten multiple reports related to this on 1.21 and 1.22, because this was backported. That's my worry: right now we don't have too many clusters on these releases, but once they become the majority of our fleet, I'm pretty sure we're going to get a lot of bug reports related to this. I mean, at the end of the day, you can tell them, okay...
C: Oh, one question I want to follow up on, since you mentioned it here: it looks like GKE has this problem. Did GKE enable any CPU-related overcommit feature?
A: Okay, well, thanks, Abdul. I can appreciate that if you're running a lot of short-lived pods in a batch mode on a highly packed worker, you could be hitting this, so we'll have to take a look. Anything else we want to raise on this issue?
G: If the PR that caused it is the one that I think it is: we had a bunch of race conditions in the pod lifecycle management that meant some stuff wouldn't be tracked properly and some statuses wouldn't be propagated in the correct way. I don't remember exactly which bugs that PR was fixing now.
G: Yeah, and so some of the work around fixing that has raised a bunch of complicated issues. Some of those are "the previous behavior was incorrect", and some of those are "well, we don't have enough testing, so there are some regressions", and it can be hard to figure out which is which to begin with, because the behavior of the kubelet isn't actually documented, and there isn't a test spec anywhere that correctly covers most of the kubelet.
A: Okay, I guess the last topic on today's agenda is Deep's. Do you want to talk to your item?
I: All right, so for a quick background: one of the things we discussed briefly a little while back, during 1.23 in SIG Node, was this concept of runtime-assisted mounting of volumes.
I: The key scenario is basically on the left, which is the Kata micro-VM style of scenario, where the micro-VM environment might already have the file system loaded in the guest kernel and can avoid a lot of these virtio-fs based file system mounts pointed out on the left of this diagram.
I: Essentially, the model that would be ideal is to get to something like this, which is having a block layer that is a pass-through between the pods running within, say, a micro-VM environment like Kata Containers, and the underlying disks.
I: We worked on this with SIG Storage quite a bit and iterated through a few designs during the last 1.23 cycle, and one of the things that came up is: would it be possible for the kubelet to have a set of APIs through which it can directly communicate with the OCI runtime, like Kata in this case, for example?
I: Essentially, this API would be very similar to what the CSI node plugin APIs are today. In fact, each of the APIs, if you look, is almost identical: in CSI we have NodePublishVolume; similarly, this API would have, say, RuntimePublishVolume. In CSI you have NodeGetVolumeStats; this would have RuntimeGetVolumeStats, and the same for ExpandVolume.
I: I just gave it a name, calling it the CRuST APIs for now, Container Runtime Storage APIs, but naming is always hard and can be debated.
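[Editor's note] As a rough sketch of the shape being proposed here (the names below are placeholders drawn from this discussion, not a ratified API), the runtime-facing service could mirror the CSI node service along these lines:

```go
package main

import "context"

// Placeholder request/response types; a real proposal would define these as
// protobuf messages mirroring the corresponding CSI node-service messages.
type (
	PublishVolumeRequest   struct{}
	PublishVolumeResponse  struct{}
	GetVolumeStatsRequest  struct{}
	GetVolumeStatsResponse struct{}
	ExpandVolumeRequest    struct{}
	ExpandVolumeResponse   struct{}
)

// RuntimeVolumeService is a hypothetical "CRuST" interface the kubelet would
// invoke on an OCI runtime such as Kata, mirroring the CSI node plugin calls
// NodePublishVolume, NodeGetVolumeStats, and NodeExpandVolume mentioned above.
type RuntimeVolumeService interface {
	RuntimePublishVolume(ctx context.Context, req *PublishVolumeRequest) (*PublishVolumeResponse, error)
	RuntimeGetVolumeStats(ctx context.Context, req *GetVolumeStatsRequest) (*GetVolumeStatsResponse, error)
	RuntimeExpandVolume(ctx context.Context, req *ExpandVolumeRequest) (*ExpandVolumeResponse, error)
}

func main() {}
```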
I: The next steps suggested from these discussions in SIG Storage were, of course, to get SIG Node's opinion on this, see whether there are any major objections, and eventually go up to SIG Arch and get approval for something like this. So my goal here was to just present the idea. In summary, in this model we expect no changes in the CRI layer; instead this is driven through these new CRuST APIs, where the kubelet talks directly to the runtime.
I: Essentially, we will need these new APIs, and the runtime handler would, of course, implement the APIs and surface this functionality. That's a good question; we have a KEP going with a lot more details, so it would be great to get feedback, but yeah.
A: Right. We had a goal of making the runtime class kind of opaque to the kubelet, so just to make sure I understand: the high level here is saying the kubelet would be aware of runtime-class specifics, and then, when a pod is presented saying "I'm going to have a Kata sandbox", you want the lifecycle in the kubelet to work differently?
I: Essentially, in the KEP, what we came up with is the runtime class just surfacing a domain socket through which the kubelet can ask the runtime: hey, what are your capabilities? Do you support this new runtime API? And, if so, it invokes these APIs on the runtime.
I: So essentially, as the pod comes up: before sandbox creation, get the capabilities; right after the sandbox gets created, it needs to call RuntimePublishVolume. The main things this would do are applying the fsGroup settings, applying any SELinux labels, and surfacing any subPaths (these are special things the kubelet does today if the volume is already mounted by a CSI plugin), and then, during the lifetime of the pod, potentially handling GetVolumeStats and ExpandVolume.
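[Editor's note] To make that division of labor concrete, here is a hedged sketch of what such a publish request might carry so the runtime can do, inside the guest, the mount-time work the kubelet does today; the field names are illustrative, not taken from the KEP text.

```go
package main

import "fmt"

// runtimePublishVolumeRequest sketches the information the kubelet would hand
// to the sandboxing runtime so the work listed above (fsGroup ownership,
// SELinux labeling, subPath exposure) can happen inside the micro-VM.
type runtimePublishVolumeRequest struct {
	VolumeID     string   // volume already attached/staged by the CSI plugin
	DevicePath   string   // block device to pass through to the sandbox
	TargetPath   string   // path where the pod expects the volume mounted
	FSGroup      *int64   // pod securityContext.fsGroup to apply to files
	SELinuxLabel string   // label to apply instead of host-side relabeling
	SubPaths     []string // subPath entries referenced by the containers
}

func main() {
	gid := int64(2000)
	req := runtimePublishVolumeRequest{
		VolumeID:     "vol-0123",
		DevicePath:   "/dev/xvdf",
		TargetPath:   "/mnt/data",
		FSGroup:      &gid,
		SELinuxLabel: "system_u:object_r:container_file_t:s0:c1,c2",
		SubPaths:     []string{"data"},
	}
	fmt.Printf("publish request: %+v\n", req)
}
```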
D: Wouldn't it make sense to just make the CRI runtime handle it, so the kubelet doesn't have runtime-specific code or knowledge?
I: We did explore that. The main thing was, I think, there was a bit of hesitancy around expanding CRI to do it, and it can all be achieved without involving the CRI runtime and the CRI APIs.
I: If that is the preferred path for SIG Node, we're definitely willing to explore that model. But one of the other things is that the APIs for this need to be as close as possible to the CSI node APIs, and that was one of the reasons why we wanted to have this new set and have it basically just mirror the CSI node APIs, without having to pollute the CSI APIs with that runtime-specific stuff.
I: The main benefit is that we want the pods running within a Kata sandbox to be able to use a direct block interface between the pods and the underlying disks, as opposed to the current model, which uses a projected file system interface, virtio-fs, between the underlying device and the pod.
I: With the current model there are quite a few disadvantages to virtio-fs, some of them being performance, others being security overall, and with the direct block pass-through, with the mount happening within the sandbox, the concerns mentioned earlier are addressed.
A: Okay, so if I were to summarize this: the Kata community no longer has to work with virtio-fs, by virtue of the kubelet taking on sandboxing-specific challenges. Like, that's the give and take here, is there?
I: Correct, so performance and security are the key benefits of this approach. What we have also found is that a couple of cloud vendors, like Alibaba and I think another one, have been introducing this model with a customized CSI plugin and a very customized Kata doing an out-of-band handshake, which is very similar to what we're proposing here, but basically...
I: That model very closely ties the CSI plugins to the runtime, basically circumventing the kubelet in this picture, and one of the problems with that model is that it cannot handle a lot of the features that are surfaced today, like subPaths, fsGroup application, and SELinux, whereas with this model we can handle those.
A: I think what gives me pause when I first look at this (and I have to read your enhancement in more depth) is that you kind of have a special code path for a particular class of pod and isolation technique, and I think we're trying hard not to do that.
A: It's cognitively overloading, and I think: is it possible to present a path where these new invocations work independent of the runtime choice, so that, no matter what the runtime class, the CRI and kubelet interaction sequence is always the same, versus making the kubelet have to be aware of it? It's basically reintroducing a strong coupling we've been trying to get away from, and so hopefully you can appreciate that that is cognitively difficult to then maintain.
C: We try hard. The point of CSI and CRI is actually to try to make core Kubernetes more generic, and we also have some other designs for those kinds of different, particular use cases. Basically, we just want to keep it more generic to handle all the cases, because, as you heard earlier, the pod lifecycle and container lifecycle are already so complicated.
A: And just one thing: there was some hesitancy expressed that we didn't want to change the CRI. I don't think that's true; I think we're open to making updates to it, and we have. So if you wanted to take a look at this as a more universal CRI change, I think that might be a different perspective to take, rather than the side-by-side model being presented here.
I: Got it, yeah, that's reasonable. One of the things I didn't find right now is this concept of querying for capabilities, like the wide set of capabilities that CSI kind of supports, but yeah, if that's a path, I can definitely explore it, update the KEP, and come back with how that model would look.
A: Okay, that sounds great, Deep. I think even this release we merged some enhancements to the CRI for things like container checkpointing and such, so we're trying to make it so that you have a place to do more exploratory innovation. So yeah, I don't want you to walk away from here thinking we would never want to change the CRI; that's not our goal. Our goal is to not make the kubelet more split-brained than it needs to be, I guess. So, all right, well.
A: Thank you, Deep. Any other discussion we want to raise on this topic?
J: I had a quick question, just for my understanding: the publish volume, is that to push data down into your Kata sandbox, or is that just to define a volume?
I: So if it's an ephemeral inline volume that's not backed by a block device, then things would fall back to the virtio-fs based path, because there's not much of a performance or security surface area associated with it. This is primarily for large, block-device-backed volume scenarios: if you have, say, an EBS volume...
I: Okay, thanks a lot for the feedback. I guess I'll try to explore whether we can plumb this through CRI in some form and get back. Thanks.
A: Thank you, Deep. I think that's the end of today's agenda. We're giving back 12 minutes of time. Thanks to those who joined and participated; you're the best.