From YouTube: Kubernetes SIG Storage Meeting 2022-06-02
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 2 June 2022
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.4oaimfmeq0qd
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, can folks hear me okay?

A: So today on the agenda, we're going to go over the 1.25 planning. We did the initial planning session on May 12th to populate the items that we're going to work on for 1.25. Here are the upcoming dates to be aware of: the cycle began on the 23rd, the production readiness review freeze is going to happen on the 9th, and the enhancement freeze is going to be on the 16th of June. The KEP should be approved by June 16th, and it should have production readiness sign-off by the 9th. So those are the dates to be aware of. If you're working on a feature, you still have time to add it to the spreadsheet. If you would like, let us know and we can help you track that, and if you have any items that you would like to discuss, feel free to add them to the bottom of the agenda and we'll get to them after the planning session. The link to the agenda is in your calendar invite. So with that, we're going to jump over to the 1.25 planning spreadsheet and start getting status updates.
A: Okay, so the first item that we have is: delegate fsGroup to the CSI driver instead of the kubelet, including updating the end-to-end tests. This is being moved to GA. Are you on the call by any chance?

C: Yeah, I'm working on a design, slightly changing the design and checking on the possibility that we can restore all the way back to the original size, because the current design has a limitation that you can only restore to a value that was at least slightly greater than the actual size of the volume; we cannot go back all the way. So I'm working on that, and I'm updating the KEP. It already had the PR review from last release, but I just need to update the milestones. So I'm working on that.

A: You got it. And are you going to still keep it beta?
A: Okay, sounds good. The next item is: issues related to resuming mounts, resuming volumes or mount points. Jing, are you on the call?

D: Yes, so it's still under review. I'll give it another round; I think this week or next week it should be close, yeah.

D: Yes, and also, I think I have just one last comment about error handling; other than that it's also ready, yeah, kind of almost ready.

A: Sounds good. And finally, we have storage capacity tracking for pod scheduling. Is Patrick on the call?
B: I think this one, you can probably... it's GA, yeah, yeah. We can probably mark this one as done, because for the external provisioner we don't have any pending PRs from Patrick anymore.

A: Sounds good, and I'm assuming this was actually completed in 1.24.

A: All right, the next item is CSI ephemeral volumes (existing API), moving that to GA. We had this moved out to 1.25.
D: So he has a PR for reconstruction under review. I think I just had a very small comment on that; it looks good.

A: Cool, thank you, Jing. And the next one is local ephemeral storage resource management.

D: Yes, so this is an old feature, in beta since 1.10. We haven't really moved it to GA, so right now I'm writing a KEP to promote it to GA.

A: Awesome, sounds good. I'll go ahead and mark this as started.
A: And whenever you have a KEP, just update this, okay? All right, next up: spreading over failure domains. Is that something we are tracking for this cycle, or cutting it?

B: The main difference is, I think we should support static provisioning, so basically adding the second API object: like a VolumeGroup now has a VolumeGroupContent, and the group snapshot has a group snapshot content. We just started the discussion at the CSI implementation meeting, so I'll continue on the design this quarter.

A: Cool, thank you, Xing. Next up is CSI out-of-tree: moving the iSCSI driver. We were not clear if this was completed or if there was outstanding work; does anyone know?
A: Got it. And I guess it's a similar story with the Samba/CIFS CSI driver.

E: I believe... yeah, I think Samba has multiple releases. For iSCSI, I only see one release, in January, but that's probably fine.

E: I think we can probably just mark this as complete and, you know, just treat the rest as incremental work.

A: Okay, next is PVC/VolumeSnapshot namespace transfer, trying for alpha this cycle. Mustafa, Masaki, any updates?
F: Yes, we got SIG Network involved, because we will use ReferencePolicy, which is part of the Gateway API.

F: And as for the transfer feature, there was no objection to addressing it by implementing cross-namespace provisioning instead, so we might be able to remove this and just focus on cross-namespace provisioning.
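For context, the ReferencePolicy mechanism mentioned here works roughly like this: the namespace that owns a resource publishes a policy saying which other namespaces may reference it. A rough sketch follows, using the Gateway API group/version as it existed around this time; the namespaces and names are made up for illustration:

```yaml
# Hypothetical example: allow PVCs in the "dev" namespace to reference
# VolumeSnapshots that live in the "prod" namespace.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferencePolicy
metadata:
  name: allow-snapshot-reference
  namespace: prod          # the namespace that owns the referenced objects
spec:
  from:
  - group: ""
    kind: PersistentVolumeClaim
    namespace: dev         # the namespace allowed to make the reference
  to:
  - group: snapshot.storage.k8s.io
    kind: VolumeSnapshot
```

The key design point is that the grant lives in the target namespace, so the owner of the data opts in explicitly; a cross-namespace reference without a matching policy is rejected.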
A: And for provisioned volumes, the design... or sorry, can you repeat the status for cross-namespace provisioned volumes?

F: We got SIG Network involved in another discussion there as well.

A: And the target is still alpha for both of these? Yes? Cool. Thank you, Masaki. Next is... actually, let me mark both of these as started.

A: CSI volume health additional metrics, moving to beta. Nick Wren was working on this; anyone have an update?
B: Yeah, so... oh, this is the first one, right? It's matching this one. Yeah, so I updated the KEP, so we're going in for beta.

B: We don't really... so that one has actually not started, because initially Nick said somebody on his team would work on the design, but it looks like that person does not really have time to work on this in this quarter, so we can probably just cross this out.
A: Okay. I guess this can also be a call-out to the rest of the group: if anybody's interested in picking up an item, this is an item you can help design. Potentially... the idea here is that so far we have been treating CSI volume health metrics as something that is human-consumable.

A: We are interested in also having some sort of programmatic, automatic response to volume health, to bring the volumes back to a healthy state. Potentially that is a more complicated design, and we've been punting on it. If you're interested in helping drive that, reach out to me, Xing Yang, or Michelle.
H: ...save themselves; you know, it's like an abandon-ship kind of signal. Yeah, that's a really good point.

H: The other thing is, with regard to monitoring volume health: a lot of storage devices have monitoring tools that are way better than anything Kubernetes will ever provide, and so people who are interested in finding out about health problems are probably using those.
H: Similarly, fixing problems is usually going to require some vendor-specific, storage-controller-specific action to be taken, and again those make sense to be done at a lower layer. The thing that makes the most sense to do at the Kubernetes layer is to send out a signal that would let a controller do something at the Kubernetes layer in response to it, which is most likely to stop using the volume or, you know, begin migrating the data off the volume to another place.

A: Yep, that makes sense.
B: I'm just saying, like Nick, and I think there was also someone on Twitter who brought up their use case; our team also has some use cases as well. In those cases, mostly it is to delete those volumes. So those would be the ones who created those volumes.
A: And any updates here, or just...

H: On track. Well, what we're hoping for is to see more actual implementations of volume populators. So, for example, creating volumes across namespaces is an example of the populator being used for something useful. So the more of those we get, the easier it will be to make the case that it should go GA.
A: So I guess that's a good call-out for the group as well: if you are interested in writing a volume populator, reach out to Ben. A volume populator is basically something that can pre-populate a volume with data, instead of the volume being completely empty when it's created. You can imagine whether it's, say, a GitHub source, or, you know, we have volume snapshots as data populators. You can imagine anything that kind of pre-populates data into a volume, and we want this to become a generic pattern where you can write your own data populators to populate arbitrary volumes. So hopefully we'll have enough examples of these that we can drive this to GA soon.
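The pattern being described here is the AnyVolumeDataSource feature: a PVC's `dataSourceRef` may point at an arbitrary custom resource, and a populator controller watching that kind fills the volume before it is handed to the workload. A sketch, in which the `GitRepo` kind and its API group are hypothetical examples of a custom populator:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prepopulated-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  # A populator controller watching the (hypothetical) GitRepo kind
  # notices this reference and writes the repo contents into the volume.
  dataSourceRef:
    apiGroup: populator.example.com
    kind: GitRepo
    name: my-source-repo
```

Core Kubernetes only validates and binds the claim; everything about how the data gets into the volume is left to the external populator controller, which is what makes the pattern generic.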
B: There's the project that Masaki is working on; actually, his company has a working populator approach.

A: Yeah, maybe, Ben, that can be used as one example.
A: All right, let's move on to the object storage API, COSI. So the KEP is in review; any new changes here?

B: I think we are just waiting for the PRR review now. Michelle reviewed it from the API side and already gave it a look, so I think we're getting really close this time. Nice.

A: It might actually hit alpha this time; that's awesome, yeah!
A: All right, next up is change block tracking. Xing, you want to give an update on this one?

B: Yeah, so we had a review meeting in the CSI community sync-up and recorded some feedback. There are a few folks who are basically updating the KEP, and we actually discussed this again in yesterday's Data Protection Working Group meeting. So yeah, they are working on the KEP and working on the PoC.
A: Sounds good. The next item is runtime-assisted mounting; any update on that?

J: Yeah. So I updated the KEP and got some review from Jan. A couple of things: I know we were trying to aim for alpha, but it has some potentially controversial CSI changes; if time permits, I can go over that at the end of the meeting today. Otherwise, I was curious: can I just submit the CSI changes as PRs on the CSI API and discuss them there? Because the next meeting for CSI is beyond the enhancement freeze date.

H: We can do one-off CSI meetings for special topics, but nothing should stop you from just pushing a PR and calling people's attention to it.
A: All right, thank you, Deep. The next item is CSI Proxy for Windows, transitioning to privileged containers. Mauricio, are you on the line?

A: I see; do you want to commit for this release, or...?

D: Before I do that, maybe I'll confirm with Mauricio about this piece. (Got it, yeah.) Okay, I think I'll... basically.

A: All right, cool, sounds good. Thank you, Jing. Next up is node expansion secret; anyone have an update on that?
B: So this one is almost done, because the PR was there last time at the end of 1.24; we were trying to get an exception and didn't get it. So now that PR is merged, it's code complete; yeah, code complete. And Humble also updated the KEP.
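For context, the node expansion secret KEP wires a per-StorageClass secret reference through to the CSI driver's node-side expansion call. A sketch of the StorageClass parameters involved; the driver name and secret names here are made up:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-expandable
provisioner: csi.example.com            # hypothetical CSI driver
allowVolumeExpansion: true
parameters:
  # Secret handed to the CSI driver during node-side volume expansion,
  # e.g. credentials needed to resize the filesystem on an encrypted volume.
  csi.storage.k8s.io/node-expand-secret-name: expand-secret
  csi.storage.k8s.io/node-expand-secret-namespace: default
```

This mirrors the existing provisioner/attacher/resizer secret parameters, just extended to the NodeExpandVolume step.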
A: Oh, sounds good. Thank you, Jan. The next item is CSI core bugs and issues. We weren't entirely sure if there are any core bugs and issues that need to be fixed, but just in case, we copied the item over. Is Matt or Jiawei on the line, or anybody who has a status update on core CSI migration?

E: I think we were supposed to move this to GA last cycle, but it didn't happen, so I think we need to update the KEPs. But I think for them we are pretty much... I think there was one bug fix that was pending, but I believe it merged; I have to double-check.
A: Okay. Then we can talk about CSI migration for individual plugins. So, vSphere: what's the plan, what's going on there?

B: Yeah, so we will try to bring this on by default, basically still staying beta. Last time we couldn't do it, but I think this time we should do that.
A
Sounds
good
next
is
azure.
Azure
was
already
g8.
If
I
remember
correctly
so
I
think
we
can
drop
that
one
azure
file
we'll
check
with
andy
on
status.
Anyone
have
the
latest
on
azure
file.
A
Okay
and
for
gcega
plan
would
be
to
move
from
beta
to
ga.
Anyone
have
a
status
update
on
that.
A
I
don't
think
matt
wong
has
the
status
update
for
gce
michelle
do
you
know.
E: I need to double-check with him. There was one version-skew issue that was caught; it was a version incompatibility between the external provisioner and Kubernetes.

A: And that was specific to GCE? Yes? Got it. All right, thanks, Michelle. Next is AWS Windows support going GA, and I think the status is probably the same: check with Matt Wong. Anyone have an update on that one?

A: Then we have Ceph RBD and CephFS.
B: Oh, actually, Tara... you know the name. I think she should still be the owner; I talked to her at the end of 1.24.

A: This is staying in alpha; anyone have an update?

B: Yeah, it's going to be staying in alpha, so I'll try.
A: Great. Then we have control of volume mode conversion between source and target PVC.

B: So, also staying in alpha. I checked with the owner, so... but we can, you know, work on adding tests and things like that, yeah, but it's staying in alpha. Okay.

H: Okay, yeah, I don't want us to fall into that trap of staying in alpha too long, but... but if it's only until beta.
A: Next, we have secret protection: preventing deletion while in use. It depends on the in-use KEP below. Masaki, you want to give a status on both of these?

B: Hey, Masaki, so, because this is owned by API Machinery, yeah, you talked to them, right? I just want to make sure that it's added in the enhancement tracking spreadsheet, but I think it should be added by the owning SIG. Okay.
A: Cool, thank you, Masaki. So the rest of these are all co-owned; they're not just SIG Storage. The next item is non-graceful node shutdown. Any updates? Looks like it was... we'll check with Matt on whether to go beta or not. Anyone have an update on that one?

A: Okay, I'm going to mark that as no update for now. Then we have volume expansion for StatefulSets, which has lots of corner cases. We'll check with Sean, Nate, or Charlini to see if she will work on this; anyone able to confirm?
B: There are existing use cases, so we added a feature gate for that, right? So it's kind of a similar case; it's a small feature, yeah.
H: Oh, I didn't realize this was on the spreadsheet. Yeah, I was just going to announce that we're having a meeting to gather the requirements for this next Tuesday. It's on the CSI, or rather the SIG Storage, calendar, at 10 a.m. Pacific, I think. The goal of that meeting is spelled out in the document linked in the section below; we're just looking to get our arms around all of the different problems.

A: Sounds good; thanks, Ben. And I think there was one more item that needed to be added: Peter's StatefulSet slices design.
A: But okay, so I'll leave it off the list. We'll switch back, and then I think you are next on the agenda, so go for it. Do you want to share your screen? Sure, yeah?

B: Oh, let me try. Okay, Peter.

H: Maybe he has the problem that Sean Chen has, where he can't share and talk at the same time. Oh, weird.

A: When he shares, he loses audio; very strange. Okay, then I can try sharing the screen for you, and you can talk through it.
I: Okay, perfect, great! Yes, if you can just scroll down to the summary and motivation section. So the goal behind this is being able to split up a StatefulSet: being able to take, say, the upper end of a StatefulSet and move it across a namespace or a cluster. The way that StatefulSets are initialized, they start from ordinal zero, and there's...

I: So the proposal here is to add some control, so the user could actually specify an ordinal and have more control over a StatefulSet being brought up in a different namespace or a cluster. So I've gone over the user story here, and this is an example: say we're taking a StatefulSet of replica five and we're migrating it.
I: So that's kind of the high-level proposal here, and I can go into a little bit more of the specifics as well. I think the main things to call out about the caveats and the non-goals: obviously, in order to have a StatefulSet supported across namespaces or clusters, there need to be a lot of other mechanics involved, like networking, orchestrating storage, and orchestrating the actual movement of the StatefulSets, and those aren't goals; they're not scoped in this KEP.
I: The API that I've proposed is having a start ordinal. Implicitly, a StatefulSet starts at zero today; having a field called replica start ordinal would enable a user to scale a StatefulSet up from the top, almost like a reverse ordering. Being able to manipulate both of those parameters, replicas and replica start ordinal, enables a user to create an arbitrary slice of a StatefulSet.
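As a sketch of the idea being proposed (the field name and placement are illustrative, since the KEP had not been written yet at this point in the discussion): with five replicas split into a lower slice of three and an upper slice of two, the upper slice's StatefulSet might look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: target-ns
spec:
  replicas: 2          # this slice manages two pods...
  ordinals:
    start: 3           # ...named web-3 and web-4 (proposed field, illustrative)
  selector:
    matchLabels:
      app: web
  serviceName: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.k8s.io/nginx-slim:0.8
```

Because pod names (and therefore PVC names) stay stable across the move, the upper slice can re-attach to the same storage identities in the new namespace or cluster.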
I: It was something I was thinking about in terms of a feature that could play well with this for migration across namespaces. So I guess the motivation for namespaces was, say, if the user is moving...

A: And you want to target... you said alpha in 1.26, not 1.25?
A: Okay, so I think there are no objections from the SIG Storage side. Your next step should probably be to reach out to SIG Apps and try to get some feedback from them, and if they're good with it, then create a KEP and you should be good to go.

E: And I believe, for the snapshot transfer, I think, you know, thinking about the security parts of it is a big part of that proposal.

B: That is the ReferencePolicy; that's actually from SIG Network. They have a v1alpha2 CRD for that; it's part of the Gateway API.
A: Okay, so I think the next step for you, Peter, is to go talk to SIG Apps, since they own StatefulSet. Okay, and sorry, go ahead.

A: Yeah, no problem. And if there are no other comments on this, we'll move on to the next item; we've got eight minutes left. Deep, you want to talk about this one?

J: Yeah, sure. Can I share my screen real quick?
J: All right, can everyone see my screen? Yeah? I guess you can see it and can still hear me. Great, okay. So this is a quick update on where we are with the runtime-assisted mounting KEP, and I just wanted to get some very high-level feedback. So far, Jan has taken a look and provided a lot of great feedback; I just want to get some overall feedback from others as well.
J: So, a quick overview of where we are today: the state is basically that filesystem mounts in Kubernetes are completely controlled by CSI plugins. This is a pretty complicated diagram, but on the right you basically have the runc model where, of course, a CSI plugin does everything. On the left,
J: you have a slightly more complicated model involving a micro-VM, such as Kata, which uses a filesystem called virtio-fs to basically project host filesystems into a micro-VM guest. This has several disadvantages from a security as well as a performance standpoint that are detailed in the KEP, and it is one of the main reasons why we have seen various cloud
J: vendors, like Alibaba and others, start doing a more direct assignment, where things move to a model where, instead of virtio-fs, you have the block device being projected from the host directly into their runtime environment, which would be a VM in the case of Kata.
J: So, in previous cycles, what we tried was to come up with a model where this delegation of filesystem mounts, from the CSI-controlled model we have today to the runtime, is done in a runtime-agnostic mode; that is, the CSI plugin does not have to be specifically aware of what the runtime is.
J: We iterated on that with SIG Node in the last cycle, and we tried to come up with a new API set called CRUST, for Container Runtime and Storage, which would sort of mimic and reflect a lot of the CSI APIs. But basically, the overall feedback was that there was an interest in coming up with a new API set that allows the kubelet to directly talk to something like a non-CRI runtime like Kata, and so,
J: the new proposal I came up with, which is currently reflected in the KEP, is a model where the CSI plugin is very much aware of the runtime on the other side. At a very high level, this model is sort of equivalent to how CSI node plugins today have to be aware of the operating system they're running on, like Windows versus Linux.
J: What we came up with is a very well-defined protocol, or a "strategy" as we're calling it in the KEP, which basically requires a CSI plugin to populate a metadata file with the underlying block device name, the filesystem type, and mount options, as well as the fsGroup and fsGroupChangePolicy for doing work after the mount has happened. Then, in response to CRI API calls, the Kata runtime basically parses this metadata file and uses it to mount the filesystem on the specified device, and then applies the fsGroup based on the fsGroupChangePolicy. But the main disadvantages with this model, without any changes in CSI or the kubelet or anything else, are that it requires a pod lookup through the API server from the CSI plugin to obtain the fsGroup and fsGroupChangePolicy details, and it does not handle subpaths properly because of the security bind mounts.
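A minimal sketch of what such a metadata file handed from the CSI plugin to the runtime might contain; every field name here is illustrative, since the actual format is defined by the runtime-side proposal rather than standardized in Kubernetes:

```json
{
  "device": "/dev/vdb",
  "fsType": "ext4",
  "mountOptions": ["rw", "noatime"],
  "fsGroup": 1000,
  "fsGroupChangePolicy": "OnRootMismatch"
}
```

The runtime would mount the named device inside the guest with the given filesystem type and options, then apply the group-ownership change, mirroring what the kubelet and CSI plugin would otherwise have done on the host.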
J: So basically, the KEP tries to address these. Essentially, what it tries to do is generalize the protocol between the CSI plugin and the container runtime through a new field called an fsDeferralStrategy. At a very high level, this can be thought of like the readOnly field today, which is used by a pod to specify how exactly it wants to mount a PVC. Similarly, the fsDeferralStrategy is a free-form field as it exists in the KEP today, sort of like fsType.
J
It
can
be
any
value,
it's
not
a
canonical
value
or
enumeration,
and
just
to
start
off,
we
are
proposing
like
maybe
it
can
be
just
set
to
kata
and
that
essentially
means
that
the
csi
plugin,
if
it
can
support
that,
will
basically
choose
to
opt
into
that
kada
strategy
and
populate
the
metadata
files
and
stuff
exactly
as
kata
proposes.
So
it's
a
direct
handshake
between
the
csi
plugin
and
the
runtime,
as
opposed
to
the
csi
plugin
staying
runtime
agnostic,
and
in
order
to
surface
all
this,
we
are
proposing
a
couple
of
new
capabilities.
J: API calls, which essentially reflect the fsDeferralStrategy in the pod spec, and finally the ability to send down the fsGroup and the change policy details as well, so that a CSI plugin doesn't have to do the API server lookup. Yeah, so this is kind of the gist of it. I would love some feedback from more members of the community if possible, and especially if you are involved with, say, another micro-VM-type runtime like
J: Firecracker or gVisor that might benefit from a model like this, I would love to hear some feedback from that direction on whether this helps or not. So that's kind of all I had. The KEP is there; the exact CSI proposals are something I'm going to set up a PR for and then basically iterate there.
J: Okay, great. I can also set up a meeting, maybe next week, send out a Doodle for it, and go from there. But yeah, I would love feedback. Thanks a lot.

A: All right, thank you, Deep, for the presentation, and yeah, let's carry the discussion on offline. There were a number of other items on the agenda today that we didn't get to, so I'm just going to mark those as "ran out of time" and we'll go ahead and move those to the next meeting. Thank you, everyone, for attending, and we'll see you next time in two weeks. Take care.