From YouTube: Kubernetes SIG Storage Meeting 2021-07-15
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 15 July 2021
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.n8qgrbxikh0
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: So, today's agenda: we are going to go over 1.22 planning, then we're going to go over a design that is currently in progress, the namespace transfer, and then discuss the miscellaneous issues down here. So, for 1.22, the important milestones: the most recent was the code freeze for 1.22.
A: That happened last week, July 8th. This deadline means that any features we were working on for 1.22 must be completed and merged by that date, and so we'll go and get a status update on what features made it and what features did not. Upcoming deadlines to be aware of: July 15th is test freeze. This means any test-related changes for these features, or any tests in general, should be completed by July 15th.
A: That is today, in fact. So, if you have any pending changes, please make sure they're LGTM'd and approved by end of day today. The upcoming deadlines are docs PR reviews: if you have any features that are going into 1.22, the associated docs must be ready by July 20th, which is next week, and should be merged a week later on the 27th.
A: Similarly, the feature blog review needs to be completed by the 27th. The feature blog accompanies the 1.22 release; the Kubernetes team does a set of blogs that go out with the release highlighting different features.
A: If you think there is a feature that you've been working on that you'd like to also write a blog for, please add yourself to the spreadsheet here. Looks like we've got a couple of proposals from the storage side already: one for volume populator and one for COSI. With that, let's jump into the spreadsheet and start getting some status updates.
A: First up, we have delegate fsGroup to the CSI driver instead of kubelet; Hemant's working on this. Samantha, you want to give an update here?
A: Is there any leftover docs or anything like that that we need to track?
B: Yeah, there is a placeholder docs PR open for review, and it will need to be updated before the deadline.
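For context on the feature being tracked here: how fsGroup ownership changes are applied for a CSI driver is declared on the CSIDriver object. A minimal sketch, assuming the standard `fsGroupPolicy` field; the driver name is made up and the snippet only models the manifest, it does not talk to a cluster:

```python
# Illustrative sketch: a CSIDriver object whose fsGroupPolicy tells the
# kubelet how fsGroup ownership/permission changes are applied.
# The driver name "hypothetical.csi.example.com" is invented.
csi_driver = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "CSIDriver",
    "metadata": {"name": "hypothetical.csi.example.com"},
    "spec": {
        # "File": the kubelet recursively chowns/chmods the volume to the
        # pod's fsGroup. The delegation feature discussed in the meeting
        # lets a capable driver receive the group at mount time instead.
        "fsGroupPolicy": "File",
    },
}

def fsgroup_applied_by_kubelet(driver: dict) -> bool:
    """Return True when the kubelet (rather than the driver) applies fsGroup."""
    return driver["spec"].get("fsGroupPolicy") == "File"

print(fsgroup_applied_by_kubelet(csi_driver))  # True
```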
A: The next item is CSI online/offline volume resizing and volume expansion: update the KEP and fix issues; copy allowVolumeExpansion from the storage class to the PV. Hemant, or anyone else who has an update?
D: I think that one is in 1.23; it's not for...
A: Merge, okay. And looks like the next one is the same story: we missed 1.22, moved to 1.23, and it's being redesigned. Yeah.
A: All right, thank you, Xing. Next up, from Humble, we have CSI in-tree read-only handling and...
F: ...on the sidecar. So, yeah, I think we have some more time left, but I think we'll be able to make it here.
A: And mark that as no update; we can get an update hopefully next time. Also from Patrick: PVC inline ephemeral volumes working with the CSI driver. The last status was that a bug fix for the volume limits attach code was being reviewed. Any updates on that since?
A: Okay, we'll keep it moving. The next item is pod spreading over failure domains. Xing?
D: Yeah, this one has no updates. And then the next one, okay, the warning group: there are some new review comments that I have not had a chance to address yet, so I will need to update the KEP to address those comments.
A: Okay, thank you, Xing. Next, from Christian, we have CSI out-of-tree: moving the iSCSI driver, fit and finish, image building, testing, CI/CD, documentation.
F: Yeah, right, right, question... yes. So I have been testing this and fixing bugs a bit heavily, because that driver is very early at this stage, so there is somewhat to figure out. But I think next week I'll start filing PRs for the fixes I have done, so we can continue on that. We haven't made any release on this driver yet, though, so we'll continue working on that.
A: Okay, next up we have moving the Samba CIFS CSI driver to GA and helping with the flex volume deprecation, assigned currently to Andy and Julie. The last update was: the Linux part is stable, GA was cut, Windows is a work in progress. Any updates since?
A: Okay, sounds like there's progress being made here. Thank you, Mauricio. Next up is sending out deprecation notices for flex volume. The last status here was that a message was drafted and shared with Michelle and Matt for review, and it was awaiting review. Any update since then?
A: Okay, sounds good. Next up is PVC / volume snapshot namespace transfer. I saw that there was a design meeting this Monday; unfortunately I wasn't able to make that one, but it looked like good progress was made. Mustafa, you want to give an update here?
H: I added, under "design meeting", the name of the feature and the link for the doc I shared during the meeting. Yeah, and everyone pretty much agreed on the CRD approach. We are not there yet; a next step is that I will be syncing with Michael Henriksen, who is from Red Hat.
H: He also started raising this topic from his side, and yeah, we are still not discussing concrete details yet, but I believe we can use the CRD approach with an external controller. I don't want to take too long; we can discuss it and then decide.
H: On the issue around secrets: actually, we did not discuss this yet. We wanted to focus in the beginning on the PVCs, and maybe volume snapshots, as a first step, and then we can try to figure out how to find a workaround.
I: Right, but when we investigated this feature earlier (I'm sorry, I missed the meeting on Monday), one of the sticking points was that PVCs inevitably end up having PVs that point back to secrets that are in the same namespace as the original PV or PVC.
I: And if you just move the PVC to another namespace, all the secrets remain in the original namespace, and we were never able to figure out what to do about that. It seemed like a pretty big sticking point, totally orthogonal to what the UI should look like for actually doing this: just the fact that the way the provisioner sidecar works is that it refers back to secrets in the namespace.
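The sticking point described here can be made concrete: a CSI PersistentVolume carries secret references that pin a specific namespace, so a naive PVC transfer that only rewrites the claim leaves them dangling. A minimal sketch, with invented driver and resource names, modeling only the manifest shapes:

```python
# Sketch of the problem: the provisioner sidecar fills in secret refs on the
# PV that name a concrete namespace, so re-namespacing the PVC leaves the
# secrets (and the refs) behind. All names below are invented.
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "pv-demo"},
    "spec": {
        "csi": {
            "driver": "hypothetical.csi.example.com",
            "volumeHandle": "vol-123",
            # Typically derived from StorageClass parameters; note the
            # hard-coded namespace.
            "nodePublishSecretRef": {"name": "mount-creds", "namespace": "team-a"},
        },
        "claimRef": {"name": "data", "namespace": "team-a"},
    },
}

def dangling_secret_refs(pv: dict, new_pvc_namespace: str) -> list:
    """Secret refs that would still point at the old namespace after a naive
    transfer in which only the claimRef is rewritten."""
    refs = []
    for key, val in pv["spec"]["csi"].items():
        if key.endswith("SecretRef") and val["namespace"] != new_pvc_namespace:
            refs.append((key, val["namespace"]))
    return refs

print(dangling_secret_refs(pv, "team-b"))  # [('nodePublishSecretRef', 'team-a')]
```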
H: Yeah, can you please add your note to the doc that I opened for this feature? It's just important, you know, to get input from anyone who had... you know.
A: All right, thank you both for the update and the comment. I'm glad we've got traction here, so let's keep that going. Next up is CSI volume health. The last status update for CSI volume health was: working on writing down the details on the various requests and use cases.
D: Yeah, so I need to schedule a meeting to discuss this, so I'll try to find some time next week.
A: Okay, thank you, Xing. Next up is volume populator data source. Ben?
D: Ben, we want to cut the release of those two repos for 1.2, right?
I: We do, yes, and that's a whole other effort, because the release tools integration was only done for one of the repos. But first they need to be updated to accommodate the new alpha feature, and that needed to merge before that could happen. So now that it's in, we can refactor those to rely on the new dataSourceRef field instead of dataSource and get them updated.
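The refactor Ben describes moves populators from the core `dataSource` field to the new alpha `dataSourceRef`, which, unlike `dataSource`, may reference custom resources in arbitrary API groups. A minimal sketch of the two fields; the populator CRD kind and apiGroup below are invented for illustration:

```python
# Sketch: a PVC that names a volume-populator custom resource via the new
# dataSourceRef field. The apiGroup/kind are made-up examples.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "restored"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "1Gi"}},
        # dataSourceRef relaxes dataSource's validation: any apiGroup is
        # allowed, so a populator CR can be referenced here.
        "dataSourceRef": {
            "apiGroup": "populators.example.com",
            "kind": "HypotheticalPopulator",
            "name": "seed-data",
        },
    },
}

def populator_ref(claim: dict):
    """Prefer the new field, falling back to the legacy one."""
    spec = claim["spec"]
    return spec.get("dataSourceRef") or spec.get("dataSource")

ref = populator_ref(pvc)
print(ref["kind"])  # HypotheticalPopulator
```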
A: All right, good progress, Ben, thank you. Next up is COSI. The last status here was: Sid is putting together a KEP, and there were continuing discussions on the design. Any new updates here from anyone closely involved with the COSI project?
D: I think Sid is trying to get an API review, but I don't know; I haven't heard from him on the status.
A: I'll copy over the last status; it seems like that's still applicable. Next up, we have change block tracking for the data protection working group. We've had no updates for a while; anything new here?
D: Yeah, so I think Fun is trying to schedule a meeting to continue the design discussion, so I think we're going to discuss the design in our next data protection working group meeting, which is the Wednesday in two weeks. So if anyone is interested, please join the meeting; there will be some show-and-tell, and, you know, if you can join, that'll be great.
A: Sounds good. And I don't know, Matt, if you're on the line; this might be a good one for you as well.
K: So we released v1 of CSI proxy yesterday, so that's awesome, and the next step is updating documentation, or rather creating documentation.
A: All right, that's a pretty big milestone; congratulations to the team. This has been a work in progress for a long time. CSI proxy enables CSI drivers for Windows. So if you have a CSI driver that you would like to work with Windows, please work with the CSI proxy team: try to consume CSI proxy and use it with your CSI driver.
D: Windows support was planned for the end of the year, and then there's another issue, which is support for NFS. I think that's still not supported here, so we're trying to figure out when that can be supported. But that's not a driver issue, it's not a CSI driver issue, it's a vSphere issue, so there's a bug there that we are not sure when it will be fixed yet.
A: Next up we have Azure Disk and Azure File. Andy or Michelle, any updates on those?
A: All right, thanks for that update, Hemant. Next up is GCE: beta and on-by-default. Matt doesn't look like he's on the line. Mauricio, do you have anything new?
A: Okay, we're going to mark that as no update. And CSI migration AWS: the last status we got here was that a PR was out for Windows support and there were bug fixes going on for topology. Any progress since then? Any updates from anyone?
A: Okay, we'll keep moving along. The next items are CephFS and Ceph RBD CSI migration. Humble, any updates?
F: Here, so I had dropped an email to the Kubernetes and SIG Storage mailing lists, but unfortunately there is no response on the thread, so we don't have a consensus on this yet. Our remaining considerations are also ongoing, and hopefully we'll be able to get the final code in and target maybe the next release.
A: Okay, all right, thank you for following up, Humble. The next item is control volume mode conversion between source and target PVC. The last status here is that a design was being drafted. Any updates since then?
D: Yeah, so we actually had a review meeting in yesterday's data protection working group about this. Rona has this draft design, and we reviewed that, and yes, I think it looks good so far. So he's going to address the comments, and then once that's in good shape, we will contact the security group just to see if they have any concerns before we turn this into a KEP.
A: Awesome, all right, thank you, Xing. Next up is secret protection: finalizers to prevent deletion. The last status update here was that concerns on the KEP are still outstanding. Is that still the case?
M: Yeah, actually the concerns are not resolved yet, but I did some updating of the implementation of the PoC code.
A: Okay, we'll get an update on that from Jiawei, hopefully next time. Next up is, with SIG Node, ungraceful node shutdown. The last status here was: there was concern from James on both the KEP and the CSI spec, which were outstanding; synced up with Yassine, who is looking into bare-metal issues and looking at graceful node shutdown. Any updates on that since?
D: So Yassine said he's going to follow up and give an update on those comments, but I have not seen anything yet, so I will need to ping him.
D: So I have been looking at the graceful node shutdown code to see if we can just leverage that code to do something, if we are trying to narrow down the scope, say, if we only want to handle the real shutdown case.
A: Got it, all right. Thank you, Xing, for the update. For the next few items here, we have enable end user... sorry, enable user namespaces in kubelet, so UIDs get shifted (rootless mode). This is with SIG Node. Hemant, do you have any updates here?
A: Okay, no worries. I think this is probably going to get moved to the next release.
A: The next item is, with SIG Apps, addressing the issue that PVCs created by a StatefulSet will not be auto-removed. Matt was helping drive this; he was working on it as of four weeks ago. Any updates here? Anyone see a PR come through, anything?
D: I think this one didn't make it, because I think the API-side changes got merged, but then the controller side needed some approval from someone from SIG Apps, and it looks like it didn't get anyone to review that, so it missed the deadline. So I'm not sure what to do with the API-side changes; that part is actually in.
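For context on the API change under discussion: the StatefulSet auto-removal feature adds a retention policy controlling whether PVCs created from volumeClaimTemplates are deleted. A rough sketch of the field shape, following the KEP's naming; treat the exact names as illustrative rather than confirmed by the meeting:

```python
# Sketch: a StatefulSet with a PVC retention policy. Per the KEP, the policy
# has two knobs, each "Retain" (old behavior) or "Delete".
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",
        "replicas": 3,
        "persistentVolumeClaimRetentionPolicy": {
            "whenDeleted": "Delete",  # remove PVCs when the StatefulSet is deleted
            "whenScaled": "Retain",   # keep PVCs for scaled-down replicas
        },
    },
}

def pvc_survives(sts: dict, event: str) -> bool:
    """True if PVCs are kept for the given lifecycle event
    ("whenDeleted" or "whenScaled")."""
    policy = sts["spec"].get("persistentVolumeClaimRetentionPolicy", {})
    # No policy at all matches the pre-feature behavior: always retain.
    return policy.get(event, "Retain") == "Retain"

print(pvc_survives(statefulset, "whenDeleted"))  # False
print(pvc_survives(statefulset, "whenScaled"))   # True
```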
A: Okay, I'll take a note to follow up offline with Matt.
D: So Shalini is trying to catch up on the KEP; she picked this up, so she's trying to submit a new KEP based on the original one. I think I synced up with her a few days ago; looks like she's still working on it.
A: Thank you, Xing, for that update. Next up is execution hook for application snapshot.
A: The design still needs help with review, and they're being very conservative, so yeah. Next up is, with SIG Architecture, splitting the mount utility into its own repo. The last status update here was that a deprecation notice was added for the old repo.
A: The next item is, with SIG Scheduling, prioritization on volume capacity. The last status update we got was a month ago: some remaining PRs were being updated. Any updates since?
D: Yeah, yeah, it's GA already. I think everything that is needed is in, I believe, but it's good to double-check with Michelle.
A: All right, thank you all for the updates. Let's switch back to the agenda doc. So first up we have Mustafa with the volume snapshot transfer; looks like you've captured the meeting notes here from the discussion on Monday.
H: So, as you will see, there's a first attempt, a second attempt, and a third attempt, because there were three different people working on this topic. The first one was proposing, just as you see, using annotations on the PVC. The second one was proposing using CRDs, for PVCs only. The third one was using the same approach, CRDs, adding volume snapshots (and correct me, Christian, if I'm wrong; I think he was here, yeah), and everyone agreed on the call about using the CRD approach.
H: Still, we need to clarify many things, for example: shall we use an in-tree controller or have an external controller? This was a proposal by Michael Henriksen in the second attempt. He said that we can use the same approach, CRDs, using an external controller; that would be much more flexible for adding things in the future. But yeah, it's still open for discussion.
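To make the CRD-plus-external-controller shape being debated more concrete, here is a purely hypothetical sketch: a namespaced "transfer request" object that an out-of-tree controller would reconcile by rebinding the PV and recreating the PVC in the target namespace. Every name below is invented; no such API exists at the time of this meeting:

```python
# Hypothetical transfer CRD instance (all names invented for illustration).
transfer = {
    "apiVersion": "transfer.example.com/v1alpha1",
    "kind": "HypotheticalPVCTransfer",
    "metadata": {"name": "move-data", "namespace": "team-a"},
    "spec": {
        "source": {"kind": "PersistentVolumeClaim", "name": "data"},
        "targetNamespace": "team-b",
    },
}

def controller_plan(req: dict) -> list:
    """Steps an external controller might take for one transfer request."""
    src = req["spec"]["source"]["name"]
    dst_ns = req["spec"]["targetNamespace"]
    return [
        f"detach claimRef of the PV bound to {src}",
        f"recreate PVC {src} in namespace {dst_ns}",
        f"rebind the PV to {dst_ns}/{src}",
    ]

for step in controller_plan(transfer):
    print(step)
```

The appeal noted in the meeting is that nothing here touches core API validation: the CRD and controller can evolve independently of kube-apiserver releases.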
H: Here, so actually Masaki's implementation was added in my document. If you go back to my document, I will show you where; he has a PoC already.
H: So if you go to the third attempt, and then, yeah, number two, you will find the PoC link here, and below are the things that he added to the PVC spec.
M: Yeah, and I added a comment on the KEP about what complexity will be introduced by using the CRD approach, so please check it and see if it can be solved.
I: I added my notes at the bottom of that doc, talking about the issues related to PVC secrets, what happens to them when you transfer a PVC, and all the problems that come from not dealing with them.
M: Actually, my PoC approach doesn't implement secret handling, but I have an idea for resolving it, so I'll try.
A: Okay, how do we want to collaborate moving forward? Do you want to do another design meeting to talk through these ideas? Or, Mustafa, what's your plan?
H: Yeah, I actually need to sync with Michael Henriksen, or, yeah, we can also make another meeting for this to, you know, discuss all these things that were added today. So I think I will end up opening another KEP with changes, and I would like to have input from everyone if possible.
D: I sent that invite to the mailing list, but, you know, sometimes it just automatically bounces back.
A: Yeah, I think the other thing is we can just add it to the SIG calendar, as long as it's on the same calendar.
A: Hopefully people will be able to see it. But go ahead and send out a Doodle poll, Mustafa, and see if you can get at least the core people, Masaki, Ben, and anybody else who's interested, to find a time that'll work for them, and we can get this conversation going. Thank you for your work on this. Next up we have, from Deep, runtime-assisted mounts to be added to the design discussion. Oh yeah, so you want to talk about this, Deep?
K: Sure, if you can just open up the doc. So the context for this is: we are trying out some scenarios based on micro-VMs, specifically Kata, and one of the things we came up with there is that in the Kata community there's this kind of preference to be able to mount the volumes within the micro-VM context, right? Because what happens with CSI, of course, is that the volume gets mounted on the host, and then something like virtiofs is used to kind of project it in.
K: Maybe this diagram will help. And he commented that, you know, it should be done the right way, which is through some level of kubelet involvement, making the signals pretty definitive between the various components involved, so that a CSI plug-in may not want to, or maybe may want to, yield the mounting process to the CRI runtime, which might eventually get to the OCI runtime.
K: We thought we could get a KEP going around it, and started off with this sort of design, with some thoughts around potential enhancements to some of the CSI messages, and definitely, you know, enhancing the CRI mount message to include some of the details around what needs to be mounted, so that a CSI plug-in can fill the NodePublish and essentially say: hey runtime, when you get the mount, make sure you do this mount with these details.
K: So yeah, I do call out about four different alternatives besides the main design, so we can definitely consider other alternatives too, and it would be great to have more comments on the doc. But yeah, pretty much all the operations, all the main CSI node operations, are things that I'm specifying be delegated to the plugin and the runtime to basically coordinate amongst themselves.
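The coordination being proposed can be sketched, with the strong caveat that none of these message or field names exist in CSI or CRI today; this only models the flow Deep describes, where the plugin, instead of mounting in NodePublish, returns enough detail for the runtime to perform the mount inside the micro-VM:

```python
# Purely hypothetical sketch of runtime-assisted mount deferral.
# "deferral_classes" stands in for pods whose RuntimeClass requests that the
# runtime (e.g. Kata) do the mount inside the VM.
def node_publish(volume, pod_runtime_class, deferral_classes=frozenset({"kata"})):
    if pod_runtime_class in deferral_classes:
        # Yield the mount to the runtime: describe it instead of doing it.
        return {
            "deferred": True,
            "mount_info": {
                "fs_type": volume["fs_type"],
                "source": volume["device"],  # e.g. a block device to attach to the VM
                "options": volume.get("options", []),
            },
        }
    # Normal path: the plugin mounts on the host as usual (elided here).
    return {"deferred": False}

resp = node_publish(
    {"fs_type": "ext4", "device": "/dev/hypothetical0"}, pod_runtime_class="kata"
)
print(resp["deferred"])  # True: the runtime would perform the mount in the VM
```

This also illustrates the per-pod gating K mentions later: deferral is chosen per pod/runtime-class pairing rather than blanket-enabled for the whole node.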
K: It's mainly the mount step around NodePublish where, due to some security issues that Patrick identified, we figured it might be better to have the runtime control that and provide, like, extremely specific directions: hey runtime, given this kind of mount spec from OCI, make sure you either mount this file system on a specific block device, or mount, like, an NFS share in the VM.
K: So, in parallel to the CSI proxy mechanism, yes: pretty much all the CSI node operations will be done by the CSI plugin in coordination with the runtime, using, you know, some kind of a private interface. So for Kata, for example, I have a link in the design to a CLI that was kind of prototyped to enable some of this, and that is sort of parallel to the proxy approach, right, where the CSI plug-in is delegating a lot of the operations to a proxy.
I: But with, like, NFS, you just can't do that. You have to do something much more complicated, and it's going to be different for every style of CSI plug-in, and unless you have something centralized that understands all that, it's kind of hopeless to do something other than just let the CSI plugin do the mounts.
A: I am a little afraid of CSI proxy getting too large, because it is effectively a whole other API that we have to maintain.
I: Right, right, but I mean, then it means that any attack performed on a node plug-in lets you take over the node, which is a giant security hole. You know, you have a hundred different vendors implementing plug-ins that all have different security holes, and it's a Swiss-cheese kind of approach, yeah.
A: All right, so we have a trade-off between an API that we have to maintain southbound, and kind of playing catch-up potentially, versus, yeah, a security vulnerability.
K: Right. Like, one of the other goals as part of this design is also to enable regular mounts to happen, and only do this deferral to the runtime if a pod with a specified runtime class comes in and says, you know, I want this deferral behavior.
K: So we want to be fairly flexible in terms of allowing this to be enabled on a pod-PV pairing basis, rather than, you know, having it kind of blanket so that every pod on a certain node is using this deferral, which I think is more the model with CSI proxy, where on Windows we pretty much require that the CSI plugin use the proxy, because there's no other way out.
L: Deep, this does have a goal of, like, letting users attach or mount volumes at runtime, like while the pod is running, right? Or am I reading this wrong?
K: I didn't consider that use case. Does Kubernetes support that?
K: That's a great point. So I think this is kind of, I guess, our first baby step into what we can achieve through potentially expanding the CRI a little bit, and, you know, the next step here is obviously also going to SIG Node and getting this queued up, and one of the alternatives could be to see...
K: ...if, you know, we can have other mount messages be passed through after the sandbox has started up. That enables what you described, but that's not part of the initial goal.
L: One more issue that we are going to have is, like... I know that Kata containers don't support SELinux, for example. I think it supports fsGroup, but, yeah, basically we'll have to figure out the permissions of the volume.
A: Yeah, that's a really interesting proposal. How do you want to proceed with this, Deep?
K: I'm open to suggestions. Since this involves quite a bit of SIG Node, I also do want to get this queued up in their massive queue of things, and besides that, should I send out a Doodle around further discussions on this? Yeah.
A: All right, folks, if you're interested in this discussion or the previous one, keep an eye out on the Kubernetes storage mailing list for the Doodles, and please respond to those if you're interested in attending the meetings. The last item here is: failing tests need to be fixed. Xing, did you add this, or...?
E: As a team, we added these items to the agenda. The first one is an issue in master-informing that we think is related to SIG Storage. There was a pull request that hopefully fixes the metrics, but the error message is still there, and we added it to the agenda in order to look for someone to take a look at those issues.
A: Got it, and it looks like it was Patrick who was taking a look at these.
A: Got it. Does anybody on the call have bandwidth to help take a look at these? Looks like these are high-priority issues.
A: Or has Patrick not been engaged? Is this just more of an FYI, Rodolfo, or do you need additional help?
A: Got it, got it. Is there anybody on the call who's interested in helping test with the Windows environment? (Yeah, I can do that.) Okay.
A: All right, was there anything else that you needed resolved?
A: Thank you for attending and for surfacing this.