From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 20210923
Description
Kubernetes Storage Special-Interest-Group (SIG) Bi-Weekly Meeting - 23 September 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A: Hello everyone, today is September 23rd, 2021. This is the Kubernetes Storage SIG meeting. Today we will first go over the 1.23 planning spreadsheet. There is a feature blog opt-in, so there's a spreadsheet here. If you want to write a blog for the feature you are working on, please ping Saad, Michelle, or myself to get the feature added to the spreadsheet. It's open until November 2nd, which is the deadline, and then there's the next release deadline.
B: I'm just afraid there wouldn't be anything new; it would just be the same thing. You still have some...
C: Excuse me, sorry if I just comment: I just seem to find those blog posts often become the sort of canonical point that people see for a feature, so even an update that it's beta would be helpful. Otherwise, I feel like oftentimes people assume something is still alpha, because that's the only blog that they see.
A: Okay, I—
C: Just my two cents there, sure.
A: Yeah, that makes sense. It might be good to, you know, add those — the populators other people have written, right.
C: We have time. I think this is definitely in an at-risk state, though, just because we're getting sort of backed up behind some other things.
C: Yeah, yeah. But just in case anyone's concerned that it's at risk, please reach out to me and we can discuss and see if we can figure out options.
C: Oh, I mean, I think if there's anyone who would be super concerned or upset if we did not make GA in 1.23, we should talk.
C: I think that's fairly likely at this point, and so if there are concerns, yeah, please reach out, and perhaps things can be shifted on our end.
A: And the next one is volume group; this one is still in design. There are some comments on the KEP; I just addressed them before the meeting. So basically right now it's just continuing to address comments. And the next one is the CSI iSCSI driver. Humble said he couldn't make the meeting, but he actually provided an update: the PR is up for review.
A: Is Saad on the call? So should we just send it out to SIG Storage, or is there anything else we need to send it out to...
A: And the next one is volume health additional metrics. So I pinged the owner; he said he's going to work on this, so I'll ping him again and see if he has started. And then the next item is the volume health reaction. Nick actually has a draft, but he was saying that he got some other use cases from some other people and he's going to sort that out. So this is in progress.
A: I think last week, basically — oh, I think we talked about this new thing. What is that called again? This is for the transfer, right, the transfer of the bucket — sharing.
B: It's for, like, cross-namespace delegation, right. There are some features in other areas, outside the storage area, where people are looking at ways of allowing people to access objects outside their namespace through the use of these delegation objects, and it's an approach that may be appropriate for COSI, or it may even be appropriate for SIG Storage, like for sharing snapshots.
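For context, the delegation objects mentioned here are along the lines of the Gateway API's cross-namespace reference grant/policy resources: the owner of the target namespace explicitly allows objects in another namespace to reference its resources. Below is a rough Go sketch of what a similar delegation type for sharing snapshots across namespaces could look like; the type and field names are illustrative assumptions, not an agreed SIG Storage or COSI API.

```go
// Illustrative sketch only: a ReferenceGrant-style delegation object that a
// snapshot owner could create to allow objects in another namespace to
// reference its VolumeSnapshots. Names and fields are hypothetical.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// SnapshotReferenceGrant lives in the target namespace (where the snapshot
// exists) and lists which namespaces/kinds may reference objects in it.
type SnapshotReferenceGrant struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec SnapshotReferenceGrantSpec `json:"spec"`
}

type SnapshotReferenceGrantSpec struct {
	// From lists the namespaces and kinds allowed to reference objects in
	// this grant's namespace (e.g. PVCs in namespace "app-team").
	From []ReferenceGrantFrom `json:"from"`
	// To lists the local kinds (and optionally specific names) that may be
	// referenced, e.g. VolumeSnapshots.
	To []ReferenceGrantTo `json:"to"`
}

type ReferenceGrantFrom struct {
	Group     string `json:"group"`
	Kind      string `json:"kind"`
	Namespace string `json:"namespace"`
}

type ReferenceGrantTo struct {
	Group string `json:"group"`
	Kind  string `json:"kind"`
	// Name is optional; empty means "all objects of this kind".
	Name string `json:"name,omitempty"`
}
```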
B: We have to figure that out, but it doesn't fundamentally address, you know, the problems we're struggling with on the COSI side, which are: how do you deal with brownfield situations? How do you deal with cross-cluster sharing situations? So I don't know. A lot of work is going on, but I don't know if we see an end point where we can say we're done.
A: Yeah, but at least for this one we can actually look at this proposal — I think it's the Gateway API; they are already using this, right? So yeah.
A: Okay, yeah, I think that's something. I think they are discussing that in another SIG — sig-auth, I think, yeah. Okay, so we'll see how that one's going. I think another thing, I believe, is also that he flattened the BucketAccessRequest, right; I think there are some changes to the API design.
A: Okay, next one is changed block tracking. So yesterday in the data protection group meeting we had a discussion about this, and there is a draft KEP. There are some concerns about this; I think there are some questions on whether, if we have this design, other vendors would be interested in implementing it. This is trying to come up with a common API, right — we'll have those definitions in CSI and try to use this. Is anyone interested?
A: Do you have any ideas?
D: So I'm still working on the KEP. There are a couple of things that came up while I was addressing the last set of comments, so I'm working through those. Hopefully we'll be in a shape to ask for more reviews next week.
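For readers unfamiliar with the proposal, "changed block tracking as a common API" means a vendor-neutral way for backup tools to ask the storage layer which blocks differ between two snapshots. The following minimal Go sketch is only an illustration of that idea; the interface, method names, and parameters are assumptions for explanation and do not reflect the draft KEP's actual design.

```go
// Hypothetical sketch of a common changed-block-tracking (CBT) API that a CSI
// driver could implement; illustrative only, not the draft KEP being discussed.
package cbt

import "context"

// BlockDelta describes one changed extent between two snapshots.
type BlockDelta struct {
	Offset uint64 // byte offset of the changed extent on the volume
	Length uint64 // length of the changed extent in bytes
}

// ChangedBlockService is what a backup application would call, regardless of
// which storage vendor backs the volume.
type ChangedBlockService interface {
	// ListChangedBlocks returns the extents that differ between baseSnapshotID
	// and targetSnapshotID, paged via startOffset/maxResults.
	ListChangedBlocks(ctx context.Context, baseSnapshotID, targetSnapshotID string,
		startOffset uint64, maxResults int32) ([]BlockDelta, error)
}
```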
A: Next one is CSI proxy for Windows, the transition to privileged containers. So this is — we're going to do a design.
D: Yeah, Mauricio's work is ongoing, pretty much. We are working on what the new API would look like to enable the transition from the CSI proxy binary to the privileged support in Windows. Okay.
A: Yeah, but the CSI driver does not have to, like, implement the new interface again, does it? Is this — no?
D: It's just, like, keeping the set of changes minimal, okay. So basically Mauricio is working out what that transition would look like with GCE PD.
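As background, the "privileged support in Windows" referred to here is presumably Windows HostProcess containers, which would let a CSI node plugin perform mount operations directly instead of calling the csi-proxy binary over named pipes. The sketch below only illustrates the standard corev1 pod security settings involved; the actual CSI proxy API changes are still being designed, so this is not the new API itself.

```go
// Illustrative sketch: the corev1 settings a Windows CSI node plugin would use
// to run as a HostProcess container instead of relying on the csi-proxy
// binary. Background for the discussion, not the new API being designed.
package csiwindows

import corev1 "k8s.io/api/core/v1"

// hostProcessPodSpec returns a pod spec for a node plugin running as a
// Windows HostProcess container (alpha in Kubernetes 1.22).
func hostProcessPodSpec(image string) corev1.PodSpec {
	hostProcess := true
	runAs := "NT AUTHORITY\\SYSTEM"
	return corev1.PodSpec{
		// HostProcess pods must use the host network namespace.
		HostNetwork: true,
		SecurityContext: &corev1.PodSecurityContext{
			WindowsOptions: &corev1.WindowsSecurityContextOptions{
				HostProcess:   &hostProcess,
				RunAsUserName: &runAs,
			},
		},
		Containers: []corev1.Container{{
			Name:  "csi-node-plugin",
			Image: image,
		}},
	}
}
```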
A: Thank you. Next one: CSI migration — officially deprecate cloud provider plugins. Is the owner of this one here?
C: For the cloud provider plug-ins — yeah, I don't know what the state of that one is.
C: Yeah, this is still on track to be on by default, and still beta, in 1.23.
A: And the next one is CSI migration for Ceph RBD. I think we're only doing Ceph RBD — I don't know. Okay, I'll leave the CephFS one here, but because the KEP is only for RBD, I'll just add another.
A: And so Humble gave an update; he said it looks good — tested create, delete, mount.
H: Yeah, so the PR is out, and I'm in the middle of running it and doing tests. So I need to complete the integration tests and then it will be ready to merge.
A: Next one is controlling volume mode conversion between source and target PVC. So Ronald is back from vacation and he is looking at it again. There were some comments last time from Xiangtian, so he's going to address those and add more details to the document, and then we can ping the security team to take a look.
A: All right, thank you. Next one is the feature we co-own with SIG Auth: user ID ownership of ConfigMaps and Secrets — preserve the default file mode bits set in atomic writer volumes. Does anyone know the status of this?
A: And the next one — oh, non-graceful node shutdown. So not much update from my side, but I think Jin and Haman were syncing up to actually look at whether we need to make some enhancements on the graceful node shutdown side, so we can get an update from them, maybe next time.
A: Next one is enabling user namespaces in Kubernetes, so the IDs get shifted.
A: This one is also under review. Does anyone know any status on this? Otherwise, I'll just note that there's no further update here.
C: Yes, this is hopefully on track. I'm talking with my implementation reviewer to make sure there's consensus on that, and it should be — oh yeah, so this is on track to be alpha.
C: So it's someone from SIG Apps, Kenneth Owens — Owens: O-W-E-N-S.
A: All right, thank you. And the next one is volume expansion for StatefulSet. So Shalini submitted a KEP; we're waiting for Hamid to review it — I'll ping him again.
A: Next one is the ContainerNotifier for application snapshots, which we co-own with SIG Node. Actually, no update — basically it's still in design; the KEP didn't get merged.
I: Basically, what we are seeing — and it has been there since 1.17 — is that some of our customers, really big customers, mount EFS volumes as part of a cron job, and then after a while they have to recycle their nodes, and usually they get a lot of "unable to attach or mount" errors.
I: What I saw was that there were some dangling mount points, and if you scroll to the very end — I think that's the most important part — I added more logs, and I saw that the reconciler was already reporting an operation with that name as still executing.
I: So one thing is to dig inside our CSI driver. The second question — I don't know the code base that well, but that's why I wanted to understand — is whether we can have some sort of timeouts on unmounts as well, when the unmount volume function is being called.
I: Yeah, I shared it in the chat — this part returns an error if the first subdirectory path does not match up. I was wondering: if the path does not exist — because there's an os.Remove of the full container directory path immediately afterwards — if the subpath does not exist, should we move ahead with the cleanup?
I: These are the three things that I was thinking, but overall I wanted to get some ideas on how we can mitigate this or make it better from the EFS side, or if it could be — yeah, I'll hand it over.
G: I'm looking at this issue, but I can't reproduce it. It seems that you have really a lot of volumes mounted to a single node — like thousands, maybe tens of thousands — and under that load my container runtimes melt.
G: So I had different issues than you, and at this scale — thousands, tens of thousands of volumes — you may be the first who is trying something like that, and I'm not sure anybody did any testing at that scale.
G: It could be that it's an EFS issue. There are several issues mentioned in this ticket, and some of them are like that. Our reading from /proc/mounts is kind of wrong, but I don't know any better solution, so I'm sorry, but that's the best we can do with /proc/mounts. And if you can't get consistent content from /proc/mounts, then these things tend to heal themselves in the next attempt — we retry with exponential backoff, and eventually, usually very quickly, you should see it recover.
G: The CSI driver gets the NodeUnpublish call, and it doesn't know anything about the timeout on the kubelet side, so it tries to unmount the volume — it issues the unmount call — and if that gets stuck somewhere on the cloud side, then it will get another NodeUnpublish soon, and that could get stuck too, and you will get a lot of unmount calls piled up in the CSI driver, to the point that it's too much for the kernel, or for EFS, or for something.
G: So what I would recommend is to check what happens on the CSI driver side — like, whether it gets the NodeUnpublish or not.
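One common way CSI drivers protect themselves against the pile-up of retried NodeUnpublishVolume calls described above is to keep an in-flight set keyed by volume ID and return gRPC Aborted for duplicates, so only one unmount per volume runs at a time. The sketch below shows that generic pattern; it is not the EFS CSI driver's actual code.

```go
// Sketch of a common CSI driver pattern: reject concurrent duplicate
// NodeUnpublishVolume calls for the same volume with codes.Aborted so retried
// kubelet calls don't pile up behind a stuck unmount. Generic example only.
package driver

import (
	"context"
	"sync"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type nodeServer struct {
	mu       sync.Mutex
	inFlight map[string]struct{}      // volume IDs with an operation in progress
	unmount  func(path string) error  // performs the actual unmount
}

func (ns *nodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
	volID := req.GetVolumeId()

	ns.mu.Lock()
	if _, busy := ns.inFlight[volID]; busy {
		ns.mu.Unlock()
		return nil, status.Errorf(codes.Aborted, "an operation for volume %s is already in progress", volID)
	}
	ns.inFlight[volID] = struct{}{}
	ns.mu.Unlock()

	defer func() {
		ns.mu.Lock()
		delete(ns.inFlight, volID)
		ns.mu.Unlock()
	}()

	if err := ns.unmount(req.GetTargetPath()); err != nil {
		return nil, status.Errorf(codes.Internal, "unmount failed: %v", err)
	}
	return &csi.NodeUnpublishVolumeResponse{}, nil
}
```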
I: Yeah, I'll verify that again. I remember the people who wrote the EFS driver code said that they're not getting the NodeUnpublish calls, and that's why I linked to that subpath Linux code — because of the third tab, the last tab. Sorry.
I: If one of the subpaths is cleaned up but the container directory is still present, then would it lead to some sort of — would it never reach NodeUnpublish? Let's say you have 10 subpaths; in the first call two of them get removed...
G: So if one of these unmount cleanups fails, then we need to try again, because we can't call NodeUnpublish if we know that something on the node still uses the volume.
I: Ah, okay, yeah. I'll have to do a bit more digging on this, but I'll try to get more information and probably talk to you on the ticket or the channel.
G: What I remember about EFS is that it was sometimes pretty slow unmounting volumes, and in the container I could see unmount processes firing up slowly, but it took, I don't know, a few minutes to recover. It wasn't hours, but I didn't have thousands or tens of thousands of volumes mounted to the same node.
A: Okay, so maybe you can try to check and see why.
A: Thank you both. Okay, so then we have — I just want to mention this driver, the NVMe-oF driver. I think — I actually forgot — Joe, I think, was here a couple of weeks ago; he mentioned that he wants to donate this CSI driver, so he submitted this issue here. So we are going through the review process and trying to get this decided.
A: So he will be maintaining the CSI driver after it is donated. If anyone else is interested in helping out, you can either reach out to one of the SIG Storage leads or you can reach out to him.
A: And this one — I actually mentioned it last time — so for the CBT, I added the link; this is the data protection group agenda. We asked who is interested in having the CBT as a common API, so I listed the people who signed up to say they are interested in this. If you are interested, please feel free to add your company's name here; we just want to see who is interested in using this.
A: And then, KubeCon is coming up, so I'll be there. I know Sid says he is also attending, and there will be a contributor summit; we'll have a meet and greet. That's actually on Wednesday — it's actually not on Monday. So that's when the regular conference starts, on Wednesday; I think it's probably before the breakout sessions.
A: I don't have the details yet, but I'll be there.