From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 20211021
Description
Kubernetes Storage Special-Interest-Group (SIG) Bi-Weekly Meeting - 21 October 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A: There are some deadlines. The next deadline is November 2nd; that's the feature blog freeze deadline.
A
So
if
you
have
any
feature
that
you're
working
on
you
want
to
write
a
blog,
please
make
sure
you
pin
one
of
the
640
leads
to
have
your
feature
added
to
this
spreadsheet
so
that
you
can
add
a
blog
and
the
code.
Freeze
deadline
is
november
16th.
A
Do
we
have
chang
or
hamad
here
michelle
does
anyone
know
status
of
this.
A: I think he talked about this yesterday. Is it Patrick? If he's not here, I believe this is on track. It has a PR out; it just needs to be reviewed and merged.
A: And the volume group one: okay, yeah, this one, I don't have any update on. I just updated the KEP a couple of weeks ago.
D: Yeah, so we are making good progress on this one, especially on the library it is using. Hopefully we will be able to release the first version by the time of the release, so yeah, we're making good progress.
A: Okay, send out the deprecation notice for FlexVolume. So I just added this item; well, we are supposed to propose items to be added to the release notes, so I just added it there.
A
Next
one
pvc
volume
snapshot
namespace
transfer-
I
just
pinned
the
mustafa,
so
he
is
back
so
he's
still
interested
in
working
on
this.
I
think
he
just
needs
some
time
to
catch
up.
Okay,
let's
see.
C: The probability of having a secret-related problem is a lot lower, but it's not gone; we have to deal with it for snapshots anyway. Maybe we should just do PVCs too, since it's the same situation for both. I don't know; I mean, I hear this request from users a lot, that they want to move stuff across namespaces, so I'm interested in whether we're going to do just snapshots or whether we're going to do both.
A: Yeah, I think we need to take a look at that and see if we can use any of it for this, I don't know, for secrets or something. So yeah, maybe, because if there is already a proposal, we should just do something similar, right? So yeah, okay.
A: Next one, the CSI volume health metrics; it is now out for review, so yeah, we're getting some comments from the node and the instrumentation side.
A
Zengren
is
looking
at
how
you
address
those
comments,
and
next
one
is
the
wooden
house.
We
split
this
out
into
a
separate
issue
to
track
problematic
response,
so
I
have
a
meeting
with
nick
tonight,
so
I
will
discuss
what
do
we
want
to
do
for
this
one?
C
Yeah,
I
I
don't
have
any
pr's
yet,
but
I
know
the
deadline
is
coming
up.
So
I'm
working
on
getting
those
now
because
it's
we're
running
low
on
time
to
to
do
the
the
few
pr's
that
are
needed
to
move
it
to
beta.
A: And the next one is COSI. Do we have anyone here for it? I believe Tim has been reviewing it, and I think he's happy with how things are going. I think there are still a few things that need to be addressed, but hopefully we can make it.
A
And
the
next
one
change
block
trucking,
I
think
fong
is
working
on
a
cap.
I
believe
he
has
some
questions
so
yeah.
I
probably
want
to
talk
to
him
about
that,
but
he
is
working
on
the
cup
and
the
next
one
is
runtime.
Assisted
mountain
is
deep
here.
F
Yeah,
hey
so
pretty
much.
I
haven't
had
a
good
chance
to
figure
out
a
solution
to
some
of
the
outstanding
issues,
mainly
around
ss
group
handling.
So
I
guess
I'll
just
update
the
cap
and
then
gonna
have
a
discussion
around
that.
F
So
cap
is
still
in
progress,
but
no
major
progress
since
last
time.
F: And this one's basically in progress as well. We are trying out the new API and we'll update on progress.
A
Thank
you
and
then
we
have
a
few
csm
migration
related
items.
The
first
one
is
officially
duplicate
cloud
provider:
plugins
is
jave
or
matt
here.
Do
you
have
any
update
michelle?
Do
you
know
what's
the
status
for
this.
A: Okay, so: send out emails for the release note. Did we send out the emails?
A: Okay.
A: All right, thank you. And the next one is also related: the CSI migration core blocking issues. Is this ongoing, or what's the status?
A: Thank you. The next one is CSI migration vSphere. I believe this is still the same; I don't have anything new.
A
And
cesar
migration
azure
disk-
oh,
this
is
done
right.
So
I
think,
if
we're
good
with
this
looks
like
an
audio
file,
sure
do
you
know
about
audio
file.
B
Azure
file
is
waiting
for
the
fs
group
to
go
beta
before
it
can
go
beta
or
before
it
can
turn
on
by
default.
I
think.
B: Yeah, right now I think the team is working on trying to disable the PD-specific tests.
A: Michelle?
B: I have not heard any update from Matt on this. I think the last update was that they were working on adding Windows tests.
D: Yeah, this is on track for 1.23 alpha. Due to incompatibilities between the in-tree and CSI storage class parameters, a good amount of changes were required in the CSI driver; one more PR is to be merged in the driver. I have done one round of testing and it's working as expected, so by next week I think I can refresh my PR.
D: So, in short, this is on track.
D: As for the other one, I mean, the KEP is there, but I do see some hiccups, so maybe we have to move it out of 1.23.
G: Yeah, so I have an update on the PR. The only thing I had to do was run the e2e tests against the driver. I did that and the tests were successful, so I'm going to post the result, maybe today. Okay, I think it's ready for final review. Okay, awesome.
G: I also think I need to write documentation, right?
A: The next one is honor PV reclaim policy; Deepak is working on this. He actually has a PR out. Well, actually, let me see; yeah, so a PR is out for review. Well, actually, I shouldn't say just one PR; he needs to have something else, one more piece, out. He had some questions, and he has been asking Jan and also discussing them.
A: So Ronna is working on a KEP, and also I talked to Mo, who is one of the SIG Auth co-chairs, and he was suggesting that we actually bring this up at the SIG Auth meeting and talk to them about it. So we added this to their agenda for next week.
I: Yes, no update for these two weeks for both in-use protection and secret protection.
A: This is the Liens one. This is the general one, because, remember, he was trying to propose something called Liens in the API, so you can add that to any API object, I think, right? Yes, correct. So, potentially, if this in-use protection thing is merged, then we can use that for secrets or anything.
K: Yeah, okay, so this feature is about secrets, about preserving the mode of those files on the node, because we are always recursively changing the permissions and mode of those files, and the purpose is to preserve those modes so that they can be used as SSH keys and whatnot.
A: Okay, any update on this, or is it still no update? No update, no? Okay, all right, okay, thank you. And so the next one is non-graceful node shutdown. I think the status is still the same; Jing, I don't know if you can chime in, since you have been meeting up with him to talk about this.
I: Oh, I found that there is a fix for the bug in graceful node shutdown. Oh.
I: Yeah, it has already been merged.
A: Okay, thank you, yeah. Let us know if you get a chance to try it out.
A
Okay,
and
next
is
enable
user
name
space
in
kubernetes,
so
you
ids
get
shifted.
I
don't
know
anything
about
this,
so
harmon.
Do
you
have
any
update
on
this.
A
Okay,
oh,
I
believe
this
one.
Basically
right,
I
think
the
cap
is
merged.
I
believe
there's
some
yeah.
A: Okay, so this one, I believe, is going on track. Michelle, do you know? It looks like Matt is not on the call.
A: Okay, thank you. And the next one is volume expansion for StatefulSets, so I think we asked you to review this one again.
A
Yeah
this
one
is
actually
designed,
so
we
are
not.
This
is
just
reviewing
the
cap,
but
I
mean
like
to
get
this
one
going,
but
because
it's
so.
A
Next
one
is
okay,
this
is
a
container
notifier,
so
I
don't
really
have
an
update
on
this.
So
probably
I
need
to
try
to
schedule
a
meeting.
I
think
right
now.
It's
mainly
there
are
some
comments
from
sig
note
side
yeah.
So
we
need
to
schedule
a
meeting.
A: Oh, okay, so should we actually bring this one up? We should bring this one up, yeah.
A
All
right
how
about
this,
one,
jonas
and
you
know
he's
that
is
on
this
next
one.
A: Okay, going back here, there is this PR to discuss. Is Manu here? Hi.
L: Yes, so last month I actually came here for help. What we are noticing is that on an EKS node we're launching a lot of pods, short-lived pods with a lot of subpaths, and the thing that I was asked to investigate was whether the EFS CSI driver was either hanging or slow. I did investigate that, and it was neither of those; the reason being, I counted the mount and unmount calls through the logs and they were equal in number, so no hanging.
L: The second thing was that we also have a 10-minute timeout. Then I looked into it further, and what I noticed was that the reconciler was taking a lot of time to unmount these subpaths, so I sent out a PR which selectively searches /proc/mounts only for that mount point, instead of requiring a consistent read.
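To make the idea concrete, here is a minimal Go sketch of the approach described above, as I understand it from the discussion: scan /proc/mounts line by line and check only whether one specific mount point is present, instead of demanding a consistent snapshot of the whole file. The helper name is illustrative and this is not the code from the actual PR.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isMountedInProcMounts reports whether target appears as a mount point in
// /proc/mounts. It scans line by line and only looks for the one entry,
// rather than requiring two identical reads of the whole file.
func isMountedInProcMounts(target string) (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line looks like: "<device> <mount point> <fstype> <options> 0 0".
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[1] == target {
			return true, nil
		}
	}
	return false, scanner.Err()
}

func main() {
	mounted, err := isMountedInProcMounts("/proc")
	fmt.Println(mounted, err)
}
```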
L: There are some concerns about both of the designs, including mine, but we wanted to get some feedback on whether this is possible or not.
M: Yeah, I looked at this PR before the meeting. Okay, it could work. I need to check with the kernel guys whether it is guaranteed that, if a line is there, the line is always returned whole when reading /proc/mounts. But it looks like it is, so the approach could work. I will need to dive deeper into the code, but yeah.
M: And I got into a situation where kubelet was able to read everything it needed, but somehow it wasn't retrying to unmount the volumes, and there I believe there is something wrong in kubelet, not in reading /proc/mounts, but in kubelet itself, maybe in the nested operations that we use. Maybe there is some race there. But I'm not 100 percent sure that speeding up the /proc/mounts reading will help; maybe the issue will not be reproducible that often, but I still think we could have some issues in kubelet itself.
L: Yeah, so what we did, this is the fourth change that we are trying to do internally. We cherry-picked three kubelet patches: one was Jan's change to speed up the /proc/mounts reads.
L: The second one was, I think, Jing Xu's change that basically takes that piece out of the picture, and the third one was Shyam's change where we were ignoring the child process waits. Even after that, when I measured, the unmounts were still backing up. And then I also added logs around the nested pending operations, where we set pending equal to true and, at completion, pending equal to false, and those counts are also coming out the same for me.
N: So, to clarify a little bit about Manu's change, right; hey, this is Shyam. The biggest thing he changed here is basically this: once you issue the mount or unmount, after that, throughout the code, we are checking for this consistent read, right, and that part seems to be slow, because on these busy nodes we just cannot get these consistent reads. That, combined with the fact that these operations are being retried by the reconciler with an exponential backoff.
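For context on why a slow verification hurts so much here: the reconciler retries failed unmounts with exponential backoff, so every extra second spent confirming the unmount stretches each attempt as well as the waits between attempts. Below is a rough, self-contained Go sketch of that retry pattern; it is not kubelet's actual reconciler code, and the function names and numbers are illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op with exponential backoff. In kubelet the
// reconciler retries unmounts in a similar spirit, so a slow "is it really
// unmounted?" check inside op stretches every attempt and every wait.
func retryWithBackoff(op func() error, attempts int, initial time.Duration) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // exponential backoff between attempts
	}
	return errors.New("operation did not succeed within the retry budget")
}

func main() {
	err := retryWithBackoff(func() error {
		// Placeholder for: unmount, then verify via a consistent read of
		// /proc/mounts (the slow part discussed in the meeting).
		return errors.New("still mounted")
	}, 5, 100*time.Millisecond)
	fmt.Println(err)
}
```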
M: If we have a problem with the nested pending operations, maybe there is some race, then with your code it will be faster and this issue will not be reproduced, but the issue could still be there and we could hit it randomly.
L: We did not hit the same behavior at all. What we were noticing, what I saw, was that the kubelet was marked with a not-schedulable taint after six thousand pods, but I did not see any pods stuck in a terminating state, and no dangling mount points at all. So I did not see the same thing, is what I'm trying to say.
O: And did you measure how much faster it is?
L
I
could
schedule
like
four
times
more
pod
in
a
single
hour
so
earlier
I
could
schedule
like
400
pods
in
one
hour
and
now
I
can
schedule
like
2
000
pods
in
one
hour
so
actually
five
times
in
one.
K: Because there was one more PR, by cofyc, that used a watcher for /proc/mounts, and we didn't merge it at that time because we didn't think it was worth doing. It didn't really, essentially, result in a regression, but I don't know if you want to take a look at that one.
L: I think I had a look at that PR. It was probably two or three years back. Essentially it watched for /proc/mounts changes, I believe, through a simple cache.
K: Actually, according to the Linux kernel, you can poll the mountinfo file.
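The kernel does make /proc/self/mountinfo pollable: a change in the mount table is reported as an exceptional condition on the file descriptor rather than as readable data. A minimal Go sketch of that, using golang.org/x/sys/unix; this is illustrative only and is not the PR being discussed.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// waitForMountTableChange blocks until the kernel signals that the mount
// table changed. The kernel reports a change on /proc/self/mountinfo as
// POLLERR|POLLPRI rather than POLLIN.
func waitForMountTableChange() error {
	fd, err := unix.Open("/proc/self/mountinfo", unix.O_RDONLY, 0)
	if err != nil {
		return err
	}
	defer unix.Close(fd)

	pfd := []unix.PollFd{{Fd: int32(fd), Events: unix.POLLPRI}}
	// Block indefinitely (timeout = -1) until a mount or unmount happens.
	if _, err := unix.Poll(pfd, -1); err != nil {
		return err
	}
	return nil
}

func main() {
	fmt.Println("waiting for a mount table change...")
	if err := waitForMountTableChange(); err != nil {
		fmt.Println("poll error:", err)
		return
	}
	fmt.Println("mount table changed")
}
```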
M: The cache can be stale in this case. Almost every read from /proc/mounts was inconsistent; that means that during the reading, somebody else mounted a filesystem or unmounted one. That means you will have more events than reads we can do of the file, so we would be constantly re-reading /proc/mounts anyway.
N: So, one thing quickly on that note, actually, with this here: if you can go to the comments on the original one.
N: So there is one... oh, I see some newer ones, no.
N: Yeah, so I think there is one concern which I wanted to see if it is actually a legit one. Can you go further down?
N: Okay, thanks. So, basically, what we're saying is that now, instead of checking for the whole file being consistent, we will check only for this particular entry relating to the volume, the mount we care about; only that has to be consistent, right? But a race condition like this might happen. Let's say you have a file with three lines and, just as a small example, in one page you are able to read only half of this file, because the read is done block by block within the kernel.
N: So then, essentially, you'll read up to mount point a and mount point b, half of that line, and then the remainder. Now the issue is: if you don't check for the whole file being consistent, then next time, let's say you again read the first half, so you read a and b, but in between, b actually gets unmounted, right? This is, I'm talking about step four, so b actually gets unmounted, so the real proc file contents are a and c.
N: But then you go on to read the remainder of the page, which will essentially give the other half, so you end up thinking that you have a and b in your final file. So initially you thought there was a, b, and c, and now you see there is a, b. So, with respect to you, you think b is consistently mounted, while it is not, right?
N: So we can even simulate a case where it's not broken in the middle of a line, but still this happens, right? Let's say you read a and b, and then you wait for c, but by then b is already gone, and let's say some new entry d comes in its place.
N
If
that
makes
sense,
like
this
example
is,
is
yes
it's
split
in
between
the
line,
but
it
does
not
have
to
be.
We
can
create,
I
think,
a
race
condition,
even
without
that
generally.
N: Yeah, exactly. So if you can scroll a little bit down, so on the comments, even one below this one, yeah. So at the bottom, that's one of the things. I think the only thing we are able to actually guarantee is that if a line, and assuming we are able to read the whole line, if a line has been read as part of a block, then we know that mount exists. But if a line is not read in any of the paged calls, that does not guarantee that it's not there, which is the concern. So you can only guarantee that a mount is there, but you cannot guarantee that a mount is not there, with this reading block by block, and that is a problem, because, apparently, from what Manu was saying earlier, these unmounts happen in huge volumes for some of these nodes, and they apparently happen in parallel, driven by the reconciler. So it can actually make this problem harder, using block-by-block reads.
N
So
there
is
one
solution
here
which
I
want
manu
to
talk
about.
He
found
out:
maybe
that's
something
which
is
a
good
option
for
us
money.
You
want
to
talk
about
using
that
open
eight
at
two.
L
Yeah,
so
I
can
I
just
share
like
so
I
was
actually
investigating,
like
what
others
other
systems
are
doing
to
solve
this
problem,
and
let
me
share
the
link.
I
have
that
link
with
me.
L
And
I
actually
found
out
like
the
docker
code
and
what
docker
has
is
leveraging
a
new
system
open
call
called
open
at
two
and
most
of
the
changes
between
like
line
number
24
and
like
34,
so
24
to
34.
L
Essentially,
what
this
entire
change
is
doing
is
that,
if
you
have
an
absolute
path,
resolve
underscore
no
extent
would
say
that
it's
either
a
mount
point
or
within
a
mount
point
and
the
third
the,
and
if
it
returns
like
an
exterior
error,
it
means
we
are
crossing
the
mount
point.
L
So
that
would
mean
that
it
is
definitely
amount.
I
did
a
test
run
on
this
on
my
system.
I
don't
understand
all
the
syscalls.
To
be
honest,
I
started
reading
about
it
only
yesterday,
especially
specifically
open
active,
but
from
my
test
the
results
were
same
as
like
the
is
not
mount
point.
The
specific
check
that
I
was
looking
for
in
my
current
pr,
so
the
results
are
the
same.
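Here is a minimal Go sketch of the openat2 approach being described, modeled on what the Docker/moby mountinfo code reportedly does: open the parent directory, then resolve the final path component with RESOLVE_NO_XDEV and treat an EXDEV error as "this is a mount point". Names and error handling are illustrative, and edge cases such as the root path are not handled.

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"

	"golang.org/x/sys/unix"
)

// isMountPointOpenat2 checks whether path is a mount point without reading
// /proc/mounts: it opens the parent directory, then asks openat2 (Linux 5.6+)
// to resolve the last component with RESOLVE_NO_XDEV. Crossing into another
// mount makes the kernel return EXDEV, which is exactly the signal we want.
func isMountPointOpenat2(path string) (bool, error) {
	dir, base := filepath.Split(filepath.Clean(path))

	dirFd, err := unix.Open(dir, unix.O_PATH|unix.O_CLOEXEC, 0)
	if err != nil {
		return false, err
	}
	defer unix.Close(dirFd)

	fd, err := unix.Openat2(dirFd, base, &unix.OpenHow{
		Flags:   unix.O_PATH | unix.O_CLOEXEC,
		Resolve: unix.RESOLVE_NO_XDEV,
	})
	switch {
	case err == nil:
		unix.Close(fd)
		return false, nil // resolved within the same mount: not a mount point
	case errors.Is(err, unix.EXDEV):
		return true, nil // crossed a mount boundary: path is a mount point
	default:
		return false, err // e.g. ENOSYS on kernels older than 5.6
	}
}

func main() {
	mounted, err := isMountPointOpenat2("/proc")
	fmt.Println(mounted, err)
}
```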
N: So the comment there is saying it works for all kinds of mounts, including bind mounts, but definitely I think it's a good point to check that again. The main difference here, what it seems to be doing, is that previously we were reading the whole /proc/mounts file and working out ourselves whether there's a mount or not, right? Here, it's just directly trying to check for the mount point rather than relying on /proc/mounts; it's trying to openat2 it, whatever that means, but...
L: It went in in, like, 5.6. So if you look at lines 44 to 47, and then if you go down, if you scroll the screen down a bit, it actually ends up parsing /proc/mounts as a fallback if an error is returned by this, so maybe we could do the same thing.
N
Then
you
can
update
your
pr
with
this
change.
Is
there
anything
else?
Folks
think
we
should
be
thinking
about
for
this
change
like?
Is
it
safe
to
use
that
or.
A: So, okay, so, Jan, you will be reviewing the PR, right?
O
Then
the
issue,
the
risk
issue
you
talk
about
will
be
gone.
If
you
change
to
use
that.
N: Yeah, yes. It will keep trying to unmount and check whether it was unmounted, and for that check, today, we are basically doing a consistent read. What we are asking is that we need to get two reads of this whole /proc/mounts file which have the exact same contents, just to make sure that it is consistent, and then we will check from those contents that that volume is actually unmounted.
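For comparison, a minimal Go sketch of the consistent-read scheme being described here: keep re-reading /proc/mounts until two consecutive reads are byte-identical. Kubelet's real mount utilities differ in detail; this just illustrates why a busy node with constant mount churn can spin on this step.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// readProcMountsConsistently reads /proc/mounts repeatedly until two
// consecutive reads return byte-identical contents. On a busy node with many
// parallel mounts and unmounts, this loop can take many tries or give up.
func readProcMountsConsistently(maxTries int) ([]byte, error) {
	prev, err := os.ReadFile("/proc/mounts")
	if err != nil {
		return nil, err
	}
	for i := 1; i < maxTries; i++ {
		cur, err := os.ReadFile("/proc/mounts")
		if err != nil {
			return nil, err
		}
		if bytes.Equal(prev, cur) {
			return cur, nil // two identical reads in a row: treat as consistent
		}
		prev = cur
	}
	return nil, fmt.Errorf("no consistent read of /proc/mounts after %d tries", maxTries)
}

func main() {
	contents, err := readProcMountsConsistently(10)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("got %d bytes of consistent mount table\n", len(contents))
}
```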
L: I can provide an update on that. That is one of the changes that I'm trying to work on with the EFS driver team, to get it updated as well, so yeah. I know which change you're talking about.
O: So yeah, just a quick question: if the unmount command we issue is successful, why do we still need to verify it again by consistently reading /proc/mounts?
B: Yeah, I think we should potentially try to move towards using the syscalls as much as we can, instead of executing commands.
A: Hey, sorry, folks, we have been talking about that for a while and we have another meeting coming, so yeah. So please, Manu, update the PR, and then we can continue discussions on the PR. And, of course, thanks.