From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 20210701
Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) Bi-Weekly - 01 July 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A: Hello everyone, today is July 1st, 2021. This is the Kubernetes SIG Storage meeting. Today we will go over our 1.22 planning spreadsheet, and after that I think we have a couple of things on the agenda, so we will look at those after our regular planning. Also, as a reminder, next Thursday, July 8th, is the code freeze. So if you have something that you want to get into 1.22, then you need to get that merged, the in-tree code. Okay, so let's look at the spreadsheet.
C: My question is about this one.

A: That's okay, we can check with him.

C: I was just wondering, for this one: is there a deadline, like next week's deadline? Does that apply to this one? So I was just wondering.
A: Okay, thank you, and the next one: the CSI online resizing, only expansion. Oh, and this one, I think... did you also have an update on this one? I think we don't really have an update on this one, right? Blocked; it's blocked. Next item. All right, so it's, I think, the same as before.
A: It's probably the same as before. Then: storage capacity tracking.
A: No one seems to know... okay, I think this is probably just bug fixing.
A: Okay, it's probably the same. I think I'll just say there is no update, unless someone knows, because I think Yang is not here and Patrick's not here today. Okay, so this one, yeah, the spreading over failure domains one: this one depends on the next one. I don't have an update on this one, so I'll try to update the KEP.
D: Yes, apologies. There's still the open PR that Humble's been working on to fix the NodePublish calls, so that PR is still in progress and needs to be merged first. Okay.
A: Next one: send out the deprecation notice for FlexVolume. Oh, okay. I think I need to ping Michael, or is Matt here? I think I'm waiting for a review; I dropped a message and am waiting for a review.
A: Okay, so I think there are some comments on that KEP. There was a question for Saad; I think he is out this week, so we can ping him next week, and then we can discuss what the solution is to move forward.
D: I did not finish updating the doc and writing down the use cases, so I'll continue working on that.

A: Volume populator, then.
E: I have not yet gotten it to pass prow, because I have discovered that when you add a new API field there are all kinds of other things you need to do to prevent tests from failing, and I'm working my way through that list of things. The PR is there; I've added Tim Hockin for review. I haven't gotten reviews yet, but I'm still focused on just fixing everything that is failing in prow as a result of having a new API field.
A: Yeah, yeah, and it's Thursday, right, so yeah. Hopefully you can get the tests fixed, and then... well, I think even right now you can probably ping Tim to start reviewing, right? Because...
A: So the next one is COSI. I don't know if Srini's here to give an update on COSI.
A: Okay, I think Monday's meeting was cancelled. Yes.
A: Yeah, we actually have one meeting after this one. I have not seen a KEP, because I think Jeff, I see, had closed the KEP.
E: Yes, Sid is rewriting the KEP. For those that don't know, the original author of the KEP had transitioned off the project, and Sid had inherited ownership of it. He was just sort of tweaking the existing KEP, but after receiving review on it for the 1.22 cycle, he decided it'd be better to just start over and sort of redesign the KEP from the ground up, to focus on the right things, incorporating feedback from API reviewers. A lot of the old KEP was focused on stuff...
A: Yeah, so we'll check with him after this meeting.
A: And the next one: changed block tracking. So we have not really made any progress. I think Fang is still working on fixing the document part, but not really the design part yet. But I think we should get back to this.
F: Hey, this is Deep. So we just released RC1 last week and are marching towards v1, and so yeah, things are looking pretty good. If anyone has any feedback from using it, maybe for the CSI vSphere port, for Windows, or for AWS, that'd be great. We already have quite a bit of feedback from GCE PD and Azure.
A: Okay, no update. And then we have a few CSI migration related items; maybe we don't have updates. Joey's not here; maybe they're just the same as before. So if anyone knows, just speak up; otherwise I'll just put no update here. The next one is vSphere. So what I know is that Divyen has a PR to update the topology label, and that's merged. Raw block support we'll add really soon, and Windows support is still outstanding.
A: Oh no, no, this is a separate one actually, sorry. So those are the two things that are currently not in the CSI driver that are preventing us from turning this on by default.
G: I have an update about Windows. This is Mauricio, and I've been working on enabling a lot of tests that were skipped in the e2e test grid for the GCE PD CSI driver. Well, it's not related only to that driver, but all of the...
G: Cool, yeah, that's my update. And I think that in our migration tests for GCE PD CSI all of the tests were passing. There were a few that were skipped, but, as I said, I'm going to add what is needed to make them visible in the script, yeah.
A: Well, there's no update; looks like there was a PR out from last week's update. And the next one is CephFS and Ceph RBD CSI migration. So I believe there is an email out, so maybe we're still waiting for a response.
A: Okay, next one is: control volume mode conversion between source and target PVC. So Ronna is trying to draft some design, so we'll see how that goes; we'll check with him, but he has something out.
A: So I think it's probably still the same: there are some concerns from Jordan that we still need to address.
A: Next one is user ID ownership in ConfigMaps and Secrets. So, okay, Jelly's not here, so I assume this one does not have any update. Non-graceful node shutdown, okay, yeah: so we talked about this in the CSI community meeting, but I think James is not there; he has some concerns.
A: There are comments on that PR and the KEP, but he's not there, so we still did not get those addressed. And then I talked to Yassine, who was the original author of the KEP, about those new comments, and he said he's going to look into those concerns regarding bare metal, because he doesn't think those are problems.
A: So I will ping him later and see if he found out anything. I'm also looking at the graceful node shutdown KEP and implementation to see if there's anything that we can leverage there, because I think they actually added a node shutdown manager in kubelet; it actually knows if it's really a shutdown. So if we are going to narrow down the scope, as we discussed in our previous meeting, then maybe we could look into that and see if we can add anything.
A: Okay, and then the next item is: enable user namespaces in kubelet, so your IDs get shifted. Okay, looks like we don't have any update on this one; we didn't last time either. Our next one is: PVCs created by StatefulSet will not be auto-removed.
A: No? Okay, because even in the previous release this one was getting very close as well. Volume expansion for StatefulSet: it's Shalini who is looking at this, and I think she has some questions; she's still working on updating the KEP. And the next one is container notifier.
A: The next one is: the kube utils mount code split to a new repo. Do we have Srini here?
A: I'll just say no update. And the next one is prioritization and volume capacity; I'm not sure. Okay, there are some PRs that have been updated, so I don't have an update on this one. Let me see, she's not here, so I'll just put no update here. And the last one is the CSI service account token. I think we are trying to bring this to GA. All right, I'm not sure what the status is, since Michelle's not here.
A: Okay, that's all for the spreadsheet. I'll go back here. So we have a design proposal from Alex.
A: Do you want to talk about this?

C: Yeah, hi everybody, I'm Alex from Red Hat. I'm currently working in the KubeVirt community, and I'm trying to solve a problem that I would call PVC locking; you can see the current proposal there. So, briefly, to summarize the problem: basically, we are creating a Tekton pipeline, or creating pods, that have VMs inside, where the disk is stored on a PVC.
C: And we are trying to find a mechanism to lock the PVC to avoid data corruption. I know there is this new access mode, ReadWriteOncePod; however, KubeVirt uses shared volumes for virtual machine migration, so this access mode is a little bit too restrictive and doesn't apply to this use case.
C: So this is done at the controller level. Basically, you extend your controller, creating this new CRD and checking the status; if the PVC lock has been acquired successfully, then basically you can continue creating the pod.
C: This is just a brief summary of the proposal and the problem. My questions are: do you see any other way to solve the problem, or do you have any feedback?
C: Yes, right now it's completely separate; it could be just a custom resource that you create if you need it.
A: Internally, do we... I guess my question, when I was reading this: let's say we have a controller, right, hosted by us, but then I was wondering who is going to use it. I mean, we can of course have a controller to apply the locks, but then, in your case, I know you said KubeVirt will be using this, but in our case... that's the part I'm not quite sure about. So I see you have this example of Deployment here.
C: I mean, that could be some custom controller; in the case, for example, of Tekton.
C: You could add a custom task that creates this resource, and then, in order to perform some disk operation on the PVC, you check whether the lock has been acquired, that is, whether the status of this PVC lock is successful, and then you continue with the pipeline. For the entire pipeline your PVC is locked, yeah.
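The gate Alex describes, checking a lock object's status before letting the pod or pipeline task proceed, could be sketched roughly as below. The `PVCLock` kind, its `status` fields, and the API group are hypothetical illustrations for this sketch, not the actual schema from the proposal doc.

```python
# Hypothetical sketch of the PVC-locking gate described above.
# The PVCLock shape (spec.pvcName, status.phase, status.holder) is an
# assumption for illustration, not the design from the proposal.

def may_create_pod(pvc_lock: dict, holder: str) -> bool:
    """Return True only if `holder` has successfully acquired the lock."""
    status = pvc_lock.get("status", {})
    return status.get("phase") == "Acquired" and status.get("holder") == holder

lock = {
    "apiVersion": "example.io/v1alpha1",   # hypothetical group/version
    "kind": "PVCLock",
    "spec": {"pvcName": "vm-disk"},
    "status": {"phase": "Acquired", "holder": "tekton-pipeline-1"},
}

print(may_create_pod(lock, "tekton-pipeline-1"))  # True: safe to create the pod
print(may_create_pod(lock, "some-other-task"))    # False: lock held elsewhere
```

A custom Tekton task could create such an object at the start of a pipeline and perform this check before each disk operation, as described above.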
A: Yeah, so I think normally we need to have a use case inside Kubernetes to bring this in. That's the one thing that I still can't figure out yet, because I can clearly see a use case for KubeVirt, maybe, but we don't know exactly how that is used. So if we say, hey, there is a Deployment, there is a use case for Deployment, then I think that makes sense.
A: ...like an internal one, I mean, a SIG Storage owned CRD or controller or something like that. But otherwise, if we are only adding this lock, adding the CRD, but we're not using it, and it's only for other projects to use, then in that case normally that should be...
C: Yeah, that's exactly it. I mean, if you can see, or are aware of, any possible use case... of course, I am aware of the KubeVirt one. I could see maybe some analogy if there are workloads that use shared storage: use cases where ReadWriteOncePod is too restrictive because the storage maybe has to be attached to multiple nodes, or maybe you want to guarantee that the PVC is owned by a controller and the pod level is too restrictive.
C: So I know it's a very generic description, but I was hoping maybe somebody else... yeah.
C: I just wanted to mention, you can have a look at the proposal, and if you have any comments, please feel free to just comment in the Google Doc.
A: Yeah, definitely; this is definitely interesting. Everyone, please take a look and add your comments there, and if you have any other concrete use case, then we can see if we should bring this into Kubernetes. Okay, thank you.

C: Thank you.
A: Okay, now the next topic; this is from Ben. Ben, do you want to go over this?
E: Oh yeah, thank you, Xing. So this is something I wanted to socialize among this group. It's not a specific initiative that anyone's working on yet, to my knowledge, but we've had various problems over the last couple of years with user ID ownership of files in PVCs, and with the ability of pods to read and write files in a PVC, I think mostly on container runtimes that do UID shifting.
E: There have been some workarounds implemented in Kubernetes, like the recursive chown of files, like fsGroup, and the security context where you can run pods as specific users and specific groups. All these workarounds are mostly just to deal with the fact that, if you have multiple pods accessing a PVC over the lifetime of the PVC, you don't want a situation where the second or third or fourth pod can't access the files that the first pod wrote. And the underlying problem is...
E: ...that the way containers work is just sort of badly designed with regard to this specific aspect. And I've been looking for a long time for a better solution in the Linux kernel, to basically solve the problem at the root of it: the fact that when you shift a user ID in a container, you can't see the user ID that you're actually writing on the files.
E: You'll think that you're root inside the container, but you're actually, say, UID one million outside the container, and so when you write files to the file system, they're getting written as user ID 1,000,000, but you don't know that. That's really just a bad design in terms of how the container runtime deals with file systems. So what I wanted to bring to people's attention was that, finally, in Linux 5.12, which shipped this April, about two and a half months ago, a facility was added to Linux.
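The UID shift Ben is describing follows the usual user-namespace mapping arithmetic, the kind of entries you would see in `/proc/<pid>/uid_map`: each entry maps a contiguous range of in-container IDs onto host IDs. A minimal sketch, with an example mapping in which container root becomes host UID 1000000:

```python
def to_host_uid(container_uid: int, uid_map: list) -> int:
    """Translate a container UID to the host UID using user-namespace
    style mapping entries of the form (inside_start, outside_start, length)."""
    for inside, outside, length in uid_map:
        if inside <= container_uid < inside + length:
            return outside + (container_uid - inside)
    raise ValueError("UID not covered by the mapping")

# A typical single-entry mapping: container root (0) maps to host UID 1000000,
# covering 65536 IDs. The concrete numbers here are just an example.
uid_map = [(0, 1_000_000, 65_536)]

print(to_host_uid(0, uid_map))     # 1000000: "root" writes files as UID 1000000
print(to_host_uid(1000, uid_map))  # 1001000
```

This is exactly the mismatch the new kernel facility addresses: the on-disk IDs are translated at the mount, so the process's view and the file system's view agree.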
E: It applies the mapping when you access the file system, which is what you want: if you think that you're root, the file system should treat you like you're root, at least for that particular bind mount. So if we could take advantage of this facility in Kubernetes, it would solve all of the problems around user IDs and pods not being able to access files in volumes, I believe. But it's going to require changes, potentially at the CRI level...
E: ...potentially changes at the CSI level, and definitely changes in Kubernetes. And of course, because it's a brand new Linux kernel feature, it may take a while to roll out to widely deployed Linux distros. But I basically just wanted to make people aware that this is coming; this is a better way to solve all of the user ID...
E: ...you know, file system ownership problems, and I'd like to recruit some people that are interested in solving this to look at some kind of design to take advantage of this in Kubernetes, because it's in Linux now; we can start playing with it, and it'll soon be widely available.
E: I presume. And it's gone through a lot of work upstream: people have been aware of this underlying problem upstream for at least five years, maybe longer, and have been trying to fix it, and they kept getting rejected, and the design kept changing and improving, and so what has merged, I think, is actually quite good.
E: It's just a matter of figuring out how we can use it in Kubernetes to really solve the problem of pod file system access with PVCs. So yeah, I just wanted to socialize this and make people aware that the real solution to this problem is finally available to us; it's just a matter of implementing it in Kubernetes.
E: Linux kernels... I mean, it depends on the distro: some distros just ship relatively modern kernels; other distros only ship LTS kernels. But, and someone from Red Hat could correct me here, I believe Red Hat will backport features from new kernels into old kernels if they're useful enough.
E: What they won't do is backport a feature that hasn't merged to the mainline kernel yet. But now that it's in 5.12 and it's committed, presumably it's possible for someone to backport it to an LTS kernel or some more stable kernel that is shipped on, say, a Red Hat OS, or maybe it'll start showing up in CoreOS or one of the container-optimized OSes. I mean, there are all these different Linux distros that have different approaches to kernels, but it's just a matter of time.
E: It's available in some of them today, and it'll be available in more of them as we go along. And regardless, it'll take a while to roll out in Kubernetes, right? If we write a KEP now, maybe it becomes alpha in 1.23.
B: So I'd like to help; I'm just time-constrained like everyone else, so I don't know how much I can commit right now, but it's something I started to look at a little bit at least. And one thing I noticed when I was reading through some of the docs on this feature is that it has to have file system support, so it's implemented for ext4 and FAT right now.
E: Yeah, yeah, it is file system specific, but I thought that they had covered ext4 and ext3 and XFS. I could be wrong about exactly which file systems have been covered, but I think the big common ones have been, and you're right, we may need a fallback for esoteric file systems where this can't work. But we already have file system type checks when we decide whether to do the recursive chown or not in kubelet, so we could just modify those checks to opt out.
E: You know, you'd have to have some way of detecting whether the feature was even available on the platform where kubelet was running; then, if it was, you could use it for the file systems where it worked, and if it wasn't, you could just use the fallback. It'd have to be a runtime check, because it's not widely available in kernels yet.
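The runtime check Ben outlines, use the new facility when the kernel and the volume's file system support it and otherwise keep the existing recursive-chown behavior, might be modeled like this. This is a hypothetical decision sketch, not kubelet code; the kernel threshold comes from the 5.12 discussion above, and the supported file-system set is an assumption that would need verifying against the kernel.

```python
# Hypothetical decision logic (not actual kubelet code): choose between an
# id-mapped bind mount (Linux >= 5.12, supported fs) and the existing
# recursive-chown fallback discussed above.

IDMAP_MIN_KERNEL = (5, 12)
# File systems mentioned in the discussion as candidates; the exact set
# supported by a given kernel would need to be verified.
IDMAP_FSTYPES = {"ext4", "xfs"}

def volume_ownership_strategy(kernel, fstype):
    """Pick the ownership strategy for a volume given (major, minor) kernel
    version and the volume's file system type."""
    if kernel >= IDMAP_MIN_KERNEL and fstype in IDMAP_FSTYPES:
        return "idmapped-bind-mount"
    return "recursive-chown"

print(volume_ownership_strategy((5, 12), "ext4"))  # idmapped-bind-mount
print(volume_ownership_strategy((5, 4), "ext4"))   # recursive-chown
print(volume_ownership_strategy((5, 13), "nfs"))   # recursive-chown
```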
B: Right, okay, that makes sense. So I'll spend a little bit more time playing around with this when I can, and I'll probably be in touch to figure out sort of next steps. All right, thank you.
E: One more point: I gave a little bit of thought to how exactly you would implement this, and I don't want to rule out the involvement of the CRI or the CSI interfaces, as I mentioned, but I suspect that we probably don't need to; I suspect it all could be done in kubelet. But I want to keep an open mind about that until we have a PoC that sort of shows how it can be done, in particular because it relies on Linux namespaces.
E: It would be very hard to plumb a Linux namespace through CSI, yeah, and it feels like the best thing to do is let CSI just produce the base mount, and then stick a bind mount on top of what CSI gives you that performs this UID shifting, so that we don't have to deal with the concept of namespaces in CSI.
A: Okay, great. Okay, do we have anything else to cover today? All right, if not, that's it for today. Thanks.