From YouTube: Kubernetes SIG Storage Meeting 2021-12-03
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 03 December 2021
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.ei7op4jo4axs
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, today is December 2, 2021. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. So today on the agenda we have 1.23 planning. We actually already passed the code freeze date, which happened on November 16th, and we are very close to the release date for 1.23, so this cycle will just get a kind of final update to see what made it and what didn't. Then, at the next meeting in two weeks, we'll do a planning session for the 1.24 release. If you have anything that you want to discuss, if you have any PRs to discuss, please feel free to add them to the agenda, same with design reviews or anything else that you want to talk about.
B: So everything except one test PR is completed for beta. One test PR needs to be cherry-picked into 1.23.

B: That PR is already approved; it just needs to wait until the code freeze is over.

A: Nice, cool, thank you. I'll go ahead and mark this as complete, then.
C: John is not here. Yeah, that one is also done. Okay.

C: So he gave an update; I think he also added notes there. Okay, yeah, so his notes basically say he has a PR out and it seems to be working, he's just wrapping it up. Got it.

C: Yeah, so this one didn't make it, but I think now all the comments are addressed on the PR, hoping to get that merged early in the next cycle.
E: This one should be moved to 1.24. I am still doing the out-of-tree work, yeah, for 1.24, sort of in the interim.

C: Yeah, I don't have an update, I think. Okay, I guess he's going to get back to this one.

A: And then runtime-assisted mounting.
F: Yeah, the updates to the KEP are happening. It's coming along nicely and we continue to make progress. Okay.

A: I'll mark that as no update; hopefully we can get an update on that one. And then vSphere CSI migration is delayed to 1.24, so I'll go ahead and mark that as moved.

A: And then GCE CSI migration.
G: Yep, that is done in 1.23, cool, and we're breaking random test pipelines with it.

A: And then finally, actually not finally, second to last: Ceph and Ceph RBD.

C: Ceph RBD is done, the alpha, that one's done. Should this be tracked separately?
A: All right, the next item is honor PV reclaim policy. This was moving to alpha for the cycle; the PRs were all merged last time and the blog post was a work in progress. Any update on this?
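For context, the feature being tracked here concerns the PV reclaim policy. Below is a minimal sketch in Go, assuming the 1.23-era core/v1 types and the alpha HonorPVReclaimPolicy feature gate, of the field in question; as I understand the KEP, the goal is that a Delete policy is honored even if the PV object is removed before its bound PVC.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A PV whose reclaim policy asks the provisioner to delete the backing
	// storage when the claim is released. The HonorPVReclaimPolicy work is
	// about making this policy stick regardless of the order in which the
	// PV and PVC objects are deleted.
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pv"},
		Spec: corev1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimDelete,
		},
	}
	fmt.Println(pv.Spec.PersistentVolumeReclaimPolicy) // "Delete"
}
```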
C: Is Deepak here? If not... okay, so this one is, yeah, basically just the blog post that I need to get merged, so I think we can mark this as done. Okay.

A: Got it, okay, I'll just go ahead and mark this as moved to 1.24.
H: Yeah, actually any reason should be fine for a generic in-use protection, and for secret protection we need to define what "in use" means. So currently I assume that means used by a pod, or by a PV, or by a VolumeSnapshotContent.

C: Yeah, so I'm going to update the KEP; you're working on it, so we'll move this to 1.24.
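To make the "what counts as in use" question concrete, here is a minimal sketch in Go against the core/v1 types of just one of the readings mentioned above: a Secret counts as in use if some CSI PersistentVolume references it. The helper name is hypothetical; pod-based and VolumeSnapshotContent-based checks would be analogous.

```go
package protection

import corev1 "k8s.io/api/core/v1"

// secretUsedByPV reports whether the given Secret is referenced by any CSI
// PersistentVolume's secret references. This is only one possible reading of
// "in use" for secret protection.
func secretUsedByPV(pvs []corev1.PersistentVolume, namespace, name string) bool {
	for _, pv := range pvs {
		csiSource := pv.Spec.CSI
		if csiSource == nil {
			continue
		}
		for _, ref := range []*corev1.SecretReference{
			csiSource.ControllerPublishSecretRef,
			csiSource.NodeStageSecretRef,
			csiSource.NodePublishSecretRef,
			csiSource.ControllerExpandSecretRef,
		} {
			if ref != nil && ref.Namespace == namespace && ref.Name == name {
				return true
			}
		}
	}
	return false
}
```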
F: I took a look at their KEP, and basically I think they've broken it into a couple of phases, and for the first phase they're trying to avoid anything storage-related.

F: They're basically trying to avoid applying user namespaces, applying the user IDs, to pods with persistent volumes, basically.

F: Yeah, I think I'd better keep tracking this.
A: Got it, cool, thank you for taking a look. Sure. Next item: the issue that PVCs created by a StatefulSet will not be auto-removed.
G: And just as an aside, if anyone else is considering doing a cross-SIG feature, you should talk to me. I learned some procedural things that would have sped this up a lot.
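For readers following this item, the feature in question lets a StatefulSet declare what should happen to the PVCs created from its volumeClaimTemplates. A minimal sketch, assuming the apps/v1 field names from the 1.23 alpha (behind the StatefulSetAutoDeletePVC feature gate); check the current API before relying on them.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// Retention policy as in the 1.23 alpha API: delete the PVCs created from
	// volumeClaimTemplates when the StatefulSet is deleted, but keep them when
	// the StatefulSet is merely scaled down.
	sts := appsv1.StatefulSet{
		Spec: appsv1.StatefulSetSpec{
			PersistentVolumeClaimRetentionPolicy: &appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy{
				WhenDeleted: appsv1.DeletePersistentVolumeClaimRetentionPolicyType,
				WhenScaled:  appsv1.RetainPersistentVolumeClaimRetentionPolicyType,
			},
		},
	}
	fmt.Println(sts.Spec.PersistentVolumeClaimRetentionPolicy.WhenDeleted) // "Delete"
}
```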
A: Please show up to the next meeting; that'll be an important one for figuring out what we work on for the next cycle. If there's anything you want to work on, anything you think the SIG should be working on, that's the meeting to attend. Does that land right before Christmas? It is going to be on the 16th, so it'll be the week before Christmas. Okay, so probably not too many people will have gone on vacation yet.

A: Yeah, that's my hope, and if we see attendance is light on the 16th, we could end up delaying it to post-Christmas. We probably don't want to do it on the 30th, around New Year's, because that'll probably be lightly attended, and then the next opportunity would probably be sometime in January.
I: Last month, when I gave an update, I needed some help on the tests. I added those tests so that a reviewer can just pull this PR and run them manually as well. So right now I'm hoping to get someone from SIG Storage to take a look at it, and also possibly get it merged in December itself.
A: Okay, anyone interested in helping review this? I think Jing might be a good candidate.

A: All right, if anybody on the call is interested in testing or in reviewing this, please take a look, and in the meantime I will add Jing here as a reviewer.

I: So right now I cannot see any way of working around this production pain, and that's one of the major reasons we would like to get it reviewed as soon as possible: not just the performance implication, but also nodes are dying because of the contention, for the customers that we have.
A: Sounds good, and yeah, feel free to ping us if it's not making progress, and hopefully we'll get this reviewed and merged as soon as possible. I think getting it in early in 1.24 is a good idea, because it'll give a decent amount of time for it to bake before 1.24 goes out, so if there are any issues or regressions we can catch them.
I: That is definitely one of the things that I was thinking about; I did not remember. Thank you so much.

A: Yeah, no problem. Thank you, Manu. All right, let's switch gears; the next item is a design review. Matt Cary mentions starting to use resource limits in CSI drivers so that a driver can reserve space for future expansion; for details see the link he has posted here, take a look and comment.
G: Yeah, so thank you for the introduction; that is the sort of explanation I was after. I guess part of this is a matter of me not knowing the history of why the limit is handled the way it is. So, if I understand everything correctly, there is a resource limit in Kubernetes which is currently ignored.

G: And there is, in the CSI request, in the capacity range, a maximum-bytes field, which I guess doesn't get anything plumbed through to it. So the first question I had is whether anyone has context as to why the Kubernetes limit is being ignored, and then also whether this is a good or bad idea in general.
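To make the two limits being discussed concrete, here is a minimal sketch in Go, using the 1.23-era core/v1 types (the PVC resources field types may differ in newer releases) and the CSI spec's Go bindings. The mapping from the PVC storage limit to LimitBytes is exactly the plumbing that does not exist today; it is shown only to illustrate what the proposal is asking about.

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A PVC can carry both a storage request and a storage limit, but the
	// limit is not acted on by the provisioning path today.
	claim := corev1.PersistentVolumeClaimSpec{
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("500Gi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Ti"),
			},
		},
	}

	// The CSI CreateVolume request has a matching notion: required_bytes and
	// limit_bytes in CapacityRange. The discussion is about whether the PVC
	// limit should be carried through to LimitBytes here.
	capRange := csi.CapacityRange{
		RequiredBytes: claim.Resources.Requests.Storage().Value(),
		LimitBytes:    claim.Resources.Limits.Storage().Value(),
	}
	fmt.Println(capRange.RequiredBytes, capRange.LimitBytes)
}
```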
G: You might want the volume provisioned so that you can guarantee that future expansion is possible, but without necessarily having to pay for it up front.

L: So can I ask what it would mean to guarantee the space? I mean, if I have, say, a pool of two terabytes...
G: Yeah, so just to get into a bit more detail on our precise problem: what we're considering is a situation where the storage pool is provided by discrete instances that can then be split into shares, and each instance has a maximum capacity, say a maximum capacity of 10 TB.

G: So if you start with, say, 500 gig shares, but know that you might want to expand them to 2 TB, that means you could only fit a total of five shares on one instance, and when you allocate the sixth one you would put that on a new one, you would spin up a new instance. Whereas if you knew that expansion would not be necessary, you could pack more shares onto an instance. So this kind of gives you a knob to do a bit of a price/performance trade-off.
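A back-of-the-envelope sketch of the packing arithmetic just described, using the figures from the discussion (10 TB instance cap, 500 GB shares that may grow to 2 TB) and treating the units loosely as binary for simplicity; the helper is purely illustrative.

```go
package main

import "fmt"

// sharesPerInstance captures the trade-off above: how many shares fit on one
// backing instance depends on the size you must reserve per share, not the
// size each share starts at.
func sharesPerInstance(instanceMax, reservePerShare int64) int64 {
	return instanceMax / reservePerShare
}

func main() {
	const gib = int64(1) << 30
	instanceMax := 10 * 1024 * gib // 10 TiB instance cap (figure from the discussion)

	fmt.Println(sharesPerInstance(instanceMax, 2*1024*gib)) // reserve 2 TiB for growth: 5 shares
	fmt.Println(sharesPerInstance(instanceMax, 500*gib))    // no growth guarantee: 20 shares
}
```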
G: Yes, because each instance itself can be dynamically sized, but it has a minimum size. Say the minimum size of each instance is a terabyte; if you have a bunch of 100 gig shares, you would like to pack them all into a single instance in order to meet the minimum 1 TB size, and then, if shares needed to be expanded, you could expand the backing instance.

G: But you would not need to do that until the expansion actually occurs, so it saves cost. But because each instance has a maximum size of 10 TB, you need to know how big each share might possibly get in order to know how many shares to place on each instance.
L: I guess it's sort of pushing the requirement onto the user a bit, as opposed to something that, say, the storage system figures out.

G: Right, but the issue is, suppose you wanted 100 gig shares and there isn't going to be any expansion. Then, because each instance has a minimum size of 1 TB, you would like to put at least 10 of those shares on an instance in order to meet the minimum size. But in that case, if each of those shares wanted to expand at some point in the future past a terabyte, you'd be stuck, right, because of the maximum instance size of 10 TB.
G: Otherwise you'd be underutilizing the instance when everything is at its minimum, and so paying more, right, because you would have to pay for the 1 TB minimum. So it seems to me that that kind of trade-off can't be automatically figured out by a system.

G: Yeah, I mean, I certainly see that this is an additional complication for a somewhat niche case, so I'm definitely interested in hearing people's thoughts. Is this a reasonable thing to do, or is it a bit too niche? If it is niche, is there a different way through it? I mean, we considered other things, like having an annotation, but the problem is that annotations and such aren't passed through to the CSI request.
A: Matt, this sounds similar to a storage pool proposal we had at one point, where we were thinking about making storage pools a first-class concept and trying to figure out what operations we would expose on them, and what the relationship of storage pools would be to the volumes that got provisioned off of those pools. In this case it sounds similar: it sounds like your instances that can be dynamically sized are essentially pools, and then you can create volumes inside of them.
G: I think it would be, but I guess the devil is certainly in the details here. You know, I don't know how much of our use case is really generalizable to storage pools in general, or whether the fact that we have these minimum and maximum instance sizes is unusual.
A: Yeah, I think that's a good question. My initial gut reaction would be: if we can align this with the general storage pool proposal and come up with something generic that works for everyone, great, let's do that. The alternative to consider, I think, would be a CRD specific to your CSI driver, where it extends the basic set of APIs and lets you get this additional set of controls, potentially.
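As a rough illustration of that alternative, a driver-specific CRD might look something like the following. Everything here is hypothetical, not an existing API; it only sketches where an "expected maximum size" knob could live outside the core PVC types.

```go
// Package v1alpha1 sketches a hypothetical driver-specific CRD for the
// "reserve room for future expansion" knob discussed above.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ShareClass is a made-up object a CSI driver could watch to decide how
// aggressively to pack shares onto backing instances.
type ShareClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ShareClassSpec `json:"spec"`
}

type ShareClassSpec struct {
	// MaxExpansionSize is the size the driver should assume a share may
	// eventually be expanded to when choosing a backing instance.
	MaxExpansionSize string `json:"maxExpansionSize"`
}
```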
G: Right, so in order to do that, I guess this would require our driver reaching into the Kubernetes API.

A: I'm trying to recall who worked on storage pools last. Let's take a look at the spreadsheet. I think no one's working on it this cycle.

C: I think that one was kind of combined with the capacity tracking. Was that the one you were talking about, or are you talking about the spreading one?
C: So the spreading one, we removed it, because it has a dependency on the volume group one, right; we always said it was pending on that one. The volume group KEP mentioned it a little bit, but because it was always kind of at the end of that KEP, we always ended up saying, okay, we delayed that one. That's what happened.

A: Right, yeah. So I guess our storage pools were volume groups, and the volume groups were focused specifically on snapshots, consistency groups, and spreading. In this case I'm not sure whether a storage pool is similar to a volume group or not, but it would be worth having the discussion to see what the alignment is there. Let's see where storage pools ended up.
E: Yeah, one could imagine an implementation of spreading that just uses a selector or something; it doesn't need anything sophisticated like a group. And again, this pool concept is sort of something that exists anyway; it exists independent of Kubernetes. This is a question of whether you discriminate on it or not, and there are so many other types of groups that are either administratively defined or physically defined, and how you deal with each one depends on exactly what it is you're doing.
C: So maybe, Matt, it would be good for you to take a look at the KEP proposed by Christian a long time ago; we decided to merge that one with the work that Patrick is doing, and it talked about the storage pool. If you can start taking a look at that one and see what we can do. We had some discussions with you.

C: That one is already added, right? Patrick already added that maximum size for a volume in the CSI spec.
E: Yeah, but what I'm getting at is that there are no guarantees that you can ever expand anything unless you just make the volume bigger, right? I mean, the only way that you could in reality enforce such a guarantee would be to actually allocate the space out of whatever pool you're using ahead of time, and that seems no different from just making the volume bigger. Anything else you do risks the chance that you won't be able to expand it later, right?
G: Well, no, I mean, the use case I mentioned here is exactly a case where you do have a different allocation strategy depending on whether you want to guarantee future expansion or not.

G: Because you have to pay for the terabyte now. So you can begin with an instance which you allocate as a terabyte, and you're paying for a terabyte, and it has 10 shares each of 100 gigs. And then, if... I'm not sure.
E: I think I understand what you're saying, but that's definitely not a storage pool, right? A storage pool implies that you have some physically limited thing and you're slicing it up, and to implement limits in that scheme would really mean you basically have to reserve part of the pool for your future expansion.

G: I am not familiar with the storage pool proposal, so...
G: Yeah, I mean, I guess the thing I'm trying to do is decouple the complexities of the storage implementation from the user experience in Kubernetes. So my idea is to try to determine the characteristics that are most important for a user to specify, and then allow the backend driver to figure out whatever strategy it sees fit in order to make that happen.
E: Yeah, it's hard. I also see overlap with thin provisioning: if you wanted a 100 gigabyte volume that you might later expand to one terabyte, you could just have a thin-provisioned one terabyte volume and only use 100 gigabytes of it, and if there's a billing model that lets you pay only for what you use, then you would be perfectly happy. But there usually isn't, so, exactly.
G: So somewhere where you have heterogeneous usage of this storage, I mean, within a storage pool, I think people here understand, but if you have heterogeneous usage of the storage, sure, yeah, then you're stuck, because there really isn't a lot of information that gets plumbed from the PVC through to the CSI request.

G: Which, I think we have those, right?

G: Yeah, well, great. I'll follow up to detail exactly how the quotas work. I think that's a great point.
A: All right, so in terms of next steps here, it sounds like there are a bunch of ideas. Maybe, Matt, you can walk through some of those and put together a proposal.

G: Yeah, yep, exactly. What I'll do is add as much of this discussion as I can capture into the existing doc. I will reach out to the folks that you have suggested to me; I'll probably follow up with Zheng to make sure I have the correct folks. And then, I think the existing doc is a good place to continue the conversation, so if anybody feels like it, please proactively add comments there. As I said, I'm going to follow up. Cool.
A: Cool, thank you for bringing this up, Matt. Anyone else have any other topic they want to discuss today?