From YouTube: Kubernetes SIG Storage - Bi-weekly Meeting 2022-12-15
Description
Kubernetes Storage Special-Interest-Group (SIG) Bi-weekly Meeting - 15 December 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A
Hello everyone, today is December 15, 2022. This is the Kubernetes Storage SIG meeting. Today we will wrap up the 1.26 release planning and start 1.27 planning, and then there are a few things we will go through after that.
A
So the formal release schedule for 1.27 is not merged yet; this is a PR. According to this, the release cycle starts on January 9th, and the enhancements freeze is one month later, February 9th. So maybe it's a little early, but we can get started. I just copied the 1.26 planning spreadsheet to 1.27, so let's just start from here.
A
The first one is delegating fsGroup to the CSI driver instead of kubelet. Is Fabio or Hemant here?
A
Do we have any other items, like some tests or anything? Do we need to keep this one for that, or are we just completely done? Then we can just remove this one for 1.26.
B
Yes, I have a PR that Jan is reviewing. It should get merged pretty soon, I guess, and then we should be able to meet the graduation criteria for beta in 1.27.
A
The second one, this is the tracking issue for issues related to a single volume's mount points. Is Jan here?
A
And the next one is the volume group API. I would like to move this to alpha in 1.27. There are some comments that need to be addressed: one comment is from James, who wants to have the group-related RPCs in a separate service, so I will be working on that. Another thing is that there are some questions regarding how vendors can support the group.
C
Yeah, I mean, I think there's still one PR pending on the data populator library, but other than that it's ready to go.
A
Okay, so the next one is CSI volume health. For this one there is an e2e test out, but I think it's not ready for review yet, so let's try to target beta, but let me just see if we can get the e2e test in before that. Okay, I'll just change this one now.
A
And this one, I think, do we have anyone working on this one here? I think they would like to move this one to alpha, because they're still working on it.
A
Let's change this one then. I think this is alpha, so let's say we are targeting it.
A
The next one is the ReadWriteOncePod access mode.
C
Yeah, I can probably speak for Chris. He's been doing a lot of work in 1.26 to get the feature ready for promotion to beta, so I think we can target it for beta.
A
Maybe not, okay. So yeah, I've been reviewing this one, right. Do you know if Deep wants to target alpha in 1.27?
A
Oh okay, so maybe we'll check with them again. I know Deep has a CSI spec PR pending review, so maybe he won't want to target alpha; we'll check with him.
C
Yeah, this work is handled out of tree. There is a design proposal out for review, and right now my understanding is that they did a bunch of work in a feature branch on csi-proxy, so I think the next steps are to get the design reviewed and approved, and then they can move forward with that.
C
I guess, yeah, I don't know how many cycles they will have on it, so this could probably use some help from anyone who might be interested.
B
There was a person, Alex Mead, who was pinging yesterday in Slack about that feature. I don't know if they wanted to help or not.
A
Okay, so I'll leave this one as it is for now until we find an owner. The next one is SELinux relabeling using mount options. Jan, do we want to target beta, or what is the plan?
B
Yeah, so, okay, he's not here, but he has posted some updates, and I think the plan is to keep it in alpha. We will most likely be changing the API in the CSIDriver object, which is a Boolean right now saying whether the SELinux support is there or not. We want to remove the restriction that currently it only works for ReadWriteOncePod volume types.
B
The CSIDriver object had a field that was introduced to support this feature, called seLinuxMount, saying whether the driver supports the SELinux mount option. It was a Boolean flag, but we want to make it an enum. The purpose of the enum is this: it appears that some drivers, like ReadWriteMany drivers, support mounting the same volume multiple times with different SELinux contexts and it just works, but some ReadWriteOnce volumes that have ext4-like file systems on them can only be mounted once.
B
The plan is that a cluster administrator can define how the driver should behave. If it supports ReadWriteMany, then the same volume can be used from multiple pods with the mount option and it will all work fine; but if a driver only supports mounting the same volume with only one SELinux context, then it has to be restricted to ReadWriteOncePod. So it might require an API change, and for that reason I think we'll have to keep it in alpha.
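For reference, a minimal sketch of the field being discussed, in its current Boolean form. The driver name is hypothetical, and the enum replacement described above is still being designed, so no enum values are shown here.

```yaml
# CSIDriver object advertising SELinux mount support.
# "example.csi.vendor.com" is a made-up driver name.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com
spec:
  # Boolean today: whether kubelet may mount volumes of this driver
  # with SELinux mount options instead of recursive relabeling.
  seLinuxMount: true
```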
A
And the next one, CSI migration core, that's complete. CSI migration, this one here, that's complete too, okay. Then do we still keep all of these here? Let's see.
C
Yeah, so I think for a couple of them, it was in 1.25 where we GA'd a couple. That means they can be removed in 1.27.
A
Okay, yeah, so we removed them from here. Maybe we'll add a row.
A
There were some questions on this one, but okay, let's sort it out in 1.27 so we can get this one removed. And then there's another one.
A
It's a little early, okay. So let me just cross this out for now, and this one as well; we can cross this out.
A
Yeah, these next two I'm not sure, I need to check. Humble? He's not on the call, so.
A
And this one, okay, the next one is ROX support, so I need to check with Jerry.
A
And then the next one, controlling volume mode conversion between source and target. For this one, Raunak is still working on the e2e test.
A
Hemant, is this one already alpha? I do not remember.
B
It was alpha, but I don't know if they're planning to move it to beta. Okay.
A
This one is still targeting alpha, right?
B
One person from SIG Node presented to us how it will work with the UID integration in Kubernetes and with persistent volumes. So, okay.
B
But I don't know what they're planning yet. The feature has some blockers with the container runtime, so they're trying to fix all of that. I tried to test it and it was not working, but maybe it works now. So we'll have to kind of get all of that; I don't know, I'm not sure if they're trying to move this to beta in 1.27. I have not heard anything like that.
A
And then the next one is, okay, this is the PVCs created by a StatefulSet not being auto-removed.
D
No, they weren't; it turned out it had not been in the alpha end-to-end testing job, so yeah, there was a bug. The decision was to have it soak a bit more in there.
D
Yeah, it has an end-to-end test, but the end-to-end test had a feature flag, so it had been run manually; it wasn't getting run with the mutation detector enabled, and so it missed a bug. Okay.
A
The last one is the default storage class one; it's beta now. Do you know, Hemant, do we plan to move this to GA, or do we need more time for 1.27? I don't know if Hemant is here.
B
I think we have to wait, because it just went beta in the last release, so we'll have to wait for one release or so before we can move it to GA.
D
That's a good question; I was actually just thinking about that. I mean, we haven't started writing a KEP for it, but maybe we should. Why don't we get that on here? I think that's a great idea.
A
Okay, so let's just list it here at least.
A
So the next meeting is in two weeks; that's the last week of December.
And people will be off, so I'm going to cancel that meeting. Okay, so there are a few items here. I don't know who entered this; can you please talk about this?
F
Oh sorry, my mic was muted. Can you hear me? Yes? Yeah, so this is really quick, but essentially, over at the AWS EBS CSI driver we're implementing a feature to allow users to specify the block size of file systems, and my colleague Rashob brought up that this might want to be a shared parameter, similar to other common storage class parameters.
D
The thing I'm suggesting, and I don't think I actually have an opinion here, is just: do we want to add general options for mkfs, or have, you know, a specific block size parameter? And then, as things change in the future, I can envision that we would add additional parameters as well.
F
Yeah, I'm not committed either way on which one would be better for our common parameters.
G
Yeah, that would need to be a different parameter depending on whether it's XFS or ext. I don't know what kind of trouble you can get into if you just let people start putting in arbitrary parameters.
C
I guess, does block size factor into, you know, things like IOPS? So do we think that maybe defining a block size could be useful for the QoS stuff?
B
I guess one problem is, if you put it as a parameter, like as a field on storage classes, we'll have to also figure out how to pass it down: from the storage class we have to copy it to the PV, and then pass it in the CSI RPC call, because it's the driver that formats the disk.
G
Yeah, there is a plumbing concern there, because the FS type is consumed when kubelet does the node stage, right? By that time it's possible that the storage class has been deleted, so you need to have copied it somewhere so that it's available when you stage the volume.
B
Yeah, and then we get into the business of how we separate the mount options that are applied as mount flags from the mkfs flags, because the same RPC call, NodeStage, will call mkfs and do the mount. So it requires two separate bags of hashes, or two separate bags of fields, in that case.
E
I think the question is whether block size is so ubiquitous that it should be singled out, not just as a generic flag but, in the fullness of time, as its own mkfs flag. More esoteric, file-system-specific mkfs flags, like journaling options or the number of inodes, we probably wouldn't want to call out explicitly. The good question here is whether block size gets special treatment, because every file system, in theory, has some notion of it.
F
Yeah, I believe we were previously talking about how the storage class could be deleted or whatever, but what I was going to say is: the suggestion here would be not making it a parameter on the PVC or something like that, but just having the name in the storage class be something well known, so not the way it's done with FS type.
F
I see, we pass it in the context; that's how we're smuggling it through, so we get it when it's mounted.
B
So yeah, if we promote this to a top-level field, it has to be on the same level as FS type, and then I'm not sure if it can be in that context.
D
I didn't know that the context lived that long, you know, if you're doing immediate provisioning.
D
Oh,
oh,
this
is
the
sorry
I
think
I'm
getting
confused
about
which
good.
B
And then it gets passed as context to the NodeStage or NodePublish requests. FS type, on the other hand, is stored not top level, but one level higher: it's not in that same map of volume attributes, which can be anything generic that CreateVolume returned. So FS type is a different, separate field.
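For readers following along, the split being described shows up on a provisioned PV roughly like this. Field names match the Kubernetes CSI persistent volume source; the driver name and values are illustrative.

```yaml
# CSI section of a PersistentVolume: fsType is a dedicated, named
# field, while volumeAttributes is the opaque map that CreateVolume
# returned. Names marked below are made up for illustration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                    # illustrative
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: example.csi.vendor.com    # hypothetical driver
    volumeHandle: vol-0123            # illustrative
    fsType: ext4                      # separate field, one level up
    volumeAttributes:                 # generic map[string]string
      anyKey: anyValue                # could be anything driver-specific
```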
B
Then there's no point in moving it; you can keep it in the storage class parameters as it exists. You don't have to promote it to a top-level field.
F
I'm not trying to propose promoting it to a top-level field. The question we were asking was just whether or not it should have a well-defined name in the storage class parameters, so that it would be consistent across CSI drivers.
B
But the thing is that parameters is a map of string to string, and nothing in the parameters field has a well-known name. FS type is one level above that, and that's why FS type is named. If you want block size to be named the same way as FS type, then it has to be one level above; it cannot be inside that parameter map.
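A minimal sketch of the option under discussion, assuming a hypothetical blocksize key; no such well-known parameter name exists today, so the key shown is purely illustrative.

```yaml
# StorageClass whose parameters map carries a driver-specific key.
# "blocksize" is a hypothetical key a driver could choose to honor;
# nothing in this map has a standardized, cross-driver meaning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc              # illustrative
provisioner: ebs.csi.aws.com
parameters:
  blocksize: "4096"             # hypothetical, driver-defined key
```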
F
Would it be worth having a consistent name across drivers for it, or should we just say, you know, everything in parameters is always opaque?
G
Part of the problem is that there's not really an official mechanism for making those parameters standardized. It's just that the ones that happen to be in the sidecar code get treated as special, right? If someone pushes a PR to external-provisioner that consumes a field, that string suddenly becomes magical, but there's no official process for that.
G
If,
if
we
were
to
start
consuming
the
block
size
parameter
in
external
provisioner
like
yeah,
we
would
have
to
pick
a
string
and
that
would
become
the
standard,
but
like
I,
don't
think
we
want
to
do
that.
We
just
want
the
the
the
the
plugins
to
to
consume
it
and
do
what
they're
going
to
do
with
it
and
yeah
it's
hard
to
define
a
standard.
Then
I.
C
I think where I was trying to go with the QoS question was: if we think we could benefit from a standardized or first-class concept of it, maybe to enable some features such as QoS, then I think we can consider that as part of it.
A
Yeah, there is no consensus on making this a common name here, but we may want to take a look at this when designing the QoS. So let's.
A
Let's see, do we have another? Oh, let me share. Do you want to talk about this?
C
Yes, I added this one. Basically, some folks here might already be familiar with this, but I think this might be new for a lot of people. There is this community called Data on Kubernetes, which is basically a community of end users who are running stateful applications on Kubernetes. There are a lot of database folks there who write operators for the various databases, and they share a lot of best practices; there's a lot of good content here.
C
I thought it might be good to have sort of a more regular round table or forum between Kubernetes and this end-user community, so that we can collect feedback more regularly and maybe help drive priorities on the different projects that we might want to take on in Kubernetes. So I think we're going to schedule a first round table with the community in January, and I'm kind of going around to all the SIGs to gauge interest.
C
So if you're interested, add your name to this list, and then when we schedule the meeting in January, I think we can send out another email on the Kubernetes side to all the folks. That's all I wanted to say here.
A
Okay, I think that's all we have for today. Anything else? Okay, if there's nothing else, then we will end the meeting now. We will meet again in January. Happy holidays, everyone. Thank you, bye-bye.