From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 2022-04-21
Description
Kubernetes Storage Special-Interest-Group (SIG) Bi-Weekly Meeting - 21 April 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A
Hello, everyone. Today is April 21st, 2022. This is the Kubernetes Storage SIG meeting. Today we're going to go over the 1.24 planning spreadsheet. I think we have passed most of the deadlines now; the next one is GA, but 1.24 GA got delayed to May 3rd. Then we have a few things to discuss after our regular planning.
A
Okay, so yeah. I think that is similar to what we had last time; the conformance one is actually already merged.
A
And the blog has basically just been reviewed.
C
Yeah, that one, the GA has moved to 1.25.
A
Okay, we got an update from Asaki last time when we started discussing this, so today I'll just add an update.
D
All of the beta work is done, except the blog, which is still in review.
A
Thank you. The next one is COSI; is someone here for that?
A
So I know the KEP is still being reviewed.
A
Okay, yeah, so this one actually has good progress. There are a couple of folks involved who are organizing meetings, and they are meeting regularly and working on a POC.
E
Yeah, I think this is going to basically go into 1.25 due to lack of cycles.
A
And the next one is CSI migration. Azure Disk, okay, so this is done; we already knocked it out. Azure File as well; see the remark, test done. OpenStack is done. Okay, Ceph RBD, basically, yeah, this one got delayed; CephFS is also delayed, and the last one as well. So all three of those are delayed, so just say 1.25.
A
Okay, right, so it's the same status as last time. Basically, we missed the test deadline, so move it to 1.25. Yes, oh okay, yeah, all right. Is there any update from you?
G
Hey, hi. So the status is that multiple PRs were merged and a few are left, and I think I have some test APIs to be put out for review.
A
And now, non-graceful node shutdown. Yeah, so we have the doc PR already merged, and the blog is being reviewed, so we can mark this one as done as well.
A
All right, let's see. So that's all we have in this spreadsheet. Now we have a few items.
H
Yeah, so basically, there's a recent change in the scheduler around being able to provide a pre-filter result in the pre-filter stage. It's mostly used for DaemonSet scheduling for node affinity right now, but I was kind of thinking: one of the things we see a lot with large clusters is, let's say you have a 4,000-node cluster, and then only maybe 500 of those nodes are eligible, because of node affinity, for a specific storage class.
H
Then what would happen is, if there are a lot of taints and things like that, it's very difficult for the person to figure out why the pod is not scheduling, because all the nodes are kind of mixed together, and the result isn't specific to only the nodes that are eligible because of the PV node affinity.
H
So the PR I raised here is a very simple change, which is just moving the node affinity check for PVs to the pre-filter and leveraging that instead. But I think the pushback I got from the scheduling side is that, ideally, they don't want to do the multi-node evaluation in pre-filter; they should only do that in filter. So my other idea that kind of came out of this was:
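The distinction being argued here can be sketched with simplified stand-in types; this is only a minimal Go illustration, not the real scheduler framework API. A pre-filter-style step runs once per pod and can return the candidate node set outright, which is cheap when a local PV pins the pod to a single node, while a filter-style step runs once per node and can evaluate arbitrary node-affinity terms.

```go
// Simplified stand-in types; not the real scheduler framework API.
package main

import "fmt"

type Node struct {
	Name   string
	Labels map[string]string
}

// Pre-filter-style step: computed once per pod. For a local PV the node
// affinity pins the pod to exactly one node, so the candidate set is trivial.
func preFilterLocalPV(pvNodeName string) map[string]bool {
	return map[string]bool{pvNodeName: true}
}

// Filter-style step: evaluated once per node, which is where SIG Scheduling
// prefers arbitrary (multi-term) node-affinity matching to happen.
func filterNodeAffinity(node Node, required map[string]string) bool {
	for k, v := range required {
		if node.Labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	nodes := []Node{
		{Name: "node-a", Labels: map[string]string{"kubernetes.io/hostname": "node-a"}},
		{Name: "node-b", Labels: map[string]string{"kubernetes.io/hostname": "node-b"}},
	}
	// With a pre-filter result, only eligible nodes ever reach the filter step,
	// so scheduling failures are reported against the relevant nodes only.
	candidates := preFilterLocalPV("node-a")
	for _, n := range nodes {
		if !candidates[n.Name] {
			continue // pruned before the filter step runs
		}
		ok := filterNodeAffinity(n, map[string]string{"kubernetes.io/hostname": "node-a"})
		fmt.Printf("%s passes filter: %v\n", n.Name, ok)
	}
}
```

The pushback described above is essentially that the general, non-local case still needs the per-node evaluation on the filter side rather than a one-shot candidate set.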
H
Is it possible to only leverage this for local PVs, because a local PV is only tied to one specific node? So I kind of just want to pitch this: do we have any standard around how to define the PV node affinity for local PVs? Is it using a field selector, or is it using, for example, a label selector with only one node? Because we could make it cover those cases and have this feature available for local PVs only.
H
Like, I guess, with the sig local static provisioner, it's using the node affinity label selector pinned to a single node. But I don't know if there are other CSI plugins that are using a field selector on the node name or something like that.
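For reference, these are the two shapes of PV node affinity being contrasted, sketched with the k8s.io/api/core/v1 Go types; the node name and any paths are made up. The first is the label-selector form that sig-storage-local-static-provisioner emits, pinned to a single kubernetes.io/hostname; the second is a matchFields form keyed on the node object's name.

```go
// Sketch of the two PV node affinity shapes under discussion, using the
// k8s.io/api/core/v1 Go types. Node names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Label-selector form: what sig-storage-local-static-provisioner produces,
	// pinning the PV to a single node by kubernetes.io/hostname.
	byLabel := &corev1.VolumeNodeAffinity{
		Required: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      "kubernetes.io/hostname",
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{"node-a"},
				}},
			}},
		},
	}

	// Field-selector form: the alternative mentioned, keyed on the node name.
	byField := &corev1.VolumeNodeAffinity{
		Required: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchFields: []corev1.NodeSelectorRequirement{{
					Key:      "metadata.name",
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{"node-a"},
				}},
			}},
		},
	}

	fmt.Println(byLabel, byField)
}
```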
F
Okay, I'm thinking, like, the local volumes, they don't have anything special; they just have a regular node selector. So it's weird to have a special case in the scheduler for local volumes, while the same node affinity and similar labels are used by the other CSI drivers, like for zones and regions and that kind of stuff. It's kind of weird to have it just for local volumes; they don't use anything special.
H
For local volumes, I think, from the SIG Scheduling side, the pushback I got is: if we can evaluate this in O(1), which is just a lookup, because the local case is simple, we just look up the single node selector. But for more complicated node selector matches, it's probably not going to be O(1) unless we create some fancy data structure.
H
So I think that was kind of what I thought: maybe we start with just making this for local. But that's your point, you know: it's not really something that only covers this specific type of volume.
H
Yeah, I think the pushback from SIG Scheduling is, yeah, that there could be more complicated node selector mechanisms, and that ends up just evaluating all the nodes to trim them down, and their argument is that that should really be done in filter and not pre-filter.
H
So I'm happy to kind of rework a separate PR in volume binding to just cover the local case, and then I can probably put it up for review.
A
So this is, it's like a small feature, right? Should this also have a KEP, just to explain what you're trying to enhance here?
A
So, okay, I submitted an issue to archive the subprojects that we discussed earlier. And also, I think last time we talked about this: we'll possibly have a small meetup at the contributor summit at KubeCon EU. So if you are interested in attending, here's a link for registering for the contributor summit.
H
Yeah, so this one is, so right now there is a hard-coded two-minute timeout for the CSI node client in kubelet; the context timeout is set to two minutes right now. So I was just thinking whether it makes sense to make it a parameter. I don't know how everyone feels about more parameters in the kubelet, but that would allow this to be configurable based on what kind of volumes are enabled or which CSI plugins are enabled.
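The pattern under discussion looks roughly like the sketch below: each CSI node call from kubelet runs under a context with a fixed two-minute deadline, so a slow NodePublishVolume (the mount) hits DeadlineExceeded and gets retried. This is a hedged illustration of that pattern only; the constant name, socket path, and driver name are made up, not the exact kubelet source.

```go
// Sketch of a per-call gRPC deadline around a CSI node operation.
// Names (csiTimeout, socket path, driver name) are illustrative.
package main

import (
	"context"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// The hard-coded value being discussed; making it configurable (or negotiated
// per driver) is the proposal.
const csiTimeout = 2 * time.Minute

func main() {
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/example.csi.driver/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := csi.NewNodeClient(conn)

	// If the mount takes longer than csiTimeout -- e.g. a volume with a huge
	// number of files -- the call fails with DeadlineExceeded and is retried.
	ctx, cancel := context.WithTimeout(context.Background(), csiTimeout)
	defer cancel()
	if _, err := client.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{
		VolumeId:   "vol-1",
		TargetPath: "/var/lib/kubelet/pods/example-pod/volumes/example",
	}); err != nil {
		log.Printf("mount failed (possibly deadline exceeded): %v", err)
	}
}
```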
I
Is there a specific issue that you're running into that you're trying to resolve, or is this more opportunistic?
H
So the one specific case we saw around this was fixed in 1.20, but we hit it because we were running an older version. Basically, what ended up happening is, for a specific volume that consists of a lot of files, the mount took a bit of time, so it exceeded the two minutes, and then, because of the 1.19 version we're running, it wasn't idempotent.
H
So while it was trying to do the retry, the backend, in between kubelet retries, the backend plugin actually finished the mounting. So when kubelet retried, it looked at it and said: oh, it's mounted already, so I'm not going to do anything. So it didn't really handle the SELinux part properly; we did not really propagate the SELinux context down to containerd to do the labeling, essentially. So this was just like...
I
So my two cents on adding more knobs to Kubernetes is basically that we should avoid that as much as possible.
I
Don't expose options to users unless we absolutely have to. Ideally we make the right decision on behalf of the user, and in this case hard-coded is probably not, you know, best. I think, like you said, it might be nice to be able to negotiate this with the CSI driver. So if we want to do something here, if we want to make a change, I think it would be worth thinking about how, you know, kubelet and the CSI driver could negotiate a timeout that works for that driver, and then make it dynamic.
I
Got it. Yeah, and so I think, you know, that may be another place where we could, if we go down this path, allow for configuration of all of these different parameters. But effectively, at a high level, my feedback would be: avoid making this the user's problem. It's not the user's problem.
I
This is a problem between Kubernetes and the CSI driver, and if we want to make any changes here, it should really... you know, we should build a more advanced kind of negotiation logic for negotiating the timeouts, if anything.
H
So the issue we ran into was fixed already in 1.20. It was just that I was looking at the code and I saw this and thought it might be more flexible. But I see the point around not making that something that users should need to think about, I guess, yeah.
H
In that case, though, you'd need some way to configure the controller side as well, because I feel like they should be the same, right?
F
Yeah, that's why I suggested the registrar. There is a slight, tiny protocol between the registrar and kubelet that they exchange.
F
No, no, it's not a CSI call, it's a registration call. The registrar and kubelet exchange some gRPC messages.
I
Yeah, there's effectively a gRPC API that is not CSI; it's defined by Kubernetes for different types of device plugins, and we specify one for CSI drivers, and that's the one that Jan's talking about. So it's effectively internal to Kubernetes, and we can modify it pretty easily.
I
No, I think the way that it's structured is that you can have different fields for different types of plugins, so CSI plugins have a set of fields and GPU plugins have a different set of fields.
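For reference, the handshake being described is the kubelet plugin-registration gRPC API (k8s.io/kubelet/pkg/apis/pluginregistration/v1), which the node-driver-registrar sidecar implements; CSI plugins and device plugins report different Type values through the same GetInfo call. The sketch below shows that call; the timeout field is purely hypothetical and only marks where a negotiated value could live if this change were pursued.

```go
// Sketch of the kubelet plugin-registration handshake. The commented-out
// timeout field is hypothetical, not part of the real API today.
package main

import (
	"context"
	"fmt"

	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrationServer struct{}

// GetInfo is what kubelet calls to learn about the plugin; CSI plugins and
// device plugins report different Type values and version sets here.
func (s registrationServer) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "example.csi.driver",
		Endpoint:          "/var/lib/kubelet/plugins/example.csi.driver/csi.sock",
		SupportedVersions: []string{"1.0.0"},
		// Hypothetical: a per-driver operation timeout kubelet could adopt
		// instead of its hard-coded two minutes. Not a real field today.
		// PreferredTimeoutSeconds: 600,
	}, nil
}

// NotifyRegistrationStatus is called by kubelet with the registration result.
func (s registrationServer) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	info, _ := registrationServer{}.GetInfo(context.Background(), &registerapi.InfoRequest{})
	fmt.Println(info.Type, info.Name, info.SupportedVersions)
}
```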
H
Yeah, that sounds good to me. Would I need to submit a KEP for this?
A
It's an enhancement, right, so it should go through that process. All right, and it could potentially go quicker rather than, you know, one release alpha and then the next will be beta; we don't have to wait for a couple of releases if we are good with this change.
A
Okay, so I see Gary added a topic here. Gary, do you want to talk about this?
C
Yeah, hi, yeah. Thank you. As a quick background, the Oraser file system has a cache consistency model that can survive a reboot, so our clients generally have a very large cache that's associated with a disk, you know, often associated with a disk that's mounted. We have a kernel driver that's being installed as a special resource with the Special Resource Operator in OpenShift. Anyway, I'm sorry, we had...
C
So anyway, the simple use case is that my client wants to use local volumes, but the cache is being used by our kernel module, and those have to be deployed as DaemonSets, and there's no way to realistically map a local volume to a DaemonSet, which is why I came across this earlier discussion.
C
The alternative is to kind of implement the equivalent of local volumes and maintain it in my kernel driver pod. But anyway, has there been any movement? Does anybody know if any movement has been made on this DaemonSet/StatefulSet topic?
A
Yeah, I also don't know. I remember he showed up and talked about this, but does anyone know? Yeah, I believe, do you remember? Yeah, okay, this one, right? Is this the one? Yeah.
G
I think so, but after that...
A
Yeah, so if anybody knows who was driving this, you know, they can ping Gary. Yeah, I don't know; I lost track of this one. I've not heard about it after that.
I
Thank you. As we do 1.25 planning, if anyone's interested in picking this up, it might be a good time to think about this again.