From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 20210826
Description
Kubernetes Storage Special-Interest-Group (SIG) Bi-Weekly Meeting - 26 August 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A: So there are a few deadlines coming. September 2nd, next Thursday, we have a soft freeze deadline for production readiness, meaning that if you're working on a feature that is targeting alpha, beta, or GA, you need to have the production readiness review form filled out and ping the reviewer to say that it is ready for review. And then the following week, September 9th, that's the KEP freeze deadline, so all the KEPs have to be merged by that day. And then we have a few topics today that we can go to after our planning.
B: Yeah, and this feature is going beta from alpha. I think we will have to update — we might have to update the targets of the KEP and everything, and we do need to get another production readiness review for beta. I don't know.
A: Thank you. The next one is another fsGroup-related feature, the no-recursive volume ownership change. This one is going GA. Matt or Hemant?
B: He's out? I'm not sure, okay. Yeah, so I'll follow up with him, and either of us... I'm not sure he has some time cycles, but the actual work is pretty minimal to actually move the feature to GA. But I think this also has to undergo a PRR review, because the feature is going from beta to GA. All right, so I'll sync with him and work on updating the KEP.
A: Okay, but I think the forms for beta and GA seem to be the same. I mean, if you are filling out the PRR, at least I don't see any section that is only required for GA but not for beta. But I think it's still good to... I think they still need to review it, since you're changing the target, right?
A: All right, and then I think the next one is another fsGroup-related feature, the configurable fsGroup policy in the CSI driver. This is also going to GA. So, Jonathan or Hemant?
B: Yeah, Michelle and I talked offline about this one. I just don't see a way we could support both; then it becomes like two sources of truth and it's confusing. And I think we didn't get any input from the API reviewers, Jordan and Tim, on this one. So I don't know if we'll end up copying the field from, sorry, from SC to PV.
A: All right, thank you. And then the next one is recover from resize failures. So, 1.23 alpha?
B: Yeah, so I have an update to the KEP posted, but I'm working on the state machine diagram, actually. It's a complicated feature; it requires a bit more detail, a diagram. And the PRR review was approved previously, but I might get another PRR review just in case.
E: So I think after Clinton's change the risk should be kind of improved, or avoided, but there are still some issues related to hanging on NFS, because we're checking the mount point in some places where we should avoid it, and I want to start working on that soon. There was a PR reverted before, but after the risk was resolved we might be able to merge that PR back, to solve the issue where checking the mount point gets hung.
A: Okay, thank you. The next one is storage capacity tracking. This one, okay, so it will stay in beta. Patrick, are you here?
D: Sorry, just had to find my window again with the mute button. No, storage capacity doesn't change anything; no change is planned for this cycle.
D: But GA is planned. I looked at the GA criteria again just to refresh my memory on what still needs to be done. I started a discussion thread on the SIG Storage Slack about that today. Among the points that need to be met is some user testimonials.
A: So I actually have not seen anyone collecting, like, submitting surveys or anything like that, but for snapshots, for example, I basically just asked people attending the data protection group whether they are using that feature in production or not. So that's how I got some data, yeah.
D: Or a mailing list, something like that. I'm just wondering what happens if we don't get feedback, because that really means we now depend on users to tell us, and it might just get blocked by people not acting, even though they are using it. That's my biggest concern, actually: even though it's technically ready and it is getting used, we just don't know whether that's the case.
H: This isn't really super helpful right now, but I at least know one user that wants to use the feature. They haven't yet, but yeah, I think I will at least be able to confirm one user in the future.
D: Yeah, okay. And others have fixed issues around it, so they were probably also motivated by actually depending on it. So I can go back to those people and see whether they can confirm that they are using it. The controller configuration thing, that was a PR that came from someone else who had not been involved with the KEP before. That might be two people.
D: Okay, and then I asked about testing, because downgrade testing was listed, which Michelle on Slack said can be done manually, which should be fairly easy. How far back do we need to go?
H: I think for downgrade testing, what that means is more like going back to a version where the feature is disabled, so it's more about that.
D: Okay, I've done that before. Do I need to do it again for GA, or no?
D: Yeah, I checked that when we went to beta, and it's listed again under GA. I think it's just from the template; I don't think we've discussed in detail what was meant by it. We are doing that now anyway, and I think it's not a big problem for this particular feature anyway.
D: Scalability testing: here I just asked some questions on Slack and we can follow up there. I can certainly add something to ClusterLoader to test this. Running it, though, we'll have the same issues as with all of our storage tests: you basically need to know which cluster has a storage provisioner that you can use for this particular test.
A: And the next one is the volume group. So I updated the KEP; I think there are some comments from Humble. I actually still have a question for him. I think I need to ping him to get the answer to that particular question so that I can update it again.
A: So I'll just note that we did not get the status of this one. And the next one... oh, the deprecation notice for FlexVolume. I think there are still some unresolved discussions in that Google doc where we have this message drafted. So I think maybe I'll ping Michelle and Jan, and I'll see if we can finalize this.
A: And the next one is the PVC volume snapshot namespace transfer. Mustafa said he wanted to just do the design in 1.23, so I changed the status to design for this quarter, because there are still some issues; there are some comments on the KEP that should be resolved.
A: And the next one, CSI volume health, additional metrics. Yeah, so I updated the KEP. I'm not sure if we can directly go to beta, because the metrics actually have some... there are now kubelet metrics APIs that were not there before, so we added those. Not sure if we can actually go directly to beta. Michelle, if you can take a look, then we can decide whether, you know, we should still stay in alpha or go directly to beta for that. But the KEP is ready for review. Sounds good, thank you.
A: Okay, thanks. The next one is CSI volume health, programmatic response. So last time we said we want to split this from the previous item so that we can make progress in parallel. Nick actually said his co-worker has written down some details of how they are handling this case, how they do the reaction. I think he's going to clean that up a little bit and then add it to the Google doc, so then we can go from there.
I: Then, yes, I think you know what's going on here: we're working on releasing the alpha builds of these things. We're currently blocked on a large number of infra people being out on vacation this week, but eventually people will come back from vacation and we'll get the infrastructure changes merged, the release tools stuff Patrick is helping with, and then those releases will go out. Regarding 1.23, the hope is to go to beta. I was looking at the graduation criteria that we wrote down in the KEP.
A: Sounds good, thank you. And the next one is COSI. Do we have...? So, we're trying to do our alpha. I missed last week's meeting. I've been following the thread: we're trying to get Tim to review the updated KEP, and it's been silence from him so far. So I don't know if he's too busy or whether he has taken a look at it, but no news on the KEP review.
A: Yeah, we have just not seen any response from him.
J: Yeah, I can check in with him. Okay.
K: Yeah, so I put the issue number and the KEP PR in the comments, so I guess they just need to be put in the actual cells. Oh okay, so let me see; I don't have access. But yeah, so far the initial KEP is out and got a lot of great feedback from Patrick and Jan. Jan pointed out a bunch of good issues to think about, mainly around subpath and SELinux handling and fsGroup. So yeah.
K: Yeah, overall we are kind of thinking about how to structure the API, so that's still in the initial discussion phase at this point.
A: Thank you. And the next one is CSI migration: officially deprecating the cloud provider plugins.
A: And the next one is CSI migration core. Still Jiawei?
L: Yeah, there are, I think, two issues remaining. I had some PRs opened yesterday. I'll keep looking for anything related to CSI migration core and then make sure it is resolved in this release.
A: Okay, thank you. And the next one is CSI migration vSphere. So yeah, 2.3 just released; I think the rest of them are still the same. The raw block support, we're planning to have that released end of year; NFS, the same thing. And for this issue, I think we're trying to get this PR merged to address the CRD issue.
A: Do we have Andy? Michelle, maybe you know the status?
A: Okay, thank you. And the next one is the GCE PD driver. Is there an update from Matt, or maybe Saad?
L: That's the plan, yeah. I think Windows support is already in place; I think Matt said it needs some more testing, and otherwise it's all good.
A: All right, thank you. And then the last CSI migration item is for CephFS and Ceph RBD. Is there any update on this one?
A: Thank you. So the next one is controlling volume mode conversion between source and target PVC. So yeah, Raunak is out on vacation, but when he comes back he will continue to work on the design.
A: Okay, I don't know, but then, do you need another issue too? Oh, is that not part of SIG Storage? That's part of... or something, yeah.
A: Okay, next one. So now we have a few items that are co-owned with other SIGs. This one we co-own with SIG Auth: user ID ownership in ConfigMaps and Secrets. So the design: Jiawei or Hemant, is there any update on this one?
L: Yeah, I had an offline discussion with ambershark. I think we had some ideas; we just need to turn the idea into a KEP, which I think will happen in this release.
A: Thank you. Next, the non-graceful node shutdown. So I updated the KEP. Now we have a narrower scope, just to address the real node shutdown case first, but the node partition case can be built on top of it. So I updated that and updated the test plan, and I'm actually going to try to do alpha. So let's see if we can make some progress on this one.
A: The next one is enabling user namespaces in kubelet, so your IDs get shifted. Hemant is reviewing this one. This is going beta, is that right, Hemant?
B: Yeah, I haven't had a chance to look at it since the last update, though. Okay.
M: Yeah, this should still be on track for 1.23. The API is approved, so it's just getting the implementation reviewed.
A: Okay, thank you. And the next one, volume expansion for StatefulSets. So yeah, I just updated the KEP, so it's ready for review again. Hemant, if you get a chance, can you take a look?
A: Okay, next one, the ExecutionHook ContainerNotifier. So we finally got some reviews from SIG Node; they actually added a lot of comments. We also got some new comments from an API reviewer, Clayton, so Shanti and I are addressing those. We actually need to discuss it, but we have already addressed some of those comments.
A: Okay, so I will cross this out later. And the next one is the volume capacity priority. Are you doing a beta on this one, Michelle?
H: I didn't have a chance to check on that, but I'll follow up. Okay.
A: All right, okay. Okay, thank you, yeah. So this one should be... this actually does not exist in 1.22, right? So this...
A: Okay, so for this one we'll leave that blank. All right, so that's all we have here. If you have any other items you want to track, please add them to this spreadsheet, and just remember that your deadlines are approaching already, okay? So the next item here: we have one from Hemant. Do you want to talk about this?
B: It's not a design issue, but we have been noticing a lot of drivers... the drivers are, like, I think Azure and then many other drivers, and I don't want to name names, but they are basically breaking the CSI spec and why it exists, and talking directly to the Kubernetes API. They require credentials and setup, like service account tokens and whatnot, to talk to Kubernetes, and the deployment as a result has become kind of complicated and moves away from the original...
B: And not just for secrets, but for different reasons. For example, the EBS driver recently had a change where it wants to talk to the API server to get...
A: The API server to get... yeah. So it's a little different. I can tell you how we are using this: in the CSI driver calls, you know, the RPCs, inside those we are not talking to Kubernetes at all. That is another component. There is other information that we need from the Kubernetes cluster, but it's not inside those API functions, I mean inside those...
A: I definitely understand that. It's actually better not to depend on that at all, but it's just not possible; there are a lot of things. I think with this, probably going back to some...
B: ...discussion. Let me finish what I was trying to say: why a driver talks to the Kubernetes API server depends on what each driver is trying to do. Sometimes it's secrets. Sometimes it's trying to get the cluster and the nodes on which the driver is running, so the driver can figure out the topology in which it exists. Sometimes it wants to post meta information about the pod or PVC back to its own storage APIs.
B: No? Is this better? Yeah.
B: All right. So the reason a driver talks directly to the Kubernetes API differs; it varies from case to case, each use case. I was just wondering, and I was talking to Jan and others, and we were wondering whether we have failed, in that sense, in the CSI design, that it should be CO-agnostic, whereas...
A: So there was one time we actually had an issue; I think it had to do with volume expansion. I think we were actually getting a recommendation from the Kubernetes side that we should just add an admission controller on our own side to prevent that from happening. I think you are actually aware of that issue, so I just think it's kind of a... yeah.
A: It's definitely the desire that the driver does not keep any state, but then that means Kubernetes needs to keep more state of some sort. So of course there are many, many reasons, but we were actually asked to do that because, I guess...
A: I'm saying that the RPC, I mean the CSI function itself, does not require...
B: No, no, I'm sorry, but you're wrong. The driver itself needs to talk to the Kubernetes API because...
B: Like, for example, if we don't deploy the... the vSphere CSI driver has a dependency on the external CCM, the external cloud controller manager, because it expects the node objects to have spec.providerID set. If it is not set, the CSI driver cannot function. And the point is that it's getting the list of nodes and figuring out what the provider ID is on those nodes.
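For context on spec.providerID: it is an opaque, provider-defined string on the Node object, conventionally of the shape <provider>://<path>/<instance-id>. A hypothetical parser sketch in Go; the format is not standardized, so real drivers handle it per-provider:

```go
package main

import (
	"fmt"
	"strings"
)

// parseProviderID splits a Node's spec.providerID of the common
// "<provider>://<zone-or-path>/<instance-id>" shape into a provider
// name and an instance identifier. This is a heuristic sketch only:
// the field's contents are defined by each cloud provider.
func parseProviderID(providerID string) (provider, instanceID string, err error) {
	parts := strings.SplitN(providerID, "://", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unrecognized providerID %q", providerID)
	}
	provider = parts[0]
	// By convention, the instance ID is the last path segment.
	segs := strings.Split(parts[1], "/")
	instanceID = segs[len(segs)-1]
	if instanceID == "" {
		return "", "", fmt.Errorf("empty instance id in %q", providerID)
	}
	return provider, instanceID, nil
}

func main() {
	p, id, _ := parseProviderID("aws:///us-west-2a/i-0123456789abcdef0")
	fmt.Println(p, id) // prints: aws i-0123456789abcdef0
}
```

A driver that cannot reach a metadata server would read this field from the Node object via the Kubernetes API, which is exactly the dependency being debated here.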
J: Taking a step back, I think the fundamental problem Hemant is raising is legitimate, right? Ideally we want CSI drivers not to have to go around CSI's back and talk directly to Kubernetes. It sounds like there are a number of cases across different drivers, different reasons for them to reach around and talk to Kubernetes directly.
J: I think the best thing we could do is start to itemize those items and figure out what the reasons are that these drivers need to go behind CSI to talk to Kubernetes directly, and see if we can start promoting some of those use cases into the spec, into the Kubernetes CSI interface, so that we can simplify the drivers again. The goal, ideally, is that the drivers only go through CSI and don't have to go around it.

I mean, I don't know if that is the goal; I think the goal would be to make it easier to not need Kubernetes for certain use cases. But one of the common use cases I think someone mentioned is that some drivers just want to use CRDs to persist their own data, and we're not going to put data persistence into the spec. That's fair, yeah.
J: Right, I guess the question is: is there a common set of use cases that are low-hanging fruit, that we should just put into the spec so that we can make the life of these drivers easier? You're right, there will always be one-off drivers doing something odd that need to talk to Kubernetes, and that's fine, but if there is something we can do to reduce these common cases, it'd be better.
H: I think the main one that I know of is getting the provider ID from the node object. I think this is for drivers that don't have access to a metadata server that can give this information, so I think they depend on Kubernetes to do that. I think AWS is maybe one example of this, but I believe they have, like, two modes, like either...
B: Yeah, and recently I think the Azure driver also started talking to the API server, the Azure disk driver. That was for secrets, if I'm not wrong, yeah.
J: I see, interesting.
B: Yeah, and the drivers need it for figuring out where the attach operation should go, and things like that, and to build a node manager, kind of like an inventory of nodes.
J: Normally, what the CO does is go and grab, or ask the CSI driver, what the ID is, and do the mapping internally. So I guess in this case the CSI driver doesn't know what the node's ID is, so it goes back to Kubernetes.
J: Yeah, I think we'll have to look at it on a case-by-case basis and make sure we're not, you know, leaking too many abstraction details, and kind of do this in the same way. The provider ID one is an interesting one: CSI has the NodeGetInfo call, and it has nothing on the request today, so we could potentially put in an optional field.
J: ...that says, you know, this is what the CO thinks the ID is, if you need it. But I would say we should hide that behind a capability or something and discourage people from using it, because you don't want folks to get dependent on the CO for the identity if they don't need it. But yeah, to Ben's point, we should definitely look at it on a case-by-case basis.
J: But I think the high-level point here is: let's see if we can find common areas where these drivers are doing things and see if we can make their life easier.
A: Sounds good, yeah. I think it'd be good if we can actually make something in the CSI spec to make it easier.
A: I mean, in the long run we might end up in a situation where we want to define Kubernetes-specific CSI extensions and just admit that the vast majority of people who are implementing CSI only care about Kubernetes, and so we'll just define some Kubernetes extensions then and say: look, you don't get these if you're not doing Kubernetes, but we need them in Kubernetes, so we're doing it.
B: Interesting. But Nomad... like, there was an EBS driver issue: actually, the first PR that went into EBS talked directly to the API server and it was not possible to opt out. So basically the EBS driver broke in a non-Kubernetes environment and then we had to fix it. So the point is: yes, people are using CSI drivers, maybe not all, but at least some popular ones, in some COs that are not Kubernetes.
I: Yeah, but that's a compatibility call that the driver author has to make, right? Like, if they want to implement a fallback so it can work in the absence of it, then they can do that. But if there's no way to do a fallback, they just have to say: look, this is our compatibility; you have to have Kubernetes version X or higher, because that's the one where we implemented this new feature, and if you don't have that, it just doesn't work.
A: Okay, anything else on this? We can... yeah, thanks. We're going to have follow-up discussions, some meetings on that. And then next: do we have Joe here? Yeah? Yeah, you want to talk about this? Okay.
C: So basically, the question is this: this is the NVMe-oF driver, and...
A: So, whether we want to, you know, maintain such a driver in kubernetes-csi, like, what is the criteria for accepting a driver there, right? So...
C: NVMe-oF was created to enable NVMe commands to transfer data between a host and an SSD or storage system over networked fabrics, just like iSCSI, but NVMe-oF delivers higher IOPS and lower latencies. NVMe-oF already supports many transports beyond PCIe, such as Ethernet, Fibre Channel, TCP and RDMA, and my CSI NVMe-oF driver mainly supports RDMA and TCP for SDS. In this driver I have already implemented the basic interface.
J: So that is a good question. I'm not sure we have officially defined criteria. I think generally, you know, there are existing projects or teams behind them. So if you look at the other CSI drivers, they'll be like Azure, AWS, kind of the big cloud providers.
A: Yeah. Okay, yeah.
H: I think it's mostly just, like, for the generic stuff, like iSCSI and NFS, we do have drivers and libraries there under kubernetes-csi. We did have one for Fibre Channel, but no one wanted to maintain it, so we closed it.
H: So I think, you know, for this case, since NVMe over fabrics is sort of a generic thing, in line with iSCSI and Fibre Channel, we can definitely consider it, but we need to make sure that we're going to have people willing to maintain this driver and, like, help set up CI and stuff like that as needed.
A: Yeah, so Joe, basically, you know, we need to see if there are people, contributors, who are actually willing to help maintain this driver.
A: Because otherwise, like Michelle said, there's the Fibre Channel driver: I know it's there, but nobody is maintaining it, so we actually had to deprecate it. So is it still there, Michelle? Is it removed or archived? What's the status of it?
A: Yeah, so I think that's probably something to figure out first: just see if there are people who are willing to help maintain the driver; otherwise it will probably end up the same as the Fibre Channel driver.
A: Oh, I think we are already two minutes past the top of the hour. So hey, Joe, why don't you think about it and see if you or your teammates or anyone else are interested in maintaining this, and then we can, you know, talk about it again.
A: Thank you. I think we're running out of time. I think this one is the same: if you are attending KubeCon in person, please put down your name there. So I will put down my name here. If, you know, COVID does not shut down the conference, I think I will be giving that. So.