From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 20210729
Description
Kubernetes Storage Special-Interest-Group (SIG) - Bi-Weekly Meeting 29th July 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A
We have a couple of deadlines that just passed. This Tuesday, July 27th, was the deadline to get your docs PR reviewed and merged, and also to have the feature blog ready for review. So I hope everyone didn't miss this deadline. The next deadline is Wednesday, August 4th, which is next week — that is the 1.22 GA date. And then we have a few things on the agenda after we go over the planning spreadsheet.
B
Yeah, sorry — can you hear me? Yes? Yeah, so this was done and implemented. Okay, great.
A
Okay, so this one went alpha, and then next release we'll try to move it to beta. So this is still just alpha, right? Okay. Yep, the next one is the CSI online/offline resizing workflow extension update. Hemant?
B
Yeah, so we had a design, and then the next item — basically, we are blocked on the recovery-from-resize-failure. We have a design that we worked on and tried to push forward, and I implemented the feature, but we found some corner cases — race conditions — that still require some changes. So we have to kind of modify the design, and we'll try this in the next quarter.
A
Okay, so you also talked about this next one, so this one is moved to 1.23. Any changes?
D
Yeah, I think it was just bug fixing for these next two features. I haven't seen any new bug fixes recently, but yeah, we'll have to sync up with Patrick and see.
A
So
cover
the
next
one
as
well:
this
pvc
inline
f4
rolling
and
spreading
affiliate
domain;
okay,
so
yeah
this
is
mine,
but
this
is
blocked
by
the
the
following
one
wooden
group.
I
think
I
still
need
to
usually
update
the
cap.
A
And next, CSI on Windows.
E
So we've already switched to CSI Proxy, so this makes the Windows path also GA. So now it's like official: v1 is out for both Windows and Linux.
A
Oh, the next one is the sent-out deprecation notice for FlexVolume. Michelle, have you got a chance to review the write-up for the deprecation notice? Oh yeah, I'm taking it.
A
Maybe not — so we had a design meeting on Tuesday. Yeah, so I have sent out the meeting minutes. I think Mustafa is going to look at that and maybe needs to write up the KEP based on the discussions. I think the suggestion in that meeting is to look at volume snapshot namespace transfer first, because for PVCs there are a lot of race conditions we need to consider. And I think we also discussed how to handle secrets, but that still needs more thought. If there are no secrets it will be easy to handle; with secrets we need to look more into it. I think that's, yeah, that's it.
A
Update on volume health. Yeah, so we also had a meeting on volume health next steps. We discussed whether we can have our external health monitor controller also handle the reaction if something goes wrong with the volume. So I think we need to write up some details — I have pinged Nick to write up some details on how to do that, because he actually has an intention for that. And another thing we also talked about: we want to add volume health into metrics, starting from the node side. So we'll need to have some follow-up discussions on that.
F
Yeah, the docs merged; the code, of course, already merged. The blog draft is up and ready for review. I'm working on changes to the out-of-tree projects to support the new alpha API. I have two very small PRs that need to be reviewed. I don't know — Xing, would you be interested in being my reviewer for those? Yeah.
A
Okay, so we have until August 30th to review it? Okay, we'll just put it like that: review it by then.
F
Yeah, so those are the main things for completing the alpha, and then we have to turn around and try to move it to beta in 1.23. But that should be easier now that we've dealt with all the hard technical questions, and it's just a matter of, you know, getting people to bang on it a little bit.
F
I can't think of any specific updates. I know that we've decreased from two weekly meetings to one weekly meeting, and Sid has done a ton of work on rewriting the KEP, but I don't know where that review stands. I know everything's in much better shape and we're not arguing about big complicated things anymore, which is why we decreased the meeting frequency, but we still need to get through the KEP review and actually get it to the implementable state.
A
Yeah, I think we passed on 1.22 — it's just too late. So yeah, we can get another update from Sid right after this meeting.
A
I don't know if he got time to start rewriting it — I'm not sure, so yeah, check with him. Thank you. Next one is changed block tracking. Is anyone on the call for this, by any chance?
A
So we actually reviewed this in yesterday's data protection group meeting, so yeah — [they are] going to take that feedback and update the doc. Right now it's a Google doc; it's not a KEP yet.
A
Okay, so next one is the new ReadWriteOncePod access mode. Chris?
G
Yeah, the alpha docs are merged, and I'm currently addressing feedback on the feature blog.
H
That one is all done — the docs have been merged, and I think the blog is being reviewed right now. There were some comments, and I think Mauricio is addressing them.
J
Yeah, sorry — so for the core issues and bugs, I think we still have one or two bugs. I need to get them fixed by the end of next release, and I will certainly do that, yeah.
A
Yeah, so this is because the beta version of CRD is no longer supported in 1.22, and our CSI driver has not been updated to the v1 version of CRD yet, so it will not work in 1.22. I see there's a PR on that? Yeah, I'm talking to the team, so right now, so far, what I got is...
A
Okay, yeah — at least to warn people not to upgrade, to wait for the fix or something. Okay, yeah. The thing is, sometimes people don't really look at the documentation, but in the document we actually have a support matrix. So we have to have, like, a maximum supported version — only if it's tested should we list it there. This one is getting a little tricky; we kind of found out about this one kind of late.
J
I don't think so. I think for Azure File it's blocked by the fsGroup work, so we will have to wait for the fsGroup. Is that the conclusion?
B
Yeah, so it's still alpha. In the next release we want to enable Azure File migration by default, so in the next release we should be able to migrate.
A
Okay, thank you. The next one is GCE PD [migration] by default — is this one done, or no update?
J
So previously we were trying to turn this on by default in this release, and we had a PR to install the PD CSI driver in the kube-up script. But we got some objections from SIG Architecture and some other SIG leads, and they feel this should not be included in kube-up. They mentioned that installing the PD CSI driver, and tests including PD, should be part of the SIG Cloud Provider stuff.
J
So they encouraged us to move all the CSI migration and PD-related stuff to the GCP cloud provider repo, and I think Matt has a very detailed plan on how we're going to do that in the 1.23 release.
A
Okay, thank you. And the next one is AWS by default.
D
They're still working on Windows support, so I don't think they were able to turn it on by default. All right.
A
Thank you. Next one is CSI migration for CephFS and RBD — still waiting on this. Has anyone heard anything about this one?
A
And then the next one: preventing volume mode conversion between source and target PVC. Is Ronak on the call by any chance? Yeah — so we actually had a meeting to review this, I think two weeks back, in the data protection group meeting. Ronak is updating the document, and we also got some feedback from Shan Chen, so I think he's trying to write more details in that doc before we send an email to the security group to get their feedback on our proposal.
A
Next one is user ID ownership in ConfigMaps and Secrets. This is a project with SIG Auth. Hemant, is there any update, or is it the same — no update?
J
I need to ping ambershark — I haven't got any updates from him. Okay.
A
Next one is the node graceful shutdown, with SIG Node. So yeah, I think this one is still trying to figure out how to use the information provided by the graceful node shutdown. I did some tests but didn't get the information that I'm looking for, so I'm still trying to figure out how we can use that information to know that it's really a shutdown. So we need more time to figure it out.
A
Next one is enable user namespaces in kubelet — UIDs get shifted. This is a project with SIG Node, and I don't know who's working on this. Hemant, do you know anything about this one?
B
Is this the thing Ben added, for the user IDs getting shifted?
B
I'm saying that, like, most of the CSI drivers don't work, and local volumes require hacks. So if people run with this, storage is mostly broken. And that's something that — that's what...
A
Next one is: PVCs created by a StatefulSet will not be auto-removed. Is Matt here?
A
Thank you. Next one is volume expansion for StatefulSet — is Shalini here by any chance? So I synced up with her. She has actually submitted a KEP, so I think we need to get someone to review it. I have taken a look and added some comments. Do we have other people signed up to review this? Is that a comment, Matt? Or — this is the volume expansion for StatefulSet, so Hemant, we probably should take a look.
A
Thank you. Next one is the ContainerNotifier for application snapshots. The status is the same — I'll just copy this one: still waiting for a SIG to review and approve.
A
Next one is the [volume populator] moving to a new repo. Srini — is Srini here?
A
Thanks. The next one is the one that we worked on with scheduling — prioritization on volume capacity.
D
Okay, so we should be good here. Okay, I forget if...
A
Oh, actually, I should probably double-check. I think, because this is GA, we actually asked — yeah, I think you probably pinged whoever is working on this, but I don't know the status after that. I believe we actually added this one. Okay, so yes.
A
Thank you. Okay, so that's all we have in this planning spreadsheet. Now let's look at the other items we have here. The first one was added by Balu. Balu, are you there?
A
Okay, do you want to talk about this issue?
N
Yeah, sure. So what I want to bring up here is basically to discuss supporting the LIST_VOLUMES_PUBLISHED_NODES capability only for block and not for file. Currently we have a vSphere CSI driver which is used for both block and file, and then we see that we are creating volume attachments for both block and file.
N
Since we have a single CSI driver, whenever we implement ListVolumes we see that the reconciliation happens for both block and file, which we don't want. We want it only for block volumes and not for file, because for file we see that, if there's any sort of unwanted [unmount] or stuff, the kubelet will take care of the reconciliation.
N
So what I'm looking for, basically, is: is there a way that we can support these published nodes for block and not for file? Right now I don't see one. I have seen that Google or Azure have individual CSI drivers, separate for block and file, and for some other drivers I see there's a single driver for both block and file — and for those drivers I don't see that they have implemented published nodes.
N
So now that we have started implementing ListVolumes for the vSphere CSI driver, we are looking into a way: how do we make sure that we get the actual state for file volumes from the backend? We don't have any way of getting that actual information, but we want a way we can skip publishing nodes for file volumes and do it only for block volumes.
F
Yeah, we don't support published nodes at all, and when we do, we will support it for both. I think that's your only option: do neither, or both. Yeah.
F
I did sort of throw out a suggestion: because of the way that the sidecar consumes it, you can get away with faking it to a small extent — like remembering the list of published nodes in memory only and then letting it reconcile every time the pod restarts. It's kind of gross, but...
F
You know, if you just had an in-memory data structure remembering the list of published nodes for each volume, that got reset every time the driver restarted — you'd get a bunch of ControllerPublish calls every time the driver restarted, and then you'd have to remember them, but it wouldn't hurt anything. So it's not the end of the world.
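A minimal sketch of the in-memory approach described above, in Python rather than a real CSI driver (the class and method names here are illustrative, not part of any CSI library): the tracker forgets everything on restart, and the repeated ControllerPublish calls the sidecar issues afterwards are idempotent, so they simply repopulate the map.

```python
class FakePublishTracker:
    """Remembers which nodes each volume is published to, in memory only.

    The state is lost whenever the controller plugin restarts; the
    external-attacher's extra ControllerPublish calls after a restart
    are harmless and rebuild the map.
    """

    def __init__(self):
        self._published = {}  # volume_id -> set of node_ids

    def controller_publish(self, volume_id, node_id):
        self._published.setdefault(volume_id, set()).add(node_id)

    def controller_unpublish(self, volume_id, node_id):
        self._published.get(volume_id, set()).discard(node_id)

    def published_node_ids(self, volume_id):
        # What ListVolumes would report for this volume.
        return sorted(self._published.get(volume_id, set()))


tracker = FakePublishTracker()
tracker.controller_publish("vol-1", "node-a")
tracker.controller_publish("vol-1", "node-b")
print(tracker.published_node_ids("vol-1"))  # ['node-a', 'node-b']

# Simulate a driver restart: all state is gone...
tracker = FakePublishTracker()
print(tracker.published_node_ids("vol-1"))  # []
# ...until the attacher re-issues ControllerPublish for known attachments.
tracker.controller_publish("vol-1", "node-a")
print(tracker.published_node_ids("vol-1"))  # ['node-a']
```

The "gross" part is visible in the middle: after a restart the reported list is empty until reconciliation catches up.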
N
We agree on that. But then, we know that one option is to fake it, which is okay for us. But if there is a way — like, we know that we don't have to do it for file — why not just export the capability, something like LIST_VOLUMES_PUBLISHED_NODES, only for block, and then when we use it, only do it for block volumes and not for file volumes?
K
Can you jog my memory and remind me what this capability is for — the list volumes published nodes?
F
Yeah, the idea behind this is that the external attacher wants to be able to repair damaged attachments, and so it needs a way to ask the SP: is this volume published right now? The way it does it is by listing the volume and getting its list of attachments, and if anything's missing, it will assume that there's been a damaged attachment and it'll attempt to repair it, which is nice behavior.
F
I love it, but it depends on the driver knowing which volumes are accessible on which nodes, and if your driver doesn't do that — and it doesn't have to — then you can't implement the capability. So what we're going to do is actually improve the driver so that we know, and then we can properly implement this, because it's worth it.
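The repair loop described above can be sketched roughly like this (hypothetical names; the real external-attacher is a Go controller and considerably more involved): compare the attachments Kubernetes believes exist against the published-node lists the driver reports via ListVolumes, and anything missing is a candidate for a fresh ControllerPublish.

```python
def find_damaged_attachments(desired, reported):
    """desired:  {volume_id: set of node_ids Kubernetes thinks are attached}
    reported: {volume_id: list of node_ids from ListVolumes published_node_ids}

    Returns (volume_id, node_id) pairs the attacher should repair by
    re-issuing ControllerPublish.
    """
    damaged = []
    for volume_id, nodes in desired.items():
        actual = set(reported.get(volume_id, []))
        for node_id in sorted(nodes - actual):
            damaged.append((volume_id, node_id))
    return damaged


desired = {"vol-1": {"node-a", "node-b"}, "vol-2": {"node-c"}}
reported = {"vol-1": ["node-a"], "vol-2": ["node-c"]}
print(find_damaged_attachments(desired, reported))  # [('vol-1', 'node-b')]
```

This is exactly why the reported list must reflect real backend state: if the driver cannot answer "which nodes is this volume published to?", the comparison is meaningless.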
A
Ben, so your plan is to save that information in memory? Is that your plan, or — no?
N
So, at least from the vSphere side, we don't see a way of getting this for file volumes — it's all IP addresses, right? It doesn't deal with nodes as such, so we're finding it difficult to get the actual information specifically for the nodes. Yeah.
F
It forces the controller plug-in to be stateful, but there are so many things in CSI that force the controller plug-in to be stateful that we just gave up on trying to make it stateless and said: okay, we're going to have a bunch of state that we carry around so that we can correctly implement the spec.
L
You shouldn't have separate drivers, because it's not about block volumes and file volumes — it's about the storage backend behind them. I can have a file volume backed by a block device, and then it makes perfect sense to have the publish nodes there, and it is useful. But for NFS, or whatever you use for the file system volumes, it doesn't make any sense.
F
Yeah — the point of a meta driver that has multiple implementations behind it is that it allows you to set up your storage class once and then, over time, add more capacity to it behind the scenes, without having to create more storage classes and add new provisioners. You can just gracefully expand your pool of storage as much as you want, and it all looks like one CSI driver to Kubernetes.
B
There's also another problem you're going to run into — and I talked with Michelle about this — which is the fsGroup support. It's orthogonal, but these drivers — like, for block volumes we'll apply the fsGroup recursively, but for NFS we don't want to do that. And we are trying to move away from this guesswork we are doing in Kubernetes and move it to the CSIDriver object, and we can't do that if there's only one CSIDriver object for one driver.
F
We already have that problem; I don't think this gets any worse. And the one you mentioned is a definite problem, because yeah — if we mix iSCSI and NFS, Kubernetes doesn't know which any particular volume is, and so it makes it hard for Kubernetes to do the right thing. But the CSI driver can do the right thing, so the more we can push that kind of stuff down into CSI, the more it lets people just solve it themselves.
F
There's a long list of these kinds of funny corner cases where you basically need state in the controller plug-in in order to do the right thing. The one I ran into not too long ago was the file system/block duality problem, where you can have a volume that is a file system, and you can take a snapshot of it, and then you can create a volume from the snapshot...
F
...that's actually a block type, and then you can take a snapshot of that and turn it back into a file system volume. But you can change the file system from, like, ext4 to xfs along that chain, and nobody stops you. So unless the CSI driver remembers that it used to be ext4, you have no way to prevent the bad thing from happening when they try to thaw it out as an xfs volume or something — you have to basically store that state somewhere.
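A toy illustration of the "remember it used to be ext4" point (purely hypothetical names, not vSphere or any real driver): the controller records the filesystem type at snapshot time, and when a volume is later created from that snapshot it can reject an incompatible filesystem request that Kubernetes itself would never catch.

```python
class SnapshotStore:
    """Tracks the filesystem type a volume had when it was snapshotted."""

    def __init__(self):
        # snapshot_id -> fstype ("ext4", "xfs", ...) or None for raw block
        self._fstype = {}

    def take_snapshot(self, snapshot_id, fstype):
        self._fstype[snapshot_id] = fstype

    def create_volume_from_snapshot(self, snapshot_id, requested_fstype):
        original = self._fstype[snapshot_id]
        # After a filesystem -> block -> filesystem round trip, only the
        # driver's own record can catch a mismatched fstype request.
        if original is not None and requested_fstype is not None \
                and requested_fstype != original:
            raise ValueError(
                f"snapshot {snapshot_id} contains {original}, "
                f"cannot thaw it as {requested_fstype}")
        return {"source": snapshot_id, "fstype": requested_fstype or original}


store = SnapshotStore()
store.take_snapshot("snap-1", "ext4")
vol = store.create_volume_from_snapshot("snap-1", None)
print(vol["fstype"])  # ext4
# Asking for xfs raises, because the driver remembers it used to be ext4.
```

The dictionary here stands in for whatever durable state a real controller plug-in would keep.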
A
I'm talking about publishing nodes — yeah, this one, the published nodes.
A
Attached, yeah. I was just wondering if there was any real vendor who has actually done this — that would be good, yeah.
L
Sorry, yes — sorry. For file system volumes like Samba or NFS it doesn't make sense, because they don't have a third-party attachment.
F
Right, yeah — but they can have an ACL that says, you know, nodes one, three and five are allowed to mount it, and nodes two, four and six are not, right? And then the idea is ControllerPublish changes that access list, ControllerUnpublish changes that access list, and ListVolumes with published node IDs allows the external attacher to find out if the ACLs are correct or not.
F
You know, if every volume is accessible to every node, then when someone takes over a node, all the data is gone, and that's something we would like to prevent. Fortunately, ControllerPublish gives you all the knobs you need to do it — it's just that you have to actually implement it. And if you do implement it, then things like implementing published node IDs are no big deal, because you know where it's published.
N
So what you guys are suggesting is to either fake it in memory, or get the information from the storage backend.
F
Yeah — but my recommendation is: do it correctly. Have the real information and feed it back into Kubernetes, with the understanding that that's not easy to do, but it's worth doing. If you don't want to do the hard work, you can probably get away with faking it a little bit, yeah.
N
True, but I'm just talking about the file volumes. Let's say, for some reason, something happened and the volume got unmounted — wouldn't the kubelet automatically take care of reconciliation of those volumes?
B
Okay, that also begs the question, though: should we fix that fsGroupChangePolicy in the CSIDriver [object] to support these meta drivers before that thing goes GA?
B
Yeah, it doesn't interact well with the assumptions that we have on the kubelet side in particular.
K
Thanks for raising this. And Xing, can you put down a conclusion here, just so that if someone's following the notes...
C
That's tough — but we have a project to unify the sidecars, right? But you'll still have two of the unified sidecars in that world.
A
Do you have other questions? Are you all set with this?
N
Yeah, I think I'm good — thanks for the discussion, guys. Thank you.
A
All right, thank you. Okay, so for the next one — do you want to talk about this next issue?
O
Yeah, so I raised this discussion in the Slack thread before, but basically: would the community be interested in, like, a generic CSI node-level plug-in for local volumes? Basically, it should match all the features of the current in-tree local volume plug-in in Kubernetes.
O
Yeah, because I guess we're trying to decouple the provisioning of the local disks themselves from the actual mount and unmount. So essentially, the current in-tree plug-in has an option to format a block volume, but mostly, for a file system volume, it just does a mount/unmount or, like, a bind mount.
O
So the idea is this could be generalized to a CSI node-level plug-in. The use case for us is that we're forced to carry this downstream patch to do a migration for existing local volumes that were provisioned by the local static provisioner.
O
But it would be nice to not have to carry this internal patch, and to be able to have a generic way of doing this migration.
D
I guess the question is: why do you need to migrate?
O
We have customers that are using — like, we're going through this transition where, for existing statically provisioned local volumes, we don't want to have to ask all the customers to switch over their storage class. So we're doing this migration using the CSI migration plugin, but instead of having to carry the internal patch, it would be good to make it somewhat generic.
O
With [node-]level features, you could kind of fork the upstream in-tree plug-in and add your own internal features to it without needing to actually touch the Kubernetes code.
D
I think we've kind of done something similar with the iSCSI driver — I believe the current iSCSI driver only supports publish. But I'm actually interested in adding more than what in-tree can do. With the volume capacity tracking, I think we can actually add dynamic provisioning support to local volumes now, and so I would actually be interested in a CSI driver that does more than just the node side, and can also do the provisioning side.
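A rough sketch of what capacity-aware dynamic provisioning of local volumes could mean (a hypothetical data model, not the actual CSIStorageCapacity API): given per-node free-capacity reports, the provisioner picks a node whose remaining local capacity fits the request.

```python
def pick_node(capacities, requested_bytes):
    """capacities: {node_name: free local-disk bytes reported for that node}.

    Returns the node with the least free space that still fits the request
    (best fit, to reduce fragmentation), or None if no node can satisfy it.
    """
    fitting = [(free, node) for node, free in capacities.items()
               if free >= requested_bytes]
    if not fitting:
        return None
    return min(fitting)[1]


capacities = {"node-a": 50 << 30, "node-b": 200 << 30, "node-c": 10 << 30}
print(pick_node(capacities, 40 << 30))   # node-a (smallest node that fits)
print(pick_node(capacities, 500 << 30))  # None — no node has enough space
```

In a real driver this decision would also have to coexist with statically provisioned volumes on the same disks, which is exactly the coexistence question raised next.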
B
Yeah, we have some requests for it in OpenShift, and there's some use case for dynamic provisioning of local volumes. We have been looking at using something similar — nothing certain yet — and we were wondering how it would live side by side with other statically provisioned volumes.
D
Yeah, so I think that's the main challenge. We need to think about how they can coexist, or the transition — I think that is the more tricky part.
A
I'm sorry, I think we are at the top of the hour, so yeah — maybe people can think about it, you know, whether we can add more functionality to this plugin, and we can continue. You can come back to talk about this in the next meeting, or also continue on the Slack channel. Okay.
D
In email too. I think definitely, you know, this is something we can pursue, but I would like it to actually support the new capacity features that we're currently adding.
A
Okay, so I think we are out of time. Deep, can we talk about this on Slack, or — yeah?
H
On sig-storage Slack.