From YouTube: Kubernetes SIG Storage - bi-weekly meeting 20210128
Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) - 28 January 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A
Hello, everyone. Today is January 28th, 2021. This is the Kubernetes Storage SIG meeting. Today we will go through our 1.21 planning. The next deadline is the KEP freeze, which is February 9th, so that is coming up.

So if you have a feature that you want to get into this release, make sure that you ping the leads to get the KEPs tracked in this spreadsheet. Then you must make sure that your KEPs are in the implementable state, and you need to do the production readiness review. There are a few requirements for that for the KEP. Okay, so let's go to the spreadsheet.
B
Yeah, sorry, I was trying to find it. Okay, so we have a KEP. I opened a KEP and filed the enhancement issue; it's an alpha feature and it's being reviewed. David Eads did the PR review, the production readiness review for it, and I am addressing his comments. Then there's a parallel CSI proposal to make the changes.
B
I think next week — yeah, next week, Wednesday, February 3rd — to get the CSI proposal merged, or at least decide its status. We are basically good to go, but we are blocked on the CSI proposal getting merged.
A
Thank you. You said you have an enhancement issue and a KEP, right? Can you talk...
B
So the current status is that Song, Abhishek, and Neha — folks from Microsoft — are helping us out. We are fixing some of the issues in the 1.21 time frame, but the big things that require a KEP, like the recovery from resize failure, I think we may not be able to get in in time for 1.21.
B
I just want to say thank you — I don't know if they're in the chat — to Song and the others for joining this effort, actually.
E
Yeah, so Ashley will be focusing on one specific CSI PV review only, handling that PR. You know, I made a mistake last time signing up for this one. Unfortunately, I won't be able to focus on the...
A
That's okay. So should we keep your name here, or should — I don't think I'll be able to.
C
Okay, I'm not sure. I think this one was mostly scoped to that one PR. Oh.
A
All right, thank you. Yeah — okay, so we still haven't heard... how about — yeah, okay, all right, out of office. Next one: issues related to assuming volumes are mount points.
F
Not just that one; there are some related PRs, but it's not completely fixed across the whole tree yet, so people keep sending small PRs related to this area. Hopefully I'll have time later to check again and see whether we need to fix it as a whole. So, yeah.
G
Sorry for that. So I've restarted the discussion around the KEPs — the PRR, the production readiness review, that was still missing — and I now have the attention of my reviewer, so we have made good progress on the storage capacity one, and the next one, generic ephemeral inline volumes, will come soon, technically.
G
So I'm waiting for that, and then, if that's agreeable, I'll start the implementation work. But the change itself is entirely in the external-provisioner, which means that we are a bit less constrained by the Kubernetes deadlines. We still want to have it ready, of course, but...
G
In terms of in-tree Kubernetes changes, I have a meeting scheduled — I will attend the next CSI community meeting, because I do have one open question around maximum volume size versus total available capacity.
G
It's currently ambiguous in the CSI spec, and I think we wanted to clarify whether CSI drivers can be made more precise by returning two values, or clarify in the spec what that value is that we return. Based on that, we'll probably update the in-tree representation of that information in the CSIStorageCapacity object. So that'll happen next week, I hope. Okay.
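For context on the ambiguity mentioned above, these are the two size-related fields a CSIStorageCapacity object can carry. A minimal sketch — all names and values are illustrative, not from the meeting:

```yaml
# Hypothetical CSIStorageCapacity object (v1beta1 as of Kubernetes 1.21).
# `capacity` is the total available capacity in a topology segment, while
# `maximumVolumeSize` (added later to resolve the ambiguity discussed above)
# is the largest single volume that could be provisioned there.
apiVersion: storage.k8s.io/v1beta1
kind: CSIStorageCapacity
metadata:
  name: example-capacity        # illustrative name
  namespace: default
storageClassName: example-sc    # illustrative storage class
nodeTopology:
  matchLabels:
    topology.example.com/zone: zone-1
capacity: 100Gi                 # total free space in this segment
maximumVolumeSize: 10Gi         # largest individual volume possible
```

The distinction matters because a backend can have lots of free space in aggregate while still being unable to satisfy one large volume.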
H
Yeah, there isn't any update since last time. This is still something we're looking at in the second half of the year.
J
This is a call-out: if anybody else wants to pick this up, you're welcome to pick it up; otherwise, it'll be delayed until the second half of the year.
H
Yeah, absolutely. If anyone is willing to pick it up, I definitely can help out with that. I just don't have the time to really push this through on my own.
A
Right — if anyone wants to work on this, please let us know; you can sync up with Matt.
A
Okay, next one: spreading over failure domains. This is me, so I don't have an update yet. This one depends on the next one, which is the volume group API. I have updated the KEP based on our meeting last time, so I will probably schedule another meeting to review the changes.
A
It looks like it's getting close — so, okay, we will get an update next time.
K
Yes, an issue has been filed and a KEP has been pushed which outlines the tentative design. It largely mirrors the PVC namespace transfer KEP, which has not yet been fully merged, and it relies on the same API. So the design is still being tentatively discussed and the KEP needs to be reviewed, but it's not imperative for this quarter. Basically, we have a design and we're iterating on it.
A
Okay, the next one is volume health. We're trying to bring this to beta. For this one we actually did get a production readiness review. There are some concerns from Michelle about the polling, whether that will affect scalability.
A
So I attended one of the SIG Node meetings and asked them for some input. They mentioned something called the node problem detector, which is actually a DaemonSet that does some monitoring on the node and then either exports that to Prometheus or to events, or, if it's a permanent error, actually puts that as a condition on the node. Well, we are actually sending those events to the pod.
A
So
I'm
not
sure
if
we
can
use
that,
but
I
think
I
need
to
take
a
closer
look
of
that
and
then
they
also
are
saying
that
we
should
see
if
we
can
like
limit
to
the
scope,
only
watch
the
parts
that
are
bound
to
your
node
and
also
the
scope
of
the
privileges.
A
So
those
are
things
I
still
need
to
look
into.
I
may
need
to
call
another
meeting
and
discuss
what
we
need
to
do
about
this.
I
think
the
controller
side
is
seems
to
be
straightforward.
Just
to
the
on
the
note
side,
we
need
to
figure
out
what
to
do.
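As a reference for the node-problem-detector behavior described above: temporary problems are reported as events, while permanent problems surface as conditions in the node's status. A minimal sketch — the condition values below are illustrative (KernelDeadlock is one of node-problem-detector's built-in condition types):

```yaml
# Illustrative fragment of a Node object's status as written by
# node-problem-detector for a permanent problem. Temporary problems
# would instead appear as Events (and can be exported to Prometheus).
status:
  conditions:
  - type: KernelDeadlock
    status: "True"
    reason: DockerHung
    message: "task docker:20744 blocked for more than 120 seconds"
    lastHeartbeatTime: "2021-01-28T10:00:00Z"
    lastTransitionTime: "2021-01-28T09:58:00Z"
```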
A
Okay — so I didn't attend the meeting this week. I think last time he was working on it; I think he made some code changes.
A
All right, thanks. The next one is COSI. I got an update from Srini; he said he can't attend today. He said there are many document PRs in progress for the other repos, and API and spec changes.
A
Updating
the
cap
and
ready
for
review
comments
are
addressed
for
the
smac
pr
things
that
for
the
review,
they
will
work
on
focus
on
the
demo
after
everything
is
merged.
A
Okay,
so
the
next
one
change
block
tracking
yeah.
So
finally,
it's
working
on
a
design.
We
had
another
meeting.
We
got
some
input
from
from
from
ad
from
microsoft
on
the
api,
so
I
think
we
still
need
to
get
some
input
from
the
like
the
ebs
side.
We
don't
really
have
anyone
working
on
that
so
because
their
api
is
a
little
different
from
the
rest.
So
yeah,
that's
still
like
one
remaining
thing
we
want
to
resolve.
A
Otherwise, it does not look like our current proposed API would work for EBS, so we're hoping to get some feedback from them.
A
The next one is the new ReadWriteOncePod access mode. Is Chris working on this?
I
Yes. Right now I'm just doing research, trying to understand what's going to be involved in making this work, and then I'll proceed with filing a KEP and an enhancement issue.
A
The next one is updating the topology labels from beta to GA. Do we have anyone for this one? It still needs an owner.
L
Yeah, so I got an email from an engineer from Microsoft. Her name is Layla.
L
Yeah, so for the CSI migration, we had an open-source Kubernetes CSI migration meeting last Friday, and we had some discussion there and some announcements.
L
So in our next CSI migration meeting, which is next Friday, our proposal is that each cloud provider should come up with a plan and let us know what their blocking issues are, what their current status is, and what the timeline is going to look like for them. Specifically for GCE PD, our timeline is that in 1.21 we're going to turn CSI migration for GCE on by default and fix all the test infrastructure and CI; in 1.22 we will announce it as GA; and in 1.23 and 1.24 we will try to do something around in-tree plugin removal. So, yeah.
L
There are more discussions around this, and I don't think I can cover all of them, but feel free to join the CSI migration meeting or just look at the notes. Regarding the core GA: I had three PRs related to the core GA out for review. I think two of them are already in, and one of them, related to the migration metrics, is still in progress.
L
It's still a work in progress, and Michelle and Patrick are trying to help review it. I guess that's about it — yes. And one thing I wanted to know: is there anyone from AWS on the call? I wanted to make them aware of this timeline, because last time I don't know if there was anyone from AWS attending the meeting.
L
Okay, that's all from the core side.
L
Sorry — by the way, one thing I also wanted to mention about the core CSI migration: I had a PR which is — what's it called — it's called "replace the CSI migration complete flag with an in-tree plugin unregister flag". That means some of the existing CSIMigration...Complete flags will just be removed directly, because they're still alpha, and we will replace them with a new flag called InTreePlugin...Unregister.
L
So basically the idea is that, with this new flag, we are decoupling unregistering the in-tree plugin from CSI migration. Previously, if we wanted to disable the in-tree plugin, we would have to enable the CSI migration complete flag to do that, which means you had to have the CSI migration feature turned on. But if there's a cluster that wants to go directly with CSI and not support in-tree volumes at all, it can do that directly.
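A sketch of the decoupling described above in terms of feature gates. The gate names follow the pattern mentioned in the meeting (shown here for the GCE PD plugin); treat the exact spellings as assumptions to be checked against the merged PR:

```yaml
# Hypothetical feature-gate configuration fragments.
#
# Before the proposal: disabling the in-tree GCE PD plugin required declaring
# the migration complete, which in turn required migration to be enabled.
featureGates:
  CSIMigration: true
  CSIMigrationGCE: true
  CSIMigrationGCEComplete: true
# After the proposal: a dedicated unregister gate, independent of migration
# state, so a CSI-only cluster can drop the in-tree plugin directly:
#   featureGates:
#     InTreePluginGCEUnregister: true
```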
A
Okay, thanks, Jiawei. Okay, the next one is CSI migration for vSphere. Is Divyen here?
A
He's not on the call, I think. Yeah, I think he attended the meeting. I think there is one issue regarding the thin and thick provisioning format — it is not supported in the vCenter, but it is in the in-tree driver.
B
So there was one more item — there were three things, I think. One was the disk format; the second one was that we should explicitly deprecate older versions of vSphere in this release so that we can meet the 1.24 timeline; and the third thing was that Michelle pointed out the hardware version — if we are going to not support, let's say, version 15, then we should...
A
Is this the same as the issue opened by — I think Jiawei opened the issue, right? Is that the same thing? Is this issue the same? Oh, no — this one is Jay's.
A
And the next one is — okay, Azure File. I think this one is trying to go beta.
C
Yes, that's right. Andy can confirm that we'll be targeting beta, and the blocker issue that was discovered last release — I think he has a fix for it.
B
But there's the fsGroup thing, and then there's the — yeah, and then...
B
No, that's already tracked, but there's an issue where, if the in-tree plugin supports mounting the same volume with different fsGroups, we won't be able to support that with the CSI driver.
C
Yeah — Jiawei, did you get a chance to talk to Andy about the plan? We have until next week to write up the plans for 1.24.
L
Yes, yes, I pinged him on Slack and he mentioned he will keep track of that. I'm not sure if he will attend the meeting, though, but I think I will ping him again, maybe next week.
A
All right, next one: GCE. So I think Jiawei said you're going to turn it on in 1.21, right?
H
Yes. I mean, our plan is for GKE to switch it on for GKE's 1.21 release. I think the pertinent question here is whether we switch it on by default in 1.21, which is our...
C
No, I don't think so. You need...
H
Okay, so then, if there's a need for consensus, I think we can just say that our plan is to turn it on by default in 1.21, but it will remain beta. Okay.
A
All right, thanks. And the next one is OpenStack Cinder CSI migration. So do we have anyone working on this now?
D
The thing is, I haven't been able to find anybody in my company who knows the upstream of the OpenStack Cinder CSI driver, and there are some tasks in the migration that we discussed last Friday — we need to update the upstream tools, upstream CI, and this kind of stuff — and we don't have the expertise here. So if there is anybody on this call or in the CSI community who knows about the OpenStack external cloud provider and the CSI driver upstream, like the install tools and the CI, we need help there. Otherwise we can...
A
I think there's a guy who worked on the — what is it called — yeah.
B
I actually pinged — based on the contribution history and the owners of the sig openstack cloud provider — like three people, and I think Dims this morning. I didn't get anything from the cloud provider owners, but Dims was asking what those specific tasks are for the migration, like the install tools and CI. But I think, if we can provide more accurate guidance, like what we need...
B
So if we can have more concrete steps — we know roughly, but if we can write down more concrete steps and give them to them, it might help. So Dims was asking for that information; I'm not sure if Dims is in a position to help, but yeah.
M
Yeah, quick question — this is Luis. Portworx has an in-tree driver too.
A
Do you want to do that? That I don't know, so maybe...
J
There's a meeting that Jiawei has started for storage vendors who are looking into CSI migration. I would suggest starting to attend that meeting and then coordinating with him, and we can track the status of the work in this doc as well as with Jiawei.
L
Feel free to ping me — that's... And also, we have a CSI migration channel on Slack as well. Oh.
C
The cloud provider ones, right — yeah, 1.24 is for the cloud providers; that's what we're focusing on. But after that first phase, the second round is that we're going to start looking into the rest of the non-cloud-provider storage plugins.
A
Thank you. The next one is the non-graceful node shutdown. We had a meeting with the SIG Node folks, and Yassine is working on updating the KEP; he said he hopes he can submit an updated KEP by Monday.
A
Next one is enabling user namespaces in the kubelet, so your IDs get shifted. I'm not familiar with this one. Do we know if this one is on track?
A
Okay,
I
don't
see
this
in
the
tracking
sheet,
yet
maybe
they
just
didn't
add.
I
don't
see
anything
from
signature
in
that
tracking
structure,
okay,
so
the
next
one
is
the
immutable
secrets
and
convection
maps
that
this
one
is
already
done.
That's
really
quick,
okay,
next
one
is
pvc
created
by
sleeper
set,
will
not
be
auto
removed
and
I
think
we
are
losing
track
because
I
we
don't.
I
don't
know
if
kk
is
still
working
on
that
yeah.
H
I mean, I guess we need to get this in by the next deadline, right, in February?
A
Okay, thanks. Next one: volume expansion. Okay, I think, as I was saying, this one is in design, so KK is probably — yeah, I think he's working on the PVC deletion one first. So the next one is the content notifier. Yes, I'm working with Xiangqian on this; he is actually updating the KEP, and he will try to schedule a meeting with Tim to discuss the API changes, so hopefully we can get that one done soon. And the next one...
A
Okay,
it's
a
kubernetes
utos
mount
split
into
new
repo,
so
this
is
also
srini,
so
he
he
said
no
progress,
but
he
will
work
on
it
this
week.
Okay,
I
guess
that's
tomorrow.
C
Oh sorry, I need to follow up on this one, to see if he's going to have time. Okay.
C
I think the KEP was merged, but there hasn't been an implementation yet. — Oh, okay, all...
L
Yeah, I have not had a chance. So the target here is to design this release? Oh yeah, I'll target that.
A
Okay, thanks. Okay, I think that's all we have on this spreadsheet; we'll go back to the agenda doc. We have a few items. I think this is from the last meeting, but it looks like Divyen is not on the call. He has this item, but I'd actually like to look at this one. It's this issue that he opened: basically, a volume leaks when the PVC is deleted while the associated PV is in the Terminating state. So basically what happened is, he created a PVC.
A
It has a PV bound to it, with the reclaim policy set to Delete. Now, if you go ahead and try to delete the PV first, it's going to sit in this Terminating state; so he canceled that, and then he went to delete the PVC. Then the PVC and PV are both deleted; however, the underlying volume is not deleted, because the CSI driver didn't get any call to delete it.
A
So
he
opened
this
issue,
which
seems
to
me
like
a
bug,
but
there
is
also
a.
I
know
there
was
a
lot
of
discussion
on
this
in
this
other
issue
seems
to
be
some
concerns.
A
So
I
guess
my
question
is:
if
we
say
if
admin
delete
pv,
then
we
always
delete
re.
We
don't
really
look
at
the
reclaimed
policy.
A
Then,
what's
the
point
of
having
a
reclaimed
policy
setting
there,
I
think
the
reclaim
policy
should
have
some
effect
there.
Does
anyone
have
any
thought
on
this?
Let
me
show
young.
C
Yeah, so I think the original intention is that the reclaim policy comes into effect when you delete the PVC object; deleting the PV object was basically not really a supported sequence of events — that would be like an admin going in and trying to force things.
C
I think that was the intended behavior. It's definitely a bit confusing. I think we could consider potentially adding a new finalizer or something to help with this. I think there were attempts to add it in the past; I don't remember exactly why we did not end up going with it, but it's something we can look into. But I think right now it's definitely working as expected.
A
So
you
mean
adding
a
finalizer
on
the
pv.
If
the
policy
I
mean
okay,
so
okay,
well,
do
we
add
that.
C
Like
adding
adding
a
finalizer
on
the
pv
to
that
will
stay
on
until
the
actual
plug-in
successfully
deletes
yeah.
I
think
I.
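A sketch of the objects being discussed, with the finalizer idea marked. The `kubernetes.io/pv-protection` finalizer already exists and is what keeps a PV in Terminating; the deletion-guard finalizer suggested above would be new. All names are illustrative:

```yaml
# Illustrative PV as in the bug report: reclaim policy Delete, stuck in
# Terminating after an admin-initiated delete. The pv-protection finalizer
# is real; the commented-out deletion-guard finalizer is hypothetical and
# would only be removed once the plugin has deleted the backing volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                      # illustrative name
  finalizers:
  - kubernetes.io/pv-protection        # existing finalizer
  # - example.io/volume-deletion-guard # hypothetical, per the idea above
spec:
  persistentVolumeReclaimPolicy: Delete
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: csi.example.com            # illustrative driver name
    volumeHandle: vol-123
```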
C
I mean, in the case you're describing, are they only deleting the PV object, or not deleting the PVC object?
A
And also, I don't think this behavior is documented on the Kubernetes website.
A
When you look at the reclaim policy, it should mean something; but this says, okay, if you try to delete the PV before you delete the PVC, then this is the behavior, but if you try to delete the PVC first, then you wouldn't run into this behavior. Yeah.
J
Lot
of,
like
weird
undocumented
behaviors
of
kubernetes
that
we
don't
like,
but
we
have
to
continue
to
support
like
the
multi-pod
single
node
thing.
J
I think it shouldn't be very easy to hit, because by default users should not have permission to delete the PV. Yeah.
A
But this is really confusing. I even thought that's not possible — how come you can have this bug and it's still there? Because I thought, if you have the Delete policy, then delete means delete. But in this case it depends on which one you try to delete first, right? You...
A
If you delete the PV first, then the reclaim policy is useless, but if you delete the PVC first, then the reclaim policy is honored. So I think that's the confusion, because if you look at it here, we actually did delete the PVC next, right? We tried the PV first, canceled it — it's still kind of in Terminating — but then we did delete the PVC. It's not like we didn't delete the PVC, so that's very, very confusing.
A
Does it say that in — but if you really just want to delete the PVC and keep your PV, then you need the reclaim policy to be Retain or something; otherwise both will be deleted — everything will be deleted, right? So I think it's not clear. If you just say, okay, delete the PVC first — how can we prevent the PV from also being deleted if we delete the PVC first?
J
So it sounds like documentation is the approach we'll go with.
A
Yeah, okay, sounds good, thanks. Okay, so I think the next one is Sandeep's; I think he's not here today, so let me move this one to the next meeting. I think he also opened an issue to track this. Okay, next one is the vSphere VM hardware version dependency.
A
We should document that, yeah. Well, I need to get Divyen to — and he's not here — but I think the hardware version is really tied to the vCenter version. But yeah, we should definitely document that one clearly; it's not clear right now.
N
No, because vSphere 6.7 U3 supports lower VM hardware versions also; customers may be running those VM hardware versions, which don't satisfy the CSI requirement.
A
I'm sorry, I couldn't completely get that; your voice is a little broken.
J
Yeah, I think that makes sense. I think the only thing we want is that we don't want to abandon a bunch of users here and say, oh sorry, you can't actually do CSI migration — that would be pretty bad. But it sounds like, at least with the backport, there's a potentially lower hardware version, so let's find out what that version is. Yeah.
A
So
yeah,
oh
I'll,
ask
deviant
yeah
to
find
out
details
about
that.
Yeah
ooh
I'll
need
to
yeah.
Get
this
documented
and
see
what
is
the?
What
is
the
hardware
version
sounds.
A
Let's see — oh, I think we're out of time, sorry. I think I already mentioned this one; it's basically the change block tracking proposal. We need to get some input from EBS, from someone who understands the EBS APIs. So yeah, that's it. Anything else? All right, if not, that's it for today. Thanks, everyone.