Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) Workgroup for Storage Pool Design Review - 16 April 2020
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
Okay, thank you. So the topic for today is a proposal that I've been working on about storage capacity tracking. Because not everyone is probably familiar with the problem, let me start with a short problem statement. In Kubernetes we really have two situations where pods are getting scheduled and that scheduling doesn't work as well as it could. One situation is when a pod has a volume that hasn't been provisioned yet and the provisioning is waiting for the pod that is going to use the volume.
B
In that case, there is a scheduler extension built into the Kubernetes scheduler which triggers volume creation, and there is now also a mechanism, which we fixed for CSI, where a pod can move between nodes if a volume can't be created. But even in that case the Kubernetes scheduler basically picks a node at random; as far as the storage system is concerned, there's no hint from the storage system to Kubernetes about which nodes are actually suitable for the pod.
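The delayed-binding behaviour described here corresponds to a StorageClass with volumeBindingMode set to WaitForFirstConsumer, which defers provisioning until a consuming pod is being scheduled. A minimal sketch using the Kubernetes Go API types; the driver name is a placeholder:

```go
// StorageClass with delayed binding: provisioning waits for a pod that uses
// the claim, which is the situation described in the problem statement above.
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func delayedBindingClass() *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "fast-local"},
		Provisioner:       "example.csi.vendor.org", // placeholder CSI driver name
		VolumeBindingMode: &mode,
	}
}
```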
B
The other problematic situation is with ephemeral inline volumes. There the situation is even worse: a pod gets assigned to a node permanently, and only then, when kubelet tries to publish the volume, does the operation succeed or fail. If it fails because there isn't enough storage available on the node, the pod is basically permanently stuck, so we don't even have the chance for Kubernetes components to move it to a different node.
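For reference, a CSI ephemeral inline volume is declared directly in the pod spec rather than through a PVC, which is why the pod is already bound to a node by the time kubelet publishes the volume. A rough sketch with the core/v1 Go types; the driver name and the size attribute are illustrative, since volume attributes are driver-specific:

```go
// A CSI ephemeral inline volume: declared in the pod spec, so the pod is
// scheduled first and only then does kubelet call NodePublishVolume for it.
package sketch

import corev1 "k8s.io/api/core/v1"

func inlineVolume() corev1.Volume {
	return corev1.Volume{
		Name: "scratch",
		VolumeSource: corev1.VolumeSource{
			CSI: &corev1.CSIVolumeSource{
				Driver: "example.csi.vendor.org", // placeholder driver name
				VolumeAttributes: map[string]string{
					// hypothetical attribute; real keys are driver-specific
					"size": "2Gi",
				},
			},
		},
	}
}
```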
B
The only solution in that case is to delete the pod and hope that, the next time it gets created by a higher-level app controller, it lands on a more suitable node. So the proposal that I have pending, a KEP called storage capacity constraints for pod scheduling, tries to improve this situation.
B
Technology... sometimes it works, though. The idea here is to define this as a standard API object type in the API server. The reason why it needs to be built into Kubernetes is that, later on, we want to use that information also in the normal kube-scheduler, and more specifically in the part of the kube-scheduler that is responsible for volume creation and volume scheduling, and there is currently no good mechanism to do that with custom resource definitions. The client bindings also would be problematic.
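To make the idea concrete, the following is a purely illustrative sketch of the kind of built-in API type being argued for: capacity published per driver, storage class and topology segment so that the kube-scheduler can consume it. The field names are invented for illustration and are not the KEP's actual API:

```go
// Illustrative only: a rough sketch of a built-in capacity object. The names
// below are invented and do not match the KEP or any released Kubernetes API.
package sketch

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type StorageCapacityInfo struct {
	metav1.TypeMeta
	metav1.ObjectMeta

	// Driver is the CSI driver that reported this capacity.
	Driver string
	// StorageClassName identifies the parameters the value was computed for.
	StorageClassName string
	// NodeTopology selects the nodes that have access to this capacity.
	NodeTopology *metav1.LabelSelector
	// Capacity is the reported available capacity for this combination.
	Capacity resource.Quantity
}
```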
D
Patrick, sorry — so yeah, about this being part of core Kubernetes: the scheduling team actually prefers a plug-in design, where an external component, an externally loaded plug-in, can affect scheduling decisions, and I think that team is quite reserved about, let's say, accepting new primitives in the scheduler — just for reference — and they want everything to be written as plugins.
B
The plug-in in this case is the plug-in that SIG Storage already provides for the scheduler. We don't need to change anything in the core scheduler; all the changes are in that volume scheduling plug-in that SIG Storage basically has in the kube-scheduler now. The other extension mechanism that of course exists is the scheduler extender, which is a completely separate binary — it's basically a webhook — and then it could be done with a CRD, but the entire solution would basically become a separate, vendor-specific solution.
B
I've done that in PMEM-CSI, and I can say from experience that it's horribly difficult to deploy, because you end up with dependencies on however the cluster is configured: where the kube-scheduler is running, how to reach it from inside the cluster, where your extender runs — and of course every single scheduler decision then needs to go through the webhook, which has performance impacts, I would say.
F
It's still compiled in, but I'm just saying that in the end all of this is moving to the scheduler framework, and I think we should also consider that. If I understand correctly, the scheduler extender was kind of the old way of extending the scheduler, and now we have the scheduler plug-in framework, which is supposed to improve upon that.
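As a rough illustration of the plugin shape being contrasted with the webhook extender, here is a self-contained sketch of a filter-style capacity check. The real interfaces live in the kube-scheduler's framework package (whose import path has moved between releases); the types below are simplified stand-ins, and nodeHasEnoughCapacity is a placeholder:

```go
// Sketch of a scheduler-framework-style filter plugin for storage capacity.
// The framework's real FilterPlugin interface is not imported here; these
// types only mimic its shape for illustration.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
)

// filterStatus is a stand-in for the framework's *Status result.
type filterStatus struct {
	Code    int // 0 = success, non-zero = node rejected
	Message string
}

// capacityFilter mimics a FilterPlugin: for each candidate node it checks
// whether the storage reachable from that node can fit the pod's volumes.
type capacityFilter struct{}

func (capacityFilter) Name() string { return "StorageCapacityFilter" }

func (capacityFilter) Filter(ctx context.Context, pod *corev1.Pod, nodeName string) *filterStatus {
	// In a real plugin this is where published capacity information for the
	// node's topology segment would be compared against the pod's claims.
	if !nodeHasEnoughCapacity(nodeName, pod) {
		return &filterStatus{Code: 1, Message: "insufficient storage capacity"}
	}
	return &filterStatus{Code: 0}
}

// nodeHasEnoughCapacity is a placeholder for the actual capacity lookup.
func nodeHasEnoughCapacity(nodeName string, pod *corev1.Pod) bool { return true }
```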
B
C and C++ have done that for a long time, and it's really just plain problematic. The standard way of extending something is by remote procedure calls, whether you do that with gRPC or a REST interface, but that's the old way of the scheduler extender, which basically has all of these deployment issues, plus the performance overhead, yeah.
B
So with that in mind, I think the only viable way forward really is to extend the kube-scheduler, and how and where exactly the code lands — whether it's a plugin at all — can be decided according to the feedback from SIG Scheduling. Although I think we currently do have the interface defined for the storage-specific part, and that's the only part that would really have to be modified; however it actually gets called by the kube-scheduler is orthogonal to that.
B
Without changing the CSI spec — we have discussed how to do that with network-attached storage, but the problem is that there's no good interface to discover what GetCapacity call parameters are needed to retrieve available capacity, because it's unknown what kind of topology the storage system really has. So someone outside of that storage system, like the external-provisioner, can't make up the correct GetCapacity parameters to get a useful response from the CSI driver. That part pretty much has to be postponed.
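The GetCapacity call being referred to, expressed with the CSI Go bindings; the point above is that the caller must supply storage-class parameters and a topology segment, which it cannot invent generically. The parameter and topology keys here are placeholders:

```go
// Querying available capacity through the CSI controller service.
package sketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func queryCapacity(ctx context.Context, client csi.ControllerClient) (int64, error) {
	resp, err := client.GetCapacity(ctx, &csi.GetCapacityRequest{
		// Parameters normally come from a StorageClass of this driver.
		Parameters: map[string]string{"replication": "3"}, // placeholder
		// AccessibleTopology selects one segment, e.g. a node or zone.
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{"example.com/node": "node-1"}, // placeholder
		},
	})
	if err != nil {
		return 0, err
	}
	return resp.GetAvailableCapacity(), nil
}
```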
B
Otherwise, we create a parallel Kubernetes distribution with this feature added on top, and that just feels wrong to me; that's not aligned, in my opinion, with how Kubernetes itself gets developed. We do have a lot of features in the core for a reason, and that is to collaborate on them in the community.
H
I think this is kind of where some concerns have been raised in the past. I think the challenge is that calculating volume capacity tends to be different from storage system to storage system, and I want to make sure that what we're doing here would be generically reusable. So, as I understand it, we are allowing Kubernetes to be the one that will make the calculation to figure out what a volume size is going to be, and...
B
It's just the available capacity and, respectively, the maximum volume size, and those values are calculated by the CSI driver itself. The difference is... available capacity — and I don't know, I need to refresh my own memory again — the CSI spec has this GetCapacity call, but in my opinion it doesn't really specify in detail what the available capacity that is returned by the CSI driver means.
B
The KEP proposes that this gets configured in the external-provisioner, so that the CSI driver deployment can basically specify in more detail what this value really means — for example, whether the capacity response returns total capacity that just sums up all the available space. For a storage system that is completely linear and has no fragmentation issues, that's probably the right choice.
B
You can take that available capacity and split it up in whatever way you want, and if you have a volume that is smaller than the available capacity, you have a reasonable chance to actually provision it. Now, the other situation is where the storage system actually has fragmentation, and then the total available capacity might be considerably higher than the maximum volume size.
B
...that actually can be provisioned, because you have one free chunk at the beginning, some allocated space in the middle, and some free space again at the end, and you can only create volumes that fit into those two parts at the beginning or at the end. In that case, it makes more sense to report the maximum volume size, and the comparison in Kubernetes is the same.
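A small worked example of the distinction: with free extents of 40, 25 and 35 GiB, the total available capacity is 100 GiB, but the largest volume that can still be provisioned (assuming a volume must fit into a single extent) is only 40 GiB, so reporting maximum volume size is the more useful value for such a system:

```go
// Toy illustration of "available capacity" vs "maximum volume size" for a
// fragmented backend: the sum of free extents can be much larger than the
// biggest single volume that can still be carved out of them.
package sketch

func capacityView(freeExtentsGiB []int64) (available, maxVolumeSize int64) {
	for _, e := range freeExtentsGiB {
		available += e
		if e > maxVolumeSize {
			maxVolumeSize = e
		}
	}
	return available, maxVolumeSize
}

// capacityView([]int64{40, 25, 35}) returns available=100, maxVolumeSize=40:
// a 60 GiB volume would fail even though 100 GiB is "available" in total.
```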
H
And so that, I think, is the concern, because the size that Kubernetes requests may not be the actual size that the underlying storage system ends up using, right? So if I request a one-gig volume, the storage system may end up consuming three gigs to give me that one-gig volume, eating up more of the actual capacity than expected.
F
Last time when we tried to propose this, one of the use cases that came up was a storage system where, in the storage class, you can configure things like the replication factor, and that is going to impact the actual storage that gets taken up. But at the same time, multiple storage classes can also end up sharing the same underlying storage pool. So you can have one storage class with factor-one replication and another storage class with factor-three replication, yeah.
B
That is covered by the API, because a storage pool has a list of information by storage class. So, in the case where you have one storage pool and it may end up being used with replication and without replication depending on the storage class, the available capacity and the maximum volume size will be stored differently and will be populated differently, because the CSI driver will get called with the parameters of all known storage classes, to make sure that these values really reflect the storage class parameters.
B
Basically, it iterates over all storage classes, and for each storage class it does a GetCapacity call, and the CSI driver then looks at the parameters for that storage class. It's still the CSI driver that interprets certain parameters; it's just that the external-provisioner basically does that per storage class.
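A sketch of the per-storage-class flow just described, assuming an external-provisioner-like sidecar; this is illustrative rather than the sidecar's actual code, and publishCapacity stands in for writing whatever capacity object the KEP defines:

```go
// For every storage class that belongs to this CSI driver, call GetCapacity
// with that class's parameters (and, for local storage, the node's topology),
// then publish the result for the scheduler to consume.
package sketch

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func refreshCapacities(ctx context.Context, driverName string,
	classes []storagev1.StorageClass, topo *csi.Topology, client csi.ControllerClient) error {

	for _, sc := range classes {
		if sc.Provisioner != driverName {
			continue // not our driver
		}
		resp, err := client.GetCapacity(ctx, &csi.GetCapacityRequest{
			Parameters:         sc.Parameters,
			AccessibleTopology: topo,
		})
		if err != nil {
			return err
		}
		publishCapacity(sc.Name, topo, resp.GetAvailableCapacity())
	}
	return nil
}

// publishCapacity is a placeholder: create or update the capacity API object
// for this (storage class, topology) pair.
func publishCapacity(storageClass string, topo *csi.Topology, capacity int64) {}
```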
B
Yeah, so then the comparison takes into account the size and the storage class, and then it walks that CSI storage pool data structure to find a relevant entry that says, yes, this pool has enough capacity for my storage class, for this volume size, and then it picks the pool.
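And a matching sketch of the comparison on the scheduler side: given the claim's size and storage class, walk the published entries and accept a node only if some entry for that class and topology has enough room. The types are simplified stand-ins:

```go
// Scheduler-side capacity check over published entries (simplified: one node
// per entry instead of a full topology selector).
package sketch

import "k8s.io/apimachinery/pkg/api/resource"

type capacityEntry struct {
	StorageClassName string
	NodeName         string
	Capacity         resource.Quantity
}

func nodeFits(entries []capacityEntry, nodeName, storageClass string, requested resource.Quantity) bool {
	for _, e := range entries {
		if e.NodeName == nodeName && e.StorageClassName == storageClass &&
			e.Capacity.Cmp(requested) >= 0 {
			return true
		}
	}
	return false
}
```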
F
I think shared storage pools — or shared storage pools across storage classes — are still a challenge, though, because if, say, you have very low capacity available and you had a pod requesting two volumes from two different storage classes, but they end up sharing the same underlying storage pool, then you could potentially provision one volume successfully, but the second volume might fail to provision, because both available capacity values were assuming that this capacity was not being shared with anyone else.
B
One idea that I had is that we allow PVC creation in a tentative way and then, if we find that another volume fails, we roll back the binding of that first volume, or of the other volumes that have been tentatively allocated — something like that. I think that is feasible, but that's, in my opinion, a separate KEP, and even with that, this KEP here is still needed, because we still need more information about available capacity so that we reduce the chance that randomly picking a node will end up on a bad node.
B
But then, that's where, for volumes with delayed binding, we have this pod rescheduling. So basically what happens is that the kube-scheduler picks a node tentatively, because the CSI storage pool currently says that it has enough capacity. But then, when the actual volume creation happens, it fails. The external-provisioner can use that as a signal that it needs to retrieve capacity again; it can update the storage pool object, and the next time the scheduler gets involved for the pod...
B
It now has more up-to-date information and can pick some node where we actually still have some available capacity left. We do have pod rescheduling right now — we fixed it for 1.18 and the external-provisioner — but it's currently entirely random whether the next attempt will pick a better node. It might still pick the same node again, and then it will just continuously end up trying to run on the same node that still doesn't have capacity.
B
There have been proposals to model pending or in-flight operations, but it has been pointed out earlier that this is just not possible for every storage system, and therefore this KEP doesn't even try that. It still relies on some recovery mechanism for a bad node selection.
H
So I just want to understand the flow a little bit. If I can repeat it back to you, can you confirm? So the first step is identifying the capacity of a given storage pool. So the first thing a cluster admin would do is create a bunch of storage classes, then create a storage pool object that points to those storage classes. Now...
B
So on the external-provisioner: I picked the external-provisioner just as the one sidecar where this most logically fits, and it could be an entirely new sidecar, but I felt that, because the external-provisioner is involved in provisioning, it has a chance to notice things like volumes being created or deleted and then update the capacity. So it seems like a logical place to put it. The flow is that the external-provisioner needs to know what it can do.
B
This mode of operation would remain for CSI drivers that just manage local storage, but we can add another mode later on that uses some other way of identifying storage pools, and then the CSI driver deployment — basically the CSI vendor, the storage vendor — decides how to do that, how to identify storage pools.
H
I'm less concerned about the alpha; I want to make sure that we have a clean path to GA. So I want to dig in a little bit into what that interface would look like if we made it in CSI. How would we do the mapping between a storage pool and a storage class? So I guess we could have, you know, a get-storage-pools concept or something — can you talk about what you think that would look like?
B
I have looked at that KEP, and it doesn't really specify a full solution for that either. It has exactly the same issue: the mapping between storage class parameters and the information that was proposed for the CSI interfaces is incomplete, so I don't think that KEP solves the issue either.
B
That would actually be a neat idea: that we have one way of listing storage pools, just so that we know that they exist, and then, if we call GetCapacity with parameters that are still opaque, we get the information back about what pool that belongs to. That may work, yeah.
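None of the following exists in the CSI spec; it only sketches the idea floated here, namely a hypothetical way to enumerate pools while keeping their parameters opaque:

```go
// Purely hypothetical sketch -- not part of CSI. It illustrates the idea of
// listing pools so their existence is known, with opaque parameters that can
// be passed back to GetCapacity for each pool.
package sketch

import "context"

type StoragePoolInfo struct {
	Name string
	// Opaque, driver-defined parameters that would reproduce this pool's
	// capacity when passed back to GetCapacity.
	Parameters map[string]string
}

// PoolLister is an imagined extension interface, not defined by the CSI spec.
type PoolLister interface {
	ListStoragePools(ctx context.Context) ([]StoragePoolInfo, error)
}
```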
H
Okay, so assuming you have a mechanism in CSI to get a list of storage pools: the external-provisioner comes up, it says give me the list of storage pools you have, it has A, B and C. Then it iterates over the storage classes that exist in this cluster, and for each storage class that belongs to the driver...
H
...it does a GetCapacity call and gets the mapping from the storage class to the storage pool, and presumably the GetCapacity call returns a response for capacity that may be lower than what the actual capacity is, in order to accommodate a volume being provisioned with those specified parameters. And so effectively we've taken care of the issue of striping, mirroring or replication potentially throwing the calculation of Kubernetes off, because as far as Kubernetes is concerned, a byte is a byte. Well...
B
I think the most viable approach is to have a rollback for the volume provisioning, for volumes that have not been used yet, so we know that we can destroy them — because they've never been used by a pod, they don't contain data, therefore we can destroy them. If we keep track of that, then we may be able to add a rollback mechanism that says: okay, we've allocated some volumes...
B
Yeah, I'd like to move back to this KEP here, because what we are now discussing is really a separate enhancement proposal that needs a lot more thought. I fully agree that we need it, and I wouldn't even move this KEP here, with storage capacity, forward from alpha to beta unless we also have an answer to that other problem, like concurrent pods, pods with multiple volumes — but I would like...
B
If we allow for a special case that an inline volume doesn't have a storage class, for example — that would be, in my opinion, the long road to fix this problem for CSI inline ephemeral volumes: add an API extension with a size, use that size, and we're basically done — mostly done — for capacity tracking. The problem remains that recovery is incomplete for inline volumes, because it's still...
B
But it's another big change, and there have been some concerns. My first idea was: okay, let's create a new volume source for a volume, let's make it essentially a volume claim template, and then some controller can create an actual PVC in the same namespace as the pod, on behalf of the pod, and that PVC then triggers provisioning. We then have a volume that behaves basically the same as a normal PVC reference.
B
That works via the ownership mechanism: the PVC gets deleted together with the pod and therefore has a lifecycle that is tied to the pod. Now, there has been this concern about creating a user-visible object like a PVC automatically through a controller, but I think that's not that unusual.
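The ownership mechanism described can be sketched as follows: the controller-created PVC carries an owner reference to the pod, so the garbage collector removes the claim when the pod goes away. The naming convention and surrounding controller logic are illustrative:

```go
// A PVC created on behalf of a pod, owned by that pod so it is garbage
// collected together with it.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func claimForPod(pod *corev1.Pod, volumeName string, spec corev1.PersistentVolumeClaimSpec) *corev1.PersistentVolumeClaim {
	isController := true
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      pod.Name + "-" + volumeName, // naming convention is illustrative
			Namespace: pod.Namespace,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "v1",
				Kind:       "Pod",
				Name:       pod.Name,
				UID:        pod.UID,
				Controller: &isController,
			}},
		},
		Spec: spec,
	}
}
```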
B
But I think that this idea still has merit, and I want to bring it up, because that to me is even more important than figuring out how to handle pods with multiple volumes or this heavy-load situation, because even on a lightly loaded system, inline volumes can land on the wrong node, and that's just bad, and I want that problem solved too, even before the others. Sorry, time check.
H
The two main concerns right now are multiple volumes scheduled in a very short amount of time, and if we have a viable solution for that, I think that will unblock the alpha. To Michelle's point, it looks like SIG Architecture is encouraging us to have a plan not just for alpha but kind of a path towards beta before we start implementing the alpha, and if that's the case, these kinds of hairy edge cases — let's kind of think through them and see.
H
Alright, okay. If you want to brainstorm that in a meeting, that would be perfectly okay. I hate to kind of put more work on you, so if you want to, like, get everybody in a room and say let's bounce ideas off each other and see if we can come up with something together, that would be okay too — whatever you're comfortable with, yeah.
B
I need to think through some of this myself first, and then I may have a more specific proposal, whether it's a KEP or just a Google Doc with some rough ideas. But I also need to know how we move forward with ephemeral inline volumes — yes, I don't want to lose track of that in particular.
B
Definitely. So my current proposal here in this KEP — I've updated it last week or this week, I don't remember — has one section about how to handle CSI ephemeral inline volumes, and it basically has two solutions in this KEP: one is that we have the size extension, and the other is this completely new idea of mapping it to a PVC somehow, and I don't intend to put that into this KEP.
B
This KEP just says that the information we have here can also be used for ephemeral inline volumes, but how to do that needs to be handled separately. The question now is: which path do we want to pursue? Do we want to keep extending the current CSI ephemeral inline volumes, or do we want to try something else, and if so, would it be viable — or would it be acceptable — to have a controller that creates PVCs? Because I think that is...
H
To me, honestly, I think if we were to do that, the reason we would be doing it is because there's a missing object that basically allows you to represent volume capacity, and in lieu of that we're using the PVC as kind of a hack. And so it may be worthwhile to consider what a new object might look like.
B
But I like that less than actually splitting out the PVC. The main disadvantage is that we would need to modify all other components to work with that model. It can be done, I think, with a translation layer — something that sits between reading and modifying PVCs and then just redirects these reads and writes to where the PVC actually lives, whether it's a separate object or embedded — and then the rest of the code in the external-provisioner, for example, would be mostly the same.
H
It smells weird to me, because it's kind of not a user-created object: the whole lifecycle of it is going to be created by Kubernetes, deleted by Kubernetes, and tied to the pod object. And so when you have an object whose lifecycle is tied to the pod, there's the potential for, you know, a skew where the object exists before or after the pod is gone, and things like that — it gets weird.
B
Then yeah, even the ownership — that's fine either way, whether we block it or not. I actually would argue for allowing the deletion of the pod object while there are still PVCs tied to it, and even if the API server decides to go ahead and delete the pod, we would still have this pending PVC as the place where we store the information, where we know that this will then become deleted. It will basically still live as long as the volume exists, but it will get garbage collected. So I'm not concerned about that.