From YouTube: Kubernetes SIG Storage Meeting 2022-01-13
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 13 January 2022
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.8765qb2fo50o
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, today is January 13, 2022. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. We have our agenda listed on the screen here; feel free to go ahead and add any suggestions to it as we go through it. First up, we're going to go through the 1.24 planning. The 1.24 release is underway; we did planning for 1.24 in December.

A: The upcoming dates to be aware of for 1.24: next up, on the 27th, there is a production readiness soft freeze. What this means is that if you have a feature that needs to go into the 1.24 release, any KEP that you have must get production readiness approval a little bit before, I think a week before, the actual enhancement freeze date. So please keep an eye on that. And Jesse, I see you're adding a couple of items here; do you want to comment on that?
B: Sorry, I was muted. Yeah, hi, so I'm Jesse Butler, I'm one of the release lead shadows for 1.24, and we're just trying to jump into some SIG meetings, introduce ourselves, say hey, and do exactly what you've done here. So it looks like you've got it well in hand.
B: I just did add that extra item: in week 11 we have a feature blog freeze to keep in mind as well. I'm not sure, I mean, I think you're all very aware and very active there, but this has been an opt-in part of the cycle for the last few releases. So, just as with KEPs, in week 11 the deadline for opting in feature blogs happens.
A: Cool, thank you, Jesse, for that reminder, and that's right, folks. So here's the full schedule. It's good to remember that there's also an option to have published blogs for the features that we have, and if you want to do that, please make sure you add to the spreadsheet that tracks this by the 23rd. All right, so with that we're going to go ahead and switch over to the spreadsheet that we use for tracking our work, and we'll go over the 1.24 items and get status updates. So the first item here is delete... fsGroup, or sorry, delegate fsGroup to the CSI driver instead of the kubelet. Chang was presumably on this, but I'm not sure we can actually commit him. Michelle, do you...?
A: That's reasonable, so let's hold off on this until the 1.25 release; I'm going to go ahead and cross it off.
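For readers following the item above, a minimal hedged sketch of what "delegate fsGroup to the CSI driver" refers to; the PVC and image names are hypothetical. Today the kubelet applies a pod's fsGroup by recursively changing ownership and permissions on the volume; with the feature under discussion, a CSI driver that advertises the VOLUME_MOUNT_GROUP capability receives the fsGroup during NodeStageVolume/NodePublishVolume and applies it at mount time instead.

```yaml
# Hedged sketch: a pod requesting group ownership of its volume via fsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000              # group the volume should be accessible to;
                               # with delegation, the CSI driver applies this
                               # at mount time instead of a kubelet chown
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc    # hypothetical PVC backed by a CSI driver
```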
E: So that's something I want to try in this release, and there are, there's one or two outstanding issues that that will fix, and move the expansion to GA as a whole. I am trying to set up some calls next week to discuss; if anybody has any other outstanding issues apart from recovery, and one more issue about ReadWriteMany volume types... but yeah.
A
Awesome
thank
you
and
for
folks
on
the
call
remember
that
if
you
are
using
volume
expansion,
it's
currently
in
beta
and
if
you
have
any
major
issues
with
it
any
concerns
before
we
move
it
to
ga,
please
try
and
attend
these
meetings
that
haman's
gonna
organize
to
raise
any
of
your
concerns.
Otherwise
it
is
going
to
get
moved
to
ga,
as
is
other
than
a
couple
of
known
issues
thanks
a
month.
Next
item
is
recovering
from
resize
failure.
I
assume
this
is
a
tied
to
the
related,
or
are
we
still
moving
these
independently.
E: Yeah, this will move independently, because we don't want to block resizing as a whole going GA on this; the thinking was that as long as you just have a way to recover, it's fine. So we do have a way now. So we want to move the recovery feature to beta in 1.24, while moving resizing as a whole to GA.
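For anyone who wants to test the beta behavior being discussed, a minimal hedged sketch of how expansion is exercised; the StorageClass name and provisioner are hypothetical. Expansion must be enabled on the StorageClass, and the PVC is resized by raising its storage request; the recovery-from-resize-failure feature is about being able to lower the request again after a failed expansion.

```yaml
# Hedged sketch: enabling expansion and resizing a PVC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-expandable               # hypothetical name
provisioner: example.csi.vendor.com   # hypothetical CSI driver
allowVolumeExpansion: true            # required for PVC expansion
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-expandable
  resources:
    requests:
      storage: 20Gi   # to expand, raise this (e.g. to 40Gi) after creation;
                      # the recovery feature would let you reduce it again
                      # if the resize fails
```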
A: Thank you for that. The next item is issues related to assuming volumes are mount points. I think we haven't had an update on this for a while. Does anyone have a status update on this, or not?
A: All right, the next item is CSI ephemeral volumes, and that is moving from beta to GA. Jan, "we'll check with Jonathan or Matt on this" was the last status update. Jan, did you get an opportunity to check with either of them?
I: Yeah, I took this. We didn't make much progress; I'm meeting with them tomorrow, actually, about this, because some CSI drivers started implementing ephemeral volumes for persistent storage, which is kind of dangerous and may not be secure, and our solution is in our KEP.
A: Got it, all right. So it sounds like the next steps here are to figure out how these ephemeral volumes are going to play with security, given pod security policies going away, and it sounds like you have a meeting with Jonathan to discuss that soon. So progress is being made, cool. Yes, thank you, Jan.
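For readers following along, a hedged sketch of the inline CSI ephemeral volume pattern being promoted, whose interaction with pod security (after PodSecurityPolicy removal) is the open question above; the driver name and attributes are hypothetical.

```yaml
# Hedged sketch: a pod using an inline CSI ephemeral volume.
# The volume's lifecycle is tied to the pod; the CSI driver must declare
# support for the Ephemeral volume lifecycle mode in its CSIDriver object.
apiVersion: v1
kind: Pod
metadata:
  name: inline-csi-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      csi:
        driver: inline.csi.example.com   # hypothetical driver
        volumeAttributes:
          size: "1Gi"                    # driver-specific, hypothetical
```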
A: Moving on, the next item is the volume group API, a design for this cycle. Xing, any updates on this one?
A: The next item is CSI out-of-tree moves: the iSCSI driver, fit and finish, image building, testing, CI/CD, documentation, etc. I think this was a carryover from last cycle. Anyone have an update on this? I don't think Humble is able to join the call.
A: All right, and the next item is moving the Samba/CIFS CSI driver to GA. This should help with the flex volume deprecation as well. Andy, Julie, Michelle, anyone have an update on this?

A: I guess then the next question is: do we want to commit to Samba/CIFS going to GA for this cycle, given we haven't had an update on this for a while either?
A
All
right
moving
on
next
item
is
pvc
volume
snapshot.
Namespace
transfer.
Do
we
have
most
to
fall
on
the
call.
C: So I pinged him after the last meeting and he said he needs some help. So I was suggesting maybe, you know, Masaki could work with him on this. I have not had a chance to check on this. It looks like he's not going to make it.
A: The next item is CSI volume health, additional metrics and/or events. This is moving from alpha to alpha two. The last status was that we needed code review on this. Anyone have an update on that?
C: Right, so yeah, Hemant has already reviewed that, so right now, basically, you need to get people from SIG Instrumentation and SIG Node to review it for that to be merged.
E: I think there was a review from SIG Instrumentation yesterday, and there's some feedback on the PR.
A: Okay, cool. Thank you, Shane. The next item is the volume populator data source. Ben, are you on the call?

K: Yeah, I'm here.
K: So this one missed the beta, and we're going to go beta in 1.24. You may want to add that the work to be done is to add metrics support and testing.

A: Got it.
K: The volume data source validator controller, which is an out-of-tree controller, will have metrics, and the library for supporting populators will have support for metrics. Although, being a library, the consuming application will need to do a little bit... we'll need a little bit of glue. So there will be metrics support in the library, but there will be work for...

K: But if you read the KEP, there are very good reasons for all the metrics that are proposed there. So I don't think we...
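For context on what the populator work enables, a hedged sketch of a PVC that asks a custom populator to fill it; the API group and kind below are hypothetical, and the AnyVolumeDataSource feature gate must be enabled for dataSourceRef to accept non-core objects.

```yaml
# Hedged sketch: a PVC populated by an out-of-tree volume populator.
# The volume-data-source-validator controller mentioned above watches PVCs
# like this and reports whether a populator is registered for the referenced
# Kind; the populator library does the actual data copy.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:                       # requires the AnyVolumeDataSource gate
    apiGroup: populators.example.com   # hypothetical API group
    kind: ImageArchive                 # hypothetical populator CR kind
    name: my-archive
```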
A: Cool, sounds like we need some production readiness stuff here to get it to beta, and that's going to be in progress this cycle. Thank you, Ben. The next item is the object storage API, COSI. This is hopefully moving from a prototype to an alpha this cycle. Anyone have an update on this?
A: Got it, that makes sense. All right, hopefully we'll make good progress on this soon. Moving on to change block tracking; this is a design for this cycle. Any updates on this?
A: The next item is runtime-assisted mounts. Deep, yeah, I know this has been a back-and-forth with SIG Node. How are things going?
J: Yeah, so based on our last discussions, I have a fairly newish design, so for this new design I'm just now updating the KEP with it, and then we'll queue it up in the SIG Node discussions. The advantage of this new design is that it avoids many of the CRI changes, like, any of the CRI changes. So I think it's much clearer, and yeah, as soon as I update the KEP, maybe someone like Jan can do another review.

J: So Mauricio published a proposal for this and...
A: All right, the next item is the node expansion secret. "Humble will submit a KEP" was the update for this cycle. Anybody know if there's any progress here?
C: So he just said he's going to be working on a KEP, and there is an enhancement issue, so he is going to work on this in 1.24.

C: So I know he has opened several KEPs, a KEP for each cloud provider.
A: Okay, so with that, let's get the per-cloud-provider-specific CSI migration updates. So first up is vSphere. Anyone have a status update on that?
C: I think it's just about the KEP that Joey just opened, that one. (Got it.) That's already... that's through; we're basically just waiting for the production readiness review from their side. I think we should approve it, yeah. They're just trying to turn it on by default in this release. There are a couple of issues.
A: Awesome, thank you, Shane, for that update. Next up, Azure Disk or Azure File. Anyone have an update on either of those?
A: Next is GCE. I don't know if Matt's on the line; if not, Michelle, do you happen to have an update for the GCE PD?
D: I think it's targeting GA. I think AWS is also targeting GA, but I think we should double-check that one with them.
A: And then we have OpenStack, also targeting GA; supposedly Dims is owning this. Anybody have confirmation?
C: Yeah, so he actually already submitted a PR, and the PR is already merged. So what are the other things other than the PR? Is it just the documentation remaining? Michelle, Jan, do you know what else we need to do for this?

C: It's basically just, you know, the feature gate, I think.
A: All right, next up we have Ceph RBD and CephFS. Both of them are owned by Humble at this point, targeting beta. I don't believe we have an update on this, so we'll keep moving. Next up is CSI migration for Portworx.
L: Yeah, from my side, we have to update the KEP. I'll probably be doing it, I'll probably do it next week, but I think, yeah, we are in agreement on what changes we need for 1.24.
A: Cool, thank you, Shane. The next couple of items are owned by Masaki: secret protection and in-use protection. Anyone know if there is progress being made here?
C: I have not seen him, so probably, well, since I'll ping him about the other one, the namespace transfer, maybe I'll ask him about this one as well.

A: Okay, sounds good. All right, the next item is co-owned between us and SIG Auth. The item is user ID ownership in ConfigMaps and Secrets, preserving default file mode bits.
A: Okay, let's see if we can get confirmation on whether we actually still need this; it looks like we haven't had an update for a while. The next item is ungraceful node shutdown. Anyone have an update on this? Still targeting alpha?
E: No, I don't know; I haven't taken a look at it recently. Okay.
J: So, for that one, I think there isn't going to be much storage interaction in their phase one. I think it's more like phase two or three when they would get to actually handling pods with PVCs. That does not... that actually refers to, like, real workloads.

A: Okay, so we should be able to move this out; 1.24 will be phase one only, and phase three will require...
A: Thank you, Deep, for that update. Next up we have a few items co-owned between us and SIG Apps. First up is: address issues with PVCs created by a StatefulSet not being auto-removed. Any update on that?
A: Okay, sounds like we need to check with Matt on the status for this item. Next is volume expansion for StatefulSets.
A: All right, the next item is ExecutionHook; it's going to be a design for this cycle.
C: Yeah, most likely. I talked to Xiangtian, but we have not really... because we need to resolve a few remaining issues, and I'm not sure if we have time to get that resolved before the deadline. Yeah, I'll just keep it at...
A: Okay, let's keep it as a stretch goal for now. The next item is SELinux relabeling using mount options, and Jan...
A: Okay, sounds good. And the last item we have here is: determine mount points without relying on /proc/mounts.
M: He is, I think, mostly working on that repo and just wants to get his final approval. I think he said last time it's almost good, it just needs to be merged, and I'm pinging him to give the final approval, yeah.
A: Awesome, good progress, thank you, Jing. And since we have you here, Jing, there are a couple of items that we need status updates from you on. Let me see if we can go back and grab those.
A: Okay, are you going to be driving this, or is someone else going to be driving this and you're going to review?
A: Okay, I'm going to put you as primary here. And then the second item is mount-related issues, perf, etc. This was moved from last cycle. Is that still something that we plan to commit to?
M: I think it is; we can combine it with the last item, like the fast mount one, yeah.
A: Right, yep, the issues related to assuming volumes are mount points.
A: Cool, thank you. And it looks like we have no PRs to discuss and no designs to review today. Miscellaneous: Michelle, cloning to a different storage class, discussion ongoing. Michelle, you want to talk about this?
D: Yeah, so this was an issue that Matt hit in trying to implement cloning for the PD CSI driver.
But
yeah
to,
I
guess,
summarize
the
issue
right
now
right
now,
when
you
clone
a
volume,
it
requires
that
the
source
ppc
and
the
new
pvc
be
of
the
same
storage
class,
and
you
know
in
some
cases
that
makes
a
lot
of
sense
like
if,
in
one
storage
class,
you
might
have
some
parameters
that
are
incompatible
with
the
parameters
of
your
target
storage
class.
I
think
originally
there
were
examples
like
the
fs
type
like
a
different.
D
D
C: Michelle, I forgot: how do we handle it right now? Are we actually blocking this? ...Yeah.
D: Right now, for cloning, we actually disallow it; there's an error given by the external-provisioner. But, like, yeah, I was also thinking, you know, snapshots have a similar issue too, but I think for snapshots we actually allow it.
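To make the restriction under discussion concrete, a hedged sketch of a clone request; the storage class name is hypothetical. Today the external-provisioner rejects the clone if the new PVC's storageClassName differs from the source PVC's class, which is the rule being debated.

```yaml
# Hedged sketch: cloning an existing PVC via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd   # must currently match the source PVC's class,
                               # or the external-provisioner rejects the clone;
                               # loosening this is the change being discussed
  resources:
    requests:
      storage: 20Gi            # must be at least the source PVC's size
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc           # the PVC being cloned
```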
D: It's actually in the PV object, but I think when we do cloning, I don't think we actually look into the PV object, and we don't pass it down to the driver either.
D: It could also be just sort of, like, potentially a documentation thing, where we say, hey, you know, there are some parameters, like fs type, that may not be compatible if you have a different storage class.
A: Could we wire up the external-provisioner to grab the fs type from the PV, pass that through, and let the driver do validation?
D: I mean, yeah, so the fs type was just one example, but there could be other things. I think that's more like a general problem; we could solve the fs type one specifically.
C: Got it. So there are basically multiple parameters that we're not really passing; the original parameters from the origin PV we're not re-passing at cloning time, right? So do we want to pass them all in now, or is that...
D: Yeah, I guess, yeah, one of the ideas is, like, when we do CreateVolume, in addition to giving the volume ID, should we also pass down more things?
D: Yeah, that's why I think the storage class we might not be able to pass down, since the original storage class could have changed, but, like, the PV spec, like the volume attributes in the PV spec...
M: What if we just do not validate, more like a snapshot? What's the worst case?
D: Yeah, I mean, we could say, you know... well, I think I was already sort of moving towards the idea that it should be the responsibility of the driver to check that the source and the target are compatible. But I think the main problem is that there are some parameters that may be difficult for the driver to check, such as the fs type. But I guess I don't know what else besides fs type; that's, like, the main thing that comes to my mind.
A: Yeah, it seems like one of those things where, yeah, we should probably loosen this a little bit. The edge cases are definitely there; for the ones that we're aware of, we can try working around them, and the rest of it is, like: CSI driver vendors, please be careful here. And as we discover new areas, I think we can surface them here in the SIG and make sure folks are handling it correctly. But it seems like the benefit outweighs the cons in this case.
A: Okay, anything else on this topic?
A: Okay: to loosen the restriction, let's see if we can do workarounds for known issues, document potential unknown issues, and proactively share any new workarounds in this forum as they are discovered.
N: Something... yeah, so this is... I couldn't add a topic to the doc since I didn't have permission. So if we have time, I have a topic that maybe I can bring in for discussion here. Yeah.
N: Oh yes, so this is about a particular use case that, I think, either doesn't exist or I might be missing something. This is regarding storage with a particular access mode, ReadWriteOnce. So, just to explain the scenario: I'm trying to get all pods on the same node to use the same storage on that node, but the problem arises if, say, I have to scale the pods to multiple nodes, and the new node wouldn't be attached...
N: The new storage wouldn't be attached to the new node, because that PV is already attached to the existing node, and it wouldn't spin up one more volume to attach to the new node. So, as far as I can see, currently there exists a workflow where it could be one volume per Deployment, or each pod could have its own volume through a generic ephemeral volume. But there isn't something where every node would have a volume.
N: All pods on the node would share the same volume, if that makes sense.
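For reference, the generic ephemeral volume option mentioned above (one volume per pod) looks roughly like this hedged sketch; the storage class is hypothetical. It provisions a fresh PVC for each pod rather than one per node, which is why it does not cover this use case.

```yaml
# Hedged sketch: a generic ephemeral volume; each pod gets its own PVC,
# created and deleted with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: per-pod-volume-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast-ssd   # hypothetical class
            resources:
              requests:
                storage: 10Gi
```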
A: So I want to make sure I understand your use case. So, ideally, what you want is for all the pods that live on a given node to share some sort of volume that is not ephemeral, it's persistent, right? And every time a new node is added and a new pod lands there, effectively a new volume is created on that node, right? That's... got it.
A: Have you looked into local persistent volumes?
N: No, I... yeah, so the whole idea was to use the EBS CSI driver. I don't know if the local persistent volume would work in that scenario.
N: I tried that approach; I believe it doesn't.
A: He needs something like a DaemonSet-style StatefulSet, which would give him a new volume per node. Yeah, I mean, you can always write a custom controller for this, Ritesh. It's a little bit of a weird edge case that you're encountering, and it's not super complicated to write such a controller: effectively you are creating, like, a DaemonSet-style StatefulSet, and you could write a controller that does that, have your own CR for how to define it, and then create your own pods and wire them up to PVCs.
D: I know the ask for, like, a DaemonSet-style StatefulSet has come up a couple of times in the past. It might be worth bringing this to SIG Apps too, and I don't know if they're still there, but there might have been old enhancement issues opened in the past for this. I don't know off the top of my head.
M: But it was mentioned that the new volume wouldn't attach to the new node because there's already one attached to the existing node; you are trying to provision, though, so why can a new PV not attach to a new node?
A: All right, so imagine, like, GCE PD: you provision a volume, and how do you say that that volume will only be attached to pods on node A, and that if somebody adds a node B, we must spin up a new PD CSI volume for that, and any workload that gets scheduled to node B now should be using that second volume?
D: I have seen in the past some people do a workaround where they have a StatefulSet, but they basically have this, like, small controller on the side that scales the StatefulSet to the number of nodes in their cluster.
A: I think you're going to need, like, pod anti-affinity as well, right?
M: If you add that scheduling rule, right, then the pod will possibly go to the new node, and the storage will also be put there, I think, with late binding.
A: So yeah, that could be an easy workaround for you, Ritesh: basically use a StatefulSet, put in an anti-affinity policy that ensures that the pods in the StatefulSet do not land on the same node, and then, like Michelle said, write a small controller that keeps the number of replicas on that StatefulSet equal to the number of nodes that you have in your cluster, and so that will, you know, create...
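A hedged sketch of the workaround just described: a StatefulSet whose pods repel each other per node via required pod anti-affinity, with a volumeClaimTemplate so each replica, and therefore each node, gets its own ReadWriteOnce volume. The replica count would be kept equal to the node count by the small side controller Michelle mentioned; the names and storage class below are hypothetical.

```yaml
# Hedged sketch: one ReadWriteOnce volume per node via a StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: per-node-storage
spec:
  serviceName: per-node-storage
  replicas: 3                    # a side controller would keep this equal
                                 # to the number of nodes in the cluster
  selector:
    matchLabels:
      app: per-node-storage
  template:
    metadata:
      labels:
        app: per-node-storage
    spec:
      affinity:
        podAntiAffinity:         # at most one replica per node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: per-node-storage
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:          # one PVC per replica, i.e. per node
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ebs-csi   # hypothetical; ideally WaitForFirstConsumer
        resources:
          requests:
            storage: 20Gi
```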
A: Now, I think the challenge you'll have from that point is: if you have any subsequent pods that are created on those nodes, how do you ensure that they attach the volume that was created for that node? And I think you can use a webhook, an admission webhook, that, you know, checks, once a pod is scheduled, what the correct volume should be for that node and, you know, injects it into that pod.
N: Right, yeah, I might have to try a few things, but I think StatefulSet... I did give it a thought, but it does change some of the mechanics of the deployment, because currently I'm using a Deployment object, right? So StatefulSet is kind of a no-go for some of the design decisions that I had to make.
N: So, being in Deployment-object territory, I couldn't come up with an option unless, yeah, as I said, we could write... I could write a custom controller to attach a volume every time a node spins up and somehow make the pod access that particular volume.
A: Yeah, I think, if you're going to stick with the Deployment object, then yeah, you don't have a lot of options.
N: Yeah, I don't know; as was mentioned before, even with the DaemonSet idea... I don't know if this needs more discussion, if we'd like.
A: Yeah, I think, yeah, it's an interesting idea, so if you want to kind of lead this, or, you know, put together a KEP, or even just brainstorm some ideas, I think that would be a good idea.
A: Cool, awesome. So I'll write down a conclusion here: Ritesh will put together a DaemonSet-style StatefulSet KEP and bring it to the SIG. All right.