Description
Kubernetes Storage Special Interest Group (SIG) Volume Populator Review Meeting - 09 February 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: All right, yeah, so today is the KEP merge deadline for Kubernetes 1.21, and we're trying to meet that deadline, so a bunch of stuff has been happening in the last week. Thank you to those who've reviewed the KEP. So, Saad, I saw you made an approval, and I don't know if you've been explaining things to Tim on the side or not, but he seems to have a very good grasp of what we're doing and how we're doing it.
A: We need an API approver to sign off, so basically the plan would just be to hope he takes another look before this evening.
A: I've been trying to be responsive through all channels so that he's not waiting on anything from me, yeah. I do want to talk about some of the issues he raised, because they're bigger issues, but just to go down the checklist: Saad, you've approved on behalf of SIG Storage, and I think David approved on behalf of the production readiness team. Because I pushed another commit in response to Tim's feedback, I think we're going to need to reapply the approvals, so I'll ping you and David again later. But Tim is the one I want to make sure is happy. So let me.
A: All right, this is working. Let me put the link in here.
A: Oh, thank you, that's pretty helpful. All right! So yeah, we responded to all of David's feedback, David did the approval here, and Tim took a look. He noticed in the commit history that we had had a webhook and we switched it to a controller.
A: We've of course discussed the reasons for that, and I think we're on solid ground here; we just need to get Tim to agree with us. I guess if he came back and said no, a webhook is better, we would have to seriously consider that. He actually expressed a lot of dismay that the current behavior is that when you specify a data source that's invalid.
A: We basically ignore it, which, and I think we all agree, is sort of not the best behavior. But given that it's been that way since 1.15, could we fix that in a backwards compatible way, or are we stuck with the bad behavior?
C: Do you remember why it was that we were clearing it out rather than rejecting?
A: I don't. I'm pretty sure that it was John Griffith. So the first addition of the data source field was for volume cloning, because I know that we were working on it. It was.
B: 1.12. Snapshot was added in 1.12, right, but of course John Griffith was adding some other changes. I don't remember, you know, whether removing the incorrect value was there from the beginning or not.
B: Yeah, I doubt there was any specific reason. I think, because I was also, you know, just getting introduced to that code. Yeah.
A: I do recall that, you know, while we had a very specific desire to get snapshot and restore-from-snapshot in, we didn't want to do it in a way that would be specific to snapshots. We wanted to do it in a more generalized way, which is why the field has the structure it does, where you could specify a data source, which is a Kubernetes object. So we were in a weird situation where we were hurrying to get snapshot support in, because that was imperative.
A: So here's my guess, not knowing for sure; we could go back and look at the commit history, but probably what happened is, initially it was added with little validation, just "here's an object, you can put whatever you want there." Somebody objected and said, hey, what if someone puts garbage? And so we had to add some kind of logic to validate it.
A: You can't just let people put whatever they want in here, and so an agreement was probably reached to say, well, for now it's just going to be snapshots, and later we'll relax it to add more things, and then later volumes were added too. But probably it was that decision, to say, well, we need to validate this field but we're going to relax it over time, that led to this decision to say, well, if it's something that's not recognized, we're just going to blank it out. And then maybe that was an ill-considered decision, or maybe, again, I don't know, maybe the person who did it didn't even intend for that to happen.
A: The thing I was emphasizing in my feedback to Tim is, when you turn this AnyVolumeDataSource feature gate on, it basically stops doing that for anything except core API objects. And maybe, to make Tim happy, we should change it so that if you specify a core API object, it actually rejects your PVC, which would be a non-backwards-compatible change, but maybe one that's for the better.
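The two strategies being debated can be sketched as follows. This is a hypothetical model of the validation decision, not the actual Kubernetes source; the `ALLOWED` pairs and function signature are assumptions for illustration. "Clear" silently drops an unrecognized data source, as today; "reject" would fail the PVC create instead.

```python
# Hypothetical sketch (not the real Kubernetes code) of clear-vs-reject
# handling for the PVC dataSource field.

# Recognized (apiGroup, kind) pairs; the empty string is the core API group.
ALLOWED = {
    ("", "PersistentVolumeClaim"),
    ("snapshot.storage.k8s.io", "VolumeSnapshot"),
}

def validate_data_source(data_source, any_volume_data_source=False, reject=False):
    """Return the dataSource value to persist, or raise when rejecting."""
    if data_source is None:
        return None
    group = data_source.get("apiGroup") or ""
    kind = data_source.get("kind")
    # With the AnyVolumeDataSource gate on, non-core groups pass through
    # untouched so that a populator can act on them later.
    if any_volume_data_source and group != "":
        return data_source
    if (group, kind) in ALLOWED:
        return data_source
    if reject:
        raise ValueError(f"invalid dataSource {group!r}/{kind!r}")
    return None  # current behavior: blank it out; the user gets an empty volume
```

With `reject=False` a config map data source is quietly cleared; flipping `reject=True` is the non-backwards-compatible change under discussion.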
C: And so, just to make sure I understand this proposal: if the feature flag is off, then there's some admission controller that will, instead of, you know, just clearing out the field if it's invalid, actually just reject the PVC, no?
A: Yeah, I added an alternative at the bottom and said one alternative that was considered was the validating webhook, yadda yadda, and I explained why we rejected that proposal in the last commit. So I tried to spell out our reasoning here for not doing the webhook.
A
It
still
doesn't
talk
about
what
you
know
how
how
core
kubernetes
should
handle
data
sources
that
are
still
definitely
not
valid,
like
if
you
put
a
config
map
as
the
data
source
of
a
pvc
like
today,
it
just
gets
ignored
and
even
tomorrow,
with
the
feature
gate
turn
on,
it
will
still
just
get
ignored
and
you'll
get
an
empty
volume
or
if
you
put
a
pod
or
if
you
put
a
load
balancer,
I
mean
there's
all
kinds
of
things
you
could
put
as
the
data
source
of
a
pvc
and
if
they,
if
the
api
group
is,
is
the
empty
string
we'll
retreat
it,
especially
on
the
kubernetes.
C: On the Kubernetes side, if it's invalid, and it's obviously invalid, you put in, like you said, a config map or something, provisioning is still going to be blocked, right? And you'll get an event.
A: Yeah, so that was one of the big points of contention: what do we do about this existing bad behavior that everyone agrees is bad, but that we're trying to be backwards compatible with to the extent possible? And maybe we could change that stance, and then, you know, if we do change our stance, say no, we don't want to allow these things to go in if they're invalid.
A: Okay, yeah, he'd asked about SIG Security. It didn't occur to me to reach out to them, and it's probably too late to get a hold of them now. I don't know; I mean, clearly there are some subtle security implications to what we're doing, because the act of creating volumes from data sources implies, you know, accessing data that you might not have access to. Or at least, you know, somebody might want to have more control over whether you're allowed to do this or not than we provide by default.
A: So I don't know what we need to do about the security aspect of this design. Yeah, I think.
C: At least we can respond to the question about why the controller does asynchronous events rather than synchronous validation, but for the.
A: A little bit about the security implications: it is the case that data sources have to be in the same namespace as the PVC that is being created from the data source, so at least we have namespace control. So there's a modicum of security, but I haven't thought about it much deeper than that.
C: Yeah, it's worth mentioning that, I guess, for beta, would it be automatically on by default? Is that right?
A: The feature gate will be on. We'll have to tell deployers and distros that this is a CRD and a controller you need to include for correct behavior, because if you just turn the feature gate on and you don't provide the controller, you're back in the boat where you create a PVC and nothing happens and you don't get any feedback.
A: And again, we don't want to end up in the situation we have with snapshots, where we have vendors shipping or installing the CRD automatically or something, because that caused a lot of clashes, you know, between multiple implementations trying to install the same CRD or trying to install a controller. We don't want to replay what happened with the snapshot controller back in the alpha days.
A
So
that's
the
other
concern
with
it
going
to
beta.
Is
that
like
that?
That
needs
to
be
handled
in
this
release
cycle?
You
know
some
sort
of
a
communication
to
distros
that
this
is
something
you
have
to
include
when
you
weren't
spin
121,
and
we
have
something
we
have
to
have
releases
and
everything
available.
A
There
was
one
other
thing
that
he
brought
up
about
metrics
this
comment
here,
so
he
he
was
interested
in
like
collecting
how
long
it
takes
for
pvcs
to
bind
and
and
making
some
judgments.
Based
on
that,
and
I
actually
think
it's
it's
an
interesting
metric
to
have
for
all
pvcs,
whether
there's
a
popular
involved
or
not
right,
just
knowing
the
time
it
takes
to
go
from
pvc
being
created
to
it
binding
for
empty
pvcs
for
snapshot
clones
for
volume
clones.
C: I think there are some metrics around how long it takes to provision.
C: I can't remember the details, but we have something, and I think it's along these lines: there are two sets of metrics. One measures the inner loop, like when an individual provision call happens, how long does that take, and then there's a second one which measures the end-to-end time, including the retries.
C: In the CSI sidecar there's that, and then on the Kubernetes PVC controller side there are also some metrics.
A: Oh, I see what you're saying: if you just have a general metric, you can't filter it by, like, how long did the empty volumes take, how long did the volumes with data sources take, how long did each type take.
A: Yeah, the key in that extra dimension would be the group-kind of the object that was in the data source, and then there'd be a special one for no data source, which could just be the empty string, I guess, if you're allowed to use the empty string as a key. Yeah, that makes sense.
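The extra metric dimension discussed could look like the following. This is a hypothetical sketch using a plain dict of timings rather than any real metrics client; the label is the data source's group/kind, with the empty string standing in for "no data source" as suggested above.

```python
from collections import defaultdict

# Hypothetical sketch: PVC creation-to-bind latency, bucketed by the
# group/kind of the data source ("" = no data source at all).
bind_seconds = defaultdict(list)

def data_source_label(pvc):
    """Build the metric label from a PVC dict's spec.dataSource."""
    ds = pvc.get("spec", {}).get("dataSource")
    if ds is None:
        return ""  # the special "no data source" bucket
    group = ds.get("apiGroup") or ""  # empty string is the core API group
    return f"{group}/{ds['kind']}"

def observe_bind(pvc, created_at, bound_at):
    bind_seconds[data_source_label(pvc)].append(bound_at - created_at)
```

A real implementation would presumably hang this label off the existing provisioning-latency histograms rather than introduce a new store.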
A: Okay, so yeah, that might just be a general piece of Kubernetes work that anyone in storage could work on. We can propose it during the next Thursday meeting. So yeah, we've responded to all of these comments and we're waiting for the reply, and I don't know what else to do between now and then.
D: Oh yeah, okay, check that one out, let's see here what just happened.
A: We're very specifically excluding core objects. I think what we'd be saying is, if that's what you want, you need an intermediate object that snapshots the pod, and then you use that as your data source. To directly refer to a running pod would have very weird semantic implications, right? We kind of made the judgment that all the core objects are off limits, except for PVC, for data populators, and if we want to revisit that decision, then yeah. We'd have to sort of propose full relaxation of the admission controller logic to include the empty API group and just let anything in. So yeah, you could just point a PVC at a pod, and if somebody had written a populator that understood how to do that, it would just work. But there's a question of who has control over it.
C: So is my understanding here incorrect? I think I was assuming, if you have any sort of populator in this new design, you need to register a CR of the type VolumePopulator that says, for this kind, you know, I am the populator. And so, if you expand that potentially to allow kinds that are core, the problem of who owns it: it would basically be whoever the first one is that claims it, effectively.
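The "first claimant wins" problem can be made concrete with a small sketch. This assumes a hypothetical list of VolumePopulator-style registrations, each claiming one (group, kind) pair; the dict shape and field names here are illustrative, not the actual CRD schema.

```python
# Hypothetical sketch: detect two populators claiming the same group/kind,
# the collision a validating webhook on VolumePopulator objects could block.
def find_conflicts(populators):
    """Return (first_claimant, second_claimant, key) for each duplicate claim."""
    claims, conflicts = {}, []
    for p in populators:
        key = (p["sourceKind"]["group"], p["sourceKind"]["kind"])
        if key in claims:
            # Without a webhook blocking this, both controllers would
            # fight over every PVC that names this data-source kind.
            conflicts.append((claims[key], p["name"], key))
        else:
            claims[key] = p["name"]
    return conflicts
```

As the discussion below notes, even a check like this only gates the registration object, not the controller behavior itself.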
A: Yeah, I was going to say, I would like to have a validating webhook on the... We actually talked about this. When we were doing the validating webhook, we said, well, we can both validate the PVCs and we can validate the VolumePopulators themselves if we want to. And then we backed away from doing a webhook at all, saying, well, we'll just do a controller. And so then the question was, well, what do we do.
A: Unfortunately, I think, even if we had the webhook to prevent two VolumePopulators from claiming the same group-kind, the existence of the VolumePopulator object doesn't control whether the populator can do its work. The populator can still function even if there is no VolumePopulator object, because all it's doing is being a PVC controller that's watching PVCs, reacting to them, and binding PVs to them. And if two of them decide to react to the same PVC, you will get bad results.
A: This would be the case if you had two CSI drivers that both picked the same driver name and you installed them both, right? That's how you would get into trouble, because then they would both fight over the same PVC when it came in. So we just sort of rely on vendors not to ever pick the same CSI driver name, and if anyone ever does, we rely on admins not to be dumb enough to install both of them.
A: Yes, so the theory is the same here: nobody should be writing populators for the same Kubernetes object, and, you know, when you write a populator, you should be defining a new object. I guess that was my justification for excluding the core group: anyone who's writing a populator is definitely going to make a new CRD that they're going to use as their data source.
A
No
one
would
ever
write
a
populator
for
something
that
already
existed,
because
that
would
be
a
weird
thing
to
do.
It
wouldn't
make
sense,
but
if,
if
it
does
start
to
make
sense
in
some
way
that
I
can't
see,
then
we
do
have
this
issue
of
yeah.
How
do
you
make
sure
that,
like
you,
don't
end
up
with
two
they're.
C: I think that's a valid concern, so I would say you push back on that and say, you know, let's start off with this tighter kind of validation here, keep core out of it, because of this issue of collisions that can happen. It can happen with any data source, but it's much more likely when you have core objects. That, combined with the fact that we don't really have a real use case for directly doing any core objects.
A: The indirection tells you how to make the volume from that pod, and then, if someone else comes in and makes a different type of volume from the same pod, they can use a different object name and work around the collision that way. So, you know, by adding a level of indirection with some other object.
A
All
right
I'll
all,
I
guess
I'll
just
respond
to
tim,
hawkins
email,
maybe
copy
it
into
the
into
the
github,
and
you
know
see
if
we
can
convince
him.
So
so
that's
that's
sort
of
where
we
are
today.
I
don't
have
anything
other
than
trying
to
get
the
kept
merged.
B: I think, yeah, I think so. It's really just about the KEP; what's still remaining, I think, is just the questions that Tim raised.
A: Okay, well, that's all I have for today. I'll get back to it, and yeah, I'm basically assuming that tonight is the deadline, and if we somehow miss it, I do intend to try for an exception, if I can, you know, wrap things up within the next few days.
C: Okay, sounds good to me. Yeah, feel free to hit me up.
A: Because I will need you to put your approval.
C: Back on, sure, and I'll actually keep an eye on the email thread between you and Tim. I actually noticed the emails before I noticed Slack.