Description
Kubernetes Storage Special Interest Group (SIG) Volume Populator Design Meeting - 01 December 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
Okay, so hello and welcome. Today's December 1st, and this is the volume populators Kubernetes SIG Storage community meeting. I canceled the last two of these meetings, and I have not gotten a ton of work on this done over the month of November because I was distracted with other things. But I finished all the other things and I'm back to working on this now. I'm working on converting my data populator's webhook code into a controller, so I'll be pushing a PR for that relatively soon.
B
The main thing I wanted to ask about, or discuss, is who else is interested in doing actual implementation or code reviews, because some people had expressed in earlier meetings the desire to help with this, and I just wanted to know who I should be interacting with, if there's anyone who wants to be more closely involved, and with what help.
C
B
Okay, thanks Elliott, that's helpful. Okay, so Elliott might help with the coding, Xing with code review. Any other people that want to contribute? Yeah.
B
Okay, awesome, all right. So the plan of record is to produce a patch against the external-provisioner repo, with a new CRD that represents the actual populator. And there will be a new controller that will look... well, it won't look like, but it'll be conceptually similar to, the snapshot controller that we have in the external-snapshotter repo, in that it will be a singleton that would be installed in the cluster, and not with each CSI plugin.
B
But that's sort of the easiest way for me to get started and get the prototype done. And we even discussed the possibility of somehow putting it in-tree, but I don't know who to have that discussion with.

B
Or if it would make sense to move from, like, the external-provisioner repo to in-tree, so as to have it as a staging area. So that's still something that's TBD, but.
C
B
Okay, and then actual implementations of populators will be, you know, up to developers to put in their own repos. The other piece of this is for after I have the controller working and the CRD working.
A
Would that be under the kubernetes-csi org, or kubernetes?
B
Well, for now it's just in the external populator repo. If we do decide it has to be a separate repo, yeah, I would want to put it under kubernetes-csi, okay. I mean, it's going to have all the same problems that the external snapshot controller has, in terms of, you know, deployers have to install it and get those CRDs into their clusters somehow.
A
B
And it's not an in-tree thing, at least as it's currently envisioned. So I hope that there is work being done on making that problem more tractable in general, because I think we're just going to have more and more of these sort of out-of-tree CRDs and out-of-tree controllers that we're going to want to be available, and things are going to depend on them. You know, like, data populators are going to depend on this controller being there so they can register the kind, you know, that the populator understands, and so that users can get feedback when the populator's not there. So yeah, that's sort of a meta problem that I'm not spending energy on solving.
B
Progress, because, well, where it's most acute is with the snapshot controller right now, and the snapshot CRDs, right? Because we saw that some Kubernetes 1.17 distros went out without snapshot support. But the snapshot feature is beta, and so the feature gate gets flipped on, but if the CRD isn't there, you're sort of out of luck. And I don't know who's working on helping Kubernetes distros sort of know that they have work to do when things like the snapshot controller get released.
A
B
C
B
C
A
B
And yeah, we have to find a way to sort of get distros on the same page with this stuff. Otherwise, like, all the new storage features are just going to be released, you know, sort of here and there, hit or miss.
D
I think the reason we hit it with volume snapshots is because volume snapshots was the first kind of major feature to go through this, yeah, and I'm sure there's going to be a ton more features that are going to hit this. Right, because effectively what it means is, you used to be able to depend on a given Kubernetes version to figure out whether a feature was available or not, and distributors depended on that, users depended on that, and third-party components depended on that. But now that's kind of just the minimum bar.
B
So yeah, so that's sort of the status update: I'm back to actively working on this. It was really slow going through the month of November, but I should have more of a status update next week, and I will reach out to Elliott, and I'll put Xing down on my code reviewers. Thank you for volunteering. Cool.