Description
Kubernetes Storage Special-Interest-Group (SIG) Volume Populator Review Meeting - 19 January 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: All right, hello, welcome. This is the volume populators meeting from the Kubernetes Storage SIG. Thank you for joining. So, as I mentioned last week, I have everything basically working, at least... so yeah, just to recap.
B: What I wanted to do today was to go through the code and talk about what it does and how it does it. There are a few remaining things, maybe some polish, that are still TBD, but basically I think we could go to beta with this version of the code. But before I jump into that, are there any questions, or does anyone new want to ask about something else before we go into the code?
B: All right. The agenda from last week has a link to the git repo where the code is; you can go there and follow along if you want to, but I'm just going to share my IDE. Is that visible to everybody?
B: It's only 115 lines, it's very short, so this is sort of the minimum required to implement a populator, not including all the auto-generated code for the hello object, which is in this client directory. The auto-generated stuff is a huge amount of code. And then there's controller.go, which is... I can't see my scrollbar here... it's about 758 lines. This is the reusable part.
B: The idea is that this will eventually be a library that will be imported by all of the populators and will implement the core Kubernetes API interactions in a consistent way, one that follows the rules and, I guess, the requirements of any kind of volume provisioning, which we'll talk about in a minute. The rest of this repo is just boilerplate: the go.mod with minimal requirements... come on... open...
B: You know, these minimal requirements. Eventually I'm hoping to host this under kubernetes-csi; it's not there yet. There's a Makefile. There's a Dockerfile.
B: There are just three functions here. The main function does all the arg parsing. Most of these things are pretty standard: the kubeconfig and the master flags, which you see with almost every Kubernetes sidecar, specify how to talk to the Kubernetes API server. There's an image name which, as I'm realizing now, probably doesn't need to be a command-line parameter, but it tells the populator what the image name will be for the pod that will be doing the population.
B: There's a version command. And there's this main switch called mode. This binary is dual-purpose: it is both the controller and the populator, and which identity it takes on depends on whether you run it in controller mode or populate mode. If it's in populate mode, you have these two special arguments that are specific to the hello-world populator, file-name and file-contents, that specify which file will be created and what will be put into that file. All right.
B: There's the prefix, which is what gets prefixed onto all the annotations and finalizers that the populator will be creating; a group kind and a group-version-resource object, which are just standard Kubernetes boilerplate; and the mount path and the device path within the pod for the volume to be populated.
B: So this is how you control, when the populator pod starts up, where the temporary PVC gets mounted. And then there's this function called getPopulatorPodArgs, which is down here at the bottom. The idea behind this function is that for any given object to populate, you want to invoke the populator pod with different arguments, and this part I envision changing over time, becoming more flexible.
B: The first thing it does is convert the unstructured to a hello object, which is what it's expected to be, because the GK and the GVR up here are both for hello/v1. And then it basically just returns the list of arguments that the populator pod will have: it'll put it in populate mode, it'll set a file name.
B: After a volume has been created, the controller will attach the volume to the pod, and then it will invoke this binary again with the mode=populate argument, which causes it to call this function called populate. That gets the file name and the file contents, opens the file, writes the contents in, and it's done. It's really simple. This is the minimal example of how to create a populator, and the idea is this can get as fancy as you want.
B: But if users deploy this thing, and they deploy the CRD that represents the hello object, all they need to do then is instantiate it.
B
If,
if
there's
a,
if
there
are
populators
where
the
actual
work
is
done
by
some
csi
driver
or
some
sort
of
it's
done
by
the
the
back
end
itself,
we
could
still
use
this
model
here.
Where
you
have
the
populator
machinery
running
and
basically
the
pods
that
it
creates
much
like
in
in
this
example,
it's
only
about
you
know,
20
lines
of
code
to
do
the
population.
This
could
be
a
no-op
or
a
call
or
an
api
call
to
something
else
to
actually
do
the
the
real
work.
B: So just because we've set this up as a pod that attaches to the volume doesn't mean that you can't populate using some external thing that knows how to talk to the storage controller and do the real work.
B: All right, if there are no questions on this: the populator machinery itself, as I said, is... how can I move this thing? Okay... 700 lines or so. It's got a couple of constants.
B: The struct itself has several informers, because it's going to be watching PVCs and PVs and pods and storage classes, plus this unstructured object, which represents the data sources themselves. This thing doesn't know anything about them; it just treats them as unstructured.
B: We have five specific informers that get created here; it builds the objects and then sets up watches on all five of those object types, with little handler functions. Most of these handler functions are extremely simple, as you'll see.
B
Now,
there's
this
notification
framework
I
built
in
here
that
basically
allows
you
know
the
only
thing
that
this
controller
is
interested
in
is
pvcs.
Really,
you
know
it's
going
to
see
a
new
pvc
that
it
determines
needs
to
be
populated
if
it
determines
it
doesn't
need
to
be
populated.
It
just
drops
it
and
ignores
it
and
lets
the
regular
thing
happen,
but
if
it
determines
it
does
need
to
do
something
with
the
pvc.
B: There's a cleanup function that cleans up all the stuff this thing creates. I won't go through the details, because it's just manipulating sets. translateObject figures out whether it's a real object or a tombstone. handleMapped is the thing that unmaps objects. So down here, if the PV watcher sees a change to a PV, almost all most of these handlers do is call handleMapped, and then what those do is they go...
B
Look
for
those
notifications
to
see
if
there's
any
pvcs
that
are
mapped
to
this
pv
it'll
do
something
similar
with
pods
storage
classes
and
unstructured
objects.
The
only
interesting
one
is
handle
pvc.
It
both
calls
handle
maps
for
pvc
and
then
it
also
does
the
translate
object
and
then
it
adds
the
real
pvc
to
the
work
queue.
B: The run function is just where it forks the background thread to do the actual work; this is pretty standard controller stuff.
B: Everything else is an error, not too interesting. syncPvc is where all the real work is. About half the file is this giant syncPvc method, and I'll go through it all the way, to sort of explain every case that gets handled, and you guys can help me remember if I missed any important cases.
B
So,
most
importantly,
it
ignores
any
pvc
in
its
working
name
space,
so
every
populator
is
going
to
need
sort
of
its
own
name
space
where
it
creates
these.
These
temporary
pvcs
that
it's
going
to
do
the
population
on
and
then
after
those
pvcs
are
populated,
it's
going
to
rebind
the
the
underlying
pv
back
to
the
real
pvc,
and
so
because
of
that,
you
can't
ever
create
a
pvc
with
the
data
source
in
the
same
name
space
where
the
populator
for
that
data
source
lives,
because
that
would
be
really
weird.
B
So
we
just
ignore
those.
If
it
somehow
happens,
it
gets
the.
This
is
just
a
you
know,
get
the
actual
pvc
from
kubernetes.
The
first
thing
we
do
when
we
get
a
pvc
that
is
not
in
the
wrong
name.
Space
is
look
at
the
data
source
if
there's
no
data
source
or
if
the
data
source
doesn't
match
our
gk.
B: If it does match the group kind this populator is responsible for, the first thing it does is go and load the unstructured object, which is the actual data source. If for some reason it doesn't exist yet, we bail, after adding a notification on that unstructured object.
B: If there is a storage class name, we try to get the storage class, and again, if the storage class doesn't exist for whatever reason, we add a notification and bail, so that when the storage class is eventually added we'll come back in and reprocess this. But if there is a storage class and we're able to get it, and it has a volume binding mode of WaitForFirstConsumer, we set waitForFirstConsumer to true, and then we look at the node name of the PVC. So this is something I actually wanted to talk to you guys about.
B
I
went
looking
at
how
wait
for
first
consumer
is
actually
handled
in
in
the
real
external
provisioner
sidecar
and
what
appears
to
happen
is
it
relies
on
this
annotation
called
selected
node.
B: Something, then, is watching that PVC, waiting for a pod to get provisioned and bind to that PVC, and then waiting for that pod to get scheduled to a node. When that node gets set, it sets an annotation on the PVC, and I don't know which component is responsible for that portion of the work. But I...
B: ...has the scheduler put that annotation on yet? If not, we just wait; we don't do anything. So if waitForFirstConsumer is set to true and the scheduler hasn't put the annotation on, we're not going to do anything, which is exactly what the external provisioner would do. If it is set, we'll continue on. If waitForFirstConsumer is false, or there is no storage class, it will assume it's false and continue on with the immediate binding strategy.
B: We automatically generate the pod name and the PVC-prime name just based on some prefix plus the UID of the user-created PVC.
B
This
is
currently
hard
coded.
This
could
be
made
more
flexible
depending
on.
If
we
want
to.
We
set
up
notifications
on
both
those
objects
so
that
if
this
pod
doesn't
exist
or
if
it
changes
later
on
we'll
get
we'll
get
the
chance
to
resync
this
pvc
again
and
then
we
attempt
to
get
the
pod
and
get
the
pvc
it's
okay,
if
these
don't
exist
yet
because
we
are
responsible
for
creating
them.
B
So
if
they
don't
exist,
these
variables
just
get
set
to
nil
and
we
keep
going
so
I
put
a
comment
for
myself
here
at
this
point.
We
we
haven't
changed.
Anything
we
haven't
done
anything
we're
just
we're.
Just
reading
reading,
checking
a
bunch
of
different
objects,
starting
here
we're
actually
going
to
do
we're
going
to
start
manipulating
objects.
B
So
the
first
thing
we,
the
first
main
switch,
is
whether
the
pvc
that
the
user
created
has
a
volume
name
or
not.
So
if,
if
this
volume
name
is
set,
it
means
that
somehow
the
pvc
got
bound
to
a
pv
and
whether
we
did
it
or
someone
else
did
it.
It
can
never
be
changed.
So
we
assume
that
that
we're
done
and
we
skip
over
the
main
portion
of
this-
this
function.
If,
if
it
is
not,
if
it's
empty,
then
this
pvc
is
not
bound
yet
and
there's
work
for
us
to
do
so.
B
I
went
through
a
lot
of
generations
for
how
to
implement
this
ensure
finalizer
function.
I
ultimately
decided
to
use
patch
with
the
json
patch
to
add
it
on
it's
kind
of
difficult
to
do
this
atomically,
because
the
representation
of
the
finalizers
is
a
list
and
patches
in
kubernetes
can't
atomically
manipulate
lists
right.
They
can.
But
if
there's
a
race
you
can
get
really
weird
behavior.
B
So
this
is
my
attempt
to
safely
atomically
update
the
list
of
finalizers
with
a
json
patch.
You
can
redo
that
if
you're
interested,
where
was
I
insure,
finalizer,
okay,
okay,
so
the
first
thing
we
do
if
the
pod
doesn't
exist,
the
populator
pod
we
create
it.
B
Here
we
look
at
whether
the
user
created
a
persistent
volume,
so
we
support
file,
system
volumes
and
raw
block
volumes
here,
depending
on
whether
it's
wrong
block
or
not.
It
could
change
the
arguments
that
the
populator
is
going
to
need.
So
we
include
that
flag.
This
is
the
callback
function.getpopulatorargs
that's
defined
in
main.go,
then
we
create
the
pod.
B
We
assign
it
to
the
pvc
name
that
we're
going
to
use
it
has
one
container
in
it.
The
image
name
is
defined
by
the
command
line
argument.
The
arguments
are
defined
by
this
callback,
whether
it's
raw
block
or
not.
We
set
up
either
a
volume
device
or
a
volume
amount.
If
it.
If
it's
wait
for
first
consumer,
we
set
the
node
name
on
the
pod
to
the
node
name
that
the
scheduler
chose.
So
this
forces
the
resulting
pod
onto
the
node,
where
it
needs
to
be
basically
bypassing
the
scheduler
for
this
populator
pod.
B
Then
it
creates
it
and
we're
done.
Then
it
goes
ahead
and
creates
the
pvc
for
that
pod.
Again,
it
checks.
If
it
already
exists.
If
it
doesn't,
we
go
through
the
process
of
creating
it.
If
wait
for
first
consumer
is
set,
if
wait
for
first
consumer
is
set,
then
for
pvc.
Prime,
what
we
do
is
we
put
the
annotation
on
pvc,
prime,
with
the
same
node
name
that
the
scheduler
chose
for
the
original
pvc-
and
I
presume
this
to
be
safe.
B
I've
tested
it,
it
seems
to
work.
I
don't
know
if
it
has
any
weird
bad
interactions
with
the
scheduler.
I
hope
it
doesn't,
but
but
this
worked
when
I
tested
it-
create
the
pvc
we're
done
at
this
point.
We
always
return
because
there's
no
chance
that
that
the
rest
of
this
function
will
succeed.
If
we,
if
we're
just
creating
the
pvc
and
and
the
and
the
the
pod,
so
we
return
we
wait
for
something
to
get
updated.
The
sync
function
comes
back
through
and.
B
Oh,
it
was
right
there,
sorry
that
this
return
is
is
only
if
the
pod
was
was
nil.
So
if
we're
creating
the
pod
during
this
iteration
of
the
sync,
then
we
always
return
once
the
pod
gets
created,
we'll
come
back
through
from
the
top,
and
this
will.
This
will
be
false,
so
we'll
skip
over
this
section
at
this
point
we'll
be
looking
at
the
the
pod
status.
B
If
it's
not
succeeded,
then
we
well
if
it's
failed,
we're
going
to
delete
the
pod
and
then
return
which
will
cause
that
pod
to
go
away.
The
sync
will
get
called
again.
It
will
recreate
the
pod
and
we'll
do
that
in
an
endless
loop.
Until
we
reach
succeeded,
if
it's
not
succeeded
or
not
failed,
it
will
just
return
and
wait
for
the
it
to
be
something
other
than
those
two.
B
So
at
this
point
we
know
that
the
pod
has
succeeded
if
it
gets
to
this
point.
Pvc
prime
really
should
not
be
nil
if,
if
the
pod
has
succeeded
because
the
pod
needed
that
pvc,
so
this
is
just
an
error
if
the
pvc
isn't
there
at
this
point,
so
once
we
get
here,
the
pv
exists
and
has
the
data
in
it.
So
it
has
been
populated,
we're
basically
done.
We
just
need
to
do
the
rebinding
cleanup.
B
And
then
here
we
look
at
the
claim
ref
on
the
original
pvc,
the
user
created
and
basically
compare
it
to
the
pv
that
pvc
prime
was
bound
to
and
if
they
match,
then
we're
just
waiting
for
the
rebind
controller.
To
do
its
thing.
If
they
don't
match,
then
we're
going
to
build
a
patch
on
the
pv
that
points
that
basically
rebinds
the
pv
back
to
the
original
pvc
that
the
user
created-
and
here
I
use
a
strategic,
merge
patch
to
do
that.
B
And
then
again,
all
if,
if
we're
doing
a
rebind,
we
always
return
because
we
have
to
wait
for
the
bind
controller
to
do
its
work
and
then
again
we
come
in
from
the
top
and
when
we
come
down
through
here,
what
we
would
expect
during
that
iteration
is
for
pvc.expect
volume
name
to
not
be
empty,
because
at
that
point
the
the
pvc
bind
controller
will
have
updated.
B
The
volume
name
of
the
original
pvc
to
point
to
the
pv
that
got
created
and
you'll
come
all
the
way
down
here
and
here's
our
pointer
to
pvc.
So
what
we're
waiting
for
here
is
for
pvc
prime
to
go
into
the
the
claim
lost
phase,
which
is
what
will
happen
to
to
the
pvc
after
we
rebind
the
pv
out
from
under
is
it'll
become
lost,
and
once
that
happens,
that's
our
signal
that
that
the
rebind
is
done
and
it's
safe
for
us
to
do
cleanup.
A: Is it safe to do that if it's bound to the wrong PVC? So here you're trying to correct it... is that safe to do?
B: So here we're comparing the name, the namespace, and the UID of the claimRef that is currently on the PV, and what we're comparing it to is the actual PVC that the user created. Initially it's not going to be those values; it's going to be the name, namespace, and UID of PVC prime, because it was created through the normal process by some CSI controller... some CSI external provisioner created this PV.
B
You
know
as
a
re
at
our
request,
because
we
created
pvc
prime
in
in
our
special
name
space,
so
it
the
normal
thing
happened
and
that
pv
got
correctly
bound
to
pvc
prime.
So
what
we're
doing
here
is
constructing
a
strategic,
merge
patch
that
refers
back
to
the
original
pvc's
namespace
name
uid
and
resource
version.
B
We're
setting
an
annotation
called
populated
from
on
that
on
the
pv.
Just
as
a
reminder
to
ourselves
that
this
pv
has
been
populated
from
this
specific
data
source
name
and
then
it
then
executes
a
json
patch
with
the
strategic,
merge
patch,
and
the
only
thing
that's
in
here
is
the
the
annotations,
the
name
and
the
claim
ref.
So
the
rest
of
the
pv
is
empty
and
will
not
get
overwritten
it'll,
just
overwrite
the
claim
ref
and
and
this
this
works
perfectly
well.
Nothing
prevents
you
from
changing
the
claim
ref
on
a
pv.
B: The PVC... and of course the pod refers to the PVC, and then we go into this wait loop where we're waiting for the pod to get done. In that waiting time, it's presumed that something is actually creating the PV for that PVC, because the pod couldn't have started and run to completion unless that happened.
D
Yeah,
I
I
think
I
understand
that
part
is
the.
It
will
be
automatically
dynamically
allocated.
The
people
are
behaving
right,
but
there's
some.
D
The
the
user
might
want
to
create
a.
E
Tv,
you
know
that
have
some
note
affiliation
specify
the.
D
E
B: That's actually a good point. If waitForFirstConsumer is set to true, I copy the semantics of WaitForFirstConsumer by waiting for this annotation to pop up, but you're right: there are other ways that the system can force a PVC onto a node, and this code needs to replicate those semantics as well.
B: Then the PVC has to follow the pod, and so we replicate that logic here. As I showed, right up here: if waitForFirstConsumer, then we get the node name out of the PVC annotation and we wait for it to not be empty.
B
So
does
that
answer
your
question
that
I
I'm
pretty
sure
the
only
two
ways
a
pvc
can
get
forced
onto
a
node
are
either
through
wait
for
first
consumer
or
by.
B
B
C: And so once that gets set to something, how does the external... or is this modifying the external provisioner? This is not modifying the external provisioner, right? This is...
B: A standalone controller, yeah. Once the node name is set to something, there are two things that happen. One: the pod... where's the line... the pod.spec.nodeName gets set to that value, so the populator pod gets forced onto the node that was chosen for the original PVC; and similarly, we set the annotation on the PVC we create to the same node.
B
Well,
it's
gonna,
it's
gonna
follow
its
same
logic
where
the
the
pvc
prime
will
be
of
the
same
storage
class,
and
so
it
will
also
see
that
pvc
prime
has
wait
for
first
consumer
equals
true
and
what
it
should
do
then,
is
wait
for
this
annotation
to
get
set
the
way
it
always
does,
and
so
because
we
set
it,
it
knows
what
to
do.
It
doesn't
need
to
wait,
because
we
just
at
creation
time
we
fill
in
that
annotation,
and
so
it
immediately
knows
they
can
get
to
work
on
putting
it
on
that
node.
B: It's going to ignore the first PVC. It won't ignore PVC prime, because there is no data source on PVC prime, but because we copy the node-name annotation and the storage class name, that's enough to tell it that WaitForFirstConsumer is also true on PVC prime, and so it had better put it on that node.
B: Okay, yeah. My big worry would be that somehow the scheduler might attempt to muck with this annotation that we're setting.
C: Yeah, I think once it's set, it shouldn't be... it's worth investigating. I want to say it shouldn't, but yeah.
B: ...it would just say: oh, it has a node name, it's a no-op for me. The scheduler only reacts to pods that don't have node names, I think. So yeah, I'm pretty sure this is safe. Oh, and we lost Fong, the guy who asked the question; he seems to have dropped off. I was going to ask if he was satisfied. I guess he is; I hope he is. Any other questions?
C
Now
this
looks
good,
I
think,
moving
forward
in
it
not
necessarily
for
alpha,
but
maybe
for
beta.
It
would
be
interesting
to
think
through
how
we
could
extract
this
out
into
a
sidecar,
and
I
think
one
of
the
benefits
of
that
would
be.
You
know
just
just
the
fact
that
this
is
such
complicated
logic.
I'm
sure.
C: If this is a kind of standalone sidecar container, then it becomes much easier to integrate, versus like: oh, let me go grab a new library, recompile everything, that kind of thing.
B
Yeah,
so
so-
and
we
talked
about
this
before
I-
I
agree
that
having
a
sidecar
is
better
from
a
deployment
perspective,
because
it
means
that
we
can
like
fix
bugs-
and
in
this
case,
without
having
to
recompile
all
the
populators,
which
would
be
nice.
The
the
downside
is
going
to
be
that,
instead
of
just
having
code
that
links
and
has
a
callback
we're
going
to
have
to
have
a
full
grpc
interface
with
a
socket
in
the
middle.
That's.
C
What
I'm
thinking
about,
maybe,
if
we
think
about
what
that
interface
should
look
like,
do
we
need
a
full-blown
grpc
interface
or
could
we
get
away
with
some
like
soft
contract?
That
says
you
know
hey,
I
will
surface
a
you
know
a
a
a
local.
You
know
file
system
directory
at
a
specific
location.
B: Yeah, there are two touch points. At creation time, the specific populator has to tell the current library a bunch of variables, like the image name, various paths, the group kind and group-version-resource information. All of that could be in any format you want.
B
In
fact,
it
wouldn't
be
hard
to
make
this
a
grpc
call,
but
the
problem
is:
is
the
function
callback
where,
for
each
specific
data
source
instance,
the
the
library
or
the
sidecar
is
going
to
have
to
sort
of
ask
the
populator
like
what?
How
do
I
set
up
the
pod,
the
specific
pod
for
this
object?
B
You
know
what
what
what
arguments
do?
I
need
to
set
what
you
know,
what
what
do
I
need
to
and
I'm
I
was
considering
making
this
even
more
fancy
where
we
actually
sent
send
back
the
whole
pod
instance
to
the
populator
and
let
it
manipulate
the
pod
instance
so
that
it
would
have
more
flexibility
for
exactly
how
that
pod
gets
constructed.
B
I
didn't
do
that
in
this
version
of
it,
because
I
was
trying
to
keep
it
simple,
but
but
yeah
we
want
to
make
sure
that
the
individual
populators
have
the
right
amount
of
control
over
how
that
populator
pod
gets
instantiated
for
an
individual
data
source.
I
mean
it's
conceivable
that,
like
it
may
want
to
add
other
pvcs
or
other
things
when
cozy
comes
along.
B
Command
line
arguments
are
enough,
but
if
it
gets
much
more
complicated
than
that
like
doing
it
through
any
file,
system
mechanism
or
grpc
would
both
be
kind
of
nasty,
because
you
want
to
pass
in
the
instance
of
the
object,
which
is
the
data
source
from
the
from
the
populator
library,
to
the
or
from
the
machinery
I
guess
to
the
into
the
populator
and
then
get
back.
You
know
what
what
features
the
pod
needs
to
have
and
yeah
right
now.
B
It's
easy
to
do
because
it's
all
go
lying
and
it's
a
callback
and
I
can
pass
back
and
forth
whatever
I
want.
I
don't
yeah
you're
gonna
have
to.
I
guess,
explain:
assad
what
you
mean
by
the
like
some
sort
of
a
file
system,
because
I'm
just
trying
to
think
that.
C
Yeah,
no,
I
hadn't
thought
through
kind
of
these
other
touch
points.
I
was
really
only
thinking
about
having
to
write
the
data.
C: Then we still have to think about where that would exist, and backwards compatibility, and getting that API right. So yes, maybe for alpha let's hold off on that: do the simplest solution possible, get it out there, and see how viable it is; then a goal for beta would be to come back, revisit, and see if we can make this modular.
B: And to do that is going to require the other work that I was doing before the holidays, which is the populator controller and the registration of populators. That basically lets deployers register the populators with the system, and gives users feedback when they specify a data source that isn't valid; they're told that. If we can just get those pieces in, then the feature gate, the AnyVolumeDataSource feature gate, could go to beta, because we would have appropriate error messages.
B
We
would
have
all
the
other
things
we
would
want
to
have
to
convince.
You
know
the
the
api
guys
that,
like
we
know
what
we're
doing-
and
this
is
eventually
going
to
work
and
then
this
could
be
delivered
as
alpha
while
the
feature
gate
is
in
beta
and
we
can.
B: At some point in time, yes, we could call it beta, and we could iterate on it while it's beta, because it's out of tree. But yeah, I am very focused on getting enough stuff in, as a proof of concept, so that the AnyVolumeDataSource feature gate can move from alpha to beta now.
B: Yes, I know... yeah, I will try to wordsmith that in the KEP and answer any questions, but I'm glad you brought it up. So what I was going to say is: this is good enough for me at this point. It's already pushed to GitHub at the moment, and it's done, and so my next piece of work is to get the KEP in shape and merged. So I'm going to go back into all the PR stuff, and... another...
A: A related question, maybe not necessarily for this one: we have some of those features, right, where we don't really have a feature gate, and then we go alpha and then beta, like the volume health one. I'm not sure how we say it's alpha or beta when we don't really have a feature gate for it.
C
I
think
it
gets
tricky
what
we
did
with
csi
was
when
there
was
not
a
direct
feature
gate
in
the
core
kubernetes
we
updated
the
csi
feature,
documentation
to
say.
Oh
such
and
such
feature,
that's
part
of
csi
is
you
know
alpha
beta
or
ga,
so
there
was
somewhere
where
the
user
can
track
what
the
actual
kind
of
state
was,
and
so
I'm
thinking
populator
is
one
of
those
kind
of
extension
mechanisms
like
csi.
B: Yeah, because once we get the sidecar, or sorry, the feature gate, to beta, and deliver the populator controller and the... what do we call it? I think it's just called the volume populator CRD... once that's delivered...
B
Other
people
could
make
other
implementations
of
volume
populators
that
share
none
of
this
code
and
there's
nothing
nothing
to
prevent
that
from
happening.
Yep.
C
Yep
yeah,
I
think
this
would
be
worth
the
ufa
blog
post
at
the
end
of
this
release.
Just
to
clarify
this
whole
picture.
B
Okay,
so
I
think
we
covered
everything
we
need
to
and
and
yeah
so
so
I'm
gonna,
I'm
gonna,
focus
on
the
cap,
getting
that
in
merged
by
the
kep
deadline
and
then
turning
my
attention
back
to
the
the
pieces
that
actually
need
to
merge
for
the
feature
to
go
beta
and
then,
and
then
this
you
know
there
was
there
was
one
thing
I
wanted
to
mention,
which
is
this:
this
existing
version
of
the
code
has
no
logging.
B
I
didn't
put
any
logging
in
because
I
couldn't
choose
a
logging
library,
so
that's
that's
work
to
be
done,
add
logging
and
make
it
do
something
other
than
panic.
When
there's
an
error,
as
you
can,
I
was
just
looking
at
the
screen
here
you
can
see.
This
is
panicked
if
anything
goes
wrong,
which
is
not
the
best
error
handling
mechanism
so
so
that
that
needs
to
be
shored
up
but
but
yeah.
B: So it's actually usable in some sort of an alpha state right now, and yeah, I'm going to go work on the populator controller and the volume populator CRDs that actually need to ship in a beta form, much like the snapshot controller and the snapshot CRDs needed to ship, so that deployers could actually deploy the snapshot controller and snapshot CRDs with their Kubernetes distros when the snapshot feature went to beta back in 1.17. And you remember that whole thing; it was a debacle, because not everyone actually did that, yeah.
B: Okay, all right, so that's all I have for today. We used up most of the time, so you don't get any time back, sorry. All right.
B: Plan on meeting again next week, and I'll talk about the status of the KEP.