Description
Kubernetes Storage Special-Interest-Group (SIG) Volume Populator Design Meeting - 27 October 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: Helpful, I've never done that. Okay, so hello, and welcome to the volume populators community meeting. This is a weekly meeting that we've started to discuss volume populators, and I guess, as a reminder, it's recorded and posted on YouTube.
B: The plan is to use a controller to notify the user of that. That will rely on the VolumePopulator CRD being installed and an external controller, and it's still TBD whether that controller can live in the external-provisioner repo, where my current prototype exists, or whether it needs its own repo.
B: And it's also an open question whether this controller has any hope of ever moving into the core, including the related CRD.
B: Okay, so yeah, assuming everyone's happy with that part of the design.
B: The other thing that I wanted to bring people up to date on is the implementation of populators themselves. I did a prototype way back at the beginning of the year, and I demonstrated it in the Data Protection Working Group meeting, but we haven't gone over it since then, so I don't know if any of you were there and saw that presentation, or if I should just go over again how the populator works, or how my proposal for populators works, because it's not the cleanest way of doing things. I mean, it does work, but it feels like there's room for improvement.
B
So,
like
I
gotta
remember
where
I
put
this
thing.
B: Okay, so yeah, this is the prototype implementation. I posted it, the NetApp hello populator, and this prototype predates any of the CRD design. So, under the current design, this thing would have an instance of a VolumePopulator CR called hello.
B: So, let's see here, I can just quickly show the CRD for Hello. I actually don't know if I have a sample of this anywhere, but it's a really simple CRD: the spec just has a file name and a file contents. What the populator does is take the contents, which is a string, and write it to the file given here. So you can create one of these Hello objects, give it a file name, give it a content, and then make that the source of your PVC, and what the hello populator will do is give you a PVC that already has that file, with that name and that contents, pre-populated in it.
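For reference, here is a minimal sketch, in client-go types, of the user-facing side just described: a PVC whose dataSource points at a Hello object. The API group hello.example.com, the object names, and the storage class are assumptions for illustration, not the prototype's actual values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	apiGroup := "hello.example.com" // hypothetical API group of the Hello CRD
	storageClass := "standard"      // any provisioner-backed storage class

	// A PVC whose dataSource names a Hello CR; the populator controller keys
	// off this reference, while the external-provisioner ignores the PVC.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "populated-pvc", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "Hello",         // the populator's kind
				Name:     "example-hello", // a Hello CR whose spec carries fileName and fileContents
			},
		},
	}
	fmt.Printf("PVC %s refers to %s/%s\n", pvc.Name, pvc.Spec.DataSource.Kind, pvc.Spec.DataSource.Name)
}
```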
B: I don't know where to start, but this is basically a standard Kubernetes controller; it's based on the sample controller. It has a watch for PVCs, so all it watches is the individual PVCs, and I'm trying to find where that control loop is. Okay, yeah, we have a PVC informer and a PV informer. Oh, that's right!
B
It
has
a
lot
of
informers
because
it
has
to
watch
both
the
for
pvcs
that
the
users
are
creating,
but
then
also
has
to
pay
attention
to
the
pvs
that
are
getting
created
by
the
external
provisioner
and
it
has
to
watch
the
pods
and
it
has
to
watch
for
the
hello
objects
too.
Okay,
sorry,
I'm
jogging
my
own
memory
as
I
go
over
this
code
because
it's
been
like
it's
been
a
while,
but
the
basic
implementation
is
down
here
in
sync,
pvc.
B: I think it checks for storage class, but this will see every PVC that gets added to the system, so it's important that it doesn't react to anything except the kinds of PVCs that it's responsible for, which are ones that have a data source where the kind is set to Hello and the group name is set appropriately too.
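A minimal sketch of that guard at the top of a syncPvc-style handler, with an assumed helper name rather than the prototype's exact code:

```go
package populator

import corev1 "k8s.io/api/core/v1"

// shouldPopulate reports whether a PVC is one this populator is responsible for:
// it must have a dataSource whose kind and API group match the Hello CRD.
func shouldPopulate(pvc *corev1.PersistentVolumeClaim, apiGroup, kind string) bool {
	ds := pvc.Spec.DataSource
	if ds == nil {
		return false // no data source: an ordinary PVC, leave it alone
	}
	if ds.Kind != kind {
		return false // a snapshot, a clone, or some other populator's kind
	}
	if ds.APIGroup == nil || *ds.APIGroup != apiGroup {
		return false // same kind name but a different API group
	}
	return true
}
```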
B: So it has this concept of a populator pod and PVC prime, and it has a separate namespace where it actually does its work, the populating work. It will create another PVC that has all the same details as the original PVC except no data source. So, when the external-provisioner sidecar sees the first PVC, the one that did have a data source, it will ignore it, because it will say "I don't know what a Hello is" and just drop it on the floor.
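A minimal sketch of the PVC-prime construction described here; the naming scheme and the working-namespace parameter are assumptions:

```go
package populator

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makePVCPrime copies the user's PVC into the populator's working namespace,
// keeping the sizing and class details but dropping the dataSource, so the
// external-provisioner treats it as a plain request for an empty volume.
func makePVCPrime(pvc *corev1.PersistentVolumeClaim, workNamespace string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "prime-" + string(pvc.UID), // unique name derived from the original PVC
			Namespace: workNamespace,              // the populator's own namespace
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      pvc.Spec.AccessModes,
			Resources:        pvc.Spec.Resources,
			StorageClassName: pvc.Spec.StorageClassName,
			VolumeMode:       pvc.Spec.VolumeMode,
			// DataSource intentionally omitted.
		},
	}
}
```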
B
This
controller
will
see
it
and
create
another
pvc
that
we
call
pvc
prime
internally
with
no
data
source
and
then
the
when
the
external
provisioner
sidecar
sees
that
second
pvc,
it
will
create
an
actual
empty
pvc,
but
in
the
other
name
space
that
this
thing
is
is
watching
for.
So
after
that
pvc
exists,
it
creates
the
populator
pod
bound
to
pvc,
prime
and
the
the
definition
for
that
pod
I
can
find
it
make
populate
pod
has
an
image
name
that
points
to
the
same.
B: Actually, what is imageName? I'm sorry I'm not more prepared to go over this, but the image is the image of the actual populator, which I believe in this implementation is the same binary that runs the controller. So the binary is like a dual-purpose binary: in one mode it runs the controller, and in the other mode it does the populating, yeah.
B: That's right. Oh, and it has some out-of-date code here, because it has a mode which is "controller", and this is based on my backup/restore design, so it should actually be something else for the mode argument. But yeah, it'll start up a pod with the same image that this controller is using, just with a different mode that says, you know, populate this volume, and then that pod will run to completion. And down here in the sync loop...
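A minimal sketch of such a populator pod; the image, the --mode style flags, and the mount path are assumptions standing in for the prototype's actual arguments:

```go
package populator

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makePopulatePod runs the same image as the controller, but in populate mode,
// with PVC prime mounted so the process can write the file and then exit.
func makePopulatePod(pvcPrime *corev1.PersistentVolumeClaim, image, fileName, fileContents string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "populate-" + pvcPrime.Name,
			Namespace: pvcPrime.Namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run to completion; the controller recreates it on failure
			Containers: []corev1.Container{{
				Name:  "populate",
				Image: image, // dual-purpose binary: controller in one mode, populator in the other
				Args: []string{
					"--mode=populate",
					"--file-name=" + fileName,
					"--file-contents=" + fileContents,
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "target", MountPath: "/mnt"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "target",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: pvcPrime.Name,
					},
				},
			}},
		},
	}
}
```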
B: It's watching for this pod, and if the pod dies for any reason or fails to do its job, this thing will recreate it. It'll keep syncing it until the pod runs to completion, and once the pod runs to completion, it'll presume that the PVC has been populated. At that point it deletes the pod. Scrolling down here, we've got finalizers.
B
This
one
actually
clones,
the
pv,
which
I
think
a
later
experiment
showed
is
not
necessary,
but
this
is
something
we
could
talk
about,
whether
it's
a
good
idea
to
clone
the
pvc,
the
pv
also
or
whether
it
makes
sense
to
rebind
the
pv
but
yeah
the
the
key.
Is
you
have
this
original
pvc
sitting
around?
Nothing
is
you
know,
nothing
is,
is
binding
it,
except
for
this
controller.
B: So until we do anything, it's just going to sit there unbound. We have this other PVC, prime, that got created by a CSI plug-in through the regular external-provisioner process, and then, after that PVC prime gets created and gets populated...
B: We take the PV that it is bound to and we rebind it back to the original PVC, as if it were something we just created, and at that point it's safe to tear down PVC prime and the populator pod and any other objects that we created. In this particular example it actually clones the PV too; I should probably update this example to show the way to just rebind the existing PV.
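A minimal sketch of the rebind step described here, assuming it is done by rewriting the PV's claimRef with client-go types; the exact fields the real controller sets may differ:

```go
package populator

import corev1 "k8s.io/api/core/v1"

// rebindPV points the PV that was provisioned for PVC prime back at the user's
// original PVC. The caller then updates the PV through the API; once the PV
// controller binds it to the original PVC, PVC prime and the populator pod can
// be torn down safely.
func rebindPV(pv *corev1.PersistentVolume, pvc *corev1.PersistentVolumeClaim) {
	pv.Spec.ClaimRef = &corev1.ObjectReference{
		APIVersion: "v1",
		Kind:       "PersistentVolumeClaim",
		Namespace:  pvc.Namespace,
		Name:       pvc.Name,
		UID:        pvc.UID, // match the exact claim so the binding is unambiguous
	}
}
```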
B: But this is the mechanism, and it works fairly well. From an end user's perspective, it looks like the populator actually created the PV for it. The PV was actually created for a different PVC, but after you rebind it there's no trace in the Kubernetes API that it was ever bound to anything else, and it looks like it was originally created for this PVC.
B: So, as we discussed in the data populators working group, or sorry, the Data Protection Working Group, there are some issues around trying to do late binding, or just-in-time PVC binding: if you have a requirement that your pod is going to run on a certain node, and you want the PVC to not be bound until after the pod has been scheduled, so that it can be assigned the same availability zone as the pod, for example.
B: So what would happen in that case is that the populator pod would get scheduled somewhere. Then, when the request arrived at the actual CSI plug-in to create an empty volume, it would see the availability zone of the populator pod and create a PV based on that, and then, when you went to rebind it to the original PVC, if the availability zone didn't match where the user's worker pod was, that would just be too bad.
B: If there is no pod, do we wait? I guess we'd have to come up with a way to match that late-binding semantic, where, if a PVC was set to wait until first use, we would also have to wait until first use to do the population, and then...
B: If we could somehow ensure that the populator pod ran in the same zone, or ideally on the same node... I mean, I'm not really familiar with node-specific provisioning, but if there is a finer-grained concept than availability zone for pod scheduling, then you would want to make sure that the populator pod ran as close as possible to where the final pod was going to run, ideally on the same node. But then you run into other issues around, you know, what...
B: Well, yeah, so I actually mentioned in my KEP that one of the use cases I had in mind when I designed this is something that would take images out of, like, a Glance repository or something like that and shove them into a disk for the virtualization use case. So did you guys solve this a different way?
B: Wait, what do you mean, two separate nodes? You just mean that where the volume ended up was different than where the VM was supposed to be?
D: Yeah, so there would be, you know... the most common case was a VM that had two PVCs that were using local storage, and we would populate them, and because we didn't take any regard to where this thing would finally be running, they might get populated on two different nodes. Oh, and then it just couldn't...
D: So what we did is, because, you know, we have a controller and everything is based off our own virtual machine definitions, from which we create pod definitions ourselves, we created what we call a doppelganger pod. So, when we first see the VM definition...
D: But one thing that we thought about doing that may apply here is, you know, basically you could have a mutating webhook on pods that looks for PVCs that need to be populated, and then your populator becomes an init container.
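A minimal sketch of the mutation such a webhook might perform, with assumed names; the webhook server and AdmissionReview plumbing are omitted:

```go
package webhook

import corev1 "k8s.io/api/core/v1"

// injectPopulateInitContainer prepends an init container that populates the
// named PVC before the workload's own containers start. It assumes the pod
// already declares a volume for that claim under the same name.
func injectPopulateInitContainer(pod *corev1.Pod, claimName, populatorImage string) {
	initContainer := corev1.Container{
		Name:  "populate-" + claimName,
		Image: populatorImage,
		Args:  []string{"--mode=populate", "--pvc=" + claimName},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      claimName, // the pod volume backed by the PVC to populate
			MountPath: "/mnt",
		}},
	}
	pod.Spec.InitContainers = append([]corev1.Container{initContainer}, pod.Spec.InitContainers...)
}
```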
E: As far as I know, when we use late binding, we are basically waiting for a certain annotation to show up on the PVC, and that annotation will actually have the node name where the pod is going to be scheduled, or where it's already scheduled, right? Is my understanding correct?
B: You know, I haven't looked at the implementation of how WaitForFirstConsumer works; I've used it as an end user, where you can set it on the PVC and then attach a pod, and sure enough it matches the availability zone. But I don't know how it works under the covers, so yeah.
B: So let me see if I understand. You're saying that there's a controller that's watching for PVCs that are WaitForFirstConsumer, and then it waits for a pod to refer to that PVC, and then it waits for that pod to get scheduled, and once the pod is scheduled, it figures out the node that it got scheduled to and copies it back to the PVC, and then the provisioner that was ignoring it because it was WaitForFirstConsumer at that point says okay.
B
But
so
I'm
just
I'm
just
trying
to
think
this
through
now.
So
so
let's
say
you
do
all
that
and
then
like
right
after
right
after
the
the
pve
actually
gets
created
like
that
pod
gets
killed
and
then
gets
recreated
somewhere
else
like
at
that
point.
You
just
have
a
a
situation.
That's
not
going
to
work
right
like
it
well,.
B: Yeah, on the node where the new pod ended up, which I guess is sort of just an unavoidable problem, right? Okay, so that means that we could play the same game here, where we could be watching for PVCs that had WaitForFirstConsumer set to true, and then we could create the PVC prime similarly, with... oh, no, okay...
B: What you'd want to do is not do anything; you would not want to create PVC prime until the original PVC had that annotation indicating the node where the pod that wanted to consume it had gone. And then at that point you could just create a populator pod and force it to the same node, right? Yeah, and then, I guess, you could also create PVC prime with WaitForFirstConsumer.
B: No, the only trick you'd have to play is you'd have to enforce the node name for that populator pod, and then also, when you copied over the WaitForFirstConsumer... so you need to do two things. You need to not create the PVC prime until you knew what node the original PVC was supposed to be on. Well, I...
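A minimal sketch of those two gates, assuming the scheduler's volume.kubernetes.io/selected-node annotation is how the chosen node shows up on a WaitForFirstConsumer PVC (that is the annotation the external-provisioner keys off of); the helper names are assumptions:

```go
package populator

import corev1 "k8s.io/api/core/v1"

const selectedNodeAnnotation = "volume.kubernetes.io/selected-node"

// nodeForPVC returns the node the consuming pod was scheduled to, or "" while
// binding is still delayed. Until it is non-empty, the controller should not
// create PVC prime or the populator pod.
func nodeForPVC(pvc *corev1.PersistentVolumeClaim) string {
	return pvc.Annotations[selectedNodeAnnotation]
}

// pinPopulatorPod forces the populator pod onto that node so the empty volume
// gets provisioned with a topology that matches the user's pod.
func pinPopulatorPod(pod *corev1.Pod, node string) {
	pod.Spec.NodeName = node
}
```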
C: I think that makes sense. The only thing I would caution is that, instead of always just running the populator on the same node that WaitForFirstConsumer selects, we can look at the topology constraints applied, if any, by the... sorry, by the scheduler.
E: Maybe you guys understand it better than I do; I'm trying to think about it. We have, like, two use cases, right? I think that's what Saad mentioned. Okay, so we have local storage, which can be attached only to, like... so we really must basically schedule the populator pod on the same node, because there is no other way, because it's local storage and it's only available on that node, right?
B: I think the way Kubernetes works is that, while the CSI layer will never be more specific than region and zone, the way that it finds out the region and zone is that it waits for a specific node to get chosen, and then it finds what region and zone that node is in and sends those over to the provisioner, or to the CSI driver.
B: It always starts from a specific node, but then, of course, CSI doesn't understand nodes; it just understands these key-value pairs. So it takes the key-value pairs for that node and sends them across, and the key-value pairs would be specific to, I guess, the Kubernetes deployment, so, like, GKE is always going to give you region and zone, but a different...
E
My
concern
here
is
like
if
we
take
gke
right,
I'm
thinking
about
two
cases,
so
we
have
this
pd
notion
of
a
persistent
disk
and
basically
the
same
persistent.
This
can
be
bound
like
it's.
A
zonal
resource
or
regional
resource
depends,
but
it
can
be
bound
to
any
node,
or
instance
in
in
the
same
region
or
zone
right,
but
also
you
can
imagine
that
we
can
have
local
disks
also
in
gke,
and
this
stuff
just
goes
with
the
node
right
it
doesn't
it
even
it.
You
can't
move
it
across
even
in
the
same
zone.
C: Yeah, today we don't have a local disk CSI driver. It's a built-in in-tree driver that...
B: That said, there's nothing that... I mean, go ahead. Does it have special understanding of, like, what node the pods are on and what nodes the disks are on? Yeah, the scheduler is aware that local PV is a special type and it handles it specially. Okay, so if we tried to make that into a CSI driver, you would have problems, because... right.
C: Based on that mechanism, then, what we could do is: for the node that's selected for WaitForFirstConsumer, grab the labels for that CSI driver off that node, apply those labels to your pod, and then you've effectively got your constraints. If it happens to be node-level, great; if it's wider than that, even better.
B: But let me think this through. So you're saying that the behavior of the controller is that nothing happens until the pod picks a node, and then the name of the node gets copied back to the PVC annotation. At that point, the external-provisioner controller goes and looks at that node and gets its topology information, which might be node-specific or might just be region and zone, and passes it down. But the piece of it that the populator would need would be...
C: The thing is, it doesn't necessarily need to say go to that node. All it needs to do is pull the topology labels off that node, and presumably those topology labels will, you know, if it is a node-level constraint, include node information, and if it is wider than that, include only zone-level information.
C: Basically, we imitate the behavior of the external provisioner, because the external provisioner is in the exact same position, right? All it's getting from the scheduler is this node, and then it has to take that and convert it into a set of topology constraints to pass to CreateVolume.
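A minimal sketch of that softer variant: rather than hard-pinning the populator pod to the selected node, copy the node's topology labels into the pod's nodeSelector, mirroring how the external-provisioner derives topology for CreateVolume. The well-known zone and region keys below are stand-ins; a real implementation would use the CSI driver's own topology keys:

```go
package populator

import corev1 "k8s.io/api/core/v1"

// topologyNodeSelector extracts the topology labels from the node the scheduler
// picked; assigning the result to pod.Spec.NodeSelector constrains the populator
// pod to the same zone and region without requiring the exact node.
func topologyNodeSelector(node *corev1.Node) map[string]string {
	selector := map[string]string{}
	for _, key := range []string{
		"topology.kubernetes.io/zone",
		"topology.kubernetes.io/region",
	} {
		if value, ok := node.Labels[key]; ok {
			selector[key] = value
		}
	}
	return selector
}
```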
B: Okay, and regions are on there, yep. Okay, well, so yeah, I don't have any experience actually doing topology-based scheduling of pods, because I've only looked at this from the volume side, but that's something I need to look at. So yeah, I think we could play the same trick where, if it's WaitForFirstConsumer, we would just follow the same kind of process the external provisioner does: wait for there to be a node, grab...
C: I think that sounds right, but yeah, double-check the logic there. I think it should work, but it's worth double-checking.
B: It can't be that hard, but yeah, it'll be interesting to test it in a scenario where we have multiple nodes in different zones, and also maybe to construct a scenario where there's per-node topology information, like node-specific topology information, that would force it onto exactly the same node, and see if we get that behavior too.
E: Tolerations... can we do some sort of pod template, kind of expose it, so users can actually modify the pod templates for our populators? Just a crazy idea, and, well...
E: Okay, so let's say, maybe, yeah, user segments, like users in broad scopes, so administrators in this case: can we give them access to the actual populator pod template, so they can add whatever labels and annotations, if possible, so it's going to have kind of the right set of properties? So it's also going to be, like, yeah...
B: So, for a specific populator, you could add command-line args that say, you know, add these tolerations to every populator pod, and then it could just do that, and that could be something that you configure at installation time, if that's the right way to deal with something like that. But I guess what's making me nervous is that we're designing a lot of very specific mechanisms for how populators can mirror the functioning of, like, a normal provisioning workflow and still respect all the rules that users will expect it to, but all of this code has to go into the populator implementation itself, and the whole point of this project is that we want to have lots of populators, not just one. So the thing that's making me nervous now is that there's going to be all this code that has to be coded just so, to make things work right, and then everyone's going to have to do the same thing, and so we really need a way to share the portion of the code that does all the stuff we just talked about: the creation of populator pods, the rebinding, the respecting of WaitForFirstConsumer.
B: It almost feels like you want to make this a sidecar that then runs alongside the thing that does the actual work, which is something that will, you know, populate volumes, and all this is just sort of machinery that's necessary to take a blank PVC and make it show up in the right place, and then swap in the PV to the place where you would expect it to have been.
B: But I'm nervous, because the other thing I wanted to talk about related to this discussion is all the different flavors of how populators could actually do the real work of populating.
B: So, for now, we've been talking about how they interact with Kubernetes and how they would get the objects created the right way and end up in the right place and bound to the right things. But as far as how you actually get the data into the volume, one way of doing it is just to create a pod, like the hello populator does. But I'd really like to talk about versions of these that have multiple implementations, or, most interesting to me...
A: Yeah, I'm just saying, can we have this more like... like we have this CSI driver for hostpath, right? So maybe this would be something like that. Otherwise, how can you make sure this works for every driver? Because our driver does not need to have a pod; we do move data, but we don't need to do it this way, right. So I'm just saying there will probably be different implementations, like, well...
B: Yeah, let me state another way what I think you're saying, and then you can tell me if this is correct or not. So what we could do is take the hello populator and make all the fixes that we just described, you know, to do things correctly and to respect WaitForFirstConsumer, etc., but then split it into two things.
B: However they wanted, depending on, you know, the CRD that they were using as the data source. But if we could somehow split the implementation into two things, where the first part was reusable and everyone could just share that, and the second part was pluggable, then that would kind of get at what you're talking about, Shing, except that there are versions of this where you don't actually want a pod to do the population, where you want something else to do the population.
B: Well, yeah, and in particular I would like to reimagine: what if we had done this work before we had done snapshots, and then snapshots, or the snapshot restore workflow at least, was just an implementation of a data populator? How would that look? Because...
C: Can you talk a little bit more about what that use case is, where you don't need to use a pod?
A: Oh, this is just currently our plug-in, larabee; the plug-in actually does not really use a pod, because the protocol that we currently have goes through the network. So we just need to have this connection; we don't really have to create a pod to attach and then copy data.
A: In our case, this could change even in the long term, because I think in the VM case it would actually require an attach or something, but right now I think we couldn't do it that way, which doesn't mean that in the future we cannot. So I'm just saying we may still use this pod in the future, when we don't have this restriction or something, but right now we actually don't use this; we don't have to use this pod, too.
B: Yeah, so if we put our backup/restore hats on for a minute and imagine how this would integrate with a restore workflow in the future, I can imagine a version of backup and restore where you have these CRDs that represent backups.
B: Something that just knows how to access the backup wherever it is and write it to a PVC that the pod is attached to, and that's how restore works. Or, you know, if the CSI driver that is responsible for that PVC itself knows a better way of restoring it, one that's more efficient than just having some pod copy the data, then the external-provisioner sidecar could figure that out somehow (the how is TBD) and then just go through the regular CSI "restore this backup for me" kind of workflow, the same way we do with snapshots or clones, where you basically tell the CSI driver, give me a volume that has the data in it already, and then it can do that.
B: The external provisioner would figure out which path you're going to go down: am I going to go down the path where the CSI plug-in does it, or the path where the populator does it? I have a prototype of how you can make that decision, but then, after you make that decision, you either have to hand it all over to the CSI plugin and say, do that, or have the external provisioner sort of step back and let the populator do its workflow, but...
A: Yeah, that's different. That's like what I'm talking about: we need to have a set of backup APIs, well-defined backup APIs, right? Yeah.
B: So I'm happy to spend next week's meeting going into the details of that, because, again, that was my original prototype from last year, the one that sort of drove all of this: I wanted to have a backup CRD, I wanted to be able to drive it through CSI, but I wanted to have a non-CSI path, and so out of that came the proposal for data populators.
E: One question. Maybe you mentioned that you had a use case where you didn't need this external, like, populator pod. I want to just dig into this to understand how that would work. So I just imagine we haven't...
C: It handles it without getting any Kubernetes pods involved.
A: There, yeah, created that one. Then we just basically read the data, download the data, and then overwrite the page.
E: I see. So basically the generic logic here would be: provision a blank volume (correct) in the region and zone corresponding to the target pod, if it's late binding, and then at this point we can have different use cases, right? One...
A: But the one problem for my approach right now, without this thing Ben is working on, is that I cannot really support WaitForFirstConsumer yet, because the moment I have the PVC created and bound, you know, I can't... so basically, meaning I cannot create a pod first. So in my case, right...
B: It very well may be. But just to answer Alexi's earlier question: I did a presentation of my version of the backup design way back in, like, the first or second of the Data Protection Working Group meetings, I think. Do you remember that?
B: I want to look at that backwards, which is to say, from the perspective of how we implement populators, I don't want to think about only backups. I want to think about, for any kind of populator, where you might want to have multiple ways of doing the population, whatever that is, whether it's back...
C: Yeah, this is such a complicated problem that kind of leaving it up to everyone to figure it out on their own is going to mean most people are not going to adopt it, or, if they do, there are going to be lots of bugs. So if we can follow the model that we did with CSI, where we said, okay, all the complicated logic goes into these standard sidecars...
C: Having kind of a well-worn path that people can follow and say: here are the sidecars, don't worry about how they work, we've taken care of designing and building those for you, just create the small little piece that knows how to copy data or whatever. That would be the goal here, I think, so that we can simplify the experience of developing these things.
E: I like Saad's idea about having this pod use case as kind of our first one; then we can think about the other use cases. Or maybe we can think now: one of the thoughts I have is that the populator itself can actually be any sort of logic, and maybe you don't even need to attach a volume to this populator. You say that it's just a populator and you don't need to attach a volume to it, right, to the pod, and maybe that way you can just implement whatever third-party copying logic.
C: One more crazy idea I want to throw out before we end, so people can think it over before our next meeting. I completely recognize all these issues that you've mentioned, Ben, and I agree there are potentially other features we may break in the future.
B: Okay, yeah. So we're out of time. Thank you, guys. I have some ideas for an agenda for next week, and I will try to dump the notes from my brain into the agenda doc this afternoon while they're still fresh. Talk to you guys next week.