From YouTube: SIG - Storage 2023-04-24
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
D
Okay, so we in MTV wanted to show you our usage and plans regarding the volume populators and what we already implemented. I guess I'll start with a small background.
D
We saw recent improvements in CDI around image I/O and the usage of the volume populators feature, which basically enable us to use it.
D
So basically, we ended up duplicating some of the logic in CDI in terms of support for volume populators, and CDI would enable us to get rid of the duplication and to use volume populators more extensively.
F
Hey, can you hear me? Can somebody confirm? Yep, okay, cool. So let's start with VMware; that's sort of a simple one, because we don't use populators there yet, although we pretty much would like to. Anyway, for the warm migrations and migrations to a remote cluster, we still rely on CDI and on their importers. For the warm migration we still need the multi-stage import with CDI, and the VM creation. In the recent MTV version we changed things a little bit.
F
For monitoring, we use the same technique that CDI does: we have a Prometheus metrics endpoint that stores the progress, and the controller connects to it locally to grab the metrics regularly. But for a remote cluster this obviously doesn't work. We also looked at the approach of storing the progress in some CRD, but there's a problem that we need a proper service account to do that, which we don't have, and it's difficult to manage from the forklift controller remotely.
F
But
for
the
code
migrations
to
local
cluster,
we
change
the
flow
and
we
got
rid
of
the
CDI
importer
Wheels
only
blank
DVR
TV
to
provision
the
series
and
then
we
start
the
conversion
spot,
which
runs
the
v2b
and
we'll
add
with
v2b
to
do
the
actual
conversion
and
population
of
the
data
on
the
volumes.
F
But what to do now with all that is a little bit uncertain. As I said, we would like to use the populators, but that's a tricky thing, because from the Kubernetes perspective and their design, they should be used only to populate single PVs. But to import virtual machines, we often need more than one disk, more than one PV, and to populate them together, not separately.
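The one-PVC-per-populator coupling described here comes from how the upstream mechanism is wired: each PVC's dataSourceRef names exactly one populator CR, so a multi-disk VM needs one CR and one PVC per disk, with no way to group them. A minimal sketch, using a hypothetical CR kind and API group:

```yaml
# Hypothetical populator CR: one per disk.
apiVersion: example.io/v1beta1        # illustrative API group
kind: VmDiskPopulator                 # illustrative kind
metadata:
  name: vm1-disk-0
spec:
  sourceDisk: "vm1/disk-0"            # illustrative source reference
---
# The PVC binds to exactly one populator CR via dataSourceRef
# (standard Kubernetes AnyVolumeDataSource mechanism).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm1-disk-0
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
  dataSourceRef:
    apiGroup: example.io
    kind: VmDiskPopulator
    name: vm1-disk-0
```

A second disk requires repeating both objects; nothing in the API ties the two populations together, which is the constraint being discussed.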
F
This was opened as an issue on the volume populator library. We got some ideas on how to approach this, but none of them seems feasible or usable, really, in our case. So at the moment we're a little bit stuck, because if we wanted to use the volume populators, we would need to rewrite the controller from scratch, which seems like a lot of work, but is possibly doable.
F
Yes, as I mentioned, having the populator with a proper service account would be useful not only for the progress monitoring, let's say, but it would also enable us to let the populator create the VM configuration records after the migrations, so it wouldn't depend on the forklift controller. The reason to do that is that during the conversion, the populator pod knows what the actual final VM configuration is, and it would make things a little bit easier.
F
Without that, we can't really change anything or approach the remote cluster scenario feasibly, and it would end up having the same fate as the warm migration that I'm just going to talk about in a second, because we don't have any reasonable way to manage things on the remote cluster from forklift. And for warm migration, there hasn't been any real change in the latest MTV version in the conversion pod that we use for the migration.
G
A few questions. I'm glad you linked to that issue that I opened ages ago. I see the answers there, and I don't really understand what they even mean. So yeah, it just seems a fundamental problem with volume populators that they only work with a single disk, and I don't even know how to approach a solution to that.
G
Of course, if you're in the common case where you only have a single disk to migrate, which is actually pretty common, then it's fine. So you might just say that we're not going to support multi-disk VMs.
G
On warm migration: how does it work at the moment? Are we taking snapshots on VMware and doing all that stuff? Yes?
F
Exactly. We're taking regular snapshots from VMware that we grab and transfer to the local storage, and when we decide that it's the cutover point, we stop the VM on VMware and run the in-place conversion. Well, first of all, we first get the last set of changes from the source, and then we run virt-v2v in this in-place conversion.
G
Okay, yeah, I don't have anything really to say about that, but for warm migration we thought that we would add a sort of in-place mode to virt-v2v to do this. I don't think it's a particularly great idea, but we don't really have any time to develop anything else. So I'd be kind of interested to know — we had that question last week about how popular warm migration is and how important it is for customers.
G
You
know:
can
we
go
and
ask
POS,
you
know
where
the
customers
really
are
desperate
for
all
migration
or
not,
and
that
kind
of
those
sort
of
questions.
Now,
if
it
turns
out
that
customers
are
really
really
desperate
for
all
migration,
then
we
can
go
and
talk
to
Claus
about
re-prioritizing
things,
but
I
think
I
would
like
to
have
some
actual
independent
feedback
on
that
question.
First,.
F
And,
and
for
the
old
migration
General
scenario,
I
know
we
were
kind
of
looking
at
the
volume
populator
use
of
the
volume
operator
yourself
that
you've
managed
to
get
to
create
some
something
or.
G
I mean, maybe that's something you could also find out from POS: whether multi-disk configurations — obviously it's a problem if you've got multi-disk configurations that you'd never be able to migrate, if we just decided to abandon them, but if they're incredibly rare, then maybe that's not a problem. I don't know.
F
And
well
I'm
not
sure
if
they
don't
that
they
are
really
rare
from
what
I
recall
from
rest.
We've
got
a
bunch
of
cases
or
bugs
that
that
where
people
were
actually
struggling
with
some
multi-disc
conversions
that
we
had
to
fix
solve
so
I'm
I
would
on
real
estate
itself.
Of
course,
I
have
don't
happen
in
numbers,
but
I
wouldn't
really
say
it's
rare.
G
Yeah, one thing I said on that issue that you linked to is that it could work that we would create a volume that was actually a filesystem volume, not a block device volume, and then obviously you can just create one qcow2 file per disk. But I don't believe that KubeVirt has any way to boot that, so you'd end up having to do a second copy afterwards, which is obviously less than ideal.
H
Yeah, about this part: back then we treated the volume populator as a black box, right? We used it as-is on the Kubernetes side, and we came to the conclusion that we need to modify it. So we have a modified version of the controller, and as far as I understand, the CDI guys also plan to have a modified version on their side. And if we are going to modify the controller, then we have —
G
I mean, that would be great. I can't believe we're the only people that have this problem with volume populators; it seems like such an obvious problem with them. Surely other people have looked at this, have the problem, and would want a solution as well. I guess, I don't know.
B
I guess I'm just wondering what the next steps are then, and how we can better collaborate on getting this done.
G
Well, I would definitely like to hear from POS on how common warm migration is, how many customers have asked for it, whether there are customers who, for example, are blocked because it takes too long to do a cold migration — all those sorts of questions — and we'd need to feed all that back to Klaus. So that would definitely be a next step from my point of view.
B
Okay,
and-
and
the
other
thing
I
wanted
to
ask
about-
is
apparently
data
volumes.
Can
it's
confusing,
but
you
can
they
have?
They
allow
a
populate,
an
external
populator
as
a
source
and
I
recall
that
some
mention
of
you
guys
were
like
duplicating
some
CDI
functionality
like
regarding
storage
profiles
and
file
system,
overhead,
stuff
and
I
was
wondering
if
you've
looked
at.
It's
like
you've
written
your
own
populator
I.
B
Think
for
rev
migrations,
I'm
wondering
if
it
if
it
made
if
you've
looked
into
and
if
it
makes
sense
for
you
to
still
use
data
volumes
with
that
and
give
it
your
populator
as
the
source,
and
that
way
you
wouldn't
have
to
duplicate
the
storage
profile.
Look
up
stuff
like
that.
H
Yes, it's mentioned later in the presentation that this is one of our goals. I mean, we started something like half a year ago working on the populators, and back then the support that you mentioned didn't exist in CDI; as was mentioned before, it was the blank volumes. So yes, definitely we want to leverage what you did and we want to get rid of the duplication. That's one of the goals.
D
As for OpenStack, it downloads a single image using gophercloud, and this is an example of the OpenStack volume populator CRD, with the identity URL, the image ID, and the secret.
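A sketch of what such a CR might look like, reconstructed from the description above (field names and values are illustrative; check the forklift CRD for the exact schema):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: OpenstackVolumePopulator
metadata:
  name: openstack-image-populator
spec:
  # Keystone identity endpoint of the source OpenStack cloud (illustrative URL)
  identityUrl: "https://keystone.example.com:5000/v3"
  # Glance image to transfer into the PVC (placeholder ID)
  imageId: "<glance-image-id>"
  # Secret holding the OpenStack credentials (illustrative name)
  secretName: "openstack-credentials"
```

A PVC would then reference this CR through its dataSourceRef, and the populator pod streams the Glance image into the provisioned volume.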
D
As
you
can
see,
in
the
status
of
both,
we
basically
posting
the
progress
of
the
this
transfer
using
the
permitters
metrics.
I
And in forklift we create two instances of it: one to watch volume populator PVCs and one to watch OpenStack volume populator resources, and accordingly we run the relevant image for each one. The OpenStack one uses gophercloud, and in terms of progress reporting it's pretty much similar to the CDI approach.
I
Well, we update it via a metrics endpoint. In terms of usage of CDI in MTV 2.4, which is the first version where we use volume populators: it is used in warm migration for RHV and VMware; it is used in migration to a remote cluster, because there we still can't use the volume populators; and in cold migration from RHV to local OCP clusters.
I
Going forward, the bottom line is that we want to use CDI and use DataVolumes with our volume populators, for many reasons that were already mentioned: removing the duplicated logic for reading the storage profile and calculating the filesystem overhead, as well as enabling migrations to remote clusters by having CDI install our CRDs and run the controller that will watch them.
I
Another
thing
this
is
more
more
future.
We
still
haven't
looked
into
it
too
much,
but
another
option
is
to
improve
all
migration
for
whatever,
by
utilizing
a
fairly
new
feature
that
was
added,
which
is
the
incremental
backup,
specifically
hybrid,
which
which
would
work
with
both
called
and
wall
migrations.
J
Sorry, excuse me, if I could just jump in on that previous slide. Something that came to mind for me is the recent virtual machine export capability that's been added to KubeVirt and CDI. It's going to be important for us to establish what the correct line between use cases is for those different features: we're not trying to implement MTV in KubeVirt or CDI, but we are trying to provide some native, low-level —
J
You know, basically built-ins that allow you to get data out of the cluster. So I just wanted to point that out and make sure you guys were aware of what's going on there, and we should definitely make sure that we continue to talk to get this right, so that we're not stepping on toes and duplicating effort, if that makes sense. Yeah, definitely.
J
Yeah
and
we
honestly,
we
weren't
really
sure
what
was
going
on
with
the
you
know,
the
you
know:
Cube
verticubert
migration
path,
and
we
want
to
get
that
right
with
you
guys
so
anyway,
that's
all
I
had
on
that.
On
that
point,.
G
I was going to ask about that thing you just talked about: what's the metadata format that you're using? And I really hope it's not OVF.
H
That reminds me of the export to OVA and external providers. So yes, that was also a question that came to my mind — you mentioned the export; what does the exported VM look like?
J
So
basically,
what
the
export
is
doing
is
implementing
I
did
a
POC
quite
a
while
ago
to
show
that
you
can
export
a
virtual
machine
disk
by
just
simply
starting
a
pod
that
has
a
web
server
in
it.
That's
connected
to
the
PVC
and
basically
is
willing
to
serve
that
up
in
various
formats.
J
You create a VirtualMachineExport; the target can be a PVC, a VM, or a VM snapshot, and it will collect the resources associated with that, spawn a pod, attach the PVCs, and have everything kind of just there. Then it just lives as an application on that cluster that's serving this up, and we have some certificates and things that are making sure the data is protected. That's all it really does today.
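The object described here is minimal: a source reference, from which the controller spawns the serving pod. A sketch (names are illustrative; the API version may differ by KubeVirt release):

```yaml
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: export-my-vm
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine       # can also be VirtualMachineSnapshot,
    name: my-vm                # or a PersistentVolumeClaim (core apiGroup)
```

Once ready, the object's status exposes the URLs and certificate that the export pod serves the disk data through.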
J
We have an idea of some things we could do on the receiving side, because there are actually people I'm aware of that are trying to move VMs between clusters already, and so we had —
J
This idea of: you could create a virtual machine import object on the destination cluster that basically uses the export on the other side. But this is starting to get into that territory where I'm seeing the overlap — it doesn't make sense to do this twice. Right now it's low-level building blocks, and we wanted to do that on purpose so that we don't over-engineer something and we have a chance to see how people use it, that sort of stuff.
G
It's literally a dump of internal structures inside VMware's old database. The good thing about OVA, I think, is this idea of encapsulating multiple disks in an uncompressed tar: it gives you a single file which corresponds to multiple disks, and —
G
There's a variation where you can have multiple VMs in a single file, which is useful just to copy around. And using an uncompressed tar means that you're actually able to access the disks without having to expand the tar, because you can just look for offsets within it. And on exporting the metadata — actually, I —
G
Think
your
your
choice
of
using
yaml
is
a
pretty
good
one
really
I
mean
it
obviously
means
they
will
only
be
applicable
to
other
kubert
instances,
but
you
know
for
what
you're
talking
about.
That
seems
fine
and
it's
obviously
it's
a
lot
better
than
ovf,
so
I
would
I
would
say
that
I
think
the
lesson
there
is
that
having
an
OVA
that
perhaps
contains
the
yaml
and
the
discs,
and
it's
not
compressed
and
sort
of
follows
some
of
that
thinking
behind
OVA
without
the
ovf
component
might.
J
Be a good idea, yeah. And I really appreciate that context; probably we should have some follow-up conversations. We have mulled around the idea of what the canonical virtual machine storage format for KubeVirt is. Right now there is no storage format; it's just the raw elements, and you can do what you want. So we thought that somebody might actually create an external library that knows how to pull down all these things and assemble them. There's another consideration for KubeVirt, where the standard interchange in a Kubernetes cluster is a container registry.
J
So we would want a format that would be pushable in a container format — like, say, a containerDisk that maybe contains all this stuff in a unit. And containerDisks — you know, the OCI container format is a tar file as well.
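Today's containerDisk is exactly that: an OCI image carrying a single disk image under /disk/. A minimal sketch of how one is conventionally built (the file name is illustrative):

```dockerfile
# Conventional KubeVirt containerDisk layout: one disk image under /disk/.
# UID 107 is the qemu user inside the virt-launcher pod.
FROM scratch
ADD --chown=107:107 disk.qcow2 /disk/
```

The extensions being discussed would add VM metadata and multiple disk images to this same tar-based layout.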
J
So
we
kind
of
had
had
thought
about
about
this
kind
of
the
jury's
out
still
on
on
what
to
do
and
we're
trying
to
take
real,
slow
steps
to
make
sure
that
we
don't
make
any
dumb
mistakes
on
a
format
like
this,
but
I
definitely
see
there
being
a
use
for
it.
There's
challenges
like,
for
example,
layer,
size
limitations
and
a
container
registry
which
are
often
much
lower
than
a
typical
installed
disk
image
size,
so
yeah
I'm.
G
Yeah, it's interesting you mentioned containerDisk — that was the main problem with containerDisks: they didn't define the metadata as well; it was just a container for a single disk image. And you do need that metadata; you need to know just basic stuff, like what's the ideal number of vCPUs for this VM.
J
And
so
there's
a
few
things
like
Cube
Verde
has
come
up
with
this
instance.
Types
and
preferences
set
up
now,
where
you
can
refer
to
these
more
in,
like
a
cloud
native
type
of
format.
You
know
where
this
is
a
small.
You
know
Windows
VM
or
something,
and
then
those
have
meaning
I.
J
Don't
know
like
that's
another
thing
that
you
know
fits
in
here,
because
you
could
get
away
with
maybe
less
metadata,
but
anyway,
yeah
Michael
is
mentioning
this
persistent
container
disks
below,
and
this
is
kind
of
a
related
topic,
because
we
are
considering
what
some
extensions
might
be
that
actually
put
the
the
metadata
in
there
and
also
enable
a
container
disk
to
store
multiple
disk
images
and
does
that
make
sense?
How
does
that
square
with
some
of
the
other
ways
we've
been
using
them
so
yeah?
It's
some
interesting
topics.
There.
J
I guess I'll take over — I'm usually the moderator, but I didn't attend right on time today. So I'll take over here and bring up the next topic, which is Alex's data import cron GitHub issues.
K
Yep — you can still see the screen, by the way? Yeah.
K
Sure, and I see it. So yeah, we have a couple of DataImportCron GitHub issues; I just wanted to bring them up here so we can make some action items out of them.
K
The first one is pretty simple: when you just go ahead and create a DataImportCron with a URL that contains no tag, just a sha, it gets expanded in the wrong way — it gets a duplicated sha — and you can imagine that leads to import failures all over the place.
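For reference, the object under discussion is shaped roughly like this; a digest-only registry URL (no tag) is the form that currently gets mis-expanded. The image name, digest, and namespace are placeholders; field names follow the CDI v1beta1 API:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: ubuntu-image-cron
spec:
  # Poll twice a day for a new image
  schedule: "0 */12 * * *"
  # DataSource that the cron keeps pointed at the latest import
  managedDataSource: ubuntu
  # Garbage-collect outdated imports, keeping the last two PVCs
  garbageCollect: Outdated
  importsToKeep: 2
  template:
    spec:
      source:
        registry:
          # Digest-only URL, no tag — the problematic case
          url: "docker://quay.io/example/ubuntu@sha256:<digest>"
      storage:
        resources:
          requests:
            storage: 10Gi
```

The same `importsToKeep` field also covers the garbage-collection behavior discussed later in this session.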
K
So
we
could
fix
that.
But
first
I
I
wanted
to
know
about
the
use
case.
Can
you
pull
from
this
type
of
URL?
Can.
E
You can pull it, rather. I'm after this issue, and I can say about our use case: we are shipping KubeVirt as part of our distribution, with all the images which you can use for creating the VMs — for example, Ubuntu, CentOS and stuff like that. All our images have this digest generated, and it can be updated when a new release is coming for our platform. So we would like to manage that DataImportCron and update those digests manually, instead of having them checked automatically.
E
Yeah, exactly. I found that the DataImportCron spec is immutable, so we also have to remove and create it again, but after all, it works with no problems.
J
Sure
yeah,
there's
I,
know
that
there
have
been
other
suggestions
of
use
cases
for
having
the
spec
mutable.
Our
non
would
be
familiar
with
a
few
of
those,
for
example,
if
you're
aware
that
the
updated
image
requires
more
storage
space,
you
may
want
to
update
the
storage
spec
to
a
larger
size,
for
example,
or
if
you
wanted
to
change
the
storage
class,
where
the
you
know
where
the
images
are
importing
into
so
I,
don't
know
what
we
decided
on
on
those
topics
or
if
there
was
a
discussion
around
that.
E
A
E
J
E
Yeah,
it
is
but
the
problem
is:
we
have
some
registry
with
all
these
images
and
we
have
to
somehow
update
this.
Those
images
like
the
new
release
is
coming.
We
update
all
the
data
input
crones
to
update
the
images
itself.
If
we
would
provide
the
URL
for
images
to
our
users,
I'm
not
sure
it
might
be
pretty
confusing,
for
them
could.
J
You
put
this
I
think
what
we're
wondering
is:
can
you
put
this
stuff
in
the
the
actual
data
source
instead
of
then,
the
data
import,
Crown
spec.
L
Yeah, that's not the way the DataImportCron was designed to be used. You're pointing at a specific sha, which is trying to reinvent the wheel, because we are trying to poll for the latest one. Okay? The latest image — you shouldn't tell us what the latest image is; we are doing the job for you. Okay?
J
So
we
don't
I
so
I!
Suppose
if
you
wanted
to
manually
import
them,
you
could
actually
essentially
just
not
use
data
import
crons
and
when
you're
aware
of
the
new,
then
you
just
create
a
new
data
source
pointing
to
that
shot
and
then
a
data
volume
that
imports
from
that
data
source
and
then
or
you
could
edit
the
old
data
source
to
point
to
have
the
new
sha
reference,
I.
J
Guess
right
so
they're,
I
guess:
if
you're
wanting
to
do
a
manual
operation,
it
may
make
sense
to
just
illustrate
the
whole
piece
rather
than
kind
of
overriding
the
I'm.
Just
trying
to
summarize
the
discussion
that
I'm
I'm
hearing.
L
We have the general garbage collection, which works for all DataVolumes created using CDI: each DataVolume will be garbage collected by default when it's done its work, when it's successfully completed. And when you're working with DataImportCrons, you have a second level, which says that the old imports will be garbage collected — you set the number of imports to keep.
L
I'm talking about only the last two imports, okay? So that's what you want here, right? You want to keep the last two PVCs — that's the functionality that you're interested in. Yep, okay.
J
So
yeah
I
think
we
have
I
guess
it's
sort
of
like
almost
exposing
the
logic
of
what
is
the
latest
image
and
I'm
trying?
Where
is
that
determined?
Is
that
annotated
somewhere
in
the
in
the
status
or
like?
J
Where
is
that
because
I'm
just
wondering
if
there's
I
don't
know
the
internals
of
how
that's
working,
but
at
some
point
our
data
import,
cron
is
pulling
the
registry
and
determining
what
the
latest
image
is
and
then
it
marks
that
down
somewhere
and
then
determines
if
a
new
up
a
new
Imports
required,
so
I
guess
I,
almost
wonder,
is:
can
we
also
allow
the
latest
image
to
be
specified
in
the
spec
like
to
say
what
it
is
and
if
it's
in
the
spec,
then
we
don't
have
to
do
the
check
to
the
registry
and
I
would
say
that
one
cool
thing
about
that
approach
could
be
that
it
could
eventually
be
extended
to
work
with,
for
example,
HTTP,
where
there's
no
way
to
determine
what
the
latest
image
is
necessarily,
but
that's
a
different
topic.
J
So, Arnon, the idea that I have — and this is just talking about it, so that somebody could implement it if they like — is: the poller comes up with what the latest image is, and it records that result somewhere, and if it's a declarative system, it's probably in the DataImportCron object somewhere, I'm guessing in the status.
J
Okay,
it's
an
annotate.
What
I'm
suggesting
is
that
it
actually
be
made
a
proper
first-class
citizen
of
the
status
API
first,
so
the
status
is
okay,
the
current
or
you
know
the
the
latest
tag
equals
whatever
that
is
some
Shaw
or
whatever
it
is
that
we
decide
that
is
okay.
That
would
be
step
one
step.
Two,
then,
would
be
as
we
do
for
some
of
our
other
objects.
We
could
actually
mirror
that
as
an
optional
field
to
be
specified
in
the
spec
and
again,
this
is
just
an
idea.
L
By the way, we're already doing that, because we're using that mechanism in our tier-one tests — you know, functional tests. You can, for example, disable the schedule, and then you just need to annotate the DataImportCron whenever you have a new tag or sha. We're already supporting this format.
J
So let me just defer to see if that would be useful. It sounds like it might solve the use case here. If so, then maybe, Arnon, you could just provide a few details in the issue, and then you guys could try that, and if it works for you, we can figure out if you'd want to submit a PR to make that a more comfortable API, or if you're just happy with what it is.
K
Yeah, of course. Okay, so here we need a little bit of grooming, because we kind of started off with one issue, but then that resolved successfully, and then down the line we hit another one. Okay, I think the conclusion here is that we have a problem with secret refs on DataImportCrons.
E
Yeah
the
main
problem
here
that,
if
you
want
to
import
some
image,
you
have
to
create
a
secret
device
first
time
in
need
space
where
the
actually
data
volume
is
located,
and
the
second
one
in
CDI
namespace,
because
checking
the
current
job,
which
is
checking
images,
also
requires
this
Secret.
E
That will not work, because the initial cron job is always created.
L
Sure, you're right, but this would need — this will need to be solved also for your previous issue.
J
Oh
okay,
so
that
can
be
something
when
you're.
Looking
at
that
other
issue.
That
sounds
like
yeah
I
think
that
if
data
import
cron
is
created
with
a
a
never
schedule,
I
mean
I
think
it
makes
sense.
If
there's
any
schedule
whatsoever,
it
should
do
the
initial
one
immediately
upon
creation
and
not
wait
till
the
next
interval.
But
if
the
schedule
is
explicitly
disabled
then
it
should
probably
you
know
with
all
zeros
or
whatever.
Then
it
should
probably
not
create
one.
J
Right
and
of
course,
that
should
Behavior
should
be
documented,
hungry,
okay,
so
that
sort
of
is
a
orthogonal
to
this
issue
in
a
way
because
I
think
there
still
is
an
issue
of
Secrets
needing
to
be
in
two
places
and
I.
Don't
know
if
that
can
be.
Can
that
be
solved?
J
I guess we'd have one in the CDI namespace for the DataImportCron, and then it would be copied out to where it needs to go, probably by the controller, and managed that way for the actual import operation.
K
Yeah
and
then
we'd
have
we'd
have
to
give
some
secret
R
back
to
our
controller,
which
is
also
a
discussion
topic.
So.
J
Is
disabled
then,
and
initial
import
I'd
never
be
scheduled
or
it
should
never
be
completed,
attempted
and
two
secrets.
J
Okay,
anything
else
on
this
one
for
the
for
the
time
being,.
J
Okay,
all
right,
so,
let's
pop
back
I,
don't
know
if
why
don't
we
try
to
pick
up
this
La
this
topic
here
and,
unfortunately
Michael
we
may
have
to
defer
the
persistent
container
discs
until
next
time
to
to
address
it
properly.
So,
let's
take
on
the
lvm
unshared
Lund
config.
E
Just
simple
question:
if
anybody
use
lvm
unsure
to
Loon,
which
Seaside
drivers
you
use
or
which
methods
you
use
to
populate
this.
J
I'm
just
trying
to
I'm
trying
to
give
it
a
little
bit
of
thought
just
to
mean
to
be
exactly
sure
like
what
you
mean.
So
is
this
the
idea
that
you
have
a
a
nice
guzzy
Lun.
For
example,
that's
given
to
a
VM
in
the
form
of
a
PV
and
the
VM
is
initializing
lvm
from
the
guest
perspective
there
or
something
else.
J
The idea being that one iSCSI connection, or LUN, corresponds to one disk — so those are two — and then in the case of the one LUN per disk, I was asking if it's the operating system inside of the virtual machine that is taking that block device and creating LVM for itself on the device. But it sounds like you're discussing partitioning a large LUN.
J
So there is — I know of a project, but that's not shared LUN — TopoLVM, which is doing that for local storage devices.
K
And
one
note
about
Topo
lvm,
the
way
it
works,
it
would
just
Loop
over
the
available
devices
you
have
and
we'll
just
utilize
them,
throw
it
throw
them
in
the
ovm
pool
yeah.
J
It has no multi-node awareness there, so there wouldn't be any kind of locking or anything on those devices currently. I wouldn't be surprised if other storage drivers may be doing something like that, but I'm not aware of any specifically.
J
I'd love to hear an update on that after you've taken a look, because I think that would be a pretty cool use case.
J
All
right
so
with
that
we
have
we're
about
two
minutes
from
the
top
of
the
hour,
so
I'd
like
to
kind
of
close
down.
Is
there
anybody
with
a
burning
quick
topic
they
want
to
get
across
here
before
we
wrap
it
up
today,.
K
Just a quick question about the persistent container disk: would something like this enable the Nvidia use case that they demoed at the summit? Is that what —
B
Well, if the node has the image in, you know, the Docker cache or whatever, then I think it would be functionally the same thing. But I think that's a big "if". Otherwise — I mean, we'll talk more about it next time, but I think that's the main advantage: if the image is in the cache, your VM can get provisioned and started really quickly.
J
Okay, so yeah, we're basically out of time. Thanks to everyone for your presentations and topics — interesting discussions today, as always, appreciate that — and I hope you all have a good week, and we'll see you back here in two weeks for the next installment. Thanks.