From YouTube: KubeVirt version 1 first planning/scoping meeting
Description
First meeting to discuss the scoping proposal for what should become version 1 of KubeVirt.
The meeting was held on 2020-09-03
Document: https://docs.google.com/document/d/1KpCOIgiIfTsHQ_AA6xfr3ndgVc8fIs-BSUrdXFdVuxs/edit
A: All right, so this is a one-off community meeting where we're trying to shape up and define what KubeVirt version one should look like. We're planning for KubeVirt version one, which we're hoping lands this year, so pretty soon. We just need to define what that is and set the goalposts for ourselves, so we know what we're working towards. To begin with, I think it would make sense for us to take a look at what version one means. I wrote down my thoughts, and it's okay if you have other thoughts, but at least let's try to agree on something. What I've tried to use as our definition is: version one should be the minimal functionality we think is necessary to meet our goal of being an infrastructure-as-a-service virtual machine management platform that's ready for production use. And I think this means that, in order to be ready for production use, we're going to have to have an established process for community support, and a user guide that's easily accessible and formatted in a way that people can use easily. So, what are people's thoughts about how I've defined this? Does that make sense to everyone? Is that a good general goal? Is there anything we should add, remove, or anything whatever?
B:

C:

A: Yeah, I think that's a really good point. So you're coming at this kind of from a communication standpoint, how we communicate to people how to create virtual machines, or are you saying that we need to potentially look at streamlining? Maybe go into some more detail there: how could we, what actions could we take, to simplify this for people?

C:

A: I'm not sure. I mean, our API is pretty complicated because it's kind of flexible and it's fairly powerful, but it gives us a lot of options. So it sounds like what you're talking about might be something similar to what OpenShift has done with the common templates, where they give kind of a...
D:

A: Yeah, I'm going to add that to the list. That's something we've been thinking about, so at the bottom of the list I would define that as... I put down "templating mechanism for virtual machines", and this is going to be something that abstracts away some of the API complexities from users. Conceptually, we would consider this to be something similar to what we get with OpenShift templating, except, of course, we'd want this to be available to everyone, regardless of whether you're using OpenShift: something that's generically available for anyone who's using KubeVirt, yeah. So thanks for writing that in, excellent. So it sounds like we don't have a whole lot to talk about on the definition, and I may have one thing...
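For readers who haven't seen the OpenShift common templates referenced above, a parameterized VM template has roughly this shape. This is a minimal illustrative sketch, assuming the template.openshift.io/v1 API and a demo container disk image; none of the concrete fields come from the meeting:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: fedora-server-small    # hypothetical template name
parameters:
  # The user supplies only a name; the template fills in the
  # domain, resources, and disks that the KubeVirt API requires.
  - name: NAME
    description: Name for the virtual machine
    required: true
objects:
  - apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: ${NAME}
    spec:
      running: false
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 2Gi
          volumes:
            - name: rootdisk
              containerDisk:
                image: kubevirt/fedora-cloud-container-disk-demo  # placeholder image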
E: Yeah, so I feel there's this commitment issue whenever software goes to v1, because you want it to be perfect when you go out with v1, right? So you don't need to add anything else then, and so the bucket can just grow and grow, and KubeVirt will never be done, right. So for me, as a user, I would expect one single thing from KubeVirt going v1, and that's, like, perfect documentation.
E:

A: So I think we're in agreement that anything this user... especially people's first interaction with our project, we need to make that just awesome. It needs to be that they can find the information they need, it's concise, and it's easy to consume, because that's how we gain traction. And we're going to have a lot of eyes on us once we make an announcement like "KubeVirt is version one", so that's an opportunity for us to gain traction as well. So I would say it's priority one.

A: I would agree that we need to take a hard look at our user guide and what's available to people that are first approaching the project, and make sure that they're funneled in a way that makes it easy for them to access the information that they need. So yeah, that definitely doesn't need to be cut. I would agree, yeah, I mean...
E: While writing the docs, like writing all the user guides, we can think about... once you start doing it, you will figure out... you will see clearly where KubeVirt lacks some user experience features. You know, if you need to write the longest user guide on one feature, it probably means that you need to rethink that feature. So I'd focus solely on that. I mean, like, for instance, the virt-launcher live updates.
B: Yeah, I would like to add to what Peter just said about the virt-launcher live updates. The reference to control plane components here, I'm not sure if it is 100% accurate, because unlike the control plane components, which can be, for example, virt-handler or something like that, virt-launcher is a user-defined workload, more like a pod. So yeah, I'm not sure that this feature is strictly required for v1.

D: Oh sorry, well, I'm thinking, you know, as a potential user, and with the scope of v1 being production-ready and so on, this virt-launcher live update is kind of important, in the sense that you can upgrade... you know, you can perform updates while keeping VMs unimpacted.
A: Correct, yeah. So here's why this is in here: virt-launcher pods, or the VMI pods, contain our control plane component, which is virt-launcher, but they also contain libvirt and QEMU, and if anyone wants to update those components, it's tied to the workload.

A: So we were looking at ways to perhaps seamlessly figure out a way to inject those new components into that pod, and maybe cycle the virtual machine in a way that's not disruptive, and things like that.
D: I mean, sorry... but beyond the technical details, I would say, from a user's point of view... and Daniel, I think you made the last comment, or maybe not, I'm not sure, but you said, okay, this is like a pod. Well, it's not: it's a VM running inside the pod, right? And you know, for Kubernetes, like a normal pod, on upgrades pods are expected to be killed and, you know, restarted.

E: So I mean, and I think that here we can make a... like, there's a difference between the pet VMs and the cattle VMs, right? So if we say that we want to support pet VMs in v1, then we probably need live updates and we need snapshots.
A: I think we want to take a step back. So this task item is talking about live-updating virt-launcher, and I think that's going to be... that's too much. What we need, at the very minimum, is a clearly documented process for updating these virt-launcher pods. So even if that means that if you want to update them, you have to live-migrate, or you have to shut down and start up again, or whatever.

A: We just need a process that defines that today, because right now it's kind of... you update your KubeVirt version and it's not obvious that these things are left around with the old version, that the workload components are the old version. It's not even obvious what you need to do to update these things. Even if we can't do the live update, it's going to be pretty difficult, I think, at the very minimum.
A: Okay, so I think I'm just going to start going down this list and we'll tackle these things one by one, and hopefully we'll get to the bottom of the list. We've already knocked out a few, and that should give us a good indication of what we're targeting here. So the first thing on this list: we have that the core KubeVirt APIs need to go to version one, and I think we've already essentially...

A: We already are kind of version one; it's just that we're v1alpha3 right now, for historical reasons. It's because there were some issues in previous Kubernetes versions that made it difficult for us to have multi-version support, in the way that we wanted, that was stable. And we've just hit the point, in the past probably six months, where we feel comfortable doing that, but then we've been held back by some other things. So I think we're at a point where we are effectively version one.
A: We just need to actually flip the version to represent what we are. But as part of this, we're saying that we need to rely on the generally available Kubernetes entities. So when we register our CRDs, they need to actually be registered using CRD version one, that's for the KubeVirt objects themselves, and we need everything that comes along with that. So we need to have the explain functionality.

E:

A: No, so we're looking at moving forward. So what we have now, we would just be moving forward with, and backwards compatible.

A: So the current v1alpha3 will be an alias for v1, essentially; they're just going to be the same thing. v1alpha3 is not going away. It'll always exist; it'll just be the same thing as v1. So anyone who's invested in their v1alpha3 API, and they have, perhaps, virtual machines that use that group and kind or whatever, it would...
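As a sketch of what this could look like (illustrative only; the real KubeVirt CRDs carry full schemas, and the stubbed-out schemas below are an assumption made for brevity), a CRD registered via apiextensions.k8s.io/v1 can serve v1alpha3 and v1 side by side, with v1 as the storage version. The published structural schema is also what gives kubectl explain something to show:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.kubevirt.io
spec:
  group: kubevirt.io
  scope: Namespaced
  names:
    kind: VirtualMachine
    plural: virtualmachines
  versions:
    # Existing clients keep using v1alpha3; it serves the same
    # stored objects as v1, so nothing breaks on upgrade.
    - name: v1alpha3
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    # v1 is the storage version going forward.
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true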
A: Okay, so the next one is snapshot support, and I think it doesn't really matter what we think about this, because it's so close that it'll probably just make it in. Maybe Michael is the one that might have some strong opinions about this.

A: I would say... say that again? I might have missed it.

B:

D: So before we move on to the next one, sorry, not an objection, but just a question: what could depend on snapshots? I mean, I'm wondering, things like backups, probably.

A:

D: Well, okay, I would not object about deferring it, but I just wanted to add a note that, you know, keeping this production-ready context, backups are important for production. So anyway, let's move on.
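For reference, the snapshot API being worked on around this time takes roughly this shape. The version and fields below are a best-effort sketch of the then-in-progress snapshot.kubevirt.io API, not something quoted in the meeting:

apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: my-vm-snapshot    # placeholder name
spec:
  source:
    # Points at the VirtualMachine to snapshot; a matching
    # VirtualMachineRestore object would restore from it,
    # which is what backup tooling could build on.
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm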
B: I did, and my question is: why is it something that is, like, strictly required for version one? Why can't it be something like version one dot something?

A: Yeah, it can be; that's what we're discussing. So I think it's a perception issue right now. People would like to have their virtual machine workloads running in the least privileged pod possible.
H: Right, can I add something? So I think it's clearly noted that this is an untrusted area. As you actually said, it is problematic, on the one hand, to specify that it's untrusted and to give root privilege to the pod. So I'm seeing it as advisable either to indicate that it's kind of a work in progress, or to remove the root.

A: So in indicating this, would this be part of our user guide? How would you expect us to communicate this?

H: I don't know. I'm just saying that, you know, it's clearly divided between... you know, we're trying to divide between untrusted and trusted, right? And you have the handler on one side, the launcher on the other, and you have it in the untrusted area, but you basically give it root, and so they can elevate; they can do a lot of damage. So it's kind of... you break the border between the untrusted and the trusted.
B:

H:

A: My only point, so here's what I'm getting at: maybe non-root VMI pods don't need to be targeted for version one, but at the very least we need to have a documented trust model for what our components are allowed to do and exactly where we fall as far as privileges.

B: Yes, I have one question, though, that just came to my mind. So actually, let's say that I'm a cluster admin and I'm giving you, David, the option to create VMs in your namespace; I'm giving you a namespace and you can create VMs there. So you can also exec into the launcher pod, right? Yeah.
D:

A: If you don't have... so you couldn't pod-exec into the virt-launcher pod, but in vanilla Kubernetes, without... I'm not sure, what is it called, the policy...

H:

A: That's not something we can necessarily enforce, but that's something that whoever's setting up KubeVirt, and however they've set up their multi-tenancy, would have to enforce somehow.
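A minimal sketch of the separation being described, assuming standard Kubernetes RBAC (the role name and namespace are hypothetical): a namespace user can be granted the KubeVirt resources without pods/exec, so creating VMs and exec'ing into virt-launcher pods stay two distinct permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-user          # hypothetical role name
  namespace: tenant-a    # hypothetical tenant namespace
rules:
  # Allow working with VMs and VMIs...
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines", "virtualmachineinstances"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # ...but grant no rule for "pods/exec", so a user bound to
  # this role cannot exec into the backing virt-launcher pod.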
B:

A: Okay, I think that's something that we need to investigate, our messaging around this, and what...

A: Or we could do the... we could say non-root pods are required for version one. I don't know; what are we leaning towards here?

H: Not necessarily, because, you know, it's the option of the user, right? If you have very well-defined documentation of the trust model, it's up to me to decide whether I want to use that in production. My problem currently is that, because it's not well documented, it's kind of misleading, right? I see this nice model of trust, and I don't want to dig into a lot of documentation to get to the answer that, oh, I have root on the pod.
H:

B: The model, it is not just root on the pod, because if I'm a privileged pod, I can easily get access to other pods.

A: And privileged... we enable privileged for, I think it might have been SR-IOV, and maybe that's even gone now. We shouldn't have a problem with that anymore; I can double-check.

E: Yeah, we are not privileged, but you still have... you can edit the networking within the virt-launcher pod, and by doing that you may be able to destroy networking for all the other pods that run on the same node, for instance. So this risk is still there.
A: The only way privileged gets set today is if somebody's using PowerPC, and that's because of some issue with libvirt. But that's the only way it would get set.

B: But the problem is that people can consume KubeVirt, for example, through the OpenShift app registry, because we deploy our releases there as well.

E:

C:

A:

I: Here, I'll go back there. I'm not super concerned about this one. You know, admins exec'ing into pods and creating VMs are really two different sets of permissions, and, you know, regular users don't necessarily have to exec into pods. I don't know. I mean, I've installed a lot of Kubernetes applications, and a lot of them, you know, are privileged without saying anything about it, or do a lot of, you know... and upstream, you know, it's kind of root by default anyway.
A: This has the potential to damage our version one offering. So if we come out of the gate with version one and then we talk about root privileges and things like this, my impression is that that's going to instill some sort of doubt, and whether that doubt is founded in reality or not... I'd prefer, if we can pull it off, to just not even have to go down that path. I think that's kind of what we're leaning towards, at least right now.

A: If we find that it's not viable for us to go non-root in a reasonable time period, then maybe we'll open this back up and say: I think this is going to hold us back from version 1 too long.
A: So when somebody approaches the project for the first time, everyone uses container disks, because it's the easiest way to just start their first virtual machine and see it working, and they don't have to think about how to import their virtual machine root disk and all that. So the idea with persistent container disks is...

A: We would give somebody an option, immediately, when they're just trying to play around with KubeVirt, to use the fully fledged features that they get with KubeVirt: being able to stop and start virtual machines, and potentially, if we had, like, snapshots enabled by then, they'd be able to snapshot and restore. And so it's not something that I necessarily see... I think some people will use it in production, but I see it as something that users will definitely take advantage of in their first interactions with KubeVirt.
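For context, this is roughly the kind of first manifest being described: a VM whose root disk is a container disk pulled straight from a registry, so nothing has to be imported first. It is an illustrative sketch (the name and demo image are placeholders), not a manifest from the meeting:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: my-first-vm    # placeholder name
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        # The root disk comes straight from a container image;
        # writes land in an ephemeral overlay, so nothing persists
        # across restarts. That gap is what the discussion below
        # is about.
        - name: rootdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo  # placeholder image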
F:

A:

J: Hello, this is Fabian from Red Hat. About this one: I totally see that it's a useful feature, but why would we make it part of the MVP for v1?

A: My thought was: when we make noise about version one, we're going to have, I hope, new people checking out the project, and if it's their first time engaging with the project, if we have a clear path for them to just start virtual machines in a persistent way, and stop them, then they can just get things working immediately.
A:

J: My whole worry about container disks is really... I mean, I totally understand the use case of having, like, you know, well-crafted and curated container disks, right, like everything we know. But it doesn't scale if you come up with your own custom VM images, right? I know plenty of users who have, like, 20-gig VM images, a hundred gigs, right, or even larger VM images, and that's where container disks fail, right?

J: The registries are not tuned towards this, with these large blobs. I know of issues where, like, huge Java binaries crashed registries in the past, and I see the same issue: that people will get the wrong expectations about VMs, right? I have the worry, and I've had it ever since, right, that people get the wrong impression of this delivery mechanism. It is well suited for crafted images, but it's not a good mechanism for custom images that you prepare outside, right?
A: ...a feature. So let's say you're encountering KubeVirt for the first time, with, you know, our whole feature set, you know, the whole ecosystem. How would you communicate to this person who's starting their first virtual machine where to get the disk, where to consume it, and how to do this? Would you... yeah.

J: I won't say anything else, I think... To be honest, I think what would make the story clearer, in my opinion, more versatile, is to say we purposely aim container disks at, you know, stateless workloads first, right? That's a great story, because, yeah, it's a better fit, right. And anything that requires persistence, we require the VM image to be on a PVC. And in order to achieve that, right, today we know we could use CDI, but that's cumbersome to a certain degree, because you need to install CDI in advance, alongside KubeVirt.
J: So, for example, one thing I could imagine which would also achieve this... sorry, let me rewind back a little bit. I think, if we say: now you want to do a persistent VM, right, copy this, or make it... I don't know the wording, right, so bear with me on the wording: copy this container disk to a PVC, and you can use it as a persistent VM, right. And CDI is doing it. So by including CDI, getting CDI closer to KubeVirt, including it, including certain parts...

J: ...in order to make this easy, right: simply put a container disk on a PV. I think that would help, because then it's clear: oh, it's a PV, it's a persistent volume, oh yeah, click, yeah, absolutely, that's clear, right. The words already imply that it's persistent; it's so much clearer. And, you know, then the user is also getting the idea: oh, so I can create a PV to modify my VM, I can upload random VMs at arbitrary sizes, and PVs are suited for bigger sizes, right, so: to PVs. All right.
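What "copy this container disk to a PVC, and CDI is doing it" looks like in practice is roughly a CDI DataVolume with a registry source. The names and image below are illustrative placeholders, and the API group version reflects CDI's alpha API of the time:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: fedora-rootdisk    # placeholder name
spec:
  source:
    # CDI pulls the container disk from the registry and writes
    # the VM image it contains onto the newly created PVC.
    registry:
      url: "docker://docker.io/kubevirt/fedora-cloud-container-disk-demo"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi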
J:

A: So I understand what you're getting at. My concern here is: we've already, in the first few seconds that somebody is looking at this project, introduced another project, CDI. So somebody's...

J: That's why I said... sorry, that's why I said: today it's a pain, right. But that's why I said the different option to have persistence is by pulling CDI in, or, you know, leveraging it differently. But yes, it needs to be part of KubeVirt. The ability to have a container disk on a PVC needs to be part of core KubeVirt, right. Let's talk about that.

A: So do you think that we should bundle CDI with KubeVirt?
J: I would see that as an option, right? I mean, we can discuss it here, right. First, I want to make sure that I explain my problem with that story about persistence on container disks, and the wrong expectations about, you know, the versatility you could get with that way of using container disks. For persistence, I would rather like to focus on PVCs and everything we do with PVs today, centered around CDI.

J: So yes, right: if we agree that for persistence PVs are the better story, then we can look at options, and one of the options would be to pull CDI closer to KubeVirt, incorporate it directly, make it a core controller, for example, to not have it stand-alone but have it be built into KubeVirt, because then we eliminate the problem that it is an external dependency.
A: I don't want to talk about CDI in this initial, like... I want them to use CDI, and I want that to be happening behind the scenes. But in the first few... like, we're talking about people's first interactions with it: I want it to be KubeVirt, and I want a simple manifest, and I want it to be streamlined, where they just do the thing and it works. And if that's CDI working behind the scenes, that's great. But we need to streamline how we package and deliver that with KubeVirt.
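One existing mechanism in this spirit (noted here as context, not something proposed in the meeting) is a VirtualMachine with an embedded data volume template: one manifest, with CDI doing the registry-to-PVC import behind the scenes before the VM first boots. Names and image are placeholders:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: fedora-persistent    # placeholder name
spec:
  running: true
  dataVolumeTemplates:
    # KubeVirt hands this template to CDI, which imports the
    # container disk onto a PVC that the VM then boots from.
    - metadata:
        name: fedora-rootdisk
      spec:
        source:
          registry:
            url: "docker://docker.io/kubevirt/fedora-cloud-container-disk-demo"
        pvc:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          dataVolume:
            name: fedora-rootdisk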
H: Right, but if it makes it... wait, but if it makes it in and you still have some images on which you cannot use it, that's a problem. Oh, that's my point exactly. So you cannot leave it and let the user suddenly need to understand that they need to move to CDI, right; you need documentation, as we covered, right. So either you explicitly define, give some limit to, the image size, or you disallow... or, if you go with CDI, just disallow the other option. You cannot leave it as is.
J: One option, right, if we ignore CDI... another option, for example... and my point here is, I just want to highlight the problem, and I cannot come up with a solution in two minutes, or five, or seven. So one option is, right: we could say we have a similar API, which references the container disk, right, but instead of, you know, putting it into an emptyDir on the node, right, we at the same time reference the PV, and what Kubernetes is doing for us...

J: ...is pulling down the container disk and putting it on the PV, and we launch the VM from the PV, right. So it's effectively a subset of what CDI does, simply built on the Kubernetes API: having something like a seed image, right, the seed image which is put on the PVC, and then we're done, right. Then you only need the API to put it on the PV, and that's it. I'm not saying it's good, but I just want to say there are different options. And I agree.
J:

A:

J:

A: Check with common registries... yeah, I think it's easy enough to just say, you know, you're limited by what your registry allows you to upload here, and here's another path for you if you need to consider this. I don't necessarily think that this is a problem for this initial user who's just wanting to play around with KubeVirt for the first time. Sure, it's a problem for somebody who's making production usage out of KubeVirt.
J: By the way, you run into... the problem is: if we strengthen this feature too much, I think we run into problems over the course of the lifetime of the project, because, like, a hundred-gigabyte image... one problem is, like, the registry side, and I think it's already a problem, like, with Docker Hub today, or Quay, right. But on the other side, actually, we then also have the requirement that nodes are limited by their storage size, right?

J: So you can have that 100-gig image, but you need to pull it down to the node; you need to have it there, right, even if you only want to run the overlay. So I really don't think that this technical solution is scalable to those images, and that we put the users on the wrong track, right. I think we focus them on, in my opinion, a non-scalable or non-future-proof solution, right, because I think PVs are superior when it comes to persistence and huge VMs.
B: I think that container disks are a great feature to demo KubeVirt, and this is, I think, what it was meant for initially; we even have a set of examples in the examples directory. And I think what Fabian suggested in the beginning... and it somehow connects to what you, David, said about what will be the initial experience of a user with KubeVirt... I think that making something like: okay, welcome to KubeVirt, here is a container disk and you can deploy it...
H: ...persistency. But I think that contradicts, actually, the definition that David put up, because the definition is that version one is production. So it's not a demo; it's version one, it's a first impression of the production offering. And I think the problem is coming from the different views of Fabian and David: because, David, you are saying "first impression", and Fabian correctly says, okay, if the first impression was good, I want to continue, right, and now I'm stuck with something that is not scalable.

H: I don't want, in half a year, to go back and start changing stuff because I chose the wrong stuff. On the other hand, I don't want too complicated a process to start with. So it's kind of a trade-off, and I think that if we can guarantee that someone knows it's not a demo and so on, but knows that this is kind of an intermediate step, or something that is limited in the future...
J: And I think that is why I said... I think it's definitely something... I mean, there's no question that we want to keep container disks, right, because they're so cool to get started with. And that's why I said, you know, let's see how we can connect them, right, how there's a small delta to get them onto the track where they get to the versatile solution, PVs, right. And that's why I had that idea, and I don't know how the implementation would look, but to say: okay, now do this small change.
A: But you're still... it's the same track. So the limiting factor here is the container disk itself; we're talking about putting too large an amount of information inside container disks. So if they're going down the path and the first interaction was "look at the container disk", and we're saying that that's a problem, then I don't see how we're solving anything by not letting them use a PVC-backed container disk, or directly importing that container disk onto a PVC itself. I don't see where either of these things really matters.

A: It's the same. It's just the same problem; we're just packaging it in two different ways. That's all we're talking about here.

A: Could you... I think I got lost. Are we saying that container disks themselves are the problem? So, placing a... sorry, a virtual machine image in a container image, and uploading that to a registry?
J: I think there are a couple of technical problems, right. So the registry is one of them, right, and the aspects when it comes to using this non-shared storage, when pulling it back to the node, for example, and then the expectation that everything is built in... I mean, by introducing this persistency on PVs, right, the overlay persistence on PVs...

J: ...we really highlight this feature, and that is what I think is wrong, because the container disk approach, in my opinion, is not versatile. It does not support network storage, and for live migration we also need to... oh yeah (laughing), let's not get into live migration. But I think... to me it's important, because I think the right technical thing is to focus on PVs to be the versatile storage mechanism for virtual machine disks, for data disks and OS disks, and we don't get this component into the picture.
A: Here's the issue: the first interaction somebody has with KubeVirt, they're going to want to say: where do I... how do I get my image into this system? I've got an image. And if they're stuck spinning their wheels just on this, they're not going to move past that. So that's why we say: use a container disk as your first impression, get the thing up and going, and that's a path that you can take.

A: But if we're saying that it doesn't matter... if we're saying that their first interaction is using a container disk that they put directly on a PVC using CDI, or they're using a container disk and we're making a backing volume using a PVC, that's irrelevant, really, because they're starting with the container...
J: ...disk, though. I think it's not irrelevant, because the problem with the overlay is you need to keep the PV... it doesn't work alone, right; you cannot have that overlay alone, right. So from a flow perspective, you need that container disk in order to leverage this overlay functionality.

J: What I would try to focus on is: you can use that PVC, and if you go over to that persistence topic, you then discover: oh, the model is simple, I can put anything on a PV and it will be persisted, right, and use that in VMs, right. So I want to actually use the container disk as the starting point to quickly get started, but then move the focus over to persistent volumes.

J: If we go over to that persistence topic and let the user discover that the model is actually very simple, then they can do any... you know, they can craft the PVs in whatever way they want; they can prep them; they can use virt-install or whatever to create those PVs; they can have Jenkins pipelines to create them. I think, to me, it's like guiding the user, right? Expanding the user's knowledge, introducing them to that new concept of PVs, okay.
A:

J: Perfect, I'm fully on board, and we can discuss, actually, the implementation. Actually, you know, I think what's important, what we hear, is... I think the definite gap is that the step from container disk over to PV is too large, right? There is no friendly, user-friendly flow to do this. CDI is an option, better inclusion of it... I mean, there are now many options for how we can improve that. To me, yeah... so yeah, okay.
A: So we're going to make that the task: it's going to be streamlining people's first interaction with persistent virtual machines, and we'll take that offline. But I think that's something that we have to have for version one. So it doesn't have to be "persistent container disks", fine; it doesn't have to be the way I outlined the task. Now, I'll revise this.
J: Right, CDI has been standalone, but on the other hand tightly integrated with KubeVirt, so we could discuss pulling it in, and then we can directly leverage that functionality one way or the other, right, maybe adding a bit more glue here and there, improving the APIs to have it even more tightly integrated. That is one option. But if we're saying we need one simple piece of functionality, of simply putting... "simply", right, in quotes, if you can't see me: in quotes... copying a container disk onto a PV...
A: ...to me as well. So I think, through this whole discussion, the one thing that's really stood out is that our user guide, and how people perceive and really encounter KubeVirt, is the thing that is really important to us for version one. So before you came online, Fabian, there was discussion about our user guide needing to be something that's just way easier for people to find the information they need quickly, and maybe restructuring that and looking at that is probably priority one. Yes, yeah. So we didn't... we didn't get very far.