From YouTube: The Road to Version 1
Description
A few months ago the KubeVirt community started to discuss what would be the requirements that KubeVirt should meet in order to release KubeVirt Version 1.0.
This session aims to:
- Provide a recap of the discussion so far
- Review any relevant updates since the last time the plan was discussed
- Collect additional feedback / elements for discussion
- Propose the next steps to take
In summary, this session is another step in the journey towards the release of KubeVirt Version 1.
Session lead: David Vossel, Senior Principal Software Engineer, Red Hat
A
So if you want to talk, as Pep just said, we have to actually enable that for you. I want to have a discussion here, so if you're interested in talking about the roadmap to version 1 and the items we're going to discuss, maybe what we prioritize and what's important to you, then definitely be proactive and ask in the chat to be enabled.

Now, what I'll do is an overview of the document. We already had a meeting previously, back in mid 2020 I think, where we created an initial list of things that we wanted for KubeVirt version 1, before we would call ourselves a version 1 offering.
A
Lots of people in the community helped create a list of the things we wanted for version 1, and we also defined what we wanted version 1 to mean. I think we all arrived at and agreed on the thinking here, which was: version 1 is going to represent the minimal amount of functionality we need to meet our goal of being an infrastructure-as-a-service virtual machine management platform that's ready for production use cases. And to support that, we need a couple of things in the community as well.
A
So we can't just have solid software; we also have to have solid processes in the community for releasing software, and an accurate user guide that reflects how to use this software. That's our definition of our target for version 1. Also, I'm going to go ahead and post this in the chat. Okay, thanks Pep, you already did that. If you can't access this, you need to join the kubevirt-dev mailing list, and I don't have a link for that off the top of my head.
A
But if somebody can post that to the chat, that would be useful as well, if somebody has comments or anything like that. I'm not even sure you can view the document until you have joined that group. All right, so looking at the list, I'm going to go through the items we identified as wanting to have in version 1 and talk a little bit about what we've done. The first one here is: we want to GA our KubeVirt APIs, so these are the VirtualMachine APIs and the VirtualMachineInstance APIs.
A
Basically our fundamental core set of APIs. We wanted to call them version 1. That doesn't mean KubeVirt itself is actually taking the leap to be version 1; we're saying that the APIs themselves, we think they're stable, we think we're going to support them for a long time, and we're going to call them version 1. And we did that. Since that meeting, that's been done; I believe it made it into the 0.37 release, which was last month.
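For reference, the GA'd core API is expressed as `kubevirt.io/v1`. A minimal VirtualMachine against it looks roughly like this; the VM name and container image are placeholders, not something shown in the session:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # placeholder image
```

After `kubectl apply -f vm.yaml`, the VM can be started with `virtctl start demo-vm`.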
A
We've seen some movement on removing root capabilities out of the VMI pod, and we had a great presentation on that earlier today. But as far as actually still using root, user zero, for the entry points: on that point we're still there, and there's a lot of work remaining. We haven't made a terrible amount of progress on that quite yet; there's still a lot to be done, so that's one where we haven't seen a whole lot of movement.
A
Next is persistent container disk volumes. This was a big discussion we had, and the idea behind it was: we wanted an easy mode for people approaching KubeVirt for the first time, to give them a way to attach storage in a persistent way to a virtual machine and let them just run with that, maybe in a test environment or something like that. Today, container disks are just ephemeral.
A
I'll get into that as part of the questions in the next section, after I finish this overview. I would say that one is actually at risk of not making it into version 1; potentially we've shifted our focus as a community a little bit on that, and I'll try to speak to that.
A
The next item is to establish predictable community release and support patterns, or rather, I'm not sure we need progress on it so much as we need to solidify it. We have a predictable release pattern right now: we release monthly. But the support patterns, how long releases get backports for bug fixes and things like that, we haven't really defined. We also haven't defined a deprecation policy: how do we remove things from KubeVirt over time?
A
Congratulations, Ryan, that happened today, fantastic. Okay, so this next one was our bridge mode, which is our default binding mode for networking. It wasn't stable, and that essentially led to a situation where the default way people use virtual machines might cause problems for them. It's great that we stabilized that, because we weren't really sure exactly how to proceed, whether we were going to replace bridge binding mode or not. But I guess we can mark that one off.
A
Revise our user guide: we definitely still need to revise our content, but there's been some great progress on the user guide, specifically in finding a platform to serve it that we think is going to act as a solid foundation for us moving forward. So there's been movement here. I think the next steps for us are to actually revise the content within the user guide, perhaps organize it in a way that makes it easier for people to find what they need to know, and maybe provide more examples.
A
This is actually an area where I'd really like to get some feedback from the community, from you all today, hopefully. Next, virt-launcher live updates: I talked about that yesterday in my presentation, and I have a work-in-progress pull request up for it now. What this is about is updating, during the KubeVirt update process, the components that live inside the virt-launcher virtual machine pod, and we have a couple of paths to that. So we've made progress there.
A
I think we're pretty close to getting that merged. The last one here is a templating-like mechanism for virtual machines, in order to abstract away a lot of the complexity of building VMI YAML, our API, for people. Our API is complex; there are a lot of advanced options in there, and a lot of ways to get lost in those details.
A
So this task is: when somebody's approaching KubeVirt for the first time, how can we create mechanisms that simplify virtual machine creation, so the YAML they have to be exposed to just to do something really basic, like "I want to start a virtual machine with one CPU, using this volume," is only a few lines? Right now it's pretty verbose.
A
We have a proposal out for this right now, the flavor API proposal, something I'm working on, which allows us to abstract away a lot of the complexities of the performance characteristics of a virtual machine and lets people just focus on what volumes they want to attach to it.
A
You'd say "large" or something like that, and that takes care of representing the performance characteristics. So you really just say two things: I want this volume as my root disk, and I want this performance, and pair those together. I want our VM creation to be that simple when people approach the project.
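As a rough sketch of the direction only: the `flavor` field and the `large` flavor name below are illustrative, following the spirit of the proposal under discussion rather than any finalized API, and the PVC name is a placeholder:

```yaml
# Illustrative sketch of the flavor idea: performance characteristics
# come from a named flavor; the user only names a volume and a flavor.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: simple-vm
spec:
  flavor:
    name: large            # hypothetical flavor carrying CPU/memory settings
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: my-root-pvc   # placeholder PVC name
```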
A
So I can ask some questions and we'll see where this goes. Oh, Alex, you have some questions? Alexander has some questions. Shoot, Alex, let's hear it.
B
Yes, let's ask them live. I'm on the storage team, so I'm mainly interested in CDI-related items, and one big thing for us right now is that KubeVirt depends on the v1alpha1 version of the CDI API, and we have to more or less maintain that to make sure KubeVirt gets everything that we're adding. We'd really like to ask if we could get to at least the v1beta1 API, so we can start dropping v1alpha1.
A
Yeah, no problem. I'm curious about that. So CDI has an old v1alpha1 API; how does that differ from the v1beta1 API?
A
Okay. What we've done in KubeVirt, and what I've seen in Kubernetes as well, and I understand the problem you've encountered, I think it might be an artifact of using the controller runtime to generate a lot of this stuff, is that version 1 of our API has aliases that go all the way back to v1alpha3.
B
What we're doing in CDI as well is actually storing the v1beta1 version in etcd, but then aliasing it to v1alpha1, because they match. They're identical right now, but we really don't want to keep the alpha one around. KubeVirt is basically the main reason we still have it.
B
We would maintain some kind of backwards compatibility, similar to what we're doing with v1alpha1 right now, but we can't really move to v1, because then we'd have to keep backward compatibility with two versions, and we really want a maximum of one. This is sort of what Kubernetes does as well.
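The aliasing pattern being described is the standard Kubernetes CRD multi-version mechanism: several versions are served, but only one is the storage version written to etcd. A minimal sketch, with the schemas stubbed out (CDI's real CRD carries full schemas):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: datavolumes.cdi.kubevirt.io
spec:
  group: cdi.kubevirt.io
  names:
    kind: DataVolume
    plural: datavolumes
    singular: datavolume
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true       # the version actually persisted in etcd
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v1alpha1
      served: true
      storage: false      # kept only as a compatibility alias
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Dropping the alias then just means removing the `v1alpha1` entry once no clients (here, KubeVirt) still request it.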
A
Let's see. I think we need to shore up our plan here. Would that be something you wouldn't mind kicking off a discussion about on the kubevirt-dev mailing list, so we can sort through those details?
B
It might sort of dovetail into the container disk to persistent container disk discussion.
A
Okay. I might bring that one up in a second. Let's see if there are any other community questions and any other feedback we can get, because that one could easily eat the rest of our time, which maybe is what we'll do; we'll see. What other questions do we have? Anyone else have feedback on this list?
D
Yeah, so this is Marcus. Hi, David. There was something we talked about on the mailing list, oops, my video, we're on camera, something on the mailing list that we discussed at one point regarding sharing file systems into the VM. So imagine a customer has a ReadOnlyMany or a ReadWriteMany PVC, and they want to share that between a pod and a VM.
D
Do you have anything as far as implementation to discuss on that? I just didn't see it on the list, and I wasn't sure if that's something we would want to add to the list, or need in the community.
A
Have you looked at virtiofs and the behavior we have so far? I'm curious whether this is already solved, because we just had a request to change how permissions are handled when virtiofs PVCs, or storage file systems I guess, are shared with virtual machines, where we're not going to change the permissions on the files in order to make them available.
D
I haven't tried that. I assume it works similar to what you get with 9p, but I see it mentioned in the comments here that it's being worked on, so that's good news.
E
I just want to chime in. I'm going to talk a bit about virtiofs in my next presentation. We use it extensively between VMs, and after some bug fixes it works really well. We get perfect coherency and really good performance; we push tens of thousands of IOs over it.

A
There's been a lot of work recently on that.
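For reference, sharing a PVC into a guest with virtiofs is expressed in KubeVirt's API roughly as follows; the names, PVC, and container image are placeholders, and the `filesystems` support was still maturing at the time of this session:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: virtiofs-demo
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          filesystems:
            - name: shared-data      # exposed to the guest as a virtiofs tag
              virtiofs: {}
          disks:
            - name: rootdisk
              disk: {}
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # placeholder image
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-pvc    # the ReadWriteMany PVC to share
```

Inside the guest, the share is mounted by its tag, e.g. `mount -t virtiofs shared-data /mnt`.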
A
Any other thoughts on what people might want in version 1? Let me look at my notes here while you all are still thinking. If we don't have anyone wanting to jump in immediately, some of the questions I have are: what are people's experiences with our user guide, and what kinds of things would make their lives easier when it comes to documentation?
A
How can we improve that? If anyone has any feedback on their first experience with the user guide and the kinds of things they struggled with, that sort of thing would be beneficial in trying to shape our priorities here.
A
What we wanted here was a simple way, like I said earlier, for people to use KubeVirt with a container disk acting as their volume, yet find a way to make that persistent and do it automatically for users. So this would be assigning a container disk to your virtual machine and then automatically syncing it into a PVC and making it persistent.
A
We approached this a lot of different ways. We were thinking about using qcow2 backing, with the container disk existing in the container as the backing file and only the overlay being the thing that's persisted. There was a lot of technical discussion about problems that could arise from that, and what we landed on, which from a usability standpoint I'm not happy with, but from a consistency standpoint across the entire ecosystem...
A
...I think it makes sense, and that is that CDI today has the ability to import a container image onto a PVC. So if there's a disk in a container disk, we can import it directly onto a PVC using CDI, and that gives the same flow we're talking about, where somebody gets that persistent behavior.
A
Say they're approaching KubeVirt for the very first time and they want to get their Fedora virtual machine started. They can just pull one of our community Fedora images that are stored in a container, immediately pull that in, and express it pretty easily in the virtual machine's YAML using CDI and a DataVolume.
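The CDI flow being described looks roughly like this; the image URL and storage size are placeholders:

```yaml
# Import a container disk image onto a PVC via a CDI DataVolume.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-root
spec:
  source:
    registry:
      url: docker://quay.io/containerdisks/fedora:latest   # placeholder image
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```

A VirtualMachine can also embed this under `spec.dataVolumeTemplates`, so the import runs automatically when the VM is created and the resulting PVC is used as its persistent root disk.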
A
Right. So that was the thing that was inconsistent with the approach I was trying to take, where we would back the delta file on a PVC for a container disk. In that case, any time somebody performed a write on their volume, that data would be separated out and layered into a PVC persistently, while the read-only initial file would exist in the container image.
A
Does anyone in this conversation feel like we should continue this discussion of trying to persist container disk volumes outside of CDI? I guess that's the major drawback here, and I have no problem with CDI; I'm not saying that using CDI is a drawback. I'm saying that in the first five minutes of approaching KubeVirt, if I have to install both KubeVirt and CDI in order to get a persistent volume, those are more steps. But maybe that's something we have to absorb as a community. I'd love somebody to argue.
A
Yeah, we could introduce something like a PVC template section in the VM if we really wanted, and then assign that to the container disk as a backing. Sorry, I'm reading the chat at the same time. Right.
A
We can package up all these components, like CDI and KubeVirt, together in a Helm chart or something like that. I think that might simplify some things.
E
Whenever there are enough updates on both, however you want to release it, or just roll a new release on every release of either CDI or KubeVirt. Very similar to how the Prometheus Operator operates, where they have one Helm chart that can deploy Prometheus for you, deploy Grafana, deploy Alertmanager, based on all the components you need to get a monitoring stack up.
E
We can do the same thing. I don't think everyone should have to use the Helm chart, but it would provide a very easy way to get going with a very standard, straightforward KubeVirt plus CDI storage setup on a bare-metal cluster, and I don't think it would be that complicated to build.
A
Okay, I think that's a great area for us to investigate.
E
Yeah, I do think that reinventing functionality like you would get just by spinning up something like Longhorn, just to make it easier for people to deploy, seems like a bit of an anti-pattern. Like you just discussed, it's best to stick to Kubernetes primitives and instead figure out an easy way for people to deploy the stack that lets them use those primitives. Sure.
A
I'll try to get that link to you, and I'll post it here after this is over. I think that's about it. I'm going to sign off.
C
Okay, yeah. Thank you very much. We are over time. This was a great summary and a reminder of the pending discussion items, and I encourage everyone to follow up.