Description
KubeVirt is intended to provide a convergence point for the data center of the future, using Kubernetes as an infrastructure fabric for both application container and virtual machine workloads. Using a unified management approach simplifies deployments, allows for better resource utilization, and supports different workloads in a more optimal way. This session will outline how the KubeVirt project seeks to achieve this while using the extensible nature of Kubernetes in a way that provides a develo…
A: The second most common [use case] in this section of these technologies, and the one you've probably heard the most about this week here at the conference, is "I need a way to provide strict isolation of application containers", typically leveraging hardware-based virtualization technology in projects like Kata Containers and gVisor, and that's something that has obviously been a hot topic. So those technologies are focused on: I have an application container, and I want to use virtualization as a way to provide strict security around that application container.
A: But the question we asked ourselves is: what about existing workloads? A number of years ago now, I think four years ago, Red Hat made a big bet in jumping in with Google and others on Kubernetes as the future of orchestrating workloads across the data center, and the public cloud for that matter, across what we refer to as the four footprints of the open hybrid cloud. But we know that there's a huge fleet of existing virtualized workloads out there. They've been built up over the last one or two decades that the technology has really been commonplace, and those virtualized workloads aren't necessarily going anywhere fast. That can be for business reasons, in terms of it being tough to build a business case to containerize every single virtual machine you have in your fleet. It can also be that that takes time and, of course, there are your time-to-market issues. On the flip side, there are also technical reasons you may not be able to containerize a workload immediately.
A: So if it's using a different operating system to the one you want to use on your container hosts, using a different or older version of the kernel than the one we want to run on your container hosts, or if you're using things like custom kernel modules that, again, you don't want to expose as part of the attack surface of the Kubernetes nodes themselves.
A: So, in essence, we're thinking about how we can bring these two things closer together: our orchestration of container workloads, but also virtual machine workloads. How can we make those managed and interacted with in a more common way than they are today, where you're effectively using two separate management layers?
A: So if we think about virtual machines and containers: a virtual machine effectively virtualizes hardware, and a container isolates processes, and each of these gives the workload, or rather the footprint for the workload, positive or negative attributes in terms of a trade-off between performance, density and security. But increasingly, most organizations at this point have a mix of both: those existing virtual machine workloads, but also the new applications they're writing in Kubernetes.
A: So we have these existing systems that treat these separately, and we kind of have this transition we see going on, where we started off with virtual machines running on a physical machine. People started adding containers to those, because that virtual machine was effectively the easiest way to get compute resources in their organization. But as containers become more and more commonplace...
A: We think we're going to see this flip, where more and more of the containers are going to run on bare metal, but there's a desire to keep virtual machines running alongside them. And when we think about the Linux virtualization stack, as we work up from the hardware we have a Linux operating system, so effectively the Linux kernel; we have the KVM kernel module; but then on the user-space side of the equation we have QEMU processes, and although they need access to the /dev/kvm device, they are, at the end of the day, just processes.
A: And if there's anything we've learnt in the last couple of years from the rise of containerization technology on Linux, it's that we can take a process and put it in a container. By doing so, the thinking is that we can take advantage of many of the things that Kubernetes can do for us in terms of placement, network security, isolation, quotas, etc., and share those capabilities between both the application container workloads and the virtual machine workloads that we want to run, so effectively taking these two separate management planes and combining them, using Kubernetes...
A: ...as that common element. So what is KubeVirt? KubeVirt is a technology enabling Kubernetes as a unified platform for building, modifying and deploying applications, regardless of whether they reside in containers or virtual machines, in a common environment, so effectively making adding virtual machines to your Kubernetes projects or namespaces as easy as it is to add a new application container.
A: We do that by implementing a custom resource definition, or actually, increasingly, a number of custom resource definitions. What does that mean? Some people may have heard this concept referred to as third-party resources in the past, but it's a way of extending Kubernetes to add new top-level API objects, in this case a virtual machine. It's a pattern that you're going to see more and more of in coming years, as the Kubernetes community tries to keep the core of Kubernetes small.
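For reference, a custom resource definition registering a virtual machine kind might look roughly like the sketch below. It is written against the current apiextensions.k8s.io/v1 API, which postdates the talk, and the schema details are illustrative rather than KubeVirt's actual definition:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      # the name must be <plural>.<group>
      name: virtualmachines.kubevirt.io
    spec:
      group: kubevirt.io
      scope: Namespaced
      names:
        kind: VirtualMachine
        plural: virtualmachines
        singular: virtualmachine
        shortNames: [vm]
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            # a real CRD would spell out the spec; left open in this sketch
            x-kubernetes-preserve-unknown-fields: true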
A: Wherever there's been a decision point around how to implement something from a conceptual point of view, we are attempting to take this opportunity to reimagine a little bit the way we think about virtualization, and to take as Kubernetes-native an approach as possible, so that interacting with these virtual machines in this system is as natural as possible and feels as much of a common workflow as it is to interact with the application containers.
A: So, in terms of working through a basic use case for this: we'll start with a legacy VM that we have in our environment, and let's imagine that that VM is hosting an API or a service that our business relies on. We know right now we don't have a business case to containerize that immediately, but we want to continue to expand our capabilities and build services around our existing capabilities.
A: The idea is that we can finally bring these two types of workload together, and the resultant virtual machines can run side by side on the same host. There are also some potential consolidation gains, in terms of the ability to pack a host with both virtual machine and container workloads side by side.
A: We also have the ability, through using Kubernetes in this way, to leverage multiple existing ecosystems. So at the moment, for virtual machine networking we plug directly into the same networking as other pods. We also have the ability to use storage both from the Kubernetes ecosystem, but also from the OpenStack ecosystem via the Cinder plugin for Kubernetes.
A
One
of
the
interesting
things
there
and
where
we're
continuing
to
work
with
a
number
of
the
storage
vendors
is
around
defining
the
container
storage
interface
spec
for
kubernetes,
because
we
have
a
strong
interest
to
support
virtual
machine
work-wise,
which
requires
some
storage
capabilities
that
normal
kubernetes
work
was
don't
necessarily
currently
have
to
care
about.
So
things
like
cloning
are
very
important
to
us
and
there's
no
standardized
way
to
expose
that
incriminated.
A: But that's something where we believe that bringing these types of workloads to Kubernetes is also going to allow us to improve the base that we're building on, for everyone to take advantage of as well. So at this point I'll hand over to Stu, and he's going to take you through a bit of a deep dive into the technology. Thank you.
B: So I'm probably going to cover a little bit of the same ground that Steve did, but that's actually okay, because repetition builds retention, and historically this has been kind of a deep talk. As Steve mentioned, we use custom resource definitions to actually model the virtual machine objects that we use. I've got an example of one up here on the right.
B
The
advantages
of
doing
that
is
that
it
is,
you
know,
absolutely
as
kubernetes
native
as
possible
that
way,
because
you
know
anybody,
that's
familiar
with
kubernetes
and
actually
show
of
hands
who
is
familiar
with
kubernetes
at
this
point:
okay,
fair
enough
about
half
and
half
so
yeah.
As
far
as
these,
these
definitions
go.
B
It's
all
yeah
Mille,
which
is
you
know
very
human,
readable
in
terms
of
configuration
and
inside
of
our
custom
resource
definitions
that
we're
setting
up
we're,
aiming
to
to
be
as
expressive
as
possible
in
terms
of
what
virtual
machines
are
going
to
need.
So
whether
that's
networking
stack
who's,
your
the
net
card
that
you're
using
the
model
of
the
the
architecture,
the
number
of
CPUs
memory,
all
that
sort
of
thing
is,
is
in
that
that
manifest-
and
in
our
example
here
we
have
one
gig
memory.
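As a sketch of what such a manifest looks like: the field names below follow current KubeVirt examples and have changed across releases (at the time of the talk the kind and disk types were named differently), so treat the details as illustrative rather than the exact manifest on the slide:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      name: testvm
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi            # the "one gig of memory" from the example
        devices:
          disks:
          - name: containerdisk    # boot disk pulled as a container image
            disk:
              bus: virtio
          - name: cloudinitdisk    # cloud-init is how the demo later injects a password
            disk:
              bus: virtio
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #cloud-config
            password: fedora
            chpasswd: { expire: False }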
B: The feature set that we're targeting is as close to libvirt as possible, and there's a very good reason for that: we're actually using libvirt under the hood in each of the pods. And as far as implementing it as a CRD, we also benefit by gaining any sort of Kubernetes-isms for all of the resources that Kubernetes manages.
B
Now.
As
far
as
the
CRD
goes,
we
would
if
we
just
define
that
and
left
it,
it
would
just
sit
there.
If
you
uploaded
a
virtual
machine
object
to
the
kubernetes
api
server.
It's
it's
not
going
to
do
anything.
You're,
just
gonna
have
a
record
on
the
on
the
system,
so
this
slide
sort
of
models
out
how
we
actually
put
that
together
and
it's
kind
of
information
dense,
so
I'm
going
to
spend
a
minute
here.
B
As
far
as
the
Cubert
model,
we
use
the
operator
pattern
from
kubernetes.
So
the
vert
controller
and
the
vert
handler
are
working
in
tandem
to
to
kind
of
handle
each
of
the
different
resources
as
they're
being
pushed
around
now.
I've
used
custom
resource
here,
instead
of
virtual
machine
being
deliberately
evasive
about
the
the
actual
name
of
that
we'll
get
back
to
that.
But
the
vert
controller
actually
monitors
for
these
virtual
machine
objects
using
a
Lister
watcher
or
a
shared
indexer,
which
is
a
kubernetes
object.
Reason
doing.
B: ...to actually start the virtual machine, which is at that point a libvirt domain. So as far as virt-controller, there's one per cluster; you only need one, and it's on the centralized system. As far as virt-handler, you have one per node, and it's monitoring only for objects on that node, and that dichotomy allows things to be distributed and to scale a little better.
B
As
far
as
scheduling
we
just
talked
about
that
virtual
machines
are
scheduled
as
pods
and
in
fact
we
could
actually
say.
Virtual
machines
are
literally
scheduled
via
pods.
We
schedule
a
pod
and
then
put
a
virtual
machine
into
it.
By
doing
this,
we
get
the
same
set
of
features
that
you
would
for
any
pod,
such
as
affinity
or
anti
affinity,
so
that
you
could
say
you
know
the
pod,
nay
or
excuse
me,
the
node
named
foo
I.
B: "this machine is incompatible with that", or, you know, using labels and selectors much the same, or you could even write a custom scheduler if you needed, because anything that is fair game for pods is fair game here, because we're using pods. And because we're using pods, each of the network services that you might need to expose on the virtual machine, such as if you've got a MySQL server, you could actually expose using the exact same Services and Routes that Kubernetes uses for pods.
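A sketch of both ideas together: pinning a virtual machine with a pod-style node constraint, then exposing it with an ordinary Service. The labels, node name and MySQL port are hypothetical, and the disk definitions are elided for brevity:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      name: mysql-vm
      labels:
        app: mysql-vm                 # propagated to the launcher pod, so a Service can match it
    spec:
      nodeSelector:
        kubernetes.io/hostname: foo   # "the node named foo" from the talk; any pod-style constraint works
      domain:
        resources:
          requests:
            memory: 1Gi
        # devices/disks elided in this sketch
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      selector:
        app: mysql-vm                 # selects the pod carrying the VM
      ports:
      - port: 3306
        targetPort: 3306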
B: Once again, you can use labels and selectors to determine that. Now, because virtual machines live in pods, they're transparent to the higher level, which is something that Steve sort of touched on in terms of the management: you're only managing the pod. The fact that the workload on that pod happens to be a virtual machine is irrelevant, and that's basically what we have today anyway. Now, when virtual machines leverage the pods by doing that, the metadata, so, for instance, the labels and annotations,
B
What
have
you
and
I'll
show
that
a
little
bit
of
that
later,
we
can
actually
take
advantage
of
each
of
those
sub
resources
in
order
to
do
a
little
accounting.
We
get
the
CPU
and
memory
resources,
especially
the
memory
comes
via
the
pod,
directly
affinity
and
anti
affinity.
Once
again,
so
you
can
schedule
the
node
that
you're
going
to
be
on,
as
well
as
the
storage
which
we'll
get
into
in
a
bit.
B
Actually,
we'll
get
to
it
now.
So
as
far
as
storage,
the
way
the
pods
work
in
general,
is
you
don't
put
any
data
that
needs
to
be
permanent
on
the
pod
itself,
because
that's
supposed
to
be
ephemeral,
you
stop
the
pod,
restart
it
or
delete
and
riad
and
your
data
is
gone.
So
the
method
that
kubernetes
gives
for
pods
for
persistent
storage
is
called
a
persistent
volume
and
is
bound
with
a
persistent
volume
claim.
B
We
use
those
one-to-one
inside
of
cube
vert,
where
every
persistent
volume
that
is
declared
in
the
virtual
machine
manifest
will
be
mapped
into
the
virtual
machine.
We
give
you
a
way
to
either
use
a
mount
point
or
a
device
node
in
order
to
put
it
in
the
correct
spot.
Of
course,
it
could
be
mutable
or
immutable.
For
instance,
if
you
had
one
common
boot
path
of
slash
as
far
as
mount
goes,
you
wouldn't
want
that
to
be
necessarily
shared
or
to
be
scribbled
on
by
one
VM.
B
If
it
was
shared,
so
you
could
set
it
to
be
read-only
and
then
allow
multiple
uses
of
it
at
the
same
time
or
it
could
be
a
mutable
or
immutable.
Excuse
me
where
it
was
actually
for
your
application
program
data,
for
instance,
a
database
where,
from
one
boot
up
to
the
next
it
needs
it's
it's
non
transient
memory
and
as
a
side-effect
of
using
the
kubernetes
persistent
volume
methods.
Is
that
anything
that
kubernetes
can
do?
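A minimal sketch of the pair: an ordinary claim, and the volume stanza a virtual machine manifest uses to map it in as a disk. The names are hypothetical:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: vm-data
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 10Gi
    ---
    # ...and, inside the virtual machine manifest, the claim is referenced as a volume:
    #   volumes:
    #   - name: datadisk
    #     persistentVolumeClaim:
    #       claimName: vm-data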
B: But the thing about persistent volumes is that, for a newcomer or beginner, there's a little bit of friction, because they're not exactly easy, I guess, to set up. So we are developing a couple of different utilities alongside KubeVirt that can actually help in that arena. The Containerized Data Importer is one of them, and that's actually on GitHub; there is a link, if you get hold of this slide deck, that links to kubevirt/containerized-data-importer. This is a declarative Kubernetes utility.
B
It
works
much
like
the
operator
patterns
that
cube
root
itself,
uses
where
it
monitors
for
virtual
machines
that
have
the
specific
markings
that
are
needed
for
this.
The
two
use
cases
that
we've
identified
at
this
point
are
either
to
download
an
image,
for
instance
just
a
URL,
so
that
you
can
obtain
an
image
and
just
insert
it
directly
into
a
virtual
machine
and
go
or
we
have
the
notion
of
copying
and
images
from
a
read-only,
scoober,
Nettie's
namespace,
and
what
that
basically
means
is.
B
You
would
designate-
and
it's
up
to
you
as
the
administrator
of
your
cluster,
a
namespace
that
each
of
these
containerized
data
importers
from
other
namespaces
would
be
able
to
use
in
terms
of
copying
the
data
out.
So
you
could
just
have
a
list
of
persistent
volumes
that
could
be
copied
cloned
if
you
will
to
a
different
namespace
and
then
used
that
way.
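Early CDI worked by watching for claims carrying special annotations; a sketch of the download-from-a-URL case is below. The annotation key follows the containerized-data-importer documentation of that era, and the endpoint URL is a placeholder:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: fedora-disk
      annotations:
        # tells the importer to fetch this image into the backing volume
        cdi.kubevirt.io/storage.import.endpoint: "https://example.com/fedora.qcow2"
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 10Gi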
B: If you had one golden-master sort of repository, you wouldn't want that to be messed with, but it would be able to be duplicated on demand. One of the other projects that we're using in terms of trying to make this a little easier is virt-v2v; once again, it's under the KubeVirt repo. The approach here is, instead of taking a disk image from somewhere, it's taking the entire virtual machine from, you know, other sorts of solutions, such as VMware, or, you know, an OVA.
B
There
are
some
limitations.
Of
course
it
needs
to
have
just
a
single
NIC,
and
the
reason
for
that
is
because
we
only
have
one
NIC
that
we
can
define
on
the
cube
root
side
right
now,
and
it
can
only
have
one
single
attached
disk
and
the
reason
for
that
is
that
we
don't
necessarily
know
the
correct
order
of
the
mount
points
if
you've
got
them
nested
on
each
other.
B
A
couple
good
reasons
to
do
that
when
migration
is
in
play,
which
currently
it's
not
working
and
I'll,
get
to
that
in
a
bit
when
you're
migrating.
You
want
to
keep
as
much
of
your
identity
as
possible
across
that
process.
So
you
want
to
keep
that
that
IP
the
same,
if
possible,
there's
also
no
difference
in
terms
of
accessing
via
the
pod
or
via
the
the
virtual
machine.
B
So,
label
selectors
I'm
going
to
move
a
little
faster
here,
because
I'm
noticing
the
time
kubernetes
is
doing
some
work
in
this
areas.
Well,
they're,
actually
trying
to
expand
the
different
networking
features
that
they
have
and
will
be
able
to
take
advantage
of
that
when
that's
mature
offline
and
running
virtual
machines.
B: Offline and running virtual machines: ...you know, in order to start it again, and so we introduced this concept of an offline virtual machine. One thing to note about that is that there's discussion right now about renaming it; the one thing that we can agree on as a community is that the name "offline virtual machine" is very confusing, because it's not offline. It basically represents something that's stateful; it could be online or offline, and so it's causing a little bit of cognitive dissonance, so we're trying to get away from that nomenclature.
B: We also have a virtctl client tool, which is separate from kubectl, which is what you can use most of the time. There are, unfortunately, a few different features that don't exist in kubectl, and there's no real easy way to map them: for instance, the console, or VNC if you wanted to connect that way, because it takes a continuously running socket, and that does not work well inside of a REST server. So we need to create a client
B
That's
able
to
look
that
up
and
proxy
that
for
you,
we
also
have
starting
and
stopping
of
offline
virtual
machines,
which
will
probably
be
named
two
virtual
machines
at
some
point
soon,
and
the
reason
that
that's
here
I'll
actually
show
that
in
the
demo,
but
it's
to
do
it
with
Cube
control
itself
means
modifying
a
record
on
the
fly,
and
that's
it's
a
little
unfriendly
to
do
so.
This
cube
or
vert
control
can
actually
either
be
a
cube,
control
plug-in
and
the
syntax
is
below,
or
it
can
just
be
literally.
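A sketch of the client in use; the subcommand names match current virtctl, and the VM name is hypothetical:

    # connect to the serial console or VNC (long-lived sockets that kubectl can't proxy well)
    virtctl console testvm
    virtctl vnc testvm
    # flip the stored running flag without hand-editing the record
    virtctl start testvm
    virtctl stop testvm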
B: I'll just show you what the system namespace actually looks like. The four lines here on the bottom are actually KubeVirt-specific; everything else is actually Kubernetes. So we've got virt-api, which is a proxy that basically takes all the KubeVirt-native calls that we handle, and if it doesn't recognize one, sends it on to Kubernetes itself. That's a little contrary to how Kubernetes wants to do it, but we got there
B
First,
we
implemented
this
before
they
had
the
concept
of
a
user
API
server
which
we
would
love
to
move
to,
but
there's
been
a
little
difficulty
with
you
have
to
ship
your
own
at
CD
server
and
that's
a
little
difficult,
vert
controller.
As
I
mentioned,
we
have
one
on
the
system.
You
see
the
reason.
Why
is
there's
actually
one
that's
for
redundancy,
so
it's
actually
lost
the
leader.
B: election and is not active right now. And because I only have one node, I have one virt-handler. And so this is actually just straight off of the GitHub repo here, so this manifest actually exists if you wanted to clone the repository and look at it. But what we've got here, this is the virtual machine I'm going to boot in a second: it's a Fedora image, it's requesting one gigabyte of memory, and we're injecting a password of "fedora" for that user account.
B: This line right here... so in a moment here, the pod is now running as I was talking, and so if we look at that YAML again, the virtual machine is actually running, and we can actually connect to its console real quick and show you that it actually is working; it's just a standard boot sequence that you'd be familiar with from other machines. I'm going to go through that process.
B
I,
don't
know
if
we
need
to
snes's
solely
sit
there
and
watch
the
whole
boot
sequence
or
there
you
go
so
can
even
log
in
fedora
and
the
password
was
Fedora
from
the
manifest.
You
could
change
it
right
there
if
you
wanted,
and
so
there
you
go.
That
is
a
virtual
machine
and
if
I
want
to
go
ahead
and
delete
that
I
use,
you
know
we're
deleting
records
because
we're
in
the
declarative
sort
of
model
here.
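The demo flow, roughly, as shell commands; the file and resource names are illustrative:

    kubectl create -f vm-fedora.yaml   # post the virtual machine record to the API server
    kubectl get pods                   # a launcher pod appears and goes to Running
    virtctl console testvm             # watch the boot sequence; log in as fedora/fedora
    kubectl delete vm testvm           # deleting the record tears down the VM and its pod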
B: And that will actually clean up both that and the pod, which will be terminating here and will actually disappear in a second. I'd also like to real quick show offline virtual machines; we have an example in there as well, and we want to look up the alpine-multi-pvc one. So this is basically very similar; we're only going to request 64 megabytes of RAM in this case, and we actually have two persistent volume claims that we're using; we've named them PVC disks one and two.
B
An
offline
virtual
machine
created
and
if
you
look
at
the
amyl
that
is
created
here
for
this,
you
see
running
equal
false.
This
is
basically
saying
we've
defined
the
virtual
machine,
but
we
haven't
actually
started
one
and
you
can
see
that
if
we
go
in
and
look
at
the
virtual
machines
there's
just
not
there.
So
as
far
as
this,
the
or
ovm,
if
I
edit,
it.
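A sketch of such a record; OfflineVirtualMachine was the kind's name at the time (later renamed VirtualMachine), and the API version and disk/claim names below are illustrative:

    apiVersion: kubevirt.io/v1alpha1
    kind: OfflineVirtualMachine
    metadata:
      name: vm-alpine-multipvc
    spec:
      running: false            # defined, but no live virtual machine until this is true
      template:
        spec:
          domain:
            resources:
              requests:
                memory: 64M     # the 64 megabytes from the demo
          volumes:
          - name: pvc-disk-1    # hypothetical names for the two PVC-backed disks
            persistentVolumeClaim:
              claimName: disk-alpine-1
          - name: pvc-disk-2
            persistentVolumeClaim:
              claimName: disk-alpine-2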
B: So, as far as this OVM: if I edit it, if I can type, at this point now, if we look at VMs, it has started one, and so it's kind of the same process as before. Obviously, doing that every time you wanted to start or stop a virtual machine is not ideal, and so, as I had mentioned, we have the virtctl command, which actually has a stop. In fact, if I just run it, it'll show you the commands that it actually has available to it. So: stop ovm alpine-multi-pvc.
B
Was
it
that's
a
mouthful,
but
if
you
then
look
at
the
virtual
machines
no
longer
running,
and
if
you
look
at
the
offline
virtual
machine,
it
should
say
stopped
again,
because
that's
all
that
is
that
command.
This
is
just
a
shortcut
to
actually
edit
this
running
equal
false
flag
for
you.
So
as
far
as
the
demo
I
think,
that's
pretty
much
all
I
wanted
to
show
there.
So
I
will
turn
it
back
over
to
Steve
for
future
plans.
All.
A: All right. So, first of all, I just want to talk very quickly, since Stu focused mostly on the KubeVirt project, from a Red Hat productization perspective. At the moment we're really just focused on getting a preview of this technology into people's hands, and you can see here an example of that from the OpenShift Service Catalog. So the idea is that this summer, if you're an OpenShift customer, you'll be able to access that tech preview.
A
But,
of
course,
if
you
want
to
try
the
community
bits,
I
don't
have
the
links
up
on
the
screen
in
a
second,
you
can
go
to
the
Qbert
repo.
You
can
grab
the
demo
script
from
the
demo
repo
under
that
organization
and
basically
run.
What's
you
had
running
here
today.
For
the
most
part,
all
those
EML
files
are
there
running
or
installing
cuvette
on
a
cluster
is
fairly
non-invasive,
so
it
doesn't
need
to
modify
the
hosts
directly.
A
You
basically
deploy
it
by
running
manifest,
so
we
also
have
some
ants
or
play
books
to
help
you
out
as
well.
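A sketch of what that deployment looks like; the manifest name and namespace follow the early releases, but treat the exact paths as illustrative:

    # install the KubeVirt components into the cluster from the release manifest
    kubectl create -f kubevirt.yaml
    # virt-api, virt-controller and virt-handler appear alongside the core components
    kubectl get pods -n kube-system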
A: In terms of future plans, from a wishlist perspective, some of the things we want: additional VM lifecycle actions. At the moment, you know, we demonstrated start and stop; I would like to do things like reset and hibernate in particular because, as was mentioned with the current approach to idling, hibernate would potentially give us a concept much closer to actually idling the VM than we have today; and supporting more turnkey storage solutions.
A
So
we
mentioned
that
via
the
interfaces
we
have,
we
can
theoretically
use
any
storage
solution
in
the
kubernetes
or
OpenStack
ecosystems,
but
we
need
to
test
more
of
those
and
get
more
of
those
into
CI
networking
at
the
moment.
We're
cheating
a
little
bit
we're
taking
the
IP
from
the
pod
and
giving
it
to
the
VM.
A
Now
we
would
like
to
use
multi
networks
to
actually
have
an
IP
for
the
pod
and
and
for
the
VM,
but
also
to
enable
things
like
SRV,
sidecar
containers
for
sto
and
so
on,
and
then
the
other
thing
that
we're
focused
on
is
thinking
about.
How
can
we
get
the
VM
constructs
into
things
like
replica
sets?
Daemon
sets
and
other
kubernetes
like
constructs
that
people
want
to
use
for
scheduling
their
workloads,
so
in
terms
of
community.
At
the
moment
this
is
in
a
github
organization.
We
have
mailing
lists
on
the
website.
A: you will find instructions for running KubeVirt on a number of common platforms, so things like Minikube, Minishift and so on, and we're also available on IRC and Twitter. At this point, I think we have a little bit of time for some questions, and I see someone's been waiting very patiently at the microphone so far away.
C: [question not transcribed]
A: Yes, so this one is touching on the Kata versus KubeVirt thing quickly again. Just to start, one of the things I should have mentioned is that we do see Kata as very much a complementary use case, not competitive. So we see a world where both of those coexist to some degree, because both have value to add for those separate use cases, and also, because we're sharing the KVM and QEMU elements of the stack,
A: we have actually worked quite closely with Intel and others on how we can continue to improve the virt stack. In terms of seeing this used in production: I think we see it as a somewhat inevitable reality that people are going to have workloads that are composed of both virtual machines and containers that are part of one complex application that they want to orchestrate together, and in terms of an emerging technology, we see this as a very promising approach to doing that. I mentioned from a product standpoint we're only looking at a technology preview
A: this summer. We don't have a fixed roadmap right now for when we would go to full production support, but certainly the plan is to get there. We're going through that planning process right now, and part of that is putting it in people's hands and basically taking feedback. We're very interested right now in what people want to do with the tool, and in taking that into account as we plan beyond this initial preview phase. Okay.
D: [question not transcribed]
A: I can probably speak to that, because I think that goes back further into the history of the project. So, I think, for the people who don't know: I mentioned KubeVirt is defined as a custom resource definition, which means it's a top-level API object. Contrast that with the way Kata Containers integrates: it's a container runtime interface implementation.
A
So
what
that
means
is
that
when
it
comes
to
all
of
the
features
of
the
virtual
machine
or
the
knobs
as
you
will
they
put
it
through
the
metadata
annotations
because
they
don't
have
a
top
level.
Api
object
for
Kutta
containers
that
works
quite
well,
because
the
metadata
annotation
typically
is
just
by
trusted
or
not
for
a
full
virtual
machine
workload.
A
We
felt
that
was
a
little
problematic,
because
once
you
really
get
into
all
the
knobs
of
traditional
worklet
expects,
then
your
metadata
blob
becomes
quite
expansive
and
we
really
wanted
to
like
when
we
look
at
the
complexity
of
the
existing
live
XML.
For
example,
we
wanted
to
have
a
rich
API
object
to
interact
with,
and
also
we
have
to
have
objects
for
other
things
as
well.
So,
for
example,
right
now
we
have
a
concept
of
a
virtual
machine,
preset
object
at
the
top
level.
A
We
know
we
want
to
take
a
traditional
workload
and
ideally
make
it,
so
you
don't
have
to
change
it
at
all
to
be
able
to
bring
it
in,
although
you
may
change
it
later,
as
I
mentioned
to
start
decomposing
it
and
those
things
typically
expect
a
lot
of
very
specific
settings
depending
on
the
workload,
and
we
know
we're
going
to
need
those.
So
the
objects
give
us
a
way
to
expose
that
in
a
very
neat
way
to
the
user.
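A sketch of the preset idea: settings defined once and applied to any virtual machine whose labels match. The kind and fields follow the VirtualMachinePreset of that era (later VirtualMachineInstancePreset), and the names and values are hypothetical:

    apiVersion: kubevirt.io/v1alpha1
    kind: VirtualMachinePreset
    metadata:
      name: legacy-workload-defaults
    spec:
      selector:
        matchLabels:
          kubevirt.io/flavor: legacy   # hypothetical label on the target VMs
      domain:
        cpu:
          cores: 4                     # the kind of knob a traditional workload expects
        resources:
          requests:
            memory: 4Gi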
E: [question not transcribed]
A: So, in most of the demo scripts, for simplicity, the Alpine image and the Fedora image are coming straight from Docker Hub, for example; we can quite obviously also use other registries. Where we mentioned the cloning around storage: in a realistic production deployment, what would actually happen is that we have a concept of a template, and for fast instantiation you want to be able to use storage-assisted cloning to bring that template into a new VM each time, really quickly.
A: So you'd set it up that way; you don't have to, though. So even now we can use the storage-assisted cloning if it's there, but in a really simple demo example we just pull it from the registry, because we don't want to have the overhead necessarily; like, if I run it up in Minishift or Minikube, I don't necessarily have a storage implementation there that has cloning. So for those examples we typically just use the registry, grabbing the actual image, yeah.
B: [inaudible]
A: So, right now we've primarily tested with Gluster, in the context of CNS; we've tested with the registry disks, yeah; and we've also actually used Ceph via Cinder, as the primary way we vet the Cinder driver, although eventually we'd probably use a native Ceph integration with Kubernetes as well. But yes, certainly the intent is, wherever it's available, to take advantage of cloning and storage-assisted migration and things like that. It's just that at the moment we've really been focused on the very basics, and then it's actually not the storage that's the limitation!
E: [question not transcribed]
A: It's not incompatible, and having been the Nova product manager at Red Hat in my previous life, and lived through this whole evolution of joy, I know that those things are needed, and they are on the backlog as things we want to do. So basically, all those things that can be loosely grouped as enhanced platform awareness, we want to implement. Some of them are going to be easier in Kube than others, because there's a group that's been working
A: on what we refer to as performance-sensitive apps in Kube, and they've done some of these things for us already from a Kube point of view, which then, from a KubeVirt point of view, we can just take advantage of. There are other things they haven't done yet, and we're going to have to work on the Kubernetes-native integration to do that. It's going to...
A: Exactly. So we get the benefit of all the things that are in Kubernetes, and for anything we want that isn't in Kubernetes, we need to go and do it, basically; which, from a Red Hat point of view, we see as a good thing, because ultimately we will improve Kubernetes as well. It's going to take some time, right?
G: We usually use a customized OS image, which is built by something like diskimage-builder, which is one of the OpenStack projects, and actually, I've seen in the demo you just put the container image in the pod spec. Sorry, should I use another way to build the OS image for this solution?
A
The
main
so,
on
the
back
end,
we
are
very
focused
on
using
raw
images,
but
I
think
the
container
I
stuttering
importer
currently
will
convert
a
cue
cat
to
as
well
and
we're
not
getting
too
far
in
like
we're,
not
introspecting
a
guest
at
all
right
now.
So
basically,
you
could
use
anything
that
builds
a
cue
cat
to
a
roll.