From YouTube: A Practical Guide To KubeVirt
Description
KubeVirt is a robust Virtual Machine management infrastructure that runs on and leverages the core concepts of Kubernetes. The APIs used by KubeVirt will feel very familiar to the seasoned Kubernetes user, but that doesn't cover everybody.
This session is designed to arm users with the practical experience they'll need to deploy Virtual Machines using KubeVirt. We will start with a Virtual Machine running on a local QEMU instance and, using concrete examples, discuss the steps needed to move it to the hybrid cloud. This will cover the Custom Resources used by KubeVirt as well as other considerations such as storage and networking.
A: Hi everybody, I'm Stu Gott, and this is A Practical Guide to KubeVirt; hopefully, a guide for the rest of us. Just to state the current state of the world: containers are increasingly becoming the de facto standard for packaging applications, and Kubernetes and OpenShift are becoming kind of the de facto way that we do that.
A: But that's for new applications. When you start talking about virtual machines, I've heard some people say they're going away. Well, no, they're not: for business reasons, it's hard to redo some applications, and for technical reasons it may be impossible to do that. For instance, if you need Windows in the machine, or if you attended the unikernels talk yesterday, that would not be something you would put directly in a container.
A: So we do this by using a custom resource definition that we drop into existing Kubernetes clusters. Now, this is really important to state: one of our requirements for ourselves is that we do not allow modification of the Kubernetes cluster before we deploy. In other words, we can't change container runtimes.
A: We can't add system accounts or what have you; it all has to be done as part of our deployment, or it can't be done. And so by doing this, we extend the Kubernetes infrastructure in, you know, as Kubernetes-native a way as possible. And so, by doing this, the virtual machines are actually inside a container. Some solutions out there, such as Kata Containers and, I believe, Virtlet, may actually modify the container runtime.
A: That's something we're explicitly trying not to do, because we don't want to be modifying that ahead of time. Now, in the future that might be a restriction that's lifted, because dynamic container runtimes are something that may come to Kubernetes in the future, but for now that's kind of a hard and fast rule, and one of the reasons that we're doing it this way.
A: So, for the way we implement this, we actually are using a custom resource definition. I've got an example of one over on the right; it's basically just a YAML file, for those who haven't seen this before. For those who have seen Kubernetes constructs before, this should look pretty familiar. In this case, the only thing special about it is the kind: the kind is VirtualMachineInstance. So virtual machines here have their own kind, and this gives us the ability to express all common virtual machine parameters...
A: ...such as memory, CPU, and the like. Because we're implementing this as a custom resource definition, we also inherit RBAC rules, so users are only allowed to modify things in the namespaces they're assigned to, and what have you.
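For reference, a minimal VirtualMachineInstance manifest along these lines looks roughly like the following. This is a sketch, not the slide from the talk: the API version shown is the current kubevirt.io/v1 (early KubeVirt releases used kubevirt.io/v1alpha2), and the name and disk image are illustrative.

    apiVersion: kubevirt.io/v1          # early releases used kubevirt.io/v1alpha2
    kind: VirtualMachineInstance
    metadata:
      name: demo-vmi                    # illustrative name
    spec:
      domain:
        cpu:
          cores: 1                      # common VM parameters live in the spec
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:                # ephemeral disk pulled from a container registry
            image: quay.io/kubevirt/cirros-container-disk-demo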
A: So here's a little bit of the workflow. This is a busy slide, so if I could take a minute to explain it: when the user posts this custom resource to the system, that's a VirtualMachineInstance, so that's actually just a record in, you know...
A: ...in the etcd cluster. We've got a controller, virt-controller, which is monitoring for changes to custom resources, or to VirtualMachineInstances in this case, and when it sees one, it actually schedules the pod, and that's all it does. At this point we just schedule a pod, and you can see that, you know, this is the third step here. Now, the virt-controller is a cluster-level resource, so its only job is to schedule pods. Then, on each of the individual nodes...
A: ...virt-handler is running, and that's another controller we have. It is looking for these pods that have a special label on them, so that it knows that it owns that pod, and it will then schedule starting the virtual machine inside of it. Now, there's a little bit of hand waving there, of course, because I said "start a virtual machine in a container that's already running". So what we're actually doing is, we've got a daemon called virt-launcher inside this pod that's actually doing that work. You know, just full disclosure there.
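On a working cluster you can see that relationship directly; a quick sketch, assuming the standard label KubeVirt places on its launcher pods:

    kubectl get vmis                                  # the VirtualMachineInstance records
    kubectl get pods -l kubevirt.io=virt-launcher     # the pods backing them, one per VMI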
A: All of the constraints that you can put on a Kubernetes pod still work, and you can even use a custom scheduler if you need to. Now, as for the applications within the virtual machines, because they are leveraging a pod, all existing Kubernetes constructs, such as services and routes, still work, and we'll get a little bit more into what those are later. But we actually use labels on the service itself to designate which pod the service belongs to, or where to route the packets.
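As an illustration of that label-based wiring, a sketch; the label key and value are made up here, and just have to match a label carried by the virtual machine and hence by its pod:

    apiVersion: v1
    kind: Service
    metadata:
      name: vm-web
    spec:
      selector:
        special: my-vm        # hypothetical label on the VMI, copied to the launcher pod
      ports:
        - port: 80            # traffic to the service is routed to the matching pod
          targetPort: 80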
A: Basically, so, virtual machines live in pods now. That's transparent to higher-level management systems, but you know, technically that's no worse than it currently is, before we did this project. Now virtual machines leverage pods: when we have a new virtual machine record, any labels that are on it will be translated over to the pod...
A: ...and namespace; that way they don't modify the original. As for the goodness that comes with Kubernetes: first, the network. We're actually using the pod network for the virtual machine. That's both bad and good. The good, of course, is that you're able to communicate with any existing container resource as it currently exists.
A: So we can also expose these services from our virtual machine using services and routes, as I've mentioned, to expose specific ports on your virtual machine to the outside world. We're looking at alternative networking options, such as multiple networks or different variants, but right now what we're using is just a tap device into the virtual machine.
A: The unfortunate part about that is we lose the ability to do live migration, because in the beginning we actually had libvirtd outside of our pod, at the cluster level; or rather, we had one libvirtd per node, and that allowed us to do migrations between... to move virtual machines between different nodes on your Kubernetes cluster. The trouble with that was, you know, a little bit of a rabbit hole, but we had some issues with PID namespaces and the like, where we were violating assumptions, and we just really couldn't do that.
A: It wasn't a good model, so instead we're actually doing one libvirtd per pod, and so libvirtd actually lives inside of the pod that we're deploying our virtual machine in. What that unfortunately means is libvirtd has no network access to the cluster or to other nodes, so we lose the ability to do live migration for now. Once we implement other networking options, we can reintroduce that. So, looking at the virtual machine client tool: this is virtctl.
A: One of the things that I sort of skipped over, or have glossed over, at this point is that we're looking at VirtualMachineInstances versus VirtualMachines. These are two different kinds of records. The VirtualMachine is kind of a static template for a VirtualMachineInstance. Point being, in the Kubernetes world, if you start a pod or stop a pod, you're basically creating or deleting a resource, and so that's what our VirtualMachineInstance is.
A: It's kind of an analogy to that, but we recognize that that doesn't really translate well to, like, people coming into this ecosystem, or the tools that try to translate to this system; that doesn't really work well. So we created the VirtualMachine object, and that's what I mean when I'm talking about starting and stopping: that's what we're doing, where you actually issue a virtctl start command on a VirtualMachine and we'll kick off an instance.
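In practice that looks something like the following sketch (the VM name is illustrative):

    virtctl start my-vm      # creates a VirtualMachineInstance from the VM template
    kubectl get vmis         # the running instance shows up as its own record
    virtctl stop my-vm       # tears the instance down; the VM object remains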
A: Now, there's two ways to run the virtctl client, and that is either as a standalone command, which is what I'll be using, or you can actually use it as a kubectl plugin, so it would come straight off of kubectl. And now, time for a demo. Real quick, before I do that, I'd like to explain what my system looks like. This slide, other than being an example of something complicated, is what my development environment looks like. Inside the physical machine we're actually running... and I'm...
A: ...sorry, I have to say it: a Docker cluster. I nearly got away with it, but it's on the slide. Inside of the, the d-word, Docker, we're running a Vagrant instance, and the reason we're doing this is to streamline development, so that everybody's machine looks the same and we're getting consistent builds and the like. Unfortunately, it adds a little bit of complexity that I can't get around. While I'm showing this as a demo, we'll be using kubectl commands directly from the physical machine. We've got a little sleight of hand where we're actually proxying...
A: ...these calls through the different layers here, down to node01. But when I start working on the networking: the edge, the light gray box, is node01; that's where your NodePorts actually terminate, and so I can't reach them from the physical machine. So I had to explain that before we get into this.
A: You know, all I did was take a 10 gig image, dd'ed from /dev/zero, and then run a QEMU install on it, and of course we skinned it with the DevConf logo so that we would have something recognizable. So, killing that off: I'm actually going to start a simple HTTP server here using just Python, which I wouldn't really recommend, but it works great for a demo here. So I'm gonna use port 9090, and of course, when you run this, it's exposing all the files in this directory.
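Sketched as commands, that preparation amounts to roughly the following; the file name and the exact invocations are assumptions, since only the outline was described:

    dd if=/dev/zero of=disk.img bs=1M count=10240   # blank 10 GiB image
    # (install the guest OS into disk.img with QEMU, add the DevConf branding)
    python3 -m http.server 9090                     # serve this directory over HTTP; demo only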
A: So here, this is the containerized data importer deployed, and what just happened was: I'm using the git tree, as you can see. There's a little bit of cruft there from the video, so ignore the second argument, or the last part of it; all it is, is the pointer to my git repo with the containerized data importer, and all I've done to it at this point is run make manifests. So it's just a straight...
A: ...you know, git tree that you can check out and run directly, and that's all I did here: just deploy these different pieces. So I've got a service account, the cluster roles that are needed to actually do these actions, and of course the controller that is monitoring for persistent volumes, er, persistent volume claims, that match its annotations.
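As a sketch, that boils down to something like this; the repository URL and manifest path are assumptions based on the project's layout:

    git clone https://github.com/kubevirt/containerized-data-importer
    cd containerized-data-importer
    make manifests                # generate the deployment manifests
    kubectl apply -f manifests/   # service account, cluster roles, import controller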
A: So, to show you what the persistent volume claim here looks like: we're using annotations, and that's all the containerized data importer needs in order to recognize the persistent volume claims that it is supposed to be taking action on. Here, as you can see, I've got port 9090, disk.img. This contrived IP address actually points back to my bare metal machine from when I ran this demo, and of course the key-value pairs: the key is the kubevirt.io storage import endpoint, for telling the containerized data importer where to go to fetch this image.
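Such a claim looks roughly like the following sketch; the annotation key shown is the one used by early CDI releases (later releases use a cdi.kubevirt.io prefix), and the IP address is a placeholder for the bare metal machine serving the image:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: devconf-pvc
      annotations:
        # tells the containerized data importer where to fetch the image
        kubevirt.io/storage.import.endpoint: "http://192.0.2.10:9090/disk.img"
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi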
A: That's created the persistent volume claim already, and we'll look at the pods here real quick to show that that's the case. Now, there ends up being a little bit of lag here, and that's an unfortunate side effect of the current implementation we're in. As you can see, it's now running: we're running the upload for the containerized data importer directly through the kube API server. Now, in this case that's a 10 gigabyte image, so we're moving 10 gigabytes through the kube API server. That's what's causing the lag; I'm sorry about that. That is what it is; in the future we hope to improve it.
A: And so we're gonna look at the VirtualMachineInstance itself here. This is what's gonna actually be using the persistent volume claim that we're creating right now. So, you know, this is basically the bare minimum that I would really want to define in the first place: I've got two gigs of RAM, and I've got a persistent volume claim, in this case mapping back to devconf-pvc, which is the persistent volume claim that we're actually creating, live, with the containerized data importer.
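The relevant wiring in the instance spec looks roughly like this (a reconstructed fragment, not the exact file from the demo):

    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: devconf-pvc   # the claim populated by the containerized data importer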
A: That's staged, because it's already running on the CentOS box, and we're going to be exposing port 30000 as a NodePort, which means that on the outer level, the light gray box, we're going to be using port 30000 to SSH. So over here in devconf.yaml... I'm highlighting the incorrect thing, actually; what I was wanting to show is the label.
A: So we've got a cluster IP of 10.99.130.44, and the port that we're exposing is 30000, and we can check the endpoints real quick. Services always have an endpoint, which is where they map to on the other end, and at this point, of course, the endpoint that we're mapping to is none, because we haven't created the virtual machine that maps to this yet.
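A NodePort service along these lines would produce that setup; a sketch, where the selector label is hypothetical and the ports match what was described:

    apiVersion: v1
    kind: Service
    metadata:
      name: vm-ssh
    spec:
      type: NodePort
      selector:
        special: devconf-vm     # hypothetical label matching the VMI's pod
      ports:
        - port: 22              # SSH inside the VM
          targetPort: 22
          nodePort: 30000       # terminates on node01 in this setup

Before the backing virtual machine exists, kubectl get endpoints vm-ssh reports none, which is exactly the state being shown here.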
A: So let's check again... and it looks like the import has now completed. So let's check the logs real quick to make sure that everything went okay, and it did: right there, "import complete", down near the bottom. And we don't need to worry about the warnings about file, because we didn't use that; we used HTTP. And here is the persistent volume itself that we're bound to; it's, of course, mapped to the devconf PVC. We don't need to worry about its name, because that will be looked up automatically.
A: It would take a minute for the machine to boot at this point, so... but that was basically it. The only other thing that I wanted to show was the service, and actually how we mapped from that virtual machine that exposed that service: the endpoint that I showed you a moment ago was mapped to the pod for that virtual machine, and because we're mapping the pod's IP through to the virtual machine, the traffic then terminated at the machine itself.
A: So then we were able to SSH in from the node IP itself, from something outside the cluster; you can just SSH into that box, which obviously, if you're gonna have a cloud-based virtual machine, is a very essential point.
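From outside the cluster, that boils down to one command (addresses and user are illustrative):

    ssh -p 30000 <user>@<node01-ip>   # NodePort 30000 forwards to port 22 inside the VM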
A: From here... yes, yeah, I was just gonna talk about the next steps. Of course, one of the things that I glossed over was that I was using local storage on this, because it's a single-instance machine; you can use other backends.
A: However, I chose not to do that because of the complexity of actually doing that on a single node. One of the things we'd like to work on in the future, of course, is making that a little easier to do, and multiple networks is, you know, another thing that I'd mentioned that's on our major wish list. But from there, I guess it's time to take questions.
B: All right, can you guys hear me all right? I want to make sure I understood: the containerized data importer, is that what it was called? (Yes.) Okay, it's basically just a utility that'll go grab a disk image over HTTP, right? (Yes.) But it runs as a container; is that what you call it, the container that handles it?
A: So we actually do have registry disks as a possible option here, so you know, you absolutely could do that. I think that in a production environment, I would imagine maybe that the persistent volume claim would have just a more universal appeal, but yeah, we certainly could put it into a registry as well; a container registry, right.
A: You know, when we instantiate the development environment, we stock it with an Alpine image and a Cirros image and a couple of others, just so that we have base images to run all our testing infrastructure on, and so yeah, we put those directly into a container registry and, you know, utilize those images inside KubeVirt as well.
C: One of the benefits of running virtual machines on top of Kubernetes is you get more than just, like, LXC and cgroups to do, like, sandboxing between different environments, and I was curious: are there either customers of Red Hat, or Red Hat itself, using it in an area where they have, like, pen tested this against direct containers, and the containers weren't secure enough and this was secure enough? Could you talk about that a little bit?
A: Yes, it is more secure; that wasn't necessarily our stated goal, in terms of something we were setting out to accomplish, but you're right, there's a lot stronger process isolation when you run services inside virtual machines like this. However, I would still point out that between other containers there's no isolation at that level anyway. So really, I mean, if you're looking at untrusted workloads in the virtual machine, then yes, this is great; if you're looking at not trusting anything on the cluster, you're gonna need stronger guarantees.
A: The level of process isolation, yeah. So that, I mean, that is one of the reasons that you would run a virtual machine: because you have stronger process isolation than you do with a container, which uses cgroups, namespaces, SELinux. I mean, those are strong guarantees, but in theory you might be able to do something, I don't know. So yeah, I mean, the virtual machine is a stronger guarantee, but that's not what we set out to do when we set this up. Thanks.
B: If you don't mind, I might try to answer his question; I think I understand it. Okay, so, like, I think you're thinking about it in reverse; like, you're thinking about it from a security perspective. He's saying it exactly right: don't think of it from a security perspective. But we should tell them how to think about it: think about it as a tool to pull a VM into an application, similar to, you know, a Kubernetes YAML file. Like, that's phenomenal, right? Because now I've got a database living in a VM.
B: I've got front ends living in a container, and it's a way to scope the entire application with a single application definition. That's the beauty of KubeVirt. On the other side of that, I would say something like Kata Containers is where you're now running a container in a VM. It still pulls the VM image... I'm sorry, the container image; it still uses all the container constructs that you care about. So now it's a packaging format, and in that scenario you're adding extra isolation.
B: ...you know, around a container. I think that's the way that you think about it in a security construct: now it's an isolated container, but I still get all the packaging format advantages. This is the opposite: you don't get the packaging format advantage, you get the "bring in old stuff that's in a VM" advantage, which is, like, the converse, essentially. Thanks.
E: So one of the interesting things I would like to see with KubeVirt is to sort of be able to handle the container capacity use case, in that, you know, Kubernetes is running along and an application you've defined for yourself needs additional resources, so it calls into KubeVirt and launches additional VMs that Kubernetes could then take advantage of, launching containers inside those new VMs. Has that been considered, or is that...
A: The big limitation to what we're setting out right now is that we're attempting to not modify the Kubernetes cluster, or not require it to be modified ahead of time. And because, in order to be able to run a virtual machine as a container, that requires a different container runtime than the default, we can't ask an administrator to necessarily do that, because that's going to increase the friction, at least for the near future.
E: I mean, I'm not looking for that necessarily; the isolation would be great, but just being able to launch more VMs. You know, at a certain point an application runs out of resources inside the existing VMs and needs to launch more VMs, and usually Kubernetes breaks at that point; it has to fall back on OpenStack or some other tool for launching VMs. I just thought that KubeVirt could be a way to actually also...
A: ...provide the resources to run on, right. I hadn't thought about that; that's an interesting angle.
A: The answer is yes, that's true. We are looking at other possibilities in terms of doing a monitor application that would be available. But of course, you know, how do you do that in a generalized fashion, if you're booting, putting up, a generalized virtual machine? Suddenly you're building, you know, your own VMware sort of infrastructure. So, yes, we're looking into all possibilities for limited cases.