From YouTube: KubeCon Office Hours: Cloud Native VMs with KubeVirt
Description
KubeCon NA 2020 Office Hours with KubeVirt contributors Pep, Andrew Sullivan, Peter Lauterbach
B
Thank you, Chris. Welcome, everybody, to day two of KubeCon, or day three, depending on whether you're indexing from zero or one; that depends on your programming language. Anyway, today we're going to have three different office hours, starting with this terrific one, with a good cross-section of the KubeVirt team.
B
As usual for these things, they're going to do a short presentation and a short demo showing you the technology, and then we're going to open it up for your questions. This is an office hour; we are here for your questions. They can be development questions, like what's coming in KubeVirt, et cetera, or they can be operational questions. All of that is fair game, and we have the right people here to answer. So, speaking of the right people, why don't we introduce everybody? Andrew, do you want to start?
C
I would be happy to. Hello, everyone, my name is Andrew Sullivan. I am a technical marketing manager with the Red Hat Cloud Platforms business unit, where I am responsible for, generally speaking, virtualization that isn't OpenStack. So I love all of our hypervisors equally; they are all my favorite child, including all of the ones that OpenShift happens to run on.
D
Hey, I'm David Vossel. I'm one of the core architects of the KubeVirt project, and I contribute to a lot of the other projects in the KubeVirt ecosystem. So I'm here to answer questions about KubeVirt.
G
I'm part of the Kubernetes Native Infrastructure community team, so we focus on upstream projects which are relevant to this Kubernetes-native infrastructure concept, and KubeVirt is one of our favorite projects in that field. So that's why we're here.
E
Hi, my name is Mike Hendrickson. I am a KubeVirt contributor working mostly on storage, where VMs meet storage. Most recently I worked on the offline snapshot and restore functionality for VMs. So if you have any storage-related questions, hit me up.
B
Terrific. Okay, so, like I said, any questions you have about KubeVirt, about virtualization on Kubernetes, et cetera, someone on the panel will be able to answer, so go ahead and put those in chat. In the meantime, for those of you who are new to KubeVirt, we're going to have a presentation followed by a quick demo. Oh good, we already have a question. Yes, I see that question, and we will take it immediately after the short presentation and demo.
G
Yeah, once I find it... sorry. Okay, so I will provide a very high-level overview of the KubeVirt project and why it's here, and I will start by looking at the context of how applications are being developed, deployed, and maintained nowadays. For quite some time, the answer to this question of how applications are developed, deployed, and maintained has been virtualization and virtual machines.
G
Many people have made a significant investment in deploying applications on VMs in multiple places: on-premise traditional virtualization, as Peter mentioned, or private and public clouds. The good news is that this works, and it has been working for a long time.
G
On the other hand, as you all know, we have seen that containers work too, and they have become very popular nowadays, so it's natural that the question arises: should I migrate all my applications from VM-based applications to containers?
G
Well, maybe the answer is yes, maybe the answer is no. Even if we decide, yes, let's do it, in any case this is not a straightforward process, and we might even decide not to, because, as mentioned, VMs work. The thing is, we believe that VMs are here to stay, at least for the foreseeable future.
G
We believe the reality is that applications will still be developed and deployed using both containers and virtual machines. So if the reality is that both VMs and containers are here to stay, the question becomes: how do we make this reality a bit easier to digest? How can we make our lives easier? We believe the answer involves bringing them together, since both virtual machines and containers are here to stay.
G
So let's try to make our lives easier by helping ourselves use them together. This kind of shifts the question from "can I migrate my application, which is currently based on VMs, to a container-based application?" to "can I maybe just migrate my VMs, as they are, to my container platform?" Maybe later I can decide to migrate to containers, maybe not, but in any case I don't have to choose; I have them both.
G
This is good for existing applications that you might have; you can just move them to the container platform. It's good for application developers; they can use one unified set of tools, the same pipeline, to develop and deploy applications using both containers and VMs. And it's especially good for infrastructure maintainers: you only have to maintain a single platform, and you can use the same tools to manage it, monitor it, et cetera.
G
You don't have to keep an eye on two separate worlds of VMs and containers. With this vision is how the KubeVirt project was started over four years ago, and since last year it's been a CNCF sandbox project, with the primary goal of running VMs alongside containers, both together on the same platform, and that platform being Kubernetes.
G
So the idea is to extend Kubernetes with a virtualization API that allows you to run VMs on Kubernetes. What does that look like? Well, one of the guiding principles in designing KubeVirt has been to be Kubernetes: Kubernetes is the platform, so we want to integrate with it and be as Kubernetes-native as possible.
G
If there are existing Kubernetes APIs that work for the needs of running VMs, let's use them. VMs, for example, need storage for their disks, and Kubernetes offers storage in the form of persistent volumes, so VMs can use persistent volume claims to access that storage. This only goes so far, though, because at some point custom APIs are needed to model VMs and their associated resources.
G
So again, KubeVirt extends Kubernetes the Kubernetes way, which is with custom resource definitions. There are a bunch of new CRDs to model virtual machines, virtual machine instances, migrations, and so on, along with the associated components, like the controllers, that go with each of those custom resources.
G
The API is not the whole story, though. Running VMs is an involved task, so there is always more to it.
G
There is a set of additional components that help make life easier for the VM persona looking at the Kubernetes API. A basic one is virtctl, which allows you to interact with the API; it's the equivalent of kubectl, but for VM-focused operations like starting a VM. That is just an API call, but virtctl gives you an easy way of accessing it. There are also whole projects like the Containerized Data Importer.
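As a sketch of that virtctl workflow (command names as documented upstream; the VM name here is illustrative):

```shell
# kubectl manages the Kubernetes objects; virtctl adds VM-focused verbs.
kubectl get vms          # list VirtualMachine objects
virtctl start myvm       # flip the VM to running (an API call under the hood)
virtctl console myvm     # attach to the VM's serial console
virtctl stop myvm        # shut the VM down
```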
G
I mentioned persistent volumes as a host for VM disks; the CDI, the Containerized Data Importer, helps you interact with PVs, importing and otherwise working with disks on Kubernetes storage. Networking is a whole story of its own: virtual machines typically have significant networking expectations that usually go beyond the simple default.
G
How do you handle the configuration of that complex networking across all nodes? nmstate comes to help there. All of those components end up being quite a pile: you need to deploy KubeVirt and the components, keep them healthy and upgradeable, and keep the VMs happy.
G
There's also a bunch of operators that take care of that task and help us.
G
So that's the high-level overview, but seeing is believing, so at this point I would like to hand it over to Andrew, who will actually show this in action and what it really looks like. Let me stop sharing here.
B
Okay, before we start the demo, we already have a question that I wanted to field on air. Kubified had a question about sharing qcow images via a container registry and using the CDI importer to make those available to VMs. Andrew, did you want to field that out loud?
E
If the container is in the proper format, which is documented, it's very easy; it should work. The image will get downloaded and imported.
B
Cool, awesome. Okay, if you want to go ahead with the demo, feel free. I just wanted to take that question while we had it.
C
Let me unmute myself first. All right, so I'm going to do what my product manager calls terrifying: I'm going to do this off the cuff and show exactly what we just linked here.
C
All right, so my registry URL, this is essentially the container image that I'm going to pull: it's on quay.io, under my namespace, fedora 32. All I've done here is follow the documentation that's available on GitHub to add the Fedora 32 cloud image, in this instance, into that container image, and then simply did a podman push to push it up to Quay, and now it's available.
C
The first thing I want to start with here: this is just a standard Kubernetes cluster. If I do a k get node, all I've got here is a simple three-node cluster. I deployed it, as you can see, two and a half days ago using kubeadm, running version 1.19.4, relatively straightforward. I didn't do anything extra in here. It's running in my lab, so I'm a little resource constrained, for anybody who watches the stream.
C
You know this already. I did deploy a couple of things, so if we look in here you'll see a few pods running. First and foremost, I am using Flannel for this; the SDN really doesn't matter, they should all work equivalently, but check the documentation just to be sure. And then you'll notice down here in the kubevirt namespace I've got a bunch of things running: this is the KubeVirt operator and the other supporting services for instantiating, managing, and controlling virtual machines.
C
Inside of my Kubernetes cluster, up here, I've also deployed the Containerized Data Importer. This is what implements all of those DataVolume custom resource definitions. I can do a k get crd and grep for kubevirt, and we can see all of those different CRDs that are defined in here. So, for example, here are our data volumes; down here we have our virtual machines, and so forth. These are what we're going to be using in order to create our virtual machine.
C
We can see I'm going to create a data volume named registry-image-datavolume. It's going to source it from this particular image registry, and it's going to give it five gigabytes of storage, which should be enough for that Fedora cloud image, although normally you would probably want it to be more realistic, you know, 10, 15, 20 gigabytes in size.
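A DataVolume along the lines Andrew describes might look roughly like this; the metadata name and registry URL below are illustrative, not the exact ones from the demo:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: registry-image-datavolume
spec:
  source:
    registry:
      url: "docker://quay.io/example/fedora32"   # illustrative image reference
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi          # room for the Fedora cloud image
```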
C
We can see that my data volume has been requested and bound to a PVC, as well as the scratch space that it's going to use. So if I look here and do a k get pod, we can see that we have our data volume importer pod running. This is one that has actually been spun up; it has reached out and it is now pulling down that image. It puts it in the scratch space first and then does whatever work it needs.
E
That's correct, and just to clarify what we're doing: the qcow2 image is downloaded to the scratch PVC, then it is converted from qcow2 to raw onto the destination, the target PVC, and then we make sure to resize the image to use as much of the target PVC as possible. That's basically all we do.
C
Pretty straightforward; it's not super complex. Normally, if I were downloading this, and I think, Kubified, when you asked your question you said you were pushing them to an HTTP or HTTPS server, you would be able to look at the importer logs and see the percentage as it's going through.
C
Here we don't have anything like that, because we're not actually importing it that way; we're not having to download it from a server. So we don't get that output. Instead, it's instantiating it on top of that container image, so you would be able to see, by doing a k describe pod, that it is pulling that container image down. Of course, keep a few things in mind when doing this.
C
If you have a 20, 30, 40 gigabyte container image, it's going to be on each one of your hosts; you're going to have to wait for it to pull down on each host that is used to create the data volume. It can take up some extra space, so just make sure pruning, that type of stuff, is automated inside of your infrastructure.
C
Yeah, so you can bring disks into KubeVirt in a number of different ways. Generally speaking, and again, Mike, I'm going to rely on you to keep me honest here: there are container volumes, which is what we just showed here; there's import from a URL endpoint, so HTTP, HTTPS, S3, et cetera.
E
And upload: if you have an image on your laptop and you can connect to your cluster, you can basically push the image to a PVC and boot up a VM from that.
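That upload path is exposed through virtctl's image-upload subcommand; a sketch, with illustrative names and paths (flag spelling varies somewhat by virtctl version):

```shell
# Push a local disk image into a new PVC via CDI's upload proxy;
# the resulting PVC can then back a VM disk.
virtctl image-upload pvc fedora-disk \
  --size=5Gi \
  --image-path=./Fedora-Cloud-Base-32.qcow2
```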
C
I should have known that; that was the one I was originally going to show. All right, so you see that my data volume has finished importing the disk. Interestingly, data volumes give a little bit of robustness here: sometimes I have inadvertently done things like accidentally delete the PVC that's backing a data volume, and it will very helpfully recreate it for me and re-import the disk. So sometimes you do need to be careful with those things.
C
Because I changed the name of our persistent volume claim, I need to find it down here. I'm doing this a little bit differently: here I'm using a persistent volume claim, and I noticed that some of my compatriots here might be looking at that funny. You could use a data volume here instead, and that would work just as well.
C
So let me copy this name real quick and put it in there, and now I have my... oops, I didn't mean to exit out of that. Now I have my virtual machine definition. Without doing too much scrolling, because I know that can be a little bit confusing, particularly for anybody who is watching online, I want to walk through a couple of the settings that are available in here. First and foremost, you notice that up here at the top I am defining a VirtualMachine.
C
Remember that KubeVirt extends Kubernetes itself and is natively running virtual machines inside of Kubernetes: they're deployed as pods to the Kubernetes cluster, running that QEMU, that libvirt process, on the hosts. Down here in the spec I am defining what my virtual machine actually looks like. We see here, underneath the domain, the number of CPU cores; if I look down below there's also a memory definition, and we're defining the disks associated with it. I have two disks here; one of them is the disk containing the operating system.
C
Here is my memory request that I mentioned up above, and then down below is the definition for all of those things I'm using. So we see down here, here's my cloud-init data; you can see my super secret password here, please don't steal it. And we should be done at this point, so I'm going to go ahead and create my virtual machine.
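A trimmed VirtualMachine manifest along the lines of what's being walked through might look like this; names, sizes, and the cloud-init payload are illustrative, not the demo's exact values:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: fedora
spec:
  running: false                  # start later with virtctl or by editing this field
  template:
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: os-disk       # the disk containing the operating system
              disk:
                bus: virtio
            - name: cloudinit     # cloud-init config disk
              disk:
                bus: virtio
      volumes:
        - name: os-disk
          persistentVolumeClaim:
            claimName: registry-image-datavolume
        - name: cloudinit
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: changeme          # illustrative, not the demo's secret
              chpasswd: { expire: False }
```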
C
And now, if I do a kubectl get vm, I have my virtual machine defined here as an object inside of the KubeVirt API. I can query these and see different information about them, and when I want to start that virtual machine, there are a couple of different ways I can do that. One, I could do kubectl edit vm fedora and find, down here, that there's a setting called running, set to false.
C
If anybody's watched me before, you know that I switch between them at random. Using virtctl, I can now interact with my virtual machine; I'll just hit enter here to see the available commands. I can do things like start my particular virtual machine, so I'm going to do virtctl start fedora, and we can see that it has scheduled our virtual machine to start. We will see this reflected in two very important ways.
C
We can see here that my Fedora virtual machine is running on node kube02 and it has been given this particular IP. Now I can use virtctl to connect to the console of that particular virtual machine; for example, I have Remote Viewer installed here, so I can do... oh, the second way, sorry, I've just distracted myself. The second way I can see that the virtual machine is running is I can do a k get pod, and we see we have a virt-launcher pod for my Fedora virtual machine.
C
And I know I just switched to a very tiny font; that's just because I was previously logged into a remote host and now I'm logged into my local desktop. Here you can see, with Remote Viewer, I can pop up a console and log into my virtual machine just like if I were sitting in front of any other virtualization host.
C
I can do a quick ping out to the world and see that everything is working the way that we would expect it to. Now I can do a number of different things with my virtual machine at this point. If I have an application, I can deploy my application; maybe I want to do something like a dnf install httpd. It'll probably take a minute because it needs to reach out and pull down all of its stuff.
C
Remember, virtctl has a couple of different commands; for example, we have this expose command here. I can look at the options available and, for example, I'm going to copy this particular command and do something like virtctl expose on my virtual machine instance, fedora. I want to expose port 80, and I'm going to give it a name of fedora-web. So I successfully exposed that; I can switch back over here and see that the install is finishing up.
C
Now, you could of course do all of this through cloud-init; there are a number of different ways of doing it, even Ansible if you wanted to connect in that way. But the end result here is I can do a curl on localhost and you can see that I get my hello world application back, and I can do the same thing from the nodes inside of my cluster. This node happens to be connected, right?
C
It's a part of the SDN, so it would be kind of cheating if I did it from there; plus, you wouldn't really want to access your application from the node, you want to access it from other pods. If you were sharp-eyed, you saw that I have a helper pod inside of this cluster, so I'm just going to connect into that pod.
C
From there I curl the service I just created, with default being the namespace that I'm in and svc because it was exposed via a service, and I get back my hello world. So, a pretty simple, pretty straightforward, relatively quick demo showing creating a virtual machine, exposing a service, and then consuming that virtual machine's service from other pods inside of the cluster. And we can of course do the same thing from inside of our virtual machine: we can reach out and connect to other containerized services using their service names as well, if we so choose.
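The flow just demonstrated, sketched as commands; the VM and service names follow the demo, and the in-cluster DNS name assumes the default namespace:

```shell
# Start the VM defined by the VirtualMachine object named "fedora"
virtctl start fedora

# Expose port 80 of the running VM instance as a Kubernetes Service
virtctl expose virtualmachineinstance fedora --port=80 --name=fedora-web

# From another pod in the cluster, reach the VM through that service
curl http://fedora-web.default.svc
```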
B
Yeah, let's go ahead and start taking some questions, because we already have one queued up; actually, we have two queued up. Question number one, and this looks like probably a question for Mike, although obviously anybody can take it: Emmett wants to know about integration with CSI, including support for volume snapshots.
E
As we mentioned earlier, one of the methods used by the Containerized Data Importer to create disks for new VMs is basically copying from another PVC, having a PVC source. If your storage supports snapshots, so it has a VolumeSnapshotClass and snapshots are all configured, then we can avoid what we call a dumb clone, which is copying all the bits, compressing them, and sending them across the network.
E
Instead, we'll create a snapshot and create a new PVC from the snapshot.
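A PVC-sourced clone of that kind might be requested like this (names are illustrative; CDI decides between a snapshot-based smart clone and a host-assisted copy based on what the storage supports):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-clone
spec:
  source:
    pvc:
      namespace: default
      name: registry-image-datavolume   # existing PVC to clone from
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
```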
E
Another place where we integrate with snapshots is the VM snapshot and restore functionality. We just recently introduced an API to snapshot a VM, and what that does is essentially store the VM configuration and references to snapshots for all the disks whose storage supports CSI snapshots. The idea is that you can restore your VM to any of those snapshots at any point in time.
E
So there's a VM snapshot resource and a VM restore resource that can restore your VM from the data in the snapshot. That's where we integrate with snapshots right now. Our goal is to really use the Kubernetes-native functionality as much as possible. We have VMs running on QEMU, and qcow2 has snapshots and all that, but we're really focused on leveraging the Kubernetes primitives.
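The snapshot and restore resources being described might be used roughly like this; the API group/version and names below are illustrative of the feature, not taken verbatim from the session:

```yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: fedora-snap-1
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: fedora
---
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: fedora-restore-1
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: fedora
  virtualMachineSnapshotName: fedora-snap-1   # which snapshot to roll back to
```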
B
Cool. Okay, Emmett, hopefully that answered your question. Oh, you have a follow-up, but first Kubified has another follow-up, which is: will the VMs survive a reboot? And let's do both versions of that: one, you reboot the pod, and the other, what happens if you reboot the node?
E
David can step in if he wants, but even for our VMs that don't use persistent PVCs, that are backed by a container disk, temporary storage, you can reboot and still get your state. You can't shut down and start your VM up again and get the same state, but you can reboot. And for PVCs, you can shut down and reboot and your state is there on the PVC.
B
Cool, okay. Another follow-up from Emmett: with the virtual machine snapshots, is there a way to engage an in-app agent to spin down applications running inside?
E
Yeah, so right now snapshot and restore are offline: you have to shut down your VM to do a snapshot. But we are currently working on online snapshots, and for that we're going to integrate with the work that's going on in the community.
E
Maybe you follow this: in the storage SIG there are a couple of different routes. There are execution hooks, which have been around for a while, and we will probably integrate with those; there's also this container notifier system; and there's also, at some level, maybe integrating with the QEMU guest agent to make callbacks into the VM for VM-specific operations.
E
You know, quiesce, freeze, stuff like that. We'll probably be doing things like fsfreeze for file systems at the container level and allow integrations into the VM for you to write your own custom logic. So if you want to flush tables in your database or do something similar, there will be hooks for that.
B
Cool, okay. So Archersman has a follow-up on the VM reboot question: if the node permanently shuts down, or shuts down for some lengthy period of time, and assuming that the PVC is on some kind of shared storage, will Kubernetes automatically bring it up somewhere else, or do you have to do extra steps?
E
Yeah, I mean, this is kind of, I think, an area that is still in progress.
E
There's work going on in the community. Assuming you have a PVC on shared storage and a node becomes unavailable, it typically involves administrator intervention to remount that somewhere else. I haven't kept up with the latest work, but I think there is still a manual step involved in those sorts of cases; you can't just automatically fail over a PVC that is a ReadWriteOnce PVC, for example.
E
With ReadWriteMany, of course, we handle live migrations, and David can maybe talk more about that. But with a ReadWriteOnce PVC, as far as I know, in a node failure situation there are a bunch of timeouts that kick in, and it still involves administrator intervention.
D
Maybe to add to that a little bit: think about a StatefulSet of size one and what happens there. A virtual machine with a PVC attached to it is really going to have similar behavior to a StatefulSet of size one. So when we're talking about HA virtual machines, think about the kinds of scenarios in which you would expect a StatefulSet to bring up that pod again.
D
If the pod goes down, in those same kinds of scenarios we would bring up our virtual machine again if there was some sort of failure of the pod. If we're talking about actual node failure, go and look at what a StatefulSet does and try to understand that behavior, because we're mimicking that in a lot of ways. If manual intervention needs to take place for a StatefulSet to recover, then expect the same sort of thing from us right now.
B
Okay, great, good question. So Kubified has another question regarding Windows VMs, particularly UEFI support and machines hanging on reboot. His problem is that when he tries to reboot machines, they seem to hang in a terminating state forever. I pasted the actual error message into chat for you guys if you want to take a look at it.
D
This could take a while. I would recommend starting by opening an issue on KubeVirt's issues on GitHub and providing as much detail as you can in the logs there.
D
We could spend a lot of time on it here. Specifically, what I'm looking for when you do provide those details is output like the YAML of the pod that's stuck in terminating. Since it's been stuck three days, that's pretty long.
B
Okay, oh cool. So an easier question regarding Windows VMs: JP Dade wants to know, can he run a Windows 2019 VM with IIS and .NET, and do that in KubeVirt? Go ahead, Peter.
F
Yes, the answer is absolutely yes. We validate against the Microsoft Server Virtualization Validation Program, SVVP.
B
Okay, so that's supported. Let's see.
F
We were literally talking about this yesterday: I've got a container platform, I'm running a VM inside of it, but I want to run containers inside that VM. And the answer is yes, that absolutely works, as long as the container workload is supported inside the guest OS that you're running. But then you kind of get into, okay, who's in charge?
F
Right, and are you running that as a container workload that has nothing to do with the container platform it's running on, or are you trying to integrate those things? Then it gets a little tricky. Andrew, I don't know if you had anything you wanted to add.
C
The only thing I'll add is that with Windows containers, just be aware that there are two types: there are the regular Windows containers, and then there are the Hyper-V containers, which provide kernel-level isolation. Being Hyper-V based, those would require nested virtualization, which KubeVirt does not pass through to those virtual machines.
C
So if you're using a Hyper-V container, that wouldn't work. But otherwise, yeah, the standard Windows containers, whether they're deployed with Docker or in some other fashion, would work inside of those virtual machines. And if Windows nodes ever become a thing for you and you want to have mixed clusters, you could of course take advantage of it that way too, if it's container based.
D
Yeah, we even see some interesting things happening on this topic, like treating KubeVirt as infrastructure as a service, similar to how you might use AWS, where people are launching virtual machines with KubeVirt to host Kubernetes. So you have clusters on top of clusters, and that's actually, surprisingly enough, a pretty large emerging use case for the whole project. So yeah, definitely turtles all the way down.
B
And you lose track of the fact that you actually own any hardware at all.
B
So I actually have one of my own questions, since we've gotten through the queued questions in chat. When you set the CPU and memory limits in the definition of the KubeVirt pod, are those hard limits? Because we're dealing with virtualization, am I actually capping things? If I set a memory limit of one gigabyte, am I actually capping the VM memory at one gigabyte?
C
My understanding is that when you instantiate the pod, there's a small amount of resources that are requested, which is loosely equivalent to a guarantee. However, there is no limit associated with that pod. So if I say my virtual machine has two CPUs, there is no limit of two cores associated with that.
B
Yeah, but because it's a VM, it is possible for me to give the VM a hard limit, whereas on a container they're all soft limits. So because it's a VM, it's possible for me to actually give it a hard limit. My question is: what is my way to do that via...
C
Via Kubernetes? I believe that is the CPU assignment associated with the virtual machine. Let's experiment, let's take a look, because I love breaking things while on a live stream. So let's do this: k get pod, and I want to attach to that particular pod. That's okay.
D
The pod might request, like, one tenth of a CPU, but the guest itself thinks that it has an entire CPU, so that's overcommitted. For memory, we're doing some tricks here to account for the overhead of our control plane, which exists within that pod, versus what we actually give the virtual machine.
D
So you probably asked for a gigabyte of memory, but you've got a little bit more than that. The reason is that when we see you ask for a gigabyte of memory, we're saying, okay, we're going to give a gigabyte of memory to that QEMU guest, and we're calculating an amount of overhead for some small components that we have in there, like libvirt and virt-launcher, and that accounts for the increase there.
D
We are adhering to a hard limit there. When we want to give guaranteed access to resources on a host to a virtual machine, we can use what I believe is called the guaranteed class of limits and requests.
D
If you have the requests and limits set to the same thing, so you require the same CPU request as you do limit, then we actually pin our guest QEMU process to the cores that are exposed within that pod, and the kubelet is going to give us dedicated CPUs there. That's the way to have a one-to-one relationship between what's on the host and what you actually want in your virtual machine.
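A sketch of the pattern David describes, setting requests equal to limits so the pod lands in the Guaranteed QoS class; the field values here are illustrative:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: pinned-vm
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true   # ask KubeVirt to pin vCPUs to host cores
        resources:
          requests:
            cpu: "2"
            memory: 2Gi
          limits:                       # equal to requests -> Guaranteed QoS class
            cpu: "2"
            memory: 2Gi
```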
B
I'm really actually more interested in memory limits, just because there are a lot of applications that are designed to maximize their use of memory, because they were designed around running on a dedicated machine or a VM; that would include databases and JVMs and a few other things. Having hard memory limits on those is a huge benefit to resource allocation, because you know they can't consume more than that.
B
Let's see, we have one quick follow-up from Kubified, asking whether or not nested virtualization is actually officially supported in OpenShift 4.6.
C
Yeah, so if you're talking about deploying KubeVirt VMs as nested virtual machines, say I have a parent hypervisor, a RHEL physical host, and I'm using libvirt to deploy OpenShift with pass-through nested virtualization, that will technically work. It's just not supported in the case of OpenShift Virtualization.
C
It's great for getting familiar, learning, experimenting, demoing, as the case may be, but that's not what you would want to do in a production environment, certainly not from Red Hat's perspective. From a community perspective, it's really largely up to you and your tolerance for risk and the performance penalty.
B
Okay, we have reached the end of our current questions. It's been an awesome session; it answered a lot of things I wanted to know. We've got a couple of minutes left, so I wanted to open it up to you guys for final thoughts.
B
Does anybody have any final thoughts they want to share, or other things they'd like people to look at after this session?
G
One thing that I would like to mention is precisely how to follow up. You can find us on the website, kubevirt.io, and there you will find references to our main communication channels. We have two channels on Kubernetes Slack, #virtualization and #kubevirt-dev, and there's also a mailing list, actually a Google group, kubevirt-dev.
F
Yeah, the final thought I've got is that we've talked to customers who come from both ends of the spectrum: some that are, hey, I love my VMs and I love my traditional virtualization, and when you pry it from my cold dead hands I'll think about Kubernetes; and then others who are like, hey, Kubernetes is awesome, and boy, I really want to use these VM paradigms in that context. We help them with that as well.
F
If you look upstream, there's lots of cool stuff going on with networking; we're doing all the stuff Michael talked about in terms of CSI and storage and data protection; and then even just simple stuff like GPU access for compute-intensive workloads. There's plenty happening, we're excited, and we're trying to close the gap as quickly as we can.
B
One other thing I would like to mention, because we have some people on who are very interested in the technology and in new uses and that sort of thing: we are planning a summit in February that will be partly to plan out KubeVirt 1.0, but also to connect with a lot of our advanced users.
B
Please follow the KubeVirt blog and/or the KubeVirt social channels for more concrete information about that, because we would really like to have you.
C
I will throw out one more thing, and I'm trying to dig up the link right now: there are Katacoda scenarios for KubeVirt, if one of you guys knows the link off the top of your head.
G
If you go to kubevirt.io, the first thing you will find there brings you to the Katacoda scenarios. Thank you.
B
Okay, well, thanks, everybody! For the people who asked questions in the session, I posted a link in chat for you to follow up on to collect your complimentary KubeVirt t-shirts; you can see Peter sporting the KubeVirt t-shirt there. Thank you, everybody, for participating; this was an awesome session. If you missed any part of it, it will be available as a recording on YouTube later today. Thanks, everybody, and have a great KubeCon.