From YouTube: OpenShift Virtualization: OS and applications deep dive with Andrew Sullivan and Rhys Oxenham
Description
During this stream we'll review OpenShift Virtualization and then dive into the different ways of providing the operating system to a virtual machine: container image, DataVolume, and PXE. We'll also look at connecting applications deployed in virtual machines to other OpenShift components using Services and exposing them to the world using Routes.
A: Good morning, good evening, good afternoon, wherever you're hailing from. Welcome to another episode of OpenShift.tv. I am Chris Short, Principal Technical Marketing Manager on the OpenShift team. I am joined today by my fellow teammates: the one and only Andrew Sullivan, Mister Virtualization himself.
B: Yeah, so, as you said, we are teammates. I tend to focus on anything virtualization related that isn't also OpenStack. I'm not an OpenStack person; I freely and openly admit this and make no claims otherwise.
B: That's good. Funny enough, we should be joined by Rhys here in a few moments, who is an OpenStack expert, on the other hand, interestingly enough. So yeah, between the two of us we've kind of got all of the aspects covered, but that's not why we're here today. That's only part of why we're here today, right? A small part. And I think I put in the request for this one; I filled out the form, which, by the way, is a process that works phenomenally.
B: Yeah, so the impetus for this particular session was a conversation that I had with, among others, Burr. Burr and a handful of other people all reached out to me within the span of about two days and were asking basically the same question, which is: how do I actually create VMs? When you look at the demos and all the other stuff, even the last time that we did an OpenShift Virtualization live stream, both Rhys and I kind of, you know: hey, here's...
B: ...we have this image and it's on a web server, and kind of skip over that whole part. So my goal today is to dig down into those virtual machine disks and DataVolumes: how do we create them?
B: How do we install an operating system on a virtual machine? And the second part of that is: how do we then expose those virtual machines? So, in a perfect world, fingers crossed, what I'd like to do is: one, install a virtual machine, then stand up a web server or something with Podman, and then expose that out to the outside world. And maybe we can play around with SSH or something like that, see how we can get access via SSH. Okay.
C: ...the world with their OpenShift and virtualization projects. And this is not my first time on the stream; we did a similar one, when was it, back in May, maybe.
B: All right, so now that you're here, Rhys, I won't delay any more. I won't do any more tap dancing and will just kind of move ahead. So what you're looking at here is an OpenShift Virtualization, or rather an OpenShift, cluster that's deployed inside of my home lab.
B: So the last time that Rhys was on, he was leading the OpenShift Virtualization session, and he walked through setting up exactly what I've got inside of here. So if you go to demo.openshift.com, look for the OpenShift Virtualization category, and then down towards the bottom is a link to the GitHub repo that has how to deploy all of this infrastructure, everything that I'm going to be using inside of here. Oh goodness, I did make some changes, which...
B: Yeah, so if we come all the way to the bottom, because they're alphabetized now (thank you, Eric), we have this self-paced learning and demo environment, and right here in this repository is a link to all of the good work that Rhys and August, and I think a couple of others, have done in order to make this stuff easy to deploy and consume in your local environment.
B: So one thing to note, and I know Rhys and I talked about this the last time: in our lab we are using nested virtualization. Just remember that that's not supported by Red Hat. So if you go to deploy in your environment, if you want a production environment, just make sure it's running on bare metal, not in a nested setup. All right, so with that out of the way, I'll quickly review my cluster here. Super simple: this is five nodes.
B: You see I have three control plane nodes and two worker nodes. In my instance these all happen to be running on a Red Hat Virtualization instance, as opposed to a local libvirt instance, but otherwise they're exactly the same. If you were quick, and I didn't highlight this before, I am running the most recent stable version of OpenShift, so 4.5.4. And if we look at our installed Operators, you can see that I've only installed the OpenShift Virtualization Operator.
B: She was like, you know, you spent like a minute and a half just talking about naming and how things have all these different names. And that's because, as marketing does, the name of things changes. So what started out as Container-native Virtualization turned into OpenShift Virtualization, and as a result we have a couple of different names. You'll see CNV, Container-native Virtualization; just don't be alarmed by that if you go to deploy. So the other thing that we can do here is look at our workloads, and I'm not going to rehash all this stuff, because it's covered in the other live stream as well as a bunch of other material that we have.
B: Basically, all the services inside of here are deployed, and the one that we really care about today is going to be the CDI services, the Containerized Data Importer. We can also come over here to our command line, and I will reiterate that I cannot talk and type at the same time, so my apologies if anybody is expecting that.
B: Yeah, so KubeVirt is the upstream project for OpenShift Virtualization. Like everything Red Hat does, we do everything upstream, open source, and then bring it downstream as a productized version to provide support, etc. So what you'll see is, much like I just did, `oc get crd`, the custom resource definitions, and grepped for kubevirt: that's because everything is defined in the kubevirt.io API group inside of here. Nice, so conveniently it's all highlighted in red.
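A rough sketch of that CRD check from the terminal; exact CRD names vary a bit between KubeVirt and OpenShift Virtualization versions:

```shell
# List the CustomResourceDefinitions that KubeVirt / OpenShift
# Virtualization installs; they all live in the kubevirt.io API group.
oc get crd | grep kubevirt

# Inspect one of them, e.g. the VirtualMachine definition:
oc get crd virtualmachines.kubevirt.io -o yaml | head -n 20
```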
A: Right, sweet. So Eric and I were having a discussion earlier about music for the channel, and you're...
A: Right, and that's what I was thinking, and I was like, yeah, let's have like some...
B
That
might
be,
that
might
actually
be
distracting.
Yes,
okay,
so
I
I
have
openshift
virtualization
deployed
you
see
when
you
deploy
it.
C: Yeah, well, we start a pod called the importer pod, and that's simply going to pull the qcow2 or the raw disk image directly from the URL source that you provide, and it's essentially going to write it out to wherever it needs to go, whether that's just onto the local filesystem in a hostPath, or onto an NFS share, or onto something like Ceph if you're using OpenShift Container Storage. It's just going to write it out directly there. So it's code that lives essentially within the CDI space.
B: Cool, yeah. So all I did while you were chatting there, Rhys, was browse to the RHEL download page. I'm just looking at RHEL 8.2, which is the latest non-beta version available. If we look in the downloads here, there's a couple of things we want. The first is this KVM guest image: this is going to be a qcow2 file, and it contains that base OS disk.
B: Yeah, so the disks will end up being either qcow2 or raw disk images; either one of those will work. Cool, thank you. So the other thing that I'm going to download here, after the KVM guest image, is the RHEL binary DVD, and hopefully this doesn't absolutely kill my internet connection. It should be fine. Oh...
B: Yeah, it's gigabit, but with the streaming up and down and everything else, we'll see.
B: So all I did, you saw, was a quick search for the Fedora cloud image. It's going to take me to cloud.fedoraproject.org, which will then redirect us off to fedoraproject.org, and I'm just going to download this cloud base image for OpenStack.
B
And
yes,
I
do
want
to
save
that.
So
last
but
not
least
so
remember,
openshift
virtualization
has
the
same
guest
os
support
that
red
hat
virtualization
has
so
officially.
We
only
support
the
various
windows
so
server
and
desktop
from
I
think
server,
2008
r2
on
and
windows,
8
on
or
something
like
that
and
the
rel
variants.
B
Other
operating
systems
like
fedora
will
work.
Cintos
will
work
just
remember
we
don't
officially
support
them.
So
I
say
that
because
you
can
also
search
for
openstack
cloud
images
and
the
openstack
team
very
helpfully
has
this
page
that
lists
a
whole
bunch
of
other
distros
that
you
can
get
with
cloud
images.
C: And this actually highlights a really important point with OpenShift Virtualization: we are building on top of many, many years' worth of experience, expertise, but also technical capabilities. OpenShift Virtualization is using the same underlying stack: it's the same Red Hat Enterprise Linux, it's the same KVM, the same libvirt; it's just being orchestrated in a slightly different way. And so, yes, we're looking at OpenStack images here, and you can use OpenStack images on top, because they're all utilizing very, very similar underlying components to instantiate them.
C: There's the ability to do that, but vGPU is slightly different, because the orchestration platform, in this case OpenShift Virtualization, essentially has to instruct the underlying card to carve itself up into vGPUs, and we typically do that with NVIDIA and the GRID driver and things like that. Now, we've had the capability to do that with RHV and OpenStack for a very long time. Some of those capabilities on top with OpenShift, I believe you can do, but OpenShift Virtualization requires some additional configuration, because we're, of course, defining a virtual machine through an OpenShift API.
A
Appreciate
it:
okay,
next
question:
with
openshift
container
platform:
ovn:
is
it
possible
to
take
an
online
backup
of
a
guest
vm.
B: If you're wanting to do it at the hypervisor level, then we would rely on our partner ecosystem, those backup vendors, to be able to integrate with the native Kubernetes paradigm for, you know, snapshot and backup of persistent volume claims, because that's really what these disks are. Now, that being said, in that ecosystem right now there are not a lot of vendors who understand and integrate with Kubernetes. So that's an area where we're doing a lot of work, and a lot of, you know, how can we...
B
How
can
we
change
that
effectively?
The
other
thing
that
I'll
add
is
that,
with
version
2.4,
where
we
don't
yet
support
snapshots
for
virtual
machines
either
so
on,
the
roadmap
is
cold
snapshots.
So,
with
the
vm
being
powered
off
to
take
the
snapshot
and
then
eventually,
you
know
being
able
to
take,
live
or
hot
quote,
unquote,
snapshots,
and
so
the
the
interesting
part
for
this
is.
This
is
because
of
how
kubernetes
itself
works.
B
So
when
you
go
and
with
so
it's
a
pvc
and
it's
backed
by
a
pv
that
has
most
likely
been
satisfied
by
a
csi
driver,
a
csi
provisioner.
So
if
I
have
a
csi
provisioner
that
supports
snapshots
when
I
create
that
snapshot
custom
resource,
when
I
say
hey,
take
a
snapshot
of
this
volume,
it
it's
non-deterministic
when
that
happens,
basically
the
object
gets
created.
Then
we
have
to
wait
for
the
operator
to
be
notified
of
that
to
get
created
and
then
for
it
to
do
something
to
actually
create
the
snapshot.
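As a minimal sketch of the asynchronous snapshot request described above: the names here (`my-vm-disk`, `my-csi-snapclass`) are placeholders, and clusters from the OpenShift 4.5 era used the `v1beta1` snapshot API (`v1` in later releases):

```shell
# Create a VolumeSnapshot of the PVC backing a VM disk. Creating the
# object only records intent; the CSI snapshot controller reconciles
# it later, which is the non-deterministic timing described above.
oc apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-vm-disk-snap
spec:
  volumeSnapshotClassName: my-csi-snapclass
  source:
    persistentVolumeClaimName: my-vm-disk
EOF

# Watch for readyToUse to flip to true once the controller has acted:
oc get volumesnapshot my-vm-disk-snap -w
```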
B
So
there's
a
lot
of
work
going
on
in
the
background
in
order
to
make
that
easier
to
make
it
faster
to
make
it
more
integrated
into
the
traditional
virtual
machine.
You
know
snapshotting
paradigm,
that
people
are
used
to
so
just
know
that,
yes,
we
know
yes
we're
working
on
it
and
it
doesn't
work
the
way
that
you
might
think
it
does.
B: Yeah, and what I tell people is, you know, don't assume that we know about, or are aware of, or are working on every feature or capability. Always put in RFEs, right? Put in a BZ, put in an RFE. If you're a Red Hat customer, work with your account team so you can throw your weight behind that and help product management and engineering management prioritize those, because it is super important.
B: Very much so, thank you. All right, so the last bit of setup, or prep work, here: notice that all three of my images have finished downloading.
B: The last thing that I'll bring up here is the GitHub page for the Containerized Data Importer. Quite simply, all I did was go to github.com/kubevirt, and if we search in the repository list here for "data", you can see that there is the containerized-data-importer, and from here, if we go to doc, you can see that there's a whole bunch of documentation about how DataVolumes work and how to use them, and all kinds of other stuff.
B
So
the
page
that
I'm
interested
in
today
is
kind
of
the
core
landing
page
for
data
volumes.
Information,
so
you've
heard
me
use
I've
talked
about
data
volumes
a
number
of
time,
and
I've
also
talked
about
pvcs
and
pvs
a
number
of
times,
and
the
core
thing
to
remember
is
that
a
data
volume
is
a
pvc
with
some
additional
services
on
top.
So
what
does
that
actually
mean?
B
It
means
that
when
I
create
a
data
volume
that
says
import
this
disk
from
this
url
or
this
s3
endpoint
or
this
container
image
or
whatever
it
happens,
to
be
the
cdi,
the
containerized
data
importer
will
say.
I
need
to
create
the
persistent
volume
claim
when
that
volume
claim
is
created,
I'm
going
to
or
a
bound.
Rather
I'm
going
to
spin
up
the
importer
pod
I'm
going
to
attach
that
pvc
and
then
I'm
going
to
begin
to
stream
that
data
down.
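That flow can be sketched as a DataVolume with an HTTP source; the URL, name, and storage size below are placeholders:

```shell
# A DataVolume with a URL source: CDI creates the PVC, starts the
# importer pod, and streams the disk image into the volume.
oc apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-root
spec:
  source:
    http:
      url: "http://example.com/images/fedora.qcow2"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
EOF
```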
B
When
you
manage
data
volumes
separately,
these
can
become
pretty
powerful,
so
things
like
if
I
were
to
go
in
and
destroy
the
pv
or
pvc
that
is
behind
my
data
volume,
the
data
volume
operator
will
automatically
recreate
it
and
it
will
automatically
re-import
this
the
data
that
originally
sourced
it.
We
can
also
leverage
the
data
volume
controller
right
cdi
in
order
to
do
things
like
clone
disks.
B
So
if
I
have
a
pvc
that
has
a
source,
you
know
a
template
disk.
If
you
will,
it
will
clone
that
in
a
couple
of
different
ways,
including
if
it's
available
it
will
use
the
cloning
api
or
the
cloning
option.
For
your
persistent
volume,
provisioner
or
a
storage
class
provisioner,
so
it
does,
it
is
capable
of
doing
smart
clones
in
that
respect
so
long
as
they
are
csi
compliant
all
right.
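A cloning sketch along those lines: a DataVolume whose source is an existing PVC. If the storage class's CSI driver supports cloning, CDI can do a "smart" clone; otherwise it falls back to a host-assisted copy. Names and namespace are placeholders:

```shell
# Clone an existing PVC ("fedora-root" in namespace "templates")
# into a fresh DataVolume via CDI.
oc apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-clone
spec:
  source:
    pvc:
      namespace: templates
      name: fedora-root
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
EOF
```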
B
So
I'm
going
to
come
back
over
here,
I'm
more
or
less
going
to
go
in
order
with
these
and
talk
about
them,
except
for
pixie,
pixie's,
very
straightforward,
it's
pixie!
Basically,
the
only
caveat
here
is
openshift.
Virtualization
doesn't
provide
any
of
the
pc
infrastructure.
So
there's
no
dhcp,
there's
no
tftp!
There's
none
of
that
yeah
you're,
effectively
connecting
it
to
a
network
that
has
all
of
those
pixie
services
enabled
and
letting
it
go.
B: All right, I know, it's all 300 megabytes or something like that. So I have my two qcow2 images, one for Fedora, one for RHEL 8.2, and then I have an ISO image. So, for URL, again we're just going to host these; we're just going to put them onto a web server.
B: Sometimes I'll start saying what I'm typing, or typing what I'm saying, instead of what I mean. Yeah, it's just not a good combination for me. Rough. All right, so now we can refresh this, and we can see that we have our stream.iso already uploaded inside of there, and from here I can go in and very simply use that URL, so: copy link location.
B: Fedora, HTTP. We will let this know that it is Fedora 31. It's a small image; again, t-shirt sizing. I think small is one CPU and 2 gigabytes of RAM, and just a simple desktop after that. We'll ignore most of these things and accept the defaults. So, networking: it'll use the masquerade network, which will put it onto the pod network, the SDN that is being used by all of the other pods.
B: Should be, for Fedora; we can expand it if we want. I'm looking for my usual...
B: Yeah, I was looking for the usual cloud-init that I use. Let's go back; we'll adjust this. Thank you for highlighting that. So one other thing, or a couple of other things, that have changed here: one, I can select my storage class specifically, and I can set these options directly from inside of here. For example, if I was using a block storage device and wanted to pass it through directly as a block storage device, I can do that from here, as well as change the access mode.
B: So what I was digging for a moment ago was this: it's like 30 characters that I can just never remember how to type. All I'm doing here is creating the world's most simple cloud-init, to set a password for the default user, so fedora, that's literally "fedora", and then setting it so that I don't have to change it as soon as I log in. This just makes it easy to access that particular server.
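The "world's most simple cloud-init" described above looks roughly like this user-data snippet (pasted into the cloud-init field of the VM wizard); the password value is whatever you choose:

```shell
# Minimal #cloud-config: set the default user's password and don't
# force a password change at first login.
cat <<'EOF'
#cloud-config
password: fedora
chpasswd:
  expire: false
EOF
```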
B
So,
if
you've
seen
me
do
this
before
you'll
know
that
at
this
point
we
should
expect
to
see
an
importer
pod,
that's
running
that
is
going
to
be
pulling
that
disk
from
the
http
server
that
we
specified.
If
I
had
to
guess
this
is
waiting
to
pull
the
yep,
so
it
bound
the
volume.
Now
it's
assigned
it
now
it
should
be
pulling
the
image
here.
B
So
if
we're
looking
through
these,
it
was
waiting
for
the
volume
to
be
created
and
bound
to
the
pvc
once
that
was
done,
it
scheduled
the
pod
and
then
it
pulled
the
image
in.
We
can
see
it's
now
running
and
if
we
look
at
the
logs
here,
it
is
pulling
that
image
from
our
particular
host,
so
that'll
take
a
a
couple
of
minutes
because
slow
network,
even
though
it's
a
small
image
but
pretty
straightforward.
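The same console flow can be followed from the CLI; `fedora-http` is a placeholder DataVolume name, and the importer pod follows the `importer-<datavolume>` naming convention:

```shell
oc get dv fedora-http             # phase/progress of the DataVolume
oc get pvc                        # the PVC that CDI created and bound
oc get pods | grep importer       # the importer pod doing the pull
oc logs -f importer-fedora-http   # streamed download/convert progress
```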
C: The logs will actually show, at the end, that disk image being resized to the size which Andrew suggested. So the image is probably expecting 10 gig, but it finds a 20 gig volume, and it'll just resize to fill.
B
Yep
yeah,
so
it
it's
another
convenient
thing
about
the
data
volume
operator
is:
it
will
resize
the
underlying
image
from
whatever
it
is
or
wherever
it
was
to
whatever
the
size
of
the
pv
is,
and
then
we
rely
on
the
guest
os
at
that
point
to
then
resize
its
file
systems,
so
in
the
case
of
fedora,
rel,
etc,
cloud
init
is
there
and
cloudinet
will
automatically
notice
the
new
size
of
the
disk
and
resize
those
those
file
systems
automatically
if
you're,
using
something
else
or
if
you
create
your
own
disk.
B
You
just
have
to
take
that
into
account
all
right,
so
our
data
volume
has
been
created,
which
means
that
our
vm
over
here
is
now
ready
to
ready
to
go.
I
can
very
quickly
turn
that
guy
on
just
to
let
it
run
for
a
few
minutes.
Hopefully
everything
is
working.
This
environment,
I
just
stood
it
up.
Last
night,
I've
only
tested
it
once
so.
B: Yes, yeah. I'm hoping mine might be borderline; we'll have to see. So yeah, the mouse disappears when I go into the VNC session, but you can see here, to the right of my mouse, that eth0 was assigned an IP on 10.0.2.2, which is the pod network for whatever host it's running on, worker zero.
B: Okay, so that addresses the URL method, very much the simplest of all of them. Really straightforward: literally put your qcow2 or raw image file up onto an HTTP or HTTPS server and let it pull that disk in. If I were to, again, destroy the PV or PVC that backs the DataVolume, it'll automatically re-import that data for us, and I'll show that here in a few seconds.
B
So
the
second
one
that
I
want
to
show
is
going
to
be
the
container
image.
So
container
images
are
helpful
in
a
couple
of
different
ways:
they're
really
good
for
providing
really
any
disk,
not
necessarily
an
operating
system
disk.
So
what
do
I
mean
by
that?
So
I'm
going
to
demo
taking
an
operating
system
disk
and
putting
it
into
a
container
image
and
then
pushing
that
up
to
a
registry,
but
really
you
can
take
any
qcow
any
dot.
You
know
any
raw
image
file
and
put
it
into
that
container,
and
why
is
this
useful?
B: It's yet another way of publishing an application to the VM, or providing really any other arbitrary data. Maybe I have some sort of automation, or other secret data, anything that I need to get into my virtual machine. Just keep in mind that it is treated like any other container image, in that it is ephemeral. When it gets pulled down, when we instantiate it, it will create that copy-on-write layer on top; that's where all of the writes happen...
B
Inside
of
for
while
the
vm,
the
pod
is
running
when
that
pod
is
terminated,
that
copy
and
write
layer
goes
away.
So
just
keep
that
in
mind.
So
creating
these
is
super
easy.
So
all
I'm
going
to
do
is
create
a
docker
file
and
first
I'm
going
to
copy
this
because
we
will
need
that
name
and
all
I'm
doing
is
from
scratch
and
then
adding
my
file
name.
B
Into
slash
disk,
so
that's
really
all
there
is
to
it
so
from
scratch
means
it's
a
completely
empty
image.
I'm
going
to
add
this
qcow
image
it's
going
to
go
into
a
folder
named
disk
inside
of
that
particular
image.
That
is
literally
all
it
is
when
we
create
a
container
image
whatever
that
disk
is
I'm
going
to
add
it
inside
of
there
so
we'll
do
a
podman
build.
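The whole container-disk workflow sketched above, end to end; `<account>` and the tag are placeholders, and the one real convention is that the disk must live under `/disk` inside the image:

```shell
# Two-line Dockerfile: an empty image containing only the disk file.
cat > Dockerfile <<'EOF'
FROM scratch
ADD fedora.qcow2 /disk/
EOF

# Build, authenticate, and push to any registry you like.
podman build -t quay.io/<account>/stream:f32 .
podman login quay.io
podman push quay.io/<account>/stream:f32
```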
B: ...forgetting that the size of the image went up. It was, I think, 800 or 900 megabytes, and now it's 1.1 gigabytes.
A: No, it's actually kind of nice, right? You'd think it would be more complex, but we've pulled away all of that glue and abstraction and made it very simple.
B
All
right,
so
we
have
our
image.
I
think
I
need
to
do
login
to
quay.io.
B: I always forget that you have to do Control-Shift-C and Control-Shift-V. Yeah, all right. So now we wait for my extremely slow internet to push up our 1.1 gigabytes, and it should be ready to go. Normally, and I think you saw in our team chat I was asking about creating a local registry instance...
B
So
normally
I
use
a
local
registry
instance.
It's
just
easier
to
show
that
you
can
do
this
with
any
registry
anywhere
right.
Hopefully
any
quai
that
io
guys
that
are
watching,
aren't
changing.
B
In
the
in
the
the
uk
there
is
it
quay
or
is
it
key?
I
think
the
aussie's.
C: Right, like a harbor. But yeah, wait, wait, don't dock your boats. Okay.
B
Well,
it's
I
think
vmware
has
harbor,
you
know.
A
Reese,
I'm
gonna
put
you
on
the
spot
here
and
and
if
you're
looking
at
the
twitch
jet,
you
know
what
I'm
about
to
ask
you?
Okay,
if
you
want
to
look
at
the
twitch
chat
and
answer
the
question.
C: You know, for example, the applications that you run on top rely on the underlying infrastructure for availability, dynamic resource allocation, etc. OpenShift Virtualization is our first attempt at providing a single platform for both containerized and virtualized workloads: a single platform, a single API, a single set of storage and networking infrastructure, and so on. But, you know, whilst Andrew is showing lots of wonderful features...
A: Right, and the OpenShift Virtualization use case right now is, I think, designed around those people that are like: we just need to get onto a new platform, we've got to lift and shift these images, and we have no other choice, right? We have to do something with these VMs; we can't decouple them, we can't containerize them, we can't do anything. Okay.
C: That one platform for both, side by side, is incredibly attractive. You know, we recognize that virtualization isn't going away anytime soon. I think a lot of organizations have a desire to move or modernize applications to containers, but that takes a considerable amount of time, and some applications may never be containerized. And so I think the desire, certainly the trend, that we're trying to address is that single platform for both, and you can optionally use that platform to look at modernizing through the same APIs.
B
You
know,
of
course
you
want
to
make
sure
that
it
fits
within
the
constraints
and
capabilities
of
openshift
virtualization,
so
things
like
snapshots
right,
as
we
just
said,
don't
work
the
way
that
you
would
expect
them
to
and
won't
in
in
the
near
term,
but
beyond
that,
it's
less
about
the
workload,
the
application
and
more
about
how
you're
consuming
it,
how
you're
managing
it
if
your
application
or
your
application
team
is
already
doing
the
majority
of
their
deployments.
B
You
know
the
cicd
pipeline,
all
these
other
things
leveraging
kubernetes
and
the
kubernetes
api,
then
bringing
virtual
machines
underneath
that
same
paradigm
can
sometimes
make
a
lot
of
sense
right.
It
eliminates
another
pillar,
another
vertical
inside
of
the
organization
that
they
have
to
deal
with.
You
know
as
far
as
long
term,
whether
or
not
it'll
become
a
replacement
for
red
hat
virtualization.
B: So yeah, six years, six years minimum. All right. So while we were chatting, the only thing I did here was log into my quay account, or "key" account if you're Aussie or British, and change the newly created repository. Remember, I called it my account slash stream; I just changed that to being public, so that way I don't have to add my pull secret into OpenShift or anything like that.
B: So I am going to delete this. This is another thing that was added with 2.4: the ability to separate them. So, real quick: you can create DataVolumes separately, or you can define them as a part of the virtual machine as well, as a dataVolumeTemplate.
B
So
you
can
see
this
data
volume
template.
So
every
time
you
instantiate
a
new
virtual
machine
using
this
yaml
it'll
create
a
new
data
volume
based
on
that
template
so
before,
if
it
was
created
as
a
data
volume
template
and
you
deleted
the
vm,
it
would
also
take
the
data
volume
with
it
automatically.
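As a trimmed sketch of how a dataVolumeTemplate is embedded in a VirtualMachine (field names follow the `kubevirt.io/v1alpha3` API of that era; names, URL, and sizes are placeholders):

```shell
# Each VM instantiated from this YAML gets its own DataVolume,
# created from the embedded template and imported over HTTP.
oc apply -f - <<'EOF'
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: fedora-vm
spec:
  running: false
  dataVolumeTemplates:
    - metadata:
        name: fedora-vm-rootdisk
      spec:
        source:
          http:
            url: "http://example.com/images/fedora.qcow2"
        pvc:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 20Gi
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          dataVolume:
            name: fedora-vm-rootdisk
EOF
```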
B
Same
as
before,
we
will
provide
the
operating
system
and
other
details
here,
I'll
customize,
this
one,
so
two
gigabytes
of
ram
two
cpus
and
it
is
desktop.
So
if,
if
I,
if
you
don't
remember
so
the
workload
profile
here,
which
is
this
is
something
that
I
foresaw
with
red
hat
virtualization.
C: And that's, just by the looks of things, just an external bridge network, where you've got your hypervisors, or your OpenShift nodes, that have access to a specific network. Okay.
B
Correct
so
it's
so
my
nodes
have
a
second
network
interface
that
is
connected
directly
to
that
interface.
That
being
said,
when
you
deploy
openshift
virtualization,
it
includes
the
nm
state
operator,
which
I
absolutely
adore,
because
it
simplifies
that
day,
two
network
config
for
the
nodes
dramatically
right,
create
the
nncp.
The
network
configuration
plan,
node
network
configuration
plan.
B: It's basically the NMState operator, or the NMState... why is my brain not remembering what this is? It's using NMState and YAML in order to define what NetworkManager should make the network interfaces look like. Hopefully that makes sense, and if you think it's valuable, I can certainly show what that looks like inside of my lab as well.
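A minimal NodeNetworkConfigurationPolicy along those lines; the bridge and NIC names (`br1`, `ens4`) are placeholders for a lab's second interface, and early releases used the `nmstate.io/v1alpha1` API (`v1` in later ones):

```shell
# Declarative NMState YAML that the operator applies to matching
# nodes: create a Linux bridge on the second NIC for VM traffic.
oc apply -f - <<'EOF'
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-ens4
spec:
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens4
EOF
```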
B: So, back over here: because we're using a container for this, you'll notice that I don't have all of the same options that I did before.
B
So
there's
no
defining
the
size,
there's
no
selecting
a
storage
class,
it's
not
a
pvc,
so
there's
no
storage
class
associated
with
it.
So
really
the
only
thing
I
can
change
here.
Besides,
the
name
is
what
type
of
interface
I
would
use
to
connect
and
there's
no
reason
to
change
that
from
vert.
I
o
here.
B: So this is creating our blank disk; that's what it's doing. It's not importing the disk from the container registry or anything like that. You can see it just fired up and then turned right back off, and now it's gone. That's all that was doing: that was because I added the second, blank disk, not because it's using a container image for our base OS.
B
So
at
this
point,
if
I
start
our
virtual
machine,
what
we're
going
to
see
is
it
will
get
scheduled
and
then
we
will
have
to
wait
for
the
node
to
pull
down
our
container
image.
So
if
we
go
to
events,
so
it's
a
little
larger,
you
can
see
container
image
already
present.
So
that's
the
vert
launcher
and
down
here
is
pulling
image
quartet
io,
slash
and
solo,
slash
stream
tag,
fedora,
32
or
f32.
B
So
that'll
take
it
a
moment
again
we're
pulling
300-ish
megabytes
across
the
internet
and
what
we
should
see
at
the
end
of
that
is
our
virtual
machine
turn
on
so
a
couple
of
things
to
talk
about
here.
So
since
we
had
created
a
second
disk
on
our
virtual
machine
one,
you
can
see
that
there's
a
data
volume
template
that
is
source
blank,
so
that
was
where
we
saw
that
importer
pod
startup,
essentially
what
it
did
in
the
background
there
was
created
an
empty
10,
gigabytes
disk
inside
of
that
pvc
at
the
slash
disk
location.
B: Yeah, cloud-user, yeah. That's my own fault for getting my images mixed up.
B: And we can see that it's there. All right, so with this one, let's take a look at a couple of things. First, let's go to our pod.
B
So
remember
that
everything
in
kubernetes
is
a
pod.
Our
virtual
machines
are
no
exception.
There's
a
vert
launcher
pod
associated
with
each
one,
and
when
we
look
at
this
pod
I
can
go
to
the
terminal
conveniently
through
the
gui,
and
we
have
our
different
disks
available
here.
C: And again, that's kind of the important thing, right? We're just running the same binaries, the same processes, that our other virtualization and private cloud software uses. So really, what we're doing with OpenShift Virtualization is teaching it how to define and manage the lifecycle of virtual machines, not starting from scratch.
B: Inside of there is our disk.img; this is our 10 gigabyte blank disk. So this is the behind-the-scenes of what any disk being mounted into that virtual machine looks like. In this instance, it's our secondary disk; if it were a primary disk, it would be handled exactly the same way, and if we had more than one disk mounted this way, they would all be seen inside of here, and then they're attached by libvirt and QEMU.
B
So
if
we
switch
back
to
our
virtual
machine
now
in
our
console,
what
we
should
see
if
we
do
an
ls
block,
is
our
disk
that's
inside
of
here.
So
again
my
mouse
disappears,
but
we
have
vda.
So
vda
is
our
container
image
disk.
This
is
the
one
that
has
the
operating
system
inside
of
it.
We
have
vdb,
that's
our
10
gigabyte
drive
that
is
attached
to
the
second
one
and
we
have
vdc.
So
vdc
would
be
our
cloud
init
disk
that
was
attached
in
order
to
pass
that
information.
B
And
if
I
do
a
quick
check,
vdb
is
not
mounted
anywhere,
but
I
think
vdc,
oh
maybe
it's
not
I
thought
vdc
might
be,
might
continue
to
mount,
but
I
think
it
might
also
be
unmounted
by
cloud
init.
I
don't
know
if
you
know
what
happens
there.
Reese.
B
No,
no
worries:
do
you
know
what
happened?
Do
you
know
if
the
cloud
init
disc,
the
temporary
disc,
is
stays
mounted
after
cloud
and
it
is
done
or
does
it
unmount
it?
I
think
it
unmounts.
It.
B
No,
it
shows
present
but
not
mounted
yeah.
So
all
I'm
doing
I'm
just
going
to
put
a
quick
partition
on
this.
If
you
can
see
the
first
time
it
failed
because
I
was
trying
to
do
it
as
the
cloud
user
not
as
root
so
new
partition
primary
partition
partition.
One
start
at
the
beginning
end
at
the
end,
and
then
we
will
write
it
out.
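The interactive fdisk steps above, shown non-interactively with parted as one possible equivalent; this assumes `/dev/vdb` is the blank 10 GiB secondary disk and runs as root inside the guest:

```shell
# Label the blank second disk, create one primary partition spanning
# the whole device, put a filesystem on it, and mount it.
sudo parted -s /dev/vdb mklabel msdos mkpart primary 0% 100%
sudo mkfs.ext4 /dev/vdb1
sudo mount /dev/vdb1 /mnt
df -h /mnt
```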
B
B
E
B
Power of 2 and power of 10- and that's it, that's our disk. So just keep in mind, if you're adding secondary devices to virtual machines, that you, from inside of the guest OS, have to manage that attachment. So it's not like a traditional pod or container, where you say attach this PVC at this mount path. It's going to be attached as a disk device- so you saw /dev/vdb in this instance- based on the order that they're defined in the VM YAML.
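The ordering described above can be sketched in an abridged VirtualMachine spec. Names and images here are placeholders, not the exact manifest from the stream:

```yaml
# Sketch (abridged): the order of the disks list determines the device names
# the guest sees (/dev/vda, /dev/vdb, /dev/vdc).
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk        # -> /dev/vda (container image OS disk)
              disk: {bus: virtio}
            - name: datadisk        # -> /dev/vdb (the 10Gi PVC)
              disk: {bus: virtio}
            - name: cloudinitdisk   # -> /dev/vdc (cloud-init config disk)
              disk: {bus: virtio}
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/rhel8-vm:latest
        - name: datadisk
          persistentVolumeClaim:
            claimName: vm-data
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: changeme
```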
B
B
B
B
Oops, did you see what I did there? I was going to prove that the disks do persist, and I didn't uncheck the box that says retain disks. That's what I get for clicking too fast.
B
B
C
D
B
B
E
B
B
B
D
B
B
B
B
A
A
D
A
C
B
C
You mistyped sudo.
B
B
So the issue is: when we look at ls -ld ., you can see it's owned by root, and only the owner has write permission on /mnt, which is why the cloud-user account cannot write data to it. So if I were to do a chmod.
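The permissions fix above can be reproduced locally. This is a minimal illustration with a throwaway directory, not the exact mount point from the stream:

```shell
# A directory that starts out writable only by its owner, then opened up with
# chmod so another account (like cloud-user on /mnt) could write to it.
mkdir -p mnt-demo
chmod 755 mnt-demo          # owner rwx, everyone else r-x: no write for others
chmod a+w mnt-demo          # the fix from the stream: grant write to all
touch mnt-demo/testfile     # now a write succeeds
ls -ld mnt-demo
```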
B
B
It's- I thought that was step one for everyone.
A
A
B
All right, so /mnt is where I mounted the secondary disk. We just created some test data; you can see I put the date in there- 2:14 Eastern time, 2:14 PM Eastern time- and then, just for giggles, we'll do the same thing in the home directory of this cloud user.
B
B
All right, so I'll let that turn back on. So the last thing that we want to look at here for our sources is the disk. So when we create- or when we use- a URL, the containerized data importer reaches out to that URL, it pulls in a disk image, and it adds that disk image into this same disk, exactly like any other PVC.
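The URL import described above can be sketched as a DataVolume manifest. The URL, sizes, and names are placeholders, and the apiVersion may differ by CDI release:

```yaml
# Sketch: a DataVolume whose source is an HTTP URL; the containerized data
# importer fetches the image and writes it into the backing PVC.
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: fedora-dv
spec:
  source:
    http:
      url: http://example.com/images/fedora31.qcow2
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
```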
B
B
So you can use a data volume. So just like I defined a DataVolume here, and just like we saw with the dataVolumeTemplates with the other one, I specify the source as being, for example, a URL and tell it to pull that in; and then, when you create the DataVolume from the CLI, it goes through the exact same process. But with virtctl, which is the command-line tool for OpenShift Virtualization, I have a number of different things available to me, including image-upload. So, to get virtctl.
B
There's a couple of different ways. If you're using a RHEL box- and I honestly don't remember the last time I used a RHEL box for my desktop- there is an RPM package, right, so you can add the repo and then just do a yum or a dnf install. For me, personally, most of the time I just go to the KubeVirt GitHub page.
E
B
B
B
So I can start and stop virtual machines inside of here; you can see I can also do things like pause them, and I can trigger a migration- so lots of interesting things that can be done through the command line. Remember that this is just an interface to the API; all of these same things can be done by creating API objects.
B
If I look at my CRDs for KubeVirt, for OpenShift Virtualization, there is a VirtualMachineInstanceMigration object inside of here. So if I were to do an explain- I think explain will work on these, it's supposed to work on these- nope. They must not have populated whatever that reads. Normally you can use oc explain to see exactly what fields go into that CRD.
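The migration object mentioned above can be sketched as a manifest. Creating it directly is equivalent to running `virtctl migrate` against the VM; names are placeholders and the apiVersion may differ by KubeVirt release:

```yaml
# Sketch: triggering a live migration by creating the API object directly
# instead of going through the virtctl CLI.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-fedora
spec:
  vmiName: fedora-vm    # the running VirtualMachineInstance to migrate
```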
B
I don't know why they're not there with KubeVirt. Anyways, what I'm getting at here is: all of these are just API commands that have been put into a CLI interface in order to interact with it. So you can do all of these things just as well by creating- you know, using kubectl or oc, you can do all of those same things, or you can.
B
That. So image-upload is the command that we want to use, and I'm basically going to copy one of these and use that. So you can see, all I'm doing is saying: virtctl image-upload to a DataVolume with this name, this size, from this particular disk image that I'm going to use here. Now, you can also specify, as you can see down here- I can also tell it to pull that in from, for example, a URL if I wanted.
B
B
All right: virtctl image-upload, create a DataVolume named dv-fedora that is 20 gigabytes in size- because I'm not specifying the storage class, it will inherit the default storage class- and then we wanted to upload this particular image. If we hit go, you can see it created the DataVolume and is waiting for the PVC to be created.
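The upload command described above can be sketched as below. It is only assembled and printed here, since running it needs a cluster and virtctl on the PATH; the file name is a placeholder, and the exact flag spelling varies across virtctl releases:

```shell
# Sketch of the virtctl image-upload invocation from the stream; assembled,
# not executed. Adjust names, size, and flags for your virtctl version.
DV_NAME=dv-fedora
IMAGE=Fedora-Cloud-Base-31.qcow2
CMD="virtctl image-upload dv ${DV_NAME} --size=20Gi --image-path=${IMAGE}"
echo "${CMD}"
```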
B
So if we come back up here to our pods, we should see an upload pod get created, and in the background here it either failed- "signed by unknown authority." Figures.
B
B
A
A
B
Validating- I see one of them complaining about being 85% full. I should probably check on that.
B
B
B
If there is no- basically, if the storage class is not defined in this file, then it will inherit whatever is set here for access mode and volume mode. So in this way I can customize each one of my storage classes to provide- or to let the data volume importer know- which type of storage it is and which type of volume I want associated with that.
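The per-storage-class defaults described above can be sketched as a ConfigMap. The name and namespace match what OpenShift Virtualization shipped at the time, but treat the exact keys and class name as assumptions for your release:

```yaml
# Sketch: global and per-storage-class defaults for DataVolume access mode
# and volume mode. "nfs-sc" is a placeholder storage class name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
data:
  accessMode: ReadWriteOnce           # global default
  volumeMode: Filesystem
  nfs-sc.accessMode: ReadWriteMany    # override for the "nfs-sc" class
  nfs-sc.volumeMode: Filesystem
```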
B
So this is convenient in particular because- see, the default is ReadWriteOnce. If my default storage class is NFS, then I probably want ReadWriteMany, to be able to do things like live migration. Remember, live migration needs ReadWriteMany PVCs, so this is how we can change that, or change the defaults. Remember, Filesystem is also the default; if you want to use Block- so you pass a block device, a block PVC, directly attached to that virtual machine- you can change that here as well.
B
B
B
B
There we go, and now our virtual machine is booting. So again, super easy: creating a DataVolume to import that data for us. You can pre-stage a whole bunch of those if you wanted- so, hey, I need to create 10 virtual machines: create 10 DataVolumes and let them all import. Alternatively- and what we'll look at next- is cloning that DataVolume in order to create, effectively, what is the template volume. All right?
B
We don't need to wait for this guy to boot. I'm going to go ahead and stop you, and I'm going to go ahead and delete you. Notice in this one that, because it was a DataVolume, it was decoupled, so I need to come down here to my persistent volume claims. So remember how I said that if I deleted the persistent volume claim underneath the DV, then it'll get recreated? We're about to test that.
B
So there- it deleted it, and then it recreated it immediately. If I come back here and do oc get dv, you can see that it is back in the UploadScheduled phase, and if we keep watching this guy, it should go back into the- we should have a pod, so upload-dv-fedora, and if we go to the logs, we can see this guy.
B
B
So essentially, all I've done at this point is reset that disk. Because I had booted it the first time, cloud-init did its thing: it set the hostname, it set the password, and all that other stuff. So by deleting it and allowing the containerized data importer, through the DataVolume mechanism, to recreate it, it's now back to being a, quote unquote, fresh template disk. So we'll let that do its thing for a moment while we go ahead and define our next virtual machine.
B
B
So this time I'm going to use a disk source as well. It's still going to be Fedora 31, it's still a desktop machine; again, not changing our network. And this time I want to attach a cloned disk. So for this one we need to specify where the source disk is at- we're going to use the one that's in the same namespace as our virtual machine- and we want to specify the source persistent volume claim. So, effectively, from here.
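The clone source described above can be sketched as a dataVolumeTemplates entry. Names and sizes are placeholders:

```yaml
# Sketch: a cloned disk; the DataVolume's source points at an existing PVC
# (the "template" disk) in the VM's namespace, and CDI copies it.
dataVolumeTemplates:
  - metadata:
      name: fedora-clone
    spec:
      source:
        pvc:
          namespace: vm-project          # same namespace as the VM
          name: template-fedora-disk-0   # the source persistent volume claim
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
```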
B
B
B
So now, when we go look at our virtual machine details, status is off, and if we look at our pods, we have our CDI pod running yet again. So this one- I'm so sorry for jumping around a little bit here- if we look at our persistent volume claims, you can see that we have our template-fedora-disk-0. So this is the one that we just created.
B
C
Sorry to jump in- only because I haven't done this particular operation: what would happen if the volume that I was trying to clone was already attached to, and running on, a virtual machine?
B
I don't know, but I know who to ask: Alexander.
E
B
B
A
On fire in chat- Rhys has been on fire in chat. No, there's not any questions. We are curious how you say "kubectl". How do you say that?
A
E
B
A
B
So it cloned our disk, right- template-fedora- so we can switch over here, and we can see it's running, booting, doing its thing. If we wanted to, we could create another VM. I don't think I have capacity- I'm going to cause my cluster to fall over if I try and do this again, so I'm going to avoid that- but yeah, it booted exactly as we would expect it to, cloned that data out, and it happened pretty quickly.
B
B
Let's go back to our VMs here; we can leave those in place. So, to do this- so remember, I grabbed our RHEL ISO here. Now, actually, I don't think I want to use the RHEL ISO, because it is gigantic. Eight gigabytes is a lot of waiting around and me tap dancing.
B
C
Whilst you're downloading that- and this is the benefit of having access to some amazing engineers- Alexander's already given me a reply to the cloning question. He essentially said that if you're using ReadWriteMany, it'll allow the cloner pod to get access to that disk as well, but, depending on the turnover of the file system on top, what you end up with on the cloned volume may or may not be correct.
C
C
B
All right, so all I'm doing here is pulling down the most recent version of the CentOS minimal ISO, because it's about a gig and a half in size, as opposed to eight gigabytes.
A
B
Yeah, so it's funny- I was deploying Pi-hole the other day, and I deployed using Debian- or Debian, depending on.
D
B
Preference for how hard the "e" is. So I can install a Debian minimal in like 256 megs of RAM- super small- whereas RHEL 8, I think about a gig is what the, you know, small end of that happens to be. Yeah, so it's surprising how different each one is- and all of that- I guess it's not surprising; that's why there's different distros, but right.
B
A
B
B
B
B
So, real quickly, before I do this: there's a bit of a cart-before-the-horse problem when doing this through the wizards. When I go to add a disk, I need to have a disk that I can mark as bootable so the wizard will let me complete it. So this is going to look ugly, and it's just because of the way that things are; it should be fixed in the future.
B
B
So all I'm doing is creating a blank DataVolume- a blank disk- which is what I'm going to use to attach as the first disk, the bootable disk, for our virtual machine, temporarily. Again, this is a quirk in the GUI: it won't let us move past the disk screen without marking at least one of them as bootable. So, create a virtual machine, call this one centos; our source is going to be a disk, CentOS 8- which is the same as RHEL 8.
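The placeholder blank disk described above can be sketched as a DataVolume with an empty source. Names and sizes are placeholders, and the apiVersion may differ by CDI release:

```yaml
# Sketch: a blank DataVolume used only to satisfy the wizard's "one bootable
# disk" requirement; CDI allocates an empty PVC instead of importing anything.
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: centos-dv
spec:
  source:
    blank: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```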
B
So again, I won't mess with the networking interfaces here. So remember, I have to have some sort of disk here in order to move past. If I create a blank disk here, for whatever reason it won't let me mark it as bootable, so I'm going to create a pre-existing disk- or use a pre-existing disk, rather- of centos-dv, and we'll set that one as our bootable disk temporarily.
B
B
So here I will attach our ISO. I can import these, right- I can specify it as a URL. If I do, that containerized data importer again will kick in; it will import that ISO image to a persistent volume claim and then attach that, just like it does with a standard disk. Same thing with the container, right- I can use a container image for my ISO.
B
This is another super convenient way of attaching those. So the reason why I tend to do it this way is because, in my lab- remember, it's a home lab- I'm limited on storage, et cetera. If I do the container option, it has to pull that image locally to the node, which, if it's like a RHEL ISO- remember, that's eight gigabytes- and I think my base OS disks are only like 80 gigabytes in size. So using the attach-disk option to use a PVC means that it's hosted off of an external storage device.
B
B
So I've created my virtual machine- I've defined my virtual machine- and if we look at the YAML associated with it, you can see down here that I have this bootOrder option. So I can change the boot order associated with each one of those disks. I can change it in the YAML if I want; I'm going to show using the GUI in order to do this. We see on the details screen we have this nice boot order with a pencil next to it.
B
I can add a source- I want the CD drive here- and then I'm going to move it up as being the first one that I want it to boot from, click save, and now, when I turn on my virtual machine, what we should see here is our VM start and then boot to that ISO device. And that's exactly what we get, so I can select my image and install.
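The boot-order edit above maps to a per-disk field in the VM spec. This is an abridged sketch with placeholder names; lower numbers boot first:

```yaml
# Sketch: bootOrder per disk; the CD-ROM with the install ISO is tried
# before the (initially blank) root disk.
domain:
  devices:
    disks:
      - name: installcd
        bootOrder: 1
        cdrom: {bus: sata}
      - name: rootdisk
        bootOrder: 2
        disk: {bus: virtio}
```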
A
B
B
B
B
I called you weak, I know. So anyways, it's going through, it's loading. So this works with Windows as well- if you wanted to install Windows, which is super important, you can absolutely use this method. So the big thing that changes with Windows is that it doesn't recognize, or doesn't have, the virtio drivers out of the box. So, fortunately, OpenShift Virtualization handles this very nicely- and we can actually see this in action. So if I were to create a new virtual machine.
B
B
B
B
So you want to do an attach- or use- disk, and browse to this CD-ROM device, and then select it, and it will scan the disk and find the drivers relevant for your particular operating system. So select them, hit OK, the disk will appear, and then you continue with the Windows installation just like always. Then, after the install is finished, you do want to use that same ISO again to install the integrations.
B
So when you open that disk device, there'll be an executable file inside of there; just execute that, and it'll install all of the other drivers for everything else- all the other integrations with KVM for Windows- after the fact. So Windows works great. I would say I'd test it, but Windows takes forever to install, yes.
E
C
D
A
Know- I have no idea. And what we're talking about, folks, is: last month I got a $2,700 AWS bill, because I had uploaded a 13 gig SQL Server image with all the trimmings- right, all the drivers and everything else that needed to be used and installed to work on OpenShift Virtualization- and somehow it got out that this image was out there, and someone used it to the tune of 31 terabytes.
A
The image was only 13 gigs, so you do the math on how many times it was actually downloaded. But it turns out, after the fact, that Cloudflare was sending partial GETs to the bucket, and S3 was responding with full requests back, and Cloudflare was sending those full requests on as partial GETs as well. So that's part of the problem. I don't think that adds up to the tune of 30 terabytes, but yeah- something went horribly wrong there, and I'll pull up the article that I wrote about it, folks.
A
Well, I mean, the crazier thing is that there were no events on the calendar; there's nothing to explain the who, the how, the why it happened. No one's fessed up to it, like, "hey, I might have broken something," or "hey, I downloaded this image a million times," or something. So clearly something just went wrong.
B
Makes me wonder- it makes me wonder how often that happens. You know, if you're paying seven figures to AWS every month- or, I'm sure, Azure or Google or any of the others- would you even notice an extra two thousand dollars?
A
D
A
Yeah, no, I mean, it was literally designed just to share real quick with Jafar, and it kind of persisted, and then I got asked a few more times for it, and it was like- I.
E
A
E
B
So all I've done here is I restarted one of the other virtual machines. This is the one that is based on a container image- a registry image- for RHEL 8.
B
So what I want to do with this one is- you know, I want to start something, say an HTTP service, and then expose that to the rest of the world, to kind of close out the process, right, that we went through.
B
We looked at how to create a virtual machine disk based off of a qcow or a raw image and get it in either via a URL, or via a DataVolume and uploading from the command line; how to put it into a container registry and use that, which is the instance that I'm using here; and then the same thing with an ISO- how to boot a VM from an ISO and then install the operating system from there- and I stopped that because it was taking forever.
B
The only thing that you would want to do there- particularly with Windows- is, after the OS installs, right, install those secondary drivers and everything else, turn it off, then remove those extra ISOs, those CD-ROM devices, and then restart it again. So remember: hot-add doesn't work, and hot-remove doesn't work either.
D
B
B
B
Okay, so we'll let that import for a moment. So, effectively, what I'm going to do is: once this virtual machine starts up, I'm going to use podman- so I'll do a dnf install podman- and then use podman to start up just a very simple Apache web server on port 80, running on the virtual machine, and then we'll create a service that exposes that particular web server.
B
So again, we're using the pod network here. So remember that that means it's only internally accessible- it's not externally accessible- and then we'll use a route pointing to that service to then expose it to the rest of the world. We should be able to do the same thing with SSH; we should be able to do the same thing if you want to install Cockpit or something like that, and have all of that running on the virtual machine.
B
Alternatively, you can also do things like have a secondary interface, right: one interface that's on the pod network, and one interface that is on a public network, so you could split your traffic that way. So my operating system administrator- you know, maybe they're old school; they don't understand, or don't want to understand, all these routes and services and all this other stuff; they just want to SSH directly into the virtual machine. Well, great- all of that works. For my application team, I can expose my application across that pod network.
B
You know, just add the route there for all of those networks- so expose it across that network- and then all of those application-centric things happen inside of the cluster, across the SDN. And let's check our import here, which just finished- man, that was good timing.
B
I don't know if you can hear the- so, the server that all this runs on sits right over here. I don't know if you can hear the fan for it through my microphone. No.
B
A
You should get one of those- you should just submerge it in that non-conductive liquid stuff, yeah. Have you ever?
B
No- where was it? I was at NetApp Insight in, I think, Berlin, and Cisco, or somebody like that- Fujitsu- had like a blade chassis that was submerged in whatever that non-conductive liquid is.
B
A
Oil- that's non-conductive, yeah. But yeah- as well, I was reading last week that Gartner pointed out that that is going to be, like, a way to get more performance out of our CPUs, to keep up with Moore's law.
D
A
B
I worked at a place many years ago, and the way that they had done the power, space, and cooling was that each rack was enclosed. So, instead of being open racks with, you know, perforated tiles in front and air just trickling.
D
B
Rather, the racks were enclosed, and so they were able to have- basically, they required super high density: like, if you're not consuming, you know, 40 kW or something like that per rack, don't even bother coming to us. So it's something like- you had to have high-density blade chassis that were fully loaded to meet their requirements; otherwise, it just wasn't worth their time, right.
B
If you're only using a little bit of power to generate a little bit of heat, it's costing them more than it was making.
A
Yeah, I worked for a company- we had, basically, like a tent or something around the data center aisles, right, where the internal hot aisle was self-contained, and we had air coolers that pulled through the rack to cool off that hot-aisle area and keep air moving through it. And, like, it would get up to 100-something-plus degrees in there.
B
For anybody who's a NetApp customer: whenever travel becomes a thing again, if you ever come to Raleigh- come to their RTP campus for an EBC or anything- ask for a tour of their data center. So they have- they call them the GDL, the Global Development Lab.
B
D
B
Active cooling- it's miserable to go to work in there because it's hot everywhere, yeah, yeah. But they test, and, you know, their equipment is fully supported well beyond normal temperature ranges.
E
B
B
So, effectively, at this point we've got SSH, and I'm getting ready to start podman.
A
A
B
B
B
Released. All I'm doing here- very, very simple: podman run, daemonize, call it httpd, map external port 8080 to internal port 80, and use the Red Hat UBI httpd 2.4- or Apache 2.4- image.
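The podman invocation described above can be sketched as below. It is assembled and printed rather than executed, since it needs podman and registry access; the image path is the UBI 8 Apache image, but check the exact repository for your environment:

```shell
# Sketch of the podman command from the stream: detached httpd container,
# host port 8080 mapped to container port 80.
IMAGE=registry.access.redhat.com/ubi8/httpd-24
CMD="podman run -d --name httpd -p 8080:80 ${IMAGE}"
echo "${CMD}"
```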
D
B
Yeah, so I had to specify the registry name, whereas with RHEL, registry.redhat.io, I think, is the first one in the search path.
A
B
B
B
Oh- app equals fedora. I can just use that one. Every once in a while- what is it- even a blind squirrel finds a nut.
B
What am I looking for here? Services- app equals fedora.
B
B
All right, so same thing as before: virtctl expose. So the types that we have here- vm, vmirs, vmi- and I'm trying to remember what those are at the moment.
B
IP- so this is the name of our virtual machine.
C
C
B
B
So, one thing to note- one of the bugs that we found, and this was, I think, both August on your team and Jafar on my team- you see it uses things like the flavor inside of here. So if you were to go in and change the flavor, it would cause the service to fail. So you can actually go in- and you would want to edit the pod selector and remove some of these things, just to prevent that from happening.
B
Let's- let's restart them. So, create route.
B
C
Oh, I didn't- yeah, yeah: connection refused. So.
E
C
Back to the VM- and so you're in the pod there.
D
D
A
C
Does it just have iptables installed? I mean, but connection refused- you wouldn't.
B
No, firewall-cmd isn't here. Okay, so that's not it. It's been a long time since I used iptables directly and not through firewall-cmd, so yeah. Well, in theory, I just exposed SSH. So if we come back here to our services- we have this SSH, which is going to port 22.
B
B
A
B
B
I did. So the problem is, it's expecting a key- it only allows key-based authentication by default- and I didn't add a key, I just set a password, right. So all I'm doing here is turning on password authentication.
B
B
B
E
B
Let me- let me try and start this first. So, let's.
B
B
A
D
B
A
B
A
B
B
C
B
B
For here- listen, thank you. I spaced out for a moment.
A
D
B
Thanks, containers. All right, let's come back to- where did we leave all of this stuff at?
B
B
Let's clean up some of these. Yeah, so, service- create a service- oops, not you. So we have two of these already. You- I want you to go away. Yes. You- I want you to go away. Yes. Now we'll go back to virtctl.
B
B
B
B
All
right,
yeah,
I'm.
B
What's going on here? I've got something borked, probably in that virtctl expose.
B
A
B
It- but, yes, it should be- it shouldn't be as complicated as- and this is simply, Andrew doesn't often do this, so I haven't explored and found all of the rough edges yet, because most of the demos and interaction that I do are at the infrastructure layer. If Jafar is watching or listening- he's done this about a dozen times.
B
He's got a demo- he has a great demo that shows- yeah, Christian, yeah, maybe it's my weird multi-tiered network; that's a good possibility too. Oh, that's true, yeah- my wife keeps saying that if I get hit by a truck, she's just gonna rip everything out and, like, pay Best Buy to come in and fix it. Anyways, I know that this works. We've seen it with the Kodi demo; we've seen it with the demo that Burr did on stage at Red Hat Summit.
B
We've seen it with a bunch of other things, including with Windows and .NET applications. I'm just not smart enough for it- it's beating me today. And what's funny.
A
E
D
A
C
C
D
E
C
So do oc debug node/ and then one of your machines, like a worker, because that should have access to the.
C
C
B
B
B
A
Could this be- now it's a route endpoint thing: does the target need to be 8080?
A
D
A
That was it! Okay, cool. Thank you, everybody. Thank you. I need to figure out how to say these names- Carlos Santana, appreciate you. Thank you.
D
A
It's a feature- Carlos Santana says he can explain in chat. Well, this will be interesting. Carlos, read it in a way so that I can use a great impersonation voice.
A
C
A
A
Let's see- okay, there's some explanations in chat here. Okay, Carlos Santana says kube-proxy is the one doing 80 to 8080. When another pod or microservice tries to hit the service on port 80, then iptables will do its work to route from port 80 to pod port 8080.
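The mapping Carlos describes can be sketched as a Service manifest. Names and the selector label are placeholders matching the stream's app=fedora label:

```yaml
# Sketch: the Service listens on 80; kube-proxy's iptables rules rewrite
# incoming traffic to targetPort 8080, where the VM's web server listens.
apiVersion: v1
kind: Service
metadata:
  name: fedora-http
spec:
  selector:
    app: fedora
  ports:
    - port: 80          # what other pods and the route connect to
      targetPort: 8080  # where the server actually listens in the VM
```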
A
D
A
B
So, another thing I noticed- this is inside of our virtual machine- it's only listening on IPv6 on port 8080. Not sure why that is.
B
A
A
A
D
B
This- this thing has outsmarted me, so I'm going to continue to test and experiment with all of this. There's a blog post here that I'm going to publish.
B
On all of this- the correct way, or the way that works and doesn't break while Andrew is doing a live stream, around all of these things- because I think it's something that there's a lot of questions out there about; I get asked them fairly frequently. So, while it would have been nice to point those folks at the live stream to see a working example, I'll instead do that in a blog post, instead of spending the next few minutes continuing to bang my head on the proverbial wall.
A
Anyways- so, let's see. Eric says, "I thought that the route's port was supposed to point to the service's port."
B
I think we kind of- yeah, so that's what we ended up doing that worked, right? So when we change this to the same as the service, this now works, right.
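The fix that made it work can be sketched as a Route manifest. Names are placeholders; the key part is that the route's targetPort names the service port it should send traffic to:

```yaml
# Sketch: an OpenShift Route whose targetPort matches the port exposed by
# the Service in front of the VM.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: fedora-http
spec:
  to:
    kind: Service
    name: fedora-http
  port:
    targetPort: 8080   # must match a port defined on the Service
```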
B
B
B
So yeah, I'm not sure what's going on here, and honestly, this is not an area that I have explored or experimented with enough to know, off the top of my head, what could be wrong. So thank you to those who are watching who have been helping with this. It.
D
B
Been incredibly helpful to me, and I've learned quite a bit in this process, Andrew.
D
C
E
D
B
D
B
B
So, okay, I will take that as an action item.
B
Yeah- I'll look into, and explain, how all of these ports are exposed, and how a virtctl expose differs from a service, differs from a route, and how all of those things hook together. So keep an eye on the openshift.com blog for that, and I'll start writing it this afternoon.
B
E
B
Wonderful, yeah. So, if anybody has questions, concerns, problems, or issues around disk images- right, creating images, importing images, etc.- definitely feel free to reach out. You can also look at the GitHub repository for the containerized data importer- github.com/kubevirt/containerized-data-importer- where you'll find a ton of docs on how all of these things work.
B
B
Right now- so just keep that in mind- but otherwise, yeah, there's a ton of stuff that you can do inside of there; definitely recommend checking it out. And then, regarding the whole service and route thing: my apologies for that not working. You know, I haven't paid my dues to the demo gods, or the stream gods, or whatever it happens to be.
B
Yeah, we'll get that straightened out and turn it into a learning experience- awesome- but otherwise, yeah.
A
B
A
Yeah, thank you, everyone. We really appreciate you tuning in. When in doubt, go to openshift.tv to learn the latest and greatest about what's going on. Tomorrow morning, first thing- 9 a.m. Eastern, not first thing for Europe, 13:00 UTC- we have the Level Up Hour; we're talking about helper scripts in containers. So this is how to take those things that you would normally have running around on your file systems- scripts and everything- and actually containerize them and run them as containers.
E
A
Eastern, OpenShift Commons- we'll be talking about GPUs on OpenShift, so that'll answer some of our GPU questions from earlier. So that'll be fun. So.
B
You know, I didn't say thank you to Rhys. I know it's later in the day for you, Rhys, so I do appreciate you- yeah, no kidding- helping and participating.