Description
Deploying and using OpenShift virtualization with Rhys Oxenham, Andrew Sullivan, and Chris Short from the Red Hat OpenShift Twitch https://twitch.tv/redhatopenshift
Awesome. And for the audience out there, sorry, I did not hit the actual transition button fast enough, so my intro was completely skipped, but luckily Rhys's was probably mostly there.
So just a quick reminder: I'm Chris Short, technical marketing manager. Rhys Oxenham is joining us today, you just caught his intro, and also Andrew Sullivan, my fellow teammates. Are we wearing the same shirt today? Possibly.
I'm just here for moral support for Rhys. He certainly doesn't need my help with anything technical; Rhys is incredibly good at this stuff, and I'm blown away by the script and everything that he's created, which I hope he walks through a little bit of today, for creating the nested OpenShift virtualization lab.
All of these machines are completely virtualized, but that's just because it's much easier for me to, you know, build up demos and labs and things when it's running on my system here, with plenty of resources available to me. So, what have we got? Hopefully this is coming through okay. I've just got five systems here: three masters, a sort of standard highly available configuration, and two workers.
Now, for those of you that don't know what OpenShift virtualization is, you can kind of think of it as a feature, or an extension, of OpenShift Container Platform to run virtual machines. So we're delivering on the notion of a single platform to run both of them simultaneously: you can have sets of nodes that run containers and virtual machines side by side, all orchestrated with the same APIs, all running on the same hardware, all utilizing the same networking backends and the same storage backends. So you're no longer having to maintain multiple silos of technology just to, you know, run virtual machines and containerization. That's really what we're trying to deliver with OpenShift virtualization. What I'm showing you today is OpenShift virtualization 2.3. We're gonna go through an actual deployment of that; we're gonna set down some networking and some storage configurations; we're gonna deploy some virtual machines, and we can poke around the API and what you can currently see on the UI. Just as a pointer: we are very much still in beta with OpenShift virtualization.
It's not like, as you say, Chris, you have to deploy any additional hardware or indeed, you know, make any drastic changes to your environment. It's an opt-in: if you want to use virtual machines alongside your existing infrastructure, you simply enable the extension. Now, I'm gonna rewind a little bit on when I said no additional hardware: to run virtual machines on top of OpenShift, we really do recommend you use it on top of bare metal, for obvious reasons. Inside of this environment, which I'm showing you here, I'm doing nested virtualization.
This works just fine in a demo, in a sort of lab environment, but it's not recommended, nor will it likely be supported, for any production usage. So if you don't already have bare-metal OpenShift inside of your environment, then we would very much encourage you, if you wanted to take a look at leveraging OpenShift virtualization (or indeed, as you'll see it in a few different places, CNV, or container-native virtualization), to attach bare-metal machines to your cluster to do so, or indeed deploy a dedicated bare-metal cluster.
Can I pause for a second and ask you a couple of questions there? So there's a lot of questions that come up internally, both, you know, just internal folks asking as well as asking on behalf of customers, around mixing cluster node types. So it is fully supported to have a virtual control plane and physical worker nodes, the caveat being you can't deploy using UPI or IPI; it's the quote-unquote bare-metal installation method. So does that hold true with OpenShift virtualization as well?
It does. So just to add a bit of color around some of the terms you used there: for the OpenShift installer, for version 4 and above, we've really worked on what we call platform integration. The idea here is that you run the openshift-install binary, and it asks you a few different questions around where you want to deploy your cluster. Do you want to run on Amazon? Do you want to run on OpenStack? Do you want to run on VMware, or whatever it might be? You answer some questions, and away it goes.
It deploys all of the infrastructure, it gets it all up and running, but, crucially, it ties in some of the underlying platform integration. So if you tell OpenShift, you know, "I want new worker nodes," it will connect into those APIs and provision them automatically. One of the big things we're currently working on is to get that same capability for bare metal as well, so this is commonly referred to as bare-metal IPI: installer-provisioned infrastructure.
So the idea is that if you want to do things like scaling, or you want to provision OpenShift all the way from bare metal right to a running cluster, you can do that, and the OpenShift installer will provide you the ability to do that. The problem with this configuration, when it comes to the original question about having mixed clusters, is it's kind of assumed that the entire cluster is of one type.
So what tends to happen is, if you want to break from that mold, there might be a little bit of handcrafting or manual deployment of some of those bare-metal machines, because, realistically, the OpenShift installer was set up, and is originally configured, to deploy against one particular type of infrastructure. So it is possible, and it is supported, and the caveat that you said, Andrew, is absolutely right; it just requires a few more manual steps to get some of those bare-metal workers up and running.
Yes, you can work around that by doing some careful, you know, labels and/or taints and tolerations, et cetera, so that you only have VMs, with, you know, dynamically provisioned PVs from that storage platform, on those nodes, that type of stuff. But that's an awful lot of stuff to track, and, you know, if it goes wrong, it's a hassle and all that other stuff. So from, you know, the Red Hat perspective, it's: just always use the bare-metal, no-integration installation method.
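The labels-and-taints workaround described above can be sketched as follows. The node name, taint key, and VM name here are hypothetical, but the taint/toleration and nodeSelector mechanics are standard Kubernetes, which KubeVirt's VirtualMachine template exposes:

```yaml
# First, taint and label a hypothetical bare-metal worker so only VM
# workloads land on it:
#   oc label node metal-worker-0 node-role.kubernetes.io/vm-worker=""
#   oc adm taint nodes metal-worker-0 vm-workloads=true:NoSchedule
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm          # hypothetical VM name
spec:
  running: true
  template:
    spec:
      # Pin the VM to the labeled bare-metal node(s)...
      nodeSelector:
        node-role.kubernetes.io/vm-worker: ""
      # ...and allow it to schedule despite the taint.
      tolerations:
      - key: vm-workloads
        operator: Equal
        value: "true"
        effect: NoSchedule
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi
```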
Yeah, absolutely, agreed. All right, so let's go ahead and show how easy it is to get OpenShift virtualization up and running. Aside from, you know, one of the greatest things about OpenShift 4 being the platform integration, the next best thing is some of the integration work with operators. Now, operators make the deployment and the lifecycle management of additional tools, components, features, and, you know, value-add software much, much more powerful.
It's kind of handing over the knowledge of how to manage, and how to manage the lifecycle of, those components directly to Kubernetes, and therefore OpenShift. So it's incredibly powerful, and we've done the exact same thing with OpenShift virtualization. Turning this on is literally as easy as deploying a new operator: you go into Operators and the OperatorHub, and this is a list of all the various different components you can deploy as part of OpenShift, with varying different methods.
Depending on, you know, where it came from, some are provided as community open-source bits; some, of course, require additional licenses and things from the respective vendors. But all I'm going to do here is just search for virtualization, and you can see here we have Container-native virtualization, or indeed OpenShift virtualization, as it will be called in the final product. You can see it has a particular version, 2.3, and it has what we call capability levels. So operators have various different levels of, sort of, feature support and maturity.
Some can literally just do a basic install. Some operators have the ability to do rolling upgrades: so if you start off on, say, 2.3, we add this as part of our cluster, and 2.4 comes out in a few months, the operator that you've already installed can manage its own upgrade path. It will get you to that next version without you, as an end user or as an OpenShift administrator, having to worry about how do I do all of that, how do I get that back up and running.
You have things like full lifecycle, so, you know, scaling up and down, and recovery from potential errors, and fault tolerance and things as well, and then there's some additional things, you know, getting some more metrics and insights into what's going on inside of the environment. But, you know, we're pretty mature on the 2.3 version of OpenShift virtualization here, and, as I said, this is fully available as part of OpenShift 4.4.
You can install it on older versions, of course, but I'm just saying this is now fully available through the marketplace as 2.3. But I just want to point out, if any of you have joined since we started this: this is not fully supported from Red Hat yet. It's still very much in the sort of technology preview slash beta realm. So I'm gonna hit install, and it's gonna ask which channel I want: 2.3. It's gonna put it in a specific namespace for me, called openshift-cnv, and there's an approval strategy.
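Under the hood, what the install screen generates is, roughly, an OperatorGroup plus a Subscription like the following. This is a sketch of the standard OLM objects; the exact package and catalog source names may differ by cluster and release:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-cnv-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
  - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  channel: "2.3"                     # the channel chosen in the UI
  name: kubevirt-hyperconverged      # operator package name (assumed)
  source: redhat-operators           # catalog source (may vary)
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic     # the approval strategy left as automatic
```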
Okay, I'll leave that as automatic, and I'm gonna hit subscribe to that. Now, what's going to happen is it's going to go ahead and deploy some additional components, and for me it's going to lay down the operator that I wanted it to. So here you can see: install is ready. I'm gonna go into here, and now, what I've done is I've simply deployed this operator.
What I'm going to need to do now is create a new instance of the hyperconverged operator. So I hit create instance, and it asks me some additional questions, basically a YAML file here; the bare-metal platform flag is set to false, because I'm doing nested virtualization here, and I'm gonna hit create. Now, what this is going to do is deploy all of the pods that I need. These are all the services that provide me with all of the API capabilities. You can see... that went a little bit too fast; let me change this.
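The instance being created here is a HyperConverged custom resource. A minimal sketch, assuming the 2.3-era API group and field name, with the bare-metal flag set to false for a nested-virtualization lab like this one:

```yaml
apiVersion: hco.kubevirt.io/v1alpha1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  # False here because this lab runs nested virtualization;
  # on real bare-metal worker nodes this would be true.
  BareMetalPlatform: false
```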
Yeah, absolutely. So now, all being good, you'll see all of these pods eventually running. These are all of the respective services that I need to then go ahead and provision virtual machines on top of my OpenShift infrastructure. So you've got, you know, controllers; your bridge markers, which enable you to do things like bridge networking; various components around CDI, which is about importing data, so if you have existing disk images you want to use for your virtual machines, it will go ahead and do that; host path,
if you want to use some local storage; CNI, so you can actually, you know, get your networking into that as well. There's nmstate, which we're going to go into in more detail; this is a really cool operator that allows you to set up your networking configuration, through NetworkManager, directly through OpenShift.
B
So
it's
not
going
to
be
able
to
run
virtual
machines
they're
only
my
workers
can
so
it's
gonna
bring
these
machines
up
check
that
it
has
dev
kbm
in
there.
So
I
can
actually
do
the
virtual
machines
and
then
we
should
be
relatively
good
to
go
you'll
see
on
the
left-hand
side.
You
know
this
is
dynamically.
Changed
I
now
have
a
new
entry
for
virtual
machines.
No
virtual
machines
found.
So
you
know
we
can
do
a
bunch
of
things
with
this
as
well.
There's "New with Wizard", so it'll run you through, and we'll go into that in a little more detail shortly, where it asks a bunch of questions about my virtual machine. We can import: so if you have an existing, you know, say, VMware cluster that you want to pull virtual machines from, you can do it directly through this. Or you can just go straight to YAML: you know, if you're an expert in OpenShift or Kubernetes and you want to copy and paste your YAML in there directly,
you can go ahead and do so. But I just want to make sure that this deployment has gone ahead successfully first. So, I think that they're all running... yep, every pod is running here; we don't have anything in pending or terminating or failed or anything like that. So I'm confident that my OpenShift cluster is working just fine. Now, on the terminal side as well, let me check I'm logged in... yep, I'm logged in here.
You'll also see that we now have some additional API resources and custom resource definitions that we can use directly from the command line as well. So I can do "oc get vm": no resources found, but, you know, it's proven just by doing that that it understands what that VM resource is, instead of saying that it doesn't understand what that resource is. There's VM and also VMI, for instance; you can define a VM that has multiple instances, like you used to be able to do in OpenStack as well.
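A quick sketch of what that check looks like from the terminal. The exact CRD list will vary by release, but `vm` and `vmi` are the shortnames KubeVirt registers, and this is the illustrative shape of the session:

```console
$ oc get crd | grep kubevirt.io
virtualmachineinstances.kubevirt.io     ...
virtualmachines.kubevirt.io             ...

$ oc get vm
No resources found in default namespace.

$ oc get vmi
No resources found in default namespace.
```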
So now OpenShift virtualization is deployed; you know, I think that was literally three or four clicks to enable that particular feature directly through the OperatorHub, and I have this ability to drive it via the CLI and the API. I need to make some additional, very minor, changes inside of my environment to support the running of virtual machines, namely around networking and storage. So the first thing I want to do is set up my networking. Now, by default, OpenShift virtualization supports out-of-the-box pod networking.
B
So
just
like
your
vert,
your
your
containers,
you
know
they
will
be
essentially
on
the
masqueraded
based
implementation,
so
they
hide
behind
native
interface.
You
can
create
routes
to
them.
You
know
you
can
use
all
the
various
different
standard,
openshift
networking
capabilities
directly
for
your
virtual
machines,
but
for
a
lot
of
cases
that
model
that
works
for
containers
doesn't
always
fit
for
virtual
machines.
B
Sometimes
you
might
want
to
enable
you
know
direct
network
attachment
of
your
virtual
machines
on
to
existing
networks
that
can
be
over
something
like
a
bridge
or
it
can
be
over
SR
Iove
or
indeed,
as
we're
working
on
within
the
engineer
departments
a
lot
more
of
the
fast
data
path
stuff.
So
we
need
to
basically
make
a
small
modification
inside
of
this
environment
to
suit
my
needs.
I
want
to
demonstrate
that
I
can
attach
a
virtual
machine
directly
to
a
you
know,
a
data
center
network
that
I
have
I
say
a
datacenter
Network.
Absolutely. I mean, in the example that I'm going to show, we're going to use just a standard Linux bridge, but you're absolutely right, there are lots of different types of networking attachments that we can use. You know, there's SR-IOV, there's Open vSwitch bridge, there's macvlan, which, you know, provided you've got supporting hardware, which is pretty much anything nowadays, you can absolutely use. So yeah, all it really comes down to is defining that configuration so that OpenShift knows how to attach, and how you want to do it.
The great thing about OpenShift 4 is it leverages Multus out of the box, so you're not limited to just having one network attachment to your, not only container, but now, of course, virtual machine. So you can just as easily run with the standard OpenShift SDN network, your pod networking, plus an additional network that you'll just use directly for that connectivity, or you can just use one of each. You know, it's completely flexible, yeah.
Yes, so if you're behind the load balancer and you're just using standard, you know, OpenShift networking out of the box, it'll follow the same path as if it was a container, so there's no difference there whatsoever. Obviously, that changes if you're using additional networking interfaces, you know, either provided by a Linux bridge or, you know, macvlan, or something that OpenShift doesn't have control over.
So I'm just gonna open a file, and in this file I'm going to paste the following. This is a node network configuration policy file. What this does is it uses nmstate. Now, you'll see in this deployment that we deploy something called the nmstate handler on each of these machines, all five of them; remember, I've got three masters and two workers. Every node gets this small pod deployed, and this is the one that handles all of the NetworkManager configuration for the machines.
Now, nmstate lives in the KubeVirt community, which is the upstream for OpenShift virtualization, and it allows us to define what the underlying network configuration looks like. So what I'm doing here: it's called a NodeNetworkConfigurationPolicy. I give it a name, so I'm just calling it by exactly what I'm doing: I'm creating a bridge, adding a particular interface to it, and it's for the workers.
So we use a standard node selector: I only want to make the changes on nodes that are workers, because those are the only ones that are gonna run virtual machines, and they're also the only ones that have the additional network attached. I have a desired state, and that desired state is to create a Linux bridge with the name br1. Its state is up; I am not attaching an IP address to it, and this is critical: all I want to use it for is layer-2 connectivity. If I only had one interface on this entire machine, one that I also wanted to
use for the rest of OpenShift networking as well, then I'm obviously going to want to make sure I have an IP address on that bridge. But here I just want to provide connectivity. I don't want spanning tree, and all I want to do is add this physical interface to that bridge. Now, enp2s0 is specific to my particular environment.
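The policy being described would look roughly like this. `br1` and `enp2s0` come from the demo, while the rest is the standard nmstate NodeNetworkConfigurationPolicy shape; the API group shown is the one from this era and may differ in newer releases:

```yaml
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-enp2s0-policy-workers   # illustrative name
spec:
  # Only apply on worker nodes; they run the VMs and have the extra NIC.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      ipv4:
        enabled: false        # layer-2 only: no IP address on the bridge
      bridge:
        options:
          stp:
            enabled: false    # no spanning tree
        port:
        - name: enp2s0        # environment-specific physical interface
```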
I want to pause for a second to talk about networking and OpenShift, because, as far as I know, with OpenShift virtualization there's now three different ways to achieve this configuration, right? If we're just talking about basic-level bonds or interfaces: you could configure those when you install CoreOS, so using, like, the dracut command line or the kernel parameters that are passed; you could use the network operator, so, at the cluster level, the network operator, and define a CNI inside of there; and then there's this nmstate approach you're showing.
Ultimately, it comes down to, you know, which one are you most familiar with, which one are you most comfortable with, I think. And there's been something bubbling way back in my head for some time: helping either the documentation team or somebody to write up some common networking scenarios, right? You know, maybe I've got four physical network adapters and I want to create, you know, one LACP bond, or two, you know, mode-1 bonds; walking through those kinds of common configurations. And theoretically it should just work, right?
Yeah, absolutely. And, you know, nmstate, I think, is a great way of just expressing the desired configuration and then having it, you know, go out there and set those configurations. I mean, LACP, creating, you know, bridges: it does all of that stuff right out of the box. It's certainly how I like to do it, but you're right, there are a number of different ways of achieving that configuration.
As far as I know, yeah, there shouldn't be any reason why it won't. I mean, at the end of the day, it's still a RHEL 8 kernel, it's still systemd, it still has, you know, the vast majority of libraries and tools that you would want. It's just, you know, stripped down and immutable, and, being immutable, you have to make the vast majority of configuration changes through the Machine Config Operator and things like that.
Yeah, indeed. So the good thing about nmstate is that it will apply it immediately, instead of going in there and laying down the ifcfg files, and, you know, any additional requirements, directly on the file system, because every time you do an MCO change, it needs to cycle the machines.
That's a very good way of thinking about it. Whereas this will apply it immediately, and will likely then reapply it after the machine has come up. So what you may want to consider doing (you know, I need to look into this; I've never actually tested it this way) is deploying using this, to make sure it all works as expected, and then setting the configuration.
Now, the network configuration. So, "oc get nncp": you can see whether the configuration is progressing. Or check the changes with "oc get nnce", which is an enactment; you'll see that it's already configured, right? So nncp is the policy; you can have various different policies, and then you have an enactment for each node. So that's the node name, and this is the particular policy. For the three masters, obviously, the selector wasn't matching, since I only want this to apply to the workers; then you have "successfully configured" for the various other ones.
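A sketch of what checking that looks like; the policy and node names here are illustrative, and the exact STATUS strings may vary by version, but the pattern (selector not matching on masters, configured on workers) is the one described:

```console
$ oc get nncp
NAME                        STATUS
br1-policy-workers          SuccessfullyConfigured

$ oc get nnce
NAME                          STATUS
master-0.br1-policy-workers   NodeSelectorNotMatching
master-1.br1-policy-workers   NodeSelectorNotMatching
master-2.br1-policy-workers   NodeSelectorNotMatching
worker-0.br1-policy-workers   SuccessfullyConfigured
worker-1.br1-policy-workers   SuccessfullyConfigured
```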
Anything I write to that file system is gonna be gone when it reboots; that's just the nature of CoreOS, right? So, to make it permanent, I'd need to look into whether there's a way of making that persistent through nmstate, or whether the most appropriate way of doing it, after you've validated that it works through this, is to do it via MCO. So I don't know; I need to look into that. Yeah.
D
I
would
think
that
if
it's
basic,
you
know
very
low
level,
networking
that's
required
just
to
boot
and
get
connected
to
you
know
the
master,
the
control
plane.
That
should
be
done,
ideally
at
install
time
using
kernel
parameters
for
great
cut
and
as
a
a
secondary
option.
You
know,
after
the
fact
using
MCO
and
then
for
other
networking
so
enabling
you
know
additional
pod,
networking,
SR,
Iove,
right
annum,
state
etc.
That
would
be
applied
after
its
rejoined.
You
know
or
connected
back
into
the
control
plane,
yeah.
"ip link": br1 is there, so that's fine. "ip link": where was it... enp2s0 has "master br1", so we know it's created that bridge just fine for me. So, now that that's happened, what I next need to do is create a networking definition for what I want my machines to be attached onto. So let me just show you this a second.
So this is a NetworkAttachmentDefinition; I'm just calling it "tuning-bridge-fixed", and it's just a bridge network. This is just a standard Kubernetes network attachment definition, so it essentially tells OpenShift what to do when I specify this particular network: how to attach it from a CNI perspective. What I have here is a plug-in called cnv-bridge, so it knows that this is a particular type of bridge for CNV. Now, these are a little bit different to pods.
Just remember that when we launch a virtual machine on top of, you know, any system, any RHEL, any Linux-based system, it is just a binary. We have to attach a virtual NIC into that virtual machine to get networking attached directly through to it. When it's a pod, it's just a case of putting an interface into a namespace. So cnv-bridge is slightly different, in that it then has to link a virtual interface inside of the virtual machine directly to the namespace. It's slightly different, but the functionality ends up being very similar.
B
So
the
difference
here
is
now
that
the
bridge
I'm
specifying
is
BR,
one
which
we
know
exists
because
we
created
it.
It
knows
how
to
do
that
particular
attachment,
so
I'll,
just
save
that
and
I
can
apply
that
touch.
So
we
are
now
our
bridge
is
created.
So
that's
networking
for
my
particular
environment.
Pretty
much
set
up
I
said
that
the
underlying
host
configuration
and
I've
created
a
new
network
attachment
for
for
attaching
a
virtual
machine
to
it
and.
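The NetworkAttachmentDefinition being applied would look roughly like this. `tuning-bridge-fixed` and `br1` are from the demo; the rest is the standard Multus CRD shape, a sketch rather than the exact on-screen file:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tuning-bridge-fixed
spec:
  # The embedded CNI config: attach via the CNV bridge plug-in to br1.
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "tuning-bridge-fixed",
      "plugins": [
        {
          "type": "cnv-bridge",
          "bridge": "br1"
        },
        {
          "type": "cnv-tuning"
        }
      ]
    }
```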
I learned this the hard way: if you create the network attachment definition in one namespace, it's not accessible from other namespaces, which I think is a good thing, because it means you can control, you know, from an administrator perspective, what resources your projects, right, your users, have access to. Mm-hmm.
Okay, so that's storage done... sorry, that's networking done. Maybe we should talk about storage now. With OpenShift virtualization, there's a wide variety of storage that you can integrate with. You know, our preferred mechanism, of course, would be OpenShift Container Storage; that's built around the Ceph project, so it's all deployable via an operator. Because I'm doing all of this in a sort of nested virtualization environment, I kind of ran out of memory, so I don't have OCS running in this environment.
I do have NFS. Now, NFS is, you know, really a quick and dirty way of setting up shared storage, and I do have an NFS server running inside of this environment, so I'm going to use that directly. I need to set up, because I don't think I set this up out of the box, storage classes... no, I don't have any storage classes. So I'm going to create a storage class, and I'm just gonna paste my YAML in here, because that's easier for me to do.
NFS storage class: copy that, paste that in. Yeah, so this is all a standard storage class; the metadata name is "nfs", and this is a no-provisioner, as in I cannot set this to do any dynamic provisioning with NFS. That's one of the biggest drawbacks of using NFS: you have to have all of the various different PVs already pre-created.
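The storage class he pastes would be along these lines; a minimal no-provisioner StorageClass sketch (the reclaim policy and binding mode are assumptions, not from the stream):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
# No dynamic provisioning: PVs backed by this class are created by hand.
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
volumeBindingMode: Immediate
```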
If you're utilizing something like OpenShift Container Storage, you just set it up with an operator, you spec it, it generates the storage class for you, and everything will be dynamic: you don't have to worry about, you know, creating volumes, creating partitions, and doing all the manual PV creation; it's all automatic for you. NFS is cheap and easy for my requirements. Yeah.
You provide it with a path to a local storage device, so it could be an individual disk, could be a local RAID device, a hardware device. You pass it that path, and then it will create the folders and files as necessary to provide that up to whatever your pods are doing. Obviously, the downfall there is, well, it's local to that node, and it doesn't move around.
Exactly, yeah, that's very fair. I was just talking about NFS out of the box, with plain Linux. You're absolutely right: some of our partners in this area that do NFS, like NetApp and various others, can absolutely do the dynamic provisioning, and many other storage integration partners do all of this completely dynamically. All right, so I've just created a really basic NFS storage class, so I can go ahead.
I
need
to
define
a
new
persistent
volume,
so
I'm
going
to
get
rid
of
this
and
I
have
just
a
definition.
I
want
a
copy
and
paste
here
and
I'm
sure
this
looks
like
so
I
have
one
here
system
volume
type
are
calling
it
NFS
pv-1
and
has
various
different
access
modes.
Redirect
many
is
of
course,
going
to
be
very
important
if
I
want
to
do
anything
like
live
migration
or
I
want
to
do
some.
data import. And data import is important because, if I have, for example, an existing disk image that I want to use, I'm going to need multiple pods to be able to access that volume simultaneously: you know, the import pod can attach to it, and, as soon as that's done, it can be attached into the virtual machine. I'm gonna show you that shortly. So, capacity is 40 gigabytes.
That is just the maximum size of the volume that I require. Its path is nfs-pv-1, and it's on this particular server; again, this is just an NFS server inside of my environment. So I will create that. It is now available inside of my environment, but, of course, it being available, there's no claim that I have on it yet. So next I'm gonna create a persistent volume claim. Now, this is where it starts to get a little bit more interesting, and more relevant to the CNV use case.
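The PV just described would look something like this; the server address is made up, while the name, path, size, and access mode are the ones mentioned on stream:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-1
spec:
  capacity:
    storage: 40Gi
  # RWX so an import pod and the VM can both attach to it.
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs-pv-1
    server: 192.168.1.100   # hypothetical NFS server address
```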
I think I'm just going to do YAML again, because it's easier to show. I'm gonna use this: this is the PVC definition that I have. Now, this is where we can start to add some KubeVirt annotations. I'm creating a persistent volume claim called rhel8-nfs; it uses this containerized-data-importer label, with this additional annotation. What this is doing is, as soon as I create this persistent volume claim, it's going to look for an available PV.
Again, it has to use the PV that I had to create before, because it doesn't have dynamic provisioning, with a size of 40 and a storage class name of nfs. And as soon as it's found one, it's going to run something called the containerized data importer, and it's essentially going to fill the persistent volume with the data that it finds in this particular disk image. Now, this is just a RHEL 8, you know, cloud image; it can be whatever I like, it can be
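The claim with the CDI annotation would look roughly like this. The image URL is a placeholder, and the annotation key shown is the one the containerized data importer watches for; a sketch, not the exact on-screen file:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhel8-nfs
  labels:
    app: containerized-data-importer
  annotations:
    # Tells CDI to fill the bound PV with this disk image.
    cdi.kubevirt.io/storage.import.endpoint: "http://example.com/images/rhel-8-cloud.qcow2"
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 40Gi
```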
you know, Linux, Windows, whatever I want it to be. And so, as soon as I hit create on this, it's going to notice that it's got a persistent volume claim that I've labeled containerized-data-importer, and it's going to pull that content in. So let me show you that. So, straight away it's bound: it found, in the list of volumes, this particular claim. And if I then go into my pods (let me change this namespace to default), you'll see what I now have.
So if we go back into storage, you see persistent volumes; you see this nfs-pv-1, and it's not currently in use by anything at the moment: no owner, so nobody is actually owning that, so I'm free to use it, and it has the contents of a RHEL 8 disk image. So, now that I've set up networking and I've set up storage, I can actually show you the creation of a virtual machine inside of this environment.
Remember, we started with a fresh OpenShift environment with no virtualization whatsoever. So I'm gonna go into virtual machines. Now, I could do this via the YAML, you know, I've got a definition here, but I just want to show you the wizard as part of this. So I'm gonna say create virtual machine, "New with Wizard". I don't have any templates; you know, you can make templates if you want to. Then, the source of my machine: I can PXE boot these machines; I can point it to a URL. Now, this is important.
I could have done a bit of what I just did with URL, you know, just pointing directly to that qcow, and it would have created the volume and attempted to do all that for me. Container disk image: so if you want the source of it to be just a container image, it'll run that ephemerally. Or a disk. Now, I've already created the disk; I just want to boot from my disk. So I'm just gonna hit disk. Operating system...
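What the wizard ultimately produces is a VirtualMachine object. A rough sketch of one that boots from an existing PVC and adds a bridge network, using names assumed for illustration (an `rhel8-nfs` claim and a `tuning-bridge-fixed` network attachment):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: rhel8-vm            # hypothetical VM name
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
            cpu: "1"
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
          interfaces:
          - name: default
            masquerade: {}        # standard pod networking
          - name: external
            bridge: {}            # attached via the bridge NAD
      networks:
      - name: default
        pod: {}
      - name: external
        multus:
          networkName: tuning-bridge-fixed
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: rhel8-nfs    # the CDI-populated claim
```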
Rhys, I know you have a strong OpenStack background. How does this equate to OpenStack? Because, in my head, I tend to map things like: if I do this source URL, that's a lot like, you know, creating a Glance image, or creating a VM based off of a Glance image. Now, I fully admit that OpenStack is not my forte, so I might not be using the right terms here, but, conceptually, is that right? Wrong?
Yeah, you're absolutely right. So, URL: yeah, you can think of that as kind of like a Glance image, as in, use this Glance image URL and build a disk image based on that. Disk is a bit more like Cinder volumes, as in, I've already got the volume there, just use that. Container is more sort of specific to OpenShift. And PXE... well, OpenShift... sorry, OpenStack never really supported PXE; you know, the easiest way of doing PXE inside of OpenStack was to attach a CD-ROM with a PXE image, so it booted off that, and you kind of got PXE that way. So yeah, that's kind of the difference. And then, you know, flavor: flavor is very much an OpenStack slash, you know, public-cloud-type term. This has some presets, and you can adjust these; you know, it has some preset resource requests for CPU, memory, and various other things. Yeah.
D
I don't remember the object inside of OpenShift — inside of Kubernetes — off the top of my head, but you can customize those: you can create new ones, you can remove them. I think if you select the operating system as well — like if you were to choose Windows versus Linux — it'll customize those additionally, right? It sets different options.
B
It does, yeah. And if I were to choose Windows, it changes some of these menu options as well — there are some additional things we can do with Windows. And that's a really important point: this isn't just Linux on Linux. We can do Windows on Linux as well — pretty much anything that KVM can support will run just fine. Obviously, in the supported product we have a somewhat restricted list of operating systems that we support, for obvious supportability reasons.
B
And all of these various virtualization enhancements that we've made on the underlying platform — the RHEL, the KVM, the libvirt, and all of the work we've done there around security, networking, and storage — we're able to leverage all of that with OpenShift virtualization. We're not throwing all of that away and starting from scratch. All we're doing with OpenShift virtualization is teaching Kubernetes — and, of course, OpenShift — how to manage those objects, how to define them, and how to extend Kubernetes to give you access to those resources.
D
I think OpenShift virtualization is using the same KVM that RHV does — because there are two different KVMs. There's the KVM that ships with RHEL, where the only supported guest operating system from Red Hat's perspective is RHEL; and then there's the KVM that comes with OpenStack and RHV — and now OpenShift virtualization — which adds more RHEL versions as well as Windows, etc.
B
Absolutely. So, the reason behind that — I won't go into too much detail, but Red Hat has a firm commitment to never break API and ABI compatibility across the lifecycle of RHEL. What that essentially means is that if our customers deploy their application or their virtual machine on top of our infrastructure — say it's RHEL 8.0 — it should work as expected and not require recertification throughout the entire ten-year lifecycle of RHEL. You can count on that.
B
You
know
well,
I
think
I've,
only
ever
heard
of
one
time
where
we
ax
dentally
introduced
a
regression
that
absolutely
got
fixed
in
that
customer's
workload
or
application
wasn't
working
as
expected
at
that
time.
That
is
a
huge
commitment
from
an
engineering
perspective.
You
know
we're
one
of
the
only
vendors
that
really
works
to
prioritize.
You
know
not
introducing
any
changes
that
would
that
would
interrupt
that.
So
that's
you
know
that's
incredibly
important
for
organizations
that
are
there.
So
what.
B
No,
you
actually,
frankly,
please
do
I,
don't
want
to
monopolize
this
so
yeah.
The
problem
with
with
that
is
that
you
know
rel
lasts
a
very
long
time.
You
know
it
has
a
10-year
life
cycle,
your
customers
that
are
dot
rel8.
Today
they
have,
you,
know
random
at
nine
years
or
so
worth
of
life
lifecycle
on
that
it
becomes
more
and
more
challenging
for
us
to
introduce
new
features
and
new
hardware
enablement
as
the
operating
system
ages.
B
So
we
wanted
to
both
provide
our
same
guarantee
for
keeping
that
stable,
API
an
ABI
compatible
on
RAL
further,
you
know
the
the
PME
KVM
binary,
but
also
allow
us
to
not
break
it,
but
be
a
little
bit
more
aggressive
with
regards
to
the
additional
features
and
harder
enablement
that
we
put
into
qmu
and
KVM
and
liver
and
various
different
things,
and
some
of
our
additional
products
that
were
targeted
as
virtualization
platforms
like
Rev,
like
OpenStack
and
now,
of
course,
like
open
shift
virtualization.
So
we
sort
of
created
a
bit
of
a
fork.
B
So
now
there's
two
types
of
qkv
and
binary
that
you
can
install
on
row.
One
is
it's
just
standard,
qmk
VM,
which
does
have
limitations
right.
Just
sports,
RAL
and
I
think
you
can
only
deploy
for
virtual
machines
on
it
and
has
limitations
of
the
amount
of
memory
and
other
hardware
that
is
put.
Then
you
have
QE
k,
vm
Rev,
but
the
binary
is
the
same
across
OpenShift,
virtualization,
OpenStack
and,
of
course,
as
it's
originally
named
named
after
and
that's
where
it
has
a
lot
more
features.
B
D
B
Exactly. Alright, let's crack on with this wizard. So, I've filled in these details and hit Next. Networking interfaces is the next option. By default it wants to put me on the pod networking. Now, I could leave this and add an additional interface, but I'm just going to delete it, because I want my machine to be directly on my bridge network. So you'll see in here the network attachment definition — sorry, exactly what you said was going to bite us did: I'm in the wrong namespace.
B
Alright, there it is — tuning-bridge-fixed. The model is just virtio. If you have any guest operating systems that don't support virtio, then of course you can use some of the more legacy ones, but virtio is just fine — it's just going to be a RHEL guest. The type of this one is bridge; you can use other types as well, but here the type is bridge.
B
MAC address I'm going to leave blank — it will automatically generate one for me. So I should have only one NIC, attached directly to my br1 bridge on my implementation. Hit Next — right, disks. "No disks found" — okay, I need to add a disk. The source can be blank, so I could literally have a completely blank one and then go in and specify some additional things. But no — I want to attach an existing disk that I have. Attach Disk: it's going to ask which persistent volume claim you want to use.
B
Nice, yeah — alright, okay. So, I managed to get back to where we were. Select persistent volume claim — right, this is the thing: we're actually after the rhel8 NFS persistent volume claim, so hit that. The name of this disk here is fine. Interface: you can be specific here if you want it to show up as, you know, sda or what have you, but I'm going to go with virtio — it's a RHEL 8 guest again, so virtio is definitely the way forward. Hit that, and it's attached this disk. Now, don't ever forget to set the boot...
B
...source. I want the boot source to be disk-0, because remember, you could have the boot source be something else — it could be PXE or something like that. So disk-0 is my boot source; hit Next. Here you can add some additional cloud-init configuration, you know, if you wanted to force it to be...
B
You know, we've got a limited set of parameters in the UI today, but you can put it all in a script if you want — you can expose all of the capabilities of cloud-init, should you want to. I don't need to worry about that, though: I have pre-customized this RHEL image with my root password and so on. Virtual hardware: attach CD-ROM — if you wanted to, you could actually attach a CD-ROM here. Review.
B
You know, pretty simple really. Source is a disk, it's a RHEL machine, small flavor, server profile, and this is the name of the machine. And this is an option I glanced over earlier: "Start virtual machine on creation." You don't have to do this. It's very much like what we have in RHV and some other things like OpenStack, where you don't have to start it immediately — you can go in and make sure everything is configured first, without just trying to start the machine up.
B
NIC-0 is my bridge network, which I called tuning-bridge-fixed, and my storage is my NFS disk that we created a little bit earlier. So, hit Create Virtual Machine — "successfully created virtual machine." See virtual machine details: just like any other object inside OpenShift, it looks exactly the same, but there are, of course, additional custom resource definitions being exposed, showing you some additional insights into the machine. When it's up and running we can go into...
D
Rhys, real quick — let's look at the YAML for the virtual machine, because there are a couple of things that might be important. For one, you can literally copy and paste this — copy it out and save off your virtual machine definition into your source/revision control system.
B
Exactly right. I showed it through the UI because I think the wizard in the OpenShift console is pretty cool, but of course you can actually do everything that I've just done through the command line. Now, you'll see that it's still showing no IP.
B
The main reason is that the configuration I set for the network definition is using a bridge network as just a layer 2 network. The VM will DHCP on its own, but OpenShift doesn't have any IPAM control over this particular machine. What you might find is — if I do that — hey, I've got an IP address now. The reason why it does...
B
...that is because it has the guest agent installed in the VM. The guest agent is able to report up through OpenShift virtualization what its IP address is, and then that gets updated inside OpenShift, so the IP address will now be shown if I go into Overview — IPv4, IPv6, it's all there, good to go. It'll also be able to show you some utilization information, once it's able to get that updated. Now you can go in here...
B
You can see the console. So, just like OpenStack and RHV, you have direct access to the console, and you'll notice I can get directly into this just fine. Or indeed, just to prove that networking is working, I can ping out directly from this machine — and also, just to prove that it is actually there, ping the .62 address — yeah, there we go.
B
Yeah, we're very fortunate in Europe with our network connectivity, that's for sure. Alright — so that machine is up and running. I don't think I automatically expand this disk — no, I don't. So by default this — sorry, not this flavor — this QEMU image is just a 10-gig volume, but if I were to extend that partition, I could grow this right out to the 40 gigabytes on that NFS share that's available for me to use.
B
So now, if I do — it might need a reboot, yeah, I need to reboot — but it's still a 40-gigabyte disk. Now, how do we link all of this together? Well, if I do `oc get pods`, I now have this virt-launcher pod. This is important, because remember: Kubernetes, while it's able to understand what VM objects are and how to associate and bind it all together, still launches a pod to spawn that virtual machine. A virtual machine is just a binary, and that binary has...
B
...a libvirt configuration that defines how that binary comes up. So, what we can do is `oc exec` it: I want to work on that pod — give me bash inside of that container. So I'm now inside that virt-launcher container. If I do a `virsh list`, there's my VM, right? I can do `virsh dumpxml` on it — pipe it to less, okay — and there we're just looking at the libvirt definition for that particular virtual machine.
D
And I think it's important to point out that we can geek out and dig into all these things and kind of prove that, yes, it's all the same — but from a user standpoint, if I'm an application team, if I'm the virtual machine administrator, I don't really care, right? You just showed that I can use the OpenShift console if I want to go in and manage it just like any other virtual machine — access the console and all that other stuff.
B
And you could define, as a sort of overall workload — that workload could comprise virtual machines and containers, and you could define them almost as one push: deploy all of these resources, where some of them happen to be virtual machines and some of them are containers, and OpenShift will still do all of its magic on the networking front. So yeah, it's pretty cool.
B
Yeah, so then I'm just going to specify that as a raw file instead of a qcow. I know they were working on qcow support in the backend — whether they've fixed that up, I can't remember off the top of my head. So: read-write-many — we should be good to go on that. Alright, so that'll make that 50-gig disk, ReadWriteMany, on that storage, which should be good to go. Let's try that. Alright — so that's bound instantly. That's good!
B
I'll leave this on pod networking, just so you can see the pod networking and how that works. Then I can add my disk — this is the attached Windows disk; the boot disk is the disk I've selected. Again, if you've got cloud-init — there is Windows cloud-init as well; I said it's supported, and there are ways and means of achieving that. And this is cool as well: by default, with Windows, it's going to attach the virtio Windows drivers.
B
So, if you need them — if, for example, your virtual machine comes up and it can't access networking or storage or something like that — you can do it this way. Say you're provisioning an old version of Windows that doesn't have that virtio support: it will attach this as another disk, or a CD-ROM, so that you can access those drivers directly from the CD-ROM interface. So you can install drivers as Windows setup runs — it attaches this via...
B
You know, that's just on a pod networking interface, so anything you want to do — exposing that via a route or a service or a load balancer or whatever you've been doing through your normal OpenShift day-to-day activities — you can absolutely do with this. Set the port you want to use, wherever it's listening, whether it's a database or a web server or whatever it might be. It's kind of irrelevant that it's a virtual machine — you can administer it in any way that you want, right?
D
Also, there's a question — "so why is that special?" — referring to accessing Windows in OpenShift, you know, on Linux. I think you kind of just addressed that: it's a virtualization environment in OpenShift that is just as capable of doing all the things you would expect from any other virtualization environment. This one just happens to be Kubernetes-based, right?
C
So, you have your Kubernetes environment and your VM environment living in the same place. That really lowers the overhead and all the operational complexity of having disparate systems spread out throughout your data center. Now it's just OpenShift: you can scale OpenShift and you can manage OpenShift. You don't necessarily have to worry about this virtualization platform and this container platform and this hardware platform — it's the hardware and OpenShift, and off you go.
D
Christian's making a comment about OpenShift on OpenShift — yes, you could technically deploy CoreOS virtual machines into OpenShift virtualization and either deploy distinct OpenShift clusters or, if you really wanted to — definitely not supported — you could create worker nodes on your worker nodes.
C
Like, here's a Windows desktop. You can get licensed, unlicensed, or temporary-license versions of Windows for testing purposes all day long. So this is a fantastic example of how you could take any kind of work environment and say: oh, you need a Windows box for testing? Here you go. And you can add that to your CI as well, right? You can spin up this Windows box, run your tests on the Windows box, and tear it back down — as Christian says.
B
Yeah — and also, if you're just using pod networking, then obviously every pod can contact every other pod on the cluster, so you don't have to worry about "well, I need to get to my VM that's running wherever" — connectivity is just there already. Alright, so I'm going to delete this Windows virtual machine, because I think we've proven that that works — and there we go, it's gone.
B
A ReadWriteMany PVC, yeah. Now, there is some work ongoing to do live block copy — as in, live migration without shared storage: it'll do the copying of the bits, and eventually, when it has transferred all of the bits and it can do an immediate switchover, you're good to go. Obviously, if the rate of data change outpaces your network bandwidth or what have you, it'll never migrate — that's kind of inherent; there's no way we can stop that easily. But yeah.
B
The simplest object you can create here looks like this: VirtualMachineInstanceMigration is a custom resource type. There's a migration job — the name here can be whatever you want it to be — and you specify the name of the VM that you want to migrate, rhel8-server-nfs. Or you can just go in here and do it through the UI.
B
Yes — oh yes, absolutely. And there is a way you can expand on the definition and specify the destination host. So, I'm just going to keep pinging that so we can see that it works. Hit Migrate — yes, it's migrating; you can see in the background that it's working too. And if everything is set up nicely and everything works — if you hover over that now, it's running on worker one — you'll see there was just a five-millisecond blip. So that's now running on worker one, and we can verify with `oc get pods`. Obviously, people —
B
There you go — there's my VM, rhel8-server-nfs, on worker one. So we know that it migrated just fine, and the machine shouldn't even have noticed that it was migrated: same IP address, good to go. So live migration works. You can also do node maintenance. If you want to take down a machine, you want to drain it of all of its pods — but the critical thing here is that you do it through the node maintenance mechanism, so that it doesn't just terminate the pods like a standard drain typically would.
B
As I understand it, this is going to ensure that the migration happens first. And we can kind of verify — let me just check the definition of this. What have we got here? Running: true. I don't know whether it has any effect, but let's try it — let's see what happens. So, it's running on worker 1, and I've got a node maintenance definition here, which is just "worker maintenance" — the name of this particular one.
B
So, this one is a little bit bigger — this is a MachineConfig. The reason why you have to do a MachineConfig is that we need to create directories on the underlying host, because it's using local storage. With host path, we are literally using a path on the underlying filesystem of our worker nodes to store those disk images — we're no longer using NFS or anything like that.
B
Now, you can do data migration: if you've already got a host-path volume and you want to move to NFS, or back and forth, you can absolutely do that. You can absolutely move between various other non-local storage options, should you want to. So, we're going to apply a MachineConfig to these machines, and we're just going to add a new systemd unit file. This is going to do two things.
B
The first thing it's going to do is make a new directory on the root filesystem — /var/hpvolumes. Then it's going to relabel it so that we don't have any SELinux issues, and it's set to run on system boot. So we apply the MachineConfig — and my machines are now going to reboot.
B
While it starts doing that, let's have a look at some of the other files that we're going to apply in a minute. The first thing we do is apply the MachineConfig. Once we've applied that, we need to apply the configuration of the host path provisioner — that's just the resource definition for it, and I'll show you that YAML.
B
And last, when we're ready, we can just create a PVC. I'll just create all these files — we're ready to go. Again, we can of course do all of this via the UI as well. All I'm going to do is create another VM based on RHEL 8 — I'm going to call it rhel8-hostpath — and run the same containerized data importer process, which pulls that RHEL image in directly for us; but, of course, this time we specify the storage class name as hostpath-provisioner.
D
So, you can. I think at the very beginning we talked about doing emulation versus nested virtualization — both work. With emulation, of course, you're going to have an even more substantial performance penalty. And I think, if I remember correctly, with the operator it's as simple as changing that false to a true when you initially deploy it — or do you still have to create a config map? I don't remember.
B
You do. So, before we had the operator install, you had to run a deployment script that would deploy the little bits you needed, and you could set a parameter there — KVM_EMULATION — and that works just fine for doing a little bit of testing, but the performance is pretty bad.
B
By default, I think libvirt enables it — especially when you do "copy host CPU model", it'll pull through the nested virtualization, so you can do it there. All that the node-labelling part wants to see is that the node has /dev/kvm available: if it has /dev/kvm available, it knows that it'll work just fine.
B
So, everything that Red Hat does is open source, and part of our mantra is absolutely "upstream first." We always develop all of our new features, security fixes, and enhancements — whatever they are — in the community first. So everything that you saw here is available today; it's all based on open source. I've deployed the Red Hat supported operator, but there is an equivalent upstream community project called KubeVirt.
B
What you saw today is technology preview. You can install it — we provide all of the bits alongside OpenShift, so you can try it out — it's just not fully supported. We can't provide our standard support level agreement for it. So if you raise an issue or a bug, we'll do our best to help you with it — we'll never put the phone down on any customer, ever — but there's obviously a limit to what we can actually do in terms of being able to support that.
D
It sort of changes depending on whether or not you want to use supported versus not supported — kind of the location that you're getting it from, right? As in: if you're getting it completely upstream, in the case of OpenShift virtualization you're really using the KubeVirt project. If you're using the preview — i.e. the unsupported, or not-yet-supported, version — you get it from Red Hat as tech preview, and then when it goes GA you get it from Red Hat, the difference being that we will fully support it at that point. Duck Hunt!
B
My team does a lot of enablement — that's internal training for our technical resources — but we also do a lot of labs at Red Hat Summit and various other conferences, and so we try to make it a little bit more fun. Once the attendee of one of our lab sessions has got that cluster up and running — how do you prove it? And I refuse to go down the path of deploying WordPress or something like that; that's the most boring thing in the world.
B
If I can get a game running — and the way that I deploy this game, we do a proper source-to-image build: it downloads the code from a git repo, builds it using the standard Jenkinsfile pipelines, spits it out into the internal registry, deploys it in a pod, scales it out, attaches a route to it, and you expose it and can use it through the ingress. That's using almost all of the features of OpenShift right there — to run a game.
B
It does, yeah. So that goes back to what I was saying earlier: we're leveraging all of the work that we've put into virtualization on Linux for at least the past ten or eleven years that I've been at Red Hat. All of that engineering, all of that effort, is literally being reused. We call libvirt to instantiate that virtual machine — so when I was doing the debugging earlier on that virtual machine to show you what's behind the scenes, I was using virsh, and we dumped the libvirt XML.
B
If you joined a little bit later, you can go back into the recording when it's available after the stream and see us go into that. It's just using libvirt: KubeVirt is using libvirt, which is using QEMU/KVM — it's using everything from RHEL that you'd typically use in a virtualization environment. The big difference is that it's orchestrated using Kubernetes, and not OpenStack or RHV or the standalone management tools — which you're seeing here only because I've got a nested environment.
D
I think that's important to point out, right? Because even with RHV and OpenStack, KVM is the hypervisor — KVM is the part that's actually executing the virtual machines. All of the other bits and pieces on top are really focused on two things. One is getting the resources that those virtual machines need available to whatever host may be using them — so storage, network, etc. — and then actually scheduling it.
D
So, whatever policies you put in place — you know, "I want high availability," "I want affinity," "I want x and y and z" — it's using the scheduler to actually make that decision. But at the core it's still just KVM; it's the same hypervisor. We're just changing the management plane, if you will.
B
So, I mean, we could try and troubleshoot some of this if we like, but this is not how anyone would really run OpenShift in reality. I've already pushed the boundaries of not only the product, but also this system — so yeah, I think something may have fallen over somewhere.
D
I think it's fine. I think, conceptually, what happens — and you walked through this already — is that the MachineConfig that was created is specifically for SELinux. Because it's RHEL 8 — or, let me rephrase that, it's CoreOS, but CoreOS is built on RHEL 8 — all of the normal features like SELinux are there. So we have to take that into account when we want to use local storage — i.e. storage from a local physical storage device — to host virtual machine disks. That was the genesis for all of this with the machine config.
D
The MachineConfig was there to do that SELinux relabeling, to allow it to happen. Theoretically, once the nodes come back up, we just use the host path provisioner: one, we deploy it; two, we define that it is a storage class; and then three, you simply start creating PVCs using that storage class, and it will result in folders and files being automatically created on the node.
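Step two — defining the storage class — might look like the sketch below. The class name matches what's used in the demo; the provisioner string is the one used by the upstream hostpath-provisioner project, and the binding mode is an assumption:

```yaml
# Hypothetical StorageClass for the host path provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

WaitForFirstConsumer delays binding until a pod is scheduled, which matters for local storage since the volume must be created on the node the consumer lands on.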
B
Yeah, exactly. The host path provisioner is certainly not virtual-machine-specific, but we simply utilize it to host our virtual machine disk images. And the beauty of having a provisioner is that it does all of that dynamically: you say "I want this claim, this size," and you can attach some annotations to it.
B
That's a good question. I think it has something to do with the persistency, but I'm probably not the best person to ask there. I know that there's some big difference between this and the local storage operator — the local storage operator doesn't have dynamic provisioning, but the beauty of local storage is that you can use entire block devices instead of just a filesystem location. emptyDir is, I would say, a little bit more of a hack, in that you just use...
D
And I think the answer to that question — because I was trying not to ask you a question that I didn't maybe already know — is that if you create an emptyDir, essentially you are using the standard graph storage, right? /var/lib/... — wherever it normally stores the ephemeral data for container image layers. Whereas with the host path provisioner, I can have a completely separate storage device.
B
Yeah
so
I
I
forcefully
rebooted
those
two
workers
to
see
if
the
MCO
had
actually
run,
but
it
just
hasn't
run
them
yet
so
yeah
something's
getting
stuck
in
my
particular
environment.
But,
as
you
say,
this
would
never
happen
on
a
you
know,
proper
deployment
because
I
just
haven't
set
up.
You
know
those
those
particular
thresholds
it
just.
It
was
more
than
happy
to
take
down
both
of
my
workers
where
the
majority
of
the
you
know.
The
infrastructure
parts
were
also
running.
C
A few of our shirts are red, yeah — that's funny. My wife knows I'm terrible at dressing myself, right? I had a new pair of pants and I was like, "so they're blue — what color can I wear with them?" She's like, "oh, like gray," and I look at all the Red Hat shirts and I'm like, well, there's one — that's the volunteer one I got, that's it — like a light gray or a red. There we go. So yeah, alright — with that, I think we're done here. Thank you, Andrew!
C
Thank you so much, Rhys — appreciate the time. Coming up on the schedule later today, we are having an OpenShift Commons simulcast — I guess you'd call it a multi-stream OpenShift Commons. We'll be doing a session with Andrew Clay Shafer, the DevOps luminary that he is, talking about transforming your environments — your work environments, your systems, the way things work in your company. So join us today at noon for that simulcast, and tomorrow at 2 o'clock Eastern — I'm sorry, I do have UTC timing on...
C
It is 1800 UTC. There will be some deploying of OpenShift on bare metal happening, so check that out — Eric will be running that one tomorrow, while I am off doing behind-the-scenes work for the stream itself. So yeah, thank you all for joining us today. Have a wonderful day, evening, night, week, weekend — the whole nine yards. Rhys, again, thank you so much for joining us today. Thank you.