Deploying OKD4 on OpenStack
August Simonelli & Nick Satsia
OKD4 Live Deployment Marathon
August 17, 2020
OKD-WG
A: All right, so to start off with: morning, everyone. We're actually in Australia; I'm in Sydney and Nick's in Canberra, so it's a very early start for us. If we stumble a little, excuse that, and I'm sure that'll be no problem. But let's get straight into it, because, as you know, some of the installations we've been looking at today have taken some time, and ours is no exception.

A: So I'm going to quickly introduce the architecture, and then we're going to jump into getting the installer kicked off and start to see the good stuff. I come to the OKD space from OpenShift; I work for Red Hat, so I will probably mix the terms up. So, to put OKD on OpenStack, we have to look at some key integration points, but in the middle of all of that is the use of the OpenStack APIs.

A: We don't ever want to interact with our underlying infrastructure without going through those APIs, so that we can ensure a good sense of control over the environment. When we do this installation, we're going to utilize different components of the OpenStack environment, which we'll dive into a bit later, once we have it running and questions come up.

A: Things such as Glance for image storage; we're going to use some Cinder for block and disk storage presentation; and we're also going to use object storage. A bit of everything. And of course, during the talk, Nick will break in with his insights, and please, Nick, add anything based on the experiences we've all had.
A: In this example, you can see I'm set up via an overcloud, so we're going to be using Red Hat OpenStack Platform, but it'll work the same with most OpenStacks. As you'll have heard a bit today, there are multiple different ways to install OKD. There's the full automation, which is the installer-provisioned infrastructure (IPI) model. This is a guided install; it's easy, but very prescriptive.

A: So, to really reinforce that: I did not install this OpenStack cloud. I'm using an internal cloud, and I do not have admin privileges on it. I just wanted to see what it was like as an actual installer. So let's go straight to the install, because again it's going to take a bit of time, and we'll see how far we get. I have two environments.
A: We have a running OpenStack cloud, so nothing too exciting. It's got Cinder for volumes. It's got a basic setup where we have an external network presenting our external connectivity. I've gone ahead and created a simple tenant network, because what I want to do is show that we can actually plumb some of our OpenShift nodes into that network. What else have we got sitting in here? We've got some object storage, so we're looking at a bit of everything.

A: In this case we are backed by a Ceph install, so this goes through the RADOS Gateway (RGW) to access that. What else? A couple of floating IPs that need to be pre-allocated and set up in DNS. But to really understand how that works...
A: Let me switch back to the installation window. How we talk to the cloud, for the installer, is through the usual clouds.yaml file. In this case I've got two clouds indicated here, and I'm going to be using the one I've called openstack, with my demo user. What we'll see in a minute is that when we run the OpenShift installer, it's actually just going to communicate with the cloud directly and prompt me for the various aspects of what is needed in that cloud.
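A clouds.yaml along those lines can be sketched as below. Every credential value here is a placeholder; only the cloud name `openstack` comes from the demo.

```shell
# Write a sketch of a clouds.yaml with one named cloud, "openstack".
# All auth values are placeholders; substitute your own cloud's details.
cat > clouds.yaml <<'EOF'
clouds:
  openstack:
    auth:
      auth_url: https://openstack.example.com:13000/v3
      username: demo
      password: CHANGE_ME
      project_name: okd-demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne
EOF
# Both the openstack CLI and the OpenShift installer look for this file
# in the working directory, ~/.config/openstack/, or /etc/openstack/.
```

The installer's "which cloud" prompt shown in the demo is driven by the keys under `clouds:` in this file.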
A: That's how we can then create our install-config file. Now, while it's an IPI and it will generate most of that on its own, we will add some extra pieces to it. So let's get right to building the install. We're going to do a create install-config, and I'm going to do this off, as I mentioned, a specific cloud. The first thing it asks me for is a public key. This will be seeded inside the ignition file, to allow us to SSH to the Fedora CoreOS nodes.

A: Additionally, you can see it's now actually reading the clouds.yaml file, and it's asking me which cloud I want to install on; I'm going to choose openstack. It asks me what my external network is. You can see that it's showing the same stuff I just showed you in the other window: we've got a custom tenant network and an external network, so this will be my external network. I need to have two IPs set up prior to the installation. One is for the API VIP, which we'll see sits in front of the tenant networks that get created; I've pre-set this one up in DNS, in Route 53, just to make it easy. The other IP that's required is for the ingress router; same situation. It's the wildcard domain for apps, and it's also been set up in Route 53. So we'll select that, and then I've got to choose a flavor.

A: These flavors are being presented to me as a tenant in the cloud, so this one looks good. The usual stuff here: I've got a base domain prepped, so I'll call it that, and then the pull secret is the same thing we saw before; it's the fake one. That will generate my install config, but I actually have to make some alterations to it, and I have those here.
A: I have one ready, alongside the one we just created. A couple of things we're going to do to make this IPI install a little bit more customized: as you'll see on the left, in red, I have reduced the number of replicas down to one. In theory, I'm hoping to speed up the installation time; in practice, I may not.

A: Additionally, I'm able to add that additional network ID; this block actually belongs to the worker nodes, not to the controllers, so that network you saw in there I'll be able to attach automatically with the installer. Additionally, for the control plane nodes, I'm going to add a root volume, a block volume out of Cinder.

A: This is the volume pool provided by OpenStack that I can actually carve volumes out of. I'm doing this mostly to demonstrate the features: because we need to back etcd with something fast, we might prefer to install this way, just to have the options. And then I've added an external DNS, so when the OpenShift installer creates the subnet, it can seed it with a name server; it doesn't have to.
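As a sketch, those edits land in install-config.yaml roughly as below. The field names follow the installer's OpenStack platform section, but the UUID, volume size and type, and DNS address are placeholders, not values from the demo.

```shell
# Sketch of the install-config.yaml customizations described above:
# one worker replica, an extra network on the workers, a Cinder root
# volume on the control plane, and an external DNS server for the
# installer-created subnet. UUIDs and sizes are illustrative only.
cat > install-config-snippet.yaml <<'EOF'
compute:
- name: worker
  replicas: 1
  platform:
    openstack:
      additionalNetworkIDs:
      - 1f2b3c4d-0000-0000-0000-000000000000   # pre-created tenant net
controlPlane:
  name: master
  replicas: 3
  platform:
    openstack:
      rootVolume:
        size: 30
        type: tripleo
platform:
  openstack:
    cloud: openstack
    externalDNS:
    - 1.1.1.1
EOF
```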
A: Many clouds may offer that directly, but it's a convenient way. And Nick, if you've got anything to add, I absolutely hope you jump in.

B: I was just going to say that, with the volumes, having centralized storage like that allows you to do live migration much more easily between computes on OpenStack.
A: Yeah. So, additionally, I'm specifying the cluster OS image. Those who have done bare-metal installs are probably familiar with this. I'm doing this because, while the IPI installer does, as you saw the guys do in the previous install, pull down a qcow...

A: The problem is that, because I'm backed by Ceph, I want to use a raw image. So I'm able to create my own, place it in there, and still access it from the IPI installer. So again, in the interest of time, I'm going to add that config file into my directory, so that now I'm actually installing off of that one. All right. So again, those are the pieces I just spoke about; as the guys mentioned, we're using OVN-Kubernetes on there.

A: You might have seen in the previous install that's done in OKD, but in OpenShift it's tech preview at the moment. So anyway, let's get the install going. We're going to watch the environment as we do this, so I've set up a watch that's going to show the various components as the instances are built, images are used, that type of thing. Let's get this going: we're going to do the same create cluster off our directory, and, like the previous guys, I like debug. Okay, so this should go ahead and get that install going.
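The raw-image preparation and the kick-off can be sketched as follows. The image and file names are assumptions, and the cloud and installer commands are left commented because they need a live cloud and the installer binary.

```shell
# Convert the Fedora CoreOS qcow2 to raw (better suited to a Ceph-backed
# Glance), upload it, then point the installer at it by name. The cloud
# and installer commands are commented; they require live access.
#
#   qemu-img convert -f qcow2 -O raw fedora-coreos.qcow2 fedora-coreos.raw
#   openstack image create --disk-format raw --container-format bare \
#       --file fedora-coreos.raw fedora-coreos-raw
#
# In install-config.yaml, reference the uploaded image by name:
cat > cluster-os-image-snippet.yaml <<'EOF'
platform:
  openstack:
    clusterOSImage: fedora-coreos-raw
EOF
#
# Then kick off the install with debug logging, and watch the cloud:
#   openshift-install create cluster --dir=okd --log-level=debug
#   watch -n 5 'openstack server list; openstack volume list'
```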
A: Yeah, Nick's playing with some really exciting stuff, where we can start putting OpenShift into telco environments, so it's really cool. As you can see, the build has begun, so the installation has started. The one point we notice is that the ignition file has been created. In an IPI install, this is stored automatically in Glance; in a UPI you can store it wherever you want, but for an IPI we get it for free in Glance.

A: Additionally, the volumes have started to be built for the control plane and bootstrap nodes. Now, as you know, we didn't actually specify anything for the bootstrap, but because it's part of that initial cluster that does the bootstrapping, it defaults to using the same settings as the control plane.

A: What else can we say about this? The master ports are being created and allocated against that internal tenant network that you're seeing.
B: Yeah, and the actual DHCP control of that machine network, 10.0.0.0/16, is actually handled by the OpenStack router as well.
A: Right. And let's see what else: the floating IPs have now been matched to the internal network. I believe there are also trunks created for each of the instances as well.
B: Then, in a telco world, where you may want to have a jumbo MTU or something on the system, you would set that all up through your OpenStack Neutron, setting the default MTU to a certain size, so that when it creates networks, the OpenStack DHCP server will tell the...
A: This floating IP is actually chosen at random; it's not preset. It's attached to the bootstrap node to allow us to jump on there and, I guess, troubleshoot: look around and see what's going on. That same SSH key was added to that node, so in theory, once that thing comes up, we can connect to it. So let's go ahead and take a look.

A: Yeah, all right, so I can probably get this out of the way. We can see what OpenStack has done; we can see what's being built up and, as Nick said, the controllers. There we go. So this is the two-phase installation: the first phase is completed on the bootstrap, so soon it will be able to give this controller its...

A: And we can watch as well; I'll just jump off again. I've set this up so...

B: And it's downloading the images live, so depending on your internet speed, it could take a little bit.

A: Perfect. Eventually that cluster... I mean, this is the same install process you've hopefully been watching all day. There, that was Nick pointing out that the images have just been grabbed by the controllers.

A: Now, with the installation, if you notice, there's no floating IP placed on any of the control plane nodes, and ideally you're not going to jump onto them. But we can actually jump over to them if we want, just by using a jump command for SSH, and then we can watch what the masters are doing.
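That jump can be sketched as a tiny ProxyJump helper. The `core` user is the standard login on Fedora CoreOS; the example addresses are placeholders for the bootstrap's floating IP and a master's machine-network IP.

```shell
# Helper to reach an internal-only master through the bootstrap node's
# floating IP, using SSH ProxyJump. Addresses in the example are
# placeholders; the real command needs the live cluster.
cat > jump-to-master.sh <<'EOF'
#!/bin/sh
# Usage: ./jump-to-master.sh <bootstrap-fip> <master-ip> [command...]
fip="$1"; master="$2"; shift 2
ssh -J "core@${fip}" "core@${master}" "$@"
EOF
chmod +x jump-to-master.sh
# Example (commented; needs the live cluster):
#   ./jump-to-master.sh 203.0.113.10 10.0.0.15 journalctl -b -f
```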
A: And what you're seeing is the same installation process you've seen across all the other platforms.

A: Except we're sitting here inside of OpenStack. What else can we add, as it's not the most thrilling part? I have more slides. There we go: we're switching over into the second phase in a minute; it'll get kicked off, and then the cluster will go ahead and build up. Once that's come on, I want to jump back on the machine and have a look at some of the networking aspects that are being set up on there.
A: As you can see, the installer has also set up our security groups for the cluster, and again they're all unique to this one cluster. So, as a tenant in OpenStack, I could run multiple clusters in the same space; you just have to have your quotas set up for things to work. If I didn't have Swift, it would place the registry into block storage, into Cinder, but we're going to talk about that in a minute.
B: One thing to note is that the reason you can have multiple clusters as a tenant is that it tags every resource that is deployed for a specific cluster with the cluster ID.
A: All right, BlueJeans is hurting my machine, so hopefully I'm still with you guys. Yeah, here's what Nick's talking about: we've got the tag, we've got the unique network set up there. But if you click on the actual network...
B: It doesn't show you the tags in the GUI. If you were to do an `openstack network show` on the CLI, you'll actually see a tags property as well. Yeah, if you do a show on that one, yep.
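A sketch of that CLI check follows. The `openshiftClusterID` tag key and the `-openshift` network naming are assumptions about the installer's conventions, so verify them against your own cluster's infra ID.

```shell
# Helper to inspect installer-applied tags from the openstack CLI.
# Requires the openstack client and a configured clouds.yaml / OS_CLOUD;
# naming conventions below are assumptions, not demo output.
cat > show-cluster-tags.sh <<'EOF'
#!/bin/sh
# Usage: ./show-cluster-tags.sh <cluster-infra-id>
# Show the tags on the cluster's network, then list every network
# carrying the cluster's tag.
openstack network show "${1}-openshift" -c tags
openstack network list --tags "openshiftClusterID=${1}"
EOF
chmod +x show-cluster-tags.sh
```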
A: Right, on the control plane you can see some of the functions and stuff we have. So CoreDNS is running, and this is handling all the internal name-server stuff for the cluster. You can see the settings that were created when we ran the installer, and I was able to set the forwarder. Then we can see how our internal network has got a set of VIPs created, and these are going to be balanced across the entire cluster.

A: So this is all handled internally with the IPI install. Obviously, with the UPI you can do different things if you need to, but as a great way to get OKD running very fast on OpenStack, without having to fiddle around with too much external networking and DNS and such, this is quite convenient. And this is all managed, again, by keepalived.
B: Now, if your OpenStack is within a private network, and these IP addresses clash with that private network, you can renumber them through the install-config file, right?
A: But also, because we're going to lose the bootstrap soon: a proxy is running there as well. Again, this is all set up by the IPI installer, and we get it all for free.
B: Especially if you're installing OpenStack inside an enterprise network, which will typically be using 10.x addresses itself. You might want to change it to 172.16, or some other private addressing.
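That renumbering is a one-field change in install-config.yaml; the 172.16.0.0/16 CIDR below is just an example range, not a recommendation.

```shell
# Sketch: renumber the machine network in install-config.yaml so it
# doesn't clash with an enterprise 10.x range. The CIDR is illustrative.
cat > machine-network-snippet.yaml <<'EOF'
networking:
  machineNetwork:
  - cidr: 172.16.0.0/16
EOF
```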
A: Yeah. All right, now, one other piece of setup that we need to do manually, which I think is being added to the next release of the various pieces, is to attach that extra floating IP. We still have it, and we want to go ahead and attach it to the ingress port, so that the app URL, and then all the apps, can resolve. At the moment, we do that manually with just an openstack command.
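A sketch of that manual step. The `-ingress-port` name pattern follows the installer's convention but is an assumption here; confirm it with `openstack port list` before relying on it.

```shell
# Helper for the manual step: bind the pre-allocated ingress floating IP
# (the one the wildcard *.apps DNS record points at) to the ingress port
# the installer created. Name pattern and IPs are assumptions.
cat > attach-ingress-fip.sh <<'EOF'
#!/bin/sh
# Usage: ./attach-ingress-fip.sh <cluster-infra-id> <ingress-fip>
# Find the ingress port by name, then attach the floating IP to it.
PORT_ID=$(openstack port list --format value -c ID -c Name |
          awk -v n="${1}-ingress-port" '$2 == n {print $1}')
openstack floating ip set --port "$PORT_ID" "${2}"
EOF
chmod +x attach-ingress-fip.sh
```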
A: We're then going to actually get that resolution against that IP. So now is where we get to sort of the boring part of the install: everything is rolling, but what's going to happen in a minute, as everyone knows, is we're going to lose that bootstrap. There it goes, actually, so that's nice timing: the bootstrap is currently being removed, and that means the cluster is actually self-sufficient.

A: We're actually moving along quite nicely, and then finally, everyone's favorite part.

A: We can see that the various operators are starting to come up. So what I thought we might do while this is running is quickly look at some slides. As was said, the last talk ended with slides, and I've got a couple of slides too, so let's briefly talk about those things before we go. The timing here is actually accelerating, and I love it.

A: What we have going on here is that the bootstrap's been removed, and we have our control plane now established. You'll notice that the ignition image has been removed as well. That's wonderful, because obviously there's sensitive stuff in there, and we don't want that sitting around, so the installer removes it. Obviously, if you're using another method, you need to do that yourself, or expire it, or protect it, or whatever you need to do, but here the installer is looking after that.

A: As you remember, I think we asked for one worker coming up, and we're going to connect it to this extra network, but we haven't seen that actually happen yet; not until the workers come up. So let's quickly go back to these slides, because I think it helps to visualize a bit of what's going on while it's happening in the background.
A: So essentially, this is the IPI deployment flow on OpenStack. It's similar, obviously, to other cloud providers, but I found it helpful just to outline it. We run the installer, and then this bootstrap node grabs our Fedora CoreOS, or Red Hat CoreOS, image out of Glance.

A: It grabs ignition from Glance, sets up the bootstrap cluster, and then that cluster can pull the containers and maintain itself as normal. These are all running through Nova, and they can be backed by Cinder. Once that bootstrap cluster is established, we wait: the keepalived-managed VIP comes back over to the three master nodes, the cluster is established, and then we are able to go ahead and add workers through a machine set, which is what the installer will actually build for you.

A: So, let's dive quickly into the integration points while the installer goes on in the background. We visited this already; you should be familiar with what's happening there. So where are the integration points for OpenStack when you're putting OKD on there? One is the image service, Glance, where we store a CoreOS image for the base install, plus the ignition payload. We can't add the payload directly through cloud-init, because it's too big.
A: So what we actually do is offer the instance just a URL to go retrieve it. It does mean that the tenant will need access to Glance.

A: The next thing to talk about is networking. To try to summarize what we saw happening there (and please, Nick, add when you have more): we stick OpenStack floating IPs in front of these internal VIPs, and they're all managed by keepalived to balance across the cluster.

A: There's an API VIP on the bootstrap until the cluster is up, which shouldn't be unfamiliar; it's then weighted back to the masters. Those FIPs, as we talked about, have to be established in DNS prior to installation. The masters will then run everything that's needed for the cluster: your DHCP, an HAProxy, the CoreDNS plug-in for DNS, the mDNS publisher, and keepalived.

A: The ingress VIP is then set on the workers and fronted by a floating IP, which in our case we have to manually assign. The machine network is the Neutron tenant network that was built. This is what Nick was talking about: you can actually change it to a different CIDR if you need to, or a different network ID if you need to, but it is managed by the installer in an IPI; you can customize it somewhat, as Nick said. Let's see.
B: As a tenant on OpenStack, you can reuse the same IP addressing across multiple clusters, and then what distinguishes them is the NAT address, or the floating IP address, that you assigned to them. So each cluster that you deploy in OpenStack as a tenant would need its own unique floating IPs and DNS names registered, but in terms of the internal IP addressing, they could be the same.
B: In enterprises, I see multiple clusters: a dev cluster, a pre-production cluster, a production cluster, say, on OpenStack. Or you may even go down to specific development teams, so a development team may want their own cluster, and, as their own tenant on OpenStack, you can easily spin up their own cluster on that.
B: Another aspect of networking which we didn't talk about here: this is running the OKD SDN on top of the OpenStack SDN, so in a way we've got double encapsulation with VXLAN. But with OpenStack, you've got the option of deploying Kuryr, which allows Kubernetes to interact directly with the OpenStack SDN itself. It removes the OKD encapsulation layer, and it creates the infrastructure networks that OKD requires natively on OpenStack itself.

B: In that type of deployment, if you were to enable Kuryr, there would be many, many more networks that you'd see created on OpenStack, and not just the main node network; as it stands, OKD uses its own SDN to encapsulate all the infrastructure networks across that one.

B: So at the moment that's been abstracted from us, and all the OKD infrastructure networks are being encapsulated across this orange network that you're seeing here on OpenStack.
A: Yeah, I didn't play with Kuryr on this one, partly because the version of OpenStack I'm using doesn't actually have it; it uses OVS, which can be a bit heavy, and my resources are limited.

A: So this is quite good: the worker node has actually come up, so we're progressing quite well with the installation. As you can see through my watch command, we've connected to both networks, my own OKD network as well as the OpenShift network that was created by the installer. Additionally, we now have containers created. Not those containers; a different kind of containers, object-storage ones.
A: Now, it bothered me a lot that there were multiple names here, so I asked the developers, and it's a bug: there's only meant to be one, and they're working through that. I'm not sure how that happened, but these containers will be created. If we had been using Cinder for our registry, we would see another volume attached to one of the workers.

A: Obviously that's not a best practice for HA, whereas an object store isn't pinned to an instance. So far the installation is quite good; at the moment, all the components are coming up.

A: And, as you saw in here, we've actually attached into that network. So let's see the status of the install. All right, the worker's up; we might even be able to do a scale.
A: Yep, the installation seems to be moving along pretty nicely; this should complete shortly. So let's quickly jump back to the slides.

A: To finish what we were talking about: someone I work with at Red Hat, Robert Heinzman, a colleague of mine, did this incredible drawing to try to capture what's going on with all the different components of the installation, and so I wanted to reproduce it here. It's all his, but it really helped me to understand how it holds together.

A: Again, just to talk through these integration points: in one case, I showed an example where we set a root volume (I set it to 30 in the actual demo), and we saw those connected. As I mentioned, Cinder can be the registry backend, but that's not the preferred method in IPI.
A: The installer will actually test to see if it has access to object storage; if it doesn't, or gets some kind of no-access errors or whatever, it will go ahead and set the registry up in Cinder. Something that's coming soon, and something that's near and dear to Nick these days, I think, is the addition of more storage support for OKD and OpenShift. Right now we are a little bit limited, in that there's no RWX support when using Cinder for volumes. Of course, you can use an NFS server.
B: Well, what Manila will give OpenShift is the ability to basically utilize NFS on, typically, a Ceph storage cluster that's deployed with OpenStack. The way Ceph presents an NFS front end to OpenStack resources is by using the Ganesha project, which does the NFS-to-CephFS gateway for you, and Manila's responsibility is to establish and secure the file shares.

B: That gives you persistent volumes that can be shared across pods running on multiple workers, because they'll all be able to use the standard NFS mounts to communicate.
A: Some other obvious stuff here is Nova for all our compute. It can be ephemeral, it can be block; the usual requirement here is that we need fast disk. This isn't a public cloud, so you need to be working with an OpenStack admin to understand what kind of storage is being supported underneath. Also, there's improved support for availability zones, so you can actually start to place the workers where you want them inside the cloud; that's evolving as well, between the Cinder support and Nova, to get that right.
B: Yeah, and one thing to note is that the deployer will request anti-affinity for the masters and things like that, so it will ask OpenStack to try to place them on different compute nodes.

B: So it avoids trying to put two or three masters on the same compute, because obviously then you're not really resilient: your three masters are running on the same physical machine, which may die.

B: But it will try, you know, to spread them across different physical machines.
B: Yes, yeah, I've played with it quite a bit. If you do have central storage like Ceph RBD, and you do create the instances using volumes, or ephemeral backed by Ceph, on OpenStack, that's where you start needing to really coordinate with your OpenStack administrators on how they've deployed things and what's available. But live migration certainly does work, even on masters; I've done it many times, as long as your storage back-end systems can handle it.
A: Yes, indeed. All right, so our install's still progressing. The final piece, as far as the integration points go, is the object storage, Swift. It's the preferred registry back end and the default choice of the IPI installer, and we saw that. One thing I'd like to do is actually show a scaling demo.

A: So, when the IPI installer runs, it creates a machine set for the workers, to make scaling simple. I'd like to demonstrate that it works, and see how it works across OpenShift and OpenStack. If the cluster hasn't finished, I've got another one that I pre-built, to just show the demo while we do it. So let's see where we are: we're not quite there with the final installation, so we're going to go ahead and bring up this other one.
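The scaling path can be sketched with `oc` against the installer-created MachineSet. The script discovers the MachineSet name rather than assuming it; the `openshift-machine-api` namespace is the standard machine-API namespace.

```shell
# Helper showing how worker scaling goes through the MachineSet the IPI
# installer created. Requires oc logged in to the cluster, so the
# commands live in a script rather than running here.
cat > scale-workers.sh <<'EOF'
#!/bin/sh
# Usage: ./scale-workers.sh <replica-count>
# Discover the (single, on an IPI OpenStack install) worker MachineSet,
# scale it, then watch the machines come up.
MS=$(oc -n openshift-machine-api get machinesets \
     -o jsonpath='{.items[0].metadata.name}')
oc -n openshift-machine-api scale machineset "$MS" --replicas="${1}"
oc -n openshift-machine-api get machines -w
EOF
chmod +x scale-workers.sh
```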
A: Okay, so this is a different OKD install, on a different OpenStack. Make sure we're all logged in.

A: A number of workers: in this case we have two workers and our three control plane nodes. You can see I built this previously; the actual control plane was built days ago, and then I've scaled a few times. And this is an IPI install.
A: You can easily see that, once we scaled out the machine set, OKD related that information back down to OpenStack, and we got a matching of naming, so you can really see how integrated the two platforms are. I know it's not surprising for those on AWS and such, where it's how it works every day, but in an OpenStack space it's really helpful that we can see the same type of integration that's happening on all the other platforms. So, simply done.

A: And what I'll show you is that this is what comes right out of the box. So literally, we now have the machines provisioning. There may not be nodes yet, because they're still building up; we can see that the new workers are being added to the topology, and just being automatically plumbed into the right networks. If we do this on the other one, we'll actually get the extra networks.
A: We can see that we have the ability to create the autoscaler, the custom stuff where, if we want to go ahead and hit it with load, it will automatically autoscale. And that constant communication, interaction, and integration between OKD and OpenStack means that the two pieces work almost seamlessly; you're able to just scale out without much effort.
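The autoscaler objects look roughly like this sketch: a ClusterAutoscaler plus a MachineAutoscaler targeting the worker MachineSet. The MachineSet name and replica bounds are placeholders you'd replace with your cluster's values.

```shell
# Sketch of the autoscaling objects: a cluster-wide ClusterAutoscaler
# and a MachineAutoscaler bound to the worker MachineSet. Look up the
# real MachineSet name with:
#   oc -n openshift-machine-api get machinesets
cat > autoscaler-snippet.yaml <<'EOF'
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec: {}
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-abc12-worker    # placeholder
EOF
```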
B: And it's running on premise, where, you know, that workload would normally be running; it's not in the cloud, depending on the type of workloads you're dealing with, and the security required, government regulations, and things like that.
A: So yeah, look at that! I know Diane loves this. So here we go: we've had a successful installation of the live demo on our cloud, so we're going to go and see if we really did.
A: Guys, love it. This is terrific. A couple of things I want to point out that have been done here: a storage class was created, and I haven't mentioned much about it, but the Cinder storage class is created by default by the IPI installer. Again, you can modify that, you can change that.

A: You can do what you want, but you get that one built in, and so you're getting that immediate integration with OpenStack, where you can use it for persistent volumes. And in the other pieces... let's see, what else did I go on about?
B: It was that, if your deployment did integrate with Manila, the Manila operator would automatically create an additional storage class for you.
A: Might as well scale this while we're here; that install definitely worked. Now we've got the autoscale ready. There we go, ready to go; you'll see it appear there shortly. And overall, I probably couldn't have asked for more with that installation and that timing, to actually have been able to go through all that. I don't know if I actually had any more slides. No, see, I prepared this slide; this is my just-in-case slide, "success", but it's a different one.

A: Obviously, so that was just to sort of end it off.
A: That was the just-in-case, to say, you know, look, I really can do this, it does actually work. But I don't need to do that, because it really does actually work. And working within Red Hat with a lot of the upstream guys and gals, to see how this comes together: it's just getting better and better with each new cut, and they add features to it like you wouldn't believe, but the integrations have become so clean.
A: I mean, literally even the UPI install, which used to really terrify me, is perfectly documented; there's a bunch of helper scripts to make it work. I know Nick loves to bash about on it and find all the various issues, but it works right out of the box. These two technologies just work so nicely together. So here we go: I've added those two workers. Remember we had an extra network? That's not been forgotten; they've been plumbed into that network.
A: I haven't had to do any kind of complicated setup on the hosts, and I'm not PXE booting; it's all being taken care of. It feels like I'm on a public cloud. The integration between the two is becoming so clean that I feel confident with my OpenStack cloud running OKD, OpenShift, whatever, because it's just built so nicely. I mean, I'm just clicking buttons here, and of course you can do all this with the CLI, but that wouldn't be as fun to watch.
A: Yeah, so what else can we add, Nick? The install is coming up now. This is just OKD; there's nothing to show off about how these nodes are added to the cluster. But, as you can see with the screen here, it's all managed through those OpenStack APIs, meaning our OpenStack admins are aware of what's happening.
A: They're able to control quotas and access to resources, ensure that tenants have what they need, and then the tenant running OKD is able to do what they need to do; they're able to scale.
B: Well, I can mention what I've been playing with, and that's, you know, if you had an OpenStack that had bare metal as a service as well: through the UPI, you can actually achieve a scenario where OpenStack deploys bare-metal workers for you as well, connected dynamically, straight into the OKD cluster. Just as we've scaled up now with our virtualized workers, you can have bare-metal machines deployed for you and attached directly into the OKD cluster.
C
I
love
how
you
say
that
the
private
cloud
is
the
coolest
out
there,
and
I'd
have
to
have
to
agree
for
for
lots
of
different
reasons,
but
I
think
it
is
one
of
the
one
of
the
neatest
things
to
see
us
to
be
able
to
do
a
full
stack
there
with
open
all
open
source
stuff.
It's
really
pretty
awesome,
so
I
I'm
really
grateful
you
guys
got
up
so
early,
so
you're,
the
only
ones
with
light
in
the
room.
C
So
far
today
we
had
people
up
at
midnight
in
the
eu
and
in
saudi
arabia.
So
I'm
really
very
grateful
to
be.
Have
you
guys
on
and
look
forward
to
collaborating
a
lot
more
with
the
openstack
community
and
and
bringing
this
bringing
this
to
the
forefront?
I
think
you
guys
there's
an
open,
infra
summit.
I
think
in
october
that
we're
going
to
try
and
do
an
openstack
commons
with
an
open,
an
open
shift
commons
with
an
openstack
theme.
C
It
was
the
event
that
was
supposed
to
be
in
berlin,
so
negotiating
you
know
how
to
co-locate
virtually
with
the
openstack
foundation,
but
hopefully
we'll
get
you
guys
back
on
stage.
Hopefully,
before
october,
but
to
continue
on
the
live
stream
having
a
lot
of
openstack
content,
because
I
think
there's
a
there's,
a
good
number
of
our
you
end.
C
Users
who
are
are
deploying
okd
openshift
and
on
openstack
these
days,
and
we
would
love
to
to
hear
from
you
all
as
much
as
possible,
so
kudos
to
you
guys
for
for
getting
up.
Is
there
any
last
words
or
a
slide
or
anything
that
you
wanted
to
end
on
just
in
case,
we
want
to
get
a
hold
of
you
and.
A: A fair point; I didn't actually prepare that, no. And the more we hear from people, the better. So I'm easy: I'm august at redhat.com, and I should share a Twitter handle too: I'm at silenthog on Twitter. But yeah, just reach out and ask questions.

A: There's a bunch of stuff that we've produced: Nick works on blogs, and I've got some blogs on OpenShift. We want to share the OpenShift-and-OKD-on-OpenStack experience, because it is growing, and, as you can see, in 36 minutes we've got a container-enabled cloud. I mean, that's pretty awesome. So, I guess, to end on: the more we talk the better, and I just can't wait to hear more. I'm so excited to have been given the opportunity to actually do this. I was terrified, but I was super excited.