From YouTube: OpenShift on OpenStack Reference Architecture Deep Dive, Jacob Liberman, Red Hat, OpenShift Commons
Description
OpenShift on OpenStack Reference Architecture Deep Dive
Speaker: Jacob Liberman (Red Hat)
August 29, 2019
OpenShift Commons Briefing
In this briefing, Jacob Liberman, author of the latest OpenShift on OpenStack Reference Architecture, walks through the reference architecture for running OpenShift Container Platform 3.11 on Red Hat OpenStack Platform 13 and shares important design considerations and key integrations between the products. The architecture is highly available and suitable for production.
B: Well, hello everyone, and welcome again to another OpenShift Commons briefing. Today we have Jacob Liberman, who is the author of the OpenShift on OpenStack reference architecture, and we've invited him here to tell us a little bit more about it and to walk us through it. I'm going to let Jacob introduce himself and take it away, because there's a lot of content here. We'll have live Q&A at the end, and we will probably be ending in about 40 minutes because Jacob has another commitment. So let's kick it off and get started. Okay.
C: Thank you, Dan. Hi everyone, my name is Jacob Liberman and I'm a field product manager at Red Hat. What that means is that I work with some of our strategic customers and the field, which is our sales and consulting group, to close the feedback loop with engineering and to make sure that we're building things that people want. Today I'm going to talk about a reference architecture that we've recently published, OpenShift 3.11 on OpenStack 13, or, if you want to adhere to Red Hat branding standards or Google search the title, it's actually "Deploying Red Hat OpenShift Container Platform 3.11 on Red Hat OpenStack Platform 13."

Also, just to let you know, these are a few of the programs that my team runs. I won't spend a lot of time on this slide, but basically we run a number of programs connecting the field and customers to engineering, both internal and external.
So, before I jump into the reference architecture (I have about 18 slides total and I'm going to try to move pretty quickly), I just want to briefly mention where this thing came from. This is OpenShift 3.11 on OpenStack 13; obviously OpenShift 4 has been released, but most of our customers are still on 3.11.
Last year we had a hackfest, which is kind of a Red Hat internal event, where we brought together engineers from both the OpenShift and OpenStack engineering teams, as well as field experts on OpenShift and OpenStack. We put them in a room for a week, and we whiteboarded, executed use cases, and talked about how customers are actually using these things together in the field, and out of that hackfest and those design sessions this reference architecture was created. All of the input that we gathered from the field and from the customers was distilled through this event to come up with best practices, identify gaps, and then produce this final document. I guess I am the author of the document, but really my job was to engage with, filter, and distill all of the learning from our field and customers.
One cloud is the undercloud, with the director node: that's a fully fledged OpenStack deployment on a single node that is used to configure and deploy the overcloud, which is the cloud where customers run their workloads. The overcloud is made up of controller nodes that run the API endpoints and services (they're somewhat analogous to OpenShift master nodes), compute nodes that are similar to worker nodes, and then storage nodes. In this reference architecture we actually use Ceph, which is our software-defined storage that's very tightly integrated with OpenStack, as the storage back end for OpenShift.
So this is just an overview of the architecture, and it shows the relationship between the two products and how they're layered together. What's important to understand is that the relationship between OpenStack and OpenShift is complementary; these are not alternative approaches to do the same thing. OpenShift consumes the resources that OpenStack produces, and you can see some of those things here. OpenStack can provide you with virtual machines or bare metal nodes.

It can manage bare metal nodes that you can actually run your containerized applications on. The OpenStack APIs for storage (Cinder for block storage, file storage, and Swift for object storage) are consumed by OpenShift as persistent volumes through a storage class or multiple storage classes, and we also use Swift, which is the object store, for OpenShift's internal registry. Then on the networking side, OpenStack has very robust support for self-service networking, and it supports many plugins from different vendors, such as Cisco.
The Neutron networking piece of OpenStack provides load balancing as a service through Octavia. This can be used for the OpenShift routers and for the load balancer created when you expose a service. There is also Kuryr, an integration that allows OpenShift, instead of using OVS multi-tenancy or network policy, to use OpenStack networking verbs directly, so that your OpenShift pods are directly connected to your OpenStack software-defined networks.
C
So
why
I
just
throw
this
slide
in
there
in
case
there
are
people
that
don't
understand
why
you
might
want
to
run
these
things
together.
So,
first
of
all,
you
would
run
an
open
shift
on
OpenStack
if
you
have
a
need
for
both
virtual
machines
and
containers
on
programmable
infrastructure.
So
if
you
just
need
containers-
and
you
don't
need
virtual
machines-
maybe
this
doesn't
make
sense
or
vice-versa,
but
if
you
need
both,
this
is
a
great
way
to
do
it.
Also
you
get
scale
and
automation
at
both
layers.
You also get a cloud-like experience: if you are accustomed to running OpenShift in, say, Amazon or some other public cloud, and then you want to bring some of that capability on prem, if you slap it on a virtualization platform you might not have all of the self-service capabilities that you're accustomed to having in the public cloud, like DNS, self-service networking, self-service storage, et cetera. And then the last reason is that OpenStack has native multi-tenancy: many organizations can share the same OpenStack deployment, which reduces the administration overhead, but their resources are sequestered from one another.
C
This
is
what
we're
seeing
more
and
more
from
customers
they
want.
Each
organization
wants
to
spin
up
their
own
kubernetes,
their
own
OpenShift
and
own
it
and
run
it.
They
don't
want
a
massive
kubernetes.
That's
shared
that
scales
up
horizontally,
though
OpenStack
is
a
great
match
for
that
deployment
model.
If
you
want
to
run
many
kubernetes
on
the
same
infrastructure,
so
now
a
bit
about
the
architecture,
so
in
I
mentioned
that
you
can
run
an
open
shift
on
bare
metal
through
OpenStack.
However, in this reference architecture we're running OpenShift in virtual machines on the OpenStack hypervisors, and you can see how that's laid out in this diagram. The reason that we chose virtual machines rather than bare metal nodes is that that is what most of our customers are doing, so that is the more common case and the one we thought we would describe. In this reference architecture, OpenShift is deployed in a single OpenStack tenant.
This reference architecture is targeted at existing OSP customers who want to add OpenShift capability to their deployment, and it's really best for customers that have about 70% or more virtual machines and just want to add some container capability to that. It's really targeted at a small to medium scale OpenStack deployment: if you have an OpenStack deployment of, say, 20 to 64 compute nodes, this would be appropriate for you. If you're going above that, then this architecture is not suitable;
there are other things we would have to do. Now, internally, Red Hat's whole CI/CD system runs on OpenShift on OpenStack, and we have hundreds of compute nodes, so we do have a lot of institutional expertise about deploying these things together at large scale. But because that's not the common case, we chose to focus on the common case, which is this small to medium, roughly 75% virtual machine workload case.
The last thing I'll mention, and I already said this, is that we rely on Ceph heavily in this reference architecture, so we expect you to have Ceph as the storage back end. Drilling down a bit further into the architecture, this diagram maps the OpenShift services to the OpenShift node roles and the networks that we deploy. We're going to start on the right with that light blue network that says public network FIP; FIP stands for floating IP.
The only endpoints that are publicly accessible in this reference architecture are the API load balancer, which load balances the OpenShift API endpoints across the master nodes; the router load balancer, which you can use to access the other services that need to be externally accessible, or exposed applications; and then the deployment host, which is at the bottom. We use a deployment host, also called a bastion host or a jump host; that node can be connected to publicly via a floating IP address provided by OpenStack.
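As a concrete illustration, here is a minimal sketch, using the openstacksdk cloud layer, of attaching a floating IP to the deployment host; the cloud entry, server name, and external network name are assumptions for illustration, not values prescribed by the reference architecture.

```python
import openstack

# Minimal sketch: attach a floating IP to the deployment (bastion) host only.
# "openstack", "deployment-host", and "public" are hypothetical names.
conn = openstack.connect(cloud="openstack")        # credentials from clouds.yaml

bastion = conn.compute.find_server("deployment-host")
fip = conn.create_floating_ip(network="public", server=bastion)
print(fip.floating_ip_address)                     # the address you would SSH to from outside
```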
We don't put floating IP addresses on the master, infrastructure, or application nodes. That can be done, but we don't recommend it, because it exposes your internal OpenShift platform to the outside world and to other tenants. With the deployment host, those things get a bit of separation and additional security. Moving left, we have a deployment network. This is kind of the service and management network: a software-defined network in OpenStack that is used to actually deploy the OpenShift virtual machines and to manage them. Moving further left, we have the service network.
This is very typical for OpenShift, in that the OpenShift services communicate with one another via this network. Then we have a pod network for inter-pod communication that spans the application nodes and infrastructure nodes. As for the services, we have etcd on the master nodes; etcd is the state database for OpenShift.
So really, that's it about the architecture itself; I've mentioned the key components. Now I'm going to drill down a little bit into the integrations. I understand I'm moving very quickly, but I want to spend a bit more time on the integrations, because that's where a lot of the engineering work was done.
Customers have been running OpenShift on OpenStack for a long time, but with 3.11 and 13 I would say this is the first time that we really sought to align and coordinate the schedules to make sure that the features were delivered together. The integration features themselves fall into three buckets: the first are related to instance deployment and scaling on OpenStack, the second are related to storage, and the third are related to networking.
We have an Ansible-based installer that I'll drill down into a little bit; you use the same installer to do both: configure the OpenStack tenant, setting up the networks, the volumes, and everything that OpenShift needs to run, and also deploy the OpenShift nodes and configure their integrations. That installer is really the first integration point. The second is storage; I mentioned that we rely on Ceph heavily.
We use Ceph RBD in this reference architecture, and Ceph RGW, to provide block and object storage for the persistent volume storage classes and the internal registry. We do recommend that you use object storage for the internal registry, even though it is possible to use a volume, because a volume represents a single point of failure and does not scale as well as using RGW with an object store.
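To make the block storage side a bit more concrete, here is a minimal sketch of defining a Cinder-backed (RWO) storage class through the Kubernetes Python client; the class name and Cinder volume type are assumptions for illustration, not values prescribed by the reference architecture.

```python
from kubernetes import client, config

# Minimal sketch: an RWO storage class served by the in-tree Cinder provisioner.
config.load_kube_config()
storage_api = client.StorageV1Api()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="standard"),    # hypothetical class name
    provisioner="kubernetes.io/cinder",                # in-tree Cinder provisioner
    parameters={"type": "tripleo"},                    # hypothetical Cinder volume type
)
storage_api.create_storage_class(body=sc)
```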
What's conspicuously absent here is Manila. Manila provides file as a service for OpenStack, which would give you your RWX storage class. At the time this was written there was no support for an RWX storage class delivered by Ceph, so it is not in this reference architecture; it will be coming in the OpenShift 4 reference architecture. I know people will ask about that, which is why I mention it now. Last is the networking.
What happens is that when you expose a route, a VNF (a virtual machine running HAProxy) is actually created that fronts the pods that are being load balanced across. So basically, for every service that you expose, you get another virtual machine that acts as a load balancer, which is one of the reasons that we recommend that roughly 70 percent virtual machine to pod ratio. If you're deploying many, many pods and exposing them,
that means you're also deploying many, many virtual machines to load balance them, and that consumes a lot of memory.
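For context, this is roughly what triggers that behavior from the OpenShift side: a minimal sketch, using the Kubernetes Python client, of exposing a service with type LoadBalancer, which on this architecture results in Octavia creating an amphora (HAProxy) VM. The service name, namespace, selector, and ports are hypothetical.

```python
from kubernetes import client, config

# Minimal sketch: each Service of type LoadBalancer maps to one Octavia
# load balancer VM on OpenStack. Names and ports below are hypothetical.
config.load_kube_config()
core = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "demo-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="demo", body=svc)
```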
And last we have Kuryr, which is the plugin for the OpenShift SDN that lets OpenShift consume Neutron networking directly. So, a quick time check... okay, good. So, drilling down a bit into openshift-ansible.
This is the deployment tooling. It's divided into two playbooks: there's a provision.yml playbook that is used to configure the OpenStack resources and that you run first, and it has a separate Ansible vars file where you specify how you want the OpenStack resources deployed. You follow that with an install.yml playbook, which also has a separate vars file, that one called OSEv3.yml, used to specify how you want the OpenShift cluster deployed. Now, the reference architecture prescribes the use of openshift-ansible.
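For readers who want to see the shape of that two-step flow, here is a minimal sketch that wraps the two playbook runs; the repository path, inventory location, and playbook paths are assumptions about a typical openshift-ansible checkout, so adjust them to your environment.

```python
import subprocess

# Minimal sketch of the two-step openshift-ansible flow: provision the
# OpenStack resources first, then install OpenShift on top of them.
REPO = "/home/cloud-user/openshift-ansible"    # hypothetical checkout path
INVENTORY = "inventory/"                       # holds the OpenStack vars and OSEv3.yml vars files

for playbook in ("playbooks/openstack/openshift-cluster/provision.yml",
                 "playbooks/openstack/openshift-cluster/install.yml"):
    subprocess.run(
        ["ansible-playbook", "-i", INVENTORY, f"{REPO}/{playbook}"],
        check=True,    # stop if provisioning fails before attempting the install
    )
```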
So, in order to follow this reference architecture, you need to deploy via openshift-ansible. It limits us somewhat, because openshift-ansible can't do all of the things that you can do when, you know, an expert hand-tools OpenShift on top of OpenStack, but to make it testable, repeatable, and ultimately more stable, we recommend the use of openshift-ansible. Now, a bit about the integration: the provision.yml playbook actually uses the Ansible cloud modules for OpenStack to make native calls to Heat.
Heat is OpenStack's templating service. It's very robust and extensible, and it's similar in some ways to a Kubernetes Deployment or an OpenShift DeploymentConfig, in that you specify the end state of the template that you want deployed and then it will go and enact it through an API. So the integration is that we're not, you know, using shell and command modules through Ansible to do these things; we're actually making native API calls to Heat. Once the resources are deployed on top of OpenStack, you can actually manage them with Heat the same way you would manage any other resources deployed by Heat. Heat will deploy the compute nodes, that is, your virtual machines for your master, infrastructure, and application nodes. And you could also use Heat to deploy to bare metal if you wanted to run your application nodes on bare metal: the OpenStack Ironic service can manage a pool of bare metal servers, and you can use Heat to deploy to those as well. That is not in the reference architecture; however, we do have customers doing it.

There is a use case for running your master nodes and infrastructure nodes on virtual machines so that they're more scalable, say if you wanted to add routers or something like that, and so you can protect them and back them up the way that you protect all of your other virtual machines, but then you want the bare metal performance of deploying your compute nodes directly onto physical servers. There is a way to do that, but again, that is not the common case, so it is not in this reference architecture.
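Coming back to the point about native Heat calls: as a rough illustration of what talking to Heat through its API (rather than shelling out to CLI commands) looks like, here is a minimal sketch using the openstacksdk cloud layer. The stack name, template file, and parameters are hypothetical, and the reference architecture itself drives Heat through the openshift-ansible playbooks rather than hand-written code like this.

```python
import openstack

# Minimal sketch: create a Heat stack through the native API instead of
# shelling out. All names and parameters below are hypothetical.
conn = openstack.connect(cloud="openstack")      # credentials from clouds.yaml

stack = conn.create_stack(
    "openshift-cluster",                         # hypothetical stack name
    template_file="openshift_cluster_hot.yaml",  # hypothetical Heat template
    wait=True,                                   # block until CREATE_COMPLETE
    rollback=True,
    node_count=3,                                # passed through as a stack parameter
)
print(stack.id)
```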
So, on the storage side, we have an OpenShift Cinder provisioner that basically watches the OpenShift API endpoint to intercept events related to creating, updating, or deleting a storage device. This slide depicts the workflow: a pod on an OpenShift node needs a persistent volume, and you make a persistent volume claim.
The Cinder provisioner will see that request, and it will make API calls directly to Cinder to create the virtual disk, the block device, and then it will make calls to Nova, which is the OpenStack API for managing the servers, and instruct Nova to attach the Cinder volume to the Nova compute node that has the pod on it. So it does all of that translation to fulfill the persistent volume claim, and then that Cinder volume appears on the host as just a regular SCSI block device.
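To make that watch-and-translate loop concrete, here is a minimal sketch of the same pattern using the Kubernetes and openstacksdk Python clients; it is not the actual Cinder provisioner, and the storage class name and the node-to-Nova-server mapping are assumptions for illustration.

```python
import openstack
from kubernetes import client, config, watch

# Minimal sketch of the provisioner pattern: watch PVCs, create a Cinder
# volume, and ask Nova to attach it. Not the real provisioner.
config.load_kube_config()
core = client.CoreV1Api()
conn = openstack.connect(cloud="openstack")

for event in watch.Watch().stream(core.list_persistent_volume_claim_for_all_namespaces):
    pvc = event["object"]
    if event["type"] != "ADDED" or pvc.spec.storage_class_name != "standard":
        continue                                  # only handle new claims for our class
    size_gb = int(pvc.spec.resources.requests["storage"].rstrip("Gi"))
    volume = conn.create_volume(size=size_gb, name=pvc.metadata.name)   # Cinder call
    server = conn.compute.find_server("app-node-0")                     # hypothetical node lookup
    conn.attach_volume(server, volume)            # Nova attaches the block device to the host
```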
We've had this feature for a while; I think we had it in 3.10 and earlier versions of OpenShift. But this model, of having a pod that listens to the API server, intercepts events, and then interacts directly with the OpenStack APIs, is the pattern that is repeated for most of the integrations. I'm only going to go into one more, but this is the pattern that we used to integrate.
So, for Kuryr SDN, quite a lot of work was done to allow OpenShift pods to access OpenStack Neutron networks directly. That is because both OpenStack and OpenShift use encapsulated networks in order to communicate. What would happen is that if, say, for example, your layer 2 communication is encapsulated at the OpenStack layer, and then you're using OpenShift multi-tenant SDN on top of that, the packets get encapsulated twice, which can introduce some performance overhead and some complexity.
The way that it works is that every OpenShift node runs a pod called the kuryr-cni pod, and the kuryr-cni pod is responsible for interacting with Neutron. The kuryr-cni pod will make the request for resources from the OpenShift API server, which is being watched by the kuryr-controller, and the kuryr-controller pod actually runs on the infrastructure node.
When the kuryr-controller pod sees the request for OpenStack network devices, it will first make a call to Neutron to create a virtual interface for the pod that's plugged into the software-defined Neutron network, and then it will also create a load balancer entry for that address and port on the pod, if it's exposed. The Neutron agent running on the OpenStack compute node receives the request to plug in the virtual interface, attaches it to an Open vSwitch integration bridge that's running on the application node, and then gives the interface directly to the pod.
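As a rough picture of the Neutron half of that exchange, here is a minimal sketch of creating a pod's virtual interface (a Neutron port) with the openstacksdk; the network and port names are hypothetical, and in the real architecture this work is done by the kuryr-controller, not hand-written code like this.

```python
import openstack

# Minimal sketch: what the kuryr-controller asks Neutron for on behalf of a pod.
# "pod-network" and the port name are hypothetical.
conn = openstack.connect(cloud="openstack")

pod_network = conn.network.find_network("pod-network")
port = conn.network.create_port(
    network_id=pod_network.id,
    name="pod-demo-app-1",                     # one Neutron port (VIF) per pod
)
print(port.fixed_ips[0]["ip_address"])         # the address the pod gets on the Neutron SDN
```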
So this is the same pattern that we've seen on the Cinder side, but repeated for Kuryr. Doing this another way would kind of be an anti-pattern: we have pods running in OpenShift that control the interactions between OpenShift and OpenStack. One thing I will point out about this is that, although there's a kuryr-cni pod running on every node, in this reference architecture, as is mentioned, there's only one kuryr-controller pod. So if anything happens to the infrastructure node that's running the kuryr-controller pod,
the existing networks should be fine, but new requests will be queued until the kuryr-controller pod has respawned on a different application node. We do have some of the network engineers on this call, I believe, so if I said anything wrong there, please correct me during the Q&A portion, but I believe all that to be correct. Recently Ramon Acedo, who's the product manager for OpenShift on OpenStack, published a blog post along with Sai Malleni (I'm not sure I'm saying his last name right) describing some work
they did to characterize the performance uplift that you get with Kuryr over using the double-encapsulated OVS ML2 with OpenShift SDN on top of it. On the left you can see stream, which is kind of a bandwidth test: at small packet sizes the performance between OpenShift SDN and Kuryr is very similar, but as the packet size grows, you see massive increases in performance.
So if you have something that is very bandwidth sensitive, an application that requires you to transfer a lot of data in large chunks, Kuryr can give you a tremendous performance uplift. All right, next we have more of a latency performance comparison. The metric for this is, I believe, transactions per second; I don't have that on the y-axis there, but higher is better, and lower latency means it takes less time to send the packet.
So the more that you can do in a second, the better, and you can see that Kuryr gives you a performance uplift across all packet sizes in terms of latency. Most OpenShift workloads that don't involve, say, for example, OCS are more latency sensitive than they are bandwidth sensitive, so across every packet size you should see a performance uplift when you're using Kuryr, and I recommend everyone go read that blog post.
So, just a little bit about future work now. With OpenShift 4 there are kind of two classes of installation. One is installer-provisioned infrastructure, which means that we don't really allow a lot of choices in how it's deployed: you upload an SSH key, you pick the provider that you want to deploy to, and basically we push out OpenShift in one way. Right now OpenShift 4.1 is the current release, but the IPI installer for OpenStack, on OpenStack 13, is coming in OpenShift 4.2.
We also have a deployment model called user-provisioned infrastructure, which is a lot more customizable: the customer brings their own infrastructure and then just deploys OpenShift on top of it, instead of having us make the decisions about how the infrastructure should be deployed. That is coming in OpenShift 4.3, which is, you know, coming later; I don't know if I'm allowed to give an exact date, so we'll just say that's coming later. But the main thing to know is that
it's been suggested to us in the past that we need to be more prescriptive with how we recommend deploying this stuff. Well, the OpenShift IPI installation is just that: it is full-stack automation that is very prescriptive and deploys OpenShift and configures OpenStack exactly how we recommend it. But we also want to preserve the ability to give customers choice; say they have a disconnected environment, or, I'm just making stuff up, a specific storage back end they have to use,
C
Then
we
still
offer
the
flexibility
with
UPI
to
kind
of
customize
the
deployment.
How
do
you
want
so
concluding?
This
is
a
bit
of
a
roadmap
on
what's
coming,
so
you
can
see
in
3.11
and
all
of
the
things
that
I
talked
about
our
features
that
we
delivered,
such
as
integrating
octavia
load,
balancing
open
ship,
danceable
deployment,
internal
registry
on
sapphire,
GW
and
namespace
isolation
support
courier,
an
open
shift
4.1.
C
D: We don't want to add anything more to that, but what we are adding here is... the key goal here is to simplify your deployment experience, so that you don't spend a lot of time setting up the solution and you can spend more time on day-2 operations, in terms of having developers and solutions architects start provisioning VMs and containers for the applications to use. So that's why the goal is to reduce the deployment effort by giving you prescriptive guidance.
B: If anyone else has any questions, now is the time. Otherwise we will make sure that everybody's contact information is alongside this. Here comes one; a PM is asking: you mentioned Manila earlier, and is anything that was missed coming back in the roadmap?
And for a refresher, what is Manila? I'm not a die-hard OpenStacker.
C: It's kind of similar to Gluster in some ways: it allows users to request shares, similar to NFS shares, that are mounted over the network. The Manila capability is something we've had in OpenStack with Ceph for a while; you can consume it either via the CephFS client or via regular NFS clients, and that feature is called Ganesha.
Manila is being implemented with Rook, Rook being the operator that can be used to provision storage in OpenShift, and there's a Ceph back-end plugin for Rook that can provide Manila shares and also block shares. The state of Manila right now is that there are kind of two pieces to it.
If you have Ceph deployed in OpenStack and you have Manila configured, work is being done to certify Manila and consume Manila shares from OpenShift natively. But there is also work being done on the Manila CSI provisioner to allow you to consume Manila shares that are deployed by Rook. I believe that the support for the CSI part will be added in OpenShift 4.3, which is when we're all supposed to have OCS, but I can't, you know, I'm not positive of that.
So don't hold me to that, but work is being done right now to test and validate Manila. The first case, which is native Manila delivered by Ceph, was targeted at OpenShift 4.2; however, I don't know if it will make it for the 4.2 release. That was a very long answer, but there are a lot of pieces involved in getting Manila supported.
So, again, Kuryr is somewhat agnostic to what is behind it. There are a lot of performance numbers that were gathered running Kuryr over OVN; OVN is another OpenStack networking encapsulation back-end plugin for Neutron. Right now we use OVS, but OVN is going to become the new default around OpenStack 15 or 16. It is supported in OpenStack 13, but it's not the default right now. Kuryr does work with OVN; we did not put it in the reference architecture because there is not a lot of mileage behind it.
C
There
are
not
a
lot
of
customers
yet
using
it
in
the
field.
There's
also
the
piece
with
Octavia,
which
also
needs
some
further
testing
and
examination.
So
the
ovn
with
courier
is
something
that
works,
but
it
is
not
something
that
we,
you
know
for
the
reference
architecture.
Again.
We
wanted
to
go
with
the
common
case,
though,
because
Obi
SML
2
is
the
common
case
for
most
of
our
deployments
that
that's
what
we're
using
with
courier.
B: All right, well, that looks like all of the time that we have today, and I think you've answered all the questions. I'm sure there'll be a few more. I will post this video up on our YouTube channel, and I will have Jacob back, and Abhinav as well, to talk more about the subject in the not too distant future. So thank you very much for your time today, Jacob, and thanks to everybody for attending.
Obviously it's a popular topic, so we're looking forward to doing more, and hopefully we'll be seeing a lot more OpenShift on OpenStack and OpenStack content in future OpenShift Commons gatherings as well. If you're coming to Red Hat Summit in 2020, we're planning on having a track on OpenStack at the Commons event, so make sure you watch for those announcements. And if you have a use case, or you're someone who's got an OpenStack deployment, we'd love to give you the podium, either at a briefing or a gathering.