From YouTube: OpenShift Commons Briefing #128 OpenShift Service Mesh on Multi-Cloud Environments - Paul Pindell (F5)
Description
OpenShift Service Mesh on Multi-Cloud Environments - Paul Pindell (F5 Networks) & Dave Cain (Red Hat)
A
Hello, everybody, and welcome to another OpenShift Commons briefing. This time we have a joint presentation on service mesh on multi-cloud environments. There's been a lot of work going on in the background between Red Hat and F5 to make this real, and I'm really pleased to have the folks who built it, Paul Pindell from F5 and Dave Cain from Red Hat, the key instigators for this work, here today to talk about it. So I'll let them introduce themselves.
C

So, I'm Paul Pindell. I'm a senior manager of architecture and engineering in our business development department at F5 Networks, and I manage a team of architects and engineers that work on various partner solutions. We develop solutions with our partners so we can both work together better and provide the solutions that our customers need.
B

I'm Dave Cain. I work in the partner organization at Red Hat, and I help build repeatable solutions with Red Hat's strategic partners, of which F5 is one. I've been working with Dylan for about a year, and with Paul a little bit longer than that. One of my favorite mantras is that talking is good, but doing is better, and we have some neat stuff to show you here through some demos and slides.
D
All right. So when we first set forth on this adventure back at KubeCon last year, last November, we sat down and had an idea: what should we be focused on? Obviously multi-cloud is at the center of all of this. We came up with a basic goal, or what seemed like a basic goal at the outset: deliver a multi-cloud web application architecture. We wanted to include the BIG-IP DNS as well as the BIG-IP LTM, in order to give us access into OpenShift.
D
We included the BIG-IP Container Connector controller, as well as our recently unveiled F5 Aspen Mesh, which is built on top of Istio. We wanted the entire thing to be fully automated and deployed into the OpenShift clusters using Ansible Tower, and we wanted to deploy into, in this case, two private clouds, one public cloud in Azure, and one in AWS. Moving forward we're going to be looking at including additional cloud providers; at the top of the list is Google Cloud.
C
So we knew it had to be multi-cloud; multi-cloud was a focus here. You've got to be buzzword-compliant in presentations at summits and things like this, so we knew it had to be multi-cloud, and we knew it was going to include BIG-IP.
C
We knew we were going to use OpenShift Container Platform, and we knew that it was going to use the BIG-IP Controller for OpenShift. A little about the BIG-IP Controller for OpenShift: it sits in the OpenShift container platform environment, listens for changes within a particular namespace or within a group of namespaces, and then effects configuration changes on a BIG-IP. We'll see some of that.
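For context, the controller picks those changes up from resources such as ConfigMaps in the namespaces it watches. A minimal sketch of the kind of virtual-server ConfigMap the BIG-IP Controller consumes might look like this; the partition, address, and service names are illustrative, and the schema version varies by controller release.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: coolstore-vs            # illustrative name
      namespace: coolstore          # a namespace the controller watches
      labels:
        f5type: virtual-server      # marks this ConfigMap for the controller
    data:
      schema: "f5schemadb://bigip-virtual-server_v0.1.7.json"
      data: |
        {
          "virtualServer": {
            "frontend": {
              "partition": "ocp",
              "virtualAddress": { "bindAddr": "192.0.2.10", "port": 80 },
              "balance": "round-robin"
            },
            "backend": {
              "serviceName": "coolstore",
              "servicePort": 8080
            }
          }
        }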
C
We knew that this was going to be on an on-premises private cloud, that we were going to use some public clouds, Microsoft Azure and AWS, and that we were going to use an enterprise e-commerce app. Our intent was to stretch this app across all of these clouds and use BIG-IP DNS as the method for providing the intelligent DNS entry points for all of these and to provide the failover.
C
We were going to use an F5 technology called iApps to help deploy this on the BIG-IPs themselves, and a tool called tmsh2iapp that actually helps us build those iApps, those declarative methods for deploying F5 config. And we were going to use iRules and iRules LX. iRules is a data plane language that we can use on a BIG-IP device to manipulate packets, to manipulate the data plane and what's happening there.
C
So if, for any reason, there is functionality that does not exist on a BIG-IP, you get the flexibility to write a Node.js package in iRules LX that can provide that functionality.
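As a rough illustration of that flexibility (this wasn't shown in the briefing), a minimal iRules LX extension built on the f5-nodejs module might look like the following sketch; the method name and the validation logic are purely illustrative.

    // index.js: a minimal iRules LX extension (sketch; names and logic are illustrative)
    var f5 = require('f5-nodejs');
    var ilx = new f5.ILXServer();

    // Expose a method that a TCL iRule can invoke with ILX::call.
    ilx.addMethod('check_token', function (req, res) {
        var token = req.params()[0];  // first argument passed from the iRule
        // Stand-in for logic the BIG-IP lacks natively, e.g. decoding a token.
        res.reply(token === 'expected' ? 'allow' : 'deny');
    });

    ilx.listen();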
So we wanted to use that. We deployed everything on-premises, and we used the Cockpit server and Ansible Tower to deploy all of that. And to top it all off, the service mesh that we're using is built on Istio: Aspen Mesh, which F5 has created.
C

So, to give us a head start, let me talk about how we built this. We used Ansible Tower to build this environment, and I know many of you may have had to do demos in the past and build them; we're going to be doing videos. This is how I describe what we're doing: many of you may have seen Julia Child. She prepares her turkey, her goose, she gets it all ready, and she puts it in the oven.
C
She turns around and pulls a fully cooked one out of the oven, so you don't have to wait while the goose is being cooked, and she shows you how it's carved. We're going to do the same thing. We're going to show you some videos that run through the Ansible Tower demo, so you don't have to wait; we'll speed up the part where it's actually running the roles and playbooks. And behind the scenes here there was a large team of folks; it wasn't just the three of us working on this.
C

The ones that we're interested in are the kube-system project and the Red Hat Coolstore; we're using Red Hat Coolstore, the e-commerce app, and we're going to show you that. In there, there are just the basic config maps that are being used. We also want to look at kube-system and notice that there's nothing happening in there. As with all demos, it's good to start by going and looking at these things, to make sure that we don't have anything hidden up our sleeve. Same here on the BIG-IP.
C
This is the BIG-IP UI, and we're going to show you the network map: there's nothing here. There are no partitions, there are no VLANs, no self IPs; none of the basic networking config has been created yet. We're showing you the UI just to show you that there's nothing hidden here. All of this is driven without touching the UI; we're using Ansible Tower to do that.
C
The only reason we're in the UI at all is that it shows better than looking through configs on a black-and-white screen. The OpenShift demo multi-app is the project that we're working with here in Ansible Tower. We're going to show you the different inventories that we have: four inventory groups, one for each of our data centers, Amazon, Azure, and two on-premises data centers.
C
For each of those we've got a cleanup and a setup set of task templates. We're going to run the setup tasks within MCD on ONP2. What this is going to do is everything from taking a bare-bones BIG-IP and putting the initial network configuration on it, to putting on the virtual servers that are needed, putting on the profiles and the policies, and putting on the pools and the pool members.
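To give a flavor of what those Ansible tasks look like, here is a minimal sketch using the upstream bigip_pool module; the connection variables and the pool name are illustrative, and parameter spellings vary a little between Ansible releases.

    - name: Create the Coolstore pool on the BIG-IP (sketch)
      bigip_pool:
        server: "{{ bigip_host }}"   # illustrative connection variables
        user: "{{ bigip_user }}"
        password: "{{ bigip_pass }}"
        validate_certs: no
        name: coolstore_pool
        partition: Common
        lb_method: round-robin
        monitors:
          - /Common/http
      delegate_to: localhost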
C
It will also deploy into OpenShift the Container Connector, that piece that talks to the BIG-IP and keeps it in sync. If you think of the OpenShift environment as being the source of truth, we read that source of truth and configure the BIG-IP to match it. So if there are 20 containers that we need to load balance to, we'll put those in a pool; if you change that to 40, we'll automatically change it here.
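In practice that means scaling the app is the only step anyone takes; a sketch, with an illustrative DeploymentConfig name:

    $ oc scale dc/coolstore --replicas=40
    # The container connector sees the new endpoints through the OpenShift API
    # and adds the matching pool members on the BIG-IP; nothing is done by hand.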
C

In the config you can see the extra config maps, one for Coolstore, one for the Coolstore gateway, and one for the Coolstore DC, and in kube-system you'll see that we've got a controller, a BIG-IP controller, for each of the three applications we deployed. We're only showing you one of the three, but there are three applications that we ended up deploying in this environment.
D

Yeah. Right now, basically, what we're doing is going through and showing that the controller has deployed all the virtuals as well as the pool members. We now have a self IP that connects us into the OpenShift environment, and of course that's the end of the demo there. Yep. Diane, is the video still running on your end? Can you still see it? Okay.
D
All right, so let's talk a little bit about the architecture behind all this and what it's comprised of. When we set this up we had a few clouds in mind. The Azure cloud was the first one that we brought in. We also have two on-premise deployments, one called ONP1 and one called ONP2, so on-premise 1 and on-premise 2. We also have a test/dev environment that runs Jenkins and all of the CI/CD stuff that we use to work on the apps, and then of course we added AWS as well.
D
Each of the environments contains a BIG-IP that runs our Local Traffic Manager. That's what's communicating with the BIG-IP controller, getting data out from the control plane inside OpenShift and configuring the BIG-IPs accordingly, as the app needs.
D
We also have an on-premises BIG-IP DNS, which is using our global server load balancing technology to be able to move traffic back and forth between the various data centers. Now, in this particular demo environment we're using a single BIG-IP DNS; in a more production-ready environment we would put one of those BIG-IP DNS boxes in each of the clouds, or at least one more in one of the other clouds, to give us HA capabilities for our DNS.
D
In addition to that, inside OpenShift we have three OpenShift clusters, or four if you count the dev environment. Inside those we have the Container Connector deployed, and we have three web applications: the e-commerce applications and Istio's Bookinfo demo app. And then of course the CI/CD solution sits inside our on-premises data center, and orchestrating all of this from a deployment standpoint is Ansible. So from an ingress standpoint, let's take a look at how the traffic flows.
D
If you have an HTTP request that comes in, it first hits the BIG-IP DNS, and the BIG-IP DNS makes a traffic decision depending on what data centers it currently has available. Right now that's set up in a round-robin configuration, so which data center you land on depends on where it is in the round-robin sequence.
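Seen from a client, successive lookups against the wide IP rotate across the data center VIPs, roughly like this (the domain and the addresses are illustrative):

    $ dig +short coolstore.example.com
    203.0.113.10      # resolves to one data center's VIP
    $ dig +short coolstore.example.com
    198.51.100.24     # the next lookup moves on in the round robin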
D

Now, one important point of note is that the BIG-IP controller is essentially a monitor, and its job is to configure the BIG-IPs. Once the configurations are set up, we can spin pods up and spin pods down, and you'll see that we add new pool members automatically as a result. When a communication comes in, it goes directly from the BIG-IP to the actual pods inside the OpenShift cluster. That way there's no intermediary sitting in the path.
D
There's no bottleneck; it's just direct from the BIG-IP to the various services that we're serving out.
D
So what I'm going to show, as far as the demo goes, is a failover demo, which uses our BIG-IP global server load balancing and the BIG-IP ingress technology. Let's go ahead and get that fired up here; all right, let me back it up and pause. Okay, so the first thing I'm going to show is that we have wide IPs that are set up per service, inside each of the data centers, for each of the applications.
D
We'll go ahead and take a look at the various OpenShift clusters and show that we essentially have Red Hat Coolstore, the primary demo app that we use, set up, and then I'm going to get prepared to emulate a service failover. Each of these that we're showing, in other words, that's ONP2, our on-premises 2. Now we're taking a look at our EC2 cluster, which is out on AWS, and next up we take a look at Azure, also running the same applications.
D
Then we take a look at the BIG-IP. As you can see, inside we're serving out two or three services, the Coolstore web UI as well as the gateway, the services that are needed to serve content. So the first thing we're going to do is open up a browser and see if we can get to the pages, so we'll hit the Red Hat Coolstore main link.
D
What we do here is take the service down, and if we take a look at the network map you can see that's immediately reflected in the VIP, so the service is now down. Then we go back to the website, do a quick refresh, and it fails over to the AWS cloud, which is next in line in the round robin. We'll go through and iterate through a few more of the clouds.
D

There we go. So now we're going to go back through, turn on all the services, and then we'll iterate through each of the cloud environments just to see that the services have returned. There's no user intervention in any of this: if a service outage happened naturally, obviously without it being done manually, this is something that would happen automatically, and the failover would be consistent throughout the clouds.
D
An interesting story: when I was working on revving this particular environment from 3.5 to 3.7, I went through and rolled each individual OpenShift cluster to the next revision and never had to touch anything; it basically kept the website available the whole time. So we'll go ahead and visit each of the data centers to make sure that we now have traffic coming through them again, and as we can see, because we hit each of those data centers, they're now all available. So we go back and check.
B

Thanks, Dylan, great demo. So, a little bit about Istio. As we said, we presented this at Red Hat Summit, and we wanted to make sure folks had a bit of an understanding of what a service mesh is and what Istio really provides in this architecture. Istio is a multi-vendor initiative to establish a uniform way to connect, manage, monitor, and secure your microservices. It was announced to the community primarily by IBM, Google, and Lyft, and Red Hat is a big contributor there as well.
B
Istio means "sail" in Greek; these are nautical terms, like Kubernetes, which means helmsman or ship's pilot. Like I mentioned, Istio provides traffic control, service discovery, load balancing, resilience, observability, and security. Really, the basic premise is this: instead of individually coding boilerplate into the applications themselves, dealing with retries, circuit breakers, TLS and encryption, and so on, why not offload that and put it into an actual service mesh, keeping that logic out of the application?
B
Have your developers focus primarily on writing business logic in their applications, and let a service mesh like Istio handle the rest: like I mentioned, retries, circuit breakers, TLS. It's divided into two distinct sections, a control plane and a data plane. Just to run through some of the components here: Pilot, as the name implies, is responsible for the overall management of the fleet of all of your microservices across the OpenShift and Kubernetes cluster.
B
It does the discovery of the Envoy sidecars; more about those in a minute, I'm just focusing on the control plane aspects here. Pilot also does traffic management and A/B tests. For the networking folks in the session today, I like to give an example: this is like a software-defined networking paradigm, where you have a control plane, such as an SDN controller, and then you have individual virtual switches on each one of the hosts that integrate and provide a fabric.
B
What we're really talking about with Istio is a fabric at the application and microservice level. Mixer, also part of the control plane, collects and delivers telemetry data from the individual Istio proxies, also denoted as Envoy, in the data plane; you can create ACLs, apply rate-limiting rules, custom metrics, and so on. And finally Auth, now affectionately called Istio CA: this is what I mentioned, secure communication between the microservices and strong identity via certificate issuance, signing, revocation, and so on. And on the data plane, what we're doing here is, for every pod:
B
We are injecting a sidecar container. So, right beside our business logic, injected either automatically or manually, we have Envoy, also known as istio-proxy, in the pod itself; it intercepts all of the ingress and egress traffic and reports back to Mixer. So that's a little bit about Istio. Two slides from now, Dylan is going to talk a little bit more about the service mesh and how it interfaces from the application developer's standpoint.
D
All right. So one of the things that F5 has recently done, and Paul talked about this a little bit at the beginning of the presentation: F5 has created an incubator, which is essentially a startup corporation funded by F5 but functioning autonomously. They've developed a technology that sits on top of Istio called Aspen Mesh, and I'm going to talk about that here for a few minutes. So: service mesh, as Dave has portrayed it.
D
If you look on the left-hand side there, this is the world that people are used to living in. We've got our various technologies that would be necessary, so on a per-service basis we would have to build in all of the components needed to get visibility, traceability, and actionable intelligence out of the service mesh. With the advent of Istio, that got simplified. If we look to the right, you can see that it's now relegated to three distinct areas.
D
We still have the primary services that we're using, I mean the languages and services, but the service mesh provides all of the rest of that in a compartmentalized fashion that's deployed in a relatively easy manner. What Aspen Mesh is, essentially, is a technology built on top of Istio that gives you a hosted portal.
D

It then pipes that information out to a managed service portal, so that you can look at that telemetry data through applications like Grafana, among other items as well. One of the things to point out here is the icon down at the bottom: this is fully supported, and it gives you the capability to take advantage of the intelligence that Istio offers with a minimal amount of configuration to get it up and running.
D

It also offers customizable alerts, so you can get visibility into the health of your service mesh, as well as piping all of that information through a securely authenticated portal; and again, fully supported. That's it, so we're going to go ahead and do a quick demo of one of the capabilities that Istio and Aspen Mesh offer. Go ahead and get that rolling; roll that beautiful bean footage. All right, go back. Awesome. Okay, so right now this is live: you can go to aspenmesh.io.
D
You can send us a message; it's currently in early access, so you can go ahead and click that link and sign up. Getting up and running is relatively simple. You pull down the bits from our Aspen Mesh release site, which gives you Istio as well as the Aspen Mesh pieces. Put that into whatever cluster you want to: unpack it, add it to your path, and then run a couple of command lines to deploy the YAMLs.
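That flow would look roughly like the following. This is a sketch only; the archive and manifest names are illustrative, not the actual Aspen Mesh release paths.

    $ tar -xzf aspen-mesh-<version>.tar.gz
    $ export PATH="$PWD/aspen-mesh-<version>/bin:$PATH"
    $ oc apply -f install/kubernetes/istio.yaml        # Istio CA, ingress, Mixer, Pilot
    $ oc apply -f install/kubernetes/aspen-mesh.yaml   # the Aspen Mesh agents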
D
It adds the Istio CA, the ingress, the Mixer, as well as the Pilot, and of course, if you want to leverage this, you'll have to add in the initializer. Now, this has changed: this was originally built on 3.5 and brought up through 3.7 and 3.9, and they've since changed that. They're now using mutating webhooks as part of Istio to do the same thing the initializer used to do.
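For reference, where neither the initializer nor the webhook is in play, the manual form of sidecar injection from that era looks like this (the manifest name is illustrative):

    $ istioctl kube-inject -f bookinfo.yaml | oc apply -f -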
D
So these are the agents that we deploy. When you want to set this up, all you have to do is go to your Aspen Mesh portal, go into your user info, grab that token ID, and then add that into the command line to deploy. Once it's deployed, it'll give you visibility through this web portal into the health of your mesh, what services are available, and any kind of warnings that you might have that you need to take a look at.
D
It gives you this level of visibility into the connectivity, into seeing what services are talking to what services inside your service mesh, as well as an at-a-glance health status of your services, which is built off of a specific set of parameters. In my case, that's requests taking longer than 331 milliseconds; go back a little bit here, right there, and stop. So I'm looking at requests that are taking over 331 milliseconds, and anything returning a 4xx or 5xx error.
D
So when any of that shows up, obviously I wouldn't be getting 0%, but in this case my mesh isn't in perfect health, because it's a demo environment. So let's go ahead and take a look at what we can do. We have a hosted Grafana as well as a hosted Jaeger portal, and we're going to go ahead and take a look at what we can do with Jaeger. Those who have done web development understand what you get when you're going through the debugging process.
D
This is what we've always had visibility into: you can see the actual GET request, and you can see it's taking two hundred and sixty milliseconds, but you really can't see what's going on inside it. What Istio does is give us the ability to look inside that GET request from a service mesh standpoint. So in this next bit I'm going to run a Jaeger trace, just to show you what that looks like.
D
At the initial outset we set up all the parameters; right now we're just going to set it to a max value and then limit our results to twenty. Do a quick trace and, as you can see, it basically gives us visibility into what's actually going on inside the service mesh, so we can see all of the items that are happening inside.
D
Inside the actual request, that is: where it's coming from and where it's headed, what services it's taking advantage of, and how long those are taking. So real quick I'm going to generate some traffic. I'll pull open a tab; we're using the Istio Bookinfo application to generate that traffic. Real quick, I'm just going to use some movie magic to fast-forward through this, and we'll do that.
D
We've generated a bit of traffic. We leave our tracing set up with the same settings, then do a find traces, and we can see that everything here is well within parameters: 126 milliseconds, which is under my 331-millisecond threshold. Then we'll go back and take a look. Yep, that looks good. So, to emulate a problem, I'm going to implement a route rule.
D

The route rule here basically adds a delay to the request process, roughly about 400 milliseconds of latency into that service request. Real quick, I pull that up and apply the quick YAML file that's set in place, and then we'll go back through and generate some more traffic, so that we can see the effect of the latency that's been introduced.
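The YAML itself isn't shown on screen; in the pre-1.0 v1alpha2 RouteRule format that this environment was built on, a fixed 400 ms delay against Bookinfo's reviews service would look roughly like this:

    apiVersion: config.istio.io/v1alpha2
    kind: RouteRule
    metadata:
      name: reviews-delay
    spec:
      destination:
        name: reviews
      httpFault:
        delay:
          percent: 100        # inject the delay on every request
          fixedDelay: 400ms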
D
So, a couple of quick refreshes, and then we'll go back and run another Jaeger trace with the same parameters. A quick trace and, as you can see now, those are 500-plus milliseconds across the board. We can go take a look and see that it's happening in the reviews service, which is where I actually put the route rule injection. We'll go ahead and take that out to make it go away.
D

And we can see that our milliseconds are back within thresholds, down to 111 milliseconds. So that's part of the functionality that you get with Istio; just a real quick demo of some of the fun things you can do with it as far as getting visibility into your service mesh. All right, go ahead and flip back over to the slide presentation, and I'm going to turn it back over to Paul and Dave to talk a little bit about our partnership.
C

Thank you. Go on to the next slide there for me. So, Red Hat and F5: we've been working together for several years on various different projects, and you might ask why these two companies would work together. There's very little overlap in the technologies that we provide to the market, but the technologies we provide are very complementary. Red Hat, as you know, is the leader in open source software.
C
F5 is the leader in application delivery control: we're the leader in managing traffic through a proxy into various different environments, we're far and away the leader there, and we happen to work on any private or cloud platform. So we've shown that we can work on premises and we've shown that we can work on various cloud platforms. Next slide.
C
So we've got several different solutions. The first is OpenStack: we've got certified solutions for the Red Hat OpenStack Platform, and we've certified our LBaaS v2 plugins with the long-term releases of Red Hat OpenStack Platform. And we have OpenShift integrations: you can find the Container Connector that we've been talking about today in the Red Hat OpenShift portal and download it from there, and if you want to get it from Docker Hub you can, or you can build it from our pages. And today we've got sixty-two upstreamed Ansible modules in Ansible core, and we're trending toward having ninety to a hundred ready by the next release of Ansible. So we're building new modules to expose additional F5 functionality to be automated and orchestrated via Ansible and Ansible Tower. Next slide.
C
So, as we wrap up: one of the things that we found is that automation is table stakes. If you keep your automation in lockstep with your architecture and design, writing your automation as you are designing things and as you're building out your applications, you can avoid problems. Let me tell you a story from building this demo.
C

We had to demonstrate this on a Tuesday at noon. Friday night, Dylan and Dave and I sat down and got the environment set up exactly the way we wanted it. We were ready to record the demos, because we knew we had to use recorded demos, and we scheduled time early Monday morning to get up and do that work.
C
Early Monday morning we went to log in, and no one could get into the lab anymore. We couldn't get to the Ansible Tower server, we couldn't get to the on-premises OpenShift environments, we couldn't get to our DNS server, and our Active Directory was down. We had a storage controller failure and lost half of our storage, and it was a very long day, that Monday.
C
It took us less than an hour to run through all of the environments, bring them back up, and get them to a known good state, so that in the last three hours of that day (it was a long 18-hour day) we could record the demos, edit them, and be ready for Tuesday morning. So automation and Ansible Tower are table stakes for us. It is something you've got to do; it saved our bacon. Make sure that you make good use of variables when you're doing that. Multi-cloud is here, and it isn't just a buzzword.
C
It may be to provide failover and resilience for your services, like we showed here today, and it may be that you want to run your services where they're best suited. You may want to run your databases and those PCI-compliant applications, pieces, and parts on premises, where you can secure them with your standard tools, and you may want to run your web front ends in public clouds. So multi-cloud is here, your apps must be multi-cloud capable, and we showed you Aspen Mesh, which is an Istio-based solution.
B

Sure, Paul. Before we get into Q&A, we just wanted to add a couple of links for your perusal, since a lot of the technology elements are here: not only Aspen Mesh and the BIG-IP controller that both Dylan and Paul mentioned and demoed for you today, showcasing how we're able to leverage and orchestrate multi-cloud deployments with our three applications across those various environments, but also a landing page down at the bottom.
D

Just to tack onto that, Dave: all of the items that we used to set this up, everything from the Node.js scripting to the Ansible playbooks to the JavaScript we use within the playbooks, all of that will be made available. We're working on getting that up on GitHub and available for the community to take advantage of.
D

In addition to that, we're always open: if people are working on a project or just want to have a conversation, you can contact your local Red Hat or F5 sales team, they'll get in touch with us, and we can come help you out with your deployments. I also just want to add that this is essentially the initial outset for this architecture; there are multiple planned upcoming phases.
A

That will actually make a great follow-on briefing, so once we have that content ready and available we'll bring you back in, have you do it again, and show us. There are a lot of questions in the chat, and some of them you've answered, but can you just touch on one again: someone's asking whether you would recommend Istio for use in production already. I think we can just reiterate that the latest version is 0.8, and we talked a little bit about that.
C
Sure. Dave and I have been answering questions there in the chat, and yes, Istio is at version 0.8; it's not a 1.0 product yet. As Dave mentioned, in the upcoming OCP 3.10 Istio will be a dev preview, so it won't be listed as production at that point. Depending on feedback, how it works in that environment, and the bugs that get fixed, that will determine the release in which Istio goes GA.
D
On the idea of deploying it in production: I would probably wait until it hits GA, just because most IT environments aren't comfortable running pre-GA code in production. That being said, for test/dev, or for an early look at it, deploying it into your test/dev environment is absolutely fine.
D

That's what we have been using it for inside F5, so I guess that would be my take: it's never too early to start to get a look at it and get comfortable with it. Whether it's ready for production or not is going to be a community decision, I think, for the most part, but when it is, we'll be right there with them.
A
Well, as I mentioned, I think my mind has been blown a little bit today. I've been following the Istio stuff really pretty closely, and I think this has been one of the better demonstrations of the power of it and the promise of Istio, so I'm really looking forward to that production quality. And for folks who are listening in: if you end up POCing Istio, we'd love to hear from you, get your feedback on it, and incorporate it into, you know, this.
A

That just shows you how good a briefing you guys have done; you've blown everybody's minds, truly. This really is cloud and multi-cloud personified. So thank you very much for taking the time today to share this with us. I know a lot of work went into this presentation, and I really do look forward to doing this again for the next revision.