Description
Multi-cluster management is hard. Technology, teams and culture clash in a race to deliver clusters and applications in a secure and compliant way. Red Hat Advanced Cluster Management for Kubernetes (RHACM) provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
A
Good morning, good afternoon, good evening, hello everyone, and welcome to a brand new show here on OpenShift.tv. I'm Chris Short, technical marketing manager and host/producer of the channel, OpenShift.tv, that you're watching right now. I'm joined by some very special friends — they've made quite the journey to the channel when you think about it, in the grand scheme. Scott, you want to introduce? You know, this is the Red Hat Advanced Cluster Management Presents show, and you're kind of the brainchild behind the show.
B
The warm regards — we appreciate that, on behalf of our distinguished guests. I will take a minute to introduce our vice president, Dave Lindquist, IBM Fellow. And — I don't know which, if it's left or right — on my left is Josh Packer. He's a senior lead in the application lifecycle space, who spends a lot of time architecting all the ins and outs between cluster and app and everything else in the product. Our product management director, to my right (I don't know if that's your left), is Jeff Brent, who said that any minute he might get struck by lightning — or maybe it was the house, I'm not sure which — down in sunny South Florida, where apparently it's thunderstorms and rain all day; he represents the product management side. So we really try to bring together, Chris, the best and the worst of it, with engineering and product management and VP strategy.
B
It's awesome. So yeah — ACM, Advanced Cluster Management. We got to know you, and you quickly made an imprint on us, and I think it was a mutually beneficial experience: for you to learn the product, and for us to kind of explain what we're doing. You started that inroad with us, needling me on the side like, hey, when are we going to get you on the show?
B
Rtm
jimmy
did
a
great
job
and
I
think
it's
time
that
we're
approaching
the
2-1
ga,
which
is
about
two
three
weeks
away.
It's
time
for
us
to
come
back
to
the
helm
and
really
give
you
a
series
of
topics
that
really
explains
to
the
world,
what
the
heck
we're
doing
and
how
we
got
here,
and
so
I
think
that's
what
we're
trying
to
kick
off
today.
We
appreciate
you
putting
us
on
the
show
and
giving
us
a
platform.
A
If it were freezing cold, I would have my Red Hat long-sleeve shirt on — my Advanced Cluster Management one — but it's not freezing cold here today, thankfully, so I have that going for me.
B
Well,
we
would,
we
would
be
remiss
to
not
mention
the
fire
behind
you.
The
fireside
moon
is
quite
symbolic.
We
all
speaking
of
fire,
we're
on
fire
with
ansible
fest
this
week.
B
That's one of the reasons we wanted to jump on and take advantage of that momentum, which is one of the things we can talk about today. I mean, there's anything under the sun we can talk through, but the ability to bring together your existing investment in Ansible and infrastructure bits and pieces, combining that with the Kubernetes desired-state methodology that we bring with RHACM, right? It's definitely one of those sweet-spot areas.
A
Advanced Cluster Management is such an all-encompassing product, I feel like, right? It does a lot, and I feel like the purpose of this show might be to take those different things that it does and highlight each one. Like what you just mentioned: bringing your existing Ansible content and porting it into your Kubernetes environment is something people are going to hear and go, oh okay, I can do that. So I hope you're ready to show that off at some point.
C
And we refer to those major areas of the product as our pillars — and we have, prominently, four of them from a product perspective. There's the application; there's the cluster lifecycle, which is really about: how do we create clusters? How do we create them at scale, encourage good cultural practices within an organization like infrastructure as code, and upgrade and delete them?
C
So
we're
very,
very
focused
on
the
the
life
cycle
of
the
cluster
itself
in
in
that
phase,
and
then
the
next
phase
really
is
kind
of
day
two
configuration
management
and
operation.
So
once
a
cluster
is
stood
up,
we
have
a,
I
think,
a
very
unique
capability
in
the
product
that
the
cluster
is
going
to
have
some
user
defined
labels
on
them
and
then,
once
the
cluster
is
stood
up
and
made
available,
then
policy
and
policy
and
governance
takes
over.
C
So
you
typically
have
a
way
that
you
will
have
policies
be
applied
to
a
fleet
of
clusters,
and
that
could
be
any
of
the
configuration.
That's
that's
that
your
is
your
represents
your
desired
state.
So,
do
I
want
this
oauth
provider
configured?
What
are
my
roles,
role,
bindings
name
spaces
quota
limits,
anything
that
really
has
a
kubernetes
api
or
could
be
expressed
as
a
kubernetes
api
acm
is
able
to
create
a
policy
and
then
enforce
that
policy
on
the
fleet
when
we'll
dig
into
that
a
little
bit
more.
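As a rough illustration of that pillar, an ACM Policy wraps a ConfigurationPolicy whose object templates are ordinary Kubernetes objects. This is a minimal sketch, not from the episode — the policy name, namespace, and target namespace are illustrative:

```yaml
# Minimal ACM Policy sketch: enforce that a "prod" namespace exists on
# every cluster the policy is placed on. All names are illustrative.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace-prod
  namespace: rhacm-policies
spec:
  remediationAction: enforce   # or "inform" to report drift only
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: prod-namespace-must-exist
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: prod
```

The same shape covers the OAuth provider, roles, role bindings, and quota examples above: anything expressible as a Kubernetes object can sit inside `object-templates`.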
C
The third pillar is application lifecycle. If you're looking at it from an edge perspective, and from a lot of other large-scale deployment perspectives, you have: I want to create the clusters in very much the same way.
A
Yeah, those pillars — it is so remarkable to me that they can all exist in one thing, first of all. Normally these are all independent tools. The ability to take that multi-cluster lifecycle management and then say: nope, I want this group of clusters to be NIST compliant, this one to be FISMA compliant, this one to be, you know, the Australian one — Essential Eight, E8, they call it down there.
A
I've got a cluster here in the US. They all need to be managed differently, right, when it comes to compliance, but I do need a consistent management plane and a way to manage all those things underneath it, and ACM really drives that home, I feel, in a great way — just having gone through the hackathon.
C
We recognized that need very early on, and there are multiple layers to that desired-state model. I may have corporate policies; maybe some of those policies are related to configuration or to compliance standards that I'm pursuing; and then above that you might have more regional, more autonomous business units with their own set of compliance requirements. We want to be able to provide a self-service capability, and that's where we leverage GitOps even for policy management, because policies are delivered by the hub.
C
They
should
go
through
governance
and
pull
requests
to
have
policy
changes
applied
to
the
fleet.
So
this
is
something
that
we
can
get
into
and
demonstrate
a
little
bit
earlier,
a
little
bit
later
on,
but
yeah,
although
this
point
might
be
good
to
take
a
step
back
and
and
talk
about
how
we
pulled
all
these
things
together
and
what
was
yeah
behind
that.
So.
D
Yeah, that's a good idea, Jeff. At some point we certainly want to climb in and illustrate and show off some of the product and some of these types of capabilities. And Chris, I think it's a very good observation of how we've really pulled together a number of domains, from cluster lifecycle to compliance and security, to application management, as well as observability.
D
If we go back, like Jeff was talking about: the broader team that builds and develops ACM — RHACM — has an extensive background in management and operations systems and in developing and hosting DevOps CI/CD systems.
D
We started running into a situation where we had upwards of 30 to 50 clusters running to support our own development and some of the things our customers were using — wow — and, of course, we had a number of customers talking about this issue. So that, with the skills and the background we had, led us to: why don't we start looking at cluster management?
D
Why
don't
we
look
at
what
does
it
really
mean
for
an
enterprise
to
be
able
to
manage
clusters
across
hybrid
environments
within
their
data
center
in
public
clouds,
private
deployments,
the
collection
of
them?
What
does
it
mean
to
manage
the
life
cycle
of
those
clusters?
How
do
you
set
those
clusters
in
the
various
configurations,
compliance
that
need
to
be
done
and
then
getting
into
some
of
the
complex
things
at
least
felt
complex
to
us
at
the
time
is
when
applications
are
deployed,
they're
often
distributed
many
components,
they're
deployed
into
potentially
many
clusters.
D
How
do
you
begin
to
manage
the
applications
with
deployment
policies
with
with
this
with
this
compliance,
so
that
led
us
to
various
design
choices
and
design
points
around
leveraging
the
kubernetes
architecture
around
get
ups
around
policy?
One
of
the
keys
to
that
was.
We
have
josh
here,
he's
deep
into
the
get
up
space
and
into
the
policy
paid
space
in
in
some
of
her
design
points
so
I'll.
Let
josh
elaborate
on
some
of
that.
F
All the time. But, you know, one of the key tenets is being able to use Git as a system of truth for the environment, both across cluster deployments and the policy-driven side of Advanced Cluster Management, and this is what's sort of driven a bunch of our open sourcing. So we have policy frameworks and collections that are open sourced and available.
F
We
have
integrations
with
the
community
opa
we
open
source
our
entire
subscription
model
that
we
use
under
the
covers
both
to
in
our
eating
our
own
cake
in
deploying
acm
as
well
as
it
can
be
you
it's
used
for
application
lifecycle.
You
can
drive
actually
each
of
these
pillars
using
the
same
type
of
git
ops
flows,
be
it
provisioning
clusters,
being
it
applying
policies
or
be
it
application
management,
and
so
it's
it's
sort
of
a
driving
tenant
under
under
acm,
as
well
as
as
we
move
forward.
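The subscription model described here pairs a Channel (the source of truth, e.g. a Git repo) with a Subscription that delivers its manifests to clusters. A hedged sketch — the names, namespace, and repo URL are illustrative:

```yaml
# Sketch of the open-sourced subscription model: a Git-type Channel plus
# a Subscription that delivers its contents to placed clusters.
# All names and the repo URL are illustrative.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: demo-repo
  namespace: demo-app
spec:
  type: Git
  pathname: https://github.com/example/demo-app.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: demo-sub
  namespace: demo-app
spec:
  channel: demo-app/demo-repo     # namespace/name of the Channel above
  placement:
    placementRef:
      kind: PlacementRule
      name: demo-placement        # label-based cluster selection
```

The same two-resource shape drives each pillar's GitOps flow; only what the channel points at changes.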
C
We are open in the sense that we take the best of those into the operational platform. You heard Josh just mention the OPA integration and bringing that into the product, and we look at other things that are available in the open source community. I saw that you posted, from the YouTube channel, the announcement that we made at KubeCon, from a Red Hat perspective.
C
With Argo — and Argo has a great following in the open source community around GitOps, right? Evolving that story for ACM and Argo will essentially mean embracing the open source community: working on some of the things where we have an advantage — capabilities that we have in ACM that don't exist in Argo — and working with those communities to contribute over the course of time.
C
You
know
obviously
a
seamless
transition
between,
if
you're
an
rgb
user
today-
and
you
want
to
use
those
advanced
case
capabilities
in
an
acm
tomorrow,
then
you
know
you
could
do
that
very
very
rapidly,
but
over
the
course
of
time,
we'll
also
see
the
code
base
for
acm
itself.
Take
on
more
argo
ship,
more
argo.
C
For
you
know
we
right
when
we
joined
the
organization,
I
was
told
that
red
hat
doesn't
serve,
doesn't
sell
products,
we
sell
knowledge
and
we
sell
support
and
that
you
know
through
our
open
source
contributions
and
where
we're
driving
there
strategically.
You
know
our
team
embraces
that
100
percent.
B
You
know,
I
think
it
galvanizes-
that
kind
of
better
together
mantra
where
we
want
to
work
as
openly
as
possible
as
quickly
as
possible,
with
these
teams
and
these
technologies
to
incorporate
them
in
a
functional
way
in
a
multi-cluster
way
that
we
keep
getting
that
imprinted
with
every
customer.
We
talked
to
chris,
it's
like
they
know
the
challenge
and
our
job
is
to
deliver
the
tool,
the
sticky
tool
that
they
come
back
to
every
day.
F
Yeah
just
to
continue
the
name
dropping
we
have
the
the
open
shift.
Cluster
update
service
is
another
one,
we're
just
integrating
with
the
current
release
and
we're
also
active
in
the
the
sig
multi-cluster
group
as
well
community
group
as
well,
and
we're
looking
to
continue
expanding
that
as
we
touch
with
argos
as
well.
F
Exactly
especially,
you
start
to
talk,
we
start
to
get
into
edge,
I
mean
we
can
spend
an
entire
hour
just
talking
about
those
types
of
scenarios
where
you
have
much
of
very,
very
small.
Almost
you
know
in
certain
cases
single
node
clusters,
where
you
need
to
go
out
and
run
thing,
but
you
have
you
know,
instead
of
there
being
50
or
60,
that
we've
talked
about
so
far.
There
are
thousands
of
them
because
they're
all
over
there
on
the
edge
working
with
iot,
et
cetera,
yeah,
yeah
yeah.
C
It's really about changing culture and helping people adopt the culture as soon as possible: infrastructure as code, GitOps. Our architecture enables us to rapidly take on these new projects because of its standardization on open standards, but also because it represents the source of truth.
C
Then
you
know
you're,
really
integrating
with
git
and
you're,
not
having
to
really
integrate
with
all
the
other
things
that
might
be
contributing
to
get
like
the
technology,
bonds
or
anything
else.
We're
really
picking
up
at
a
really
strategic
point
and
the
flexibility
of
our
policy
and
application
delivery
framework
is
that
we
can
take
any
of
the
operators
and
deploy
them
across
the
fleet.
So
we
enable
operator
usage
at
scale,
and
we've
demonstrated
that
internally,
with
red
hat
with
container
vulnerability.
C
Operators — and we're making headway very quickly with some of our third-party providers, who are able to contribute a policy that will deploy their operator solution to any number of different parts of the fleet. We just did some really interesting work with Sysdig — and Sysdig has been a valuable partner for Red Hat for many, many years. Zero code: we didn't write any code at all, but using the policy framework we can now distribute Sysdig to the entire fleet and enable them to provide their security capability, at scale.
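One common way to express "distribute an operator across the fleet with zero code" is a Policy whose object template is an OLM Subscription. This is a sketch under that assumption — the operator name, channel, and catalog source are placeholders, not the actual Sysdig policy discussed here:

```yaml
# Sketch: deploying a partner operator fleet-wide via the policy
# framework. Operator name, channel, and source are placeholders.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-install-partner-operator
  namespace: rhacm-policies
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: partner-operator-subscription
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: musthave
              objectDefinition:
                # Standard OLM Subscription; OLM on each managed
                # cluster then installs and upgrades the operator.
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: partner-operator
                  namespace: openshift-operators
                spec:
                  channel: stable
                  name: partner-operator
                  source: certified-operators
                  sourceNamespace: openshift-marketplace
```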
F
Absolutely. So I just flipped tabs here — since I'm sharing the screen, I'll keep rolling with it. Again, you see this list of the pillars when you land in the overview, and then on the launch-out as well: you have the observational-type overview pieces; you have the infrastructure, so the cluster creation — this is that Hive integration that we started at the very beginning of our journey, that Scott mentioned.
F
Applications, and our governance and risk. So maybe we'll start in clusters just quickly, and then we'll move over to the application section and talk about the GitOps — but just to orient, especially for those who haven't seen it before: you have your clusters, you can do filtering on names as well as on label definitions to help describe those clusters, and we have a very simple way of leveraging the Hive IPI for deploying.
F
You
know
it's
you,
you
type
in
a
name,
you
pick
a
provider,
so
the
two
new
one
goga
with
the
the
new
release
of
acm
2.1,
are
the
vmware
on-premises
as
well
as
well
as
the
bare
metal
moves
out
of
tech
preview
into
into
ga.
Once
you've
done
that
you
pick
from
the
open
shift
version
you
want,
we
have
just
one
or
the
latest
right
now
from
each
of
the
the
release
streams.
F
You
can
custom
curate
this
and
actually
there's
a
full
scenario
around
doing
this.
You
can
do
this
offline
in
a
disconnected
mode
as
well,
and
then
you
choose
your
provider
connection.
This
is
where
sort
of
the
credentials
it's
going
to
use.
F
And that's it. There's one piece I didn't click — I should have hit it while we were doing it, but...
F
Yes, exactly. You can click the YAML toggle here, and as you fill this out — with something like a name, and you pick a provider — it's creating all of the YAML pieces you need to build this. So when we start to talk about GitOps: number one, this is a way to create a fully built configuration for deployment, but it's also a way you can use to generate something like a template.
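The YAML the console generates centers on a Hive ClusterDeployment. A heavily trimmed sketch of the kind of manifest it emits — the cluster name, domain, region, and secret references are illustrative:

```yaml
# Trimmed sketch of a Hive ClusterDeployment like the one the console
# generates. Names, base domain, region, and secret refs are illustrative.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: quebec-charlie
  namespace: quebec-charlie
spec:
  baseDomain: clusters.example.com
  clusterName: quebec-charlie
  platform:
    aws:
      region: us-west-2
      credentialsSecretRef:
        name: aws-provider-connection      # the provider connection
  provisioning:
    imageSetRef:
      name: ocp-release-4.5                # which OpenShift release to install
    installConfigSecretRef:
      name: quebec-charlie-install-config  # generated install-config.yaml
```

Checked into Git, a manifest like this is what turns "create a cluster" into a reviewable, repeatable GitOps artifact.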
B
Yeah — Jeff, I mean, sorry, Josh — hit it on the head: this is an experience throughout all of our lifecycles, where we're moving you in the direction. Are you there yet? Maybe, maybe not. So you have the easy entry, but you also have the, you know, "steroids" view — turn on the advanced mode if you want the handle.
F
And since we're in clusters, I wanted to touch on this: you're going to have cases where you've already got OpenShift and/or other K8s clusters deployed in cloud providers, etc., and so we have the import infrastructure capability as well, which is just as easy.
F
You
create
a
name
we'll
leave
cloud
detection
auto
detect,
so
the
system
is
going
to
fill
that
in
I
hit
generate
a
command
which
is
going
to
spit
me
back
here
and
again
this
it's
all
hidden
behind
the
scenes
we
it's
going
to,
but
you
click
there
to
copy
it
to
the
clipboard.
I
log
into
my
gke
cluster,
my
aks
or
my
iks
cluster.
I
paste
the
command
and
boom
in
about
60
seconds,
or
so
you
get
a
an
import
into
acm,
at
which
point
you
can
then
apply
application,
control
and
or
policies.
F
And that's actually a great moment to segue into the next two pillars. We talked about the labeling in the cluster section and why it's important — when we did the create, we had it available for augmenting with different details, and it was present in the import case as well. These labels become a key pivot point for both our application and our policy lifecycle. So it's not so much: I want to deploy a new cluster...
F
...and then I need to go into application lifecycle and choose cluster A and cluster B that I just provisioned. With labels and what we call a placement rule, the applications you've deployed and the policies you've defined are able to automatically detect these new clusters as they come online; they're pulled into ACM and applied directly. That's one of the — especially in the governance and risk space...
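The placement rule being described is itself a small manifest that selects clusters by label, so subscriptions and policies bound to it follow the fleet automatically. A sketch — the rule name, namespace, and label values are illustrative:

```yaml
# Sketch of a PlacementRule: any available managed cluster labeled
# region=eu is selected. Name, namespace, and labels are illustrative.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: eu-clusters
  namespace: demo-app
spec:
  clusterConditions:
    - type: ManagedClusterConditionAvailable
      status: "True"                 # only healthy clusters
  clusterSelector:
    matchLabels:
      region: eu                     # user-defined label set at create/import
```

Adding the `region: eu` label to a new cluster is enough for everything bound to this rule to start applying to it.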
F
That's
one
of
the
really
important
points
of
of
this
labeling
placement
rule
technology
is
that
as
soon
as
you've
created
the
system,
you
know
these
policies
are
going
to
recognize.
Oh,
this
is
a
system.
It's
in
the
it's
got
a
label
region
that
says
it's
in
europe
and
therefore
it
needs
to
apply
an
eu
type
of.
F
...policy to it. Or you have a policy that looks for specific namespaces either being there or not being there, and as soon as the system is provisioned, that policy goes into effect. So then, if somebody comes in and starts to mess around, you're going to get that kind of policy response — and we'll show some of that in a minute as well.
F
But this Pac-Man app — what makes it special, and new to ACM, is these two Ansible pieces. We have this concept of a pre-hook and a post-hook, and, as the names describe, the pre-hook runs before the subscription — the subscription being how our application model brings apps to the clusters. You can think of a cluster as subscribing to a source: a Git repo, a Helm repository, etc. So a pre-hook job runs before that subscription is applied to a cluster, and the post-hook...
F
Job
will
run
once
the
job
is,
or
that
piece
of
manifest
is
applied
to
the
cluster,
and
so
for
our
our
pacman
example.
Our
pre-hook
job,
which
you
can
sort
of
make
out
here,
is
we
have
the
short
form
snow
because,
again
we're
we
love
our
our
acronyms,
which
is
a
servicenow
create
ticket.
So
we
we
went
to
galaxy.
You
know
we
didn't
really
want
to
build
anything
anything
extra
we
didn't
have
to
and.
F
One of the beauties of Ansible is that you have the entire Galaxy marketplace to take advantage of, so we grabbed an Ansible job for creating a ServiceNow ticket, and we got an Ansible job for F5 as a Service — their load balancer, which offers load balancing as well as other DNS definitions — that we defined as a post-hook.
F
What that allows us to do is define a load-balanced URL across the two clusters, and as we add or remove clusters — which I'll do in a second, based on labels — it's going to go out, create a ticket telling us it's making a change for the cluster, and also update the F5 load balancer, again with no...
F
...setup here before we kick it off. The beauty of Kube — but also what makes it hard to demonstrate — is that things happen awfully quickly in these spaces. So we've got ServiceNow here as well; we see the last ticket was 1009, because I deleted and cleared these up a bit, and we have our F5 loaded up here too.
F
We're going to look for a ServiceNow ticket to get created, as well as an additional path for our load-balanced domain to appear. We've got two clusters — if we click on here, we've got our two cluster names already: quebec-alpha and quebec-bravo. So we're going to add charlie, and again, I mentioned this placement rule — the placement rules are looking for this label.
F
Well, yes, the problem is it's so fast. We could see this blocked for a minute, and that's so the Ansible job could actually run, which would have happened on the back end. We'll hop over here — oh, I think I just lost my share, one second.
F
So what's happened is: not only has it run this pre-hook job, it's actually dynamically injected some parameters to give the Ansible job details, so that it knows — in this case — that it's now three clusters, or two if we were downsizing. That dynamic change is what really allows you to activate external technologies in a very automated fashion. For a ticket it's obviously useful information; for something like the F5, we'll just see.
F
If
it's
done
here,
oh
still
running
through
something
like
f5,
though
we're
using
those
names
to
be
able
to
look
up
and
see
what
is
the
ip
of
that
cluster
and
again,
we've
used?
This
is
there's
a
standard
repo
for
ansible
available
through
ansible
galaxy
for
the
for
the
f5
yeah.
F
Exactly
and
so
we
did
a
little
modification
to
give
it
an
extra
lookup,
but
that
was
all
it
took
to
get
it
working
as
a
as
a
load
balancer
for
our
kubernetes
apps.
So
we
can
see
the
third
ones
here,
it'll
take
just
about
30
seconds
or
so
for
the
what's
called
the
the
health
monitor
to
come
online
and
it
should
go
okay,
maybe
even
quicker
than
that,
we'll
hit
refresh
we'll
give
it.
C
...a little time — no rushing, you know. Chris, this was a strategy: what we've shown are some very practical examples that we think will resonate with our customers. We'll be on stage — we have some sessions at AnsibleFest that'll highlight this integration. What you're seeing is the first stage of that, right?
C
Put
these
ansible
call
outs
strategically
into
all
of
those
pillars
that
we
discussed.
So
as
we
move
forward
from
a
road
map
perspective.
What
you
see
here
is
the
pre
and
post
hook
for
git
ops,
application
life
cycle,
but
imagine
the
pre-deploy
for
for
the
for
a
cluster,
the
post
deploy
for
the
phone.
C
For me, right? Call a playbook: I've created a policy — call a playbook; I've disabled a policy — call a playbook. We're going to put it anywhere and everywhere that makes sense, and then it really becomes the art of the possible. You have Galaxy, and you can apply anything you want in there — if you wanted to start a pot of coffee every time you deployed, you could do that as well.
F
Exactly, yeah. We're tying in these external factors now, and that allows us to truly globalize the application — because we have the three clusters: one in Europe, one central, and the other one is, I think, western US. And I want to take a second just to go into the weeds a little bit here, just because I'm very proud of this stuff, to show the usefulness.
F
With a thousand clusters it's not always going to look like this, but when you get into a problem, that's where these types of topology views come into play and become very beneficial, because automatically, if there's a single pod having a problem, it's shown here and percolated up to the top. So here we have, for each of the clusters...
F
...the individual pod that's out there running — and if there were problems... I mean, these are all working fine, because it's a demo, but you can click into these. This isn't taking me down to the system, although that's an option as well if you want to grab the cluster; I can see the YAML that represents it. And a key point when I'm having a problem is...
F
Yep-
and
so
you
know,
these
are
kind
of
the.
These
are
the
key
points
when
you
get
into
that
problem.
Determination,
point
of
view.
It's
you
know.
The
idea
of
this
topology
was
to
make
it
very
easy
to
see
where
the
problem
was
number
one
so
which
app
was
having
a
problem.
When
you
look
at
the
app
in
a
topology
view,
the
next
thing
is
okay.
F
I
know
exactly
what
part
is
having
a
problem
and
we
have
representations
for
you
know,
there's
a
failure
if
it
didn't
just
didn't
deploy,
so
it's
not
replicating
out
so
that
when
you're
talking
about
these
scales
of
you
know
a
thousand
clusters,
500
clusters,
you
know
you'll
know
you
know
what
percentage
of
my
infrastructure
is.
Having
problems.
Is
everybody
having
problems
with
just
the
piece
right
and,
and
you
can
zoom
in
and
you
can
you
know
when
you
have
a
problem
because
there
was
a
security
or
you
couldn't
access
a
an
image
repository.
F
And so you never know — exactly. No, it's very true, and those are the types of visualizations we're after: to be able to pinpoint, and, if there are clusters of issues, to relate that back to: am I having a regional problem? Is there just a problem with this version, or is the service misconfigured? Right, exactly.
F
Yeah, and each of these nodes has some type of detail about the system. Whereas I didn't launch out before, you can click each of these links, and it'll take you down to the actual console for the three clusters involved. Then you have simple things like the route — we have two routes defined here, one being the load-balanced route — and when we click on that, you see only the single one.
F
...to instantiate something over the VPN, to somewhere in the US.
F
Well,
but
so
just
to
show
that
there
are
other
apps
running,
you
can
click
over
here
and
you
know
so
we
have
in
each
of
the
individual
cluster
urls.
So
we
can
click
into
alpha
which
is
running
in
the
us
east
and
we
can
click
over
to
the
other
pacman
at
the
charlie,
which
was
the
the
west
one
and
maybe
I'll
give
it
one
or
two
more
flips
in
the
pac-man
and
we'll
see
if
we
can
get
a
change
out
of
it
come
on.
C
Just
just
to
kind
of
to
add
to
that
you
know
the
the
the
other
exciting
thing
that
we're
putting
into
the
upcoming
november
release
is
really
around
the
clock
that
takes
cluster
down
around
the
observability
right
is:
we've
done
some
more
integration
with
open
source
projects
with
with
thanos
to
bring
observability
in
the
grafana
dashboards
and
all
that
back.
A
And Thanos is a relatively newish project, I should say, right? It's not exactly widely, hugely adopted, but it is becoming increasingly adopted for large-scale management of metrics.
C
It's
really
scale
right,
as
we
talked
about
or
about
you
know,
we've
got,
we've
got
a
scale
of
these
edge
edge.
Use
cases
create
thousands
of
clusters,
configure
them
the
way
that
I
want
them
and
throw
workloads
on
thousands
of
clusters.
Thanos
provides
you
that
capability
to
bring
it
back
in
some
way
shape
or
form
to
a
reasonable
sre
experience.
Without
you
know
having
retention
periods
of
45
seconds
right,
the
traditional
ways
of,
and
we
went
this
route.
C
The
traditional
way
of
bringing
things
back
to
the
central
pane
of
glass
was
to
do
a
federated
prometheus,
and
then
you
had
a
log
aggregator
at
the
hub
that
that
didn't
scale
and
that's
not
something.
That's
part
of
our
strategy.
Our
strategy
is
really
to
collect
the
information.
That's
useful
for
the
sre
use
cases
jump
in
context
when
necessary,
as
you
saw
josh
is.
F
Yeah,
I
guess
I
was
going
to
say
before
I
leave
this
page,
maybe
the
one
last
thing,
and
then
we
can
segue
into
governance
and
risk
or
or
and
the
observability
into
more
detail
is
you
know
we
didn't
pay
a
lot
of
homage
to
search
here,
but
we,
you
know,
each
of
these
we've
got
the
launch
out
to
get
the
search,
objects
and
search
can
be
run,
independent
of
the
application
or
the
grc's
pieces.
But
again
it
brings
you
know
if
you
really
want
the
gory
guts
of
all
the
pieces
that
are
involved.
F
You
know
so
for
this
subscription.
These
are
all
the
different
parts
and
then
you
can
click
on
them
and
filter
in
you
know
the
cluster
pieces
that
are
involved
the
objects
and
all
of
the
different
stuff.
In
there
we
have
the
history
of
the
different
ansible
jobs
that
ran
over
time.
As
I
read
this,
so
all
of
those
details
are
searchable
and
you
can
filter
on
it.
You
can
do
the
yaml
for
certain.
D
I was going to comment that, first of all, the demo is incredible and the technology is incredible. For those who might be struggling to follow along at home on all the pieces here — Josh, if you go back to that topology view — some of the things Josh first walked through were the ability to create clusters or import clusters, and then he set up some rules that could put the clusters into a common configuration, a common compliance.
D
So
maybe
it's
a
healthcare
business
and
hipaa's
important
or
pci,
or
federal,
some
of
the
federal
regulations
or
just
compliance
policies
for
our
business,
so
presume.
In
this
case,
josh
created
three
clusters.
They
were
all
configured
appropriately
to
run
this
application,
that's
running
in
production,
okay,
then
he
goes
and
deploys.
D
This
application
happens
to
be
a
pacman,
a
fun
application,
but
he's
deploying
multiple
instances
of
this
application
using
something
called
the
subscription,
basically
think
of
it
as
a
getups
model
that
ties
the
source
from
a
git
repo
to
potentially
with
gates
and
some
policies
to
individual
clusters
that
he's
deploying
to
so
now
we
can
replicate
instances
of
this
application
weather
complex
into
one
or
more
clusters
based
on
the
deployment
policy.
That's
how
he's
going
to
drive
scalability
for
the
for
the
application.
D
Now
in
doing
that
in
many
businesses,
you
then
have
to
integrate
with
the
various
operational
controls.
So
you
end
up
in
many
businesses
saying:
okay,
here's
a
change
request,
change
ticket's
going
in
so
there
he
hooked
into
before.
He
did
the
actual
deployment
he
hooked
into
servicenow
and
a
change
request.
D
He
then
deployed
the
application,
but
then
post
the
deploy
a
configuration
needed
to
be
done
to
the
global
load
balancer
to
have
this
new
instance
put
in
put
into
the
into
the
environment
to
the
global
load
balancer,
so
that
was
the
subsequent
post
playbook
from
ansible
out
to
f5.
So
what
he
was
able
to
do
was
manage
the
clusters
consistently
manage
the
deployment
of
the
applications,
understanding
the
model,
the
apology
app
where
it's
going
to
go
under
policy
and
then
configure
it
into
external
services
appropriately
like,
in
this
case,
service
management
and
a
load
balancer.
D
There could be other network configurations that are required, storage configurations, maybe even security like threat-management collectors and things like that. So this is why this integration, first of all the capabilities that Josh went through, are critical to scaling the use of containers, cloud native and Kubernetes. But then the ability to integrate that with all the wealth of automation that's available through Ansible Automation Platform is very significant in how this will roll out into enterprises in a hybrid environment. So I'm sure all that came across in the demo.
F
So here is the Git repo, and this is the manifest for that application, and then you can have one or more of those Ansible pre-hooks and post-hooks. This is just the reference to the job that it's going to run post. This is the new integration: a new AnsibleJob kind that allows you to leverage an Ansible Automation Platform deployment that you have in the environment. Same thing for the pre-hook; we have the ServiceNow one, this is the job call, and what these reach out to are just the readily available Ansible Galaxy content. And I guess the part I didn't show, because the application was already in topology, the other major change is to create an app; you'll see.
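The pre- and post-hooks Josh shows live in `prehook/` and `posthook/` directories of the subscribed Git repo as AnsibleJob resources. Here is a hedged sketch of what a post-hook for the F5 update might look like; the resource name, job template, secret, and extra_vars are made up for illustration, and the exact CRD fields should be confirmed against the Tower/AAP resource operator docs:

```yaml
# posthook/f5-update.yaml -- runs after the subscription deploys the app.
# The AnsibleJob kind is provided by the Ansible Tower/AAP resource operator.
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: update-f5-gslb                  # illustrative name
spec:
  tower_auth_secret: toweraccess        # secret holding Tower/AAP credentials
  job_template_name: configure-f5-gslb  # hypothetical job template defined in AAP
  extra_vars:
    app_name: pacman
```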
F
This looks very similar to our cluster create, in that you give it the name, you pick the namespace (we'll just reuse the Pac-Man one here, one more), and then we pick Git, and since this is already deployed, we know about the repo I was just in.
F
Can
cut
and
paste
it?
It's
got
some
dynamic.
This
is
slightly
older
build,
but
you
have
some
dynamic
population
of
branches,
et
cetera.
So
I
didn't
you
know
we
picked
it.
You
put
path
in
the
final
builds
you
get
this
automatically
and
then
for
ansible
integration.
You
know
we
you
look.
You
pick
the
access.
F
You
can
have
more
than
one
tower
that
you
reference
to
depending
on
where
you're
running
this
and
what
you
want
to
do
with
it,
and
then
we
get
into
our
placement
where
you
can
either
define
a
new
placement
rule
or
you
can
select
the
existing
place.
An
existing
placement
rule
that's
already
there
and
it
gives
you
a
little
readout
of
you
know
what
it
was
going
to
be.
We've
also
got
these
time
window
capabilities
where
so
you
know
most
of
the
time.
Kubernetes
is
all
about
got
to
get
to
state
gotta
get
to
state.
F
You
can
you
know
in
a
production
environment,
you
can
set
it
to
be
sunday
nights
between
1
a.m
and
3
a.m,
and
pick
a
time
zone
is
the
only
time
it's
going
to
actually
make.
So,
even
if
I
committed
and
get
we
won't
apply
it
to
production
until
we
hit
that
window.
So
you
know
your
production
isn't
going
through
a
change
in
the
middle
of
black
friday
or
amazon
prime
day
type
of
thing.
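The deployment window Josh describes is expressed on the Subscription itself. Roughly like the following; the field spellings vary by release, so treat this as a sketch and confirm against the RHACM subscription docs:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: pacman-subscription
  namespace: pacman
spec:
  channel: pacman/pacman-channel
  # Only apply changes Sundays between 1am and 3am Eastern; commits that
  # land outside the window wait until the window next opens.
  timewindow:
    windowtype: active
    location: America/New_York
    daysofweek:
    - Sunday
    hours:
    - start: "1:00AM"
      end: "3:00AM"
```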
A
You can ask your little dingus in the corner to get you a five-dollar Amazon plug, I found out today. Just ask it for one and it will give you five bucks. Those are very handy, I have found, for decorations and heating pads, other than that.
F
They, yeah.

F
Here too, yes, absolutely. So you get the YAML for this definition, and again this gives you a little bit more of an example. It's not so long, because I used a pre-existing placement rule, but the simplification you gain from having these side-by-side wizards... there's a lot of YAML code here that one otherwise has to write, and that's not specific to our application model. That's across-the-board Kubernetes, right?
F
That's
just
how
life
is
and
so
having
these
you
know,
you
get
a
sense
for
it.
But
again
it
goes
back
to
I
talked
about
you
could
take
this
yaml
put
it
into
a
subscription
or
subscribe
it
as
git
ops.
You
can
do
the
same
here.
You
can
have
a
subscription.
You
know
you
can
you
can
have
a
subscription
subscribe
another
subscription,
so
you
can
have
one.
F
Hub, exactly, and it pulls in all of the configuration that you want, all of your other subscriptions for your apps. So you don't even need to use this UI in that sense. You could add applications into that Git repo, going through a GitOps flow where you add the application YAML, you put it up for review, and the operator, you know, DevOps or the operators in charge, says okay, it's all right.
F
They
merged
the
request
and
then
that
one
single
subscription
that
was
running
on
the
hub
pulls
all
any
new
ones
down
and
applies
it
and
so
again
you
you
can
you
can
sort
it
it's
cyclical
and
that
you
can
come
back
and
use
the
subscription
for
it.
You
know
to
manage
itself
as
well
as
all
of
these
other
pillars,
but
wait.
There's
more.
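The "subscription subscribing another subscription" pattern means the hub runs one root subscription whose Git repo contains the YAML for all the other application subscriptions; merging a pull request that adds a new Subscription file is then all it takes to onboard an app. Conceptually something like this, where the channel, names, and namespace are illustrative:

```yaml
# One "root" subscription on the hub, pointed at a repo of Subscription YAML.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: all-apps                          # illustrative name
  namespace: gitops
spec:
  channel: gitops/app-of-apps-channel     # Git repo holding the other subscriptions
  placement:
    local: true   # apply on the hub itself; those subscriptions then fan out
```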
C
The policy... so much at this point. Should I do it real quick on policy?

C
Amazon Prime Day, and my wife is a Primer.
C
So we've been talking a lot about labels, and they really are the key to the system, because we want this dynamically driven desired-state model. That is incredibly important for us, but it's also incredibly powerful. With great power comes great responsibility, as we like to say, and these policies are really driving the configuration of your entire fleet. So when you think about it from a governance, risk and compliance perspective, we have a number of different out-of-the-box policies that we include in ACM.
C
Our policies are located... in the stable folder you'll find everything that we ship in the box from a policy perspective, but we also have this concept of community. The community is where we want to rally all of the users of ACM to contribute their policies, their YAML, in pursuit of their compliance goals, so they can put those into the community repository. This is how we have actually integrated with third parties like Sysdig.
C
No
no
code
was
necessary
to
drop
in
either
one
of
the
products
they
whittled
together,
a
policy
that's
actually
in
our
community
around
the
systig
project
and
so
they've
leveraged
this
technique
of
deploying
an
operator
at
scale
to
the
fleet
and
their
operator
is
the
cystic,
the
cystic
agent.
They
basically
followed
what
we
have
in
the
box
when
it
comes
to
that
container
vulnerability
thing
that
we
were
talking
about
a
little
bit
earlier.
C
So
if
we
look
at
the
container
vulnerability
policy,
this
policy
essentially
is
is
based
on
the
desired
state,
so
leveraging
the
placement
rules
and
those
labels.
It's
going
to
deploy
the
the
operator
to
n
number
of
different
clusters
in
the
fleet.
So,
let's
take
a
look
at
it
from
a
ui
perspective
and
when
we
look
at
this
image
vulnerability
and
we
look
at
the
policy
itself,
this
is
basically
saying
deploy
this
to
the
fleet.
C
Here's
the
configuration
of
the
of
the
crs
and
then
here's
what
I
I
want
to
do,
and
this
is
showing
not
compliant,
because
what
it's
done
is
it's
it's
gone
out
and
let
me
take
a
step
back.
We
see
again,
our
labels
are
key,
so
we're
saying
that
this
operator
needs
to
be
deployed
on
clusters
that
have
a
environment
of
dev
right.
C
So
this
thing
is
out
there
with
the
match
expression
and
what
it's
done
is
actually
found
container
vulnerabilities
out
there,
so
we
can
take
a
look
at
where
we
are
compliant
and
when
we're
not
compliant
on
this
policy,
we
also
have
a
new
feature
that
we
got
out
of
our
our
test-a-thons
in
the
early
going
was.
I
want
to
know
more
about
the
history
of
my
violation,
so
we've
added
that
in
this
release
we
dig
into
what's
what's
compliant.
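The label-driven targeting Jeff describes comes down to a placement rule with a match expression, bound to the policy. A sketch, with the namespace and names chosen for illustration:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-dev-clusters
  namespace: policies
spec:
  clusterSelector:
    matchExpressions:
    - key: environment          # the cluster label discussed above
      operator: In
      values:
      - dev
---
# Binding that attaches the policy to whatever clusters the rule selects.
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-image-vulnerability
  namespace: policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-dev-clusters
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-imagemanifestvulnpolicy   # illustrative policy name
```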
C
And we think that it's a perfect complement to ACM, because it's a one-two punch. What we've been saying is that if I have a label that says this is a development cluster and it needs to be configured this way, ACM provides the ability to change that cluster and bring it to your desired state. OPA, then, is going to prevent anyone from changing away from that desired state. So it's a solid one-two punch when we talk about ACM's capability combined with OPA: the desired-state model is setting a cluster...
C
The
way
it's
supposed
to
be
and
then
being
able
to
at
scale
deliver
open
policies
that
prevent
any
configuration
drift
without
oppa.
In
the
past,
we've
been
able
to
detect
drift
and
put
it
back,
but
in
this
case
now
we
can.
We
can
deploy
the
constraints
through
the
controller
that'll.
Keep
you
from.
C
Like
this,
this
open
policy
is
is
is
about
saying
that
all
these
namespaces
must
have
these
required
labels.
It
will
also
return
back
and
say
you
know
you
just
put
this
policy
into
play,
but
before
you
put
this
policy
in
play,
these
are
the
things
that
are
were
already
in
the
system
that
wasn't
compliant.
That
gives
you
that
opportunity
from
a
from
an
acm
perspective,
to
go.
Oh,
you
know
what
I
didn't
know
those
were
in
the
system.
C
Let
me
create
an
acm
policy,
that's
going
to
set
them
correctly
and
then
I'm
now
I'm
preventing
configuration
drift.
So
the
one
two
punch
allows
you
to
do
the
cleanup
and
but
wait
there's
more.
The
other
part
with
opa
is
we
will?
We
will
provide
you
information
into
the
hub
when
someone
is
trying
to
drift,
someone
is
going
there
and
actually
doing
things,
that's
being
caught
being
prevented.
We
were
able
to
surface
those
through
history
through
this
audit
or
I'm
sorry
through
the
admission
capability.
C
Because we have our ACM policies, we're going to have OPA policies pretty soon, and there's going to be another thing: we've been working with Sysdig around Falco, and maybe Falco rules kind of smell like a policy, right? We will have a management platform for all of these, being able to distribute all of the technology at scale and then report back centrally. That's really ACM's strategy in that area as well.
A
That's
awesome
right
and
the
the
scale
right,
like
the
fact
that
you
can
do
this
with
tens.
Dozens
hundreds
potentially
of
clusters
is
what
makes
it
really
really
amazing
to
me
is
that
I
can
look
at
an
entire
fleet
of
clusters
doing
all
kinds
of
operations
for
my
environment.
For
my
for
my
organization,
and
just
it's
all
right
there
right.
If
I
need
to
dive
down
into
a
pod,
I
can,
and
it's
right
there
in
front
of
me.
All
I
have
to
do
is
just
click
away
at
it.
C
So
if
you
look
at
that
policy
collection
that
I
showed
you
it'll
actually
teach
you
how
to
bring
those
policies
into
your
new
hub,
prime
the
pump
and
get
started
right
away.
So
you
know
when
that
when
that
community
we
get
more
and
more
adoption
in
the
community.
We're
gonna
be
outs
and
sourcing
from
our
best
practices
in
red
hat.
B
You can see we're all bursting at the seams to talk about how excited we are with RHACM here, but we actually pulled our VP off the golf course to be here today. So I want to give Dave the final minute to bring us home and tell us, you know, what do we get to look forward to? What's coming up on the horizon?
D
Well, certainly the series is important, to go through in more depth these various areas: search, policy, Kui, I think all exciting areas. Moving forward, we have already seen a lot of investment in how this supports telco environments, 5G, edge... you know, we were talking about that.
D
The partner models, as Jeff alluded to a few times, are just bubbling over, and what the system does is put everything in context: the Kube resources, the applications, how they're deployed, what the configuration should be. That's invaluable as you begin to integrate across the wealth of management, operations and security systems. So, key points: we continue, and we'll continue, to accelerate work in the open projects.
D
That's
critical
to
the
community.
We
will
see
continued
drive
in
extension
of
this
out
into
beyond
the
hybrid
cloud,
but
into
the
edge
deployments.
A
So yeah, folks, this show is a repeating show. Every other week you'll have the experts from RHACM here to talk about all things RHACM, and we will be able to answer any questions you may have, and if we can't, we can get them answered shortly thereafter. So trust me, I have ways of tracking people down. You think you can run, but you can't hide.