Description
Red Hat Advanced Cluster Management for Kubernetes gives you end-to-end visibility and control over your Kubernetes clusters. It controls clusters and applications from a single console, with built-in security policies, and extends the value of Red Hat OpenShift by deploying apps, managing multiple clusters, and enforcing policies across many clusters at scale. In this AMA session, Red Hat's Scott Berens and members of the engineering team give an overview and demonstration of ACM and answer questions from the audience.
A
All right, everybody, welcome to another OpenShift Commons briefing. As we like to do on Mondays, we're going to have an ask-me-anything session with one of the projects, offerings, or feature sets in the OpenShift ecosystem, and today we have the team behind Red Hat's Advanced Cluster Management for Kubernetes.
A
That's a mouthful, but Scott Berens is going to kick us off, introduce all the folks who are here today to answer your questions, and tell you a little bit up front about what it all means. I'll let Scott take it away. You can ask your questions in the chat wherever you are on the live streams, or if you're in BlueJeans, and we will try to answer them today. So, Scott, take it away.
B
Those were the days, when we could get together, but, alas, here we are at a virtual ask-me-anything. I'm Scott Berens, a product manager working on RHACM. We affectionately refer to that long string of text, Red Hat Advanced Cluster Management for Kubernetes, as RHACM ("rack-em"). I'm joined by some esteemed colleagues, and I'll do a quick intro for them.
B
So, our senior architect, Michael Elder, who's been in this multi-cloud, multi-cluster space for at least two years, if not more. Josh Packer, his sidekick, the Robin-to-Batman scenario does well here for us. Josh really focuses as an architect on our application management space: a lot of the advanced scenarios around GitOps, ensuring that we can place workloads across clusters and clouds. And our esteemed technical marketing manager, Jimmy Alvarez, who is located in sunny Florida and wishing he was outside on the beach right now. So I appreciate our panelists for being here.
B
I'm fired up. I'm excited to be in front of this audience. It's been a long time coming, but really, multi-cluster management has value to the world and we hope to bring it to you. We hope you're already aware of the fact that we have a 2.0 release that came out in July. Of course, our announcement goes back to Summit, and we'll be quickly accelerating to a 2.1 release coming out in the fall, so we're working hard for you.
B
I think the fact that our product title includes the word Kubernetes should really clue you in that we're focusing on the Kubernetes plane here. We really want to normalize that experience from a single cluster to multi-cluster and beyond, so distributed: looking at hundreds and thousands of clusters that you're managing in your fleet. We all get it: Kubernetes is the decision, it's the market trend.
B
So we'll start with a good look at what we're doing from a product perspective, and try to answer your questions about how we're doing that: from an architectural point of view, how we're defining and implementing that desired state. Hopefully you have some questions you're thinking about around DevOps and GitOps and security ops, and how we define all of that in a multi-cluster approach.
B
We focus on three key lifecycle areas. The first is the cluster. The lifecycle of the cluster is probably where most of our operations teams will begin, and we really want to ease that management pain so that you're not having to wake up on Saturdays and Sundays and do more work.
B
The second is policy-driven compliance. We have this picture called governance, risk, and compliance, which helps us define not only the security of a cluster but also the configuration of that cluster. What kinds of elements, in terms of OAuth, resource limits, certificate expirations, IAM role bindings: all of the things that really define your cluster. Think about day-two operations, and how you spend that time to really articulate what this cluster is for. Is it dev or is it prod? Is it QA test? Is it an edge cluster?
B
Does it have certain requirements for banking and HIPAA? So really the point around policy is not just thinking of it strictly from a security and regulatory perspective, but also from a configuration standpoint. How do you define the configuration of a cluster that is always going to be there, on time, whether you've just imported it or just created it from RHACM? And then the third key area is our application lifecycle management.
B
This is really where we start to have a discussion around deployables and workloads: how we define those in a consistent way, and how they can come out of a Git repo and immediately be sprayed out to 100 clusters without you having to do anything special, just by label assignment and a placement rule that has already determined how that workload should be distributed. So those are really the three key areas.
B
We've got the cluster, the security, and the application, and we'll talk through those; hopefully you have some questions on your mind about how we tackle those lifecycle areas. As for the benefits, I'm just going to go ahead and say these are all on the box; I don't want to spend time on them. We're really trying to reduce the bottlenecks in getting from development to production. So how do we streamline that centrally?
B
Reduction of cost is table stakes; increased application availability is another table stake. And then the ease of compliance, that's really a unique area: bringing that security role to the forefront and saying we're going to have a conversation with security, and we're going to make sure that development and operations are all at the table, so everybody's at the table at the same time, talking through what security means from building out that application all the way to deployment to the clusters.
B
In these conversations, a quick blurb about 4.5: we do run on OpenShift 4.5, and I know that this audience is very familiar with what's going on there. So, just as a quick level set, we run as an operator.
B
We come out of OperatorHub and we run on top of an OCP cluster. I just thought it was relevant to lay out the land. This is what's new in OCP 4.5: just a handful of improvements going on at the hub, the OpenShift layer. And then, in terms of how we fit in, we are that purple bar across the top.
B
We are the multi-cluster management tool that has a full purview and a full look toward integration and control all the way down the stack, across the whole "OpenShift everywhere" mantra. Wherever you find an OpenShift cluster, you can have OpenShift management. We can also manage non-OCP clusters, so Kubernetes as a target.
B
We handle AKS, EKS, GKE, and IKS. So here's the big perspective, with the purple bar across the top; hopefully that helps you understand where we sit in the cake. From that point, I think those are the only slides I wanted to walk through. There's a hub, and we'll talk today about the hub: a layer that's running on top of the OCP platform.
B
So that's the two-minute spiel. I think it took more like ten, but I think we're at a point where we've got the table set. I really hope there are some questions that have already popped up, so maybe Jimmy has a good chance to trickle some of those in at this point.
C
Yep, we have one question here that came up around OKD and ACM. The first question is: what are the plans that we have with ACM for OKD? And they link to a blog where there's some talk about it, so they're basically asking: is it just a matter of time until we're integrating with OKD?
A
Okay, so OKD 4 just released about, I don't know, somebody could probably correct me, I think about four weeks ago, yeah.
E
The question was really about when we make RHACM available as open source, right? There's a reference to a blog that we can drop in the various chat streams that talks about that pattern. We deliver some of the operators today to the community OperatorHub, so you can deploy some of the parts of RHACM, like Hive, which gives you some APIs around OpenShift cluster management, or our application subscription model, which is how we distribute applications across many clusters.
E
We will have other community operators that feed into that over time. There's some work around cluster registration, and the cluster-agent-based capability that's used in RHACM is becoming a community operator as well. The GitHub project open-cluster-management is where we are making these things publicly available and where we're engaging the community.
E
There are some people who take the operator and try to run it on OKD today, and we will support importing an OKD cluster. We only provision the full OpenShift installer-provisioned-infrastructure patterns on the three public clouds, but you can import and manage an OKD cluster today. From a support perspective, we really target the RHACM hub running on top of OpenShift as its base hub cluster, but the technology itself, I think, would be runnable on OKD; it's just not something
E
we prioritize documenting and driving, because we're looking for community involvement to help drive some of that. We're more than happy to help support the community, and we may end up doing more of that ourselves. But I think really the point there is: when are we open-sourcing RHACM? And large parts of it are already there today, around cluster lifecycle, the application model, and the policy framework. There are some parts, like search and the UI, where we're working through the path to open source now.
A
That's great news, actually, and the OKD working group has been working through which of the operators are going to be available in the catalog that comes with OKD as well. So if people are interested in that, you can go to okd.io and find all of that; OperatorHub.io is for generic Kubernetes. Creating the operators that work with OKD, which runs on Fedora CoreOS, has a few more steps involved in it, just as creating other things does.
A
So there's a little bit of work that has to go on beyond open-sourcing your work: making it work on OKD 4, along with the documentation for the OKD community to update and blog about. But we'll definitely reach out to you guys and invite you to that party. Thanks.
A
That's cool. So I had a question for you guys. You talk about this in terms of DevOps in your presentation; how does this surface for developers? I get the DevOps stuff, and this feels to me like a very operations, admin-side-of-the-house thing, but how do you see this adding value to the developer side of things?
E
So, a few areas, really. I think one: how do we enable developers to get access to the right kinds of resources? How do we make it easier for them to actually create clusters or to deliver content to clusters? RHACM is not so much about creating a new application; you're not going to go to RHACM and scaffold a Node app and convert that into a container image.
E
You're going to use the dev tools from Red Hat for that capability, right: use odo to scaffold a new application, use things like S2I to convert that into a container image, package it up with the YAML. Where RHACM is going to step in to help the development and delivery process is really from the point that you've got assets in Git and in an image registry. I've got assets that are ready to be deployed, and then RHACM defines a model of the application that lets you link those things that need to be deployed to the right location.
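As a rough sketch of that model (resource kinds and field names here follow the ACM 2.x application APIs as I understand them; all names like my-app are illustrative, not from the demo), an Application groups one or more Subscriptions, and each Subscription references a channel such as a Git repo plus a placement:

```yaml
# Illustrative sketch only: an Application whose Subscription pulls
# manifests from a Git channel and places them via a PlacementRule.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: my-app
  namespace: my-app-ns
spec:
  componentKinds:
    - group: apps.open-cluster-management.io
      kind: Subscription
  selector:
    matchLabels:
      app: my-app
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: my-app-subscription
  namespace: my-app-ns
  labels:
    app: my-app
spec:
  channel: my-channels/my-git-channel   # reference to a Git-type Channel resource
  placement:
    placementRef:
      kind: PlacementRule
      name: my-app-placement            # which clusters receive the content
```

The subscription is the link between "assets in Git" and "where they should run"; the placement reference is resolved dynamically, as described below.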
E
I don't know if it's helpful, but I can jump into showing what that looks like in a real environment. That would be awesome; all right, let's take a look at that over here. So, within RHACM we have the ability to create clusters and import clusters. As for developers, it's been our experience that they probably touch many clusters less often than, say, a site reliability engineer (SRE) type of role or an operator type of role.
E
Those roles are going to have more of this type of view. A developer might only see the one or two clusters, like the dev and QA clusters, that they're working within. But when they start to think about the application model: in fact, maybe one quick application example. It's built off of WordPress, and this repo is public.
E
We're using part of RHACM's dynamic placement engine to describe where we want this particular application to be deployed. And so this dynamic placement rule (let me make my text bigger here, so you can actually see what I'm trying to show you): in this model, we've got the application; we've got a link, the subscription, which is basically a reference back to the GitHub repo, in this case the kustomize repo in the kubernetes-sigs org; and then we've got a dynamic placement rule which matches clusters based on a label, right?
E
So all clusters can be labeled. They have labels like the cloud that they're running on, or the vendor, and then I can add labels like, in this example, purpose=development. I can define the number of replicas that are desired as well, and then at that point RHACM takes over to begin delivering the application to those target clusters, and once they're deployed, I can see.
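A placement rule like the one being described might look roughly like this (a sketch against the ACM 2.x PlacementRule API; the purpose=development label comes from the demo, the rest is illustrative):

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: dev-clusters-placement
  namespace: my-app-ns
spec:
  clusterReplicas: 2            # how many matching clusters to deliver to
  clusterSelector:
    matchLabels:
      purpose: development      # matches any managed cluster labeled purpose=development
```

Because the selector is evaluated continuously, a cluster labeled purpose=development after the fact is matched and receives the workload without any change to the rule itself.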
E
Okay, where are the pods for this application running? So now, as a developer, instead of having to switch my terminal between multiple Kubernetes contexts in order to see what's going on with the pod in this cluster and the pod in that cluster, here I'm leveraging the ability in RHACM to index all of the things, and I can use search to actually drill into the details, or to view the pod logs in this case. But what's also really neat is that I can actually look at all of the objects related to the application
E
in one query. So here I can see the deployment, and I can see the cluster it's running on. If this were federated across many clusters, then I'd see the deployment for each cluster. And if the application had more than one subscription, maybe it had a front-end subscription and a different back-end subscription that happened to be placed on different clusters.
C
Trying to see if... oh, there's a follow-up question, and that one was directed to you, Scott. When you say private on-prem, does that include restricted/disconnected clusters on-prem? And is RHACM targeting only IPI OpenShift clusters for now in the public clouds, like Michael said? So it might be a good time to clarify those questions.
B
On the on-prem question: yes, we do support disconnected clusters. In fact, the hub can be disconnected, as well as those managed clusters, and we're working through scenarios to ensure that we have a fantastic user experience with upgrading of those disconnected clusters. We want everything to be seamless from the management plane that RHACM provides, whether it's a connected or disconnected environment that you're managing. And then the second part of that was around, oh, IPI. Yes, so Hive is the API that we're targeting, which handles the IPI installation flow.
B
IPI is installer-provisioned infrastructure, and so when Michael drives through that create flow, we're targeting that Hive API to handle all the infrastructure management that's underneath the cluster; storage and VMs and networking are all tied in together at the IPI layer. If you have UPI, which is user-provisioned infrastructure, we can import those clusters: any existing clusters in your fleet, or anything that you're carving out through a UPI flow.
B
Just a few quick clicks, and you get this import command that you would run in kubectl on your target cluster, or with the oc command that you're running on the target cluster there. So, while we're on importing clusters, I see a follow-up question there in the chat. The question is: can it be another cluster type other than OpenShift? And the answer is yes. We always have a red-carpet experience with OpenShift; we know more about Red Hat's cluster, we know more about that ecosystem.
B
That can mean we know more about operators; we know more about the underlying network capability and storage. We'll always have a richer experience with an OpenShift cluster. But we can also import Azure Kubernetes Service, Amazon's EKS, Google's GKE, as well as IBM's IKS. So those are four managed-service Kubernetes types that we can import and do management of. And when I say management, it means I can see the cluster here in the inventory list, but I can also define labels and start to target those clusters for management scenarios around applications as well as security policy.
B
So in this case I have "eureka", which is an EKS cluster running on Amazon. Its purpose is development, so I'm doing some quick dev test on that cluster; same thing here with my GKE cluster. When it comes to my production workloads, I happen to be running Singapore and Portland; these are defined more toward production, and I can start to target workloads differently depending on which channel they're coming out of. That's an area of expertise that Josh, hopefully, is itching to talk about, and hopefully we have a question that targets it.
C
Yeah, it seems like he did; he said thank you. Excellent. So yeah, that's another good point right there to dive a little bit deeper into. Now that we have the cluster screen up, we can see the different types of clusters that we have available in the web: some we have imported versus some of them we have built, right? And you can see the functionality also allows us to do upgrades to OpenShift clusters, which is, you know, something that we didn't really talk about.
E
And Jimmy, that really is a key aspect here. As that question alluded to, we're always going to have more capability for OpenShift, for a number of reasons. In particular, we can drive upgrades of OpenShift clusters regardless of what infrastructure they're running on. So we can trigger an upgrade, pick an available version, and target it, whether that cluster is running on a cloud provider (like here I've got Azure or Amazon, and in this other hub I've got some in Google as well)
E
or OpenShift on VMware; we can still drive the same upgrade behavior. A lot of that has to do with the very operator-centric nature of the way that the OpenShift cluster is configured, how it's operated, and how it's maintained. We're able to trigger the cluster to go in and drive a more intelligent upgrade path than we could for an assembled cloud
E
provider's version of Kubernetes. What's really powerful, when you start thinking about all of that API surface area to configure how OpenShift runs (how I do identity providers, how I define authentication, how I manage the network configuration of that cluster, how I manage the storage configuration of that cluster, whether I want to augment it with security behavior), is that I can drive all of that through the operator catalog, and I can use RHACM to actually deliver content through a set of policy behaviors down into those OpenShift clusters.
E
That's also true for image manifest vulnerabilities. Here we've got an example that is pushing down a subscription to the Container Security Operator, and so in this target cluster it has deployed the Container Security Operator, but there's still a violation, because it's actually finding an ImageManifestVuln record, which is basically an API kind. It records that one of the containers in that cluster has a CVE, right: a common vulnerability and exposure that needs to be remediated, and I could only do that by going and updating the version of the container image
E
that's backing that pod. So here I'm able to enforce the behavior of having the operator deployed, and then also validate this additional concern that it must not have any of these records indicating there's a problem. Those are operator-based policies that really only apply in terms of OpenShift. If I'm just doing things like namespaces or roles,
E
here I might have a policy where I wanted to create a particular namespace called prod-apps on my target cluster, and a particular limit range on that namespace as well. This policy will work on the EKS clusters, the GKE cluster, the AKS and IKS, because it's just base Kubernetes API; but to really take advantage of the powerful operator ecosystem, you want OpenShift.
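A policy along those lines could be sketched like this (based on the ACM 2.x policy framework; the prod-apps name comes from the demo, and the limit values and other names are illustrative assumptions):

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-prod-apps-namespace
  namespace: hub-policies          # hub-side namespace holding the policy
spec:
  remediationAction: enforce       # create the objects if they are missing
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: prod-apps-namespace
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: prod-apps
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: LimitRange
                metadata:
                  name: prod-apps-limits
                  namespace: prod-apps
                spec:
                  limits:
                    - default:
                        memory: 512Mi
                      type: Container
```

Because Namespace and LimitRange are core Kubernetes kinds, a policy like this applies equally to OpenShift and to imported EKS, GKE, AKS, or IKS clusters.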
C
So we had a question come up around RBAC: how do we handle RBAC from an ACM perspective? I think that's a good one to go over, so I don't know, Scott, Michael, who wants to take it?
E
So I don't know if I'll be able to pull up the page fast enough, but if you look at the RBAC documentation for RHACM, we define a number of roles. In fact, this is going to be an opportunity to show off one of my favorite RHACM features, so Josh and Scott are going to chuckle in the background at me. What I'm going to do is open up our visual web terminal; it's just a way
E
to do this: let's look at all the cluster roles that are on my hub, and you'll notice I've got all kinds of things. But if I scroll up here, you'll see a whole wave of cluster roles that start with this open-cluster-management prefix, and one of these is open-cluster-management:cluster-manager-admin. Now, if this were just a blank terminal, I'd have to go and run a get -o yaml command in order to show you the detail, but with the visual web terminal I can just pop it up and get that detail here.
E
So this particular cluster role gives a user access to do all the things with clusters: create clusters, import clusters, etc. Now, if I want to assign a user to a particular cluster: whenever we create or import a cluster, we actually create a role called open-cluster-management:managedcluster:&lt;name of the cluster&gt;. You may notice golf, india, hotel, juliet, alpha.
E
These are clusters that are registered on this hub, and this cluster role is dynamically created. If I create a cluster role binding to my user, that user will be able to view that cluster. If I logged in as a user that wasn't a cluster-manager-admin and wasn't a cluster admin, maybe I added a role for Scott, and I could create a cluster role binding for Scott to the india and the hotel clusters,
E
the roles that you see here, and now Scott would be able to view only india and only hotel. He'd have full control over those clusters, but he wouldn't have access or visibility to the rest of the clusters. Behind the scenes, we are also creating a project for each of the clusters that we create, and the project serves as a home to hold the policy resources, the application resources, or other authorization
E
resources, like additional service accounts and things like that, that interact with that remote cluster. And so I'll also see juliet, alpha, india, hotel, etc. These projects are defined as the cluster is created or imported. So I would assign Scott a binding to that cluster role, which gives him access to the ManagedCluster API object, and then I'd assign him access to the hotel cluster, and then from here, if I look in, I think hotel has some; we'll see how many policies.
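The binding being described might be sketched like this (the role-name pattern and the user "scott" come from the demo; treat the exact role name as illustrative, since it is generated per cluster):

```yaml
# Hypothetical binding giving user "scott" access to the dynamically
# created cluster role for the "hotel" managed cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scott-manages-hotel
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: scott
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: open-cluster-management:managedcluster:hotel   # generated per cluster
```

A matching RoleBinding in the hotel project would then let the user work with the placement rules and policies delivered into that cluster's namespace.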
E
So these are the policies that were dynamically matched, based on a placement rule, for the hotel cluster, and were delivered into this particular project, the hotel project. Scott would be able to interact with the ManagedCluster API object, view and list the details on it, see its health; but he'd also be able to look at and modify content here in the namespace, which would let him define a placement rule and deliver content to hotel or to india, but no other clusters, because he doesn't have access to those other clusters.
E
Absolutely, so maybe let's talk about that. Here I'm just going to pull up the policy that was in the hotel namespace, which came up here a moment ago. This is a policy delivered to the hotel cluster, and it is defining an auditor-role config policy. Actually, is it auditing or defining? Let me just double-check.
E
This one is only auditing. So policies can either enforce, meaning that they create the thing that's under management, or they can simply audit: has it been modified from what I expect? You might use audited policies as you're building up your library, when you want to ensure a policy is doing the right thing before you enforce it.
E
But if I wanted to automatically create a particular role or a particular role binding, then I could switch that flag from remediationAction: inform to remediationAction: enforce, and then I could define a policy that had a number of things. So for my Scott user example, I can create a policy that assigns Scott perhaps the edit role in a number of projects, and then the role binding content.
E
I would probably lump those together, and then, in addition to defining the user, which I could do through the identity provider policy, I would have defined the role or the cluster role, defined the role binding or cluster role binding, and maybe have another policy that defines the project, along with configuration like network policy and limit ranges. All of that would be bundled up and give me a way to onboard Scott with a very specific configuration to a target cluster, even if Scott didn't have access to log into the hub, and even if he didn't have any of the other cluster role bindings.
B
And the cool part, just to add on (and Michael, thank you): this is a desired-state model. So what you've defined in terms of this profile for my cluster, it is going to work toward that eventual state, but you don't have to go in and add anything else, right? You've already defined, in effect, the profile of what that cluster should look like and feel like and what it can do, and another cluster that comes on board, or another 10 or 15, isn't going to make you hemorrhage on a Friday afternoon going off to configure stuff.
E
That's a critical point, right? This dynamic placement engine allows us to create placement rules that are very declarative, that again match by label. Regardless of when the cluster is created or imported, as soon as the labels are applied, the dynamic placement engine will match them and then roll out those changes. If we want, we can look at that example for the Container Security Operator.
E
And let me see what operators it has installed. Perfect. So the hotel cluster: I launched the console for hotel, and I don't have any operators other than the default package server here. So what I'm going to do is edit a label. I'm sorry for jumping around; give me just a moment here to get to a point where I can show you what I want to show you, and I'm going to try to catch it in the act of doing the work, so over here on the right.
E
Thank you for calling that out. Let me just show you the screen.
E
I edit the label, click Add, and click Done, and I'm going to try to get over here fast enough. Ah, it detected hotel is not compliant. Within a few moments you'll actually see the installed Container Security Operator populate in, so you can see it going through and triggering the installation of the Container Security Operator on the hotel cluster, and then once it completes that aspect...
E
The policy is twofold: it both requires the operator to be deployed and requires that there not be any of these ImageManifestVuln types. And so the policy has a musthave, but also (let me scroll down here to the bottom; this is a longer one) it has this compliance type mustnothave, right? So it must not have an instance of this API kind, and if it finds one, then what it knows is that the Container Security Operator found a vulnerability in the cluster.
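The two halves of that policy might look roughly like this (an illustrative fragment of the object-templates inside such a ConfigurationPolicy; the operator and the ImageManifestVuln kind are real, the surrounding details are assumptions):

```yaml
object-templates:
  # musthave: an OLM Subscription installing the Container Security Operator
  - complianceType: musthave
    objectDefinition:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: container-security-operator
        namespace: openshift-operators
      spec:
        name: container-security-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
  # mustnothave: no ImageManifestVuln records may exist; if one does,
  # the operator found a CVE and the policy reports a violation
  - complianceType: mustnothave
    objectDefinition:
      apiVersion: secscan.quay.redhat.com/v1alpha1
      kind: ImageManifestVuln
```

The musthave half converges the cluster toward the desired configuration; the mustnothave half keeps reporting non-compliance as long as vulnerability records remain.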
E
This gets surfaced up on the overview page. So, in fact, if I come back to the overview, I can see there's one image manifest vulnerability, and in fact now I'll see the second violation is already up. So within 35 to 40 seconds, whatever that window was, I simply updated a label on a cluster, had the policy trigger, had it modify the cluster to bring it into alignment, and then it figured out there's actually still a remaining issue that needs to go and be remediated against that target cluster. That's right!
C
That was awesome; thank you, Michael, for showing that. So we have another question around being able to manage other Kubernetes distributions, like Rancher, for example, and they also would like to know about support for things like vSphere UPI installs with static IPs.
B
Let me try that one, Jimmy. So, you know, Rancher presents an interesting case. If you just bucket Rancher as a non-OCP cluster, and you're talking about Rancher Kubernetes Engine, by virtue of the fact that those are standard Kubernetes APIs, we could probably talk to it; we could probably manage it. Now, I don't really care to do that, because there are what, 64 CNCF-certified Kubernetes distributions out there in this world; it's not really incumbent on me to invest in that direction. All right.
B
Now, if I want to go down that list and pick a bunch more, we can do that, and I think what some people are finding is that our agent is going to work on those types. Does that mean it's under enterprise support from Red Hat? No. We haven't spent the time and the investment to do quality-assurance testing, to make sure our QE team has the time to vet it, because I don't want to put that in front of an enterprise if it's not fully baked.
A
Yeah, so it depends on how you look at it. Some of them are the hosted ones, and some of them are the installers and such, but if you go to the CNCF landscape right now, there were over 103 at the last count. And it really is very much a community effort to do some of that testing on those. So that's a community
D
question. So pretty much where we stand right now (and this touches a little bit on the question above, that was asked to Scott, about a roadmap): we have the VMware IPI, which we touched on, coming in the fall release. If you have UPI deploys, user-provisioned installs of OpenShift, you definitely can import them, and Michael had shown previously sort of the steps to do that. It's pretty much:
D
you give it a name in OpenShift, it gives you back a big long command, and as long as you're logged into that OpenShift cluster, you can cut and paste it and it'll import. And I guess one of the pieces we didn't mention (and maybe, Michael, if you can bring the cluster list page up, just because you had a number of clusters to upgrade, whereas I think my demo only had one): there were two things I wanted to touch on here. One is, once you've imported it,
D
you'd obviously be able to upgrade it. We don't have to provision the OpenShift cluster to be able to upgrade it; as long as we're managing it, the upgrade becomes available. And the other piece we didn't touch on is that you can do bulk upgrades from the console as well, which is another key point: if you have a number of these, you can select them.
B
From the cluster viewpoint, when people create a cluster: Michael, if you drive into the create-cluster flow, the icons that are in the create flow are going to continue to grow. Today you're seeing three public clouds up there. We actually do support bare metal in tech preview (not available in this environment, but it's available in tech preview right now), vSphere is coming in the fall, and we're going to look to add additional infrastructure providers down the road. Anywhere OpenShift is available today, we're looking to target that same OpenShift-everywhere mantra.
C
Awesome, yeah. I think this is a perfect segue to go into and talk a little bit about applications and application deployment within the cluster. We talked a little bit about creating clusters, and we talked about securing your clusters, right, being able to have that policy and governance. So maybe, Josh, would you mind showing us a little bit of the application world?
D
Absolutely. So hopefully you see my screen, with the different channels, and my other page. In the application space, our application system is pretty much the continuous delivery portion that you would find in CI/CD. Just to roughly touch on these before we take a look at a system:
D
You've got Git types, so you can have your deployment and service definitions, or Kustomize YAML (the Kustomize feature), defined in a Git repository, and you can have an application with a subscription for one or more of those repos and/or directories within them. We also support Helm releases, so you can subscribe to a Helm repository and pull in a Helm release based on versions, or on a parameterized version range.
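As a rough sketch of the Git channel and subscription Josh describes (the names, namespace, repo URL, and annotation keys below are illustrative, not taken from the demo; the exact annotation names have varied across ACM releases):

```yaml
# Illustrative only: a channel pointing at a Git repository of manifests...
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-git-channel
  namespace: gitops-channels
spec:
  type: Git
  pathname: https://github.com/example/app-manifests.git   # hypothetical repo
---
# ...and a subscription that pulls one directory of that repo onto matched clusters.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-app-sub
  namespace: sample-app
  annotations:
    apps.open-cluster-management.io/git-branch: main        # branch to watch
    apps.open-cluster-management.io/git-path: deployments   # subdirectory to deploy
spec:
  channel: gitops-channels/sample-git-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: sample-app-placement
```

The placement reference at the bottom is what ties the subscription to a set of target clusters.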
D
So, for example, greater than version 2.0, or "I want to keep my production less than 2.0 and greater than 1.0." We have object storage as well, which can be a third-party provider or a MinIO instance running on your OpenShift cluster. And then the term "deployable" sometimes comes up as well, but it's pretty much a Kubernetes resource template.
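The version filtering Josh mentions (keep production above 1.0 but below 2.0) is essentially a semver range check against the available chart releases. A minimal sketch of that selection logic in plain Python; this is not ACM code, just an illustration of what the subscription operator does internally against a Helm repo index:

```python
def parse(version: str) -> tuple:
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, lower: str, upper: str) -> bool:
    """True when lower < version < upper, e.g. the '>1.0, <2.0' production rule."""
    return parse(lower) < parse(version) < parse(upper)

# Pick the newest chart release that satisfies the range.
available = ["0.9.0", "1.2.3", "1.4.0", "2.1.0"]
eligible = [v for v in available if in_range(v, "1.0.0", "2.0.0")]
newest = max(eligible, key=parse)
print(newest)  # -> 1.4.0
```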
D
So I can create a template for a Deployment resource or a ConfigMap that I want to transport to all of my managed clusters. This is a very simplified version, or flow I should say, of how this works: you've got the hub here on the left, and that is where you define the initial subscription; then, using the placement that Michael mentioned before, it defines a set of targets. For the matching there, there are actually a number of different capabilities.
D
Yes, so there are a couple of different ones here. There are conditional types; this condition is an example of "I want placement to only match clusters when they're online." This is usually the default that folks use, but there can be a circumstance where you want it to at least attempt to apply even if the cluster isn't online, so you can see that there's a failure and your app isn't in a location where maybe it should be.
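A sketch of what the "only match when online" condition looks like in a PlacementRule. The names are illustrative, and the exact condition type string has varied across releases, so treat this as a shape rather than a copy-paste recipe:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: online-clusters-only
  namespace: sample-app
spec:
  # Only place onto clusters the hub currently reports as available.
  clusterConditions:
    - type: ManagedClusterConditionAvailable
      status: "True"
  clusterSelector:
    matchLabels:
      environment: dev   # hypothetical label
```

Dropping the clusterConditions block is the "attempt to apply even if offline" variant Josh describes.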
D
We keep talking about the labels, which are the most powerful, because any cluster I deploy that comes up with that label will match the placement rule and therefore get the app or the policy; we share the same placement rule technology in both. And then we have the more traditional cluster list, in which you can specifically list an array of cluster names that you want the system to deploy the app to. This also works under policy, and whatever is defined in that list are the only clusters that are going to get this application on them. And so, when we look at the flow here: you define the subscription on the left, and it flows down to the managed clusters.
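Label-based placement boils down to checking that every key/value pair in the selector appears in a cluster's labels. A toy Python sketch of that matching (illustrative only, not the hub's actual implementation; the cluster names and labels are made up):

```python
def matches(selector: dict, labels: dict) -> bool:
    """A cluster matches when every key/value in the selector appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

clusters = {
    "dev-east":  {"environment": "dev",  "region": "us-east"},
    "dev-west":  {"environment": "dev",  "region": "us-west"},
    "prod-east": {"environment": "prod", "region": "us-east"},
}

placement = {"environment": "dev"}
targets = sorted(name for name, labels in clusters.items() if matches(placement, labels))
print(targets)  # -> ['dev-east', 'dev-west']
```

This is why labels are the most powerful option in the talk: any new cluster registered later with `environment: dev` joins the target set automatically, with no change to the placement rule.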
D
You end up with an application object that's listed at the top, and here we're doing some NGINX pieces. You can click into that, and what you get is the topology that we saw before. Hopefully it's not too small but, as Michael had pointed out before, that application object ties together all the pieces that make up the application you're going to be using. In our case, we have two different subscriptions.
D
In this case it's targeted to just a single cluster, with the Deployment and the ReplicaSet that comes with it, as well as a Service object. This type is a Git subscription, and we have a little bit of information about it here. When you click on the properties, you can view the actual YAML and get the search output that we saw before, but here we see it's pointing to the GitOps demo repo, as well as some of the information about it.
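The application object that ties the subscriptions together is the Kubernetes SIG-Apps Application resource, which groups its components by label selector. A hedged sketch (names and labels are illustrative):

```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: nginx-blue
  namespace: sample-app
spec:
  componentKinds:
    - group: apps.open-cluster-management.io
      kind: Subscription
  # The topology view gathers every subscription carrying this label.
  selector:
    matchLabels:
      app: nginx-blue
```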
D
So maybe what we'll do is show this a little more in action. There was another question that's come up. Jimmy, were there any other questions while I was sharing?
D
All right, we'll keep rolling then. It can be difficult to navigate to these, but what we're going to do is take advantage of the fact that this is my cluster here. I'm on the clusters page, and I want to go to that cluster so I can show you what's going on as this happens. So I was in the cluster info, and I clicked the link to get to my cluster console. This is my remote jnp system, deployed in AWS, and I'm going to go to Workloads; I want to take a look here.
D
I see I'm in the nginx-blue app, which is the same NGINX blue that I was looking at over here, and I see a number of my apps. This app has a subscription that points to a channel, and that channel is Git (GitOps). Here we see NGINX 1.12.2 is deployed, and when we go back to the system, if we take one of these pods and look at the events, we can see the image here is 1.12.2. So we're going to go back and watch the pods.
D
I can commit it, possibly straight to my branch, but with branch controls I may not be able to commit to the specific branch that the subscription is watching, so you can have approvals required to merge it in first, etc. In this case, this is part of a blue-green type of demo, so I just made a modification to the blue: I changed it to 1.14.2.
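The Git change behind this step is just an image-tag edit in the deployment manifest checked into the watched branch. Something like the following (names, replica count, and file layout are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-blue
  template:
    metadata:
      labels:
        app: nginx-blue
    spec:
      containers:
        - name: nginx
          # was: nginx:1.12.2 -- committing this one-line change triggers the rollout
          image: nginx:1.14.2
```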
D
We can see here we've already got the old pods terminating and the new pods coming up. If we click on one of these new pods and look at the events, we'll see that now we've got the 1.14.2 that we were looking for. So that's the GitOps change being pulled down. There are two ways it works:
D
The subscription either does a polling monitor of the Git repository, depending on what your firewalls allow, or you can have a webhook push: when there's a change in Git, it signals the subscription that there's a change, and that will happen pretty close to instantaneously with the commit. And so, for the next piece we can do, we go out...
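For the webhook path, the Git provider is pointed at the subscription operator's listener endpoint, and the subscription opts in via an annotation. The annotation key below is modeled on the ACM documentation of the era, but treat it as an assumption rather than a guaranteed API:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-app-sub
  namespace: sample-app
  annotations:
    # Skip interval polling; resync only when the Git webhook fires.
    apps.open-cluster-management.io/webhook-enabled: "true"
spec:
  channel: gitops-channels/sample-git-channel
```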
C
None coming up right now. That was a great overview, thank you very much.
C
Changing over here. So, yes, excellent. Hopefully that gives an idea of how applications work within ACM. So, with five minutes left, I wanted to talk a little bit and put you on the spot, Jeff: I wanted to see if we could talk about ACM and how ACM is going to integrate into the broader Red Hat portfolio.
F
That's a great question, Jimmy, and I really appreciate the time from everyone on the panel, including our guests today. We're really excited; as Scott probably led with, the GA was at the end of July, and we'll have our new release coming out later in this fall/winter time frame. We typically stay on a quarterly release cycle, so you can expect that to come in around the three-month mark. But from a portfolio integration perspective:
F
We have a lot of things going on. When you think of ACM, there are primarily three different pillars of functionality that we focus on, and a fourth that surrounds them all in observability. We saw a lot of demonstration on cluster lifecycle and on policy and governance management, and Josh just rounded it out with application lifecycle, and in each one of those areas there are opportunities for integration.
F
We already see that in our tight integration with the Hive open source community in cluster lifecycle. In the policy and governance area, we're working with the Compliance Operator team, bringing in some of their work as well, and we have a lot of opportunities to work with Insights and telemetry and pull in that part of the portfolio in the upcoming months.
F
But one of the things that we're really excited about is that, as you know, we have a world-class Kubernetes platform with OpenShift, and we have the de facto standard in automation with Ansible Automation Platform, and bringing those two together in the context of those three different use cases is something we're really excited about: from a cluster lifecycle perspective, an application lifecycle perspective, and a policy, risk, and governance perspective.
F
We're going to start off looking at some of the things in application lifecycle, as Josh just showed, deploying applications at scale. What we hear from our customers is: "I want these clusters to be stood up the same way, I want to be able to upgrade them easily, I want them to have the same configuration in a lot of cases, and I want the same applications deployed." But there's a broader world out there.
F
Outside the Kubernetes domain, that's where Ansible Automation Platform comes in. With ACM, we really focus from the concrete to the heavens in terms of Kubernetes management and being able to do automation and enforcement; Ansible brings that into the mix, working better with OCP. Now imagine a scenario where, as Josh just demonstrated, an application is being deployed to new clusters and there's a load balancer that needs to be updated.
F
There's information that needs to be updated in a CMDB. So we're really opening up these hooks, I'll call them, in each one of these lifecycles, so that customers can integrate any of the collections or workflows or anything else that you want to invoke from an Ansible Automation Platform perspective, and that's really the art of the possible. We obviously have some key use cases that we've worked on with our customers and honed in on, like I said, the application example: I've deployed an application.
F
Now I need to update the firewall, or update the CMDB, or maybe adjust some things on the external storage front. From a policy and remediation perspective, you can imagine collecting logs, doing quarantining activities, and other things like that; those are our key use cases there. And in cluster lifecycle, when we get to opening up those hooks:
F
Pre-provisioning a cluster, allocating storage; post-provisioning a cluster, updating the CMDB, updating load balancers, making those available to the scale of developers that are out there. Those are some of the use cases that we're thinking about in the immediate term for integration with Ansible Automation Platform. So we'll see some things coming up at AnsibleFest; we'll be talking about that and doing demonstrations, and then we'll progress down the road.
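None of this integration had shipped at the time of the session, so purely as a sketch of the kind of hook Jeff describes: a post-provision step might launch an Ansible Tower/AWX job template to update a load balancer or CMDB. The resource kind and fields below are modeled on the Ansible Tower resource operator and are assumptions, not a released ACM API:

```yaml
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: post-provision-update-cmdb
  namespace: sample-app
spec:
  tower_auth_secret: tower-credentials      # secret holding the Tower/AWX endpoint and token
  job_template_name: update-cmdb-and-lb     # hypothetical job template on the Tower side
  extra_vars:
    cluster_name: dev-east
    action: register
```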
F
So if you have a customer who is really interested in that type of integration, please reach out to anyone on this panel, and we can start to work through those use cases and make sure they're going to be addressed when they come to market.
C
Awesome, yeah, that's great. Thank you so much; I appreciate that, Jeff. All right, and with that, I guess, Diana, I'll let you close it up.
A
I would love to. Do you have in your slide deck, though, Scott, maybe one final slide with any resource links that you can share to end this with, so that people can find you, along with everything that you've put in the chat here?
A
All these wonderful links to all the different Red Hat documentation sites: we'll incorporate those and repost them, with the slides, up to the YouTube channel in the next day or so. And it would be great if you could put the slides on Speaker Deck; we also link to them on the events calendar for OpenShift Commons. So we'll expand those URLs that are linked in there as well.
A
The new virtual world is funny: sometimes you feel like it should be clickable, and sometimes it's just not. I feel like I should be able to touch my screen and click on anything that I see, but it's not quite there yet, so we'll make it happen. And really, guys, thank you very, very much for fielding all the questions today, and for coming and giving this talk.
A
I think it was really timely, and tomorrow is the OKD working group meeting, so I'm sure Walid will be on there, picking our brains about how to get this going, because he seems to have to demo it for an AWS meetup soon. So there's going to be a lot of collaboration going on, and it really is nice to see this, to see the open source side of it and all the work you guys are doing. So, again:
A
Thank you very much for coming; totally appreciate it. We'll look forward to seeing some of that stuff at AnsibleFest coming up soon. I think AnsibleFest is in October; is that right, Jim?
A
Yeah. So there'll definitely be lots of opportunities to check this out with the new releases and everything, so we'll have you back soon. Thanks again, guys.