Description
Red Hat Advanced Cluster Management is a user-deployed software operator running in an OpenShift project that is enriched by the inclusion of additional Red Hat cloud services. We continue to move to this hybrid model of SaaS/Software in order to provide an elevated user experience that benefits from deeper cloud based analytics and ensures seamless management capabilities across the fleet.
Twitch: https://red.ht/twitch
#RedHat #Kubernetes #Multicluster
Good morning, good afternoon, good evening, wherever you're tuning in from. Welcome to another Red Hat Advanced Cluster Management office hour. My name is Chris Short; I'm host of Red Hat live streaming. The title of today's show is Proactive Fleet Management with Red Hat Advanced Cluster Management. We're joined by our normal, like, massive group of Red Hatters assembled by the one and only Scott Berens. Scott, how are you today, sir?
C
Hey Chris, I'm great, thanks for having us. It sounded like you were saying "proactive freak management," and yes, we need some of that in ACM land, because we tend to take on a lot of things. I'm excited to talk with you today. We're going to hit on what ACM is doing in this space around hub management and fleet management: how do you get analytics from the cloud and enrich that space from your on-prem software? How do you start to bring all those goodies into one basket? So it is absolutely proactive.
C
Proactive fleet management, freak-free. I am sporting the Red Hat neurodiversity t-shirt, so I just want to do a big shout-out to the neurodiversity community here at Red Hat, and now I'm going to turn it over to Radek for his intro.
D
Hey folks, my name is Radek Vokál. I'm the product manager for Insights for OpenShift; we're going to cover in a bit what it's all about. I'm based out of Brno, Czech Republic, and I've been at Red Hat for more than 15 years now, and I have Randy here with me to help out with what we're going to be talking about.
E
Thanks, Radek. Randy George here. I'm also in Austin, Texas, with Scott, and we're working closely on bringing in the analytics that Radek's product team has been producing. I'm all excited. I mean, I love the analytics, but I love proactive management even better, so I can't wait to get on with this so we can show you how to do that. I'll turn it over to Zach, who is one of the guys who's been implementing this, and he'll help show us a little demo.
F
Hey everyone, I'm Zach Lane. I'm based in Raleigh, and I've been a UI developer for the RHACM product since its inception back at IBM. I'll turn it over to the other Zach.
H
Hi, I'm Jacob. I'm a software developer at Red Hat, I've been there for about a year and a half now, and I'm based out of Virginia. I'll be going into a bit of cluster discovery and automatic import.
C
All told, I think we've got over 100 engineers now that are working on this multi-cluster challenge space. Upstream projects like Open Cluster Management, Submariner, and some of that stuff will get you there, but then you really need analytics, and you need to start crunching through the data. You need an experience that pulls together insights across the fleet, and that's why we started to team up with the product team and what that team was doing in the customer insights space.
D
And I'll start with my personal experience from a couple of minutes ago. It's getting late here for me, it's almost evening, and I needed a break before this presentation, so I was working on my bike. The shifting on my bike doesn't really work very well. I love mountain biking, so I thought I should actually look at it. My bike is a set of different components bolted together, connected with cables, connected with a chain, all together somehow working, and there's this shifting issue.
D
Well, this is my analogy to an OpenShift cluster. OpenShift 4 is a bunch of living components that work together, are connected together, and are interacting all the time, and every now and then they send us a signal, a signal that something is not behaving as it's expected, that something is going wrong. There's an alert coming from an operator, there's some behavior issue that we need to act on. Without the experience of where to start triaging the issue, you have to look in different places.
D
You
have
to
start
digging
into
one
component,
another
component,
trying
to
figure
out
what
the
hell
is
happening
on
my
cluster
and
with
insights,
we're
trying
to
bring
that
experience
of
someone
who's
been
running
openshift
for
almost
a
decade
right
now,
who's
been
working
on
these
components
day
by
day
and
who's
got
like
hundreds
over
hundreds
of
engineers
on
thousands
of
engineers,
actually
sorry
about
it,
working
on
openshift
and
different
components
and
many
people
and
support
having
that
experience
of
supporting
the
openshift
customers
and
people
in
sre
and
others
and
others.
D
So
what
my
team
is
doing
is
that
we're
looking
at
the
analytical
data
that
every
openshift
cluster
is
providing
us
by
default
when
you
install
openg4
by
default,
you
send
us
telemetric
data
and
as
of
openshift4.2,
we
also
have
insights
operator
that
is
providing
us
some
additional
pieces
of
information
that
we
look
at.
We
combine
these
information
together
and
we
come
up
with
different
things,
different
findings.
Based
on
that
information,
we
provide
it
to
different
teams.
Our
main
thing
is
that
we
look
at
these
data
from
an
engineering
perspective.
D
First
thing:
first,
we
want
to
fix
the
problem,
the
potential
problem
and
the
product,
so
we're
looking
at
fleet
issues
we're
looking
at
a
set
of
different
conditions
again.
Would
there
be
cluster
alerts,
some
again
behaviors
log
messages
and
whatnot?
We
combine
them
together
and
we
present
them
back
to
the
engineering
team.
We
work
with
engineers
on
specific
components
and
we
tell
them
hey
this.
This
thing
is
actually
not
behaving
as
it
should.
It's
it's
a
performance
issue.
D
It
has
an
upgrade
issues,
has
some
other
sort
of
an
issue
and
we
need
to
look
at
how
we
can
potentially
fix
the
problem.
Well,
not
only
we
do
that,
we
look
at
our
internal
knowledge
base.
We
work
with
the
support
team
and
the
support
team
is
able
to
tell
us
hey
this.
This
issue
we
already
know
about
it.
D
Some
other
poor
customer
actually
shared
with
us
a
similar
problem
and
we
solved
that
problem
for
them
and
we
know,
what's
the
pro,
what's
the
root
cause,
we
know
how
to
fix
this
issue
or
we
can
look
at
our
internal
knowledge
base.
We
can
work
with
solution
architects
times.
Basically,
whoever
touched
openshift
is
able
to
contribute
back
to
us,
their
own
experience,
how
to
troubleshoot
a
specific
problem
and
how
to
prevent
this
issue
in
an
ideal
case.
D
So
for
our
support
engineers,
we
offer
them
a
different
set
of
tools
that
is
allowing
them
to
look
at
the
potential
root
cause
of
a
problem.
Recently,
we
realized
that,
for
more
than
80
percent
of
support
cases
that
has
been
open
against
openshift
4,
we
already
have
a
solution.
We
already
know
about
the
problem
because
someone
else
has
already
hit
it
and
we
can
instantly
provide
that
information
back
to
the
customer.
D
We
prevent
that
issue
with
open
chief
engineers.
We
tell
them
hey.
This
is
spiking.
This
is
somehow
suspicious
and
we
need
to
look
at
that,
and
the
second
thing
is
that
we
work
with
support
and
we
try
to
identify
solutions
for
problems
that
already
exist
or
problems
that
might
happen
to
the
customer
cluster,
and
the
last
thing
that
we
do
is
that
we
present
back
some
information
directly
to
our
customers.
D
We
tell
them
about
hey,
there's
and
already
there's
a
potential
issue
on
your
cluster
again
we're
trying
to
be
proactive
as
much
as
possible
we're
trying
to
prevent
the
issues
here,
so
we're
basically
telling
customers.
If
you
continue
on
this
path.
With
this
configuration
with
this
setup,
eventually,
you
might
have
a
problem.
You
might
be
having
a
degraded
cluster
performance.
You
might
be
hitting
this
issue
that
you're
not
able
to
upgrade
so
here's
a
solution
for
you.
This
is
the
insight's
goal,
is
to
provide
specific
solutions,
specific
steps,
how
to
prevent
that
issue
from
happening.
E
And
running,
I
think,
that's
one
of
the
keys.
You
mentioned
different
scenarios
where
we
know
about
this
problem.
We
can
give
you
a
fix
of
problem,
but
I
think
one
of
the
most
important
ones
of
value
I
see
is
we've.
We've
found
this
problem.
We
know
what
led
to
the
problem.
We
can
tell
you
here's
how
to
prevent
ever
having
that
problem
right
and
that
I
think
that's
okay,
because
that's
really
what
you
want
to
do.
You
don't
want.
Yes,
problems
are
going
to
happen.
E
That's
that's
the
ultimate
goal
right
and
by
having,
like
you
said,
thousands
and
thousands
of
date
clusters
from
a
thousand
thousands
of
clusters,
and
if
somebody
has
a
problem-
and
you
know
how
it
got
there,
you
can
prevent
all
these
other
folks
and
like
configuration
issues,
it's
not
just
a
code
problem,
it's
not
just
a
bug
in
the
code,
a
lot
of
the
things
that
your
insight
spends,
analyzes
configurations,
we
all
know
this
is
one
knob
is
on
the
other.
E
And, you know, your data scientists are always mining that data and testing new algorithms to come up with additional findings, in addition to those other sources that you have, right?
D
And
one
thing
that
we
realized
real
quick
when
we
tried
to
analyze
this
data.
So
it's
it's
exactly
like
you
said
randy
that
the
challenge
always
is
when
you
put
a
data
scientist
on
a
huge
set
of
data,
and
you
tell
them,
go
find
a
problem.
This
is
gonna
fail.
What
we
really
need
to
do
is
to
feed
it
with
our
own
experience.
So
again,
that's
why
we
engage
internal
teams,
support
engineers
and
others
to
tell
us,
hey
I've
seen
this
before.
I
know
what
is
happening
here.
D
Can
we
actually
estimate,
or
can
we
look
at
other
clusters
having
similar
issues
similar
symptoms,
similar
conditions
that
are
almost
meshing
this
one,
and
can
we
estimate
that
this
is
the
same
problem,
so
this
is
what
we
do
a
lot
internally,
that
we
tailor
these
views
on
the
data
for
specific
teams,
specific
use
cases
and
we
provide
them
this
knowledge
to
be
able
to
again
solve
some
problems,
but
mostly
prevent
problems.
Plan
feature
prioritize
features,
so
a
lot
of
the
times
not
only
we
look
at
is
that
going
to
prevent
an
issue.
D
But
again
it's
a
lot
about
your
the
behavior
and
your
experience
with
openshift
cluster.
Are
we
able
to
tweak
certain
things
because
we
see
that
there's,
probably
a
performance
bottleneck
or
there's
some
biz
configuration
that
customers
often
have
so?
Can
we
improve
our
documentation?
Can
we
improve
the
operator
responsible
for
this?
So
this
doesn't
already
happen.
Can
we
do
some
modern
enhancement
that
again
would
prevent
this
issue
in
the
in
the
product
itself.
D
That's the thing I love the most: we're able to tell the customer exact steps. We call them remediation steps: how to resolve the problem, or how to prevent the problem from potentially happening. What you will see in the demo later will be exactly this, one-line or two-line commands telling the user: go run this with oc as a cluster admin, go fix this configuration, go fix this problem, and you'll be back on track with a cluster behaving as expected.
C
Like
we're
acm
and
we're
an
on-prem
operator
we
deploy
out
of
operator
hub,
and
even
though
I'm
a
software
thing,
I
can
still
tap
into
your
data
scientist.
I
can
tap
into
all
of
the
analytics
of
the
fleet
to
enrich
my
experience
down
in
my
manage
fleet
with
an
aci
so
that
that's
the
real
fun
part
about
this
journey
is
working
with
your
teammatic
and
bringing
those
two
worlds
together.
Did
you
have
something
you
wanted
to
share
like
a
picture
or
any
kind
of
description
on
that?
Or
should
we
just
jump
into
the
demos.
D
Yeah, this is one side of it, but the thing is that we are not able to share all the insights of all our internal know-how, because of customer data and whatnot. But just to give you a sense of what our engineers are dealing with, these are the types of reports that they are looking at. They're looking at a specific cluster and what potential conditions, or what we call symptoms and diagnoses, are hitting it. There is one thing that I would recommend, Chris, that you had up here.
D
So if you want to know more details about all these charts and diagrams and trends that we're looking at, how they impact the OpenShift upgrade path, how we feed that data back to the Cincinnati service that basically tells users which versions they are safe to upgrade to, or even more of the different playbooks and views on the data, I recommend the talk from Yvonne and Yanzalani, because they went into much deeper detail.
E
You
bring
up
a
good
point
on
the
whole
upgrade
called
tree
right.
Acm
has
already
pulled
that
knowledge
in
right.
So
when
we're
going
to
upgrade
to
fleets,
we
know
what
is
the
next
safest
or
you
know,
version
that
you
can
upgrade
to
right
and
we
use
those
analytics
to
generate
that
upgrade
path
for
the
various
fleets
right.
So
this,
what
we're
going
to
show
today
is
just
additional
analytics
that
we
keep
tapping
into
and
to
provide
a
need
to
buy
better
fleet
management
right.
F
Yeah, absolutely, and thank you. I guess just to go into a bit of background on this, bringing in what Radek just talked about: out of the box now with RHACM, we essentially have a new service that runs in our back end.
E
Share your screen real quick, and while you're doing that, Zach: I know you're going to show the display, which is much easier in a demo, but you also can get alerts for all of these, right? So it's not like you have to sit there at the console; as soon as the insights come in, and the checking is very frequent, an alarm can be generated and sent to wherever you have your alerts integrated.
F
Absolutely. I guess I can talk about the alerts a little bit too. So out of the box, as Randy was mentioning, you can get alerted based on different rules that you have set up. For instance, our out-of-the-box alert, which is defined here in this Thanos Ruler default rules ConfigMap, will alert based on critical insights. This is just the default one that comes out of the box with our observability feature, and essentially we'll alert only on critical insights that come in from the CCX team.
C
So those alerts would come to me, and now that I've received the alert, I'm ready to jump into the console and understand more about it: I'm ready to solve it, or remediate it, or even understand what the proactive thing was that came to my attention. So drive me through that story now. I'm in the console and I'm ready to work on that remediation.
F
So
once
you
come
once
you
realize
you
have
an
insight
that
you
want
to
take
remediation
on
a
quick
way
to
sort
of
go
through.
The
flow
at
at
least
a
fleet
level
is
to
come
into
the
overview
page.
So,
on
the
overview
page,
we
have,
as
I
said,
a
fleet
level
statistics
of
certain
health
metrics
among
the
clusters
that
you
have
under
management.
So
in
this
instance,
in
this
environment,
we
have
two
managed
clusters,
a
little
bit
of
insights
on
them
and
for
the
insights.
F
We
have
this
new
card
that
shows
the
total
amount
of
clusters
that
have
issues
on
them,
along
with
the
sum
of
the
different
severity
insights
that
each
of
the
clusters
has.
So
it's
a
quick
way
to
identify
exactly
how
many
clusters
have
issues
and
then
further
go
along
with
that.
So,
if
say,
a
certain
severity
has
issues,
it
becomes
a
link,
and
this
launches
out
to
our
search
page
and
our
search
feature
for
those
not
aware.
Essentially,.
F
Absolutely. So, as you can see, we have two different PolicyReport resources, one for each of the clusters that we have under management: our local-cluster, being our hub cluster, and the mc-rmg, being a managed cluster. Both of them have the same violation, this Prometheus DB volume rule, and along with that you can see that they're both a moderate severity, because it's the same insight. With the search feature, say we were to have a policy report with more than one insight, or one that you were specifically looking for...
E
Zach, let me interrupt before you get there. Following your example: you got notified of a cluster having this problem, or a potential problem, and you came in here. What's nice is you get the view across the whole fleet to see what other clusters are having that similar problem, right? So you're not just doing cluster-by-cluster resolution, you get a fleet view, like you said. Right, absolutely.
F
Yeah
and
to
go
off
on
what
scott
was
talking
about
to
take
further
action,
we'll
take
the
manage
cluster
for
an
example.
Each
of
these
names
links
out
to
that
specific
clusters.
Details
page
so,
as
you
can
see
we're
in
the
clusters
page
in
its
overview
with
you,
have
a
few
different
statistics
on
that
cluster,
but
by
default,
navigating
from
the
search
page.
D
We try to figure out, if you hit this issue, what the level of impact on your cluster is going to be: again, it's around degraded performance, different behaviors, inability to upgrade, and things like this. If you look at these criticality levels, each one of them (I'm not sure exactly if we have that implemented already) has a pop-up describing the impact level. What we tell customers is that for those that are critical and important...
D
We
recommend
solving
them
immediately,
because
again,
these
might
in
a
very
short
timeframe,
cause
some
potential
harm
on
the
cluster.
These
there
are
moderate
and
low.
They
still
need
some
additional
consideration
on
your
level
for
the
low
ones,
for
example,
will
tell
customers.
This
is
probably
wrong,
but
you
might
have
a
good
reason
for
running
this.
I
like
that
right.
So
it's
really
how
we
estimate
the
impact.
F
So you can see here the same description, along with a little bit more information, and a remediation text that describes, at least in this instance, a link out to the documentation page on configuring persistent storage to solve this problem. In other instances it could be a longer set of steps that you would have to take to remediate a certain insight.
C
Nice. So in this little demo environment we just have a couple of moderate ones to show, but this gives you an example. The Prometheus one is kind of straightforward: you're telling me that something's misconfigured in this system, I don't have persistent storage, which means I'm going to lose all the metrics that I would probably want to see, you know, if I want to look at health trends or...
E
Yeah, and Scott, even for developers setting this up: I remember the first time I got this, I was like, yeah, I knew this, you didn't have to tell me. But you don't think about it, and the default setup in the cloud is to use ephemeral storage for Prometheus. Like I said, if I had run into something and wanted to go look at the metrics, and I had to restart some of the Prometheus pods, I would have lost all of my data.
E
I would have been, you know, stuck, right? So this just reminds me: hey, by the way, here's the default setup, and if you restart any pods you're going to lose it. Oh yeah, that's right, it reminds me to go set up persistent storage for Prometheus, versus waiting until I ran into the problem, lost the data, and then got reminded that the default was ephemeral. You don't want to get to that point. So a lot of these things are very informative and lower risk, but yet very useful. All right.
C
There might be a reason why you want to run like this, and that's okay; that's why this is a risk factor of two. It doesn't mean you have to drop everything and do it, but yeah, when that critical one comes in, it probably means you want to pay attention to it today; it's something you want to respond to.
C
Nice. So this is available in the current release, version 2.3.1, and we're going to continue to be able to use this type of feature, this type of information. One of the things...
F
Yep, absolutely. So in this 2.3, 2.3.1 release, we're using the PolicyReport from the policy working group to store the different insight information; we have normalized it into their resource. Using that means that in the future, which I think is 2.4, I believe, correct me if I'm wrong, we will also integrate the GRC policies into this policy report. So we will have multiple sources of insights coming in that will then be displayed for each cluster, as you can see here.
C
That's an upstream working group, a special interest group that we're participating in to help enrich this type of experience across all of the pillars within ACM. And on the policy report, I do believe you're right; I think that's coming in 2.4 with a notification story around policy reports and policy violations, and how we can start triggering those off into third-party...
E
...types of tools. We have the trigger in 2.3: it's just like you said, the PolicyReport is a standard API that the policy working group has defined and we've implemented. It will generate metrics and alerts, and for GRC policies we'll be creating those so they inherit the alerting that's there now, so that integration will be there. We do that already. Nice.
E
Yeah, one other thing I wanted to come back and look at: even though it's an oversimplified demo with one problem, and the same problem on two clusters, hopefully you don't have this issue, but if you have multiple problems on a cluster, as you see, the table is sortable, so you'll know how many criticals you have. We didn't talk about categories, but Insights will also, just like it tells you the severity and the risk of not following a recommendation...
E
...separate or categorize these issues as well, and sometimes it could be one or two categories. It's nice because you can see if it's a security issue, or something that will affect your availability, or affect performance, as Radek was saying, those other things. So you can also...
E
...you know, if you have a whole bunch that are the same importance, you might decide that, say, security is more important than config health or something like that, which it should be, and based on that you can go prioritize. It does sort in the table, so you can prioritize and resolve.
D
Go ahead, just real quick on top of that: right now Insights is very much looking at the core set of components and things that make up the cluster, but the future is that we want to go beyond that. We want to look at the workloads that are running on the cluster, and look at things like Kubernetes best practices for running a specific workload, give resource-optimization recommendations, and again look at how your workload is actually impacting the cluster behavior. So that is the future of Insights, and it's still evolving a lot. Yeah.
C
You're speaking my language. I want everybody to be talking about what really matters, which is the workload. I want to be able to mute out all of this other cluster stuff; at some point it should just be there and operational. But you hit on a point, which is kind of that understanding of a blast radius, right? If you're just talking to one cluster, okay, cool, you understand it, it's your bespoke cluster that you built, but we're talking about hundreds and thousands of clusters.
C
I
don't
I
don't
want
to
have
to
be
able
to
sit
in
front
of
a
machine
in
front
of
a
dashboard
and
have
to
cognate.
You
know
what's
going
on
with
this,
is
this
prod
or
is
this
dev?
I
shouldn't
have
to
be
asking
those
questions.
You're,
helping
me
understand
the
blast,
radius
and
you're.
Only
waking
me
up
when
it's
critical
when
I
actually
have
to
do
my
job.
I
love
that
I
love
enriching.
C
I have a thousand of these things that I need to bring under management, and another aspect that we're enriching in this space is: how do you communicate to the cloud, or to a list of known clusters, to be able to start to ingest those, import those, and take action on those from an ACM management perspective? So here's another example where we're reaching out to the cloud and figuring out: what do you have in the OpenShift domain of your...
C
...org, your organization, and how do you start to quickly import those and help me automate the management of that? So I'm going to kick it over to our disco team, the discovery gentlemen. I've got Zach and Jacob here, and you guys have been working on this problem statement, which is: okay, yeah, it's excellent to be able to manage one thing, but I need to manage a hundred things. So how do I go from zero to a hundred? How do I do that quickly?
G
I'll go ahead and take this one on, Scott. Can you all see my screen real quick?
G
All right, great. My name is Zach Kelley, and I'll be demoing cluster discovery from console.redhat.com today. Quickly, as Scott mentioned, discovery is a feature which allows the hub cluster to reach out to console.redhat.com to determine if there are any discovered clusters, ones which Red Hat OpenShift Cluster Manager would know about, that are available to be imported into Advanced Cluster Management. If there are any clusters discovered through OpenShift Cluster Manager, we'll provide a simplified mechanism for cluster import, and to showcase this I'll walk through the entire flow.
G
The
first
step
I
have
is
to
set
up
a
red
hat
openshift
cluster
manager,
credential
that
we
could
then
use
to
create
a
discovery
config
for
import,
so
I'll
go
ahead
and
come
to
the
credentials
over
here
and
click,
add
credential
I'm
going
to
go
ahead
and
create
a
credential
of
type
red
hat.
Openshift
cluster
manager.
C
It's, you know, a management system where I can connect to any of my hyperscalers, I can connect to my on-prem data center credentials, and I can also connect to automation and what we're calling "other," which is basically OCM, the OpenShift Cluster Manager. So I have the ability to make all these connections from ACM, and what you're highlighting is a cloud service, console.redhat.com, which is kind of an uber list of Red Hat OpenShift clusters that have phoned home or that have a subscription, right? They're tied back into the Red Hat mothership.
C
So
you're
able
to
illustrate
on
this
screen
exactly
my
point,
which
is
we
can
talk
across
cloud
local
cloud,
whatever
cloud
you
got
and
in
this
case
you're
going
to
be
speaking
specifically
to
console.redhat.com
to
get
some
information,
that's
really
cool
and
you're
doing
that
from
a
hub,
which
is,
I
think,
you
might
be
deployed
on
amazon.
But
let's
just
imagine
this
is
a
hub
deployed
anywhere.
E
And what you're saying, Scott, is that multiple people from the enterprise have deployed instances of OpenShift, or whatever, right? On-prem, in AWS, in Azure, wherever. Now it's time to get them under management, and we installed RHACM, and now, quickly, Zach's going to show us how I can gather and bring under management all of those previously deployed clusters, no matter where they are.
G
So now I'm going to enter the basic information for this credential. I'm going to call it zkrez, just so I know the name, and I'm going to put it in my namespace, my own personal namespace here, and now I'm going to enter my OpenShift Cluster Manager API token.
G
This can be retrieved from console.redhat.com, as covered in our documentation as well. Now that I've got that, I'm going to go ahead and create it and just click Add, and we can see here a tooltip saying that this credential was created successfully. We have an additional action here to create cluster discovery when we're ready, but quickly I'm going to navigate back to our discovered clusters tab, and then we can see over here...
G
It
says
we
currently
don't
have
any
discovered
clusters,
but
it
will
say
that
we
have
two
credentials
ready
to
begin
configuring,
a
discovery
config.
G
Exactly. The credentials are set up, but we still need to set up the actual DiscoveryConfig resource, which contains some filters that the user can configure. So, getting into it, you can see we have our discovery settings here: we're selecting the credential that we just created, which is in our namespace, and we have some filters to use to discover clusters. The first filter we have is a last-active filter, which is exactly what it sounds like.
G
It will only retrieve clusters which were active within the window, in this case clusters last active within the past seven days; anything older will not show up. We can also filter by version. In this dropdown you're able to select multiple versions, but if you do not select a version, all versions will be accepted by default.
G
So there are a few reasons. For the first one, you may only want to see OpenShift clusters at specific versions: you may only want to see 4.6 clusters or 4.7 clusters. Potentially you might not want to see or import all clusters, depending on your use case. And having a last-active filter is very helpful, just knowing when a cluster last phoned back home and reported data; if you see that a cluster hasn't phoned home in, say, seven days...
G
Absolutely, and it allows you to start to create a filter, and then perhaps you realize that the filter includes too much and you want to come back and scale it down, and I'll walk through that quickly too. Cool, so I'll go ahead and create a discovery config with a 30-day last-active filter, and you can see it was created successfully, and immediately...
G
It
begins
to
reconcile
and
I've
got
31
clusters
here,
but
that's
a
bit
much
because
now
I
want
to
import
a
cluster
and
I
just
want
to
tune
it
down
a
bit
more.
So
I
don't
see
everything
for
the
last
30
days
and
then
I'm
able
to
come
in
here
and
just
quickly
put
my
filter.
I
want
it
only
for
the
last
two
days.
I
can
save
that
and
I
can
see
that
this
list
quickly
becomes
trimmed
down
and
I've
only
seen
the
clusters
which
have
phoned
home
recently.
In
that
case,.
G
No
problem
so
now
that
I've
got
my
my
credentials
set
up
and
I've
got
my
discovery,
config
set
up,
and
you
can
see
here
that
clusters
are
beginning
beginning
to
be
discovered.
These
are
clusters
which
are
available
to
be
imported
directly
into
acm
and
I'll
begin
by
starting
to
import
one
of
these
directly
into
acm.
G
Perhaps this is a bit of our personal use case, but as developers and engineers here on ACM, we have cluster pools, which are basically exactly what they sound like: a pool of clusters which are automatically provisioned and hibernated so they don't use up too many resources. When, as an engineer, I decide that I want to use a cluster, to then import, or develop on, or essentially to check out, I'm able to check it out from my cluster pool, it unhibernates, and then I'm able to deploy to it and run operations on it.
G
So
if
all
the
clusters
would
show
up
in
my
cluster
fluid
or
that
are
in
my
cluster
would
be
in
this
list
and
then
I'm
able
to
select
from
them.
C
Are
coming
from
outside
of
the
hub
and
again,
this
is
the
beauty
of
this.
These
are
clusters
that
exist
out
in
the
ether,
so
to
speak,
but
you're
bringing
them
under
management,
and
you
know
that
they
come
from
a
cluster
pool
just
based
on
the
name
or
some
of
the
attributes
for
your
demo.
Here
got
it.
G
Yep, so I've got this cluster here, ready to be imported, so I'll go ahead and show how to do that. I'm going to click on this kebab menu over here and just click Import cluster, and the method I'm going to use to import it is an automatic import, and all I need to do is enter the kubeconfig for the cluster.
C
D
C
C
C
Management could be: as a development environment, a production environment. It could be an HR system; you know, the line of business that it might be serving, product security, or certain features like support. So this helps me define what type of work to throw at that cluster, or what type of config to manage on that cluster.
C
What kind of guardrails do I need for this particular system I'm bringing in? So as you're doing this, Zach, you're actually bringing it in with information and smarts that ACM is going to respond to, and as it comes under management and takes its first breath of air on the API, it's going to say: hey, do I have work to do? Oh, I'm a production server that needs to run this Pacman application. Okay, go!
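That "do I have work to do?" handoff is label-driven. As a hypothetical sketch of the idea in ACM terms: the managed cluster carries environment labels, and a Placement selects clusters by those labels so the right work lands on them. Every name, namespace, and label key below is illustrative, not from the demo:

```python
# A ManagedCluster labeled as a production system (labels are made up).
managed_cluster = {
    "apiVersion": "cluster.open-cluster-management.io/v1",
    "kind": "ManagedCluster",
    "metadata": {
        "name": "prod-cluster-1",
        "labels": {"environment": "production", "line-of-business": "support"},
    },
    "spec": {"hubAcceptsClient": True},
}

# A Placement that targets production clusters, so an application (say,
# the Pacman demo app) is deployed only where environment=production.
placement = {
    "apiVersion": "cluster.open-cluster-management.io/v1beta1",
    "kind": "Placement",
    "metadata": {"name": "prod-placement", "namespace": "pacman-app"},
    "spec": {
        "predicates": [{
            "requiredClusterSelector": {
                "labelSelector": {"matchLabels": {"environment": "production"}}
            }
        }]
    },
}
```

The point is only the relationship: as soon as the newly imported cluster shows up with matching labels, the placement's selector picks it up and the hub pushes the work down.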
G
G
C
G
No problem. So now that I've got my kubeconfig entered here, I'm just going to go ahead and click import. You can see the import saved, and I'm brought to the managed cluster view here. Going back to the managed cluster, it's going to say pending import while it begins to chug through the import process, and it'll take a few minutes for the import to succeed completely. But in the meantime, I'm going to go ahead and hand it over to Jacob, who's going to walk through setting up a discovery config on his account.
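The automatic-import flow Zach just clicked through corresponds roughly to two resources on the hub: a ManagedCluster, and a secret named `auto-import-secret` in that cluster's namespace holding the kubeconfig. The sketch below follows the documented ACM flow, but the cluster name is a placeholder and the exact keys should be verified against your ACM release:

```python
# Hypothetical sketch of the hub-side resources behind "automatic import".
cluster_name = "imported-cluster"  # placeholder name

managed_cluster = {
    "apiVersion": "cluster.open-cluster-management.io/v1",
    "kind": "ManagedCluster",
    "metadata": {"name": cluster_name},
    "spec": {"hubAcceptsClient": True},
}

# The kubeconfig entered in the console ends up in this secret; ACM uses it
# to reach the target cluster and install the agent.
auto_import_secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "auto-import-secret", "namespace": cluster_name},
    "stringData": {
        "autoImportRetry": "5",
        "kubeconfig": "<contents of the managed cluster's kubeconfig>",
    },
}
```

The secret lives in a namespace named after the cluster, which is why the "pending import" state resolves on its own once the hub can reach the target with that kubeconfig.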
H
H
So if I check out Credentials, you might have seen it earlier, but I already have my credentials set up in their own namespace. If I wanted to get that, I would go over to console.redhat.com.
H
OpenShift token, and from there I can get my API token. So it's pretty quick and easy to get that, but I already have it set up, so I'll go ahead and create a cluster discovery using that credential. This credential has access to a number of clusters, so I'm going to restrict it to just one day, and I'll just go ahead and say the 4.7 OpenShift version. That could be OpenShift 4.7 with any sort of z-stream.
H
Beyond that. So I'll go ahead and create that, and what I'll see is it starts churning out a number of clusters in that namespace. These DiscoveryConfig objects operate as one discovery config per namespace, and that's kind of how we manage RBAC here. So if I have some RBAC set up and have users that are restricted in the namespaces that they can see, then they'll only see the discovered clusters in those namespaces.
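The per-namespace setup Jacob describes, an API-token credential, a DiscoveryConfig with last-active and version filters, and RBAC scoping who can see the results, might be sketched as the following three manifests. All names and namespaces are placeholders, and the exact API group/version of DiscoveryConfig can vary by release; check the ACM discovery documentation for the authoritative schema:

```python
# 1. Credential: a Secret holding the console.redhat.com API token.
#    The `ocmAPIToken` key follows the ACM discovery docs.
ocm_credential = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "ocm-api-token", "namespace": "team-a"},
    "stringData": {"ocmAPIToken": "<token from console.redhat.com>"},
}

# 2. DiscoveryConfig: one per namespace; restrict discovery to clusters
#    active within the last day and running any OpenShift 4.7 z-stream.
discovery_config = {
    "apiVersion": "discovery.open-cluster-management.io/v1",
    "kind": "DiscoveryConfig",
    "metadata": {"name": "discovery", "namespace": "team-a"},
    "spec": {
        "credential": ocm_credential["metadata"]["name"],
        "filters": {"lastActive": 1, "openShiftVersions": ["4.7"]},
    },
}

# 3. RBAC: users bound only in this namespace see only its discovered clusters.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "discovered-clusters-viewer", "namespace": "team-a"},
    "rules": [{
        "apiGroups": ["discovery.open-cluster-management.io"],
        "resources": ["discoveredclusters"],
        "verbs": ["get", "list", "watch"],
    }],
}
```

Because the DiscoveredCluster objects are created in the DiscoveryConfig's namespace, standard namespace-scoped roles like the one above are all that's needed to partition visibility between teams.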
H
C
You have a bit of an uber privilege there, Jake. Do you mind zooming in just a little bit? I want to see all the little metadata details.
A
H
Yeah, yeah, and so there's a whole number of clusters that came from there, all under 4.7, or at least the ones that I discovered, and it shows the infrastructure provider and everything else.
H
E
G
C
H
Yeah, pretty much any of these columns is customizable to search through, like by infrastructure provider. There's going to be a whole lot of AWS; it's almost all AWS, because I'm filtering by devo 1. If I clear that: yeah, VMware vSphere, Red Hat OpenStack, and then even bare metal. All of the above, essentially.
C
Nice. I know I'm beating a dead horse, but just proving the point that we've got a wide view on this ability to discover and bring things under management, and I don't care what infrastructure it is; we can take it home to ACM. All right, so you were looking for devo 1 when I took you off the rails.
H
Oh yeah, I'm just showing the kind of clusters that I would be interested in, demoing the search feature. And I can go ahead and configure my settings, go back to jdg, and from there just delete that, and then within a few seconds it'll tear down just those clusters, and only Zach's clusters will remain.
H
It looks like the cluster we were importing has finished, and that's f 8k. And so what happens is, once it becomes a managed cluster, it's no longer eligible to be imported, so it gets removed from the discovered clusters tab. So in the discovered clusters tab, you only see those that are viable to be managed.
H
All right, that's the book of one; I wouldn't need to show more.
C
So yeah, this is cool. So Zach set up that import, we saw it was saying pending import, and now it's done its job. What we know is that on the back end, it's actually deploying an agent, a klusterlet agent, with a handful of add-ons, and you can show me those in the add-ons tab. That's what gives me the ability to manage and control this managed cluster from ACM, and those are all reporting back as available.
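Each entry in that add-ons tab is backed by a ManagedClusterAddOn resource whose status carries an `Available` condition; "all reporting back as available" is essentially a summary of those conditions. A minimal sketch of that check, with a made-up sample object (the add-on name, namespace, and API version are assumptions):

```python
def addon_available(addon):
    """Return True if a ManagedClusterAddOn-shaped dict reports Available=True."""
    for cond in addon.get("status", {}).get("conditions", []):
        if cond.get("type") == "Available":
            return cond.get("status") == "True"
    return False  # no Available condition yet, e.g. still installing

# Illustrative sample resource; the namespace matches the managed cluster's name.
sample = {
    "apiVersion": "addon.open-cluster-management.io/v1alpha1",
    "kind": "ManagedClusterAddOn",
    "metadata": {"name": "work-manager", "namespace": "imported-cluster"},
    "status": {"conditions": [{"type": "Available", "status": "True"}]},
}
```

An add-on with no conditions yet, say one still being deployed by the klusterlet, would simply not count as available until its agent reports in.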
C
There's no potential issues found; that fourth box over there is telling me that there are no insights from Roddick's team, which means he's already solved them all. They already proactively took care of those that were firing; they handled all the critical alerts, and I didn't have to get out of bed to do that.
C
But this is the same environment where we demonstrated the other two alerts that were firing; I think those were considered moderate alerts before. Awesome. Hey, great demos, Zach and Jacob, thank you, and thanks to Zach Lane for your demo as well. Did any questions pop up, Chris? I wasn't really paying attention.
B
No, no questions popped up. I dropped some links, but nothing to be answered here.
C
All right, are there any other questions on your mind, Chris, or anything you want to throw at us? Any curveballs?
B
No, I thought we had a question from OpenShift admin hours, but I don't think so. But it might be cool to do an OpenShift admin hour with you all.
B
C
B
A
B
C
B
We'll see you again. Yes, sir, should be the fifth.