From YouTube: KubeCon EU: Cluster Management Office Hour
Description
How do you manage your clusters, particularly in hybrid cloud or multicluster environments? The leads of the Open Cluster Management project have some ideas; join us to discuss with them.
A
Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to a special KubeCon EU office hours. This office hour we are going to be talking about cluster management, or more specifically, Open Cluster Management, joined by my usual team of ACM folks and OCM folks, and I'm very happy that Josh Burke, who has organized this, is here. So Josh, please take it away.
B
Thank
you
chris
yeah,
I
so
this
morning
or
afternoon
or
evening,
depending
where
you're
located,
we
have
three
of
the
team
from
the
open
cluster
management
project.
We've
got
another
josh
here,
just
to
add
to
your
confusion,
we've
got
scott
and
we
have
michael.
C
Absolutely. So my name is Joshua Packer; I'm one of the architects that work on Advanced Cluster Management for Kubernetes. The pillars, or areas of expertise I guess you would say, or at least a focus for myself, are both in the application space, so that's GitOps, Helm, and object storage today as delivery mechanisms, as well as cluster lifecycles: so rolling out your cloud-provided OpenShift or OKD, etc., as well as importing those clusters. So, expanding your fleet.
D
And
scott:
hey
yeah,
I'm
a
product
manager,
so
I
take
all
the
goodness
that
michael
and
josh
and
the
team
are
building
and
look
at
the
opportunities
and
upstream
try
to
be
strategic
about.
You
know
this
project
and
that
project
and
how
do
we
build
that
into
a
seamless
delivery
with
a
red
hat
brand
on
it?
So
I
have
a
product
manager
based
in
austin
texas
and
I'm
really
excited
about
josh's
sweater,
and
I
want
to
hear
more
about
that
during
the
call
today,
because
that
is
dope.
D
It is kind of a boring goal, but it's like, if our job was to take that problem space and just make it easy. You know, that's what we wake up and get paid to do: take the multi-cluster management challenge and make it easy, and allow you to get on to the bigger and better things in your life. So I just like the way you said that, Michael; it resonates with me.
B
So
this
isn't
office
hours
and
so
for
people
who
are
new
to
this,
let
me
explain
how
it
works.
The
folks
here
are
here
to
answer
questions.
We
have
a
few
things
to
talk
about
that.
We
will
go
through
here,
but
you
are
encouraged
to
interrupt
at
any
time
by
asking
questions
in
chat
and
we
will
pick
up
your
questions
and
ask
them
the
panel.
B
So
you
can
hear
the
answers
out
loud.
You
can
also
for
the
special
coupon
edition.
You
can
also
ask
questions
in
the
six
kubecon
red
hat
channel
on
cncf
slack,
because
that
is
the
the
chat
attached
to
the
conference
and
I
will
see
the
questions
there
and
pick
them
up
for
the
session
the.
So
with
that.
B
I
have
a
few
questions,
but,
but
maybe
you
all
wanted
to
actually
get
started
talking
a
little
bit
about
where
cluster
management
is
today
about.
Where
ocm
is
today.
E
As part of OpenShift Commons as well, so you may have already seen this picture, but it at least gives us an idea of what Open Cluster Management is really about. This is a project that we are focused on growing as an upstream community project. We've got a couple of other vendors beyond Red Hat that are participating in it now, and really the focus is, as we have seen, the growth of Kubernetes as a powerful and de facto standard for developing workloads on cloud.
E
There
is
this
sort
of
net
new
generation
of
challenge,
which
is
okay.
Now
I've
got
clusters,
maybe
running
kubernetes
is
a
is
a
great
way
to
normalize.
How
I
think
about
the
environment
openshift
provides
me
a
great
distribution
of
kubernetes
to
use
that
in
a
supported
way.
But
how
do
I
start
to
blend
and
understand
multiple
clusters
that
might
be
running
on
multiple
clouds
and
so
open?
Cluster
management,
as
its
name
implies,
is
really
about
making
it
easy
to
drive
that
type
of
provisioning
behavior.
So
we
can
provision
an
openshift
cluster
across
different
clouds.
E
We
bring
together
a
governance
and
compliance
framework
that
is
native
or
invented
within
open
cluster
management,
but
also
reach
out
and
integrate
with
compliance
frameworks
like
open
policy
agent,
making
it
easy
to
distribute
the
gatekeeper
admission
controller,
making
it
easy
to
distribute
and
enforce
rego
policies
across
a
fleet.
There's
also
other
partner
integrations
around
falco,
so
not
just
having
a
not
just
trying
to
invent
another
way
of
doing
policy,
but
really
making
it
simple
again
that
word
to
understand
how
to
drive
that
across
the
fleet.
E
We found that we had some additional use cases and growth, and ultimately we've built a managed cluster inventory API that lets us think about the inventory. It lets us think about role-based access control, right: a team or a user, what clusters can they see or interact with? And then it lets us think about an API for placement rules. So PlacementRule is a Kubernetes kind.
E
Everything here is Kubernetes-native API, CRDs, and so we can think about how we place workload or place policy against a fleet, and that set of APIs can be leveraged by any operator that wants to become multi-cluster. And so in fact we use that when we integrate Thanos, when we integrate Argo, and even OPA: we use those core APIs that are in Open Cluster Management to say, I need a particular kind of operator deployed within a cluster, I need a particular configuration to be enforced for that operator.
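To make the inventory idea concrete, a minimal sketch of what a managed-cluster entry can look like on the hub; the cluster name and labels below are invented example values:

```yaml
# Illustrative sketch of OCM's managed cluster inventory API.
# The name and labels are placeholder values, not a real cluster.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: dev-cluster-1          # example cluster name
  labels:
    cloud: aws                 # labels feed RBAC views and placement selection
    environment: dev
spec:
  hubAcceptsClient: true       # hub approves this cluster's registration
```

Because this is just a CRD, the same kubectl and RBAC machinery that works for any Kubernetes resource applies to the fleet inventory.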
E
So I'll pause here and see if that helps answer some questions about what Open Cluster Management is really all about, and see if there are any questions coming in. I don't have the view of the questions, Josh, so maybe you want to highlight those.
B
Yeah, I actually have one question, but I didn't quite understand it. I've asked him to clarify, and I'm waiting for that.
D
Eye-opening. I'm gonna jump on that, because you know, from a product perspective you're always thinking about the strategy in the market: how do you win, how do you bring in partners, what are the right relationships? But at the same time, our organization through and through has had this transparent and open culture within us, and so whether we were at IBM or here at Red Hat, it's always felt like: yeah, jump in this boat, grab a paddle, it's all moving the same direction.
D
I
think
one
of
the
interesting
takeaways
chris
is
when
you
look
at
upstream.
There
are
some
games
right.
D
We don't need to get into that. You know, there's politics, and there's, you know: do you put emphasis on this project or that project, and this one's gaining steam and this one's not. I don't understand that lexicon very well; that's just not the history that I've been in. But I think that's my learning curve: kind of figuring out how Red Hat really embraces and supports communities and moves in a direction to really encourage them to flourish. Things about CNCF, things about the Linux Foundation, other foundations that support,
D
you know, the general cause of open source and community. So I think that's been the biggest learning experience for me. It doesn't feel like a harsh pivot, because even though we're coming from IBM, like I said, our org has always been very open, very progressive in terms of bringing on new features and function and talent. So I think we've always kind of had that at the heart of our organization.
C
Michael, I'm going to add to that the development perspective. It's also been a lot of work, and a little eye-opening, but you know, as an overall transition, I guess, it's definitely been for the better, and it's helped improve our code just in general, and the way we work. But doing it in the open, it's been a little bit... it's been liberating, to be honest, and it's quite nice. But it also makes you think a little bit harder before I hit that push button.
C
You know, making sure that I crossed all my t's and dotted my i's before it's out there for the big, great world to see. And so it wasn't just like we flipped the switch and we were done; it's been a transition, but the team's adapted really well to it, and it really is, it's a...
B
So
we
actually,
we
have
a
few
questions
from
the
stream,
the
one
of
which
actually
this
came
from
slack,
which
is
somebody
picked
up
on
on
you
talking
about
the
third
party
components
like
integration,
falco
and
that
sort
of
thing,
and
so
one
of
the
things
that
he
does
with
ocm
is
manage
upgrades
for
things
in
the
clusters.
E
Sure
so,
and
I
think
I
was
able
to
pull
up
the
slack
just
to
see
the
question
is
written
there
as
well.
So
maybe,
let's
think
about
how
do
we
configure
a
third-party
extension
on
a
cluster?
Typically
we're
going
to
do
that
by
deploying
a
an
operator
so
setting
up
the
olm
subscription,
which
will
drive
the
configuration
of
that
operator
on
the
cluster
or
we
might
develop
an
application
in
open
cluster
management
and
deliver
that
application,
which
includes
things
like
helm
charts.
E
So when we think about upgrading it, what we're really now thinking about is: when do I push out the next level of configuration for that operator's subscription, or when do I push out the next version of that Helm chart? So Open Cluster Management will help you drive that next version of config, right, the next version of the Helm chart or the next version of the OLM subscription, and then it's going to rely on either the operator or the Helm chart to properly handle its own upgrade behavior.
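As a rough illustration, the "next version of config" for an operator is usually just an updated OLM Subscription pushed to the managed cluster; OCM delivers the resource, and OLM performs the actual upgrade. A hedged sketch, where the operator name, channel, and CSV are placeholder values:

```yaml
# Standard OLM Subscription shape; the operator, channel, and CSV
# names are placeholders, not a specific supported operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  channel: stable-2.0                    # bumping the channel drives the upgrade
  name: example-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: example-operator.v2.0.1   # optional pin to a specific version
```

Distributing a new revision of this object across the fleet is what "pushing out the next level of configuration" amounts to.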
E
So Open Cluster Management doesn't have innate awareness of the inner workings of everything under the sun, but it is able to drive a desired state, a declarative state of configuration, to clusters that are under management, and then to validate whether those clusters are in a thumbs-up or thumbs-down state, right: was the operator configuration pushed correctly?
E
So here, let's pick on this operator. This is an example operator that actually allows us to configure the update service, or the upgrade service, for OpenShift itself. I'm on an early dev build, so you're going to see a couple of bugs pop up here and there, because this is pre-release code. Everything I'm showing you is in the publicly released versions of the project, and you can go and run this stuff now, but I just have a very early build right now.
E
In any case, this particular configuration has a statement around the Operator Lifecycle Manager, or OLM, subscription for the operator, and it's expressing a desired channel, and it can also express a desired version. So in this case, unless my eyes are missing it, there's not a specific version highlighted; but if I go back, here is one that does have a specific version highlighted.
B
Okay, by the way, this is a bit of a side track, but since you've got the UI up right now, which says Advanced Cluster Management: I wanted you to just quickly go over the relationship between Open Cluster Management, Advanced Cluster Management, and OpenShift, so that people can understand the different names.
E
Absolutely, and thanks for calling it out. So Open Cluster Management is the upstream community project; it's where we develop the technology that we then deliver in a supported product, which is where the Advanced Cluster Management name comes from. So open-cluster-management.io is the site, you can see it up here, and if you want to connect with us on GitHub, it's github.com/open-cluster-management, and then probably within there the community repo, or the enhancements repo, or the API repos are good starting places to start to see.
E
What's going on within this organization: right now, because of the way that we transitioned over, and we were in the process and flow of open-sourcing the technology, there are some repos... all the parts of what we think about as the product are there, with the exception of one component, which is RedisGraph; that has to do with licensing concerns and our ability to open that part up. But with that one exception, everything else is open source.
E
There are other parts of the organization, the GitHub org, that deal with things like our build process. Those are supporting features: they support our mechanics of release and delivery, but are not actually part of what a user needs in order to run it. So everything there is open and you can get started; each of the repos tries to provide specific information about how to build it and how to leverage it, so, registration for example.
E
So if I wanted to take any cluster, Kubernetes, OKD, OpenShift, and I wanted to make it a hub, I can go and deploy the cluster manager operator for that particular cluster, and then I can import clusters by going to the cluster I want to import and deploying a klusterlet with some configuration. And at the end of the day, what that gives me is: now, on my hub, I can view that cluster that is under management.
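In the upstream project, the agent side of that import is itself declared as a resource. A minimal sketch, assuming the Klusterlet operator has been installed on the cluster being imported; the cluster name here is an invented example:

```yaml
# Illustrative Klusterlet custom resource for the cluster being imported.
apiVersion: operator.open-cluster-management.io/v1
kind: Klusterlet
metadata:
  name: klusterlet
spec:
  clusterName: dev-cluster-1                 # how this cluster appears on the hub
  namespace: open-cluster-management-agent   # where the agent pods run
```

Once the agent registers and the hub accepts it, the cluster shows up in the hub's inventory as described above.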
E
So, one of the things as a community that we are focused on growing and improving right now is just making things more consumable. So, for instance, if you wanted to provision an OKD cluster, you can do that today; there's just not a lot of doc that explains exactly what you would modify. In this case, there's an API object called a ClusterImageSet, and you can take an OKD release.
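To give an idea of the object being described: a ClusterImageSet mostly just points the provisioning machinery at a release payload. A sketch, where the name and the OKD release tag are made-up placeholders:

```yaml
# Illustrative ClusterImageSet; substitute a real OKD release image tag.
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: okd-example
spec:
  releaseImage: quay.io/openshift/okd:4.x-example   # placeholder tag
```

Swapping the release image is the modification being referred to when provisioning OKD instead of OpenShift.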
E
So from that perspective, it's possible today. It's something where, like other areas, right, it's a young project: we're looking for community members to get involved, and we're looking for help, both in understanding things like that, where the community wants to do something that just hasn't been our primary focus on a day-to-day basis, so help there is always appreciated; and in terms of improving, both pointing out "hey, you guys aren't easy enough to consume here, easy enough to understand in this particular area," so just providing that feedback, and then also contributing: here's improved documentation.
E
And on that note, there is actually a bi-weekly community call, so I'll post that link in the thread that you've got in the KubeCon chat. It's also in the Zoom chat, so hopefully it makes it to Twitch as well. That board is actually the proposed agenda; that's how we manage our conversations, which are public and open. All of the prior sessions are available on YouTube as well, so you can go back and watch the history of those sessions if you want to get into more detail on particular topics.
B
Okay. What's the best place for people to ask questions and give feedback if they can't make that meeting because of time zones, right?
E
There's Amazon EKS, provisioned and imported; GKE, which was provisioned and imported; the IBM Red Hat OpenShift Kubernetes Service, ROKS; and then Red Hat OpenShift on AWS. So, lots of fun acronyms. But if it runs OpenShift, or if it runs a managed Kube, then you can interact with it through an Open Cluster Management hub.
E
So I think, like any good community project that's in a growth phase, there are always sharp edges, right; we call it the bleeding edge for a reason. I think probably the biggest sharp edge now is that, in order to deploy the complete system, you're still kind of picking up the individual operators and parts and deploying those parts. That's something that, as a product, we focus on: trying to pull together all the things and making it easy.
E
As a community, though, we still have some questions in our minds, like: would you really rather have a single persona who wants to do all of cluster provisioning and management and all of application delivery? Or do you really have a user that is more platform-operator-centric and a second user that is more application-delivery-centric, right? Like, I don't know a single person that would leverage all the parts of Open Cluster Management, but as an organization, what we're trying to do is present a solution that an organization as a whole can consume.
D
Open to that, in terms of Open Cluster Management being a central management hub to see the world and manage it. I don't think we've done any testing on Arm yet; we are aggressively moving in the direction of Power and Z, so the multi-arch support for Power and Z is being baked in as we speak, and then we also want to be able to run the hub on Power and Z. So, you know, importing the fleet, making sure we can manage that from end to end.
D
You know, the pattern that we're looking for is conformance to an API spec and agreement to kind of a CNCF structure in terms of the way that they're building up their code, and we feel like we can probably play pretty well within that space. That's a really broad definition of what you can support, whether that's, like you said, Arm, Josh, or whether that's, you know, a large set of customers that are investing in Power and Z. So there's a lot of wiggle room in that space.
B
Right, yeah, that's what that wire is for. So actually, I used to in fact have an Arm v8 server here, loaned to me from the Arm spec coalition.
B
We used to actually build a Fedora Atomic for Arm, back when that was the...
B
Okay, so I actually want to get a couple of details. This is sort of random, but I realize I don't actually know these things: klusterlet and cluster manager. What do these two components do?
C
Absolutely. So klusterlet is the general term we use for the binary, or the images, that we're running on our managed clusters, so those would be the ones that you deploy, like Michael showed, or you import, like Michael showed. So this is sort of the brains, or the visitor, that does all the work: brings us back, you know, stands up,
C
does the initial handshakes, makes sure we're secure, exchanges our certificates, etc., gets approval to join the hub, and then, once it does, brings what we call our klusterlet add-ons in. And so add-ons are things like the application subscription that we talked about, which is in the community today, as well as the policy. So the GRC (governance, risk, and compliance) space has a number of these additional add-ons, each of those being containers that live side by side with the klusterlet and give us those additional capabilities.
C
So there's an IAM policy, there's a compliance policy klusterlet add-on, and so the klusterlet is, for all intents and purposes, the bag of automation, or binary,
C
that runs on the managed cluster and allows us to control and interact with it. And then you have the cluster manager side, which is what we often refer to as the hub side, where the UIs are that you'll find when you start with the open cluster manager, and where you build out all of your initial control plane pieces. And so that side is what is sitting there listening for the klusterlets to phone home, validating them, and then approving their sort of onboarding into control from the cluster manager. And then, once they've been approved and they're brought under control,
C
that's when the cluster manager looks at: okay, these are the add-ons that are supposed to go to this specific managed cluster, and sends those details down, well, allows those details to go down, and the agent on that managed cluster expands with those plugins, and bada bing bada boom, you're under management. And, you know, one of the things we're looking to do is to expand those klusterlet add-ons as well, so it's not, you know...
C
Eventually, we don't want it to just be the pieces that ACM brought, but also plug-ins from third parties that can be enabled or disabled, that a customer can choose, or, my business side coming out there, that a user can decide when they want to use it. And we've got the policy controller displayed right here.
C
Right, so we have a number of these, and they're all in the public domain, and so you could literally, as simply as that, fork one of those and begin to modify it. So, you know, we have them in the application domains: you can fork that and begin working there. We have them in the compliance and security space: you see the cert policy controller, the IAM policy controller, and the policy controller all listed here, and so you can fork those and start to modify them and expand.
C
Or, you know, you can completely start from scratch. We have the layouts available, and so you could start to build one, and so literally it's: you provide a configuration on, we'll say, the north side, or the hub, and once that's provisioned, the klusterlet (the klusterlet being on the managed cluster) will see it and pull those in. And so, as you see with the true and false, you can turn them on and off as needed for the endpoints as well.
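The true/false toggles being shown correspond to a per-cluster add-on configuration resource on the hub. Roughly, using the shape from the ACM product as an illustrative sketch (the cluster name is an invented example, and exact fields may vary by version):

```yaml
# Illustrative per-cluster add-on toggle resource.
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: dev-cluster-1            # example: matches the managed cluster's name
  namespace: dev-cluster-1       # and its namespace on the hub
spec:
  applicationManager:
    enabled: true                # application subscription add-on
  policyController:
    enabled: true                # GRC policy add-on
  certPolicyController:
    enabled: false               # example of an add-on toggled off
```

Flipping an `enabled` flag here is what drives the agent on the managed cluster to install or remove the corresponding add-on container.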
C
So we've got a bunch of different sorts of spaces where you can fork a project and start to work on one in that space; or, you know, if you have something completely new, you can use one of these as a base and start there; or reach out in the community and we'd be happy to work with you as well.
B
I have some follow-up questions, but the audience questions are more important, so let's take the audience questions. One of them is about using OCM in a disconnected environment; that is, specifically, where you're managing clusters all of which are in an isolated data center.
D
Yeah, that's a really common use case, Josh, whether they're purely disconnected or, what we see more often, a hybrid. You know, they have an on-prem hub, or they have a reason why they need their storage locally (data residency, geopolitical reasons, latency reasons), but yet they want to start to take advantage of the cloud, and they want to start to take advantage of some cost savings and some benefits of, you know, positioning workload closer to a region where it needs to run, in the edge space.
D
You kind of see the reverse of that in some ways: you see they want to take advantage of ROSA as a central management point, where they want to run their hub in some particular locale or data center, but then they're going to have a bunch of edge devices and widgets that are running, you know, more in on-prem space, disconnected, on a boat or a kiosk or something. So you get a mix of both directions of where they want the cloud to be and how they want to leverage the cloud.
D
It's really not up to us to dictate; we want to be able to play in any of those hybrid scenarios. We fully support the disconnected on-prem case; we have, you know, a lot of interest in the community around making sure that that is a stable, supported, functional path, and that will always continue to be a lot of our bread and butter.
B
A different question, and I'm going to paraphrase Niels a little bit: so, you know, say I've got Kubernetes deployed and I'm now starting to run some stuff in production, et cetera.
B
What are some things that, in your opinion, would trigger my really wanting to look at deploying OCM? So: cluster size, having a certain number of separate clusters, having, you know, certain business requirements. From your experience working with users, what are the points at which you see users saying, "hey, I need OCM now"?
E
The policy framework that's in Open Cluster Management lets you enforce or audit those types of technical controls. So even if you only have one cluster that's running in a public cloud, or one cluster running in vSphere, and your security team says, "look, every two weeks I want you to send me a doc that says you validated all these technical controls," you can do that by hand, or you can use the framework that I showed earlier, with policies, which allows you to actually define those as YAML resources that you control in Git.
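A rough sketch of what such a policy resource looks like, using the hub's Policy/ConfigurationPolicy shape; the policy name and the namespace it checks for are invented for the example:

```yaml
# Illustrative governance policy; "inform" audits, "enforce" remediates.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-require-audit-namespace   # example name
  namespace: default
spec:
  remediationAction: inform              # audit-only, for the biweekly report
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: require-audit-namespace
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave   # the control: this object must exist
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: compliance-audit # example control being validated
```

Because the policy and its compliance status live as resources in Git and on the hub, the "send me a doc every two weeks" request becomes a query rather than a manual checklist.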
E
So if you've got a compliance scenario, that's also a trigger. And then, if you're developing an application that has components that run on different clusters, or that itself needs to be more globally available, then the fact that we can distribute the app, manage its placement dynamically, and deal with that problem, that's also a trigger as well. So if I introduce a new cluster, and I'm starting to treat clusters as more disposable, and I'm provisioning them as needed, putting applications on them, and maybe tearing them down more dynamically,
E
the fact that we can syndicate all of the health information from all clusters to the hub and render that in a dashboard (in fact, maybe I'll pull it up and show it while I'm chatting), that's also a pretty powerful capability that just simplifies life for that user, or users, or that platform operator that's responsible for that type of use case.
E
Placement rules do a lot of that. So, the way that you would do that today: there is not a condition that knows about certain classes of storage, but it's very label-centric. So, for instance, I could have a label. Here's a cluster that's provisioned in Tokyo on AWS; I could add a label here that might say storageclass equals gp2.
E
This is a definition of an app, and one of the objects here is a placement rule, and within the placement rule it's got a set of match labels. So certainly I could have a placement rule that said storageclass: gp2, and when I apply this resource to a hub that's managing clusters, it will dynamically select any cluster that has that storageclass-equals-gp2 label. Or I could use a match expression, and say any cluster that has storage class gp2 or io1, as an example.
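The two selection styles just described might look roughly like this in a PlacementRule; the rule name is invented, and the label key and values are the ones from the discussion:

```yaml
# Illustrative PlacementRule showing both selection styles.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: storage-aware-placement      # example name
spec:
  clusterSelector:
    # Simple equality match:
    matchLabels:
      storageclass: gp2
    # Or a set-based match expression (you would normally pick one style):
    matchExpressions:
      - key: storageclass
        operator: In
        values: [gp2, io1]
```

Because selection is evaluated continuously, newly imported clusters that carry a matching label are picked up without touching the rule.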
D
The sky's the limit there. This is Josh's, you know, neck of the woods, when he shows off his GitOps capabilities and maneuvering applications across the fleet: you know, blue-green scenarios, all the enterprise controls for how you deliver to prod versus how you deliver to dev, all within that matching-label space that Michael is showing right now. That was your cue; Josh, take it away.
B
I was going to say, well, no, yeah: GitOps was going to be one of my questions, right, because obviously you've shown off a bunch of the GUI stuff, and I know that there are admins who like the graphical interfaces, but I'm not a graphical interface person; I'm a check-something-into-a-git-repo person.
C
Yeah, no, absolutely, and like Scott said, with the labels the sky's the limit, which is pretty much the cool piece of it. You can slice and dice things up in, you know, an infinite number of ways, which means you can make it meet your needs, regardless of what they are. And so, you know, Michael demonstrated very quickly how you can do it for storage.
C
We definitely do it for pillars like development, QE for test, and production. Even within production, there are automatic labels that are generated onto the systems as you import them, so you get things like regions for AWS and Azure, etc., so that you can, you know, choose where you want the application. And all this is in the OCM placement rules as well, and you can start to play:
C
if you have, you know, storage pieces you need to move, or a need for a single state; as well as, as we were discussing, using this in a bursting scenario, where you have a placement rule in which you actually define the two cluster names that you want, one on premise and one in the cloud. You set the cluster replicas to one, and it's going to use the first one in the list, which would be on-prem. And then I see: oh, you know, I'm coming up to Black Friday.
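The bursting setup just described, pinning two named clusters and keeping only one active, can be sketched roughly as follows; the cluster names are invented, and flipping clusterReplicas from 1 to 2 is the "burst":

```yaml
# Illustrative bursting PlacementRule with explicit cluster names.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: burst-placement             # example name
spec:
  clusterReplicas: 1                # flip to 2 before Black Friday to burst
  clusters:
    - name: on-prem-cluster         # first in the list wins at replicas: 1
    - name: cloud-cluster
```

Since the replica count is just a field on a resource, the timer-driven "flip" the speaker mentions can be a one-line patch applied by any automation.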
C
I know I'm going to have a lot of people hitting my application. I make a mod the night before, or, sorry, I set up some automation on a timer that flips the replicas from one to two, and the system will automatically burst the application out to that space, and so bring it out into the cloud solution.
C
So you have it running both internal and external, and then we get into things like Ansible hooks, where you can control... I'm not going to go too deep down the rabbit hole, but yeah, you know, you can deal with external pieces like a load balancer, so that the inbound for your application knows: I'm not just pointing to the local replica ports, I now also need to point to this cloud provider. You can have that change happen before or after; there are all kinds of different ways to slice it.
B
Yeah, I'm actually very interested in this, but I'm not going to get too far into it, because, you know, my thing is stateful services on Kubernetes, and obviously with stateful services you have a lot of potentially complicated requirements for a particular stateful service.
C
That went into the Slack channel, but we are doing a bunch of work with two different storage teams to do just that type of work, with stateful applications in peering clusters, or peering groups, sorry, and then using ACM as a catalyst to be able to, you know, span that. Oh, and OCM will have the capability as well, since everything is developed in the upstream first: to be able to span those peer groups across customers.
A
Yeah: how easy or hard is it to use OCM or ACM to migrate live workloads to a new cluster or region without major disruption? That's a good question.
C
So I'm gonna key myself back to the Ansible integrations I mentioned, for the external pieces. It's always possible to be running a load balancer, and there are operator controls for load balancers, so all traffic comes into one cluster and can then be expanded out to spray between the two. But to get back to how simple it can be: I have cluster A that has a label that says "application one." I add that label to my second cluster, the one that has application
C
two, which I wanted it to go to. The placement rule, which has a watch going on all of the managed clusters in ACM or OCM, will automatically say: okay, the app needs to go to this new position. So it's going to write out what we call a new decision, and the application model, the open-source subscription, is going to read that new decision and say: okay, I need to go to this new cluster,
C
and what do I need to do? And that can be something like creating a ticket or modifying a load balancer, any kind of automated setup; well, pretty much anything available in Ansible Galaxy can be executed there, and all of this can be done with the OCM piece. And so it'll run those pre-steps, which likely would be, possibly, updating the load balancer; although likely, if you want a zero-outage move, the load balancer update will be in the post hook.
C
So the first thing the placement does is maybe open a ticket, or send us a message to the Slack channel, saying: hey, we're bursting over to cluster number two, and this is the list of clusters that we're using. The subscription will then apply the application there, and once the application is up and running, and all the status is coming back as a go, it'll run the post hook, which would then update the load balancer to say:
Is
you
know
this
is
the
new
new
additional
route
that
I
can
use,
so
all
traffic
going
into
that
external
load
balancer
will
then
get
sprayed
to
both
clusters,
and
then
you
can
go.
You
know,
after
the
after
the
time,
if
you
find
your
traffic
is
dropping,
you
can
go
the
opposite
way
as
well,
and
so
you
remove
you
remove
the
label,
which
will
then
reverse
all
of
those
pieces,
and
so
the
app
will
go
away.
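In ACM's application model, those pre- and post-steps are expressed as AnsibleJob resources kept in prehook/ and posthook/ folders of the subscribed Git repo. A heavily hedged sketch; the job template name, credential secret, and variables below are all placeholders, and exact fields may vary by version:

```yaml
# posthook/update-lb.yaml in the subscribed Git repo (illustrative only).
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: update-load-balancer
spec:
  tower_auth_secret: ansible-credential      # placeholder credential secret
  job_template_name: Update-Load-Balancer    # placeholder Tower/AAP template
  extra_vars:
    new_cluster: cloud-cluster               # example variable for the playbook
```

The subscription runs the prehook before deploying, waits for the application to report healthy, then runs the posthook, which is the zero-outage ordering described above.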
D
A cool project in that space, just to dovetail on the storage part: there's an upstream project called Scribe, which we're very keen on and looking at as a way to do that replication of persistent data from cluster A to cluster B. So in that migration scenario, or in that bursting scenario, you know, Josh described ways that you can handle off-cluster things, like the Ansible hooks that are talking to F5 and ServiceNow and blah blah blah; but also, let's look at the storage part of that, you know.
D
A very common concern is: how are you going to replicate that storage out of the cloud-native architected app that still uses, say, CockroachDB or something on the back end that needs to be copied over? Submariner is another tool in that space, one that we've actually already started to include as an operator.
D
So looking at those tools and that part of the problem space always gives us a new opportunity to ask: how can we make that job easier? How can we make the life of the central ops team smoother, with less friction, as they approach these environments?
A
Awesome. So there's a question in chat here right now: we deploy OpenShift GitOps (Argo CD) on a cluster and manage application placement across dev, QA, and prod, that kind of three-stage environment, from that one install. So it sounds like advanced application lifecycle management can complement Argo CD, and I suppose this is not a replacement for a GitOps flow. What is your opinion on that?
C
Absolutely. So we look at them as complementary, not competitive; they work together. We didn't touch on it, but it's in the add-ons, and I don't know if it was visible when Scott had it up, so maybe I'll try a screen share here, and we'll see how that goes.
C
In the add-ons, we actually have integration today for working with Argo, and so I'm going to click over.
C
We see this visual web terminal, but this is all from the CLI today. We talked about the klusterlet add-on configs that come with each of our clusters, whether we deployed them or imported them. When you look at which add-ons we have on those clusters, we have the Argo support today, and so if you flip the bit from false to true on an import or a deployment, that imported cluster becomes available to any Argo that's running on the same cluster as the hub.
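That "flip the bit" step lives in the cluster's KlusterletAddonConfig on the hub. A rough sketch; the `argocdCluster` field name is from memory of the ACM releases of this era and should be checked against your version's CRD, and the cluster name is invented:

```yaml
# Hypothetical KlusterletAddonConfig for an imported cluster. Setting
# argocdCluster to true registers the managed cluster as a target in the
# Argo CD instance running on the hub.
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: cluster-b
  namespace: cluster-b
spec:
  argocdCluster: true        # the bit flipped from false to true
  applicationManager:
    enabled: true
  policyController:
    enabled: true
```

Each managed cluster gets one of these in its own namespace on the hub, so enabling the integration is a per-cluster toggle rather than a global switch.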
C
So that's sort of the step-one integration. And coming up (I think in about two weeks we'll be committing it in the upstream) we're adding a capability to do the same kind of import, but not just to one Argo: to multiple Argos, if you have multiple Argos running in different namespaces for different development teams, and so on. What this does is populate the Argo cluster list that you would use as targets, your remote targets. So anything you provision, anything you import in OCM, now becomes a target that you can leverage from Argo as well. And then this next piece is coming.
C
It will be coming, I guess, in time to the upstream, but it's available in the downstream, and I'm going to touch on it anyway because it's pretty cool stuff. On the ACM side, we're also bringing the Argo integration as close as we can to our subscriptions, because both technologies do a very similar set of jobs, although in different spaces, with different reasons to use them. My point being that we're working towards embracing both; our goal is not to do one or the other, but to make them coexist and work as closely together as we can. So I'm just going to point one thing out.
C
I can show the links to this as well. This is actually a GitOps scenario where I have created an initial subscription called infrastructure build out, and that is an OCM resource. This subscription points back to a Git repo called fleet management, and in that repo I have a bunch of other subscriptions, as well as Argo Applications and Argo ApplicationSets. So that single starting point gives rise to a configuration of all of these applications, as well as these OCM policy pieces.
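As an illustrative sketch of that single starting point (the repo URL, namespace, and branch are invented for the example), the OCM application model pairs a Channel pointing at the Git repo with a Subscription that pulls everything under it onto the hub:

```yaml
# Hypothetical hub resources for the "infrastructure build out" entry point.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: fleet-management
  namespace: fleet-ns
spec:
  type: Git
  pathname: https://github.com/example/fleet-management.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: infrastructure-buildout
  namespace: fleet-ns
  annotations:
    apps.open-cluster-management.io/git-branch: main
spec:
  channel: fleet-ns/fleet-management
  placement:
    local: true    # apply to the hub, which then fans the contents out
```

Because the repo itself contains more subscriptions, Argo Applications, and policies, applying this one subscription bootstraps the rest of the fleet configuration.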
C
Such as installing, well, in this case it's OpenShift GitOps, but it could be the Argo operator, on a remote cluster. We've got the compliance stuff we talked about, and we also have some security pieces we were talking about, such as making sure that etcd encryption is enabled.
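That etcd-encryption check is the kind of thing the governance framework expresses as a Policy wrapping a ConfigurationPolicy. A rough sketch modeled on ACM's sample policies (the names, namespace, and the OpenShift APIServer target are illustrative):

```yaml
# Hypothetical policy asserting that etcd encryption is enabled on
# OpenShift managed clusters (via the cluster-scoped APIServer config).
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-etcdencryption
  namespace: fleet-ns
spec:
  disabled: false
  remediationAction: inform     # report only; "enforce" would fix drift
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: enable-etcd-encryption
        spec:
          remediationAction: inform
          severity: high
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: config.openshift.io/v1
                kind: APIServer
                metadata:
                  name: cluster
                spec:
                  encryption:
                    type: aescbc
```

A placement binding (not shown) then selects which managed clusters the policy is evaluated against.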
C
So all of that comes from just that single ACM subscription. And again, pointing out how similar a lot of these pieces are: it could be triggered from a single Argo subscription as well. My point being, though, that we're building these to interoperate as well as to coexist in a seamless way, in a similar visual way.
C
So regardless of whether I'm looking at an ACM-described app with a topology (and topology views and whatnot we're actually bringing to the open source as well; even today you can see topology views in the upstream code), we have the same views for Argo, so you get that same look and feel. This one purposely has an error on it, just so we can show that you can go and look, and it gives you that same kind of readout that OCM does as to where a problem is with the application. So again, very powerful. The two are, I guess, interconnected; there's no better way to say it. We're working not one against the other, but on how we bring them together, so that if you're using one or the other, you're not locked in: you can move back and forth between the two.
A
Yeah, that would be amazing. So we've got about four minutes left. Anything you want to talk about, future plans, or anything you haven't mentioned yet that you want people to know? Now's the time to get it out, because we're about to cut over to the OKD office hour here at the top of the hour.
D
I'll go first, but I know Josh and Michael have a lot to say too. I think one of the areas we're really keen on is getting better, stronger, faster, smaller, you know, in a lot of ways.
D
A smaller footprint out on the edge, providing scale capabilities: hundreds of thousands of things in your fleet that you need to keep an eye on. So the questions on my mind are about what makes the most sense from a bundling or packaging standpoint. Josh and Michael showed off and demonstrated the value of the add-on framework, and yeah, you could add on 30 things, but maybe you don't need that in a very lightweight, high-scale type of environment.
D
So what are the minimum pieces you need? You probably need policy; you probably need some level of metrics, and even in those metrics you only need the most critical things, right? I don't need to be inundated with happy-state events; just show me the critical stuff across the entire west coast, for example. So in my mind it's: how do we start to make that easy job even easier, by eliminating a lot of noise, when you get to the large-scale types of environments?
E
So I think there are a lot of areas where we can still continue to make things easier. Josh talked about the fact that having a global load balancer in front of a fleet of applications is a very common thing, and there's some work upstream around multi-cluster service import and export; we can actually support that through our Submariner integration in open cluster management.
E
But, you know, at this point it's still a relatively young community, so people who are excited about this topic and want to get involved and engaged can have a big impact on how it grows over time.
A
Nice, awesome. So we're at time. Josh, you've got like 50 seconds if you want it.
C
I was just going to echo what Michael was saying: it's about growing the community and the interactions, and trying to, you know, evangelize and bring open cluster management to as many additional projects, or interact with as many additional projects, as we can, just to grow the space.
A
Well, folks, they're looking for your help out there, so go see what you can do to help them out. And until next time, y'all: I'll be in touch with Scott in particular about getting you all back on track with OpenShift TV episodes here.
A
Thank you, Chris. Thanks for joining us; have a good one. Thank you, everyone out there. Coming up next is the OKD office hour, and we will catch you there in, like, a…