From YouTube: OpenShift Coffee Break: Open Cluster Management
Description
Get your espresso ready for the EMEA OpenShift Coffee Break together with Natale Vinto, Jaafar Chraibi and Tero Ahonen as we welcome our guest Gianni Salinetti, Solution Architect at Red Hat, to celebrate Open Cluster Management’s acceptance into the CNCF Sandbox and discuss all about multi-cluster multi-cloud Kubernetes deployments!
Twitch: https://red.ht/twitch
B
Good morning, everyone, and welcome to another session of the OpenShift Coffee Break here on OpenShift.tv, the show about everything OpenShift and cloud-native architecture. My name is Natale Vinto, I'm a product marketing manager for OpenShift, and I'm here with my colleague and mate Jaafar. Hey Jaafar, how are you?

A
Hey, good morning, everyone. It's great to be back here and have some coffee, although I forgot to grab one this morning, so let's see how it goes.
B
Today we have a hot topic, which is Open Cluster Management, and I'm very pleased to welcome our guest, Gianni Salinetti, Solution Architect at Red Hat. Hey Gianni.

C
Good morning, good morning, Natale. Nice to meet you too, happy to be here. So today we're going to talk about Open Cluster Management and its submission to the CNCF. A very nice topic.
B
I'm really looking forward to it, Gianni, because I'm very interested in knowing what Open Cluster Management really is and how it relates to Kubernetes multi-cloud, multi-cluster management. I know it's a hot topic in the market; everyone has their own solution. At Red Hat we also have ours for OpenShift, which is ACM. So Gianni, correct me if I'm wrong: this is the upstream, the upstream bits of Advanced Cluster Management for Kubernetes.

C
Yes, correct. OCM, or Open Cluster Management, is the upstream for Red Hat Advanced Cluster Management for Kubernetes, and it's totally open source.
C
One important thing is that Red Hat acquired the solution some time ago, and everything was open sourced, following the usual Red Hat behavior in these cases: when Red Hat acquires a product, maybe proprietary software, and it becomes something managed by Red Hat, it gets open sourced. Sometimes it takes a while.

C
It can take two months or a year, depending on how the code is written, but in the end it's open sourced. OCM, Open Cluster Management, is an open source, completely usable version of Red Hat's multi-cluster management solution, and it's the upstream that drives all the changes to ACM. And the very nice thing is that it's currently being admitted as a Sandbox project to the CNCF.

C
That's something we can discuss later, because it's very interesting and it's a very important signal to me. It means this is something that's really ready for the community.
A
That's great. And correct me also if I'm wrong, but what I find interesting with this specific product (from the ACM standpoint) and project (from the community standpoint) is that we all know Red Hat was acquired by IBM some time ago, and maybe some people started freaking out: is Red Hat going to remain committed to open source, are they going to hold on to these core values, and things like that. And we can also tell the backstory: the product itself comes from IBM engineering.

A
In the beginning it was called MCM, multi-cluster management for Kubernetes, actually, and after the acquisition the engineering was transferred from IBM to Red Hat, and not the other way around, from Red Hat to IBM.

A
So I think that was a very cool proof of both IBM's and Red Hat's dedication to open source, and now we see the outcome of it: we have the Red Hat product, which is Red Hat ACM, and we created the upstream and the community for it with Open Cluster Management. So I think it's really satisfying to see that Red Hat stands by these values and that we are still, you know, pushing things in the upstream.
C
The nice thing here is that Open Cluster Management works out of the box. It's a distribution that works for everybody, on every kind of Kubernetes cluster. We'll showcase it now on a basic vanilla cluster created with kind, a very basic setup, not an OpenShift cluster. It works everywhere, out of the box.
B
Yeah, and I like what you said, Jaafar, that the story reads the Red Hat way of doing things: an acquisition, then the open sourcing of a product, and now we see the results. So I would like to share in the chat the website of the result, which is the Open Cluster Management website.
B
If you go to this website, you will see all the details of the project, all the GitHub repositories, and how to join this community, which is a very important part of the overall process. And I would also like to ask Gianni something before we go on. I know we have a live demo, of course, right?

B
Of course we have demos, where Gianni will show us how it works on a local Kubernetes cluster and how to make it work multi-cluster. But before we go into that, Gianni: what does it mean that a project is accepted as a CNCF Sandbox project? What does it mean for the community?
C
It means the Cloud Native Computing Foundation is ready to include the project in their ecosystem, so it's included in their ecosystem map. They have several degrees of acceptance: you have Sandbox, Incubating and Graduated projects, so you start from Sandbox and then you graduate. It means the CNCF is looking at the project and helping it grow, and it's a huge boost, because when something goes into the CNCF Sandbox it gains very important visibility.

C
So it means we have something that is going to grow, and probably quickly, because it's ready to be used, ready to be adopted by different kinds of Kubernetes users. Obviously it works great on OpenShift, because, you know, we're talking about OpenShift after all here. But it's not tied to a particular Kubernetes; it just works with custom resources, so you can apply them everywhere.
A
That means there are people who adhere to the tool and the features it can provide, and that this can be useful for Kubernetes as a community and as an ecosystem, and not just for a single software vendor, which is Red Hat. So that's also, you know, this notion of being accepted, and of serving a broader purpose for the Kubernetes community itself.
B
Gianni, we know you are prepping super cool stuff to show us.

B
Yes, before you go, I would like to address the people attending: someone in the chat said the microphone was not good, but I think Gianni has solved it now. I hear a little bit of echo, but I think it's fine, so now the microphone is okay. And I would like to ask the people attending: if you have any question about Open Cluster Management or anything else, please write it in the chat.

B
We will bring the questions to the speakers during the live.
A
So I do have a question, if I may. What is this, actually? What is it, and what is the purpose of this tool? If you can briefly introduce its purpose and its main functionalities, I'd say, even before going into the demonstration.
C
Absolutely, absolutely right. The point is: managing multiple clusters is complex. You know, when you have to deal with many, many clusters, it's complex to manage resources and keep them aligned and checked. Multi-cluster management is a topic with a long history; it's probably been discussed since the beginning of Kubernetes, and there were many solutions.

C
There were many attempts to manage this. The concept is: you need a single pane of glass to manage all your clusters. You need a single point where you can apply your changes, where you can monitor the state of your clusters, where you can monitor the parameters of your applications, and also maybe where you can manage your infrastructure in a declarative way. That's a very important point.

C
You need, for example, to manage your application lifecycle in this way, maybe using a GitOps approach with a Git repository where you keep your application configurations. So all of this comes together. And maybe you also need to manage policies for your clusters, again declaratively; you want to apply policies as soon as you join your clusters to your multi-cluster infrastructure.
C
You have one manager, usually called the hub cluster, that's responsible for managing a series of clusters. It's responsible for applying a series of custom resources to your managed clusters, and you can do that in many ways; we're going to see how. And it's responsible for monitoring the state of your clusters, which is very important, because you can see their health: you have agents running on your managed clusters which continuously report the status of each cluster, and through those agents you can apply changes to those clusters.

C
You can make groups of clusters, for example managed cluster sets, and apply changes in bulk to a single set. You can do a lot of things, and one very important one is application lifecycle management. If we have some time later, we can also try to apply the application lifecycle add-on and see how you can do GitOps with Open Cluster Management.
C
To do this, OCM has two main components. You have a cluster manager running on your hub cluster, so you need a central hub cluster to control all the others, just like in The Lord of the Rings: "one ring to rule them all". You have one hub cluster to rule them all. And you have managed clusters, which run agents called klusterlets. Think of the klusterlet like the kubelet on a Kubernetes worker node. The klusterlet is the agent; it's an operator which communicates with the hub, reports the status of the cluster, its health and its metadata, and also gets from the hub information about the changes that must be applied to the managed cluster. So you have a work agent in the managed cluster that continuously enforces, or reconciles, to use the Kubernetes term, the changes on the managed cluster. So it's a matter of reconciliation.
C
Also, for people who want to understand it better: if you go to the OCM documentation, at the link Natale shared, you'll also find nice architectural diagrams about the design of OCM. This is the basic design.
C
Then, on top of it, you can add many add-ons. For example, application lifecycle management, which is an add-on. You can add the policy manager, another add-on, which lets you deploy policies to your clusters, so it's a kind of configuration management; for example, you may want to apply default configurations.

C
I use these policies to deploy operators: as soon as I join a cluster to ACM, I apply a policy that deploys one or many operators on that cluster. So without using any other tool, I just need to join the cluster, and as soon as I join it I apply some labels to that cluster that match my policies.
A
Yes, I was going to say, some examples that people can maybe more easily understand: say you created 15 new clusters and you want them all to integrate with your LDAP or your central authentication system, so your users can get access without you needing to push the configuration to each one of them.

A
That's something you can enforce, like: okay, whenever we create a new cluster, it configures itself to authenticate with our central authority, or whatever tool we use. That's one step that makes things easier for you and your users. And maybe something else, like central logging: you want all your clusters to push their logs to your central logging system.
B
You recall, Jaafar, we had the demo from Fuz and Andres on one of the previous shows; they did a great demo about ACM. Now, Gianni, I would really like to say: show me the code, dude!
C
I made this demo deployment on purpose similar to the documentation. If you look at the documentation, there's a quick start, which is very nice, and it's based on kind. So you don't need big Kubernetes clusters; you just need kind. What is kind? kind is a tool for deploying Kubernetes in Docker, that's the name, an acronym for Kubernetes in Docker. kind lets you create a very small Kubernetes cluster inside a container.

C
It can also run with Podman, which is amazing. So we run two kind clusters, one hub and one managed cluster, and then we deploy Open Cluster Management on top. It's very easy, something everybody can reproduce immediately, right now. So let's share this video. Can you see it? Yep? Yes. Let's start it.
C
You can see I have two tabs here: one tab for the hub and one tab for the managed cluster, which is going to be created right now. First I'm creating the kind cluster named "hub", so kind is creating that cluster. Now I'm creating "cluster1", the managed one. You can see kind says I'm using the Podman provider here, not Docker. It takes very little, just a minute or so, to bootstrap the two clusters.
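As a rough sketch of the commands behind this part of the demo (the cluster names "hub" and "cluster1" follow the demo; adapt them to your setup):

```shell
# Use Podman instead of Docker as the kind node provider, as in the demo.
export KIND_EXPERIMENTAL_PROVIDER=podman

# Create the hub cluster and the managed cluster.
kind create cluster --name hub
kind create cluster --name cluster1

# kind registers one kubeconfig context per cluster, named kind-<name>.
kubectl cluster-info --context kind-hub
kubectl cluster-info --context kind-cluster1
```

These commands require a local container runtime, so treat them as a recipe to reproduce the demo rather than something to run blindly.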
C
So now my clusters are started; I just need to check whether the containers are completely up.

C
Let's see. I passed the context flag to use the specific Kubernetes context here; "kind-hub" is the name of the cluster. You see, it's very fast, very small, a minimal cluster. Everybody can run it on their laptop; it's very easy, and it's great for experimenting with Kubernetes. It's a great way to play, to take your first steps if you're starting to learn Kubernetes, and also to make quick experiments.
C
Okay, everything is started on the hub; now I just want to deploy the stuff. clusteradm: what's this? It's the CLI tool that deploys Open Cluster Management. I pre-installed this CLI, obviously. So, just one command, clusteradm init: it initializes my hub cluster and prints this message, the command with the token that I should run on the managed clusters to join them. Now, before joining the cluster, let's check whether the deployment of OCM has completed; you see, it's still creating the containers.
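The bootstrap flow Gianni describes can be sketched like this (context and cluster names follow the demo's kind setup; the token and API server URL are placeholders printed by `clusteradm init`):

```shell
# On the hub: initialize the OCM control plane. This prints the
# "clusteradm join" command, including a bootstrap token, to run
# against each managed cluster.
clusteradm init --wait --context kind-hub

# On the managed cluster: join the hub using the printed token and
# hub API server URL.
clusteradm join \
  --hub-token <token-from-init-output> \
  --hub-apiserver <hub-api-url> \
  --cluster-name cluster1 \
  --context kind-cluster1

# Back on the hub: accept the registration request from cluster1.
clusteradm accept --clusters cluster1 --context kind-hub
```

Flag names follow upstream clusteradm; check `clusteradm --help` for your version.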
C
It has created two namespaces, open-cluster-management and open-cluster-management-hub, which contain the cluster manager and also the other add-ons.

A
So yeah, this might be obvious, but all the components are actually containerized, right? It's a sort of microservices-based architecture for OCM and ACM, and all of them run as containers.
B
Very easy, that's cool. So you basically have two Kubernetes nodes created by kind on your workstation, so anyone on Linux, Mac or Windows can install kind and reproduce the same demo, and you are joining those two clusters, where one is the control plane and the other is, let's say, the data plane, the controlled cluster.
C
Yeah, absolutely. The OCM team did a great job with the CLI, so now you just need one command to finally accept the cluster. The managed cluster has made a request to the hub: "hey, please join me".
C
You can see that you have custom resources on the hub; you have this very important custom resource, the ManagedCluster.
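For reference, a sketch of inspecting those ManagedCluster resources on the hub (context names follow the demo):

```shell
# On the hub you should see one ManagedCluster object per joined
# cluster, with its acceptance and availability status.
kubectl get managedclusters --context kind-hub

# The resource records, among other things, whether the hub has
# accepted the cluster and the conditions reported by the klusterlet.
kubectl describe managedcluster cluster1 --context kind-hub
```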
A
Yeah, this is interesting, because the managed cluster had no relationship with the other cluster beforehand; it wasn't created by something else. It's just a regular Kubernetes cluster.
C
Okay, so I want to show you something more inside the clusters. This is the hub; you see, inside the hub we have these containers related to the cluster manager. And if I move to the managed cluster and run "oc get pods -A --context kind-cluster1", I pass the context of the managed cluster.
C
What's this? The klusterlet is the agent that communicates with the control plane, with the cluster manager on the hub, and the klusterlet also deploys some extra agents: in this case the registration agent, whose purpose is registering the cluster, and the work agent, whose purpose is applying the changes that you declaratively define on the hub. So every change you make on the hub is enforced there.
A
I think this is great. So one question one might ask is: is there a big overhead on the managed cluster? Do these agents weigh a lot in terms of CPU and RAM, or is it something negligible?
C
Oh well, this configuration here is quite lightweight and very basic, so the components we're running here are not using a huge amount of resources. What changes is the API pressure as you grow the number of managed clusters: as you grow the number of clusters joined to the hub,

C
you obviously increase the number of API calls between the cluster manager and the klusterlets.

C
So this has an impact on the API pressure. What's the concept of API pressure? We can define it as the saturation of the API calls on your cluster. So your hub has a limit of API pressure it can stand, according to its sizing. So when you design the size of your hub, you must be careful.
A
Yeah, so that's why the hub is running on a dedicated cluster, I guess, so it can scale up to manage thousands of clusters. And I believe on the ACM side we did some tests up to 2000 managed clusters or something like that, which is quite a lot.
C
So the point is: as you plan to grow your infrastructure, you have to somehow adapt the sizing of your hub cluster. Obviously, when you deal with a small number of managed clusters,

C
like here, where we have just one, it's a really small resource usage.
B
Very lightweight. On the service communication side: unfortunately, we cannot stream at full HD at the moment, so on YouTube you would see 480p, because I think there's a wrong setting in the streaming machine we are using. We will correct it for the next show, but for now, I'm sorry, we can't have full HD. I hope you can see everything well enough. So Gianni, to help with this, maybe you can increase the font. That's the maximum.
A
Yes, I think it's easier to read now. Cool.
A
So, just for people who maybe are not that familiar with Kubernetes: basically, a custom resource is an extension of the Kubernetes API that adds your own concepts, which become native resources of Kubernetes. So now Kubernetes understands the notion of what a managed cluster is, because we extended the API through the OCM installation. Thank you.
C
If I move to the cluster-management directory, under manifest-work, I have this file, hello-work. It's just a very basic deployment, you see, very easy: it creates a Deployment object inside a cluster.
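A ManifestWork in the spirit of the demo's hello-work file might look like the sketch below; the names and the container image are illustrative, not the demo's actual file:

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: hello-work
  # On the hub, the ManifestWork is created in a namespace named
  # after the managed cluster, so this work is delivered to cluster1.
  namespace: cluster1
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello
          namespace: default
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: hello
          template:
            metadata:
              labels:
                app: hello
            spec:
              containers:
                - name: hello
                  image: nginx   # illustrative image
```

The work agent on the managed cluster picks this up and applies the embedded manifests.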
B
I will put the examples repo now in the chat, in the channel and in the Telegram coffee break chat.

B
So everyone can run the same example you're doing right now, Gianni, yeah.
C
Absolutely, they can be applied to the managed clusters, as I'm doing. If I create this resource here, I've just created it inside a namespace named after the managed cluster, cluster1. So what I expect to see now is... here it is, "hello". You see, I'm on the managed cluster here, and the resource was created immediately on the managed cluster. Quite simple, and this can be applied to any kind of resource.
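What just happened can be verified from both sides with a couple of commands, roughly like this (names follow the demo setup):

```shell
# On the hub, the ManifestWork lives in the managed cluster's namespace:
kubectl get manifestwork -n cluster1 --context kind-hub

# On the managed cluster, the work agent has applied the embedded
# Deployment:
kubectl get deployment hello -n default --context kind-cluster1
```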
C
I just created a deployment, but I can do anything, just like Jaafar said before. And this is a very basic approach; we can evolve it, for example using application lifecycle management. That way we can enable a GitOps approach, so we need to deploy some more add-ons. You find those add-ons in the documentation of Open Cluster Management.
C
So if you look at the docs, in the architecture section you see the add-ons and how they work. You deploy an add-on manager on the hub; the add-on manager manages all the additional components you deploy, and you deploy agents on the managed clusters that communicate with the add-ons.

C
So if I want, for example, to add application lifecycle management, I just need to create these components on the hub and on the managed clusters, which are able to sense changes to a repository.
C
The nice thing about this approach is that you start creating your application using the GitOps approach by applying labels to your clusters. So you no longer need to know, for example, the exact name of the managed cluster where you want to apply your resource. You just say: I want my application on all the clusters with the label environment=lab, or environment=prod, and every cluster with that label automatically gets the application, because you declaratively created a placement rule. That's the name of the approach.
A
Yeah, sorry, just maybe to touch upon something that is very important: when you come to that scale, where we are managing tens or hundreds of clusters and thousands of applications, one of the questions that arises is how you control what happens where, like in case there's a problem, an issue you need to troubleshoot.
A
How do you figure out what happened, what configuration was pushed to what cluster? And if you do it through command lines, I mean, that's totally something you can do: you run a kubectl apply against 10 namespaces and it goes there; it gets deployed from the hub to your 10 clusters. That's fine.

A
But when you want to audit, like, what version of the deployment was deployed and when, etc., it becomes a bit harder, and I believe that's where the GitOps approach becomes very handy, because you have traceability of what was applied, when, to which clusters, etc. So yeah, I think combining the two is really a great fit, because you
A
have, of course, the workhorse of the hub and the klusterlets to deploy stuff, and you have the single source of truth: you push one change to your Git repo and then every cluster gets it. But you know when, and why, etc. Or at least when; not necessarily why, if there are no commit messages.
A
And you said there's reconciliation, is that correct? Also, like, if I go to one of the target clusters and tamper with something, is it going to be remediated? Yeah.
C
Say that somebody changes my deployment on the cluster using the CLI, goes there and scales out my application, for example, interacting directly with the managed cluster: the agents in the managed cluster immediately reconcile the application back to the desired state, and that's one key point of GitOps: constantly reconciling toward the desired state of an application kept inside a Git repo.
A
So yeah, very nice and, of course, very useful in real life when you have to manage at that scale, as we said. Cool.
B
You were mentioning GitOps, but I'm wondering: does GitOps here mean connecting to an existing GitOps engine like Argo CD or something like that, or is there another GitOps implementation in there?
C
OCM has a native GitOps implementation with application lifecycle management, but the great thing is that you can integrate it with other tools like Argo CD, and that's exactly what we do with ACM, Advanced Cluster Management, where you have a native UI that integrates with Argo CD resources. So you can see live what you create on top of your Argo CD instances. That way you can kind of separate the infrastructure resources, which you create, for example, with ACM, from the application resources, which you manage directly on the cluster with Argo CD.

C
If we have some time, maybe I can show you an example from my lab, where I run Argo CD resources integrated with ACM. But yes, the answer is yes: you have a native GitOps approach and you can integrate with Argo CD and other tools, and so, you know, get the best of everything.
A
So I had one question, but I think you wanted to show something before I mentioned the GitOps thing. I don't remember what it was you were going to show.
C
Yeah, I wanted to show you exactly the application lifecycle management, which is step zero of the GitOps approach here.
C
We just deploy those components, those agents, and then try to deploy a simple application in a declarative way, using the GitOps approach, and then we can extend the concept with Argo CD and more stuff. But let's start with the basics. So now I can show you that on my kind hub, with a simple command, I installed an add-on; the name of the add-on is application-manager, and I did it with clusteradm.
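A sketch of the clusteradm commands for this step (flag names follow upstream clusteradm; check your version's help output):

```shell
# Install the application lifecycle add-on manager on the hub:
clusteradm install hub-addon --names application-manager --context kind-hub

# Enable the add-on agent on the managed cluster(s):
clusteradm addon enable --names application-manager \
  --clusters cluster1 --context kind-hub
```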
C
You see, it's deploying more components, some operators, these ones, and these components are basically four. You have a channel, a component that manages channels; a channel is, let me say, a pointer to a repository. So it points to a specific Git repository that holds our application.
C
Then you have placement rules, which are the rules that match your managed clusters; placement rules can match conditions, for example the existence of a label. And you have subscriptions, which connect placement rules and channels together and say: okay, this channel, which points to a Git repository, must be handled for these clusters matched by the placement rule. So in this way you connect the source of information, the Git repository pointed to by the channel, with the target, the clusters matched by the placement rule.
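Putting the three pieces together, a minimal sketch could look like the manifests below. The repository URL, namespace and labels are illustrative, not from the demo:

```yaml
# Channel: points at the Git repository holding the app manifests.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: app-channel
  namespace: app
spec:
  type: Git
  pathname: https://github.com/example/app-manifests   # illustrative URL
---
# PlacementRule: selects the managed clusters by label.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: app-placement
  namespace: app
spec:
  clusterSelector:
    matchLabels:
      env: lab
---
# Subscription: binds the channel (source) to the placement (targets).
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: app-subscription
  namespace: app
spec:
  channel: app/app-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: app-placement
```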
C
So if I want to enable this on cluster1, I do... look, I'm running everything from the hub; I'm not going to the managed cluster.
C
I just enabled the add-on on the managed clusters, so on the managed cluster there's now another agent, another pod, deployed on the managed cluster, which senses the changes we make in the application lifecycle management on the hub. So we extended our hub in a very simple way with
C
this thing, which says: apply these changes, these resources, to all the clusters with the label env=lab. It's just a label: I create this label on the managed cluster and, see, it is applied. I could apply this label to a thousand clusters and get the application deployed to a thousand clusters, just with one command. And then I have a subscription; the subscription connects everything together: it subscribes to my repository and applies the changes, found inside the resources directory, to the clusters matched by the placement rule.
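The labeling step can be sketched like this (the env=lab label follows the demo; any key/value works as long as it matches your placement rule's cluster selector):

```shell
# Label the managed cluster on the hub so placement rules match it.
kubectl label managedcluster cluster1 env=lab --context kind-hub

# Every cluster carrying the label is selected, so rolling out to
# more clusters is just a matter of labeling them:
kubectl get managedclusters -l env=lab --context kind-hub
```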
C
And everything is declared in YAML in its repository.
C
That's the pod and that's the service; everything was created immediately. And if I now change something on my Git repo, let's say I scale out my deployment, it is immediately reconciled to the managed clusters. That's great, and that's exactly how it works in Advanced Cluster Management for Kubernetes.
B
Cool, so the core is open, the core is all upstream. Then, of course, the product adds some UI and some things to make it easy to install and maintain, and that is what's supported, but the core is open source. The same bits are the ones letting you, right now, deploy NGINX to two clusters.
B
That's a great statement. I'm going to put the contribute link in the chat; there's a link on the Open Cluster Management website about how to contribute and how to join the community, on GitHub, in the working groups and in the discussions about this new community. I'm really looking forward to seeing this community grow more and more in the Kubernetes and cloud-native space.
A
And just keep in mind that contributing is not necessarily pushing code. You can also contribute things like documentation or samples. We saw that you just used some samples from your repo, and if people start to build interesting things like that and share them on the public repos, that can be great too, you know, to help all of us.

A
You use those resources and touch things... usually, when people hear "contribute to open source", they think: okay, I have to be a black-belt developer or something like that, but it can be different things.
C
I think a great starting point for contribution is helping improve documentation. For example, you don't need to be a black-belt developer to contribute to the documentation, and you do a huge service for everybody. If you create awesome documentation for a product, you are enabling that product for the masses, because great documentation is, in my opinion, the key to the success of a project.
B
Agreed, I like what Jaafar and Gianni said. So start contributing, no matter how: documentation, bug fixes or code; just start contributing and join the community. That's a great wrap-up and finale for our show, which, unfortunately, is at its end. So I would like to thank everyone that joined, also in the chat; and Renato joined late, but Renato, you can catch up all the same.
B
Yeah, we look forward to having you in the next episode. Jaafar, any appointments on OpenShift.tv today? Yeah?
A
Yeah, so thanks a lot, Gianni, for the great demos. It was awesome, everything worked, and it was very well explained, so thank you very much. We still have some other shows coming up today. If I may mention it, I'm going to be doing one this afternoon on Pipelines as Code, which is a very nice upcoming feature on OpenShift: basically it allows you to run your Tekton pipelines straight from your Git repository, and that's another step towards GitOps, right?
A
Yeah, okay, so I just shared it, I believe, on YouTube; I can't reach the stream chat, but Natale, if you can get the link from our Zoom chat and paste it through Restream, that would also be helpful.
B
Yes, sir, let me copy it from our chat so we can share it here. So this is the link to the OpenShift.tv show, which will be on Pipelines as Code, right?
A
Exactly, yeah. So this is not exactly OpenShift Pipelines, like the Tekton downstream; it's a new feature that allows you to define your Tekton pipeline in your Git repository, and as soon as you commit your app changes,
A
OpenShift triggers the pipelines automatically on OpenShift Pipelines. It's similar to what we used to do with a Jenkinsfile, where you have a Jenkinsfile in the repo, but it's more advanced, because you can now react to pull requests and stuff like that. So stay tuned.
A
I'm going to be doing a demo, so I hope... yeah, it's an interesting topic.
B
Can't miss it; of course it's another live demo here on OpenShift.tv. So I would like to close up and wrap up everything. Thank you, everyone, for joining; the recording will be at the same link. I saw, while Gianni was presenting, some Telegram chat from our OKD group. If you would like to join our OKD channel, which is the OpenShift community group on Telegram, please join via the link I'm putting here in the chat. And also, if you'd like to follow Gianni on Twitter, here's his Twitter handle.
B
Feel free to follow him and say something about this awesome demo. And don't miss Jaafar's show today on OpenShift.tv. The OpenShift.tv Coffee Break will come back on December the 15th; we will have another app-dev topic with Alex Groom and other folks joining us. So please stay tuned, and see you on December the 15th, same time, here at OpenShift.tv.