From YouTube: Red Hat OpenShift GitOps 1.0 Overview
Description
A walkthrough of Argo CD in OpenShift GitOps 1.0 for configuring OpenShift clusters as well as delivering applications across multi-cluster environments
Read more:
https://www.openshift.com/learn/topics/gitops/
0:07 OpenShift GitOps 1.0 Overview
0:37 Install OpenShift GitOps operator
2:02 Log into Argo CD
3:17 GitOps workflow for configuring OpenShift
9:03 GitOps workflow for delivering applications
11:13 Drift detection and auto-healing
13:59 Deploy additional Argo CD instances
What I have in front of me is OpenShift Container Platform 4.6, and I'm already logged in, looking at the Administrator perspective of the OpenShift web console. Let's enable OpenShift GitOps on the cluster through OperatorHub. I go to OperatorHub and search for GitOps in the list of operators available there.
Red Hat OpenShift GitOps is found, the first version of it that was released, and I install it on the cluster. I just go through the defaults and wait a couple of seconds while it installs the operator and the related resources and controllers on the cluster. What happens by default when I install the operator is that a default instance of Argo CD is provisioned on the cluster at the same time, and it is pre-configured to allow configuring the OpenShift cluster itself through this Argo CD instance.
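The operator can also be installed declaratively instead of through the OperatorHub UI, with a Subscription. This is a minimal sketch: the channel and install namespace are assumptions and may differ by release, so verify them against the catalog entry on your cluster.

```yaml
# Sketch of a Subscription for the OpenShift GitOps operator.
# Channel and namespace are assumed; check your cluster's catalog entry.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable                    # verify the channel for your release
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```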
As soon as the operator installation is ready, we get a green check mark. If we look at the application launcher, we see a shortcut to the Argo CD instance added there, so we can easily switch and navigate to it. Like I mentioned, this instance is pre-configured to be able to manage cluster configuration.
You can use it to make customizations to OpenShift, for example to authentication, the console, or other areas, as well as to install operators. It is not a cluster admin, but it has elevated privileges, so it's an instance that is typically owned by the platform ops team or the cluster owners who want to manage the cluster itself. So let's go to this Argo CD instance and make some customizations to the OpenShift configuration through it.
We are faced with the login page of Argo CD. The default user for Argo CD is admin, if you haven't configured any other authentication for it. The password is generated by default when Argo CD gets bootstrapped and is stored in a secret that lives in the same namespace as Argo CD. So let's decode that secret and get the password out.
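For reference, the generated password lives in a Secret shaped roughly like the sketch below. The secret and key names here are assumptions for this release and may differ in later ones, so check the Argo CD namespace on your cluster. It can be read with, for example, `oc extract secret/<secret-name> -n openshift-gitops --to=-`.

```yaml
# Sketch of the secret holding the generated admin password.
# Secret name and data key are assumed; verify them on your cluster.
apiVersion: v1
kind: Secret
metadata:
  name: argocd-cluster-cluster   # assumed naming: <instance-name>-cluster
  namespace: openshift-gitops
type: Opaque
data:
  admin.password: <base64-encoded generated password>
```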
So this is the password that was generated for Argo CD. In future versions of OpenShift GitOps, we're working on integrating the authentication of Argo CD with OpenShift's OAuth, so you would essentially use your OpenShift credentials here as well to log in to Argo CD, and you wouldn't need to decode the secret anymore.
Let's log in, and there we have Argo CD 1.8. Right now it is empty: it doesn't have any applications or other things defined in it. I do have a Git repo already, which is also the flow of this demo if you want to try it on your own, and within it there is a cluster folder where I have stored some cluster configuration.
I have a console link with which I can add a custom link to the application launcher, a link to the Kubernetes space of the Red Hat Developer blog, and I also have some namespace configurations to be created. So let's ask Argo CD to sync the content of this folder to the cluster for us. I'll create a new app in Argo CD; let's call it cluster-configs, and I'll use the default project.
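A custom launcher link like the one in the cluster folder is expressed as a ConsoleLink custom resource. This is an illustrative sketch; the name, URL, and text are placeholders, not the exact contents of the demo repo.

```yaml
# Illustrative ConsoleLink; name, href, and text are placeholders.
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: redhat-developer-blog
spec:
  href: https://developers.redhat.com/topics/kubernetes
  location: ApplicationMenu   # appears in the application launcher
  text: Red Hat Developer Blog
```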
Projects are used to group applications inside Argo CD. I will use the manual sync policy for this particular part, because I don't want changes from the Git repo to automatically be rolled out to the cluster. I want to give the ops team a chance to review and, when they're happy with the changes, issue a manual sync and ask Argo CD to sync the configs to the cluster.
The next part is which Git repo contains the configuration of the cluster; that's the repo we're looking at. I'll go with the main branch.
It is better if you go with a particular commit ID or branch for your cluster, but in this example I go with main, and the content of the cluster folder is what I want synced to this cluster. The destination is the cluster I'm running on, the default Kubernetes service, which is where I want to sync the content of that repo. If I wanted to manage the configuration of a remote cluster, I could just add that cluster's URL here instead. Right now, everything in that repo is cluster-scoped resources.
If you have other types of configuration for OpenShift that are namespace-scoped, they usually end up in the openshift-config namespace, so let's use that namespace to be safe, and recursively apply all of this. Instead of this form, if you want to be fully declarative, you could also create the same application declaratively: you could add an Application custom resource from Argo CD that configures the exact same thing I just showed you through the dashboard. Let's create this.
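The declarative equivalent mentioned above looks roughly like the Application CR below. The repo URL is a placeholder for the demo repo, and the namespace follows the openshift-config assumption from the walkthrough; manual sync is expressed simply by omitting an automated sync policy.

```yaml
# Sketch of the cluster-configs Application; repoURL is a placeholder.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-configs
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<demo-repo>   # placeholder
    targetRevision: main
    path: cluster
    directory:
      recurse: true          # apply the folder contents recursively
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-config
  # No syncPolicy.automated: syncs are issued manually after review.
```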
All right, the application is created, and Argo CD immediately does a drift detection and identifies that the cluster does not have the configs that we have in the Git repo. This is expected, because we asked Argo CD not to do anything automated and to wait for us to issue a sync. So first, let's just check that in this console we don't have any extra links yet.
I will ask Argo CD to perform a sync. It lists what kinds of resources will be synced to the cluster, does the synchronization for me, and automatically applies those resources to the cluster. If I go to the console, under the application launcher I see that there is indeed a link added there, so the config is in sync with the cluster from this point on.
So let's modify the name of the link. I'll add "Kubernetes" at the end, so it reads as the Kubernetes space of the Red Hat Developer blog. Normally I should issue a pull request and give my peers a chance to review, so they can take a look at the change that is being asked to be rolled out to those clusters. For the demo, I'll shortcut that and commit directly.
So the change that I want on those clusters is now represented by a commit. There is a history of this change, both for audit purposes and also, especially when you're investigating issues and what happened to the cluster, so that you can always come look at the Git history and see what was rolled out to that cluster.
We see that Argo CD has detected that there is a change and that the state of the cluster is not the same as what we have declared in Git, which is expected: we just issued a change. I will ask Argo CD to sync this change to the cluster again, so it issues a sync and rolls it out to the cluster. If we look at the application launcher, we see that the name has changed to end with "Kubernetes". So Git becomes my interface for rolling out operations right there.
We have changed cluster operations to go fully through Git workflows: the pull request workflows, the reviews, the comments, and everything we have already been doing around Git workflows for applications. Now we can apply the same to managing the configuration of the cluster itself, and that's really the value you get out of adopting GitOps for configuration management.
So let's go further and also deploy an application on the cluster through Argo CD. In the previous section, when I asked for the cluster configuration to be synced, we also created a namespace called spring-petclinic, so that we can deploy the Spring PetClinic application, the Spring Boot sample application, in this namespace. Within the same Git repo there is an app folder where I use Kustomize as a templating system to be able to deploy the Spring PetClinic manifests.
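The Kustomize layout of that app folder can be sketched as below; the file names are assumptions about the demo repo, not its actual contents.

```yaml
# app/kustomization.yaml — minimal sketch; resource file names are assumed.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # the Spring PetClinic Deployment
  - service.yaml      # the Service exposing it
```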
There is a deployment and a service that I want deployed on the cluster, so let's create a new application. We'll call this one spring-petclinic-default, and this time I set it to automatic: I want every change that lands in Git to be automatically rolled out to the cluster. If something is removed from that Git repo folder, it should be removed from the cluster as well, and I also enable self-healing, so Argo CD should enforce that state.
The state of the cluster should always be in sync with the state of the Git repo itself. I'll use the exact same Git repo, again with the main branch, and the app subfolder in that Git repo. We are syncing to the current cluster. This is the pull model of application delivery, where the cluster pulls its configuration or application into its namespaces, and this needs to get synced into the spring-petclinic namespace that we created.
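Expressed declaratively, this second application could look like the sketch below; the repo URL is a placeholder, and the syncPolicy block is what the automatic, prune, and self-heal options in the dialog map to.

```yaml
# Sketch of the spring-petclinic-default Application; repoURL is a placeholder.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: spring-petclinic-default
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<demo-repo>   # placeholder
    targetRevision: main
    path: app
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-petclinic
  syncPolicy:
    automated:
      prune: true      # deletions in Git are pruned from the cluster
      selfHeal: true   # manual drift on the cluster is reverted to Git
```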
You can also see that Argo CD has detected that we're using Kustomize for that particular folder and gives me some Kustomize configuration options, so I can modify them if needed. Let's create this, and since we put it on auto-sync, Argo CD automatically starts rolling out the content of that Git repo to the OpenShift cluster. Inside the spring-petclinic namespace, we see that Spring PetClinic has started deploying and is being brought up to a healthy status. Give it a second until the container image is pulled from Quay.
So let's look at the self-healing part of Argo CD and the security aspect that we get. On one side, there is a trace of every change that is rolled out to the cluster in the Git provider, in the Git history. That already gives us a higher level of audit and traceability of what changed, by whom, and when it was rolled out to the cluster.
But on the other hand, Argo CD constantly monitors the state of these deployed objects and compares them to the Git repo, and if there is a drift, it detects it and tries to correct it as soon as possible. That drift might be a rogue change: it might be somebody manually changing the object on the cluster, or changing the image that is deployed to an image that was not supposed to be on that cluster, as we have seen happen throughout the last year.
There have been breaches that were carried out on clusters by just replacing the image, in a way that was not visible in any system. Argo CD would prevent that, because it immediately compares against the Git repo. So let's see: I scale this deployment to three pods, and you can see that it immediately scales back to one pod. And if you looked at Argo CD and the spring-petclinic application for a moment, you might have noticed that it was in the Syncing status.
I can see it in the events, because Argo CD also creates Kubernetes events as it performs operations on the cluster: it had immediately identified that the application on the cluster was out of sync, that somebody had changed something on the cluster. This is critical, because the change was not issued or initiated through the Git repo.
The change was initiated on the cluster, and Argo CD identified and detected that and immediately rolled it back to the state that was available in Git, bringing it back down to one pod. We can make even more aggressive changes: let me, for example, delete this altogether and see what happens. I'm going to delete the deployment; imagine we have some malicious user logged in who is messing with the cluster.
So the deployment object was removed, and you can see that Argo CD again identified it immediately and is rolling it out again to the cluster based on the content of the Git repo, bringing the application back up. It ensures that undesired changes cannot be rolled out to a cluster unless they come through the Git flow and are approved. So it really heightens the level of security and control we have over the changes that are rolled out to the cluster.
As soon as you install the OpenShift GitOps operator, you can see that there is an Argo CD entry added to the Developer Catalog, so you can go and instantiate an Argo CD within any namespace that you want, retrieve the password from the secret similar to what I did earlier, and you are admin of that Argo CD instance, which is confined to that namespace.
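Instantiating an instance from the Developer Catalog boils down to creating an ArgoCD custom resource in the target namespace. A minimal sketch, with illustrative names:

```yaml
# Sketch of a namespace-scoped Argo CD instance; names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd
  namespace: my-team   # the instance only manages this namespace by default
spec: {}               # operator defaults; customize components as needed
```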
Through that Argo CD instance, you cannot install an operator or make any type of cluster-scoped change to the cluster you're running on; you are limited to the namespace you're running in, unless the cluster admin comes and explicitly grants more access to this Argo CD instance. So we can also cover the cases where you want a less-privileged Argo CD instance just for application delivery and, at the same time, instances that are owned by platform operations for managing the cluster.