From YouTube: GitOps Automation on OpenShift - Mark Roberts (Red Hat)
Description
GitOps Automation on OpenShift
Mark Roberts (Red Hat)
This OpenShift Commons Gathering was held on July 6th, 2022 live in London, England
https://commons.openshift.org
Good afternoon everyone. I'm Mark Roberts, a solution architect at Red Hat. I was going to start with a reference to the previous session on CockroachDB, and I thought George Best was on screen. So, let's start with a quote from George Best... let's not start with a quote from George Best: that's not a very good place to go. There aren't many George Best quotes that you could politely use these days. So, let's get straight to it.
This session is on GitOps and OpenShift Pipelines, and our objective here is to create automated build and deploy pipelines. That's quite simple. I've been doing CI/CD work for many years with many different technologies, and it's a fairly simple objective: to create pipelines and processes where we can smoothly create software and get it deployed quickly and easily.
If we add to that some of the GitOps principles, what this leads us to is that we want a system that is described declaratively. We want everything written down about how this system is going to operate. We don't want anything left to chance, and we don't want anything left in a scripted guide that someone has to follow.
We also want a reconciliation loop, so that any tinkering that takes place within a production environment is immediately identified and reconciled back to the desired state of affairs that we have within a Git repository. So we have a single source of truth within our environments, so that there is always a common set of assets deployed.
So we extend our objective: we now want to create an automated build and deploy process in which the assets are stored and managed in a Git repository. They are stored, they are versioned, we have a history, we have an audit trail: we know who has changed what, where and when. In simple terms, we want to take our source content and convert it into something that we can run.
We start off with application source code, on the left-hand side of the graphic, and we're going to go through some sort of compilation process to convert that into something that can run. We're going to run that within a container, so we'll create a new container image, and we're then going to deploy that container image. Now, when we deploy the container image, we're combining it with the various Kubernetes resources that describe how that application should behave.
So we have two different sets of source content that we're managing here. We have the application source code itself, which goes through some sort of compilation process, and we have the Kubernetes resources that we're going to use to deploy the application into a production environment.
Let's focus first on the continuous integration activity and the process that we go through there. We're going to store our source code within a Git repository, and that initial store step is really the git push action that all our developers perform to combine their efforts into source code held within a Git repository.
We're then going to put the resulting image in a registry where we can get access to it to deploy it into our environments. So we've gone from source code to a running container image: we've gone through some sort of compilation and build process, and we've combined the result with a base container image. Now, we could talk about that base container image for most of the afternoon, because where do we get that from? How are we managing those base container images to ensure that they don't have vulnerabilities in them? That's a topic in itself.
Now, when we create those Kubernetes resources, our deployment YAML file has a reference to the container image that we built previously. So we're going to pull that container image from our container registry, deploy it out, and get a running application. Fabulous.
The first technology we're going to talk about is OpenShift GitOps. This is based on the Argo CD open source project. It's delivered as OpenShift GitOps on the OpenShift platform: fully supported, managed and maintained by Red Hat, as a first-class citizen running on your OpenShift environment.
It operates the declarative model, in which you define what resources you want to exist within a Git repository. You then point your Argo CD application at that Git repository, and you tell it where you want that content to deploy. It basically asks: what content have you got, and where do you want to put it? It's as simple as that. It then maintains those resources in that Kubernetes environment and keeps them in synchronization.
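As a minimal sketch, an Argo CD Application resource expressing "what content have you got, and where do you want to put it" might look like the following; the repository URL, path and namespaces are hypothetical:

```yaml
# Hypothetical Argo CD Application: watch a path in a Git repository
# and keep the myapp-dev namespace in sync with it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/org/cd-repo.git   # hypothetical CD repository
    targetRevision: main
    path: environments/dev                          # "what content have you got"
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-dev                            # "where do you want to put it"
  syncPolicy:
    automated:
      selfHeal: true   # reconcile manual tinkering back to Git
      prune: true      # remove resources that are deleted from Git
```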
A
If
someone
tinkers
with
that
environment,
argo
cd
will
spot
that
and
it
will
reconcile
it
back
to
how
it
should
be.
So
we
don't
get
any
of
this
configuration
drift
anymore.
Everything
remains
exactly
as
it
should
really
quick
and
easy
to
install
onto
your
openshift
cluster.
As
an
operator,
you
require
some
elevator
permissions
to
do
that.
It's
not
something
every
every
developer
would
be
allowed
to
do.
But
it's
really
quick
and
easy
to
install
the
next
technology
is
openshift
pipelines
with
the
cutest
little
icon.
There
is
of
any
open
source
project.
It has to be said, I've called that cat Flo, and I've adopted it. I do have a black cat, and she needs a suit of armour, because she's a real wuss: the birds beat her up in the garden. So that's what my cat really needs to look after itself. But anyway: OpenShift Pipelines. It's based on the upstream open source Tekton project, again installed as an operator on your OpenShift cluster, really quick and easy to install, and it's a fabulous declarative continuous integration process.
Whenever I speak to large organizations, and I'm sure a number of them are represented here today, one of the things they complain about is Jenkins sprawl. Now, I'm a fan of Jenkins: I've used it for many years and it's great at what it does. But it's an infrastructure headache for a lot of large organizations to maintain. There are a lot of moving parts to keep up to date, and as a consequence lots of people are looking at alternatives.
This is effectively a completely serverless technology. Some of you might have heard of the serverless capability that runs on OpenShift: that's another presentation we could talk about. This operates in exactly the same way, because you only run anything when you actually have a pipeline to run. Every individual pipeline runs in an isolated container, so you get complete clarity over every instance you run: there's no other content hanging around to pollute the environment and cause issues.
So let's join the two things together and create a CI/CD process on one diagram. Across the top there, we've slotted in the Tekton icon: that's our build process, creating our container image and pushing it into the container registry to be used. Now, the container registry at that point could just be the OpenShift image stream, because at this stage we want to do some analysis to make sure the image is vulnerability free before we move it to our enterprise registry, such as Red Hat Quay.
Now, we'll have a webhook operating on here as well, so that as soon as an action is performed in Git, such as a push of new content, the webhook will trigger the Tekton build to run on OpenShift Pipelines, and that will cause a new build to run automatically. So all the developer has to do is push the source code into the Git repository: that triggers off everything else that happens, and the result is that a new container image is produced.
Now, the deployment is going to perform an image pull: it's going to pull that container image from the container registry into our environment to run it. But what is it going to pull? Every single build generates a brand new container image with a brand new container image tag. So how does the bottom line of the diagram know what to use? It's one continuous process, and the new tag has been produced as part of that build process.
Something has to tell it which particular container image tag we want it to deploy, and that's the next part I want to talk about. Here I'm going to introduce a new technology to this process. It might not be new to everyone in the room, but let me introduce Kustomize. Kustomize is great at managing the configuration of the Kubernetes resources that we want to use as part of our deployment.
So here's our base deployment file. You can see from there that it's a Deployment resource, it has the name myapp, it has a namespace, and it has a specification setting the number of replicas to one. Now, obviously that's a cut-down version: it's not complete, it doesn't have everything in there. I've just cut it down to make the slide look a little bit easier.
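The cut-down deployment file described above would look something like this (abbreviated as on the slide; the namespace is a hypothetical example):

```yaml
# Cut-down Deployment: name, namespace and replica count only.
# A complete file would also carry the selector, labels and pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp-dev   # hypothetical namespace
spec:
  replicas: 1
```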
Now, that's great, because this allows us to create a base set of assets that are applicable to all environments, and then add the tiny little bits of variance that we have for specific environments. You can probably think of lots of ways this could be useful in other areas: perhaps database connections that you need to maintain, or integration points that are different in different environments on the route to live.
This is particularly useful when it comes to our CI/CD process, because we have the ability to edit an image tag with a Kustomize command, and this is something that we can build into our CI/CD process once we've produced the new image tag that we're going to use. So we start off with the file as it is on the left-hand side there.
A
That's
my
customized
file,
then
I'm
going
to
execute
a
command
as
part
of
my
cocd
process
and
that's
going
to
be
customize
edit,
set
image
and
that's
going
to
have
an
image
full
name
and
tag
on
the
end
of
it.
And
that
will
add
this
content
to
my
customize
file.
It's
not
modified.
My
deployment
asset
at
all,
it's
not
operating
anything
like
said
or
any
other
sort
of
editor
to
modify
the
actual
deployment
file.
What I'm modifying is the patching mechanism that will be applied to my deployment files, and the result is that we have an images section added to the kustomization file. It identifies the name, and it identifies the new tag that should be used.
So now, when I use this kustomization file, that patch will be applied. There are two ways I can operate with this particular kustomization file: I can either use kustomize build, which will generate a new set of YAML files that I can apply, or I can actually use just the oc command.
The OpenShift command line interface has a switch, a -k switch. Some of you are probably familiar with using oc apply -f and then a file name that you want to apply. Swap the f for a k: oc apply -k and then a location, and it goes and looks for a kustomization file. If it finds one, it processes that kustomization file and picks up the resources it identifies.
It will apply any patches that are included in that kustomization file, and it will insert the container image reference into the deployment YAML file, which is great. That's our patching mechanism: that's how we get things patched.
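The two ways of consuming the kustomization file can be sketched as follows; the directory path is hypothetical:

```shell
# Option 1: render the patched YAML yourself, then apply it.
kustomize build environments/dev | oc apply -f -

# Option 2: let the CLI find and process the kustomization file
# in one step with the -k switch.
oc apply -k environments/dev
```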
By doing this, I can then commit this version of my kustomization file to my continuous delivery Git repository, so it's baked in stone: it's an audited change, and we've got it preserved for ever more. But of course, I've now made a change to a Git repository, and what does that do?
A
Kicks
off
operations
in
argo
cd,
because
I'll
go
see
these
spots,
that
something
is
new
and
different.
It
goes
and
applies
that
change
for
us
and
that's
what
we
look
at
now
so
right
over
there.
On
the
left
hand,
side
we've
got
our
development
process
and
we're
starting
off
with
cloning,
the
source
code,
building
a
jar
file
in
this
this
instance
of
the
application
and
creating
my
runtime
container
image.
A
That's
my
runtime
container
image
with
my
brand
new
tag,
I
update
and
commit
my
customize
file
within
my
development
branch
or
my
development
directory
of
assets
and
that
will
then
cause
argo
cd
to
trigger
its
deployment
operation.
So
brilliant
I've
used
the
tecton
process.
On
the
left
hand,
side
I've
used
an
argo
cd
process
there
to
update
my
deployment.
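The clone/build/image steps above can be sketched as a Tekton Pipeline, assuming the git-clone and buildah tasks from the Tekton catalog are installed; names and parameters are illustrative, not the speaker's actual pipeline:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: myapp-build          # hypothetical pipeline name
spec:
  workspaces:
    - name: shared           # holds the cloned source between tasks
  params:
    - name: git-url
    - name: image
  tasks:
    - name: clone
      taskRef:
        name: git-clone      # Tekton catalog task
      workspaces:
        - name: output
          workspace: shared
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      runAfter: ["clone"]
      taskRef:
        name: buildah        # Tekton catalog task: build and push the image
      workspaces:
        - name: source
          workspace: shared
      params:
        - name: IMAGE
          value: $(params.image)
```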
Now, I'm doing this within my continuous delivery repository, within a Git repository, and I'm actually doing this on a QA-ready branch. I'm going to bring in one of the other GitOps principles, which is that we should use pull requests as the mechanism by which we review changes; then, by merging the pull request, we are accepting that change. We are merging it to a branch which is observed by Argo CD, and Argo CD will pick up those new changes for us. So let's see how this pans out.
Clearly I've not got this clicker in the right place. There we go. I can review the pull request and I can merge the pull request (whoops, I've gone too far), and that causes Argo CD to pick up those changes from my QA activity and deploy them out. Now, that's the simple view of the pull request: create the pull request, review it, merge the changes, Argo CD picks up the change. Sadly, it's a little bit more complicated than that: there are a few more moving parts to it.
So now I've moved that to the top of the screen, because there are extra things I want to add here, and this is where I want to add in my security analysis phase. I'm going to do an image build check, and my image validation is a gate point: it's in red. If I fail my image build check, if there are vulnerabilities found within this container image, we go no further than that point. But assuming the vulnerabilities don't exist and it's a clean image, we carry on.
That's a new commit: I've made that change within that particular QA-ready branch, it's a new commit, and I set the commit status on that particular commit. Now, that does something special in Git, because it stops you from being able to merge the pull request associated with that commit. So effectively I've now put a blocker on the merge process: we can't accept this change at the moment.
A
What
we're
then
going
to
do
is
I'm
going
to
clone
those
resources
into
a
new
directory,
I'm
going
to
clone
the
content
that
is
associated
with
that
pull
request,
I'm
going
to
configure
my
deployment
assets,
that's
where
I
run
my
customized
build
operation
and
then
I'm
going
to
do
a
resource
deployment
check.
I
want
to
validate
those
kubernetes
resources
pass
our
security
standards.
A
It's
not
just
good
enough
to
look
at
the
image
itself,
the
container
image
to
see
if
it
has
vulnerabilities,
we
can
shoot
ourselves
in
the
foot
at
the
11th
hour
with
the
resources
we
use
to
deploy
this.
Are
we
exposing
any
secrets
that
we
shouldn't?
Do
we
have
resource
constraints
on
cpu
and
memory
for
how
we're
going
to
deploy
this
up
this
application?
So we use Advanced Cluster Security to perform that resource deployment check, in the same way that we use Advanced Cluster Security to perform our image build check at the top, and that's another gating point: we don't go any further if that fails. But assuming it's successful, we then set the commit status to success on that commit, and that allows the merge operation to take place.
So let's move this forward a little bit now. Let's take the process that I've talked through in the centre there, where we're creating the pull request, using the statuses on the commits and doing our resource deployment checks, and factor that out and call it my QA deployment process: a sort of wrapped-up process that I can reuse. Then, on my final slide, I'm going to work that through, show where it fits in, and show how we extend this into production.
So we have our development process going straight away across the top. We update and commit our kustomization file for development: no pull requests, a straightforward change on the main branch of that repository, which immediately causes Argo CD to pick up that change for us and push it into development. Because that's what we want: we want things to go through quickly in development.
We then bring in the image build check and go into that larger box that was on the previous slide: that's our QA deployment process. Now, if that's successful, what will happen is that we've got that new commit, with the merge process carried out, committed into our Git repository, and that will trigger Argo CD to deploy into our QA environment.
So I'm going to push that image to my container registry, Quay. Now, Quay is great: one of the things it has is webhooks. So the result of pushing a new container image into Quay can be that Quay triggers a webhook, and what does it trigger? It triggers an OpenShift Pipelines process: it triggers a Tekton process again.
A
That
is
an
exact
copy
of
that
process.
We
had
for
qa,
but
now
it's
operating
on
our
production
environment
and
it's
going
to
deploy
into
production,
so
it's
operating
on
production
branches
it
can
include
if
we
want
to
any
sort
of
call
out
to
something
like
a
service,
now
environment
and
create
tickets
in
service.
Now
that
need
to
be
approved,
there's
lots
of
other
wrappers.
A
What
does
that
do?
Ultimately,
through
someone's
human
intervention,
there
will
be
a
merge
operation
taking
place
to
merge
the
content
for
that
production
deployment
after
someone
has
done
a
human
review
of
those
contact
content
and
it
will
merge
it
in
git
that
will
trigger
our
go
cd
to
deploy
to
production.
A
So
that
concludes
what
I
wanted
to
show
you
this
afternoon.
I
hope
that's
been
useful.
I
will
be
around
later
on
during
the
breaks
and
during
the
the
beers
this
afternoon
I
am
assuming
there's
going
to
be
beers.
You'd
be
very
disappointed.
Now,
if
I've
sold
you
that
and
there's
no
beer
when
we
get
out
there,
so
I
think
I
need
to
go
figure
that
out,
but
I'm
making
a
bit
of
a
red
hat
assumption.