Description
Immutable Deployments – the Goal of the GitOps Movement - Tracy Ragan, DeployHub
The GitOps movement has introduced a new model for supporting continuous deployments. At the core of GitOps is the process of creating immutable deployments that can be repeated without the potential interference of a human. In other words, a single source of truth that can be versioned over time to track configuration changes. In this session, we will explore the basic concepts of GitOps and discuss how the Ortelius open source project fits into this model to deliver a GitOps approach at scale.
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
My name is Tracy Ragan. I am a microservice evangelist, the CEO of DeployHub, and I'm also serving on the CDF board as the member representative, so thank you to all of you who voted for me.
I've been around a bit; this is not my first open source foundation to work with. I was also on the board of the Eclipse Foundation.
I'm a DevOps Institute ambassador, and I am the community director for an open source project called Ortelius. Those of you who know Ortelius will know that our logo is an alien, and I like to talk to people on this subject. If you ever want to chat, there is my LinkedIn address. Please reach out to me; I'll send you an invite for a 15-minute coffee chat, and we can talk about anything from standard lifecycle management to modern-day GitOps and building DevOps pipelines.
It's good to always understand things from the beginning, so let's start with where GitOps was born. I first started hearing about Infrastructure as Code several years back, and that movement was all about moving away from imperative scripts to stand up your environment, whether it be a Kubernetes environment or otherwise.
Even in, say, a VM experience, the idea was that we were switching from an imperative process to more of a declarative process where you have operations by pull request. This introduced the concept of an operator, with the operator having the intelligence to understand how to act upon a declarative script: you declare a future state, and the operator manages that script and determines the future state of your infrastructure. That's where I began seeing the use of an operator, and the goal was to create a hermetic operations process.
There were really big benefits to moving away from these imperative processes, namely not relying on a single human to be able to stand up an environment. When you have developers and testers wanting these environments, there needed to be an easy way for everybody to stand one up on their own, and Infrastructure as Code allowed that to happen.
Now, in this type of environment, you have an operator that acts upon the script. As soon as you check something into your Git repository, you point your operator to track it, and the operator acts upon that script. So it's locked down; it's hermetic. You can't change it once it's in, and if it is changed, the fact that it was changed is tracked.
A
This
opened
up
a
huge
door
for
us
in
terms
of
how
to
start
managing
these
environments
and
it
led
us
away
from
being
dependent
on
a
particular
human
to
do
it.
Instead
of
a
human,
we
were
now
relying
on
an
operator
and
the
idea
of
triggering
an
action
upon
a
check
and
we've
done
for
quite
some
time.
Everybody
knows
that
continuous
integration
was
it
triggered
from
a
check-in
from
a
for
for
a
source
code
and
emerge.
So then, somewhere along the road, somebody said: well, if we can do this with day-one events, and when I say day one I mean standing up the infrastructure, why not day-two events, meaning updating applications inside that environment? Why can't hermetic application deployments be a reality? So that IaC model started being applied to deploying what I call a containerized application. Now, a containerized application is still a monolith: you're putting your entire application into a container, including anything you need to run it, and you push that out.
It was a small addition to the Infrastructure as Code experience, and we can give Weaveworks some credit for taking charge on this one. They are now known as the GitOps company, or the first company to really start talking about GitOps. So we should acknowledge Weaveworks for their brilliance in saying: hey, if we can do this on day one, why not always be able to do day-two events? And a shameless plug: we had a GitOps Summit as a day-zero event.
If you missed it, the video recordings should be posted out there for you after this; I'm sure there will be an announcement made about where you can see this content online and on demand.
I highly recommend you watch our keynote. It includes Cornelia Davis, Dan Lawrence, and Dan Garfield, as well as myself, talking about GitOps and how it got started. Cornelia is brilliant at explaining these things; anything she does, I highly recommend watching.
Let's talk about the architecture in a GitOps environment in your cluster; we're just going to refer to it in this model. People will tell you that you can use GitOps in many ways, and we don't have enough time to talk about all the ways you could probably apply it, so we're going to use this common model. In this environment, you have a GitOps operator running in your cluster, and that GitOps operator looks at two things.
Now, on the repository side, in this example I'm using Git, you're going to have two different repositories, and this is how most setups are. You have what we call your code repository, which is what you're used to: you're checking code in and out of your code repository. But you're also going to have a second repository, and that repository is your environment repository.
Now, the beauty of this also means that if somebody manually tweaks the cluster, the GitOps operator is going to fix that for you, because it's going to go: oh, I'm out of state, so I'm going to go back, look at my desired state, and correct it.
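That drift-correction loop can be sketched as a toy model. The names and dict-based "cluster state" here are invented for illustration; real operators such as Flux or Argo CD work against the Kubernetes API, not plain dicts.

```python
# Toy model of a GitOps operator's reconcile loop: compare the desired
# state from Git against the live cluster state and emit corrections.

def reconcile(desired: dict, live: dict) -> dict:
    """Return the corrections needed to bring `live` back to `desired`."""
    corrections = {}
    for resource, spec in desired.items():
        if live.get(resource) != spec:
            corrections[resource] = spec  # out of state -> restore from Git
    for resource in live:
        if resource not in desired:
            corrections[resource] = None  # not declared in Git -> remove
    return corrections

# Someone manually tweaks the cluster...
desired = {"candy-store-frontend": {"image": "frontend:v1.2"}}
live = {"candy-store-frontend": {"image": "frontend:v1.3-manual-hack"}}

# ...and the operator detects the drift and restores the declared state.
print(reconcile(desired, live))
```

The point of the sketch is that the correction always flows one way: from the Git-declared state to the cluster, never the reverse.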
So that is why it becomes hermetic, or immutable: because you're always going back to your Git repository, and your GitOps operator is always checking against it, making sure that the desired state, as defined in your YAML file stored in the Git repo, is what the cluster looks like. Perfect. Brilliant.
So let's go through that again, this time in steps. First, you as a developer make an update to the code and commit those changes to Git. You create a new container image and register it, which then creates a new SHA, or in particular a new tag. That tag makes that particular container unique, and that tag is what ends up in the YAML file.
So the developer updates the YAML file with the new tag and then commits it back to the Git environment repository. These are the simple steps; there may also be environment variables that have to be updated, depending on what the update needs, so there may be some environment variable changes that occur as well. Then the GitOps operator sees the new commit, updates the cluster with the new container, and we are done. The solution is hermetic.
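The tag update in those steps is just a one-line change to the deployment YAML. A minimal sketch of doing it programmatically, using a simplified manifest and a hypothetical registry name:

```python
import re

# Simplified deployment manifest; the registry and image names are made up.
MANIFEST = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: candy-store-frontend
spec:
  template:
    spec:
      containers:
        - name: frontend
          image: registry.example.com/frontend:v1.1
"""

def update_image_tag(manifest: str, new_tag: str) -> str:
    """Rewrite the image tag so the GitOps operator sees a new desired state."""
    return re.sub(r"(image:\s*\S+:)\S+", r"\g<1>" + new_tag, manifest)

updated = update_image_tag(MANIFEST, "v1.2")
# Committing `updated` to the environment repository is what triggers
# the operator to roll the cluster forward to v1.2.
```

Everything else in the flow, the commit, the merge, the operator sync, hangs off this one changed line.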
So I talked a lot about scaling GitOps and whether it can scale. From a purely operator-and-cluster kind of configuration, it scales marvelously: each cluster can have its own GitOps operator, so nothing is overloaded in any way. It can scale to hundreds, literally thousands, of clusters without any real problem.
So one of the benefits, on the purely technical, architectural side of the house, is that it scales very, very well. You're not trying to create a distributed deployment model where you're building out all these agents; you basically just have an agent running in each cluster that monitors those repos. Simple. But there is a human side to everything, and we have to question any time we make changes like this. This really speaks to the continuous deployment part of the continuous delivery pipeline.
You know, you have continuous build, you have continuous test, and you have continuous deploy, and you have to deploy to dev, test, and prod, because you can't test without something deployed. You can't even do a unit test without deploying something, so deployment is a key part of our DevOps pipeline. I often see presentations that show delivery as the last step: you have build, test, and then deliver. Well, deployment is a part of all of those steps, and the GitOps process fits into all of those different stages.
In the example I gave you, we had a containerized application with one big YAML file that defined its deployment state, which is what we want the GitOps operator to read and maintain for us. When we start building it out into a lifecycle model, we're going to start creating additional branches. We have dev, test, and prod, for example: developers would be updating the development branch, production would be maintaining the production branch, and test would be updating the test branch. Now, they're different for many reasons. Obviously you're running different versions in all of these, so you have different SHA tags in the YAML file. You also have different variables: production variables, how production is set up to execute that particular container, may look different than development. So there are differences between them, subtle, small differences.
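Those per-branch differences boil down to a small set of values, image tags and environment variables, layered over one shared shape. A sketch with invented tags and variable names, just to make the idea concrete:

```python
# Hypothetical per-environment values: each Git branch (dev/test/prod)
# carries its own image tag and its own variables for the same service.
ENVIRONMENTS = {
    "dev":  {"tag": "sha-4f2a91c", "replicas": 1, "log_level": "debug"},
    "test": {"tag": "sha-9b01d77", "replicas": 2, "log_level": "info"},
    "prod": {"tag": "sha-1c44e02", "replicas": 6, "log_level": "warn"},
}

TEMPLATE = """\
image: registry.example.com/cart-service:{tag}
replicas: {replicas}
env:
  LOG_LEVEL: {log_level}
"""

def render(env: str) -> str:
    """Produce the environment-specific YAML fragment for one branch."""
    return TEMPLATE.format(**ENVIRONMENTS[env])

print(render("prod"))  # subtle differences from dev: tag, replicas, variables
```

The structure is identical across branches; only the values differ, which is exactly why the files drift apart in small, easy-to-miss ways when maintained by hand.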
Now, in some cases you may just have one trunk with multiple YAML files that the operator interrogates. Regardless, you're going to have more than one YAML file that you're trying to manage. You can't get away from it; that's just the way it is, because that's the way we develop. We don't develop in production. So you're going to have separate ones. And then, let's talk about GitOps and microservices.
So when we start stretching our arms out and giving a microservice architecture a big hug, it disrupts many parts of our DevOps pipeline. It disrupts pretty much everything, and it disrupts GitOps as well.
It's going to complicate not just our GitOps model; it complicates a lot of pieces. But in particular, there are challenges you will find with the GitOps model in the number of YAML files you will eventually need to monitor, because in a microservice architecture you're going to start breaking up that application YAML file. Each one is going to contain a lot less and make fewer changes, but you're going to have more containers to manage.
So let's just think about some ways that people are setting up their microservice architecture. I know these are very simple images here, but they're intended for conversation.
Let's say, for example, we're running a company that manages different stores. We have a candy store and we have a hipster store. The candy store and the hipster store share services: they share the shipping, payment, and cart services, for example, but they have their own front ends. The application team, more than likely, would be the team managing the YAML file and the deployment of the front end, while potentially other teams could be pushing out these shared services.
A
Somebody
who's
writing,
microservices
for
shipping
functions,
payment
functions,
card
service
functions,
maybe
pushing
those
out
independently.
So
now,
we've
breaking
up.
We've
broken
up
this.
This
monolithic
and
we're
starting
to
share
services
as
a
company,
which
is
what
a
micro
service
architecture
should
do,
which
means
that
we're
going
to
have
more
yaml
files
instead
of
just
having
one
to
manage
the
entire
candy
store
we're
going
to
have
ones
that
manage
individual
pieces
of
it.
A
Now
it's
true
you
could
you
could
manage
the
the
candy
store
in
this
configuration
with
one
big
yama
file
that
addressed
all
of
them,
but
that
may
hinder
your
experience
with
microservices
a
bit,
because
what
you
want
to
be
able
to
do
is
not,
as
an
application
team,
worry
about
the
shipping
payment
and
card
service,
but
let
somebody
else
worry
about
that.
For
you.
A
Now,
there's
other
ways
to
build
your
environments,
your
kubernetes
environments-
and
this
is
more
of
a
service.
A
shared
service,
oriented
architecture
whereby
the
cart
service,
shipping
and
payment
service
are
are
in
their
own
name
spaces
and
they're.
Communicating
with
the
candy
store
in
the
hipster
store.
The
git
ops
operator
would
know
about
all
of
those
and
it
would
be
able
to
update
all
of
those
at
any
point
in
time,
there's
going
to
be
more
yama
files.
A
Now
I'm
showing
you
just
for
one
specific
environment,
and
you
can
see
that
we
start
building
out
more
and
more
yaml
files
to
manage
this.
While
this
is.
This
is
a
good,
a
bad
good
thing.
I
guess
you
say
this
is
something
that
we
can
we
can
fix.
We
can
start
fixing
the
way
that
we
manage
these
different
yaml
files
from
the
human
side,
but
keep
in
mind
as
we
build
these
environments
out.
So if those environments have tweaks between them, potentially you're going to have different YAML files for each of them, so it can get really big; we can end up with a lot of YAML files. Lots of deployment files to manage manually is the point here.

So even though we consider GitOps an automated process, you still have a human doing the push to Git. They're still making the updates to the YAML file that carries the tag that came from the SHA. They're still having to do merges when there are merging requirements because two people updated the same thing, just like in code. And as we expand into microservices and add this to the CI/CD pipeline, we just have more things to think about. The number of files can become a little bit onerous.
So while we want to continue with this incredible hermetic solution, which is really the heart of what GitOps is all about and what Infrastructure as Code taught us, we still need a way to automate. We still want the CI/CD pipeline to be automated. We don't want to wait for someone to create a pull request, or do a merge, or do an approval. These things need to be part of the automation, and that's the front side of GitOps.
A
It's
not
get
ops
itself,
but
it's
still
key
part
because
we're
talking
about
moving
from
a
ci
cd
pipeline
that
we
didn't
really
think
about
having
to
stop.
This
is
automated.
Maybe
you
had
a
gated
gated
approval
to
get
to
production,
but
it
was
just
an
approval
now
we're
talking
about
stopping
the
process
and
doing
something
outside
of
the
ci
cd
pipeline
and
checking
in
a
check
updating
a
yaml
file
with
the
correct
the
correct
tag
and
pushing
it
back
and
then
doing
a
merge
and
then
doing
an
approval.
So this is a topic that we have selected in the Ortelius project, to really address automating the front end of GitOps. And just so you know, Ortelius is an incubating project with the CD Foundation; it became an incubating project back in December. We have a fabulous team of contributors: a little over a hundred people are part of the group, and I'd say we have about 30 really committed developers who are starting to really bite down on this particular problem and address it.
A
Ortilius
is
a
microservice
management
platform,
it
versions
and
tracks
microservices,
along
with
their
blast
radius
and
their
inventory
across
the
clusters.
It
really
provides
a
proactive
view
of
microservice
architecture
and
as
it
changes
over
time.
As
a
result,
we
have
a
lot
of
the
data,
that's
needed
to
generate
the
information
in
a
yaml
file
and
in
many
ways
we
are
already
doing
that
and
we're
tracking
things
across
the
cluster,
so
you're
able
to
see
it
in
a
way
that
integrates
in
with
git
but
gives
you
some
visibility
and
automation.
So let's just think about what Ortelius does today. Today, Ortelius gets woken up at the point in time that a new service has been registered; at that point it gets triggered. It captures the SHA and the tag for each version of an image that has been registered; we do it every single time a new image is registered. We grab that because it's a release candidate, and if it's a release candidate, it's probably going to go out. Once we grab that information, we then listen on the back side to make sure that it went, and we track what was deployed across all your clusters.

So that's how we track what we call the blast radius. We know if a single container has been updated, we know the applications that are consuming it, and we know where it's been deployed, which means we know which application versions have now become new, because we know which container version is new. It's a whole new world in configuration management.
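The blast-radius idea can be sketched as a simple lookup: given which applications consume each shared service and where those applications run, a new container version fans out to a set of affected application versions and clusters. This is a toy model with invented data, not Ortelius's actual schema:

```python
# Which applications consume each shared service, and where each
# application is deployed (invented example data).
CONSUMERS = {
    "payment-service": ["candy-store", "hipster-store"],
    "cart-service":    ["candy-store", "hipster-store"],
    "candy-frontend":  ["candy-store"],
}
DEPLOYED_TO = {
    "candy-store":   ["us-east-cluster", "eu-cluster"],
    "hipster-store": ["us-east-cluster"],
}

def blast_radius(service: str) -> dict:
    """Applications (and their clusters) affected by a new version of `service`."""
    apps = CONSUMERS.get(service, [])
    return {app: DEPLOYED_TO.get(app, []) for app in apps}

# A new payment-service container implicitly creates new versions of
# both store applications, in every cluster that runs them.
print(blast_radius("payment-service"))
```

Even in this toy form you can see why one container update means more than one YAML file to touch.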
We track the metadata and we put it in a central catalog of microservice configuration management; it is our core data store. And, as I said, we like to separate the data from the definition, and we store the key-value pairs so that we can begin generating these kinds of files on the fly. We store all the SHA information, and this is the data that we will use to automate.
So there are two generic ways that we're looking at adding this to the Ortelius roadmap, and a group of brilliant minds out of Australia is specifically working on this project for us. One would be a Helm GitOps operator, and the other uses a Kubernetes deployment YAML. Either way, Ortelius would generate the values for the YAML file for the correct environment repository. It would then check that new YAML file into the environment repository, and after check-in it would do all the other manual steps.
It would just look a little different, with a bit of a different structure. Again, after check-in we would issue all the pull requests to stage the change, and do the automated approval and merge, if that is allowed. We're hoping that it can be, because that's the key to making sure we're automating all of these parts and pieces. But if you're still looking for that gated approval, then the approval is still there and you can still do that.
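Automating the front end amounts to scripting the steps a human does by hand today. A sketch of that sequence as plain data, with the gated approval as the one optional human step; this is my own illustration, not how the Ortelius roadmap work is necessarily structured:

```python
def gitops_front_end(service: str, new_tag: str, gated: bool = False) -> list:
    """The manual front-of-GitOps steps, expressed as an ordered plan."""
    steps = [
        f"generate YAML values for {service} with tag {new_tag}",
        "commit the new YAML to the environment repository",
        "open a pull request to stage the change",
    ]
    # A gated approval stays as a human step; everything else is automated.
    steps.append("wait for human approval" if gated else "auto-approve")
    steps.append("merge the pull request")  # the GitOps operator takes over here
    return steps

for step in gitops_front_end("cart-service", "sha-9b01d77"):
    print("-", step)
```

Once the merge lands, the normal GitOps machinery, the operator watching the environment repository, does the rest.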
So what would it look like once we do the integration, which we're hoping to have really begun working on in the late summer and fall? We would still be woken up just the same way as we are today: we would be triggered by a new image being registered to a container registry. But instead of actually doing a push to a deployment solution, we would call what we call a custom action that would generate the YAML file based on it.
We would then track that we actually delivered it to the Git repository, so we know which environment we actually delivered it to. So the data is pretty much the same: you're still using your Git and GitOps to manage those hermetic deployments, and we're adding another level of security by generating these pieces. And you can always go and see exactly which SHA or which tag was used, because we're going to track all of those pieces in the database, separate from the YAML file itself.
So thank you very much for taking the time to hear my thoughts on GitOps, the front end of GitOps, and how we need to maintain that hermetic deployment process. It is brilliant. We need to be able, in the future, to have a continuous delivery pipeline that really deploys in a repeatable way between dev, test, and prod, and we've been working on this problem for quite some time now. Not everybody actually uses their CD pipeline to deploy all the way to production; most are doing dev and test.
A
We
are
now
at
a
point
in
time
that
we
need
to
start
thinking
about
really
automating
these
pieces
because,
as
you
move
into
microservices,
there's
a
lot
of
files
to
be
pushing
it's
not
about
you
know
we're
doing
really
great
indoor
metrics,
because
we're
deploying
once
a
day
you
should
be
deploying
all
day
long,
so
those
metrics
sort
of
change
too
microservices.
They
disrupt
everything.
Yeah, they are pretty out-there concepts, right? Configuration management has always been a challenge for people, and microservices break the puzzle up into tiny pieces and make CM even harder.
Is my experience in distributed monoliths common? I'm not sure what that means, but I have been in the business for quite some time. A distributed monolith: I'm not sure what you're referring to there, whether it's being distributed to multiple platforms, or you're talking about breaking up a distributed system and starting to implement a domain-driven design.
A
Things
that
have
to
be
deployed
at
the
same
time,
we
call
that
a
release
and
yes,
a
release
train
is
an
important
part
of
monolithic
deployments.
I'm
not
sure
how
that's
going
to
look
in
a
microservices
world,
but
I
believe
that
we
do
have
some
customers
who
are
asking
how
to
coordinate
multiple
things.
There is an interesting CNCF effort on that. Perfect, Brian, do elaborate on Discord. But on that topic, there's a CNCF working group through the Application Delivery SIG, called application enablement, that talks about coordinating application releases, database releases, and infrastructure releases, which we have termed cooperative delivery. If that is something of interest to you, consider looking at the App Delivery SIG at the CNCF. Okay, thanks, all; we are going to wrap up, so we will see you later at, I guess, the happy hour. Thanks, bye-bye.