Description
Never make a manual change again. In this talk we'll show how to use GitOps to achieve reliable and fast releases time and time again. Rather than pushing changes, Argo pulls and syncs code changes to a cluster. When combined with Codefresh's CI/CD components, we get something magical.
Presented by:
Brandon Phillips, Solutions Architect @Codefresh
Julius: Okay, let's go ahead and get started. I'd like to thank everyone for joining us for today's webinar, GitOps: Continuous Delivery with Argo and Codefresh. I'm Julius Rosenthal, and I'll be moderating today's webinar. I'd like to thank our presenter today, Brandon Phillips, Solutions Architect at Codefresh. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can.
Brandon: All right, thank you so much, Julius, I appreciate the introduction. First of all, I just want to say thank you from everyone at Codefresh for joining the webinar. We think it's an extremely interesting topic, so I hope you enjoy it. As Julius mentioned, this is a webinar on GitOps: continuous delivery with Argo and Codefresh.
I have a history in software development and software architecture over the last 10 years or so, heavily specialized in Kubernetes. For an agenda overview: we're going to have a light intro into Codefresh, because you are going to be seeing it quite a bit during the demo periods, and then we'll cover GitOps principles and why we think they're important. We'll dive into Argo CD and do a demo around GitOps, and then we'll dive into Argo Rollouts and demo a canary deployment. We are doing live demonstrations, so hopefully the demo gods are with us today. Oh, and one other thing to call out here: we do have the links to the GitHub repos on this slide, so you'll have them once the slides are shared.
B
If
you
want
to
take
a
look
at
the
applications
in
more
detail,
you
can
it's
a
very
simplistic
application
for
a
web
service
and
then
just
the
necessary
code
to
create
an
application
manifest
and
an
argo,
rollout
and
I'll
show
you
little
bits
and
pieces
of
that
as
well
and
so
code
fresh
is
an
enterprise
ci
cd
solution.
So
we
are
a
kubernetes
native
solution.
We
provide
container-based
pipeline
processes
and
we
give
you
a
way
to
cleanly
tie
into
your
kubernetes
ecosystem
using
things
like
helm
and
really
advanced
pipeline
processes.
B
So
one
of
the
reasons
that
we're
having
this
talk
today
is
we
have
decided
to
heavily
invest
in
argo
and
integrate
it
with
our
tooling.
We
see
a
great
opportunity
there
to
make
our
cd
solutions
even
more
robust
by
integrating
with
argo,
and
so
you
will
be
seeing
our
products.
So
that's
why
I
wanted
to
give
this
little
intro
here,
and
this
is
where
the
pipelines
are
built
out
in.
So
you
will
see
a
little
bit
of
that
and
if
you
have
questions
about
it,
you
know
feel
free
to
ask
at
the
end.
B
So
git
apps
is
as
an
a
goal
really
is
something
I
want
to
talk
about,
because
it's
important
to
understand.
You
know
why
we
think
get
ops
is
fundamental
to
good
ci
cd
practices.
So
it's
making
sure
that
your
process
to
deliver
your
code
is
repeatable.
You
know
that
you're
applying
your
changes
in
whatever
environment.
B
You
want
to
push
this
out
to
you
in
the
same
way,
every
time
in
some
cases
you
can
even
push
out
your
entire
stack
like
code
fresh,
for
example,
we
use
pipeline
processes
and
helm
charts
to
be
able
to
deploy
out
our
entire
stack
at
the
same
time
in
a
declarative
manner.
So
this
is
extremely
important
to
have
that
repeatable
process,
because
you
have
less
moving
pieces
that
way
it
should
be
predictable,
and
this
goes
right
back
to
the
repeatable
portion
as
well.
B
So
you
should
know
when
you
put
in
that
pull
request
exactly
what
the
end
state
of
the
application
is
going
to
look
like.
So
having
that
declarative
configuration
with
a
known
end,
state
is
extremely
important,
knowing
that,
if
I
make
this
change,
my
replicas
have
to
be
at
four
instead
of
the
two
that
it's
at
now.
Otherwise,
this
is
a
failed
deployment
process,
so
using
declarative
configuration.
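As a minimal sketch of what that declarative end state looks like in Kubernetes (the names and image here are hypothetical, not from the demo repo):

```yaml
# Desired end state, stored in Git: four replicas of the web service.
# If the cluster ends up at any other count after sync, something failed or drifted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web            # hypothetical name
spec:
  replicas: 4               # was 2; this pull request raises it to 4
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: example/demo-web:1.1.0   # hypothetical image
```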
B
Therefore,
it's
going
to
be
a
predictable
deployment
and
if
it
did
not
match
that
end
state,
then
you
know
it
was
a
failed
process,
and
that
gives
you
the
opportunity
to
roll
back,
and
the
same
goes
for
auditability
as
well,
so
making
sure
that
every
change
that
you
push
into
your
production
or
your
staging
or
even
your
development
environments
are
historically
traceable
back
through
the
code
commit,
but
also
all
the
way
up
through
all
the
deployment
process,
and
this
becomes
crucial,
especially
in
organizations
who
are
trying
to
debug
or
understand
an
issue
that
happened
in
production.
B
To
know
that
you
know
two
weeks
ago
at
1pm.
I
know
exactly
what
version
of
my
code
was
running
in
production
and
I
can
look
exactly
at
the
declarative
configuration
that
specifies
not
only
my
application
code
but
also
the
infrastructure
code
around
it.
So
you
should
have
you
know
a
declarative
configuration
for
both
your
infrastructure
and
applications
so
that
you
can
recreate
environments
if
necessary
in
a
lower
environment,
so
that
auditability
is
really
key
to
quickly
triaging
and
dealing
with
issues
as
they
come
up
and
then
making
sure
it's
accessible.
B
You
know
I
really
like
that
you
can
control
this
with
pull
requests
so
that
you
know
not
only
is
it
very
accessible
for
basically
anyone
who
can
use
get
tooling
or
in
some
cases
you
know
they
can,
if
they're
not
comfortable
with
it
say
you're
using
github,
you
can
just
go
on
to
github,
modify
the
file
and
it'll
even
prompt.
You
know,
instead
of
delivering
this
to
master,
do
you
want
to
create
a
feature
branch
and
create
a
pull
request
for
that?
B
You
know
having
those
easily
accessible
options
to
expose
it
not
only
to
developers
who
are
comfortable
and
happy
in
tooling
like
this,
but
maybe
some
of
the
you
know
groups
like
security
or
server
teams
that
this
isn't
their
bread
and
butter,
but
they
can
get
in
there
and
easily
make
these
changes
so
having
that
is
a
huge
win,
the
other
thing
would
be
putting
it
in
a
pull
request
means
that
you
can
refine
and
control
the
process.
B
So
what
you
can
do
is
you
can
actually
create
separate
pull
request,
approver
groups
based
on
areas
of
responsibility.
So
if
you
have
application
code,
that's
part
of
your
pipeline
process
to
deliver
and
you
have
infrastructure
code.
That's
part
of
your
pipeline
process
delivery.
You
might
have
different
approvers
for
each
of
those
areas
based
on
who
owns
the
cloud
infrastructure
versus
who
owns
the
development
applications.
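On GitHub, one common way to wire up per-area approver groups is a CODEOWNERS file; the paths and team names below are hypothetical:

```
# .github/CODEOWNERS -- hypothetical teams and paths
# Infrastructure manifests require a platform-team review...
/manifests/   @example-org/platform-team
/terraform/   @example-org/platform-team
# ...while application source requires an app-team review.
/src/         @example-org/app-team
```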
It is extremely important, and it leads to some really good benefits. The first is that Git should really be your system of record, and that means the desired state of configuration for your applications, your infrastructure, and whatever else is a moving part of the puzzle in your deployment process and your continuous integration process.
It's also making sure that everything is driven by pull requests, and I wanted to highlight this again because it's important to understand: we know that occasionally some heroics happen, where you might go in and backdoor your Kubernetes cluster to make a change, to kind of make things happen. I understand that on occasion that still ends up being necessary, but the goal at the end of the day is to push that through a configuration change.
B
So
there's
that
level
of
traceability
in
there,
because
realistically,
the
next
time
you
go
to
deploy
that
application.
If
you
don't
have
a
historical
record
of
those
changes,
you're
likely
to
end
up
in
the
exact
same
situation,
and
then
the
cluster
state
always
needs
to
match
the
git
state
and,
more
importantly,
the
tooling
that
you're
working
with
should
be
able
to
understand
and
monitor
for
configuration
drift.
So
if
you
have
tooling
that
is
aware
of
this,
then
you
don't
need
to
do
the
lag
work.
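As a command sketch, Argo CD's CLI can surface that drift on demand; this assumes a logged-in `argocd` CLI and an application named `gitops-app`, so it isn't runnable without a live Argo CD install:

```shell
# Show any difference between the Git-declared state and the live cluster state.
argocd app diff gitops-app

# The exit code is non-zero when drift exists, so a pipeline can gate on it.
argocd app diff gitops-app && echo "in sync" || echo "drift detected"
```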
B
You
don't
need
to
do
the
manual
testing
and
checking
to
make
sure
that,
yes,
the
version
of
the
application
that
I
expect
to
be
deployed
out,
there
is
actually
deployed
out
there.
So
having
tooling,
like
argo
cd,
that
is
aware
of
the
state
of
the
application,
is
extremely
important
and
deployments
and
rollbacks
should
be
painless.
B
I
know
this
sounds
like
something
that's
obvious
to
say,
but
we
work
with
a
lot
of
customers
who
you
know
they're
coming
to
us,
because
they
want
to
get
away
from
deployment
processes
that
are
an
all-day
affair,
all
hands
on
deck
everyone's
in
the
you
know,
in
like
a
war
room
working
on
deployments
and
your
rollback
should
be
similarly
seamless
as
well.
So
if
everything
is
driven
by
a
declarative
configuration,
then
it's
just
as
easy
to
roll
back
as
it
is
to
deploy
an
application.
B
Now,
of
course,
that
might
mean
that
you're
out
of
sync,
with
your
current
configuration
until
configuration,
changes,
are
pushed,
but
it's
still
crucial
to
be
able
to
do
that
quickly
in
an
environment
so
that
you
don't
have
any
problems.
You
know
long
term
that
you
can't
resolve
and
then
developers
can
and
should
focus
on
feature
development.
So
this
this
is
interesting
because
I'm
not
saying
that
developers
should
not
be
managing
their
own
pipelines.
B
I
do
think
that
shift
left
has
some
really
good
merits
and
developers
actually
should
take
some
ownership
of
the
pipeline
processes
to
deliver
their
code
now.
What
I
do
think
is
that
once
those
are
established
and
in
place
and
they're
running
well,
then
they
should
be
able
to
focus
on
their
feature
development.
You
should
have
your
ci
cd
process
in
a
place
where
your
developers
don't
have
to
be
focused
on
triaging.
B
You
know
minor
issues
that
might
help
in
it
or
happen
in
a
development
environment,
or
you
know
trying
to
figure
out
what
happened
when
they
deployed
to
this
specific
staging
environment.
If
everything
to
configure
and
require
your
application
to
run
in
a
kubernetes
cluster
is
part
of
that
pipeline
process,
it's
going
to
really
alleviate
the
need
for
developers
to
get
in
there
dig
in
go
hands-on
on.
You
know
exactly
why
my
application
isn't
deploying
or
behaving
the
way
that
I
would
expect
it
to,
and
then
argo
cd.
You
know.
B
We
really
think
that
argo
cd
is
a
phenomenal
tool
for
continuous
delivery,
and
it
goes
back
to
a
lot
of
the
principles
that
we're
talking
about.
So
it
is
built
around
being
a
declarative,
git,
ops,
continuous
delivery
tool
and
it's
extremely
feature-rich.
So
all
the
you
know
things
that
I
just
highlighted
here,
so
they
support
rich
templating,
so,
whether
you're
pulling
in
from
an
application
manifest
you
want
to
use
helm,
charts
or
customize.
B
That
says,
hey
like
it
noticed
a
change
in
your
git
repo,
we're
monitoring
it
and
we're
going
to
go
ahead
and
update
that
and
by
default
you
know,
argo
cd.
If
you
set
up
auto
sync,
it's
pulling
that
git
repository
every
three
minutes.
You
know
with
this
demonstration
we're
going
to
show
a
little
bit
different
approach
where
code
fresh
is
going
to
utilize.
B
The
rich
web
hooks
to
actually
actively
engage
it,
but
yeah.
It
can
also
auto
pull
it's
also
enterprise
ready.
So
you
know
all
the
items
that
I'm
mentioning
here,
like
sso
multi-cluster,
multi-tenancy
audit
trails,
the
stuff
that
you
would
expect,
but
even
the
more
nuanced
things
like.
How
do
I
deal
with
high
availability
needs?
You
know
it's.
It
has
automatically
built
out,
so
it
can
specify
that
you
can
do
high
availability
for
the
redis
back
end
and
things
like
that.
B
They
have
disaster
recovery
built
in
so
that
you
can
dump
full
configurations
if
you
need
to,
and
then
it's
also
just
you
know
resilient
as
well,
so
it
can
lose
the
redis
back
end
fully
and
be
brought
back
up
and
still
be
in
a
runnable
state.
So
it's
very
enterprise
ready
from
that
needs,
and
then
it's
very
extensible
as
well,
which
really
lends
itself
to
being
enterprise
ready.
So
they
have
a
rich
cli
that
you
can
utilize
web
hooks.
B
You
know
which
code
fresh
is
tapping
into,
they
can
do
event
driven
architecture
to
drive.
Your
deployments
polished
web
uis
and,
although
I
don't
list
it
here,
you
know
in
part
of
that
extensibility
would
be
what
they
call
app
of
apps
or
app
projects,
which
is
actually
a
custom,
kubernetes
crd,
which
actually
allows
you
to
specify
groupings
of
applications
so
that
you
can
have
your
dependency
chain
managed
and
understood
by
your
continuous
delivery.
Tooling,
and
so
this
is
going
to
be
a
brief
introduction
to
what
our
demo
is
actually
going
to
cover
here.
B
So
we
have
a
nice
little
developer
guy
down
here
and
our
developer
is
going
to
commit
some
code
so
into
the
main
source
code
here
from
there
that
source
code
is
going
to
trigger
an
event
on
our
ci
cd
platform.
In
this
instance
we're
using
code
fresh.
Obviously
it
could
be
other
tooling
here,
but
in
code
fresh
we
have
a
trigger
interaction
to
say.
Looking
at
this
repository
for
changes
and
it's
going
to
trigger
an
event
in
our
ci
cd
platform
from
there
code
fresh
is
actually
going
to
handle
updating
the
application
manifest.
B
So
we're
talking
about
the
kubernetes
application
manifest
and
we're
going
to
change
the
version
of
the
application
when
we
change
that
version
of
the
application
from
code
fresh,
it's
going
to
trigger
a
continuous
delivery
pipeline
and
that
continuous
delivery
pipeline
is
where
we
have
our
built-in
integration
with
argo,
it's
going
to
say:
okay,
you
know
I
got
the
request.
I
know
there
was
a
code
change,
it's
going
to
go
out
and
it's
going
to
call
argo
cd
to
sync.
Now
this
is
where
argo
cd's
magic
really
comes
in
handy.
B
So
we
have
our
web
hook,
interaction,
we're
going
to
nargo,
see
we
say:
okay,
you
know,
we
know,
there's
a
code
change
the
application's
now
out
of
sync,
so
you
don't
have
to
worry
about
the
auto
sync
or
the
time
policy,
and
once
we
make
this
interaction
with
argo
cd,
it's
going
to
get
the
latest
from
the
repository
and
it's
going
to
update
our
application
to
the
newest
version.
So
we
went
from
our
older
version
to
our
newer
version
here,
based
on
configuration
sync
and
now
you
know
once
that
application
is
out
here.
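That "call Argo CD to sync" step from a pipeline can be as simple as the following command sketch; it assumes a logged-in `argocd` CLI and an application named `gitops-app`, so it isn't runnable without a live install:

```shell
# Tell Argo CD to sync immediately instead of waiting on the polling interval,
# then block until the application reports synced and healthy (with a timeout).
argocd app sync gitops-app
argocd app wait gitops-app --health --timeout 300
```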
B
Let's
say
that,
for
whatever
reason
you
know,
the
trigger
gets
turned
off
in
your
ci
cd
platform,
or
something
else
happens
where
someone
goes
in
and
you
know
they
actually
make
a
change
to
the
file
without
it
being
picked
up
by
the
platform.
We
would
see
that
we
were
now
out
of
sync
and
argo,
so
argo
again
monitoring.
B
It
would
know
that
we're
out
of
sync
for
the
configuration
and
if
you
had
the
auto
sync
policy
turned
on
it,
would
then
go
ahead
and
go
through
the
deployment,
and
so
now
we're
going
to
drop
over
to
the
actual
demo,
and
this
is
kind
of
the
live
demo
part.
So
hopefully
this
is
going
to
go
well.
I
do
want
to
talk
through
a
little
bit
of
what
the
actual
pipeline
process
looks
like.
B
So,
if
we
take
a
look
at
so
I'll
drop
in
here
and
we'll
just
go
to
our
main
code,
fresh
area
here.
B
Okay
running
a
little
minor
snag
here,
so
if
we
we'll
start
by
taking
a
look
at
argo
cd
here,
so
if
we
drop
an
argo
cds
ui,
the
first
thing
you'll
notice
is,
you
have
applications
here,
so
the
applications
are
what
you're
going
to
add
in
that
are
actually
syncing
with
your
code
repository
and
then
from
there.
That
will
actually
determine
what's
in
sync,
with
your
application.
Now
I
have
the
canary
one
already
out
here,
but
I
do
not
have
our
application
for
our
get
ops
demo
here.
So
what
I'm
going
to
do?
B
Is
I'm
going
to
go
ahead
and
create
that
so
give
me
just
a
moment
here
and
so
we'll
go
in
we'll.
Do
a
new
app
inside
of
our
application
name,
I'm
going
to
name
this
get
ops
app!
Very
generic
and
creative,
as
you
can
tell,
our
project
is
just
default.
That's
fine
repository
url
in
this
case
we're
actually
using
an
open
source
repository.
B
So
in
this
instance
you
don't
have
to
set
up
any
special
configuration
now,
if
you
need
to,
you
can
actually
have
ssh
enabled
repositories
as
well,
so
it
supports
connecting
into
private
repositories.
So
this
would
be
the
link
to
our
main
repository
for
our
path,
I'm
just
specifying
dots.
So
it's
just
saying
look
in
that
root
folder
there
and
then
from
there
for
cluster
I'm
going
to
use
in
cluster
now.
This
is
an
important
distinction
in
this
case.
B
For
the
the
sake
of
this
demonstration,
I
am
just
having
the
deployment
happen
in
the
same
cluster
that
argo
lives
in
you
do
not
have
to
set
it
up
that
way,
so
you
can
set
it
up
so
that
argo
has
its
own
dedicated
cluster
and
then,
if
you
add
in
an
external
cluster,
they
get
stored
as
secrets
inside
of
the
argo
cd
cluster
and
then,
in
this
case,
yeah
I'll
put
it
into
a
namespace
of
its
own.
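The same application could also be created declaratively instead of through the UI. This is a sketch of the equivalent Application manifest; the repo URL is a placeholder, since the talk's actual repo link isn't in the transcript:

```yaml
# Declarative equivalent of the UI steps above (repo URL is a placeholder).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-demo.git  # placeholder
    targetRevision: HEAD
    path: .                  # "look in the root folder"
  destination:
    server: https://kubernetes.default.svc   # the in-cluster option
    namespace: gitops-app
```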
B
So
this
is
going
to
create
that
application,
and
I
will
show
you
what
that
application
looks
like.
So
if
we
look
at
github
here,
so
the
the
directory
that
I
actually
sent
it
to
is
actually
the
directory,
that's
containing
the
application
manifest
and
the
service
definition.
So
it's
not
actually
the
source
code.
This
is
the
manifest
that's
driving
the
deployment
of
the
source
code.
B
And I don't have any deployments right now, so there's nothing actually running out there. What I can do is hit Synchronize here, from within Argo CD; later I'll show it with Codefresh. If I hit Synchronize, this is going to bring us up to the current state of our application, and you'll see that it starts to spin it up. Of course, this was previously a deployment, so it comes in super fast and starts to synchronize.
There's not a lot of complexity here, but if we drop in, we can take a look at what's actually running. I think it's a pretty interesting color right now, but basically it's just a web service exposing a basic web page; that's really all it is. And then we'll take a look at the pipelines a little bit, because I want to cover a couple of things in there. So if we drop in...
All right, well, I guess the demo is taking a break a little bit, so we'll go ahead and start talking through some of the other aspects of Argo CD, and kind of just move on through the presentation portion.
B
So
hopefully
we'll
be
able
to
get
back
to
that
demo.
But
if
not,
I
do
want
to
talk
about
some
of
the
features
of
argo
rollout
which
what
that's
really
doing
is
it's
applying
advanced
deployment
strategies
for
your
kubernetes
environment.
So
it
gives
you
a
custom
kubernetes
crd
called
rollout
that
allows
you
to
specify
whether
you
want
to
do
a
blue,
green
or
canary
deployment
and
then
add
a
whole
bunch
of
additional
criteria
on
there
and
there's
some
key
things
that
I
want
to
talk
about
here.
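A minimal sketch of that Rollout CRD with the canary strategy; the names and image are hypothetical, and the 25%-then-pause steps mirror what the demo uses later:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: example/demo-web:1.1.0   # hypothetical image
  strategy:
    canary:
      steps:
        - setWeight: 25   # shift 25% of the pods/traffic to the new version
        - pause: {}       # wait indefinitely until promoted or aborted
```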
B
So
there's
service
mesh
support.
So
I
know
that
there's
a
lot
of
companies
right
now
that
are
looking
forward
to
integrating
with
service
meshes
or
adding
in
support
for
service
mesh
in
the
future
and
as
you
kind
of
roll
to
that,
you
know
making
sure
that
the
tooling
that
you're
picking
is
compatible
with
that.
So
argo
is
compatible
with
service
meshes.
So
it
supports
istio
linker
d
through
smi.
It
also
supports
nginx
a
lot
of
the
key
players
that
you
would
also
anticipate
there
as
well,
and
then
it
very
now.
B
This
is
very
important
to
me
personally,
because
we
use
a
lot
of
auto
scaling,
but
it
supports
anti-affinity
and
horizontal
pod
scaling.
So
what
that
really
means
is
typically,
you
know
if
you're
deploying
an
application
historically,
it
would
be.
I'm
deploying
an
application
it's
going
to
go
to
whatever
pod
has
resources
for
it,
and
in
that
instance,
you
know
you
can
end
up
in
a
situation
where
the
resources
actually
get
deployed,
but
then
you've
auto
scaled
up
to
fill
the
needs
of
additional
applications.
B
That
can
cause
a
problem
with
you,
because
you
might
end
up
with
a
remnant
of
say
one
pod
left
on
a
node
when
you
try
to
scale
things
back
down.
B
When
really
you
wanted
that
node
to
be
able
to
go
away,
so
argo
actually
has
the
capability
to
specify
anti-affinity
so
that
when
I
have
a
previous
application
version,
even
if
there
were
resources
available
on
say
this
pod
and
this
pod
on
these
nodes,
those
should
say
node
instead
of
pods,
but
on
these
nodes,
then
it
would
deploy
to
new
nodes
entirely
so
that
you
could
auto
scale
down
these
nodes.
So
that's
a
huge
win
for
customers
that
want
to
be
able
to
use.
B
They
want
to
be
able
to
use
that
anti-affinity
and
auto
scaling
inside
of
kubernetes.
Another aspect to talk about is analysis control. Argo Rollouts has this concept of analysis, where you can tie into specific metrics using Prometheus, to say: as part of my canary process or my blue-green process, I want you to go out to your Prometheus instance and analyze these specific metrics, and you can even specify pauses around that as well.
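As a sketch, an Argo Rollouts AnalysisTemplate querying Prometheus might look like this; the address, query, and threshold are all hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 1m          # re-evaluate the query every minute
      failureLimit: 3       # abort the rollout after 3 failed measurements
      successCondition: result[0] >= 0.95
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090   # hypothetical address
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",code!~"5.."}[2m]))
            /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[2m]))
```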
B
So,
for
example,
if
we
take
a
look
at
our
actual
application
deployment
here,
we're
doing
a
canary,
that's
what
we're
going
to
demonstrate
here,
but
we
have
our
steps
in
here
with
a
set
weight
and
a
pause,
and
so
what
they
allow
you
to
do
with
the
rollout
crd
is
actually
to
specify
your
analysis
as
part
of
a
step
in
a
pause
period
or
can
even
be
analysis
higher
up
to
say,
let's
say,
for
example,
that
I'm
doing
a
blue
green
deployment,
and
I
have
a
preview
application
out
there.
B
Your
analysis
can
actually
happen
against
the
the
preview
version
of
the
application
as
well.
So
it
gives
you
that
flexibility
to
set
the
analysis
up
and
the
other
thing
that
I
would
recommend
you
know,
particularly
with
code
fresh.
What
we
like
to
see
is
you
know
you
can
bake
it
into
your
deployments
here.
You
can
also
bake
it
directly
into
the
cicd
process
as
well.
B
So
if
you
wanted
to
make
it
part
of
the
actual
deployment
process,
you
can
do
that,
and
so
that's
extremely
important
to
doing
roll
outs
like
canary
and
blue
green,
because
you
want
to
make
sure
that
it's
in
a
healthy
state
when
you
actually
try
to
go
through
and
roll
out
the
rest
of
the
application
and
so
talking
through
what
a
canary
deployment
would
actually
look
like.
B
So
again
we
have
our
our
nice
little
developer
down
here.
Our
developer
is
going
to
commit
code
and
just
to
kind
of
talk
about
where
the
current
state
of
the
application
is
right.
Now
we
have
four
instances
of
our
application
version
out
there
when
a
developer
actually
commits
the
code,
it's
going
to
trigger
that
ci,
just
like
it
did
for
the
get
off
space
pipeline
from
there.
B
You
know
your
platform
is
going
to
do
whatever
it
needs
to
do,
and
this
is
you
know
in
our
instance,
the
pipeline
that
we
had
set
up
for
this
is
very
simplistic.
So
it's
really
just
build
an
application
bundle.
It
up
commit
back
to
the
repository
you're
going
to
have
a
lot
more
complicated
stuff
in
there.
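A rough sketch of that simplistic CI pipeline in Codefresh's pipeline YAML; the repo and image names are placeholders, and a real pipeline would add the testing stages mentioned in the talk:

```yaml
# codefresh.yml -- rough sketch; repo and image names are placeholders
version: "1.0"
steps:
  clone:
    type: git-clone
    repo: example/demo-web            # placeholder source repo
    revision: "${{CF_BRANCH}}"
  build:
    type: build
    image_name: example/demo-web      # placeholder image name
    tag: "${{CF_SHORT_REVISION}}"
    dockerfile: Dockerfile
  update_manifest:
    image: alpine/git                 # bump the image tag in the GitOps repo
    commands:
      - echo "update manifest to ${{CF_SHORT_REVISION}} and push"
```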
B
You're
likely
going
to
have
integration,
testing,
unit,
testing
security
and
vulnerability
scanning
all
that
stuff,
that's
going
to
happen
in
your
ci
and
if
it
actually
passes
all
of
those,
then
it
would
say:
okay,
we
know
that
it's
a
healthy
build,
therefore
we're
going
to
update
our
rollout
manifest,
which
is
going
to
trigger
that
canary
cd
pipeline
from
here.
Your
ci
cd
platform
is
then
going
to
make
a
call
out
to
argo,
and
this
is
a
key
thing
to
mention.
B
So
it's
going
to
call
out
to
argo
to
do
that
rollout
process,
but
also
it's
going
to
hit
that
pause
event,
and
this
is
important,
because
this
is
where
you're
specifying
your
actual
threshold
on
how
much
you
actually
want
to
deploy
the
application.
So
if
you
have
four
instances
in
our
case,
we
want
to
do
a
25
deployment,
so
we're
gonna
deploy
one
new
version
of
the
application,
which
is
gonna
happen
here
and
we're
gonna
update
our
state
to
say:
okay,
we're
at
a
25
canary
status.
That
was
what
we
set
for
our
pause
event.
B
So
we
have
argo
here
now
grabbing
the
latest
manifest
and
then
from
there.
It's
actually
updating
the
state
of
the
application.
B
Now,
if
we
take
a
look
at
our,
you
know
our
following
steps
here:
what
we're
going
to
do
once
we're
at
that
canary
point
is
then
it's
up
to
your
platform
to
actually
determine
if
that
canary
is
in
a
healthy
state.
So
you
can
have
you
know
argo
doing
some
analysis
via
metrics
through
prometheus.
You
can
have
your
cicd
platform
during
this
pause
state.
Actually
looking
to
tooling,
like
let's
say,
datadog
relic,
dynatrace
or
you
know
you
might
use
jaeger
zipkin
or
an
elk
stack.
B
You
can
kind
of
write
your
own
custom
process
to
go
out
and
say:
okay
is
that
canary
actually
in
a
healthy
state?
If
it
is,
then
you
can
call
the
argo
to
actually
complete
that
rollout
and
when
you
do
that,
you're
saying
okay,
I'm
calling
back
to
argo.
Here's
the
you
know,
here's
the
build
that
I'm
looking
to
complete
and
then
from
there
it's
going
to
do
a
configuration
sync
as
it
actually
stores
your
git
repository
in
its
own
local
repo,
and
it's
going
to
update
that
state.
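With the kubectl plugin for Argo Rollouts, completing or backing out of the paused canary is a one-liner; this is a command sketch that assumes a live cluster and a Rollout named demo-web:

```shell
# Continue past the pause step and finish rolling out the new version...
kubectl argo rollouts promote demo-web

# ...or, if the canary looked unhealthy, abort and return to the stable version.
kubectl argo rollouts abort demo-web
```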
B
Now
in
ours
we
only
have
one
pause
step
in
there,
so
at
this
point
what
it
would
do
is
it
has
a
current
state.
It's
going
to
do
a
config
state,
it's
going
to
bring
our
entire
application
up
to
that
new
version,
so
we're
going
to
go
from
one
of
the
new
versions
up
to
four,
and
so
this
is
where
you
know,
you're
you're
now
fully
deployed
on
the
new
application
version,
and
at
this
point
you
know
you've
vetted
it.
B
You
know
that
the
application's
in
a
good
place-
and
now
you
can
you're
fully
transitioned
over
to
that
new
application.
Now,
that's
not
to
say
if
you
didn't
encounter
errors
that
you
could
roll
back
as
you
absolutely
could,
and
so
this
is
where
you
know
we
would
be
dropping
into
our
demo
time,
and
so
I
do
want
to
show
you
in
argo
what
that
would
actually
look
like.
B
What
you
can
do
from
within
argo
is
you
can
actually
kick
off
a
rollback
process
directly
from
within
here
and
we'll
see
the
canary
process,
because
one
really
cool
thing
here
is:
if
you
have
a
rollout
specified
as
deployment
mechanism
for
this.
Just
like
we
do
in
this
guy,
where
we're
saying
you
know
we're
using
canary
then
when
you
actually
do
a
rollback
here
and
so
we'll
go
ahead
and
we'll
kick
it
off
now
again.
This
would
typically
happen
from
your
ci
cd
platform.
B
But
if
we
kick
off
a
rollback
here,
you'll
start
to
see
that
we
are
spinning
up
the
previous
version
of
the
application
and
in
that
instance,
once
this
is
up
to
a
healthy
state.
What
we
should
see
is
that
we'll
start
to
terminate
one
of
these
pods
here
as
well,
and
so
we'll
have
two
different
versions
of
the
application
actually
running,
and
so
that's
giving
us
our
canary
test
bed.
Now
again,
you
know
if
you
were
doing
a
blue
green
version
of
this
deployment.
B
Potentially,
what
we'd
end
up
with
is
four
pods
of
the
old
four
pods
of
the
new
and
then
allowing
you
to
do
your
test
analysis
against
that
preview
application.
So
now
we
see
that
we
have
three
versions
of
the
old
version
or,
I
should
say
the
newest
version,
but
really
since
we
did
a
rollback.
This
is
what
we're
anticipating
to
be
the
current
version,
and
so
we've
done
our
canary
steps.
B
So
now
we're
25
rolled
out,
and
this
is
where
we're
going
to
end
up
in
a
suspended
state
because
it
says
okay,
I
was
told
by
the
declarative,
configuration
that
this
is
where
I
pause,
and
so
I'm
going
to
pause
for
any
feedback
or
you
know
any
basically
any
future.
You
know
calls
from
an
application
to
say
whether
or
not
I
should
continue
with
this
rollout
and
then
from
there.
B
If
we
take
a
look
at
our
actual
rollout
application,
you
notice
we
are
out
of
sync,
but
that's
because
we
did
a
rollback,
but
we
are
running
off
of
two
different
versions
of
the
application
and
that's
because
we
have
25
on
that
other
version.
And
if
I
take
a
look,
so
let's
go
look
at
our
actual
roll
out
app
here.
B
B
Let me go ahead and refresh this to demonstrate that for you. We are in an out-of-sync state here, so I'm just going to sync this back up; I'm going to let Argo handle that for now. I'll drop back to the application where we created our gitops-app, and then we'll make a code commit as well. And I do want to talk a little bit about the pipeline process here.
B
So
there's
a
couple
important
things
to
point
out
to
here,
so
the
first
would
be
obviously
we're
doing
our
clone
or
build,
and
this
is
where
you
could
layer
in
much
more
advanced,
build
steps
and
we
would
highly
recommend
it.
But
this
is
where
you
would
layer
in
your
things,
like
your
integration
testing
and
then
once
we
do
that
we're
actually
updating
the
get
off
so
we're
taking
that
file.
B
And
if
we
take
a
look
at
our
trigger
event
here
we
have
a
trigger
set
up
on
our
codes,
our
actual
code
of
the
the
web
application
itself
saying
if
we
get
any
push
commits
on
this,
let's
go
ahead
and
kick
off
this
process,
and
currently
it's
in
an
on
state.
So
now,
if
I
actually
go
back
to
our
code
and
we're
going
to
do
this
again,
so
we'll
change
it
back
to
let's
go
chocolate,
because
why
not-
and
let's
save
that
guy.
B
So
we're
going
to
be
cloning
down
our
repository,
we're
going
to
be
building
out
our
image
at
the
same
time
that
we're
cloning
down
our
github
repo
and
potentially
updating
that
manifest,
but
we're
not
actually
going
to
commit
the
changes
to
the
manifest
unless
the
docker
image
build
is
successful,
and
this
is
key
to
understand
because
realistically,
like
I
said,
you're
going
to
have
a
lot
of
testing
events
in
here.
So
you
don't
actually
want
to
kick
off
that
cd
process
or
the
git
ops
process.
B
B
You can have pre-synchronization or post-synchronization checks inside of here; in this instance we just have some dummy steps in there to demonstrate that you can. But here it's actually going to start that deployment process with Argo CD. This is our current application: you'll see we just kicked over to syncing now, and you'll see that we're spinning up the new version of the application and spinning down the old version, and now this process is in sync.
B
So
once
this
finishes
up,
we'll
see
that
that
fully
goes
away
will
have
a
historical
reference
to
that
deployment.
Just
like
we
did
in
the
other
instance
that
I
demonstrated
and
you'll
have
a
good
insight
into
exactly
what
changes
happen
and
if
you
wanted
to,
of
course,
you
can
click
through
the
revision
information
and
go
directly
to
the
code.
Commit
and
you'll
see
that
it
was
just
a
commit
sha
there
that
actually
changed.
B
But
we
have
our
new
version
of
our
application
and
canary
is
much
the
same
process
for
us,
except
that
we
baked
in
ways
to
actually
do
additional
testing.
So
if
I
drop
back
I'll
go
to
our
pipeline
for
our
canary-
and
I
actually
currently
have
the
trigger
for
this
disabled,
so
I
will
step
in
I'm
going
to
go
ahead
and
I'm
going
to
turn
on
that
trigger.
B
B
If I could type, all right. We'll go ahead and commit that and push it, and now when we actually push, we'll see that we're kicking off our canary CI pipeline. This canary CI pipeline is very similar to the GitOps one; the flow is the same.
B
You
would
be
doing
the
same
testing,
but
instead
of
kicking
off
the
commit
to
the
gitops
repository,
what
we're
actually
begin
going
to
be
committing
a
change
to
is
this
rollout
manifest
and
we're
going
to
change
the
version
of
the
application?
B
So
it's
very
similar
so
that
flow
from
the
you
know,
the
beginning
area
is
basically
identical
and
so
we're
building
the
docker
image
again.
You
know
we
are
pulling
from
cache
for
the
entirety
of
this
build,
so
it's
it's
typically
pretty
quick
and
then
we're
actually
committing
back
to
that
repository.
So
if
we
look
at
our
rollout
once
this
commit
happens,
we
should
see
that
again
we
just
had
a
change
and
then
from
there
you
know
we
should
kick
off
that
cd
pipeline
yep.
B
We see there is one change, there's a delta, and we'll go ahead and commit this back. Now we've committed our change, so we see that we just made a change, and if we look back at our build processes, we'll see that we kicked off a canary CD pipeline. This is where you're really going to get more detailed in your analysis of whether or not this canary is healthy. If you're using Argo's analysis, you can take that approach, or you can use your CI/CD tooling to do it.
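If you go the Argo analysis route, a minimal AnalysisTemplate sketch (assuming a Prometheus install in the cluster and an illustrative success-rate query) could look like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
  - name: success-rate
    interval: 60s
    # fail the canary if fewer than 95% of requests succeed
    successCondition: result[0] >= 0.95
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus.example.svc.cluster.local:9090  # hypothetical address
        query: |
          sum(rate(http_requests_total{app="canary-demo",status!~"5.."}[2m]))
          /
          sum(rate(http_requests_total{app="canary-demo"}[2m]))
```

Referencing a template like this from the Rollout's canary steps lets Argo Rollouts gate or abort the rollout automatically instead of relying only on the pipeline.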
B
You might even do some pre-environment checks to make sure everything you're expecting to be in place is there, but once you actually trigger this rollout, what it's doing is starting that rollout of the application. You'll see it's already started to synchronize by adding that canary object, so the container is creating, and again we went from one image being deployed to two versions of the image. Then, once we're at that 25% canary status like I showed earlier, the tooling itself finishes this rollout process, and again you have detailed logs on exactly what's happening inside of Argo when you're calling it from our platform.
B
So if you need to see more information, you can drop in here, and it's going to tell you how that synchronization process is going. In this instance, once we're at that 25%, it's actually going to pause the deployment. And let's say you had an environment check you wanted to run, for whatever reason; you just saw that fourth pod of the old version actually disappear.
B
This gets to a point where, let's say you're going into a local dev or sandbox environment, or maybe a QA environment where you have a QA group that's actually smoke testing it. You should have all sorts of other analysis in place as well, but if you have a manual QA process, then what you can do is add a pending approval step.
B
In Codefresh we call it a pending approval, and it lets you layer in a gated deployment. So we've started the canary and we now have 25% of our rollout completed, but we've gated it to let you do any vetting you need to do to make sure the application is healthy. Once we're confident that everything is in a healthy state, we can say: okay, we're going to approve this.
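A gated deployment of this kind can be sketched with Codefresh's pending-approval step type; the step name and timeout values here are illustrative assumptions:

```yaml
ask_for_approval:
  title: Wait for QA sign-off
  type: pending-approval
  timeout:
    duration: 2          # how long to wait for a human decision
    timeUnit: hours
    finalState: denied   # treat a timeout as a rejection, not an approval
```

Steps after this one only run once someone approves, which is what holds the canary at its paused weight.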
B
Now we're going to go to 50% and pause again, and we're going to do traffic analysis, error analysis, and things like that to make sure everything is in a state that is production-ready in your case. In this instance we're saying: okay, the pending approval is fine, we did our smoke test of the environment, and we're starting the deployment process again.
B
So if we take a look here, we'll see that our DinD is spinning back up, which is the execution process for our pipeline, and then we'll start the actual promotion process. Again, this is calling our Argo CD steps, and those steps give us a link that drops you right into Argo so you can visualize what's actually happening. You'll see that we're starting to actually deploy the newest version again.
B
So we're upping the pods now, and as we get reports back that the new version is healthy and the health check has checked out, it'll start to terminate pods that are running on the older version, so it's really just automating that entire process. The key thing here is that once these pipelines are set up, whatever your CI/CD platform is, your developers
B
all need to worry about is committing code. If they have a feature they're working on, all they need to do is actually commit it and let the automation and the declarative configuration take care of it for them, so there's really no uncertainty in the process. The one thing I would highly recommend, and for demonstration purposes we don't have it in here, is this: typically what we like to see is that
B
you'll have the start of your rollout process, you're going to have your analysis and your health checking and things like that in there, and then you would have a conditional check that says: only do the rollout if all of that is good. Otherwise we can automatically execute a rollback, again using our built-in steps. So let's talk about what those steps actually are.
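That conditional rollback can be sketched in Codefresh pipeline YAML using a `when` condition; the step names are hypothetical:

```yaml
rollback_on_failure:
  title: Roll back if checks failed
  image: alpine
  commands:
    - echo "triggering automated rollback"
  when:
    condition:
      all:
        # only run this step when the (hypothetical) analysis step failed
        analysisFailed: steps.run_analysis.result == 'failure'
```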
B
Oh, and it looks like once this updates we'll be fully on the new version of the application as well, and we'll see, yep, three minutes ago for our deployment time. So if we drop back into our pipeline, I want to talk about what our steps actually are. We have a step marketplace, and in the step marketplace you can run container-based steps and write code behind them, and we actually wrote steps to integrate with Argo CD: a synchronization step and a rollout step.
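Using those steps from a pipeline might look roughly like this; the step type and argument names here are illustrative assumptions, so check the step marketplace for the exact spec:

```yaml
sync_and_wait:
  title: Sync Argo CD application
  type: argocd-sync            # marketplace step name may differ in your account
  arguments:
    context: my-argocd          # hypothetical Argo CD integration context
    app_name: canary-demo       # the Argo CD application to sync
    wait_healthy: true          # block the pipeline until the app reports Healthy
```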
B
This allows you to tie directly into your Argo instance and execute things that are webhook-driven, directly tying your CI/CD platform into your Argo. We're also building heavily upon Argo, so we're going to integrate it into our environments and things like that. Codefresh is really looking to make heavy investments in Argo and make it a big part of our platform.
B
And that actually concludes our demo portions and the majority of the presentation, so this really just opens it up to questions. I hope that was interesting and that it covered the high-level things you wanted to see with Argo. I think at this point, let's just open it up to Q&A.
A
B
Now, if you're talking about ephemeral environments or feature-testing environments, then what you might want to do at that point is take Argo's app-of-apps approach, where you can actually spin up a dynamic version of just that instance of that dev feature, and then from there you have sort of an umbrella chart that manages the deployment process.
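A minimal sketch of that app-of-apps pattern: a parent Application that points at a Git directory containing the child Application manifests for one feature environment. Repo URL, paths, and names are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: feature-xyz-apps        # umbrella app for one ephemeral environment
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/feature-envs.git
    targetRevision: HEAD
    path: envs/feature-xyz      # directory holding the child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true               # tear down child apps when they're removed from Git
```

Deleting the directory (or this parent app) from Git is then enough to clean up the whole environment.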
A
B
Yeah, so we actually do have that, and it's interesting. We have what's called the Kubernetes health check step. What it actually allows us to do is execute a rollout process, where you specify the time frame over which that rollout is taking place, and in that health check we can call out to things like Kubernetes health, or, say, Spring Actuator health checks, or even look at things like Datadog and check SLOs.
B
And then, if that SLO ever fails, we would call the rollback procedure to actually do that, and you can do that in our pipeline processes. The way I've typically seen our customers implement it is that they have one pipeline call out to another pipeline; we support pipeline inheritance.
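One pipeline calling another can be sketched with the codefresh-run marketplace step; the pipeline and variable names here are illustrative assumptions:

```yaml
trigger_rollback_pipeline:
  title: Call the shared rollback pipeline
  type: codefresh-run                    # marketplace step that runs another pipeline
  arguments:
    PIPELINE_ID: my-project/rollback     # hypothetical pipeline identifier
    VARIABLE:
      - APP_NAME=canary-demo             # pass context to the child pipeline
```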
B
It's interesting: you can install the CRD on its own, but the controller is what's really doing the heavy lifting. Argo CD is made up of three major components: you have the API server, you have the application controller, and then you have the repo server, I believe, and those are all necessary to really make the Argo rollout fully featured and fully…