From YouTube: GitLab Canary & Blue/Green Deployment
Description
In this video: how blue/green and canary deployments work, the key differences between them, and how to implement canary deployments in GitLab pipelines.
Hello everyone, and thank you for joining me for this GitLab canary and blue/green deployment session. My name is Samara Kobe; I'm a Solutions Architect with GitLab. I've been working with GitLab for more than a year, during which I've worked on many projects, and before that I worked on projects in the financial industry and in many other cloud architecture positions. These are my contacts; please feel free to reach out to me after the session if any further clarification is required.
Well, the agenda for today. I know this may sound very familiar, but I thought I'd shed some more light on blue/green deployment and canary deployment. We'll discuss together the similarities and the differences between the two, especially for cloud-native applications, and how we support our customers in doing canary deployments in GitLab. And just a bit of an introduction: on the right is the blue canary, which has both of them, the blue and the canary, in one creature.
So let me start with the blue/green deployment. At a high level, the most common way of implementing blue/green deployments is this: as a customer, I have a production environment running my blue application, which is the live application serving daily traffic, and then I have some new features that I want to push into the production environment. So what happens is that I have another, let's say shadow, environment where I deploy this new release.
Once the switch happens, the old blue environment is relabeled as the new green environment, and it is now ready for the next release or the next feature to be deployed. So remember, here we have two environments: one to host the updated application release and one to host the existing one, and they leapfrog each other, one step after the other.
Usually this switch to green and back to blue comes with some caveats. The write or update transactions are usually blocked, or many times I've seen that they are: teams do deploy to the green environment, but they make it sort of read-only. The other way to do it, mainly to minimize the impact of data changes, is to update the database first, like a blue/green database, and make sure the new schema is supported by both versions of the application.
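To make the 100 percent switch concrete, here is a minimal sketch (not taken from the video) of one common way to do it in Kubernetes: a Service whose selector is repointed from the blue Deployment's pods to the green Deployment's pods. All names are hypothetical.

```yaml
# Minimal blue/green sketch (hypothetical names, not from the video):
# repointing the Service selector from "blue" to "green" flips 100% of
# the traffic in one step, and flipping it back is the rollback.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue      # change to "green" to switch all traffic at once
  ports:
    - port: 80
      targetPort: 8080
```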
Now, switching gears to the canary deployment: it's a sort of similar idea. I have an existing environment running the old version of my application, and then I have a new version of the application. The main difference here is that I am not switching 100 percent of my workload to the new version of the application.

I am gradually increasing the workload going into the canary deployment. Once that reaches a hundred percent, and the canary deployment is running fully functional and I'm sure that it is perfect, usually the application is redeployed into the production environment with the new release, and the old canary instances are terminated or deleted. That's because the idea of the canary deployment is less about having it as a permanent deployment, as in blue/green, and more about doing gradual tests, mainly functional and non-functional load testing, before saying: okay, a hundred percent, I'm happy with this new deployment. That is the idea behind the canary deployment.
Many customers I've seen divert only selected traffic: for example, if they can, they will switch only the internal workload to the new canary deployment and keep their external customers going to the existing, old version of the deployment, just to minimize any business impact across these two instances. Remember, this requires a little bit more, let's say a smarter, router. Many times the routing is done based on content or headers: based on keywords, the type of the request, the source IP, or many other, let's say, rules for diverting the workload.
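As an illustration of such header-based diversion, the NGINX ingress controller supports canary-by-header annotations. The sketch below is an assumption-laden example (host, service, and header names are hypothetical), not something shown in the video.

```yaml
# Hypothetical header-based canary: requests carrying
# "X-Internal-Tester: always" go to the canary Service; requests with
# "never" (or without the header) stay on the stable Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Internal-Tester"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-canary
                port:
                  number: 80
```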
Now, okay, let's review together what the differences are between the two. First, the switch between the environments. In the blue/green deployment the switch is 100 percent: I deploy the green environment with the new, updated application, I divert the whole workload there, and I'm 100 percent sure, yes, thank you very much. Then I deploy the next release onto the old blue, which becomes the new green, and I switch back.
If something went wrong, I switch a hundred percent of the workload back, so there is no partial switch between the two. The canary deployment, by contrast, is a gradual adoption between the new deployment and the existing one. Second, similarity to production. In the blue/green deployment, both the blue (the existing) and the green (the new) infrastructure are as similar, as identical, as possible, because remember, at some point the customer will adopt the new green deployment as the new production.
So think of it as sort of similar to a DR failover: you have an active site and a passive site, and at some point you switch to the passive site, which becomes active, and then you start doing your tests and fixes on your old active site, which becomes passive. So these two environments are usually identical in terms of setup and configuration. With the canary deployment they do not necessarily have to be identical, at least in terms of capacity.
Remember, it is not necessary for 100 percent of the workload to be forwarded to the new canary deployment. Third, traffic distribution. As I said, many times it's 100 percent blue or 100 percent green, while with the canary deployment it's more gradual, where I can start from, say, five percent, then ten percent, up to a hundred percent if required.
Fourth, the lifespan of the new deployment. I deploy to the green, I switch the workload there, and if I'm happy with that, done: this is my new blue deployment, and the old blue is the new green. So it is a long-lived environment; it's sort of a permanent environment. With the canary deployment, you deploy to the canary deployment, you're happy with that, and then you promote these instances (and I'm saying "usually", as this is what usually happens), you promote these instances to be the new production, and then you terminate the canary deployments, the canary pods. Now, since the focus of this session is more on canary deployment: how do you do a canary deployment, especially with Kubernetes and cloud-native applications?
I've seen many customers doing that using Istio. As you know, Istio came to address the challenges that developers and operators started to face, especially with microservices, as more monolithic applications are being decomposed into smaller parts run by smaller teams, from an operations and insights perspective. So this is one solution; think of it as the command and control for the service mesh.

By service mesh I'm talking about pods, microservices of applications, smaller pieces of programs communicating with each other. How will they discover each other, how will they route to each other, how will they handle security such as SSL termination, monitoring, access control, and blue/green or canary deployment? So Istio has definitely been used, as part of its functionality, to do blue/green and canary deployments.
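For reference, a weighted canary with Istio is usually expressed as a VirtualService that splits traffic between two subsets. This is a minimal sketch with hypothetical names; the subsets would be defined in a DestinationRule, which is omitted here.

```yaml
# Hypothetical Istio canary: 90% of requests go to the stable subset,
# 10% to the canary subset (subsets defined in a DestinationRule, not shown).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: stable
          weight: 90
        - destination:
            host: my-app
            subset: canary
          weight: 10
```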
But the other way, and I would say maybe a slightly simpler way, of doing canary deployment is simply using the NGINX ingress controller and ingress rules. Basically, using ingress rules, you can expose the application pods as services and then define ingress rules on top of these services.
And of course you need to have an ingress controller, which most of the time is an NGINX-based ingress controller. So basically, what's happening here is that these ingress rules are responsible for diverting the incoming workload, or, let's say, for telling the ingress controller how to divert the incoming requests to the different services: based on the path, the type of the request, the host, the incoming IP address, and all of these things.
So we can use these ingress rules to achieve canary deployment simply by adding annotations to the ingress rules. As you can see here, there are two main annotations, canary and canary-weight. With them I'm telling this ingress rule: you are a canary implementation, please function as a canary deployment, and the canary weight is X.
Here X is the percentage of the workload you want to divert to this canary-deployed application, which, as I said before, can be between zero and 100. Zero implies, of course, that no requests will be sent to the service, and a hundred percent means all the requests will be sent, in which case you can think of it as functioning as if it were the new green deployment, because by 100 I mean I will switch 100 percent of the traffic to the newly deployed application.
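Put together, an ingress rule carrying the two annotations just described would look roughly like this (a sketch with hypothetical names, not the exact manifest from the demo):

```yaml
# Canary ingress sketch: "canary" marks this rule as a canary, and
# "canary-weight" is the percentage (0-100) of requests diverted to it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # 10% to the canary pods
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-canary
                port:
                  number: 80
```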
Here is a simple diagram of the canary deployment; maybe it will help me describe how we do it in GitLab and how it's done with the NGINX ingress controller in general. We define an ingress rule, and that ingress rule will be annotated as a canary ingress rule. Then there is a yes/no decision: if it is a canary ingress rule, yes, then it will have a percentage of the workload that will be diverted, or forwarded, to the canary pods. And again, these pods run only as long as you are running the canary deployment; once you promote your deployment to the production environment, this whole canary deployment will be terminated.
So: both canary and blue/green deployment strategies can be used to execute functional and non-functional tests on the application. With Kubernetes, canary pods can be swiftly deployed to the production environment, and using Kubernetes NGINX ingress rules, the workload can be dynamically distributed between the production and the canary deployment. And, as I said before, yes, you can implement that using Istio, but then the management and operation of the Istio objects will be under your control.
You know what, let's switch to the demo quickly. So here I have an application, and this, by the way, is the SaaS version of GitLab. I have a project here in GitLab, and I have a pipeline for my deployment, which I'll show you. This pipeline is using what we call Auto DevOps, which is GitLab's packaged best practices for deploying cloud-native applications.
Basically, instead of you writing hundreds of lines of code for deployments and Helm charts and managing all of them, we have, out of GitLab's experience working with hundreds of customers, packaged the best practices for the whole CI/CD DevOps lifecycle, from build to deployment, into what we call Auto DevOps. So just by including this single line of code and going into my pipelines, you will see it all happen.
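The "single line" referred to is the Auto DevOps template include; in a .gitlab-ci.yml it looks like this (the demo's exact file isn't shown, but this is the standard form):

```yaml
# .gitlab-ci.yml: enable GitLab's packaged Auto DevOps pipeline
include:
  - template: Auto-DevOps.gitlab-ci.yml
```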
If I have a Dockerfile in the root folder of this project, it will be used to build the image. If not, then GitLab will, out of the box, use buildpacks to detect the programming language of my application and build the required image.

So it's building now; let's go there. Hopefully it will not take a long time. Yep, the build process is done, so now it's running container scanning. The nice thing about container scanning: usually I am a developer and I've done my changes, and the scanners catch problems. And it's not only container scanning, which is what I've included in this demo; in GitLab we have container scanning, static scanning, dynamic scanning, secret scanning, fuzz testing, and code quality tests included as part of the GitLab Ultimate edition. That means the pipeline will be able to detect any vulnerabilities introduced by these changes immediately on the branch, before they are merged back into the master branch.
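For context, outside of Auto DevOps these scanners can also be enabled individually by including GitLab's CI templates; a minimal sketch:

```yaml
# Hypothetical standalone setup: pull in individual security scanners
# (Auto DevOps wires these up automatically).
include:
  - template: Security/Container-Scanning.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/DAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
```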
So here we go, the container scanning is running, and now it is done. Let me go back to the pipeline. Because I've run the security scanners, I can access my report of vulnerabilities directly on the home page of my pipeline, right next to where I've made my changes.
Let's go to the pipelines, just to show you what will happen. In GitLab, with Auto DevOps, I can define three environments. Of course I can define unlimited environments if required, but out of the box, using environment variables, I can define the staging environment, the production environment, and the testing environment, and where I want to run my staging code, meaning in which Kubernetes cluster. They don't all necessarily have to be in the same Kubernetes cluster; I can have, and usually you do have, the staging environment running in a different Kubernetes cluster than the canary.
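As an illustration, Auto DevOps environments are toggled with CI/CD variables such as these (a sketch; they can also be set in the project's CI/CD settings rather than in .gitlab-ci.yml):

```yaml
# Hypothetical Auto DevOps tuning via CI/CD variables
variables:
  STAGING_ENABLED: "1"               # deploy to a staging environment first
  CANARY_ENABLED: "1"                # add a manual canary deployment step
  INCREMENTAL_ROLLOUT_MODE: manual   # roll out to production in manual steps
```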
By the way, meanwhile it's building and running. In GitLab we have integrated artifact management and a container registry embedded, or included, in the tool. So the image that's built by the build step, or build stage, does not have to leave GitLab or be stored somewhere outside the platform, because it still needs to be tested and verified. So, okay, our canary deployment is running, and this is the namespace for that canary deployment. So let's have a look at what's happening in there.
I just want to show you the YAML for that one. Remember, we agreed that this canary deployment is controlled in the ingress object, using the annotation for the weight of the workload. So if I run this with -o yaml, you see that this is what I was talking about: the ingress has the canary annotation, and the weight is 100, so 100 percent of the workload is now being diverted to this canary deployment.
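The inspection step would look roughly like the following (a reconstruction with hypothetical names, since the exact commands aren't captured in the transcript):

```yaml
# Dump the canary ingress and check the annotations:
#   kubectl -n <canary-namespace> get ingress my-app-canary -o yaml
# Expected fragment in the output:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"   # 100% diverted to canary
```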
Okay, as you can see here, the stable deployment is at zero and the canary deployment is at a hundred percent. A nice thing in GitLab, a recently added feature, is that I can change the workload split between the production and the canary deployment directly from the UI. So let's make it fifty percent. We'll say: okay, I want fifty percent of my incoming workload to be diverted to the canary and the other fifty percent to the existing production one. Change the ratio, do a refresh.
So now it is deploying; it's rolling the application out to production, ten percent at a time, and you can see on the right that it is creating the new instances of my application.
A
And,
as
I
said
before,
it
is
that
canary
all
are
the
canary
pods,
which
we
used
during
the
canal.
Deployment
have
been
now
terminated,
so
that's
all
what
I
have
I
have
today,
and
this
is
what
I
wanted
to
explain
in
this
video.
I
hope
you
would
find
it
useful.
Thank
you
very
much
for
watching
and
again,
please
feel
free
to
reach
out
again.
These
are
my
contacts.
If
you
have
any
questions,
thank
you
very
much
for
watching
and
have
a
great
day.
Thank
you.