From YouTube: Workshop3 : Argo Rollouts
Description
The speaker will mainly be presenting and talking about how to achieve progressive deployment (canary / blue-green deployment) using Argo Rollouts.
A: So, thank you all for being present here. To give a quick introduction about myself: my name is Nina Desai. I started my career as a database administrator and then eventually got into the DevOps/SRE space. I am working with InfraCloud as a Senior Site Reliability Engineer. I mainly work to help clients get onboarded onto the cloud and to handle their needs around infrastructure design, scale, and modernization, using standard CNCF tool stacks, I would say, like Kubernetes, Terraform, and monitoring stacks like PLG and ELK. And yes, we are hiring at InfraCloud as we are scaling, so if anyone is looking for better opportunities, please do visit our website and check the careers page.
Okay. So, just before starting with Argo Rollouts itself: Argo Rollouts is not the only Argo project. There are four main projects: Argo Workflows, Argo CD, Argo Events, and Argo Rollouts. Argo Project Labs is like an R&D kind of environment where new products are getting developed, and so on.

Argo Workflows, just to give some hint, helps you create and run advanced workflows entirely on Kubernetes; it mainly gets used for ETL- or ML-related job processing.

Then Argo CD, just like Flux, is a CD tool which helps you implement GitOps and always makes sure that your Git repository, which would be your source of truth, is always in sync with your cluster's desired state. Even if you change some object, it will always try to reconcile it and bring it back to the same state as is mentioned in your Git.

Argo Rollouts is the topic we will be talking about today; it's a custom Kubernetes controller, that's what I would say at the moment. And Argo Events basically acts as a trigger based on events, using Kubernetes as a platform. It can listen to events like GitHub or GitLab notifications, file-based events, or any Kubernetes events, and then it can trigger an action: it can trigger your Lambda functions, it can create Kubernetes resources, or it can send some other sort of notification as well.

So that's, I would say, some context about these particular projects.
Okay. So this is the agenda of today's session. We will go through what progressive delivery is, why we are choosing Argo Rollouts, how it works, and its installation. We will do lots of hands-on around four different areas that I believe will give you a proper gist and understanding of how Argo Rollouts can help you achieve progressive delivery.
So, how I came to know about this term called progressive delivery, or Argo Rollouts: I happened to work with a client in the healthcare industry, who came to InfraCloud with the ask that they would need help in achieving minimal-to-zero-downtime deployments, with possibly minimal human intervention, I would say. And that's when, while working with our solution architect, I came to know about this progressive delivery term, and eventually Argo Rollouts.

If I am not mistaken, they had almost 16-17 microservices in their application, and we helped them achieve canary deployment using Argo Rollouts.

So I think for close to the last five years, mainly, we have all been hearing quite a lot about cloud, DevOps, and CI/CD pipelines, and I think every single company that has been using CI/CD has seen the benefit of it. I recall that at the start of my career we were doing deployments like once in two or three months; from there, till recently, I saw at least seven to eight deployments a day being carried out on environments. So this is the outcome of using the CI/CD process, I would say: continuous integration, continuous delivery. And my personal view is that the world should move ahead from CD specifically and look for a solution like progressive delivery.
So the reason why I would say one should go for progressive delivery is this: with a CI/CD pipeline continuously delivering, I would say, software updates or upgrades to your system, we are trying to bring more customer satisfaction by shipping more features. But at the same time, as has also been mentioned in Google's SRE book, most system instability issues do occur because of change, the change in terms of these software deployments that we do. Does that mean we should stop doing those deployments? The answer is obviously no. We should continue deploying our software upgrades, but while moving at that pace, we should have some sort of control over it. I think that was the underlying thought behind having progressive delivery as a term; I think James Governor from RedMonk was the one who coined the term, actually.

So this is, you can say, a modern software development life cycle that has been built on the core tenets of CI/CD itself.
It basically allows you to pursue CD, or continuous delivery, in quite a safe way, and the big giant companies like Google, Facebook, Microsoft, and Amazon are already employing progressive delivery at scale. For example, you might have seen that sometimes you can see some feature, but maybe one of your colleagues is not able to see it; that's because they are rolling it out in a progressive way.

How these big companies start is, say they are deciding whether a login button should be in the center or at the right side. What they would do is roll out both versions, where in one version it would be on the left side and in the other version it would be at the center, to geographically different sets of users, and based on the user assessment, I would say, or the metrics they derive from that, they would be able to decide: okay, I think this version would scale better, this is what the users are liking more.
Okay, so how is progressive delivery different from continuous delivery? I would say the main difference is that with progressive delivery you get the ability to gradually release your feature to a progressively larger percentage of your audience. That starts first with your internal teams, then maybe beta testers, and then subsequently a larger audience. And if you detect even one percent of errors, or performance spikes, or user backlash, you would immediately be able to roll it back.

The impact would not be that high; the blast radius would always be minimal, because you would start with one percent of the audience. If you see any issues, you can immediately roll back automatically, and the loss is only maybe one percent, and maybe one percent of customer-satisfaction issues, compared to if it had been given to all the audience in one go.

So those are, I would say, the core characteristics of progressive delivery.
C: I have one question. When we try to do a canary deployment and we use header-based routing to make sure only specific users get the new change, right, during the canary only the few people using those headers will be able to get the new release. So is that something that is part of progressive delivery, or is it still just CD?

We have only one region, so we cannot target region-wise, right. So the other alternative we are thinking of is releasing only to a small group of people for the beta, like a testing team. Is that considered progressive delivery itself?

A: Right, right. Based on that, I have actually added one hands-on as well, where I will be doing similar stuff to what you said. That's how we did internal testing when we were delivering this solution for our client: we used header-based routing so that the client's QA team would be able to use and assess the new version well ahead of rolling it out further. Yeah.
Okay, so before going in depth about Argo Rollouts, let's understand how and why we need something like Argo Rollouts when Kubernetes itself is helping us get rid of many pain points. In Kubernetes, we all know rolling update and recreate; these are the two strategies.

Rolling update is the default strategy. Even before that, we had the big-bang kind of deployment strategy, I would say, in the regular world, where we were taking downtime: we were eliminating the old stack and recreating the new stack. To avoid that kind of downtime, Kubernetes employed the rolling update strategy, where, let's say, when you create a Kubernetes Deployment, it internally creates your ReplicaSet, and that ReplicaSet will eventually create all your pods. Okay, and when you want to go, let's say, from version 1.0 to 2.0...
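The rolling-update flow described here can be sketched as a standard Deployment manifest. This is a hypothetical example, not from the workshop: the app name, image, and probe path are made up for illustration.

```yaml
# Minimal sketch of the default RollingUpdate strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: demo-app:1.0   # bumping this to 2.0 triggers a rolling update
          readinessProbe:       # the health check the speaker mentions next
            httpGet:
              path: /healthz
              port: 8080
```

Changing the image (e.g. `kubectl set image deployment/demo-app demo-app=demo-app:2.0`) makes the controller bring pods up one by one, exactly as described below.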
...then it will do a health check based on the readiness or liveness probes, or anything that you would have set, and if it finds that the new pod is looking good, then that pod will start receiving traffic and one pod from your older ReplicaSet gets deleted. That's how, one by one, the rolling update strategy works, and eventually, at the end, you would see that all your pods from the previous ReplicaSet have been terminated and the pods from the new ReplicaSet are working and receiving your live traffic. Okay, so there would be a question: this is already helping me, in a certain way, to do a kind of minimal-downtime deployment, so why would we still need something like progressive delivery?
A
The
reason
is,
I
would
say,
even
though
rolling
update
has
already
helped
us,
but
there
are
some
of
these
limitations
like
you
would
not
control
directly
the
speed
of
rollout.
Second,
you
would
not
be
able
to
control
the
traffic
flow
to
your
new
version
compared
to
your
older
version.
A
For
some
scenarios,
your
readiness
probes
are
kind
of,
I
would
say
unsuitable
for
deeper
or
stress
test.
You
would
not
be
automatically
employ
ability
to
query
external
metrics
to
decide
whether
your
update
has
been
completely
working
or
not,
and
the
main
thing
is,
it
cannot
automatically
roll
back.
A
These
are
the
some
of
the
limitations
with
rolling
update
strategy
and
that's
where
the
original
contributors
of
argo
rollout
thought
of
developing
something
that
would
help
to
overcome
these
challenges
with
rolling
update
strategy,
and
that's
how
argo
rollout
got
into
the
picture.
I
would
say
so
what
it
is.
It
is
eventually
just
a
kubernetes
controller
and
a
set
of
crds.
A
It
does
provide
you
deployment
strategies
like
blue,
green
kangaroo,
kangaroo
with
analysis
recently
they
have
added
experimentation
which
would
be
more
or
used
for
destructive
tests
right,
and
if
you
want
to
compare
what
this
argo
rollout
is
it
just
completely,
you
can
say,
dropping
replacement
of
your
kubernetes
deployment
like
for
deploying
any
of
your
microservice.
If
you
are
currently
creating
kubernetes
deployment,
you
can
create
a
rollout
component
for
the
same
it
just
exact,
I
would
say
replacement
of
your
kubernetes
deployments.
So these are some of the use cases for Argo Rollouts: you can run last-minute functional tests on your new version; you can compare the performance of the old version vs. the new version, as I said, during an A/B test; and you can control how much traffic you want to redirect to the new version of the service. So this is what the architecture of Argo Rollouts is about; I will just try to put it in very simple words.
The job of the rollout controller is to check if there are any resources of kind Rollout deployed in your Kubernetes cluster, and to make sure that the rollout state is always as you want it to be. Apart from that, it is just like a Deployment, right: when you create a Deployment, it creates your ReplicaSet, and that ReplicaSet creates your pods.

Okay, ingress: this is not directly part of Argo Rollouts, I would say, but just to make it more understandable, the ingress here is your NGINX controller or any service mesh that you are using.

Ultimately, during your rollout, you would need to create an analysis template. In this analysis template we basically define what kind of analysis it should do to decide whether it should roll out ahead or roll back your deployment automatically. For that purpose it internally creates a kind of AnalysisRun component, and that AnalysisRun component can do any kind of analysis.
Is that clear to everyone, how this works? Yep, okay, thank you. So, let's get our hands dirty a bit. Let's do the Argo Rollouts installation; it's a very simple installation, I would say. You just need to create a dedicated namespace, as we do as a standard practice in Kubernetes, and deploy Argo Rollouts there.
A: So I hope you all have got access to these lab setups, right? Just click on this.

Sure, sure, so Nipendra will be sharing this link with you in the chat. It's a platform developed by the CloudYuga team; very handy, I would say, for doing all your tests or learning, yeah.

D: So it's an environment in which you can try it out. Again, there's no compulsion that you should be doing that, but it carries your entire workflow, and this would happen with great ease. So we would appreciate it if you log into the platform and then open that link to get the workshop. We'll give you guys maybe a couple of minutes to sign up and then take it from there. Any questions meanwhile would be good to have.
E: So, with this Rollout, we won't be able to control the traffic, right? It is the number of pods that are in the ready state, that is what this Rollout controls, right?
D: Just say yes, or thumbs up / plus one on the chat, once you have it all set up; we'll wait for at least five thumbs up before taking it forward. So please let us know; we are waiting for at least five before we continue. Yeah, I think we are good. If all of you have triggered the lab, go ahead. I think you can just talk briefly about how the labs look and then we'll continue on.
A: Sure. So, guys, you can see here some of the, I would say, facets of this platform. I will show you: here you get this terminal where you can run all your regular Linux commands, anything that you would like to do.

Apart from that, it provides you an IDE as well, just like a VS Code editor; you will be able to use it. We will be using it anyhow in one of the hands-ons; here you can see it opens a nice code editor, just like this. Let's install a YAML extension, I would say; it will be helpful for us later from a visual perspective. This is what you are currently able to see here on our labs.

Okay, so I hope now all of you are logged into this console, right? We will be going through this whole instruction set. One good thing is that I have already copied all the instructions here; you don't need to type any command as such yourself directly. All that you need to do is just click on these green signs and they will do the job for you; they will execute the commands for you throughout the instructions. Apart from that, to access the Argo Rollouts console, which we have already opened, you just need to click on these URLs: right-click and open them in other tabs.

Okay, so I have copied all the Argo Rollouts examples that I have created into this Git repo. This is a public repo, so you can fork it, and if not, then you can just clone it directly.
If yes, then we'll move ahead to the next instructions. So we will need a Helm 3 client; it will be used in one of the examples to show how you can use Helm itself with the Rollout, so just install the Helm client as well.
D: So, rather than the appreciative thumbs up, do say yes on the chat as we go, right, because being remote it becomes difficult for us to judge what's happening on your side. If you are more interactive, it is really helpful for all of us, but that's your choice here. Thank you.
A: Do let us know in case you face any issues. So you should be able to see that we have now installed Helm as well on our machines; you'll see that it has been deployed. And we were originally about to do the Argo Rollouts installation, right. So for that we would create an argo-rollouts namespace, as you can see here. So let me just create a namespace for it.

The argo-rollouts namespace has been created. Argo Rollouts directly provides a URL from which you can install this install.yaml file. I have just copied it for my own purposes; you can directly use the URLs from the official installation document as well. So let's install Argo Rollouts. Just prior to that, you can see that currently there is nothing running inside the argo-rollouts namespace. So now let's deploy the controller; you'll see it create CRDs and cluster roles, and so on.
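The installation steps described here follow the official quick-start closely. A minimal sketch, assuming the upstream release manifest URL (the lab's copied URL may differ):

```shell
# Create a dedicated namespace, as is standard practice
kubectl create namespace argo-rollouts

# Install the controller, CRDs, and cluster roles from the official manifest
kubectl apply -n argo-rollouts \
  -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

# Watch the controller pod come up (running, then ready)
kubectl get pods -n argo-rollouts -w
```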
The pod is in running mode, but it is yet to be ready... yep, now it is ready. So your Argo Rollouts controller has been installed. Now, just as we use kubectl to interact easily with Kubernetes, to interact with Argo Rollouts you would need the Argo Rollouts kubectl plugin, so we will deploy that plugin.

Yeah, so you can see the Argo Rollouts plugin has been installed; it is showing its version as well.

Now, Argo Rollouts provides its own dashboard as well. It listens on port 3000. So what I have done in this lab is, I have already opened it on port 3000; just click on this and you will see the Argo Rollouts console available.
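The plugin install the speaker runs can be sketched as below, following the official docs (the Linux amd64 binary is an assumption; pick the binary matching your OS and architecture):

```shell
# Download the kubectl plugin binary and put it on the PATH
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts

# Verify the plugin works
kubectl argo rollouts version

# Launch the local dashboard UI mentioned here
kubectl argo rollouts dashboard
```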
C: One quick question: we installed this Argo command line, right? So is this part of kubectl rollout, or are you using the CRD? It's a command line overall, because you're doing it with kubectl argo rollouts, right?

A: Yeah, so the only difference, I would say, is that when we execute a kubectl command we do something like `kubectl get deployments`, right; with this plugin it is `kubectl argo rollouts get rollouts`, I would say.

D: Basically it's like that: you can install the plugin. I think what the plugin gives is an easier way, so that you don't need to remember a separate Argo CLI command and so on; you just go with kubectl itself. So it's a personal choice; nothing changes per se, whether you go with the standalone Argo Rollouts command or you go with the kubectl command. It's similar. Okay.
A: And just for the sake of showing: you'll be able to see all the commands provided by Argo Rollouts here. Okay, we will be using them further in our demo.

Okay, so currently you will see the console as blank, and that is just normal; don't get confused. Once we create Rollout objects, you will see them here. Going back again: we have done the installation of our controller. We will go through some more details about it and then come back again; that's how we'll move: we'll go into the slides, then go back and do hands-on on the concepts that we have learned. We have already seen how Argo Rollouts works.

We have now done the installation of Rollouts as well, right. This is how the UI will look; once you create your Rollout objects, you will be able to see it very soon.
A
So
in
terms
of
rollouts,
these
are
different.
Specs
been
available.
All
these.
You
would
be
able
to
understand
much
better
way
when
we
will
be
doing
the
hands-on
mode-
okay,
but
just
to
give
a
quick
comparison
compared
to
the
specs
of
deployment.
The
difference
is
in
case
of
our
kubernetes
deployment
right.
The
client
is
deployment
app.
Our
api
version
is
also
different.
These
are
the
two
different
and
the
main
difference
is
the
strategy
you
would
be
able
to
see
here.
A
This
is
the
strategy
like
you
can
use
blue
green
strategy
or
in
case
of
candy
you
would
be
a
setup,
candy
deployment
strategy
as
well.
You
would
be
able
to
mention
the
candy
service
table
service,
analysis,
templates
and
so
on,
and
for
the
traffic
control
as
well
right,
you
would
be
able
to
add
this
traffic
routing
parameter.
I
think
someone
was
asking
so
argo
rollout
help
in
this
way
to
do
the
traffic
routing
as
well.
A
We
would
get
to
these
things
more
in
during
the
demo.
I
would
say
you
would
be
able
to
understand
in
much
better.
Okay, so moving ahead: blue-green is one of the, I would say, main strategies provided by Argo Rollouts.

I think some of you might already be familiar with this concept. How a blue-green rollout works is: let's say currently you are running application version 34, so you'll have some load balancer, and all your client requests are going via this load balancer to your application version 34. Now, let's say you want to deploy a new version. What you would do is deploy another set: let's say right now for version 34 you are running three pods; now you will create another three pods which will be running your application version 35. Okay, so you will have your old version and a new set of pods with the new application version; it's just that the new set is not yet receiving traffic. Then, let's say, you do some tests internally to check that the new version is working.
So these are the different specs, I would say, in case you want to do a blue-green kind of deployment. The main thing, I would say, is to focus on the Rollout kind, the apiVersion, and the strategy part; the rest, as you can see, is similar to the Kubernetes Deployment that we do. Okay, in the strategy, in the case of a blue-green deployment, you need to mention which is your blue, that is, the old or currently active version, via the active service, and the preview service would be the new version of the service.

Okay, and currently, in this spec that I am showing, I have stopped auto-promotion. There is this parameter provided by Argo Rollouts which makes sure that it will not automatically cut over from the old active service to the new service. Here we will be doing a manual intervention to promote the traffic from the old version to the new version, only after we are sure that the new version is working well.
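A spec along these lines, closely modeled on the upstream blue-green getting-started example (the object, image, and service names are illustrative), looks like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout                       # drop-in replacement for kind: Deployment
metadata:
  name: rollout-bluegreen
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rollout-bluegreen
  template:
    metadata:
      labels:
        app: rollout-bluegreen
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue
          ports:
            - containerPort: 8080
  strategy:
    blueGreen:
      activeService: rollout-bluegreen-active    # receives live traffic
      previewService: rollout-bluegreen-preview  # points at the new ReplicaSet
      autoPromotionEnabled: false                # wait for a manual promote
```

With `autoPromotionEnabled: false`, the cutover happens only when you run `kubectl argo rollouts promote rollout-bluegreen`.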
So, as you can see here, after the Argo installation there is this option available for blue-green deployment, where you can see the specs that we have created. Just like a Deployment, we will be creating a Rollout object, where I have mentioned that I want to run two replicas of this version, and it will basically run one sample container which shows you the blue version on a console.

It is running on port 8080, and the strategy it is using is blue-green. In the blue-green deployment strategy you create two Services: one Service is your active one, and one is your preview Service. If you look, nothing is changed between them except their names. And here, for the demo, we will be using the NGINX controller, so we will create an Ingress which routes our traffic to our Services. So the flow would be: your client request goes to your Ingress, and from the Ingress traffic is routed initially to your stable version; once we promote it, it will cut over and start moving all the traffic to your new version.
Okay, and I hope all of us are already familiar that a Service finds its pods using labels and selectors; the same thing happens here. Initially, your active Service has those matching selector labels, using which it routes the traffic to your active version. And what you will see is that once we promote this deployment, your preview Service starts receiving traffic, and your active Service also starts routing traffic to the new version only. Okay, for the demo, just execute this command.
This will deploy these objects. We are currently, just for testing purposes, deploying them in the default namespace. So right now we are deploying, let's say, the blue version of the service. It is creating the two Services that we told it to, and it is creating two pods, as we requested two replicas.

So now, let's go back to the Argo Rollouts console. If you refresh this page, you will be able to see it running.
So you can see that earlier it was a blank console; now it is showing the Rollout objects that we have created here. Okay, it is currently running revision 1. Going forward, you can use either this Argo console or the command line; I would prefer doing it on the command line. From here you can take the rollout further: you can promote, or you can abort the rollout as well. So currently those two blue pods are receiving traffic.

That's why you are able to see all those blue dots on the client front end. Okay, and this is currently your stable service. So what is happening is: from the Ingress, traffic goes to your stable Service, and from the stable Service it is routed to the pods running the blue version. Okay.
So now let's move here and deploy the green version. What I'm going to do now will show the use of the kubectl argo rollouts plugin. You can see it has created a ReplicaSet, and that ReplicaSet has created two pods; it is running with that capacity only. Now let's change the image so that it will create the green version of your application.

You can see it is creating the two new pods which will run your green version.

Okay, so these are running; you can see it on the console side as well. Now the new revision has been created; it is showing that the new revision is running those two pods that we saw there. But on the client side it is still showing the blue version, because we have not yet shifted the traffic from blue to green. That's what I was referring to earlier: your live production traffic would not be...
I mean, you would not be able to do testing with your live production traffic directly on the new version in blue-green, I would say. What you can do, if you are going to adopt blue-green, is use a traffic routing mechanism like NGINX or any other, and use custom headers to pass traffic to your new-version preview service and do some tests on it.
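One way to express this header-based routing declaratively is Argo Rollouts' canary strategy with NGINX traffic routing, which the last demo uses. A hedged sketch, not the workshop's exact manifest: the service, ingress, and header names are made up, and the `canary-by-header` annotation (which the controller applies to the generated canary Ingress) restricts the new version to requests carrying that header:

```yaml
  strategy:
    canary:
      canaryService: demo-canary        # Service selecting the new version's pods
      stableService: demo-stable        # Service selecting the current version
      trafficRouting:
        nginx:
          stableIngress: demo-ingress   # existing Ingress for the stable Service
          additionalIngressAnnotations:
            canary-by-header: X-Canary  # only requests with this header hit the canary
      steps:
        - setWeight: 0                  # no general traffic to the new version yet
        - pause: {}                     # QA tests via the header, then promote
```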
E: I have one question, yeah: when I check the labels of those pods, they show rollout-bluegreen, and we have two Services, right? One is selecting the blue-green... oh, both are pointing to blue-green only. Got it.

A: Now you are still seeing both sets of pods, and if you look after the promotion, it has now started terminating the old pods as well; that's why the revision 1 pods have been removed. You can set a timer value for how long the old pods are kept; that would be a standard practice.
I would say, make sure that your old version is still available, so that even in case of some issues you don't need to create the old version of the application again; you just need to shift, or roll back, the traffic routing. You could see that earlier there were four pods; now it is back to two pods because it has deleted those older-version pods.
So, let's go back to the slides again; we will now see the concepts around how canary deployment does its work. Okay, just before moving to that, I would say these are some of the pros and cons that I see in terms of blue-green deployment. You get client API consistency; that's a plus point, I would say. Your preview stack can be tested even before your live traffic goes to it, and rollbacks are immediate, because your pods are already running there in a standby mode.

So in case of issues, you can just switch the traffic and roll it back, and you would again be running your older version. But yeah, it comes with the cost of running 2x your resources, and you cannot do a fine-grained, canary kind of deployment with it. You can reduce this cost, as there is one parameter available which helps you control how many pods of your new service you want to run; you can limit it in a certain way to reduce the cost, but yeah, it will still incur a bit of cost for you.
So, moving ahead with the canary deployment concept: how it works is again the same example. Let's say you are running application version 34 and receiving all your traffic via your load balancer; you deploy version 35 and route a small percentage of traffic to it.

If you see that those requests are getting routed correctly, then you increase the number of pods and also increase the percentage of traffic that you route to the new version, and once you are happy with it, you are full-fledged using the new version. Here the cost factor will not matter as much, because, let's say, currently you are running five pods as part of your application version 34 and you deploy application version 35; initially you say you want only 20 percent of the traffic to be routed.
It will create one new pod and it will terminate one pod from the old version, just like in the case of a rolling update. That's why your pod count will always stay at what you have asked for as the desired count. Okay, in terms of the spec: earlier we had defined blue-green; here you would be defining canary in the Rollout, and if you are using a traffic routing mechanism, you would need to define both of these things.
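The weight-and-pause behavior described above maps onto the canary strategy's `steps` list. A minimal sketch (the weights and durations are illustrative, matching the five-pod example: 20% terminates one old pod and creates one new one):

```yaml
  strategy:
    canary:
      steps:
        - setWeight: 20        # 1 of 5 pods now runs the new version
        - pause: {}            # indefinite pause; resume with a manual promote
        - setWeight: 40
        - pause: {duration: 30s}
        - setWeight: 80
        - pause: {duration: 30s}
        # after the last step, 100% of pods run the new version
```

Without a `trafficRouting` block, the traffic split is only as fine-grained as the pod ratio, which is the limitation discussed later.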
You will see that in the third demo that we will be doing. And here, as I said, you can control the traffic that you are routing, and ultimately the number of pods that you are creating. Let's say I am running five pods and I initially set the weight to 20: what it will do is, out of the old five pods, it will terminate one and create one new pod of the new version. You can add a manual pause, as we did in the earlier case as well, and do some testing on your new pod. Only 20 percent of your live production traffic would be going to this new version. Okay, if you are happy with the results, then you can set further weights, and accordingly it will create new pods of the new version and at the same time start terminating the old pods; you can set up the duration for each step as well. So yeah, let's...
...see this more by doing some hands-on. So here you can see, this is option B, right; these are the specs that I just now described. You can set a weight, and you can add whatever number of steps you want. Right now, in this example, we are not using NGINX or any other traffic routing mechanism; we will do that in the last demo. Okay, yeah, again, we will be creating one Ingress in this case here for this demo.

So, clear to all? Let's just do a demo of it. Again, this example, as you can see, I have already created here; it will be available for you even if you want to play around with it later.
So
you
would
be
able
to
see
again
the
rollout
objects
been
created
here.
You
would
be
able
to
see
what
things
we
have
defined
there.
Already
five
parts
of
old
version
is
running.
We
have
defined
the
steps
as
well
for
canva
how
it
should
progress
ahead.
Okay
and
now,
let's
do
the
promotion
of
new
version.
A
You would be able to see your new version, the yellow dots, also coming up here as a new revision. As you see here, it is now waiting on the pause. Initially you were running five pods; right now one pod of the old version is terminated and a new pod has been created with the new version. That's why, out of five pods, four pods will be showing the older version and one pod would be showing your new version, and that's why you are able to see a very small amount of yellow running in here.
A
Okay, since this is just a demo, these pause durations have been kept in terms of seconds; in production you would increase this value to a much higher number. So now your entire new version is running.
A
Okay, so that's how the canary works. You can control the traffic routing to a certain extent.
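
The canary flow described above (five replicas, an initial weight of 20, a manual pause, then further weights) can be sketched as a Rollout spec. This is a minimal illustration, not the exact manifest from the demo; names and durations are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 20            # 1 of 5 pods replaced with the new version
      - pause: {}                # waits indefinitely until a manual promote
      - setWeight: 60
      - pause: {duration: 30s}   # timed pause; use much larger values in production
      - setWeight: 100
```

Without a traffic-routing integration, the weight is approximated purely by the ratio of new-version to old-version pods.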
A
Let's go back to the slides. Okay, so as you have seen, the canary is kind of a simple one. It does not require any special service mesh, which is a bit of an advantage over doing a rolling update, but you will not be able to do a fine-grained canary weight without running a large number of pods. Compare that to blue-green, where you would be able to roll back immediately, because your old version is still running as a standby.
A
In the case of canary, rollback is comparatively slow, because to shift from, let's say, the yellow version back to your blue, that is the old version, you would need to terminate one pod here, then create one new pod there, and that is how it would go. And yeah, you would be doing production-traffic testing only there. But these disadvantages can be overcome by using analysis steps for the canary, and that's what we would be doing next.
A
Argo Rollouts has this very amazing feature called analysis. Someone did ask just now whether, by passing custom headers, they can do analysis here; there the manual intervention is needed. Instead, you can ask Argo Rollouts itself, thanks to this analysis feature, to do all that testing for you, and based on the result of your analysis it can decide whether it should roll out further or roll back the new version's deployment. Okay, and there are different ways available.
A
The second approach is inline analysis, where you add the analysis as a step during your deployment, and for blue-green deployments you can do pre-promotion analysis and post-promotion analysis. Again, the sole purpose of this analysis component is to let you automatically decide whether to roll out or roll back, and in what scenario it should do that.
A
Yes, yes, you can do that; I would be showing the specs for that as well. Okay, so you can use any of your monitoring tools: you can use Prometheus, Datadog or any other. In my case I had initially used Datadog; then the client had some other requirements, so we created a custom job as well. So as part of this analysis you can create your own custom Kubernetes jobs that it will trigger.
A
Okay, so compared to background analysis, which runs continuously, in the case of inline analysis you add it as a step, so the rollout will not move ahead unless and until this analysis is completed. It is more suitable for doing benchmarking kinds of tests.
A
For blue-green, you can add this as a pre-promotion analysis, where you would be running your templates. In that template you would be running either your custom job or, let's say you have some monitoring tool, whatever you are checking there, like a 200 status or any metric, you would be able to check here. Once you are confident enough that the new version has been rolled out correctly, then it would do the traffic shifting from the old to the new version. As part of post-promotion analysis, even once the traffic has been routed to the new version, you can do the analysis part again. So it has both of these capabilities available.
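
A minimal sketch of where pre- and post-promotion analysis attach on a blue-green strategy; the template and service names here are assumptions, not the demo's actual manifests:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app
spec:
  strategy:
    blueGreen:
      activeService: demo-app-active     # receives live traffic
      previewService: demo-app-preview   # receives test traffic only
      prePromotionAnalysis:              # must pass before traffic shifts to the new version
        templates:
        - templateName: smoke-test
      postPromotionAnalysis:             # runs again after traffic has shifted
        templates:
        - templateName: error-rate
```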
A
Okay, so I think this is what you were asking, right? So as part of this analysis, they have providers for Prometheus and, you can say, for other monitoring tools. In this example we are using the Prometheus provider, and via this provider we are running a particular query.
A
That is the query one would normally run directly, by logging into the observability tool, to check whether a new version is running correctly or not. Like here in this case, we are checking for 500-related errors against a threshold of 0.95, as in, you can say, around one percent of errors.
A
If the new version is getting errors beyond that, then we tell the rollout to stop the promotion and take us back to the older version. You can run custom jobs as well; the difference is the provider. Here, in this case, you would be able to see Prometheus as the provider.
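
The Prometheus-provider check described above, failing the rollout when the success-rate condition (0.95 in the talk) isn't met, might look like this. The address, metric names, and argument are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    interval: 1m
    failureLimit: 3                      # abort and roll back after 3 failed measurements
    successCondition: result[0] >= 0.95  # rollout proceeds only while this holds
    provider:
      prometheus:
        address: http://prometheus.monitoring.svc:9090
        query: |
          sum(rate(http_requests_total{service="{{args.service-name}}",code!~"5.."}[5m]))
          /
          sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
```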
A
We would be adding a step to do the analysis; you would be able to see it here. So I am doing an inline analysis, where I am asking the rollout to use this template, and this template has been created using the AnalysisTemplate object. And here, just for demonstration purposes, I am running a busybox container.
A
We all know that once a busybox container completes successfully, it exits with status 0. So, hypothetically, imagine our analysis template is telling the rollout: I was able to do the analysis successfully, please go ahead and do the further rollout promotion.
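
The busybox check corresponds to the Job metric provider: the analysis passes when the job's container exits 0. A sketch along the lines of the demo (names and commands are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: smoke-test
spec:
  metrics:
  - name: smoke-test
    provider:
      job:
        spec:
          backoffLimit: 0
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: check
                image: busybox
                command: [sh, -c]
                # exit 0 => analysis succeeds and the rollout is promoted;
                # change to `exit 1` to force the rollback shown later
                args: ["echo running checks; exit 0"]
```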
A
Okay, and later we will make it fail as well, by inducing a command which will make sure that this analysis gets failed; we will just uncomment it. Let's see both demos. Okay, so what happens eventually is, if you recall the architecture I mentioned: the analysis template creates an AnalysisRun kind of object, and that analysis run eventually creates a custom Kubernetes job, and that job runs whatever commands you gave it. Here it will create a Kubernetes job that runs this busybox image.
A
Currently it is running five pods. You would be able to see that, thanks to the AnalysisTemplate we have defined, it has created an object called AnalysisTemplate, a custom CRD, you can say. When we do the rollout, this analysis template will create an object of kind AnalysisRun. Right now you would not be able to see it; it is showing "no resource found". And this analysis run will create that Kubernetes job for us.
A
Okay, right now it is not showing. So let's do the promotion from the old version to the new version here again. Let's say instead of blue I now want the yellow version, so let me run the yellow version of the application. You would be able to see here that this canary's analysis template will now create an AnalysisRun object once it reaches that step. Okay, so you would be able to see 20 percent of traffic would be routed here.
F
So Nina, can we say that this blue-green and canary, it's all a set of procedures? So if you get an
F
No, no, no. We just gave here, in this workshop, an example of how to do this, correct? So basically we started with Argo Rollouts, and we created the objects, right, that is application-specific and all that, and then there is a technique by which you do this kind of blue-green, and then you went to canary, correct? So if we follow a similar kind of steps to achieve it, I think we can be successful in any other production, for any other customers as well, correct?
A
Yes, right. But what I believe is: see, the challenge for us is that when you want to develop something, it would take 30 percent of your effort, but maintaining it would take 70 percent of your effort. So my personal philosophy is to use it as a managed service: if someone has already solved a problem for you, you can leverage that. But yeah, if you have some custom need and you want to develop something like that,
A
yes, using the same principle as you mentioned, you can develop something on your own.
F
Something or, I mean, not something on your own. Basically, what I understood is it's a kind of a procedure, right? You do it and you can replicate it, or is there something more to it? Because your statement mentioned that there is some 30 percent and then 70 percent is on the maintenance. What does that
A
mean? Okay, so initially I thought you were saying that you would like to create a similar kind of custom tool on your own. That's what I initially felt.
F
No, no, no, I'm asking a very basic one. See now, as you said very clearly, let's say, I don't know, maybe I'll put myself in this thing: I come from CD only, okay, continuous deployment. Now I want to have an idea of progressive deployment. Okay, suppose I want to take it like this, and then suggest that yes, Argo CD and Argo Rollouts are there, okay, it is open source and it is also trusted.
F
So probably in our, you know, let's say in a DevOps setting, we could try this out. We can have a demo, so we can create our objects here, application objects and all that, and then we set it up like this, and then we can have our blue-green, and then we can go to canary as well.
F
So this can be done, right, following these steps?
A
Yes, yes, following these steps you would be able to do it. That's how we did it. When the client came to us, after giving the proposal we picked one of their microservices and we tried to show them doing it this way. So yeah, I mean, that is how you can approach or propose this kind of solution.
A
For the need that I had with the healthcare client, as I said, it had, I think, 28 or 29 microservices that we converted from Deployments into Argo Rollouts.
A
So there is no upper cap for that, I would say.
A
Okay, so Argo Rollouts has an integration available for Istio as well for traffic routing, so yes, you can do that, and I haven't seen any upper limit on how many services you can do, as long as you have good, robust clusters and so on. My approach, in my case as well, was that I did not directly go and do it for all 29; we started one by one, service by service, in a phased manner.
A
That's one thing I would add as a learning: it's a continuous, iterative process. When you want to shift from CD to progressive delivery, start with one, then show the success of it, then plan for a few more, and that way you can go ahead with it.
A
In my scenario I did not get any issues. The client had GitHub Actions available, so via GitHub Actions we were eventually doing all these things; we did not have to use the console specifically for the same purpose.
A
Also, the community is quite good in terms of response; you would be able to get all the help you need in case you get stuck.
A
Some of the companies, like Intuit itself, the official maintainer of it, are using Argo Rollouts for all their microservices, and I believe their number is quite similar to what you are suggesting.
A
Okay, so I hope it's clear till here. Now let's delete this setup. This time we will do a hands-on to see how it rolls back in case it is unsuccessful. So I have deleted the current setup that we did here; you would be able to see it has been stopped.
A
Okay, so now what we will do is see how it rolls back in case it finds issues during the analysis. Use this IDE console; here you are able to see I have opened this analysis. You can open it at your end as well and just uncomment this. Okay, we are purposely trying to make our analysis fail, because after running this command the job will exit with a non-zero status, and that will make the analysis believe: okay, my analysis is not successful.
A
It passes that message to the rollout, saying: see, the new service seems to be not working as expected. And then Argo Rollouts will say: okay, then let me roll it back to the old version itself. Okay, so just uncomment it, save it here, and now let's run the setup again, following the same steps that we did earlier.
A
Let's see if the objects have been created. So yeah, the new objects are getting created here. Okay, you would be able to see its age is six seconds.
A
Ideally this analysis should get failed. So, if you see, the analysis failed, so it terminated the new-version pod and created a pod of the old version itself. So whatever yellow version was initially available has disappeared, and it is now again fully running on the old version. You would be able to see those analysis runs failing here.
A
It says here in the message, if you see: rollout aborted the update to version 2 because the metric test failed. And that's why, based on the failure limit, it has rolled it back to the older version. So I hope you are able to see the capability, how it rolls ahead as well as rolls back without any human intervention for us.
A
Let's go back to the slides. Okay, so apart from this, as I said, Argo Rollouts does provide you with traffic management capability as well, and the advantage of this traffic management is basically that you are able to reduce the blast radius.
A
That means in case of failure, the effect will not be as hazardous, I would say, not as much of a nightmare, compared to if you had done a 100 percent rollout of the new service. And there are various techniques: either you can use a raw percentage, where you would be constantly sending only, say, five percent of traffic to the new version; the second is header-based routing, the one that we opted for in our project; and the third is a mirror.
A
You can mirror all the traffic that you are sending in production to your new version as well. And these are the different traffic routers already supported by Argo Rollouts: you can use AWS ALB, or you can use Istio, the NGINX controller, or any other SMI implementation for the same.
A
Okay, we will be doing a demo of it as well, the next demo that we would be doing. I have used a Helm chart, so whatever we were able to see so far as rollouts and services, I have just wrapped them under one Helm chart to show how it looks, and just like any other Kubernetes deployment where we use these kinds of variables to replace values, you can do the same for Argo Rollouts.
A
The key difference here, when you want to use a traffic-routing mechanism, is this section you would add here: you would mention a stable service and, thanks to these annotations, it will create a canary version of the service as well. And here, if you see these two parameters, canary-by-header and canary-by-header-value: for demonstration purposes, as well as in my experience, we try to pass custom headers.
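
The header-based routing section described above, for the NGINX ingress controller, roughly looks like this. Service, ingress, and header names are illustrative, not the demo's exact values:

```yaml
strategy:
  canary:
    canaryService: demo-app-canary     # created alongside the stable Service
    stableService: demo-app-stable
    trafficRouting:
      nginx:
        stableIngress: demo-app-ingress
        additionalIngressAnnotations:
          canary-by-header: canary      # requests carrying `canary: yes` always hit the new pods
          canary-by-header-value: "yes"
    steps:
    - setWeight: 20
    - pause: {}
```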
A
So imagine our QA team is doing testing. While doing the testing they can use these custom headers, get connected to the new version of your service, and perform whatever tests they want on it. This would be helpful, I would say, in the case of blue-green as well as canary.
A
So I think someone was asking about the same; this way you can add the custom headers and paths as well. So we will be doing a demo around nginx to pass these header values, and we would be able to see we are directly hitting the new version of the pod, the new version of your application service only. The rest all remains the same; the only section that you would add in your rollout is this one.
A
Okay, so we will create a namespace. It could be of any name; I just thought of this one when I was doing the demo, so I added it here. So the namespace has been created here. Now go back to your IDE. Okay, here you would be able to see a demo has been added, called the argo traffic management demo, and inside it, in your values file,
A
try to search for "host". Okay, so I have added here this particular hostname. You would need to do similarly for yours as well. A very easy way is to just copy your entire URL, and from that you would be able to get your header hostname as well. If you see here, from this value of your URL, just keep this particular name, remove the rest, and update that value here.
A
So in the repository that you have cloned, it would be showing this particular hostname, which is mine. You can replace it with your hostname: just copy this URL, copy from the hostname, remove the https part, copy from this section onwards till the Cloud Gurus section, and remove this next part as well.
A
Here, when you are using traffic routing, you need to mention which is your canary service and which is your stable service, mention your nginx ingress, and mention these headers. And if you see, in terms of values, what I am passing here as part of the demo is just two custom headers and their values: as you would be able to see, I am passing one header named canary with the value yes. You can pass anything here.
A
Okay, you would not be able to see anything in a UI, this kind of UI, because it is not built that way; it is just a plain nginx container that I am running here. So once you change the hostname, just do a helm install. I have given this name; just run this command.
A
Yeah, so our two pods are available now. Okay, so now, to upgrade: initially you would be able to see the version tag here as 1.19; we will update it to 1.20 and do a helm upgrade. Once we do the helm upgrade, it will, in the back end, do the same stuff: it will create a new ReplicaSet via Argo Rollouts, which would be running this particular version.
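
The upgrade itself is just a normal Helm value change: bump the image tag and upgrade. The release name, chart path, and values layout below are assumptions:

```yaml
# values.yaml (fragment)
image:
  repository: nginx
  tag: 1.19.0   # change to 1.20.0, then run:
                #   helm upgrade argo-traffic-demo . -n <namespace>
                # Argo Rollouts creates a new ReplicaSet and starts the canary steps
```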
A
Look for the image. So now just comment out this one and use a version like 1.20.0.
A
Since we are using the canary version, if you remove the header, sometimes the request will land on the old version and sometimes on the new.
A
But from an internal testing perspective, I would say you can let your QA team do this kind of custom header-based routing, or any other traffic-routing mechanism, and that way they would always be able to connect to the new version of the service.
A
Yeah, it's okay, I think.
A
I hope you are not checking on the UI. So okay, the 404 is expected to get, because we are not using any index.html page, any customized page, where it would land. So if you see on my machine as well, it is showing the 404; that is the expected output. All that we are trying to see here is that whenever you pass the custom headers, it should land on your new pods alone.
A
Okay, so that 404 is expected because we have not created any landing page for this; just make sure to check that your new pods are getting the new version of the service. Okay, now, if I have to show here: just now, if you see, I promoted the new deployment, which created the new pod 66 seconds ago. Now, let's say, if I
A
So this is the new feature added to Argo Rollouts, called experiments. What does this experiment do differently? It creates a set of pods, just like in the other demo we did where it creates a new version of the pods. Here it will create those pods, but you can set a kind of expiry on them, so they are ephemeral, and live traffic would not be going to them.
A
Even if you use the canary version, no actual live traffic would be going to them. This experiment feature is basically launched so that you can do other, more destructive sorts of testing as well. When I say destructive testing, that means you would not be doing the actual application functionality tests, but purposely running tests to check what the points of failure of your application are.
A
Or, let's say, you want to do not only A/B-version testing but A/B/C, where you can create multiple experiments, and those experiments will create self-isolated versions of your application services apart from the one that is stably running. Against those isolated versions of your application you would run the set of different tests that you want to perform.
A
Okay, so in terms of the spec, I would say you can just add another section here; compared to what we did before, you would add this section as an experiment, and in this experiment, again, you can run your analysis as well. Okay, so the main advantage is that it will not have any cascading effect on your stable running service; those services will be running as-is. When you want to do some experimentation alongside your production-running services, that's where this would be helpful.
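
An experiment can be embedded as a canary step; the ephemeral pods expire after the set duration and receive no live traffic. A sketch with illustrative names:

```yaml
strategy:
  canary:
    steps:
    - experiment:
        duration: 5m                 # experiment pods are deleted after this
        templates:
        - name: experimental
          specRef: canary            # run pods from the new version's pod spec, isolated
        analyses:
        - name: stress-test
          templateName: stress-test  # e.g. destructive tests against the isolated pods
```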
A
Okay, I was not able to create a complete demo for it, but I have added an example for the same. If you look in the examples, I have added one; you can just run those examples directly to get a feel for it.
A
Okay, and so, coming back to a question: let's say you are currently running all Kubernetes Deployments, and now you want to go from Kubernetes Deployments to the Rollout option. What changes would you ultimately need to make? You need to change the kind from Deployment to Rollout,
A
you will change the strategy, and you would need to create an additional service to route the traffic. Okay, so here you would be able to see it was a Deployment; now the same object is converted to a Rollout.
A
In terms of strategy, this was the default Kubernetes strategy, which you replace with the canary one, and in terms of services, if you have one service you will create another one. So I think this is what you were looking for, Tulsi, right, if you want to do a demo for your client to show whether you should go for Rollouts.
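
Converting an existing Deployment is mostly those three edits: the kind, the strategy, and a second Service. A before/after sketch (labels, image, and service names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1   # was: apps/v1
kind: Rollout                      # was: Deployment
metadata:
  name: demo-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: argoproj/rollouts-demo:blue
  strategy:                        # replaces the default RollingUpdate strategy
    canary:
      canaryService: demo-app-canary   # the additional Service to create
      stableService: demo-app-stable
      steps:
      - setWeight: 20
      - pause: {}
```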
A
Yes, so these are the only changes. In terms of changes you don't have to do anything different, just that, as step one, you install the controller in that cluster, and that controller is then capable of observing, keeping an eye on, any object of kind Rollout and doing the actions for you.
C
And here's one quick thing. While running, can you just go back to the previous slide? Yeah. So, my observation: once I re-run that particular exercise with whatever you have given, I might understand it better. So in this step you have something called step weight 20, and you have given a pause, correct? So it stops there, and only after I run the Argo promote command does it go to the next steps, right?
C
Okay, so even after the step with weight 40 I can give one more pause, and I can hold on there, just to play it safe.
A
I mean, in a regular scenario, like in my use case, there were no such pauses added; it was the analysis part that we added, and that analysis was basically doing this work of making sure whether my new service is running correctly or not.
A
You have already seen this Helm chart that I just showed; nothing different, you would just be templatizing it, just like you do for any other Deployment object.
A
Okay, and yes, these were my lessons from the work that we did. Make sure that it isn't a very short window in which to decide whether it should promote ahead or not; from what I have seen, companies keep it running for a long duration. When it's a long duration, like maybe a day, you can keep your older version running as well. Okay, second, and most important: carefully choose metrics when you are doing auto-promotion or auto-analysis.
A
That metric should be the right one on which to base the decision. Consider whatever you are doing right now as manual checks, where you are confident that this is how I should do a check, and add that as part of your analysis, so that the automated process does it. The third thing I would say is: yes, in one go you would never get it right.
A
It will always be an iterative process, just like the overall DevOps process. Start small and then go progressively, and try to improve something more each time. That's how I would suggest using canary or any kind of deployment strategy.
A
Needless to say, don't try it directly in production; start doing these things in staging and then go to production. And last but not least, I would say: don't try to compare your canary result directly with exact production numbers, but compare it with baseline results, because sometimes, in the case of Java-like applications, they keep accumulating those results. So the better thing is to compare against the average results that you have. That's all I had from my end.
A
Anyone has any questions for me? Please go ahead.
D
Okay, all good, guys, see you. Thank you once again for putting in all the effort, and looking forward to more.