From YouTube: Lightning Talk: GitOps and Progressive Delivery with Flagger, Istio and Flux - Marco Amador, Anova
Description
Organizations that use progressive delivery are able to ship new code faster, reduce risk, and continuously improve the customer experience. Progressive delivery is an essential component of DevOps, and feature management is the primary way it works. In this talk, Marco Amador (Anova) will describe their journey into progressive delivery with some hands-on demos and explain why they've chosen progressive delivery for their multi-cluster, multi-region Kubernetes environment.
So Anova is the global leading provider of industrial IoT, and is the result of a merger of five well-established companies; with that we had five platforms and five different ways to build and deliver software. So we created a new, unified platform, trying to follow cloud native culture and best practices.
We also needed a way to configure the roles and the tenants themselves, and for that we used GitOps, more specifically Flux. We started with Flux v1, and for a while now we have been using Flux v2, which brings us a few controllers: source-controller, to handle all our Git repositories; kustomize-controller, which applies kustomizations on those Git sources and creates the Kubernetes resources themselves; and helm-controller, to take care of the Helm releases and perform all the Helm operations that we need to install our Helm releases.
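As a rough illustration of how those controllers fit together, here is a minimal sketch of the Flux v2 custom resources involved; the repository URL, names, and paths are placeholders, not Anova's actual configuration.

```yaml
# Minimal Flux v2 sketch: a Git source, a Kustomization built from it,
# and a HelmRelease reconciled by helm-controller. All names, URLs and
# paths here are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository          # watched by source-controller
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/platform-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization          # kustomize-controller applies ./tenants
metadata:
  name: tenants
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./tenants
  prune: true
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease            # helm-controller performs the Helm operations
metadata:
  name: my-service
  namespace: apps
spec:
  interval: 5m
  chart:
    spec:
      chart: my-service
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: charts
        namespace: flux-system
```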
Flux also scans our container image repositories, and if any new tag matches an image policy, a patch is applied in our repo, or in any Git repo that references that image policy, and then we roll out the new Helm release with the new version.
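A minimal sketch of what that image automation can look like with Flux's image-reflector and image-automation controllers; the image name, semver range, and repository details are assumptions for illustration.

```yaml
# Hypothetical Flux image automation: scan a registry for new tags,
# select the newest tag that satisfies the policy, and commit the
# patch back to Git, which triggers the rollout.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository        # image-reflector-controller scans the registry
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: registry.example.com/my-service
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy            # picks the latest tag in the allowed range
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: ">=1.0.0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation  # writes the new tag back to the Git repo
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: platform-config
  git:
    commit:
      author:
        email: fluxbot@example.com
        name: fluxbot
  update:
    path: ./apps
    strategy: Setters
```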
But not so fast, because sometimes we don't want to simply roll out the new image, because a lot of things, or a lot of bad things, can happen. We might be introducing a regression. We might be missing some integration tests that we forgot to run, or suddenly our APIs might start taking longer than our SLO allows.
So we are using Flagger for that, which has many deployment strategies: canaries, A/B testing, and blue/green. We are mostly using canaries, where the principle is very simple: we start, or we spin up, new containers using the new image that we detected with our image policy, and start progressively sending or forwarding traffic to those containers, during what is called the canary analysis.
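To make that concrete, here is a minimal sketch of a Flagger Canary resource for an Istio mesh; the weights, thresholds, and names are illustrative assumptions, not the values from the talk.

```yaml
# Hypothetical Flagger canary: shift traffic in 10% steps up to 50%,
# rolling back if success-rate or latency checks fail repeatedly.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-service
  namespace: apps
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  service:
    port: 8080
  analysis:
    interval: 1m
    threshold: 5        # failed checks before automatic rollback
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99       # percent of non-5xx responses
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500      # milliseconds
        interval: 1m
```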
Sometimes we don't have enough traffic to have metrics, so the canaries wouldn't progress without the metrics for us to analyze. So we can use different strategies, like generating our own traffic using tools like hey or k6, where we can perform load tests during the canary analysis. While watching how the canaries are progressing, we can keep looking at our dashboards to see how the canaries are performing.
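In Flagger, that kind of synthetic traffic is typically wired in as an analysis webhook pointing at the loadtester; here is a sketch under that assumption, with a placeholder service name and target URL.

```yaml
# Hypothetical load-generation webhook inside the Canary analysis:
# Flagger calls the loadtester, which runs `hey` against the canary
# during each analysis interval.
analysis:
  webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://my-service-canary.apps:8080/"
```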
The canaries are ephemeral: we test the new version, and the canary only lasts while we are doing the canary analysis.
Sometimes we want to keep the new version from being exposed to all of our users, so we are using dark launches for the cases where we cannot test the new feature using feature flags.
For instance, when we started using a completely new architecture, we wanted to make the completely new version available to only a specific set of users, and for that we are using an Istio virtual service feature called delegation, where, depending on an HTTP header for instance, we can delegate the traffic to another virtual service from Istio. In this case, anyone who knows the right header will hit the new versions that we are releasing, which are available only for the users that know that specific header, and we can keep running canaries the same way.
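As an illustration of that delegation feature, here is a minimal sketch with two Istio VirtualServices; the header name, hostnames, and service names are hypothetical.

```yaml
# Hypothetical dark launch via VirtualService delegation: requests
# carrying a specific header are handed off to a separate
# VirtualService that routes to the new architecture.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
  namespace: apps
spec:
  hosts:
    - my-service.example.com
  http:
    - match:
        - headers:
            x-dark-launch:
              exact: "enabled"
      delegate:                 # hand off to the dark VirtualService
        name: my-service-dark
        namespace: apps
    - route:
        - destination:
            host: my-service    # everyone else stays on the stable version
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service-dark
  namespace: apps
spec:
  # A delegate VirtualService must not set hosts or gateways;
  # those are inherited from the delegating route.
  http:
    - route:
        - destination:
            host: my-service-new
```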
We still have a few challenges. For instance, we have a different observability stack on every cluster, and we'd like to have a unified observability stack. And that's it, I hope you enjoyed it.
[Audience question, inaudible]
True, true, but we are using specific containers for that specific service, where we define a bunch of k6 tests that we perform. I think that is probably even more accurate, because we can just go through them.
[Audience question, inaudible]
Well, we are keeping it upgraded basically manually. Right now we are using 1.13, which I guess is pretty up to date. Flux is the only thing where we are actually not using image automation; we could also use image automation to update Flux itself, but we are not doing that. We are actually doing it by hand, because we want to test it first in staging: we run flux install and so on, but Flux is in its own GitHub repository.
About the talk: how are you securing your Git repositories, to make it so that only privileged people could, you know...
[Inaudible exchange]
How is that configured? Yeah, we have one single Helm chart that supports it. It's very flexible, so they can decide what latency they... basically, they can decide the SLO for that specific service: it can be latency, it can be error rate, it can be whatever they want. They have complete ownership. The Helm charts are flexible enough to be configurable as we want.
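A sketch of how those per-service SLO knobs might look as Helm chart values that get templated into the Flagger analysis; the value names are hypothetical.

```yaml
# Hypothetical values.yaml fragment: each team owns its SLO thresholds,
# and the shared chart templates them into the Canary analysis metrics.
canary:
  analysis:
    slo:
      maxLatencyMs: 500    # request duration budget
      minSuccessRate: 99   # percent of non-5xx responses
```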