From YouTube: Episode 9: Understanding GitOps
Description
What is GitOps?
GitOps is a continuous deployment implementation using Git as the source of truth for declarative infrastructure and applications. Commonly used in Kubernetes and cloud-native environments, GitOps is gaining popularity with organizations modernizing to Kubernetes environments.
In this Hoot, Yuval walks through GitOps and demos GitOps tools like Flux.
Code Samples: https://github.com/solo-io/hoot
Suggest a topic to cover here: https://github.com/solo-io/hoot/issues/new?title=episode+suggestion:
A
Today we will be talking about GitOps with Stefan from Weaveworks, who will do a demo of Flux. We do these Hoot episodes once every two weeks at 1pm on Tuesdays, so feel free to join, and subscribe to get notified of the next episode.
A
So, do you want to introduce yourself briefly?
B
A
Here we go. We'll talk a little bit about GitOps in general. So what is GitOps? The idea with GitOps is that, because modern infrastructure is what's called declarative, we can represent its state in a declarative form; essentially, as YAML, we can represent its state in source control. GitOps is kind of the next step after configuration as code: you have your configuration in source control already, but instead of taking it from there and applying it to the cluster...
A
It's the other way around: the cluster will watch Git and will apply the configuration itself as it changes in Git. So essentially we're using Git to do the operations: making a change in your Git repo will automatically do the operations required to deploy those changes to your cluster. That's, in a nutshell, what GitOps is as far as a definition: operations via Git. Next slide: why would we want to do that?
A
It gives us all the advantages we know and love from source control in Git. We can have an audit log and see exactly what happened in our cluster. We can revert to a previous state. We can review changes in pull requests, and we can even use pull requests to test those changes, kind of like in a CI flow. So all the stuff that we like and love from source control development we can now apply to our infrastructure.
A
So that's the motivation for why we want to do GitOps. Talking briefly about how we're going to do that, and again, this is all very high level; we're going to see a demo explaining how it's done for real very, very soon. Anything that can take your state from Git when it's modified and apply it to a cluster can be considered GitOps. So you can build homemade stuff with your CI/CD pipelines.
A
Add some Git webhooks, essentially get some sort of a robot to watch your repo and, whenever it's modified, deploy those changes into your cluster. And if you don't want to do all that work yourself, there are off-the-shelf tools to do that for you. One of these tools, Flux, is something we're going to drill down into a bit more today in the demo. Flux is a purpose-built tool to help you do GitOps with Kubernetes and your Git repo.
A
But yeah, essentially code is more concise because it's imperative: you can have functions, you can have loops. Declarative is more verbose due to its declarative nature: you have to declare everything that's happening. So a cluster state that's managed in Git will have all these YAMLs that represent the state. You'll be seeing a lot of YAMLs.
A
Flux, in addition, has support for Mozilla SOPS, which allows you to encrypt arbitrary YAMLs, not just Secrets, which is nice. Whatever path you choose, just take that into account. You know, one of the classic ways to get a bitcoin miner in your cluster is having a secret in a public Git repo; bots are scanning for this stuff all the time. So just be aware of that.
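In Flux v2, for example, the kustomize-controller can decrypt SOPS-encrypted manifests before applying them if you point it at a key. A minimal sketch (the secret name here is a placeholder, not from the episode):

```yaml
# Fragment of a Flux Kustomization: decrypt SOPS-encrypted YAMLs before applying
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg   # Kubernetes Secret holding the private GPG key
```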
A
Another thing to take into account is that the cluster state may drift from the state declared in Git, and sometimes that drift is actually desirable.
A
So, for example, if you have a pod autoscaler that scales your deployments, you want to make sure that your GitOps solution will not scale them back just because the replica number in Git is one while the autoscaler tries to get them to 10. So you want to pay attention to these details, and there are solutions; we can talk about it later. Another thing to take into account, as far as intentional drifts from the cluster state, is automated rollbacks.
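One common way to avoid that particular fight between the autoscaler and the sync loop (a sketch, not something shown in the episode; names and thresholds are invented) is to omit `spec.replicas` from the Deployment kept in Git, so the HPA alone owns the replica count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  # no `replicas:` field here, so a GitOps sync won't reset
  # whatever value the autoscaler has written
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: ghcr.io/example/app:1.0.0
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```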
A
If you try to deploy something and it doesn't work, it failed and was rolled back, you want to make sure that your solution doesn't kind of forcefully reapply it, but is aware that something has failed and should wait. All right, with that, I'll hand it off to Stefan to do a demo. If there are any questions, I'll be monitoring the chat, so feel free to ask. Let me stop my sharing. Here we go.
B
Sure. I have a couple of things to add to what you said.
B
The idea of GitOps is not only that you apply what's in your Git repo on the cluster when you make a modification; it's that you continuously apply that state. Say you do your homemade deployment with a CI tool: any CI out there is capable of reacting to a git commit, and you can have kubectl in there and apply the whole thing every time on the cluster. The problem that GitOps resolves is correcting drift.
B
If you use a GitOps tool, a specialized GitOps tool like Flux and others, then if someone makes a change in the cluster, let's say changes the HPA, or changes a limit, or changes something that's declared in Git, Flux will detect that change and will revert it, something that you cannot do from CI. Another issue with driving all of that from CI is the fact that every time you add a cluster to your fleet, or, let's say, you have a staging environment...
B
The advantage of GitOps is that, instead of connecting to the cluster from outside, it runs inside the cluster. So you can spin up a cluster, let's say in a private network where no one has access from the internet to the Kubernetes API, and you can still synchronize with public repositories. You can reach out to GitHub from inside your cluster, pull the changes inside and reconcile them, without exposing your kube API.
B
So GitOps has a lot of advantages in terms of history and stuff like that, but it also brings a new security model, where the cluster is reconciling state; it's not something from outside that connects to it as cluster admin and changes that state. From a security perspective, that's something a lot of companies love about GitOps, and that's why I think, in the last couple of years, it has been getting more popular.
A
Yeah, that makes a lot of sense, and it kind of shows the difference between them. You know, a lot of the time you're in a startup, you just want to do something, and you think: oh, I don't want to learn this tool.
A
Flags
look
at
the
guide,
it's
so
long
I'll!
Just
do
it
in
my
ci
myself,
and
essentially
this
tool
also
encompasses
all
those
lessons
learned
that
over
the
years,
so
I
I
personally
think
you
know,
there's
a
purpose
built
too
for
that.
It's
probably
worth
at
least
understanding.
B
Yeah. So we developed Flux initially four years ago, when Kubernetes was young and Weaveworks was very young. Why we did it was because we had, and we still have, a very small team of SRE people, and debugging CI jobs and everything like that takes a lot of time. So we basically made Flux to serve our own purposes. We said: okay, like we do with code, we want to push something to Git and the cluster itself should take it from there.
B
If it fails, it should let us know, but I don't want to build CI pipelines every time I'm changing something or I'm adding a cluster, and that's how it all started. Then there was this other need: in your CI pipeline, what you basically do is build a container image and push that image to the registry.
B
A replication controller, yeah. So you had to go into a replication controller, into the image tag, and manually change the tag: okay, now it's not running this commit, I'm moving to a different one. In order to automate that, because we didn't want to make manual changes to YAMLs every time we were building images, we made Flux aware of container registries. So what Flux does is scan container registries.
B
It finds that, okay, someone pushed a new image, and it knows how to patch that particular YAML and write it back to Git. So it scans the registry, detects a new image, writes it in the YAML, commits it, and pushes the YAML back to Git. Then the other side of Flux will say: hey, there is a change in the Git repo that I made, and I will apply it.
B
So Flux version one does a bunch of stuff. Later on, Helm got very popular, three years ago or something like that; Helm was the solution to, you know, trim down the YAMLs that you wrote. So we created a Helm operator, which is a thing that lets you define Helm releases like you would define a Kubernetes deployment. You can have a HelmRelease YAML, you place it in your Git repo, that gets applied on the cluster by Flux, and then there is this operator, the Helm operator, which says: hey,
B
I have to install or upgrade this particular Helm release, and it acts on that. So we developed the Helm operator. Then people said: hey, I want to get charts from Git repos, not only Helm repos. Great, so we had to duplicate the things that Flux was doing into the Helm operator. The Helm operator had to do Git operations: it had to connect to Git, authenticate to Git, pull Git repos inside the cluster and apply charts from there, install them or upgrade them, in Helm terms.
B
Over time, we've seen that both projects are growing. They have a shared code base, but it's really hard to maintain the two, and both of them have evolved into two monoliths, basically. So what we did with Flux version 2, and that's why Flux version 2 is a different project from Flux: it has its own Git repo, it's called flux2, and we also have flux and so on. We did the same as Linkerd did with Linkerd and Linkerd 2.
B
The idea was to break apart these controllers and create dedicated, let's say, microservices that will be dedicated to specific things. For example, now in Flux version 2 we have a controller called source-controller, and what the source-controller does is it knows how to connect to Git repos, how to pull changes from there, how to monitor Git tags, commits and branches.
B
As long as you have some kind of storage, for example Minio, which is a great S3 solution that you can run on any Kubernetes cluster, you can push all your YAMLs in there and you can say: hey, I want to reconcile my cluster state with this particular bucket. So in Flux version 2, by creating this dedicated source-controller, we could easily expand Flux to other things than just Git.
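A bucket source for a Minio/S3-compatible endpoint might be declared roughly like this (a sketch based on the Flux v2 source-controller API; the endpoint, bucket and secret names are invented):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: Bucket
metadata:
  name: manifests
  namespace: flux-system
spec:
  interval: 5m
  provider: generic          # works with Minio and other S3-compatible stores
  bucketName: k8s-manifests
  endpoint: minio.example.com
  secretRef:
    name: minio-credentials  # access key / secret key pair
```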
B
So once we made this source-controller component that can pull things inside the cluster, that means Flux is no longer limited to a single Git repo. That was one of Flux version one's main limitations: Flux version one will only synchronize with a single Git repo. If you want to add a second Git repo, maybe with other things that also need to end up on your cluster, you have to install a second Flux, yeah.
B
If you have multiple tenants, let's say multiple teams inside your organization, let's say a tenant equals a team, you don't want one team to be able to modify namespaces or objects that are created by a different team. So how can you do that with Flux version one? You install a Flux per team, and then you set up Kubernetes RBAC and you restrict that particular Flux instance to the namespaces that that team owns. But then what happens?
B
If you have 100 teams all sharing the cluster, you'll have to install 100 Fluxes, and so on. So these were the main drivers for developing Flux version 2: we want to add multiple sources, no matter from where, register them inside the cluster, and we want to ensure that we can do multi-tenancy without you having to install all the things per tenant.
B
So how we solve that in Flux version 2: we have source-controller, which pulls sources. Then we have specialized reconcilers. One is called the kustomize-controller, which can apply plain YAMLs from a source, or it can apply Kustomize overlays. The other one is called helm-controller, which is the next version of the Helm operator, and it can install, upgrade, test, roll back and uninstall Helm charts. And helm-controller and kustomize-controller both share the same sources.
B
So if, let's say, you have a Git repo that has charts and also has plain YAMLs, you register that Git repo once, and then you tell the kustomize-controller: hey, please apply this directory; and you tell the helm-controller: hey, please apply these charts from the same repo. That means we only pull the repo once. When something changes in the repo, we notify kustomize-controller and helm-controller, hey, there is a new change, through a Kubernetes event, and those reconcilers will act immediately. In Flux version one...
B
You had to wait like five minutes or seven minutes or something like that every time something changed, and that's yet another problem. If you have a staging cluster, maybe every time you do a git push you want that change to be reconciled immediately on your cluster; you don't want to wait five minutes, seven minutes, whatever.
B
So, in order to solve that issue, we created yet another controller, called the notification-controller, with which you can declare webhook receivers. You can say: hey, create a receiver for me, expose it through the ingress or a load balancer, and listen to events from this particular Git repo. When something changes there, let Flux know that there is a change and that it needs to pull it instantly into the cluster and apply it. So you can have this immediate effect if you set up webhooks.
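The receiver being described is itself declared as YAML; a hedged sketch for GitHub push events (resource and secret names are placeholders):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: github-receiver
  namespace: flux-system
spec:
  type: github
  events:
    - ping
    - push
  secretRef:
    name: webhook-token      # shared secret used to validate the payload
  resources:
    - kind: GitRepository
      name: flux-system      # source to reconcile when the hook fires
```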
B
Also, this notification-controller deals with incoming events, but it can also dispatch events. For example, let's say you add a deployment to your Git repo, and that deployment, after it gets applied on the cluster, fails: it crashes, it enters a crash loop, maybe the image URL is wrong. With Flux v1 you had no...
B
...way to know whether the new version actually rolled out inside your cluster, and if it didn't, to issue an event and tell me what the error inside Kubernetes is. The notification-controller can dispatch this kind of event to Slack, Microsoft Teams, Rocket.Chat, even your custom webhooks and so on. So we also improved a lot here in terms of observability: we issue Kubernetes events for everything that's happening, we can dispatch them to other notification systems, and we also expose Prometheus metrics. I'll try to show you a dashboard.
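Dispatching those events to a chat system is configured with a Provider and an Alert; a sketch (channel and secret names invented, API per the notification-controller of the time):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: general
  secretRef:
    name: slack-webhook-url
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error       # only forward failures
  eventSources:
    - kind: HelmRelease
      name: '*'              # watch all HelmReleases in the namespace
```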
B
The idea is, if you don't want to look at the Slack channel, you can also create alert definitions with Prometheus and Alertmanager, so in PagerDuty, or whatever you are using in your SRE team, you can get a ping: hey, that deployment failed; it got applied and then it failed. So those were, let's say, the main drivers for going from Flux version 1 to Flux version 2, which is a total rewrite.
B
There is no shared code between the two. Flux version 2 is written using the controller-runtime libraries from Kubernetes, and we use Kubebuilder, which is a tool to scaffold Kubernetes controllers very easily. We used these modern tools to build version two. Okay, I think I've talked too much. Let's...
A
B
Yeah, I mean, the part with health checks, for example, is not only enterprise; it's more about the small startups that don't want to set up yet another monitoring system. Because you can of course do that: there are, you name it, a bunch of services that allow you to deploy something on Kubernetes.
B
They'll monitor everything for you and do all that stuff. But from a continuous delivery perspective, many people expect that the tool that rolls out the new version should also deal with health checking and other things. In Flux version 2 we said: we're not going to deal with that, there are so many solutions. But yet again our users came back to us and said: hey, we would like some alerts here, because I don't want to pay for yet another thing to do that.
A
B
A
Some customers, you know, they provide their own abstraction layer to their teams, because, you know, Kubernetes YAMLs are Kubernetes YAMLs. So it is kind of a way to enable these custom use cases. It does make a lot of sense in my head, kind of future-proofing Flux in a sense.
B
So if you want to add something new to Flux, the way you do it is you create a new API and write a controller for it, and then, using the current libraries, it can listen to source changes, issue events and so on. By using those already-made libraries, it's way easier to extend Flux than it was back then with Flux version one, or, I don't know, Argo CD, Jenkins, all these things: there you have to contribute those changes upstream.
A
B
And it's very hard to do that, because you want to change something and maybe you break some other component, all these solutions being monoliths. What we are trying to do with Flux version 2 and the GitOps Toolkit is, instead of you modifying Flux itself, you deploy your own controller, and that controller will be part of the whole thing. So you can easily extend the pipeline with new stuff without having to merge it upstream or stuff like that.
B
Okay, I'm going to use a repository I've made in the fluxcd organization, and this repository is an example of how you can structure your Git repo so you can deploy Helm releases and plain YAMLs to more than one cluster. The idea is, no matter where you are working or what organization you are part of...
B
You definitely have at least two environments, let's say a staging one and a production one. Many organizations out there have way more clusters: I don't know, pre-production, a production cluster per region, and so on. So the idea with Flux version 2 is you can create one repository and from there manage not only a cluster but your whole fleet of clusters.
A
B
Okay, let me answer that by looking at the roadmap. So we publish the roadmap here, and we've created three milestones out of Flux version one. One is feature parity with Flux in read-only mode. What read-only mode means is where you run Flux on your cluster without Flux writing container image updates back to Git. And Flux version one in read-only mode is at 100% feature parity with version 2.
B
So if you are using Flux in read-only mode, you can migrate to Flux version 2 today. If you are using Helm operator version 1, that's also at 100%: you can switch to Flux version 2. If you are relying on the image update feature, that's currently in development. We have finalized the API design, and now we are working on things like authentication to container registries: ECR, GCR, ACR; every single cloud out there has a different type of authentication, it's very hard to deal with, and we are still working on this.
B
But I encourage users to give the current image update controllers a try and see how they work. We also created...
B
I especially love GitHub Discussions, because once you publish all this information on Slack, a few days later it's lost. GitHub Discussions is a place where you can have a conversation; there is an index, you can go back to it, and so on. And here we published migration guides for Flux v1 users in read-only mode, Flux v1 users that are using Kustomize, and users that are using the Helm operator, and we are also...
B
Yeah, we have a migration guide from v1 to v2, and we also have a very detailed Helm migration guide written right here, because we changed the API for how you define a Helm release, and we've documented here how you have to modify your YAMLs to move from HelmRelease version one to HelmRelease version two. Please give this a try and let us know if something doesn't work for you.
B
In the Discussions, directly here on the migration topic; of course on Slack and so on too, but we prefer to have discussions on GitHub.
B
A
Makes sense. And there's another question about multi-cluster, but I think you're going to touch on that in the demo, so I'll just let you do that, and if it's still unclear, ask it again, please. Ramesh, one more question that I see here is that you mentioned that Flux monitors the container registry and does the deployments based on the new image being released and pushed.
B
So Flux will not do rollbacks to old images, and the reason why version one doesn't do it is very simple. Let's say you connect to ECR and you ask: hey, give me all the tags, and for some reason you get rate limited and ECR gives you only half of all the images. Then you'll say: hey, my latest image is an image from three months ago; okay, let's upgrade to that. I'd roll back three months and everything would crash.
B
So what we did in Flux is build some gatekeepers there, so it will never do a rollback automatically; it will only go forward. If you want to roll back to another image, you should re-tag that image and push it with a new tag, and only then will it move forward.
B
This is more of a limitation of container registries as a service, where you cannot reliably assume that what you get there is the actual latest version. So that's why Flux doesn't do rollbacks on its own.
B
Yeah, many people asked us: well, I've deleted an image and it didn't roll back. Well, it doesn't. If you want to do rollbacks, use Flagger, because it will also test your app and it will do the rollback consciously, based on metrics and so on.
B
Okay, so I have this GKE cluster, nothing on it. What I want to do is install Flux on this cluster and tell Flux to synchronize with a Git repo. We've created a CLI called flux, which you can install with brew, or you can download the binary; it works on Linux, macOS and Windows. And you can tell the flux CLI to create a repository for you if one doesn't exist. We have support for GitHub and GitLab, and we are extending support to other platforms like Bitbucket and so on.
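The bootstrap step being described is typically a single CLI invocation; a sketch against a hypothetical GitHub org and repo (the owner, repository and path values below are placeholders, and exact flags may vary by flux CLI version):

```shell
# Install Flux on the current cluster and commit its own manifests
# to a GitHub repository, creating the repo if it doesn't exist.
flux bootstrap github \
  --owner=my-org \
  --repository=fleet-infra \
  --path=clusters/production
```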
B
Okay, so what these commands expect is your GitHub username. Here it can be your own personal username or it can be your organization name. If you are creating the repo on your personal account, then you have to tell flux: hey, this is a personal account. When you are using a GitHub organization, you can also tell flux to give access to specific GitHub teams for that repo, so they will have write access to it.
B
So what is happening now? The flux CLI has pulled the latest GitOps Toolkit manifests from GitHub. It installs all the custom resources that you'll be using to control Flux, and it also installs the Flux components, which are source-controller, kustomize-controller, helm-controller and notification-controller.
B
You can pick and choose. I mean, if you don't need notifications, you can install Flux with only three controllers, or if you don't deal with Helm, you can install only source-controller and kustomize-controller, and so on. These controllers are optional.
A
So someone asked about Bitbucket: if they want to use Bitbucket today, is there something they can do to use Flux with Bitbucket today?
B
Yes. If we go to the docs site, in the installation guide here there are instructions for generic Git servers. This works on any kind of Git server that supports authentication, be it an SSH deploy key or a token. You can also use HTTPS-only authentication with a token generated by your Git provider. So we already have users that are running Flux version one on Bitbucket and on Gitea and other platforms; it's just no longer a single command.
B
Okay, so let's see what happened on my cluster. With the flux CLI, I can ask it to check my cluster.
B
The check command verifies that the Kubernetes API and the kubectl installed on your local machine are at the right versions. We only support Kubernetes 1.16 and up; that's because of CRD changes in 1.16. And flux check will tell us which components we are running and at what version.
B
So the first commit was: flux has generated all the GitOps Toolkit components, it bundles them in one huge YAML with all the custom resource definitions, namespaces, deployments, everything, commits that to your Git repo, then applies that on the cluster, so Flux starts on the cluster. Then it creates a second commit with two objects, and let's look at this. So first we have a GitRepository definition.
B
So this is the custom resource for how you can register Git repositories inside the cluster, and this is how this YAML looks: you have your URL, which is prefixed with ssh.
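That GitRepository definition looks roughly like this (a sketch; the URL and interval are illustrative, and the API version may differ by Flux release):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/example/fleet-infra
  ref:
    branch: main
  secretRef:
    name: flux-system   # holds the SSH deploy key
```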
B
Then we have a different definition, of type Kustomization, where I'm saying: hey, from this source, from this GitRepository called flux-system, apply everything that's in the clusters/production directory. So with one piece of configuration I'm telling Flux: connect to this Git repo and pull the changes inside; and with the second configuration I'm telling Flux: hey, from that particular repo, apply only this path.
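The second object, the Kustomization, would be along these lines (a sketch; the path matches the cluster layout being described):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/production   # only this path of the repo is applied
  prune: true                   # garbage-collect manifests deleted from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
```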
B
We see here this directory, flux-system, which was generated by the CLI and committed to Git on my behalf. And here I've defined two things: one is my infrastructure items, and the other one is all my applications. This is how you could, for example, structure your clusters into layers. No matter what you do on Kubernetes, you'll have some infrastructure things; for example, you'll have an ingress controller, or you'll...
B
...have, I don't know, some CSI plugin or something like that. And in the infrastructure definition I'm saying: hey, apply this directory called infrastructure, which is at the root of my Git repo. If we look in here, in infrastructure, I have added a couple of things: nginx, the ingress controller, a Redis cluster, and some sources. Let's look at these sources; these are a different type of source than Git.
B
I've defined here a Bitnami source, and if we look at what I've configured, I'm saying: hey, I'm defining a source of type HelmRepository; this time it's not a Git repo, and it has this URL. What source-controller does is connect every 30 minutes, this time to Bitnami, and pull the Helm repository index file, which has all the charts that Bitnami is publishing on the repo. And I've also registered a second HelmRepository; this one is my own Helm repo.
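Such a HelmRepository source is only a few lines of YAML (a sketch using Bitnami's public chart URL):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 30m                           # re-fetch the chart index every 30 minutes
  url: https://charts.bitnami.com/bitnami
```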
B
It's hosted on GitHub Pages, and I'm also saying: hey, pull also this Helm repository, for my app. Now, if I go in here and I say flux get sources helm, it will tell me that, okay, these two sources are registered, and this is the latest version of the index that was pulled from those sources.
B
Now I want to install Helm charts inside my cluster from these Helm repositories. How can I do that? There is another custom resource, called HelmRelease. So let's make this a little bit slower: okay, I have here my apps directory; inside it I have a base directory for my application, and I have a release definition, and here it is, it's of type HelmRelease.
B
The source is from this repository, and here are my values, so I can override the values for that particular Helm chart with my own things. And because this is in base, these values are common for both staging and production, for all my clusters. I'm configuring podinfo: hey, here is my Redis class, my Redis instance, and please enable ingress. I'm not setting the ingress DNS here. Why? Because on each cluster you'll probably have a different DNS record.
B
It configures Flux to pull specific versions from Helm repositories. What I'm telling Flux here is: hey, I want to deploy the podinfo Helm release, and if someone pushes a new chart version with a stable version, let's say 1.0.1, then automatically upgrade that release inside my cluster. So instead of going into your repo and bumping the version manually every time you release a new version, you can give Flux a semver range; you can say: hey, if it's bigger than 1.0.0.
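Putting that together, a podinfo HelmRelease with a semver range could be sketched as follows (the values, namespace and interval are illustrative, not copied from the demo repo):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: ">=1.0.0"        # auto-upgrade to any newer stable chart version
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
  values:
    redis:
      enabled: true
    ingress:
      enabled: true             # host is set per cluster in an overlay
```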
B
...in this namespace, you're going to see that Flux has detected that the latest version of my app is 5.0.3 and has installed it in my cluster. So if I do kubectl -n podinfo get pods, it's saying that, okay, my app is running right here.
B
So by using a Kustomize overlay, instead of copy-pasting that whole podinfo definition, my app, for all the clusters that I have, I can specify here only the things that are different from one cluster to another. If I go, for example, to staging, in staging I have other settings: I have a different hostname, and what I'm telling Flux here is: if there is an alpha, beta or any kind of pre-release of my chart, deploy that pre-release in my staging cluster. Why?
B
Because you want to test pre-releases in your staging cluster, and on your production cluster you only want to deploy stable releases. So this is how you can create preview environments using Flux. You can also enable Helm tests and a bunch of things; you can also tell Flux to roll back if the Helm test fails, and there are many, many options.
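The staging overlay then only overrides what differs; for example, a semver range that admits pre-releases (a sketch; the hostname is a placeholder):

```yaml
# Kustomize overlay patch for the staging cluster
spec:
  chart:
    spec:
      version: ">=1.0.0-alpha"   # also matches 1.1.0-beta.2, 2.0.0-rc.1, etc.
  values:
    ingress:
      hosts:
        - host: podinfo.staging.example.com
```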
B
There's a lot you can do with Helm releases, but what I want to show now is, let's say we want to do a manual rollback. Here I can say: the latest version has a bunch of problems, and I want to roll it back to 5.0.2 from the current one, which is 5.0.3. So I'm making this change here, and I'm going to commit the change.
B
Now, if I do watch flux get sources git, I'm seeing that this is the revision it has pulled, and in a couple of seconds Flux will detect: hey, there is a new commit inside my Git repo, and it will pull that commit inside the cluster. After it pulls that commit, it will let the helm-controller know through an event: hey, something changed, check out the changes. The helm-controller detects: hey, I have to deploy a new version, and it will do that.
B
Yes. How this works: instead of specifying the values inline here, there is a different option, called valuesFrom, and you can say: values from a Kubernetes Secret. Now, that Secret will be in your Git repo; of course, it shouldn't be in plain text there. You could encrypt it with Mozilla SOPS or create a sealed secret for it, and so on. The idea is a HelmRelease can take values from Secrets, from ConfigMaps, or inline values inside the custom resource itself.
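The valuesFrom option being described might be sketched like this (the secret name is a placeholder):

```yaml
# Fragment of a HelmRelease: merge values from a Secret with inline values
spec:
  valuesFrom:
    - kind: Secret
      name: podinfo-credentials   # kept encrypted in Git, e.g. with SOPS
  values:
    ingress:
      enabled: true               # inline values are merged with the Secret's
```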
B
Yes, only the name and the password. So from the whole values thing, you'll take the username and the password, you'll put that in a Secret, encrypt that, and then inside the HelmRelease you'll say: take this Secret, plus all the other values, which are inline, and the helm-controller will merge the two into the final values. Right, it's like doing helm upgrade --values some-file --values some-other-file.
B
So what we see here: the old commit is this one, the new commit is this one, my current change. Now, if we do flux get helmreleases with all namespaces, I'm going to see that it has rolled back to 5.0.2. And of course, any kind of change, not just this one around versioning, but any kind of change you make in Git, or maybe in other Git repos that are registered on your cluster, Flux detects it and notifies the right controller that's in charge of that particular configuration.
B
If I'm making a change to a HelmRelease, then the helm-controller reacts to it. If I'm making a change to a plain YAML, a namespace or something like that, then the kustomize-controller will act on it. And this works not only with changes, but also with deletions.
B
So we have a thing called garbage collection, for both controllers. When you delete a manifest from git, that delete operation will be replicated on your cluster. So if I'm deleting a namespace, the kustomize-controller will delete that namespace and everything inside the namespace. If I'm deleting a custom resource, let's say a HelmRelease, the helm-controller will do `helm uninstall` automatically for me, and so on. So it's not only about adding things or modifying things; it's also about removing things. Any kind of operation is replicated.
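On the kustomize-controller side, garbage collection is opted into with the prune flag on a Kustomization; a minimal sketch, with illustrative names and paths:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps            # illustrative path inside the git repo
  prune: true             # delete cluster objects whose manifests were removed from git
  sourceRef:
    kind: GitRepository
    name: flux-system
```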
B
In an ideal world, if you are doing only microservices, then it doesn't matter in which order you create or upgrade things on your cluster. That's the Flux version one principle: everybody should be doing microservices, there should be no problem no matter the order, it should just work. Well, for some companies that's true, but for most of them that's not quite true. Usually you say: hey, I need to have my database deployed and upgraded, migrated.
B
Migrated to the latest version, and only then can I upgrade my app. Or: I need to deploy Istio first and make sure that the Istio daemon is up and running before I can deploy my app, because my app has to be injected with the Istio sidecar. If you deploy your app first and then you deploy Istio, let's say on a new cluster, your whole setup will be broken, because even if the apps are there, they are not injected with Envoy. So, surprise, nothing will work if you, you know, rely on virtual services and so on.
B
Production, and I'm going to look at the apps definition. So this is a Kustomization which is applying this directory, but what I'm saying here is: apps depends on infrastructure.
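Expressed as a Kustomization, that dependency might look like this. A sketch: the names are illustrative, and the exact shape of the dependsOn field can vary between Flux versions.

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: infrastructure   # apps are only applied after this Kustomization is ready
```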
B
What will happen if I'm adding a new cluster to my fleet right now and I'm applying everything inside this repo? What Flux will do is this: it will not apply, will not deploy, any kind of apps until the infrastructure has been applied and is healthy, and only then will the applications be deployed. And this doesn't only work at, let's say, bootstrap time, when you create a new cluster. This also works when you update stuff. So if I make a change in this repo, Flux will see it.
B
Okay, the repo changed, and it will draw the graph of dependencies and apply the upgrade in that particular order. So it will upgrade first the infrastructure, then the applications. And this is how you can define dependencies and relationships, also between apps or between infrastructure items. This is just an example.
A
We've been using Flux v1 for webassemblyhub.io, and we did have some manual parts that you exactly touched on, and I think once we upgrade to Flux version 2 it can be completely hands-off, so yeah, very cool. We have a couple of questions in the chat. This one, I think I even know the answer after what you've shown. So Ramesh is asking about Helm.
B
Yes, so now, in version 2, a Helm release can come from any kind of source. You can put your charts in a bucket, and you can say: I want to install this release from this bucket. Or it can be in a git repo, or it can be in a Helm repository; it's up to you how you define it. Let me look at this one.
B
So in here, when you define the source reference, you also specify the kind, so it can be a git repo or a bucket or anything like that. And you can also share the same Helm repository, or the same git repo, between releases. You don't want to pull, let's say, the Bitnami index file, which is kind of large, for every Helm release you have; you can share that definition between multiple releases.
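A sketch of a shared source: one HelmRepository object, referenced by kind from any number of HelmReleases. The Bitnami URL is real; the other names are illustrative.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 30m                            # the index file is fetched once per interval
  url: https://charts.bitnami.com/bitnami
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: redis
  namespace: redis
spec:
  interval: 5m
  chart:
    spec:
      chart: redis
      sourceRef:
        kind: HelmRepository    # could also be GitRepository or Bucket
        name: bitnami
        namespace: flux-system  # many releases can share this one source
```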
A
If you can clarify, Ramesh, that'd be great.
B
So, until we have the image update feature ready in Flux version 2, what you can do is this: for your master branch, main branch, whatever the branch is that needs to be deployed rapidly on staging, you can publish a chart with a pre-release suffix, -beta or -alpha, and then you tell Flux: hey, pull that chart every time. And that's how you can do continuous deployment based on just charts.
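One way to express "pull that chart every time" is a semver range on the chart version that admits pre-releases; a sketch, with a hypothetical chart name and range:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: staging
spec:
  interval: 1m                   # check for new chart versions frequently
  chart:
    spec:
      chart: my-app
      version: ">=1.0.0-alpha"   # semver range that also matches pre-release builds
      sourceRef:
        kind: HelmRepository
        name: my-charts
```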
A
Very cool. I don't see any more questions in the chat, so we can summarize what I'm seeing in this demo.
A
Essentially, I can have one Flux in my cluster, and that Flux refers to a git repo, and in that repo it can refer to a specific path that represents my cluster. What it allows me to do is essentially represent my entire infrastructure in a single git repo, and each cluster gets its own Flux that watches the right path in that repo.
A
So that can give me a very powerful way to get kind of a single pane of glass, one place where all my infrastructure lives, and that's very cool. And oh, let's see, there are a few more questions here. So Ramesh is clarifying: he said it's being used in multiple clusters on a single repo, with multiple environments.
B
Yeah, and you can also drive your production cluster with git tags. For example, let's say my staging cluster will be synchronizing from the main branch; instead of using directories, I will say: hey, my production cluster, instead of synchronizing with the branch, will synchronize with a git tag.
B
So every time you want to do a release on your production cluster, you can do a GitHub release, or a git tag following semver, push that to upstream, and tell Flux on the production cluster: hey, instead of looking at a branch, monitor the git tags, and if one matches the semver expression, instead of applying the branch, apply the tag itself. So just as we do semver releases with apps, you can do the same with your GitOps repository.
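On the source side, this is a GitRepository whose ref targets a semver range instead of a branch; a minimal sketch, with a hypothetical URL and range:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: fleet
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/fleet   # hypothetical GitOps repo
  ref:
    semver: ">=1.0.0"   # follow the latest tag matching this semver range
```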
B
I don't want to deploy anything. So you can modify the GitRepository definition of your production cluster and say: I want to pin it at this particular commit. This is also something that a lot of people asked for in Flux; originally we couldn't do it there, but here you can pin Flux to a specific git tag or git commit, or tell Flux: hey, follow the latest release of my repo.
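Pinning works through the same ref field; a sketch, where the URL is hypothetical and the SHA is a placeholder:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: fleet
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/fleet   # hypothetical GitOps repo
  ref:
    commit: "<commit-sha>"   # placeholder; the cluster stays pinned to exactly this commit
```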
B
No, okay, now I get what he means. So in Flux version one, how would it know what was the last commit it synced? Because it didn't have any kind of state inside the cluster, it had to push its own git tag to upstream, and you had to give it write access to the repo and so on. In Flux version 2, you don't have to do any of that, because Flux version 2 never creates a git tag from your cluster.
B
It stores the latest commit inside the custom resource, like you saw. That's why you can do `flux get sources git` and it will tell you exactly at what SHA, at what commit it is, because that state is now stored inside the custom resource itself. So you no longer have to deal with all of that: creating write-access deploy keys, telling Flux, hey, now I'm syncing this one, push the tag with this prefix. All of that is gone; you don't have to deal with it anymore. Nice. We removed that completely here.
B
So we announced last month that Flux version one is currently in maintenance mode. That means that Flux version one will only receive security patches from this moment on. The moment we have feature parity in Flux version 2, so with the image tag update, once that's released, we'll have a six-month window for Flux version one, and then we'll archive the repository.
B
The website is toolkit.fluxcd.io, and you have many guides there. The get-started guide shows you how to bootstrap two clusters, like I've shown you. There is also this example repo, and there are many examples there, and we'll be adding more and more use cases as we work on it. This time we want to create example repos: I've created this one for Helm, but we also want to create one for plain YAMLs and so on.
B
Yeah, on the CNCF Slack there is a flux channel; I'm very active there, and on Twitter. But I'd prefer, if it's something particular to Flux, you have a problem or you have an idea, please open a discussion on GitHub and we'll be answering there. The whole Flux team is watching the GitHub discussions.
B
You can put your CRDs inside the Helm chart, but they only get applied at install time. So you can use the Helm chart to deploy the Prometheus operator, but when the Prometheus operator releases a new version, if they change something in their custom resource definitions, Helm itself will not apply those updates; it only applies the deployment spec and so on. So what happens? You'll end up with a broken operator, because the CRDs are old but your operator is new. So there are a couple of ways to solve that.
B
You either tell Flux to pull the raw YAMLs from the Prometheus operator git repo, where they publish their YAMLs, or you install the Helm chart with a HelmRelease, disable the CRDs install, place the CRDs in your own repo, and keep them synchronized. But I think what we added to Flux version 2 right now is that you can add any kind of git repo and reconcile from it inside your cluster.
B
So if you want to deploy the Prometheus operator, Gloo, whatever is out there: any open source project has a directory called deploy or config or whatever in their repo, where they publish the latest things. So you don't have to rely on Helm charts anymore. You can take that YAML and tell Flux: hey, clone that repo, take those YAMLs and apply them on the cluster. So that's how I would do it, but of course you can do it with Helm charts if you want to.
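Reconciling raw upstream YAMLs could be sketched like this; the URL and path are hypothetical, so substitute the real project's repo and manifest directory:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: some-operator
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example/some-operator   # hypothetical upstream project
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: some-operator
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy        # hypothetical directory with the published manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: some-operator
```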