From YouTube: TGI Kubernetes 127: GitOps with Steve Wade
Description
This will be the first TGIK presented with a guest! Welcome Steve Wade as we explore GitOps and Steve's direct experience around it.
A: Hey everybody, and welcome to TGIK number 127. This episode is definitely a first for TGIK: I'm going to be co-presenting with my very good friend, Steve Wade. There's some really interesting technical stuff happening here; I'm leveraging an open source project called OBS.Ninja for this, and Steve is in the UK. So let's say hello to Steve: hey.
A: Give me one second to kind of arrange things a bit. You know how it is. All right, who have we got going in here? We got Waleed signing in, good to see you; and we got Alex saying hello from Atlanta; and we got Sevy saying hello; and Riko from tropical Germany. Is there a place called Tropical in Germany, or are you saying that it's actually tropical weather in Germany?
A: Europe... and the battery's saying hello, and Ham Cabeza saying hello; Liam, also from the UK, good to see you, Liam; and we got David Michael saying hey, and Tim Downey from the East Bay. So we've got people... it's funny, we have people on either side of our... yeah, a little bit. Folks saying hello from Helsinki, hello from Munich, and Nuno saying hello, good to see you. Ninja Keith... Keith is from Ireland, which I think is maybe just a little bit closer to me.
A: Lyle saying hey, and Chris Carty checking in from Philly; Geared Bear from Portugal, good to see you; Joy, awesome; Mona from Germany. And George is helping us out behind the scenes today, so he put up another link to the notes. If you want to follow along on the notes, it's tgik.io/notes. Rada saying hello from Arizona, and Christopher, hello again from...
A: Got some good representation; good to see you all. Hey, yeah, Steve and I have done a bunch of stuff together. We've been kind of communicating since Kube was a much bigger part of Tectonic and CoreOS. So it's great to have you on, and I think it'll be a really fun conversation, kind of digging into what's happening with the GitOps space, because I know that you've been doing a lot of work with that at Mettle, right?
C: We are heavy users of GitOps, so hopefully we can embed some knowledge.
A: So I'm going to say hello: Bojonsha... Boyzanje, I'm guessing here; Arun from Dubai, good to see you; Daniel from San Jose; Robert from Boston; and Eduardo from Costa Rica. And Paul saying: wait, there are two people. Yes, it is amazing; it's really fun, yeah. It was really good. My good friend Rory... and he lives in a town in Scotland that you could probably pronounce, Steve; it's like Gulliga Oil Head or something like that. And Eric saying hello from Wisconsin, good to see you, and Daniel, awesome; very exciting stuff. All right.
A: So, This Week in Core Kubernetes: 1.19 release candidate 4 is out, and it's still on track for an August 25th-ish release. So that's pretty exciting. Component status... and this is the tool that, when you do kubectl... what is the command? kubectl get componentstatuses, or something like that?
A: Get componentstatuses, yeah. And it's been broken for a while, like we can see in these examples, right? It's actually trying to do a health check on the controller manager and the scheduler, but where it's running from, it doesn't have that connectivity.
A: So this has been kind of coming and going for a bit, but what we see here is a pull request to mark it deprecated. And that means that, you know... once... I'm on the wrong tab here; there we go. So it means it can be marked deprecated, and it will eventually be removed. But just remember that deprecated does not mean removed.
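As a concrete illustration of that point, a hypothetical session against a cluster looks something like this (the exact warning text depends on your kubectl version):

```shell
# componentstatuses asks the API server to health-check the scheduler
# and controller-manager, which often fails on managed or non-local
# control planes; the API itself is deprecated as of Kubernetes 1.19.
kubectl get componentstatuses

# Deprecated does not mean removed: the command still works for now,
# it just prints a deprecation warning alongside the status table.
```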
C: Mariot's legoland 12, yeah.
A: These last few weeks, Stephen Augustus wants you to know: a very large SIG Testing and release effort to clean up tests, pruning backlogs, and very boring things. And I think that, you know, Stephen is really good at making that sound like it's not a lot of work, but it's actually quite a lot of work. And I know that a bunch of folks are really hands-on-keyboards, working super hard to try and get a lot of these things done and improve the testing and release cycle.
A: So, in the ecosystem, we got Open Service Mesh, which is a new project by Microsoft. There's been a significant amount of, you know, interest and drama and stuff happening inside the space. It turns out there were some... you know, in trying to get out the door, there was a quick merge of code that was borrowed pretty much verbatim from Linkerd, and stuff like that.
A: So I think that Microsoft has done the right thing in just removing the code for now, and figuring out what the right model going forward is. I don't think that they intended to plagiarize so heavily from Linkerd, but, you know, I think that they're looking to address it. They want to make sure that they do right by the project, and I think that they've worked with Linkerd pretty heavily in the past, and they definitely don't want to have a bad situation.
A: Happy sysadmin day last week! Oh, also, I was going to show you this other service mesh I heard about that just came out, that apparently is good for mainframes, brought to you by Mr. Kelsey Hightower. You know, always right at the cusp of humor. I already got...
A: Also, my microphone's over here, so if I put my hand in front of it, obviously you'll be able to detect that, so I should not do that. Happy sysadmin day last week; there's actually some pretty fun stuff. Like, I was watching one of my favorite Twitter pundits, Corey Quinn, and he was putting out a bunch of ancient sysadmin knowledge, which was always kind of a fun, snarky thing. Definitely check that out.
A: He gets into networking, talks about how it works: what are they using to do the networking piece, talking about bridges and veths, some of the base components that make up the networking pieces. (I'm actually building a series of talks at kube.academy on networking, so that should be available soon.) And they're talking about how these work: these are basically pipes. Pretty interesting stuff. Basically, when traffic gets routed up to a pod IP...
A: That traffic comes to the node where that pod IP is residing first, and then hits a routing table... hits a routing connection straight up to the other side of that veth pair. So anything coming in on the node side of the veth goes out the pod side of the veth, inside of that network namespace, and the same the other way around. Now, there's definitely some really interesting twists and turns that different CNI providers put things through, but that's an interesting one.
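The veth mechanics described above can be sketched by hand with iproute2 (requires root; the namespace name and addresses here are illustrative, not what any particular CNI uses):

```shell
# Create a network namespace standing in for a pod.
ip netns add demo-pod

# A veth pair is the "pipe": two interfaces joined back to back.
ip link add veth-host type veth peer name veth-pod

# Move one end into the "pod"; the other stays on the node side.
ip link set veth-pod netns demo-pod

# Address both ends; traffic entering veth-host exits veth-pod
# inside the namespace, and vice versa.
ip addr add 10.244.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo-pod ip addr add 10.244.0.2/24 dev veth-pod
ip netns exec demo-pod ip link set veth-pod up
```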
A: I mean, there's a lot of neat things you can do with that stuff, but I haven't seen somebody actually try to explore host-local networking for this in a while. But it is pretty neat. Wait a minute, what are they doing here? A special multicast address... for what? "If the packet isn't going to the pod's local network, or a special multicast address, then masquerade." Why a special...
A: I'm curious why they're doing multicast; like, if they explored this because they were actually looking to solve multicast on Kubernetes, which is definitely a challenging thing. But anyway: interesting article, exploring different things about networking. If this is an area of interest for you, definitely check it out.
C: Yeah, so at Mettle we wanted to make sure that we have ephemeral nodes, right, and we can handle node outages and node loss. So one of the things that we spent a lot of time doing is working out how we could handle the loss of an etcd node and recover gracefully. The way that we do that is... obviously we're running in Amazon, and, rather than having an auto scaling group where the kind of data directory comes up as part of our scaling group...
C: ...we have decoupled the two. So we have an auto scaling group that brings up a node, and then kind of a volume that lies around underneath, and then, as part of the node's bootstrapping process, it goes and establishes a connection to that EBS volume where the data resided. So that just means that the maintenance of quorum is faster; it's faster than the whole consensus algorithm kicking off with a brand-new node that is completely empty.
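A rough sketch of that bootstrapping step (the volume ID, device name, and mount point are hypothetical; a real implementation also has to wait for the volume to detach from the failed instance first):

```shell
# On boot, reattach the long-lived etcd data volume to this new
# ASG instance, then mount it so etcd starts with its old data.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id "$INSTANCE_ID" \
  --device /dev/xvdf

# etcd's data directory now survives the loss of the node itself.
mount /dev/xvdf /var/lib/etcd
```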
A: That is awesome. That reminds me of a project that my friend Quentin Machu worked on for a bit, which was called... that was it: the etcd-cloud-operator. It was ECO, yeah, yeah; based very similarly on that. So they're looking into... they're exposing how do we actually monitor it, because etcd is a secured interface for things, and you can actually configure etcd to have a different metrics endpoint than the client or server endpoint. There's a bunch of different things...
A: ...we can do here, but they're getting into some of the detail about how we have to authenticate to even get to the metrics, what metrics are exposed, and what things we should be looking for. Waleed points out that, if you're looking at the OpenShift interface... if you look at the OpenShift UI, they actually do expose etcd metrics up at the top, which is awesome. But simply calling out etcd node availability: whether there is a leader, and how frequently the leader changes.
A: They're also calling out consensus proposals. A proposal is a request (a write request, or a configuration-change request), so when the API server serializes data to etcd, that would be a consensus proposal.
A: So, proposals failed total: how many times data didn't actually persist. And they're looking for a maximum of five, and they want to understand when those things happen. They're looking for disk sync duration. So this is the question of... I'm sure you've heard me say this before, but etcd is very IO-sensitive, and that's because it has to persist those writes, and it has to persist those writes across each member for it to have that guaranteed-write effect. And so what they're...
A: ...looking for in this disk sync duration is how long it actually takes to persist those writes. And as the other entities on that node that are consuming IO increase, you can see effectively the wait time for commits on etcd go up. I've seen scenarios where a very aggressive log-monitoring solution was sending mad amounts of logs from disk, and that represented a pretty significant amount of IO, and that was actually causing etcd to ungracefully decay.
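The metrics being discussed are exposed by etcd in Prometheus format; a hypothetical scrape looks like this (the listener address depends on how etcd's `--listen-metrics-urls` is configured):

```shell
# Failed proposals and WAL fsync latency are the signals called out
# above for spotting write failures and IO pressure on etcd.
curl -s http://127.0.0.1:2381/metrics | grep -E \
  'etcd_server_proposals_failed_total|etcd_disk_wal_fsync_duration_seconds'
```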
A: There's a Sysdig dashboard doing it, so that's all pretty cool. I mean, I like that they call out what particular pieces and parts are important to watch, and why. I love articles that get into "do this, but don't just do it because I say so; do it because of this other thing"; like, actually have good reasons behind it. You know, that's good stuff; I like to see that. We talked about that.
A: Look what you can bring to Kubernetes: QuakeKube. I've seen so many different examples of this, but it always makes me chuckle whenever I see a new one. So this one is basically a way to expose... if you ever played Quake back in, what is it, the 80s, 90s, or whatever... this is a way of exposing Quake as another piece of software on top of Kubernetes. And I wonder if... I suspect, but I wonder if they allow...
C: David's saying: a TGIK on Quake? Yeah, that would be fun.
A: Fun, you know. Joe did a series... has done a series... on actually building an operator around Minecraft. Minecraft, yeah, and so that's pretty similar. I've seen another one where... I can't remember what it is called... KubeDoom? I think it's called KubeDoom, is that right? Similar again to Quake, but in this case KubeDoom is playing Doom and killing pods.
A: And there's a few different examples of this, you know. Like at KubeCon we had one that was like a whack-a-mole, right? You would hit the mole with the hammer, and that would cause the pod to die, but it would be replicated almost instantly, and then you would see the pod come back up, and that kind of stuff. "You do need to get back on the series for Minecraft; that was a really fun series," Joe's saying.
A: Okay, let's get down to what's happening here: 500 cron tasks with more than 1,500 invocations per hour in our multi-tenant production Kubernetes environment. Repeated scheduled tasks are widely used at Lyft for a variety of use cases; makes sense: manipulating data or whatever. These were executed using Unix cron directly on Linux boxes. Developer teams were responsible for writing their crontab definitions.
A: So I think they're going to get right down into the detail here, getting into processing latency, scheduling cron jobs, pod execution, and handling failure. Cron jobs... actually, the CronJob task within Kubernetes has kind of an interesting thing, in that it can keep a number of jobs that have failed around, and that's a separate configuration point from keeping the number of histories for successful jobs. You have a lot of leeway there in configuring what stays around, so you can go back and see what happened or didn't happen.
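Those two separate history knobs look like this on a CronJob (a minimal sketch; the name, schedule, image, and limits are illustrative):

```yaml
apiVersion: batch/v1beta1        # the CronJob API version current around 1.19
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  successfulJobsHistoryLimit: 3  # how many finished Jobs to keep around
  failedJobsHistoryLimit: 5      # configured separately, as noted above
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: report
            image: example.com/report:1.0
```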
A: They were seeing some random failures. So, yeah, I mean, I don't want to dig into this too much, because I want to get to the fun stuff of what we're going to work on today. But this looks like a very detailed article; if you're interested, or if you're running cron jobs locally, I definitely recommend checking this one out. This looks like a really good one.
A: So this is actually also a first: typically I don't have slides... I don't generally have slides when doing TGIK... but I think these are going to be representative of pretty decent talking points. The first thing that Steve and I are going to want to focus on is just kind of framing the conversation around GitOps, and how it works, and what it does.
A: What is... I guess, "I'm struggling with the word immutable in relation to cron jobs." The idea being that you can schedule a job, and if it fails, do you want it to retry? Are you talking about the focal point of retrying a job if it fails, or do you just let it fail and pick it up the next time it comes scheduled through? Is that where...
A: "Idempotent is your friend for anything distributed." It's very true; amen to that, yeah. I guess it depends on what the work is, right? Like, if you're going to be doing work that modifies a bunch of things, you're going to have to have some way of determining where you are in the scope of that work: has it already been done? Do you have to do it again? Like, yeah.
A: Then that would not be an idempotent task. There are ways to solve idempotency: one of the ways to solve it is to have some way of introspecting where you are in the work that is being done, so that you know whether you can go forward or not go forward. There's ways to solve it wherein you say: I need to make it so that this work will always just continually work, right? Kubernetes is a pretty good model for that sort of thing.
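One common way to get that "introspect where you are" behavior is a completion marker, so a retried job can tell whether the work already happened. A minimal sketch (the marker path and the "work" are stand-ins):

```shell
# A re-runnable task: record completion, and skip the side effects
# on any later run. Running this twice leaves the same end state.
STAMP=/tmp/step-1.done

if [ -f "$STAMP" ]; then
  echo "step 1 already done, skipping"
else
  echo "doing step 1"   # the real, side-effecting work goes here
  touch "$STAMP"
fi
```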
A: If you apply a deployment to Kubernetes, and you apply the same deployment like 15 times, nothing's really going to change on the back end, because it's the same construct, right? It's going to take that declarative implementation and, as long as nothing's changed in the description of it, it's going to continue doing the work to maintain that declared state.
A: Pretty cool stuff. Mr. Alexander Brand checking in; good to see you, Alex. Hey, Alex. But non-idempotent stuff is a very different gig, right? Like, if you have a state where, if you're going to make a change to a thing, and you have no way of determining whether that change has been made, and you're still in a state where you actually have to, like... you know, this is kind of the at-least-once versus exactly-once problem, right? Yeah.
A: The latter would be idempotent; that's true, yeah. Like, which one is... Shift up. So in this slide, we're kind of framing the GitOps conversation. I always like to lead with this question; it's like my go-to question, pretty much no matter what: what problem are you trying to solve? And then let's just talk about what these things are and how they work. So, GitOps.
A: Typically, in my mind (and I'd love to hear your opinion as well), it's trying to solve the problem of ensuring that we have an audit trail and change log for all changes to the deployment or configuration of an application; of having a way to customize the deployment of an application in some site-specific way (and we'll talk about those things). Because in this way you can kind of have a source of truth for how things will be deployed or configured.
A: But you might have things that are very specific to a particular area that you would want to modify: maybe it's a different set of certificates, or a different hostname pattern, or an FQDN pattern, or what have you. You want to ensure that a set of applications is deployed consistently across lots of sites, not just one, and you want to understand or report on the drift, in case folks actually have the ability to modify the configuration of things at one of those sites, or at some subset.
C: ...that will try and reconcile that state to be reality. And it's very similar to how Kubernetes works, right? We're declaring state, and Kubernetes, as a kind of orchestrator, is trying to get us into that state.
A: Yeah, and I like Kevin's call-out; this is a very great example. You know, with the ability to introspect changes in the way that you might introspect changes in code, right? Having a GitOps model is really nice for that, in that, you know, somebody wants to make a change, and then you can actually get other eyes on that thing and go: I don't know, maybe that's not a great idea, because of this; or: sure, it looks good to me.
A: Josh, good to see you. So, yeah, I think that's a good summary. And to your point, Steve, I think the next one I want to point out here is that, in my mind, people frequently paint GitOps as one solution that tries to solve... oh, we have Chris, too. Wait, did Chris sign into...
A: I think you're right; I think it was 100, which would be really easy to remember. But anyway. So I was saying that I think that, when people think about GitOps, they paint this picture: you have a Git repository where you keep the configuration for things that will be deployed, and you have an operator, or some software running inside of your cluster, that monitors that repository and applies changes when those changes are relevant for that cluster, over time.
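In Flux v2 terms, that in-cluster operator picture is expressed as a pair of custom resources (a hedged sketch: the CRD kinds and fields are Flux's, while the repository URL and paths are hypothetical):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository               # the "watch this repo" half of the model
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-config
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization               # the "apply what's relevant here" half
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./clusters/prod
  prune: true                     # delete things removed from Git
```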
A: But I don't think that that's the only definition of it, and I think that really these two things are somewhat separate, right? In my opinion, GitOps represents that first part: having a Git repository where you maintain a desired state for those things that would be deployed... that is GitOps.
A: But how you go about deploying it, I think, is a little bit open to interpretation, both in the model of how those things are applied to clusters (like whether they are a push or a pull), and also, even in the case where you do decide to go for a pull model, which is the more common case for GitOps flows, there's a bunch of different models for how to solve that, or how to interact with that. What are your thoughts there?
C: I agree, right. So GitOps, you know, for me personally, the way I like to think about it is, again, it's a centralized location for configuration, right? And the way that the configuration ends up in your end destination, or your end cluster, is completely dependent upon the way you want to go about it. You've got the option of pushing those changes in, but they are still in Git; or you've got Git, and you're pulling the changes in with an agent that's running already there.
A: Yeah. So we had a couple diagrams here to talk through, and then we'll get into playing with a couple of the repositories that Steve has created to explore these things. And I think this is a fun one, because this kind of captures the way that a lot of folks do this stuff, and have done since before there was a Kubernetes, and will probably continue to do long after there is a Kubernetes. Does GitOps obviate the need for admission control? I don't think it does.
A: I think that it does introduce a new opportunity, though. And you actually have done a bunch of work on this, which I think is pretty awesome. If you have a GitOps flow, then you can generate a set of manifests that you can evaluate with code: that you can evaluate to understand whether it's properly formatted and in good shape, and offload a lot of the work that admission controllers would typically do, right?
A: So if you're looking at things like OPA Gatekeeper, and things like the ability to evaluate these manifests to understand that they're using recent versions of APIs, and those sorts of things, then in a GitOps flow you have the ability to offload some of that work to the GitOps flow, which I think is pretty cool. But at the same time, admission control is about whether that object is admitted into the cluster or not...
C: The way I like to think about it is: you're using the PR as the gate into Git, and then you're using the admission controller as another set of gates into the actual cluster. Because, again, in Git it's just your definition of change, whereas once it's hit the API, you know, changes are actually going to get applied. So I still think that those are, you know, still two valid components.
C: You can still run OPA, or kind of regular policy checks, yeah, as part of your PR flow, but you also want to have something at the front door, to make sure that, you know, you can catch those rogue employees, I guess, that just YOLO-merge straight into master, and it gets there.
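That PR-time policy gate can be as simple as running the same Rego policies against the rendered manifests in CI; a sketch using conftest (the directory and file names are hypothetical):

```shell
# In the pull-request pipeline, before anything reaches a cluster:
# evaluate the rendered manifests against OPA/Rego policies kept
# alongside the config. A failing policy fails the PR check.
conftest test --policy policy/ manifests/deployment.yaml
```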
A: Yeah, in fact... I mean, admission control is like the bouncer, you know? And so, I don't know, what's the word there... So, like, if somebody mailed me a bunch of IDs, and I was able to look at an ID and understand that this ID is from somebody who is 21 years old or whatever, that's still not going to get them in the door when they show up at the club.
C: I like Paul's way of putting it, where he says: trust, but verify. I think that's...
A: Exactly. So I don't think it ever... I don't think it obviates the need. It's a good question, though. So, I was saying that this centralized flow is kind of a neat one, because it actually speaks to the way that people have been doing this stuff forever, right, and will probably continue to do it forever. And that is... you know, I was actually joking with Steve earlier; I was like, it's, you know...
A: ...if you have a hammer, everything looks like a nail. And I think that a lot of people swing Jenkins around that way, and have been for a long time. So, if we're getting to a place where we actually are embracing continuous deployment, then typically we would just continue using Jenkins to handle that deployment aspect. But does this mean that we can't have GitOps? And, in my opinion, it doesn't.
A: Because where and how you hold the configuration for what will be deployed, and where you handle the manipulation of that configuration before deployment, doesn't really change the fact that you're using a transactional model, and that you're getting the benefits of having Git in front. I think that you still have GitOps even if you're using Jenkins to handle deployment, or even if you're using Spinnaker to handle deployment, right? You're still engaging with a model where you're handling pull requests for changes to that desired...
C: All we're doing here is talking about where the execution environment is, right? So you've got it there as being outside of the cluster, or maybe it's inside another Kubernetes cluster that's running Spinnaker or Jenkins, but that thing is pushing changes into the cluster. But again, Flux or other agents are still running applies against an API server; they just happen to be running inside the same cluster that you are. Yeah.
A: Some of the challenges of the centralized flow, which are kind of interesting, are that, you know, that Spinnaker or Jenkins that's running here in the middle has to have pretty significant access to a bunch of different clusters, and making sure that you handle credentials and those things securely and rationally, when dealing with a centralized flow, can be a little challenging.
A: But it is super important, because you're doing the execution here. This means that, when you're handling this execution and applying those manifests to your target clusters, you have to have write access against those target clusters. You typically would need to be able to create namespaces, perhaps create users, create all the different resources within that cluster, depending on the scope of the work that's being applied, right? That's a lot of permission.
A: That's a big, expensive security challenge to solve when you're centralizing like this, right? So, instead of actually centralizing this into a single Jenkins or a single Spinnaker, you might try to explore the idea of breaking that security domain down a little bit, so you're not putting all the credentials in one basket, as it were.
C: Just because we're running a GitOps paradigm here doesn't necessarily detract from the fact that we're going to need these kinds of CI/CD machines. The only difference is really where, you know, the delivery aspect of this is happening. That's right. You know, you are still going to want to be able to ask Kubernetes...
C: ...potentially, you know: has my change been applied, and am I ready to test it? But that's just a decrease in the scope of the communication, and the kind of levels of permissions, that that CI machine requires to the target clusters.
A: One of the other things that I think is actually a benefit to this model (which I think represents a pretty significant challenge when people start trying to rationalize GitOps patterns) is: when I go to a centralized model, I have a serial process, perhaps, that actually manages the deployment of resources and closes the loop, right? I've deployed the thing; it is now deployed; it's in a healthy state; I can report that pipeline healthy, and the task is done, right?
A: I have a closed-loop system, and I can serialize that across all of my different target environments. And so I have a really good way of understanding, on a per-task basis, on a per-pipeline basis, across all of the pipelines that might be related to a particular deployment, where I am in that process, where I am in convergence, or where I am in successful output. And where we get into some of the challenges is when we go to a decentralized flow. Obviously... sorry, I hate the word.
A: I've got to remember... remind me when you hear me say it. When we go to a decentralized flow, we have the challenge of: while we get better autonomy down here at the cluster, where we have an operator or some code running in the cluster that watches for config changes and applies them...
A: ...the challenge is: how do these clusters on the outside actually report up to some centralized piece, so you can understand convergence, so you can understand health, and those sorts of things?
C: We have made changes, but we still have CI machines, and their jobs are to produce images (which I know we're going to come on to), and then the agent is going to sit inside the Kubernetes cluster, see the change in images, make the change to the Kubernetes cluster, and then write that back, to kind of keep that audit trail. But we're not detracting from the need of having that CI machine; we're just reducing the scope of the communication that it requires with the target environment.
A: ...that, you know, generates, and those sorts of things. And typically you would also still be running Git to handle PRs against changes in configuration, how many of a thing are running, and those sorts of things; those are all still there.
C: I think that we actually, prior to this, had a centralized flow, but the issue is that it makes that CI machine, or the kind of environment that is pushing changes into the cluster, really the source of truth. So having a, you know, declarative... you're declaring it, and then allowing the cluster to become aligned with that declaration.
A: You know, we're doing this episode because we do see a lot of interest in this model, for almost exactly that reason. And yeah, this is definitely a control conversation: in a centralized model, you have a lot more control over where you are in the process. You could make these pipelines, and you could make these pipelines relative to one another, because you have control over that entire problem domain, right? Like, if I had... let me just go back a slide.
A: If I had my Jenkins environment here, and it was going to go ahead and deploy changes to these three clusters (because we're still talking about a centralized flow here), I could say: well, deploy it to this cluster, and then wait for that to become healthy.
A: Wait for tests to pass, to understand if things have improved, before moving on to the next pipeline that would deploy to the next two clusters, or the next cluster. Whereas in GitOps... sorry, in the flow where you have an operator running in the cluster... that would have to happen in a different way, because you don't have the context.
C: I think you can do that, Duffie. I think it's just the kind of communication... again, I'm hammering on the point here a little bit, but it's just the communication, right? So we can still wait for it to go to dev, and then go to staging, and we can have ways of managing that. I think it's just shifting the way that changes get, you know, deployed to the cluster.
C: It's just a kind of different mindset, really; a different way of thinking about things.
A: And the way that you do that (and we'll get into this, I think, when we start playing with the repository that you shared) is that the way you're handling signaling is basically based on when your target clusters see that there is an update in the configuration that they would apply.
C: Yeah: either that, or a new image tag at a given kind of tagging structure. Essentially, they're the two ways. The way that changes hit the target cluster is: you make a change to Git and you merge to master, and we reconcile; or a new image tag is available in the registry... exactly like this: a new image tag is available in the registry, the agent sees it and deploys that. So we have kind of a two-way.
A
Yeah — so in this case, in the decentralized flow that we were describing, this is kind of where people generally start, right? They may use Flux, or Argo CD, or any tooling here in the middle — or any tool inside of the cluster — to handle watching for particular resources they would apply, and managing the deployment of them. And what we're pointing out in this next one is perhaps a different model, which is, I guess, closer to what you were —
A
What you're pointing out is something closer to what Mettle does, wherein you have your app code and you have your config code, and you're still managing it the way that you would normally do — but the difference is that we might have a model where you make a deployment, and you've deployed that thing.
A
Let's go ahead and pull that new tag down, try to apply it, and then notify up into the config section on success that the new tag is now the base image — or the base deployment object — rather than having the developer actually push a new change to the configuration repository when there is a new tag. We're offloading some of that logic to an operator running in the cluster.
C
Yeah — for us specifically, at Mettle, if we take that as a kind of example: we strive for a self-service model, whereby we will help them define the configuration once, and they are responsible for shipping features, right — or image changes — and the agent that's sitting inside Kubernetes is constructed in such a way that it understands when changes are made to the image registry, and reconciles those. So we have a very small config repo.
C
Well — it's a large config repository, but in terms of complexity it's reduced, and they don't have to become Helm charting experts or know how Kustomize works.
C
All they have to do is understand the values that they need to put into the Helm chart, and then, thankfully, Flux and other GitOps kinds of tooling have CRDs and kinds that are available to use, which the agent that's running inside the cluster can understand, and it can do the heavy lifting that, in the past, something like Jenkins or Spinnaker or any other CI/CD machine was doing. But it's now running inside.
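The annotation-driven automation being described here can be sketched as a Flux v1 HelmRelease. This is a minimal illustration, not taken from Steve's repo — the chart, registry, and release names are assumptions:

```yaml
# Hypothetical HelmRelease using Flux v1 automation annotations.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-service            # illustrative name
  namespace: apps
  annotations:
    fluxcd.io/automated: "true"            # let the in-cluster agent write new tags back to git
    fluxcd.io/tag.chart-image: glob:dev-*  # only follow dev-prefixed image tags
spec:
  chart:
    repository: https://charts.example.com # assumption
    name: my-service
    version: 1.2.3
  values:
    image:
      repository: registry.example.com/my-service
      tag: dev-abc123
```

With something like this in git, the operator in the cluster — not a central CI box — notices matching tags and reconciles them.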
A
Yeah — Brian calls out that GitOps is a term like DevOps, and people are going to make definitions fit whatever they have. And I agree with that; it's been fascinating to watch that develop over time. Next: "also trying to switch to Argo CD app-of-apps from Jenkins," says spaceboy. And then Weimo has a couple of questions.
A
You
could
have
a
proliferation
of
a
cluster
like
staging
production,
dev,
etc,
and
if
you
have
a
non-centralized
approach,
how
would
you
handle
things
like
a
rollback
for
an
example
when
something
fails
at
each
at
a
particular
stage
right?
So
the
idea
being
like
like
what
is
the
detail
around
handling
the
serialization
of
this
stuff
like
if
we
were
to
deploy
to
a
dev
environment
and
some
part
of
that
deployment
failed?
A
How
do
we
notify
the
other
clusters
not
to
make
a
change
and
what's
interesting,
I
think
I'm
going
to
take
a
shot
at
this
one,
but
but
catch
me
if
I,
if
I'm
wrong
on
this,
I
think
what's
different,
is
that
you're
changing
the
perspective
of
the
subject
a
little
bit
right
instead
of
saying
all
of
these
see
the
truth,
all
the
time
you're
saying
you
can
control
what
each
cluster
sees
as
the
truth.
That
needs
to
apply
right,
whether
that
means
in
a
branch
or
whether
that
means.
C
Yeah — different branches, different tags. You could all track master, and the only difference between environments that get deployed is a new image tag that gets uploaded, as an example. And then, you know, the GitOps agent that's running in dev is looking at dev tags, and the GitOps agent that's running in staging is looking at staging tags — and it's looking at that all the time. And you've deployed to dev, and dev has failed.
C
So I'm going to talk specifically about Flux — I haven't really looked in great depth at Argo CD and the other agents available — but with Flux you have the option, in a Helm release, the same kind of way that you do with a Helm chart, to force a rollback.
C
You can actually add a field to your Helm release that sets rollback to true, and then, if it fails, it will roll it back to the previous version. So the kind of tooling is there — you're just declaring it in a YAML file, essentially, or in a resource, rather than in a command that gets executed from a centralized location. Yeah.
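The rollback field being described looks roughly like this on a Flux v1 HelmRelease. The field names follow the helm-operator API; the release itself is illustrative:

```yaml
# Sketch: declarative rollback on a HelmRelease (names are assumptions).
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-service
  namespace: apps
spec:
  chart:
    repository: https://charts.example.com
    name: my-service
    version: 1.2.3
  rollback:
    enable: true   # if the helm upgrade fails, roll back to the last good release
```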
A
It's curious — I mean, I'm wondering about the tag piece of it, because of the way that Kubernetes handles tags. So are you just absolutely all-in on Always pull? Like, does it have to be imagePullPolicy Always, no matter what, so you don't end up in that state where you have a tag that you've referenced that is no longer unique?
C
Tags have to be unique, right — I think that's a critical part of it, within reason. You know, it doesn't have to be, but for us personally: we have a number of different environments, and the way that we tag images is the environment prefix, a dash, and then the long commit SHA — because that commit then loops back to the pipeline
C
that's been executed on its way to production. So it's tagged dev-dash-commit, and then the Flux agent — the GitOps agent — inside the dev cluster sees it and deploys it. The CI machine is essentially sat there waiting and saying: has it deployed? has it deployed? has it deployed? The answer is yes; it runs its dev tests, re-tags the image as stg-dash-commit, uploads it to the image registry, and then the loop just completes. It's just test, rinse and repeat.
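The tagging scheme Steve describes — environment prefix, a dash, then the long commit SHA — amounts to a couple of lines of CI shell. This is a sketch; the variable names and the example SHA are assumptions, not from the talk:

```shell
#!/bin/sh
# Build an image tag of the form <env>-<long commit sha>.
# In a real pipeline ENV comes from the stage and SHA from `git rev-parse HEAD`.
ENV="dev"
SHA="4f2a9c1d8e7b6a5f4e3d2c1b0a9f8e7d6c5b4a39"
TAG="${ENV}-${SHA}"
echo "${TAG}"
# After dev tests pass, the pipeline would re-tag the same image as stg-<sha>;
# the staging cluster's agent then picks that tag up on its next poll.
```

The point of the long SHA is uniqueness: the tag never gets reused, so a cluster can never silently pull different bytes under the same name.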
C
So Flux doesn't care about that specifically — it cares about the image tag that's getting deployed. So the change that we are making — because the Helm release, or the specification, hasn't changed; the only thing that has changed in this context is the image tag — it will push back to git the fact that the image tag has changed. So you've got, kind of — there's two ways of making changes.
A
From the security perspective, I think it's kind of interesting to think about the fact that you have a writable credential — that you are able to commit to git from inside the cluster. So how you secure that credential, and how you handle that credential, kind of becomes pretty important.
C
I think it's a slightly easier problem, though, from that perspective, than worrying about the permissions that your centralized deployment tool of choice — in our case Jenkins or Spinnaker — has on each individual cluster. Changing that, versus changing a secret inside Kubernetes — you know, one is a little bit easier than the other.
A
So Steve has put up a couple of repositories that explore this stuff, and they're linked from the episode notes. One of them is gitops-with-kustomize and the other one is gitops-with-secrets. We're going to dig into the gitops-with-kustomize one first — so help me out, kind of walk me through this stuff. I'm going to walk through the setup here, and we're going to just kind of play with things and get things going. See what we can see here.
C
To Waleed's point in the chat: we don't track latest tags, we specifically track environment-prefix tags. So production, as an example, is just a prd-dash tag.
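Restricting automation to environment-prefixed tags, rather than latest, is done with a tag filter. A sketch using Flux v1 annotation conventions (the container alias `chart-image` and the glob are illustrative):

```yaml
# Fragment of a production HelmRelease's metadata (names are assumptions).
metadata:
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.chart-image: glob:prd-*   # only prd-<sha> tags ever reach production
```

A dev cluster's copy of the same release would carry `glob:dev-*` instead, which is how each environment sees only its own truth.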
C
Can you explain a little bit more about what you mean, specifically, by testing in that?
C
No — so look, for local development, the option is there to iterate faster, like we're showing: you can build the image locally and upload it, looking at any arbitrary tag. But once it hits the pipeline to production for your commit, it's going to be using environment tags.
C
Yeah, let's go with three five — we'll see what happens. We'll live dangerously.
C
And yeah, Mermaid's right with what he's saying — set up different rules for different git paths. Okay — so, Duffie, you want me to run you through how to get this thing set up? Sure. So if you go to your terminal and go to this — there's a scripts directory in there, and we need to make a change to that script. Yeah, that plugs in.
C
Weimo — to your question, whilst Duffie is installing this: there are a number of unsupported UIs. Specifically for Flux there's a couple, like Flux Cloud, and there's Flux UI as well, which kind of has a UI on top; and then there's other tools, obviously, like Argo CD, that have an actual UI that allows you to specifically see what's going on.
C
So, for those of you who are unaware of how Flux works: Flux is going to sit inside our cluster, and it's going to use an SSH key — which we're going to have to upload to our git repository — and it's going to sit there and reconcile, or sync, the state of the world with regards to the git repository. And then, as well as that, we are going to be looking, once
C
We
set
it
up
for
new
images
based
on
a
specific
tag
and
those
are
backed
into
a
memcache
pod
or
a
set
of
pods
for
for
kind
of
image.
Synchronization
and
then
all
flux
cd
is
doing
is
essentially
doing
the
apply
and
the
delete
of
those
manifests
to
the
kubernetes
api.
And
then
it's
exactly
the
same.
So
just
replace
kind
of
flux
in
the
kind
of
apply
stage.
With
one
of
the
centralized
toying
that
we
were
talking
about.
C
So that's kind of how some YAML files are going to get deployed. There's going to be some RBAC, there's going to be some namespaces — that's the kind of flow that we're going to go on from that perspective. And then, Duffie, if you go across to the next tab that you've got open there — you'll have the same problem, unfortunately. This one here — no, the one to your left — yeah, this one.
C
So another component that we're going to deploy alongside Flux is the Helm operator.
C
What we're doing with Flux is defining what the Helm release looks like. So the Helm release — you'll see in a second — is the thing that couples the chart version with a specific version at the point of execution, and I think that's important to know. So we are —
C
This is the first-time execution into the environment, and then we're going to use the image tag automation that we saw a couple of diagrams ago to make changes to our application — and we're going to do that with an annotation that we're going to put on the Helm release. So they are kind of a little bit decoupled. So the way that we do it at Mettle, specifically, is: a Helm
chart version is tied to an environment. So think about going from Kubernetes 1.14 to Kubernetes 1.16: they deprecated a set of APIs, so we needed to make a major change to our Helm chart to use the new API versions that were available to us. So for us to bump those changes through environments, we change the Helm chart version in that given environment, and we know then that charts that start with 2.x are able to be deployed to the target environment.
A
That's fair, Tim. I have actually thought a couple of times about making a Slack channel for this, but I don't know if I have the wherewithal to get that done in the Kubernetes Slack. That would be a fun one, though — kind of keep the conversation going in between episodes. Eric says it might be easier if you think about it like an automated version that keeps getting applied. Yeah.
A
That makes sense — yeah, this piece here. So basically they're saying: don't do this part anymore, because now you're going to be managing that apply aspect right there inside the cluster, by leveraging Flux CD to do the apply against the local API server. Yeah — it's probably a good idea. I think I might have some time, maybe next week, to work on that. All right, so let's get into it here.
C
I think a good starting point is for us to slowly but surely make our way down this. So a couple of things to note: Flux is going to be looking at a git directory. If you go up to line 14 there, you can see we're going to be looking at the kustomize/dev directory — so Duffie's going to basically spin up a dev cluster.
C
Essentially, we are going to look at the dev root — sorry, not the dev, the git root — which is going to be master, and that is the branch that we're going to look at. All right — so the first thing we need to do in terms of this script is deploy Flux, and Flux needs to sit and look at a given git directory and a git repository. Those are the couple of steps in there. The important thing to note here is line 10.
Yeah
line.
No
sorry
well,
the
the
manifest
generation
equals
true
line.
So
that's
used
because
we
are
using
customize
if
you're,
because
customize
needs
to
generate
the
manifests
that
are
going
to
get
created
if
you're
not
using
customizer.
You
don't
need
that
line
and
then
what
we're
saying
here
is
we're
going
to
set
the
image
pole
interval
to
one
minute.
So
every
one
minute
we're
going
to
start
looking
for
new
images
and
new
image
tags
that
are
available
to
us
and
then
we
are
going
to
allow
garbage
collection.
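The flags just described map onto the Flux v1 daemon's arguments. A sketch of the relevant container args — the git URL is an assumption, and the surrounding Deployment is omitted:

```yaml
# Fragment: fluxd container arguments matching the setup described above.
args:
  - --git-url=git@github.com:example/gitops-with-kustomize  # assumption: your fork
  - --git-path=kustomize/dev        # the directory Flux watches
  - --git-branch=master             # the branch it reconciles against
  - --manifest-generation=true      # run kustomize to generate manifests
  - --registry-poll-interval=1m     # look for new image tags every minute
  - --sync-garbage-collection       # delete cluster resources removed from git
```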
C
Then we're going to install the Helm operator as well — that's the other component that we're going to want in there. It comes as a CRD, so we need to deploy the CRD first and then deploy the Helm operator. In this specific implementation, we are telling the Helm operator that it's going to use Helm v3. You can pass v2,v3 here if you want, but for us we're going to deploy v3 — so the Helm operator is going to be able to use Helm v3 inside the pod.
C
You don't have to — we're actually going to see what happens in a minute; we're going to deploy a number of different Helm operators into a number of different namespaces. But again, what we want to do here is keep the interaction with Kubernetes from a kubectl perspective low; we want a lot of the stuff in the repository, so that it gets reconciled. This is the minimum amount of stuff that we need to do to get ourselves out of the gate.
C
So Flux by default, if you don't specify a key, is going to create a key for you. What we're going to do further down — this command down here — is going to go and get the key for us. So we're going to wait for the key to be ready, and then we're going to have to copy that key into your GitHub repository, and then we create the connection. Cool, yeah.
C
So, Weimo, to your question: we essentially have a task inside of our CI tooling that has a timeout of five minutes, and if it times out after five minutes, it's time for us to go in and start to look around — but it's normally relatively fast. And if you think about how we're differentiating this: because we're using a Helm release, the thing that Flux is actually polling and deploying to the cluster is Helm releases. It's really the Helm operator here that's doing a lot of the heavy lifting.
C
That's the thing that's doing the helm upgrades, or the helm installs, or the helm rollbacks. So it's really: how do we split out the Helm operators that we have available, and the Helm releases that they are responsible for?

C
We have that, in terms of other pods, already — like the liveness and readiness probes — and we use those as a way to know whether the pods, or the microservices, that have just been launched are ready to receive live traffic. We don't have anything specific inside of the application itself; we just leverage some of the things that Kubernetes provides us out of the box, and instrument the applications in such a way that Kubernetes is aware of when they're ready.
C
Matty — do we also use Flux to update Flux? Absolutely; it's kind of turtles all the way down. So we install this main Flux — Duffie's actually installing it at the latest version — but we could install it manually and then have a Flux Helm release inside of our git repository that upgrades it. That's exactly how we do it: Flux can upgrade —
C
Yeah — it's doing the helm installs and helm upgrades; it can't do helm template.
C
So what we're kind of doing, a little bit, is using the Helm release and Kustomize to do some of that beforehand: Flux deploys the right specification in terms of the Helm release, and then, by the time it reaches the Helm operator, it's got everything that it needs.
C
And make sure, Duffie, that you tick the box at the bottom, because we're wanting to write back some changes.
C
We've got a memcached there, ready to be caching all of our image tags when we're ready, and then finally we have a Helm operator there — that is going to be the thing that is responsible for deploying our Helm releases. So, Duffie, if we maybe switch back to the repository and kind of step through
that — so let's start with the kustomize directory and maybe build from there. Sure. So the kustomize directory has a base, and the base is going to deploy a number of different resources into our Kubernetes cluster. You'll see that we'll deploy cert-manager, and inside the cluster we're going to have a number of different things.
C
So we're going to have a Helm release — a couple of those; we're going to have some namespaces that get deployed; a sealed-secrets controller — we're going to be using sealed secrets to deploy our secrets into Kube — and then, finally, some storage classes. And if you look at the kustomization.yaml, just to give people an understanding of how this gets built up: in our kustomization, the resources that we have are essentially the YAML files that are in the current directory.
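The base kustomization.yaml being described — resources listed as the YAML files sitting next to it — would look something like this. The file names are illustrative, not copied from the repo:

```yaml
# Sketch of the base's kustomization.yaml (file names are assumptions).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespaces.yaml                    # platform-system, sealed-secrets, etc.
  - cert-manager-helm-release.yaml     # HelmRelease for cert-manager
  - sealed-secrets-helm-release.yaml   # HelmRelease for the sealed-secrets controller
  - storage-classes.yaml
```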
A
Okay — and then you create the platform-system and the sealed-secrets namespaces, and then you have some CRDs for sealed secrets and then the Helm release for it. And then there is already a storage class modeled for on top of Kind — although it is very node-specific, so anything that's deployed leveraging that storage class will rely on it — but it is set as the default one.
B
No, we don't — it's just an example of other stuff that we can deploy there. We'll just leave it be. There, to Weimo's point of how Kustomize —
A
Go ahead — in this case, though, I think your point is that there's a race, right? So if I deploy an operator that defines a bunch of CRDs, for example — is it that Flux will continue to try and apply those resources until it actually succeeds, once the CRDs are defined?
C
I look forward to playing with that. Yeah — that is correct, and no, that will bite you; it's bitten me plenty of times as well. You're in a race: if you're allowed to do kubectl edit, you've got enough time — until Flux realizes that there's something different — for you to try and work out what's going on. I used —
A
I used to refer to this as technology gaslighting, you know — because if you worked in a very locked-down Chef or Puppet environment, you'd apply a change to a file and you're like, great, looks like it's working — and then: what happened? Where did the change go? I know that I —
A
Yeah, pretty wild. I remember being beaten up on that one before, and the one that got me was a configuration file that didn't have a header saying it was actually under the ownership of Puppet — and so the change happened, and I was not expecting it to happen, and I didn't even understand it, because I was doing consultant work at the time, right? So I didn't —
C
So yeah — maybe if we go back to basics, Duffie. I just want to show an example Helm release that people can look at. If we go to cluster — maybe go to the helm releases — and see what we've got in there. Yeah.
C
— to live dangerously. So, as you can see here, a Helm release — or a Helm resource, sorry — has a number of different annotations. That annotation on line six there is essentially saying that we don't want automated image updates.
C
So: I'm going to force the image that currently gets deployed as part of this Helm chart, and I do not want you to make any changes after you've deployed it. And then, in the specification, we are defining the chart that we want deployed, the version that we want it deployed from, and then a release name — and that release name needs to be unique at the cluster level. If you don't do that, then you're going to get yourself in a little bit of a mess.
C
I've been bitten by that a couple of times. And then these values are essentially the values that you want to override from the Helm chart's defaults — and that's kind of how they all come together as a group. So that's the definition of a Helm release: it's a chart, at a given version, with a release name, and then the values that you want to override.
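Putting those pieces together — chart at a version, a cluster-unique release name, values to override, automation explicitly off — a HelmRelease of the kind on screen might look like this. The chart repository, version, and hostname are assumptions, not the repo's actual values:

```yaml
# Minimal HelmRelease sketch matching the description above.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    fluxcd.io/automated: "false"   # pin the deployed image; no automated updates
spec:
  releaseName: grafana             # must be unique at the cluster level
  chart:
    repository: https://grafana.github.io/helm-charts  # assumption
    name: grafana
    version: 5.0.4                                     # assumption
  values:                          # overrides against the chart's defaults
    ingress:
      enabled: true
      hosts:
        - grafana.dev.example.com  # the environment-specific bit
```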
A
So when you set this value saying automated false, what are you turning on or off there? Are you saying —
C
Yeah — I like to be explicit, to make sure that, once somebody changes the truth, I know that explicitly in my Helm release. That makes sense.
C
So obviously base has got a set of directories, and then inside dev — if you look here, go to something like monitoring, maybe, as an example. If we look at the Helm releases inside of monitoring, we'll see Grafana — and here is where we have the environment-specific config. So mostly ingress, right: the host name. You could even move the TLS name and have it always standardized to Grafana if you wanted to. So what this is going to allow us to do —
C
Make — yeah, so if you go to the Makefile, we can show what this is doing locally.
C
If you run the command — yeah, if you scroll up, there should be — there's a test.
C
Yeah — so how Kustomize works, for anyone that's unfamiliar with this: it has the concept of kustomize build, and what kustomize build does is trawl the kustomization.yamls from the place that you're executing it — the directory that you're executing in — all the way up as far as it needs to go, all the way into base and everything, and then it produces you a single YAML file.
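An environment overlay that kustomize build would flatten together with the base into that single YAML stream might look like this. The paths are illustrative (and `bases` reflects the Kustomize style of that era; newer versions fold it into `resources`):

```yaml
# Sketch: kustomize/dev/kustomization.yaml — flattened by `kustomize build kustomize/dev`.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base                              # pulled in recursively
resources:
  - monitoring/grafana-helm-release.yaml    # dev-specific HelmRelease overrides
```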
A
I don't know why I do that, but I do that frequently. So here's — if you were to do — what is the equivalent command in Helm? It's like helm template, yeah.
A
Is that — this is still somewhat relative, right? Like, this is still looking at the Helm object as a Helm release? Correct. So here's a bunch of plugins that are being enabled — oh, interesting — so this isn't actually all the way down to the detail yet; this is still interacting with the Helm object as a release.
C
Yeah — so what we are giving to Flux — what we're giving, through the repo, to the cluster — is a Helm release, and then the Helm operator is going to see that resource and it's going to go and do something with it. Yeah — it's essentially going to be the thing that converts that Helm release into the equivalent of a helm install, passing in the correct chart, with the version, and then the values that you override.
C
So this is a useful exercise if you're running changes locally, right — say you're making quite wholesale changes and you're unsure, or you've made them and merged them to master and you're unsure about what's going on. This is essentially the equivalent of what Flux is running locally in the pod: it's doing this, and then it's doing a kubectl apply against it. So it's just a way for us to be able to get a bit more understanding.
C
The other thing to note — and I think it's very key, and Duffie and I were talking about this earlier — is that kubectl comes with a Kustomize plugin built in. Yeah, that's right. The one that Flux uses is the actual kustomize binary; it is not the one that comes with kubectl, and they are different.
C
So that's why Duffie needed to install the kustomize binary locally — because that's the one that Flux uses; it doesn't use the one inside kubectl. So that's just the kind of thing to note.
A
— the help that I'm looking for — yeah, here we go. So it's actually vendored into kubectl, but it is vendored in at kind of an old version, which is actually specifically the problem. So even though there is some Kustomize functionality in kubectl itself —
A
It's not completely up to date, and there have been quite a few changes to Kustomize since it was vendored in. I'm not sure if we're addressing that reasonably in upstream yet, but effectively we kind of stopped time for the Kustomize project within kubectl, and time kept marching on for things like what we're playing with here — like Flux and those sorts of things.
C
To Mel's point about whether Flux does the helm upgrade or the helm delete-slash-install: a key thing to note here is that Flux is just deploying those kinds; the thing that's actually implementing and interacting with Helm commands is the Helm operator. And you have a number of options: it's essentially doing a helm upgrade, but you can actually set force to true, and that will essentially do the equivalent of a delete and then an install over the top. So you have the option to do that.
C
No — so we've established a connection now. That's right — sorry — so we should be in a position whereby, if you do kubectl get namespaces, we should be able to actually see some things.
C
So a couple of other things to touch on here. If you can — can you do that, but also grep for helm-operator? Yeah — helm-dash-operator.
C
What you'll notice here is that, even though when I fired up the script we installed the Helm operator into the flux namespace, as part of the deployment that comes inside of the git repository we've got a number of other Helm operator instances. And then, if we actually look at one of those, Duffie — so if you do kubectl get helmrelease and then target the one that's running in cert-manager —
C
Oh, no — key thing to note: they are actually in the flux namespace. So if you do the same command but -n flux — and then, if you actually describe one of these — actually, maybe just get it, it's probably going to be easier — get it as -o yaml, or —
E
All right — so yeah, if you — no, scroll down a —
C
— bit, I think, yeah. So here we go: we are deploying a Helm operator — and the Helm operator is obviously a Helm chart — and there's a couple of things to note here. We've explicitly set the version of Helm that we want to use, so helm version v3, and that's because the Flux instance and the Helm operator that we deployed are specifically set to only understand v3 resources.
C
And then, even though the Helm release was deployed to flux, the target namespace is cert-manager. What that means is that all of the manifests that come as part of that Helm release get the equivalent of putting --namespace=cert-manager at the end of your helm install. And now what we have is a Helm operator that's sitting inside of the cert-manager namespace, and all its job is, is to reconcile Helm releases that sit inside of the cert-manager namespace.
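The per-namespace operator pattern being described can be sketched as a HelmRelease for the helm-operator chart itself. The chart version is an assumption, and the value names follow the fluxcd helm-operator chart's conventions as I understand them:

```yaml
# Sketch: a namespace-scoped helm-operator, itself installed via a HelmRelease.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: helm-operator-cert-manager
  namespace: flux                  # where the HelmRelease object lives
spec:
  targetNamespace: cert-manager    # manifests land here, like --namespace
  chart:
    repository: https://charts.fluxcd.io
    name: helm-operator
    version: 1.0.1                 # assumption
  values:
    helm:
      versions: v3                 # only speak Helm v3
    allowNamespace: cert-manager   # reconcile HelmReleases in this namespace only
```

Scoping each operator with `allowNamespace` is what keeps its reconcile loop small and fast, and what avoids recreating a single all-powerful Tiller-like component.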
A
That makes a ton of sense. Let's talk about that a little bit more, because that's actually a really super interesting pattern — that has been a challenge, generally, I think, in controllers, which I think is fun, right? And so what you're saying is that this Helm operator has its focus extremely narrowed — only to the chart associated with cert-manager.
C
Only to Helm releases that are deployed in that namespace. Okay — and the reason why we do that: if we think about it, if we have, you know, a huge amount of applications that get deployed to Kubernetes, we don't want one Helm operator that's responsible for reconciling all of those Helm charts, or Helm releases.
C
See, it's an opinionated choice — you don't have to do this; you could have one centralized Helm operator — but what we found is that the reconcile loop is much, much faster. So to Weimo's point about "do I need some additional logic?" — well, we've sped it up.
C
They are — so the only thing that pushes changes to git is Flux. It's the Helm operator that is the thing deploying Helm releases. So, from a permissions perspective, the Helm operator that sits within the flux namespace has to have cluster-admin, because it needs to deploy the other Helm operators that are in the —
E
Yeah, ingress should have it, I think.
C
We can actually tell — so if you go to the Helm release, there is an option that says whether it needs to be cluster-scoped or namespaced.
C
And then, if we scroll up — yeah, a ClusterRole — and the reason for that is because, obviously, for it to be able to create certificates, it's going to need to be able to do it across the cluster.
A
Yeah — and so if I look at the permissions there, it's probably permissions being granted to cert-manager that allow it to watch for resources being created in any namespace that might relate to the work that it would have to do. Correct, yeah. That makes sense. Whereas if we look at the nginx one — oh, actually, you know, the operator — I guess that would probably be two roles, right? Because being able to define an ingress object would be something that we'd watch for in every namespace as well.
C
I think Josh has got a comment that people want us to have a look at. So Josh shares: in the decentralized model, where there's one Helm operator, you kind of end up with Tiller again. Absolutely right — you end up with a bus factor of, you know, 100, when everything is being managed by a single Helm operator. You might —
C
Again, you can do it — there's no reason why not to — but what we have found at kind of large scale, and I'm talking hundreds and hundreds of Helm releases, is that having this specific structure makes life a lot easier. And again, if your application developers are deploying to a specific namespace and you've got the RBAC permissions right, you can give them the ability to see the logs of their Helm operator — the one responsible for deploying their Helm releases — and they don't need to try and trawl through one.
C
That wouldn't work, because you could have multiple Helm operators.
A
Okay, well, cool — so let's play with this stuff. Let's do a little of the chaos engineering that we were talking about before. So one of the things that we pointed out before was: if somebody modifies a resource, then the Helm operator and Flux will both take their best effort at putting it back the way that it was, based on the desired state.
A
C
D
A
A
A
C
Here, that's correct. Zayn, to your point, yeah, the helm operator can be used without flux; you don't need it. As long as you can deploy a helm release in the structure that the helm operator understands, you could do it from your centralized location. So before, we were talking about Jenkins: you could get Jenkins to manually apply helm releases, and then the helm operator would sit inside and reconcile.
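A minimal sketch of the HelmRelease custom resource that the (Flux v1) Helm Operator reconciles; the chart, version, and namespace shown are illustrative. As Steve notes, the same manifest could be pushed by Flux or applied by a CI job such as Jenkins:

```yaml
# Sketch of a HelmRelease in the shape the Helm Operator understands.
# However it lands in the cluster (Flux sync or `kubectl apply` from CI),
# the operator picks it up and reconciles the release against it.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: team-a
spec:
  releaseName: podinfo
  chart:
    repository: https://stefanprodan.github.io/podinfo
    name: podinfo
    version: 5.2.1
  values:
    replicaCount: 2
```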
C
Yeah, I think so. Kind of the conversation that's going on in the chat, between Caleb and Eric and others, about using Argo as a way to be able to make changes to the cluster: yeah, we could essentially do that with a normal CI machine. It's just, for us, it's just how we tag. So we just have...
C
We can have manual approval that requires us to tag a new image for a given environment, and once that happens, the kind of deployment...
C
To the point in the chat about the horizontal pod autoscaler: yes, yes, it would. So your best bet is probably to not specify those values as part of your helm release, so that you actually have another way of setting them, because otherwise it's going to constantly get in a fight: the autoscaler is going to scale up, and then the helm operator is going to put it back to the previous value.
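As a sketch of that approach: leave the replica count out of the HelmRelease values entirely and let an HPA own it, so the two controllers never fight over the same field. All names and targets here are illustrative:

```yaml
# Sketch: the HPA owns replicas for the Deployment the chart created.
# If the HelmRelease's spec.values also set replicaCount, the Helm
# Operator would keep resetting whatever the autoscaler changed.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: team-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```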
C
Yeah, welcome to Kustomize: having to know the directory structure where everything is...
A
The repository, there we go. Okay, so I'm going to go in, it's my own master, and how to get to your master... I don't know why I would pull that. And then we're going to compare: we're removing ingress, so I can see the changes that are going to be done here. I'm basically removing this line, and I've also, because of the other change I made to the flux init script...
C
You could have a different branch for your environment, but we prefer, and I personally prefer, to have a different directory: always have things track master, but just use directories to define which environment you're looking at. The main reason why I like that approach is because it makes the PRs more easily read, right? We're always merging into master, we're doing a diff against master. You know, so if you look at what Duffie nearly did, it was one slip away from merging into the wrong repo, or, you know, kind of putting...
C
Strategy-wise, it could be a different branch, and he merges into production instead of, sort of, dev when he makes the change. So I think it's horses for courses, right? You can do this a number of different ways. For us, we like this pattern; it's easier, for developers especially, to understand that they're going to merge those changes into master and then the directories are going to do it.
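The directories-over-branches layout Steve describes might look roughly like this; an illustrative tree, not the actual repo:

```
.
├── base/                        # shared manifests, environment-agnostic
│   └── ingress/
│       ├── namespace.yaml
│       └── helm-release.yaml
└── environments/                # one directory per environment, all on master
    ├── dev/
    │   └── kustomization.yaml   # references ../../base/... entries
    └── prod/
        └── kustomization.yaml
```

Every change is a PR diffed against master, so a promotion to prod is just an edit under `environments/prod/` rather than a merge into a long-lived branch.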
C
It should eventually disappear.
C
No, you'll see a change, yeah. You should see a delete, in terms of the helm release specifically.
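As a sketch, the line removed in the demo is plausibly a base reference in the dev overlay's kustomization.yaml; with Flux's garbage collection enabled, dropping the entry deletes everything that manifest created. Paths and names here are illustrative:

```yaml
# Sketch of environments/dev/kustomization.yaml. Removing the
# base/ingress entry (as in the demo) means Flux no longer sees those
# resources in git and, with garbage collection on, deletes them.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/podinfo
# - ../../base/ingress   # removed: Flux garbage-collects the namespace
#                        # and HelmRelease this entry produced
```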
C
Yeah, and I think, to Gavin's point as well: GitOps is not just for applications, right? It's just a way of getting state into the cluster, so you can use it for a number of different things: secrets, other custom resources that you've got. Think, if you're running a Kafka cluster and you want to define your Kafka topics, you'll have flux reconciling those, kind of, you know. It's just a way of being able to get resources into the cluster, essentially. It could be anything.
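As one concrete sketch of that idea: assuming the Kafka cluster is managed by an operator like Strimzi (my assumption, not named in the episode), topics can be declared as custom resources and reconciled from git like any other manifest. Names are illustrative:

```yaml
# Sketch: a Kafka topic as a Strimzi custom resource, kept in git and
# synced by Flux like any other state. Cluster and topic names are
# hypothetical.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # binds the topic to a Kafka cluster
spec:
  partitions: 12
  replicas: 3
```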
A
Nice, so we do see the change. So I actually went ahead and grepped for delete, so I can definitely see that: because I removed that line from the kustomization file up above, when defining the environment that is dev, I removed that from my dev environment, and I see that kubectl delete -f happening. And now I do kubectl get ns...
A
I see ingress gone, so there we go. We made a change to the upstream configuration of what's happening inside the cluster, and we've seen it be realized by flux. And what's neat is this follow-through model, right? Like, the thing that was actually told to be deleted was the associated helm release, and that meant, if I understand it correctly, right, so flux...
C
No, so we've deleted everything. We've deleted that, so it's everything that is derived within that directory. So if you go back to the repository and you go to base and you look at what's in base/ingress, it's everything that's inside there. So the namespace is gone, the helm release is gone.
A
That's really cool, I like that. Well, I think we're pretty close to that time, folks. We've been at this... it's coming up on three o'clock. I hope that you all really dug this first TGIK with a guest. Like, I think it's been a tremendous, tremendous time getting that working.
A
All right, so that was... that was the episode, kind of exploring GitOps. We talked a little bit about what GitOps is, like some of the different patterns for it. We had some incredible work by Mr. Steve Wade, who was in London, and it's the very first... like, I love that not only are we doing the first co-hosted TGIK, but also we did this with, like, two people on opposite sides...
A
And then mine is my line, of course, so you always know how to reach all of us. Thanks, all, again, and we'll see you all next week, unless you have some last comments. Thanks, everybody saying thanks in the chat. Again, thank you all so much. Definitely check it out and play with it; it looks like it's a really easy way to kind of get involved and have some opinionated ways of playing with things and kind of exploring this whole space.
A
I can tell you that, in my opinion, and I imagine that quite a few of you share this opinion, this is definitely a space where there is a lot of interest and a lot of focus. And I imagine, as we get more into different GitOps patterns, we're going to see this become kind of a new norm, certainly over things like kubectl apply. That's not how we should be doing this stuff; we should be figuring out how to programmatically manage this stuff in general.