A: And we're live. Hello and welcome back. My name is Robert. I'm part of the SUSE Rancher community here at SUSE, and you're here for today's master class, and I think we're talking about Rancher. No we're not, we're not talking about Rancher, because we have Dan Garfield on. Dan is from Codefresh. I don't even know your exact title, actually. We have an introduction slide, we can go to that, but Dan, welcome.

A: Be here. All right, so today's class is GitOps with Rancher and Argo CD, and Dan, again, gracious with his time. We've actually talked a couple times before. I usually do partner interviews with him that you guys have seen before at KubeCon; we've done the last two KubeCons, both in Valencia and in Los Angeles. So this should be nothing new for our community, but with no delay we'll just get started. Dan, you want to start sharing, go through all the...

B: There we go. Yeah, you should be able to see my screen now, we're all set, and I've got my Argo mug here.
B: So I'll be watching the chat as we go through, and feel free to make comments and ask questions. I always think that these things are way better when they're conversational, when people are engaging. So as you're looking at stuff, as you're seeing stuff, as you're having questions, put it in here, because it makes it a lot more interesting. I see we got people joining from Egypt; I think we got people from all over the world joining, which is always lovely to see. So, to introduce my... well.

C: Okay, a few... yeah.
A: Yeah, here's some housekeeping things. The platform we are using is called Crowdcast, and below you can see the chat. Some people are using it; if you feel comfortable telling us where you're from, please do put it in the chat. We'd love to... I love telling my kids I talk to people around the world and not just other Ohioans.

A: There are polls, you'll see those come up as they're asked. Or if you have a question, please ask a question and upvote or downvote them. I will put in polls, so if you see those, I will put in chat to say, hey, go answer these things. It helps us keep the conversation going. So, next slide.

A: This is a master class. This is, you know, 45 to 75 minutes, depending on the questions. We welcome questions, as much as you've got; please keep them on topic. If it's a little off topic, we'll just have you route those questions over to the community and we'll get them answered over there. We try to get them all, but if we don't, we'll get to you within the community; we always answer these things. This is being recorded.

A: You will see this on YouTube when I get the video cut and cleaned and put up there. So if you're asking those questions, it's already there. I think that's it for my side. I think it's your slides now, Dan.
C: Okay, perfect, let's get into it.

B: So yeah, the session's gonna be recorded on YouTube. So let me introduce myself. My name is Dan Garfield. I am a co-founder and chief open source officer at Codefresh. We are an enterprise Argo company, and I am an Argo project maintainer. Truthfully, I don't do a ton of code contributions these days, but I do help out just a little bit on some of that stuff, more on the community side.
B: So, less interesting probably: we're going to be talking about avoiding configuration drift. Just to back up on this talk title, we're going to be avoiding configuration drift with Argo CD, and a lot of you folks might be thinking, what is configuration drift? Don't worry, we're going to cover it, and this is really about GitOps at the end of the day. So we're going to up-level it even a little bit more and get deeper as well.

B: We're going to cover just a brief intro to Rancher, explain some configuration drift, we'll talk about GitOps, we'll talk about bi-directional sync and what we're really talking about there.

B: It's more unidirectional sync, now that I think about it. Self-healing Kubernetes clusters, we're going to get into a demo, and we'll do Q&A, and we'll be rocking and rolling from there. From a Rancher introduction side, and I don't know if you prefer to talk to this, but Rancher has a ton of tools, and most of you are probably already aware of this: we've got Rancher Kubernetes Engine, we've got the Rancher server, which allows you to manage many instances of Kubernetes and many destination clusters.
B: You've got Fleet, which is part of that and provides not only authentication services and proxy services, but allows you to go pretty deep in managing Kubernetes at scale. So if you have a ton of Kubernetes clusters, this is where it becomes really interesting. There are a couple of places, actually, where I first saw this.

B: If you can see behind me, this stack of machines over here, this is actually running Rancher K3s. So this is my home lab cluster, and I got a couple of HP EliteDesks, I think G3s or something, and I got them all off of university surplus. I don't know if you all have that where you're from, but if you go to a local university, they usually have surplus and you can buy machines on the cheap. So I upgraded my Kubernetes cluster recently with a bunch of surplus machines.
B: So now I have a lot more power than I used to; before that it was all Atomic Pi clusters. So I've got Rancher K3s running as my Kubernetes distribution, and then you have these other services from Rancher that'll allow you to manage scale. Where I first saw Rancher really being used at scale: when we were organizing GitOpsCon, we invited Chick-fil-A to come and give a talk, and I think you've all probably seen Chick-fil-A give a talk.

B: If somebody in the chat wants to throw their GitOpsCon talk into the chat... they also recently presented at the Argo meetup, you could throw a link in there if somebody wants to grab it. But what they're doing is they're using Rancher with Fleet with Argo CD, and that's really the basis of where I came out for this talk: looking at what they did and the success they've had, and saying, okay, how do we introduce Argo CD to more people and help them understand GitOps and make this more accessible?
B: So what they essentially do is they use Argo CD, which is a tool we're going to introduce; many of you are probably familiar with it. It's the world's most popular and most used GitOps tool today. They use it with Fleet, and what they do is they basically have, in every storefront...

B: So, for many of you, I see you're from Germany, from Brazil, from South Africa, you're probably not familiar with Chick-fil-A. Think McDonald's, except better chicken, more focused on chicken. It's a fast food restaurant, and they have a Kubernetes cluster in every store. All of those clusters are added to Rancher through Rancher Fleet, and they can basically manage the rollout across all those clusters by grouping them into regions and things.

B: So they can say: okay, let's update the version of our software in this region, and then we're gonna do it in this region, and then this region. And then each of those clusters is actually using Argo CD to do reconciliation for itself. So Fleet basically says, okay, update the desired version of applications that we have in Argo CD, and then Argo CD goes and actually manages the update within that cluster.
B: So let's talk for a moment about configuration drift and why this happens. Configuration drift is poison within an organization. Basically, it means that you have environments that are supposed to be similar, or an environment is supposed to be updated, but for some reason it is out of date with its definition.

B: Well, if we're doing GitOps, we have it all defined in Git, and whatever the definition is in Git, that is what should be deployed. And you might be looking at your organization and saying: that's not the way that we work. We actually, you know, we update Git, and then we talk to Joe, and Joe's our release manager, and Joe goes in and he does a bunch of work.
B: I don't know what he does, but he figures it out, and then he updates stuff when we're ready. And maybe we still have stuff in Git that's not ready to be deployed, but Joe knows not to deploy it. Or we have stuff that we've updated in Git, or maybe actually there's some hotfix, and I don't know how he makes it work, but Joe goes in and he figures out how to make it work. And this is like when I first started building software.

B: I would build a patch and I would email him the patch and say, please go apply this, and he would go and apply the patch to the server. And then if it didn't work, he would yell at me and I would yell at him, and then we would maybe switch out servers or something like that, because we usually kept one server that was basically a backup of live that we could switch to.
A: I think all of us have been there; that was part of my career. I did the same thing, so you told that story and you could have put me in it, and I'm like, yeah, been there. I was on both sides of the house too. I was the guy who was like, this doesn't work, what's going on? So we've both been there, yeah.
B: So it's not a great way to work, because, I mean, at this point we're not even using containers, right? This is back in the day, so some of you might still be in this situation. You're saying, I want to improve the way that we're running software. Well, if I am running GitOps, I basically said: look, how do we know what's supposed to be deployed in production?

B: You start to get into, if you say as a hard and fast rule, once it's committed into this Git repo, it should be deployed. Well, the other area where configuration drift happens is really in that gray area, and a lot of downtime that's caused within organizations. I would say, my estimate, I didn't take a poll on this, but I would estimate 90 percent of downtime is caused by these mistaken deployments; 90 percent of big downtime.
B: Yeah, in 2017 or 2018 AWS had an outage where somebody was making a change. They were connected directly against their production and they fat-fingered it. Do you guys know what fat finger means? For the international audience: it means you're typing, but your finger was so big that you accidentally typed an extra character. So it just means you mistyped; it's a joke.

B: There were a lot of jokes about people's dishwashers not working because they were internet-connected and that kind of thing, so that was a bummer. Costco had a similar kind of change, where somebody was connected against production on Black Friday and made a change, as I recall. I may be getting some of these details wrong, and Costco engineers are going to reach out and say that never happened, that's libel in the US, and I'll say my mistake, my mistake. But this is very common: people go in and make changes cowboy-style.
B: They just connect directly to production and they tweak something, they get it working. That's a common source of configuration drift. I also added security breaches: somebody edits your live production, they've hacked into your system or something, and they're mining Bitcoin, or they've injected code or whatever.

B: This is another case where you get configuration drift happening, and if you're not doing GitOps, you probably don't know about it. Another source is failed automation. So if we look at configuration drift: maybe we start off, we've got these three servers over here on the left.

B: Somebody makes some kind of ad hoc change on server two, and then we go to do a deployment and server two fails. The rollout fails because there's been some sort of ad hoc change, and I think about this most often in Kubernetes clusters. So if I had a number of different Kubernetes clusters, maybe I'm rolling out, or maybe I have these different things... So let's pause for a second, because there's actually a good question here about immutability.
B: Yeah, so immutable environments, I think, is a really great part of this story, because if you're using something like Kubernetes, you don't update servers. You make a change, and it destroys the pod and creates a new one with your changes, so each piece should essentially be immutable once it's deployed, right? And this doesn't apply just to Kubernetes; it would apply to other services as well. If you have good discipline, good infrastructure and good management, you won't have to update virtual servers.

B: You'll create new virtual servers and then reroute the traffic to the new one, and that's a better situation to be in. So this configuration story, when I was thinking about it, reminded me of a story that I was familiar with, that comes to us from a major credit card company. They had a situation where they had a team that was working on some components of the service, and they ended up laying off the team.
B: This was a number of years ago; there was an economic downturn, so they laid off the team, and they expected that they were going to move over the management of the service to another team that was going to be taking on more responsibility. And as they headed into Christmas, they had an outage on that service.

B: And unfortunately, there was no one from the team that had been managing that service to look at it and tell them what was going on. So when the new team that had taken on this service went in and looked at it, they said: okay, hang on, I'm trying to figure out what's going on here, but it looks like none of the changes that have been made to this server for the last year and a half have been getting checked in to Git. So there's been a bunch of changes.

B: We have no idea what they are. And this engineer, this is a number of years ago, but they basically printed out all of the running code and then went through it line by line to figure out what was going on and fix the outage. Now, this represents a really big organizational failure, because it means that people are basically making ad hoc changes on top of ad hoc changes.
B: You don't know what's running, so this drift becomes a really big problem. And you'll notice when this is happening, because you have things like: it succeeds in staging, but it fails in production. If that's happening often, it probably means there's some drift failure happening. There could be some architectural failure happening too. You can just ask yourself: do you have any systems that you view as, oh, you don't touch that system, that's Joe's system?

B: We don't mess with that, because it's the whole thing, it's complicated, you know. Well, it's probably not complicated. It's probably poorly implemented. That's probably what it means, right? If you have a system you're afraid to touch, it means there are probably configuration drift problems. Rip that band-aid off and start.

B: If you can't deploy updates to a system... you know, the old saying is: if you want to get good at something, do it repeatedly, right? So if you want to get good at updating that system, start doing it repeatedly. Because if you can't update a system, if it's off limits, it's already fragile. It's gonna break; you just don't know when it's gonna break. So you may as well get into the habit of starting to update that system.
B: Now, lots of hacks, quick fixes in specific environments: those things become very common, where people are like, oh, you have to add a little overlay, you have to add a little tweak to this when you deploy it, because this environment, it's special, it's a special snowflake. And people are trying to figure out what the difference is between these environments.

B: These are good symptoms that you have a lot of configuration drift problems and that you're probably not following GitOps and getting the full value out of it. And some people, and this goes back to, I'm going to say your... I don't know if your name is rendering correctly on my screen, but Cagatay's question about immutable infrastructure, immutable environments: a lot of people will say, look, we will use something like Terraform and we'll do a terraform apply, so that gets me my desired state, and so I'm actually working from Git.
B: I actually check that in, and it's okay. The problem with that is that the configuration drift we're talking about usually happens, in fact it always happens, after the infrastructure is created. People go in and they make a cowboy change, there is some kind of tweak that people make and maybe they don't document it, or a hacker gains access.

B: Something happens where that desired state, maybe it wasn't even applied correctly in the first place, that desired state isn't actually being met, and so now you have essentially a black box. You don't know what's happening there, and it's terrifying to touch it, because you don't have predictability about what's going to happen. So this happens, like I said, with Kubernetes as well: somebody kubectls into production, they apply some change, and now you go to do a deployment and it fails because it's different than what's expected in staging.
B: This is a big problem, and Kelsey Hightower had a really good tweet about this. He said: kubectl is the new SSH. Limit access and only use it for deployments when better tooling is not available. You should really only be using kubectl for local deployment. If you have to break glass and use it against staging, that's a problem, let alone production. If you're doing it in production, oh boy, we're in trouble, right?

B: You should be in the position where you're not ever having to do it for staging, because if you don't ever have to do it for staging, you know you're not gonna ever have to do it for production, right? So you don't wanna be having people making ad hoc changes like this.
B: So what's the better way? There are a lot of strategies that people employ. They say: we have really great documentation for how changes should be made; we have audits on all manual changes; we enforce these best practices; we have great training. These by themselves are doomed to fail, because this is ultimately a question of tooling and organizational structure. So these things might be helpful, but they're not going to be enough to get the job done. You need to be using GitOps tooling.

B: Tooling that is going to enforce this stuff, and you need to be doing it in an enforced GitOps way. So this is when we get into Argo CD and GitOps. For those of you unaware, I'm just gonna shoot this to you really quick: this is opengitops.dev. This is an open standard that we helped author. I think we had... no, it's 90 interested parties that were involved in the creation of the standard, and over 120 individual contributors. I'll throw it in the chat here.
B: For you. This goes into the standard of GitOps, the principles that you really need to follow at a very basic level to be following GitOps, and I'm not going to go through those exactly just yet, we're going to... yeah, go ahead.
B: Yeah, great question. Okay, so DevOps, from a definition standpoint, is debated quite a bit. We don't really know what it is; it's like, we know it when we see it. DevOps is like developers working with operations to improve and deploy stuff. We've got the unicorn tale, that classic book, we've got all these...

B: You know, books about DevOps and how to do it. GitOps, I think, is best thought of as an implementation of DevOps best practice, as a subset of DevOps. And DevOps, you know, typically we're like, okay, we'd like to have things defined in code.

B: Infrastructure as code, that feels like DevOps. We're gonna have good separation of responsibility, we're gonna have good communication between teams. And when you get into what DevOps really is, it ends up meaning a thousand different things at a thousand different organizations.
B: Your desired state needs to be entirely declaratively defined. If you're coming from the DevOps world, you're probably thinking: yeah, okay, we agree, infrastructure as code, that's important. Declarative is as opposed to doing something imperatively. So if you say, oh, I want to have a server, so I follow this procedure: I click create, or I run this command that creates a server, I SSH into it, and I make some changes or whatever. Well, those are a bunch of imperative operations.

B: Declarative is saying: I want a server with this profile. Now, there's going to be a bunch of imperative operations that go on in some sort of automation to create that desired declarative state, but you have defined it declaratively, right? So you're saying, look, I just want a server with these parameters.
B: I don't care how it happens; go and do whatever operations automatically need to happen. But at the end of the day, I know that I'm going to be able to recreate this over and over again, the same way every time, because I've declared it; my desired state has been defined declaratively. And when we talk about that desired state, we're not just talking about server creation, we're talking about all the software that runs on those things. Kubernetes is very, very good for GitOps, because it's really good at this declarative side.
B: So this is why you see so much discussion about GitOps within the Kubernetes world: because it's so good at it. The second thing is having it be versioned and immutable, and this is something that I see people mess up all the time. I took on a hobby project this weekend. I don't know if any of you guys play games, but I was setting up a game server, and a lot of people that do containers in the home lab...

B: They just use Docker Compose and run it on a single machine, and it's so common. Like, Steam servers don't allow you to specify a version. When you install a Steam server, you can install a branch, and then it just gives you whatever the latest one is. It's horrifying to me, because as a GitOps professional, I want it to be versioned and immutable.
B: I want to be able to say, deploy this version, and I'm always going to get the same thing, and when I'm ready I'll say, deploy this new version. And you see a lot of people deploy things where the container that they've specified in their deployment is using the latest tag. Well, you're using a declarative system, for sure, but it's not actually versioned and immutable. And they'll do the same thing with Git, where they'll say: you know, I'm relying on a Helm chart that is always just whatever the latest is. Well...

B: That's actually... you've taken versioning and immutability out of your deployment equation. And then next we have these software agents that deploy things automatically. They pull that state automatically and then they continually reconcile, which means they're aware of the actual state of what's happening and the desired state as defined in Git.
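As a rough illustration of the "versioned and immutable" principle Dan describes, a Kubernetes Deployment can pin its container image to an exact tag instead of latest. This is a minimal sketch; the registry, image name, and tag are hypothetical, not from the demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name, for illustration only
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          # Pinned to an exact, immutable version rather than ":latest",
          # so the same manifest always produces the same rollout.
          image: registry.example.com/demo-app:1.4.2
```

With a pinned tag, rolling back is just reverting the commit that changed the tag, which is what a reconciliation agent needs in order to know exactly what should be running.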
B: So when we talk about GitOps, it's doing at least these four things, and there is more to solve above and beyond this. But if this is confusing, it should make sense within the context of the demo, because we'll show you how this works. We'll introduce you to Argo, but let's just look at a practical example. Okay, so you are making a commit to your source code, right? You probably have a repository where your application source code is. That's going to make a build.

B: It's going to make a new container image, it's going to push that image to a registry, and we're going to open up a pull request onto a different repo. Generally, I like a two-repo approach for GitOps, and that second repo is going to be your kind of infrastructure repo; I'll show you what this looks like when we get into the demo. This updates the manifests, the charts; there's some kind of pull request that happens, and then the cluster looks and says...
B
Oh,
the
desired
state
has
changed,
I'm
going
to
go
and
make
an
update.
So
this
is
how
a
deployment
would
be
happening.
Now
you
compare
this
with
a
classic
ci
cd
approach,
where
you
just
say:
oh,
I'm
not
opening
a
pull
request.
Instead,
I'm
just
running
automation
to
deploy
the
new
version.
Well,
what
happens
in
that
new
version
fails.
How
do
I
know
what's
supposed
to
be
deployed?
How
does
that
work?
B
Well,
you've
you've
created
a
expected
desired
outcome
with
imperative
operations,
rather
than
creating
your
desired
outcome
and
then
letting
imperative
operations
take
care
of
it
behind
the
scenes.
So
that
might
be
a
confusing
point
as
we
as
we
get
into
this,
I
think
it'll
it'll
all
start
to
make
sense.
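One way to picture the pull request that lands on the infrastructure repo in this two-repo flow: assuming a Kustomize layout (all paths and names here are hypothetical), the automated PR changes only the pinned image tag, so the Git history becomes an exact record of every version that was ever desired:

```yaml
# apps/demo-app/kustomization.yaml in the infrastructure repo
# (hypothetical layout). The automated pull request from CI touches
# only "newTag", so reviewing a deployment is reviewing a one-line diff.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: registry.example.com/demo-app
    newTag: "1.4.3"   # bumped by the pull request; previously "1.4.2"
```

Merging the PR is the deployment decision; the cluster-side agent then notices the new desired state and performs the imperative work.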
B: Okay, so from a high-level standpoint, you basically have Argo CD, you have your desired state in Git, it's pulling those changes from Git, and then it's looking at the actual state in Kubernetes, and it's saying: oh, something about what you have in your definition is actually not correct, so I'm going to update it. Argo CD is reconciling here. Now, when we say reconciling: it's not that I make a change against Kubernetes and then Argo CD sees the change and says, okay, I'll go write that to Git.

B: So this is telling me that this Git revision is what's been defined in Git, and this sync has occurred, and it's working properly, and it's always going to guarantee that these things are in sync. And if someone goes and kubectls onto the cluster and makes a change, that would show up as drift, and it would be automatically detected by Argo. So, in order to be doing GitOps, we need to have a component that is aware of the desired state in Git and the actual state, wherever that is.
B
This
is
why,
in
the
terraform
example
that
we
had
earlier,
where
I
said
hey,
I
did
a
terraform
apply
that
bootstrapped
my
infrastructure
that
bootstrapped
my
components.
Okay.
Now
what?
If
someone
goes
in
and
changes
it?
How
do
you
know
well?
Terraform
does
have
some
tooling
for
this,
but
it's
actually-
and
I
don't
mean
to
pick
on
terraform
but
terraform
is
not
very
good
at
being
aware
of
state.
B
It's
not
very
good
at
being
aware
of
state
changing,
and
so,
if
someone
makes
a
manual
change,
terraform
is
very
often
not
aware
of
it,
and
so
you
can't
detect
that
drift
and
you
can't
therefore
correct
it.
So
you
don't
actually
know
what's
happening
in
production.
You
are
assuming
that
the
things
that
you
put
into
terraform
when
you
hit
terraform
apply,
have
been
made
into
reality
and
you
are
assuming
that
they
are
staying
that
way.
B: My dad said to me one time when I was growing up: assuming makes an ass out of you and me, which is how you spell assume. We don't want to assume, we want to know. Who was it, Reagan, that said, trust but verify?
B: So we trust, you know, that the systems are going to work, but we need to have some verification in place for it. And I heard someone tell me the other day, they said: I don't know if I need GitOps, because we just lock our environments, you know, we just have a CI/CD process that updates it. And it's like, okay, that's great; what's happening in production? And they're like, well...

B: What do you mean? I mean, it's locked, so it's got to be whatever was last applied, right? And I'm like, maybe? You tell me. You don't know. You don't know what's happening in production. You're assuming; good luck with that. That's a recipe for... you know, it's like saying: look, I deployed the service, I don't need monitoring on it.
B: You're assuming, just because the monitoring isn't down, that the configuration is correct. Well, even if you have monitoring in place: hey, your servers might be running, serving spam that somebody injected on there, but, you know, the metrics are all there, so they're fine. You don't know. So you want to know what's happening in production. That's where this stuff comes in, and it's going to help you avoid downtime, it's going to make things easier.
B: So when I make a change to my application: in this case, we've detected that manual changes have been made, and we find that they are out of sync. The service has been updated, and this could be because the deployment was changed, in which case this sync would show up over here, or because somebody made a manual change, which is what happened, which is why this is showing up as out of sync.

B: Now, in Argo CD, once it detects this, you can have it either automatically correct these things with auto-healing, or you can have it allow the issue to exist until you do a manual intervention.
B
That's
a
policy
that
you
set
at
the
application
level
and
it
allows
me
to
view
a
diff,
so
it
will
actually
show
me
what's
changed
in
this
case
the
the
actual
state
the
desired
state
is
it
wants
port
this
port
80
and
target
port
to
be
set,
but
it's
not
been
set,
so
that's
been
removed
for
some
reason.
So
at
that
point
we
can
hit
sync,
it
will
sync
and
that's
that
state
will
be
set.
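Assuming the Service in this diff looks something like the following (a hypothetical reconstruction, not the actual demo manifest), the drift is that a port entry desired in Git was removed from the live object:

```yaml
# Desired state in Git (hypothetical reconstruction of the demo Service).
# The live object had this port entry removed by a manual edit, so
# Argo CD reports the application as OutOfSync until it is synced again.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
    - port: 80          # desired in Git
      targetPort: 8080  # removed from the live Service by the manual change
```

Hitting Sync (or enabling self-healing) re-applies the Git version, restoring the port definition.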
B: Monica asks: how do you connect Argo CD to the image registry, so that the new image is deployed to the Kubernetes cluster? So, I'm assuming there are two meanings of your question. One is: how does Argo CD actually connect to a private repository, and have authentication, and have that work? That's something that is pretty easily accomplished in the docs: when you go to add a registry, you can add authentication information, and you can pull stuff.

B: The other is: if I update the image, how does Argo CD become aware of it? This is actually through your configuration update, so you should be updating the configuration. When we had this process flow back here earlier, where we say, oh, an image has been pushed, we need to open a pull request to Git: that could be done by a person, or it could be done by automation.
B: This is an embarrassing repo, by the way; I've just been plinking around in here over the weekend, so it's nonsense. But basically I have my application repo here, where I have my Dockerfile that's building things, and I've just been fiddling around with this over the weekend. It's a hobby project, but I've got my repo where my application stuff is stored, and I actually have some deployment information inside of here.

B: I have a kustomization that I'm using, and a plain Kubernetes manifest, because I just want to add the additional option if people wanted to deploy this manually or something. So this is my application repo. Now, when I make changes to this, basically what I'll have this do is automatically open up a pull request onto my infrastructure repo. So let's go to that one and I'll show you what that looks like. This is just for my home lab.
B
So this is my argo cd instance, and for those of you that aren't aware of argo cd, each one of these tiles represents an application. An application is an arbitrary definition of resources that need to be synced, along with the destination for where they need to be synced. In this case, let's look at this demo app here. Here's the application definition; it's to be deployed at my default cluster.
B
This is the repo where it's pulling from. The target revision I'm pulling from right here is HEAD, which is a little bit of a no-no, like I said earlier, because I actually want it to be versioned, so I need to update that. And then this is the path it's pulling from. I have some policy in here: it's automated sync, so any time I make changes, it will make changes here, and it will automatically prune resources.
B
So
if
I
delete
a
resource
from
my
manifest
it'll
get
deleted
and
it
has
self
healing,
which
means,
if
I
go
in
and
edit
this
resource
in
kubernetes,
argo
cd
will
pick
up
on
that
drift
and
automatically
destroy
it,
which
I'll
demonstrate
for
you
in
a
second
I've
got
my
service.
I've
got
my
deployment.
I've
got
my
other
components,
my
resource
group,
my
pod.
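An Application with the sync policy being described (automated sync, pruning, self-heal) looks roughly like this; names and the repo URL are illustrative, not the ones from the demo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/infra-repo.git
    targetRevision: HEAD      # better: pin to a tag or SHA, as noted above
    path: apps/demo-app
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true     # delete resources removed from the manifests
      selfHeal: true  # revert manual drift in the cluster
```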
B
These are all the elements that make up this service. The way that this repo is structured is I have an apps folder, with a folder for each of my apps that are deployed. So this is the hobby project I was telling you about, and this has a base reference which, if we look at the kustomization here, is referencing the kubernetes branch that I have, just because it's all alpha while I'm fiddling with it. And so once...
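The infrastructure repo layout being described is roughly this shape (directory and app names are illustrative, not the actual ones):

```text
infra-repo/
├── bootstrap/               # Argo CD's own definition (shown later)
├── projects/
│   └── elite-cluster.yaml   # ApplicationSet for the cluster
└── apps/
    └── my-game-server/
        ├── base/
        │   └── kustomization.yaml   # references the application repo
        └── elite-cluster/
            ├── config.json          # picked up by the generator
            └── kustomization.yaml   # overlay patches for this cluster
```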
C
A
You're actually applying gitops to your V Rising server, yeah.
A
So, not many times do we get this, guys: someone is going to do a demo on a game server that he worked on this weekend. I'm enthralled right now, I'm like, wow, this is actually a real use case. So you're not getting a demo, you're actually getting a real-world use case. Yeah.
B
Yeah, this is more like a sprint review, and I'm just sharing.
B
My application repo has everything in it that I need to run the application, but my deployment repo specifies what is actually getting deployed. So I have my base. I'm using kustomize here, and if you're not familiar with kustomize, it's a kubernetes resource package manager, similar to helm. But what I like about it is:
B
I can reference the resource from my base repo, and then I also have overlays for each of my servers. Now, in this case I only have one, and I actually just removed it; I stuck it into a backup folder.
B
So this isn't getting deployed anywhere at this moment. When it's ready, I'm going to move it into the elite-cluster folder, which will then automatically deploy all these elements. And in my kustomization it references that base, which, remember, was referencing my original resource in my application repo, and then it's applying a patch for the load balancer specific to this environment. Well, in this case it's not actually an ip address.
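A per-cluster overlay like the one described, referencing the base and patching the Service type, might look like this (paths and names are placeholders):

```yaml
# apps/my-game-server/elite-cluster/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - target:
      kind: Service
      name: my-game-server
    patch: |-
      - op: replace
        path: /spec/type
        value: NodePort
```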
B
Actually, I just specify that it's NodePort, which is not the default, so I just make my change. So, going back over to our demo app here.
B
Okay, so you should be able to see my whole desktop, so everything's going to be a little bit smaller now. We've got our argo instance here. This is monitoring both my kubernetes cluster and my github repo, right? So I'm going to switch over to:
C
My atomic cluster, okay. And we're going to go over and get rid of these logs; we're going to be looking at those, right.
C
B
Okay, so I changed this replica count to zero. I can do this, and I save that. Now, you'll notice that it actually happened really fast: argo cd saw that there was a divergence right away and fixed it already. Did you see that? It happened really quick. Let's do it in the terminal, because maybe it'll be a little bit more noticeable.
C
Okay, let me make sure I'm on the right... oh, I'm on the wrong cluster right here. Let me just change my kube context.
B
I've destroyed it, and man, argo cd is picking it up almost too fast. But you can see it's picking it up immediately, and it's saying, hey, it's not supposed to be scaling down; it's supposed to be set and running. It's not supposed to have zero replicas, it's supposed to have one. So if we look at it again, you can see that it's already put that replica back.
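The drift experiment above can be reproduced from a terminal; with selfHeal enabled, the replica count snaps back almost immediately (context, namespace, and deployment names are placeholders):

```shell
# Scale the deployment down by hand, bypassing git
kubectl --context my-cluster -n demo scale deployment demo-app --replicas=0

# Watch Argo CD restore the declared state
kubectl --context my-cluster -n demo get deployment demo-app -w
```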
B
So
it's
not
going
to
allow
those
pods
to
get
destroyed.
It's
going
to
it's
going
to
be
recreating
them,
see,
they're
getting
recreated
immediately
as
soon
as
I
try
to
destroy
them,
so
any
configuration
drift,
that's
happening
is
automatically
getting
corrected
here
now,
if
I
were
to
go
and
change
this
in
git,
so
let's
go
over
and
look
at
this
simple
deployment.
This
demo
app
again.
B
Let's look at the overlay. I think I said that I wasn't referencing a specific version, so let's reference a specific version. This is coming from this argo cd autopilot example over here, and they actually have releases.
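Pinning a remote kustomize base to a release instead of a branch or HEAD is a one-line change, roughly like this (the repo path and version tag are illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # before: tracks a moving branch
  # - https://github.com/argoproj-labs/argocd-autopilot/manifests?ref=main
  # after: pinned to a specific release tag
  - https://github.com/argoproj-labs/argocd-autopilot/manifests?ref=v0.4.2
```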
A
B
Oh, this? Yeah, this is lens. I don't know if any of you are aware of lens. Lens is a cool tool; it's got a nice ui for monitoring and working with kubernetes, and basically you can do all the stuff that you do with kubectl in lens, but it just gives you a nice visual look at it. I've started using it more and more; I've kind of come around to it.
B
It's
so
fast
3e
90,
4a's
yeah,
so
it's
picked
up
on
that
sink
and
it's
looking
and
it's
saying
yeah
actually,
there's
no
difference
between
what
was
deployed
and
what
is
deployed.
So
I
was
in
luck.
That's
that's
what
you
want
to
see
and
if
I
look
you
know
if
I
was
looking
at
this,
this
summary
of
applications
you're
not
going
to
see
any
differences
here,
because
I
have
this
automated
sync
policy
set.
If
I
didn't
have.
B
It doesn't really matter. I'm going to leave the sync policy on manual, and the deletion finalizer: what this means is that this is wholly managed by argo cd, so if you deleted it in argo cd, it would delete the resource.
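The deletion finalizer being discussed is set in the Application's metadata; with it present, deleting the Application also cascade-deletes the resources it manages:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io  # cascade-delete managed resources
```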
B
So
I
actually
don't
mind
that
that's
fine,
but
it
is
potentially
destructive,
so
be
aware
of
it.
I'm
going
to
leave
schema
validation
in
place.
I
am
going
to
auto,
create
the
name,
split
namespace
and
I'm
not
going
to
do
a
replace.
This
is
important
if
your
manifests
are
getting
too
long
and
then
we're
going
to
specify
our
repo,
which
we're
going
to
go
back
to
this
autopilot
example.
C
B
And I'm going to pull from the specific release version that's currently out, just like we did a second ago.
C
B
I think I did all this right. I don't ever do it this way, and I'll tell you why in a second. I'm going to create a new namespace for this that I'm going to call masterclass. It's not going to be a recursive directory; I'm only adding the one directory. And this works with helm charts; in this case it's going to be kustomize, and it'll pick it up automatically, I don't need to do anything special. And I'll hit create.
C
B
It's funny, I don't ever do it this way. I always do it through...
A
B
It doesn't like it. Well, at any rate, we don't have to go through it right now. The point is, if I were to do it this way... and I'll tell you why I don't ever do it this way.
B
So this is why I don't ever use this ui to do this: because I actually always just do it in git. The way that I've structured this repo is with something called an application set.
B
So under my projects repo I have this elite-cluster yaml, and in here I have something called an application set. An application set is a way of generating applications, and in this case the generators are looking for any time I have a config.json under apps, any path, elite-cluster. If I have a config.json there, it will take that, automatically assume that it needs to be deployed, generate the application based off of it, and fill in the destination.
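An ApplicationSet with a git files generator matching those config.json files, as described, looks roughly like this (the repo URL and paths are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: elite-cluster
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example-org/infra-repo.git
        revision: HEAD
        files:
          - path: apps/**/elite-cluster/config.json  # matched files yield params
  template:
    metadata:
      name: '{{appName}}'
    spec:
      project: default
      source:
        repoURL: '{{srcRepoURL}}'
        targetRevision: '{{srcTargetRevision}}'
        path: '{{srcPath}}'
      destination:
        server: '{{destServer}}'
        namespace: '{{destNamespace}}'
```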
B
So when I look at this config.json, you can see the path it's taking is arbitrary, and this config.json just specifies the app name, the user-given name, the default destination namespace, and the destination server, which is going to be my local kubernetes cluster. Now, argo cd can connect to and manage many, many kubernetes clusters, so you could have 50 or 100 in here. I also have specified the source path it's coming from, the source repo that it's coming from, the source target revision, and any labels that I want to have.
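The config.json fields being listed map onto something like this (all values are placeholders):

```json
{
  "appName": "my-game-server",
  "userGivenName": "my-game-server",
  "destNamespace": "game",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/my-game-server/elite-cluster",
  "srcRepoURL": "https://github.com/example-org/infra-repo.git",
  "srcTargetRevision": "",
  "labels": null
}
```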
B
So this is all just specified automatically in here, and then I use a tool called argo cd autopilot. If you use this to install argo cd (let's go down here) on your repo, it will write the entire configuration of argo to git. So when I look at this instance of argo that I'm using, you can see argo cd is an application that's on here: I'm using argo cd to manage argo cd.
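Bootstrapping with argocd-autopilot is a couple of commands; it installs Argo CD into the current kube context and commits Argo CD's own configuration back to the repo (the token and repo URL are placeholders):

```shell
export GIT_TOKEN=<personal-access-token>
export GIT_REPO=https://github.com/example-org/infra-repo.git

# Installs Argo CD and writes its configuration
# (bootstrap/, projects/, apps/) back to the repo
argocd-autopilot repo bootstrap
```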
B
I see there's some discussion going on about the tools that I'm using. So this up here is rancher desktop, and it just keeps track of your... I mean, we could just do it in the command line too, but it's faster just to do it this way. So rancher desktop does this, docker desktop does this. I like rancher desktop a lot because I don't have to have docker installed, and it's better that way.
B
kubectx is great too for switching, and then, yeah, I do like lens for fiddling with clusters; it's just nice and visual, versus... I mean, we can do it in the command line too, it's fine. But like I was saying, argo cd is actually managing itself here. So if I wanted to update the version of argo cd that I'm running, let's go and look at our apps here.
B
Argo cd is not listed under my apps folder, because I have it sitting in a special folder under bootstrap, which you don't have to do, but you can. And if we look at argo cd, we have the definition of my application. This is a crd.
B
This is a kubernetes custom resource, and it specifies what an application is. So once I sync that to the cluster, it's done; this gives me my definition of what the application for argo cd is. And if we go back and look under argo cd at our kustomization, you can see that I actually have my base reference in here. And then, since I want to have the service exposed, I have a custom overlay for that under my kustomization. That's a patch
B
That's
getting
set
to
expose
it
and
then,
if
I
wanted
to
change
what
version
that
I'm
running,
I
would
just
change
the
reference
here
and
commit
it
and
be
done.
So
you
can
see
how
you
can
use
this
to
manage
the
definition
of
application
across
many
environments,
and
this
is
why
I
don't
actually
ever
use
the
ui
for
creating
the
applications,
because
I'm
always
just
creating
them
and
git
directly
and.
A
B
The way that argo cd autopilot works is this bootstraps argo cd: it basically installs argo cd onto the cluster (the binary is actually taking care of that operation), and it specifies its definition from this git repo. Then, from there, argo cd syncs everything, so the only thing that needs to be synced...
B
I don't actually even talk to the argo cd server. The only thing I do is update git. So if I add something that matches this generator, if I add that folder, that file, it will do that. And argo cd autopilot actually has a command-line tool that does this, and it doesn't even talk to argo cd; it only talks to git and updates it from there. Duck asks: do we need to create the root app in the argo cd ui or the argo cd cli first?
B
Otherwise, how can we register our application set repo with argo cd? Well, that's what argo cd autopilot is doing when you do a repo bootstrap. Let's say that you weren't using this tool. How would you do it? Okay, if I were just going to go install argo cd manually right now: there is a terraform module for it, there's a crossplane resource for it.
B
So
there
are
ways
of
doing
it
just
entirely
declaratively
but,
let's
just
say
we're
doing
argo,
cd,
quick
start
and
we're
just
following
this
this
path
here,
so
I
would
create
I'm
doing
manual
operations
right,
I'm
creating
a
namespace,
I'm
deploying
the
resource,
and
at
this
point
argo
cd
is
not
managing
itself.
All
I've
done
is
I've
installed
argo
cd.
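The manual quick-start steps being described are, per the Argo CD getting-started docs, roughly:

```shell
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```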
B
Well, now I create an application for argo cd, and I apply that using kubectl, and at that point argo cd is self-managing. If the resource already exists and you create an application for it in argo cd, argo cd will detect that it already exists and will say, okay, I don't need to sync it, I'm good to go. It will just take over the management of any resource that exists. So, in the example of my home cluster that I was showing you:
B
I actually have a few resources that aren't under management yet, that I had applied manually before I had argo cd installed, and I am moving them under management by just creating an application for them under my directory structure, which picks it up. You do need to apply the application set; it's a custom resource, so it needs to be applied.
B
So you have a few things that you bootstrap onto the cluster in order to do that, and this is actually where rancher becomes really nice as well. Let's look at rancher really quick... let's see if I can... we're getting into other things, and we're getting close on time, so I don't want to take too much time, but:
B
If I wanted to run this, I would go check out my repo, make sure that I was connected to the right cluster, and I could actually do a kustomize apply on my bootstrap resource, and this would trigger everything else to happen.
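That disaster-recovery path is a single kustomize apply against the bootstrap folder; the app-of-apps, the ApplicationSets, and the applications all cascade from it. A sketch with placeholder names:

```shell
git clone https://github.com/example-org/infra-repo.git && cd infra-repo
kubectl config use-context my-cluster   # make sure we are on the right cluster
kubectl apply -k bootstrap/             # re-installs Argo CD, which re-syncs the rest
```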
B
So everything else would just be bootstrapped, including all the applications, and that way I have disaster recovery. So anyway, we're a little bit into the weeds. I know I've taken you on a little bit of a tour of a couple of different things. Let's move into a couple of questions that I think people want to ask. Oh, well, before we do q&a, I just want to call this out really quick.
B
So
the
reason
that,
where
we're
coming
from,
I
mentioned
that
it's
very
top
of
the
hour
that
code
fresh
is
the
enterprise
version
of
argos.
So
there's
the
community
version,
which
is
the
argo
open
source
project,
and
then
there
is
code
fresh,
which
is
the
enterprise,
mostly
open
source
enterprise
version,
and
what
we
do
is
we
basically
allow
you
to
do
this
at
scale.
B
So if you have many instances of argo, say a thousand instances, and they're all behind the firewall or whatever, we provide this universal dashboard where you can search across all the applications deployed, no matter where, and we automatically give you DORA metrics. We automatically calculate how quickly your deployments are happening and how many times you're having failures of deployments. We integrate with ci. So this is worth checking out if you haven't; you just go to codefresh.io and you can get a demo.
B
You
can
sign
up
and
use
it
for
free,
and
we
just
announced
that
we
actually
are
are
launching
a
hosted
argo
cd
version.
That'll
be
launching
planned
is
next
month,
so
you
can
you'll
get
a
free,
hosted
instance
of
argo
cd
and
the
other
thing
I'll
call
out
before
we
get
into
questions
is
that
there
is
a
really
fantastic
get
ops
certification
at
code,
fresh
dot,
io
get
certified
I'll
throw
this
in
here.
This
is
currently
free,
and
this
actually
provides
you
with
a
free
environment
where
you
can
bootstrap
argo
cd.
B
It
shows
you
how
to
do.
It
shows
you
how
to
create
your
applications,
and
it
will
go
through
the
details
of
how
to
do
all
this
stuff
with
get
ops.
It
shows
you
how
to
do
secrets.
It
shows
you
how
to
do
progressive
delivery,
canary
and
blue
green
deployments,
all
of
those
things
using
git
ops
and
using
argo
cd.
B
So
I
definitely
recommend
you
check
this
out
if
you
haven't
and
with
that,
let's,
let's
move
in
to
some
of
the
questions,
because
I
think
that
people
are
asking
a
few
questions,
so
oh
yeah
sure
go
ahead.
Yeah.
A
I
think
you
got
to
ducks.
We
have
cadgety,
has
kind
of
a
question
and
a
follow-up,
and
then
we
have
one
in
the
question.
Let's
casualty
again
he's
asked
the
three
questions.
So
let
me
grab
these
two
that
you
that
you
see
and
then
I'll
ask
the
last
one.
B
Okay: "Not sure if it's relevant right now, but what is the best practice for managing multiple environments with argo cd: a single argo cd instance for multiple environments, or an argo cd instance per environment?" Okay, this is actually a really, really important question that I'm planning to do a talk around, and there is some implication about this in a blog post that I'll share on running argo cd securely.
B
So
the
reason
this
guy
so
I'll,
throw
this
blog
post
in
the
chat
which
is
is
highly
relevant
to
this.
But
why
would
you
we?
We
mentioned
that
right
in
this.
In
this
case,
I
have
argo.
Cd
is
managing
a
single
cluster,
but
I
could
have
it
managing
lots
of
clusters
right.
So
when
should
I
have
argo
cd
managing
lots
of
clusters,
or
should
I
have
argo
cd
per
cluster?
What
would
determine
why
I
would
do
one
versus
the
other?
Okay,
there
are
a
couple
of
things.
B
First
off
is
performance,
so
once
you
get
up
in
the
you
know,
you've
got
2
000
applications
running
on
argo
cd,
it's
gonna,
even
though
they're
all
deploying
potentially
to
different
clusters.
B
It's
gonna
start
to
have
some
performance
issues
and
now
argo
cd
does
have
a
an
version
that
does
allow
some
scalability,
but
even
if
you're,
using
the
version,
if
you're
hitting
around
2000
applications
and
and
this
this
is
super
caveated,
because
if
those,
if
those
applications
have
you
know
a
thousand
resources,
then
you're
not
gonna
get
to
2000
applications.
B
So
it's
it's
a
combination
of
basically
how
many
resources
under
management,
but
it'll
start
just
taking
a
really
long
time
to
sync
so
for
performance
reasons,
you're
going
to
want
to
start
to
split
up
argo
cd
instances.
So
that's
one
reason:
the
other
is
security.
So
argo
cd.
We
didn't
talk
about
this,
but
argo
cd
has
role-based
access
control,
so
I
can
create
teams
and
I
have
single
sign-on
too
so
I
can
have
teams.
I
can
have
it
syncing
to
my
single
sign-on
and
I
can
have
it.
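Argo CD's RBAC is configured through the argocd-rbac-cm ConfigMap; a minimal sketch with hypothetical team and group names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # team-a may manage applications in its own project only
    p, role:team-a, applications, *, team-a-project/*, allow
    # map an SSO group onto that role
    g, sso-group-team-a, role:team-a
  policy.default: role:readonly
```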
B
You know: you can deploy to these namespaces, you can deploy to those namespaces. And that works great. There's sort of a caveat to that, which is that the trust you have should be at least somewhat tempered, and that's why I shared that "scaling argo cd securely" post. Because if you think about the operation that's happening when you create an application, you're allowing somebody to execute a helm chart or a kustomization or arbitrary files that could potentially try to reference things that are outside of their scope, and there's some mitigation
B
That's
done,
but
you
should
be
aware
from
a
security
standpoint
that
there's
you
should
trust
in
a
monitor,
a
moderate
amount,
and
so
I
wouldn't
want
to
have
you
know:
5
000
people
on
one
instance
I'm
going
to
have
it.
You
know
be
a
couple
of
teams.
So
that's
another
reason
to
split
it
up,
and
this
is
where
that
control
plane
from
code
fresh
becomes
really
valuable,
because
this
control
plane
lets.
You
manage
many
instances,
and
I
mentioned
single
sign-on.
B
So,
for
example,
I
can
create
my
single
sign-on
once
with
code
fresh
and
then
every
instance
of
argo
that
I
deploy
is
automatically
associated
with
it
versus
having
to
set
up
single
sign-on.
Every
time
I
set
up
a
new
instance
of
argo
cd,
so
those
are
some
of
the
considerations
that
you
would
go
into
for
why
you
would
want
to
split
up
argo
cd
and
you'd
want
to
have
multiple
instances.
B
The
other
question
you
asked
is
plus,
if
we're
trying
not
to
do
any
imperative
changes
on
environments,
what
is
the
use
cases
for
features
at
argo,
cd,
like
a
maintenance
window,
so
yeah
argo
cd
has
a
feature
called
so.
First
of
all,
they
have
sync
windows,
so
you
can
basically
say
only
sync
during
these
time
periods,
and
you
also
have
the
idea
of
like
pausing
synchronization.
Why
would
I
want
to?
Why?
Would
I
ever
want
to
pause
synchronization
well
there
if
the
situation
arises,
where
things
have
broken
in
some
spectacular
way?
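Sync windows are declared on an AppProject; for example, only allowing syncs during a nightly window (the schedule and duration values are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  syncWindows:
    - kind: allow
      schedule: '0 22 * * *'   # cron: 22:00 daily
      duration: 4h
      applications:
        - '*'
```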
B
And
you
know,
ideally
you
just
all
you
do.
Is
you
just
revert
a
change
and
get
and
you
let
it
roll
back
to
the
previous
version,
but
for
the
scenarios
that
you
can't
imagine,
people
do
want
the
ability
to
say
look
something's
going
wrong
and
I
might
maybe
I
shouldn't
be.
Maybe
I
shouldn't
be
connecting
directly
to
production
using
coop,
ctl
or
using
lens
or
whatever,
but
I'm
going
to,
because
I'm
going
to
go
jump
in
jump
into
logs
and
stuff
and
whatever
and
actually
in
the
new
version
of
I
haven't.
B
I
haven't
turned
it
on,
but
you
can
actually
access
logs
and
in
the
I
haven't
turned
on
the
r
back
for
it,
but
you
can
actually
even
exec
into
containers.
If
you
have
the
permission
set,
I
don't
have
them
set
here,
but
but
that's
why
you
would
want
to
have
that
maintenance
window
of
some
kind
is
if
you
want
to
do
some
sort
of
manual
changes,
and
sometimes
this
goes
because,
like
maybe
your
secrets
provider
is
not
functioning
properly
and
you
need
to
go
and
work
on
that
or
you.
B
You don't want to be fighting with argo cd because you're making a change and argo is reverting it, or whatever. You shouldn't be doing that in production anyway, but people are used to having that as an option, so it is available as an option.
A
So
real
quick,
you
showed
us
some
things
that
were
bad
practice
and
you
were
like:
don't
do
it
or
don't
do
it
this
way?
This
is
really
bad,
but
you
also
showed
us.
You
have
your
application,
repository
and
kind
of
like
your
infrastructure
repository.
These
are
some
best
practices
around
argo
cd.
I
assume.
Where
would
someone
want
to
learn
more
about
some
of
these
best
practices.
B
So
argo
cd
best
practices
is
a
great
blog
post
that
hannah
put
together
where
she
talked
to
the
community
and
got
a
ton
of
things
and
separating
get
repositories
is
the
number
one.
So
that's
one
of
the
things
that
we
talked
about
creating
the
directory
structure
to
enable
multiple
application
system.
I
actually
showed
you
that
as
well
and
this
this
explains
it
more
in
depth
that
shows
the
promotion
strategy
application
sets.
This
actually
covers
a
lot
of
the
stuff
that
we've
talked
about.
B
That
goes
a
little
bit
more
in
depth,
so
it's
worth
checking
out
and
then
obviously
this
certification,
which
is
free,
as
I
mentioned,
is
really
good
for
teaching
this
stuff
and
level.
Two
is
almost
done.
We're
going
to
have
that
out
soon
so
level
one
is
out,
and
I
think
we
have
almost
eight
thousand
people
engaged
on
that.
So
it's
it's
the
world's
most
popular
get
up
certification,
hands
down,
it's
pretty
amazing,
so
yeah
there
we
go,
let's
see
and
then
yeah
any
other
questions
before
we
wrap
up.
B
I
mean
I,
I
think
that
what
we
did
here.
I
hope
this
is
interesting.
I
mean
we've
covered
a
lot
of
the
principles
and
then
I
kind
of
gave
you
a
tour
of
things
and
showed
you
how
some
of
these
different
things
work
in
practice
gave
you
some
general
best
practices
and
general
tips
and
then
gave
you
some
resources
where
you
can
go
in
to
learn
more
of
the
stuff.
This
is
kind
of
more
of
an
introduction
to
some
of
this
stuff.
B
Oh
yeah
check
out
luke's
talk
on
multi-tenancy
and
get
ups
with
with
rancher
and
kubernetes.
That's
great
also.
You
know,
while
we're
while
we're
I've
got
you
argo
con
is
just
around
the
corner.
It.
A
B
It's going to be september 19th. The schedule is going to be announced this week; we just finished the program last week, everybody's just accepting, and we're making sure that everybody can make it and such. But there's going to be workshops, and there's going to be a bunch of talks and keynotes. It's the first in-person argocon. We did a virtual one last year, and we had about 6,000 people at that conference. It was just absolutely insane. It was the first one
B
We
did
so
definitely
check
out
argo
con
I'll,
throw
the
link
in
the
chat
any
other
questions
before
we
come
to
a
close.
A
No,
no,
I
I
threw
two
polls
out
there.
First
one
was
who
uses
argo
cd
so
over
over
half
of
people
who
are
on
this
webinar
they're
using
it,
which
is
good
and
then
sorry
again.
But
I
asked
who
explained
the
ranch
or.
A
A
I don't think there's anything else. I'm actually following you now on github, because I'm curious to see how you get this V Rising server going; my friends play V Rising. So the idea that you're containerizing this... I'm going to follow this. I want to see if he gets it done and gets it running up there, because that would be kind of a cool example. I love any time you can put a real-world example down and you're like, yeah, I'm doing it for a game server.
B
Yeah, so I'm actually running it right now, and it's running successfully. I have some cleanup to do on the project to make it, you know, nice and consumable. And it's actually a windows server application, so my container is actually running wine to run the server, which...
A
B
Gross. It's gross. It is what it is, but, you know, hey.
C
B
I could, but then I'd have to add a windows node. I don't know, I like having all my nodes just be, eh, they're all the same. I don't care about them; I can throw one away, I can plug another one in. If I have to have a windows node, then I'm going to have to have, like, "this is my windows node," and I...
A
Yeah, I might tweet that later, just to see what happens, see if they have a response. Thank you. All right, well, dan, thank you very much. We do appreciate it. You're always welcome back to speak to the community, and I think I'll be seeing you here in a few months, in october. Are you going to be in detroit for kubecon?
B
Okay, wait: we've got gitopscon happening in detroit, and there are going to be some really amazing talks. Actually, that cfp just opened. So, since half the people here are already argo users, if you have a talk you want to get in for gitopscon, that cfp just opened up.
A
B
I threw my twitter in the chat. Feel free to hit me up directly or dm me if you're struggling, especially if you're working on scaling argo cd and you want advice or thoughts on that; I'm happy to help you with that. That's something that I do quite a bit, so feel free. You know, my dms are open, so reach out anytime. Awesome.