From YouTube: GitOps Overview and Demo - Session 2
Description
Join Caleb Cooper for a GitOps overview and demo.
A: All right, hello everyone. My name is Caleb Cooper. I am a customer success manager — let's see if I can even talk this evening — and I work mostly with public sector customers, but my presentation today isn't about them. It's about GitOps. Some of my customers are doing this and some aren't, but this is really where I came from: I've been in IT for a decade and a half now, and I worked almost the entire time on purely operational problems.
A: Once I learned about GitLab, I started looking into CI and incorporating that more into my process, so I went from purely writing things that would configure systems to deploying those systems as part of the process of writing them. So GitOps, as a neat and tidy definition, is an extension of infrastructure as code, where you take the benefits of a version control system — Git, as in the name — to build your infrastructure through automation that's based off of contributions to a centralized version control system that allows for things like merging.
A: I had some luck doing this in the past, in environments in which that was possible to some degree or another, but it wasn't until I came to GitLab that I was given the opportunity to really do this in a cloud environment. I have, of course, dabbled in this in AWS and in GCP and in Azure, but I'll admit that my favorite cloud platform is DigitalOcean. So what you're going to see in this demo is in DigitalOcean, and it uses DigitalOcean's API. All right — with that kind of introduction out of the way, I do want to say that I am welcoming questions in chat, in the doc, or you can just jump in and interrupt me. If you put questions in the doc, I will stop every once in a while to glance over at that screen to look at it, but it won't be quite as interactive, so expect that there may be some delay if you ask there. Okay, so yes: jump in, make comments, ask questions. I want this to be as interactive as we can make it.
A: So let me tell you what my goal was with this project. I had a conversation three months ago or so with a customer, in which they were talking about how they would really like to start using GitLab as part of their infrastructure-as-code environment. Of course, that's a conversation I'm happy to have with customers, and we were doing great until they said, "We really need to be able to do all of this through Ansible Tower." Okay — well, now we start to run into the limit of my knowledge. But you don't need Terraform: I can do all of this in CI with some shell scripts — shell scripts, say that five times fast — just using curl to access APIs as I need to. So I built a proof of concept of that in about eight hours — you know, pretty easy — and then I spent a lot more time.
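A CI job of the kind described — curl against the DigitalOcean API from a pipeline, no Terraform involved — might look like this. This is a hedged sketch: the job name, stage, and variable name are illustrative, not taken from the demo; the endpoint is DigitalOcean's public v2 droplets listing.

```yaml
# Illustrative sketch: a pipeline job that talks to the DigitalOcean API
# with nothing but curl. DO_API_TOKEN is assumed to be a masked CI/CD
# variable holding a DigitalOcean personal access token.
list-droplets:
  stage: preflight
  image: curlimages/curl:latest
  script:
    - >
      curl --fail --silent
      -H "Authorization: Bearer ${DO_API_TOKEN}"
      "https://api.digitalocean.com/v2/droplets"
```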
A
It
kind
of
ridiculously
a
large
amount
of
time
from
there
building
out
what
this
ended
up
being.
So
this
is
a
demonstration
of
a
process
by
which
I
can
define
an
infrastructure
in
a
Version,
Control
System
and
then
have
it
pushed
out
into
digitalocean
using
scripts
and
apis.
So
what
am
I
building
here?
This
is
a
project
that
builds
the
gitlab
2K
reference
architecture,
you're-
probably
familiar
with
that.
But
if
you
aren't,
we
have
several
different
reference:
architectures
for
different
numbers
of
users.
That's
what
the
k
is.
A
A
A
A
A
A
A: I do have two — really, kind of three — different deployment models here, and you'll see that some of these pipelines are very short. This one at the bottom — the one where my cursor is — is very short, and this one that's currently running is much longer. That's because, for changes that aren't affecting the configuration of the deployment — the GitLab deployment — I want it to test those changes as quickly as possible and then end. I keep these in variables inside of my GitLab CI file; that way I just make sure I have those things updated and can easily get to them. I want to keep these packages as close to the place where they're going to get deployed as possible. I could just get them out of GitLab every single time, but to reduce the load on GitLab's servers, and also to bring them closer to where they will be deployed, I put them in here.
A
Yes,
it
would
also
help
with
air
gapping.
Definitely
if
you
wanted
to
put
the
runner
that's
going
to
pull
down
these
two
packages
in
a
place
where
it
could
talk
to
the
internet,
but
the
rest
of
the
environment
could
not.
That
would
help
with
air
gapping
all
right.
So,
let's
talk
about
this
bigger
pipeline,
so
the
figure
pipeline
has
two
jobs
in
the
pre-flight
shell
check.
It
always
is
going
to
have
shelter.
It
also
has.
This
is
gitlab
on.
A
A
Well,
there's
a
lot
of
ways
you
can
do
that
by
defining
variables
and
those
kinds
of
things,
but
in
this
case
I
want
to
know
whether
I
should
run
a
job
simply
by
knowing
was
this
infrastructure
already
in
place
and
running,
because
it's
often
not
unlike
a
production,
gilab
instance
where
you'd
have
these
things
on
all
the
time
this
environment
I
destroy
every
night,
because
I
don't
want
to
pay
digitalocean
to
run
some
virtual
machines
for
me
overnight.
So
this
just
checks
real
quickly
to
see
if
it
can
talk
with
gitlab.
A
So
that's
all
it's
doing
right
here.
It's
curling
see
if
you
can
access
the
gitlab
domain
that
I
have
for
this
test
environment.
If
you
can't
do
that
it
dies.
This
job
is
set
to
allow
for
failure.
So
you
can
see
that
I'm
allowing
failure
here
by
allowing
failure.
What
it
means
is
that
the
rest
of
the
pipeline
click
continue,
which
is
terrific,
because
then
I
can
go
ahead
and
deploy
this
thing
that
wasn't
running,
but
if
it
was
running,
then
I
have
some
things
that
happen
after
that
curl.
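The pattern described — a reachability probe whose failure must not stop the pipeline — can be sketched like this; the job name and the domain variable are illustrative:

```yaml
# Illustrative sketch: a pre-flight job that probes the test instance and is
# allowed to fail, so the rest of the pipeline continues either way.
is-gitlab-on:
  stage: preflight
  allow_failure: true
  script:
    # --fail makes curl exit non-zero when the instance is unreachable
    - curl --fail --silent "https://${GITLAB_TEST_DOMAIN}/users/sign_in"
```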
A
What
I
want
to
do
here
is
create
a
dynamic
child
pipeline.
This
is
how
I
got
around
that
problem
of
how
do
I
do
something
that
I
want
to
do
only
if
gitlab
is
already
running
so
I
create
these
child,
this
child
pipeline.
That
does
some
work
for
me,
so
it
creates
this
tear
down
ciml
file
out
of
some
templates,
so
these
templates
are
up
here
and
I.
Have
them
here?
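In GitLab CI, a dynamic child pipeline is a YAML file generated as an artifact by one job and triggered by another. A minimal hedged sketch — the generator script and file names are illustrative:

```yaml
# Illustrative sketch: one job writes a child pipeline YAML from templates;
# a second job triggers it from the artifact.
generate-teardown:
  stage: preflight
  script:
    - ./scripts/build-teardown.sh > teardown.yml   # hypothetical generator
  artifacts:
    paths:
      - teardown.yml

teardown:
  stage: teardown
  trigger:
    include:
      - artifact: teardown.yml
        job: generate-teardown
```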
A
You'll
see
that
we
have
template,
which
has
two
stages
back
up
and
shut
down
go
out,
so
the
back
of
one
I
think
is
pretty
obvious.
What
I'm
trying
to
do
here
is
before
I
go
ahead
and
destroy
everything
if
the
lab
is
already
running
and
I
have
this
opportunity
to
create
a
backup
for
it,
I'm
going
to
have
to
do
that
so
I
do
that,
and
I
also
run
my
backup
Secrets
script,
which
I
have
installed
in
a
previous
deployment.
A
That
is,
for
the
things
like
certificates.
A: And the gitlab-secrets.json file — which, if a customer destroys that, they're going to lose access to most of their stuff that's encrypted in the database, effectively making it unavailable to them. In my case, I don't want that to happen, so I back those things up through that script. Then I have this shutdown-gitlab job, and you'll see that shutdown-gitlab has this dot in front of it. What that makes it is kind of like a hidden job, and those hidden jobs will not run — but they can be extended, and through that extension, you can run them.
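Hidden jobs (names prefixed with a dot) and `extends` work like this — a hedged sketch; the job and tag names are illustrative:

```yaml
# Illustrative sketch: a hidden job (dot prefix) never runs by itself; it is
# extended once per VM, with a tag that pins the job to that machine's runner.
.shutdown-gitlab:
  stage: shutdown
  script:
    - sudo gitlab-ctl stop

shutdown-gitlab-rails-1:
  extends: .shutdown-gitlab
  tags:
    - rails-1   # runner tag for that specific VM
```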
A
So
what
I
have
here
is
partial.
This
is
a
very
small
job,
but
all
it
does
is
say
to
extend
this
hidden
job
and
it
adds
this
tag.
So
what
I'm
doing
there
is
very
important,
because
I
want
to
shut
down
gitlab
and
to
ensure
that
say:
giddly
isn't
running
anymore,
postgres
isn't
running
anymore.
These
things
are
going
to
be
writing
to
the
disk.
I.
Don't
want
them
to
be
running
to
the
disc,
while
I,
destroy
the
virtual
machine.
I
could
lead
to
data
corruption.
A
So
what
I
do
here
is
I
replace,
do
name
digital
ocean
name
with
the
name
of
the
virtual
machine
which
is
going
to
run
then
I
tack,
that
on
to
the
end
of
template
over
and
over
again
for
as
many
VMS
as
our
running
git
lab
in
this
environment.
So
what
I'll
get
is
four
five.
Six
of
these
tacked
on
to
the
problem
is
this
and
what
that
does
is
each
one
of
them
extends
this
so
that
on
that
virtual
machine,
it
will
run
that
job.
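The per-VM templating step could be sketched as a small shell loop. This is a hedged sketch: the `DO_NAME` placeholder follows the demo's description, while the function and file names are illustrative.

```shell
#!/bin/sh
# Illustrative sketch: emit a base template, then append one job stanza per
# VM, substituting the DO_NAME placeholder with each VM's name.
build_teardown() {
  template="$1"   # base template containing the hidden jobs
  stanza="$2"     # per-VM job stanza containing the DO_NAME placeholder
  shift 2
  cat "$template"
  for vm in "$@"; do
    sed "s/DO_NAME/${vm}/g" "$stanza"
  done
}
```

Called as `build_teardown base.yml stanza.yml rails-1 gitaly-1 > teardown.yml`, it emits one extending job per VM.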
A: So what does that look like? I'm going to go back here and find one in which that ran — that would be this one — and you'll see that teardown ran. This child job runs in the next stage, because it couldn't run in the pre-flight stage: it didn't exist yet. I hadn't built that CI YAML file yet, so it didn't exist, so it needs to run in the next stage. So I do that in my next stage, rather.
A: I do that five times, and that's because another problem that I ran into is that I only want to define jobs that are going to run on a virtual machine if that virtual machine exists. So all of these things that are going to run on part of the infrastructure that makes up this GitLab instance get run as dynamically generated child-pipeline jobs. Everything else you're seeing here runs on a virtual machine that exists all of the time.
A: The actual systems that I'm building to run GitLab, however, could be very powerful. In my case, they don't have to be: I made them as small as I possibly could, because — again — saving money. But if I was doing this for a production environment for, say, 2,000 people, I would want to follow a reference architecture and scale them to that degree.
A: So there's another thing that I'm doing in the teardown stage here that is different from that pipeline where I hadn't made any changes to the configuration, and that is delete-vars, or delete variables. What this does is delete a whole bunch of variables that I have built up in a previous pipeline.
A: The reason I am making these variables exist only for a single deployment is because I want to ensure that, if I had any leak of any of these credentials or any of these secrets, I could just do a redeploy and they would all be brand new. You'll notice that some of these are passwords — the root password here, the Redis password here, the Gitaly auth token here, this certificate authority key here. These are all things that are pretty sensitive. So if something happens and there's a leak: rerun this, and they're all gone and replaced with new ones, without disrupting the functionality of the service.
A: The two pieces that need to stay around are the underlying storage for Gitaly and the underlying storage for Postgres, so I create two volumes within DigitalOcean using DigitalOcean's volumes API — and I only do that if they do not exist. Right? I run this over and over and over again; there's no reason to create a new volume every time. That would actually defeat the entire purpose. So what it does is it checks to see if that exists. Let me just show you the actual script for that.
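The idempotent check-then-create could be sketched like this. This is a hedged sketch: the function names and the 10 GB/region values are illustrative, `DO_API_TOKEN` is an assumed CI/CD variable, and the endpoints are DigitalOcean's public v2 volumes API.

```shell
#!/bin/sh
# Illustrative sketch: create a DigitalOcean volume only when no volume with
# that name already exists.
count_volumes() {
  # the v2 volumes endpoint filters by name; count "name" keys in the reply
  curl --silent -H "Authorization: Bearer ${DO_API_TOKEN}" \
    "https://api.digitalocean.com/v2/volumes?name=${1}" |
    grep -o '"name":' | wc -l
}
ensure_volume() {
  name="$1"
  if [ "$(count_volumes "$name")" -eq 0 ]; then
    curl --silent -X POST \
      -H "Authorization: Bearer ${DO_API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"name\":\"${name}\",\"size_gigabytes\":10,\"region\":\"nyc1\"}" \
      "https://api.digitalocean.com/v2/volumes"
  fi
}
```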
A: So we have — these are a little bit out of order from how they show up there, but essentially it's just going to run this script. I do want to call out real quickly how I've made it so that it only runs these jobs if I've made changes to the configuration of GitLab, rather than the configuration of the pipeline. The way I'm doing that is by saying: run if there are changes to /vms — which I haven't talked about yet, but that's where I define the VMs; I'll get to that in a minute — and to configs/gitlab.
A: This is where I'm storing the gitlab.rb files and various other files for the GitLab application. If these change, then I need to redeploy; if these don't change, then I just need to check, because I may want to make a whole bunch of edits to my scripts and get ShellCheck to test them without completely deploying.
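That path-based gating maps onto GitLab CI's `rules:changes` — a hedged sketch; the directory names follow the demo's description and the script path is illustrative:

```yaml
# Illustrative sketch: run the deploy job only when files that affect the
# GitLab deployment itself change, not when only pipeline scripts change.
deploy:
  stage: deploy
  script:
    - ./scripts/deploy.sh   # hypothetical entry point
  rules:
    - changes:
        - vms/**/*
        - configs/gitlab/**/*
```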
A: I'll show you exactly what that looks like a little bit later. Okay, so I was telling you about the script for creating storage. I mentioned before that I'm heavily dependent on the DigitalOcean API; here is a curl to it. What it does is it asks, for a given storage name, how many of these exist, and if it is zero, then it goes ahead and calls the DigitalOcean API...
A: ...and says: create this thing for me. But how do we know the storage name? Well, we get the storage name from this configuration, under storage. So let's just look at Gitaly real quick: the storage name is here, built from the DigitalOcean volume name — DO_VOLUME_NAME, which I'll get to in a moment — and then the CI_COMMIT_BRANCH variable. Taking a quick tangent on what that is: it's a built-in variable in GitLab CI. This is one of the things...
A: ...that's really nice about doing these kinds of actions within an automation system that's built in with your version control system: it has a layer of information about your version control system — in this case, what branch we're on. So if I want to do this for a test branch and a staging branch and a prod branch and an awesome-feature branch, I can do that, and I don't have to go in here and tweak this every single time.
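The branch-scoped naming can be sketched as follows. This is a hedged sketch: `CI_COMMIT_BRANCH` is GitLab CI's real built-in branch variable, while the function name, the base-name argument, and the sanitization rule are illustrative.

```shell
#!/bin/sh
# Illustrative sketch: derive a per-branch volume name so each branch gets
# its own persistent storage in DigitalOcean.
volume_name() {
  base="$1"; branch="$2"
  # volume names are assumed lowercase alphanumerics and dashes, so sanitize
  clean=$(printf '%s' "$branch" | tr 'A-Z/_' 'a-z--')
  printf '%s-%s\n' "$base" "$clean"
}
```

In the pipeline this would be called as `volume_name "$DO_VOLUME_NAME" "$CI_COMMIT_BRANCH"`.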
A: I have this script, but then I have the first argument to that script, which is the VM. What it does is it loops over this list of VMs and then runs this script. So for the first one — let's say it's rails; it's not going to be, because they go in alphabetical order, but just for the point of this explanation — what it's going to do is source that in, look in there, and look for DO_VOLUME.
A: Well, Gitaly does exist, so then it knows it can source in the Gitaly volume information and go ahead with this process. That similar kind of workflow is how I'm doing deployments for almost everything in this infrastructure: I do that to deploy load balancers, I do that to deploy virtual machines, and I even do that to deploy these runners.
A
They
end
up
being
Runners
of
this
gitlab
instance,
but
it
actually
deploys
another
Runner
that
registers
with
this
new
gitlab
instance
in
kubernetes,
but
I'm
not
going
to
get
too
far
into
that,
because
we
don't
have
all
day
for
me
to
dive
into
that.
So.
But
let
me
show
you
real
quick
what
it
does
when
it's
creating
a
virtual
machine
so
again
just
heavily
making
use
of
these
apis.
But
one
of
the
things
you
can
do
in
digital
oceans,
API
is
pass
in
what
they
call
user
data.
A
Other
systems
like
AWS
may
call
this
cloud
in
it
and
essentially
what
it
does
is
it
allows
you
to
give
it
some
information
about
what
it
should
do
when
it
first
starts
up,
so
what
I
have
it
doing
is
going
and
fetching
that
gitlab
Runner
package
that
I've
stored
in
the
registry
and
installing
it
and
then-
and
this
is
a
place
where
some
people
might
cringe
I'm,
giving
the
gilab
runner
user
full
pseudo
powers
without
a
password.
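A user-data payload of the kind described could look like this cloud-config sketch. This is hedged: the package URL and file paths are illustrative, and the passwordless-sudo line is exactly the trade-off the speaker flags.

```yaml
#cloud-config
# Illustrative sketch of DigitalOcean user data: install a pre-fetched
# GitLab Runner package and grant the runner user passwordless sudo.
write_files:
  - path: /etc/sudoers.d/gitlab-runner
    content: |
      gitlab-runner ALL=(ALL) NOPASSWD:ALL
runcmd:
  - curl -sLo /tmp/gitlab-runner.deb "https://example.test/gitlab-runner.deb"
  - dpkg -i /tmp/gitlab-runner.deb
```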
A
Now
the
reason
I'm
doing
that
is
because
I
need
this
user
to
be
able
to
execute
things
on
my
behalf,
such
as
installing
gitlab.
Without
me,
logging
into
it
typing
in
a
password,
so
the
idea
there
is
I'm,
essentially
just
turning
the
runner
into
a
service
user
with
kind
of
system
level,
admin
level,
privileges.
A: This will come up in another place where I'm going to say the same thing: you really, really have to trust the GitLab administrator for this instance. Because if I was running this somewhere where I didn't know who was running the GitLab instance — you may notice, in the address bar at the top, that that is not gitlab.com. That is my own GitLab instance.
A: Okay, I do want to point out some things in here that I haven't talked about yet. I talked about how some of these secrets I'm destroying every time, because I want to make sure that they get refreshed. I can't do that with everything, unfortunately. Two of the things that are really kind of fundamental, that I can't do that with, are my GitLab personal access token — actually, it's a project access token in this case — and my DigitalOcean access token.
A: We would call these inherited credentials. My GitLab instance here requires two-factor authentication — I have a YubiKey that I use to log into it — but my GitLab project access token is single-factor. It's inherited, because a person had to make it. The same is true with DigitalOcean. So these things are sensitive and single-factor. Okay — how do I work around that? Where do I store those things?
A
So
the
nice
thing
about
cicd
variables
are
they're
encrypted
at
rest
within
our
database,
and
you
can
do
things
like
mask
them
and
protect
them
now,
in
this
case,
I'm
not
protecting
these
variables,
and
you
have
to
be
careful
with
protecting
variables,
because
you
want
to
make
sure
you're
only
protecting
ones
that
you
won't
need
when
you're
deploying
to
an
unprotected
branch,
protective
variables
can
only
be
used
on
protected
branches
and
also
I.
Am
a
horrible
developer
and
I've
been
developing
this
entire
thing.
A: I don't know how many of you so far have looked at that address bar and tried to go to this, but it is publicly available. So I don't want people to be able to read through my job logs and find out my secret credentials. Again — like I mentioned about somebody being able to take advantage of runners that have unlimited access onto these virtual machines — you also need to trust the admins of this GitLab instance not to go in here and steal my DigitalOcean token or the password for my DNS server.
A: All right — so as part of the pre-deploy, I'm making some things, such as block storage and a certificate authority. The certificate authority allows me to create encrypted connections between my component pieces — such as between Rails and Gitaly — without having to purchase certificates from a, you know, trusted certificate authority like DigiCert. Since I make this and I own it, I can put this certificate on all of these systems, so they trust it automatically.
A
So
I
make
that
make
a
loot,
balancer
and
I
make
some
secrets
that
are
going
to
go
into
the
galav
RV
file
like
I
mentioned
before
I
also
removed
some
old
Runners
ones
that
were
produced
in
previous
jobs
and
delete
the
existing
virtual
machines,
the
ones
from
the
previous
jobs,
if
they
still
previous,
runs
if
they
exist
so
I
backed
them
up
right,
I
turned
off
gitlab,
so
it
wasn't
writing
anything
to
this
now.
I
delete
them
once
all
that's
done
now.
The
work
that
started
to
build
things
so
I
run
this
deploy.
A: What does deploy do? It creates the VMs — that's actually a very small part — but the other thing it does is, just like I had for teardown, create this health-check pipeline. This is another dynamic child pipeline that gets created, with the intent that it will run a health check on every single VM spun up. Admittedly, that's a bad name, because it has long since stopped being just a health check — it does a bunch of other stuff, which we will look at — but the job name still exists.
A
And
so
what
happens
here
is
I,
build
it
and
the
same
way
I
described
before
and
it
becomes
an
artifact
and
then
I
have
health
check
job
here,
which
triggers
the
child
pipeline.
Just
like
I've
shown
you
before.
Let's
look
at
what
health
check
does
so,
as
I
said
before,
I'll
check
no
longer
just
as
a
health
check.
It
also
does
some
stuff
that
you
can
only
do
once.
A
The
virtual
machine
is
up
and
run,
one
of
which
is
I
want
to
know
the
IP
address
of
the
virtual
machine,
so
that
I
can
store
that
in
CI
variables.
One
way
to
move
variables
from
one
job
to
another
job
is
to
use
dot
EnV.
This
is
an
artifact,
that's
special
meaning
within
CI
that
will
pass
along
these
environment
variables
from
one
job
to
another.
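The dotenv mechanism mentioned looks like this in GitLab CI — a hedged sketch; the job names, stages, and the hard-coded IP are illustrative:

```yaml
# Illustrative sketch: the first job writes KEY=value lines to a dotenv
# report artifact; later jobs receive those keys as environment variables.
discover-ip:
  stage: healthcheck
  script:
    - echo "VM_IP=203.0.113.10" > build.env   # normally fetched via the API
  artifacts:
    reports:
      dotenv: build.env

use-ip:
  stage: configure
  script:
    - echo "Configuring against ${VM_IP}"
```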
A
I,
don't
use
them
because
I'm
fully
leaned
into
the
model
of
building
variables
and
sticking
them
into
the
environment,
variables,
Within,
cicd
and
so
I.
Just
do
that
for
everything.
I,
don't
worry
about
the
other
one
I.
Also,
admittedly,
don't
know
how
well
that
works
for
dynamic,
John
pipelines
they
might
work.
Great
I
haven't
tried
it,
but
I
I'm
doing
it
entirely.
This
way,
I
do
have
step
in
here,
which
does
some
things
like
delete.
A
These
are
two
other
Dynamic
child
pipelines,
one
of
which
attaches
disk.
If
that's
useful
and
so
there's
two
cases
in
which
that's
useful,
that
is
going
to
be
Italy
and
postgresql.
A
The
problem
is
that
I
can
attach
them,
but
I
really
don't
trust
them.
They've
been
flaky
in
the
past.
If
I
try
to
attach
them
and
then
just
go
so
what
I
do
in
there
is
I
attach
them
and
then
I
tell
it
to
reboot
the
computer
in
one
minute,
okay.
So
this
is
where
one
of
my
first
problems
ran
and
I
repented
to
because
I
do
that
and
I
reboot
it
and
then
we
move
ahead
and
another
job
picks
up
and
gets
picked
up
on
that
Runner
in
less
than
a
minute.
A: Well, that's a perfectly valid assumption in most cases, but in my case it's just because I turned it off. I wanted to be able to say "finish the job, and then turn off later" — that's what that does. But if another job picks up in less than that minute, then the machine is going to shut off in the middle of that job, which causes problems, as you can imagine. So there is a pause here.
A
The
other
thing
that
happens
in
pre-install
is
it
sets
up
the
gilab
RB
file,
so
anyone
who's
installed.
Gitlab
is
probably
aware
that,
as
part
of
the
lab
installation
process,
it
adds
a
well
it's
for
Omnibus
to
be
clear.
It
adds
a
file
in
ETC,
gitlab
called
the
gitlab.rb
file.
This
is
a
ruby
configuration
file
Chef
which
we
use
for
Omnibus
is
Ruby,
and
so
it
reads
in
that
and
that's
the
instructions
to
use
it,
and
so
those
things
that
it's
putting
in
in
RB
deploy
exists
down
here
in
configs
config
lab.
A
We
have
a
default,
so
one
of
the
things
that
I
was
pleasantly
surprised
with
as
I
was
developing
this
because
even
though
I've
been
working
with
Gila
for
quite
a
while
now
my
understanding
of
exactly
how
a
reconfigure
worked
in
Omnibus,
wasn't
you
know
perfect
I'm,
not
saying
it's
perfect
now,
but
it's
better
one
of
the
things
is,
you
could
have
conflicting
configurations.
So
in
this
case
this
configuration
turns
off
absolutely
everything.
So
if
you
ran
this,
nothing
would
run,
but
then
I
run
I
append
to
that
another
configuration
that
turns
something
back
on
again.
A
In
this
case
it
turns
giddly
back
on,
but
in
this
case
it's
turning
nginx
and
puma
and
psychic
and
Workforce
back
on
and
the
way
that
works
is
that
Omnibus
will
go
through
this
and
it
will
turn
these
things
off
and
then
later
it
will
find
that
I
need
to
turn
them
on
and
it
will
turn
them
on.
So
anything,
that's
further
down
in
your
configuration
will
get.
A: Right — the later setting wins, yes. So what I've done here is: I have a default configuration that turns everything off, and then I append these other configurations that turn things back on again. The reason I'm doing that is because, for a Gitaly node, I don't want everything to be on. I don't want Rails to be on, I don't want NGINX to be on — I only want Gitaly to be on. And for Postgres, the same thing.
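In gitlab.rb terms, the "everything off, then selectively back on" pattern looks roughly like this — a hedged sketch using Omnibus-style `enable` keys; which services a given node re-enables depends on its role:

```ruby
# Illustrative sketch of an appended gitlab.rb fragment: the defaults above
# disable every service, and these later lines win, re-enabling only what
# this node runs (here, a Rails node).
nginx['enable'] = true
puma['enable'] = true
sidekiq['enable'] = true
gitlab_workhorse['enable'] = true
gitaly['enable'] = false        # Gitaly lives on its own node
postgresql['enable'] = false    # Postgres lives on its own node
```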
A
In
the
ESP
from
last
quarter
right
right
last
last
round,
I
mean
so
I'm.
Not
sure.
Did
you
get
that
email?
They
bought
the
shares
for
there
we
go
okay,
so
yeah,
okay,
so
I
go
through
and
I
turn
off.
Everything
I
turn
certain
things
back
on
again
and
then
I
placed
this
configuration
into
slash,
Etc
gitlab
as
gitlab.rp,
and
then
I
go
through
and
I
replace
these
placeholders
with
things
that
are
in
my
CI
CD
variables.
A
That
way,
I
don't
have
to
keep
those
secrets
in
Version
Control.
This
is
one
of
the
problems
that
people
run
into
when
they're
trying
to
do
infrastructure
as
code
when
they're
trying
to
store
their
infrastructure
in
Version
Control
is
where
do
I
keep
my
secrets,
and
so
what
I'm
doing
here
is
I'm
just
keeping
those
Secrets
inside
of
the
kill,
app
cicd
environment
variables
and
then
during
that
run,
I
Implement
them.
So
where
do
I
do
that?
Do
that
kind
of
over
and
over
again,
depending
on
what
I'm
doing?
A
But,
for
instance,
here
is
a
loop
that
Loops
over
every
one
of
those
variables
and
then
looks
for
them
using
said,
which
is
a
great
editor
for
files.
If
you
don't
want
to
actually
go
in
and
edit
them
manually,
so
it
looks
for
that
placeholder
and
then
it
replaces
it
with
what
is
stored
in
the
Version.
Control
I
mean
sorry
in
the
cicd
variable,
not
in
Version
Control.
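That substitution loop could be sketched as follows. This is hedged: the `PLACEHOLDER_` naming convention, the function name, and the variable list are illustrative, not the demo's actual names.

```shell
#!/bin/sh
# Illustrative sketch: for each named CI/CD variable, replace the matching
# PLACEHOLDER_<NAME> token in gitlab.rb with the variable's current value.
render_config() {
  file="$1"; shift
  for name in "$@"; do
    # indirect expansion: read the value of the variable named in $name
    value=$(eval "printf '%s' \"\$${name}\"")
    sed -i "s|PLACEHOLDER_${name}|${value}|g" "$file"
  done
}
```

`render_config /etc/gitlab/gitlab.rb ROOT_PASSWORD REDIS_PASSWORD` would fill in both placeholders from the environment.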
A
It
also
replace
some
other
things
more
statically
and
then
once
it's
done,
it
is
able
to
save
it
to
the
right
place
so
that
when
I
do
my
install
later,
it
already
exists.
If
the
gitlab
installer
finds
it
or
exists,
it's
not
going
to
overwrite
it.
This
is
how
you
will
do
your
pre-configuration
of
a
gitlab
instance.
If
you
want
to
be
configured
in
advance
of
installing
it,
and
so
one
of
the
questions
is,
why
would
you
want
to
do
that?
A
Why
don't
you
configure
it
after
you've
installed
it
and
the
reason
why
you
don't
want
to
do?
That
is
because
it's
going
to
start
up
some
things,
it's
going
to
create
some
place
for
getaway,
it's
going
to
create
a
postgres
database.
It's
going
to
do
all
of
that
for
you
and
then
you
end
up
with
that
stuff,
just
left
around
on
your
system
after
you
turn
them
off.
So
by
setting
this
in
advance,
you
get
to
avoid
all
of
that.
A
Finally,
I
mentioned
that
the
secrets
that
I
have
to
store
previously
those
things
are
important,
because
if
those
don't
exist,
when
I
redeploy,
then
all
of
the
things
that
I've
set
up
in
there
like
users,
won't
have
access
to
their
credentials
their
access
tokens.
There
will
be
other
configurations
that
are
broken
if
your
secrets
is
not
there.
So
what
I
do
is
if
that
Secrets
exists
in
the
cicd
variables,
then
I
will
place
it
into
where
it
needs
to
be.
So
that's
what
this
test
is
at.
A
Variables
like
a
string,
it
can
also
be
an
entire
file,
so
in
this
case
I'm
storing
the
entire
Secrets
file
as
a
variable.
So
what
my
test
is
doing
here
is
it
checking?
Does
this
file
exist
and
contain
data?
If
it
does
it's
going
to
kind
of
trust?
That
is
right
and
then
it's
going
to
stick
it
in
place.
There's
course
some
risk
that
that
has
broken,
but
it
didn't
exist
on
this
system
in
the
first
place.
So
you
know
it
being
broken,
is
not
actually
as
bad
as
it
not
existing.
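A minimal sketch of that restore step. This is hedged: GitLab Omnibus does keep its secrets at /etc/gitlab/gitlab-secrets.json, but the function name and the idea of passing the file-type variable's path as an argument are illustrative.

```shell
#!/bin/sh
# Illustrative sketch: if the secrets file delivered as a file-type CI/CD
# variable exists and is non-empty, put it where Omnibus expects it.
restore_secrets() {
  src="$1"   # path of the file-type CI/CD variable
  dst="$2"   # normally /etc/gitlab/gitlab-secrets.json
  if [ -s "$src" ]; then
    install -m 600 "$src" "$dst"
    return 0
  fi
  return 1
}
```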
A
I
could
just
go
back
into
the
web
interface
and
press
the
button
and
turn
it
on,
but
it
would
cause
the
whole
pipeline
to
fail.
So
what
I
have
here
is
a
job
that
just
tries
to
turn
all
of
these
virtual
machines
on
it's
already
on
great.
It
doesn't
need
to
do
anything,
but
if
they're
not
it
will
turn
them
on
and
then
the
update,
gitlab
final
tile
pipeline
can
run
update.
Gitlab
again,
I
would
say
that
one
of
the
things
I'm
not
great
at
is
aiming
things
is
actually
going
to
install
gitlab.
A
Now,
if
gitlab
have
already
been
installed
in
this
branch,
and
it
already
had
a
giddly
disk
and
a
postgres
disk,
then
in
some
ways
it's
updating
it,
but
it
really
is
installing
it
now.
It
has
to
do
some
logic
to
make
sure
that
things
are
reasonable,
like
for
instance,
if
we
need
there
to
be
a
volume,
we
need
to
check
if
that
volume
exists
and
that
volume
isn't
there,
we
need
exit.
So
I
do
that
foreign
and
then
I
do
a
reconfigure.
A
This
shouldn't
do
anything
right,
but
what
it's
doing
is
it's
saying
if
the
lab
is
already
installed,
then
we'll
do
a
reconfigure
that
is
mostly
leftover
of
a
case
in
which
I'm
may
or
may
not
get
around
to
making
it
so
that
I
can
do
this
without
completely
destroying
the
virtual
machines
we'll
see.
But
one
of
the
things
it
does
allow
me
to
do
is
If.
This
job
fails
for
some
reason
outside
of
the
job
like
it's
unable
to
connect
with
something
or
whatever
and
I
can
resolve
that.
A: First it does some other checks for SSL stuff, and then, finally, it installs, and then it handles some stuff with secrets — the backup-secrets I showed you — so that happens there. Once that finishes, it sleeps for 60 seconds to let everything start up a bit, and then it runs gitlab-ctl tail. I'm very familiar with gitlab-ctl tail; it is one of our built-in niceties around managing GitLab that allows you to look at GitLab's logs. It runs that for 30 seconds, and then it sends the output to a file named after the hostname.
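That log-capture step might look like this in a job — a hedged sketch; `gitlab-ctl` is the real Omnibus management tool, while the job layout, the use of `timeout`, and the artifact glob are illustrative:

```yaml
# Illustrative sketch: post-install step that waits for services to start,
# then captures 30 seconds of logs into an artifact named after the host.
install-gitlab:
  stage: install
  script:
    - sudo gitlab-ctl reconfigure
    - sleep 60
    # timeout exits non-zero when it kills tail, so tolerate that
    - sudo timeout 30 gitlab-ctl tail > "$(hostname).log" || true
  artifacts:
    paths:
      - "*.log"
```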
A: Well, I was telling DNS that the record was good for 24 hours, and so if I was deploying this ten times in a day, you can imagine that Let's Encrypt was unable to find the new deployments. That's why I needed to have a persistent load balancer. One way to handle this would have been to have some kind of elastic IP address that moves around between machines, but I chose a load balancer, because I would need a load balancer to balance between Rails nodes anyway.
A
So
that
was
the
first
place
in
which
DNS
was
a
problem.
Another
place
that
let's
encrypt
was
a
problem
was
that
I
would
deploy
over
and
over
again
and
then
eventually,
let's
encrypt
would
rate
live
in
me.
They
would
say:
you've
asked
for
too
many
certificates
recently.
A
So
the
way
I
got
around
that
was
as
part
of
my
backing
up,
Secrets
I,
also
back
up
the
let's
encrypt
certificate
and
key
that
I
create.
So
if
things
are
still
valid,
it's
not
going
to
try
to
get
another.
One
I
will
say
that
that's
been
one
of
the
things.
That's
been
the
most
difficult
for
me,
because
I
keep
somehow
messing
up
that
in
one
way
or
another,
and
then
it
will
simply
break
and
now
I
can't
request
another.
Let's
encrypt
certificate,
because
I
don't
realize
the
problem
until
I've
been
rate,
Limited.
B: Hey Caleb — first of all, fantastic demo. I thought this was super valuable; I love what you've done, just automating kind of the entire deployment of the 2K reference architecture. And there were some handy tips and tricks that you kind of evolved during the pipelining process — like the fact that you tailed the logs for each stage, each job, so that you could reference them as an artifact without having to SSH into the VM.
B: I think that's super cool. I like how you showed going in and updating variables. But the main question I have is: are you capturing the state of your deployment within Git and version control?
B: Because what I'm seeing is a lot of shell scripts, and not necessarily, you know, the actual state being captured. When you think of infrastructure as code and GitOps, typically I'm talking about some sort of framework like Ansible or Terraform, or even, you know, Kubernetes manifests, that can be continuously checked or monitored — and then you kind of reconcile any differences between what lives out in production and what's in your configuration, right? Be it for, you know, cloud native, or just the more traditional Ansible or Terraform. So I was just curious what your take was on how you're version-controlling the state of the deployment, through your repo and through the pipelines.
A: Absolutely. So the biggest thing for me is the difference between a declarative tool — like Ansible, Terraform, a Helm chart — and an imperative tool. This is more of an imperative tool; most of it is imperative. It's been written to execute some actions, rather than to ensure a state. It ensures a state by being very destructive in the way in which it does its work.
A: So that makes it a lot slower, right? Because there are a lot of things that may not have changed. So there's definitely value in those tools, in those ways — and that's why, when I was in that call with that customer and the customer was saying "I want to do this through Ansible and Terraform," I didn't tell them "No, don't do that — do a whole bunch of bash scripts," right? Because I know that the industry best practice is to use those tools, because of those advantages.
A: What occurred in my brain was mostly just: I wonder how much of this I could do without those things — without standing on the shoulders of those giants, right? What could I do? And so, in large part — this is definitely not a recommendation. Maybe I should preface it like that: don't take this and be like, "I'm gonna go deploy a production 2K environment," right? Because, wow, I've put a whole bunch of time into this, and I would not consider it to be production-ready.
A: The state information is actually really simple. You saw that kind of thing in, like, this Postgres volume configuration, where I say I have 10 gigabytes, right? Really simple stuff. It's really not going to check to see if that's been changed; it's just going to enforce it over and over again.
B: Gotcha. So, you know, what I'm kind of used to, from an infrastructure-as-code approach — I've seen, like, the GitLab Environment Toolkit, GET, and I think they have reference architectures; I think the standard is 10K — but I think the two options you have there are Ansible and Terraform. So that's kind of what I'm used to seeing, while this is, you know, very cool, and you must have spent quite an amount of time on just the scripting that you've called in the pipelines. It's just a very, very different approach than kind of the GitLab Environment Toolkit. But very, very cool stuff. So thanks.
A: Right — I don't know what's going on under the hood. And so I could learn Terraform really well, dig into the code — it's open source — and try to figure out what it's doing, but instead my way of doing things is to, like, start from the very beginning and figure it out. So that's one of the weird — the great — things about this. Weird or great, I don't know.
B: Yeah, yeah — it's, you know, very, very cool stuff.
A: You know, I am not great at making commit messages, so I started this thinking I'm never going to show this to anybody, right? So I'm not — yeah. So I just persisted in making bad commit messages. Surely at some point this is going to get to the end — you know, it started on September 1st, yes, so I just have to get back to the beginning, to September 1st.
A
This
is
going
faster
last
time
than
I
did
this
time,
if
somebody
knows
a
way
for
me
to
just
jump
to
the
bottom
without
having
to
load
the
bottom
every
time.
I
would
love
that
a
great
opportunity
for
somebody
to
jump
in
and
tell
me
how
to
do
it.
Okay,
first
commit
all
it
does
is
run
a
success
on
two
VMS
right.
B: I think you've gotten very comfortable with GitLab CI and, you know, all the ins and outs — and you know where the bodies are buried as well.
A: Yes — speaking of that, I do want to mention one thing: I am absolutely floored by the work that the Verify team and the Distribution team have done. Their work is absolutely amazing, and I wouldn't have been able to do any of this without the number of features that they've been able to get to work together. The Omnibus installer is awesome, and the Runner is awesome, and all of the CI authoring and execution are awesome. So, absolutely — I know where the bodies are buried, but I also know who to praise for this. They are amazing, yeah.
B: Yeah, definitely — it's a very, very robust implementation, so, you know, love what you've done here.
A: And anybody who wants to talk with me, please schedule a coffee chat. I'd love to talk — I'll talk about anything. I've been working with GitLab for eight years now.
B: Thanks, Caleb — appreciate you sharing. Okay.