From YouTube: GitOps Overview & Demo - Session 1
Description
Join Caleb Cooper for an overview of GitOps.
A
Wonderful, thank you all so much for joining us today. As I've said in the other call that we've had, we don't really have CS skills exchange sessions set up for Q4, just in the interest of time, but Caleb had reached out to me about GitOps and the great work that he's been doing around that. So we thought we'd do a skills exchange exclusive for Caleb and his team to walk us through the great work. So, Caleb, I'll go ahead and pass it off to you. Thanks.
B
You
hi
everybody,
sorry,
my
voice
goes
out
a
little
bit
during
this
a
little
bit
under
the
weather,
but
I
think
we'll
be
able
to
talk
about
some
great
things.
This
is
less
of
a
presentation,
more
of
a
demo
and
a
conversation,
so
I'm
just
going
to
hope
that
nothing,
you
know
catastrophically
breaks
during
this,
as
things
are
likely
to
do
in
demos,
please
jump
in
and
ask
questions
as
you
have
them.
There's
going
to
be
quite
a
bit
of
Highly
technical
parts
of
this
I'm
going
to
hope.
B
I can explain them so that everyone understands, but if you feel lost at any point, please ask. If you feel like somebody else might be lost, because this is something that you just recently learned, please ask then too. I want to make sure that everyone gets it, even if they aren't here to ask the question or don't feel comfortable asking questions. So let's dive in.
B
So, what is GitOps? We hear about GitOps a lot; it's one of those things that we talk with our customers about. GitOps is an evolution of a concept called infrastructure as code, and infrastructure as code is really interesting, because what it allows you to do is take what would normally be manual processes, like going into a system to set up a new virtual machine or a new storage volume, and put that into some kind of automation so that a computer will do it for you.
B
This
is
advantageous
for
several
reasons,
but
the
reason
that
I
appreciate
most
is
that
it
helps
to
eliminate
the
possibility
of
failure
due
to
just
human
error,
sure
it
can
make
things
faster.
It
acts
as
self-documenting
procedures
and
those
kinds
of
things,
but
for
me,
I'm
just
incredibly
error
prone.
So,
if
you
ask
me
to
do
the
same
thing
over
and
over
again,
I
am
likely
to
make
a
mistake
at
some
point
in
the
process
every
single
time,
so
when
I
do
instead,
I
try
to
put
it
into
some
kind
of
automation.
B
That
way
when
the
process
gets
kicked
off
and
it
fails
because
of
some
error,
I
made
I-
can
go
and
fix
that
one.
But
then,
from
that
point
forward
it
won't
make
that
error
again
now
what
get
Ops
adds
on
top
of
that
is
several
things.
One
of
the
biggest
things
is
that
you
are
able
to
store
this
infrastructure's
code
in
Version
Control,
which
allows
you
to
roll
back
anything
that
you
may
have
made
in
error
and
also
collaborate
now
in
My.
B
Demo
I
will
not
be
showing
out
a
whole
lot
of
the
collaboration
features
of
githubs,
because
I've
been
working
on
this
primarily
by
myself,
so
I
didn't
get
a
whole
lot
of
input
as
in
the
form
of
merge
requests
or
things
like
that
for
these
kinds
of
changes,
so
you'll
see
mostly
I'm
working
inside
of
a
main
branch
or,
in
my
case,
I
call
that
the
test
Branch,
but
it's
my
primary
Branch,
was
the
idea
that
I'm
just
kind
of
the
only
person
working
on
this.
B
You can address those problems by just using branches, but there's an additional problem that I'll come around to when we're talking about branching and infrastructure as code.
B
Another
advantage
of
get
Ops
is
that
it
could
be
tied
in
with
runners,
so
the
advantage
of
being
able
to
use
a
tool
like
gitlab,
where
you
have
your
version
control
system
integrate
with
your
automation,
is
really
quite
profound.
It
is
one
of
the
things
that
got
me
into
gitlab.
Initially,
to
give
you
a
little
bit
of
background
on
me.
I
have
been
working
in
it
for
over
a
decade
and
a
half
and
I
got
really
interested
in
git
and
inversion
control.
Not
because
I
was
a
software
developer.
B
One of the things that I really enjoyed about learning this was taking processes that had normally been manual and, as I said before, automating them, codifying them, so that I don't make those errors. One thing you'll see when I start sharing my screen is that my pipelines run a lot and they fail a lot. Just because I have automation going doesn't mean there aren't going to be failures, but because it's in version control and because it's in GitLab CI, I'm able to see those things and produce some kind of workflow through the pipeline.
B
That
ensures
that
the
prop
it
doesn't
progress
if
something
breaks
I
could
easily.
If
I
was
doing
this
all
manually
make
a
typo
someplace
and
then
just
keep
going
and
I
wouldn't
notice
that
something
was
broken
for
weeks,
but
through
this
processes,
I'm
able
to
see
that
those
things
are
going
to
break
in
the
middle
of
my
pipeline.
B
Okay,
so
I'm
going
to
jump
in
and
just
start
walking
you
through
this
pipeline.
So
let's
see
I
can
share
my
screen.
First
part
of
the
demo
purpose,
all
right.
Everyone
able
to
see
gitlab,
okay,
terrific
I'm,
going
to
move
you
all
over
a
little
bit
on
my
screen.
So
I
can
see
your
bases
Okay
so
before
I
jump
into
this
I
do
want
to
do
a
couple,
shout
outs,
and
that
is
the
pipeline
I'm
using
for
this
demonstration
is
in
gitlab,
but
it
also
builds
Guild
lab.
B
One of the things that I want to talk about is how you want to do things early, so as to fail early, but you also want to be able to dynamically evolve your pipeline depending on the requirements at the time. So let me define real quickly, now that you've seen this nice long pipeline, what I'm deploying. All right, switching over here: I'm deploying one, two, three, four, five different virtual machines.
B
Each
of
these
virtual
machines
has
a
particular
task,
such
as
a
rails
node
a
redis
node.
If
anyone
is
not
familiar
with
any
of
these
terms,
again,
please
jump
in,
but
these
are
the
component
pieces
of
gitlab.
These
are
all
different
services
that
your
lab
requires
in
order
to
do
its
job,
Italy
stores,
our
git
data,
postgres
stores,
the
database
redis-
is
a
cache
so
that
we
don't
have
to
look
things
up
from
a
slow
on
disk
system
nearly
as
much.
B
We
can
grab
it
out
of
memory
and
then
rails
is
the
actual
Ruby
on
Rails
gitlab
application.
The
web
thing
that
you're
used
to
poking
around
in
so
in
this
case
I
also
have
a
few
other
things
that
aren't
labeled
here,
because
they're
kind
of
wrapped
in
with
the
rails
node,
our
2K
reference
architecture
specifies
that
the
rails,
node
will
also
run
sidekick.
Sidekick
is
our
queuing
system.
B
So,
while
this
is
a
2K
reference
architecture,
it
could
be
a
3K
reference
could
be
a
10K
reference
architecture,
but
I
just
didn't
build
it
out
that
way.
So
what
I
do
here
is
I
Define,
a
virtual
machine
in
a
lot
of
different
ways.
So
some
people
may
be
wondering
at
this
point:
where
am
I
running
these
things
and
I've
chosen
digitalocean.
A
Caleb, sorry, it looks like Alejandro has his hand raised.
C
Sorry,
oh
sorry,
yes,
just
a
quick
one.
Giddly
is
a
service
right.
It's
not
the
data
database
right.
C
B
Yeah,
yes,
yes,
gittle
is
a
service,
but-
and
so
is
postgres
I
kind
of
I
probably
went
through
that
quickly.
Postgres
and
access
art
database
for
storing
things
like
user
information,
project
information
and
so
on.
Okay,
so
I
am
using
digitalocean,
so
you'll
see
do
in
some
places.
That's
just
a
reference
to
digitalocean,
so
I
Define
here
some
information
about
the
digital
ocean,
virtual
machine
I'm,
going
to
spin
up,
but
also
some
qualities
of
it
that
I'm
going
to
use
in
other
places.
B
So
the
format
here
is
designed
to
really
speak
with
the
language
that
I
have
chosen
to
write
this
in
people
who
know
me
well
know
that
I
tend
to
write
everything
in
a
scripting
language
called
Bash,
that's
because
of
my
operational
background
from
a
highly
Linux
Centric
environment.
So
while
you
could
write
this
in
any
other
language-
and
maybe
it
would
be
better
in
any
other
language,
I
write
everything
at
bash.
So
that's
what
this
is
written
in
in
bash.
B
If
you
define
a
file
this
way
with
some
variable
name
and
some
value
after
it
equals
you
can
Source
it
in
so
you'll
see
that
later,
when
I
run
scripts
that
you
will
bring
this
in
and
it
defines
each
of
these
variables
the
time
that
that's
sourced
in
so
I
can
use
them.
So
you'll
see
variables
get
called
later.
It's
coming
from
this
all
right,
I
think
I
notice
why
our
pipeline
broke
earlier,
but
I'll
get
to
that
in
a
minute.
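As a sketch of that pattern (the file name and variables here are made up, not the demo's actual ones), a key=value file can be sourced so each variable is defined at the moment it's read:

```shell
# vm-rails.env: a hypothetical key=value definition file like the
# ones described above.
cat > vm-rails.env <<'EOF'
VM_NAME=rails-node
VM_SIZE=s-2vcpu-4gb
VM_REGION=nyc3
EOF

# Source it in ('.' is the portable spelling of Bash's 'source');
# every variable becomes available to the rest of the script.
. ./vm-rails.env
echo "creating ${VM_NAME} (${VM_SIZE}) in ${VM_REGION}"
```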
A
I just noticed there are a couple of other questions. Oh, thank you for adding your name there, I feel good. Do you want to go ahead and talk about your questions in the document?
D
Caleb, sorry.

B
Okay, no problem. Okay, so I have five virtual machines that I want to spin up. The first thing I'm going to do is run ShellCheck. ShellCheck is just a linter that lets me check that the scripts I'm writing are good. This just catches things early; I talked about failing early so that I would save time.
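As a rough sketch of that kind of fail-early lint stage (the job name, image, and paths are assumptions here, not the demo's actual config), a ShellCheck job can run before anything is deployed:

```yaml
shellcheck:
  stage: lint
  image: koalaman/shellcheck-alpine:stable
  script:
    # Fail the pipeline early if any script has lint problems.
    - shellcheck scripts/*.sh
```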
B
Those
run
all
the
time,
but
one
of
the
things
that
I'm
doing
here
is
I'm
deploying
a
live
environment
over
a
live
environment,
so
I'm
going
to
get
to
a
little
bit
about
that
in
a
minute,
but
if
that
environment
is
on
so
if
gitlab
is
already
running,
I
want
to
First.
Do
some
work
to
make
sure
that
it
is
not
going
to
get.
You
know
clobbered
and
the
one
of
the
biggest
parts
of
that
is
sorry.
B
C
B
Backups,
so
if
Guild
app
is
already
running,
it's
going
to
try
to
clear
it
back
up
and
then
what
it's
going
to
do
is
going
to
shut
down
every
single
node.
That
is
already
running
this
way.
You
know
if
I
turn
off
the
virtual
machine
in
the
middle
of
writing
or,
if
I
destroy
the
virtual
machine
in
the
middle
of
writing
a
commit
into
Italy
or
something
like
that.
I
could
potentially
cause
some
corruption.
So
what
I
want
to
do
is
shut
down
everything
nicely
if
they're
already
up
foreign,
so
I
do
those
two
things.
B
There are two virtual machines that need to have block storage, and all this does is reference another piece. I have virtual machines here; I'm defining my infrastructure in these key-value pair files. So, for instance, Gitaly is going to be storage name, which is volume name, which comes from here.
B
So it first sources in the Gitaly definition and gets this volume name, so then it's volume name and then CI commit branch. One of the things that's really neat about GitLab CI is that, because of its tight integration with our version control system, it allows us to leverage that within the pipeline to be aware of information about the version, about the branch we're on, the commit.
B
What happened recently, those kinds of things. So CI_COMMIT_BRANCH is just the branch we're currently on. I mentioned before that we run into problems if we create a whole bunch of branches for doing our work, and those problems are that we need to be able to see those branches execute independently of each other. So say I created a new feature branch.
B
For example, suppose I'm going to add some monitoring into this system. I may create a new branch for that, but then I have to create an entire infrastructure in which that can run, right? While that is probably the safer route, and I want to be able to facilitate it in situations in which I feel like it is worth it, for small changes it may not be, like if I'm just changing this name a little bit or this path a little bit.
B
Then
I
may
not
do
that,
but
I
want
it
to
facilitate
branches
as
much
as
possible.
So
wherever
I
can
I
distinguish
things
that
are
going
to
be
instantiated,
such
as
new
volumes
by
the
branch
that
they're
on
I
also
give
it
some
other
information.
You
know
like
the
size
of
it
which
region
it's
in
and
those
kinds
of
things.
So
foreign
just
goes
over
that
who's
list
of
VMS
and
I'll
show
you
the
code
that
does
this
so.
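A minimal sketch of that branch-scoping idea, with made-up values; `CI_COMMIT_BRANCH` is a real GitLab CI predefined variable, defaulted here so the sketch also runs outside a pipeline:

```shell
# CI_COMMIT_BRANCH is predefined by GitLab CI; default it so this
# sketch also runs outside a pipeline.
CI_COMMIT_BRANCH="${CI_COMMIT_BRANCH:-test}"

# Hypothetical value that would normally come from a sourced
# definition file.
VOLUME_NAME="gitaly-data"

# Scope the instantiated resource to the branch, so feature branches
# get their own volumes instead of clobbering each other's.
BRANCH_VOLUME_NAME="${VOLUME_NAME}-${CI_COMMIT_BRANCH}"
echo "${BRANCH_VOLUME_NAME}"
```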
B
But if you want your input to that script to be longer than a single line, for instance if you're just doing formatting like I'm doing here, then you have to use either this mark or a pipe. There are some slightly different effects, but right here they do basically the same thing.
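Those two markers are YAML's block scalar indicators; a small sketch of the kind of job being described (the job name and script contents are made up):

```yaml
deploy-storage:
  script:
    # '|' keeps the newlines exactly as written; '>' would fold the
    # lines into one separated by spaces. For a multi-line shell
    # snippet like this they behave much the same.
    - |
      for VM in gitaly postgres redis rails; do
        bash ./create-storage.sh "$VM"
      done
```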
B
I
would
say
that
in
working
on
this
I
found
a
whole
bunch
of
different
things
that
are
a
little
bit
quirky,
mostly
with
digitalocean,
but
also
that
at
the
moment,
if
you're
using
multi,
pipeline
or
multi-line
script
statement,
then
it's
not
going
to
show
the
entire
thing.
You'll
notice
here,
class
multi-line,
command
I
had
a
great
conversation
with
one
of
our
developers
in
the
Ci
or
verify,
and
he's
working
on
this
I'm
very
happy
about
that.
B
But
I
for
now,
I
don't
get
to
see
this
stuff.
So,
there's
a
little
bit
of
shortcoming
there
if
I'd
written
it
on
a
single
line,
I'd
be
able
to
see
this
so
instead
I'm
going
to
kill
here
and
see
that
what
I
do
is
I
run
Bash
for
this
create
storage,
script
and
I
have
an
input
of
VM
and
that
VM
is
coming
from
looping
over
the
list
of
VMS
that
we
looked
at
already
and
for
every
VM.
It
tries
to
run
this
great
storage
script.
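A runnable sketch of that loop, with a stub standing in for the real create-storage script and made-up VM names:

```shell
# Hypothetical stand-in for the real create-storage script.
cat > create-storage.sh <<'EOF'
#!/usr/bin/env bash
echo "would create storage for $1"
EOF

# Made-up list of VM definitions, mirroring the demo's five nodes.
VMS="gitaly postgres redis rails monitor"

# For every VM, run the create-storage script with the VM name as
# its input, just as the pipeline job does.
for VM in $VMS; do
  bash ./create-storage.sh "$VM"
done
```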
B
So,
let's
create
storage
script
is
in
here
and
so
what
it
does
is
it
sources
in
default,
which
is
this
non-vm?
This
is
extra
information.
That's
going
to
be
true
for
all
of
these
and
less
overwritten,
so,
for
instance,
the
size
for
all
of
these
is
going
to
be
one
virtual
CPU
and
two
gigabytes
of
RAM,
but
I
can
override
those
within
a
subsequent
declaration.
If
I
need
to
I
found
that
giddly,
postgres
and
redis
in
my
case
can
all
be
smaller
than
rails.
B
So here I'm calling the API for DigitalOcean twice. First I'm calling it to see whether or not this storage already exists, and the way I'm doing that is by asking for a list of every storage resource available to me with the same name that I'm about to define for this storage volume. If that count is zero, then I will go on and create one. If it's not zero, then it's just going to move on and end, meaning that thing already exists.
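A sketch of that existence check. The real script would query the DigitalOcean volumes API with a token; here the response is canned so only the counting logic is shown, and the variable names are assumptions:

```shell
# In the real script the JSON would come from something like:
#   curl -s -H "Authorization: Bearer $DO_TOKEN" \
#     "https://api.digitalocean.com/v2/volumes?name=${VOLUME_NAME}"
# Canned response so the sketch runs without a token or network.
RESPONSE='{"volumes":[],"meta":{"total":0}}'

# Pull the total count of volumes with the requested name.
COUNT=$(printf '%s' "$RESPONSE" | sed -n 's/.*"total":\([0-9]*\).*/\1/p')

if [ "$COUNT" -eq 0 ]; then
  echo "volume does not exist yet: would create it"
else
  echo "volume already exists: skipping"
fi
```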
B
This
is
important
because
wow
part
of
my
pipeline
is
to
delete
virtual
machines.
I
want
to
keep
volumes
around,
because
we
want
to
make
sure
that
we
have
data
persist
through
our
deployments,
I'm
kind
of
skimming
over
there
kind
of
important
part,
and
that
is
as
part
of
this
workflow
I,
delete
all
the
virtual
machines
and
then
rebuild
them.
B
So
I've
gotten
this
list,
that's
zero!
So
then
what
I
do
is
I
use.
The
information
I
have
defined
in
that
storage
definition
down
here
to
define
the
the
information
about
this
new
volume,
I'm
creating
so
instantiate
a
volume.
That's
happened
twice
here
and
we
don't
get
any
output,
which
means
that
it's
a
success
for
the
most
part
I
try
to
follow
the
Unix
philosophy
of.
If
it
succeeds,
don't
tell
me
anything
about
it.
B
It was in a state where it was attached to a virtual machine that no longer exists, and because of that, because of the problem with DigitalOcean's interface, I couldn't remove those, so I had to contact support over and over again. That's because I was building and destroying: I was building, attaching disks, and then destroying virtual machines.
B
Many
times
a
day
and
in
that
process
what
was
happening
was
a
race
condition
where
instructing
it
to
destroy
the
volume
would
run
into
a
problem
where
some
of
the
time,
most
of
the
time,
it
would
tell
it
to
detach
it
first,
but
sometimes
the
instructions
Within
digitalocean
system
would
delete
the
virtual
machine
before
it
and
start
to
detach
it,
giving
it
this
weird
state.
So
what
I
did
was
I
built
in
something
to
detach
it,
for
me,
probably
should
have
done
that
from
the
beginning.
B
This is an exception to what I just said about staying quiet on success, because I want to be able to see this, since I've had problems with it. I detach these disks before I go ahead and delete anything, so I loop over all of the VMs to detach them, and then I remove them. And this is actually, here's a bug; this is a fun one.
B
I
have
this
looping
over
the
VMS
and
it
is
going
to
then
find
if
that
VM
has
an
attached
volume
and
if
it
does
it's
going
to
detach
it.
But
then,
unfortunately,
here's
a
place
I
need
to
improve
this.
It
goes
ahead
and
deletes
all
the
VMS
at
the
end
of
that.
So,
what's
going
to
happen,
is
it's
going
to
Loop
over
every
single
VM,
try
to
detach
and
then
try
to
delete
every
single
VM
every
single
time?
B
So
what
could
happen
there,
so
this
is
the
thing
I
need
to
fix
is
that
it
could
run
to
a
situation
in
which
the
first
VM
of
tries
is
rails,
and
so
then
it
deletes
all
the
VMS
and
then
I
guess
stay
more
I
have
volumes
so
I
need
to
probably
get
this
written,
so
it
doesn't
delete
all
the
VMS,
but
it
deletes
only
the
one
that
it's
working
on
right
now.
So
that's
an
improvement
for
me
to
make
a
little
bit
later.
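A sketch of the fix being described, with stub functions standing in for the real DigitalOcean calls: detach and delete only the VM the loop is currently working on, rather than deleting everything at the end.

```shell
# Hypothetical stubs for the real DigitalOcean helper calls.
detach_volume() { echo "detached volume from $1"; }
delete_vm()     { echo "deleted $1"; }

# Made-up list of VM names.
VMS="gitaly postgres redis rails monitor"

# Handle one VM per iteration: detach its volume, then delete only
# that VM, instead of deleting every VM after the first detach.
for VM in $VMS; do
  detach_volume "$VM"
  delete_vm "$VM"
done
```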
B
All right, I'm not going to talk about making the load balancers, because that's mostly like the deploy-VMs or my block storage, but I do want to talk about secrets prep. One of the things that we run into when we're trying to write infrastructure as code is how to deal with secrets. Secrets are everything from passwords to API tokens to certificate keys.
B
These
are
things
that
we
want
to
be
able
to
use
because
we
need
to
Define
them
for
say
a
lab
RB
file,
which
is
the
case
in
this,
but
we
don't
want
them
to
be
stored
in
Version
Control.
If
I
go
back
to
my
web
ID
here,
you'll
see
configs
gitlab
and
then,
let's
see
like
look
at
the
rails
definition
here,
so
you'll
see
that
I
have
initial
root.
Password
you'll
I
have
RB
registration
token.
B
This
is
for
registering
Runners
I,
have
connections
with
postgres
and
connections
with
redis
and
connections
with
giddly,
and
all
of
these
things
are
secrets.
Some.
A
B
Because
I
want
those
things
to
be
either
protected
or
easily
changed,
and
so
what
this
stage
does
is
it
comes
up
with
a
brand
new
credential
to
place
with
this
file
every
single
time
this
runs
so
the
advantage
there
is
I
don't
have
to
track
those
things,
but
also
if,
for
any
reason,
I
felt
like
there
was
a
compromise
of
one
of
these
credentials.
I
simply
redeploy
the
entire
environment
and
all
of
the
credentials
have
been
reset.
B
But
where
do
I
store
these
that's
what
these
curls
are
doing?
Not
only
are
they
creating
the
cringe?
Will
you
see
here
that
it
is
echoing
the
random
variable
to
md5
sum
and
then
cutting
off
everything,
but
the
beginning
of
it?
What
that
ends
up
with
is
an
md5
sum
of
random
information
for
anyone
who's
not
aware
of
what
md5
is.
It
is
a
kind
of
old
cryptographic,
hatching
algorithm
that
will
allow
you
to
come
up
with
theoretically
unique
information
that
describes
in
a
very
short
way
some
input
information.
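A minimal sketch of that credential generation; the exact input to the hash in the real script may differ:

```shell
# $RANDOM is Bash's built-in random variable. Hash it with md5sum
# and keep only the hex digest (the field before the trailing "-"
# that md5sum prints for stdin).
SECRET=$(echo "${RANDOM}${RANDOM}${RANDOM}" | md5sum | cut -d' ' -f1)
echo "$SECRET"
```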
B
Somebody
who
can
access
this
page
can
go
in
and
look
at
one
of
them
so
I'm
going
to
pick
this
one,
which
is
not
actually
a
secret,
because
it's
just
a
host
name
and
so
I
can
go
in
here
and
look
at
this,
so
I
can
also
copy
them
out.
So,
for
instance,
I
set
the
default
route
password.
So
if
I'm
setting
up
a
brand
new
instance
and
I
want
to
be
able
to
log
in
I
can
set
that
here
now,
I
don't
know
what
that
is,
because
that
was
randomly
generated
down
here.
C
Hey
Caleb
yep,
just
so
I
understand
it
correctly,
so
so
you're,
basically
rotating
your
passwords
every
time
you
go
through
this
absolutely.
C
Is
awesome
and
then
so
can
you
can
you
explain
you
said
so
you
don't
pass
them
as
what
was
that
line?
Did
you
use
the
facts?
Yes
can
you
can
you
walk
me
through
that?
Thank
you
sure.
So,.
B
Sure. Several things can be moved along within a pipeline, and one of the things that you can do to move variables along is to define them in a certain file that will be uploaded into GitLab's artifact store. For every pipeline, you can define artifacts for any job, and then you can go and fetch them out of the pipeline.
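A sketch of that artifact hand-off (the job, script, and file names here are made up):

```yaml
secrets-prep:
  stage: prep
  script:
    # secrets-prep.sh stands in for the real script; it writes
    # KEY=value lines that later jobs can source.
    - bash ./secrets-prep.sh > secrets.env
  artifacts:
    paths:
      - secrets.env
```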
B
Okay
so
similar
to
those
Secrets.
The
other
thing
that
I
rotate
every
single
time
is
that
I
make
a
internal
and
self-signed
certificate
Authority,
so
that
every
one
of
my
components,
redis,
postgres
and
get
away,
can
all
be
communicated
to
by
the
rails
nodes
using
these
certificates.
I
do
this
so
that
the
certificates
I'm
using
for
anyone
else
to
connect
with
this
instance
are
different.
B
So,
for
instance,
if
you're
connecting
with
the
rails,
node
you'll
get
a
let's
encrypt
certificate,
but
that's
different
from
the
ones
that
are
used
internally,
so
that
I
don't
have
to
worry
about
things
like
wild
cards
and
I.
Also,
don't
have
to
worry
about
handling
lesson
grip
for
all
of
these,
since
I'm
able
to
serve
these
certificates
to
each
of
these
component
pieces,
as
well
as
the
authority
to
all
of
the
component
pieces,
every
piece
is
able
to
trust
each
other,
because
I
have
distributed
the
certificate,
Authority,
so
kind
of
circumvents
the
certificate
Authority
system.
B
That
is
the
global
circulatory
system,
where
you
go,
get
a
certificate
from
somebody
like
digicert
or
GoDaddy,
or
whoever
those
cost
money
and
I
don't
want
to
create
a
new
one
of
those
every
single
time.
I
run
this
because
I
could
of
course
be
running
this
many
times
a
day
and
so
buying
a
new
certificate.
Every
single
time
would
be
painful.
I
could
buy
one
and
store
someplace
that
I
keep
placing
it
in.
D
One of my questions was a comparison to the GitLab Environment Toolkit, but I think I can answer that for myself already: you're using the API interfaces from the cloud provider to set up your environment, and that is different from Terraform and, after that, Ansible, as the GitLab Environment Toolkit does it. Am I correct there, or...
D
My real point of interest is the health checks that you do against your infrastructure, and especially against the GitLab that you then have installed in that infrastructure. What tools are you using there? What checks are you using there? Because that would be really, really helpful for my customers that are looking for tests that they can do after upgrades, for functional tests that they can do after upgrades.
B
So
there
is
less
checking
of
the
actual
gitlabs
once
it's
been
deployed,
then
there
is
about
checking
the
environment
that
it's
going
to
get
deployed
into
that
I
will
say
that
is
mostly
a
run
out
of
time
problem
for
me,
I.
That
is
something
that
I
would
have
liked
to
have
been
able
to
do,
and
I
have
an
upcoming
conversation
with
a
customer
about
this
specific
problem.
B
So
I
will
likely
be
solving
it
just
in
time
to
help
that
customer
and
then
once
I
have
that
then
I
will
have
a
way
to
implement
it
here,
then
we
can
have
another
conversation
about
that,
so
maybe
Circle
back
with
me
after
the
holidays
and
I
can
see
if
I
have
anything
more
for
you
about
that.
B
Absolutely. So one of the things I want to touch on, because we are running out of time (thank you for reminding me of that), is that one of the problems I ran into when I was initially coming up with this process was that I wanted to have a way to do jobs on different virtual machines if they existed. So I needed to be able to define some way of determining whether or not a virtual machine existed.
B
Through some work, I eventually determined that dynamic child pipelines were the path to go for this. What dynamic child pipelines allow me to do is create these downstream pipelines that can run if that CI file has been created. So I'm going to dive into that real quick.
B
Okay, so health check. What the health check job does here is effectively nothing but trigger a pipeline that is produced out of this generated health-check CI YAML file, which is created in this job, which goes over every single VM and then generates the file based on what is contained in those definition files we talked about before. It does that by taking the health check template here, which defines what should be done in the health check script and the health check stage,
B
and puts that at the top of the generated file. Then it creates a block for each virtual machine, not just a line but a block, within that same CI file, that contains basically the same information: it's just going to extend health check, it's going to run the health check stage, but it uses this tag, "do" plus the VM name. That name is important, because what it means is that, if I have five virtual machines, there are going to be five of these.
B
There's
actually
no
way
of
doing
that
within
Gill
FCI,
the
live,
CI
is
really
doesn't
have
a
construct
for
doing
kind
of
parallel
runs
of
the
exact
same
work,
it's
kind
of
unexpected.
Why
would
you
want
to
do
that?
But
if
you're
doing
it
because
you're
doing
systems
orchestration,
then
it
makes
sense.
The
way
to
do
that
is
to
have
these
child
pipelines.
Where
is
able
to
Define
over
and
over
again
the
same
instructions
but
say
go
to
these
particular
Runners,
so
each
Runner
gets
a.
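A sketch of that generation step, with made-up job, tag, and file names mirroring the description:

```shell
# Made-up list of VM names.
VMS="gitaly postgres redis rails monitor"

# Emit a hidden template job, then one concrete job per VM that
# extends it and is pinned to that VM's runner via a tag. The parent
# job would then trigger this generated file as a dynamic child
# pipeline.
{
  printf '.health-check:\n  stage: health-check\n  script:\n    - bash ./health-check.sh\n'
  for VM in $VMS; do
    printf '\nhealth-check-%s:\n  extends: .health-check\n  tags:\n    - do-%s\n' "$VM" "$VM"
  done
} > health-check-ci.yaml

grep -c '^health-check-' health-check-ci.yaml   # prints 5
```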
B
That would all appear here, and then, if I cut it back to five, it would cut them back to five automatically, and so I can make those definitions not within the code but within these virtual machine definition files. Now, I mentioned that I think I realized where the problem was that made my job fail, so I want to go look at that real quickly before we wrap up. So, in update-gitlab I have a series of jobs.
B
Rails and Sidekiq do depend on these other things, and so they have to run later, but this one failed. Okay. So this failed because of the problem that I keep running into over and over again. I mentioned that I define a certificate authority for my internal communications, but I use Let's Encrypt between systems.
B
The problem here was that Let's Encrypt tried to validate this Rails node and couldn't. Now, exactly why it couldn't, I will have to dig into a bit more, but most of the time what has ended up being true in this case is that it wasn't able to resolve through DNS, which is frustrating. DNS is generally the problem that, you know, I struggle with, but DNS is particularly problematic in these cases because, since I'm performing this, you know, ten times a day, I can't just change the IP address to which I'm deploying, because DNS needs to catch up.
B
So
I
have
to
have
something
like
a
load
balancer
in
place
to
prevent
that
which
I
do
I,
make
a
load
balancer
so
that
I
can
store
an
IP
address.
Moving
in
between
these.
So
that's
one
of
the
things
I
don't
delete
every
time,
I
keep
my
load
balancer
and
then
I
just
have
it
point
to
these
new
virtual
machines,
but
it
didn't
work
in
this
case
because
the
demo
had
to
fail
in
some
way
all
right.
B
Oh yes, good question. So I had a conversation with a customer who was telling me that they wanted to deploy GitLab using Terraform through a pipeline, and I was encouraging them that that was a good idea. I showed them the GitLab Environment Toolkit and everything like that, but they asked, okay, how do we store secrets for this? And so I said, well, in your...
B
So
that
was
basically
where
I
ran
up
into
the
into
my
wall,
of
not
being
able
to
help
them
anymore,
because
I
would
like
for
them
to
have
been
able
to
store
this
in
a
place
where
it
was
connected
with
their
Version
Control,
but
what
it
triggered
in
my
brain
was
you
don't
need
ansible?
You
don't
need
terraform.
You
could
just
do
all
of
this
through
some
shell
scripts
in
CI
Pipeline,
and
then
you
could
store
all
of
this
stuff
easily.
B
Of
course,
if
they
were
doing
it
all
in
the
CI
pipeline,
they
could
have
stored
these
stuff
in
our
CI
CD
variables
anyway,
so
they
could
still
use
ansible.
They
could
still
use
terraform.
They
really
the
only
they'll
pick
up
for
them
was
their
dependence
on
Tower,
but
basically
what
it
did
was
like.
Oh
I
bet,
I
could
just
do
this
myself,
and
so
I
came
up
with
a
proof
of
concept
in
like
eight
hours.
B
That
worked
but
didn't
work
quite
well
enough,
like
there
was
all
these
little
education
that
didn't
solve
for
so
then,
at
this
point,
I
spent
probably
800
hours
on
this,
so
that
it
takes
to
give
some
idea
like
you,
can
make
something
that
works
and
deploys
that
work
in
gitlab,
environment
or
whatever
else
pretty
quickly
in
a
day
right.
But
if
you
want
to,
you
can
run
with
this
for
probably
ever,
and
so
that's
why
this
is
so
complex.
Just
because
I
keep
adding
things.
C
This
was
awesome
Caleb.
Thank
you
just
a
quick
one
for
me,
I
mean:
do
you
think
if
you
would
have
used
something
like
terraform
or
ansible,
but
even
kubernetes?
You
might
have
cut
down
on
your.
What
was
it
200
lines,
yaml
file
or
three.
B
I,
don't
know
I
mean
most
of
that
stuff
I
think
would
get
gone
into
the
scripts
instead,
so
I,
don't
think
it
would
cut
down
necessarily
the
link
to
that
yaml
file
could
theoretically,
but
one
of
the
things
that
I
have
found,
because
prior
to
this
I
did
a
lot
of
configuration.
Management
I
did
a
lot
of
work
in
ansible
and
puppet
and
CF
engine,
and
the
thing
I
found
over
time
was
that
those
tools
are
really
great
for
interoperability.
B
If
you
have
different
systems
like
you're
trying
to
deploy
into
Rel
and
Ubuntu
right,
you
can
Define
the
same
kind
of
thing
in
workspace,
but
also
works
really
well
for
interoperability
between
people.
It's
much
easier
to
hire
somebody
who
will
be
able
to
quickly
get
a
speed
on
your
ansible,
then
just
hire
somebody
who
feels
good
to
be
on
your
scripts.
B
The
downside
is
that
they
all
come
with
their
own
bunch
of
baggage.
There's
all
these
quirks
in
these
tools
as
well,
and
so
while
it
probably
would
have
cut
down
the
length
of
my
total
work,
because
those
things
would
have
abstract
or
a
lot
of
that
away.
I'm
not
sure
it
would
have
cut
down
on
the
amount
of
time
it
took
me
to
do
this
in
all
reality,
because
I
would
have
had
to
spend
a
lot
of
time
trying
to
figure
out
how
to
do
the
thing.
B
And
this
is
a
public
project
I
plan
to
keep
adding
to
it
feel
free
to
look
at
it.
It's
not
in
gitlab.com,
it's
in
my
own
personal,
app
server
that
I've
been
running
since
before
I
came
here,
but
I
would
be
happy
if
you
want
to
take
a
look
at
it.
Have
any
other
questions
after
this
I
would
love
to
talk
about
it.
B
It's
going
to
be
difficult
to
Fork
it
across
well,
no,
you
can
do
it
yeah,
you
can
Fork
it
totally
open.
There's
no
restrictions
on
it.
Yeah
I
should
probably
add
a
license
to
tell
you.
You
can
do
that
foreign.
B
So
it
is
to
be
clear:
the
root
password
that
you
use
to
log
in
the
first
time
and
then
you
go
and
change
it.
So
it's
not
just
something
that
I
like,
but
I
do
I.
Do
trust
the
CI
variables
pretty
strongly
so
I
don't
really
see
a
problem
with
using
that,
as
my
credential
store
the
I'm
less
concerned
about
that
than
I
am,
for
instance,
my
digital
ocean
access
token.
A
But if you're available, if the timing works out for you, please feel free to join that session as well, and thank you so much for your time. Thank you for the extra time, thank you all for the great questions. Thank you all, bye.