Description
GitOps with GitLab - Presented by Sri (sri@gitlab.com) at the EMEA CS Knowledge Share meetup 2019-11-29
Learn more about GitOps: https://about.gitlab.com/topics/gitops/
A
All right, so GitOps with GitLab. That's going to be the theme for today. What is GitOps? There are a lot of definitions out there, but here's a concise one: to have Git, the version control system, as the single source of truth for your IT operations. That is, in your organization, if your developers are creating software, that code should go inside Git. Everything related to that software, all the build jobs and the test jobs and the deploys, the whole pipeline,
A
that should also go inside Git. And the third part is to have the infrastructure that this code should run on inside Git as well. So how do you go about putting infrastructure inside a version control system? If you can represent the infrastructure as a schematic, in a code form, then it is possible. Doing that entails the following. There are some tools that you can use, but primarily there are three general concepts. One is to use a markup language, YAML, for example.
A
So if you use Ansible, you can create the schematic for all the resources, the infra resources, inside a YAML file, or a JSON file actually. If you use something like Terraform, they have a DSL, a language specifically created for modelling your infra resources; they call it HCL, the HashiCorp Configuration Language. Or if you use Pulumi, you can use general-purpose programming languages to create schematics of your infra. Regardless of which approach you use,
A
two things remain the same. One is that the infrastructure schematic is readable by humans and machines, and two, you can execute it. You can execute it manually or automatically, but you can execute it, and when you execute the schematic, the infra gets created. That's the idea behind it.
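As a minimal sketch of the HCL approach described there (the provider region, resource name and AMI ID are purely illustrative, not from the demo):

```hcl
# A declarative schematic: readable by humans and machines,
# and executable with `terraform apply` to create the infra.
provider "aws" {
  region = "eu-central-1" # illustrative region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "gitops-demo"
  }
}
```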
B
A
Part of Google Docs, so they have this nice pointer there. Okay, so jumping ahead: why would you want to do this? The benefits are basically threefold. It's either cost related, speed related, or you can reduce risk. If your infrastructure is denoted as a script, then you can execute it, and the execution is repeatable. So if your infrastructure schematic is quite complex, say you have a dozen services up and running in different places and resources,
A
then if you have it as a script, you can execute it again and again really fast, so you reduce effort, which obviously gives you speed and cost benefits. Again, script execution is faster than the way humans would do it; whether it's a human doing it manually or a human using a GUI to provision the infra, script execution is always going to be faster, so again, cost and speed benefits. Script execution can be automated; humans cannot, at least not legally, so that's another benefit that you get in the cost and speed column.
A
The thing with scripts is that you can test them. You write them once, you test them, and every time you execute them after that, you have consistency in what gets created. So you reduce risk and cost by that, because you also eliminate human error. So that's a quick summary of the benefits of the general concept of having infrastructure as code. But where does GitLab fit into this picture?
A
I'll give you the tl;dr version. If you're familiar with Git and GitLab, you already know this, but just to elaborate in the next few seconds what you can do with GitLab generally in DevOps: in Git, you would have one branch which is the main branch, usually called the master branch. This branch represents the stable production state of your code, and now it also represents the stable production state of your IT infrastructure.
A
Suppose you want to make a change to this. Let's say I'm the DevOps engineer and I want to make a change. I would branch off my master branch into a feature branch, so then I have another copy of the code where I'm doing all my work. This is like a private place where I'm just testing things out. Let's say I make my changes.
A
I run the changes in my local environment, I like the changes, so now I push them onto my feature branch, and the minute I start pushing them onto my feature branch, the GitLab pipeline kicks in and starts executing my changes. The pipeline would have jobs to verify whether my changes are actually valid or not. If they're not valid, the build is going to fail and I have to make fixes in the feature branch.
A
If the verification of my infra changes is good, then these changes also get applied inside an environment, and this too would be done by a pipeline job. It applies the changes in the test environment and I can actually play around with it. Finally, let's say I'm happy with this; now I want my changes to come back into master. For this,
A
the mechanism would be a GitLab merge request, wherein my pipeline should pass and I have to invite my colleagues, human approvers, to see whether they like the changes or not. Let's say in my organization the rule is that three approvals are required before something can be merged back; that check happens at this level. Once all approve, it goes back into master, and then GitLab pipelines could automatically execute and publish the changes to the production environment, or do it on a manual basis. That's the rough idea.
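The branching flow described there boils down to a few Git commands. A local sketch (branch, file and path names are illustrative, and a bare repository stands in for the GitLab remote):

```shell
set -e
# A bare repo stands in for the GitLab server in this local sketch.
git init --bare /tmp/gitops-origin.git
git init /tmp/gitops-work
cd /tmp/gitops-work
git config user.email "dev@example.com"
git config user.name "Dev"
git remote add origin /tmp/gitops-origin.git
echo "# infra" > README.md
git add README.md
git commit -q -m "Initial commit on the stable branch"
# Branch off into a private feature branch and make the infra change.
git checkout -q -b update-infra
echo 'instance_type = "t2.small"' > instance.tfvars
git add instance.tfvars
git commit -q -m "Bump instance type"
# On GitLab, this push is what kicks off the pipeline's verify jobs;
# a merge request then brings the change back into master.
git push -q -u origin update-infra
```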
B
I have a quick question: what's the best way to manage different credentials? In the testing stage I would probably use some sort of cloud environment, and then, moving closer to production, it will be something else, and then we need different credentials, keys, whatever. What would be the best way to do this?
A
I always don't like to answer questions that ask what the best way is, because it's hard to have one best way; it always depends on your particular situation. But there are certain ways. One way that I will show you in the demo, a little further on, is to use environment variables and to have GitLab pass them through.
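A sketch of that approach (the job and variable names here are assumptions, not from the demo): credentials are defined as CI/CD variables in the GitLab UI, and jobs simply read them from the environment.

```yaml
# .gitlab-ci.yml (fragment)
# TEST_AWS_ACCESS_KEY_ID etc. are defined under Settings > CI/CD > Variables,
# so the secrets never live in the repository itself.
provision_test:
  stage: provision
  script:
    - export AWS_ACCESS_KEY_ID="$TEST_AWS_ACCESS_KEY_ID"
    - export AWS_SECRET_ACCESS_KEY="$TEST_AWS_SECRET_ACCESS_KEY"
    - terraform apply -auto-approve
```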
C
I have a thought. So, one question, or suggestion, however you want to call it, and it would eventually fit into, not that slide, but the slide before. I've gotten that question before, and that is: hey, we want to move from mutable to immutable infrastructures. Immutable infrastructures are a good thing, and I believe having infrastructure as code and the associated automation...
A
Sure, for sure. So the demo that I'm going to show you is actually infrastructure that is immutable. That is, you can only deploy to the resources being provisioned one time, and if you want to make changes, you deploy again and you provision new ones. So that is a good point; I probably should have mentioned immutable infrastructure here in the benefits slide. But thanks for that, that's a very good point.
A
The thing with immutable properties in general is that you avoid state, so it just becomes much easier to manage. This could be an immutable programming language or immutable infrastructure or whatever else; just the fact that you don't have to deal with state over time, and that you can just press a button and everything gets created from scratch exactly as you wanted, is a huge benefit. It's a big cost saver. Thanks for that.
D
I've got a question then, related to this, which is: say we use a tool like Terraform, which determines a desired state of your infrastructure, and you make a change in, say, a feature branch. What you're effectively saying is that you're changing the desired state of your infrastructure. If you run the pipelines at your step 3 here, that's the first time in that pipeline that the infrastructure-as-code is executed.
D
What you'll actually be testing is creating the entirety of that infrastructure in one go. You will not be testing actually having the previous state of the infrastructure and the patches that are required in order to get that infrastructure to the new state. So the question I have, Sri, is: how do you ensure that that is actually tested? Because that is the code that's actually going to run when that branch is merged back into master.
A
So, for example, in the project that I'm going to show you: in Terraform, whenever you execute, that is, you run the command terraform apply, it actually applies the changes somewhere, and then the results of the application get stored as state. In this case I've got three workspaces, and I think I've run dev locally, so I've got the state in the form of a JSON file. If you want persistent infrastructure, your goal is to persist the state as well.
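Each Terraform workspace keeps its own state (locally under a `terraform.tfstate.d` directory). One way to persist it outside the working copy is a remote backend; a hedged sketch, assuming an S3 bucket that is not part of this demo:

```hcl
# backend.tf (illustrative): store state remotely instead of
# committing terraform.tfstate.d/ to the repository.
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # hypothetical bucket name
    key    = "gitops-demo/terraform.tfstate"
    region = "eu-central-1"
  }
}
```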
A
D
I guess... okay, just now playing customer: what I'm really getting at is that if you have a development environment that's also cloud-based, so you've got your production environment and you've got your development environment, then for this particular feature, that particular feature's environment doesn't actually exist. And so if the pipeline weren't to run until after you've made some changes, then it's not actually testing what will be applied to master. The key bit there is that we actually run it up.
A
So the question is: do you actually persist state inside version control or not? That would decide whether you're actually starting from scratch or building up from some existing state. Yeah, cool, cool. All right. So what are the benefits of having GitLab in the IaC space? Again, there are three main columns for the benefits: cost, speed and risk.
A
Of course, what you get if your infrastructure state is versioned over time is that you can roll back quickly, and you can roll back in an easy manner. This actually requires persisting the tfstate file as well. But if you can roll back, then of course you mitigate a lot of risk and you have constant speed benefits.
A
You can also adapt your existing or previous infrastructure. For example, let's say I have a project that is starting now, but the infrastructure that I have right now is not ideal for it, whereas six months ago the infrastructure was perfect for it. With GitLab you can actually just take how things were six months ago and build on top of that. The fact that your infrastructure changes pass through merge requests gives you huge benefits. The first is that they automatically get verified across multiple environments.
A
You get this for free with the pipelines. You also have the human reviewers, so the four-eyes principle kicks in. And, of course, the auditability: you are making changes to your infra and you have a log of what changes are being made, and within that log you have how those changes are being made and why those changes are being made. If you use GitLab issues and merge requests and feature branches, all of these things get logged.
A
But what you can do is that at a group level you can define infrastructure blueprints, so to speak, and then all your projects, all your future projects, can just get infrastructure provisioned for free based on these blueprints. So again, you get a lot of cost and speed benefits, of course, but you also avoid risk in the sense that you only define and test things once, yet they get reused a lot.
A
So in this scenario, in the demo, let's assume we have three teams. On the far left you have the infra team, on the far right you have the product team, and then of course you have the integration team in the middle. The product team on the far right is responsible for making the apps, making the products. They are not necessarily concerned with infra or operations; they just work closely with the business and try to make their apps better. And then, on the other hand, you've got the infra team on the far left.
A
They are responsible for maintaining the infrastructure overall for your organization, so they are not necessarily involved with a particular product; they just want to make infra available for all. In this scenario, we're going to see the infra team creating infra blueprints and the integration team reusing them. A blueprint is basically a reusable spec of certain IT infra resources; you publish them, you make them available, and the integration team would consume the blueprint, consume the app, and just make it work.
A
So that's going to be the demo. The first part of the demo is setting up GitLab, and there's not really much to set up; GitLab by default is set up for this kind of work. One thing you want to do is work with GitLab groups. You can see I'm working with a group over here, and within the group what you need is a subgroup. This subgroup needs to be marked as the project template folder.
A
If you go into the settings for your group and into custom project templates, what you want to do is select this subgroup. Once you do this, you get a certain set of benefits; let me show you, that's the next part. Basically, if I'm to consume a blueprint, all I need to do is create my blueprint as part of this group. Here, I'm coming in. Nice team logo, thank you Korina.
So
so,
if
I
am
the
infra
team
and
I
want
to
make
certain
number
of
blueprints
available
for
my
dev
teams,
or
my
degradation
teams,
I
just
create
those
projects
within
this
sub
group
over
here.
So,
in
this
case,
I've
got
a
I've
got
a
blueprint
which
is
just
provisions
and
open
to
instance
on
ec2
all
right
now.
A
let me play the role of the integration team. I'll say I want to create a new integration project, and I want to create from template. I select the group, I select my infra blueprint for EC2 Ubuntu, and use template. Wow, I'm using dashes and underscores and camel casings; obviously I'm feeling really good today. Create the project.
A
What this is going to do is take the blueprint and make it available as part of my integration project. So now I'm looking at my integration project, and thankfully it's got a readme file. Now, as the integration engineer, I just have to follow the steps over here, and the steps are, okay, here they are: configure the environment variables, write the script. Sounds simple enough.
A
Let me just see what happens if I try to run the pipeline without doing any of these. I'm already seeing existing pipelines over here, even though I just created the project, but these are actually pipelines from the blueprint. If I want to look at the pipeline of the blueprint, I have this available; I see that it's got three stages. At this point I don't really know what these jobs are, but I see verify, provision-and-deploy, and destroy. The names are fairly simple.
A
I probably know what's going on, but let's be certain. Let's have a look in my text editor at the pipeline and Terraform files, the .gitlab-ci.yml. These files are also available in the repository that I just created, of course, but here I'm actually seeing the jobs and the three stages. The verify stage is basically selecting some workspace, and I'm guessing I'm getting a comment, and probably the comment is that the font size is too small. Is that all right?
A
Okay, so hope this works, but yeah. I've got three stages: verify, provision-and-deploy, and destroy. Let's have a look at verify. I've got dev-verify as a job, and what it's trying to do up to here doesn't matter; I'll explain this later. But let's look at what's going on: it's running terraform init, it's selecting a workspace, and it's running plan. Plan is the terraform command that doesn't actually apply the changes, but just shows you what the changes would be.
A
So it's kind of like a preview, but it also verifies whether your changes are valid or not; in this case, by running plan, I'm actually just verifying. What's going on in my second job? "pad" actually stands for provision-and-deploy. I'm doing the same thing, selecting the workspace, but this time I'm running terraform apply. This is where the actual changes get applied. And destroy basically destroys that particular set of infrastructure resources that were created.
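A hedged sketch of what those jobs might look like in the .gitlab-ci.yml (job and workspace names follow what is described, not the exact blueprint code):

```yaml
stages:
  - verify
  - pad       # "pad" stands for provision-and-deploy
  - destroy

dev_verify:
  stage: verify
  script:
    - terraform init
    - terraform workspace select dev
    - terraform plan                  # preview and validate only

dev_pad:
  stage: pad
  script:
    - terraform init
    - terraform workspace select dev
    - terraform apply -auto-approve   # applies the changes for real
```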
A
I've also got the same set of jobs for the test and prod environments, and different sets of jobs run at different stages. The way it is set up right now: in the dev environment, everything is provisioned automatically whenever changes are pushed to a feature branch; test, for whatever reason, provisions changes automatically when pushed to the master branch; and prod, in this demo, can only be triggered manually, so it doesn't get auto-applied.
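That trigger setup can be expressed with `only`, `except` and `when` keys; a sketch, with illustrative job names:

```yaml
# .gitlab-ci.yml (fragment): which environment runs when
dev_pad:          # runs automatically on every feature-branch push
  stage: pad
  script:
    - terraform apply -auto-approve
  except:
    - master

test_pad:         # runs automatically when changes land on master
  stage: pad
  script:
    - terraform apply -auto-approve
  only:
    - master

prod_pad:         # only ever triggered by hand
  stage: pad
  script:
    - terraform apply -auto-approve
  only:
    - master
  when: manual
```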
All right, good enough. But what is it that is getting executed? What infra is getting created?
A
Here's the EC2 instance: I'm giving it a name, and I'm passing the AMI, the instance type, and, here's the funny thing, user_data. The AWS provider for Terraform has this unique property, user_data, which actually lets you execute a set of commands one time, during the launch of the resource. So when this EC2 instance is provisioned, one time, I'm saying: run this script as root. This script lives in my deployment scripts folder.
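A sketch of what that resource definition might look like (the variable names, file path and tag are assumptions, not the blueprint's actual code):

```hcl
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type

  # Runs once, at first boot, as root: the deployment script from
  # the repository becomes the instance's user data.
  user_data = file("deployment-scripts/main.sh")

  tags = {
    Name = "ec2-ubuntu-blueprint" # illustrative name
  }
}
```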
A
So if I look at my main.sh over here, it's installing Apache and it's creating an HTML file with an image, which might be a cat, because you're supposed to do that. Okay, so back to my Terraform here: basically pretty simple, run this script one time, those are the rules. So in this case, if you don't persist the state of your Terraform, then this becomes an immutable infra.
A
The state, like I mentioned, is persisted in this folder, the terraform.tfstate.d folder, and this is part of my version control. If you were to create a blueprint for a lot of different teams to use, then of course the state is not going to be part of version control; the implementer of the blueprint will make it part of their project, but not me. What else do I have here?
A
I'll show you my .gitignore. Okay, so I'm just ignoring the .terraform folder, which is basically a folder that contains the local Terraform provider binaries. I'm running this on a Mac, but it's probably not going to run on a Mac when I run the jobs, so I don't need to push these binaries.
A
Okay! So let's go back here and let's try to run the pipeline, and I think it should fail, because I have not done the two things it asked me to do: I have not created the environment variables yet, and I have not modified the deployment script. Modifying the script is kind of optional, because it's already got something that is going to provision Apache and an HTML file, so it's going to put a static web app somewhere. But I think I'm obligated to define the variables.
B
A
Fair enough, but okay, let me just get back to my jobs for now. Let's have a look: my job failed, and it failed because it says the test access key needs to be defined. Okay, so why is it failing? It's running a validate Python script. This is a script that is part of the blueprint, whose job is to validate whether the environment variables are available. A very simple, self-explanatory Python script.
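A minimal sketch of what such a validation script could look like (the variable names are hypothetical; the actual blueprint defines its own list of six):

```python
import os
import sys

# Hypothetical names: the blueprint checks its own list of six required
# variables (access keys per environment, region, and so on).
REQUIRED_VARS = [
    "TEST_AWS_ACCESS_KEY_ID",
    "TEST_AWS_SECRET_ACCESS_KEY",
    "PROD_AWS_ACCESS_KEY_ID",
    "PROD_AWS_SECRET_ACCESS_KEY",
    "AWS_DEFAULT_REGION",
    "EC2_KEY_PAIR_NAME",
]


def missing_vars(environ=None):
    """Return the names of required variables that are absent or empty."""
    env = os.environ if environ is None else environ
    return [name for name in REQUIRED_VARS if not env.get(name)]


if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("Missing environment variables: " + ", ".join(missing))
        sys.exit(1)  # fail the CI job before any terraform command runs
    print("All required environment variables are set.")
```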
A
It checks whether these six are available. Okay, let's try to make this work. So far, remember, in GitLab the only thing I've done is create a project; I have not modified any of the code. Showing you the code was a bit of an extra: you don't need to see what Terraform does, you don't need to see the Python script, you just need to consume it.
A
If I go into my GitLab Settings > CI/CD and look at my variables, I get the option to define them. Then I need my readme, because I'm going to define them: for the test environment, the access key. This is the most glamorous part of the demo, where I actually put in the values for the environment variables within GitLab. But there you go. Does somebody have questions at this point, or remarks?
A
So now, what's going to happen is, if it is able to successfully verify, it will actually run the provision and deploy jobs as well, which means it's actually going to provision the infrastructure on AWS. Let me have that open on the side. And it's going to deploy my hello-world application as well. So in theory, by the time these two jobs complete, I should have a running web application somewhere.
A
All right, let's see: test, verify, good. Let's look at the results of this job. So, Terraform, I like it, because it actually shows you exactly what it's going to do, as JSON.
A
That is true, that is so true. On that note, Terraform actually lets you keep these out, so you don't have to pass the access key and the secret key as part of the Terraform file. You can have them as part of your profile and pass them through, or there are some other ways as well: you can have something called a .tfvars file, which is not part of your repo, and so on. So there are various ways of doing it.
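Two of those options sketched (the variable names are illustrative, not from the demo):

```hcl
# variables.tf: declare the credentials as inputs, with no values in the repo.
variable "aws_access_key" {}
variable "aws_secret_key" {}
```

Values can then be supplied through environment variables that Terraform picks up automatically (`TF_VAR_aws_access_key`, `TF_VAR_aws_secret_key`), or through a terraform.tfvars file that is listed in .gitignore and never committed.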
A
There you go, an application deployed. So now, if I were consuming this, what should I be doing? All I need to do is modify one file: I just go here and I modify my deployment script, main.sh. It's a bash script, so regardless of the type of application I want to deploy, I just make my changes here, and I could, in theory, deploy GitLab on this Ubuntu instance by just running apt-get. So that's the idea, and that's the demo. Questions?
C
A
C
No, not necessarily. I'm thinking of, can that be... Is there value in combining it with our deploy boards feature and the environments? The environments in GitLab. Let's see. Also, in Settings > CI/CD > Variables, you have the ability to define per-environment variables, a different value per environment. I don't know whether some of that can be used.
A
C
That is just some thought that might come in handy, if at all. And with that, great demo. Shall we keep it rather short? Because there is another event taking place right now. Yeah.