From YouTube: Deploying GitLab on Kubernetes
Description
Cristiano Casella, Technical Account Manager, provides an overview of what to watch out for while deploying GitLab on Kubernetes with Helm charts.
What's up, party people! Thanks so much for joining us for another exciting installment of the Customer Success Skills Exchange. Today we're going to be talking about deploying GitLab on Kubernetes and what to watch out for. Cristiano is here today to share with us some information about Helm and kubectl, or, for those of you in the know, we like to call it "kube cuddle".
So today we will talk about one of the two main installation methods for GitLab: the chart. We will go through what Helm is, the difference between Helm and kubectl, the GitLab Helm chart, how to install it with all the parameters, and so on, up until the debug session.
So what is Helm? Helm is the package manager for Kubernetes applications. Helm charts help you define, install, and upgrade even the most complex Kubernetes application. Basically, Helm is doing for Kubernetes what apt or rpm are doing for your workstation.
A Kubernetes application includes a lot of different resources: deployments, services, service accounts, tokens, config maps, and many others, and a chart includes the definition for everything required by your application. So, if I want to install, for instance, a web server and I want to use just kubectl, I will need to define all of these resources on my own, with a lot of different commands or manifests. With Helm, I can just give the Helm chart name and, obviously, where it's located, plus the parameters, and everything else will be more or less transparent.
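As a rough sketch of that contrast (the chart name, repository, and manifest file names are illustrative, not from the talk):

```shell
# With plain kubectl you create and apply every manifest yourself:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f configmap.yaml

# With Helm, one command pulls in every resource the chart defines;
# here we use the community Bitnami NGINX chart as an example.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-web bitnami/nginx --set service.type=LoadBalancer
```

The chart author has already written the deployment, service, and config map templates; you only supply the parameters that differ for your environment.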
Here you have the chart; now we are going to give it a quick look, and the documentation regarding the installation is in our handbook, obviously. So, a really quick look at the repo and what we can see from our chart. The first part, the most interesting one, obviously after the README, is the values file. You will find this file in every single chart; it is part of the Helm chart standardization.
The values file is really important, especially in a complex Helm chart like ours. We have a lot of applications included inside GitLab, things like Prometheus, Grafana, Sidekiq, NGINX, and from this list we can easily understand what the capabilities of the chart are and what is not in scope. For instance, I can manage Postgres, I can manage Redis, I can choose whether to install, for instance, Grafana for my metrics or not.
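A quick way to inspect that values file without cloning the repo is Helm's built-in command (Helm 3 syntax; the grep pattern is just an illustrative way to spot the sub-chart toggles):

```shell
# Add the GitLab chart repository and dump the chart's default values,
# which list every bundled component and its enable/disable switch.
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm show values gitlab/gitlab > default-values.yaml

# Skim for the toggles of bundled sub-charts (pattern is illustrative):
grep -n 'install:' default-values.yaml
```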
It's also important to consider that a Helm chart like this one usually includes a lot of external charts or resources that may already exist on your cluster. For instance, if I search for cert-manager, we can see that its installation is enabled by default. If I try the GitLab chart installation on a cluster that, for instance, is already connected to my gitlab.com for other applications, I will get a conflict, because cert-manager is already installed.
The cert-manager app is already there, and GitLab will try to install the same thing again, so I will get a conflict at the resource level. So, for instance, if I'm going to install this application on a cluster where cert-manager is already available, I can just turn off those parameters: cert-manager will not be included inside the GitLab chart, and I can use what already exists in the cluster. The same goes, for instance, for ingress.
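As a hedged sketch of turning those bundled components off (these key names follow the GitLab chart's documented values, but verify them against your chart version):

```shell
# Reuse what already exists on the cluster: skip the bundled
# cert-manager and NGINX ingress controller, and point GitLab at
# the ingress class that is already deployed.
helm install gitlab gitlab/gitlab \
  --set certmanager.install=false \
  --set global.ingress.configureCertmanager=false \
  --set nginx-ingress.enabled=false \
  --set global.ingress.class=nginx
```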
Maybe you know that every time you install an ingress you get a public IP address, and so the cloud provider is going to bill you for this resource. If you already have a cluster that is exposing an ingress controller, you already have a public IP address; you just need to create the rule for the new application using the same IP address, and you can save money and resources. The same goes for Prometheus or any other resource. But let's go back to the slides.
Our chart actually includes a lot of different components, and not everything is maintained by us. The core components are, as mentioned: the ingress, the registry for the containers, Gitaly with the exporter, GitLab Shell, the migrations, Sidekiq, and the web service. But we also have other products inside: we have Postgres, we have Redis and MinIO, and optionally you can also install Prometheus, Grafana, runners, or cert-manager to manage the SSL provisioning for the certificates.
It is important to understand that installing GitLab from the Omnibus package and installing the Helm chart are not the same thing. For instance, there are some differences regarding the features: currently GitLab Pages and smartcard authentication are not available for the Helm chart. So choosing the installation method is not just about what the customer already has in-house or in their infrastructure; it's also about which features the customer is requesting. And, for instance, we also have some limitations for customers that are approaching really large installation sizes.
Actually, to install GitLab we just need to provide two parameters: set the hosts domain and the cert-manager issuer email (obviously, if we are going to leave cert-manager enabled). But I wanted to give a look at the default installation: you have a blank Kubernetes cluster and you want to install GitLab out of the box.
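The two-parameter install he describes looks roughly like this (Helm 3 syntax; the domain and email are placeholders):

```shell
# Out-of-the-box GitLab install: only the base domain and the
# cert-manager issuer email are required.
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm install gitlab gitlab/gitlab \
  --set global.hosts.domain=example.com \
  --set certmanager-issuer.email=admin@example.com
```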
In this case we really used just two parameters, but if we think about an Omnibus installation, usually we have a long configuration file that lets you manage everything in one place. We can use the --set flag like I did in the previous box, but, as you can imagine, this is not the best option if you have to maintain the configuration, if you have a lot of configuration, or if you want to version the configuration. This is why we usually create an override file.
The values file comes by default from the chart, and we already gave it a look. You can just create another file with the same format and use this file as an override. So, looking at the same scenario mentioned before: I add my repo, I install GitLab, and I use the -f parameter to pass the override file.
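A minimal sketch of that override workflow (file name, domain, and email are placeholders):

```shell
# The override file uses the same structure as the chart's values
# file, but contains only the keys we want to change:
cat > gitlab-values.yaml <<'EOF'
global:
  hosts:
    domain: example.com
certmanager-issuer:
  email: admin@example.com
EOF

# Pass it with -f instead of repeating --set flags; the file can now
# be kept under version control.
helm upgrade --install gitlab gitlab/gitlab -f gitlab-values.yaml
```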
In that case the override file contained the original indentation and format coming from the values file, plus the values that I decided. I can also retrieve the existing overrides.
Say I make some attempts with --set on the command line and at the end I find what I want regarding the parameters; now I want to save them in a specific file and start versioning it. This is how I can retrieve the parameters: I can take this output, remove, obviously, the USER-SUPPLIED VALUES line, and save the content as my new override file. This is all I need to store the existing configuration.
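The retrieval step can be sketched like this ("gitlab" is the hypothetical release name used earlier; the sed line mirrors his advice to strip the header):

```shell
# helm get values prints the overrides currently applied to a
# release, prefixed with a "USER-SUPPLIED VALUES:" header line.
helm get values gitlab \
  | sed '/^USER-SUPPLIED VALUES:$/d' > gitlab-values.yaml

# gitlab-values.yaml is now a versionable record of the live config.
```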
So I deployed my application, my application is up and running, and I changed just those two parameters. Let's give a look at what has been deployed in the end. This is the list of the deployments that are up and running now: I see cert-manager to manage my SSL endpoints, and the GitLab main components like the runner, the shell, the exporter, Gitaly, MinIO, the ingress controller, and the Prometheus server.
The same is installed by default for the registry, for Sidekiq and the runner, and obviously the web service that we use to reach our application. An important note about that: the runner is installed by default, but it is not privileged, so on such a cluster you will not be able to use Docker-in-Docker; you have to change that parameter.
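A hedged sketch of flipping that parameter (the key path belongs to the bundled gitlab-runner sub-chart and has changed across versions, so verify it against your chart's values before using):

```shell
# Enable privileged mode for the bundled runner so Docker-in-Docker
# jobs can work; --reuse-values keeps the rest of the release config.
helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set gitlab-runner.runners.privileged=true
```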
Another important note you should give a look at: Sidekiq is deployed as an all-in-one. If we are talking about a really large installation, usually you are going to split Sidekiq based on the kind of jobs it executes. By default, our chart is not supporting that; it just deploys one Sidekiq that looks at every single queue and tries to pick the first job. So if you want to make this kind of customization, you need to craft the chart or the existing configuration yourself.
Is autoscaling supported? Yes, it is supported, but not for every single component. Actually it is supported for GitLab Shell, for the registry, for Sidekiq, and for the web service. Keep in mind, talking about the runners, that you never want to put autoscaling on the first runner deployment: the first runner, the first pod that you will see running inside your cluster, is not a real runner, it's just a listener.
One more note about that: the autoscaler, obviously, is something that you can set up. By default you have the minimum set at two or one, depending on the component, the maximum at ten, and you have some specific targets for the CPU consumption to understand when that pod needs to be scaled up or not. As you can see, for GitLab Shell, for instance, the target is an absolute amount of the resource to be consumed, while for the others it is a percentage.
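You can inspect those defaults on a live cluster with standard kubectl commands (the namespace and HPA name below are illustrative):

```shell
# List the HorizontalPodAutoscalers the chart created, showing
# min/max replicas and current vs. target CPU utilization.
kubectl get hpa -n gitlab

# Drill into one of them for the full target spec and recent events.
kubectl describe hpa gitlab-webservice-default -n gitlab
```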
How many resources do I need to run GitLab on Kubernetes? Obviously, I'm still thinking about the full configuration, but if you have an existing cluster, as mentioned, a lot of resources could already exist there, so you don't need to duplicate them; and at the same time, some services that we include might not fit your needs or could be too big for your requirements.
So, according to this, we can suppose that the default ratio for consumption between CPU and RAM is 1 to 2, and a starting cluster could have 8 CPUs and 16 GB of RAM. Obviously, after the first bring-up, everything depends on the usage. This cluster is probably able to support a small development team, but whether we need to add more resources or not depends on the user base. This is just to have a starting point.
Out of the box, like many other software tools, Helm supports the --debug flag, which gives you a more verbose output, and usually it is clear enough to let you understand what went wrong after you have typed the deploy command. If you still have some trouble, you need to go back to kubectl. kubectl gives you the tools that you need to debug a Kubernetes installation: kubectl describe pod and kubectl logs are usually your best friends in a Kubernetes debug session.
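Those debugging steps can be sketched as (release name, namespace, and pod name are illustrative):

```shell
# Step 1: rerun the deploy with Helm's verbose output.
helm upgrade --install gitlab gitlab/gitlab -f gitlab-values.yaml --debug

# Step 2: drop down to kubectl when the chart itself looks fine.
kubectl get pods -n gitlab
kubectl describe pod gitlab-webservice-default-abc123 -n gitlab
kubectl logs gitlab-webservice-default-abc123 -n gitlab --previous
```

The `--previous` flag shows the logs of the last crashed container, which is often where the real error message lives.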
My suggestion is always to give a look at the liveness and readiness checks. In support we see a lot of requests where we are trying to find a problem inside Kubernetes itself, but in reality the pod is simply not exposing a service on the expected port, and kubectl describe pod gives you a report about the last liveness and readiness checks. So you can give a look at that to understand whether that part of your deployment is correct or not.