From YouTube: Technical Bootcamp - Runner Configuration
The way this works is you install the runner app on a machine separate from your GitLab server, and then you connect the runner back to your GitLab server. This is called registering your runner, and it's really easy to do. It's likely you'll have multiple runners for your GitLab server, based on your user traffic, and the runner is a lightweight program that you can even install on your local laptop.
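A minimal sketch of that install-and-register flow on a Debian-style host (the server URL, registration token, and description below are placeholders; use the values shown under your own instance's or project's CI/CD runner settings):

```shell
# Install the runner app on a machine separate from your GitLab server.
# (Debian/Ubuntu shown; packages exist for most platforms.)
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

# Connect ("register") the runner back to your GitLab server.
# URL and token are placeholders - copy yours from the runner settings page.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --description "my-first-runner" \
  --executor "shell"
```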
Now, a runner can be shared or specific, tagged or untagged, protected or not protected, and these are different attributes that control its priority, specialization, and security. Shared is the most basic way to group your runners: a runner that's attached to the entire GitLab instance is called a shared runner. This means that any project in that instance can use a shared runner, and these are most commonly used on GitLab.com.
And even beyond these three options, there's flexibility here: we can allow runners with tags to pick up untagged jobs, or allow protected runners to also run on unprotected branches. So there's a lot we can do to configure our runner policies.
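As a hedged sketch, those same attribute knobs can be set at registration time with flags on the register command (URL and token are placeholders again):

```shell
# Register a tagged runner that may also pick up untagged jobs,
# restricted to jobs on protected branches.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "shell" \
  --tag-list "linux,docker-builds" \
  --run-untagged="true" \
  --access-level "ref_protected"
```

Setting `--access-level "not_protected"` instead would let the runner pick up jobs on any branch.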
GitLab Runners can be installed on almost all platforms, and they're the workhorse of GitLab CI, so we have maximum platform coverage for building software.
The executor is the heart of the runner. Different executors allow for different build scenarios and use cases. It's sometimes a bit confusing which is the right one, so we've included a link that helps you figure that out.
Shell and Docker are typically the most common executors; they handle commands as if you're executing them directly in a bash terminal. Docker Machine and Kubernetes allow for auto-scaling of runners, which is really useful if you have a large development organization. Some of the less common executors are VirtualBox and Parallels, which are basically virtualization executors; they have a bit more overhead, but that's sometimes necessary depending on your use case. The least common is typically the SSH executor, which is a fairly bare-bones runner.
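The executor is just another registration choice. A sketch of registering a Docker-executor runner (the image name is an example default; jobs can override it):

```shell
# Register a runner that runs each job inside a Docker container.
# URL/token are placeholders; ruby:3.1 is the fallback image for jobs
# that don't specify their own.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "ruby:3.1"
```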
Now, there are several different ways to auto-scale your runners. We have custom executors for AWS that enable auto-scaling on Fargate and EC2, but if you're not on AWS, you can use Docker Machine, which works on many different cloud providers and platforms. Or, if you want to use Kubernetes, we can basically scale this to use one pod per job, and it's becoming the go-to way to manage auto-scaling. Installing and registering a runner is very simple, and we've put together a lab exercise where you'll do just that.
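For the Kubernetes route, a hedged sketch of the registration (namespace and image are illustrative placeholders; each CI job then runs in its own pod in that namespace):

```shell
# Register a Kubernetes-executor runner: one pod per job.
# URL, token, namespace, and image below are placeholders.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "kubernetes" \
  --kubernetes-namespace "gitlab-runners" \
  --kubernetes-image "alpine:latest"
```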