From YouTube: GitLab Runners
A
Hello, everyone, and welcome to another exciting installment of the Customer Success Skills Exchange. My name is Chris Reynolds, I lead Customer Success Enablement here at GitLab, and we are joined today by Brendon O'Leary, Senior Developer Evangelist, who's going to talk to us a little bit about runners.
B
So first, a little overview. The GitLab Runner is basically the agent that's responsible for running (sorry for defining a word with itself) the jobs that you define in your CI/CD pipeline. "Runner" is a nice generic term for it, because it can actually be run in many different ways, right. But the reason that we have this concept of a runner is to make the jobs that you run on GitLab CI/CD multi-platform, so they can run on any platform. The runner is written in Go, and it's multi-language: any language you're building with GitLab, you can build with the runner. It's also built from the ground up for parallelizing builds and for building with Docker. And then, of course, having this extra agent outside of GitLab means you can have one GitLab installation but many, many runners for your different needs. Those may be runners that people bring themselves (we'll talk about that), or it may be a pooled model, where you have job execution at the pooled level. The example of that would be GitLab.com: it's a single install of GitLab, but it has pooled compute, the compute that you get when you're a GitLab.com user: 2,000 minutes for free, and then an upped number of minutes after that for each tier. That's the pooled-runner, shared-runner compute that we allow folks to use. And so, the life of the GitLab job... I feel like I have the wrong slides, because this slide says "needs work," but this is fantastic, so that's okay, we're gonna make it work.
B
So this is critical to the way runners are architected, and we'll talk about it in more detail later, but once a job is received, the runner clones the repository to itself and then runs the script. To pull those steps apart in more detail, there are actually a number of steps that happen. The polling step is still first and running the script is still last, but there's a lot that happens in between them. There are pre-build and pre-clone scripts that can be run, which can include the ability for the administrator of the runner to say something runs before every build, or for the writer of the job to say: I want to run this code before you clone the repository. You can also have jobs that don't clone the repository at all. Then there are post-clone scripts that can run, and then, of course, the actual user script.
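As a sketch of how some of those hooks surface to a job author (the administrator-level pre-build and pre-clone scripts live in the runner's own configuration; the job names and scripts below are made up for illustration):

```yaml
# Hypothetical .gitlab-ci.yml fragment illustrating the steps described above.
config-only-job:
  variables:
    GIT_STRATEGY: none        # this job never clones the repository
  script:
    - echo "runs with no checkout at all"

build-job:
  before_script:
    - echo "runs after the clone, before the user script"
  script:
    - ./build.sh              # the actual user script
```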
B
They may be Kubernetes pods that get spun up; it could even be, you know, the developer's laptop that is running the job. And an important part of the architecture is that polling method we talked about before. GitLab has a queue of jobs that need to be executed and knows which runners are able to execute them, and as a runner, I poll GitLab and I say: hey, do you have any jobs for me? This is who I am, this is my configuration. And then GitLab will decide: yes, I do have one that matches your configuration, here it is. That's an important distinction about the way the GitLab Runner is architected, because it allows you to have your runner in a completely separate network from the GitLab server, as long as it has access in that direction. The GitLab server doesn't necessarily need to know anything about the network that your runner is in, or have the ability to actually reach into that network, right.
B
The runner is reaching out. What this allows for is: customers that maybe have multiple network segments, or customers that want to use GitLab.com to store their code but run their runners within their own private networks. It lets them do that without having to open ports on the firewall. A simple example would be here in my house: I have a Raspberry Pi that runs my DNS for the house. It's called Pi-hole, and when I update its configuration, I actually store that configuration on GitLab.com, but the Raspberry Pi itself is a runner. GitLab.com doesn't have a direct connection into my house, but the runner is always going out and polling for new information, and if it finds that there's a new commit, it pulls the latest, builds, and reconfigures itself. That means I didn't have to open up some port on my router; I don't even have a static IP address at my house.
C
It's Darwin. I think, too, that this is not a small point. I think a lot of customers assume that GitLab has to push into the runner at an API-call level. So I really emphasize this with customers, because I think they don't realize that there aren't a lot of security requirements stopping them from running private runners wherever they want.
B
Yeah, it's really critical, especially in the, you know, SMB and mid-market, where we have traditionally seen a lot of demand for .com, but it's just as critical as we've seen growing demand in the enterprise for .com. To emphasize that point: what just kind of seems like a basic architecture point was a really important way that we architected our CI system, and one that other systems are either trying to model themselves after, or failing because they can't model themselves that way. With other CI systems, you end up with a lot more patchwork to achieve the same level of security, rather than it just being native to the architecture, the way the runners are architected with GitLab. Now, I said that the runner is polling GitLab with its configuration, asking for new jobs. So how does GitLab decide what a runner is going to run?
B
There are a number of properties of the runner that help make that determination, and I'm going to go through each of those. They're mostly mutually exclusive properties: a runner can be a shared or a specific runner; it can also be a tagged or an untagged runner; and it can also be a protected or a not-protected runner. And you can have any combination of these things as well, of course: you could have a shared, tagged, protected runner, or a shared, untagged, protected runner, or a specific, untagged, not-protected runner.
B
Any combination of these things exists, and we're going to go through each of them next. So, shared versus specific. A shared runner is in that kind of general pool and can be used by any project in the entire GitLab instance that it's installed in and configured to use. These are all managed, of course, by a GitLab admin on the administration side of the GitLab instance.
B
That's at your GitLab instance's /admin/runners. Typically these have some sort of auto scaling associated with them. Again, the key example here would be GitLab.com, which has a number of runner scaling managers assigned to be available to any project on all of GitLab.com, and they auto scale, actually creating and destroying a virtual machine for every single job that gets created on GitLab.com.
B
Specific runners are tied to a project or to a group, and they can actually be tied to more than one project or group. They're then only in the pool for those specific projects that they're tied to, and can be managed by the project or group owners. So in the namespace, under CI/CD settings, you have the ability to add specific runners.
B
Typically this is for specialized builds, or, if an organization doesn't want to provide shared compute across their GitLab instance, they might do this so that people can bring their own compute; you might hear that phrase sometimes. And then, of course, this is how I would, as a GitLab.com user, add my specific runner that's inside my network, without freeing up that compute to all of GitLab.com and everyone on it.
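For reference, registering a specific runner at the time of this talk looks roughly like this (the URL, token, and names are placeholders; the registration token comes from the project's CI/CD settings page he mentions):

```shell
# Register a runner against a single project using its registration token.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --description "my-private-runner" \
  --executor "docker" \
  --docker-image "alpine:latest"
```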
B
Then again, a runner can be tagged or untagged. Tagged runners look only for jobs with that same tag, and untagged runners run jobs with no tags. A use case here, for instance, might be: I'm doing a Windows build, or an iOS build, that requires a specific operating system to build on, and so I might tag my Windows runner with a tag like "windows" and then tag my job.
B
You can see I'm building a C# project here with Windows, so that's how I do that. Or I may say that I'm just going to be pulling the Maven JDK image, so I can run anywhere, on any untagged runner that has Docker. So those are the two examples here. And then finally, a runner can be protected or not protected. Protected runners only run jobs from protected branches or protected tags, so this is typically for runners that might be doing deploys.
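The Windows example above might look something like this in .gitlab-ci.yml (job names, tag, and commands are illustrative):

```yaml
# Only runners registered with the "windows" tag will pick this job up.
build-csharp:
  stage: build
  tags:
    - windows
  script:
    - dotnet build MyProject.sln

# No tags: any untagged runner (with Docker, to provide the image) can run it.
build-java:
  image: maven:3-jdk-11
  script:
    - mvn package
```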
B
You can say whether a runner is going to run on tagged jobs or not, so a tagged runner actually can pick up jobs without tags. You can also lock a runner to the project, which means no one will be able to assign it to another project. So, as I said, a specific runner can be tied to one or many projects; a locked specific runner would be locked to that particular project.
B
There's an example project with a lot of these different kinds of tags and runners, which I have not done a great job maintaining, but hopefully we'll do a better job in the future: gitlab.com slash "all the things," where I'm trying to do GitLab CI for all of the things. And so, speaking of all the things, let's talk about platforms and executors.
B
So, as I mentioned earlier, the runner was written in Go, and Go is a language that can run on almost any platform in the world. So the GitLab Runner is able to run on Linux, on macOS, on Windows, in Docker, in Kubernetes; there are lots of different platforms on which you can run the GitLab Runner. And then, for each install of the runner, you choose what's called an executor. The executor is how the runner is going to execute the script.
B
Surprisingly enough. So the most common ones are: our shell executor, which is exactly what it sounds like, it just runs commands directly, as if you were typing them into your Bash terminal or your PowerShell terminal. The Docker executor uses a Docker image and executes the build inside of that Docker image; it's obviously our most common use case. Another very common use case today is our Docker Machine auto scaling. This is where a machine scales up runners: basically it's kind of a bastion host that then creates new runners on demand.
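The executor choice lives in the runner's config.toml. A minimal sketch for the Docker executor (the URL, token, and name are placeholders):

```toml
concurrent = 4   # how many jobs this runner process will run at once

[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"          # or "shell", "docker+machine", "kubernetes", ...
  [runners.docker]
    image = "alpine:latest"    # default image when a job doesn't specify one
```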
B
This Docker Machine setup is how our auto scaling works on GitLab.com. And then there's Kubernetes, where the runner runs as a pod in a cluster and can also enable auto scaling, because Kubernetes enables that. There are a number of other executors, though; some less common ones include VirtualBox, Parallels, and SSH. These are just different methods of running your jobs. And there's actually now the concept of a custom executor, where you can provide your own.
B
And let's talk about auto scaling right now. So again, the ways that folks are able to do that: with AWS, there are a lot of ways to just generally auto scale compute, and so folks have written, you know, ways to have auto scaling groups in AWS stand up runners as they would stand up other servers. Again, the most common usage today is Docker plus machine.
B
The fact that we have Docker Machine in the GitLab Runner means you can auto scale on all of the things listed here. That's obviously a lot of work that was done, and it would be a lot of work to maintain, and that's, I think, why Docker chose to stop maintaining it. But it works pretty well right now, it's getting patches for security, and we're working on plans to bring native auto scaling for all the cloud providers directly to the runner. And then again, Kubernetes.
B
You can use a ConfigMap and spin up a pod per job to auto scale the runner. Oh, and then, just to talk a little bit more about these: Docker plus machine again allows you to create a new virtual machine, for instance an EC2 instance, which we'll talk about next. It also has a lot of great features, like how many machines I want to keep around idle and how long I keep them around.
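Those idle-capacity knobs are [runners.machine] settings in config.toml. A sketch using the amazonec2 machine driver (all values here are illustrative):

```toml
[runners.machine]
  IdleCount = 2          # keep two warm machines waiting for jobs
  IdleTime = 1800        # destroy an idle machine after 30 minutes
  MachineDriver = "amazonec2"
  MachineName = "ci-runner-%s"
  MachineOptions = [
    "amazonec2-instance-type=m5.large",
    "amazonec2-request-spot-instance=true",   # bid on spot capacity
  ]
```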
B
You know, there's the ability to have these across networks, so of course you have to consider all of those AWS networking considerations, like security groups and your VPC, etc., as you build them out. But there is the ability to use spot instances: EC2 spot instances are unused compute in the EC2 environment that you can bid for and basically get at a super discounted rate.
B
The trade-off there is that it may go away if compute demand increases, but the fact that, if a job goes away, it's not the end of the world means you can save a lot of money on your compute cost by doing that. The runner also has the ability to use S3-compatible storage for its caching, and this is a really great solution to share the cache among runners that may be on different machines. And then Kubernetes again.
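The shared S3-compatible cache is also configured in config.toml, roughly like this (bucket name, region, and endpoint are placeholders):

```toml
[runners.cache]
  Type = "s3"
  Shared = true                      # share the cache across runners
  [runners.cache.s3]
    ServerAddress = "s3.amazonaws.com"
    BucketName = "my-runner-cache"
    BucketLocation = "us-east-1"
```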
B
Let's talk about a few security considerations. So we talked a little bit about how the architecture helps from a security perspective, but I think there still are a number of concerns that you have to think about, depending on which executor you use and how you're going to be using the runner. This page talks about it in more detail; we don't necessarily need to go into it super deeply, but these are things that you'll hear come up.
B
And, you know, there are some other issues: if you need authentication through a proxy, you have to configure both Docker and the GitLab Runner to use that. Go ahead... sorry, was there a question? Just a breath? Fantastic, awesome. So let's talk about how you install the runner. Again, it can be installed on any platform that runs Go.
B
If we're talking about advanced configuration, some of those things we talked about, like the Docker plus machine configuration, how you configure the runner to use a cache that's external to itself, how you configure it to scale: those things are typically contained within the config.toml. And so that's often something that folks will turn into a template, you know, if you're, for instance, auto scaling runners with an auto scale group in AWS.
B
You might have this templatized so that you can put it in place yourself, rather than doing it kind of interactively through the interface. And there are a lot of options here; I could spend another half an hour going through everything that's available in the TOML. There's obviously fantastic reference documentation for it, as there is with everything at GitLab, but I would mostly be interested in what questions you might have about that or other runner things.
E
Could you riff for a bit on sort of how we should be positioning things like runner best practices? I've had multiple customers say to me: what's the best practice for creating an auto scaling pool of runners today? And, you know, in my five customers, I've seen them manage it in five different ways.
B
So I would say there are two things here. There's the consideration of how the customer views shared compute, which I think greatly impacts their answer, and then there's the issue that Docker plus machine is in maintenance mode. That kind of stinks; if it wasn't for that, that would be the answer, and I think it still is the answer for a lot of customers.
B
The fact that it's in maintenance mode, especially if I'm running in my own environment, I don't think that's a huge concern. It will not be the only open source library in your production environment, and maybe that's about right. But I understand that some customers have that concern. It's my understanding, at least it was when... yeah, I think one of the epics you listed we opened when I was PM of Verify.
E
Let me reframe. To my knowledge, we've got reference architectures today: if you've got 2,000, 5,000, 10,000, or 25,000 users, you can kind of go look at a docs page that says here's what we've tested very thoroughly and would recommend in terms of GitLab infrastructure. I think that directly speaks to runners, and I'm curious.
B
It's much more generalized, but I think for AWS, what's listed in that article is current best practice. It goes through all the details of how to set up a runner manager and configure it, and again, if I was at a large company today installing GitLab runners, this is how I would do it in AWS, too.
C
We had something similar where I came from before, where you'd actually patch by just rerunning the CloudFormation: it'll patch all the runners, and then you're able to set the IAM profile as well, so that you can give specific permissions to each runner cluster. And also, even in our Docker Machine patterns that we've done, and the ones I've seen the community do, they never make the Docker Machine machine HA. It's always sitting there by itself.
B
It depends on the use case. The thing I would say is: we don't auto scale the runner today in Kubernetes, because of the requirement to run in privileged mode. And so we don't do that on GitLab.com, because we don't trust the folks putting stuff into shared compute there, and I would say, for most large enterprises...
B
They also don't trust it, but they may have a scheme of Kubernetes today where they've got multiple clusters for production versus test, and then it's not a big deal in a test cluster to have stuff going willy-nilly, while production is locked down in a different way. But I would say the reason we end up not doing that is because people want to have shared compute on trusted shared compute, and that's not ideal through the Kubernetes executor today.
C
Brendan, there was also a customer who gave a presentation in Brooklyn where they're using Cayenne for permissioning the pods, so that they could have separated permissions but still use Kubernetes. I've never done it, but I just remember that. I recalled it because that was one of the reasons why we were holding off on... yes.
B
Can I add one thing? Sorry, Chris. I would say that this is the most important question in my mind, because I think our Technical Account Managers can have a huge impact in their jobs if they're able to get customers over this hump of having shared compute for runners. That's the thing that will let GitLab take off at your accounts. So, fantastic first questions here.
F
Great, good morning from sunny Santa Cruz, California. So, time to give you an example of what we deal with in the field. I had a call this morning, and this is the kind of level of detail that we have to handle, but basically I thought you might know the answer, which is...
B
That's a good question. I don't know; it depends on exactly what they're doing. My assumption is they're probably going to have to change it, because their corporate policy is probably going to say: turn off HTTP access to GitLab, right? Like, why would they be enabling HTTP access... why would they be leaving HTTP on?
G
Good
morning
afternoon,
from
semi
sunny,
Morro,
Bay,
California
yeah
I
guess
let
me
be
a
little
bit
more
clear
about
resources
I'm.
So
this
would
be
things
that
we
need
to
persist
between
jobs,
but
maybe
we
want
to
hook
into
it
could
be
even
like
you
know,
a
text
file,
so
maybe
not
so
much
dependencies,
which
are
you
know
our
documentation.
Clearly
states
is
better
reserved
for
caching,
but
I
still
feel
like
there's
a
little
bit
of
you
know,
fogginess
between
those
two
and
and
or
is.
B
You can read more about them in the documentation, but the important point here for the question is: what's better for a given use case? The number one thing to keep in mind, first off, is that cache is only guaranteed... well, it's not guaranteed at all, we'll get to that, but the only time you should rely on cache at all is in the context of the same pipeline, all right. So if it's possible that you're going to need something in another pipeline, you're already on artifacts, period.
B
If it's mission critical that the build artifact from this job is updated before my downstream job that consumes it runs, well, now it's an artifact for sure. Or if there are mission-critical tests that I have to parse every time, it's an artifact. So that doesn't give you one answer, but I think that's the answer to the question. Okay.
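A sketch of that distinction in .gitlab-ci.yml terms (paths and job names are made up): cache is a best-effort speedup, artifacts are the guaranteed hand-off between jobs.

```yaml
build:
  stage: build
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/       # best effort: may or may not exist next run
  artifacts:
    paths:
      - dist/               # guaranteed to be passed to later jobs
  script:
    - npm ci && npm run build

test:
  stage: test
  script:
    - ./run-tests.sh dist/  # safely consumes the artifact from build
```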
B
Naming things is fun, right? There are only two hard problems in computer science: naming things, cache invalidation, and off-by-one errors. Anyway, the concept would be that you'd have a shared workspace, which would then be kind of in the middle of caching and artifacts: it doesn't feel permanent like an artifact, it's not an artifact, but it's also not quite a cache; it's more like: no, this is how we build something.
B
It
was
tie
from
product
marketing
interview
me
about
it.
I
spoiler,
alert,
I
end
up
still
talking
about
what
get
loves
better,
but
I
think
the
biggest
thing
when
talking
about
Jenkins
agents
versus
runners,
which
I'm
glad
they
changed,
the
name
to
agents
used
to
be
master
slave.
So
you
might
hear
that
terminology
sometime
is
that
you,
you
have
to
configure
oftentimes
those
agents
to
have
the
right
tools
in
them.
So
again,
the
disadvantage
that
I
talked
about
the
gitlab
has
is
not
necessarily
us
being
smarter
than
Jenkins,
but
our
time
to
market.
B
So
we
came
to
market
with
CI
when
docker
had
already
kind
of
won
the
day,
and
so
our
CI
is
very
docker
first
at
docker
friendly,
whereas
Jenkins
has
kind
of
bolted
that
on
so
the
traditional
Jenkins
install.
Has
these
agents
where
you
have
to
go
put
the
tools
on
so
I
have
to
go,
install
nodejs
or
install
Java
on
and
so
meeting
and
watering
and
managing
those
agents
then
becomes
a
very
you
know:
cattle.
If
you
ever
heard
the
term
cattle
versus
pets.
You
know
those
agents
are
oftentimes
their
pets.
That's.
B
Get
labs
kind
of
designed
from
the
ground
up
to
have
be
cattle
now
a
lot
of
folks
have
spent
so
long
with
Jenkins
that
they've
created
their
own
systems
of
cattle
on
top
of
it,
but
they're
still
maintaining
that
right,
whereas
we're
maintaining
the
ability
to
do
that
with
get
Lab
CI
that
that
would
I
say
is
a
big
difference.
Excellent.
D
D
B
E
The context for mine is: I've got a large GPU-manufacturer customer doing crazy things, where they're hand-rolling Git replication to NVMe-based servers elsewhere, in a different physical location, to try and speed up a CI job with a custom CI tool they've written. So it's kind of insane, and maybe too tangential to be relevant, but I was curious.
E
Think
D
T's
question
relates
when
it
comes
to
you
know:
partial
clone
or
shallow
clone
in
the
Runner
itself,
and
building
that,
if
you've
got
some
monster,
large
multi-gigabyte
repo,
you
know
what
ways
can
we
sort
of
pointed
the
runner
and
say
this
is
what's
defensively
better
about
our
approach
versus
like
a
champions
approach?
I
mean
you
kind
of
just
talked
about
with
the
cattle
versus
that
story,
but
oh
yeah.
B
I
did
a
little
you're,
probably
gonna
have
more
pet,
like
servers
in
this
case
right
and
so
again,
I
would
say
that
our
advantage
is.
You
can
still
do
everything
the
way
Jenkins.
Does
it
right.
They
would
say:
oh
well,
Jenkins
it's
the
agent
and
the
repo
is
on
there.
Well,
you
can
do
the
same
exact
thing
with
gitlab
ci.
You
just
have
the
option
to
also
not
do
that.
I
would
say
so.
B
B
B
E
Even
some
cool
stuff
coming
with
or
cool
stuff
that
I've
been
discussed
with
gittel
eh-eh,
where
you
can
actually
tag
from
storage
types.
So
prefect
could
be
aware
of
a
certain
note.
That's
on
SSD
storage
for
high-performance
repos
need
to
say
alright
repos
tagged.
As
you
know,
high
importance
for
high
speed
could
could
be
redirected
to
be
stored
on
this
specific
yep,
some
pet
pet,
to
buy
speed.
Note
right
exactly
exactly
yeah.
E
C
B
Those have existed for a while on the runner, and I think they're probably some of the most underused, underutilized features of the runner to help you speed up. I remember when we moved it in 12.0 to 50, there was a whole uproar about it, but 50 is still a crazy git depth to clone if you're thinking: hey, I'm building head, right? Like, why do I even need that?
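That clone depth is controllable per project or per job through the GIT_DEPTH variable, for example:

```yaml
variables:
  GIT_DEPTH: "10"   # shallow clone: fetch only the 10 most recent commits
```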
B
The answer may also be... so, I think for a lot of customers at least, I mean, I found this myself: we introduced the concept of what's called a DAG pretty recently. DAG is a directed acyclic graph, and the more English-y way to say it is: you can have jobs in different stages depend on specific other jobs, and not just wait around for an entire stage to finish. I've cut some build times of mine in half just by refactoring.
B
That
correctly,
because
I
might
have
you
know,
test,
build,
deploy,
but
there's
multiple
components
right
and
there's
seven
jobs
and
build,
and
you
know
one
relies
on
job
one
in
test
and
two
of
them
are
alive
job
to
actually
using
the
dependency
stuff.
The
stuff
that
came
out
with
dag
really
helps
you
architect
that
pipeline
to
be
a
lot
faster
and
then
the
other
thing
again.
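A sketch of that refactoring with the needs keyword (job names invented): test-a starts the moment build-a finishes instead of waiting for the whole build stage.

```yaml
stages: [build, test]

build-a:
  stage: build
  script: ./build.sh component-a

build-b:
  stage: build
  script: ./build.sh component-b   # slow job; no longer blocks test-a

test-a:
  stage: test
  needs: ["build-a"]               # depend on build-a only, not the stage
  script: ./test.sh component-a
```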
B
I
think
this
is
a
a
balance
of
how
much
do
I
want
to
pay
for
compute
versus
how
fast
I
want
my
jobs
to
run
right,
like
I've
got
to
make
that
trade-off
right.
If
you
threw
a
massive
VM
at
all,
your
jobs
well
they'd
run
really
fast,
but
you
pay
a
lot
for
that
compute.
So
you
can
also
tweak
you
know
how
much
impact
does
you
know?
Extra
memory
or
extra
CPU
actually
impact
the
job,
and
then
is
that
worth
it?
The
cost
difference
worth
it
to
us.
A
A
I
Correct. My question is about: what are the best recommended practices for customers to manage large and diverse fleets of runners? I have cases where the question is: hey, we have four different development teams creating runners scoped to projects, scoped to groups, and in the end we have no visibility into what's out there, what state they're in, how busy they are, and how we can reuse them. What are the best recommendations on this side?
B
I
would
say:
that's
somewhere
that
we're
lacking
you
know
the
today.
Basically
answer
is:
don't
let
people
install
their
own
runners
and
have
a
method
that
they
come
with
through
that
you,
where
you
manage
this
or
let
people
install
their
own
runners
and
have
this
run
or
sprawl?
That's
a
pain,
so
I
I,
don't
know
if
there's
a
right
answer.
Unfortunately,
I
do
know
it's
something
that
we're
concerned
about,
and
the
runner
team
has
a
number
of
open,
epics
around
just
run
or
administration
in
general.
I.
B
Think
there's
a
lot
of
quick
and
easy
wins
there.
I
don't
know
if
they've
been
where
they're,
where
they
are
prioritized
priority
wise,
like
just
filtering
the
list
of
runners
and
a
couple
other
little
things
like
that,
could
go
a
long
way.
I
mean
I,
think
there's
we
could
do
a
lot
better
than
filtering
a
list
of
runners,
but
even
just
doing
that
I
think
would
have
a
big
impact.
So
if
you've
got
customers
that
want
that,
I
would
encourage
you
to
share
their
use
cases
on
your
issue,
because
it's
something
I've
heard
a
lot.
K
Mike, that is my question. So it kind of goes along with that question and the first question, right, but for GCP, and, you know, suggesting Docker Machine to spin up VMs. And, as I said, I felt like, at the end of the day, I guess we don't really have an answer. I don't think it differs much from AWS, right, Brendon? I think it follows the same pattern, yeah.
B
It's
it's
very
similar,
I,
don't
know
GCP
pre-emptive
machines
as
well
as
I,
know
easy
to
mm-hm
I'd
recommend.
If
you
have
a
customer,
that's
got
a
lot
of
questions
here.
You
talk
with
are
like
infrastructure
running
team
right
because
they
are
running
a
massive
scale.
Gate
lab
runner
operation
on
GCP
today
and.
K
B
B
So there are a couple of engineers: Steve, who's one of the lead engineers on the runner; Tomasz, who's also one of the lead engineers on the runner and who was actually a part-time SRE for our runner service for a long time; and then Alex on the infrastructure team. I highly recommend you talk with those folks; they're really friendly and probably would be happy to spend 15-20 minutes talking to you about it.
C
Also, Brendan, that retry setting in the GitLab YAML that detects a catastrophic failure: I think that helps with these ephemeral instance types specifically. Whenever I've been telling customers about it, a lot of times they hesitate to run their own runners, but of course I tell them about spot, and we also tell them about that setting, so they know that it's smart enough to go: oh, it died, so restart it, yeah.
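That setting is the retry keyword with a failure-type filter; a sketch:

```yaml
build:
  script: ./build.sh
  retry:
    max: 2
    when: runner_system_failure   # e.g. the spot instance was reclaimed mid-job
```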
B
There's not a really great answer. I have ideas, though. One is: if they've got, you know, high-level groups that are big enough to represent some of these folks, make it a group runner to start off with. You can always re-register it as a shared runner later; as a GitLab admin, I can do both. So that would be the easiest way, if their GitLab group architecture matches this thing that they're thinking about.
B
When you register, you can tag it with multiple things. Yes, that was going to be my next suggestion: you could just make them shared and then tag them, which means people could maybe self-discover them, but in reality, it probably won't be a run on the bank, necessarily, right.
B
Well, I think what I was saying is that tags increase the number of pings by the number of tags I have, which is then painful. I didn't know that; I thought it just pinged and said: hey, these are my tags, these are my things, what's up? But if it's pinging once per tag, an API call for tag A, an API call for tag B, an API call for tag C, then yeah, that doesn't scale very well.
B
The
the
thing
to
do
with
it.
Yeah
I
would
ask
Taryn
about
it.
The
thing
I
would
yeah
you.
You
can
set
the
ping
interval
on
the
runner
and
so
then
I'm
like.
How
does
that
relate
to
that
I?
Don't
know,
maybe
maybe
you
can
make
it
cuz,
it's
like
five
seconds
by
default
and
that's
pretty
quick
like
maybe
if
it
was
15
seconds
it
would
then
save
you,
this
crazy
amount
of
pings,
but
also
not
be
waiting
around
for
jobs.
That's.
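That interval is the check_interval setting at the top of config.toml, for example:

```toml
concurrent = 10
check_interval = 15   # seconds between polls to GitLab for new jobs
```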
A
And with that, we are out of time. Thank you all so much for joining us today. Thank you to Brendon for coming to share this great information with us, to everyone who had questions, and thanks to those of you who helped me keep track of the notes in the doc. I'm gonna go ahead and stop the recording.