From YouTube: Q1 2021 GitLab Hackathon: Runner Office Hour
A
Yes, hello everyone. Good morning, evening, or afternoon, wherever you are, and welcome to the Runner office hour. Before I introduce Steve, let me just spend a few minutes to remind everyone of a few things. First of all, it's really important that I remind everyone of the community code of conduct here at GitLab.
A
We strive to have a safe and inclusive environment for everyone, so please be aware that any disrespectful or offensive behavior will be addressed. If at any point, whether through a comment, through a merge request, or through any of our community channels, you feel unsafe or threatened, please reach out to us, and we will be there to respond with support as soon as possible.
A
Again, at this point we are almost halfway there. The hackathon begins on the 31st of March in your local time zone and ends at the end of the day on the 1st of April in your local time zone, so we're already about halfway through. We have a lot of merge requests so far and I'm really happy. Thank you all for your contributions; it's really nice. We've already passed 120-something merge requests, while during our last hackathon the overall number of MRs was 167.
A
So that's really exciting. In case you need support or help, or you just want to reach out to the community, you can find us on Gitter at gitter.im/gitlab/contributors. If you need anything specific, feel free to ping me there or through any of the MRs or issues. And with no further ado.
B
Hey everyone, so my name is Steve. I'm located in Malta and I've been working at GitLab for around two and a half years, and I'm one of the project maintainers on GitLab Runner, which is the agent that runs the CI jobs for GitLab. The usual format of this office hour is that we either have participants during the call whose questions we can answer, or whose merge requests we can review to help them contribute something to GitLab Runner, or, if we have no questions or no other items on the agenda, we go through open community contributions.
C
Currently, I don't have anyone that needs to be reviewed; I mean, reviewers are already assigned to the MRs. So that's it. Awesome.
C
I was just listening, but I mean: is GitLab Runner the one that is actually running the pipeline? I mean the integration tests that we write, is that where the runner comes in?
B
Yeah, so that's a great question, because if you're not familiar with the GitLab stack, it's not very clear what GitLab Runner is doing, right? When I first looked at GitLab Runner, I thought it was also parsing the .gitlab-ci.yml and creating the pipelines and things like that. But that is not the case.
B
So to give a quick overview of the architecture: we have GitLab the application, which is a Ruby on Rails application, and that is where all the data is stored, and that is what we call the single source of truth. That is where the .gitlab-ci.yml parsing happens, that is where the jobs are created, that is where the pipelines are created, and then, when everything is created, it will assign a specific job to GitLab Runner. And basically, here is what GitLab Runner is going to do.
B
It's going to get a specification for that specific job. It's completely ignorant about pipeline scheduling; it only knows about one single job. It will get a specification of that job, and GitLab will tell it: hey, I need you to run this script, with this infrastructure, and so on. And then GitLab Runner takes over.
B
What it does is just parse that JSON string and create, for example, a Docker container with all the specifications, and then run the script for the user in that container, if we're talking about Docker; or it can be Kubernetes, or it can be anything else. So GitLab Runner is mostly a somewhat low-level thing where the execution of the user script actually happens. The creation of the pipeline, the job, and all the magic around the .gitlab-ci.yml file happens in GitLab itself.
B
So let me share my screen, actually. I saw some community contributions coming in during the hackathon, so we have two so far, and it's always nice to see documentation changes. Those are one of my favorites, mostly because they're super quick to merge and they improve the quality of life for everyone.
B
They improve the quality of life for me when I go to reference our own documentation while trying to figure out a problem, and they help increase adoption. We're actually very lucky at GitLab that we have a technical writing team, who put an enormous amount of hard work into making our documentation consistent, with proper grammar and so on and so forth, which is something most developers don't always have time to do.
B
So we have this process where engineers usually vet that everything looks okay from a technical level, and then we usually assign it to our technical writer, who will look at the consistency and the flow of the documentation.
B
So we can go through these two, and I'll do that right now. This one is updating the reference for MinIO. For anyone who's not familiar, we use MinIO for caching. If you're not familiar with what caching is for GitLab Runner, imagine that you have, for example, a Node.js application; let's stick with that. Most of the time you have to run an npm install before you run any tests or any builds, and that means you're downloading a bunch of JavaScript packages from the internet.
B
And if you do that for every single job, for every commit you push, that's going to take a really long time, because you have to do an npm install and download everything over and over. So we have this caching feature where you can cache a specific directory. You can cache the node_modules directory, for example, and by doing so you don't have to download everything over the internet. And one of the backends, as we call them, uses MinIO.
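For illustration, caching node_modules between jobs might look roughly like this in .gitlab-ci.yml (a minimal sketch, not taken from the merge request being reviewed; the image tag and cache key are arbitrary choices):

    test:
      image: node:14
      cache:
        key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
        paths:
          - node_modules/
      script:
        - npm install
        - npm test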
B
Okay, so basically, what this is saying is that when you start a MinIO Docker container, it uses this specific set of default credentials, and for simplicity reasons we always use those default credentials. What this one seems to be doing is updating the MinIO username and password, which I agree with.
B
We should try to give users the most secure copy-and-paste-able kind of commands, because, me personally, I do it all the time: I'm not too familiar with some of these technologies, so I just copy the base command and then I forget to reset the credentials or something like that. So it's always good to make it clear to our users that they need to do something to make it secure by default.
B
So let's see what this is doing. It's just updating the Docker command to start the container in background mode, in daemon mode. So if we look at it:
B
So it's going to run the container in the background, and I think that is fine. Before, we were running it in the foreground, so when the user starts the container, it spits out the logs and they can't really exit without terminating the container.
B
So I think that's good, because the way this MinIO container is going to be used, it's running in the background on a server somewhere. So I think passing in the -d flag is good. Then this is setting the MinIO root user and password.
B
I have Docker for Mac running locally on my laptop, so I can just do this. I'm going to pass in a data directory; let's use my temporary directory, so it's removed after I restart my computer. And it's starting a MinIO container, and I'll just pass in the username, which is:
B
We are forwarding port 9000.
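A command along the lines of what is being tried here would be the following (a sketch only: the data directory, credentials, and image tag are placeholders, and on older MinIO releases the variables are MINIO_ACCESS_KEY and MINIO_SECRET_KEY instead of the root user/password pair):

    # Run MinIO detached (-d), with explicit credentials and port 9000 exposed
    docker run -d \
      -p 9000:9000 \
      -v /tmp/minio-data:/data \
      -e "MINIO_ROOT_USER=minio-example-user" \
      -e "MINIO_ROOT_PASSWORD=change-me-please" \
      minio/minio server /data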
B
And that worked, so I like this. To see what we had before, I always enjoy going to our master branch, which is our main branch that we consider the stable branch, and let's just look.
B
Awesome, so I think this looks good. Given that this is just updating the Docker command and one sentence, I feel comfortable merging it myself. There's always somewhat of a judgment call there, right.
B
If it's a large change, you should always go through the technical writing review. If it's a small change, I can merge it myself, so I feel comfortable merging this one, so I'm going to say:
B
So yeah, that should be added to the merge train. It won't be merged immediately; we can see a new pipeline started. We have a feature at GitLab called merge trains, where we run the pipeline on the merge result, meaning the current branch and the master branch merged together, to see if it would cause any failures, and this way we can prevent any broken master pipelines. That will take a while to run, so we can move on to the next merge request that was opened as well, and that was this one.
B
Yeah, this is like the perfect merge request to get started with as well, right? It's just fixing a few grammar mistakes, so I'm just going to approve it and add it to the train as well.
B
That's it. Let's see if we have any more merge requests that came in through the hackathon. We usually use labels for the hackathon, so that is quite useful; so, Community contribution.
B
Awesome, so what I had planned to review is two community contributions, and the first one is around Kubernetes.
B
So for folks who are not familiar, GitLab Runner can run on multiple types of infrastructure, and we call these executors. So imagine that your application is containerized.
B
That means you're most likely using something like Docker, containerd, or Kubernetes, for example, and your CI infrastructure should try to mimic your production environment as closely as possible, so that when you run the tests, it's as close as possible to your production environment. So we support multiple execution environments.
B
The most popular ones that we support are the Docker executor, where we run everything inside of a Docker container with the image that you specify, and the Kubernetes executor, where we run the job inside of a Kubernetes cluster by creating a pod for you, and everything is isolated inside of that pod. So one job is one pod.
B
And the way we do that is by talking to the Kubernetes API, for this specific case. Basically, as I said, we create a pod for each job. Now, the pod can have multiple containers inside of it, and with Kubernetes you get a lot of flexibility from a scheduling perspective, because with Kubernetes you get multiple nodes.
B
So, for example, let's say you have a cluster of three nodes (three compute servers, as I like to call them), and maybe one of the nodes has a faster disk, or one of the nodes is a different CPU architecture. ARM, for example, is very popular nowadays, so you might have a cluster with Intel CPUs and also ARM CPUs, and the specific application that you're running only runs on ARM, for example.
B
So you want to make sure that this job only runs on the ARM nodes, instead of on the Intel ones, for example, on the x86 ones. That is where these powerful Kubernetes constructs around scheduling come into play. There are a lot of them, and I will not go through them all.
B
I will not go through that myself, because this is not a Kubernetes tutorial, but this one is mostly runner related anyway. So we have what we call affinities, and there are two types of affinities. There is a... let me zoom in a bit, sorry about that.
B
I will not go into too much detail about what the difference is, but basically one of them selects on which nodes a pod is going to be scheduled, and the other makes sure that one pod is scheduled close to another pod. So, for example, if you have a database server, it should be next to your application server, just so traffic doesn't have to go over the network, for example.
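For context, in plain Kubernetes terms a pod that must land on an ARM node would carry a node affinity block like the following (a generic illustration of the concept being described, not taken from the merge request):

    apiVersion: v1
    kind: Pod
    metadata:
      name: arm-only-example
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values: ["arm64"]
      containers:
      - name: app
        image: alpine:3.13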
B
Those are a few examples; the documentation covers them here.
B
And this URL goes through all the details, so I will not explain everything there, because that could be a whole two-hour lecture on its own. So, at the moment we do support node affinity. So if we open up our Runner documentation and go to the Runner executors.
B
So if we go to this URL here, we will see all the documentation related to our Kubernetes executor; so, using affinity. And basically what this is saying is that GitLab Runner, when it creates a pod, is going to attach this affinity specification to the job pod, so that it can then be scheduled properly. Right now we support node affinity, and all our GitLab Runner configuration right now lives inside TOML, which is a configuration language similar to YAML, for example.
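As a rough sketch of what that looks like on the runner side, the node affinity section of a config.toml is nested roughly like this (key names reproduced from memory, so treat them as approximate and check the Kubernetes executor documentation for the exact structure):

    [[runners]]
      name = "arm-runner"
      executor = "kubernetes"
      [runners.kubernetes.affinity]
        [runners.kubernetes.affinity.node_affinity]
          [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms]]
            [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms.match_expressions]]
              key = "kubernetes.io/arch"
              operator = "In"
              values = ["arm64"]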
B
So I see config: we're updating the config, and then we're updating the Kubernetes tests.
B
One thing that I really like to see in a merge request is that, if it's adding a new feature, it's also adding documentation, so everything is tied up in one package, kind of thing. Sometimes it happens that you have to have the documentation in a separate merge request, maybe because the merge request is too big, or there is some time pressure, or, like in the case of a community contribution.
B
You might not have time to contribute the documentation, so that is something that we sometimes take on. I usually enjoy communicating this with the contributor, because they might not be aware that they need to add documentation. So it's always a two-way communication: hey, be aware that we need documentation; if you want to add it, you can, but if you don't, we're more than happy to do it.
B
Let's see what else there is here. So this is not touching the node affinity.
B
Okay, let me open this up in my editor as well, so you can look at that.
B
So I have an alias in my git config where I do "git mr origin" and the ID of the MR, and it will automatically clone it for me, which is quite useful for code reviews. I'm going to try and find it; it's this alias right here, which automatically checks out an MR for me.
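A commonly shared form of that alias (an assumption for illustration; not necessarily the exact one used on the call) fetches the merge request's head ref into a local branch and checks it out:

    # In ~/.gitconfig -- usage: git mr origin 1234
    [alias]
        mr = !sh -c 'git fetch $1 merge-requests/$2/head:mr-$1-$2 && git checkout mr-$1-$2' -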
B
Yeah, I think that makes sense. It might be because we're not specifying the env keyword, and we have some automatic transformation from those fields too.
B
Let me double check something. Yeah, and that makes sense, because these structures are not something you can set with environment variables, since they are somewhat complex structures with a bunch of nesting. We usually have environment variables for values like strings. So, for example, let's look at an example: the Kubernetes host, for example, where to find the Kubernetes API.
B
It shows you the description of it: optional, the Kubernetes master host URL. When you register a runner, you can pass in the Kubernetes host environment variable and it will automatically populate the configuration.
B
But since this is a complex structure, it's not just a one-field value, so we don't really specify it; we don't allow the user to specify it through an environment variable, because it's just confusing at that point. But we still like to have a description, even though it won't be used, just as documentation.
B
So, by the way, what I'm using here is called conventional comments, and it's actually a nice way to specify what kind of thing you're talking about in the code review. So, for example, whether it's a suggestion, whether it's a nitpick, whether it's blocking or not blocking, whether it's an issue that you think needs to be resolved, or whether it's:
B
A question, for example. Most of the time it's me asking questions in a code review, like: hey, why is this like this? And I just prefix it with "question", so the other person on the other side of the code review knows what I'm trying to get out of it. Or if it's a chore, like: hey, we need to update the documentation, or:
B
We need to run this script, for example, before we do that. So it's always nice to run through this, and probably what I like the most is the praise, when you say something nice about the code that was contributed.
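To make the idea concrete, conventional comments (as described at conventionalcomments.org) prefix each review remark with a label, along these lines (invented examples, not comments from this review):

    question (non-blocking): why does this field need to be a pointer?
    nitpick: this constant could live next to the other defaults.
    chore: please run the docs generator before merging.
    praise: really thorough test cases for the new parsing logic!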
B
So sometimes what I do is I either give the solution to the user (to the... I keep calling them users and I shouldn't) to the community contributor, or I just tell them: hey, I'll let you figure it out; if you have any issues, reach out to me again. And that is mostly, how to say, a case-by-case basis. I've worked with this contributor before and we have somewhat of a relationship with him.
B
So it's mostly a case-by-case basis, and that is it on this one. One other interesting thing is that we're updating the tests, which is perfect, but what I see is that we're not actually updating the Kubernetes API definitions themselves, so I'm curious how it's updating the response, and that is because... let's see.
B
So api.Affinity is the definition from the Kubernetes API and, as we can see, there is node affinity, pod affinity, and pod anti-affinity on the Affinity type, and I'm guessing the merge request is updating this method. So let's see where that is.
B
So let's see: this is just ranging over the required and preferred.
B
So what does this mean? This is probably the trickiest part about this merge request, right: making sure that we map our config TOML to be the same as the Kubernetes structure in the API. Now, we have had discussions with the community to try to make this easier, both from our perspective and from our users' perspective, where they would just specify the pod specification and we would just translate it, and that's it; but at the moment we have been doing it on a case-by-case basis, which might not be ideal.
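To make the translation concrete, the shape of the mapping looks roughly like the sketch below (hypothetical runner-side field names; only the Kubernetes types from k8s.io/api/core/v1 are real, and the actual GitLab Runner code differs):

    package config

    import api "k8s.io/api/core/v1"

    // Hypothetical TOML-backed config type on the runner side.
    type NodeSelectorRequirement struct {
        Key      string   `toml:"key"`
        Operator string   `toml:"operator"`
        Values   []string `toml:"values"`
    }

    // toKubernetes translates the runner config value into the Kubernetes API
    // type, field by field; this is the kind of mapping being reviewed here.
    func (r NodeSelectorRequirement) toKubernetes() api.NodeSelectorRequirement {
        return api.NodeSelectorRequirement{
            Key:      r.Key,
            Operator: api.NodeSelectorOperator(r.Operator),
            Values:   r.Values,
        }
    }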
B
But that's what we have right now. So what I like to do is actually look at the API specification here.
B
So it's under the Kubernetes API reference, and I just want the pod affinity one, and we have two high-level ones: preferredDuringSchedulingIgnoredDuringExecution and requiredDuringSchedulingIgnoredDuringExecution. And if we look at our code, that is pretty much what this is, right.
B
Okay, so this is looking at the Kubernetes API, and it specifies a pod affinity term, and it has a label selector, namespaces, and a topology key. Okay, that makes sense. And we are... this is about the required one, of course; that's why it's not matching. So we're talking about requiredDuringScheduling, sorry. So the label selector:
B
It's the key, the operator, and the values, and I'm guessing this is what we have here: the match labels and the match expressions.
B
And also, feel free to jump in at any point with any questions that you have; at the moment, I'm more than happy to answer.
B
Makes sense. Now, I will not go through all the translation fields, because that would be very boring for you to watch, but I'll go through some other things that I usually do when I review merge requests.
B
The lucky part is that we're doing more tests here than actual code, which is nice; but the reason we want to have tests is to have coverage of all the cases this can go through, right. So what I usually look at is the coverage report.
B
Unfortunately, this community contribution is quite old, so I didn't think it was still going to have the coverage report available, but, oh, awesome. So part of our pipeline generates the coverage report. So if we look at what we're doing here: most of our logic is inside of the config.go file.
B
And this gives us a nice report where the red part is code that can be covered but is not covered, the grey part is code that is not able to be covered by our tests, and the green part is code that is being covered; and the higher the intensity of the green, the more hits we get. Basically, the more tests we have covering that part of the code, the more hits, which is something that might be useful when thinking about whether we need more tests. If one line is only being covered by one test, that might help you think: oh, should we add more tests or not?
B
We have one hit here, actually; if you hover over it, it will tell you how much coverage you have. So this is only being tested by one test, which is fine in this case, because that is all we really need, right.
B
And, for example, here we see that we added the get pod affinity function (this is a new change we added), but we're not really testing anything about the preferredDuringSchedulingIgnoredDuringExecution one, and this is a clear sign that, hey, we should probably add tests for this in this merge request, to make sure everything is covered. This helps me figure out: okay, what is the state of our tests, and should we add more coverage? There's a whole philosophy here:
B
You should never aim for 100% code coverage, because that does not mean that all your tests and all your edge cases are covered, and it does not mean that your code quality is good either, right, because sometimes you have to go way out of your way, adding a lot more complexity, just to get 100% coverage.
B
So it's mostly a judgment call here as well: should we have coverage or not. Since this is just parsing fields, I would feel a lot more comfortable if we had coverage. So let's see why this is not being covered at all.
B
I know we're at time, so I'm going to pause here to wrap up, but basically what I'm going to keep looking at is whether the translation from our configuration to the Kubernetes API makes sense and whether it's a one-to-one relationship, making sure we have the right amount of coverage, and also making sure that we have the configuration documented inside of our documentation.
A
I don't see any questions either on YouTube or Gitter or here, so yeah, we can wrap it up.
B
We reviewed two merge requests about documentation changes, and those were pretty straightforward; then we have community contributions like these ones, which are a bit more involved, just because there are either a lot of data structures involved or a lot of factors to take into consideration. Basically, we have to keep in mind that if we release the configuration, we have to keep it backwards compatible, because we can't have users updating their GitLab Runner version and their old config not working anymore.
B
So it's something that we have to be careful about a lot of the time. When we try to add a new configuration, we need to make sure that it is something that we can extend in the future, and that it is correct, just because, if we want to change how it works, we need to do it in a backwards-compatible way and make sure we deprecate it slowly and inform our users. We usually do the deprecations once every year, when we create the major releases.
B
So, in this case, for example, in 14.0, that is when we can make some breaking changes, and that is always a long and heavy process, and it is by design long and heavy, just because we value backwards compatibility a lot; if your config keeps breaking with every upgrade, that is not a nice experience for our users.
A
Thank you so much, and thanks everyone for participating and watching. See you at the next office hour, which is one hour from now; it's going to be the Package group office hour.