From YouTube: Intro to CI/CD with GitLab Webinar
Description
Learn about what CI/CD is and how it can benefit your team. We will cover an overview of CI/CD and what it looks like in GitLab. We will also cover how to get started with your first CI/CD pipeline in GitLab and the basics of GitLab Runners.
A: All right, we'll kick things off. Good morning or good afternoon, depending on where you're located. We're happy that you've taken some time to join us today. Today's session is focused on an introduction to CI/CD in GitLab, and Kevin Chassis will be our presenter today. He's a Staff Technical Account Manager here at GitLab. Before I turn it over to him, just a couple of housekeeping items.

First off, this session is being recorded, so you can look for that recording to come through to your inboxes in the next day or so. If you have any questions throughout this session, please go ahead and put them in the Q&A portion of Zoom. We will be able to answer some questions in text through that feature, and then Kevin will also be answering some towards the end of the session as well.
B
All
right
thanks,
taylor,
appreciate
that
welcome
everyone.
Hopefully
you'll
get
a
lot
out
of
today's
session,
as
taylor
mentioned
today,
we're
covering
kind
of
the
the
introduction
to
gitlab
ci
cd.
B
But
even
if
you
have
some
experience,
I
think
there's
some
great
tips
and
tricks,
and
you
will
learn
something
useful
today,
so
we're
going
to
go
through
what
is
ci
cd,
an
overview
of
it
in
git
lab
and
how
to
get
it
running
in
git
lab
and
then
some
details
on
runners
and
then
we
will
do
live
q
a
at
the
end,
but
first
and
foremost,
since
this
is
an
intro.
Let's
start
with,
there
might
be
some
folks
who
are
brand
new
to
gitlab.
So what is GitLab? GitLab is a complete tool for the full DevOps, or DevSecOps, lifecycle. We started as a Git repository manager — that was what really started GitLab many years ago — but one of the first features we added after that was our CI/CD feature, which is what we're going to be talking about today. There was a debate at GitLab about whether we would build that as a separate module or add it into the core application, and one engineer was really adamant that it needed to be in the core application and eventually won that argument. Thankfully they did, because now we've done that with all of our features.
Since then, we have a single application that covers the entire DevSecOps lifecycle, and there are some significant advantages to having everything in one application rather than having things passed between separate modules or separate applications. That being said, for those who are brand new to CI/CD: you may not know exactly what CI/CD is, or you may have heard it mentioned but not known what it stood for. CI stands for continuous integration.
Basically, it's the idea that you're going to continuously build, integrate, and test your code with each small change, rather than integrating, building, and testing your code right before a release, which is how it was done historically. CD is an overloaded term — it can mean two different things: either continuous delivery or continuous deployment. One is where your code is always ready to be delivered, and the other is where it's actually pushed out into production, live. GitLab supports both of those use cases.
That raises another great question: why not just build it manually and run through a bunch of manual tests? The beauty of CI is that, because you're running your CI pipeline and building and testing your code frequently, you can find errors really quickly, while they're still fresh in your mind. Rather than hearing about a problem weeks or months down the road, you're notified about it immediately in the results of the CI pipeline, which we'll get into in a few minutes. That lets you think, while the change is fresh in your mind: okay, what did I just change? Oh, I didn't realize I accidentally broke this other piece, and that automated test is now failing.
All of this allows your team to develop faster and with more confidence — and it's not only more confidence for you and your team, but also for management, and for the customers and stakeholders you're delivering software to. It will reduce the number of errors that make it into production and help ensure that every change is ready to be released. Now, that's not to say it will reduce errors to zero — you're not going to catch a hundred percent with this process — but you're going to catch so much more that the things that do get through will be more minor and more easily addressed, again instilling confidence in everyone associated with the project. Most importantly, it allows releases to become a boring thing rather than a big, stressful event.
Releases just become a mundane thing that happens — maybe every day, maybe every week, maybe even multiple times a day. And thus you can get value into your customers' or stakeholders' hands more quickly, get feedback from them so you can alter course more quickly, and arrive at the solution that is going to be most beneficial for them, or for you and your business. So all of these things are great reasons to use CI/CD.
Before we go into how to do all this in GitLab, I wanted to emphasize the GitLab Flow, the GitLab recommended process. As I mentioned before, GitLab covers all 10 stages of the DevSecOps lifecycle as defined by Gartner. The terms you see here, to which we have aligned different sets of functionality within the single application, actually come from Gartner, and we cover all of them. This gives an idea of what the GitLab recommended process looks like. The concept is that we're going to work in a short-lived feature branch with a small set of changes, get feedback on those changes quickly, iterate on them, and then push them out into production and see the results.
As I mentioned, there are some benefits to this. The idea is that you can start with epics and issues within GitLab, plan all of the work you're going to be doing, discuss it, and break it into different pieces. When an issue is ready to be worked on, you can assign it to an engineer, and then from that issue they can create a merge request — which, by default in GitLab, will actually create a feature branch, an isolated branch of code off your main branch.
Then your developer or developers can start pushing code, either using our Web IDE or working locally and pushing it up to the server. That will trigger our automated CI, which we're going to be talking about today, and can also trigger our security scans. You can get feedback from colleagues and from your manager, do code reviews, and suggest changes that are either mandated by failed tests or vulnerabilities you introduced, or that came out of the code review. All of this can happen within the merge request. Then those fixes get pushed, the process happens again, and you go through this little loop until everyone's happy with the change — and again, this is a small change.
You can also push out to a review app, which is a temporary, or ephemeral, application that might be a running copy of just these changes, used to get feedback or do manual tests. The changes could also be pushed out to a test environment. Then, ultimately, once everyone has reviewed all of these results, the approvals — there are optional approvals available in the merge request — can be done, and when everything is said and done, this small, isolated change can be merged into the main line and trigger our deployment and delivery pipelines to make the code available, or even push it into different environments.
Now, you don't have to use GitLab this way. This is our recommended method — we think you're going to get the best results using it — but if you're using a waterfall or a traditional Scrum process, you can continue doing that. GitLab is very flexible and allows you to use whatever process you want. This is just a recommendation, but it's something we'll touch on several times throughout this session, so I wanted to give you an idea of what our recommended process is. Please do remember that we will be doing questions at the end, or you can submit them through the Q&A function, and my fellow team members on the call today will be able to respond to your questions while I'm presenting.
You may also have related code in other projects or even other repositories, and you can pull all of that together in a commit. A commit is a single set of changes pushed to the repository at one time, and when that commit is pushed up to the GitLab server, it can automatically trigger our CI pipeline, which optionally can include an automated build, automated unit tests, and integration tests.
You can even incorporate UI tests here, if your application allows for those, and you can also do linting tests — any kind of test you can imagine that can be automated, you can run as part of your CI pipeline. And again, when you're first starting off, you don't have to do all of this at once. You can start by just automating your build and then doing a bunch of manual tests. Then you can slowly incorporate those manual tests into your CI pipeline, so that more is being done automatically and less is being done manually. Ultimately, you can end up at a place where the majority of your code is automatically tested with each commit, which gives you that immediate feedback and allows you to make those quick changes. Then, on the deployment side, you can push out to various environments. They can be controlled by GitLab through technologies like Kubernetes or Docker, or they can be static environments — bare metal or VMs that you control and push to.

We support both, and they can be test environments, review environments, staging environments, or production environments. You can declare any number of those and push to each of them. So a ton of flexibility is built in.
So let's take a look at what this actually looks like. What you see here at the bottom is a typical pipeline within GitLab. Each of these ovals represents a job, which is a piece of work that's going to be accomplished automatically — whether that's a build, a test, or a deployment — and jobs can optionally be organized into stages. Stages are, by default, sequential: everything happens in this build stage first, then everything happens in the test stage, and when all of those jobs are done, the pipeline moves to the next stage. That behavior is configurable — there are other ways to run it, and we do have an advanced CI/CD session, so look out for that.
In that session we go into a lot more detail on how you can optimize these. Within a given stage, all jobs defined in that stage run in parallel, and if all of them succeed, the pipeline moves to the next stage. If a job fails, the next stage is usually not executed, although there are, again, ways you can override that. There are other options too: if you click on these jobs in GitLab, you will actually get to see the output.
We save that output, so rather than having to sit there and watch the screen, or tail the output, GitLab keeps all of it for you. You can rerun a job by clicking the retry icon, and we also have options to have jobs triggered manually: the pipeline will pause and wait for someone with permissions to trigger those jobs. We'll talk about how to set all of that up when we get into the details here in a moment.
So what are the different ways to trigger a GitLab pipeline? Well, we've already talked about one: you can push code to your GitLab repository. But you can also run pipelines manually from the user interface — there is a way to trigger a manual run of a pipeline directly from the UI.
You can also create a schedule, using cron-style syntax to define when the pipeline will run. So if you're currently doing nightly or weekly builds and you want to continue doing that, rather than having the pipeline triggered by pushing code to the repository, you can — although we do recommend having the pipeline run every time you push code, because you gain more advantages that way.
You can also have a pipeline triggered by an upstream pipeline — we'll cover that more in the advanced CI/CD session, which is part of this series, so definitely look out for when that's available. And then you can also use the API.
So if you have another system that you want to launch the pipeline from, you can. I've had customers create very simple web pages — even hosted on GitLab Pages — that make this API call, so they can easily trigger a pipeline from an external system. So there are a lot of options for how to trigger your GitLab pipeline. All right, let's go in and take a look at how you can actually set this up within GitLab and what these pipeline definitions look like.
The foundation of every GitLab pipeline is defined in a special file in the root of your repository: the .gitlab-ci.yml file. This file is where you define and tell GitLab everything about what your pipeline needs to do and how it's going to do it. We will walk through this section — this is a very simple example, with just two jobs in two stages.
You can see the stages are defined here, and then within those stages we have jobs. This is the build job. We're going to go through several of the basic job options throughout today's session, and we're going to build one of these files out through examples on the screen.
For this build job, we tell it which stage it's a member of, and we tell it what it's going to do. The script is the series of commands that get executed, and then you can also define rules — or use only and except — to determine when this job is going to be included in the pipeline. Another example of a job is this deploy job; again, it's in a different stage.

It has a different script, but within the deployment you can also tell it which environment it's going to go to, and define variables associated with that job. You can also have variables associated with the entire pipeline — we'll talk a little more about variables later. Likewise, if you're using Docker, you can define the images that you're going to be using for the pipeline, and you can even define those within individual jobs.
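Putting those pieces together, a minimal sketch of the kind of two-stage, two-job .gitlab-ci.yml being described might look like this (the echo commands are placeholders, not the webinar's actual slide contents):

```yaml
# Hypothetical minimal pipeline: two stages, one job in each.
stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the code..."   # stand-in for a real build command

deploy-job:
  stage: deploy
  environment: production            # names the environment this job targets
  script:
    - echo "Deploying the application..."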
So there are a ton of options available. We are going to go through some basic examples, since today is the intro session, but if you want to read more and go deeper, this slide with this link is the key. There are lots of different keywords available, and we're going to hit a bunch of them today, but we don't have time to go through everything.
What you can do is go to that reference — it has examples of all of these keywords and describes all of the options. It is a lot of information; you can look up individual keywords, or read through the whole thing if you want, for all of the different options for configuring your pipeline. We'll hit the basic ones today, and then in the advanced CI/CD session we cover some advanced tips and tricks that dive a little deeper into some of those areas.
However, I highly encourage you to use the stages keyword to define your stages, because when someone else is looking at your YAML file, they will understand what you're doing better than if you just rely on the defaults without declaring them. The order of the stages is the one place in the YAML file where order matters: the order in which you declare jobs, variables, and everything else doesn't matter at all, as long as they're included in the file, but the order of the stages is critical.
You can use the default stages to quickly put together a pipeline and test something out, but if you're building a full pipeline for your application, I would highly recommend that, even if you're going to use the default names, you declare them in an explicit stages section. And keep in mind, stages can be named anything you want.
So if you want two different test stages, with the second dependent on the results of the first, you could have a test1 stage and a test2 stage, and you would just add and declare them here. Remember that stages separate jobs into sections, the jobs within a stage execute in parallel, and jobs are the things that perform the tasks.
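A sketch of that two-test-stage idea in YAML (job names and commands are illustrative):

```yaml
# Explicitly declaring stages, including two sequential test stages.
# Stage names are arbitrary; only their order here matters.
stages:
  - build
  - test1
  - test2
  - deploy

unit-tests:          # runs in test1
  stage: test1
  script:
    - echo "Running unit tests"

integration-tests:   # runs only after every test1 job has succeeded
  stage: test2
  script:
    - echo "Running integration tests"
```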
Now, how do jobs perform those tasks? Well, as I mentioned earlier, they perform them using the script keyword. The script keyword is how you tell GitLab what you want the job to do, and basically anything you can do at the command line — whether you're on Linux or Windows — you can do in a job. As for how you declare your script, there are some very different ways you can do that.
I'm going to show you some examples here, and we can talk about some of the pros and cons. Here's a build code job: I've declared the stage it's going to be in, and in this case it's just going to run a single file. Now, this file has no path on it, so it's assumed to be in the root of your project, and the job will run whatever it contains — this is obviously a Linux script.
It's going to run this .sh script and whatever it contains. An important thing to remember is that jobs download your code, so jobs have access to your entire code base while they're executing, and you can use any of the files within it. Another option, though: if your script is in a subfolder or subdirectory, you can specify the path to it.
Do you have to define the work in a script file? No — but if you do, that file, just like the .gitlab-ci.yml file itself, is versioned in your repository, so you gain the advantage of being able to roll back changes. But you can also have the script execute specific commands directly — so if you're doing something like Maven or npm or whatever, you can.
You can have a single command inline, or you can have the script run a series of commands, and you can mix and match: individual commands alongside entire script files, with directories specified as needed. Basically, when this job runs, it follows the script and executes those commands one by one, and if you call a script file, it takes the commands within that file and executes them in order, as if you had typed them in manually at the location where the job is running.
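Those script forms can be sketched like this — a root-level script file, a pathed script file, and a plain inline command, all mixed in one hypothetical job (the file names are placeholders):

```yaml
build-code:
  stage: build
  script:
    - ./build.sh                 # script file in the repository root
    - ci/scripts/package.sh      # script file in a subdirectory
    - echo "Build finished"      # plain inline command
```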
All right, so now we're going to walk through an example and build out our own pipeline.
If you're using Docker or Kubernetes or any kind of container service, you need to tell it what image to use. The image keyword lets you tell GitLab what your default image is going to be for this pipeline, and you can override that for any individual job.
If you use the image keyword outside of a job, it becomes the default image for every job, unless you use the image keyword within a job definition, which we'll see later. It defaults to Docker Hub, so if you simply provide the name of a public image and its tag, it will use that image. However, you can also do this with any container registry that you want.
So we're going to go ahead and start off the example pipeline that I'm creating during today's session using this image right here.
Now, if you're using containers, you may need some additional containers for your application to run — or you may be able to run everything in a single container. For the former case, we also support the ability to include services.
So if you need, say, a Postgres container, you can include it here; that follows the same rules as the image definition. You can also define variables. If you define variables outside of a job, those variables will be available to every job — so if you have values that you want available to all jobs, specify them outside the job definitions, and then, if you need to override one, you can specify a job-specific value.
You can also define and store variables within the GitLab UI, outside of the CI YAML, and there are some advantages to doing that — for instance, you can have masked and protected variables, which we'll talk about later.
So here's what our YAML looks like so far: we have our image, we've brought in the service container, we've declared a global variable for the entire pipeline, we've defined our stages, and we're ready to add our first job. The name of the job acts as its keyword, which means you can't reuse any of the existing reserved keywords — your jobs can't share a name with one of those — but you can name a job pretty much anything else you want.
We're adding a job just to deploy this, and it's going to be part of the deploy stage, and then it's going to run this command. You'll notice that, even though it's a single command, it's written on its own line as a list item. I actually recommend you do this, because if you need more commands later, it's quick and easy to add additional lines to your script, and you can have as many as you want. Whereas if you write it inline, you have to modify it more — so I recommend always using the list form, though remember from before that a single command can also be written inline.
In this case the environment is part of a Kubernetes cluster, but it could also be a static environment. Then the when keyword controls manual jobs: using when: manual creates that little play button we saw earlier, so you can control when the deployment takes place. These are the three main aspects of the environment, and we'll go into them in a little more detail later on.
Now, there are a couple of different ways to control when your job is included in the pipeline. Here I'm going to introduce the concept of only and except. This is actually deprecated — it's very limited in what it can do. It still exists and you can still use it, but we're not adding any functionality to only and except; it's just an easy way to get started quickly. However, we now have a complete rules engine — there's a link to the rules documentation here — and it's much more powerful, allowing you to do a lot of really cool things you couldn't do with only and except. I'll show you an example in a moment.
So in this case, this says the job will only be included if a change was made to a branch, unless that branch was named main — effectively, only include this job if the change was on a feature branch. It's not a change being made to the main branch, but it is being made to a branch. So there are some limited options with only and except — I'll show you a few examples coming up — but we go into the rules syntax in much more detail in the advanced CI/CD session, so keep an eye out for that.
As an example, to express the exact same condition as that only and except, the rules version uses the rules keyword with a condition that checks a built-in variable holding the name of the branch: if the branch being committed to is named main, the job is not included; otherwise it always will be. As you can see, this is a little harder to read, but it is so much more powerful, and there is so much more you can do with it, including evaluating your own variables and all of our built-in variables, and even reacting to changes in certain directories or files. And then, with the environment, we declare it and give it a name and a URL. So this is what our job looks like now.
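The two equivalent forms can be sketched side by side — a hypothetical "run only on feature branches" job, first with the deprecated only/except syntax, then with rules using the built-in `CI_COMMIT_BRANCH` variable:

```yaml
feature-job-old:
  script:
    - echo "On a feature branch"
  only:
    - branches            # any branch...
  except:
    - main                # ...unless it is main

feature-job-new:
  script:
    - echo "On a feature branch"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: never         # skip the job on main
    - when: always        # otherwise include it
```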
Now, there are a couple of other options for your scripts: you can include a before script and an after script. Before scripts are, obviously, executed before your main script — effectively, at compile time the before script is just concatenated onto the front of your script.
One way to use this is to declare a global before script outside of a job, so that the same before script is applied everywhere — same thing with the after script. But if you just want to organize commands into a "before" section and a main script for organizational purposes, you can also declare them within a job.
The before script, like I said, literally just gets concatenated with the rest of the script. The after script, however, is slightly different: it will run even if the job fails. You can force a job to fail by setting a non-zero return code, or a job fails if an error is encountered within the script itself — both result in a job failure. The nice thing is that your after script will run even if the job fails.
The other thing that's useful to understand — and this is a question we often get asked — is: how do I pass things between jobs? Say I have a build job, and artifacts got generated. If those aren't available to my test job, then I just have to rebuild everything — so what's the point of having a build job? Well, there are two different ways to pass files between jobs.
One is called cache and the other is called artifacts. The main difference is that the cache only exists for the lifetime of the one pipeline. So if you have temporary files that you need access to within the life of the pipeline, but don't care about afterward, use cache — cached files exist only for the life of the pipeline, and then they're destroyed.
Artifacts, however, are collected and saved, and there are a lot more options with them. Artifacts are saved after the pipeline is done; you can configure how long to keep them within the project, and they're available for download within the GitLab UI. The last successful pipeline will always have its artifacts available for download or viewing from within the GitLab UI. So artifacts are saved for a configurable period of time; cache is not.

You can also make saving artifacts conditional: here it says that if the job was successful, save the artifacts; otherwise it would save whatever artifacts were created, regardless of whether the job failed. And again, it uses the paths keyword to define any number of files or directories — individual files, wildcarded paths, or entire directories, as you see here.
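Sketching the contrast in YAML — cache for pipeline-lifetime files, artifacts for files kept after the pipeline, conditionally and for a configurable time. The paths are placeholders:

```yaml
cache:
  paths:
    - .npm/                    # temporary dependency cache, shared by jobs

build-job:
  stage: build
  script:
    - ./build.sh               # hypothetical build command
  artifacts:
    when: on_success           # only keep artifacts if the job succeeded
    expire_in: 1 week          # how long GitLab retains them
    paths:
      - bin/                   # an entire directory
      - target/*.jar           # a wildcarded path
```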
So here is what our file looks like so far: we have everything we had before, but now we've added a cache, and next we're going to add our build job.
As I mentioned before, the order in which you declare jobs does not matter. We have a deploy code job here, and even though it's the first job listed, because it's in the deploy stage it will execute after this build job: the deploy stage runs after the build stage, and this build job is part of the build stage. And again, we use our script.
And then this cache, because it was defined outside of a job, applies to all jobs, whereas this build job will save the artifacts generated from this job and only this job — and, in this case, only if the job was successful. So if the job encountered any error, those artifacts aren't saved.
So in this case, what ends up happening is that you get the bin and target directories from the build job, but you get everything put into the cached path regardless of which job created it. So do keep in mind where you declare these: declared outside of any job, they apply to all jobs by default; declared inside a job, they apply only to that job — same thing with image, variables, and so on.
Now — we're going to talk about runners in a few minutes, but you can also use CI tags. Jobs are executed by runners, and runners run on machines; we'll cover those shortly. You can use a CI tag to force jobs to run on a specific runner. Say you had a runner on a specific operating system, or with a specific level of computing power, or even the one machine that has permissions to deploy to your production environment.
You can use tags to force jobs to run on specific runners. You define those tags in the job definition using the tags keyword. You can have as many as you want, and any runner that carries all of the tags defined by the job is eligible to run that job, even if the runner has more tags than required — I'll show you an example of that later.
So here is what our file looks like so far: to the build job we have added the tags osx and ios, so it's looking for a runner that has both of those tags. Maybe this build is for an iPhone app and it needs not only the operating system but the iOS resources as well, since those might not be available on every Mac-based runner you have. Speaking of runners, let's go through what they are in a little more detail.
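The tagged job described above might be sketched like this — only a runner carrying both the osx and ios tags would be eligible to pick it up:

```yaml
build-ios:
  stage: build
  tags:
    - osx
    - ios
  script:
    - ./build_iphone_app.sh    # placeholder build command
```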
We are almost done, and at the end we will take live questions — any questions that weren't answered by the team throughout the session. Okay, so how do these jobs get executed? As you've seen, we have our magical .gitlab-ci.yml file, which lives in the root of your repository and is the set of instructions for what all the jobs are going to do, when they're going to be included — all that great stuff we just covered.
But how are they executed? Well, the GitLab Runner is the magic that makes that part happen. It is a lightweight agent that runs the CI/CD jobs. The two of these together — the YAML file and the runner — are what make everything within GitLab CI/CD work. The runner can be installed on any platform where Go binaries can execute, and that includes almost all of the major operating systems. You can even put a runner on your laptop if you want, you can put it on any number of servers, and you can run it in Kubernetes.
B
You
can
run
it
in
docker
machine.
You
can
do
it
in
a
lot
of
different
places,
any
place
that
go
can
run
and
pretty
much
gitlab
runner
can
test
any
programming
language
just
a
few
are listed
here,
but
it
really
can
test
any programming language. Runners
are
generally
created
by
an
administrator
of
the
instance
or
an
owner
or
maintainer
of
your
group,
or
project,
and
runners
can
be
available
at
the
instance
level,
the
group
level
or
the
project
level.
B
Let's
go
through
each
of
these,
so
shared
runners
are
available
to
every
project
within the instance, with similar requirements,
they're
included
in
a
pool
for
all
of
the
projects,
they're
managed
generally
by
the
administrator
of
your
instance
or
your
group,
and
they
can
be
set
up
to
be
auto
scaling
where
it
spins
up
runners
dynamically
to
meet
demand
and
spins
them
down
when
not
needed.
B
A specific runner, on the other hand,
would
only
be
specific
to
the
project
or
potentially,
the
group
that
it
was
registered
with
it
is
generally
managed
by
the
maintainers
or
owners
of
that
group
or
project,
and
there
you
know
most
of
your
jobs
will
be
fine
by
being
executed
by
shared
runners,
specific
runners
generally
you're
going
to
want
to
use
for
specific
use
cases
that
would
not
apply,
and
thus
you
wouldn't
have
a
shared
runner
that
meets
it.
B
So,
for
example,
if
you're,
the
only
group
in
your
organization
that
does
ios
builds,
you
might
set
up
specific
runners
for
those
that
wouldn't
be
part
of
the
shared
runner
pool.
Potentially.
B
Tagged
and
untagged,
so
again
you
can
have
an
untagged
runner
that
will
be
available
to
run
jobs
that
have
no
tags.
So
if
there's,
if
the
job
has
no
tags,
it
can
be
run
by an untagged runner. Sorry, untagged runners will only run jobs that are untagged.
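That tagged-versus-untagged matching could be sketched like this (job names and the "windows" tag are illustrative):

```yaml
# An untagged job: can be picked up by any runner that is allowed
# to run untagged jobs.
lint:
  script:
    - echo "no tags here"

# A tagged job: only runners carrying the "windows" tag are eligible,
# even if those runners also have other tags.
windows-build:
  tags:
    - windows
  script:
    - echo "needs a windows runner"
```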
B
If
your
job
is
tagged,
then
it's
going
to
look
for
runners
with
that
same
tag.
So,
for
example,
if
I
have
two
runners-
one,
that's
just
tagged
with
windows
in
this
case
and
another
that's
tagged
with
windows
and
something
else,
because
this
job
is
only
tagged
with
windows
either
of
those
two
runners
would
be
eligible
because
they
both
have
the
windows
tag.
It's
irrelevant
that
the
second
runner
has
an
additional
tag.
It
doesn't
have
to
be
a
one-to-one
match
as long as the tags match.
B
So
in
this
case
it would
be
windows,
but
you
saw
an
example
with
ios,
but
you
can
also
do
this
for,
if
you're
doing
like
ai
machine
learning
and
you
need
those
jobs
to
be
run
on
a
specific
machine
with
more
horsepower,
you
could
totally
tag
it,
and
these
tags
are
just
whatever you declare them to be
and
when
you
tag
the
runner
and
tag
the
job
they
just
need
to
match,
it
doesn't
matter
what
the
name
is.
B
It's
purely
just
has
to
match,
and
then
finally,
we
have
protected
and
non-protected. Protected
runners
can
only
run
jobs
from
changes
to
protected
branches
or
tags
on
those
protected
branches,
and
these
are
usually
used
with
sensitive
runners
that
have
access
to
deploy keys
or
other
sensitive
capabilities.
Vast
majority
of
your
runners
will
be
non-protected
that
can
run
from
any
branch
or
any
build.
B
There
are
some
additional settings. When
you
register
the
runner,
you're
gonna
tell
it whether you're associating it with
the
instance
level
the
group
level
or
the
project
level,
and
then
once
the
runner
is
registered,
you
can
go
into
the
ui
and
modify
the
settings
for
that runner. You can
decide
whether
it's
active
or protected,
whether
it's
eligible
to
run
untagged
jobs,
whether
it's
locked
to
the
project
and
it
cannot
be
assigned
anywhere
else.
What
the
runner's
ip is, the
description
and
any
tags
you
want
to
specify.
B
You
can
do
that
when
you
register
it
or
modify
them
here
and
the
timeout,
so
runners
can
have
different
executors
or
basically
how
they're
going
to
execute
the
code.
When
you
register
the
runner,
you
have
to
tell
it
which
executor
you
want.
The
most
common
are
shell.
So
it's
going
to
run
it
as
if
you
were
sitting
at
the
terminal.
B
Docker
is
another
very
common
one,
as
well
as
docker,
machine
and
kubernetes,
and
so
these
will
be
run
within
those,
the
docker
environments
or
the
kubernetes
environments,
and
again
use
those
image
keywords
and
all
that
good
stuff.
The
shell
one
will
ignore
the
image
keyword,
obviously
because
it's
irrelevant.
we
also
do
support
virtualbox,
parallels
and
ssh.
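The image keyword mentioned above could be sketched like this (image choice and test command are assumptions, not from the webinar):

```yaml
# With a docker or kubernetes executor, the image keyword picks the
# container the job runs in; a shell executor ignores this keyword.
unit-tests:
  image: python:3.11    # hypothetical image choice
  script:
    - pip install pytest
    - python -m pytest
```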
B
Ssh
is
similar
to
shell,
but
it
can
be
useful
when
you
don't
want
to
install
the
runner
software,
even
though
it's
lightweight,
sometimes
you
can't
install
it
on
the
machine
where
it
needs
to
execute.
So
you
install
it
on
another
machine
that
can
ssh
into
the
machine
where
it's
going
to
be
executed,
but
do
keep
in
mind
that
it
does
not
support.
Caching,
it
will
download
the
artifacts,
but
it
will
not
support
the
caching
feature
we
talked
about
earlier
and ssh is bash only.
B
All
right,
so
that
is
everything
I
want
to
cover
today.
We're
gonna
switch
it
back
over
to
taylor
to
do
our
poll
and
then
we'll
do
some
live
q a. So
thank
you,
everyone
for
your
attention
and
I'll
turn
it
over
to
taylor.
A
Thank
you
kevin.
I
just
launched
that
poll.
It's
just
a
couple
of
quick
questions,
so
we'd
love
to
get
your
feedback,
as you participated in
today's
session.
So
thank
you
in
advance
for
doing
that,
as
you're
doing
so,
we
have
a
couple
of
questions
here,
I'll
pose
the
first
one
to
you
here.
Kevin
would
an
after_script
be
used
to
upload
an
artifact
to
a
repository.
B
You
you
could,
if
you
needed
to
force
that,
but
artifacts
are
automatically
uploaded
at
the
end
of
the
job
back
up
to
gitlab
and
they're
automatically
downloaded
when
a
new
job
runs.
So say
you
have
two
jobs
in
two
different
stages:
the
first
one
creates
artifacts.
B
When
the
job
is
completed,
those
artifacts
are
automatically
uploaded
back
to
gitlab
and
are
available
to
browse
download
view
so
on,
and
then
the
next
job
that
runs
will
download
the
code
first
and
then
it
will
pull down
all
the
artifacts
and
then
it'll
execute
all
the
scripts
that
are
associated
with
it.
If
you
needed
to
push
those
artifacts
somewhere
else,
then
yes,
you
could
do
that
in
an
after_script.
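The answer above could be sketched like this (the external upload URL and build commands are placeholders; only the artifacts upload back to gitlab is automatic):

```yaml
# Artifacts listed under "artifacts" are uploaded to GitLab
# automatically when the job ends, so after_script is only needed
# for an extra push to somewhere external.
build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/
  after_script:
    # Hypothetical external upload; replace with your own target.
    - curl --upload-file dist/app.tar.gz https://example.com/upload
```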
B
Yeah,
if
you
keep
reading
questions
out
or
if
anyone
else
wants
to
chime
in
with
questions,
I
have
not
been
monitoring
the
q a, sorry.
A
Here's
another
one
need
to
know
how
to
work
with
package
registry.
Could
you
help
with
that.
B
Yeah
so
yes,
there
are
multiple
package
registries
that
are
available
within
git
lab
and
being
that
today
is
kind
of
an
intro
class.
We
didn't
go
into
detail
on
that,
but
there's
definitely
information
on
our
website
and
in
fact
maybe
one
of
my
colleagues
can
put
a
link
to
it
in
the
q
a
or
respond
to
that
one
with
a
link.
B
But
there
are
several
package
ones
that
are
up
available
and
then
there
are
ways
that
you
can
add
commands
into
your
ci
yaml
file
that
will
interact
with
the package registries
if
you
are
using
external
ones.
So
if
you're
using
the
gitlab
built-in
ones,
it's
a
little
bit
easier,
but
if
using
external
ones,
you
just
need
to
add
those
into
your
scripts
to
pull
or
push
into
those
external
registries.
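Interacting with an external registry from a job's script could look roughly like this (registry URL variable and the npm commands are assumptions for illustration):

```yaml
# Sketch of pushing a package to an external registry from a job.
# EXTERNAL_REGISTRY_URL would be a CI/CD variable you define yourself.
publish:
  stage: deploy
  script:
    - npm config set registry "$EXTERNAL_REGISTRY_URL"
    - npm publish
```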
B
So
yep,
that's
the
basics
of
it.
We
don't
have
enough
time
to
go
into
a
lot
more
detail,
because
I
want
to
make
sure
we
get
as
many
questions
as
possible.
So
hopefully
that
points
you
in
the
right
direction.
A
Awesome,
here's
another
one
that
just
came
through
can
the
upload
to
git
lab
be
disabled,
we
use
artifactory
and
don't
want
to
pay
for
double
storage.
B
Okay,
yeah,
so
what
I
would
do
in
that
case
is
I
yeah.
I
think
what
you
would
do
is
just
not
even
use
the
artifact
keyword
and
just
make
the
push
and
pull
from
artifactory. You're
basically
going
to
have
to
duplicate
that
functionality
yourself
within
the
scripts
to
to
push
and
pull
those.
I
don't
know
if
any
of
my
colleagues
on
there
has
a
better
answer:
I'm
not
an
expert
on
artifactory
one
of
the
challenges
of
being
a
tam
at
gitlab.
B
Is
it
intersects
with
so
many
different
technology
areas
that
you
can't
possibly
know
all
of
them?
But
I
don't
know
if
any
of
my
colleagues
wants
to
chime
in
but
feel
free
to
come
off
mute
and
do
that
otherwise,
I'll
leave
you
with
my
answer.
A
So
we
have an
attendee
here
asking
about
information
about
our
certification
program.
Can
you
speak
to
that
at
all
kevin.
B
Yes,
definitely
so
we
do
have
a
certification
program.
We
have
several
different
certifications,
including
ci,
cd,
there's
also
a
devsecops one,
and
I
think
they've
actually
added
a
couple
more.
B
Hopefully
one
of
my
colleagues
can
get
you
the
link
that
gives
you
the
details
where
you
can
kind
of
peruse
it
generally.
The
way
it
works
is
if your organization wants it, it's
usually
associated
with
our
professional
services,
full
training
courses.
So
what
we
gave
you
today
is
just
kind
of
an art-of-the-possible
one-hour
session.
B
There
are
full
training
courses
with
a
professional
trainer
delivered
by
professional
services
or
a
partner
and are
eight
hours
broken
into
two
four
hour
sessions
over
two
days
or
possibly
one
eight-hour day
and
at
the
end
of
that
course,
all
participants
are
eligible
to
go
through
the
extra
exercises
and
you
basically
in
addition
to
completing
the
course
you
submit
a
project
that is defined. It's fairly simple,
but
they
define
a
project.
B
You
have
to
go
through
several
steps
and
you
submit
that
to
the
instructor
to
grade
and
if
they
pass
you,
then
you
get
your gitlab ci cd certification
and
there's
other
ones
available
as
well.
So
hopefully
one
of
my
colleagues
can
get
you
the
link
for
that
and I see that was answered in
the
q
a
and
you
can
read
more
detail,
but
that
gives
you
an
idea
of
how
the
program
basically
works.
A
Great,
thank
you,
I'm
not
seeing
too
many
other
questions
right
now.
Maybe
we'll
give
a
give
a
few
more
minutes
here.
B
Oh, I'll take
this
question.
I
just
had
a
chance
to
pull
up
the
q
a
so
the
question
is:
is
there
a
way
for
us
to
change
the
internal
variables?
The
ci
underscore
variables
in
the
variable
section,
from
a
job
definition?
Yes,
and
we
actually
cover
that
in
the
advanced
one,
and
we
cover
all
of
the
variable
hierarchy,
because
there's
there's
several
different
ways
where
you
can
change
variables
right,
you
can,
you
can
declare
them
in
the
global
section.
You
can
declare
them
in
a
job.
B
You
can
set
them
in
a
manual
run.
You
can
set
them
in
the
project
or
the
group
level
and
there's a
specific
hierarchy
where
it'll
override
them,
and
so
you
can
override those by assigning that variable a new value, and then it will get that value and
use
it
later.
You
can
declare
your
own
variables
or
you
can
use
our
our
built-in
ones.
So,
yes,
you
can
override
those
those
internal
variables.
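The override behavior described above could be sketched like this (the variable name and job are invented; job-level values take precedence over global ones):

```yaml
# Variable precedence sketch: the job-level value overrides the
# global one, so this job sees DEPLOY_ENV=staging.
variables:
  DEPLOY_ENV: "production"

deploy-staging:
  variables:
    DEPLOY_ENV: "staging"
  script:
    - echo "deploying to $DEPLOY_ENV"
```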
B
Okay,
another
question:
what
is
the
gitlab
ci
equivalent
of
a
jenkins
workspace,
to
execute
a
script
from
a
different
repo
within
git
lab?
Is
that
something
discussed
in
the
advanced
webinar? Yeah.
We
do
cover
how
you
can
do
multi-project
pipelines
and
yeah.
We
could
definitely
cover
that
in
the
advanced
one.
Yes,
that
is
covered
in
the
advanced
one.
So
there's
a
couple
different
ways:
you
could
do
that
you
can
do
a
multi-project
pipeline.
B
You
can
also
pull
in
code
from
a
related
project
and
the
details
of
that
would
be
on our docs page, and
we
do
cover
the
multiple
project
pipelines
in
the
advanced
session.
So.
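A multi-project pipeline like the one mentioned above could be sketched with the trigger keyword (the downstream project path and branch are placeholders):

```yaml
# Sketch of a multi-project pipeline: this job kicks off a pipeline
# in a different project when it runs.
trigger-downstream:
  stage: deploy
  trigger:
    project: my-group/other-project   # hypothetical downstream project
    branch: main
```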
B
While
we're
waiting
actually
so
somebody
had
kind of posted a suggestion as a question,
but
it's
really
more
of
a
thing
to
share
with
the
rest
of
the
attendees.
So
the
statement
is
for
using
an
external
private
registry
to
pull
images
for
the
job
container
runtime
with
your
own
private,
gitlab
runner.
You
need
to
set
up
a
ci
variable
with
the
authentication
and
then
there's
a
link.
B
There's
a
link
there.
I
don't
know
if
everyone
can
see
that
or
maybe,
if
we
just
I'll
type
an
answer-
and
I
think
that'll
put
it
into
the
into
the
q
a
for
everyone
to
see,
and
then
we
got
a
question.
What
is
the
difference
between
git
lab
and
github? Great question.
So
they
are
two
different
platforms
and
they
both
utilize
the
same
open
source
underlying
technology
of
git.
B
They
were
actually
started
around
the
same
time
for
two
slightly
different
purposes,
but
we
both
expanded
and
we
both
cover several different areas surrounding
git,
so
git
lab
was
started to make it simpler to install your own git
server
and
work
with
it
locally
and
github
was
originally
started
to
create
a
web-based
version
of
git
that
could
be
used
in
general.
Now
git
lab
has
its
own
gitlab.com,
which
serves
that
purpose.
B
Github
also
does
have
a
version
that
you
can
install
yourself.
They
both
cover
a
lot
of
the
same
things.
Gitlab, we believe, covers
more
of
the
entire
devsecops
lifecycle
in
a
single
application.
The
other
big
difference
is
gitlab is still open source
and
independent
github
was
purchased
by
microsoft
a
number
of
years
ago.
So
those
are
some
of
the
differences
there. Actually, if you google gitlab versus github,
there
is
a
good
comparison
page
on
our
website.
B
That
is
public,
and
you
can
even
suggest
changes
to
it.
If you want, you can find that and read through it.
B
Let's
see
a
couple
other
questions:
how
can
you
specify
different
scripts
for
windows
and
linux
on
the
yaml
file?
Yeah, I
don't
know
if
my
colleagues
have
a
better
suggestion,
but
the
way
I
would
do
that
is
you
probably
need
to
create
two
separate
jobs
and then define the two separate scripts, like the
batch
or
powershell
and
then
the
sh
files.
B
So,
for
example,
if
you
need
to
do
a
linux,
build
in
a
windows
build
for
a
specific
application
and
you
want
those
commands
to
be
executed
on
those
two
different
operating
systems,
so
you
can
use
the
tags
to
force
it
onto
runners
and
you
can
have
two
separate
scripts,
but
I
I
don't
think
there's
a
way
to
handle
a single set
of
commands
for
both
of
those.
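The two-job approach described above could be sketched like this (job names, tag names, and script paths are assumptions; each script matches the shell on its runner):

```yaml
# Separate jobs per OS, routed to the right runners with tags.
build-linux:
  tags: [linux]
  script:
    - ./build.sh       # sh script for the linux runner

build-windows:
  tags: [windows]
  script:
    - .\build.ps1      # powershell script for the windows runner
```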
B
Unless
again,
one
of
my
colleagues
has
an
idea
on
that.
Another
question
is:
how
is
secret data managed in gitlab?
There's
a
couple
different
options
for
that.
We
do
have
the
ability
to
declare
variables
and
mark
them
as
masked
in
our
ui.
B
However,
if
they're
truly
secret,
we
would
actually
not
recommend
doing
that,
we
would
actually
recommend
we
have
great
partnership
and
integration
with
hashicorp vault,
and
I
would
actually
recommend
you,
you
utilize,
that
the
integration
makes
it
very,
very
easy
to
use
those,
and
you
can
do
that
with
the
the
free
version
or
if
you
have
the
enterprise
version
of
hashicorp vault.
It
gives
you,
some
additional
features,
but
you
don't
need
those
to
do
that
basic
functionality.
B
So
I
would
definitely
recommend
taking
a
look
at
either
of
those
if
the
masked
variables
doesn't
meet
your
need,
where
the
only
person
who
can
unmask
that
is
a
maintainer
or
an
owner
of
that
group
or
project.
If
you
need
a
higher
level
of
security
for
those
secrets,
then
I
would
look
at
using
hashicorp vault.
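The Vault integration mentioned above can be expressed in the CI file roughly like this (a sketch: the secret path, field, and job are placeholders, and the native secrets keyword is a paid-tier feature):

```yaml
# Sketch of pulling a secret from HashiCorp Vault into a job.
# The value lands in the DATABASE_PASSWORD variable at runtime
# instead of being stored in GitLab itself.
deploy:
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops   # hypothetical path/field@mount
  script:
    - ./deploy.sh
```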
B
Do
runners
need
an
ip
address
per
runner?
No,
they
don't
so you can actually install
the
runner
multiple
times
in
the
same
physical
machine
and
register
it
multiple
times
when
it
registers,
it'll
get
a
unique
id
and
then
each
instance
of
the
runner.
The
way
it
does
the
communication-
and
I
forgot
to
emphasize
this-
so
thank
you
for
bringing
this
up.
B
There's
no
communication that's
initiated
from
server
to
runner,
it's
always
from
runner
to
server.
So
each
runner
periodically
pings
the
server
where
it
was
registered
and
says:
hey.
Have
you
got
anything
for
me?
You
know
I'm
alive,
I'm
this
runner
and
do
you
have
anything
for
me
and
then
in
the
response.
The
server
will
go.
Oh
yeah,
I
got
a
job
for
you
to
do
and
it
will
execute
the
job
and
respond
back
with
the
results
that
the
server
will
then
store
and
all
that
good
stuff.
B
So,
yes,
you
can
register
the
same
runner
multiple
times
on
the
same
server.
B
And
is
there
a
runner
limit
per
instance?
No,
I
don't
believe
there
is.
B
And
then
what
are
the
disadvantages
of
a
ci
cd
environment
yeah?
We
talked
about
the
advantages
honestly. I think what
people
will
say
is
that
they,
you
know
they
already
have
all
of
their
stuff
built
out
or
they
have
their
processes
and
procedures
and
they
want
to
go
through.
They
don't
have
the
time
to
take
their
build
process
and
convert
it
into
scripts
and
yaml.
B
But
I
would
counter
that
with
you
know,
just
take
one
piece
of
it:
put
it
into
a
single
job
and
start
automating
that
way,
and
then
each
sprint
make
it
your
goal
to
add
one
more
job.
Add
one
more
test
in, and before you know it,
you'll
have
everything
over
into
ci
cd.
It
wouldn't
have
taken
you
any
significant
extra
amount
of
time
and
your
system
can
then
grow
from
there.
B
So
not
really
a
disadvantage,
but
some
people
would
say
that
they're
overwhelmed
by
you
know
the
process
of
getting
started,
but
I
say
start
simple
and
build
from
there
all
right.
A
Well,
thanks
kevin,
and thank you all for watching. Awesome.
I
think
we
got
through
all
those
great work on the q a.
thanks,
everyone
for
submitting
those.
I
think
we
had
a
really
engaging
session
today,
so
appreciate
you
once
again
taking
the
time
like
I
said,
we'll
be
sending
out
the
recording
here
in
the
next
day
or
so,
and
with
that
we
will
wrap
up
today's
session
have
a
good
day.
Everybody.