From YouTube: GitLab Automated Test Suite Overview
C: Alright, so let's start. But before that, can you see my screen, or at least a little of my screen? Okay, cool. So within the GitLab directory, the actual source, we have a QA directory. The QA directory houses all of our stuff: the framework, the tests, and all of that. So we have a couple of things. We have a QA directory under the QA directory, which you'd think would be extraneous, but it's there to separate the actual framework code.
C: We have a spec folder under the QA directory, which is actually specs for our framework, and then under the QA directory you have the framework itself. I don't know, I guess you could argue that it's redundant, but that's okay. Let's just continue, move on. So under the qa/qa directory we have page and resource. All of these are the actual page objects, and the resources that resemble GitLab models within the Rails framework.
C
So,
for
instance,
you
have
a
project,
you
have
a
group,
these
are
models
in
the
rails,
environment
and
they're.
Also
the
same
with
our
framework.
We
have
issues
groups,
you
know
CI
variables,
project.
All
these
are
resources
that
can
be
fabricated
via
the
UI
and
the
API
and
they're
implemented
in
each
of
these
resources.
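The dual fabrication paths described above can be sketched roughly like this (a hedged illustration, not GitLab's actual resource classes; the class and method bodies here are made up for the example):

```ruby
module QA
  module Resource
    # A minimal sketch of a resource that can be fabricated either through
    # the browser UI (page objects) or through the REST API.
    class Project
      attr_accessor :name

      def self.fabricate_via_browser_ui!
        new.tap do |project|
          yield project if block_given?
          project.create_via_browser_ui
        end
      end

      def self.fabricate_via_api!
        new.tap do |project|
          yield project if block_given?
          project.create_via_api
        end
      end

      def create_via_browser_ui
        # Drive page objects here: click "New project", fill in the name, submit.
      end

      def create_via_api
        # POST /projects with { name: name } using the API client here.
      end
    end
  end
end
```

A spec would then fabricate the resource by whichever path suits it, e.g. `QA::Resource::Project.fabricate_via_api! { |p| p.name = 'demo' }`.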
C: Browser UI: this is basically our entire suite right here. Under the browser_ui directory we have it divided into specific stages, and the numbering is completely arbitrary... it does not matter... actually, you know what, it does: it reflects the plan. Let me just see if I can find that image. There, yep, so it actually mimics this: Manage on top is number one, then two, three, four, five, six, seven, eight, and then Secure and Defend aren't really part of it. Actually, hang on.
C: ...and then the tests. Under api, these are also end-to-end tests, but they're a lot more specific to testing the API. The users spec, for instance: you can open this up real quick, and you can see that it doesn't interact with the UI at all. It just sends requests using the API client, kind of our own built-in little API tester that we have. So it does a GET request and expects that the status equals 200.
C: I guess we test what we need. Mostly the API is covered in the feature specs within GitLab, so typically the API specs will be kind of redundant, to say the least. For instance, GET /users: that's kind of a critical URL, and if that doesn't work, then the entire application breaks. I can assure you that this will be tested in the feature specs. So a lot of these are kind of redundant.
C: There are some cases... I don't know, we don't have many. In fact, I haven't really looked at them in a while, but here's one: a user creates a project with a file and deletes them afterwards. So it looks like, yeah, we do a 201: we did a POST to the create-project URL with some parameters, and we expect that it was created, HTTP 201, and then we expect the JSON body of the response to be a hash including these two properties, and so on.
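The spec just described can be sketched in miniature like this. To keep it self-contained (no server, no HTTP), a fake API client stands in for the suite's real one; the client and its response shape are made up for illustration, but the assertions mirror the ones described above:

```ruby
require 'json'

# Minimal stand-in for the suite's API client (illustrative only):
# pretends the server created the project and echoed it back.
class FakeApiClient
  Response = Struct.new(:code, :body)

  def post(_path, params)
    Response.new(201, JSON.generate({ 'id' => 1, 'name' => params[:name] }))
  end
end

client = FakeApiClient.new
response = client.post('/projects', name: 'demo')

# Expect HTTP 201 Created, and a JSON body (a hash) with the expected keys.
raise 'expected 201' unless response.code == 201
body = JSON.parse(response.body)
raise 'missing property' unless body.key?('id') && body['name'] == 'demo'
```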
C: So there's nothing specific we test there, but I guess it's for when we need a fast test and there's already an API test. And I don't think many of our engineers have; I personally have not created an API test yet, because it hasn't been really practical. But you can see that it's kind of lacking in the directory: you only have four spec files that are API. So, to understand features:
C: We opened up the browser UI; there's also an EE spec... I'm sorry, the ee directory under the features, under the specs. So these are actual EE-specific... oh, shoot. Let's see, the best example I can get... let's see, a code owners spec. There you go. So only in Enterprise Edition can you specify a CODEOWNERS file, and that allows GitLab to actually pull from a pool of users that own this file, and this test covers that. So this is an example of an EE, an Enterprise-Edition-only, test, and if it's run on a non-EE instance, unlicensed, then it will fail.
C: Only specific features. GitLab is a very open company, so most, though not all, of our stuff really is open and free, but there are some features within those stages that are not free. And here's the actual features page, where you can actually see the different tiering: so Audit Logs is only available in Premium/Ultimate, authentication is completely free, and then you keep going down the list and you can see the different tiers things unlock in. Productivity Analytics: this would be, if we were testing it...
C
Let's
say
what
stage
am
I
looking
at?
This
is
managed
right,
so
manage
I've,
create
a
folder
called
value
stream
management
or
something
and
then
a
tests
aspect
that
says
productivity
and
a
little
it
expect,
and
this
would
be
in
the
EE
directory,
because
it's
only
available
and
premium
and
up
it's
not
available
in
the
lower
tiers.
So
we
write
a
spec
for
in
the
e
directory.
Does
that
make
sense?
Yes,.
C: Whereas this one, Analytics Workspace, would be free, the core is free, so it would be within the regular specs. So with the EE specs, there's also something that we do to kind of combine... I want to say CE, but there's no such thing as CE anymore... to combine the free stuff with the paid stuff in our page object library. Let's see if I can find the best example. So, let's say page...
C: So this is the regular menu, and here's the EE version of the menu. Basically what's happening is we are prepending the Enterprise Edition module to the regular class. That way we're not duplicating work: any additional stuff that we need in the EE version just kind of plugs right into the regular version. So Monitor, Audit Logs: Audit Logs is not available in the regular edition, so this method does not exist there.
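The prepending pattern described above can be sketched like this (a hedged illustration with made-up class and method names, not GitLab's actual page objects):

```ruby
module QA
  module Page
    # The regular (free) menu page object.
    class Menu
      def items
        %w[projects groups]
      end
    end
  end

  module EE
    module Page
      # The EE additions, written as a module so they can be prepended.
      module Menu
        def items
          # EE plugs its entries on top of the regular menu via `super`.
          super + %w[audit_logs]
        end
      end
    end
  end
end

# Plug the EE module into the regular class, as the suite does when
# running against an Enterprise Edition instance.
QA::Page::Menu.prepend(QA::EE::Page::Menu)
```

With the prepend in place, `QA::Page::Menu.new.items` yields the combined menu; without it, only the free entries exist, which is why an EE-only method is simply absent in the regular edition.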
C: So that's the QA stuff. I just wanted to cover a little bit on the regular, lower-level specs. There are incredible benefits that we get from having our framework as a QA directory underneath the GitLab source, and one of the biggest advantages of having our stuff in the GitLab source is that we actually rely on the views that are spat out by Rails, like the menu that we see here, for instance, and we define elements within that view.
C: So that's one of the biggest advantages of having our source embedded right in the GitLab source: we can have it in one project. Otherwise we'd have to rely on two MRs: one to add the data selector, and one to add the QA test, or the element. So that's kind of how we get our framework to work with the source itself.
C: Also, within the regular GitLab directory, there's a spec folder. So if you're familiar, or not familiar, with RSpec: we also use RSpec in the GitLab specs. There's a features directory under here, which is actually kind of an integration, a UI-integration, type of test. It's... I don't want to consider it end-to-end, but it's still a UI test, and I can show you this by going to one of these.
C: So just by looking at this shared example: here we visit a page, and within some ID we fill in something with something, and click on "Save changes". You're actually interacting with the UI in these specs, but there's a lot less setup involved in these ones, and they run a lot quicker. They still use Selenium and they still use Capybara, just like our end-to-end tests do, but I think it's just a lot more lower-level.
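The flow just described uses Capybara's real DSL methods (`visit`, `fill_in`, `click_on`). To keep the sketch self-contained (no browser, no gems), a tiny fake session records the same calls; only the method names come from Capybara, the rest is made up:

```ruby
# A recording stand-in for a Capybara session (illustrative only).
class FakeSession
  attr_reader :actions

  def initialize
    @actions = []
  end

  def visit(path)
    actions << [:visit, path]
  end

  def fill_in(field, with:)
    actions << [:fill_in, field, with]
  end

  def click_on(label)
    actions << [:click_on, label]
  end
end

# The shape of the feature spec shown in the video: visit a page,
# fill in a field by its ID, and click the save button.
session = FakeSession.new
session.visit '/profile'
session.fill_in 'user_name', with: 'New Name'
session.click_on 'Save changes'
```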
C: ...is supposed to look at it, and the Danger bot actually tells them to include a QA person to look at it. I personally reviewed a lot of Configure specs, because I'm on the Configure team. So any spec that is written under here, units or otherwise (it could be a feature, because they're both part of this directory tree), a QA person should review, at least, yep. And I think there's some plan right now that MEK is trying to implement regarding QA getting involved earlier in the process.
C: So yeah, I just want to clarify: the features are the Capybara tests, these ones, and anything outside of the features directory is just a regular unit test. So, sorry, that was loud. Admin statistics, yeah: so you have JSX, JavaScript, specs, but you'll also see... sorry, RSpec tests under here. There you go. Well, this is factories.
C: Thank you. So, regarding how this is all run, let's say from the perspective of nightly: we have a very... not that complicated, but very involved, process, packaging both GitLab into a Docker container and QA, this directory, basically into a Docker container, and gitlab-qa. And gitlab-qa is our orchestrator.
C: It kind of ties all of those components together, and I can show you this right now: component, GitLab. So we have a GitLab component and a specs component. This will actually pull GitLab, pull the specs, and run them against that GitLab instance. So that's what gitlab-qa does: it manages that orchestration between the GitLab instance and the spec instance, or sorry, the specs container, and runs them. Actually, there is a very good architecture diagram that we can take a quick glance at.
C: This pipeline page... there's a pipeline created 44 minutes ago; you can see there's a lot of jobs that go through this. It shows the author, of course, Remy, but that's the last one working. But yeah, this is kind of the orchestrator, where the tests actually are contained and run.
C: So most of these run on a schedule. You'll see nightly, for instance: it runs a semi-nightly QA in the American time zone, and then the other time zone. Actually, that one's inactive, and that one's inactive as well. So nightly QA runs every 8 hours... we can edit this to take a look at it. Okay, so it runs at that hour, 4 a.m. UTC, every day.
C: Sure, okay. So that's the... that's the tagging system, but...
C: Let's get into that a little bit. My project here... let me clean up, so it's not so busy. Okay. So within our specs, which is qa/qa/specs/features/browser_ui... there we go. Let's just go to Create, and a merge request spec, because that's, I think, the best example. So, a create-merge-request spec: let's take a look at this spec. We have some Capybara domain-specific language, a shared context, describe, before, and then we get to `it`, which is the example.
C
We
have
a
tag
applied
here
for
smoke,
so
this
is
how
we
denote.
We
use
tags
there.
This
is
actually
an
R
Spec
thing.
You
specify
tags
on
two
examples
or
on
the
context
surrounding
describe
blocks
to
kind
of
describe
to
to
add
metadata
to
something.
So
this
is
kind
of
how
we
apply
that.
So,
let's
keep
looking
around
and
see
so
this
one
has
nothing.
This
means
that
it's
just
a
regular
test.
It
should
run
all
the
time.
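The tag-based selection described above can be modeled in a few lines (a toy model for illustration, not RSpec itself; in the real suite you'd pass tags to the runner, e.g. `rspec --tag smoke`):

```ruby
# Each example carries a description and a list of metadata tags,
# mirroring RSpec's `it 'logs in', :smoke do ... end` metadata.
Example = Struct.new(:description, :tags)

examples = [
  Example.new('user logs in',            [:smoke]),
  Example.new('user creates an MR',      []),                          # untagged: runs all the time
  Example.new('cluster install works',   [:orchestrated, :kubernetes])
]

# Selecting only :smoke examples, as the smoke jobs do.
smoke = examples.select { |e| e.tags.include?(:smoke) }
```

Untagged examples are the default set; a tag like `:smoke` or `:orchestrated` narrows a run to just that slice of the suite.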
C: There's a lot of complexity here in this CI YAML file, so I'll try to condense it as much as possible. We have an anchor here, .qa, on line 33. Let's take a look at smoke: on line 73 we extend from that anchor and we add a variable for the test options with the smoke tag, so this right here tells RSpec to only look at the :smoke tests, like the one we had here. We only look for those. We can do the same thing with orchestrated and Kubernetes, but instead specify the orchestrated tag, as you can see here.
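The anchor-plus-tag pattern the speaker is condensing looks roughly like this (a hedged sketch, not GitLab's actual CI file; the job names and the `RSPEC_TAGS` variable are made up for the example):

```yaml
# Shared job definition, extended by each tagged variant.
.qa: &qa
  image: ruby:3.2
  script:
    - bundle exec rspec $RSPEC_TAGS

qa:smoke:
  <<: *qa
  variables:
    RSPEC_TAGS: "--tag smoke"

qa:orchestrated:
  <<: *qa
  variables:
    RSPEC_TAGS: "--tag orchestrated"
```

Each job inherits the shared script from the anchor and only swaps the tag variable, which is what makes adding a new tagged slice of the suite a few lines of YAML.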
C: It's because this isn't a separate project entirely... that is only because we share this kind of mechanism for running tests in multiple places. We run it on ops.gitlab.com, or ops.gitlab.net rather... I don't know if we run it on dev, but there are just a lot of places where this is applicable.
C: Project ABC: we want to run this, but we wanted... yeah, so we can just consume that. Another thing you'll notice is that qa:api down here adds some options for the tests: that's just specifying a directory to run, and here's the API quarantine one, to run both the regular and EE versions. These are also how we kind of run only Manage tests here, or only Plan tests. So we can utilize RSpec, its environment, positional arguments and so on, to know what to run, as well as the tagging mechanism.
C: These are right here, everything under line 131 and down: this is what we run against nightly. So, starting from 131, instance-image CE: we're extending another anchor, .qa, and we're executing Test::Instance::Image CE, and I'll cover real quickly what that is. This is like a forwarder to gitlab-qa: anything you specify here, Test::Instance::Image, will actually be forwarded into gitlab-qa. Here, let's take a look at that.
C: This basically just runs all tests; it doesn't discriminate which tests to run. It all depends on the arguments that you're specifying. So for nightly, I guess I can shorten this answer for you by saying: every test is run nightly, including quarantined tests, and we can specify... that's all specified in...
C: Same with staging, I believe. I haven't really been involved too heavily with the staging environment, but I believe that the same thing applies. You can see that there's also another project for staging, so whatever is specified in this YAML file is what's run. Oh, you can see that this one runs all tests.
B: Do we have to specify which configuration to build every time? For example, running against staging: I know we cannot configure a lot, but if you're running in, I guess, an environment, we can probably have some specific configurations. Do you have to specify that every single time?

C: When you specify the environment, it kind of comes by default. So...
C: Staging: you can see that it includes pipeline-common, but where does it run? And here we specify an environment variable, the GitLab address, to run against staging.gitlab.com. Oh, also, scenario: Test::Instance::Staging. You notice here we use Test::Instance, so there are different things that you can run. I kind of want to see what Test::Instance::Staging is; let's switch over to the gitlab-qa repo and see.
C: But I guess, in short: there is a GitLab Docker container running GitLab; there is a QA Docker container, which is designed to run against that GitLab; and the orchestrator does that: it can tie those two images together and run those specs against that GitLab instance. And it doesn't just have to be Docker, either: you can have a running GitLab instance on your local machine and run the specs against that.
C: And so... I just wanted to touch again on orchestrated and Kubernetes, for instance, because that's kind of the main thing. There really is no difference between how we run those versus the smoke tests: it's just an RSpec tag, and I think there is... yeah, it's just an RSpec tag that's specified. That's kind of how we differentiate different tests, and it's just completely arbitrary; we can specify anything there. Smoke was added... they were added by me circa last year.
B: In that case, all right: since we have test cases created for each feature by the team itself, when we are creating a new one, depending on where that code change is, the test specs particular to that feature can be easily plugged in, right? Because end-to-end is different, we don't know... but the other ones, specific to that feature, could be run, since the tests live within the same repo.
C: I guess I don't know, okay, what the feature specs do; I haven't been really... I mean, none of the QA folks have been really heavily involved. That's more of the developer domain, you know, the actual back-end engineers and front-end engineers. But no, absolutely, there's potential for that. I don't know if they do that yet, or currently.
B: Makes sense, yep. Yep, okay, sounds good. This is definitely helpful, because there are so many things going on; it's good to have kind of one picture. This is helpful. Yes, I think I'll probably have a lot more questions, but only once I dig deeper into it. I know who to reach out to at this point.

C: Awesome.