From YouTube: AMA about the GitLab end-to-end testing framework
A: All right, we are on time, so I'll get started. My name is Walmyr; I am a test automation engineer in the Plan team, and this is an AMA session about the GitLab end-to-end testing framework. The idea is to have a brief explanation of the end-to-end testing framework in the beginning and then open for questions in the document that I'm sharing, whose link is also available in the Google Calendar event. In the document you will also find some documents that we recommend reading if you want to know more about the end-to-end testing framework, and there are also two videos available: one of them was a training that was done in the past about the QA framework, and the other one was a presentation at a conference, available in this link.
Here we have the GitLab QA framework, which is under gitlab-ce/qa, and then we have all these subdirectories. These are the ones that I will briefly explain, so I will not talk about all of them, but at least some of them that I think are important for you to know.
So we have a directory called git, which is used for resource creation via the command line interface. In this directory you will find some classes with methods that can create commits or push to a specific repository, as a user would do through the command line interface. So it's basically for us to simulate those kinds of actions while running the end-to-end tests.
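The kind of class described here can be pictured roughly like this. This is a hypothetical sketch, not the framework's actual code; the class and method names are assumptions made for illustration, and the commands are only collected rather than executed, to keep the sketch self-contained:

```ruby
require "shellwords"

# Hypothetical sketch of a resource class that drives git through the
# command line interface. Names are illustrative, not GitLab's real API.
class GitRepository
  attr_reader :commands

  def initialize(uri)
    @uri = uri
    @commands = [] # shell commands, collected here for inspection
  end

  def clone
    @commands << "git clone #{Shellwords.escape(@uri)} ."
  end

  def commit(message)
    @commands << "git commit -m #{Shellwords.escape(message)}"
  end

  def push(remote = "origin", branch = "master")
    @commands << "git push #{remote} #{branch}"
  end
end

repo = GitRepository.new("http://gitlab.test/group/project.git")
repo.clone
repo.commit("Add a file")
repo.push
```

A real implementation would run each command in a temporary working directory, exactly as a user would at a shell prompt.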
Then we have the page directory. This is where we have our page objects. These are abstractions of the web elements and the methods that can be performed on the page, like filling in a form and submitting the form, things like that.
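As a rough sketch of the page-object idea (the names below are invented for illustration; the real page objects wrap Capybara, while this sketch uses a fake browser so it can run standalone):

```ruby
# Illustrative page object: it hides element locators behind methods
# that describe user actions on the page.
class LoginPage
  def initialize(browser)
    @browser = browser # anything that responds to fill_in and click
  end

  def sign_in(username, password)
    @browser.fill_in(:login_field, username)
    @browser.fill_in(:password_field, password)
    @browser.click(:sign_in_button)
  end
end

# A tiny fake browser so the sketch runs without a real driver
class FakeBrowser
  attr_reader :actions

  def initialize
    @actions = []
  end

  def fill_in(field, value)
    @actions << [:fill_in, field, value]
  end

  def click(element)
    @actions << [:click, element]
  end
end

browser = FakeBrowser.new
LoginPage.new(browser).sign_in("root", "secret")
```

The point of the abstraction is that tests call `sign_in` and never touch locators directly, so a markup change only requires updating the page object.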
Then we have the resource directory. This is where we have some classes that we use for resource fabrication, as we call it, and a resource can be anything that you can act on in the application. So we have them organized for Manage, for Plan, Create, Verify, and so on and so forth.
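A minimal sketch of what resource fabrication could look like (assumed names; the real classes are more involved and can fabricate resources via the browser or the API):

```ruby
# Base class: fabricate! builds an instance and asks it to create itself.
class Resource
  def self.fabricate!(**attrs)
    resource = new(**attrs)
    resource.fabricate
    resource
  end

  def fabricate
    raise NotImplementedError
  end
end

# A project resource; in the real framework, fabricate would drive the
# UI or the API to actually create the project in a GitLab instance.
class Project < Resource
  attr_reader :name

  def initialize(name:)
    @name = name
    @created = false
  end

  def fabricate
    @created = true # placeholder for the UI/API calls
  end

  def created?
    @created
  end
end

project = Project.fabricate!(name: "awesome-project")
```

The benefit of funneling creation through `fabricate!` is that a test can declare what it needs and not care how the resource gets built.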
And finally, we have the support directory, where we have things like retrier.rb, which reruns failed specs to avoid failures due to things like flakiness, and we also have waiter.rb as a helper for waiting for elements to be present, and things like that.
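In spirit, those two helpers can be sketched like this (simplified, illustrative versions, not the actual retrier.rb and waiter.rb code):

```ruby
module Support
  # Retry a block up to max_attempts times, re-raising the last error,
  # to smooth over flaky failures.
  def self.retry_on_exception(max_attempts: 3)
    attempts = 0
    begin
      attempts += 1
      yield
    rescue StandardError
      retry if attempts < max_attempts
      raise
    end
  end

  # Poll a block until it returns a truthy value or the timeout elapses,
  # e.g. while waiting for an element to be present on the page.
  def self.wait_until(timeout: 10, interval: 0.1)
    deadline = Time.now + timeout
    loop do
      result = yield
      return result if result
      raise "timed out after #{timeout}s" if Time.now > deadline
      sleep interval
    end
  end
end

# Example: a block that only succeeds on the third attempt
calls = 0
Support.retry_on_exception(max_attempts: 3) do
  calls += 1
  raise "flaky" if calls < 3
end
```

Polling with a deadline, rather than sleeping a fixed amount, keeps tests fast when the condition is already true and tolerant when the page is slow.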
So, as I have already mentioned, these are some resources that you can go through if you want to add to your understanding of the framework, and now we can start with the questions. So, Peter.
A: Thanks for the question, Peter. The answer to your question is that there are differences. What we have in this link that I mentioned before, where we have the test framework itself, is where we have all these things that I mentioned, like the resources, the runtime, the spec files, and things like that. This other project, gitlab-qa, is more about the infrastructure that is used for running the tests themselves.
I know that we have more test automation engineers on the call, so feel free to add more to that, but the main difference is that what lives inside /qa are all the files used for creating the tests themselves, and what lives in the other repository is more like the infrastructure for running the tests on CI. Correct me if I'm wrong, anyone else from the team.
C: But the main set of functional tests is in the qa directory, because the tests should live with the source code. We have some discussions about renaming gitlab-qa to something like gitlab-orchestrator, so it's clearer, and it's also being used by some of our customers: after they upgrade their GitLab instance, they want to run some small tests, and they can run them via the orchestrator directly, without having to check out GitLab CE or EE, because we ship these tests as part of a gem. They can just download gitlab-qa, without the project, and run a set of smoke tests, or whatever tests, against their GitLab instance. And there are tests that need, for example, two instances of GitLab, which live in the orchestrator, because you need to set up a primary site, gitlab1, and a secondary site, gitlab2; those tests have to reside outside the code, outside of the product itself.
B: So, in both places I see code which is similar, for example some code to shell out, and right now I am about to improve some of that code, and I'm not sure which version to improve, I mean, in which place I should improve it: in the qa folder, or should I contribute to the gitlab-qa project?
A: I was not aware of this duplication, so we would need to look into it. I would say that if it's something related to the tests themselves, you should change it in CE or EE, depending on what the improvement is that you want to make, but if it's really about orchestration, then maybe it should go to the gitlab-qa project.
C: It's difficult at the scale we're growing; I appreciate all the patience. For context, other enterprise companies have a 1-to-5 ratio of developers to test automation engineers; we're at roughly 1 to 18. So it's impossible for us to know all the nooks and crannies and edge cases of the framework right now, and if you find anything, please let us know. We're working on making the counterpart mapping better, and in some areas it is one to one.
D: Sure, I just thought maybe this is the best place to ask; I'm new, obviously, having joined and started ramping up this week. I've got it running on my local machine, but I just want to ask to see if this is the easiest way to do what I currently do: I have the GDK running, and then I would try and run the tests via either having to build a local QA Docker image or trying to make the bin/qa executable work on my machine.
A: I had some performance issues while using the GDK locally in comparison to Docker, but the main point of using the GDK is that sometimes, when you're writing a new test, you need to change something in the view of the application, for instance to add a specific attribute that the tests will read, and if you do it through the GDK, it will recompile the code and the new class will be available for you in the application, while if you use Docker you won't have the chance to do that. So, for just running the tests, Docker works.
E: Just to add to this: in general, if you want fast test execution and a fast setup, you would normally just run a Docker image, as Walmyr said, and run the executable in the qa folder to run the specific test that you created. But, as he also mentioned, if you don't want to write a test on your own and just want to test something, there is also the gitlab-qa gem that you can use as well.
But in general, as a test writer, if I don't want to change anything in the view, I normally use a Docker image and just run the tests against the Docker image; the only exception is if I ever want to improve something in the view or maybe change something.
A: One thing that I could mention, while we don't have any other new question, is that we are working on improving the end-to-end testing framework. Some things that we are iterating on are improving the documentation, to have some more step-by-step guides on how to use the framework, how to write the tests, how to update them, and things like that. We are also defining some style guides that we want to follow, and I remember also that Mark Pavia mentioned that the author of the Capybara framework is also willing to contribute to improving the framework itself, using some best practices of Capybara, since this is the library that we use together with the framework. So these links here are really valuable if you want to know what guidelines we are using, and if you don't understand exactly how page objects work or how resource fabrication works, these are the places to look.
E: In general, I would say (there was another question): if you look at the testing pyramid, and at how many UI-based tests you should have in such a huge project like this, there is a correlation where you can see that there should not be that many UI tests in general; you should be focusing much more on unit tests, which we are currently doing anyway: every developer is writing unit tests for everything they do, so the test coverage in general is not bad. UI tests should be written if they are needed, and it is harder because of the sheer number of people currently working on the test automation framework, as well as the sheer amount of code that is pushed every day by such a huge team.
F: That was a great answer. So what about (and this may be a little off-topic) performance testing? Do you take the same type of approach, or are you responsible for the performance testing of the application?
C: I can take that. We have an enablement team now that's dedicated to memory and other large, encompassing performance concerns, but we do take into account functional performance in each area. An example of this is what Roger's team, Tommy and Ami, are working on: we have tests around big merge requests and issues with a lot of discussions and a lot of labels, and I will link an epic that tracks the two tracks of work.
C: In addition to that, I do want to go back and answer the first question: do we see challenges in moving faster? It's a challenge, but it's a good challenge, because we were doing it differently than other companies. An effective iteration that we made early on was to remove the requirement of writing test plans.
C: Those are something that tends to get carried over from quality engineering processes in enterprise companies, and it has proven to be more of a lagging task, because you have to create an issue for it, and when the feature is small, the cost of the paperwork in creating a test plan doesn't benefit the outcome.
So what we do here is that test plans are not required by default anymore, but we do want to move into a test planning iteration closer to product requirements. So we added a testing section in the issue template, so product managers, engineers, and test engineers should be collaborating earlier on what to test, and we are making progress; it's not perfect yet. There's a section to discuss testing in the MRs as well, and we will require test plans on big changes, like upgrading to Rails 6 in the future, moving to a newer Ruby, or rolling out big cross-cutting changes, as a good example.
C: There's still a lot of work to be done, and I credit all of that to the team; I'm just a facilitator here. Thank you.
A: Yeah, so in the document that I linked, we have this section about whether end-to-end tests are needed, and then we have links for both the CE and EE code coverage reports. I think this should be the first place to look into to see which parts of the code are or are not covered by unit tests. It's taking a bit long to render for you, but, as we can see here, this is the test coverage report for CE; it's still rendering for EE.
C: Sorry to jump in here; I do appreciate you bringing it up, because we do a pretty good job of adding tests with every bug fix. That's good and bad: good, as we have a lot of tests; the bad is that the execution time tends to grow, and there are certainly duplications where we could make it leaner. We don't have a big cross-cutting effort to groom this right now, but I think we should revisit them.
C: Maybe an experiment we could do is to start in one team and one area first, to see where we can go in and groom things. An iteration that we made to improve this is that now, in addition to removing test plans, we are adding the counterpart test automation engineers to the review roulette when there are changes made to the tests. This will increase the awareness on our part; we still need to learn what all the tests are.
C: There are like five thousand, almost six thousand tests in spec/features, and there are even more unit tests. So building out that knowledge is happening now; it will take a lot to get to the point where, say, a counterpart has all the agency and knows exactly what to remove when a new test comes in. But the thing is, it's a collaborative effort with the product managers as well.
C: We shouldn't go in and have to clean out things carelessly, because every bug fix goes in with a test, and if we remove something unintentionally, that's like removing a brick in the wall: things may come crumbling down. So, yeah, we would love to tone it down, but we probably have to do it with caution.
A: So they are not very reliable, and when we face these kinds of issues we quarantine the tests, so that they don't pollute the merge requests with false negative results and things like that. We are iterating on making them more reliable, making it more reliable to interact with web elements through the graphical user interface, as I mentioned in the beginning.
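The quarantine mechanism can be pictured roughly like this (an illustrative sketch with made-up data; the real framework tags specs with RSpec metadata rather than plain hashes):

```ruby
# Specs tagged as quarantined are excluded from the default run, so that
# known-flaky tests cannot produce false failures on merge requests.
specs = [
  { name: "user signs in",         quarantine: false },
  { name: "flaky drag-and-drop",   quarantine: true  },
  { name: "user creates an issue", quarantine: false }
]

# Default run: skip quarantined specs
default_run = specs.reject { |spec| spec[:quarantine] }

# Dedicated quarantine run: execute only the quarantined specs, to see
# whether they have become reliable enough to bring back
quarantine_run = specs.select { |spec| spec[:quarantine] }
```

Keeping the quarantined specs running in a separate job, instead of deleting them, preserves the coverage while the flakiness is investigated.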
C: The others are the closing of test gaps, which you can always help with in your area, because that's our bread and butter, and performance: I think we need to improve production-like data in our performance test beds, and that is a recurring theme I'm seeing. So those would be the three areas to highlight right now. I'm curious to hear what Stan thinks as well, because he's boots on the ground on the frontline, so feel free to let us know what the areas of improvement are.
H: Yeah, I'm still thinking about it, but, to the point about unit tests, I think developers do a great job of writing unit tests. It's just that there's always another edge case that you haven't considered: maybe we have bad data in our database and you haven't thought about that, or customers are upgrading from one version to another, and during this time frame we're in this kind of different state, which we don't completely cover.
H: So it's things like that that are harder to test. That's not to say we can't do it, but I think, to your point about replicating production-like data, that's absolutely true. There's a lot of stuff in our database that is strange or not valid: we've added, for example, validations after the fact, when bad data was already in there, so now existing models and things like that aren't actually valid.
H: And so, if you try to save a record, like a project, it actually doesn't save, because the validations don't pass. So there are lots of those kinds of things that happen in our system that are hard to test, and you can argue that you really want to clean that up and be proactive by cleaning up, rather than trying to account for every edge case.
H: But I do think that those are the gaps that are there, and if you look at all the priority bugs that we had in the last two weeks, they were mainly from things like that: gaps, like we didn't expect a pipeline schedule not to have an owner, things like that. Someone probably could have considered that, but it wasn't obvious from the get-go.

C: I echo that, because this ties in with the theme of more iterative test planning. You can sit in a room for a week and plan all the cases that we want, but that's not how we work. So, lightweight test planning, and more realistic test data, according to what you said: yes, those are the ones that will surprise us, because we didn't think of them, and the only place right now that is realistic is production. I'm glad we have canary and we have pre-production now, but yeah.
A: So, I really appreciate that you took the time to be here with us in this meeting. The idea is that, by reviewing the questions and answers from this meeting, we will improve the things that we have, so thanks a lot. I will also upload the recording of this meeting later, so that people who couldn't attend are able to watch it. Thanks a lot.