From YouTube: Test File Finder GitLab Implementation
B
Of the questions that we talked about in the last stakeholder call, I was just going to ask those, and we can kind of go from there.
B
Jump in — by all means, jump in and participate with any questions you have. This would be the time to ask them, too.
C
Hey, good to see you again — hey, Drew.
B
Yeah — so, Albert, I was wondering if you could share your screen, and we could get things started just talking through the implementation.
C
Is that okay? So — okay, are we following the issue as an agenda, the record of the pipeline shown?
B
I had some other questions on dynamic mapping that I'll add in, but it's essentially those items that we discussed covering.
C
These things — okay, sure, yeah. Just let me — let's just go through what we have right now, okay? Yeah.
C
Okay, so this diagram is just an overview of how our pipeline looks right now — a small part of it. We have three stages: the pre-test stage, the test stage, and then the post-test stage. The pre-test stage is where we use the test file finder. We use it to detect the tests that we need to run for a particular MR, based on the changes in the MR. So this is where the test file finder is used: it creates a list of test files. We pass that list later on to the rspec fail-fast job, which runs just the detected set of tests, in parallel to the other existing rspec jobs that we have as normal. So yeah — and then for this pipeline.
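A minimal sketch of the kind of `.gitlab-ci.yml` layout described above. The job names, mapping-file path, and the exact `tff` invocation here are illustrative assumptions, not GitLab's actual configuration:

```yaml
stages:
  - pre-test
  - test
  - post-test

# Pre-test stage: detect the tests relevant to this MR's changes.
detect-tests:
  stage: pre-test
  script:
    # List the files changed in the MR, then map them to test files.
    - CHANGED_FILES=$(git diff --name-only "$CI_MERGE_REQUEST_DIFF_BASE_SHA")
    - tff --mapping-file tests.yml $CHANGED_FILES > rspec_matching_test_files.txt
  artifacts:
    paths:
      - rspec_matching_test_files.txt

# Test stage: run only the detected tests, alongside the normal rspec jobs.
rspec-fail-fast:
  stage: test
  needs: ["detect-tests"]
  script:
    - bundle exec rspec $(cat rspec_matching_test_files.txt)
```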
C
The
special
thing
is
that
when
the
failed
first
jobs
fails,
we
have
a
subsequent
job
in
the
next
stage,
which
looks
at
failures
for
this
job
and
and
then
and
then
make
an
api
call
to
committee
the
pipeline
earlier
yeah.
So
it
does
it
by
using
a
cancel
api,
but
it
still
gives
us
the
same
feedback
in
terms
of
a
failed
pipeline
to
the
user.
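The fail-early job can be sketched like this. The job name and trigger condition are assumptions; the endpoint itself (`POST /projects/:id/pipelines/:pipeline_id/cancel`) is GitLab's pipeline cancellation API:

```yaml
# Cancels the rest of the pipeline once the fail-fast job has failed.
fail-pipeline-early:
  stage: post-test
  needs: ["rspec-fail-fast"]
  when: on_failure           # run only when an earlier job failed
  script:
    # The remaining jobs are cancelled, but the failed fail-fast job
    # still surfaces as failed-pipeline feedback to the user.
    - 'curl --request POST --header "PRIVATE-TOKEN: $API_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/cancel"'
```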
C
Right now, these are not related to any DAG or anything — it's just an illustration of the dependencies.
C
Yeah,
so
so
between
this,
so
between
this
job
and
this
job,
there
are
other
jobs
like,
for
example,
setting
up
the
environmental
environment
and
other
things
yeah.
C
It's
not
a
direct
relationship.
Okay,.
C
Check
yep:
it's
that
it
has
a
neat
relationship.
C
So this job uses the test file finder: it takes the MR diff, looks at the list of files changed in the MR diff, passes it to the test file finder, and the test file finder then creates the list of test files that specifically need to be run, based on the mapping that we have — the static mapping.
C
So
this
is
the
main
thing
how
this
is
like,
for
example,
let
me
see
so
here
we
have
so
in
gitlab
we
have
the
ee
extension
code,
which
is
usually
used
to
extend
or
yeah
to
extend
a
force
code
with
some
e-specific
implementation.
So
in
that
case,
when
the
ee
specific
code
is
changed,
we
also
want
to
run
the
test
for
the
force,
the
corresponding
class
in
force.
So
this
is
yeah,
something
that's
unique
to
the
gitlab
code
base.
C
We
want
to
address
them
as
well
yeah
other
than
that
we
have
other
things
like
schema,
migrations
and
various
kinds
of
things.
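The EE-to-FOSS rule described above can be expressed in a static mapping file along these lines. This is a sketch of the test-file-finder mapping format; the exact keys, regexes, and paths are assumptions, not GitLab's real `tests.yml`:

```yaml
mapping:
  # A changed EE extension module should also run the FOSS class's spec.
  - source: ee/app/(.+)\.rb
    test: spec/%s_spec.rb
  # A changed FOSS file runs its own spec.
  - source: app/(.+)\.rb
    test: spec/%s_spec.rb
  # A changed schema migration runs the migration specs.
  - source: db/migrate/(.+)\.rb
    test: spec/migrations/%s_spec.rb
```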
B
Well,
I
I
should
add
the
context
some
of
the
questions
I'm
asking
are
for
like
ricky
and
james
who
aren't
here
too
yeah
just
so
that
you
don't
gloss
over
stuff.
So
I
do
appreciate
you
going
into
that
detail.
Yeah,
that's
it
for
me.
As
far
as
questions
on
that
piece,
do
you
have
any?
Yes.
C
Drew
any
have
any
question
feel
free
to
ask
if
you
have
any
questions.
A
Yeah
no,
no,
this
this
looks
great.
I
don't
I
guess
I
don't.
I
know
I
don't.
I
don't
have
anything
yet
keep
going.
Please.
C
Okay,
yeah
so
yeah,
so
that's
as
far
as
the
tesla
finder
is
being
used
right
now,.
C
B
So, just to recap: we're using the test file finder gem for identifying the files that are changed from the MR diff and detecting tests. We run those — we create an artifact or something that we pass on to the rspec fail-fast job — and then, when that one completes, if it fails, the fail-pipeline-early job will cancel everything else.
C
Yeah,
so
if
we
look
at
this,
so
let's
say
this
takes
10
minutes
and,
like
I
think,
majority
of
our
tests
runs
at
about
15
minutes
per
job,
so
at
10
minutes.
If
this
fails,
this
will
kick
in
cancels
all
the
remaining
jobs
here.
That
may
still
be
running
for
another
five
to
ten
minutes
and
yeah.
So
that's
where
we
can
shorten
the
pipeline.
C
Yes and no — I was thinking it really depends on the project: what the project needs to do before it can run the tests. Out of the box, yes, it will probably work, but to what extent that can be sustained, I'm not sure — I wouldn't be able to say.
C
If we don't detect anything to run — most of the time, there will be something to run.
C
The other thing that we needed to take care of was if this list gets too large. For example, let's say an MR changes 600 files — a huge refactor. In such cases, we don't —
C
We
will
decide
not
to
run
this
at
all,
because
it
makes
it's
more
worthwhile
to
just
run
all
the
tests
as
normal
instead
of
running
it
in
single
job,
because
we
will
be
duplicating
first,
we'll
be
duplicating
the
the
effort
of
testing
it,
and
second,
is
that
this
job
is
not
paralyzed
at
the
moment.
So
when
it
gets
whenever,
when
the
number
of
hours
gets
too
large,
it
gets,
it
will
take
longer
and
the
fail
early
mechanism
will
not
be
beneficial.
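The skip-when-too-large decision above amounts to a simple threshold check; here is a sketch in Python. The 600-file figure is the speaker's example, and the threshold values are assumptions, not GitLab's actual cutoffs:

```python
def should_run_fail_fast(changed_files, matched_tests,
                         max_changed=200, max_tests=100):
    """Decide whether the single fail-fast job is worthwhile.

    The job is not parallelized, so for a huge MR (e.g. a 600-file
    refactor) the matched-test list would run longer than the normal
    parallel suite, and failing early stops being a saving.
    """
    if not matched_tests:
        return False  # nothing detected, nothing to fail fast on
    if len(changed_files) > max_changed or len(matched_tests) > max_tests:
        return False  # fall back to the normal full run
    return True

# A huge refactor touching 600 files skips the fail-fast job:
huge_mr = ["file_%d.rb" % i for i in range(600)]
print(should_run_fail_fast(huge_mr, ["spec/foo_spec.rb"]))  # -> False
```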
B
And then, I guess, other context to maybe add here: these all run only when there are code changes, right, Albert? We break our pipeline graph up so that, if only docs files change, these jobs will never run, based on the `changes` implementation within the pipeline. So for some of the MRs that you're thinking of — where there probably wouldn't be any tests — it's likely never going to run this job in those circumstances, because of how `changes` at the pipeline configuration level is implemented.
C
I think it is possible. I think the reason I put it as a separate job in the beginning was that I was also thinking about how we might want to do this fail-fast mechanism at different levels — like we're currently running unit, integration, migration, etc. So I was wondering if we might need to do that. If we need to do that, then this stays separate, because this setup might take time and we don't want to repeat the process.
C
Yeah — also, if we want to parallelize this job itself, we don't necessarily want to run the detection as many times, because how we parallelize this depends on how many test files we are getting out of it.
B
So I was going to ask: we deviated from the CI template quite a bit — like, we used the test file finder, and I think that was kind of our goal, but we deviated quite a bit. Are there any ideas on what patterns here, or anything, could be brought back into the fail-fast CI template? Or is our implementation just very opinionated for our use case right now?
C
Like, for an MR — look at the diff and determine what tests need to be run for that MR, for example. I feel like the boundary between the template and the user's configuration will be this list of files. I think if the template can help a user produce this list, it would be very helpful, and then the engineers can take it and run the tests however suits their project.
C
And
then
I
think
that
also
will
lead
on
to
the
next
steps
that
we
might
want
to
consider.
If
you
want
to
consider
dynamic
mapping
et
cetera
all
that
complexity
will
go
into
this
part
of
the
job
part
of
the
pipeline
and
not
so
much
in
terms
of
how
the
jobs
are
it's
different
projects
and
for
different
ways
of
running
tests.
I
think
it's
best
to
leave
that.
A
Yeah
and
that
I
think
that
is
a
good
point
that
I
think
we
we
sort
of
consciously
made
that
decision
with
test
file
finder
that
we're
not
going
to
get
into
execution,
and
so
I
think
that
I
think
you're
right
about
the
artifact
being
a
good
ideal
output
for
the
job.
But
it's
like
we're
not
we're
not
really
doing
your
execution
for
you,
because
we,
we
probably
won't
do
it
right,
but
like
we
know,
we
can
give
you
this
list
of
high
risk
files
or
whatever,
and
that.
C
Yeah
yeah
also,
then
that
means
how
does
one
finder
is?
Currently,
it
leaves
the
decision
of
finding
out
what's
important
to
the
user
as
well
like
they
can
put
in
the
mapping
as
needed
based
on
the
project
and
then
the
job
the
template
template.
The
job
then
do
the
execution
of
based
on
that
gives
them
a
list
of
files
that
they
need.
That's
files
that
they
need
to
do
based
on
what
is
known
within
that,
mr
and
then
give
it
back
for
the
user
to
then
take
it
on
from
there.
B
Cool. I did want to talk about the dynamic mapping a bit. Drew, did you have any questions on these items before we move into that — anything else here?
B
Cool — so, Albert, I know you've done a lot of analysis, like looking at Crystalball and other ways of doing this. I just thought I'd ask: what are your thoughts on how we can evolve the GitLab project pipeline with dynamic mapping, and then how do you see that relating to test file finder — like, what are the responsibilities we just talked about, and maybe what can the gem actually do? There's a lot to unpack there, but that was, in essence, what I wanted to try to get out of this part of the discussion.
C
Yeah,
so
this
is
the
mapping
that
we
have
now,
I
think
yeah.
So
the
next
thing
we
want
probably
want
to
get
at
is
how
to
be
smarter
by
creating
this
mapping,
because
in
this
at
the
moment,
there's
a
lot
of
assumptions
and
I
think
we
there
may
there
may
be
different
scenarios
that
we
missed
in
this
piece
of
what
we
know
at
the
moment.
C
So
we
can
move
this
towards
the
dynamically
generated
map.
I
think
that
would
be
great.
I
think
that's
the
first
step
to
take
where
that
would
fall
within
the
the
gem.
I'm
not
sure
about
that.
Yet.
C
If
it
is
part
of
the
gem,
it
might
be
a
like
a
separate
component
within
the
gem
to
generate
a
mapping,
because
it
knows
the
formula.
A
A quick question about our mapping here — and obviously this will always be somewhat project-specific — but how much of this mapping, as I'm reading it here, can be generalized? I ask because I was thinking about extensible mappings: if we're going to lean into "we're just going to produce this list of files for you," I think getting started means having a user configure as little as possible.
C
Yes and no. I think there is a middle ground there, which I also thought about before, which is that we probably want to be able to take in multiple mappings, for example. I think that could really help — for example, if the template has a default mapping, and then the user wants to layer on some customized maps that they need for their project. I think that could help. And also, I think —
C
And
forth
in
the
mapping
like
there's
a
lot
of
tweaking
experimenting
and
then
we
rest
something
was
missed
and
then
we
need
to
add
on
to
a
new
one
so
having
multiple
mapping
files
supporting
multiple
mapping
files
like
this
an
option,
it
might
help
to
solve
that
pain
point
as
well,
and
then
you
could
make
it
more
modular.
C
Yeah — the other possible benefit I was thinking of is that we can still keep this static mapping and then, as we move towards dynamic mapping, iterate on the dynamically generated map and combine it with the current static map to give us a more comprehensive mapping.
C
At some point, I feel like we need to get to parity with the static mapping — because if we can only have one, the generated file needs to reach parity with what we have right now in order to be as effective. But we don't know what it takes to get there at this point, or how long it'll take — how many iterations.
B
And just to cycle back to the dynamic mapping: the general idea would be that the implementer — the developer — would generate the dynamic map outside of tff, but then feed it in in this format, so that the files detected would essentially work the same. So, from a responsibility perspective, dynamic mapping probably doesn't make sense to build into the gem; it's an implementation decision that would feed into the mapping that the gem uses, right? Yeah.
C
Cool,
I
mean,
I
think
it
could
be
packaged
part
of
it,
but
it
should
be
like
a
different
entry
point,
I
would
say
from
the
default
finder
as
it
is
now
yeah.
C
I mean, it depends on what the approach to the dynamic generation is. If it is based on, say, a call graph, then yes, it needs to hook into the test execution, so that you will be able to correlate between the test files and the test execution paths, I would say. Yeah, right.
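The call-graph idea — hook into test execution, record which source files each test touches, then invert that into a source-to-tests map — can be sketched as follows. This is a conceptual illustration with hypothetical data, not Crystalball's or tff's actual implementation:

```python
from collections import defaultdict

def build_dynamic_mapping(execution_traces):
    """Invert (test file -> source files it covered) traces, collected
    while the suite ran, into a (source file -> test files) mapping."""
    mapping = defaultdict(set)
    for test_file, covered_sources in execution_traces.items():
        for source in covered_sources:
            mapping[source].add(test_file)
    # Sort for a stable, diff-friendly mapping artifact.
    return {source: sorted(tests) for source, tests in mapping.items()}

# Hypothetical traces recorded during a test run:
traces = {
    "spec/models/user_spec.rb": ["app/models/user.rb", "lib/auth.rb"],
    "spec/lib/auth_spec.rb": ["lib/auth.rb"],
}
mapping = build_dynamic_mapping(traces)
# A change to lib/auth.rb now selects both specs.
```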
C
It needs to hook into the execution, which is why I also mentioned earlier that I think it should not be the same entry point — the binary that is currently creating the artifact versus the binary that creates the mapping. So yeah, exactly: those are two entry points — one is for creating the artifact and one is for creating the mapping on its own.
C
Yeah, I think we might be able to take that on beforehand, because I think that will help us make smaller iterations as we move towards dynamic mapping, and not lose confidence in what we already have right now.
A
No — I mean, I'm super excited about this dashboard, that we can actually see how it's going.
A
Yeah,
I'm
the,
I
think,
the
only
the
the
part
the
only-
and
I
not
even
sure
I
know
what
question
I'm
asking
here,
but
the
only
part
that
I'm
still
that
I'm
a
little
bit
fuzzy
on
is
that
connection
between
our
detection
and
the
execution,
because
that
seems
that
seems
very
open-ended.
A
So
is
it?
Is
it
gonna
be
a
matter
of
like
kind
of
picking
tools
and
saying,
okay,
you
know
we're
going
to
pick
this
tool
and
we
can
integrate
with
that
and
it'll
know
how
to
work
with
that
execution
tool
and
then
another
one
and
just
adding
more
tools
that
we
know
how
to
talk
to.
Or
is
there
a
more
generalized
way
to
approach
that.
C
I'm
doing
things
I
think
yeah,
it's
very
customized
I
was,
I
think,
yeah
it
needs.
It
needs
to
be
yeah.
We
need
to
know
the
tools
that
we
want
to
hook
into
and
how
do
we?
How
does
how
do
those
tools
provide
the
data
that
we
need
so
yeah?
I
think
if
we
want
to
take
such
a
approach
then
needs
to
be
adaptable.
I
would
say
to
do
these
different
methods.
B
On a per-tool basis. But, Albert, is it fair to say that, as we look towards dynamic mapping, we would look for our miss-rate metric on that dashboard to decrease? That's how we're going to evaluate success — yeah, maybe a moving average would improve over time — and that's when we'll start to consider moving out a portion of, say, the unit tests: if we feel confident that we're not missing on unit tests, we can remove unit tests from the MR pipeline and just run them there.
C
So if we could actually look at this individually, we might be able to start substituting this out bit by bit, instead of as a whole — so yeah, that might give us some incremental value.
A
And
just
to
double
check
that
miss
rate
is
the
rate
of
that's
our
rate
of
false
negatives.
That's
the
number
of
passing
fail
fast
jobs
that
are
followed
by
a
failed
pipeline.
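That definition translates directly into a metric. A sketch in Python — the record field names are hypothetical, not the dashboard's actual schema:

```python
def miss_rate(pipelines):
    """False-negative rate of the fail-fast job: among pipelines where
    the fail-fast job passed, the fraction whose full pipeline still failed."""
    passed_fail_fast = [p for p in pipelines if p["fail_fast_passed"]]
    if not passed_fail_fast:
        return 0.0
    misses = sum(1 for p in passed_fail_fast if p["pipeline_failed"])
    return misses / len(passed_fail_fast)

# Example: fail-fast passed in 4 pipelines; 1 of those still failed overall.
runs = [
    {"fail_fast_passed": True,  "pipeline_failed": False},
    {"fail_fast_passed": True,  "pipeline_failed": False},
    {"fail_fast_passed": True,  "pipeline_failed": True},   # a miss
    {"fail_fast_passed": True,  "pipeline_failed": False},
    {"fail_fast_passed": False, "pipeline_failed": True},   # caught early
]
print(miss_rate(runs))  # -> 0.25
```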
B
Yeah, I like your idea to break it out. If we can at least look at unit, integration, and system separately, that would be great — I can help with that.
B
You
need
some
help,
but
it
looked
like
you
already
broke
it
out
in
the
query
to
make
that
possible
too.
C
I
was
working
through
it,
half
like
in
the
middle
of
working
on
it.
Let's
see
cool,
but
I
haven't
got
it
go
down
to
the
charts
yet.
B
Awesome,
okay-
and
I
think
so
it's
hard
right
now
to
read
into
duration
and
cost
numbers
because
we
have
I'll
say
other
factors
that
are
increasing
them
outside
of
just
this
experiment.
So
it's
hard
to
make
any
judgment
in
isolation
of
this
experiment
on
how
the
cost
and
duration
is
being
impacted.
Is
that
fair
to
say
albert.
B
Yeah
cool,
okay:
it's
been
really
helpful
for
oh
go
ahead.
A
I
would
say
for
for
what
it's
worth
you
if,
if
we
wanted
that
number,
we
could
probably
back
it
out
from
the
miss
rate
and
the
the
time
time
to
failure
of
the
of
the
the
actual
of
the
the
true
negatives
and
the
false
negatives.
So
it's
I.
I
think
that
number
is
gettable.
C
Which
number?
Or
this
cos.
A
Could
we
could
go
with
the
time
to
failures
and
then
work
back
and
figure
out
the
cost,
but
observably
like
this
is
well,
I
think
a
lot
harder.
C
Yeah,
I
would
agree
with
that,
because
we
know
the
average
number
of
pipelines
not
average
of
failing
aspect,
jobs
and
the
cancellation
rate
yeah.
B
Yeah — maybe as we implement more with dynamic mapping, we can look towards that. It's possible that, using some other charts we have, we could see some more pronounced trends, but with just the current implementation we're not able to.
C
Yeah,
so
just
on
the
chart,
so
I
added
this
every
10
day
average
on
the
cost
of
field
pipelines
trail
mr
pipelines,
and
I
I'm
not
sure
if
it
is
this,
I
mean
there
might
be
other
reasons
for
it.
But
at
least
from
what
I
see
now
where
when
we
first
started
the
experiment,
they
showed
just
about.
B
Yeah, I liked your idea of just focusing in on where we know the pipeline was short-circuited — trying to see how much time savings and how much cost savings we get, just on that subset of pipelines.
A
But
fortunately
we
we
run
so
many
pipelines
that
we
could
probably
look
at.
We
might
even
be
able
to
look
at
it
on
a
daily
basis,
and
you
know
just
to
try
and
control
for
the
you
know
the
s
the
size
of
the
the
ci
suite.
If
you
will
in
a
single
day,
you
could
look
at.
You
know
how
much,
how
much
did
the
fast
failures
cost
and
how
much
did
the
long
failures
cost
on
a
day
and
have
we?
We
might
have
enough
pipelines
for
that
number
to
be
meaningful.
C
Okay, yeah — so what I was saying was: if we look at this chart, we're not seeing as many cancellations happening as we expected. I think it's actually a good thing that most pipelines are passing completely, based on what we think is the affected set of tests related to that MR. So this is something that is new to us as well — we thought that we would see more cancellations, but it's not as high as we thought it would be.
B
Yeah — Albert, thanks a lot for the time and for walking through this. I recorded it; I'll upload it to GitLab Unfiltered and share it in the issue, so that James and Ricky can check it out tomorrow morning, my time. Okay, sure, all right.
B
Yeah, thanks — thanks, Albert. Did you have something else you wanted to share here? No? Okay. All right, I'm going to hop off. Thank you, everyone, for the time, and I'll catch you later.