From YouTube: Frontend pairing - Jest speed reporter (Part 1)
Description
In this session we start looking into a GitLab CI solution to generate a report for slow Jest tests.
Part 2: https://youtu.be/BgFLT4oOwa8
Result: https://gitlab.com/gitlab-org/frontend/playground/jest-speed-reporter/
A: Maybe like this, I don't know. If you can... can you do variables? And I wouldn't deviate too much from our coverage, the way we're doing it, and the coverage one.
B: Just saying, like this, and we should be good. The problem is `reports: junit`, so the CI reference.
B: The question is: can you use a variable in that or not, in the `junit` coverage report keywords? So the only thing that I'm saying is, like, yeah.
That makes sense. When are artifacts created? Because otherwise we could just do `after_script`, right, and then we could just copy (yep) this one into junit, the report.xml (yep), and then we are happy.
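That `after_script` copy could look roughly like this in `.gitlab-ci.yml`. This is a sketch only: the job name, commands, and the `junit_frontend/` path are assumptions from the conversation, not the actual config; `artifacts:reports:junit` does accept glob patterns.

```yaml
jest:
  script:
    - yarn jest --ci
  after_script:
    # Copy whatever the reporter produced into a predictable directory.
    - mkdir -p junit_frontend
    - cp report.xml junit_frontend/junit_frontend.xml
  artifacts:
    when: always
    reports:
      # GitLab parses every matching XML file for the test report UI.
      junit: junit_frontend/*.xml
```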
B: And at www.gitlab.com, for the about website. Oh.
B: Then we can... oh, we defined it here, right? So `junit_frontend`, we want to have `junit_frontend`. We just want one. You know, to be honest, like, every XML in that directory, right? Locally I'm just going to create that, and then I'm going to... all right, thank you.
B
This
is
like
what
I
don't
want
to
whole
artifact
shenanigans
command
s.
Oh
it's
a
web
page!
Okay,
let's
try!
If
we
can
download
it
like
this
yeah
c
d,
j
unit
front
end.
B: `-O -L`, is that download? Yeah, it looks like it. Oh, you rock. And this one: `mv junit`... what did we say? We just basically want to have, I don't know, something like this, like junit, because we glob all of the XMLs. It actually doesn't matter what the file is called.
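The `curl -O -L` download they try maps onto GitLab's job-artifacts REST endpoint. A minimal sketch of building that URL follows; the host, project path, ref, file path, and job name are placeholder examples, not the session's real values.

```javascript
// Build the GitLab REST API URL that downloads a single artifact file
// from the latest successful job on a ref:
// GET /projects/:id/jobs/artifacts/:ref/raw/*artifact_path?job=<name>
function artifactUrl({ host, projectPath, ref, artifactPath, jobName }) {
  const project = encodeURIComponent(projectPath); // "group/project" -> "group%2Fproject"
  const file = artifactPath.split('/').map(encodeURIComponent).join('/');
  return `https://${host}/api/v4/projects/${project}` +
         `/jobs/artifacts/${encodeURIComponent(ref)}/raw/${file}` +
         `?job=${encodeURIComponent(jobName)}`;
}

// Hypothetical example values; the real project and job names differ.
const url = artifactUrl({
  host: 'gitlab.com',
  projectPath: 'gitlab-org/gitlab',
  ref: 'master',
  artifactPath: 'junit_frontend/junit_frontend.xml',
  jobName: 'jest',
});
console.log(url);
```

The URL can then be fetched with `curl -O -L` plus a `PRIVATE-TOKEN` header for private projects.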
B: Maybe I have some, like, you know... great: `coverageMap is not defined`. We don't want that. Yeah, that looks good now. `npm junit` is definitely a junit package.
B: What I was asking, so that...
A: What Stack Overflow suggested is that I use xml-to-json: it's `xml`, the number `2`, and then `json`.
F
That's
literally
what
I
just
posted
in
this
zoom
chat.
B: Minus b... junit to json.
F: What I pretty much think he did was: he created a new CI job, downloaded some artifacts that we already have, and then he had a script on that job that would read that data, parse the XML to JSON, and then try to find the slowest jobs out of there.
A: Yeah, yep. And I think all we really should do is just sort it by time and print the test time. I don't know if we need to do the whole test file, or if we want to do just an individual spec.
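The "sort by time and print it" step can be sketched with plain Node and a regex over the junit XML. This is a rough sketch only; the real reporter in the linked project may well use a proper XML parser (such as xml2js), and the fixture below is made up.

```javascript
// Extract <testcase name="..." time="..."> entries from a junit XML string
// and return the N slowest. Regex parsing is a shortcut that relies on the
// `name` attribute appearing before `time`, which jest-junit output does.
function slowestTests(xml, n = 10) {
  const cases = [];
  const re = /<testcase\b[^>]*\bname="([^"]*)"[^>]*\btime="([^"]*)"/g;
  let m;
  while ((m = re.exec(xml)) !== null) {
    cases.push({ name: m[1], time: parseFloat(m[2]) });
  }
  // Sort descending by duration and keep the top N.
  return cases.sort((a, b) => b.time - a.time).slice(0, n);
}

// Tiny hand-made fixture to show the shape of the output.
const xml = `
<testsuites>
  <testsuite name="spec">
    <testcase name="renders fast" time="0.012"/>
    <testcase name="mounts the whole app" time="4.8"/>
    <testcase name="renders slowly" time="1.3"/>
  </testsuite>
</testsuites>`;
console.log(slowestTests(xml, 2));
```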
F: To me it would be useful to have... I mean, yeah, that's nice on the CI pipeline, like our normal pipeline, but if we could do some sort of notification system, like, you know... yeah.
F: Dropping it in a channel or something, like: hey team, you know, this file path just finished at 30 seconds. I would love that, because I'm thinking it's going to be a pain to go into the actual pipeline every time you want to see the tests.
A: Yeah, so what do you think it would look like for us to create some sort of Slack bot that could potentially just post, you know, here are the top 10 slowest... you know, the files with the slowest tests? That's what I would think is the most important thing.
F: This may be a good candidate, then, for maybe a separate domain that we could have bookmarked, that you can regularly check for, like, the performance of the tests. It may be quicker, you know, if Slack could potentially get too...
A
Noisy
that's
a
good
point
and
that's,
I
think
what
ip
was
suggesting
too.
Let's
talk
about
like
reading
something
from
pages,
oh,
how
he
creates
all
those
like
those.
You
know
small
side
projects
that
basically
read
these
reports
and
just
present
it
and
that's
one
way
we
could
do
it
for
sure
that
would
be
nice.
E: I was thinking maybe doing it the Prometheus way. What we could do, in order to limit the cardinality a little bit and not destroy your infrastructure, is just get the front-end specs.
E: What we could do is an average on every single folder: the average amount of time that we spend running the tests in that specific folder. Because if we do individual files, we will cause a lot of disk usage, and we kind of don't want that, because Prometheus already, you know, uses a lot of disk space. And with averages we can do alerts, and we can have this on the Grafana dashboard or whatever, where we can say: hey, our goals are, like...
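That per-folder averaging is simple to sketch. The file paths and timings below are made-up examples, and grouping at the third path segment is an arbitrary choice for illustration.

```javascript
// Average test duration per folder, so Prometheus only needs one label
// value per folder instead of one per file (keeping cardinality down).
function folderAverages(results) {
  const sums = new Map(); // folder -> { total, count }
  for (const { file, time } of results) {
    // Group by the first three path segments, e.g. "spec/frontend/boards".
    const folder = file.split('/').slice(0, 3).join('/');
    const s = sums.get(folder) || { total: 0, count: 0 };
    s.total += time;
    s.count += 1;
    sums.set(folder, s);
  }
  const out = {};
  for (const [folder, { total, count }] of sums) {
    out[folder] = total / count;
  }
  return out;
}

// Hypothetical data points.
const averages = folderAverages([
  { file: 'spec/frontend/boards/board_spec.js', time: 2 },
  { file: 'spec/frontend/boards/list_spec.js', time: 4 },
  { file: 'spec/frontend/notes/note_spec.js', time: 1 },
]);
console.log(averages);
```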
E: Prometheus has something called cardinality problems. Because what we would do, in order to classify each one of our files, is create a label for each file. The problem is that the more labels you use, the more resources Prometheus will take from your system. So, in order to mitigate some of that cost, we could just create labels based on folders and send the data there as averages.
F: I don't think, though, my suggestion had anything to do with that. Like, I don't think it's going to matter for disk space, because each pipeline is still going to, you know, generate those artifacts no matter what; we're just going to take advantage of the artifacts that are already generated.
A: That's true. I think you're talking about, Jose, though, getting it reporting in Grafana. Yep, we want that data. If we do it too granular, it shuts everything down, because it's at that level. If we wanted to have some nice Grafana reporting, we may have to do it at a higher level. Exactly, yeah. That.
D: Yeah, well, yeah, speaking about it: the tests, because we run them in an emulated environment, in theory should not be flaky, and we have to resolve it.
A
Yeah,
what's
I
think,
what's
what's
difficult
is
that
the
the
speed
of
these
tests
can
be
so
flaky,
not
necessarily
the
results
of
it,
but
just
because
you
know,
I
guess
we're
all
using
shared
runners,
so
it's
dependent
on
the
the
amount
of
load
on
the
runner,
but
also
sometimes
it
seems
like
you
just
feel
sick
some
days,
and
I
don't
know
yeah,
I
I
don't.
A
I
don't
really
know
what
that
what
that
end
result
is
gonna
look
like
of
how
we
are
aware
of
this
stuff,
but
just
pulling
it
from
the
ci
and
having
a
way
to
query.
It
sounds
like
the
best
first
step,
so
I
think
ip
got
us
like
75
percent
of
the
way
there
does.
Someone
want
to
pick
it
up.
A: I'm looking over the changes IP made.
A: Man, these XML files are really large.
E: I mean, the less memory we use the better; we're already using, like, eight gigabytes of memory on our pipelines. Oh.
D
You
can
just
use
x
after
query
required
attributes.
F: ...makes sense, yeah. But I don't think this should exist in our pipeline, that's for sure. Yeah.
A: Well, I was hoping to have one job that was just really readable. But maybe, if we're already needing to build a view on top of it or something, we should just solve it from there and not introduce something that could take up a good amount of memory. Yeah, those are just my thoughts; they could be...
A: So we could try, instead of... because right now, and this is a good point, what we're building is the thing that's going to read the files and then kind of aggregate them. So let's just read the files from a URL, right? Yeah, I think that's a great idea. Do you want to try that out, or I can?
F: Oh yes, yes. You can go query project: so I've got a project, yeah, and then pipeline.
A: You write it? I do? Okay.
F: I'll hush! No! No! No, please don't! I think I may be wrong, but, like... so you're trying to write a query to pull that data. Is that right? Yes.
F: Yeah, the latest pipeline run on master, always, and only those four jobs, yeah. I don't know what `after` means.
A
I
don't
think
it
likes
my
query:
okay,
yeah
after
pre-test
before
post-test,
I'm
looking
for
just
yeah.
Okay,
all
right,
I
blew
it
up.
A: And so then I can go to this list of groups, I guess, and the jobs, yeah. So I guess I can just get everything here, and I guess that's fine. But I wonder, is there a way for me, given a project, to look at jobs instead of just looking at pipelines?
A: I'm not talking... yeah, no. It seems like GraphQL might not be the way to do this; maybe there's something in the REST API.
A: I don't know what happened; I don't know what I'm looking at. So, the groups versus the jobs. Okay, so I'm not really interested in the jobs, because I really wanted to see: hey, do we have the Jest one in here? Oh, can I look up a group by name? That's the part that really bothers me: I can't. Anyways, okay, okay, so we can get, you know... I can use this to see, all right, what is the pipeline ID.
E: Yeah, that's probably specific stuff.
A: Okay, so given a specific job ID, then we can get the artifacts. Cool. Do you want to try? All right, so here, let's just say I got the latest pipeline. I'm going to open up another GraphQL explorer; I'm just planning all this out. Let's then see if I can find the specific job ID, and then I was going to curl this and see if we can get to it. Yeah.
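The query being sketched in the explorer might look roughly like this. The field names are from memory of GitLab's GraphQL schema and may need adjusting in the explorer, and the project path is a placeholder, not the session's real project.

```javascript
// A candidate GraphQL query: jobs (and their artifacts) of the latest
// pipeline on master. POST it to https://gitlab.com/api/graphql as
// {"query": query}. Field names are assumptions to verify in the explorer.
const query = `
  query {
    project(fullPath: "gitlab-org/gitlab") {
      pipelines(ref: "master", first: 1) {
        nodes {
          jobs {
            nodes {
              name
              artifacts {
                nodes {
                  downloadPath
                  fileType
                }
              }
            }
          }
        }
      }
    }
  }
`;
console.log(query.trim());
```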
A: That sounds right. All right, here I'm querying something; I guess I'm going to project.
A: Yeah, yeah, Peyton brought up a really good point: we looked at this, and these files are kind of humongous, and the data already exists, and we want to present it anyway. So why don't we just let whatever is presenting it do the fetching of the files and the aggregating or whatever, rather than, you know, having our pipeline be burdened with reading these huge files into memory and all that stuff? That was the suggestion.
B: We could. In that case, I would highly suggest going similar to the webpack memory reports that we're doing, and basically just creating an external project that queries those few things from our API. Yeah.
A: That was the plan, yep. Yeah, doing something on frontend/playground that'll do that, some sort of download script, yeah. But this has been interesting. I am probably going to create that front-end project if I get really bored today.
B: Right, the REST API is literally like the thing that I've written for the failures, yeah, because the...
B: Yeah, it looks perfect. Let me just add my script and make it download the XML, and then we can go from there. It also has the benefit that we can add, like, a templating engine or whatever in that pipeline and just publish it to Pages. Because the problem is, that's usually also something that you don't want to add to our GitLab project, right? If you want to use something like Pug or whatever. Right, right, yeah.
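The external-project idea (fetch the artifacts, render them with a templating engine, publish to Pages) could be wired up with a Pages job along these lines. Everything here is an assumption for illustration: the script names, the Pug choice, and the schedule trigger.

```yaml
# .gitlab-ci.yml of the hypothetical external reporting project
pages:
  script:
    - node scripts/fetch-junit-artifacts.js            # pull XML via the REST API
    - node scripts/render-report.js > public/index.html  # e.g. a Pug template
  artifacts:
    paths:
      - public
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```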
A: Cool. Well, this was fun and educational, and yeah. So this is... oh, I created... I didn't create it in the playground, I just created it under frontend. Oh.
A: Well, the front end's hard: can you move it to the...? No. I think when I became a maintainer it was just, like... because some people are maintainers of the gitlab-org group and some people are...
A
I
was
one
of
the
people
that
I'm
just
maintainer
of
the
project,
but
I
think
it's
just
a
it's
just
an
admin
issue,
but
I
don't
think
I
have
push
access
to
this.
That's.
A: Oh, you're right. Oh well, okay! Well, everybody have a great rest of the day, and thanks for hopping on. This is, this is fun. So yeah, I'll catch you all later. All...