Description
Distribution team demo on the Allure report that gets generated while running the qa-test job. Also a brief overview of the QA test jobs, as well as how to make the job post a comment on the MR.
Hello and welcome to the distribution team demo. It's June 15th in the APAC region, and I am Vishal Patel; I am the quality counterpart for the distribution team. Today we'll be doing a demo on the Allure report, and we'll also be touching a bit on the QA pipelines that run in the Omnibus project.
Cool. So when I talk about the Allure report, I'll be mainly focusing on the report comment that gets generated in the MR. So whenever a developer creates an MR, there will be a comment which gets generated through various processes once the pipeline has started. There will be just one comment for the testing that gets generated.
So if we look at the Omnibus GitLab project and you create an MR, there will be a pipeline which looks like so. There are two manual jobs, called Trigger:ce-package and Trigger:ee-package, which you need to kick off. Depending on which package you want to test, you kick off one of those jobs, and once you do, it triggers a child pipeline. So let's go into one of the pipelines and see what it has.
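As a rough sketch of what that trigger setup looks like in CI config (job names match the ones mentioned above, but the include paths and everything else are assumptions, not the real omnibus-gitlab config), a manual job that starts a child pipeline might look like this:

```yaml
# Hypothetical sketch of the two manual trigger jobs described above;
# include paths are illustrative, not the real configuration.
Trigger:ce-package:
  stage: trigger
  when: manual            # kicked off by hand from the MR pipeline
  trigger:
    include: .gitlab/ci/qa-ce-child-pipeline.yml   # hypothetical path
    strategy: depend      # parent job mirrors the child pipeline's status

Trigger:ee-package:
  stage: trigger
  when: manual
  trigger:
    include: .gitlab/ci/qa-ee-child-pipeline.yml   # hypothetical path
    strategy: depend
```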
Once all these tests are run, there is an end-to-end test report job which runs, and this is the job which does all the collating: it combines all the reports into one report, uploads it to an S3 bucket, and also posts a comment in the MR. So you can see this report URL heading and this "updating" message; that's what tells you that a comment has been posted in the MR.
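For context, a collation job like that typically depends on all the QA jobs and pulls in their Allure results as artifacts before publishing. Here is a minimal sketch under that assumption; the job names, paths, and wrapper script are made up for illustration:

```yaml
# Illustrative only; not the actual omnibus-gitlab job definitions.
qa-instance:                     # one of the QA test jobs (hypothetical name)
  stage: qa
  script:
    - run-qa-suite               # hypothetical test entrypoint
  artifacts:
    when: always
    paths:
      - allure-results/          # per-job Allure results to be collated

e2e-test-report:
  stage: report
  when: always                   # publish the report even when tests failed
  needs:
    - job: qa-instance
      artifacts: true            # pull the job's allure-results in
  script:
    # collate all results into one report, upload it to the S3 bucket,
    # and post the comment on the MR; the actual publisher command is
    # shown later in this demo
    - publish-combined-report    # hypothetical wrapper script
```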
So if we take a look at the comment, yeah, this is what you will be seeing once you run either the Trigger:ee-package or the Trigger:ce-package job.
This will basically show you all the test failures. It shows you, by group, where the failures are in each group; it will show you all the passed, failed, and skipped tests, any flaky ones, and the total number of tests that have run. You can access the report by clicking here, and this is the commit ID that the tests ran on. So let's look at the report. So this is the report.
Let me show you a more comprehensive comment. So this is an example of a more comprehensive comment: I've run a CE trigger job and an EE trigger job, and if you run both, this is how the comment will look. Just ignore these replies, because we won't be having those in the newer version of the report comment; that was the previous one.
So just ignore this, but this is what the report will look like if you run both the jobs. And if you go to one of the reports...
This is what the Allure report looks like. It will show you all the failures, what percentage of the tests have passed, and the total number of test cases. It shows you the historic trend of the tests depending on the pipeline: this is the pipeline number, and this is the previous pipeline that ran, so you can also go to your previous pipeline and see the report over there as well. That is your previous historic pipeline.
The failures are classified under two categories: product defects and test defects. The product defects are basically any failures which have to do with a test expectation. So if you encounter any expect failures in the test, they will be lined up over here and categorized as a product defect. And if there are any other generic defects, like environment failures, 404s, 502s, or any such failures, they'll be under the test defects.
So if you go under one of the product defects, like, see, you can see it's expecting certain things to be true, but it got false. So you can see all the failures over here: how many failures, all the test failures combined. You don't have to go into each and every job to look at all these failures.
So if we take one of the failures over here, you can see screenshots attached, you can see logs attached, and you can see two iterations of it, because this test was retried twice; that's why you have two screenshots, two HTML files, and two browser logs.
So this takes you to an existing failure issue, if there was one. And there might be multiple failure issues as well, depending on the stack trace that we might have. So you will just have to go into a couple of the ones which are open, look at the failure, and if it's similar to what you have been encountering, it might be an existing failure.
So essentially, this might make you a bit more independent in terms of figuring out whether the failure is related to your MR or not. And if there is any confusion around that, you can obviously contact the quality team; me or Nylia, either one of us might be able to help you with that. So this has links to failure issues, which is one important thing that might be useful to the distribution group. There's another one: test cases.
This may not be that useful to you, but it's just something to know about. Each test is linked to a test case, and you can see the history of each test case, whether it has passed or failed, along with any comments from the analysis that we as quality people do. So you might find comments over here on why it's failing, whether it's a flaky test, or any other discussions around the test.
If you go to the overview, you'll be able to see which version of GitLab the tests ran on, the revision, and the package version as well. If you want to go directly to the pipeline, you can click over here and go straight to the pipeline; this takes you to the test pipeline that was running earlier. And this is the historic trend that we've already looked at. So I think those are pretty much the main things that the distribution team might be needing.
There are other things in the suite as well, but I think we can skip those for now. The other thing that I wanted to show was around these three, I guess.
We had a question from one of the team members in one of our weekly meetings, about some jobs which show the same name repeated multiple times: like, for example, the instance job, the package job, or the Praefect job. So, just to make it clear, these jobs aren't the same job running again and again. The instance job, for example, is running a Test::Instance::All scenario in our test suite.
What that scenario does is run the entire suite, and the way it's programmed is that the entire suite runs in parallel; that's why you'll be able to see the five sub-jobs, if I may say, within the instance job. It's not a repetition. If you go ahead and compare the tests, you'll see they're each individual tests, and they basically show you that these are all the tests that are running, and they should all pass.
So, in case there are any failures, you can probably retry, but there shouldn't be any assumption that it's the same job running again and again. The same thing goes for any of the other sub-jobs as well: they are running different tests in parallel.
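The mechanism behind those repeated job names is GitLab CI's `parallel` keyword. A minimal sketch, where the stage, script, and package variable are assumptions rather than the exact gitlab-qa configuration:

```yaml
# `parallel` fans one job definition out into N sub-jobs (shown in the
# UI as instance 1/5 ... instance 5/5), each running a different slice
# of the suite.
instance:
  stage: qa
  parallel: 5
  script:
    # Test::Instance::All is the gitlab-qa scenario mentioned above;
    # the exact invocation and package argument are assumptions
    - gitlab-qa Test::Instance::All "$PACKAGE_URL"
```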
A
Yeah
and
going
back
to
the
report
we
have
some.
This
is
the
latest
report
that
you
will
be
seeing
right.
We
have
some
improvements
that
are
still
happening
in
this
report.
Currently,
if
you
see
there
is
a
failure
in
the
test,
but
the
pipeline
is
still
passed.
A
Right,
the
pipeline
is
still
passed,
but
the
if
the
failure,
the
the
tests
have
still
failed.
So
there
are
some
improvements
that
we
are
still
will
be
doing
and
it's
under
discussion
how
to
tackle
such
kind
of
things
or
bring
it
to
notice
to
developers
that
the
test
has
failed
and
you
need
to
resolve
it,
or
at
least
see
that
you
know
it's
not
related
to
your
Mr
before
you
go
ahead
and
progress
with
the
merging
of
the
SEMA.
This might not be directly helpful to the distribution team, but it's just for the quality folks: the distribution Omnibus GitLab pipeline makes use of the GitLab project as well, so it's calling the qa-test job in the GitLab project. And what the qa-test job does, essentially, is this:
It calls the pipeline-common project, which is basically a collection of the common things that the tests will be using, and one of those is the Allure report publisher gem. That's a gem which has been specifically designed for GitLab; it also works for GitHub, but on the back end it uses the Allure CLI.
A
What
it
does
is
that
it
collates
each
of
this
tests
that
generate
a
report
individually,
and
what
this
gem
does
is
that
it
collates
all
that
all
those
reports
it
makes
it
in
201
report
and
it
uploads
it
to
an
S3
bucket,
which
is
acting
as
a
static
host,
and
you
can
access
that
as
directly
using
a
link.
A
A
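Based on the allure-report-publisher gem's public README (flags may have changed since, so treat this as a sketch; the job name, glob, and bucket variable are illustrative), the upload step looks roughly like:

```yaml
# Sketch only; not the project's real report job.
generate-allure-report:
  stage: report
  when: always
  script:
    - gem install allure-report-publisher
    # merges every per-job allure-results directory into a single report,
    # uploads it to the S3 bucket acting as a static host, and posts or
    # updates the report comment on the MR
    - allure-report-publisher upload s3
        --results-glob="qa-results/**/allure-results"
        --bucket="$ALLURE_REPORT_BUCKET"
        --update-pr="comment"
```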
So, to make the qa-test job post a report in the MR, these are the two environment variables that we had to pass. The first one is the auth token, which will be used by the Danger bot to give it access to post a comment on the MR in that project, basically. And the second one is the merge request IID.
If you've noticed in the pipeline, when we run the pipeline, there are multiple child pipelines that are created: there is one for the QA test, and there is one for the Trigger:ee-package. So we are passing the merge request IID of the parent pipeline, so that the gem, the Allure report publisher gem, knows which MR it's supposed to post the comment on.
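Putting that together, the hand-off might look roughly like this. The variable names are assumptions based on this demo and the gem's README, not confirmed against the real config; $CI_MERGE_REQUEST_IID is a predefined GitLab CI variable that exists only in the parent MR pipeline, which is why it has to be passed down explicitly:

```yaml
# Sketch of the two variables described above being handed to the child
# pipeline; variable names on the left are assumed, not confirmed.
Trigger:ee-package:
  stage: trigger
  when: manual
  variables:
    # token the bot uses to post/update the MR comment (assumed name)
    GITLAB_AUTH_TOKEN: $DANGER_BOT_API_TOKEN
    # parent MR IID, so the publisher gem running in the child pipeline
    # knows which MR to comment on (assumed name)
    ALLURE_MERGE_REQUEST_IID: $CI_MERGE_REQUEST_IID
  trigger:
    include: .gitlab/ci/qa-ee-child-pipeline.yml    # hypothetical path
    strategy: depend
```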