From YouTube: Quality (Mek) Group Conversation (Public Stream)
A
Hey everyone, this is Mek Stittri, Director of Quality, and this is the group conversation for the Quality department. As always, please write down your questions in the agenda and our team will get to them. I hope you got a chance to take a look at the intro we sent out earlier as well. With that, we will start taking questions.
B
Hey Mek, I don't see any questions in the doc, so I'll ask one, and it's one I can relate to. There was a goal to build synthetic monitoring, I think last quarter, maybe this quarter, and it got pushed out because of a lack of capacity. I'm not arguing with that at all, I totally understand it, but I was thinking: have we thought about making it smaller?
B
What is the most minimal implementation of synthetic monitoring we could do? I'd love to hear your ideas about that.
A
Yes, we discussed this in the issues, and we were talking about how to break it down even smaller.
A
I believe the original proposal we discussed was to use load testing, but it wasn't really the right fit for verifying the user journey, so the team did some investigation into reusing the GitLab QA framework as the foundation for this feature. I believe we may have another way forward by reusing some of the page objects that have been refactored out to power this feature. Those are the things that come to mind.
C
Yeah, we were investigating some other open source tools as well, to see if there was something we could use to do the actual user interactions, with our side doing the measurements: checking how long the pages take to load, how responsive they are, and things like that. So we're definitely still investigating that and still trying to be creative in how we can iterate on it.
B
I think the only thing we need is a periodic job. The "synthetic" part just means it happens even when there are no users on the site and nobody presses a button; it just runs every now and then. So maybe the smaller thing is: don't write any new load testing tools, just use the k6 we already have, but think about what that periodic job looks like in CI.
A
Thanks for that piece of knowledge. I think we will look into the periodic jobs and see if we can glue something together. I believe the goal is set for before Q2 next year, so we'll try to be more ambitious here and try to meet it earlier than that.
A
Thank you, Sid. I also want to add a call-out for the team: to help improve these features, we signed up to do some bug fixes this quarter in the Ops area. So even though we're not working on synthetic monitoring, there's work going on to solidify the knowledge of the SETs on our team, so they are more comfortable adding new features in the future. I want to make that broadly known; there's ongoing work there.
D
I just wanted to mention too, Sid, that we did start looking at k6 when we were investigating, and we didn't find it to be the best fit for synthetic user modeling, just because of the number of click-throughs we would generally do. But it's definitely something we can do at a small scale.
B
Kind of, yeah. I mean, normally with synthetic monitoring you want to test the full stream, like ordering a product or something like that, and I recognize that, but I think we've got to start somewhere. I don't think k6 is the tool to do it long term, but we're already using it somewhere else, so maybe just start there. Also, k6 seems to have a really good organization behind it, so if we have it in GitLab and we use it for synthetics, then we can maybe reach out and say: hey.
A
Yeah, and to add to that, I think we can swap out k6 later if we find better candidates, but leaning on our values of iteration and improving what we have, I believe using k6 would be a small iteration. So thank you for the feedback, Sid.
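As a rough sketch of what Sid is describing (a single synthetic "user" hitting a couple of pages on a schedule, not a load test), a periodic k6 check could look like the following. The URLs, thresholds, and schedule here are assumptions for illustration, not the team's actual configuration:

```typescript
// Sketch only: a minimal periodic "is the user journey alive?" check,
// assuming an existing k6 setup and hypothetical target URLs.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 1,          // one synthetic "user", not a load test
  iterations: 1,
  thresholds: {
    http_req_duration: ['p(95)<2000'],  // fail the run if pages get slow
  },
};

export default function () {
  // Hypothetical journey: load the sign-in page, then a project page.
  const signIn = http.get('https://staging.example.com/users/sign_in');
  check(signIn, { 'sign-in page returns 200': (r) => r.status === 200 });

  sleep(1);

  const project = http.get('https://staging.example.com/some-group/some-project');
  check(project, { 'project page returns 200': (r) => r.status === 200 });
}
```

A scheduled CI pipeline running this script every so often would give the "it just happens every now and then" behaviour without any new tooling.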
B
Mek, we've seen in merge requests especially that some kind of JavaScript reloading or something seems off. I'm phrasing this in a horrible way; it's Monday and it's my first meeting of the day. But, for example, when you post a comment on an MR, it disappears and then five seconds later it appears. There seem to be all kinds of small things like that, but they're in our most popular work stream, and they feel kind of broken.
A
Sure. We discussed this in depth at the transient bugs working group. The update I have from Tim and Andre is that we need to get better at managing state in our workflows: when a user does an action, we may trigger multiple endpoints or updates, and we need to manage that state better.
A
We saw your feedback on the bugs that you encountered, Sid, and I think an action item for the working group is to identify the themes of these transient issues and then start addressing them at the root cause. It could be getting better at how the front-end components talk to the back-end components and establishing some best practices there, but we are really keen on identifying the key themes where we encounter these transient issues. That's the update we have currently.
B
I don't see anybody else in the doc, so I'll do a third one, but then really someone else needs to step up and ask questions. Our pipeline is 54 minutes, and if I think about my days as a programmer, most of the time the pipeline fails, which is annoying, and you really want to get that failure as soon as possible. If it succeeds I don't care anymore, because then I don't have to do anything.
B
If it fails, I have to fix something, and it's annoying if I've already started another task. So we're now measuring the speed at which the pipeline completes, whether it succeeds or fails. I think we're actually measuring the time to complete, but it's almost like we should be measuring the time it takes the pipeline to fail, as against measuring the time to success. The time to failure is more essential for my job as a developer.
B
I think we have these big dreams of how we can run the most relevant tests first. The ultimate version would be using GPT-3 to identify those tests, the medium version would be using, I don't know, ML to figure out which tests are most likely to fail, and the smaller version would be to just run the tests that fail most often more frequently. Maybe the minimum version is to run the tests that failed the last time immediately; we could even spin up a separate job for this. I don't know what you think.
A
Thank you for that, Sid. Yes, I recall you gave us this feedback in the last key review as well. We plan to add new PI measurements on the time to failure and try to improve that and make it faster.
A
We can follow up with you on the detailed issue and then plan for that. But yes, we do plan to do something like that with the pipeline steps. It's likely that we are in a better state now in terms of measuring success and failure. I think we will either move the success rate to a PI, or add a focus on measuring the time to fail and try to make that faster in our day-to-day work.
A
We could run it via artifacts, so I have jobs that talk to the last one: I pull up the last job, and one of the outputs of the job artifacts would be the list of tests that failed.
A
And then it's like a chain of dominoes: hey, I'm just going to look at the last job, look at its list of artifacts, and run those tests first. Each job has the job of outputting the list of tests that failed towards the end, so the next job can look back. That might be easier than we thought.
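A rough sketch of that domino idea, under the assumption that each test job writes a small artifact listing its failed specs and the next pipeline reads it before ordering the suite; the file name, report shape, and helper names here are hypothetical:

```typescript
// Sketch only: pass "what failed last time" forward via a job artifact
// so those specs run first and a failure surfaces as early as possible.
import { readFileSync, writeFileSync, existsSync } from 'fs';

interface TestReport {
  failures: string[]; // e.g. ["spec/models/user_spec.rb", ...]
}

// At the end of a test job: record which specs failed as an artifact.
export function writeFailedTests(report: TestReport): void {
  writeFileSync('failed_tests.json', JSON.stringify(report.failures, null, 2));
}

// At the start of the next pipeline: read the previous job's artifact
// (fetched beforehand, for example via the jobs/artifacts API) and put
// those specs at the front of the queue.
export function orderSpecs(allSpecs: string[]): string[] {
  if (!existsSync('failed_tests.json')) return allSpecs;
  const lastFailed: string[] = JSON.parse(readFileSync('failed_tests.json', 'utf8'));
  const rest = allSpecs.filter((s) => !lastFailed.includes(s));
  return [...lastFailed.filter((s) => allSpecs.includes(s)), ...rest];
}
```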
F
Sid, could you clarify something for me? Did you mention that you find failure a bit more important than success in this case? Was that what you said?
B
Obviously it should be correct, right: a failed test should represent bad code and a passing test should represent good code. But assuming the output of the test is correct, as a developer I have two options: I get failed tests in five minutes and successful tests in an hour, or vice versa.
B
I'd really like my failed tests to come back much faster, because as I'm developing, most of the time the tests fail, especially when I need CI to do something: it's a test and I don't really understand why it's failing, so I keep re-running it. I need that a lot and I'm watching it; I have to wait for it to complete before I do my next iteration. A successful test, on the other hand, isn't really input, right?
F
Well, I would slightly disagree with that, not fully but slightly, because the duration of a successful test is also important when you actually chain things together and you need to roll things out and deploy further on. It has consequences further down the line, so I like that we are measuring the duration as well. I would like to see it run in five minutes too, because that has an effect downstream.
A
I totally agree with that. We need an equal view on both sides: if we only look at success we're gearing too much toward the positive news, and we need to see if we are missing out on anything, and looking at the last failed tests, if we do it correctly, is another metric to look at. I do want to call out that the engineering productivity team is working on a similar line of work.
It's on slide 15, the second KR, on reducing the MR pipeline duration.
A
Once this is shipped, we think it will lower the pipeline duration further, and this, paired with looking at the last failure, could make us even more effective at running the most relevant tests, the ones which failed recently, in the pipeline. So I think it helps when these things work together.
G
Yeah, and I would just add to that: I linked to the issue for dynamic test mapping. The whole goal of that is to accelerate failure. By using Crystalball, we can dynamically determine which tests are more applicable to the code change, run only those tests in the MR, and get the feedback from them sooner, versus waiting for a lot of tests which may not apply to the MR in question to run and finish before getting that total feedback time.
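Crystalball itself builds that mapping from coverage data; as a toy illustration of the general idea (not GitLab's actual implementation), the selection step amounts to something like this, with the map contents and file names invented for the example:

```typescript
// Toy illustration of dynamic test mapping: a map from source files to the
// specs that exercised them (built offline from coverage data), used to pick
// only the specs relevant to a change.
const testMap: Record<string, string[]> = {
  'app/models/user.rb': ['spec/models/user_spec.rb', 'spec/features/signup_spec.rb'],
  'app/services/billing.rb': ['spec/services/billing_spec.rb'],
};

export function specsForChange(changedFiles: string[]): string[] {
  const selected = new Set<string>();
  for (const file of changedFiles) {
    for (const spec of testMap[file] ?? []) selected.add(spec);
  }
  return [...selected]; // run only these in the MR pipeline for faster feedback
}
```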
A
Thank you, Kyle, and that's point four; I added the KR, which was 1.9. We have a question other than Sid's: Amanda, point five, would you like to vocalize it?
H
Sure. I have new headphones, can everybody hear me okay? Okay, great, thank you. Yeah, I just wanted to highlight that the quality team has been doing really great work to automate end-to-end testing for fulfillment systems. I was wondering if you could highlight, for everybody not involved, what has been done and where you're going with it.
A
This is where I would gladly pass it on to Vincy, who is on the call. Yes.
I
As you probably know, we only have two SETs in fulfillment. The first joined in Q2 and the second joined in November, so we only had one SET for Q2 and Q3, and luckily in Q4 we have two. To give an overview of what we have done so far: as part of Q3 we had an initiative within Growth (fulfillment was part of Growth at the time) to identify the most critical SaaS flows from a test coverage point of view, out of which the quality team has automated six critical SaaS flows.
I
So we have that test coverage at this point in CustomersDot. Along with test coverage, the team has focused on integrating these end-to-end tests with the pipeline, so for every MR that goes into staging we have our end-to-end tests running along with it. This has enabled us to identify defects in the staging environment when MRs have been pushed forward.
I
So that's a great achievement for the team as well. And then, of course, as part of the end-to-end tests, the primary goal is to integrate CustomersDot with GitLab.com, and that integration is also exercised as part of each flow, so we are able to identify defects that happen as part of changes in GitLab.com as well. There is more work on that side, though; it's still in its initial phases as of now.
I
Those are the three main pieces of critical work that have been completed by the team. As for what we're looking forward to completing, especially now that we have two SETs: one is to continue to grow our end-to-end test coverage. As I mentioned, we have six critical flows automated and I believe we have two more to go; this is the initial smoke suite that has been automated, and we do have to build on top of the existing flows as well. The next thing, and one of the KRs for this quarter, is to improve our production deployment.
I
This also requires the capability of quarantining a test case, meaning that if a test case is failing not because of a bug in the system but because the test itself probably needs some added work, then we can quarantine that test case. That way 100% of the tests pass at all times, and failures are catching defects, not problems within the tests themselves. Wait, I'm just looking at what else we have done.
I
Yes: continue to add more test coverage on GitLab.com. Right now the team is focusing on CustomersDot only, but the plan is to also have test coverage for the billing pages and user registration on GitLab.com, and one of the final KRs for this quarter is to provide test coverage for self-managed customers as well. These are the high-level areas we're planning for Q4 and maybe moving into Q1 of next year.
A
Cool. I also want to add a pointer there: please see slide 43 onwards, which should capture the rough information that Vincy vocalized. Sid, you have number six.
B
Yeah, I want to call out that the review app success rate is at 99% and that's amazing. Thanks for achieving that goal, that's super cool. I also just want to say thanks for the slide deck. I'm hesitant to compliment it because it seems like a lot of work, so feel free to do less, but it was interesting to read in general.
A
Thank you very much for the compliments; I will echo that to the team. In terms of the number of slides, yes, we're trying to tone down the number of slides. I'm actually thinking that we will reuse them in the counterpart group conversation as well, so each of the managers can just rotate to copy-paste and make sure they get used.
A
Great, I think we're almost out of time, and that's a good number of awkward silences for the quarter, so I'll end the call. I want to thank everyone for joining and reading our slides. We'll see you in the next conversation.