A
Yeah, cool. I just want to hit record before I forget. Sometimes there's a lot on top of master being broken from about 11 a.m. to 6 p.m. my time; it always feels like a lot.
B
I see. Okay, before we start, let me put this out there first so you know the intention: you are closer to the challenge than I am, so by all means feel free to override what I suggest. That should be the default reaction here, because I'm not with you on call, so you should be the one I'm listening to.
A
Yeah, I don't think it's a bad idea, and I know it's been communicated separately. That's why my question was whether this is just a higher priority; we can always shuffle it around and get to the other thing as well. Also, the discussion right now is between you and me, so I want to get some other people involved. My perspective on on-call might be unique to me, so I don't want to make a decision based only on what I'm experiencing.
B
Also, shall I set the context for the people watching the call? Of course. Do you want me to stop and then restart? No, it's fine, it's totally fine. So this call is about deciding what would be more impactful to help with the on-call triage process. I'll start with where I'm coming from first, which is creating fewer issues on failure, and just focusing on getting a test session out.
B
To add on to that, we discussed this briefly in the department call, I think three weeks back, and some of us may have lost some of the context there. So I'll just do a recap. The brainstorm here, with a couple of comments from Sanad, is about condensing the places where people need to look for results.
B
Is it a flaky test or a stale test? Those issues get created every week depending on who is on call. Sometimes, if there's a good handshake, people reuse the issues created last week, but if it was quiet for three weeks, I've noticed someone will create an issue that turns out to be a duplicate, because the failure that came up four weeks ago didn't happen again for another three or four weeks, so nobody paid attention to it and we lost track of it.
B
We lost the breadcrumbs, and fast forward four weeks later, someone ran into that issue again. The search didn't help, or it could be that the names weren't spelled out correctly.
B
So let's aim to address it by creating fewer issues instead of creating more, and that's where I'm coming from: instead of having a failure issue that has... what is a good explanation here? Should I just pull it up from here? ...we automatically create one QA issue.
B
But with this you get maybe a list of 50 test cases that passed and failed, and instead of discussing in here, you can work with the whole cohort of the on-call team in this one issue, grouped by the session.
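As a rough sketch of the grouping described here, assuming hypothetical result records and field names (not the actual test-session schema), the idea is one roll-up per session instead of scattered per-failure threads:

```python
from collections import defaultdict

# Hypothetical result records from one on-call shift; the schema is
# illustrative only, not the real test-session format.
results = [
    {"session": "staging-deploy-1", "test": "login_spec", "status": "failed"},
    {"session": "staging-deploy-1", "test": "signup_spec", "status": "passed"},
    {"session": "master-pipeline-1", "test": "login_spec", "status": "failed"},
]

# Condense everything into one place, grouped by session, so the on-call
# cohort works from a single roll-up per session.
by_session = defaultdict(list)
for result in results:
    by_session[result["session"]].append((result["test"], result["status"]))

for session, tests in by_session.items():
    failed = sum(1 for _, status in tests if status == "failed")
    print(f"{session}: {len(tests)} tests, {failed} failed")
```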
B
Yeah, let's pause there and see if you think this is useful.
A
I see the usefulness. I've found, and this might be me not fulfilling my responsibilities completely, that I'm not able to resolve flaky failures on my own, and that's where those QA failure issues come in: things that aren't resolved by the DRI need a place to exist outside of a session. From my perspective, right.
B
Right. I think there's a case for that to be created manually, though, because the proposal that I see here is that we will automatically create an issue for all the failures, and my concern is that it's going to add even more to the noise.
B
If we do that, and a test is failing, say, five times in a day, are we creating five issues? The searching will be there; I mean, that solves the searching, maybe too well, but that's probably too many artifacts at the smallest resolution, when we want the artifacts to be at the broader scope.
A
Yeah, I agree there. My understanding is that we would create an issue for a spec and a failure reason, so think the stack trace, and then, as new occurrences happen, they would be noted within the existing issue. So we're not going to create 10 issues for the same failure, like the same failure in multiple runs or in multiple environments.
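As a rough sketch of that model, with hypothetical types and no real tracker API (the names here are illustrative, not GitLab's): one issue is keyed by spec plus failure reason, and repeat occurrences are appended to it rather than opening new issues.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FailureKey:
    """One issue per spec + failure reason (e.g. the stack trace)."""
    spec: str    # e.g. "browser_ui/login_spec.rb" (hypothetical path)
    reason: str  # normalized failure message / stack trace

@dataclass
class FailureIssue:
    key: FailureKey
    # Each occurrence is (pipeline, environment); the same failure seen in
    # multiple runs or environments lands here instead of in a new issue.
    occurrences: list[tuple[str, str]] = field(default_factory=list)

issues: dict[FailureKey, FailureIssue] = {}

def record_failure(spec: str, reason: str, pipeline: str, environment: str) -> FailureIssue:
    """Reuse the existing issue for this spec + reason, or open one if new."""
    key = FailureKey(spec, reason)
    issue = issues.setdefault(key, FailureIssue(key))
    issue.occurrences.append((pipeline, environment))
    return issue
```

The keying is the design choice: as long as two runs produce the same normalized reason, they collapse into one artifact.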
B
Okay, so you're saying two things. We're already doing it in the test case project, aren't we?
A
The run history, I'll say, of whether this test case passed or failed in this environment is captured, and the failures do grab the logs, based on my understanding. Yes, you've got the logs. I would say it serves as an extensive system of record that definitely needs some improvement.
A
Yes, I would agree with that, and that's where I see the test case session superseding it, because it could become the system of record of: at this point in time, these are the tests that were run, this was the state of those tests, and here are the links to the relevant information. But yeah, I still don't know how we would schedule and resolve failures that the on-call DRI is not resolving.
B
So how would automating that help? Sorry if I'm asking pedantic questions, but how would automatically creating these solve what you said? Because it sounds to me that if we create more automatically and people don't close them, they're not being scheduled. Doesn't that add to the problem, where we're just going to get more issues that the QEMs haven't scheduled to close?
A
To me, it cuts out steps three and four that I listed out. The "is this a new issue or is this a known issue?" question is taken care of for me, and surfacing the information of "this is the QA issue related to this test case failure" within either the job log or the test case issue is how I would do it.
A
That's where I would say that's an implementation detail that I would look to the team to figure out. The goal is not to create issues en masse; it's to identify, when there's a failure, is this a known failure or a new failure, and correlate that to an issue. That's the feedback I've heard from Albert, from Mark, and from Jensen; that's the painful piece of pipeline on-call, I guess.
A
The automation would be like a GitLab bot or an automated...
A
Mind you, again, that's where I would look towards Jin Chin or whoever is working on this to come up with a more clever solution than this, but I would at least start with extracting the failure reason for that spec and then, based on the structure of the issue, trying to find an existing issue that has the same failure reason. It's not going to be perfect, but I think it's an improvement over having that all be manual. And I do see the appeal of what you're saying too: maybe we don't need to create those issues anymore. Maybe we just need to re-look at the process and how we can remove that part of it as well.
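A minimal sketch of that starting point, under stated assumptions: the normalization rules and the issue fields ("spec", "signature") are hypothetical, not the team's actual implementation. It reduces a failure reason to a stable signature and then looks for an open issue that already carries it.

```python
import re
from typing import Optional

def failure_signature(stack_trace: str) -> str:
    """Reduce a raw failure reason to a stable signature, so reruns of the
    same failure match even when line numbers or timestamps drift."""
    top = stack_trace.strip().splitlines()[0] if stack_trace.strip() else ""
    top = re.sub(r"0x[0-9a-fA-F]+", "<addr>", top)        # hex addresses
    top = re.sub(r":\d+", ":<line>", top)                 # file line numbers
    top = re.sub(r"\d{4}-\d{2}-\d{2}\S*", "<time>", top)  # timestamps
    return top

def find_existing_issue(spec: str, stack_trace: str, open_issues: list) -> Optional[dict]:
    """Return an open issue for the same spec + failure signature, if any.
    `open_issues` is a hypothetical list of dicts, not a real tracker API."""
    signature = failure_signature(stack_trace)
    for issue in open_issues:
        if issue.get("spec") == spec and issue.get("signature") == signature:
            return issue
    return None  # no match: this is a new failure, open a single issue for it
```

Exact-match on a normalized first line is deliberately "not perfect", as A says; a cleverer solution could use fuzzy matching over the whole trace.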
B
If we improve the broader view of the test failure, it might help with detecting the duplicate issues, and here's where I'm coming from. Let me pause that for a second; I'm looking at the time, 15 minutes. So, ideally, how this works in the test case management system would be to look at the test case.
B
Oh my god, why is this here? Okay, it's a new test case. Okay, it says test case from... yeah. This is manual. This is manual, okay, good. This is what I expect a test case to be like, but this one is... I think we can reset this. Essentially, if you're looking at linking, for example, this should be the name of the spec, and then in here, can we link to this? And it doesn't have it.
A
Yes, Mark Lapierre created an issue for that. I will find it real quick.
B
What I'm looking for is a clear link between these, to deduplicate things. I think we're on the same page of not creating... where is it... not creating this. Sorry, not this one. This one.
A
I'd look to Jin Shin to see if we can get that clever; I don't see why we couldn't. The other thing that happens in the QA failure issues is we embed the images, because those are deleted after a week, I think, on ops. I don't know; they're deleted on different frequencies based on where the pipelines run, because they're stored as job artifacts. But I think we can.
A
We can figure out how to be more efficient, or I guess not duplicate the information in three different spots: in the test case issue, in the QA failure issue, and in the job log. Yeah, that's too tedious, so I'll note that. I guess, if I'm summarizing the goal of 586, it's: do not create a large amount of issues, and deduplication is probably the most important thing. Yes, let's get into the... let's rename this. Can we rename it?
A
I want to get some more opinions, because again, I don't think my experience on pipeline on-call is representative of everyone else's. It should be, but I'm really far removed from it.
B
I don't think so. I think you're in charge of the massive pipeline-broken failures, with all the great results, so I think we should be listening to you; you have the right use cases. What I'm trying to do here is just help with a top-down structure that's easy to digest, even for me and for the managers of other teams. Okay, I think this is great. I wouldn't say "automatically create and update QFLs" in this iteration; I would say "detect".
A
Or "detect known"... "known" is too vague, but yeah, "existing", something like that.
A
This one makes a lot of sense. Thanks for sitting with me and polishing it up. I know that Dan added an attribute, so if I may, if you look at the... is this recording going to be public? Yeah.
A
Okay, yeah, I'll put it on GitLab Unfiltered. I have three or four videos up there.
B
Okay, if you can... I'm going to send this in Slack, because I don't want to show my experience.
A
The other thing that I think would eventually be good with this is using that status issue. So, the thing that you were pointing to: opening an MR and saying this one is flaky, this spec is not working because of this issue.
B
Yeah, opening MRs automatically, I think that would be for the following iterations, after solving the duplication. And I would really... if we can do the test sessions, that unblocks other teams as well, because we can use that to showcase all the tests that failed and add more tests there. I would like to call attention to the Slack message. Yes, thank you. Yeah, and those are already linked, meaning it's in code, so the information should already be there in code.
A
So right now there are three failures happening pretty regularly in master and nowhere else, but they're not quarantined because they're only happening in master, so they would not show up in this report. They would be failures in the pipelines; they don't succeed on retry, but I haven't opened a quarantine because they're only failing in master. It's like Gitaly, Gitaly Cluster, like it's very high-performance.
B
Right, but if you look at the report... right, I see what you mean.
B
That's a boring solution to solving duplication, yeah. Okay, now I learned something new; I didn't realize that. Thank you for your patience with me. Likewise, I think changing the report to capture more than quarantined tests makes sense, because it seems like the hardship is the investigation, and it might be best to do that.
A
Yeah, and I think the failures that are hard to reproduce, like the ones I was just talking about in master, I can't reproduce locally.
B
What do you plan to do with 586? And are we okay... do we want to do that first before doing the session? I think the session would still be helpful, and here's where I think it will be... if I need my screen again. So if you have a session here, and you have that issue that we...
A
You don't need to; I believe the session is valuable. It just wasn't hitting the pain point that I felt the most. I agree the session pulls the information together and makes it more efficient to find all of the issues related to a test session; I see all of that. It just wasn't the area that I thought was the more important one to focus on.
B
Thank you. I understand; I'm just trying to make the case that, if you have... this is still going to be created manually, correct? We still need this. That's a... yeah, a report.
A
I think there'd be lots of sessions for that day. If we just take staging, there'd probably be one for every orchestrated staging deployment. Okay, and what's different about those tables? So I'm not trying to say that we can't do that; I just would need to think about the tables. They also list what environment, if you scroll over to the right a little bit.
A
So, does this happen in prod, staging, nightly, master? You could have the same failure across multiple environments in different pipelines, and that's where correlating the failures across test sessions would be helpful. I guess I was just thinking... I'm not there.
B
Yeah, that makes sense. It won't replace it entirely; there's still some high-level looking that we need to do here, like if, for that date, you have multiple, multiple, multiple... So what you're saying is that this is a roll-up of... actually, this is a pipeline, right? So this is, okay, one, two, three. Okay, so you have one of those already.
A
I think it will, but I want to get input from everyone else. I want to say: these are our objectives; do we think this issue is going to do it? We want to create fewer issues, make them easier to discover, and make it faster to identify known issues.
A
Is this the path forward to solve it? And if I get a no, then let's just do test case sessions first. That's the way I would look at it, because...
B
They both provide value. I'm more than happy to have the sessions be done later. I think what's important is, you know, you know this more than I do, and the team's feedback. So let's post the video, ask for input, and then they might have other ideas to improve 586.
A
Okay, I'll add a comment to the issue, tag you with the video, mentioning quality, and add it to the staff meeting tomorrow for discussion as well. We can hopefully get some good discussion over the next few days to make sure we have a clear path forward.
B
Sounds good. Yeah, sorry that this was not... No, no apologies needed; things are in draft, these are two-way-door decisions, and the collaboration here is highly appreciated. So thanks for leading here; I'm just providing input.