A
We hope this is a good state that allows us, allows the business to move forward with the storage and artifact limit enforcement, sorry, storage limit enforcement. Then we're going to pivot to testing, and we're actually proposing to rename Code Testing and Coverage to Test Intelligence.
A
So, Mac, that would include fail-fast testing as one of the things we'll focus on, looking at how we could optimize pipelines, potentially with machine learning or by giving users more options around that. Then longer term, on the testing side, how do we really become a player in the test management systems space? Currently we're not; when our SAs go and talk about what testing capabilities GitLab actually provides, there's really not much in that area.
A
So that's kind of the longer-term goal here. Yeah, I wanted to share that out as where we're headed, because, you know, Test Intelligence was a piece of that.
B
Yeah, I want to know what the lift needed from this team is. Is it going to be a drop-in replacement, or are there things that we still need to work with you to validate? Is it going to be something we can just use right away, or something else, when this is ready for us to use?
A
We want to have something completely ready for use. We're going to start with flaky tests, because that's something we've had a lot of interest in, and then we're looking at where we want to take fail-fast testing.
A
In terms of right now, we support Ruby, and we're looking at what JavaScript support would look like, so we need to figure out, beyond Ruby, what we should be supporting next. Because if we look at the data breakout, we don't have a lot of projects with Ruby; it's actually a really small percentage, and so we need to focus on the larger pieces. I'm also looking at potential integration opportunities as well.
B
Okay, do you know when this is going to land? Is there an MR to the categories page, or an ETA?
A
B
Okay, yeah, that's good. Development, I know, sits very high on that term, test intelligence; we use it in a lot of the industry blogs. The thing is, I think we have test intelligence of a sort already, but it's just a customized version of fail-fast testing and a lot of test mapping that we maintain heavily on our end. I'm glad that we're finally putting this in the product category on the roadmap, yeah.
A
So, super exciting news. I thought you guys would like to hear that, yeah.
A
So that's all, that's all I had.
C
B
Sure, that'll be awesome. We have a Sisense dashboard for flaky tests. If there are things in the product that are ready to use, then by all means, we'll happily adopt them.
C
Okay, it's not developed yet, but I'll share my screen and just show you what it's going to be. Okay, so we did validate this with users already, so this is the final design, and any feedback is welcome as well. The first thing that we're going to do here is on the job details page: today we don't have a link indicating whether this job is a test or not, and we have this wonderful test summary that we include for tests, so we're going to link to that now. We're also going to tell you right there and then if there is a flaky test included, so that you have a little bit more of an indication of why that failure happened, for example. That link would take you to the test summary, which is, I believe, this next design... nope, didn't time that up.
C
Okay, there we go. So this is where it would take you: the existing test summary that we have today, but then we would also add a flaky badge here if that test is flaky. What we're considering flaky is actually how we do it internally at GitLab, and Albert, the engineer who's on our team, helped us out with that as well.
C
So today we're providing, I think, a 14-day history, but we're going to do seven days instead and give you the failure rate percentage of that test on the main branch. Right now, unfortunately, we can just say you can rerun the test to try to make it pass this time; in the future we'd like to make this more automated, but this is just the MVC.
C
So that's what we're doing for now. Then the last thing here: this also comes up in this modal, which connects back to when you're pushing a change in an MR and you have the test summary widget enabled; this modal would also come up from the widget, but we also want to include that this test is flaky in that details modal, so you get that system output. It's like a summary of the job details.
C
Almost done. This is the test summary widget, so we're going to include text about however many failed tests might be flaky, then include a new section within the widget listing the flaky tests, and we'll also include that same failure rate percentage I was showing before.
C
So that's the plan right now. We're still working on documentation, and I think this is added for an upcoming milestone, but maybe Jocelyn can speak to that, yeah.
A
Yeah, so for 15.10 we'll get started on this. I don't know, we haven't really scoped out the full effort, so, you know, hopefully, let's see: if we do the back end in 15.10 and then the front end in 15.11, hopefully around 16.0 we'll be able to release this, or shortly following 16.0. I don't know, but this is the next thing that we're focused on. And just for a little bit of history:
A
We kind of landed on seven days. We thought about the 14 days, but what we heard from a lot of the customers that we spoke to is that they're really focused on recency, and so we are going with the seven days.
D
Yes, this is really cool. I would be really interested in helping you in any way, if you need to, for example, validate the results of the flakiness prediction. Which, actually, I'm really curious about: do you know what kind of method you're using to predict that a test could be flaky?
C
What we've talked about thus far, and I'm not sure if it's going to change once they start the back-end work, is: if that test over the past seven days does not have a 100% success rate or a 100% failure rate, then we're going to mark it as flaky. We're still saying this "may be" flaky, and we'll link out to the fact that that's how we're doing it. That's the plan right now, but it might change; if you have any feedback on that, I can add it as well.
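For readers skimming the transcript, a minimal sketch of the heuristic C describes, assuming a hypothetical list of recorded pass/fail results per test case on the default branch; the function names and data shapes are illustrative, not GitLab's actual implementation:

```python
from datetime import datetime, timedelta

def is_flaky(results, now=None, window_days=7):
    """Flag a test case as possibly flaky when its default-branch runs in the
    trailing window are neither all passes nor all failures.

    `results` is a hypothetical list of (timestamp, passed) tuples for one
    test case, e.g. [(datetime(2023, 3, 1), True), ...].
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    recent = [passed for ts, passed in results if ts >= cutoff]
    if not recent:
        return False  # no runs in the window, nothing to judge
    failures = sum(1 for passed in recent if not passed)
    # 0% or 100% failures means the test behaves consistently, so no flag
    return 0 < failures < len(recent)

def failure_rate(results, now=None, window_days=7):
    """Failure-rate percentage shown next to the flaky badge in the designs."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    recent = [passed for ts, passed in results if ts >= cutoff]
    return 0.0 if not recent else 100.0 * sum(not p for p in recent) / len(recent)
```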
B
C
B
E
B
Okay, okay, so we will still need some sort of aggregation in Sisense to look at how this test performs over six months or a year. Yes.
E
Exactly, yeah. We decided not to store that, because that would be a ton of data that we would have to keep for, you know, a longer period of time, so yeah, seven days is what we decided on.
B
Okay, and is that data stored within the job itself? Because the context of "flaky" will change, right, depending on which MR you're looking at. If you're looking at something that's seven days old, is it looking at seven days ago plus another seven further into the history? Is that how it's being implemented?
E
I don't think so. I think it's the test case itself, not related to the job, that would be within seven days, and we would have it related to master, or whatever the default branch is. So if it's failed so many times on the default branch, then it's considered flaky, okay.
B
Okay, is there anything we could dogfood, say via an API endpoint? If we want to write a report and we don't want to click through all these master flaky lists, is there something we could pull to build a report for us in Sisense?
E
We have a version of this already, where it says, what does it say, "recent failures" or something, in master, and that's kind of what we're going to be basing this feature on and adjusting slightly. So currently it'll say that it's failed in master so many times in the last 14 days, so I think there should be a way to get that information.
E
It might not be super easy to navigate, because it's very much set up for this page instead of strictly for API usage, but we could probably find it; if it exists, it's likely in GraphQL.
B
Basically, what I'm trying to do is tie these back to what Jennifer and her team are doing. If some of these can alleviate some of the work and make the team, and overall GitLab engineering, more productive with data, that'll be of use. But right now it seems like we're starting to track on a per-branch, per-test-run basis and marking tests as flaky, and that is the first iteration, or the next thing we'll see.
A
It's just so much data to collect, and also, some of the feedback that we got is that people don't care if a test failed, whatever, five months ago, right? That one instance they don't care about, especially if it's passed recently. So we were kind of playing around, thinking through the timeline, and all the feedback that we heard is people saying they want to know what's recent, and so we were like, okay, seven days.
A
Seven days is recent, obviously subject to change based on feedback, yeah.
B
So the feedback that I would give is that we quarantine the tests actively. When a test is flaky, we quarantine it right away, and depending on the state of the change, whether it's quarantined or not, that could impact how you track flaky tests. Okay, yeah, Jeff, I'm not sure if you have anything else to add, and whether this is going to be useful for us in the short term or long term.
D
I think having this feature in the product could definitely help us. I'll be very interested to see how the result will differ from, I guess, the Rémy script, which is producing the latest flaky test report. If the results are equivalent, I feel that's already a validation of the flaky test prediction. I would love to start using it right away if we can confirm, you know, the result is basically the same as how we are scripting it right now. Yep.
D
So I don't know exactly how many of the flaky tests will be not quarantined but predicted by this feature. We'll have to just let it run and then see how many we would actually miss, because sometimes the flaky tests don't really cause a lot of problems and they never get flagged. It'll be interesting to see how this feature will be able to pick those up.
D
So, we have a script in the master pipeline currently that runs on a schedule; it runs all suites, and if any of the tests fails but a retry makes it pass, it uses that as an indication that the test is flaky, and it will automatically skip that test by adding it to the JSON that I linked. In the future, when the pipeline sees this JSON, it will realize this test is actually flaky, so in future runs it's going to skip it. So this is an intelligent way of running tests by excluding the flaky tests, but that's going to cause a problem for your feature.
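As a rough sketch of the retry-based detection and JSON skip list D describes here, assuming hypothetical file paths, report shape, and helper names (the real scheduled-pipeline script is internal to GitLab and may differ):

```python
import json
from pathlib import Path

# Hypothetical location of the skip list D mentions; future runs read it to
# decide which known-flaky tests to skip.
SKIP_LIST = Path("known_flaky_tests.json")

def load_skip_list():
    return set(json.loads(SKIP_LIST.read_text())) if SKIP_LIST.exists() else set()

def record_flaky(test_id):
    # A test that failed but passed on retry gets appended to the skip list.
    flaky = load_skip_list()
    flaky.add(test_id)
    SKIP_LIST.write_text(json.dumps(sorted(flaky), indent=2))

def run_suite(test_ids, run_test):
    """Run every test; a fail-then-pass-on-retry result marks it flaky.

    `run_test` is a caller-supplied callable returning True on pass.
    """
    skip = load_skip_list()
    for test_id in test_ids:
        if test_id in skip:
            continue  # this auto-skip is what hides flaky tests from later runs
        if run_test(test_id):
            continue  # passed first time
        if run_test(test_id):
            record_flaky(test_id)  # retry passed, so treat the test as flaky
        else:
            print(f"{test_id} failed consistently")  # a real failure, not flaky
```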
B
Going
to
be
included
so
essentially
you
we.
If
we
were
to
use
this,
it
would
be
Mark
as
flaky,
maybe
a
few
times,
and
now
script
goes
and
quarantine
it
and
it's
going
to
be
gone
and
people
think
it's
gonna.
People
might
think
it's
gonna,
be
it's
not
flaky
anymore.
It
is
still
it
can
be
still,
but
we
don't
run
it
because
it
pollutes
the
master
pipeline
stability
and
for
context.
D
This
makes
me
think
if
we
were
to
dog
food
it
we
should
create
another
pipeline
that
does
not
involve
any
auto
skipping
or
quarantine.
Just
let
all
the
tests
to
run
in
that
branch
and
that
in
that
way,
every
test
book
gets
run
and
then
it
will
be
able
to
properly
test
out
this
feature.
So
Master
pipeline
might
not
be
a
good
candidate
to
dog
food.
Actually.
D
We're
both
quarantined
them
manually
when
you
know
see
masterbroken
and
my
team
actually
triage
those
incidents
by
pointing
the
test.
That's
one
way
we're
quarantine
the
test
and
then
is
also
that
automatic
quarantine
mechanism
that
I
talked
about
where,
like
the
schedule
pipeline,
will
flag
the
flinky
test
and
then
put
that
into
the
Json
report.
So
the
master
pipeline
will
know
these
tests
are
flaky.
It
will
just
automatically
skipping
a
bunch
of
tests
in
the
future
runs.
D
D
E
That makes sense, that's very helpful feedback. And I know Justin and I talked about potentially implementing quarantining in future iterations as well.
E
Perhaps
in
one
of
our
like
beginning
steps,
we
could
add
some
configuration
of
what
Ranch
we
want
to
base
the
failure
history
on,
because,
by
default
it's
going
to
be
the
default
branch
which
would
be
master.
But
if
we
do
want
to
have
the
user
able
to
like
choose
which
branch
they
want
to
base
the
failure.
History
on
that
could
be
helpful
and
I
can
I
know.
This
may
be
going
back
to
an
earlier
question.
Mega
you
asked
about
the
API
I
can
share
my
screen
real,
quick.
E
Here's what we have right now; you can see if it would be useful at all.
B
E
Right, so this is only the... one moment, real quick.
E
You can see that the API is just project pipelines test report here, so a full test report is what you're going to get, and you can see you can navigate to the test suites.
E
So this is very tied into how our UI behaves, and then when you click into a suite you can see the specific test cases in there, and this "recent failures" is what we're going to use for that. It's basically just a number of how many times it's failed on the default branch.
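For anyone who wants to pull this the way B suggested, a hedged example of hitting the pipeline test report REST endpoint E is showing; the instance URL, project ID, pipeline ID, and token below are placeholders, and the exact shape of the `recent_failures` field should be verified against the GitLab API docs:

```python
import requests

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder instance URL
PROJECT_ID = 12345                            # placeholder project ID
PIPELINE_ID = 67890                           # placeholder pipeline ID
TOKEN = "<private-token>"                     # placeholder access token

resp = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/test_report",
    headers={"PRIVATE-TOKEN": TOKEN},
    timeout=30,
)
resp.raise_for_status()
report = resp.json()

# Walk the suites and print test cases that report recent failures on the
# default branch (field name assumed from the discussion; verify in the docs).
for suite in report.get("test_suites", []):
    for case in suite.get("test_cases", []):
        recent = case.get("recent_failures")
        if recent:
            print(suite.get("name"), case.get("name"), recent)
```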
E
D
E
Right, so that one has failed, but it doesn't look like it has any recent failures. Let's see.
C
E
So maybe none of these tests are actually failing on master right now, or they're legitimate failures, but yeah, that would be the number that you would be basing this off of. You can look and try to find one whose recent failures are not null, but yeah. Does that help answer your question?
D
E
This is available right now; it's based on the past 14 days currently, and yeah, we have some extra things that we need to change for the MVC that Tina showed to make that work, but yeah, this is currently working and is useful.
E
B
It might be too small, though; is that more from our users?
E
We're shrinking it down to the past seven days instead of 14 days, so it is a small time window, for sure.
B
I
because
I'm
thinking
in
terms
of
like
a
release,
a
lot
of
things
can
change
within
the
release
like
I'm.
If
it's
configurable,
maybe
a
month,
got
it,
but
let's,
let's
I
mean
there's
nothing
there
right
now.
Seven
days
is
better
than
nothing
so
I
welcome,
iteration
and
MVC.
So
right,
I
would
sell
in
person
and
tweak
it
from
there,
but
by
all
means
I
understand.
B
Agreed
and
then
I
can
ask
my
team
to
do
something
else,
not
quarantine
tests
every
every
week
yeah.
Maybe
it's
really.
A
Great, yeah, we do, and it's just by title for now, but one of the things that we want, ideally, in terms of making our pipelines go faster, is this ability to say, hey, flaky tests, we're going to skip anything that we've identified as flaky. So right now there's the allow-failure option that can be set, but that applies to any test failure, and we want to make sure, I think, that we're targeting only the flaky tests.
A
All right, so I think that's all we had.
B
Yes, one piece of feedback I want to give you, because of our triage process: the value will be cross-branch. For example, someone pushing tests in their own feature branch will want to know if the test is flaky in master or not, so it can be excluded.
E
Okay, so yeah, even though the number is counted on the master branch, the flaky indicator will still show up in whatever branch you're currently on. So if you're viewing your MR and you see a test is failing, that's one easy way to see, oh, this isn't my fault, this is failing in master, because it will show up and say this has failed so many times on master, even though your test didn't run on master. That makes sense.
B
Okay, cool, yeah, looking forward to seeing how this goes. I'm glad we're having more investment in this area, yeah.
A
We really want to work in that direction and see where we can really add that value and grow some of those users.
A
Right, does anyone have anything else before we hop off?
B
No, oh, this is good. When is the next catch-up?