A: For other people: we're talking about this comment here, where there's a desire to have a kind of generic work item status widget that can be used by different work item types. We recently renamed verification status to just "status," but that alone isn't going to do it, and I totally agree with Melissa's desire to build something that's more broadly useful for work items.
A: Think about the goal of having these different state buckets to denote whether something is actually done or not, and whether it's active or inactive. As we've worked through this, we've also heard from Matthew and Dan that requirements can have different workflow statuses describing their progress, so we end up with something interesting if we look at how it behaves today.
A: Today there's a unit test that contains some sort of link to the requirement, maybe in the title or wherever. The test gets run in a job, which gets run as part of a pipeline, which then generates the test report. The test report gets uploaded as a JSON artifact, a requirements.json file, which then surfaces the current status, satisfied, failed, or unverified, on the requirement.
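That chain (test report artifact in, per-requirement status out) can be sketched in a few lines. This is a minimal illustration, not the real artifact schema: the requirement IDs, the outcome strings, and the three surfaced statuses are assumptions for the example.

```python
import json

def surface_statuses(artifact_json, requirement_ids):
    """Derive the status shown on each requirement from an uploaded report.

    Requirements the report never mentions stay "unverified"; mentioned
    ones become "satisfied" or "failed" based on the recorded outcome.
    """
    outcomes = json.loads(artifact_json)
    statuses = {}
    for req_id in requirement_ids:
        outcome = outcomes.get(req_id)
        if outcome is None:
            statuses[req_id] = "unverified"
        elif outcome == "passed":
            statuses[req_id] = "satisfied"
        else:
            statuses[req_id] = "failed"
    return statuses

# Hypothetical artifact contents: requirement ID -> test outcome.
report = '{"REQ-1": "passed", "REQ-2": "failed"}'
print(surface_statuses(report, ["REQ-1", "REQ-2", "REQ-3"]))
# -> {'REQ-1': 'satisfied', 'REQ-2': 'failed', 'REQ-3': 'unverified'}
```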
A: So it's hard to tell which buckets these fall into, right? I think "in review" would definitely be active.
A: Drafts would maybe be in the active bucket too, if they're actively being drafted. But these different statuses hold while the requirement moves through its whole lifecycle.
A: At the same time, it would be unverified until the pipelines run and the test report generates something. Then, if it does get generated, you could say that satisfied and passed-with-exception map to completed. But what happens if it then fails at some point in a later pipeline? Which bucket do you put "failed" in? Is it inactive? Well, it could be failed and not being worked on, or it could be failed and actively being worked on to fix it, right?
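The ambiguity can be made concrete with a toy mapping. The bucket names (active, inactive, completed) come from the discussion above; everything else is invented for illustration. The point is that "failed" cannot be bucketed without a second signal about whether anyone is working on it.

```python
# Toy mapping from verification states to candidate status buckets.
# "failed" is ambiguous on its own: it maps to two buckets.
VERIFICATION_TO_BUCKETS = {
    "unverified": {"active"},
    "satisfied": {"completed"},
    "passed_with_exception": {"completed"},
    "failed": {"active", "inactive"},
}

def bucket_for(verification, being_worked_on):
    """Resolve a verification state to a bucket, consulting a second
    signal (is anyone working on it?) when the mapping is ambiguous."""
    candidates = VERIFICATION_TO_BUCKETS[verification]
    if len(candidates) == 1:
        return next(iter(candidates))
    return "active" if being_worked_on else "inactive"

print(bucket_for("failed", being_worked_on=True))   # -> active
print(bucket_for("failed", being_worked_on=False))  # -> inactive
```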
A: So it's not clear how to cleanly bucket the output from the test report into the buckets for the different statuses, and then reconcile those with the workflow statuses on a requirement. The other thing to think about is that, in this flow, everything to the left of the requirement is really the test report.
A: I would just call it a test report. And if we think about how this is going to work, and we want a generic status that all work items can work with, you almost need it to be a separate widget from the test report, because otherwise it's impossible to know what the actual status of the requirement itself is: is somebody actually working on it or not?
A: And this would let you have your own... I wouldn't even call what the test report gives you a status; it's almost just a state, because those values don't really change and you can't change them yourself. Either the code passes, or the pipeline is never run against a requirement, in which case it's unverified.
A: It could still be being worked on, but the merge request hasn't been put up yet, so it's going to be unverified. Then at some point, maybe when it's in review, the pipelines start running, and it could even be put into a satisfied state before it gets accepted by a client. I don't know if a client would accept it until the tests are saying it's satisfied. So that's sort of where I was thinking; I don't know.
A: This is all the stuff that ultimately ties a work item to a test: basically a pipeline, a job, and a test report. Separating that out would make it clean to package up, and then the status widget is a separate thing. But if this is available, the Certify team can figure out how they want to map any automations that happen between the test report widget and how it transitions the status widget. I'm also thinking about the future. Nick and I were talking about this: right now I think this is one-to-one behavior, but you might have lots of different unit tests that all need to pass.
A: And then you might have unit test three, which gets mapped to a job that gets executed, and maybe all of these are linked to the same requirement, and for that requirement to be satisfied, all three of them need to pass. I imagine at some point a customer is going to ask: can I see a test report breakdown, or statuses, or test runs, in a little widget on the requirement itself?
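The one-to-many case described here, several unit tests all linked to one requirement, reduces to a small aggregation rule. A sketch, with the outcome strings assumed for the example:

```python
def requirement_state(test_outcomes):
    """Aggregate the outcomes of every linked test into one state:
    no results yet means unverified, any failure fails the requirement,
    and it is satisfied only when all linked tests pass."""
    if not test_outcomes:
        return "unverified"
    if all(outcome == "passed" for outcome in test_outcomes):
        return "satisfied"
    return "failed"

# Three unit tests linked to the same requirement: all must pass.
print(requirement_state(["passed", "passed", "passed"]))  # -> satisfied
print(requirement_state(["passed", "failed", "passed"]))  # -> failed
print(requirement_state([]))                              # -> unverified
```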
A: We might want a test report widget that has these states, unverified, satisfied, passed, failed, and decouple that from the lifecycle the requirement might go through. Because if it's failed, we might trigger it to transition into some other status that would then do something. It doesn't seem like it lines up well to smash these two things together into one thing, at least from a generalization standpoint. Do you have any thoughts on that, Nick?
B: Yeah, I think I was initially thinking of this as a more linear flow. But if it's true that something could be satisfied while it's in review, then that automatically makes it hard to do, because you'd have two simultaneous statuses and one would have to win, and then you don't have a lot of clarity.
B: I don't really want to end up in a situation where we're doing sub-steps or something, like "in review (satisfied)" as a parenthetical. I would probably rather keep these separate. If that's off the table, then what you'd end up having to do, I think, would be to use automation to make it work in a generalizable way, because other work items aren't going to have test reports.
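One way such a generic automation capability could be shaped, sketched here with entirely invented names, is a table of rules keyed by work item type and widget: requirements register a rule tied to the test report, and work item types without test reports simply register nothing.

```python
# Hypothetical rule table: when the named widget on the named work item
# type takes the given value, transition the item's workflow status.
RULES = [
    {"work_item_type": "requirement", "widget": "test_report",
     "on_value": "failed", "set_status": "triage"},
]

def apply_rules(work_item_type, widget, value, current_status):
    """Return the workflow status after any matching rule fires."""
    for rule in RULES:
        if (rule["work_item_type"] == work_item_type
                and rule["widget"] == widget
                and rule["on_value"] == value):
            return rule["set_status"]
    return current_status  # no rule matched; status unchanged

# A requirement's test report fails: kicked back to triage.
print(apply_rules("requirement", "test_report", "failed", "accepted"))  # -> triage
# An issue has no test-report rule: nothing happens.
print(apply_rules("issue", "test_report", "failed", "done"))  # -> done
```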
B: So you'd basically have to create a generic automation capability and then create a rule, just for requirements, that's tied to the test report. We want to do stuff like that eventually, but I don't know if that's more complicated than this route, where you can just use this on its own. And we do know the use case: we talked just this morning with a customer that basically walked through it.
B: They want to see everything from the story, down to the requirements, down to the individual tests that were run against those requirements, and even down to the environment level for those tests. So that request is not just forthcoming; it's already something we're hearing.
A: The other interesting challenge, speaking to that, is that if you think about how pipelines and all that actually work, it's not really clean. I mean, how they work is clean, but in this scenario, which is typical GitLab today, you might have all of this happening simultaneously: three feature branches, or two feature branches and a main branch.
A: When we run a pipeline, we usually run it against a branch, so you might link to a requirement in your feature branch and it passes there, but then when you merge the feature branch into main, it fails. A requirement could also fail in one environment, say environment A in staging, and pass in environment B, like during deploys, because pipelines also run during deploys, and I think those generally run all the tests and test suites as well.
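The branch-and-environment combinatorics described above can be captured by keying results on (branch, environment) pairs rather than keeping a single flat status. A sketch with made-up data:

```python
# The same requirement, observed in several contexts at once: it can pass
# on a feature branch yet fail on main, or differ between environments.
results = {
    ("feature-branch", "ci"): "passed",
    ("main", "ci"): "failed",
    ("main", "staging"): "failed",
    ("main", "production"): "passed",
}

def failing_contexts(results):
    """List the (branch, environment) pairs where the requirement fails,
    answering "if it does fail, in which environment does it fail?"."""
    return sorted(ctx for ctx, outcome in results.items() if outcome == "failed")

print(failing_contexts(results))  # -> [('main', 'ci'), ('main', 'staging')]
```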
A: So there are lots of places where it could pass or fail, depending on which environment it's on. Right now, I think the workaround is that you only set your requirements verification to happen when you merge a feature branch into main. That's something you could do, but it also gets a little complicated even then. In GitLab, for example, our deploy pipeline pushes things to staging, and I think it pushes things to canary and then to production, and you almost need to know: if it does fail, in which environment does it fail?
A: Or is it currently failing? There are also use cases today for passed-with-exception, where engineers will quarantine flaky tests because something's wrong with the test, not necessarily with the code. They're basically saying: it's fine if we don't run this test right now, we're willing to take the risk, so we're just not going to run it. That would also have implications for things like this. I think it's all stuff for the Certify team to figure out, but it does feel like...
B: So in that environment, you go through your workflow, and the workflow basically ends at accepted, meaning the requirement is now in a usable state. There'd never really be a completed status, right? Or would we somehow mirror the verification status in the workflow status? That's where I could see it getting confusing.
A: I think accepted, to your comment, is what I'd say would be completed. Your requirement is accepted by the client, which means it's completed and done; they've accepted it. But even after it's been accepted by the client, it could fail at a future point in time if the code changes. At which point, would you automatically move it?
A: Would it no longer be accepted? You'd have to kick it back into a new workflow, whatever the intake is for this, you know, triage. It would go back into an inactive state, and then an engineer would pick it up, triage it, fix it, and it would go through again. It wouldn't have to get accepted again, because it's passing, you know.
B: Yeah, that's mostly a question of how you find things. You're looking at lists, you're looking at boards, you're looking at stuff to do, and most likely you're filtering out some of your requirements and things, unless that's what you're looking for.
B: How do you find the things that need attention? If you're filtering for things in an active state, things to work on, and something is completed in that sense but has failed, you wouldn't necessarily see it.
B: There are obviously ways around that; you'd create different views and things like that to get to requirements that are failing based on that status. But that's just a nuance of the different items potentially getting mixed together.
B: This kind of makes sense, I think, in this model. If you think about it from a widget perspective, this would probably be, at least as I'm thinking about it, and again something for the Certify team to figure out, a verification or test report "app." App in the sense that we have the attribute widgets, which are generally things like your iterations.
B
It's
like
a
single
thing
and
if
we
only
cared
about
you
know,
verified
unverified,
it
could
be
an
attribute,
but
if
we,
if
we
expect
to
want
to
be
able
to
list
out
like
what
environments
and
where
and
what
test
cases
and
things
like
that
in
the
future,
you
could
build
that
like
a
little
bit
more
robustly,
probably
as
something
like
an
app
another
example
of
an
app
would
be
like
related
links
or
something
like
that
or
tasks
even
on
on,
like
an
issue.
B
So
you
can
use
something
like
that
and
kind
of
bubble
that
up
and
then
still
bubble
that
up
to
the
you
know,
metadata
that's
used
in
other
places
because
I
assume,
if
you
you
know,
wherever
looking
at
a
requirement
in
a
list
view
or
something
you'd
still
want
to
see.
You
know
what
the
main
stat
like
verification
status
of
that
would
be.
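The attribute-versus-app distinction could look something like this. A purely illustrative sketch, not real widget code: the app holds structured rows (per-environment runs), while summary() is the single value a list view would bubble up as metadata.

```python
from dataclasses import dataclass, field

@dataclass
class TestReportApp:
    """Hypothetical "app" widget: structured rows plus one summary value."""
    runs: list = field(default_factory=list)  # rows like {"env": ..., "outcome": ...}

    def summary(self):
        """The single status a list view would surface for the work item."""
        if not self.runs:
            return "unverified"
        if all(run["outcome"] == "passed" for run in self.runs):
            return "satisfied"
        return "failed"

widget = TestReportApp(runs=[
    {"env": "staging", "outcome": "passed"},
    {"env": "production", "outcome": "failed"},
])
print(widget.summary())  # -> failed
```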
B
It's
not
really
a
use
case
that
we
have
I
mean
this
will
be
the
first
time.
I
think
we're
doing
that
on
the
work
item,
so
some
stuff
to
be
figured
out
there,
but
that's
kind
of
how
I
envisioned
some
of
that
stuff
working.
A: Whether it's ready, and more or less some of this extra metadata about it. I'm wondering if there's a way to make this even a little more generic: instead of just a test report app, something that gives you the current status of where your issue is in terms of environments, pipeline statuses, and that sort of thing. Because there was also an ask, I can't remember it exactly and don't have it open right now, to be able to associate a pipeline directly to a work item of type test case or test session.
A
I
think
is
what
it
was
so
that
when
your
tests
are
running
you
see
which
pipeline
they're
running
in
and
then
you
can.
Click
into
that
pipeline
today
and
go
look
at
like
the
test
reports.
I
guess
is
the
use
case
there,
but
it's
sort
of
this
is
really
just
tying
in
all
this
information
into
a
specific
work
item,
where
the
relationship
is
driven
by
the
work
item
id
being
associated
to
I.
A
Guess
one
of
the
the
unit
tests
I
definitely
think
something
that
for
certified
think
about,
but
it
also
opens
up
the
door
if
you
do
think
about
a
little
bit
more
generically
for
a
lot
of
cool
integration
of
like
Downstream
devops
data
into
work
items.
B: No, I don't think so. This is all pretty new functionality to me, so it's enlightening to learn about it, and also to map it back to work items and how we can make that appropriately reusable, while staying more specific where it needs to be. In some cases, yeah.