From YouTube: Progressive Delivery UX/PM meeting 2020-05-07
Description
Meeting to improve alignment between Product Management and UX for Progressive Delivery
A
Right at that point, Mike highlighted that it's important for us to align on what exactly we want to take away or learn from the user validation: which questions do we want to answer? So yeah, to your point, to the questions that you raised earlier, it's very important for us to align on those and be clearly on the same page about why we are doing the research and what exactly we want to learn from it.
B
No problem, thanks. Let's see, I have two other research efforts currently in progress. There's the user research for post-deployment monitoring; as far as I can see that is well on the way, so is any further help needed there? Just checking in here, mostly looking at you.
C
I'm super excited about this user research. First of all, some of the questions have come up in other research, and we can add a bullet here for Spinnaker, which I'll talk about in a minute. It's super cool that engineering finished their research, and it looks like we can work with what we already have in place in monitoring, build on top of that, and trigger things from the pipeline.
C
So we're really good to go and can just make this really awesome. Once we figure that out, my initial research is going to be around which important metrics we want to collect and act on, and then whether users are interested in just stopping the rollout and manually rolling back, or whether they want automatic rollback. Those are the areas I'm looking at in my research. But I think there's going to be a step two here, which is going to be solution validation, because just today I already opened five issues from the engineering research outcome, and there's a ton of things we want to add to the pipeline, to the jobs page, and to the deployment boards. There's so much work here that I think we need to do a solution validation: come up with several options and check that users understand what's going on once we get to the solution phase.
B
All right, sounds good. We'll get that recruitment figured out first, and then we can for sure look at that. And then there's the last one, which I was wondering about: why are people not using Review Apps? We came across this one.
C
Right, we're not going to do anything active for Review Apps at the moment. The team is totally focused on CD right now. Even though we did a lot of work on the discussion guide, I don't think we have time to do research on this at the moment, nor do we have time to implement it, so I just don't think we need to touch it. But since we're already on the topic of user research, let's talk about Spinnaker, our favorite subject.
C
So maybe we'll get some more candidates. I'm still leaving this issue open, because it's just really important for me to understand why we're losing business to Spinnaker, and I got really, really good feedback from one of the customers that I interviewed. You can see the video, but I also posted a summary of the insights as a separate issue, like Lori does. I hope I did you justice; it's really interesting.
C
It looks like we're missing some functionality. The top thing that this specific customer said (and again, one customer is not an indication of anything) was that Spinnaker's pipelines are super flexible: they don't poll sources all the time, they just wait until they get triggers. So there's definitely a lot we can learn from there to improve our product, and I think it's worthwhile to look into.
B
Thanks. Has there been a proper competitive analysis there?
C
I did a competitive analysis last quarter; there's an issue linked to it called "What can we learn from Spinnaker?". There's a bunch of screenshots that I added there, including implementation issues that I opened. The content I learned has also been added to the competitive page, and Mark has added his own insights there. To make a long story short: there is not such a big feature gap between us and Spinnaker, but they have a different concept and approach.
C
It's really interesting in terms of UX. The way they approach it is different: they treat the CD part as a first-class citizen, while for us it's kind of the end of the pipeline, a continuation of CI. So a lot of things are hidden; you really need to know what you're doing in order to get something deployed correctly, and they approach that very differently. I don't know, it's a huge topic to tackle, an interesting one, but a huge one. That was also added to the comparison page, and it's listed in the direction page for CD.
C
The only thing that I haven't properly done is this: I had plans to record a competitive-analysis walkthrough of Spinnaker, and I had such a hard time getting it operational that I just gave up. Which is also a good sign for GitLab, because GitLab is easy to set up, but I spent weeks getting Spinnaker up and running in AWS, only to get an error I couldn't figure out how to solve whenever I pressed any button. So I couldn't properly do any workflow in Spinnaker.
B
Gotcha, thanks. It's interesting to see that they treat CD as a first-class citizen. I was talking today with Ryana, and I need to react to an issue from Security, who are thinking about making our security pipeline a separate pipeline. I know there were some things that UX was figuring out there.
C
Yeah, I actually mentioned on last week's call that I was thinking of going nuts and separating a GitLab CD YAML from the CI YAML, but that's still just floating around in my mind. I don't want to do anything actionable yet; we have so much on our plate right now. But I do think it's a really good idea.
C
CI is enormous, and I think having a separate YAML file is now possible because we support parent-child pipelines, so we can probably utilize that mechanism to have the CD part be just the next step in the YAML. It also gives you the flexibility to have several different deployment targets that you can call, and then you can decide whether you want to do multiple deployments to different clouds or just one. There's a lot of flexibility to it.
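A minimal sketch of what that split could look like using parent-child pipelines. The file names, jobs, and scripts here are illustrative assumptions, not an agreed design; only the `trigger: include:` keyword (the parent-child mechanism mentioned above) is taken from GitLab's existing syntax:

```yaml
# .gitlab-ci.yml -- the CI part stays here
stages: [build, test, deploy]

build:
  stage: build
  script: make build

test:
  stage: test
  script: make test

# the CD part moves to its own file and runs as a child pipeline
deploy:
  stage: deploy
  trigger:
    include: .gitlab-cd.yml
```

```yaml
# .gitlab-cd.yml -- a hypothetical separate "CD YAML";
# several deployment targets can sit side by side
deploy_aws:
  script: ./deploy.sh aws

deploy_gcp:
  script: ./deploy.sh gcp
```

This keeps CI and CD concerns in separate files while the parent pipeline decides when the child runs.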
B
Okay, I'll say standby on that, since this is underway. Then there are no immediate next actions needed on it, or are there?
C
No, not at the moment. I think at some point we would want to explore it, because the YAML doesn't allow nesting, and this might give us that option. But think of this as an iteration: in the first iteration we have everything in one YAML file, all the stages (build, test, deploy), and then another iteration would be taking the deployment out into a separate YAML. So I'm really fine with the way it works now; I think it's something to explore in the future.
B
Thanks. Then I want to quickly give an overview of what I currently have in progress. Let me share my screen.
B
Let's see, where is this... okay. Wait, before we get to the list, let's make this a little bit clearer. In progress I've got A/B testing, of course, and "Allow deploy keys to push to protected branches"; those are already in there.
Then I have added "Rollout of new feature flags". Let me see... this board also has a standby issue where I need to react; that's another issue. Basically those three: "No longer possible to edit deploy keys", "Deploy tokens without enabling pipelines", and "Show error rate threshold exceeded on deploy board", plus "Roll out new feature flags", which I just added, by the way, to have more of a view on it.
I wanted to align and see how you feel about me working on those issues currently, or whether there's a big issue missing, because I've still got these issues in Up Next. Maybe I need to swap out some of the ones I'm currently working on.
C
Next: the deploy keys one is super critical, because it's a regression, a bug that we introduced.
C
Yeah, A/B testing, we talked about it. We really need to get this done first, because I want to start working on it for Q3. This is the first step, which is understanding; then you're going to have quite a bit of UX work to figure out. I don't know if we might revisit Mike's suggestion based on the research outcome. Next, we have "Show error rate on deploy board".
C
This is a really interesting one. It's related to post-deployment monitoring, and I opened a ton of issues today related to post-deployment monitoring. Could you open the epic that's linked here?
C
I know your view is a little bit different than mine, but that's fine. The user research for post-deployment monitoring we haven't started yet, because it's pending the recruitment that we discussed before. I'll ping Emily on it, but it will probably not be done in 13. This is like the third milestone it slips because of recruitment, and I understand Emily is a little bit busy, so I'll try to ping her and see what we can do here.
C
In any case, even if I don't get user research, I'm pretty confident about the initial issues that we need to do, so I don't want to stop. So what do we have next? "Research if we can cancel pipeline": this is the one where I said Shinya did an amazing job. He made a demo; it's linked on this issue, and I encourage you to watch it.
C
It's really cool. He tested our hypothesis that we can use Monitor's incident-response mechanism to do it, and he was able to stop the pipeline, which is really exciting.
C
This opens up all the opportunities for everything that I opened today. So in 13.1 we have "Show error rate exceeded on the deploy board"; it's good that that's up next for you, because it's the next milestone. The idea here is that even if Shinya's research had failed, the minute that Prometheus monitoring sends some kind of indication that the error rate has been exceeded, I want to know. This doesn't do anything active: it doesn't stop anything, it doesn't roll back. All it does is tell the user.
C
"You have a problem." And I think there's value in that, regardless of the research, regardless of cancel pipeline, regardless of anything, so I'm pretty confident that's a good way to start. After that comes "Cancel incremental rollout if threshold exceeded". This is the outcome of Shinya's research, and the idea is that we already found there's an error; now we're going to stop. Okay?
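The two iterations discussed here (first only detect and surface the exceeded error rate, then actively stop the rollout) could be sketched roughly as below. The threshold value and function names are illustrative assumptions; only the pipeline cancel endpoint comes from GitLab's public API:

```python
import urllib.request

# Illustrative threshold: flag the deployment if more than
# 10% of requests are erroring
ERROR_RATE_THRESHOLD = 0.10

def error_rate_exceeded(error_rate: float,
                        threshold: float = ERROR_RATE_THRESHOLD) -> bool:
    """Iteration 1: detection only. Nothing is stopped or rolled
    back; the result is just shown to the user on the deploy board."""
    return error_rate > threshold

def cancel_pipeline(gitlab_url: str, project_id: int,
                    pipeline_id: int, token: str) -> None:
    """Iteration 2: actively halt the incremental rollout by
    cancelling the running pipeline through the GitLab API."""
    req = urllib.request.Request(
        f"{gitlab_url}/api/v4/projects/{project_id}"
        f"/pipelines/{pipeline_id}/cancel",
        method="POST",
        headers={"PRIVATE-TOKEN": token},
    )
    urllib.request.urlopen(req)
```

On cancellation, the pipeline would simply show the existing cancel icon, matching the "no new UX in the first iteration" point below.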
C
For this initial iteration, I don't think we need any UX work here, because basically all we're doing is stopping, and at the pipeline level you'll see a cancel icon, which already exists today. I did ping you on that to make sure you don't want to change anything, but this is basically just stopping.
C
And then there's the next one, which is kind of the same: "Share the cancel-pipeline reason on the pipeline". So you saw that the pipeline was cancelled, and now you open the job page and it tells you: this was cancelled because the error rate exceeded 0.10, or whatever. I made a note in the "Share the cancel-pipeline reason" issue that the text needs to be consistent with the one we show for the metric, because basically they're showing the same data.
B
Do we take into account that we already show an error-report summary when hovering over a job, for example in the mini pipeline graph? There are other places than just the job view where we do error reporting, and I feel that if we just say "hey, this pipeline was cancelled", my expectation is that it's going to confuse users, as we're not showing any reasoning as to why that was done.
C
Yes, 100 percent. I mean, yes and no. No, because I totally forgot about hovering, so we should add an issue for that. But again, I do want to think about this in an iterative way, so I think it's okay to go out in the first iteration with a confusing flow and fix it in the next iteration.
C
So, first iteration: we cancel it, and we write in the docs that if you have this defined and a pipeline is cancelled, it's pretty much because an error was detected, and you can go to the dashboard linked there and see it. Everything is just in the docs. The next iteration is to actually give the user context, either in the deploy board or elsewhere; I don't have a preference. We can decide that based on how complex it is for the front-end engineers to implement. It could be in the deploy board, in the jobs, in the jobs plus the hover, in the log, whatever we decide. But basically the next iteration is: give the user a reason.
B
How will the user know, in the case that the pipeline is cancelled because the incremental-rollout error-rate threshold was exceeded (can you even say it like that? anyway, you know what I'm talking about), that that is the thing that's going on? I understand that it's in the docs, but there's a difference between referring to docs that the user can read to see what's going on, versus the user having to know something is going on, go to the docs, and then find out what happened.
C
They would see that something is cancelled, and they are already monitoring this with Prometheus; that's a prerequisite for this to work. So they can go into the monitoring dashboard and see a spike in the threshold. It's not handed to you the way I would like it to be, but I think for our first iteration it's okay. There's also another way: since we're building on top of Monitor's incident response, every time the error threshold is crossed an issue is created automatically. So there will be that issue.
B
That's good. I like any kind of mock-up; any visuals will always improve things a lot. Let me see... so we're on par with that. This is good. Any other of those issues that need attention? I'm on this one, and then on this one, which seems to be good.
B
A bigger discussion on deploy boards might make sense to have as the next topic, a big cross-functional thing.
B
I'll do my typing later; it's good enough for me for now. Yeah, let me see: "Show error rate", "Roll out new feature flags", so we're going on that. Good.
C
So I just want to give some background so Lori and Nicole understand what we're talking about. We're revisiting the maturity for our CD category. CD is currently at Complete, and its next maturity level is Lovable.
C
But if we're going to be honest with ourselves, we're not at Complete, so I want to revisit it and rebase the baseline of what the maturity level is now. The reason I'm saying this is that we currently don't support cloud deployments, and we don't support advanced deployments like blue-green, traffic vectoring, or traffic shaping. So I think we're kind of lying to ourselves saying that we're Complete, and we decided to go back and ask users what they think about the maturity level.
C
The biggest problem with it is that CD is such a huge category; there are so many jobs to be done that we could have Dmitry work on this all year and he would still not finish. Keeping that in mind, I spoke to Jason, and I'm planning on opening an MR to change the category structure for CD. Today we have three and a half categories in Progressive Delivery.
C
One and a half of them is CD, and the half is Incremental Rollout, which I made a marketing category because it just didn't justify itself on its own. So what I'm going to propose is bringing Incremental Rollout back from marketing to an actual category, but changing it to Advanced Deployment, so it will include everything: canary, incremental rollout, deployment events, traffic shaping, traffic vectoring, load balancing.
C
We have a bunch of work around that, and I think it deserves its own category. CD is then going to have all of that taken out of it, and I want to rename it. I don't have a good name; if anyone has a suggestion, I'm open to it. We kind of need to keep "CD" in there so it's easy to find, but what it's actually going to be is an extension of the CI YAML file and all those functionalities.
C
So we have a bunch of things, like the resource groups that were introduced, or the forked pipelines that you had in the Upcoming column, Dmitry. These are all basically extending the YAML file, but it's not necessarily something that has to do with this new concept of deployment; it's just making things easier.
C
I don't know what the word is. So that leaves deployment to cloud, plus everything we're doing now, which is post-deployment: things that act on the deployment, like the rollback or halting we're doing now. So my thinking is to make this smaller, so that we can tackle it more easily when we're doing this research, and not only for creating a maturity plan but for creating any plan. Just making everything easier, with smaller bits. Yeah.
D
Have you talked with Tim to see how he and Ian approached figuring out whether they had the right jobs to be done, and how they got prioritization around them from their customers? It might be helpful for you guys to go through that process too. They used a survey: they basically brainstormed all the tasks, created jobs to be done, brought engineering in to give them feedback on that list, and then Ian created a survey and sent it out.
B
I also believe Ian laid out this process in his UX scorecard video.
C
The first one, my fruit voting system: it's really interesting! Honestly, I have no idea what to do here, if I'm being totally honest. It seemed like we had a specific course to go on and a direction of what we want to implement here, and then, while customers were providing feedback, we figured out that there's a ton of use cases hidden behind this one issue. So I'm trying to figure out what to do.
C
I need to split all these different use cases out into different issues. What Shinya tried to do was ask the customers themselves, the ones giving us feedback, to vote on what they use, so we can figure out what's most popular. He was using bananas and apples, and then we added another use case, which added pineapples. So it's not a process.
C
It's nothing formal; we're just trying to figure out, based on upvotes, what people are using.
C
We were just trying to figure out what the majority is, because we need to start with one. I don't really think this needs research; we probably need to do everything, but I just wanted an indication of what to start from.
C
It's not intentional; it's very much active, but there's just a lot of ongoing discussion and no decision yet. We're probably going to end up swapping this with a different issue that Shinya wants to work on first, which is going to upset a bunch of people, but it is what it is.

B
Gotcha.
C
So here's the thing. I'm going to explain this really simply, but it's obviously more complex. The way this works is that you're giving people permission to use your stuff. So imagine someone goes into your house and they have to explicitly ask permission to turn on the TV, or to open the fridge, or to use your bathroom.
C
We want that behavior: someone who is not part of my project will not be able to use any resource that I haven't explicitly given to them. That makes it a UX nightmare, because it means you need this list of things where, if something is not stated and you haven't checked it off, you have to explicitly write it out. You have to create this huge list of features or resources (this is permissions, really) and make sure that nothing else is permitted while you've checked off only one thing. I think that's the biggest part of this issue in terms of UX that we need to overcome: how do we create this huge matrix of stuff?
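The deny-by-default model described here (nothing is usable unless it was explicitly granted) can be sketched in a few lines; the resource names and allowlist shape are made up for illustration:

```python
def is_permitted(resource: str, allowlist: set) -> bool:
    """Deny by default: a resource is usable by an outsider only if
    it was explicitly granted -- the 'ask before opening the fridge'
    model from the analogy above."""
    return resource in allowlist

# Only one thing has been checked off; everything else stays denied,
# which is exactly what produces the huge matrix of checkboxes in UX.
granted = {"trigger_pipeline"}
```

The UX cost is the inverse of the code's simplicity: every resource that is not in the set must still be represented somewhere so the user can grant it.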
C
And I think the right approach is to do it iteratively again, so we don't start with a list. We start with one thing, then we add two. But we need to keep in mind that this is an ever-growing list of things that might need paging or sorting, so at some faraway time in the future we might need a super complicated UX.
B
Gotcha. In that case... because this issue already seems to be in dev, but in that case that's not true.
C
That's development for you. I mean, Nicole, it's also contested. We started out with something, then it turned out it was more complicated, and a bunch of questions started being raised. Then, you know, the good and bad thing about GitLab is that people contribute, so a bunch of people were talking on the issues, like "hey, this doesn't solve my problem", and then "okay, maybe we should do this, maybe we should do that". So that's just the history. It's also a pretty old issue.
C
Probably. But as I mentioned, I'm still not 100% clear on what we need to do, and it's probably going to be swapped out with the other issue. So what I'm probably going to do is promote this and create issues from everything that came out of here.
B
All right, thanks for that. It seems that we're at a good state here: "Filter feature flags", "Understand interest", sorry, "Progressive Delivery maturity scorecards", that's up next, and the confirmation one for generating a feature flag.
C
It's not in our Q1 goals. It would be great if Nicole, the new person that's joining, could tackle it, but it's not a priority.