From YouTube: Plan | Weekly Team Meeting - 2020-06-24
B
So yeah, well, while we're waiting, I wanted to show you the second thing we worked on this week. We have this ugly bug: you can see here that we are filtering epic issues by epics, and whenever we resubmitted the form, we would get an extra ampersand here. So this is — this is on production.
C
I thought I'd bring this to everyone's attention, just because the team has been really helpful in breaking down the roadmaps page and trying to find ways that we can speed it up, and we're looking for more suggestions around this — especially things that we can deploy more generally to the rest of GitLab. Basically, I could do a quick demo. I know that not everybody on the call is going to be particularly interested in this, so I'll keep it really brief, but let me share my screen really quickly.
C
I'm super interested in it.

Oh, that's awesome, thanks! Can you tell me if you can see the issue? Yeah, yeah, yep. All right, so everything that we need to start off with is pretty much in this issue, but there are a couple of gotchas that I want to raise for anyone who might be interested in this and might want to look at it. So we run sitespeed analysis on a number of pages, and anyone can add new pages to be regularly checked against sitespeed's kind of — I
C
don't know what you'd call it — profiling or something. The two we're doing for roadmaps are a small roadmap page, which is the gitlab-com support page, which has — I don't know how many — like 12 epics against it, and also the gitlab-org page, which has 750. So immediately, off the top line, you can see a few things: the initial page load is nearly twice as slow, even though it's just an initial page load, so maybe there's something there.
C
So this is the gitlab-org one, and it would not be a race to see whether GDK starts up first or this page loads. So generally, if you're looking at the sitespeed statistics, I'd encourage everyone to also check the video on the sitespeed site, and I'll show you why. This is the waterfall chart for this page, specifically for gitlab-org: you can see the initial page load for this particular measurement is nearly a second.
C
You can also see that the GraphQL request goes on for quite a long time, but the really telling thing is that the last visual change is measured around 10 or 11 seconds, while the GraphQL request takes quite a lot longer to respond. You can see why that is if you just click the video tab and watch the video: here's the page load as sitespeed sees it, and there's your first visual change.
C
So if you actually check the charts for gitlab-org, this can take up to 25 seconds, I think, to load the page, which is obviously problematic: if you're waiting 25 seconds for this to load on its initial load, are you likely to use any of the other features we built around it — the filtering and so on? Probably quite unlikely. So there are two ways I look at this.
C
The first one is, we should look for — as Tim calls it in the context of general performance improvements across GitLab — low-hanging fruit: things that we can fix quickly, the more obvious things. Then we should also look for targeted things that we can do specifically for this page. The way I see it, if we are going for a targeted change, it should be roughly one order of magnitude more effective than anything we could do that is more general.
C
So, to take the example of the initial page load: if we could shave, say, 200 milliseconds off that, but generalize it across page loads across a wide array of endpoints on GitLab, that would be a very worthwhile change, in my opinion. On the other hand, if we can somehow solve N+1s, or some GraphQL things that cause the top-line number for gitlab-org and other organizations with lots of epics to come down, that's also a worthwhile change. And so, yeah.
C
You can check out that issue; there's quite a bit of discussion on it already. But of course, I'd encourage more — anyone can get involved and anyone can contribute. If you see things on this page that you think could be fixed, or if you have some suggestion to do with caching or something that we haven't thought of, please leave a message and start
C
the discussion there. Thanks!

I have a follow-up point: I've noticed that in some cases the way that we design features, from the product standpoint and the UX standpoint, leads to bad performance. So how can we push the discussion about performance left a little bit, almost into the design phase? To that end, milestones is equally slow.
C
Can we understand how to make those trade-offs upfront when we're designing, instead of forcing engineering into the bad position of fixing it after the fact?

Yeah — we made changes to the roadmap page when we switched it over, I believe, to GraphQL, and we did some measurements, and we found that it was roughly as performant as the previous REST endpoint. Or — it's more accurate to say that it was not performant, but not performant in different ways, right? So, like —
C
Anyway, it was progress, basically, because this was part of our plan to move to GraphQL and to do all future features in GraphQL. But then there's — there's two things, right: there's the perceived performance of a page and the actual measured performance, and they're both important. If you look at the milestones page, the milestones page is slow because we load the issues column in the initial page load; we don't load any other column on that page in the initial page load.
C
So a win would be to move that issues column to load asynchronously, and then the first visual change would be much, much faster. But it's not quite as simple as it was for merge requests — it's quite a bit trickier. We did do it at one point, and I think we rolled it back because it ended up breaking workflows. So it's a little trickier, but yeah, you could improve the perceived performance of the milestones page by doing that work.
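The pattern described here — paint the page shell first and fill the heavy issues column in afterwards — can be sketched roughly like this. All names (`renderShell`, `fillIssuesColumn`) are illustrative, not GitLab's actual code:

```typescript
// Sketch: defer the expensive column so the first visual change happens
// sooner. The async fetch is simulated with a plain function call here.

type Issue = { id: number; title: string };

// Synchronous shell render: everything except the expensive issues column.
function renderShell(): string[] {
  return ["header", "milestone summary", "issues-column-placeholder"];
}

// Later, once the data arrives, replace the placeholder with real content.
function fillIssuesColumn(shell: string[], issues: Issue[]): string[] {
  return shell.map((part) =>
    part === "issues-column-placeholder"
      ? `issues-column(${issues.length})`
      : part
  );
}

const shell = renderShell(); // first paint: no issues column yet
const filled = fillIssuesColumn(shell, [{ id: 1, title: "Fix filter bug" }]);
```

The point of the split is that the user sees a complete-looking page before the slow query returns, which is exactly the perceived-performance win discussed above.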
C
What about combining the issues into a single list and adding pagination, with an infinite-scroll kind of thing?

Oh, it would — oh yeah. It's because originally we set a limit on the issues, and that limit, even though we felt it was quite high, turned out to be extremely low for some people. Not only was it too low for their list of issues, it also cut off all their columns as well, so it basically made the milestone page useless for them. It was unfortunate, but we can —
C
We just have to be cognizant of every use case whenever we start this work, I guess. All right.
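The difference between the hard limit that burned users and the pagination idea can be sketched as follows — a fixed cap silently truncates large milestones, while cursor-style paging returns everything in bounded chunks. This is a generic illustration, not GitLab's pagination code:

```typescript
// Sketch: cursor-style pagination instead of a single hard limit.

function page<T>(
  items: T[],
  cursor: number,
  size: number
): { slice: T[]; next: number | null } {
  const slice = items.slice(cursor, cursor + size);
  // next is null when there is nothing left — the scroll can stop asking.
  const next = cursor + size < items.length ? cursor + size : null;
  return { slice, next };
}

// Infinite scroll = keep requesting pages until next is null.
function fetchAll<T>(items: T[], size: number): T[] {
  const out: T[] = [];
  let cursor: number | null = 0;
  while (cursor !== null) {
    const { slice, next } = page(items, cursor, size);
    out.push(...slice);
    cursor = next;
  }
  return out;
}

const issues = Array.from({ length: 25 }, (_, i) => i);
```

Unlike a cap of, say, 20, `fetchAll` eventually surfaces all 25 items, just never more than one page per request.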
A question I have around performance: are we re-running some of these calculations at runtime, or when we query the data, or are we caching any of it? You know, because of things like counts.
C
Okay — what if we updated that asynchronously in the background every, you know, 10 minutes or something, so you always had a static number that you could pull to the front end and you didn't have to calculate it? Have we thought about doing things like that to improve performance in certain places where we show data that is derived and calculated on the fly?

Yeah, so these are exactly the kind of suggestions that I'm really interested to hear, because there are —
C
However, I'm sure there are cases where — specifically, looking at the gitlab-org roadmap, we're bringing in confidential epics, which means that there would be a problem caching the roadmap on the backend, because some people will be able to see some epics and other people won't. But in the past we possibly could have cached maybe the entire list, and just kept that cache warm, because you don't create epics nearly as often as you view them in a roadmap.
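The confidentiality problem raised here is that one shared cache entry per roadmap is wrong when users see different epic lists. One workaround — a sketch, not GitLab's actual design — is to include the viewer's visibility level in the cache key:

```typescript
// Sketch: permission-aware cache keys for a roadmap with confidential epics.

type Epic = { id: number; confidential: boolean };

const allEpics: Epic[] = [
  { id: 1, confidential: false },
  { id: 2, confidential: true },
];

const roadmapCache = new Map<string, Epic[]>();

function roadmapFor(groupId: number, canSeeConfidential: boolean): Epic[] {
  // The permission bit is part of the key, so the two audiences never
  // share an entry and confidential epics can't leak through the cache.
  const key = `${groupId}:${canSeeConfidential}`;
  let list = roadmapCache.get(key);
  if (list === undefined) {
    list = allEpics.filter((e) => canSeeConfidential || !e.confidential);
    roadmapCache.set(key, list);
  }
  return list;
}
```

The trade-off is one warm cache entry per visibility class instead of one per roadmap, which is still far fewer than one per viewer.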
A
Now, when I look at the roadmap loading performance, I believe the request is taking longer because right now I can see in the response that we are loading around 1,300 epics. If the size of the data that we are loading is massive, then it is obviously going to slow down the process.
A
We discussed at length how we could change the UX of horizontal infinite scroll — maybe we can get rid of it. Instead of a scrollbar, we could maybe have some form of resizing the duration that we want to cover in the horizontal scrolling, so that once the user resizes the scale, that determines how we load up the data. We would then load in the sort direction that the user has selected, but we would just not trigger any requests on horizontal scrolling in other directions.
A
So if we figure out a way to present the UX around how to move horizontal timelines — loading instead of a scrollbar — then it solves two problems for us: we won't have to deal with sort direction, and we can still get on with the infinite scroll, where the first GraphQL request that we make would have a defined size, and we would load more once the user scrolls further.
A
The second thing I wanted to include here: as I mentioned, while we were working on roadmap page performance improvements, I did look into this, because the roadmap page has been slow for a while, but we didn't see any complaints, at least on the developer side. I'm not sure whether product managers have had a chance to interact with customers, and whether any customers complained about the global roadmap page being slow when the number of epics is larger, but I didn't see any users complaining about it.
A
So that is the question: whether users are actually looking at group-level roadmaps, or whether they are just using the roadmap view that we have on individual epics. Because most of the time, ever since we introduced nested epics, users can still see the roadmap for a particular parent epic, and there they would only see maybe 5 or 6 epics max, depending on how many child epics they have. So if users are not using the group-level roadmap, then does it even make sense?
D
Now, I can say that it's a great question. So, well, first off: roadmaps are one of the largest areas of opportunity that we have, with enterprise customers coming in to Plan for portfolio management functionality — but the number of folks actually executing on them at the level GitLab does is low. So I think one of the reasons we don't see complaints from the user side on the group roadmaps is that none of the customers using Plan right now at that level have 1,400 epics at a single group level.
D
So I think there's some reality around how we have our work structured inside of GitLab, and we run into some unique problems. It's good to know that there's a limit, right, and we can tell customers, hey, if you get to 1,400 it's not going to work well. But for the most part, yeah — a lot of customers, a lot of prospects and customers that we're starting to talk to. I'm thinking of, say, a big, large Ultimate customer who's
D
looking for a pretty large seat expansion today. They're excited about these possibilities, especially when we can get to the point where we can display things like dependency mapping at this level. So, one, it's an area where we have a lot of opportunity, and we just haven't really been able to double down on it
D
yet, because we've been focusing on the epic-level stuff. But yeah, it is an area that we need to invest in and continue to grow, because there is a lot of interest, and it is an area where we will continue to lose customer interest and prospect interest against tools like Align, because they do this for you already.
C
I'd also say that we're early adopters for Plan, so if we have problems at scale right now, our customers might not yet — but as we expand Plan into these larger organizations, they are going to run into the same problems, and probably even more. Some of the folks that I've talked to have, you know, upwards of five hundred thousand to a million issues. If you consider, across a team of fifteen thousand people, how many epics and issues that could end up creating, it gets pretty big.
C
Nothing happens at all — but the idea behind startup.js is that you take this request for the actual useful data and you move it back here, to the start, right? You make that request asynchronously once you load the page initially, but before you bootstrap the application and load all the assets. So you're not quite moving it right back to zero, but you're moving it back in the timeline, and by the time the application has loaded enough to actually make the request, around about here, you've already taken
C
you know, a second off the request time. The data is either already available, or there's a promise available that can be reused by the application. The problem is that it currently only works with the REST API — it doesn't work with GraphQL yet, although I believe Natalia was working on that. So yeah, it's just one example: it's a simple idea, but when it's generalized across the entire application, it will produce measurable improvements in page load times
C
wherever it's used. And I think, to actually employ this on a page, you simply have to change one line — so yeah, pretty cool. Those are the kind of big wins that we're looking for. If we can use the roadmap page as a test bed and then generalize it, all the better. Something that improves the loading of epics in GraphQL and reduces it significantly on this one page is still a win, but I think something we can generalize is the real goal.
C
What does startup.js do again? OK, good question. So what it does is: if your front-end application makes a request for data to the backend, the front-end application has to load first, and then it has to make the request. The idea behind startup.js is that you make the request right after the initial page load, just using vanilla JavaScript. You load that into either a JSON object, which is provided to the front-end application, or — if it's not ready yet — it provides the application with a wrapped promise.
C
Basically the same thing the application would use internally itself — so you're just moving the request into vanilla JavaScript. The reason it doesn't work with GraphQL yet is that it stores the result against a key, and the key is the URL that you've requested data for. That's obviously more complex with GraphQL, because you have the query and then you have the variables, so it's harder to match one-to-one. That's what they're working on, and I'll add this issue.
A
So we did explore service workers, but we decided to keep the common bundle, which is applicable to all the pages, in the service worker. Obviously, it never moved past v0.
C
M was working on it a while ago — I think it was more than a year ago when we started exploring that. It was around the time we introduced tree shaking.
A
So it would allow us to separate out the common bundles from page-specific bundles, and then we decided to keep the common bundle as part of the service worker cache to reduce load times. But I'm not sure if we are still moving in that direction; I'll follow up with Tim on whether any other team is working on the next stage of those changes.

Yeah, that's a good idea. Especially for large application pages like roadmap, or requirements, which also takes up the entire page — all the places where we have full-page applications, we can still utilize it.
B
So my server restarted, so yeah. Basically, the first thing I wanted to show you is the speed of dogfooding again: we replicated this icon on issues — the one that basically says that this issue is part of an epic — here on this epic. So again, the highlight here is that it was so easy to implement; it was pretty fun to work with, because we have the whole environment working with Vue now.
B
So that's one other thing. The other thing I'll share with you is that, for example, here in production we have this bug where, whenever we resubmitted an epic filter, we would be adding ampersands to the ID. To solve this, what we did was basically update the tokenizer for the search query. So yeah. And regarding the conversation we just had about performance, one of the areas of opportunity that I'm going to be looking into is —
C
Yep, similarly for this theme, we have an async issue just to kick things off. The purpose of these issues is to draw attention to things that we have in progress and to provide a kind of start and end point for the milestone. So yeah, any feedback in the issues is appreciated. In particular, Mark and Kai, if you would look at the deliverables section and see if you agree and that I haven't missed anything — I know we have some feedback on it already.
C
I got feedback in the past that these were useful, so I'll keep doing them as long as they are; once they stop being useful, I'll stop. The idea is just to try to get weights for everything, get things broken down, and have a sense of team cohesion around some of the features that we deliver, so it doesn't just feel like a hamster wheel.
C
I'm trying to automate some of these, so we don't have to do them by hand every month and they get made faster. For now we have an issue template, I believe, that doesn't do much — it just creates the headings; you have to populate it yourself. But let's see — give me a bit of time and maybe we can collaborate on it.
C
Well, I think I have the last item. This is more of a collaboration discussion, and I'd love feedback from engineers and UX folks — anyone who can help product, especially me, figure out how to better support getting more merge requests in per month while working on features, or just breaking things down into smaller pieces so that it's easy to work on them in parallel, or in smaller MRs behind a feature flag. So, when I was working on iterations — and I'm writing this up — I'm
C
going to talk to Simon and Mario about this (Mario's back here), because they're the engineers who worked on it, to learn from them as well. But what's the best way to structure epics and issues into these small chunks that are vertical feature slices — things that you can tangibly test as an end user behind a feature flag while a bigger feature is getting developed? Is there any feedback or suggestions from engineers on the best way to do that, whatever is most helpful for them?
E
I think, from my perspective — and obviously I didn't work on it, or didn't do the code writing on it — it's trickier with new things. With iterations being largely a new MVC, developed as a new feature, you're sort of starting from scratch. At least my inclination there is to do a whole bunch in one big chunk, and then, when you're continuing to iterate on existing stuff, it's easier to find smaller pieces. But that could be part of it.
E
With regards to this specific example, I also feel like different engineers differ: some engineers prefer very granular, very small tickets, and others prefer kind of a nebulous big bucket that they sort of work through. So, you know, I don't know if we want to enforce one style or the other; I'd prefer to keep it flexible.
C
I don't think we should enforce anything — I'm not suggesting that. I'm asking how I can better support engineers and make it easier to group what's going on and chunk things up. One of the other pieces of feedback that I received personally, I think about this too, was: "If it were me, what I would have done is taken each of those things that users can do and shipped them as they were done, behind a feature flag, and put them in the release post."
C
And so my inclination was: let's wait until it's valuable — like, the entire thing isn't valuable enough to use yet. But he said this is something the Gitaly team does: you put the small thing that's behind a feature flag into the release post as a secondary item, and then, when it becomes valuable, you put it in as a headline item. That way the team feels like they're making progress, they get to include things in release posts, they can make smaller MRs, and it can all be behind the feature flag.
C
So I was curious about that — and then also about how we decide what feature flags to use. One of the interesting things that happened when building iterations is that there was a back-end feature flag, and then a front-end feature flag got introduced, and you would have to turn both on. Adding that to the documentation — if an end user wanted to be an early adopter of it, it became more clunky. So the second part I'm wondering about is whether we should consolidate flags when we're building a big feature.
C
I've seen these checklists pop up a little bit lately, and I don't know how they compare to having an epic and multiple issues, but I've seen them be quite useful. Sorry — to give context, I'm thinking of: on the one hand, we have the project to move Service Desk to Core, and that's got a little checklist in it and is being done in steps; on the other hand, we have things like the first iteration of the JIRA importer, which is like an epic of epics, right?
C
But in a case like this, where it's a series of tasks and the first iteration is pretty well refined — like, we know we can't release until X — maybe the task list does a better job. Just one aside as well: maybe we should add a specific heading for feature flags to the issue template for features. We're pretty good — it's usually pretty easy, right, to find out which feature flag covers a specific feature — but not always, so it would be good to have that in there.
A
Good to know that. Regarding the checklist items: what we can start with, for larger efforts around feature development, is an epic with a checklist. Then, depending on how developers realize which checklist item from that epic is going to take a much larger effort — whether on the back end or the front end — you know, try to keep MR sizes as small as possible, and preferably behind feature flags, so that work can still be merged into master without affecting the end product unless the feature flag is turned on.
C
Just for context, the reason I'm using checklists in epics is that, (a), things in the feature could change, right? If I see what the vision for something is, I can break it down into all these different things, but it might not be broken down in the way that's optimal for engineers, so creating a bunch of individual issues is overhead. And (b),
C
I might not get to some of the things further down for months, and I don't want to have a stale issue by the time we get there. So it's less issue management and fatigue to keep it in a checklist until you're ready to work on it, and then you can — so, like, for 13.1 I created issues for each of the checklist items, but I'm not going to for 13.3; we're not there yet, and it would be wasteful. But that's how I'm personally using them.
C
So we could add this to that MR and put it in there somewhere, because it's more or less what I said: don't create issues until the last responsible moment, more or less, because it's just-in-time and it prevents a lot of waste downstream. I don't know if y'all think that's a good spot, but I'm happy to add it there. Sounds good to me, yeah.
B
I just wanted to mention that I recently worked on the epic creation page, and that was basically a copy of the iteration page with different components. One of the things that happened was that the scope grew tremendously, because we had a blocker with the label selection and another one with the date selection.
B
So I would have loved to have a checklist like that, because at the moment I found that blocker, I could have closed the issue — because there was a lot of work left — focused on those two specific items, and created the issue afterwards. So yeah, going forward I'm going to be adopting that, because I found it really useful. Cool, thanks for sharing.