From YouTube: Plan stage weekly - 2020-04-15
B: I just want to run through the 13.0 release planning issue real quick. I'll share my screen just so we can have it for the video. I'm gonna try something a little different for the project management group, instead of just listing out a bunch of things to deliver. It worked really well with the JIRA importer, with Alexandre and Yakko, so I'm kind of building on that and the collaboration, and also addressing some of the other feedback that was in the ad-hoc retro I put together.
B: So we can import comments and @ mentions and descriptions, and that sort of thing. And then the other goal is making progress on the JIRA markdown parser: we have to write a better, proper parser for moving descriptions and other content from the JIRA markup to GitLab Flavored Markdown, and I think we're gonna use some middleware for that as well.
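A minimal sketch of the kind of conversion that parser has to do, as a toy regex pass in Python (the real parser and middleware mentioned above aren't specified in the call, and these rules cover only a tiny, illustrative subset of JIRA wiki markup):

```python
import re

FENCE = "`" * 3  # a GFM fenced-code delimiter, built up to keep this snippet readable

# Toy subset of JIRA wiki markup -> GitLab Flavored Markdown.
# Illustrative only: a real parser needs proper tokenizing to handle
# nesting, escaping, and code blocks that contain markup characters.
RULES = [
    (re.compile(r"^h([1-6])\.\s+", re.M),                 # h2. Title -> ## Title
     lambda m: "#" * int(m.group(1)) + " "),
    (re.compile(r"\{code(?::\w+)?\}(.*?)\{code\}", re.S),  # {code}...{code} -> fenced block
     lambda m: FENCE + m.group(1) + FENCE),
    (re.compile(r"\[([^|\]]+)\|([^\]]+)\]"),               # [text|url] -> [text](url)
     lambda m: f"[{m.group(1)}]({m.group(2)})"),
    (re.compile(r"(?<!\*)\*([^*\n]+)\*(?!\*)"),            # *bold* -> **bold**
     lambda m: f"**{m.group(1)}**"),
]

def jira_to_gfm(text: str) -> str:
    for pattern, repl in RULES:
        text = pattern.sub(repl, text)
    return text

print(jira_to_gfm("h2. Goals\n*Ship* the [importer|https://example.com]."))
# -> ## Goals
# -> **Ship** the [importer](https://example.com).
```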
B: So those are kind of the two goals for that, and Alexandre is gonna be out of the office for a while, so we'll have to figure out how to keep moving that forward. The other is sprints and tracking scope change. We've been working on the multiple-time-box thing for a while, and this is kind of bringing it full circle with the smallest iteration. Instead of making it overly complex, we just wanted to add sprints, so an issue can belong to a milestone.
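That smallest iteration, sketched as a hypothetical data model (the names are illustrative, not GitLab's actual schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical shape of the "smallest iteration": a sprint is just a
# time-boxed milestone, and an issue may belong to at most one of them.
@dataclass
class Sprint:
    title: str
    start_date: date
    due_date: date

@dataclass
class Issue:
    title: str
    sprint: Optional[Sprint] = None  # nothing more complex than this to start

april = Sprint("April iteration", date(2020, 4, 1), date(2020, 4, 15))
issue = Issue("Import JIRA comments", sprint=april)
```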
B: So that's the first, smallest thing. Then there's the burn-up charts, so it's basically all stuff that's been in play thus far. And then the third theme is longer-running investments: chores, bugs, performance and availability issues. So things like getting real time working via WebSockets; that working group is a bit of a slog, but Heinrich's been doing an awesome job thus far. I think he also had some support from another backend engineer from Plan. And then we also have committed to working on three UX issues per release; I don't know, I talked to Mike about it a while back.
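For reference, a burn-up chart reduces to two running series per day, total scope and completed work, for a milestone; a minimal sketch with made-up event dates:

```python
from datetime import date, timedelta

# Minimal burn-up math: for each day, count issues added to the milestone
# so far (total scope) and issues closed so far (completed work). The
# event dates below are made up for illustration.
added  = [date(2020, 4, 1), date(2020, 4, 1), date(2020, 4, 3), date(2020, 4, 6)]
closed = [date(2020, 4, 4), date(2020, 4, 7)]

day = date(2020, 4, 1)
while day <= date(2020, 4, 8):
    scope = sum(1 for d in added if d <= day)
    done = sum(1 for d in closed if d <= day)
    print(day, "scope:", scope, "completed:", done)
    day += timedelta(days=1)
```

Scope changes show up as jumps in the scope line while the completed line keeps climbing, which is what makes the chart useful for tracking scope change inside a time box.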
B: We can talk to Holly. I don't know if that's three per stage or three per group, but anyway we're gonna prioritize some of those, and I've left that up to Holly to kind of look at. Alexis and Nick, you all are kind of in charge of telling us what those things should be, but I would also encourage us to look at the areas that we're already gonna be working in, in terms of features, and see if we're already delivering those as features or iterations on existing features.
B: We also need to move some Premium features down to Core, there's an escalated security issue, and then, Donald, John: whatever else you guys feel like we need to target from bugs, SLAs, that sort of thing. There's gonna be like a third basic theme, and I think we'll probably have a theme like this every release, and then we'll try to have kind of two main feature themes. That way, eventually, we can get to the point where we're doing 50 to 60% feature development and 40 to 50 percent everything else.
C: I like that format, Gabe. I'll update my section and I'll push an update to the template for this next month; I think that's a good break. Oh yeah, do you want to just scroll a little bit? Yeah, perfect. So from the portfolio management side, we're gonna keep moving on some of the problem validation and a little bit of solution validation for epic swimlanes and the epic board, just to kind of get some of that on paper and into some more concrete details.
C: Excuse me. As well as try to wrap up solution validation for these two items, which are expanding filtering on the roadmap and hiding/collapsing the milestone view.
C: Hopefully we can also get to updating the epic parent via drag-and-drop in the epic tree, getting epic assignment on the issue list, and that new epic creation page. And we have the critical refactor, tech debt, and bug items that John and Donald recommended that, like I said, I'm gonna move into this theme like Gabe has here. That's good; that's what we've got right now.
D: I can talk through it real quick. I mean, there's three real priorities to talk about, in terms of threes. Requirements management: iterate on that. I think we kind of know that we're moving forward on that; we need to come up with a solution before trying to figure out how to link requirements to other objects. I know there's some great ideas...
D: ...being thrown around. I want to try to have a spike on that and really nail down what we want to do on the linking. I actually have a phone call with Matt Gonzalez, because he's curious about how to link these things as well. I guess he's looking at it more from the idea of linking testing back to requirements, which nobody's talked about, so I'm curious to hear his thoughts, and whether his team can help us in any way, shape or form on that as well.
D: So there might be a collaborative design session there, which would be great. The second major theme for us is usage data collection. As a PM I'm flying blind. I know Gabe has limited information, and I know Keenan's worked really hard on his dashboards, but there's really no metrics collection right now for blocking issues or related issues, which is a big hole. And requirements usage is our North Star for Certify, so we need to ensure that we can track requirements created, and then down the road we want to be able to track...
D: ...you know, interactions with requirements. So that's sort of the second major theme: getting that all lined up. And then really the third thing is gonna be sort of technical debt and refactoring. We want to make sure we prioritize any bugs that we have in the backlog; we want to get those through. I know there's been a push to help a couple of other teams out who are way behind on their bugs as well. I don't know what the status of that is; I haven't heard much more about that.
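On the usage-data theme, the simplest version of what's being described is an event counter incremented wherever a requirement is created. A sketch using the Prometheus Python client (the metric name and wiring are hypothetical; the call doesn't specify how GitLab's actual usage tracking is plumbed):

```python
from prometheus_client import Counter, start_http_server

# Hypothetical counter for the "requirements created" metric; a scraper
# such as Prometheus would collect it from the /metrics endpoint.
REQUIREMENTS_CREATED = Counter(
    "requirements_created_total",
    "Number of requirements created",
    ["project"],
)

def create_requirement(project: str, title: str) -> None:
    # ... persist the requirement, then record the usage event ...
    REQUIREMENTS_CREATED.labels(project=project).inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics on port 8000
    create_requirement("certify", "Track sign-offs")
```

Tracking "interactions with requirements" later would just mean adding more counters, or more labels on this one, at the relevant code paths.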
D: But if those come down the pipe, I want to make sure we have a little bit of bandwidth to provide assistance if necessary, and just make sure that we don't have any critical refactoring we need to do on our side, to make sure that we're continuing to prioritize those engineering metrics as well. I know there's measurements being done to determine if we're getting through, you know, usage, and whether or not our availability numbers stay high.
D: I know that's a big thing, so I want to make sure any issues that are promoted related to that, we're prioritizing in some capacity as well, just to make sure we're covering all our bases. So those are sort of the three major things I was working toward for 13.0. If anybody has feedback or comments, I'll have the issue updated in the next couple hours, and we can discuss; I'd be happy to shift things around if there's...
B: Thanks. Donald, did you want to verbalize any of your feedback on this item?
A: Yeah. So we discussed it in our meeting yesterday on the project management side and decided to go with three main themes for 13.0, just because it works out well: we have six backend engineers, so we can have two backend engineers focused on each theme, and we have three frontend engineers currently, so we can have a frontend engineer dedicated to each theme.
A: I don't know if it works quite as well, especially if we're thinking about doing three themes for Portfolio and three themes for Certify; that'll kind of spread people thin, and we won't get some of the value that Gabe talked about around collaborating and working together on these themes. So does anyone have any thoughts on how many themes we should try to commit to for Portfolio and Certify?
D: I mean, I'd be totally open to the idea of doing like one major theme and then sort of what you'd call a minor theme. I just want to make sure we don't lose the bug fixes and the performance improvements and the availability ideas as well; I don't want them to get lost in the shuffle.
B: Sure. So I think this is kind of going and looking at everything that we have in our backlog. We're always gonna have ongoing chores: moving things to Core, stuff that comes up at random, SLA things, technical debt that we need to take care of, performance and availability issues. That's gonna be an ongoing thing. And I think the goal, right now, based on the MR throughput that I was looking at over the last several releases...
B: ...I think we were having around 25 percent features, and I think we're still getting better at labeling things. If something's labeled backstage but it's part of a feature epic, PMs should make it a feature, not backstage. But we're targeting about 60% of all the MRs and issues that we push through being feature-related, enhancements, or improvements: things that will drive customer value. And then the other 40% are things that are gonna help us improve our ability to drive customer value in the future. So, beyond that...
B: ...there are the ongoing investments, like we've been working on the migrations of mentions so that we can have a Notification Center, which is a super-long-running effort. Just those things that aren't gonna produce a lot of fruit right away, but that we need to take care of; that's always gonna be there. So basically, it's looking at our kind of throughput streams and keeping that kind of ratio trending that way. So, is that what you wanted me to explain?
E: Yeah. You've probably read this already, but for the benefit of anyone watching the call: we seem to have delivered a lot, or we'll deliver a lot, touch wood, in 12.10, so I'll riff on it. We have requirements management, the first version of that; the JIRA importer, the first version of that; health status on epics, as well as setting health status on issues, which I believe made 12.9; some iteration of burn-up charts, hopefully; and more, although, as you'd probably expect, with a backend and full-stack bias.
E: So I'm probably missing a couple. One that jumps out at me would be the ability to reorder discussions in issues; that's awesome. A few things will narrowly miss as well; I mean, that's normal in a continuous delivery environment. So to me this feels like a huge achievement, like a step change in our process. We've had months in the past where we haven't delivered anything for a group, and this just feels like we're delivering a lot of stuff, so yeah.
A: Yeah, one thing that I found interesting is I don't think our velocity really went up in 12.10. I think some of it may have had to do with the things we delivered being very visible, tangible things that we can add to our release posts, essentially. So I don't know what that means: whether we did a better job of prioritizing, or we did a better job of iterating or cutting scope for some of these things so we could deliver something within the milestone for everything. But I just thought...
B: Why I think it worked is that we were focused on an outcome and we were willing to trade scope, like you should always be: if you're in a fixed time box, you should have to make trade-off decisions. But we committed to shipping something no matter what, and I think it was really awesome, so I'm proud of everybody.
D: I mean, I'll second that; I think this was a fantastic release. I really felt like there was a well-thought-out plan and execution, and I think people really took that into account. Like what Gabe said, I was really happy to see people, instead of picking up lots of little tasks that didn't necessarily all work together...
D: ...we focused on a theme, like requirements management or the JIRA importer, and we made sure that everything we were doing was focused on that theme. And when there were other tasks, they were brought up and we were able to disposition those as: yes, we still need to do that, or no, we can push it. What that ended up doing was giving us a very focused build, with the focus on the delivery. I think it worked really well.
F: It definitely felt different for me. I would say that it was not ideal for me, in that I would have liked to have had more time to do research and think through things a little more thoroughly. But I felt like I had more time with the developers, which was great, and we spent more time collaborating as a group, which I loved; I really loved doing that. It takes more time, but it was more synchronous collaboration, so I felt like we were able to move things along a little faster. So I appreciated that.
G: I was just excited to see some of the work in portfolio management. We've been working on it for a while, and it's been getting released, which is really exciting, with a lot of collaboration and MRs back and forth, working on things together. But yeah, I think the JIRA importer was definitely a change, and it's an exciting change, as everyone's seen.
E: Just on this as well, one of the changes, if you like, that we've made to speed things up has been to keep reviews and maintainership reviews within the team. I know there's an MR from Jean, one of the other engineering managers, to actually make this part of the process company-wide: that we have domain expertise in reviews as a first choice. I think we'll still have reviewer roulette, so there'll still be sort of shared code ownership, but you know, as a first pass.
E: This release in general? Yeah, well, that's the question I was asking: how it was for other people. From a sort of backend engineering point of view, when it came to things like the burn-up charts...
E: ...if we needed answers to a question, we knew where to ask. And that might seem obvious, right, but you can have two or three people working on this with different ideas, a different experience, of what a burn-up chart is and does and has done in their past experience. So to understand exactly... it's possible, like, there's a lot of written material on this that they could read through in the issue, but you know, maybe they...
E: ...maybe it's just that you've seen these issues where there are hundreds and hundreds of comments and the discussion evolves over time, right? And so you have to kind of take the entire thing in and then figure out what the outcome was for yourself, that kind of thing. So the fact that we had a channel where people could ask questions of the PM and the EM, to just keep themselves unblocked, was, I think, very useful.
E: In the end, we ended up cutting even more scope than we originally intended from the burn-up charts, and, as far as I'm aware, it's still at risk, but it's a hell of a lot closer than it was when we were in the middle of the milestone and weren't sure where it was and what was happening with it.
B: Yeah. So I was trying to read through everyone's contributions in the ad hoc retro; thank you to those that did comment. I tried to pull out some of the things that we wanted to take away from that. I think we've already talked about one of the bigger ones, which is working more from a thematic standpoint, a planning standpoint, and organization: having dedicated channels and a few engineers collaborating as a given team in a release. I think that's a huge one. And then I didn't know if there were any others that we wanted to talk about.
B: But it would be great to think through some sort of shared goals that we could work towards and feel good about. Whether that's us trying to get to working on 60% features, or reducing the defect rate, or some sort of performance metrics: just things that, as a team, we can align behind. We're also working on North Star metrics from a product standpoint, which will become something that we'll talk about together as a team and drive towards improving.
E: So I need to go back and check where the discussion is now, but the main one that stuck out for me was the comment about feeling like we had six sub-teams of one instead of one team of six people. That's the kind of thing where I've been trying to look for ways that we could mitigate it in future; I think the themes thing is a really good idea.
E: So yeah, from that point of view, I think anything we can do to build support into our process for engineers as they work on stuff. And that usually comes from, one, being available, and two, having more than one person on each individual thing, rather than someone being kind of solely responsible for it. If we could conquer that, it would be pretty much a good outcome from this.
A: Yeah, I agree; I think that would be the big one. And then, to your point, Gabe, I think also aiming to provide a little bit more direction on how much time should be spent working on those themes or product deliverables versus some of the other stuff that we do, so aiming for like a 60/40 split. I think that would be helpful also.
B: You know, there's always one biggest constraint in a system, and it's gonna change over time, but basically it would enable us to look and see which parts of our process are the slower ones, or the ones that take longer, and then we can figure out as a team how we want to improve processes there: whether it's breaking things down smaller, or providing better planning and stuff upfront. There could be all sorts of root causes, but it's a framework that lets us at least identify them.
B: ...more often. Like I said, I talk with sales folks and prospects and all sorts of people all day and hear certain things, and I want to make sure the engineers and designers have that context too. So I don't want to waste time on fruitless things, but I also think it's important that everyone has a unified understanding of who we're selling to, who's buying our products, whose problems we're trying to solve. And so I just didn't know...
D: I was wondering that too. I mean, I made a conscious effort in the last customer call I had to invite John, even though I know it was outside of what John's working on, just so he could hear the discussion we had. I don't know if that was useful, if you'd like to be invited to more of those, or if there's any feedback on how that went, because I'm very open to doing that. If you think there's benefit there, I'd love to be as inclusive as possible.
D: I think it's really critical for them to hear the first-hand accounts of the customers and to see the discussions we're having on a daily basis about what the customer expects from features, which may or may not correlate to what we necessarily understand them to expect. And I think resolving that disconnect is a huge step forward in a lot of ways.
D: You know, I would love to ask that question more. I know with requirements management it's difficult, because we are working in more of a regulated space; I don't think I've included engineering on any NDA-based calls yet, or non-disclosure calls. And as we move forward, sometimes there are actually residency restrictions as well. So I will do my best to ensure that it's as open and inclusive as possible, and if we can record them, I think that's fantastic. So no, I completely agree.
B: You know, a lot of the calls I'm on have a note taker, which is the bot for Chorus.ai and the like, and at least for the sales calls, the Chorus.ai calls are definitely recorded. So I'm wondering if it makes sense to just create a Slack channel specifically where you post a meeting that's coming up, then people can say in the thread that they want to join, and then we can follow up and attach a recording link, if we have one, to that same thread.
B: It just was hard, I guess; I wasn't sure how you'd add people, because I'm not the one who schedules these most of the time, it's like random people. So if I can figure out how to add that calendar to the thing that's already scheduled, that would work. That's why I'd like it. I created one of those shared calendars, but it was a little bit wonky to figure out how to get it populated correctly.
A: Well, yeah, let's start with just creating a Slack channel with all of those, and then we can iterate on it.
C: Yes, so just to raise the expanding of epics on the roadmap: in the current state, it seems like we ran into some funky behavior once we got the first pass of it out, before you have full expansion. I just want to ask the question, kind of as a quick retrospective on it: is there anything we think we could have done better, or maybe some of the complexities that are on the engineering side that maybe folks in product, and anybody else watching...
C: ...this call, might not fully get. I just want to open up a conversation on how we're feeling about it, and maybe ask: is there anything from the product or design side we could have done to avoid some unseen items that came up? I don't know if it's normal on this call to ask that question, but it's kind of one that's, you know...
A: Sure. So maybe we talk through how that was, how there was miscommunication, I think, or just how that went about. And Keenan, you were pretty involved, I think, in the process there, so maybe you could talk through the feature and how it came about, and the review process.
C: I mean, this one actually predates me quite a bit, but it was a fairly necessary addition to the roadmap, because of a couple of different use cases. The roadmap needs to tell the right story to the right level of user or consumer, and sometimes that's very high level, like we just need big-bucket parent items. If you're communicating to a business leader or a director, somebody who doesn't need to know the minutiae, it's: hey, here's the big project, and it's lasting this long.
C: For other conversations, you need more information. You need more granular ideas of the work, of the sequencing, the progress, so you need to be able to expand that parent epic out on the roadmap.
C: So you see the children nested under it, how they land on the timeline, and individual progress. The expected state was, as Gabe kind of mentioned, how the epic tree works: you should be able to expand it all the way down. And, to be honest, there was a demo of it, and that's the first time I realized: oh, it looks like we're only expanding to one level. So I think maybe I missed something in one of the conversations in the past.
C: Maybe that's why I wasn't aware of that, or maybe the requirements just weren't as clear as they could have been. And so that's when we kind of spun up a different issue saying: this is actually what we want to try to get to, let's make sure we're clear, and let's try to move that forward, which I think is happening now; it's in dev now. But yeah, I mean...
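The one-level versus full-expansion distinction above comes down to a recursion depth limit when flattening the epic tree for the roadmap. A toy sketch (types and names are hypothetical, not GitLab's implementation):

```python
from dataclasses import dataclass, field

# Toy epic tree: the behavior described is effectively expanding with
# max_depth=1 instead of walking the whole tree.
@dataclass
class Epic:
    title: str
    children: list = field(default_factory=list)

def expand(epic: Epic, max_depth: int, depth: int = 0):
    """Yield (depth, title) pairs down to max_depth levels of children."""
    yield depth, epic.title
    if depth < max_depth:
        for child in epic.children:
            yield from expand(child, max_depth, depth + 1)

tree = Epic("Parent", [Epic("Child A", [Epic("Grandchild")]), Epic("Child B")])
for depth, title in expand(tree, max_depth=1):   # one level: grandchild hidden
    print("  " * depth + title)
for depth, title in expand(tree, max_depth=99):  # "all the way down"
    print("  " * depth + title)
```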
C: I think part of it, for me... you know, my GDK is down right now, so part of that's on my side. But I think maybe earlier demos of functionality might help with some of these big-ticket items, or early, initial MVCs for folks; maybe that can help answer some of these questions and get earlier feedback. One thing I didn't think about too...
B: Today, I really liked, I guess it was Kong, on the JIRA importer: on an MR he recorded a video of the different stages of the thing, and so that was helpful for me, because I didn't have to spin something up locally. I also noticed Scott Stern sometimes puts GIFs on his MRs showing the interaction and the behavior, which is also helpful.
B: I think one thing that's interesting to me about this is that we have the epic view with the epic tree on it, and then we have the roadmap. And the way that I guess I'd always envisioned this, you don't necessarily need both the epic tree and the roadmap: you want to have information about an epic, but you could also interact with that via a roadmap-type view, and that way you don't duplicate functionality.
C: I mean, I understand what you're saying; they're two different use cases that use the same information, though, like the epic tree. I don't want to take this call off into use cases or whatever, but the epic itself is the collection of work: it's looking at the bucket, and it's fairly agnostic of time, right? The roadmap is looking at that similar information over spans of time, and we do see different needs for these from customers.
C: In those conversations it's: I need to plan the work, I need to put the work together, I need to fragment it and create issues, I need to organize it. And the roadmap is about displaying it, looking at it over time, and having the ability to get more granular as you need to for different audiences. And so, yeah.
C: There's some duplication there, but when you talk about reporting on multiple artifacts, you're always gonna have some level of duplication of data, because there's so much data and it's all connected in the right ways. But you're also gonna need to look at dependency mapping, which is gonna be really difficult to do in an epic list view; you can take that same information, though, and map it over a timeline.
A: And I think that's what we're aiming for in this with the latest issue. Not so much the pagination or filtering or lazy loading, because that's something we still have to figure out; I think we still have an issue on that outside of this. But as far as just showing the top-level epics and then allowing the expanding of sub-epics off of the epic tree, I think that's what we're working on.
B: ...slash, you know, go-to-market: getting stuff ready for production. Do we test the scalability of features as we've implemented them? Because there's a big difference between developing something locally with a smaller data set versus, I guess, running against an enormous, much larger data set. Are we doing anything to look at the performance of that at scale, like load testing, before we just call something done?
E: We don't do load testing before we release a feature, but there seems to be an effort, and I'm not totally familiar with this, but there seems to be some sort of effort to run load testing against known features, parts of the API for example, where we then get issues with severity labels attached based on the performance of that part of the application. A good example would be a problem Heinrich fixed really recently: it was identified on merge requests, but I think it's applicable to issues as well...
E: ...where, as the number of comments on a discussion went up, the response time went up exponentially under higher loads, right? And, I mean, he got in and fixed that and sped it up. So, to answer the question: I don't believe so, apart from what we've been trying to expose, which is graphs in Grafana, and dashboards and logs in Kibana, on the performance of parts of the site and how they perform now, and then giving engineers the ability to check that after they ship the feature.
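A minimal version of that kind of check, timing an endpoint under concurrent load with only the Python standard library (the URL and request counts are placeholders, not the load-testing setup the team actually uses):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:3000/api/v4/projects"  # placeholder endpoint

def timed_get(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

# Fire 50 requests, 10 at a time, and look at the latency spread. If
# response time grows sharply with data size (e.g. comment count), it
# shows up here long before an incident does.
with ThreadPoolExecutor(max_workers=10) as pool:
    times = sorted(pool.map(timed_get, range(50)))

print(f"p50={times[len(times) // 2]:.3f}s  p95={times[int(len(times) * 0.95)]:.3f}s")
```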
B: What about any other sort of tracing? I mean, I'm not sure, I'm not an engineer. I just know back when I was building Ruby applications, we used New Relic, and we would instrument that, and the engineers would use it as part of their development process to check their code for performance issues, at least the ones they could find. There was always stuff you can't find until it's live in the wild and at scale, but is there anything that we could do to help catch some of these things proactively instead of retroactively?
E: Yeah, I mean, with what you said, in New Relic you'd have the ability to know pretty quickly, and you also get to see how much of the request is application layer versus how much is database layer. I don't think we use New Relic anywhere in GitLab, but we do send stuff through Prometheus and visualize it in Grafana graphs. We also aggregate logs in an Elasticsearch cluster and make them queryable through Kibana.
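As a sketch of what that Prometheus instrumentation can look like at the code level, a latency histogram on a hot path (the metric name and the code path are made up for illustration):

```python
import time
from prometheus_client import Histogram

# Hypothetical latency histogram; Prometheus scrapes it and Grafana
# graphs it, giving the per-endpoint timing views described above.
RENDER_SECONDS = Histogram(
    "discussion_render_seconds",
    "Time spent rendering a discussion",
)

@RENDER_SECONDS.time()  # records one observation per call
def render_discussion(comment_count: int) -> None:
    time.sleep(0.001 * comment_count)  # stand-in for the real work

for n in (10, 100, 500):
    render_discussion(n)
```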
E: I think that's limited to seven days, though. So we have a lot of data, and we've just gone through a period of performance training with a number of members of the team, so maybe we could be better at surfacing that. It's always gonna be retrospective, though, retroactive, until we figure out some way of building performance into the development process, not just making people aware of how things perform already.
E: ...it added a feature, but it also hugely improved the performance of the current feature that was already there. So there is an emphasis on performance; we just don't really have any way, proactively, sorry, not retroactively, to do load testing as part of the development process at the minute.
B: That makes sense. What about, sorry, I'm just asking lots of questions, I'm curious about this: using things like profilers locally to look at your database queries and your memory usage and the overall time for different stuff? That could be one way we could do it. It's not at scale necessarily, but at least we could maybe start to think about it proactively, and start to see that if something's already a little bit slow locally, it's probably really slow at scale.
E: I think you can do a lot of that with what comes out of the box in Rails. I mean, in the logs you can get a fairly good idea quickly of what part of the application is slower, what part of what you've built is slow, like how much time the application spent in the database versus in the application layer, that kind of thing. So yeah, we could add profilers; I'm not sure if there are any. Maybe, Alexandre, you could...
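For the local-profiler idea, the workflow is built into most stacks; a sketch with Python's standard-library profiler (the function under test is made up, and the Rails equivalent would use different tooling):

```python
import cProfile
import pstats

def build_report(n: int) -> int:
    # Stand-in for the code path being developed.
    return sum(i * i for i in range(n))

# Profile one call and print the ten most expensive functions by
# cumulative time; a spot that is slow locally on a small data set
# tends to be far worse at production scale.
profiler = cProfile.Profile()
profiler.enable()
build_report(1_000_000)
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```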
H: There is some... like, in the logging from Rails you already get some feedback on how much time you spent in the queries. But I'm not sure how feasible it is to do this load testing in development. It does sound like a next step, or an iteration on the feature development itself: you're building a feature with performance in mind, so building the architecture and so on, and then, as a next step, seeing how much load it can actually sustain.
H: Does it have all the data that you're going to see? Like, you can generate a huge data set, but a problem can still pop up on production because of the kind of data that you have there. Here I'm thinking, for instance, of parsing the descriptions of the JIRA issues, or something like that: you cannot really prepare for all kinds of information. You can prepare for counts and amounts and that sort of thing, but...
B: I think we're at time, so I don't want to keep everybody later than we've got to be. But I would be interested, John, and any of the engineers, in just thinking about things that we can do proactively that don't add a ton of unnecessary overhead; I don't want to do busy work or wasted chores. But I do agree, sort of like what Alexandre was saying, with designing with performance in mind from the beginning.
H: One more thing, though: as we develop a feature, maybe we should, from the get-go, have an issue that says, let's evaluate any performance implications. So at least we know we're thinking about that, and we're not just going forward without at least paying some attention to it and seeing what the implications might be. And then maybe what surfaces is: yeah, what this indicates is that we should look at how it performs in specific situations.