From YouTube: Plan stage weekly meeting - 2019-09-25
B: Although I've written a lot of points, I just wanted to make sure that I don't derail the conversation when explaining what's happening here. So this is about the roadmap page. As you all know, we are slowly moving away from the traditional Rails API, for everything that we do on the portfolio management side, over to GraphQL. That means we want to make use of GraphQL in the roadmap page as well. But with that come some concerns. Like, right now, what happens?
B: So that means that if you go to gitlab.com today and visit the roadmap page of the gitlab-org group, it has 1,500 epics, and of those, somewhere close to 500 epics have start or finish dates, or both. That means we load up all 500 epics on the roadmap page at once, and obviously this is not very performant: the delay in how long it takes for the API to return those 500 results is itself a bit long, and it might be even worse on slow connections.
B: What that means is that if we were to merge this merge request, which integrates GraphQL, into production, users wouldn't see more than a hundred items, so we need some sort of pagination on scroll for it to work. Now, as I mentioned, what are the complications that come with the GraphQL integration? These are the five points that I've mentioned here.
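The "pagination on scroll" idea mentioned above can be sketched roughly like this (an illustrative Python sketch; the page size of 100 matches the GraphQL limit discussed here, but `fetch_page` and the state shape are hypothetical stand-ins for a GraphQL connection query with `first: 100, after: $cursor`):

```python
# Illustrative sketch of cursor-based "pagination on scroll":
# each time the user hits the end of the list, fetch the next
# page of up to 100 epics and append it. fetch_page and its
# cursor fields are hypothetical stand-ins for a GraphQL
# connection query (first: 100, after: $cursor).

PAGE_SIZE = 100

def fetch_page(all_epics, after_index):
    """Pretend GraphQL call: return one page plus an end cursor."""
    page = all_epics[after_index:after_index + PAGE_SIZE]
    end_cursor = after_index + len(page)
    has_next = end_cursor < len(all_epics)
    return page, end_cursor, has_next

def on_scroll_to_end(state, all_epics):
    """Append the next page when the user reaches the list's end."""
    if not state["has_next"]:
        return
    page, cursor, has_next = fetch_page(all_epics, state["cursor"])
    state["items"].extend(page)
    state["cursor"] = cursor
    state["has_next"] = has_next

state = {"items": [], "cursor": 0, "has_next": True}
epics = [f"epic-{i}" for i in range(250)]
on_scroll_to_end(state, epics)  # first 100 loaded
on_scroll_to_end(state, epics)  # 200 loaded
on_scroll_to_end(state, epics)  # remaining 50 loaded; has_next is now False
```

A real implementation would pass `pageInfo.endCursor` back as the `after` argument; the integer index here just simulates that cursor.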
B: So the first thing is that we have endless horizontal scrolling within the timeline, which means that if you scroll into the future or into the past, you load up results from the future or the past, depending on which direction the user scrolls in, and we insert those epics into the existing list while making sure that the sort order the user has selected is respected. That means insertions can happen anywhere within the list, depending on where exactly that epic falls as per the currently selected sort order.
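Respecting the selected sort order while inserting newly loaded epics amounts to a sorted insert. A minimal sketch, assuming epics are plain dicts sorted by start date (an assumption for illustration, not the real data model):

```python
from bisect import bisect_left

# Sketch of inserting a newly loaded epic into an already-sorted
# list so the user's selected sort order stays intact: binary-search
# for the right position, then insert there rather than appending.

def insert_epic(sorted_epics, epic, key=lambda e: e["start"]):
    keys = [key(e) for e in sorted_epics]
    idx = bisect_left(keys, key(epic))  # position that preserves order
    sorted_epics.insert(idx, epic)

timeline = [{"id": 1, "start": "2019-01"}, {"id": 2, "start": "2019-06"}]
# An epic scrolled in from the past lands in the middle, not at the end:
insert_epic(timeline, {"id": 3, "start": "2019-03"})
# One from the future happens to land at the end:
insert_epic(timeline, {"id": 4, "start": "2019-09"})
```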
B: So that is one feature that we have right now, and, plus, sorting itself happens on the client side: once results arrive, we sort what we are showing on the UI and then everything is rendered. Fortunately, on the UI it is not very noticeable, because Vue is smart enough to make sure that only the items inserted in a new place are visually re-rendered. Behind the scenes a lot of things are going on, but usually you wouldn't notice, like:
B: "Okay, all the items were shuffled around because something was sorted." So that's a good thing. But once we have the GraphQL integration, we would only be seeing a hundred items per page, so once the user scrolls to the end of the list, we would append 100 more items. With that, sorting becomes a problem, because how would we do the sorting then? The problem is that it might be possible that one of the items which was supposed to be at the top of the list...
B: ...as per the currently selected sorting order, is now at the bottom of the list, because the back end doesn't perform any kind of sorting on the results itself. So that is the whole problem. And the performance, as I mentioned, is already something we need to look at as soon as possible, because right now we are only loading the roughly 500 epics that have dates, and even with 500 items it is starting to become a scrolling problem.
B: Essentially, in order to make the scrolling behavior itself faster, we need to make sure that we do not render everything at once; we only render what's needed, which is also called buffered rendering. At any given point in time on the page, the user only sees somewhere around 30 to 35 epics at once. We don't need to render 500 items; we can show only the items which are visible. So that is one thing. Now, what are the options to solve it?
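The buffered rendering described above (render only the rows that intersect the viewport) can be sketched like this; the fixed row height and the pixel numbers are assumptions chosen so the viewport fits roughly 35 rows:

```python
# Sketch of buffered (windowed) rendering: given the scroll offset,
# compute which of the rows actually intersect the viewport and
# render only those, plus a small buffer on each side. A constant
# row height is an assumption to keep the math simple.

ROW_HEIGHT = 40   # px, assumed constant per epic row
VIEWPORT = 1400   # px, fits roughly 35 rows, as mentioned above
BUFFER = 5        # extra rows above/below to smooth fast scrolling

def visible_range(scroll_top, total_rows):
    first = max(0, scroll_top // ROW_HEIGHT - BUFFER)
    last = min(total_rows, (scroll_top + VIEWPORT) // ROW_HEIGHT + BUFFER + 1)
    return first, last

def render(rows, scroll_top):
    first, last = visible_range(scroll_top, len(rows))
    return rows[first:last]  # only ~45 rows instead of all 500

rows = [f"epic-{i}" for i in range(500)]
window = render(rows, scroll_top=4000)  # somewhere mid-list
```

On scroll, `visible_range` is recomputed and rows entering or leaving the window are mounted or unmounted, which is what keeps the DOM small.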
B
So
one
thing
is:
obviously
we
would
need
some
sort
of
buffered
rendering
where
we
would
only
show
what's
what
matters
on
the
viewport
and
we
would
take
away
any
items
which
are
not
currently
part
of
UI.
So
that's
one
thing
and
then
once
user
Scrolls
upwards
or
downwards,
basically
any
vertical
scrolling.
That
happens,
we
would
append
or
remove
items
on
the
list.
So
that's
one
thing
and
we
would
continue
to
do
insertions
on
horizontal
scroll
as
it
is
like
a
fuse
or
Scrolls
into
the
past
or
future.
B: ...we would insert items within the list as we do now. So that's the second thing. And then sorting would apply only to what's visible in the viewport, and this is the important part: if the user is seeing only 35 items at once on screen, our sorting would be applied only to those 35 items which are on the UI. We wouldn't do any kind of sorting on what's outside the viewport. But that is a problem in case the user wants to scroll all the way to the bottom of the list.
B
So
that
is
one
problem.
So
another
alternative
in
combination
with
first
option
is
to
basically
move
entire
sorting
logic
to
mapping,
and
it
is
easier
to
do
so
because
graph
QL
already
has
a
threshold
of
hundred
items.
So
once
API
returns
back,
we
can
do
the
salting
I'm,
not
sure
if
that
can
be
done
on
a
controller
level
or
a
helper
level,
depending
on
how
backends
one
wants
to
do
it.
But
we
basically
move
entire
sorting
logic
into
the
back
end
so
that
we
do
any
sorting
one
front
end.
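The reason sorting has to move behind the pagination boundary can be shown in a few lines: sorting each 100-item page on the client after it arrives is not the same as sorting the whole set on the server and then slicing pages. Purely illustrative Python, not the actual Rails or GraphQL code:

```python
# Why sorting must happen before pagination: sorting each page
# client-side after it arrives gives a different (wrong) global
# order than sorting the full set server-side and then slicing
# pages. The (7 * i) % 250 trick is just a deterministic shuffle
# of the start values 0..249 for the example.

epics = [{"id": i, "start": (7 * i) % 250} for i in range(250)]

# Client-side approach: fetch unsorted pages of 100, sort each page alone.
pages = [epics[i:i + 100] for i in range(0, len(epics), 100)]
client_order = [e for page in pages
                for e in sorted(page, key=lambda e: e["start"])]

# Server-side approach: sort everything first, then slice pages of 100.
server_order = sorted(epics, key=lambda e: e["start"])

# client_order is sorted within each page but not globally: the last
# item of page one can come long after the first item of page two.
```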
B: We would just append the results as they come in. So that is one approach, and then we continue with the rest of the approach as I mentioned in the first part, where we would do buffered rendering and everything. And I see a question from Shawn asking what the sorting logic currently is, whether it is based on start and end dates.
B
Yes,
it
is
based
on
dates
essentially,
but
depending
on
what
user
is
selected
from
that
salt
drop-down,
that
we
show
on
top
of
the
roadmap
view,
which
is
combination
of
the
drop-down
itself
with
start
and
finish
dates,
as
well
as
sorting
direction
like
whether
it
is
ascending
or
descending.
So,
depending
on
what
user
is
selected
there,
we
use
that
logic
to
perform
sorting
on
client
side.
B: Basically, yes. I believe the original reason why we didn't do it on the back end was that if the user scrolls into the future and a certain start date comes up in the future, then chances are that that particular item might fall somewhere in the middle of the list instead of at the end of the list. And I recall, when I had a conversation with Pedro and Annabel while we were working on the sorting part, that we didn't want to append the results at the bottom of the list as they come in, because it would just feel out of place.
B
We
decided
to
do
the
insertion,
depending
on
where
exactly
it
falls
for
that
particular
salt,
because
if
we
were
to
append
the
list,
it
is
much
simpler
problem.
We
just
weapon
the
results
and
we
wouldn't
care.
How
timeline
looks
like
like
if
you
go
to
production
currently,
you
would
notice
that
epics
kind
of
the
timeline
of
individual
epics
look
kind
of
like
a
ladder
where
they
start
with
the
start
date,
and
then
it
continues
to
tilt
towards
so
end
it.
D: So I think the other option here is to keep it as is, matching the current API approach that we have, but on GraphQL. Essentially, if we're able to pull in all the epics through GraphQL, there's nothing we have to do right now on the front end, or on the back end if we choose to sort on the back end. And I know we ran into this issue with the epic tree around the hundred-item limit: did we do all the research to see if there's a way around that?
D: Okay, so it sounds like we should check that. We should create an issue, maybe a new issue, or move this conversation to the original issue for epics on the roadmap page, to determine which of the first two options we want to move toward, ideally in the future, but possibly this release if we have to. I think those are the action items from this, right? Let's check the limit, and then we'll figure out which option we want to select.
A: What we're calling a build board; like, Sergio has these as well. So it's the project management group on the current milestone, and then it's the workflow stages that sort of apply to individual engineers working on stuff. So, you know, there's "ready for development"; "planning breakdown" I've included as well, because I need engineering input; "blocked"; and on from there. So, to see what back-end engineers need to work on, they filter by back-end, and then they can pick things from "ready for development".
A
You
know
break
down
things
in
planning,
break
down,
there's
also
a
column
right
at
the
end
for
community
contributions,
just
to
sort
of
hide
those,
because
we
need
those
on
the
milestone
to
keep
aware
of
them.
But
we
don't
actually
need
anybody
to
assign
themselves
to
them
that
just
not
having
that
column
at
the
end,
just
moves
them
out
of
the
way
and
then
Donald
I
think
you
created
a
similar
board
for
plan,
because
front-end
is
still
one
team
across
the
whole
stage
that
right,
yeah.
D: That's correct. It's the exact same board that you have there, just for the entire stage, so it's a little bit easier for the front-end team, who do work on all three groups, essentially in kind of an order of specialty. It's a little easier for the engineers to look at just a single board as opposed to having to look at three different boards.
A: So we don't have great options there, and we're still discussing it in the issue: whether that means separate issues, whether that means we always use the lowest, like the furthest-left, workflow label, or whether we do something else. So we do know we need to address that; we haven't done it yet. Does anybody have any questions about that?
F: I would think the breakdown is where you then split it, because this goes back to what I know we're working on: having issues that can block other issues, and that type of thing. This is an ideal use case for why that feature, I think, is beneficial to our team as well, because in an ideal world, when you do that breakdown, that's where you then assign, you know, the front-end and back-end work, and those would be two issues underneath a parent issue or a parent epic, as against what we currently have. Yeah.
A: I think that might be the answer to that; I'm not 100% sure. Because, for instance, if you have two issues, it's quite easy (and this is again a problem we could fix) for someone not to realize: "I worked on the last issue in this epic, so once I'm done with this, the epic can be closed and we can move on." So yeah, probably worth bringing that up in the issue.
C: Let's see. So I was kind of wondering, not about the definition of done, because that's for merge requests, but: do we have a process documented anywhere where we define "delivered" for a milestone? Specifically for the kind of corner case that we encountered recently, where the thing was delivered, but behind a feature flag, and the feature flag belonged to a previous feature which was considered delivered even though it was behind the feature flag. Yeah.
C
So
there
were
a
couple
of
regressions,
which
complicates
things
a
little
bit,
but
I
was
wondering
how
this
come
up
before
and
when
do
we
consider
something
like
a
feature
delivered.
So
in
this
case
the
feature
was
delivered
in
time.
It
was
deployed
to
production
and
time
for
to
make
the
milestone,
but
was
behind
a
feature
flag
that
we
couldn't
turn
off,
so
nobody
was
using
it.
A: As far as I'm aware (and I'm not 100% sure; maybe Donald or somebody else has a better handle on how this works right now than me), "delivered" basically means it can go in the release post, if applicable. So, you know, if it was just a front-end refactor, then obviously it wouldn't go in the release post, but, you know, for something like this...
D
Unfortunately,
because
I
think
there
isn't
like
we
don't
have
a
Dutch
definition.
Company-Wide
like
that
I
mean
the
real
question
is
like:
when
do
we?
When
do
we
close
the
issue
and
I?
Think
as
we
start
using
feature
flags
more
and
more,
that
becomes
a
little
harder
to
answer
because
and
especially
as
we
kind
of
separate
the
collab
com
releases
from
our
self-managed
releases,
come
so
I
don't
have
a
better
answer.
C: Yeah, I think you hit on it there. It's two things: one is the feature being delivered, and the other is the issue being closed. This issue spawned two regression issues, but also a third issue to switch the feature flag on by default, so the next step was planned in three separate issues. So the question is: could we close this current issue as part of the current milestone, or do we consider it open until the whole feature is delivered?
A: I would say normally you can close it. The exception might be (and this might be an example of it) an issue where a lot of people are participating or watching; then you want to clarify when you close it: "Hey, this is closed because it's in this state; this is the follow-up issue where it will be fully done." Obviously it's better to be more explicit and more transparent in general, but especially in ones where, you know, people might be...
A: ...you know, a community member of the wider community or something might be saying, "Hey, you closed this issue, but I don't see this anywhere. Why have you closed it? It's not done." So yeah, I think it's reasonable to close it, if that's the state, and I think that's what you did, right?
F: This should really be a sub-epic if there are multiple things that have to happen afterwards: this portion is done, but the follow-on work is still there. I'm running into the same thing as I'm planning things out, because you kind of need to take the first step, but that doesn't really get you where you need to go; it's just the first step, and then there are multiple steps beyond that. It should be tied together somehow, so you recognize it's a progression of effort.
C: Yeah, I think the context is also different if you look at it from a product perspective versus the technical engineering perspective. From an engineer's point of view, you just want to close the issue: fulfill the requirements of the issue, then create follow-on issues and close that one. From a product perspective, it's: is it ready for the release post?
F: I think that's the joy of that, because you can then say: okay, this is done, we're closing that issue, and that issue blocked the following work. Now those issues pop up and can be worked one at a time, and you're closing out small units of work. In the ideal world, when you're planning on the engineering side, I would think you would rather have small units of finite amounts of work, and not these long issues that drag out indefinitely. It's better to break it down and be able to actually close things out.