From YouTube: Plan | Weekly Team Sync - 2021-05-12
A: I think I have the first item, the refactor party, starting in 14.1. How do we want to self-organize? Let me give a little bit of context on the question before we open up discussion.
A: We are working on some UX iterations right now to think about any changes we might want to make to what we're coining the "clabject", which is the future epic/issue/requirement/test-case based view, and then we'd add some widgets and things on top of that for each of the different types of issues. So that's happening, and then there's also the refactoring epic that we started putting together with the key outcomes.
A: When we do that, the question is: are there other areas within our span of control, outside of the issue detail view, and maybe some other places, that we want to target for refactoring and add to our key outcomes as well? Because, more or less, as a product manager, I want to enable us to focus on paying down technical debt to increase our future velocity.
A: Yeah, we're doing that internally, so it was hard. Some of the designers were stuck on the idea of an epic or an issue, in terms of thinking it's the same thing under the hood. It will be the same thing, so we're just calling it a "clabject", but what we call it technically, in the API and things like that, would be the issue and the issue type hierarchy.
A: We might call it something different in the UI. It's never going to be called "clabject" publicly, but it's a good internal term.
D: Are these agenda items under "how do we self-organize as a team"? Are these just detail, and then...?
A: These are questions for the team; I'm opening it up because I have some perspective, but each engineer has other perspectives, and that's valuable. I think what I would like to end up doing is getting to the point where we have a series of epics and issues, grouped together, that represent getting to our key outcomes and that we can track over time. It can include other things that the engineers have flagged as important technical debt to pay down; it's not just the stuff that I've identified that will push the product vision forward.
E: Well, it seems like getting epics to an issue type is a pretty well-defined thing, so that's definitely one thing the product planning team can contribute. You asked about what other refactors would be important: it's becoming apparent to me that we have a pretty big (I wish I had a better term than this) blind spot for GraphQL.
E: We don't have the granularity in GraphQL, as regards observability, that we do with our REST APIs or our web endpoints, where we can see the amount of time that's spent in the database, the number of queries it made, the amount of reads and writes to Redis; all that kind of detailed information we can't get. And not only that, but because of the cardinality of using Prometheus to measure so many different GraphQL queries and mutations...
E: ...we can't actually even see requests per second for a particular query, which is a problem, right? So even the most standard stuff that we can get on our REST APIs, we can't currently get on GraphQL. I'm not sure how we solve this, because you can't change the fact that, in order for me to know how many times the createEpic mutation is being called, well, I can't do that right now, and that's a problem. So we'll need to solve that problem.
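E's cardinality concern can be made concrete with a small sketch (all names here are hypothetical, not GitLab's actual instrumentation): a per-operation counter whose label space is capped, so that tracking many distinct GraphQL operations can't blow up a metrics backend like Prometheus.

```python
from collections import Counter

class OperationCounter:
    """Counts requests per named GraphQL operation while capping label
    cardinality, so the metric stays usable in a backend like Prometheus."""

    def __init__(self, max_operations=100):
        self.max_operations = max_operations
        self.counts = Counter()

    def record(self, operation_name):
        # Fold operations beyond the cap into a single "other" bucket,
        # so an unbounded set of queries can't explode the label space.
        if operation_name not in self.counts and len(self.counts) >= self.max_operations:
            operation_name = "other"
        self.counts[operation_name] += 1

counter = OperationCounter(max_operations=2)
for op in ["createEpic", "createEpic", "updateIssue", "deleteIssue"]:
    counter.record(op)
# createEpic: 2, updateIssue: 1, deleteIssue folded into "other": 1
```

The trade-off is the one raised in the discussion: without some cap or aggregation, per-operation time series multiply with every new query and mutation.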
E: I think pretty soon, because at the minute we have zero visibility. I know from looking at the stage dashboard for Certify that there's very little in there, because all of the stuff we've built is pretty new and it all works off GraphQL.
E: Yeah, we can. The weird thing about Kibana, I noticed, is that GraphQL requests are logged twice. They're logged once with very little information, and then they're logged again, but the query name isn't indexed in that second instance, if that makes sense. So if you search by the query name, you get very limited information, but you get a reliable chart; and if you just filter by that query name, you also get a second record with all the data on time spent in the database and so on. Yeah, it's very strange.
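One way to avoid the split-record situation described here, sketched with hypothetical field names rather than GitLab's real log schema, is to emit a single structured record that carries both the indexed query name and the detailed timing fields:

```python
import json
import logging

logger = logging.getLogger("graphql")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_graphql_request(operation_name, db_duration_s, db_count):
    """Emit ONE structured record carrying both the query name and the
    detailed timing fields, so a log index such as Kibana can filter on
    the name and still see the expensive-request data in the same document."""
    record = {
        "operation_name": operation_name,  # the field you filter on
        "db_duration_s": db_duration_s,    # time spent in the database
        "db_count": db_count,              # number of SQL queries issued
    }
    logger.info(json.dumps(record))
    return record
```

With a single record per request, filtering by `operation_name` surfaces the timing data directly, instead of two half-useful documents.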
E: Yeah, no; I mean, we can only do it globally. We can only fix it globally. It would cost a lot more to fix it just for ourselves, but I think, as we start to think about things like scaling targets and so on, it becomes more important. Our stage dashboards, and even the process of getting SLIs in place for new features, if they're built on GraphQL, are incredibly hard to do. So maybe we're just hitting it first, or maybe we're not even hitting it first.
G: We talked to them about it in those stand-up meetings; they're all about trying to solve these long-term problems. But if we don't have observability or measurability as we move forward and create features, I think we're missing a fundamental step there, because if we're trying as a company to become more data-driven and look at scaling opportunities, this is going to pose a significant risk to us, more so than any of the tech debt out there, because it's an unknown unknown. We don't even know what things are doing.
E: So you can look globally at pressure on the database, you can look globally at reads and writes on Redis clusters and other non-horizontally-scalable resources, but it is global. You might be able to say, okay, we'll use Kibana to figure out what our really expensive actions are, but the more stuff we move into GraphQL, the harder it is to get that information. And so, if we're actually successful in being GraphQL-first, then moving more and more things into GraphQL without solving this problem means we're actually removing stuff from what's observable; removing information from the sphere of what's observable.
A: Yeah, I didn't realize we were at that early a stage of observability. But if that's the case, I think that's one of the most important things to solve, and I guess the question is: do we know how we want to solve the problem? That's the first thing; there's taking the time to do it, but do we even know how to solve it? And then second, I would look at...
A: Are there open source tools available for monitoring that work well with GraphQL? It's also worth exploring paying for external tools, like the observability suite from Apollo, the GraphQL company.
E: Yeah, I don't really have an answer at the minute. We're working on our stage group scalability dashboards, if you like, with Sean from Scalability, and we're just focusing on REST endpoints and Sidekiq workers, and then presumably we'll address GraphQL later.
I: So I think the question, on the engineering side, is this: we know the importance of observability; it's really now about prioritizing it, like the research work, which we should think about how we want to handle.
I: This is something where there's not a requirement to finish the issue redesign before we start working on it. So I'm wondering if we can say as a team that this is something we want to commit to prioritizing, and start doing some of the investigation now, in 13.12 or 14.0. Are we okay as a team with prioritizing this work?
A: That's more of a question for product. Basically, again, I'm not going to put anything new into the 14.0 planning issue that wasn't in the 13.12 planning issue, except for security or performance things that are new and that are deadly or bad. So I've pretty much signaled to my leadership that we are feature-frozen indefinitely; we'll see how long I can...
A: ...hold that up without getting a ton of pushback. But right now, because of everything else that's going on, it's a great time to be able to do that. So if this is the thing that we need to do, and this is the dependency before we can move to GraphQL-first safely, then yes, I'm totally great with it. I can't speak for Kristen and Mark; they have their own things.
G: Yeah, I agree with what Dave and Chris are saying. I know we're in the middle of trying to convert issues, or requirements, to issue types, and I know that work is ongoing. But aside from that, and of course the things that are already in flight, I am fine with stepping back and understanding what we need to do holistically so that we can move forward faster, because I think if we solve these underlying problems, we're going to be much quicker in the future.
I: I think I have the next part. As we're building out some of the other things that we're using GraphQL for, like filtered search and the list view, we are finding some things that we had assumed we got to when we were refactoring some of the other parts of the application that were in GraphQL, but that aren't quite done yet. So my question is: how do we want to handle...
I: How do we want to handle building out the GraphQL API? Do we want to sit down and figure out what we need from it before the refactor, and maybe start working on those pieces before 14.1? Or do we want to get more alignment between the frontend and the backend, and then build it out as we're building out features for the refactor in 14.1 and beyond?
A: I don't have a preference, other than that I would encourage us to think in terms of data graphs and not REST endpoints. That's part of the whole value of GraphQL: it enables you to richly express what you want to query, without having to get a list of IDs and then make a request for each ID to fetch the actual resource.
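The contrast A draws, one request per ID versus one query over the data graph, can be sketched with a toy in-memory backend (all data shapes and function names are hypothetical, not GitLab's schema):

```python
# Hypothetical in-memory tables standing in for a backend.
ISSUES = {1: {"id": 1, "title": "Fix login", "assignee_ids": [10, 11]}}
USERS = {10: {"id": 10, "name": "Ada"}, 11: {"id": 11, "name": "Lin"}}

REQUESTS = {"count": 0}

def http_get(table, key):
    # Stand-in for one HTTP round trip to the server.
    REQUESTS["count"] += 1
    return table[key]

def issue_with_assignees_rest(issue_id):
    # REST style: fetch the issue, then one extra request per assignee id
    # (the "list of IDs, then a request per ID" pattern).
    issue = http_get(ISSUES, issue_id)
    assignees = [http_get(USERS, uid) for uid in issue["assignee_ids"]]
    return {**issue, "assignees": assignees}

def issue_with_assignees_graph(issue_id):
    # Data-graph style: the client sends one query describing the whole
    # shape; the server resolves the nested assignees in a single trip.
    REQUESTS["count"] += 1  # one round trip for the whole query
    issue = ISSUES[issue_id]
    return {"id": issue["id"], "title": issue["title"],
            "assignees": [USERS[uid] for uid in issue["assignee_ids"]]}

REQUESTS["count"] = 0
issue_with_assignees_rest(1)   # 1 issue fetch + 2 user fetches = 3 requests
rest_requests = REQUESTS["count"]

REQUESTS["count"] = 0
issue_with_assignees_graph(1)  # a single request
graph_requests = REQUESTS["count"]
```

The point is the round-trip count, not the data: the REST path grows linearly with the number of related objects, while the graph query stays at one.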
A: So that's my only feedback, because there are varying levels of experience with GraphQL (I'm still learning myself), and I think that's where understanding the core principles matters; but in what order we do it, I don't really mind. I've also seen a lot of value in letting the frontend heavily influence the shape of the schema, if that makes sense, but I totally trust engineering to pick however you want to do it.
F: Yeah, just a generic thought: we would want to know sooner rather than later what the frontend will need from the GraphQL API. It might be better, to make sure that we don't hit some roadblock.
I: Yeah, so if that's the case, then for the most part we're probably going to have to wait until 14.1, because we don't know; the idea isn't to be at feature parity with the refactor from the legacy issue page, right, Gabe, Kristen, Mark? So there may be some things that we require from GraphQL that we don't know that we don't know until the redesign is complete.
A: I think that's part of the feature freeze: there might be some mutations, but the shape sort of accords with the refactor. We don't want to change things: we don't want to add any new features, and we don't want to change the data graph. So that's one of the constraints we put on ourselves; we're going to keep everything we have now.
A: We're just going to present it in a different way, so that may change how we query things, but I don't see why it necessarily would. That's just all I know right now; it could change.
A: Yep, that's the key thing that would change a little bit, especially because we basically want the same sort of parenting support for test sessions and test cases as for what is now epics and issues and all that other stuff, and then even requirements, being able to support multi-level requirements. So it should be fairly generic, working across just an issue type hierarchy, but that doesn't exist right now at all.
I: I don't know; for the things that we are requiring to be the same as what we have for legacy issues, like relationship management, should we add that? I did add it as a key outcome, but should there be an outcome saying that we want that stuff to remain the same, or...?
A: No, the relationships piece is an interesting thing. The first UX iteration we're working on covers sort of the description and discussions and stuff like that; then the next one we're going to be working on is metadata; and then in the third iteration we're working on relationships. This is basically my current thinking, and I think we would have to see.
A: But, more or less, there are lots of different relationships that need to be present; I'll share my screen real quick. We don't know how the UI is going to work for this, but there are, more or less, all these different things that people want to relate to issues, similar to what's happening with merge requests and vulnerabilities.
A: I think the thing that we're introducing in a little bit more generic way is taking what we have now with the epic, which is the parent-child relationship, and making that work within an issue type hierarchy, if that makes sense. So I don't know how that's going to impact UX necessarily, but I see that as something that has to be solved in order to migrate epics to the issue type hierarchy.
D: I'll just jump in there too, from yesterday. I hope Alexis doesn't mind that I'm quickly looking at this; we didn't look at it in our meeting, but the one part, like Gabe was mentioning, is the relationships. The fact that there are children objects within an epic is the messiest thing and the most heavyweight thing in the UI, and that's probably because we're not tackling it until the third design sprint; we're not really thinking about it.
D: Until then, I think that's the one that could inform engineering the most on this shift. So once that happens, once we start ideating and figuring out how it would all go together, I think it will bring a little more clarity around what we need to support. But from a constraint standpoint, it's going to be looking at "what are my children" and really focusing on the objects below me in the tree initially, versus showing the whole way up the chain as well, which gets a lot more complicated.
C: I think on the screen that you shared, Gabe, under the "types" heading, it was talking about the child-parent relationship and also the blocking/blocked-by relationship. Is that on types? Can we have a type that is blocking another type, or are those actually object relationships?
A: We do like this base discussion piece, because it would be natural, but the way that I'm thinking about it now, at least, is that a parent-child relationship lives within one work tree, while you can have multiple parents; that's the whole value of multi-tree relationships. There's an interesting open source app that tackles that pretty well, and I'll link to it. But yeah, let's start the discussion async.
A: That way we can get all the questions out there, and it would be great to have engineering leads on that too, because, more or less, we want to support multiple parents on things, which is going to be UX-challenging in and of itself, but we also want it to be performant. And so I think that's where thinking about what constraints we need to introduce would be helpful.
D: Well, currently, I think there are two concepts here. One would be the templated way that the hierarchy sets up, so you could have an epic above an issue, and maybe a requirement below an issue. So there's a hierarchy in terms of the objects that we'll use in the UI, or we may allow the user to set that up.
D: There's actually another type, which is just the loose object relation: this is related to these other objects, but it doesn't necessarily have a parent, and it doesn't end up showing on roadmaps; it's got more of a loose-level relation to another thing. I think that could go to any related types, but initially we're going to set up the structure where we're parenting on a specific tree, like epics to issues.
C: There is something to be discussed there, at least for me, to understand how that's going to work out.
D: I just pasted in that I need to rearrange my epics into this hierarchy. This is from our design sprints, in terms of our scope, but 1b is what I was saying: arranging them into the hierarchy, and then also creating the parent-child relationships between the individual objects that each have an ID.
A: Yeah, and I think I put this together because, from several customer conversations I've been a part of, especially with those customers that use requirements and test cases and all those things together for traceability, this is sort of what they end up with in some form or fashion. They're able to tie a test case to a parent test session; the test case is also tied to a requirement; a requirement gets tied to a user story that implements the requirement, right?
A: The user story is part of a bigger feature; that bigger feature might map to a higher-level requirement. Somewhere in there you have MRs that actually do the implementation, which you want tied back into your requirement and the user story that influenced it. At some point you might have an incident that triggers a bug that's part of this feature, that's part of this requirement, that eventually might go back to this test case, right? And so I think that's where it's...
A: I don't know how we're going to solve for it, but that's the level of traceability that our customers want in terms of relationships, and I think figuring out how to do that in a consistent, predictable way, from both the engineering and the UX side, is definitely going to be a challenge.
A: Under a-two, a-three; three-two, four-four; yeah, I'll just highlight it yellow.
D: I just made a note as well, at the end, that we should have a cadence of engineering meetings, just like we're doing with design, so we can start having these conversations more frequently leading up to the six-week mark.
D: Also, the engineers can attend the convergence meetings with design if they want, or you can watch the recorded videos. I suggest everyone at least watch the recordings so that everyone's up to speed. We're not necessarily moving all of it over (I just moved a big chunk of content in from our design sprint meetings), but we'd like everyone to be aware, because it's going to be such a critical point in time for how we build this architecture up.
F: Consolidation, yeah. I realized I probably put it in the wrong place in the document, but my question is: are we focusing on the parent-child component because of epics specifically? Because epics are blocked by the project/group consolidation work, which is definitely not short-term work, so I don't see us converting epics into an issue type in the short term. So if this would be the major reason for working on this, we might perhaps deprioritize it in favor of something else.
A: So I had an update down below, I think on point C: we had a meeting to kick off the workspaces work, merging groups and projects, and it's going to start moving forward. It's got, I think, a DRI engineer from Access now, or they're looking for one, and they're going to take the lead. But the first thing they're going to work on, in addition to the architecture blueprint, is getting issues to the group level first, so I'm sure at some point we're going to need to contribute to doing that.
A: But once we have issues at the group level, then I think it sort of unblocks all of this stuff: specifically, migrating epics to issue types, having a consistent planning clabject at the group and the project level, and then allowing multi-level requirements and multi-level test sessions at the group level. So there are two ways, I guess, that we can approach this.
A: We can build this out at the project level and work on the hierarchy and the relationship between types of issues at the project level, because theoretically, once we do that and we get to the group level, however we end up solving it, it's without duplication. I guess that's where we just need to figure out: are we going to do the shadow thing, or are we going to do the containable one, right? But hopefully we'll have some clarity within the next four to six weeks on how we're going to tackle that.
A: You're right, though, and I think it is important, and we need to make sure that whatever we do works and gels well with that, whether that means us focusing on pushing that forward first, or having some engineers from our stage contribute to it while we also move some of this along and connect the work streams together in the future. But that's a huge thing, and it's good that you called it out; thank you for that.
I: Yeah, me too. I think we should probably move on to some of the other stuff, so we have time to talk through some of the issue-specific discussion and the demos too. Gabe, yours can be read-only; Kristin, I think your point B can probably be read-only too. So yeah, let's move on, if that's okay with you.
I: Let's go to b-four... oh wait, is that what you're talking about? Okay, let's go with the one beyond A first then, yeah.
C: For me personally, I think I would not want the counts cached, or rounded so to say, on the issues list page if there is any kind of filtering. So if I filter down by a label, an author, or anything, I would really want to see a more exact number. But when I go to, say, the GitLab project's list page and I see 50,000 issues or something like that, that's not as important, as you said.
C: Does that help with the performance problem, or do we want the caching for filters as well, just to fix this thing? From a UX perspective, I would think that when you apply some filtering, you'd want to see some sort of feedback: how many issues you have there, how many things you found. That's how I feel, anyway.
F: Yeah, I wonder if filters would be used more on smaller projects than GitLab, because there it's quite common. On my personal projects I don't use any filters, because of the small number of issues, but then the counts would not match there either. So maybe we could cache only above some threshold, like 1,000 or something, yeah.
A: Well, I also dropped in a link to some things that I was playing around with, in my non-engineering way. Right now we get all the issues across all states, open and closed, to show the counts for each, and I was curious whether we could change the UX so that we showed just the count of open issues when you're on the open tab, and then, if you clicked on closed, you'd see the count of closed; that way we're not querying counts for all states.
A: Basically, right now, a lot more issues are closed than are open. So that's one thing we could do: still show the number on the active tab, or only when you click on it. The other thing I was playing around with is whether we can simplify the query a little bit, thinking about what happens if we don't filter by issue type and just show all types of issues on the initial load...
A: ...and then basically only show things that are open. The query was really, really fast when I was playing around with it in the database lab. So that could be another thing to look at too: simplifying the UI and removing counts where we don't really need a count because we're already in that context, and/or simplifying the query a little bit.
F: Useful, cool, yeah. All right: if you have a minute, would you mind giving feedback on the issue or on the merge request? What are your thoughts about this from a UX point of view? Cool, thank you very much.
E: Yeah, and what are the circumstances under which the count would be inaccurate? I know we took into account that we bust the cache when common actions are taken, so I'm assuming that uncommon actions might cause it to be inaccurate for a while, like large imports, or...?
F: So currently there are quite a bunch of situations; I listed the ones I know about in the comment I linked from the document. But it would be any time when, basically, two different users access this page and the two of them have different permissions in subgroups; then they would see inaccurate numbers. And whenever you close an issue, you would see an inaccurate number too, for the next hour, or until the cached count expires.
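The behavior F describes, exact counts for small result sets and hour-stale cached counts for large ones, can be sketched as follows (a simplification with hypothetical names, not GitLab's actual caching code):

```python
import time

CACHE_TTL_S = 3600       # a cached count may be stale for up to an hour
EXACT_THRESHOLD = 1000   # below this, counting is cheap enough to stay exact

class CountCache:
    """Returns exact counts for small result sets and hour-cached counts
    for large ones, mirroring the trade-off discussed above."""

    def __init__(self, now=time.monotonic):
        self._now = now               # injectable clock, for testing
        self._cache = {}              # key -> (count, expires_at)

    def count(self, key, count_fn):
        cached = self._cache.get(key)
        if cached and cached[1] > self._now():
            return cached[0]          # possibly stale until the TTL expires
        exact = count_fn()
        if exact >= EXACT_THRESHOLD:
            # Expensive count: remember it, accepting staleness for an hour.
            self._cache[key] = (exact, self._now() + CACHE_TTL_S)
        return exact
```

Closing an issue in a large project then shows the old cached number until the entry expires, which is exactly the inaccuracy window being discussed.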
E: There would be many more, of course. The one thing I would say about that, though, like I mentioned in the doc, is that there are a couple of facts here. One is that these counts are really expensive relative to their utility, in my opinion. Second, I believe we only cache north of 1,000 issues, and the question again, for me, is: is it useful?
E: I mean, if I close an issue in a project with 33,484 open issues, do I mind that that number is inaccurate for an hour afterwards? Maybe not. But then, if that's the case, why do we even do the counts in the first place? Why not just say "more than one thousand issues", and then maybe make the count async or something, yeah?
F: Yeah, I think you might be right. Personally, maybe it would be ideal to have just a round, estimated number, but it would be cool if the user could still ask for an exact count on demand if needed, which might be handy for some filtering of issues matching certain conditions. But yeah, I don't know.
D: The next thing: I think I've mentioned this to you on a few other calls, but I wanted to thank the team, everyone, for contributing and getting their tracking up. I linked to our GMAU dashboard. We do have... if I look here, I'm just going to share my screen: underneath "project planning", if you scroll down, there are the actions; we now have GMAU users interacting with epics, and the data is coming through for nearly all of the trackers.
D: So thank you, everyone, for that. That's going to contribute hugely to our MAU numbers.
D: I couldn't find it in the main issue (I'm off the issues now, but I can work with the team), and there are just a couple of discrepancies. And then a question I have: the next thing I would like to do is to have the exact same stats for event-level counting versus unique users. So is the right way to do it to take the current issue, make another table, and have a column on the end that counts the individual events, versus deduping them down to users?
D: And just to be clear, so everyone understands: currently we can see how many people are, say, adding epics, but if I added a thousand epics today, I'm only going to be tracked as one, because I'm one unique user. So I want to see the relative usage of each of our features by converting, or adding, another event count to each of these different metrics.
E: It's deduped? Interesting. So if we have, you know, 7,500 today, that could actually be much higher; 7,500 epics reported as created could actually be much higher, because everyone just counts as one, right?
D: Yeah, anything that has MAU in it just counts as one user; that's the way it works. And so we would need an actual action count, one that isn't deduped, to be able to see the relative feature usage of our features.
A: Yeah, it's just the normal Redis counter; I don't think it's a Redis HLL counter, so I think they just have the increment method.
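The distinction discussed here, a deduplicated per-user metric versus a raw event counter, can be sketched like this; a Python set and an int stand in for what Redis would do with a HyperLogLog (PFADD/PFCOUNT) and a plain INCR (metric names are hypothetical):

```python
class UsageMetrics:
    """Two counters over one event stream: a deduplicated per-user metric
    (in Redis, a HyperLogLog updated with PFADD and read with PFCOUNT)
    and a raw event total (a plain INCR)."""

    def __init__(self):
        self.unique_users = set()  # ~ PFADD epic_created_users <user_id>
        self.event_count = 0       # ~ INCR epic_created_total

    def track_epic_created(self, user_id):
        self.unique_users.add(user_id)
        self.event_count += 1

metrics = UsageMetrics()
for _ in range(1000):
    metrics.track_epic_created("dave")  # one user creating 1000 epics
metrics.track_epic_created("alexis")
# deduped users: 2, raw events: 1001
```

This mirrors D's example: a thousand epics from one user register as a single MAU, so relative feature usage only becomes visible once the non-deduped event count exists alongside it.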
E: Yeah, I think we're at time, but I figured, with three people on the call who might know: is forecasting enabled on Sisense globally? I'd really like to use it for scaling targets, but I can't find out whether it's just disabled for me or disabled globally. Do you know?
Interesting
because
they
built
in
some,
they
say
it's
ai,
based
forecasting,
but
I
think
it
really
just
picks
the
most
likely
forecast
from
a
set
of
probably
polynomial
exponential,
different
algorithms
right
I'll
have
to
have
the
page
for
it
here.