From YouTube: Plan Team Weekly (2020-08-26) - APAC
B: Okay? Well, it's very simple! You just grab a header from the list, drag it, and it reorders. That's the demo. If it fails, an error message displays and it just reverts back to what it was before. Any questions?
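The revert-on-failure behavior in that demo amounts to an optimistic update. The following is a hedged sketch, not the actual implementation; `persist` and `showError` are hypothetical callbacks standing in for the real save request and error UI:

```javascript
// Optimistic reorder: apply the new order immediately, then revert to the
// previous order and surface an error if the save fails.
async function reorderHeaders(headers, fromIndex, toIndex, persist, showError) {
  const previous = [...headers]; // snapshot for the revert path
  const [moved] = headers.splice(fromIndex, 1);
  headers.splice(toIndex, 0, moved); // update the UI state right away
  try {
    await persist(headers); // e.g. a PUT to the backend
  } catch (err) {
    headers.splice(0, headers.length, ...previous); // revert in place
    showError(err);
  }
  return headers;
}
```

The key design point is taking the snapshot before mutating, so the failure path can restore exactly what the user saw.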
A: Iterations, got it. No, that's great.
C: So, yeah, we could probably move on to Alexis's stuff. I'll play Alexis today, unless someone else wants to go for it. All right. First off: "love it," she says to Flory, re: the next topic. All right, I'll just read what she has to say here: "I'm going to start pinging and bugging y'all more often for feedback. Please take a look at these issues and leave any feedback you have. Let me know if there's any other way I can be more visible about designing, or anything else I could do to better support you." So the first one she has here is around the program or epic boards, and the next one is around swimlanes and lists.
D: I'll offer a blast, I guess. Yeah, really, since there's not a ton of stuff on the agenda, I just never see or get the overlap with Simon or Flory or folks. So, I don't know, just seeing their faces is great, and I don't want this to end just yet, because it's only six minutes into our meeting.
A: It's not an add-on, necessarily, from a customer who may be landing with, like, source code management — they're actually intentionally choosing Plan over... or, sorry, intentionally using GitLab Plan over Jira. So we could pull up that issue, or maybe just drop it in the agenda, so we don't have to talk about customers in this video.
D: Yeah, I will try to find that and dig it up. I think it is interesting that we did, or are about to, win.
D: I think it was an 1,800-seat Gold deal, just because of Plan — portfolio and project management together, and everything — which was pretty crazy, because that's a very big deal in terms of ICV. And then there's also another potential opportunity for 1,500 Premium seats that wants to move away from Pivotal Tracker to use GitLab, and I'm meeting with them shortly to learn about that, because they have some concerns, and I understand where those concerns are.
D
We
don't
really
at
this
point
calculate
velocity
automatically
and
a
bunch
of
other
things
that
pivotal
track
does
nicely,
but
that,
coupled
with
the
blog
post
that
I
read
where
there
was
a
project,
a
product
manager
comparing
jira
with
git
lab
and
saying
they
opted
to
go
with
gitlab
because
of
the
single
application
value
proposition.
Plus
we
have
enough
functionality
that
they
can
use
instead
of
jira,
which
was
I've,
never
seen
anything
like
that
on
the
internet
before
so
that
was
a
pretty
big
deal.
I
think
we
shall
celebrate.
A: Yeah, that's why I thought it was worth bringing up. I linked an issue in the agenda that you all should take a look at, and it calls out, I don't know, a dozen to a dozen and a half customers where Plan is either what we're leading with, or a big component of the deal.
A: It's really cool to see, because it's validation. We've all been working super hard — y'all longer than I have — at building this section of the stage, and it's starting to get some really great validation that we're on the right track, and the work we've done over the past several months is starting to bear fruit, which is great. So take a look at that issue — and I wonder about the deal size of these customers that are coming in.
E: All right, just let me share my screen. Can everyone see? All right. So now, when we close the issue — last time we just hid the edit button. Now what we do is disable the button, like this, and then there is a tooltip. I want to know what you think of this: disabling the button and then showing the tooltip.
G: Most of the tooltips will still work on touch — they'll show up. It's not ideal for links, because then touching them activates the link, so you've got to kind of press and hold. So I'm not huge on tooltips in general, but most of them, on icons and stuff, will still work.
E: Also, on most of the edit buttons we use GlLink, but for health status and iteration we're using GlButton. So that's also a difference there, because with a button we can disable it, and that seems more, you know, semantic.
E: I don't remember exactly where I read it, but just this morning there was some document that outlined our strategy going forward, like the next year, and supporting mobile was one of the top items. I don't remember where it was, but yeah, I definitely read it.
A: Yeah, there are a couple of things floating around, like supporting mobile, and then a very generic bucket of machine learning and AI. But no one has spent — as far as I know, no one has spent any time thinking critically about what that really means. So we should follow up and see where that's pointing, and then we can, you know, follow back up with the team and see where we might contribute to it.
A: If there is a decision that we want to lean into mobile, the Plan stage is going to be impacted heavily, right?
G: Yeah, I think we've got a lot we can improve for touch screens generally, which kind of affects iPads and bigger devices. I've raised my issues with accidentally dragging stuff — I've accidentally moved a bunch of issues before, or related links and stuff, just because if you're dragging down, it also grabs cards, and you fling them halfway across the screen. Yep, I did that earlier today. Yeah, and you can never tell what you've moved; you just hope someone else will notice — yup, accidentally unassigned a bunch of issues.
D: I think, yeah — I would also... I'm now interested in moving more towards, if it's possible, doing a progressive web application, where you think about offline-first. With native mobile there are a lot of great things you can benefit from — and I've built lots of those apps before — but there's also something very appealing about being purely a progressive web application.
D: An architecture that works very well on native mobile as well as on the web. Same thing — also, Justin, since you're an advocate upstream: I'm very interested in getting into machine learning, and I have a plethora of things where I would like to apply that to our stage. So, yeah.
A: Like, I don't know — if you're looking at Jira, for example, they haven't invested very heavily in their mobile experiences, because they assume the same thing. I'm sure they see the same data we do, where most of our users are using desktop. That doesn't mean we shouldn't have a usable experience, though, so we should make sure we strike that balance. But for AI and ML, there are a lot of interesting applications that we should think about and consider for the future.
D: I added one other thing, since we have more front-end folks here than normal and I haven't seen Donal in a week. I don't think it was anyone on our team, but someone else — I can't remember who — updated the milestone select file, and it...
D: I think Kushal, like, saved the day with the fix, if I remember correctly. But I looked at that file, and it was pretty nasty, because it hadn't been touched in such a long time, and that one change the other person made basically resulted in three defects.
D
That
touched
the
bulk
edit
up
sidebar
the
issue
sidebar
the
milestone
select
within
boards
and
a
bunch
of
other
places
where
it
pretty
much
made
it
unusable
for
24
hours
or
more,
and
I
think
it
impressed
upon
me
the
fact
that,
like
it's,
no
one,
it's
no
one's
fault,
it's
just
like.
We
have
overlapping
technologies,
and
I
think
we've
made
the
decision
to
commit
towards
pajamas
and
refactoring
things
for
that
and
being
graphql
first,
all
of
which
I
wanted
to
support.
D
It's
it's
non-trivial,
so
I
really
wanted
to
prior,
like
I
would
like
to
start
prioritizing
getting
every
single
plan
feature
that
has
a
ui
element
switched
over
to
using
pajamas,
first
and
using
graphql
first,
so
that
we
have
like
non-overlapping
technical
solutions
and
can
like.
Basically
it's.
We
don't
run
into
this
situation
where
we
have
like
jquery
mixed
with
ajax
mixed
with
you
mixed
with
axios,
with,
like
all
these
other
things,
so
holly
was
kind
enough
to
start
the
spreadsheet.
D
If
anyone
wants
to
contribute
to
like
helping
us
audit
all
the
places
that
we
need
to
refactor
to
the
newer,
I
like
ideal
architecture,
whatever
that
is
engineers
can
decide.
I
would
love
for
y'all
to
fill
that
out.
So
we
can
spin
up
issues
and
and
prioritize
that
I
think
this
fits
nicely
along
with,
like
our
general
shift
towards
focusing
on
usability,
defects
and
and
performance
issues
are
just
as
much
usability
as
like
fixing
actual
ux
problems.
So
it'd
be
great
if
y'all
wanted
to
collaborate
on
that.
D: I don't know if it's worth bringing up, but at least with issues, we're also working towards the idea of having slots in the issue view, and turning each field into its own widget, so that you can reorganize it eventually — not now, but later on. If there is opportunity, when we have to refactor these things, to also take that into account, it's worthwhile. But I don't really care — as long as we get everything over to our current tech stack, I would be ecstatic.
C: Agreed, yeah. There was a decent amount of discussion around how to go about handling the implementation step of getting our front-end components over to Pajamas. I'm trying to find...
C: It does exist, and there may be — I think the first step in that was to do exactly what you, Gabe, and Holly are doing with creating that spreadsheet of places that we use all of them, essentially auditing them. So some of that work may be done already, but we'll find that issue and post it to the agenda.
G: I find that a lot of our existing feature specs and QA specs do stuff like create just one issue, and that's generally pretty small — which is partly just because it's easier to set up that way, and it does validate what we actually need. But for that example, we were loading a dropdown and checking that an issue was there, and the bug only appeared if you're trying to search and there are more than 20 issues, which we don't usually have in tests.
G
So
I'm
not
sure
if
there's
a
way
that
we
can
like
effectively
start
using
bigger
test
data
without
also
exploding
the
time
it
takes
to
run
the
tests
or
if
we
should
just
assume
that
stuff
will
reliably
work
with
multiple
pages.
But
it
seems
like
that's
a
kind
of
thing
that
can
like
break
easily
and
then
be
missed
by
tests
and
not
picked
up.
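The failure mode G describes can be shown with a toy example — not the real specs — where a dropdown search only ever looks at the first page of 20 results, so a fixture with one item passes while the bug only surfaces past 20:

```javascript
// Buggy dropdown search: it filters only the first page of results,
// so items beyond the page boundary can never be found via search.
const PAGE_SIZE = 20;

function searchDropdown(issues, query) {
  const firstPage = issues.slice(0, PAGE_SIZE); // bug: later pages ignored
  return firstPage.filter((issue) => issue.title.includes(query));
}
```

A spec seeding a single issue would happily pass against this function; only a fixture with more than `PAGE_SIZE` items exposes the bug, which is the tension between realistic test data and test runtime raised above.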
D: I will say please three times — with a yes before the three pleases. All right, how can I help? One of the things that has been most prevalent, in terms of a spreading bug — and I don't know where it ever started, or why — is the 20-item limit on dropdowns: you never match what you want, and you can never scroll further to get more of the things, to find what you want from the list.
D
So
yes,
I
completely
agree
with
that.
I
think
we
should
maybe
work
with
our
stable
counterpart
and
quality
engineering
and
see
if,
like
that,
that
is
a
way
to
do
it.
I
don't
know
enough
about
how
all
that
works,
but
let
me
know
how
I
can
help
and
support
it,
and
I
will
put
my
weight
behind
it.
C: With the front-end engineers at this meeting — yes, it's worse for, like, Kushal and Rajat. Now we have essentially no overlap, which will be fun to figure out, like I...
C: I met at one with Kushal today, and it was late for him, but it was 6 a.m. my time — my one-on-one with Kushal. My one-on-one with Rajat is at 4 a.m. my time now, tomorrow, which — I don't know if I'm going to make that one.
C: I normally have one day, or one night, where I'm working the opposite schedule to get that overlap, which is not too bad. Yeah.
F: Oh yeah, I mean, we're pretty used to it, because we've done it for many years. But yeah, we needed a bit of a change, because, you know, my partner and my daughter were kind of getting a bit sick of each other — I think they just needed a break from each other, which is fine; I mean, it's really typical. So I took over, so we had this morning.
F: We did a bunch of her algebra — she's starting, yeah, just some more maths — and then we did some cooking, because we cook lunch every day together. So she's learning how to cook, and also about science: things like, I don't know, absorption and caramelization and cooking science and stuff like that. And then I work after that.
F: But, I don't know, it's working. I mean, for myself, I guess I really decompress on weekends — I don't do anything.
F: Except that I was going to have a week off, or two weeks off, next month, and then, because of COVID, a whole bunch of stuff got shifted with their extracurriculars. The play that they're in for their drama class got shifted right into the middle of my time off, when I was going to take them on a road trip, so I've had to move it. So yeah, no rest for the wicked, or whatever.
D: Then for engineering, there's the "improve the performance of the top five used pages in GitLab by 25%" OKR, of which the issue list and the issue detail view are among them, and then there's also the "improve, you know, MAU" or whatever for product. I'm more interested in seeing if we can find the overlap between the UX and engineering OKRs: if we refactor things to Pajamas, per the point above, what does it...?
D
Is
there
an
opportunity
to
pair
that
with
wins
quick
wins
in
the
back
end,
so
that,
like
we
can
kind
of
like
have
a
compounding
effect
so
to
speak?
But
I
just
didn't
want
this
to
get
lost
in
the
either
because
it's
like
just
with
refactoring
things
pajamas.
This
is
equally
important
to
me
and
I
want
to
help
like
prioritize
the
things
that
will
move
it
forward
and
then
we're,
like.
I
don't
know
a
month
into
the
quarter
almost
and
given
cycle
times.
D
If
we
plan
things
for
13.5,
which
starts,
we
basically
have
one
release
now
to
plan
and
prioritize
this,
and
given
our
like
timeline
of
when
we're
supposed
to
have
that
done
by.
We
have
14
days
to
figure
out
everything
we
need
to
figure
out
in
order
to
create
the
issues
in
order
to
prioritize
it
for
13.5,
so
psa
put
love
collaboration
on
it.
H: Yeah, I've been spending a fair amount of time with the sitespeed tests on the issue detail page, and it's actually kind of interesting, because a lot of the time I don't see in sitespeed what... It seems like this metric of Largest Contentful Paint — which I had to learn about — is sort of the largest element on the page, when it's visible or interactive.
H
I
think
it's
like
the
new.
I
guess
one
of
the
new
standards
for
measuring
web
performance,
but
it's
kind
of
funny
what
chrome
ends
up
picking
as
that
element.
So
sometimes
in
in
like
an
issue
detail
it's
just
like
a
random
p
tag.
I
guess
because
it's
you
know
the
way
that
it
determines
what's
largest,
but
it's
not.
You
know
whole
it's
not
like
a
whole
element
that
we
can
specifically
target.
It
might
just
be
a
p
tag
in
the
description.
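For reference, the LCP candidate Chrome picks can be inspected with the standard `PerformanceObserver` web API. A minimal sketch — the injected `Observer` parameter is only there so the logic can be exercised outside a browser:

```javascript
// Observe Largest Contentful Paint. Chrome may report several candidates
// as the page loads (e.g. a paragraph first, then a larger image), so the
// last entry in the list is the current LCP element.
function observeLcp(onLcp, Observer = PerformanceObserver) {
  const observer = new Observer((entryList) => {
    const entries = entryList.getEntries();
    const last = entries[entries.length - 1];
    onLcp(last.element, last.startTime); // element may well be a bare <p>
  });
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

Logging `last.element` this way makes it easy to see which node Chrome chose, which is the surprise described above.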
H: But moving things up the waterfall — like the notes call. I think the discussions call is near the top of the waterfall, but the notes call is still pretty far down, so it's pushing the waterfall out a lot longer than it could be. I'm not sure what the actionable trigger to pull is to get it to be earlier, but it seems like that might be an easy way to — not to specifically focus on gaming the metric, but if we trust — oh, nice, Donal — if we trust that the metric is doing what it should be doing, then that might give us a healthy win on that front.
C
Yeah,
I'm
not
really
fixing
it,
but
I'm
adding
it
to
the.
We
have
a
we
just
recently
added
startup
js
that
you
can
explicitly
define
what
essentially
the
order
of
the
ajax
calls
you
want
to
make.
So
we
can
bump
up
notes
the
notes
json
higher
up
the
waterfall
than
it
currently
is.
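The pattern C describes can be sketched roughly as follows — hypothetical names, not the actual startup JS code: high-priority requests are issued from an inline script near the top of the page, and feature code later reuses the in-flight promise instead of re-requesting near the bottom of the waterfall.

```javascript
// In-flight startup requests, keyed by URL.
const startupCalls = new Map();

// Emitted near the top of the HTML, so the request starts immediately,
// long before the feature bundles have even downloaded.
function startupFetch(url, fetchImpl = fetch) {
  if (!startupCalls.has(url)) startupCalls.set(url, fetchImpl(url));
  return startupCalls.get(url);
}

// Called later by feature code; reuses the early request when one exists,
// otherwise falls back to a normal fetch.
function fetchWithStartup(url, fetchImpl = fetch) {
  return startupCalls.get(url) || fetchImpl(url);
}
```

The ordering win comes entirely from where `startupFetch` runs in the document, which is what lets a call like the notes JSON move up the waterfall.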
D
One
thing
too:
I
can't
don't
ask
me
to
look
for
the
issue
right
now,
but
it
was
related
to
some
epic
having
to
do
with
getting
gitlab
under
500
milliseconds
of
performance.
The
discussion
was
around
pagination
of
just
notes
or
whatever,
but
there's
a
discussion
which
then
fetches
the
notes,
but
really
like
didn't.
If
you
list
all
the
notes,
if
you
just
called
notes,
they
still
have
all
the
discussions
in
there,
so
you
don't
need
to
have
two
day
separate.
Recalls
two
separate
requests
to
get
all
the
notes
on
an
issue.
D: It wouldn't game it — it wouldn't really help the number. This is sad to say: it wouldn't help our OKR, but it would help the end user's perception of performance. If we did — what is it called? — optimistic fetching, where, basically, if you hover over an issue title on the issue list, we go ahead and make the request, so that way — what is it, prefetch?
D
We're
not
currently,
we
should
well
that
right
there,
like
would
probably
save
like
a
good.
You
know
sec,
because
the
the
thing
that
I
noticed
when
like
going
through
some
of
the
site,
speed
stuff
and
the
lighthouse
tools
was
like
noticing
that
the
initial
request
just
to
get
the
dom
back
to
start
loading.
The
issue
was
a
second
and
a
half
so
like
we
weren't,
even
we're
like
in
most
most
issues,
we're
not
even
starting
to
fetch
anything
any
data
for
the
issue
and
for
the
first
second
half.
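The hover-prefetch idea above can be sketched like this — an illustration, not the actual implementation — caching the promise so a second hover, or the eventual navigation, never issues a duplicate request:

```javascript
// Prefetched responses, keyed by URL.
const prefetchCache = new Map();

// Start fetching an issue's data as soon as the pointer enters its link,
// so the work overlaps the time the user spends deciding to click.
function prefetchOnHover(link, fetchImpl = fetch) {
  link.addEventListener('mouseenter', () => {
    if (!prefetchCache.has(link.href)) {
      prefetchCache.set(link.href, fetchImpl(link.href));
    }
  });
}
```

Navigation code would then consult `prefetchCache` before fetching, recovering the lead time D estimates at around a second.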
G: Jake, or Don, or anyone really — did we find any meaningful or reliable interactivity metric, other than just the paint metrics? Because I'm still of the opinion that that's, again, for end users — it's hard to get numbers on, but that seems to be where more of our wait time is: we get the first paint, and then the page freezes for 5 to 10 seconds while JS does stuff, and then you can interact.
G
So
I
understand
that
time
to
interactive,
I
think,
is
unreliable
for
measurements
or
hard
to
get
like
a
meaningful,
repeatable
metric
on
compared
to
paint
metrics.
But
it
still
seems
like
a
good
thing
to
be
able
to
keep
track
of.
H
I
don't
know
the
answer
to
your
question,
your
I
my
understanding
of
well.
No,
I
guess
I'm
wrong
on
that.
I
was.
I
was
thinking
that
this,
like
the
way
this
largest
content,
full
paint,
metric
is
being
recorded,
actually
would
be
interactable,
but
it
doesn't
appear
that
it
guarantees
that
so.
D
It
does
I
just
I
just
ran
my
house
and
like
on
an
epic
and
the
largest
contentful
paint
was
two
seconds,
but
the
time
to
interact
it
was
3.3,
so
it's
definitely
different
and
I've.
I've
noticed
at
least
through
lighthouse,
another
like
site,
speed
or
something
else
could
be
reporting
differently,
but
our
largest
content.
Full
paint
is
dramatically
slower
than
what
our
site
speed.
H: Yeah, there's this pretty dense Grafana dashboard, which I can drop a link to, that I'm trying to figure out — it actually appears to be showing no data right now, so I'm not sure what's going on there. But I know there's a way to get into the various timing metrics that sitespeed offers, and I think the interactivity stuff should be in there. It might be easier to get to just from the sitespeed interface, though, because this doesn't appear to be working at all.
H
Something
I
think
that,
like
tim,
is
doing
a
good
job
of
at
least
on
some
of
the
front-end
stuff
at
the
at
a
very
high
level
is
like
kind
of
like
making
stuff
easy
to
work
on
so,
for
example,
like
the
font
awesome
swapping
those
out
for
the
gitlab
svgs,
and
things
like
that
like
doing
a
very
good
job
like
bubbling
up
that
work
and
distributing
it
around
the
team,
so
that
that
it
gets
done-
and
I
think
we
as
ems
can
kind
of
try
to
do
a
better
job
of
getting
to
that
point
with,
like
the
individual
team
stuff
on
on
performance
stuff,
like
just
like
bubbling
up
the
very
easy
to
take
action
on
things
that
we
can
do
rather
than
having
like,
I
think
right
now
we
have
a
list
of
sort
of
like
nebulous
like
this
is
slow
type
issues
that
are,
you
know,
informative
for
finding
out
things
that
are
not
super
great,
but
not
super
like
it's
it's
hard
to
translate
that
from
like
that
issue
into
like
okay,
I'm
going
to
take
this
action,
it's
going
to
make
an
improvement.
H
So
it's
something
I've
been
trying
to
spend
some
some
more
time
on
is
just
like
thinking
about
how
we
can
actually
document
that
stuff
and
make
it
kind
of
just
easy
to
pick
up
what
you
finish,
your
you
know,
big
feature,
work
that
you're
working
on
you
want
a
mental
break
and
you
can
go
just
like
knock
out
some
sort
of
simple
action
for
improving
performance.
But
that's
so
that's
a
lot
of
work
to
like
get
to
that
point.
So
you
have
to
have
ideas.
That'll
be
great.
C: Yeah, I'm just reading through the product KR issue, and it looks like Tim did call out that it would be nice if we had a list of the top five items to solve, because once we know what that is, I think it'll be easier to actually create the issues for how to solve those things, and then being able to distribute that should be fairly easy.
C: As for the TTI stuff — that's the same conversation that kind of faded out two weeks ago, on whether we want to use TTI or some interactivity metric, as opposed to just the sitespeed score or the Speed Index.
C
But
we
should
probably
bring
that
conversation
back
up
and
see.
If
we
can,
I
like
I'm
okay
with
using
tti,
because
while
it
may
not
be
reliable
like
if
we're
measuring
it
enough,
you
know
and
we
can
get
a
baseline
and
we
can
see
an
improvement.
It's
it's
enough
of
an
improvement
on
something
that
matters
where
that's
more
important
than
really
having
a
super
reliable
metric.
There.
G
Yeah,
it's
yeah
also
a
like
good
realization
is
that
performance
isn't
just
a
single
number
anytime.
You
get
like
performance
stats,
it's
going
to
be
different
for
every
single
person
and
every
single
page
load.
So
it's
more
you
get
like
your
chart
of
here's,
the
range
of
performances,
that's
why
you
have
your
95.
So
whatever
else
you
aim
for,
I
think
75
is
what
the
web.dev
suggestion
is
like
75th
percentile,
but
yeah
we're
not
going
to
get
to
an
exact
number.
It's
still
just
like
a
spectrum
of
performances
in
terms
of
prioritizing.
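Summarizing a spread of page-load samples at a target percentile (p75 per the web.dev suggestion, or p95) can be done with a simple nearest-rank calculation — an illustrative sketch; real monitoring tools may interpolate instead:

```javascript
// Nearest-rank percentile: sort the samples, then take the value at
// rank ceil(p/100 * n). For p = 75 this is the load time that 75% of
// page loads were at or below.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}
```

Tracking `percentile(loadTimes, 75)` over time gives a single comparable number per day without pretending the underlying distribution is one number.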
D
And
as
we
load
a
lot
of,
I
don't
know
what
our
webpack
chunking
strategy
is,
but
some
of
the
stuff
that
I
saw
in
the
lighthouse
reports
showed
that
we
were
loading
a
bunch
of
unused
javascript
and
when
I
looked
in
some
of
the
basically
bundles,
they
were
things
that
we
never
ever
would
use
on
the
issue
page,
for
example.
H
Actually,
I
just
said
it's
kind
of
a
nude
question,
but
is
our
is,
is
the
webpack.config
that's
in
I
guess
it's
in
the
config
like
root
directory?
Is
that
sort
of
the
the
whole
webpack
config
for
gitlab,
or
is
there
more
magic
happening
somewhere
else
with
the
view
stuff
honestly
right.
D
I
haven't
written
one
in
like
a
couple
years
from
scratch,
but
when
I
read
that
one
I
was
extremely
confused
a
lot
of
things,
but
if
you're
reading.
D
Well,
I
wrote
enough.
I
read
enough
of
them
over
like
a
four
year
period
of
time
where,
like
it
almost
was
like
home,
and
when
I
read
that
one
maybe
everything's
changed
so
much,
but
it
was,
it
didn't
feel
like
home
anymore
at
all.
H: An older one, rather than a newer one. But I was curious — I didn't see much about explicit tree shaking in there; maybe I just wasn't finding it. I'm just trying to figure out how that's done, because I know there are a couple of different strategies with regards to, like, excluding — or, you know, chunking up your JavaScript — so I wasn't sure how it's actually being determined what gets loaded into what page, or if it is at all, or if it's just sort of...
G
Have
we
have
super
super
fine,
grained
per
page
splitting
for
stuff
in
the
pages
folders
and
those
are
like
borderline
too
small?
Most
of
those
are
like
1k
or
less,
because
the
page
specific
code
is
tiny
and
then
we
end
up
with
a
bunch
of
shared
libraries
like
main.js
is
still
huge
and
we've
got
like
common
and
vendor
and
some
other
ones
that
are
it's
like
it's
code
that
ends
up
being
on
90
of
pages,
so
it
ends
up
being
shared,
but
then
it
just
means
that
we
have
like
it.
G: If you go for excessive splitting — loading just the exact code you need on each page — then you're going to be loading the same libraries on different pages. So, going from, like, the list to the detail page, you're loading a lot of the same code twice; whereas if you put the shared code in a shared library, you just load that once, and it's cached the second time.
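The trade-off G describes is what webpack's `SplitChunksPlugin` mediates. A minimal illustrative config fragment — not GitLab's actual webpack.config, and the group names and thresholds are made up for the example:

```javascript
// webpack.config.js (illustrative fragment): pull widely-shared modules
// into cacheable common/vendor chunks instead of duplicating them inside
// every tiny per-page bundle.
module.exports = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/, // third-party code
          name: 'vendor',
          chunks: 'all',
        },
        common: {
          minChunks: 20, // only modules shared by many entry points
          name: 'common',
          chunks: 'all',
          priority: -10,
        },
      },
    },
  },
  output: {
    // Content hashes make long-term caching safe: a bundle's URL changes
    // only when its contents change, so navigating between pages reuses
    // cached shared chunks (modulo the frequent-deploy caveat below).
    filename: '[name].[contenthash].js',
  },
};
```

Tuning `minChunks` and the group boundaries is exactly the balance between per-page duplication and one huge shared bundle.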
D
Yeah,
the
problem,
at
least
for
com,
is
that,
if
we're
deploying
a
ton
of
times
a
day
and
we're
changing
that,
then
the
cash
breaks-
and
you
have
to
like
you
know
in
any
given
one
day,
depending
on
how
many
production
deploys
you
could
be
having
to
like
get
a
fresh
set
of
all
the
the
bundles
basically
several
times
it
is
hard
I
I've
been
through
this
before
so
I
have
empathy.