From YouTube: Application Performance Session 2020-09-28
Description
The first GitLab Application Performance Session:
* What we measure (LCP + Co)?
* Sitespeed Measurements at GitLab
* Investigations on Snippets / Text View
A: Perfect, because this is the second reason why I'm doing this: I hate talking to myself on a Zoom recording. So I wanted to do this in a session, so we can record it, save it for later, and put it on YouTube for other team members who don't have time. Thanks everyone for joining, and thanks for joining the performance session for the first time.

A: Why did I choose the funky name "session"? Because I really want to have this as a hands-on thing from here on.

A: The same goes for the other tools that we are using, like measuring with the Chrome DevTools. We get all those introductions onto video, and then hopefully we convert more and more over time to being hands-on, to discussing the bigger initiatives and how we are doing there, and whether someone has seen something that we can do. But let's really get started from here.
A: So if you have questions, put them in the channel, put them in chat, and I will try to answer them right away during the session, so that I don't run off somewhere while not everyone is able to follow because I'm already way down in the topic. So let's get started. Where are we at GitLab with application performance? I would say we always had a little bit of an eye on it, but we never had a focus on it.

A: What we really wanted to do, especially in this quarter and the last quarter of our fiscal year, is to have way more focus on improving that performance: really working on it, really investing in it, and really taking it from the ground up. That's exactly where we are. We are at the stage where we have an open source solution which has grown over time and which, especially in the frontend, has collected a lot of cruft.
A: That's what we are trying to clean out, and we are really at the beginning of it. I always say that performance improvement sits somewhere between Formula 1 tuning, where you measure and just make small changes here and there, and pulling something out of the mud with a tractor.

A: I would say we are rather on the side of pulling something out of the mud with a tractor. But what we have also seen, especially over the last couple of months, is that we are improving steadily and the numbers are decreasing. At the moment we are, in reality, 100% focused on loading performance, and I really foresee that we are making a curve over time: as soon as we have loading performance figured out and have really improved that part, we will get more and more into the area of interaction performance, the performance of things that load afterwards, the really complex performance work.
A: So there is a lot of complex stuff in the future; right now we are more in the basics. What we have done and what we have aligned with (and that is really the first topic, the intro to Web Vitals) is that we have aligned much more with the standards that came around in the last 12 months.

A: What we have done in this quarter is really that we have moved over and are now focusing on something called the Largest Contentful Paint, which is one of the Web Vitals as defined by Google (see the link at the top). They define three different Web Vitals. One of them is the Largest Contentful Paint, which is really a very basic thing: it simply says when the largest element on screen finished painting.
A
They
also
define
on
top
of
that,
something
which
is
a
little
bit
below,
which
is
the
cumulative
layout
shift,
which
is
simply
how
much
does
the
page
still
move
around
after
it
has
been
loaded,
because
this
is
especially
has
a
high
impact
on
perceived
performance,
which
really
means
that,
as
long
as
everything
is
jumping
around,
you
never
have
the
feeling
that
something
is
really
fast
and
the
third
one
is
the
first
input
delay
which
is
really
around.
How
fast
can
you
interact
with
the
whole
page?
A
So,
for
example,
how
fast
can
you,
in
a
new
issue,
page,
go
in
and
click
and
type
something
and
basically
measuring
those
three
main
things?
What
we
are
focusing
on
and
that's
going
back
on
where
we
are
currently
in
this
phase
of
performance,
improvement
and
measurement
is
really
that
we
are
focusing
on
loading
performance.
We
are
focusing
on
this
largest
content
for
paint,
because
we
need
to
start
somewhere
and
it's
the
most
and
generalized
number
that
gives
us
a
good
insight
on
the
combination
combination
between
front
and
back
end.
A
On
top
of
that
to
have
a
better
measurement
and
separation.
We
are
also
very
focused,
of
course,
to
time
to
first
bite,
which
is
simply
how
long
did
the
back-end
take
to
render
something
to
the
browser
like
the
first
html
and
that
we
have
a
better
understanding?
A
Is
it
rather
in
the
back
end
or
in
the
front-end,
and
have
exactly
set
those
goals
to
get
started
with
my
screen
sharing,
I
will
show
you
also
that
you
have
a
better
understanding
which
is,
for
example,
the
project
home
page,
and
if
you
do
there
in
the
chrome,
dev
tools,
you
do
a
measurement
which
it's
called
in
the
performance
tab
and
there
you
have
now
a
marker,
which
is
this
lcp,
the
largest
content
for
paint.
A
And
if
you
go
down
here,
you
see
the
related
notes.
So
this
is
really
the
browser
is
looking
at
everything
that
comes
to
the
screen.
At
some
point
it
will
say:
okay,
the
biggest
element
on
screen
is
this
div
dot
commit-
and
this
was
basically
on
screen
and
didn't
change
anymore
at
the
time
stand
of
1.2
seconds
and
that's
what
we
are
focusing
on,
and
this
is
of
course,
a
combination
of
everything
that
is
on
the
back
inside
everything
that
is
on
the
front
and
side
during.
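The div.commit example can be sketched as a tiny model of how the LCP candidate is chosen. This is a deliberately simplified illustration, not the real browser algorithm (which also tracks element removals and later size changes); the element names and sizes below are made up.

```javascript
// Toy model: among all elements painted during load, the LCP candidate is
// the largest one, reported with the time it finished rendering.
function largestContentfulPaint(paintEvents) {
  // paintEvents: [{ element, size, renderTime }]
  let candidate = null;
  for (const ev of paintEvents) {
    if (!candidate || ev.size > candidate.size) candidate = ev;
  }
  return candidate;
}

const events = [
  { element: 'nav.sidebar', size: 12000, renderTime: 0.4 },
  { element: 'div.commit', size: 91000, renderTime: 1.2 },
  { element: 'img.avatar', size: 1800, renderTime: 1.5 },
];
console.log(largestContentfulPaint(events).element); // → 'div.commit'
```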
A: Good. The other thing is that there is a huge difference in how you can measure. You can of course measure with different settings on the connection, and you can measure with different settings on the CPU. There is a huge trend, especially around websites, to measure mobile-based, meaning you measure everything mobile-first so that it loads really super fast on mobile, which is not really our first and foremost target at the moment.

A: So something that we are still doing is measuring everything with a cold cache, which means nothing is pre-cached. This is like any user that hits exactly that URL for the first time, goes in, sees it, and that's it. We are most probably converting this next quarter. The Quality team, for example, is already measuring with a hot cache, an initialized cache, which simply makes the situation and the measurements a little bit more realistic.
A: Why is this important? Because at some point, or to some extent, we are already getting into the area where we would rather be optimizing for the statistics than for the real user experience. So we are already measuring the five most used routes with a hot cache as well, in comparison, so that we have a better understanding: are we just improving to get better statistics, or are we really improving?

A: Those mobile-first trends are not always applicable, as we are rather an application than a website. To get a better understanding of where we are, what our numbers are, and where the split is happening at the moment: for example, a very fast page on the backend side is the issue list. We have a 563 millisecond Time to First Byte there, which is what we recorded this morning, until we have the First Contentful Paint.
A
So
the
thing
that
I
just
showed
you
before
it
takes
on
our
measurement
instance,
which
is
a
gcp
instance
with
a
little
bit
of
a
lower
cpu
it
takes
and
until
1.7
seconds.
This
means,
in
reality,
that
we
have
1.1
seconds
of
global
front-end
initialization
at
the
moment,
which
is
important,
because
this
is
exactly
this
huge
pile
of
legacy
stuff
that
we
are
currently
trying
to
reduce,
which
is
a
huge
bunch
of
a
big
pile
of
css
and
a
big
pile
of
javascript
that
already
got
reduced
by
a
lot.
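The arithmetic behind that split (563 ms Time to First Byte, paint at 1.7 s, so roughly 1.1 s attributable to frontend initialization) can be written as a trivial helper; the function name is illustrative, not part of any GitLab tooling.

```javascript
// With a measured TTFB and a measured paint time, the remainder is what the
// frontend spent initializing before anything useful was on screen.
function frontendInitMs(ttfbMs, paintMs) {
  return paintMs - ttfbMs;
}

console.log(frontendInitMs(563, 1700)); // → 1137, roughly the 1.1 s quoted
```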
A: The biggest thing that we are currently working on is really reducing this mountain of CSS: excluding everything that is not needed on the specific page we just loaded, getting rid of old classes, and getting rid of the really bad things that are happening there. That is our biggest focus. We are still loading, I think, 1.3 megabytes of already-optimized CSS, which is way too large, and that is currently our biggest target.

A: But right now we have easier targets, and the good thing about it is that everything we improve there, every investment we make to reduce it even by one line, means a global improvement in the end. Everything that reduces the CSS size (and we have, for example, also already reduced the JavaScript a lot) has a direct impact on all of those pages. Good, that was part one. Any questions?
A: What is Sitespeed? Sitespeed is a collection of a couple of tools that are put together to really measure real-world performance data. It also collects from some other data sources. One, for example, is the so-called Chrome User Experience Report. We are able to pull in data from there, which comes from people browsing with Chrome who have data sharing activated.

A: That tells us how fast it is for them, how fast it is on average, and all those topics, which is one thing. But the main thing is that we have Sitespeed running as a Docker container, currently around every three to four hours, and we have a list of URLs that Joshua has collected, especially from Product. Those are all the URLs that we are currently monitoring while not logged in, and there is also another txt file that defines all the URLs that we are able to measure with logged-in users.
A: We have a special user; just give us a ping in the research performance channel and we can help you with that, and it can also test stuff that you can only see when logged in. You can also see here the official name of the URL in our monitoring. You see those names a lot in the different dashboards, and at some point, if you don't directly see which page this really is, you can either look it up here or go into a detailed report and see which URL was actually monitored. And the other thing: tell us if you have a route that is not covered.
A: So that's the setup, and we are running it on a Google Cloud instance in the US. I think it's in the same region as a lot of our servers at the moment, which means that the geographic factor is not too big. We limit the data connection to cable; it's not a very strong machine, so it's also a little bit limited in its CPU, which is good. It's not like mobile, but it's somewhere in between, and it is piping recordings every three to four hours to our dashboards.
A: If you take a look at the dashboards (and we have a couple of them), you already see quite a lot. Luckily, Sitespeed has just updated them recently, so it's much easier and much clearer to see what is good and what is bad, and that already helps a lot. As you can see, our JavaScript is still too big. Our CSS is still too big. It's shrinking, but we still need to crunch it down and get it lower. The other thing is what you can see directly here.
A: Here is the so-called LCP that we are measuring as it comes in, so you can also limit the graphs to it. The other thing is that we are collecting a lot of data; these are our engineering dashboards and they can help you a lot. There are tons of other data points.

A: You can find all their definitions in the Sitespeed reports, and they can be very important for figuring out problems. Most of the time, currently, we are not chasing problems but rather low-hanging fruit that we can change to improve overall. For example, in the last quarter one of the biggest targets was definitely the change to the first paint, getting something on the screen faster. Let's see the last 30 days; I think the change already ran out of that window.
A: If we put this to the last 90 days, then you can see it clearly. You can see it even better on the explore page, which is one of my favorite ones to measure with, to some extent, because it doesn't include too much, so it's much clearer, especially for the frontend renderings, to see what's going on. What we see here, for example, is the huge drop that we made some time ago.

A: That was at the beginning of August, when we first activated startup CSS and were able to drop the time until something was painted on screen by almost half, sometimes even more. And that's what I mean: it's not about one number, it's always about the overall user experience, but we have a clear focus right now on LCP and on getting that improved.
A: What else do we have in this setup? Something more interesting, which is the desktop cached profile. That's what I meant before: we are also running all measurements with a cached service, meaning we basically visit another URL, the explore URL, before we actually hit the URL that we want to measure. I will turn this back to 30 days. We are even monitoring a couple of different domains: not on the desktop cached profile, but on the desktop one we are also monitoring dev. So if you want to test something directly on that, that's also easily possible. Then under "page" you have the different pages. As said, for connectivity we have only cable at the moment, as otherwise we would need to run even way more runs; being very focused on cable only and desktop only keeps it much simpler. What else do we have here?
A: At the bottom you also have something quite interesting, which is exactly what we are currently trying to reduce: the CSS size. As you can see, we have been able to reduce it over time. We started with 1.7 megabytes content-wise and we are by now at 1.3 megabytes, but I still believe we can go way below one megabyte if we do this right and with the right things.

A: The even bigger thing around here is that you can see runs. I will now limit this to seven days, because otherwise my machine will crash if I'm doing Zoom at the same time. There is this little checkbox, which is a huge help: if you click on "runs", you suddenly have some markers, and you can open each individual run's data and see all the data that is included there.
A: You can take a look at everything that happened in there, anything that Sitespeed has recorded, which is, I think, 200 data points and a lot of information.

A: One of the things you have here: we are always doing two runs and taking the median, so that we have a better understanding, and we still have a lot of spikes going way above the scale, which is also interesting, because then you can research what exactly happened there. I said lots of data: you also have the possibility to take a look at the waterfall.
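Taking the median of the runs, as described above, is simple enough to write down; this helper works for the two-run case and any larger number of runs.

```javascript
// Median of a list of measurements (e.g. LCP values from repeated runs).
// For an even count, the median is the mean of the two middle values.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

console.log(median([1700, 2100])); // → 1900 (two-run case)
console.log(median([3, 1, 2]));    // → 2
```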
A: You can see when the backend came in with the Time to First Byte, which in this case was at 668 milliseconds, and then the whole frontend waterfall happened until we hit, around here, the Largest Contentful Paint at 3.4 seconds on the issue detail page. What else do we have? The metrics, which are also really interesting, because you have a couple of different data points, starting from the LCP to how complete the whole page is, how ready the whole thing is.

A: You have the video, and this, for example, is a pretty interesting one. It doesn't look fancy, but it can give real insights, because you can slow it down and see each of the individual things as they load and how long they took to load, and that's a great tool. And then, something practical that I will show at the end: how we, for example, figured out what was going on with the issue detail page and how we improved the LCP there.
A: No, I think we are measuring completely different URLs when logged in versus not logged in. The thing with the login is also that we only have test projects, as we wanted one specific user that basically just has test data in there, so if anything happens to the monitoring etc. with that login data, it's contained. But we are not comparing logged in versus not logged in at the moment.

A: Good. Another really interesting thing is, again, the LCP here and the filmstrip, which also shows you exactly the markings again and what happened in each individual step. For example, one of the things with the issue detail page, and why it measures so long, is that the tooling doesn't register right away that we are already doing real-time updates.
A: Perhaps we are even doing them way too early, because we sometimes started the real-time updates before everything was fully loaded, which also means that we are already doing a lot of calls. This means the recording thinks, "hey, there is still something loading," and doesn't declare the page fully loaded earlier.

A: But it can also give you good insight, especially around what you could save with better images and things like that. Page X-ray is also interesting, to really figure out if you have huge content. There is definitely a big difference between cached and non-cached here, because in the cached world a lot of those loading impacts, things like our icons etc., are already in cache and don't have that kind of impact anymore.
A: You also see the cache headers, which is important, because there we also had some things that were, in reality, requested on every call, and I needed to do some more setup around that. CPU is also very interesting, because you can really download the trace log and load it into your Chrome DevTools to see the timings of the different steps that happen.

A: The third-party part is really a website thing: how much we load from other domains. Lighthouse is also really interesting; Lighthouse is a performance tool that you also have in your Chrome, and this is an additional monitoring. It is already quite targeted; that's what I mentioned with the whole trend of doing mobile rather than desktop.
A
This
is
already
much
more
to
having
everything
mobile
first,
but
it
can
give
you
also
a
lot
of
insights
and
especially
insights
around
stuff
that
we
can
improve
and
also
figure
out
how
what
we
could
save
so,
for
example,
reduce
javascript
execution.
Time
is
a
huge
thing,
but
always
remember
that
this
is
really
like
fully
limited
to
completely
a
very
slow
cpu,
but
remove
unused
javascript,
for
example.
So
this
is
basically
telling
us
what,
in
our
packages
that
we
have
actually
loaded
was
never
touched
by
execution.
A
So
this
showed
us,
for
example,
a
couple
of
weeks
ago,
still
a
lot
of
high
numbers,
what
we
could
say
from
the
main
javascript
bundle
that
was
still
then
I
think
1.1
megabyte
is
now
at
600
kilobytes
and
we
hopefully
get
this
even
lower
in
the
near
future.
The
biggest
step
in
their
jquery
and
jquery
is
something
that
you
can't
remove
that
easily
because
it
all
the
old
bootstrap
stuff,
everything
that
we
still
have
from
all
the
days
in
there
is
directly
connected
to
it.
A
Good
one
more
thing
that
I
just
discovered
a
couple
of
weeks
ago
is
what
you
also
have
in
here
is
the
waterfall.
And
if
you
click
here
on
the
compare
button,
then
you
are
able
to
to
upload
two
hard
files,
and
that
gives
you
a
really
nice
comparison
between
all
the
numbers
of
those
runs.
So
you
can
see,
for
example,
if
the
numbers
went
up
or
down
what
actually
improved
in
the
whole
waterfall
in
the
whole
setup.
A
We
also
have
here
the
x
request
id,
which
gives
us
the
possibility
to
go
into
our
log
files
and
really
take
a
look
at
that
specific
call,
and
we
also
have
the
x
run
time,
which
is,
I
believe,
the
time
we
spend
in
our
rails
part.
So
this
gives
can
give
you
also
a
much
better
insight
and
we
can
basically
follow
up,
especially
if
we
have
high
time
to
first
bytes,
etc.
A: So the question was: why are we recording six point something seconds when, in reality, everything that you should see, or that is measured in the end, is already there at three seconds, which is half of it? I'm not sure if you can still see it in this video, but two weeks before that we were really seeing a big flash in between during the video, and this gave us the hint to research in more detail what was going on on the issue detail page.
A
What
happened
here
is
that
we
had
the
stuff
the
data
that
was
coming
in
from
the
back
end
from
the
initial
page,
rendering
was
sent
through
our
dom
purify
library
and
the
api
response
wasn't
so
to
the
user.
It
looked
100
the
same,
but
in
the
background
it
was
a
completely
different,
html
structure,
so
it
had
all
the
attributes
and
different
order,
etc.
So
this
meant
really
to
the
browser,
hey
look.
A
I
have
new
content,
I'm
rendering
again
and
that's
what
really
happened
is
that
the
browser
rendered
at
six
seconds
when
the
first
time
the
real-time
changes
came
in
it
said
hey.
This
is
different
because
it,
the
html,
looks
different
compared
to
the
actual
content,
and
that's
when,
where
we
went
in,
we
changed
simply
that
both
get
cleaned
by
the
same
function
and
problem
solved
and
our
basically
the
lcp
dropped
and
no
more
updates
were
done,
and
he
also
moved
out
the
real
start
of
the
real
time.
F: The request timed out, basically, at 10 seconds: because the request was so slow, nothing on the page changed for 10 seconds and the request timed out, but the request itself actually takes 30 seconds.

A: The most used routes, which is five: the project home, issue detail, issue list, merge request detail, and merge request list. But there are way more pages, of course, and way more routes that we are actually monitoring. What we are currently focusing on is the target of 2.5 seconds; that's the definition of a good LCP. And take a look at all the routes that we are currently monitoring.
A: You will see that up to here we are still in the good area, and then it drops off. I would say 54%, if my calculation on the Grafana dashboard is right, have a good LCP, but that means the other 46% are not in that area, and we are most probably going to focus on really getting everything else onto the good side of the whole thing as well. And, as you mentioned, there are standouts: what was it, the milestone page and pipeline charts.
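The 2.5-second "good" threshold quoted above comes from the Web Vitals definitions; a tiny classifier makes the 54%/46% split concrete. The 4-second boundary for "poor" is my assumption from Google's published Web Vitals guidance, not something stated in the session.

```javascript
// Classify an LCP measurement per the Web Vitals thresholds:
// <= 2.5 s is "good"; above 4 s (assumed from the guidance) is "poor".
function classifyLcp(seconds) {
  if (seconds <= 2.5) return 'good';
  if (seconds <= 4.0) return 'needs improvement';
  return 'poor';
}

console.log(classifyLcp(1.7)); // → 'good'
console.log(classifyLcp(3.4)); // → 'needs improvement'
```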
A: Those runs take a very long time, so there we need to come in and also figure out what type of data we don't want to load on the first run, and figure out how we can split it up. There are a lot of techniques you can use to reduce the page load, the overall load. Right now the overall focus, and that's what I mentioned...

A: To some extent it is also quite easy: simply go in, replace an icon, get the icon out. We are getting down to, I think, 280 or so Font Awesome instances that we still have left in the product. Why is this important for performance? Quite easy: as soon as we have cleared out all of the Font Awesome icons, we can delete this font, which most probably saves around 100 to 200 milliseconds, and that is definitely somewhere it's an easy task.
A
It's
simply
a
lot
that
we
need
to
do,
but
at
some
point,
hopefully
it
will
pay
off
and
I
already
gave
martin,
who
is
the
dri.
He
will
be
the
one
that
is
deleting
the
font
awesome
and
a
lot
of
pipe
people
will
watch.
Why
he's?
Basically
why
we
are
merging
this?
So
that's
definitely
a
topic.
F: Quite global, but pretty close: any time you have GraphQL queries at the minute, as long as you're not working with fragments, you can use this. Well, you can't yet, but it will be merged fairly shortly.

F: I think you can use this helper to embed the GraphQL request in the page, and if you follow the rest of that MR, you can hopefully pretty easily use startup.js for GraphQL queries. But like I said, we haven't merged it yet; it's pretty close, and we're expecting something like a 20 to 25 percent speed increase from this, so, just to put it on everyone's radar. I don't think the MR is going to change that much, so it's pretty good.
A: Then we start: okay, hello Vue application, I need to go out and load some API endpoints. So let's call this endpoint and let's call this endpoint, and then the endpoint is called, and then this endpoint takes 900 milliseconds, and suddenly you are already at 2.5 seconds until you have all the data available that you want to render for the first thing on screen, which is way too long.

A: So the idea was quite simple: move exactly that call, which we were otherwise doing at 1.6 seconds, into vanilla JavaScript right after the server response is loaded. It's a classic vanilla JavaScript snippet that goes ahead and does the fetch from the API, and later, when the application starts and loads, the application goes out and says: hey, was there anything for me in the post box? Has this URL already been called? Great, perfect.
A
I
just
take
the
result
pass
this
and
then
you
have
saved
these
900
milliseconds
that
you
would
need
would
have
needed
to
wait
and
basically
paralyze
this
with
all
the
global
front
and
blah,
and
that's
what
startup
gs
is
and
what
john
and
natalya
were
working
on,
especially
and
I'm
really
looking
forward
to
that
is
doing
the
same
thing
for
graphql.
So
that
we
can
do
graphql
course,
you
have
arrays
helper,
you
added
to
your
template.
A
You
say,
please
call
go
ahead
and
call
that
stuff
for
me
already
and
then
the
application
can
basically
pick
it
up
from
there
and
I
can
take
it
and
has
already
the
data
available
when
the
application
is
started,
and
that
is
definitely
a
big
help
and
as
it
now
sounds
a
little
bit
that
we
are
taking
very
long
from
the
front
and
side.
Yes,
we
are
taking
a
very
long
front.
A: We need to reduce this. But, as John said, there are a lot of pages that already take nine or 15 seconds to load at the moment, and that's where we need to get that stuff out: be faster, do stuff async, reduce the amount of data that we load at the beginning, or reduce the heavy lifting at the beginning, and then take it from there.
F: I'll just address Sam's point there: yeah, I should have been clearer on that, sorry. I'm not saying don't use fragments; we just haven't found a way to support them yet with this. The tricky part is that the query that's embedded in the page has to be identical, including whitespace, to the one that's issued through Vue.js. If they're not identical, they won't be hashed in the same way, so that will create a cache miss, basically, and the request will actually be made twice.
F: So we just have to figure out a way to consistently compose the queries on the front and back end.

F: That is out of scope for the first iteration; we'll just try to get this proof of concept in, see what it can do on one endpoint first, and then, if it's successful, we'll build it out for fragments as well.

A: Iteration for the win. Awesome, good, then.
G: It might be worth talking, since we have several people on the call, about the potential issues of startup CSS: some people might be running into weird race conditions around what gets loaded first. I'll add a link to the autosize workaround, how we work around autosize, to the document.

G: So the issue is that startup CSS can change the order in which things load: if the JavaScript is waiting for the DOM to be there, it might not be there yet. So we devised a helper to make code wait for the CSS to be available, or whatever comes first. I'm just going to put the workaround link here; you can go in, unless anybody has any questions.
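A helper of the kind described above can be sketched as a generic "wait until ready" utility. This is an assumption about the shape of such a helper, not GitLab's actual implementation: dependent code polls a readiness check, so it runs correctly whether the CSS or the JavaScript arrives first.

```javascript
// Resolve once the readiness check passes, polling on an interval.
function waitFor(isReady, intervalMs = 10) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      if (isReady()) { clearInterval(timer); resolve(); }
    }, intervalMs);
  });
}

// Demo: the "CSS" becomes ready 30 ms later; dependent code waits for it
// instead of assuming a fixed load order.
let cssLoaded = false;
setTimeout(() => { cssLoaded = true; }, 30);

waitFor(() => cssLoaded).then(() => {
  console.log('css ready, safe to run');
});
```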
A: Thanks for bringing it up. Yeah, it might be that we run into some more race conditions. And just to clarify, startup CSS (because naming is hard, again, and we picked "startup") is the injection of critical CSS into each page through inline CSS. What does this mean? We have a crawler that takes a look at the page.

A: What we are doing right now is extracting the minimal CSS, which is 29 kilobytes or so, and injecting it right into a style tag. This helps the browser basically go ahead and say: oh, there is my backend page, perfect, I already have CSS, boom, I render it to the screen. Before, it was around one second, up to 1.5 seconds, until anything happened on the screen, and now we basically get the backend response and have the first thing painted 100 to 200 milliseconds later, straight away.
A
The
first
thing
painted,
which
is
not
the
thing
the
views
actually
has
clicked,
but
you
actually
see
the
navigation
bar
on
the
left
side
menu,
and
that
is
much
better
than
any
white
screen
in
that
sense.
So
we're
looking
at
something
and
that
helps,
of
course,
all
the
overall
perceived
performance,
good.
A: But if we are already here, then let's simply take the explore page; let's take a look at that one.

A: Yeah, that looks better: editor, snippets, text view. So please chip in if you see anything, but I will simply go ahead now for the first time (and I hope that next time someone else does it), go in, and take a look at how I would try to take it from Sitespeed and analyze it, not only in Sitespeed but also in the Chrome DevTools etc., to figure out what we could improve and what the low-hanging fruit might be.
A: Then we have onload at the top, and then finally the actual stuff at 4.533, which is interesting, because the LCP is at 4.392. So that's quite aligned with what we also see on the screen, because we only really see it there; there's no hidden thing in there. But what was also quite obvious is that we have, in reality, the application ready already at 4.1 seconds.

A: So let's take a look at the waterfall. This is where the overall work from John can come in handy: we are doing the GraphQL calls way back here. If you looked at the video, you already saw that the JavaScript application was ready but only then really started loading, and we could move the GraphQL calls that we have in this line...
A: ...and this line, and in this line. None of them is long, but we even have a subsequent call, which is also something we should be able to figure out, because it looks like we need the results from the first one before we can actually call the second one. And why do we have three on top of that? If it's GraphQL, could we combine those? But the overall point is that none of them takes longer than a second or anything: we are in the 200s, the highest is 369 milliseconds.

A: So if we move everything to startup.js with GraphQL, we would be able to call it here, and it would already be loaded down here at the one-second mark, most probably. This means that as soon as we get the stuff loaded and activated, we should already have the data and be able to move the Largest Contentful Paint, in reality, rather closer to the onload, which would be a reduction of about a second.
A: Good. Then let's take another look: why are we taking that long for a lot of the JavaScript stuff? Okay, we have Monaco, and I will take a look at the page X-ray. So there it is: Monaco. Monaco is the editor that we are using, so this is not that big of a topic if we are talking about a cached application.

A: As soon as we have a setup that is more realistic to the user experience, we shouldn't run into such long loading times as this one; that took two seconds to load in this setup. To some extent this is most probably also due to us loading a couple of things at once, which makes it harder for the CPU to digest.
C: Sorry to jump in here: did that page actually need that editor, or did it just display the file?

A: This one needs the editor, in the sense that it needs to have the same rendering. It needs to be Monaco, because we are doing the file parsing also based on the editor. It was a huge topic to reduce the number of editors: we have the Web IDE, we have the single file editor, and we are moving that stuff across.
A: But with the snippets we need to load it directly. This is also another big topic: bundle splitting, which we are doing far too little of. That is really about loading parts of the JavaScript only when you need them.

A: One thing, to give you a good example: we were able to cut 80 to 100 kilobytes out of the main JavaScript bundle simply by loading everything around the search bar that you see at the top, the search block, only when someone clicks into it. Only if you click into it, or you invoke the shortcut, do we basically go ahead and load the actual search JavaScript that used to be in our main bundle. And this timing, especially if you have it cached, takes 4 milliseconds.
A
This
is
nothing
that
you
will
ever
see
so
doing.
Good
web
bundle
splitting
based
on
what
you
need,
what
the
user
will.
You
are
trying
to
thrive,
really
for
everything
that
all
that
the
user
needs
right
away,
and
then
you
can
still
go
in
there
are
tons
of
you
can
load
automatically.
You
can
only
load
on
into
action,
but
I
think
this
is
a
huge
topic
I
have
seen
we
have.
A
We
have
only
12
to
15
bundle
splits
in
our
whole
code
base,
which
is
nothing
compared
to
the
overall
code
base,
so
reloading
stuff
that
is
only
needed,
then,
would
be
super
vital
to
reduce
the
number
of
the
the
size
of
javascript,
and
that
is
especially
important,
because
a
kilobyte
of
javascript
is
way
more
heavy
and
intensive
to
the
browser
than
any
image
or
anything
because
it
needs
to
parse
it.
It
needs
to
get
it
ready.
A: It needs to get it into the runtime, et cetera, et cetera, so every kilobyte counts to some extent, and hopefully our main bundle will soon be so small anyhow that we don't need to worry about it anymore.
A: Good, let's take a look at Lighthouse. Not the best number: 25, and it goes from zero to 100, by the way. Let's see: we have here a counted Largest Contentful Paint of 9.9 seconds. And here is exactly where bundle splitting comes into play: we could most probably save three seconds based on the stuff that we could split out.
A: If you look at this Monaco, we could perhaps split out some languages; there's a huge chunk that is 223 kilobytes, and of that we could save 120, which is more than half of it. This means that if we split that out, 50% of that stuff is gone and only loaded on that interaction. And I've seen a lot of times already over the last weeks that we are loading stuff that perhaps 0.003% of the people will actually click.
A
So
one
of
the
best
examples
in
that
area
is
on
the
project
homepage.
Currently
we
render
still
on
the
race
side
a
modal
for
custom
notification
settings.
I
didn't
even
know
until
that
point
that
we
have
custom
notification
settings,
but
this
means
that
we
are
doing
a
query
on
the
back
end
to
figure
out
what
are
the
settings
of
the
user.
We
are
initializing
this
on
the
javascript
side.
We
are
rendering
this
to
the
dom.
We
have
additional
dom
elements.