From YouTube: Web Performance Monitoring Workshop #1
Description
The first in a series of workshops dedicated to monitoring, analyzing, and improving the performance of GitLab.
Jannik Lehman: an introduction to what and why we measure and monitor
Denys Mishunov: deeper dive into Grafana and Sitespeed reports
A
Hi everyone, welcome everybody. I'm super happy that everybody could make it to web performance monitoring workshop number one. In the last weeks, Denys and I figured that our web performance monitoring setup could use a little extra attention. It's quite an exciting topic, and there's a really powerful tool, or rather a set of powerful tools, around it. So here we are, and we're going to tell you everything you need to know about it today.
A
This is what we are planning and what you'll be walking away with: I'll give you a short introduction to what performance data actually is and what it measures. Once we have that covered, we'll be speaking about different ways to aggregate performance data. I'll talk about performance monitoring, the setup that we have in place, and then we're going to deep dive into the Grafana dashboards and see the measured data in action. We want this workshop to be as interactive as possible.
A
So please put your questions in the linked agenda, use the raise-hand feature in Zoom anytime, or just say your question out loud; feel free to interrupt, that's all good. So what is this whole thing about? In the times we're currently living in, nobody wants to wait for anything, and to provide a good experience to our users it is crucial that we help them do their job quickly. But how can we tell if we're actually doing a good job?
A
There are plenty of ways to measure this, but Web Vitals are a highly regarded set of metrics in our industry. The Web Vitals cover three different things when it comes to rendering a web page on the client's device: the loading stage, which is measured by the Largest Contentful Paint (LCP); when the page is ready for user input, which is measured with the First Input Delay (FID); and the visual stability, which is measured with the Cumulative Layout Shift (CLS). We'll be diving deeper into those metrics later in the workshop and see examples.
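To make the three metrics a bit more concrete, here is a minimal, hedged sketch of how they can be collected in the browser with the open-source web-vitals package. The package choice, the exact function names (they vary between package versions), and the /collect-vitals endpoint are assumptions for illustration, not part of our actual setup:

```typescript
// Minimal sketch using the open-source `web-vitals` package (assumed dependency).
// In web-vitals v3+ the functions are onLCP/onFID/onCLS; older versions exported getLCP/getFID/getCLS.
import { onLCP, onFID, onCLS, Metric } from 'web-vitals';

// Hypothetical reporting endpoint; replace with a real collector.
function report(metric: Metric): void {
  navigator.sendBeacon('/collect-vitals', JSON.stringify({
    name: metric.name,   // 'LCP' | 'FID' | 'CLS'
    value: metric.value, // milliseconds for LCP/FID, unitless score for CLS
    id: metric.id,
  }));
}

onLCP(report); // loading: Largest Contentful Paint
onFID(report); // readiness for input: First Input Delay
onCLS(report); // visual stability: Cumulative Layout Shift
```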
A
But for now, these are the three essential values, and if you have all of them in a greenish area, we're doing a great job performance-wise. There is a caveat to those values, though: they're all made up. There's no standard, no specification, nothing like that. The Chrome team has decided that these are important indicators, and our industry has adopted them.
A
Most of those metrics are also only available in Chrome. While this is a bit of a bummer, in the real world it's actually not a big deal, since those metrics are good enough. If we care about providing the best possible user experience, we should be caring about two key things: one would be trend data and the other one would be consistency. To put an example on this:
A
If our goal were to become a faster runner, it doesn't really matter whether we are measuring a distance of 90, 100 or 110 meters with the time on the stopwatch; the exact number of meters is not that important. What matters is that we use the same distance every time we measure, and the trend data, whether we're getting slower or faster. This is all we need to be on the right track. So you might be asking yourself: where can I get those metrics from? There are plenty of ways to get this data, so please bear with me while I talk you through this table.
A
First off, there are two different ways to collect performance data that we care about: there's lab and there's field. A lab performance test is probably what a lot of us have seen before. It is done in a controlled measurement environment under reproducible conditions; a Google Lighthouse test on your local GDK would be one example of a lab test. Lab tests are great for many reasons: they are reproducible and easy to debug, and they do a great job if you want to compare two different approaches to tackling a problem.
A
The thing with lab tests, though, is that on complex systems like the ones we deal with, it's just not the real deal: there's a difference between my developer setup running on my localhost and a user getting the product delivered over a flaky Wi-Fi connection. But there's field data out there as well. Field data is collected from actual users, similar to a telemetry setup: when a user interacts with GitLab, we collect the metrics and send them back home.
A
This is fantastic, because what we care about is providing a good user experience, and that's exactly what we measure. Unfortunately, this data can become quite noisy. We can break this down to one user who took an astonishing five minutes to render our start page. Even though this looks bad, the whole story might be that they were entering the subway, lost their mobile connection in the meantime, and tried to connect to a flaky hotspot.
A
They finally reached their destination, got exposed to sunlight and signal once more, and then finished loading our start page. Crossing my fingers, and good luck debugging that graph. I think those examples explain the problem with field data: it's great, and if you're seeing trends in your field data it's absolutely worth investigating them, but take it with a tablespoon of salt, since real-world conditions are wild, and be prepared for exceptions to your average user experience.
A
The data we're currently collecting is kind of in between, since the definitions are rather loose. What we're doing is running automated, reproducible tests against our production environment. This is better than only testing in the lab, because some real-world factors are included, but the downside is that things might break even in production, and we might see some exceptions in our tests that are rather unrelated to the problems we'll be looking at. One more pro tip for further research on this:
A
There is a lot of confusion about naming these things, and in the end they all mean the same. But if you do a web search: lab data is sometimes referred to as synthetic data, and field data sometimes goes by RUM, which stands for real user monitoring. So now you've heard those terms. We'll be looking at our dashboards in just a second, but what are we actually going to see on them? As mentioned before, we are running lab tests against our production service every four hours.
A
We are using sitespeed.io as the testing application. We are not tracking every path that we have at GitLab, because testing is quite resource-heavy and, due to the components we reuse, we would be testing a lot of the same things. If you want to track a specific URL, there's a link to the repo on the slides, and the docs describe how to set this up. Those docs are a little outdated, unfortunately; there is an MR updating them in flight, so maybe take that into consideration as well.
A
So quite a lot of data and things are measured, but where can we actually get our hands on it? Here's where we accumulate this data: there's the performance bar, which provides handy information built into our product, and our Sitespeed reports, which are surfaced in the Grafana dashboard.
A
We also have an instance of DebugBear in place. DebugBear does very similar things to the Sitespeed reports; the reason we have it is to double-check our measurements. Whenever you are suspicious that there is something wrong with the Sitespeed tests, check DebugBear as a second source of truth.
A
There is also #performance-alerts, a Slack channel that posts a message when tests go over a certain threshold. So before we hand over to Denys, who will get to the meat and potatoes of doing some analysis of the data we have, here's a quick outlook on what the future holds for us. We currently don't have any field data. Sentry has supported performance data for a few releases now, and we're currently looking into this, so expect some field data as another point of reference in the future.
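As a rough idea of what that field data could look like, here is a hedged sketch of enabling Sentry's browser performance (tracing) data. The DSN and sample rate are placeholders, and the package split and integration name depend on the Sentry SDK version in use:

```typescript
// Hedged sketch: turning on Sentry performance data in the browser.
// Package names and the integration API vary by SDK version; values below are placeholders.
import * as Sentry from '@sentry/browser';
import { BrowserTracing } from '@sentry/tracing';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  integrations: [new BrowserTracing()],
  tracesSampleRate: 0.1, // sample 10% of page loads to keep the data volume manageable
});
```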
B
Thanks a lot, Jannik. Before we jump into the hardcore part: Cos raised a concern in the questions section of the document that the slides are not accessible to everybody. Could you please share them with everybody in GitLab? It would make sense for people to be able to follow along with the hardcore part.
B
So while Jannik fixes the slides, I would highly recommend everybody open them, because there will be some links that you might want to click while I'm talking. Jannik made a great introduction to our performance measurement setup and how we do things, but there is a lot to digest there, to be honest, and we totally know about this. That was the essence of the theory.
B
We are going to talk about loading performance today. We do have some attempts and some experiments around the performance of user stories, like a user opens an issue or creates a new issue, or something like that: different stories where different interactions happen. But those are not going to be part of this workshop today. So we are going to talk about measuring performance mainly from the front-end side of things, and you will see why later in this workshop.
B
But before we proceed, let's make things clear. The main web performance monitoring tool we have is Sitespeed. Sitespeed runs, as Jannik mentioned, every four hours, creates the results and dumps them into the Graphite database (I don't know what the correct pronunciation is), and the data is picked up out of that database into Grafana. So the information we are going to look at today is being updated very regularly and is pretty reliable.
B
The main question, to head off additional questions: as Jannik mentioned, we are sort of doing lab testing, meaning we are running the tests in a reproducible and predictable environment. What is that environment? That's a good question. We are running on a throttled cable connection. I calculated the numbers today: on the download side, the connection we are testing on is about 30 times faster than regular 3G and about nine times slower than 4G.
B
So it's like an average, let's say, an average Wi-Fi connection you can find in at least a European hotel, sort of. It's not exceptionally fast, but it's not very slow either. We measure for desktop, we measure for logged-in URLs, for non-logged-in URLs, and for mobile, and that's what we are going to look into right now. So let's jump to this thing.
B
First of all, the URLs we're talking about: as Jannik mentioned, we're not measuring all the URLs. The URLs that you want to be measured are defined in a list in this package, which is called the sitespeed measurement setup.
B
If you go to the gitlab folder here, you will see some subfolders: desktop, desktop cached URLs, the related mobile URLs, and the www routes for the marketing site. So if you are about to test any page on our marketing site, that is about.gitlab.com, you put your URLs into this desktop.txt file. The format is just one URL per line, as easy as that, and all of these URLs are going to be picked up by our www measurements.
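For illustration, a hypothetical desktop.txt following the one-URL-per-line format described above could look like this (these example URLs are placeholders, not the actual tested list):

```
https://gitlab.com/explore
https://gitlab.com/gitlab-org/gitlab
https://gitlab.com/gitlab-org/gitlab/-/blob/master/README.md
```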
B
If you want to measure on mobile (let's wait while the browser loads the parent folder), in the gitlab folder, as we saw, there is this emulated-mobile folder next to the desktop URLs, and at the moment we're measuring only two routes on mobile: the Explore page and the project page.
B
Why so? Because running tests on mobile takes more time: on mobile we're throttling the network even more, so the tests take longer, and we just try to avoid overloading the mobile measurements. In the regular scenario, what you will want to do is go to the desktop folder, depending on whether you want to measure performance for a logged-in route or an anonymous route. For example, the project page is an anonymous route; anybody can get access to it.
B
But getting access to, I don't know, the pipeline editor or the Web IDE would require you to be logged in, like loading the repositories. So depending on that, you pick either the logged-in URLs folder or the plain URLs folder.
B
This is the dashboard that aggregates all of the monitored routes and ranks them based on their performance health, so to speak, from the healthiest all the way to the worst. Once a route gets into the red zone, that's the danger zone. If a route in the red zone belongs to your group, that's a good indicator: if you're into performance, just go there, figure out what's going on with that route, and fix it.
B
Oh, okay, let me just increase the text size then. I'm actually not sure whether it makes a difference; the text is bigger now, right? But I'm not sure you can see it. You don't really need to see it, no problem.
B
Okay, cool. So this is the LCP leaderboard dashboard. You don't really need to read that text, it's totally fine. You can get to this dashboard, however, if you go to the slides that you hopefully have access to by now and go to slide number 24; you should be able to click this link.
B
Let me just do it with you. Well, you don't really click, you probably copy and paste. You get to another, more important dashboard: the Sitespeed page summary. Apparently you are greeted with a login screen at the moment if you have clicked that, but you can use your regular GitLab Google credentials to get to the Grafana dashboard.
B
So if you couldn't access it now, please do let me know and we'll try to fix this, but I assume that we all can get access to this dashboard.
B
If you have seen this page summary dashboard before, it might not look very familiar to you, because only earlier today I updated it for the purpose of our workshop, so that we show the information that is relevant and get rid of the information that is irrelevant. On the page summary dashboard you are greeted with a standard Grafana dashboard, and you can select desktop, desktop cached, or emulated mobile as the path. That's what I mentioned earlier: when you put your URL into the different folders, that's how it is reflected here.
B
So if we go to emulated mobile, you will see that we do this only for the GitLab domain, gitlab.com, and there are only two pages: Explore and gitlab-org/gitlab, which is the GitLab project. But this is not what we are going to be investigating today.
B
So, the desktop. Let me get back to the project. By default you will get the data for the GitLab project home page; this is the main project view that we are measuring against.
B
We will take a look at what this particular project looks like in a minute, but the important thing to note here is that we measure on Chrome. As Jannik said, our essential metrics like LCP, CLS and FID (I'm going to talk about FID a bit later) are Chrome-specific; they're not standardized metrics, they are synthetic values generated only in the Chrome browser. There are some analogues in Firefox, but we are measuring only on Chrome.
B
Technically, if we measured on different connections, you could select a different connection here. If you need your particular route to be measured on some specific connection, please reach out to me and we will try to sort that out, but in general, as I said, we measure everything on the cable profile. And here is the main interface that you will be greeted with.
B
The first row is dedicated to the Web Vitals, the metrics that Jannik mentioned: LCP, Largest Contentful Paint; Cumulative Layout Shift; and, in this case, Total Blocking Time rather than FID (again, more on this later). All three panels here will indicate to you whether the value for this particular route is in the safe zone, meaning the green zone, or in the red zone.
B
As you can see, the LCP chart is all green, but the CLS, the Cumulative Layout Shift, is just floating under the red threshold, and the Total Blocking Time is all in the red zone. So these two things are a bit problematic on the project page, unfortunately. We will get to the Sitespeed report and see how things can be analyzed from there a bit later.
B
Further down you see the page summary. This panel here is not very informative because it shows a lot of timings, but we do not rely on those times. Much more interesting is this side of the page: the total transfer size of the page, number of requests, DOM elements, and then again the FCP, LCP and CLS metrics. And for the routes which have implemented the user timing marks that we will be talking about later,
B
you will see data here. For example, if we switch to the Web IDE route, you will notice the data shows up here with the custom user timing metrics. I'm not going to cover those in detail, but you will find the link to the documentation on how to add them and how to use them on slide 23 of our slide deck; it says "GitLab user timing API documentation". Or in our notes.
B
I think — Jannik, have you added this? No? The link to the user timing metrics. Anyway, just grab it from slide 23. That documentation will give you a really good understanding of how a particular element on your page behaves, because, as we discussed with Cos during our previous application performance session, LCP might not be the metric that gives you a real understanding of how your page, your route, performs for the end user.
B
So you want to rely on a specific element being output on the screen as the milestone saying: okay, we are done, this is what the user came here for. As an example, when you load the project page, the user probably came for the file tree of the page. That's what you want to measure against, not some random element that happens to trigger LCP.
B
Those marks will be rendered in this panel, but in general most of the routes we have don't introduce the user timing marks just yet, and actually helping introduce the marks to those routes would be a huge improvement to our performance monitoring as well. So if you want to get into this, please reach out and I can explain what it is about.
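For orientation, a minimal sketch of what such a mark could look like with the standard User Timing API. The mark and measure names here are made up for illustration; GitLab's own helpers and naming conventions are in the linked documentation:

```typescript
// Somewhere early, e.g. right when the app bootstraps:
performance.mark('app-bootstrap');

// Later, once the element the user actually came for (here: the file tree) is in the DOM:
performance.mark('file-tree-rendered');
performance.measure('time-to-file-tree', 'app-bootstrap', 'file-tree-rendered');

// Inspect the resulting measure, e.g. in the console or before sending it to a collector.
const [entry] = performance.getEntriesByName('time-to-file-tree');
console.log(`File tree rendered ${Math.round(entry.duration)} ms after bootstrap`);
```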
A
Awesome. And Denys, we do have some questions in our doc, and it would probably be a good thing to get a final answer to those now, since we're on the topic. So before we move on: Cos, do you want to verbalize, or should I do it for you?
C
Right, so sorry, I got a bit lost there. Yeah, on the screen that Denys just showed there is a cumulative layout shift chart with a few spikes. I was wondering how that happened, because we've seen some spikes on the LCP recently as well, and we discussed that there might be some hiccups with this performance testing.
B
This is a very good question. First of all, keep in mind that every route is measured only two times: we have two runs per route when we measure. So it might be that one of the measurements was completely off during a run, and we just don't have a statistically significant amount of data to even that out. But when it comes to these particular spikes, they come from around the time when we were fixing a problem with running the tests in the virtual machine. So these spikes normally should not be on the charts; in this particular case,
B
they are the result of half-run tests, aborted tests, and things like that, because this is around the time we were experimenting a lot with how we run the tests. So these are not typical things. If there is a spike,
B
the first thing you do is wait for the next measurement. If the measurement goes down, then you probably faced some hiccup during the testing. If the spike is not really a spike but a new trend, that's when you start worrying and ping either your group or the group responsible for that particular route. Does that answer the question?
B
LCP can jump but, as you can see, if you look at the absolute numbers the jumps are not that dramatic.
B
So, for the sake of time, let's move on. Nicholas, do you want to verbalize?
D
Yeah, my question would be more about: do we also see improvements when we deploy something? Can we see deployments in the Grafana dashboards, so we can relate them to specific improvements?
B
This is a very good question. So how do we see the performance improvements, or performance changes, after we deploy something that we assume is a fix? The first thing you do is figure out when your merge request was deployed to production.
B
How do we do this? We just go to — I don't know, okay, oops, not that one — we go to gitlab.com and to any merge request. Like, I don't know, this is an issue; okay, merge requests; we need something that was merged. Okay, let's take this one.
B
Every merge request has the information about when it was deployed to production. So we go to the pipeline widget, and it has been deployed to production on — yeah, this one is a bit too recent to check, but the 9th of September at 3:08:26. So technically you are interested in checking the performance before that timestamp and after that timestamp. How do you do that? This brilliantly moves us to the next thing: how do we get to the results?
B
If you hover over one of these triangles, you see the timestamp of this run and a screenshot, just to confirm that this is the page you were looking for. You identify the timestamp that is before your changes were deployed to production and the one after. As Jannik mentioned, our performance measurements run against production, so you have to make sure that your changes were actually in production when we were running the test.
B
So take a point before your merge request and a point after your merge request, and then you will notice whether your change has actually affected performance dramatically or not. We will get to the results of a particular run in a second, but has that answered the question? Okay.
B
Yes, yes, okay. So, Cos: how do we define which elements we should put custom timing marks on? Let's get to this part again briefly. As I said, for every single route there is a specific criterion
B
that says whether loading that particular route was successful or not. As I said, for the project page the main event saying that the user actually got what they came for is loading the repository tree. When loading the Web IDE it's, again, loading the file tree. We monitor two routes for the Web IDE: one is the empty state, so if you take a look at the screenshot here, it's just the
B
repo with no files open. For that empty one, the main element is loading the navigation tree, so that users can start using the application. The other one we have here is for a particular page; in that case the differentiator for whether the user successfully loaded the page or not would be: when is the content of the file rendered on the screen? So every route has a default, well-defined state saying whether the loading was successful or not.
B
So that's why these metrics are custom: they differ from page to page and depend on the particular purpose of each page. Does that answer the question?
C
Could we ask the product managers to define the particular elements on features which are important to us, and also ask them to join such performance workshops, so they could define those for their features?
B
This is a very good point. The problem is that product managers should not necessarily care about user timing metrics; they will still evaluate the success of a page using LCP, because that's the metric we're using. If you want to have a real understanding of how the page behaves and performs, that is a technical, engineering task, and you as an engineer have all the capabilities to measure it. For the Web IDE,
B
for example, we measure different things, so we can relate to different elements on the page. We can say: okay, when is the file tree loaded, when is a particular file loaded, when is the content of the file loaded — different marks. That's why they are called custom: you can put them anywhere you want, and all of those marks are gathered in an async manner. So if you put ten marks, they won't affect your loading performance; there is no real penalty to using more than just a few. Okay, Nicholas, anything to add to that? Yep.
B
Thanks. This is the main part of the Grafana dashboard, but looking at these numbers just scratches the surface. It gives us an idea of how a page behaves and performs, but what exactly is happening there? What is going on? Why is the LCP this or that value? For that purpose we can go straight to the Sitespeed reports. As I said, go to Results.
B
The toggle has to be switched on in order to generate these vertical bars. For example, we are on the Web IDE page. Okay, let's dive into it. At the moment the LCP is above two, right? This is important to mention: the performance budget for LCP that we have is 2.5 seconds. If you go below 2.5 seconds, this is great, this is the green zone.
B
So our ideal performance budget for loading performance is 2.5 seconds. This is not our invention; this is how Google defines a good LCP, and that's what we aim for, because Google actually penalizes pages that do not perform as well as they suggest, so to speak.
B
Right, there is one more thing I would have liked to mention here, but for the sake of time we'll skip that one. So what are the times here? This is important: by default we show the median time of the runs. What is the median time? It's the number in the middle, so the number of measurements below that value is exactly the same as the number of measurements above it. Since, as I said, we have only two runs,
B
this means that the median in our case is technically exactly the same as the mean, just the mathematical average of the two values. So whether you use median or mean, it doesn't really matter; it's going to be the same here. But it says median, just for simplicity, as the default value.
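A tiny sketch of that arithmetic, with made-up numbers for the two runs:

```typescript
// Hypothetical LCP values (seconds) from the two runs of a route.
const runs = [2.4, 2.6];

// Mean: the arithmetic average of the two values.
const mean = (runs[0] + runs[1]) / 2; // 2.5

// Median of an even-length list: the average of the two middle values.
// With exactly two runs those middle values are the whole list, so median === mean.
const sorted = [...runs].sort((a, b) => a - b);
const median = (sorted[0] + sorted[1]) / 2; // 2.5
```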
B
This is the format representing the performance profile. You can upload it to your dev tools and analyze it in your browser, but we are not going to cover that here, because we have wonderfully assembled Sitespeed reports at our disposal. When you get to the Sitespeed report, the first thing you see is the name of the page. You can click this link in order to see what is actually being measured in this run.
B
As we can see, this is the team.yml page in the Sitespeed test repo. This is a very large file; that's why the performance of this one is not optimal.
B
As you can see, we had two runs, and the first tab we get to is Summary. The summary provides exactly what it says: a summary of different performance-related metrics. Some of those are useful, some not so much, but you will get some inspiration from them. The screenshot is self-explanatory.
B
In the summary table you will most probably be interested in the Largest Contentful Paint metric, so 2.5, and that's what we have here: 2.52 seconds, technically the same as the 2,517 milliseconds shown elsewhere.
B
So these things do match. Then there's Cumulative Layout Shift, Total Blocking Time (an interesting one), and the related FID, though FID is not very accurate in these runs. Below you have the timing summary and, again, a lot of data; we're not going to dive into all of it. Again: LCP, First Contentful Paint, Largest Contentful Paint — these are very interesting things, but they are just a summary.
B
You also have the graph showing the visual progress: how the page was visually rendered, the progression of visually rendering the page. You can see there was nearly nothing here, then we got 39% rendered on the page, for some period of time it didn't change, and then some element was removed from the screen.
B
That's the drop here, and it's very interesting to see what happened there. I can tell you what happened: it was the image, the SVG in the center of the Web IDE. If we go back and open this page, watch here: there is the SVG that disappears, and then we continue rendering. That's why this dip is here. Then all of a sudden we get 86% rendered, and then it stays nearly the same until we get to a hundred percent.
B
You also get very interesting things like the First Visual Change and Last Visual Change metrics. What is First Visual Change? This is the moment when the browser screen gets at least some color or some text, some change compared to a blank white page (not white in this particular case). It is the moment when the very first pixel has been rendered on the screen.
B
But the interesting thing here is: whenever you hit a performance issue, the first thing you, as a front-end engineer, have to figure out is whether it's a back-end issue or a front-end issue. You have to know where to look, because you might spend a lot of time tinkering with the front end while the real problem might be a slow database connection or something else. That's where the browser metrics come into play, and here we have them in clear text.
B
But before we get into this, I would like to talk about different moments in the page evolution, so to speak.
B
Okay, it's not — yeah, it doesn't show properly. So we are going to talk about different moments that happen in the — oh, you don't see this, okay. Unfortunately it does crop, okay. I will just show it like this, if you don't mind. What if I just make it this way? Yeah, okay, that's good, so I will just show the slides like this. Now, okay, I'm going to talk about the different Navigation Timing API events that happen during your page load in the browser. It all starts with navigation start.
B
This is the moment you type a URL into your browser's navigation bar and hit Enter, or you click a link. That is navigation start. At this moment your browser sends a request to the server, the back end processes your request and, at some point, it starts sending the response. The difference between the moment the server starts sending the response and the moment you hit Enter or clicked the link is where the back end sits.
B
Then, after that, the server is done sending your requested data, so the response end event happens. The difference between response end and response start is what constitutes the page download: this is just sending bytes down from the server to the client, according to your request. The next stage is load event start.
B
This is the moment when your browser says: okay, I'm done loading. I'm done processing all of the assets, I'm done parsing the resources, everything that constitutes the DOM of your page, the representation and rendering, painting the content on the screen. The moment the browser is done, that is load event start, and the difference between load event start and response
B
end is what constitutes the front-end time. I intentionally made it much bigger than the back end on the slide, because in 99% of cases this is going to be true for your routes as well: the front end will take much longer than the back end, simply because we are in a world where our applications are heavily based on JavaScript, and JavaScript processing and parsing takes time.
B
Then we have the processing of the onload event, but we don't do a lot in onload, and the difference between load event end and load event start is exactly the time needed for the browser to process the onload event handlers in the front end. While all this is going on, different events happen: DOMContentLoaded, DOM interactive, First Paint, First Contentful Paint. Those are the things we've seen in the tables, and these events are standardized, they follow the specifications and they are very reliable.
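A rough sketch of how those back-end and front-end slices can be read out of the Navigation Timing API in the browser console. The split shown simply follows the definitions above (back end ≈ time to the first response byte, front end ≈ response end to load event start); it is an illustration, not the exact formula Sitespeed uses:

```typescript
// Hedged sketch: reading Navigation Timing (Level 2) entries in the browser.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

// All values are milliseconds relative to the time origin (navigation start ≈ 0).
const backendTime = nav.responseStart - nav.startTime;    // waiting for the first response byte
const downloadTime = nav.responseEnd - nav.responseStart; // bytes streaming down to the client
const frontendTime = nav.loadEventStart - nav.responseEnd; // parsing, JS execution, rendering
const onloadTime = nav.loadEventEnd - nav.loadEventStart;  // running the onload handlers

console.table({ backendTime, downloadTime, frontendTime, onloadTime });
```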
B
Here, however, you won't see LCP, CLS or Total Blocking Time. Why? Because, as Jannik mentioned, these are synthetic values; they are not related to any specification. They are the product of different computations made by the Chrome engine based on the content you output on the screen.
B
The front-end time value is shown here as well. In general, there is a lot of interesting information here, and it might already start giving you ideas of where things happen and whether things go wrong. But in order to get a real deep dive into the life cycle of your page, you go to the Waterfall tab. This is the view that you might know from the Network tab of your DevTools.
B
There are these colored bars, and the meaning of every color is above the table. It all starts with your HTML; even if it has a .yml extension, that just happens to be part of the URL. This is the original HTML that we requested and, as you can see, nothing else is happening while we are getting this HTML, because the browser doesn't know what to do without actually seeing any instructions in the HTML.
B
Once we've got that HTML file, that's where the browser starts parsing the references we have in our HTML, like link and script tags and all these things that, in the case of our front end, webpack generates for us in most cases. Then it starts fetching the resources.
B
Since we are using HTTP/2, we are not limited like with HTTP/1: the HTTP/1 protocol has a limitation on how many resources can be downloaded at once in the browser, while HTTP/2 allows multiple resources to be downloaded at the same time. So, as you can see, a lot of resources are downloaded in parallel. However, not all of them, because, for example, this resource, the SVG file, depends on some other resource that was fetched here, and apparently it
B
comes from some CSS file that was downloaded somewhere here, or here, or here, and that CSS file gives an instruction to the browser: okay, I have a reference to this SVG file, please go and get it for me. That's when the browser goes back to the server, sends a request, the server finds this file and returns the image to you.
B
There are blocking and non-blocking resources. The example I just told you about is the perfect example of a blocking resource: the CSS is the blocking resource for this SVG file, you cannot fetch it before it has been discovered. Resources are fetched by the browser only when they are discovered, and there are several types of blocking resources in the browser. Any resource that can alter the DOM is considered a blocking resource, so it will be fetched the very moment it gets discovered in the HTML.
B
So once we see the reference to the Monaco chunk, it will be downloaded right away, and the same goes for all of the JavaScript, CSS, SVGs and fonts. The only exception, the only conventional resource that is not blocking, is the image. Images are not blocking resources and they will be fetched in parallel, asynchronously.
A
I just wanted to make one addition, since this is a great point, and that is the difference between a streaming file format and a non-streaming one. Just to get this into your head for your understanding: HTML is actually a streaming file format, so the browser, basically just like any human being, goes through it line by line; in that sense it is non-blocking, and it will discover resources and then download and execute them. JavaScript, on the other hand, will be downloaded as a whole chunk and then executed.
B
Good point. And this gap between the CSS and the SVG is exactly about the resources being parsed before we can discover this SVG. Every resource that gets to the browser has to be parsed and, in the case of JavaScript, executed, and only then can we proceed to downloading other resources. At the bottom of this table you will see all the different events, some of which we have mentioned on this timeline, and one of those is — where is it — visual completeness.
B
Where is the LCP here, the Largest Contentful Paint? Here we go, it's right here, and this is the metric that we have to optimize for. Everything that happens before this event shifts the LCP to the point where it is. So our task as engineers, when we deal with performance, is to reduce the number of resources, the number of actions happening before the LCP is triggered. But how is the LCP measured? This is essential to understand. Largest Contentful Paint is exactly what it says:
B
it measures the time needed for the browser to output the largest contentful element on the screen. We will start digesting this from the back, backwards. So, contentful: what is contentful? It's an element that has some meaning to the user: text, a header, an image, an image in an SVG.
B
What else? Even images that are provided as CSS backgrounds can count as contentful elements. The point to keep in mind is that LCP measures only the elements that are visible to the user. So if the element is on the second screen, or outside of the very first screen, those elements won't trigger LCP.
B
So the LCP — and here's where its synthetic nature comes into play — is not one moment in time. Every time the browser gets a resource and needs to repaint the page, the browser (in our particular case, Chrome) identifies the largest contentful element on every refresh, every re-render, every paint of the page, and if the largest contentful element has changed, that's when the browser logs a new LCP for that element. So the idea behind reducing the LCP is:
B
we have to provide the geometrically biggest element on the screen as soon as possible and, this is important, not change it. It has to stay on the page, because if it's gone, it's gone, the measurement for it won't be — actually, no, that is not true. If the element has been registered for the Largest Contentful Paint, but then it's gone from the screen, it will still trigger the LCP.
B
That's what we saw with the Web IDE, when the image in the center was gone. If we analyzed the LCP metric, we would see that that image was still stored as the Largest Contentful Paint, but it would be overwritten by new measurements. LCP is a compound metric: it has records of all the elements that have triggered the LCP.
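If you want to see that progression of LCP candidates yourself, here is a hedged sketch using the standard PerformanceObserver API (Chrome-only, as discussed; it is an illustration, not part of our monitoring setup):

```typescript
// Hedged sketch: observing every LCP candidate Chrome records while the page loads.
// `buffered: true` replays candidates recorded before the observer was created.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const lcp = entry as PerformanceEntry & { element?: Element };
    // Each new, geometrically larger element overwrites the previous candidate as "the" LCP.
    console.log('LCP candidate at', Math.round(lcp.startTime), 'ms:', lcp.element);
  }
});
observer.observe({ type: 'largest-contentful-paint', buffered: true });
```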
B
If you want to see the progression of your page load, we also record a video of how the page gets loaded. Just, you know, let's take a look at this one.
B
In this particular case you see that we start with some seemingly random things, because when we get to the logged-in URLs we start from the Explore page, and unloading the Explore page still happens after navigation start. Navigation start triggers the browser to fire the unload event, which clears up the currently open web page; that's why this page got into this recording. Below the images, the screenshots, we also have different information about the events happening, and also about the custom user timing
B
metrics, if we have introduced those for this particular route, and also the LCP. This is where we see the LCP and actually, as you can see, this particular case shows exactly the problem that we discussed during our application performance session: the LCP is triggered on an element that is not really relevant to the user. It's triggered on this icon, and to confirm this,
B
we can go back to our Metrics page, and if we scroll down there is absolutely fantastic information here. All right, in this particular case that element is not shown here. And if we take another one, for example the project: as I said, even if the element is gone from the screen, it will still register the LCP, and that's what happened here. That image is gone from the screen, but it still registered
B
the Largest Contentful Paint, even though it is not shown on this page. In a lot of cases, for example the project home page, if you go to the Metrics tab and go to the largest contentful paint, you will notice the element that triggered the LCP. In this particular case, ironically, the Largest Contentful Paint of the project page was triggered by an element that was simply geometrically bigger than anything else on the screen:
B
the commit message, the commit information about the last commit. So this is not very helpful. If you want to dive into more details on this, please re-watch the recording of the last application performance session, where we discussed exactly this topic.
B
We are right on time now, and the only other thing I would like to mention is: if you want inspiration on where to start, go to the LCP leaderboard and find the route that suits your group or your interest.
B
When you go to the page summary and then to the Sitespeed report, it might be smart to start with the Coach tab, which gives you ideas of where to start fixing your performance. As we can see for the Web IDE, we have a performance score of 82; that is yellow and can be improved, of course. So that's probably where we should start for the Web IDE.
B
Thank you for taking all of this time, and sorry for going one minute over. To wrap this up, I would like to invite you all to check the last page of the slide deck and the different links there: the LCP leaderboard is here, the page summary is here, the link to the package where you add the routes, and the link to the user timing API documentation.
B
We also have two important Slack channels, as Jannik mentioned: the main performance discussion channel, and #performance-alerts, which is the channel where we get alerted about things going beyond the budget and not performing as they are supposed to.
B
So with that, I would like to thank you all. If you want to stay on this call, I will be more than happy to answer questions, but if you have to drop off, that's totally fine. As I said, this is the first workshop. The next workshop will be about actually taking a route and measuring it before it gets to production. This is important to understand: what we were covering today is retrospective analysis and retrospective monitoring of performance.
B
This is monitoring performance after the potential damage has been done: you create a merge request that potentially degrades performance and you merge it to production.
B
Ideally, four hours later you see the numbers in Grafana, you see that a degradation happened, you go back and fix things: a new round with a merge request, push the merge request, merge, and so on and so on. During the next workshop we're going to talk about monitoring and measuring performance proactively, before your merge request gets merged and before you even create the merge request. I hope that one will be interesting for people as well, but when it will happen, we still don't know.
B
I'm open for questions if you want to stick with the call, and probably Jannik will be happy to answer as well. I know it was pretty compressed, pretty intense, I realize that; that's why you can get back to the slides.
B
We made sure that the main information is in the slides, and getting into the sitespeed.io reports is just fun on its own, so you can take the route that is most interesting to you. If you have difficulties analyzing a particular route, please bring that route with you to the next application performance session, and we will be happy to go through it together online, and record it, of course, for transparency's sake.
A
Go for it, just do it. Sweet, thanks everybody for showing up. We'll see you on the other side, whether it be in our application performance meeting or a Slack channel. Thanks a lot, bye.