From YouTube: Web Performance discussion with Denys Mishunov
Description
A retrospective session dedicated to answering the questions after the "Pragmatic Web Performance" workshop at Virtual Contribute 2020.
A: That's at least what's in the OKRs now, and that's what we've started looking into. So yeah, it's very, very important in my opinion, because not caring about performance might cost much more than caring about performance. People have started posting, on Hacker News and places like that, comparisons of the speed of GitLab to GitHub, at least.
A: This is the link to the repository that hosts our setup for sitespeed. The URLs we test performance for: you can find those in the repository. But technically, yes, I mentioned two ways of monitoring performance during the session: we can send things to Snowplow, or send things to sitespeed. Technically, sitespeed will pick up anything performance-related right away, without us doing anything special, as long as the URL you're measuring performance for is added to the sitespeed setup. As for Snowplow, I mentioned it just for informative reasons. Technically, I don't advise sending things to Snowplow, because it's an analytics tool that requires some involvement in the process. Also, Snowplow sends data to Sisense, and this means that, in order to create a new dashboard there, you need to request access.

A: Now, sitespeed is going to be different; technically, sitespeed works differently. Snowplow sends the event when the event happens and tracks the event in the database, right? Sitespeed works differently, and that's why we have the URLs in the setup. Sitespeed makes rounds; I think we have a run every two hours. It goes to the URLs, measures, or rather gathers, all the performance-related metrics, dumps them into the database, and that's what we get in Grafana.

A: So, as I said, I mentioned Snowplow during the session just for informative reasons, because I found it very confusing to figure out that we have different systems for monitoring different things, and I just wanted to give some overview of what's going on. In theory (and this is, for example, how Google Analytics works) you can track performance there as well, with different user-specific metrics, but you need to do some additional work, while sitespeed just does its regular thing without any additional analytics-related code.
D: Sure, I can explain a little bit better. While we were chatting with the Scalability team about monitoring performance on the front end, and on the infrastructure side as well, Andrew brought up that we do have a sitespeed setup, and it is set up in a different way than the rest of the dashboards they have to track performance of the infrastructure. The thing he noticed is that, because it's not set up in the same way, there's a bunch of things that we cannot do.
A: Actually, this would be super useful, of course, to raise the alarm. We'll get back to this later in the questions, but right now we sort of have sitespeed, and it's purely a monitoring tool without any notification system. You get the information whenever you need it, meaning you have to explicitly go to Grafana and check the data, check the numbers there. We also have a very basic CI job, called review performance, that is allowed to fail.

A: Nobody pays attention to it, and it's just there, with the problems that it raises in discussions: sort of, whether we should raise the red flag if performance degrades by 10 milliseconds or something. So that's not a very informative thing. We have to work with that CI job unless we get the Prometheus integration in place. That would be really cool, actually, yes: to have sitespeed data in there, not just us sending things.
C: Yes, so this question is about transparently instrumenting Vue, or generally just some JavaScript instrumentation, to detect when the main thread blocks in production and the like. You asked what the benefit of this would be, and I'm thinking that perf issues, performance issues, aren't always obvious until you're in production working with real-world data. Maybe the author didn't anticipate there being as many items as there are in production, and there's a slow v-for or something like that.

C: And correct me if I'm wrong: with sitespeed, we have to explicitly configure it to visit a particular page, right?

A: That's true, yes!

C: So that means we are choosing a particular project and a particular page, whereas it might be that a different project, with different data in it, will generate a different experience.
A: So those are ways to track the performance of edge cases. But constant monitoring of a blocked main thread: at least, I'm not sure I know any tool that would track that without a performance penalty of its own. We'd have to constantly monitor it, and if we did not specify any particular page, that means we would run this product-wide, and to be honest, that sounds a bit scary to me from a performance point of view. So I would much rather prefer hitting this wall once, and then adding that URL of a project or a merge request.

A: If main-thread blocking somewhere is really bad, I'd add that page to sitespeed, so that I could track it over time in sitespeed, with proper sitespeed tools, rather than monitoring main-thread blocking product-wide. But that's my opinion. I don't know; maybe there are tools that do this smartly.
D: On that sentence you just said, because I understand what Dennis is saying: instead of transparently instrumenting Vue, do you think we could potentially achieve the same goal if we follow Dennis's advice to leverage the User Timing API? Because if each app is marking the moments of, say, bootstrapping the app and bootstrapping the data, and those timings are being reported and captured in some way, you can get the same information that you're trying to get here, no?
C: Yes, sort of. I mean, what I actually had in mind was using the User Timing API, basically in the beforeUpdate and updated Vue hooks. And I would have thought (I have no idea, maybe this is worth testing) that the performance penalty would be negligible because, as Dennis has said, using the User Timing API is an async operation. And then maybe, if it is above a certain threshold, you know, if the time between the beforeUpdate and updated hooks is above, say, five hundred or a thousand milliseconds, whatever...
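A minimal sketch of what C describes, assuming Vue 2 and a global mixin; the component names, threshold, and reporting are illustrative, not GitLab's actual code:

```js
import Vue from 'vue';

const SLOW_UPDATE_THRESHOLD_MS = 500; // hypothetical threshold from the discussion

Vue.mixin({
  beforeUpdate() {
    const name = this.$options.name || 'anonymous';
    performance.mark(`${name}-update-start`); // marks just record a timestamp
  },
  updated() {
    const name = this.$options.name || 'anonymous';
    performance.mark(`${name}-update-end`);
    performance.measure(`${name}-update`, `${name}-update-start`, `${name}-update-end`);
    const entries = performance.getEntriesByName(`${name}-update`);
    const last = entries[entries.length - 1];
    if (last && last.duration > SLOW_UPDATE_THRESHOLD_MS) {
      // Report somewhere; this sketch only logs.
      console.warn(`Slow update in <${name}>: ${last.duration.toFixed(1)}ms`);
    }
  },
});
```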
F: Oh yeah, I think there's a downside to this kind of broad net, and it's kind of the same downside as if we did just snapshot testing: something may trigger, but it may be really hard to find out what that thing is. I think it's really helpful when we can label things, and, to go back to Dennis's original proposal, when we use the performance API to set very specific business-logic markers. If there's something wrong there, that's a way more actionable step.

C: ...at a base level, you know, because people building on the GDK, interacting with GitLab's data, might trigger it more. But presumably some projects, on some pages, would peak higher above the noise, and that would be an indication that something about that project... You know, basically, if we attached URLs and projects to this data, maybe we would get better insight.

F: Yeah. Would it be enough to just, like Jim is saying, maybe it's enough to just watch CPU usage per tab or the like? That's the kind of thing that sitespeed does. Is there something helpful for us in doing something specific at the Vue level? Like, if beforeUpdate-to-updated is slow, I imagine it's because our CPU usage is really high, so maybe looking at CPU usage is enough.

F: I'm not saying we shouldn't do this; I just think it's a really interesting idea. One concern you're having is that out there in the wild, in production, things are going to look different from your test environment, and one production page is going to look different from another production page. The underlying problem there is the disparity between the test world and the real world, and it'd be really sweet, even outside of production, it'd be really sweet...
D: One of the things I'm remembering from all the deep performance work we've done on the merge request page, which is arguably one of the largest performance problems we have these days, is that the more hooks we have on each component, and the more components we have, the time just compounds, and it gets excruciatingly slow. So what concerns me about casting this wide net on production environments is that I'm not sure the overhead will be negligible. We'd have to check that, of course.
A: Keep in mind also that we do have the performance bar, right along the top of the site, so we can put some information there, and it's strongly advised for anybody who cares about performance to enable the performance bar while they're developing things. I think it's still available in production as well: you can enable it to get the performance metrics, and then we could just throw such information on every view. However, this is the problem:

A: it will tell you that the problem happens on this particular page, on your particular machine, with your particular network and your particular CPU. This shouldn't be alarming unless we are sure it's the same situation for several users. If we measure on your machine right now, it's your setup that produces these results. So it's very hard to use that if we want to alert a lot of developers right away.
E: Yeah, I was just wondering: we were talking about potentially sending things to Snowplow, and kind of rolling our own, and things like that. But is this something, if Sentry doesn't do it itself, that we can just plug in and say, you know, "send me an alert when performance is bad"? I'm going to word it that vaguely. There must be a tool somewhere that just kind of does that already.
A: Sorry, it took me five seconds to do all the good things. What I was saying is that I would much rather prefer us to look into the integration with Prometheus, to give us kind of an integration with sitespeed, rather than sending things to Sentry. Because, again, sending things to Sentry means we need to update the code, and that means we have to sort of predict where in our code we might get a bottleneck or something.
E: Yeah, I was just floating the idea out there, to be honest; I wasn't expecting a full answer. It's just that if there was a tool that already existed and did everything we wanted: amazing. I don't know of one. I just know that there's a million JavaScript tools out there, and maybe one of them does what we want.
A: You know, and that's why we use a million and one tools at GitLab as well, to measure different things: just because we can, so why not? Okay, I think we're done with this part of the discussion. Unless... what do you think, Mark, did we discuss this properly? This was your point, I think.

F: Should we run this for all users, or should we run performance collection only for a certain percentage of users? Or maybe we want to detect, say, users with fast machines and only run it on their computers, or whatever. But I am concerned; I have kind of a follow-up question, but this specific one is about: what's the performance overhead of this?
A: So, I put the comment with the link to the wonderful discussion with Andrey about server-side rendering. I finished watching it earlier today, and I already had a call with Andrey as well. The ideas there are really, really great, and there are different ways to tackle performance, right? But the point is: I suspect your question is more related to measuring things, not yet to optimizing them, because optimizing performance should not carry a performance penalty; that's sort of the law.

A: Otherwise, what's the point? But measuring performance using the User Timing API has no penalty on its own. Since it is an async API, it doesn't impose any performance penalty. What does impose a penalty is the way we track things. If we track with sitespeed, there is no penalty, because sitespeed, as I said, runs in its own process: it loads the page and takes the measurements, that's it. If we decide to send things to Snowplow or to Sentry, then technically we hit the bottleneck of those

A: third parties: how synchronous or asynchronous they are, how fast they can process data, and whether we need to wait for any data to come back from them. But the User Timing API that I was showing in my session is an asynchronous API. There is no performance penalty, and of course there is no problem having this in the production code, because even though those measurements will be taken for each particular user, they won't be sent to sitespeed automatically.
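A minimal sketch of the User Timing calls being discussed; the mark names are illustrative, not the naming convention the team settled on:

```js
// Record the interesting moments; marks are cheap timestamps on the
// browser's performance timeline.
performance.mark('snippet-app-start');

// ... bootstrap the app, fetch data, render ...

performance.mark('snippet-app-rendered');
performance.measure('snippet-app', 'snippet-app-start', 'snippet-app-rendered');

// Nothing is sent anywhere by itself; a tool such as sitespeed (or any
// script) has to read the entries out of the timeline explicitly.
const [measure] = performance.getEntriesByName('snippet-app');
console.log(`snippet app ready in ${measure.duration.toFixed(1)}ms`);
```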
A: This is a very good question, because this is the part where I myself get confused a bit. Technically, the User Timing API was originally designed specifically for measuring performance on clients' machines, so that one gathers performance data from a representative group of real users of the product, right? That is, if you, for example, send the data to an analytics tool, the data from real users is sent right there.

A: There you'd get significant deviation, because while one user gets there from an old mobile device, another user gets to the site over cable on a high-speed machine, so the range of the results will be so wide that it would be impossible to draw any conclusion from them. In our case, that's why, in my RFC, I originally proposed that we might put this behind a feature flag, so that we could limit it to, for example, GitLab engineers visiting GitLab.com.

A: However, if we're talking about taking the measurements with sitespeed, we don't need this, because we just have this code in production, and the measurement gets taken at the time sitespeed gets to GitLab.com. So there is no penalty for the users, there is no noise from the users; we just get the data right there, and technically we have unified measurements over time, because we have to make sure we take these measurements consistently in order to have consistent results.

A: We have to take the measurements with particular network throttling and particular CPU throttling. Usually, and this is how the performance measurements are taken now, it's a fast 3G connection with four times slower CPU throttling. That's the baseline. And by taking the measurements with sitespeed, we do not need to hide anything behind a feature flag, because it's only sitespeed that will get access there; and then we can talk about noise and deviation of the results.
A: Yeah, I've heard that it's not a very secure way of fixing things, getting into somebody else's computer. So that's something we will hopefully tackle later, I guess. But technically, if the user is experienced enough, if he or she has a problem with performance and we're diving into it, we might even ask: "if you type this command in the console, what numbers do you get?" And then we can figure out what's going on.
D: You can just unsilence them in production if we need to. And my idea to him was to check a cookie: if the cookie that we put on our machines is present, you would output the logs to the console, and we'd be able to see the deltas between each of the marks; if not, it just registers into the performance API, just like that. So it shouldn't be a hit, but the code can be there.
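A minimal sketch of the cookie-gated logging D describes; the cookie name and helper are hypothetical, not an actual GitLab implementation:

```js
const PERF_DEBUG_COOKIE = 'perf_debug=1'; // hypothetical cookie

function markAndMaybeLog(name, startMark) {
  performance.mark(name); // always record the mark
  // Only compute and log deltas when the debug cookie is set.
  if (startMark && document.cookie.includes(PERF_DEBUG_COOKIE)) {
    const measureName = `${startMark} -> ${name}`;
    performance.measure(measureName, startMark, name);
    const entries = performance.getEntriesByName(measureName);
    const last = entries[entries.length - 1];
    console.log(`[perf] ${measureName}: ${last.duration.toFixed(1)}ms`);
  }
}

// Usage:
markAndMaybeLog('app-start');
// ... work ...
markAndMaybeLog('app-ready', 'app-start');
```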
D: What Andrew brought up is that we do have ways to filter out any console logs, or anything, from the webpack bundles themselves. So if we'd rather have them only in development, we would still have those markings in the code, but when it's deployed to production those logs would be silenced. That's something for us to decide, but I could see myself using that all the time.
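A hedged sketch of one common way to silence logs in production bundles with webpack and Terser; one possible setup, not necessarily the one GitLab uses:

```js
// webpack.config.js (production)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          // Strips console.* calls from the output: the performance marks
          // stay in the code, only the logging disappears.
          compress: { drop_console: true },
        },
      }),
    ],
  },
};
```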
F: I think that's a really good call: not stripping those things out, but having an adapter where, based on a flag or something, anything meant for performance measurement gets console-logged. Oh, that'd be so helpful; I think it's a good idea. I want to keep moving for the sake of time. Is everyone happy with that? I want to keep moving because I have the next question.

F: Still, capturing the data is interesting. The part I find particularly interesting is how, and this is a specific question, but a lot of it stems from a more abstract problem, I asked: can you imagine a reliable CI check for performance degradation? "Reliable" is the key word there, and it means, you know: hey, if our cloud is at 80% capacity versus 40% capacity, that we're reliably... that we don't have a flaky job there. A lot of that is how we measure performance over time and know whether something actually changed because of the application code, or whether it's changing because of my environment. Our environments are always going to change, and that's the challenge, in my opinion. I see some people have left some really interesting thoughts here. Does anyone have any others?
A: What I would like to talk about is the existing review performance job that we have in the CI, right? I had a chat with Ramya, who was the author of the initial implementation of the review performance job, about how it works. I don't know, maybe some people didn't know this, maybe some did: when you push a merge request, there is this performance analysis, kind of saying,

A: "this merge request degraded or improved some numbers", and then those numbers are there and nobody knows what to do with them, frankly. What else happens is that whenever you push a merge request and run that job: first of all, over time, cron-job style, that job gathers measurements against master and stores those measurements in the artifacts.

A: Then it runs the same measurements on your merge request and compares them to the numbers taken against master, as simple as that. And then (I think I didn't put the link) there are the things I've already mentioned, like the question of whether this job should fail if performance degrades by 1% or something like that. Originally it was set up so that performance degrading by one millisecond raises an error, which I think is a bit insane.

A: However, we could do better, I think. First of all, how do we do a sensible measurement? I think it has to have two, or even three, levels of checks. First, we do the same thing the review performance job does: we check the numbers against master. And if we do this in a smart, or pragmatic, way, we check whether we are going outside a twenty percent margin compared to master. If the merge request doesn't degrade performance by more than twenty percent,

A: it's probably okay, but we have to raise a warning. And the state of warning, I think, should also mean that we somehow check the performance of exactly the same page on old master, I don't know, some commits back, for example, and see whether the current master itself degraded compared to that previous commit. If current master degraded by, let's say, ten percent, and now our merge request degrades master by twenty percent more, this is an error, because in total we'd have degraded by thirty percent.
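A minimal sketch, as an illustration rather than the actual CI job, of the two-level check described here: compare the merge request against current master, and against an older master baseline, using the twenty percent margin:

```js
const NOTICEABLE = 0.2; // the 20% "just noticeable difference" margin

function checkRegression(mrMs, masterMs, oldMasterMs) {
  const vsMaster = (mrMs - masterMs) / masterMs;          // MR vs current master
  const vsBaseline = (mrMs - oldMasterMs) / oldMasterMs;  // MR vs older master

  if (vsMaster > NOTICEABLE) return 'fail';   // noticeably slower than master
  if (vsBaseline > NOTICEABLE) return 'fail'; // compounded drift crossed the line
  if (vsMaster > 0) return 'warn';            // slower, but within the margin
  return 'ok';
}

// e.g. master drifted 10% (1000ms -> 1100ms) and the MR adds 20% on top:
// checkRegression(1320, 1100, 1000) === 'fail' (32% total vs the baseline)
```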
A: That is way too much. However, if master was more or less stable and now we are degrading by twenty percent, it might be OK. But again, we have to think this dependency through, because technically, as I mentioned during the session, twenty percent is just the noticeable difference, right? If one merge request degrades by twenty percent, then another one degrades by twenty percent, and by the end of the milestone we have a really, really slow elephant that cannot move anymore, right?

A: So we have to have some control over this sort of thing. And then again: if several merge requests degrade the performance of master over time, who takes the burden of fixing performance? Is it the author of the last merge request, the one that falls out of this twenty percent margin? Well, it might be the case, but it's not really fair. So the whole workflow around this measurement and analysis, in CI in particular, should be thought through, because we need to compare to master.
D: Stan mentioned this during a call, and it's in the chat: when we're comparing the performance of review apps, they're not always very stable and predictable. So you might see variation that would fall in the red, that would trigger an alarm, and yet the code is the same, which kind of defeats the purpose a little bit. I don't know if the Lighthouse CI plugin is more immune to that than the one we're using right now.
A: I suppose the issues you've just mentioned with the review performance job are related to the fact that, as far as I understand, we take the measurement on master only once. We don't do three passes, for example, to get average numbers. So if something goes wrong, that's what we compare against, yeah.
D: Definitely. The other point I remember from talking to Ramya is that the measurements we take there are just the usual standard ones, first paint, last visual change, and I think the User Timing API marks would be incredibly beneficial here, because then we'd be able to time directly the part we're changing: did that delta change? Yeah.
A: That's the problem with all these generic numbers. Metrics like Speed Index may be better in this regard, but still, they are very generic, there are a lot of them, and people who deal with performance have sort of taught themselves to ignore all of these as just noise, because they do not tell you the real picture of performance for you.
F: Sam brings up a really, really interesting approach, talking about CI tools around performance that do mostly static analysis: analyzing code and making sure, hey, do we follow these good practices. We're talking about keeping a finger on the performance pulse in production, and we will have performance issues; but for all those performance issues, how cool would it be if we could save ourselves, rather than having to solve the really, really hard problem of measuring performance correctly? What's our right oracle? Like, how do we do this

F: so it's not flaky? This is a really hard problem. Having a way more pragmatic approach, okay, we've learned this thing causes performance issues, let's write a static-analysis rule to stop that kind of thing: that might be a good enough net to make sure that when we've improved performance, we can keep it that way. I like that idea.
A: That's really great, yeah. In the performance world this is called a performance budget, where you set a budget on different parameters related to performance. And it's not solely about time-based measurements: it's also about the number of requests, the size of JavaScript assets, for example, because that's the main bottleneck on the front end, processing the JavaScript resources. So you set these

A: budgets for the different parameters that define performance, and then you can write tests. This is what I showed during my session in RSpec, where I said: okay, I set the budget for one of the measurements, set another budget for another measurement, and accept a deviation within plus or minus 20%, and I just run these RSpec tests. That's one of the ideas we can incorporate as well.
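A minimal sketch of the same budget-with-tolerance idea in JavaScript (the session's example was in RSpec; this is the concept, not that code, and the budget values are made up):

```js
const BUDGET_MS = { 'snippet-app': 1000 }; // hypothetical per-measure budgets
const TOLERANCE = 0.2;                     // accept a ±20% deviation

function withinBudget(name, measuredMs) {
  return measuredMs <= BUDGET_MS[name] * (1 + TOLERANCE);
}

// In a test runner:
// expect(withinBudget('snippet-app', measure.duration)).toBe(true);
```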
F: One last point here that came to my mind while we were talking about this. Something that just keeps bothering me is the variability: every time we measure, it's going to be different, even on the same machine, because there's so much that's non-deterministic in these runs. And it dawned on me that there's an approach they use outside of software, normalizing data, having some sort of normalizing factor, and I was like, man, that needs to be part of our review job.

F: Let's run something that has nothing to do with the application code, and measure the performance of running something generic; I put here, just as a joke, generating Bitcoin hashes. If that took ten seconds for this run, I now have a weighting bar, and I'm going to look at these numbers through it. Then, if the run on master only took two seconds to run

F: this bar: I think when we do these review performance jobs and we're going to compare numbers, there's got to be some sort of normalizing step, because the environment isn't going to be deterministic. And I'm sure there have been approaches to this, kind of: how do I normalize my machine for performance? I'm sure that there's been...
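A minimal sketch of that normalizing idea, purely as an illustration: time a fixed synthetic workload on each runner, then scale the measured page timings by the ratio between environments:

```js
// Time a fixed chunk of CPU work; slower runners return a larger number.
function syntheticBaselineMs(iterations = 5e6) {
  const start = performance.now();
  let acc = 0;
  for (let i = 0; i < iterations; i += 1) acc += Math.sqrt(i);
  if (acc < 0) throw new Error('unreachable'); // keep the loop observable
  return performance.now() - start;
}

// If this runner took 10s on the workload and the master runner took 2s,
// scale this runner's page timings down by 2/10 before comparing.
function normalize(pageMs, runnerBaselineMs, referenceBaselineMs) {
  return pageMs * (referenceBaselineMs / runnerBaselineMs);
}
```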
A: In order to do this properly, I think we would need to involve a statistical-analysis specialist to do that for us, and I'm not sure this is something we're looking forward to. But the way, for example, when it comes to sitespeed, the way sitespeed sort of normalizes

A: things is to run the same thing three times, right, three passes on the same URL, and then we get the median and the average numbers. And again, of course, you're absolutely right: if we run the same test against the same page a week from now, the numbers might be different. But then, if we see spikes, or drops, on one particular day, this tells us: okay, something happened on that particular day, and afterwards the numbers are again sort of on par with previous results.
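A tiny sketch of the "three passes, then median and average" aggregation mentioned here:

```js
function aggregate(runsMs) {
  const sorted = [...runsMs].sort((a, b) => a - b);
  return {
    median: sorted[Math.floor(sorted.length / 2)],
    average: runsMs.reduce((sum, v) => sum + v, 0) / runsMs.length,
  };
}

// aggregate([1200, 980, 1100]) -> { median: 1100, average: ~1093.3 }
```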
A: So if we have a statistically significant amount of data, spotting spikes in the measurements will be easy, despite some drops, I think. Because, anyway, sitespeed is a retrospective analysis tool, right? We check it after the fact; we sort of see the problem after the problem occurred. So we see: okay, we did something on that day that dropped performance.

A: Unfortunately, it won't tell you "okay, two days from now you're going to screw up and performance will go down"; that's not the way performance monitoring works. The only way to catch things sort of proactively is either running the tests or monitoring this properly in the CI, yeah.
F: And I think a good takeaway is that where there is variability and flakiness, there are approaches to stabilizing it; it just takes, as you said, a different look at it. But I think that's... this is good. Mark, you have a really interesting, thoughtful question after this; let's...
A: ...Jest just gets there, and since this is just testing in Node, Node has no idea what the performance object, the global performance object, is. So it's just like, "what are you talking about?" That's one thing to keep in mind, and that's why the Jest jobs do fail on this particular mark. And the other link: oh, it's nothing more, it's just the link to the merge request where I was doing this, right? The second one is just a joke, without an actual link to the merge request. Let me see, where is it?
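A minimal sketch of one way to keep such marks from breaking Jest when the global `performance` object is missing; an illustration, not the fix the team chose:

```js
// Fall back to a no-op stub when the environment (e.g. Node under Jest)
// doesn't expose the browser's performance object.
const perf =
  typeof performance !== 'undefined'
    ? performance
    : { mark() {}, measure() {}, getEntriesByName: () => [] };

perf.mark('app-start'); // safe in both the browser and the test runner
```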
A: Apparently... well, let's just go on to the next question, if you don't have any questions about this one. I mean, in that merge request you can find the way I incorporated the User Timing API for measuring performance of the Snippets view. That should give some basic idea, and this is the code I was sharing during the session. All right, I remember: the second link is to a branch that has different commits tackling different aspects of using user timing.

A: I also put the link to the original video by Tim that he shared at the end of last year, I think, where he shows how our setup for sitespeed works. So for anybody who wants to dive deeper into this, and understand how it works and how we can read the dashboards for sitespeed, that's the video to check out. Then, next, Andrey asked: would it be possible to couple the performance API here with Prometheus, and use this in GitLab monitoring dashboards?

A: It's definitely... I put the comment there that, frankly, I'm not sure how Prometheus works, but as long as it can get access to Graphite or InfluxDB, which is the database where sitespeed stores all the data, then we can do anything with Prometheus. As long as it gets access to the database. Then Roman asks: are there any best practices with naming markers across the whole product that we could consult?

A: This is a great question, I think. The reviewer of the merge request where I already used the User Timing API for snippets raised the same question, and I'm pretty sure that we have to come up with some naming convention. But it doesn't make sense to come up with something before the RFC gets some attention from managers and we have a green light, before the User Timing API implementation gets a green light from management.
A: Yes, no, yes, I'll just go on, and if people need to leave, well, there are no obligations to stay until the end of this meeting, exactly. Then Roman asks another good question: is there a danger in using something like this in a component that gets mounted multiple times? This is a really interesting point. Technically, the scenario is: we want to measure performance of one component,

A: so we put the mark there, following our naming convention, but then this component gets mounted several times on the page, right? And that's not a really good situation, so apparently we have to keep an eye on this. I don't have an immediate solution for it; apparently we just need to say, okay, we move the marks out of this component into the higher levels, and that's how we should tackle this.
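One possible workaround, sketched as a hypothetical helper rather than a settled convention from the discussion: make the mark names unique per component instance.

```js
let instanceCounter = 0;

// Returns the generated name so the caller can pair it with an end mark.
export function uniqueMark(base) {
  instanceCounter += 1;
  const name = `${base}-${instanceCounter}`;
  performance.mark(name);
  return name;
}

// Two mounts of the same component now produce distinct marks:
// uniqueMark('widget-mounted') -> 'widget-mounted-1', then 'widget-mounted-2'
```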
A: I think we have discussed this briefly, right? There's not going to be noise: we just run sitespeed against GitLab.com whenever we need it, so yeah, no user data is involved here.

A: So, since we're done with that one: Fred (Fred is not on the call) asks as well: since performance is also an accessibility factor, optimizing performance might not just be for high-speed internet; the 200 milliseconds that is not noticeable to someone might be a four-second slowdown for users with slow connections.

A: How can we take this into account, right? This is a very good question: how to take into account, for performance optimization, that perceived load times change with connection speed. The thing is that we are going to deal with general performance optimization based on comparing GitLab.com to GitLab.com, so

A: different users will perceive it as worse or better, but on the large scale it will map onto exactly the same graph, in the same proportion, I would believe. That's my understanding of it, because, as I said, the main thing is that performance on GitLab.com is measured with stable parameters: usually fast 3G with four-times CPU throttling.
D: On this topic, I wanted to tell a very quick story that happened with a YouTube redesign. I was trying to find the link to give you a source, but I'll try to find it later. A couple of years ago they were redesigning the YouTube video page, and they wanted to get it below 100 KB of data to load, so that it would be faster and snappier and everything. Then they noticed that the metric for the average time to load the page went significantly up.

D: They couldn't wrap their heads around it, and upon further inspection of the data they realized that by making the page lighter, they had enabled more people, in more remote areas of the world, to access the page. That meant they were now reaching many more people, but the average timing went up. So that's one of the tales about why we need to be careful when we start talking about these things: you might change one variable and get a significant, unexpected result. Just for us all to be mindful of that.
A: That's a good one. I have a story of the same sort from my personal experience. At one of my previous jobs we worked on optimization for the project, and we were happy when we saw the numbers go down; but the problem was that people stopped buying things from the company after we made the optimization. If we had just been looking at the numbers, that would have looked like a very silly and very frightening tendency:

A: it would break any statistics and would be a huge hit at any performance conference, right? But the solution was that we just had to take a look at the calendar: we had launched the performance optimization in July, when the whole of Norway technically shuts down and everybody is on vacation, yeah.
B: First: is an increase in user interaction the more valuable metric? Because when you just look at the bundle size, or at these tiny micro-optimizations, we might be, like you said, missing the point. I came across something yesterday: I noticed we were duplicating how we were storing an object in a few really big components, in terms of reactivity, which is really expensive. And then I thought to myself: when I make this refactor, how can I make the case that this was an improvement? And it's like...
A: That's a very good point, thank you. That's the idea I had in mind as well: optimizing just for the sake of optimization doesn't make sense; it's the straight path to madness, because people can tinker with performance forever. I've been in this boat: I sat optimizing for every millisecond for weeks, and there were a couple of cases when I was doing this for months, and it's never fun.

A: The outcome is worthless unless we understand when we optimize and why we optimize. That's why I advocate for this user-centric user timing, where we measure exactly the things that do matter for this or that bit of the view. And by employing psychology, something like the Weber-Fechner law, we can figure out whether our optimizations make any sense, or whether we have to care about a performance drop at all at this particular stage. So that's a very complex topic to figure out.

A: So I think we've reached the end of the list. And yes, the server-side rendering thread, thanks for putting in the link. This is a very good topic, and we could gain really, really good performance improvements just by doing that; but "just doing that" is not trivial at all, right?

A: Still, keeping this in mind, especially when we build the front-end parts, is something we have to do for sure. Yes, so I think that was very productive and we had a really nice conversation. Thank you very much, and let's arrange something like this again some time; I think that would be really nice. Yeah.