From YouTube: WebPerfWG F2F TPAC 2019 - day 1 late afternoon - FrameTiming and LargestContentfulPaint
A
So right now, Frame Timing is meant to measure how smoothly content is being rendered. The idea is that we already have a way to measure the latency of frames via Event Timing, for events that we can measure, so this API wants to expose more of a smoothness principle, and the idea is that it will be useful to identify regressions in page rendering.

For the previous proposal, and for compositing in general, the idea is to capture expensive compositing work which would not otherwise be captured in long tasks, because it does not necessarily happen on the main thread. And right now there are websites that use rAF, calling requestAnimationFrame to basically polyfill frame timing, to detect how smoothly the content is being rendered. That is pretty bad, because it's expensive in terms of CPU and battery life, it also kind of violates the principles of rAF, and overall it should be avoided. So, all these things.
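The rAF-based polyfill pattern being criticized here can be sketched roughly as follows — an illustrative reconstruction, not any particular site's actual code; `countLongFrames` and the 1.5× slack factor are assumptions:

```javascript
// Core logic: given successive frame timestamps (ms) and the expected frame
// interval, count frames whose inter-frame gap blew the budget (long frames).
function countLongFrames(timestamps, expectedIntervalMs, slackFactor = 1.5) {
  let longFrames = 0;
  for (let i = 1; i < timestamps.length; i++) {
    const delta = timestamps[i] - timestamps[i - 1];
    if (delta > expectedIntervalMs * slackFactor) longFrames++;
  }
  return longFrames;
}

// In a browser this is driven by the CPU/battery-hostile loop the speakers
// want to avoid, keeping the page permanently "animating":
//   const stamps = [];
//   (function tick(ts) { stamps.push(ts); requestAnimationFrame(tick); })(0);
```

Because the callback fires every vsync whether or not anything changed, the page never goes idle — which is exactly the battery cost discussed later in the session.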
B
We have a touchmove, and then within the frame we produce — those are quite low latency. But on the third frame it takes us a while to produce that frame, and so we just miss producing a frame. So there's one frame still to be drawn, and then all of the frames after that were taking a long time to produce, and so kind of the last few frames — frames three, four, and five, I guess it was — end up dropped.
B
…which led us to focus on dropped frames as opposed to long frames. In that particular example, the difference would be: if we were surfacing long frames, then for all of those animating frames where we switched from 16 milliseconds of latency to 32 milliseconds of latency, for example, we'd be reporting every single frame; whereas if we were reporting dropped frames, we would just report those.
B
Okay,
the
specification,
certainly
by
the
sword,
I,
don't
know
with
other
browser
engines,
but
certainly
chrome,
is
very
happy
to
switch
into
what
we
call
like
a
high
latency
mode,
where
we
just
say
we're
not
confident
that
we'll
be
able
to
scroll
the
62
milliseconds
of
latency,
so
we're
going
to
add
an
extra
frame
of
pipelining
and
stick
to
32
milliseconds
of
latency
and
servicing
a
long
frame
entry
for
every
one
of
those
frames.
B
When there's a scroll, what you care about is the fact that the page is not sticking to the finger. And to capture any user interaction: if you've tapped a button and there's an animation that happens — the menu slides in — the only thing I think you care about is the one frame you drop as you transition; I think that will be noticeable.
B
Sorry — if you have a device with a dynamic refresh rate, long frames get to be a little bit tricky: you have to reason about whether this frame's duration was longer than you thought it should be. Whereas with dropped frames it's: did you start producing the frame, or did you want to produce a frame but were unable to do so?
H
One concern with the approach of dropped frames: at least in Excel, for us, getting to 60 fps is basically unrealistic at this point. So for us, a long frame is generally anything above 32 or 50 milliseconds; we don't look much at the 16-millisecond range for 60 Hertz. And that's also the case, as Todd says, when you switch to monitors like presentation devices, or remote desktop sessions, and so on — all those cases typically go down to 30 frames per second.
F
Sure, yeah — so you plug in and you go to a, you know, 4-millisecond-refresh monitor, or high-speed, and then you unplug and now you're back on your laptop, and then you happen to scroll, and then your UA goes down to 32 — those are all going to vary at runtime, so there's…
B
Yeah, there are two things that the user will notice. There's "we didn't produce a frame when we expected to produce a frame", and there's "we skipped a frame" — we were still producing frames at 60 frames per second, but there was a frame missing, so the content jumped a little bit. The current proposal does not cover the case where you skip a frame in that sense, which is an idea worth getting into.
F
I'm explicitly asking from the point of view of a web developer: how does a web developer reason about the difference between skipped frames at, you know, high refresh rates versus lower ones for consumers, right?
A
Suppose that previously you were consistently hitting the intended frame rate, and then you don't hit the refresh rate. You might still want to know that, because even though it's still fast, it will feel slower than what the user was used to. So maybe it's okay that you expose an entry in that case, even though the frame still got produced.
D
Is this frequency exposed somehow today? Because, with all our scheduling guidance, I think the existing story basically tells developers what to care about — like, you have this many milliseconds to actually run something — but that number can vary based on the refresh rate. Yeah, I think if you look at the interval between rAFs…
B
I think — I mean, there's the other question of how an analytics provider actually aggregates this data. It's easy to turn dropped frames into something like "the fraction of frames which were not dropped over a second", that kind of thing. There are some nice ways of using this, where I think you would probably be able to see things like: oh, there are a couple of users for whom my median frame rate is dropping for a second.
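The aggregation idea floated here — turning raw drop counts into a "fraction of frames not dropped per second", then taking a median across seconds or users — might look like this; the function names and shapes are illustrative, not any analytics provider's API:

```javascript
// Per-second smoothness: fraction of expected frames that were NOT dropped.
function smoothnessPerSecond(expectedPerSecond, droppedCounts) {
  // droppedCounts holds one dropped-frame count per elapsed second.
  return droppedCounts.map(d => (expectedPerSecond - d) / expectedPerSecond);
}

// Median helper, e.g. for a "median second" per user, or a median user.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```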
H
So we don't feel that the rAF callback — at least from what we observed — is that accurate, and the main problem with it is how expensive it is and how much it actually affects the users. So we would like a way to measure long frames without using rAFs; that would be the ideal solution. Another downside of rAF is: if you're making a rAF callback every time to do the measurement, are you essentially losing the requestIdleCallback?
H
So if an API can provide that, it would really help. We were just at a meeting yesterday discussing exactly that, because for us it's really important to understand where to focus our efforts. When we're optimizing application performance, we want to know if we should put more resources into optimizing JavaScript execution, or into optimizing the DOM and rendering-pipeline-related areas. So if there was a way to basically give us hints about where most of the time is spent, that would be a significant help.
F
So conceptually, right now, with requestAnimationFrame you just know there's a time at which a frame came up, but you can't break it down. The JS profiler would tell you some amount of samples, but you wouldn't be able to break down, conceptually, how much of the frame was JavaScript versus, I'll say, layout or garbage collection, right?
H
Of course we would like everything, but if we think about moving gradually, in steps, then the first step is understanding the high-level granularity before we go down to the exact cause. Whether a certain layout was triggered by a certain access to the DOM — that's ideal, but we're not aiming there yet, right? We first want to understand just the high-level breakdown of a frame; that would really help us as well. So…
H
Yes, we use profilers a lot, and I mentioned it in this document. The problem with profilers is that you can only use them when you have a repro of the issues affecting our users, and a lot of the time we just don't, because they're impacted by the user's content, which we don't have, for privacy reasons.
H
So we're trying to solve two major problems. One is animation smoothness. For example, in Excel you can select a range, right — you can highlight a range that you want to select and do something with it. That is considered more of an animation-related operation, because you're moving across the Excel grid; you're animating a rectangle, basically. Also when you're scrolling — that's another thing we consider animating. And the other aspect is response time: basically, when the user has some interaction.
H
If there is a long frame, the response will be slower as well. We get reports from our users that the application is slow, is not responding, is stuck, and it's hard to tell from their input what exactly is happening. So what we started doing is just measuring long frames, using rAFs, every time the user starts doing something — some interaction.
H
It's actually both. The problem is that, since we cannot continuously measure frame rate using rAFs — for the reasons explained earlier, CPU and battery — we actually miss the first user interaction, because only after the user interacts do we get the event, and then we can start measuring. So we always have some blind spots on the initial interaction. And it's true, we can use Event Timing for that, and try to correlate the Event Timing to the start of the measurement.
H
As I said earlier, for us, any frame that is longer than 50 milliseconds seems like a good compromise to measure. So even if we miss one frame on a 60-frames-per-second device — a 60 Hertz device — I guess we're okay with that. And as Todd said earlier, for us, if it goes down to 30 frames per second, it's still reasonable from the user-experience point of view. It's not a 3D gaming application that needs the 60 frames; the users don't notice that much.
H
The way we implemented it is, again, a compromise given the constraints. After we get the event, we set a timer for 5 seconds and then we track: we assume that if a user started interacting with the UI, they may interact a bit more, so we have a time window of five seconds where we measure the frame rate using rAF.
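That five-second window could be sketched as follows — an assumed reconstruction of the described approach, not Excel's actual code; `report()` is hypothetical:

```javascript
// Observed frame rate (fps) from a list of rAF timestamps in milliseconds.
function observedFps(timestamps) {
  if (timestamps.length < 2) return 0;
  const elapsedMs = timestamps[timestamps.length - 1] - timestamps[0];
  return ((timestamps.length - 1) * 1000) / elapsedMs;
}

// Browser wiring (not runnable outside a browser): sample rAF timestamps for
// 5000 ms after the first input, then report the frame rate for that window.
// window.addEventListener('pointerdown', () => {
//   const stamps = [];
//   const stopAt = performance.now() + 5000;
//   (function tick(ts) {
//     stamps.push(ts);
//     if (ts < stopAt) requestAnimationFrame(tick);
//     else report(observedFps(stamps)); // report() is hypothetical
//   })(performance.now());
// }, { once: true });
```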
H
Ideally, we wouldn't even want to have specific code related to the user interaction, right? We would just want it to always be measured, and if we have frame drops, or a frame that is longer than 30 or 50 milliseconds or whatever threshold we set, we would get some kind of a callback, or an entry.
H
We also have internal events in our system, and what we call external events. One example is co-authoring scenarios, where multiple users are active on the spreadsheet and they're changing their selection ranges and performing edits. There are also scenarios where properties of the document change through what's called the extensibility APIs, where users have their own code that may run, returning results or updating the content of the document. So it's not only live user-interaction-related events.
H
Well, what we can do is heuristically infer: when we track that a certain operation happened, we observe long frames in proximity to it, right? We don't have the exact causation link there. So it's basically a heuristic-based exercise where we mostly use statistics, and in many cases it actually works, on large data.
H
Yes — I think if we had it in the Long Tasks API style: if we had a way to really understand when we have long frames, all the time, we could decide whether we want to track them or not using our own sampling logic. If we had that all the time, regardless of user input, it would really help us, because we could add additional markers to indicate what's happening in the application at any given moment.
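For comparison, the "Long Tasks API style" delivery being asked for looks like this today for long tasks; a long-frame entry type does not exist — that is exactly the ask. The filter function stands in for the page-owned sampling logic mentioned, and the 50 ms threshold is an assumption:

```javascript
// Page-owned filtering/sampling over an observer batch of entries.
function pickLongEntries(entries, thresholdMs = 50) {
  return entries.filter(e => e.duration >= thresholdMs);
}

// Real, browser-only wiring for long *tasks*; the group is discussing an
// analogous push-style entry for long/dropped *frames*:
// new PerformanceObserver(list => {
//   const slow = pickLongEntries(list.getEntries());
//   // add app-specific markers here to record what the app was doing
// }).observe({ type: 'longtask', buffered: true });
```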
F
So I heard you say — basically, the key thing that you're looking for, the highest priority, is to remove your dependency on requestAnimationFrame as a way to detect long frames ("yes"), and then, secondarily, figuring out how to get better attribution at runtime — is that the second-order ask? Is that correct?
H
Right — that would be secondary. Our highest priority is basically to collect more data, so we can have better analysis of our user experience. We're actually limited in collecting data because of the rAF loop: both because we're limiting ourselves to time slots during the user session, and also because we want to impact the least amount of users, so we're using very aggressive sampling. So we're getting very few — it's still a lot of data, but we get a subset of the data we would like to collect.
F
Both MSN and others use a similar technique, and are limited in their detection of long frames. Both of them are using this technique for identifying bad ads — ad tags that are shipped by various networks and have been shown to cause long frames — and they then use correlation as a way to give feedback. Not that, though, they're able to… (inaudible).
I
Long tasks should be sufficient unless proven otherwise. I think if you have this content on the main thread, the vast majority of problems will be on the main thread, so they should be correlated with long tasks. If something is happening off the main thread — you know, that by itself is fine; I think that's a lot of the error — I would like to see proof first, and that proof can be collected by having a profile that shows: okay, the problem is actually on the other thread, and the profile is missing that information.
N
Yeah, so — by the way, would surfacing the frame timings help at all there? The browser implements the callbacks, so I think the important thing is: there could be one task that takes, let's say, ten milliseconds, and then another, kind of wrap-up, task that takes like five milliseconds, right, and then the rendering after that, I believe.
B
And so, from my perspective, you get a pretty big win from having a browser API, even in terms of quality, over the rAF signal, because we could get lower-level signals — like, the GPU might say "oops, I didn't have time to present that frame", and we would be able to surface that — whereas I suspect that would not be the case for Safari.
F
This right here caused msn.com to be the number-one battery-consuming website in Microsoft Edge on laptops, when they had the requestAnimationFrame-based detection enabled: since early on they were using their rAF technique to find skipped frames, and for four or five months they were eating users' batteries. They would not have done that if this had existed.
O
I mean, just one number at the end, which would be just an overall kind of average frame rate. We're also tracking the deltas visually, to show, on an individual basis, if you want to go back and look at that session in particular, along with all your other resources, in a waterfall devtools-type view — right now these things are presented for that kind of purpose. But otherwise it's really just: what one magic number can we present? And I'm not even sure we would count that as really helpful.
O
…figure out how to fix it. Yeah, similar to long tasks, where this is kind of a number of measures — I think right now it's helpful in aggregate, when you're looking at, you know, deploying one version versus the next version, to see what got worse, that kind of thing. But unless you're tracking it very specifically, it's frustrating for them to act on.
F
Well, if I understand the use cases — at least as I understand, the use cases so far are all about correlating actions within the duration of a frame. So if you're going to do dropped frames, you're still going to need the concept of the range in which the dropped frames happened, for a lot of the correlation. — We would certainly surface the time at which the frame was due.
F
Is this all about the compositor thread, or is it also about — because you get into that interesting question: if every single frame has the same delay, you will impact the feeling of input for the user. But I think the intent, you're saying, is that Event Timing would be how you would measure that; you would only use this to measure animation smoothness.
N
We don't even know that — for example, because we don't know how fast the screen refreshes, yeah.
F
…to what users actually see, and what we're trying to improve on-site. So would an example be, for example, in Excel: showing a couple of screenshots — someone else is editing the document, here's the first screenshot, and then here's another screenshot after the user edits the document — and we're trying to measure whether this code is getting the DOM updates, all the updates, without skipped frames? Something along those lines sounds perfect. So really spell out exactly a couple of key use cases, and then the MSN ads one.
F
Of the use cases, the one Excel is using is doing more work, because they have to then correlate to actions, to then say "hey, there are certain actions that are triggering more dropped frames", and then they have to go do, you know, way more digging. Whereas the MSN case is simple correlation, because they have a direct component to turn on and off.
F
It's also like — you guys are in favor of the idea that, hey, websites are really measuring this already, and they are actually finding things that make it better or worse, based on requestAnimationFrame, already. Is that something — you know, because there are multiple websites doing that already; it's just, how do we replace it with the right Web API, in all the browsers, yeah.
I
Whether these statistics are useful, I don't know very well. And then there was, I think, another comment in the same vein, which said: well, we don't really have information on whether this is related, on whether this correlates with what users actually experience. And now that we have multiple browsers shipping it, it would be nice if we could know whether there is more evidence in the direction of this actually matching what people experience.
B
I can provide them. So the idea is: First Contentful Paint surfaces the first image or text that is drawn, but obviously that's not going to be particularly representative of user experience. Largest Contentful Paint is a slightly more complicated heuristic, which gives somewhat better results: it looks at either the largest image or the largest block of text, whichever is bigger — where a block of text is defined as the lowest-level block element, or something along those lines.
B
So
we
just
look
at
whichever
those
is
bigger
and
then
service
the
time
at
which
that
element
was
wrong
and
you
just
look
in
the
first
one.
So
it's
the
first
big
thing.
That
means
so
we
don't
report
immediately.
So
if
we
see
something
bigger
than
later,
we
see
something
bigger.
We
will
report
the
bigger
thing
the
risk
around
when
we
were
Porsche
is
either
when
the
user
navigates
away
or
when
there
is
discrete
user
inflation.
I
like
clicking
I.
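The reporting rule just described — candidates only grow, and the last one before navigation or a discrete interaction wins — can be modeled with plain objects standing in for real `largest-contentful-paint` entries:

```javascript
// Candidates arrive in non-decreasing size order; the final LCP is simply
// the last candidate observed before reporting stops.
function finalLcp(candidates) {
  return candidates.length ? candidates[candidates.length - 1] : null;
}

// Browser collection (real entry type):
// const candidates = [];
// new PerformanceObserver(list => candidates.push(...list.getEntries()))
//   .observe({ type: 'largest-contentful-paint', buffered: true });
```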
B
Maybe, yeah — in terms of gathering data it's good. Most of our research did not depend on shipping the API; most of our research was done in the lab, getting filmstrips and examining large quantities of filmstrips, comparing to various metrics. I think last time we talked about this, we said we would publish some of those at some point, and we still haven't gotten to it, sadly.
B
Yeah, I mean, so we're still not really nailing everything down — that doesn't seem to match your expectations. But then, as web developers in the wild are trying to use this — is that assumption, the assumption that those paints are not real content, something that should be considered? Yeah.
B
We've put some amount of effort into user studies here, with very little success. That's not to say that we could not put in more effort, but I think if you want proof that it correlates well with user experience, we're going to have a hard time coming up with that. If you want proof that it does better than First Contentful Paint, for example, I think we can definitely get that.
B
A low bar, yeah, but it is also what developers are using right now… Element Timing is definitely going to be better than Largest Contentful Paint for the cases where developers are willing to annotate their page; but for the cases where developers are not willing to annotate their page, this is what the data would be. I don't have, like, a strong position on that. The other question, the way we first talked about it — let's…
N
I'm not saying that we should do this, but as a thought experiment: say you've got a page, and you have some criteria — like, it looks nice, you wait until the text and image came in. You could imagine an experiment where you delay everything painting, hold everything back, if it were possible to do that, and then show it to the user all at once — does it feel the same as when it trickled in? It's obviously not going to feel the same.
B
…that Speed Index is a great metric, but of the metrics that we've got, LCP correlates most strongly with Speed Index. Maybe that means something, maybe it doesn't; it doesn't correlate great. In all of these cases it's very difficult to understand what a number means: you boil a correlation down to, like, a Spearman coefficient — what does that mean? Is it good? Is it not? It's hard to make decisions based on that.
F
…piece of data, that's super valuable. But the specific question that was asked was: how closely does LCP map to the user's perception of performance, right — which is actually why he brought up user studies. The tricky part, though, is that across sites LCP will be perceived differently. So let's say there's the top 50 sites, and we just grab screenshots of all of them and then ask: does this site feel better, does that site feel better, when LCP is greater on one site than on another?
F
…when you think about the goodness of perceived performance: what I'm saying is that there is more to perceived performance than the LCP — the way that the page displays, the ordering — there is more to how it feels than one magic number, right. And I have not seen sufficient studies of perceived performance to give us a feel for that. But within the same site, then the question would be: well, what if — I think somebody was saying, what if LCP is a little…
N
Well, I think the reason is because — let's say that, you know, across some twenty pages you come across a page, and it turns out users perceive it, on average, as, say, a high percentage slow. And then let's say you find out that it was just the background image being counted, and you change something else, maybe fix it, and then on average users perceive it, in that case, to be slightly faster. That…
B
So the clear thing we can do is provide the group with a bunch of filmstrips, and I think we should be able to do that. Past that, I haven't heard any proposals that are particularly convincing to me. We can certainly quantify how good it is with a number — let's get inside it, yeah. If folks could surface some concrete numbers on that, that'd be great; but the depressing piece there is the ordering of the metrics that we had.
O
A good way to find out whether Largest Contentful Paint is a good proxy for "did this page load quickly and draw quickly" — what I would want to see is to take a large number of sites, have the original site authors annotate what the most important thing to draw is, and then compare that metric with our Largest Contentful Paint, or various variants of it.
C
…just on the first hundreds of milliseconds of the experience — that is the whole thing. So I think Largest Contentful Paint might be useful for websites that want to kind of separate ads from their own content, sort of automatically — assuming that they have something that's bigger than the ads, because, you know, that's the case on the first rendering of the page. That's the main value I see. So this is useless for Wikipedia.
C
Wikipedia will never have ads. And it tends to be that all these new metrics about initial paint are all very close to each other, because if you have an already reasonably fast website, most of the stuff that's above the fold is going to appear all at once — you know, if you've been very diligent about making things lightweight and shipped in the right order.
C
In that sense you're kind of splitting hairs looking for new metrics in that area, because you already have a very good idea of when that happens, and from the surveys I've done, they're fairly poorly correlated to users' opinion about performance. So I would say it has some use; the question is, you have to figure out whether enough websites care about that. Otherwise you have Element Timing, which I think is a far superior solution to that problem space.
C
But what I'm saying, though, is — take for example newspaper websites, where often you have giant ads appear that are much bigger than the actual content people care about, and that's what Largest Contentful Paint will end up measuring sometimes. And so, you know, I think you need a way to deal with iframes.
H
Well, I can say that for Excel, LCP would probably measure the skeleton content we show before we start loading the actual content, because that's sort of an image. So it would not measure the user experience for the real content the user is interested in; for that, we would probably have to use Element Timing.
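Opting the real content into Element Timing, as suggested here, is a matter of annotating the element. A rough sketch — the identifier `main-grid` and `report()` are hypothetical, not Excel's actual markup:

```javascript
// In HTML, the skeleton image stays unannotated and the real content opts in:
//   <img src="grid.png" elementtiming="main-grid">
// Pick the annotated element's paint entry out of an observer batch.
function entryFor(entries, identifier) {
  return entries.find(e => e.identifier === identifier) || null;
}

// Browser wiring (real entry type):
// new PerformanceObserver(list => {
//   const e = entryFor(list.getEntries(), 'main-grid');
//   if (e) report(e.renderTime || e.loadTime); // report() is hypothetical
// }).observe({ type: 'element', buffered: true });
```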
B
And I certainly wouldn't claim that Largest Contentful Paint is going to do the right thing in all cases — it is a heuristic. Element Timing is the correct solution for people who really do want accurate data; but for developers that are not inclined to annotate their pages for the occasion — I think, yes…
R
…For us, we use the main thread as an indicator that the page is done — we'd say, okay, the page has settled, everything has gone out to the server — and hopefully this will match with our LCP. In other words, if we assume that the last paint is close to the last JavaScript activity, we'll be able to correlate it. If that's true, fine; and if it's not, we'll see what extra JavaScript we are doing.
F
Actually, the core question I was getting at is: I suspected there was multimodal variance, and if any of those sites that have some really clear chunks are also sites that happen to collect user-happiness data, there's the potential for them to correlate these modes with LCP, to gather information about whether this actually maps to something. The tricky part is that it's just a really hard thing to get right in an origin trial; a heuristic-based thing is really hard to measure, I think.
B
First Paint and First Contentful Paint are also effectively heuristics, if you say their objective is to quantify some aspect of user experience; they're just less crazy ones. Certainly LCP does have more heuristics than would be ideal from some perspectives, but I think it makes the right trade-offs.
C
In my opinion, we're building a toolbox, and if this piece of data covers an area that wasn't measurable before on websites, then it has value, and it's up to them to decide how to combine it with other things to measure their users' performance. The question is whether it matters often enough, for enough websites, to be justifiable. Yeah.
N
Yeah, here's an interesting one — for example, the idea that you want to remove the background image, which was involved in one of the first iterations of Largest Contentful Paint. One question I'd like to ask is: okay, if you remove that behavior, how much of the web does that change, right? Because I think the thing that I'm hung up on is: this current heuristic is pretty complex — how much of that complexity is needed?
N
For example, the web-font side of this API: when we fall back, we fall back to the default system font — we're just waiting for the font to load and then swapping it in, you know, different font stuff like that. If we expose the paint before those things happen, we may be reporting a different timing than the moment the text actually became available with the intended font.
F
Is it fair to say, though, that there's a lot of foundational work — in terms of performance and how it is perceived — that, if any company funded it and went after it, would be helpful for web performance and for how we all think about it? Is that a fair statement? Yeah. So we are all theorizing about what feels good to users in terms of performance, and there turns out to be little good correlational data about whether, when you change certain types of things, users actually perceive it. But, like, I was just deployed on a big website…
F
…the background image. But my core question was: well, if I do or do not count the background image, will that feel faster or slower to the user? I don't know — I'm a performance expert and I can't answer that. And this is the work we're all trying to do: are these heuristics good, are these measurements good, have they been correlated to engagement and dollars?
C
Yeah, I think we need more systematic studies about engagement and dollars. Those are the two things that are very difficult for me to get: I cannot measure engagement well, because we don't track our users, and we don't have revenue dollars. So I'm stuck with asking people what they think, which is another way to go at this. It'd be very nice if there was more systematic study of these new metrics by people who can correlate them to engagement and dollars, which I cannot do with just Wikipedia.
C
In my study it wasn't random — there was a lot of overlap — but there were surprises, about people being unhappy about what we would consider very fast page loads, and so I think this is not something to be discarded immediately. It probably means that they might have been frustrated by something we could not measure, or that we don't measure, in my case. So, you know, having a very fast first paint and people still being upset doesn't mean that people are stupid.
C
We keep tuning the survey to account for this, and so now it's getting to the point where I can measure sub-percent changes in people's opinion; it does make sense, it does progress in a way that is logical. So I think it's worth doing this, but if you do it, you have to do it at a large scale; otherwise, like you said, there's too much noise for you to capture — like, you know, "I did this and people's opinion didn't change."
C
I
do
try
I,
didn't
Tyler,
just
contentful
pains
the
reason
I'm
not
pursuing
more
of
those
is
because
already
first
painted
first
potential
paint
or
most
identical
for
us
and
so
element
ending
was
more
interesting
for
us.
So
I
look
into
element,
IMing
and
looking
at
the
time
it
takes
for
the
top
dimension
article.
It
wasn't
it
wasn't.
You
know
suddenly
magical
it
was.
It
was
about
the
same
as
the
question
we
got
for
first
Anton.
C
First
potential
pains,
so
it
was
maybe
slightly
better,
but
not
to
the
point
that
I
can
declare
how
this
is.
Our
awesome
have
to
switch
to
this.
So
it
was
okay,
but
it
was
not
a
revolution
mostly
because
most
of
the
time
is
the
same
as
first
paint
for
us
because
you
tend
to
have
you
know
the
top
of
the
page
appear
all
at
once,
including
the
top
image,
which
is
usually
a
fairly
small,
the
thumbnail
in
terms
of
power
weight,
so
yeah
and
again
I.
C
But yes, please, please do studies if you can; tell all your colleagues running websites, and your companies: please study more. There's just really a lack of science in this, and it's not just, you know, asking people — it's in general: very, very few studies. I would say there's probably two or three academic papers that come out every year about web performance. It's too little.
C
But not with verifiable data, often. Like, I like that website, but part of me says the data is not released, and you have to trust people's claims, and maybe they had a mistake in the way they set up their experiments. It's kind of difficult to trust all of these claims, as opposed to a peer-reviewed paper.
C
Well, no, not necessarily going to that extent, but you know, I don't hear enough stories of even lab studies of those new metrics, to figure out if they correlate well to people's impressions. You know, for instance, the metrics I'm most excited about today, the new stuff we're working on, are things that are completely unrelated to what we were measuring before, like event timing or layout stability. Those are great. In my mind, studying web performance is like we're trying to study the elephant.
C
You know, we're a bunch of blind people, and if we focus again on something, it has to be the first render, which is the leg of the elephant, really. We know that leg in every dimension, but there's a whole other part of the elephant that we've never looked at. And so we have to think of really huge areas of the user experience that might be correlated to performance that we're not measuring at all right now. I think this is where the biggest gains are.
C
It's, you know, I'm sure it's of value to other people to study the initial rendering more. For example, one of the main findings of our research was that the best-performing metric was loadEventEnd, which is as old-school and basic as it gets. But I think what this is hinting at is, for example, that the focus on above the fold is probably outdated nowadays.
C
If people react to the full load time, including a bunch of images that they don't see, which is the case on Wikipedia a lot, with a lot of stuff below the fold, and apparently that clearly is better than the first image that they get, then that means that people pay a lot more attention to the whole page.
C
It's like when I see people who have a very fast first paint and they're upset, and there are a lot of them, I really wonder what's going on. And unfortunately, I can't go through the screen and ask them, and I don't have the resources to do more lab studies and things like that. We have to, like, chip in. You guys have a lot of resources to work on standards, but I don't see, you know, that effort being made to study those new metrics in a more regular manner, to figure out:
C
does that really help us actually get closer to what people are feeling, really looking at the whole picture of the user experience, not just from the browser's perspective? Because the way of thinking that we tend to have is that we start from what the browser can expose and we work our way back to the user.
N
That brings up an interesting aspect, which is whether this largest contentful paint number correlates with either the load event or first contentful paint or any other existing metric, and whether there are cases where it does not, right? If it does not correlate with any of them, we have to also study whether it, like our current data, is useful for business metrics, yeah, and that will be interesting.
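A correlation study like the one proposed here could start by capturing the final largest-contentful-paint candidate next to loadEventEnd on each page view. A hedged sketch, feature-detected so it is a no-op where LCP entries are unsupported; the `report` callback stands in for whatever analytics beacon a site uses (an assumption, not anything discussed in the meeting):

```javascript
// Hedged sketch: record the final LCP candidate alongside loadEventEnd
// so the two can be correlated later in analytics.
function collectLcpAndLoad(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !(PerformanceObserver.supportedEntryTypes || [])
        .includes('largest-contentful-paint')) {
    return null; // LCP entries not supported in this environment
  }
  let lcp = 0;
  const po = new PerformanceObserver((list) => {
    // Later entries supersede earlier candidates; keep the last startTime.
    for (const entry of list.getEntries()) lcp = entry.startTime;
  });
  po.observe({ type: 'largest-contentful-paint', buffered: true });
  // Candidates stop once the user interacts, so report at pagehide.
  addEventListener('pagehide', () => {
    const [nav] = performance.getEntriesByType('navigation');
    report({ lcp, loadEventEnd: nav ? nav.loadEventEnd : undefined });
  });
  return po;
}
```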
B
There are some questions here. I think the questions "is it useful to people" and "does it correlate to user experience" aren't quite the same question. For "is it useful to people", we can see if people adopted it, getting some adoption numbers, and that could prove it. I think people think it's useful for them, but I don't think that's what we're asking.
B
No, I mean, like Gilles said, there's a standard of rigor with which you could pursue perceived performance that Wikipedia is trying to reach, that no one else is matching, including Mozilla. I think that is a standard that this group should aspire to, and it's really unclear to me how we could do that. Looking at a single website you control, there are a lot fewer variables.
C
If your site is similar, it helps with your decision making, I think, you know, because you can relate to the way they have it set up; it's better than nothing. Currently there's no information about which websites it's useful for. So yes, it's not great if it's just Google, but then, you know, if everybody chips in, maybe you'll have the Google websites, you'll have a Microsoft website, and then, once you have two or three big properties that have nothing to do with each other, they will all say that this is useful.
C
Then this is a very big signal for people. To a degree, when you say, you know, look at adoption: to a degree we kind of drive adoption with the way we describe these releases. So whatever we release, people could be excited about it because it's new, and there can be some adoption of it.
C
Maybe the adoption is more about how easy it is to implement, and in a way we might be steering the entire web performance community in the wrong direction, because we decided to release this because it was cool or whatever, and everybody jumps on it because it's easy to implement compared to other APIs, but it doesn't mean that we're actually solving the thing that the entire community should be solving.
B
I definitely agree with that. I also think a lot of these APIs, so largest contentful paint for example, are not targeting domains like Google, right? They're targeting domains that are not willing to put the effort in to do their own instrumentation. And I think the level of complexity you tend to see in applications, the complexity of the app, is correlated with whether or not you'd be willing to do explicit annotation, and so taking a bunch of complicated applications that would be willing to do this explicit annotation...
N
So, for example, the memory API, right? There was, you know, the commentary about how the exposed memory was used to find a leak, yeah, and that's a very compelling story. Maybe it's also compelling evidence because you guys own top websites, and, you know, a lot of people use them, and being able to fix the leak on that website is a very compelling showcase to say, okay, this API works, right? If you have something similar to that story now, that would be very good evidence.
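The API being referenced is the proposed `performance.measureUserAgentSpecificMemory()`, which is only available in cross-origin-isolated contexts. A minimal, hedged sketch of sampling it with the necessary guards (the `report` callback is an illustrative placeholder):

```javascript
// Hedged sketch: sample the proposed memory API to spot leaks in
// long-lived pages. Guards make it a no-op where unsupported or
// where the page is not cross-origin isolated.
async function sampleMemory(report) {
  if (typeof performance === 'undefined' ||
      typeof performance.measureUserAgentSpecificMemory !== 'function' ||
      (typeof crossOriginIsolated !== 'undefined' && !crossOriginIsolated)) {
    return null; // unsupported environment
  }
  const result = await performance.measureUserAgentSpecificMemory();
  // result.bytes is the total estimate; result.breakdown has per-frame detail.
  report(result.bytes);
  return result;
}
```

Comparing successive samples over a long session is what surfaces a leak, rather than any single reading.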
B
We have, I don't remember if you were on the list specifically around this general topic, but one thing we've done is take a bunch of filmstrips, rank those pages by hand, and then test a couple of different metrics and see how good they are at reconstructing the order. In the beginning, yeah.
C
The filmstrips, yeah, you know, and there have been a lot of studies like that, the classic academic studies where they show people videos side by side and you pick which one is the fastest loading. But that never happens in real life; you know that's not what influences people's behavior. So again, the methodology really matters there. It's a good intent, but you have to acknowledge the fact that it's really disconnected from the user experience.
C
It's something to think about. Look, I see it as not perfect either, of course, you know. Is asking people the question directly the same as the way they behave? Although that could be studied separately. It is covering an area that we're not measuring at all right now; no API gives you people's opinion, and so that's an extra signal that's very strong and very interesting to compare to others.
C
Not just in the context of evaluating APIs: imagine having this at the browser level and it being exposed in CrUX. Then people can do a lot of data mining there and compare, you know, have the human signal in addition to all the other stuff. You could even look at the patterns, whether people are happy with the pages in terms of performance, etc. So yes, I think it's definitely an idea to consider seriously, because the human signal is missing here.
E
So the general idea of directly asking the user about their experience is an interesting one. Whether that's better or not than the other metrics businesses use, engagement, length of sessions, conversions, whatever, is entirely unclear. So depending on who's deciding here, the browser's metrics and what we optimize for in terms of engagement are different from site engagement, which is different from the business's measure of this, and if there's misalignment among those...
C
You have to look at it over time. Yes, if you move things around, people will click on ads a bit more, but how long does that last? If you're only looking at snapshots in time, then you'll get a very different story from what the evolution is over six months, or a month, whatever. Those effects tend to wear off when it's just shuffling things around or changing the experience a little bit so that the muscle memory stops working.