From YouTube: WebPerfWG TPAC 2020 meetings - October 19 - part 1
A
Okay, hello folks, and welcome to the 2020 TPAC festivities for the Web Performance Working Group. As always, the calls will be recorded and then posted online, and for this year, because the world has imploded, we are doing this online, split up into four days this week, three hours each. They are packed with a bunch of super interesting proposals, and personally I'm looking forward to that. The first slot is basically where we're gonna go over some introductions, because I think we have some new folks who may want some background. But for that: Nick, do you wanna?
B
Sure, I will kick it off a little bit. Everybody, welcome to TPAC this year. It's my first year as a co-chair for this, so I'm gonna try to drive some of this, but fall back on Yoav's excellent advice through parts of it. Let's see, so why don't we just start out with what the mission is. We're all here to think about and measure and improve performance of the web.

B
We have made a lot of progress this year. Certainly I joined as one of the co-chairs. We recently pushed HR Time Level 2 to Recommendation, and Safari recently added Paint Timing to WebKit, which is fantastic.

B
I noticed you added this one here as well [crosstalk], for our goal of having a SPA-focused meeting in the middle. I think we were planning to do that around, maybe, the PerfMatters conference, back in the day before that was all cancelled.

B
We do have a couple of discussions planned for this week around SPA-related items, so I think we'll kind of continue some of that here. But other than that, on our conference calls and everything, we're working to close out issues and push specs forward and to further get adoption.

B
I'm not sure if that total at the top, 86, is still correct, but those are the numbers of issues that we've closed under each repo over the last year since our last TPAC. So that's a lot, a lot of great things that we've worked on.

B
We do have a larger discussion planned, I think on Wednesday, as part of the rechartering of the group. Just as a heads-up, our current charter for the Web Performance Working Group ended back in June. We did get an extension for it to December 31st of this year.

B
Another thing, and I think it's part of that discussion too: we're gonna be seeing if anybody wants to pick up some of the specs that we have and potentially be an editor or co-editor. Some of the specs that we have have had a good start, and they have a lot of things there already, and some open issues, but there aren't necessarily active editors working on them or trying to push them forward.

B
As a group, we have a lot of incubations as well that we frequently discuss during the calls, things that we may or may not want to actually bring into the official charter at some point in the future. But this is a pretty big, hearty list.

B
Lots of different measurement things here, and a couple of active APIs for our web developers to use to make their sites faster, so there are some good options here that we may want to discuss further.

B
I believe you recently checked out the stats from Chrome, I'm assuming, and usage across the web for some of these APIs that we work on. We also have year-over-year increases or decreases as a percentage for some of them as well.

B
So there's been a lot of uptake across pretty much most of these APIs. Kind of interesting that PerformanceObserver and Long Tasks usage went down a little bit. Might be something interesting to look into there, about why, I don't know.
A
Yeah, if anyone has ideas regarding what may have been the cause for that, I'd really appreciate it. But overall we see that most of the APIs are green and show significant uptake, and yes, these are the public stats from Chrome Status.
B
We have a lot of new participation this year, and I think that's been one of our greatest strengths: getting a more diverse voice. A lot of new companies and invited experts have been joining the calls and sharing the things that are important to them. I think that's one of the best ways for us to grow, getting more diversity in what people are using, what people's needs are, and everything like that. So thank you, everybody.

B
Thanks to everybody that's participating; a good old pat on the back, self high-five. Thank you very much, everybody, for doing what you can this year, for joining calls, and just being part of the group. I think we're working on a lot of good things.

B
We do have an agenda posted in a Google Doc. We're also going to try to scribe minutes at the end there. I think Yoav and I will primarily try to do that this year, but if we for some reason happen to both be talking at once, and there's anybody else that wants to help with that, we may try to recruit somebody as well. But we will try to get the notes consistently throughout, in that same Google Doc.
A
Sure. So most of you are probably familiar with it, but the W3C has a new code of conduct, the CEPC, which I don't remember exactly what it stands for, but there's "positive" in there, I should know that. But essentially, the gist is that there is a positive and professional work environment.

A
Yeah, Code of Ethics and Professional Conduct, no "positive" in there; but yeah, the full title is the Positive Work Environment at W3C. So essentially, we all share the responsibility of making the experience here positive for everyone. We all commit to treating each other with respect, professionalism, fairness and sensitivity.

A
And we all need to actively promote diversity and seek out diverse perspectives, and make sure that, in terms of speaking time, dominant members of the group don't necessarily occupy 90% of the talking time, and that new members and people from all backgrounds have an opportunity to say what they want and need to say. On that front, unlike some other working groups, we don't necessarily operate with a queue.

A
On top of that, as part of the CEPC there are a couple of sections, both around unacceptable behavior and around safety versus comfort. Ideally, I think we should all just read the full document; it's not super long, and it is well done. But in case you don't have time to read all of it, these two sections, I think, are super important to internalize. And yes, as it says, we are recording the event and we'll publish it online.
B
As far as this kind of intro block today goes, I think there are two other things I would like to do. One is just to go over the agenda, to give everybody an overview of the week, and the second thing would be, if we have time, maybe to go down the list and have everyone quickly say who you are, who you work for, or anything that you want, just a really quick intro round.

B
I think we could probably just try to go alphabetically down the list at some point, if that sounds good to everyone. Why don't we start with the agenda? I'm going to try to switch the tabs that I'm sharing.

B
Let me know if you see the agenda. This is the same link that we shared in the chat, if anybody wants to follow along. Is the agenda showing right now?

B
Okay. So obviously we're kind of in the intro section. We've split out each day into, I think, four to five main topics, roughly half-hour slots, with a break in the middle at 10:30 Pacific (it'll be 1:30 Eastern), and then one or two afternoon sessions as well. So for today, we are currently in the intro section. I think Michael's gonna be talking about frame timing and smoothness reporting next. Cliff Crocker was gonna be talking about some of SpeedCurve's hot issues, when he's able to join.

B
[...] with the WebApps folks, so we'll have more joining. Some updates on NEL and the Reporting API, and another Reporting API proposal, I think from Yoav as well. And then we have a little bit of overflow time on Thursday, in case any discussion particularly goes long; we could always tack it on there.

B
If there are other things to discuss: I would say if there's anything really urgent that anybody wants to talk about, certainly bring it up now or in the document, and we could try to fit it in, especially if we run under time for one of these other things. But that's kind of how the schedule is flowing at this point. So we can probably do intros in a moment; any other things to discuss first?
A
Just in terms of scribing: minutes before we started the call, Michael suggested that we could have backup scribes. So me and Nick can take on primary scribe and take turns in that, and then have folks that pick up the slack for bits and pieces when we miss out. If folks are okay with that, we could have volunteers for secondary scribes and maybe start a list somewhere in the doc, and then we can pick someone from that list.
B
Yep, and we would probably need to give them full editing rights on the document as well, because I think right now it's comment-only.
B
So if anybody wants to, suggest your name there and we can go from there. I will stop presenting at this point, and why don't we just do a quick round of introductions. I'll start with myself: Nick Jansma. I work at Akamai these days on our mPulse RUM performance monitoring product, and I work on Boomerang, their JavaScript library that collects RUM data.

C
All right, yes, I'm Alex Christensen. I work at Apple, primarily on WebKit.

D
I think I'm up. Hi, I'm Andrew Comminos, I'm a software engineer on Facebook's browser team.

B
Ben would be next. And sorry, alphabetically by first name is just how it's sorted, I checked.

H
Okay, I'm Benjamin De Kosnik. I work at Mozilla on performance and a variety of metrics-type things.

E
Oh, so in chat it's sorted by first name, or your visible name in Hangouts, but you yourself are always at the top of the list, so you don't know your own order. So maybe.

B
Thank you, Dinko.

B
Hi, I'm Gilles. I work on performance at Wikipedia. Ian?

K
Ian Clelland. I work at Google; for the purposes of this group, I'm spec editor of the Reporting and Network Error Logging APIs.

B
Thank you. Mark?

M
Thank you. Noah? Hi, I'm Noah Lemon, I'm a software engineer at Facebook on the browser engineering team.

O
Hi, I'm Nolan Lawson. I work at Salesforce on the UI platform performance team.

B
Welcome. I'm sorry, I forget, is your first name Peter? Pearl, is that it?

P
Yeah. Okay, so probably I would go third, but it's okay. So, I'm working at Trivago on the interface platform team, and we're also doing some interesting stuff on web performance lately.

B
Awesome. Patrick Hulce first?

B
Fantastic. Patrick Meenan?

D
Hey, Pat Meenan. I work at Catchpoint on WebPageTest: synthetic performance testing, measurement, that kind of stuff.

C
All the good stuff. Robert? Hi, I'm Rob Flack. I work at Google on the Chrome team, on input and animation, and in particular some of the smoothness API measurements.

B
Thank you.

B
Welcome, Scott.

Q
Yeah, hey, Scott Haseley from Google. I work on scheduling and scheduling APIs on Chrome.

N
[inaudible], Mozilla, software engineer on performance.

B
Welcome, Subrata.

T
Welcome, Tim. Hi, I'm Tim Dresser. I work for Google on Chrome performance.

B
Fantastic, I think that's the list, unless somebody joined while we were going down it. Apologies if I butchered anybody's name, but thanks to everybody for giving introductions. I think it's good to set context a little bit. Anything else before we kind of head into the first session with Michael?
E
I'm expecting one person to join, but he might be here shortly. First, confirming that folks see this.

E
There we go, okay, thumbs up. I have to switch tabs to see. All right, so I wanted to do a quick round of introductions about smoothness reporting for animations and scrolling. There have been plenty of requests for this over time.

E
I will say this might be sort of information-dense. Starting at the beginning are the things that we are more confident in and have stronger opinions on, where we would like strong feedback if anyone disagrees; as the talk goes on, it's more of an introduction of options moving forward, which are not necessarily set in stone, so I don't want to get bogged down in the details if folks hate the end of it.

E
If that makes sense. So, why discuss smoothness? Animation and scrolling are a big part of UX on the web. Stuttering tends to be a visible problem for users and leaves a bad impression of a page overall, so hopefully I don't have to convince folks of this. This has been a long-requested part of web perf, and it's really hard for developers to know exactly when stutters or hiccups or jank happen, why they happen, or what exactly was the thing that slowed down.

E
So what I want to cover today is how exactly to measure smoothness, then some options for how we can report that data, and then I'll leave it with the open questions that come to mind for me, though of course there might be more. All right, so first of all, measuring smoothness. I think when we talk about smoothness it's often in terms of FPS; certainly for things like gaming this is what folks usually talk about, and FPS is the ultimate measure there. But when it comes to the web, this has some flaws.

E
First of all, a perfectly static website still feels perfectly smooth, even if it's not generating any new frames. And second of all, what does a frame even mean when you have multi-threaded rendering, separate system compositors, and variable screen refresh rates? And certainly now we have very fast refresh rates on modern screens. So what does FPS even measure, especially if you're showing lots of stale frames on time and quickly, or doing many partial updates?

E
So the way we've been thinking about it, increasingly, is that it's not so much about FPS; it's about missed opportunities to show expected animation updates. In many ways this is becoming a bad name, but we typically call this just "dropped frames".

E
It doesn't necessarily mean the frame was entirely dropped; it just means that it missed some opportunity to show an update. The types of things that we consider as needing to have an update, as animations, include scrolling, pinch-zoom, JS-driven animations like rAF loops or just repeated updates to animated properties of the DOM, CSS and Web Animations, canvas, video, and continuous input handling, like drawing applications where you have your finger down and you're drawing wherever the finger goes.

E
And there are things that do update the visual state of the page that we don't necessarily consider animations, and that are typically tracked using other types of metrics: just inserting new content onto the page; clicking a button that takes a long time, even if it produces some UX response, even if it's in stages; or forms and how their appearance changes as you input into them; or things like background polling.

E
Okay, so defining dropped frames: any time an animation is expected to produce an update for a vsync, yet for some reason either the page doesn't provide that update or the browser does not do so in time, we consider that a dropped frame. Some things to note: animations are not always expected to produce an update for every single vsync, and for things like scrolling, we expect to update the scroll position to a certain point in time with each one.

E
So let me give you some examples. This is a rAF animation: every single frame we call requestAnimationFrame and we run this animation step, and some of the time, every other frame specifically, we update some style on an element, and half the time we just skip and do nothing. So in this case only half the frames are expected to make a visual update, and we know that, even if there wasn't a visual update, this animation could have happened if it was handled in time.
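A minimal sketch of the kind of rAF loop being described (the actual slide code isn't in the transcript, so the element, distance, and cadence here are illustrative):

```js
// rAF loop that only makes a visual update on every other frame. Only the
// even frames are "expected" updates; the skipped odd frames are
// intentional idle frames, not dropped frames.
let frameCount = 0;
const el = document.getElementById('box'); // illustrative element

function animationStep() {
  frameCount++;
  if (frameCount % 2 === 0) {
    // Expected visual update: move the element a little.
    el.style.transform = `translateX(${(frameCount / 2) % 200}px)`;
  }
  // On odd frames we deliberately do nothing, so no update is expected.
  requestAnimationFrame(animationStep);
}
requestAnimationFrame(animationStep);
```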
E
If that makes sense. For CSS animations, you can define them in a way where there are expected idle periods. So in this case, in these keyframes, you have a period where, for 50% of the time, the transform is exactly the same, so the animation isn't moving, and there won't be any new frames from the perspective of this specific animation.
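The keyframe shape being described might look like the following (a hedged reconstruction using the Web Animations API; the element and timings are illustrative, and the 25%/75% offsets match the range referred to later in the discussion):

```js
// Between the 0.25 and 0.75 offsets the transform is identical, so the
// animation has an expected idle period: no new frames are expected there.
const el = document.getElementById('box'); // illustrative element
el.animate(
  [
    { transform: 'translateX(0px)',   offset: 0 },
    { transform: 'translateX(100px)', offset: 0.25 },
    { transform: 'translateX(100px)', offset: 0.75 }, // same as at 0.25
    { transform: 'translateX(0px)',   offset: 1 },
  ],
  { duration: 2000, iterations: Infinity }
);
```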
E
So,
even
if,
for
whatever
reason,
the
browser
doesn't
generate
a
new
frame,
it
wasn't
a
problem.
There
was
no
visual
update
that
was
expected
or
for
scrolling
effects.
Here's
two
different
examples:
the
example
on
the
left.
You
have
a
non-passive
event,
lister
for
the
mos
wheel
event,
which
will
take
a
long
time
to
execute,
and
since
this
is
non-passive,
it
will
actually
delay
scrolling.
E
However,
once
the
scroll
starts
it
can
be
perfectly
smooth,
and
so
the
animating
part
of
the
scroll
is
smooth,
even
though
there's
a
initial
delay-
and
this
is
a
bit
of
a
perhaps
contentious
example-
so
we
could
discuss
it
at
the
end,
whereas
on
the
right
here,
you
have
a
scroll
that
does
have
a
passive
event
listener,
so
the
scroll
itself
is
not
delayed.
However,
there
is
a
styling
update
to
the
page
on
every
single
new
scroll
position
that
takes
a
long
time,
and
so
this
is
their.
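The two scroll examples might be sketched as follows (an illustrative reconstruction, not the actual slide code; the 100 ms busy-wait and the selector are placeholders):

```js
// Left example: a non-passive wheel listener. The browser must wait for the
// handler before it can start the scroll, so a slow handler delays scroll
// start (but the scroll can be smooth once it begins).
window.addEventListener('wheel', () => {
  const start = performance.now();
  while (performance.now() - start < 100) {} // simulate a slow handler
}, { passive: false });

// Right example: a passive listener, so the scroll starts immediately, but
// slow style updates on each new scroll position can drop frames mid-scroll.
window.addEventListener('scroll', () => {
  document.querySelectorAll('.parallax').forEach((el, i) => {
    el.style.transform = `translateY(${window.scrollY * 0.1 * (i + 1)}px)`;
  });
}, { passive: true });
```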
E
All right. So we'd love to know whether folks think of smoothness in terms of dropped frames, or partial updates, or expected animation updates appearing on time; I'd love to hear feedback on that. But once we have this data, there are still lots of different ways we can actually summarize it and record it. This is a screenshot of Chromium's DevTools. Not all types of animations are captured here just yet, but you can still see that there are a lot of different overlapping animations. Some are very short,

E
some are very long in the timeline, and so you could slice this data in different ways. This is a fabricated example. Let me just check who just joined.

E
I wasn't sure if that was a question or anything. So, if we were to report all of the raw data points, you can imagine slicing up the whole page for every single frame, every single vsync, giving it a frame ID, and then going down the list for every single animation, some of which start at different times, take a certain duration, and run using different strategies: CSS animations, scrolls, rAF loops, video, JS animations and so on, and so on for each frame ID.

E
So in theory we could present all of this data: for every single frame ID, for every single animation, exactly what happened in the context of that one. But of course this has lots of concerns. Ergonomics: it's just a lot of data.

E
It's not really useful on its own; there's lots of post-processing that needs to happen. There's the performance cost of reporting this much data all at once, especially if an animation is defined in terms of lots of little pieces moving independently. And there might be security and privacy implications with reporting all of that data.

E
Since these animations can be defined using different strategies, started at different times, and respond to distinct events, you could have metadata that's unique per animation, which is useful to report: sort of slicing along the row level in the previous chart that I shared.

E
I think this also would help simplify attribution: if a particular animation is slow or janky, it would immediately become visible in its individual summary, and I'll show an API in a second. But the interesting part is that every single animation would get the number of frames that were expected to be generated and the number that was actually produced over a certain duration, and from that you could turn it into sort of a percentage smoothness, or an average effective FPS,

E
for that one specific animation. I presented a variant of this earlier this year, and I linked it here on the slides. A strawman for this might be creating a new type of performance entry for an animation, of entry type "animation". It would report the type of animation that was running, when it started, how long it took, the vsync interval during that animation, and the number of frames produced and expected, so you could turn this into the number of missed frames over the number of expected frames over that duration, which would be very useful. You could also provide some attribution, like the element that was animating or the ID of the animation, because it's at the animation level.
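How the "animation" entry strawman might surface to developers (a hedged sketch: this entry type is a proposal under discussion, not a shipped API, and the field names here are assumptions):

```js
// Hypothetical observer for the proposed per-animation summary entries.
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Assumed fields: animationType, framesExpected, framesProduced,
    // vsyncInterval, plus the standard startTime and duration.
    const dropped = entry.framesExpected - entry.framesProduced;
    const smoothness = entry.framesProduced / entry.framesExpected;
    console.log(entry.animationType, 'dropped:', dropped,
                'smoothness:', (smoothness * 100).toFixed(1) + '%',
                'over', entry.duration, 'ms');
  }
});
po.observe({ type: 'animation', buffered: true }); // hypothetical entry type
```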
E
Another, entirely different option is to think about the page itself and its timeline. This is more akin to the original frame timing proposal where, rather than summarizing each animation when it's done or after a certain amount of time, you can imagine reporting, for every single frame, all of the things that were happening within that frame and whether they appeared on time. This perhaps better matches what the user feels they are actually experiencing when they are navigating the page.

E
A strawman for this might be a PerformanceFrameTiming, of entry type "frame", where you get the presentation time of every single frame, possibly fuzzed, and then the number of updates to animations within that frame that were expected and the number that were produced. An intricacy here is that not everything that went into this frame was because of an animation, so you might actually have no expected updates for an animation, or maybe there was no paint update at all; we'll talk about some options for variants here.
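The per-frame strawman could analogously look like this (equally hedged; "frame" entries and these field names are assumptions about the proposal, not a real API):

```js
// Hypothetical observer for the proposed per-frame entries.
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.updatesExpected === 0) continue; // nothing was animating
    const missed = entry.updatesExpected - entry.updatesProduced;
    if (missed > 0) {
      // entry.startTime would be the (possibly fuzzed) presentation time.
      console.log('frame at', entry.startTime, 'missed',
                  missed, 'of', entry.updatesExpected, 'expected updates');
    }
  }
});
po.observe({ type: 'frame', buffered: true }); // hypothetical entry type
```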
E
All right. So if folks are in agreement about the concept of dropped frames being the most important thing to measure, and the options for how we can present that data make sense, then we can begin to discuss which are most preferable. But we still have a bunch of open questions.

E
First, the easy stuff: I'm curious to know what other vendors currently measure for animation smoothness. And of course, do folks agree on this dropped-frames perspective, on expected animation updates, in terms of defining animations?

E
Some of them are easier than others; especially for JS-driven animations to animatable properties outside of rAF loops, I think it will require lots of work. Rob's here to help, because he's been working on this. And then perhaps the harder questions are: is a dropped frame just a boolean signal for a single frame, that something didn't update that we expected to update? Does that mean this whole frame is sort of partially updated, or do we want to count that there were 10 expected updates and only nine of them completed,

E
so it's 90%, good enough? Or does each animation need to have a weight, like an impact region? If there's a tiny spinner in the corner, is it less important somehow than a big animation?

E
We absolutely expect more smoothness hiccups during load, so is it more important to consider the amount of time it takes to settle and become smooth, after which it needs to be always smooth and stable, or do we just blend it into a single metric? And then, like the scroll example that I gave earlier, do we need to make a distinct notion of how long it took to start an animation, so the latency until start and then the smoothness afterwards, or again,

E
do we blend it into a single metric? And with that, I think that's all I have; I'll open it up for discussion.
U
Can I ask, what do you mean by "expected"? In option two you mentioned that you want to report the expected frames and produced frames. What do you mean by expected frames?

E
Sure. So for something like a CSS animation, if the keyframes were defined in a way where we expected it to move from the previous frame, and it didn't, for whatever reason, like the browser didn't actually render it on time, then this was a frame that wasn't produced even though it was expected. Versus, it could be in the middle of an animation;

E
but let me go back to the example that I gave. Here, between the 25% and 75% keyframes there's no actual visual update; that is to be expected, so it doesn't matter whether an updated frame was actually produced. It would look visually identical to the previous one, and so there's no need to consider whether the frame is skipped.

U
Right. I mean, even between the times when there is a visible update, it's unclear to me what the number of expected frames is, because the number of frames might be dependent on the animation itself, in the sense that some environments, in particular certain iOS devices and iPads, adjust the frame rate based on what's being animated. So it's unclear to me what the expected number of frames even means here. Yeah.

E
Unclear to me what that even means; so, in option two in the proposal, and again the actual options for presenting are the most up in the air, there would be an interval, which is the average vsync interval. So if you do increase the frame rate, the interval is expected to change, so between the number produced, the number expected, and the interval, I think you could get a fairly decent, well...

U
Yeah, I mean, it's still unclear to me what this means. Also, in a JavaScript API case, the browser has to render the frame in order to decide whether there's any visual update or not, right? How do we know that there was a frame expected? I mean, if you get the callback and you know there was no drawing, do we say that we missed the frame, or we didn't miss the frame?

T
Is all of this, like, is it unclear in the case where the vsync interval is not constant? If the vsync interval is constant, no? I guess we'd argue it's still unclear, because whether a frame was different from the previous frame is hard to define. Is that right?

U
Right, yeah. I mean, also, what happens in the CSS animation case, right? If you're on a device that is capable of, let's say, updating the display at 120 Hz, yet initially the screen was throttled to a lower frame rate, say 30 Hz, because there were no animations, and then an animation starts to run...

U
It's totally unclear to me what all of this means, right? So I think the conception that there is an expected number of frames at any given point or any given time frame is a bit misleading, because the number of frames the system decides to produce is dependent and contingent on the very animation that's being run.
E
Right. I mean, the goal here was to say there was an opportunity to present, like a vsync, and for whatever reason the opportunity was not taken: either the page was slow or the browser was slow. But in this case you're unaware of how many opportunities there are, or you can affect the number of opportunities.

V
Yeah. So I'm not really familiar with all the browser vendors, but I believe there's something called sort of the next rendering opportunity, or something like that. So if there's something we expect to produce from the page, then that's kind of what the next rendering opportunity or the next expected frame would be. So the question here is, if the device is at a different frame rate when it starts, then essentially...

V
The plan here, I think, is to measure the performance of the page and the browser, and we have to sort of depend on the system to tell us what the display frequency is. So if the device itself is at a lower frame rate at the beginning, let's say, in the example you gave, the device is configured to run at 30 fps at the beginning of the animation, and after it starts it goes up to 120 fps, then towards the beginning,

V
while the device is at 30 fps, we would expect the frames to be produced at 30 fps, and as long as the animation frames are produced at 30 fps, you would consider that a perfectly smooth animation. And when the page or the device switches to the 120 fps setting, at that time we would expect to produce at 120 fps, and if we are not able to keep up, then we would sort of count those as dropped frames. I don't know if that makes sense. Well...

U
I mean, but what I'm saying is that if the browser, and ultimately the website, is not producing frames fast enough, we will never ramp up to 120 Hz, right? Because the display itself throttled, since the app is being slow. So it's unclear to me what that means. I mean, at some point, hypothetically, it's possible that you stay always at 30 Hz because you're slow.
T
So, if I understand correctly, one option is we just use the refresh rate that we end up with. So if, like was just mentioned, we're switching between 30 and 120, we say we're expected to produce once per vsync if there's a page update. So that is well defined. I think the problem is it doesn't correlate well with user experience, because you can be switching the refresh rate all over the place without impacting user experience at all.

T
The alternative there, I think, is to look at dropped frames per unit time, not dropped frames per expected frames, which I think still has this problem to some degree, but at least not quite as badly.

U
That's dropping frames, you know: if everywhere else you're expecting 120 Hz smooth animations, and all of a sudden on this website you're stuck at 20 or 30 Hz, from the end user's perspective, what is this, why is this page so janky? Right? So yeah, I don't know, it's hard to...

E
...say what this means. Does ramping up the refresh rate rely on overproducing frames? For things like CSS animations, since it's specified somewhat semantically, the system can just decide; it predicts, it knows more frames would be better. But what about for things like rAF?

U
I mean, it's hard to say whether there's anybody who has successfully come up with a good metric here. It's really hard to say. I mean, like you said earlier, in a case where the refresh rate is not changing at all, the metric is kind of clear. Let's say you are at 120 Hz and you're not drawing at 120 Hz: it's very clear.
C
Sorry, go ahead. Yeah, so it sounds like the other option would be: if we know that the system can choose to ramp up the frame rate, then we would consider any choice to run at a lower frame rate as dropped frames. But I guess the issue with that would be that in these phases of ramp-up we would be considering it to have dropped frames, even though there are some system limitations on how often the frame rate can change.

N
So would it make sense, I mean, it all seems like a very hard problem to think about, but would it make sense for the API to actually allow the developer to specify what their interest in smoothness is?

N
I'm not sure exactly how to define it, but at least from my experience, some applications are probably okay with 20 fps for most actions or specific actions, and for some it's intolerable below, I don't know, 50 or 60 frames per second that you want to sustain. Would allowing it to be adaptable or configurable by the user make sense?

U
You know, it won't be an accurate number, in the sense that the theoretical maximum frame rate may not have been used by the device anyway at any given point, but it will still give you information like: oh, this device is supposed to be able to run at 120 Hz, but my website is slow, so it could only generate 60 Hz. So that might actually be useful information for a website to know. I don't know if that...
E
Yeah, yeah, the original idea here was to say the worst thing that can happen is: an opportunity was available, the screen could refresh faster, the battery was high enough to allow whatever refresh rate, and, as defined, the animation had a useful update to give, and yet it was not, for whatever reason, actually updated. This is what a user would feel; this is the worst thing that could happen.

E
Now, there are other things to optimize for, like pushing the envelope of how often you are given an opportunity to render, so for things like the interval property, you might want to optimize for having the frame rate be as high as possible, or, equivalently, the interval as low as possible, which is a secondary thing to optimize for, I guess. But you definitely want to make sure you present for every opportunity given. We got a couple...

T
Yeah, that research definitely sounds valuable to me. The other angle we could take, which I know has a bunch of downsides, would be to look at the time we take to produce a frame, which I think has the nice property of sort of dodging a bunch of the vsync-interval issues, at the downside of being less representative of user experience.

E
So that was all super useful feedback. Anyone have anything on a completely different angle?
V
I would like to learn how other browser vendors or other sorts of analytics providers measure smoothness right now, or have looked into measuring smoothness. I think Patrick talked about video games. Are there other use cases, or people that measure smoothness right now on the web?

B
So, from a RUM analytics point of view, we do capture a very simple metric, just frame rate over time during page load, but we do not find it to be a very valuable metric to capture, because of all of the issues presented here, and just inconsistencies in whether a page should be animating or not, or presenting or not. So we're definitely looking for more useful user-experience-like metrics.

H
Hey Michael, is the intent here to kind of grow towards the Media Capabilities Working Group, and have this be applicable to video content in the future?

T
I think the video use case, to do a good job of that, requires a whole lot of additional data around what bitrate is being streamed and all kinds of extra data, some of which I think is already exposed for video playback. My guess is that to serve that use case properly, you really do need first-class support for video streaming, not animations in general.

B
I think in a lot of ways it's nice to be able to think about reporting on the negative things that happen, right? So the painful processes, the things that you miss, the dropped frames: those are in some ways easier to present or track, because it's very intuitive that there was pain here, there was something wrong that happened here. So even just getting counts of these things that happen over time, or whatever, is a very useful metric.

B
Okay, Michael, is there anything else that you want to get back on for this part right now?

E
I think not. I think next steps are: you should see more from us, yeah, expect more on this topic, and we'll put some thought into the feedback that was given. Thanks, folks.

N
[question not captured]

E
That's coming next; I'll probably do that soon.
B
Great
question:
okay,
thank
you.
Michael
next,
up
on
the
agenda,
we
did
have
a
presentation
from
cliff
cracker
of
speed
curve.
Cliff.
I
think
I
see
you
there,
I'm
here,
hey
buddy!
All
right!
Are
you
ready
to
do
you
have
anything
to
present?
Are
you
ready.
W
A
It
yeah
another
note.
It
looks
like
the
raises
hand.
Feature
is
only
supported
for
some
folks,
but
not
everyone,
so
we
could
just
use
the
chat
for
you
know
or
like
hand
emoji
or
whatever,
for
people
who
want
to
say
something
and.
A
B
And
cliff
we
had
done
a
really
quick
round
of
intros
before
you
joined
as
well.
So
if
you
just
want
to
do
a
very
quick
intro
on
yourself
when
you're
ready,
yeah
one
second,
I
just
I
remember.
W
J
J
R
A
A
Mask
yeah,
or
maybe
we
can
oh
hey,
cliff.
B
W
W
Okay,
so
you
see
my
screen,
you
can
hear
me.
I
really
apologize.
I
actually
had
to
recreate
these
slides
from
memory
right
before
this
because
of
the
mountain
doing
this
and
apparently
just
botched
everything,
so
I
apologize
it's
a
little
bit
high,
really
quick,
sorry
for
the
delay,
I'm
cliff
and
I
work
for
speed
curve.
I'm
a
product
person,
I've
been
doing
web
performance
and
web
performance
monitoring
and
analytics
for
roughly
the
past
18
years
and
I
live
in
denver
colorado.
W
I
really
really
love
and
follow
all
the
work
that
this
working
group
has
been
done
doing,
and
I
know
that
I've
been
burying
the
fruits
of
your
labor
for
many
many
years
so
first
off,
you
know
very
humble
to
be
talking
to
this
group.
Thank
you
for
everything
that
you're
doing
and
thanks
for
giving
me
some
time
today.
The
things
that
I
wanted
to
hit
on
were
going
to
be
a
little
bit.
W
Working
on,
but
really
to
give
you
some
feedback
mainly
and
kind
of
talk
to
you
a
little
bit
about
what
we're
seeing
in
the
wild,
what
our
customers
are
seeing
and
reporting
on
and
how
things
are
being
sort
of
perceived
with
some
of
the
latest
work
that
you've
been
doing
the
three
areas
I
want
to
talk
about
if
we
have
time
one
core
web
vitals
and
then
two
measuring
performance
of
ads
in
the
wild
and
then
three
just
quickly
touching
on
server
timing.
W
This
does
work
a
lot
better
for
me,
if
you
guys
just
you
know,
want
to
unmute
yourselves
and
ask
questions
if
you
can
do
that,
because
I
do
have
two
screens
going
and
I'll
try
to
keep
an
eye
on
people
raising
hands,
but
I
probably
won't
do
a
great
job
of
it
so
feel
free
to
just
interact
and
ask
questions.
Because
part
of
this
is
me
asking
some
questions
of
you.
First
off.
I
want
to
give
the
the
quick
positive
feedback
that
core
web
vitals
has
had.
W
I think, on performance in general in this industry, we have seen a huge wave of interest, not just for our product, which is great, especially in a crazy time like we're seeing right now with COVID, where we're seeing our business growing and doing really well, but also just in terms of people wanting to make their sites faster. There's been a huge wave of interest, and a lot of it's coming from people outside of our echo chambers. It's coming from outside of performance.

W
It's coming from outside of engineering. It's very much coming in from SEO teams, marketing, from CEOs on down, really kind of a top-down-driven approach, because people are very, very tuned in to the work that, credit to Annie and team and everyone else, they've been doing with Core Web Vitals. So huge kudos to everybody. There's a really strong desire out there for people to learn more about this, as well as improve their vitals. I'll call this next one a con, but it's really not that much of a con.

W
I'd also like to see it, and I think our customers kind of want to unify on, some of the things they're looking at across these other areas, because you are getting more inbound requests from marketing, from SEO; they want to see it in their other tools as well. So I think there's a big push for: how do we get Core Web Vitals to be something that's not as much of a Google-centric metric, and more of a performance or analytics or just user-experience set of metrics that people care about? Whether that's Facebook using this to drive some of their behavior around what they're placing in terms of ads, or other tools outside of performance tools, like Adobe Analytics, and other places where these folks might live. I just think that there are huge opportunities there, now that we've taken that first step.
W
So how are people doing? I just grabbed some data this morning, just to sort of look and see for LCP across our customers, and I'm looking at the US; I'm not looking at all of our customer base. We have a lot of customers in Europe, probably a majority of our customers, but I thought the US was kind of more interesting, due to where I see some of the bigger opportunities. But for LCP, at the 75th percentile,

W
if we look across all of our customers, they're doing pretty well; they're at 2.2 seconds, sort of right at that threshold that we've given the recommendations on. And we're even seeing that at the 95th percentile for some of this. And again, this is desktop, in the US, so keep that in mind. Happy to share more data. Yeah.

B
Is this synthetic or RUM data?

W
I apologize, yeah: this is RUM. This is from our LUX product, across the board, for everybody.
W
I will follow up, when I share the links to this, with some stuff in an appendix that looks at mobile and looks at other regions as well, but I just sort of wanted to use this to illustrate LCP. It has been, I think, one of the easier Core Web Vitals for our customers to understand, and one that they've actually been working on for a while in terms of improvement.

W
It registers with them. It just makes sense, especially for our commerce customers and people that are focused on getting product imagery and things like that to the page really quickly, so no surprise that they tend to be doing well here. I did circle a little bump here; I think this is 86,

W
where Annie and team, I believe, rolled out some changes to improve the LCP metric, where we were seeing some false positives before, with really low LCP scores, and FCP scores as well, due to the opacity issues and things that we saw. So we are seeing an uptick in that. So, good: it looks like it's more accurate now. In synthetic we absolutely see it: when we look test by test and case by case, we saw a huge improvement there for LCP.

W
The bad side of this is that it means a whole lot more support calls and inbound that we get, when people are like: hey, wait a minute, how come LCP changed so quickly? So I know that this is not perfect, and I'm absolutely an advocate of getting things out there in the wild as soon as possible. But when we see changes, big changes, to metrics, I think it does drive a little bit of an issue with our customers around confidence in the metric, and whether they can really trust it. Specifically

W
those that actually went and looked at the filmstrip and saw LCP happening before there was even a start render on any of the WebPageTest filmstrips or anything like that. So, great that we're fixing it, great that we're addressing it. I think there are probably more things that we need to do to make LCP more accurate in all those edge cases. In general, I think it's been widely accepted.

W
It's a pretty awesome metric for people to focus on.
B
Cliff, I had one thing too, just around seeing the visible metric change here, and I know that you had mentioned how a recent version of Chrome changed it. One thing that, as a fellow analytics provider, has been valuable for us is to actually have a public changelog, if you will, from vendors, where they note important changes like this that may be reflected in metrics. Is that something that we can give to our customers as well?

B
We noticed very similar things, and, as you said, it actually builds in some cases more confidence in it, because you have something to point to. So I just want to say I do appreciate it. Chrome, I know, is doing this; I don't know if the other vendors are yet, but it is very valuable whenever there are breaking changes, or just significant changes that may be reflected version over version.
W
Yeah
I
just
learned
about
that
from
annie
the
last
time
we
met,
and
it's
now
like
a
saved
reply
and
our
support
when
we
talk
to
people
about
this,
because
we
get
a
lot
of
inbound
about
it.
So
absolutely
thank
you
so
much
for
that
and
would
love
to
see
it
as
well,
so
good
call
out
cls.
On
the
other
hand,
this
is
a
this
is
kind
of
an
interesting
one
too.
W
So
this
is
one
where
I've
kind
of
we're
still
sort
of
figuring
out
I'll
talk
about
some
of
the
issues
we've
run
into,
and
I
think
you
all
are
aware
of
some
of
them
as
well,
but
in
general
people
seem
to
be
doing
relatively
well
here.
I
think
this
might
be
a
little
bit
different
when
we
break
it
out
by
industry.
We
absolutely
see
higher
cls
scores
when
we're
looking
at
media
than
we
do
traditional
sites.
W
A
lot
of
that
driven
by
either
display
out
of
performance
or
just
maybe
you
know
not
following
as
many
of
the
best
practices
or
more
complex,
dom
sorry.
What
is
cls,
sorry
cumulative
layout
shift.
I
apologize
I
I
should
have
stated
that
first
so
of
the
core
web
vitals,
the
largest
control
contentful
paint.
First
input
delay
in
cumulative
layout
shift,
there's
actually
a
great
if
somebody
wants
to
post
this
in
the
chat,
there's
a
great
blog
that
nick
just
did.
W
That
really
goes
in
the
details
of
cls
and
how
to
measure
it,
and
and
some
of
the
caveats
there,
which
is
an
awesome
reference
post,
but
essentially
measuring,
as
things
are
dynamically
shifting
on
the
page
and
changing
for
the
user
very
simple
way
of
stating
it
here's
kind
of
an
you
know:
here's
here's
some
of
the
issues
that
we've
seen,
though,
so
so
looking
at
this
right
around
the
gate.
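For reference, the in-page accumulation being described is typically done along these lines (a sketch of the common pattern, not SpeedCurve's actual code):

```js
// Accumulate CLS from layout-shift entries (exposed in Chromium). Shifts
// that occur right after user input are excluded, per the metric definition.
let cls = 0;
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
});
po.observe({ type: 'layout-shift', buffered: true });
// When the beacon carrying `cls` fires (at onload vs. at page end) decides
// how much of the page lifecycle the reported score covers.
```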
W
I think we knew this was going to be a hard metric to get our heads around, due to the fact that it wasn't really a time-based metric; it kind of is a score-based metric. It is a score-based metric, I guess. So it was a little bit hard to sort of tell people what it meant and how to interpret it. I think with most of our customers we got over that hump pretty quickly, because out of the gate they got it: higher score is bad,

W
lower score is good. However, they started running into a lot of challenges, and I will say this probably makes up maybe 40 to 50 percent of the support questions we get in around Core Web Vitals: why the hell are my CLS metrics so different when I'm comparing field and lab data? So from a Lighthouse report, you know, we can see this:

W
a CLS score of 0.13 from the CrUX data versus 0.023 in a synthetic test. And this is really, as we've found and known, due to (a great term that I stole from Nick's blog) the load-limited nature of synthetic, where they are dependent on the onload event firing, versus what you see in CrUX, where essentially they're waiting until after the load event, until the visibility state changes on the page. This also impacts most of our RUM tooling.

W
I know at least for us it does, where we're tied to the load event, so you can see at the bottom that the CLS score for RUM at the 75th percentile matches more closely with what we see for synthetic. However, trying to explain and reproduce CLS scores for data that's coming from CrUX: you can do it, and you can show customers where it's coming in, and when those layout shifts are occurring or coming after that load event. However, it's just really hard for them to track down, and they're sitting there, basically knowing that they're potentially going to be judged by the CrUX data, but the tooling doesn't necessarily support what's being collected there. Actually, I want to pause for a second there, just to make sure, one, everybody sort of understands this issue, and two, to hear any commentary on it that I may be missing or need to be thinking about.

E
I thought it was... yeah, Benjamin wrote: he wants to know, so how...

W
...are you explaining this? Thank you, yeah. So basically, how we're explaining it is that CLS is a cumulative score. It does continue to accumulate as the life cycle of that page continues. It's not bound to load time; it's not bound to other things. If users are staying on the page, it's being measured, you know, by Chrome and by CrUX, really. So the way we explain it is, we end up having to essentially open up DevTools, which isn't the easiest thing to do

W
when you're dealing with a lot of these inbound requests. We open up DevTools, we show when our beacon was actually fired, which tends to be right around the end of the load event, or shortly thereafter, and then where those layout shifts are happening after the load event. So they can still get visibility into when they're happening, but unfortunately our tools aren't showing them that, and it kind of sucks to not have a great way to really illustrate that and for them to repeat it.
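One mitigation that follows from this discussion is to send the beacon when the page is hidden or unloading rather than at onload, so the reported score covers more of the lifecycle, closer to what CrUX records (a sketch reusing the `cls` accumulator from the snippet above; the endpoint is a placeholder):

```js
// Report accumulated CLS at page teardown instead of at the load event.
function sendCls() {
  // sendBeacon is designed to survive page unload; '/rum' is illustrative.
  navigator.sendBeacon('/rum', JSON.stringify({ cls }));
}
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') sendCls();
});
window.addEventListener('pagehide', sendCls); // fallback where needed
```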
W
So,
to
be
honest,
I
don't
know
what
the
best
you
know
option
is
here
like
I
don't
know
if
this
is
something
where
you
know,
we
just
need
to
be
better
or
have
something,
and
I
know
I've
talked
a
little
bit
with
yo
about
reporting
api
other
ways
of
looking
at
how
we
can
measure
or
allow
analytics
providers
to
measure
this
after
the
load
event
to
provide
something
more
realistic
or,
if
there's
a
better
way
to
normalize.
What
we're
seeing
between
crux
and
synthetic
data.
E
Cliff,
if
I
could
just
add
cls,
is
a
full
page
lifecycle,
metric
and
most
of
the
issues
you
just
said,
I
believe,
are
more
specific
to
comparing
lab
and
run
data,
because
lab
data
is
currently
like
up
until
the
load
event,
as
opposed
to
full
page
lifecycle,
and
it
doesn't
there's
no
way
you
can
synthesize
real
user
interactions.
You
don't
actually
know
what's
going
to
happen
with
all
these
things,
I
think
there
will
be
increasingly
more
metrics
that
fall
into
that
bucket,
like
smoothness.
E
That
I
just
talked
about,
will
be
a
full
gauge
lifecycle
event
and
it
will
have
a
different
set
of
metrics
issues.
It
might
actually
be
the
reverse
problem
or
smoothness
will
be
worse
during
load
than
it
will
be.
Post
load
and
so
rum
will
look
nicer
than
a
lab
metric
would
signify,
and
then,
in
terms
of
responsiveness,
when
I
think
nicholas
is
going
to
talk
about
that
this
week,
you
know
we
might
be
moving
to
to
measuring
more
responsiveness
throughout
the
full
page
lifecycle.
E
So
layout
stability
has
some
unique
issues
in
terms
of
it
being
complicated
or
the
way
cumulative
layout
shift
is.
Maybe
has
some
issues
or
whatnot,
but
it's
the
bigger
issue.
Is
this
full
page
lifecycle
measurement
and
how
folks
usually
aren't
thinking
in
these
terms
and
lab
testing
isn't
set
up
yet
to
measure
that
so
just
making
that
distinction
that
it's
not
it's
not
cls.
That's
the
bulk
of
the
issue
here,
I
think.
A
I think there are two problems here. One is that, for all these user-interaction-dependent metrics, lab and RUM will inherently differ, and that is fine; that is life, we have to accept that. But I think that today many of the RUM providers are not collecting metrics for the full lifetime of the page, but stop at some point, on load or at a later point.

A
But as long as we only collect metrics up until a certain point in a page's lifetime, there will be differences between CrUX, which collects them up until the renderer dies, and RUM vendors, and we need to try to close that gap. And yeah, as Cliff said, I will talk about one potential solution to that later.
W
Great question. I think it tends to be the latter that starts the concern. It's like: hey, this is way overly optimistic, because I'm looking at CrUX and I've got something that's over 0.1, so I'm going to get flagged for it, but in your tool you're telling me everything's great. So it does start there, but then it turns into: and now that I'm looking at the Lighthouse report and I compare, I also see that there's very different stuff there. So, Nicolas,

W
I think the answer really is more the difference between our tools and CrUX. I haven't gotten a lot of feedback comparing our RUM versus someone else's RUM, like mPulse versus LUX: are they seeing discrepancies there? Quite honestly, I don't think there would be that many differences, based on the methodology that I think we're both using. But I think the challenge does come back to how the tools are really,

W
I don't know, equipped to measure this. So, analytics: this is sort of how we've always done it. We've always been sort of beholden to an event, or a load event. There are things we can gather after the page loads, and we have some metrics that we gather that aren't dependent on onload firing. But when you talk about the full life cycle of a page, have we really been measuring that? I mean, have we really?

W
I think it's a great thing to think about, and I think it's great that we're starting to move in this direction, but I don't know that the vendors are there yet in terms of our ability to actually do it. So for one, I do think it does fall on us, the analytics vendors, to sort of work with the Chrome team and others to figure this out and to try to normalize some of it, and if we have better tooling and better things we can do, like this Reporting API...

D
...of a full session lifecycle problem, you know: a 0.1 CLS for a 12-hour Gmail session.

E
Yeah, yeah. And so I did want to separate that: many of these problems just described are not particular to CLS, but that part of the problem is particular to CLS, where the act of accumulating it, you know, is not always the best measure. More layout shifts are worse, but something like an average shift is also important to consider, and we are putting some thought into better measures for that.
W
On normalization: I know I'm pressed on time, so I want to kind of keep chugging along, but if I hit a time limit, Nick or Yoav, just let me know. I am seeing some improvements in the tooling. I'm not just trying to plug SpeedCurve here, but this was born out of the fact that our customers were like: well, I got a bad score, what the heck do I do with it? You know, Lighthouse does a good job of listing out the elements that are there.

W
DevTools does a good job of highlighting where those shifts are occurring. I think that we need to see more of this across the board in terms of our tooling, where we can actually highlight where these shifts are occurring. Pat does a great job in WebPageTest of highlighting the frames in the filmstrip when they change, and showing the cool graph of where it happens along that life cycle

W
that was measured. But I think there's opportunity here, because the other thing they're coming to us with is: I have no idea how to fix this or address it. So I'd like to see more of this. I think we're definitely kind of doubling down and trying to look at making this easier for customers. I think others are too, but I do think that the tooling for diagnostics of this is a little bit behind as well.
F
...owner knows what interactions are typical, then relatively soonish we do hope to be able to enable them to measure that, to get a more realistic picture of what it's going to be like once a user actually comes to the page. And we have discussed the problem that you identified, right, that all of these things are called CLS, whether it's 12 hours, whether it's, you know... and how do we handle that? Because, especially for our typical users, we're kind of introducing them to this for the very first time.

F
So, has there been any thought about that: how to address, you know, the same metric name reporting on very different circumstances, if that's advisable?
W
Yeah,
okay,
I
guess
I
think
I
belabor
that
that
point
enough,
but
I
do
think
there's
opportunity
there.
The
good
news
is
that
it's
starting
a
conversation
again
and
yeah
benjamin.
I
agree
that
I
don't
know
how
to
fix
this.
W
It's
concerning,
like,
I
think,
that's
something
where
there's
some
great
posts
that
talked
about
some
of
the
biggest
things
that
contribute
to
cls,
there's
some
reference
posts
that
we
use
when
we're
talking
to
our
customers,
we're
building
up
our
own
knowledge
base
when
we
find
different,
you
know,
use
use
cases
and
issues
that
people
run
into
that.
That
are,
you
know,
causing
a
bad
cls
score.
There's
some
great
case
studies
going
on
right
now
in
the
web,
perf
slack
that
I
think
have
been
great
to
reference
and
talk
to
but
yeah.
W
I
do
agree
that
it's
a
concern
that
not
knowing
how
to
address
it
and
fix
it,
you
know,
is
a
problem
so
getting
them
there
with
the
tools
to
the
point
of
here's.
The
problem,
and
quite
frankly,
you
know
layout
shifts
happen
and
a
lot
of
times
they're,
so
small,
they
don't
really
matter,
but
you
know
getting
them
to
focus
on
the
largest
layout
shift,
for
example,
is
a
starting
point,
like
really
kind
of
directing.
W
That, I think, is where we have the opportunity. Really quickly, moving on to FID: everyone's doing so fantastic here, we have no problems with FID whatsoever, it seems. So this is my biggest complaint with FID. First input delay is basically how fast your application is responding to user input: scroll, click, key press. And what we're seeing with our customers, and I've shared this with Annie and team as well, is that I think the bar is too low right now.
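For reference, FID in RUM comes from the first-input entry type; a minimal sketch of how it is computed, assuming browser support (the entry interface is hand-declared):

    interface FirstInputEntry extends PerformanceEntry {
      processingStart: number;
    }

    new PerformanceObserver((list) => {
      const [entry] = list.getEntries() as FirstInputEntry[];
      if (entry) {
        // FID: time from the user's first input until the browser
        // could start running event handlers for it.
        console.log('FID (ms):', entry.processingStart - entry.startTime);
      }
    }).observe({ type: 'first-input', buffered: true });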
W
What we're seeing is... well, sorry, the problem might be multifaceted here. I think in general the bar is too low, because we see, at the 75th percentile, far under 100 milliseconds, around 27 milliseconds, and even at the 95th percentile it's 255 milliseconds, while we're telling people, "Hey, a good number for FID is under 100 milliseconds at p75."
W
Why this is concerning to me is not that we don't want our customers to do well, but if we step back and ask what we are really trying to get them to do with FID, it's going after and attacking the long tasks that are the problem, actually stopping more of the stop-the-world execution on the main thread. This isn't helping with that.
W
So when we look at it, at the 95th, or sorry, 75th percentile in this case, for a customer I just pulled: 19 milliseconds for FID, so they're great at the 75th percentile. However, we're measuring long tasks and looking at the longest task as well, and the 75th percentile for that metric is up to 300 milliseconds, so it's obviously causing a pretty bad experience when that's occurring.
W
So this raises a lot of questions: is this good? Is this bad? What should we be measuring? I like that we're using Total Blocking Time as the proxy in our synthetic data to talk about this, but I fear that we're not shining a big enough spotlight on the real problem around long tasks. Whether that's the bar being too low, or maybe this is another example of an issue with lifecycle-of-the-page measurements, or with measuring FID.
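Total Blocking Time, the synthetic proxy mentioned here, can be sketched from Long Tasks entries roughly as below; real implementations window it between first contentful paint and interactivity, while this simplified version just sums everything over the 50 ms budget:

    let totalBlockingTime = 0;
    let longestTask = 0;

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // Long tasks are >= 50 ms by definition; only the excess "blocks".
        totalBlockingTime += entry.duration - 50;
        longestTask = Math.max(longestTask, entry.duration);
      }
      console.log({ totalBlockingTime, longestTask });
    }).observe({ type: 'longtask', buffered: true });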
W
I believe... I'm sorry, no, you know what, I take that back. For us it's not, because we're measuring interaction times as well, and those are beacons that we send after the load event, when a user interacts with or scrolls the page. So for us it's actually not. However, if they don't interact with the page, obviously we're not going to measure anything there.
W
So maybe that's a little bit of a misstatement there. I also think, and I have to apologize to the team, because I did see that long task attribution was in 86, I believe, but that was actually something I just learned about today, so I'm a bit behind. But the ability to actually say, "Hey, what's driving these long tasks?", again making it actionable, answering that "how do I fix this?" question.
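The attribution in question is exposed on each Long Task entry as a list of TaskAttributionTiming objects; a sketch of reading it (the attribution is fairly coarse today, often just naming the frame the work ran in; the interfaces are hand-declared):

    // Hand-declared shapes from the Long Tasks API.
    interface TaskAttribution extends PerformanceEntry {
      containerType: string; // e.g. 'iframe'
      containerSrc: string;
      containerId: string;
      containerName: string;
    }
    interface LongTaskEntry extends PerformanceEntry {
      attribution: TaskAttribution[];
    }

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as LongTaskEntry[]) {
        for (const attr of entry.attribution) {
          console.log(`long task, ${entry.duration} ms, in`,
                      attr.containerType, attr.containerSrc || '(top frame)');
        }
      }
    }).observe({ type: 'longtask', buffered: true });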
W
Right now, when we try to compare the two, between synthetic data and RUM data, it's sort of hit or miss, because a lot of times that execution is coming in from an ad, or a third party, that might have been introduced for that specific page that we measured but isn't on ninety percent of the other pages. It's a little bit hard to tie those two together. So being able to provide that attribution for what's driving those long tasks,
W
so our customers know how to actually go about fixing it, and we can tell them that, I think would be a huge, huge, huge advantage for them with their RUM data in attacking long tasks. It doesn't address the issues I think we see with FID, honestly; I think it's maybe a discussion of whether FID should just have a higher bar than it does today.
L
So Patrick will be discussing some thoughts we've put into, well, he's basically done all of the work, how Lighthouse does long task attribution and how, if possible, we could try to use that in the Long Tasks API.

W
Awesome, awesome. Looking forward to it.
W
Okay, this brings me to my favorite topic over the last eight-plus years. This is something I just wanted to raise, and I know that we've tried in different ways to address how we actually go about measuring and attacking this problem of third-party performance, specifically with display ads. I'm kind of putting this out here more because I had this really enlightening discussion with a customer a few weeks ago, and this has been the number one complaint we see from our media customers.
W
I mean, we love to beat up on them; everybody I know that does a presentation around third parties pulls up the cnn.com site and just has at it. But at the same time, I do feel like, maybe myself included, we haven't really given them or armed them with the tools they need to actually make this problem less of a problem. Here's an actual quote from a customer; you all can read it there.
W
This is someone I've been working with across SOASTA, Akamai, and now SpeedCurve, and he is literally banging on the table, because he doesn't have the ability to get visibility into what's happening within an embedded iframe and the resources that are being shown there. He doesn't have anything to tie into that's more turnkey and out of the box to actually go about measuring when the ad slots are being rendered, to try and improve these things.
W
This is a shameless plug for some of these issues that I know have been raised and talked about, but these issues right here, to me, have been outstanding for a really long time. Specifically, looking at resource timing within those third-party frames, or even first-party frames, as well as some of these others that have been raised. We're kind of ignoring some of the impact of frames on things like CLS. So they're the worst offenders, they're the poorest-performing sites.
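To illustrate the gap being described: same-origin frames do expose their own resource timing, but cross-origin frames, which is what most ad slots are, do not, and that is exactly what these issues ask for. A minimal sketch:

    // Walk iframes and try to read their resource timing. Cross-origin
    // frames (most ads) throw on access to their performance object,
    // which is the visibility gap being described here.
    for (const frame of Array.from(document.querySelectorAll('iframe'))) {
      try {
        const win = frame.contentWindow;
        if (!win) continue;
        const resources = win.performance.getEntriesByType('resource');
        console.log(frame.src, resources.length, 'resources visible');
      } catch {
        console.log(frame.src, 'cross-origin: no resource timing visible');
      }
    }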
W
They have the worst scores, but we're still giving them this sort of empty answer in terms of how we go about fixing, measuring, and addressing ad performance. More of a question for the group: there are probably some things that we could be doing as vendors, that we could tie into today, or that people have experimented with. I know New Relic, for example, has been doing some stuff around measuring events that ads, or ad providers, are posting back into the page out of the box.
W
So I don't know if there are other best practices we can be latching on to. I do think that getting visibility into resources within iframes, just so they can understand "was it campaign A versus campaign B? Is it campaign B that I need to turn off because it's so slow?", would be a huge, huge improvement over what we're seeing today.
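The pattern mentioned, an ad provider posting timing events back into the host page, looks roughly like this on the listening side; the message shape, event name, and origin below are hypothetical, since each provider defines its own protocol:

    // Host page: listen for timing events an ad iframe posts up.
    window.addEventListener('message', (event: MessageEvent) => {
      // Hypothetical provider origin; always check before trusting data.
      if (event.origin !== 'https://ads.example.com') return;
      const data = event.data;
      if (data && data.type === 'adSlotRendered') { // hypothetical event name
        console.log('ad slot', data.slotId, 'rendered at', data.timestamp);
      }
    });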
Q
Just a quick comment from the scheduling side of things: this is something we've been exploring a little bit, maybe starting over the last month or so, and I'll talk about it in my talk next. We're seeing a desire from folks, from partners, to have some sort of primitives to be able to control what third parties are doing on the main thread, whether it be de-prioritizing them or doing something in order to improve first-party performance.
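A sketch of the kind of primitive being explored here, using the proposed (and at the time of this meeting still experimental) scheduler.postTask from the Prioritized Task Scheduling proposal; the declaration is hand-written since it is not in standard typings:

    // Experimental Prioritized Task Scheduling proposal (scheduler.postTask).
    declare const scheduler: {
      postTask<T>(
        callback: () => T,
        options?: { priority?: 'user-blocking' | 'user-visible' | 'background' },
      ): Promise<T>;
    };

    // De-prioritize third-party work so first-party work wins the main thread.
    scheduler.postTask(() => {
      console.log('third-party init runs at background priority');
    }, { priority: 'background' });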
W
Awesome, yeah, we'd love to work with you on it. I mean, I can give you a lot of different examples, a lot of different "here's what the customer wants to see and here's what we can't show them", if helpful. Yeah, let us know; we'd love to collaborate. Okay, finally, the last thing I'll say is on Server Timing. This is one where I really love the spec, I love the idea, I love what our ideas were around how we were going to leverage Server Timing.
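For reference, Server-Timing values are surfaced to RUM scripts on resource and navigation timing entries as a serverTiming array (cross-origin responses must also opt in with Timing-Allow-Origin); a minimal sketch of reading them:

    // Read Server-Timing metrics off the navigation entry.
    for (const entry of performance.getEntriesByType(
      'navigation',
    ) as PerformanceNavigationTiming[]) {
      for (const metric of entry.serverTiming) {
        // Each metric has a name, an optional duration (ms), and a description.
        console.log(metric.name, metric.duration, metric.description);
      }
    }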
W
But as of today, and I just pulled this from the Chrome usage data, we're still under seven percent, roughly, in terms of adoption of Server Timing. And I'm not a gambling man, but I think if we were to go in and look at the HTTP Archive data, I would guess that a lot of this is coming from Akamai, because of the work they've done to expose
W
some of that for their RUM product, for people to get visibility into things like cache hit or miss, or origin and edge latency, things like that. I just feel like it's such a missed opportunity, and I'm kind of coming with a complaint that doesn't necessarily have anything to do with this group, but it's more of a call to action: are we really taking advantage of this? Are we really doing everything we can with Server Timing since it was introduced?
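The shape of what a CDN or origin can emit is just an HTTP response header; a minimal Node-style sketch (the metric names are illustrative, not Akamai's actual ones, and the 'http' import assumes Node typings are available):

    import { createServer } from 'http';

    // Emit illustrative cache and latency metrics via Server-Timing.
    createServer((_req, res) => {
      res.setHeader(
        'Server-Timing',
        'cache; desc="HIT", edge; dur=4, origin; dur=120',
      );
      res.end('ok');
    }).listen(8080);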
W
It was great to see Akamai do that. However, Akamai is kind of the only product, and the only CDN that I know of anyway, that's surfacing this for their customers by default, with CDN-specific measurements through Server Timing. In terms of performance analytics tools, SpeedCurve isn't in there yet; we're just now getting ready to roll out Server Timing in our RUM data, but we have kind of been waiting for customers to tell us what it is they want to surface, and right now there's just less adoption of it.
W
I believe WebPageTest is surfacing this; they threw this in there because Pat tends to be on top of all this stuff before everybody else is. But are there things that we could be surfacing, even at the request level? To me this is an underutilized specification, an underutilized tool in the browser, and I don't know what we can do to get more support for it. But I absolutely see a need there to expose different metrics and things that we can't get other than by using
W
an APM tool, or something that's really more bytecode-instrumented, or something like that on the server side. So for CDNs, I would love to see Fastly, Cloudflare, and others working alongside Akamai, imagine that, to be able to actually surface some of this stuff. I think it could be really cool, and really beneficial to the customer, and we could actually start to improve things that are happening with, say, time to first byte if we were able to surface some of that. And I'm sure there's a host of other reasons,
W
other areas, where we could use this and adopt it, but so far we haven't really been seeing a ton of that. Those were the things I wanted to hit on. I know I went over time, and I kind of flubbed there at the beginning with getting started, so I appreciate everybody's patience. Again, thanks for all the awesome work this group has been doing; I really enjoy working with you all and making a lot of progress on this stuff.
W
Web Vitals has, again, just been awesome from our perspective, just starting the conversation, and obviously there's a lot more work we want to do to continue to make that better. So, any other thoughts or questions? I know we're over time.
B
Cliff, the only other question I had for you is about some of these new Web Vitals metrics that you're measuring for your customers as well. Besides the external pressures to look at these metrics, have your customers reported good or bad things about wanting to measure them at all? Does it correlate with their business metrics, or bounce rates, or stuff like that? Have you gotten much feedback yet?
W
Yeah, we have, and I love the data that you showed as well when you wrote your post on this. We've gotten mixed reviews: we have seen really strong correlations with things like FID, and we've seen strong correlations with LCP, maybe not as strong correlations with CLS, and that's just in terms of bounce rate, conversion rate, things like that. Customers are excited about it more for the fact that they've known
W
there was a problem, and they haven't liked how the pages felt, for lack of a better term, specifically with CLS. So the fact that they can actually now pinpoint it, find an issue, and fix it, I think they're delighted. So I think it's been really good to see. I guess I'd like to collect more data and see; the verdict's still out on some of these things, whether there's an impact on longer-term conversion rates and things like that.
B
So we had scheduled a break, a 15-minute or so break. Do you still want to? We should probably still do that.
A
Yeah, we should. Can we turn that into 10 minutes, just to recoup some of the time? Would that work for folks? I see nods, okay, good. Let's do 10 minutes and reconvene at something-something-53, depending on your time zone.