From YouTube: WebPerfWG call - June 4th 2020
B: Well, we have a couple of different topics for today. I think we have one discussion from Ilya on Web Vitals, and potentially two different things from Annie on frame timing and single page apps, and then, if we get to it, if we have time left, Nicholas wanted to talk about page visibility as well. I would say, why don't we get into it? Ilya, do you want to kick us off?
D: This is a thing that we announced a few weeks back on the Chromium blog. It's a program, and it has two major components to it. One is a recognition that at Google, and I think as a community, we've been evangelizing web performance for a long time. We've developed lots of really helpful tools and metrics and guides and all the rest, but we consistently get feedback from the folks that we engage that it's also very confusing.
D: There are too many tools and there are too many metrics, and they want more coherency and guidance. So the first commitment that we're making, at least as Google, under the banner of this Web Vitals program, is to actually bring together our tools in a more coherent way. What we mean by that is to converge on a set of metrics that we are going to be featuring and highlighting across all of these tools, as well as developing all the right guidance.
D: The best practices, the guides and all the rest, and also providing a more predictable update cadence, where it shouldn't be the case that, you know, in June we announce something and then a couple of months later another Google product stands up on stage and says: here's a shiny new metric that you should be optimizing for.
D: So that's the program, and we committed to a roughly annual update cycle for these key metrics, which we're calling the Core Web Vitals. We've designated a set of metrics to be, you know, the core set, and we will be revving them on this cadence. Of course, this is our first loop around the track, so, you know, we'll see how well we stick to all that, but that's the intent.
D: The additive bit, I think, that we added here is that we're also providing guidance, based on the analysis that we've done, on what the "good" and "needs improvement" thresholds are. In particular, on the slide you see that, for example, for LCP we're saying: hey, you should really target your LCP to be below two and a half seconds.
D: We consider that to be a good threshold, and you should be targeting it at the 75th percentile, and we've published similar thresholds for FID and CLS. Without getting too much into the details of how we arrived at these particular numbers, because I know many of you will have that question: we published a really nice document that talks about the actual methodology of how we arrived at these thresholds, and it is based both on a survey of some of the literature and past research that's been done, but also on feasibility.
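A minimal sketch of what that guidance amounts to in practice, assuming you already have per-pageview LCP samples in milliseconds from your own RUM pipeline; the 2500 ms threshold and the 75th-percentile rule are the ones quoted above, and percentile75 is a local helper rather than a web API:

```js
// Sketch: evaluate field LCP samples against the quoted target
// (LCP "good" means at or below 2500 ms at the 75th percentile).
function percentile75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.75) - 1]; // nearest-rank p75
}

function isGoodLCP(lcpSamplesMs) {
  return percentile75(lcpSamplesMs) <= 2500;
}

isGoodLCP([1800, 2100, 2400, 3900]); // true: p75 here is 2400 ms
```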
D: Also, I'll add that Philip and others who are on the call here have been putting a lot of work and effort into creating a lot of content around education, debugging guides and best practices for all these metrics. If you haven't seen it, I would definitely recommend going to web.dev/vitals and just looking at the body of work that's available there, and of course, you know, any feedback that you folks can provide would be hugely valuable as well. So that's from the Chrome side. From Search, you may have seen this announcement go out.
D: We all remember those big movements, so the new update here is that we're unifying all of that under "page experience" as a unified, I guess, program within Search, and specifically calling out that we're going to be using Core Web Vitals as the key health metrics for assessing user experience on the page. That is a big shift and a big difference compared to previous communications coming from Search about why web performance matters, because previously it was not disclosed which metrics site developers should be looking at.
D: This has been a welcome improvement, judging by a lot of the early feedback that we've seen, so I think that's a really good outcome. The other bit, connecting back to the program, is how we actually align all of our tools. We made a big push over the last year, and the last six months in particular, to try and unify all of our tools, such that when these things go out, developers are actually armed with the right tools and capabilities.
D: So you see here all the different tools; we launched updates to all of them last week. Things like PageSpeed Insights, the Chrome UX Report, Search Console (which is able to give you a report on how you perform on the Web Vitals), DevTools, Lighthouse. We have an extensive blog post on web.dev that enumerates all the different updates and the specifics of what landed, so if you haven't seen that, I definitely recommend checking it out.
D: It's actually pretty exciting to see all these things come together and offer a nice tooling story. That's not to say there isn't room for improvement, there's plenty, but nonetheless I think we're hopefully delivering on some of the promise of helping developers unify and actually get good results. Also, if you haven't seen it, Philip open sourced the web-vitals JS library for collecting these RUM metrics. It encodes all the best practices that we've found to date, and it matches how we measure these things in the Chrome User Experience Report.
D: So this is a production-ready library; we've already seen a number of projects and sites adopt it, which is really nice. But even beyond just taking it as an off-the-shelf sort of thing, we encourage everyone to look at the implementation and kind of sanity check their own implementations, to make sure that they follow the same patterns, because there are some non-trivial gotchas for some metrics like CLS and others.
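For concreteness, a minimal sketch of wiring up the library as it shipped around the time of this call (the exports were getCLS, getFID and getLCP; later releases renamed them); the '/analytics' endpoint is a placeholder:

```js
import {getCLS, getFID, getLCP} from 'web-vitals';

function sendToAnalytics(metric) {
  // sendBeacon survives page unload, which matters for CLS: its final
  // value is only known late in the page's lifetime.
  navigator.sendBeacon('/analytics',
      JSON.stringify({name: metric.name, value: metric.value}));
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```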
D: So all that is great and exciting, glad that we launched it. And, of course, you know, the day after we launched it we were already having meetings about: if you promised to deliver an update in 2021, what would that be? And while a year seems like a long timeline, it's actually not that much time.
D: So if you look at the stable metrics, and look at something like LCP: we do a good job of measuring the initial page load experience, but what about rendering performance after that? That's kind of a blind spot, and of course, in terms of measuring user experience, it would be nice if we had some primitive that was able to speak to that and say: not only did it render fast, it also performed well, like it was smooth and didn't drop frames, or whatever metric we come up with.
D: Also, SPAs are kind of a common theme that we'll come back to in a second. For FID, first input delay, responsiveness is the general theme that we care about. First input is, as we know, a particularly challenging spot for many sites today, due to how pages are built, how JavaScript is parsed and executed and all the rest, but we care about input overall on the page, not just the first input. So a natural next step here would be to look not only at first input but at all input on the page.
D: We don't want these metrics to incentivize behaviors where people are reloading pages because it allows them to reset their counters; there are probably better ways to think about it. And, I'm not sure if this is a good or a bad example, but we don't count the number of dropped frames over the lifetime of the page; we talk about frames per second. Maybe there's some corollary to this, and that's the space that Annie and Michael will then explore.
D: So, in terms of areas: there are some areas here that I think we've already done some groundwork on, and maybe we could get them over the finish line, things like LCP and post-LCP rendering performance. There's previous work on frame timing; maybe there's an opportunity to resurrect it and do better on that front. Responsiveness: well, we have event timing, which we just recently shipped in Chrome, and I'm really excited about that, but I'm not yet sure if it's sufficient to cover all of our use cases there or not, so there are some questions there.
D: Normalization is, I think, an interesting open-ended question that we need to wrestle with. It's not specific to LCP; it's going to be a problem for any metric that captures a stream of events: the accumulating input delays, or the accumulating layout shifts, or all the rest. So how do we reason about that, and how do we present that data coherently and consistently in all of our RUM tools? Nick, I'm sure you've given this thought in your products.
D: It's a non-trivial thing, and I'd love to figure out some consensus on how we best reason about this and what actually resonates: how do we help developers reason about and act on such data? Finally, single page apps, as we all know, remain a gap for us in terms of measurement. They don't have well-defined start and end points and transitions, unlike page navigations, which makes it really hard for us to reason about them.
D: RUM vendors have come up with framework-specific solutions, but we don't have a standard API for that, and that makes it really challenging for a tool like, for example, the Chrome User Experience Report to gather such data at scale, which in turn makes it hard for us to reason about these things at scale. And it does create a blind spot where, depending on how you implement your single page app, your telemetry may look worse simply because you chose to build a single page app.
D: That's not an outcome that we would want to incentivize. So, in terms of priority areas: responsiveness, where maybe event timing is the answer, not quite sure; post-load rendering experience, something we've done work on; SPAs, where I'd love to find some creative solution; and normalization. Those are the four areas that we're really curious to dig into over the next year, or years, it might take more than a year. And with that, maybe I'll hand over to Annie and Michael.
F: Hi, it's Steven. Yeah, I've got one comment, which is: yes, for sure, for us SPAs are, you know, the most important one, and I'd be happy to help if there's any way I can help. And just a question: it seems like you ditched TTI for input responsiveness, you know, for first input delay; just wondering why.
G: Okay, I'll take that one. So, for Time to Interactive there are basically two contexts in which it can be used: in the lab and in the field. I'll talk about the field first, because that's really important; that's the ground truth for, you know, whether the page is performing well or not. The big problem with using Time to Interactive in the field is what we call abort bias.
G: TTI can take a very long time to compute, and often the user actually leaves the page before it's computed, so that bad experience is actually going to make your TTI look better. We're very concerned about that abort bias, which is why we started measuring first input delay. We know that the user is trying to interact with the page while the page is loading, and there's a big delay that stops them from being able to interact with it, so we're more directly measuring the actual user experience.
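A minimal sketch of how first input delay is observed in the field via the "first-input" entry type; this mirrors the general approach, not any specific product's code:

```js
// FID is the gap between the user's first interaction and the moment
// the browser could begin running the event handlers for it.
new PerformanceObserver((list) => {
  const first = list.getEntries()[0];
  const fid = first.processingStart - first.startTime;
  console.log('FID (ms):', fid);
}).observe({type: 'first-input', buffered: true});
```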
F: Okay, thank you. Thank you, Annie. But isn't first input delay dependent on the input, on when, you know, the click happens? For example, in a lab, when do you... you know what I'm saying? FID is great if you can detect a regression, or if you want to optimize and re-measure, but it seems that first input delay is dependent on, you know, the input you get. Yes?
G: That's why we actually did a study where we compared a bunch of different approaches to a lab correlate metric. There's Max Potential FID, which is just, like, the longest task.
G: The one we ended up going with, Total Blocking Time, is: if you subtract 50 milliseconds off all the tasks, since that much isn't really affecting the user input, how much are the tasks blocking? We looked at a couple of other different lab options, and when we did a study that took the same device in the lab over 10,000 pages and then compared against the data from the field, Total Blocking Time was able to predict first input delay best of all the approaches.
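A rough sketch of the Total Blocking Time computation Annie describes, summing the portion of each long task beyond a 50 ms budget; the full lab definition also bounds the window between FCP and TTI, which is omitted here:

```js
let totalBlockingTime = 0;

new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    // Only the time beyond the 50 ms budget counts as "blocking".
    const blocking = task.duration - 50;
    if (blocking > 0) totalBlockingTime += blocking;
  }
}).observe({type: 'longtask', buffered: true});
```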
E: Could I follow on and add: Total Blocking Time is defined in terms of Time to Interactive, so we stop adding up the long tasks at Time to Interactive. Time to Interactive just gives you the moment when the page is idle; Total Blocking Time tells you, well, how busy was it until that point, and so it kind of approximates how long the delay would have been. So it's more elastic, as opposed to just having a single cliff.
G: You know, we have some studies on that. Basically, TTI has that 50-millisecond task threshold, so you can imagine you had tasks at one second, three seconds and five seconds: if they were all slightly faster, like 49 milliseconds, you have, like, no TTI at all, but if that last one was 51 milliseconds, sometimes the TTI will get really long. I know it sounds a little bit weird; I'll paste the doc where we did kind of a study on the elasticity of the metric.
E: So, for interaction: I think Ilya already talked about this, but we want to move from just tracking input delay to overall input responsiveness, and I'll show that in a second, as well as going from just the first input to all input on the page. This is a little diagram our team worked on to give an overview of how we think about full input responsiveness; you can see on the left, in purple, that's input delay.
E: That is the time from when the input event happens to when it gets dispatched, sort of all of the queuing that is getting in the way of beginning to handle that event. But really, the time it takes to process all of those event handlers is what the user is interested in, and then also the time it takes the browser to paint the next frame on screen.
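A sketch of how those phases fall out of the Event Timing entries mentioned above: input delay is processingStart minus startTime, handler time is processingEnd minus processingStart, and duration runs through to the next paint:

```js
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, {
      inputDelay: entry.processingStart - entry.startTime,
      handlerTime: entry.processingEnd - entry.processingStart,
      toNextPaint: entry.duration,  // coarsened by the browser
    });
  }
}).observe({type: 'event', buffered: true});
```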
E: There are other sorts of interesting leads that we have, such as, for paints, being able to attribute the tasks that led to dirtying the frame, and then perhaps even the event that led to the queued tasks. So, rather than going forward from the event and tracking all the work, maybe we can go backward from the painted frame and try to attribute where it came from. We do think it'll be impossible to be perfect, but we do want to capture a user-visible paint when we know about it.
E: So that was about capturing all input responsiveness. But, as Ilya also mentioned, as we go to capture more of the page lifecycle we have a new problem: we want to carefully weigh metric values alongside user behaviors. As users spend more time on a page, we don't want the pages that they spend the most time with, or interact with the most, to be judged as having the highest cost.
E: Those are the best pages; users love them the most. So here are some examples. If we just sum up all input durations, then as the user interacts more there's more total duration or delay, and so obviously that page is worse, right? Clearly not. Or, if we just average across all inputs: this is better, but it still means the worst delays are hidden by all of the good delays. And of course we could use percentiles to catch more of the worst-case scenarios, but there are two different ways to look at this.
E: With first input, we aggregate across all users, all devices, all the different field experiences, and look at the top percentiles there; but here we're looking across all interactions, and things that happen much later in the page's lifecycle can behave very differently than the first ones, so we want to be careful there. And looking at the maximum input duration: you know, this is good.
E: It captures the worst case, but it's very inelastic, and it kind of hides the overall picture, so we probably need some blend of all of these. Cumulative Layout Shift, which we also talked about, is another of these full-page-lifespan metrics that already has some of these problems: when you look at long-lived sessions with repeated shifts, they might have unnaturally high scores. One idea is to apply time-based discounting, so that you penalize later shifts less, but that might actually risk incentivizing slower page loads.
E: We probably also want to make sure to separate layout shift scores for each section or page transition, especially when we consider single page apps; and then perhaps, rather than just adding up all of the layout shifts across all of those transitions, maybe we actually need an average across each section, or at least a maximum. So these are all related. Here are the things we're thinking about: the total time these things take, the median and various percentiles, the maximum; we're also thinking about adjusted versus not adjusted, much like long tasks.
E: We think there's a budget for a lot of these things, and as long as something happens within budget it's totally fine; we want lots of interactions that are all within budget, and those shouldn't count negatively. So we really want to look at the adjusted version, sort of the extra time that was above budget, and also compare these across page loads and page sessions, so when the user switches to the background and then back, or the total time spent in the foreground, because all of these things have an influence on user behaviors.
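A sketch of the candidate aggregations listed here, applied to one page's input delays; the 50 ms budget is illustrative, not a published number:

```js
function aggregateInputDelays(delaysMs, budgetMs = 50) {
  const sorted = [...delaysMs].sort((a, b) => a - b);
  const sum = sorted.reduce((a, b) => a + b, 0);
  return {
    total: sum,                                        // punishes engagement
    mean: sum / sorted.length,                         // hides the worst cases
    p95: sorted[Math.ceil(sorted.length * 0.95) - 1],  // catches bad tails
    max: sorted[sorted.length - 1],                    // inelastic single cliff
    // "Adjusted": only time above the budget counts against the page.
    overBudget: sorted.reduce((a, d) => a + Math.max(0, d - budgetMs), 0),
  };
}
```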
E: Eventually, we hope to dig in and find which of these, or which blended set of these, gives the best prediction for real page experience, and I think we could learn a lot from the folks in this room. But this is something we're thinking about over the coming year. Right, so, specifically single page apps and normalization: I think we all know that it's uniquely important to consider SPAs as you measure your metrics, because multi-page apps sort of get this for free, where attribution is easier.
E: You really want to measure performance metrics tied to specific routes or URLs, based on what the user experience was, so they're easier to plug in. So how do we do this? The goal, eventually, is to have really accurate heuristics, or to propose new platform primitives, to accurately track single page app routing; but this is hard to get right, and we don't have a concrete proposal for it now. What we really want is real-world data to help inform our approach and to confirm that, whatever it is that we do,
E: we did it the right way. We want to begin by asking developers to annotate their pages, and we know it'll be hard to get full reach, to ask all sites to do this, so we want to work with SPA frameworks, routers and analytics providers to enable reporting by default wherever we can. All right, there's a bit of a render issue here on the slide, but we want to try leveraging User Timing, and we think we could do this without changing the spec.
E: You know, we talked at the last meeting about folks wanting a custom color on their performance marks in the timeline, and we thought that wasn't something that needs a spec change; it's really a convention you can add to the detail field. We think there's something similar we could do here: if you report a particular measure with a start and an end, but give us an extra clue, as a convention and not a standard, then maybe we could do a better job tracking and aggregating it in RUM metrics.
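A sketch of the convention being floated: an ordinary User Timing measure whose detail field carries the extra clue. The shape of the detail object here (type, route) is hypothetical; the point is that this is a convention rather than a spec change:

```js
// Router calls this when a SPA route transition begins...
performance.mark('spa-nav-start');

// ...and this when it considers the transition complete (end defaults
// to "now" when omitted from the measure options).
performance.measure('spa-navigation', {
  start: 'spa-nav-start',
  detail: {type: 'spa-navigation', route: '/inbox'},  // hypothetical convention
});
```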
E: And, as promised, we'd eventually like to capture the actual events that led to a navigation: not only to get the timestamp of the event, which is easy to get, but also to associate the two, because we might need it to report frame timing or paint timing that comes after the navigation. So we want some way to capture that. We also might want to consider passing a promise at the beginning, to help simplify how you resolve the end of a navigation.
E: These are not proposals yet, and we do think it's a little bit weird to add a way to capture these to User Timing, so we're really proposing just using this convention for now; but these are things we'd like to capture eventually. All right, so how do we add this to frameworks or routers or analytics? I think a lot of us have found that it's pretty easy to track the start.
E: Classically, people have been able to track async chunks of work, and there are some upcoming experimental things frameworks are working on, like semantic boundaries for SPA apps, for example React's Suspense boundaries, where you can load large chunks of async work and it will track progress and load it all in at once. Or we can start tracking and then watch the UI for updates, and use a process similar to Time to Interactive, where we wait for the page to get idle.
E: Sorry for the rendering issues; I'll fix these later. In my experiments with some SPA frameworks and tracking, I did find that using an approach similar to Time to Interactive is very useful. So you can imagine counting whenever the UI gets updated, tracking the beginning of a SPA navigation, and resetting the timer any time...
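A sketch of the TTI-style heuristic being described before the interruption: start a clock at the beginning of the SPA navigation, restart a quiet-window timer on every UI update, and treat the navigation as done once no updates arrive for the whole window; the 500 ms default is an arbitrary illustrative knob:

```js
function trackSpaNavigation(onDone, quietWindowMs = 500) {
  const start = performance.now();
  let timer;

  const finish = () => {
    observer.disconnect();
    // Subtract the quiet window: the last real update marked the end.
    onDone(performance.now() - start - quietWindowMs);
  };

  // Restart the quiet-window timer whenever the UI is updated.
  const observer = new MutationObserver(() => {
    clearTimeout(timer);
    timer = setTimeout(finish, quietWindowMs);
  });
  observer.observe(document.body, {childList: true, subtree: true, attributes: true});
  timer = setTimeout(finish, quietWindowMs); // in case nothing ever updates
}
```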
F: This is Steven again; I work for Salesforce. I was just wondering, and maybe it's a bit of a naive approach, but if we had an API, you know, for SPA navigation, that would sort of reset all the measurements, that would re-trigger paint, FID and so on for everything, wouldn't that cover most of the use cases? In other words, every routing framework knows when it's going to route to a new page or new content; they would just call this new browser API.
E: Yeah, we would like that, but we're worried about the way we'd introduce it. It's a little bit harder with single page app navigations, for one reason: when you do a top-level navigation, everything on the page changes; you start from zero and you start loading, and so you can attribute more easily. With a single page app, it could be a partial load of the UI.
E: Resetting and then reporting the same metrics we already talked about might not necessarily work; we might need something a little more sophisticated. Second: if we rely just on developer hints to reset timing, we really want to make sure it captures the user experience, and so we want to be careful there.
B: I wanted to reinforce something that you had said as well, which is: you're talking about maybe offering a way to measure at the end, where you basically state the start and end time, looking into the past, for what the navigation was. But you pointed out that it can be useful at the beginning to also mark that, as Steven is bringing up as well. Definitely, from a third-party analytics provider point of view, we like to know when things are starting, because we may want to turn on additional instrumentation that we wouldn't normally have had enabled, you know, because it's heavier or weightier or more intensive or whatever. So having some sort of way to give a hint at the beginning, I think, is very helpful from a third-party vendor point of view. And then trying to figure out the end, I think, is the hard part. We do it in boomerang by looking at the network.
B: I like the idea of looking at the mutation results as well, or some combination of that, or some sort of algorithm or whatever, but that part is the hard challenge for us to get right: figuring out when work stopped, in a world where multiple widgets are all interacting, and ads are reloading, and other things are happening on the page. We don't always get it right with our naive approach.
B: So, yeah, I'm not sure if there's a very smart way for the browser to tell us, you know, what its opinion is, or whether it takes more input from each site to help define the important parts, or the unimportant parts, for these types of navigations. But I think that's the meat of our challenge as an analytics vendor, and it's definitely something I would love to talk through a lot more: figuring out if we could do smarter things more automatically.
H: I have a question: do you think it will be easier for implementers to think of this as separate and distinct from existing page navigation? That's the first question. The second question is: do you think it's easier for implementers to do the same thing? So, for both web developers and for us as browser vendors and analytics providers, which is the easier path? I just want some commentary on that; it would be appreciated.
H: Compared to reusing the existing navigation machinery, because I noticed in your talk you said "we're going to reset here for SPA navigation". I just want to broaden the conversation and think a little bit outside the box here: as we work on this, is it better to work on it as separate and distinct from our existing navigation and paint timing, etc.?
E: I think that we as a browser are not going to reset anything, not yet, since we know a lot more. But it's important, when you think about normalization, when the Event Timing API starts shipping and you start tracking all the page lifecycle events, that you keep this in mind for your own data, so that you're able to demarcate these sections and aggregate and normalize more fairly, if that makes sense.
E: All right, thank you for the questions. I do have one more thing to present.
E: All right, so the other topic Ilya kicked off nicely was measuring smoothness on the web, and for that we're pointing at frame timing. A co-worker of mine has been working on this and filed a frame timing API repo, but it's really more about tracking smoothness than specifically frame timing.
E: Maybe right at the start I'll compare with the old frame timing API: that was more about looking at the event loop and checking whenever each iteration surpassed the budget, reporting at a frame-by-frame level, a "long frame" sort of thing. This is a higher-level report, over the lifetime of a particular animation or scroll event, of how it did in aggregate.
E: So, let me explain. A perfectly smooth animation would have every single frame shown at exactly the time it was expected, as you see up front. But of course we know that sometimes frames get dropped, so you might have some shown and some dropped, and there are actually two different cases, even among dropped frames, that we need to be careful of. In the first case here we have shown frame, dropped frame, shown frame, dropped frame: a very consistent pattern.
E: So rather than having, say, 60 frames per second, it has a nice steady 30 fps; whereas in case two there was a bunch of 60 fps frames shown, then a big long section with no frames at all, and then it finally returns to showing frames. If you just count the dropped frames over time, you'll actually see that they have the same number shown and the same number dropped, so the average FPS is the same.
E: But if you start counting variations, or jank: in the first case there's no real variation. You might consider it one variation if there was 60 fps preceding it, but if it was a really steady 30 the whole time, there's no variation, and that's what we label as jank; whereas in the other one there was a steady frame rate and then one big jank. So that's one difference. The other nuance:
E: sometimes not showing a frame is not a problem at all, because sometimes there's just nothing to show; there's no update to give. With the pseudocode on the right, you can see a rAF loop where the developer chooses to just skip every other frame, and so there the run loop actually did yield, and it could have shown a frame; there just was nothing to show. This is fine; this is still a nice, smooth experience.
E: On the other hand, you could have a rAF loop that keeps every other frame but then has a synchronous, blocking busy loop, and now nothing could have been shown. We don't know whether there was anything to show or not; it's still blocking, still getting in the way. This is still a delayed frame.
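A hypothetical reconstruction of the slide's two rAF loops: case one yields every frame and simply has nothing new to draw half the time, which is fine; case two blocks the thread synchronously, so its missing frames are genuinely delayed:

```js
function draw(now) { /* stand-in for the app's actual rendering work */ }
let tick = 0;

function caseOne(now) {            // skips every other frame, but yields: smooth
  if (tick++ % 2 === 0) draw(now);
  requestAnimationFrame(caseOne);
}

function caseTwo(now) {            // blocks synchronously: delayed frames
  draw(now);
  const start = performance.now();
  while (performance.now() - start < 32) {} // busy loop past the frame budget
  requestAnimationFrame(caseTwo);
}

requestAnimationFrame(caseOne); // start one of the loops
```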
E: Now, this is what the API proposal looks like. The entry type changes from "frame" to being named "animation", because from this perspective scrolling, CSS animations, JS animations are all just types of animations. There might be some debate there, but they really are treated very similarly. So you'll have the start time of when the animation started, and the average interval between frames within that animation; you know, this could change based on hardware: we're getting screens with variable refresh rates, we have high refresh rates.
E: We have animations that aren't meant to run at the maximum refresh rate. And, optionally, I think you'd have an array of all the frame details at a per-frame level, which we'll get into in a sec, as well as attribution to the element that was animating, or the ID of the animation, if present. So this is a much higher-level, semantic summary of an animation, as opposed to per-frame details.
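To illustrate the shape of the proposal, a hypothetical consumer; the entry type name "animation" comes from this talk, but every field below is inferred from the discussion rather than from a shipped API:

```js
new PerformanceObserver((list) => {
  for (const anim of list.getEntries()) {
    console.log(anim.name, {
      subtype: anim.subtype,                    // e.g. "scroll", "css-animation"
      avgFrameInterval: anim.avgFrameInterval,  // hypothetical field names
      framesShown: anim.framesShown,
      framesDropped: anim.framesDropped,
    });
  }
}).observe({type: 'animation', buffered: true});  // proposed, not shipped
```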
E: If you do dig into the frames themselves, you'll have the start time of each frame, whether it was presented or dropped, the scroll delta if there was one, and the user action, if this particular scroll or other animation was started due to a user action. On the right you have links to the more detailed proposal, as well as discussion issues that have already been filed.
E: Another potential extension is attribution; of course folks will want this, and it's perhaps difficult to be concrete here, but the proposal is to just have a string explaining the reason as best we can, and that would be browser-specific. The possibilities could be, you know, garbage collection, long-running event handlers, long-running JavaScript, expensive layout and style updates, etc. And there are various security and privacy considerations about what we can report.
E: So I did gloss over it; I tried to talk about it a bit at the beginning. Unlike the previous frame timing proposal, you would not get one entry per frame per animation, etc.; you get a report at the end of the animation about everything that happened during its life. So, indeed, if you're reporting details on every single frame, that might get large, but in general you would just have the number of dropped frames and the average interval between them; it wouldn't be that large.
E: Perhaps a follow-on question is: for things that have an infinite timeline, or a very long timeline, how would you report? Maybe you would do so every so many seconds, just a progress report, or cut it off early, something like that. So there shouldn't be too many entries, unless maybe you have lots of animations going on the page.
E: So, I guess, at the end of the scroll you'd get a new performance entry of type animation with subtype scroll, and it would say that the start time was a certain time, the average interval between frames of that specific scroll animation, and the number of frames that were actually shown versus intended to be shown, as defined earlier; and then you'd get more details per frame here.
E: It is interesting that, unlike the previous frame timing proposal, which considers the main-thread event loop, things like scroll can be handled on the compositor, and so you might actually get a different number of potential frames, or a different number of frames actually shown, for an animation like scroll that is decoupled from the main thread, even when the main thread is blocked; and so you might have a whole bunch of animations running in a different pattern.
D: Yeah, there's a lot of goodness here. A second-order question; the first one, I guess, was motivated by, you know: what is, perhaps, the overhead of measuring this and emitting it for every single animation? Maybe some pages have a lot of those going on, to your earlier point. Would that be just a firehose of data? On the other hand, if you look at something like long tasks, we only surface the bad stuff, and it can be a little bit hard to know, like, what about all the good stuff?
C: ...and not just graphics: if you're emitting JavaScript entries for everything, that seems like too much overhead. So one possibility would be, like in event timing, to use some sort of counts. I guess you can define these events in a discrete way, since we're already doing that by surfacing entries, and then you just count the number of entries you would have fired for everything, and only actually fire the bad ones; something like that.
E: That's a great point. And thank you for the reminder: we decided to file the issues within that repo, because the original proposal hasn't had progress for a very long time, and I believe that's fine; this is closely enough related that we thought anybody who was previously interested in the old one is going to be interested in this one.
D: Yeah, I don't have a strong opinion on either the name or the location. I guess the more important bit is just to have a well-defined place where we can route folks, and also to make sure that, if we are reusing frame timing, we're navigating folks to the new proposal, as opposed to it getting buried in the old discussions, because it is significantly different. Yeah.
A: I'd add that, because the word "frame" is so overloaded, with, you know, iframes and frames per second and whatnot, and because this focuses on animation, maybe it's worthwhile to rename it to "animation timing" or something along those lines. You're still reporting frames, but that's not the focus here; it's a means to measure smoothness, you know, of animations and movements.

Yeah, that makes a lot of sense.