From YouTube: WebPerfWG call - June 18th 2020
A: Okay, cool. So, as always, this call is being recorded and will be posted online, and due to time constraints, because Nicolas has two presentations and only 30 minutes to do them, including questions, we'll get started straight away and then do a bunch of the administrative stuff towards the end. Does that work? OK, cool, so take it away, Nicolas.
B: We want to enable analytics providers, and people that gather performance metrics from their sites, to get some data about users that leave the page very early on, before analytics have had a chance to register themselves.
B: This is kind of a blind spot right now, because if you can't register yourself, then you have no idea that the load happened, unless you have some, let's say, server-side processing to detect that kind of stuff. So we want to make it easier to help people determine the abandonment rates of their sites, as well as being able to correlate those rates with performance or business metrics across various sites.
B: First, we want to count only pages where the user explicitly navigates away from the page, because otherwise we might count things like redirects, which, from the user-experience perspective, are not really abandons; since they're triggered automatically, and that can happen very quickly, they shouldn't be counted.
B: In addition to that, we want to count as an abandon a page that did not reach FCP. Here we basically need to pick a point in time such that, if you did not reach that point, then you count as an abandon; I will explain a bit later why we think FCP is the right choice. And the final constraint here is that the page must have been in the foreground at some point; otherwise the site had no opportunity to show any content to the user, so we can't really count that as an abandonment.
B: So, in order to expose non-abandons, we also need to define them, and, as before, we also expect the user to navigate away, so that it is consistent with the abandon definition. In this case the only difference is that we reached FCP. Of course, that means there was some content presented to the user, and in that case we'll say that it was not an abandonment.
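A minimal sketch of the definition as discussed (the function and field names are hypothetical, made up for illustration; they are not from any spec): a page end counts as an abandon only if the user explicitly navigated away, the page was foregrounded at some point, and FCP was never reached.

```javascript
// Hypothetical classifier for the abandonment definition discussed here:
//  - only explicit navigations away count (automatic redirects are excluded),
//  - the page must have been in the foreground at some point,
//  - an abandon is such a page end that never reached First Contentful Paint.
function classifyPageEnd({ explicitNavigationAway, wasForegrounded, reachedFCP }) {
  if (!explicitNavigationAway || !wasForegrounded) return "excluded";
  return reachedFCP ? "non-abandon" : "abandon";
}

console.log(classifyPageEnd({ explicitNavigationAway: true, wasForegrounded: true, reachedFCP: false }));  // "abandon"
console.log(classifyPageEnd({ explicitNavigationAway: true, wasForegrounded: true, reachedFCP: true }));   // "non-abandon"
console.log(classifyPageEnd({ explicitNavigationAway: false, wasForegrounded: true, reachedFCP: false })); // "excluded"
```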
B: So now, a couple of reasons why we're choosing FCP instead of other potential candidates. The first reason is that FCP makes it very clear that if you don't reach it, you're definitely an abandon: for pages that are in the foreground and do not reach FCP, the user did not get any valuable information from that page load, and as such they should be considered abandons. Of course, as I mentioned, for non-abandons that's not the case.
B: The user may not have gotten information that they deemed useful, but determining that is beyond the scope of the proposal. Other options for a cut-off point in time, where you could say that if you don't reach this point you're an abandonment, could be, for example, unload. The benefit of unload is that it generally means the analytics provider will already be registered by the time unload occurs, so it would certainly cover a bit more of the gap we mentioned between the user trying to get to the page and analytics having registered.
B: However, unload is not tied to user experience at all, and unload values vary wildly between websites with similar performance characteristics, so we feel it's not the right metric to look at. Another potential option we could consider is LCP, but LCP is actually not well defined when the user leaves the page early, because its termination condition is leaving the page, so it's not suited for this kind of analysis.
B: However, that's a little bit tricky, because if your page is set up in such a way that it's a very small page, and it contains the answer to what the user is looking for right at the top of the page, and then the user just navigates away, that's not really an abandonment, right? So even though that's a pretty good option, we feel like it does not really capture abandons properly, because it will have many false positives.
B: There are other potential things we could expose. For example, the time to abandon, from some concept of navigation start, could be a useful metric, but it could also be not very useful, because we're exposing these even after a long time in the background, so you would need to similarly use the Page Visibility API in order to interpret the data accurately.
B: So it's not so clear to me if we would want that. But, well, we do expose paint timing, so it's not that I'm necessarily saying we shouldn't expose it, but it would have to be thoughtfully provided. Another possible thing we could expose is the foreground duration for the abandon cases. If the user was in the foreground for too long and they abandoned, that's worse than if they only waited a small fraction of time and then abandoned, because, for instance, some abandons could be mis-clicks, right?
B: If you see lots of abandonments that happen super quickly, that may mean they are caused by mis-clicks, or by user interactions where the user didn't intend to arrive at your page, and that is not as important to track from the performance measurement perspective, or at least less relevant. So that could be something useful. And, I didn't include it in my slides, but something like the URL could be useful as well.
B: Now, the next question is: how do we expose this information? We just said analytics providers are not registered by then, right, so we need to use the Reporting API. The Reporting API includes two types of reporting: there is document-level reporting, and there is network-centric reporting. I think network-centric reporting is better, because the document one requires, I think, registration in JavaScript, so it's not really suited for this use case, because your JavaScript is not guaranteed to run.
B
As
far
as
I
understood
from
my
conversation
with
VM
the
network,
one
is
currently
a
little
sketchy
in
that
it
really
should
use
origin
policy
in
the
future,
but
currently
does
not
at
least
for
Network
error
logging,
which
means
we
may
need
to
wait
a
little
bit
for
the
origin
policy,
spec
and
implementations
to
be
in
place
before
we
can
actually
go
ahead
with
reporting
API
for
abandonment,
but
that
seems
not
too
far
away
into
the
future.
So
maybe
that's
fine.
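For context, this is roughly what network-level registration looks like today with the Reporting API and NEL response headers (the endpoint URL is a placeholder, and an abandonment report type did not exist at the time; this only illustrates the delivery mechanism being discussed):

```http
Report-To: { "group": "default", "max_age": 86400,
             "endpoints": [{ "url": "https://collector.example/reports" }] }
NEL: { "report_to": "default", "max_age": 86400 }
```

Because these arrive as response headers, reports can be delivered even when no page JavaScript ever ran, which is exactly the gap under discussion.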
B: Ian clarified that document reporting doesn't require JS, but it does mean that the headers for the document need to arrive before we report anything. So, right, we would have to exclude some abandonments for cases where the abandonment happens so quickly that not even the headers had arrived, which may be fine.
B: Yeah, so the headers can be super slow, and ideally we would report those cases, although I don't know if NEL doesn't already cover those kinds of cases. I guess it wouldn't be an error, so I guess not; yes, it's not covered. So we really would lose those cases, sorry.
B: So imagine you have a website and it's doing great: it measures all the metrics, including FCP, and the metrics seem fine. Now imagine that there is a change in your website and you don't see any change to FCP, but you see a change in the volume of metrics received. In that case, it would be useful to determine if there's literally just less engagement from users, or if more users are just leaving because the FCP is actually slower, but the too-slow FCPs are not reported.
B: If the user abandons, then you're basically losing that data, and even though on aggregate it looks the same, the FCP is actually much slower. Abandonment can help with those kinds of deep-dive analyses into how the performance of the website is changing. In addition to that, you can also correlate abandonment with business metrics across different sites, and see: if I do this experiment, what happens to the abandonment? What happens to the business metrics?
B: So, in a sense, it's exposing part of the performance landscape that currently is completely invisible to analytics providers, unless they do some super complicated server-level analysis of "OK, we sent these responses but we never got analytics back, so something's wrong." But it seems pretty impossible to do that properly, because many things could go wrong there. So this just exposes: oh, this is what went wrong, your page is too slow and the user left.
G: In Chrome we do find that it varies a lot by site, so we would want to see the metric, because it does depend on the site. And what we found when we looked at our thresholds for our Core Web Vitals metrics is that sites that did not meet the threshold for performance were actually 25 percent more likely to be abandoned than sites that did meet the threshold.
G: We excluded everything like redirects and things like that, where the site just changed you to another page; we exclude that from the numerator and the denominator. So when we say, like, 60%, we're not counting redirects and other things like that. And most of it's not network errors: we exclude network errors, except when we can't detect them at commit phase, so most of it is not accounted for by network errors.
I: Hi, this is Michelle from Pinterest. I'm pretty excited that you're exposing this information. We actually just did an abandonment analysis and are basing the goals for the performance team off of that analysis, but I do suspect that we're missing quite a large number of logs from early abandons. I was wondering, for the alternatives to FCP, were there any other metrics considered? I think for us, we define an abandonment as anyone who leaves the page before our custom metric fires, which is pretty late: it's when the page is visually complete and interactive.
G: We did have a lot of difficulty (Nicolas, jump in if you want) finding a better metric that was really user-facing. TTI actually gets abandoned quite a bit; we actually removed the metric from Chrome, because abandonment makes it so noisy that it's hard to see, and many sites never reach TTI. Onload and DOMContentLoaded aren't really user-facing metrics, and then, as Nicolas explained about Largest Contentful Paint, it's the largest contentful paint.
G: So the first contentful paint is the largest contentful paint until the next larger contentful paint, so settling on something before the final largest contentful paint doesn't work. We haven't been able to find a way to get Speed Index into the browser itself. So we did consider a lot of ideas, but we didn't come up with anything; we would love to hear if people have other ideas.
A: One thought could be to have FCP as the default, but then also have some dead-man trigger, where the site declares that if this value wasn't set by JavaScript, this is an abandonment. So, as part of the registration, have some JavaScript variable that, if it was never defined, then the page was abandoned, or something along those lines: basically have some way for sites to say what they consider a non-abandonment, and then we could potentially report both. It can give interested sites more info.
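A minimal sketch of that dead-man-trigger idea (the names and the reporting hook are hypothetical, not part of any spec): the page sets a flag once it considers itself successfully loaded, and an unset flag at page end is treated as an abandon.

```javascript
// Hypothetical dead-man switch: the page sets this flag from its own
// JavaScript once it considers the load "good enough" (e.g. after its
// own key custom metric fires).
const state = { notAbandoned: false };

// Called by the site's code when its success condition is met.
function markNotAbandoned() {
  state.notAbandoned = true;
}

// At page end, the browser (or analytics) would classify the load:
// an unset flag means the page never reached the site-declared success point.
function classifyOnExit() {
  return state.notAbandoned ? "non-abandon" : "abandon";
}

console.log(classifyOnExit()); // "abandon" (flag never set)
markNotAbandoned();
console.log(classifyOnExit()); // "non-abandon" (flag set)
```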
J: We also do something where we report abandons that happened prior to onload, but we annotate which ones reached FCP. The purpose of doing that is that by onload you usually guarantee that your analytics script has run, and so you kind of remove that middle ground where somebody could reach FCP but not quite make it to where the analytics ran. So then the definition of abandoned would still be based on FCP, but you would still have the data of, okay, how many people reached FCP.
B: Yeah, it's the incentives part, right? What are we trying to incentivize if we tell developers to measure this? Because a developer can fairly easily make onload very fast for their site, basically by postponing everything until after the window's onload, and that way they could reduce their abandonments-until-onload per our definition, but that's not really something that would improve the user experience. But yeah, it's worth considering if there's some point where we could say that.
J: To give a specific example of a scenario I'm concerned about: a user sees a splash screen, the splash screen is spinning for 15 seconds, and then they close the tab, and that experience was not tracked by anything, because it's not an abandon by this definition and the analytics hadn't run at that point. So the site would have no way of knowing how many users were in that state, right?
C: For what it's worth, I think it's worth maybe separating the problem in two. There are multiple stages at which abandonment can happen, and they differ in your ability to report it. Up until FCP, your script could definitively not have run, and that's the gap that this API can solve quite clearly; there's no other way than the browser providing this information.
C: Then there is a gap where the page is being constructed but we don't know if your scripts have run, and that's the part we're discussing. I'd like to tease those apart a little bit; I don't think we should bundle them together. I would still say, as an MVP, even if we only solve the first part of the problem, that's a huge improvement, because, as we know, that's already six to eight percent of navigations on the web. That's a big hole, right?
K: Can I tack on? Because I was going to offer a similar suggestion. I think what Michelle pointed out, for the way they measure abandonment, is that it's more to do with user experience and conversion to some expected outcome, and that term abandonment has a predefined context there, which is: the page has already loaded, analytics has already loaded, but they have their own next phase that they want to get all users to, and it's not just performance that gets users there. It's also things like the text.
K: Is it enticing enough to keep the user engaged until they reach that moment? At that point it really needs to be custom per page, and so I wonder, if we treat this abandonment API in that overall context, whether just the wording will be confusing over and over again, because this is really a failure to get to the point of being able to measure with the performance APIs. It's not, you know, users abandoning navigation; it's a different form.
G: To add on to that, we might really want to consider different names for the API, because I was telling somebody that works on a website about it, and they thought of cart abandonment, which could happen over multiple successive page loads: you have a cart that comes with you and you abandon it. So that concept is really different; feedback on the name would be welcome.
B: As far as I know, Origin Policy is moving forward, at least in Chrome, soon, so I don't think it will block our timeline if all goes well with it. It's just that the infrastructure required to correctly and securely support network-centric reporting requires Origin Policy, as opposed to the current way NEL works, which apparently does some strange caching that we would ideally want to avoid. So that's why, basically, every network-centric reporting API would rely on Origin Policy from now on.
A: Yeah, so this is not really a presentation as much as it is just a conversation starter; I want to just, you know, talk about the implications of the BFCache. So, first of all, why should we care about BFCache navigations? Browsers can implement a BFCache, and Chromium is currently in the process of implementing and shipping one.
A: Similarly, sites can change to be BFCache-eligible by avoiding registering the unload event, or, the other way around, they can regress and somehow throw themselves out of the BFCache, and that can result in significant user-experience changes that currently we do not capture at all. We talked a bit on calls in the past about what we can do about that.
A: So if we consider, for example, a BFCache navigation as being a new page load, then we could override, for example, all the navigation timings and write new ones instead. On the one hand, that seems destructive and can eliminate data that may or may not have been collected yet, so I'm not a fan on that front. I'm also not sure that it would be web-compatible to do that, because it can violate expectations of current analytics scripts.
A: We would have more than one navigation in that array, and that can theoretically violate expectations; we'll have to look into that, and worst case we can spin those off into a new array that contains something like a back-forward navigation timing, or whatever it is that we resolve to. But ideally, theoretically, it's possible that we'll be able to just trigger more navigation timing entries for these new page loads.
A: So if the user navigated to the page, then the page moved into the BFCache and then moved out of it, potentially multiple times, we want to be able to correlate the different user actions, and the timings that go along with them, to those page loads. We don't necessarily want to bulk all those entries together; we want analytics providers to be able to split them apart. So we don't necessarily need different timelines or different arrays in which we store those entries, but we need to be able to split them out from one another based on those page loads. Which brings me to the subject of time origins: right now we have a single time origin that all the metrics are based on, and I don't think we want to have multiple time origins.
A: At the same time, maybe we want to be able to say that, you know, starting from 3,000 milliseconds in, this is a separate page, and make sure that those metrics can be filtered based on when they happened compared to the original time origin, or somehow enable filtering of entries based on multiple virtual time origins, without modifying the actual time zero.
A: That is one option. Another option is to somehow have something similar as part of the visibility observer idea that Nicolas was about to talk about, but didn't. Maybe that makes sense, because there are a lot of similarities between visibility changes and back-forward changes, so maybe it makes sense to wrap them up into a single observer. Because right now, the way to know whether your page has gone into or out of the BFCache is through the pageshow and pagehide events, which are different from all the other APIs that we want developers to be dealing with. So maybe it makes sense to duplicate that information into something like the visibility observer, or to expand the visibility observer to also cover the BFCache transitions.
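For reference, the pageshow/pagehide mechanism mentioned here distinguishes BFCache restores via the event's `persisted` flag; a small sketch (the `report` hook is hypothetical):

```javascript
// Classify a pageshow-like event: persisted === true means the page was
// restored from the back/forward cache rather than loaded fresh.
function navigationKind(event) {
  return event.persisted ? "bfcache-restore" : "fresh-load";
}

// In a browser this would be wired up as:
//   window.addEventListener("pageshow", (e) => report(navigationKind(e)));
console.log(navigationKind({ persisted: true }));  // "bfcache-restore"
console.log(navigationKind({ persisted: false })); // "fresh-load"
```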
M: I'd say that the current stance on visibility, which is to filter it out, is not necessarily ideal. For BFCache, it might be interesting to see if there are issues in the BFCache scenario. You know, if we're just going to take the same approach as visibility and just filter it out, then I think we might be missing some class of issues that could happen when things are loaded from the cache.
J: I wanted to also restate the problem, because I don't know if it's been stated clearly yet. We're thinking about this on the Chrome side because we have all of these metrics, and various internal dashboards that are tracking page-load performance, and our concern is about what happens when Chrome ships the BFCache: things that used to be full back-forward navigations would likely become very fast, because most of the resources would be coming from the cache.
J: Maybe, since they've had the BFCache for so long, it's not something people think about that much; or think about how somebody would compare performance in Safari or Firefox versus performance in Chrome, given that maybe there are more navigations on Chrome because of the lack of a BFCache. So I don't know if that was clear in the original framing of the question, but do any of the analytics providers, given that context, have additional thinking on that?
D: From mPulse customers, we actually haven't had many questions or insights, or even knowledge, of the BFCache. I maybe remember one or two conversations I've had with our customers about it, but it doesn't seem that there's really much education, I guess, on how it may affect the metrics that are being tracked, and that kind of thing. And part of that is probably on the analytics providers actually providing it as a, you know, first-class dimension or thing that they can track. So there's certainly work for us to do here.
A: Yeah, but if we were to report a new entry type, then we would mint a new one, because right now you have something like a history navigation, but it's unclear whether that is a regular history navigation or a BFCache history navigation. So we would probably mint a new type that explicitly says BFCache, if we were to, you know, trigger a new navigation timing entry.
K: I will say that the BFCache kind of question, to me, raises a higher-level issue that I've been thinking about in terms of SPAs. For lots of reasons it would be good to track BFCache navigations, in order to double-check, because, like you say, a browser could regress performance and you wouldn't catch that, or, if you're just comparing and there's a big cliff, you don't know where that's coming from. So for all those reasons I think it is important to track it specifically.
K: On the other hand, there's this problem of: the total number of navigations a user commits will influence your overall metrics and how you normalize. But there are so many things that could influence that. The existence of these browser features is one, but you could have a very long page with lots of content on it, where the user just scrolls through and gets immediate access to it, versus a lot of buttons.
K: So it's basically user behavior: how many top-level navigations they need to commit during one session is important to the overall experience. And right now, when we're just treating multiple loads all together, it doesn't necessarily give a representative view, because you could change the UX of the page, or these features that the browser does could all of a sudden cut the number of navigations, or a framework could change the way the application handles it, but the user doesn't care. So I'm on the fence.
K: So I think that page-load performance of a back-forward cache navigation, we expect, would be better than an initial navigation. And whether I open a navigation in a new tab and close that tab to get back, or whether I navigate and hit back, doing a real back navigation, or a BFCache navigation: the overall perf story in all of those scenarios I would expect to be similar. It should be representative of the page performance, as opposed to how many actual navigations were done and how good a strategy was.
A: I think that the back-forward cache is somewhat different, because if you are not in the back-forward cache, then it's not an instant experience, so you're comparing an immediate page load with a not-so-immediate page load. But you make a good point that there are also some cases where going back from the background to the foreground, because the browser decided that you no longer have memory at your disposal, can also result in a full load, or a partial one, in some cases. So it's definitely an interesting comparison.
C: There are many factors that influence the number of navigations, and it's not really our goal to control for that, nor can we. As you said, based on the UX, you know, every new paragraph might be behind a next button, or the page reloads. What we're discussing here is a particular context where we've established that, basically, the industry is ignoring this.
C: So there's a short-term fear here that we have, which maybe is unfounded, but it is a fear: when we ship this thing in Chrome, as Phillip said, we're going to ship it because we actually think that it's going to improve user experience significantly.
C: But it may look, externally, like performance has regressed, because all of a sudden the number of navigations, the denominator, has changed, and it looks like performance has actually regressed, whereas the true story is that it has actually improved. And second, because sites are not looking at this dimension at all, they're also not optimizing for this experience, so you may have a page that could be eligible for the BFCache, but you have some, I don't know, unload handler that prevents it from being so.
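As an illustration of that last point (a sketch, not from the call): registering an unload handler typically makes a page ineligible for the back/forward cache, and a common fix is to move last-moment work to pagehide, which also fires when the page is about to enter the BFCache. The flush helper below is hypothetical.

```javascript
// Registering an unload handler generally blocks BFCache eligibility:
//   window.addEventListener("unload", flush);   // avoid this
//
// Preferred: flush state on pagehide instead. The helper guards against
// the handler firing more than once across hide/restore cycles.
function makeFlusher(send) {
  let flushed = false;
  return function onPageHide() {
    if (!flushed) {
      flushed = true;
      send(); // e.g. navigator.sendBeacon(...) in a real page
    }
  };
}

let calls = 0;
const onPageHide = makeFlusher(() => { calls += 1; });
onPageHide(); // page hidden (possibly entering the BFCache)
onPageHide(); // duplicate event: flush runs only once
console.log(calls); // 1
```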
K: Yeah, I think that's a great summary. I'm definitely a proponent of reporting BFCache navigations, because developers can influence eligibility, and it's good to track regressions in performance that browsers may have. However, this is just one of many examples where you can influence the total number of navigations, and as for just naively fixing this temporary gap, I don't know why it's any more special than the other cases. It would be good to consider.
A: Okay, cool, I'll ask around and then we'll see; if everyone can make it, that's great. Otherwise, if it's more convenient, Tuesday. Cool, thanks, and see you in a couple of weeks. Thanks, everyone.