From YouTube: WebPerfWG call 2021 02 18
A
Okay, so as always, this is being recorded and will be posted online. Take it away, Nick.
B
Yeah,
thanks
for
everybody
today,
first
off,
let's
figure
out
the
time
for
the
next
meeting.
I
think
two
weeks
from
now
would
be
march
4th
at
the
same
time
any
concerns
with
that
date
at
all.
B
Otherwise, we'll go with that for now. So today we have a couple of things on the agenda. I think Annie would like to talk about layout shift normalization; that was something we unfortunately didn't have time to reach during the last meeting, about a month ago.
B
We have some issues with Element Timing and HR Time, and something from Yoav about bfcache reporting and navigation IDs. So plenty of things to discuss. Annie, if you are up for it, would you want to go first?
C
So I wanted to talk today about the layout shift metric. Are my slides presenting? I'm sorry.
C
It provides data about individual layout shifts on a page, and then we have a metric called cumulative layout shift, which is basically just the sum of the individual layout shifts. We think that our top complaint right now about the Core Web Vitals is the fact that it can grow unbounded, and that's causing a lot of developer frustration, particularly for people who have very long-lived pages.
C
So
we
did
a
lot
of
brainstorming
about
how
we
could
address
the
issue
in
the
metric.
We
tried
like
about
200
strategies
locally
and
evaluated,
how
they
would
rank
some
pages
that
we
had
like
a
user
group
rated,
and
we
did
a
pretty
long
blog
post
about
this.
I
can
answer
questions,
but
it
might
be
easier
to
read
it,
but
we
tried
variants
of
like
how
windowing
cls
different
variants
on
how
we
would
write
statistics
about
layout
shift
like
averaging
or
max
or
percentiles.
C
We
tried
some
non-windowed
approaches
like
just
the
average
over
time
and
there's
a
couple
of
top
solutions
that
came
out
of
that.
One
of
them
is
called
a
sliding
window
so
basically
like.
If
you
look
at
these
blue
lines,
there
are
layout
shifts
individually
that
shifts
over
time.
C
Oh
sorry,
I
have
a
the
wrong
graphic.
A
sliding
window
is
just
a
window
that
slides
over
time,
so
you
would
just
you
know
the
window
will
continually
update
over
time.
I'm
really
the
graphic
is
in
the
the
blog
post,
we're
considering
300
milliseconds
and
1
000
milliseconds
sliding
windows,
and
then
this
is
actually
a
session
window,
so
it
basically
bubbles,
as
you
have
layout
shifts
to
kind
of
contain
them
until
it
hits
a
gap.
C
But it would stop when there's a gap, instead of just being the max window. The variants that performed well were, for sliding windows, looking at the sum of the layout shifts in the window, and for session windows, the max window, taken either by summing the layout shifts in the max window or by averaging the layout shifts in the window.
C
So
those
were
kind
of
the
the
four
that
performed
really
well
in
our
initial
tests.
And
then
we
implemented
those
in
chrome
and
looked
at
chrome,
stable
data
to
kind
of
give
us
like
more
details
like
looking
at
a
very
large
scale,
analysis
we're
still
in
in
the
midst
of
that
analysis.
C
But
I
want
to
share
some
initial
findings
that
all
strategies
is
expected,
reduce
the
correlation
with
the
time
spent
on
site
and
then,
when
we
ranked
like
cls
sites
by
cls
and
ranked
sites
by
each
individual
strategy.
The
sites
that
rose
to
the
top
with
the
new
strategies
were
all
sites
that
that
are
very
long
lived.
So
it
definitely
all
of
the
variations
on
strategies
are
accomplishing
those
goals.
C
But
we
did
find
it
that
if
we
take
the
the
average
in
a
session
window
as
opposed
to
looking
at
the
max
in
the
window
as
you'd
expect,
if
you
have
a
very
small
layout
shift
and
a
very
large
layout
shift
like
the
average
is
going
to
to
normalize
that
out-
and
we
thought
that
would
be
very
confusing
for
developers
like
if
they
fix
that
tiny
layout
shift,
then
their
score
will
get
worse.
C
So
we
decided
that
we
probably
don't
want
to
move
forward
with
the
average
one,
and
we
also
found
with
a
300
millisecond
window
that
that
it
was
too
short
that
kind
of
the
there
could
be
a
burst
of
layout
shift
from
one
of
them
like
a
page
load
or
a
scroll
or
like
an
interaction,
sp
nav,
and
that
it
would
be
broken
up
into
kind
of
multiple
events.
C
So
we're
really
looking
at
at
a
max
over
a
session
window
or
a
max
over
a
sliding
window,
and
I
wanted
to
just
give
a
quick
update
and
and
ask
for
feedback
on
that.
We
we
do
have
like
the
blog
posts,
and
I
know
some
people
have
already
sent
in
feedback,
and
I'm
really
thankful
for
that.
Two
questions
in
particular
we
had
is
that,
if
we're
still
like
basically
using
the
cumulative
strategy-
and
it
it's
just
capped
by
a
window,
do
people
think
it
it
would
make
sense
to
rename
the
metric.
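The "max over a session window" strategy Annie describes can be sketched as follows. This is a minimal illustration, not the final definition: it assumes a 1,000 ms gap and a 5,000 ms cap, both of which were still under discussion, and plain `{time, value}` objects standing in for real layout shift entries.

```javascript
// Sketch of the "max over a session window" strategy.
// Assumptions (not final spec): a new session window starts after a
// 1s gap between shifts, or when the window would span more than 5s;
// the page score is the largest per-window sum.
function maxSessionWindow(shifts, gapMs = 1000, capMs = 5000) {
  let best = 0;               // max window sum seen so far
  let sum = 0;                // running sum of the current window
  let windowStart = -Infinity;
  let prevTime = -Infinity;
  for (const { time, value } of shifts) {
    const gapExceeded = time - prevTime > gapMs;
    const capExceeded = time - windowStart > capMs;
    if (gapExceeded || capExceeded) {
      // start a new session window at this shift
      sum = 0;
      windowStart = time;
    }
    sum += value;
    prevTime = time;
    best = Math.max(best, sum);
  }
  return best;
}

// Two bursts of shifts: the second burst is worse and sets the score.
const shifts = [
  { time: 0, value: 0.05 }, { time: 200, value: 0.05 },   // window 1: 0.10
  { time: 3000, value: 0.2 }, { time: 3500, value: 0.15 } // window 2: 0.35
];
```

Replacing the running sum with an average would give the "average over a session window" variant that, as noted above, rewards removing small shifts being dropped from a window.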
C
Yeah, yeah. So if you have something like an infinite scroller and it's janking a tiny bit on each scroll; think of social sites, imagine Instagram or Twitter. I'm scrolling through my feed, just scrolling and scrolling and scrolling, and those tiny layout shifts can keep going. Windows go above five seconds, and so we're almost certainly going to have some kind of cap to prevent those sites from scoring too poorly.
C
Yeah,
we
saw
a
lot
of
like
kind
of
poorly
animated
like
scores
popping
in
and
it
wasn't
a
great
experience.
But
have
you
been
sitting
and
looking
at
scores
popping
in
for
five
minutes?
10
minutes
like
it's
not
getting.
B
And Annie, for clarification: for the session window, like that diagram that we were just looking at, the layout shifts within each of those three sessions would be summed?
B
And then out of those three windows, you were saying that the sum of that... I mean, I guess it's still just cumulative layout shift, but if you take the average, that's where you're saying, if you take one of them out, then the other one might go down or get worse.
B
How are you planning on aggregating those three windows? I guess that's what I'm confused about.
C
I'm sorry; we're going to take the maximum window, the maximum out of all of them, the max one that you ever saw.
C
Yeah
we've
looked
at
statistics
just
in
like
you
know,
other
statistics
like
the
the
75th
percentile
experience,
the
median
experience
things
like
that
number
one,
it's
hard
for
a
developer
to
reason
about
like
what
exactly
are
we
grading
and
then
number
two?
We
found
that
only
very
high
percentiles
correlated
with
like
our
kind
of
simulated
user
experience
and
then
we're
talking
about
like
a
30
second
experience
and
you're,
saying,
like
the
95th
percentile
session
window
of
five
seconds,
like
that,
that's
an
interpolation
right,
so
you're
essentially
saying
the
max
yeah.
E
Yeah, I think there are very rare situations where maybe a large sliding window would capture the global maximum, but because of the way the timings are set up, session will almost always do the same as well. And then session windows have variable amounts of time. So really the biggest difference is: is a fixed time slice important, or, if the shifts are really closely clumped together, is a user really intuitively clumping them anyway? That's, semantically, the big difference.
B
And
certainly
if
we
had
better
browser
support
or
something
that
would
automatically
group
user
experiences
and
user
interaction,
maybe
we
could
even
group
by
user
experience
rather
than
just
these
windows,
but
I
think
we're
trying
to
look
at
it
from
more
of
the
raw
point
of
view
rather
than
like
the
when,
after
the
user
clicked
x,
app
and
kind
of
point
of
view,.
A
Yeah,
just
one
comment:
ishan
had
his
hand
raised
earlier,
so
I
just
yeah.
We
don't
typically
run
a
queue
on
regular
meetings,
but
yeah
so
feel
free
to
just
speak,
but
yeah.
G
So, we read the blog post with a lot of interest.
G
You
know
our
feedback
is
that
to
set
the
stage
a
lot
of
our
customers
on
our
platform
are
e-commerce
high-end
and
you
know
we're
really
focused
on
the
flow
and
use
case
of
e-commerce,
one
so
infinite
scroller
is
one
on
a
category
page,
but
another
common
one
is
a
user
going
back
and
forth
from
a
category
page
to
the
product
page
and
going
back
and
forth
between
those
really
really
fast
that
you
can
do
in
a
single
page
application,
and
so
that's
that's
kind
of
what
drives
our
interest.
G
So
we
prefer
session
windows
out
of
all
the
options.
There's
a
little
concern
that
if
you
have
a
tap
and
the
user
is
going
back
and
forth
between
you
know
into
a
product
page
back
to
the
category
into
a
product
page
really
quickly.
You
could
actually
get
a
bunch
of
layout
shift
in
a
single
interaction
or
in
a
single
group
of
those
interactions,
and
maybe
you
know
when
a
user
tap
occurs.
G
You
automatically
just
end
the
window
there,
so
we
don't
like
uncapped
windows
for
for
that
reason,
as
well
as
we
feel
like,
there
are
some
questions
we
had
internally
about.
You
know
users
leaving
windows
open
for
long
periods
of
time.
G
You
know,
especially
if
it
goes
into
the
background
or
even
they
leave
it
up
and
they
have
the
phone
open,
and
we
also
you
you
addressed
percentile.
That
was
going
to
be
my
other
question.
We
felt
like
percentile
over
maximum
would
make
sense
because
they
might
go
through
various
types
of
pages.
G
We
figured
okay,
that
makes
sense,
so
that's
our
feedback
and
and
where
we
had
so
the
most
concrete
one
is
like.
Maybe
ending
your
session
windows
when
a
tap
occurs.
I
know
you
get
the
500
milliseconds,
but
it
may
turn
out
to
be
of
a
bunch
of
transitions
that
go
past
500,
but
they
really
logically
scrolled
through
or
transitioned
through
five
or
six
pages,
and
it
just
keeps
establishing
the
window
because
you
keep
extending
the
sense
of
session.
F
I'm not going to get into bikeshedding about it, but to me it sounds like you have cumulative layout shift with one-second slices over a five-second max, and then the max of those. If that's what's actually going on there, then maybe a short version of something like that; but yeah, I would say definitely rename, because this feels different.
B
And they are all reporting something ever so slightly different in some cases, but I would agree that this metric feels different enough; whatever you want to call it, the "worst cumulative layout experience" or whatever, it would make sense to have it be something else entirely.
B
Awesome, thank you for bringing that to us. Okay, why don't we head into... we have some issues. Well, actually, do you want to present your bfcache stuff before the issues or afterward?
A
I
have
no
strong
preference,
I
can
yeah.
I
can
do
that
now.
Briefly,
it's
it's
a
very
short
presentation.
Okay,
so
you're
can
you
take
over
subscribing
for.
A
Okay,
let
me
then
copy
it.
A
Okay, cool. So: bfcache reporting and navigation IDs. At the last meeting a month ago we talked about bfcache reporting, so just to recap for folks who weren't around: essentially, we want to align metrics with user experience, and that includes bfcache navigations.
A
We
want
bf
cache
navigations
to
fire
new
entries
rather
than
override
old
entries.
We
want
a
single
time,
origin
for
all
the
navigations.
In
the
document
to
prevent
confusion,
we
agreed
that
the
best
way
to
go
would
be
to
re-fire
or
fire
new
navigation
timing
entries
with
a
new
navigation
type.
A
We
also
concluded
that
it
would
be
web
compatible
to
do
that
as
well
as
fire
new
pain,
timing
entries
and
other
yeah
like
pain,
timing,
entries
and
input
event,
and
then
the
question
is:
how
do
we
correlate?
How
do
we
correlate
those
new
pain,
timing
entries,
as
well
as
other
like
resource
timing,
entries
and
other
entries
that
we
have
to
that
navigation
compared
to
older
navigations?
A
So what I'm proposing here is to add a new navigation ID on the performance entry itself, extending the IDL for PerformanceEntry. That navigation ID will increase incrementally with each new navigation that the document goes through. That would work well for bfcache now, but it would also work well for other navigation types in the future; in particular, what we have in mind is some sort of soft navigation type for SPAs.
A
That
would
work
well
with
the
proposed
app
history
api.
That
is
very
interesting
on
that
front.
This
is
not
yet
like
th
that
work
is
super
early,
but
it
seems
to
fit
the
same
model,
which
is
good,
so
the
proposed
spec
changes
in
order
to
make
that
happen
are
what
you
know.
What
I
haven't
had
in
mind
is
to
add
a
navigation
id
on
site
short
to
performance.
A
Also add a counter, to the same effect, on the document, initialized to zero, and then increment that counter every time we hit a new navigation type: initially for bfcache, and in the future we may also increment it for soft navigations. Then, when queueing any new performance entry, we simply initialize its navigation ID from that document variable. That would have the effect that every new performance entry is easily correlatable to the latest navigation timing entry that preceded it.
And just to illustrate that as a code example: let's say we want to grab all the entries for each one of the navigations that happened on the page. We collect all the navigation entries, then we collect all the entries on the page in general, and then simply iterate over the navigations and filter for entries whose navigation ID is identical to that navigation timing entry's navigation ID.
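The correlation described here might look roughly like this, assuming the proposed `navigationId` field existed on every entry. It is a proposal, not a shipped API, so plain objects stand in for real performance entries.

```javascript
// Sketch of grouping performance entries by the proposed navigationId.
// navigationId is not a shipped API; these objects mimic real entries.
function groupByNavigation(navigationEntries, allEntries) {
  return navigationEntries.map(nav => ({
    navigation: nav,
    // every entry queued after this navigation carries its id
    entries: allEntries.filter(e => e.navigationId === nav.navigationId),
  }));
}

// In a browser this might be fed from the real timeline, e.g.:
//   const navs = performance.getEntriesByType("navigation");
//   const all  = performance.getEntries();
const navs = [
  { entryType: "navigation", navigationId: 0 },
  { entryType: "navigation", navigationId: 1 }, // bfcache restore
];
const all = [
  { entryType: "paint", name: "first-contentful-paint", navigationId: 0 },
  { entryType: "resource", name: "app.js", navigationId: 0 },
  { entryType: "paint", name: "first-contentful-paint", navigationId: 1 },
];
const grouped = groupByNavigation(navs, all);
```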
A
We can do something with them. That is obviously a simplified version; typically you would probably want to use a PerformanceObserver for some of the entries, but the logic would be similar, and simple filter logic will enable us to split entries based on navigations. So that's all I had as far as slides go, and I would love to hear your feedback on that, and whether you think this makes general sense.
A
Yeah, I would prefer not to proliferate that data to all new performance entries, because that can make them heavier. It's better to just have an ID, treat it as a pointer, and then you're able to filter entries based on it and get the navigation type, or whatever other information you want, from the navigation entry directly.
H
Yeah
seems
reasonable.
F
Job
I
do
think
this
is
a
way
to
disambiguate,
which
is
a
concern.
The
last
time
we
were
talking
about
bf
cash,
so
I
really
appreciate
the
work
that
went
into
this.
I
think
this
is
a
way
forward
for
sure
and
maybe
could
be
used
for
soft
maps,
so
I
like
that
is
there.
Is
there
like
a
default
number,
so
I'm
just
curious
about
backwards,
compat
or
like
how
this
works
in
the
future.
If
all
all
entries
are
gonna
have
the
miracle
like
it
and
then
that's,
maybe
not
what
people
are
expecting.
A
So what I was thinking is that zero would be the default: if we had that navigation ID property in the PerformanceEntry IDL today, without bfcache support, it would just be zero all the time, and then it would increment over time. From a backwards compatibility perspective...
A
...I doubt that extending the IDL would break anything. I don't think content today looks for random properties and breaks if it finds them; I would hope not, anyway. So I don't think backward compatibility would be an issue, but maybe you're suggesting forward compat issues that I'm missing?
B
So, I know that the performance timeline interface itself, like getEntries and such, we've decided to put less focus on, or deprecate, or not quite that; but your example itself used it, which suggests that maybe there are some use cases where people would want to always just segment by the current navigation ID. Would there be any desire to, the way we have getEntriesByType, have a getEntriesByNavigationId, or a filter that you pass to it, instead of having to do it in JavaScript?
B
Maybe
maybe
not
I'm
just
taking
like
for
a
really
long
running
page
like
let's
say
an
sba.
The
user's
been
up
for
a
couple
days
on
their
gmail
and
they're
clicking
back
and
forth
a
lot
you're
going
to
have
thousands
of
these.
Maybe.
A
Yeah,
so
I
can
imagine
so
I
so
the
code
I
wrote
is
just
for,
like
I
was
aiming
for
rap,
something
that
would
fit
on
a
slide
so
perf
for
a
performance
observer.
I
would
do
something
similar,
but
then
do
the
filtering
as
part
of
the
observer
callback
and
then
you
know
add
it
to
an
array
with
all
the
navigation
entries
related
to
it
or
yeah,
even
not
even
go
over
the
navigation.
A
Just
you
know,
have
a
an
array
that
splits
on
navigation
id
and
then
like
each
cell
in
the
navigation
id,
is
an
object
like
another
array
that
contains
all
the
entries
that
is
fed
from
the
performance
observers.
So
I
wouldn't
put
too
much
emphasis
on
that's
this
particular
example.
Regarding your question about whether we want a getEntriesById:
A
I generally think that all these getEntriesBySomething methods are a pattern we had before we had filters, and filters enable people to split on whatever dimension they want. And to be honest, even from an internal implementation perspective, they're not significantly more efficient than something external that filters in JavaScript.
B
I think that's fair. I was trying to think whether there were any use cases where people would be using the performance timeline and want to split it, rather than just using a PerformanceObserver, and in every case I can think of you would just want to be using the observer and doing your own filtering anyway. So, okay.
A
Yeah,
it's
a
mistake.
It
clears
everything,
but
the
problem
is:
if
you
have
a
page
with
multiple
third-party
actors,
we've
had
cases
where
one
steps
on
the
feet
of
the
others
and
clears
buffers
in
like
accidentally.
A
Yeah,
so
I
think
it's
better
to
have
like
performance
observer
enables
you
to
register
whenever
you
care
about
things
and
unregister.
Whenever
you
don't
and
there's
no
real
need
to
clear
the
buffers.
H
Would
it
make
sense
to
to
change
the
get
entry
by
what
is
it
by
type
right
and
include
the
navigation
id
parameter?
So
it
would
do
the
filtering
for
you.
A
So
that's
yeah.
That's
basically
like
the
the
question
that
nick
asked
more
or
less.
I
think
it
that
no
it's
fine,
I
I
think
it
doesn't
really
make
sense,
because
we
have
filters
today
that
can
do
that
with
relatively
small
amount
of
code.
A
I agree that filtering by ID could theoretically be better, but looking at at least Chromium's current implementation, it's not like we're doing any better internally, at least for getEntriesByName.
A
It's
just
yeah,
it's
not
significantly
different
from
getting
the
entire
list
and
then
filtering
it
afterwards,
so,
but
but
but
I'm
open
to
like.
If
people
think
that
it
would
be
a
useful
api
to
have
that,
I'm
open
to.
D
The
other
thing
is,
for
example,
if
you
have
an
observer
and
you
were
interested
in
a
specific
navigation
id
then
would
that
be
something
useful
or
do
you
think?
That's
not
a
common
use
case.
A
So maybe it's worthwhile to decouple that, because there's no real need to tie those two together; people can filter now, and then move to a newer, shinier API like getEntriesByNavigationId, if that's something we'll add in the future. So does it make sense to decouple those discussions, and then potentially we can add that sugar on top?
H
Or expose it not as an API but as an array property; so we could have, say, `navigations`, and then access it by the navigation ID directly, like an array indexer, to get the entries.
A
So
right
now,
if
you
get
the
so
performance,
get
entries
by
type
navigation
gives
you
an
array
like
thing
that
you
can
already.
You
know
go
to
oh,
but
but
you
want
all
the
entries
related
mapped
to
that.
H
Right
to
that
navigation
id
because
you're
introducing
a
new
concept,
a
new
grouping
concept,
called
navigation
id,
which
I
find
useful,
I'm
just
thinking
what
would
be
a
useful
and
easy
way
to
process
it
so
could
be
an
api
or
it
can
be
just
a
direct
lookup
from
a
map.
A
I think it would be interesting to discuss this once we have some implementation experience, and you and Nick and others can play around with those concepts, see what's the best way to split things...
A
...see what API shapes or usage patterns emerge in the wild, and then try to converge on that.
E
So, speaking of being forward-looking: I think that we absolutely should have a navigation ID, and I like this proposal. But to Noam's earlier point about perhaps making it a more complicated structure: fundamentally, this is about segmenting perf entries based on their logical grouping, and I do think that Benjamin, at least in the past, has raised: what about visibility, or long periods of idle time, or app...
A
Yeah, I'm worried about creating an extremely complex machinery when we already have JavaScript and filters that enable any custom slicing and dicing that may be needed on the client side. But yeah, again...
D
So
unless
we
plan
to
do
that,
then
yeah,
then
I
would
see
no
need
for
the
visibility
id
yeah.
I.
A
Yeah, and visibility observers will give you even better ways to get that information, even retroactively.
E
So
so,
but
but
it's
a
similar
question
right,
like
navigation
entries,
give
you
the
ability
to
listen
to
them
and
slice
the
data
by
listening
to
the
events
looking
at
the
timeline
looking
at
the
start
times
and
slicing.
So
by
that
argument,
you
can
also
do
the
same
thing
manually
with
the
thing.
So
this
is
a
convenience
thing,
but
I
I
concede
that
it's
there.
There
will
be
a
way
to
raise
this
question
later
without
having
to
introduce
the
more
complex
tricks
you
know,
yeah
out
there.
A
Yeah
and
I'm
not
sure
that
if,
if
we
were
to
consider
in
the
future,
you
know
backgrounding
and
forwarding
foregrounding
is
a
new
thing.
Maybe
we
can
just
increment
the
id
in
those
cases
if
we
want
to
expose
new
things
when
that
happens,.
A
That
is
correct,
yeah,
but
I
don't
know
that
like
we
would
actually.
A
Yeah,
I
guess
that
wouldn't
be
super
productive.
What
I
suggest
as
next
steps
is
for
me
to
create
various
pr's
on
the
various
specifications
related
to
that,
because
this
is
a
change
that
touches
on
performance,
timeline,
html
and
pain
timing,
at
least,
if
not
more
so
I
will
open
prs
and
then
maybe
send
them
over
to
the
list
for
greater
scrutiny.
A
And
then
we
can
have
discussions
on
those
on
on
those
pr's
and
specifically,
if
we're
talking
about
bike
shedding
it's
the
performance
timeline,
pr
that
will
be
of
interest.
Does
that
make
sense.
F
Yes,
hey.
I
had
a
just
I'm
probably
wrong
here,
but
I'm
just
curious
like
if
we
have
monotonically
increasing
session
ids
then
isn't
the
largest,
the
one
that
could
be
visible
and
if
it's
not
the
largest,
then
it's
definitely
not
visible.
Or
is
this
whole
visibility
discussion,
something
we
really
don't
want
to
have
I'm
intrigued
by
what
michael
said.
I
think
like
this.
A
Yeah,
so
I
I
think
it
can
or
like
visibility
is
not
necessarily
like
users
could.
Theoretically,
click
on
the
back
button
then
switch
to
a
new
tab
before
you
know
very
quickly,
and
then
this
whole
bf
cache
navigation
would
be
invisible
for
most
of
its
time.
Even
if
you
know
a
small
number
of
milliseconds
would
it
would
still
be
visible.
It's.
A
It's
interesting
to
think
about
yeah.
I
I'm
not
sure
that
we
want
to
treat
visibility,
visibility,
sessions
as
new
navigations
and
then
expose
specific
things
on
them.
A
I guess I'm missing concrete examples of what we would want to expose for those visibility sessions.
A
And-
and
I
agree
with
you
that
in
principle
it's
an
interesting
area
to
explore,
but
I
don't
know
yeah,
I'm
not
sure
what
to
do
with
it
right
now.
F
That's
fine.
I
just
wanted
to
to
note
that
what
michael
said
there
might
anything
we
can
do
to
improve
clarification
of
visibility
would
be
with
wow
and
I
feel
like
this
is
solving
other
problems,
that's
great,
but
I
like
the
fact
that
it
might
be
able
to
discard
a
whole
bunch
of
what
is
visible
from
the
solution
set.
I
feel
like
we.
If
we
could
get
that
as
a
bonus,
then
let's
take
it,
you
know.
That's
that's
all.
I
want
to
say.
A
Cool, okay. So if there's nothing else, I think we can move on to the next issue.
B
So
we
are
going
to
jump
over
to
an
element,
timing
issue.
Let's
see
we
have
about
10
minutes
left,
so
we
can
probably
get
through
two
or
three
of
these
issues.
So
this
is
element.
Timing,
issue,
number
54.
A
I
can
speak
speak
to
that,
probably
essentially
right
now
for
element
timing.
We
only
expose,
so
we
expose
the
load
time,
which
is
identical
to
the
own
event
and
what
we
already
expose
in
resource
timing,
and
then
we
expose
the
render
time
for
the
image.
But
it's
the
render
time
for
all
the
bytes
of
the
image.
A
The
image
is
complete
rendered,
and
that
is
the
point
in
time
that
we
exposed
jpeg
excel
folks
who
are
in
the
process
of
finalizing
a
format
that
is
progressive,
are
somewhat
unhappy
with
just
exposing
the
final
time,
because
it
doesn't
show
the
fact
that
they
are
like
that
jpeg
excel
images
present
something
to
the
user
earlier
compared
to
other
formats
like
avif.
A
So
what
john
is
yawn
actually
suggesting
here
is
have
multiple
stages,
where
we
don't
only
expose
the
final
render
time,
but
also
the
time
in
which
we
have
the
dimensions.
We
have
some
placeholder
paint
or
a
blur
hash.
H
Maybe, sorry, if it was like an app: I'm just imagining an app that has multiple thumbnails, and when you click on one it shows the full image somewhere, right? And it might not have the full high-resolution image loaded in memory yet. So you want to know how long it took to show the, how do we call it, the...
D
Then
you
can
already
measure
that
so
this
is
a
use
case
specifically
for
images
that
are
progressive.
I
think.
J
A weird line, where baseline images still draw incrementally, right? So I guess you're separating the incremental painting for progressive images from the incremental painting for baseline images, and not all browsers paint progressively either. But you're almost wanting to define it as an event for each time the entire image area is changed, if you would, as each scan completes, or the time at which the entire image's dimensions are updated.
J
You probably don't want that many callbacks, but I could see it: there are a few companies that use custom scans, even with regular JPEG rather than XL, where the first version of the image is a grayscale version and then they add the color afterwards. I'm thinking Pinterest, but I can't remember exactly. In those cases they might want to know when each scan completes, and that avoids adding logic about what's first, what's good, what's not; they can determine on their own, two scans versus eight scans, or...
A
So if I understand you correctly, instead of having multiple defined entries, you're suggesting just an array of the times at which scans happened: this is scan number one, scan number two, etc.
A
The times of those scans, I guess, yeah.
I
So I'm from Pinterest, actually, and we have a custom metric called Pinner Wait Time that includes the load or rendering times of all the images in the viewport. The definition is supposed to be at 60 quality for our progressive JPEGs. So it's not necessarily that we want to monitor every single scan; we just want to be notified of the specific point at which our quality threshold is met.
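That "notify at a quality threshold" use case might look like the following, assuming a browser one day exposed per-scan render times for progressive images. The data shape here (`quality`, `renderTime`) is entirely hypothetical; nothing like it is specified yet.

```javascript
// Sketch of the quality-threshold use case, over a hypothetical list of
// per-scan render times for one progressive image.
// Each scan: { quality: 0-100 reached so far, renderTime: ms }.
function timeAtQuality(scans, threshold) {
  // first scan that reaches the threshold, if any
  const hit = scans.find(s => s.quality >= threshold);
  return hit ? hit.renderTime : undefined;
}

const scans = [
  { quality: 15, renderTime: 120 },  // rough grayscale preview
  { quality: 60, renderTime: 340 },  // the "good enough" point
  { quality: 100, renderTime: 900 }, // final scan, today's render time
];
```

A metric like the one described above would then take `timeAtQuality(scans, 60)` per image in the viewport, rather than the final render time.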
A
There's no one-to-one mapping, but yeah; we could theoretically say, okay, this is the time at which scans one to two were painted, this is scans three to five.
A
Yeah, yeah. And Giacomo made a comment in the chat that this is also potentially useful for LCP. I think that for LCP we have a different discussion; from my perspective, LCP's semantics are more aligned with the "good enough" paint, at the very least, rather than all the different scans, because...
A
One
of
the
goals
for
lcp
is
to
be
a
single
number
rather
than
an
array
of
scans.
That
can
be
useful
for
some
scenarios,
but
may
not
be
useful
for
everyone,
so
yeah.
We.
We
have
a
separate
issue
related
to
lcp
that
I
don't
remember
the
number
of
where.
Essentially,
I
think
we
somewhat
converged
on
good
enough
for
lcp
semantics.
F
First
paint
is
placeholder
paint
and
then
first
contentful
paint
is
preview
paint.
I
don't
know
how
lcp
actually
falls
into
that,
but
anyway,
I'm
just
saying
like
let's.
A
Okay,
yeah
yeah,
yeah,
okay,
so
you're
saying
there
are
parallels
between
those
milestones
and
the
milestones
for
the
full
page
render.
I
agree
that
there
are
parallels
there
are
also
like.
I
guess
that
depends
on
the
level
of
details
that
we'll
want
to
expose
here
in
terms
of
yeah.
If
we
expose
every
scan
that
that
goes
beyond
what
we
currently
expose
for
the
page
itself,.
A
Unless
you
count
each
time,
we
fire,
like
a
request,
a
request,
animation
frame
as
a
you
know,
a
pain,
but
we
don't
really
expose
any
information
on.
What's
painted
at
that
point,.
B
So
we're
running
out
of
time
or
we're
out
of
time.
I
should
say
how
would
we
want
to
follow
up
on
this
either
through
the
ticket
or
continue
the
discussion
elsewhere?.
A
I
I
think
it
can
be
useful
for
folks
who
have
opinions
or
better
yet
use
cases
for
this
to
to
comment
on
the
issue,
and
you
know
say
this
is
how
I
would
use
this
and
yeah
patrick.
Maybe
you
want
to
comment
like
outline
your
suggestion
of
having
a
more
flexible
structure
that
is
tied
to
scans,
and
then
we
can
see
how
that
expands
to
like
I
I
don't
know
if
this
is
like
jpeg
specific
or
can
expand
to
other
image
types
yeah.
B
All
right,
thank
you,
everybody
for
joining
today
we
had
two
issues
that
we
didn't
get
to
around
hr
time
today.
We'll
move
them
to
next
time
and
we'll
also
talk
about
some
other
things.
Worker
start
and
such
so
thanks.
Everyone
for
joining
today.