From YouTube: WebPerfWG call - September 10th 2020
B: Thank you. So, first thing: the next call would be September 24th, same time, same channel. Any concerns with that at all? All right, two weeks from now. We have a couple of different things today: we're going to start out with a presentation by Nicolas around Largest Contentful Paint, then we can get into a couple of issues around preload and the performance timeline, and certainly, if there's anything else after that, we can try that as well. Nicolas?
C: Sure, yeah. So in the issue, someone asked for a way to know when the LCP has been finalized. So basically that happens when...
C: It could be a keyboard keydown, so it would be pretty complicated to try to figure out when that occurs, in order to determine when the LCP algorithm has stopped.
C: It could be considered a point at which the LCP algorithm has stopped, because any rendering after the page goes to the background will be affected by however long the user was in the background, so any timestamps after that are not useful for the purposes of LCP, right? So that is something to consider as well. So I wanted to present an idea that we've been thinking about, which is to have a final LCP.
C: One problem we have is that right now, it's possible that the latest candidate (or, shall we say, the largest candidate, the one that's the actual LCP) has a smaller start time. This can happen, for instance: consider an image that has no Timing-Allow-Origin, and then a text that is a little smaller, but the render time of text is always precise.
C: So wait, is this a good example? Oh yes. And then the image will have a start time that is the load time of the image, because it has no Timing-Allow-Origin. But entries are, sorry, processed in getEntries() in order of start time; as we all know, they are by default returned in order of startTime. So a developer would need to check that the image is actually larger, to detect that it's the correct LCP candidate.
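A minimal sketch of the bookkeeping this implies for API consumers today. The entry objects here carry only the `size`/`startTime` subset of the real `largest-contentful-paint` entries, and the observer wiring in the trailing comment is illustrative only:

```javascript
// Sketch: pick the true LCP candidate from entries that arrive in startTime
// order. The last-seen entry can have a smaller size than an earlier one
// (e.g. a no-Timing-Allow-Origin image whose startTime is its load time),
// so the consumer has to track the maximum size explicitly.
function pickLcpCandidate(entries) {
  let best = null;
  for (const entry of entries) {
    if (best === null || entry.size > best.size) {
      best = entry;
    }
  }
  return best;
}

// In a browser this would be fed by a PerformanceObserver, roughly:
//   let candidate = null;
//   new PerformanceObserver((list) => {
//     candidate = pickLcpCandidate(
//       [candidate, ...list.getEntries()].filter(Boolean));
//   }).observe({ type: 'largest-contentful-paint', buffered: true });
```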
C: That's with the current API, due to the Timing-Allow-Origin issues, as well as the fact that we consider removed elements to no longer be valid LCP candidates (which, by the way, I'm looking into that particular problem as well, but let's leave that for another time). Now, another problem is that you could have a large image...
C: ...that is still loading by the time the user starts interacting with the page, or even by the time the user unloads the page. In that case you should consider LCP to basically be not reached, in the sense that there was an LCP candidate, which was the image, but it didn't finish loading, so there's no valid LCP timestamp.
C: However, the current API, and any user of that API, will just assume that the last LCP candidate that was provided is the actual final LCP in that case, which to me is incorrect, and it will hide performance problems on some sites. In particular, if they have a giant image that basically never finishes loading by the time the users leave the page, it will not surface that problem to them.
C: And finally, the last problem is that, like I mentioned from the issue, there's no reliable way to know when LCP has stopped. You would need a bunch of event handlers. Presumably developers probably already watch for the visibility change, so maybe that's not a huge deal, but at least for stopping due to user input, it's pretty hard to do right now, and I don't think we would want to recommend that developers add event listeners for everything that's required.
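To make the "bunch of event handlers" concrete, here is a rough sketch of the state machine a page would need today. The set of input event types is an assumption on my part (the call only mentions keyboard keydown explicitly), and the browser wiring is left as comments:

```javascript
// Sketch: decide when LCP observation should be considered finalized.
// Stopping conditions discussed on the call: first user input, or the page
// becoming hidden. The exact list of input types here is an assumption.
const STOPPING_INPUTS = new Set(['keydown', 'click', 'mousedown', 'pointerdown']);

function makeLcpFinalizer() {
  let finalized = false;
  return {
    // Returns true only for the event that flips the state to "finalized".
    onEvent(type, visibilityState) {
      if (finalized) return false;
      if (STOPPING_INPUTS.has(type) ||
          (type === 'visibilitychange' && visibilityState === 'hidden')) {
        finalized = true;
        return true;
      }
      return false;
    },
    isFinalized() { return finalized; }
  };
}

// Browser wiring (sketch):
//   const finalizer = makeLcpFinalizer();
//   for (const t of [...STOPPING_INPUTS, 'visibilitychange']) {
//     addEventListener(t, (e) =>
//       finalizer.onEvent(e.type, document.visibilityState), true);
//   }
```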
C: So those are the problems with the current LCP API that we're trying to tackle here, and the proposed solution is to have an entry which just surfaces this last LCP.
C: Again, in case people forgot, the stopping conditions are when user input occurs or when the page becomes hidden. Actually, right now, for the API, I guess it's when the page becomes unloaded or closed, so that's slightly different. But in any case, surfacing that would solve all of these problems, because now there's no ambiguity about when you did not actually have a final candidate, or about when the final candidate is one with a smaller start time than some of the other candidates that were surfaced in the API.
C: Presumably, most users would just want this final one, since I imagine few people actually care about the whole progression of LCP candidates, whereas most people just want to know what the actual LCP value is for their site. So the entry type could either be largest-contentful-paint, replacing the current entry type...
C: This would be at most one entry, as opposed to the current API, which surfaces potentially several LCP candidates throughout the course of the page. And for that reason it could be exposed in the performance timeline, because that's what we usually do: we allow exposing these one-off entries in the performance timeline, since it's an easy way to determine whether they have been reached or dispatched at any given point in time, instead of using a PerformanceObserver and takeRecords() on the observer.
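The "one-off entry in the timeline" pattern being described could look roughly like this. The entry type name `'final-largest-contentful-paint'` is purely a placeholder (nothing was decided on the call), and `perf` stands in for the global `performance` object so the logic can be exercised outside a browser:

```javascript
// Sketch: poll the timeline for a hypothetical one-off entry type instead of
// keeping a PerformanceObserver open and calling takeRecords().
function getFinalLcp(perf) {
  const entries = perf.getEntriesByType('final-largest-contentful-paint');
  // At most one such entry would exist by design.
  return entries.length > 0 ? entries[0] : null;
}

// Browser usage (sketch): call getFinalLcp(performance) at any point, e.g.
// inside a visibilitychange handler, with no observer bookkeeping needed.
```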
C: One tiny... well, actually, not tiny, I don't know how big: one technical constraint, or concern, that I have is that it would need to be available on the visibilitychange event, especially if it's the one right before closing the page, so that the developer can confidently query the entry and dispatch it to analytics or whatever on that visibilitychange event, before the page gets destroyed. So this is a constraint because, basically...
C: ...LCP stops at that point in time, so we basically need to compute the value quickly, before the event gets dispatched to the page, which is a concern I have worth mentioning. And that's it, that's my presentation. I don't know if people have thoughts, comments, complaints, or whatever.
B: Nicolas, if my understanding is correct, the only time that we would be able to emit this final-largest information would be on user input or page hidden, so basically unload. That means the data would not...
C: So, even though for things like paint timing we expose the entry even if it occurs after the page has become hidden, in this case, due to the complexity, it may make sense to just consider the algorithm for the final candidate to be terminated upon the first hidden. If that makes...
A: ...sense. Nick, did you say onload or unload?
B: So for Boomerang, which is capturing largest contentful paint through this API today, we generally send all of the data that we have at the onload event, or the page load event; I'll just call it the page load event. Because, as we've talked about in previous calls, the unload event is rather unreliable across browsers for sending it out.
C: That's surprising to me, though, because what if the website has an onload handler that loads the whole page after that? Then you're gathering no data.
B: Well, yeah. And for Boomerang specifically, for sites that have single-page apps, we have our own single-page-app handling that may not wait until just the page's load event; it may wait until all the network activity quiets down, and then we send our beacon. But basically, at that point, when the page has finished its network activity and finished its loading activity, we just take the largest contentful paint, whatever was latest, and send it out.
B: So it's basically the largest contentful paint up until the page has stopped loading, from an algorithmic point of view. So in traditional apps it's just the onload event, and in single-page apps it's maybe a little bit later, but we're not waiting until the hidden state, if you will, to send our data.
C: That's good to know. So I guess this proposal, if you don't change where you send the data, would probably not work for you.
B: Not in the way that Boomerang sends data today. And my guess would be that, looking at other analytics packages, most of them probably follow very similar behavior, which is not waiting until the very, very end, the hidden event, to send their data. Most analytics packages send data earlier.
B: You can do some stitching in the back end, to stitch together data from the page load event along with some unload data at the end, if your infrastructure can handle it. Boomerang doesn't do that today, and maybe other analytics packages don't necessarily do that either.
C: And have you found the pagehide event to also be unreliable, or why is that one not used? Sorry, not pagehide: the visibilitychange, because I know there are confusing names for that.
B: Well, there's also the balance of trying to get real-time data, for us. Somebody could theoretically load a long article, leave it open in a browser tab in the background while they're reading it, leave it open for another five days, come back from the weekend and then navigate away. At that point that data is no longer real-time, if we were to wait until the hidden event.
B: So, you know, we try to send some data at least at the page's load.
B: We actually do send unload data; for reference, we do send a beacon on unload, and this is specifically for tracking how long a person was viewing that particular page. We found in general we get about 60 to 70 to 80 percent... I haven't looked lately, but let's say 70 percent of pages generate both a page-load beacon and the unload beacon. So about thirty percent of the time we're not getting unload data.
C: Yep, okay. So what I hear is that it's likely analytics providers would not be able to use the API, so they would probably keep using the candidates, which may be less reliable but easier to use, because of the way they are dispatched continuously over time, as the data is gathered, instead of at the very end. Right?
D: I had a question for Nick: do you track the first visibilitychange event as well? Do you know what the reliability of that event is, from your data?
B: Oh sorry, you're saying: do we send a beacon at that point? We do not send a beacon at that point, but if it happened before the page load event, or before we send our first beacon, we log the timestamp of the latest hidden and the latest visible. So we know if that has changed at all during the page load. But we're not sending a beacon at the visibilitychange event.
B: Sorry, I need to double-check our code. We actually do attempt to send, I believe, our unload data at the first pagehide event. Is that what you're talking about? Do you mean pagehide or visibilitychange? Yeah, let me just check that.
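The "beacon once, on the first transition to hidden" pattern under discussion can be sketched as below. The endpoint path is a placeholder, and the decision logic is factored out of the event wiring so it can be checked outside a browser:

```javascript
// Sketch: fire an analytics beacon exactly once, on the first transition to
// the hidden state (visibilitychange to 'hidden', or pagehide).
function makeOnceBeacon(send) {
  let sent = false;
  return function onVisibilityChange(visibilityState) {
    if (visibilityState === 'hidden' && !sent) {
      sent = true;
      send();
      return true; // the beacon fired on this event
    }
    return false;
  };
}

// Browser wiring (sketch; '/beacon' and collectMetrics are placeholders):
//   const beacon = makeOnceBeacon(() =>
//     navigator.sendBeacon('/beacon', JSON.stringify(collectMetrics())));
//   document.addEventListener('visibilitychange', () =>
//     beacon(document.visibilityState));
//   addEventListener('pagehide', () => beacon('hidden'));
```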
G: I think that might solve a lot of the problems you are seeing here, and that API shape, I think, would be better.
C: I mean, I'm not sure what you mean by "last", though, and as Nick was saying, they can't really wait for the last one. And I think having "last contentful paint" candidates would be even more confusing than "largest" candidates anyway, so I don't really like "last" as the alternative.
C: Yeah, sure, that makes sense. Oh, but you mean just use "last" for this one, like say "last contentful paint" instead of "final largest"? Is that what you mean? I would love to...
A: So I heard from folks that are gathering data on a large platform that they are trying to measure with the current largest contentful paint API, and they have real trouble understanding when the last one has fired, so that they can...
A: Basically, they want to beacon things at that point in time. Instead of relying on unload, they want to rely on the point in time at which, you know, the actual largest contentful paint candidate has fired. So this is the problem they are trying to solve.
A: I don't know if this is... because I'm also not a big fan of "final" here. At the same time, I don't know if this is necessarily the last paint, because there could be paints after that that are potentially smaller, or that follow some user interaction. So I don't...
A: Maybe it's worthwhile to, you know, open an issue and bikeshed. I don't have any great ideas at the moment, but yeah, I take your point that the name can be improved.
C: Right, right, because you don't know when it's the final one exactly when you dispatch the entry.
G: Yeah, two birds with one stone: we could give developers a range of paint timings, to let them know the first and last, and then they know: oh, if we have a last, then the LCP is valid. That seems like a lot easier to explain than this, but maybe that's just me, and I'm certainly willing to hear other opinions and other ideas.
C: Yeah, I kind of understand that: since we have "first", you're thinking about "last". But I just think that this one is not necessarily the last, again, so that's what I'm worried about.
B: Because... I mean, are we still just talking about the definition of "last" being when a user input happens or when the page becomes hidden? Like, there's no other way to know "last"; I mean, for a single-page app there could always be more content, more paints happening, right?
F: Sorry, did you mean single-page apps? Yes, sorry. Yeah, okay.
C: Yeah, so even for a normal page app, or whatever you want to call it, I feel like "last" can be confusing, because even if it's the last before hidden or before user input, the one I'm proposing is not that: it's the largest before that.
C: In any case, I think we need to figure out if there are actually people who would be truly interested in the API, because it sounds like no one here was, and then we will go from there.
B: So I'd certainly love to see... I mean, there could be other scripts or tools that would like to key off of this, because it'd be easier. Just from a developer-ergonomics perspective, I like having a single event that somebody could use, rather than what we have to do today, which is keep on listening, keep on waiting, keep on checking things. So I do definitely see the value in it, and maybe some other analytics scripts would prefer to key off it this way.
A: Yeah, but if there was, like...
A: If you could replace what you're using the load event for today with that event, saying, okay, we're done with paints because something happened, would that be a better way for you to, like...?
A: Sure, yeah.
I: Yeah, I think I'd be interested in doing that. From our perspective, I mean, we've always just used stuff after unload, just because that's the only thing that has been reliable, right? Certainly, if we had a reliable mark that we knew would have happened, with a timeout after unload in case we didn't get anything, then I'd be interested in getting that there.
B: So, to move forward on this issue, Nicolas: did you want to work on it? Or, I guess, we could probably solicit more feedback first; that would be one thing, from other analytics providers. I could probably reach out to a couple of people that I know, and I don't know what other contacts you have, but just getting more eyeballs on the proposal would probably be a good idea.
C: Yeah, that sounds good, and then we shall think more about the naming. Naming is hard.
B: Thank you for taking the time to go over that with all of us, and for being patient with us while we try to figure it out, too. Any other thoughts on that before we move on?
B: Okay, so let's see: there were a couple of issues in the preload spec that we wanted to talk about. There are one or two pretty quick ones.
B: First, and I don't know if we still actually need to talk about this at all: issue 147 in the preload spec; I just sent the link for that. This is about basically moving some of the spec language in the preload spec that we may actually want to move to the HTML spec, and not have in preload itself.
B: Okay, so I guess there has been some back and forth, and you've actually taken a look at that as well. I guess there's nothing we really need to discuss as far as that one; we can probably just leave that to the PR itself.
A: Yeah, I think that, as Dom said on the issue, the relevant text is already in HTML and it just needs to be removed from preload; but in any case we want to move the whole thing over, so I don't think there is much to discuss here. And the same more or less goes for the next one, I think.
K: So, the spec processing model is moving into the HTML standard anyway. Is it worth doing these band-aid fixes to this spec and then ultimately just moving it anyway? Or what do we think about that?
A: I don't think we really need to do that. We need to see with Philippe when the actual move to HTML will happen, and maybe we can preempt it by, you know, making sure that everything is covered in HTML. So we could start PRs to move the relevant parts that are still defined here into HTML, but I don't think we need to bother with ReSpec-related chores when this whole spec will probably move to being a note anyway.
B: There is one other issue under preload, number 149, and I'd be happy to hand it over if you want to talk about it a bit more.
A: Sure. The underlying issue here is that people now want to add various hints for servers, in order for them to modify the way that H2 push is done, in various ways that servers don't currently expose to their users, for various reasons.
A: So to me, there are two underlying issues hidden in this one. One is whether it makes sense, and whether defining that kind of a hint, one that is content-based, will cause servers to actually do what they currently don't do; and I'm not convinced of that.
A: But beyond that, the more underlying issue here is that we somehow sneaked various push-related behaviors into preload, and I personally think we shouldn't have.
A: It doesn't really make sense to define all those server-push-related behaviors as part of a preload spec, in particular because we're moving this into HTML.
A: This makes it even trickier to define those various push-related behaviors as part of the link relationship, only to be used by servers as part of headers. There's a weird dependency here, and I would much rather see a completely separate relation set up for push, and being worked on as part of server push.
A: I don't know if it's something that should happen at the IETF or here, but I'd love to see this separated from preload. Relatedly, Andrew's colleague Lucas pinged me the other day about nopush, which apparently is not really defined anywhere, but exists in various examples here as part of the preload spec...
A: ...as an attribute that servers can add on preload headers in order to prevent server push from happening. And I think it would make sense to split all that functionality apart, because it's not really preload-related.
A: From your perspective, there could be some compatibility concerns with that, because it would mean you would need to tell your customers to change headers if they want to continue to push resources, if you were to follow that advice.
A: I hear. Okay, anyone else with strong opinions on this? Okay. So I can take an action item to comment on the issue, and maybe open a separate issue saying that, you know, the push-related hints should be split out.
B: Okay. So I think that's the end of the preload-specific issues. We have one performance timeline issue: issue number, let's see, 105.
B: And this is about performance entries being affected by background statuses. We've talked about this on a couple of different calls. Nicolas, if I remember correctly, you wanted to chat about this a little bit more.
C: Yeah, but it looks like no one from Apple is here? Oh, it's true. So that's a bit of a problem, but I mean, I can go over what I chatted about offline, which was that there is some confusion, maybe, about the use cases.
C: For example, paint metrics really need to consider the visibility state of the page, regardless of throttling. But then other kinds of performance measurements, ones that are not paint-related, may want to consider throttling of other kinds. So we were thinking there could be roughly three kinds of statuses here.
C: One is the visibility status of the page, so it's either hidden or it is visible. Then the next thing to consider is the network status of the page: it could be either fully going or throttled by the user agent. And finally, the CPU usage of the page can similarly be throttled or not throttled.
C: So we were thinking exposing these three signals might be... I mean, we already expose visibility, but exposing the other two might also make sense, and we were wondering about two things.
C: First, do people here have some concrete use cases for that, besides the obvious ones, which are tracking the possible taintedness of paint metrics, as well as of other performance metrics in the case of throttling? And the other thing we wanted to ask is whether people have thoughts on API shape and naming, because I was working on something called the visibility-state entry, and obviously something called a visibility-state...
C: ...entry would not be extensible to network and CPU throttling. But at the same time it's a little different, right? Because for visibility it's hidden or visible, whereas for the other ones it's basically throttled or not throttled. So I'm wondering what people think: whether it makes sense to have an API that covers all three of those types of states, or whether it's better to have them separate; and if together, then who can come up with a good name, etc.
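One possible shape for the combined status record being asked about, with a derived "tainted" check for the analytics use case. Every name here is invented for illustration; the group had not settled on any API shape or naming:

```javascript
// Sketch of a hypothetical combined page-status record covering the three
// signals discussed: visibility, network throttling, CPU throttling.
function makePageStatus({ visibilityState = 'visible',
                          networkThrottled = false,
                          cpuThrottled = false } = {}) {
  return { visibilityState, networkThrottled, cpuThrottled };
}

// Analytics use case from the call: treat a measurement as "tainted" if the
// page was hidden, or throttled in any dimension, while it was taken.
function isTainted(status) {
  return status.visibilityState === 'hidden' ||
         status.networkThrottled ||
         status.cpuThrottled;
}
```

A single record like this reflects the "one API with different attributes" option raised later in the discussion; the separate-entries alternative would split each field into its own entry type.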
B: A lot of good thoughts there. I like the idea of getting more visibility (bad use of the word there) into the network and CPU, and certainly the use case that you pointed out for analytics would be, you know, marking whether data is tainted or not, essentially, in different ways.
B: As far as the interface goes, it might make sense to have it all under one API that would have these different attributes. It would seem to be a little more extensible if we had some other future thing that we wanted to add.
C: Anyone else have any thoughts? The other thing I was thinking right now is: we've always, well, historically...
C: ...the visible page, right. So even if we're not able to define the granularity of throttling, as long as the user agent acknowledges that it's throttling to some degree, then I think it makes sense to expose it. And it covers the use case of figuring out the volume of tainted entries, and how the taintedness of the entries affects the measurements, because you can compare values when it says it's not tainted versus when it says some taintedness happened. So maybe it's okay not to say exactly what the throttling means, even if we're exposing it.
B: And kind of related, but I don't know if this would be one of the signals: whether developer tools are open at all. I mean, that can certainly taint results and measurements in some way too.
C: Yeah, that's a good point. So it sounds like there are several different types of statuses that could potentially be added, like devtools-open. Sounds like we...
A: Sorry, oh sorry. I mean, devtools being open would impact, in a way, CPU throttling, but not beyond that, unless the network throttling tab is also checked, right? I mean, it's not a different dimension of throttling; it's a different reason to reach those dimensions.
B: I mean, devtools can do many things, and if people are in devtools doing things, maybe pausing, maybe stepping through... I don't know, I'm just thinking out loud that a devtools experience is not a normal experience, and so, if you're measuring in bulk, I'd like to segregate those experiences in some way, just to make sure, you know, we're not trying to investigate a bunch of people with devtools open. Just thinking out loud; I don't want to derail this conversation with that specifically.
B: But besides the three that you mentioned, Nicolas, which I definitely agree with, there could be other kinds of side signals too that may be helpful for some people.
A: Okay, so you're saying, basically, code... like breakpointing is another one.
L: So I actually want to push back, maybe a little bit; I'm a little bit more skeptical of the utility of this additional information. I have definitely wanted this in the past, so initially I feel supportive towards it. In fact, we do actually use visibility state at Wikimedia as well. Right now we're probably quite aggressive with it: if visibility has been toggled at any point before unload, we just discard the entire event and pretend the page load didn't happen.
L: As far as performance analysis is concerned, I think in the future we might want to still collect it and judge it separately. I think it's still interesting to have, maybe, but over time I've come to mostly ignore it, just because I think it's part of a larger range of real-world circumstances, which, to me, is kind of what RUM is for: to really get a sense, in bulk, of how your users specifically load this page, including when throttling happens in the foreground.
L: And if a lot of users are in that state, maybe it's interesting to know from a product perspective, but that hits on privacy things as well, and how you can synchronize things between tabs, and, you know, what the accuracy is: if this throttling goes on and off, does the data go on and off as well, and things like that. I think I would want to know more about the high-level use cases, like what do we think, as engineers...
L: I'm also thinking about throttling as a range of things, right? Like, you can imagine a degraded network being a form of throttling, or some form of degraded experience, or not having much disk space available, or being in swap, or not having much memory left. To what extent is that just... So, for example, the reason I was very much into this earlier is that I wanted to have more consistent raw data, to have a small slice of data where I can say: here's my sample group of non-throttled devices in Sweden using an iPhone. But basically what you're doing there is trying to get a stable baseline, which is useful, but that's also kind of what lab testing is for, in a way, and that has been improved so much over the past few years.
B: Yeah, I think, from our point of view too, it's not that I want to mark this data to throw it out or to ignore it. I want to be able to mark data so that, just like other dimensions, like your geo and other things, if our customers wanted to segment their data, or to investigate the performance differences between a fully visible page versus a partially visible page, one that went from visible to hidden or back and forth, they'd be able to do that. So I think having that insight, if you want...
F: I loved everything you said; I agreed completely. I have thought about this as well, about this baseline, because I thought it would be very interesting to know what can be attributed to your changes affecting, you know, performance over time, versus the world changing. So, like you said, following along with browser changes: you know, more preloading, more bfcaching, more of all this other stuff. And so, have you improved, or regressed?
F: Or is it just browsers, like, patching over things and stuff like that? And so a stable baseline feels like it's interesting, but also representing real RUM data, as things do change, is probably... You're right: as lab data becomes more reliable, maybe, you know, you don't need that stable baseline, and you have it already there. So...
H: At Pinterest we have synthetic tests (we use Puppeteer) and we also collect RUM data, and we're definitely not anywhere close to getting a baseline that's usable with our Puppeteer tests. There are tons of regressions that we still have at our RUM p90 that are definitely from our deploys, and that we weren't able to catch with those synthetic tests.
C: Yeah, and to be clear, even just ignoring the throttled ones may not even be... I mean, it will make the data a bit more stable, but not necessarily: the device distribution over time will still change, right? So it will, again...
C: I feel like it's important to be able to do analysis of data at a given point in time, for a specific change you're looking at in real-user metrics, and this may help make the analysis a little simpler, by excluding some cases you may not have control over, especially for use cases, or users, that don't receive a lot of data; if they receive some super-wild throttling, it may skew their data too much. But it sounds like... yeah, we're almost out of time.
C: So it sounds like there is some interest, and then some people maybe not as interested.
E: I would like to add interest from Salesforce, Nicolas. I can go into details, but I just wanted to support the interest in such an API.
H: One more use case, as with the devtools: there are definitely scenarios where we don't have a ton of RUM data, and we do need to separate out any traces that had throttling from devtools being open. Like, right now my whole team is working on a new feature around video, and we're generating a lot of logs with devtools open and with pretty slow performance, and we're seeing that it is affecting our RUM data. So it's difficult to try to hit our optimization goal, knowing that we're actually slowing our p90 down with our own work.
L: There is still a small risk there, but I think, yeah, being able to mark those... And I also think visibility is one that's significantly different from the others, because if you're loading a tab in the background, I would say you're most likely not waiting for it to load as much. I mean, I know that, as a power user and as a geek, I do sometimes open a lot of tabs at once and kind of eyeball the spinner, in a way.
L
So
it's
not
entirely
and
in
the
background
in
that
sense,
but
I
think
that
is
still
like
a
majority
case
where
you
likely
want
to
be
able
to
mark
those
things,
but
for
the
throttling.
I
personally
think
it.
It
could
have
a
negative
consequence
on
how
developers
improve
user
experiences.
J: Yeah, I know we're kind of out of time, but, if I may: I also agree with Timo that exposing this could have some misuses. At least if we can have a baseline of what throttling means...
J: I mean, let's say, having a baseline. Because if we just say "throttling", without specifying a baseline or something, we can have, let's say, the opposite effect: people discarding data which would otherwise prove to be actual real behavior, and they just discard it because it has the throttling flag.
B: So, thank you, everybody, for the discussion, and thanks, Nicolas, for leading it. We're definitely out of time, and I want to respect everybody else's meeting schedules today. So we'll see everybody in about two weeks. If there's anything anybody wants to talk about next time, we can add it to the agenda. Otherwise, thank you.
A: Yeah, and for those that are joining the hackathon, see you on Tuesday. Yes, thanks.
B: Remember your...