From YouTube: TPAC WebPerfWG 2021 10 26 - RUM pain points
All right, so this is kind of a discussion that I wanted to have, just to frame, from a RUM vendor (and multiple RUM vendors, and people using RUM), some of the hot topics or things that we think we would love to see in the future.

A lot of that is, you know, through this working group and through the APIs that we design. I tried to solicit feedback from various other companies and individuals, both RUM vendors and other companies that are doing their own RUM.

So I do have some feedback here from others. Obviously this presentation is going to be from my point of view and will be biased towards the products that I work on, but hopefully there are other ideas represented here as well. And so I do want to thank SpeedCurve and Pinterest in particular for sharing a lot of feedback and helping build out some of these ideas.

I was trying to think through what makes a good RUM API in particular: which APIs have worked really well for us, and which ones have been a challenge. And then I'll frame that discussion into specifics of things that we would love to see, as an industry, or as RUM providers, or as people using RUM in the future. And just for clarification, RUM in this case stands for real user monitoring, and it's basically utilizing all the performance APIs that we help build in this group.

Our customers build their websites in all sorts of different ways, and even though we ask them to put our script as early as possible in the page, so that we can start gathering data as early as possible, that's not always possible. Sometimes we'll load directly from the head; other times we'll load after the framework loads, or after onload; other times our JavaScript library, boomerang.js, will load via a tag manager. So we can't guarantee when we're going to be on the page, and being able to look back into history for important things is almost critical to what we can or can't do. I have some examples of that later.

Another kind of high-level API request, when we build or think about APIs, is attribution. I think we've been getting a lot better with this lately. In fact, we were discussing one earlier: Fergal was talking about bfcache navigations, and he was saying, let's let people know why their bfcache didn't work. And that's exactly what we as a RUM vendor want: if something good or bad is happening, or if we're measuring some span of time, our customers really care about it.

Okay, you've told me this number is changing, but why, and how can I fix it? Being able to look at any supporting details, any debug information, any causes, why an event happened or why a time was what it was, is really critical for providing value beyond just measuring a number. And then the third key thing that I've been thinking about a lot lately, and we've been discussing this a bunch in the group, is SPA support: designing these APIs with single page apps in mind really matters.

You know, a lot of our older APIs were designed even before single page apps existed, and we've had to think through and adapt them, or add other things, to make them work in the SPA world. Some of our current APIs don't work very well with SPAs, like, for example, Largest Contentful Paint: you can't reset it, even if your customers are having in-page navigations. So thinking through SPAs when we're designing these APIs would really help for RUM, okay.

So that's setting the context for some of the other slides coming up. The first thing that I would love to see improved, and this may be more particular to the way that we ask our customers to integrate our product: we have Boomerang, a JavaScript library responsible for collecting data for mPulse. It's, you know, a 50-kilobyte library, or a little bit less than that.

The reason we ask them to put all this in every single page load is that we, as a performance measurement company, want to affect performance as little as possible, right? And while the onload event may not be the best metric in the world to measure, static scripts, or scripts inserted into the page, can affect the load event, and we, as Boomerang, we as a RUM provider, don't want to be affecting the load event.

The load event still gives you visual indicators that something's ready, your progress bars, and so if, for whatever reason, our JavaScript library took a minute to load because our CDN was slow, we could actually be affecting the visual experience of our customers. So we actually jump through all these different hoops.

Async and defer are not sufficient, because they do block the load event. This pile of text on the right uses either preload, in modern browsers, or iframes, in older browsers, to load our main script asynchronously and without blocking. But again, it gets loaded on every single one of our customers' pages, and it's basically just overhead.

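A minimal sketch of the non-blocking loading technique described above (this is not Boomerang's actual loader; the function name and URL are placeholders): preload the script so the fetch happens without delaying the load event, then inject it as a real `<script>` once the bytes are available.

```javascript
// Hedged sketch: load a third-party script without blocking the page's
// load event, using <link rel="preload"> in modern browsers. Plain
// async/defer scripts still hold up the load event; a preload does not.
function loadScriptNonBlocking(src, doc) {
  const link = doc.createElement('link');
  link.rel = 'preload';
  link.as = 'script';
  link.href = src;
  // Once the preload has fetched the bytes, inject the real script tag.
  link.onload = function () {
    const s = doc.createElement('script');
    s.src = src;
    doc.head.appendChild(s);
  };
  doc.head.appendChild(link);
  return link;
}

// In a browser you would call, e.g.:
// loadScriptNonBlocking('https://example.com/boomerang.js', document);
```

Older browsers without preload support need the iframe fallback mentioned above, which is considerably more code; that is part of the "pile of text" the speaker refers to.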
There is a possible solution in some of the thinking that Yoav is working on for resource loading orchestration, which can give hints to the browser about how different things can load, or what stage they can load at, or whether they have a blocking attribute or not. I would love to see a little bit more work on that front, to see if we can avoid having to send this text down.

A
You
know
the
script
down
every
time,
okay,
so
the
second
you
know,
big
thing
here
is
attribution
as
I
was
alluding
to
before
when
rum
measures
something
the
first
thing
that
matters
when
it
changes
is.
Why,
and
the
second
thing
that
matters
is
okay,
how
do
I
make
it
better
or
how
do
I
fix
it?
So
some
great
examples.
Lately
we
have
largest
contentful
paint.
The event itself has the element attribute, which tells you what the largest thing painted was. You can collect that, bring it in with your data, and if you care, it can give you the best indicator of what was causing these paints. Same thing for Cumulative Layout Shift and the Layout Instability API: the sources attribute tells you what caused the shifts themselves. These are just great examples of us thinking about how we can support our metrics with the debug information that we care about.

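A small sketch of collecting that attribution the way a RUM script might (the helper names are made up; the entry shapes follow the Largest Contentful Paint and Layout Instability specs):

```javascript
// Hedged sketch: pull attribution out of LCP and layout-shift entries.
function summarizeLcp(entry) {
  // entry.element is the DOM node that was painted largest.
  return {
    time: entry.startTime,
    size: entry.size,
    element: entry.element ? entry.element.tagName : null,
  };
}

function summarizeShift(entry) {
  // entry.sources names the nodes that moved, for debugging the shift.
  return {
    value: entry.value,
    sources: (entry.sources || []).map(function (s) {
      return s.node ? s.node.tagName : null;
    }),
  };
}

// Browser-only wiring (guarded so the helpers above stay testable).
if (typeof window !== 'undefined' && typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (e) { console.log(summarizeLcp(e)); });
  }).observe({ type: 'largest-contentful-paint', buffered: true });

  new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (e) { console.log(summarizeShift(e)); });
  }).observe({ type: 'layout-shift', buffered: true });
}
```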
On the flip side, we have layout tasks, sorry, long tasks. Long Tasks are awesome. They've enabled us to do a lot of things and really get insight into what's going on with the browser internals, the main thread, etc. But they don't have the right level of detail of attribution for what caused the long task.

A
It
has
some
very
generic
detail
about
whether
it
was
the
same
or
cross-origin
frame
or
the
self,
and
that
can
you
know
kind
of
guide
you
towards
what's
causing
your
long
tasks,
but
not
really
at
the
end
of
the
day,
you
know
in
your
modern
website,
you
have
so
many
dependencies
that
it
really
doesn't
pinpoint
anything
for
you,
and
I
know
that
we've
considered
you
know
some
sort
of
you
know
main
thread
blocker
for
the
long
task
or
some
something,
but
really
if
we
could
help
improve
long
task
attribution.
A
I
think
it
would
help
with
a
lot
of
things.
I
know
that
so
pinterest
was
really
wanting
to
do
better
root,
cause
attribution
for
all
their
field
data
again
for
things
like
time
to
interact,
metrics
or
total
blocking
time
or
even
first
input
delay.
All
of
those
are
really
affected
by
long
tasks
and
knowing
what
triggered
them.
Knowing
what
the
long
task
was
about
would
really
help
with
being
able
to
improve
them.
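To illustrate the gap: this is roughly all the attribution a Long Tasks entry carries today (the helper name is made up; the fields follow the Long Tasks API's `TaskAttributionTiming`). It tells you which container hosted the work, not which script caused it.

```javascript
// Hedged sketch: what Long Tasks attribution gives you today. Each
// entry's attribution array names the container (self, same-origin or
// cross-origin iframe), with no pointer to the responsible script.
function describeLongTask(entry) {
  return {
    duration: entry.duration,
    containers: (entry.attribution || []).map(function (a) {
      return a.containerType + ':' + (a.containerName || a.containerSrc || '');
    }),
  };
}

// Browser-only wiring (guarded so the helper above stays testable).
if (typeof window !== 'undefined' && typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (e) {
      console.log(describeLongTask(e));
    });
  }).observe({ type: 'longtask', buffered: true });
}
```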
There are some possible solutions here. The JavaScript Self-Profiling API we've been talking about could help. Unfortunately, you're probably not going to enable it for your entire web page, and it's a sampling API, so you're not going to get it on every beacon.

A
So
I
don't
know
if
there
would
be
a
way
to
utilize
part
of
that
or
something
for
long
task.
Attribution.
Last
year
we
had
a
great
presentation
from
patrick
hulse
around
lighthouse's
causal
attribution
model
that
I
think
actually
would
fit
fairly
well
for
getting
some
details
into
long
tasks
on
why
they're
doing
things,
but
it's
definitely
an
area
that
I
would
love
for
us
to
improve
and
to
think
about
for
new
apis
to
make
sure
we
have
attribution,
and
then
third
big
area
that
I
was
talking
about
before
is
single
page
apps.
A
So
single
page
apps,
you
know
they
affect
every
single
run
metric
all
of
the
metrics
that
we
capture
today
either
are
irrelevant
for
single
page
apps,
like
the
unload
event,
isn't
really
as
relevant.
If
you
have
a
single
page
app
or
you
can't
use
them
very
well,
if
you
have
a
single
page
app
like
the
largest
contentful
paint
that
we
were
talking
about
before
you
know
instrumenting,
and
so
like
all
these
different
apis,
you
have
to
like
think
about
how
they
affect
an
sba
we
impulse.
A
We
basically
have
a
top-level
boolean
flag
for
every
property.
That's
is
your
site
in
spa
and
if
so,
we
measure
completely
different
things
almost
in
some
cases,
and
you
know,
because
an
sba
is
so
different
knowing
when
so,
we
don't
get
like
a
load
event
for
spas.
A
We
also
don't
get
a
starting
event
for
sbas
in
a
consistent
manner
and
there's
some
potential
solutions
here
with
the
app
history
api,
but
for
soft
navigations,
those
navigations
that
you
have
in
the
middle
of
your
session
in
the
page
when
the
route
changes
you
know
we
we
can
hook
into
some
events
that
either
the
framework
gives
us
or
we
can
hook
into
the
history
of
api
or
push
state
pop
state,
hash
change,
stuff
to
kind
of
know
when
a
spa
starts.
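A sketch of that start-detection hook, as RUM libraries typically do it today (the function name is made up; real libraries such as Boomerang do considerably more): wrap `history.pushState`/`replaceState` and listen for `popstate`.

```javascript
// Hedged sketch: detect the *start* of a soft navigation by wrapping
// the History API. Detecting the *end* still requires heuristics
// (mutation observers, network-idle detection), as described below.
function patchHistory(hist, onRouteChange) {
  ['pushState', 'replaceState'].forEach(function (fn) {
    const orig = hist[fn];
    hist[fn] = function () {
      const ret = orig.apply(this, arguments);
      onRouteChange(fn, arguments[2]); // arguments[2] is the new URL
      return ret;
    };
  });
}

// Browser-only wiring (guarded so the helper above stays testable).
if (typeof window !== 'undefined') {
  patchHistory(window.history, function (source, url) {
    console.log('soft navigation start:', source, url);
  });
  window.addEventListener('popstate', function () {
    console.log('soft navigation start: popstate');
  });
}
```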
A
But
knowing
when
that
spa
transition
stops
is
a
big
nebulous
cloud,
you
know
we
don't
we
with
boomerang
jump
through
a
bunch
of
hacks,
basically
to
listen
to
the
mutation
observer
to
know
when
things
start,
we
patch
over
xhr
and
fetch
functions
to
know
when
things
start,
and
then
we
try
to
figure
out
when
net
requests
are
idle
and
it's
a
lot
of
work
for
something
that
I
would
love
for
the
browser
to
help
us
with
a
little
bit
better
yeah
and
at
the
end
of
the
day,
like
you
know,
a
page
view
is,
is
different
in
reality
from
what
the
browser
thinks
it
is
right,
like
the
page
is
loaded
once
the
unload
event
fires
according
to
the
browser,
and
then
you
know
it's
it's
easy
sailing
from
then
on,
whereas
in
the
real
world,
if
you're,
building
an
spa
like
that
load
event
didn't
matter
your
framework
stuff,
probably
loads,
all
your
actual
views
and
data.
A
After
that
you
have
all
these
in-page
navigations,
and
so
it's
just
it's
a
different
world
right
with
us
with
spas
and
we're
definitely
getting
better
like.
I
think
we're
talking
about
sba's
a
lot
more.
When
we're
designing
things,
but
we
there
are
still
some
things
that
aren't
really
like
spa
friendly.
So
examples
here
are
like
the
largest
contentful
paint
right
this.
A
It's
a
great
metric,
it's
one
of
the
core
web
vitals,
but
it
doesn't
apply
to
any
in-page
navigations
you
just
you're,
not
resetting
to
a
you
know,
base
level
of
zero
every
time
you
in
page
navigate,
whereas
something
like
cumulative
layout
shift
is
so
you
know,
does
work
with
sbas,
because
you
have
all
the
supporting
data,
all
the
supporting
layout
shifts
and
you
could
reset
your
cumulativeness
every
time
a
in-page
transition
happens
and
that's
what
we're
doing
and
other
people
are
doing,
to
support
spas
with
cls,
yeah
and-
and
you
know,
as
I
was
saying
before,
like
regular
apps-
have
very
standard
metrics
from
navigation
timing.
A
But
there's
some
examples
here
of
you
know:
lcp
has
challenges
with
it.
Excuse
me,
the
next
section
is
all
about
network
request,
monitoring,
which
is
something
that
we
we
do
and
others
do.
A
The
first
part
is
in
is
visibility
into
the
network,
and
I
brought
this
up
in
the
past
a
bit
and
we
don't
have
a
good
solution.
Yet
we
have
some
good
proposals,
but
no
good
solution
based
on
some
research
that
we
did.
You
know
if
you're,
if
you're
measuring
all
the
resource
timing
data
on
the
page,
you
only
have
access
to
the
data
within
your
iframes
and
sorry
within
your
main
page
and
any
same
origin
iframes
any
cross-origin
iframe
will
be
invisible
to
you.
You
won't
be
able
to
crack
into
it.
A
You
can't
you
know,
use
its
apis
because
there's
the
security
boundary
and
based
on
the
crawl.
I
did
a
couple
of
years
ago,
up
to
30
percent
of
page
content
is
in
a
cross-origin
iframe
that
resource
timing,
that
javascript
doesn't
have
access
to,
which
is
over
50
of
the
bytes
delivered
on
the
page
for
video.
A
This
is
97
of
all
videos
in
my
crawl
of,
like
I
think
it's
like
alexa
top
1000
were
in
iframes
and
as
a
run
vendor
trying
to
you
know,
give
visibility
to
our
customers
of
what
was
happening
on
the
network.
Presenting
them
a
view
of
a
waterfall
that
is
incomplete
is
frustrating
because
it's
like
well
here's
most
of
your
data,
but
not
all
of
it.
I
can't
tell
you
what
I'm
missing,
there's
just
no
indication
either
that
you
are
missing
anything
even
about
the
scope
of
what
you're
missing.
A
This
is
particularly
painful
for
ads.
Right,
I
mean
a
lot
of
ads
are
delivered
in
cross-origin
frames,
but
even
other
things
other
apis
that
we
care
about,
like
the
cls
layout
shifts
happening
in
an
iframe
matter.
According
to
the
cls
spec,
synthetic
tools
can
measure
the
layout
shifts
happening
in
all
frames,
including
cross-origin
ones,
but
rum
can't
rum
can't
break
into
that
cross-frame
barrier
and
get
the
layout
shifts
that
are
happening
that
are
visually
happening
to
the
user,
they're
just
invisible
to
javascript.
A
So
there
are
some
solutions
here.
One
of
them
is
a
thing
called
bubbles,
a
term
called
bubbles
which
is
basically
a
way
for
us
for
somebody
doing
a
performance
observer
to
request
that
anything
happening
in
any
child.
Our
iframe
gets
automatically
bubbled
up
to
the
top,
which
would
just
make
the
developer
convenience
of
consuming
this
data
a
lot
easier
and
like
today
we
have
to
crawl
all
iframes
to
get
all
this
data,
and
it's
we
do
that
every
time
we
gather
data.
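The iframe crawl described above looks roughly like this (the function name is made up): recurse through every reachable frame, collecting Resource Timing entries, and swallow the security errors that cross-origin frames throw.

```javascript
// Hedged sketch: recursively gather resource timing from all
// same-origin frames. Cross-origin frames throw on property access and
// stay invisible, with no indication of what was missed.
function collectResourceTimings(win) {
  let entries = [];
  try {
    entries = entries.concat(win.performance.getEntriesByType('resource'));
    for (let i = 0; i < win.frames.length; i++) {
      entries = entries.concat(collectResourceTimings(win.frames[i]));
    }
  } catch (e) {
    // Cross-origin frame: access denied; its resources are simply lost.
  }
  return entries;
}

// In a browser: collectResourceTimings(window)
```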
The second part of network monitoring that we care about is observing when network requests start. This is important for a lot of different things: Pinterest mentioned measuring network congestion, or idle time. For Boomerang, for mPulse, we do our SPA monitoring by monitoring the network, and, as I mentioned before, we hook into MutationObserver and patch fetch and XHR to be able to get some sort of indicator that a network request is starting and thus ongoing. But otherwise it's not complete.

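The fetch side of that patching can be sketched as follows (the function name is made up, and a real library would wrap `XMLHttpRequest.prototype.open`/`send` the same way): replace the global `fetch` with a wrapper that reports when a request starts and settles.

```javascript
// Hedged sketch: monkey-patch fetch to learn when requests start and
// finish, since there is no native "request started" observer today.
function instrumentFetch(globals, onStart, onDone) {
  const origFetch = globals.fetch;
  globals.fetch = function (input, init) {
    onStart(input);
    return origFetch.call(this, input, init).then(
      function (res) { onDone(input, null); return res; },
      function (err) { onDone(input, err); throw err; }
    );
  };
}

// In a browser: instrumentFetch(window, start => {...}, done => {...});
```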
It would really help for some of these monitoring scenarios. And then there is some ongoing work on fetch and service worker integration, and Pinterest have a strong desire here to get cache hit rates from their service worker, in addition to their overall browser cache hit rates. So movement on that would be super cool.

Then there's the request's destination, or the dependency tree of all the resources on the page, basically what caused what to download. That could lead to a lot of optimizations that either a site, or a RUM provider, or a CDN could use, around knowing when things should load and what the critical path for loading different content is; it would be interesting to pursue that a bit. Content type and response code, not available in Resource Timing today, could really help with a bunch of different optimization and A/B testing scenarios, and with error reporting if things are not working well. I know that for response codes you have things like the Reporting API to report on problems, but getting that information in Resource Timing itself would be very helpful.

A
One
other
thing
is:
we
haven't
had
consistency
with
reporting
of
resources
that
are
not
200
level.
It's
actually
getting
better
and
there's
been
a
bunch
of
bug
fixes
around
this
lately,
but
some
browsers,
don't
report,
400,
500
level
responses
or
don't
report
things.
If
there
was
a
dns
error,
it's
just
it's
just
an
overall
visibility
thing
like
I
would
love
for
us
to
get
most
of
the
information
that
you
can
get
from
the
dev
tools,
waterfall
in
resource
timing,
because
there's
always
questions
on.
Why
are
they
different?
A
And
then
we
have
a
discussion
in
the
past
about
tao
timing,
allow
origin
and
its
relationship
with
cores
and
what
information
gets
exposed
where,
and
I
think
it
would
just
be
worthwhile
to
continue
that
discussion
and
figure
out
how
we
can
get
some
of
this
more
detailed
information
if
we
think
we
can
so
some
of
the
issues
tracking
that
are
at
the
bottom
last
thing
is
reporting
api,
so
you
know
run
vendors
and
others
are
looking
into
this.
A
The
only
way
that
you
really
communicate
data
is
through
the
url
that
you
report
to
so
you
know
you
specify
your
endpoint
url,
but
throughout
a
page's
lifetime,
there's
some
additional
metadata
that
might
be
useful,
like
the
user's
session
id
or
their
session
length
or
other
information
that
changes
over
time
and
it'd
be
nice
to
say,
hey
reporting,
api.
When
you
deliver
a
thing,
let
you
know
here's
the
latest
information.
A
I
want
you
to
also
include
in
the
body
payload
not
entirely
sure
about
the
privacy
and
security
aspects
of
that,
but
ian
at
some
point
I
would
love
to
chat
with
you
about
any
ideas
you
have
around
that
and
then
the
final
thing
is
a.
There
was
a
presentation
last
year
around
possibly
using
the
reporting
api
as
a
more
reliable
beaconing
mechanism.
I
think
it's
actually
already
came
up
come
up
a
little
bit
earlier
from
a
discussion.
A
We've
done
so
we've
done
some
research
we
at
impulse
and
our
beacon
our
run
beacon
at
the
onload
event.
Sorry,
the
load
event
I'll
call.
It
load
event
to
distinguish
it
from
the
unload
event.
We
center
beacon
when
the
page
loads,
because
it's
the
most
reliable
point
to
send
data.
We
would
like
to
be
able
to
send
the
a
beacon
later,
because,
obviously,
things
happen
beyond
the
page
load
event.
Javascript
errors,
other
important
things
cls
a
large
control
pane.
A
Some
of
our
customers
delay
their
beak
until
later,
but
the
longer
you
delay,
we
found
the
more
less
likely
you're
going
to
get
that
data.
So
as
an
example,
some
research
that
we
did
if
we
send
a
beacon,
even
with
no
payload
at
the
page
load,
events
about
90
to
93
percent
of
those
beacons
arrive
if
we
wait
to
send
them
at
the
unload-
and
this
is
with
hooking
for
unload.
Well,
I
don't
even
know
if
it's
necessarily
unload
page
hide
visibility,
change
before
unload.
That's
what
I
mean
by
unload
here.
A
Only
82
percent
of
those
beacons
arrive,
and
so
there's
obviously
a
you
know,
essentially
a
10
discrepancy
in
real
terms
of
beacons,
arriving
or
not,
based
on
when
you're
sending
your
data
and
that
may
not
be
acceptable
to
some
of
our
customers.
They
may
not
want
to
lose
that
visibility.
You
don't
know
if
that's
good
experiences
or
bad
experiences
that
you're
not
seeing
it's
just
lost
beacons.
You
know
lost
experiences
that
aren't
going
to
get
reported,
and
you
know
that
that's,
including
using
the
beacon
api
and,
like
all
your
best
practices
right.
A
I
have
some
research
on
that.
If
anybody
wants
to
dig
in
more-
but
you
know
some
of
our
customers
that
may
be
okay
because
they
want
to
make
that
trade-off
of
recording
more
data,
but
unless
beacons,
if
that
makes
sense,
other
others
may
not.
A
So
anyway,
we
talked
last
year
about
a
reporting
api
for
rum
and
maybe
having
a
way
of
queueing
up
data
to
be
stunned
and
then
having
the
browser,
be
that
deliverable
the
reliable
delivery
mechanism
for
sending
it
at
the
end
of
the
day
you
know
kind
of
like
a
beacon,
api
plus
plus
or
whatever,
whatever
you
want
to
call
it.
So
that
would
be.
That
would
be
helpful.
A
Okay,
I
spoke
a
lot.
I
just
want
to
say
thanks
to
speaker
and
pinterest,
for
some
of
their
ideas
incorporated
into
this
and
definitely
want
to
chat
with
anybody
else.
That
has
ideas,
but
that's
the
presentation.
I
realize
we
only
have
a
few
minutes
if
you
will
for
if
there's
any
discussion
to
be
had
otherwise
we
can
always
take
it.