From YouTube: WebPerfWG TPAC 2020 meetings - October 22 - part 2
B: Yeah, so if you can share your slides on the agenda document, cool. All right, we'll help you do that. Awesome, thank you. And otherwise, we're back for the last stretch of this TPAC. Let me share my screen and we can get started.
B: I basically wanted to share a rough idea, which came up in various conversations, of how we could improve performance metric reporting through the Reporting API.
B: Basically, our general guidance for web developers and RUM providers is to use Page Visibility: whenever the page goes to the hidden state, that is a signal for the analytics provider to collect whatever state they keep on the page and send any analytics they may have for the current session, because browsers may terminate that tab or that renderer at any point in time after the page is hidden.
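The guidance described here, namely flushing analytics when the page goes hidden because the renderer may be killed afterwards, can be sketched roughly as follows. The endpoint path and the metric fields are placeholders, not part of any real service:

```javascript
// Minimal sketch of the "report on hidden" guidance discussed above.
// The '/analytics' endpoint and the metric names are illustrative only.
const metrics = { sessionId: 'abc123', cls: 0, interactions: [] };

function serializeMetrics(m) {
  // One payload per visibility session; a backend would stitch
  // multiple sessions together by sessionId (discussed later).
  return JSON.stringify(m);
}

function flushMetrics() {
  const body = serializeMetrics(metrics);
  // sendBeacon hands the payload to the browser, so it can survive
  // the renderer being torn down after the page goes hidden.
  navigator.sendBeacon('/analytics', body);
}

// The hidden state is the last reliable signal before the tab or
// renderer may be discarded; unload/pagehide may never fire.
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flushMetrics();
  });
}
```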
B: After that visibilitychange event with a hidden state, the events that web developers used to rely on, such as unload, beforeunload, or pagehide, are not guaranteed to actually fire before the renderer disappears. In many cases, especially on mobile, by the time the browser understands that the tab is going to be discarded, it is reluctant to revive that renderer just to fire the event and then kill it.
B: In many cases the renderer is already in a state that is not necessarily revivable.
B: Also, visibility events are not the intuitive point at which developers think they should be sending their beacons. And that kind of session-by-session reporting, if the user goes back and forth between the visible and hidden states and so on, requires some back-end stitching of all those different bits of data, to make sure they are aligned into a single session that developers can then look into, and unfortunately that is still not a common thing.
B: Many, many reports are biased towards onload time, or some other event that happens rather early in the page loading lifecycle, in favor of early loading metrics, and that skews the reports of continuous metrics that need to be gathered throughout the lifetime of the page.
B: I also had other conversations with folks who were interested in being able to collect various metrics without ever running scripts on the page. This is not that; it doesn't solve that particular use case. It may or may not be something we want to solve, but it is not what I'm aiming for here.
B: The idea is essentially that the developer defines a blob of various metrics and bits of information that they want to report, and they define an endpoint. They also define some sort of timeout, in case they don't want the reports to lag too much, and then they can theoretically register that report, declaring the endpoint, the blob, and the timeout. Note that the blob here is still empty.
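As a rough illustration of the registration shape being described, an endpoint, an initially empty payload, and a timeout, here is a hypothetical stand-in class. The proposal defines no concrete API yet, so every name below is invented for illustration:

```javascript
// Hypothetical stand-in for the proposed "register a report" idea:
// declare an endpoint and a timeout up front, fill the payload later.
// None of these names exist in any browser; illustration only.
class PendingReport {
  constructor(endpoint, timeoutMs, send) {
    this.endpoint = endpoint;
    this.data = {};           // the "blob" starts out empty
    this.sent = false;
    this.send = send;         // transport, injected so it can be faked
    // The timeout caps how long reports may lag behind the metrics.
    this.timer = setTimeout(() => this.flush(), timeoutMs);
  }
  update(partial) {
    // Metrics trickle in over the page's lifetime.
    Object.assign(this.data, partial);
  }
  flush() {
    // Fires on timeout, imperatively ("send it now", as discussed
    // later), or when the browser decides the page is going away.
    if (this.sent) return;    // at most one send per report
    this.sent = true;
    clearTimeout(this.timer);
    this.send(this.endpoint, JSON.stringify(this.data));
  }
}
```

A page would create one of these per visibility session and keep calling `update()` as metrics arrive; the point of the proposal is precisely the browser-side delivery guarantee that this user-land sketch cannot provide.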
B: The reason I wanted to talk about this is to figure out whether it is something that would be interesting to y'all, especially folks who gather those metrics, because it seems to me that it would have a few benefits over the way things need to be done today. It would enable coalescing metrics from multiple visibility sessions on the client side.
B: Instead of having to implement server-side session stitching, it seems like it would have slightly better ergonomics than latching onto the visibility events and sending beacons manually. And if this is indeed something that would be interesting from your perspective: would you switch, and if so, how soon?
C: This is Stephen from Salesforce. I don't think we would be switching to that, because of how we send beacons today. We have a web app, not what I think you'd call a multi-page app; it's a web app, so we have one session with multiple pages that can last for hours. There can be, you know, hundreds of pages in the same session, hundreds of soft navigations within the same page, and so we send one payload per page.
B: Okay, yeah, that makes sense. So let me try to restate your use case: you have an SPA where you are aware of the transitions between one page and the other, and you report during those transitions. What do you do for cases where the user goes away? Do you keep track of the visibility state? When do you report?
C: Yeah, so we implemented our own version of TTI, so at some point we know that the page is finished because there are no more XHRs in flight and there's no more activity on the thread. Maybe we wait a little bit, I don't know the exact details, but then we fire the beacon for that page.
C: We report whether the page was visible or not visible during that session, for us to know whether it was in the background or not. But that's roughly how we do it: we wait for the page to be done because there's no more activity, then we wait X seconds, and then we beacon back the payload.
F: Someone raised their hand. Michael? Yes, I did. I'm eager to hear from the providers, but I just wanted to follow up on what Stephen said, because it makes sense that for a long-lived single-page application, the page load, or the page lifecycle, doesn't map well to each thing. So there's a whole problem around cutting sessions within a load, but even within one route, session, or soft nav.
F: Is it true that you wouldn't want a summary at the end of that, however you slice the soft nav or whatever it is? And is there already an existing problem where you want to send a report very early, as soon as loading is complete, and then there's a problem with stitching later? Just that problem specifically, even if we solve how you slice pages and when to send.
F: With this proposal, I think you would send as late as possible, once you have as much data as possible, but I've previously heard that there's a part of the data you want to send eagerly and a part you want to send once you have the rest, and that there's maybe a backend problem with stitching. Is that true? Anyway, thanks.
F: Yes. Stephen or Nick, or anyone; I've heard this from several analytics providers. Basically, is it just "hey, this is a lot of work, we haven't done it yet," or is it something fundamental that could be captured in a Reporting API?
G: So I think I can talk a little more about our use cases, which may help here, but I would like to continue on something that Stephen was saying. I'm wondering, Stephen, whether this type of API would also have a way for you to say "send it now," where basically you're just accumulating data into a variable, and either that data will go out when the page unloads, or it'll go out after the timeout, or forcefully.
G: But if you do need to force it out because of a single-page app transition, you could say: everything that I've queued up to this point, please send it, and let's start over after that, and then continue with something else. That's one way I'm thinking about how this could be very beneficial for analytics.
G: That brings us back to how this would benefit us for RUM, which is almost a primary use case for stuff that we need. We are constantly struggling between capturing as much data as we can and sending it out right away, and we want to send it out right away because of the unreliability issues we've had in the past with pagehide and unload and beforeunload and all that kind of stuff, as well as because of a real-time component.
G: Frame rates, for example: there's a lot of continuous data happening on the page that you may want to report for your users, whether it's around a transition or not, just things to keep track of. Today we send the majority of our data at onload, or right around onload, depending on whether it's a single-page app or not.
G: So it's kind of an improvement even on top of the Beacon API: I want to just keep piling on the data, and then you handle it for me. You handle the transport, and let me reset it or send it or whatever if I need to, but otherwise it would be very, very useful for us from a reliability point of view, and I think we would capture more.
G: By collecting later things, we would diverge away from being so load-focused in our measurements and move more towards overall user experience and happiness, which is where we want to go.
G: Yeah, so I would say we would probably start using it right away, because for customers that want to track this data it would give much better data. It would give them a more reliable time to interactive than what we track today. We have the same thing as Stephen, where we wait for a little while, but not too long, because we don't want to lose the user-experience data.
A: Ian here. Yes, I just want to say, in response to what you were saying, Nick, that the Reporting API is very geared towards coalescing reports when we can and delaying the sending of them, just to respect the user's bandwidth. So I know you mentioned having some kind of API that would let you send it immediately.
A
I
think
what
we
would
do
in
that
case
is
actually
just
have
a
way
to
imperatively
snapshot
the
performance
data.
I
can
sort
of
cue
that
report
and
then
you
can
snapshot
it
four
or
five
more
times,
maybe
before
a
single
report,
bundle
was
to
go
out.
G: That would probably work for us, because we do have a strong real-time component to our analytics. We would like to guarantee... I don't know what our guarantee would be, but today we deliver data to our dashboards within five to ten seconds, so customers that may want to get more page-lifetime data may sacrifice some of that real-timeness to get some of the additional data. So there's probably a balance that we'd have to figure out, but it would probably be either "send it right now" or "soon, but don't wait."
D: Yeah, I just wanted to call out that this is really similar to something that's come up while we've been working on task scheduling with some partners, specifically with Airbnb. What they want to do is mostly for analytics, or their logging.
D: They tend to batch that stuff, and they want to schedule it in, let's say, an idle task or a background task in postTask, but they also want to guarantee that it's eventually going to get sent out, that the task is eventually going to run.
D
So
they
were,
you
know
wondering
so
what
they
have
to
do
is
track
those
tasks
manually
as
well
as
cueing
them
with
with
the
browser
which
is
less
than
ideal
and
and
what
they
were
kind
of
looking
for
is
something
like
a
similar
guarantee
like
can
we
make
sure
these
tasks
run
on
page
height
or
something
so
we've
even
been
just
kind
of
was
discussed
yesterday,
like
considering
whether
or
not
we'd
want
to
add
an
option
to
post
task
such
that
you
can
mark
that
these
tasks
need
to
run,
and
this
gives
you
maybe
that
best
of
both
worlds,
where
okay
it'll
run,
if
there's
nothing
else
that
can
run
at
the
time.
D
So
it's
not
going
to
interfere
with
you,
know,
user
interactions
and
whatnot
and
we'll
make
sure
it
gets
run
on
visibility,
change
or
something
like
that.
So
definitely
seems
a
lot
of
overlap.
I
wonder
you
know
is:
do
we
need
something
general
like
that
through
task
scheduling,
or
maybe
this
reporting
way
is
a
better
way
to
go.
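What developers have to do today, tracking the queued tasks themselves so the whole batch can be forced to run when the page is about to go hidden, can be sketched in user land roughly like this. The wrapper names are made up; in a supporting browser the `schedule` callback would typically wrap `scheduler.postTask` at background priority:

```javascript
// User-land sketch of "idle tasks that are guaranteed to run":
// queue work lazily, but keep a handle so the whole batch can be
// forced to run when the page is about to go hidden.
// queueGuaranteed / runAllPending are invented names.
const pending = new Set();

function queueGuaranteed(fn, schedule) {
  const entry = { fn, done: false };
  pending.add(entry);
  const run = () => {
    if (entry.done) return;   // already forced by runAllPending
    entry.done = true;
    pending.delete(entry);
    fn();
  };
  // `schedule` defers the task, e.g. via scheduler.postTask(run,
  // { priority: 'background' }) in browsers that support it.
  schedule(run);
  return run;
}

function runAllPending() {
  // Called from a visibilitychange/pagehide handler: flush everything
  // the scheduler never got around to running.
  for (const entry of [...pending]) {
    entry.done = true;
    pending.delete(entry);
    entry.fn();
  }
}
```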
B: I think that with reports, generally, the actual report sending doesn't have to happen on the main thread, or in the renderer at all. I don't know where it happens now, but theoretically it could happen elsewhere.
B: But I guess the bit that could be interesting for the scheduling piece is the metric collection. It definitely sounds like, assuming this is of interest, it would be good to coordinate those efforts.
B: So I think this is different from sendBeacon, in the sense that you provide the blob that you want to send, but it is still empty, and you only fill it up once the metrics come in. With sendBeacon, I think the goal is: here is data, you can copy it and send it now; it should survive renderer termination, but it shouldn't wait for renderer termination. And this is a different thing; it's like Nick said.
B
It
gives
you
data
reliability,
that
your
renderer
won't
die
on
you
without
sending
data,
or
at
least
to
you
know
a
large
extent.
H: Right, but I guess the original intention of sendBeacon... What I mean is that you can make this even more generic: you want the ability to have some control over a timeout for when this gets sent; you want a guarantee that it is going to get sent no matter what; and you want the ability to update it. This is why I said that a cancellable sendBeacon would get close.
I: This sounds interesting, especially if the intent here is that it's going to be more reliable than sendBeacon. Because, at least as we've seen, we also have this notion of sending last-minute telemetry data before the session ends, or before the close, or before unload, or at points like that, and we've seen that sendBeacon can be unreliable, especially if it reaches a certain threshold of concurrent requests and the browser starts dropping them using some heuristics.
H
Well,
what
I'm
worried
about
is
that
it
was
the
original
intent
of
zen
beacon
to
always
work
and
then,
when
it
actually
was
implemented,
there's
a
ton
of
kv
ads
and
it
doesn't
really
work
in
all
situations.
So
I'm
afraid
that
this
will
be
a
repeat
of
this,
like
the
intent
when
it's
theoretical
is
very
pure.
J: Yeah, I just wanted to say that, from my perspective, the main benefit is improving the reliability of sending the performance data. From the implementation perspective, we would need to literally store this somewhere in the browser in case the renderer crashes, right? So from that perspective, it's quite different from any task that the renderer may want to run, or any arbitrary data that the renderer may want to send, because we probably don't want to guarantee storing arbitrary data for any arbitrary renderer.
J
So
I
think,
like
I
would
say
it's
different
from
both
of
those
things.
But
overall
I
do
think
the
idea
of
improving
the
reliability
is
super
useful,
especially
given
the
current
architecture
of
the
analytics
providers.
Right
now,.
I
I
guess
also
limiting
by
size
can
be
important,
because
if
it,
what
you're
sending
gets
too
big,
it
reduces
the
likelihood
of
the
request
to
succeed,
even
if
it
happens
after
the
render
dies.
B: Last slide, which is a set of open questions. One of my concerns here is that if the thing that's dying is not the renderer but the whole browser, and the browser presumably has a bunch of renderers that all said "send those reports when we're done," then the browser now needs to, you know, potentially send many reports at the same time, which can lead to dropped reports, due to congestion or a lack of time to actually send them. So this is one thing that we'll definitely need to solve.
B
It's
just
that
if
I
have
multiple
backgrounded
tabs
that
are
still
alive
and
haven't
sent
their
reports
and
now
the
user
kills
the
browser,
then
all
those
reports
presumably
need
to
be
sent
because
their
renderers
died,
but
the
browser
who's
supposed
to
be
doing
the
sending
is
also
you
know,
living
on
board
time.
F: If some of the earlier concerns around real time and sending partial updates earlier hold, then you might see folks still using visibilitychange events to do a partial push, and then there might be less data in here, if that becomes a popular pattern: to start by putting it in here, which is a very lazy sendBeacon that's, you know, maybe more reliable, maybe not, but at least easy to call into with the expectation that it eventually gets delivered.
C: Yeah, that was actually my question. If this API allowed a way to say "send now," so that you would add your data to that object and then, for example, when there's a soft nav, you could say "okay, send it now," flushing the buffer in a way, that would be great. Then, for the rest, you would use the API as is.
C: That API would be called at the end, when you close the tab and everything, but instead of a timeout, it would be great if we could imperatively say "send it now." So in a long session we could decide when to send it, and then at the end it would take care of sending the remainder. That would be great.
B: Yeah, and then the other question I had in mind is that a very simple form of this is something you can get away with without backend stitching, but any kind of real-time reporting...
G: In my opinion, for the things that we want to measure beyond the page load beacon that we send today, I think we can package it up and present it to our visitors in a way that helps them understand that these are metrics we're collecting beyond the page load, and that you may need to look at two individual views: one view to see what happened during page load, and another view to see what happened beyond it, or something like that.
K: Can you detail or walk through what you mean by server-side stitching? Can you define that a little more precisely?
B: Basically, each report gets processed and displayed to whoever that report's customer is. But you may get multiple reports per session. One example is if you're sending reports on visibility state changes, but the user clicks back and forth and you want to include all of that in a single session; or if you want to send a report every minute with what happened during that minute, or whatever other strategy of continuous reporting you can think of. Then you need, at the collecting server or somewhere in the processing pipeline, to realize that those reports all belong to the same session.
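The stitching being described, recognizing that several partial reports belong to one session and merging them back together, might look like this on the collecting side. The field names like `sessionId` are illustrative assumptions, not part of any spec:

```javascript
// Illustration of server-side session stitching: several partial
// reports arrive per session (one per visibility change, one per
// minute, etc.) and must be merged into a single record.
function stitchSessions(reports) {
  const sessions = new Map();
  for (const report of reports) {
    const merged =
      sessions.get(report.sessionId) || { sessionId: report.sessionId };
    // Later partials overwrite earlier values for the same metric;
    // additive metrics (e.g. a layout-shift total) would need their
    // own merge rule instead of a plain overwrite.
    Object.assign(merged, report);
    sessions.set(report.sessionId, merged);
  }
  return [...sessions.values()];
}
```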
F: Could I take a stab at a more layman's explanation? Because I'd like to wrap my head around this as well, in terms of where the problems are. You know, I'm very interested in metrics after page load, the full page lifecycle, and we've talked about it a few times this week.
F
You
don't
know
what
the
values
are
until
the
very
end
like
when
the
page
is
unloading,
because
if
you
want
to
track
the
average
interaction
cost
or
the
average
smoothness
or
the
total
amount
of
layout
shift,
you
need
to
wait
to
get
all
of
that
data,
and
this
becomes
more
and
more
complicated,
especially
as
you
have
things
like
bf
cache,
where
the
lifetime
of
the
session
is
like
super
elongated,
and
I
understand
that
for
like
rum
frameworks,
they
want
a
very
eager
report
to
go
out
to
show
it
in
the
dashboard
and
to
have
confidence
that
eventually
gets
delivered.
F
So
they
have
a
problem
where
you
may
have,
at
the
very
least
two
reports,
an
eager
one
and
a
late
one
at
the
very
end
of
the
page,
or
you
might
even
have
a
trickle
effect
of
like
every
visibility
change
every
session
navigation,
whatever
it
is,
and
so
what
do
you
do?
Do
you
overwrite?
Do
you
average?
F
G: Yeah, one other aspect, from a data point of view, that's a concern for us: let's say you have a data point like a page load time, and then later you have a data point about cumulative layout shift. There's all this metadata that goes with both of those: what kind of browser was it, where did they come from, what were they doing before, what's their session ID. There's all this metadata for each one of those individual timers.
G
So
it
becomes
a
data
processing
problem
to
have
data
like
individual
data
points
come
in
in
a
trickling
way
versus
being
able
to
kind
of
group
them
together
into
that
single
session,
because
you're
gonna
every
other
view
every
other
report,
every
other
slicing
and
dice
endless
data
is
based
on
the
same
kind
of
characteristics,
except
for
those
individual
time
stamps
that
you
know
that
from
just
a
infrastructure
point
of
view.
That's
why
we
try
to
group
all
of
our
data
kind
of
and
send
it
send
it
through
the
pipeline
at
once.
If
you.
G: I think it'll help a lot of analytics providers, because a lot of us do the same thing, which is that we send all our data at once, because we don't want to deal with multiple random bits of data coming in at random times. We'd rather it all arrive in one group: we can process it as one, we can push it forward as one, and we don't have to hold on to it and wait for other pieces that we don't know will ever come or not. It's just easier to process.
G: I could envision this, and I'm completely spitballing here, classifying different types of metrics as more real-time or less real-time. The more real-time ones are like the page load time, because when it happens, it happens, whereas you have other reports that are more like a rolling average of the last, you know, window, or they come in when they come in, so they could be five or ten minutes delayed, or whatever, just based on the page itself.
B: And yeah, it could be something like what Stephen talked about, with developer-driven sessions, but there could also be browser-driven sessions, where ahead of time the developer can say: okay, send me this report every time that the page goes hidden, or every time that, you know, some navigation-y thing happens.
B: Maybe we could also bake this into frameworks, so that every time history, like the new navigation API, triggers a new navigation, that is when those reports get sent, or something along those lines. I haven't thought it through, but that could be an interesting addition.
F: Yeah, I suspect you'll always want developers to be able to slice in a more fine-grained manner. But in the case that we all rally around at least some macro notion, I could see wanting to be more eager and slice it as the default, and then also potentially extend it to really the full page lifecycle, like the unload, or the renderer dying.
B: Yeah, and then the major open question, I think, is this one: we already have multiple ways to send reports, and will we be able to unite behind this as the one true way, or will this just be, you know, N plus one? I guess that is a question that only time will tell, but I'm happy to hear there's interest.
H: So do you mean having something like a standardized report format, where you just declare "these are the metrics I'm interested in; please send them to that URL when there's a new navigation"?
B: I think this serves this use case well.
H: Which is nothing but... I mean, your script could be just an observe call; we were saying, like, observe-and-report, where you declare which fields you want to get, or it could be done through a header. You know, lots of options. But I'm thinking of this trend of new APIs, like Network Error Logging, where this is all done for you and you don't get to pick the format. So, of course, there's too much data by default in the performance timeline, yeah, even for one page, but it's something to think about, probably.
B: Yeah, this is why I think you need client-side JavaScript to reorder the data in ways that your backend expects, and a predefined report format seems like something else than what I had in mind here.
B: Cool, so I guess with that, thank you for the great feedback on this one, and we are out of agenda. We kept 30 minutes of overflow time, but it seems like we were right on time throughout the full four days, which is unprecedented.
E: So yes, I was going to say: that's it, the web is fixed and everything is fine, so we don't need to work anymore.
B: Thank you, guys. I suspect that's not the case, but yeah, thanks everyone for a great four days of sessions. I hope you're sticking around the virtual TPAC for next week's breakout sessions. And I suspect that, unlike last year, when we planned the in-person interim and then it faded out, this year we will try to organize some sort of virtual interim, because one year is too long. So stay tuned.