From YouTube: WebPerfWG TPAC 2020 meetings - October 21 - part 2
A: And we're back from the break with long task attribution. Patrick, do you want to share your screen?
B: Coming through? Okay, all right. So thanks everyone for joining. My name is Patrick. I know a few familiar faces here from my work on Chromium and Lighthouse over the past couple of years, but this is my first time directly meeting many of you. So I just want to thank you all for the invitation to share here at TPAC. I've been a long-time distant observer slash beneficiary of this group's work, so it's great to put names to faces, and an honor to participate this week.
B: The context for this talk is a little bit different from any of the updates we've seen. It's not a formal proposal for a specific API, or an update of anything ongoing, but rather just taking some time to share and explore how synthetic tooling, namely Lighthouse, has approached a very common web developer pain point, and whether it might make sense to bring this model into a proper web platform API. So consider it a pre-proposal, and let's get to long task attribution in Lighthouse.
B: First, a bit of background on what Lighthouse is actually trying to accomplish. Lighthouse is a lab tool that captures performance metrics and identifies opportunities for developers to improve the page.
B: What that means with respect to long tasks is that identifying the cause of main-thread work is really critical for us. Long JS execution is usually the biggest culprit dragging down the performance metrics that we see in the lab, and knowing that a lot of work happened isn't super useful on its own. The first question developers have after they get a low score or a low metric is: why?
B: And: how do I fix it? A developer needs to know where to go and who to talk to in order to actually make progress on their poor performance if they see something wrong. On the left here is an example of one of the Lighthouse audits that we have today, which identifies which third-party scripts were responsible for contributing the most main-thread blocking time. In order to produce a table like this, we need to be able to identify, for every single long task, who was responsible for it.
B: So that brings us to our overall goal, which actually turned out to be quite different from the typical breakdowns of execution time that you might have seen in traditional performance profiling, and it's important to keep in mind what we're actually trying to achieve with attribution here. For us, that means we want to find the proximate cause of work.
B: In other words, we want to answer the question: why is this long task here at all? That's actually pretty different from the question of: why is this long task taking a long time? So attribution in Lighthouse is not trying to answer why a long task is long. I think for many of us on the team, that was a question...
B
We
were
more
used
to
answering
with
typical
profiling,
but
for
lighthouse,
it's
not
about
what's
executing
during
a
task
and
it's
not
about
what
is
making
a
long
task
take
a
long
time
which
are
both
important
questions,
but
they
tend
to
provide
a
little
insight
for
typical
developers
on
where
to
start
an
investigation
on
why
work
is
occurring,
and
I
bring
this
up
mainly
because
the
current
focus
of
the
w3c
attribution
definition
in
long
tasks
is
focused
around
this
question
of
what
is
currently
being
executed
in
the
long
task
as
opposed
to.
B: So I think it's worth going through the progression that we experienced when trying to work out this attribution of long tasks. First, a quick definition of some important terms I'll be using that might be unfamiliar. Because the Lighthouse model is based on a Chromium trace, our terms are derived from those trace events. So when I'm talking about EvaluateScript, I'm referring to the initial task for the evaluation of a script when it first loads.
B: So, the macro task that executes a script's initial payload. When I'm talking about the self-time of a source, I'm talking about the time spent executing while that source is at the very top of the stack. At the bottom of the slide here you'll see a Chromium DevTools inspired flame chart that's breaking down where time is being spent. In this example, the top of the stack is at the bottom of the chart.
B: So now that we've got some terminology out of the way, we'll start with a very simple case here: looking at a single long task from an EvaluateScript of a script called badscript.js, which is going to do some layout thrashing. When it's executed, it's going to look at every DOM element and then alternate back and forth between setting and reading some dimensions.
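The layout-thrashing pattern being described can be sketched roughly like this (a minimal illustration; the plain objects here stand in for real DOM elements, where the `offsetWidth` read would force a synchronous reflow after each style write):

```javascript
// Alternating write/read per element: each style write invalidates
// layout, and the following offsetWidth read forces the browser to
// recompute it, so layout work dominates the resulting long task.
function thrashLayout(elements) {
  let total = 0;
  for (const el of elements) {
    el.style.width = "100px"; // write: invalidates layout
    total += el.offsetWidth;  // read: forces a re-layout in a real page
  }
  return total;
}
```

In a trace of a real page, the EvaluateScript frame for badscript.js sits at the top of the task, but most of the wall-clock time lands in the browser's Layout work underneath it.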
B: If you collected a performance trace of this sort of script, you're going to see a flame chart that looks something like the one below. It's going to have the EvaluateScript at the top, that initial call frame, but most of the work is going to happen in these Layout tasks that the browser is doing to re-lay out and then re-read that width.
B: So even though most of the time is spent in browser layout here, we would still say that badscript.js is the reason that this long task is here. And so our realization was: well, maybe self-time isn't exactly what we're looking for here. Our next thought was: maybe we can attribute by script self-time instead. But the problem with that is that many sites look very similar to the case we just looked at, except with browser layout replaced by, say, the work of a library.
B: So, here again, even though jQuery is at the bottom of the stack (in fact, it's the entire stack; nothing in badscript.js ever actually executes within the long task itself), we would still say in Lighthouse that badscript.js is the proximate cause of this particular long task being here. And so this suggested: hey, maybe we don't even want to look inside the task at all to find why it was here.
B: Maybe we need to look for whatever script scheduled it, or whatever is responsible for this task occurring. And this is where we finally have a case where we might go too far. So now we have two scripts: a middleman script that adds badscript.js, and then badscript.js is the same as before, where we have a setTimeout of jQuery doing some work. In this case it's a little more up in the air; it's arguable whether middleman script or badscript.js is actually the proximate cause of this long task being here. We have the open question of how far back up the chain we go for causality. In Lighthouse, we stop at any EvaluateScript that has a URL that we could identify to a developer. So, in this case, we would consider badscript.js to be the proximate cause of this long task in the setTimeout. There are two main observations I hope we can take away from these examples.
B: First, self-time or the stack root alone can end up pointing at the wrong script to begin an investigation. In fact, if you don't take causality anywhere, you might end up saying that the browser engine is the cause of all long tasks, because even if the script asked the browser to do work, it is conceivably the browser that took too long to do that work.
B: So I want to take a more practical look at how this actually works inside of Lighthouse. As a context refresher: again, Lighthouse is a lab tool that records a Chromium trace while the page is loading. For us, that means cross-origin leak concerns are not really a limitation for the analysis, and we actually get bonus access to V8's CPU sampling profiler as well.
B: If we have a set of tasks that looks a little something like this, with a long task at the very end that occurs in a setTimeout, we would walk back and say: all right, who scheduled this particular setTimeout? Walk back to the previous setTimeout, ask who scheduled that setTimeout, walk back again, and so on.
B: So there's some pseudocode here for you on the bottom right. The general idea is that we walk up this tree of scheduling tasks until we find some sort of EvaluateScript that has a URL. If none of that information is available, then we fall back to whatever is at the root of the stack, which we saw is pretty helpful and does a better job than self-time alone. And if we don't have any of that information and the work was occurring inside of an iframe, we return that frame.
B: And finally, if we don't have any of this information whatsoever, then we fall back to "unattributable" and say we don't know why this long task was occurring.
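A minimal sketch of that fallback chain over a toy task tree (the node shape used here, with `parent`, `type`, `url`, `stackRootUrl`, and `frameId` fields, is assumed for illustration; Lighthouse's real implementation works over Chromium trace events):

```javascript
// Walk up the chain of scheduling tasks looking for an EvaluateScript
// with a URL; fall back to the stack root, then the iframe, and only
// then give up and call the task unattributable.
function attributeLongTask(task) {
  for (let t = task; t; t = t.parent) {
    if (t.type === "EvaluateScript" && t.url) return t.url;
  }
  if (task.stackRootUrl) return task.stackRootUrl;
  if (task.frameId) return task.frameId;
  return "unattributable";
}
```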
B: In some cases, though, there are some Chromium tracing gaps, so we don't have perfect coverage in Lighthouse for finding the original instigator. In fetch and event listener cases we end up falling back to whatever is at the root of the stack. From what I understand, these aren't hard technical limitations; it's more of a "we haven't gotten to that point in implementing it in Chrome tracing." And there are a few cases where we're not able to provide any coverage at all at the moment, and that's the DOM mutation cases.
B: For example, if badscript adds a script element, we lose any URL to attribute it to, because the inline script doesn't have one.
B: We aren't able to trace back who installed it, given the current Chrome tracing state. And then finally, non-JavaScript async work is also difficult for us to track down at the moment. For example, if the browser decides to spin something up, or is executing async work that wasn't scheduled via one of these common JavaScript methods, we don't really know what script was responsible for causing it to spin up that animation, for example.
B: You might be wondering: how well does it work? From the attribution perspective, we're able to come up with some useful URL in pretty much all of the cases: 99.6 percent of all long tasks in HTTP Archive for August 2020 received attribution of some sort. The unattributed remainder is fairly concentrated on particular types of usage patterns, so 96 percent of all pages have all of their long tasks attributed to some URL, and at the 99th percentile, around 12 percent of a page's execution time is missing attribution.

When it comes to how accurate it is: we've been using this logic for a little over two years, and so far it has held up very well against developer expectations. Our users are normally very vocal when they think something unfair has happened in assigning a particular script responsibility for execution, and we've had only two real major cases come up. One was a bug that we were able to fix pretty quickly, and the other is around this fallback case for fetch and onload listeners.
B: The other nice anecdotal piece about this particular approach: with the previous attempts, as I went through in the progression, monkey-patching and polyfill scripts, particularly ones that instrument timing or try to do error catching, end up dominating attribution when you go with something like self-time or the root of the stack alone.
B: So by being able to trace back to the original initiator, we're able to avoid that particular pitfall. And the other really nice attribute that helped us early on is that all of this attribution doesn't require any expensive intra-task timing information like sampling; we're able to attribute based only on the macro-task relationship information between all of the scheduling events.
B: Two other noteworthy pieces of this we've found so far. One is that scheduler scripts see inflated attribution if work is scheduled via variable mutation. So, for example, if you have some master script that is running in a setTimeout loop or a requestIdleCallback loop, and other scripts ask it to do things by pushing work onto some sort of queue, then this master scheduler script is going to be receiving attribution for all of that work.
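The scheduler pattern in question looks roughly like this (names illustrative): because every queued job runs inside the scheduler's own loop task, task-based attribution charges all of it to the scheduler script rather than to the scripts that enqueued the work.

```javascript
// A "master scheduler": other scripts push work onto a shared queue,
// and the scheduler drains it inside its own task. From the task
// tree's point of view, all of this execution belongs to the
// scheduler script, not to whoever called enqueue().
const workQueue = [];

function enqueue(fn) {
  workQueue.push(fn); // called by unrelated scripts
}

function drainQueue() {
  let ran = 0;
  while (workQueue.length > 0) {
    workQueue.shift()(); // runs someone else's work in our task
    ran++;
  }
  return ran;
}
```

In a page, `drainQueue` would typically be driven by a `setTimeout` or `requestIdleCallback` loop, which is exactly the shape described above.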
B: Now, typically we see this as a problem when it's a large SPA and we're not able to distinguish between its own scripts; that's the pattern we commonly see there. But in Lighthouse, the most common use case where we see developers really appreciating this particular type of attribution is when it's distinguishing between first-party and all the different third-party scripts they have on their page, and it's pretty rare to see any sort of master scheduler between unrelated third parties.
B: The other piece is that, obviously, Lighthouse is a lab tool that's gathering synthetic information, basically only over page load, up until today. So we have good validation that it matches developer expectations before any user interaction, but we have limited data to suggest how well it's going to perform in the interactive case, once the user is interacting with the page and it becomes a little more dynamic.
B: So that's how Lighthouse long task attribution works, and I really wanted to spend the rest of the time noodling on some of these open questions we have. First off: does this model of proximate cause seem widely applicable? How does it gel with other folks' understanding of how you should attribute work to particular scripts? And, assuming that it is interesting and applicable...
B: ...is this feasibly implemented across browsers? Up until now, we've used Chromium trace events to provide us this information. Is there anything that might be different across browsers that would make this sort of implementation really difficult? And assuming that that is all good, and that it's widely applicable, would this model still yield developer value as a web API...
B: ...after cross-origin concerns are taken into consideration? I imagine this would have to follow a similar model to what Ulan described with memory, in that we're not able to leak a URL that isn't already known to the main page. But in Lighthouse we haven't had to deal with this concern, because it's a developer tool that already has access to all this information. So, given that, the most valuable piece that we get out of this is learning these third-party URLs that are causing problems.
A: Oh, so first of all, thank you so much; this has been super useful. I'm wondering whether the bits and pieces where we cannot attribute are fundamental, or just implementation-driven. In particular, you mentioned scheduler scripts, and work being pushed from a queue.
A: Do you think that kind of work can be attributed, to avoid over-blaming scheduler scripts? Or potentially, we also talked about something similar in the context of events: if we have events that register some work that needs to happen post-hydration, hypothetically, that is also something that we would love to keep track of via a similar mechanism. So, yeah, thoughts on that?
B: Yeah, I think the scheduler problem needs to be solved using a combination. I don't think you can solve it just by having smarter automatic attribution; I think some sort of developer annotation is required to solve that particular problem, because the scheduler is potentially executing the work of many scripts inside the same task. It kind of needs to be the one to say so.
B: So I definitely think it's solvable with some additional annotations from the scheduler script, but at least I haven't come up with an idea for how to address that problem without their input, mainly because they're the one that knows how work is being scheduled. And automatic attribution, even if it were heuristic-based, feels like it would ultimately be much lower quality than if we were able to get some sort of annotation.
D: I did chat with the V8 team at one point about the feasibility of tracking what's mutating each variable and then what's reading from it, and somewhat unsurprisingly, they said that would be way too expensive, we can't possibly do that. You can imagine us having some kind of heuristic-based approach perhaps, but we definitely can't do the sort of perfect thing.
E: Michael, yeah. I mean, first of all, this is fantastic. I was curious:
E: You mentioned that you have access to traces and stack sampling, just the way Lighthouse does it, but how much of that does this actually rely on? It seems like, for the top-level task attribution, in terms of context, this is information that's readily available.
B: Yeah, you're right, the stack sampling is really only necessary for extra granularity. In fact, standard, regular Lighthouse today doesn't have the stack sampling enabled at all; we use that information as sort of an experimental add-on to add more granularity to our attribution. So that 99.6 number we're actually able to get without V8 stack sampling enabled. It's really kind of a nice bonus to further narrow down what's happening, but you're right: all of the relationship information here is macro-task relationships.
B: I imagine there might be a lot more information gleaned from being able to understand the relationship between all tasks of any length, as opposed to only long tasks, so I would have some questions to answer there. But I think the fundamental idea of being able to track back the task relationships, and only exposing the original URL, or an ID that the original page already knew, seems much more realistic. And in the case of Chromium, they do already have this information in the trace.
A: Regarding your last question and cross-origin concerns: I think that if we limit the information that is exposed to just the pre-redirect URL, this is already information that the page has or can obtain. The pre-redirect URL is already visible to the page by multiple means, and then a page could stagger the loading of different scripts in order to figure out which one of them is triggering a long task and perform that attribution. It's not something that people would actually do in the wild, in production, but for attack scenarios that seems like something people can do, and therefore this is already exposed, regardless of the Long Tasks API, just from setTimeout timing attacks.
E: So, can I go in another direction? I don't know if that's a follow-up. I think your talk was referenced already earlier this week in anticipation, both from the responsiveness topic as well as the SPA topic, and today you brought up the scheduling case, where you might want developer annotations because the cut point, so to speak, isn't easy to identify automatically. But for responsiveness, or soft navs, that's another potential cut point. And I think I've heard you speak to this before: you mentioned that it's possible that EvaluateScript is also happening in certain cases, like click handlers or something like that. But you can imagine a developer annotating start points for soft navs, or certain important interactions, and then, rather than attributing long tasks backwards, it's almost like attributing, from the start point, any task moving forward, if that makes sense: any future async work that is somehow in its chain gets attributed back to the start point. That's what you would consider the proximate cause, I guess, which seems interesting from that perspective as well. Perhaps we can use the same task tactics for a different reason, like end-to-end responsiveness.
B: Yeah, it's almost like taking the extreme "the document did it" example but reapplying it to a specific point in time of the document, right, and saying: well, all this work came from this particular version of the document, marking that point in time. It's an interesting alternative use case. The graph information itself is really useful for a number of reasons, and, you know, I'm excited to see...
F: It can provide hints that there's something wrong, but then they obviously have to dig into the tools and use Lighthouse and other things to actually get the information. So if there are more things that we could track from RUM to actually start pointing fingers, at least at a very high level, that saves time; it helps the customer track what the true culprits are, or aren't, especially over time.
F: How does that change over time? How does it change on this one person's machine in this random country? It's all that kind of stuff. So yeah, getting any more sort of information that we can, beyond just what Long Tasks reports today, would make this kind of data way more useful. Right now we capture it, but we don't feel super enthusiastic about it, because we can't really tell much more than "this thing happened."
F: I also would say, Patrick, you touched on one point around better attributing what some third parties may be doing on the page. In particular, for what we do for RUM, we sometimes wrap top-level APIs, and the attribution that you're doing here helps stop pointing fingers at our own JavaScript, because it more smartly points to what the true cause of that API being triggered was, not just the JavaScript that happened to be in the middle as the library of choice.
B: One other very unrelated case that comes to mind for this sort of useful developer attribution: you know, you had an error with a stack trace that is largely useless because it points to some async click handler, et cetera, et cetera. Being able to attach which script installed that listener, and where, would, I think, also be very useful. So maybe that's very unrelated and outside the scope of what's appropriate here, but it's just another interesting case for this attribution graph information.
G: So, Patrick, you mentioned the second question is feasibility of implementation cross-browser. Have you tried to investigate whether it could be possible on other browsers?
B: I have not. I would love to hear whether these same concepts apply equally from the other browser vendors. My initial cursory investigation suggests that they would, and that perhaps the biggest sticking point might be the different coalescing of tasks, and, where was it, prioritization and break points that differ from the analysis that we do. But I don't have a very strong sense at this point whether this graph information is easier or more difficult to obtain.
H: My initial impression is that this sounds a little difficult to implement, and possibly expensive. But beyond that, I don't immediately have any fundamental problems, though those are big ones.
A: Any other questions or concerns?
A: Okay, I guess, yeah: maybe we can point that question towards security folks to get them to sign off on it. Okay, so...
B: Oh, I would say my next question would be: how could we best help? In terms of, what would be a reasonable next step for making this more...
K: Most importantly, it's just showing the use cases that it's trying to address that obviously are not being addressed by the existing API. And maybe, if there are any security questions: how do we ensure that, bouncing across frames, the attribution is safe to expose? How do you reason about that, if possible?
D: I don't know whether it would be possible for us to just do a quick tweak to the Lighthouse algorithm to take into account cross-origin concerns, and then we can stare at the data a little bit, or something along those lines. But having more data to gain confidence that this would be useful would make me much more comfortable saying it's worth pursuing further.
L: Hey, I'm still here. I think Simon may be the person; he worked on this spec. Like, Peter Marshall and Simon, maybe.
A: Cool. So, with that, any other last-minute questions, or shall we move on to the next subject? Okay, with that: Andrew.
M: All right, so yeah, I'm going to be chatting a little bit today about the JS profiling API, the JS Self-Profiling API, a thing that gets samples from the main thread, whatever you want to call it. For a bit of context here: this is an API to sample client-side JavaScript. Continuing with the dev tools theme of the last few presentations, this gives insight into exactly what JavaScript is executing on clients' machines, at pretty much exactly the level of granularity that you've come to expect from local developer tooling.
M: So the idea is: we have users in the wild running, like, maybe a few experiments, maybe running some wacky third-party code, and we want to figure out why a given load was slow, and maybe attribution alone doesn't give us all of the picture; we want to dive deeper on a more qualitative level. And so here we go: we wrote an API for it, and it basically is just a sampling profiler. We implemented it in Chrome, ran an origin trial from M78 to M80, got a lot of great feedback from partners, and found a few issues ourselves on Facebook, so everyone was happy. The API has since been changed to require COOP and COEP, as we talked about...
M: ...I think it's probably been months ago now. That was part of a step to ensure that, since this is a very timing-sensitive API that exposes raw sampling timestamps as well as potentially stack frames from cross-origin resources, we're safe; COOP and COEP, and cross-origin isolation in general, turned out to be a pretty great fit.
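For reference, cross-origin isolation is what a page opts into with these two response headers:

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```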
Cool. And yeah, there's a demo of the API on the right-hand side. It's fairly straightforward: you throw in your sampling interval...
M: The UA will decide what sampling interval it can actually best provide samples at. You do some work; it doesn't have to be all in the same macro task. It'll basically record until you call profiler.stop(), and then you'll get a trace in a trie format that's very similar to DevTools', which is a pretty effective form of structural compression, that you can then send over to a service for analysis, rip it apart, or, pardon me, do what you will. And here's just an example of the trace format.
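A sketch of the shape just described (the constructor options and the `frames`/`stacks`/`samples` field names follow the explainer's trace format, but treat the details here as illustrative). In a page, usage looks roughly like `const profiler = new Profiler({ sampleInterval: 10, maxBufferSize: 10000 })`, some work, then `const trace = await profiler.stop()`. The trie encoding can be unpacked like so:

```javascript
// Each stack entry stores only its leaf frame plus a pointer to its
// parent stack, so shared stack prefixes are stored once (the "common
// parent stack frames" compression). Expanding one sample's stackId
// back into a full stack is just a walk up the parent chain.
function expandStack(trace, stackId) {
  const names = [];
  for (let id = stackId; id != null; id = trace.stacks[id].parentId) {
    names.push(trace.frames[trace.stacks[id].frameId].name);
  }
  return names.reverse(); // outermost caller first
}
```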
M: There's some more detail on the GitHub, which I should probably have linked in this slide; if you Google "JS self-profiling," it'll show up. Basically the format is just making use of common parent stack frames, again very similar to the way Gecko and Blink have their dev tools tracing. Cool; any questions or ambiguities regarding the API shape or anything, before we move forward to some of the feedback?
M: Cool, all right. So yeah, the origin trial went pretty well. On our own side, we found a bunch of pretty pathologically bad cases with performance that we wouldn't have seen just with local profiling, especially since, on our end, we do serve different code to employees, and that was great.
M: We shaved, I think, on the order of tens of milliseconds on certain interactions. I don't have the numbers for page load, but I think interactions were the primary use case where we found a lot of unusual things. And this sentiment was actually also shared by the Excel Online folks, who ran their own origin trial for this and actually liked it...
M: ...so much that Edge has now renewed the origin trial. So yeah, one of the interesting things was that the focus on interactions in particular really highlighted the need for this API to be very performant.
M: You can imagine, if you're clicking on a button that maybe triggers a lot of work, you want to obviously commence that work and remain responsive to user input as soon as possible, and that can be a challenge, since VMs often require metadata in order to perform this profiling. For example, in Gecko's case, were you to use an API like this, the current architecture requires recompilation of some bytecode, which is not ideal to do in response to a click. So I'll talk a little bit more about activation in Blink.
M: But nonetheless, there's still work to be done, and we need to figure out when to do it. So yeah, this is basically just what I said. We talked about this in a prior call as well, and one of the ideas that came up was: let's leverage Feature Policy. And that seemed really great...
M: ...I was really happy with it, and then I found out that there's a lot of debate and strife over how Feature Policy should support certain worker use cases, which we think are pretty important to this API in particular, such as service workers and shared workers.
M: So I think that's still fairly contentious, last I checked, and blocked. And coupled with the fact that Feature Policy no longer has a disabled-by-default param (it was actually removed as part of the migration to Permissions Policy), that sort of complicates things, because we do want to be disabled by default. We want to have a surface where developers can profile very meaningfully on a small subset of users, rather than perhaps negatively impacting the experience for everyone by introducing that warm-up time.
M: So we had a few ideas for alternatives here. The most obvious one was just having script-driven warm-up, where you can instantiate a profiler ahead of time, and with that profiler it'll trigger the build-up of this metadata. But one of the problems with that is that a lot of this work does need to be done on the main thread, and the trivial way to mitigate this...
M: Well, I won't say it has to be on the main thread, but with the current user agents that we've looked into, it does require a lot of main-thread work, and it's a lot cheaper to do this online, rather than stopping the world and then rebuilding all of this metadata for every single function on the page.
M: We could amend Permissions Policy, but that would require figuring out what to do with workers especially. Or we could just maybe keep it disabled for shared and service workers for now and then explore it later, if that turns out to be a use case that people care about. Or we could just add a new header, and have the enabled status for a given script, or cross-origin isolated cluster, be the intersection of js-profiling opt-ins. So I'm kind of curious about folks' thoughts here.
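To make the new-header option concrete, a purely hypothetical opt-in (no header name or syntax had been settled in this discussion; this just borrows the Document Policy proposal's header shape):

```
Document-Policy: js-profiling
```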
M: Yeah, I'm particularly curious as well whether there's any precedent for any other APIs that might want to do this form of warm-up work, because digging through the various *-policy specs seemed to indicate that a lot of it was for just very simple feature activation, rather than "we actually need to do this for performance concerns."
A: So what are the blockers for Permissions Policy? Did you talk to Ian Clelland and other folks about supporting that use case?

M: I've chatted briefly about the disabled-by-default use case with some people, but this was a while back; I don't think it was Ian. I think the disabled-by-default stuff is fine, but the shared workers stuff, I think, hasn't been touched in a long time. But yeah, I definitely think it makes sense to open up another dialogue regarding that.
M: But yeah, I guess, so yeah: maybe we should just move that offline, then, if it seems like Permissions Policy is the right venue to go about this.
A: I think so, unless we find a fundamental reason not to go that route. But yeah, it would be good to have that dialogue before deciding that, you know, we need something completely different. Yeah.
M: Cool, sounds good to me. So that was the main contentious issue that I wanted to talk about today. I also want to solve the hardest problem in computer science: naming. We've had some feedback that "self-profiling" isn't really that commonly used in an industry context, and we should probably figure out a simpler name for the API, maybe one that makes a little bit more sense. So I threw together some candidates, took a look at some other spec naming, and I'm sort of partial to "JavaScript Sampling API."
M: I feel like it encapsulates the fact that this is fundamentally a sampling profiler, and it does in fact sample JavaScript. Again, opening the floor if anyone feels strongly about this; it's mostly just a bit of a nit, but it'd be fun to talk about.
E
Naming's always fun. When doing these exercises, I like to think: what would immediately come to mind for somebody who is not in this space and hears about it in a hallway? "Profiling" seems bad because, you know, user profiles; you can imagine identity being related to profiles, and somebody could mistake it for that. I don't know that "sampling" would be misused, and I don't immediately hate it. What would it be mistaken for, giving out Costco food samples or something? I don't know; there's maybe no way to misunderstand it.
E
Maybe. But yeah, it's just something to consider: being short and terse for experts is one thing, but being very explicit means it's immediately obvious to folks who hear about it what it's trying to do.
M
Yeah, to answer Karine's question: "sampling" was chosen primarily because it's sort of intrinsic to the nature of the API. I don't think there are many other sampling-based APIs, maybe some in the performance timeline space, and I think "JavaScript sampling" itself is pretty unambiguous with regard to what it refers to. But that's a fair point that "sampling" in isolation maybe doesn't mean anything.
C
Hey, this is Mike from W3C, just a comment. I'm a bit surprised that the word "self" is not in any of these other candidates. The "self" part seems pretty important to me, at least in communicating to the average web developer what this is doing.
M
Okay, that's interesting, because we had some conversations where people were actually confused by the "self" part in particular, which is what spurred this idea.
A
Yeah. Mike, do you want to elaborate on why you think the "self" bit is important here?
C
Because it's fundamental to the whole feature. I think the fact that we don't find it fundamental is because we're too close to it, but to somebody from the outside, the idea that the application itself has APIs exposed to it that allow it to measure its own performance is kind of different from the rest of the things we do. I mean, maybe it's not that different to people who work on web performance.
C
It aligns with other web performance work, but in general, I think, for web developers, having that kind of mechanism exposed to the web application itself is a bit surprising. Maybe. But honestly, it's just a comment; I don't feel that strongly about it, and I think you all have probably had more discussions with actual web developers about it than I have.
A
No, not necessarily. It's just that, to me, the "what is self-profiling anyway?" comment resonated.
C
Yeah, because to me that's, like, the whole thing: it is self-profiling. This is not a close analogy at all, but it's sort of the idea of bootstrapping, that kind of thing where you're having the system do something reflective, do some self-reflection, which normally, I don't think, is something we have a lot of other cases of. But then again, like I said... now that I say that out loud.
M
No, I mean, in the broader context of, like, full-stack web systems, I think it makes sense: something like "client" or "self" definitely pins it down, since you could have JavaScript on the server and JavaScript on the client, and that's sort of interesting. But in the W3C, or in this world at least, it's unambiguous that we're referring to the client, so "JavaScript client profiler" doesn't make much sense either.
M
There are some comments in the chat. "JS sampling tracer": I think that's interesting, except I feel like tracing and sampling are sort of orthogonal in some ways, since tracing can refer to, like, measuring segments, whereas sampling is just sampling. And "performance profiler": I do like that, actually; it does seem to fit in with the other Web Perf Working Group APIs, though it isn't immediately obvious that we're profiling script.
M
I know one of the asks that we actually did have is: can we sample things like layout? Can we sample things like style recalcs and stuff like that? But "performance script profiler"...
N
Okay, just quickly: does the API differentiate between, like, idle time and time spent in, yeah, style recalc or layout, or anything like that?
N
Right, yeah. One small thing I noticed along the same kind of tracing nomenclature is that, I guess, in the spec right now there is a ProfilerTrace dictionary. So, for the same reason that you're not too keen on using the term "trace", I think you might want to change the name of that interface, because to me, yeah, tracing is kind of like instrumented profiling.
N
It's
it's
not
a
sampling
thing,
so
I
I
think
yeah
a
smaller
thing
to
me.
I
mean,
I
think
you
know
sampling
api
profile
api.
To
be
honest,
the
candidates
you
have
listed
here
are
pretty
good.
I
think
anyone
who
engages
with
the
api
is
very
quickly
going
to
see
that
the
result
is
samples
and
if
they
were
hoping
for
start,
stop
style
payload.
They
will
quickly
realize
it's
not
so
like
it
may
not
be
super
critical.
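For reference, the payload under discussion, the ProfilerTrace dictionary in the current draft, is a set of flat cross-referencing tables (frames, stacks, samples) rather than a start/stop event log. A small sketch of resolving one sample's stack, assuming the draft's field names (`samples` reference `stacks` by id; stacks form a parent-linked tree referencing `frames`):

```javascript
// Resolve one sample's stack into function names, leaf first.
// Trace shape per the draft: samples reference stacks by id; each stack
// entry holds a frameId and an optional parentId link up the tree.
function stackNames(trace, sample) {
  const names = [];
  let id = sample.stackId; // undefined when the sample caught the engine idle
  while (id !== undefined) {
    const node = trace.stacks[id];
    names.push(trace.frames[node.frameId].name);
    id = node.parentId;
  }
  return names;
}

// Tiny hand-made fixture in the same shape, for illustration only.
const demoTrace = {
  frames: [{ name: "render" }, { name: "computeLayout" }],
  stacks: [{ frameId: 0 }, { frameId: 1, parentId: 0 }],
  samples: [{ timestamp: 10, stackId: 1 }, { timestamp: 20 }],
};
```

The deduplicated-table layout is the reason the payload is compact, and it is why a sample with no `stackId` reads naturally as "nothing executing at that instant."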
N
The
api
itself
says
defines
that
this
is
a
sampling
profiler,
and
this
is
what
the
payload
is.
Perhaps
more
important
is
just
kind
of
communicating
the
scope
of
what
is
being
captured,
which
is
basically
javascript,
just
javascript,
and
not
so
I
to
me
the
priority
would
be
yeah,
just
communicating
that
that
it's
just
apps
script,
it's
not
layout
and
idle,
and
everything
else.
M
Cool, yeah, that's really good feedback. I will definitely revisit the naming. I think there is a tendency to treat "trace" as separate from "tracing" in the nomenclature, and that distinction should probably be communicated in the interface naming as well.
M
But yeah, Boris, I think we did discuss potential future iterations maybe containing other metadata about the stuff being profiled, in which case we could define it to be more general. I have no objections to the "performance profiler API" name; I think anyone who looks into it will, as Paul mentioned, figure out that it is a JavaScript sampling profiler.
M
We only got... well, I don't have access to the origin trial survey results, since I am a Facebooker; I can ping Nicholas about it. Okay, cool. It would be interesting if there were comments in the survey, but it was mostly just, like, one conversation with some Microsoft folks, where it was like "self? self?", and that also resonated with me, because I've never seen "self-profiling" used outside of this context.
E
Yeah,
I
I
will
say
I
was
confused.
I
wasn't
sure
which
way
self
meant-
and
I
still
maybe
I'm
not.
Is
it
a
special
type
of
profiling,
where
it's
like
self
time
type
thing
or
is
it
that
you
yourself
as
the
web
developer,
do
it
where
the
name
like
the
fact
that
it's
an
api
implies
that
to
me
already
it's
a
web
platform
api
so
that
you
can
use
it.
M
Yeah,
so
it
was
there's
no
special
self
time.
It's
just
samples.
You
derive
potential
consecutive
script,
execution
blocks
yourself
just
from
the
sample
data.
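That post-processing, recovering contiguous execution blocks from raw samples, might be sketched like this. The gap threshold and the idle-sample convention (no `stackId`) are illustrative assumptions, not part of any spec:

```javascript
// Group consecutive non-idle samples into approximate execution blocks.
// A gap larger than maxGapMs between samples is treated as a break; an
// idle sample (one with no stackId) always ends the running block.
function executionBlocks(samples, maxGapMs = 20) {
  const blocks = [];
  let current = null;
  for (const s of samples) {
    const executing = s.stackId !== undefined; // idle samples carry no stack
    if (!executing) { current = null; continue; }
    if (current && s.timestamp - current.end <= maxGapMs) {
      current.end = s.timestamp; // extend the running block
    } else {
      current = { start: s.timestamp, end: s.timestamp };
      blocks.push(current);
    }
  }
  return blocks;
}
```

Blocks recovered this way are only as precise as the sampling interval, which is part of why the API hands back raw samples and leaves the interpretation to the site.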
M
I
think
it's
been
mostly
inertia,
keeping
the
name
the
way
it
is
for
this
salon:
okay,
cool
yeah.
I
guess
I
will
probably
bring
it
up
in
the
tag
thread,
but
it
does
sound
like
there
is
some
interest
in
maybe
keeping
it
more
abstract
than
just
script,
and
I
think
something
like
performance
profiler
actually
appeals
to
me
a
lot
more
because
of
because
of
that.
C
So
andrew,
if
you,
if
we
still
have
a
few
more
minutes
on
this,
so
I'm
really
interested
in
this
I've
been
interested
since
dominic
cooney,
I
think,
is
the
first
one
that
told
me
about
it
like
two
years
ago
or
maybe
three
years
ago.
I
don't
know
how
long
it's
been,
but
it
just
seems
like
something
that
could
be
pretty
exciting
for
web
developers.
On
the
one
hand,.
C
But
I
don't
know,
I
don't
have
a
good
sense
of
how
much
a
normal
you
know
the
average
web
developer
would
really
benefit.
From
this
I
mean
I
can
well
understand
how
facebook
would
benefit
from
it.
I
still
have
this
intuitive
sense
that
this
could
be
something
really
exciting
for
for
web
developers
and
useful
and
help
them
solve
problems,
but
I
have
been
disenharting
so
far
in
that
the
feedback
it's
been
a
while,
since
I've
been
paying
attention,
but
the
last
time
I
saw
feedback
from
apple
on
this.
C
It
was
not
super
encouraging
in
terms
of
their
position
about
whether
it
seemed
like
it
would
be
willing
to
implement
it
at
all,
and
I
know
yosuke
is
on
I
know
alex
is
on.
It
would
be
useful,
I
think,
to
hear
from
apple
if
they're
still
strongly
leaning
against
not
being
interested
in
this
at
all
or
if
things
have
since
changed.
O
Yeah, I mean, we're talking about something like slowing the page by 10%. That's just not an acceptable kind of cost, right? Not to mention the power cost. What most users are most concerned about is, like, you know: can I get to the page fast, and does it drain my battery? We're not adding any feature that drains the user's battery; that's not acceptable.
M
It's
the,
I
think
the
cost
equation
is
more
complicated
than
that,
since
you
can
get
a
lot
of
signal
from
a
small
percentage
of
users
as
like
somebody
who
maintains
a
web
property
and
with
that
small
signal
from
some
users
who
yes
may
have
their
page
load
times
adversely
affected,
you
could
resolve
issues
that
can
reduce
like
performance
and
battery
costs,
for
everyone,
like
the
idea,
isn't
to
have
this
always
on.
In
fact,
that's
what
motivates
the
future
policy
discussion?
It's
more!
So
just
getting
enough
signal
to
to
fix
these
issues.
M
Yeah
there
are
a
lot
of
apis
that
that
are
foot
guns.
It's
true
that
this
could
be
a
strong
one
depending
on
the
like
implementer,
but
it's
also
harder
to
activate
than
many
other
apis,
particularly
because
we
just
do
require
those
extra
like
feature,
policy,
headers
or
whatever
activation
scheme.
We
need
also.
A
That's also somewhat of a hurdle for any website that incorporates third parties and cross-origin resources. And one thing: if the concern is around developers setting it and forgetting it, one thing we could do is incorporate sampling into the API itself, so that as a developer you're setting a header saying "I want this on for one percent, or 0.1 percent, of my users", and the browser will roll the dice and include some headers.
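No such fractional header exists today; a site can approximate the idea server-side by rolling the dice per response and attaching the draft's opt-in header to only a sampled fraction of page loads. A hedged sketch of that workaround (the proposal above would move this dice roll into the browser instead):

```javascript
// Server-side sketch of the sampling idea: opt only a small fraction of
// page loads into profiling by attaching the draft's policy header
// probabilistically. The injectable `random` argument exists only so the
// behavior is testable; nothing like a browser-side fractional opt-in is
// specced today.
function profilingHeaders(rate, random = Math.random) {
  if (random() < rate) {
    return { "Document-Policy": "js-profiling" };
  }
  return {};
}
```

With, say, `rate = 0.01`, roughly one page load in a hundred gets the profiler exposed, which is the "small percentage of users" signal-gathering model discussed above.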
M
We have four minutes left, I guess. Any more questions?
C
Facebook is a pretty important application for a pretty huge number of users, and I can say that similar properties from other companies, I think, also have that characteristic: solving problems for those properties solves user experience problems for a pretty massive number of users, and that's something to consider when you're figuring the cost equation.
C
That's something to really consider, yeah. But again, I don't have a good perspective myself on weighing that against how much this would help normal web developers, versus the cost for all the other sites that people are using. But it would be nice to see if there's some hope of getting this made a part of the web platform and implemented across browser engines.
O
I
I
think
it
might
be
something
to
do
with
the
fundamental
sort
of
difference
in
approaching
puff
right,
like
the
our
approach
in
performance
in
january
is
not
about
letting
websites
optimize
for
our
engine.
We
optimize
our
engine
for
like
content
out
there,
so
I
it.
I
mean
it's
sort
of
like
a
chicken
egg,
a
problem,
because
if
the
engine
saw
website
wouldn't
use
it,
you
know
we
wouldn't
optimize,
but
I
think
the.
M
If you're speaking in the context of having the user agent control the activation threshold, I think it could be suitably reframed to not be considered an A/B-test sort of distinction, and more like "I am granting the site this much resource to gather signal on performance stuff."
M
Decision-wise, yeah, I can just speak from, like, a product point of view: we've found this to be massively useful.
Q
It would be useful for us as well, at Wikipedia, because we're dealing with a lot of variation in the JavaScript that individual users are running. We even have the ability to let users write their own JavaScript on the site, which is, you know, something fairly unusual. That means that depending on which article you're looking at, and which type of user is looking at the page, you'll be running very different JavaScript, and so it's very difficult for us to find the user scripts or wiki-specific scripts that are causing issues without something like this.
Q
It allows us to sample, like, very random pages and discover things that were just invisible in the global picture, even looking at long tasks, because you just can't find patterns easily. So I think for any website that's dealing with this kind of thing, where different users get different experiences in ways that are very difficult to monitor at scale, it could be useful, even if it's much smaller. It's more...
A
Oh yeah, I think Wikipedia is another good example of a website that can abide by the COEP and COOP restrictions, due to the lack of third parties. So thank you, that's a great example. And yeah, with that we're at time, so thanks Andrew, and thanks all. Thank you.
A
Yeah, and we will see y'all tomorrow for the last day of TPAC. Cheers.