From YouTube: TPAC WebPerfWG 2021 10 27 - JS profiling improvements
A: Is that your screen, or... yep? Okay, all right. So hi, everyone, I'm Andrew. I work at Facebook, along with my colleague Corentin. We're going to be chatting a little bit today about the JS Self-Profiling API: some recent updates, and some improvements to the API that we think are interesting and worth discussing. So, cool. I'm going to start off with a little bit of a recap as to where we are right now and what the API is for.
A: This is intended to provide more representative samples of JavaScript execution on real user devices, versus dev machines, which tend to be biased in various ways: from employee-only scripts, or, you know, because they're significantly beefier than the average device in the field and don't really capture the long tail of performance.
A: Here's just a very quick walkthrough of how the API is generally used. You can spin up a new profiler, basically for the duration of an interaction, to record samples. And when it comes time, say, when the document load event is fired, you can send the trace over to your server for analysis, possibly with Compression Streams if you're feeling adventurous.
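A minimal sketch of that flow, assuming a placeholder /traces upload endpoint and illustrative option values (the talk does not prescribe either):

```javascript
// Sketch of the walkthrough above. Profiler, CompressionStream, and fetch
// are browser APIs; the '/traces' endpoint and the option values are
// placeholders. The page must be served with the
// `Document-Policy: js-profiling` response header for Profiler to exist.

const PROFILER_OPTIONS = {
  sampleInterval: 10,   // ms between samples (the UA may round this up)
  maxBufferSize: 10000, // maximum number of samples to retain
};

async function profilePageLoad() {
  // Start sampling as early as possible in the page lifecycle.
  const profiler = new Profiler(PROFILER_OPTIONS);

  // When the interaction of interest ends (here: the load event),
  // stop the profiler and collect the trace.
  await new Promise((resolve) => addEventListener('load', resolve));
  const trace = await profiler.stop();

  // Optionally gzip the JSON trace with Compression Streams before upload.
  const json = new Blob([JSON.stringify(trace)], { type: 'application/json' });
  const body = json.stream().pipeThrough(new CompressionStream('gzip'));
  await fetch('/traces', {
    method: 'POST',
    body,
    duplex: 'half', // required by fetch for streaming request bodies
    headers: { 'Content-Type': 'application/json', 'Content-Encoding': 'gzip' },
  });
}
```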
A
A
The
api
is
intended
to
be
used
in
combination
with
the
performance
timeline.
It
shares
a
time
origin
and
the
server
can
also
perform
cross-correlation
by
sending
over
information
such
as
you
know,
long
task
entries
event
timing,
as
well
as
like
navigation
and
resource
data,
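One way that analysis-time cross-correlation could look, as a sketch; the sample and entry shapes follow the JS Self-Profiling trace format and the Long Tasks API, but the helper itself is illustrative:

```javascript
// Given a ProfilerTrace's samples and a Long Tasks entry, select the
// samples that fall inside the task's window. Because the profiler and
// the performance timeline share a time origin, the timestamps are
// directly comparable.

function samplesDuringLongTask(samples, longTaskEntry) {
  const start = longTaskEntry.startTime;
  const end = start + longTaskEntry.duration;
  return samples.filter((s) => s.timestamp >= start && s.timestamp <= end);
}
```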
A: Cool. Here's just a very quick overview of the trace format. We use a very similar format to Chrome tracing and SpiderMonkey, in that we use a trie to represent JavaScript frames.

A: These are all structurally compressed by de-duplicating resources, frames, and sub-stacks. I'll be posting the slides as well afterwards, if folks are interested in diving in.
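The trie structure just described can be illustrated with a small helper that expands a sample's stack ID back into a full stack (the field names follow the trace format in the spec; the example data is made up):

```javascript
// The trace's stacks form a trie: each stack node holds a frameId and an
// optional parentId, so shared stack prefixes are stored only once. This
// helper walks the parent links to rebuild a sample's stack, leaf first.

function resolveStack(trace, stackId) {
  const frames = [];
  for (let id = stackId; id !== undefined; id = trace.stacks[id].parentId) {
    frames.push(trace.frames[trace.stacks[id].frameId]);
  }
  return frames; // ordered leaf → root
}
```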
So, what's working well with this API so far? I mentioned that we've shipped in Chrome 94, and we at Facebook, as well as some other large properties, have gotten some pretty good signal as to how the API can be used effectively.
A: One of our big takeaways was that we're happy to confirm the hypothesis that the sampling profiler didn't introduce an unmanageable performance overhead. We were seeing around the one percent line for our top-line visually-complete metrics, which is pretty good evidence that profiling can be deployed to a modestly large cohort of users without too much overhead.
A: One of the strengths of the API that we're seeing is that it's a very drop-in solution that lets a lot of client teams focus more on perf and, in general, makes perf more accessible, by providing easy tracing without the need for local repros or endless debugging. And we've also seen strong adoption from other industry partners such as Microsoft, as was discussed earlier in the call.
A: But as with most things, we can be doing better. We found some pitfalls and gaps in the JS Self-Profiling API as it stands right now, in particular related to non-JS execution.
A: This has been an issue discussed in many contexts: attribution for events like GC, layout, style, and paint. It's difficult because, in the current version of the API, that work is indistinguishable from simple idle execution. So you may end up with many cases where a synchronous GC or a synchronous restyle is triggered, and you won't really have much visibility into it. Additionally, asynchronous rendering work is also not something that is exposed by the API.
A: And yes, as I mentioned, the API does integrate well with long tasks, in that you can cross-correlate on the server, but the interaction is still quite cumbersome. And because of the performance overhead, even though it's only one percent, we don't want to be enabling the profiler on all page loads; but we still might want to figure out, if we get a bad sample, why a long task was slow.
B: Right, thank you, Andrew. So yeah, this part is probably going to feel like a repeat for all the folks who saw the recent presentation to the group, but yeah. So, for the next slide: to solve the issues highlighted by Andrew, we'd like to propose an update to the specification to add a way to tag the samples captured by the profiler with the top-level work performed by the user agent. The idea would be to highlight the category of work performed by the user agent at the time the sample was captured.
B: So really, the goal here is to bring us closer to the capabilities of the profilers available in typical devtools today, the ones you get in the browser, where, you know, events can be categorized into different types such as scripting, rendering, GC activity, etc.
B: We are very interested in feedback and proposals as to what would be the most relevant and safe to surface in that area. So, for example, typically the scripting tag is an easy one; we could even argue that it could be optional, since the simple fact that you get a stack ID in a sample could be enough to assume that you are running JavaScript.
B: Now, on the rendering side, things are a bit more difficult to spec, as currently the event loop processing model only gives us a way to spec the entire rendering pipeline. So we could imagine a general paint marker that aggregates style, layout, and paint. However, it could be very useful to be able to split those markers into, you know, separate style, layout, and paint markers. There are work-in-progress efforts to try to add those steps into the event loop processing model.
B: On the next slide we show, I mean, that one is very simple: it's simply the modification we'd make to the IDL for a ProfilerSample, where on the last line we add an optional string to represent a marker. And on the next slide we show what it could look like in an actual trace. So here you have an example of a garbage collection event that is interpreted as JavaScript on the left side, and that you can properly identify on the right side.
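As a sketch of how such tagged samples might be consumed, here's a small tally over the proposed optional marker field. The field and its values (like 'gc') are part of the proposal, not the shipped API, and the script/idle fallback is an assumption of this sketch:

```javascript
// Count samples per category of user-agent work. Samples carrying a
// (proposed) marker use it directly; otherwise, a sample with a stackId
// is assumed to be running script, and one with neither is treated as idle.

function tallyMarkers(samples) {
  const counts = {};
  for (const s of samples) {
    const category = s.marker ?? (s.stackId !== undefined ? "script" : "idle");
    counts[category] = (counts[category] ?? 0) + 1;
  }
  return counts;
}
```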
B: So that's how you'd be able to cut through the noise that would be added by a garbage collection event. Or, as Andrew mentioned before, during what would currently appear as an idle stage, you could highlight rendering events. But most importantly, yeah: the security and privacy concerns.
B: We have to be very careful not to expose work performed on a cross-origin document, meaning that top-level user agent work may only appear in a trace if the responsible document has the same origin as the profiler. And yeah, with this new source of information, there is the potential to introduce a new side channel.
B: We haven't gotten security feedback yet on what is safe to share, and there are some concerns, for example, for events that cannot be directly attributed to an origin, such as a garbage collection event. In such a scenario, we could consider requiring cross-origin isolation in order to surface or attach the marker to a sample.
B: So, to summarize the open questions that we have: we're still looking into the split of categories that would be the most useful and safe to surface. And yeah, from our perspective, the garbage collection events are really useful, but they present the challenge of being hard to isolate by origin.
B: And, finally, it is worth considering whether that kind of data would be more relevant in the Performance Timeline API. So that is it for the embedder-state extension.
B: So, in the v1, the Long Tasks API provides a mechanism for attribution to the responsible iframe's location URL, but it's typically hard to attribute the cost of a long task to a specific script or function.
B: So there is a current discussion around a v2 implementation that targets improving long task attribution, but for today we'd like to explore whether or not the JS Self-Profiling API could be a good candidate to help with attribution.
B: In the example shown, you'd be able to attribute a long task to its specific source.
B: And yeah, if you consider the embedder-state extension, you also get the classification of work for free: you'd be able to know whether the time was actually spent doing script or rendering.
B: But this approach's main drawback is that it can be quite expensive to run. It requires web developers to record all the samples to be able to correlate them with a long task that has been observed. So this approach is going to increase memory and CPU pressure, and is more intrusive than the Long Tasks API alone.
B: So we are wondering if, somehow, we could limit sampling to when a long task is detected. In theory, no active sampler would be necessary; we would only start capturing a sample when a long task is detected.
So, for example, currently the JS Self-Profiling API requires a js-profiling Document-Policy header to enable the feature, and that could allow user agents to eagerly maintain a code map and metadata without actively sampling the main thread.
B
Potentially,
we
could
schedule
a
background
task
when
we
start
processing
a
task
on
the
main
thread
to
capture
a
sample,
50
milliseconds
after
the
start
of
the
task
and
yeah
that
is
highlighted
on
the
next
slide,
where
the
approach
is
to
you
know,
being
able
to
capture
a
single,
simple,
15
million
15
milliseconds,
later
or
after
the
start
of
a
task
and
so
yeah.
Regarding
the
security
aspect,
oh
sorry,
yeah,
I
skipped
id
and
modification
we
just
an
existing
profiler
trace
to
the
observer
event.
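A hypothetical consumer of that shape. The trace field on the long task entry is an assumed name for the proposal's attached ProfilerTrace, not a shipped API; the helper just names the leaf frame of the captured sample:

```javascript
// Hypothetical: the UA captures one sample ~50 ms into a long task and
// attaches the resulting ProfilerTrace to the longtask entry under
// `entry.trace` (assumed field name, per the proposal sketched above).

function describeAttribution(entry) {
  const trace = entry.trace;
  if (!trace || trace.samples.length === 0) return "unattributed";
  const { stackId, marker } = trace.samples[0];
  if (stackId === undefined) return marker ?? "idle";
  // Name the leaf frame of the captured stack.
  const frame = trace.frames[trace.stacks[stackId].frameId];
  return frame.name || "anonymous";
}

// In the page (browser-only): observe long tasks and report attribution.
function observeLongTasks(report) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) report(describeAttribution(entry));
  }).observe({ type: "longtask", buffered: true });
}
```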
B: So, we are really interested in feedback on the proposal. We hope it would help improve long task attribution and troubleshooting. So we have a couple of questions already, which we integrated during the presentation. But: how should we enable the feature? We're thinking that the simple presence of the js-profiling Document-Policy header could be enough to start attaching samples to a long task when it is detected.
B: And yeah, this proposal suggests capturing a single sample when a long task is detected, and that may or may not be enough. We could take the approach that this is a best-effort approach, hoping to capture the long-running script or function. And finally, the last question is the cost of it all: we're wondering if this proactive long task sampling may or may not be more expensive than simply running the profiler in parallel, since it comes with additional work on the main thread to start this task monitoring.