From YouTube: WebPerfWG call 2021 09 30 - Extending JS profiling
A: Okay, cool, I think we're good to go. So hi, I'm Andrew, I'm a software engineer at Facebook. I've been working on the JS profiling API for the last few years now (wow, time flies). Today I'm going to be chatting a little bit about some updates for the API, where we're at shipping-wise on various UAs, and some ideas for extending the API with additional context regarding non-JS execution and some of the challenges there, along with my colleague. So, cool, I'll...
A: Kick it off, then. So, just a quick recap of what the API is, since it's been a little while; a refresh is always good. This is a web-exposed sampling profiler for measuring client JavaScript execution. So think of the dev tools profiler in Safari, Chrome, or Gecko.
A
You
know
web
exposed
and
accessible
by
script,
so
sites
could
measure
their
own
like
javascript
execution
through
invoking
a
sampling,
profiler
collect
traces
and
either
inspect
them
locally
for
analysis
or
send
them
off
to
a
server
for
aggregate
analysis
and
the
real
edge
here
is.
This
helps
you
provide
like,
like
actual
like
rum
client
characteristics
on
like
real
user
like
devices.
A
The
api
is
now
shipped
in
chrome,
94
and
underwent
a
few
changes
recently
as
a
result
of
feedback
from
other
working
groups,
as
well
as
github
issues
that
I'll
go
into
pretty
quickly
and
yeah,
and
there's
a
link
to
the
repo
I'm
just
there
all
right.
A: So one pattern that we were often seeing, especially in the origin trial that we ran in Chrome, is that a lot of the logic that you wanted to profile couldn't wait for asynchronicity in many cases. And not only that, but given the way the spec defines sampling in the first place, it wasn't really necessary to enforce an asynchronous constructor.
A
So
we
got
good
feedback
on
that
and
switched
over
to
a
synchronous,
constructor
which
well
it
doesn't
block
for
the
first
sample,
ensures
that
it
is
kicked
off
by
the
time
it
returns,
and
so
the
two
parameters
that
you
provide
to
your
constructor
here
are
the
requested
sampling
interval
in
milliseconds,
as
well
as
the
max
buffer
size.
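As a rough sketch of that constructor shape (the option names follow the JS Self-Profiling spec; the feature check and the helper name are my own, so the call degrades gracefully on user agents that don't expose the API):

```javascript
// Sketch, not a definitive implementation: start the sampling profiler if the
// user agent exposes it, using the two constructor parameters described above.
function startProfilerIfSupported() {
  // The Profiler global is only present in supporting UAs (e.g. Chrome 94+)
  // when the document opted in; elsewhere we just skip profiling.
  if (typeof Profiler === 'undefined') {
    return null;
  }
  // The constructor is synchronous: profiling has been kicked off by the time
  // it returns, although it does not block waiting for the first sample.
  return new Profiler({
    sampleInterval: 10,   // requested sampling interval, in milliseconds
    maxBufferSize: 10000, // maximum number of samples to collect
  });
}
```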
A: The max buffer size is another extension that we talked about in the working group, I believe at a previous meeting, and it's designed to ensure that the developer does not go overboard and collect more samples than they could possibly imagine processing; I think it's similar to the resource timing buffer capacity in the performance timeline. All right, and here's a very simple example of how you might go ahead and collect a trace for, say, the DOM load event: you kick off your profiler in the top-level script.
A
That's
typically
where,
like
since
the
profiler,
only
measures
javascript
right
now,
you're
not
really
missing
out
on
anything.
Since
you
can
combine
it
with
data
from
the
resource
timeline
and
other
performance
entries
like
event,
timing
is
a
very
popular
companion
to
this
api
that
we've
been
seeing.
It's
particularly
useful
for
debugging,
like
input,
latency
and
so
yeah
up
here,
is
just
a
quick
example
of
how
we
can,
like
you
know,
collect
some
data
along
with
the
trace,
send
it
over.
A: It shares a time origin with the profiler samples, so you can easily cross-correlate them and present them in a unified tracing view.
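Since the clocks line up, the cross-correlation is mostly bookkeeping. A minimal sketch (the `samplesDuringEvent` helper and the plain-object shapes are hypothetical; the shared time origin between profiler samples and event-timing entries is the point made above):

```javascript
// Hypothetical helper: given profiler samples and an event-timing-like entry
// on the same time origin, select the samples taken while the event was
// being processed.
function samplesDuringEvent(samples, entry) {
  const start = entry.startTime;
  const end = entry.startTime + entry.duration;
  return samples.filter((s) => s.timestamp >= start && s.timestamp <= end);
}

// Example with synthetic data: only the sample at 15ms falls inside the
// 10ms-20ms event window.
const hit = samplesDuringEvent(
  [{ timestamp: 5 }, { timestamp: 15 }, { timestamp: 30 }],
  { startTime: 10, duration: 10 }
);
```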
Cool. And one other addition that is not on these slides is that document policy was leveraged instead of feature policy, which we discussed in the working group prior, to provide per-document warm-up semantics.
A
So
when
a
page
wants
to
profile
in
order
to
avoid
incurring
the
hit
of
you
know,
building
additional
metadata
in
order
to
unwind
stacks
serving
down
the
appropriate
document
policy
at
the
time
of
page
load
will
ensure
that
the
document
can
begin
profiling
very
quickly,
and
we
don't
incur
any
overhead
on
pages
that
don't
wish
to
profile
sweet
all
right
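Concretely, opting a document in amounts to serving one response header at load time; this is the `js-profiling` document policy from the spec:

```http
Document-Policy: js-profiling
```

Documents served without this header don't pay for the unwind metadata, and calling the profiler there will fail rather than profile.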
And here's a quick overview of the tracing format.
A: Basically, it's very similar to the V8 and SpiderMonkey tracing formats for JavaScript, in that it uses a trie representation, because common stack ancestors tend to be a pretty frequent pattern in traces. So you have your stack frames, resources, samples, and stacks.
A
The
frames
themselves
refer
to
are
one-to-one
with
ecma
functions
and
they
refer
to
items
in
the
resources
array
for
providing
script
urls,
since
those
are
also
something
that
is
duplicated.
Quite
often,
the
samples
themselves
contain
references
to
stacks,
as
well
as
the
timestamp
which
the
samples
were
taken
and
the
stacks
array
provide
linkage
between
frames
and
reference,
each
other
as
well
as
items
in
the
frames
array,
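To make the trie concrete, here is a hypothetical decoder over a synthetic trace in this shape (field names such as `frameId`, `parentId`, and `stackId` follow the format described above, but treat the exact details as an assumption, not spec text):

```javascript
// Sketch: reconstruct the full call stack for a sample by walking parent
// links in the stacks array, leaf frame first. Shared ancestors are stored
// once in the trie and reached via parentId.
function resolveStack(trace, stackId) {
  const names = [];
  for (let id = stackId; id !== undefined; id = trace.stacks[id].parentId) {
    names.push(trace.frames[trace.stacks[id].frameId].name);
  }
  return names;
}

// Synthetic trace: main() called render(); one sample landed inside render().
const trace = {
  resources: ['https://example.com/app.js'],
  frames: [{ name: 'main', resourceId: 0 }, { name: 'render', resourceId: 0 }],
  stacks: [{ frameId: 0 }, { frameId: 1, parentId: 0 }],
  samples: [{ timestamp: 12.5, stackId: 1 }],
};
```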
Cool. So yeah, as I mentioned before, we shipped in Chrome 94.
A
So
what's
going
well
so
far,
we
have
a
few
weeks
of
data
now
as
to
how
we
at
facebook
have
been
using
the
api
as
well
as
other
other
large
vendors.
So
the
initial
data
is
actually
quite
a
bit
better
than
we
anticipated
regarding
top
level
metrics,
enabling
the
profiler
for
a
subset
of
you
like
users.
That's
uniform,
slowed
our
load
time
by
less
than
one
percent,
which
is
pretty
great
evidence
for
the
fact
that
we
can
like
leverage
this
approach
with
fairly
minimal
overhead
moving
forward.
A
So
there's
a
lot
of
there's
a
lot
of
potential
here
from
a
third
party
like
rum
point
of
view
as
well,
and
we've
seen
a
very
strong
adoption
from
other
industry
partners
such
as
microsoft,
who
have
actually
started
participating
in
some
of
the
specification
discussions
as
well,
which
is
really
awesome,
but
yeah
no
with
every
good
thing
comes
some
bad.
So
what
could
be
better
so
far,
and
one
of
the
main
things
that
we've
been
observing,
is
that,
like
non-js
execution,
is
fairly
hard
to
identify
in
traces?
A
This
was
something
that
we
talked
about
back
at
tpac
2018.
Actually,
when
we're
discussing
different
categories
for
the
profiler,
but
in
practice,
what
we're
seeing
is
that
top
level
like
user
agent
work
such
as
scheduled,
like
paints,
layouts
garbage
collections
are
fairly
hard
to
distinguish
from
idle
execution,
and
we
don't
really
have
good
signal
without
resorting
to
some
of
our
older
approaches
like
spinning.
A: a rAF (requestAnimationFrame) loop, for instance, to tell whether there's work being performed or the browser is idle. Not to mention that certain things like GC activity add to the noise of our traces: they'll show that certain functions take a more variable amount of time than they otherwise would if they trigger a GC. And yeah, as I said before, any asynchronous rendering work is not measurable and is very hard to track, unlike the rest of client script.
B: All right, hey. So yeah, to start with the issues highlighted by Andrew: we're proposing to update the specification to add a way to tag the samples captured by the profiler. So, next slide, Andrew, please.
B: So the idea is to highlight the category of work performed by the user agent at the time the sample was captured. Really, the goal here is to bring us closer to the profilers that are available today in devtools, where you can see how events are categorized by type, such as scripting, rendering, or GC activity. And so this brings us to: what should the marker candidates be?
B
And
here
yeah,
the
main
issue
here
is
integrable
interoperability.
We
we
want
a
marker
to
have
the
same
meaning
across
tracing
and
user
agent,
and
it's
it's
not
as
easy
as
it's
on
and
we're
very
interested
in
feedback
in
this
area.
So
we
have
a
couple
of
candidates
in
mind
like
of
course,
the
scripting
tag
is
an
easy
one.
So
we
could
argue
that
this
one
could
actually
be
optional,
meaning
having
no
tags
and
a
stack
implies
scrambled
scripting
activity
as
it
does.
Today.
B
We
could
envision
gc
tag
to
represent
a
post
due
to
a
garbage
collection
task,
or
we
could
also
imagine
a
parser
tag
to
a
product
to
show
a
passing
activity.
But
then
the
question
is:
should
it
include
the
html
passing
or
just
be
limited
to
javascript
passing,
or
should
we
even
expose
this
marker
on
the
rendering
side?
B: Here it's a bit more difficult to spec. The event loop processing model only gives us a way to spec the entire rendering pipeline, so we could imagine a tag aggregating style, layout, and paint in a single marker. However, it would also be very useful to be able to split those markers, and there are work-in-progress efforts to add those steps to the event loop processing model.
B
On
the
next
slide.
We
can
see
so
the
modification
we
propose
to
the
idl,
so
we
suggest
adding
an
optional
string
to
the
profiler
sample
ideas
that
were
written
from
the
api
and
for
an
example
of
a
trace
next
slide
yeah.
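A sketch of what that could look like in WebIDL (the member name and the sample dictionary's exact shape are illustrative of the proposal as described in the talk, not finalized spec text):

```webidl
dictionary ProfilerSample {
  required DOMHighResTimeStamp timestamp;
  unsigned long long stackId;
  // Proposed optional string: the category of UA work at sample time,
  // e.g. "gc", "layout", or "paint" per the candidates discussed above.
  DOMString marker;
};
```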
So here we highlight, just as an example, the noise that is added through a GC event, where on the left side you see the trace that is returned by the API
B: today, where you have a longer trace than you would expect, and on the right side you are able to immediately mark this sample as having been impacted by a garbage collection. The same goes for, for example, a synchronous reflow, or simply layout or rendering activity outside a scripting event. And there are security and privacy concerns for this API: first, we should not expose work performed on a cross-origin document, meaning that only top-level user agent work may appear in a trace.
B
Yeah
and
finally,
should
we
even
surface
this
information
through
the
gsl
profiling
api?
There
are
a
couple
other
candidates
which
might
be
better
places
such
as
the
performance
timeline,
api
and
yeah.