From YouTube: WebPerfWG Design call, May 2nd 2019
A: Yes, it's currently planned for the week of June 10th. I don't yet have confirmation on the venue, which is the reason I didn't send an update to the list. I will hopefully have confirmation on that very shortly, and I'll update the list as soon as I possibly can.
G: Basically, it's someone on my team working on scheduling. I think over the next couple of design meetings we'll be leading up to this, but we wanted to start introducing pieces of where our thinking is. This is a first nugget; we don't want to pile on the entire thing in one shot, so we're starting with some of the underlying concepts as a starting point.
H: Thanks. The goal of this is to talk about what we're working on for a native scheduling API, and in particular the big underlying problem we're trying to solve here. It's not the only problem, there are more, and we'll get into that; we also want to give everybody an update and share progress to date, the plan, and where we're going with this.
H: So to start, there are two claims that motivate what we're building here for an MVP, a v0, and everything I'm going to talk about today is all around this notion of priority. User-space schedulers like React's, Maps', and others that have been built in user space are limited by their boundaries. What I mean is that they have a notion of priority within their schedulers, but its scope is limited; it can't escape out of that boundary.
H: This is prioritized, but the spec gives leeway here: if there's a rendering opportunity, render. So we might want to throttle this if rendering is taking too long or happening too often, that kind of thing. But really what I'm focused on is number three, this big blob of everything else. This includes script, like postMessage and setTimeout, fetch completions, IDB completions, script tag parsing, HTML parsing, and internal work that the browser needs to do, garbage collection, etc.
H: Sure, but at least for Blink this is what we have. If you're saying some browsers have even less prioritization, that just exacerbates the problem. In Blink we do have multiple task queues, and the spec does give us leeway to schedule between different task queues, but for a single document it's one logical task queue: things run first-in, first-out. So I want to zoom in a little bit on this bucket and talk about how user-space schedulers work.
H: So React's scheduler uses postMessage to run its tasks, and within a React scheduler task there's a notion of priority. Each of the tasks you see here would be prioritized, and it runs them in a specific order. But to my point in the motivation, these priorities are meaningless outside of this box.
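The boundary problem can be modeled in a few lines. This is a toy sketch, not React's actual scheduler: tasks carry an internal priority, but the host event loop only ever sees one opaque callback, so those priorities cannot be ordered against anything outside it.

```javascript
// Minimal model of a user-space scheduler. Tasks have a priority, but
// the host only runs one opaque "drain the scheduler" callback, so the
// priority cannot escape that boundary.
class UserSpaceScheduler {
  constructor() {
    this.tasks = []; // { priority, fn }; lower number = more urgent
  }
  post(priority, fn) {
    this.tasks.push({ priority, fn });
  }
  // In a browser this would be triggered via postMessage; the browser
  // runs it FIFO alongside every other message, timer, and fetch task.
  runHostTask() {
    this.tasks.sort((a, b) => a.priority - b.priority);
    const results = this.tasks.map((t) => t.fn());
    this.tasks = [];
    return results;
  }
}

const s = new UserSpaceScheduler();
s.post(2, () => "low");
s.post(0, () => "urgent");
s.post(1, () => "normal");
// Internal order respects priority, but a fetch completion or timer
// queued by the browser can never be ordered relative to these.
console.log(s.runHostTask());
```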
H: As I mentioned at the beginning, there are other problems as well. There's a lack of a unified API: we have disparate scheduling APIs, requestIdleCallback, requestAnimationFrame, and we don't have a unified API surface that makes this kind of thing easier. That's something else we want to address. And on when to schedule frames, there's been talk of frame rate.
H: We still want a separation of scheduling API priorities from what happens internally. The point here is that you may have multiple documents sharing the same thread, the main thread, and if you have a lower-priority frame, for example a background page, then high priority in that background page may not be high priority overall. This is something we're considering as we build this API.
H: What we're proposing for a model is to introduce new priorities; this is what we're planning to experiment with. We want to break the "everything else" bucket up into levels below and above, so we'd introduce a high and a low priority around default, with existing work getting scheduled at the default priority.
H: We have a level above high, "immediate", not quite microtask timing, and we're still working out some of the details, but it's something that runs as soon as possible. This is for urgent work, like render work that needs to be done prior to rendering the current frame. There's also a use case for something below rendering but above high priority, which would be used for something like input handlers. We're still working through that, again.
H: A lot of this is still preliminary and we're working out the details, but we've seen cases, React for example, where they want to process input right away but also want to yield to rendering to keep animation smooth. So there's a use case for a very high priority that's below rendering but is only allowed to be used by input handlers. That's something else we're planning to explore.
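As a rough sketch of the model being described (the priority names and the postTask shape below are illustrative assumptions, not the actual proposal): a fixed set of static priority levels, drained strictly in order, FIFO within each level, with existing work landing at default.

```javascript
// Hypothetical priority levels discussed: "immediate" (as soon as
// possible, before the next render), "render-blocking" (e.g. input
// handlers, below rendering but above high), then high, default, low.
const PRIORITIES = ["immediate", "render-blocking", "high", "default", "low"];

class NativeSchedulerModel {
  constructor() {
    this.queues = new Map(PRIORITIES.map((p) => [p, []]));
  }
  postTask(fn, { priority = "default" } = {}) {
    if (!this.queues.has(priority)) throw new RangeError(priority);
    this.queues.get(priority).push(fn);
  }
  // Drain strictly by priority level, FIFO within each level.
  drain() {
    const ran = [];
    for (const p of PRIORITIES) {
      for (const fn of this.queues.get(p)) ran.push(fn());
      this.queues.set(p, []);
    }
    return ran;
  }
}
```

In this model, a task posted at "immediate" always runs before anything at "high", even if the high-priority task was posted first.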
H: There are a significant number of challenges in building a system like this. For the next few slides, there's a very in-depth document linked at the end that goes through this in much more detail; it would be a whole other topic for a deep dive. But there are three big questions here. One is: what do we want to prioritize? There are a lot of tasks that run; the scope of scheduling is huge.
H: Throughout the history of scheduling, many such systems have been built that address this in different ways, but it's usually based on application specifics. For example, Maps uses static priorities, which works very well for them; React uses earliest-deadline-first; and there are others as well.
H: There's another question about how much isolation and control we give the browser over scheduling, and there are several models here that give varying degrees of control. Again, there's a deep-dive document attached at the end that you can take a look at, but as an example of what I'm talking about: you could design a system where there's a single set of global task queues, and everything is posted to those queues.
H: Everything has the same exact priority system, and scheduling is rather trivial in that case. Or you can design a system that is a little more isolated, where app code, framework code, and third-party code each have their own set of task queues and they're isolated. There's still a shared notion of priority, but this gives the browser control over how it schedules.
H: So again, the challenge is figuring out the best model to use. To wrap up: what we're proposing to start with is the simplest. We want a single set of static priorities, the ones from the previous slide. The observation is that different apps have different requirements, and once you use a dynamic policy you're forcing apps to adapt to it; it's not clear it's always going to be the best fit.
H: It's not clear which system generalizes, and we don't really understand to what degree starvation is going to be a problem. So our plan is to start simple and evaluate after the initial round of experiments. We're also planning to start with the simplest model, a unified logical set of prioritized task queues, so the browser is not going to do any sort of fancy isolation-type scheduling to begin with.
H: Also for the MVP, I didn't talk about this before, but there's another problem within scheduling that happens when you yield. Picture that one big FIFO model: the task at the head is running and decides to yield, because it wants to give the browser control, say to render, to process input, or to run any higher-priority work. If it reschedules itself, it's stuck at the end of the queue. This can significantly impact the latency of the app, and it disincentivizes yielding in the first place.
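The yielding penalty can be sketched concretely. In this toy model (assumed semantics, not spec text), a continuation is queued at the head of its queue rather than the tail, so a task that yields is not stuck behind everything that arrived while it ran.

```javascript
// FIFO queue with continuation support: postContinuation() re-queues at
// the head, so yielding doesn't cost a full trip through the queue.
class QueueWithContinuations {
  constructor() {
    this.tasks = [];
  }
  post(fn) { this.tasks.push(fn); }            // normal task: tail
  postContinuation(fn) { this.tasks.unshift(fn); } // continuation: head
  drain() {
    const ran = [];
    while (this.tasks.length) ran.push(this.tasks.shift()());
    return ran;
  }
}

const q = new QueueWithContinuations();
q.post(() => {
  // First task yields mid-work; its continuation runs before "other",
  // which was already waiting behind it when it yielded.
  q.postContinuation(() => "part2");
  return "part1";
});
q.post(() => "other");
```

With plain FIFO rescheduling, "part2" would have landed after "other"; here the continuation resumes first.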
H: So we want to have some sort of continuation support built in, so that tasks can yield to higher-priority work but then continue. Also, in the first version we'd have delayed tasks, and we'd have support for bulk operations; if you're at all familiar with the original proposal, this is the idea of virtual task queues, so you can do things like flush a task queue or clear a task queue. So, the roadmap going forward for us: internally, I have a proposal for the API.
H: It's still kind of young and we're working through some of the specifics, but we're going to get this out to partners and everyone else as soon as possible. The plan is to publish the updated explainer as soon as we can, then start implementing this MVP behind a flag in Chrome and have our partners, Maps, React, and Airbnb specifically, start using it and give feedback. And finally, there are a couple of public docs that anybody can look at, and I invite you all to comment.
G: One thing I would add that's relevant to the roadmap: based on both our data and what we keep hearing from our partners, we find that just creating a task-posting API is kind of insufficient, given that the main thread is often very congested with script tags coming in, fetch response handling, and XHRs coming back.
G: So I think it's mostly about juggling priorities, or factoring in the deadlines that apps would like to own. An app has a bunch of deadlines, like "you're rendering search results, that should happen in 100 milliseconds," and the scheduler would make sure to break everything up so that you keep meeting the 10 millisecond target for frames and the 10 millisecond target for typing, keeping user input responsive at the same time. And then there's a whole bunch of APIs wrapped around that.
H: Decreasing latency overall. I guess the point I was going to make here, and didn't, is this: FIFO gives you one order, and it's somewhat random, because it depends on the order in which things happen, sometimes on the order in which completions happen, like when network requests come back. But there's a specific order that tasks run in, and that order affects latency, other metrics, and the user-facing experience.
H: Page load metrics too. There's a lot of stuff that happens early in load, some of which affects things like First Contentful Paint, and better ordering of that initial work, there's a lot of work going on there, can help.
G: Yeah, for loading it's primarily about parsing delay and Time to Interactive: users are trying to interact with the page, but there's a whole bunch of tasks that aren't really prioritized. It really matters if you're going to meet user expectations. So yes, ultimately the user expectation covers both interaction jank as well as visual jank.
G: Yes, that's back in the original explainer, which I guess is all we have published so far. Now we're focused more on what we're actually building, on the real-world problems. But yes, the explainer should motivate this, and does right now if you go to the GitHub link.
D: I guess this is a connected question, just to bring it back: in order to validate this, we need a clear explanation of what problem we're trying to solve. For example, Maps must have an articulation of what specific problem they want solved, and Airbnb as well; those would be the two concrete examples.
G: So I think we can take a follow-up here: for each of these three partners, come up with the specific scenarios they care about and the metrics they would use, because it's hard for us to fabricate requirements for each of these cases. That's why we're working so closely with them, so we can leverage their success criteria and their regression metrics.
H: At least dynamic priorities will be an option. Just thinking through your question here: I proposed starting with static priorities, so the priority inversion that would happen, if we're talking about high-priority work getting stuck behind low-priority work, is only because there was no high-priority work to run when we chose to run the low-priority work. You can get other priority inversions, and in the long-read doc referenced here I do talk about priority inversion in the case of timeouts.
H: One option we talked about is introducing a notion of timeout here, like requestIdleCallback has. The problem is that you can introduce priority inversions, and I was a little bit skeptical of introducing timeouts for that reason: if you have a lot of low-priority work that all times out at the same time, and you then schedule high-priority work, the high-priority work is now stuck behind all of that timed-out low-priority work, creating an inversion.
C: Many people have approached us and said, oh yeah, we want to change the priority of things dynamically, because maybe this work is low priority right now, since the user is interacting with some widget, and then later it becomes important. That's very common, and what usually happens is they block on that work, or they bump the priority of something previously scheduled and see what happens.
H: I don't know if I mentioned this, but the MVP does include a way to change priorities, so on top of version zero you can build dynamic priority systems in user space. If you have low-priority work that suddenly becomes high priority, you can change its priority.
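A minimal sketch of what building dynamic behavior on top of static priorities could look like; the class and method names here are hypothetical, not the proposed API.

```javascript
// Re-prioritization on top of static priority queues: low-priority work
// that suddenly matters is moved to the high queue before it runs.
class Reprioritizer {
  constructor() {
    this.queues = { high: [], default: [], low: [] };
  }
  post(id, priority) { this.queues[priority].push(id); }
  setPriority(id, from, to) {
    const q = this.queues[from];
    const i = q.indexOf(id);
    if (i !== -1) {
      q.splice(i, 1);
      this.queues[to].push(id);
    }
  }
  // Order tasks would run in, if drained now.
  drainOrder() {
    return [...this.queues.high, ...this.queues.default, ...this.queues.low];
  }
}

const r = new Reprioritizer();
r.post("prefetch-tiles", "low");
r.post("analytics", "low");
// User starts panning the map: tile work becomes urgent.
r.setPriority("prefetch-tiles", "low", "high");
```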
G: One example of this that comes up is a Maps use case. On Google Maps, as the user is panning the map, it's really important that the highest-priority thing is fetching those map tiles and preparing them for rendering. But then the user gesture can change, to something like dragging a pin, and then you want to reprioritize all the tile work.
G: So you'd need to provide a way for the developer to dynamically indicate what the highest-priority thing is now. I don't know if you want to get into any more details. (Could you repeat that last part?) Do you want to add any more details, for example on the use of virtual task queues and updating priorities?
G: And it's sort of hard to address that in the general case. That's why I think it might be nice to add a couple of scenarios to the explainer, for example the Maps tile gesture change, or a Facebook feed where things that were off-screen are now the highest-priority thing to render. Having a few different examples and showing how the API handles those cases would address that.
C: Basing decisions on everyone else's priority is really tricky; it's a known problem. As one of the users of the scheduler, you don't know how high your priority is relative to everything else that's happening. Everyone has their own idea of what "high" means in their own space, but there's no coordination between them. iOS and macOS, for example, have ways to address this.
C: The way this problem is addressed there is with the concept of QoS classes, which specify, for example, whether a thread is user-interactive or user-initiated. User-interactive is the highest priority, followed by user-initiated, then background threads and so forth, and these threads dynamically adjust their priority based on whether the containing application is in the foreground, or whether there are other user-interactive or user-initiated threads. Then there's another system called vouchers, by which a high-priority thread can basically delegate its priority.
C: There's a lot going on there to mitigate this problem of too many things having the same priority, and there are a lot of limitations with it, around delegating priority and defining dependencies. I think it really comes down to defining dependencies: the most efficient scheduler is one that knows what dependent work needs to happen and when it wants to run.
D: Quick note: we do need to move on to the next topic, but I think I saw Ben trying to jump in a few times. Ben, do you want to go?

Yeah, I just wanted to say I really appreciate the context given by the Google Maps panning example, because previously you had just talked about "our data" and "our partners." Something concrete like the Google Maps panning fetch info was super helpful, and I would love to see more of this developed.
C: Before we move on, one thing: could you put whatever you had on this current slide onto the GitHub page somewhere? The document there is now outdated, and it's really hard to follow what is currently being proposed. If you could update the GitHub page, that would be great.
K: All right, can everyone see this? All right, cool. For some context: I'm working on a JavaScript profiling API, so that you can better understand JS execution profiles in the wild; you can find more at the link there. We've been focusing a lot lately on making sure we nail down the cross-origin concerns, and through that we've been evaluating the feasibility of the approach via a Blink implementation, and so far things look pretty good.
K: So far we can get the incumbent realm from frames captured on the stack with the data the VM provides us, handle the cross-origin script situation, and leverage a lot of the existing infrastructure. Today I'm mainly going to talk about some ambiguities and issues with the original proposal that we found when actually doing the implementation, and solicit some advice from you all as to what we should do next.
K: The main issues we've seen so far: first and foremost, the issue of building a symbolization map before you start profiling, that is, actually getting the data you need to map program counter offsets over to function names and source location information. This turned out to be fairly expensive, since it's not done by default when you're just running an arbitrary script; it requires building up a code map, which isn't maintained by default mainly for performance and memory reasons.
K: On the trace format and sizes, they were around what we expected from our initial survey of Chrome tracing data. And lastly, I want to talk about the issue of web tests, which are a little bit challenging given that the original proposal does not mandate any kind of SLA for sampling; it's fairly intractable for an implementation to figure out whether or not a given sample is correct if you're not guaranteed that you'll ever receive a sample.
K: Okay, so starting out on symbolization: there are two main approaches a VM could take here. It could always build up the code map while running JavaScript, but that's not ideal considering it takes up memory and has extra CPU time overhead; if we knew profiling was always going to be performed, this might actually be a pleasing solution. Or we could decide to build up the code map lazily.
K: Lazily makes things a little bit tricky, since if we block on a heap operation on the main thread, we could potentially leak data about the size of the heap if there are other frames involved, and it's also very slow. On facebook.com, doing this with V8's code map populator took around 200 to 300 milliseconds with a full heap on a high-end machine, which is fairly unacceptable considering that one of the main uses for the profiler is instrumenting interactions.
K: So yeah, as I said before, building up that code map online as code is being compiled is great, because then we can start up a profiler almost instantly. If we knew we were going to profile, this would be acceptable; otherwise you really just don't want to slow down the web for anyone who isn't going to profile. So I was thinking about signaling this through feature policies: declaring a profiling feature policy that is disabled by default.
K: If the feature policy is present, profiler startup will be super fast. And the whole notion of feature policy being restricted to certain frames fits kind of nicely with how we've defined the profiler to be context-specific. The main issue with this approach is that it, I would say, pretty significantly increases the overhead of actually deploying the profiler on your site. So, any feedback on using feature policies for determining this sort of thing?
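The idea would be a response header along these lines; the `profiling` token is hypothetical, and the exact syntax would depend on how the Feature Policy spec defines the feature:

```
Feature-Policy: profiling 'self'
```

With a policy like this, the engine knows before compiling any of the page's script that it should maintain the code map (or at least that profiling may be requested), so profiler startup can be cheap.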
D: I'm trying to understand the flow here. You, as a site operator, or I guess webmaster, would set the policy that would enable, maybe, just using the profiler? Why would you need to do that; why is a feature policy necessary instead of just having something you enable from JavaScript?
K: Those limitations exist. I talked with Marcus a little bit about this in the past; I forget what we decided on.
L: It probably is possible to write your JavaScript engine in such a way that this overhead is almost zero, but ours is not written that way, as far as I know. Whenever we profile, we tell the JavaScript engine to throw away all the JITed code and start recompiling the functions, building up a code map as it does that. So we would have the same problem.
L: I mean, it does make sense to have a switch before any JS code for the website gets compiled, and the header would be such a place.
A: We have discussed header-based registration for various performance entries in the past, and one of the reasons was that if we had entries that were particularly expensive, we would want to enable collection of those entries only in environments where they're needed. So this seems to fall into the same bucket.
N: I think the one potential difference is that when we've talked about header registration, you would almost instantiate the performance observer via something in the header, depending on the proposal. Here we're just saying you can or cannot start measuring right now, which feels subtly different. But ideally we would be able to find a common way of reasoning about both cases.
K: Sure. I'll take a look at the other performance-timing-related headers; I'm not particularly married to using feature policies here. I just figured we could leverage existing infrastructure where it would be useful, but a simple header is likely to be fine as well. Cool, moving on to what we've gathered on the trace format so far. For a quick recap, the trace format used in the proposal is a trie format that coalesces common sub-stacks as well as duplicated frames.
K: Fundamentally it's a form of structural compression, and one of the big questions was whether we actually need to gzip it, or potentially use a binary format, to make it more tractable to send over the web. From our initial data we found that URLs were a super common source of redundancy, so that was some pretty low-hanging fruit: we added a resources table to the definition of the trace format, which reduced our trace sizes by around 30%. A pretty easy win for everyone there.
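A sketch of the trie-style format with a resources table as described; the field names here are assumptions, not the spec's exact schema. Frames are interned once, each frame references an index into the resources table instead of repeating its script URL, and stacks share common prefixes through parent links.

```javascript
// Build a structurally-compressed trace from raw samples, where each
// sample is { t, stack: [[functionName, scriptUrl], ...] }.
function buildTrace(samples) {
  const resources = [], frames = [], stacks = [];
  const resourceIdx = new Map(), frameIdx = new Map(), stackIdx = new Map();

  const internResource = (url) => {
    if (!resourceIdx.has(url)) resourceIdx.set(url, resources.push(url) - 1);
    return resourceIdx.get(url);
  };
  const internFrame = (name, url) => {
    const key = name + "|" + url;
    if (!frameIdx.has(key)) {
      frameIdx.set(key, frames.push({ name, resourceId: internResource(url) }) - 1);
    }
    return frameIdx.get(key);
  };
  // Walk the stack outermost-first, reusing shared prefixes (the trie).
  const internStack = (stackFrames) => {
    let parentId = null;
    for (const [name, url] of stackFrames) {
      const key = parentId + "|" + internFrame(name, url);
      if (!stackIdx.has(key)) {
        stackIdx.set(key, stacks.push({ frameId: internFrame(name, url), parentId }) - 1);
      }
      parentId = stackIdx.get(key);
    }
    return parentId;
  };
  return {
    resources, frames, stacks,
    samples: samples.map(({ t, stack }) => ({ timestamp: t, stackId: internStack(stack) })),
  };
}

// Two samples sharing the main -> render prefix: "app.js" is stored
// once in resources, and the shared prefix once in stacks.
const trace = buildTrace([
  { t: 0, stack: [["main", "app.js"], ["render", "app.js"]] },
  { t: 10, stack: [["main", "app.js"], ["render", "app.js"], ["paint", "app.js"]] },
]);
```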
K: We wouldn't necessarily need to send all of that. We could conceivably have an approach where we provide a list of features, properties of samples, that we want inside a trace: some consumers might not care about timestamps, some might not care about line and column data, for instance. So we figured that providing a base JavaScript object format that is still structurally compressed, while still being mutable before sending to the server, might be the way to go.
K: Sure, yeah. Right now Facebook is using a Wasm build of Snappy compression to send payloads to the server, and we've actually had some pretty good success with that; it's fairly low overhead. But obviously having a browser API for compression would be great, and there's a big use case for resource timing there as well. Cool, so here's some data that we gathered.
K: This is mostly a worst-case scenario: facebook.com ships a ton of JavaScript, so this is worst case in more ways than one. 10 milliseconds is generally what we want to be the lowest supported sample rate, or the lowest reasonably supported sample rate, at this time. So that's why. Cool, all right.
K: Lastly, I just want to talk quickly about testing the API. As I said before, the spec doesn't really provide any guarantees as to whether a sample will be taken within a given interval of time, mostly because the way those samples are triggered really should be implementation-specific, and implementations that sample in true sampling fashion are typically driven by timing sources or interrupts, both of which can be inconsistent depending on the system scheduler. Lots of scheduling problems today.
K: As a result, that's why we went with a non-normative note. However, that does make it fairly intractable to test. If you want to write a web platform test for this, say you run a profiler for two seconds, you still couldn't be guaranteed by the spec that you'd get a single sample.
K: Yeah, that's right. I think that does sound like what we probably want to do here: go with the WebDriver route. It's still kind of unfortunate that we can't test directly what we want to test, but I feel like most implementations are going to share their implementation across some sort of "take sample" hook and the actual sampling that occurs. Cool, so yeah.
K: That sort of answers that. Going back to that point: if we wanted to have this manual "take sample" implementation, it would make web platform testing that, say, a given function is included in the trace very challenging, since there's obviously no way to execute script from those callbacks. Does anyone have any ideas for this?
D: One of the things, first of all, is that maybe the Reporting API is something this could leverage, and one of the primitives in there that we thought about is rate limiting. The actual concern is: is it unbounded, and would it have undue impact on the user? So, being able to reason about how much data can be transmitted, putting a cap on it, for example, as with reporting and network error logging.
D: We have a policy of doing at most one upload every 60 seconds, and that upload is limited to a certain number of bytes. That way you can at least reason about the potential impact of enabling such a feature. So maybe that's something to explore in that area, if that makes sense.
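The policy described (at most one upload per 60 seconds, byte-capped) can be sketched as a small rate limiter; the class and the 64 KB cap are illustrative, not part of the Reporting API.

```javascript
// Upload rate limiter: rejects payloads over the byte cap, and rejects
// any upload within `intervalMs` of the previous accepted one.
class UploadRateLimiter {
  constructor({ intervalMs = 60_000, maxBytes = 64 * 1024 } = {}) {
    this.intervalMs = intervalMs;
    this.maxBytes = maxBytes;
    this.lastUpload = -Infinity;
  }
  // Returns true if a payload of `bytes` may be uploaded at time `now` (ms).
  tryUpload(bytes, now) {
    if (bytes > this.maxBytes) return false;                 // over the byte cap
    if (now - this.lastUpload < this.intervalMs) return false; // too soon
    this.lastUpload = now;
    return true;
  }
}

const limiter = new UploadRateLimiter();
limiter.tryUpload(1024, 0);      // → true  (first upload)
limiter.tryUpload(1024, 30_000); // → false (within the 60s window)
limiter.tryUpload(1024, 61_000); // → true  (window elapsed)
```

This gives the site (and the browser) a worst-case bound on bytes per minute, which is what makes the impact of enabling such a feature easy to reason about.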