From YouTube: WebPerfWG TPAC 2020 meetings - October 20 - part 3
A
Yes, okay, awesome! So we're a bit past time, so I'll try to make this brief. In the past we talked about bfcache reporting, and we already decided a few things. The motivation here is to align metrics with user experience, and bfcache is part of user experience. As Michael pointed out, when Chrome implemented bfcache, some navigations that were previously cached navigations disappeared, and that goes against the notion that the best and fastest navigations are the ones that never really happen.
A
So
we
want
to
align
metrics
with
user
experience
and
therefore
we
want
to
include
bf
cache
navigations.
As
part
of
the
entries
that
we
report,
we
also
decided
in
the
past
to
avoid
from
overriding
metrics
for
the
bf
cache
navigated
page,
which
also
includes
all
the
previous
metrics.
A
So we decided to avoid overriding them and instead fire new entries of some sort. Similarly, we decided not to mess around with having multiple time origins for these multiple navigations on a single document, and to just have a single time origin that correlates to the very first navigation that happened on that page. So we want to fire new entries, and I played around with a few ideas for what that may look like.
A
The first option, which seemed like the most obvious one, is to just fire another navigation entry. Navigation entries live in an array that, up until now, only had a single item in it, and it seemed nice to reuse that array and add more navigation entries to it, with the bonus of not adding yet another performance entry type, of which, you know, we already have a bunch. The part that I was slightly worried about, and that I think we discussed last time, was compatibility, and whether this is something that we can actually do, because we've been shipping navigation timing entries as a single item in that array.
A
We have shipped it that way for a long while, and there is some concern that websites have come to rely on the fact that there is only one item in that array and will break in new and exciting ways as a result of us adding more entries to it. To answer that question, I played around with a Chromium prototype and ran it over a benchmark of the top 100,000 Alexa sites, and it turns out that 0.125% of those actually touch the second entry in that array.
A
When
it's
there
and
that
is
2.3
of
overall
navigation
timing
users,
then
I
dove
into
a
bunch
of
examples,
and
so
the
initial
numbers
were
somewhat
discouraging.
Also,
a
lot
of
those
websites
are
fairly
known.
Websites
that
I
suspected
would
have
a
lot
of
usage
beyond
just
the
number
of
websites,
but
looking
at
the
examples,
none
of
them
relied
on
there
being
exactly
one
item.
A
At
least
none
of
the
examples
that
I
actually
found
and
the
examples
seem
to
just
loop
over
all
the
entries
in
the
array
and
therefore
when
it
has
only
a
single
entry,
they
loop
over
one
entry
when
it
has
more,
they
loop
over
those
as
well,
either
as
a
filter
or
as
a
for
loop,
but
generally
they
seem
to
be
just
collecting
metrics
from
all
those
entries,
which
means
that
if
we
were
to
add
a
navigation
entry
to
that
array,
that
will
impact
the
metrics
by
including
those
bf
cache
navigations.
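The looping style described here can be sketched as follows. This is a hypothetical analytics snippet, not code from any of the surveyed sites, and the plain objects stand in for the `PerformanceNavigationTiming` entries that `performance.getEntriesByType('navigation')` would return in a browser:

```javascript
// Hypothetical analytics code in the iterating style described above: it
// loops over every navigation entry rather than assuming entries[0] is the
// only one, so extra bfcache entries would simply flow into its metrics.
function collectResponseEnds(navigationEntries) {
  return navigationEntries
    .filter((entry) => entry.responseEnd > 0) // bfcache entries report 0 here
    .map((entry) => entry.responseEnd);
}

// Plain objects standing in for PerformanceNavigationTiming entries.
const entries = [
  { startTime: 0, responseEnd: 812 },   // the original navigation
  { startTime: 60000, responseEnd: 0 }, // a bfcache restore: network fields are 0
];
console.log(collectResponseEnds(entries)); // logs [ 812 ]
```

Code like this keeps working when a second entry appears; only code hard-wired to `entries[0]` being the sole item would behave differently.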
A
That is, in the statistics that those websites collect. On the one hand, it will result in a metrics change, but at the same time I think this is the exact metrics change that we want those extra navigations to cause. So on the compatibility question, I started out pessimistic but ended up optimistic, and I think this can fly, and this can be something that browsers can actually ship.
A
The load time of the resource and all those other things that are relevant for an actual navigation will just be zero, or very close to zero, in a bfcache navigation timing entry. Actually, not even very close to zero, but exactly zero, because we don't really have a load time, since load is not retriggered, and we don't have a DOMContentLoaded time. So a lot of those fields will just be zero-valued, which seems a bit wasteful and potentially not great.
A
So
I
took
the
liberty
of
trying
to
see
how
someone
would
filter
paints,
for
example,
per
navigation
based
on
that
kind
of,
like
that
sort
of
api
shape
and
from
the
codes
perspective,
it
seems
fairly
reasonable
and
we
can
use
just
the
filter
methods
on
arrays
to
perform
any
kind
of
filtering
on
existing
entries
and,
in
this
case,
based
on
the
navigation,
start
start
and
end
time
for
the
various
bf
cache
entries
or
non-bf
cache
entries.
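A sketch of that filtering, under option one's assumption that bfcache navigations append further entries to the navigation array. The function name and the plain entry objects are illustrative, not part of any spec:

```javascript
// Bucket paint entries by navigation: a paint belongs to navigation i if it
// happened at or after that navigation's start time and before the next
// navigation's start time (or any time, for the last navigation).
function paintsForNavigation(navEntries, paintEntries, index) {
  const start = navEntries[index].startTime;
  const end =
    index + 1 < navEntries.length ? navEntries[index + 1].startTime : Infinity;
  return paintEntries.filter((p) => p.startTime >= start && p.startTime < end);
}

const navs = [{ startTime: 0 }, { startTime: 60000 }]; // initial + one bfcache restore
const paints = [
  { name: "first-contentful-paint", startTime: 1200 },  // first load
  { name: "first-contentful-paint", startTime: 60120 }, // refired after restore
];
console.log(paintsForNavigation(navs, paints, 1).length); // 1
```

The "end" of a navigation is simply the start of the next one, which is why a later speaker notes that the entries themselves don't need a duration.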
A
The second option that I considered is to introduce a new type of navigation entry that doesn't include all those zero values, but otherwise includes the same information that is included in the first one, which is mostly the start time and the timestamp that we need from that navigation. A code example using that kind of entry would look very similar to this one.
A
Only that we won't be looking at the navigation array, and we'll have to somehow special-case the first-load case versus the bfcache navigation cases.
A
A third option that I thought would be neat, but then somewhat reconsidered, is to have pointers to all the relevant entries just hanging off of this new entry. So we'd have an array with all the paints, all the LCP candidates, everything else, just hanging off of that entry. That seemed ergonomic enough until I realized that at the time the navigation entry is created, we don't necessarily have all those times yet, so we would either have to resort to polling, similar to the navigation timing
A
entry of the page, but that's not great, or we'll have to do something different.
A
And
yeah
and
the
fourth
option
is
to
have
some
sort
of
a
performance
observer
entry
list
hanging
off
of
the
new
navigation
entry,
where
we,
basically,
the
caller,
can
set
performance
observers
individually
on
each
one
of
those
navigation
instances
and
get
entry
like
use,
get
entries
get
entries
by
type
or
by
name
on
those
individually
on
top
of
performance
observers
registered
on
the
main
page,
and
I
guess
for
me
it's
it
has
a
bun,
it's
weird
and
it's
unclear
how
that
fits
with
other
observers
on
the
page
and
I'm
not
sure
what
the
benefits
are.
A
So
essentially,
after
thinking
about
all
those
different
options,
I
somewhat
landed
with
a
a
favorite
as
option
one
that
basically
because
it
is
compatible
and
because
we
can,
we
can
it
it
seems
to
fit
nicely
with
the
current
api
shape.
We
just
add
a
new
navigation
timing,
entry
for
each
one
of
the
bf
cache
navigations,
and
then
the
question
arises.
A
One thing is that backgrounded tabs that get foregrounded may also, to some extent, be something we want to report on in the future, and that can also fit into that model. But I don't think we need to dive into that now, because that part may be more controversial than what we need to discuss here.
B
To me, it seems pretty ergonomic to reuse the existing navigation timing entry array.
C
Thanks. In your testing, when did folks actually read the list of navigation timings?
C
So if you're using a performance observer with buffering and so on, and you all of a sudden generated a second entry, it would fire a second time. So even if you just wrote a loop over it, or even if you just assumed only the first entry, that same observer would see that second, mostly zero entry, if you know what I mean. So is it on page unload that you would actually get an array of multiple entries?
A
So
I
haven't
done
that
kind
of
an
analysis.
I
looked
at
the
code
rather
than
debugged
those
sites.
That
could
indeed
be
interesting
to
me.
A
The
point
that
you're
making
is
a
good
one
and
is
one
that
those
existing
sites
are
even
less
likely
to
see
those
bf
cache
navigations
than
I
thought
they
would,
because
those
bf
cache
navigations
are
likely
to
happen
after
the
point
in
time
right
now
that
they
are
collecting
those
metrics.
C
Right, they read the array before that. Yeah, I naturally would want to read this on load, perhaps, and even if I iterate the array, I'd never see a second entry.
A
One thing that I didn't cover, and potentially should have, is that we will need to re-kick things off. For example, similar to the SPA discussion, we would want to kick off first paint, first contentful paint, LCP and FID as extra entries as well in that scenario, and then we would filter those extra entries based on the start times of the navigation timing entries.
A
So I don't think we can have a duration for the navigation, because basically we want to keep track of all the things that that particular bfcache navigation did up until the point in time that it got either unloaded or brought back into the bfcache.
A
So we would essentially want to look at all the other metrics, which we will need to refire, and collect them for those navigations. But there's no way for us to have an end time as part of the entry; we'll have to look at either the next entry or, if there is none, at everything beyond a single point in time.
D
Hey Yoav, I just want to throw in here that what Firefox does is we note when bfcache is used to restore, but we don't note a duration.
E
Hey Yoav, so one thing I noticed is that all the proposed options look very different in terms of the things they are reporting. So I wonder what the actual problem is that we are trying to solve here, or what the things are that we really want to report. Because, yeah, that's the question.
A
So I don't think that they are different in terms of what they're reporting. I think the main bits that we want to report are just, (a) that a navigation happened, and (b) that it happened at a certain point in time. And then we want to refire entries. I should have included that explicitly in the slides, and I didn't, but we want to refire entries and then filter them based on those start times.
A
So
the
the
reported
bits
are
not
huge.
It's
just
that
we're
looking
for
a
way
to
report
them
in
a
way
that
will
fit
the
rest
of
the
performance
apis.
A
Cool. Any other questions?
A
Cool, thank you, and I think we can now move on to Andrew and isInputPending. Yeah.
F
I'll cover what the API is, what use cases it seeks to accomplish, and some of the history, and then move on to some of the issues that have been chatted about recently on the GitHub: some interactions with long tasks, some guarantees regarding yielding to input, and maybe, if we have time at the end, we could talk about the incubation status. Yoav, do you think that would be more suited for tomorrow's rechartering discussion, or is it worth covering here?
A
So
I
wasn't
thinking
of
reintroducing
new
incubations
as
part
of
the
new
charter,
but
we
can
discuss
it
here.
Sure
sounds
good.
F
Cool
without
further
ado,
so
for
those
who
aren't
really
familiar,
isn't
the
pending
has
had
a
long
and
storied
history.
First,
a
rising
as
like
should
yield,
which
was
like
a
more
general
like.
I
would
like
to
yield
to
input
and
other
high
priority
tasks
and
over
time
we
sort
of
refined
it
down
to
just
input
since
that
accomplished.
The
large
majority
of
the
use
cases
that
we
cared
about.
F
We
talked
about
it
at
tpac,
2018,
2019
and
worked
out
some
stuff
like
security
issues
and
calls
since
then,
as
well,
as
did
a
chrome
origin
trial
in
2019,
where
we
got
some
cool
and
fun
results.
So
yeah
thanks
to
everyone
for
all
all
the
feedback
over
the
years
or
the
input
okay.
F
So
what
is
the
api?
So
in
essence,
it
is
a
way
to
remain
responsive
for
work
that
doesn't
want
to
yield,
and
you
might
ask
like
why
would
work
not
want
to
yield
as
frequently
as
say
like
every
frame
and
a
lot
of
work?
That's
done
on
the
web,
especially
with
modern
frameworks.
Today
is
we
have
work
cues
and
scheduling
done,
so
we
break
our
work
into
tiny
bits
and
we
know
which
of
that
is
display
blocking
and
which
parts
that
we
would
know
that
would
change
in
response
to
certain
types
of
inputs.
F
You
can
imagine
say
if
you're
doing
like
a
big
component
render,
and
then
you
get
like
a
user
input.
Press
in
say
like,
for
example,
on
facebook,
like
the
notification
jewel
you're,
going
to
want
to
stop
what
you're
doing
right
there
and
pivot
towards
like
rendering
what
the
user
wanted,
and
in
order
to
do
that,
we
need
this
sort
of
input
or
we
need
the
sort
of
insight
provided
by
the
user
agent
to
respond
to
user
input
directly
so
yeah.
Basically,
it's
a
tool
to
make
smarter
longer
tasks.
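A minimal sketch of that pattern. The only real API assumed is `navigator.scheduling.isInputPending()`, which exists only in some browsers, so the sketch feature-detects it and, outside a supporting browser, simply never reports pending input; the helper names are illustrative:

```javascript
// Feature-detect isInputPending; without it, report no pending input.
const inputPending =
  typeof navigator !== "undefined" && navigator.scheduling?.isInputPending
    ? () => navigator.scheduling.isInputPending()
    : () => false;

// Yield by scheduling a new macro task, letting the browser handle input.
const yieldToBrowser = () => new Promise((resolve) => setTimeout(resolve, 0));

// Drain a queue of small work items, yielding only when the user agent
// reports pending input, instead of unconditionally every frame.
async function runTasks(taskQueue) {
  for (const task of taskQueue) {
    task();
    if (inputPending()) await yieldToBrowser();
  }
}

const out = [];
runTasks([() => out.push("render"), () => out.push("layout")]);
```

The throughput win discussed later in the session comes from the `if`: the task keeps the thread while nothing is pending and gives it up the moment input arrives.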
F
Cool, so yeah, some questions that have come up, like, for example, why input only? The primary reason is just that the large majority of the use cases that we've aggregated from people who are interested in APIs like this are primarily display-blocking work that changes in response to input, and in those cases isInputPending does as advertised. And you might be wondering: this also blocks painting and network, and that's okay.
F
The
user
agent
is
entrusted
if
they're
going
to
leverage
this
api
with
the
knowledge
that,
yes,
their
work
is
highest
in
display
blocking
priority.
And,
most
importantly,
is
this
actually
worthwhile.
Is
this
a
trade-off
worth
making
versus
just
like
slicing
our
tasks
into
smaller
bits
and
from
our
measurements
and
from
our
work
with
partners?
We've
determined
that
you
can
actually
save
a
significant
amount
of
time
and
increase
your
scheduler's
throughput
by
simply
just
yielding
less
frequently
when
you're
performing
the
same
user.
F
Cool, so one of the first topics I want to talk about is integration with the Long Tasks API. As everyone knows and loves, the Long Tasks API reports tasks that take over, I think it's 50 milliseconds, and there are cases where you might want to make your scheduler quantum for your isInputPending-style long-ish tasks pass that duration, in which case it's sort of an open question as to how we should expose this in aggregate metrics.
F
So some of the ideas that have been tossed around were adding extra annotations to the long task performance timeline entries, or maybe just doing nothing at all: maybe it makes sense to still report these as long tasks and let the developer reason about that independently. So I was kind of curious about folks' thoughts on this issue and how a potential integration would work.
H
Well, I'm not a RUM vendor, but I would think adding an extra annotation would be very beneficial.
B
So there was a long task, but the app tried to respond to the user input, and probably, or maybe, ended its work earlier as a result of detecting it? Exactly, yeah.
F
So
it's
very
easily
polyfillable,
of
course,
but
for
like
more
aggregate
metrics
where
you
might
just
drop
this
in
on
a
page,
you
might
want
an
additional
view
into.
I
The really tricky case, I think, is when people are also using isInputPending for other use cases, or at least aren't yielding immediately after calling isInputPending. If we were guaranteed that people would yield immediately after calling isInputPending, I would say we would just consider that to be a break in the task, and only fire a long task if you had a contiguous block without a call to isInputPending. Because there's no guarantee there, it's a little bit hairy. I wonder, one other option would be to let people register for sort of a different task entry: we have regular long tasks, and then we have long tasks that contain no isInputPending calls. I don't love that, but it gives you, I think, a little bit more flexibility than extra annotations, because it actually lets you know how long those blocks are, whereas otherwise you just have a two-second block that says there were some calls to isInputPending.
G
This actually sounds really similar to a problem that has come up in Lighthouse before, around using the deadline of requestIdleCallback. One problem we see there is that developers might be checking the deadline, but they're checking it way too infrequently, and so they still end up with a lot of scheduled work. I wonder if we would need to include a rate of how frequently they're checking isInputPending, or maybe that's not a concern. Have you thought about that?
F
Yeah, that's interesting. I guess I would also share the same skepticism as Tim's point: you can be calling it as frequently as you want, but if you're not making active yielding decisions based on it, then maybe it's not as valuable as a signal.
J
What about just including in the long task an array of timestamps of when the callback, the isInputPending function, was invoked?
F
I see. I guess that is kind of problematic, since you can imagine we could have a lot of work items; I don't think that would scale too well.
I
Yeah, I think you'd need to somehow only surface the cases where the timestamp delta between consecutive calls was sufficiently large, and then it sounds a little bit like you're reinventing long tasks. You can imagine something similar, though, where you have an array of long blocks of time during which there was no isInputPending call.
F
Would it be worth abstracting that out a little bit and just noting long tasks that blocked input with their own designation, irrespective of whether isInputPending was called?
B
Yeah, that's exactly what I was thinking: if there was also a "was input pending" part of it, whether there were isInputPending calls, and whether, after input was pending, there was an isInputPending call and the task ended soon after. You know, that would kind of give an indicator that the app was responsive to the user input, rather than just checking in a loop and not doing anything.
C
Yeah, the threshold right now, I think the default is a hundred. You can make it as low as 16 milliseconds, which is lower than the long task threshold, so there's overlap there.
C
Yeah, like, I know there are folks who measure long tasks in RUM and still report total blocking time and still use that as a useful signal. But we see it as a useful synthetic metric for when you don't actually have real interactions; when you can actually measure interactions, that's the source of truth.
I
It does get a little bit tricky once you start talking about sites with moderate data volume. If you have infinite data, all you look at is input response times, but when you have limited data, there are cases where I think long tasks will let you identify regressions that you would not be able to identify otherwise.
J
I mean, if it's a two-second long task and they only checked isInputPending at the very start and end, I don't think that should be considered a good user experience. So I think that's the kind of case we're trying to avoid.
F
Yeah, we're definitely going to end up with a heuristic solution no matter what, because the long task metric itself is inherently quite heuristic. It's just a matter of whether any annotation is helpful in the first place, which it sounds like it probably is, so that's a good signal. I think it's probably fine, intuitively, to err on the side of over-reporting information about the long task execution, especially since long task actionability is something that's very important to people.
C
I'm not sure if this suggestion was already made, or if I missed it, but, Nicholas, regarding what you said about long durations: long tasks right now are over 50 milliseconds, but what if it was over 50 milliseconds between calls to isInputPending? Rather than giving an array of all the timestamps and stuff like that, just say what the longest duration was.
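That suggestion, surfacing the longest stretch of a task with no isInputPending check instead of every timestamp, can be sketched as a small helper. The name and shape are purely illustrative; nothing like this is spec'd:

```javascript
// Given a task's start and end times plus the timestamps at which it called
// isInputPending, return the longest stretch with no check. The task's own
// boundaries bound the first and last gap.
function longestUncheckedGap(taskStart, taskEnd, checkTimestamps) {
  const points = [taskStart, ...checkTimestamps, taskEnd];
  let longest = 0;
  for (let i = 1; i < points.length; i++) {
    longest = Math.max(longest, points[i] - points[i - 1]);
  }
  return longest;
}

// A 200 ms task that only checked at 40 ms and 90 ms: the 110 ms tail is the
// worst gap, even though the task did call isInputPending twice.
console.log(longestUncheckedGap(0, 200, [40, 90])); // 110
```

A single number like this scales regardless of how many work items the task ran, which addresses the objection to reporting a full timestamp array.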
C
Yeah, this wouldn't tell you if they would have yielded, but it at least tells you that they checked often enough that, if they were well behaved, they would have yielded. You know what I mean? Because I think the case that Nicholas is pointing out is that if you're not using a framework with a user-space scheduler that calls this often enough, you could still lose that opportunity; you could still have essentially a long task.
F
Cool, yeah, that sounds good. I will head back to the GitHub issue with some of these great suggestions. All right, so I think we're running a little bit short on time, so I'll just jump towards one of the other topics that we've discussed, which is the actual mechanism of yielding itself. You saw in my cutesy code example earlier that we just called an imaginary yield method, but that doesn't exist, and the big question is: should it?
F
What users of isInputPending right now are doing is leveraging something like setTimeout or postMessage in order to schedule a new macro task, with the assumption that, okay, maybe if we enqueue a new macro task and yield to the browser, the browser will probably end up handling the input for us. And this works well in practice.
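The macro-task yield described here is commonly written with `setTimeout(0)`; posting through a `MessageChannel` is a known alternative that avoids setTimeout's nested-timer clamping. A hedged sketch, falling back to setTimeout where `MessageChannel` is unavailable:

```javascript
// Enqueue a new macro task and resolve when it runs, giving the browser a
// chance to dispatch pending input in between.
function yieldToEventLoop() {
  if (typeof MessageChannel !== "undefined") {
    return new Promise((resolve) => {
      const { port1, port2 } = new MessageChannel();
      port1.onmessage = () => {
        port1.close(); // release the channel once we've been scheduled
        resolve();
      };
      port2.postMessage(null);
    });
  }
  // Portable fallback; subject to nested-timeout clamping in browsers.
  return new Promise((resolve) => setTimeout(resolve, 0));
}
```

Combined with the earlier pattern, the caller would write something like `if (navigator.scheduling.isInputPending()) await yieldToEventLoop();` inside its work loop.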
F
Since most UAs prioritize input very highly, this is a fairly reasonable guarantee. In the worst case, you'll just check isInputPending again; it might return true, in which case you yield again. So there is a possibility of a very pathological case that the spec permits, where you can yield as much as you want and isInputPending could still return true per the spec, but again, that's unlikely, because UAs prioritize input.
F
So
I
guess
my
question
to
the
group
would
be:
is
this
something
that
we
should
address
at
a
spec
level?
Is
it
worth
introducing
this
kind
of
normative
guarantee,
or
does
it
suffice
to
say
just
let
the
user
maybe
detect
this
case
if
it
actually
notices
this
being
an
issue
which
again
like
they
probably
wouldn't
run
into
it
in
the
field?
Anyway?
K
Just one note on that: it's not necessarily in any kind of proposal yet, right? It's in our dreams, maybe, yeah. And we aren't experimenting with an input-level guarantee; there is no input priority at this point, right? There's user-blocking, I think, but the proposed spec right now isn't to map any other task queues onto these priorities. It's fairly well isolated, and the browser has freedom.
K
Right, so if we'd want to do something here with the yield proposal, we'd have to augment the yield proposal as it currently is in some way to handle this situation. And as I was kind of expressing yesterday, I don't think that's our primary goal right now, but it'd be great to get a signal on whether it should be part of our goal.
C
I just wanted to ask: taken to its logical extreme, if the scheduling.yield proposal is extended to handle this, and you already talked about the ability to yield and then return back with high priority afterwards, because I was the one that yielded, does it eventually just replace isInputPending entirely?
K
We've been asked that by some folks who would rather yield to more than just input, and I mean, you can look at them as bookends, and some people may decide accordingly. You still have to choose when to yield, right, which isn't going to be all the time.
I
My guess is that we do not want to specify that the browser will handle all input before posting a new macro task. I think, given the existing hooks and specifications, that would be extremely difficult to describe, and it's not clear to me that it really adds much value.
I
I think just saying we'll yield to the browser, and the browser can do what it thinks is important, is probably sufficient and an awful lot simpler.
I
I can certainly imagine sort of pathological cases where a browser really would not want to handle input in a particular case, and so giving browsers the flexibility to schedule whatever they choose to do seems preferable to me.
L
For information: WebKit doesn't have any mechanism or plan to prioritize input, so that's just not the case there.
F
So, for example, in Chromium we coalesce continuous events and dispatch them every frame instead of in the middle of one. I'm mostly just asking because I'm curious whether it's truly FIFO in WebKit. Yeah, got it.
F
Okay, interesting. So in that case it would be valuable to provide that guarantee. And it's important to say that the guarantee is not such that we only care to yield to input; none of the use cases that we've discussed with people have expressed that desire. It's more: it's time for me to give up the throne, and the browser should do whatever work it deems high priority. So we wouldn't need a yield-only-to-input function; that's not going to be great for UX.
F
But
yeah,
if
there's
no
real
objections
here,
I
think
tim's
suggestion
of
expecting
reasonably
reasonable
behavior
and,
if
not
defaulting
to
maybe
a
timeout
in
the
worst
case
or
potential
throttling
when
input
is
detected.
But
it's
not
handled
for
the
next
macro
task.
F
Readable,
so
that's
pretty
much
it
for
me.
I
guess
I
will
open
up
the
last
minute
and
40
seconds
just
like
any
overall
questions.
C
So, a question just came to mind: presumably this input-pending signal would give you the confidence to introduce a long task where previously you would not be confident and would have yielded. So one topic is yielding to input and prioritizing it, but another one is just yielding to other script. You can imagine misbehaving script which would have previously yielded occasionally, so it goes round-robin between contentious actors, whereas now someone could say: there's no input, I'll just block ad infinitum.
C
So
what
about
yielding
to
other
script
and
making
sure
a
bad
actor
doesn't
sort
of
own?
The
thread.
F
So
I
think
the
surface
is
probably
explored
more
in
detail
by
the
sort
of
main
thread
scheduling
stuff
that
was
discussed
yesterday
for
ism
opening
itself.
It
solves
like
the
very
specific
case
where
the
like
script
already
wants
to
do.
A
long
task
in
like
is
incentivized
to
do
so
because
it
has
like
throughput
wins
that
it's
found
and
in
which
case
like
we
retain
that
responsiveness
and
improve
the
user
experience
when
like
otherwise,
we
wouldn't
really
be
able
to
offer
a
compelling
solution.
F
I
think
that
the
problem
of
script
prioritization
and
like
improving
that
sort
of
cooperative
multitasking
that
you
mentioned
is
like
really
worthwhile.
It
just
seems
like
very
challenging
and
out
of
scope.
K
I just wanted to add something to what you were saying, Michael. Our teams have talked about this a little bit internally as a risk for this API, especially in the context of, and Andrew, I think we've discussed this as well, having multiple pages sharing a process: you may not know about the input from another page if you're backgrounded. And we need to be really careful about the messaging around this API, to not let folks think that they can just do long tasks forever.
K
One
mitigation
we've,
we
kind
of
thought
of
is
like
we
may
like
if
it
does
become
a
problem
and
it's
not
saying
that
it
will
but
like
we
could
lie
in
his
input
pending
like
we
can
already
return
like
we,
it's
potential
that
we
could
lie,
and
just
say
yes,
there's
input
pending
if,
if
you're
blocking,
you
know
the
thread
for
too
long
and
there's
other
things,
I
don't
know
if
this
would
actually
be
something
we
want
to
spec
as
a
possibility
or
not,
but
that
maybe
there's
a
potential
there.
F
Yeah, I think that's reasonable.
C
I'm just thinking through both some times where this might fire even if there is no input pending, but also, for future-proofing, whether there are other instances where it's not strictly related to input but you'd want to yield. Anyway, I mean, it's nice to scope it to specifically the problem that it's addressing, but still.
F
Yeah,
we
definitely
wanted
to
be
like
a
composable
building
block
such
that,
like
we
weren't
going
to
over
engineer
a
solution
to
problems
that
weren't
quite
obvious
yet
or
as
obvious
as
this
problem.
Space.
F
Anything else, or do we get to see the thanks slide? Sorry, go ahead.
A
No, just regarding the incubation status question: yeah, we discussed it, I believe, a couple of months back on a call. Initially, the folks present on that call were favorable to adoption, but on a later call folks were less favorable to adoption. I essentially didn't want to raise that question before rechartering, because we don't necessarily have the time for that, and I think we can re-raise it after rechartering, in the future.