From YouTube: WebPerfWG 191119
A
So yeah, I will take an action item to actually look at that thread and see if we need to move anything around. Okay — and with that thought, do you want to kick off the presentation?
D
Sure, yeah, thanks everyone. So, basically, I'm not sure that the resource timing related issues will be super interesting for you, so feel free to drop for that — or you can stay. Okay, cool! Thank you.
D
Okay, hi. For those that don't remember, I'm Scott Haseley, and the purpose of this is to give you all an update on the postTask API — specifically a switch in the API shape. So, just a quick recap: we presented this API at the face-to-face in June. To set a little bit of context, of the APIs we talked about there — I presented a couple — this one is just postTask, which allows us to schedule prioritized tasks with the browser, and the old syntax is something like this: const task = scheduler.postTask(...).
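A sketch of that old shape, runnable anywhere via a stub scheduler standing in for the browser API (the priority names and the `result` property are taken from the June proposal as described later in this talk; the stub itself is an assumption):

```javascript
// Sketch of the OLD proposal shape described in the talk — a stub scheduler
// stands in for the browser, so this runs anywhere. Not a shipped API.
const scheduler = {
  postTask(callback, { priority = 'user-visible' } = {}) {
    // The old shape returned an explicit task object...
    const task = { priority };
    // ...with the callback's eventual result hung off of it as a promise.
    task.result = Promise.resolve().then(callback);
    return task;
  },
};

const task = scheduler.postTask(() => 1 + 1, { priority: 'high' });
task.result.then((value) => console.log('task result:', value)); // task result: 2
```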
D
So we started thinking about this a lot, and it turns out we think this is a good idea, and we're changing to a signal-based prototype — that's what I'm going to talk about in detail here. That's a work in progress; the CLs are under review right now in Chrome. And then the other thing we've been up to is starting to think through spec issues around priorities: how much flexibility are we going to give browser vendors?
D
So
this
has
been
a
discussion
that
has
gone
on
the
tag
review
thread
Boris
from
Mozilla
was
was
quite
involved
with
this
and
I
think
we've
reached
a
point
of
understanding,
but
we'll
see
once
we
start
drafting
a
spec,
so
I
won't
have
time
to
talk
about
that
today.
There's
just
not
time
but
I'm
happy
to
come
back
for
another
design,
call
and
talk
more
about
like
how
we're
going
to
expect
priorities.
How
that
should
work
if
that's
interesting
to
this
group,
so
a
roadmap
of
where
we're
going.
D
What we're going to work on next is continuing to resolve any API shape and functional questions. The TAG review was kind of preliminary, so they're going to be given an explainer — which I need to update with the new API shape — and they're planning to give a more full review, so we'll see what comes out of that. We want to move towards an origin trial to evaluate performance, API shape, and ergonomics, and we're also starting to tackle a lot of the big concerns — the post-v0 concerns, such as, you know, starvation.
D
What
do
we
do
about
controlling
third
party
script?
These
kind
of
big
questions
that
we're
not
planning
to
tackle
in
in
v-0,
but
we're
starting
to
think
about
what
the
path
forward
is
there,
so
don't
have
any
great
updates.
This
is
a
lot
of
brainstorming
going
on,
but
we'll
definitely
keep
the
group
in
the
loop
on
that
and
then
hopefully
soon
start
working
on
what
the
zero
spec
will
look
like
so
yeah,
most
of
the
time
left
I
want
to
just
talk
about
what
this
API
shape
looks
like
and
solicit.
D
You
know
feedback
from
folks
what
what
people
think
of
this.
So
you
know
again
in
response
to
tag
feedback.
We
started
thinking
about
what
the
API
shape
looks
like
and
how
we
might
incorporate
this
controller
signal
pattern.
I'll
give
an
example
of
that
for
folks
that
aren't
familiar
with
it
on
the
next
slide,
but
basically
we
have
in
this
concept
in
the
platform
of
an
abort
controller
and
a
board
signal
that
lets
you
control
async
work
after
it's
been,
you
know,
cute
with
the
browser
and
what
we've
discovered
is
two
things
one.
D
This
really
helps
us
model.
It
changes
the
way
we
model
tasks
and
it
fits
a
really
specific
model
that
resonates
with
web
developers
that
we've
we've
shared.
You
know
these
ideas
with
also
you
know
basic
overview.
An
API
shape
changes
were
removing
explicit
tasks,
objects
replacing
that
with
a
promise
and
removing
explicit
task
queue
objects
for
now.
So
what
this
looks
like
and
the
next
few
slides
are
going
to
be
kind
of
diffs,
so
you
can
see
what
the
api
shape
look
like
and
then
I'll
talk
about.
D
Why
I
think
this
is
a
good
idea
and
we
can
have
conversation
about
that.
So
again,
there's
a
bunch
of
links
in
here
we
did.
We
have
a
thread
going
on
our
explainer
repo,
where
we
started
circulating
these
ideas,
so
there's
some
more
context
there.
So
the
new
API
in
this
in
the
base
case
is
very
simple.
It's
just
you
know
schedule
will
work,
get
a
promise,
so
this
really
simplifies.
You
know
the
base
case.
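The simplified base case, again with a stub standing in for the proposed `scheduler.postTask` (which, per the talk, just returns a promise for the callback's result — the stub is an assumption):

```javascript
// NEW proposal shape: no task object — postTask just returns a promise.
// A stub scheduler stands in for the proposed browser API so this runs anywhere.
const scheduler = {
  postTask(callback) {
    return Promise.resolve().then(callback);
  },
};

// Schedule work, get a promise — the whole base case:
scheduler.postTask(() => 2 + 2).then((value) => console.log('got', value)); // got 4
```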
D
In the previous example, you would have task objects — a task object — and we hung the promise representing the result off of that task. So you would have to either .then() or, you know, await something like postTask(...).result. We also let you do this through task queues, which was even more verbose.
D
You could get a reference to a specific task queue, post a task on that, and then read its result — so we're really simplifying the number of objects in the base case here, which people are happy with (some people, anyway). So, on to this idea: this is great if we just return a promise — it's kind of in line with a lot of APIs that we have — but how do you control it? There was a long conversation some time back about how you do this for fetch, and what they settled on in the platform was this idea that we can create a separate controller object. The controller object has a signal, and that signal can be used to observe events. So the first example — the top one — is how we did that with fetch: we can create an AbortController, and the AbortController propagates cancellation — abort. You pass that to one or more fetches, and there are other APIs that take an AbortSignal as well.
D
So there's this heterogeneous group of work that can accept the signal — for example, geolocation, things like that — which is why this question came up: what about postTask tasks? So, changing the API — at the bottom you see how this would integrate into postTask, and with postTask the API starts to look a lot more like existing APIs that we have. So underneath there, you can say postTask(foo) and pass the same signal to both fetch and postTask. I should mention TaskController will expose a TaskSignal, which inherits from AbortSignal, so you can pass these anywhere that takes an AbortSignal — and that's really where the power of this, I think, comes from; we'll talk about that a little bit more in a minute. So again, controlling heterogeneous async work: the new approach is just share a signal.
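A sketch of "just share a signal": one real `AbortController` cancels a piece of scheduled work, and in a real page the same signal could also go to a fetch (the `postTask` stub below is an assumption standing in for the proposed API):

```javascript
// One shared signal controls heterogeneous async work. The scheduler stub is
// an assumption for the proposed postTask; AbortController is the real API.
const scheduler = {
  postTask(callback, { signal } = {}) {
    return new Promise((resolve, reject) => {
      if (signal?.aborted) return reject(new Error('aborted'));
      signal?.addEventListener('abort', () => reject(new Error('aborted')));
      setTimeout(() => resolve(callback()), 10);
    });
  },
};

const controller = new AbortController();
const { signal } = controller;

// In a page, the same signal could also go to fetch(url, { signal }),
// geolocation, and any other API that accepts an AbortSignal.
scheduler.postTask(() => 'done', { signal })
  .catch((err) => console.log('task', err.message)); // task aborted

controller.abort(); // one abort() cancels every piece of work sharing the signal
```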
D
And again, controlling related tasks: before, we let you control related tasks through task queues, which is on the bottom. Now — at least, I still think there might be a place for exposing task queues, and we can talk about that later — but this is how we're expressing relationships now: by sharing a signal. So that's really the API change there. Any questions about that part before I go on to why I think we're doing this?
D
Priority,
what
does
that
mean?
Oh
sorry,
so
the
yes
I
should
have
mentioned
so
task.
Controller
inherits
from
abort
controller,
so
it
has.
This
is
something
we'll
be
adding
with
this
API
task
controller
allows
you
to
not
just
abort,
but
also
to
change
priorities,
so,
instead
of
just
propagating
a
abort
signal
to
everything
that
whole
the
signal,
you
can
propagate
priority
change
events
to
everything
that
shares
the
signal.
So
previously
we
would
do
this
through
changing
the
priority
on
a
task
cube,
but
now
this
is
something
that
the
controller
allows
you
to
do.
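A hand-rolled illustration of the `TaskController` semantics described here — abort is inherited from `AbortController`, and priority changes are propagated to everything holding the signal. This is not the real implementation, just a sketch of the described behavior:

```javascript
// Illustration only: a controller that can abort (inherited) AND broadcast
// priority changes to everything sharing its signal.
class TaskController extends AbortController {
  constructor(priority = 'user-visible') {
    super();
    this.signal.priority = priority; // readable by anyone holding the signal
  }
  setPriority(priority) {
    this.signal.priority = priority;
    // Propagate the change to everything sharing the signal.
    this.signal.dispatchEvent(new Event('prioritychange'));
  }
}

const controller = new TaskController('user-blocking');
controller.signal.addEventListener('prioritychange', () => {
  console.log('priority is now', controller.signal.priority);
});

controller.setPriority('background'); // priority is now background
controller.abort();                   // aborting still works, as inherited
```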
D
There's
signals
on
the
way
that
we're
really
the
web
platform
is
going
to
signal
some
sort
of
state
change
on
async
work
and
priority
being
one
of
them.
We
can
integrate
this
with
other
api's,
we've
thought
and
actually
there's
a
on
the
there's,
an
issue
open
I,
don't
have
it
linked
here
on
the
fetch
repo.
This
is
this
was
linked
on
the
tag
review,
for
they
want
to
do
something
similar
for
fetch.
They
want
to
have
a
fetch
controller
that
lets
you
change
priority
and
abort.
A
So, first of all, this looks great at first glance from the perspective of Priority Hints. For example, we were talking about adding basically a priority parameter to the fetch call, with no way to reprioritize afterwards if things change — and this will give us that. So this looks great. Otherwise, if I want to create a task that will then create other async tasks with different priorities, I could pass along, like, multiple signals to the initial task that would then propagate whatever priority?
D
Yeah,
you
could
definitely
do
that.
You
can
also
there's
use
cases
for
for
providing
a
board
signal,
but
specifying
a
priority.
So
you
might
want
a
group
of
tasks.
I
have
an
example
in
another
slide
here,
but
you
might
have
a
group
of
tasks.
Let's
say
it's
updating
some
UI
state,
but
it's
done
asynchronously
and
it's
really
high
priority
because
the
user
is
interacting
with
it,
but
it
spawns
async
work
like
log
one
I'm
done
or
log
this
event,
something
that
is
just
overall
low
priority.
D
So
what
you
can
do
and
we
have
in
the
design
doc
is
what
happens.
If
you
give
it
a
signal
and
a
priority
right,
one
of
them
has
to
win.
So
we
think
there
are
use
cases
so
that
you
can
treat
the
signal
as
an
abort
signal.
So
yes,
I
want
this
to
cancel.
But
if
you
give
me
a
specific
priority,
then
that
chain
is
broken
and
we
don't
allow
reprioritization.
It
just
sticks
in
that
priority.
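A sketch of that tie-break (the stub is an assumption modeling the design-doc behavior described above): an explicit priority pins the task's priority, while abort still follows the shared signal:

```javascript
// Stub illustrating the described tie-break: explicit priority wins for
// scheduling, but the signal is still honored for cancellation.
const scheduler = {
  lastPriority: null,
  postTask(callback, { signal, priority } = {}) {
    // An explicit priority is pinned; otherwise follow the signal's priority.
    this.lastPriority = priority ?? signal?.priority ?? 'user-visible';
    // Abort is still taken from the signal either way.
    if (signal?.aborted) return Promise.reject(new Error('aborted'));
    return Promise.resolve().then(callback);
  },
};

const controller = new AbortController();
controller.signal.priority = 'user-blocking'; // stand-in for a TaskSignal

// Low-priority logging spawned from high-priority UI work: cancel with the
// parent via the shared signal, but stay pinned at 'background'.
scheduler.postTask(() => 'logged', {
  signal: controller.signal,
  priority: 'background',
}).then((v) => console.log(v)); // logged

console.log(scheduler.lastPriority); // background — the explicit priority won
```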
D
So
some
other
advantages
here
again:
I
think
the
the
thing
that
we
just
talked
about
and
propagating
this
throughout
other
async
api's
is
the
killer
feature
to
me,
and
that
was
like
what
really
swayed
my
opinion
also
again,
similar
shape
to
existing
async,
api's
and
I.
Just
think.
That's
good
for
Organ
omits
for
developers
of
for
expectations
of
interacting
with
the
API
I.
Think
here
with
signals
you
have
the
you
can
easily
expose
state
but
not
control.
So
when
you're
passing
a
task
queue,
you
can
operate
on
that
task
queue.
You
can
change
its
priority.
D
You
can
clear
all
the
tasks
in
it,
which
is
not
necessarily
what
you
want
to
do.
If
it's
you
know
sending
that
and
passing
that
to
code,
but
with
signals
it's
safer
to
pass
because
they're
they're
read-only,
and
that
was
part
of
the
idea,
I
think
with
the
original
abort
controller
abort
signal
proposal
is
that
you
have
a
read-only
component
of
this
that
you
can
pass
around.
D
Think
also,
and
unfortunately,
real
skates,
not
here
and
tada.
We
talked
at
the
face
to
face
a
lot
about
this
about
they
had
concerns
over
priority
inversion.
Specifically,
what
we
kind
of
settled
on
is
yeah.
It
would
be
great
if
you
had
a
way
to
specify
these
tasks
are
tightly
related,
so
that
you
can
make
sure
they
get
prioritized
together
and
I.
D
Think
signals
gives
you
that
so
I'm
hoping
and
I'd
like
to
follow
up
with
your
okay,
that
this
might
alleviate
some
concern
that
they
have
around
this
and
then
finally,
I
think
shared
signals
are
really
really
good
for
modeling.
What
I've
been
calling
this
task,
subtask
relationship,
which
you
all
you
you
alluded
to,
which
is
this
I,
do
a
thing
and
it
spawns
a
whole
bunch
of
async
work
and
heterogeneous
types
of
async
work.
I!
Think
it's
really
good
at
that.
D
So
I
have
just
a
you
know
a
quick
example
here
and
there's
a
lot
of
subtlety
here.
I
have
a
doc
link
on
the
last
page
for
folks
that
are
interested
in
reading
more
about
this.
We
explored
this
quite
extensively.
The
way
I've
been
thinking
about
this
task,
subtest
model-
is
that
yeah
you
have
this,
like.
You
know,
kind
of
meta
overall
things
that
use
those
users
trying
to
do
like
I
interact
with
a
component
I
need
to
fetch
some
things.
I
need
to
update
some
UI.
It's
an
internal
state,
update
the
UI
I.
D
Where
is
no,
you
can
use
tasks,
use
to
model
that
if
you
create
a
task
queue
just
for
the
subtasks,
of
course,
you're
left
with
having
to
control
both
of
them,
which
is
a
disadvantage,
but
you
can
still
do
it.
It's
just
a
little
bit
cleaner,
I.
Think,
in
my
opinion,
this
way
now
task
queues.
On
the
other
hand,
I
think
are
really
good.
You
know,
when
you
want
to
group
types
of
tasks.
B
D
Do
this
on
the
you
know,
in
the
HTML
spec
like
every
like
different
task
sources,
have
different
task
queues
right
and
we're
allowed
to
prioritize
like
between
them.
Any
way
we
see
fit,
but
there's
nothing
to
say
that
everything
in
that
task
queue
is
completely
related
to
everything
else
in
that
task.
You
so,
for
example,
like
these
it's
again
a
logging
cask,
you
you,
but
I
always
want
these
to
run
at
low
priority,
but
what's
to
say
that
any
two
casts
and
that
logging
to
a
skewer
tightly
coupled
now.
D
If they're all, you know, achieving the same thing — if everything's on the critical path to the UI update — then I want to make sure they move together as far as priorities go. But not necessarily if I have some work that's part of it and related to it, like a logging task, that is just strictly lower priority. Whereas task queues are the inverse, I think: they're always prioritized together. Task queues represent ordering — signals don't give you that.
D
If
I
individually
change,
the
priority
means
I
need
to
move
it
to
a
different
queue
because
otherwise
I
break
the
ordering
guarantees
which
so
so
these
are
always
prioritized
together
and
they
can't
be
canceled
together.
I
could
decide
that
I
want
to
not
run
any
logging
tasks
and
clear
the
whole
task
queue
for
some
reason,
but
they're,
not
necessarily
that's
not
necessarily
the
case
I
might
want
to
individually
cancel
them.
That's
where
I
think
the
kind
of
main
differences
here
so
with
signals
we're
kind
of
we're
pushing
this.
D
This
task
subtest
model
more
and
we
talked
you
know.
We
talked
to
some
developers
about
this,
and
it
really
did
resonate
with
them
is
like
yeah,
okay.
This
looks
like
that's
how
we
model
like
tasks,
so
I
think
you
could,
by
parinama
code
to
oh
I'll,
stop
here
for
for
questions.
I
have
just
like
a
couple
more
slides
about
like
where
we
might
go
with
this,
but
it's
still
under
consideration.
So
if
anybody
has
any
follow-up
questions
on
this
love
to
take
them
now,.
D
There's been strong developer demand so far for, you know, doing this propagation ourselves — like exposing something through the scheduler, like a currentTaskSignal or something like that, or adding some sugar. The alternative is that you pass the signal everywhere it's needed, which for v0, I think, is totally fine, until we decide whether or not this is a good idea — which some folks have raised concerns about being a potential footgun. So I think we need to flesh out a little bit what it might look like. It's something like this.
D
So
a
few
options
we
could
so
by
the
way
in
v-0,
we
had
a
current
task
queue
that
tried
to
accomplish
the
same
thing,
and
we
could
replace
that
with
current
task
signal
we
can
add
an
explicit
inherent
option
is
an
option
to
there
to
make
it
even
you
know,
maybe
more
ergonomic
or
even
a
separate
method.
So
we
were
flirting
with
like
scheduler
dot,
post
tasks
or
post
sub
tasks
which
really
like
confines
this
relationship.
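Written out, the floated options look roughly like this (pseudocode — `currentTaskSignal`, `inherit`, and `postSubtask` are names under discussion in the talk, not shipped API):

```js
// Options being explored for propagating the current task's signal —
// all hypothetical at this point:
scheduler.postTask(foo, { signal: scheduler.currentTaskSignal }); // (a) expose the current signal
scheduler.postTask(foo, { inherit: true });                       // (b) explicit inherit option
scheduler.postSubtask(foo);                                       // (c) separate method
```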
D
So
this
is,
you
know,
we're
not.
We
still
have
some
exploring
to
do.
We
still
have
some
open
questions
around
this.
We
want
to
hear
use
cases
more
use
cases
for
developers
to
see
like
you
know
why.
Just
passing
the
signal
isn't
good
enough
and
what
else
they
might
you
know
need
we
want
to
think
through
misuse
like
is
this
bad
like
if
we
expose
this
in
third-party
libraries
use
it?
Is
that
what
we
want,
or
is
that
not
what
we
wanted
so
I
want?
D
We
want
to
think
through
these
think
questions
a
little
bit
more,
there's
still
a
question
of
whether
or
not
we
should
build
tasks
use
like
I.
Don't
think
you
know
passing
a
signal.
Sorry
signals
controllers
are
good
for
modeling,
like
one
relationship,
but
not
so
good
if
you
want
to
model
them
both
at
the
same
time.
In
the
same,
you
know
code,
so
maybe
there's
room
for
for
doing
more
here,
but
but
I
think
it's
a
good
start.
D
You
know,
I,
we
haven't
heard
a
lot
of
you
know
push
package
seems
to
match
people's
mental
models.
So
assuming
we
don't
get,
you
know
about
pushback
from
this
group.
Then
then
I
think
we're
gonna,
move
forward
with
this
approach
and
see
what
AG
has
to
say
after
we're
done.
So
that's
all
I
have
there
there's
a
bunch
of
additional
links
for
the
people
that
are
interested.
We've
explored
these
options
quite
a
bit,
and
these
are
all
public
and
just
gave
a
talk
at
blink
on
about.
D
We
focus
more
on
the
priority
issue
in
that
talk
about
how
the
priorities,
respect
and
less
on
this
EP
I
shave
questions
so
I
have
my
size
linked,
and
the
talk
should
be
up
on
line
soon,
if
it's
not
already
on
the
YouTube.
So
just
some
more
context.
So
thank
you
and
any
other
questions
in
the
time
that
we
have
love
to
take.
A
Thank you. So I'm wondering if other implementers — which is basically who's on this particular call — do you have opinions on, like, the API shape, implementability from your engines' standpoint, or...?
D
Yep, we'll work on that. Part of the challenge of this, on the explainer, is getting something fairly succinct — this can easily turn into a really long conversation — so I'll work on trying to get this the best we can. And yeah, let me know — I'm happy to come back and present anything else, like the priority work that we've been talking about before. I don't know if that's of interest to this group, or what the next steps with this group should be.
D
Is there a spec draft? Right — no, there's not a draft spec yet, but we're trying to work towards that. Mostly what we're trying to think about right now is all the other issues that we have — you know, like, what do we do about starvation, what is going to block v0 — so we've been spending a lot of time on that, and that's helping us think in terms of spec. I think we're getting close to where I want to start writing the draft spec, so we'll definitely loop everybody in when we get there.
C
Yeah, I just wanted to give an update that in Chrome we are thinking of shipping the buffering of long tasks — meaning they will be stored from the beginning of the page, up to a certain amount, so that when you call PerformanceObserver.observe with buffered equals true, you will be able to get entries that were created before the observer was created.
C
The reason for this is: before, we thought that there might be some overhead in actually computing these long tasks, but I looked at the code, and we already have all the task durations in Chrome. So I was wondering if Firefox has this as well — because I imagine you have a scheduler, so you monitor the task durations.
C
The
other
thing
is:
if
we
are
okay
with
it,
moving
forward
with
buffered
flag,
what
would
be
a
reasonable
max
buffer
size?
So
to
recap
we
need
a
buffer
size
so
that
we
don't
just
store
every
single
performance
entry
forever,
but
at
the
same
time
we
want
some
decent
amount
so
that
a
performance
observer
that
is
registered
at
a
reasonable
time
will
generally
or
almost
always
have
all
of
the
information
previously.
C
So
they
won't
miss
entries.
But
then
we
also
don't
want
to
store
all
the
interests,
so
we
set
our
buffer
size
limit
for
that
we
don't
have
data
on
per
frame,
long
tasks
in
chrome,
so
I
was
trying
to
see
if
we
did
but
I
think
we
don't.
However,
we
do
have
data
on
unload
so
time
to
load
event.
I
wrote
in
the
dock
that
at
least
for
19th
percentile,
it's
less
than
10
seconds.
B
B
A
And I could also — maybe we could take the time when you registered, and, you know, how many long tasks you have after that, and we can do the same kind of back-of-the-envelope estimates for everything that happens before that. So if we assume everything before that is just a collection of, you know, short long tasks, how many buffer slots would we need — that would give us a cap.
A
So let's say the long task observer registered at three seconds — then we have, like, a maximum of 60, plus whatever they actually see from that point forward. I mean, we don't need to store all of it, right? We're saying that this will give us a good way to take, you know, anonymized real customer numbers to estimate.
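The back-of-the-envelope bound works because a long task is, by definition, at least 50 ms (per the Long Tasks spec), so the time before the observer registers caps how many entries could need buffering:

```javascript
// A long task is by definition >= 50 ms, so the time elapsed before the
// observer registers bounds how many long-task entries could need buffering.
const minLongTaskMs = 50;        // long-task threshold from the spec
const registeredAtMs = 3000;     // example: observer registered at 3 s
const maxBufferedEntries = registeredAtMs / minLongTaskMs;
console.log(maxBufferedEntries); // 60 — the "maximum of 60" mentioned above
```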
A
Yeah — you're saying that from the point where they can register a performance observer, we don't really need to buffer. So basically both the time that it took to register the performance observer, plus the maximum number of long tasks that you've seen at all — both would be interesting for that estimate, I think.
H
To give you some data points: for us, at p90, the load event is two to three — oh, sorry, twenty to thirty — seconds, so it can be much higher than what you're seeing, probably, for the web. So I agree with Yoav's approach, but, for instance, we will always register at onload, so we're dealing with a bit more extreme situation at the p90. So yeah — I mean, if you come up with numbers and share the methodology, we can just figure out if that still works for us, basically.
H
No — yeah, yeah. I mean, in my experience, from what we've looked at — it depends on which device you're dealing with, right? On a very slow phone you're going to get a lot more of those, but in all the studies we've done, even on slow devices, we never see more than a dozen or 20 of those showing up in dev tools. We don't have instrumentation in production for long tasks yet, because we were waiting for the buffered flag.
H
But
but
you
know,
there's
a
200
sounds
very
reasonable
to
me
and
we'll
be
able
to
figure
out
right
away
if
we're
hitting
the
limit
off
and
let
you
know
as
soon
as
this
is
life,
so
you
can
iterate
it's
fine.
You
know
we're
looking
forward
to
get
some
data
and
then,
if
we
hit
the
limit,
often
we'll
let
you
know
yeah.
C
Great. So, as the next step, I'll send a PR to — actually, we only need to modify our new fancy registry, because that one has the buffer sizes, so I'll just need to update that, which is great. And I'll ping Ryosuke to make sure he's aware — long tasks is not implemented in Safari, but if they have any objections about that change, then they should just let me know.
A
I
think
I'll
just
generally
because,
like
for
the
scheduling
API
as
well,
just
sent
him
the
video
for
this
call
and
try
to
get
him
to
comment
on
all
the
things
watch.
The
full
video.
C
So, the two changes that we have: one, instead of the same-origin checks plus a recently added boolean flag for tainting, we now just check the response tainting, which is a fetch concept — which is pretty close to just checking whether it's same-origin or not. The reason we did this is to align better with CORS processing, because our objective in the end will be to have a world where, if you pass CORS, then you get TAO for free. To get that, we need to first align better with the CORS model.
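Concretely, the check under discussion is driven by the `Timing-Allow-Origin` response header (e.g. `Timing-Allow-Origin: *`). When it fails for a cross-origin resource, Resource Timing keeps coarse fields like `responseEnd` but zeroes detailed ones like `responseStart` — a common way to detect that from the page (guarded so it only does anything in a browser):

```javascript
// When the Timing-Allow-Origin check fails, a cross-origin resource entry
// keeps start/end times but zeroes detailed fields like responseStart.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  for (const entry of performance.getEntriesByType('resource')) {
    const taoFailed = entry.responseStart === 0 && entry.responseEnd > 0;
    if (taoFailed) {
      console.log('TAO check failed for', entry.name);
    }
  }
}
```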
C
Two, we now use a method that serializes the origin, and that method will return null if the tainted origin flag is set. The tainted origin flag basically means that there have been two origin crossings in the redirect chain. So let's say I am in A, which is my request origin, and I request a resource on B, and then that resource redirects to C — that's already two origin crossings. Or it can be A, B, A — that's also two origin crossings.
A
Don't
know,
no,
it's
not
problematic.
It's
just
that
yeah
very
no.
C
C
In any case, we just wanted, I guess, to make the group aware that we are trying to integrate this check into fetch, and that that will result in some changes to the processing model of the check — which means that you will shortly get browser bugs. Also, Chrome currently does not implement it correctly.
C
Yeah, because, like I said, now, if you have multiple origin crossings, you will need a star, instead of being okay with the page origin — I'm sorry, the request origin. So I'm not sure if you have any insight into whether that's coming or not, because pretty much everybody I see uses a star.
C
So
that
was
what
is
that,
when
you
were
planning
to
discuss
with
you,
you
edit
this
so
yuck
it
is
that
what
you
wanted
to
discuss
regarding
the
integration
or
wisdom,
something
else.
Oh.
A
Yeah
I
think
just
yeah
give
everyone
a
heads
up
regarding
the
upcoming
changes
and
make
sure
no
one
has
strong
objections,
because
a
lot
of
them
motivation
here
is
alignment
with
course.
That
will
hopefully
enable
us
to
treat
cores
as
something
that
implies.
Also
timing
allow
origin.
A
So
we
are
not
aware
of
any
security
issues
that
require
that
kind
of
painting
that
Kors
implied
like
we're
order
of
them.
In
course,
we're
not
aware
of
them
for
timing,
allow
origin,
but
it
seems
like
the
benefits
of
aligning
are
significantly
larger
than
the
one
or
two
sides
that
may
be
impacted
by
this
and
will
lose
their
analytics.
A
B now has control over the endpoint on A, and can force A to do things that A wasn't intending to do, unless A explicitly says "I allow this kind of cross-origin redirection through B." The reason, I think, that it doesn't — like, for CORS it requires star, because they don't have a chain of origins that they're allowing; there's no way to say Access-Control-Allow-Origin for this whole chain of redirects, and plus it probably adds a lot of spec complexity to keep that around.
A
So
I
think
they
just
went
with
star
and
I
haven't
heard
anyone
violent
and
complaining
about
that,
though
it
seems
like
like
I
believe
this
is
not
a
very
common
case
and
forcing
the
same
star
restriction
on
Tao
seems
like
star
novel
I
think
so
no
no
yeah.
That
would
be
a
good
place
to
start.
Unless
we
understand
that
it
breaks
a
very
significant
views
case,
and
then
we
may
be.
You
know
we'll
see
what
we
do
about
that.