From YouTube: WebPerfWG TPAC 2020 meetings - October 21 - part 1
A
Okay, and we're back for the third day of the web performance extravaganza at this year's TPAC. Today we'll talk about network diagnostics, performance.measureMemory, rechartering, long task attribution and, finally, JS self-profiling. Noam, do you want to kick off and present the network diagnostics part?
B
Yeah, so let me... Okay, do you see my presentation?
B
Okay, so let me start by reintroducing myself. I'm from Microsoft; I work on Excel Online development, and I want to talk about a proposal. It's not a spec, or even a draft for a spec; it's more about understanding the need for a certain capability that I think is starting to become more and more important.
B
We have some of this with the Network Information API and some other capabilities, and in general there are many cases where application developers need to be able to optimize the user experience by understanding the network situation better, and it's really hard to do with the existing tools. What I want to focus on is not the general network diagnostics case, because that's a very broad topic; I'm going to scope it down a little bit and talk about last-mile diagnostics, and even scope it down further, to just local network diagnostics for the purpose of this discussion. In terms of terminology, when I'm talking about "last mile", it's the network in proximity to the user agent, and this diagram shows an example of a common network layout, especially now during remote work, with everyone working from home.
B
We can modify how the application is implemented, by changing the network requests we make, their frequency and so on, and generally it's probably very useful to collect this as RUM data, so we can analyze and optimize our user experience much better.
B
These are common tools that network admins usually use for network diagnostics, and before everyone gets worried: I'm not proposing we introduce all these tools into the browser. That's kind of a stretch, and it would probably be a big security concern.
B
However,
the
if
you
notice
there's
a
one
tool
that
pops
out,
which
is
the
pink
tool
which
is
very
popular
and
used
by
many
network
admins
for
initial
troubleshooting,
usually
so
what?
Let's
cover
the
existing
options
that
people
have
when
they're
building
apps
and
they
want
to
understand
the
network
situation,
so
they
can
use
network
error
logging.
It
doesn't
provide
the
granularity
about
what
happens
on
the
local
network,
but
it
does
give
us
some
general
connectivity
informations.
B
However, we cannot use it directly to notify the user when there's a problem with local connectivity. The other option is using the Network Information API. From our experience (we actually have integrated it into our product), it can be either too sensitive or not sensitive enough, depending on the situation: you get a lot of false positives or false negatives, it can fire too frequently, and in many cases it doesn't give us what we need, and we don't want to confuse the user by alerting them too much when we detect something. It's also not possible to configure it: whatever the spec defined, or however the browser vendor decided to implement it, that's how it will work.
B
Other approaches people commonly use are libraries, or their own code, that make network calls and try to detect connectivity from responses, error codes and so on. We've also seen cases of people using image loading just to determine whether there's an error. That has some advantages, but again we cannot detect whether the user has local network connectivity issues.
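For illustration, a minimal sketch of the image-loading probe described above (the probe URL, cache-busting and timeout handling are assumptions, not details from the talk):

```ts
// Load a tiny image and treat failure or timeout as a connectivity signal.
// This is the kind of non-standard hack the proposal aims to replace.
function probeConnectivity(probeUrl: string, timeoutMs = 3000): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    const timer = setTimeout(() => resolve(false), timeoutMs);
    img.onload = () => { clearTimeout(timer); resolve(true); };
    img.onerror = () => { clearTimeout(timer); resolve(false); };
    img.src = `${probeUrl}?bust=${Date.now()}`; // defeat caching so we really hit the network
  });
}
```

As the speaker notes, a failed probe like this cannot distinguish a local network problem from a remote server problem.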
B
So I'm going to talk about two approaches I was thinking of, and if you have other ideas, of course, please share. The first one is having some kind of extension to the Network Information API; the second one is adding a ping API. We'll go over them right now.
B
So, the Network Information API: as I mentioned earlier, it's not configurable, especially not the sensitivity of when it notifies that a connection is up or down. So what if we extend it by adding some way to configure it? I'm not specifying any particular API structure or interface, just what the use case would be: we could possibly configure the RTT (round trip time) beyond which we'd treat a request as timed out, or what counts as a few consecutive timeouts or long round trips.
B
We can also consider passing a callback with a predefined interface, which would accept some information from that API and let us write our own custom implementation of the decision about whether the connection is up or down. That would probably give us more flexibility.
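For concreteness, a hypothetical sketch of what such a configurable extension could look like; none of these names exist in the real Network Information API:

```ts
// Hypothetical configuration surface for an extended Network Information API.
interface ConnectivityConfig {
  rttTimeoutMs: number;        // an RTT above this counts as a timeout
  consecutiveTimeouts: number; // how many in a row before reporting "down"
  // Optional custom decision callback, fed raw samples by the user agent:
  isDown?: (samples: Array<{ rttMs: number; timedOut: boolean }>) => boolean;
}

// Hypothetical usage, assuming navigator.connection grew a configure() method:
const connection = (navigator as any).connection;
connection?.configure?.({
  rttTimeoutMs: 2000,
  consecutiveTimeouts: 3,
} as ConnectivityConfig);
```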
B
But say we don't do that, because it's not trivial to define. The main limitation of this approach, I guess, is that we'd have to agree on what determines whether a network connection is up or down, available, noisy or not noisy. The problem is, every application might have a different threshold or different requirements for what is considered good connectivity; online gaming, for instance, might require very low latency and high availability.
B
The second idea is the ping API. The thinking is to just use the regular ping mechanism that the ping tool uses, which underneath uses the ICMP protocol: it sends to an IP destination, waits a little, and if a certain timeout passes, it considers the ping reply lost, something like that. And if we ping the default gateway, that would in most cases be our local network connection.
A
We've realized that the current API is at the same time exposing too many bits and not necessarily answering people's use cases, or not all of them, as you pointed out. And it's not the first time I'm hearing that people run their own network measurements, which are active and bad and people shouldn't be doing that, but they do, because the API is not sufficient.
A
So, on the NetInfo API front, I would love to have your active involvement on the repo. The API used to be covered by the working group; it's currently WICG-only. It wasn't adopted as part of the, what's it called, Devices and Sensors Working Group because of Mozilla's objections. I've since talked to Mozilla folks, and I think there is a path forward to modify the API in a way that will address concerns from other vendors. So I would love more collaboration on that repo and, potentially, figuring out a better way to address those use cases. The custom threshold idea could be an interesting one, in that it enables people to gather a small number of bits, but the bits they actually care about.
A
That could be interesting. Regarding the ping idea: that seems risky. I would not love for my machine to tell me "you're too far away from your office router, go back to your office" every time I go work in the kitchen, and providing that information to random web pages seems risky. I'd also like to better understand the use cases, to split out the need for local network information from the general "how is your network doing?", which is something that pages can already roughly measure.
B
Right. I think I tried to articulate that this is just an example of how we could use that ping API; whether it's a privacy problem or not, I have a slide on that. So let me just quickly go over the next slide, and then we can have a discussion on the privacy aspects. Okay, so this is just a proposed interface if we go with the ping idea. It kind of mimics how ping actually behaves; I'm sure there are other ways to formalize it.
B
In this case I just chose to have an option to send a ping and get a reply that summarizes the results, similar to how the ping tool behaves.
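A hypothetical sketch of that shape, reconstructed from the slide description; every name here is an assumption, nothing is standard:

```ts
// Summarized reply, ping-tool style.
interface PingSummary {
  sent: number;     // pings sent
  received: number; // replies received
  minRttMs: number;
  avgRttMs: number;
  maxRttMs: number;
}

// Only the default gateway is in scope for this discussion. The function is
// async, which also leaves room for a permission prompt (see the feedback
// later in the session).
declare function ping(
  target: 'default-gateway',
  options?: { count?: number; timeoutMs?: number },
): Promise<PingSummary>;

async function looksLikeLocalNetworkProblem(): Promise<boolean> {
  const summary = await ping('default-gateway', { count: 4, timeoutMs: 1000 });
  return summary.received === 0; // every reply lost: likely a local issue
}
```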
B
Alternatively, if we need more granular data, we could capture each of the ping replies, add them to the PerformanceObserver buffer, and then analyze them further if we want. It's just an idea; we don't have to decide anything. I just want to hear what you think about this structure.
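The PerformanceObserver variant might look like this, again purely hypothetical, with a made-up "ping" entry type delivering one entry per reply:

```ts
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Hypothetically, entry.duration would carry one ping reply's round trip.
    console.log('ping reply at', entry.startTime, 'rtt', entry.duration);
  }
});
// 'ping' is not a real entry type; this line only illustrates the idea.
observer.observe({ type: 'ping', buffered: true });
```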
A
Yeah, from my perspective the API shape matters less than the fact that you'd be exposing ping information up to the default gateway, which is not currently accessible. Unless... people can brute-force typical IP addresses for the default gateway, so maybe it's fine to expose it because it's effectively already exposed: people can ping 192.168.0.254 and they'll be right most of the time. Or I don't know, but yeah.
B
Yeah, it's a tentative idea for an API shape; it could be structured differently. A few items come up when we talk about privacy. Is this an API that, in order to even activate, needs some kind of permission prompt to the user, asking for approval like some other APIs do, and what would that look like? I mean, it is possible to mimic what this API proposes using existing technology.
B
It's not going to do exactly the same thing, it won't give us the exact local information, but apps already do that: they already try to diagnose the network condition using various tricks, hacks and non-standard ways of doing diagnostics.
B
So that's the way I think about it. I'm not sure this is something that requires a prompt, but if other people think it does, certainly say so.
A
This is why we have CORS; some routers may not abide well by CORS, but you know... Adding an explicit way to ping the default gateway... maybe it's fine, but yeah.
B
Well, I'm not sure whether there's a security concern if I can ping the local gateway. Regardless of privacy, am I exposing some security risk here? That still needs to be evaluated, I guess.
B
Now, one other concern that was raised when I started discussing this is fingerprinting risk: does this expose a further fingerprinting surface from the browser?
B
There are existing techniques for this: the Network Information API rounds the round trip time to 25-millisecond buckets, and it could be made more or less granular, maybe three buckets like fast, medium, slow, or something like that. Would that be sufficient to mitigate fingerprinting? I don't know; it's something we'll have to look into as well.
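For concreteness, the two granularities mentioned (the 25 ms rounding matches what the Network Information API does today; the three-bucket thresholds are invented for illustration):

```ts
const rawRttMs = 63;                                // example raw measurement
const bucketedRtt = Math.round(rawRttMs / 25) * 25; // 75, NetInfo-style rounding
const coarseRtt =
  rawRttMs < 50 ? 'fast' : rawRttMs < 200 ? 'medium' : 'slow'; // hypothetical buckets
```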
E
I have two questions here. One is: how is this going to help on a VPN-connected network? At Salesforce, most of our customers are already behind a VPN, and the VPN endpoint may not be near the local territory; it may be in a geographically separate location. So how is it going to help there? The second question: we already have a round trip time, so how would the ping response time relate to it?
E
Would it be a kind of separate entry, irrespective of the round trip time? Would it help with the round trip time, or would the round trip time be a better alternative than the ping response time?
B
Yeah, I'm not sure, actually, what the answer is regarding VPNs. It definitely changes the routing table. I actually tried to address it in the next slide, because we still have open questions when there's more than one gateway. Specifically, if you connect to a VPN you have more than one gateway, but your default route would probably go to the VPN IP while you're connected, and then comes the question.
B
Should we expose whether you're connected to a VPN or not? That's an even further fingerprinting-surface exposure; that's even worse. And also, when you're connected to a VPN, it's probably not going to be your local network.
B
If you ping the default gateway, it's going to be wherever the VPN endpoint starts, but you'd get the round trip to there, which is information that, I assume, can be useful, especially for VPN stability and so on, even though it's not your local network. Should we report that to the app in that case? It's hard to tell. I was even trying to ask whether we could use this to allow pinging more destinations than just the default gateway: maybe any DNS name that comes from my origin, or from a pre-authorized list of origins.
B
I don't know, some security measures, some way to make sure no one abuses the API. We could use throttling to make sure too many concurrent calls aren't used to create a ping storm or anything like that.
B
That's much harder to define, I guess, and harder to do securely, but that would be, I guess, the ultimate use case, much more than just the ping. But I did think about starting with the ping, just to see what you'd say. The main feedback I'm getting from you, I guess, is mostly a concern about privacy. Am I correct?
A
So, concerns around privacy, with exposing new information that is not currently exposed. Maybe putting that behind a prompt is good enough, if the prompt is explicit enough in outlining that this is a network diagnostics capability the user is enabling. But personally, I don't necessarily understand the delta between that and the need for local network diagnostics, unless you want to ship actual network diagnostics tools on the web, which is potentially a use case for, I don't know, ISPs or whatnot, but that seems like a different capability.
B
Is there anyone else on this meeting who actually finds local diagnostics an important case? Just wondering.
B
Because for us it did seem like something we want, to provide a better user experience. What's the right way of achieving it? That's a tough question, but we feel the case is strong. I wonder if anyone else has thoughts on that.
F
Yes. My main question is whether it's the responsibility of the web app, or whether it's really more of a browser responsibility, browser diagnostics that tell you your local network is having issues, kind of like Chrome's "your network is having issues, retry" UI, but something that works even with offline SPAs or offline apps. Either a network-health indicator in the browser UI, or something that's otherwise completely removed from the web content.
B
Yeah, that could also be a direction. What we're thinking of is more of a standard API that can be shared and isn't dependent on a specific user agent implementation, and that the application itself could build on.
A
One comment from Alex on the chat is that if this API does move forward, it needs to be async, to give room for a prompt or some sort of user-visible notification that enables it. So that's good feedback. Going back to NetInfo: I wonder if other vendors have thoughts and/or appetite about what a revamped version of NetInfo might look like, something they feel they can ship.
A
Because
I
had,
I
had
good
conversations
with
some
mozilla
folks
about
what
the
revamped
net
info
may
look
like,
but
it's
not
yeah.
Some
of
the
employment
situation
has
changed
there.
So
right.
A
Okay, but other than practical, shipping-tomorrow kinds of considerations: do you see a version of NetInfo that exposes granular, custom information to the web app as something you would be willing to eventually expose and ship, from a privacy perspective?

A
Okay. Sudeep, if you're speaking, we can't hear you.
B
Okay. Well, it was certainly good feedback, and this is just the first step in this direction, from my point of view.
A
Cool, awesome. So yeah, just to reiterate: I would love collaboration on the NetInfo repo, and then we can try to figure out use cases for local network diagnostics beyond what NetInfo can expose.
H
Hello, yeah. Let me just introduce myself: I am one of the co-chairs of the Web and Networks Interest Group, and we are having similar discussions there. So, two comments. One is around the Network Information API.
H
There are some proposals being brought in there around information about the network, particularly in the 5G space, where there's a lot of network variation seen, based on cell sizes and things like that. There are ideas around hints about congestion, upcoming congestion along a road path and things like that, and whether those kinds of hints can be shared with apps, so that apps can use them to, let's say, do some buffering in advance or something like that.
H
This is a proposal which has strong dependencies on information from the operator network and things like that, so I thought I'd bring it up so that you're aware there is a similar kind of topic being discussed there. My second comment is perhaps more orthogonal to this discussion, but on this topic I noticed that you're talking about APIs for network diagnostics.
H
There is an idea in our interest group where it's the other way around: extending developer tools to emulate time-variant network conditions. Which means that, similar to how you have the HAR trace format, there could be a network trace format where the trace is taken from the real world; you prepare a trace, then you replay it, and that way you test web apps.
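To make the idea concrete, a trace for such a replay tool might be a timed sequence of network conditions. This shape is a sketch of the concept, not an actual proposed format:

```ts
// Hypothetical time-variant network trace a developer tool could replay.
const trace = [
  { atMs: 0,      downlinkKbps: 12_000, rttMs: 30 },
  { atMs: 5_000,  downlinkKbps: 800,    rttMs: 120 }, // entering congestion
  { atMs: 12_000, downlinkKbps: 4_000,  rttMs: 60 },  // partially recovering
];
```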
H
How does the web app behave when the network conditions change? This kind of thing helps web developers make sure their app is able to adapt to varying network conditions. So I thought I'd bring that up in this discussion, because it's connected to some of the discussions here. I can share two links in the chat window.
A
The network trace idea sounds interesting in the context of maybe synthetic testing, but I think it's a somewhat orthogonal discussion. It seems interesting, though, so thank you for sharing, and yes, if you can share links where we can read more about it, that would be great.
D
Okay, since everyone here cares about performance, let me first relate memory to performance and motivate a bit why we want to measure memory usage. If you have a website and its memory usage increases, there could be two reasons.
D
So this is going to be useful for long-running, complex web applications. The API can also be used in local testing, but there the story is different, because when testing locally we can pass flags to the browser, to disable security, say, or to force garbage collection.
D
There are also other ways to measure memory locally. So our main motivating focus here is to get data from production and look at the aggregated data; we are not aiming to make individual calls to the API meaningful, or focusing on that. What you want is aggregated data. Okay: last year I gave a presentation to this group and proposed APIs there.
D
At that point there were many unclear parts, many unknowns, but what was already clear is that there is a trade-off space, and every design decision we make moves us somewhere in this trade-off space. In the meantime the API evolved, and I think it converged to a point in that trade-off space where the security story improved a lot; I think the interface also improved. I will describe that here, but before doing so I'd like to acknowledge and thank the people who helped and provided feedback.
D
I put up a link to a blog post that describes how to do proper feature detection and how to set up randomized periodic sampling, which will be useful for looking at aggregated data.
I
describe
okay,
so
it
turns
a
result.
But
before
I
describe
how
the
result
looks
like,
I
want
to
talk
more
about
high-level
properties
of
the
api,
and
I
think
the
best
way
to
do
it
is
to
compare
it
to
the
non-standard
performance.memory
apis
that
you
might
know
so.
The
first
thing
the
scope.
What
exactly
does
the
api
measure.
D
Websites today are complex: they can embed iframes, they can run web workers. And these websites run on top of complex browsers, which have their own heuristics for where to allocate particular iframes; depending on origin and policy, they may decide to put one iframe on one heap, in one process, and other iframes on another heap. The diagram here shows a frame tree with two websites: the one in green has two iframes, and the one in white has one iframe. Now, what exactly are the APIs measuring? The new API measures the memory usage of the website together with its iframes and workers. This is what web developers would intuitively expect, and it also maps nicely onto the spec, where we can define the scope as the browsing context group, or its agent clusters.
D
On
the
other
hand,
the
old
api-
it's
very
easy
to
implement
it.
We
just
return
the
heap
size
counter
right,
but
what
exactly
it's
measuring
in
terms
of
the
web?
It's
hard
to
say,
because
it
depends
whether
there
are
some
other
web
pages
that
happen
to
be
sharing
the
same
heap
also
note
that
the
old
api
may
underestimate
the
memory
usage,
because
some
iframes
are
not
accounted,
and
it
can
also
overestimate
the
memory
usage,
because
there
it
accounts
potentially
unrelated
pages.
D
Another
difference
is
in
providing
diagnostic
data
like
we
make
the
interface
generic
so
that
the
browsers
may
choose
to
provide
more
data
that,
in
order
to
make
the
results
more
actionable
like
it's
possible
to
break
down
the
memory
usage
by
owners
and
the
type
like
one
example
where
it
could
help
for
is
web
browser
could
say
that
hey
this
memory
is
used
by
iframes,
but
the
iframes
are
no
longer
attached
to
the
dom
tree
so
indicating
that
the
iframes
are
leaking,
and
this
amount
of
memory
is
leaking
and,
as
I
said
earlier,
the
security
story
improved
a
lot.
D
That's
because
we
are
using
cross
region.
Isolation
so
like
to
explain
so.
The
problem
is,
if
we
provide
this
api
to
the
web,
attackers
can
start
using
the
api
and
load
cross-origin
resources
and
get
the
size
of
that
resource
and
from
the
size.
Maybe
some
other
information
may
be
inferred
right,
and
this
is
kind
of
the
side,
channel
attack
and
solution
to.
That
is
that
we
require
that
the
web
page
is
cross
origin
isolated.
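Concretely, cross-origin isolation is the existing header-based opt-in, and a page can check it before calling the API (the headers and the crossOriginIsolated flag are standard; only their placement here is illustrative):

```ts
// The top-level document must be served with:
//   Cross-Origin-Opener-Policy: same-origin
//   Cross-Origin-Embedder-Policy: require-corp
// and every embedded resource has to opt in (CORS or Cross-Origin-Resource-Policy).
if (self.crossOriginIsolated) {
  // Safe to report cross-origin sizes here: everything loaded into this
  // context opted in to being visible to it.
}
```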
D
A
link
to
the
blog
post
that
explains
it
well,
but
the
main
idea
is
that
if
the
website
is
crossaging
isolated,
we
know
that
all
resources
that
were
loaded
they
opted
in
to
be
loaded
in
cross-origin
documents,
and
by
doing
so
they
also
uploaded
it
into
potential
set
channel
information
leaks.
There
will
be
more
discussion
on
this
tomorrow
in
this
session.
D
Okay, now going back to describing the result. Imagine we called this API on a website that doesn't have any iframes. In that case, one way the result could look is this: we get the total bytes of the website and then a breakdown, but here the browser chose not to show any breakdown. It is a valid implementation to not provide any breakdown.
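Something like this, with made-up numbers; the field names follow the proposal as presented:

```ts
const result = {
  bytes: 1_000_000, // total for the page
  breakdown: [],    // the browser chose not to break it down
};
```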
D
On the other hand, the browser may choose to provide a breakdown. In this case the breakdown is trivial, because there's only one entry, but it's still useful for showing and explaining the fields. The breakdown is an array, and every entry in it describes some portion of the memory: the first field is bytes, how large this portion is, and then which window, iframe or worker this portion is attributed to. Note that the attribution is also a list, which allows us to indicate that there could be multiple iframes or windows, and that this portion of the memory is attributed to that set.
D
And similarly there is a description of memory types. This part is fully implementation-dependent; I expect different browsers to have different types here. For example, these could say whether it's JavaScript memory or DOM memory, or whether the memory belongs to detached iframes or array buffers, something like that.
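A single-entry breakdown might then look like this (same caveats: the shape follows the talk, the values and type labels are invented, and the type strings are explicitly user-agent-specific):

```ts
const result = {
  bytes: 1_500_000,
  breakdown: [
    {
      bytes: 1_500_000, // how large this portion is
      attribution: [    // which window(s) it is attributed to
        { url: 'https://example.com/', scope: 'Window' },
      ],
      userAgentSpecificTypes: ['JS', 'DOM'], // implementation-dependent labels
    },
  ],
};
```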
D
Okay, now to a more interesting example. Let's say you have a website with multiple iframes: one is a same-origin iframe, another is a cross-origin iframe. Then one way the result could look is this: the browser managed to break down and attribute memory to all the iframes and windows, so you have three entries, the sizes add up to the total memory usage, and each attribution is a list consisting of a single entry.
D
What
could
also
happen
is
that
the
browser
may
decide
that
like
if
it
is
impossible
to
distinguish
between
the
same
original,
iframe
and
the
same
window,
and
it
may
choose
to
not
do
that
distinction.
In
that
case,
it
can
group
together
those
two
attributions
and
that's
the
reason
why
we
have
this.
As
a
list
or
array
and
also
perfect
developmentation
would
be
to
group
everything
together
right,
meaning
that
the
browser
cannot
distinguish
between
those
iframes
or
provide
empty
breakdown.
D
But
let's
say
we
have
the
food.com
and
it
embeds
two
iframes
one
is
same
original
frame
and
another
one
is
a
crossover
frame
and
let's
also
assume
that
same
original
frame
redirects
internally
to
another
url,
then
the
results
that
we
will
see
for
the
same
origin.
The
url
field
will
contain
the
most
recent
url
like
of
the
document
of
this.
So
since
it's
redirected
to
the
iframe
after
to
another
url,
it
will
contain
that
url.
D
Just
the
scope
will
be
window
indicating
that
this
is
an
iframe
and
we
will
have
additional
container
field
describing
the
iframe
attributes,
and
in
this
case
this
is
a
match
like
we
have
id
and
source
and
source
is
a
sort
url
before
redirect,
that's
the
one
that
was
provided
to
the
iframe
and
why
we
why
we
need
this
is
for
the
case
of
cross-origin
iframes.
D
In
that
case,
we
cannot
give
the
url
the
current
url,
because
that
one
would
be
leaking
information,
so
instead
we
provide
some
sentinel
value
like
if
you
have
better
suggestions.
Welcome.
So
by
now
we
are
going
with
cross-origin
url
and
the
scope.
D
We
are
not
even
saying
that
it
is
a
window
because
the
iframe
itself
could
contain
other
iframes
and
could
start
workers,
so
we
provided
the
scope
as
the
cross-origin
aggregated
sentinel,
and
the
most
useful
data
for
the
for
the
page
is
container
element
because
using
the
container
then
the
original
page.
That's
calling
this
api
can
figure
out
that
this.
This
is
the
iframe
that
retains
that
memory.
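Putting the whole example together (foo.com with a redirected same-origin iframe and a cross-origin iframe), the result might look like this; a reconstruction from the slide description, with invented values:

```ts
const result = {
  bytes: 3_000_000,
  breakdown: [
    {
      bytes: 1_500_000,
      attribution: [{ url: 'https://foo.com/', scope: 'Window' }],
    },
    {
      bytes: 1_000_000,
      attribution: [{
        url: 'https://foo.com/after-redirect.html', // most recent document URL
        scope: 'Window',
        container: { id: 'sidebar', src: 'https://foo.com/iframe.html' }, // pre-redirect src
      }],
    },
    {
      bytes: 500_000,
      attribution: [{
        url: 'cross-origin-url',          // sentinel: the real URL must not leak
        scope: 'cross-origin-aggregated', // may include nested iframes and workers
        container: { id: 'ad', src: 'https://ads.example/ad.html' },
      }],
    },
  ],
};
```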
A
Can you repeat the point about the URL exposure? I think I somewhat missed it. If I understand correctly, you can expose the redirected URLs for same-origin iframes, but cannot expose them for cross-origin ones. Is that the distinction?
D
Yeah. In that case, providing the most recent URL is useful, but for cross-origin iframes we can only provide what the main origin knows, and the main origin knows the attributes of the iframe, like the src attribute; the most recent URL is not provided. So now, regarding the latest status: the API is in an origin trial in Chrome. Effectively, the origin trial is running from 85 to 87, from September to January.
D
It
started
earlier
like
in
82
83,
but
we
had
a
buck
in
the
implementation
and
had
to
pause
the
origin
trial.
The
origin
trial
is
running
with
some
differences
to
the
spec
and
what
I
presented
here.
The
main
difference
is
the
security
mechanism
like
since
not
all
users
rolled
out
the
cross-origin
resolution.
D
We
are
relying
on
the
site,
isolation,
the
mechanism
that
the
all
the
api
also
used-
and
this
means
that
it's
the
scope
is
more
limited
like
we
cannot
show
crossover,
cross-site,
iframes
and
only
the
same
site,
iframes
and
additionally,
the
version.
That's
running
in
origin
trial
is
using
simpler,
attribution
format,
then
chrome
87
added
support
for
worker
memory,
and
that
was
not
possible.
This
is
something
new
like
previously.
It
was
not
possible
to
get
measure
memory
of
workers
with
the
old
api
and
the
plans
for
chrome
88
is.
D
This
will
be
the
version
where
we
will
switch
to
the
gating
behind
cross
origin
isolated.
We
will
also,
together
with
that
switch.
We
will
update
the
attribution
format.
We
need
to
sync
with
the
origin
trial
users,
because
this
is
going
to
be
a
breaking
change
and
we
will
add,
support
for
cross-site,
iframes
and
ship.
D
Hopefully,
I
can
give
like
some
some
feedbacks
that
we
received
like
described
feedback
that
we
received
mozilla
folks,
looked
at
the
api
and
provided
very
useful
feedback
main
concerns
there
were
around
interop
and
around
highlighting
that
the
data
returned
or
the
result
is
specific
to
the
browser
so
based
on
the
on
their
suggestion,
we
made
that
we
renamed.
Initially
we
had
the
memory
types
as
simply
types
now
they
are
called
user
agent
specific
types.
D
There
is
one
open
issue:
whether
we
want
to
rename
bytes
into
user
agent-specific.
Bytes,
I
I'm
curious
to
learn
what
this
form
thinks
about
that
for
me
personally,
it
seems
that
obvious
that
bytes
should
be
specific
to
the
browser,
but
maybe
we
want
to
highlight
that
even
more,
but
then,
if
we
do
that,
maybe
we
can
want
to
add
like
additional
objects,
that
just
is
named
user
agent
specific
and
then
the
result
will
become
a
field
of
that
object,
because
all
fields
may
become
user
engine
specific.
D
Then
there
was
one
more
suggestion
to
introduce
dummy
entries
and
randomize
order
of
entries
in
the
breakdown
list.
I'm
also
curious
to
learn
your
opinion
here.
So
the
idea
here
is
to
prevent
users
from
hard
coding
like
specific
indices
in
the
breakdown
like
breakdown,
0
always
means
main
window
or
something
like
that
and
there's.
Another
open
question
is
what
should
be
the
scope
of
the
api
like?
D
Currently,
we
are
going
with
the
scopes
that
is
in
what
developers
would
expect
like
the
whole
web
page
together
with
all
iframes
same
origin
cross
origin,
but
an
alternative
there
is
to
limit
the
like.
Still
look
at
the
browsing
context
group
or
the
web
page,
but
limit
it
to
the
current
process.
It's
it's
better
than
what
we
had
before
with
the
legacy
api,
because
this
would
not
include
unrelated
pages,
but
it
would
be
limited
to
a
process
and
it
would
then
depend
on
the
process
model.
D
Other
feedbacks
that
we
received
from
users
was
that
in
the
origin,
trial
version
promise
may
take
a
long
time
to
resolve,
and
this
is
kind
of
expected
because
the
way
how
we
implement
it
is
default,
the
measurement
into
garbage
collection.
So
the
measurement
actually
happens
with
the
next
garbage
collection.
We
do
it
in
order
to
reduce
the
overhead,
otherwise
the
alternative
would
be
to
iterate
the
heap,
and
that
would
be
very
costly,
but
in
local
like
this
only
affects
production
in
local
testing.
D
I
So, you mentioned the reported bytes: what kind of bytes are they?
D
They are the sizes of objects that were allocated by this browsing context group. What exactly that means, and which heap these bytes come from, is fully implementation-specific, but the high-level idea is: we look at all the objects and sum up their sizes.
I
I used to do a little memory-use analysis. One thing that is very interesting is that many modern operating systems, at least, have a memory compressor, so you end up compressing some memory, and the actual bytes in the physical sense are different from the number of bytes that you may see in the view of the virtual address space.
I
Now, another thing that's important is the distinction between dirty bytes and non-dirty bytes. If you have non-dirty memory that is mapped from somewhere else, that memory can be purged by the OS, so its cost is different from dirty memory. And even for dirty memory, it depends on what kind of dirty memory you have: if it's a completely empty page that's dirty, the compressor can take care of it and make it very small, so its cost is actually smaller than dirty memory with very high information entropy in it. Another interesting thing is how much of the memory is getting written and read at the same time.
D
I think it depends on the implementation and what the implementation chooses. In Chrome, we don't try to approximate what the actual physical memory usage would be; we report the sizes as allocated. You could describe them as virtual sizes, and if the operating system underneath does some optimizations, that is not captured. And I think that may be useful.
D
I'm not sure how to spec that. There are also fingerprinting concerns, and it's useful to avoid exposing too much system information. Regarding the non-dirty memory: memory that was mapped is, I guess, mostly shared memory that existed before, at the startup of the web page instance, and to avoid fingerprinting this kind of memory should not be surfaced. So the idea is to only surface the memory that the web page actually allocates on top of the baseline.
I
Yeah, but for example, if you have a blob and map the content of the blob into an array buffer, you could imagine that one implementation of that is just a file mapped into memory, and then you have clean memory versus dirty memory. So, I guess, if the definition of bytes is completely implementation-dependent, we could do whatever, right?
G
Such great work on this over the last year; really, huge thanks. I'm really impressed with this work and I like the way it's shaping up. Really appreciative.
G
One of my questions is (I think the cross-origin isolation check is a real breakthrough): do you think doing a privacy review now would help move the open issues toward resolved issues? What's your plan for staging that?
D
So, that's about the scope of the API, and it somehow relates to security, in the sense that if you make the scope very limited and restrict it to the same process, it should improve security, because we will not get any data from other processes. But the trade-off is that it would make the result very much dependent on the browser's process model: if the web page runs on one device and the same page runs on another device, we could get very different numbers.
B
But with GPU memory, would we be able to expose the total available GPU memory? Because, in contrast to general RAM, where the OS usually uses paging or other strategies to deal with memory stress, once the GPU memory is full, I guess bad things happen, and usually it's not handled very well. So I wonder if we can get a signal about the total available GPU memory and the attribution of different parts of the app.
D
The way it would work (it's not implemented yet, but the way I imagine it working) is that we would approximate the GPU memory. We would not get the real GPU memory, but from the things we know about in the current process, like canvas elements, we would approximate how much GPU memory is used and provide that as an approximation. I think providing the capabilities, like the actual limit of the GPU memory, may be out of scope for this API.
D
It's
most
then
moves
in
the
direction
of
like
describing
what
this
device,
what
other
capabilities
of
the
device,
and
that
requires
like
different
privacy
and
like
security
review,
because
the
concerns
are
different.
There.
B
Yeah
I
mean
in
practice
ex
the
gpu
memory
is
already
exposed,
at
least
in
chrome.
It's
easy
to
create
a
simple
script,
to
estimate
the
gpu
memory
by
allocating
canvases.
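A rough sketch of the kind of probing script being alluded to; this is a reconstruction, not code from the talk, and real drivers may defer allocations, so treat it as approximate at best:

```ts
// Allocate 64 MiB RGBA textures until WebGL reports OUT_OF_MEMORY, then use
// the running total as a crude estimate of available GPU memory.
function estimateGpuMemoryBytes(): number {
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return 0;
  const textureBytes = 4096 * 4096 * 4; // 64 MiB per texture
  const textures: WebGLTexture[] = [];
  let allocated = 0;
  while (allocated < 8 * 2 ** 30) { // safety cap at 8 GiB
    const tex = gl.createTexture();
    if (!tex) break;
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 4096, 4096, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    if (gl.getError() === gl.OUT_OF_MEMORY) break;
    textures.push(tex);
    allocated += textureBytes;
  }
  textures.forEach((t) => gl.deleteTexture(t));
  return allocated;
}
```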
C
You said this was in an origin trial, right? Have there been any success stories of real regressions found using this?
C
Okay. Such a story could make this more compelling; it's a little abstract right now. I mean, I definitely understand the value of it, but having something more concrete would help. I have to leave shortly, but I have a bit of a concern about reporting any memory usage from cross-origin iframes, because that kind of constrains the ways we can change the behavior of cross-origin frames. It also limits, or rather exposes, the way processes...
I
About process separation: it would expose the browser's strategy for how processes are split.
A
So, essentially, we were scheduled to recharter back in June, but we extended the old charter in order to better accommodate Process 2020, and because the group showed a desire to move to living standards: specifically, to switch to the CR draft model, where we take specifications from Working Drafts to CRs, and then they remain there forever and we just update the CR once in a while with various drafts and updates.
A
If you can review that charter draft: it doesn't change a ton, other than the small bureaucratic bits of which deliverables we are planning to deliver. Where it does change, the changes are around making the scope a bit more explicit, because I found the previous scope a bit vague, in that it covered everything that somehow improves the user experience.
A
So
I
drafted
some
language
around
making
it
more
explicit
than
splitting
the
different
categories
of
improvement
into
into
measurement
scheduling
and
in
adaptation
which
covers
all
the
deliverables
we
have
today
as
well
as
ones
we
had
in
the
past.
A
But I'd highly appreciate it if you all could review that draft and let me know what you think. On the practical front, I believe we have a couple of specs that we can potentially bring to Rec before the recharter is complete, or at least try to transition them to Rec before then. And Carine, I'd highly appreciate your thoughts on the feasibility of that in terms of timelines. If we can do that, we could potentially remove them from the deliverables. Then there are other specs that we could potentially bring to completion, but on timelines that will most probably land after the rechartering. Otherwise, for both Preload and Resource Hints, we talked about dismantling Resource Hints and moving bits and pieces to be integrated directly into HTML.
A
It's unclear to me whether we should just remove them as deliverables entirely, or have a section stating that they are deliverables in transition, because we won't finish that work before the rechartering. So, Carine, on that front as well, I'd love your opinion. And then for everything else, all the other specs that we won't bring to completion or transition:
the plan is to just get them to CR, and then forevermore include all updates in a CR draft, and once in a while publish a snapshot. I'm not yet sure what the mechanics of that would be, but I don't think we need to concern ourselves with that right now. Other than that, we have a bunch of specs where we would love a helping hand (we already mentioned that as part of the intro), a bunch of specs where editors are needed.
A
So if anyone is interested in contributing to specs, or in getting started with contributing to specs with help from the chairs and other active editors, this seems like a good opportunity. If anyone is interested, please don't hesitate.
A
I think the level of effort varies between the different specs, but generally, even if you have a few hours a week that you could dedicate to this, it would be highly appreciated, and we would love to help, in order to move those specs forward, make sure they're well maintained, and make sure issues don't lag.
A
Yeah, so essentially, if you're interested in the subject of specifications and willing to contribute even a few hours a week, we would love to help you get started and help you be successful with that. And otherwise: Carine, are you on the call?
A
For both Resource Timing L1 and Page Visibility L2: L1 is really complete, and for L2 we have one final issue, but we have a good plan for how to resolve it. So we could potentially do that rather quickly, if there is a prospect of moving it to Rec before we recharter; otherwise we can just leave it as a deliverable and aim to get it to Rec next quarter.
L
Yeah, okay, so two questions, actually. You want to remove it from the charter, but we need a working group to commit to maintaining it after Recommendation anyway, so we can't really get rid of them.
A
Yeah, so we plan to keep maintaining both of them. Resource Timing L1 will be done, but Resource Timing L2 will not be; it's just that currently we have both of them as deliverables in the charter, and it would be good to clean up the L1 bit. Obviously we're still committed to maintaining and keeping L2 as a living standard.
A
So I don't necessarily want to remove the deliverables altogether, but just, at least for the Resource Timing one, clean up the fact that we have two different levels as deliverables.
L
Okay, so I'm looking at it technically...

A
I don't believe there was a real blocker; it's just that we somewhat forgot to move it to Rec.
L
Oh, I'm not sure we want Level 2 as the living standard, actually. Do you anticipate that Level 3 will be highly different?
A
Potentially. I don't know if you attended the pre-rendering discussion the other day, but potentially we would be interested in adding more modes, or more signals, as to whether the page is visible or previewed or pre-rendered, some of those or all of those.
A
So,
yes,
there
will
be
an
api
change.
Are
you
saying
that,
from
your
perspective,
you
prefer
that
we'll
just
take
l2
to
be
the
living
standard
and
perform
those
changes
there.
L
That's
a
possibility.
I
I
don't
say
that
it's
the
right
thing
to
do,
but
well
we
it
depends
on
the
approach
that
the
working
group
wants
for
versioning
used
levels
for
the
moment,
but
we
didn't
have
the
ability
in
the
process
to
evolve
now
that
we
have
the
ability
to
have
the
living
standard.
Then
that's
different.
Maybe
the
question
is:
do
we
just
enrich
the
apis?
So
it's
it's
never
breaking
up
what
exists,
so
we
don't
really
need
versioning.
In
that
case,
that's
it.
A
In
any
case,
it
would
need
to
be
backwards
compatible,
so
I
think
either
way
would
work.
Benjamin
you
wanted
to
yeah.
G
I just wanted to get some clarity here; thanks so much for this discussion, this is great. I wanted more clarity about the living standards part. When we say "don't break", does that mean we can't remove interfaces? Is there some kind of more formal definition of how this evolves?
G
The second question is about versioning: can we just say "Page Visibility 2020" as a designation? Is that a sub-versioning, or what possibilities exist here? Can you explore the space with us, please?
L
When you enter Proposed Recommendation, you have to say that this is going to be a living standard; that's the first step. Once you are in Recommendation with that, you are allowed to go through a different process to amend the Recommendation, which kind of combines the patent protection that is currently done at CR with the AC vote that is done at PR. So changing your Recommendation should be much quicker.
L
The other change is that CR drafts and CR snapshots are now distinguished, whereas before, some CRs were editorial and some were substantive, so some triggered patent policy actions and some did not. It was quite confusing. But that's not related to living standards; that's going to apply to all specifications.
L
Currently the implementation of that process is not entirely clear, but there is something in the process called a last call for review of an amended recommendation, or of proposed changes, I don't remember exactly; there are two different flavors of that, and it is going to be a kind of CR/PR mix of review.
L
That's
going
to
be
normally
that
should
lead
to
recommendations
that
are
amended
in
the
same
in
this
at
the
same
uri,
you
will
have
a
new
spec
with
additional
content.
A
In
just
to
to
clarify
in
the
previous
discussion
when
we
talked
ab
like
we
concluded
that
the
living
standard
variant
of
amended
wreck
is
something
that
will
have
a
relatively
high
overhand,
so
we
prefer
to
go
with
the
cr
draft
version
of
the
living
standard.
So
basically
ip
commitments
happen
in
cr
and
then
drafts
from
that
point.
On
so
get
so
draft
snapshots
will
be
the
you
know,
tip
of
three.
L
So
the
the
difference
also
for
that
is
that
previously,
if
you
wanted
to
add
a
feature
to
a
recommendation,
let's
say
you
add
an
entire
api
to
an
an
api
that
it
already
exists
and
you
want
to
put
add
a
new
one
in
it,
and
and
for
that
you
had
to
go
to
first
public
working
draft
or
not.
It
was
an
entire
new
track.
And
now,
if
your
recommendation
was
marked
as
living
standard,
you
can
incorporate
that
and
directly
go
to
cr
new
publishers.
L
Levels,
but
when,
when
you
think
that
your
level
two
is
done,
you
publish
it
as
a
recommendation
and
you
work
on
level
3
as
a
cr
directly,
without
going
back
to
first
baby
working
draft
still
the
same
thing:
it's
the
living
standard
and
then
you
work
on
it
and
when
it's
ready,
you
want
to
publish
a
new
recommendation.
A
Yeah,
I
think
that
that
piece
will
probably
have
to
see
when
we
get
closer
to
like
an
existing
example
but
yeah.
Maybe
we
could
take
yeah,
try
that
out
with
page
visibility
and
then
move
it
to
rec
as
a
living
standard
and
then
bring
it
back
to
cr
to
add
the
pre-rendering
related
bits
or
something
like
that.
L
That
that
does
not
prevent
from
publishing
a
level
three
as
a
first
working
draft.
If
we
change
your
mind
later,
I
think
yeah,
it's
it.
It
is
not
mandatory
that
everything
that
is
related
to
page
visibility
will
be
in
the
living
standard
either.
So
I
think
the
process
is
not
is
not,
does
not
make
impossible
to
take
things
back
and
and
put
them
somewhere
else.
L
M
Hey Yoav, Nic, this is Mike Smith from W3C; sorry to butt in, I've been in other meetings, but I wanted to talk about one thing that Benjamin asked specifically. First: I work for W3C, I'm based in Tokyo, and I've worked on a lot of the transitions. Benjamin asked this interesting question, which is: what is considered a breaking change?
M
If you've taken time to review another group's spec, and you find out after the fact that they made a change and went ahead and transitioned in a way that invalidated the review you did, it doesn't make you feel super great. So part of this is just being considerate of the community of people who've taken time to review your spec and give you feedback, and considering whether some change you make is something they would like an opportunity to review.
G
But
I'm
still
not
getting
great
answers
as
to
how
disposition
of
comments
work
on
tag
review
of
this
working
group.
That
was
one
of
the
things
I
noticed
in
the
documents
in
general
was
that
that
seems
kind
of
ad
hoc
for
this
group,
and
so
I'm
just
hoping
that
there
is
a
like
a
place
where
these
re-reviews
can
happen
and
that
they're
noted
in
the
dock
and
that
place
is
usually
disposition
of
comments
from
what
I
can
tell.
So
I
just
want
to
make
sure
that
we
track
these
reviews
and
comments.
G
I did have one concrete case, and I will follow up with you on that. I was trying to find it this morning but couldn't, so I don't have a great example for you right now, but we'll provide one.
A
Cool. Yes, so we had the question about publishing, which we covered. The other question I had for W3C folks is: should we remove Preload and Resource Hints as deliverables, or can we add them in a separate section for deliverables in transition?
L
So
my
feeling
was
that
if
we
don't
plan
to
republish
them
under
this
group's
responsibility,
then
it
should
be
removed
from
the
normative
deliverable
section-
maybe
put
put
it
somewhere
else
for
because
for
ac
review
of
the
charter,
it
would
be
interesting
for
people
to
know
what
what's
happening
to
those
specs.
So
it's
still
interesting
to
have
wording
on
on
that
front.
L
So
that
we
don't
have
such
a
section
in
in
in
the
charter
template,
we
don't
have
a
section
for
things
that
we
transfer
to
somewhere
else,
so
that
was
going
to
be
more
or
less
freestyle,
spec
section
of
the
charter.
I
was
suggesting,
maybe
under
the
either
other
deliverables
or
in
the
section
where
we
have
liaison
to
other
groups
external
organization.
I
think
it
is.
M
So yeah, that's happened in the past, of course, with a number of other things. But I would just say that when it does happen, along with whatever else we were just talking about, please make sure, ultimately, that the W3C team is aware of what's going on, because of the relationship with the WHATWG: you know, we had this thing where we created a memo of understanding, and there are steps that we are supposed to go through.
A
Anything else on the charter front? Any other questions? Going once... So yeah, essentially, please review the draft and add comments there, and I'm hoping that we'll be able to kick off the rechartering very soon.
A
Okay, so thank you. I think with that we can go on the break that we're 10 minutes overdue for. Shall we take 10 minutes and reconvene at 50 past the hour?