From YouTube: WebPerfWG call 2021 04 29
A: Hello all, and welcome to the Web Performance Working Group call. This is being recorded and will be posted online. On the agenda today we mostly have a bunch of issues we wanted to discuss, but before we get to that: I just realized that the next call falls on a holiday here, so I won't be able to make it. The question, then, is whether we can move the call to Wednesday, May 12th, one day before the scheduled time. Would that work for everyone? Alternatively, we could skip the next call, as the agenda has been somewhat light. There are a couple of heavier topics we want to discuss, but I don't know if they'll be ready to be discussed in two weeks. So what do you all think: would a day earlier work for everyone?
A: Okay, I don't hear any loud objections, so let's move it to a day earlier, and if there are significant conflicts we can discuss them over the mailing list. Okay, cool. Otherwise, let's dive into the issues.
A: So, okay, let's start with a proposal. A couple of weeks ago we talked about a proposal to expose render-blocking information in Resource Timing. This is another proposal I made more or less at the same time, one which, similarly, I thought I had made years ago, but I failed to find any specific issue that I had filed. Currently in Resource Timing we have the initiatorType information, which is fairly coarse and doesn't provide a ton of information. Something that would be useful on top of it is the specific initiator. In DevTools, for example, we have linkability from a resource to the resource that requested it, often down to a specific line.
A: If, for example, a resource is triggered by HTML, DevTools can often tell you which specific line in the HTML actually triggered that resource request. I don't know if we need line-by-line attribution, but from my perspective it would definitely be helpful to add initiator information that, maybe alongside some concept of a fetch ID or links to a Resource Timing entry, would enable us to create dependency trees from RUM data. We'd be able to know that script A triggered script B, which triggered script C, which triggered an image load; and if that image is uncompressed, or triggered excessive processing, or whatnot, we'd be able to trace that back, pinpoint script A as what loaded it, and say to the person responsible for script A: hey, this is your fault. That holds whether script A's author is someone else on your team or a third-party provider. So that is essentially my pitch for this new feature proposal for Resource Timing, and I'm wondering what y'all think about it. Does it sound like something that would be useful from your perspective? Is it something that, for example, CDN providers could use? An idea I had in the past was to use that kind of information to figure out long dependency chains and then potentially flatten them using preload.
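To make the dependency-tree idea concrete: today a RUM script only gets the coarse initiatorType string from Resource Timing, so the sketch below assumes a hypothetical initiatorUrl field on each entry, standing in for the proposed initiator attribution. It is not part of any current spec.

```javascript
// Sketch only: `initiatorUrl` is a hypothetical field representing the
// proposed initiator attribution; real ResourceTiming entries expose only
// the coarse `initiatorType` string today.
function buildDependencyTree(entries) {
  const byUrl = new Map(entries.map((e) => [e.name, { ...e, children: [] }]));
  const roots = [];
  for (const node of byUrl.values()) {
    const parent = node.initiatorUrl && byUrl.get(node.initiatorUrl);
    if (parent) parent.children.push(node);
    else roots.push(node); // no known initiator: a root (e.g. the document)
  }
  return roots;
}

// Example: script A triggered script B, which triggered an image load,
// as in the discussion above.
const tree = buildDependencyTree([
  { name: '/a.js', initiatorUrl: null },
  { name: '/b.js', initiatorUrl: '/a.js' },
  { name: '/hero.avif', initiatorUrl: '/b.js' },
]);
// tree[0].children[0].children[0].name === '/hero.avif'
```

A RUM provider could then walk such a tree to find long chains and flatten them with preload, as suggested above.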
B: Actually, I think it would be even more useful if it were like DevTools, which shows the actual call tree, the stack traces, or even chained stack traces. I don't know what the implications of that are, but in terms of usefulness for diagnosing and determining performance issues, I'd find it useful if it can be implemented.
A: Yeah, I think it might be interesting to tie that to the JS self-profiling proposal in some way: when JS profiling is enabled, maybe we can provide more initiator data in those cases. I haven't really given it a ton of thought, but from that perspective I would tie it to the same security primitives that self-profiling has. If we're able to expose that information from a JS profiling perspective, maybe we can do the same for the initiator. Anyway, it's definitely an interesting angle to think about, and I'll try to give it some more thought. Thanks.
D: Hey Yoav, I have a question. What about when we have a service worker involved, and multiple tabs open requesting similar resources? How can we organize this mess in that situation?
A: I would imagine us having this initiator info attached to the request object in Fetch; then, if the service worker generates a new request object, the service worker becomes the initiator instead of the original renderer-based one. But I'm making stuff up here.
A: I would love for you to be involved in those discussions to see how we define this. It's definitely more complex than just "let's add initiator information and it will be great"; there are a lot of edge cases to think of, and the service worker one is definitely something we'll need to take into consideration.
D: Yeah, because currently, with a service worker, I already have enough problems trying to identify where a resource comes from, especially with multiple tabs. We don't have an ID, and it's hard to map things precisely without one; an ID would definitely help in both situations. But yeah, I'll keep this in mind.
D: Not really, because when I'm debugging I have the dedicated DevTools for the service worker, and in there it's kind of tough. I see it more in the situation where I have multiple client IDs. I'll keep this in mind, but I'm not very sure how we can map one to the other. Interesting situation, though.
A: Yeah, so for preload, I would definitely consider that the initiator. There's HTML-based preload, and for that I would consider the HTML the initiator of the preload and, if we have line-by-line attribution, tie it back to the <link rel=preload> line that kicked it off. For HTTP-header-based preloads, I would still consider the resource that delivered them the initiator.
B: Maybe I can highlight another specific use case. When you have an app with lots of third-party resources, which you don't own, a lot of the time those initiate requests for further resources, and it's hard to troubleshoot what caused them. Even though you can see those resource requests in Resource Timing, you don't always know where they're coming from or why you got them. So this would really help to diagnose those cases.
A: Yeah, this is very much the use case I have in mind for this. Oh sorry, I didn't read the... no, no, it's fine. But yeah, I'm glad to hear that I'm not alone in this. Thank you.
F: It'd be similar to what Simon Hearne's Request Map can do with synthetic testing, but actually being able to do that from RUM would be really awesome. And from Akamai's point of view, I think this kind of information would be helpful for a lot of the things that we do.
F: From the RUM perspective, we try to do some of this heuristically, or even just when we're looking at waterfalls and the like, to try to point things out for customers. But having a more definitive answer to what triggered what would let us point more precisely. As you say, "assigning blame" is maybe too harsh a term, but, you know, to figure out what the biggest hitters are.
F: And if, eventually, you look back and find rogue requests going out, security products may be interested in being able to find the root cause of that kind of exfiltration, or whatever it's doing. So not just from a performance point of view: I think shining a light on what triggered what enables all sorts of things that maybe we haven't even really thought of here, as long as we can do it in a safe, secure way.
A: Cool, cool. Yeah, the security angle is definitely interesting as well: being able to catch those kinds of exfiltration attempts in the wild, especially maybe tied together with a report-only CSP that reports suspicious requests.
D: I think another point is extensions. If an extension started the request, that might be worth exposing, but of course without showing which extension caused it, so as not to track what the user has installed.
A: Yeah, I'm not sure. How would we expose scripts from another world in that case?
D: Yeah, because if a resource is known to be injected by, I don't know, a screen reader extension, people can track that. If I see it, I know the person has this extension installed, so there might be some privacy concern here.
A: Yeah, there may be some privacy concerns. There may also be an implementation issue: if we're pointing to previous resources as the initiators, an extension is not really a previous resource. So it's definitely an interesting case to think about. I don't know whether we can actually define it, but it's worth keeping in mind. It's also possible, going back to the security case, that we can't always have attribution to the actual initiator.
A: For example, if we have a script that sets a timeout that then fetches something, I'm not sure that's attribution information that browsers actually keep. So there may be loopholes in the security story. But it's interesting to keep that use case in mind and see whether we can tackle it, or maybe we can only tackle unsophisticated attackers. Okay.
F: I'll plus-one the extension case. In RUM data we see requests triggered by extensions, and our customers often ask what's going on: why do I see this request to this weird domain that I don't recognize? They go to that page themselves and it doesn't happen to them, unless they happen to be that user with that extension.
F: So even if it were just a very simple attribution, or a pointer saying that something outside of the page triggered it, something to lock that down a bit more, because it's caused confusion for sure with RUM data. Many of those extensions trigger fetches that show up in Resource Timing.
A: Yeah, I think it would probably be safe to say that a request was triggered by an extension without saying anything more than that. I don't know if it covers everything, but it would help with the "why is this request here?" case.
A: Okay, so thanks, all, for the great feedback and great use cases. I suspect there's a lot of value here that we'll be able to unlock, but it's definitely critical to think through the various use cases this should or shouldn't cover, and the edge cases that will trip us up, hopefully beforehand. Okay, so moving on to the next issue. I don't know that this one is super actionable here on the call; I was hoping W3C folks would be here. Essentially, we need to set up auto-publishing, and in terms of the references: having those two references, to Resource Timing and Resource Timing 2, apparently tripped up Bikeshed. It's something we'd want to tackle relatively soon, but since it isn't super actionable, maybe we can move on to the next one, unless someone has strong opinions about those references and ReSpec.
A: Just on that front: I think the new spec-prod GitHub setup, which I still need to figure out and deploy, offers a great way to build ReSpec specifications and to avoid the situation we had in the past, where a spec is published but then breaks later on.
A: So I think it will enable us to have stable specs, where the build is part of the PR process and, in case something new in ReSpec broke, it breaks as part of the publishing process and not later on at some arbitrary point in time. So I'm hopeful on that front.
G: Sure, yeah. So the context was performance event timing. Basically, we always buffer the timing entries that have a duration greater than 104ms, I think, and that list will grow indefinitely.
G: So the issue becomes whether the UA should be able to clear the entry buffers at some point, because days-old entry buffers are probably not going to be helpful anymore, and I think this applies to other performance timelines as well.
A: Okay, yeah. We have clearing capabilities and buffer limits for Resource Timing, but clearing proved to be an anti-pattern: multiple scripts on the page step on each other's toes, clear the buffers, and prevent other scripts on the page from collecting that data.
F: For the Event Timing interface, we have a max buffer size in the registry of 150, so it's not unbounded, right?
G: Do we have a limit for Event Timing? I'm not too sure.
H: Yes, we do. I can explain the logic behind it a little. The main reason for having the buffers is to get data from early in the page load, while scripts have not yet registered their performance observers. So the idea is that the buffers should be sufficient to capture the initial data from the page.
H: The buffer limits are somewhat arbitrary, in that we didn't do an extensive investigation to figure out the right length that generally guarantees they'll be enough, but they should be sufficiently good. And once the buffer is full, it doesn't keep increasing in size; the assumption is that your PerformanceObserver will be installed by then.
H: So there's no need to keep new entries in any buffer if you aren't listening for them; that's the idea behind it. The only timeline that has unbounded buffers is User Timing, and the reason is that it's a very explicit developer signal when they call performance.mark or performance.measure.
H: So we decided those still need to be kept: you need to support getEntriesByName, for example, so the browser needs to keep them for that reason, and those don't have buffer size limits.
G: Yeah, so for cases like User Timing, do we think we should be able to clear the entries at some point? Because I'm just not too sure if...
A: Is the advantage memory reduction? What's the...
G: Okay, yeah. I was thinking of events, but since we have a limit for that, maybe it's a separate issue. I guess that was what mostly concerned me when I was filing the GitHub issue, and I was also thinking of User Timing.
A: That sounds right, yeah, because Facebook also had an issue with User Timing where they basically wanted it for DevTools purposes and not for real reporting, which aligns with creating many, many entries. I guess it's a shame they're not on the call today; I think they might have been interested.
I: Yeah, I don't remember if they still do this, but they would fire User Timing events and then immediately clear them from the performance timeline. In Chrome this would create trace events, and trace events don't get cleared, so DevTools would pick them up and show them in the timeline. So this was a way to add UI to the DevTools timeline without actually caring about having the entries persist.
I: They didn't want them to persist in the performance timeline, so they were explicitly clearing the User Timing events. I don't know if that's supposed to remove them from buffers, but that was the idea.
I: But they use lots of timers that probably do persist in the timeline as well. The use case we're describing is their debug usage, more advanced timing that's just for DevTools, but they probably also have plenty of regular entries, and for long-lived Facebook sessions I imagine the non-cleared ones do actually get long.
A: Okay, so essentially, yeah, I think that makes sense. We have buffer limits for all entry types other than User Timing, where we have clearMarks and clearMeasures that clear specific entries. Developers can use those to clean up marks and measures in long-lived apps, either because they just want the DevTools annotation, or because they have collected those marks and measures and sent them to the server, so they're no longer relevant and can be cleared from the buffer.
F: We could aim to tweak them over time as needed. If any of those buffers seems to be wasting too much memory without being used, we could certainly take that into consideration and consider lowering it, or increasing it, like we did for Resource Timing, where we found the buffer was not sufficient to capture everything.
H: Oh yes, and that reminds me: we should implement the dropped entries count, which I specified but have yet to ship in Chrome. So thanks for that reminder.
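The mechanism being referred to is, as specified in Performance Timeline, a droppedEntriesCount delivered through the observer callback's third options argument on its first invocation; at the time of this call it was specified but not yet shipped in Chrome. A sketch of how a RUM script might consume it, with the handler written as a plain function:

```javascript
// Per the Performance Timeline spec, the PerformanceObserver callback gets a
// third `options` argument whose `droppedEntriesCount` (delivered on the
// first invocation) says how many entries the buffer dropped before
// observation started.
function onEntries(list, observer, options) {
  const dropped = (options && options.droppedEntriesCount) || 0;
  return {
    tainted: dropped > 0, // a hole in the timeline: treat the recording as partial
    dropped,
  };
}

// In a page this would be wired up roughly as:
//   new PerformanceObserver(onEntries).observe({ type: 'resource', buffered: true });
```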
C: Better get that implemented, yeah. And that would enable us to know if...
I: However, if you haven't registered your observer in time, the buffer hits its limit, and when it's many days later there are many dropped entries: you have a huge hole in the performance timeline, and the recording is, quote-unquote, tainted at that point.
I: Is it still worth keeping those buffered entries in those cases? If you haven't registered your observer or read from it by the time it's full, it almost seems worthwhile to just clear it at that point. Perhaps keep the first couple of entries, that's debatable, but to Sean's point: is it really necessary to keep it around forever?
F: I can talk about Resource Timing specifically, because we've come across this a bunch. It was originally 150 entries, and our customers would often find that the waterfalls we present to them were not fully representative of what they thought their page load experience was. We found that on some of our customer sites we needed to manually bump that buffer size up to 250 or 300, or we would just tell them to set it to 10,000 or whatever they want and not worry about it.
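The buffer-bumping workaround described here relies on the standard performance.setResourceTimingBufferSize() call, and it only helps if it runs before the buffer fills. A sketch written against an injected performance-like object so the guard logic stands alone; in a page this would simply be the global performance:

```javascript
// Raise the Resource Timing buffer early in page load, before it fills.
// `perf` stands in for the page's global `performance` object.
function bumpResourceTimingBuffer(perf, size) {
  if (typeof perf.setResourceTimingBufferSize !== 'function') return false;
  perf.setResourceTimingBufferSize(size);
  return true;
}

// Stubbed usage; a real page would call bumpResourceTimingBuffer(performance, 300):
const calls = [];
bumpResourceTimingBuffer({ setResourceTimingBufferSize: (n) => calls.push(n) }, 300);
// calls is now [300]
```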
F: Having at least a partial result was more valuable than having no results, which is an even scarier, more concerning, you-don't-know-why kind of thing. So my preference, at least for that one Resource Timing use case, would still be to present whatever you captured and say how many entries were dropped; that's even more information, which would be good. I mean, again, I do understand the browser's concern here about not being a memory hog.
H: Yeah, I do think that if we start playing with heuristics here, it's going to get really confusing for developers and analytics providers. Of course we can keep discussing, but something to keep in mind is that adding this kind of heuristic might cause issues for the people actually looking at the data.
A
I
I
wonder
if
that's
something
that
does
need
to
be
consistent
if
you
just
type
the
data
and
therefore
like
you
can
imagine
like
it
seems
like
these
were
somewhat
arbitrarily
picked
anyway,
and
so,
if
there
is
room
to
allocate
more,
why
not
allocate
more
if
there's
no
longer
room
tablet,
that's
been
allocated,
why
not
clear
them,
but
but
anyway,
I
wonder
if
sean,
if,
if
your
questions
were
addressed
at
this
point,.
A: Cool. Any last comments on this, or shall we move on?
A: Exciting. Yeah, let me share the issue; there are a couple of issues, actually. The first one is: allow HTTP headers to be defined for the preload request.
A: This is an interesting issue that I think we talked about many years ago as part of fetch parameters. Essentially, what the Angular folks here are trying to do is preload fetch requests that are sent later. Those fetch requests are fitted with custom Accept headers, basically changing their Accept headers to indicate that they prefer JSON over XML from various REST API endpoints.
A: Because of those custom Accept headers, right now the preload cache, which is unspecified, doesn't match those requests in Chromium, and I believe in WebKit as well, so when they later call fetch they trigger a second request. That is actually the desired behavior here: requests with different Accept headers can have different responses, and matching them in the cache seems wrong.
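The matching problem boils down to this: a preloaded response is only reusable if the later request's headers match what the preload sent. The function below is a simplified illustration of that mismatch, not the browsers' actual cache-matching algorithm:

```javascript
// Simplified model of preload-cache matching: reuse only when every header on
// the later fetch matches what the preload request sent. Illustration only;
// real matching involves more than headers.
function preloadMatches(preloadHeaders, fetchHeaders) {
  return Object.keys(fetchHeaders).every(
    (name) => preloadHeaders[name] === fetchHeaders[name]
  );
}

// The preload went out with the default Accept; the app's fetch asks for JSON:
preloadMatches({ accept: '*/*' }, { accept: 'application/json' }); // false, so a second request goes out
preloadMatches({ accept: 'application/json' }, { accept: 'application/json' }); // true
```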
A: Ideally, what they would like is an attribute on preload that enables them to say "this is the value for the Accept header", or maybe something even more generic that enables them to set arbitrary headers on that request, in order to make sure it matches whatever is fetched later on.
A: On the one hand, I think the use case is legit. At the same time, I'm concerned about adding cruft to HTML. I know that in the past we talked about adding a JSON-based fetch-options attribute to the various elements that load resources, so that developers could specify a JSON object with the various fetch options they want applied to a resource request. We could have used that to define new headers, or many other fetch parameters: credentials mode, what not.
A: At the same time, like I said, I'm concerned about adding arbitrary cruft, and there's definitely a trade-off here between HTML legibility and the usefulness of preload for those use cases.
A: So I'm wondering what you all think. I pinged folks on the issue, and I see that there was follow-up: ideally, this falls out of destinations.
A: Essentially, yeah, I wonder what y'all think. First of all, is this a case you've encountered? And what do you think of the trade-off here between adding attributes that would better define those request parameters and keeping the markup lean?
A: From the silence, I'm also guessing no one has strong objections to adding those kinds of attributes to link elements.
A: Cool. So I'll take an action item to continue the discussion on the issue, and we'll see where we get. And finally, the last issue on today's agenda:
A: When using picture to load new and exciting image formats that are not yet supported everywhere, we currently have a way, using the type attribute, to preload just the latest format. That worked well when WebP was the new kid on the block and folks were interested in just preloading the latest format in supporting browsers; the type attribute gave us that.
A: But now folks are interested in preloading more than just the latest format. Nowadays we have AVIF, which is making progress in terms of support in some browsers, so people want to preload AVIF and WebP, and people are starting to play around with JPEG XL support. It's becoming somewhat more of a problem: people want to preload multiple types, or, I guess, just the best supported one out of all those different types.
A: So people want to be able to say: preload this AVIF; if you don't support AVIF, preload this WebP; if you don't support WebP, preload a JPEG, or something along those lines. In the past we somewhat avoided tackling this use case because it didn't seem to have a ton of real-life implications: the support matrix for newer formats was fairly similar to the support matrix for preload, so there didn't seem to be a concrete real-life use case. But that has changed with AVIF and with preload support now being more common.
A: And we can't use link for that, because, theoretically, we could have a link with multiple sources nested inside it, but link is self-closing, so it can't contain multiple children.
A: That would be one way of doing it, but I'm not sure the processing model for that would be reasonable, because what happens when you have three of them in the head and then a last one arrives?
A: Other than that, we could figure out something where we still take the order into account, and people would have to put the latest and greatest first; then, if you've already encountered that ID, you ignore the subsequent preloads, or something along those lines. But maybe, taking a step back: what do folks think about the use case?
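The first-supported-wins ordering being discussed could look roughly like the following. This is a hypothetical sketch (no such multi-candidate preload exists today), with format support injected as a function:

```javascript
// Hypothetical processing model for a multi-format preload: candidates are
// listed best-first, and the first type the UA supports wins.
function pickPreload(candidates, supportsType) {
  return candidates.find((c) => supportsType(c.type)) || null;
}

// A browser with WebP but no AVIF support would pick the WebP candidate:
const chosen = pickPreload(
  [
    { type: 'image/avif', href: '/hero.avif' },
    { type: 'image/webp', href: '/hero.webp' },
    { type: 'image/jpeg', href: '/hero.jpg' },
  ],
  (t) => t === 'image/webp' || t === 'image/jpeg'
);
// chosen.href === '/hero.webp'
```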
A: I know we had similar requests in the past, including something similar for fonts, which we similarly dismissed with "everyone just supports WOFF2, so there's no need". But with progressively rendered fonts and other advancements in the world of web fonts, the same mechanism would potentially be useful there as well.
H: Yeah, I guess I have a dumb question: why can't you have rel=preload inside the picture element's sources?
A: So preload, typically, is something you would put before the picture is defined, for cases where the picture is dynamically generated or added later on.
A: And those are the cases in which you would want to preload that image. If you're already discovering the picture element and the images inside it, there's no functional difference between loading it and preloading it.
A: Yeah, one more thing to bear in mind here is that whatever we come up with also needs to work in the header form of Link. So maybe something instead of an ID, since an ID would require some sort of cache that maintains state across different elements or different headers.
A: The main conclusion I have is that we need to explore this space more. There are multiple possible sketches, none of them ideal, and maybe we should take that to the issue and discuss it further there. Does that make sense for folks, especially given that we're at time?
A: Okay, cool. With that, we covered all the issues we had on the agenda. So great work, everyone, and I will see you all in two weeks minus a day.