From YouTube: WebPerfWG Design call - July 10th 2019
A: As far as scribes go, I can probably scribe for today, unless someone else feels an urge to do that. As for the next call: it is scheduled for two weeks from now, July 25th, but both me and Ilya will be out of office, in Montreal for the IETF. So from my perspective it would be best to postpone it to a week after that, if that works for everyone.
F: Right, okay, sorry, yes — I was looking for the mute and unmute button. Yes, I typed the link. So, if you open that page, it will load — it's still a work in progress, but it will load the rest. The idea is that this page will go into our GitHub repository and be exposed on the W3C website, and it builds on data that we already have. So, basically:
F: Regarding web technologies, we have data regarding specifications, we have data regarding tests, we have data regarding repositories — our GitHub repositories — and, if you know where to look, you can find these data; they are actually public in some form or another. But you have to know where to look, and so, first and foremost, Gargantua is basically accessing those data sources and exposing them to the world in a more meaningful fashion.
F: So this page is basically saying: show me the part of the data that is related to the Web Performance Working Group. I showed an earlier version of this page to Yoav, Ilya and Todd during the face-to-face meeting, and I started to implement some of their feedback into it.
F: For the rest of the world, there is the main page that basically goes and makes it useful. For us, we need more data in terms of GitHub issues and so on; and we need to put up a page for the chairs themselves, who are also interested in finer-grained data as well. Bottom line: it is for you all to basically realize what's happening in the working group without having to wonder where it's happening.
F: One example of the underlying data: you can actually browse the data which is underneath, and it will show you everything that it knows. In this case, I did a query to say: show me all of the active specs inside the working group, and then you can dive into the specifications one by one to see all of the data we can show. You'll notice that there is a link to WPT as well, which is somewhat tenuous at the moment.
F: It's relying on data coming from Philippe [name unclear]. Actually, we don't have a clear way today to identify a specification with a given WPT directory, so it's still ad hoc and error-prone. We have a tool that does that, but I talked to Marcos last week, and it would be nice if we could have, in both Bikeshed and ReSpec, a way to say "this is the WPT directory for this specification", therefore making the link between the two clear, rather than relying on some dark magic for that.
F: It's possible to show the WPT results for the specs using icons. The problem is that if I do that, I'm going to enable a DoS attack against this poor WPT server serving those icons, which is why I did not do that on the public page. If I had done that for today, you would have all happily conducted a DoS attack on that server, mainly because every time you request an icon to show — let's say the test result for High Resolution Time on Chrome —
F: — it downloads the whole JSON file and filters it down to just what you need. So if you request an icon for Chrome, an icon for Firefox and an icon for Safari, and you do that for the 12 or 15 specifications we have in this working group, then you realize that it's downloading the same file on the server side some 70 times. And then, if we all load that same page at the same time, you can imagine the server going down pretty fast. So we will have to—
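The repeated-download problem F describes is the textbook case for memoizing the fetch on the server side. A minimal sketch — the function name, URL and result shape here are hypothetical, not the demoed tool's actual code:

```javascript
// Sketch: memoize the results download so that N icon requests share
// one fetch instead of triggering N. Names are hypothetical.
const resultsCache = new Map();

function getResults(url, fetchFn) {
  // Store the promise itself, so concurrent requests for the same
  // file also share a single in-flight download.
  if (!resultsCache.has(url)) {
    resultsCache.set(url, fetchFn(url));
  }
  return resultsCache.get(url);
}
```

With something like this in place, rendering a dozen spec icons for three browsers costs one download of the underlying JSON instead of dozens.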
E: What we talked about was the idea of showing something like issue counts, with direct links to the GitHub issue list. Then, along those lines, a way to enable the tool to show: is there something that needs to be done? — and then link directly to the issues. That was kind of how we had brainstormed it at the face-to-face, rather than, say, querying and displaying all the issue content.
F: Right now the problem is that those issues aren't live — I'm not accessing the GitHub API directly. The problem is that the GitHub API's rate limit is extremely low, so instead of using it live, we are crawling our repositories every night to gather this information. Consequently the data could be at most 24 hours behind when you're looking at it, and we're looking at ways to make this data as close to real time as possible.
F: But if you open your dev tools — especially when you use the HTML browser, a tool that I provide as well — you will see that, as you are unfolding things, it may create more fetches in the background, because you're only accessing specific information that was not pre-loaded. Which is also why it would be easier if we just generated that on the server side; then we don't have to deal with all of this cruft.
F: The idea there has a more general scope, because ReSpec will be there, Bikeshed will be there, and so on. It's just a way for the people who are creating tools around this to synchronize, because we have a greater need lately to link a lot of the information. We're hoping to have some of the MDN people there as well, with the goal that we can, you know, demo it there. Are we all okay?
J: Okay, all right. So this is an idea that we've informally floated in prior discussions, particularly involving profiling. If you recall, way back when we were talking about the size trade-off for traces, we talked about potentially adding a gzip format, and I think a few people suggested that it might make more sense to break any kind of compression out into its own API for reuse — particularly because Facebook has also seen wins from using compression on the client side in other places as well.
J: So this is just an informal, sort of discussion-provoking explainer on how we might — or whether we want to — tackle this domain at all. Yeah, as I said before, it's basically all your favorite IETF standards in the browser, many of which already ship, of course, with reference implementations — things like image decoding for cases like JPEG, or general-purpose compression algorithms like gzip and Brotli. For the most part, the plumbing is already there in most user agents.
J: We've seen quite large delivery-reliability improvements from using compression on the client side prior to sending XHRs, especially for users with really spotty network conditions, where it makes a lot more sense to spend a little bit more time on the client to prepare a smaller payload over, say, a 3G network or worse.
J: Now we have this great general-purpose programming language in browsers — why don't we use it? The big problem we ran into when leveraging wasm for compression was performance. Were you to naively ship an implementation of Snappy — a compression algorithm — in Rust, with the necessary standard libraries and allocators,
J: that's going to be around one to two megabytes of compressed wasm binary, and that's quite significant to ship to every single user for what's effectively a well-standardized algorithm. Zstd is just one example — we could equally use gzip and just change the compression level within its family, purely for performance reasons.
J: Also, it requires rewriting your code such that you can interface with the wasm address space effectively. You need to get your data inside the wasm address space to begin with, and if your existing code isn't wired up to do that effectively, then you're effectively doing a bunch of extra copies.
J: So, in general, when we thought about what we would want out of a compression API, we got these bullet points. We really wanted it to be asynchronous, so that we can defer this off the main thread and avoid horrible, horrible long tasks. And we also wanted a uniform interface, so that it is potentially extensible and not tied to, say, gzip or Zstandard.
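For reference, the shape this proposal later took on the platform is a pair of transform streams in which a format string selects the algorithm — which matches both bullet points (asynchronous, uniform interface). A sketch, assuming an environment with the Compression Streams interfaces:

```javascript
// Sketch: a uniform, stream-based compression surface. The format
// string ('gzip' here) selects the algorithm, so the same shape can
// extend to other codecs without changing the API.
async function roundTrip(text) {
  const compressed = new Blob([text]).stream()
    .pipeThrough(new CompressionStream('gzip'));
  const compressedBytes = new Uint8Array(
    await new Response(compressed).arrayBuffer());

  const decompressed = new Blob([compressedBytes]).stream()
    .pipeThrough(new DecompressionStream('gzip'));
  const restored = await new Response(decompressed).text();
  return { compressedBytes, restored };
}
```

Because these are ordinary TransformStreams, the same objects compose directly with fetch bodies or any other readable/writable pair, and the work can run off the main thread in a worker.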
J: Additionally, it would be interesting to see if we could extrapolate higher-level parameters from compression algorithms, so that we could say: on a low-end CPU we might want to use a lower compression level, so that we use fewer cycles and accept losing that extra degree of compression. And, referring back to the wasm case, it would be nice to avoid extra allocations, especially on memory-constrained devices.
J: I think keeping it in control of the developer makes a lot more sense. It would be kind of jarring to get different-sized payloads for the same inputs depending on your device. In general, there are already third-party libraries that can help you establish what tier your device is in, and those can help you make that decision.
J: Yeah. So, if you want to use the streams entry point, you can; there's also the ArrayBuffer-based one as well, which can interface with older APIs that might just take in a copy into an ArrayBuffer. It's also possible that we could add a DOMString entry point to this as well, which returns an ArrayBuffer. I don't see too many options for values to return here other than either a byte stream or an ArrayBuffer, right?
E: I guess I'm really thinking through how we avoid having websites make these bloated memory copies. By sticking to streams, they have to do extra work to copy memory, which is advantageous — meaning, if the implementation is built to conserve memory, you hopefully then do the right thing, right?
J: Right, and that was also sort of the motivation behind making the ArrayBuffer input transferable as well — you wouldn't be able to keep that input allocation around afterwards; you would just get a new ArrayBuffer back from it. So you would have to very deliberately copy the ArrayBuffer before providing it.
E: Makes sense — I'm happy with it as a perf person. Like, how can we do that? Yeah.
J: Okay, cool, thanks for the input. Any comments on this sort of high-level surface so far?
J: Just an option on fetch — it's mostly to make this more general. There are cases where you might want to commit a lot of stuff to, say, even local storage, where you want to compress it beforehand and store it, to avoid using as much disk space. Also — yeah, that's a good point regarding fetch. It was mostly just to make it as general-purpose as possible, especially because streams make it so easy to compose various types of inputs and outputs, fetch being one of them. I mean—
J: Have to go back to the main thread? Not necessarily — well, yes, you would need a sync point on the main thread to spawn it off, that is true.
J
Yeah
does
it
have
to
be
on
the
main
thread.
I
think
we
use.
Okay
is
referring
to
the
case
where
you
do
a
fetch,
it
returns,
it
yields
a
response
and
then
sorry
you
want
actually
I
was
thinking
of
the
response
case.
For
the
like
request
case,
you
would
like
well
by
the
thing
is
in
a
request:
cuz,
you
have
to
write,
you
would
want
to
Claudia,
listen,
we're
clear,
questa
compress
and
then
once
that
returns
you
would
do
the
fetch.
However,
this
streams
should
be
sufficient
for
that.
A: That's right — I think you can convince Apache to, but sure: you need some sort of negotiation between the client and the server, to see whether the server supports it, which is potentially useful. But I think most of the use cases here can be addressed anyway, because you are handling the payload eventually: you are collecting that data on your servers, so you collect it compressed, and you know the method with which you compressed it, so you can figure that out on the server.
L: Yeah, I would just like to add my support for something like this. We do a lot of jumping through hoops, ship a lot of code and use a lot of CPU in boomerang to try to compress resource timing data and user timing data — just like Andrew is suggesting for JavaScript profiles — so having some way of getting the browser to do that in a cheaper, more efficient way would be ideal.
E: I do think what Reifsteck said has some value: if the browser supports default compression through APIs, servers will quickly add default support for the decompression on the server side. So, you know, it's a chicken-and-egg problem, where today it's hard, so servers don't support it. I was just checking the web APIs to see what you'd have to do, and—
E: Yeah, it's useful to have the ability to compress data separately from the need to store it, I guess. The other question is: I wonder if it's useful to have some type of string that is unique for the settings you're passing in to create this compressor — something that is clearly standardized and reusable. A thing that can be passed along with these bytes feels like an important piece of this.
E: It could be that, or it could be a thing that goes with the data on disk, or — okay, it depends where the data is going. For fetch it's probably to tell the server, but when storing it, I don't know; I'm just imagining scenarios now. You know, if you're sending it through something else — or say the version of your client upgrades three versions, and your default write-and-read protocol is different, and you read old data—
E: —it seems useful. I'm just thinking through problems that might happen when people try to use this API. It's good, though; the idea is — well, it's probably an old idea. In fact, I just saw Eric pop in here. Eric, hello — Andrew is presenting the idea of a web compression API. Cool, yeah — client-side.
J: Okay, cool, yeah — that sounds good. I'll definitely dig more into that and see if the binary-based header approach makes sense. For the most part, though: are there many use cases where one would want to vary the compression format itself?
A: From my perspective: a mix of data, where some of it is known ahead of time — where you can invest a lot of processing power and, you know, broadly eliminate redundancy — while the rest is something you want to compress more or less on the fly. But I don't know if this is something you have to tackle in a single stream, or if those streams can be combined afterwards somehow. Yeah.
C: Just because things might be optimized differently — you know, even across browsers and operating systems — if you're always picking the same type, that may or may not be fast, or power-efficient, or whatever, for sure. So you might even like what media does: giving some sort of preferred algorithms, or a way to pick based on compression efficiency or power efficiency and stuff like that, perhaps. Yeah.
J: No, that sounds good. I mentioned in the discussion slide that it might be worth discussing whether we want to provide an abstraction for things like that across algorithms, or to effectively have each implementation take a bag of flags for these tweaks. Yeah.
A
If
you
have
multiple
options
that
you
know,
for
example,
like
again,
one
of
the
use
cases
was
implementing
SSH
in
the
browser,
and,
if
you
want
to
do
that,
there
are
like
SSH
depends
on
very
specific
Jersey
flags
like
something
related
to
the
flushing,
like
the
words
I,
don't
remember
if
it
requires
immediate
flushing
or
requires
accumulation,
but
it's
like
it's
very
requires
something
very
specific
there
and
you
want
to
enable
that.
So
you
probably
want
to
yeah
a
bag
of
flags,
probably
the
better
option.
One
more
point
is
related
to
compression
dictionaries.
A
That's
also
something
that
I
know
that
the
broccoli
folks
did.
A
lot
of
work
on
I
know
that
for
gzip
you
would
probably
like,
if
you're
compressing
resource
timing
data,
you
can
probably
have
some
sort
of
you
know,
send
down
the
dictionary
that
will
significantly
reduce
the
amount
of
data
that
you're
sending
up.
So
there's
some
trade
up
there,
but
you.
J: Yeah, that sounds good. Is there anything beyond a bag of flags that you would want for that use case?
M: Go ahead — oh, just one other point to make, for the RUM analytics use case. Presumably compressing this data is going to be an asynchronous task, but in RUM analytics you're frequently sending data back to a server as the page is unloading, in a visibility-change handler or something like that. So adding an option to fetch could be beneficial, so you don't have to do that work in an asynchronous way, which would most likely fail if you're sending it then.
K: Hey, I was just going to say really quickly, about the compression-level thing: for us, the use case would be LZ4, gzip, bzip2 as variants on compression algorithms, so that matters more to us than some kind of abstract compression-level knob. So just being able to pick the algorithm and then go and do, you know, the big bag of options — that's totally sufficient for our case. Perfect.
H: So I want to talk about Element Timing text aggregation. I linked the WICG explainer there; we also have a draft spec. But in this meeting I want to focus on the decision we made regarding how to aggregate text nodes to containing elements — though feel free to look at the WICG explainer and spec and file feedback on GitHub. So—
H: The reason we did this is, first— oh sure, okay, sorry: as a recap, the high-level problem we're trying to solve is that Element Timing wants to expose important text content, and by "important" we mean annotated by web developers. So in particular we need a way for web developers to say "this is text that I care about" — presumably something like putting an elementtiming attribute on your p, the paragraph element, or on your header element, which should enable you to get timing information about the text that is contained in that element. As text nodes themselves are not elements, we need a way to specify which text nodes belong to which elements for the purposes of Element Timing. Does that explain it? Okay, so: we chose to say that a text node belongs to its closest containing-block ancestor.
H
So
some
of
the
alternatives
that
we
considered
first
was
an
notion
of
depth.
This
was
too
arbitrary.
For
example,
adding
a
hyperlink
should
not
change
whether
the
text
in
the
link
belongs
to
the
paragraph
or
not,
and
but
it
does
increase
the
depth
of
the
text
node
because
it
will
now
be
contained
in
an
additional
element
before
the
paragraph.
So
we
I
considered
that
to
be
too
arbitrary,
Wilson
third,
defining
a
notion
of
top-level
elements,
but
I
think
that
would
require
a
lot
of
work
and
not
just
now,
but
also
in
the
future.
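The chosen rule — and why the depth alternative is arbitrary — can be sketched with plain objects standing in for DOM nodes (no real DOM involved; the display values are assumed to be the computed ones):

```javascript
// Sketch: attribute a text node to its closest containing-block
// ancestor. Plain objects ({ tag, display, parent }) stand in for
// real DOM nodes.
function closestContainingBlock(textNode) {
  for (let n = textNode.parent; n; n = n.parent) {
    if (n.display === 'block') return n;
  }
  return null;
}

// <p><a>link text</a></p> — wrapping the text in an inline <a> adds
// depth but does not change which block the text belongs to.
const p = { tag: 'p', display: 'block', parent: null };
const a = { tag: 'a', display: 'inline', parent: p };
const text = { text: 'link text', parent: a };
```

Under this rule, `text` is attributed to `p` whether or not the inline link is present, which is exactly the property the depth-based alternative lacked.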
H: Then we have another example where the bottom is much improved. As you can see, before, it was a lot of small red rectangles; it is transformed into more cohesive groups of paragraphs. And here you can see that where styling or linking is being used, the aggregation approach takes care of that.
D: Oh, am I back? Sorry about that — I was trying to unmute my microphone enough to let me talk. But anyway, just a quick question about this: if a developer, let's say, does want to learn something about a very specific part of the text, are they able to? Could you, as a developer, just wrap it in an element so that you could get the timing on that one thing — to force disaggregation? Is that the idea?
N: With our current proposal that would not work; everything would aggregate up to the nearest containing block. There's an issue where we don't want to report a single element — or a single text node — twice, because that seems kind of confusing. And so, if you annotate both that span and the parent element, we could potentially be reporting it multiple times. Maybe that's correct, but that's why we were shying away from that.
A: Well, yes, we are — so, yeah, I guess if there are any other questions, folks can open issues.