From YouTube: WebPerfWG Design call - April 12th 2019
A
Okay, cool. So one thing I wanted to talk about: last time around we talked about potentially running a face-to-face during the May meeting. Philippe reminded me earlier this week that we should notify people eight weeks in advance of any face-to-face happening, and otherwise there were various conflicts with the potential dates we had in mind. So this is probably going to happen during the month of June.
H
For the face-to-face: FYI, on the 27th and 28th of June there is a workshop on web games in Redmond, hosted by Microsoft. I don't know if anyone here is interested in attending it, but if they are, organizing it around this workshop might also be one solution for the face-to-face meeting.
A
Yeah, I will try to collect all the data points. I know that there are some other folks with conflicts around June, so I'm trying to figure out a date that works for most, if not all, people and for which we can actually find a venue. I'll let the group know as soon as we have a conclusion on that front.
I
So we restrict ourselves to image content and text content. As I've explained before, this is similar to Element Timing in terms of the scope of the content we're considering (am I doing an echo? okay). So we have images, including background images, and for those we take the timestamp of the first paint that occurs, of course, after the image has loaded.
I
Because text nodes are affected by styling, we consider groups of text instead of just a text node by itself. We're still trying to figure out the best way to group text nodes into these groups, which will be the ones taken into account by the algorithm, but the idea is that we will have groups of text, and for those we consider the first paint, which may be the first paint without web fonts loaded, but the first paint.
I
So how is the main content determined? Right now we're using size. For images, the size is the visual size: basically the amount of space it occupies on the screen. However, we have a heuristic to reduce the size if the intrinsic size of the image is smaller. So, for example, if you have an image that is tiny but you put it as a background image, say, then it will occupy a lot of space in the viewport, but it's not that contentful.
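The sizing heuristic just described can be sketched roughly as follows. This is an illustrative model with made-up function names, not the actual implementation: the reported size is the displayed area, capped by the image's intrinsic area so a tiny image stretched across the viewport does not count as large content.

```javascript
// Hypothetical sketch of the LCP image-size heuristic described above:
// use the visual (displayed) size, but cap it by the intrinsic size.
function effectiveImageSize(displayedWidth, displayedHeight,
                            intrinsicWidth, intrinsicHeight) {
  const displayedArea = displayedWidth * displayedHeight;
  const intrinsicArea = intrinsicWidth * intrinsicHeight;
  return Math.min(displayedArea, intrinsicArea);
}

// A 50x50 image stretched to fill 1000x1000 only counts as 50*50 = 2500.
console.log(effectiveImageSize(1000, 1000, 50, 50)); // 2500
// A large photo displayed smaller than its intrinsic size counts as displayed.
console.log(effectiveImageSize(400, 300, 4000, 3000)); // 120000
```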
I
So we capture that by using the clipped intrinsic size. For text, last time we got feedback that the text size can be hard to calculate, so for now we're just saying the best estimate from the user agent for the smallest rect containing all the text nodes. Maybe we will need to be more explicit there about what that means, so we'll need to see, but that's what we have for now. And for the LCP metric we use the latest candidate found, so as long as the LCP algorithm is running:
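The "smallest rect containing all the text nodes" can be pictured as a plain bounding-box union. The sketch below is an illustration of that idea under that assumption, not the user agent's actual computation:

```javascript
// Hypothetical illustration: the smallest rectangle enclosing a group of
// text-node rects, each rect given as {x, y, width, height}.
function smallestEnclosingRect(rects) {
  const left = Math.min(...rects.map(r => r.x));
  const top = Math.min(...rects.map(r => r.y));
  const right = Math.max(...rects.map(r => r.x + r.width));
  const bottom = Math.max(...rects.map(r => r.y + r.height));
  return { x: left, y: top, width: right - left, height: bottom - top };
}

const union = smallestEnclosingRect([
  { x: 0, y: 0, width: 100, height: 20 },  // first line of text
  { x: 0, y: 24, width: 80, height: 20 },  // second line of text
]);
// union is { x: 0, y: 0, width: 100, height: 44 }
```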
I
If you find something that is bigger, then you use that instead. So in this film strip example, LCP will first look at the text, which in this case is the title of the news article, and that will be the first LCP candidate. But afterwards an image that is larger than that portion of text is loaded, so LCP will be updated to the timestamp corresponding to that image.
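That update rule behaves like a running "largest so far" tracker. The class below is an illustrative model of the behavior (names are invented, and it also folds in the termination-on-user-input rule discussed in this call), not the browser's implementation:

```javascript
// Illustrative model of the LCP candidate logic: each paint reports a
// candidate {size, time}; a larger candidate replaces the current one,
// and the first user input freezes the result.
class LargestContentfulPaintTracker {
  constructor() {
    this.candidate = null; // { size, time } of the largest paint so far
    this.stopped = false;  // set once user input arrives
  }
  reportPaint(size, time) {
    if (this.stopped) return;
    if (!this.candidate || size > this.candidate.size) {
      this.candidate = { size, time };
    }
  }
  onUserInput() {
    this.stopped = true; // scrolls/clicks terminate the computation
  }
}

const tracker = new LargestContentfulPaintTracker();
tracker.reportPaint(500, 1200);  // title text paints first
tracker.reportPaint(4000, 3100); // a larger image finishes later
tracker.onUserInput();
tracker.reportPaint(9000, 5000); // ignored: input already happened
// tracker.candidate is { size: 4000, time: 3100 }
```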
I
So, for example, if you scroll, or when you click, things like that, that would terminate the LCP computation, because user input can make all sorts of changes to the DOM, which means we can't really properly compare the timestamps of newly introduced elements with the ones that were initially in the website before the user input. And the second condition is: if there's no user input before the page unloads, then, well, obviously that's a terminating condition.
I
Then we have some heuristics to help us actually capture what we consider the main content of a website. One example is splash screens. In this example, the Twitter logo shows up as the website is loading, before the actual loaded website shows up, and the Twitter logo can be big or can be small. In any case, to solve this problem we ignore elements that are removed. So if you're still running the LCP algorithm and an element is removed, then that element cannot be the most important element.
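The removal heuristic can be layered onto the same kind of sketch. Again, this is an illustration with hypothetical names: candidates are remembered per element, removing an element disqualifies it, and the next-largest surviving candidate wins.

```javascript
// Illustrative model of the removed-element heuristic: track the largest
// painted candidate per element, drop an element's candidate on removal,
// and report the largest surviving candidate.
class RemovalAwareLcpTracker {
  constructor() {
    this.candidates = new Map(); // element id -> { size, time }
  }
  reportPaint(id, size, time) {
    const existing = this.candidates.get(id);
    if (!existing || size > existing.size) {
      this.candidates.set(id, { size, time });
    }
  }
  onElementRemoved(id) {
    this.candidates.delete(id); // a removed element can't be the LCP
  }
  largest() {
    let best = null;
    for (const c of this.candidates.values()) {
      if (!best || c.size > best.size) best = c;
    }
    return best;
  }
}

const t = new RemovalAwareLcpTracker();
t.reportPaint('splash-logo', 8000, 500); // splash screen paints big
t.reportPaint('hero-image', 6000, 2400); // real content paints later
t.onElementRemoved('splash-logo');       // splash screen is removed
// t.largest() is { size: 6000, time: 2400 }
```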
I
So in this example, by five-and-a-bit seconds there's already a bunch of the website loaded, but it's only after 13 seconds that the largest image actually finishes rendering after being loaded, so the LCP there will be much different. This is a case where it's also unclear whether the image is actually the LCP.
I
For example, if you look at the title, that title seems to be a cohesive group of text, so this is probably a good argument for having a good heuristic to group text nodes. But in this film strip we did not have any such heuristic, so that's why you can see that the two text nodes there are considered different, and so the text size is much smaller than the image size.
I
This is another example. Again, lots of things happen before the main content is actually visible. In particular, the image on the bottom takes a long time to be displayed; I should note that it's already about 23 seconds. In this case there are two elements that are pretty big, so the image below wins by a little bit, and the LCP is larger because the second image takes longer to render.
I
Similarly, there are two large images, but in this case the top one wins, and in this case it's only after, like, 33 seconds that it's fully loaded. It's unclear if we should consider the website not usable before then: you can see that by 10 seconds only a part of it was visible, and FCP was even before 6 seconds. So that's a pretty big difference. And yeah, another example.
I
There is also a big image. Many of these examples are from mobile websites; most mobile websites have a large image in the page load, so most of these will be image-based LCPs. On desktop I think it will be a bit more varied, maybe, because some websites don't have such large images and they do have maybe a little more text initially displayed, but for mobile I think image LCP will be very dominant.
I
Another example. This is also a kind of slow website, but there is a lot of text before an image first renders, so it's another example where, if you aborted before the image loaded, or if you clicked on one of the text links, your LCP will actually be the text. And the last example: this one is kind of a tricky example. It's not clear to me if the text is the most important or the image in the background is the most important, which brings up our heuristic for background images.
I
Of course we don't include things like background color, but if you have a background image on the body, then we don't count it. In this case, however, the background image is on a div, so it will be counted as the largest; so the largest image will be at, like, 29 seconds, which is quite different from the largest text, which initially renders much, much faster.
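That body-background heuristic can be sketched as a simple predicate. This is a hypothetical illustration of the stated rule (ignore backgrounds on the document body or root as likely decoration, count backgrounds on ordinary elements like divs), not the spec's wording:

```javascript
// Illustrative predicate for the background-image heuristic described above.
function countsAsContentfulBackground(tagName) {
  const decorativeHosts = new Set(['body', 'html']);
  return !decorativeHosts.has(tagName.toLowerCase());
}

console.log(countsAsContentfulBackground('body')); // false: page decoration
console.log(countsAsContentfulBackground('div'));  // true: counted for LCP
```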
B
So we've gotten a lot of feedback from web developers that first contentful paint is not super useful, because it fires way too early. I think the ideal solution there is for web developers to just be using Element Timing and annotating the elements they care about, and that's going to be strictly more useful than largest contentful paint. But we cannot count on all web developers taking the time to annotate the elements that they care about, and so we want some metric that gives developers a better sense than first contentful paint about...
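The annotation being described is the `elementtiming` attribute from the Element Timing proposal. A minimal usage sketch, with made-up identifiers and as the API is currently proposed:

```html
<!-- Annotate the elements you care about; the browser then emits
     "element" performance entries when they are painted. -->
<img src="hero.jpg" elementtiming="hero-image">
<p elementtiming="headline">Top story of the day</p>

<script>
  // Observe the paint timestamps of the annotated elements.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.identifier, entry.renderTime || entry.loadTime);
    }
  }).observe({ type: 'element', buffered: true });
</script>
```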
A
So there is a large set of heuristics here, but if you distill what we're trying to do: we're trying to use the size of the element as a proxy for its meaningfulness to the page and to the design, because typically developers and designers put larger elements for things that are meaningful for the user. And that is precisely what a heuristic is; I mean, that's a textbook definition of a heuristic, you could call it an assumption.
D
Really, the reason is that Safari, more broadly, won't do even a first paint until such time as we think the majority of the web page is ready. So for us, first contentful paint is exactly when we think the website should be shown; it was never the very first thing we ever do.
B
So we have the heuristic around removed elements not being counted, but I think it's an interesting point that Safari already has an existing set of heuristics which are basically designed to accomplish the same goal, and we have not really thought about how this metric should behave in Safari. So maybe what we should do is grab some film strips from Safari and at least eyeball how we think our metric would behave there. I mean, something like that.
G
Sorry, we have a lot of websites here at Microsoft that are built in React or other frameworks, and a lot of those have, let's say, three to five phases of rendering before they reach a usable point. A page may have, let's say, five stages of rendering, and it's about the fourth stage where it becomes kind of usable. If I understand correctly, part of the intent of this API is to attempt to cover some of those scenarios as well. Is that correct?
B
Yeah, that matches my understanding. I think, Ryosuke, if you think we can grab some film strips from popular websites in Safari and then hand-annotate how we think this metric would behave, and come up with some examples: hopefully, well, most likely, we'll see some examples where Safari waits until exactly largest contentful paint, but we'll probably also see some examples where it doesn't, and we can kind of talk through some of those together at some point. Would that be useful? "Yeah, that would be useful, but..."
K
It's not necessarily a bad model. I think it hasn't been demonstrated that one is better than the other, as in: are users more satisfied to get everything at once, or to get things as early as possible and have things load progressively, for whole web pages? I'm not aware of any research that has proved this without a doubt. So something else I want to ask is: you said there's a bunch of exceptions, including with user interaction, especially scrolling.
I
We could probably handle this kind of case based on the timestamp itself and on further information we can provide. Like, we're providing just a generic value for everybody, but letting the consumer know; then I have an idea: oh, this is probably a very small element, this probably was caused by the user aborting the website too early.
B
And we can certainly add more of that kind of attribution, for example around the first paint, and that way it's consistent with Element Timing, theoretically at least. Ryosuke, I think one thing that is different about this metric's approach versus what Safari does is that with the metric we're identifying retroactively. So we say we saw something that the page maybe meaningfully painted, but we're not confident yet, and then we can wait, see something larger, and say:
B
Okay, that previous paint was not the meaningful content; now we see the meaningful content. Whereas Safari can't do that kind of retroactive thing, which I think means that we will see a non-trivial difference in the behavior here no matter what. I don't know if it's big enough to say that the metric isn't useful, but it's a difference in design.
B
So, about what looks important to the user: if we see an image that takes up a quarter of the viewport, you might say that looks important; we would say that might be important, and then we wait. And then if we see something that takes 50% of the viewport, we say that's more important. Whereas Safari can't afterwards decide that the previous image wasn't important anymore, because it has already painted.
D
So from that perspective, I'm a bit concerned that this seems to be very much tailored to what the browser is doing today. I agree with the general idea that there is some content on a page that is important for the page, and we'd like to measure when those things are painted; that's a valid use case. What I'm questioning is whether that heuristic, as it stands today, would adequately address such a thing a long way into the future.
D
Right, like, let's think about an editor. When you first load it you see a splash screen, then once it loads you go to the main page, and there's no image, right? There's no text, because this is an editor, so it's basically blank. The main part of this editor is where the new user will type, so there's a toolbar and an empty space. How does this metric reward that?
I
I mean, presumably the text area, an element that is gigantic, will be where you can edit, so it would capture that. Only if there's text or an image there, though, right? It will probably have an image attached... if it's a blank editor, in that case we probably don't capture it. To me, for a performance metric, it's such a simple website that you will also...
B
So, for me, it's a completely different framing: there is developer demand for some method of knowing when your page has painted meaningfully that does not require manual annotation, and this is one reasonable way we can meet much of that demand. It's certainly not perfect; there are some heuristics. Do you have any ideas on, kind of...
B
So I think for suitably motivated engineers the way to explain this is: sometimes it doesn't work great, and then you should be using Element Timing and annotating stuff yourself; that's kind of the fallback. If we are trying to sort of reason through what largest contentful paint means, we can basically say that if there's a bunch of images that are the same size it picks the first one, or something that's rational. But I think, from my perspective, the guidance ideally is more: if you start running into problems, fall back to Element Timing.
B
One thing that I think is different: I think having Element Timing gives us a good fallback for developers who are motivated. We tried first meaningful paint before, with just a giant bag of heuristics; I don't remember if it was ever presented here, but I think this is simple enough and sort of user-experience-focused enough.
I
Something like that. So, just heuristics, right? And I explained how the size of images is calculated, and the reason for that is to exclude some of the elements, because they are, like, background images that are just displayed with repeat. I guess I'm not saying this is a black box that the browser has to invest a lot of resources into to figure out which is the main content; we're saying, let's consider using the size of the element in the viewport as a proxy for how important it is.
D
Using whether an element is as big as the page itself is a heuristic, right? I mean, you even seem to keep saying that this is not a heuristic, but it is a heuristic. I mean, we didn't talk about the definition of the word heuristic; so let's talk about it, right? The problem is whether this is the best proxy for the paint time the user cares about, and I...
K
They're asking for it, but they don't know what to ask for, and I think what might be interesting is to see the outcome of what people do with Element Timing, because the people who do spend the time to annotate will surely come up with patterns, things that they recommend others to look at, etc., and then maybe it will become clear what matters and what doesn't once the API is out and people actually try to use it.
K
But right now it's a lot of guesswork to figure out. You know, in one way you say: oh, this is better than first contentful paint, because that's what the film strips show for X category of websites; but this doesn't work for another, you know, set of websites where the background image will be on a div or whatever, and in discussions we...
J
I want to say something: for example, we sell a platform for customers to customize their page, and they do that almost every day, and so Element Timing for us will not work, because the customers add and remove components all the time, their own components. So for them, for example, this metric would be easy to understand.
K
What I'm saying is that we still haven't given the chance to websites that can leverage Element Timing, coming in and discovering patterns that work, to inform the design of a metric like this, or for the other websites that do not use it. That's what I mean. You know, I'm playing with Element Timing right now, and I still don't know if it's going to do the job for us, and if it's going to have a better correlation to user opinion than existing metrics. I'll tell you in a couple...
K
You know, this takes time, and you can course-correct as you go, for sure. My impression is, you know, that this is making a lot of assumptions about the user interaction model, and if it were me I would study, for instance, how users behave. You know, it's back to the scrolling thing I mentioned, and there could be other things. This is still bringing me...
K
Memories of the above-the-fold stuff, this old-school model of synthetic metrics where you're just looking at what's above the fold, which is not relevant anymore; people are not waiting to interact. And so, if you're trying to make something future-proof, which I think is what we're aiming at...
D
You could imagine some web app where, like, the page is not really fully rendered but it's building up; think about some sort of social media, like Twitter or Facebook, still loading, but maybe the user can already scroll, and then maybe the website prioritizes the content that users seem to be most interested in, you know? Did you do any study on sites that have already adopted it?
N
Our thoughts mirror a lot of what they're saying. For a generic analytics case, with probably a lot of customers, anything that can be done automatically is going to be used way more than something where we have to convince the customer to go in and instrument; Element Timing is a great example for the customers that really want to use it, or something like user timing.
N
You know, a very small percentage of our customers actually have specifically instrumented their pages with user timing, or will be doing it with Element Timing. So I really like the idea of trying to find something, heuristic-based or not, that gives us, you know, metrics like this, because our customers definitely are looking at this type of stuff: they see first contentful paint in there and they want something beyond that, like first meaningful paint or what have you, basically.
N
So on the backend, you know, we can try to figure out how to bucket, or how to present, that data in a way that accounts for the fact that it is actively affected by user input; I think much more than any other metrics that we capture today, you know, since user input generally doesn't affect nav timing.
A
So, folks on the group: on the heuristic-based approach, Nicolas mentioned a few heuristics, like heuristics around input, removed-element heuristics, heuristics around text size and whether or not text nodes that are painted together should be grouped, as well as heuristics around background images.
A
My question to the group is: how comfortable would you be if we were to distill the intent behind those heuristics as part of the specification, but leave the heuristics themselves user-agent-defined? So, I don't know, for the background-image one, it's true that browsers could figure out other ways to recognize non-significant background images versus significant ones; I don't know, based on, you know, the level of entropy in the image or whatnot.
O
From my perspective, I'm not thrilled about this, but I don't really see a much better alternative. Ideally you would want the heuristics on the backend, maybe; so in my ideal solution the website would get, like, all the paint times, and you'd apply nice heuristics afterwards. That's not really feasible: it would take up too much memory, you would track too many elements, and there may be, I don't know, privacy implications, exposing too much data.
D
Yeah, let's say some analytics provider wanted to get this information. If there was a way for analytics providers to have the heuristics, right, like the browser provides some data and the analytics providers run the heuristics on the data the browser reports, rather than hardcoding a bunch of heuristics in the browser and hoping they work for the next, you know, 20-30 years.
I
I guess the idea would be to relax the constraints for Element Timing to dispatch entries, so that the analytics providers can do the work, instead of us exposing LCP with the work already done. Does that make sense? So do you mean, like, exposing more things automatically from Element Timing, essentially, or fewer things? No; yes, more, yeah: implicit registration being relaxed, right. Oh yeah!