From YouTube: TPAC WebPerfWG 2021 10 28 - LCP updates
Okay, so yeah, we've talked a lot about LCP in the past, and I'm currently looking into some updates or improvements, ways in which it can be better. So I wanted to share that with the group.
There we go. So LCP has shipped in Chromium and Chrome for a while now; I don't remember the exact dates. And we've gathered a lot of data showing correlation with general performance.
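For context, LCP candidates are surfaced to pages through the shipped `largest-contentful-paint` performance entry type. A minimal sketch of consuming it is below; `finalLCP` is a hypothetical helper name for illustration, not part of the API.

```javascript
// The browser emits a new largest-contentful-paint entry each time a
// larger candidate paints; the last entry observed is the page's LCP.
function finalLCP(entries) {
  if (entries.length === 0) return undefined;
  const last = entries[entries.length - 1];
  // renderTime is 0 for cross-origin images served without
  // Timing-Allow-Origin; loadTime is the best available fallback then.
  return last.renderTime || last.loadTime;
}

// Browser-only wiring (guarded so the snippet is safe to load anywhere).
if (typeof PerformanceObserver !== 'undefined') {
  const seen = [];
  const po = new PerformanceObserver((list) => {
    seen.push(...list.getEntries());
    console.log('current LCP estimate:', finalLCP(seen));
  });
  po.observe({ type: 'largest-contentful-paint', buffered: true });
}
```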
We now also have a bunch of case studies showing high impact. In this specific example from Vodafone, a 31% improvement on LCP, and just on LCP, under very strict A/B testing conditions, showed increased sales by 8%, which is a lot.
At the same time, that exposure in the wild showed a few areas where we have some gaps that we need to potentially handle better.

Progressive images are not necessarily accounted for. We don't currently differentiate between them and non-progressive images, and the LCP timestamp is essentially when the last frame of the image is rendered, rather than earlier frames, which could be good enough in the progressive image case.

Arguably, the first fully rendered frame is a good-enough paint point, rather than waiting for the full video or animation strip to end, and that is a point to potentially improve.
Videos are currently ignored: video posters are taken into account, but autoplaying videos are not. So we have a discrepancy here between animated images and autoplaying videos, which are arguably the same thing delivered through slightly different mechanisms, and it seems like we need to tackle that discrepancy.
Canvas is currently ignored, and maybe it shouldn't be. Our background image heuristics are not ideal, and I'll go into a bit more detail on that in a bit. There are also loopholes: a few techniques that are at least making the rounds involve painting very large, transparent images and tricking the metric into thinking your LCP is significantly lower.
So, on the progressive images front, there was a lot of discussion on a couple of issues between folks from the image compression community and folks who are working on LCP, trying to figure out a way to better account for progressive images. We looked into various heuristics that can at least be applied to progressive JPEGs, which is right now the only web-exposed progressive image format.
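One possible shape for such a heuristic (a sketch only; the talk doesn't settle on a design, and `scans`, `completeness`, and the 0.85 threshold are all illustrative assumptions): report the time of the first progressive scan judged visually complete enough, rather than the final scan.

```javascript
// scans: hypothetical list of { time, completeness } pairs, one per
// decoded progressive-JPEG scan, in decode order.
function progressivePaintTime(scans, minCompleteness = 0.85) {
  for (const scan of scans) {
    // First scan that looks "complete enough" becomes the paint time.
    if (scan.completeness >= minCompleteness) return scan.time;
  }
  // No scan passed the bar; fall back to the final, fully decoded scan.
  return scans.length ? scans[scans.length - 1].time : undefined;
}
```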
Looking into the future, there's also JPEG XL, which will be a progressive image format, but that's not a current issue; it's more of a future issue right now.
Essentially, we're trying to get industry data on the benefits of progressive images, because there have been some arguments from speed metrics teams saying that we need to make sure the metric is nudging folks towards the right behavior, and right now we don't really know what the right behavior is.
Despite the image community having argued for and against progressive images for the last 10 years or so, if not more, we have very little data showing that progressive rendering is actually better than non-progressive rendering, and by very little I mean I found none.
Yeah, Tammy Everts and those folks; yeah, that research is one study that was done on that front.
It is highly contentious, and what I would love to see is data from folks who are serving images, have related business metrics, and can play around with both variants to see which one drives more engagement and happier users. That would be a larger-scale study than the one you quoted, which I think is almost 10 years old now. 2014? Just yesterday, yeah.
Then, beyond progressive, there are animated images. In my opinion, we should just report the first full frame for those animated images. There's a linked issue here where I asked a few questions and didn't really see a lot of engagement, but essentially: do we want a new attribute to expose that first frame? I think we do. I don't think we want to override the current render time, because that would lead to confusion.
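A sketch of that proposal: keep `renderTime` as-is (the last frame, so existing consumers aren't confused) and expose the first full frame under a new attribute. `firstFrameTime` and `effectivePaintTime` here are hypothetical names, not shipped API.

```javascript
// Pick the paint time a consumer would care about: the first fully
// rendered frame when the browser reports one (animated images,
// autoplaying videos), otherwise the existing render time.
function effectivePaintTime(entry) {
  return entry.firstFrameTime !== undefined
    ? entry.firstFrameTime
    : entry.renderTime;
}
```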
There is a question of when to report those LCP candidates, because right now there is tight linkage between the time the LCP entry is reported and its actual render time: right after render time, the LCP entry is reported. Potentially, the first frame time is something we would consider for the start time of the LCP, the time when the largest contentful paint was actually happening.
We may want to report it ahead of time and then change it. I don't know; there are trade-offs either way, and it would be interesting to hear what folks think. And related to that: do we change the current start time from render time, where we can, to that first frame paint time?
Similarly, videos: all those questions also apply to videos. Plus, do we want to report auto-looping videos as LCP candidates?
From my perspective, it's obvious that we do, but if there are arguments against it, it would be interesting to bring them forward. Then there's the question of canvas. It's currently largely ignored.
We could take drawImage calls into account and treat them as an image. Maybe we can consider other canvas operations. I haven't given it a ton of thought, but it's a discussion worth having, and I think that at least some folks on this call have opinions.
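A sketch of treating a canvas drawImage call as an image candidate: its painted size would be the destination rectangle clipped to the viewport, as for other image candidates. The names here are illustrative assumptions, not an actual implementation.

```javascript
// Candidate size for a hypothetical drawImage-based LCP candidate:
// the area of the destination rect that is actually inside the viewport.
function drawImageCandidateSize(destRect, viewport) {
  const left = Math.max(destRect.x, 0);
  const top = Math.max(destRect.y, 0);
  const right = Math.min(destRect.x + destRect.width, viewport.width);
  const bottom = Math.min(destRect.y + destRect.height, viewport.height);
  // Clamp to zero so fully offscreen draws contribute nothing.
  return Math.max(right - left, 0) * Math.max(bottom - top, 0);
}
```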
Going back to better aligning the metric with user experience: there's an open issue to better handle low-entropy images, which outlines at least one of those techniques, essentially creating an infinitely large SVG which is transparent.
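One possible heuristic against that loophole: compare an image's encoded byte size to the area it paints. A huge but nearly empty (e.g. fully transparent) SVG compresses to almost nothing, so its bits-per-pixel ratio is tiny. The 0.05 threshold below is an illustrative assumption, not a specced value.

```javascript
// Flag candidates whose encoded data is implausibly small for the
// area they claim to paint, so they can be skipped as LCP candidates.
function isLowEntropy(encodedBytes, paintedAreaPx, minBitsPerPixel = 0.05) {
  if (paintedAreaPx === 0) return true;
  return (encodedBytes * 8) / paintedAreaPx < minBitsPerPixel;
}
```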
Yeah, and finally: better detection for background images. Right now we have heuristics here. We are currently ignoring background images on the body that take up the full viewport.
We are similarly ignoring full-viewport images even if they are markup-based, because regardless of the markup behind the technique, the user experience is somewhat similar. But sometimes the content is a full-viewport image, and it also creates the wrong incentives: if developers slightly change their designs, they get significantly different LCP scores, which is weird. So this is potentially something we want to fix as well.
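A sketch of the full-viewport heuristic described above: an image (background or markup-based) whose visible area covers essentially the whole viewport gets treated as decorative and skipped. The 99% coverage cut-off is an illustrative assumption, and it shows the incentive problem: a design just under the cut-off is counted, one just over it is not.

```javascript
// Returns true when the image's visible area covers at least
// `coverage` of the viewport, i.e. it would be skipped as "wallpaper".
function coversFullViewport(rect, viewport, coverage = 0.99) {
  const visibleW =
    Math.min(rect.x + rect.width, viewport.width) - Math.max(rect.x, 0);
  const visibleH =
    Math.min(rect.y + rect.height, viewport.height) - Math.max(rect.y, 0);
  const visibleArea = Math.max(visibleW, 0) * Math.max(visibleH, 0);
  return visibleArea >= coverage * viewport.width * viewport.height;
}
```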
So, essentially, yeah, I wanted to share all that.