From YouTube: WebPerfWG call - March 12th 2020
C: I don't see myself presenting in there, all right. So, let's talk about Event Timing, since we haven't talked about it for a while. I'll give a brief recap of the API objectives; I also linked the WICG repository. Those links are also in the agenda document, so you can get to them there. So, two main objectives. First, we want to enable developers to gather data on input delay.
C: What that means is, basically, how long it takes for the browser to begin processing the user's input. This delay can be caused by things like long tasks, because we can't just stop a long task and immediately start processing an input while it is running. The other key piece of data we want to expose is synchronous input processing latency. This means, basically: how long does it take from the user doing something to the screen actually updating based on that something?
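The two quantities just described can be read off a `PerformanceEventTiming` entry. A minimal sketch, assuming the standard field names (`startTime`, `processingStart`, `processingEnd`, `duration`); the observer wiring is illustrative:

```javascript
// Derive the headline numbers from an Event Timing entry.
function summarizeEventEntry(entry) {
  return {
    // Time from the user's action until handlers began running.
    inputDelay: entry.processingStart - entry.startTime,
    // Time the synchronous event handlers themselves took.
    processingTime: entry.processingEnd - entry.processingStart,
    // startTime until the next paint after the handlers ran.
    totalLatency: entry.duration,
  };
}

// Guarded so the sketch is harmless outside a browser.
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes.includes("event")) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, summarizeEventEntry(entry));
    }
  }).observe({ type: "event", buffered: true });
}
```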
C: So: how long does it take until the next paint after the event handler, or handlers, have run? Those are the two objectives. There are also non-objectives. First, we do not want to measure asynchronous work caused by events. In the previous slide I said we measure how long it takes to respond to input, but the Event Timing API will only measure the synchronous work responsible for that response.
C
So,
for
example,
if
your
event
handlers
just
post
a
task
like,
for
example,
do
like
set
timeout
something
or
they
perform
a
fetch
and
I
wait
for
that
fetch.
Then
those
kinds
of
work
will
not
be
captured
by
the
API.
Based
on
the
previous
discussions
in
the
working
group,
we
decided
that
that
kind
of
tracking
is
better
off
done
in
user
timing,
because
these
are
timing.
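For the asynchronous case, the discussion points at User Timing instead. A hedged sketch of what that could look like inside a handler; the mark and measure names are made up for illustration:

```javascript
// Event Timing only covers the synchronous part of a handler, so
// asynchronous follow-up work can be bracketed with User Timing marks.
async function onClickLikeHandler(doAsyncWork) {
  performance.mark("click-async-start");
  await doAsyncWork(); // e.g. fetch(...) in a real handler
  performance.mark("click-async-end");
  performance.measure("click-async", "click-async-start", "click-async-end");
}
```

The resulting `"click-async"` measure then shows up on the performance timeline alongside the Event Timing entries.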
C: Scrolls can be caused by many things, and some of them aren't even related to input events. For example, if I use the scroll bar to scroll down, those input events won't be captured by any event handlers, but there will still be a scroll. For that reason, scroll performance is hard, and it should be decoupled from Event Timing. Any questions so far?
A: On the asynchronous work part: I believe at some iteration we talked about being able to measure the paint after the event, so that we could somehow correlate the paint operations with the event. If that paint work is happening as a result of a fetch, that means it won't be captured here, if I understand correctly?
C: Yes. I suppose in that case you will always have event listeners one way or another, so it seems feasible to add JavaScript to track those events, and you must specify what asynchronous work you are trying to track, because the user agent, at least right now, cannot really make an informed guess about when the work triggered by these event handlers ends. Does that make sense?
C: Here in this slide, sorry, there's a lot of text, but basically we're exposing click events, mouse events, pointer events, some key events, some composition events, and drag-and-drop events, and excluding the ones that are continuous, for the reasons I stated before. For all of these input events, we only expose the ones whose duration is greater than a hundred milliseconds, which means the rounded duration is greater than 100.
C: That's because we're rounding the duration to the nearest multiple of eight milliseconds. The idea is that a hundred is a good cutoff because of the RAIL guidelines, which say you should respond to user input within 100 milliseconds, and the event duration is meant to capture precisely that: responding to the user input.
C: Right, yeah, that's a good question, and we could always add a parameter to the observe call where you could configure the threshold to be a little bit lower. I'm not entirely sure how much lower we can go; I would have to ask privacy and security folks to help me think it through. But I do think it's very feasible to make the threshold variable.
A: For the buffered case, I would think that the cases where you really care about fast clicks are cases that are highly JavaScript-driven. So at that point you've probably already registered your observers, so I would suspect that keeping the default, the one that's being buffered, and then only accumulating events below that threshold when explicitly asked, is all right.
C: Yeah, that is also another possibility. I don't know how confusing that would be to developers; I imagine already getting complaints like "you didn't tell me about all the slow events before I registered my observer". But it's quite possible that it's not that confusing, and we could just do that. That makes sense.
C: I mean, that would be easier, because we don't need to do anything more: you just filter out the ones not meeting the threshold. So I think, yep, we can add a parameter to observe. Now that we have the infrastructure for that, we should be using it. So we can add a parameter — I don't know the name right now, but something like durationThreshold — and I'll just need to think about what the minimum possible duration threshold that would make sense is.
C: All right, so to recap what is exposed via this API: first input is always exposed. Then slow inputs are exposed when their duration exceeds the threshold, which, as mentioned, could become a parameter to the PerformanceObserver.
C: I also had a question about whether entry types should be a parameter, and I suppose the answer will be yes. Basically, you would not receive all of those entry types, because you don't necessarily care about all of them, and it's possible you get some noise if you observe all of these, whereas just observing the types you care about may make more sense. So we should also add a parameter to the observer so that you can specify the subset of event types you actually care about.
C: Okay, I think... actually, I'm not entirely sure; I'd have to double-check that there's a feasible way to detect that. Yeah, I think it's a good idea to make sure there's an easy way to detect all the event types that are supported for observation. I'll make sure we have something, but I think we already do.
C: The idea of exposing that is that it lets you compute percentiles of either input delay or total input latency based on that count. If you didn't have the count, then an increase in user-engagement metrics or similar would result in an increase in the number of slow events even with no corresponding performance change, which would be pretty confusing. The event count basically gives you a denominator, so that you can make rational estimates of the average user experience, or the 90th percentile of the user experience, and so on. For the events that are not exposed, you would just assume that the duration is zero.
C: It doesn't matter too much how long the duration is for events that are fast enough, and based on that assumption you can compute percentiles.
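That estimation strategy — count everything, assume every unexposed event took roughly 0 ms — can be sketched as a small helper:

```javascript
// Estimate the p-th percentile of event duration when only slow entries
// are exposed: every counted-but-unexposed event is assumed to be ~0 ms.
function percentileWithHiddenFast(slowDurations, totalCount, p) {
  const hidden = totalCount - slowDurations.length;
  const all = Array(hidden).fill(0)
    .concat([...slowDurations].sort((a, b) => a - b));
  const idx = Math.min(all.length - 1, Math.floor((p / 100) * all.length));
  return all[idx];
}
```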
C: Okay, yeah, the idea is that if they were not exposed, you should consider them fast and not care about how long they actually took. So, I also wanted to report the Chromium status. I don't think we have anyone from Firefox or Apple here?
C: So, first-input is the entry type, and it's used by slightly more than 3% of page loads according to the Chrome Status popularity page, which is pretty substantial. We are planning to ship event counts as well as full Event Timing in the near future. That would be the "event" entry type, which will let you know about all the slow events, and I can go over the PerformanceEventTiming interface really quickly.
C: Finally, we also have cancelable, which will tell you whether the event was canceled or not. And a new addition, based on feedback about there not being any attribution at all, is the event target. The idea is that we will expose the event target, but notice that it is nullable: we can't always expose event targets. If they are in shadow DOM, we cannot expose them.
C: In addition, if they become disconnected from the document, we also cannot expose them, because they may be garbage collected and we don't want to keep them in memory just for this interface. And of course, you also shouldn't store them in your JavaScript; you should store identifiers for the element instead, if that makes sense. Otherwise there will be memory leaks.
A: I think it's, like, on the contrary. I missed it, I guess, in the previous iterations, and this seems like something we want to make people aware of: a long event that is cancelable is significantly worse than a long event that's non-cancelable. So, ideally, if people have long events, we want them to opt into making those events passive.
C: Yeah, okay, that makes sense. I will make sure to talk to DevRel about it a little bit. Oh, that list, chrome-dev. All right, any other questions from this slide?
C: It is basically an interface listing, for every event type, how many events have been dispatched. But I guess we would have to make the attributes optional until you actually support that event type, right? Otherwise you could just implement it and return zero because you don't support it, and then it wouldn't give you the event types that are supported.
C
I'm
saying
you
would
do
that
as
opposed
to
implementing
it
by
the
reporting
always
zero
and,
of
course,
that
also
doesn't
yeah.
That
doesn't
make
much
sense,
but
something
we
should
perhaps
point
out
in
this
back,
so
that
it
is
a
reliable
way
to
identify
the
employment
types
that
are
supported
by
the
user
agents.
Yeah
Romney
point.
F: There's just one thing. A lot of the time we gather measurements for perf, but some other group, trying to understand user behavior, is also interested in other measurements, to see how well or not a feature is used. I'm just wondering: those groups might be interested in asking, hey, how many clicks did the user make to do their task? In this case this API won't work, because there's a threshold of 100 milliseconds. As you said, we could maybe lower that limit, but I'm just wondering whether you expect to have people asking, hey, can we use that API to measure the number of clicks my users make on my app?
C: Okay. You use the total number to normalize the number of slow events, right? Because otherwise you just know there were, say, 10 slow events, but if the next day you get 20 slow events, you don't know whether your website just got twice as slow, or twice as popular, or what happened. So the idea is that event counts will count the total number of input events of each type, regardless of whether they pass any threshold or not.
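The normalization argument in one line: compare fractions of slow events, not raw counts, so traffic growth doesn't masquerade as a regression:

```javascript
// Fraction of events that were slow, using eventCounts as denominator.
function slowEventRatio(slowCount, totalCount) {
  return totalCount === 0 ? 0 : slowCount / totalCount;
}
```

With this, 10 slow events out of 1,000 and 20 out of 2,000 both read as 1% slow: the site got more popular, not slower.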
C: I'd have to look into it. I think the events being exposed are not, at least in Chrome, being filtered out in the browser process. But you could imagine another user agent that filters out some of these events in the browser process, that is, it does not send them to the renderer process of the corresponding frame, and in those cases there's no alternative but to populate them asynchronously.
C: The event dispatch spec does mention, very briefly, that you can skip some steps if you want: if you can see into the future and know that nothing's going to happen; something very hand-wavy like that. So yeah, it is theoretically possible, but I do believe that in Chrome, at the moment, we would catch them all in the renderer.
A
Otherwise,
we
have
a
bunch
of
on
trash
issues,
I
looked
at
them
briefly
earlier
and
they
don't
seem
super
like
most
of
them
just
seem
like
work,
but
it
seems
like
we
should
be.
We
should
try,
ask
them
to
just
you
know,
see
which
of
them
are
blocking
versus
non
blocking.
So
let
me
try
to
start
out
by
navigation
timing.
C: That's the other confusing part: we're using that term, and it shouldn't be used, because it's different, right? It confuses them. So perhaps that part is just confusing: if someone hasn't implemented that, then they're going to have a hard time understanding what they're supposed to do.
A: The WebKit folks had several examples where it's clearly not a reload. For example, if you navigated using a POST and then hit Enter, that resubmits the POST, and the browser asks you whether you want to resubmit. So it's not a reload. I believe that was their example.
C
Think
that
we
shouldn't
be
defining
the
reloads
based
on
how
the
user
got
to
it,
which
is
like,
in
this
case,
hitting
Enter
but
based
on
what
the
user
agent
is
doing,
seems
more
correct
in
terms
of
performance
analytics
stuff
yeah,
which
means
that
this
is
still
just
blocking
on
HTML
defining
reloads
and
in
particular,
for
each
reload.
What
should
be
done?
Yeah
doing
it,
yeah
go.
C
Missing
right
now
we
point
to
reload,
but
like
it's
not
defined
anywhere
and
well,
there's
now
several
kinds
of
well
legs,
there's
memory,
caches
and
all
those
so.
A: Okay, so I'm marking this as L2, just to make sure that we are pointing to the right place, but so that we'll be able to close it once everything that needs to be defined for reload is defined as part of HTML. But I one hundred percent agree with you that it's a complex landscape where different browsers are doing different things.
A: We have a new issue: "Surface chunk network timing details", issue 123. I will add it to the agenda.
A: "For instance, if the Navigation Timing API surfaced the time and size of each HTTP chunk received on the client, it would be possible to map this data back to the time specific elements were received from the network, and use this to determine the delay between receiving and displaying a particular element." So that's the description, and Hoffmann from Microsoft commented that this would be great and a very useful addition.
A: From my perspective, I think this can be useful for RUM, for debugging servers. WebPageTest added something very similar, in terms of just the coloring: a chunk was transferred here, but here, even though we have an ongoing fetch, nothing was passing on the wire. I can imagine something similar being done in RUM, where anybody can point out, oh, the h2 server is messed up, or whatever other lower-layer problem. But it's interesting that you didn't hear that request from anyone.
B: But I agree with you that if we added it, people would probably start being able to fix things with it, right? Better visibility is usually helpful for triaging things like this, especially once people start seeing things in synthetic testing: they start requesting more in RUM. So with WebPageTest leading the way here, I wouldn't be surprised if you start getting more requests for this from more advanced customers.
A: In terms of security, I don't know if you could... for example, what happens if the generated response from a Service Worker includes Timing-Allow-Origin, or the reverse: it is a cross-origin resource but is served... or maybe no, they can't do that, because it's opaque, so they cannot modify the response headers for an opaque response.
A: That's my question, and I suspect that it is, as much as that's making me sad.