From YouTube: WebPerfWG call - April 23rd 2020
A: Okay, and we're live. As a reminder, this call, like all calls, will be recorded and posted online, and with that we can get started. For scribing, does anyone... I can't read... okay, cool, thanks. For the next call I actually have a conflict, and potentially other folks on the Chrome team do as well, basically throughout this entire week, for the slot the call is currently at. So for the next call, as a one-time event, we could have the call either significantly earlier or significantly later: same day, but either 8:00 a.m. Pacific time or 1 p.m. Pacific time. Does that work for folks? Do folks have a preference either way?
A: Yeah, but I'm not sure if Gilles and other Wikimedia folks are similarly affected. So I guess if West Coast folks can make 8:00 a.m., then that would be the best option, I think. Devon's? Yes. Only to 7:00.
A: Basically, our current charter is going to run until the end of June, June 30th, so we need to recharter. I prepared a charter document that is more or less based on the existing charter. I didn't change a lot other than the latest publications and whatnot, but I wanted to run a few things by folks. First of all, if anyone who's interested would be able to go over the charter document and leave comments, that would be highly appreciated.
A: Otherwise, I wanted to make sure that we have agreement on the larger themes that we want to focus on during the next charter period. That doesn't mean we'll abandon existing work; it just means that forward-facing work is likely to find itself in one of those buckets. I'm not yet sure how to phrase the charter language with those larger themes in mind, but I wanted to first run it by the group to make sure that we are all aligned on the same themes.
A: Yeah: SPA performance, figuring out a way in which we can have a better way of measuring single-page apps and soft navigations. Runtime performance, so extending the work that we did on long tasks and making sure that we are able to measure runtime performance as well. And finally, user experience metrics: we've talked about layout instability and...
A: Basically, it was obvious that when they graduate, they would graduate to the Web Performance Working Group, and I wonder if we're already there on some of them, or potentially all of them. I know that we already talked about event timing, and we concluded that we'd send a CfC, which never actually happened.
C: I think it would be nice to figure it out both from the implementation side, so getting feedback from Google, Apple, Mozilla and others on those specs, but also getting the user side as well, like Stephen, for example, to see which one of those is more important from his user-side point of view, and the same thing with Wikimedia and so on. Yeah.
A: So, event timing is a proposal that Nicolas has been working on to basically measure the timing of events, which we've talked about before. First Input Delay is something that has shipped in Chrome, which is a part of that larger proposal, and now Nicolas is working on the rest of it. It basically enables developers to measure slow events and be able to report them, and as such it is targeted at the interactivity metric.
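As a sketch of the reporting pattern just described (developers measuring slow events so they can report them), the selection step might look like this. The entry shape and the 100 ms threshold are illustrative assumptions, not values from the discussion:

```javascript
// Sketch of the reporting logic described above: given event-timing-style
// entries (objects with a name and a duration in milliseconds), keep only
// the slow ones so they can be sent to an analytics endpoint.
function selectSlowEvents(entries, thresholdMs) {
  return entries.filter((entry) => entry.duration >= thresholdMs);
}

const entries = [
  { name: "click", duration: 4 },
  { name: "keydown", duration: 120 },
  { name: "pointerdown", duration: 40 },
];

// With a 100 ms threshold, only the 120 ms keydown is selected.
const slowEvents = selectSlowEvents(entries, 100);
```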
A: Okay, so event timing ties back to the runtime performance larger theme. As an interactivity metric, it enables developers to measure frustrated users in the wild: every entry that event timing will surface is an entry where the site has failed to respond quickly to user input.
B: From a consumer's point of view, at Akamai with mPulse, we definitely are interested in event timing, because of First Input Delay and just the ability to measure more of these events. Obviously we have a lot of customers asking to get access to this data in a more reliable way than we have today.
A: For later, yes. So maybe this would be a better use of everyone's time: do that as a homework exercise and go over the list. I believe that document is world-readable, so anyone can comment. If you could just comment on the document with your level of interest in adopting each of those proposals, then we can take it from there. Benjamin, you generally...
F: The only thing we have is the buffered flag, where we manually add the previously created entries to the observer when you observe with the buffered flag. So this proposal is to add an algorithm which you can call whenever you are about to add an entry to a PerformanceObserver, to see if that entry is actually eligible to be added to the PerformanceObserver. This, for example, works for event timing.
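The per-entry-type hook being proposed can be sketched in plain JavaScript. The registry layout and all names here are hypothetical illustrations of the idea, not the spec's actual algorithm:

```javascript
// Sketch of the proposed hook: before an entry is added to a
// PerformanceObserver, look up an eligibility algorithm for its entry
// type and call it. Types without a registered algorithm fall back to
// "return true", the default behaviour.
const eligibilityAlgorithms = new Map([
  // For event timing, eligibility depends on the entry's duration.
  ["event", (entry, options) =>
    entry.duration >= (options.durationThreshold ?? 16)],
]);

function shouldAddEntry(entry, options = {}) {
  const algorithm = eligibilityAlgorithms.get(entry.entryType);
  return algorithm ? algorithm(entry, options) : true; // default: return true
}

const fastEvent = { entryType: "event", duration: 8 };
const slowEvent = { entryType: "event", duration: 48 };
const markEntry = { entryType: "mark", duration: 0 };
```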
F: Yeah, in fact, the reverse: it's called a durationThreshold. I guess the threshold could be reused for long tasks, maybe with different minimum values, or whatever would be specified in the algorithm for the entry type. But you can, I mean, reuse the parameter. So for event timing, we intend to have a minimum of 16.
F: So if the duration value is 16 or more, then you create an entry, which you can observe with the PerformanceObserver. That will be specified in an algorithm which will be linked from the corresponding row of the registry for PerformanceEventTiming. Right now, as you can see (this is just the draft PR), all of them just say "return true", which is what the default would be right now.
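The clamping just described, using the minimum of 16 mentioned above, can be sketched as follows; the function names are illustrative, not from the spec:

```javascript
// Sketch of the durationThreshold behaviour described above for event
// timing: whatever threshold an observer requests, the effective value
// never drops below the minimum of 16, and an entry is only created
// (and therefore observable) when its duration meets that threshold.
const MIN_DURATION_THRESHOLD = 16;

function effectiveThreshold(requested) {
  return Math.max(MIN_DURATION_THRESHOLD, requested ?? MIN_DURATION_THRESHOLD);
}

function createsEntry(durationMs, requestedThreshold) {
  return durationMs >= effectiveThreshold(requestedThreshold);
}
```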
F: Basically, every entry where the entry type matches the PerformanceObserver will just be added to the observer. But over time, if we add other parameters to the observe method, then we can add an algorithm in the relevant spec and just link it here, so that you can filter out some of those entries before they reach the observer. Did that make sense?
C: And that's what I was getting at. It's not adding a new feature to the spec; you're just adding an abstract definition, so it could be considered... I'd have to double-check with the Director, but I would say: let's see if we can make that change. In any case, it's still going to be called L2 if we want it to be called L2, by the way. Just so you know, with the current process...
F: Related question: what's the status of thinking about moving to living standards? Because I thought there was some suggestion about it.
C: Last time when I talked to this group, you were not as extreme as some groups like Service Worker, where they only want to do living standards; you want to be able to do both versioning and living standards, which is perfectly fine. Your charter right now is set to run out at the end of June. If you're telling me (and we don't have to make the decision today) that you actually would... well, "I really would like to have the option of living standards"...
C: Then what I'd suggest is to actually delay the new charter until we can adopt Process 2020. We would basically request an extension of six months, and as soon as Process 2020 becomes available, we do a new AC review. At least the new process will require an AC review anyway, because of the change in the Patent Policy. So, unless you think that some of the incubating proposals are not in scope today, and therefore we cannot publish them as we want, in which case what I'd recommend is...
C: It doesn't necessarily have any impact on implementations, nor even on making progress on the specification. If you want to talk about it from a marketing perspective, if you want to talk about High Resolution Time version 1 versus version 2, for example, that could be useful.
C: So it really depends on what we're trying to achieve with the resulting document. If the answer is, kind of, "we don't really care about versioning", then I'd say: well, you don't really care about Recommendation, and a living standard is good enough for you. I mean, both will guarantee that you have the patent commitments, and from a process point of view there are a few more requirements to reach Recommendation, but we may not care about what those requirements are.
A: So, basically, and I'm not sure what the exact process currently being recommended is, but right now the way that we develop those specifications is that we start out from a Working Draft. Resource Timing L1, for example, started as a Working Draft; then, as implementations picked up and what was specified was tested, it moved to Candidate Recommendation, then Proposed Recommendation, and finally to Recommendation. At that point, if we want to add more features, which we do, we kick off Resource Timing Level 2 and start the same process from the beginning.
A: And then, when that is done, we would do something similar for Resource Timing L3. The alternative model, the living standard model, is to start out something new as a Working Draft, then Candidate Recommendation, and then I believe it stops there: you get your patent commitments at Candidate Recommendation time and update the Candidate Recommendation as you go.
C: And to be clear, by the way, just because we say Level 1 doesn't mean that we are preventing you: as a model, you can do living standards, and from time to time you take a snapshot and you stick a version number, or whatever you want, next to it, if you want to. So the two stories are not incompatible. It's just: what do we need as a working group, and what do we think the community needs as well, and just figuring that out.
C: If we do things right, by the way, the PR process should become just a purely administrative thing. But you're right, the PR does require more checks and balances. For example, if we hadn't tried to move to Proposed Recommendation, we wouldn't have been cornered by the privacy people and so on over the High Resolution Time issues. It's a forcing factor as well. But you're right, we can reproduce some of that by adding an internal process within the working group; we can put...
C: So, Ben, for example, you like the idea of living standards, but it's kind of like: if we only have one implementer for a living standard, should we still have it as a living standard in the working group? And if we don't want that, then okay, we have to put set limits on ourselves as well.
C: Yeah, so yes, there are drawbacks, and those specs today cannot move to Proposed Recommendation because they only have one implementation. But those specs would still be... they are only called Candidate Recommendation, but they would be able to become living standards tomorrow, or whatever we come up with to call it otherwise. So that's where, from a community perspective, it's hard to understand...
C: In the meantime, to go back to Anna's issues, by the way, I don't think we should delay on those issues just because of this; that wouldn't be fair to him. So we can do 87 and 88 under L2, and 489 we can do under L3, and switch L3 to living standards once we have the new charter in six months, by the way. Yeah.
A: Makes sense. Yeah, I'm also not a hundred percent sure that the definition of isolated contexts has already fully landed, so it might be that it's not currently feasible for us to actually send the PR on HR Time that defines this properly. I know that there are a bunch of in-flight PRs against HTML that define the various primitives that isolated contexts rely on. So yeah, we could... yeah, but in any case it's in...
A: Well, yeah, so there's CORP, and there is COOP, which is potentially more well defined; I think it's a separate specification that monkey-patches HTML and Fetch. And then there's the isolated contexts proposal from Mike West, and I don't know what the status is on that. That basically ties all of these together into a new thing that HR Time, for example, can rely on and say: only in isolated contexts can we expose high-resolution timers. Yeah.
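The gating idea just described (full timer resolution only in isolated contexts) can be sketched as follows. The 0.1 ms coarse granularity used here is purely an assumption for illustration; the discussion names no concrete values:

```javascript
// Sketch of gating timer resolution on isolation, as described above:
// in an isolated context a timestamp passes through at full resolution,
// otherwise it is coarsened to a fixed granularity (assumed 0.1 ms here).
const COARSE_GRANULARITY_MS = 0.1;

function coarsenTimestamp(timestampMs, isIsolated) {
  if (isIsolated) return timestampMs; // high-resolution timer allowed
  return Math.floor(timestampMs / COARSE_GRANULARITY_MS) * COARSE_GRANULARITY_MS;
}
```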
C: So this comes from the Distributed Tracing Working Group, which I'm also involved in in my spare time. They are tracking requests going through microservices within clouds, and also across clouds, so that when something goes wrong, you can basically figure out which microservice failed and trace it.
C: So they pass trace IDs between the clouds as well, and a while ago, back in November last year, they were wondering: can we use Server Timing for that? Because it's already identifiers coming from the servers that could be passed along. And right now the latest is that they have moved away from attempting to use Server Timing.
C: So I would suggest that we close this issue with no action on that front, or saying that we don't recommend using Server Timing to carry information that isn't simply timing information. Now, the other worry that I have, which is tangentially related to Server Timing, is that right now we say we only carry timing information, but the fact is you can put whatever you want in those timing entries; it doesn't have to be timing information.
C
G
A
A
A
Yeah
they
can
just
use
a
server
timing,
as
is
the
fine
request
headers
that
browsers
shouldn't
care
about,
but
then
use
the
browser
like
use,
client-side
code
to
collect
that
information
from
multiple
servers
and
send
that
to
their
reporting.
Endpoint
and
the
origins
need
to
trust
that
client-side
code
to
do
the
right
thing.