From YouTube: WebPerfWG call - 2023 07 20 - Canva and WebPerf APIs
A: Thank you. Cool, hi. So yeah, my name's Andy, and I'm joined by Ben here. We both work at Canva on performance, and we're here to talk a little bit about how we measure performance internally at Canva. This presentation was prepared with the help of Anthony Morris and another colleague, but they couldn't be here today. First, what is Canva, for those of you who don't know: it's an online design tool for creating multiple types of content.
A: You know, from social media posts to presentations, long-form documents, videos and more, plus the ability for users to share and collaborate on them, and also print them.
A: So in terms of our front-end architecture, we split it into multiple single-page applications, which internally we refer to as pages, and each of those pages has its own specific domain: an application just for logging in, an application just for managing your designs, your settings page, and then the editor itself is its own single-page application.
A: We also share that same front-end code base, with a thin Cordova wrapper, to create our native applications for iOS and Android, and we also have a native desktop app which again leverages the same shared web code base. Going into the things these applications deal with, we split them into two main groups.
A: There's navigation and content management, which is anything to do with organizing designs into folders, renaming them, user settings, all of those kinds of things. We typically find these are shorter sessions, the interactions involved in these pages are typically just clicks, searching and navigating, quite basic stuff, and the content involved is very consistent from user to user. For example, here's a screenshot of what we call our home page.
A: You can see it's just thumbnails of images with text below them, which is very similar content for all of the users who view this page. And then we have the editor, which is a very different type of page: it's much more highly interactive, we see much longer sessions of several hours, and it often involves scrolling, dragging elements, zooming and uploading, with much more varied content and typically a large number of DOM elements.
A: We render designs using the DOM, not something like canvas, so we can very quickly get to quite complex DOM trees. There's also huge variation in content, which is kind of by design: every user will create a different kind of design. You can have an empty design, which is relatively fast to load and make interactive, or you could have a design with thousands of elements in it, which is much more complicated both to load and to interact with.

A: When it comes to performance, we split it into three main categories at Canva. There's the initial page load, loading the page before it becomes interactive; there are the interactions during the session; and then sometimes, unfortunately, we have crashes, especially in our mobile applications.
A: We're still iterating on this, but we're definitely building up some experience of what we think makes a good metric for driving change at Canva. One of the key principles we're finding works well is having a simple metric, ideally one number: that makes a lot of the statistics much easier when we're doing experiments, compared with a more complicated composite metric built out of multiple numbers.
A: So historically, we tried to use a single definition for page load across all of our different single-page applications: we tried to use Time to Interactive as the one metric to report across the whole company. We definitely found that challenging. Teams found it hard to move that metric one way or the other based on A/B tests, and we also found that different pages had very different requirements; some pages don't necessarily need to be interactive if the user is just viewing something.
A: So some pages are finding that the Element Timing API is very useful for them. In the editor, we're actually leaning towards a fully custom "design interactive" metric that we're working on ourselves, which I'll detail on the next slide. And then other pages find that the generic Core Web Vitals are actually the most relevant to the way users use their pages.
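As a rough sketch of how Element Timing can be wired up for a case like this (the identifier and the reportMetric beacon are hypothetical, not Canva's code):

```ts
// Sketch of Element Timing usage; identifier and reportMetric are
// hypothetical. In the markup:
//   <img elementtiming="hero-design" src="design.png">
// Note the attribute only produces entries for image content and text,
// not arbitrary containers.

interface ElementTimingEntry extends PerformanceEntry {
  identifier: string; // value of the elementtiming attribute
  renderTime: number; // paint time (0 for opaque cross-origin images)
  loadTime: number;   // fallback when renderTime is unavailable
}

declare function reportMetric(name: string, valueMs: number): void;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as ElementTimingEntry[]) {
    reportMetric(entry.identifier, entry.renderTime || entry.loadTime);
  }
}).observe({ type: 'element', buffered: true });
```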
A: This is what the editor looks like. When you have a page like this and you look at LCP, there are many, many different elements that could be chosen as the LCP element. If this page was loaded, this screenshot here would probably be the element, but then there are also all of these images on the side, and there's some text here, and of course this is going to be different for every type of design that users create and load. The little pie chart at the bottom shows all of the different elements that we found being used for LCP over a particular week. It was very inconsistent, so it proved very hard for teams to have an impact.
A: If they were working on a particular feature, it was very hard to target, to ask "are we actually making the user experience better or not?", when there's this kind of variability in the metric that's getting recorded. So the particular metric we're going with is something we're calling "design interactive", and it's part of a timeline we keep that marks the key stages in the load of the editor.
A: We try to focus these explicitly on the user experience. Here are some of the key ones: we have "skeleton loaded", covering what we call the chrome or framework of the application, where we record the first time that's visible to the user.
A: We record when the design is rendered for the first time, when the design becomes interactive, and many more; we're up to maybe about 20 of these different key metrics that we're now recording. But the key top-line number we're focusing on is "design interactive", which we're finding really useful as a metric to communicate internally in the company: if this metric changes, teams have a good understanding of what that means for users.

A: We're also lining this up with other metrics, like First Input Delay, which would be happening around the end of this timeline, and using that as well to add more and more detail to the timeline. Another advantage of this approach is that it's a consistent implementation across all platforms, which we don't have to worry about, because we're the ones in control of how the metric gets measured.
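A minimal sketch of what a milestone timeline like this could look like using User Timing marks; the stage names echo the talk, while the helper and the /perf endpoint are illustrative:

```ts
// Sketch: recording user-experience milestones during editor load with
// User Timing marks. The stage names echo the talk; markStage, the beacon
// endpoint and the wiring are illustrative, not Canva's real code.

function markStage(name: string): void {
  performance.mark(name); // shows up in DevTools and in observers
}

// Called from the app at the relevant moments:
markStage('skeleton-loaded');    // app chrome first visible to the user
markStage('design-rendered');    // design painted for the first time
markStage('design-interactive'); // top-line metric: design accepts input

// At beacon time, turn the marks into a timeline relative to navigation.
const timeline = performance
  .getEntriesByType('mark')
  .map((m) => ({ stage: m.name, ms: Math.round(m.startTime) }));
navigator.sendBeacon('/perf', JSON.stringify(timeline)); // endpoint illustrative
```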
A: And now, handing over to Ben to talk a bit about interactions.
B: Thanks, Andy. So for interactions, we split the way we currently think about and measure them into two separate areas: we have metrics on a per-interaction basis, and we also track interactions across the entire session. Diving into what we do for individual interactions, we're currently pursuing an approach where we measure critical user journeys: we go through and instrument every flow individually for tracking reliability and performance concerns, using OpenTelemetry, or our own little version of it.
B: As you can see in this diagram, if a user were to click the start-presenting feature, we'll add marks around how long it takes for resources to load, and then how long it takes the presentation to initially render. This is a bare-bones code example where the key steps are marked out with spans, which are just units of work, units of time, that we manually instrument. Something I think is key here is that some asynchronous tasks happen after the event handlers have fired, and for those cases we track the span in relation to when the user would consider the interaction to have ended.
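A bare-bones sketch of this kind of span instrumentation, assuming OpenTelemetry-like start/end semantics; the Span class, span names and lazy import are illustrative stand-ins, not Canva's code:

```ts
// Sketch: manual spans around a "start presenting" flow. Span is a
// stand-in for an OpenTelemetry-like tracer.

class Span {
  private readonly startTime = performance.now();
  constructor(private readonly name: string) {}
  end(): void {
    const durationMs = performance.now() - this.startTime;
    console.log(`${this.name}: ${durationMs.toFixed(1)}ms`); // or beacon it
  }
}

declare function renderPresentation(): void;

async function onStartPresentingClick(): Promise<void> {
  const journey = new Span('present.journey'); // whole user-perceived flow

  const load = new Span('present.load-resources');
  await import('./presentation-mode'); // lazy chunk; path is illustrative
  load.end();

  const render = new Span('present.first-render');
  renderPresentation();
  // Rendering continues asynchronously after the event handler returns,
  // so the spans end when the user would perceive the flow as finished.
  requestAnimationFrame(() => {
    render.end();
    journey.end();
  });
}
```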
B: Now, one of the things we've found a bit tricky: right now we rely on the event loop for determining when a render is completed or when painting is done. We're reliant on requestAnimationFrame or setTimeout as an estimation of "it should have been finished by now". We know those timings are only as accurate as we can get without another API to give us that information.
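The usual event-loop pattern for this is a requestAnimationFrame followed by a setTimeout(0); a sketch under that assumption:

```ts
// Sketch: approximating "painting is done" with the event loop, as
// described. A rAF callback runs just before the next paint, and a
// setTimeout(0) queued inside it runs shortly after that paint, so the
// resolved timestamp is an estimate, not a browser guarantee.

function afterNextPaint(): Promise<number> {
  return new Promise((resolve) => {
    requestAnimationFrame(() => {
      setTimeout(() => resolve(performance.now()), 0);
    });
  });
}

// Usage: mark a component as rendered once its DOM update is painted.
declare function updateComponentDom(): void;

updateComponentDom();
afterNextPaint().then((ts) => {
  performance.mark('component-rendered', { startTime: ts });
});
```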
B: What we've found useful is that every span we track carries some contextual information. We track whether or not the page was visible, which is useful for filtering out any background throttling that happens, and we also track long task count and duration. We found that correlates well with poor performance, and it's a metric that's easily surfaced to developers as a diagnostic they can go away and look at after the fact. We also log these using the User Timing API for local development.
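A sketch of gathering that contextual data for a span, using the Long Tasks API and page visibility; the shape of the context object is illustrative:

```ts
// Sketch: collecting the described context (long task count and duration,
// plus page visibility) over the lifetime of a span.

interface SpanContext {
  wasHidden: boolean;   // filter out background-throttled samples
  longTaskCount: number;
  longTaskMs: number;
}

function trackContext(): { stop(): SpanContext } {
  const ctx: SpanContext = {
    wasHidden: document.visibilityState === 'hidden',
    longTaskCount: 0,
    longTaskMs: 0,
  };
  const onVisibility = () => {
    if (document.visibilityState === 'hidden') ctx.wasHidden = true;
  };
  document.addEventListener('visibilitychange', onVisibility);

  const po = new PerformanceObserver((list) => {
    for (const task of list.getEntries()) {
      ctx.longTaskCount += 1;
      ctx.longTaskMs += task.duration;
    }
  });
  po.observe({ type: 'longtask' }); // entries are tasks longer than 50ms

  return {
    stop() {
      po.disconnect();
      document.removeEventListener('visibilitychange', onVisibility);
      return ctx;
    },
  };
}
```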
B: Now, something we don't do for every span, but which teams can opt into, is measuring some frame rate metrics, again via requestAnimationFrame. The reason that's important is that we find interactions come in two flavours. There are discrete interactions, a click, tap or key press where you're trying to add content, and the action is essentially done once you've clicked the link or added an object to the screen.
B: But we also have a bunch of continuous operations, and for those we're more interested in the frame rate and in how stable it is. Examples in the editor are users moving elements around on the page, or scrolling in the object panel, where we have some virtualization going on to keep the DOM count small.
B: This is an example of how we measure continuous interactions: we start a span when the drag operation starts, we measure all the frames using requestAnimationFrame, and then we finish the span off when the drag ends and send it back. We currently use rAF because there isn't really a browser API for measuring the frame rate over a span, nor an agreed definition of the best way to measure frame rate. And so this leads to quite a few questions that we're still exploring.
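A sketch of that rAF-based sampling for a continuous interaction; the element and event wiring are illustrative:

```ts
// Sketch: sampling frames with requestAnimationFrame for the duration of
// a continuous interaction such as a drag. There is no browser API for
// "the frame rate of a span", so this simply records rAF timestamps.

function measureFrames(onDone: (timestamps: number[]) => void): () => void {
  const timestamps: number[] = [];
  let rafId = requestAnimationFrame(function tick(ts: number) {
    timestamps.push(ts);
    rafId = requestAnimationFrame(tick);
  });
  return () => {
    cancelAnimationFrame(rafId);
    onDone(timestamps);
  };
}

// Usage around a drag (canvasEl is illustrative):
declare const canvasEl: HTMLElement;

canvasEl.addEventListener('pointerdown', () => {
  const stop = measureFrames((frames) => {
    console.log(`captured ${frames.length} frames during drag`);
  });
  canvasEl.addEventListener('pointerup', stop, { once: true });
});
```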
B: For example, what is the best way to define a frame rate over an interaction? Anthony Morris, who couldn't be here today, has done a bit of investigation into this, and his bouncing-balls demo shows a few different scenarios in which lag can occur during a continuous interaction.
B: Some of the questions we're curious about and continuing to investigate: what's the best way to summarize frame rate statistically? Is it FPS, or a percentage of dropped frames? Is it a mean, a high percentile, or a mean plus standard deviation? Each of these has different pros and cons: everyone sort of knows what FPS is, so it's easy for people to pick up, but it can ignore infrequent slow frames, whereas the other options are a bit harder to describe. Other questions have popped up too: what about partial frames, and what about the compositor versus the main thread?
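To make those trade-offs concrete, here is a sketch computing several of the candidate summaries from raw frame timestamps; the 60Hz budget and the 1.5x dropped-frame threshold are assumptions, not Canva's definitions:

```ts
// Sketch: candidate summaries over raw rAF timestamps, assuming a 60Hz
// target (16.7ms frame budget) and at least two samples. Each choice
// hides something different; average FPS masks rare slow frames.

function summarizeFrames(timestamps: number[]) {
  const deltas = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  const totalMs = timestamps[timestamps.length - 1] - timestamps[0];
  const budgetMs = 1000 / 60;

  const mean = deltas.reduce((a, b) => a + b, 0) / deltas.length;
  const variance =
    deltas.reduce((a, d) => a + (d - mean) ** 2, 0) / deltas.length;

  return {
    avgFps: (deltas.length / totalMs) * 1000, // familiar, hides hitches
    droppedPct:
      (deltas.filter((d) => d > budgetMs * 1.5).length / deltas.length) * 100,
    meanPlusStdDevMs: mean + Math.sqrt(variance), // penalizes instability
  };
}
```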
B: Lastly, to note on this as well: we used to try to capture aggregate information about resources using the Resource Timing API. We found that for our cases it was largely unused, and our performance issues weren't much related to it, at least, so we've stopped tracking it for interactions. Some challenges we had there were slight differences in initiator types across browsers, unrelated resources finishing loading during an interaction, and, at the time, no way to tell whether a resource was render-blocking or not, though I think that has changed now, which is good to see.

B: So, moving on to session-wide interactions. The key difference from per-interaction measurement is that we log these over the entirety of the page session.
B: Whereas per-interaction we might not instrument every single flow, with the session-wide metrics we try to capture most of the interactions that occur. Historically, we rolled with a custom slow-editor-session metric, or slow-session metric, built using long tasks: we summed all the long tasks greater than 100 milliseconds over the total editor session duration, and if that came to more than five percent, we said it was a slow session. We tracked this over time across the editor.
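A sketch of that computation as described (the thresholds are from the talk; beaconing and lifecycle handling are omitted):

```ts
// Sketch of the slow-session metric: sum long tasks over 100ms, divide by
// the session duration so far, and call the session slow above 5%.

const sessionStart = performance.now();
let longTaskMs = 0;

new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    if (task.duration > 100) longTaskMs += task.duration;
  }
}).observe({ type: 'longtask', buffered: true });

function isSlowSession(): boolean {
  const sessionMs = performance.now() - sessionStart;
  return longTaskMs / sessionMs > 0.05; // > 5% of session in long tasks
}
```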
B: The benefit we've had with this metric has been cultural: as Andy mentioned earlier, it was a single metric that we could point teams to when releasing experiments, and they did generally try to fix issues and improve it.
B: But it was hard to explain. It was predominantly driven by shorter sessions, because those are skewed by page-load work; it was hard to attribute regressions to individual features; and once something did regress, debugging it took a lot of time and effort. So it wasn't very actionable. Going forward, we're moving towards looking at Interaction to Next Paint, or frame rate, and monitoring that via the continuous interactions we do have instrumented.
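For reference, a sketch of the Event Timing observer that INP-style monitoring builds on; in practice a library such as web-vitals (onINP) handles the bookkeeping, and the 40ms durationThreshold here is an assumption:

```ts
// Sketch: the Event Timing entries that INP is derived from; this only
// shows the underlying observer, not a spec-exact INP computation.

interface EventTimingEntry extends PerformanceEntry {
  interactionId: number; // non-zero for entries tied to an interaction
}

let worstInteractionMs = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as EventTimingEntry[]) {
    if (entry.interactionId !== 0) {
      // INP is roughly the worst interaction latency over the page's
      // life (a high percentile once there are many interactions).
      worstInteractionMs = Math.max(worstInteractionMs, entry.duration);
    }
  }
}).observe({ type: 'event', durationThreshold: 40, buffered: true });
```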
B: But it's still early days; we're still in the process of testing this, rolling it out and looking at the data ourselves. Some of the challenges we're curious about and going to be investigating: there's no session-wide frame rate browser API, so we're trying to find guidance and figure that space out. We're also curious how INP behaves for the editor in particular, with its longer-running sessions, because INP reports the slowest interaction, but users perform multiple interactions.
B: How does that factor in, if one interaction was slow but a large number of others also occurred? We're also exploring the best way to define a frame rate over a whole session; for instance, we don't want to count idle time.
B: Do animating elements matter, or is it more important to focus on when the user is directly interacting? The latter is the approach we're taking by manually instrumenting.
A: Yeah, so I did notice in the chat there was a question about design interactive and design rendered and how we measure them.
A: Ben kind of touched on a similar thing just now: we're essentially using setTimeout or requestAnimationFrame to try to work out, in code, when a particular component has been rendered to the DOM, and we instrument those key points in our code for the different components.
A: Moving on to crashes: unfortunately, we do find that users hit crashes on our native mobile apps, essentially the iOS and Android apps.
A: We see that if there's a problem with a design, the application can just appear to refresh, seemingly at random, which isn't a great user experience. (I'm having some problems playing these videos.) Going back to our idea of making metrics relatable to the user experience, we've found that crashes are actually really easy to explain to teams.

A: I think everyone at Canva who's tested on mobile, especially when we're trying out some of our more extreme features, has hit a crash at some point, so internally most teams have a good understanding of what a crash is and what the user experience of one is.
A: We find that generally these appear to be out-of-memory issues, although we don't have a great understanding here. Very often the way we investigate is based on user reports, or on a team finding something internally, and then spending some time debugging to figure out what's going on, hopefully making a change, and then seeing the total number of crashes we record go down.
A: We're very reliant on our native wrappers, for both the iOS and Android applications, to record when a crash occurs and send an event to us. We find that when we get steps to replicate a crash in our native application, we can usually also replicate it on mobile web, so we're assuming that the crashes we record in our native applications are also happening in our mobile web experience.
A: We have seen the experimental Reporting API, but we're not clear whether it would give us the information we need. Overall, the experience here is one of not really having much data.
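For reference, crash reports are delivered through the Reporting API as out-of-band POSTs to an endpoint declared in a response header; a sketch, with placeholder names:

```ts
// Sketch: opting a page into crash reporting via the Reporting API. The
// browser POSTs reports out-of-band to the declared endpoint some time
// after the crash. The endpoint URL and the Express-style `res` object
// are placeholders, and report coverage varies by browser.

declare const res: { setHeader(name: string, value: string): void };

res.setHeader(
  'Reporting-Endpoints',
  'default="https://reports.example.com/browser-reports"',
);

// A delivered crash report looks roughly like:
//   { "type": "crash", "url": "https://...", "body": { "reason": "oom" } }
// where reason "oom" indicates the renderer ran out of memory.
```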
A: So, to wrap up, looking across the three areas we focus on for performance at Canva, there are a few things that would be useful to us.
A: For example, on the editor page load, where you have a page of content: just being able to set element timing on the root node of that content, and to know when it had rendered for the user, would be really, really useful. We've actually found internally that some teams who are still learning about element timing end up adding it to divs and then not getting the data they expect, so that would be super useful.
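The div pitfall mentioned here comes down to which elements the API considers paintable; a sketch of the failure mode (the markup is illustrative):

```ts
// Sketch of the pitfall: Element Timing only generates entries for image
// content and text, and the attribute must sit on the painted element
// itself, so annotating a wrapper around images reports nothing for them.
//
//   <div elementtiming="design-root">        <!-- no entry for the image -->
//     <img src="design.png">
//   </div>
//
//   <img elementtiming="design-root" src="design.png">  <!-- entry fires -->
```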
A: It would also be really interesting to get recommendations about ideal frame rates and painting metrics across the whole of the internet. That's something we've investigated quite a bit internally, but it would be great to have more general recommendations about what makes a good experience for users. And then crashes, yeah.
A: Our number one pain point there is really the lack of consistent crash analytics across both our mobile web experience and native mobile.
A: Cool, so that wraps up our presentation. I think Ben and I are both here to answer any questions you have.