From YouTube: WebPerfWG call - 2022 07 07 - Field based user flows
All right, hopefully I'm coming across well. So we wanted to talk about a problem we're hearing more and more about, and we have some new and exciting capabilities, so we thought this could be kind of an open discussion. We wanted to start with just this: the problem is moving from field results to lab reproduction, or, another way to think about it, sort of replaying user journeys.
So you have some field data that suggests there's some sort of performance metrics problem. Some metrics are subpar, perhaps your Core Web Vitals. Maybe you're getting this information from PageSpeed Insights or Chrome User Experience Report (CrUX) results, which are available for all pages. Maybe you're collecting your own RUM analytics and you have your own RUM dashboard, and it's warning you that, across the plethora of users out there, there is a problem that needs addressing. You try, and you fail, to reproduce it locally.
Why? Okay, this is a surprisingly common problem, and it is increasingly so: for example, with CLS this can happen, and certainly with the new responsiveness metric that we're working on as well, INP. So when this happens, you're not quite sure: is the field data wrong? Has there been a problem in the past that's since been fixed?
You know, maybe you got lucky. Maybe a fellow developer at your company fixed it and pushed a patch up, because field data tends to lag, maybe even as much as 28 days. Maybe you're just blind. Who knows?
This is not the main point of this presentation, but I think it's important to mention something that has always been true: your first step is to ensure you're testing the same site that your users are using. So, between geo-specific sites and setting up the right geo, maybe the problem is localized to a specific set of users.
Are you actually testing against production, or are you using some sort of staging or local build that doesn't exhibit the same issues? Very commonly there are network differences or device-specific differences, and you might need to set up specific caching conditions or whatnot. Then there's client-specific state: maybe a particular user was logged in when this happened, or wasn't logged in, or maybe they were going through some sort of sign-up flow.
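As a minimal sketch of what setting up that kind of environment might look like, here's Puppeteer emulating a device, a network profile, and a logged-in cookie. The specific device name, network preset, cookie, and URL are illustrative assumptions, and the import names vary across Puppeteer versions:

```js
import puppeteer, {KnownDevices, PredefinedNetworkConditions} from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Device-specific differences: emulate a mid-tier phone.
await page.emulate(KnownDevices['Pixel 5']);

// Network differences: emulate a slow connection.
await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);

// Client-specific state: e.g. a logged-in session cookie (hypothetical name/domain).
await page.setCookie({name: 'session', value: '<token>', domain: 'example.com'});

await page.goto('https://example.com/', {waitUntil: 'networkidle0'});
await browser.close();
```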
That has been true for a very long time, and I wonder if there aren't things we can share collectively as to how this is typically done. But these days it's a bit more than that, which is especially true for full-page-lifecycle metrics. Okay, so some feedback that we hear very often from developers is, for example: CLS was high in the field. Okay, I tested it locally.
Maybe I just ran a Lighthouse report against it, which is a synthetic environment that only tests loading, until TTI or whatever; it doesn't interact at all with the page. And you get a CLS score of zero, and so the developer says: works for me, I'm moving on, I don't know what the issue was. Okay, but CLS is recorded not just during page load. It's also recorded post-load; it's a full-page-lifecycle metric. And so it's important to understand, in your field data:
were these high layout shifts happening during load or post-load? If you know from field data that it's happening post-load, you should know immediately that Lighthouse will never find the source of that problem. Okay, and INP is an interactivity metric, a responsiveness metric, so Lighthouse can never capture that type of issue. And even if you're trying to reproduce it locally, it could be many interactions deep. It could be with interactive components that are below the fold. It could be hidden.
So if you are trying to track down CLS issues, it's not just about how much total CLS there was over the life of the page, but what was the time of your largest CLS window? Which element shifted the most? Which specific URL or route was the user on when that happened? And then for INP: the time of the interaction, which event it actually was. Was it a tap? Was it the keyboard? Which specific key was it?
What element was being interacted with, and again, which URL and route? All of this metadata is very, very useful.
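As a sketch of how that metadata could be captured, here's the web-vitals library's attribution build (v3-era API; exact field names vary by version), with a stand-in beacon endpoint as an assumption:

```js
import {onCLS, onINP} from 'web-vitals/attribution';

// Stand-in for your own RUM beacon; the endpoint is an assumption.
const sendToAnalytics = (data) =>
  navigator.sendBeacon('/analytics', JSON.stringify(data));

onCLS(({value, attribution}) => {
  sendToAnalytics({
    metric: 'CLS',
    value,
    largestShiftTarget: attribution.largestShiftTarget, // selector of the worst-shifting element
    largestShiftTime: attribution.largestShiftTime,     // when the largest shift window was
    url: location.href,
  });
});

onINP(({value, attribution}) => {
  sendToAnalytics({
    metric: 'INP',
    value,
    eventType: attribution.eventType,     // e.g. 'pointerdown', 'keydown'
    eventTarget: attribution.eventTarget, // selector of the element interacted with
    eventTime: attribution.eventTime,     // when the interaction happened
    url: location.href,
  });
});
```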
So Philip Walton over at web.dev has written about this, focusing on CLS. He's written plenty of articles about measuring in the field, measuring in the lab, and going from field to lab, and this is sort of a thing he built on his own, I think on a personal site, and he published some recipes on how to do it. This is specific to CLS. So here what he did was, alongside recording the CLS for a page,
he recorded the single worst element to shift and how much it contributed to that shift score.
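A minimal sketch of that kind of recipe (not his exact code, and ignoring CLS session windowing for brevity): observe layout-shift entries and keep the single worst source alongside the running total.

```js
let cls = 0;
let worstShiftValue = 0;
let worstShiftNode = null;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Ignore shifts caused by recent user input, as CLS does.
    if (entry.hadRecentInput) continue;
    cls += entry.value;
    // Remember the single largest shift and one of the elements it moved.
    if (entry.value > worstShiftValue && entry.sources?.length) {
      worstShiftValue = entry.value;
      worstShiftNode = entry.sources[0].node;
    }
  }
}).observe({type: 'layout-shift', buffered: true});
```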
Okay, so the single worst problem. And what he found on this particular sample site, and I don't remember which site it was, is that only one issue sort of stands out once you have a table like this. So if you happen to have field data that suggests there's too much CLS for too many users and you try to replicate it, you would never have found an issue on mobile, because there are no issues on mobile.
If you look at a table like this, even on desktop there's just one specific type of element that was causing an issue, and if you see a table like this, it immediately jumps out at you. And if you're the page author, this is a strong hint: you might know exactly what's going on. And so in this case only 18 and a half percent of page visits,
you know, perhaps, were hitting this case, and so what's the likelihood that you would have been able to reproduce it without a table like this to sort of guide you, if that makes sense? And this is CLS, and CLS does tend to lean more heavily towards loading issues. But INP is very much the opposite: INP can be a post-load metric first and foremost, and so having a table like this could be very, very useful, for instance. Okay.
Okay, so suppose you are already using an API like Event Timing to capture those long interactions, and you already have attribution about what type of event, against which node, was interacted at what time.
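That capture might look something like this minimal sketch, where recordInteraction is a hypothetical stand-in for your own RUM reporting:

```js
const recordInteraction = (data) => console.log(data);

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    recordInteraction({
      type: entry.name,              // e.g. 'click', 'keydown'
      target: entry.target?.tagName, // which node was interacted with
      time: entry.startTime,         // at what time
      duration: entry.duration,      // how long the interaction took
    });
  }
}).observe({type: 'event', durationThreshold: 100, buffered: true});
```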
There's this new set of features in Chrome DevTools and Lighthouse which are centered around user flows: the ability to record a whole user flow and then replay it with lab automation, and run an audit against it automatically.
This is very useful for CI or for your own testing, but I thought to myself, as an experiment: would it be possible to take the performance timeline, the APIs we already have that run in the field, the ones you would already want to report for metadata for a table like I showed before, and construct a user flow that you could replay automatically in the lab? Okay, it's pretty easy to get set up with these tools. It's actually almost trivial to set this stuff up, and I just threw together a couple of quick snippets.
This is good enough. And then, in this case, I used the performance timeline, performance observers, to use the Navigation API and Event Timing, and all I did was say: what if I could get a list of things the user clicked on, in what order? That's all I was looking for in this particular case, and I created simple user flows like this.
This is done in the field using performance-timeline stuff we already have access to.
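A minimal sketch of that field-side recorder, where cssPath is a hypothetical helper for deriving a selector (note that Event Timing only surfaces events above a duration threshold, so fast clicks can be missed -- one of the limitations mentioned later):

```js
const steps = [];

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name !== 'click' || !entry.target) continue;
    steps.push({selector: cssPath(entry.target), time: entry.startTime});
  }
}).observe({type: 'event', durationThreshold: 16, buffered: true});

// Hypothetical helper: walk up the tree building a selector
// like 'main:nth-child(2) > a:nth-child(1)'.
function cssPath(node) {
  const parts = [];
  while (node && node.nodeType === Node.ELEMENT_NODE) {
    const parent = node.parentElement;
    const index = parent ? [...parent.children].indexOf(node) + 1 : 1;
    parts.unshift(`${node.tagName.toLowerCase()}:nth-child(${index})`);
    node = parent;
  }
  return parts.join(' > ');
}

// `steps` can then be beaconed home and converted into a replayable flow.
```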
So in this case this was me interacting with web.dev, clicking on like four things, and now I can replay it using a snippet like this, using those user-flows tools I talked about earlier. Under the hood this is actually using Puppeteer and Lighthouse together.
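The replay side might look like this sketch, assuming the Lighthouse user-flow API (startFlow); the import path and option names have moved between Lighthouse versions, and the steps and URL are illustrative:

```js
import fs from 'node:fs/promises';
import puppeteer from 'puppeteer';
import {startFlow} from 'lighthouse';

// Illustrative steps, as they might arrive from the field recorder above.
const steps = [
  {selector: 'main > a:nth-child(1)'},
  {selector: 'main > a:nth-child(3)'},
];

const browser = await puppeteer.launch();
const page = await browser.newPage();
const flow = await startFlow(page, {name: 'Replayed field journey'});

// A cold navigation first, then replay the recorded clicks as one timespan.
await flow.navigate('https://web.dev/');
await flow.startTimespan({name: 'Replayed interactions'});
for (const step of steps) {
  await page.click(step.selector);
}
await flow.endTimespan();

// Produces an HTML report covering the navigation and timespan steps.
await fs.writeFile('flow-report.html', await flow.generateReport());
await browser.close();
```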
There are a bunch of different ways to do this type of replay, and I think I have a video showing that it works.
So that gives you a sense of just how little code it took to put it all together; it's somewhat of an older version. This is me replaying it locally. Lighthouse sets up a bit of an environment, and you have some control over how you do this. In this case it's setting up a mobile environment, emulating some device, and this is it replaying those interactions.
I was worried when I set this up that I'd have to scroll to the right position and toggle things, but it's pretty good at just finding those selectors and making sure to interact with them. And at the end you get this sort of timespan report, which looks very similar to a Lighthouse loading report.
So if you happen to record a timespan, you'll get better breakdowns. In this case there was a lot of style and layout, and then a lot of rendering delay afterwards, and so it kind of helps you figure out what the problem was: of all of those interactions, which one was the long one?
Let me go to the next one... there we go. After I did this quick experiment, I was surprised, pleasantly surprised, with just how easy it is to set up. There was a Twitter thread about just how cool WebPageTest script automation can be, and there was some suggestion that if you have issues with an SPA navigation or with interactivity, you can actually script this stuff together. So I quickly took a look at the documentation there.
It was very easy to set up, and I just made an alternative converter: instead of using the user-flows format, it uses the WebPageTest scripting format, creating a little bit of a script like this.
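A sketch of what such a WebPageTest script might look like, using documented commands (fields are tab-separated; the URL and click targets are illustrative):

```
navigate	https://web.dev/
clickAndWait	innerText=Learn
clickAndWait	innerText=Measure
clickAndWait	innerText=Blog
```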
This is the exact same user flow, and out of the box, no problem, it runs in WebPageTest and you get this very nice step-by-step breakdown.
Now, I think WebPageTest is still mostly a loading-type audit, so I didn't see too many interactivity-based audits and things, but I'm sure there's more to come. It was phenomenal to see just how easy it was to get set up. Okay.
A single lab recording, a single lab replay, is an anecdote, and it is very easy to miss the real issues that are in the field if you just have a couple of anecdotes and you're throwing darts at a wall. Setting up the right environment is first and foremost, and you can get help there by understanding what environments your users are in. Following the right user flows is also increasingly important with post-load metrics, or full-page-lifecycle metrics.
If you record metadata about your vitals, your field data can give you broad insights and a nice little table like that, and that might be enough that you will immediately know, if you're the page author: aha, I understand that, I've written that. But even beyond that, we might be able to use the performance timeline to automate creating flows that you can replay, and therefore run a full audit that definitively tells you what the source of the problem is: how much CLS, how much INP.
So I thought that was neat. I think there's a lot of work left to do. I think the performance timeline is pretty limited, and the Event Timing API has limitations, and so I think it's an open question how easy, or how brittle, these approaches will be. How much is left? Do we need to do more in terms of creating full field-data user flows and reporting them? Maybe we need a first-class performance timeline API for that? That's all I have.