Hello everyone, or good afternoon, wherever you are. I'm Norm from Microsoft, and I'm going to talk a bit about a responsiveness metric we built in Excel Online: what we learned and what we gathered from it. Of course, we would like to hear feedback and follow-up questions, if any. So the focus of this talk is a metric for responsiveness.
Let's just start with a recap: when I talk about responsiveness, what do I mean? I'm talking about responsive interactions. As has been published and discussed in multiple places, for the user to feel comfortable, for the UI to feel responsive and interactions to feel smooth, the response time has to be less than 100 milliseconds. That's what we mean when we talk about a responsive interaction, and it's a term that will repeat in this talk a couple of times, so it's important to understand it: the response time is the time from the input until the time we present the frame, and it has to be less than 100 milliseconds.
Basically, what this code does is try to capture slow, or unresponsive, interactions and label them, so that we know exactly what the interaction we're interested in was: for example, a click on a menu, a click somewhere else, keyboard input in a certain text box, and so on. We start by creating a map in which we track all the interactions that we are able to detect and mostly care about, and then we have a helper function that accepts a DOM event and a string describing the user interaction.
After that, we use a PerformanceObserver, which tracks slow interactions over 100 milliseconds. For each entry, we look up the interaction that we stored earlier, using the start time of the PerformanceEventTiming entry as the key. Since, according to the spec, the event's timestamp and the entry's start time share the same timeline, we can reliably look them up. If we find an interaction, we consider it a slow interaction and send the information to our telemetry and analytics systems for further processing.
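A minimal sketch of the approach described above; the names `pendingInteractions`, `trackInteraction`, `lookUpInteraction`, and `sendTelemetry` are my own placeholders, not the actual Excel Online code:

```javascript
// Map from event timestamp to a human-readable interaction label.
const pendingInteractions = new Map();

// Helper that tags a DOM event with a descriptive label, e.g. "menu-click".
// Only event.timeStamp is used here, so a plain object works for testing.
function trackInteraction(event, label) {
  pendingInteractions.set(event.timeStamp, label);
}

// Per the Event Timing spec, a PerformanceEventTiming entry's startTime
// equals the originating event's timeStamp, so startTime is a reliable key.
function lookUpInteraction(entry) {
  return pendingInteractions.get(entry.startTime);
}

// Placeholder for the real telemetry/analytics pipeline.
function sendTelemetry(payload) {
  console.log("telemetry:", JSON.stringify(payload));
}

// In a browser, observe "event" entries slower than 100 ms and label them.
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const label = lookUpInteraction(entry);
      if (label) {
        sendTelemetry({ label, duration: entry.duration });
      }
    }
  }).observe({ type: "event", durationThreshold: 100 });
}
```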
Okay. So that's the basic code, a very simplified version; the real code is actually more involved than this. Now that we have measurements, what we have is, for each named or tagged interaction, whether it was responsive or unresponsive, and we have a couple of requirements for creating a metric.
A metric is something that is useful for the business. We want to make sure it's stable, that it doesn't change randomly, and we want to be able to use it for regression detection over time and for A/B testing, detecting regressions or improvements as well. And at least for us, it was important to be able to use it per session. I'm not going to go into exactly what we mean by a session, but for simplicity you can think of it as the time from page load until the tab is closed.
So we started trying to think about what would be a useful metric for us and, as you'll see, we were not successful initially. I'll show a couple of attempts that we made before reaching our current state, which we think is much better.
Of course, we want to make sure that a session has at least one interaction, to reduce the amount of information that we track. Basically, the metric that we called responsiveness was the total number of responsive sessions divided by the total number of sessions overall; we can turn that into a percentage and then start tracking it. Some challenges we encountered with this metric: it was very noisy, it fluctuated all over the place, and we had a hard time stabilizing it.
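As a rough sketch, this first-attempt metric might be computed like this; the 90% cutoff is an illustrative placeholder, since the talk doesn't give the actual tuning values:

```javascript
// First attempt: a session counts as "responsive" if at least `minRatio`
// of its interactions finished under `thresholdMs`; the metric is the
// percentage of responsive sessions among sessions with >= 1 interaction.
function percentResponsiveSessions(sessions, thresholdMs = 100, minRatio = 0.9) {
  const eligible = sessions.filter((durations) => durations.length >= 1);
  const responsiveSessions = eligible.filter((durations) => {
    const responsive = durations.filter((d) => d < thresholdMs).length;
    return responsive / durations.length >= minRatio;
  }).length;
  return (responsiveSessions / eligible.length) * 100;
}
```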
It has a lot of variance. We have a lot of variance in the number of interactions per session: some sessions have many interactions (I won't give exact numbers) and some just a few, and from the math of it, this kind of metric is biased toward sessions with fewer interactions.
We also tried to stabilize it by tuning the variables. We have the percentage of interactions above which a session is considered responsive, and it's not trivial to determine what the right number is, one that would make sense or be really useful for us. And if we tried to tweak the other number to reduce the bias, the minimum number of interactions required to consider a session at all, that again was a non-trivial challenge to address. So for our second attempt we said, well, that was too complicated.
What we did find is that this was better, better than the first attempt. However, it wasn't sensitive enough for regression detection, specifically for A/B testing. Here on the right you see a chart comparing the A/B test of a change, and you can see that it fluctuates: it goes up and down.
Even though we were certain that this change caused a regression (we knew it from other measurements), the metric didn't show it clearly. And again, this metric is not a per-session metric, so it was harder for us to use in some of the analytics systems that we have.
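The second attempt, which the talk later refers to as the percent of responsive interactions, appears to pool all interactions together regardless of session; a sketch under that assumption:

```javascript
// Second attempt: ignore session boundaries and compute the percentage of
// all captured interactions that finished under the threshold.
function percentResponsiveInteractions(allDurations, thresholdMs = 100) {
  const responsive = allDurations.filter((d) => d < thresholdMs).length;
  return (responsive / allDurations.length) * 100;
}
```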
So our current approach is somewhat a combination of, I guess, the other approaches, or an attempt to simplify things. We started by defining session responsiveness: instead of deciding whether a session is responsive, we just give it a session responsiveness score, which is the ratio of responsive interactions in the session. That is, we took the number of responsive interactions and divided it by the total number of interactions that we were able to capture.
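A sketch of this current approach, assuming per-interaction durations grouped by session (the function names are placeholders):

```javascript
// Current approach: each session gets a responsiveness score, the ratio of
// responsive interactions to all captured interactions in that session.
function sessionResponsiveness(durations, thresholdMs = 100) {
  if (durations.length === 0) return null; // sessions need at least one interaction
  const responsive = durations.filter((d) => d < thresholdMs).length;
  return responsive / durations.length;
}

// The tracked metric is the average score across all eligible sessions.
function averageSessionResponsiveness(sessions, thresholdMs = 100) {
  const scores = sessions
    .map((s) => sessionResponsiveness(s, thresholdMs))
    .filter((score) => score !== null);
  return scores.reduce((sum, score) => sum + score, 0) / scores.length;
}
```

One appealing property of the score is that even a small regression moves every affected session's ratio, rather than flipping only the sessions near a hard cutoff.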
We noticed that it's very stable, with very low variance, and that it's useful for detecting regressions, both in A/B testing and over time. On the chart here on the right, you see a comparison from an A/B test between two features, and you can see that over time it reliably and consistently detects the regression.
Just another example to show that the average session responsiveness is much more useful for detecting regressions. In this chart, the blue line represents the metric I explained on slide two, the percent of responsive interactions; we can see how it trends over time. The bottom line is the current approach that we use; it also trends over time, and you can see that in general it follows the other metric, the previous metric of percent of responsive interactions.
But if there is a regression, even a small one, we see a big dip, because it radically changes the per-session score, and we can reliably detect it. So we found it very useful for our purposes.