From YouTube: TPAC WebPerfWG 2021 10 28 - Personalizing Performance
So, at a high level, we'll talk a little bit about linkedin.com, what it is and where we use it, and about how we personalized images on LinkedIn's feed page, the home page of LinkedIn, using two approaches: one using netinfo, and what our experience with netinfo was, and then how we solved those challenges using a homegrown service that we built, called Performance as a Service, or PaaS for short.
So, as you can see here, there are two images: one is a lower resolution and the other is a higher resolution. LinkedIn Lite, since it was designed to work for everyone globally, chose 400-pixel images by default, just so that the experience is smooth regardless of the member's network connection. But the team wanted to experiment with 800-pixel images, double the size, for people who can, let's say, afford it: those with good network quality and good device quality. But why did they want to do this?
I mean, the reason is pretty obvious: images play a very big role on the feed. The feed is mostly images and videos, and they really drive engagement. We have seen several times in the past that whenever we optimized images, engagement really shoots up. So overall, the metrics we care about around session engagement are really driven by images, and that's why LinkedIn Lite was interested in optimizing them. The first approach they took was to use on-device measurements, via the netinfo API, which directly gives a classification into four classes: 4g, 3g, 2g, and slow-2g.
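For reference, this is roughly what reading that classification looks like in a Chromium browser. A minimal sketch: the `navigator.connection` fields are real, but the header used to forward the value to the server is a made-up example, not LinkedIn's actual wire format.

```javascript
// Read the Network Information API (Chromium-only) and forward the
// classification with the page request. The header name is hypothetical.
const conn = navigator.connection;
if (conn) {
  console.log(conn.effectiveType); // '4g' | '3g' | '2g' | 'slow-2g'
  console.log(conn.rtt);           // estimated round-trip time in ms
  console.log(conn.downlink);      // estimated throughput in Mbps

  fetch('/feed', {
    headers: { 'x-effective-connection-type': conn.effectiveType },
  });
}
```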
It uses on-device measurements like RTT and throughput to come up with this classification. LinkedIn Lite used this API to serve high-quality images only when the network connection was 4g, and this is how they did it. The client requests the LinkedIn Lite website by going to linkedin.com in their mobile web browser, and it sends both the request and the netinfo information. The server side, Lite, uses this information to do something called quality classification: it checks, okay, is it 4g?
It's a very simple if/else condition: if it is 4g, then go and ask the image provider for a higher-resolution image; otherwise, use a lower-resolution image. So at a high level, this is how they do it. Everything, both the decision making and the resulting image URL, is computed completely on the server; nothing happens on the client, and they return the feed page.
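As a minimal sketch of that server-side branch (the function names and URL scheme here are hypothetical illustrations, not LinkedIn's actual code):

```javascript
// The simple if/else described in the talk: only '4g' connections
// get the double-resolution (800px) image variant.
function pickFeedImageWidth(effectiveType) {
  return effectiveType === '4g' ? 800 : 400;
}

// Hypothetical image-provider URL scheme, for illustration only.
function feedImageUrl(imageId, effectiveType) {
  const width = pickFeedImageWidth(effectiveType);
  return `https://media.example.com/images/${imageId}?w=${width}`;
}
```

The point, per the talk, is that both the decision and the resulting image URL live entirely on the server; the client only ever receives the final rendered page.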
So this is the server-side rendering aspect of LinkedIn Lite today, and the results were quite promising. This is what we got after rolling out the netinfo on-device measurements. Feed viral actions is a very important metric at LinkedIn, and it drives engagement: clicks, shares, and all the types of actions which drive more people to use the LinkedIn feed. That improved by 0.11%.
It's a pretty hard metric to move; usually these metrics move only when we add new features, but just by changing the image quality we were able to see these gains, which is quite impressive. Similarly, content shared, as the name says, is the amount of content that is shared, and that also improved a lot. Probably people are able to engage with the images more; they like them, so they just shared more. So these are all pretty good results.
But netinfo came with challenges. The first one is a pretty obvious one: browser support. Netinfo is only available in Chromium-based browsers, and there is no support for it on iOS. iOS is a big share for us, so 50% or so of our members cannot even get this optimization.
A
And
secondly,
one
other
small
drawback
that
we
saw
is
eighty
percent
of
our
requests
are
marked
as
4g.
So
this
is
quite
shocking
to
us.
I
think
if
80
percent
of
them
are
marked
as
4g,
we
cannot
do
much
about
like
finally
granularly
trying
to
divide
that
segment.
It's
a
whole
lot
of
segment,
it's
only
20
percent,
which
has
other
classes
which
didn't
seem
right
to
us
and
secondly,
all
this
information
that
is
given
by
a
throughput
of
net
info
api.
A
It
is
capped
at
2
mbps
so
for
some
use
cases
not
for
the
image
per
se,
but
for
some
other
use
cases
it
will
be
very
useful
for
us
to
know
something
beyond
2mbps.
Let's
say
for
live
videos,
and
things
like
that.
So
that
was
another
thing
that
was
little
bit
limiting
and
another
limitation
that
we
saw
is
there
was
a
significant
difference
in
the
distribution.
Let's
say
we
plotted
net
in
force,
data,
rtt
and
throughput
and
our
own
way
of
computing.
There was a big difference in the distributions, and our understanding, at least, is that netinfo is a global API that uses requests from all domains to derive this information, whereas what we did was to use data from LinkedIn's APIs only. And secondly, netinfo combines both data centers and CDNs, and their performance characteristics are very different.
So we wanted to separate them, because we have some use cases that are image-focused or media-focused, which are driven by CDN performance, and there are some APIs and other use cases which are driven by data center or PoP performance. We wanted that type of differentiation in, let's say, the API we built.
A
And
finally,
it's
a
very
important
point,
honestly,
one
of
the
prime
motivations
for
doing
this
is
net
info
covers
the
network
aspect,
but
we
also
wanted
something
that
covers
the
device
aspect
and
we
have
the
gold
standard
metric
page
load
time,
which
covers
both
at
least
for
server
side
rendered
applications
like
lite.
So because of all these things, we built this service called Performance as a Service and rolled it out at LinkedIn. However, it's pretty hard to do this, honestly, because deriving what members perceive as a good experience versus a bad experience is very tricky and subjective. What's good for me may not be good for others, based on their expectations and their use of linkedin.com. And we wanted to do this in real time, right when they go to linkedin.com; we would like a classification that's available before, not after, the page load has happened.
We would like to know this classification, good, bad, or average, as early as possible, so that we can make decisions on the server side. Is it even possible to build something like that? And finally, we would like to have this for every request, not just future requests.
We would like it to be there from even the very first request, for all of our members. I can go into a lot of detail on how we solved these three things using machine learning, but at a high level I can give you the summary in one slide. If you are more interested, we have a blog post that goes through all these details thoroughly.
What we want is this classification, good, bad, or average, based on page load time, because it represents the entire end-to-end member experience. And what we have at request time is this: we can derive geography information from the IP address; we can derive some network information, again from the IP address, such as ASes and PoPs, the data center, and all that; and we can derive device and browser information from the user agent.
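Sketching that request-time feature extraction (the lookup helpers here, geoFromIp, networkFromIp, and parseUserAgent, are hypothetical placeholders for whatever geo/ASN databases and user-agent parsers a team already has):

```javascript
// Features available at request time, before any page has loaded.
// All three helper functions are assumed to exist; they are not real APIs.
function extractRequestFeatures(req) {
  const ip = req.ip;
  const ua = req.headers['user-agent'];
  return {
    geo: geoFromIp(ip),          // e.g. country / region
    network: networkFromIp(ip),  // e.g. ASN, plus the PoP / data center hit
    device: parseUserAgent(ua),  // e.g. device class, browser, OS
  };
}
```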
So we have these things whenever someone makes a request, but we do not have the page load time, or a classification that we can compute directly on top of them. So we built a model and trained it offline, using historical member data that we obtained via RUM, real user monitoring, and tried to understand how all these features map to these classes. It's a very complex correlation, and that's why we needed a fairly complex model to make this happen. And after doing that, we got some pretty amazing results as well.
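As a toy illustration of that offline training step, a minimal three-class classifier in tfjs-node might look like the following. This is emphatically not LinkedIn's model, which the talk describes as far more complex; the feature count, architecture, and file paths are all assumptions.

```javascript
const tf = require('@tensorflow/tfjs-node');

const NUM_FEATURES = 16; // assumed size of the encoded geo/network/device vector

// A toy classifier over three classes: good / average / bad.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [NUM_FEATURES], units: 64, activation: 'relu' }));
model.add(tf.layers.dense({ units: 3, activation: 'softmax' }));
model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });

// xs: historical RUM feature rows; ys: one-hot page-load-time classes.
// await model.fit(xs, ys, { epochs: 10 });
// await model.save('file://./paas-model');
```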
So, just as we expected, the results were much better than what we saw with netinfo. Again, feed viral actions improved, the number of members who were engaged improved, and revenue also improved. These are all very tough metrics to move without adding new features, and we were able to do it only by changing the image resolutions. And this is just one use case; there are so many other things we can do with this service now. But again, like I said, it's a very, very complex model, because these features are very granular.
If someone is interested in reading more, there's the blog post. The model, as shown here, takes the information we get from the IP address and the user agent, mainly those things, and returns something called the performance quality class: it returns good, bad, or average, along with the probabilities of each class.
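In other words, a caller sees something shaped roughly like this. The field names are assumptions; the talk only specifies the class plus per-class probabilities:

```javascript
// Hypothetical PaaS response for one request.
const prediction = {
  qualityClass: 'good',                                    // 'good' | 'average' | 'bad'
  probabilities: { good: 0.71, average: 0.21, bad: 0.08 },
};

// It can be branched on just like the netinfo effectiveType was:
const imageWidth = prediction.qualityClass === 'good' ? 800 : 400;
```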
A
So
we
know
that
it
is
hard,
for
other,
say
teams
and
companies
to
get
the
fund
to
get
sometimes
funding
and
also
to
build
models
at
this
scale,
because
we
we
are
lucky
to
have
so
much
data
at
linkedin.
So
we
wanted
to
give
back
to
the
community
and
get
some
feedback.
So
we
are
planning
to
open
source
this
model
soon
and
we
will
have
an
entire
blog
written
about
it.
Why?
We think this model is a general model that can work for many companies that are also server-side rendered, and once you have that model, the next big question is: how do I deploy it as a service and use it, right? So we are also writing another blog post on our collaboration with the TensorFlow team and how we deployed the model, using simple JavaScript functions and APIs via the tensorflow.js offering. And this is using Node.js, so it is on the server side.
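A minimal sketch of what serving such a model with @tensorflow/tfjs-node can look like, assuming a saved LayersModel and a numeric feature encoding; the file path and output class ordering are assumptions, not details from the talk:

```javascript
const tf = require('@tensorflow/tfjs-node');

let model;
async function loadModel() {
  // Assumed location of the exported model artifacts.
  model = await tf.loadLayersModel('file://./paas-model/model.json');
}

function predictQualityClass(featureVector) {
  // featureVector: the numeric geo/network/device encoding used in training.
  const probs = tf.tidy(() =>
    model.predict(tf.tensor2d([featureVector])).dataSync()
  );
  // Assumed class ordering: [good, average, bad].
  return { good: probs[0], average: probs[1], bad: probs[2] };
}
```

Because this is plain Node.js, the same server process that renders the page can call predictQualityClass synchronously once the model is loaded.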
And a small bit of, I think, insider information.