A
Okay, and we're on. So hi everyone, super excited for this meeting. This is the Web Performance Working Group talking to the web performance community at large about A/B testing and how we can improve it. Personally, I started thinking about this problem space a few weeks ago and realized that there's a lot of misunderstanding between the web performance community and the A/B testing community, and overall we all have very common goals.
A
We all want to make sites faster and better for our users, so I thought we should talk about this and try to figure out a solution. We have a very tight schedule today: we have four talks, by Tim Kadlec and Melissa Mitchell about the performance issues that they see in the wild in real sites today, and then talks from Grishma from Optimizely and Dimitris from Google Optimize about how current client-side A/B testing frameworks work.
A
I will also present something short, not really a proposal, but something I've been thinking about that could lead us towards some sort of a solution. Then I thought we can devote an hour, or an hour and fifteen minutes, more or less, to brainstorming, to talk about all that and try to figure out the best way forward to make things faster. Okay, so I think we can get started with the presentations shortly, but just a small note beforehand.
A
This meeting is being recorded and will be posted online. We will be taking minutes in the agenda, and, as with all working group meetings, this meeting is under the W3C's CEPC, the Code of Ethics and Professional Conduct. You can go read that, I will post a link, but in short, let's all be professional and conduct ourselves towards others in the way we want them to behave towards us. And with that, I think we can get started with the talks.
A
Cool, so can you share your slides? You shared a link, but do you want to present? Yes.
B
I can see it now, sweet. Okay, so my name is Tim Kadlec. Just a ten second overview for those who don't know: I work now on WebPageTest at Catchpoint, and I've only been doing that since, I think, right before the Christmas holiday. Prior to that I was doing performance consulting, which is where more of this stuff is coming from.
B
It's things that I was seeing, and hopefully this ends up being a decent tee-up for some of the other stuff coming up. So, just at a high level, just to level set a little bit.
B
Client-side A/B testing typically works by providing some JavaScript down to the browser. That JavaScript is then going to manipulate the Document Object Model or the CSS Object Model, or both, to apply one or more experiments to the page. As you would expect, any time we're doing something like this, manipulating the DOM or the CSS object model, we want to make sure that we're avoiding shifts or flickering of content, because that's always a risk any time that stuff is changing.
B
It can be disruptive to the user experience, but also, in the case of A/B testing, if somebody's seeing case A and then B snaps in on top, or whatever that situation happens to be, that could potentially mess with the results.
B
So the challenge here for A/B providers is that they have to find a way to combat this. How do we reliably get A/B experiments down to the browser?
B
Get them executed, get the DOM and CSS object model set up as efficiently as possible, in a way that ensures that flickering or shifting or fluttering, however you want to call it, doesn't actually occur. Now, the easiest option is to drop in a synchronous script, and this is what you'll see very often; I know this is what Optimizely uses. The synchronous script is probably easiest from an implementation perspective, both for the provider and for the developers themselves. It's literally just drop the script in and you go. The advantage of this, from the simplicity perspective, is that in addition to being one line of code, it gets that higher priority on the network, and execution is guaranteed to happen right away. So, conveniently from an A/B perspective at least, this blocks the page from displaying until those experiments have been applied. We know that there's going to be no flickering, no shifting of content; users are just going to see the experiments that they should see, nothing else.
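A minimal sketch of what that synchronous setup looks like in the page. This is a generic illustration, not any vendor's actual tag; the host name and file are made up:

    <head>
      <!-- Hypothetical render-blocking experiment script (placeholder URL). -->
      <script src="https://cdn.ab-vendor.example/js/PROJECT_ID.js"></script>
      <!-- Parsing and rendering are blocked until the script above is
           downloaded and executed, so variants apply before first paint. -->
      <link rel="stylesheet" href="/styles.css">
    </head>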
B
This is on a 3G Fast connection. There's 2.3 seconds that we're spending here making that connection off to Optimizely and then downloading that content, and the entire time that that 2.3 seconds is occurring, the page is blocked; that's all we're going to get out of it. This is a Chrome profile from the same trace, or, I'm sorry, from the same WebPageTest run. At the very left is the little blue marker.
B
That's where we're parsing the HTML, lines zero through six, and you can see there that that's where it finds Optimizely's script, and that kicks off the request up above. Once the script has arrived, there's this execution chunk, and then right after that, that final little blue tab off to the right is when HTML parsing picks back up again. So for that entire time, the 2.3 seconds plus execution, which in this case was around 800 milliseconds or so, HTML parsing is completely paused.
B
Just from the folks that I've worked with, one of the most predictable correlations there is is between that first paint, the start render, and some sort of a business metric. Pretty much everybody I was working with would see a connection between either engagement or conversion rate and that particular performance metric, so pushing it out 2.5 seconds is not ideal.
B
This is a WebPageTest run showing a single point of failure situation: we just tell Optimizely not to respond, just to hang, and you can see we've had a pretty dramatic difference, because then we're up against browser defaults in terms of timing out those requests, and things like that.
C
Now, there's a lot more I would like to know. If anyone has data on how often this happens, because that's an issue, but I would like to know how big of an issue it is. Maybe in the discussion later.
B
Yep, that's a good aside. It would be a great thing to see as well. We looked at this a little bit when I was at Akamai, but boy, I don't remember what the results were for that, and that's been a few years too at this point. As an aside, there are more issues that come across from hosting things yourself versus pulling things in from a third-party service provider.
B
For recommended reading on that, Harry wrote a really good post, I guess a couple of years ago, though it doesn't feel like it was that long, about self-hosting versus third-party domains, which gets into things around network prioritization and stuff like that as well. So with the synchronous script we have this problem where we've introduced a bottleneck into our critical path. An alternative approach that you see a lot of A/B providers take is an asynchronous script.
B
This mitigates our single point of failure risk; it moves the third-party connection off that critical path. But now, because it's an asynchronous script, theoretically the page could be displayed before those experiments are applied. So now we risk introducing that shift or that flicker, and so typically, when A/B providers use or offer an asynchronous script, it comes paired with something that hides the content until the experiments have been applied.
B
Harry found a thing around Black Friday, apparently, where he went to a retailer and the entire site was blank, because the CSS was applied to hide the visibility and set the opacity to zero, but the JavaScript never loaded. This is an exception to the normal case, I would say: most providers, any responsible provider, provides some sort of a timeout as part of that snippet as well.
B
So this is the Google Optimize default snippet, and you can see down here they're setting a four second timeout. So in their case, they hide the page by setting opacity to zero, they try to load in their experiments and scripts and whatever they need to, and if that stuff doesn't come in within that four second timeout window, then the page will be displayed again and the experiments won't run.
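A rough sketch of how a page-hiding ("anti-flicker") snippet of this sort works. This is a simplified illustration of the pattern, not Google Optimize's actual code; the class name, the four-second value, and the reveal callback are assumptions:

    <style>.page-hide { opacity: 0 !important; }</style>
    <script>
      // Hide the page immediately by tagging the root element.
      document.documentElement.className += ' page-hide';

      var show = function () {
        document.documentElement.className =
          document.documentElement.className.replace(' page-hide', '');
      };

      // Safety net: if the experiment code hasn't revealed the page within
      // four seconds, show it anyway (the experiments simply won't run).
      var timer = setTimeout(show, 4000);

      // The async experiment script is expected to call this once variants
      // have been applied, cancelling the timeout.
      window.__revealPage = function () { clearTimeout(timer); show(); };
    </script>

The trade-off Tim describes is visible here: until either the experiment code calls the reveal function or the timeout fires, the page stays invisible.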
B
That way, again, you still avoid the flickering. Now, the defaults for each of these services differ: Google Optimize was about four seconds, Adobe Target is three seconds, Visual Website Optimizer 2.5 seconds. Each of these can be adjusted in the snippet; you can bump this up and down as you want.
B
I've worked with a lot of folks using Shopify, and Google Optimize is extremely popular with Shopify folks, because with Shopify it's much easier to have client-side experimentation than anything else, because of the way the platform works, and Google Optimize is typical. So one of the things we'll often do is adjust this timeout up or down as necessary, typically measuring how often we're timing out and seeing what we need to do there.
B
These timeouts should sound high to anybody who's focused on performance, because they kind of are, right? If you think about it, we're saying: Google Optimize, we're giving you permission to block display of the page for up to four seconds. Pushing out start render by four seconds or so is a significant hit in terms of performance.
B
So these are high defaults, but they kind of have to be, because moving these requests out of the critical path does come with risk from an A/B experimentation perspective. Asynchronous scripts get a lower priority in Chrome. What that means is that they get moved to that second loading phase: Chrome has this layout-blocking phase where it's prioritizing head resources, and stuff below that, from the body, only gets loaded, more or less one at a time, in that second phase, which ends up pushing all those other resources out. So any asynchronous script gets scheduled in that second phase. It's now contending with all of these other page resources for bandwidth, so it's going to arrive a little slower anyway, in addition to just being queued up much later. This is an example that Andy Davies was kind enough to offer up for me, from Plums.
B
You can see here that request number 25, in this case, is the request for Visual Website Optimizer. It's an asynchronous script, so it gets that lower priority, and you can see it falling on that second half of the stair-step of the waterfall, contending with all these other resources for bandwidth. As a side note, he also noted that the Visual Website Optimizer script is actually uncacheable.
B
It turns out that they're generating a unique script for each and every page, and a random number is appended as a query parameter to the request, so you can't actually cache it either.
B
What's interesting to me, and one of the things that I've seen with some folks that I've worked with, is that this can be a little tricky to catch depending on how you're looking at your analytics. Here's a WebPageTest run. If you look at the first spot here, there's that thin green line that lines up right at the top of the filmstrip, and you can see that the page is blank. That's actually when First Contentful Paint is firing in this case, because what's happened is we've set opacity to zero.
B
The content is technically still there, you just can't see it. So if you're looking at real user monitoring, you're going to see your first paint and everything looks great. It's only when you look at something synthetically, which puts that start render much later, over here to the right, that you see there's a bit of a discrepancy between the two metrics.
B
I ran into this again with one of the Shopify, or, I guess, with a few Shopify folks that were using Google Optimize, where they looked at their real user monitoring and everything looked great, and it was only once we started diving into filmstrips in WebPageTest and SpeedCurve, which they were using, that we could see that there was an actual problem here.
B
But this was one audit I did where the 86th request on the page was the one that actually came back and disabled the anti-flicker snippet. So until this request arrived and the JavaScript executed, the page would be hidden. Basically, this particular site was reaching their Google Optimize timeout, which I think they had bumped to five seconds as well.
B
That's the waterfall. More recommended reading on anti-flicker snippets: I think Melissa noted this in the document she put together as reading material as well, but Andy wrote a great post on anti-flicker snippets if you want to dig into that a little bit, and he spends a bit more time on how you might monitor when those things are timing out. So we've talked about getting the code down to the browser, but now we've got this whole process of what happens after that.
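As a sketch of the kind of timeout monitoring mentioned above, one hedged approach is to record, inside the hiding snippet itself, whether the page was revealed by the experiment code or by the safety timeout, and beacon that out with your RUM data. This extends the earlier page-hiding sketch; the flag, endpoint, and field names are made up:

    // 'Who revealed the page' flag added to the hypothetical hiding snippet.
    var revealedBy = null;
    var show = function () {
      document.documentElement.classList.remove('page-hide');
    };

    var timer = setTimeout(function () {
      revealedBy = 'timeout';            // experiments never arrived in time
      show();
    }, 4000);

    window.__revealPage = function () {
      if (!revealedBy) { revealedBy = 'experiment'; }
      clearTimeout(timer);
      show();
    };

    // Report which path fired so the timeout rate shows up in analytics.
    addEventListener('load', function () {
      navigator.sendBeacon('/rum', JSON.stringify({ antiFlicker: revealedBy }));
    });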
B
A single snippet can include many events: different pages and experiments kind of all get lumped together in one snippet by default. Typically, in my experience, those experiments have a tendency to linger on. I've worked with a lot of organizations who couldn't tell you which experiments happen to be running or who put them there, and often they find that there are things running that they didn't realize were still in play.
B
Now, I understand that's not a technical problem, it's an organizational issue, but it is something that I've run into often enough to tell me that it's a significant challenge that pops up, and the more of those you have, the bigger the amount of code. The other challenge here is that a lot of these experiments tend to be pretty inefficiently written, so they're doing DOM manipulation in a very inefficient way, which again prolongs that execution cost.
B
So all of this code adds up very quickly. This was the Ultra Mobile example, where they were pulling in just under half a megabyte of uncompressed JavaScript code, and very typically we see this manifest itself in long tasks.
B
That stuff is expensive. Yep, fire away, Tim.
B
Thanks. So, observing the DOM can be really expensive, especially on a constrained mobile device, and these long tasks can be obnoxious. So it's, in my opinion, the perfect marriage of third-party risk and JavaScript bottlenecks. I'm not going to spend a lot of time on why we use it, because I think some of that gets covered in Melissa's talk, but we have situations where the data may only be available on the client and not available prior to that, maybe because they're relying on other third parties.
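To illustrate why the execution cost of these experiments varies so much, here is a hedged, made-up contrast between two ways an experiment might wait for an element before changing it; neither is taken from a real vendor, and the selector and copy are placeholders:

    // Pattern 1 (costly): poll the whole document on a timer, repeating wide
    // DOM queries on what may be a constrained mobile device.
    var poll = setInterval(function () {
      var el = document.querySelector('.hero-banner');
      if (el) { el.textContent = 'Variant B headline'; clearInterval(poll); }
    }, 50);

    // Pattern 2 (more targeted): observe mutations only until the element
    // appears, then stop paying the observation cost.
    var observer = new MutationObserver(function () {
      var el = document.querySelector('.hero-banner');
      if (el) {
        el.textContent = 'Variant B headline';
        observer.disconnect();
      }
    });
    observer.observe(document.documentElement, { childList: true, subtree: true });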
B
Prior
to
that,
because
maybe
they're
relying
on
other
third
parties,
some
organizations
can
be
using
it
for
other
things.
Besides
experiments
because
of
its
ease
of
deploying
it
can
be
workarounds
for
development,
backlogs
bug,
fixes,
etc.
B
This is a small example, but with one of the organizations I worked with, we were looking at several hundred thousand dollars in lost revenue because of the performance hit. And then there's simplicity: client-side A/B tests are easier to deploy in most cases, you have WYSIWYG editors, it enables marketing to try out new experiments, and it detaches all of that from the developer workflow.
A
Thank you! Melissa, can you share your screen?
E
Right, I will go ahead and present. I'm actually covering a little bit more of the process side of things: why are we seeing these problems, and what does it look like from a Core Web Vitals perspective? For a little bit of background, my name is Melissa Mitchell. I am a web ecosystem consultant for Google, which means my day-to-day job is working with site owners one-on-one to try to help them meet Core Web Vitals, and really getting that in-depth feedback on
what they're struggling with, to highlight those struggles internally and to talk about them. And universally, when clients are working towards meeting Core Web Vitals, client-side A/B testing almost always comes up. Regardless of whether it's retail, insurance, banking, or content publishing, A/B testing is pretty much universal, and it is one of the struggles they're working around in meeting these Core Web Vitals. So, some real-life examples, just to go through and level set.
E
I will speed through this a little bit, since Tim gave a great explanation of what's going on here technically. As an example, zales.com: they're using an A/B testing library, Maxymiser. This is an AMP PWA, and this is a hard load, so we're seeing a hard load of the SPA without the AMP load into it, and so this is a little bit more janky than the real user experience. But it still stresses the point of how much an A/B testing library can actually affect the user experience.
E
So we are seeing FCP increased by about four seconds, so we're likely hitting that timeout right there, and then we're also seeing LCP delayed by a second as well. This is concerning for Core Web Vitals as they're working on this, because FCP is a lower bound for LCP, and if we're stuck at four seconds, there's almost no way to get out of that poor category. Similarly for Cumulative Layout Shift.
E
I have also added the RUM data from the Chrome User Experience Report, showing what the real user experiences are like alongside the synthetic, and we can see that that timing would make a difference for these Core Web Vitals as they're working through this. Another example, and I think this is on the slightly opposite end of the spectrum: this is more like a best-case scenario for out-of-the-box A/B testing as we see it out in the wild, in real life. This is from npr.org.
E
They do use a React single page application, they are not on AMP, and with their A/B testing they're seeing about a second of impact on their web pages. Now, this is important because it is the last piece: LCP is the last piece for meeting Core Web Vitals here, and they're right on that edge. Currently they're at 3.2 seconds, so that roughly one second that the A/B testing library is taking up is a factor in whether they can meet those Core Web Vitals thresholds.
E
So why are we seeing these challenges? I can't stress enough that I feel the biggest one is that the primary A/B test creators are not in technical roles. They were not hired for their coding experience, they're using WYSIWYG editors, and they're typically completely unaware of the performance impact that their work can have on the site, and then on the business KPIs afterwards, including Core Web Vitals and others.
E
So I will probably just dive right into server-side A/B testing and why it doesn't meet the business requirements: even though it helps, and we have a technical solution that could cover the initial use case for A/B testing, it doesn't help with all of the other things that A/B testing is actually used as a band-aid for. So before I jump into that, there is a universal truth which I've learned as a developer.
E
Users are never going to use your software the way you intend. It's just a fact of life, and you end up supporting use cases you never knew existed. A corollary to that is that sometimes your users aren't even who you think, or who you intended them to be, and that's something I usually keep in mind as we're going in with the A/B testing: they get very creative about these things.
E
The first reason is the most obvious: we want to increase conversions and revenue, and A/B testing is essential for the business. That's usually what you're going to hear, and server side can do this just as well as client side, so this isn't the reason why we're not switching over to the more performant solution.
E
Another thing that client-side A/B testing libraries do is reduce the workload on marketing teams, and again, I can't stress this enough: they usually are not technical, so if they do code, it's probably not what they primarily went to school for; they are not computer science graduates.
E
Number four is super important: it allows tests to be independent of release cycles. This is critical for the business side, because most companies can be fairly slow with their release cycles. We're lucky if we see a release every two weeks; every two months is not uncommon, and that's just not agile enough if a CEO comes down and says "I want to change X tomorrow." Oftentimes, businesses will use the A/B testing tool to adjust to this.
E
One example here, from someone I worked with: they had services that were only offered in store, and when all the locations shut down, they wanted to move quickly to offering similar services on the web. But because they outsource most of their web development, they actually implemented the whole solution within their A/B testing library, instead of going through the initial pain of trying to work it out with their vendors, because they had to be more flexible.
E
I know we're running short on time, so I will keep this short. Successful tests can be kept live until the development work is done. That's a lot of what Tim was saying: these will hang around forever.
E
I think roughly around 70 percent of tests are either unsuccessful or inconclusive, and this way you are not bloating your CSS or your code with those failed tests. So there are a lot of these business reasons we need to keep in mind: the client-side A/B testing library is solving pain points that the current solutions we have for the performance issues aren't yet addressing. So, happy to answer questions, and I will pass it back to you.
A
Awesome, thank you. So yeah, lively discussion in the chat. Do we want to spend five minutes on questions before moving over to the A/B providers' side of things?
E
I'm not quite sure I'm following what that solution would look like, right? Because first, when you're on the website: is this person in group A or group B? How are you deciding at that point? You still need to have some JavaScript to decide what group this person is in. The A/B testing library people can chime in here; I am not an expert in how these things are actually written, mostly in working around them to try to meet Core Web Vitals.
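For reference, a minimal sketch of how that assignment step can work when everything is first-party: a stable visitor ID in a cookie is hashed into a bucket, so the same visitor keeps getting the same variant. This is a generic illustration, not how any particular vendor implements it:

    // Hypothetical first-party bucketing: stable ID + hash -> variant.
    function getVisitorId() {
      var match = document.cookie.match(/(?:^|; )ab_uid=([^;]+)/);
      if (match) { return match[1]; }
      var id = Math.random().toString(36).slice(2);
      document.cookie = 'ab_uid=' + id + '; path=/; max-age=31536000';
      return id;
    }

    function bucket(id, experiment, variants) {
      // Simple string hash for illustration; real systems use something sturdier.
      var h = 0, s = id + ':' + experiment;
      for (var i = 0; i < s.length; i++) { h = (h * 31 + s.charCodeAt(i)) >>> 0; }
      return variants[h % variants.length];
    }

    var variant = bucket(getVisitorId(), 'hero-copy-test', ['a', 'b']);
    // 'variant' then decides which DOM changes (or which server response) apply.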
B
So I don't know, for Alex, are you talking about basically your own version of doing it server-side? I mean, the first party is responsible for providing both: test A to one set of users, test B to another, whether that's driven by the server or by JavaScript that's manipulating things, but it's all self-hosted. Is that what you're driving at?
F
When I used to scrape data from websites a long time ago, which I officially don't do anymore, I noticed that there were several websites I was interested in the data of that would respond very clearly with either website A or website B.
E
Yeah, so if you have to have page A and page B written, then you're requiring engineering resources to write those pages. The groups and teams doing these tests are not really engineers, and to get this they have to request that resource from the engineering department, which may or may not be prioritized, and certainly isn't going to happen fast enough in the time that they need to run these tests.
A
Cool. So I think that, with that, we can move over to the A/B testing framework side. Grishma, do you want to share your screen?
D
So hi, my name is Grishma, I'm from Optimizely, and I'm here to present how Optimizely does client-side A/B testing. I'll skim over the first few slides, because both Tim and Melissa have covered it exhaustively.
D
Some of the advantages: it's easy to implement, easy for non-engineers to get into, faster iteration. Another useful thing is short-lived experiments where it is time-sensitive. For one example, when the Black Lives Matter protests were happening, Optimizely put up a banner; that's a short-lived experiment, and that can be done much faster without any engineering input. So those are some of the advantages of client-side A/B testing. I'll skim over this because I think it's been covered: the different types. One thing Optimizely also does is personalization.
D
This is where it gets more complex with regards to server-side testing. You use a lot of third-party integrations: if somebody has visited Google, then do this; if somebody is from Microsoft, as in this example, present this; if somebody has visited this page twice, on the third time show them a different view. These kinds of examples make it harder when it comes to doing something server-side, where it's rendering itself, so personalization is its own case.
D
So when it comes to client-side A/B testing, one of the key constraints is that it has to preserve the user experience. We talked about flashing and flicker, how it can ruin the user experience, but it can also undermine the integrity of experimentation results. Further, if you want to modularize the JavaScript, you have to make sure you only do so in a way that doesn't rely on it being executed at a specific point.
D
Anything you inject from within JavaScript will happen asynchronously, and it's out of our control. From a vendor perspective, like Optimizely, we cannot assume when an asynchronous script will execute, so we are limited by how much JavaScript we can push asynchronously. The other constraint is that most of our customers
D
Don't
want
to
deal
with
a
lot
of
nitty-gritty
details,
so
they
don't
want
to
deal
with
which
cookies
to
pass,
what
to
pass
to
us.
So
as
a
vendor,
we
have
to
be
able
to
deliver
this
completely
independent
of
where
the
visitor
is
coming
from.
What
site
they
are
around
which
geo
location?
It
is
so
this
is
all
taken
from
the
client
side
and
we
cannot
assume
anything
about
the
visitor
or
the
customer.
D
Additionally, there are experiments that execute on certain DOM changes or on a specific JS condition, like "this particular element is present" or "this particular variable is defined", and we at Optimizely also rely on local storage for behavioral changes: if a person visited this webpage twice, on the third time show something. That's the behavioral change, and those kinds of experiments and personalization changes rely on local storage.
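As a small illustration of the local-storage-driven behavioral targeting being described, here is a generic sketch with made-up keys, not Optimizely's implementation:

    // Hypothetical visit-count targeting kept entirely on the client.
    var KEY = 'visits:/pricing';
    var visits = parseInt(localStorage.getItem(KEY) || '0', 10) + 1;
    localStorage.setItem(KEY, String(visits));

    if (visits >= 3) {
      // On the third visit, flag the page so a different view can be shown.
      document.documentElement.dataset.pricingVariant = 'returning-visitor';
    }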
D
So all this information is in this one JavaScript file, and as and when the customer makes any changes, it goes back into the JavaScript and the JavaScript gets updated. So, as mentioned before, there are performance challenges. We are very acutely aware of them; as a performance engineer I've had to talk with a lot of customers about this. Part of the reason is that it's necessary to do all of this, and it is not cheap.
D
It's a very complicated piece of code, and it is huge. There are different types of changes, different types of experiments, different types of attributes we need to evaluate, so that's a lot. It can be any number of experiments: there are customers who have five experiments, there are customers who have 50 experiments, so it can range a lot, and each experiment can have its own set of variations, and the changes themselves can be very large, especially when written by non-engineers.
D
So as we kept thinking about these challenges, around two years back Cloudflare announced its new product, Cloudflare Workers, and that was the perfect chance for us to actually improve our product. We decided to use Cloudflare Workers, which is essentially serverless execution, a JavaScript node at the edge. So we moved all this execution that was initially happening on the client side onto the edge.
D
All the targeting, all the evaluation, everything is moved onto the CDN edge node, and because it's the edge node, it is much faster. It does not rely on how slow the browser is or how slow the device is; it happens much faster, and we only send to the client, or the browser, just the variations that need to apply, just the minimal amount of JavaScript that needs to apply.
D
So this is Optimizely Edge, which we released about a year ago, and the way this works is it's again a JavaScript snippet in the head, but this time it's a first-party JavaScript snippet. The way customers implement it is either they fetch it from our Cloudflare Worker CDN, or you can set your DNS to get it from the Optimizely Edge domain. Either way it's first-party JavaScript, so it can forward the cookies, it can forward all the important attributes that we want. And the big difference is here:
D
The execution logic is all in the Cloudflare Worker, and only a very minimal amount of JavaScript is sent to the browser. That means what was 213 KB of JavaScript is now in the 5 to 10 KB range, and the execution takes less than five milliseconds, because it's just doing very minimal JavaScript, like changing a color or whatever else the changes they have implemented are.
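A very rough sketch of the edge pattern being described: evaluate targeting and bucketing in a Cloudflare Worker, then inject only the chosen variant's small snippet into the HTML on the way through. This is a hedged illustration of the general idea, not Optimizely's actual worker; the cookie name, experiment name, and injected snippet are made up:

    // Hypothetical Cloudflare Worker: decide variants at the edge and send the
    // browser only the minimal per-variant JavaScript.
    addEventListener('fetch', function (event) {
      event.respondWith(handle(event.request));
    });

    async function handle(request) {
      var cookies = request.headers.get('Cookie') || '';
      var match = cookies.match(/(?:^|; )ab_uid=([^;]+)/);
      var userId = match ? match[1] : crypto.randomUUID();

      // Targeting and bucketing happen here, not on the visitor's device.
      var variant = (hash(userId + ':hero-test') % 2 === 0) ? 'a' : 'b';

      var response = await fetch(request);
      var html = await response.text();

      // Inject a tiny snippet that applies just this visitor's variant.
      var snippet = '<script>document.documentElement.dataset.heroTest="' +
                    variant + '";</script>';
      html = html.replace('</head>', snippet + '</head>');

      var headers = new Headers(response.headers);
      headers.append('Set-Cookie',
                     'ab_uid=' + userId + '; Path=/; Max-Age=31536000');
      return new Response(html, { status: response.status, headers: headers });
    }

    function hash(s) {
      var h = 0;
      for (var i = 0; i < s.length; i++) { h = (h * 31 + s.charCodeAt(i)) >>> 0; }
      return h;
    }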
D
In addition, we inject a follow-on JavaScript snippet asynchronously, and this is what does the other important things that are not render-blocking: tracking all the events that need to be tracked, everything else that goes into determining the results of the experimentation, and SPA-related changes. Because in the SPA world you don't reload the pages, that means any time
D
You
navigate
the
url
changes,
so
in
some
sense
there
might
be
new
experiments
to
execute,
and
since
you
don't
reload,
you
cannot
go
back
to
the
edge
to
evaluate
these,
so
spa
related
changes
happen
in
the
asynchronous
network.
As
a
result,
most
of
the
performance
challenges
go
away.
There
are
still
a
few
synchronous.
Javascript
is
still
the
problem,
but
so
far
we
have
seen
the
impact
to
be
less
than
50
milliseconds
the
again
changes
frequently,
so
the
snippet
cannot
be
cached.
D
Hopefully 50 milliseconds makes sense, so that caching is not the biggest concern. There are other challenges, of course: there is security, and then the evolving browser and framework ecosystems, which I assume is not the point of this talk. So I think that's it from me.
A
Thank you, thank you so much. One question from Pat Meenan is: do we have a sense for how much of the pain is coming from the A/B testing library plus the data, versus running the client-side code, and if it was all client-side served by the same origin, we could... yeah. Pat, do you want to ask your question instead of me asking for you?
H
Sure. I mean, basically it's just trying to figure out where the source of the pain is. Obviously there's a lot of tooling built around client-side A/B testing and flexibility, but serving from a third-party origin was some of the pain, particularly around SPOF and scheduling and that kind of stuff, and the question was: if we can first-party serve it and get it delivered quickly and cached and all of that kind of stuff,
H
How
much
of
the
a
b
testing
pain
goes
away
or
how
much
is
still
left
because
of
how
long
it
takes
to
run
the
code
in
the
browser
yeah
and
do
we
have
to
solve
both
problems?.
D
I think we have to solve both problems, from my experience. First-party JavaScript or self-hosting does help to a certain extent; it does reduce the download time. But especially with mobile browsers, execution becomes the biggest reason why A/B testing becomes a problem, because it does take a lot of time for our JavaScript code to evaluate. Some of our customers have about 100 experiments.
D
So,
even
if
you
put
one
millisecond
per
experiment,
it's
100
milliseconds
just
to
go
over
the
list
of
x
experiments
and
then
there's
a
complication
of
how
they
define
the
experiments.
What
are
the
changes
executed?
So
there
is
a
huge
problem,
even
with
the
execution
part,
which
is
why
edge
takes
all
of
that
out
as
well.
D
Yeah, you're not running on a mobile device anymore, and you're moving it to a much, much faster system. So what would take 500 milliseconds to execute here takes 40 milliseconds on an edge node, and on top of that, because you're only sending 5 KB instead of 200 KB, the download also gets faster.
I
So hi, my name is Dimitris Dimitropoulos, and I'm going to represent Google Optimize here; I've been on this team for a while. I'm going to maybe pass quickly over some of the slides I have, because I think they have been covered by many other people. You all know A/B testing, and this is the set of tools involved; Google Optimize is also in the same area.
I
Google Optimize was launched fairly recently, in 2016, first as a premium tool, and a year later it was launched for free, with limits. One thing to say here that hasn't really been covered much: Google Optimize also provides what we call server-side experiments, a way to measure things unrelated to the whole topic we are covering today, which is DOM manipulation.
I
The rest of my slides are basically going to cover this client-side runtime, but there is also the part of the product that's about the statistical side of things, and there is a solution there. But basically, yeah, as everybody has said, it's the same story here: the wizard we have there is intuitive and easy.
I
I also would like to stress here that, in my opinion, and I think you can see it from the interest out there, marketing efforts and UX improvement efforts do matter a lot, equivalently to the performance of the site, because a good site with good content is also what users experience. So I think finding solutions to improve sites, even in things like marketing or usability improvements, isn't only better for the sites and the conversions, it's better for the users.
I
Of course, that's the current usage of Google Optimize that we've seen; as I said, a lot of appetite, if you consider that this is a relatively new offering. That's data that we get from BuiltWith.
I
That's a small slide that just reiterates how these tools work, at least the runtime, the DOM injection part of the product. This is where a marketer will use the editor to produce a bunch of changes on the current page, which we then get into a script, which then will be served to the browser, and just before rendering it will try to change the page fast enough and implement variants or personalizations.
I
This is how the Google Optimize editor looks. I wouldn't call it extremely user-friendly, but it's easy to use for someone who doesn't know anything about web development. As has been covered, there is always a range of skills out there: people who are marketers, have some experience with the web, and just want a tool that bypasses the bureaucracy and gets the job done.
I
Targeting is something in these tools that also matters a lot, and I think others have covered that part. The URL is what's used in all experiments. It's an area where, unfortunately, we couldn't easily find ways to improve, because we bundle all experiments in a single script, which has benefits as well, since you get caching as you navigate through the site.
I
You can't trim the script easily so that you just get the experiments that are relevant to the page: due to referrer policies, from a third-party script you sometimes can't even know the path of the page the user is looking at, or how it was referred to. And there are SPA sites, which are a very different story.
I
We also have audience targeting in Optimize, which does things like "what device does the user have", a very common one. And, as you can imagine, and you probably know, the ability of a marketer to ask a developer "please get that experience on..." Geo is a good example, right: I want to have an experiment only in a particular city. It's easy to say
"yes, we can do it this way or that way", but I think in reality, out in businesses, they value the flexibility and easiness of doing that with a tool like this. And there is also the reason, because that question came around, why we need to go to the client for some of the targeting: it's not terribly common, but very often there are signals that you can only get in the client. There's a lot of integration of systems that happens on the browser itself.
I
In today's sites, some sites will do recommendations, getting recommendation services on the browser, and sometimes even authentication. So sometimes it's much more convenient to check on the client things like: is the user logged in, what is my recommendation engine providing as the content, and link all that as targeting of the experiment.
I
Yeah, so as I said, the story here is that users typically put this single snippet on their whole site, and that helps them do experiments on different pages continuously and iteratively. The iterative part is also something very important, and something that's really hard for them to achieve
by going through their development cycles. Using Google Tag Manager, as was also mentioned, is a very big thing. We are seeing people prefer the flexibility of adding and removing experimentation on pages, or doing it on a subset of the site. As was mentioned, again, it's a sub-optimal situation because there is another asynchronous request that needs to happen. In Optimize we try to guide users towards limiting what their experiments are.
I
We started trying to be more async in Google Optimize, and we began by becoming a module of Google Analytics, and that was asynchronous, as was mentioned. More recently we now offer both async and sync versions of a simpler, separate script that makes installation very simple, but we saw lots of problems and troubles coming from problematic installations, and there is also now a better story with flicker and, more often than not, better performance.
I
Basically, I would like to talk here about the differentiation between the flicker, which is the typical flicker you can see on the page where there's a render versus the wrong content, and what the statistical side of this product really cares a ton about, which is the situation where a user sees the original content and then it changes. You can imagine the title of my site says one thing, and after a second it says something else; that completely ruins experiments and the results.
I
So Optimize has some hiding: once it loads, it targets content that may not have been parsed in the browser yet, because the script can load fast but the element can be added later. That's targeted hiding, and that's a much better way to hide. But the problem is that we need to be in the page to do that. Before that, for the synchronous case, we had that anti-flicker snippet that was mentioned.
I
That one sets opacity on the whole page, and we made it configurable and customizable, giving users options to hide only the part of the page where they are doing experiments, so that at least the user will get the skeleton of the site and have the experience that the site is interactive. And you can configure timeouts; as was mentioned by Tim, there's a timeout for the worst case. And also, not recently, for some time now,
we've also been covering the SPA use cases. This is another area, and for SPAs you would expect this to be a much trickier story, because the page just builds itself, and you would hope to get a more developer-oriented experimentation solution there. But unfortunately, you still have marketers who need to go and make small changes in these virtual pages in SPAs.
I
Optimize has limits for free users, up to five experiments. That's perhaps one of the reasons you don't have a ton of things happening and a ton of overhead on sites, even though these limits are not really reached that much, usually. So what I'm trying to say here is that very often these tools are used to do a few things that have the highest impact.
I
You can imagine, changing the hero image on a site can have a big impact on the experience of users and on conversions, and it's a relatively simple thing, but hard to get right, so they need to iterate. And a story I want to share, to give you an example of why users find value in these tools, is that recently we did this feature in Google Optimize for COVID banners, for sites that would be targeted.
I
So, as things were evolving in the space of closing businesses, the closing rules were different from country to country, state to state, or even city to city.
I
Doing something like that by going through your development process is very difficult for many of these businesses. But another thing to say here is that, clearly, these tools are not always active on the site even when they are installed, and what we are seeing is that very often businesses will do some experiments and then try to implement the winners. That's it from me, almost on time. Any questions?
A
Thank you, thank you. So one question regarding usage: you mentioned usage with Google Tag Manager, which is due to convenience, but then it adds an extra hop or two to actually loading the render-blocking content, yeah.
I
Yeah, I mean, obviously there's going to be a range. Certainly, as Tim said, we do try to do a good job in our documentation, and even the UI in Google Optimize warns them against that.
I
I suspect that what happens is that users find it's better to pay a small extra price while they're running the experiment, if they're going to run it, say, for a week, rather than have the snippet installed on their site continuously while they're not running experiments. So there are pluses and minuses there; the flexibility of adding it while doing the experiment and then removing it is probably a reason, but we do try to warn them that they would get better performance by not going through Tag Manager, for sure.
A
Okay, cool, thanks. Any other questions?
C
Maybe something a bit tactical: you mentioned that sometimes people don't even run experiments for a long time, but they still have all the machinery in place, and I was wondering if maybe the script could notice that, for this particular client, there is no experiment running, and therefore could respond with, I don't know if it makes sense, but a 204, like no content, and then have a stale-while-revalidate.
I
Yeah, I mean, that might make some sense. There are some details there that have prevented us from doing that. First of all, the empty container script is relatively small, like 20 KB or something, so we kind of thought, if you do the request to the server, you are going to get back something, and we do have some browser caching anyway.
I
People wondering why things don't work as they expect is another concern. Some other details: we offer an API on the client side that tells you whether you have experiments running, or which experiments are running. Some people implement things themselves: they use the targeting of Optimize, but they want to do the implementation themselves, and that's probably a good story for things like SPAs, or features that are not on the very first rendering path. But maybe, yeah.
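For what it's worth, a hedged sketch of what that 204 plus stale-while-revalidate idea could look like if the container response were produced by an edge handler. The status choice, header values, and helper parameters are illustrative assumptions, not something Optimize does today:

    // Hypothetical edge handler for the container-script request.
    async function serveContainer(hasLiveExperiments, buildContainerJs) {
      if (!hasLiveExperiments) {
        // No experiments running: empty response the browser can keep reusing.
        return new Response(null, {
          status: 204,
          headers: { 'Cache-Control': 'max-age=3600, stale-while-revalidate=86400' }
        });
      }
      return new Response(buildContainerJs(), {
        status: 200,
        headers: {
          'Content-Type': 'application/javascript',
          // Serve the cached copy immediately, refresh it in the background.
          'Cache-Control': 'max-age=300, stale-while-revalidate=3600'
        }
      });
    }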
A
Thank you. So I think that, with that, we'll take a five minute break and then reconvene at, let's say, 15 after the hour, to talk a bit about a probably silly proposal, or, yeah, a thought experiment that I have, and then brainstorm a potential solution based on everything we heard up until now. Okay, so yeah, see you all in five minutes or so.