From YouTube: LaunchDarkly AMA for Buyer's Experience site
A
Hi everybody, this is the Digital Experience team, and we're going to be talking about LaunchDarkly in terms of the buyer experience site. We recently started adding support for running A/B testing on our site, and we have a group of people who've never worked with the tool before, so we're going to be asking questions and trying to work through the best way it'll work for us. The first point I have: I have an MR that I'm working against right now.
A
As listed, I talk about the wrapper component for the JavaScript SDK, a little bit about the interface, and some limitations. How do folks feel about me just walking through the interface to start off? Does that sound good? Okay. So if I go to LaunchDarkly and move over here, I'm logged in already.
A
So, as I talk about in the notes right here, a feature flag is used to decide what variation to show users. If I click into, say, this test flag here, we'll see that we're running a percentage rollout for whatever this test flag means: 50% of users will be shown one thing, and 50% of users will be shown something else.
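For context, here's roughly what evaluating that flag looks like from the site with the JavaScript SDK; this is a minimal sketch, and the client-side ID and user context are placeholders rather than our real configuration.

```javascript
import * as LDClient from 'launchdarkly-js-client-sdk';

// Placeholder client-side ID and an anonymous user context.
const client = LDClient.initialize('YOUR_CLIENT_SIDE_ID', { anonymous: true });

client.on('ready', () => {
  // LaunchDarkly buckets this user per the percentage rollout and returns
  // the variation they fall into; the second argument is the fallback
  // value used if the flag can't be evaluated.
  const variation = client.variation('test-flag', false);
  console.log('test-flag variation:', variation);
});
```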
A
If you go to the Experimentation tab, you'll see metrics, and if I click on Manage Metrics, you'll see that I have set up different metric names. We can then track what a user does within the buyer experience site: say a user hovers over something, scrolls something into the screen, or decides to click on a CTA, there are ways for us to track that. And right here you have Start Recording, which is essentially when you want LaunchDarkly to start looking at the experiment that's going to be running through a period of time, per the best practices.
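Those metric names correspond to custom conversion events sent from the site. Here's a minimal sketch of reporting a CTA click, reusing the client from the sketch above; the selector and event key are hypothetical, and the event key has to match a metric defined under Manage Metrics.

```javascript
// Report a CTA click as a custom conversion event so LaunchDarkly can
// attribute it to whichever variation this user was served.
document.querySelector('.signup-cta').addEventListener('click', () => {
  client.track('cta-click'); // hypothetical event key
});
```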
A
I'm actually going to jump into our old project, because we have a lot of existing work that we've done. If we look at this pricing experiment that we ran a year ago, in this feature flag, and go to the Experimentation tab, we'll see all of this information. I just thought this was a nice way of showing something we've run before, and something to know is that you can run various metrics against one feature flag.
A
I think that's most of what we would see, between those two sides: the Experimentation tab and the Feature Flags tab. I found myself spending a lot of time between them as I was trying to set this up. So that's a quick rundown of what the UI looks like. Point three that I have: we currently have a limited number of seats in the platform, so I talked about how we want to work around this limitation.
A
Is
there
a
way
for
us
to
maybe
add
more
seats?
Is
that
something
that
we
can
do?
If
not?
I
just
said
that
in
the
proposal
to
give
access
to
the
dris
for
people
using
the
tool
and
then
give
it
to
engineering
management
and
team
leads,
and
then
we
will
probably
need
to
rotate
seats
out
and
there's
room
for
iteration
on
this
here.
C
Oh yeah, so there are on-page metrics, like clicks and page views, that we can track directly in LaunchDarkly. Team, I'm wondering about things that are more down the funnel. Let's say someone submits a form and then it goes through the whole Salesforce pipeline of becoming qualified by the sales team.
A
So I tried to write a comment on this, and I think I answered that part of the question in terms of pushing things to the data layer; that would definitely be a good idea. I'm not sure if there's a way for us within Salesforce. I would imagine there could be, but we would have to execute LaunchDarkly code on Salesforce itself, and I don't know if that's possible; that's dependent on their tool, not ours.
C
Right, yeah, that was my point too. There was a long discussion thread about the ROI calculator, and we were going back and forth on what to measure, but I did propose just doing the initial form submission, because it's easier, and what happens afterwards isn't really on our team; it's more the sales team trying to qualify that lead. But if we wanted to, let's say, can we do something like this?
C
This is the contact form, and right now there are two hidden fields. Could we just add another hidden field that contains the value of the experiment? That way, when someone submits this form, the field is then within Salesforce somehow. I know that's cross-team collaboration, and that's probably going to be difficult, but is it possible to do something like that?
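Mechanically, our side of the hidden-field idea could look like the sketch below, reusing the LaunchDarkly client from the earlier sketch; the form selector, field name, and flag key are all hypothetical, and getting Salesforce/Marketo to keep the value is the cross-team part.

```javascript
// Stamp the variation the visitor saw into a hidden input so it rides
// along with the form submission into the marketing stack.
const form = document.querySelector('#contact-form'); // hypothetical selector
const hidden = document.createElement('input');
hidden.type = 'hidden';
hidden.name = 'experiment_variant'; // hypothetical field name
hidden.value = client.variation('roi-calculator', 'control');
form.appendChild(hidden);
```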
E
That would actually be more in Marketo: you'd have to have the hidden field set up in Marketo, and then you'd have to inject that value.
C
Yeah, okay. We can talk later about what KPIs we want to measure, but it sounds like yes, we can measure something down-funnel, it's just a lot of cross-team work. And to my second point: I would like it to integrate with GA as well. That way, when someone asks, "Hey, how's this page doing?", and there's an experiment version and a control version, it might be good to have that in GA.
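One hedged way to get the variant into GA would be sending it as an event parameter with gtag; the event and parameter names below are placeholders, not an agreed-on schema.

```javascript
// Record which experiment arm the visitor saw, so page metrics in GA
// can be segmented into experiment vs. control.
gtag('event', 'experiment_exposure', {
  experiment_name: 'roi-calculator', // hypothetical names
  experiment_variant: client.variation('roi-calculator', 'control'),
});
```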
B
Yeah, I was wondering, with our current implementation of LaunchDarkly on the Nuxt site: can we have A/B/C tests, or optional A/B/C tests? Because most of the time we'll have A/B testing. I see, Javi, you have a point there.
A
Yeah, so as I was sharing my screen, I don't know if you saw that there is a feature flag that I named roi calculator. Let me share my screen again; that was a little easter egg, but I did have a feature flag set up for the ROI calculator here, if I click on that.
A
We would capture the number value in this instance: if we get the value of one, show one; the value of two, show two; the value of three, show three, in terms of the way the slots work in the Vue component. We can just add a third slot, and that would be the dummy solution: adding a third named slot to solve for that, per the notes for best practices.
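As a rough sketch of that mapping (the flag key and slot names are illustrative), the numeric flag value just picks which named slot the wrapper renders:

```javascript
// Map the numeric flag value (1 | 2 | 3) onto a named slot.
const variation = client.variation('roi-calculator', 1);
const activeSlot = `variant-${variation}`;
// In the wrapper component's template, render only the matching slot:
//   <slot v-if="activeSlot === 'variant-1'" name="variant-1" />
//   <slot v-else-if="activeSlot === 'variant-2'" name="variant-2" />
//   <slot v-else name="variant-3" />
```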
A
In the notes I do talk about how the more you split up tests, the bigger the sample size you need, because you're splitting the traffic more ways. So we should have an arbitrary hard limit based on the traffic that we have; we shouldn't have 30 different variations running at the same time. But I hope that answers your question.
A
Nathan, do you want to vocalize your subpoint? You just...
F
Said it, but yeah. I figure we'd probably need hundreds or thousands of users to be able to split three ways and get statistical significance, but I think that's an issue with A/B testing in general.
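To put rough numbers on that, the standard two-proportion z-test approximation gives the order of magnitude; this is textbook statistics, not anything LaunchDarkly-specific.

```javascript
// Approximate users needed per variant to detect a change from baseline
// conversion rate p1 to target rate p2 at 95% confidence and 80% power.
function sampleSizePerVariant(p1, p2, zAlpha = 1.96, zBeta = 0.84) {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// e.g. detecting a lift from a 3% to a 4% conversion rate:
console.log(sampleSizePerVariant(0.03, 0.04)); // ≈ 5,292 users per variant
```

Splitting three ways instead of two means each arm still needs that many users, which is the point above.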
E
All right. Javi and I have chatted about this a little bit, but I want to make sure we vocalize it to everyone: what are some of the limitations you found using LaunchDarkly in a Nuxt environment, mostly around the API calls, loading, page rendering, stuff like that?
A
Great question. I wrote this whole paragraph because it's a mouthful, but essentially: we have a statically generated site, so the only way to change between two different variations of something is for it to happen on the client, meaning we have to make a request to LaunchDarkly's servers to get the value of a feature flag. That means that whenever someone loads a page, we need to wait for that to happen, and while that's happening, something is going to be popping onto the screen. This affects page load time.
A
Say there's latency between the rest of the page and the contents of the experiment. We have a decision to make, essentially: do we want to slow down the entire page, wait for that request to finish, and then show the contents of the entire page? Or do we want to show a blank area while the value of the feature flag is being evaluated?
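A middle-ground sketch of that trade-off: wait briefly for the flag, then fall back to the control so the page is never blocked indefinitely. The client-side ID, flag key, timeout, and render function are placeholders.

```javascript
import * as LDClient from 'launchdarkly-js-client-sdk';

const client = LDClient.initialize('YOUR_CLIENT_SIDE_ID', { anonymous: true });

// Race flag initialization against a timeout so a slow LaunchDarkly
// request can't hold up rendering indefinitely.
const ready = new Promise((resolve) => client.on('ready', resolve));
const timeout = new Promise((resolve) => setTimeout(resolve, 500));

Promise.race([ready, timeout]).then(() => {
  // If the flag hasn't loaded yet, this returns the control fallback.
  const variant = client.variation('hero-experiment', 'control');
  renderHero(variant); // hypothetical render function
});
```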
A
This
will
negatively
affect
page
feed
metrics.
So
that's
just
something
that
we're
going
to
have
to
work
through
there's
ways
around
that,
but
it
adds
a
lot
of
complexity
to
our
next
site.
Essentially,
we're
going
to
have
to,
we
would
have
to
add
servers
to
our
front
end
so
that,
like
the
server
can
run
that
feature
flag
request
instead
of
having
the
client
do
it
and
that
yeah
we
can
talk
about
the
feasibility
of
that.
But
that's
that
that
would
be.
A
Ideally,
people
have
talked
about
like
having
loading
states,
adding
loading
states
to
things
so
that
like
because
we
have
view
components,
we
could,
theoretically,
you
know,
add
css
and
whatnot
as
components
are
loading
in
for
various
buttons
and
things
to
do.
You
know
anyway,
lots
of
con.
I
I
don't
know
if
I
like
that
either,
because
that's
also
a
lot
of
complexity
added
for
just
this
use
case
of
a
a
b
test,
so.
E
Yeah
yeah,
some
of
the
bigger
ones
we'll
run
into.
Even
if
we
let's
say
we
don't
like
mateo's
idea
with
skeletons
or
some
things
like
anything
I
hate
sam
is
like
above
the
fold.
Anything
in
the
hero
section
is
gonna,
be
the
most
dramatically
impacted.
If
we
change
text
or
images
or
button
styling,
someone
will
see
a
flash
of
something.
So
it's
coming
up
with
like
what
is
the
right
solution,
especially
in
those
first
load
areas
on
the
page.
G
Yeah, I was wondering if there would be value in having a LaunchDarkly onboarding issue, to ensure team members are up and running to run a test. I also made another note: I see a great first iteration on the docs there. What do y'all think?
E
I
was
gonna
say
I
I
think
another
boarding
issue
wouldn't
necessarily
be
bad.
I
think
utilizing
lunchtime
this
quarter
ironing
out
some
of
the
kinks.
So
then,
when
we
do
create
the
onboarding
issue,
we
have
a
better
understanding
of
like
how
we
use
it,
how
we
want
to
use
it
and
then
moving
forward.
We
can
just
kind
of
work
between
the
docs
and
an
onboarding
issue.
H
I have the next point. I just wanted to ask if there's a vanilla way to do a test in the package.json file, to have two lines: control and experiment. I think that would help us do A/B testing in the navigation repo.
A
Yes, great question. I imagine this has to do with, and I see you wrote it here, variations of the navigation. I don't think that's possible. Something we could do for the navigation is pass props to it, so that from our end, from the Nuxt site, we can have variations. But the issue we have there is that, because the site is statically generated, we'll run into problems.
A
I
don't
have
a
great
answer
for
you
here.
I
don't
know.
If
folks
have
a
good
answer,
I
would
imagine
that
we
would
need
just
again
for
us
to
run
experiments
like
this.
I
would
imagine
we
need
to
server
side
render
something
that
would
be
the
easiest
way
of
running
stuff
like
this,
in
my
opinion,
but
folks
feel
free
to
chime
in.
If
we
have
prepositions
here.
F
I think you might be able to. I think you can conditionally import version numbers, or conditionally choose version numbers of imports, and maybe hit the LaunchDarkly API before you do that. But I just don't know how it would work in the build process, because all the other LaunchDarkly stuff would have to get loaded in first. It might be possible somehow, but we'd have to look into it.
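A rough sketch of that idea with the server-side Node SDK at build time; the flag key and build context are hypothetical. One caveat: a flag resolved at build time gives every visitor of that build the same variant, which is part of why it's awkward for A/B testing.

```javascript
// build-flags.js: evaluate a flag once during the build and emit the
// chosen version for the bundler to consume.
const LaunchDarkly = require('launchdarkly-node-server-sdk');

async function resolveNavVersion() {
  const client = LaunchDarkly.init(process.env.LD_SDK_KEY);
  await client.waitForInitialization();
  // A build has no real user, so evaluate against a fixed build context.
  const version = await client.variation(
    'navigation-version',      // hypothetical flag key
    { key: 'build-pipeline' }, // fixed context for the build
    'control'
  );
  client.close();
  return version;
}

resolveNavVersion().then((v) => console.log(`NAV_VERSION=${v}`));
```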
H
Yeah, from my understanding, this quarter we are not going to conduct any tests there; it should be just pages in the buyer experience. So yeah, we could think about it.
F
Yeah, I guess, with the latest A/B testing conversations going around, it seems like a lot of the metrics now we're going to be looking at in-house, either through Google Analytics or Snowplow or wherever else we get the data from, and sometimes it's a conversion that's later down the funnel.
F
So
you
can't
really
track
that
and
launch
directly
and
so
I'm
starting
to
realize
that
our
dependency
on
on
kind
of
the
analytics
portion
of
launch
starkly
are
not
really
there
anymore
and
so
we're
only
using
launch
darkly
as
like
a
toggle,
and
so
I'm
wondering
if
we
could
use
get
labs,
feature
flags
or
google
optimize
like
tools
that
we're
already
paying
for
to
save
some
money
annually.
G
Something came up this morning from Michael: that we should implement feature flags on the buyer experience repo. So we are going to use that feature, and it will be valuable there. I don't know if we'll use it for A/B testing; I think that's also a larger discussion.
F
Yeah
from
my
understanding
launch
directly
is
pretty
expensive
and
I
don't
know,
maybe
we're
not
financially
tight.
I
don't
know,
but
if
we
can
have
our
in-house
solution,
I
mean
we
could
do
whatever
we
wanted
with
it,
but
again
launch
darkly
works.
So
I
guess
we
can
use
that
for
now.
I
was
just
curious.
G
Would
I'm
going
to
put
another
note
here
seems
like
it
might
be
a
benefit
to
do
a
spike
into
investigating
how
feature
flags
would
work
for
a
b
testing
to
see
what
the
difference
is.
There.
F
Yeah
or
even
optimize,
google
optimize
dennis
mentioned
one
of
the
sub
points
that
looks
like
we
do
have
the
free
version
of
it,
and
so
for
anyone
that
doesn't
know
optimize
is
kind
of
google's
a
b
testing
tool,
and
since
we
have
google
analytics
and
we
just
updated
it
to
the
newest
version,
I
mean-
maybe
we
could
piggyback
on.
I
don't
know
if
they
can
chat
with
each
other,
I'm
not
sure
how
it
works.
F
We'd probably have to track metrics through GA, so maybe on the feature flag we attach a certain GA attribute, and then we can compare two GA attributes over a certain time frame, if we're looking at button clicks or something like that. Just an idea; Dennis probably knows way more.
C
Yeah,
so
we
have
an
account
with
optimize.
You
can
do
a
b
testing
multivariate
redirect
is
similar
like
if
you
have
two
pages.
Personalization
is
probably
what
we're
going
to
use
this
one
for
if
anything,
because
you
can
use
ga
data
to
show
different
pages
depending
on
the
audience
so
yeah
I
mean.
I
know
that,
there's
a
b
testing
and
then
personalization
is
something
else
completely
different,
but
it's
free.
We
have
it.
We
can
set
something
up
if
needed,
it
doesn't
hurt.
C
I
don't
think
but
yeah
again
to
matteo's
point:
it's
not
a
feature
flag.
So
if
there
is
a
winning
result,
then
we
would
have
to
implement
that
separately.
So
that's
the
only
qualm
this.
A
Yeah,
I
I
think-
and
I
don't
know
if
folks
feel
this
way,
but
this
is
how
I
felt
regarding
to
the
discussion
is
that
we
have
something
that
looks
like
for
the
most
part
work.
So,
let's
use
that
and
investigate
other
things
and
see
like
if
there's
more
benefit
to
going
elsewhere.
I'd
imagine
for
this
quarter
like
it
would
make
sense
to
just
use
a
thing
that
already
works
and
see
where
that
leads
us,
especially
after
a
quarter
of
using
it.
B
I also have a question. In my previous role, we also looked at Optimizely, but one of the challenges we had was that there was a bug in it. I don't know if it was just our implementation or what we did, but there was a flash that people would see. That's maybe something to look out for.
C
Yeah, they have this thing called the anti-flicker snippet, so it's part of the implementation.
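The same idea, sketched generically for our setup: hide the experiment area until the flag resolves, with a hard timeout so a slow request fails open instead of blanking the page. The class name, timeout, and client are assumptions carried over from the earlier sketches.

```javascript
// Pair with CSS such as: .flag-pending .hero { visibility: hidden; }
document.documentElement.classList.add('flag-pending');

const reveal = () => document.documentElement.classList.remove('flag-pending');
client.on('ready', reveal); // LaunchDarkly client from the earlier sketches
setTimeout(reveal, 1000);   // fail open after one second
```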
E
Go ahead. Let's just say that's coming; we were talking about this earlier too. With any sort of third-party tool we use that has to ping another server and send back a feature flag, I think there's a very good chance we'll run into that sort of flicker or flash. No matter what, that's something we have to work through, to make sure we minimize any issues there.
B
Yeah,
I
was
just
wondering
I
know
javier
saw
in
your
note
in
your
best
practices.
We
shouldn't
have
too
many
variations.
I
was
just
curious
like
what
your
definition
of
too
many
is.
A
Let
me
check
my
history,
that's
history,
but
I
I
essentially
just
like
looked
up
like
best
practices
like
for
running
a
b
tests
like
on
google,
and
they
talk
about
how
like,
like
the
how,
when
you
add
more
variants,
you
add
like
more
probability
that
things
happen
by
chance
because,
like
the
sample
size
are
getting
and
they
had
like
actual
numbers
based
on
stuff.
If
you
click
on,
I
believe
there's
a
link
that
I
have
in
the
notes.
A
I
could
be
wrong
to
track
how
long
you
should
run
an
experiment
for
based
on
the
variations
and
all
this
stuff
that
you
have
and
the
also
the
amount
of
increase
in
metric
that
you
want
to
have.
I
think
that's
like
I
don't
think,
there's
like
a
one
size
fits
all.
It
depends
on
like
how
many
page
views
you're
getting
how
many
experiments
that
you
have
how
much
increase
in
a
metric.
Do
you
want
to
see
so
I
maybe
like
for
this
quarter.
A
I
think
that's
fairly
reasonable,
given
our
quarter
yeah.
C
Yeah,
I
think
the
roi
page,
we
probably
can't
do
abc
because
it's
it
doesn't
get
much
traction
whereas,
like
on
the
home
page,
we
could
probably
get
away
with
doing
multiple.
H
I have the last point, I think. It could be interesting to customize the traffic that the control and experiment take: for example, conducting tests on not every user but, you know, 20% of the users, and then dividing that into control and experiment. And Nathan said it's possible.
A
Yeah,
I
think
that's
that's
a
good.
It's
a
good
point.
One
of
the
things
that
I
wanted
to
do
was
try
to
set
up
an
experiment
live.
I
tried
running
through
the
things
and
then
I
got
blocked
by
something
in
the
experiment.
Tab,
and
this
has
to
do
with
nathan
like
that
one
thing
with
like
user
segments,
not
letting
me
run
experiments.
A
I
don't
know
if
you
remember
that
issue
we
ran
into
a
while
back,
but
I
reached
out
to
support
if
they
could
unblock
me
with
that,
because
I
think
that's
something
on
their
end.
That's
not
something
that
we
control
have
they
responded
or
no
I
mean
they
responded
to
me
before
and
then
I
I
got
unblocked
part
of
the
way,
but
they
didn't
fully
unblock
me.
That
makes
sense,
so
that's
something
that
we'll
just
have
to
reach
out
to
them.
For.
D
I
was
just
thinking
if
we
ever
for
some
reason
need
to
test
like
four
variations.
We
could
do
like
a
little
tournament
of
components
where
we
test
two
of
them
and
then
with
us,
the
other
two
and
we
go
with
the
winners.
You
know,
but
I
don't
think
that
we
will
ever
need
to
do
something
like
that.
But
just
throwing
out
that
idea.