From YouTube: Progressive Delivery UX/PM meeting 2020-07-28
Description
Meeting to improve alignment between Product Management and UX for Progressive Delivery
A
Most important point, but I think we're going into Spinnaker later, and this is part of that anyway. With the validation track, like, all right, A/B testing: I'm having the last interview today, if I'm correct, and after that he's finishing that up, doing the synthesizing and so on. But I'm also wondering about what is next. I've been participating in, for example, the scorecard, or category maturity scorecard, research from Create, and that has been, you know, a very valid option, I think, for Progressive Delivery.
B
I don't remember why I wrote that, but I can go with the flow. So, first of all, we do have a scorecard coming up. It's been pushed out a few times, but we didn't want to rebase the maturity level of CD.
B
I'm just going to remind everyone why this was important: we moved from basically choosing our own maturity to actually having users tell us what they think, and we haven't done that for any of the Progressive Delivery stuff. And CD is currently at "complete", and I kind of think that we're not, for several reasons. One of them is the fact that we just started deployment to the cloud, to the native cloud, which is, like, huge and basically a basic operation.
B
That's needed today, so I feel like we can't wholeheartedly say that we're at "complete" with, you know, some really basic functionality that's missing over there. So I want to rebase it and, like, work on that. So, first of all, we need to figure out where we are, then we need to do jobs to be done, and then we need to check that.
B
But I think that's the flow, and I wanted to start with CD because I think that's the most important one for Progressive Delivery. I think it's our core business.
B
So our North Star metric is combined with the Release Management stage, as a group. So we have one North Star metric, which is now called something-MAU, but I can't remember which letter comes before the MAU, because I'm very confused about the MAUs. I'm pretty sure it's SMAU now, because it's stage, and the one that we chose was the number of deployments.
B
So we want to see a constant increase in the number of deployments month after month, in order to see that people are adopting the Release stage, because deployment comes after CI, after Verify. So if you're deploying, you are definitely using Release. Okay, so that's the SMAU, and I think what we need to figure out is what's going to make us increase that more, I think.
B
That's really valuable research, and this is a really good topic for now, because now I'm supposed to choose my, I want to say GMAU, because I think it's group, but there's another MAU that needs to differentiate what Progressive Delivery has contributed to the SMAU, and that actually goes into the categories. So if someone uses feature flags, they're using Progressive Delivery; if someone's using Review Apps, they're using Progressive Delivery. So out of the four categories we need to choose one metric.
B
The second one is feature flags. So I have an issue, and I can look for it, to see if we can count the number of toggles on and off. There's a number of things that we can choose to count in feature flags, like how many people are adding feature flags or how many people are using the strategies, but I think the one that indicates activity is the toggles.
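To make the distinction concrete: counting toggle events, rather than flags defined, measures activity. A minimal sketch of that kind of counter, assuming a hypothetical event log of flag state changes; the field and event names are illustrative, not GitLab's actual usage ping schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical toggle events: (project_id, flag_name, new_state, timestamp).
# In a real system these would come from an audit or event log.
toggle_events = [
    (101, "dark_mode", True,  datetime(2020, 7, 1)),
    (101, "dark_mode", False, datetime(2020, 7, 3)),
    (202, "new_ui",    True,  datetime(2020, 7, 5)),
]

def monthly_toggle_counts(events):
    """Count toggle on/off events per calendar month.

    Toggles (state changes) indicate *activity*, unlike merely having
    flags defined, which only indicates adoption.
    """
    counts = Counter()
    for _project_id, _flag, _state, ts in events:
        counts[(ts.year, ts.month)] += 1
    return counts

print(monthly_toggle_counts(toggle_events))  # Counter({(2020, 7): 3})
```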
B
Then we have CD, and we already discussed that CD is huge: what do we count? Is it going to be the extensions to the YAMLs? Is it going to be deploying to the cloud? That's one of the options that I suggested. So engineering has already added a way for us to differentiate how many projects are using one of two things. One is that new launch type that we introduced.
B
So if you use a GitLab CI YAML template that we provided, or you use Auto DevOps, and you use a launch type that's associated with AWS, like ECS or Fargate, it will count it and we'll know that that's your deployment target and that you're using Progressive Delivery. The other one, I don't know if it's fair to count, because this could also be Release Management, but it's anyone who has AWS-type environment variables, the environment variables that we added a few milestones ago.
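A sketch of those two signals as a heuristic, with made-up field names (`auto_devops_launch_type`, `ci_variables`); the real detection lives in GitLab's backend and is only described, not shown, in the meeting:

```python
AWS_LAUNCH_TYPES = {"ECS", "FARGATE"}

def uses_progressive_delivery(project):
    """Heuristic: AWS launch type in the CI config, or AWS-typed variables."""
    # Strong signal: the project deploys via a provided template or
    # Auto DevOps with an AWS launch type.
    if project.get("auto_devops_launch_type") in AWS_LAUNCH_TYPES:
        return True
    # Weaker signal: AWS-flavored CI/CD variables (could also indicate
    # Release Management usage rather than an actual deployment target).
    return any(v.startswith("AWS_") for v in project.get("ci_variables", []))

projects = [
    {"auto_devops_launch_type": "FARGATE", "ci_variables": []},
    {"auto_devops_launch_type": None, "ci_variables": ["AWS_ACCESS_KEY_ID"]},
]
print(sum(uses_progressive_delivery(p) for p in projects))  # 2
```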
A
Okay, that was a lot, but that sounds great. Marcel and Lori, what are your thoughts on that? Does it sound good? Do you have any additional ideas? It seems we have some kind of, like, things that are the way to go, where our category maturity is up to be done together with our jobs to be done.
A
To kind of, you know, flow that into the right side. But then it is indeed about figuring out what the leading metrics are to press on, to come down to. What are your general thoughts on this?
C
Well, I know nothing about our competitors, but if I was the one who was supposed to pick one of these, I would look at our competitors and see which is the thing from this list that would differentiate us, and/or put us in the same field: if they're ahead of us, put us with them. And pick that one. I don't know what that would be, but that's how I would do it.
B
They have, like, so many different features that it's hard to choose. But I think if we look specifically at the ones that the users were really interested in, it was things like multiple Kubernetes cluster support and multiple cloud support, and we're only at AWS, so we're, like, really at the beginning of our journey. So I don't know; I think it's a good idea for an aspiration, but I'm not sure it's the right thing to count.
C
I would assume that's what they're doing, looking at the competitive landscape and picking the one, but it might not be. It may be something that they've worked hard on, and they feel like it's in a good space, and they want to measure it to get data on. I don't know. As far as I know, the product management org was supposed to help define some of that, like how we choose which to test, and then say maturity is moved based on the thing that we chose. But I don't know.
B
But when you go into the individual features, there are just so many that it's really hard to choose, and any one of them is an indication, right? So I talked to Kenny today, and he sent me this new link that I have not yet talked to Chase about, so I'm a little bit hesitant that you're going to post this and it's going to be a surprise to him, but apparently there's a process for engineers.
B
So, Marcel, you know that we finish the DoD on a feature when they have documentation added, right? We can't release a feature without documentation. And there's a new suggestion, which we have not committed to yet, but something that I want to suggest to Chase, which is to also add to that DoD the option to add usage ping, meaning, you know, some way to measure that people are actually using it. So we made this new functionality: how do we know anyone's actually using it?
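The idea behind adding usage ping to the DoD is that every new feature ships with a counter that increments when the feature is exercised. A minimal sketch, with an in-memory dict standing in for whatever store the real usage ping aggregates from, and an invented event name:

```python
# Invented event names and storage; the real usage ping pipeline differs.
usage_counters = {}

def track_usage(event_name):
    """Increment a named counter; the periodic usage ping would report it."""
    usage_counters[event_name] = usage_counters.get(event_name, 0) + 1

def deploy_to_ecs(project_id):
    track_usage("auto_deploy_ecs")  # instrumentation added as part of the DoD
    # ... actual deployment logic would go here ...

deploy_to_ecs(101)
deploy_to_ecs(202)
print(usage_counters)  # {'auto_deploy_ecs': 2} -> payload for the usage ping
```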
B
And that's this new process that I just linked in the agenda. Hopefully I can get Chase on board and we can start measuring everything, starting with the new stuff, right? Going back to the old stuff will be new issues. But I'm excited about that, because I have no idea if everything that we've worked on in the past few months is...
B
It's true, and it's supposed to be a simple process. I assume that the first ones will be, you know, a little bit difficult and will take some time but, as I understand it, a lot of the different engineering groups have already started this and they feel really comfortable with it. So if we can get to that point, I'll be really happy.
B
So some of the things already have stuff that we can count. For example, I linked here the AWS launch type, and when I started telling the group, hey, I want to measure this, it turned out that there wasn't a way, and they needed to add a database value. Now, we know it's not visible yet in any dashboard, but at least we have the data, which is, like, the most important step.
B
I think, because we can ask at any time and we can just look at the value. I would like to monitor it over time, but looking at a value manually is good enough for me at the moment. And when I asked about the AWS variables, those already existed. So I don't really know, you know, what is there and what is not; I just keep asking engineers.
B
Actually, Marissa, I have to tell you that working here you get instant feedback that you never get in any other company. So when we launched the Auto Deploy to ECS, I got a bunch of feedback. People were like, "I followed the guide and it doesn't work", which means they tried it, right? "I followed it and it doesn't work with Fargate." Hey, we never said it was going to do that, but it means that, you know, there's demand. There was so much feedback.
B
I was so happy with it. Also, like, merge trains. Merge trains was such a flop when we released it; I had to turn it off, like, three times on .com, because it was just such a mess. And, you know, it's a great feature now. And we had to turn it off, not me, because it's Verify's now, but you know, I'm still in the loop. So we had to turn it off a week or two ago, because there was some kind of Gitaly problem, and people were like, "No!"
B
And, like, people get so used to it. It's great feedback, really.
D
Yeah, when people are trying to use it and aren't able to, it shows they're interested; they want it. Like, if they weren't interested, they would just go "meh, I'm not gonna try", not gonna look at that, say "that's interesting" and, you know, walk away from it. They definitely want a lot of these. It's just that when it does work, when it works perfectly and does exactly what people want, it's rare that people come back and say "perfect".
B
It also gives us a great indication. So we did a refactor for feature flags, as you know, and we didn't know if we could refactor. So one of the things, before we decided to do the refactoring, we were like, hey, how many projects are actually using feature flags, like how many people are going to be affected by this?
B
Well, to be honest, I also just inherited it. I joined in June, and I think the first milestone was user ID or percent rollout, I can't remember, or maybe they were in the same one. It was in 12.2. So maybe I was here for a month, and I couldn't do anything that I wanted to plan, because of just the way that it was designed.
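Those two early strategies are worth sketching, since they come up again in the A/B testing discussion. A percent rollout typically works by hashing a stable user identifier into a bucket, so each user gets a consistent answer; this is a generic sketch of the technique, not GitLab's exact implementation:

```python
import hashlib

def percent_rollout(flag_name, user_id, percentage):
    """Stable percent-of-users rollout: hash the (flag, user) pair so each
    user gets a consistent yes/no answer, and roughly `percentage` percent
    of users fall into the enabled bucket."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage

def user_id_strategy(user_id, allowed_ids):
    """User ID strategy: enable the flag only for an explicit allowlist."""
    return user_id in allowed_ids

print(percent_rollout("new_ui", "user-42", 25))   # stable answer per user
print(user_id_strategy("user-42", {"user-42"}))   # True
```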
A
I wanted to say something, but it is, like, escaping my mouth at the moment, and I wanted to speak to it. What was it again? I'll come back to it at a later point. It seems we're in a good spot, though, so we're geared at the same thing: metrics would be super valuable. So I'd say we're in a good spot.
A
So the things that we need to have: we need to build up that initial jobs-to-be-done framework, which can at least kind of pinpoint, you know, which scenarios we want to focus on for our maturity scorecard grading. When I was speaking with Pedro from Create, he was saying, like, we had a lot of scenarios for jobs to be done that we wanted to test. I say "we" because I was doing some of the interviews for him.
A
But what is the time frame we want to think of? What is feasible? I'm thinking of, like, all right, we need at least a full month for us to kind of get to that jobs-to-be-done point where we are kind of confident in it, and then we need to set up, like, the research with the scenarios. I'm kind of inventing off the similar framework that Create is using, as they've already done it.
B
I want to propose a shortcut. So we're working now with marketing on the use cases for CD; I would start with that.
B
We're working now on mapping out the CD use case. The idea behind that is to be able to sell more, but since we're mapping out the use cases there, and GitLab's strong points, and all those things, I think let's just piggyback on that.
A
And, like, this is the thing you showed two weeks ago, right? And it is something I was wanting to ask Lori about. So do you have a link towards that, by any chance, Orit? Yeah? You do. So, to give you, Lori, the insight on this marketing...
C
Okay, yeah. So I see what they're doing: they're just trying to come up with the messaging to use in marketing to get people to use the thing. So we can start with use cases, or we can start there, but you want to go back up to a job to be done, and the job to be done has to be solution-agnostic.
C
Ultimately, they all want to make their lives easier, but it's like: what part of their job are they trying to do to make their life easier? So you can go up, and then you can still go back to that middle area of the user scenarios and go down to the task level too. So these can be good starting points to do both of those activities.
A
"Can you elaborate on your answer?", and then we would write that down, and then off to the next task. And when they didn't complete it successfully, it would stay, like, you would have that period of pause where it's, like, an eerily empty conversation: all right, on to the next task. Because that's how we laid out this framework. This, at least to me, was super obvious, but hey, I was speaking from the interviewing side, right? Yeah, not that it matters too much, but I found it kind of funny.
C
It looks like we could use the description column to start asking those "how" and "why" questions, like: how would they do this? You ask the "how" to go down to the task level, and you ask the "why" to go back up to the job-to-be-done level. So for, like, release planning: "should be able to define the planning of the release workflow", and so on. Why?
A
Do you have a simple pointer here as to when you know you're at that job-to-be-done level, like when you're at the right level, is what I'm meaning. And also that you've covered enough of the gaps that you know you have a complete, like, relatively complete picture of the jobs to be done for a certain stage group.
C
Oh well, you might go back and forth between tasks. So you might want to start at the task level: what are all the tasks that people need to do with this? Like, say, this release planning piece: what are all the tasks? Then you go down to say: how are they doing all these things? How are they accomplishing this user story? Then you can go back up and go, okay...
C
...why? And then you can look at the jobs to be done that you've come up with, to see how they map to the tasks. Because sometimes it's easier to do it that way, and sometimes it's easier to flip it the other way. It just depends on the thing that you're working with, and how familiar you are with whichever level it is, and whatever just makes sense for it. But either way, it doesn't hurt to start with the tasks, and it doesn't hurt to start by just trying to do the jobs to be done first, either.
A
And for the gaps, do you have any tips for that? Because, like, we are missing out in our product, for sure, on some... like, we are not covering the full range of what our users need, right?
C
If we have time, you'd want to capture them; sometimes you don't, because you've just got the time you have. But if you come across them, I might color-code them, or I might say, like, "TBD" or "not addressed at this time in the product" or "planned", however you want to address them. But I'd set them aside, because that's not the focus for the scorecard: the scorecard should only be focused on stuff that's in the product at the time that we're doing the scorecard.
A
All right, pretty clear. I have an existing issue on this; I'll expand that a little bit with this information and see if we can set up, like, a time frame for us to work on this. Q3 seems like a good... like, ideally I would set this up as an OKR for Q3, including the jobs to be done and either completing, or being in the process of finishing up, the maturity scorecard for Q3. Does that sound okay to you?
B
Yeah, it does. I do want to ask a question about A/B testing. So we're kind of finished with solution validation; is there any problem validation? Is there any next step in terms of research? Like, are we going to pull up some mocks or ideas and start asking about them, or things like that?
A
Yes, yes. There is an absolute plan of action, which I detailed completely in the A/B testing research issue, and I can actually point it out directly.
A
So this task list kind of details out what is happening. P11, participant 11, I'm interviewing today. Then I'm doing the Dovetail tagging; I wanted to do it in between, but I just didn't get to it every time, so I'm going to do it in bulk. Then the insight synthesis: I'm intending to do that collaboratively to some extent and bring in perhaps Lori, bring in perhaps you, Orit, and anyone else we see fit, but keep it small.
A
Not too big. Do some whiteboarding sessions with those insights, and then we start ideating and finding out where we are not sure yet. Then I'm intending to do just a few interviews, one, two, maybe three, but I think one or two would suffice, and find out those last details we're not sure about. And then we add that to those whiteboard sessions, figure out the exact features that we want to implement, have those feature issues created, and bam.
A
That is the plan of action as I see it being done, and the idea is for the Dovetail tagging to be done in, I'd say, one and a half weeks, and then we'll have that insight synthesis meeting.
A
So I can schedule that, and then I'd say the rest should kind of be done within two weeks from there. That is my kind of time frame for that.
B
Okay, a stupid question, because I always run to the solution. I know you've told me a thousand times not to do that, but...
B
I like it. I like the fact that we're whiteboarding and we have more people on board to get more ideas. So the way that I'm envisioning the whiteboarding is that it's similar to what we did for post-deployment.
B
People will, like, write a bunch of ideas, and then we can select what to do. And since A/B testing is really unique for Release, in the sense that it introduces a whole new persona, like, I can pretty much visualize some kind of idea of what the UI will look like. I'm going to leave it up to you, but, like, I have something really ugly in mind. And then my question is...
B
What do we do with the behind-the-scenes? So A/B testing is heavily reliant on specific metrics that you need to get back in order to make a conscious decision on what you want to do, right? And the metrics could be like what we use here, like usage ping, which I mentioned before, or it could be...
A
Yeah, there are some specific pointers, actually, in the interviews, of suggestions for where we should start, if I remember correctly, but I have to look back into the interviews a little bit more to give the specific answer. But there is the A/B testing implementation part, that we can tie it to GitLab and that we can do A/B testing at all.
A
And then there are the metrics, like, that we get the metrics somewhere where we can do something with those metrics. And then there's the metrics reporting. So there are, like, these three kinds of areas where we can step in, but I think this is kind of like the cornerstone foundation. And I'd say, to be honest, that the metrics reporting, if you ask me, would be the most important, because ideally users want to be able to see a single metric they have defined somewhere, somehow, where they say: all right...
A
...this experiment is either a success or it's a failure. And we don't have to go and ask a data analyst every time, like, hey, what is the status on this one? Is this good? Is it bad? Is it going well? Are we having enough users? No: we just want to see if we're successful or not, and we have had a single one-time discussion with the data analyst where we said, all right...
A
...this is the way we're going to derive our success metric. That has been established, and now it is being configured, and that is the final metric we're going to show; that's how that other back-end work is going to function. I think that is a huge market we can tap into, but we have some big competitors to run up against, and I think it's going to take quite a while before we have, you know, a minimal viable product for us that works in that way.
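One way to picture that "single success metric" is a function that reduces an experiment's raw numbers to one status, so nobody has to ask a data analyst. A sketch using a plain two-proportion z-test; the inputs, threshold, and status labels are illustrative assumptions, not the metric the data analyst actually configured:

```python
from math import sqrt

def experiment_status(control, variant, z_threshold=1.96):
    """Reduce an A/B experiment to a single status.

    `control` and `variant` are (conversions, users) tuples. A plain
    two-proportion z-test; 1.96 is the usual 95% confidence threshold.
    """
    (c_conv, c_n), (v_conv, v_n) = control, variant
    p_c, p_v = c_conv / c_n, v_conv / v_n
    p_pool = (c_conv + v_conv) / (c_n + v_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / c_n + 1 / v_n))
    if se == 0:
        return "not enough data yet"
    z = (p_v - p_c) / se
    if z >= z_threshold:
        return "success"
    if z <= -z_threshold:
        return "failure"
    return "not enough data yet"

print(experiment_status(control=(120, 1000), variant=(165, 1000)))  # success
```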
A
LaunchDarkly, exactly. We want to get to a LaunchDarkly state, where you can see an overview of all your experiments running across the company, and you can easily see: all right, these are in progress; are they going well; are they a success, or not yet, or are they a failure? That is the MVC of everything you want. And somehow we need to get those metrics inside of GitLab, because that's where we want our users to be, right?
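The overview itself could then be as simple as one list that maps each experiment to that status. A sketch building on `experiment_status()` from the previous snippet, with invented experiment names and numbers:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    control: tuple  # (conversions, users)
    variant: tuple

experiments = [
    Experiment("checkout-copy", (120, 1000), (165, 1000)),
    Experiment("signup-button", (40, 900), (38, 910)),
]

# One glanceable overview, reusing experiment_status() defined above.
for e in experiments:
    print(f"{e.name:15} -> {experiment_status(e.control, e.variant)}")
# checkout-copy   -> success
# signup-button   -> not enough data yet
```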
B
So, two things about that. I know we're over time, but two things about LaunchDarkly, which is really interesting as a base point for this research. One: they just hit two trillion feature flags daily, feature flags in their system. Okay, that's one. Number two: we have an opportunity to beat them. And why am I saying that? I mean, they have an awesome feature; I can't say anything bad about them. Our feature is non-existent, and yet I'm still saying that we have an opportunity to beat them. And why? It turns out...
B
...it's the bucks. And you can understand that just from the business model: we have an opportunity to encourage people to transition. So if we have a good enough solution, I think people will move to GitLab, which is a really, really good reason for us to pursue this.
B
On the other hand, we need to figure out what that subset of minimum functionality is that we need, because we're not going to have feature parity with LaunchDarkly. LaunchDarkly, that's their business: they do feature flags; they have about 200 developers. We have two.
A
Yeah, that sounds good. I'd say, because we only have so many developers, let's see where we make it effective for them to bring them into the conversation, because they probably have some good things to say and some good ideas, making us that much more effective in the initial stage of getting to that good-enough solution. Right? Yeah, sounds good.
A
I'll give an update in the issue on our conversation today. I think that is helpful.