From YouTube: [REC] Key Meeting - Product (Public Stream)
A
Hello and welcome, everyone, to the Product key meeting for March. He started out with his family on a well-deserved spring break, so Hila and I are hosting today's team meeting, with the help of the rest of the team. Brian, you...
B
...have the first question. Thanks, Jose and Anup, for the presentation; very detailed and well done, so if you can pass that on to the team, that'd be great. I knew that the team had a number of key hires in budget that they wanted to make. I was just wondering if you could give an update on the hiring status to fill these roles.
A
Yeah, so we have three. So maybe, David, can you cover the ones in your team, then Hila can cover the ones in Growth, and then I'll go over the rest of them.
C
I don't have an ETA on when I think we'll be to a point where we close yet, but everyone on the team helped by doing a source-a-thon, and we've gotten some really good candidates who are now engaged on the group manager positions that are highlighted. I think we'll be extending an offer for Create this week. I'm hoping we'll be able to do the same for Manage next week, and then that just kind of leaves Plan to be filled outside of that one.
C
One I don't think we've called out, but which I think is also critical to our success, is our compliance PM, which we also just started sourcing last week; we now have, I think, half a dozen candidates in the pipeline who look very promising.
B
Any changes that you've seen in the recruiting process, with the pandemic and more companies, you know, hiring remote people?
D
Yeah, awesome. So first of all, I have to say I love the experiment readouts in the deck. It took me more time than I had planned to go through them, but there's just so much information there, so thank you for that; it's really, really helpful, and please continue it. I actually think this is a great forum to discuss wins and also learnings, right? If experiments don't work out, I think it's a really great forum for that too.
D
I'd love to. I looked through the charts that you had; I'm just trying to get a clean visualization of how you think about that funnel, and I saw there's a lot of opportunity to try to optimize it. So I'd just love to see what the different drop-off points are and how you think about the invite funnel. Do we have a clean data visualization of that anywhere?
E
Yeah, I can take this, and then I can add my comment afterwards. So the clean data visualization is a little bit challenging, in the sense that we don't have user ID tracking on the front end. What we can look at is essentially this: we can have some charts that show us the invite sent-to-accepted rate for an individual invite, and then we can create some separate reporting for front-end events, looking at what the funnel looks like from viewing particular pages, without knowing who the user is, if that makes sense. We see this as a huge area of opportunity, and we have an epic (I can share it here in a second) related to hard email confirmation that multiple teams in Growth and Manage:Access are working on, improving that overall experience, which I'm pretty confident will have a nice impact in terms of the accepted rate.
F
Yeah, correct. To give you an example: currently we don't have something called a magic link, meaning if you try to sign up, you confirm your email, and you come back, you need to fill in all the information all over again, and that also applies to the invitation flow. So as part of this effort we will be looking to implement that, which will help both overall sign-up success and invitation sign-up success.
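For context on the "magic link" idea mentioned above: it is commonly implemented as a signed, expiring token that carries the pending sign-up state, so a user returning from email confirmation doesn't have to re-enter everything. A minimal sketch, assuming a server-side secret; the names and URL are illustrative, not GitLab's actual implementation:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical key; load from config in practice

def make_magic_link(email: str, form_state: dict, ttl: int = 3600) -> str:
    """Encode the pending sign-up state into a signed, expiring URL token."""
    payload = json.dumps({"email": email, "state": form_state,
                          "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    token = base64.urlsafe_b64encode(payload).decode().rstrip("=")
    return f"https://example.com/confirm?token={token}&sig={sig}"

def redeem(token: str, sig: str):
    """Verify signature and expiry; return the saved form state, or None."""
    pad = "=" * (-len(token) % 4)
    payload = base64.urlsafe_b64decode(token + pad)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered link
    data = json.loads(payload)
    return None if data["exp"] < time.time() else data["state"]
```

Redeeming the link verifies the signature and expiry before restoring the saved state, so tampered or stale links are rejected and the user lands back in the flow where they left off.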
E
Yes. So for those individual experiments (and this kind of gets into your next question) we can understand our lift for front-end experiments, but we can't track it to conversion. For experiments that are more back-end focused, like changing an overall experience for a namespace, say onboarding versus no onboarding, then yes, we can track that all the way to revenue, and we do.
D
Okay, that's good. Yeah, awesome; that's good to see. I like the fact that when you run a test, for example the experiment on slide 15, obviously there's the statistical significance: a two percent lift, awesome. We know those trials are real, and if you stick with that, you're going to get them forever. The next question is: does the quality of the trials go down, or is it the same? Meaning, is it accretive, right?
E
Yeah, I totally agree with you on that one. I think in an ideal world we would have user ID tracking, and the natural thing we would look at is: do these users who see this higher-level change on the front end, early in the sign-up experience, go on to adopt the product at the same rate? We can't answer that question today, because we don't have user ID tracking. For experiments where we do, where it's more of an experience change, then yes, that is something that we do look at.
D
Okay, so we test one drop-off point, but then see how it goes all the way through. I know it may increase experiment times, because as you go down the funnel you need more data, more n, right, more data points in your test and control to get to significance, but I think it'd be...
E
...a good thing, yeah. And this is me thinking off the top of my head, and Mike, feel free to jump in here, but I think with the new experiment framework that the engineering team has been putting a lot of effort into, in theory we could create a purchase event and not necessarily know who the person is, but at least know that the purchase event occurred, and that would allow us to understand the impact just on absolute conversions between a control and an experiment.
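The trade-off being discussed here, comparing anonymous conversion counts per arm and needing more data for rarer, deeper-funnel events, can be illustrated with a standard two-proportion z-test. This is a sketch with made-up numbers, not the team's actual analysis code; note that only aggregate counts per arm are needed, so no user identity is required:

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates.

    Works from aggregate counts alone: conversions and trials in the
    control arm (a) and the experiment arm (b).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Same 2% relative lift, same sample size per arm:
z_top, p_top = two_proportion_z(5000, 100_000, 5100, 100_000)   # common event
z_deep, p_deep = two_proportion_z(50, 100_000, 51, 100_000)     # rare event
# The rarer, deeper-funnel event produces a much weaker signal (larger
# p-value), which is why deeper-funnel experiments need more data points.
```

The design point this illustrates: counting absolute conversions per arm is enough to run the test, and the further down the funnel the event sits, the more trials each arm needs before the same relative lift becomes detectable.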
D
Makes sense, yeah. And it's actually more important to know that the conversion happened than the dollar value, especially in these types of experiments, because you get into bias when you start to say "oh, look at that one" because one outlier closed for fifty thousand dollars: "oh, this is awesome." I think the number of conversions is more important. So that sounds promising, if we can do it in a way that, obviously, respects user privacy and makes sense, right?
A
Thank you. Sid, the next one?
H
Yeah, in the KPIs, or in the OKRs, there's still a link between GitLab Private and Prospect One. I think the situation changed, because GitHub launched GitHub AE and it's now more of a competitive situation where we probably want to match that, and it's no longer dependent on Prospect One. So consider removing that from the OKR.
H
The next one as well: great increase in the paid Net Promoter Score to 33. That's awesome. For the first time we're almost at the 95th percentile; we're almost within our goal of 40, so we might be there. We just don't know yet, but it's a great improvement from 25.
A
Thank you, Sid. And yeah, as a reminder, that's SaaS PNPS, and I know the team is working on self-managed PNPS.
G
Yeah, actually, it might be a bit subtle, but on slide 33 I highlight that we have an unofficial score of 31 from our docs site pilot for self-managed PNPS. We found that the docs site is not an ideal channel for self-managed PNPS, but we are going to continue down the path of upgrading our broadcast message feature capabilities to be able to collect self-managed PNPS that way. So that is my current update; hopefully more progress next time.
A
Yes, we did. I think, based on the feedback from the last meeting or a group conversation, I don't remember which, we changed the metric to cumulative monthly active users, a.k.a. CMAU: a de-duplicated summation of the stages.
C
But we also kind of had a CMAU, which was category MAU within an organization. So you might see that too, because some of the stages are reporting category MAU now as part of the data for everyone.
H
Yeah, based on the NPS survey and the 13.10 release post discussion on Hacker News: both are full of complaints about the UX of our merge requests, like merge request review and dealing with large diffs. The interface is not doing what you expect it to do; it's slow, it's sluggish.
H
I know we already made improvements to the state there; it was very buggy, and I think we fixed a lot of that. But it seems some of this might be trailing, and I'm worried that there's more needed.
I
Yeah, you bet. So yes, we know that MR usability is a problem; it comes up as one of the key negative things in SUS. What I've been discussing with both Anup and Christopher so far, and starting to have a broader discussion about, is a series of OKRs over Q2 and Q3 to address some of this; I'm not going to tell you we'll address all of it.
I
The idea we're thinking about now is this: during Q2, there's already a proposal from the development team to rework back-end things to make sure that business logic isn't leaking into the front end, which is one of the state problems we're having right now. In parallel, what the design team would do during Q2 is rework our MR widgets based on a new framework that Pedro has been working on, which would address usability concerns with the MR widgets. That would then feed into Q3, where we'd actually implement those usability improvements.
I
Possibly, and that's something we could talk about, we could then have designers go deep on diffs in Q3, and we could just kind of keep working in parallel if we wanted.
H
Yeah, I think that's great to hear; glad you're on it. I think a lot of this is interaction: it's half a performance problem and half a UX problem.
H
It seems important, so feel free to front-load a lot of this in Q2, and then let's talk about how we might have to change the organization, or maybe work differently for one quarter, to address this, because it seems both important and urgent. Maybe we should accept some inefficiency in solving this, like moving people to it temporarily, or ramping up the number of calls about it. I'm open to doing something inefficient, since it's such a dire problem. Thanks; glad you're on it.
I
Yeah, absolutely, and I want to confirm you're right: it is both performance and usability, you got it exactly right. One thing we are doing is moving a designer temporarily from Geo and Distribution over to Code Review, to support Code Review for a few months, and then Anup and I are also talking about how we might get more design support on Code Review longer term. What we haven't talked about is developer resources, so that's something maybe we could discuss separately.
D
Awesome. Yeah, I was looking at the stage-per-organization deep dive and your reflections on slide 29; it's a pretty interesting chart, and it shows the curve of adoption. I guess my conclusion was that if they haven't adopted a stage by day 15, it's unlikely that they're going to do so. Is that a fair assessment, and how is the team thinking about that?
F
I can speak to it, and I think Dave is here; he can speak a little bit more to the data insights. High level, I tend to agree; that's why we chose to focus on stage adoption early on. However, for some of the stages there is a longer tail as well, so the absolute percentage that will adopt a stage past the initial 30 or 90 days is small, but we do see it: for example, CI adoption can...
A
Yeah, and I'll add a couple of other things with respect to customers, a smaller but important group as well. We are starting to push data into Gainsight, and that can now help those teams encourage stage adoption; they can also see whether stage adoption is increasing or decreasing, which they didn't have the capability to do before. So that active sort of engagement is also, hopefully, going to change this.
E
And
add
one
more
thing
to
this
over,
ideally
in
the
next
month
or
so
we're
going
to
launch
a
new
experiment
where
we
give
users
the
ability,
a
new
space
for
continuous
onboarding.
So
we
can
recommend
additional
stages
and
features
for
them
to
adopt.
D
Sam, is that going to be heuristic-based, or more AI-based, based on behaviors?
E
For the first version it's, you know, starting with our gut opinions, and the idea would be, if it's successful, that this then becomes far more intelligent over time, and we devote more and more resources to showing people the right things at the right moment.

D
Awesome, so starting simple; I love it. Yeah, it sounds really exciting. Like I said, the insights in this deck are awesome, so thank you for that.
K
One more thing to add to that, just to keep the party going: we also want to collect their self-designated jobs to be done in the account creation flow, and then tie that into that onboarding, so that if they say they're here for CI, we can change the order of the things we're recommending, or recommend something different, depending on what they've said.
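The heuristic just described can be sketched in a few lines: reorder onboarding recommendations from the job to be done a user selects at sign-up. This is a toy illustration with made-up stage names and keys, not GitLab code:

```python
# Default recommendation order, plus per-JTBD overrides that promote the
# most relevant stages to the front. All names here are illustrative.
DEFAULT_ORDER = ["create", "verify", "package", "release", "secure"]
JTBD_PRIORITY = {
    "ci": ["verify", "release"],   # "I'm here for CI": CI/CD stages first
    "security": ["secure"],
}

def recommendations(jtbd=None):
    """Return onboarding recommendations, promoting stages that match the
    user's self-designated job to be done; fall back to the default order."""
    promoted = JTBD_PRIORITY.get(jtbd or "", [])
    rest = [stage for stage in DEFAULT_ORDER if stage not in promoted]
    return promoted + rest
```

Starting with a lookup table like this matches the "gut opinions first" approach described above; the table can later be replaced by a learned ranking without changing the call site.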
A
Yeah, and I'm sure there are a lot more levers to pull here now that we are tracking this, including whether things are set up by default. There is now a dashboard for "set up by default" and "working by default" for each category, and we are conducting more research on which cross-stage moments are key to encouraging adoption from one stage into another. So that's going to help feed into product changes as well.