From YouTube: Plan Stage Weekly 2020-08-19
C
Sweet. I had a great demo on Monday with a customer opportunity, not a Gold customer yet, and they gave us a pretty good set of requirements for what they were looking for in a planning solution. They would be the largest opportunity if we were to win it, and it would land in Gold solely because of Plan.
C
So I just wanted to share that requirements doc with folks, so they could look at it and kind of see what an ideal customer expects versus what we currently provide. There are a couple of obvious gaps, like custom fields and more robust workflows, but it was really nice that they're even considering us over Jira Align and Jira, which I think is a pretty big accomplishment. So I just want to say kudos to everyone who's been working hard, number one, but also to spread the word that that doc is there.
C
It's worth looking at. There are already existing issues for all those things, and they're all on a roadmap, but yeah, that's about it. Alex, did you want to verbalize your comment?
D
Yeah, I think you've already covered it. From a quick look at the doc, it looked like we have all of it covered, at least in the medium-term plans, in what we're planning to work on, or where we already have the features. I was wondering if there are some asks that would require a bigger lift, or something that we need to plan for longer term, something like that.
C
Yeah, so I think most of the features themselves were on our short- and medium-term roadmap anyway, so that was good news. What was also there was ongoing validation that our groups and projects model is not extensible enough, or not really flexible enough, because they were talking about how they wanted to separate how they manage their repositories from how they manage their issues, largely because they do least-privilege access. They basically have a whole group hierarchy that is just for permissions, which they then share into the specific projects or groups that need access to specific resources. And so this kind of leads into the next point I had, which is just a quick update on the simplified groups and projects working group that I've been participating in with some other folks. I documented the two root-cause problems that I think are longer-term things to solve within GitLab itself: interacting with objects across top-level groups and sibling hierarchies.
C
So, can I have an issue in one group belong to an epic that's in another group, outside of the hierarchy? Things like that. Because the way it works right now, you can't: the person who's doing the product management or planning can't see what their team is working on when it's spread out across all these different repositories and groups. And then the other one is the duplication that currently exists between projects and groups.
C
But I jotted down all the use cases I could think of that we've been exploring in the working group, as well as some working proposals that were bouncing around. I really wanted to open that up to everyone on the Plan stage to contribute towards that vision for what the solution would be, because I think this call is filled with people far smarter than myself, and we also have some other engineers and designers from other groups like Manage and Create there.
B
Cool, thanks. Yeah, I thought I'd do a similar update, because I know these things can almost happen in silos, just to make sure the rest of the team is aware of the progress that we've made with the real-time working group.
B
So I've put a link to our working group page in the doc, but basically we've shipped. As I mentioned in the doc here in the agenda, small-cluster or single-instance self-hosted customers can use the feature. In reality, any customer that's prepared to run Action Cable in embedded mode can use this new feature; it's just most likely to be customers with small clusters or single instances who self-host.
B
You can also try this out on dev.gitlab.org. I promise you, once you've tried it and you've seen assignees update in real time, it'll never feel right that they don't update in real time. We are trying to help with the work to bring this to gitlab.com and onto larger clusters for self-hosters, but we have to go through Kubernetes for .com, and that's where we're somewhat blocked on the work with the Distribution and Delivery teams. On that basis, yeah, the question has arisen, like, do...
D
Yeah, so I was wondering whether we're doing any sort of workflow changes, or what's being done differently, after the slippage issue that was raised. Actually, I'd like to understand it more. I understand slippage is bad, but at the same time I think it's sort of normal, especially when you plan aggressively, so to say, which is one of GitLab's values, right: to plan ambitiously rather than for predictability. So I would like to understand better: is this something where we internally, as a team, want to become better at planning the things that we want to cover in the next release?
D
Or
is
it
or
I
mean,
and
it's
probably
also
a
factor
of
being
more
predictable
in
terms
of
communicating
what
we
are
going
to
release
in
one
of
the
releases
and
not
moving
stuff
from
one
milestone
to
another?
Is
it
both?
Is
it
more
on
one
or
the
other
side?
Because,
like
for
the
external
communication,
I
think
we
can
handle
that
in
a
slightly
different
way,
where
we
can
use
more
the
feature
flags,
and
then
we
can.
We
can
kind
of
have
that
tested.
D
You could even use gitlab.com as a test environment, so to say, a production-grade test environment, and then, once we remove the feature flags, that goes into the next release. But that doesn't solve the internal planning problem for our team.
A
Alex and I talked about this some yesterday, thinking about, not specifically the release post or the kickoff videos, but marketing blog posts, the sort that set expectations for something that will be in a release or in the next few releases. I don't know the practice around how those things become, you know, marketed. But we talked a bit about, like, if we had something that doesn't necessarily need to be public and generally available, and then we say, hey, this is going to be coming in the next release, and we have, you know, 99 to 100 percent confidence that it lands in a well-tested state. Has that ever been done? And I know that how marketing works versus how product and engineering work is always, like, the eternal war, I guess, but how do things get bubbled up into marketing blog posts?
C
I can speak a little to that. The way the process works now is that, as a cross-functional team, we're supposed to spend the first few weeks of each month talking about what we want to commit to, or what we plan to ship, in a given release. So product drafts issues, this is our timeline workflow thing that's in the handbook, and spends a week discussing them with engineering.
C
Then we do the release kickoff video for each group, and that gets rolled up to Eric Brinkman, who then does a Dev section overview during the release kickoff livestream; that happened yesterday. That usually sets the expectations for things that will happen. And in my opinion, the reason why I care about the slippage is that I like planning ambitiously.
C
But I've come to realize that our stakeholders, external stakeholders and customers, follow our issues pretty closely, especially for the things that they really want and need. And when they see a milestone on something, and they see that we're working on it, and then it just goes from one milestone to the next milestone and the next milestone, it has started to hurt the trust that our customers have with our stable counterparts in technical account management and customer success. It makes it harder for them to maintain a healthy, longer-term relationship, because, you know, if they have problems and we're working on solving them, and we say they're going to be solved in this time frame, and then we repeatedly don't do that, it chips away at a little bit of that trust.
C
So I think that's where, back to Alex's point, we could have things feature complete, or code complete, behind a feature flag and test them; that's one way to do it, and I think we should explore it. The other, from a process standpoint, is looking at our velocity over the last n milestones, understanding what we have historically delivered, and then committing to sixty percent or less of that, so that we leave slack in the system. Right now, for example, in 13.3 the project management group had 200-something percent of our historical capacity planned for that milestone. And while it's good to plan aggressively, I think we also have to take into consideration that if we build slack into the system, the system will move faster and be healthier, just from a systems-thinking standpoint: the most inefficient systems are the ones that are always operating at 100 percent capacity.
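To make that arithmetic concrete, here is a minimal sketch of velocity-based capacity planning with slack built in; the milestone weights, the 40 percent slack factor, and the function name are made up for illustration, not figures from the meeting:

```python
# Hypothetical sketch: derive a milestone commitment from historical
# velocity, leaving slack instead of planning at (or above) 100%.

def commitment(completed_weights, slack_factor=0.4):
    """Average the weight completed in recent milestones and commit
    to only a fraction of it, reserving the rest as slack."""
    velocity = sum(completed_weights) / len(completed_weights)
    return velocity * (1 - slack_factor)

# Weight actually completed in the last four milestones (made-up data).
history = [38, 42, 35, 41]
print(commitment(history))  # ~23.4, well below the raw velocity of 39
# Contrast with the 13.3 case above: planning 200%+ of historical
# capacity all but guarantees visible slippage.
```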
C
There's a really cool book called Slack about this whole theory, which I recommend reading. But that's my perspective.
E
Because, again, that's not how we like to plan here; we like to plan sort of in a natural progression. But as we approach more and more enterprises, and more and more larger businesses, they all work on date-based timelines, and it's very hard for them if you say, well, we're looking at sometime in Q1 of 2021.
E
That doesn't really work as well for them as, oh, we have it planned for this release. But if I say we have it planned for a release, I always have to hedge my bets and say, but we plan very aggressively, so there's a good chance that it may be one or two releases later. And then they kind of look at me funny and go, oh, so when can I be guaranteed to expect it? So it's more about commitments, and I'm very happy to say to the businesses and the enterprises I'm talking to: we do plan aggressively, but we like to catch up with our slip in the next release. So if it doesn't make, let's say, 13.5, then it will de facto make 13.6. But even that I sometimes struggle to say. So it's really more about predictability and not so much speed; I'd rather move slower and be predictable.
B
Yeah, to add to my point there: every process change we make from the canonical, standard GitLab workflow makes it more difficult to work with our stable counterparts. So, you know, the whole quad-planning thing isn't something that has made it to the Plan team, so we miss out on that sometimes.
B
If we want to deliver features that are, you know, documented properly, that have proper quality, that are secure, and we want to leverage our stable counterparts and make it easier for them to contribute in the team, then we kind of have to be careful about the process changes we introduce that make things different from the standard process. And that's why I'm being really pedantic in the issue about framing the problem properly and investigating it thoroughly, to find out what the actual root cause is, not just the perceived root cause, and what the problem actually is. Like, is it that we're...
B
After, like, it doesn't matter, it would have missed anyway. So yeah, ideally any process changes we make would, I think, achieve all three things: they'd make things more predictable for PMs, but they'd also help to increase our velocity and improve the developer experience as well. And I think the way to do that is to remember that the standard GitLab process is thoughtful and works for a remote environment, so we should be careful about the process changes we make that differ from it.
D
Yeah, so the reason I'm raising this discussion and question is that planning and estimating is hard. Whichever way you do it, in days or in units or whatever, it's hard, and it's not exact. So I'm just wondering if we can move from "this is what we're going to deliver in this milestone" to something like "this is what we're working on, and this is what we can deliver."
D
So, what's done we release, in this release or in an upcoming release, and that's just a matter of phrasing, plus the feature flags.
D
Obviously there will be some bugs, because in production there is always something that might happen, so there's some iteration on that. But then you move from saying, well, these are the features that we know we are releasing, like 99.9 percent, because we just need to enable the feature flags, to saying that these are the next things that are highest priority in our queue and that we're working on. Then this very-wrong-estimation problem is sort of solved.
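For readers following along, here is a generic sketch of the mechanism being discussed: gate a code-complete feature behind a flag, ramp it up in production, and only announce it once the flag is fully on. GitLab's actual flag machinery lives in Ruby and differs in detail; the flag name and percentage here are invented:

```python
import hashlib

ROLLOUT_PERCENT = {"real_time_assignees": 25}  # hypothetical flag at 25%

def feature_enabled(flag: str, actor_id: int) -> bool:
    """Deterministically bucket an actor (0-99) so the same user keeps
    the same behavior while the flag ramps from 0 to 100 percent."""
    digest = hashlib.sha256(f"{flag}:{actor_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(flag, 0)

def assignee_widget(actor_id: int) -> str:
    if feature_enabled("real_time_assignees", actor_id):
        return "live-updating widget"   # new, unannounced code path
    return "static widget"              # existing behavior
```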
C
Yeah, I think that's good feedback. I can definitely start talking like that in my kickoff videos, and I think all the PMs can talk like that. I think the other thing, too, is that estimating is hard, and the more time you invest in it, the less valuable it becomes; it's basically a question of how much more you want to pay for the next nine of uptime, you know. But the thing I have realized is that it is possible, as long as the team stays roughly the same and estimates in the same way: it's going to be inaccurate, but consistently inaccurate. Which is why I think looking at what weight we do get done in a given milestone should help inform what we say we can get done in the next milestone.

The only other process change, or thing I think would be valuable, is shifting planning to the left a little bit. As a product manager, I want to make trade-offs about what I schedule for a milestone, and if I see, like, ten different things and I know that we can only do five of them, or, say, 10 weight, I want to see those ten things with their weights. That way I can pick the top 10 weight across all those issues and just do those things, so it forces trade-off discussions before we go public and say, hey, I think we can get all these things done. That's where we're running the experiment, I think, of doing continuous weighting, which I like; we just need more time per week where everyone does just a little bit of it. It doesn't have to be a super long thing. And John, I don't know if you want to talk about your proposal for design or solution spikes and that sort of thing, but I see that as a way to help that out too.
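As a sketch of the trade-off mechanics described above (issue titles, priorities, and weights are all invented), scheduling by priority until a weight budget is spent makes the cut line explicit before kickoff:

```python
def plan_milestone(candidates, capacity):
    """candidates: (title, priority, weight) tuples, lower priority
    number = more important. Returns (scheduled, deferred) titles."""
    scheduled, deferred, used = [], [], 0
    for title, _, weight in sorted(candidates, key=lambda c: c[1]):
        if used + weight <= capacity:
            scheduled.append(title)
            used += weight
        else:
            deferred.append(title)  # the explicit trade-off
    return scheduled, deferred

issues = [("custom fields", 1, 5), ("board filters", 2, 3),
          ("workflow states", 3, 5), ("csv export", 4, 2)]
print(plan_milestone(issues, capacity=10))
# (['custom fields', 'board filters', 'csv export'], ['workflow states'])
```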
B
Yeah, sure. The idea behind that was just that it was kind of like an andon cord that an engineer could pull; it's completely at the engineer's discretion. And it means that, by going into that process, the first product becomes the design doc. So the product is no longer delivering the feature; the deliverable is the doc itself. If you can imagine the Jira importer, that would be a classic example of this: the first thing that Alex did was create a proof of concept.
B
So "design doc" is just the name; it doesn't have to be a physical document. It could be a branch with a demo on it; it doesn't matter. What's important is that it reveals the complexity, and that it's a realistic investigation of the challenge that's ahead. It's not necessary for everything.
B
It shouldn't really be necessary for, to grab a random number, 90 percent of stuff. It should be for the small number of things where we simply don't know enough, or where the engineer identifies early on that there is complexity that they can't estimate and they need more information. It's just an idea; ideally it wouldn't be necessary. It just felt like, and this is why I was particularly pedantic in the issue about this, because we took a measurement in the middle of this milestone, or towards the end of it, and we said: well, we have this many points in the milestone, which is so far above what we know we can accomplish; why is that? And the temptation is to make an assumption and say we planned more than we could accomplish. That's not necessarily true. You have to investigate and see, for instance, how many of those issues were created after the start of the milestone.
B
So, did an issue that was a weight 5 suddenly explode into five issues, each with a weight 2? Because now that's double the number of points that were expected. You have to really investigate the problem, and not just assume that, well, we threw everything onto the board at the start of the milestone and hoped for the best. There may be other things going on, as it turns out.
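A rough sketch of that investigation step, assuming the GitLab REST issues API; the group ID, milestone title, and start date below are placeholders, not values from the meeting:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
GROUP_ID = "12345"                   # placeholder group ID
MILESTONE = "13.3"                   # placeholder milestone title
STARTED_AT = "2020-07-18T00:00:00Z"  # placeholder milestone start

def count_issues(extra_params):
    """Read the total from GitLab's X-Total pagination header."""
    resp = requests.get(
        f"{GITLAB}/groups/{GROUP_ID}/issues",
        params={"milestone": MILESTONE, "per_page": 1, **extra_params})
    resp.raise_for_status()
    return int(resp.headers.get("X-Total", 0))

total = count_issues({})
added_late = count_issues({"created_after": STARTED_AT})
print(f"{added_late} of {total} issues were created after the milestone started")
```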
B
The other thing we don't measure, and why we haven't tried to measure it up until now, is how much slips every month. It's incredibly difficult to measure because of the way we work: the engineering managers, on the 18th, move everything that's not in the build phase over to the next milestone, and then we look through the stuff and sort of try to make an estimate, like, what's in review, what's in verification, those will probably make it, and then you have like four days left, so maybe half of what's in dev will make it. It's extremely difficult to make that call.
B
But we should at least try to measure that, I think, and see, and, I'm really belaboring this, but if a PM is expected to plan from, like, the fourth of the month, how can they realistically do that? They have no idea at that point how much is going to carry over from the previous milestone. Is it going to be 30 points against a capacity of 40, or is it going to be 20 points, leaving them twice as much room to plan stuff?
D
Yeah, okay, that's where I'm wondering if we can leverage more the iterations that we're implementing, because it's easier for me as an engineer to estimate a smaller amount of work, just a single issue that I'm going to work on this week, rather than estimate, I don't know, multiple issues for the entire release. But then that again doesn't help PMs as much, my understanding is, because they do need to know what we're going to release in four weeks, not in one week.
D
Ideally, you'd go one week, see what you accomplished, and if there's a lot of work in that week that didn't make it for any reason, then you can re-evaluate very easily and quickly for the next week, and you adjust the scope of the milestone much faster. Whereas, again, if you commit to something in a given milestone, a lot more will slip to the next milestone, right?
C
Yeah, that's exactly, like, another root cause, and I'll add it to the RCA: bigger batch sizes slow everything down; they basically increase your cycle time and your lead times dramatically. So smaller batch sizes are one way to solve it. The way I've traditionally done planning in other teams and organizations is that the release planning you do once a month is thematic, and then you do weekly iteration planning where you break down the vertical feature slices that you want to ship. That way, by the end of four iterations, you might only be 50 percent of the way done with a given feature.
C
But if it's behind a feature flag, you can still put some of those things, the vertical feature slices, in the release post and make progress, and then you can also course-correct each iteration: if you want to change your priorities, you can shift that way. So I would love to see us continue to figure out how to dogfood iterations. I think, to solve for that, we need to figure out how to do efficient, weekly, smaller-batch-size planning, and I also think it'll be hard for us to adopt iterations until we get them integrated with issue boards. But the whole vision for that is, if you use iterations, it will automatically measure your velocity, and eventually we want to surface that data in issue boards when you are doing planning.
C
So you know exactly what you can commit to based on your historical momentum, and the tool helps us, instead of us just going, whatever, let's hope for the best, you know. So it'd be cool to continue to figure out how to solve that as a problem, because we also have the OKR for dogfooding Plan. And I think, long term, I hope we build better kanban tooling into our product, because I think that is better suited for our organization, but it would be great to dogfood iterations for a while, at least to see how the smaller batch sizes help. So if you have any ideas for that, I would jot them down in the RCA too.
C
Yeah, so I can share my screen and kind of walk you through how the product organization is looking at this right now. There's a big push to have everything be data-driven, based on usage.
C
There was an announcement at the product weekly yesterday that we're shifting away from measuring how many features we ship to measuring feature adoption of our existing product. That's going to be measured primarily through these MAU metrics; for a stage it's SMAU, stage monthly active users, which is essentially unique users.
C
The long-term goal for this metric, across SaaS and self-managed, is to count any unique user who interacts with an issue, whether that's leaving a comment, creating it, editing it, adding a label, that sort of thing. We still have to do some implementation in self-managed to track those events, and we have to take a union across, I think, four different tables in order to roll that up.
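To illustrate why that roll-up needs a union rather than a sum (the table names and user IDs below are invented): the metric counts unique users, so anyone who appears in several event tables must only be counted once:

```python
# Each set holds the user IDs seen in one (invented) event table.
comment_authors = {101, 102, 103}
issue_creators  = {102, 104}
issue_editors   = {101, 105}
label_events    = {104, 106}

tables = [comment_authors, issue_creators, issue_editors, label_events]

# Summing per-table counts double-counts users 101, 102, and 104:
naive_sum = sum(len(t) for t in tables)   # 9
unique_mau = len(set().union(*tables))    # 6
print(naive_sum, unique_mau)
```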
C
Right now this is just showing the SaaS numbers, but what I've done down here is this: this is the SaaS group monthly active users, so users interacting with issues. It's dipped a little bit over the last few months, but it's still kind of in line with some of the basic trend patterns. Our actual blended GMAU across SaaS and self-managed is 163,812, and our target is 196,400, which is based on a historical growth rate of where we should be by the end of the quarter.
C
Once we hit 200,000 unique users, we get bumped up an extra point in terms of the R&D investment model, so I'm really shooting to get us above that 200,000 before fiscal year 22 planning starts. The thing that was interesting about this is that growth slowed considerably in self-managed.
C
But when I looked at how many instances were reporting usage ping, we had a net loss of 124 instances reporting, even though we had a gain of 2,901 total instances. So I don't know if those just aren't reporting yet, but either way, some instances turned off their usage ping.
C
So it's not entirely accurate and reliable, but it is a good indicator, and we're still trying to figure out why that happened. We're also tracking paid MAU now, which is which of these unique users are in paid accounts, and right now we have about 47,440 across SaaS and self-managed, which is a slight bump from June.
C
But again, I think there's some credence to the idea that we lost usage ping data, and I'm not sure if that's because of churn or just because they turned it off; we had a bunch of new instances come online that aren't reporting any data, so it's kind of hard to tell. Let's jump over here to these metrics. I don't know why that's happening, but in terms of total issues created per month, this is an interesting one that I look at.
C
We had over 18 million issues created in July, which is huge, 18.5 million, and I think these are from a couple of Ultimate accounts. I don't know what they're doing, but they are creating lots and lots and lots of issues, which is kind of cool to see. The other things I can run through here, going down, are some interesting stats about specific features.
C
This is weekly average users that view a board, and we're at about 11,000 a week right there, and the monthly total is about 24,300. In terms of milestones created, we're tracking that and it's kind of ticking upwards. And then we also wanted to understand how much we should invest in time tracking.
C
So you can see here the number of issues with a time estimate. We did the category maturity scorecard, and it actually dropped us down from complete to viable as a category in issue tracking, largely around small UX things. One of them was: how do you add a time estimate to an issue? For a non-engineering or non-technical person, using quick actions was not intuitive, and there's no UI element for it.
C
So that's just a small, quick win that I think we can probably do to improve that specific thing. The other chart that I continue to look at, and am interested by, is this weekly feature adoption by release. This is based on which instances are using issues: what percentage of all the instances on a given version are using issues, over what time range. So, based on the four-week cycle, you can see things usually start to come up for a given version.
C
They drop off after week four, because those instances are upgraded to the next version, so this kind of downtick is normal. But with each consecutive release, starting with 12.10 when we began measuring this, in the first week only 28 percent of the instances that had upgraded were using issues.
C
Then first-week adoption went up to 31 percent with 13.0, then 32 percent with 13.1, and then 38 percent with 13.2, which is the highest jump that we've had in first-week adoption of using issues across all instances. So what this tells me as a product manager...
C
It tells me that, as we're iterating, we're getting better at product-market fit, and using issues and some of the planning capabilities is becoming more attractive to customers; a larger percentage of them are adopting them. So I'm going to continue to track this over time, but I think it's a good leading indicator of whether or not our MAU number is going to move in the future. Those are just some highlights. Does anybody have any questions about these or the metrics?
B
Yeah, I have one. I just noticed from this chart, actually, it's kind of interesting: you can see that people held off upgrading from 12.10 to 13.0, like a whole version release. So it's interesting, and it probably speaks to the accuracy of the data. You can see there how, for 13.0, people upgraded in the week after the release, I think, but for 13.1 they seem to have held off, on average, for a week longer before upgrading.
C
Yep, there's an interesting thing that I've noticed when participating in some sales calls and customer meetings with customer success: they encourage a lot of our customers to wait a release, or wait several weeks before bumping up, until the first security patch comes out. So, like we've talked about, there have been some discussions about how we can get our customers to upgrade faster, so they get the new value faster, right?
C
So I think we've got to combat some of our own internal messaging there and try to correct that, because I see that as the biggest thing that impacts lagging adoption rates for upgrading. I'll talk to customer success and see how we can figure out that language. But I'm curious why we would recommend that in the first place; if you all have any ideas, I'm interested to hear them.
B
Yeah, I don't know. I was going to ask as well: is blended GMAU, sorry, I can't really say that, is that measurement adjusted for the fact that we know only a small percentage of customers have usage ping switched on, or is it simply an absolute count? In other words, do we multiply by five to estimate how many users there are self-hosting?
C
That's an absolute number. If you wanted to know that, we would basically multiply what we know for just self-managed; I can pull that up and share my screen real quick.
C
So, I think I linked to this in last week's notes, but you can filter this by self-managed or by SaaS, and SaaS is technically a self-managed account too, which is kind of interesting from a data standpoint. But here we can see, let me check, okay, yeah, we're at 60k. So if you were to take that 60k, and only about 30 percent have usage ping turned on, it would be more like in the realm of 180k to 200k if you were to project it out. I'm pretty sure I heard correctly yesterday that the data team, or maybe the growth team, is working on that calculation of what the likely user count is; right now it's just a raw number.
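The back-of-envelope projection mentioned here, written out (both inputs are the approximate figures from the call, not exact data):

```python
reported_smau = 60_000   # unique users observed via usage ping
reporting_rate = 0.30    # rough share of instances with usage ping on

estimated_smau = reported_smau / reporting_rate
print(f"~{estimated_smau:,.0f} users")  # ~200,000, i.e. the 180k-200k realm
```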
C
The interesting thing that I've noticed is that, out of all the larger customers that use Plan that I've talked to, none of them have usage ping turned on, because they're all PubSec or similar. So it's just hard to fully trust it, if that makes sense, because if you have the majority of your customers with usage ping turned off, I feel uncomfortable making assumptions about the majority. If it were a minority, I'd feel better about it, but at something like 70 percent, I don't want to make assumptions about that.
B
All right, I have the next item; I'll just turn my fan off so that there's not all this background noise. Yeah, so I mentioned a couple of weeks ago that we have this test efficiency project. It's kind of orthogonal to our work, it's not our primary focus, it's a bit of fun really, but yeah, it's been really productive. We had MRs from outside the team remove 80,000 queries last week, which is awesome, because we only launched the thing on Wednesday and we were off on Friday.
B
So even if we achieved that in a quarter, it would still be a net increase of 100,000 queries. So we moved instead to trying to reduce the average time per test, which I think will be a better long-running measurement, and that's on a rolling average of the last 10 measured pipelines.
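A minimal sketch of that metric, window size aside, with all sample numbers made up: average seconds per test, smoothed over a rolling window of the last 10 measured pipelines so that one unusually fast or slow pipeline doesn't swing the trend:

```python
from collections import deque

class RollingAverage:
    """Average of the most recent `window` samples."""
    def __init__(self, window=10):
        self.samples = deque(maxlen=window)

    def add(self, avg_seconds_per_test):
        self.samples.append(avg_seconds_per_test)
        return sum(self.samples) / len(self.samples)

metric = RollingAverage()
for pipeline_avg in [0.82, 0.81, 0.83, 0.79, 0.78]:  # invented samples
    current = metric.add(pipeline_avg)
print(f"rolling average: {current:.3f}s per test")
```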
B
That was actually a reduction of about two and a half percent, which doesn't sound like much, but when you're running thousands of pipelines every day, and that has a retail value of something like six cents, you know, it starts to add up. So we're off to a really good start. I've linked the challenge there; anyone can contribute.
B
Everyone: if you see a test that's slow, no matter how small, jump in, fix it, take the win, and we'll see how we get on at the end of the quarter. I haven't yet managed to secure a prize for the most valuable contributor, but who knows, we'll see.
B
I have the next item as well. Charlie has asked me to vocalize this, so you might have heard of it: Charlie and Keenan ran a successful pilot of the product shadowing program, I guess you could call it. They met three times, each time for a couple of hours, took notes, and they're drafting a blog post. Anyway, both of them found it really valuable, and yeah, I think we'd like to expand it to other members of the team and see if anyone else is interested and would like to pair.
B
I think somebody can correct me, but, oh, it's written here, okay: the way to kick it off is to create an issue using that template, and I think there's an MR required to the team page as well. So if you create an issue, leave it there for your counterpart to pick up the next slot; it's very similar to CEO shadowing, I guess. Yeah, and feel free to ask any questions; I'll try to answer them, or just contact Charlie or Keenan and see how they got on.
D
Would anyone be able to specify in a bit more detail what this collaboration involves? I guess Charlie and Keenan are the best people to ask, but I'm just wondering: what are you expected to do, or what are you doing, in those meetings? Is it client meetings? Is it going through the issues? Is it discussing the features? Is it all of it?
B
Yeah, my understanding from the first session was that it can be split any way you want, but given the time zones and other reasons, it made sense to do it in three tranches of two hours. Also, currently it's more the engineer shadowing the PM.
B
So I think, in the case of Charlie and Keenan, Keenan tried to find two-hour slots that were representative of the broad array of tasks that a PM does. So there would have been, I guess, a customer call involved, where you would shadow it just like you would shadow a call with Sid during the CEO shadow, but I think there were other activities as well. And what I heard from what Charlie was mentioning the other day was that she got a really good idea of the challenges that are involved in being a PM, and if it improves empathy between PMs and engineers, then all the better.
D
Is it expected that both sides draw something out of this shadowing, or what should the end result be? Or is it just sort of an exchange of experience, and that's where it ends?
B
My understanding is there's a survey at the end, and we want to measure how it went. It's very simple at the minute; I think it just checks whether it was productive for both people, so whether they felt it was a good use of their time. It is being measured, not in a very strict way, but we would want to know that the thing actually produces some value for the two people involved.
C
I think, also, having done the CEO shadow, it helped me: the outcome I got from that was a better understanding of the organization, so I had more empathy when working with people in different parts of the organization and was able to be more proactive in collaborating and supporting. For this, I would be interested in doing it, largely because I would also want to learn more about engineering at GitLab. It's kind of like a two-way street, and the more we understand the worlds that we each live in, the fewer silos we have, and that's, I think, a very good outcome.
F
Not really; mine is read-only, but I'm out next week, and these are the two people covering for me for tech writer reviews. You know Russell, the previously assigned tech writer for Plan, who volunteered to take one group, Project Management; and then Marcia is my current backup, and I'm hers for Create, so she'll take the other two groups.