A
Hey everybody, I'm Eric Johnson, I'm the Executive Vice President of Engineering here at GitLab, and this is the October 27th, 2020 engineering key review. We're going to be looking at OKRs and metrics. So hopefully everybody's got the agenda open. I think it starts at number three: Christie, SUS update.
B
Yes, so we have an OKR in place that is focused on usability. We have three KRs that are clearly defined and one that we are still defining, with the intent to move SUS up. Settings and navigation are now owned by the Static Site Editor group; we have a KR on settings improvements and then also validation work to make sure that the changes we're making are effective.
B
We're also going to update our SUS methodology. Historically we have used First Look, which is a volunteer group of about 3,000 GitLab users, to run our SUS survey on a quarterly basis. It was about 50% SaaS and about 50% self-managed respondents, and it's a relatively small pool of people. We're going to keep running the survey with that same group, but in parallel we're going to start running it with participants that we source from our data warehouse, with the goal of focusing on SaaS users.
B
What we have realized is that, with 50% of the respondents being self-managed, they are likely not seeing a lot of the changes that we are making until many months after we make them, so we want to make this less of a lagging indicator, and we think that this is a way to do it. The reason we're running them in parallel is that we want to see what the difference is. We also want to make sure that we have historical data.
B
We never want a gap where we don't have historical data, and we also want to see what the new baseline looks like, so we can do an analysis on that and make sure we understand what the difference is according to the new participant base. And then our UX research team is going to spend the quarter focusing on determining the top 10 usability problems. Anup and I have had a lot of conversations about this, and that will feed into a Q1 OKR so that we can start to address some of those. Eric.
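For reference, since the score keeps coming up: SUS here is the standard ten-item System Usability Scale, reported on a 0-100 scale. Below is a minimal sketch of the standard SUS scoring formula only; the respondent pools and survey tooling are as Christie describes, and the responses in the example are hypothetical, not GitLab's actual data or pipeline.

```python
# Standard SUS scoring: 10 statements rated 1-5.
# Odd-numbered items are positively worded (contribute rating - 1),
# even-numbered items are negatively worded (contribute 5 - rating),
# and the summed contributions are multiplied by 2.5 to give 0-100.
# Generic formula only; not GitLab's exact survey implementation.

def sus_score(ratings):
    """ratings: list of ten 1-5 Likert responses, item 1 through item 10."""
    if len(ratings) != 10:
        raise ValueError("SUS expects exactly 10 item ratings")
    contributions = [
        r - 1 if i % 2 == 1 else 5 - r
        for i, r in enumerate(ratings, start=1)
    ]
    return sum(contributions) * 2.5

def mean_sus(responses):
    """Average SUS score across a pool of respondents."""
    return sum(sus_score(r) for r in responses) / len(responses)

# Two hypothetical respondents.
respondents = [
    [4, 2, 4, 1, 5, 2, 4, 2, 5, 1],   # fairly positive -> 85.0
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],   # neutral -> 50.0
]
print(mean_sus(respondents))          # 67.5
```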
A
Yeah, just a heads up to some people about the mechanics of changing the system usability score, since we're going to change our accounting method. That means we're going to potentially invalidate the current SUS time series. So what we agreed to do was basically keep SUS the way it is and, in parallel, have something called the GitLab usability score, which is the sort of modified SUS that Christie just described, and then once we feel confident that it is measuring the right thing.
B
What we realized is that the feature enhancement label was being used incorrectly, so QE is helping us basically reset from zero on that. We're going to blow that label away on the existing issues, because it's not being used accurately, and we're going to start using that as the official accounting method. Anup and I are working on a communication to the engineering and product teams to make sure that everybody understands how we intend for you to use this. And then, yes, Anup is also going back and making sure that that 50% number is accurate, and he's also talking to the PMs about it.
C
Yes, we got a request from last time, so this is a follow-up to get LCP in place. Before, we just had the Grafana links for that; we now have it flowing from the tooling into Sisense and are reporting them on the pages. Right now we only have one page up on the performance indicators page.
C
If anybody has any feedback on whether we should list these all under the development page or have a separate page, I'm open to that, because we have five additional ones that we're measuring as part of our OKRs for Q3, and then we'll be expanding those in Q4 to additional ones that product designates as being important pages to have timing below two and a half seconds.
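To make the target concrete: the check being described is essentially computing the 90th percentile of LCP (Largest Contentful Paint) samples for a page and comparing it with the 2.5 second threshold. A minimal sketch follows; the sample values and the simple nearest-rank percentile are illustrative assumptions, while the real numbers flow from GitLab's performance tooling into Sisense as described above.

```python
# Sketch: compute the 90th percentile of LCP samples (seconds) for a page
# and compare it with the 2.5 s target. Sample data is hypothetical.

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

LCP_TARGET_SECONDS = 2.5

lcp_samples = [1.8, 2.1, 1.6, 2.4, 1.9, 2.2, 2.0, 1.7, 2.3, 1.5]  # hypothetical
p90 = percentile(lcp_samples, 90)
print(f"LCP p90 = {p90:.2f}s, under target: {p90 < LCP_TARGET_SECONDS}")
```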
D
Craig, yeah. So that was pretty awesome, like the five charts that I saw in Sisense. My view is you should probably not put this on the KPI page but find another page where you share it. I think it's really cool and really great to be transparent on this, and thanks for clarifying that this is SaaS only. Sid, the next point.
E
Yeah, it's a great metric, the LCP p90. It's surprising to me that we're below it, because that's a great result. So now I'm like, hey, are we checking the...
C
E
Great news, and congrats on that, Christopher.
C
Yeah, just a Q4 update on OKRs. I know it's early, but I figured I'd get it out there. So far we've asked for volunteers and we've been struggling to find one. We had two engineers express interest, but they thought it was a short-term versus a quarterly commitment, and they both declined.
C
They thought they were just helping out to get it set up, like a five-minute or half-a-week kind of commitment, versus a three-month commitment. Yeah, that makes sense. It's like, you know: hey, we need help over here. Oh yeah, we're willing to help. Oh wait, no, you mean full-time? Okay, hold on, that's different thinking from their perspective.
C
To help out, even in a jam, though, from that perspective.
E
Cool, yeah, maybe we need to assign a volunteer. Let's see. That was a joke.
F
Mack, yes, thank you. So this is fulfilling the promise from last time, a follow-up on improving how we measure time to close of high severity bugs. The previous measurement packed too many things into one indicator, and that's my fault; I'll take responsibility for that.
F
After looking at it, to set up teams for success we likely need three separate indicators to encourage the behavior we need to see. The first KPI has already shipped: we're measuring mean time to resolve, or mean time to close, which is how fast we are to fix the bugs, and we're doing okay here. We've been under the target of 30 days for S1 and 60 days for S2 for a period of time now. And there are two more PIs coming. One is backlog.
F
That's how many bugs we have left, and to improve that we need prioritization, where we work with product. The other is bugs that are not yet triaged at high severity; these are the unknown unknowns, so if this number is high, quality will lead the charge on those bugs. So three indicators, three behaviors, and currently the teams are doing well. I think we're under the target so far. Craig, yes.
D
Yeah, it's really good. Usually I like charts going up and to the right, but when it comes to how fast you can close bugs, down and to the right is totally acceptable, and it looks really good. I just asked a clarifying question to make sure that, you know, there's no bias in the data in the more recent period. If you're pivoting on when the bugs close, then there is no bias, but if you're doing it on when the bugs open, then there would be bias.
F
It's when it's closed, and I have the data with our team, and we're going to confirm that on the call. So we...
F
G
D
F
Okay, I'll take responsibility on that. Sorry, I'm still catching up, and we'll fix that, and we can also rename this to mean time to close to make it even clearer as well. Eric, you have the point there.
A
Yeah, I know Craig was sort of being jokey about the graphs going up to the right, but it is a real thing where, when you're switching contexts looking at so many KPIs, it's helpful if they all have the same mental model. So we can, and should, actually consider adding a negative sign to the numbers.
A
You know, so that all the charts kind of flow the same way, because at this point we have probably close to 100 key and regular performance indicators, and so lowering the mental overhead might make it that much easier.
D
I know, I was being tongue-in-cheek. I think it's okay if some of them are like that, but we should just make it clear that down and to the right is good, or whatever it is. I'm just going back to the last point.
D
I actually was arguing that close date should be what you pivot the chart on, not open date, and the reason for that in this case is because, if you look at the current month using open date, since most of the bugs have not been resolved yet, it'll look better in the current month than it really is, like I said. I think Sid said the opposite, so I just want to reconcile that.
E
D
Yeah, the other metric I've used, if you want to pivot on an open date, Sid, is something like resolved within 30 days, or resolved within two days, or you pick the number. Typically with those curves you get to a point where the majority of the stuff is resolved within a certain period of time, and that way you get a really good sense of how we're doing, and you can then track it over time. That's just another option.
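A small sketch of the two approaches being compared, using hypothetical bug records: mean time to close pivoted on the month a bug was closed (which avoids the recency bias Craig describes), versus the share of bugs resolved within N days pivoted on the month they were opened (the alternative metric mentioned here). The field names and dates are made up for illustration; this is not GitLab's actual reporting query.

```python
from datetime import date
from collections import defaultdict

# Hypothetical bug records; "closed" is None for bugs still open.
bugs = [
    {"opened": date(2020, 9, 2),  "closed": date(2020, 9, 20)},
    {"opened": date(2020, 9, 15), "closed": date(2020, 11, 1)},
    {"opened": date(2020, 10, 3), "closed": date(2020, 10, 10)},
    {"opened": date(2020, 10, 12), "closed": None},
]

def month(d):
    return d.strftime("%Y-%m")

# Mean time to close, grouped by the month the bug was closed (no recency bias).
mttc = defaultdict(list)
for b in bugs:
    if b["closed"]:
        mttc[month(b["closed"])].append((b["closed"] - b["opened"]).days)
for m, days in sorted(mttc.items()):
    print(m, "mean time to close:", sum(days) / len(days), "days")

# Share of bugs resolved within 30 days, grouped by the month the bug was opened.
resolved_within = defaultdict(lambda: [0, 0])  # month -> [resolved_in_time, total]
for b in bugs:
    in_time = b["closed"] is not None and (b["closed"] - b["opened"]).days <= 30
    counts = resolved_within[month(b["opened"])]
    counts[0] += int(in_time)
    counts[1] += 1
for m, (hits, total) in sorted(resolved_within.items()):
    print(m, f"resolved within 30 days: {hits}/{total}")
```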
D
All right, now I get to pick on Jonathan a little bit, but it's all good. So in a prior review we talked about the SOC 2 Type 2 report scheduled for November, and we're sitting at the end of October. Are we still on track, and if not, do you need any executive support to help with that?
H
Yeah, still on track. The audit kicks off in November. The report date will be sometime in January; the audit should take roughly two to four weeks, and the report date is typically 30 days post-audit, so we're still on track. The audit kicks off in the next couple of weeks. We're super excited about the results. It'll be our first SOC 2 Type 2, right; the previous one was a SOC 2 Type 1. So customers are excited.
H
We're excited, sales is excited, it's a big win for GitLab. And sorry, to answer your question: no, at this time we don't need any support, but once the audit kicks off there will be a schedule, and that schedule will include some interviews of, you know, administrators.
H
You know, some managers, people that own different components of our infrastructure, or people that own different parts of, you know, security policies or processes. Those will be mostly within security; those that we reach out to will pretty much just be infrastructure for the most part, maybe a little bit of development, but we'll certainly give Steve and Christopher a heads up ahead of time.
D
That's going to be great. Okay, next one on security. First of all, I spent time with the trust and safety team, thanks for that. That was a really valuable discussion about where we are in terms of flagging abusive accounts and spam accounts and stuff like that. It was a really good meeting, and I think there's a path for us to get that data to flow through into our metrics so that we can filter that out.
D
My question was: it looks like you've actually taken some steps to quantify the first iteration of the cost of abusive accounts. I'd love to see where that WIP, the work in progress, is. I mean, I know there's another meeting we're supposed to use for this, but where's the first step towards that that we can review?
H
Yeah, so first of all, thanks for spending time with John and Charlotte on that. I was really excited to hear about your interest and your insights. You have a lot of expertise in this area as well from your current and your past roles, so your input is super valuable; I just want to say that first. Secondly, this has been a KPI that I've been interested in adding as one of our top five primary indicators of security health for probably all of Q3.
H
So for like three months now I've been interested in getting this put into the handbook. It's now part of the handbook, the initial indicators are there, and I've got two different links here for you. One is to the handbook with the KPI, which gives a disclaimer of: hey, this is what this currently includes, and this is what it does not yet include.
H
So that way it's very clear what it is you're looking at. Then, if you want to click into that, I added the second link here, which is the current Sisense graphs, which show what we would categorize as the cost of abuse across the categories that we're tracking, as well as the number of accounts. So we...
H
We go further beyond quantifying the dollar valuation, you know, like this is costing us X amount of dollars per month, and also show the number of accounts that are causing it. No, this is fantastic, actually. I'm just looking at the charts now and I'm going to spend some time with it; it's great. Yeah, let me know if you need any help with that; myself, Charlotte, any of us can go into further detail, and this is only going to improve. With Q4 we have a number of updates and changes.
D
D
H
Yeah, thanks, Craig. So yeah, I built a pretty good relationship with sales from the beginning, right. Obviously we have a huge impact on sales, and a huge impact cross-functionally with a lot of teams: we impact development, we impact product, we impact infrastructure. So from the beginning I was interested and invested in determining our impact on sales, how we can better serve the sales org, how we better serve our customers and increase growth, increase revenue, and increase...
H
You know, security in the mindset of our customers, the perspective of security. And so a couple of those things you mentioned here, right: we've created an assurance package that sales can give out proactively, which includes a ton of transparent information about who we are and what we do, and this is all from a security perspective, all these things self-serve.
H
We've also engaged in security training with the sales org, and now for Q4 we're also looking into building a closer relationship with sales and partnering on sales calls proactively.
H
So rather than wait for that call, wait for that customer to say, hey, now we need to go talk to security, set up another call in the future, we're trying to figure out ways to get a security representative on those calls ahead of time, so they can answer those questions in real time and reduce that time frame to close those deals. So to answer your question specifically around, are we checking the deals lost? Sales has indicated...
H
That's a challenging question to answer, and it has been asked. Deals are lost, I think, for a compilation of issues; there are multiple reasons why deals may be lost, and they don't have a clear indicator or a clear label that this deal was lost specifically because we don't do X in security. That said, we've done the flip side and, excuse me, we have tracked all the deals that we're impacting, by touch, by method. So that includes answering questionnaires.
H
We're getting on phone calls, we're sending out the SOC 2 report, we're answering support questions, we're doing all these things, and that has then led to us closing the deal. You can see that from this link here; it's also a KPI, our impact on IACV. Ultimately, I hope that answers your question, and I really appreciate you asking these things. Craig.
D
E
Yeah, thanks. As Eric said, the impulse to present is strong, and I stopped blocking it and started channeling it into videos beforehand, and I think we're going to maybe even make it a requirement for key meetings, but I'm not there yet. I want to float that idea. Eric.
A
Yeah, I thought we did pretty well on this when we just had the reminder at the top not to present, and then I took it out because it felt like it was just clutter, but then, when I removed it, sure enough, we started sort of presenting. I think the two that I thought were sort of presenting were Christie and Christopher's OKR updates. We happen to have a how-to-achieve meeting later today, so those easily could have flowed into that, but Christie, you're...
B
Understood, yeah, no worries at all. I mean, I was just looking at it because there were a lot of questions about SUS in the last ones. It's like an update, but I could have just had y'all ask me questions; I don't have a problem with that.
B
D
A
So based on that, I would say we should just go back to adding the reminder, hey, don't present, and we'll just make this pure discussion. The other question you'd asked is, you know, we already have separate support and infrastructure key reviews. Are we at the point where this meeting is bursting at the seams and we should do engineering department-level key reviews? If we do that, then everybody would be free to do their own thing.
E
At some point we're going to have PI meetings, which are performance indicator meetings, not key reviews, anyway. I think I'm okay with this so far, also because engineering is just well run in general. Craig and I ask questions because it's our job, but I think it's super well run, and I love all the data that we're getting. I think everything is going well, and I'm not concerned about any specific part.
D
C
I don't know who else got the word, but when they realigned the group conversations, development has one a month now, even though we're on an eight-week cycle. So if we want to do KPI reports on development, I'd probably start rolling those into the group conversation as part of that monthly update.
E
C
C
E
Yeah, it's just that we saw a lot of duplicate work. We actually saw people put more time into preparing their GC than their key meeting, even though the key meeting is, in my view, more important, so we wanted to get rid of that duplicate work. So now it's: just recycle your key meeting presentation. I hope that will improve the quality of the key meetings and reduce the time spent on the prep that is needed for both meetings in aggregate.
E
C
Yeah, I do a rotation with my reports, so I've been doing that a little bit up to this point; that's kind of my idea. The other thing is the product team. I know we're way off topic for the key meeting, but the problem previously was that the product teams each had their own, so I felt like they could be represented as part of product, and that was a good partnership.
C
Now I understand that those have been removed and product is just presenting a single focus. So that's why I'm probably going to bring it back into my fold for the group conversations. In other words, product was doing it by section, so they were part of that participation; that's getting removed. So now I have a little bit more flexibility on my side to talk about the good things we're doing in development by section as well. Cool experiment.
E
So that's the bad news, but you can now reuse the deck for the group conversation, and I'm not forcing you to do it. You can also first try just putting this into the group conversation. I find it more work for me than other key meetings, and I think your experience in the group conversation will be the same.
A
E
It's just that normally I have it on another screen and I just click, click, click, click. I know how long it is, I know it's 60 clicks, it's exactly the right information, and I can refer to a slide number without pasting the link in. The mindless clicking is the most important part: I want key meetings to be mindless clicking, to be like Netflix, although you don't click on Netflix, but I hope less brain capacity is used.
E
Please just show me the numbers instead of having me click around for the numbers. But I'm not making you do that now; let's first try to recycle this for the group conversation, but I think you'll get the same feedback from people, like: where's our slide doc?