From YouTube: Key Meeting - Engineering - (Public Stream) [REC]
A
And I'm the executive vice president of engineering, and this is the September 22nd, 2020 engineering key review. We're going to be covering four of the six engineering departments here: development, quality, security, and UX. Support and infrastructure have their own key reviews, so we'll try to direct comments to those meetings when appropriate. I know we had the infrastructure key review this morning, and there's one question that was actually directed to this meeting about development. On that note, I believe Christopher has a conflict about halfway through and has to leave early.
A
So if we could put questions for Christopher and development up as high as possible in the agenda, we'll get to those first.
A
So maybe, Christopher, do you want to elevate number six in the agenda up to number four?
B
Are we on item four? Are we prior to that point?
B
Yeah, sure. I'm sorry, I didn't know if we had already gotten there; everything else was FYI. Sorry about that. So, there was a request that we have the Largest Contentful Paint marked as a KPI, basically at the 75th percentile. What we've done in the short term, for the first iteration, is get this into Grafana, so we've added it to the handbook under performance indicators.
B
It actually points to a page with five charts of the different pieces that we're looking at improving right now. It shows the median; we'll need to change that to the 75th percentile. So our plan is to pipe it into the data lake, and once it's there, get to the 75th percentile and then break it out into the five PIs, of which we'll need to select one to be the KPI.
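As a rough illustration of the plan just described (the chart shows the median today; the goal is the 75th percentile once samples land in the data lake), here is a minimal percentile sketch. The function and the LCP sample values are invented for illustration, not the actual pipeline:

```python
# Sketch: today the chart shows median LCP; the plan is to report p75.
# Uses a simple nearest-rank percentile over raw samples.

def percentile(samples, p):
    """Nearest-rank percentile of a list of numbers (0 < p <= 100)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # nearest-rank: ceil(p/100 * n), converted to a 0-based index
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[int(rank) - 1]

# Hypothetical LCP samples in milliseconds for one page.
lcp_ms = [1200, 1850, 2400, 900, 3100, 2750, 1600, 2050]

median = percentile(lcp_ms, 50)  # what the chart shows today
p75 = percentile(lcp_ms, 75)     # what the KPI is meant to track
```

The same aggregation would run in the data lake rather than in application code; this only shows why p75 and the median can tell quite different stories about tail latency.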
D
As a follow-up from the last meeting on me being the single interface for engineering metrics, and on how we remodel how we do data projects, here's the FYI on the current process. Special thanks to Kyle, the lead, Chad, and the data team. So: five new PIs on personnel staff, the gearing ratio for quality and UX, and six new PIs for overall community contributions, the first of them being the key one, which is the volume of community MRs. And then we cleaned up roughly 51 PIs to have clear directional indicators.
D
I believe this was a callout from Paul last meeting, so we are aligning on "above" meaning above target and "below" meaning below target, making it standardized. And in point four, we're also moving to a centralized, handbook-first set of engineering indicators that completes the overall view of development, quality, and UX. These are drilled down into every sub-department in engineering, with metrics from the counterparts in UX and quality, and from Craig.
D
This is where you'll be passionate about the bugs, because now you can see the drill-down on bugs all the way down if you click on the metric. We'll also factor in security and infrastructure going forward, as the next iteration. So: full transparency, opened up for everybody to see. We have a few more in progress as well, in point B, one, two, three. Lily is working hard on auditing all our KPIs to make sure they're standardized.
D
We want to measure MR ARR. We're now working on how to track this in a back-office ledger, because we need to know which accounts are submitting a merge request. If you don't have that knowledge, you have to be working with the community relations team to put these out. And then we'll continue to build on the centralized, handbook-first metrics overall. So I'll pause there for questions from the group.
A
The one I would want to call out here is 5a, Roman numeral IV: the gearing ratio for our SETs. Do you want to talk about that versus target?
D
Sure. So, number four: our target is at 42, and I believe we only have 15 or 16 SETs in general, so we need help in this regard in increasing the coverage in the SET space. My gut check is that it also correlates with test coverage and the bugs as well. So if we help improve testing, those numbers should improve as well.
A
Yeah. So, as headcount requests become a thing: we're at 38 percent of our target here, and this is a really important area, because this makes our software robust for enterprise customers. In terms of how we got here, we did deliberately limit this one, because when we first set the gearing ratio, the team was, I forget, two years ago, maybe 15 people or something like that. It wasn't possible to grow at 400 percent in a year to hit our gearing ratio. So we said, let's limit it.
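The "38 percent of our target" figure quoted here is just current headcount over the target, using the numbers mentioned earlier in the meeting (a target of 42 and roughly 16 SETs today); a one-line sketch:

```python
# Percent-of-target calculation behind the "38 of our target" remark.
# Inputs are the figures quoted in the discussion: target 42, ~16 SETs.

def percent_of_target(current, target):
    """Current headcount as a rounded percentage of the target."""
    return round(100 * current / target)

coverage = percent_of_target(16, 42)  # roughly 38
```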
A
This limits getting to our ideal gearing ratio, just based on how fast any department can reasonably grow in terms of adding new people, hiring them, and onboarding them. But it means that when we limited hiring in the earlier part of this calendar year, we ended up at 38 percent of our gearing ratio, and to the extent we're not getting back there, we're going to start feeling the pains of that.
A
So we should probably look at the backfills that we're doing from kind of organic attrition, rather than just backfilling them by default. Maybe we should take a look at utilizing some of those to increase this gearing ratio, and actually be transferring them from one department to another, because this is like the one gearing ratio for headcount that we're way below on. So I think we have to look into fixing that creatively.
E
Yeah, these are just a couple of FYIs. I wanted to let everyone know that we've added a new regular PI for actionable insights, to help us track UX research findings and also so we can see how well we're doing at actually addressing those findings and hopefully burning them down over time as it makes sense. And then I moved proactive UX work, which is the metric we use to track the amount of user research we're doing, to "attention," just because I'm keeping an eye on it. Last quarter it was a little lower than I expected, and this quarter it looks like it also might be lower than expected, although I'm not sure yet. I have a hypothesis that our lack of unmoderated research capabilities is what's causing the amount of research to go down, because the current way that we have to do research is labor-intensive: it's face-to-face interviews.
A
Cool, thanks. So, Craig, number seven.
F
Yeah, this is the question that I had in the infrastructure meeting that we brought over to this meeting. On the infrastructure handbook page, it showed that GitLab.com site performance needs attention, and then...
A
Yeah, so, great question. I would say performance is a complicated topic. There are definitely aspects of infrastructure performance, back-end performance, and front-end performance, all of which are separate. This was a question, I think, at the beginning of last quarter, and I took an action, which I haven't been able to prioritize, to differentiate between those in the handbook.
A
So it's clear to people. I didn't get to that one, but now that you're raising it again, it tells me we probably do need this handbook page, and I probably need to delegate it to someone to write it for me, because I'm holding it up. The quick and easy answer, though, is that development, front-end in particular, is working on it because the things we detected, we determined, were front-end-specific issues. That doesn't mean there aren't issues that would be on the back-end team, or in infrastructure, or all three, in a given time frame. So I'll figure out how to delegate and deliver on that kind of performance-landscape content, and then we'll try to make it clearer going forward when different performance initiatives land in one of those three main pockets.
F
Yeah, that'd be helpful. Because if I'm on the infrastructure page, and really they've pulled all the levers and there's not much left to do, it should just be marked good to go. But if it's front-end, okay, we could load this part of the payload faster than this part, and those are really the optimizations we have to make, then it should be front-end that needs attention, right? So that would be very helpful. Yeah. I agree. Cool, all right. It looks like Jonathan saw my number eight, so I'll just move on to number nine.
F
...accounts, whether it's at the group level or the user level; and then from there, is it stored in our production database, and then from that point moved into our data warehouse so that we can do reporting on it? I just wanted to get Jonathan's view of how mature we are here and what kind of next steps need to happen.
G
Yeah, hey, thanks, Craig. I really appreciate you bringing this up. This is obviously a very important topic to us all. What I can tell you is that the trust and safety team has done a lot of work around behavioral analytics, and if you remember Melissa, when she was here, she did a lot with machine learning and with a lot of scripting, creating automated detection scripts to identify these accounts.
G
So, within our platform itself, we don't have any technology or any feature that flags abusive accounts, right? But what we do have is scripts that we've built. We have two programs: one's called Scrubber, one's called Bouncer. Scrubber is what detects the behavior, right? It checks against 963 different rules, so there are 963 rules that identify... one more time? No, I just...
G
A lot of time went into this, over the course of multiple years, and so: 963 rules. It runs every 10 seconds in GCP, and so every 10 seconds, hopefully, we identify, you know, 100 accounts or five accounts or whatever has been spun up in the meantime, for crypto mining, or, you know, spinning up instances for other fraudulent purposes. Right, so it identifies these accounts, and then we run the script called Bouncer, and Bouncer then goes and kicks off those accounts or disables those accounts.
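An illustrative sketch (not GitLab's actual code) of the Scrubber/Bouncer split just described: one pass flags accounts against a rule list, and a second pass disables whatever was flagged. The two rules and the account records below are invented stand-ins for the roughly 963 production rules mentioned:

```python
# Toy version of the two-stage detection described in the meeting:
# "scrubber" flags accounts matching any rule; "bouncer" disables them.

def scrubber(accounts, rules):
    """Return the accounts that match at least one detection rule."""
    return [a for a in accounts if any(rule(a) for rule in rules)]

def bouncer(flagged):
    """Disable every flagged account and return how many were disabled."""
    for account in flagged:
        account["state"] = "disabled"
    return len(flagged)

# Two invented rules standing in for the production rule set.
rules = [
    lambda a: a["pipeline_minutes_last_hour"] > 500,  # crypto-mining pattern
    lambda a: a["projects_created_last_hour"] > 50,   # mass spin-up pattern
]

accounts = [
    {"id": 1, "state": "active", "pipeline_minutes_last_hour": 900,
     "projects_created_last_hour": 2},
    {"id": 2, "state": "active", "pipeline_minutes_last_hour": 5,
     "projects_created_last_hour": 1},
]

flagged = scrubber(accounts, rules)
disabled = bouncer(flagged)
```

In production this loop would run on a schedule (every 10 seconds, per the discussion) against live account activity rather than in-memory dicts.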
G
So what I can tell you, Craig, is that in the platform, or in our databases, we don't have anything flagged, and we don't have the features to flag it. That said, we can certainly get you the metrics from Scrubber and Bouncer; we have all that data available and we can just shoot it over your way. I'm sure there's a better way to create some sort of metrics or automation around this.
G
The trust and safety team is currently working on that now. They're working on putting this into a dollar figure, right? Like: hey, this month it cost us forty-seven thousand dollars in fraud. Now, I was calling it fraud; you noted in number seven or number eight to call it abuse. That's totally fine, I'm not stuck on names.
F
I think there is value in our production platform flagging these users, because when we do all of our core data work, we'll then have a single source of truth of which ones to take out and which not, versus having to reverse-engineer it in the data warehouse for this use case versus that use case. I just think we can simplify that. And additionally, if you end up, you know, getting to a point where we have false positives, where you need to have a team actually review them and stuff like that...
F
...you have the ability to do that too. So I would love to see, you know, a proposal or a method to do that, because I think that will lead us to more standardization in how we report and understand our business measures. So that's it; it's a suggestion, taken for what it's worth, from the finance guy.
G
No, very valuable, thank you for pointing it out. Yeah, I'll get together with Charles, and with Jan, who is over the operations org; I'll get together with them and figure out what it's going to take to build that into the product, then, from us digesting the data to the database, and see what we can put together. Yeah, yeah.
F
That'd be great. And yeah, since I come from marketplace companies, if you want, you can just pick my brain on what I've...
F
Number 10 refers to Mek. Great, I can tell, I can see the pride in his bug measurements. I have a question for him on the bug movement. This is flagged as a problem, although the trend, as I'm reading the chart that I linked here, in severity-one bugs seems to be, you know, down and to the right, which I seem to think is a good thing. Can you give more color on why you labeled it a problem and how you're going to...
D
...target around it and how I'm thinking about it? Happy to do this. So this is also why we're moving to handbook-first, to provide drill-down before we look at the chart and decide which team we're going to talk to. We're defining a stepping stone for us to move the number down further. I also want to transparently say that we recently switched the levers to track SLO based on severity, so we are still looking at more analysis before we adjust the health status up from "attention" again, out of an abundance of caution, because this is a sensitive metric and you don't want to over-celebrate before confirming that's the case. That's all I have for now.
F
So the fact that the... let me just share my screen to make sure we're looking at the same thing. This is what I'm looking at, right? Like, the fact that this chart looks like it's going this way, right? So you're saying you just want to triple-check the data, make sure that we actually are getting better than where we were, you know, nine months ago, and then you'll move this up. Okay, perfect. Thanks for that, that's great, and thanks for sharing this.
D
I believe that some teams are doing better than others, and I believe there's some more automation that needs to be completed to complete the overall view. Kyle, if you want to chime in on the triple-checking of the data before we remove the "attention" health status in the next iteration.
D
Thanks for the feedback. We're actually looking at changing this so it's clearer. Currently it's based on the missed SLO, so if the chart goes up, it means there's an S1 bug that's beyond the intended SLO, and that might not be the best way to represent it. What we're looking at on the x-axis is the time the bug was submitted.
H
That is super unclear; let's put a giant callout for that on it, because it actually says "historical bug count" somewhere. So it's very easy to interpret that as: oh, on January 20th, that was the number of bugs that were open at that time.
H
Thanks. Back to number 12: we're seeing more features contributed by the wider community, which is awesome. Is that due to us labeling them more assertively, or did the content of what gets contributed actually change?
D
We would have to circle back on that, but I believe there's more. We discussed the ops section having more...
I
...activity here. Backstage was deprecated around the time you see the uptick. So there were a number of MRs that would have been labeled "backstage", or developer-facing changes, that are now labeled "feature maintenance". It could be things like test updates, changes that apply more to changing existing features, but maybe smaller changes.
H
That's great; glad we got rid of that grab bag of a label. Awesome. Reminder that this is a public stream. On 13: there's the issue for the MR ARR metric. I wonder why that's confidential.
D
I set it that way out of an abundance of caution, because we may be discussing topics related to ARR. Happy to open this up more if you want. Yeah.
H
Let's open it up, and I hope that people don't start naming customers in there, I think.
A
There is a concern, which I noted, about this KPI being public, let alone the discussion about creating it being public, where people could potentially back out our average revenue per customer from this KPI. And if finance doesn't want that to be a public metric, we should think about how to prevent that calculation from being possible, because that's a key input to it.
H
Average revenue per customer? It's not going to allow you to back that out, because it's very atypical for a customer to contribute; it's not super useful. I'm much more worried about measuring the ARR of an individual customer and actually knowing who the customer is. Like, I see a merge request from Siemens come in, and I see the metric go up by something.
H
Oh, just a random example; I should say "customer X". So that's, I think, the much bigger worry.
H
I think it's not super... I think it's just our competitors. For our competitors, it's interesting to know who's a customer of GitLab and how much they pay. So hedge funds assessing how we're doing as a company, I don't think that's the problem; I think the problem is competitive intelligence.
A
So we might have to make the resulting graph private; totally open to that. And then, the way it was phrased, it was like: hey, we'll take the list of customers that contributed and multiply it by ARR. It's merge requests times ARR for a reason: if a customer submits five merge requests, we double count or triple count or quadruple count; we count it five times. So that's a very intentional effect.
A
And, Guru, that said, I think we can help make this feel more natural to people if we make sure the output, this billion- or trillion-dollar number, is not labeled dollars. It should be labeled "MR dollars", because it's not dollars; it's its own sort of synthetic thing, and we should label it as such.
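A minimal sketch of the intentional multiple counting described above: the metric sums the submitting customer's ARR once per merge request, so five MRs from one customer count its ARR five times. The customer names and ARR figures here are made up for illustration:

```python
# "Merge requests times ARR": each MR contributes its customer's full ARR,
# so repeat contributors are deliberately counted once per MR.

def mr_arr(merge_requests, arr_by_customer):
    """Sum the submitting customer's ARR once per merge request."""
    return sum(arr_by_customer[mr["customer"]] for mr in merge_requests)

# Invented customers and ARR figures.
arr_by_customer = {"acme": 100_000, "globex": 40_000}

merge_requests = [
    {"id": 1, "customer": "acme"},
    {"id": 2, "customer": "acme"},   # acme's ARR is counted a second time
    {"id": 3, "customer": "globex"},
]

total = mr_arr(merge_requests, arr_by_customer)  # 100k + 100k + 40k
```

This is also why the output is a synthetic "MR dollars" figure rather than real revenue: the same ARR can appear in the sum many times.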
H
Yep, it's a new kind of funny money. And then, something that doesn't really belong in this meeting, but I can't help but ask, because, I don't know, I was thinking about it yesterday.
H
Christy, can the docs get some design love? And, on notes: can we stop making everything a note? It's just really disheartening to look at our docs with notes and warnings splashed all around them. They don't look great.
E
Yeah. Sorry, I'm laughing just because I agree with you so deeply, and my leaders, my tech writing leaders, and I have talked so much about this. So yes, absolutely, we are working on things now to address some of this. I would also like to hear, though, separately from this meeting, if you're willing to share them, the specific things you have in mind, because what you're focused on may be different than what we're focused on. Can I set up time with you to do that? Yeah.
H
I'd love to do a 25-minute livestream rant about what's wrong with the design of our docs. I think we could get rid of all the notes and just inline them, but that's probably not the right approach. So yeah, I'd love to rant a bit; let's do that. I think we have the best documentation in the business. It's amazing how extensive it is, and it's very well written, but we could make it look a bit more appealing.