From YouTube: Development Group Conversation
B
Thanks. I was trying to do the math real quick and didn't have time, but we have 24 hires in development, two already and hopefully 11 in the pipeline, and that still seems lower than the 5 out of 6 in growth. So why is growth a particular area of challenge on the hiring front?
A
Growth also has a slightly different group of folks. What I'm specifically calling out is the fact that we'll hit five versus six of our target, where most of the other areas look mostly on track; that one's a little bit low, and it's also a priority for us, so that's why I'm specifically calling it out. The reason we actually might hit slightly above 24 hires is a couple of things. One, we do have attrition; that happens, and we can always hire for that attrition, so we should be backfilling.
A
Which one was that? I was thinking of, basically, some of the labeling that we're doing so we can better understand where we're spending our time associated with the various OKRs. We've also started to work on getting data over to Periscope; thank you to Emily and the data team for helping us with that effort. And the last one is slide 9, which is a miscellaneous set of items I pulled from the Engineering Week-in-Review, which I encourage everybody to read on a weekly basis, because it's got a lot of good information in it.
A
That's a good question; honestly, I haven't really thought about it, so to be frank, it hasn't been front and center on my mind. I would expect us to see some improvements in productivity. I would also expect some definite improvements in our ability to respond to and effectively impact infrastructure. You wouldn't necessarily see that in a development metric, but you would see it in responsiveness, in the basic time associated with that.
A
The classic answer here is also that you'll have a much tighter feedback cycle, so you would expect that teams like growth, and potentially the fulfillment teams, would be very interested in having this capability just because of the immediate impact cycle associated with it. But I don't think I've quantified that yet from that perspective.
C
If I may add to that, Daniel: one of the things that we're looking at, throughput, is kind of a good way to see how often we're merging week to week. The way I've communicated this to my managers is that CD's success is seeing our MR trend line sort of normalizing, so that we're merging week to week more consistently. So this is not about merging a specific number; it's more that we're not merging at a spike one week while the other weeks are much lower.
C
So that's how I would use throughput. And then, as Christopher mentioned, we're starting to build cycle time, or how long it takes to start an MR and get it all the way into production. That new chart is in Periscope, and it'll tell us how long something like that takes. So CD success would be that our cycle time is shrinking significantly, because we're not doing our MRs and then waiting to deploy them, and so on. Those two measures should help us figure out how successful our CD transition is.
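The two measures described above can be sketched as a small script. This is only an illustration of the idea, not the team's actual Periscope queries; the weekly merge counts, the MR timestamps, and the use of coefficient of variation as the "normalizing trend line" proxy are all assumptions.

```python
from statistics import mean, pstdev

# Hypothetical weekly merge counts (MRs merged per week).
weekly_merges = [42, 38, 45, 40, 39, 44]

# Throughput "consistency": a normalizing trend line means low
# week-to-week variation, not a specific merge count. One simple
# proxy is the coefficient of variation (stdev / mean).
cv = pstdev(weekly_merges) / mean(weekly_merges)
print(f"weekly merge CV: {cv:.2f}")  # lower means more consistent

# Cycle time: hours from opening an MR to it running in production.
# These (opened_hour, deployed_hour) pairs are made-up example data.
mr_times = [(0, 30), (5, 60), (12, 40)]
cycle_times = [deployed - opened for opened, deployed in mr_times]
print(f"mean cycle time: {mean(cycle_times):.1f} hours")
```

A spiky history (say, one week of 80 merges followed by weeks of 10) would show a much higher CV even at the same total throughput, which is exactly the pattern the speaker says they want to avoid.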
E
Sure, so I just had a question on how we measure throughput, and whether we look at MRs and assume that all of those are created equal, or whether we expect that they'll kind of average out to a similar value over the course of a month. Sean shared a great handbook link, so I think I have an answer to that question already, but I would love to just hear you explain it a little bit.
A
This is a purely classic computer science question, you know: how do you, why do you, what do you pick a measure like this and choose to optimize an organization around it? My take on it is that, in general, we're striving for the goal of 10 MRs per engineer per month, on average. That's an average; that's not a high mark or a low mark from that perspective.
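Since the 10-MRs-per-engineer-per-month figure is explicitly an average rather than a quota, a quick sketch makes the distinction concrete. All of the counts below are made-up illustrative numbers, not real GitLab data.

```python
# Org-level metric: total merged MRs divided by engineer count.
merged_mrs_this_month = 230  # hypothetical MRs merged org-wide
engineers = 25               # hypothetical engineer headcount
target_average = 10          # the stated per-engineer monthly average

rate = merged_mrs_this_month / engineers
print(f"{rate:.1f} MRs per engineer per month (target avg: {target_average})")

# It's an average, not a per-person mark: individual counts can
# vary widely while the group-level mean still sits near the goal.
per_engineer = [14, 3, 9, 11, 8]  # hypothetical individual counts
avg = sum(per_engineer) / len(per_engineer)
print(f"team average: {avg:.1f}")
```

The point of the second half is the one made in the answer: the organization steers on the mean, not on forcing every individual to the same number.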
A
So when you think about it in general, one of the things I think about is: if you actually have an organization that, statistically speaking, just thinks in terms of small MRs, eventually what you do see is them getting roughly sized around the same amount. The one thing you don't want to do is force people arbitrarily to that position.
A
You
don't
necessarily
want
to
force
them
arbitrarily
to
that
position.
Just
from
the
perspective
of
thinking
hey,
we
need
to
do
these
in
smaller
sizes,
but
we're
not
going
to
dictate
like
a
specific
size,
because
then
that
leads
to
all
sorts
of
other
I'll
call
abnormal
behaviors,
which
generally
are
viewed
negatively
from
that
perspective,
I
could
go,
I
could
pontificate
about.
You
know
certain
types
of
measures
that
have
been
used
classically
in
computer
science
and
software
engineering
that
cause
you
to
do
those
things.
F
Christopher, I have one question. On the slide about the growth-to-productivity challenge, we have an item that's crossed out: increasing project maintainers to allow code reuse to happen faster. Does that mean we don't want that to happen, or is it something we just haven't done? It's under the caption "what has been done" or "what is being done."
A
Good question. Well, I guess I really struggle with how to communicate this, so maybe you have a better suggestion or better words. The previous slide talks a lot about the fact that at one point we had a hypothesis that part of the reason our MR cycle time is going up is that we don't have enough project maintainers, and we definitely want to keep our ratios low. In fact, if you asked, "Christopher, what should our goal be?"
A
It's a somewhat still-debated topic, but ideally I think in terms of somewhere around four to one, potentially, for maintainers to reviewers. You could potentially see it being higher, maybe, if you have a really active reviewer community, as an example; there's a little bit of how much a reviewer is reviewing versus how much our maintainers are reviewing. But you do want to keep that ratio low.
A
The one aspect is that it's not necessarily going to help the productivity challenge, because if you look at the previous slide, what you'll see is that we're not seeing it being a primary indicator of productivity change. That sounds counterintuitive, I know, because a number of us have trouble with this, and we definitely get feedback around it. So that's why we want to keep the ratio low, but it's not necessarily helping push our productivity.
B
Just a quick thing: one way we were measuring the ratio was just a script in the website repo, the www-gitlab-com repo, which is fine. But most of our charts are in Periscope, so I created an issue for this. I don't really know how to get data into Periscope, but thankfully Taylor does, which is why he's on the data team and I'm not, and he's working on that. It's not a super high priority.
A
Cool. In GitLab fashion, I'm not going to ask whether there's one more question; we just end the call. So, given that there are no additional questions, I want to thank everybody for attending, and if you have any additional questions for me outside of this, please don't hesitate to reach out to me via Slack, email, an issue, or a Google Doc. Thanks, everybody.