From YouTube: Development Group Conversation (Public Livestream)
A: And it looks like we have gone live on YouTube, so good morning, good afternoon, good evening, everybody at GitLab. This is the start of the Group Conversation for Development. My name is Christopher Lefelhocz and I'm the Senior Director of Development. Thanks for the questions that have already gone in. I just want to do a quick shout-out if he's joined, which I'm trying to see... yes, he has joined. I just want to introduce Wayne Haber, our new Director of Engineering, Defend. Wayne.
A: Apologies if it said it was "met"; that's probably a cut-and-paste error from me and I should just remove that, so I'll go ahead and do it right now. It's more of an update of where we are. Key aspects of that: we still have a fair number to hire this month. If we do that successfully, we would hit 26. That's not the most we've ever done in a month; we did 29 in July. So I feel like we're still at a good pace and have good momentum.
A: Olivier, you have the next question.
C: Yeah, I was trying to type it at the same time. I'm curious about the OKR about increasing productivity. It might be a silly question, but I'm not sure yet what is used to calculate or measure that productivity.
C: So they are investing time in this, and it's important for the engineering team to invest time on that, but it's not reflected in the metrics. So instead I was thinking: we are starting to use weights in our team, and I was wondering if it would be useful to leverage metrics that reflect the weight of the issues that are closed to measure the productivity of the team.
A: I think there are some key aspects here that we've got to focus on. You're bringing up issue weights; that's a good one. The other one that we're going to explore is how many changes are actually being associated with things. The fundamental question that I have is, if you have a lot of things that are taking a long time and a lot of weight: can we be more incremental and basically break that up into potentially smaller MRs for the useful work associated with it? The other point you bring up is prototypes, which obviously would be work that's not necessarily committed into the code base, but is tried and then fails. We've got to account for that, adjust for it, and make sure that we're not seeing too much of it. I think that's the reason why we don't want to count that. You had like five or six questions in there, Olivier, so I'm trying to hit on most of them. But please keep asking questions, because this is a good discussion topic in itself.
A: And that's the change that we're making: before, we were counting a number of repos that weren't included in what we actually shipped, so we're adjusting for that. Quality is included in that as well, via the particular repos that test the shipping code, because we consider that to be part of shipping. But, for instance, if we're doing automation around the handbook, that's probably not going to count toward the shipping value associated with this metric. That's the way we think about it from that perspective.
C: Yeah, then I think we are facing the same issue, because again one MR doesn't have the same value as another MR. It's complicated, and I understand that by breaking work down into multiple MRs we grow this number, but it really depends on a case-by-case basis, on what type of work you're doing. If you're starting an implementation from scratch, obviously you will invest a lot of time, and it's not super easy to break that into ten merge requests, but it will still be valuable.
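(To make the two options in this exchange concrete: below is a minimal sketch contrasting raw MR throughput with the issue-weight-based metric Olivier proposes. The data structure, field names, and sample values are illustrative assumptions, not GitLab's actual measurement pipeline.)

```python
# Minimal sketch contrasting raw MR throughput with an issue-weight
# metric. All data, names, and fields here are hypothetical; this is
# not GitLab's actual measurement pipeline.
from dataclasses import dataclass

@dataclass
class MergedMR:
    author: str
    issue_weight: int  # weight of the linked issue; 0 if none was set

def raw_throughput(mrs):
    """Throughput as a plain count of MRs merged in the period."""
    return len(mrs)

def weighted_throughput(mrs):
    """Sum of issue weights closed in the period, so one large MR can
    count as much as several small ones."""
    return sum(mr.issue_weight for mr in mrs)

period = [MergedMR("a", 1), MergedMR("b", 5), MergedMR("c", 0)]
print(raw_throughput(period))       # 3 MRs merged
print(weighted_throughput(period))  # 6 weight points closed
```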
A: I think we should probably follow up the discussion on that. I'm very curious to understand those aspects where we feel like things either can't be broken up, or things around that. I think there are definitely going to be statistical anomalies in this situation. The question is, how do we measure ourselves, and how do we make sure we're working towards that; and if we really feel it's misaligned, we can propose an alternative. But so far it's worked pretty effectively.
A: The key aspects that I've been thinking about are automation and also special projects. As an example, back in June of 2018 we actually had a project called Bootstrap 4, where we were doing some updates of the code base. I wasn't here at the time, but associated with that work there was a pretty big spike in MRs. That's a good example where we were churning over the code, which is a different experience from that perspective.
A: So from an anomaly perspective, we're definitely going to see some of those, and the complexity of the different situations that come up can actually be the dominant factor. But at the end of the day, we still need to show that we're improving and that we're getting the best utilization out of everyone on the team. So from that perspective, a lot of these issues balance themselves out.
A: I'm happy to take additional discussion on the topic, because there's definitely no clean answer here. But you've got to pick something so you can say, okay, this is how we're going to evaluate ourselves, and this is kind of where we're at right now at this point in our journey. Thanks.
E: Sure, thanks, Christopher. You touched on some of this already, but I was just curious, as someone new: your OKRs are around improving predictability on customer expectations, hiring to plan, and increasing productivity. What other dynamics go into improving the performance thus far in this goal?
E: I should refine the question. My question was more specifically about improving our predictability. Do we plan less aggressively, or are we going to start rotating towards saying we're going to do less? I'm just curious about how we manage that, beyond, you know, the other sort of related OKRs. How else? What are the dynamics?
A: The philosophy is velocity over predictability, right? So from that perspective you're going to see a definite change in terms of that technical evaluation being made. One aspect we've been given feedback on is that customers have definitely seen cases where things slip out; because if you're going to choose that model, things are going to slip out at times, and by "at times" I mean a fairly consistent percentage, just from a numbers perspective. So one of the things we thought should help alleviate that is, for anything committed to customers or that we have specific concerns around, we have basically a label associated with that: "planning priority", that's the label, not the long-term label. That's where we can use that now. My first evaluation right now is that we've been doing it for a few months and we've been fairly successful at it so far, but it's early days. I don't want to see that turn into 80% of the things that we're trying to deliver.
A: If that turns into 80%, then we are definitely moving to predictability over velocity. So the key aspect is getting feedback on that and seeing whether it's helping with the discussions, both with customers and with other expectations around it. This is just one person's opinion in the company, but I think, as a whole, a lot of our customers see the value in the overall story we have and the journey they're going to take, rather than the specific month a particular feature arrives in. So that's currently how we're thinking about it, but we're always open to feedback and adjustment, much like when the velocity-over-predictability decision was being made.
A: Do you want to talk us through your question?

D: It's more of a statement. I didn't see any other questions and I don't want to take away from anybody else, but maybe you should consider at least adding this to the handbook, because we seem to be having this conversation multiple times, and I want to make sure that every time we have it we add to it and have a point to refer to. There are multiple ways of measuring productivity in engineering, and they all have side effects, I think, on a spectrum.
D: I can see weight, which is a great suggestion from Olivier. And Olivier, I actually worked as a civil servant at some point, and I paid our vendor per weight: this many points is what you're paid for. So I like that. I think what I've seen over time is that you get inflation of the weights; people just assign more points to stuff. It's a relative measure, and you end up spending a lot of time assigning weights.
D: The nice thing about that is that it's something we actually want to encourage. And then the last one is per release post item, which is something I want to start measuring, but the consequences of that are maybe obvious as well. Maybe not; I don't want to use the word obvious, but you get new features over improving existing ones. For example, at Google, launching something gets you a promotion.
D: You basically cannot get promoted without launching something, so you get people launching, like, Allo and, I think, Duo or something, and all kinds of new chat clients, instead of improving Hangouts. It's a simplified story, and I'm sure reality is much more complex, but you basically get people launching stuff over improving stuff, which is maybe not something we want; we want to increase maturity at the same time. So I think there are trade-offs.
D: And I think, of all the trade-offs, splitting things up a bit too much aligns with our iteration value, so it's kind of a side effect we actually want, because we think it will reduce coordination costs and help us work asynchronously. But I think we've talked about this a lot and never wrote it down. So maybe that is the thing to write down. Olivier, feel free to chime in.
C: Actually, this reflection comes from a bit of a retrospective around throughput, and I really found throughput very valuable for breaking things down, and I think it has improved the engineers' iteration a bit. But I think it should not be the ultimate goal, because at some point we reach a limit below which we cannot go.
A: Yeah, and another key aspect, just on the MR side of it: gaming the system with more MRs is totally allowed here. One of the questions I've got in my mind is whether we should be counting automation associated with that. We have seen cases where we've actually automated things away, and because of that the work hasn't shown up in our metrics over time. That's an example where we're not effectively counting it, or at least should be thinking about it.
D: You don't need to apologize. I think one of the hardest things in software is measuring productivity, so we're never going to solve this; we're just trying to get better at it. And I think what we're trying to do is measure where people are getting stuck. Like, hey, if there are not enough database reviewers, what metric would show that? What would go down? That could also be an alternative.
A: One thing we've been encouraging feedback on from the engineering managers as of three months ago: we do have the mean time to merge metric, which I keep an average of and try to keep an eye on. One aspect we haven't been as focused on is trying to get that down in particular, because we're encouraging people to open their MRs as early as possible. So the question would be, if you have somebody who's new, you basically have a learning curve associated with things.
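(For reference: a minimal sketch of how a mean time to merge figure like the one mentioned above could be computed. The MR records and field names are hypothetical, not GitLab's actual implementation.)

```python
# Minimal sketch of a mean-time-to-merge calculation like the metric
# discussed above. MR data and field names are hypothetical.
from datetime import datetime

def mean_time_to_merge_days(mrs):
    """Average days between an MR being opened and being merged."""
    durations = [
        (mr["merged_at"] - mr["opened_at"]).total_seconds() / 86400
        for mr in mrs
        if mr.get("merged_at") is not None  # ignore MRs still open
    ]
    return sum(durations) / len(durations) if durations else 0.0

# Opening MRs earlier, as encouraged above, lengthens these spans,
# which is why simply pushing this number down can mislead.
mrs = [
    {"opened_at": datetime(2019, 9, 2), "merged_at": datetime(2019, 9, 5)},
    {"opened_at": datetime(2019, 9, 3), "merged_at": datetime(2019, 9, 10)},
]
print(mean_time_to_merge_days(mrs))  # 5.0
```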
F: Obviously it's very early on, but we have the first chart in the app, which is meant to be basically the time to merge: how many merge requests took how many days. It's not a histogram yet, it's a bar chart, so it's kind of difficult to read. But we've also actually worked on breaking down at least some of the events and aggregating those, so you have, for example, the time from first commit to first comment.
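(A minimal sketch of the "how many merge requests took how many days" view described here, bucketed the way a histogram would be; the durations are hypothetical sample data, not output from the app F describes.)

```python
# Minimal sketch of bucketing merge requests by days-to-merge, the
# histogram-style view described above. Durations are hypothetical.
from collections import Counter

def days_to_merge_histogram(durations_in_days):
    """Count how many merge requests took each whole number of days."""
    return Counter(durations_in_days)

durations = [0, 1, 1, 2, 2, 2, 5, 9]  # days each sample MR stayed open
for days, count in sorted(days_to_merge_histogram(durations).items()):
    print(f"{days:>2} days: {'#' * count}")
```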
A: We can basically think in terms of that, particularly around the definition of done; I would definitely encourage that aspect of it. One other aspect that I've been looking at is actually how many comments: I did a little bit of a brief analysis last week of how many comments were being made. I didn't necessarily come to any summary of that.
A: And if there were another place where one would be gaming the system: as a reviewer, you should be thorough and you should make sure that our definition of done is being met, but if you're starting to get into the more esoteric nature of code development and the thought process, should we be gaming the system there? I'd basically say no, that's something that's not appropriate here. The other aspect is just keeping iteration in mind from that perspective.
A: I'm trying to think of a really horrible analogy and I'm failing at it. I think both are important. It's kind of like when somebody asks me, should I do A or B; oftentimes my answer is yes, we need to do both. So one of the things that we do have focus on, and I have it in the OKRs, is around maintainers.
A: We obviously need to balance that, because if you're spending all your time reviewing, then there's some productivity loss associated with it. But the other thing we're doing is looking at teams rather than individuals. So if you had the extreme version of that, one individual who's reviewing 100 commits while everybody else is really super effective, then maybe that's time well spent. Though even in that situation I would encourage us to balance it out, because if that one person leaves, then all of a sudden your reviewer is gone, and guess what, now you've got to reformulate anyway. So I think there's a healthy balance that we need to have between these two. Do I have a ratio? No, I think each team kind of has to figure that out for themselves. But I kind of view it as, like, do I drink water or eat food? You need to do both in a given day; I can't live without one or the other.