From YouTube: [REC] Key Meeting - Development (Public Stream)
Description
No description was provided for this meeting.
A
Hi, my name is Christopher Lapolz, I'm the Vice President of Development, and this is the development key review agenda, or key review, I should say. A document should be posted in the invite, and I've included a number of FYIs up front. No video this time around, because I didn't have time, and we'll jump into questions starting at number four.
B
Awesome, Chris. I saw you note that the team has shifted to help with the performance work; just wanted to check in on the team and see how the reaction's been to that prioritization decision.
A
Yeah, so I just want to make sure there's a little bit of a comprehensive nature to the answer. You know, I think if you talk to most ICs, they're actually pretty excited about it. We gave some high-level guidance, which I think helps a lot with that. So that's been my general impression from that perspective.
A
We're actually doing some activation for a few activities, and one of the concerns I have is that we're leaning on a lot of our maintainers for that area, and I'm hoping that doesn't make people reluctant to actually become maintainers. So that's one concern I have; that's really more on the DB performance and the associated efforts there on LCP.
A
You know, something else that's been going on is that the team seems to really gravitate around the work. It feels like it's kind of built into the process now, and most of the feedback I've gotten is extremely positive in that area, because it feels like there's better customer impact associated with it.
B
I guess, just as a follow-up: some of these problems I saw that you noted, like performance, like how do you make the queries more efficient? Do they view them as hard engineering problems that they can go solve? You know, are some of these things exciting for the ICs as well?
A
Yeah, I think so. I think some would argue that it is, you know, hard to solve, so I think that's good. I think the one thing I would say is we're still at the stage where the answers are, I won't say immediately obvious, but they're definitely not intractable. I think when they get to the intractable state of, you know, "what do you do?"...
A
I think that's where we have to be very careful, because usually it involves much longer-range decisions. Sharding is probably the best example of that, from that perspective.
C
I think one example is the PG 11 query planning bug. That's a bug that engineers will brag about finding; it was really low-level, computer-science-y.
A
They're also famous, like war stories that people brag about five years from now: "you know, do you remember that one?" That kind of situation.
B
Awesome. Well, thanks for sharing that; it's always good to check in on the team. I think Sid's got the next one.
D
So, great job on the LCP performance. Should we add the other two of the three Core Web Vitals, First Input Delay and Cumulative Layout Shift?
A
Yeah, I think that's a good suggestion. I think we can look at that and see whether or not it's easy: one, whether it can be measured, and two, you know, what targets we want to associate with it.
D
Well, the targets are right there: 100 milliseconds and 0.1. And it might be that we're already doing great, and we don't want to go through adding this to the page and doing everything. So, first, a sample would be great; feel free to post something in the CEO channel. If we're already doing great on this, it doesn't make sense to add the measurement. I'm just curious. Okay, yeah.
A
Let me take an action to get you that data, probably in the next two weeks. We're a little bit slower right now, because we're going into Easter weekend, where a lot of...
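The targets Sid quotes match Google's published "good" thresholds for the Core Web Vitals. As a minimal sketch (not GitLab's actual measurement code; the sample values are made up), classifying sampled values against those thresholds is just a comparison:

```python
# Hypothetical sketch: check sampled Core Web Vitals against the "good"
# thresholds discussed in the meeting (FID <= 100 ms, CLS <= 0.1) plus
# Google's published LCP target of 2.5 s. Sample values are invented.

GOOD_THRESHOLDS = {
    "lcp_s": 2.5,   # Largest Contentful Paint, seconds
    "fid_ms": 100,  # First Input Delay, milliseconds
    "cls": 0.1,     # Cumulative Layout Shift, unitless
}

def is_good(metric: str, value: float) -> bool:
    """Return True when a sampled value meets the 'good' target."""
    return value <= GOOD_THRESHOLDS[metric]

# One hypothetical page-load sample:
sample = {"lcp_s": 2.1, "fid_ms": 80, "cls": 0.25}
results = {m: is_good(m, v) for m, v in sample.items()}
print(results)  # {'lcp_s': True, 'fid_ms': True, 'cls': False}
```

In practice these values would come from field data (for example, the browser's PerformanceObserver APIs); this only shows how the targets from the discussion would be applied.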
C
Sorry, I was on mute; I was trying to speak up on that one. I was going to say: I think there are pages of GitLab.com in particular, like search results and project listings, where we have to treat it as a search engine optimization problem, and really gearing them towards what the searchers are looking for makes sense. But deeper in the application, it's more about the kind of human experience of performance, and that's where UX research and such comes in.
A
Yeah, probably the big one to start with is, I think, there's been a week of no S1s related to the DB. So, let's be clear, that's number one. Number two is Verify DB: shifting load gave us roughly an eight-to-one reduction, because there are eight replicas. As I understand it, we've basically distributed that load from the primary to all the replicas, just for the Verify portion of the query. So I think that's probably the second one I would call out.
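The eight-to-one reduction described above is just the arithmetic of fanning read load out from one primary across eight replicas. A toy sketch (the load figure is hypothetical, not an actual GitLab.com number):

```python
# Toy model of the load shift described above: reads that previously all
# hit the primary are spread evenly across the read replicas.
# All numbers are hypothetical.

def per_node_load(reads_per_sec: float, n_replicas: int) -> float:
    """Per-replica read load once reads are distributed evenly."""
    return reads_per_sec / n_replicas

before = 8000.0                          # reads/sec previously on the primary
after = per_node_load(before, n_replicas=8)
print(after)           # 1000.0
print(before / after)  # 8.0 -> the roughly eight-to-one reduction
```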
A
The two others would be, obviously, the sharding work and then the CI abuse work as well, from that perspective. So it's given us some focus there. And then the third one, probably, is more just an attitude shift.
A
We've been doing a quick stand-up on the development side every morning, and one of the things we're reviewing in every stand-up is the SLA charts. Just reviewing those has got us in a better spot, where we're thinking about these things, particularly around Runner. The Runner numbers right now are not as great as we'd like, though the key aspect to understand there is that we've unified everything. Really, as we expand the runner fleet, one of the things we may see is starting to differentiate based on paying customers versus non-paying customers, those kinds of things. So I think it's kind of led to a richer discussion there.
C
Yeah, and I put a link in B to the stand-up agenda, if people want to see the progress there. And I know Sam has some great charts about rapid actions, like batch changes that affected whole sets of queries, that result in step-function improvements in database load and whatnot. Those are the biggest items.
C
I think that led to the immediate cessation of S1s. And then, down below, this is more about the little bits and pieces that aren't the big rocks in this project. You can see the orange line is the asymptoting of the backlog: we're hunting around for unknown unknowns, turning them into known unknowns, and that's where the orange line kind of planes off.
C
So I feel like the backlog is essentially complete for the other individual queries, and then you can see the purple is rewriting those things. So there's substantial progress every day, tens of queries being fixed, which again results in these, not step functions, but very significant cumulative increases in performance, and that translates into availability and other things.
D
Yeah, we have the absolute number of MRs, but it seems to me that only the narrow MR rate matters; we always want it corrected for the number of people on the team.
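The normalization Sid is asking for reduces to a simple division: merged MRs per engineer per period, rather than the absolute count. A sketch with made-up numbers (GitLab's actual metric definitions live in its handbook):

```python
# Sketch of an MR rate corrected for team size, as discussed above.
# The counts below are invented for illustration only.

def mr_rate(merged_mrs: int, engineers: int) -> float:
    """Merged MRs per engineer for the period (e.g. one month)."""
    return merged_mrs / engineers

# The absolute MR count can rise purely because the team grew,
# while the rate per engineer stays flat:
print(mr_rate(merged_mrs=900, engineers=100))   # 9.0
print(mr_rate(merged_mrs=990, engineers=110))   # 9.0, same rate, bigger team
```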
A
Yeah, I think the short answer on that one is: I'm comfortable removing that one. The one that you mentioned in eight I would like to keep, though, because it kind of shows the distribution between the two, and it's really good for me to understand the ratios, because contributions from outside of development do affect development in regards to the review times associated with them. So if we saw that ratio shifting, I'd want to be aware of it, from that perspective.
A
Yes, it gives a lot of information there, but it's kind of key, because, one, it shows how many members are contributing from outside; it shows how many are internal to the team; and then it shows the ratio between those two as well. I agree it's complex, but it's an easy way for me to quickly assess, okay, is the review ratio going to change, as an example.
C
I see. So, because I think I can get out of this: we are consolidating the analysts in engineering under one team, under Mek, and one of the things they're going to be driving is uniformity amongst all charts. All charts will go up and to the right; all charts will measure one thing, and one thing only; it'll be very low mental overhead going from chart to chart; and you'll see a refactoring of all this stuff to be simplified and more consumable.
D
Okay, that's great, but can we clean up this one before the next meeting and just have a chart with the non-dev team members?
C
So what we have planned for open-source contribution is two things: the percentage of MRs that are coming from open-source contributions, and then MR rate. Those are the two things. Okay.
A
Do you want it on the development page, Eric, or do you want it on the quality page?
A
Back up here; let's back up. Development page, development metrics: I'm the DRI for this. If what you're asking me for is something different than what this graph shows, that's fine. When I get asked the question of where my team spends its time, it's a lot around how many MRs they're producing, but there's also the fact that they review a lot of code as well. That's what this chart represents.
A
It's the count of the number of people who've been involved in a given month. We could definitely break those two out into separate charts, but I think we need both of those pieces of information. Now, if you're asking me for a separate chart to say how many contributions you're making, I'm totally happy to do that as well. But that's what I'm trying to figure out here.
D
I'm asking for two things. You can keep this graph, but I would like to change the header to something that better covers what we're seeing, because "contributions outside development" doesn't cover what this graph shows. So the header should be changed, and I'd like to see a graph of the percentage coming from the wider community, and Eric seems to be already working on that.
C
So my suggestion, and I think this covers your ask and simplifies things for Christopher: at the engineering level, my level, we have the two metrics I mentioned, percent coming from the outside community and MR rate. Those will also be on Quality's page, because Mek is driving those initiatives. And my suggestion for what we do here, for what's relevant to Christopher, is maybe the amount of time his people are spending reviewing those MRs versus authoring their own, or some kind of time split like that.
D
Thanks, that works. Nine: I think it's already answered by Eric, but can we make sure all charts go up and to the right? For example, invert the MRs-to-maintainer ratio and then set a goal of five percent or higher. Yep.
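One reading of the inversion being suggested, sketched with hypothetical numbers (the 5% goal is from the discussion; the counts and the exact metric definition are assumptions for illustration):

```python
# Hypothetical sketch: express maintainers as a percentage of monthly
# merged MRs, so that growing maintainer capacity trends "up and to the
# right" against a 5% goal line. Counts are invented.

GOAL_PCT = 5.0

def maintainer_ratio_pct(maintainers: int, merged_mrs: int) -> float:
    """Maintainers as a percentage of merged MRs for the period."""
    return 100.0 * maintainers / merged_mrs

ratio = maintainer_ratio_pct(maintainers=45, merged_mrs=1000)
print(ratio, ratio >= GOAL_PCT)  # 4.5 False -> below the 5% goal
```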
C
And rather than wait, since we're doing this generally, I'll just say: hey, Mek, can your team do this first?
A
This is not split among the people, though; this is actually the bulk rate of MRs. It's super intuitive to think about how many MRs, on average, a given maintainer does a month.
C
I think the two things we care about, and should measure separately, are: one, how much time maintainers are spending reviewing versus authoring, and two, what the wait time is for these MRs, so people aren't waiting forever because we don't have enough maintainers. That would force us to do both.
A
The wait time we already cover with review times, review time to merge, that's RTTM, so we already have that one covered. This one is specifically about the load on those teams, because you could still have a good RTTM but not be increasing the number of maintainers, from that perspective.
D
But I'm asking a lot about the metrics because there's not a lot of other things to talk about, except for the big stability one and the GitLab.com availability action, thanks for that, and all the rest is looking very good. Thanks, appreciate it.
B
Hey, Eric, I have a procedural question. We spent a lot of time, in depth, on quality with Mek's review, and with Christopher's review just now in development. How are you thinking about UX with Christie, and security with Jonathan? Procedurally, do you imagine there will be key meetings for those, or how are you thinking about that?
C
And eventually, eventually, incubation, when we have some metrics; we're one person right now, so that'll happen eventually, for sure.