From YouTube: 2021-10-07 Development Group Conversation Pre-video
Description
The pre-video for the Development Group Conversation on 2021-10-07
As always with our group conversations, we start with what's top of mind. Our focus, at least from my perspective, has largely been on the gitlab.com standup, which has turned into, I should say, a gitlab.com reliability and security standup. This has been our focus pretty much since the beginning of the fiscal year. In particular, the areas we've been focusing on are Gitaly Cluster resiliency, a new process called FCLs, which is Feature Change Locks, subtransaction incidents, as well as general incidents and the action items that follow from them, creating a culture of reliability and security, error budgets, anti-abuse API limits, and a new team addition.
We continue on with our infradev work, where we've seen pretty good progress. Q3 OKRs are also a topic that we focus on. Retention is always at the top; retention and hiring are there as well, and then we have several other items bulleted below that are also important and have been top of mind for me for the last eight weeks.
Headcount and hiring: we've increased our size by four people since the last group conversation eight weeks ago, so we're now at 279 strong. We do have a large number of positions open. In particular, we only have one recruiter and we really need a second one; that second recruiter basically started supporting us on September 27th, so we're hoping to see this number go down and get more in line. We have several managers who are in interim roles, which is exciting to see, as well as a number of lateral transfers.
One thing that you will see is that our hiring count has been fairly high as an overall number, because it also includes internal transitions as well as promotions where a person is moving into a management position.
Our MR rate is a little bit low and has been low for the past three months. In part I attribute this to additional time off for various team members, as well as to the engineering allocation and headcount resets: we have a number of people working in new areas, so by definition they will be a little less efficient in their MR rate, particularly if they're working on reliability issues, which may need additional effort. We're also rolling things into production a lot more deliberately with feature flag capability.
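As a rough illustration of what a feature-flag-gated rollout looks like, here is a minimal sketch; the flag name, rollout percentage, and the `isFeatureEnabled` helper are assumptions for illustration, not GitLab's actual implementation.

```typescript
// Minimal sketch of gating a new code path behind a feature flag so it can be
// rolled out (and rolled back) deliberately. The flag name, rollout percentage,
// and lookup logic are hypothetical, not GitLab's actual implementation.
const rolloutPercentage: Record<string, number> = { new_reliability_path: 10 };

function isFeatureEnabled(flag: string, actorId: string): boolean {
  const pct = rolloutPercentage[flag] ?? 0;
  // Hash the actor id so the same user consistently gets the same decision.
  const bucket = [...actorId].reduce((h, c) => (h * 31 + c.charCodeAt(0)) % 100, 0);
  return bucket < pct;
}

function handleRequest(userId: string): void {
  if (isFeatureEnabled('new_reliability_path', userId)) {
    // New behavior, enabled for a small percentage of users first.
  } else {
    // Existing behavior stays the default until the flag is fully rolled out.
  }
}
```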
Our open MR review time continues to be below the line, so that's encouraging, though we've got to keep tracking it. MR age is above the line; a couple of months ago, back in July, I started an initiative to bring it down by retiring old MRs. That seemed to help, but it has crawled back up, so we'll have to think about kicking off another initiative. You see a dip here in the middle of September; this is because a large number of MRs were opened and immediately closed within a short time span.
Associated with that, LCP continues to be strong, and we're excited to see it below the line. It was down in the one-second range; what happened is that what LCP measures varies depending on how the page changes. So, essentially, we moved positions on the page and LCP returned to around two seconds.
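As a rough illustration of why LCP shifts when the page layout changes, here is a minimal browser-side sketch using the standard PerformanceObserver API; the logging is just for illustration.

```typescript
// Minimal sketch: observing Largest Contentful Paint (LCP) in the browser.
// LCP reports the render time of the largest element painted so far, so moving
// or resizing content can change which element gets measured and thus the value.
const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Each entry is a new "largest" candidate as the page renders.
    console.log('LCP candidate at', entry.startTime, 'ms');
  }
});
observer.observe({ type: 'largest-contentful-paint', buffered: true });
```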
OKRs: we continue to make solid progress on these. One of the biggest areas where we've made progress is hitting our error budgets; we're pretty solid there. Areas that we need to work on are in Fulfillment and also in Workspaces, but those are the big highlights from the KRs that I want to mention.
A
This
is
because
we
discovered
that
we
could
lose
track
of
repos
and
that
they
weren't
being
consistent.
There
were
ways
to
recover
from
this,
but
they
were
all
manual
associated
with
our
our
work
and
we
needed
to
basically
make
it
so
that
customers
had
a
much
easier
way
of
doing
this,
so
we're
working
on
recovery
procedures
associated
with
that
we're
also
fixing
some
of
the
sync
issues,
in
particular
when
getaway
cluster
is
used
with
geo.
We're also looking at supporting incremental backups as an alternative to snapshots, though we'll also be working on recovering from snapshots, architectural improvements for increasing robustness, and startup health checks as well.
The Fulfillment, Growth, Anti-abuse, and Applied ML teams are all working on various initiatives here. Probably the biggest highlight is that the Fulfillment team is working on the reliability of their components, and they're also working on migrating to GCP, which is a huge effort and a huge uplift required for that team, and we're just glad to see progress there. In the area of Enablement, we've been focused a lot on DB-related activities, though there is one particular highlight we should mention: the GitLab Operator has hit general availability, so we're excited to see that result from a feature development perspective. Between that and the DB initiatives, it's been good to see the results there in Enablement.
And then one of the other aspects that we've been focused on is expanding the incident manager on-call. This is the on-call shift associated with when incidents happen, where there needs to be a person who helps coordinate issues and initiatives. We've expanded from just the infrastructure team to now also the development team: engineering managers as well as staff-plus engineers are considered for that rotation.
For this, we currently have security approvals turned on for the repos associated with the Sec section. Now we're working on turning it on for the overall GitLab product, to improve our product and also dogfood it.