From YouTube: GitLab 16.3 Kickoff - Database
A
Hello everybody, my name is Roger, and this is the Database group's 16.3 planning. With me today is Alex. For 16.3 the Database group is operating at full capacity as usual. We'll have our upcoming absences listed on our weekly status update. We also need to call out that about half the team's capacity is typically consumed by unplanned work, so we've got some priorities, but progress is generally going to be slow and steady across the board.
A
At a super high level, our focus is on initiatives that affect the availability and reliability of gitlab.com and self-managed instances. There are two particular areas we've been working through for the last little while. The first is our medium-term database scalability strategy. The second, a more recent focus for us, is the potential risk of lightweight lock contention. This happens when, in times of high traffic, certain queries span a lot of tables and a lot of indexes, and that impacts our availability overall. We believe that our query testing efforts here will help us navigate this balance. Alex?
B
Yeah, primarily we're seeing this lock contention issue happen on replicas right now, and not so much on the primary. Well, it's less impactful on the primary, which has been good. One of the ways we may also be able to mitigate it is adding more replicas, so that's something we're looking at. There are ongoing discussions with the database reliability team about how we want to tackle this as a whole, but from our side, our efforts are focused on query testing.
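In PostgreSQL terms, this contention typically surfaces as backends waiting on lightweight locks such as LWLock:LockManager: once a query locks more relations and indexes than fit in the per-backend fast-path slots, lock acquisition falls back to the shared lock manager. As a minimal sketch of how one might observe it (an assumed approach, not the team's actual tooling):

```sql
-- Count sessions currently waiting on lightweight locks, grouped by the
-- specific LWLock, to see which shared structure is contended.
SELECT wait_event, count(*) AS waiting_backends
FROM pg_stat_activity
WHERE wait_event_type = 'LWLock'
GROUP BY wait_event
ORDER BY waiting_backends DESC;
```

Run periodically against a busy replica, a spike of LockManager waiters here would line up with the availability impact described above.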
A
Cool, all right. Let's get into some of our top priorities. Some of these are recurring themes. First up is partitioning strategies and table ownership, which is part of our team's broader effort to reduce table sizes across all of GitLab.
B
Yeah, right now we're mainly focused on helping there. There are two big areas. The first is working with teams directly to figure out how they want to partition their tables and then getting the ball rolling with them so that partitioning actually happens, and that's what we've been doing by identifying tables with long vacuum times.
B
We're using vacuum time as an indicator for our partitioning efforts because vacuum has a big impact on our CPU saturation, much more than actually storing the data. So that's the direction we're going there.
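As a rough illustration of that indicator (an assumed query, not the team's actual report), per-table autovacuum activity is visible in pg_stat_user_tables, and the biggest, churn-heaviest tables are the natural partitioning candidates:

```sql
-- Tables with heavy dead-tuple churn and frequent autovacuums are where
-- vacuum work, and its CPU cost, concentrates.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       n_dead_tup,
       autovacuum_count,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```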
B
On the other side, we're also working on query reporting, which we'll talk more about in a later section because it's a wider effort. We're working on leveraging query reporting to observe how partitioning changes queries, and to make sure that partitioning won't have an outsized impact on those queries.
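The kind of before-and-after check this enables can be pictured with a small hypothetical example (table and partition names invented): after range-partitioning, EXPLAIN should show the planner pruning down to a single partition instead of scanning the whole table.

```sql
-- Hypothetical table, partitioned by month on created_at.
CREATE TABLE web_events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE web_events_2023_07 PARTITION OF web_events
    FOR VALUES FROM ('2023-07-01') TO ('2023-08-01');
CREATE TABLE web_events_2023_08 PARTITION OF web_events
    FOR VALUES FROM ('2023-08-01') TO ('2023-09-01');

-- With partition pruning, this plan should touch only web_events_2023_07.
EXPLAIN SELECT count(*)
FROM web_events
WHERE created_at >= '2023-07-10' AND created_at < '2023-07-11';
```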
A
Cool, yeah. The next item here, which has also been around for a little bit, is updating our primary keys to bigint. A lot of this was waiting on backfills and migrations over multiple milestones, but I think we're finally reaching a point where we can start tidying some of these areas for GitLab SaaS. An important thing to note here is that our efforts to mitigate primary key saturation for GitLab SaaS have spanned, I think, four or five milestones now end to end, but they do not address self-managed.
A
Self-managed instances do not have the same magnitude of risk, because they simply do not run as large in terms of traffic, users, and volume. But we do want to mitigate this for them as well, both so that SaaS and self-managed are using the same database structures, and because at some point they will hit this problem too.
B
Yeah, we're very, very close to mitigating gitlab.com. The biggest holdup here has been the events table, which was referencing notes and had this issue, and we're very close to having that swapped. All the backfilling is done. I think the final swap went up in a merge request for gitlab.com yesterday or earlier this week, and after that's done we should be able to clean up the column, and gitlab.com should no longer be at risk of any primary keys over 50% saturation.
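The general shape of that conversion, sketched very loosely here with invented names (GitLab's real implementation uses batched background migrations and Rails helpers rather than hand-written SQL), is: add a shadow bigint column, keep it in sync, backfill, then swap.

```sql
-- 1. Add the shadow column.
ALTER TABLE events ADD COLUMN target_id_bigint bigint;

-- 2. Keep new writes in sync while the backfill runs.
CREATE FUNCTION events_sync_bigint() RETURNS trigger AS $$
BEGIN
    NEW.target_id_bigint := NEW.target_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_sync_bigint_trigger
    BEFORE INSERT OR UPDATE ON events
    FOR EACH ROW EXECUTE FUNCTION events_sync_bigint();

-- 3. Backfill existing rows in small batches (one batch shown).
UPDATE events SET target_id_bigint = target_id
WHERE id BETWEEN 1 AND 10000;

-- 4. Once backfilled, swap the columns in a single transaction.
BEGIN;
ALTER TABLE events RENAME COLUMN target_id TO target_id_old;
ALTER TABLE events RENAME COLUMN target_id_bigint TO target_id;
COMMIT;
```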
B
Another thing going on at the same time is that we've been working with the Scalability group to improve monitoring and alerting. One of the reasons events took a little bit longer is that we got a later start on it: the column wasn't an "id" column, and so it wasn't automatically identified by our monitoring as being at risk. We've addressed that with some new queries to drive the monitoring, and we're just working on getting that merged in.
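A sketch of the kind of catalog query that can drive that monitoring (an assumed example, not GitLab's actual alerting query): find every integer column fed by a sequence, whatever the column is named, and report how close the sequence is to the int4 maximum of 2,147,483,647.

```sql
SELECT t.relname AS table_name,
       a.attname AS column_name,
       s.relname AS sequence_name,
       pg_sequence_last_value(s.oid) AS last_value,
       round(100.0 * pg_sequence_last_value(s.oid) / 2147483647, 2)
           AS pct_of_int4_max
FROM pg_class s
JOIN pg_depend d    ON d.objid = s.oid AND d.deptype IN ('a', 'i')
JOIN pg_class t     ON t.oid = d.refobjid
JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = d.refobjsubid
WHERE s.relkind = 'S'                      -- sequences only
  AND a.atttypid = 'integer'::regtype      -- int4 columns only
ORDER BY pct_of_int4_max DESC NULLS LAST;
```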
B
We have that assigned to a DRI from their team, and we're hoping to get it over the finish line early in 16.3.
A
Cool, that's super exciting, to hopefully close this out, at least for SaaS, and mitigate a big risk item. So next up is our automated database testing. This is something I know the team has been really excited about and has been working through in a lot of different directions. Previously we worked through some technical architecture designs to plan out how this might look, and I know some of the query analysis here also touches on what we talked about a little earlier. Alex, did you want to dive a bit deeper here?
B
Yeah, there are some really exciting things here. The team changed direction a little bit last milestone in terms of how we were going about collecting these queries, and they found a way that's much more efficient, and really pretty brilliant, and I'm excited to see it move forward. We've had a little trouble getting it merged because of a variety of pipeline failures and conflicts from other changes within the pipeline structures, but we're really hoping to get this first phase of the automated query analysis merged either at the very end of this last milestone or very early in the next one. Then hopefully we'll be able to move on to phase two, which is actually getting out a list of the queries added by a merge request that we put up. That's really going to enable us to do some neat stuff: not just extracting things to make reviews easier, but also running analysis on those queries to make sure they meet our guidelines, and helping inform people which guidelines they should be using to review.
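As a rough stand-in for what collecting a merge request's queries can look like (the team's actual pipeline tooling is assumed to differ), the pg_stat_statements extension records every normalized statement executed against a test database:

```sql
-- Requires pg_stat_statements in shared_preload_libraries.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Reset counters, then run the merge request's test suite
-- against this database...
SELECT pg_stat_statements_reset();

-- ...and afterwards list what it executed, slowest first.
SELECT calls,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
```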
A
Yeah, and this has been, I think, really important for us, because we've been trying for a long time to make sure our database queries perform well, but we don't really know what queries exist or are being called each time we make changes, and it's hard to optimize when we don't know what the impact is. So this is a key part of visibility.
A
So, lastly, we've got some smaller focus items here in terms of background processing and migrations. I know we've been working through some of these for a few milestones as well, but I think some of them are getting closer. Did you want to touch on those too?
B
Yeah, the DRI for safe background processing had some leave, and we decided not to reassign it while he had limited availability, so that has been on hold while he pursued a variety of other priorities, and it hasn't had a lot of progress in 16.2. In 16.3 we're really hoping to get one of these checks implemented so that we can see what results it produces. I know Prabha's been in touch with a couple of different groups about adding them.
B
So we're really excited to see that move forward. For migration versions running in milestone order rather than version order, there's a spike up for that now. Kras put together the first iteration of it and is getting feedback from the team before he goes on leave for a few weeks and hands it over to John, who will take the first issues out of that spike and move them over the finish line.
B
Yeah, this effectively helps customers who are doing multi-milestone upgrades. Right now their migrations run in version order, which doesn't consider milestone at all; it just considers, literally, the timestamp from when the migration file was created by a developer on their laptop.
B
So it could be that a six-month-old branch gets merged with old timestamps, and those timestamps line up with, say, 15.8, yet it gets merged into 16.3. We're trying to avoid that from now on. To a greater extent, I should say, it's more often adjacent milestones, like 16.1 and 16.2, or 16.2 and 16.3, getting a little bit mingled together.
B
But this change will make it so that every customer, whether you're doing downtime migrations or no-downtime upgrades, will run migrations in the same order as everybody else. We're hoping that will prevent not only a whole bunch of bugs, but also, if there are problems, it leaves customers in a state where they're still compatible with a single version, which is not really the case right now. Yeah.
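To make the ordering problem concrete with a hypothetical example (versions invented): Rails records each applied migration in schema_migrations, keyed by the timestamp the file was generated, so sorting by version alone ignores the milestone the migration actually shipped in.

```sql
-- Plain version order:
SELECT version FROM schema_migrations ORDER BY version;
-- 20230110120000  <- file generated during 15.8 on a long-lived branch
-- 20230605090000  <- file generated, merged, and shipped during 16.1
-- If the 15.8-era file only merges during 16.3, instances that upgraded
-- earlier ran these in a different order than a fresh upgrade would.
-- Ordering by (milestone, version) gives every instance one sequence.
```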