From YouTube: GitLab 15.11 Kickoff - Enablement: Database
A: Hello everybody, my name is Roger, and this is the Database group's 15.11 kickoff planning. With me today is Alex, our engineering manager. Hello, Alex. Right now we are at 15.11, and our team is operating at full capacity, which is an improvement over the last two milestones.

A: We've got a few top priorities to share with folks today. I think the first two are must-do items that are critical to the continued survival of GitLab. First, we are seeing some big spikes in our CPU. We are worried that, without understanding the cause of these spikes, our database is basically going to reach its capacity and we will no longer be able to serve anything. The second one is with primary keys, which we also don't want to blow up. Alex, do you want to touch a little bit on why these are super important?
B: Yeah, both of these relate to our primary database and its ability to basically operate. The CPU spikes we're investigating are around CPU saturation on the primary node. The primary node is where all of our writes happen, so while we have a lot of replicas, and replica traffic seems to be mostly fine, the primary node has experienced a sharp increase in CPU utilization since January. At a high level, we're trying to investigate what the cause of that load is and, if we can't find all of the smoking guns (of which we think there are probably several), we have a few other levers that we're working on: updating our Postgres version and, hopefully, getting that node migrated to bigger hardware.

B: On the application side, as I said before, we're trying to investigate what queries could be causing this. Simultaneously, we also have investigation efforts farmed out to a number of teams around partitioning, to see if we can partition some of the big tables that cause additional load at a base level, just through autovacuum and other operations that are more intensive on large tables. So that's where things are at for the CPU spikes.
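As a rough illustration of the query-level investigation described above (not the team's actual tooling), one common way to surface CPU-heavy statements on a Postgres primary is to rank entries in the pg_stat_statements view by total execution time. The connection string, limit, and Postgres-13+ column names below are assumptions:

```python
# Sketch: rank statements by total execution time via pg_stat_statements.
# Assumes the pg_stat_statements extension is enabled and psycopg2 is installed;
# the DSN is a placeholder, not a real GitLab connection string.
import psycopg2

DSN = "postgresql://readonly@primary.example.internal:5432/gitlabhq_production"

# On Postgres 13+ the columns are total_exec_time / mean_exec_time;
# on 12 and earlier they were named total_time / mean_time.
QUERY = """
SELECT queryid,
       calls,
       total_exec_time AS total_ms,
       mean_exec_time  AS mean_ms,
       left(query, 120) AS query_sample
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
"""

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for queryid, calls, total_ms, mean_ms, sample in cur.fetchall():
        print(f"{queryid}: {calls} calls, {total_ms:,.0f} ms total, "
              f"{mean_ms:.2f} ms mean -> {sample}")
```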
B: On the primary keys: this is a repeat hit. We had this charted for us about a year and a half ago, and it's back again. We have some tables with 4-byte integer primary keys that need to become 8-byte integers, because we're going to run out of IDs if we don't convert them. The most concerning one is merge_request_metrics, which has actually now passed the 80% saturation level, so that's exciting. But all of these are in progress, and we're planning to have much of the mitigation done very soon.
B: Most of the tables have had background migrations synchronizing their 4-byte integer columns to new 8-byte integer columns, and most of those migrations are now done. What that means is that on GitLab.com we can start swapping to the new columns. The big hurdle right now is getting the new indexes in place: we have to add the indexes, primary keys, and foreign keys that point at the new columns so that we can swap them and then drop the old ones.

B: So that's been the big hurdle at the moment, getting those new keys added.
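For readers unfamiliar with the pattern, here is a heavily simplified sketch of the generic int4-to-int8 swap Alex is describing: add a bigint column, backfill it, build the replacement index concurrently, then swap and drop. The table, column, and index names are made up, and GitLab's real process goes through its migration helpers and batched background migrations rather than raw SQL like this:

```python
# Sketch of the "add bigint column, backfill, re-point keys, swap, drop" pattern
# for converting a 4-byte integer primary key to 8 bytes.
# Names are hypothetical; this is not GitLab's actual migration code.
import psycopg2

DSN = "postgresql://gitlab@localhost:5432/gitlabhq_production"

steps = [
    # 1. Add the new 8-byte column (a real migration also installs a trigger
    #    that keeps it in sync for newly written rows).
    "ALTER TABLE example_metrics "
    "ADD COLUMN id_convert_to_bigint bigint DEFAULT 0 NOT NULL;",
    # 2. Backfill existing rows (a real migration does this in batches).
    "UPDATE example_metrics SET id_convert_to_bigint = id "
    "WHERE id_convert_to_bigint = 0;",
    # 3. Build the replacement unique index without blocking writes.
    "CREATE UNIQUE INDEX CONCURRENTLY index_example_metrics_on_id_bigint "
    "ON example_metrics (id_convert_to_bigint);",
    # 4. Swap the columns and attach the new primary key in one transaction.
    """
    BEGIN;
    ALTER TABLE example_metrics DROP CONSTRAINT example_metrics_pkey;
    ALTER TABLE example_metrics RENAME COLUMN id TO id_old;
    ALTER TABLE example_metrics RENAME COLUMN id_convert_to_bigint TO id;
    ALTER TABLE example_metrics ADD CONSTRAINT example_metrics_pkey
        PRIMARY KEY USING INDEX index_example_metrics_on_id_bigint;
    COMMIT;
    """,
    # 5. Drop the old 4-byte column once nothing references it.
    "ALTER TABLE example_metrics DROP COLUMN id_old;",
]

conn = psycopg2.connect(DSN)
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction
with conn.cursor() as cur:
    for sql in steps:
        cur.execute(sql)
conn.close()
```

Foreign keys referencing the table would be re-pointed in the same way before the old column is dropped, which is why Alex describes the new indexes and keys as the main remaining hurdle.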
A: Yeah, thanks for that, Alex. That's a great explanation. Just to clarify for people here: the reason this primary keys work is a repeat hit is that we have known there are legacy tables using 4-byte integers, but the database team has been generally concerned with system stability and a whole bunch of other work, and so we just haven't migrated everything, because utilization has historically been low for some of these tables.
B: Yeah, we actually initially intended to set 50% saturation marks and get alerting on that, but due to a mix-up we were instead monitoring for much higher saturation. We've changed the monitoring so that it watches for 50% saturation and alerts us a lot sooner. So if any other primary keys hit that 50% saturation mark, we'll be able to get our planning in a lot sooner. Three of the four tables that are being migrated now are only between 50% and 60% saturated.
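For reference, the saturation figures being quoted here are simply the table's current maximum ID as a fraction of the 4-byte signed integer ceiling. The example value below is illustrative, not an actual GitLab.com figure:

```python
# Saturation of a 4-byte signed integer primary key: max(id) / (2^31 - 1).
INT4_MAX = 2**31 - 1  # 2,147,483,647

def int4_pk_saturation(current_max_id: int) -> float:
    """Return the fraction of the int4 key space already consumed."""
    return current_max_id / INT4_MAX

# A table whose IDs have reached roughly 1.72 billion is about 80% saturated,
# comparable to the merge_request_metrics situation described above.
print(f"{int4_pk_saturation(1_720_000_000):.0%}")  # -> 80%
```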
A: Oh, and then the other priority topic for us in 15.11 is to wrap up some of our batched background migration work. This is something we've been doing for some time, and it's been exciting because GitLab SaaS is currently running four migrations in parallel. Our goal is to roll this out to some of our self-managed users over the course of 15.11, so that they will be able to utilize it as they come to the required stop.
A: We're also particularly excited about it because the database team has a lot of things ongoing, and closing out this work stream, which is really close to completion, will allow us to focus better on the things in front of us. Lastly, we have some pre-16.0 cleanup items that we want to do, in part related to some of the migration framework as well. Alex, did you want to touch on this one briefly for folks?
B: We've removed all of the others; the only helper we have that still uses background migrations in the background is the partitioning helper. We'd like to migrate the partitioning helpers so that they use batched background migrations instead. Ideally, what that means is that we'll be able to actually remove the background migration framework code in 16.0, which would be really great, because it leads to a lot of overhead.
B: The other pre-16.0 cleanup we've been kicking around in the back of our heads is preparation for Postgres 14. We've set aside time and have folks on standby in case there are bugs related to the Postgres 14 upgrade, because we're planning to add support for Postgres 14 in 16.0.
B: And for our excited listeners: if you hear this and think, "Oh, I'm going to go to Postgres 14 right as I hit 16.0," just a heads up that Omnibus will not support installing Postgres 14 for you, probably until as late as 16.4. But if you're running an external database, Postgres 14 will be officially supported as of GitLab 16.
A: Very exciting, awesome. Then there are just a couple more items we wanted to touch on from before, just so people can follow up. We continue to focus on removing old migrations in 15. This is a long-running technical debt cleanup effort; the goal here is that we want to focus on high-priority items, and the more of these little things we get out of the way, the better our team can focus.
B: Yeah, on the subject of removing old migrations in 15: we'll actually end up removing a lot more migrations come 16, but it took us a while to get started with removing these migrations, because we do them in huge batches and it takes a long time to get through review. So we're hoping to pick up the pace a little bit there.
A: Cool. And then lastly, two other elements we have been working on that we're going to de-prioritize a little bit for this milestone, just to really focus on our top two concerns, are some of the database testing work and our CI partitioning support. Both of these areas are going to continue to be important for the database group in the long run, but for now our efforts are going to focus a little bit more on the acute and emergent situations in front of us.
B: Yeah, one note on the CI partitioning support: Simon is currently wrapping up what we hope is the last major helper that we're going to need to provide to that team. Once that helper is wrapped up, we see him moving more into a stable counterpart role for at least a little while, until we can get some of the other pressing issues out of the way.

B: And just for the CI folks: if you're watching this, please don't be alarmed about us de-prioritizing. De-prioritizing doesn't mean you won't still get priority support from us, so much as our active attention may not be on it.