From YouTube: GitLab 16.0 Kickoff - Database
A
Hello, everybody, my name is Roger. This is the Database group 16.0 planning. Welcome! With me today is Alex. Hello, Alex.

B
Hello.

A
So we're just going to go through quickly some of the items on our team's list for 16.0. We are operating at full capacity, and in general our focus is to stay supportive of initiatives that affect the availability and reliability of GitLab.com and self-managed instances.
A
We have a multi-pronged database scaling strategy, highlighted here, that we are working through in the coming months, and we'll be dedicating one full-time-equivalent engineer to help explore data source solutions for AI-related initiatives as well. So, let's jump into some of our top priorities.
First up here is partitioning strategies and table ownership. Some of this was already ongoing with the CI database, but it also came out of some of the incident investigation we recently saw, with CPU spikes on our database primary. Alex?
B
Yeah, so I would actually broaden this to say table size reduction. This is a theme we've talked about in the past, but it's become a lot more pointed recently, especially with some of our CPU saturation issues over the last couple of months.
We see partitioning as a good way to reduce table sizes, in addition to a few other methods, like reducing tables or decomposing them into multiple tables, that sort of thing. But partitioning is one of the main tools we have to take some of our very large tables and turn them into smaller tables, and we need to do that because the overhead caused by large tables, especially around vacuum pressure, but also in the day-to-day operations of the database, is very high. Getting those tables smaller is, we think, going to help.
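
For readers who want to see what this looks like in practice, here is a minimal, hypothetical sketch of PostgreSQL declarative range partitioning in a Rails-style migration. The table and column names are invented for illustration, and this is not GitLab's actual partitioning tooling:

```ruby
# Hypothetical example: convert a large append-only table into a
# range-partitioned table so that vacuum and day-to-day maintenance
# operate on many small partitions instead of one huge relation.
class PartitionAuditEvents < ActiveRecord::Migration[7.0]
  def up
    execute <<~SQL
      -- Parent table; rows are routed to a partition by created_at.
      CREATE TABLE audit_events_partitioned (
        id         bigserial,
        created_at timestamptz NOT NULL,
        payload    jsonb,
        -- The partition key must be part of the primary key.
        PRIMARY KEY (id, created_at)
      ) PARTITION BY RANGE (created_at);

      -- One partition per month: each stays small enough to vacuum quickly.
      CREATE TABLE audit_events_2023_05
        PARTITION OF audit_events_partitioned
        FOR VALUES FROM ('2023-05-01') TO ('2023-06-01');
    SQL
  end

  def down
    execute "DROP TABLE audit_events_partitioned"
  end
end
```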
A
Yeah, so this is definitely something we're working closely with a lot of different individual groups to push forward as well. The degree to which we are prioritizing this is also heavily informed by our long-term scaling plans for GitLab as a company, so it's definitely top of mind for us as we work through 16.0 and beyond.
A
One additional area we want to highlight is that we also want to mitigate primary key overflows for all of GitLab generally, so we're working with the Scalability group to improve our monitoring and alerting, to ensure that we are migrating additional tables in time.
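
As a rough illustration of the monitoring side (a sketch only; GitLab's real alerting with the Scalability group is not shown here), a script can ask pg_sequences how much of the signed 32-bit range each sequence has consumed. Note that sequences backing columns already migrated to bigint are not actually at risk:

```ruby
# Hypothetical check: flag sequences approaching the int4 maximum
# (2^31 - 1). Only columns still stored as int4 can overflow; bigint
# columns are safe even with high sequence values.
require "active_record"

ActiveRecord::Base.establish_connection(ENV["DATABASE_URL"])

INT4_MAX = 2**31 - 1

rows = ActiveRecord::Base.connection.select_all(<<~SQL)
  SELECT schemaname, sequencename, last_value
  FROM pg_sequences          -- available in PostgreSQL 10+
  WHERE last_value IS NOT NULL
  ORDER BY last_value DESC
  LIMIT 20
SQL

rows.each do |row|
  pct_used = 100.0 * row["last_value"] / INT4_MAX
  warn "#{row['sequencename']}: #{pct_used.round(1)}% of int4 range used" if pct_used > 70
end
```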
And then one other point here, specific to 16.0, is that there are a few dependencies that we just need to update and make sure of. Alex, do you want to maybe help our listeners understand a bit more about our database version?
B
Yeah. Oh, I probably should have changed the name of this, but with 16.0 we are removing compatibility with Postgres 12; we are officially dropping support for Postgres 12. This was announced when we released 15.0, so hopefully folks will have had time to upgrade to Postgres 13 or later by now. As a part of that, we have a check in GitLab that asks whether you're on at least... well, right now it says at least version 12, but that'll become at least 13. It mainly prints out a warning, so that if someone is still on the old version, they're not immediately out of luck. While it may still work in the short term, with degraded performance, we no longer support Postgres 12 starting in 16.0, and it could break at any time. At the same time, we're going to be adding nightly tests for Postgres 15.
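
A minimal sketch of the style of check Alex describes; the method name and exact behavior here are assumptions, not GitLab's real implementation:

```ruby
# Hypothetical startup check: warn, rather than abort, when the
# connected PostgreSQL server is older than the minimum we support.
MINIMUM_POSTGRES_VERSION = 13

def warn_on_old_postgres(connection)
  # server_version_num is e.g. "130011" for PostgreSQL 13.11;
  # integer division by 10_000 yields the major version.
  major = connection.select_value("SHOW server_version_num").to_i / 10_000
  return if major >= MINIMUM_POSTGRES_VERSION

  warn "PostgreSQL #{major} is no longer supported (minimum: " \
       "#{MINIMUM_POSTGRES_VERSION}). GitLab may run with degraded " \
       "performance and could break at any time."
end
```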
A
Yeah, so I think, all in all, this is largely in line with our existing database versioning and upgrade cadence, so as long as our users are following our recommended installation guidelines, they should see no impact from these changes, even though we're upgrading systems across the board.
B
Yep. Postgres 13 has been the default version installed with Omnibus since GitLab 15, and it's been an optional install since GitLab 14, so most folks will have already been automatically upgraded to it.
A
Cool. And then, a few focused themes and areas for what we're working on here. Removing old migrations is something that we've had ongoing for some time. I think we've made some decent progress, but it is complex and slow work, just due to the nature of the changes. As we approach 16.0, we're going to extend and continue this work stream.
A
The overall goal here is to remove a lot of legacy code that causes maintenance burden across our system, and to simplify and accelerate our ability to innovate, without having to keep a lot of these legacy systems around for compatibility reasons. And then the other big theme here, one that we are continuing to drive and are excited about, is automated database testing, which we've been working through for some time, having recently developed an architectural blueprint. I think we are working to finalize that even just this week.
B
Yeah, so we don't have a lot of things hard set in stone, just because that blueprint is still ongoing, but what we're hoping to get in 16.0 is some of the next steps happening. A couple of releases ago we prototyped a query interceptor, and we're hoping to get that interceptor installed in at least one of our projects, and then we'll be able to start hooking it up to the backend systems that are being designed as a part of the architectural blueprint.
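
One plausible shape for such an interceptor in a Rails application, sketched under assumptions (this is not the actual prototype, and QueryCollector is invented for the example): subscribe to Active Record's SQL instrumentation and hand every statement to a collector for later analysis.

```ruby
require "active_support/notifications"

# Hypothetical collector; a real backend might forward queries to the
# testing systems designed in the architectural blueprint.
module QueryCollector
  def self.record(sql:, duration_ms:)
    puts format("%8.1fms  %s", duration_ms, sql)
  end
end

# Capture every SQL statement Active Record issues, with its duration.
ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, start, finish, _id, payload|
  next if payload[:name] == "SCHEMA" # skip Rails' own schema reflection queries

  QueryCollector.record(sql: payload[:sql], duration_ms: (finish - start) * 1000.0)
end
```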
B
That's been ongoing, so I don't know whether we'll actually start implementing the blueprint in 16.0 or not, but we're hoping to make some big strides there. As the blueprint notes, we are a GitLab-focused use case for now, but hopefully we're building something that maybe everyone will be able to use someday; we'll see. Mainly, though, this is to help us do better at preventing scenarios like the CPU saturation issues that we encountered earlier this year, by helping us catch some of these poorly performing queries even more aggressively than we already do through our database review process. The other thing we're hoping is that it'll alleviate some of the pressure on individual authors as they're writing their changes, because the database review process can be very intensive, and so, instead of folks having to self-identify, we'll hopefully be able to automate away a lot of the heavy lifting there.
A
And database review is definitely something that consumes a lot of time across GitLab, both from people with database expertise and from individual MR authors. So the core principle in prioritizing this work stream is that it's important that we review our code changes for database impacts, and we want to make these mandatory reviews as seamless, straightforward, and low-friction as we can. And then... sorry, go ahead, Alex. No?
A
And
then,
lastly,
we
recently
wrapped
up
a
lot
of
work
around
background
background,
migrations
and
there's
just
a
few
General
themes
around
background
processing
that
we
want
to
just
make
sure
increase
system,
stability
and
predictability.
Alex.
Do
you
want
to
just
touch
on
a
couple
of
these
ones
that
you
recently
added.
B
Yeah, so we're broadening our scope just a little bit. We've made a lot of advancements around background processing as a part of the batched background migration effort, and we want to be able to take some of those advancements, especially around throttling during system peaks, to some of our more general Sidekiq workers. That way, those workers can take advantage of the mechanisms we've already built.
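
A hedged sketch of that throttling pattern for a generic Sidekiq worker; the DatabaseHealth signal and worker below are hypothetical, not the actual mechanisms built for batched background migrations:

```ruby
require "sidekiq"

# Hypothetical health signal; a real one might look at autovacuum
# activity, replication lag, or WAL generation rates.
module DatabaseHealth
  def self.under_pressure?
    false # placeholder for a real indicator
  end
end

class BulkCleanupWorker
  include Sidekiq::Worker

  DEFER_DELAY_SECONDS = 5 * 60

  def perform(batch_id)
    # Back off during system peaks instead of adding load to a
    # saturated database; retry the same batch later.
    if DatabaseHealth.under_pressure?
      self.class.perform_in(DEFER_DELAY_SECONDS, batch_id)
      return
    end

    process_batch(batch_id)
  end

  private

  def process_batch(batch_id)
    # placeholder for the real cleanup work
  end
end
```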
B
Along with that, we've made safe background processing a more general topic. We do still have some follow-ups and bugs to address with batched background migrations, but we've mainly closed out that effort, and we do want to make sure that background processing, as a general rule, is safe.