From YouTube: GitLab 16.2 Kickoff - Database
A
Hello, everybody, my name is Roger, and with me today is Alex. Hi. We are here to record the database group 16.2 planning issue. So in 16.2 we are at full capacity. Everybody is present, except for, I think, one last-minute leave, but overall everybody's here. However, we do want to signal for folks that about half the team's capacity is generally taken up by unplanned work as well as stable counterpart support, so our overall velocity on these priority items is relatively slower than some other teams of a comparable size.
A
So with that being said, we've got mostly the same themes as last time, but we have made some good progress and updates. So, Alex, do you want to jump into our partitioning strategies and how our new hardware upgrade has changed some of this work?
B
Yeah, so one of the great news pieces since last month is we completed the... or, sorry, not we, but the infrastructure database... the database reliability group completed the hardware upgrades for our Postgres instances, including our primaries, and we've seen substantially reduced CPU saturation.
B
So that's really great, but in addition to that, we've also managed to focus some of our work on the partitioning strategies. Going forward, we're identifying a set of tables by vacuum time in order to help teams prioritize which tables are the highest priorities for partitioning. So we should see that coming. Teams have expressed some frustration about not really knowing whether their tables are a high priority or a low priority, even just based on table size. Sometimes it's a good indicator.
B
Sometimes it's not. And then the other great part is the auto_explain plans, which I'm actually going to cover more of in our automated query analysis. But we're excited to use this analysis tool to aid our partitioning efforts, and we can cover more of that below. Yeah.
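The prioritization B describes can be sketched roughly like this. The table names and vacuum timings below are invented for illustration; on a real Postgres instance the figures would come from vacuum logs or views such as `pg_stat_user_tables`:

```python
# Rank candidate tables for partitioning by how long a vacuum cycle takes
# on each one. The data here is illustrative, not measured from GitLab.

def rank_by_vacuum_time(vacuum_seconds):
    """Return table names ordered longest-vacuuming first."""
    return sorted(vacuum_seconds, key=vacuum_seconds.get, reverse=True)

# Hypothetical per-table vacuum durations, in seconds:
vacuum_seconds = {
    "ci_builds": 5400,
    "notes": 3600,
    "events": 2700,
    "small_lookup": 12,
}

priorities = rank_by_vacuum_time(vacuum_seconds)
print(priorities)  # ['ci_builds', 'notes', 'events', 'small_lookup']
```

The point of ranking on vacuum time rather than raw table size is exactly the one B makes: size alone is sometimes a good indicator of partitioning priority and sometimes not.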
A
And just generally for folks here, the goal continues to be reducing table sizes, because even though we've bought more CPU headroom, which is to say we've relieved one resource constraint, overall we do see and expect to have additional constraints with our primary database. So that's where some of these vacuum times come in, right?
B
A
And then next up we've got primary keys to bigints. I know this is something that we were quite concerned about a few milestones back, and it's been ongoing. I think right now we are at about 80% complete. Alex, is there more you want to add on this one? Yeah.
B
You know, we actually should see the alert for this clear, probably in the next few days, because the table swap got merged. But it's important to remember that, since the events table isn't done migrating and the column for notes isn't dropped yet, we are still at risk for this table until those two items are complete, because we won't be able to add... it'll error out when it tries to add events, and we could see an increased error rate if we hit this.
B
It's probably not catastrophic if we hit the overflow, but it would be bad, so we are still at risk for this until that's done. But we are very hopeful that we'll have events done in 16.2. Oh.
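The overflow B is worried about is the signed 32-bit ceiling on the old integer `id` columns, which is what the bigint conversion removes. A minimal sketch of the headroom math; the current-ID figure is invented for illustration:

```python
# A signed 32-bit (int4) primary key tops out at 2**31 - 1.
INT4_MAX = 2**31 - 1  # 2147483647

def ids_remaining(current_max_id):
    """How many inserts are left before an int4 id column overflows."""
    return INT4_MAX - current_max_id

# Hypothetical current max id on a busy table:
print(ids_remaining(2_000_000_000))  # 147483647 ids of headroom left
```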
A
B
You know, we've had a few different approaches for automated database testing over the last several months, and we finally have found an approach that we believe will be able to run on every branch without having substantial impacts to pipeline timing or overall resource consumption, which is great news. It means we're going to be able to use that not just to identify new queries added, but we're also excited about the analysis options for identifying things like: if a table gets partitioned, does the explain plan show that it is iterating multiple partitions, right? And because we added auto_explain as a part of this, we get generic explain plans for every query, so we'll be able to see when partitioning could... like, we can just add a partition and see if that impacts our explain plans. And that's going to be really, really helpful for teams to be able to automatically identify queries that are impacted by partitioning.
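The kind of check B describes could look roughly like this, run against the JSON form of a plan (`EXPLAIN (FORMAT JSON)` in Postgres). The sample plan below is hand-written for illustration, not captured from GitLab's tooling:

```python
# Walk a Postgres EXPLAIN (FORMAT JSON) plan tree and collect the relations
# scanned, to detect when a query ends up iterating multiple partitions.

def partitions_scanned(node):
    """Return the set of relation names scanned beneath this plan node."""
    relations = set()
    if "Relation Name" in node:
        relations.add(node["Relation Name"])
    for child in node.get("Plans", []):
        relations |= partitions_scanned(child)
    return relations

# Hand-written sample: an Append node over scans of two monthly partitions.
sample_plan = {
    "Node Type": "Append",
    "Plans": [
        {"Node Type": "Seq Scan", "Relation Name": "events_p2023_06"},
        {"Node Type": "Seq Scan", "Relation Name": "events_p2023_07"},
    ],
}

scanned = partitions_scanned(sample_plan)
print(len(scanned))  # 2 -> this query iterates multiple partitions
```

A query whose plan touches many partitions after a table is partitioned is exactly the kind of regression this analysis would surface automatically.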
A
B
Or... of the original... I think there are some... We record something like 500 million or 400 million queries, and they have managed to de-duplicate those queries down to around 20,000, which we save as something like a four- or five-megabyte bundle. Which means we can artifact it on the default branch, and we'll be able to do comparisons against that to...
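The de-duplication B mentions works by collapsing queries that differ only in their literal values down to a single fingerprint, which is how hundreds of millions of recorded queries can shrink to roughly 20,000. A toy sketch; real tooling does proper SQL parsing rather than this regex normalization:

```python
import re

def fingerprint(sql):
    """Collapse a query to a shape that ignores its literal values."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> placeholder
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> placeholder
    return re.sub(r"\s+", " ", sql).strip().lower()

queries = [
    "SELECT * FROM users WHERE id = 1",
    "SELECT * FROM users WHERE id = 42",
    "SELECT * FROM users WHERE id = 99",
]

unique = {fingerprint(q) for q in queries}
print(len(unique))  # 3 recorded queries -> 1 fingerprint
```

The deduplicated bundle stays small enough (a few megabytes) to store as a CI artifact on the default branch and diff against on every merge request.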
B
Try and ship the 16 squash in a single milestone. We're hoping that we can get this done basically in a single milestone this time, because we've made many improvements over the last several months. For safe background processing, we've probably shipped the changes... we now have the helpers that we can use to add them to Sidekiq workers, and in 16.2 we'd probably like to select a worker that we can implement these checks on and monitor results, and then we'll make them widely available and document them.
B
And then the last thing is migrations running in milestone order, not version order. So our initial aim here is just to get the milestone support added to our migrations, and we're also aiming to get the ordering done. We think this will actually really improve the experience for self-managed customers that jump multiple upgrades: if they hit an error during a migration, it'll mean they're still compatible with a GitLab version.
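The ordering change B describes can be sketched like this. The migration names, timestamps, and milestone labels are invented for illustration; GitLab's real implementation lives in its migration tooling:

```python
# Order migrations by milestone first, then by timestamp version within a
# milestone, instead of by raw timestamp alone. A customer jumping several
# upgrades then stops at boundaries that match a shipped GitLab version.

def milestone_order(migrations):
    """Sort migration records by (milestone, version)."""
    return sorted(migrations, key=lambda m: (m["milestone"], m["version"]))

migrations = [
    {"milestone": (16, 1), "version": 20230520120000, "name": "add_index"},
    # A 16.0 migration with a later timestamp (e.g. a backport) would run
    # out of order under pure version sorting:
    {"milestone": (16, 0), "version": 20230610090000, "name": "backfill"},
    {"milestone": (16, 1), "version": 20230501080000, "name": "add_column"},
]

ordered = [m["name"] for m in milestone_order(migrations)]
print(ordered)  # ['backfill', 'add_column', 'add_index']
```

Under plain version order, `backfill` would run last despite belonging to 16.0; milestone-first ordering keeps a failed upgrade resumable at a state consistent with some released version.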
A
Well, thank you for talking through all of that. That's definitely a lot of good stuff, and it's really good progress the past month. Thank you again, everybody. This is the database group 16.2 planning. We'll see you guys around.