From YouTube: GitLab 14.4 Kickoff - Enablement:Database
Description
Kickoff for the Database Group for the GitLab 14.4 release
Planning issue: https://gitlab.com/gitlab-org/database-team/team-tasks/-/issues/193
A: Hello everyone, this is Fabian, group manager for product, Enablement, with the database kickoff video for 14.4, GitLab's next release. This is a very exciting recording, because I'm also joined by Jannis, who, as you know, creates better kickoff videos than me, and who is officially joining us as the Senior Product Manager for Database and Memory. So going forward he's going to create these videos again, and today we'll walk through where we are with the database team.
A: The first thing to note is that GitLab is focusing right now on the availability and stability of GitLab.com, and so we are addressing so-called infradev issues. These are created as part of us trying to understand what keeps GitLab.com stable and what improvements we need to make. Right now, as we speak, there are none that the database group has to address, but they can pop up.
A: We are essentially taking all of the CI tables and moving them to a different database. As part of that, many of the tools that the database group has built, to support migrations for example, or to handle how we treat the database, are broken, because now we have to deal with more than one database. This is something the database group is actively working on to support the sharding team. There are a few examples.
A
For
example,
the
fixing
the
broken
database
helpers
fixing
the
migration
helpers
to
support
many
databases,
ensuring
that
our
partitioning
code
works
with
more
than
one
database.
These
are
really
important
things
that
we
need
to
put
in
place
before
we
can
actually
migrate
to
a
world
where
the
ci
tables
are
living
on
a
separate
database,
and
we
can
support
that
on
gitlab.com.
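To illustrate the kind of helper involved, here is a minimal sketch, with entirely hypothetical names that are not GitLab's actual API, of a migration runner that now has to target several databases instead of one:

```python
# Hypothetical sketch: running one migration against several databases.
# The connection objects and statement format are illustrative only,
# not GitLab's actual migration helpers.

def run_on_all_databases(connections, statements):
    """Apply each SQL statement to every configured database connection.

    `connections` maps a database name (e.g. "main", "ci") to an object
    with an `execute(sql)` method; returns how many statements ran where.
    """
    applied = {}
    for name, conn in connections.items():
        for sql in statements:
            conn.execute(sql)
        applied[name] = len(statements)
    return applied


class FakeConnection:
    """In-memory stand-in for a database connection, for demonstration."""
    def __init__(self):
        self.log = []

    def execute(self, sql):
        self.log.append(sql)


connections = {"main": FakeConnection(), "ci": FakeConnection()}
result = run_on_all_databases(
    connections, ["ALTER TABLE builds ADD COLUMN note text"]
)
```

The point of the sketch is only that every helper which used to assume a single connection now needs to iterate over all configured databases.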
A: There are a few follow-up steps that the database team has to do, and this is exactly what's going to happen in 14.4: for example, dropping the triggers through which the old integer columns are still being updated, and then also removing what is no longer needed, the old integer columns. You could argue this is something that could easily be done with some downtime, but as this is relevant for GitLab.com, we had to implement a process that actually allows us to do this with minimal interruption to GitLab.com and no downtime. That means the process takes longer, and there are specific steps that need to be executed in order.
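As a hedged sketch of why the ordering matters, here is what the cleanup after an integer-to-bigint primary key conversion might look like; the table and trigger names are made up and this is not GitLab's actual migration code:

```python
# Illustrative sketch of the ordered cleanup after an integer-to-bigint
# primary key conversion. Table, trigger, and column names are hypothetical.

def bigint_cleanup_steps(table, old_column="id_old"):
    """Return the cleanup statements in the order they must run.

    The sync trigger has to be dropped before the old integer column is
    removed; otherwise the trigger would reference a missing column on
    the next write.
    """
    return [
        f"DROP TRIGGER IF EXISTS sync_{table}_{old_column} ON {table}",
        f"ALTER TABLE {table} DROP COLUMN {old_column}",
    ]


steps = bigint_cleanup_steps("ci_builds")
```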
A
Those
steps
are
understood
a
lot
better
by
janis
than
me,
but
there
are
still
a
few
things
to
do
essentially
swap
swap
columns
for
three
more
tables
and
then
finalize
all
of
the
other
steps,
but
we're
very
confident
that
we
can
get
that
done
in
14.4
and
then-
and
this
is
interesting-
and
here
janice-
I'm
really
excited
to
have
a
quick
chat
for
the
benefit
of
our
viewership.
A: I think something that you will spend a lot of time on is understanding what we are going to do, and why we are going to do it in that specific order. Craig, the engineering manager for the team, has kindly put in a number of epics here that we can review. Maybe, Jannis, you can give me your first impression of what these are about and why we need to care about them.
B: Yes, of course. Those are four different directions, four different things that we want to do at some point. First of all, whatever we do, we want the size of all tables to be below 100 gigabytes. Having huge tables, at the one-terabyte mark, makes everything slower, and it also makes our lives much more difficult, and riskier, when migrating, solving issues, and making updates. So this is the first one.
B: This epic is about trying to reduce all tables below 100 gigabytes. That can be done either by partitioning them or by other tricks, like removing some columns, taking JSON information out of the database, splitting tables, and so on.
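To make the partitioning idea concrete, here is a minimal generic sketch, not GitLab's tooling, that generates standard PostgreSQL declarative range-partition DDL by month for a hypothetical table:

```python
# Generic sketch: generate monthly range partitions for a large table.
# The table name is hypothetical; the statements are standard PostgreSQL
# declarative partitioning DDL, not GitLab's actual partitioning code.
from datetime import date

def monthly_partition_ddl(table, start, months):
    """Return CREATE TABLE statements for `months` monthly partitions."""
    statements = []
    year, month = start.year, start.month
    for _ in range(months):
        next_year, next_month = (year + 1, 1) if month == 12 else (year, month + 1)
        lo = date(year, month, 1)
        hi = date(next_year, next_month, 1)
        statements.append(
            f"CREATE TABLE {table}_{lo:%Y%m} PARTITION OF {table} "
            f"FOR VALUES FROM ('{lo}') TO ('{hi}')"
        )
        year, month = next_year, next_month
    return statements


ddl = monthly_partition_ddl("ci_builds_metadata", date(2021, 10, 1), 3)
```

Each partition stays small and can be indexed, vacuumed, or detached on its own, which is exactly what keeps individual chunks well under the size target.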
A: I think these are some of the strategies that we can employ to control the growth of the database, and also the overall size of the tables, as you said. I think that's really valuable for GitLab.com specifically, but also for our self-managed customers, because we can ensure that things keep working well as our customers grow.
B: Even a table that was, say, one terabyte will now be in smaller chunks below 100 gigabytes. So even that one will be smaller, and of course it will allow all instances to grow. The same is true for moving columns or data outside of the database, or otherwise reducing the database.
A: Sounds great, thank you. Then maybe we can spend a little bit of time talking quickly about the automated database testing, because I think that's also something that may be quite relevant in the longer term for what we would like to offer at GitLab, even though that is something we still have to discuss.
B: Yeah, we have to think about it, for sure. We are at step three of the maturity of this feature, and we are already using it; it has been very useful for us. At the moment it is tested and used by the GitLab engineering team: whenever someone makes an update that is database related, we can now test it against a real production clone, which is amazing for us.
B: That means that whenever we make any change, we are able to test it against a database at the size of the GitLab.com production database, not a test database. So we can see problems early: find either performance issues or real issues, like some types of data that may sometimes break a feature. That has proven very, very useful for the GitLab team, and we want to continue building on that going forward. At the moment we are testing migrations, which are our database updates.
B: We want to extend it to be able to test more things. And maybe at some point, and this is open for discussion, something we are discussing and brainstorming about internally, we could even start thinking about how we could roll such a feature out and make it available to other instances and other engineering teams.
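As a toy illustration of the kind of check that testing against a production-sized clone enables, not GitLab's actual pipeline, one might flag migration statements whose measured runtime on the clone exceeds some threshold; the timing data and threshold below are made up:

```python
# Toy example: flag statements that ran too long against a
# production-sized clone. The timings and threshold are illustrative.

def slow_statements(timings, threshold_seconds=60.0):
    """Return (statement, seconds) pairs over the threshold, slowest first."""
    offenders = [(sql, t) for sql, t in timings.items() if t > threshold_seconds]
    return sorted(offenders, key=lambda pair: pair[1], reverse=True)


timings = {
    "ALTER TABLE notes ADD COLUMN flag boolean": 0.4,
    "CREATE INDEX CONCURRENTLY idx_notes_flag ON notes (flag)": 310.0,
    "UPDATE notes SET flag = false": 1800.0,
}
report = slow_statements(timings)
```

The value is that these numbers come from a clone at real production scale, so a statement that looks instant on a small test database gets caught before it ever reaches GitLab.com.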
A: So if, in the future, this becomes an issue again for some tables, which we also hope to avoid in the first place, then it is more efficient to have tooling available for other teams to do this, and for them to be confident in being able to do it, rather than the database team having to do it for them. That's a pattern that I think we try to establish in many areas of GitLab: you teach others how to help themselves.
B: And this is also the process of the team sitting down and writing up whatever we learned about this. So it's writing the tools, and also having a summary for everyone of what happened here, how we can do it, what the tools are, how you can self-serve, and also how you can do it again if we have a similar problem in the future.
B: The last bullet, in my opinion, is the result of all this effort and the battle-testing of this new approach. In order to solve this very hard problem with zero downtime, we had to build a new framework for running background migrations. And what are background migrations? In a huge production system like GitLab.com, and that's also true for all the large GitLab instances out there, we never do anything big live.
B: You want to only do updates that take no more than a minute, or a few minutes, because you don't want to block or affect production. So when we have huge updates, we do background updates, where a lot of background processes make the update in parallel while GitLab is running.
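A minimal sketch of the batching idea behind this, with hypothetical names rather than GitLab's actual batched background migration framework:

```python
# Minimal sketch of batched background processing: split a large id range
# into small batches that can each be processed in well under a minute,
# in parallel with normal traffic. Names and sizes are illustrative.

def batch_ranges(min_id, max_id, batch_size):
    """Yield inclusive (start, end) id ranges covering [min_id, max_id]."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield (start, end)
        start = end + 1


# A worker would pick up one range at a time, e.g.
# UPDATE builds SET ... WHERE id BETWEEN start AND end, then pause
# briefly so production traffic is never blocked for long.
ranges = list(batch_ranges(1, 10, 4))
```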
B: So, in order to solve this problem, we built a new background migration framework that we battle-tested with the primary key conversions. And the idea here, because it has advantages over everything we have at the moment, is to bring it to general availability and make this the framework for everything.
B
For
all
background
migrations
in
github,
that
will
mean
that
all
migrations
in
github
at
some
point
after
we
release
this,
we'll
be
able
to
run
in
the
background
unattended
and
and
with
all
the
conf,
giving
the
confidence
to
us
and
all
instances
that
they
will
finish
at
some
point,
which
is
very
important.
A: Yeah, super, I think that's a great overview; I'll stop sharing my screen. I'm really excited about finishing the primary key conversion work in 14.4, and I'm even more excited about what you're going to do with the team and the direction you're going to take. So thank you very much for listening. I hope you have a lovely rest of your day wherever you are, and we'll talk soon about the latest in database in the next release. Bye-bye.