From YouTube: Database Office Hours 2021-03-10
A
So, welcome to Database Office Hours for March 10th. I'm Yannis Roussos, and I'm going to share my screen so that we can go through the agenda all together. So, the first item.
B
So, unlike the primary database: I don't know how familiar people are with how Geo works, but when we stand up Geo, we have a secondary tracking database that we can write to, whereas the secondary's copy of the main database is read-only, because it's replicated from the primary database. So we're still using schema.rb to track changes to that tracking database. One thing we've noticed is that when we use text limits in migrations, those are not getting written to our schema.rb, which is concerning. At the moment, I'm not sure...
B
...if it's the end of the world, though it does seem like we probably should add that back. I don't know the history of our usage of text limits that well, so I'm not entirely sure if we ran into this before on the primary database, before we switched to the structure.sql file.
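For context, GitLab's "text limits" are Postgres CHECK constraints on text columns, and schema.rb's Ruby DSL could not express check constraints at the time, while structure.sql (a plain SQL dump) preserves them. A minimal sketch of the DDL such a text-limit migration produces; the helper, table, and column names here are illustrative, not GitLab's actual code:

```ruby
# Sketch (not GitLab's actual helper) of the DDL behind a text limit:
# a CHECK constraint on char_length, added NOT VALID so existing rows
# are not scanned at ADD time. Because this is a constraint rather than
# a column type, the Ruby-DSL schema.rb silently drops it, while the
# SQL dump in structure.sql keeps it.
def text_limit_ddl(table, column, limit)
  name = "check_#{table}_#{column}_length"
  "ALTER TABLE #{table} ADD CONSTRAINT #{name} " \
  "CHECK (char_length(#{column}) <= #{limit}) NOT VALID"
end

puts text_limit_ddl("project_registry", "last_sync_failure", 255)
```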
A
So, yeah, we switched to using constraints after we switched to using structure.sql, so in the case of the GitLab project we did not have to go back and fix those.
A
So, a first question, and I added it as a comment here: how large are the tables in the Geo tracking database? Can you help us with that?
B
So, it depends on the installation. For our large customers that are using it for disaster recovery, the tables for replicables (so projects, LFS objects, or anything else that we replicate and track) have an entry for every item that we track in the tracking database, so some of these could be quite large, depending on how many projects or how many objects you're replicating. That includes merge request diffs, so that table is certainly quite large.
B
I think potentially, in some cases... I guess it's hard for me to say for sure, because I don't have the numbers in front of me. We've been trying to collect those numbers, actually; one of our biggest pieces of work in progress right now is trying to actually get data on how many things people store on their secondaries. So, yeah.
C
Oh yeah, I think that, particularly when we switched to structure.sql: we deprecated MySQL and we wanted to get the most out of Postgres, and structure.sql gives us that. So, since the Geo DB and the other two DBs live in GitLab, new developments, including text limits, are going to make life hard if we don't switch them to structure.sql too. So, in short, I think that's the only long-term acceptable solution, because with text limits right now it's actually just a performance issue, as Yannis said.
C
Other things can come up too, and also all the Danger reviews and, let's say, the RuboCop rules are written for the main GitLab database.
C
And otherwise, in the current issue we are just talking about a simple performance issue, but we may have other, more severe things. Like, if you suddenly decide to add to the structure a trigger or a function or something like a constraint, which is quite critical, in that case the whole topic becomes a much more serious discussion.
D
Sorry, there is another concern. Say you added a text limit to a particular column in a previous milestone; that will be applied to customers who are upgrading GitLab. But a few releases later there is a new customer with a completely new installation, and since the limit never made it into schema.rb, that fresh install won't have it.
A
It allows us to add the triggers, functions, views and other Postgres-specific objects that we cannot add using schema.rb. My question about the size was about how pressing this is: should we switch immediately, or can we wait? If you were to tell me that all the tables have, like, ten thousand records each, we could say: okay, we could use varchars, for example, instead of text limits, and think about it in the future.
A
But if we have tables that are multi-million records, or even above that, there is a...
A
There are serious concerns that make us want to use, for example, check constraints instead of varchars, and I have added a link in my comment in the issue about that: if you ever want to change a limit, it's way, way easier using text with a check constraint. But either way, even if we don't switch to structure.sql, we want to be consistent; we want those limits to be on all instances, because the limits protect us from mistakes that we may have in the models, but also protect us from various inconsistencies that we can see on instances.
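The "way easier to change" point can be made concrete. In Postgres, shrinking a varchar(n) is an ALTER COLUMN TYPE, which holds an ACCESS EXCLUSIVE lock while every row is re-checked (growing the limit is cheap; shrinking is not), whereas with text plus a CHECK constraint you drop and re-add the constraint NOT VALID and validate it afterwards under a much weaker lock. A sketch of the two DDL paths, with an illustrative table and column, not GitLab's actual migrations:

```ruby
# varchar path: shrinking the limit changes the column type; Postgres
# takes an ACCESS EXCLUSIVE lock and re-checks every row before it
# finishes, blocking all access to a large table.
VARCHAR_CHANGE = <<~SQL
  ALTER TABLE notes ALTER COLUMN title TYPE varchar(100);
SQL

# text + CHECK path: swap constraints. ADD ... NOT VALID is instant,
# and VALIDATE CONSTRAINT scans the table under a weaker lock that
# still allows reads and writes.
CHECK_CHANGE = <<~SQL
  ALTER TABLE notes DROP CONSTRAINT check_notes_title_length;
  ALTER TABLE notes ADD CONSTRAINT check_notes_title_length
    CHECK (char_length(title) <= 100) NOT VALID;
  ALTER TABLE notes VALIDATE CONSTRAINT check_notes_title_length;
SQL

puts VARCHAR_CHANGE, CHECK_CHANGE
```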
A
So far, on regular GitLab instances, we have seen a lot of things not going as we expect them. Sometimes people may take a backup and restore that backup, or roll back, or do whatever, and something is not as expected, so having constraints there, a unique index or whatever, also protects us from those cases. So I think that all constraints should be there for new instances, and, yeah, the best way forward would be to use structure.sql.
A
Andreas did it, so there is an issue and the work by Andreas where we moved to structure.sql. It's not super complicated.
A
You have to do some things, but there is already the work and the MRs that we did in order to roll that out for the GitLab project, so most probably there is a roadmap to follow on how to do that.
C
I just checked the usage ping for GitLab.com (particularly, on GitLab.com we don't have Geo enabled), but I didn't find any usage data.
B
So, yep, we're aware of this gap. We have some knowledge about how many nodes people have, and because we know who some of those nodes belong to, we can get an idea of how many rows they've got for some of those objects, based on usage for other objects. So if I look at, say, merge request diff objects... I don't know what we have counts for.
B
The other thing I was going to say is: I think that, going back to strings, varchars, that ship has potentially already sailed, because with the introduction of the Geo self-service framework, we set it all up with text limits for everything. And so we have just introduced a whole bunch of fairly large models that use text fields and are supposed to be setting limits, but because we didn't catch it, they're already out there.
B
Yep. All right, well, I think that answers most of the questions I've got about this. Thanks, everybody, for your input and your help on this; the Geo team very much appreciates it.
A
Okay, so the next comment is by Andreas. He's not in the call, so I don't know if you have checked the database testing rollout and the issue for the database maintainers. Please check it out if you have not; even if you're not a maintainer, there is a lot of interesting information there. And if you are maintainers, please try to use the database testing and leave some feedback. Andreas has added a lot of very nice information there on how it works.
A
Let me show you one of the latest migrations where we used it, so that you can see how it is right now. So, for example, here is one of my migrations, where we roll out the partitioning of web hook logs. You can see that in this MR we can find a couple of migrations that are unrelated, because they have not yet been deployed to GitLab.com, so they are not in the clone. Nowadays, with the latest update, we don't include statistics for those.
A
You can see the duration; this one was a big one, and we still expect it to take a while to be deployed to production today, because it has to schedule 11,000 jobs. This is the migration that adds the jobs that sync the two tables and bring all the data from the standard web hook logs table to the partitions. And you can see that now we also have details per query: if I go right here, you can see how many times each query was called, and the total time.
A
The max time, the mean time, and the rows returned. So this can give us confidence that we can ship this migration: we can check that on average a query takes 0.1 milliseconds and the max time is 16 milliseconds, so that's great. In this other case, for example, we can see that some queries here needed 30 seconds, so that's a red flag that we may have to think about changing something. So this is where we find the max.
A
You know, when we do the run, it aborts, so this is a warning that this migration may fail, and it's very helpful for us to know that; maybe we have to change something or do something else. And you can also see the rest here, where there is the other migration that was adding the partitions and everything, which is also very nice, because it gives us confidence about what will happen in production. And the last thing is that, as a maintainer, you can always click here and go to the testing pipeline, check the details, check the full migrations running, and check the full logs. I'm not going to click it, because this video is recorded, but you can click there. Do you have any questions? Do you want any additional details here?
A
So, yeah, with the latest update you won't see statistics for the other migrations; you will only see statistics for the migrations that are included in this MR. Those are all the pending migrations in this branch, because this branch was rebased with master, and there are also other migrations that have not run, or at least are not there in the clone; the clone is maybe an hour back or something like that. But nowadays you won't see additional statistics for those, and you can see that here.
A
Any other question or comment? Okay. Or you can see another one here, where we were able to catch a problem with a trigger in production that is not in our structure.sql; we had forgotten about it, so we had to manually remove it. So I think that we already make good use of this framework for testing migrations against production data. And Steve had a question there, whether it is safe to run all migrations: regular, post-deployment, and background migrations. Yes, it is, as Andreas also...
A
...wrote there: please use it with everything. Just know that background migrations will not be run yet; we have not yet implemented a way. So if you schedule jobs, for example in this example where we schedule 10,000 jobs, we don't run them.
A
So if you wanted to test the specific jobs, this framework won't test them yet. It is in our plans for future iterations to maybe pick a couple of jobs from the Sidekiq queue and run them, so that we can get some statistics for the background jobs as well. But at the moment we will run everything except for the background jobs.
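The scheduling step that the framework does run can be pictured as slicing the table's ID space into fixed-size ranges and enqueuing one Sidekiq job per range; the helper below is an illustrative sketch of that pattern, not GitLab's actual scheduling helper:

```ruby
# Illustrative sketch: a post-deployment migration turns a large table
# into background-migration jobs by splitting the ID range into batches.
# The testing framework executes this enqueueing step, but (as noted
# above) does not yet execute the enqueued jobs themselves.
def id_ranges(min_id, max_id, batch_size)
  ranges = []
  (min_id..max_id).step(batch_size) do |start_id|
    ranges << [start_id, [start_id + batch_size - 1, max_id].min]
  end
  ranges
end

# 10,000 rows in batches of 2,500 -> 4 jobs to enqueue.
p id_ranges(1, 10_000, 2_500)
```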