From YouTube: Database Office Hours - 2020-01-16
A: Right, we don't have too much on the agenda today; we had this call last week, so maybe that's why. Maybe we can start with a question about add column with default. Recently we added a RuboCop cop that basically complains when you use it without allowing NULL values, and not allowing NULLs is also the default, so that's rather unfortunate.
The cop basically complains about most of these calls. The reason for it is that you can run into an issue when you disallow NULL values, because then you have to validate that there are no NULLs, and that means scanning the whole table once. For large tables that may actually be a problem, because scanning a really large table can take a long time.
Basically, while you do that you hold the lock, and that can be a problem. That all goes away with Postgres 11, which we want to support from GitLab 13, but until then the cop is mainly there to trigger a discussion about it, and in some cases it may still be acceptable. I think this is really only a problem for large tables.
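For context, a minimal sketch of the pattern being discussed, assuming a Rails-style migration running against Postgres older than 11; the table and column names are made up and this is not code from the call.

    class AddVerifiedToUsers < ActiveRecord::Migration[5.2]
      def up
        # On PostgreSQL < 11 this single statement rewrites the whole table
        # while holding an exclusive lock, which is the case the cop flags:
        #   add_column :users, :verified, :boolean, default: false, null: false

        # Splitting it up avoids the rewrite: adding a nullable column is a
        # metadata-only change, and the default only applies to new rows.
        add_column :users, :verified, :boolean
        change_column_default :users, :verified, false
        # Backfilling existing rows and adding NOT NULL would follow as
        # separate steps; validating NOT NULL still scans the whole table.
      end

      def down
        remove_column :users, :verified
      end
    end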
B: Yeah, so in the recent releases we had several cases where we tried to deploy to production and run migrations, and some of the migrations basically failed with a statement timeout. The main reason was that a migration changing the database definition, like changing a table or adding a new column or a foreign key, couldn't acquire a lock in time to actually execute the statement. And this is not the only problem.
In some cases we are not able to remove a table or drop a column, depending on the table. For example, on a high-traffic table like projects it's really unlikely that we can actually add a new column or remove a foreign key. That's where the idea came from: to make our migrations more robust whenever we execute them.
We loop several times with different timeout configurations, and we are hoping that by playing with the lock timeout, the amount of time we are waiting for the lock, at some point we will succeed. Also, because we are only waiting for the lock for a really short period of time, we are not going to affect the overall performance of the application. And I actually have a question about this: how do the lock timeout, the time we allow a statement to wait for a lock, and the statement timeout relate?
A: I think the lock timeout should be lower than the statement timeout. The reason is that the statement timeout is the time after which a query gets cancelled, counted from the very beginning: you submit the query, and when it takes longer than the statement timeout, say 15 seconds, it gets cancelled. That doesn't depend on any state of the query, so it can be waiting on a lock for that amount of time, or it can be scanning a lot of data, and then it just gets cancelled.
The lock timeout is basically a subset of that: only if the query doesn't start executing, because it's still waiting for a lock for this period of time, does it get cancelled after the lock timeout. That waiting is always part of the total length of the query, so it's obviously always shorter than the total time for the query. A lock timeout that is greater than the statement timeout therefore doesn't make sense, and having it shorter makes a lot of sense.
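A small illustration of the relationship described above, assuming a Postgres session managed through Active Record; the values are examples, not the real production settings.

    # statement_timeout caps the total runtime of a statement, measured from
    # when it is submitted; lock_timeout only caps the time spent waiting for
    # a lock, so it only makes sense when it is the smaller of the two.
    conn = ActiveRecord::Base.connection
    conn.execute("SET statement_timeout = '15s'")
    conn.execute("SET lock_timeout = '5s'")
    # A DDL statement now fails fast if it cannot acquire its lock within
    # 5 seconds, and is cancelled outright once it has run for 15 seconds.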
B: We should also calculate it, you know. Our statement timeout is currently 15 seconds, as far as I know, so the lock timeout should be less, and we also have to account for the fact that the statement itself needs time to execute. That leaves us, I don't know, like 50 percent of the time we can wait for the lock. Or do we have any statistics about how long a DDL statement takes? Not really. That's a good question.
A: So I think in the context of implementing that retry mechanic, you don't have to worry about the statement timeout at all, because if it runs into the statement timeout today, that's the same: it works the same way with the retry, just that we limit the amount of time it can spend in this waiting-for-a-lock state, right?
A: I'm very curious about how these numbers are going to play out. That you now explicitly define how long we wait initially for the lock timeout, and then we increase it and also increase the sleep times, I think is a very good approach, because it's so explicit: you're explicitly defining what happens in each step. Yeah.
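A minimal sketch of the retry idea being described, not the actual helper: escalate the lock timeout and the sleep between attempts, and, as comes up later in the call, fall back to running without a lock timeout. All values and names are illustrative, and it assumes Rails 5.2+, where the Postgres lock error maps to ActiveRecord::LockWaitTimeout.

    LOCK_TIMEOUTS = %w[100ms 500ms 1s 5s].freeze   # per-attempt lock_timeout
    SLEEPS        = [1, 10, 60, 600].freeze        # seconds between attempts

    def with_lock_retries(connection)
      LOCK_TIMEOUTS.each_with_index do |timeout, i|
        begin
          connection.transaction do
            connection.execute("SET LOCAL lock_timeout = '#{timeout}'")
            yield
          end
          return
        rescue ActiveRecord::LockWaitTimeout
          sleep(SLEEPS[i])                          # back off and try again
        end
      end

      # Last resort: run without a lock_timeout, i.e. what happens today.
      connection.transaction do
        connection.execute("SET LOCAL lock_timeout = 0")
        yield
      end
    end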
B: Yeah, I was thinking of some kind of function to calculate it, but if you want to customize it, you first have to understand that function, and if you did something really special we would probably use different timings anyway if we reused this piece of code. Because you can actually use it in your application; nothing stops you from using it outside of the context of migrations. We have high-volume updates on tables, so we could easily have this kind of functionality there as well. But yeah.
A: And it could very well turn out in the future, when we have more experience with that, that we see, oh, an exponential back-off is something we typically want. We can still implement that then, but it gives us the ability to easily change those numbers today. I think it's really good, yeah.
B: That was just kind of my estimation when I tried to handle the case where we already have a high-traffic table with lots of reads and updates; that's why we also have really short lock timeouts at the beginning. And later on, you know what we do: an import usually runs in a transaction, and then it locks the projects table and maybe other tables related to it. So that's why we have the longer sleep times, like around 10 minutes.
A: So this is also something we can iterate on, see how it goes and then adjust those numbers. And as far as I could see, the fallback, when all those retries with a limited lock timeout have failed, is that we really run it without a lock timeout, right? That's the last step. Yes. That's also interesting, I think, for the delivery team in terms of deployments.
By doing this, we don't risk that the migration blows up because of it; we try to improve the situation by starting with low lock timeouts, which helps the stability of the site, and only when we exceed those retries do we do what we already do today: run it without any lock timeout. So it shouldn't blow up migrations because of that. It might take longer until the migration completes, though, obviously, because of those retries. Right, yeah.
B: We can easily get rid of it if it turns out not to be useful, but we can see how it goes with those two tables, and then, once we are confident enough that this actually helps, we could end up wrapping the whole migration with this piece of code, and it can be basically transparent to developers; they don't really have to know about it. Mm-hmm.
A: That would be really great to have. I think it would work fine for migrations that are running within the transaction scope, right, because you can easily retry that and it's consistent. But I wasn't so sure about how we could handle transactions that are, sorry, migrations that disable the transaction. For some of them, I mean those using the concurrent index helpers, for example, those can be retried as well, right? They check for existence first and then they create the index.
So that's not a problem, but maybe there are some where retrying is a problem: some of these migration helpers have multiple steps, and maybe one of them hits the lock timeout, and then you retry and actually start from a possibly inconsistent state. So I'm not sure how you would handle that. Yeah.
A: On the other hand, those are the migration helpers that we implemented, right? We could also review them and say, oh, this is the helper that has four steps, and one of them is prone to locking all the traffic out because it's an ALTER TABLE, and then we explicitly wrap that step in the retry helper. Then you can still use it the same way: you disable transactions, you use the concurrent helper, but the individual steps that are risky are wrapped in the retry.
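A sketch of the "wrap only the risky step" idea just described, assuming a migration-level with_lock_retries helper in the spirit of the sketch above and concurrent-index helpers like the ones mentioned in the call; table, column, and helper names are illustrative.

    class AddArchivedToIssues < ActiveRecord::Migration[5.2]
      disable_ddl_transaction!   # required for concurrent index creation

      def up
        # Safe to retry on its own: it checks for existence before creating.
        add_concurrent_index :issues, :closed_at

        # Only the step prone to blocking traffic is wrapped in the retries.
        with_lock_retries do
          add_column :issues, :archived, :boolean
        end
      end

      def down
        remove_column :issues, :archived
        remove_concurrent_index :issues, :closed_at
      end
    end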
C: Yeah, maybe too fancy in some cases, so yeah, it's a trade-off. Okay, and then I just added another item to the agenda: I'm not sure if you have documentation on background migrations. So far, for all migrations and database-related stuff, we say we do everything in CE as well, so everything that happens in EE also happens in CE. But there was a bit of a discussion about background migrations, because there might be a lot of EE-only code in the background migrations that is not needed in CE.
So I put out a proposal: if you have a background migration that does EE-only stuff, you implement a normal background migration for the CE code and build the EE features as a mixin. In the CE codebase there will be a perform method that does nothing, but in EE it's overridden and it will do the stuff that it needs to do for EE.
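A hedged sketch of the mixin approach being proposed, with made-up class names: the CE background migration carries the shared (possibly empty) perform, and EE prepends a module that overrides it.

    # CE codebase
    module Gitlab
      module BackgroundMigration
        class ExampleMigration
          def perform(start_id, stop_id)
            # CE work here; may be a no-op when the migration is EE-only
          end
        end
      end
    end

    # EE codebase
    module EE
      module Gitlab
        module BackgroundMigration
          module ExampleMigration
            def perform(start_id, stop_id)
              super   # run the CE part first
              # EE-only work here
            end
          end
        end
      end
    end

    Gitlab::BackgroundMigration::ExampleMigration.prepend(
      EE::Gitlab::BackgroundMigration::ExampleMigration
    )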
C: Yeah, we've seen that happen. It might be that you schedule a bunch of background migrations, they get queued in Sidekiq, and then at some point you install CE over the EE codebase. Sidekiq tries to start those jobs, but it can't load the class because it's no longer in the codebase, so it fails, and Sidekiq decides to try again and try again. Yeah, we have customers complaining about this, so...
C: It's like Hacker News; it's an alternative to Hacker News, but it does basically the same thing. They have somewhat different intentions, though: for some people Hacker News is really a showcase, a place to be fancy about all the things that happen in the tech world, and they want more... yeah, I don't know how to say it. I should read their terms, because they're not a big fan of Hacker News themselves.
A: We have it scheduled for the next milestone, so this is to move away from schema.rb to the SQL schema dump, structure.sql. I have played around with this a couple of times, and the issue I ran into, I don't think this is what you're referring to, right, is that we have different Postgres versions, their schema dump is different, and then there is no compatibility.
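For context, a minimal sketch of what the switch means in a plain Rails app; the config option is standard Rails, and everything about how GitLab wires it up is outside this snippet.

    # config/application.rb
    # :ruby writes db/schema.rb via the Active Record DSL;
    # :sql writes db/structure.sql using pg_dump, which captures features the
    # DSL cannot express (and is what makes the dump version-sensitive).
    config.active_record.schema_format = :sql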
B: Okay. I was playing with it a bit as well, because I was experimenting with table partitioning. structure.sql is more verbose than schema.rb, right, so the chances of merge conflicts are, I would say, a bit higher than when using schema.rb. And in some cases, when you use special Postgres features, I was using partitioning, running the migrations in the test environment also basically dumps the schema of the partitions created in the test environment.
B
So
probably
we
need
to
find
a
way
to
somehow
fill
that
piece
out.
So
the
mode
is
generate
a
consistent
structure
for
signified,
which
brings
me
to
our
other
question:
did
we
consider
actually
staying
with
schema
RB
and
maybe
look
for
a
gem
or
an
extension
that
enables
us
to
do
more
advanced
database
stuff,
but
still
staying
in
the
in
the
Ruby
world?.
A: I thought of structure.sql as a way to have a very universal kind of schema management, where you can do basically everything that Postgres supports. But yeah, I was also under the impression that it is very fiddly, because you may have different Postgres versions and different dump tools, and you mentioned the problem with the partitions. It's quite universal, but it may also be a bit fiddly.
Maybe we can find something else that we can use for general-purpose additions. I don't know if we can do that, but maybe we keep schema.rb and then, in addition to that, have an extra schema where we keep the more advanced features, so that we don't have to add gems for each and every feature we want to use, in terms of partitioning and all that. We may be able to do something on the side, in addition to schema.rb.
B: I was thinking about some kind of separation, maybe: keep using schema.rb, but on the side, when you want to do something more advanced that is not supported by Active Record, you provide your SQL and that would also be executed when you load the schema. I'm not sure how that would work. I will try to find gems that try to basically extend the current Active Record functionality in that corner.
A: I can just imagine that there are quite a few features we want to use, even simple ones like check constraints. For example, we discussed that for, I think, text columns: to limit the size of a text column you can use a check constraint. Active Record doesn't support that, so we can't use it at the moment. It would be nice if we could be more flexible on that end.
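A hedged sketch of the text-limit idea just mentioned: since Active Record had no check-constraint DSL at the time, a migration would fall back to raw SQL. Table, column, and constraint names are made up, and the NOT VALID / VALIDATE split is one way to avoid holding a long lock while existing rows are checked.

    class LimitNoteLength < ActiveRecord::Migration[5.2]
      def up
        # Add the constraint without validating existing rows (quick, short lock).
        execute <<~SQL
          ALTER TABLE notes
            ADD CONSTRAINT check_note_length CHECK (char_length(note) <= 10000)
            NOT VALID
        SQL

        # Validate separately: scans the table without blocking writes as long.
        execute "ALTER TABLE notes VALIDATE CONSTRAINT check_note_length"
      end

      def down
        execute "ALTER TABLE notes DROP CONSTRAINT check_note_length"
      end
    end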