From YouTube: Database Office Hours - 2019-09-12
Description
No description was provided for this meeting.
C
One was the incident yesterday that we had with a column removal, and the RCA is also quite interesting to look at, because it was not that we shipped a migration that was just removing the column without ignoring it like you should. It was sort of a coincidental deploy of two rather conflicting changes: one was basically removing the column, and the other one was un-ignoring the column at the same time, and that sort of led to the incident yesterday. We're sort of discussing how we can avoid that.
C
One option is to also make the ignored-columns attribute a mandatory part of reviews, but on the other hand it's really hard to see, and I really don't want reviewers to, you know, have to catch those things, I think. Ultimately, what would have saved us is better integration testing, where we'd be able to see those two changes integrated and then test them, but that didn't cut it yesterday, at least. The standard procedure is also documented in the database documentation: when you want to remove a column, you basically first ignore the column.
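As a reminder, that two-release procedure looks roughly like the sketch below. This is not runnable outside a Rails app, and the model and column names are invented for illustration:

```ruby
# Release N: tell ActiveRecord to stop reading/writing the column, so the
# application keeps working whether or not the column still exists.
class Event < ApplicationRecord
  self.ignored_columns = %w[legacy_payload]
end

# Release N+1: only now actually drop the column (after which the
# ignored_columns entry can be removed again).
class RemoveLegacyPayloadFromEvents < ActiveRecord::Migration[5.2]
  def change
    remove_column :events, :legacy_payload, :jsonb
  end
end
```

The incident above happened because the column removal and the removal of the ignore rule landed in the same deploy instead of one release apart.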
C
All right, the second one was an MR that got in and was actually deployed on production, and that was really simply just adding a reference to an existing table; in this case it was the events table. It added an additional column and a reference to another table, and since we require indexes for foreign keys, we also require, when you use add_reference, that you also create the index at the same time. Unfortunately, this doesn't create the index concurrently.
C
That's right: when you create a new table, you're free to use add_reference. As soon as we work on existing tables, we'd rather use, what was it, add_concurrent_foreign_key, I guess. So there is a concurrent helper for adding the index and adding the foreign key, and it does the same thing, but just in a concurrent fashion.
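In a migration, that could look roughly like the sketch below, assuming GitLab's migration helpers (add_concurrent_index, add_concurrent_foreign_key). It is not runnable outside the Rails app, and the table and column names are illustrative, not taken from the actual MR:

```ruby
# Sketch: adding a reference to an existing table without blocking writes.
class AddBuildReferenceConcurrently < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  # CREATE INDEX CONCURRENTLY cannot run inside a transaction.
  disable_ddl_transaction!

  def up
    add_concurrent_index :ci_job_artifacts, :build_id
    add_concurrent_foreign_key :ci_job_artifacts, :ci_builds, column: :build_id
  end

  def down
    remove_foreign_key :ci_job_artifacts, column: :build_id
    remove_concurrent_index :ci_job_artifacts, :build_id
  end
end
```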
D
Yes, so I only have one question. I have been reviewing a lot of background migrations lately, and across all the reviews I always see different batch sizes and different delay intervals, like five minutes or eight minutes, and I was just wondering: do we have something like a magic formula to determine those, or are those just based on one's judgment? How do you determine those?
C
Yeah, no, I don't think we have a magic formula for that. It would be great to have one, but I think in the end it's always based on good judgment, so we try to, you know, tune these for the scale of GitLab.com. When you think about batch size: let's say you're updating records, then the batch size sort of determines the scope of the records that you lock.
C
I'd sometimes look at how long the update takes, for example. So if you know that this particular update is heavy, because it also runs a heavy select or whatever, then it can take like ten seconds, and now the question is: do you want to hold the lock for ten seconds, or would you rather lower the batch size in order to speed it up, and then only hold that lock for maybe a second or so?
C
That really depends on the kind of migration that you run. The other consideration is that any record you update is going to produce a dead tuple in Postgres. So basically, let's say we try to update the whole table: then we would basically make another copy of all the records, and the table would be double in size after that, right?
C
So that's also a reason why we want to batch, and I tend to think that the delay interval kind of plays into that: you basically stop for maybe a minute or two, and this gives processes like autovacuum a chance to kick in and clean up after your last batch. And obviously it's also a way of spreading out the load, so you don't have a spike in traffic on the database, but rather spread it all out across a long time.
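The batching-plus-delay idea can be sketched in plain Ruby. This is only an illustration of the scheduling arithmetic, not GitLab's actual background-migration API; the names and numbers are made up:

```ruby
# Split an id range into fixed-size batches and stagger each batch by a
# delay interval, so autovacuum can clean up dead tuples between batches
# and the load is spread out instead of spiking.
BATCH_SIZE     = 10_000
DELAY_INTERVAL = 120 # seconds between batches

# Returns [offset_in_seconds, id_range] pairs, one per batch.
def schedule_batches(min_id, max_id, batch_size: BATCH_SIZE, delay: DELAY_INTERVAL)
  (min_id..max_id).step(batch_size).each_with_index.map do |start_id, index|
    end_id = [start_id + batch_size - 1, max_id].min
    [index * delay, (start_id..end_id)]
  end
end

schedule_batches(1, 25_000)
# => [[0, 1..10000], [120, 10001..20000], [240, 20001..25000]]
```

Lowering the batch size shortens how long each lock is held; raising the delay gives autovacuum more room, at the cost of a longer total runtime.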
C
I'm not really aware of a migration where a single batch takes 8 minutes to complete, so maybe it was just cautionary, in a sense. And if we can afford spreading out the load, I don't see a reason why that would harm us. I think it's more harmful to have a too-short interval; a longer one obviously isn't.
B
There's also the replication between the nodes to take into account: a change can be quick on the write node, while the read-only nodes can take a while to sync up with it. For example, we had a truncate on a table with millions of records at some point, and that was trouble for replication lag, even though it was really fast on the write node. There are probably better examples for that, but it can be a factor to take into account. Yeah.
B
Yeah, the next one was from me; I think you already had a quick look at it. So basically they want to page over all the job artifacts, and they wanted to filter them by the name of the builds, so they needed to join with the builds to find the job artifacts. We've got, I think, about 12 million job artifacts, and I don't know how many builds, but that will be something in the same range, and that was timing out. Not sure whether there is something we can do.
C
So for each of those 12 million records, you would basically do another index scan that has an additional filter on top of it. So basically, the second plan node is not ideal, in the sense that you may have a better index for that, where you'd actually be able to satisfy all the conditions from the index, right? So on the one side there are a lot of records; on the other side, the second plan node isn't greatly optimized.
C
So if that information was present on the artifacts table as well, we wouldn't need to join, and that could be optimized for quite well. But that basically means duplicating that information, which is a way of speeding things up (you denormalize), but then you're also troubled with the fact that you need to maintain that information consistently.
C
So what we could do is create an index on ci_builds (which is going to take forever, but anyway) on the id and the name, and then make that one partial and say it's only for the Ci::Build type, that kind of thing. So in this case we would basically have the same index prefix as we already do, so the id is still in there, and then we satisfy the name lookup from the index, because that is presumably something that changes when you do a different search, and then we make that one partial.
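A partial index like the one described could be sketched as below, again assuming GitLab's migration helpers. The index name and exact column list are illustrative, and this is not runnable outside the Rails app:

```ruby
# Sketch of the partial-index idea for the builds lookup.
class AddPartialIndexOnCiBuildsIdAndName < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  INDEX_NAME = 'index_ci_builds_on_id_and_name_partial'

  def up
    # Roughly: CREATE INDEX CONCURRENTLY ... ON ci_builds (id, name)
    #          WHERE type = 'Ci::Build';
    add_concurrent_index :ci_builds, [:id, :name],
                         where: "type = 'Ci::Build'",
                         name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :ci_builds, INDEX_NAME
  end
end
```

The `WHERE` clause keeps the index small by only covering rows of the relevant type, while the `(id, name)` columns let both conditions be satisfied from the index itself.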
C
You can even create a temporary table for that, or an unlogged table, to make it even a bit faster. Then you'd be able to insert into that table basically from the application; you can even use batch inserts for that. And then, when you're done, you basically move that data over, with batched updates from the temporary table to the original table, and then drop the temporary table. We don't have a mechanic like this in place, though.
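The staging-table idea might look roughly like the raw-SQL sketch below (wrapped in migration-style `execute` calls). The table and column names are invented for illustration, and a real version would run the final update in id-range batches:

```ruby
# Rough sketch only: an unlogged table skips WAL, so it is faster to fill,
# but it is not crash-safe and is not replicated to read-only nodes.
execute <<~SQL
  CREATE UNLOGGED TABLE artifact_backfill_staging (
    artifact_id bigint PRIMARY KEY,
    new_value   text
  );
SQL

# ... the application batch-inserts the computed values here ...

execute <<~SQL
  -- In practice this UPDATE would also run in id-range batches.
  UPDATE ci_job_artifacts AS a
     SET some_column = s.new_value
    FROM artifact_backfill_staging AS s
   WHERE a.id = s.artifact_id;
SQL

execute 'DROP TABLE artifact_backfill_staging;'
```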
A
Okay, so I was just wondering: it could be an interesting experiment to see whether, if we compile the Postgres package ourselves with specialized compiler flags for the CPU instructions that we are using, we could gain some performance benefits out of it. But this is really just an idea, and, I mean, we cannot really expect too much; that's why I'm cautious about it. But it could be a nice optimization for later on.
C
If I'm not mistaken, I think we actually compile Postgres ourselves for the omnibus package, but I could be wrong; I saw some custom flags for that. And on GitLab.com we started to use a different approach when we moved to Patroni, where we actually use the standard packages.
C
So my understanding is that we paused the effort a few weeks back, and I'm not aware of us picking it up again. On the other hand, we're actively discussing Postgres partitioning, and I think if we wanted to start that in a good way, we would actually need to use Postgres 10 at least, and there is obviously the discussion, or the effort required, to upgrade GitLab.com.

I'm not aware of this actively being discussed, and on the other hand we also require 9.6 for the standard package. So that's the current minimum requirement for GitLab, and we would also have to bump that, and I think the plan to bump it was that we would still wait a few releases, if we follow the original plan. So that's my understanding at the moment.
C
All right, so the index creation is still running; I'll report back later. I wanted to once again advertise the PGConf.EU conference coming up in October in Milan, Italy. I know at least three of us are going, so if you want to take a look: I think it's a good conference to go to, and they also offer training, but I think the most important, or the most interesting one, has already sold out, unfortunately. But the conference is still good; I can really recommend it.