From YouTube: Database Office Hours 2019-11-07
A
All right, should we just start? I mean, you put some topics in... I think those were carried over from the last time, when I had to cancel.
B
Creating a join-table-like thing, but putting extra columns in it. Basically, the record is such that it makes sense for it to be identified by the primary key, but it was always identified by the two foreign keys, and in that case the question is: should we always suggest removing the primary key, because it's simply not needed? That's the question.
A
I think that worked fairly well in the past. Although I've had people say, "oh, you always want to have an ID column in Rails," I think that's not always true. So my take on this is: yeah, we can remove it, or rather we don't need it. There is one caveat: usually, if you're in that situation, you would create a composite index on the unique columns and basically make that your primary key in Rails.
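As a sketch of what that could look like (the table and column names here are invented for illustration, not taken from the meeting), the join table can be created without a surrogate id, letting the two foreign keys form the key:

```ruby
# Hypothetical join table ("labelings") between labels and issues,
# identified by its two foreign keys instead of a surrogate id column.
DDL = <<~SQL
  CREATE TABLE labelings (
    label_id bigint NOT NULL REFERENCES labels (id),
    issue_id bigint NOT NULL REFERENCES issues (id),
    created_at timestamptz NOT NULL DEFAULT now(),  -- the "extra columns"
    PRIMARY KEY (label_id, issue_id)                -- composite key, no id
  );
SQL
puts DDL
```

In a Rails migration the equivalent would be `create_table :labelings, id: false` plus a unique index on the two columns; exposing a composite key to the model layer in the Rails of that era needed a gem such as composite_primary_keys.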
A
That makes sense; we should know what we need. Sometimes you can argue that it's helpful to have them in retrospect, when you do some kind of analysis, but then we know we need them for analysis later, so there's also a good reason to keep them. If we don't, then I don't see a reason to keep them. Any other opinions, or not?
B
Yeah, I was surprised that, if you do join them, it significantly affects the performance of the query. I noticed you had a suggestion to put the filter on the IDs on the first join, on labelings, and actually that's what I did originally: I put it in the join condition and it was significantly faster. But then I realized I can just join one table, and that's fast enough.
A
Yeah, I think in this case you actually have the label IDs at hand, right? And that's why you don't have to join the other table: you already have them. When I looked at it, I thought you might want to do the join because you may not be sure the label IDs actually exist, but given that we have foreign key constraints to them, I don't see any difference, really.
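To illustrate the point (with the same invented table names as above): when the label IDs are already known, and foreign keys guarantee they exist, the extra join can be dropped and the filter moved onto the join table directly:

```ruby
# Version that joins the labels table only to filter by IDs we already hold:
WITH_JOIN = <<~SQL
  SELECT i.* FROM issues i
  JOIN labelings l ON l.issue_id = i.id
  JOIN labels  lb  ON lb.id = l.label_id
  WHERE lb.id IN (1, 2, 3)
SQL

# Equivalent version with one join fewer: filter labelings directly.
WITHOUT_JOIN = <<~SQL
  SELECT i.* FROM issues i
  JOIN labelings l ON l.issue_id = i.id
  WHERE l.label_id IN (1, 2, 3)
SQL
puts WITHOUT_JOIN
```

With the foreign key constraint in place, both forms return the same rows; the second just gives the planner less work.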
A
Apparently I can't show my screen, I thought... give me a second.
C
I have to create... let me try it, okay.
A
If you look at this, there is a nested loop inside, and then we do another nested loop on top of that. What the planner is basically trying to do, for the outer nested loop, is this: it knows it's only ever going to retrieve one record from the outer relation. So it's always going to go to the primary key and retrieve that, and I think that sort of explains why it's not picking up any of the other indexes.
A
All right, I just wanted to point out two other things that happened yesterday. They kept us entertained for quite a bit with the incident in the afternoon, and one of them was that the main reason we ran into this issue was basically a very large query that was executed. It's quite interesting to look at, I think, because we're actually trying to do the right thing here. I pasted a screenshot of the code in the document, where we basically batch, or we retrieve...
A
What actually happened was that we had an array of 30 or 40,000 IDs, from an earlier import in that case, and that by itself was totally fine, but then we put that into this query. As soon as we make that call, it translates into a query that has 30 or 40,000 IDs in it, and that basically translated into queries stalling, or even the Postgres backends stalling for a while as they try to parse that query, because it's so large. Then ultimately, I think, we were at least partially also seeing statement timeouts from that, but yeah.
A
So the issue is that we don't break this down when we run the query. I just wanted to point out again that this goes in the same direction as input validation, in the sense that we try to sanitize stuff that's beyond our control. In this case we should also be thinking about the size, the textual size, of the query that we're running, and breaking it down should immediately fix the problem.
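A minimal pure-Ruby sketch of that breakdown (the slice size of 1,000 is an assumption for illustration, not a figure from the meeting): instead of interpolating the whole array into one statement, query in bounded slices:

```ruby
# Run one query per bounded slice of IDs instead of one giant IN list.
def fetch_in_batches(ids, batch_size: 1_000)
  ids.each_slice(batch_size).flat_map do |slice|
    # Stand-in for e.g. LfsObject.where(id: slice) in ActiveRecord.
    yield slice
  end
end

sizes = []
fetch_in_batches((1..2_500).to_a) { |slice| sizes << slice.size; slice }
sizes # => [1000, 1000, 500]: no single query carries more than 1,000 IDs
```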
A
So I'm thinking: if we had a mechanism that would detect a large query, like if you're in Rails and you attempted to send, say, two megabytes' worth of query, we might just bail out and say "you're doing it wrong," instead of attempting to send it.
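A sketch of such a bail-out (this is a hypothetical guard, not an existing Rails feature; the 2 MB limit is just the figure mentioned as an example):

```ruby
class QueryTooLargeError < StandardError; end

MAX_QUERY_BYTES = 2 * 1024 * 1024 # 2 MB

# Refuse to send a query whose SQL text exceeds the limit,
# pointing the caller at batching the ID list instead.
def check_query_size!(sql, limit: MAX_QUERY_BYTES)
  return sql if sql.bytesize <= limit
  raise QueryTooLargeError,
        "query is #{sql.bytesize} bytes (limit #{limit}); batch the ID list instead"
end
```

In Rails this check could hang off an ActiveSupport::Notifications subscriber for `sql.active_record`, though a subscriber only observes queries after the fact rather than preventing them.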
A
So basically, the problem we had yesterday was: we realized, oh, there is this large query that is basically clogging things up, but where is it coming from? And you don't even have a lot of indicators; in this case it was just, you know, SELECT FROM lfs_objects WHERE id IN a huge list of IDs, and nothing else. And then there is this gem, Marginalia, where you basically send the query over with a comment, and then there is a reference to the line of code it originates from, and this is also something you can see in the Postgres log.
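Roughly, Marginalia works by appending key:value pairs as a trailing SQL comment, so the origin shows up next to the query in the Postgres log. A simplified sketch (the exact components are configurable in the gem, and the values below are invented):

```ruby
# Append a Marginalia-style comment identifying where a query came from.
def annotate(sql, context)
  comment = context.map { |k, v| "#{k}:#{v}" }.join(",")
  "#{sql} /*#{comment}*/"
end

annotate("SELECT * FROM lfs_objects WHERE id IN (...)",
         application: "app", controller: "projects", action: "show")
# => "SELECT * FROM lfs_objects WHERE id IN (...) /*application:app,controller:projects,action:show*/"
```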
A
So it doesn't always translate to the same execution time, but I like to think of it as a way to detect whether you can actually improve a query. So not as absolute runtimes, but more as relative to what you did earlier on that database, to understand if there was any improvement.
B
It gives a similar reading to what the production database gives, and it's really useful, because I already pointed out to a few people that, hey, you can also test some basic data migrations as well. If you write your query, you can also play with the indexes; you can create your own index there. So it's really useful. I don't know what technology is behind it, whether it's a custom session or a really long-running transaction, but it's really cool.
A
It's actually ZFS behind that system. Nick built all of this, so I can't go into a lot of detail, but what I know is: we basically keep a Postgres instance running that keeps itself up to date with production through a replication mechanism, and then, when you create a session, what basically happens under the hood is that we create a ZFS clone from a snapshot for your own session, and we basically promote that. So we start another Postgres cluster on the same system.
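The flow described might look roughly like this in ZFS terms (the dataset names and paths are invented; the actual tooling is Nick's and not shown here):

```ruby
SESSION_FLOW = <<~SHELL
  zfs snapshot tank/postgres@now               # point-in-time copy of the replica
  zfs clone tank/postgres@now tank/session1    # cheap copy-on-write clone
  pg_ctl -D /tank/session1 start               # second Postgres cluster, same host
  # ...promote it read-write, let the user create indexes, run DDL...
  zfs destroy tank/session1                    # throw the clone away when done
SHELL
puts SESSION_FLOW
```

Because the clone is copy-on-write, each session is cheap to create and discard, which is what makes the destroy-and-repeat cycle practical.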
A
We promote that read-write, so you can actually, you know, create indexes or whatever you want to do, and then you really have your own cluster you can mess with, basically. When you're done, we just destroy it and repeat. It's a very cool use of ZFS, I think. But it also underlines the fact that you never have anything cached, basically, because of all that.
A
Yes, to some extent. I think Nick also pointed out recently that there is an issue with the replication mechanism we currently have, so it may lag by a few days or so. Once we fix that, and we should fix that, it should always be very close to production, like in the order of minutes. But basically, things like the row counts you see are what you can expect in production too.
A
OK, well, if we don't have more topics... I still have one left, but I'm not going to steal your thunder. So...
B
Guys, I have a question about something from a week or two ago: we had a small issue about dropping a table that had a foreign key definition against the projects table, and the DROP TABLE statement always timed out. I created an issue out of it, to document how we deal with this during reviews, because it looks like we just cannot drop a table when certain conditions hold.
B
You cannot just drop a table that has a foreign key specification against a, well, high-traffic table. And from the comments I can see that we cannot really find a way to actually drop this table, so we need to somehow document what to do in this case: should downtime be requested, or should we just keep the table as it is? Truncating it seemed to me the safest option, because then there are no records, so I guess the foreign key maintenance cost is zero, and there is not much overhead in having an empty table around.
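For context on why the drop times out: removing the child table also removes its foreign-key triggers on the referenced projects table, which needs a brief exclusive lock there, and under constant traffic the DDL queues behind other locks until statement_timeout fires. One common mitigation, offered here as a sketch rather than as something decided in the meeting (the table name is invented), is to bound the lock wait and retry:

```ruby
# Bound how long the DROP may wait for its locks, and retry a few
# times, instead of letting the DDL queue behind traffic indefinitely.
DROP_WITH_TIMEOUT = <<~SQL
  BEGIN;
  SET LOCAL lock_timeout = '100ms';
  DROP TABLE IF EXISTS some_empty_child_table;  -- hypothetical table name
  COMMIT;
SQL
puts DROP_WITH_TIMEOUT
```

If the lock cannot be acquired within 100 ms the statement fails fast and can be retried later, rather than blocking other sessions on projects.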
A
Yeah, it's sort of mind-blowing, if you think about it, that you can't drop an empty table because of that. And I don't have any idea other than what your issue is all about: keeping a graveyard of tables that are empty, and keeping them around until you have a chance to drop them. Although, unless we actually do have downtime, we never have a chance of doing that.
C
Yeah, I researched for hours, thinking through multiple ways to cope with that. There's really no way; I think the only option is using a scheduled instance-wide downtime. You know, you can keep those tables in the database and then kill them during some maintenance. It's the only way, unfortunately. I searched for all the smart ways to do it, and there's just no way.
B
Maybe recycle them, because altering the table actually works: you can drop columns, except the project ID, of course. So if you need an additional table for your feature, one that needs a foreign key to projects, then you could reuse it. We could maybe internally recycle it, but that's a bit of reviewer overhead on our side. At least they wouldn't stay around empty.
C
What I researched is, you know, whether it theoretically makes sense to open an issue to Postgres upstream. So I looked into why this is needed, and it is needed in the end: when you go deep into the Postgres development circles, you'll see that it's needed. But we can check and open an upstream issue to Postgres if you like; we can track it and, you know, ask for ideas on how to cope with such a thing.
A
We may have a chance, though, when we do upgrades. Although we haven't yet figured out if we want to go the no-downtime upgrade route, or if we just say we accept the 30 minutes of downtime for upgrading Postgres; so that might be a chance of doing it. But those are also rare, so we may not want to rely on them.
A
Or maybe something completely different, in case you're interested: we had this survey about database training, maybe last week or the week before, and we got good feedback from it. We're currently reaching out to a few vendors for Postgres training; there are three or four companies we reached out to, and we're waiting for feedback from them. Hopefully we'll be able to schedule a training quite soon.
A
I think it should be open to the whole company, and that was also the intent of the survey; I think it went out to the whole engineering department, and if I remember right, we got feedback from 30 or 35 people interested in the training. The problem, or what I'm not sure about, is whether we're actually going to find somebody who is fluent in both Rails and Postgres to run the training. So that's a bit of a challenge.
A
But
what
is
what
is
readily
available?
Is
things
like
performance
there,
Postgres
performance,
trainings
or
without
the
rails
aspect
to
it,
but
diving
into
the
post,
rest's
performance
issues,
and
there
is
lots
of
companies
doing
that
you
just
have
to
find
a
good
one.
That
is,
that
is
also
offering
that
in
a
remote
setting.
A
Also,
if
you
ever
get
the
chance
to
go
to
any
of
the
post
with
conferences
there's,
there
is
always
a
training
session
around
the
conference
or
in
the
big
four
and
highly
recommend
those
as
well.
They
usually
run
by
by
put
screws
committers,
and
they
really
know
what
they're
talking
about
so
pretty
good
to
attend.