From YouTube: Database Office Hours - 2020-07-15
A: All right, let's start. Happy Wednesday, everybody. This is Database Office Hours, and I'm kicking it off with something Andy created, which you also talked about last week: a typecast helper. The background is this: suppose you would like to change a column type from text to jsonb; that was the example we discussed. You basically have the option to go to the application, iterate over all the records, convert them to JSON, and then update the records one by one to store them as JSON. What Emma did was implement a typecast function so that this operation can happen inside the database. Instead of loading all the records into the application, you can now truly batch the operation in the database and perform the typecasting there. There are also options to handle errors in this case, that is, what to do with any invalid data. You can even go ahead and implement a custom cast function that tries to convert to JSON and, if the value is not well-formed JSON, does something else, such as providing a default value. So it's a very flexible approach, and it's nice to see that we can do those things fully in the database.
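As a sketch of the custom cast function described above (hypothetical function, table, and column names; the actual helper may differ):

```sql
-- Cast text to jsonb, falling back to a default when the input is not valid JSON.
CREATE OR REPLACE FUNCTION safe_cast_to_jsonb(input text, fallback jsonb DEFAULT NULL)
RETURNS jsonb AS $$
BEGIN
  RETURN input::jsonb;
EXCEPTION WHEN others THEN
  RETURN fallback;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- In-database conversion of a text column to jsonb, no application round-trips:
ALTER TABLE events
  ALTER COLUMN payload TYPE jsonb
  USING safe_cast_to_jsonb(payload, '{}'::jsonb);
```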
B: Yes, so this issue was coming up on the version app, because there are some restrictions on how long the deployment can take, and I had an idea about using views to rename a column. Basically, when we kick off a deployment, we create a view similar to the existing table structure; in that particular view we make sure that both columns are present, and we replace the table name in the model with the view.
B: Every write and every select will hit our database view, and at some point we can actually change the column name in a post-deployment migration without breaking the application; a few days later we can simply eliminate the view. I tested it locally and it seems to be working. There are some limitations, of course. It relies on the fact that you define the table name in your Active Record class definition: as soon as you start writing a raw SQL query where you reference, let's say, the projects table directly, it will break, simply because you are not using the standard table_name method but hard-coding the table name. There were also other issues: when you create a database view from an existing table and inspect the view, the default values and some other settings on the table are not reflected in the view.
B: We also had to hack into this a bit to get the column definitions from the original table and make them available in the newly defined model. It's a bit tricky, but it seems to be working: I was able to rename the project name column, and most of the tests were passing. So it's kind of working, and most of the failures were coming from the problems I mentioned earlier; some of them can be solved.
B: If we take this approach, we are probably not going to rename things in the projects table, because it's probably our biggest model: it's referenced by lots of things, and we have hundreds of different kinds of queries querying the projects table. But if there is a standalone model and database table where we need to rename something, and we really know that we only use standard Active Record stuff or Arel, then this can be a viable alternative.
A: It's a nice approach, and you're right to think that the biggest benefit of it is that you don't have to rewrite the table data. Basically, what we're doing now is adding a new column and then filling that column up, and that basically amounts to rewriting the old table outright, right? Yeah.
B: Yeah, it would be quite a bit of work to make this work for all our models, because we'd have to look into the queries and refactor them to consume the table name properly. But it would be pretty easy to check whether it's possible, because as soon as you change the table name you'll start seeing failures on CI.
B: ...what we are querying, so whether it's the view or the original table. It's just a matter of — I don't know if we can create a new RuboCop cop for this to check it, but it's doable. I'm mostly thinking about the version app; I don't know the codebase a hundred percent, but that could be.
B: Yeah, but that's the problem: if you run the migration, the schema version will be bumped. If the first step is loading the schema into the database, and the schema version is larger than the migration that adds the view, then that view is basically lost. So you need an extra step that creates this view every time you initialize your database.
A: Just to rephrase that: you can think of the schema.rb file, or structure.sql for us, as capturing the latest state of your database schema and all migrations. If you run the migrations on an empty database, you basically end up in the state captured by structure.sql or schema.rb. But when you load schema.rb or structure.sql, you don't have any pending migrations afterwards. That's why you wouldn't get the view: the migration that creates it looks like it has already executed.
A: Now, structure.sql is a bit more verbose than the Rails schema, for sure. On the other hand, you know for sure what is inside the database, and you can work with more advanced features than Rails supports. What Rails tries to do is support all the different database systems, like MySQL and so on, but you end up with a very small subset of the constructs you can actually leverage. With structure.sql you are basically free to use any of those features.
A: So why is this coming up in the version app? I mean, it's a nice approach anyway, and we should probably think about it for GitLab too, but is there a specific reason why we need to do it differently in the version app than we could with a GitLab helper?
C: We currently have a 300-second timeout from Auto DevOps on the helm command, and that causes the deployment process to fail, but not the migrations. The migrations will keep going, but since they take more than 300 seconds, the deployment is considered failed and the code past the migrations is not pushed out. So we're trying to figure that out in a separate issue.
B: The next item is also mine. I'm thinking about creating a new table; we need it for a new feature. The table will be connected to merge requests, but the data will be queried mostly on the root namespace level, so I'm thinking about adding a root_namespace_id column to the table to make querying the data easier.
B: No, it would be a completely new table. I want to capture a few metrics, and that's why I need to associate it with merge requests. And I was thinking: when I query the data for the root namespace, it would be nice to just query one column, because I know the root namespace ID and then I can easily get the records without doing the recursive lookup.
A: Right, so I think there are two changes in flight regarding this, because it's such a typical problem that we have very expensive queries due to this recursive namespace hierarchy. The two changes we're thinking about making are, first, adding the root namespace ID to the projects table, which is a denormalization, so you have to take care of maintaining it — when you move projects, for example — but then you really have an easy way to select all the projects for a particular root namespace. The second thing we're about to add is basically the path to the root namespace from a certain project. The hierarchy is a tree, and any project would carry the information about its path back to the root namespace, with all the intermediate groups and namespaces in there. That should help alleviate those pains from the recursive queries.
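To illustrate the difference (hypothetical names — merge_request_metrics stands in for the proposed new table):

```sql
-- Without denormalization: walk the namespace tree recursively on every read.
WITH RECURSIVE ns AS (
  SELECT id FROM namespaces WHERE id = 42            -- the root namespace
  UNION ALL
  SELECT n.id FROM namespaces n JOIN ns ON n.parent_id = ns.id
)
SELECT m.*
FROM merge_request_metrics m
JOIN projects p ON p.id = m.project_id
WHERE p.namespace_id IN (SELECT id FROM ns);

-- With a denormalized root_namespace_id on the new table, the same read
-- collapses into a single indexable predicate:
SELECT * FROM merge_request_metrics WHERE root_namespace_id = 42;
```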
B: That would help quite a bit, but I'm trying to come up with the most performant solution, because if you have one column, you can easily have an ORDER BY backed by the index, right? Whereas if you have multiple values — let's say project IDs — you can still scan all the projects within your groups and subgroups, because you denormalized the projects.
A: Good point. We had a similar discussion recently, where we were thinking about adding statistics somewhere — I don't recall where, but let's say the issues table — and we wanted to add another metric to an issue. The question was whether you add it to the issues table or put it into a separate table so that the issues table doesn't get as wide. The argument was similar in that case: we wanted to be able to sort by that metric and also filter on the issues table. Now, if you put those things on two different tables, even though you can join them, it's not ideal, because you will have to use an index either for the filter or for the ordering, but you can't combine the two. If you put it on the same table, you actually can. That was kind of the argument we were making there for not extracting it into a separate table.
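For example, with a hypothetical weight metric stored directly on issues, one composite index can serve both the filter and the ordering:

```sql
-- One index covers the WHERE clause and the ORDER BY in a single scan.
CREATE INDEX index_issues_on_project_id_and_weight
  ON issues (project_id, weight DESC);

SELECT *
FROM issues
WHERE project_id = 1
ORDER BY weight DESC
LIMIT 20;
```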
A: It depends on the index that you have. You can use an index to support sorting. For example, if you have an index on created_at and you come in with ORDER BY created_at LIMIT 10 — the top 10 by created_at — you can actually go to the index and just look at the first 10 records, because the index is kept sorted on disk, and you don't have to sort at all.
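A minimal illustration of that top-N pattern:

```sql
CREATE INDEX index_issues_on_created_at ON issues (created_at);

-- The planner can walk the index backwards and stop after 10 entries
-- instead of sorting the whole table:
SELECT * FROM issues ORDER BY created_at DESC LIMIT 10;
```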
A: We're just evaluating this, so I was kind of curious to see if there was any feedback or thoughts. Has anyone been using it? What worked? What didn't work? Is it better than what we have? Anything of that kind — reach out on Slack, or we can talk about it here, and there's also a question on the issue that I'll link where you can leave feedback if you want.
A: That's a really nice feature, and I think we're shortly before setting up access for that; I hope that rolls out soon. It's a really nice workflow: you have a CLI tool which you can use to trigger a database clone — it takes maybe half a minute or so — you set up port forwarding, and you're good to go with the psql console. You have read-write access, you have all the production data in there, you can mess with it just like you want to, and then you throw away the clone afterwards.
A: But yes, this is very easy to do at this point. I wouldn't recommend doing it until we figure out what the security concerns are and how we address those. Just thinking about it: it's easily imaginable that it sends out emails to customers, for example. It probably doesn't, but something like that could easily happen, and we should figure it out before we do this.
A: There are also discussions and ideas around testing those migrations in a more automated fashion. We just discussed with Nikolai that you could have a CI job that you trigger on, say, a GitLab build, which basically grabs a new clone of the database, boots up an instance, grabs your MR, runs the migrations, and leaves you with a cloned instance of the database that has the new migrations executed. You can log into psql to check it, or you get results back from the CI job, like: how long did it take?
F: I think we've talked here, maybe some time ago, about the telemetry reviews. Part of those reviews is also checking query performance and proposing possible optimizations. Right now this is still part of the telemetry review, and we were suggesting that it could be moved into database reviews. With this MR, these will be considered by the Danger roulette, so possibly, after this MR is merged, these reviews will fall into the database review. If you have any feedback, please add it to the MR.
F: We are doing these reviews now. Right now the telemetry Danger rule is looking for any changes in the usage data files; sometimes they involve query changes, sometimes not. We were doing, and are still doing, all of that right now; sometimes we also involved the database team when we were stuck.
F: Yeah, we were doing that outside the roulette somehow: we just added the telemetry labels and mentioned the telemetry group, and we were doing the reviews like that. The goal is to move these specialized reviews into the back-end and database reviews as much as possible, so we don't have this specialized review anymore.
A: If not, I have a quick ad-hoc question that just came up. We are working with different database schemas at the moment, right? We used to have only one, but we added two more for partitioning, and a question that came up in that regard was: do we actually support, or allow, self-hosted installations that run their own Postgres to install GitLab in a schema that is not the public schema? So, basically, maybe you create a gitlab schema or so, and you install GitLab into that schema.
A: That's right. Whenever you work with regular tables, you don't prefix them, and you basically get the default schema; typically that's public, but it can be anything. You have this search_path setting that you can control for your session, which tells you what the default schema is.
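For example (hypothetical schema name):

```sql
-- Install into a non-default schema by putting it first on the search path.
CREATE SCHEMA gitlab;
SET search_path TO gitlab, public;

-- Unqualified names now resolve to the "gitlab" schema first:
CREATE TABLE projects (id bigint PRIMARY KEY);  -- creates gitlab.projects
```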
A: Right now, since we added more schemas to structure.sql for partitioning, the structure.sql output changed so that it became more explicit about the default schema: when it creates regular tables, it started to prefix them with public. Presumably that works for most installations.
A: Unless you wanted that to go into a separate schema, or a different schema than public. And it is a bit more strange, because they happened to run into this when they upgraded from one release to the other, where I think there should be no problem at all, because structure.sql is not really involved in that: we only use it to bootstrap new installations, but as soon as you have one and you upgrade it, it's only about database migrations. So I'm not entirely sure why that happens for them.
A: Yes, we would have to sort of do that manually, but we already do something like that in Rails: we dump the schema using pg_dump and then do some post-processing, so we could remove it there. The reason why the prefix is added is that we are now explicitly dumping those additional schemas that we added.
A: Before, we had only one schema anyway, so there was no need, and pg_dump didn't explicitly add those schema prefixes. But as soon as you have two, pg_dump is more explicit about that and also adds the default schema to the table names. We could have some post-processing that removes the public prefix from everything.