From YouTube: Database Office Hours 2020-05-20
C
Yeah, and I'm happy to announce that, after upgrading GitLab.com to Postgres 11, we've now deprecated the add_column_with_default helpers that have been around for a while. Basically, adding a column with a default was more expensive in previous versions, so that used to go like: adding the column without the default, then updating all the rows in batches to the default value, and then finally setting the default value on the column. With Postgres 11, that's better!
C
There was an improvement there where you can just add a column with a default, even with a NOT NULL constraint; that's an inexpensive operation, and only when a record gets rewritten does the default value get physically written to disk.
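In SQL terms, the difference looks roughly like this (table and column names here are made up for illustration):

```sql
-- Postgres 11+: a metadata-only change; the default is stored in the
-- catalog and materialized per row only when a row is next rewritten
ALTER TABLE audit_events ADD COLUMN severity integer NOT NULL DEFAULT 0;

-- Pre-11, helpers like add_column_with_default had to emulate this:
ALTER TABLE audit_events ADD COLUMN severity integer;          -- no default, cheap
UPDATE audit_events SET severity = 0 WHERE severity IS NULL;   -- batched in practice
ALTER TABLE audit_events ALTER COLUMN severity SET DEFAULT 0;
ALTER TABLE audit_events ALTER COLUMN severity SET NOT NULL;
```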
Basically, with the change that was linked in there, we deprecate just the old helper, so feel free to just use add_column, the standard Rails helper, for all situations where you want to add a column going forward. And Myra also mentioned on the agenda that there is a slight problem with the backport strategy, so with old versions.
C
So when you backport things, just make sure we use the old helper. And I think there's no risk of that blowing up on us without us noticing, because the old versions, prior to this deprecation, still have a RuboCop rule around there that complains about using add_column with a default value in those old versions. So if we don't realize it when we go backporting this, the cop should fire at that stage.
C
Right, and then whenever we have a backport, or we want to add a column with a default in previous versions, so prior to 12.10 where we can't assume Postgres 11, in those versions we would still have to use add_column_with_default, so the previous, you know, more involved helper method, right.
B
Okay, there is only one little detail that makes it more complicated: if we are backporting an add_column, that is, adding a column to a large table, we shouldn't even use add_column_with_default, right? It should be, like, contraindicated, or it should be restricted, and we shouldn't do that in a lower version. So I have tried to add that note into our documentation.
D
Thank you. So I had the chance to work in the version app, and I was noticing there, also discussing with Doug at some point, that we are having some duplicated code there for the migration helpers, and I was wondering how to approach that. And thank you, I saw that there is already some work in progress to move that out, and I think that's great. So yeah, that was it; I wanted to bring this up a bit for discussion.
E
You know, we copied a lot of them over to the version app. Some of them, I don't know if we've looked at them or not; they may not be as necessary with Rails 6, because some things are solved in Rails 6, and I've seen that where we're using them. At times there are some things that we've modified, had to modify, to fit the version app a little bit, but perhaps those can be more generalized as well, and perhaps we can just start off with, like, one of the most common helpers and go from there.
A
That's an open question; if you wish, let's discuss it as the version app developers. If we can make a subset, then maybe we can try to convince the database team to use it. I think not a lot of people are coding in the version app or the customers app, so we are kind of... that's all that.
E
Makes sense. I know I started with some of that concept with upstreaming the large-table cop to gitlab-styles and then using it in the version app, and then, hopefully, we can come back around and use that upstream cop in the GitLab project once we upgrade RuboCop to the gitlab-styles level.
B
Well, I take a look at the related merge request, and I think six people have signed up so far, so that is great. And some interesting facts: we have been reviewing around 120, like, on average, over the last three releases up to the current release, and I mean, it is, like, a high number, so reviewers and trainees are well more than welcome. And I also wanted to...
C
Yeah, this is really fresh off what we've just been doing, but I just wanted to ask this question here as well. So we're currently working on the partitioning side, and we're taking the audit log as an example; we're sort of working on the idea of partitioning the audit log by time. That would basically result in, like, one table, audit_events, and this one has maybe even hundreds of partitions, one for each month. Basically, you can think about other tables too; maybe they have hash partitioning.
C
You create a lot of partitions for those, and sort of a common scheme that I've worked with is putting those partitions into their own separate schema. So you still have the same parent tables, you still have them in the public schema, so everything looks like before in the public schema, just with the fact that it's partitioned, and those partitions are basically physical tables, and they live inside a separate schema.
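A minimal sketch of that layout (schema, table, and column names are hypothetical, not the actual migration):

```sql
-- parent table stays in public, so clients keep querying audit_events as before
CREATE TABLE audit_events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    details    text,
    PRIMARY KEY (id, created_at)   -- PK must include the partition key
) PARTITION BY RANGE (created_at);

-- the physical monthly partitions live in a dedicated schema
CREATE SCHEMA partitions;
CREATE TABLE partitions.audit_events_202005 PARTITION OF audit_events
    FOR VALUES FROM ('2020-05-01') TO ('2020-06-01');
CREATE TABLE partitions.audit_events_202006 PARTITION OF audit_events
    FOR VALUES FROM ('2020-06-01') TO ('2020-07-01');
```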
C
The reason for doing that is really just convenience, because, you know, when you look at the public schema and it has maybe ultimately thousands of partitions, it's very hard to see the actual tables that you're working with, and most of the partitions are identical anyway. So it's not really useful to have them mixed up with the public-schema tables, but it's mostly a convenience concern.
C
Basically, the question I wanted to ask is if anybody, like, sees a problem with creating a schema separate from public, because this is really the first time that we're doing that. And note, this is the same database; it's really just a way of logically grouping tables inside the same database.
C
So there's no, you know, separate database that we introduced. And we ran into an issue with Geo that we're currently looking at, because Geo makes it a bit more difficult with foreign data wrappers and such, but I was wondering if there may be other assumptions where we kind of assume that there is only one schema, that we only have the public schema around, or if there is any other problem that you can already identify.
C
This is the same database, the same physical database, and a schema is really just a way of logically grouping things, or basically putting a namespace on object names. So if you refer to a partition explicitly, you would go by putting the schema name, `partitions` for example, dot the partition name, to explicitly refer to that.
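For example, with a hypothetical schema named `partitions` and a hypothetical monthly partition:

```sql
-- the unqualified name resolves to the parent table in public as usual
SELECT count(*) FROM audit_events;

-- a specific physical partition has to be addressed with its schema prefix
SELECT count(*) FROM partitions.audit_events_202005;
```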
C
Yeah, thanks for bringing it up. I guess it's always good if we can start with partitioning from the beginning; it's much easier than adding that later. We're currently focused on the audit events side, because that's a rather simple example for us, and what we'll get from that is also the migration helpers around that. So once we have them, maybe it's easier to apply that to the product analytics events, right? I don't know if that works out timing-wise.
H
The problem with partitioning is that if you decide to partition, for example, the events table afterwards, you will have to recreate everything. It's not like you can take an existing table and just split it up. So if it's a three-hundred-gigabyte table, you will have to create a partitioned table and the partitions, then move all the rows over into each small partition, so the migration will be long.
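A rough sketch of what that after-the-fact conversion looks like (all names hypothetical; in practice the copy would be batched and the swap done carefully under locks):

```sql
-- 1. create a new, empty partitioned table alongside the old `events` table
CREATE TABLE events_partitioned (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- 2. create the partitions themselves (cheap: just empty tables)
CREATE TABLE events_p_202004 PARTITION OF events_partitioned
    FOR VALUES FROM ('2020-04-01') TO ('2020-05-01');
CREATE TABLE events_p_202005 PARTITION OF events_partitioned
    FOR VALUES FROM ('2020-05-01') TO ('2020-06-01');

-- 3. the expensive part: copy every row over (batched in practice)
INSERT INTO events_partitioned SELECT id, created_at, payload FROM events;

-- 4. swap the tables
ALTER TABLE events RENAME TO events_old;
ALTER TABLE events_partitioned RENAME TO events;
```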
B
No, like, an estimate of how much execution time it takes to create a partitioned table by date. Like, I know that is a regular migration, and I guess it should be a regular migration because it should be deployed before canary. So I guess what I'm just asking is: is this creation fast in execution?
C
Yeah, it's really just like creating empty tables. You basically can't go in and teach an existing table a partitioning pattern, right? You can only go in and create a new, empty table, tell it to be partitioned by something from the start, and then add the data to it later. Adding the data later is by far the more expensive part, because you have to copy stuff over, but initially creating the partitioning scheme is just like creating regular tables, basically.
C
That's correct. If we ever want to sort of explicitly interact with the separate schema, in this case, it is a detail that we don't really care about, because the main table, so the parent table of all those partitions, still lives in the public schema. So from a client perspective, we really don't care if the partitions live in the public schema or in the partitions schema or in a separate one, but yeah.
H
...that you keep the top, the partitioned table, as it was, so you reference only the partitioned table that resides in the public schema, and then you have all the details in the other schema, which are the real tables, and they're hidden. So from the application we never know whether this is a partition schema or a real thing, and the only one accessed is in public. The only reason to want to change...
F
Okay, so it seems that, with the new schema, it's sort of, like, the same functionality as public; we're just kind of namespacing it separately so that we don't get too cluttered. If we eventually wanted to use schemas for other reasons, like for specific functionality, to separate out, you know, users, permissions and things like that, would there be any issues caused by having these partition schemas also in that same area? I can't think of any reason, but I thought it'd be worth asking.
H
We'll just have to change our SQL guidelines, like we have now, where, if you write SQL, we always prefix the column name with the table name; we will have to update it to prepend the schema. So if you directly access and write, you know, it is not usual, but well, you will have to go through all the paths.
B
Okay, so the next point is an MR that I noticed from Patrick about reducing the risk of conflicts, in which, basically, we no longer introduce the versions into the structure file; we just create a file. And I think it is great; I'm just encouraging everyone to see it. We haven't got to merge it right now because, well, we are still before the 22nd and we are receiving a lot of commits and it might have conflicts, but we are probably going to merge it next week, I guess.
C
I expect that everybody, at least once if not twice, ran into that issue where you create a migration, you push that, and, like, with the next person doing a similar thing, you have this really annoying conflict in structure.sql, where you just want to make sure that, you know, the schema versions go in from both of those migrations, and that's the only conflict you have, and you rebase because of that, which incurs a lot of waiting time, and CI cost probably as well.
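For context, this is the part of `db/structure.sql` where those conflicts happen: Rails appends every applied migration version to a single statement at the end of the dump (the version numbers below are invented), so two branches that each add a migration both touch the same lines:

```sql
INSERT INTO "schema_migrations" (version) VALUES
('20200518000001'),
('20200519000002'),
('20200520000003');  -- each new migration adds a line here, a merge-conflict magnet
```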
B
Yeah, I think so too. So I felt like one question, when I was reviewing that, is: how do we ensure that, when you create a migration, we actually include the file? I think at the moment it is going to be left to code review, and we must be mindful that the file, like, an empty file, should be included. But perhaps this is something that Danger or RuboCop should alert on, because it can be easily missed in a large merge request.
G
Yeah, that's a good point too. Yeah, I have to think about that; that makes sense. I think, in general, I have to kind of do a write-up, because, you know, it's going to be a development change; that's to kind of let the other developers know what's happening. That's something I was gonna, hopefully, look at, you know, sometime in the next day or two, so I'll take that into consideration.
C
Now, we just talked about the product analytics events table, and you mentioned the MR earlier; I just looked it up again, and I was wondering if we have more insight into why that table is so, like, wide. It really looks like maybe a staging table in a data warehouse, and I'm hearing that we're, like, putting, what was it, 15 million events per day?
D
I think the first step, oh, there is a limitation set up, so it's going to be only 100 events per second, I think. If I remember right, the structure of that table just followed the general structure of a Snowplow event. I'm not really sure how that looks, but I think it's in that direction.
A
It's kind of an easy first iteration. Work on it started a week ago, so you see that the migration and everything is quite not ready yet. But if you have, like, fundamental ideas, anyone, it's the right time to really focus, because this is going to be huge. And also, you may see that it's used in the graphs, so it's not only, like, an insert-only table.
B
But just one last question, hi, sorry: with this Postgres update version, I think we talked a little bit last week about what are the improvements that we are looking for, and one of them was adding a column with a default, that is going to be, like, faster. Do we have anything else right on the agenda for the next milestone?