From YouTube: Database Office Hours - 2020-03-26
Description
No description was provided for this meeting.
A
Thank you. So the first item is from Andreas, who is not here; I don't think he's here. The meeting is going to change to Wednesdays at 3:30 UTC starting next month, and it is already reflected on the Team Meetings calendar. I made a small note about this: it conflicts with the Delivery office hours, but I don't think that is a big deal, it only conflicts every other week. And I'm not sure what you think about also having a call in the APAC time zone?
A
Okay, moving forward: I don't know if you saw Andreas' announcement about changing from db/schema.rb to structure.sql. Moving forward we are going to use the new file under db/, and you can read all about it on the issue if you have any questions or feedback. It is also important to highlight that we need to be mindful about the Postgres features that we add, because I think this allows us to add more Postgres features like constraints and materialized views and all the other goodies. There are also some follow-ups about reducing potential conflicts and about handling the schema version. Do you have any comments about this?
A
B
The question is whether we can start using it, since we have the structure.sql file, and Andreas mentioned that yes, this is something we could start using. It's not just for enforcing NOT NULL; you can actually have additional checks, like "hey, this is an integer field, it should be larger than 0". So probably we should wrap it with a migration helper method, document it, and start using it.
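For illustration, here is a minimal sketch of the kind of additional check being described, written as a plain Rails migration that adds a Postgres CHECK constraint via raw SQL; the table and constraint names are made up, and this deliberately avoids assuming any specific GitLab helper. A constraint like this is preserved by db/structure.sql, whereas db/schema.rb could not express it.

```ruby
# Sketch only: table and constraint names are illustrative.
class AddPositiveCountCheckToWidgets < ActiveRecord::Migration[6.0]
  def up
    # A CHECK constraint like this is dumped into db/structure.sql,
    # but would have been silently lost from db/schema.rb.
    execute <<~SQL
      ALTER TABLE widgets
        ADD CONSTRAINT check_widgets_count_positive CHECK (count > 0)
    SQL
  end

  def down
    execute 'ALTER TABLE widgets DROP CONSTRAINT check_widgets_count_positive'
  end
end
```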
A
Yeah, I also think we could start using it, and I think right now, from the delivery and deployment side, it is a good time, because we are basically starting a new release. So if we ship something it is going to be deployed next week, and if something fails for some reason we can catch it earlier, rather than merging it in the last days of the release and then everything explodes.
A
Okay, so moving forward. I was thinking the other day, because I also noticed that most of our trainees and reviewers have been at capacity lately, so I wonder whether, now that we have five maintainers and I think four trainees and two reviewers, that is enough. I'm not sure if reviewers right now are feeling overwhelmed by having many merge requests assigned. I know that from my side I always assign more reviews to either Steve or Albear, and I'm sorry about that, but I'm not sure what your perspective on this is.
C
E
I get overloaded in waves, it seems. Like every week or two I'll just wake up to ten reviews in my to-do list; it just kind of hits in waves. So I think it would be good to have more reviewers, maybe not even necessarily trainees, but yeah, it hasn't really caused any major problems for me at all, at least.
F
So hi, I'm new, I just joined the database team, so hopefully in one or two weeks from now I'll be able to help a lot more with reviews. I'm starting today, so hopefully I will be able to help a lot more in one or two weeks from now.
C
A
Welcome! So do you think we should still have the distinction between reviewers and trainees? Like, does every reviewer want to be a trainee and then a maintainer, or should we still have two classifications? From the reviewer roulette perspective, trainees get picked more often than a reviewer.
F
As I said, from my experience of being a newcomer, at least for me, that really gives me some space to go slow in the beginning and then switch to trainee, and I assume that would hold true for other people who join the database group. So hopefully, I think that's nice; it helps people get in, have a few reviews, and then decide if they want to switch to trainee.
E
And that's exactly what I did; I didn't turn into a trainee until, I don't know, a couple of months ago. But I do think it might be good if there are people that are interested in being involved in database reviews just to build up their knowledge, but maybe without the intention of being involved to the point of being a trainee. Maybe we can get some more back-end developers that are just interested, you know, in a smaller sense.
A
Yeah, I do agree. I wonder how we can help with this, or what would be the way to attract more people, because I still get the sense that database reviews are kind of daunting. And yes, they are not easy, but you just need to get experience with them. I'm not sure how we can attract more people to join us in database review. Do people even know that we have trainees in the database group?
H
Hey, so those are maintainer trainees, right? Like, training to become a maintainer, not reviewer per se. I'm thinking from a back-end perspective: I would like to review, but not at the maintainer level, so to say. So I'm not sure if that should be a step in becoming a database reviewer for the back-end engineers, if that makes sense at all.
H
So, like, sort of an entry point into doing database reviews for the back-end engineers. I mean, pretty much everybody in engineering is doing some database work, creating tables or anything, and I'm not sure that having database reviews go only to the database team scales, or scales well. Maybe we can include the back-end engineers to do the first kind of review on the database changes.
I
How I see it for me at this moment is that I need a bit more time to learn, because indeed, as a back-end engineer, I do touch a lot of parts, especially now with the optimizations that we do. But as I was saying, I feel I need more experience around this. I remember that we have a training on Postgres in April or something like that, and I would like to go over the training, learn more, and see how things are going for me after that. So this is what I have in mind; if you have suggestions on what could help me here, I'm open to hearing from you.
C
So my suggestion is: since everyone here is working with a lot of other engineers, let's propose some names. Like, I know Alexandru could be a good one, and Adina could be another, so let's make some nominations if you wish, because we are working with a lot of back-end engineers and we know that some could make good reviewers. Actually, the candidates I have are Adina and Alexandru Croitor; sorry if I didn't get your last name right.
G
H
Anyway, I'm thinking of a kind of trial. I'm not sure how we can do that, but I'm thinking we could try to open it up wider, like let back-end engineers sign up for doing database review in some sort of survey, I don't know. That way everyone will get some pings on database reviews, and from there I think we'll see some just declining and not wanting to continue, and so on. Another option is to have three layers of reviews on the database.
A
I think they are increasing for two reasons. The main one is that we are constantly hiring more and more back-end engineers, so more and more reviews are appearing. And I think we also recently enabled database reviews for finders, so I have seen quite a few MRs without migrations and without actual query changes, just finders. So I have seen the number of merge requests needing database review increasing lately.
C
I
A
Okay, so moving on, the next topic is kind of related: the maintainer ratios. For back-end and front-end we aim to have a ratio of back-end engineers to maintainers, or front-end engineers to maintainers, around or mostly below six, and right now for database it is different: we have 32 or 33 engineers per maintainer. I'm not sure if we should aim to have the same number as back-end and front-end, or if it should be higher.
C
I think this metric is a bit broken. We should rather have maintainer or reviewer counts per MRs per month, something like that, with a database label. I mean, this comparison between back-end, front-end, and database isn't that great; compare like for like: we have, say, 100 database-labeled MRs every month and 6 maintainers, or something like that. That would make more sense.
H
Right, what I'm saying is that it depends a lot on what code is being worked on, so month over month you can have very different numbers of database-labeled MRs. So I understand where you're coming from, but I think that might not give us data as accurate as we want. I don't know, just as a side note.
A
A
H
Yeah, I think to get to that kind of goal we'll need to have both numbers: how many MRs require database review and how many people we would have. Having those two numbers will help us understand how much more work will be coming, or something like that. And I know that, from some recent discussions, it seems we just slowed down on hiring.
H
So that's one data point we can take: knowing how many back-end engineers we will be having in the next months, and then having some kind of historical numbers. I don't know if we can pull historical data on how many database-labeled MRs we used to have and how many we have now, to get a trajectory there. Then, given how many people will be hired, we can kind of predict where we would want to be with the maintainers in the next few months, if that makes sense.
C
A
E
A
Well, for me, at the moment I am not overwhelmed, but that's because this is the first week of the new release, so basically everything is crickets right now. For the last week, which was the last week of the past release, yeah, I was quite busy. I'm not sure, how are you feeling, Adam?
C
A
Okay, so the next one is from someone who is not on the call, and he is saying that the new helper that was added, I think one or two months ago, cannot be used in migrations that disable the DDL transaction. But we have a couple of merge requests that are doing this: they are disabling the transaction and they are using the helper. I can see, Adam, you are already suggesting some things. Do you want to take it? Yes?
B
First we add the column, then we pre-fill the existing data, and then we change, you know, maybe the constraint; it depends how you pass the parameters. When this whole thing runs in a transaction, each step might require a different kind of database lock, and the problem is that the lock will be kept for the whole duration of the transaction. So if you start by adding a column, that requires an exclusive lock on the table, which means that while this lock is active nobody can insert anything into the table, and then you kick off the next steps while still holding it.
B
B
Having
moved
forward,
of
course,
we
need
a
cop
for
this
tool
to
prevent
disabling
the
DBA
transaction
plus,
as
we
are
using
this
helper
method.
More
and
more,
we
actually
gathering
some
stats
how
it
performs,
how
many
retry
steps
we
are
doing,
there's
a
link
on
the
bottom
as
a
long-term
solution.
We
will
take
this
helper
method
in
our
migration
helper
method,
so
we
will
grab
specific
parts
of
our
migration
methods
with
locally
price.
For
example,
when
you
add
the
column
for
the
first
time,
we
don't
edit
for
the
data
migration.
B
I
B
So I think at some point we can start incorporating it within our database migration helper methods. And just to reiterate: it's usually a problem when you execute several different database calls within with_lock_retries; that's usually where problems can happen, because with_lock_retries executes the whole thing in one transaction, and when there is a complex process you don't want to do that.
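For illustration, a rough sketch of the pattern described above: wrap only the fast DDL step in with_lock_retries and run the slow data update outside of it, in a migration that disables the wrapping DDL transaction. The table and column names are made up, and the exact GitLab helper signatures may differ slightly.

```ruby
# Sketch only: table/column names are illustrative, assuming GitLab-style
# migration helpers that provide with_lock_retries.
class AddProcessedToWidgets < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  # Run outside a single wrapping transaction so each step can take
  # (and release) its own locks instead of holding them to the end.
  disable_ddl_transaction!

  def up
    # Fast DDL: retried under lock contention, and the lock is released quickly.
    with_lock_retries do
      add_column :widgets, :processed, :boolean, default: false, null: false
    end

    # Slow data backfill happens outside with_lock_retries, so no exclusive
    # lock is held on the table while rows are being updated.
    execute("UPDATE widgets SET processed = TRUE WHERE legacy_state = 1")
  end

  def down
    with_lock_retries do
      remove_column :widgets, :processed
    end
  end
end
```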
B
Good point, and I think we will work on this quite a bit when we start integrating the helper method into our other helper methods, because some methods that we have we simply cannot execute for large tables, simply because it's just unlikely that they will work. Using with_lock_retries would, I'm pretty sure, help running those migrations on large tables, but yeah, this will probably be several MRs to update the documentation and refactor the helpers.
A
C
In the case of remove_concurrent_index, and I think some other people also hit this: it doesn't accept an index name as the second parameter; it should either be passed as the name option or, I think, you should use remove_concurrent_index_by_name. And I discovered three more cases, actually; one of the MRs was mine, and then I found two more MRs which did it like that. So the down method would not actually remove the index. It's not very risky, but I filed a new MR, and then Adam made a comment that maybe we could really test up and down so we don't miss such things.
A
Yeah, very interesting, I hadn't noticed this. I think the long-term solution would be to add this to our CI, to make sure that every migration is, well, not just reversible, because they are reversible, but to ensure that the schema after reverting is the same as before the migration. And perhaps we could also add another cop about not using remove_concurrent_index in the down method and using remove_concurrent_index_by_name instead.
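For illustration, a small sketch of the kind of migration being discussed, assuming GitLab's add_concurrent_index and remove_concurrent_index_by_name helpers; the table and index names are made up.

```ruby
# Sketch only: table/index names are illustrative.
class AddIndexOnWidgetsState < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  INDEX_NAME = 'index_widgets_on_state'

  disable_ddl_transaction!

  def up
    add_concurrent_index :widgets, :state, name: INDEX_NAME
  end

  def down
    # Passing the index name positionally to remove_concurrent_index would be
    # treated as a column name and silently do nothing; referring to the
    # index by its name avoids that.
    remove_concurrent_index_by_name :widgets, INDEX_NAME
  end
end
```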
B
I think the reason we didn't notice this issue is that, when you look at the implementation of the remove index helper, first it checks if the index exists. So even if you roll back your database and the index is not removed, and you migrate again, nothing happens. That's probably one of the reasons why this could just go unnoticed, because I think we do have some kind of tests that revert the database schema. Can you confirm that?
B
B
D
Yeah, I think I've seen this problem too; I actually had a review today that had this problem. I think maybe one thing we could do in remove_concurrent_index, which expects the table name and the column name to be passed and just checks if the index exists: if we also looked at that column, maybe then we would be able to identify that the column name was not valid, and we could raise an error there.
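A minimal sketch of the extra check being suggested here; this is a hypothetical simplification for illustration, not the actual GitLab helper, and the error message is made up.

```ruby
# Hypothetical sketch of the suggested validation, not GitLab's real helper.
def remove_concurrent_index(table_name, column_name)
  # If the second argument is really an index name rather than a column,
  # fail loudly instead of silently skipping the removal (index_exists?
  # would just return false for a non-existent column).
  unless column_exists?(table_name, column_name)
    raise ArgumentError, "#{column_name} is not a column of #{table_name}"
  end

  return unless index_exists?(table_name, column_name)

  # Assumes this runs outside a transaction, as concurrent operations require.
  remove_index(table_name, column: column_name, algorithm: :concurrently)
end
```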
H
So now I have these jobs, and these are basically temporary data, because for now my understanding is we only want to be able to report those back to the user, to let them know: these are the issues that we could not import, go and look them up, see what's going on, maybe re-import specific issues by hand, or things like that. So my first thought is that I can just put them all in an array field, give them back to the user, and be done with them.
H
It can be the same, theoretically, if you are importing from two different Jira instances, for example an on-premise one and some online instance, or two different on-premise instances, and so on. Potentially, because what they are doing is taking the project key, sort of a short form of the project name, and then apparently the iid, like the one we have, and that way they generate a unique issue key.
C
Let me check how much data we have on production.
H
B
Just a suggestion: if you define an array column, that means we can potentially put tons of items in it, and these items will be stored on the same row. So when you load up a record from the database, everything that's in this array needs to be loaded into memory so you can render your list. In a really unfortunate scenario, when you want to import several thousand issues and each of them is failing for I don't know what reason, loading that list and rendering it could be a problem.
B
So you will start thinking about pagination, but you still have to load that array into memory. I don't know if there is a way to limit how many items to fetch from a Postgres array, but I think the safest option here is to create a separate table and have pagination on it when you list the issues.
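For illustration, a sketch of the separate-table approach being suggested; the table, column, and model names are assumptions, not the real schema.

```ruby
# Illustrative only: table, column, and model names are assumptions.
class CreateJiraImportFailures < ActiveRecord::Migration[6.0]
  def change
    create_table :jira_import_failures do |t|
      t.references :project, null: false, index: true
      t.text :external_issue_key, null: false # e.g. "PROJ-123" from Jira
      t.text :error_message
      t.timestamps
    end
  end
end

# Each failure is its own row, so listing them can be paginated with
# LIMIT/OFFSET (or keyset pagination) instead of loading one huge array:
#   JiraImportFailure.where(project_id: project.id).order(:id).limit(20).offset(40)
```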
H
And then another point, which I didn't add to the agenda, and I think we're close to time so I'll try to be quick: it's in regards to the mentions migrations. We had a couple of them that did time out in production and, although I did run them on Database Lab, used the indexes and so on, they still failed to run on production and timed out. What would be the best way to try and prevent those things, or are these just unlikely scenarios that you can't really prevent?
H
B
So I ran into a similar issue: a migration failed on production somewhere in the middle while it was trying to iterate over some items. After the improvement we proposed, what I did was generate all the queries that would potentially be executed on GitLab.com. For this you need to have the actual data, so you can generate the ranges and execute each query on Database Lab or on your database console.
H
It was also that I wanted it to run faster, because there are 300 million notes or something like that. I didn't want to iterate in batches of 10,000, where you could run it for a week or more, so I tried to get like a thousand of the rows matching the criteria that I need, and obviously at some point the distribution of the data is so wide that the query had to fetch a lot more rows.
H
B
Yeah, I'm not sure this would work for the mentions case, because the table is huge, but I think Andreas mentioned before that we can just iterate over ID ranges without actually running any filtering on the table, or only filters that are really fast, and then just generate the ranges and pass them to the background migration. That would produce tons of basically empty ranges where you don't have anything to migrate, but it would be much safer to schedule.
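For illustration, a rough sketch of scheduling background migration jobs over plain ID ranges, assuming GitLab's EachBatch concern and BackgroundMigrationWorker; the migration class name, batch size, and delay are made up.

```ruby
# Sketch only: migration/class names and sizes are illustrative assumptions.
class ScheduleMentionsMigration < ActiveRecord::Migration[6.0]
  BATCH_SIZE = 10_000
  DELAY_INTERVAL = 2.minutes
  MIGRATION = 'ExampleMentionsMigration' # hypothetical background migration class

  disable_ddl_transaction!

  class Note < ActiveRecord::Base
    include EachBatch # assumes GitLab's EachBatch concern
    self.table_name = 'notes'
  end

  def up
    # Walk the table by primary key only: no expensive filtering here, so
    # scheduling stays cheap even if many ranges turn out to be empty.
    Note.each_batch(of: BATCH_SIZE) do |batch, index|
      min_id, max_id = batch.pluck(Arel.sql('MIN(id)'), Arel.sql('MAX(id)')).first

      BackgroundMigrationWorker.perform_in(index * DELAY_INTERVAL, MIGRATION, [min_id, max_id])
    end
  end

  def down
    # no-op: already-scheduled jobs are not unscheduled
  end
end
```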
H
A
I think iterating over the table might be a better option than using the range-at-intervals helper with a filtered relation, because I think Andreas also explained the other day that it was failing because the ranges were not the same when they were scheduled and when they were fetched, due to the size of the interval. So yes, it might take some time, but I don't think there is any rush to finish it.
A
H
Do we have, like... I guess not, but just to be sure: is it okay if a migration like this runs for two or three weeks, or is there a time frame that we need to stay within? Should we rather, for instance, run one migration for the first 20 million notes and then another migration for the next 20 million notes, and so on, so have multiple migrations, each limited to a range?
C
A
We don't have an agreement about the timing; we have had migrations that lasted for weeks for larger tables, so I don't think that is a stopper. And regarding the splitting by ID, well, I'm not sure, I think that can be tricky, because if one migration fails again, then the other one might already be scheduled.