From YouTube: Database Office Hours 2020-04-16/APAC
C
So I noticed some inconsistencies, some issues in some of our tables, and they are quite large tables, for merge requests and issues. We maintain a separate table to store metrics, like how many comments we have for the merge request, when the first commit was made, and when the merge request was deployed. Unfortunately, we don't have a unique constraint on the merge_request_id or issue_id, even though it's supposed to be a one-to-one relationship, and yeah.
C
It can easily cause inconsistencies when we are doing statistical calculations: when we join this table, it might count a number twice. So I was thinking about cleaning this up. I have an idea how to do that in a safe way, but I'm not a hundred percent sure it's the safest approach. I was trying to search for it, you know, maybe there is a nice existing pattern for these kinds of migrations, but no, I haven't found anything.
C
So the idea is to add an additional column to the table where we track which record is actually unique and which one is not. Of course, by default each record will be marked as not unique, and the background migration will slowly clean up the duplicate records and update this additional column. And, of course, for new records after the code is deployed, we will ensure uniqueness via a partial unique index. So that's the idea, and so far it seems to be working.
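A minimal SQL sketch of that shape, with assumed names throughout (merge_request_metrics is hypothetical; is_unique follows the column the speaker mentions later):

    -- Track which rows have been verified as unique; existing rows start false.
    ALTER TABLE merge_request_metrics
        ADD COLUMN is_unique boolean DEFAULT false NOT NULL;

    -- Enforce uniqueness only for verified rows, so existing duplicates
    -- do not block index creation; new rows are inserted with the flag set.
    CREATE UNIQUE INDEX CONCURRENTLY index_metrics_on_mr_id_unique
        ON merge_request_metrics (merge_request_id)
        WHERE is_unique;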
A
Native database features. But yeah, the only thing that could be a risk is that the scheduling migration needs to go through the whole 40 million records, and we should just make sure this will not time out with bigger offset values, because that's basically the risky part: those migrations failing means failed deploys that need to be reverted. Otherwise, if the background migration itself fails, it's not that big a deal, you will be in the same state. So I like the idea with adding the additional column, so that we can have a unique index but also tolerate the duplicates.
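One way to avoid the big-offset risk is keyset pagination, walking the primary key in ranges instead of using OFFSET; a sketch with the same assumed names:

    -- OFFSET n scans and discards n rows, so each batch gets slower.
    -- Walking the primary key keeps every batch equally cheap:
    SELECT id
    FROM merge_request_metrics
    WHERE id > :last_seen_id   -- resume after the previous batch
    ORDER BY id
    LIMIT 10000;               -- schedule one background job per batch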
A
The Rails upsert has the other downside that you can't specify what happens on conflict: it just updates all the other columns, even the timestamps. So if I do it, I'd just do it in raw SQL, but like I said, a safe find_or_create would do the same. So it's just a detail, not something that will change the approach.
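For comparison, raw SQL lets the conflict action name exactly which columns to touch; a sketch assuming the partial unique index above and a hypothetical comments_count column:

    INSERT INTO merge_request_metrics (merge_request_id, comments_count, is_unique)
    VALUES (123, 5, true)
    ON CONFLICT (merge_request_id) WHERE is_unique
    DO UPDATE SET comments_count = EXCLUDED.comments_count;
    -- Only comments_count is rewritten on conflict; timestamps and the
    -- other columns of the existing row stay untouched.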
C
We don't have too many duplicates, so it's just an iteration over the table. The queries themselves might be just all right, because we always take 10,000 records out of an index, and then for those 10,000 records find the duplicates in the whole index. And since the index is sorted, I guess this will be a relatively fast operation. It will take a few days while we go through all the entries we have.
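A sketch of that per-batch duplicate lookup, again with assumed names:

    -- Take one 10,000-row slice by primary key, then resolve each key
    -- against the whole sorted index to see if it occurs more than once.
    SELECT merge_request_id
    FROM merge_request_metrics
    WHERE merge_request_id IN (
        SELECT merge_request_id
        FROM merge_request_metrics
        WHERE id BETWEEN :batch_start AND :batch_end   -- current slice
    )
    GROUP BY merge_request_id
    HAVING COUNT(*) > 1;   -- keys duplicated somewhere in the table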
C
Yeah, I have one concern, and I probably missed it from the merge request. So after I clean up some records, or even if I don't clean up, I would update the is_unique column, and that effectively rewrites the whole table, right? Because I need to say, hey, I already checked these records, these are now unique because I cleaned them up. And if I loop over the whole table, that's lots of writes, so I need to...
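Batching the flag updates bounds that write churn per step; a sketch, assuming the batch has already been deduplicated:

    -- Each UPDATE writes a new row version in PostgreSQL, so flipping the
    -- flag in small slices lets autovacuum keep up instead of rewriting
    -- the whole table in one burst.
    UPDATE merge_request_metrics
    SET is_unique = true
    WHERE id BETWEEN :batch_start AND :batch_end
      AND NOT is_unique;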
C
Actually, I have a hack for that too, to check this. So we have production console access, right? So I was able to write a small script that takes the query out of my running Rails instance, types it into the console, and gets back the plan, and for each iteration I am actually able to get an execution plan. It takes a while, but it's doable.
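That per-iteration check amounts to prefixing the generated batch query with EXPLAIN, roughly:

    -- Plain EXPLAIN only plans the statement; EXPLAIN (ANALYZE, BUFFERS)
    -- would also execute it and report actual timings.
    EXPLAIN
    SELECT id
    FROM merge_request_metrics
    WHERE id > 1234567
    ORDER BY id
    LIMIT 10000;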
A
I was told once by Victor that we should avoid running long-running queries on the production console, because of something that can happen there. I don't remember exactly what it was, delaying replication maybe, because obviously they are not completely isolated, but...
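One guard for such ad-hoc sessions, an assumption here rather than anything stated in the meeting, is a session-level statement timeout:

    -- Cancel any statement in this console session after 15 seconds,
    -- bounding how long an exploratory query can run on production.
    SET statement_timeout = '15s';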