From YouTube: Database Office Hours - 2019-08-29
A: Hello, sorry I'm late. Good to see you all. I think this time the video is recorded automatically, so you don't have that hassle anymore. Before we start, I wanted to point out that we have a few people joining the database review effort, and that's really great: we have seven people now, I think, doing database reviews, and I really appreciate that. So thank you for joining and digging into that. That's really great.
A: I have a first point on the agenda from a few weeks back. I just wanted to point to a video explaining two optimization techniques, a combination of a union push-down and a top-k optimization. The video is not really entertaining, but the techniques are quite universal and might be interesting to look at.
B: I'd have to check; I don't have one at hand, but I think there should be one in the comments, like the first comment on the merge request. This is one of the examples, but they're working on improving it further. We want to have it behind a feature flag, and it was tough to make that happen, which made it a little bit complicated, and yeah.
A: Actually, I found that the gem also edits the query, so basically it would send the whole comment to PostgreSQL. So even when you look at the Postgres logs, you'd be able to tell where those queries come from. That's also kind of interesting, because we log, for example, all the slow queries above one second, and then you can easily relate them back to the code base. Makes me think that this should maybe also show up in the performance bar, because the query is the same. But I don't know.
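
For illustration, this is roughly what the combination looks like: Postgres logs statements above a one-second threshold, and a query-annotation gem (Marginalia is one such gem; the transcript does not name the one in use) embeds a comment identifying the calling code. The log line below is invented:

    # postgresql.conf: log every statement slower than one second
    log_min_duration_statement = 1000

    -- What an annotated slow query could look like in the Postgres log:
    LOG:  duration: 1274.5 ms  statement:
          SELECT "projects".* FROM "projects" WHERE "projects"."id" = 42
          /*application:web,controller:projects,action:show*/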
A: I don't know if I have the full picture, but I remember the one issue, and that was related to the archive replication. My understanding is that what we do to replicate to the second data center is archive replication; it's not streaming replication. That means the clusters are not directly connected across data centers. Instead, the Geo node basically consumes the write-ahead log from the archive: it looks at what is in the storage bucket, grabs those WAL segments, applies them locally, and that works fine.
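
For illustration, archive-based recovery on the consuming side boils down to a restore_command that fetches segments from the archive. A minimal sketch, assuming WAL-E (which comes up later in the discussion) and the pre-PostgreSQL-12 recovery.conf convention; the actual GitLab settings may differ:

    # recovery.conf on the downstream (Geo) Postgres instance
    standby_mode = 'on'
    # Fetch WAL segments from the archive bucket instead of streaming from a primary:
    restore_command = 'wal-e wal-fetch "%f" "%p"'

    # ...and on the upstream side, postgresql.conf ships segments to the archive:
    archive_mode = on
    archive_command = 'wal-e wal-push %p'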
A: That works as long as you don't mess up the archive, and I think that is what happened here. It's also something that happens more frequently, unfortunately, when we do a failover on the main cluster, and these days those happen more often. There is a chance that we actually mess up the archive storage, because the former primary would basically still push to the archive storage, even though that segment is not present on the actual timeline.
A: There is a timeline switch when we fail over, and the former primary still pushes data to the old timeline. When the downstream cluster, or rather the downstream Postgres instance, is fast enough to catch that data, it's basically branched off onto the wrong path. I don't know if that makes sense, but it sort of applied what the former primary pushed to the archive, even though that is not part of the main timeline that the rest of the cluster follows.
B: Yeah, that makes sense. So I understand the archive replication part, but why do we need WAL-E for this process? I went through some documentation about doing that yourself in Postgres, and it does not mention WAL-E, so I'm not sure: how does that help?
A: So WAL-E is a tool that helps us implement the archive recovery. You can use different tools for that, but basically we're doing archive recovery, and what you get from that is that you totally decouple those environments. There is no way the downstream cluster could have an effect on the upstream primary location, because they are decoupled; they only talk to the archive. The alternative is to do streaming replication.
A: With streaming replication you would basically have the downstream cluster directly connected to an upstream replica or the upstream primary, and that can have an impact. For example, when you run very long queries on the downstream replica, that may have an effect on the upstream cluster, which may be unlikely in the case of Geo, I don't know.
B: I think... well, Stan mentioned the other day that he wants to have it that way, because streaming can put stress on the primary, and we don't want that. That's one of the reasons. I've seen discussion, I think in that issue, about whether we should use streaming replication; I don't know if that would be a good idea to do.
A: There is also another option: you don't necessarily have to connect all the replicas to the primary. You can also have cascading replication, where basically a replica talks to a replica, which talks to the primary; that works. So if you want to avoid the impact on the primary, we could also think about having a dedicated replica just for the purpose of feeding Geo, if that makes sense. I don't know, just an idea.
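
For reference, a cascading standby is mostly a matter of pointing the downstream instance at another replica instead of the primary. A sketch in the pre-PostgreSQL-12 recovery.conf style, with an invented hostname:

    # recovery.conf on the dedicated feeder replica
    standby_mode = 'on'
    # Stream from an existing replica rather than the primary, so the extra
    # WAL fan-out and long-running queries never touch the primary itself:
    primary_conninfo = 'host=replica-1.example.com port=5432 user=replication'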
B: I think that's it for this point; I think we can move on to the next one, about what was happening last week with the incidents related to PgBouncer. There was a lot of stuff going on there. There was elevated latency for several days around the same time, and there are a lot of theories; you can see them in the root cause analysis, the last issue I linked here. Stan also talked a little bit about that the other day.
A: The setup was: you have database instances, one of which is the primary, and each of them runs a PgBouncer, and the clients connect to this PgBouncer to talk to that database instance. A few weeks back we started seeing problems where the PgBouncer process was maxing out a CPU. It's a single-threaded process, so it was maxing out one CPU, and we...
A: There is a little bit that needs to be done about it in terms of how we route the traffic to those PgBouncer instances, but I think this is what we're currently looking at. Odyssey is also something that we wanted to test, I think, but it's not like we're going to immediately switch to that; that's my guess.
B: Yeah, as far as I understood, they solve the same problem in much the same way: because PgBouncer is single-threaded and Odyssey is multi-threaded, having multiple PgBouncers would be similar to using Odyssey. Of course, Odyssey needs firm testing before that can happen.
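
For illustration, the multiple-PgBouncer workaround amounts to running several single-threaded processes on one host, each listening on its own port, with something in front to spread the traffic. A sketch only; the database name, ports, and pool settings are invented, not GitLab's actual configuration:

    ; pgbouncer-1.ini (a second instance would use e.g. listen_port = 6433)
    [databases]
    gitlabhq_production = host=127.0.0.1 port=5432

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    pool_mode = transaction
    max_client_conn = 2048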
B: But it may be interesting to switch to the next point I had, because during the incidents Myra and I were trying to troubleshoot with Christopher, to look at the incidents, what was happening and how we could resolve it. I personally didn't have a lot of knowledge about how the setup works, which components are used, and how the production architecture is set up, and I was like: well, what do I need to do about it?
B: So we were wondering what the duty of a database maintainer is. I always understood that it was doing database reviews and helping people out, making queries performant and stuff like that, but I think there's maybe a little bit of a side responsibility about making sure that production works correctly.
A: That's a good question. I would also have said that, at least until now, database maintainer, or database reviewer, means actually doing code reviews, and in a sense you're also guarding the site, because we usually look at the scale of the database at GitLab.com and try to make the change work for that, right? So I guess, naturally, you get pulled in when there is a problem with the database in terms of queries not performing or a migration blowing up.
A
Personally,
I,
don't
I,
don't
think
that,
like
any,
there
is
no
operational
responsibility
for
database
maintainer.
Just
like
at
least
my
understanding
is
that
there
is
no
operational
responsibility
for
Beckerman
teeners
as
well.
Of
course,
people
tend
to
jump
in
when
they
know
something
about
it,
but
it
would
be
new
to
me
when,
when
there
was
any,
if
there
was
any
like
expectation
towards
those
kind
of
things
well,
I'm
yeah
I
think.
On
the
other
hand,
I
think
that
was
something
that
there
is
a
lot
of
stuff
going
on
at
the
moment.
A: Something I can add: you also mentioned the DBRE role, and that is also something that is currently in transition, because we don't have a lot of people in that role. A few weeks back we engaged a consultancy to help us with the database stack, and it currently looks like they're actually going to share ownership of the database stack for GitLab.com, together with one team in infrastructure. So this is currently a bit of a transitioning state, I think.
B: Also, it was more like helping out with reviews and stuff like that, and in a sense it's closely related, like delegation: you know which merge requests were done and which migrations were added, so that helps. It also helps in production with other backend stuff that makes tweaks on production, let's say it like that. So yeah, I'm not planning to have more operational responsibilities. I'm not sure if, Myra, you have something to say on that?
C: Well, I feel the same. I mean, we joined the incident call last week because, I don't remember who pinged us, I think it was Christopher, but it was not a very informative session for me, because I didn't have anything special to add, since I am not very knowledgeable on the reliability side of the database. And it is also a bit confusing where the limits are drawn for a database maintainer, because we don't have a definition of the database maintainer role, or at least we couldn't find one in the handbook.
A: Yeah, definitely a good point; we should define that.
B: Let me go ahead to the next agenda point I mentioned: the question we had about storing issue and merge request description history in the database. They asked the database team for opinions. I wasn't sure it was something that would scale really well, so I was wondering what your take was on this.
A: I would have said... I think there is a bit more likelihood of getting chosen by the roulette, if I recall correctly, so you would get more pings as a trainee maintainer than before, and generally I think there is maybe also a bit more dedication towards becoming a maintainer when you're on that path, I think.
A: All right, on that topic, and then we can jump to the other one: I just wanted to note, and there is already a link on the agenda, that there is also a proposal out there for a database team, that is, an engineering team. That adds a bit to it; we're in a bit of a transitioning phase at the moment, figuring that out. But the proposal is out there, for a team dedicated to those database changes. You can read all about it there if you want.
A: And I remember that I was just concerned with putting that stuff, the diff versions, into the database, because of the size of those diffs. They could be huge texts, and in other places we've seen that we actually already store a lot of data when it comes to merge request diffs. So I was a bit worried about that, but then we looked at the numbers.
A: We looked at statistics for the volume of description sizes for MRs and issues, and it turns out that over, what was it, 30 days I think, we would have an average of 14 megabytes of descriptions changed. So it's probably not a lot of data that would come in there if you store it in the database.
A: Cool. Should we go for table sizes and so on at GitLab.com?
C: Yes, so that is a question that I have been asking myself. Normally, to get the size of a table, I go to the console and count the records on that table, but I'm not sure how to get the size in megabytes. Is there a way to get that for GitLab.com? I mean, having access to the console might be a way, but I'm not sure if we have another, easier way.
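
For reference, Postgres can report table sizes directly, which avoids counting rows. A minimal sketch from a Rails console; the table name here is just an example:

    # Total on-disk size (table + indexes + TOAST) of one table, human-readable:
    ActiveRecord::Base.connection.select_value(
      "SELECT pg_size_pretty(pg_total_relation_size('merge_requests'))"
    )
    # The same works from psql:
    #   SELECT pg_size_pretty(pg_total_relation_size('merge_requests'));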
C: And, well, the other question is one that I also get a lot. When we are migrating data, we normally use a background migration if the table is large, like the namespaces table or the projects table or the merge requests table. But what if the table is small in number of records? Do we still need to use a background migration, or can we do it in a normal migration, a regular one?
C: Well, there are some cases in which we want to migrate information from one column to another, and there are only like 200 or 500 or 600 records, in which case I think the operation is going to be quite fast. So we might not need to do it in a background migration; we might simply do it in one single query directly in the migration. But that approach is not documented anywhere.
A: You would still have the same concerns about batching. It depends on the table, obviously, but batching is still something that needs to be considered. That said, I think you can do the same thing in a post-deploy migration as in a background job. If it just takes a few seconds, I don't see the reason for putting that in a background migration, really. Do you, or is there any reason why we would want to?
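
A minimal sketch of the single-query variant being discussed, as a regular post-deploy migration; the table and column names are invented for illustration:

    # Hypothetical post-deploy migration copying one column to another.
    class CopyLegacySettings < ActiveRecord::Migration[5.2]
      def up
        # For a few hundred rows one statement is fine; a large table would
        # still want batching, as noted above.
        execute(<<~SQL)
          UPDATE settings
          SET new_value = legacy_value
          WHERE new_value IS NULL
        SQL
      end

      def down
        # The copy is not reversed.
      end
    end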
A: What does it mean to migrate the data, though? Maybe it also sort of depends on the heaviness of the calculation. Recently there was a security-related thing where we were not able to run the update by querying the database for a full batch; instead we were calculating fingerprints, or checksums, in the migration itself, and then you fall back to doing single-record updates, and that can actually take a while.
A: Yeah, I would say so. The overhead of doing it in a background migration is basically that you always also need a normal migration, because you want to kick off that job, and you implement the work in a different place, namely in the background migration. But other than that, there is no additional complexity from using a background migration, right?
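
For context, the kick-off migration mentioned here typically just enqueues the job. A rough sketch; the worker call follows GitLab's background-migration pattern, but the class and argument names are invented:

    # Hypothetical post-deploy migration that only schedules the real work.
    class ScheduleLegacySettingsCopy < ActiveRecord::Migration[5.2]
      def up
        # The copy logic lives in a separate background-migration class;
        # this file only enqueues it with an ID range to process.
        BackgroundMigrationWorker.perform_in(5.minutes, 'CopyLegacySettings', [1, 10_000])
      end

      def down; end
    end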
C: Well, last week a merge request was merged, and in this merge request there was a migration coded with ActiveRecord, and we thought that it was going to be okay, because the ActiveRecord code was executing a service that was actually planned to be executed in a migration. But it caused some unforeseen problems, nothing major I think, but it might be a good reinforcement that we really shouldn't add ActiveRecord code in migrations.
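
The usual alternative, sketched here with invented table and column names, is to give the migration its own minimal model instead of referencing application models or services, so later changes to the app code cannot break the migration:

    class BackfillWidgetTokens < ActiveRecord::Migration[5.2]
      # Migration-local model: a frozen snapshot of just what this file needs.
      class Widget < ActiveRecord::Base
        self.table_name = 'widgets'
      end

      def up
        Widget.in_batches(of: 500) do |batch|
          batch.update_all('token = md5(random()::text)')
        end
      end

      def down; end
    end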
C: It is actually the case that Ruben talked to us about in one of the past office hours, and we decided to put the service inside the migration, because otherwise... well, copying and pasting the service inside the migration turned out to produce a migration of several hundred lines, something like that; it was huge. So yeah, it was interesting. And I do have a question: it seems that new instances don't run migrations?
A: Good question. I'm not aware of another place where we do that; maybe that's something the distribution team can help with. I guess it's sort of tied to what you do when you install a new instance, right? So I'm guessing this has to do with how we set up the environment when you first install GitLab.