From YouTube: Database Office Hours 2021-02-10
A: All right, let's kick it off. Great to see so many people on the call. This is the database office hours call, and we're starting it off with a shameless plug for the database group: we're fortunate to have two more positions opening in the database group, and I've linked the role description and also a reference to Slack.
A: Cool. Second point on the agenda: last time we talked about thin clones using Database Lab and how to access them with psql, which is kind of powerful, and then we also talked a bit about what we can do with migration testing. There is a blueprint out there that got merged that explains a bit of the idea, and I would like to run through the workflow one time.
A: But we can expand that later. The idea is that it's in a state where I think we can open it up to database maintainers, if that makes sense. So basically, you would have access to the testing pipeline, and you can actually see migrations execute on the thin clone, the copy of the production database, and there is some feedback coming back to the merge request. That's basically what I would like to share today.
A: Okay, so basically we start very simple: you have a database migration. I prepared one of these, so we can take a quick look. In the db folder you have a post migration, that's something that we've been working on, basically dropping a table. This is going to drop one of the archive tables for audit events; it also drops a function and drops a trigger. Very, very simple.
A: There is a bit more to that change. If you look at the full change, there are a couple more changes that I'm going to add. This is related to the migration testing, and I haven't been able to merge that today, but this is sort of included. And now, basically, we push that.
A: And then, whenever there is a database change, we have some rules for that. So we would basically be able to know that there is a database migration included, and if that is the case, an additional job (a couple, actually, but this is the one we're interested in) is added to the pipeline, and this is the one that kicks off the testing pipeline.
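The rule described here, detecting whether a change includes a database migration, can be sketched like this. This is a minimal illustration, not GitLab's actual CI configuration; the db/migrate and db/post_migrate paths are assumed Rails conventions:

```python
# Illustrative sketch: decide whether a set of changed files includes a
# database migration, the way a `rules: changes:` clause in .gitlab-ci.yml
# would. The directory names follow Rails conventions and are assumptions.
MIGRATION_DIRS = ("db/migrate/", "db/post_migrate/")

def includes_db_migration(changed_files):
    """Return True if any changed file lives in a migration directory."""
    return any(f.startswith(MIGRATION_DIRS) for f in changed_files)

print(includes_db_migration(["app/models/user.rb"]))                    # False
print(includes_db_migration(["db/post_migrate/20210210_drop_archive.rb"]))  # True
```

In the real pipeline this decision is made declaratively by CI rules; the function just makes the logic explicit.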
A: The testing pipeline lives on a different GitLab instance, the ops instance, mostly for security reasons, and this job is basically only kicking off that pipeline. It actually runs at the beginning, so it has no dependencies and it runs very quickly. So what you can do is go in and see the triggered downstream pipeline.
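The trigger job boils down to one fire-and-forget API call to the other instance. A sketch of the request it would build, assuming the documented GitLab pipeline-trigger endpoint; the host, project ID, and variable names are invented for the example:

```python
# Hypothetical sketch of the trigger job: build the request for GitLab's
# pipeline-trigger API (POST /api/v4/projects/:id/trigger/pipeline) on the
# ops instance. It only fires the downstream pipeline; it does not wait for
# it, which is why the upstream job finishes quickly.
from urllib.parse import urlencode

def build_trigger_request(ops_host, project_id, ref, token, variables=None):
    url = f"https://{ops_host}/api/v4/projects/{project_id}/trigger/pipeline"
    form = {"token": token, "ref": ref}
    for key, value in (variables or {}).items():
        # Variables are forwarded to the downstream (testing) pipeline.
        form[f"variables[{key}]"] = value
    return url, urlencode(form)

url, body = build_trigger_request(
    "ops.example.com", 42, "master", "secret-token",
    variables={"UPSTREAM_MR": "12345"},
)
print(url)
```

Because nothing waits on the downstream result, a failure there cannot block the upstream merge request, which matches the async behavior described later in the call.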
A: You can see I'm basically preparing a couple of images, and not much more, in the prepare stage. And then this is the stage that you're interested in, with the job which basically creates a thin clone and runs the migrations. This is a manual job, so that it doesn't run automatically for all the merge requests and pipelines that we currently have. This is again for security reasons: we still want someone coming in, looking at the change, and checking that the migration looks fair enough.
A: If you see other migrations executing there, it's because they are still pending: even though they're not included in the actual change, we have to execute them. So, depending on how often and how fast we deploy, we will see more or less of these unrelated migrations.
A: In our case, it should only be the one that we're interested in, and this is what we're actually seeing: dropping the trigger and the function, and then dropping the table. Hopefully there are no more migrations, and there are none. So this is the only thing that got executed, and then we should be able to go back to the original merge request.
A
This
is
the
one
that
I
created,
and
this
one
is
gonna
get
already
has
gotten
some
some
feedback
from
the
pipeline.
A: I don't think the table is that large, so maybe there is still a problem with that estimate, but this is the idea. Basically, you get some feedback like this, and we plan to add a lot more, whatever we find interesting. For example, you could also get statistics on the queries that are being executed in the migration, based on pg_stat_statements. There's a long list of ideas, and I'm sure we're going to find more stuff that is interesting that we can add.
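The pg_stat_statements idea could feed the merge-request feedback directly. A sketch of turning such statistics into a markdown comment; the row shape (query, calls, total milliseconds) and the table layout are assumptions, not the pipeline's actual format:

```python
# Sketch of possible pg_stat_statements-based feedback: take per-statement
# statistics captured on the thin clone while the migration ran, and render
# them as a markdown table to post back on the merge request. The row shape
# and column names here are assumptions for illustration.
def stats_to_markdown(rows):
    """rows: iterable of (query, calls, total_ms) tuples."""
    lines = ["| Query | Calls | Total time (ms) |", "|---|---|---|"]
    for query, calls, total_ms in sorted(rows, key=lambda r: -r[2]):
        lines.append(f"| `{query}` | {calls} | {total_ms:.1f} |")
    return "\n".join(lines)

rows = [
    ("DROP TABLE audit_events_archived", 1, 12.3),
    ("DROP TRIGGER archive_trigger ON audit_events", 1, 0.4),
]
print(stats_to_markdown(rows))
```

Sorting by total time first puts the most expensive statements at the top, which is usually what a reviewer wants to see.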
A: So, just as a quick recap, what would happen is: somebody creates a merge request with a migration, it gets reviewed in the beginning, and once we're confident about it, a database maintainer can go in, locate the latest pipeline, go to that job that we looked at, find the downstream pipeline on the ops instance, and go in and manually execute that migrations job. That allows the maintainer to see the full output of the migration, and we also get feedback on the merge request.
A: This is what we've seen here, and that's pretty much it. When you want another iteration of this, it works the same: you could again see the job, go to the downstream pipeline, and kick it off again. You can do that as many times as you want. Of course, it's not happening automatically, because otherwise we would get some feedback on every migration run, basically, or every pipeline.
A: Cool, yeah, that's all I had. Any thoughts? Much appreciated.
B: In which cases is it going to fail? That's what I was investigating in the document.
A: So the migration pipeline would fail, for example, if you run into a statement timeout, or, you know, there's something wrong with the migration. You would still get this feedback; this works despite the migration failure. But yeah, basically anything that can go wrong in the migration would cause the pipeline to fail. Is that what you meant?
B: Yeah, so the merge request will be unmergeable because there is a statement timeout, that's what I mean.
A: Okay, no, that's not the case right now. The job on the GitLab pipeline is really just triggering the other pipeline, and it's not even waiting for that pipeline to succeed or anything. It's an async kind of thing, and so this is not a blocker for the merge request.
B: Yeah, okay. Second question: there were very large data migrations. I think we are doing them too, like migrations which take days, or I heard that there were migrations taking months in that issue. How will that behave in that scenario?
A: So this is about background migrations, right? We can have very long background migrations. What would happen right now is that on the testing pipeline, we would see the scheduling part of those migrations: we have a migration that basically enqueues a lot of jobs into Redis, and this is something you can see. This shouldn't take too long; it can also take hours, but it shouldn't take, you know, months.
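The scheduling part mentioned here can be sketched as splitting an ID range into batches and enqueueing one job per batch. The batch size and job name below are invented for the example; in GitLab the jobs would land in Redis for Sidekiq to pick up:

```python
# Illustrative sketch of the "scheduling part" of a background migration:
# split a table's id range into fixed-size batches and collect one job per
# batch. In the real system each tuple would be enqueued into Redis for
# Sidekiq; the job name "BackfillExampleColumn" is hypothetical.
def schedule_background_jobs(min_id, max_id, batch_size):
    jobs = []
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        jobs.append(("BackfillExampleColumn", start, end))
        start = end + 1
    return jobs

jobs = schedule_background_jobs(1, 2500, 1000)
print(len(jobs))  # 3 batches: 1-1000, 1001-2000, 2001-2500
```

Scheduling is cheap (one enqueue per batch), which is why the testing pipeline can show it, while actually executing every batch could take days.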
A: The actual execution of those jobs is currently nothing you would see from the testing pipeline: they get scheduled into Redis, but there is no Sidekiq running, so nobody's picking up those jobs. One of the things that we are discussing is basically picking a few of those jobs at random, executing them on the thin clone, and getting statistics from that, instead of running all of them. But that's for a later stage of that project, I think.
A: Yeah, it runs inside; we have dedicated CI runners for that project, two of them, and the migrations execute directly on the runner, which has GitLab checked out and installed, and all that.
A: So the idea is that, basically, maybe from next week we would want to invite database maintainers, and I would want to be a bit careful with that, you know, not tell everybody.
A
Like
the
database
maintainers
and
then
see
how
that
goes,
I
think
you
know
when
we
do
that
in
parallel
it
needs
a
lot
of
resources
on
the
on
the
back
end.
So
that's
there
may
be
some
bumps
in
the
beginning.
A: So the idea is, from next week, maybe, to invite database maintainers so they get access to that pipeline and then, yeah, start using it, and we can get feedback from that and iterate on it.
A: Right, if you have more comments, please let us know on Slack, or next time, or on the MR.
C: Sure. So, a number of people have been using the Postgres.ai shared links. Some of them have been the public links, and sometimes it's not the public links, which Peter's calling out, which is really good, because we definitely should talk about that. But right now the documentation specifically calls out depesz, and I wonder.
D: But yes, I like the idea to provide public links where possible. Obviously, sometimes it's not possible if it contains confidential data, but that doesn't happen very often, I guess.
B: Yeah, totally, I agree. Initially, the Postgres.ai link was private; I asked for a way to have a public link, and that's now what we have. If you want to put it in the documentation, and if there's no licensing obstacle or something, Craig, maybe we can just make it the default, depending on how we want to continue with Database Lab versus Postgres.ai for GitLab team members plus the community members, if that makes sense.
C: Really good question. I don't know anything about how this stuff is set up, or whether or not we can make them public by default. But that would be super great; otherwise I'll put up a merge request to the documentation just talking about making them public for now.
D: Yeah, I've just added something to the document as well. I was thinking about maybe teaching Postgres.ai to somehow emit a template which could be used in merge requests.
D: So people don't actually forget all the details, for example the query plan, the link to the query plan, and all that stuff. So people would need to use Postgres.ai, which we are encouraging anyway, and instead of copying and pasting the results into the merge request and picking parts out of it, it could generate ready-made markdown to be just copied, pasted, and put into the MR.
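The template idea could look something like this: given a query, its plan link, and a timing, emit ready-to-paste markdown. The field names and layout are assumptions, and the shared-link URL is a made-up example, not a real Postgres.ai link:

```python
# Sketch of the proposed template emitter: given a query and a hypothetical
# Postgres.ai shared-link URL for its plan, build ready-made markdown to
# paste into the MR so no details get forgotten. Layout is an assumption.
def mr_snippet(query, plan_url, total_ms):
    return "\n".join([
        "#### Query plan",
        f"Query: `{query}`",
        f"Plan: {plan_url}",
        f"Total time: {total_ms} ms",
    ])

snippet = mr_snippet(
    "SELECT count(*) FROM audit_events",
    "https://postgres.ai/shared/example",  # placeholder link
    84.2,
)
print(snippet)
```

Emitting a fixed structure is the point: every MR comment ends up carrying the same required details (query, plan link, timing) without anyone re-typing them.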
A: Yeah, that makes a lot of sense. We were even going one step further and thinking: shouldn't a merge request have a way to let you attach artifacts like this? Today you can have a comment, you can pull in that markdown, or you can do the same thing for the database testing pipeline. But what if you also had a tab on the merge request that allows you to, you know, add query plans?
A: Maybe call out particular queries that are interesting, and then there is some automation that we're building (again related to database testing) that picks up those queries and gets the plan for you, based on the thin clone and all that. That would be kind of nice, getting that closer together, so that we don't need to write summaries and stuff like that. Or, you know, just being able to copy markdown, that's a good step for sure.
D: So you just mention the GitLab bot and say, okay, do the query for me on the Database Lab instance, so you don't have to go to Slack and paste your query there, and the bot would do the analysis and comment back on your comment in the MR, with all the information you need. That would be interesting, but a long way to go, I know.
A: Perhaps we're already halfway underway in that direction with the testing pipeline. I mean, one thing is how you trigger that, and mentioning the bot sounds like ChatOps, next level, on the merge request, which is nice. That could also trigger the testing pipeline, and that already interacts with Postgres.ai on the back end, so with Database Lab. So I think we can sort of think about expanding that to getting query plans and that feedback as well.
A: That does not seem to be the case, so I wish you all a good day. It was nice seeing you on the call, and I'll see you next time.