From YouTube: 2020 07 21 Database Team Weekly
A: There were a couple of things that stopped us, and we were trying to wrap things up last minute on Friday, so we paused on deploying that. So we didn't hit the goal, but let's run through these: we were able to run through once more on GCP and test it again. Did we do staging?
A: Here, I guess that was Andreas. You put that note there, "wait for it to be tagged." Are we going to wait for the tag to run in staging?
B: Yeah, so once it's merged to master it's going to get deployed. One thing we want to avoid is that it goes into the 13.2 release; that's why we're not merging it yet. But the release is going to be tagged today, so it should be good to go tomorrow. And the dependencies on the compliance group?
C: Yeah, I think we can get that scheduled. I tested the fix I had on Friday, and it seems good. I can also talk to Meyer, I guess she had been working on that previously with the release managers, and see what she thinks, but otherwise I think it can go out.
B: I just saw the timeline, and it says we do the tagging on the 21st, but we've had some trouble with the deploy recently. So maybe that's why.
B: This week I would like to wrap up a couple of small issues we've been dragging along, or I've been dragging along. I'm already working on those and would like to finish them out. We have quite a lot of issues in the milestone, and I would like to get the smaller ones done before my vacation.
B: Yeah, we already captured that in the point above, right? So we're waiting for the tag. Once the release is tagged, we'll merge the MR with the migration, and that should get deployed, maybe tomorrow, and we should be able to see that migration happening from tomorrow.
B: And we still have to get... sorry, that is point two. Point three is about the other change for development.
C: Yeah, there are a couple. There's a small one here that I'm going to do, which is a minor one, just updating the syntax for CREATE TRIGGER. That's, as Andreas said, a small one that we can knock out easily and get out of the way. Other than that, I'm going to be working on this issue with the verification query that runs the SELECT 1. I don't know that that will be wrapped up by this week.
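For context, the verification query mentioned is the classic liveness check: running `SELECT 1` to confirm a connection can execute anything at all. A minimal sketch of that idea, with SQLite standing in for Postgres and `connection_alive` being a made-up helper name, not the actual GitLab code:

```python
# Sketch of a connection liveness check via "SELECT 1".
# SQLite is a stand-in here; the real check would run against Postgres.
import sqlite3

def connection_alive(conn) -> bool:
    """Return True if the connection can execute a trivial query."""
    try:
        row = conn.execute("SELECT 1").fetchone()
        return row == (1,)
    except Exception:
        # Any error (closed connection, network failure, ...) means "not alive".
        return False

conn = sqlite3.connect(":memory:")
print(connection_alive(conn))  # True
conn.close()
print(connection_alive(conn))  # False
```

The point of `SELECT 1` is that it touches no tables, so it only tests the connection itself, not the schema.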
A: Okay, anything else we want to cover for goals for this week?
B: Yeah, we had this last week. I think we were trying to catch up async, but perhaps we have a couple of minutes to talk about it. We can also talk about it later if that's better.
B: Cool. So, Josh, I'm not sure I totally remember, but we all discussed the metrics. Is there a place where we define that metric we were talking about on that issue? I might have missed that one.
B: How does that feed into the north star metric? Is that going to be a mix of self-hosted single-node and GitLab.com?
D: Yeah, so ideally, we eventually support... we're adding support for manually configuring the endpoint URL for multi-node instances, at which point we should be able to hit GitLab.com. And then we're also working to explore better support for multi-node instances, just to make sure we have a broader base of instances to pull from. But, you know, the goal with this is to have a metric that we can use to understand...
D: ...you know, whether the database group is driving things in the right direction, whether it's getting worse or getting better; kind of a baseline for some success criteria.
D: Most groups are driving towards an active user count, what's called GMAU, group monthly active users. So it's an adoption metric: you can see how many active users are actually using your area. But for database that doesn't really matter; our mission is not about adoption, it's about scalability and performance.
D: For background, we could go with the more complicated one that I think Giannis recommended, which also included the total size of the database and total queries. But that one's probably a little harder to understand for most people; for us it's not really hard to understand, it makes a lot of sense. But anyway, I'm open to changing the query.
D: It was just the raw number of queries, I believe: the number of queries per month, or potentially even per week, under the SLO. So we would set an SLO target, and we could have a different one for, you know, back-end jobs versus front-end jobs if you wanted to, or background and interactive jobs, rather, and just track raw numbers.
B: Okay, that makes sense. I didn't remember what "queries by wk" meant; "week" makes perfect sense. Thanks.
D: Sorry for being less than clear in the issue write-up, but that raises the question of what our SLO should be, and whether we should split it for interactive and background jobs.
B: The one thing I'm not sure I understand is: we capture the data from many systems, right? The single-node installations that report, and eventually also GitLab.com, but their characteristics are quite different. On small installations, you would basically expect that there are no database problems at all.
D: We could get a separate count. I was thinking I'd just have them all aggregated, and so we would just lose... you know, not get much credit for GitLab.com, for example.
D: That captures the percentage of... you know, and kind of my question number one here is: do we want to have this as a percentage of queries that are below the SLO, versus a raw number? Because otherwise it'll go up and to the right without doing very much, and it could be a fairly large number of queries that are breaking the SLO; as long as we keep making more queries it looks fine, but we can't break that out.
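The trade-off being described, that a raw count of SLO-breaking queries keeps climbing with traffic while a percentage stays comparable across periods, can be sketched like this. The 100 ms threshold and the timing samples are invented for illustration, and `slo_metrics` is a hypothetical helper, not an agreed definition:

```python
# Two candidate metrics for the same timing data:
# a raw count of queries breaking the SLO, and the percentage within it.
def slo_metrics(timings_ms, slo_ms=100):
    total = len(timings_ms)
    breaking = sum(1 for t in timings_ms if t > slo_ms)
    return {
        "breaking_count": breaking,  # grows with traffic volume alone
        "pct_within_slo": 100.0 * (total - breaking) / total,  # comparable over time
    }

# Same share of slow queries, ten times the traffic:
small = [50, 80, 250, 90]
big = [50, 80, 250, 90] * 10
print(slo_metrics(small))  # breaking_count: 1,  pct_within_slo: 75.0
print(slo_metrics(big))    # breaking_count: 10, pct_within_slo: 75.0
```

The raw count looks ten times worse on the busier instance even though the behavior is identical, which is the argument for reporting the percentage.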
B: Start with adding that metric for the percentage. Okay, that makes sense, and then we can see how to aggregate that or how to refine it, sure. Now we have a starting point for how to capture the data and all that. And sorry, Craig, what were you going to say?
D: It comes out of Prometheus... oh, but we have to make sure we can get it from Prometheus. I think we can. Jose, do you know, since you're on the call, or Andreas or Pat, if you know: can we get query timings out of Prometheus? I think I've seen the dashboards on GitLab.com.
D: Got it, okay. Does it come from something like the postgres exporter, or where does that come from? Do you know where it comes from, like gitlab-monitor?
E
Yes,
we
probably
have
this
in
the
exporter
from
positives
and
we
have
these
implements.
I
have
to
check
for
you,
okay,
but
as
far
as
I
know,
like
everything
that
we
are
logging
at
the
moment
is
just
a
question
over
one
second,
so
we
don't
have
a
clear
understanding
in
the
logs.
What
we
have
is
we
start
statements,
but
it's
like
aggregation
of
all
the
statements
or
99
of
the
statements
that
are
being
executed
on
the
database.
B: Gotcha. Yeah, I think we have quite a few places where we track that. From the application side we also track those things; you can also find SQL timings in the application logs, for example. But what I'm not sure about is how much of that is available to standard installations, so yeah.
B
We
would
have
to
sort
of
find,
find
the
right
place
where,
where
to
get
that
raw
data
from
and
for
the
the
definition
of
what
we,
what
we're
talking
about,
we
will
have
to
at
least
yeah
understand
all
the
creative
like
we
would.
We
would
have
to
know
like
how
many
degrees
did
we
run
overall
and
how
many
of
those
were
above
the
slo
yeah.
D: And we can certainly adjust our query based on what's reasonable to collect and what we have, so we can work within those bounds and iterate over time towards a query. But it'd be good to start getting some data back, so we can have a feeling for what's happening out there.
D
Okay
cool,
so
it
makes
sense.
Let's
continue
the
conversation
of
where
we
should
get
this
from
and
how
to
do
it.
I
was
I
and
then
we
can
circle
back
with
the
ionos
team
and
see
what's
available
where,
based
on
what
a
recommended
method
of
getting
stuff
is,
I
imagine
grabbing
the
application
logs
will
be
hard.
B
Right
but
in
principle,
the
application
is
also
aware
of
those
timings
and
might
be
able
to
report
them
to
primitives,
for
example,
perhaps
already.
B: Should we start with Omnibus here, or with GitLab.com?
D
Can
we
can
start
with
god.com
if
we
want
to
that?
Might
we
can
start
either
way?
I
would
just
start
with
whatever
the
easiest
is,
but.
A: So, let's move on to the next one. Josh and I talked about using this label, and we just talked about that one. So should we keep the label on that one? Are there still things... well, yeah, there are still things to talk about on that one.
B
Yeah
we
talked
about
that
one.
We
could
basically
start
with
a
very
basic
way
of
doing
that,
so
implementing
a
rake
task
and
sort
of
see
where
that
goes
for
how
we
picked
it
up
for
a
good.com.
E: That shouldn't be used any longer. Like, if you want to force the database to use the index on a column: let's say you have the bloated index and the new one, and you want it to start using the second one. You can disable the first one, and the planner will automatically pick the new one, which doesn't have the bloat any longer, and later you can drop the old index, whatever you want.
B
Basically,
the
idea
is
that
we
have
most
of
the
explode
in
those
regular
indexes
anyways
and
if
the
application
would
be
able
to
take
care
of
that,
we
wouldn't
have
to
do
this
manual
repacking
using
pgp
pack
all
the
time,
and
it
actually
gets
easier
even
with
posters
12,
where
you
have
that
concurrently
option.
So
you
can
you
can
recreate
concurrently,
so
that
makes
it
even
more
easier
to
do
it
from
the
application
side
yeah.
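The rotation described here, build a fresh index, let the planner switch over, then drop the bloated one, could be sketched as follows. The table, column, and index names are made up, and the helpers only build the SQL statements rather than executing them against a real Postgres instance:

```python
# Sketch of the pre-Postgres-12 index rotation: create a fresh copy of a
# bloated index without blocking writes, then drop the old one. Names are
# hypothetical; a migration helper would actually execute these statements.
def rebuild_index_steps(table: str, column: str, old_index: str, new_index: str):
    """Return the SQL statements for replacing a bloated index, in order."""
    return [
        # Build a fresh, unbloated copy of the index without blocking writes.
        f"CREATE INDEX CONCURRENTLY {new_index} ON {table} ({column})",
        # Once the new index is valid, the planner uses it; drop the old one.
        f"DROP INDEX CONCURRENTLY {old_index}",
    ]

# On Postgres 12+ the same effect collapses into a single statement:
def reindex_step(index: str) -> str:
    return f"REINDEX INDEX CONCURRENTLY {index}"

for sql in rebuild_index_steps("events", "project_id", "idx_old", "idx_new"):
    print(sql)
print(reindex_step("idx_old"))
```

This is the mechanism that would let the application handle bloat itself instead of relying on periodic manual pg_repack runs.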
A: And then, you've been talking about it for a while: do you have the bandwidth to look at this milestone, Andreas, considering your PTO, or should we have Pat or Giannis pick it up when they have time?
B
I
don't
know
if
you
already
files
on
that
one.
There
was
so
that
there's
a
request,
basically
that
when
you
run
an
upgrade
on
kit
lab
you
should
we
should
track
that
information
in
the
database
so
that
you
you
have
an
easy
way
of
being
on
of
knowing
when
certain
upgrade
steps
happened
on
the
path.
So
this
is
really
just
a
tracking
table
they
requested.
B
So
the
idea
is
to
provide
again
the
right
task
that
would
be
run
after
an
upgrade,
and
you
would
make
a
note
in
a
table
that
you
upgrade
from
this
version
to
that
version.
And
then
maybe
you
have
an
additional
rake
task
to
list
that
table
or
two
you
know
so
you
can
understand
the
history.
So
this
is
it's
a
rather
rather
basic
thing
just
to
keep
track
of
those
upgrades.
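A minimal sketch of such a tracking table, using SQLite in place of the real database; the table name, columns, and helper functions are assumptions for illustration, not GitLab's actual schema or rake tasks:

```python
# Hypothetical upgrade-history table: a post-upgrade task records the
# version jump, and a second task lists the recorded history.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE upgrade_history (
           id INTEGER PRIMARY KEY,
           from_version TEXT NOT NULL,
           to_version TEXT NOT NULL,
           upgraded_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)

def record_upgrade(from_version: str, to_version: str) -> None:
    # What the post-upgrade rake task would do: note the version jump.
    conn.execute(
        "INSERT INTO upgrade_history (from_version, to_version) VALUES (?, ?)",
        (from_version, to_version),
    )

def upgrade_history():
    # What the listing rake task would do: show recorded upgrades in order.
    return conn.execute(
        "SELECT from_version, to_version FROM upgrade_history ORDER BY id"
    ).fetchall()

record_upgrade("13.1", "13.2")
print(upgrade_history())  # [('13.1', '13.2')]
```

As discussed below, this is separate from the schema_migrations table: it records the application version history, not which migrations have run.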
D: It makes sense. I'm not sure it's super urgent, so I think 13, unfortunately, is fine. First, who should do it?
B: I think it's actually unrelated to migrations, because, you know, the schema version of the database is controlled by migrations, but in this case we're talking about the GitLab version that is being upgraded. So, if I understand it correctly, we basically want to know when you went from 13.1 to 13.2.
B
I
I
thought
that
two
in
the
beginning,
but
it
turned
out
they-
they
were
thinking
about
the
the
gitlab
version
and
for
the
database
version
it
would
be
hard
to
sort
of
track
that
or
we
already
track
it
anyways
right.
We
have
the
schema
migrations
table
where,
for
each
of
the
migrations,
we
run,
we
insert
a
version
information
anyway.
So
that's
that
is
already
something
that
we.
D: Also, I'll comment there.
A: Commenting, right. Andreas, you want to add an exploratory issue for events partitioning? Now that makes sense.
B
Yeah,
the
other
one
is
sort
of
long
standing
and
we
should
keep
that.
But
I
was
just
wondering
what
the
next
immediate
step
would
be
for
us
to
work
on,
and
I
think
it's
more
it's
about,
like
understanding
all
the
use
cases
for
the
events,
data
and
see
if
that
partitioning
approach
makes
sense
or
not.
B
Kind
of
that's
fine,
then
I
can
also
create
that
issue.
If
you
like
youtube.