From YouTube: 2020 12 08 Database Team Weekly
A
Follow-up items: looks like Giannis took care of the first one. Or actually, there's a note in the issue about setting up servers for testing with production data, and I commented on it as well. Jose asked a question this morning about data access, and I clarified that for now we're only looking at data access for people that already have it, and that our long-term plans at the moment don't include granting all developers access to production data, so we'll see what direction that goes. You don't necessarily have anything you want to add to that one?
A
Okay, I can't remember exactly what the specific follow-up on this one is, other than just keeping an eye on it. So there's a question out there about whether or not we should upgrade to Postgres 12. I think we should; I think Giannis and others have covered the reasons well. I haven't followed the last couple of days, so I don't know whether any decisions have been made or not, but I know the concern is around the no-downtime upgrade. I'm not sure we can ever avoid that with database upgrades.
A
I know they're working towards it, but I'd rather we not get behind again, and I don't have any other solid justification beyond what's already been written in the notes. So if anybody has good feedback on that, I would recommend jumping in and adding it to the notes.
A
There are certainly performance improvements that we could take advantage of with Postgres 12, and I know the container registry built their partitioning with PG12 in mind, so it would be a setback if we don't, yeah.
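For context, PG12 is the release where declarative partitioning becomes practical to build on (foreign keys to partitioned tables, faster partition pruning). A minimal sketch of the pattern below, with illustrative table names rather than the registry's actual schema, assuming psycopg2 and a placeholder DSN:

    import psycopg2

    # Range partitioning by month; names and schema are illustrative only.
    DDL = """
    CREATE TABLE events_partitioned (
        id         bigserial,
        created_at timestamptz NOT NULL,
        payload    jsonb,
        PRIMARY KEY (id, created_at)   -- must include the partition key
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2020_12 PARTITION OF events_partitioned
        FOR VALUES FROM ('2020-12-01') TO ('2021-01-01');
    """

    conn = psycopg2.connect("dbname=gitlab")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(DDL)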
C
Yeah, I think this is... I spoke to Josh, and I think at the end of the day maybe this is a case of making a table with the pros and cons and saying: if we do this, this is what we get out of it, but this is the impact to our customers; and if we don't do this, this is what we'll miss out on in the next year.
A
All right, so this list below, the next items, were things that came up last week that I just started throwing into our agenda as they came up. So this one is all about the epic creation failing, but I think we have a good handle on where that is right now, and Giannis is working on a fix right now to disable those quick actions. So hopefully our ability to create and close epics will come back to us here today.
D
Are we still considering the load balancer problem as a priority? Because that certainly is a problem consistency-wise, even though it wasn't the root cause of the epics problem, right?
C
Oh, so I just saw the comment from Heinrich, and he essentially said that even though this is not the root cause, maybe we should handle the root cause first. But then I do think that the other issues should maybe come after that, right? Because I think we discussed with Giannis as well that this may still bite us in a couple of months for a different data type, and then we're going to do the same thing again.
B
And the hooks there: depending on how the model uses its hooks, after-save and so on, it is more prone to this sequence of events. I don't know if you agree, Andreas.
D
Yeah, and it's kind of a timing problem, I think. Or it can be a timing problem as well: if things take long and you run into a database timeout, then you hit that scenario. I think that was the example that was given, where it's hard to say how often that even happens.
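For background, the usual way a database load balancer avoids this kind of stale read is to route a read to a replica only after the replica has replayed the primary's WAL position. A minimal sketch of that check, assuming psycopg2 and placeholder connection strings; this illustrates the general technique, not GitLab's actual implementation:

    import psycopg2

    def replica_caught_up(primary_dsn: str, replica_dsn: str) -> bool:
        # Capture the primary's current WAL insert position.
        with psycopg2.connect(primary_dsn) as primary, primary.cursor() as cur:
            cur.execute("SELECT pg_current_wal_insert_lsn()")
            target_lsn = cur.fetchone()[0]
        # The replica is safe to read from once it has replayed past it.
        with psycopg2.connect(replica_dsn) as replica, replica.cursor() as cur:
            cur.execute(
                "SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(), %s) >= 0",
                (target_lsn,),
            )
            return cur.fetchone()[0]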
B
The good news is that we now know that the problem is not with the load balancers per se, at least not at the rate that we see the problem with epics. So there is the fix for the epics, and I assume this is not a priority one any more. It is something that we have to address, and it could come back and bite us at some point; but so there is the problem, and now there is the issue for the load balancer.
A
There was... okay, unless there's anything else on those, moving on to the next one. So there was a request to optimize the events table and get some help with this. I don't know if anybody has taken a look at it, but I told them we would discuss it and figure out how we can prioritize it. We might need to go through the RICE framework again and figure out where it falls on our ever-growing list of priorities.
D
I think it was... it's around the contributions calendar feature, if I remember correctly. So basically, on your profile you have the calendar, and this goes back to using all the data from this events table, which is rather large. What I remember from that discussion is that we sort of suggested that we could also improve the way the data is stored for the contributions calendar, and not try to crunch all those events at the time when you load the calendar. Now, the question back then was this: this is owned by a backend team, obviously, and what I think we didn't follow up on until now is how we would organize such an effort, where there is a database kind of topic, a modeling topic, or making those choices, and then there is also the team that owns that. And I don't know what the priority of that is for the backend team either; you'd know best, guys.
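The storage improvement being floated here, keeping a per-user, per-day rollup instead of aggregating raw events on every profile load, could look something like the sketch below. The table and column names are invented for illustration, not the actual GitLab schema:

    import psycopg2

    ROLLUP = """
    -- One row per user per day, so the calendar reads ~365 small rows
    -- instead of scanning the large raw events table.
    CREATE TABLE IF NOT EXISTS contribution_counts (
        user_id bigint  NOT NULL,
        day     date    NOT NULL,
        events  integer NOT NULL DEFAULT 0,
        PRIMARY KEY (user_id, day)
    );

    INSERT INTO contribution_counts (user_id, day, events)
    SELECT author_id, created_at::date, count(*)
    FROM events
    WHERE created_at >= %s AND created_at < %s
    GROUP BY author_id, created_at::date
    ON CONFLICT (user_id, day) DO UPDATE SET events = EXCLUDED.events;
    """

    def rollup_day(conn, day_start, day_end):
        # Run once per day (or incrementally) from a background job.
        with conn, conn.cursor() as cur:
            cur.execute(ROLLUP, (day_start, day_end))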
B
I think that this is the perfect use case for working with a feature team on investigating whether we can use partitioning or other solutions on this. This is one of those cases where the feature team should drive it, and we should help them identify whether the solution that they want is partitioning or another solution. But for sure, I think that's where we left it.
B
A month ago it was more like: can we sync with you and figure out what the available solutions are for you?
A
I'm going to throw it back to product management for that, Fabian or Giannis. Do you want to reach out to him?
D
For me, there was a bit of confusion that has been clarified in the comments below: this is not about removing postgres-exporter. It's only about gitlab-exporter, which has this specific monitoring for GitLab itself, and we're going to keep postgres-exporter. And then, to add to the confusion, there is a cookbook that we're using for GitLab.com that's called gitlab-exporters, which is going to stay around because that's also configuring postgres-exporter. So that was confusing for me to see, but yeah, it's only about getting rid of gitlab-exporter.
A
Just to add to the confusion: so this blocks dropping gitlab-exporter, but I think the memory team has this scheduled for 14.0, so 13.8 is fine. Does this still make sense? Is this something we're going to need to do, Andreas? I don't understand this issue well enough to know.
C
I think my question... I mean, this is one of these typical situations, I think, where your priorities are not my priorities, right? My understanding is that the memory team would like to ultimately drop gitlab-exporter, because that results in significant memory savings, plus it is kind of stale and we have a different solution. But if I understand correctly, in order to achieve that goal there's a rather lengthy list of metrics that are still using this, let's say, legacy system, right, and those need to be moved.
C
You know, to achieve their goals. And I don't know enough at the moment to understand that, but my gut feeling is that if there are, let's say, ten groups who all have to do this, it's going to be challenging, because everybody will say, "well, I don't particularly care about gitlab-exporter; it does what it's supposed to do for me", right? Just for some color and context.
D
What wasn't totally clear to me is where... it's clear we want to remove gitlab-exporter, but it wasn't clear where we wanted those metrics to live after that, so figuring that out for those metrics would be something which would help, I think. For the bloat metrics, for example: if I look at the re-indexing feature that we're doing right now, we'll also have the bloat information available in the application soon, so it should be relatively simple to, you know, turn that around and export those metrics from the application itself.
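Exporting a metric like that from the application itself is mostly a matter of registering a gauge with a Prometheus client and serving it on a scrape endpoint. A minimal Python sketch; the metric name, port, and stubbed bloat estimate are all illustrative, and GitLab itself would do this in Ruby:

    import time
    from prometheus_client import Gauge, start_http_server

    # Hypothetical metric; a real value would come from the application's
    # re-indexing bookkeeping rather than this stub.
    index_bloat = Gauge(
        "database_index_bloat_ratio",
        "Estimated bloat ratio per index",
        ["index_name"],
    )

    def estimate_bloat(index_name: str) -> float:
        return 0.42  # stub

    if __name__ == "__main__":
        start_http_server(9168)  # arbitrary scrape port
        while True:
            name = "index_events_on_author_id"  # illustrative index name
            index_bloat.labels(name).set(estimate_bloat(name))
            time.sleep(60)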
D
If that makes sense, you know; that should be something we can also do. I don't know exactly about those other metrics. I think we've had them for a long time, like database row counts on some tables; I have never seen good use of that, but I could have just missed it as well. I don't really know, but it's something that's been around for a long time.
C
So I think what we can probably offer here is to say we'll help you investigate this in 13.8, right? But we don't have all the information right now to say "yes, we can do this" or "this is exactly how it's going to go". And I think that's maybe some good initial feedback for Matthias and the memory team in general: if we are making an ask, we need to be very crisp on what we expect people to do, so that they can.
A
Okay, do you want to add a comment to that issue, or do you want me to? Either way is fine.
C
Sorry, I didn't catch that. Was the question for me or for Andreas? For me? I can add a note and say okay.
A
So this one, I think, sits somewhere either in database or the memory team. They were having some upgrade errors, and it seems to be related to either the Puma workers or the connection threads. And I don't know if anybody is taking a look at the details of this issue or not.
A
If I'm reading it correctly, they'd originally upgraded and kept running out of connections, and when they eventually raised their connections to 300, I believe it was, it was fixed. They're still figuring out why, even though the documentation guidelines said the original value was the correct number for them.
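The usual back-of-the-envelope check here is that max_connections needs headroom for every Puma worker times its thread pool, plus background workers and reserved superuser sessions. A small sketch of that arithmetic, with illustrative numbers rather than the reporter's actual configuration:

    # Each Puma worker can open up to `threads` database connections;
    # Sidekiq and admin/monitoring sessions add their own.
    puma_workers = 16
    puma_threads_per_worker = 4
    sidekiq_concurrency = 25
    reserved_superuser = 10

    needed = (puma_workers * puma_threads_per_worker
              + sidekiq_concurrency
              + reserved_superuser)

    print(f"max_connections should be at least {needed}")  # 99 with these numbers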
A
We do have a spreadsheet, and we have now put related stories under an epic, and Fabian, it looks like you are recommending a handbook page.
C
Yeah, so I had an idea, which simultaneously would also mean offloading that responsibility to Giannis. I think the RICE stuff should not necessarily live in the engineering handbook; I think we should just create a bare-bones direction page for database. Because RICE is essentially helping us prioritize the next few months and what we're going to work on, and that usually goes into the direction page for a category, right?
C
So there's a specific format, but I think we can ignore 80% of that for now. I think it's a great opportunity to start and say: these are the things that we're working on, these are the relative priorities, and this is what we expect to happen. And I think that can then be the single source of truth and be updated occasionally to reflect any changes. That would be my suggestion.
A
Yeah, you know, I agree; I think that's a good idea, in coordination with the roadmap that we have out there right now. I think the roadmap is more easily updated, right, because you just need to update the epic dates. So I think we can get something out there, and quite honestly, it'll help with part of what the next step is, the OKR review, and this will all wrap up at some point in time.
A
As I go through these: so we got Database Lab rolled out, right? We have the Enterprise Edition now; we have six months on the license, and ultimately leadership would not like to pay for Database Lab forever. The ultimate goal would be for us to create our own in-house solution for being able to provide the thin clones for developers, or a private runner, to be able to test data migrations regularly. Right, I'm concerned with all of those things, I mean.
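For reference, the thin cloning Database Lab provides is built on copy-on-write filesystem snapshots, which is why a clone of a multi-terabyte database is near-instant and initially takes almost no extra space. A minimal sketch of the underlying idea using ZFS, with placeholder pool and dataset names; any in-house replacement would need roughly this plus provisioning of throwaway Postgres instances:

    import subprocess

    POOL = "dbpool/pgdata"  # placeholder dataset holding a replica's data directory

    def make_thin_clone(name: str) -> None:
        # The snapshot is instantaneous; the clone shares unchanged blocks with it.
        subprocess.run(["zfs", "snapshot", f"{POOL}@{name}"], check=True)
        subprocess.run(
            ["zfs", "clone", f"{POOL}@{name}", f"dbpool/clone_{name}"],
            check=True,
        )
        # A throwaway Postgres instance can then run on the clone's mountpoint
        # to test a migration against production-sized data.

    make_thin_clone("migration_test_2020_12_08")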
C
Yeah, so from my experience, having a direction page that is stakeholder-digestible, as in people who are not very close to this can look at it and say, "oh, this is what the database team is doing", and also with some disclaimers on there about what we're not able to do. So, for example, Geo has the backup and restore category, but I've essentially put a big disclaimer on it that we're not doing anything there, because we have no bandwidth. So being transparent.
C
That is, I think, quite valuable, and I think it also helps in terms of, sort of, bandwidth in the team, beyond the direction itself, I think.
C
Oh, I have several. I can show you mine.
A
That's great, thank you. It's not super critical at the moment, but I think clarifying it within the next couple of weeks will help, because again, we had that limited time frame on the Database Lab license, and I've made it clear that I don't think we'll make any progress on developing a replacement, or even thinking about developing a replacement for that, for a couple of months. And if we get to the three-month mark and we haven't even started thinking about how we would replace that thin-cloning functionality with our own...
A
...let's start looking at, you know, whether Database Lab is providing the value we need there, and if so, a renewal license, and then when we are going to have our plan to replace it. So I think this direction page and communicating those priorities is on the scale of getting that out in the next couple of weeks, maybe; or, well, with the holidays coming up, maybe early January would be the time frame on that. So.
A
Okay, so we talked about that, allowing developers to test against production data; we are where we are, and we are in the process on these other ones too. How do we get that one to go away? There it is, all right. I think this is actually the next topic in the agenda, so we'll come back to that. Andreas, how are we doing on reindexing? Still making good progress there?
D
Yeah, still working on that; the latest changes are sitting in review, which took a little bit longer than... yeah. What we had scheduled for yesterday but haven't gone through with was a manual reindexing in production, where we basically target the most bloated indexes that Jose identified earlier.
D
I wasn't sure what was missing to get that going, and Brent had a couple of questions about that, so maybe we can...
D
We can do that, and my hope is that we learn a little bit more from that. And basically, the last thing to add, which I'm about to start, is a Grafana annotation, so that we get some visibility into when those things happen, and that should be the basis for us to enable that in production in a basic version, let's say. So development-wise, maybe that's finishing up before Christmas, and we can think about enabling it in January; that should be good.
A
I'm just taking notes, I believe, sorry. And then: say/do ratio. I still have questions about what the single source of truth on say/do ratio is. So we have a Sisense dashboard for...
A
So, as you can see, for database for 13.6 our say/do ratio was 43 percent. Does everybody know what the say/do ratio means and what it's driven off of? It's the number of issues that we marked as deliverable that we delivered within that milestone. So, yeah, 43% we delivered in 13.6, and then there is this calculation for the number of reprioritized issues. So, typically throughout the milestone we will move things in and out of the milestone, right? So once we re-prioritize everything, we hit 75% of our deliverable issues; but also, according to our retro count, we hit 100% of our deliverable issues. I think the difference there is in the timing of when these are calculated: Sisense cuts off at the end of the 17th and will count the number of deliverable issues shipped by then, while the retro issue doesn't count until the 23rd. So that's the difference in these.
D
Yeah, not terribly important. We talked about that last time, adding metrics for the integer capacity, and I tried to do this last week on the side, but I hit a permissions issue. We don't manage database grants really thoroughly, but we're about to sort of get that going, and then we should have those metrics.
E
Hello, everyone. So, basically, I'm trying to report here in the epic the main things that I'm doing, and I'm checking for spikes and trying to understand them. I'm basically executing two types of analysis here: one for total calls and one for total time of resolving the queries. We're finding some candidates for tuning, and from that we can talk with our friends in development, with you guys, about these short-term query improvements.
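Both rankings come straight out of pg_stat_statements, assuming the extension is enabled; a minimal sketch using the Postgres 12 column names (total_time became total_exec_time in Postgres 13):

    import psycopg2

    TOP_BY_TIME = """
        SELECT queryid, calls, total_time
        FROM pg_stat_statements
        ORDER BY total_time DESC
        LIMIT 10
    """
    TOP_BY_CALLS = """
        SELECT queryid, calls, total_time
        FROM pg_stat_statements
        ORDER BY calls DESC
        LIMIT 10
    """

    with psycopg2.connect("dbname=gitlab") as conn, conn.cursor() as cur:
        for label, sql in (("by total time", TOP_BY_TIME),
                           ("by call count", TOP_BY_CALLS)):
            cur.execute(sql)
            for row in cur.fetchall():
                print(label, row)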
A
That's the one that Andrew Newdigate just did the overlay on, okay.
D
And that query, as far as I could see, was related to the authorization refresh worker. So that's a background thing that recalculates authorizations.
D
And there was some cross-reference to a large customer that is hitting us with an automated bot. So I'm not going to say the name, but perhaps you all know what I mean. Do we see some kind of link between that customer and the increase in traffic and load that we observed?
D
It would be a nice explanation if, you know, the increase in traffic and load we saw over the last few weeks correlates back to that bot activity; that might be an interesting lead.
D
Yeah, and I just talked to Andrew a bit. He mentioned this customer, and they have a bot running, basically, and that generates a lot of traffic for us, and apparently that can lead to group membership status changing and things like that, which would in turn lead to that worker having to refresh permissions and authorizations.
E
And the interesting thing is the degradations that we see during the peak times. So this one that I'm evaluating, by the total time of execution, is the SET parameter, and in theory that's something simple for the database to resolve. But due to the degradation, did you see that even this is getting slower? Like, in the top 10 by total time of consumption you have SET parameters, which shouldn't be there, you know.
E
Well, I think it's SET parameters because of how we issue them, and if you sum all the SET parameters that we run, because we do have plenty of them... I need to stop at some point and check how many thousands we do. I think we run even more SET parameter statements than SELECT ones, and with different values, so we compute a bit more, of course, than for just a SELECT.
E
I don't know the logic behind why we are doing that, and I see that this is pretty well tuned, but the problem is the number of executions we have; that is really high, and it's always present when you have spikes. So if you ask me which things I would like to look at, that one would be first. But that's just my analysis.
E
And the caching and the other one are just ideas to try to move load, or, as you said, or as I understood it later, to read from the read-only replicas if we can. Because, as you see in the total time from queries now, that SELECT star query is pretty present; maybe we could move this, or have a cache in front of it, or, I don't know.
E
Whatever we can do here would be great. I'm saying this because my feeling is that if we keep up this level of degradation, if we don't manage to do anything about it in time, sooner or later all of us will end up in an incident, figuring out why we are having a performance degradation at a high level. You know, so that's my fear.
A
Thanks, Jose. Andreas, on to the next one.
D
Oh yeah, because we covered that one, yeah; that's the re-indexing, the manual one.
A
I just wanted to reiterate that the 15th is the end of this milestone because of the Friends and Family day, so top of mind is the audit events deployment, right? That's the first step of the audit events deployment. Are we still on track for that?
B
So, related to that, do you agree that we run a new RICE exercise, a quick RICE exercise, on Thursday, so that we are prepared for the next milestone? It will also help with the say/do ratio, and it will help with preparing a kickoff video, so that we have set the proper issues. If you agree, okay.