From YouTube: 2020 12 01 Database Team Weekly
A: Just a reflection of your importance. All right, welcome to this edition of the database team weekly meeting. It is December already, wow, that's crazy! All right, there's a couple of topics in the not-verbalized section, so jumping right to the team topic. This is mostly a reminder for me to look back on the MR for reviewer requirements. I don't know how many people took a look at it; I know Giannis took a look and had a lot of good content there.

A: My goal there was to add the requirements for reviewers so that they know they need to provide things like the query plan, all the queries, and links to all the required data, so that maintainers can be more efficient when they actually get the review, because a lot of the feedback I gathered from the maintainers was that reviews were missing a lot of data. If we can push that down to the reviewers, that will make maintainer time much more efficient.

A: I was going to spend time on it today, so I can actually organize it, which is what Giannis is talking about. I don't know how many people have actually looked at the MR.

A: So I did a quick bulleted list of the things that are required and said: if you need more information about these, please read the section below. Giannis added a lot more good content, but it made the required list quite long, and I was trying to keep it succinct so that reviewers would actually look at it and go, okay, I know I need to provide these things, and if I need more information I can look down here. So basically providing a TL;DR.
A: So I will take a look at that later today. And then we actually got the contract signed and delivered yesterday for Database Lab, so yeah, that's fantastic. That was a lot of work, so we're good to go there. Nick's been great, and the procurement team was great in helping us get it over the line. So now the real work begins; I actually created a rollout issue here.

A: These are the simple things we need to do, so, letting all of engineering know. Actually, anybody with a gitlab.com email address will now have access to this, so informing them in the right order would be helpful. Nick talked about turning it on last night, but then that would change Joebot, and every query session would now have a link to it, and we'd probably get inundated with questions, so I'm going to enable it over the weekend.

A: I will hit up all of the keeping-yourself-informed channels saying: this is the change, this is what you can expect, and here's some of the training material. Then Nick will schedule a training and an ask-me-anything session for next week. And I wanted to get some ideas on a Slack channel: I think we should set up a database-lab help Slack channel, so it's specific to that and we don't clutter other channels like the database channel or the Joebot channel.
A: So I was going to set up a database-lab help channel for questions like "how do I use this" and "this feature is broken". We wouldn't change the current channel where people interact with Slack to do query plans now; I just didn't want to further clutter that up, because it's pretty noisy right now, and that's work versus help. So I didn't want to overload the one channel. Slack channels are free, so if people need to go for help, I figured we'd keep that in its own isolated channel.
D: And before we move on, this is awesome, by the way. So thanks for seeing that through and getting it done; I think it really helps us.
A: Yeah, as I told Nick yesterday, I'm excited but also scared, because it means now we have to deliver on the next level of things, right? There was always kind of a block around what we do next, and now we know what we need to do next. Part of it is probably me starting the procurement process for the next six-month evaluation, because I don't think we'll be independent of this within six months. So I'll start working on that soon.
A
So
then
the
next
level,
obviously,
is
the
provide
developers
with
production
like
data
that
epic,
that
we
have
out
there.
I
think
we
need
to
update
the
description
and
the
rollout
plan.
It
sounds
like
we
are
moving
away
from
doing
obfuscation,
so
do
we?
I
think
we
probably
need
to
remove
that
from
description
of
the
rollout
plan.
We
can
still
keep
the
supporting
issues
separately
if
it's
something
we
want
to
explore
in
the
future,
but
it
seems
like
short
term.
We
are
not
going
to
pursue
obfuscation
and
correct
me
if
I'm
wrong.
B
B
That
it's
better
to
try
and
yeah,
try
it
out
without
anonymization
and
all
the
trouble
that
the
magician
brings
so
yeah.
I
can
yeah,
I
don't
know
others
what
what
others
think
about
it.
D: I heard from the data storage team today, the SRE team, that they are also looking to create a database benchmarking environment for their own needs, for example to test configuration changes or anything like that, and they also aim to have a sort of hardened and locked-down environment for it. So basically the same idea, and I'm sure this is going to be a pretty good example of how to do that. We can probably borrow from that or work on it together.

D: No, they're not going to do that, but rather tighten the environment and make sure that nothing escapes, and this is something we would do the same way, I guess. Okay.
A: All right, and then let's see, the next topic is setting up the single-user server for testing migrations. That one, I think, had been paused because we were waiting for a decision on Database Lab. So I think I just put this here to say we need to bring it back up and actually start working on it again, so we can move towards maintainers testing migrations. It does need to be prioritized.

A: Good question: we do need to prioritize it, but I'm still catching up from last week and I'm not sure where we are on other things. So let's come back to that; we can talk about it under goals for the week.

A: Let's see, and then I just put a note here: we need to create a follow-up issue for guidance for maintainers testing data migrations. I don't know how much we've talked about this outside of our team, and I'm not sure how much the other maintainers know about it, so we should start communicating this out wider and setting the expectation. I can create that.

A: Now that Database Lab has been approved, I can update the blueprint; it's pretty stale by now. There was a to-do item in there detailing the current environment, and I think there were some updates on the next steps: Jerry wanted diagrams and flow charts and stuff.
D: The data storage team was picking that up, and they perceived the 10 increase in terms of transactions per second, and they were basically wondering where it was coming from, and whether we had a way to tell if a certain feature that was released recently, or something like that, would have triggered it. We don't really have a good way of seeing that today, and I wonder if we can sort of predict what kind of impact certain features have. I just wanted to throw it out there as a topic that is kind of interesting today, because from a database reviewer perspective we're always thinking about performance; we check, is it...

D: ...is it the best way to do it? Is it efficient? Are there any anti-patterns, or stuff like that? But overall, since we're so iterative on things, it's kind of hard to tell what the ultimate impact of a certain feature is on the site. That's just something to think about, if we have good ideas to tackle it.
A: I don't have a good idea. I can tell you the memory team is looking into something similar: we have a goal to try and get the minimum footprint of a single GitLab instance down to two gigs, and once we actually reach that goal, well, that's a point-in-time goal, right? We've done it for that release of GitLab, and as new features get added we will have this problem with memory usage again: a new feature could get added that blows us out of that limit. So we're looking into ways to set up an environment to constantly test that threshold, to catch when we exceed our new minimum footprint, and we're just now brainstorming ideas on it. So we don't have a good example, but we're considering the same concerns.
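The threshold testing being described could, as a minimal sketch, boil down to a budget check run on some cadence; the 2 GiB figure comes from the goal above, while how the resident set size gets measured is an assumption here:

```ruby
# Hypothetical sketch of the footprint threshold check discussed above.
# The budget comes from the stated two-gig goal; where the RSS number
# comes from (a monitoring export, a test harness) is an assumption.
MEMORY_BUDGET_BYTES = 2 * 1024 * 1024 * 1024

def within_memory_budget?(rss_bytes)
  rss_bytes <= MEMORY_BUDGET_BYTES
end
```

A check like this only catches regressions at a point in time, which is exactly the concern raised: it has to keep running as new features land.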
C: Just make it a hard limit, let's say two gigs; anything you need has to fit in, and then we'll have fun: "you broke it, go fix it." Yeah. As a side note, a friend of mine works for Microsoft, and that is how Windows operates, because they have a hard requirement to be able to ship Windows on the DVD release.

C: So if you're going to ship it on a DVD, there is a limited amount of space, and not everybody downloads stuff from the internet. So if you revise Paint and all of a sudden it's twice the size, it breaks the budget. I thought that was pretty harsh; it's an interesting requirement.
D: In addition to Marginalia, there is also, I think we just added this recently, the ability to tag a feature category for a certain controller, something like that.
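Marginalia-style annotation works by appending a comment to each generated query, so the attribution shows up in `pg_stat_activity` and slow-query logs. A toy version of the idea (the names here are illustrative, not the actual implementation):

```ruby
# Toy illustration of Marginalia-style query annotation: a comment is
# appended to the SQL so the feature category can be traced back from
# database-side views and logs. Not GitLab's actual code.
def annotate_sql(sql, feature_category:)
  "#{sql} /*feature_category:#{feature_category}*/"
end
```

Aggregating on that comment is what would let you see the portion of traffic a feature category generates over time, as discussed next.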
D: But it's kind of hard to tell the difference between an increase in traffic or user behavior and the addition of certain features. Still, you would be able to sort of see over time how much a certain feature category impacted the traffic, or what the portion of traffic for a certain category was.
B: It's related to our discussion in general about performance, and this is going one level up, which is pretty interesting. We started by discussing queries, then we were discussing controllers, and now features; we may have more things to do together, so yeah.
C: Oh yeah, so for the sake of transparency: Giannis and I now meet regularly and talk a little bit more about product questions, with a focus on product questions for memory and database. There's one link, if you're interested, on how features can be modeled from the customer perspective, which is quite interesting: it's about the Kano model. We had a discussion earlier on.
C: You know, where do database features fall? And James made the point that many of the features in database are "must-be" features: things that customers just expect to work. They need to work, and if they don't, people get unhappy, but nobody is going to be particularly delighted by a specific feature, because they just expect it to do what it's supposed to do. So a question for us is: once we have those features under control...

C: ...what are some potential future areas where we can actually build functionality into the product that is interesting and delightful for users, either our own developers or other developers? Giannis and I quickly discussed this (and you can add some color to it in a second); it's like...
C: Maybe there are some things that we do ourselves at the moment that are hard and tedious and not as delightful as they could be, for example how we review our own database changes, and maybe there are opportunities to build functionality into the product to make that a lot better. We don't know what that is or how it could look, but I think there are actually some interesting potential opportunities in the future. To say: look, if Andreas needs to review a database change...

C: ...there are just a lot of things that are tedious. How can we make it easier? That was just a discussion we had in the morning. I really enjoyed it, and I wanted to quickly share it and say these are some of the things we are thinking about. There's a lot of problem validation to do and these kinds of things, but in the long term that's maybe a direction we could consider.
B: To be honest, we just work in order not to break the database and not to lose data, but if we are doing our work correctly, people will simply not be dissatisfied; they will never be delighted. If we lower our queries to five milliseconds, no one will see it.

B: Okay, maybe some admins will notice, but it's not a change that we would advertise outside of our optimizations. And then there is this category called "attractive", where even if you do the bare minimum, people get excited. For example, the first time we used Google Maps, at least the first time I used it...

B: ...Google Maps was amazing, the JavaScript interface or something like that, or the first time you used search. And the question there is: can we have an attractive feature for the database?
B: So what's an attractive feature, a feature that people don't expect? For example, in our internal cycles, what Fabian was discussing in the morning: the fact of allowing people...

B: ...to test against production data is an attractive feature. It's something that, the moment we have it, is amazing, and that was my experience even with Joebot. The moment I joined the team and was able to test using Database Lab, it was amazing, because this is not something that everybody does. And extending that: could we have something similar, for example, as a product category? That's one of the ideas for the product.

B: This is a huge thing, not a one-month idea, but could we have database reviewing as a category in GitLab? You are reviewing code, and from inside the GitLab interface you can just quickly click to review the queries, generate the plans, and check how they run against your production. Let's go crazy here; yeah, something like that.
C: So I think with things like that, we have no idea how they would work, but I think it is interesting to think about them, and maybe it'll take some years to actually get to the maturity level where this is really amazing. But I think Giannis and I can do some work, and maybe there are some small iterations we can do where, I don't know, it becomes easier to... I'm not really sure what I'm talking about here either.
C: It's like: I know that people often need to EXPLAIN their PostgreSQL queries. They go to an external service to do this and then post links to those things. That's inconvenient, right? Why can't I do that in-product? Why can't I just get my EXPLAIN and have it surfaced easily to the people that need to see it? Those are maybe smaller things that are a lot more feasible...

C: ...and those, I think, are essentially end-user facing, and that's quite exciting, because then you're addressing usability issues for folks that need to interact with the database, which is essentially all of you. So maybe you can think about all of the things that annoy you about the current database reviewing experience, and those are maybe things that we can think about how to fix.

C: But that's just for transparency; that's what Giannis and I also discuss. Maybe it's interesting to vocalize it, rather than it being a complete black box.
D: My thought is that it's often really the small things that remove the tediousness from those reviews. Even just having a query formatter in comments, like a Markdown "sql" fence, so that when you put the query in it, it gets formatted; that already helps. And if that integrated with other systems, like plan visualization or so, that would be awesome. So thanks for sharing.
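As a sketch of the comment-formatter idea: the smallest possible version is just wrapping a pasted query in a fenced `sql` block so the comment renderer can at least syntax-highlight it (purely illustrative; an auto-formatter could hook in at the same place):

```ruby
# Purely illustrative: wrap a raw query in a fenced Markdown block so a
# comment renderer can syntax-highlight it. A real formatter would also
# reindent and normalize the SQL before fencing it.
def fence_sql(query)
  "```sql\n#{query.strip}\n```"
end
```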
C: Yeah, and that's just an idea, but I think that's maybe one of the things Giannis and I can do over the next months: exploring that idea space, doing a few interviews with folks who work in that space, and actually validating that those things are problematic enough to warrant investment in that area.

C: And then, when that happens, you can start thinking about the first iteration. I think that doesn't mean we don't need to do all of the other things as well; they're super important. If features that are expected don't work, people get annoyed, and that's really bad. So we still need to do that, but ideally, also for us, we can do these other things.
B: To complete this: after our discussion with Fabian I went back and checked the maintainer review issue. We had an issue where maintainers were writing complaints and things that they would want to change, and there is a unicorn thread by Halper where he defined everything that we would do for the next 20 years if we had a hundred people. So he also got at something like that.
A: Thanks for that overview, that was great. All right, jumping to goals from last week. This is actually a two-week roll-up, since Pat and I were gone most of last week, but this is what we had from our last meeting, so let's get right into it. On audit events, I think there was a goal to get that rolled out yesterday; is that right?
E: It was a fair point, because I think there is a problem there. We haven't really discussed the ultimate solution, but one of the code changes we're rolling out at the same time as the migration could potentially cause an issue.

E: So the safest thing to do, really, is to split it into two releases: roll out the code change now, wait till the next milestone, and then do the actual swap migration. Maybe that's a little too cautious, but it's really hard to get a good view of what could break.
D: Is that because of the primary key issue?
B: Yeah, the problem is that when we do the migration and swap the tables, the audit events table has a composite key in the partitioned case, because it has the id and the date as its key, and Rails does not work correctly with composite keys. So we have to set in the Rails code that the primary key is only the id, so that everything else works as expected. And what Andreas thought about was that after we release, the way we do our live releases...
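The workaround described here, telling Rails the primary key is only the id, looks roughly like this in ActiveRecord (a sketch, not the actual GitLab model):

```ruby
# Sketch only, not the actual GitLab code: the partitioned table's real
# key is composite (id plus the date column), but ActiveRecord is told
# to treat :id alone as the primary key so the rest of the framework
# (finds, updates, associations) behaves as expected.
class AuditEvent < ApplicationRecord
  self.primary_key = :id
end
```

`self.primary_key=` is the standard ActiveRecord hook for overriding the inferred key; the catch discussed below is that this one line has to ship before the swap migration does.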
E: That's right, yeah. There's one area in the app, specifically creating a merge request approval rule, where I can definitely recreate that it breaks, because it can't create the audit event when the approval is created. So there's definitely at least one potential production problem that could happen, and we just really can't say. There are a lot of other failures in the tests; a lot of them seem to be noise, but it's hard to trace down exactly what other issues there might be, because things are just breaking in a lot of places.
E: I mean, we could; I don't think there's any reason we can't. I think our reasoning for doing it as a normal deploy is that we were a little scared of the schema caching, and we thought, well...
A: Okay, so the new target is the first release at the end of this milestone and then the second release next milestone.
D: And maybe for a bit wider context: I think what is unfortunate for us here is that we don't have a way of reasoning about live upgrades other than trying to figure out what breaks, or what could potentially break, when we do one. When we test things in CI, or elsewhere in staging...

D: ...we don't have the same situation as we do in production, where we want to have live upgrades, or no-downtime upgrades. We also ship that to customers; no-downtime upgrades are a GitLab feature, but we never test for them, and this has bitten us before, where we just didn't anticipate that something would break in a...

D: ...no-downtime upgrade scenario. We've since gotten better at anticipating when those things break, but it's still hard, it's manual work, you have to really think about it, and this is why things take so long. Like right now: if we had thought about it last week, we could have sneaked it into the last release. It's just one line of code, but it's really hard to tell these things.
B: ...and then keep a server on the old version, test, and switch to the new version, because there are three states: old code with the old database, old code with the new database, and new code with the new database, and then you run the post-migration. So there are several steps, and there is also the overlap of a new server and an old server.

B: A couple of the worst problems that we have found were new servers updating with the new data and old servers trying to use the new data, with something breaking there, which is even worse.
C: Yeah, thanks for sharing this, Andreas and Giannis. This is just a connection that I'm making; I'm on a separate quest in a different part of my life with Geo, where we frequently have issues with testing complex operations.

C: That often means spinning up a 3k or 10k reference architecture manually and then doing stuff, and that's the test. So I'm working with Quality a little bit at the moment to get to a point where you can have, for example, a 3k reference architecture against which you can automate some of these zero-downtime upgrades, against a specific release or a nightly package or something, to actually evaluate these processes.

C: ...also the failovers and so on and so forth. And I wonder: with that kind of setup, where you say, okay, we have a 3k reference architecture here and we always perform a zero-downtime upgrade against the latest version, in that specific sequence, and see if there's actually no downtime, if it works, would that address some of those concerns, or is it completely different?
C: I think this is, in my mind, a huge gap in our overall quality assessment, and I'm trying to get to a point where that is a thing, because I have processes in Geo where that's important. There are general distribution concerns, like zero-downtime upgrades or whatnot, that are not testable in CI at the moment, and that means it's essentially either...

C: ...you anticipate problems, good for you, or you test it manually, which is extremely time-intensive, manual, and cumbersome, and people hate it, so it doesn't happen. So you need an automatic way, with an appropriate cadence, to do that and get a report that says: hey, we've tried this and it broke, and here's what broke, and then figure it out. So that's good to hear, because I can add that to my list of concerns.
D: The other way to look at it is that we basically support many different upgrade paths: you can go from one GitLab version to another, and there are many different paths to do that. This is for self-hosted installs, and it's already a big problem; the number of paths is very, very large. And then, thinking about GitLab.com...

D: ...it becomes even more complex, because we don't really do releases, we do regular deploys, and in addition to the regular builds you would have to do this no-downtime upgrade testing before taking the next step. So before you actually deploy a change, you might want to validate it with no-downtime testing, and this is very hard to deal with...

D: ...I think, if you want to catch all the things. And as for the no-downtime testing, it really depends on which changes you package into a particular deploy, but having an environment where you can test a certain change would already be a good step. It doesn't cover everything, but today we're just blind on that.
C: No, and I agree with you, because I think what we're struggling with is also a sort of complexity explosion: there are too many permutations of doing things, and you cannot hold that complexity in your brain anymore, at least I can't. People will miss things: oh, we did this first and then we did that and it broke, but nobody thought of it.

C: ...and then you're in a mess. So ideally, at some point, having an environment that constantly permutates what people can do and just surfaces that would be great. The first step is just: this is the thing that we test, and at least we know that it would be caught on a weekly cadence or something like that.
E: I think maybe in this case I'm not even sure that no-downtime testing would catch it, because that was something I had already manually tested for; we were concerned about running the old code and the new code against the new schema. But there's an additional wrinkle here, which is that Rails caches the schema information.

E: So this particular issue would only show up if the old server is running and then reboots before the new code is deployed, or the app restarts or something, which causes it to refresh the cache. So that's something that no-downtime testing, under normal circumstances, might not even detect.
D: You're right, yeah. So, in summary, that's our paranoia speaking.
C: Yeah, but in my experience we've had these issues with customers, and then it's an emergency ticket and figuring out some kind of... I mean, caching is always difficult, in my experience, and I completely agree. But unfortunately, I think we are too popular, and somebody somewhere is going to have that issue eventually, and that drives me around a little bit at night, you know.
B: Which increases it. It's still something that may happen; it's not that often, but it's something that happens, so we have more time to deal with it.

D: You would have to deal with the fact that you add processes throughout, yeah, for sure.
A: The question is: is this something that we're going to have to contend with for every table we want to partition?
A: I'm just thinking about when we actually get to the point of enablement. When we're enabling, do we have all the information needed for other teams to implement their own partitioning? So, making sure that we're capturing this part of it, so they don't encounter the same issue that we've encountered at the 11th hour, or ship a production outage because of it. Are we capturing this step somewhere for when we actually do enable other teams to do the work?
A: And I'm going to drop off in about five minutes, so moving on to the next step: Giannis, you had a goal to finish the events proposal.

A: All right, next one.
D: I'm working on the re-indexing; there's a bit of development work, and then Jose and I were talking about re-indexing a couple of really large indexes. For most of them we can use the tooling, and perhaps that is something we can do this week, but I still have to create a change issue for it. We'll see; maybe we can catch up on that later.
B: Okay, the second thing is what I talked about above: the discovery effort on the usual pain points. This is one of the things I have to introduce.

B: I would also want to note that we're back on track with the "testing with production data" epic, so I'll go there and add a proper issue for how we will do it without anonymization, so that we can start discussing.
E: Yeah, I guess it depends a little bit on prioritization: now that we have Database Lab, where do we think that falls among the overall things we should be working on? There are a couple of other unassigned issues in the milestone right now.
B: I have tried to order the issues in the release based on the priorities that we discussed last week. More or less, after the assigned ones and the ones that we have already declared as deliverables, I have tried to keep them in the order that we discussed, so it's easier.
E: Okay, so in that case, setting up the single server for testing migrations is the top priority. Okay. So I guess it's just a question of how far we want to go with it. I mean, the server is set up and it's usable, but it's an incredibly manual process that's kind of error-prone right now.

E: So maybe there's a middle ground: something where we can build a little bit of scripting around it, without going crazy, so that people can use it a little more easily and not in a way that's unsafe, like having the firewall down and then accidentally doing things they shouldn't be doing.
A: I agree. I think that's the top thing that should be worked on next, just because we do have a limited amount of time on Database Lab. It is six months, but we've seen that some of the things we're working on can take some time, so the sooner we can get that available to others, the sooner we can get feedback. We can talk more about what's needed next asynchronously, but yeah, I think a security audit makes sense.

A: Okay, all right, yeah. We can talk more about that in the issue itself. I don't know about other folks, but I have to fall off now.
A: It looks like Pat and Giannis have added some discussions within the retro topic, so take a look. I'm planning on adding a summary to the engineering-wide retro later today, so if there's anything else you want to add (we could spend some more time talking about it on Thursday, if time allows), get your feedback in, because it needs to be summarized before tomorrow so that Darva can do her roll-up and be prepared for the engineering-wide retro.
D: I saw that today. I'm not completely sure; this is kind of changing the way the application works with the database, and I'm not completely sure about all the implications. The article that Jerry mentioned was from seven years ago, I think, so it definitely needs updating, but it's certainly an avenue we can look into; it just needs a little bit more investigation.
D: Did you see the alternative? Or, just as an idea: I don't know if everybody remembers, but we have a very large volume of SELECT 1 queries coming into the database. Do you know how many?
D: So basically you would be able to catch those queries in the middle, in the load-balancing layer, so they don't hit the database, but the application still knows that it has a good database connection. I have no idea what the effort would be to add that, but maybe it's interesting for the PgBouncer folks, or others, to discuss a feature request like that. I don't know.
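The interception idea amounts to answering liveness probes in the proxy layer instead of forwarding them to PostgreSQL. A toy predicate shows the shape of it; this is an assumed design, not an existing PgBouncer feature:

```ruby
# Toy sketch of the interception idea: a pooler or load balancer could
# answer trivial liveness probes (SELECT 1) itself instead of forwarding
# them to PostgreSQL. Assumed design, not an existing PgBouncer feature.
def forward_to_database?(sql)
  normalized = sql.strip.chomp(";").squeeze(" ").downcase
  normalized != "select 1"
end
```

Everything except the probe still reaches the database, so the application keeps its "I have a good connection" signal while the probe traffic disappears from the backend.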
F: That's the idea; the idea there is exactly that. In Postgres the SELECT 1 is fine all the time, and like seven years ago somebody using Ruby on Rails said: well, I don't need this connection-establishment check, we execute the statement directly, more or less. And this is what I'm saying here: if you have some routing layer, or a query store, this would be nice.

F: If we reach over 80k TPS, we can start to have a degradation that can take the database down, or a really bad degradation where something starts context switching like crazy, degrading everything to the point that we are not meeting our targets, or going down. Then I start scrambling to reduce TPS, or to find whatever we have in TPS, because you have the context switching and so on. We are bottlenecking; we are using up resources in general.
D: There was an issue recently about the Rails query caching. Basically, Rails has a mechanism where, when you run the same query twice in the same process, it returns a cached result, and as far as I know from that issue, that query cache doesn't always work, or it was disabled for some reason.
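The per-process cache being described can be pictured as a memo keyed on the SQL text. A stripped-down illustration follows; Rails' real query cache also clears itself on writes (INSERT/UPDATE/DELETE), which is omitted here:

```ruby
# Stripped-down illustration of a per-request query cache: the same SQL
# string executed twice hits the in-memory store instead of the database.
# Rails' real cache also invalidates itself on writes, omitted here.
class QueryCache
  attr_reader :executions

  def initialize
    @store = {}
    @executions = 0
  end

  def fetch(sql)
    @store.fetch(sql) do
      @executions += 1       # only incremented on a cache miss
      @store[sql] = yield    # "execute" the query and memoize the result
    end
  end
end
```

The load-balancing wrinkle mentioned next (cache behaving differently depending on whether the primary or a replica was chosen) sits on top of exactly this kind of per-connection store.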
B: The cache was working when the master was chosen, I'm not sure, and when we were choosing anything else we would not use the cache, so we would send two or three queries.
F: One will be for the denominations; I have to look more at the statements and the low row counts, but I think it's a small win. The main one: do you think we can try to change some of the fetching structure that we have nowadays? What I'm saying is, suppose, we're just supposing here, we have a query that returns one or two rows and runs 20 times. Perhaps we can do one query, fetching more data and paying less cost.
D: This is basically what we would call N+1 queries, or at least it goes in that direction; that is an example of it, where instead of running 10 queries to grab 10 rows, you try to do one query. And I'm pretty sure that we have a lot of these issues in the codebase. We've been fixing a lot of them as well, but there are surely some left.
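The N+1 pattern and the batched alternative can be contrasted purely at the SQL level; the table and column names below are made up for the example:

```ruby
# Illustrative contrast between an N+1 access pattern and a single
# batched query; table and column names are invented for the example.
ids = [1, 2, 3]

# N+1 style: one round trip to the database per row.
n_plus_one = ids.map { |id| "SELECT * FROM events WHERE id = #{id}" }

# Batched style: one round trip for all rows.
batched = "SELECT * FROM events WHERE id IN (#{ids.join(', ')})"
```

The batched form is what the review advice below pushes authors toward: fewer, larger queries instead of many tiny ones.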
B: And nowadays we try to catch almost all of those problems and advise authors to run larger queries instead of a lot of small queries, but yeah, sometimes...

B: ...sometimes it's the nature of things: you may have the front end sending 10 requests instead of one. That's also part of the optimization. Okay.