From YouTube: 2021 05 11 Database Team Sync
A
All right, it is... is it May 11th? It's the database team sync meeting, and we're talking about the ci_builds migration timing. That jumps to a team topic, but that's okay, we can jump right into it. Jose is not going to be here, so we can come back to the one item that's in the infradev board. But Andreas, you were saying about the complexity of the ci_builds table and the length of time it'll take to migrate.
B
Yeah, so I didn't have enough time to look at this ahead of the call; I just wanted to post the progress on ci_builds. That's at 41%. We should probably take the time and sort of estimate when it finishes, but I'm starting to be concerned about the speed of this, especially knowing that we have other tables also in the queue that we still need to tackle.
A
So that would put us another 15 days out to complete. If it's only three percent a day, it's going to put us at 20 days for the remainder of the work on ci_builds alone, so.
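A quick sanity check of the estimate above, assuming the migration sits at 41% and advances roughly 3% per day:

$$\frac{100\% - 41\%}{3\%\ \text{per day}} \approx 20\ \text{days remaining}$$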
C
It remains stable over, over time, yeah. So let's say that we wanted to increase it: we cannot do so without changing the algorithm right now. So maybe, just a random idea for such cases, or other cases: we may also want to add a flag on the batched migrations table where we say, yeah, for this one, just keep it where we have it.
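A minimal sketch of that flag idea, assuming GitLab's batched_background_migrations table; the column name is purely illustrative, not an actual schema change:

```sql
-- Hypothetical: let individual batched migrations opt out of the
-- automatic batch-size tuning and keep whatever size is currently set.
ALTER TABLE batched_background_migrations
  ADD COLUMN keep_batch_size boolean NOT NULL DEFAULT false;
```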
C
This is the case for ci_builds for sure, as Andreas said, but in general, just a random idea.
C
We should be very careful about that. After that, we should be able to go faster, and we only have one big table left after that, the one with the five billion records. What's the other big table, you guys? It's a satellite table; I have to figure, find it out. Do you recall by heart, Andreas?
C
It's one of the ci_builds satellites. It has five billion records.
A
The composite key... let's see, we have trace sections, pipelines, trace chunks, runner session; they're all tied for priority number three on the most important ones that we should migrate.
B
The bigger one, and then builds... what was it, the build trace sections? That is another fun one where we don't have a single-column primary key to batch over, so we haven't had that before. So that's still to be seen, how that works in production.
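A rough sketch of what batching without a single-column primary key can look like, using PostgreSQL's row-value comparison for keyset pagination; the table and column names are illustrative assumptions, not the actual migration code:

```sql
-- Walk a table with a composite key (build_id, name) in batches,
-- resuming from the last row of the previous batch.
SELECT build_id, name
FROM ci_build_trace_sections
WHERE (build_id, name) > (:last_build_id, :last_name)
ORDER BY build_id, name
LIMIT 1000;
```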
B
So all that is saying, basically: should we be nervous about seeing the ci_builds migration go on for the next two weeks?
A
Right, never mind. I can check in on Thursday, and then maybe we try and crank it up over the weekend, within reasonable specs. I mean, I know we have the auto-balancing, but maybe we can try some experiments, if we think we have some headroom over the weekend, to turn it up and get some progress there.
E
And just to share: I think the situation, the system saturation, is getting slightly better. Let me share my screen. I think we probably have some kind of room to swallow up, or eat up, the system resources. This is the saturation, the primary saturation. If you look at this green line, that's about the trend; that's not exactly the trend, but that's very close to a trend curve. You can see it's slightly down. This is when we switched over to PG 12.
E
It looks like, you know, the peak is also lower than the other peaks, so the saturation looks slightly better. We can use the system more, you know, a little more aggressively. And this is what Andrew developed to show the Linux kernel wait.
E
The job wait time, so it's also much better than in the past. In the past, on PG 11, those nodes, the CPU wait was like 0.3 or 0.4 here, but now we are below 0.15 after the switch.
A
Yep, all right. We'll check in on Thursday, see what the daily build, or what our daily progress, looks like, see what we're projecting, and then we can formulate a plan on Thursday.
A
So Jose is not going to be here now. The one issue that did come up for us to pick up is this load balancing issue. I think, Andreas, you've seen this one before; we talked a bit about it. Is there anybody that can pick this up in 14.0, since we are near the end of...?
B
I'm mainly spending time on the CI queueing incident, and I don't know where this falls in terms of priority, so I would have to look.
A
Back to... so we talked about that one, the TimescaleDB prototype. So Nick offered to create a TimescaleDB prototype; it's related to the time-decay blueprint. I have a sync meeting with him later on today.
A
He thought that he could crank out a proof of concept on this pretty quickly, like, I think he said, within a day or two. So I'm just going to sync up with him on what the goal of his proof of concept is and how it helps the team. To be honest, I invited you, if it works with your time. If not, we can record it; you can catch up with it later.
A
And Ollie, if you want to join, I'd be happy to invite you. Andreas, I figured it's too late in your day, so I didn't invite you anymore. I'll make sure to record it, like I said, and we'll post it later.
B
That'd be awesome. Especially since, I think, we talked about Timescale before, and for me it's still not clear enough how we would benefit from time-series-specific features and optimizations. Like, I understand TimescaleDB is primarily targeted at time-series data, whereas our data model is typically not. So that would be the big question mark that I have in terms of TimescaleDB, and if we can clear it up, that would be awesome.
B
Yes, this is related to the CI queueing incident, and I've only linked one of these, I think one of the more recent ones. If you look at the issue tracker, then this goes back until March, I think. So we've been seeing this on and off many times, and there is a database query involved that is becoming famous as "the big query".
B
I think everybody knows that by now, and we drilled into the reasons for seeing the slowdown, and we did also ship indexing improvements that were picked up. But unfortunately there is one piece missing, or something that holds us back from using those optimizations, and that is ultimately related to, basically, the fill factor of the table, or the ability to use what we call HOT updates in the database. And I would like to make an experiment where we change the fill factor for the table. That is basically saying that we reserve some space in pages for the updates that we're seeing. So we're seeing a lot of updates on those records, and that's what's causing the problem on the database side, I think.
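A minimal sketch of the fill-factor experiment described here, assuming ci_builds is the target; the value 90 is illustrative. Lowering the fill factor leaves free space in each page, so updates can land on the same page as HOT updates:

```sql
-- Reserve ~10% of each newly written page for updates (HOT-friendly).
ALTER TABLE ci_builds SET (fillfactor = 90);

-- The experiment is easy to undo; the default is 100.
ALTER TABLE ci_builds RESET (fillfactor);
```

As the discussion below notes, the setting only affects pages written after the change, which is what makes it a low-risk, revertable experiment.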
B
Literally the last band-aid that I'm aware of that can help us in this situation, so that we don't see this as often or as pronounced. And that's something that I would like to do this week, if we can. I'm gonna open a change issue for that, so that we change the fill factor in production, let it sit for one day, one or two days, and see how that affects the query performance.
B
That's only applied for new data, or new pages being created in the database, so that's a change that we can also revert without causing any trouble, in a sense. So yeah, that's where I'm still looking for comments, if you have any. And then, longer term, I heard from Grzesiek that he's actually starting to work on the change in the data model today.
B
So that's gotten priority now, and the idea is that we basically extract the pending build information and separate it from the very long history of builds that we have in the database, and that's the way forward. Everything else is just a band-aid, I think.
A
On the pending builds queue data structure: is that something that shrinks over time? So it's ephemeral, the rows are ephemeral? It doesn't just build up, right? We don't just continue to build up pending rows; after something's done, it gets deleted and it goes... okay, yeah.
B
To illustrate that: we have, I think, about 1.5 terabytes of data in this ci_builds table today. That's our history, even more perhaps, plus indexes on that. And then, when you extract the pending and the running builds, you end up with a table that is 200 megabytes, and this is really what we're interested in in this case, right? For CI queueing, you want to know what is pending and the running builds.
B
The way we do this right now is basically look at those 1.5 terabytes; that's what's causing the problem. Yep. And then queueing in Postgres is not nice because of MVCC, so that doesn't align very well with Postgres. But it is at least a step forward where we separate those things, and the table size should remain relatively the same over time, basically, so it's not increasing.
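A sketch of the extraction idea under discussion, with assumed, illustrative names; the point is that the queue lives in a small, high-churn table instead of being derived from the full 1.5 TB history:

```sql
-- Hypothetical queue table: one row per pending build, deleted once the
-- build is picked up, so the table stays around ~200 MB while the
-- ci_builds history keeps growing.
CREATE TABLE ci_pending_builds (
  id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  build_id bigint NOT NULL UNIQUE
    REFERENCES ci_builds (id) ON DELETE CASCADE,
  created_at timestamptz NOT NULL DEFAULT now()
);
```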
B
And that was working, but I haven't checked if none of these remain; that's something that I should do. I'll take a note. Okay.
A
Looks like Krasio got us ready for ci_build_trace_chunks. Andreas, you get the next one.
A
Alex is not here for that one, so jump over to "in review". Read-mostly: any updates on that one, on the pattern?
A
Let's see, Heinrich's not here. So, I saw a lot of good conversation about adding the batched migration information to the admin panel; he's coordinating with Sunjin on how it looks. Alex, reporting data state and clone ID: any updates on that one?
B
I think we... I'm not really sure if you watched it, but I think it's ready. I think perhaps it's for review. I think we'll...
C
No, no, there's nothing waiting. I may do one last small update, but the document won't change; it's not like there is anything new, and the database maintainers have reviewed it. So yeah, let's give Jerry some space. I think that he had a full schedule with updates, and I'm keeping him late this week.
A
Okay, and this blueprint is not blocking any work at the moment, so we can give them the space, that's fine. Thanks, Ali.
A
One more... so Krasio is out for the rest of the week. Is there anything in here that needs to be picked up by Heinrich or anybody else, so that we're not blocked on these top two?
B
I think it would be great if we ship the primary key migrations to self-hosted, so managing to put that into the current release would be good, so that things start to happen on self-hosted as well.
C
Yeah, I gave the green light so that we can move with Krasio's plan. Okay, and yeah.
B
Yeah, there was a problem with the Prometheus metrics for the batched migrations. That's been addressed, so in theory we can get those metrics on a dashboard now, I think. I realize I have too many things open; that's why this is still there.
B
ci_build_trace_sections, that has the problem where we need to decide what to do with the non-standard, or composite, primary key. Basically, priority-wise, I think that is not necessary to do until we have capacity to run it in production, which is apparently still like two weeks out, but until then we should probably decide on that one. Testing the migration pipeline: that was something we discussed.
B
I think, last week, we said we would like to have a mechanism so that we can easily test changes that we make to the testing pipeline. So we can...
B
There are a couple of pieces missing, where we then need to support posting comments on the ops instance and also on GitLab.com, and so these things still need to be added. Okay, and... index. This, the context, is reindexing. We did have a situation with reindexing over the weekend, where dropping an existing index was basically being blocked by an autovacuum process.
B
That was happening at the same time, and that was also not stopping, for good reasons. But that basically leads to... this is not a problem, but it leads to an alert for the EOC reporting about the long-running transaction, because we see this transaction wait basically forever. And it's relatively straightforward to address.
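One straightforward way to address it, sketched here as an assumption rather than the team's actual fix: bound the wait with a lock timeout and retry later, so a concurrent autovacuum can't leave the drop, and the resulting EOC alert, hanging indefinitely:

```sql
-- Give up after 10 seconds instead of waiting behind autovacuum;
-- the drop can simply be retried later. Index name is illustrative.
SET lock_timeout = '10s';
DROP INDEX CONCURRENTLY IF EXISTS some_old_index;
```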
B
We have some kind of query parsing going on, because when we see an exception in production that is related to SQL, then we want to send this information to Sentry. But we don't want to send the full query; we want to normalize it...
B
...so as to remove the sensitive parts. And we use pg_query for that, which is an awesome gem for Ruby to parse PostgreSQL, but we were using a version that wasn't supporting, I think, Postgres 12 features, if I'm not mistaken, and that failed. Basically, we created a covering index, which is relatively new syntax, and that failed, and then it was basically hidden by a failure to normalize the query, and that was caused by the pg_query version. So this is just about updating the pg_query version.
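For reference, a sketch of the "relatively new syntax" in question, a covering index with an INCLUDE clause; the index and column names are illustrative assumptions, not the actual statement:

```sql
-- Covering index: extra payload columns ride along in the index leaf
-- pages so lookups can be satisfied by an index-only scan.
CREATE INDEX CONCURRENTLY index_ci_builds_on_status_covering
  ON ci_builds (status)
  INCLUDE (id, created_at);
```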
C
We have 23 failed jobs in ci_builds, but it's okay, as long as we can retry them. But we still have this problem where we keep them on status 1, you know, running. So we tried to fix that... they are still not, for whatever reason, we were not sure, and Heinrich had tried to make a fix there, so that they're labeled correctly as failed and with the finished_at set. This is not the case for most of those jobs, but at least we know that when we retry them, they...
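A hedged sketch of the cleanup being described, assuming GitLab's batched_background_migration_jobs table; the status codes and staleness cutoff are illustrative assumptions:

```sql
-- Mark jobs stuck in "running" (status 1) as failed, with finished_at
-- set, so they can be retried cleanly. "2 = failed" is an assumption.
UPDATE batched_background_migration_jobs
SET status = 2, finished_at = now()
WHERE status = 1
  AND updated_at < now() - interval '1 hour';
```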
B
I think the remaining issues... oh, they're actually gone; I haven't checked. "Investigate a lot of automatic partition creation": I think that's also kind of a support issue, if I remember correctly, for someone with a strange database migration.
B
Not sure how we, you know, how we deal with those things, going back to: how do we support anybody without, you know, paying for support?
F
Yeah, that's, remarkably enough, one of the reasons why I joined, to discuss that. So I'll probably tackle this; I've got a paying customer interested in this issue. So it's my intention, when I've got time, to figure this out and then check whether or not I've got it right.
B
Right, now I remember, that was different then; we had a couple of issues that were, I think, coming from non-paying customers, but this one is different, for sure. That would be interesting.
B
If you find out I can help with that, let me know. We should definitely know why those partitions are not being created.
B
Great. Second one: created it last week, finding that we have one index in production that is marked as invalid, which shouldn't be the case. And this goes back to our migration helpers not really paying attention to the invalid flag for indexes. So, for example, when you create a new index, it's being created concurrently, and in this mode it can actually fail and leave behind an invalid index. And then, when we retry the migration, it would just pretend that this index already exists, which it does, but it isn't valid.
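A small sketch of how such an index can be spotted, so a migration helper could treat "exists but invalid" differently from "exists"; this is a standard pg_catalog query, not GitLab's actual helper code:

```sql
-- List indexes left invalid by a failed CREATE INDEX CONCURRENTLY.
SELECT n.nspname AS schema, c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE NOT i.indisvalid;

-- The usual remedy: DROP INDEX CONCURRENTLY the invalid index,
-- then re-run the migration to create it again.
```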
B
We're seeing that. I think we should be looking at what we can do about it. And there was a production issue where we, yeah, fixed the environment index.
B
And lastly, Ruby DNS. There was a posting on the development channel; I think Ruby seems to have problems with long-running processes and DNS lookups, and that is an essential part of our database load balancing strategy, because we use Consul DNS to figure out which replicas are still available. And I just would like to review that this is either not affecting us, or we have to fix it.
D
I think I saw that the bug was introduced in 2.7.3, and I think we're on 2.7.2. So I believe it's not.
B
Yeah, that would be good to know before we upgrade.
A
And I'll catch up with Heinrich later, make sure he's got the triage rotation covered; if we're going in alphabetical order, he's up next. I don't know that there's anything... I've looked at this recently, so.
F
Yeah, I wanted to cover off that issue raised around support-like issues. Being, well, support-like issues: issues that are actually support tickets. So certainly, if it's a paying customer, then get them to raise a ticket; that's fairly simple. If they're not a paying customer, then that's quite interesting, because, you know, we...
F
We only provide support for paying customers in Support, and I was curious, talking about this with my manager actually just before this meeting, whether there was guidance in Engineering for where the line is between investigating issues in the product and providing support. Because you have a team to do that, and then the team has rules about who we don't provide support for.
F
So, and I do appreciate that some of these... I mean, one of the issues that's in your boards, yeah, that's a fairly complicated piece of work, not a great place for that user to be in. But you can certainly imagine there are going to be situations where users of our product do all sorts of crazy things, like downgrading when they shouldn't and stuff like that, and why would we provide support to get them out of that mess? You know, that's what your backups are for.
B
One of the extreme cases that we've seen recently was where customers would basically change the database in certain ways on their own. Like, I remember a customer having a gitlab_old schema in the GitLab schema, and then things were not working out, because that is kind of unexpected. And that's been popping up a few times, I think, where it was clear that they were trying to manage their database somehow, but that is sort of unexpected, or not in line with what we expect from the application side.
B
That was causing problems, and I have the same instinct; I would also say that this is a "voids your warranty" kind of thing. It's...
F
Yeah, that's the thing: if it's a paying customer, and going forwards that pretty much means Premium or Ultimate, then they get upgrade support. So they give us their plan, we review their plan, we tell them "no, you really don't want to do that, it'll break the database".
F
And then, if it blows up, then they've got support, and we know what they did, because they followed their plan, and we tested that, and we checked they had a backup plan as well. So paying customers, you know, are increasingly all in a good place, because of the direction of travel on what tiers are available. But yeah, I'm certainly mindful, off the back of that issue and the other stuff I've seen going on in the database space.
A
A whole network of them. To your previous question, is there guidance on, like, differentiating how to support different customers, whether they're paying or non-paying? I don't know that there is. I don't know that there's, like... I would almost call it triage documentation. You know, if an issue is submitted, and then you had, like, a decision tree. Does anybody else know of where, in our documentation... where? Yeah?
A
Okay, I mean, if someone reports a bug, then we do need to spend some time investigating the validity of it, right? See if it's truly a bug, and then from there it would go to: okay, this isn't a bug, this is a "read the manual" problem, and then figure out...
C
Can I add something? It's a little bit more complicated, because we are also an open source company; we have an open source product. So everyone... GitLab and all the backend engineers are, we'd say, open to everyone, and that's our culture, to engage with the community. So when someone comes... it's open for anyone to open issues and talk with the GitLab team, and I think that it's in our culture to engage with anyone starting an...
C
So that's the problem there, sometimes: between engaging with the community and solving production problems for somebody doing something crazy. And this is somewhere where we have to figure out and think about what's the line. But I think that we should keep on engaging the community, because GitLab is an open source product, but yeah, we have to draw a line somewhere.
B
I find it really difficult to engage with the community when there is, you know... there's a bug report and things are breaking. Obviously, we engage, and I think we should definitely continue doing that. But I find it difficult when it becomes obvious that, well, they were rather creative on the database side, and this is unexpected for us.
B
I think the difficulty is: how do we communicate that they are actually in a situation that we can't do anything about? I think I haven't seen us doing that on those issues; rather, we would invest a lot of time and come up with very detailed plans on how they can sort of mitigate the situation, even though we don't have access to their system.
B
So it's also a bit of a risk for us doing that nowadays, because we might just be wrong about guessing how their system looks, and then they have a plan, they're executing a plan, and maybe making things worse, and we put up the plan. So I find it difficult, and especially difficult to communicate the fact that this is nothing that we can deal with, really. And I was wondering, should we...?
B
Do we need to be more verbose about telling folks that, if they needed to do something to the GitLab database in any way, then this is likely to cause problems? For me, I think that is rather obvious, but we do have installations that are kind of creative about it.
A
I'm not sure what the question is there. I was gonna go back to what Giannis was saying about how we should engage, and I don't totally agree. I think there's a distinction between, like, submitting code for review...
A
That's an easy one, where we should engage, because we are open source. But when they're submitting bugs, I think there's a differentiation there, especially when it's obvious that it's self-inflicted, right? And that's when we redirect and say "hey, you know, send it to support", and maybe let Ben differentiate, like, okay, paid versus not paid. And that's where Ben can start selling, like, "hey, you know, you could get better support if you bought a license", or "here's the manual, please read from there". So I think, for me, that's the differentiation.
F
So I've put a link there to what it does say in the handbook, which still doesn't really address the issue, which is: how do you tell the difference between a bug and something else? And I know, I get it all the time: you go down a rabbit hole investigating a problem. Is it a bug in the product?
F
Oh yeah, there's a communication breakdown. Is it a bug in the product, or is it because their proxy is misconfigured, in which case it's not a problem? It doesn't really tackle that. But I guess, pushing on to the answer: if something is looking like it's a support issue, and it's something specific with the state their database is in, and the main purpose of the request is to fix their database...
A
Should we, to answer, to get specific on Andreas's question: if we want to redirect, do we just assign them to you, and then we can figure it out from there? You mentioned...
F
Yeah, assign... I just... I don't really work in issues, I work in Zendesk, and I think my workflow, my backup to that, is emails. And, you know, we all know how good a workflow anything based on email is. So just make lots of noise, and I'll try and tackle it in one pass, basically.
F
I mean, I think we're starting to see some specific things customers are doing, like putting things in the default schema that GitLab doesn't expect to be there.
F
We've got an example of some sort of auditing thing where, it looks like, they've literally googled how to audit changes, and it says "do this thing in the public schema", and now GitLab's tripping over it. So yeah, I think, if we're going to learn anything from these issues, it's to start to compile a list of things not to do, some assumptions that are made that you shouldn't break, stuff like that. And I think the place to put that is going to be fairly early on in our documentation.
F
You know, we've got, like, a section that describes the prerequisites for GitLab; that includes things like the database plugins. Fairly high up in our documentation, about, say, external databases or the use of the database in the product, it should say "these are the following things you shouldn't do". And yeah, I'm absolutely happy to contribute to that, help out with that, whatever.
C
Should we be opinionated that they should add nothing in the public schema, in the gitlabhq database? Because this is still open, so we can switch to that mode and say, write somewhere that they should not add new schemas, no new plugins or whatever, and that GitLab owns that database and we can drop it at any time, for example.