From YouTube: 2021 06 15 Database Team Sync
C
All right, we'll just jump into it and let Jonas catch up with us. So, first topic: we're reviewing infradev board issues with Jose. I didn't see anything new on there. Jose, any questions or pressing issues on your side?
C
Yep, yeah, no problem. Thank you. Just wanted to call out the nice work done by Heinrich to drop the number of tuples called by that query. Is it fully out on production yet?
C
Nice, very good results; there's a bunch of graphs in the link there. So thank you for taking care of that. And yes, as Andreas called out, we dropped 1.5 terabytes worth of data this week by dropping the old webhook logs table, and we're going to drop another 900 gigabytes once we drop the old partitions.
G
Hey, if we're not using that space, can I use it?
C
Yeah, I'll get back to you on that, Alex. All right, thanks. Next topic: there's a thread going about a request from a customer to continue support of PG 11 for what is now an unknown period of time. So far they're saying six months. The unknown piece is that it's on Azure, the PostgreSQL Single Server: they're going to skip 12 altogether, they're not going to support 12, they only support up to 11 at this point in time. For Flexible Server they do have support for 12 and 13, but it's in preview only, so our customer is kind of in a hard spot right now. They're asking if we can continue to support Postgres 11 for six months, I think is the ask, but even then it's unknown if Azure will have the support that they need to continue to run. So there's a lot of good commentary on that thread. I don't know, did we ever come to a consensus? Are we still just throwing out ideas on that?
A
I think we're still discussing. So yeah, one of the problems for us, basically, is that we can't really use any of the partitioning features in Postgres 12, and that means we can't tackle the large tables that we have, because of the lack of foreign key support for partitioned tables.
A
All right, yeah, but you know, there are very large tables: a five-terabyte table, that's, you know...
F
So I have a naive question, and I have in the back of my mind a few conversations about maintaining parity between self-managed and dot-com, right. So this may be a repetition of that. But one question that I had is: already, at this moment in time, right now, we have Postgres 11 and we also have Postgres 12, where Postgres 11 is actually deprecated, right?
F
And so in 14.0 we were going to remove Postgres 11, right, but we have already upgraded to Postgres 12, for example, on gitlab.com. So my question is: are we taking advantage of any Postgres 12 features already, right now? Or have we decided that, because we don't want to diverge between self-managed customers and gitlab.com, even though we've upgraded, we will essentially hold off until Postgres 11 is completely gone? Is that kind of the situation that we are in?
F
Because I think this was my question: I know that some features were built that have nothing to do with this, right, that utilize Postgres 12 features, and the customer impact there would be, well, stay on Postgres 11 for a little bit longer, but you will not benefit from those features. Which in my mind is not great, right, but it probably incentivizes people over time to actually upgrade. You want to avoid it as much as possible.
A
Yes, there are like two points. One is the schema: since we always support more than one major version at a time (right now, before fourteen, it's eleven and twelve, and then it's going to be twelve and thirteen), following the plan that we had, we always have to keep the database schema compatible with the minimum version that we support, basically, and the newer version is going to support that too.
A
But you can't do it the other way around. And then on the application side, you might choose to use certain Postgres features that are only available in the most recent version that we support.
F
Like, I understand all of these options are not great, right, but I think the question is: in some cases you have a path forward, and you know that you have some negative impact because of problems like that. But, for example, I'm particularly interested in understanding, simply put: if we extend our PG 11 shelf life, would that essentially mean no partitioning work that uses PG 12 features until the point when we remove that database version? I think that's what I'm hearing.
A
Yes. For the large tables, which are the ones that we should tackle anyway, that's going to be a problem. We're not going to be able to partition those until we have dropped Postgres 11, or we decide to give up referential integrity in the database, which is also probably not something we want to do, right. This is what has been added in Postgres 12: the full support for foreign keys when you work with partitioned tables. Think about ci_builds.
H
On top of what Andreas discussed: our plan, we thought, if we were never to go to PG 12, was to build those features ourselves. So we were discussing it, and had started some work on adding support for foreign keys. But the fact that we decided to go to Postgres 12, and we decided a year ago, meant for us that we could stop that effort and...
H
...focus on everything else, and depend on the fact that we will be on PG 12 and use those features that PG 12 offers us, especially the foreign keys, which we need for all tables except the ones that we have already added. I cannot think of any table right now that needs partitioning and does not have any foreign keys, so effectively...
H
At the moment we cannot go back. We could if we wanted: if we had the year, and we said, let's say we pause on PG 11 for a year, we would start again thinking about building this feature on our own. But it's not like, if we go for two or three or four months, we will do partitioning; we will wait to go to PG 12, and there are all the upsides in the world to go to PG 12.
F
This is a pragmatic question I also raised in the thread, and I think nobody has really answered it yet. I assume that at this point a lot of MRs that clean up PG 11 have been merged, right? And if we decide to actually continue supporting PG 11, all of that stuff needs to be reverted, or at least managed in some way, and I...
H
In my mind, yes, but we will have to go back and check, because we have started removing things, so...
A
Just wanted to add that if we decide that we need to revert those things, we have to do that before we tag the release for 14.0. So that means we would have to do that within the next few days, basically. Yes.
F
If we were to extend it for three months, anything that we just discussed essentially doesn't change, right? We would still have to do that work. But would that be an acceptable time frame with regard to the growth of those tables for gitlab.com?
A
Let's talk about ci_builds, because that's the largest table that we have. Delaying that for three releases, three months, puts us into a position where we can only start shipping the first partitioning migration in 14.3, perhaps. And then those partitioning migrations take a few releases, due to how that is being released to self-managed. So we can't really do that in one go or in one week; we could do it in one week on gitlab.com, but it's very constrained by self-managed release cycles.
A
It's going to take like two to three months, in addition, until we can basically get the benefits from partitioning. So that means basically delayed for three months, and three months for a migration; perhaps add another month, because those are going to be large tables that we have to migrate, and the longer we wait...
A
The
the
longer
is
the
migration
government
going
to
take
due
to
the
data
size
and
we're
talking
about
basically
not
having
any
partitioning
migrations
within
this
year,
and
that
means
basically,
as
we
discussed
in
the
beginning,
that
this
table
is
going
to
be
five
terabytes
at
the
end
of
the
year.
And
we
have.
We
have
to
deal
with
that.
Basically
and.
A
We already see incidents in production where basically a lot of people are looking at it, we're drilling into the problems, and often the conclusion is that, yeah, we have to break those table sizes down. So with the SREs, what has been established is that, yes, we should have a target of cutting down table sizes so that the physical tables are below 100 gigabytes, let's say, and this means we have to do some kind of partitioning, and that's dependent on the Postgres version.
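The size target above implies a minimum partition count for the five-terabyte table mentioned earlier. A rough sketch of that arithmetic (the function name and the equal-size-partition assumption are illustrative, not from the discussion):

```python
import math

def partitions_needed(table_size_gb: float, max_partition_gb: float = 100) -> int:
    """Minimum number of equal-size partitions that keeps every
    physical table at or below the size target."""
    return math.ceil(table_size_gb / max_partition_gb)

# A 5 TB table against a ~100 GB-per-physical-table target:
print(partitions_needed(5 * 1024))  # 52
```

In practice partitions are rarely equal-sized (a table like ci_builds would likely be partitioned by time or ID range), so this is only a lower bound.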
F
If the vendor chooses to not do anything in that time, or things don't happen, or whatever, we'll be in the same spot in three months, right, and then we'll have the same discussion. And you know, my question is: what will change in the meantime? We don't have any control over that whatsoever, and so that makes me a little bit worried, because you're kind of tied to that. And I'm not really positive that the customer's problem will go away in the next three months, or, for that matter, in six.
F
Right, I just don't know, and so that's maybe something to ask as well, because another option for that customer would be to actually move to Omnibus, right? Or, you know, they stay on 13.12 for a longer time, but that has security implications, because we can't easily backport; that's also really hard.
F
So
it's
like
in
a
way
we're
maybe
suffering
here
from
the
implementation
plan
for
that
customer,
not
taking
into
account
that
the
vendor
that
was
chosen
for
postgres,
you
know,
does
not
actually
align
very
well
with
our
postgres
strategy
as
a
company.
So
here
we
are
okay
that
helps
a
lot
andreas.
Thank
you
so
much
for
highlighting
this,
because
I
didn't
fully
appreciate
the
like
the
reasons
why
we
couldn't
take
advantage
of
postgres
12,
while
maintaining
the
other
postgres
version.
A
Exactly. Okay, and, sorry, just to add to that: something that is not really visible in that regard is that the problems with the large tables also affect development in the end, because we have to find rather creative solutions for dealing with those large tables. An example was the data migration for ci_builds extracting the pending builds, where we actually spent a lot of time dealing with those situations, because we can't just, you know, do that...
A
Anymore,
so
that
also
feeds
back
into
development,
and-
and
it's
not
only
a
good
comp
concern
in
that
sense,
would
it
be.
F
Right, it was about writing something up, right? Yeah. If you summarize it exactly like we did just now, I think it would become a lot more apparent that this is not a technical "things are complicated"; it is really a "if we continue to pursue this path, we won't be able to do this and we won't be able to do that". This is the risk.
F
I
think
that
needs
to
be
clear
because
I'm
not
quite
sure,
that's
understood
right
now
and
then
follow
up
with
josh
right
now
to
make
sure
that
that's
clear,
okay,
cool,
because
we
may
still
end
up
doing
this
anyways
right
like
I
can't
this
is
not
my
decision
to
make,
but
I
want
to
make
sure
that
people
understand
the
the
risks
associated
with
that
right.
Yeah,
exactly.
F
Yeah, no, that is true. I think it's maybe the last two releases rather than the last three releases, but I think the point still holds.
C
Yeah. So, I mean, other companies that I've worked at have maintained a long-lived branch of the older releases, and they will spin those up and fix them for customers that refuse to move forward. That might be an option for us too. We would just say: yeah, we'll only do critical fixes for you, but you're not going to get any new features or anything if you, for whatever reason, just choose to stay on 13.12.
F
I've been in that place with a couple of customers. It is a very not-fun place to be, because things change a lot, so I would really, really like to avoid having the customer-specific special flavor. But absolutely, on the other hand...
F
You know, it's a choose-your-poison situation: what is worse, right? And I absolutely do know that, based on the deliberations of us as an organization, we have backported things for people for like 10 releases if that was necessary. So it has happened, but it's not something I think we want to do or think is great. So I think...
H
I think that was Marin's point in the thread as well: our code changes so fast that if you go back six minor versions, six milestones, you most probably cannot backport; you have to rewrite the code, the patch. So he was worried that you would have to have a team working on the same feature twice.
C
So
I
don't
quite
understand
the
difference.
Fabian
you
mentioned.
Maybe
they
should
go
with
omnibus.
I'm
not
sure
I
understand
the
details
there,
but
is
there
an
option
for
them
to
just
use
a
different
provider?
I
know
it's
it's
a
simple
question
but
huge
implications.
F
Right, I think, I mean, probably, I don't know, but I don't know why they chose that specific setup. I think that was a professional services piece. I think what we would have to offer is some kind of migration path, and it would entail downtime. But it is also the case, like, I think...
F
Catalin has highlighted that that specific customer, for example, relies on Postgres replication for their disaster recovery with Geo, and Azure is not going to offer that, for example, in the beginning. So even if they had 12 or 13 available, they wouldn't be able to use it. So I don't know. I'll follow up; I think there's more than one path forward, and maybe the decision will be, you know, delaying or postponing stuff.
F
It's just not the right business decision to make at this moment in time. And I also think this is maybe a retro piece for next time, when we consider our policies for changing Postgres versions. I think we should have a pre-written statement for the next time, because it's always like this: insert customer here, two days before this happens.
F
I think I'll let Andreas do the summary. I'll check with Josh to make sure that he's briefed, but I'll probably add my two cents as well. Okay.
C
Cool. All right, next bullet point, which you all can read: 14.0 release prep. I think they're going to try to tag the release today, the 15th. There's a lot of information there, since it's a major change, lots of breaking changes, so be prepared. On to number four: Postgres post-deployment migrations are failing in staging, and the question I had was, did we have a miss by the migration testing framework? And Andreas answered yes and no: we didn't run that test prior to merging the MR.
A
Yeah, I think that's what we missed, basically, looking at the results. We ran it after we merged, I think, just to see what would have come back, and we would have seen it. So basically, I think what we're already working on is...
A
We
want
to
basically
put
a
large
exclamation
mark
on
migration
when
it
takes
too
long
or
when
it
you
know
otherwise,
is
not
suitable
to
to
be
merged.
So
that's
one
sort
of
visual
that
we
will
get
to
to
make
that
more
apparent
and
then
running.
The
testing
framework
is
also
something
that
we
discussed.
It
is
not
being
run
automatically
at
the
moment.
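The "large exclamation mark" idea amounts to checking observed migration runtimes against a threshold during review. A minimal sketch of that kind of flagging, assuming runtime data is available; the function name, data shape, and the 600-second limit are all hypothetical:

```python
def review_flags(migrations, max_seconds=600):
    """Return one warning line per migration whose observed runtime
    exceeds the review threshold."""
    return [
        f"!! {name}: {seconds}s exceeds the {max_seconds}s limit"
        for name, seconds in migrations
        if seconds > max_seconds
    ]

# Only the slow migration is flagged:
timings = [("add_index_to_events", 1200), ("add_column_to_users", 30)]
print(review_flags(timings))
```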
A
Perhaps
we
you
know
we
should
try
what
what
that
means
in
terms
of
traffic,
but
we've
also
seen
situations
where
it's
just
taking
or
basically
overloading
database
lab,
because
there's
so
many
things
going
on,
but
perhaps
that's
the
next
step
for
us
to
try
that
and
make
that
an
automatic
kind
of
thing.
A
That's a good idea. Yeah, we can run some numbers on the trigger, I guess. Okay.
A
Yeah, that's kind of what we will have to see. I think it goes both for the runners and the Database Lab instance.
A
I think the Database Lab instance is probably more likely to be the bottleneck here, because the migration runner is just a Ruby process that runs very expensive queries on the database. So I think it's more likely that we're seeing problems with Database Lab, and then scaling that is also possible, like adding more Database Lab instances, but we would really benefit from making that our standard setup, so that we can bootstrap another instance easily.
G
I was thinking we should add it. I wonder if it makes more sense to just add it as one of the review criteria and then educate people. In fact, we could even make it a preparation step: instead of having reviewers do it, have authors be like, hey, when you're preparing for a database review, make sure you go and run this job first.
C
All right, next: an update was put together on finalizing the new batched migrations; you all can take a look at this.
G
Apologies, yes. That will be empty on gitlab.com. You could check staging, because we do have a Geo node in staging, if you want to see how things went there.
C
One moment. Heinrich, you have number six.
E
I saw the announcement from the sharding group: they decided on a path forward to move the CI tables to a new database, if I'm correct. And so I thought maybe we could defer the renaming then, because it would be a similar thing, right; we would be backfilling on a separate database or something. Does that work? Just so we could give a final answer to Verify and say: no, we're not doing the renaming during this primary key migration.
H
I think the work by the sharding group is orthogonal to any work on those tables: redesigning, partitioning, renaming. Because I think the sharding group will just move the table to another cell, to another database, and we cannot do anything to it while it is moving. So there is the process of moving and switching the application to the new shard, and there is everything else. And from...
C
Just the fact that they keep asking questions about, well, if we're going to rename X, should we rename Y: they can do it later. Someone can do it later. Let's not complicate our migration and our primary key problem with renaming. And the last time I talked to Camille about this, he totally agreed: this is entirely cosmetic, and if it needs to be deferred, let's defer it and not add more problems to something we're trying to solve right now. So we're going to defer renaming. Let's just call it good.
C
Yeah, yeah. So they're targeting the CI tables. I forget the name of the one table, but they're going to start with one table that has no rows in it right now.
A
Speaking of the migrations: remember there was this one job that failed. We discussed it last week already, and I've retried it with the same settings, and it once again failed. And then I even moved the sub-batch size to 100, so from 2500, I think, to 100.
A
It
ran
for
like
50
minutes
and
then
it
failed.
I
haven't
yet
had
time
to
look
into
it,
but
heinrich
was
looking
at
it,
pointing
to
the
redis
interrupt
interruptions.
E
Yeah, I think, because just like the previous one, we're seeing no Sentry error, no exception, so it's probably a SIGTERM or SIGKILL or something that stops it. And this one, we saw it ran for four to six minutes, right, which is quite long, so it's very likely something stops it. And yeah, it's a bit counterintuitive, but maybe increasing the sub-batch size would...
H
That's a valid idea: if we increase the sub-batch size, we make the job move faster, if we don't break anything, so this is always a valid idea; I'm in favor. At the same time, we would get the same result if we could split the job into 10 jobs, so reducing the batch size, and that would be a more generic solution.
E
Yeah, because I'm also looking at this job for the retry button on the admin panel, right; I'm looking for a, what do you call it, more generic fix for failed jobs.
E
So
I'm
not
sure
if
the
retry
button
should
you
know,
lower
the
sub
batch
size
or
split
the
job
into
multiple
jobs
or
both
or
you
know,
what's
interesting
about
this
job
is
that
we
started
with
a
batch
size
of
I
think
15
000
right
the
default
and
then
because
of
the
optimization
it
got
to
like
2
million
rows
in
a
batch,
and
it's
close
to
our
max.
Actually.
So
something
to
consider
is
also
to
maybe,
should
we
lower
the
max
of
the
optimizer.
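The optimizer behavior described here (growing the batch from a 15,000-row default toward a hard maximum) can be pictured as a runtime-targeting scaler with a cap. This is a simplified sketch, not GitLab's actual batch optimizer; the target runtime, the cap value, and all names are assumptions:

```python
def next_batch_size(current: int, last_runtime_s: float,
                    target_runtime_s: float = 60.0,
                    max_batch: int = 2_000_000) -> int:
    """Scale the batch size toward the target runtime, clamped to a max.
    Lowering max_batch is the knob discussed for keeping individual
    jobs small enough to survive interruptions."""
    scaled = int(current * (target_runtime_s / last_runtime_s))
    return max(1, min(scaled, max_batch))

# Fast batches grow until they hit the cap:
print(next_batch_size(15_000, last_runtime_s=0.3))     # 2000000 (capped)
# Slow batches shrink:
print(next_batch_size(100_000, last_runtime_s=120.0))  # 50000
```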
H
And that job, if I recall correctly, had a batch size of one million four hundred thousand, and we had jobs with almost two million, so there were jobs that could make it with two million. So I'm still not sure why this one keeps failing, but yeah. This is the composite key one.
H
I'm in favor of finding a way to split jobs. Just the simplest thing ever: when one fails, we try splitting it in two, and then, if those two fail, split them in two again.
E
Yeah, tweaking the sub-batch is kind of easier when you retry, like Andreas did, just changing it. But the batch size is trickier, because you have to insert a new row for the other job, so it's probably safer to have like a method for it, something that could run on the console, the same thing that the retry button would call, rather than doing manual inserts.
C
Yeah, okay. So what's the follow-up on this one, then?
A
Should we also add cutting down the sub-batch size in this case? So that we basically cut the batch size in two, getting the two halved jobs, and we also cut the sub-batch size. Because for a customer, a self-managed installation, if they run into a problem, it might either be a statement timeout, so you want to cut the sub-batch size, or the job gets killed because it ran too long, and you want to cut the batch size. And they have no way of recovering nicely, I guess.
H
I would be more careful taking the sub-batch size down. So we will split the batch size in two and have a heuristic, a more linear heuristic, for the sub-batch size. And if this works, we can just keep on splitting and splitting; at some point we will run batch sizes of one record, and it will work.
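The heuristic described here (halve the failed batch's range, shrink the sub-batch size, and repeat on failure until, in the worst case, a batch holds a single record) can be sketched as follows; the names and the ID-range representation are hypothetical, and halving the sub-batch is just one simple choice of shrink rule:

```python
def split_batch(start_id: int, end_id: int, sub_batch: int):
    """Split a failed batch's ID range into two halves, shrinking the
    sub-batch size as well, so a retry does strictly less work per job."""
    mid = (start_id + end_id) // 2
    new_sub = max(1, sub_batch // 2)
    return [(start_id, mid, new_sub), (mid + 1, end_id, new_sub)]

print(split_batch(1, 1_000_000, 2_500))
# [(1, 500000, 1250), (500001, 1000000, 1250)]
```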
C
Okay, I'm going to jump over to the triage board. We can get back to the other board if we need to. So Molly was on triage last week; I don't think anything exciting came up. Andreas, you're up next. And then I added a blurb about triaging the Postgres checkup report, so if anybody has any feedback on that... I know, Andreas, you put some feedback in there; I haven't read through it yet, to be honest with you, trying to figure...
C
No, no, I actually ended up not working in the afternoon yesterday; I had a massive migraine, so I just haven't caught up yet. And there's an ongoing conversation about the database peak performance report, which kind of ties into this; I'll link that issue here as well. So there's a lot of information that we get between the database peak performance report and this one; there's a pretty good overlap between the top queries.
C
There are some pretty strong opinions on both sides, so take a look at this MR, and I will link the discussion about the database peak performance. You've all been copied on it, so you've all seen it.
C
So if anybody has any feedback, it's welcome and appreciated; we're trying to iterate on how to make those reports useful.
C
So
closed
I'll
get
the
first.
C
All
right-
and
then
we
already
talked
about
dropping
then
on
the
cables
for
webhook
logs
awesome
ali
in
review,
sort
and
filter
migrations.
D
They're
already
filtered
like
we
basically
separated
the
dot
com
migrations
into
a
separate
collapsible
section,
and
now
it's
just
about
sorting
the
migrations
in
the
order
that
they
run.
C
We talked a bit about the SELECT FROM pg_type. Is there anything else left to be done on that?
I
Yeah, I should have the MR ready for review that removes the round trip. Otherwise, I wrote the summary in there last week and I created that new issue, so we'll see if we get an answer on the gapless piece that we talked about.
G
Yeah, there's a merge request with the final parts of the changes in here. Simon helped a lot on a lot of the stuff, and we also took the opportunity to add RSpec testing, and I did some refactoring, so it kind of spiraled into several things.
A
I just rebased that one, and I was waiting for that automatic testing pipeline to come back. That's...
A
I know it works, and that's a great change for the testing pipeline: being able to automatically run an example pipeline and see the feedback, that's awesome. So I sent...
A
...the review tomorrow, I guess. And the table ownership still sits there. I think it was pointed out that we might want to follow the way we associate controllers and other stuff with stage groups, and that's a great idea, I think. I'll have to learn a bit more about that, and basically, I think I would like to drive the YAML-based documentation forward, and then we can see where we add the ownership information.