From YouTube: 2021-08-11 subtransaction savepoint sync
B
We've had a long discussion with Camille about the approach taken there, and I'm hoping for us to brainstorm a bit. We need to make a decision about which direction we take implementation-wise. There are two ways, and it's still not clear which way is better, so I think, for the sake of efficiency, we should make a decision and just proceed with one of them.
A
Thank you for the update. That is indeed an important piece, and in our meeting today we want to talk about a solution, because in standup yesterday it was proposed to remove the savepoints, or remove the subtransactions. But we already have comments in the issue that there are concerns, because this may do more harm than help, and we want to make sure that data is processed correctly and that we also have facilities to roll back operations if needed.
A
So the impact of removing the savepoints is, I don't know... That's why we're here today. Stan had several suggestions to resolve some known issues individually, but we still need to discuss.
A
What are the next steps for us to take? What do we want to do to mitigate the immediate risk, and also to come up with a long-term solution? Then, based on that approach, I will develop an action plan: how we want to schedule the staffing and the timeline for the long-term solution, and what we can do short-term.
B
I think that, in order to mitigate the risk, we would need to actually understand the exact mechanism of this problem, how it's happening, and it's still not exactly clear. What is the root cause, or what mechanism triggers the performance degradation? That's the reason why I think we should make progress on the instrumentation; it will shed some light on what is really happening there. Then, trying to optimize our code base in parallel, working on the things that, for example, Stan suggested: I think that's a good idea.
B
We might not be able to remove some subtransactions completely, because it would require us looking at the code base and seeing where they are actually necessary and useful. There might be cases where some subtransactions are useful, but I think that in most cases our usage of subtransactions stems from engineers not fully understanding how they work, and in most cases they should not be necessary.
B
What I suggest is that we add a rule to the static analysis tool that is going to warn engineers about subtransaction usage, and that would make them check how subtransactions are supposed to work, how this mechanic actually works in Rails.
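For context on the Rails mechanic mentioned here: inside an already-open transaction, a nested `transaction(requires_new: true)` block is emulated with a SQL SAVEPOINT rather than a real nested transaction. A minimal sketch of that emulation, using Python's sqlite3 (whose SAVEPOINT commands behave like PostgreSQL's) since a runnable Rails setup isn't available in a transcript; the table and savepoint name are made up:

```python
import sqlite3

# In-memory database; the SAVEPOINT semantics mirror PostgreSQL's.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, name TEXT)")

cur.execute("BEGIN")  # outer transaction
cur.execute("INSERT INTO events (name) VALUES ('outer')")

# What a nested transaction block does inside an open transaction:
# it issues a SAVEPOINT, not a second BEGIN.
cur.execute("SAVEPOINT active_record_1")
cur.execute("INSERT INTO events (name) VALUES ('inner')")

# A failure inside the inner block rolls back only to the savepoint...
cur.execute("ROLLBACK TO SAVEPOINT active_record_1")
cur.execute("RELEASE SAVEPOINT active_record_1")

cur.execute("COMMIT")  # ...so the outer work survives

rows = [r[0] for r in cur.execute("SELECT name FROM events")]
print(rows)  # the 'inner' insert was undone, the 'outer' one kept
```

Every such inner block costs a SAVEPOINT/RELEASE round trip on the server, which is the kind of per-call overhead a static-analysis warning could flag.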
C
I'm actually thinking that what you are doing can be super useful, to track not only the savepoints but also how many operations we do in the transaction. The same way we have a count of SQL requests per operation: if you are executing, I don't know, 100 queries in the transaction...
C
It's just going to take a lot of resources to then replay that on production, and it can create contention. So I'm kind of thinking that what we are seeing here is not that we are misusing savepoints; it's more that we are using transactions that are super long, that are super wide, that lock a lot of data, and that kind of creates a backlog of other operations happening elsewhere.
B
We checked that with Nikolai: we instrumented our code base a bit better to understand how many long-running transactions we actually have, and we were actually quite surprised to see that the number is smaller than we expected. Nikolai, do you remember the exact duration of the long-running transactions we saw?
C
If you have a lot of concurrent operations doing exactly the same steps, the first one inserting is going to create a lock, and the other transactions have to basically wait for it to succeed, and maybe it can be rolled back sometimes. But I know that in a lot of cases we do N+1: we solved N+1 selects, but we still have, for example, N+1 happening as inserts in some cases.
E
Just on that: we've been monitoring the long transactions as part of infrastructure for a very long time, and, just to frame things a bit, most of the long-running transactions have got an infradev issue, and they're some of the oldest infradev issues. The reason why they haven't been solved is that they're all difficult problems. The one that jumps to mind immediately is the archive trace worker, yeah?
E
So you know, there are issues, but they aren't simple. In most cases they're kind of tricky.
D
Go ahead. So, related to the long-running transactions: what we had in monitoring was an issue with wrong detection of autovacuum activity, which runs not VACUUM but ANALYZE, only ANALYZE, on some table. So for some period of time we thought that our incident was somehow related to long-running transactions, but it was autovacuum activity, so we excluded this second point. Also, we need to define "long" here, because when in monitoring we say a long transaction, we mean some number of seconds.
C
Like someone mentioned, these patterns of save, ensure something, insert: it kind of uses a transaction rollback to insert items one by one, and this is a pattern that is extensively used, at least in the StoreSecurityReportsWorker. This is clearly a pattern that, even if you just think about how these security reports are inserted, is going to create contention on the database.
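The insert-one-by-one pattern described here, one subtransaction per item so a failed insert can be undone without aborting the outer transaction, can be sketched as follows. This is an illustrative sketch in Python's sqlite3, not GitLab's actual worker code; the table, payload, and savepoint names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE reports (fingerprint TEXT PRIMARY KEY, data TEXT)")

# Hypothetical payload with a duplicate fingerprint in it.
incoming = [("fp-1", "a"), ("fp-2", "b"), ("fp-1", "a-dup")]

cur.execute("BEGIN")
for i, (fingerprint, data) in enumerate(incoming):
    # One savepoint per row: this is the shape of the
    # insert-inside-rescue pattern, so N rows mean N subtransactions.
    sp = f"sp_{i}"
    cur.execute(f"SAVEPOINT {sp}")
    try:
        cur.execute("INSERT INTO reports VALUES (?, ?)", (fingerprint, data))
    except sqlite3.IntegrityError:
        # Unique violation: undo only this row's work, keep the rest.
        cur.execute(f"ROLLBACK TO SAVEPOINT {sp}")
    cur.execute(f"RELEASE SAVEPOINT {sp}")
cur.execute("COMMIT")

rows = list(cur.execute("SELECT fingerprint FROM reports ORDER BY fingerprint"))
print(rows)
```

The outer transaction stays open for the whole loop, which is exactly how a worker with many rows turns into one long, wide transaction full of savepoints.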
C
Like, as a result, you would break a lot of behavior related to how data is being rolled back.
E
But how many places do we actually rely on rollbacks to a savepoint?
C
I mean, each time, if you are in a transaction, it will open... we create one because Rails wants to be able to roll back operations. If you do something finicky, for example in an after-save callback, and an exception is raised, it's going to automatically roll back. So I think in...
E
It's more like: in the application, are we relying on that behavior? And actually not, because generally we abort the whole transaction when a part of it fails, right? That's my point. And sort of on that: on some databases you don't have savepoints at all, right, and Active Record works in those environments. So, but yeah.
D
Also one point: when we say nested transactions, the nesting level is only one, because even if we have a hundred savepoints, it's still nesting level one; we don't have a depth of 100, right? It's only one or two, that's it. And rolling back to savepoints happens only to the latest savepoint, right? We never roll back to previous ones, so we could release them, right?
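The release idea can be illustrated directly with SAVEPOINT semantics: RELEASE keeps the savepoint's work but discards the marker, so you can no longer roll back to it, only to a later savepoint or the start of the whole transaction. A sketch in Python's sqlite3 (same behavior as PostgreSQL here), with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE t (v TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES ('first')")
cur.execute("SAVEPOINT sp1")
cur.execute("INSERT INTO t VALUES ('second')")

# RELEASE keeps sp1's work but discards the marker: after this,
# "ROLLBACK TO sp1" would be an error, and the only way further
# back is rolling back the entire transaction.
cur.execute("RELEASE SAVEPOINT sp1")

cur.execute("SAVEPOINT sp2")
cur.execute("INSERT INTO t VALUES ('third')")
cur.execute("ROLLBACK TO SAVEPOINT sp2")  # latest savepoint: still fine
cur.execute("COMMIT")

final = [r[0] for r in cur.execute("SELECT v FROM t")]
print(final)  # 'third' was undone; 'first' and 'second' committed
```

This is the trade-off raised in the next turn: releasing early saves bookkeeping, but only if nothing ever needs to roll back to an earlier savepoint.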
D
I see: rollback events might happen multiple times, so we need the previous ones. I see, I see. Okay, so...
C
What is happening, usually, is that you may have a lot of savepoints within a given transaction, but it's pretty unlikely that you're going to have a very big depth, unless you do something that creates 64 recursive calls to the transaction; usually it's going to be more flat.
D
So it looks like we just need to limit the number of queries in a transaction, and try not to use many modifying queries, because a SELECT without FOR UPDATE is not a problem; UPDATEs, INSERTs, and SELECT FOR UPDATE are a problem. We just need to reduce this number and try to make transactions shorter in terms of the number of queries.
A
And so here's what I'm thinking: we are talking about short-term things. I mean, Stan made two suggestions, two things he suggested we may do in the short term; does that make sense, in the issue? And also this solution, like scrubbing the long transactions and preventing or reducing them: that sounds like a long-term solution.
A
To me, maybe that's a cure for the whole problem, but it may take a while and a bunch of effort to do. But short-term, what can we do to mitigate the risk of incidents? Any ideas?
B
So basically, I think the best thing we can do right now is to improve observability, because, just being a devil's advocate here, we are not even certain whether the subtransactions are the cause or the symptom. At this point, without improving observability, such a basic question remains unanswered.
B
We don't know if savepoints are a cause or a symptom; I mean the subtransaction logic. So the best short-term solution seems to be working on the instrumentation. And then, we know that we do have some bad patterns in the code base, some of them identified by Stan, and there is no good reason not to work on them in parallel.
D
Not
nested
in
my
experiments
I
created
some
transactions,
so
I
used
save
points
and
when
we
have
65
or
more,
we
deal
with
non-cash
like
we.
We
cash
is
not
working
so
more.
This
is
happening.
Yeah.
B
But there was another benchmark you created, with releasing savepoints, and we also saw something different there, like strange. Do you know the reason for that?
B
Okay, it depends on how you structure the code, because you're going to have multiple blocks in sequence, and this way it's possible: when we exit a block, we are creating a subtransaction when we reach the savepoint, and when we have a couple of blocks processed in sequence, it's going to result in a flat, one-level-deep subtransaction, but when we nest blocks, that's nesting transactions, like...
D
From
database
perspective,
like
the
most
useful
case
for
save
points,
is
to
to
keep
the
previous
part
of
transactions
like
avoid
repeating
it
right.
So
the
latest
step
points
is
the
most
useful.
If
we
can
adjust
the
code,
I
don't
know
rails
or
something
like
like.
We,
we
go
back
to
the
roll
back
to
the
latest
have
point,
but
if
we
need
to
roll
back
further,
we
just
roll
back
to
the
very
beginning,
so
whole
transaction.
D
If it's possible, I think from the database perspective it looks good, but I don't know about Rails here.
A
We are at 20 minutes, so let me just summarize: it looks like we don't know why, and we want to do the instrumentation first, and...
A
So that tells me that there's no short-term solution to cure the problem, or to mitigate the chance of incidents in production.
A
So
I
have
this
46
here
because
that's
two
actions
called
outbacks
then
does
that
make
sense
to
work
on
in
the
short
term,
or
that's
also
premature,
to
call
those
actions.
B
From what I understand, these workers are creating a lot of SQL queries and subtransactions and are kind of an anti-pattern, so this is something we should fix anyway, personally. So I think it makes sense to work on them in parallel.
E
So, sorry to keep playing devil's advocate here, but if you just go to the specific case of the StoreSecurityReportsWorker, right: that seems to be a particularly bad user of these, mainly because, I guess, it just runs for a very long time, and it's got a lot of other problems with it as well.
E
But
I
can't
imagine
a
case
where
the
business
logic
in
storing
that
security
report
in
the
database
is
so
complex
that
it
needs
to
have
all
of
these
save
points
like
if,
if
we
took
that
one,
which
is
the
worst
case
like
it,
gets
us
a
security
port
report
from
a
from
a
runner
and
it
puts
it
in
the
database,
there's
no
need
for
save
points.
In
that
any
save
points.
That's
a
really
really
simple
operation
like
take
the
data
store
it
in
the
database.
C
Like
my
look
after
that
some
time
ago,
as
part
of
the
rapid
action,
so
it
should
be
rewritten
simply
because,
like
I
agree
this,
this
code
is
like
not
bark
insert
and
it
should
be
parkinson's
with
absurd,
like
update
on
conflict
and
right
now
it
does
n,
plus
one
and
like
when
we
look
at
matrix.
It
was
like
the
one
producing
most
sql
calls
camille.
E
We
we
see
it
using
an
hour's
worth
of
cpu,
sometimes
for
one
job
right,
so
it's
really
really
bad
and
obviously
it's
doing
something
in
there
that
that
that
it
shouldn't
but
yeah.
If
we're
looking
at
a
rapid
action
now,
it'd
say
look
at
that.
E
On GitLab.com, we looked at this yesterday: it has 11,000 reports every time it runs. So it is a lot, and...
C
And we do it mostly N+1. There is a very limited amount of bulk insert in that code, so I think, from my perspective, it should be bulk insert, update on conflict, or something like that, because right now it's just going to generate a lot of very small transactions, especially...
E
Yeah, but what about if we really focused on that? Is it... I mean, Camille, you probably know this better than anyone, but is it really hard to refactor it, or rewrite it?
C
It's
gonna
be
hard
to
rewrite
it,
but
I
think
what
we
can
do.
There
are
a
few
constructs
that
generate
transactions
and
save
points
way.
We
could
simply
not
do
them,
so
I
think
we
can
focus
on
reducing
the
amount
of
the
sub-transactions
or
transactions
in
the
narrative
created,
and
this
is
exactly
what
stan
presented,
safe
and
sure
unique.
C
This
is
the
pattern
that
we
always
create
a
save
point
or
transaction,
and
we
simply
know
that
it's
in
performance,
so
we
can
get
rid
of
that
and
we
can
change
this
pattern,
so
I
think
we
should
be
able
to
reduce
the
amount
of
the
transactions
created
by
that,
and
probably
this
would
be
the
first
step
and
really
like
the
next
step
would
be
figuring
out.
If
this
can
be
bulk
insert,
which
is,
I
think,
like
much
bigger
task
to
go.
B
Yeah, and that's why I think we should work on this in parallel. There is a small chance that this is going to fix the problem, the root cause of the performance degradations, and that would be both great and not great: great because we would fix the problem, but not great because we wouldn't be able to get to the bottom of the root cause.
F
More importantly, the people that can work on this are actually people that don't have as much reliability work, so I don't think we're making any trade-offs with it. So who's writing it up? Andrew, is that you, or...?
F
Right. Basically, what I'm trying to do is get it into the right people's hands so that they can start working in parallel, but we need some description of what that work consists of.
C
I think we can try to solve the first problem, which is the transactions. Maybe this will help, maybe we relieve some of the pressure, and then we could figure out exactly how to rewrite it, with a focus on the CPU time, what you, Andrew, exactly described, because that effort will be significantly more complex, and it may actually require changing some of the database structure as well.
A
Okay,
so
we
decide
so
for
this
restorative
security
report
worker
camille,
you
will
write
out
an
issue
and
we
will
find
out
who
can
work
on
that?
I
will
work
with
christopher
on
that.
A
That's
a
short-term
thing
we
can
work
on
and
meanwhile
we
work
on
the
instrumentation
and
when
we
get
more
data,
I
think
the
long-term
solution
could
be
just
handling
the
long
transactions
in
a
appropriate
way,
either
reduce
them
or
breaking
them
up
or
or
something
based
on.
Our
investigation
is
that
the
conclusion
from
this
meeting
today.
A
Threat
insight
and
also
that
engineer
manager
is
a
tiago.
He
also
already
expressed
the
unit
issue
that
if
they
their
team,
is
going
to
work
on
this,
they
have
to
drop
some
infra
debt
issues
there.
So
I
I
think,
christopher.
We
probably
need
to
work
on
that
with
todd
to
see
how
to
balance
the
team's
load.
If
we
decide
to
have
this
team
work
on
because
right
now,
I'm
leaning
towards
this
team
should
pick
it
up,
because
this
is
the
you
know
specialty
domain,
I'm
I
want.
F
We're good on Todd's team working on it. We'll have to have Todd tell us what we're trading off with it, but I think he's also... sorry, go ahead, Christopher. I think the other effort, my gut says...
G
I think the other concern I would have is: maybe reduce Camille's role in there to being the advisor. Just tell them this is your number one priority, here's where you should go look, and let them write it up and have Camille review it, rather than being the owner of writing this up, because Camille's got a few other things going on.
A
Yeah,
I
think
camille's
adjusted
right
now.
What
needs
to
be?
I
mean
right
on
the
issue
and
the
idea,
that's
it
and
being
being
the
consultant,
because
the
shorty
or
the
conversation
we
can
cannot
share.
Camille's.