From YouTube: 2021 07 12 APAC Sharding Group Sync
C: Looks like the first agenda item. The one thing I'm just wondering is: do we know what CNG…
A: …Omnibus, or infra need to unblock them? Because…
B: So, about infra, what I can update everyone on is that we got approval to get the data from production to restore in the benchmark environment. I raised a couple of things we need to adapt for test, but yeah, now we are not blocked. I need to work with the infra team to add the components that we need.
B: The main thing I want to clarify here — that's my topic, the second one in the agenda — is about the PgBouncer structure. Let me quickly explain. On GitLab.com nowadays we have just one database, as everyone knows, and we have two PgBouncer pools: one in transaction mode, that's for web/API, and another one for Sidekiq traffic that is in session mode.
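The two pools described above could be sketched roughly like this. It is a local illustration only: the file paths, database name, host name, and the idea of two separate instances are assumptions, not the production config.

```shell
# Hypothetical sketch of the two PgBouncer pool configs (written to /tmp
# so the fragment is self-contained; all names and hosts are assumed).
cat > /tmp/pgbouncer-web.ini <<'EOF'
[databases]
gitlabhq_production = host=patroni-primary port=5432

[pgbouncer]
pool_mode = transaction   ; web/API: server connection released after each transaction
EOF

cat > /tmp/pgbouncer-sidekiq.ini <<'EOF'
[databases]
gitlabhq_production = host=patroni-primary port=5432

[pgbouncer]
pool_mode = session       ; Sidekiq: client keeps one server connection for its session
EOF

# Show the mode difference between the two pools.
grep pool_mode /tmp/pgbouncer-web.ini /tmp/pgbouncer-sidekiq.ini
```

`pool_mode` is a real PgBouncer setting; transaction pooling reassigns server connections per transaction, while session pooling pins one server connection per client for its whole session.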
A: …Some make more sense for Sidekiq and some make more sense for web/API, and if that's the case, then I think we need them both for the CI database when we get to production, because that's going to be serving Sidekiq and web/API. Are there any other considerations that would change that, rather than us just mirroring exactly the same database setup that we have today?
B: So, if you like, what I see here is that you will change your configuration in just two points — the two endpoints that you have nowadays. What you have for CI — CD, sorry — and the regular one. Today's application uses two different DNS names, or two database pools, Sidekiq and web/API. We will have the same thing for CI: you will have CI web/API and CI Sidekiq, let's say.
A: I'm guessing the way that you're doing that is on the Sidekiq nodes — you're just configuring them with a different host name there. Yeah, yeah. So, I mean, nothing in the application really understands these different pools, but if they're necessary for the workloads that we're running to be efficient, then it seems we'd just keep those. But yeah, the application itself doesn't need to support this, because you just give us the URL that's relevant for the correct host.
A
It
is
but
you're
deploying
sidekick
on
different
servers
and
then
configuring
it
differently
in
sheffield
or
a
non-nervous
or
wherever
these
are
being
configured
right.
So
the
application
doesn't
understand
that
sidekick
has
a
different
database
connection,
because
in
gitlab.com,
sidekick
just
runs
on
different
servers
with
a
different
configuration.
B: Yes, I had some problems applying configurations at this level on the database hosts — sometimes Chef took like five minutes on this. If you do it directly on the host, we can do it in like 10 seconds, so I'm trying to force this to happen faster: perhaps have the configuration already deployed in Chef, have the chef-client stopped, and during our maintenance execute this to refresh the configuration. That would make it like one minute or so.
A: Read-only traffic won't be changing live, because the read-only nodes are just replicas of the same database. We don't need to do any live cutover for them; we will already be reading from the correct CI databases ahead of time. The way that will work is you will have the host name for the Patroni replicas.
C: We need to promote database hosts to be, like, authoritative, so also disconnect streaming replication from the main, and then we need to reconfigure PgBouncer to write to the new one — that's exactly what we need to do. So it's kind of the coordination that we need: we need to probably stop PgBouncer, promote, and then restart PgBouncer against the new primary, and assume that everything's gonna work, right?
B: Sorry, sorry — what I did during our upgrade is something like promoting a cluster in cascade, as we will have here; this is a command inside of Patroni. So this will not make our lives so hard; this will promote quickly — the action itself is fast. My only concern is how to update this in PgBouncer; it should be applied quickly.
A: Yeah, my hope — and what I documented — my hope is based on using a CNAME record, and PgBouncer detecting that that CNAME record changed within two seconds when it runs its next DNS resolution. But I actually just don't know how to do that, because in all my testing it just doesn't work, for various reasons.
A: That's what I wrote down. But I still think that DNS — if we can work with somebody who understands our infrastructure, how our DNS is configured, and what places we might be able to stick a DNS record that will actually get read by PgBouncer — if we can manipulate that live, I think it'll be the fastest way, because PgBouncer doesn't even need to be told to reload a config file, but…
C: Yes, but DNS is tricky, because you might have caches on so many levels, and you may not really get that two-second response time. At least in my head, with DNS the lowest resolution that you can get reliably is like 60 seconds.
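For what it's worth, PgBouncer also caps how often it re-resolves backend host names via its `dns_max_ttl` setting (the setting is real; I believe the default is 15 seconds), so a DNS-based cutover would need to lower it explicitly. A sketch with assumed values:

```shell
# Hypothetical DNS-tuning fragment for PgBouncer (values are assumptions;
# written to /tmp so the sketch is self-contained).
cat > /tmp/pgbouncer-dns.ini <<'EOF'
[pgbouncer]
dns_max_ttl = 2            ; re-resolve backend host names at most every 2 seconds
dns_zone_check_period = 0  ; zone-serial polling disabled (the default)
EOF

grep dns_max_ttl /tmp/pgbouncer-dns.ini
```

Even with a low `dns_max_ttl`, any intermediate resolver caching still applies, which is the concern raised above.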
A: Not true — I mean, Kamil's right about DNS, but we could theoretically be faster than two seconds. If PgBouncer can pause and reload a config file in, you know, 100 milliseconds, then you can reload it quickly, so that should be quick — reloading the config file is a simple command. So if PgBouncer pause, reload, resume can happen in 100 milliseconds, it's probably better than using DNS.
A: We just need a way to manipulate the config file that isn't hacky, but basically pretty much the only logical option is we have a command that SSHes into all of our PgBouncer hosts and edits the file — well, runs those three steps: pause, edit, reload.
A: It seems like the only logical way to do it, but yeah, depending on how many hosts that is, that may not be super fast. If you had a centralized single place to change a DNS record, then you kind of simplify that process a little bit, but it could be slower still.
C: So I'm kind of thinking — I didn't follow the recent discussion — how are we gonna make the authoritative switch of the database writes? How long is it gonna take, compared to the PgBouncer part?
A: We need to stop writes, which we talked about separately — either disconnecting PgBouncer through some way of blocking it, or with the pause command in PgBouncer. Then you just need to check that streaming replication catches up to the last transaction you were at after you paused, which should theoretically not be more than a couple of seconds if we're pretty well up to date with streaming replication.
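The "check streaming replication catches up" step could be sketched as an LSN comparison. The psql queries in the comments use the standard PostgreSQL functions; the sample LSN values below are hard-coded assumptions so the fragment runs without a database.

```shell
# In production the two values would come from something like:
#   primary_lsn=$(psql -h main-primary -tAc 'SELECT pg_current_wal_lsn()')
#   replica_lsn=$(psql -h ci-replica   -tAc 'SELECT pg_last_wal_replay_lsn()')
lsn_to_bytes() {
  # Convert a PostgreSQL "X/Y" hex LSN into a decimal byte offset.
  local hi=${1%/*} lo=${1#*/}
  echo $(( 0x$hi * 4294967296 + 0x$lo ))
}

primary_lsn="16/B374D848"   # sample value (assumption)
replica_lsn="16/B374D848"   # sample value (assumption)

if [ "$(lsn_to_bytes "$replica_lsn")" -ge "$(lsn_to_bytes "$primary_lsn")" ]; then
  echo "replica caught up"
else
  echo "still lagging"
fi
```

The promotion would only proceed once the replayed LSN reaches the primary's last-written LSN captured after the pause.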
C: I'm asking because I think it puts this in perspective: if it takes, say, 10, 15, 20 seconds — a generous estimate — how much can the PgBouncer part take? Because if it takes quite an amount of time — it's not gonna be, like, a five-hour approach; it's gonna be immediate, almost immediate — then maybe we are aiming for the wrong goal of making this a few milliseconds of downtime, where it's simply not feasible to achieve that reliably.
C: So I'm kind of thinking that we should actually measure the worst-case scenario for this switchover of the writes and for the PgBouncer — whether it's gonna give us 10 seconds, 30 minutes, or five minutes, or whatever else — because that's what we are really looking for. If we are looking at one minute, I think that also changes what we can do and how we should approach it.
C: It also changes what tooling we need and how robust it needs to be. And maybe it simply says that it's not easy to achieve, say, 30 seconds or lower on the live running system, and we need to accommodate slightly more to have this headroom.
B: What I'm doing here currently: we took the queries that are taking the most time to resolve and the most-executed queries — this is the workload that we have nowadays — and I'm asking to split the traffic into what is CI-related and non-CI-related, for us to exactly simulate this at the PgBouncer level.
B: This is temporary, and what Dylan said as well is right: if we apply this when we have low database peak time, the replication lag is seconds, you know. And I believe that with the pause — because we are updating configuration and so on — there is a chance that we will have some hiccup on the application, like we will return some errors, but I believe we are — I believe, like, I need to test — in the range of seconds.
C: Because, on average, our application is probably somewhere between 100 to 500 milliseconds, with peaks being significantly larger when there are a lot of writes.
A: When we get to deploying all this infrastructure and having this separate database cluster, the first thing we're going to want to be able to deploy is read workloads going to a separate database for CI, so that would be stuff that we're already working on. I think that'll be the logical thing before we can ship something to production: at some point we'll have a second CI database cluster in production that is a replica, and we want to just configure GitLab's reads to go to that.
A: That will be — and that's through the load-balancing code today. So whatever goes through the load-balancing code resolves the DNS records; we want that to be able to support a separate host name for reads for the CI application, for the CI tables, and then that will allow us to ship something to production.
A: Writes can come later — I think the first step is reads. And writes are configured to use a different way of getting a connection today in GitLab.
C: I'm in particular worried about the case where we have two distinct connections to the same logical database, and we also have to resolve two-phase commits in that, because — redirecting reads only — I think the current code is not really well prepared to redirect only reads, but maybe.
A: Well, I think all of these things are prerequisites if you take a step back: the application needs to stop using joins; the application needs to have a consistent model of how it creates transactions, so that data isn't transacted against two different databases in a way that would lead to really bad inconsistency problems; and the services need to be resilient against that. So there's a stack of application changes that go before actually shipping that. I mean, I would hope we could maybe separate the read part of this and deploy that earlier.
C: I mean, it's not that we have to solve all of these aspects yet, because with the streaming replication we would be replicating all tables as well. So technically we'd have all the data for all our joins to continue working today.
C: Yeah, yes, but this will come later, right? But we can embrace testing this cluster — that it can handle reads — while knowing that these joins are happening today, and in parallel work on issues to break that apart so we don't cross into other tables.
A: Yeah, we could certainly talk about sequencing that, but I think we don't want to get the application into this inconsistent state where it thinks it can run joins that it shouldn't — especially since you've got two different structure.sql files, and the GDK setup locally is such that they're two separate databases: they don't actually have all the same tables and they're not replicated.
A: So, the fact that we're going to do this in production with streaming replication, having these databases identical — I don't think we would ever want the application to really rely on that, and certainly not for a long period of time. It just seems like the code would be in a weird state; there's not really much advantage to getting that out early.
A: Well, for benchmarking we don't need anything — that's what we're about to do. I think there shouldn't be anything the application needs, especially because we weren't going to use application code in the benchmarking: the SQL queries are happening through JMeter, and we're going to be testing things at the infrastructure layer.
C: Can we — because we work on the PoC, and I saw that it's getting pretty green — can we carve the Omnibus package and CNG package out of this PoC and use that for the benchmarking?
C: I mean, in any world where we start running our code against production data, we need to block egress traffic, and block as much of the incoming traffic as well, to ensure that we don't leak outside. So I'm kind of considering this a real prerequisite for our work related to running the application.
A: Yeah, well, the next step — just to summarize — the next step for the benchmarking was: we don't need to deploy the application, and we don't want to, for those security reasons, for now. But back to Tom's question about what to do next to unblock people: I think it is probably a relevant discussion point for us to figure out what configuration we need in GitLab — what it needs to support for the work to happen.
C: I noticed that right now Omnibus and CNG are scheduled pretty far out — it's like two milestones from now — and I'm actually asking if this can be sped up, because I think it's gonna be a bottleneck for your cluster work if it comes that late. Omnibus is scheduled two releases from now, and CNG three releases from now, so it's pretty late in the timeline, I think, and we need this done earlier.
A: I agree it would be helpful to get it done earlier. I don't feel blocked on answering the question about how this streaming replication is going to work — that's my goal; that's kind of top of my mind right now: how is the streaming replication going to work, what is the actual latency going to look like, and can all these components be reloaded within a reasonable amount of time? None of that depends on this, but that's only one step, yeah.
C: Yes, but I'm kind of thinking that we should also start embracing, let's say, the configuration aspect on staging of adding a new database — even if it's not used yet — and test different scenarios: this database goes down, or doesn't, or anything like that. How is the application gonna behave?
C: So I think right now we actually have probably two or three tracks on the infrastructure side. One is just like yours — what you're saying: physical storage, streaming replication, and testing performance. The second track is really just embracing the configuration aspect and adding a new database that's maybe not used for anything useful, to actually check how the application would behave in this scenario — let's say the database restarts, or whatever, or has short outages — what harm is it going to cause to the application, plus those monitoring aspects. Okay.
A: I think the things we're working on are all consistent with that, but you're right that the second track is just gonna be blocked by having support in Cloud Native and Omnibus, because we can't get on staging without that, right? And if that's what you want to test in staging — what GitLab looks like when it connects to two different databases, and we talked about that with the ci_instance_variables table — maybe that's how we want to test that behavior. Then, yes, we need to make it so that they can be configured.
C: So with track number two, I think this configuration aspect becomes important to embrace: that the application can function with this database, that we can properly configure pool sizes, that we actually have templates to configure these new databases everywhere — which is, I don't know, runbooks, Chef…
C
Whatever
else
you
use
that
and-
and
the
next
step
on
this
track
would
be
like
ensure
that
we
can
migrate
and
roll
back
on
this
new
database
changes
to
this
database
because,
like
as
soon
as
we
start
moving
more
data,
more
tables
more
migrations,
it's
gonna
be
pretty
important
step.
A
I
think
then
we
we
just
won't
be
able
to
do
any
of
that
until
the
support's
added
to
omnibus
and
collaborative
gitlab,
so
there's
already
discussing
whether
they
can
bring
that
forward.
But
if
not,
then
I
think
we
just
want
to
figure
out
if
we're
actually
going
to
work
on
that
or
if
we
just
say:
okay,
that's
going
to
delay
and
we're
waiting,
we
we
can
try
to
take
it
on
ourselves.
Will
that
really
help
if
they're
going
to
have
to
review
it?
I
don't
know
those
probably
need
to
speak.
A: Yeah, I agree — it would definitely help, especially if you're going to do a readiness review and all that kind of thing. It's definitely something that infra will ask for.
A: Yeah, I wrote down the different options we would have for disaster recovery in that situation, because there are multiple scenarios and I wanted to document them. One is that you try to fail over and no writes make it through, because the new cluster actually just blocks writes — in which case…
A: …well, you just change the connection back, because there's no data that's inconsistent. In the scenario where writes have gone across, you potentially have data loss, but we can document what we would actually do as the steps to reverse the situation and minimize that — that's part of the plan, at the bottom of it, just this disaster recovery stuff. But if there are better ideas about disaster recovery, or changes we could make to reduce risk, then we should document those too.
A: There's also the possibility that we could build tooling so that, if writes were actually happening to the second one and we had to roll back for performance reasons, we could actually replay the WAL files. If we hadn't lost them — the WAL files would be backed up somewhere — we could write a logical decoder that finds all the CI updates and replays those against the other database.
C: What other tools, not related to the GitLab application, depend on the single database? I mean, I don't know — for monitoring, for alerting, for any kind of metrics — anything that would be broken when we split that into two databases?
B: Besides, we need to go for ELK as well — all the logging that we are sending there — and we need to create, let's say, a standardized infrastructure setup for a new database. Because today it's CI; tomorrow it can be users or any other split, any other shard, that we will do in our database. So yeah, we need to take care of all of it. It's not planned at the moment, but you're right — yes, monitoring stuff.
C
Yes,
because,
like
I
think
like
we
know
about
some
of
them,
like
gitlab
exporter,
I
think
we
know
about
some
grafana
dashboards.
C
But
the
question
is
like
how
we
can
like
discover
that
better,
like
what
would
be
broken
or
like
or
like
what
components
we
should
inspect
if
they
would
be
broken
in
the
changes
that
we
are
making.
So
maybe
like
first
would
be
like
listing
things
that
may
be
affected
and
then
like.
We
could
go
one
by
one
together
on
figuring
out
if
this
is
actually
affected
or
not.
C: Because — what I'm asking — I know that, for example, some of the CI runner queries were executed in the past directly on the database to fetch some data. So I'm just kind of curious whether we likely have more of those everywhere.
C: But this is only the configuration aspect. I don't know the history of that; I'm just assuming that we are hitting physical limits of maintaining many connections in PgBouncer — that's why we split it into two PgBouncers, redirecting Sidekiq to one and all the rest to a separate PgBouncer — but it's still, functionally, for the application…
C: …still the same, let's say, connection — the same logical database. It's using different hosts because of the number of nodes that we are running. So I think it's more that we are hitting physical limits of how many connections we can efficiently handle given the number of nodes.
C: So then my question is gonna be: is having twice the number of connections from each node a problem or not? I assume that it's not going to be a problem, but this is also something to look at.
B: It won't be a problem for the number of connections you have to the database. The only thing I want to tell you here is that our PgBouncer configuration for these two nodes, for these two clusters, is different: one of them uses transaction mode, and for Sidekiq we use session mode. The reason why we made this difference — I'm not aware of it; I need to investigate as well.
A
Yeah,
I
think
that
probably
dictates
whether
or
not
we
need
to
replicate.
That
again
is.
Why
is
therefore
but
yeah
it
likely
will
be
exactly
the
same.
The
workloads
that
you're
talking
about
that
cause
problems
in
production
to
begin
with
were
probably
ci
workloads
because
they
run
in
web
and
sidekick
and
that's
one
of
our
biggest
workloads
anyway.
C: I'm kind of thinking also that one of the reasons for using session versus transaction mode is the longevity of the connections and the type of work that you execute. On the web you usually have very short executions…
C: …you read data from anything that is available, so it's not sticky, and you don't care if you read that from, let's say, another connection. Whereas in Sidekiq it's usually a long-running thing where you kind of require consistency across the execution. So I'm kind of thinking this is more of an optimization for the web fleet.
C
That
can
be
like
less
the
sorry
that
can
be
more
permissive
in
like
in
the
way
how
you
execute,
and
it
kind
of
allows
you
more
random
access
pattern.
C
Where
sideki,
I
think
I
may
be
completely
wrong,
but
sidekick
kind
of
requires
more,
let's
say
a
long-living
approach,
where
you
kind
of
have
much
better
consistency,
but
you
also
keep
the
connections
for
longer
as
well
on
the
side
key,
and
I
know
that
it
caused
some
challenges
in
the
past,
with
either
transaction
being
for
quite
long
where,
like
it's
not
present
on
web,
but
it's
actually
pretty
good
that
you
are
mentioning
that
because
it
also
affects
how
we
can
reconfigure
the
current
connection,
based
on,
like
say,
let's
say,
schema,
search
path.
C
I
know
that,
don't
you
look
at
that?
Actually,
so
in
the
pg
bouncer,
it
means
like,
like
whatever
you
execute,
can
be
like
treated
as
a
self-sufficient
like
like
an
isolated
query
that
can
kind
of
be
executed
on
any
connection
that
is
available
on
the
other
end
of
the
pg
bouncer,
where
it's
going
to
behave
differently
across
web
and
boom
and
side
key
because
of
the
different
pg
bouncer
configuration.
So
this
is
something
to
keep
in
mind
on
testing
the
behavior.