From YouTube: Discuss phased rollout of new CI PGBouncer pool
B: Okay, the main thing we're trying to do in this whole decomposition project is to move from having one database with all of GitLab's tables in it to two databases, where one has all of what we're calling the CI-related tables and the other has the main tables. But an intermediate step toward that goal is to create a second pool of PgBouncers that still points to the same database.
B: The tricky problem we're trying to solve now is that a second pool of PgBouncers increases the number of connections open to the main Postgres database, and that database has some sensible upper limit on the number of connections that we shouldn't exceed. And if we get too close to that upper limit in terms of actual usage of the main primary, we'll potentially overload it too.
B: We've already figured out how we're going to roll out, gradually and dynamically, how many queries to route to each PgBouncer node. But as we gradually increase the number of queries going to the PgBouncer CI node, it will start opening more and more connections to the main database, and eventually the main database may exceed its total connection limit. That's primarily what we're trying to solve. The logical conclusion is: okay, we have these limits.
B: We can place limits here, and we will just increase and decrease them in lockstep with the dynamic percentage increase over here. It's a bit more annoying on this side, because it requires a reconfiguration of PgBouncer, which is controlled by our deployment pipelines and needs restarts of things — or at least a reload — but it is doable. In Postgres the setting is called max_connections; pool size seems to be what we use in PgBouncer to set limits on how many connections PgBouncer will open to the Postgres database. PgBouncer has many configuration options that control this, but pool size seems to be the one we're using.
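A minimal sketch of the two knobs being discussed, with illustrative values and a hypothetical database entry (not GitLab's actual settings):

    # postgresql.conf -- hard ceiling enforced by Postgres itself
    max_connections = 500

    ; pgbouncer.ini -- per-database cap on server connections PgBouncer will open
    [databases]
    gitlabhq_production = host=patroni-primary.internal pool_size=100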
B: Yeah — I'll stop there. Does anyone have questions about getting up to speed on that? Does it make sense? — Yes, all good on that.
B: Okay. I chatted with Nikolai about this the other day and got some pointers on what we need to think about, as well as links to what statistics we actually have on all of this and on the configurations. This is the configuration for one of what we call the sync PgBouncers, I think — or the web/API ones — and we'll actually have a different configuration for the Sidekiq ones, which we probably still need to get. But what we did learn was this.
B: I'm not very good at using Thanos to generate useful graphs — there's probably a better way to do this than what I've done — but I've got a stacked bar graph that shows us, over time, the sum of the different connections open from PgBouncer to the primary Patroni database. There are three different types: active connections, idle connections, and used connections.
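Those three states are what PgBouncer itself reports; assuming access to its admin console, they can be inspected directly with something like:

    -- e.g. psql -p 6432 -U pgbouncer pgbouncer  (PgBouncer admin console)
    SHOW SERVERS;  -- one row per server connection, with states like active, idle, used
    SHOW POOLS;    -- per-pool totals: sv_active, sv_idle, sv_used, cl_active, ...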
B: Active connections are the ones currently in use. A connection gets moved to idle when it has been idle for some period — 60 seconds, or 65: server_idle_timeout is 65. And PgBouncer supposedly uses first-in-last-out, I think, or some kind of round-robin technique, where it tries to use the smallest number of connections it needs.
B: So if you've got 50 connections open and you constantly keep using the same 10 connections — if that's all you need — then the other 40 will be moved to idle. I think that's what's going on, based on reading the docs, which are sparse. So a connection moves to idle after 65 seconds, at which point it gets closed — or maybe it gets closed after 65 seconds and moved to idle in some shorter period than that, I think.
B: Yeah, I'm not 100% sure exactly how that works, and I don't know what the exact definition of idle is — but okay, I can re-read those docs. Used is related to this server_check_delay: if a server connection hasn't been used for 30 seconds, it needs to do a server check, and until it does that SELECT 1 it sits in the used connection pool — something like that.
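As a hedged sketch, the pgbouncer.ini settings being described — the values are the ones quoted in the discussion, not verified against the production config:

    ; pgbouncer.ini
    server_idle_timeout = 65      ; close server connections idle longer than 65s
    server_check_delay = 30       ; re-check a connection unused for more than 30s
    server_check_query = select 1 ; the query used for that server check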
B: Anyway, what matters is that the sum of these is what Postgres sees on its side. It's seeing this many connections, and that's what's relevant from the perspective of it hitting the limit and rejecting connections.
B: What we see during low-peak times — probably roughly now, which would be some of the lowest-usage times for GitLab, and this is the whole weekend here — sums up to what looks like fewer than 250 connections across the whole PgBouncer fleet. And it matches other data that Nikolai collected from Postgres's side of things, where it sees a peak of about 320 connections during peak hours. And maybe I—
B: Okay, back up again — I lost the Zoom chat; it's hard to find again.
B: Okay — 300, then going down to sometimes below 200, but at most around 250 during the off-peak times.
B: Even if we opened 400 connections during off-peak hours, the vast majority of them would be idle and therefore couldn't possibly lead to an incident, because throughout this process we're not increasing the amount of traffic.
C: Well — unless the application-side connection pooler sends some health-check query every few seconds or so.
C: I'm not sure about this, and sometimes — for example, for Java applications — it happens. We need to check how the application-side poolers in Rails behave.
B: Yes, the Rails side of things. Okay — so PgBouncer does the SELECT 1 business; that would be a health check, and that's cheap. So Rails does check.
C: A semicolon — which, it turns out... we in the Postgres community are discussing this right now, on Twitter; I saw the discussion today. This is already considered an anti-pattern, because you produce a transaction but it has no queries. It's really hard to observe, because pg_stat_statements doesn't record it, and so on — it's empty. At some point we switched to the semicolon, so it's an empty transaction — the minimum transaction you can have, just a semicolon, right.
B: Yeah, and there are also WAL checks — but those only happen on the replicas and are probably not relevant... actually, for this case, for WAL checks we do need to know the state of the primary. So I think we will be doubling the number of WAL checks we do, where we ask the primary for its current position. So if you—
C: By WAL check you mean the control of the lag, right?
B: The LSN — and then reading that LSN from all of the replicas to figure out whether or not there's an up-to-date replica. So you can probably see it sometimes in here.
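A sketch of what that lag check looks like at the SQL level, using standard Postgres functions (the exact queries GitLab runs are an assumption here):

    -- on the primary: the current WAL write position
    SELECT pg_current_wal_lsn();

    -- on each replica: the last WAL position replayed
    SELECT pg_last_wal_replay_lsn();

    -- or, from the primary, the per-replica lag in bytes
    SELECT application_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
    FROM pg_stat_replication;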
B: It only shows up sometimes because it's periodic. But what you just said reminded me: that's a kind of health check that GitLab implements which will probably be doubled in this architecture, because now that we think there are two primaries, we'll be asking: okay, main, what is your LSN? CI—
B: —what is your LSN? So that will also be doubled from the GitLab side, along with the semicolon checks. But do we actually think that the SELECT 1s and semicolons have any meaningful impact on load or activity here? If they finish so quickly, will we be reusing the same connections, or will we be opening lots of connections because of these queries?
C: It's small — you get some context switches, but in terms of CPU load the impact is quite small. That's why we still have it. In the past we thought several times about getting rid of it, because there are techniques for that, but it's still there: it was considered a lot of work, and the benefit is not so huge because the impact on load is not so high. That's what this is, in my understanding.
C: I was proposing to get rid of the health check multiple times — last week one more time — but that's slightly beside the point. The question is: if we allow more connections, more idle connections, right — will they be considered idle after some time, or will the health-checking activity prevent that? I don't know.
B: Yeah, that would be useful to test — it's tricky to test all these things, and that's kind of what I want to get into discussing next. Anyway, let me jump back to what we were talking about, which is, I guess—
C: By the way — it's a little off topic, but at some point I had the idea that these health-checking queries do more harm on application nodes than on Postgres nodes. For Postgres it's some load, okay, but on the application side it's also a significant load. So if we got rid of them, we could probably decrease the number of Ruby nodes. But this is off topic, of course, in this case.
B: You have an issue that discussed this before, so we can maybe learn from it how difficult this is. Maybe it'll end up being something our team can execute on, because it makes sense for our team and isn't too tricky, and the only reason it hasn't been done is that it wasn't urgent before — but now it may become more important.
B: From the kind of initial analysis we did: we can probably move over a reasonably large number of connections at a time. Maybe like this — this pgbouncer-main actually consists of six PgBouncer hosts today, and we could move maybe five from each host, corresponding to 30 connections at a time, over down here.
B: If we did that, we might get to our target in, say, five days. We haven't done all the maths exactly, but in the end state we'll be sending roughly half our queries here and roughly half our queries there. So whatever number of connections we have over here today — which is 270 — we probably want this one to end up at maybe 150 and that one to end up at 150, or maybe a little less.
B: This one might be 120 and that one might be 200 — that's kind of our end goal. And if we move over 30 connections at a time, it may take us a few days if we do it during the quiet times; or we can do 30 and then another 30 an hour later, during the quiet hours as well; or we can do the whole thing over a weekend.
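A rough worked version of that arithmetic (the 270 baseline and 30-per-move step are from the discussion; the rest is illustrative):

    today:      pgbouncer-main ≈ 270 server connections, pgbouncer-ci = 0
    end state:  ≈ 150 / 150 (or perhaps 120 / 200)
    step size:  5 per host × 6 hosts = 30 connections per move
    moves:      ≈ 150 / 30 = 5 moves — one per quiet period ≈ five days,
                or several per day to finish over a weekend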
A: But do we know that the CI workload is exactly half of the total GitLab workload?
B: We have some initial analysis from months ago that said 47% — or 43% — of write queries relate to CI tables. And it will probably vary depending on the time of day; over the weekends maybe more of the workload is CI, because it's automated — I don't know. I think it all probably scales pretty linearly with usage as well, but yeah.
B: Let's say we've moved 10% of the traffic down here. We can then extrapolate and say: okay, multiply this by 10 in terms of usage. That way we can see the proportionality between the two of them once we've moved some, because percentage of queries doesn't necessarily indicate the percentage of PgBouncer CPU usage, or whatever else we need, to figure out how many PgBouncer nodes we need. Okay, that's this issue — I'll link it.
B: We start with one. We create our new pgbouncer-ci hosts, each host will be allowed to open one connection to Postgres, and we'll enable one percent of our CI-related query traffic to go to these new PgBouncer hosts, across six nodes.
A: All right — so that's on the first and second day, and then this rolls over the next days, increasing by, let's say, 10 percent each day.
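A hedged sketch of that ramp (the 1%-with-one-connection start is from the discussion; the later steps are illustrative, not an agreed plan):

    days 1-2:  1% of CI queries -> pgbouncer-ci, pool_size = 1 per host (6 total)
    day 3:     ~10%, pool_size raised in lockstep (e.g. 5 per host)
    day 4+:    ~10% more per day, checking the primary's total connection count
               against max_connections before each step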
B: Yeah. Basically, we'll learn from whatever happens with the number of connections on that first day to figure out what we do on the subsequent days. Ideally we could do the whole thing in a week, but it's not a problem if we need to be safe and it takes us two weeks to roll it all the way out. And basically some of this depends on how PgBouncer actually closes connections.
B: So we have some assumptions about PgBouncer — I mean, it's sort of implied by the documentation, but again that's sparse, and Nikolai mentioned last time that probably the only way to know is to look at the source code, or to test it. It's basically about how it handles closing connections that are idle.
B: If we end up in a situation where we reconfigure this pgbouncer-main and reload the configuration, but it turns out it doesn't close connections on a configuration reload — only when you restart it, or something like that — then we need to know that, because it changes exactly how we execute on this. We can do rolling restarts of those PgBouncer hosts, presumably without too much trouble, but—
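One way to probe that on a test instance, sketched with PgBouncer's admin-console commands (the commands exist in stock PgBouncer; whether a reload actually shrinks a live pool is exactly the open question):

    -- after lowering pool_size in pgbouncer.ini:
    RELOAD;          -- re-read the configuration without a restart
    SHOW DATABASES;  -- confirm the new pool_size is in effect
    SHOW SERVERS;    -- watch whether excess idle server connections get closed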
B: That is, without dropping connections from the client — but yeah. In our case, we want to make sure it does drop server connections when it doesn't need them anymore. Technically that's what PgBouncer already does; it's just a question of: if we lower the limits, how quickly does it drop them? Does it even take the lowered limit into account, and what do we need to restart before it does?
B
And
yeah,
so
then
we
have
that's
what
jose's
created
this
about
this
issue
about,
and
this
is
kind
of
like
experimenting
with
this
from
purely
the
pg
balancer
perspective.
But
things
are
a
little
bit
more
complicated
when
we
want
to
experiment
the
whole
thing
end
to
end
which
we
can
do
in
staging
environment
under
some
load.
B
We
can
do
a
lot
of
this
testing,
but
the
interesting
thing
here
happens
is
that,
while
on
the
rail
side,
we're
tweaking
the
percentage
of
queries
that
go
to
the
different
pg
bouncer
hosts
we're
not
going
to
be
changing
the
number
of
connections,
we
have
open
to
pg
bouncer.
On
the
rail
side,
we
assume,
if
we've
got
100
connections
to
pg
bouncer.
Here
we
can
end
up
with
100
connections
down
here
end
here
and
that's
totally.
Okay,
because
you've
got
just
as
much
pg
balance
as
cpus.
B
Well,
you
know
more
and
it's
already,
we
already
assumed
that
rpg
bounce.
The
number
of
connections
that
they
have
open
from
clients
is
not
really
a
problem.
We
could
already
increase
it
today
without
too
much
issues.
So
that's
one
thing:
that's
a
factor
to
consider
here
is
like
pg
bouncer
nodes
here
have
still
just
as
many
client
connections
open,
but
we
want
them
to
be
starting
to
lower
the
number
of
server
connections.
They
have
open
proportional
to
how
much
less
queries
they're
getting
on
those
client
connections.
B
Rails
is
kind
of
weird
and
it's
going
to
be
round-robin
these
client
connections
and
not
there's
not
really
an
easy
way
for
us
to
drop
them
live
while
rails
is
running.
As
far
as
I
know,
we
probably
could
figure
that
out
if
we
needed
to,
but
we're
hoping
we
don't
have
to
where
it
is
really
easy
for
us
to
to
reroute
the
percentage
of
queries
going
down
each
connection.
We
don't
want
to
have
to
close
any
connections
if
we
can
avoid
it.
B: That's an additional experiment that's hard to do from just this part of the architecture — we need the whole thing for that experiment, and we can ideally do it in a controlled way on staging. It's also quite difficult for developers to do this testing locally, because we don't run this whole architecture there, and it would be a lot of work for us to get it all up and running locally.
B
But
then,
even
if
we
did,
we
wouldn't
have
the
load
on
the
system
that
allows
us
to
really
see.
What's
going
on.
So
staging
may
be
a
good
candidate
for
that.
Otherwise
we
can.
We
have.
We
are
building
and
deploying
some
reference
architecture
that
looks
like
this
as
well,
so
maybe
that
becomes
another
place.
We
can
run
these
experiments,
but.
A: All right — so what phase are we at now? We have to deploy the benchmarking environment to test the PgBouncer behavior, and then move into staging and start implementing the change in the staging environment — is that it?
B: Yeah, I think that's basically what we've got to do. From your side, it would be good to start doing the experiments on the PgBouncer and Postgres side here, to understand how it drops connections — plus whatever we can learn from reading the docs and the PgBouncer source code, and from experimenting to see that behavior. Then from our side—
B
We
are
implementing
the
code
to
do
this
now
and
could,
when
we
have
that
code
implemented,
we
could
potentially
do
those
experiments
on
staging
or
we
may
have
a
reference
environment
that
is
similar
to
actually
do
those
experiments
on
too
so,
and
I
have
like
a
couple
of
other
things
I
want
to
figure
out
from
mine,
but
also
yeah
from
your
side
like
I
have
collected
a
bunch
of
metrics,
but
you
may
be
able
to
collect
better
metrics
than
what
I've
got
to
figure
out
that
the
data
I
have
here
is
maybe
not
totally
the
full
story.
A: And also, as I understood it, the GitLab application implements its own health-check query, right? Using SELECT 1.
C: SELECT 1, yeah. By the way, I'm not sure if that's GitLab — maybe it's Rails, because it was SELECT 1 until some point, when we switched to this empty query.
A
I
don't
know
how
how
finoto
was
planning
to
implement
this
test
in
in
the
b
bench
if
it
was
going
to
use
some
benchmarking
towards
something
like
that,
we
may
have
to
put
this
these
health
checks
into
the
test
in
the
benchmark
for
it
to
make
sense,
because
I
think
that
this
is
something
that
can
impact
the
life
of
a
connection
of
database
connection
on
on
the
bouncer,
because
the
bouncer
does
his
own
health
check.
A
Right
and,
as
I
understand,
bg
bouncer
doesn't
consider
the
headshot
that
health
check
to
keep
the
connection
alive.
But
if
a
query
came
from
the
application,
the
mpg
bouncer
is
going
to
consider
this
as
any
query
right
so
yeah.
I
need
to
yeah
I'm
going
to
see
if
I
can
implement
a
few
health
checks
like
this,
in
the
benchmarking
too,
to
see
if
I
can
put
some
load
of
our
checks
on
that
and
see
if
this
is
going
to
affect
the
time.
B
Maybe
you
could
help
point
us
to.
We
probably
have
a
repo
somewhere
that
lists
the
top
queries
and
the
frequency
of
them,
and
that
could.
A: Yeah — that one's for you, Nikolai.
C: Honestly, Jose was usually the one running it, so I've never run it myself — to be clear, it's a Java application — but I will send the link; I can find it right now. I also wanted to mention that Patroni has health-check queries too, but I guess those go directly, without the bouncers involved. Yeah.
A
They
go
directly
yeah
now,
I'm
just
I'm
just
worried
that
these
health
checks
can
make
the
connection
sleep
longer
than
they
can
actually
shoot.
The
bouncer.
A: Yeah — at least for the Rails one, the purpose of the health check is that something is about to use that connection, so the first thing it does is check whether the connection still works. It's not like something is constantly keeping a connection alive for no reason, so I think we're okay there. But there is another health-controller check, which is slightly different, and that's in GitLab as well.
B: Okay, yeah. If you can find more information about exactly which endpoint that is in GitLab and what query it executes, we'll see it in there, and then we can multiply out that frequency — or find it in our logs to see how often it's happening — and make sure it's also accounted for in this.
B: Sorry — top queries for the whole GitLab database, listed in a table format, like there's a—
C: The checkup tool, right — but that's only by total time. This benchmarking environment has two kinds of workload combined: top-N by total time and top-N by calls. The first is most influential in terms of resource consumption — the highest overall time — and the second is by frequency, the most frequent queries. Yeah, I'll send the link, right.
C
Well,
yes,
but
this
empty.
We
used
register
statements
as
a
basis
by
by
that
time,
probably
we
switched
to
semicolon
and.
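Both rankings come straight out of pg_stat_statements; a sketch of the two queries (column names assume Postgres 13+, where total_time became total_exec_time):

    -- top N by total execution time: most influential on resources
    SELECT query, calls, total_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 20;

    -- top N by calls: most frequent
    SELECT query, calls, total_exec_time
    FROM pg_stat_statements
    ORDER BY calls DESC
    LIMIT 20;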
B
There's
a
bunch
of
other
rails
related
things
when
rail
starts
up
as
well.
That
appear
in
those
top
end,
crews,
which
we
were
investigating
a
while
back,
because
when
rail
starts
up,
it
asks
for
the
schema
of
every
table
and
you
would
think
that's
not
very
frequent,
but
with
the
hundreds
or
thousands
of
nodes
we
have
and
the
rate
at
which
we
deploy
them.
Those
end
up
in
like
top-end
queries
as
well
may
or
may
not
be
accounted
for
and
so
stuff.
Regarding.
C
These
health
checks.
I
think
it's
like
what
we
should
do.
I
think
we
need
to
check
settings
configuration
of
application
nodes.
However,
else
how
like
to
understand
the
frequency,
but
this
is
right-
you
you're
right
about
that.
It
won't
go
to
every
back
end.
So
probably
it's
not
a
big
deal
right,
because
it
will.
It
will
affect
only
some
back
end
during
health
checking.
Others
back
ends
are
idle
and
they
will
when
go
away.
Yeah.
B
Well,
exactly
that's
the
ideal
situation
if
pg
bouncer
behaves
sensibly
and
we've
got
it
configured
in
the
right
way,
so
that
it's
dropping
connections
sensibly
but
yeah
well,
anyway,
all
right
dude.
There
are
reasonable
action
items
on
there.
We
can
have
like
another
call
next
week
and
also
like
rafael
you're
in
our
time
zone
was
pretty
similar
to
me
and
tom.
So
we're
happy
to
pair
on
this
or
jump
on
calls
and
look
through
some
of
this
stuff
with
you
when
that
makes
sense,
cool
cool.