From YouTube: Geo PostgreSQL and DB clustering deep dive
A
There we go. Douglas, thanks for walking me through the Geo PostgreSQL and database replication streaming. We've got a couple of questions that we want to cover.
A
So,
let's
start
with
the
you
know,
pick
up
where
we
left
off
here
on
the
postgres
replication.
You
mentioned
that
the
petroni
runs
on
the
postgres
node
itself
right,
so
they
all
they
both
run
on.
If
you
have
multiple
instances
of
progress,
does
what
is
the
configuration
at
that
point?.
B
A
Gotcha, okay. And then PgBouncer, you mentioned: if you have database load balancing, it would need to run on every one of the database nodes as well. Okay, cool. But at any given time there's only one database that's writable, right? The other two databases are streaming, so the writes always go to that primary database when they come in.
B
Yes. Inside a Patroni cluster we have the concept of the leader and the replicas. We can have one leader at a time, and you can have as many replicas as you want. How they elect the leader is that every replica can become the leader: they each try to write a key, for example "I am the leader", that they use as a lock in Consul, because Patroni stores all the cluster information inside Consul.
B
So if I got the lock, I am the leader; I will be the node that is able to take writes. All the other replicas, you can...
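The lock-based election described here can be sketched in a few lines. This is a toy illustration of the idea only, not Patroni's actual implementation or the real Consul API; the store class, key name, and node names are all hypothetical.

```python
class KVStore:
    """Toy stand-in for Consul's key/value store with an atomic
    acquire, illustrating the leader-lock idea only."""

    def __init__(self):
        self.kv = {}

    def acquire(self, key, value):
        # In real Consul this is an atomic check-and-set tied to a
        # session; a plain dict check is enough for illustration.
        if key in self.kv:
            return False
        self.kv[key] = value
        return True


def elect_leader(store, candidates):
    """Every replica races to write the leader key; the first
    successful writer becomes the leader."""
    for node in candidates:
        if store.acquire("service/postgresql/leader", node):
            return node
    return None
```

With several candidates, only the first to write the key wins; the others see the lock already held and remain replicas.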
C
A
B
Probably, if I'm not wrong. I don't know the internals of Patroni, but basically, when you bootstrap the first node, it will become the leader. And if a failover happens, I know that they check the difference between the WAL log positions to decide which replica should be promoted to the new leader. Gotcha.
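The WAL-comparison step can be sketched as below. This is a minimal illustration assuming each replica reports its last replayed WAL position as a comparable number; real Patroni compares LSNs via its REST API, which this does not model.

```python
def pick_failover_candidate(wal_positions):
    """Promote the replica that has replayed the most WAL, i.e. the
    one with the smallest replication lag.

    wal_positions maps replica name -> last replayed WAL position.
    """
    if not wal_positions:
        return None
    return max(wal_positions, key=wal_positions.get)
```

For example, `pick_failover_candidate({"db2": 4180, "db3": 4096})` picks `"db2"`, the least-lagged replica.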
A
...more than the replication, but I think for me it's kind of interesting and useful to understand the initial setup as well. So thanks for that.
B
Yep, okay, gotcha. And that's the automatic failover; you can also perform a manual failover if you want. For example, I can say that I want this replica to become the leader. Is that done through Consul? No, we have some Omnibus commands to perform that, because Patroni has a CLI that allows us to do this. Okay, and you wrote a wrapper on top of the CLI just to provide some simplifications for the end user? Sure.
A
Sure. So in terms of the Patroni architecture, any Patroni node is pretty much the same. If we SSH onto a Patroni node, they all look the same; yeah, they're all the same. So you don't need to repeat the command on all three: you can issue it to one and it will propagate. Okay, understood.
B
A
Okay, right, oh that's very cool. I guess that's very clear, so thanks for that. Then we have the PgBouncers. Talking to a few other people, I understand it just pools connections to the database, because there are limitations on the number of connections Postgres can handle, so PgBouncer handles that. And we have two sets, from what I can see in the reference architectures: one set that sits with the database nodes and one that sits outside.
A
So I guess the question there is: are there any implications for Geo from these PgBouncers when it comes to replication, or does it not matter to Geo? When we try to replicate the database, is there a flow that we need to consider? Are there multiple flows, or do they always hit the standalone PgBouncer? Does the replication go through the PgBouncers?
B
A
B
But yeah, you also have a Consul agent running on the PgBouncer node as well, and the Consul agent has a watcher that keeps an eye on the Patroni cluster configuration inside the Consul database. As soon as it notices that, okay, the new leader is now Patroni replica two...
B
Yeah, this is the regular way that Rails connects to the database, on the primary or on the secondary, doesn't matter. Replication happens a little bit differently here. For example, what do we use that for? That's your second question here.
B
The load balancer? Oh, we use it as a single entry point for connecting the Patroni standby cluster, so good catch, yeah. It's important to note that we don't ship any load balancer in Omnibus, but we suggest our users use HAProxy; they can use any other load balancer they want, though.
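Since Omnibus doesn't ship a load balancer, a common pattern (along the lines of the HAProxy template in Patroni's documentation) is to health-check Patroni's REST API, which answers 200 only on the leader. The node names and addresses below are hypothetical; this is a sketch, not a supported configuration.

```
listen postgresql
    bind *:5432
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server patroni-1 10.0.0.1:5432 maxconn 100 check port 8008
    server patroni-2 10.0.0.2:5432 maxconn 100 check port 8008
    server patroni-3 10.0.0.3:5432 maxconn 100 check port 8008
```

Clients, including a Geo standby cluster, then use the load balancer's address as the single entry point; after a failover, the health check flips traffic to the new leader automatically.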
A
I see, I see. So Omnibus doesn't contain this load balancer. If you don't deploy a load balancer, then if the primary database node on the primary site changes, the secondary site will not know about it and would need to be updated to point to the new one; the load balancer takes care of that. Okay, that answers that question.
B
Great, so yeah. Basically, how the replication works: you have the Patroni cluster on the primary, and you have a Patroni cluster on the secondary site that we call the Patroni standby cluster, and the leader on the secondary replicates from the primary.
B
A
Gotcha, okay, yeah, right, that makes sense. So it's only the standby leader talking to the primary leader. That kind of leads nicely into our replication slots as well, I guess. So how do the replication slots work when it comes to PostgreSQL?
B
A
B
This changes a little bit. On the primary, we need one permanent replication slot per secondary site, but max_replication_slots should be double the number of Patroni replica slots. In this case, the minimum that we recommend is three, plus one for Geo. So, for example, the one for Geo is the permanent replication slot, and the other three are because the Patroni members need to replicate between them.
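The arithmetic for the primary side can be sketched as follows. This is a rough illustration of the counts as described in the conversation (one slot per Patroni member for intra-cluster streaming, one permanent slot per Geo secondary, with max_replication_slots doubled for headroom), not an official sizing formula.

```python
def primary_replication_slots(patroni_members, geo_secondaries):
    """Return (slots_in_use, suggested_max_replication_slots) for the
    leader on the primary site."""
    # Three for a three-member Patroni cluster, plus one permanent
    # physical slot per Geo secondary site.
    in_use = patroni_members + geo_secondaries
    # Double it for max_replication_slots, as suggested above.
    return in_use, 2 * in_use
```

For the three-node cluster with one Geo secondary discussed here, that gives four slots in use: the "three plus one for Geo".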
A
So let's say you have a primary site with one leader and two replicas. Each of those replicas needs replication slots just for the primary site, and then, when you bring in the secondary site, you double the number of slots. Do you need to configure each slot separately? And what points where: do the replicas also point directly to the primary and consume from it?
A
B
No. If you have three database nodes on the secondary site, you need only one physical replication slot on the primary, okay, and all of them will point to this replication slot. But only the leader will replicate from this slot; the other replicas will replicate between themselves, and they only start replicating from the primary if they become the new leader.
B
A
They need to replicate amongst themselves, and they need one slot for the Geo secondary. Okay, understood. And then you've got the Patroni standby cluster, which is on the secondary site: maximum replication slots is five, a minimum of three for one replica, plus two for each additional replica.
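The standby-cluster minimum quoted here ("three for one replica, plus two for each additional replica") reduces to a one-line formula; this is a sketch of that arithmetic only, worth verifying against the documentation mentioned below.

```python
def standby_max_replication_slots(replicas):
    """Minimum max_replication_slots for a Patroni standby cluster:
    three for the first replica plus two per additional replica."""
    if replicas < 1:
        raise ValueError("need at least one replica")
    return 3 + 2 * (replicas - 1)
```

Two replicas give the maximum of five mentioned above.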
B
I believe, if I'm not wrong... I think I know the answer here, but I can double-check with Gabriel, who wrote this recommendation.
B
Okay, I believe we need the three for one replica because when you bootstrap a node, it uses its own replication slot to catch up. Oh...
A
If you wouldn't mind, that'd be great. Okay, so now I understand the replication there, and I think you've really laid out for me how that all connects. So what else have I got?
A
Does each DB instance in the secondary site need a replication slot? We've answered all of that. There's an upper limit, but it's unlikely we'll hit that upper limit on the number of replication slots, so that's all good. Could we talk a little bit about the tracking DB, how it works in relation to this, and does the tracking DB have...
A
B
C
All right, what else have I got here? Yeah, I think there were some questions about RDS.
A
Yeah, so I've read about being able to use RDS; I guess that's an external database.
A
Why would a customer do that? I guess, from my understanding, why would they choose to do that? I'm sure there are valid reasons that I'm not aware of; it would be good to get your thinking on that. Why would we choose an RDS over deploying Omnibus?
B
I think there are two ways you can deploy an external database; one is to choose a cloud service like RDS.
B
A
Gotcha. So from a functionality perspective, there is no difference when it relates to GitLab, right? Running GitLab on that PostgreSQL, there's no difference. How does selecting RDS impact replication for a secondary site?
A
Okay, gotcha. But when it comes to secondary sites, how does that work? Because with Postgres we understand there's the replication process we've just talked about, but if they choose to use RDS for theirs, is the replication, the resilience, built in by the provider, or what would the architecture be? Yeah.
B
A
Gotcha, and so replication and all of that configuration is not necessary. But when it comes to promoting that secondary site, there's a couple of things I'd like to dig into. First, when it comes to promoting, how would that work? And second, when it comes to the tracking database, is that going to be on RDS?
B
They can set up the tracking database on RDS as well, okay, yeah. About the promotion: we can't handle the promotion on RDS, because we can't interact with the RDS servers to promote the database; same for the other cloud providers. So yeah, they have quite a manual step there. Oh, they can automate the promotion themselves, but on GitLab we only automate promotion if you are running fully on Omnibus.
A
I see, I see, okay. I think that's probably the gap in my knowledge: I haven't read the documentation related to external databases, so that's an action for me to go dig into. Okay, I think we've covered everything. Thanks, Douglas.
A
Oh no, this is perfect. I think one of the key things I was missing was the signaling and, I guess, the data flow when it comes to replication through the different nodes within a multi-node architecture. It might be a nice idea for us to put together a diagram of how replication would work.
A
So, you know, the leader in the standby cluster talks to the TCP load balancer, and the TCP load balancer talks to the leader in the primary; that kind of diagram, or a data flow diagram rather, would be beneficial, because you can immediately grasp it. But you've explained how that all works, so that's great, and I appreciate it. I don't have any more questions; I think you've answered everything I had, at least around streaming and replication.
C
A
Yeah, I will certainly do that. I think there are a few other conversations we've had; I've had a conversation with Catalin as well on the proxy, where it would be beneficial to have a similar diagram. So I will open a couple of issues to get those moving.
A
Anything else you can think of that would be useful to cover that we haven't, or maybe I haven't even touched on, or that should be added to the list here?
B
A
B
A
We've covered everything? Okay, awesome. All right, in that case, Douglas, I think we can wrap up. Really appreciate your time and all the effort you've gone through to, you know, dig up some of the answers here. Thank you.