From YouTube: Geo with Patroni and PostgreSQL 11, 2020-07-16
A: Hello everyone, and welcome to this week's Distribution demo. This week we are going to focus on one of the issues that the Geo team is trying to resolve: setting up a Geo installation with Patroni replication. In this issue, the primary cluster has a three-node database and the secondary has a single-node database, and the objective is to replicate the database from the primary Patroni cluster to the secondary.

The way the Geo team verified this was to use Orchestrator to set up a repmgr cluster, then switch from repmgr to Patroni, and then try to connect the secondary to the Patroni cluster and replicate. They ran into a series of issues; one particular issue is permanent slots, which caused some trouble, and there are follow-up issues to work on that. In this demo in particular, we are going to follow in their footsteps, see what they were dealing with, and look at how we can find another solution.
Following the same instructions as the README, the quick-start guide for Orchestrator, there is nothing in particular to report; I just left some notes. One issue I found a bit annoying: I used the dev container, and the generated SSH key was gone after I exited the container, so I couldn't connect to the VMs after I left the container. I had to rebuild the cluster and get that key from the container onto my host to make it work. So let me show you the important files that I use.
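A minimal sketch of that workaround, assuming the quick-start dev container runs under Docker and generates the key pair under /root/.ssh; the container name, key path, and login below are hypothetical, not taken from the demo:

```sh
# Copy the auto-generated key out of the dev container before exiting it,
# so the provisioned VMs stay reachable from the host.
docker cp orchestrator-dev:/root/.ssh/id_rsa ~/.ssh/orchestrator_key
chmod 600 ~/.ssh/orchestrator_key

# Connect to one of the cluster VMs with the rescued key.
ssh -i ~/.ssh/orchestrator_key <user>@<vm-address>
```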
A: So we are deploying a main cluster with three database nodes, three Consul nodes, and one application node, and the secondary, which is called geo-1, has one database node and one application node. I also noticed that we need to have separate Consul nodes, otherwise Orchestrator does not deploy repmgr, or does not configure repmgr. Correct me if I'm wrong, Robert: do I need to set up Consul to get repmgr configured?
B: It needs Consul to deploy it; that is the current behavior. That's business logic that you and I should probably open an async issue about and hash out: make sure we understand what determines what, when, and how, and make the business logic match what we expect. That's a pain point that I know is there and that you've stumbled over. We just need to determine what our business rules are; right now it's very naive: if you have Consul, you're highly available, and if you don't, you're not.
A: So yeah, because generally when we deploy, the Consul service also runs on the same database node, so my initial thought was: if I have more than one database node, I am running in HA mode. But that's okay. I set it up pretty smoothly, following exactly these five steps here, and it went ahead without any trouble. I have my cluster deployed; let me show you.
B: It's not vital to what we're on at the moment; I just want to make sure, since you said you weren't sure where the SSH key was persisted. We updated the quick start. Was it apparent to you from the quick start that the auto-generated key wouldn't persist after your run? If it wasn't, we need to make another adjustment to that documentation. That's a pain point that you know about now.
A: You know, I should have known that the SSH key is in the container, but since I was running it and I just closed the container, I didn't expect everything to go away. It caught me by surprise, and I really didn't look into the documentation to see what would happen next. There was no notification that, you know, this will happen if you do that.
A: Because the way that it should have happened is that Patroni starts and tries to delete every replication slot that is there and that it doesn't manage by itself. So we would expect to see that replication between the primary and the secondary node fails because the persistent slot is not there. I don't know why it is complaining about dropping the replication slot; let me have a look.
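For context, this matches Patroni's behavior of dropping physical replication slots it does not own, unless they are declared as permanent slots in the dynamic configuration. A hedged sketch of inspecting the slots and declaring the Geo slot permanent; the slot name geo_secondary is hypothetical, and the config path is the GitLab default:

```sh
# List the physical replication slots on the current leader, including the one
# the Geo secondary streams from.
sudo gitlab-psql -c "SELECT slot_name, slot_type, active FROM pg_replication_slots;"

# Declare the secondary's slot as a permanent slot in Patroni's dynamic (DCS)
# configuration, so Patroni stops trying to drop it on startup and failover.
patronictl -c /var/opt/gitlab/patroni/patroni.yaml edit-config \
  --set 'slots.geo_secondary.type=physical'
```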
A: To replicate the Geo database, we should call this command after we switch to Patroni. And I think we need to stop the secondary as well, because it still has an active WAL sender; the switch from repmgr to Patroni still happens, but it may end up in an unintended state, which it did. The Geo team reported that they experienced problems with replication on the secondary.
We saw that Patroni's startup was trying to delete the slot and couldn't, and this is something that should be avoided. The reason was that there was an active WAL sender, so the replication from the secondary should have been stopped first in this particular setup, because we have used Orchestrator to set up this cluster and we wanted to switch from repmgr to Patroni.
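To see what is holding the slot, you can check the WAL sender processes on the primary; a minimal sketch using standard PostgreSQL views (the pid is a placeholder):

```sh
# On the primary: list the active WAL senders; the Geo secondary shows up here
# while it is still streaming.
sudo gitlab-psql -c "SELECT pid, client_addr, state FROM pg_stat_replication;"

# If the secondary's streaming has to be stopped before the switch, terminate
# that backend, using the pid reported by the query above.
sudo gitlab-psql -c "SELECT pg_terminate_backend(<pid>);"
```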
D: I think you just have to run reconfigure again; the act of replicating copies over the primary's pg_hba.conf entries, yeah.
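A hedged illustration of what D is describing; the replication user and address range below are illustrative values, not taken from the demo:

```sh
# Regenerate the PostgreSQL configuration, including the pg_hba.conf entries
# that allow the secondary to connect for replication.
sudo gitlab-ctl reconfigure

# Inspect the resulting replication entries; something like the commented line
# is what the secondary needs.
sudo grep replication /var/opt/gitlab/postgresql/data/pg_hba.conf
# host  replication  gitlab_replicator  10.0.0.0/24  md5
```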
A: If I understand it correctly, this slot is specific to one particular node, so replicating a slot from the master to the follower replica is not necessarily the right thing to do, because it depends on the cursor position: the source of the WAL streaming is now a different node, which might be at another point of recovery.
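The cursor mentioned here is the slot's restart_lsn, which is only meaningful on the node that owns the slot; a short sketch of inspecting it with standard PostgreSQL views:

```sh
# restart_lsn is the oldest WAL position the slot promises to retain. Another
# cluster member may already have recycled WAL past that point, which is why
# copying a slot between nodes is unsafe.
sudo gitlab-psql -c \
  "SELECT slot_name, restart_lsn, pg_current_wal_lsn() FROM pg_replication_slots;"
```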
B: That's correct, and that said, this is one of the thorny issues. I'm just calling this out, Stephen, because what Hossein is running into here is what I have run into as well. This is a super thorny issue that's going to need a lot of "how do you want to handle this, what do we do about database hardening" discussion from the Orchestrator side as well. So, just calling it out: this is a huge, huge problem in terms of uptime. Alright, handing it back to you. OK, gotcha.
A: Actually, yeah, I just wanted to show, first of all, the problem that we had. The issue that was the subject of this demo, verifying Patroni in a simple Geo installation, occurred, I believe, because of the way that we configured the cluster: switching from repmgr to Patroni without killing the WAL sender and without calling the replicate command again.
So if we do those things, kill the WAL sender on the secondary, call the Geo replicate command (gitlab-ctl replicate-geo-database) again, and reconfigure again, we shouldn't run into the same problem that the Geo team reported. And if we support Patroni in Orchestrator, this should be fine, because the cluster will be set up like this from the beginning. But I think we will run into another issue: if the master fails on the primary side, the secondary side cannot simply follow it.
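A hedged sketch of that sequence on the Geo secondary, using GitLab's omnibus commands; the host and slot name are placeholders, and the exact flags may vary by version:

```sh
# On the Geo secondary: stop PostgreSQL, which also terminates the WAL sender
# on the primary that is serving this secondary.
sudo gitlab-ctl stop postgresql

# Re-run the Geo database replication against the new Patroni leader.
sudo gitlab-ctl replicate-geo-database \
  --host=<primary-db-address> --slot-name=geo_secondary

# Reconfigure so the regenerated settings, including pg_hba entries, apply.
sudo gitlab-ctl reconfigure
```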
A: Again, I tried to look into this, because in 13.2 we added something: basically, Patroni registers an SRV record for PostgreSQL. So if we can look up the primary via DNS, we can use that DNS record. But then there would be two problems.
First, I haven't seen any configuration, specifically a DNS configuration, for PostgreSQL that we can point at a specific DNS endpoint. And even if we could do that, or find a way around it, the problem would be the replication slot: it would have to be created again anyway, and created manually.
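For reference, when Consul is Patroni's DCS, the leader is typically discoverable through Consul's DNS interface; a hedged sketch, where the service name follows Consul's defaults and may differ per deployment:

```sh
# Query Consul's DNS interface (default port 8600) for the SRV record of the
# cluster leader; 'postgresql' here stands for the Patroni scope/service name.
dig @127.0.0.1 -p 8600 master.postgresql.service.consul SRV
```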
B: Hossein, I'm going to check and make sure; actually, I'm going to tag you on the thread where I went into all this. If you can add the details from what you found, we can focus in and make sure we're describing the exact same thing, because I think you've got an extra bit before where I discovered the problem. So let's make sure we focus that in, so that we can describe the whole problem and then schedule whatever we want.
A: Definitely, let's do that together; we should definitely follow this up. This is going to be a problem for everybody, and I think as long as we deploy the secondary like this, and we don't use the standby cluster feature of Patroni, we will keep experiencing this problem.
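The standby cluster feature mentioned here makes the whole secondary site a small Patroni cluster whose leader follows the primary site, which is what would let the secondary track a primary-side failover. A hedged sketch of the dynamic configuration; the host and slot values are hypothetical:

```sh
# Describe the standby-cluster section of Patroni's dynamic (DCS) configuration
# and apply it; the secondary site's leader then streams from the primary site.
cat > standby-cluster.yaml <<'EOF'
standby_cluster:
  host: primary-db.example.com   # primary site's leader (or a proxy in front of it)
  port: 5432
  primary_slot_name: geo_secondary
EOF
patronictl -c /var/opt/gitlab/patroni/patroni.yaml edit-config --apply standby-cluster.yaml
```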