From YouTube: OpenShift Commons Briefing #85: PostgreSQL Operator for Kubernetes with Jeff McCormick

Description
The PostgreSQL operator is a controller built on top of the Kubernetes API that works to automate and implement advanced database orchestration features often required by DBA staff managing large numbers of PostgreSQL databases. In this overview, Jeff McCormick describes what an operator is, how operators are built, and demonstrates the features of the open source Postgres Operator provided by Crunchy Data.
A: Well, hello and welcome again to another OpenShift Commons briefing. Today I'm with Jeff McCormick, one of my favorite people from Crunchy Data, who's going to talk to us about the PostgreSQL operator he's created, about using it with OpenShift, and about what all that means, since it's a new concept to me. So we're going to let him explain it and introduce himself. Without any further ado, Jeff, take it away.
B: Thanks, Diane. Today I'm going to give an overview of a new capability that we've named, basically, the Postgres Operator. I'll go into details in this briefing about what it is, how it's built, and that kind of thing, and then we'll wrap up with a short demonstration at the end, just to show you how it behaves.

I work for Crunchy Data. We're a Postgres company, and that's all we do. Essentially, we specialize in open source Postgres sales and support, and also custom development for people doing unique things with the Postgres database. We have a presence in the federal government space with a certified version of open source Postgres as well. I'd encourage you to look at our website, crunchydata.com, for more information about the different kinds of services we can offer, including 24/7 support.
B: We actually started this project, called the Postgres Operator, to do just that. So, what is an operator? First of all, you can find all of the code we're going to talk about today: it's open source, and it's out there on GitHub at the link I've listed. You can pull it down, build it, and inspect the code. There are built binaries on there for you to try out as well, and the containers that it uses are actually out on Docker Hub too.
B: So you can run the examples and play around with it. An operator really is just a controller, a piece of software, and in this context we've written software that controls deployments of Postgres database components within a Kubernetes or OpenShift cluster. That's really the focus of this operator and this controller, but if you're familiar with controller patterns in general, that's what this is doing. You deploy the operator on a Kubernetes or OpenShift cluster.
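The controller pattern Jeff refers to can be sketched as a small reconcile loop: compare the desired state against the actual state and emit the actions that close the gap. This is an illustrative toy, not code from the operator; the `state` type and action strings are invented for the example.

```go
package main

import "fmt"

// state is a toy stand-in for a resource spec: here, just a replica count.
type state struct{ replicas int }

// reconcile compares desired vs. actual state and returns the actions
// a controller would take to make actual match desired.
func reconcile(desired, actual state) []string {
	var actions []string
	for actual.replicas < desired.replicas {
		actions = append(actions, "create replica")
		actual.replicas++
	}
	for actual.replicas > desired.replicas {
		actions = append(actions, "delete replica")
		actual.replicas--
	}
	return actions
}

func main() {
	// Desired: 2 replicas; actual: none yet. The loop emits two creates.
	actions := reconcile(state{replicas: 2}, state{replicas: 0})
	fmt.Println(len(actions), actions)
}
```

A real controller runs this comparison continuously against the cluster's live state rather than once.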
B: You would use an operator to automate things as well. In the world of databases, there are all kinds of workflows that DBAs would do, or that people deploying Postgres clusters on an environment like Kubernetes or OpenShift would do. So what we can do is automate a lot of those manual tasks, and the operator itself is a place where we can build those kinds of automation layers. The operator is built in Go, this particular operator is, and it uses the Kubernetes client API for Go; there's a link there, and that's an interesting project.
B: This project depends upon that open source project, and it just allows me, from Go, to interact with the Kubernetes API using code that I write. The operator is basically just interacting with the Kubernetes APIs to do all sorts of things, like updating labels on containers, creating containers, and deleting containers. Everything it does is based upon leveraging that Kubernetes client API.
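The kinds of API calls Jeff lists (create, delete, update labels) can be sketched as a narrow client interface with an in-memory fake behind it. This is not client-go's real interface; the `KubeClient` methods and `fakeClient` type are invented for illustration of how controller logic can be written against such an API and tested without a cluster.

```go
package main

import "fmt"

// KubeClient captures the handful of calls the controller logic needs.
// A hypothetical simplification; client-go's real API is far richer.
type KubeClient interface {
	CreatePod(name string) error
	DeletePod(name string) error
	UpdateLabel(pod, key, value string) error
}

// fakeClient is an in-memory implementation, handy for testing.
type fakeClient struct {
	pods map[string]map[string]string // pod name -> labels
}

func newFakeClient() *fakeClient {
	return &fakeClient{pods: map[string]map[string]string{}}
}

func (c *fakeClient) CreatePod(name string) error {
	c.pods[name] = map[string]string{}
	return nil
}

func (c *fakeClient) DeletePod(name string) error {
	delete(c.pods, name)
	return nil
}

func (c *fakeClient) UpdateLabel(pod, key, value string) error {
	labels, ok := c.pods[pod]
	if !ok {
		return fmt.Errorf("pod %q not found", pod)
	}
	labels[key] = value
	return nil
}

func main() {
	var kc KubeClient = newFakeClient()
	kc.CreatePod("mycluster-master")
	kc.UpdateLabel("mycluster-master", "pg-cluster", "mycluster")
	fmt.Println("created and labeled pod")
}
```

Writing the controller against an interface like this is what makes the operator's behavior testable independently of a live Kubernetes API server.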
B: This operator, this Postgres operator, is different from some other operators in that it has a command line interface, so human beings can actually cause the operator to take action. From a command line perspective, the operator works a lot like kubectl or the oc command, in that it gives you, from your desktop, an ability to interact with the Kubernetes API or the OpenShift API, and whenever you do that you can get information back from the Kubernetes cluster.
B: You can also create objects on Kubernetes using that command line interface, and that's the primary means right now for the Postgres operator to understand what you want it to do, and for you to cause it to do things. The operator runs as just a standard deployment, so you run it like any other deployment out on your OpenShift or Kubernetes environment.
B: It sits out there and watches for third party resources that we've defined; I think there are five or six different Postgres-related third party resources, and the operator is sitting there watching for changes on them. So when you create a third party resource called, say, pgcluster, that's how the operator will notice that event, and it'll take action. That's, I guess, the interesting thing about an operator: they're based largely on third party resources. In future versions of Kubernetes they're changing from third party resources to another type.
B
That
will
talk
about
customer
resource
definitions
so
but
did
in
the
in
in
reality,
they're
they're
sort
of
serving
the
same
purpose
from
a
Postgres
operator
perspective.
Is
it
just
a
means
for
us
to
catalog
or
store
metadata
about
Postgres
deployments
its
place
for
us
to
store
that
metadata
and
interact
with
it
through
a
standard
kubernetes
api
as
opposed
to
us
and
then
in
our
own
similar
type
construct?
So
I'm
really
excited
about
third-party
resources
and
customer
resource
definitions.
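The watch-and-react behavior Jeff describes can be sketched as a loop draining an event stream and dispatching on the event type. This is a toy: the `Event` struct and the handling strings are invented, and a real operator would receive these notifications from a Kubernetes watch rather than a local channel.

```go
package main

import "fmt"

// Event mimics the ADDED/MODIFIED/DELETED notifications a Kubernetes
// watch delivers for a custom resource such as pgcluster.
type Event struct {
	Type string // "ADDED", "MODIFIED", or "DELETED"
	Name string // resource name
}

// runWatch drains events and reacts to each one, the way the operator
// reacts to changes on its Postgres third party resources.
func runWatch(events <-chan Event) []string {
	var log []string
	for ev := range events {
		switch ev.Type {
		case "ADDED":
			log = append(log, "creating cluster "+ev.Name)
		case "DELETED":
			log = append(log, "tearing down cluster "+ev.Name)
		default:
			log = append(log, "updating cluster "+ev.Name)
		}
	}
	return log
}

func main() {
	events := make(chan Event, 2)
	events <- Event{Type: "ADDED", Name: "mycluster"}
	events <- Event{Type: "DELETED", Name: "mycluster"}
	close(events)
	for _, line := range runWatch(events) {
		fmt.Println(line)
	}
}
```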
B: The operator uses a template-based approach for what actually makes up a Postgres cluster. It may be a master database container, it may be a series of replica containers, it may be services for those containers, it may be a PostgreSQL-based router or proxy. All of those things make up what we're calling a Postgres cluster, and you can define them in a template. The Postgres operator is designed so that you can add your own set of templates that meet your particular requirements.
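Since the operator is written in Go, a template-driven definition like this is naturally done with the standard `text/template` package. The template below is a made-up miniature, not one of the operator's actual templates; it just shows the mechanism of rendering a resource definition from parameters.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A toy stand-in for the operator's resource templates; the real ones
// define full deployments, services, and PVCs.
const serviceTmpl = `{"kind":"Service","metadata":{"name":"{{.Name}}","labels":{"pg-cluster":"{{.ClusterName}}"}}}`

type params struct {
	Name        string
	ClusterName string
}

// render fills the template with per-cluster parameters.
func render(tmpl string, p params) (string, error) {
	t, err := template.New("svc").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := render(serviceTmpl, params{Name: "mycluster-svc", ClusterName: "mycluster"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Swapping in your own template file is what Jeff means by adding templates that meet your particular requirements.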
B: Now, there's a default initial definition, but over time you'll see more definitions placed out there. This diagram shows you, kind of schematically, what the operator consists of. On the outside of OpenShift you have this pgo client, and that's just a command line binary, a Go binary like any other, and you run it. It connects through the Kubernetes API over to your OpenShift or Kubernetes cluster. That's how it interacts with the operator on your Kubernetes; it's exactly the same way that kubectl or the oc command works.
B: The operator is what's listening and watching for events on those resources, and making changes over time as those resources change. What it does is primarily create those Postgres deployments, and those are the deployment boxes in the diagram. If you take a look inside one of them, it looks sort of like this, and it can be quite complicated. In Postgres you can have a master database, and then you can have a series of read-only replica databases connected to it, replicating state.
B: Those databases also each have a related persistent volume claim. So there's a lot going on here, and one of the values of the operator is that it treats all of those things as, basically, just a Postgres cluster. It's a simplification of the Postgres clustering mechanics. Without the operator, you basically have to construct and deploy all of these things in pieces, on more of a manual, piece-by-piece basis.
B: So why would you want to do this? What is it useful for? The question I typically get asked is: why do I need this operator? I can just run the containers, build templates, and deploy those. And it's been working that way for a couple of years; we have a suite of containers, and there are lots of examples.
B: You run some scripts, there are some JSON or YAML files, and you can deploy those things, and if you're a developer or a good DevOps person, you can string together a series of scripts that will help you automate the deployment. Well, for the operator, I list some reasons here why you would find it useful, and it's really geared towards people who want to automate workflows around databases.
B: These are things a DBA would typically want to do, like backing up databases or restoring them. If I build some of those workflows and implement them inside the operator, you can do things like reduce human error. You don't necessarily have to build your own set of scripting around the base-level containers to do certain things, and when you start working with large numbers of database deployments, that can get really unwieldy over time, given all of the things that make up a functional, robust Postgres cluster.
B: So without some sort of automation, you have lots of things to manually keep track of. Some people who are deploying lots of databases want an ability to implement a standard set of practices or policies around their databases, so the operator gives us the means to do those sorts of standard practices for people who have very specific policy needs. And then there's ease of use; I'll show you in the demonstration, but it's pretty simple.
B: Once you have the operator running and deployed, it's really pretty simple to deploy a Postgres cluster with it versus some other means. Large-scale deployments are where I think the operator will really shine. If you have, say, hundreds of Postgres databases you want to deploy and manage, that's where I think the value goes way up, because it gives you an ability to collect and maintain metadata on all of those clusters, and you can query based on that metadata.
B: There are also advanced, or hard, things to do in a database: multi-step pieces that need to happen in order to do certain database-related tasks. For example, cloning a Postgres database involves a series of steps that a DBA would have to manage. Well, we can implement those in the operator, so that it's much more user-friendly and consistent to manage those complex orchestration building blocks.
B: So what are the features of the operator? Essentially, here are things you can do with it from the command line. You can say `pgo create mycluster`, and that's going to create a Postgres cluster deployment named mycluster, along with everything that makes up that cluster deployment: the services, the deployments, the PVCs, or persistent volume claims, all of that.
B: It's kind of a collection of things that will get created and instantiated just by issuing that command, and likewise you can delete all of those related objects by just saying `pgo delete`. For instance, there are secrets that are used to store the Postgres credentials. Those automatically get created and deleted by these pgo commands for you, behind the scenes, and you wouldn't have to manually go in and delete them.
B: I was always wanting a way to do something like an ls command on a persistent volume claim, and this is kind of a simplistic way to do that. `pgo scale` is a command that lets you scale up the number of read-only replicas in that Postgres deployment. Initially, when you define a Postgres cluster, you can specify zero or more read-only replicas, and the default is just zero, meaning you don't have any read-only replicas. Well, you could run `pgo scale mycluster` with a replica count of one.
B: For a minor release, you basically can say `pgo upgrade`, and it'll automatically take down the old image and bring up that cluster with the new image, with the same data, essentially. You can also do a major upgrade, and that was actually quite interesting to develop. What it will do is convert, say, from 9.5 to 9.6; that would be considered a major upgrade, so it involves running an upgrade container based off of the old version and then spinning up a new version. It's quite an involved process.
B: Well, that `pgo upgrade` command automates that workflow. `pgo create policy` is a way to create a SQL-based policy and just name it, give it a common name. This is useful for people who have a series of SQL statements they want to apply against a database. Those can be security related, or they can be application related, but basically they're pieces of SQL that you name, and then you can apply those policies to a series of clusters based upon selector criteria, and that's really useful.
B: For instance, if you had a hundred Postgres databases in use and you wanted to apply a specific security policy, you could run `pgo apply` against that entire suite, or against anything that matched the selector, and that's a nice way to maintain policies. It'll actually catalog which policies have been applied to a cluster as well, so at any time you can look at a cluster and see which policies have been applied to it.
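The selector matching behind `pgo apply` works like standard Kubernetes label selection: a cluster receives the policy if its labels satisfy every key/value pair in the selector. The function below is a minimal sketch of that rule; the cluster names and labels are made up for the example.

```go
package main

import "fmt"

// matches reports whether a cluster's labels satisfy a selector,
// the way pgo apply picks which clusters receive a policy.
func matches(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	clusters := map[string]map[string]string{
		"mycluster": {"env": "prod", "team": "payments"},
		"devdb":     {"env": "dev"},
	}
	selector := map[string]string{"env": "prod"}
	for name, labels := range clusters {
		if matches(labels, selector) {
			fmt.Println("applying policy to", name)
		}
	}
}
```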
B: It's also very useful to do a clone, a copy, of a database. That kind of combines backup and restore all in one piece, and the operator is able to do watches on all of that workflow and know whether or not things have actually completed, whether the replication is finalized, essentially, and whether it's actually back up and running. So that's an interesting command that some people will find useful, and those are really the main features of the operator. Again, it's an open source project.
B: You definitely can take a look at it. There are a few releases of it out there now; we try to do a new release about once a month or every six weeks, essentially, to add new features over time. What it really is, is a means of controlling Postgres deployments. It gives a high-level abstraction around that, and we think that for people doing lots of Postgres deployments on Kubernetes or OpenShift, this is something they'll want to look at, for sure, to help make their jobs easier.
B: It does fairly sophisticated orchestrations, like backup, restore, cloning, and policy management. In the future, what you'll see here are more advanced security and management features, no doubt, and more templates, advanced templates of what a Postgres cluster actually consists of. It will follow the path of the Kubernetes API, so today it uses third party resources; in the future it will definitely support custom resource definitions as TPRs become deprecated.
B: You know, what you would do is basically get into this project. You'd see where all of the commands are divided out, and where the client has all of the commands in different Go packages, and you'd basically develop your own command package. You would add it to the project and submit a PR for it. Behind those commands, though, there's usually some code.
B: You would add code into the controller too, to implement a command, so they play hand in hand. On the client side, you're typically interacting with those third party resources, so for some new functions you'd have to add a new third party resource, and then the controller has to have code in it to deal with changes to those third party resources. So there's code you'd have to add on both the client and the operator side.
B: You know, that's how you would extend it today, and the same for the templates as well. You can add templates too, but in some advanced cases you're going to have to submit PRs to the project to get the templates you want added in. That's probably going to change over time as this gets more pluggable and we start taking more advantage of Go's ability to add dynamic modules and things like that.
A: I can see where people might have custom commands in larger-scale situations. You've put the clone in, you've done all the basic stuff there, and I actually can't think of anything else off the top of my head that I might want to do, but I can assure you there are probably DBAs that have specific needs, and maybe the templating
will help them out with that as well. But it's totally cool what you're doing, and it's the first time someone's explained an operator as nicely as you have. We do have plenty of time for a demo. There are a couple of folks on, and I don't see any questions from them, so why don't you go ahead with the demo, and then we'll see again if there are any questions after that. Okay.
B: If I do `pgo show cluster redhat`, it will basically show everything that it created, and it gives you things like the version: 9.6.3 is the version of Postgres that it's actually running. It gives you the names of all of the related things, like the deployments, the replica sets, and the pods, and it gives you the status of each pod.
B
So
1/1
means
that
it's
actually
functioning
gives
you
the
service
end
points
you
can
now
say
pgo
test,
Red
Hat,
and
this
will
perform
the
sequel,
paying
and
you'll
see
the
first
three
say
it's
working
and
it
prints
out
the
equivalent
piece
equal
command,
which
a
lot
of
people
arrest
people
will
find
useful.
Now
it
won't
give
you
the
password
here:
there's
another
command:
I
could
show
you
that
will
basically
display
the
password
secrets,
but
it
will
give
you
at
least
the
piece
equal
command
here.
B: So what that command just did was create a pgbackup third party resource, and then the operator detected that, which causes it to create a Kubernetes job. That job runs the Crunchy backup container, which connects to the redhat database and backs up all of its data to a backup volume.
B: This is a wordy command that's basically going to create a brand-new Postgres cluster called restored. When I pass it a backup PVC flag and a backup path flag, that's a clue to the command, and to the container, that you want to basically restore from the previous backup; that's why you're giving it those paths. And then when you say secret-from, it's going to use the credentials from the redhat database, so it'll copy those credentials so that it'll have its own set of credentials.
B: So if I run that command, the operator is now off doing all of that orchestration for you. If you say `pgo show cluster all`, you'll see you have quite a few clusters. Test was one I created before this demo, but there's the restored cluster, and you'll see that it's still kind of working; it says 0/1.
B: Let me say a little bit about policies. A policy can be just any bit of SQL that you would send, whatever defines a SQL policy for you. It could be anything you want, from creating objects to adding security settings. This SQL will eventually get applied and run on that Postgres database as the postgres user, so if you need to switch to different users, you would do that inside your SQL. I have some policies defined here, created with `pgo create policy`.
B: It depends. This will work with a shared volume type, like hostPath or NFS, but it also will work in a different configuration where the PVCs are created individually. But yes, you would need it to at least have read-write access, because you're basically creating the backups there. Hope that answers your question. It is interesting, in that we're trying to make this work for all the different volume types, whether they're shared ones or things like GCE or AWS volume types, where you basically can't share them.
B: You'll see in the documentation where, through some configuration, you can tweak those settings. You can also change the templates behind the PVC if you need to add your own attributes to what actually gets created. So if you wanted to add a storage class or something to that effect, you could add that in the templates the operator is using. It's a template-driven thing.
B: The operator is reading templates for services, for deployments, and for persistent volume claims as well, so you can get in there, do some fiddling with it, tweak it, and add some attributes. You can also set defaults in the pgo configuration file that specify things like the size of the PVCs, and I think also the read-write access mode; you can specify that there as well and override whatever default it's set to. I can apply a policy like this.
B: Let me explain this command. `pgo apply policy1` is basically going to take that policy1, which is a CREATE TABLE command in Postgres, and apply it against anything that matches the selector. In this case I'm going to say name=redhat. And I've got something wrong there, I think... yeah, it already existed; it's already run that.
B
So
it's
giving
me
an
error
back
saying:
you've
already
run
it
essentially,
but
if
I
did
it
on
restored
and
give
you
output
like
that,
and
what
happened
is
it
basically
just
the
operator
runs
that
sequel
against
you
know
whatever
clusters
match
up
with
that
selector,
so
that's
kind
of
a
neat
feature
as
well.
If
I
want
to
scale
up
on
of
these
clusters,
I
can
say:
PG
owes
like
a
pipe
today.
Pg
Oh
scale,
Red
Hat,.
B
So
that
scales
up
the
replica
deployment
to
one
or
sets
it
to
one.
So
now,
if
I
do
PG
Oh
show
cluster
Red
Hat,
now
you've
got
two
pods
out
there
and
it
says
Red
Hat
replica
that
this
one
here
is
basically
the
one
that
just
spun
out.
So
there's
two
deployments:
one
is
for
the
master
and
one
is
for
the
replica.
The
master
is
always
set
at
one,
because
Postgres
is
a
single
master
database.
The
replicas
deployment
is
initially
set
to
zero.
B
You
can
set
that
via
configuration
to
whatever
you
want,
but
it's
just
basically
setting
if
you
set
it
to
zero,
it's
basically
just
setting
out
there
waiting
for
you
to
scale
it
up.
If
you
need
to
now
for
development,
most
people
are
not
going
to
need
to
scale
up
a
Postgres
cluster.
So
that's
why
that
reasonable
default
for
some
people
will
be
just
zero,
but
it's
setting
out
there
in
case
you
do
need
to
scale
it
up.
B: ...for the myclone2 database, and basically you've just done a thick clone. Now, the operator can watch these things for a very long time. So if these backups or restores take hours, it's okay, because it's just sitting there reading the Kube API, watching events, and it's registered to look at all of these Postgres objects. It will only continue the workflow if it knows that, hey, the backup is finished, so therefore it'll go ahead and trigger a recovery on it.
B: So that's the beauty of using those Kube APIs: it's able to do these very long watches, watching for events, so that a workflow may take hours or days, and it doesn't really matter to the operator. It's just sitting there waiting for things to happen. So if I do `pgo show cluster myclone`, we should have... well, I think I got something off there.
B
You
know
I,
guess
it's
just
taking
a
while
for
the
clone
to
happen
or
had
a
problem
with
it,
but
that
essentially
would
have
normally
spun
up
a
secondary
clown.
I
must
have
taught
something
wrong
or
caused
it
error
off,
but
that's
all
of
the
demo
that
I
have
today
and
I
guess
Diane
with
that
I'm
kind
of
geeking
out
a
point
to
correct
that
I
think.
A: That was great, and in a live demo, if you don't have something go sideways a little bit, it's not a live demo, exactly. Perfect. I think you've answered the questions that the folks had in chat. I am totally impressed with the level of automation that's in there now and what it's capable of, and it's a great example, I think, using Postgres for one, but I think it's a great way to learn about operators and Kubernetes as well. So thank you for taking the time today to do this, along with containerizing Postgres.