Description
In this talk, we will discuss our open source project, github.com/crunchydata/postgres-operator, which leverages the Kubernetes API to perform advanced PostgreSQL automation on Kubernetes and OpenShift platforms, supporting large-scale PostgreSQL-as-a-Service deployments. We will demonstrate how to use Crunchy PostgreSQL on OpenShift to provision PostgreSQL clusters with load-balanced replicas, bring up essential database services, and fail over with almost no downtime. You will see how using Crunchy PostgreSQL on OpenShift brings an easy-to-use, secure database-as-a-service into your trusted environment.
So, some logistics: we're going to do a talk on scaling Postgres on platforms like OpenShift, and we're going to go into details on the Postgres Operator, which people will probably find interesting. So who is Crunchy Data? Crunchy Data is a company that does Postgres consulting and services, and several Postgres committers work for Crunchy. We do open source Postgres, it's all open source, and we also do containerization of Postgres, and we'll talk a lot about the containerization side here. But we're basically an enterprise Postgres support company.
Some of our customers do this on premise, some in public clouds, so we've seen all kinds of different combinations and hybrids between the two as well. Some people will run, say, most of their replicas on-prem or off-prem, but maybe the primary runs in some other cloud or something, so lots of different combinations.
Okay, so why would you do this? We want to lower the cost of provisioning Postgres; the containerization helps a lot with that, and some of the tooling around it, like the Postgres Operator for instance, makes the cost of provisioning really cheap. We also want to provision Postgres in a way where you have very good control of compliance, of how your Postgres is set up from a security point of view.
Some of the things that make up this Postgres-as-a-Service are a set of containers called the Crunchy Container Suite. There are about twelve to fourteen Postgres-related containers in that suite, from running the database to managing it, doing monitoring and statistics collection, and things like that. We support OpenShift as a primary deployment platform, so everything we're talking about runs on OpenShift; the Postgres Operator runs on OpenShift as of 3.7.
So the set of building blocks is this container suite. It runs the Postgres database, the open source database, and lets you do backups in a variety of different ways. Today we support three backup utilities for Postgres, including large-scale backups using pgBackRest, so our Postgres container includes pgBackRest for incremental backups. It includes things like pgAudit for government or DoD requirements for database auditing, and then we do things like metrics collection for Postgres, and we let you scrape those metrics with Prometheus. Again, there's a GitHub link there at the bottom if you want to go find more details about it, documentation and a suite of examples; you can just download that and try it out. Images are provided out on Docker Hub.
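As a rough sketch of trying this out, you can pull the images straight from Docker Hub. The image names below are the publicly listed Crunchy repositories, but the exact tags vary by release, so treat them as illustrative:

```shell
# Pull the core Crunchy containers from Docker Hub.
# Tags are illustrative and differ by Postgres/suite release.
docker pull crunchydata/crunchy-postgres:centos7-10.3-1.8.2
docker pull crunchydata/crunchy-backup:centos7-10.3-1.8.2
docker pull crunchydata/crunchy-collect:centos7-10.3-1.8.2
```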
Those are CentOS base images, so everything's free for you just to try. Some of the terms for containerization that come into play whenever you're deploying a database are obviously pods and services and deployments, but most important is the management of persistent volumes. A database container like ours has multiple volumes that you can attach: the data itself is its own volume, but there are also things like configuration files, archive logs, and backups.
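To illustrate, the volumes behind a cluster can be inspected with standard tooling. The cluster name and label key below are hypothetical; the labels the operator actually applies may differ by version:

```shell
# List the persistent volume claims backing a Postgres cluster.
# "mycluster" and the label key "pg-cluster" are illustrative.
kubectl get pvc -l pg-cluster=mycluster

# Or, on OpenShift:
oc get pvc -l pg-cluster=mycluster
```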
This graph simply represents that as you scale up the number of Postgres containers, or any container for that matter, you get to a point where managing all of those becomes kind of a burden on the human side. So the Postgres Operator tries to solve handling large numbers of deployments and decreasing that burden. That's why we did the Postgres Operator in the first place: people started deploying hundreds of Postgres containers, and one customer in particular was deploying about 700.
So when you deploy 700 database containers, something like an operator is really useful to help manage that sheer number of things. About 11 months ago we started writing this Postgres Operator, and its job initially was to enable easy provisioning of Postgres and also Postgres clusters, so it lets you scale up the number of Postgres replicas. It defines custom resource definitions that are centric to Postgres, so there are things like pgcluster, pgreplica, and pgbackup; those are all custom resource definitions that the operator supports.
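Once the operator is installed, those Postgres-centric CRDs show up alongside the native Kubernetes types. A minimal sketch (the exact CRD resource names vary slightly across operator versions):

```shell
# List the custom resource definitions the operator registers.
kubectl get crd | grep -i pg

# Then list the clusters defined through them; the plural resource
# name here is illustrative.
kubectl get pgclusters
```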
It's based on the Kubernetes APIs, and client-go specifically. So why would you do this? On the right is a typical advanced or complicated Postgres deployment: one primary with a number of replicas, with PVCs and claims and volumes and services. Everything on the right is basically automated by the Postgres Operator from a deployment perspective, but the management side is really where you get a lot of value from an operator, because it knows about the bill of materials on the right.
It lets you interact with bill-of-materials type concepts for a Postgres cluster, as opposed to you having to individually manage a number of small pieces of resources on your own, or keep track of those yourself. The operator applies metadata across everything on the right, so from a logical point of view you view it all as just a Postgres cluster.
We're going to show you just a real quick demo of the operator. It has a client interface today, and that client is nothing but a REST client that talks to the operator's REST API, so when you're interacting with it as a human, you're interacting with the REST API. The first command, pgo create cluster rs1, is how you start creating Postgres clusters. There are binaries for Windows, Linux, and Mac as well that we distribute. So we just created three clusters there, and we're going to apply some metadata labels.
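A sketch of what that demo step runs. The cluster names come from the demo; the label flag syntax is illustrative and may differ slightly by pgo version:

```shell
# Create three Postgres clusters through the operator's REST API.
pgo create cluster rs1
pgo create cluster rs2
pgo create cluster rs3

# Apply a metadata label to just two of them (flag syntax may
# differ by pgo release).
pgo label rs1 rs2 --label=env=test
```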
So if you're trying to manage hundreds of these containers, you want to be able to categorize them in certain ways. Here we're going to apply a metadata label of environment=test on just two of those, and then you're able to search and query based on those metadata labels. So imagine hundreds of these deployed with several different categorization schemes: you're able to view the assets that you've deployed. Using this pgo client means that you can examine what you actually have deployed out there as a set of Postgres assets.
That command there shows you the kind of flexibility we've built into the operator. It's really geared towards complex deployments of Postgres: that particular command lets you place replicas on completely different storage classes from the primary. It also lets you specify resource configurations for different pieces of your Postgres cluster, and it can target certain kinds of nodes for the primary, so it uses Kubernetes node affinity.
Node affinity specifically lets you place where your primary Postgres is going to be, and then there's logic baked into the operator that places node affinity rules so that the replicas land on nodes where your primary is not running, essentially, which gives you a form of HA. That's the configuration file on the server side of the operator that defines all those configurations, both for resources and for storage classes. You can have any number of storage classes you want, and you can do things like back up to specific storage classes as well.
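As a hedged sketch of that flexibility, a create command can select different storage configurations for the primary and replicas and steer the primary via a node label. The flag names below are illustrative only and vary across operator releases; check your pgo version's help output for the exact spelling:

```shell
# Place primary and replica data on different storage configs and
# pin the primary to labeled nodes. Flag names are illustrative.
pgo create cluster rs4 \
  --storage-config=fast-ssd \
  --replica-storage-config=standard \
  --node-label=disktype=ssd
```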
So if you wanted your backups to physically be placed on a storage class in another data center, you can do that. That last command just shows you how to view a cluster; it gives you information that's useful, and then there's a simple test that shows the status of the cluster. pgo df is just showing you the data capacity utilization of your Postgres on a per-volume basis, so that you know how much of your PV you're actually using with Postgres data, and this last command shows the overall operator status.
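Those inspection commands look roughly like this; the subcommand names match the demo, but treat the exact invocations as a sketch for your pgo version:

```shell
# Show the deployed cluster and run a simple connectivity test.
pgo show cluster rs1
pgo test rs1

# Show per-volume capacity utilization and overall operator status.
pgo df rs1
pgo status
```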
It's all command-line driven today; we're working on a web app to front this, which is one of the roadmap features we're working on. That command there just shows you, overall, how many databases I'm running and what versions of databases I'm running. So if you're a DBA and you have hundreds of these things deployed, it'll tell you what specific Postgres versions you're running and how many of them are running.
Another thing, while this demo is running: this is controlled with an RBAC mechanism, so you can define different kinds of roles for operator end users. You can define read-only roles, or admin roles, or users, so you can precisely define which features of the operator specific users get to use. It just uses basic auth and a simple RBAC mechanism.
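Since the pgo client is just a REST client with basic auth, you can also hit the operator's API directly. The endpoint, port, and path below are hypothetical stand-ins, not the documented API surface:

```shell
# Call the operator's REST API directly with basic auth.
# Hostname, port, and path are hypothetical examples.
curl -k -u readonlyuser:password \
  https://postgres-operator.example.com:8443/clusters/rs1
```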
There you'll see one cluster with multiple pods that we've scaled up. You just run pgo scale and it causes the operator to create a new replica that's attached to that cluster. From a debugging point of view, whenever we print out information about a cluster, if you have access to kubectl or oc, it gives you the ability to do further diagnostics by printing out that information.
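That scale-then-inspect flow, sketched; the cluster name and label key are illustrative:

```shell
# Add a replica to an existing cluster via the operator.
pgo scale rs1

# Then drill in with standard Kubernetes tooling for diagnostics.
kubectl get pods -l pg-cluster=rs1
kubectl describe deployment rs1
```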
So, roadmap. People seem to have really liked this project, and we have some really large organizations testing it out and trying it. Some things we're working on as kind of phase 2 of this: handling advanced backup management. If I'm creating thousands of backups, what do I do in terms of scaling the management of those backups? That's a problem we're looking at solving. Also thick and thin cloning of databases using different kinds of storage technologies.
Rapid data ingest is another: if we can apply operator scaling towards rapid processing of thousands of input files, that's something we think is interesting to look at. A graphical user interface: people would love to see that, as opposed to this beautiful command-line tool. And then advanced security: we think we can do things from a security point of view in terms of applying SQL security policies across large numbers of Postgres databases, and we think we can do that with the operator.
So if you have any interest in this topic at all, check us out at booth 630; you get a hippo. You can talk to us and ask us any kind of questions you want about Crunchy Postgres, the operator, and the things you can do with it. These projects are both open source as well, so they're very accessible for you to just go download, try out, and play with.
We sell professional services and support around these, and we also do training for customers on it, for enterprises needing support; that's our business model. But they're really exciting projects. We think the operator technology really is the way to go in terms of advanced automation, especially if you have hundreds and hundreds of containers to manage; we think that's where the sweet spot is. So for use cases like dev and QA, if you've got lots of databases that you need to provision and manage, look at things like the Postgres Operator.