Description
Get your espresso ready as we welcome our special guest Gabriele Bartolini, CTO of Cloud Native at EDB, to talk about how PostgreSQL and Kubernetes make a perfect match for scaling workloads with the power of cloud native, using the EDB operator and OpenShift.
A: Thanks for being here, and thanks to all the attendees. We will start soon; I do have my coffee, although I forgot to ask you to get yours.
A: So grab your coffee and let's start with the usual question. First of all, a brief introduction from you, Gabriele, and about the company.
B: 2ndQuadrant was acquired by EDB two and a half years ago, and 2ndQuadrant was started in 2008.
B: A book about Postgres, you know, the second edition of the PostgreSQL Administration Cookbook. And yeah, so I pretty much started to participate in the community as a contributor, basically around 2006.
B: You know, I wanted to promote Postgres in the Italian public administration, and I tried to organize an event in Italy. Actually Josh Berkus, who works for Red Hat now, he was part of the core team of Postgres back then, and he helped me organize this event in my city, Prato, in the north of Tuscany. It was the first, let's say, large gathering of the Postgres community in Europe.
A: And what about the name, where does it come from?
B: So, Barman. At the time, you know, we were helping customers move from Oracle to Postgres, and in Oracle there's a tool called RMAN, which is Recovery Manager. In that period Postgres already had a solid framework for business continuity, and especially continuous backup and point-in-time recovery, but what we used to see at the time were many custom scripts; everyone was using their own.
B: You know, custom scripts to perform backups and recovery. We came up with the idea of a Python tool that was able to manage backups remotely, and also the recovery part, and that's how Barman came about. And then, because I don't take myself too seriously very often, we came up with the name Barman: Backup and Recovery Manager for Postgres. Okay.
B: Yeah, I think, you know, that's pretty much the work I've been doing for the last four years: I kind of reinvented myself in the Kubernetes space.
B: Okay, because I've been fascinated by the DevOps movement for many years, and, you know, automation, continuous delivery, all those principles led us to Kubernetes. And we came to the idea: why not use Kubernetes for Postgres as well? Our concept of bringing Postgres into Kubernetes was born in 2019, I think, after we realized that local persistent volumes were kind of solid in the Kubernetes space and the operator pattern was becoming dominant.
B: Okay, that's when we said: let's see if it's possible. What we did back then was actually try to install Postgres in Kubernetes on bare metal and see how it would perform, with a kind of fail-fast approach, and we saw that it was actually going pretty much as fast as bare metal. That's when we said: that's it, let's move forward, and that's how our operator was born. In 2022 this operator, CloudNativePG, was released open source, and yeah.
B: Here we are, you know. And beyond CloudNativePG we've also got an operator for OpenShift, and I will be talking about that today. So, okay.
B: About Postgres, you know, I think Postgres is one of the most used databases in the world, and it's never too old; it's been innovating for over 25 years now. There's version 15, and version 16 will be coming out in a few months. It's open source, it's a multi-purpose database, it's very solid. You know, the architecture is one primary and multiple replicas.
B: Basically, you can scale vertically with a very high number of transactions per second, and then, you know, there's streaming replication, physical replication, synchronous replication as well, which can be controlled at transaction time. So you can say: this transaction is very important, I want it to be written on another host too; or: this one is less important, just write it here locally.
B: Then there's continuous backup, point-in-time recovery, declarative partitioning. Anyway, there are so many features in Postgres that, as I always say, it's probably worth starting kind of a blog series about these very important features that Postgres already has, for the new generations that are maybe approaching Kubernetes now and are maybe less familiar with the SQL standard, which is, in my opinion, a very underestimated language.
B: Yeah, this is Postgres, and, you know, it's kind of difficult to recap it in one slide. Then this is EDB. I like to share this; I don't want to use this space to promote the organization too much, but I think this slide is worth more than a thousand words. I think EDB's mission is to innovate through Postgres, and if you follow this link, you'll see there's actually a very nice infographic.
B: Yeah, so basically Oracle migrations are one of the key differentiators of EDB. I like to define it also as a cultural change in an organization. Of course you also need training for Postgres, because whoever comes to Postgres finds that things work in a different way, but EDB has a layer of compatibility with Oracle and also a set of tools, practices and skills that help you migrate from Oracle to Postgres, also using semi-automated and automated tools.
B: So it's a very important part of our business, and it's also available in our operators. In our operator you have access to EDB Postgres Advanced. Advanced is a fork of Postgres that introduces an Oracle compatibility layer inside, so you can migrate your databases from Oracle to Postgres more easily with this layer, while maybe you start your new applications in Postgres, so you actually see this mix of use cases.
B: So, you know, new applications in Postgres, while maybe you also keep legacy systems and don't migrate them, or others are migrated using the compatibility layer. And the good thing about EDB is that it provides assistance through all these phases, as well as being influential in the open source products, which now also include the operator for Postgres. Okay, so I hope that helps.
B: You know, this is an article that I wrote last year. It was very popular, and it explains why I think it's important to run Postgres in Kubernetes. So if you have a few minutes, you know, it's worth a read.
B: So you understand more, definitely. And this is also a very important movement, which I'm really proud to say we have contributed to from the start: the Data on Kubernetes community. The Data on Kubernetes community aims to promote the usage of data workloads in Kubernetes. So again, you can follow this link; you'll find a report that actually says that having data workloads in Kubernetes increases productivity.
B: I mean having data workloads alongside applications in Kubernetes, okay? Because many times we see that, for example, the database is left outside Kubernetes. With this community we are working very closely with SIG Storage and TAG Storage, which at the moment are the groups of interest inside the CNCF closest to databases and data workloads in general, to raise awareness about using Kubernetes to run data workloads like databases, and Postgres, for example. So that's...
A: Yeah, and do you think also that, in terms of operations, there's an increase in productivity? Does operating a database on Kubernetes help, or improve the way operations people manage it, compared to how they usually manage the same data stack outside Kubernetes?
B: The opportunity is to move the complexity and the risks, for example of operational disruptions, down to the infrastructure level, so that we can exploit Kubernetes to self-heal and to manage high availability at the application layer; and a database in Kubernetes is an application. So, if you think about it, what we have done with the operator is essentially put all our experience, 20-plus years of experience managing Postgres for business continuity in mission-critical environments, from people that have written high availability tools, failover management tools, backup tools, and have been managing Postgres...
B: ...you know, with some of the largest customers in the world, directly into the operator. So, for example, if there's a failure on the primary, we have developed the logic inside the operator to do the right things with Postgres and make sure that we fail over in a fast way, and, if the primary comes back up, to try and realign it by using, for example, pg_rewind, in case the old primary had gone forward in the timeline. You know.
B: So basically all the stuff that could be automated has been put into the operator. So database administrators, in my opinion, and that's what I always say when I talk about reshaping the DBA role, the role is more elevated: DBAs will be used for the stuff that requires more thinking.
B: You know, where I think human skills can excel and can make a difference, instead of the automated stuff that we were used to for many years, to the point that it kind of became the set of tasks that was always attached to the DBA profile. Okay, so I think, yeah, it's going to improve a lot of these routine activities, for example failovers.
A: On this point, how much knowledge of Kubernetes is needed?
B: I think, and this is my last point, okay: you also need to have a good operator, but, more importantly, when I say "you" I like to talk about a kind of DevOps, multi-disciplinary team. So again, the reshaped DBA role, in my opinion, needs to be a T-shaped profile person that works with other T-shaped developers in a multi-disciplinary team, so that this person can help the developers with SQL.
B: We put in some gates that require further end-to-end tests to pass before the patch is merged, and that's how we ensure continuous delivery, which then enables us to deploy continuously and, for example, have an application where new features go from the developer to the consumer in a very fast way. You know, that's the goal of this, and the database was, I think, left outside; now, with our operator, the chances of having it inside are increased. You know.
B: I think there's one cool feature of Postgres called transactional DDL, which is something that, when we showed it to Oracle people, they kind of freaked out. But basically you can create stuff and then roll back, okay? You can add columns, you know, and then roll back. So if this is a migration, the migration can be atomic.
B: So that's the good thing about Postgres, and the other good thing is that you actually make the developer responsible for this part. The idea of the microservice database has been at the foundation of our operators since day one. We want a database that is owned by a single application and, you know, by a development team that is independent, okay? So forget about the monolithic database, in a way, where, you know, there's the DBA, and when you need to make a change, or change version...
B: ...you need to wait for the other tenants to move along. This is all independent, so there's a highway between the developer and the production system, and that's where I think Kubernetes excels. That's, I think, where the real benefit of Kubernetes is: it is, you know, decades of innovations, not only in technology but also, I think, in how people in IT work together. You know, that's, I think, what's fascinating.
A: If you want to use Kubernetes but with the old procedures, the old methodologies, you are probably going to fail soon; if you want to adopt Kubernetes, you need to drop some of them and rethink all the processes you usually use to get to production, starting from the source code all the way to the production deployment, including...
B: Cool, so here we go with CloudNativePG. It's level five, production ready; it's been used in production for a few years and it's open source, Apache License 2.0. But I want to, you know, say that EDB, last year around April or May, donated the IP, so the intellectual property, to this community. The actual mission, the idea of EDB, was to try and pursue the CNCF Sandbox, which we actually tried last year.
B: You know, the project was rejected; we're going to retry. So the idea is to donate this project to the CNCF, but at the moment it's governed by the CloudNativePG community, and there's an open governance model, so everyone can participate. So it's not owned by EDB, but EDB actually pays us to contribute to this software.
B: You know, it combines both open source and proprietary software for sustainability. You know, the important thing about CloudNativePG is that we don't use a failover management tool, so, for example, we don't use Patroni, and we don't use stateful sets. We have basically extended the Kubernetes controller: the operator is written in Go and we take full control of Postgres and the persistent volumes. So we...
B: ...manage the PVCs directly, so it's a very interesting model. Why we did this is all explained in the documentation, if you're interested. Then another very important approach is that we are using immutable application containers. So the containers run only one application, which is the instance manager, a Go application that we have written, which controls Postgres, and the containers are immutable, so they're read-only. That's why, for example, we support the restricted-v2 SCC in OpenShift, okay? So this is very important, and it's fully declarative.
B: So, you know, it's about convention over configuration: you can override pretty much every attribute, but we have defaults that work for most cases. Then, yeah, we have automated failover, we provide services directly, and we have mutual TLS, so security by default: we build a CA by default for each cluster and we create TLS certificates to communicate with the replicas, so no passwords and so on. You can integrate that with cert-manager. We manage affinity control, then rolling updates.
B: So when you update the operator or the operands, we restart the standbys first, and then we allow you to choose the method to restart the primary, or, for example, to switch over first before you update, okay? So there are, you know, interesting things, and I want to share how the project is going: it's actually going very well on GitHub, you know. If you're following along, I wouldn't mind if you added a star, if you haven't done so.
B: So it's essentially a fork of CloudNativePG, and it was actually the father, because everything started as EDB Postgres for Kubernetes, or Cloud Native PostgreSQL as we used to call it. Then we open sourced it in April last year, and since then we have forked EDB Postgres for Kubernetes on top of CloudNativePG, and...
B: ...that explains why we have more than 2,000 commits and why it's been used in production by large customers. For OpenShift we have a certified operator and we also provide long-term support, and we are improving the backup strategies with Velero or OADP, and we are also talking with Kasten about having this transparent support. So essentially what we do is take cold backups of the standbys in a transparent way. So if you issue a Kasten backup or a Velero backup, what happens is we stop a standby...
B: ...we take a snapshot and we restart it, so the primary doesn't even notice, and for recovery we use that to restart.
B: So, briefly, you know, to deploy a three-node high availability cluster with our operator: normally, what you would do outside Kubernetes is install the latest Postgres 15 minor version, create the primary first, and then create the standbys.
B: The two standbys. But with the operator, what we do is also set up mutual TLS between the replicas; we use replication slots; and, for example, we can set resources and so on, and create a user and a database for the application. We also provide a way to reliably access the database via the network. And this is the manifest. The manifest is very simple: in this case we use Cluster as the kind, which is the resource that we have created, then we give a name to the database.
B: We call it mydb in this case. I'm using the guaranteed quality of service by setting requests and limits to the same values, four gigabytes of memory and eight CPUs, and I create three instances, meaning one primary and two standbys, and I request to have at least one synchronous standby at any time. Then I request to enable replication slots. If you're not familiar with Postgres, that's fine: replication slots are a way to keep the synchronization between the primary and the standbys.
B: So the primary knows which WAL files to retain for each standby, and following a failover they actually persist with our operator, so the standby which becomes the new primary knows where the other standbys were left, okay? This is a cool feature of the operator that is available for everyone. Then we can specify the storage for the PGDATA and the WALs; the WAL files are the transaction log. And yeah, this is what happens.
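A minimal sketch of the manifest being described, assuming the CloudNativePG `postgresql.cnpg.io/v1` API; the cluster name and the volume sizes are illustrative, not the exact values shown in the demo:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: mydb
spec:
  instances: 3              # one primary plus two standbys
  minSyncReplicas: 1        # at least one synchronous standby at any time
  maxSyncReplicas: 1
  resources:                # requests == limits gives the guaranteed QoS class
    requests:
      memory: 4Gi
      cpu: "8"
    limits:
      memory: 4Gi
      cpu: "8"
  replicationSlots:
    highAvailability:
      enabled: true         # slots are preserved across a failover
  storage:
    size: 20Gi              # PGDATA volume
  walStorage:
    size: 10Gi              # separate volume for the WAL (transaction log)
```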
B: This is how you apply the manifest, and this is what happens under the hood. The operator creates the PVCs for the PGDATA and the WAL, then starts the pod, creates the service, waits for the primary to start, and then enables applications, from this moment on, to connect to the primary. Then it clones the standby by, you know, creating the PVCs first, starts the pod, and begins streaming replication with mutual TLS, and then it also creates a read-only service that points to the standby. Then it creates the second standby, and so on. Yeah.
B: The region we're talking about, you know, is the same region here, so...
B: ...the latency is, I would say, below three milliseconds, and for the third one, you know, below five. Let's say we're talking about a cluster in the same city. I didn't add any slides about the architectures here, you know, because that's also a large topic, and in any case I've got more slides here. I will leave them here; I won't be able to cover all of them, but essentially, yeah.
B: Our approach is to have at least three availability zones in each Kubernetes cluster, which, I understand, is not always the case on-premise, unfortunately, but I think we'll get there. This is still, I think, a reminiscence of the lift-and-shift approach from the VM days. So ideally, soon we'll have more three-availability-zone Kubernetes clusters, so that we can leverage self-healing. Yeah, but that's...
B: Briefly, you know, storage: for us, being a database, it's the most important thing, obviously. But the good thing about the operator is that we delegate everything to the storage class. We support dynamic provisioning only; we don't support static provisioning, but, as you know, if you've got storage classes in Kubernetes, you can do that with them in any case.
B: So you can use local storage, which is our recommendation for the highest predictability and performance, or you can use network storage, okay? I have, you know, the two extremes in a single cluster. So, for example, you can use Postgres with our operator in this case, where you've got shared storage and shared workloads: you put databases on the same nodes where the applications are. You know, it's a mix of everything, and it works fine as long as you're happy with the expectations, you know, that...
B: ...a database has in this scenario, but it works fine, and this is, I think, what we would like to see more of, especially in an on-prem environment. And the other end, sorry, so you can have all the possibilities in between, is to have a dedicated node with dedicated storage for a single database, okay? That's where I think the high-end users, or, you know, for example, the banks that are moving their databases from where they are now into an environment like OpenShift...
B: ...you know, they have this possibility, and with Kubernetes it's pretty simple, because you've got taints. So you put taints on nodes and you say only Postgres can run on these nodes, or you can use the node selector to actually put a specific database on a specific set of nodes, maybe in different availability zones, so that you've got a primary in one availability zone and the standbys in the other two; and, for example, you can even say that you want the synchronous standby to be in these two availability zones.
B: So this is, I think, the most extreme case, which is possible. Or, for example, a very good compromise is to dedicate, let's say, start with three nodes dedicated to Postgres, with local storage, one in each availability zone, and say all the Postgres workloads go onto these machines, okay? So applications are in the same Kubernetes or OpenShift cluster, but databases are on different nodes.
B: It's a very good compromise, I think, and you get all the benefits of running Postgres in Kubernetes, so you've got a kind of Kubernetes-native and cloud native, you know, combo of application and database.
B: So, I don't know, you know, I'll quickly go through these. Our operator gives you three ways to bootstrap. One is initdb: with initdb you create a cluster from scratch, and it's also the method that we use to perform imports of existing databases, or also, at the moment, migrations from an older version of Postgres using a logical import. So right now it's possible, for example, to take a database on RDS and, let's say, move it into OpenShift with our operator using the import facility.
B: So you just connect with a superuser account to the RDS database, for example, and you create a new database on Postgres 15, you know, with just one configuration file. Or you can use the recovery method, which is used to create a replica, or a replica cluster, for example, and then there's point-in-time recovery. For example, you've got a table that was deleted at 10:30 this morning, and you want to restore the database up to that point.
B: This is the article in which I explained how to use the import facility. It's quite complex, but, you know, the import facility is quite complete, I would say, and it's all documented. Then, rolling updates, yeah. It's essentially the way for us to also upgrade Postgres, a minor update of Postgres. So every time we change the operator and the images...
B: So if there's a new minor version of Postgres, or if there are vulnerabilities that have been fixed, we build a new image every day, as a community and at EDB, of Postgres.
B: So, every day, if there's a library that has changed, the UBI base image has changed, or a new version of Postgres has been released, we create a new image. And so, when you update the operator, or when you change some values in the configuration of Postgres that require a restart, the rolling update is triggered. Backup and recovery: at the moment we support only S3 and, as I said before, Kasten and OADP; I mean, Kasten is on the way, but OADP for OpenShift for snapshots.
B: Okay, but the native backup and recovery stuff for Postgres is available only on object stores, so, for example, you can use an S3-compatible object store on-prem for this, and then maybe relay to the cloud, or go directly to the cloud. For monitoring, we've got a native Prometheus exporter. So by default we provide some standard metrics, but we also allow you to configure...
B: ...you know, your custom metrics directly through configuration, so with a config map or a secret, and that's neat. As far as logging is concerned, we log everything to standard output, so you don't need to install anything inside the containers: we export directly in JSON to standard output for all the applications we run. We also have PGAudit support and, for EDB, EDB Audit.
B: I mean, there's work to be done on transparent column encryption, for example. For that, EDB has transparent data encryption support for EPAS 15, so, if you want, the data on disk, at rest, is encrypted; whereas at the moment, normally, before EPAS 15, or with Postgres, what you do is encrypt at the storage class level, okay? In terms of protection, there's all the DCL stuff, you know, from SQL, so you can create roles that can only access some columns, and so on.
B: There's work to be done on transparent column encryption, for example to enable encryption even at runtime, you know.
A: Okay, so, essentially, just please enlarge it a little bit. Okay, perfect, okay, cool.
B: ...you know, a brand new OpenShift cluster. I'm not an OpenShift expert, okay, and it's already... if you want to talk about Postgres in general...
B: I want to... so this is, you know, the operator. We provide these four CRDs, so backups, poolers, scheduled backups, but the most important one, which I will cover here, is the cluster, okay? So I think I created a cluster in the postgres namespace, yeah, called gb.
B: ...a new cluster called fabio. We can add, you know, all the labels here. Here we say: okay, let's create it with just two instances, okay? Then you can provide the secrets. For example, in some cases we've got customers that, you know, maybe run this in an air-gapped system; they need to change the image name.
B: So all of this is possible, okay. And then they can put them in private registries, because, when I was talking about the immutable application containers before, we provide a set of requirements for the images, but basically what you can also do is create your own images, with, for example, your custom extensions and so on, and then put them in a private registry for some selected customers, okay. And, you know, we provide a way to specify the pull secret.
B: Then there is the storage for PGDATA. So let's say I want, I don't know, 20 gigabytes; this is primarily for what is called PGDATA, the main directory of Postgres. And I think since version 1.16, or 1.17, I don't remember, we added the separate volume for WALs; WAL files are the transaction logs. If we put them on two different volumes, we can actually parallelize operations and improve the vertical performance of the database, and, for example, you could use a different storage class, okay? So this is also...
B: ...a nice feature, and we will add tablespaces here in the future, with the same mechanism. And, for example, I would like to explore LVM. I know that Red Hat OpenShift has introduced support for LVM, I think from version 4.12, and I think it could be cool to actually test this, especially if we're going towards volume snapshot support, Kubernetes volume snapshot support. It means that, for example, with LVM we could use LVM snapshots to perform incremental and differential backups of the volumes.
B: Okay, so I think this is an interesting area for improvement. Then here you can specify the resources: for example, you can put the cores and the memory. And here you can specify the object store: for example, you could point to an S3-compatible object store; you define it here so that the WAL files are archived here and the base backups are also stored here. The base backups are these...
B: ...let's say, physical copies of the data directories, okay, yeah. Then we've got replication slots; I enable them, so basically I want replication slots to also be failed over in case, you know, the primary goes down. Then I think we're fine, so I hit create, and the fabio cluster is being created. Let me see if I...
B: So we see that the first one is already running, and now we are creating the first standby, because we just wanted one, okay. So essentially, the join method, what it does, you know: we have the new PVC and we initialize it by running a pg_basebackup from the primary, so copying, cloning essentially, the data directory, and then, you know, preparing it for the new pod to be started as Postgres. And yeah, this is running now, so I...
B: ...am magnifying that part, okay. So if I go here and I change the instances to three...
B: ...you see that, you know, it's scaling up, so the replica is being created. So now, maybe what I can do, once it's aligned, is... actually, let me briefly show the plugin, you know, the status command.
B: This is very nice; we also export it in other formats, but this tells you the version of Postgres you're running, PostgreSQL 15.1. Again, convention over configuration: you didn't specify anything, so the operator sets the latest version of Postgres available at the time the operator was released. The primary is fabio-1, the cluster is in a healthy state, and this tells you the LSN, the log sequence number, in the PostgreSQL...
B: ...history, basically. And these are the certificates that have been automatically created for you, without doing anything. Out of the box we have a separate CA for that specific cluster, which is created using the CA that we generate when the operator is installed. Cool.
A: So, sorry to interrupt you, but probably before leaving the session, there is a question from the audience, okay?
A: Yeah, Gabriele, please, could you provide some more details about the EDB additional layer for the migration from Oracle?
B: It's part of EDB Postgres Advanced Server, so it provides, for example, support for some libraries that exist in Oracle and have been made available also in EPAS. I'm probably not the best person to talk about this, you know, so I think, if you're interested... but essentially, yeah.
B: Yeah, it should be available from the EDB website, okay.
A: On the website, okay, yeah, definitely, it's there. So let me...
B: Essentially, we don't have that yet, okay, and I don't think we'll use CRDs. I don't know if you... so let me go here, the CloudNativePG docs: we're introducing in 1.20 support for role management, so declarative role management. So this is, let me enlarge...
B: This is our approach: we allow you to specify in the configuration the roles you want to add, or even, for example, if you set them to be ensured absent, you know, we remove those users, okay, so...
B: ...there will be a managed databases section in the future. At the moment we just create one database, the application database. So, out of the box, when we create a PostgreSQL cluster with this operator, you have a database called, by default, app, that is owned by the app user. So...
A: ...time has run very, very fast, so it's time to close, say goodbye to people, and get to the closing words. First of all, thanks a lot for being here.
A: ...for me an interesting topic. So, guys, see you soon in two weeks, as always, and again thanks for joining, thanks for your time, and thanks for having your morning coffee with us. As always, you can watch the recording of the session on YouTube. With that, have a great day, and again, see you in two weeks. Cheers.