From YouTube: Cloud Foundry Community Advisory Board [Mar 2017]
Description
Video from the monthly Cloud Foundry Community Advisory Board (CAB) meeting. Learn more at https://www.cloudfoundry.org/.
Agenda:
Calendar
- Now every 3rd Wed of the month.
Tooling for the call
- See Zoom info above and in the #CAB Slack channel
CFF update
- CF summit
Projects updates
- CAPI
- Diego
- BOSH
- CF Extensions
Community Projects
- Mantra
- Dingo PostgreSQL
Prepared questions from the community
A: It's the community call, and actually we just discussed that the calls are being recorded and put on YouTube. So you do have to be careful what you say, if you're worried about, you know, things being in the public record. My words, yeah, exactly. I don't think that's an issue for you; it's more for me. Wayne.
A: Nancy, you'll see some French here and there; I'm just practicing my French. All right, okay, so now we'll get started. I think, I mean, this is wonderful: we have a great turnout today, and we'll just get started. I think the first item on the agenda, since we have quite a bit of things, is something.
A: And it's working out well, so I don't see any objection, so that's going to be the way it is. The other thing I mentioned before we officially started is that now the calls are being recorded, and Wayne is helping move them to YouTube. So if you go to the YouTube channel you'll see the last call, so this call will also be on YouTube.
A: Exactly, and I say that for myself, because I've said some things maybe I should probably be careful about; and of course you represent your company, so what you say is also, you know, representation. It's on record, let's put it this way. You know what to do. So with that said, let's... let's get... oh, one more thing I want to say too before we start is that we now have quite a bit of presentations. So I've said this in the past, that we're looking for people to present, and, you know, Stark & Wayne basically seems to have, like, an unlimited number of things that they can talk about, which is fantastic, but we also want to make sure other people get a chance. So right now for next time we have two presentations from Pivotal, and this is very exciting, so we'll see how we can either work out those two plus one more, or we'll figure it out.
A: I guess I'll mention a couple of things that I know, since I'm part of the committee putting together the program for this summit. I can tell you that there are over 200 submissions; I believe so because I am reviewing 107 and I know there are more. I can't speak in more detail about the submissions, but this is very, very good. I mean, it's, you know, pretty much all I've been doing the past few days, kidding aside, but that's a fact. And what's interesting is the variety of the different submissions.
A: So this is good; the program should be very exciting. But beyond that I don't have anything else to say, you know, except that if you are a contributor (and obviously, if you're joining this call, you're probably a contributor) you can register for free. As before, ping me and I'll send you the code. I think they've already posted it, so if you go back in the history of the #CAB channel, you'll see it. So there is no excuse.
A: Obviously you have to travel here, and immigration has to let you into the country, so sorry about that. Otherwise, you know, join us. Any comment from anybody from CFF, or does anyone want to add something?
B: Let's see, this past month on Diego we've continued with various aspects of the work of extracting ourselves from having Consul as a dependency. So we're pretty far along on running route emitters locally on the cells, to prevent them from having to hold a lock in Consul. We're not quite ready to declare that non-experimental yet; we're getting a little more feedback from production environments at scale. And then there's still more experimental work as we move the other component locks into Locket, the locking service backed by the Diego database.
B: So that's not quite ready to go yet. And let's see, we've been doing some work to help us integrate with the GrootFS project out of London, which is going to be the new component that manages all of the image layers for containers on the cell, for Garden and runC. So that's nice to see progressing.
B: And we've also been integrating against the new Loggregator API with its gRPC interface. Again, that's something to opt into, and I don't think they've fully stabilized on their API yet, so that might not be ready to go yet, but you can at least, I think, send app logs and container metrics that way. And then we have a track to do a pass through the other Diego components to start using that new API.
A: Okay, very exciting. So, any questions for Diego, or I guess some of the things that Eric has mentioned here? No? Okay. So one other, I guess one other project, a core project, is obviously BOSH, and I know Dmitriy is joining, and he actually will be one of the ones presenting next time; believe it or not, I figured out how to get him to present. So, over to you.
G: Let's see, we are continuing to work on config server. The integration, I believe, is almost there; we believe then at some point we'll deploy a CF deployment with proper CredHub integration, instead of using the CLI functionality, which is really there mostly as a temporary workaround. So CredHub integration will soon be documented and announced, I guess, I believe, in the next major BOSH release.
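As a rough illustration of what's being described: with config server integration, manifest values can reference variables that the director resolves at deploy time, instead of the CLI's local vars-store workaround. A minimal sketch, with the job and variable names made up for illustration:

```yaml
# Sketch of config-server-backed interpolation; names are illustrative.
instance_groups:
- name: database
  jobs:
  - name: postgres
    release: postgres
    properties:
      # ((...)) values are resolved by the director through the config
      # server (e.g. CredHub) at deploy time, rather than being stored
      # in a local --vars-store file (the temporary CLI workaround).
      password: ((postgres_password))
```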
G: We've also been moving repos around. So for those who haven't noticed, the stemcell building stuff has moved into its own separate repo, and release notes for the stemcell versions will soon be moving there as well. Of course bosh.io will continue to be updated, but all of the stemcell builder bits are now separated.
G: We have, I believe, one remaining story to switch over the BOSH acceptance tests to use the new version of the CLI, and it looks like there are some emails and issues to answer as well. I believe that's pretty much the overview. There is, of course, a variety of other tinier things, like, you know, SAP's Marco proposing to add a set-disk-metadata CPI call, which will make identifying IaaS resources much easier. And there's, of course, some work between IBM and SAP to discuss a certain feature for OS reload; maybe Max can talk about that.
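For context, CPI calls are JSON requests handed to the CPI executable; a set-disk-metadata call would presumably mirror the shape of the existing set_vm_metadata call. A sketch only, with the argument layout assumed rather than taken from the actual proposal:

```json
{
  "method": "set_disk_metadata",
  "arguments": [
    "disk-cid-1234",
    {"director": "bosh-prod", "deployment": "cf", "instance_group": "database"}
  ]
}
```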
A: I think, for people that don't know: in SoftLayer we had a need for reloading the OS, versus just changing stemcells by recreating VMs, so that makes it a little bit better for SoftLayer. And it turns out that SAP has a similar need. So Marco, Dmitriy and I have been chatting about this, so you might see this coming up as well.
E: For CAPI, I don't know, I suppose we could highlight the work that we're doing to eliminate a number of the bridge components, in combination with Diego. That is progressing along; I think in probably CF release 255 we're hopefully eliminating all the ones that we've targeted, and so that secures the communication from Cloud Controller to Diego.
A: Yeah, I mean, I think I've seen this since I was, you know, working here at Pivotal for a few weeks, and pretty much all the teams (I guess I'm just echoing what you just mentioned) were all trying to do this big effort. I'll give a free plug for my friend Julz from IBM: he did a nice little blog post on Medium where he talked about the urge to go right in and try to just build complex stuff, and in some ways this is, you know, the wrath of that.
A: Anyways, the point here is that, you know, there's a big effort to remove it. You know, it doesn't mean that it's bad; maybe it has very good applications in some cases, but for our purposes it seems like it wasn't working out. So, okay, I don't know if there are any questions for pretty much, you know, the main core projects? No?
A: Yeah, all right, thank you. So, the first question; I'm not sure who wrote it, but I'll read it out. "Security is a great thing, obviously; however, there are a lot of problems with migrating to this sometimes, when there's a new secure environment. So what's the plan for easing the pain of mutual TLS upgrades?" I don't know; I mean, I saw that somebody sent an email on cf-dev where he discussed maybe some of that. I guess, you know, it's like going to the airport.
C: ...or a better way. You're aware of the fact that if you have a long-running production environment and you're upgrading these things, a lot of times there are little gotchas that are not in the release notes. For example, like the new DNS entries for the cells, the rep and stuff like that, and the auctioneer; like the, you know, "oh, by the way, here's your certificates, they'd better be set up for the cell rep service."
C: That's the .cf.internal one, not the old Diego auctioneer address or whatever there used to be. And then suddenly your production cells are all drained on a live system, because the auctioneer can't auction anything, and it's like, you know, the auctions are failing and stuff like that, and then you turn white and your heart is like, yeah, yeah. And then the logs are great for developers, but the logs are so vague that you have no idea what the root issue is; you need another eight hours to figure out: oh, it's because of that one point.
B: No, on the Diego team we've actually been very careful to document those steps in the release documentation, both the metadata that you need in those certificates (there's a section on TLS configuration) and there are separate documents on how to upgrade, say, the auctioneer and the cell reps so they have secure communication.
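One practical mitigation for the gotchas described above: before rolling new mutual-TLS certificates through a long-running environment, verify them offline. A minimal sketch using standard openssl commands; the file names are illustrative:

```sh
# Check that the new certs actually chain to the CA you deployed:
openssl verify -CAfile diego-ca.pem rep-server.pem auctioneer-client.pem

# Check that the server cert's SANs cover the *.cf.internal DNS names
# the components will dial after the upgrade:
openssl x509 -in rep-server.pem -noout -text | grep -A1 "Subject Alternative Name"
```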
A: As a matter of fact, I'm glad you guys are mentioning this, because, and he's not here, but I've confirmed with him multiple times: David Sabeti, who is the PM for cf-deployment. I've run into him multiple times and I've asked him, hey, do you want to present? And it was like, not quite ready. And the last time I saw him, he said, I might be ready. But I know, I know; so I will make sure I track him down, yeah.
C: Clearly this particular pain point is for long-running systems that were running cf-release, trying to migrate to the new deployment style just yet, and between the release notes on cf-release's releases and where the other documentation is, it's not all together, clearly. So there are really a lot of people finding a lot of pain and asking a lot of help questions on those things because of that migration process.
E: One of the big milestones for the release-integration team, and David Sabeti's in particular, is to migrate PWS from using cf-release to cf-deployment, and we're hoping to get a lot of learnings from that on documentation. At this point I don't believe we would recommend switching over from cf-release to cf-deployment until all of that's really ironed out and we've got great processes around that. Also, I know David Sabeti is trying to be a little more rigorous about asking teams to update their release notes before publishing any of the final releases.
F: For the PMs that push the button on cutting the GitHub releases: that's when GitHub sends the emails out, and I know it's easy; I know, it's my own pipelines and it's lazy, yeah. You put a blank template in there that says "hey, fill all this in", and that's what gets cut, and then that's what goes out in emails to all the watchers, and then you've got... yes.
F: You can do that with the GitHub resource on Concourse, yeah, okay, yeah. Certainly the email aspect of GitHub releases is my use case for finding out what's going on.
C: That little nugget right there that you just said about cf-deployment, that the migration from cf-release to cf-deployment is probably not a good idea to try yet, that should be communicated a little more broadly, because the default response from a lot of people is: well, why aren't you using cf-deployment? You should just be using cf-deployment. And it's like, oh, hold on. First of all, what is cf-deployment and why should I be using it? I thought, like, this is a little confusing. Oh, that's what it is!
B: In general, cf-deployment is not yet intended to be GA, and they're getting it to the point where they have a more intentional release policy around it, and documentation about upgrading between whatever they consider the versions to be, so that you can use it and rely on having a long-running, stable GA deployment of CF. But it's not there yet, yeah.
A: As I mentioned, I'm definitely going to track David down, and he already agreed to present, so it was more a matter of when. And I think, from what the team is saying here, he wasn't quite ready; I think that's sort of part of the reason he hasn't presented. But hopefully next month we can steer a presentation from him.
A: So we'll feed that back. And I guess, as a near answer to the question about the projects, I'll mention a few things about CF-Extensions. So one of the things that's been happening is that the number of submissions to me, from people that want to have their projects in CF-Extensions, has started increasing. Obviously, sometimes I redirect projects to the community if they're not looking for process or help and stuff like that. So you'll see that; so I would ask everybody else...
A: If you have a project that you are considering adding to Extensions, let me know. On the last Extensions call there were not a lot of people joining, except for a few of you and myself, so we kind of covered what was there, and, in fact, you know, in some ways, what projects do we accept and stuff like that. So if you have opinions about this, make sure to join subsequent calls. The next call I'm canceling, because I'm going to be on vacation, but we'll have one after that.
A: All right. So I want to give as much time to the two presentations as I can. As I mentioned, I'll repeat again: if you have a presentation and you haven't presented, especially if you're part of the community and you want to discuss it, let me know. Obviously, as we start getting a lot of these, I might have to prune them so that we get, you know, maybe the two presentations that would have the most impact to the community first, and we're starting to have quite a bit.
A: So just let me know, ping me, and then we'll discuss it. So for today we have two presentations, one from Altoros and one from Stark & Wayne, and we'll let Dr. Nic go last, since he probably will also finish the call well. So let's first go with Altoros; I think Alex is there. He's going to talk about project Mantra. So you can share your screen if you're presenting.
H: It's good to see you all here. I was very glad to be invited to this call, and I'm going to tell you about a tool to transform manifests that is called Mantra. So Mantra stands for manifest transformation, and it is used to work with BOSH manifests. The story began when we had a manifest with 5000 lines, which we needed to transform to use in Concourse, and I came across a very interesting project by Dr. Nic that was called "make me spiffy". This project was able to split a manifest into different parts, into several sections, and I liked it.
H: You have a releases list, and these releases are pinned to numbers, but you want all releases that start with "cf" to have the latest version. So you can call the command that will do this: you set the manifest file, you say that the JSON you want to merge is version equals latest, and you set the scope, a path to the releases you want to update, and it updates all releases that start with "cf". You can find more examples of using this tool in the repository.
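As a rough illustration of the merge operation being described here; the flag names below are invented for illustration, so check the Mantra repository for the real syntax:

```sh
# Hypothetical invocation: pin every release whose name starts with
# "cf" to version "latest" by merging a JSON fragment into the scope.
mantra merge \
  --manifest cf.yml \
  --json '{"version": "latest"}' \
  --scope '/releases/name=cf*'
```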
C: Cool. So, are you able to easily add in the migrated-from stanzas? Is that part of the... you know, I'm asking because, for the BOSH manifest v1-to-v2 switch, if it's a live environment you're probably also going to want to add in the migrated_from stanzas. Do you have to go manually add those in, or is there one of the hooks you've got in the manifest transformation that allows doing that easily? Yeah.
H: So there is a thing that is used to extract all the config, for instance, from a manifest of version one, and that was the main task for this tool when it started. Okay.
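For reference, the migrated_from stanza being discussed is standard BOSH v2 manifest syntax: it tells the director that an instance group inherits the VMs and persistent disks of old v1 jobs instead of creating fresh ones. A minimal example, with illustrative names:

```yaml
instance_groups:
- name: postgres
  azs: [z1, z2]
  migrated_from:
  - {name: postgres_z1, az: z1}   # old v1 job names being absorbed
  - {name: postgres_z2, az: z2}
```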
F: No, I have no answers to that; that is all highly private internal information. Join Stark & Wayne and you can find out what happened with the dingo. No: this is a Postgres solution, sort of, you know, a batteries-included, BOSH-based service broker, and there's a dedicated org just so we can keep all the repos together. There's a primary release that you can conceivably start with, and there's a collection of repos that represent the next version. So if you ever want, I can sort of walk through the code.
F: Just let me know and we'll do that. The core idea is, you know, having worked on many client projects, from huge ones like GE downwards, you just get this feeling that service brokers are trivial; that, you know, how hard is it to run Postgres, or whatever? And, you know, the service broker API isn't complex, and it doesn't really, even five years later, expose many options or much functionality; you know, there's no backup-and-restore hooks or anything. And really, the service brokers...
F: ...you see pretty much take that "how hard can it be" and make really trivial brokers that don't do a lot. But it turns out, really, that after running the thing, everything else is really hard. And I went looking for my favorite visualization of "how hard could it be", and it's the owl picture; that's pretty much how most people approach building service brokers: how hard can it be, you draw some circles, and then you just draw the rest of the owl. But it turns out...
F: ...you know, if you really want to build service experiences that match the complexity and the feature set and the horizontal scalability of Cloud Foundry, it is hard. But also, not just that: there are a lot of exciting ideas and features that you might want to add, and just because the service broker API doesn't expose them doesn't mean you shouldn't be offering them to end users.
F: So part of this was me trying to vent and express ideas that we never really shared, and also this is the experience that, as a Postgres user, I needed. So I'll share a couple of ideas. These are the assumptions, when it comes to building service brokers, that I bring from what I've seen, from two years at Engine Yard, where we had two thousand MySQL databases for...
F: ...that is, two thousand businesses running, and then, through all the customers we've had, the tens of thousands of databases and apps or whatever. And it's that, unlike, say, Diego or the runtime and Kubernetes, which are stateless environments, databases always grow. They're like small children: they want to take over the world, and they want to take all your stuff, and you have to look after them differently.
F: In 2017 we have Google, we have Stack Overflow, we have good posts, great documentation and YouTube videos, and, for the most part, you know, devs can be self-sufficient. And so we kind of want to rotate the responsibility of DBAs from looking after single databases to being the overarching curator of hundreds and thousands of databases, and just open your mind to what the goal is.
F: The Heroku Postgres team, as a body of humans, hasn't really grown substantially over the years, but a few years ago they went over a million databases that they managed. So, as always, they're an inspiration to our profession, and I'm certainly heavily excited by trying to achieve what they achieve: shared infrastructure. As much as the DBAs of old were ones for dedicated machines, because you get high performance there, we all know that there are other goals that we have when it comes to platforms.
F: We're going to give up a tiny little bit of performance to be able to get a lot more orchestration, but we're still bringing in shared secure access, which is another story we don't really have a good story around in Cloud Foundry: you know, what's the DBA's role and responsibility? How do we let them through Cloud Foundry to be able to do their job? We sort of say, well, they get BOSH and they can touch everything, which is, you know, not the most accurate story.
F: So, in fact, the most important story of every service broker and every database you ever do, and it's the last point here, if you can believe it, but really it's the first point, is that before you share it with anyone, all the operators should know how to do disaster recovery. And with more time I would tell you my own personal stories of deleting customer data and feeling really bad about it.
F: There's no good story to tell that user about what happens, and of course there are two levels of people affected: the developers, who feel bad, and then the end users, whose web app now doesn't work. So we're not working with plans at the moment; it's until we have a better idea of why you would have a plan for a database that only wants to grow. Finally, briefly, how it works: it's obviously deployed with BOSH, but it's container-based.
F: I'd love to meet you; I'm in SF tomorrow, so we can hang out. If you don't want to hang out, just think of the word. So then traffic comes in, inserts; it's asynchronous replication in its current incarnation. So after a while, enough inserts, enough edits, a sixteen-megabyte WAL segment gets forwarded off to the replica; also, after a timeout, like a ten-minute window of timeout, chunks will get sent off, and so replicas get replicated to.
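The 16 MB chunks and the ten-minute timeout map onto standard PostgreSQL WAL archiving settings. A sketch of the relevant postgresql.conf knobs; the archive command is a made-up placeholder, since Dingo's actual shipping mechanism isn't shown here:

```
# postgresql.conf (9.x-era settings; values illustrative)
wal_level = hot_standby                 # WAL detailed enough for replicas
archive_mode = on
archive_command = 'dingo-wal-push %p'   # placeholder for the real shipper
archive_timeout = 600                   # force a segment out at least every
                                        # 10 minutes, even if 16 MB isn't full
```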
F: You can use SSH to send archives to a local on-prem machine, and when we know that we have that for every single cluster, we can do some fun tricks, as well as the incredibly fun trick of disaster recovery. And so one of the fun tricks we can do is create new database clusters built on a clone of the backup of another one. And this is space-scoped, because that's the best thing I can think of doing for authorization permissions, for who's allowed to do it.
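From the developer's side, cloning presumably rides on the service broker's arbitrary-parameters support. A sketch only; the plan name and the clone-from key are assumptions, not confirmed Dingo syntax:

```sh
# Create a new cluster seeded from another instance's backup archives;
# works only within the same CF space, per the scoping described above.
cf create-service dingo-postgresql cluster orders-db-clone \
  -c '{"clone-from": "orders-db"}'
```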
F: It kind of has two stories to it. One is it allows a developer, or the entire development team, to more thoroughly test the upgrade of production in advance of actually touching production, because now you can get a copy of production, you might then sanitize it, and then you can test your upgrade, and you can test the blue/green deployment through the various phases: you can test both blue and green running against the old schema, and then blue and green running against the new schema, and check that it all works.
F: Perhaps before you find out what that looks like in production. And the other obvious feature is that developers can actually do their own disaster recovery if they accidentally delete things. The way that works, because we have these isolated archives for each database, is we just copy the archive; it's pretty simple. And then, when the new leader wakes up and finds out that he is the leader but there is an archive, it automatically kicks into disaster recovery mode, recovers the database, and then, when the replica comes up, or when the next one comes up, it finds out that it's a replica and it starts replicating. So it's a really powerful feature, built on the back of the same idea of containers that know how to recover themselves. It's Cloud Foundry: so we create a new binding, therefore we get a new port, new traffic, and you might use that particular clone for minutes or hours or days or whatever; it might be a reporting database, it might be whatever.
A: One quick question, what do you call it... I guess what I'm trying to understand is: is it more of a hot standby, you know, master/slave? Or are you actually trying to, like, have multiple replicas, of course, across more than two AZs? The reason I ask is because, like, what happens when my connection went down and then there are two replicas and they had different, you know, sync state; which one do you use, you know?
F: Things like that. This is a pretty shared question, because of the number of times I see that question, so I appreciate your segue. Now, high availability: obviously, thanks to Dmitriy's team, the entire BOSH universe replaces machines every week, and so we can't be at all oblivious to that. And then there are going to be environmental losses, when people at Amazon use their power tools to take machines down, and so, yeah, we obviously have the replicas. A replica is not just a passive replica.
F: It's constantly waiting until it gets its turn to be the leader. And so, if the machine and the line disappears, there is a period of time where we allow it to come back, and that's why, I believe (and it can be argued against), the difference between high availability, where you've got an asynchronous pair, versus your durability of data is important to understand. You kind of want to wait for that leader to come back, because it is the best-qualified leader: it has all the data. And so, for the most part...
F: ...you know, we try to wait for that. The replicas will all give it some sort of timeout; we allow it to wait for it to come back before they trigger. But eventually, if you've got multiple replicas, they'll sort of compete based on best qualifications, you know, and so eventually it'll fail over; the routing table will get updated.
F: It's a solved problem in the Postgres universe, in that they coordinate through etcd; I mean, how do you make a coordinator? Among other things we're packaging etcd, and so they sort of, you know, know each other's current status, of where they're up to. It's a good question; it's a part that I wouldn't claim to know the most about, exactly how Postgres does that by default. And we really only sort of support, we don't support more than two-node clusters; we don't really advertise how you get a bit more. There's a flag...
F: ...you just pass through an extra parameter and you can have a ten-node cluster. But in the simple case, obviously, the other replica is the best qualified there. It is part of Postgres that the replicas keep track of where they're up to, and then the agent that sits orchestrating on each node, sorry, on each container, is able to figure out whether it's the most qualified.
F: So, thanks to the wonders of BOSH, we actually bring this back. So if you've got familiar with the advanced functions of Diego, then you know too much; we don't do anything like that, it's actually super simple, because it's hard to make assumptions about state and what availability we have. So if there was space on that host machine for that container before, then there's space for it now, and so when that machine comes back, that container will be restarted. The container will sort of go through a role reversal.
F: It will discover that it's no longer leader, and through a sort of STONITH, "shoot the node in the head" process, all the state data is deleted off the machine, and it fetches sort of a new replica, either from the archives or from the master, depending on some characteristics. And now you've got a healthy cluster again. Healthy, but you've probably lost some data, though; that's awkward, and no one really wants to talk about it. But that's the downside of asynchronous replication.
F: I'm completely open to solutions; it's just that I haven't, the team hasn't, had a great idea for how you do that in the generic case, not knowing exactly what everyone's going to have on their install base, so I'm super interested in ideas. Obviously, you can run multiple of them, because they're independent from each other, a bit like the gorouter system: you can run as many as you like, they're all independent; you then need a load balancer in front. So hopefully we can come up with a sweet suite of ideas.
F: There is an entire experience that you should be thinking about, and perhaps this comes from my own personal experiences, but then I'd rather you benefited from my personal experiences than have each and every one of you go through disaster, or not recovering from disaster. So I strongly hope that each of you, when you build service tiles, service brokers, service, you know, releases, that you have first and foremost in your mind: what is your embedded disaster recovery story?
F: So one thing we suggest to everyone that deploys this is, after you play with it a little bit, to go through the disaster recovery tutorial, and that pretty much means running delete-deployment on it, taking everything down, and then going through and running the disaster recovery errands, and feeling the confidence of watching it work. I think, you know, there are a lot of moving pieces in a complete Cloud Foundry experience, and feeling increasingly confident that each part works, or that you know how to check it when it's not working...
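In 2017-era BOSH CLI terms, the tutorial flow sketched here looks roughly like the following; the errand name is illustrative (use whatever the Dingo docs actually call it), and it should only ever be run against a sandbox:

```sh
bosh delete deployment dingo-postgresql     # take down broker, router, etcd, cells
bosh deploy                                 # bring the empty machines back
bosh run errand recover-service-instances   # reprovision what CF still expects
```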
F: So if you can't quite imagine what's going to happen: we're going to delete not just the containers but obviously the broker and the router and etcd; we're going to delete all the machines. We use the word "cell" just sort of synonymously with Diego, even though it's implemented differently. So we take away everything. Obviously we keep the archives; that's really important.
F: Don't do this in reverse; don't delete all the archives and think that you can... actually, it would probably just reproduce all the archives, so it might actually work to rebuild everything and bring it back. Then, with the empty machines, we bring back the router and the broker, and then you run the errand, and the errand is fundamentally simple: it asks Cloud Foundry for the list of service instances that it expects to be running.
F: There may be archives for thousands and thousands of historical databases, but you only want to bring back the ones that Cloud Foundry expects, and so it just reprovisions them. They wake up with the same identity they all contained; as it goes down the chain, the containers wake up and realize "I already existed once, I'll do the disaster recovery"; then the replica containers wake up, they realize they're part of a cluster, and they start replicating, either from the leader or from the archives themselves.
F: It's a simplistic security model here, just a different bucket, different credentials, and those credentials are never exposed to any of the databases. So if there is a vector of attack through Postgres, they won't get access to all the passwords and other material. But obviously, over time, as we have things like Vault and CredHub, we'll have different ways to do some of this stuff, and so I'm pretty excited about improving the security model over time, but yeah.
F: A lot of the implementation is going to be Postgres-specific, but the idea of having a run-errand disaster recovery, you know, I think it should be a standard that people should go down, or at least document it. Even if it's not as simple as that, document: how do you do disaster recovery? It went all crazy; how do I get it back?
F: Sorry; if you're ever wondering, if you ever have to argue about Cloud Foundry versus OpenShift, you can always argue that OpenShift doesn't have Dingo. One: if you bind it, what Dingo can do for you in terms of, you know, being there... Look, I went and looked at the Deis website, in part because my former employer sort of went and bought it, and it's still this magnificent website discussing their wonderful platform, and not a cracker of a mention about durable services.
C: It's just the same as any other Postgres database with SHIELD. You essentially enable it as a target and the source, right, source and target, and you add a schedule and retention plans, whatever, and you have the backup. There's no automated way at this moment to just back up every single database on Dingo, but it is possible if you need that. So Dingo handles the point-in-time recovery scenarios and stuff like that out of the box itself, and you don't have to worry about those things, but yeah.
C: Yeah, for, like, compliance reasons or mandates or policy or whatever, then yes, SHIELD absolutely works for that. It's just a Postgres plugin, basically, and you just configure it per database to do that, and it would not be too challenging to automate the addition of new, like, so your detection of new Dingo clusters.