From YouTube: Cloud Foundry Community Advisory Board Call [July 2018]
Description
Agenda found here: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit#heading=h.mewn94q54hla
B: We are, in fact, at OSCON; that'd maybe be the first point to make. If anybody here is at OSCON, feel free to swing by the booth; they'd love to see you. Swarna and Melissa from the foundation are there, and there will be various members of the community volunteering to help us at the booth as well. There's actually been some pretty decent CF content going on there.
B: Okay, our own event: we've got Basel coming up. We launched the schedule, so that's live; more talks will start showing up over the course of the next month or so, but of course the schedule is a really good mix of end users and interesting technology. I think the schedule turned out really well, and we want to thank all the track chairs, who spent quite a bit of time reviewing hundreds of submissions and narrowing them down. Some amazing material was proposed, so that should be a lot of fun.
B: Lorna tells me to remember to tell everyone that early bird registration ends on the 18th, which is today. So if you want the discount you should do that, although if you're a contributor there's our usual contributor code. If you have any questions or changes, or if you're a speaker and you need to get in touch with the events team, use speakers@cloudfoundry.org. I'm reading from a list, obviously. Let's see, the other thing was that we put out a member survey.
B: This was sent out to the points of contact that we have at each one of the corporate members of the foundation, and we're looking for feedback. We do this about twice a year. So just a reminder: if you happen to be the point of contact, or in particular if you're at a small company that's a member, just make sure we get your feedback. That would be great, because we do better if you tell us what we can do for you.
B: The last point I'll make is a bit of PMC lead news, with Dimitri choosing to step back from the PMC lead role for BOSH. Number one, he's going to be missed; he's been the stoic leader at the back of the BOSH project for many years. Now he's moving on to do some different things within Pivotal, and he has nominated Marco from SAP, who's been the project lead of the European contingent of the BOSH project for quite some time, to take on the PMC lead role. But nominations are open: if anyone who's been active in that project wants to be considered, you can email me directly, and you can see more details about this on the BOSH mailing list.
A: The hackathon, excellent. On the hackathon, this is just to remind people who haven't participated: when you register there's a little checkbox that says, hey, do you want to register for the hackathon. If you're not sure, we would suggest you just say yes; you can always not attend (we would rather you did), but the other thing is that if you don't check the box, we don't know how many people are coming, so it's hard for us to plan. So please check the box.
A: I would say just check the box if you can. If anybody doesn't check the box, still swing by. Obviously, the way hackathons work, you put a team together and you try to build something cool; remember, last time the winner did Ethereum stuff, and it's actually going through Extensions (we'll talk about this later). So there are lots of advantages; consider participating.
D: A few things that have been really exciting or interesting on the CF Application Runtime front recently: the routing team posted a proposal to cf-dev about the user experience for weighted routing. So certainly, if you have any feedback or thoughts on that, I think they'd appreciate it.
E: We take that quite seriously, and the proposal is on cf-dev; we've just started on it, but feedback is very welcome. We're finding our way through this quite iteratively and with lots of communication, so any feedback about anything we've missed, or any suggestions, is very welcome.
A: I'll ping him, I guess; is he here? No idea. I know that they are having a talk at Basel, sort of the updates for Open Service Broker, and I also know, just because I talked to Doug yesterday, that he was in the Bay Area to discuss what the report is around Open Service Broker. So I'm glad there are some updates; if you're interested, we'll definitely make sure that they attend, or send me the material, for next time.
F: I wasn't going to do an introduction; I was telling Morgan to do an introduction for Morgan, but, well, here it is. Morgan Fine is one of the new team members joining as a PM on BOSH, and I believe he is here. I don't see Fred, who is another PM that is joining on the Toronto side. But Morgan, do you want to say something?
E: Yeah, so there's now a xenial stemcell line that's out. We're trying to encourage everyone to start testing it, give us feedback as quickly as possible, and start moving forward to it. It should be up on bosh.io quickly as well; or it's there already, sorry. So definitely, please start testing it and let us know how it's working for you.
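The xenial line E mentions can be pulled from bosh.io with the standard CLI. A minimal sketch, assuming the bosh.io naming for the Google KVM xenial stemcell and a placeholder director alias; pick the stemcell matching your IaaS and pin a version in practice:

```shell
# Grab the xenial stemcell for a GCP director; swap the name for your IaaS.
# The director alias "my-env" is a placeholder.
STEMCELL_URL="https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-xenial-go_agent"
echo "would upload: $STEMCELL_URL"
# bosh -e my-env upload-stemcell "$STEMCELL_URL"   # needs a live director
```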
E: I think the xenial testing is the main thing that we're interested in. We're continuing on some of our other stuff: we've done a few patch releases on BOSH 266, and there were some links issues there, so people may want to update; otherwise we're just continuing on those tracks. One other thing we're working on right now is improving BBR integration for backups on the director side. I think that's about it.
A: I'll mention one thing. I think I've said it publicly and I'll say it again: Dimitri has been really phenomenal on BOSH. We all know that; we don't always acknowledge it, or we do it privately, but I'll do it publicly and embarrass him. He is a fearless leader, a fearless proponent of BOSH and open source and of showing what's critical here, and we're going to miss him a lot. But the good thing is he's actually right here; I know where he sits.
A: All right, cool. So we'll do the last thing and then we'll go to the talks. For Extensions, the main highlight is that we have a proposal out. I believe it's the first time, even as part of this year, that there is a project that sort of merges, or marries, or connects Cloud Foundry with this hot new thing called blockchain; maybe not so hot anymore, but certainly people are spending money on it, or losing their money.
A: That's another way to say it, anyway. The point is that this proposal was submitted. It's from the winner of the hackathon in Boston: the team, in particular Nima Fabiani from IBM, and the rest of the team. They're looking forward to your feedback; they've submitted the proposal, and they're very serious. Yesterday they were working on more stuff, so you'll see a lot more things, and they'll have an inception and all of that, but right now we need you to take a look at the proposal.
A: Take a look at the video from the last presentation and give us your feedback, because part of this open process is to allow you to mention what you like and what you don't like, and to give feedback on the direction. We're going to try to do a vote soon, so you don't have a lot of time.
A: Okay, so take a look at it. I'll put the link in the notes once I get a chance. And Nima is actually local too, so if you want to talk to them you can do that; they're part of the Diego team, so you can look for where Eric and the rest of the team are and you'll find them. Okay.
A: So that's the main highlight for Extensions. I'll stop here and see if there are any questions on any of the things we've discussed so far, and then I'll ask Ashish to start getting ready for his presentation, which is going to be about Service Fabrik and some of its updates. After that we'll go to Stark & Wayne for SHIELD v8. So, Ashish, you can get ready.
G: Great, okay. So we'll start with Service Fabrik. There are a few updates on Service Fabrik, and then there is a demo lined up. Today I have a couple of committers with me who will take us through the demo; they actually work on Service Fabrik, so they will go ahead with the demo. The agenda: we'll quickly have some project updates, followed by the demo. So, coming to the CAB call for the month of May.
G: That is where I actually presented the development plan for Service Fabrik. These were some of the items which we planned, at least through the next CF summit, and now, since we are almost at the midway point, I think it really makes sense to present the status of each of these items. The first one is Service Fabrik 2.0. This was also presented during the Boston summit, where we spoke about our plans to evolve the Service Fabrik design.
G: The Service Fabrik 1.0 design was quite opinionated, so the plan is to move it to a more modular design, where onboarding other projects (for example BBR) or bringing in other provisioners becomes seamless. The development for this is still ongoing, as it is a major change for us, but by the time we get to Service Fabrik 2.0:
G: In addition to what we have today as part of the deployment, we will also end up with CRDs backed by etcd, and the Kubernetes API server will also be utilized to implement the controller design for Service Fabrik. Development is incremental as of today, and currently we have been able to at least bring in some of the components: etcd and the Kubernetes API server. In addition to this, some of the implementation of the locking part has also been moved from Cloud Foundry over to etcd. Along with that, some of the backup controllers have been implemented, and the Service Fabrik client for the API server has been completed. Now the major part is rewriting some of the controllers: today we provision BOSH based as well as Docker based services, so those controller implementations are ongoing as of today. Then there are some operation controllers which still have to be written, basically the bind controllers and the backup ones, and we are quite hopeful that this should be done before the Basel summit.
G: So that's about Service Fabrik 2.0. The next one is Service Fabrik HA and multi-AZ. This was another item which we proposed, and it was proposed for all the IaaSes: AWS, Azure, GCP, as well as OpenStack. As of today, the development is completed for all of them except Azure, but we are currently keeping the GCP and OpenStack implementations under observation, to run some tests and make a note of the behavior. Today we plan to present a demo on this, and during the Basel summit there is a session planned where we'll talk about the nitty-gritty of implementing HA and multi-AZ for a stateful broker, covering all the IaaSes, and then again there will be a demo.
G: So that's about Service Fabrik HA. The next is rate limiting. The primary motivation for this story was to address some of the limitations against BOSH: for example, giving a fair share to the various lifecycle operations, being fault tolerant towards BOSH unavailability, and not putting an indeterministic load on BOSH. This story has already been completed and is available for people to use. The next one is deployment hooks.
G: This is where Service Fabrik today provides the capability for service owners to define hooks for the lifecycle operations: for example a pre-create, a pre-update, a pre-delete, or even pre-bind and pre-unbind hooks. This implementation actually runs in a secure, isolated manner, allowing only some specific syscalls; so we do restrict some of the calls.
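As a sketch of what such a hook might look like: the argument convention below is an assumption for illustration, not Service Fabrik's actual contract; the talk only says hooks run sandboxed, with syscalls such as network calls blocked.

```shell
# Hypothetical lifecycle hook: the platform would invoke something like
# this with the lifecycle phase; network access is blocked by the sandbox.
run_hook() {
  phase="$1"; instance="$2"
  case "$phase" in
    pre-create) echo "checking quota before creating $instance" ;;
    pre-delete) echo "archiving state before deleting $instance" ;;
    pre-bind|pre-unbind) echo "auditing credentials for $instance" ;;
    *) echo "no-op for $phase" ;;
  esac
}
run_hook pre-create demo-instance   # prints: checking quota before creating demo-instance
```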
G: The network calls, for example, are restricted. During the CF summit in Basel there is another session lined up for this, where we will talk about how we have implemented it so that the workloads are run in an isolated, secure manner. So that's another session, which will be presented by our committers. The last two items we are yet to get started on, but more details can be found in the trackers. And now, over to the HA demo.
H: Yeah, so the motivation behind this: so far the Service Fabrik broker was a single VM, so that was obviously a single point of failure, and it was also not DR (disaster recovery) compliant. One VM was running in a particular availability zone, and if that zone or data center went completely down, the broker would not be available. Also, updates of this solution used to incur downtime of around one to ten minutes.
H: When a package or code update happened, and then a stemcell update on top of it, it could take quite a few minutes, around 5 to 10 minutes. So, to address all these issues, we came up with an approach where the Service Fabrik broker has two nodes, basically in a master/slave model, and it also gives zero-downtime deployment, in the sense that failover happens while one node is being updated.
H: The failover happens in a few seconds; in the worst case around 10 to 15 seconds. Also, as I mentioned, the two nodes are situated in two different availability zones, so that makes Service Fabrik HA. The backbone of this master/slave architecture is keepalived, which figures out which node is the master and which is the slave; based on that, the master serves the requests at any point in time.
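keepalived elects the master via VRRP priorities plus a tracked health check. The probe below is a sketch of the kind of script a `vrrp_script` stanza might run; the real Service Fabrik check is not shown in the talk, so the process name is a stand-in. Exit status 0 keeps the node master-eligible, non-zero lets the standby take over.

```shell
# Sketch of a keepalived-style health probe: exit 0 = stay master-eligible.
broker_alive() {
  pgrep -f "$1" >/dev/null   # a process check; an HTTP ping works too
}
if broker_alive "sh"; then   # "sh" stands in for the real broker process name
  echo "process found: node stays master-eligible"
else
  echo "process missing: VRRP priority drops, standby takes over"
fi
```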
H: We have proposed this whole model for GCP, OpenStack, and AWS, and Azure is still in discussion. So this is a block diagram of the HA cluster: here is the Cloud Controller, and there are two VMs for the broker, VM1 and VM2, in two different zones. One VM acts as the master, and the other acts as a backup, or standby, node. There is a highly available IP, or HA IP; in the case of GCP it is an internal load balancer IP.
H: When failover actually happens, the traffic gets forwarded to the second VM, the one that was the standby VM at that point, so it becomes the new master, the traffic goes there, and the requests start getting served by that VM. That happens in ten to fifteen seconds in the worst case; usually it is around four to six seconds. So, for this demo, we'll quickly set up, and we'll be showing the basic HA setup for the multi-AZ cluster.
H: Then there's one failover scenario where we will bring the broker process down (only that portion, as you know), and then a Turbulence-driven scenario shown from the GCP console; then we'll see a downtime measurement. So I'll hand over for the demo, and then we'll be answering your questions. Over to you.
I: I'll go ahead with the demo. First, the setup. As you know, Service Fabrik is a BOSH-based deployment, so first I will show what our deployment looks like. These are the VMs Service Fabrik is deployed with; as you can see, one of the broker VMs belongs to zone one and one to zone two. So basically the deployment is spread across multiple VMs, so that if one of the nodes in one of these zones goes down, the other node is able to come up and still serve the requests. On my screen, on the left-hand side you can see one of the nodes, and on the right-hand side you can see the other node. On the left-hand side we have the master node, which is present in zone one, and on the right-hand side I have the slave node, which is present in zone two. So I will showcase a very small demo of one of our use cases.
I: The use case is create-service-instance, which is done through the broker process running on one of the nodes. So first what I will do is a simple tail of the logs, so that we can see which request is coming to which node, and we know which one is active.
I: So, for example, these are the nodes. On my left-hand side you can see some of the logs are moving, because some activity is going on. Now what I will do is run a simple catalog operation and hit the catalog endpoint on the broker. You can see that the logs started responding on my left-hand side, which means that that node is active and serving as master right now. So, moving on to one of the operations that we want to showcase here.
I: For example, I will create a blueprint service instance here. Before doing that, one thing to showcase is that, since this is a GCP landscape, we are using the Google load balancer, a TCP load balancer, balancing between the two nodes of the Service Fabrik broker. So this is the load balancer that we have; this is the GCP console, and if I go into this load balancer, we have two nodes.
I: As I was showing you earlier, one of the nodes is the master and the other one is the slave, and the master is now showing as healthy. The health check that we have put up here is such that, if a node is the master, it shows as healthy to the load balancer, so that the traffic is redirected to it. Coming back to our demo: I'll now fire a create-service-instance, and the call has gone to the master node. Now what I will do is simulate a failure use case.
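Stepping back for a moment: since only the master passes the health check just described, the load balancer should report exactly one healthy backend at any time. A quick CLI check, with placeholder resource names, assuming `gcloud compute backend-services get-health` output containing `healthState:` lines:

```shell
# Count backends the Google LB currently considers healthy; expect 1.
healthy_count() { grep -c 'healthState: HEALTHY'; }
# gcloud compute backend-services get-health sf-broker-backend \
#   --region=europe-west1 | healthy_count        # placeholder names
printf 'healthState: HEALTHY\nhealthState: UNHEALTHY\n' | healthy_count  # prints 1
```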
I: For example, let's say I bring down all the processes on this node. This could happen in one of the scenarios we discussed: a rolling update, a VM getting restarted, or other outages such as an AZ failure. Even if we hit one of those scenarios, the other node should take up the traffic, and we should shortly see that the other node picks up the traffic and starts serving.
I: As you can see now, the other node has started picking up the traffic. Basically, the Cloud Controller does a last-operation poll on the broker, and since we have already stopped the processes on the master, the slave has picked up the traffic and the operation is continuing. So if I just watch this creation process, it should succeed.
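The Cloud Controller polling mentioned here follows the Open Service Broker API's async pattern: the platform repeatedly GETs the instance's `last_operation` endpoint until the state leaves "in progress". A minimal sketch; the broker URL in the comment is a placeholder, and the loop is exercised with a stub instead of a real curl:

```shell
# Poll a state source until it is no longer "in progress", then print it.
# In real life the command would be a curl against
#   $BROKER/v2/service_instances/<id>/last_operation   (placeholder host).
poll_last_operation() {
  while state=$("$@"); [ "$state" = "in progress" ]; do sleep 1; done
  echo "$state"
}
poll_last_operation echo succeeded   # stub in place of the curl; prints "succeeded"
```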
I: Nevertheless, even though one of our nodes is down, the other node is still up. Similarly, if we go back to the console, as we were discussing a while back: now that the other node has become the master, we should shortly see the switchover. As you can see here, the node which was healthy has now become down, or unhealthy, and the other one, which was unhealthy, has now become healthy. That is why the traffic is getting redirected here. This process will take some time for the service that we provision.
G: Dimitri, I think we haven't evaluated this option actually, but we can definitely look into it. The idea was to be more cloud native in this case, and I think that is why we have gone with approaches which are quite specific to the target infrastructure. For example, in the case of AWS we use the virtual-IP-based approach, where we basically float the virtual IP; and then again it depends, for example, for OpenStack.
F: One challenge when doing that approach; well, there are two challenges, right. One is, you know, why is this a different path per infrastructure, which means they now have their own differences. But the second, and I think more interesting, challenge is that one of the failures you were mentioning is an AZ failure, and when an AZ failure happens, depending on the IaaS, you may not be able to use their APIs to make changes.
I: Okay, so apart from that: on the IaaS side, we saw that this operation succeeded just now. If you check the service status here, the operation has now succeeded. The use case that we were trying to show was simple: even if a user has fired a request and in between our broker process goes down, or some sort of AZ failure occurs, or the VM goes down due to some process failure, still the broker is able to serve the request and complete the operation. So this was one of the scenarios which I wanted to demonstrate. Now I'll demonstrate one Turbulence incident: we will trigger a Turbulence incident and showcase the failover time it takes for bringing the other node back as master. So I'll just bring all the processes back up on this node, so that we are able to simulate the incident.
I: Okay, so as I was explaining, what we are doing in this Turbulence incident (I have now triggered it) is that we are trying to kill one of the broker VMs here; that kill could also have been a delete operation. It's the same simple Turbulence incident that you can see in the Turbulence release. So we are trying to simulate that incident and check how fast the broker recovers from that failure.
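For reference, Turbulence incidents are submitted as JSON to its API. The payload below is only illustrative: the exact schema differs across turbulence-release versions, so the field names are assumptions, as are the host and credentials.

```shell
# Illustrative Turbulence "kill" incident; schema and field names are assumed,
# not taken from a specific turbulence-release version.
INCIDENT='{"Tasks":[{"Type":"kill"}],"Selector":{"Deployment":{"Name":"service-fabrik"}}}'
echo "$INCIDENT"
# curl -sk -u turbulence:"$TURBULENCE_PASSWORD" -X POST -d "$INCIDENT" \
#   https://turbulence.example.com:8080/api/v1/incidents   # placeholder host
```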
I: The zeros here indicate that the broker is still available, and we print the timestamps as the operation happens here, every one second; the timestamp indicates that. Once the broker goes down, we will see the value one, which indicates unavailability status, and then, when it comes back up, we will see that as well.
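The measurement described here, a once-a-second probe printing `<timestamp> <0|1>` and then subtracting the flip times, can be sketched as follows. The probe URL is a placeholder; `failover_seconds` just does the subtraction over a recorded log:

```shell
# A once-a-second availability probe would emit lines like "1531858800 0".
# probe() { curl -sf -o /dev/null --max-time 1 "$BROKER_URL/v2/catalog" \
#             && echo 0 || echo 1; }               # placeholder URL
# while :; do printf '%s %s\n' "$(date +%s)" "$(probe)"; sleep 1; done

# Given such a log on stdin, report seconds from first "down" to next "up".
failover_seconds() {
  awk '$2==1 && !down {down=$1} down && $2==0 {print $1-down; exit}'
}
printf '10 0\n11 1\n12 1\n24 1\n25 0\n' | failover_seconds   # prints 14
```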
C: One thing you may want to look at: the haproxy BOSH release in the incubator uses keepalived on the haproxy nodes, and they trade a VIP back and forth, which takes the place of your IaaS load balancer. But it generally only works inside things like on-prem vSphere; in fact, we did try it on Google and it doesn't work. But if you get into the vSphere side of that, it might be helpful. Sure.
I: So, now that the Turbulence test has ended, I just wanted to showcase this. At 36, as you can see in this timestamp, the broker started going down; that's when the warning came that the broker is down. The last down signal that we got was at 49, and by 50 it came back up. So, simply by subtraction, I can see that it took around 14 seconds for the failover to happen; the process came back up within 14 seconds on running the Turbulence test.
I: So we see that our process failover time, in the best case, is around five to six seconds, depending on the scenario. The first thing we simulated was a process-down scenario; this one is a kill-VM scenario. So in the best case we get around six to seven seconds, and in the worst case around 15 seconds of failover time.
A: So I guess the thing to remind people is that Service Fabrik is part of the Extensions group. It's been there for a while, and I would encourage you to take a look at it. It works, and I guess I should add more: not only does it run on CF, but I also think they added support for Kubernetes, if you wanted to manage your services there. It's sort of a management platform for your services, so I would say take a look if you are looking for things like this, especially with HA support.
C: So I thought I'd give a little bit of an update on SHIELD, and a little bit of background on SHIELD to start, if you're not familiar with it. SHIELD is an open source project Stark & Wayne spearheaded about two and a half years ago to bring backup services into Cloud Foundry, because we've got a lot of clouds out there making life wonderful for devs, but the ops guys were getting a little bit nervous, because all the data was sitting out there and no one was backing it up. So we built SHIELD.
C: We went through a couple of revisions on the UI; this last year we finished up the v8 UI, which is a complete retooling and more operator focused. I did give my presentation at CF Summit in Boston about SHIELD; that video is online, I think, and from the summit website you can find all those talks. I was going to give a demo during that talk and that didn't really work out, because live demos are fun stuff, as it turns out.
C: I'm not doing one today because I didn't have enough time to pull it together; if there's interest, I can look at doing one next CAB call. But what I wanted to talk about today is some of the community outreach stuff we're going to start doing, and have started doing, with SHIELD. As a project, SHIELD kind of languished for a while as github.com/starkandwayne/shield, and that's where you went for everything: a lot of GitHub issues and pull requests, but no real community or documentation.
C: So, starting early this year, we went ahead and put together shieldproject.io. This is still a work in progress, but this is our primary focus on the marketing and outreach side of things; shieldproject.io is going to become our primary communication channel with everyone. So, starting from the front: this is my open source sales pitch. We write a lot of software.
C: A lot of software doesn't get a lot of marketing because it's open source, so we're trying something different by trying to explain what the things we have built are, and we're starting with SHIELD. So: a little bit of a landing page, a discussion of all the things SHIELD can do and what platforms we support, and then some blog stuff. We're going to be collocating all of our documentation here, starting with the SHIELD operators manual. This is all live.
C: By the way, you can go look at it today; please do. Be advised it is still a work in progress, and we're working hard to flesh out a lot of documentation, but we're starting an ops manual to try and take some of the edge off of getting this thing up and running. I've talked to a lot of people in the community, and there are a lot of questions about what SHIELD is and how it works, questions that shouldn't be questions anymore, and we're trying to turn those into answers.
C: So there will be a lot of information here on the docs side for operators wishing to spin up SHIELD. We've also got our plugin reference starting up. Plugins are how SHIELD does backups: if you want to back up Postgres, for example, you would come out and look at the Postgres plugin, and it would provide all the configuration necessary to do your backups and stores. We used to put all of this on godoc.org, because we're a bunch of developers and assumed that everyone else in the world loved reading Go docs.
C: The other side of things from documentation is the developer's documentation. We're trying to be better about involving the open source public community in the development process of SHIELD: understanding the roadmap, understanding how the thing works under the hood, and how to contribute. The first thing, and really about the only thing, that's on the dev side of the website right now is the SHIELD API reference. So if you've ever had to try and curl something against SHIELD, if you're curious about, you know, how to get a list of all stores for a tenant, all this stuff is now out there, and it's procedurally generated through a Concourse pipeline. So, as we push changes up and make releases, the website updates automatically, because I'll be the first to admit I'm way too lazy to do these things by hand, so I will use Concourse wherever I can.
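As a taste of what's documented there, a curl against the v8 API might look like the following; the host, tenant UUID, and session header are placeholders, so check the generated API reference on shieldproject.io for the real contract.

```shell
# Build the stores-for-a-tenant request; names below are placeholders.
SHIELD="https://shield.example.com"
TENANT_UUID="11111111-2222-3333-4444-555555555555"
STORES_URL="$SHIELD/v2/tenants/$TENANT_UUID/stores"
echo "$STORES_URL"
# curl -s -H "X-Shield-Session: $SESSION" "$STORES_URL"   # assumed auth header
```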
C: Similarly, from an automation standpoint, we are syndicating our GitHub releases over to the website. We're not writing any release notes specific to the website, so these are the same as they are on GitHub, but this is trying to facilitate the goal of getting everyone to go through shieldproject.io for anything they need about SHIELD. And to finish up, I want to talk about the community page, because, as I said, this is a lot about community outreach. We do have a Slack org that we're running now.
C: We still run the shield channel in CF Slack, if you're interested. The main reason we spun up our own Slack org is that we have users and use cases outside of Cloud Foundry, and a lot of them were a little reluctant to join CF Slack just to talk to us about SHIELD, so we'll probably be in both places for the foreseeable future. Our dev stuff is all taking place on the actual SHIELD Slack.
C: We have team channels and things that we're doing there, but it's there if you need help; we've listed all the good contact stuff. We've got a Trello board now, and we're using the Trello board to prioritize all of our work outside of GitHub issues. We tried GitHub Projects and weren't too thrilled with it; GitHub issues are hard.
C: We might look at Pivotal Tracker, but for now we're going with Trello. This is a public board with non-open membership, so if you want to keep track of where we're at and what we're working on, this is where to go. And the last thing I wanted to follow up on, or rather finish up on, was our SHIELD roadmap call.
C: I had some conversations with folks at CF Summit, and the biggest blocker most people had in wanting to get involved in SHIELD was not knowing where the heck we were going: where Stark & Wayne was going with SHIELD, where the open source side of SHIELD was going. So we put together this roadmap call.
C: We just had our first call last Thursday. I will be doing these every second Thursday of the month over Zoom, and it's just a place to get in and talk to the SHIELD devs, figure out where we're going and what we're doing. We'll be running through the Trello board, we'll be talking about future direction and new features, and then my favorite part of this is the open forum, where anybody can join and we just talk through issues: any questions people have been having deploying.
C: Obviously, the next call will be August 9th at 11:00 a.m. Eastern. As I was joking with Dr. Max, it's really hard to schedule a call so that the West Coast and Europe can join; sorry, India, there's just no way to make this work in a global open source environment. So join the call, take a look, and come talk to us on Slack.
A: All right, so with that we will end early today, and we have another call scheduled next month. If you have something to show us, an update or an exciting new project, usually like a demo (but obviously that's not always possible), let me know, ping me. And then also, please register for the summit in Basel. It's a great place; Switzerland has a lot of cool mountains, I will tell you that right now. So make sure you do. All right, see you guys there. Take care, everybody. Bye.