From YouTube: Community Advisory Board Call 5/16/2018
Description
Get the full agenda here: https://bit.ly/2k3g4l8
A
You know, I have been doing it in Europe, but we're gonna do it now and it's gonna be recorded, so let's get started, I think. Usually we give Chip and Swarna a chance to give us an update on the Foundation. There's a big summit coming up in Europe, so maybe we can get some idea of what's going on there. Is that what you're gonna do? Yes.
B
Who is typing? Oh, she asked me to, so that's for you to do. So yeah, a couple of updates. Of course there is the summit coming up in Basel; a reminder that June 1st is the deadline for the CFP, so you've got about half a month left. Also, thanks to everybody that voted for the track chairs. We really got a great response to the voting process, and we'll be announcing who the winners are shortly; we just have to collate the results and make sure that the same person isn't in multiple tracks.
B
So that's all going really well, and again, get your talk proposals in. The other thing: obviously, that's the show where we're gonna see the CPI for Kubernetes, which is awesome, but there are also a lot of other amazing projects happening, both inside our project teams as well as outside, in the broader CF community. A reminder that we've been having a bi-weekly call to try to get through updates on all these activities.
B
It happens every other Wednesday; the next one is next Wednesday. Go to the mailing list to catch up on details — we're gonna switch to Zoom as well for that regular meeting. And also, you can go back to the North American event in Boston: if you take a look at the Cloud Foundry site, you can get to all of the videos. Cool pictures, yes — every talk was recorded, and there are hours of binge-watching pleasure if you're interested. So those are the highlights.
A
The second one is very important in terms of its scope. This is an effort by SAP — the team from SAP is here, and they're always eager to give us an update and talk about it — but basically it's the MultiApps project. This is a way for you to deal with multiple apps and how to connect them together and so on. There's a whole set of presentations that we did before, at the summit and also in the CAB call; I'd encourage you to take a look at those.
A
You'll get a chance to present on this soon, yeah. So I don't know if anybody has any questions for us on the extensions or any of the other projects there — you've got the leads out here, so if you have questions... No? Then we're going to get to the main event. All right, cool. Since Dmitriy is gonna be doing the presentation on the BOSH Kubernetes CPI — I don't see Danny here; he usually attends, but I told them that we would start the BOSH stuff at half past, so maybe that's what's going on — they'll give us an update on BOSH.
F
So, okay, as JR said — let me introduce myself first. I'm one of the initiators of this particular project and a committer on the project, and I'm from IBM. So let me first introduce what the autoscaler is. This is actually a Cloud Foundry service which enables you to automatically adjust the number of Cloud Foundry application instances based on, you know, the policy you define. So by using this autoscaler...
F
...you can, you know, maintain the availability and maintain the performance level of your application. Okay, and a little bit of history of this project: this project has actually been there for two years. We started it in the middle of 2016, and we have core contributors from IBM US, IBM China, ICP India, and Fujitsu in Australia. The history is that it's actually based on a code donation from the IBM Cloud autoscaler service, and we then cleaned up the code.
F
The part we open sourced first is actually the v1, which is based on Java, and it's a pretty monolithic implementation. We then redesigned it with a microservices architecture and rewrote the service in the Go language. Yes, and there are several adoptions of this autoscaler service: actually, since October last year, IBM Cloud has had it deployed in production.
F
And now we are, you know, adding this service as part of the IBM Cloud Foundry Enterprise Edition, which is a public, isolated offering of Cloud Foundry on IBM Cloud, and eventually we will replace the existing IBM Cloud autoscaler service, which is a proprietary implementation, with this open source edition. That will happen sometime at the end of this year, or maybe early next year, for the public cloud as well.
F
The other adoption is that SUSE is now including this service as part of its SUSE Cloud Foundry distribution, and we have also talked to Pivotal, because Pivotal has its own autoscaler service too. We are looking at the possibility of, first, open sourcing the Pivotal implementation and then converging the two programs into one, so that we have a single service in the community. A little bit on the components.
F
I don't want to dive deep into the components of the autoscaler, but what I want to point out here is that it's actually built from quite a few components, so it's pretty easy to extend the autoscaler to other domains besides Cloud Foundry applications. For example, we can customize the metrics collector and the, you know, scaling engine so that we can scale other platform components, or even do the same thing to scale out, you know, Kubernetes. Several key features: right now we support two types of scaling.
F
One is dynamic scaling, which is based on, you know, performance metrics. Right now we have built-in metrics support for memory used, memory utilization, and response time. This built-in metrics support applies to all Cloud Foundry applications: there is no code injection into your application; we just, you know, collect these metrics from Loggregator. So you get it when you push your application, no matter what buildpack you're using. The other type of scaling we call scheduled scaling.
F
Basically, you can define a timetable for your scaling. For example, if you want to scale every Monday morning from nine o'clock to eleven o'clock to a certain number of instances, that's what you would use the scheduled scaling for. We support a recurring schedule — something like every week on Monday, or every first day of the month — and beyond that you can specify a specific, one-off schedule as well.
F
Another thing: we have now made the service broker fully compliant with the Open Service Broker specification, and the service also has APIs to manage the policies.
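(For reference: a policy combining the dynamic and scheduled scaling described above might look roughly like the sketch below. The field names and the plugin command are recalled from the app-autoscaler documentation rather than shown in the call, so treat them as illustrative.)

    # Illustrative autoscaler policy: dynamic rules on memory plus a Monday-morning schedule.
    cat > policy.json <<'EOF'
    {
      "instance_min_count": 1,
      "instance_max_count": 5,
      "scaling_rules": [
        { "metric_type": "memoryused", "threshold": 200, "operator": ">", "adjustment": "+1" },
        { "metric_type": "memoryused", "threshold": 100, "operator": "<", "adjustment": "-1" }
      ],
      "schedules": {
        "timezone": "Asia/Shanghai",
        "recurring_schedule": [
          { "days_of_week": [1], "start_time": "09:00", "end_time": "11:00",
            "instance_min_count": 3, "instance_max_count": 10 }
        ]
      }
    }
    EOF
    # Attach it with the app-autoscaler CLI plugin (command name as I recall it):
    cf attach-autoscaling-policy my-app policy.json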
F
Also, you can use the API to query metrics and the scaling histories, and for that there is a corresponding, you know, command-line interface, already in the community Cloud Foundry CLI plug-in repository, so you can freely download it and use it. So, before I go into the next few slides, I'd like to give you a short demonstration. Because it may take some time to show the autoscaler scaling live, I just recorded it.
F
Let's do that. I'm not going to show how to deploy it — it's a BOSH release, so basically when you deploy Cloud Foundry you can, you know, deploy the autoscaler service using BOSH as well. For this demonstration I'm using BOSH Lite, basically. All right, four minutes? Okay, no problem. Let's see what the deployments are.
F
We have a CF deployment and we have an autoscaler deployment there. Before using it you have to register this service: using the CF CLI you create the service broker, and then you need to enable the service broker access. Now you can see the service in the CF marketplace. Then let's install the Cloud Foundry command-line plugin for the autoscaler.
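(For reference: the registration steps narrated here map roughly onto the CF CLI commands below; the broker URL, credentials, and plugin name are placeholders rather than the exact values used in the recording.)

    # Register the autoscaler service broker and expose it in the marketplace
    cf create-service-broker autoscaler <broker-user> <broker-password> https://<autoscaler-broker-url>
    cf enable-service-access autoscaler
    cf marketplace          # the autoscaler service should now be listed

    # Install the community CLI plugin for the autoscaler
    cf install-plugin -r CF-Community <autoscaler-plugin-name>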
F
Let's see how this scales in and scales out. This window actually shows the benchmark tool — we are using ab, the Apache benchmark tool, to drive load — and there are several other windows here: this one shows the metrics retrieved for the application and also the scaling history, and we also show the app stats. Now, to see what's going on, let's start the load first, and you may need to wait a little.
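(For reference: the load-then-observe loop in the recording corresponds roughly to the commands below; the app route is a placeholder, and the metrics/history commands are app-autoscaler CLI plugin commands as I recall them, so check the plugin's README for the exact names.)

    # Drive load with ApacheBench, then watch the autoscaler react
    ab -n 100000 -c 50 https://<app-route>/

    # Query what the autoscaler measured and did
    cf autoscaling-metrics my-app memoryused
    cf autoscaling-history my-app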
F
Okay, so there is an additional entry in the scaling history: the instance number changed from two to one, and if you look at the status of the application, there is only one instance running. So this is a very simple demo. And what is, you know, coming in the near future? One of the things we heard from customers is that they want custom metrics, so we are working on that, and we are also now working on a dashboard.
F
We will, you know, provide an open source dashboard as well, so that you can configure policies and see, you know, the custom metrics and the scaling histories. Here I show some screenshots: the policy configuration, the metrics dashboard, and also the scaling history. There are other things too, like we are going to migrate to the Loggregator v2 API, or we may use Log Cache, because it has quite nice features for metrics query.
F
We haven't decided yet which way to go, but we will do it no matter what — either Loggregator v2 or Log Cache — given that the Loggregator API we rely on today is going to be deprecated by the end of this year. The other major thing we want to do is performance improvement. Our goal is to support, you know, 10,000 applications, given that, as I said, only a small percentage of those will be using the autoscaler.
E
All right, so thank you, Dr. Max, for giving us the opportunity to present this in today's CAB meeting. Since we achieved incubation, this is the first time, I think, that Service Fabrik is presenting in the CAB meeting. I've got a couple of slides here. The first one talks about what has been done in Service Fabrik from incubation until, I would say, the Boston summit, and in the next slide we will talk about what roadmap is in place. So I think these were a few of the commitments.
E
Also, when we initially proposed for incubation, OSB compliance was a thing: when we proposed incubation, we were complying with version 2.9 of the OSB API, and now we can say that we are compliant with version 2.13 of the OSB API. This was also a requirement — to decouple some of our lifecycle operations from Cloud Foundry — because, as we move on, we know there is increasing adoption of, basically, the integration between CF and Kubernetes. So as part of this we decoupled our lifecycle APIs from CF, and now we can say we are a truly OSB-compliant broker.
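(For reference: OSB v2.13 compliance means the broker answers the standard Open Service Broker endpoints; a provision call looks roughly like the sketch below, with the broker host, credentials, and the service, plan, org, and space GUIDs as placeholders.)

    # Provision a service instance through an OSB v2.13 broker (illustrative values)
    curl -X PUT "https://<broker-host>/v2/service_instances/<instance-guid>?accepts_incomplete=true" \
      -H "X-Broker-API-Version: 2.13" \
      -H "Content-Type: application/json" \
      -u "<broker-user>:<broker-password>" \
      -d '{
            "service_id": "<service-guid>",
            "plan_id": "<plan-guid>",
            "organization_guid": "<org-guid>",
            "space_guid": "<space-guid>"
          }'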
E
Then, moving forward, Service Fabrik also provides some extension APIs, which will be covered later. The next story to talk about is the adoption of BOSH 2.0. This was an important story for us and was also a commitment from our side.
E
As part of our incubation proposal, the major requirement was basically to make Service Fabrik modular and support BOSH 2.0 — the manifests, the ops files, and the various constructs which are provided by BOSH 2.0 — and also to make sure that the backing services which are deployed using Service Fabrik are themselves able to adhere to the BOSH 2.0 way of doing things. And then, with respect to that, there are a lot of things that BOSH 2.0 brings to the plate.
E
The base file system was a major thing, and then dynamic IP allocation for use cases like sharding, and then of course the adoption of ops files and related aspects. The next item is basically a scheduled update window for service instances. With respect to this, the idea concerns the various instances which get updated: Service Fabrik also has a scheduler component in place, which basically schedules updates as well as backups.
E
The idea behind this particular story was basically to spread the schedules across a user-defined window of, say, a week, or a few days, or maybe a few hours within a day, so that the end user knows when his instance is going to get updated; it gives end users some transparency around their backing service instances. Next, a CF CLI plug-in for backup and restore: today, as you know, Service Fabrik has got extension APIs for backup and restore...
E
...and as part of this story a plug-in was developed and open sourced. Then the pre-update API: this again has many use cases. Before triggering an update for a backing service, the idea is that users can write some scripts, or provide some constructs, with which they can basically perform some operations — for example, they may want to modify the manifest, or inject some metadata into the newly generated manifest.
E
So this is what was delivered up until, I would say, CF Summit Boston. Moving to the roadmap: the roadmap is quite exciting for Service Fabrik. The first item is Service Fabrik 2.0 — we also made a presentation on Service Fabrik 2.0 during the Boston summit. Some of the items are, for example, integrating...
E
...community projects — like, say, BBR (bringing in BBR), or maybe Shield, or some of the other monitoring frameworks, for example Prometheus — or, for example, getting a new provisioner on-boarded onto the Service Fabrik platform. Today we support BOSH-based provisioning and Docker-based provisioning, so, for example, bringing in a Kubernetes-based provisioner today is quite a cumbersome task, considering the Service Fabrik 1.x architecture. So the plan is to basically move to a much more modular, pluggable, event-driven architecture where some of these shortcomings can be easily addressed.
E
So that's the plan; the presentation is also available and can be looked at. Next: Service Fabrik HA, ZDM, and multi-AZ. Today Service Fabrik is deployed on a single VM, so it is a single point of failure. The plan is to basically make it highly available and make sure that for any updates there is only very minimal downtime when the VMs get updated, and of course to support multi-AZ on the various IaaSes.
E
On the HA and ZDM side, I think we can basically bring the time required today for a failover down to two to three seconds, and the idea is to support it on the various IaaSes. Next, rate limiting: this is again a very important story. The idea is that today, when Service Fabrik talks to BOSH or the various components — or, I would say, to CF — there is no rate limiting in place.
E
We just send the requests and then we wait for the target platform to respond back. At times in our tests we have seen that we have hit limits on the IaaS, or on BOSH, or even on the Cloud Controller. So the plan is to basically apply some rate limiting within Service Fabrik. Moreover, when it comes to some long-running processes — for example, long-running requests...
E
...for example, a major version upgrade for the backing services, which may take a lot more time than a regular stemcell update or a regular code update — if you consider a BOSH upgrade, it may end up using all the BOSH workers: all the workers can be occupied by, suppose, twelve simultaneous updates, considering that there are twelve of them. So the idea is to make sure that there is a fair share given to the various requests.
E
That's the idea behind this context-based scheduling. Everything I just discussed in the previous slide — we have a scheduled updates window in place, but we now see some limitations with that approach, where what happens is that it is just plain time-based scheduling. So the idea with this story is that we be more deterministic. Today, when you send an update, we do not know the target state of the system — for example BOSH...
E
...what's the load on BOSH, or whether there is enough capacity available in BOSH or not. So the idea is to basically be more deterministic and make optimal utilization of resources. Deployment hooks is another very important story. The idea is that we provide some pre and post hooks around the lifecycle operations and give users the capability to run some operations of their own — for example, if you want to inject credentials into a manifest, you should be able to do it in a pre-update operation. And then moving forward.
D
Presenting — all right. Zoom, I guess, affects computers in all kinds of ways. So, for those who don't know, I'm the PM for BOSH; I also happen to work with Max all the time. This is the presentation that I gave for the first time at KubeCon in Copenhagen a couple of weeks ago, I think. So, given that we have about 20 minutes, we'll probably go a little bit faster or maybe skip certain things, but feel free to interrupt at any time — let's make it as interactive as possible. So: BOSH and Kube.
D
You know, some of you already know that there have been previous attempts to figure out how to integrate BOSH with Kubernetes — you know, whether it's a good thing, whether it's a bad thing, different approaches and whatnot. So this presentation explores one of the approaches we've taken and kind of tries to put a particular framing around, you know, what's the right thing to do. Who's this guy? Well, that's Max; he is orchestrating this meeting. He likes bicycles, and he's slightly riding away now. Boom — right, the slacker here, that's me; I'm not slacking over there.
D
What's Kube? Well, I think hopefully everyone knows what Kube is at this point. According to the GitHub page, it's "production-grade container scheduling and management." Different people view it as a different level of platform: some people use it as a PaaS, even though it's not necessarily designed that way; some people use it as an IaaS, even though, again, it's not necessarily designed that way. A lot of people view it as just a set of primitives that they can build something on.
D
BOSH, on the other hand, provides, you know, a full-blown tool chain for release engineering, deployment, and lifecycle management — as the tagline says, of small and large-scale software. Well, mostly medium-sized, but we definitely have users that go smaller or larger. All right, Kube operators: not sure if everyone has heard of them, but for those who haven't, operators are a concept that's been popularized by CoreOS. Why do we need them? The way CoreOS describes operators is as follows.
D
An operator is something that you effectively use to help you manage different kinds of stateful applications, or maybe more complex stateless applications. Another quote that CoreOS uses to describe operators is that they are a way to include some of the human knowledge of operating a particular piece of software. Now, obviously, if the software is pretty easy to operate, you probably don't need any additional stuff like operators or other processes — you just, you know, maybe use BOSH directly, maybe use VMs directly, maybe use Kube directly. But sometimes, especially with data services...
D
...I'll skip this slide, since it's super blue and confusing. All right, so the way we were discussing it with Max, what is an interesting way to position BOSH in regards to Kubernetes: can someone provide the generic operator? And if we do so — well, yes, we can — how useful is it? We don't quite know. I think this really comes from...
D
...you know, lots of experimentation. We could potentially determine whether it's something that's useful while operating a Kubernetes cluster and maybe using BOSH with it; we'll see if we can. All right, let's take a look at what it would be like if we could. It looks like there are some fancy things going on there: we have some software that we want to operate, we have some kind of generic Kube operator, and then, I guess, this generic Kube operator is actually creating all kinds of different Kubernetes concepts.
I
will
go
into
detail
on
some
of
them
whatnot,
but
this
is
how
I
guess
the
typical
layout
would
look
like
if,
if
we're
talking
about
the
generic
q
operator
now,
if
you
just
replace
this
generic
cube
operator
with
that
CD
operator,
you
know
and
replace
like
service
one.
Who
is
you
know
it's
it?
You
know
it
one.
It
did.
You
know
to
it's.
You
know
three
and
whatnot
right.
You
suddenly
see
it's
being
a
little
bit
more
specific
right,
so
the
proposition
of
this
presentation
as
well.
D
What if BOSH could be this generic Kube operator? I guess any software that you deploy would be packaged as a BOSH release, and, you know, you would reap the benefits of BOSH that you've potentially learned, or may be learning, so far. All right, and I guess that was the slide to kind of sell the main point. But, you know, BOSH has certain capabilities in how you install, configure, and update workloads; it's a fairly well-established tool to manage production workloads. It has its own, you know, benefits.
Now,
how
do
we
do
it?
Well,
as
most
of
you
will
probably
know
there,
is
this
interesting
abstraction
that
we
have
in
Bosh
called
CPI.
Cpi
is
CPI.
Extension
point
allows
you
to
integrate
what
allows
boards
to
integrate,
rather
with
different
I
as
this
container
as
a
service
systems.
Maybe
it's
some
kind
of
a
bare-metal,
api's
and
whatnot
right.
D
So, given that — the way I view BOSH is more as a generic compute orchestrator — the CPI is a natural extension point to, you know, implement a basic integration with Kubernetes. To get into more details: currently the CPI abstracts away images, compute, storage, and networking from the rest of BOSH, mainly from the BOSH Director.
D
So, moving on, specifically talking about the Kubernetes CPI: what does it do? Given that Kubernetes, as I mentioned before, provides lots of different primitives to build upon, the Kubernetes CPI automatically, you know, uses those primitives to achieve its directive. So, for example, on create_vm the CPI would create a pod, and on create_disk the CPI will create a persistent volume; create_vm will also create a few other concepts for how those pods should be arranged in a cluster. We're actually trying to hide all of those boring primitives from you, with even more boring things on top of them, such that you don't have to worry about all the tiny little details. And, of course, given that CPIs are something that the BOSH Director just calls itself, no one else really has to know about them.
D
Whatever you deploy on top of BOSH doesn't actually know that it's being deployed on top of Kubernetes; it just thinks it's deployed in some kind of compute environment — it might as well have been AWS; it doesn't even matter. Demo one. Alrighty, so: install Kafka and ZooKeeper. I think, for the sake of time, we're gonna do one quick thing, which is really just to install ZooKeeper. So I have — hopefully everyone can see this — a BOSH already installed on top of Kubernetes on GKE. This is one of the later GKE versions.
D
I believe, as we can see over here, we have three nodes in this cluster, so a pretty tiny cluster. It's been running for some time now, since this is the same cluster that was used for the demo at KubeCon. So what we'll do is take a look at the BOSH releases real quick. There are lots of releases here — we won't get into detail as to why — but the more important one is zookeeper, over here. We also have an existing CF deployment — close your eyes, we don't have to know about that for now — but there is no zookeeper deployment yet.
D
So if we go ahead and — oh, I guess one thing that we didn't look at is stemcells. We already have one stemcell over here. It's a Warden stemcell, the same stemcell that people use with BOSH Lite — exactly the same bits. All right, so let's actually deploy zookeeper. I already had a command over here figured out. The only wrinkle over here — it's not really a wrinkle, but rather something that is not enabled by default...
D
...is that my zookeeper release, which I've maintained for testing purposes and whatnot, doesn't use BOSH DNS by default. Eventually it will, but in this particular environment we'll enable that, and we'll kick it off. All right, so it will do a little bit of work there, and it will start creating a bunch of missing VMs. Now, as I mentioned before, these VMs are really pods.
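(For reference: the deploy being narrated is roughly the standard zookeeper-release workflow, sketched below; the ops file enabling BOSH DNS is an assumption based on the narration, not a file shown on screen.)

    # Deploy zookeeper with BOSH; on top of the Kubernetes CPI, the "missing VMs"
    # BOSH creates are really pods.
    bosh -d zookeeper deploy zookeeper.yml \
      -o enable-bosh-dns.yml      # hypothetical ops file; the release doesn't use BOSH DNS by default

    # Inspect what BOSH thinks it created
    bosh deployments
    bosh -d zookeeper instances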
D
So if we Ctrl-C this real quick and look at the Kubernetes side over here — get pods, say, with show-labels, grep zookeeper — we'll see that about 12 seconds ago we actually requested five different pods to be created in Kubernetes. They're labeled in particular ways, and that's how you find them. But if we reconnect to the BOSH task, let's see what it's doing. All right, so it's almost there. You will notice that it's also a little bit slower than a BOSH Lite.
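(For reference: the kubectl side of that check is roughly the following; the exact label keys the CPI applies aren't spelled out in the talk, so grepping by name is the simplest way to spot the pods.)

    # List the pods the Kubernetes CPI created for the zookeeper deployment
    kubectl get pods --show-labels | grep zookeeper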
D
Now, the reason, I suspect, is that this cluster is a little under-provisioned, so it might be choking up a little bit on spinning up some of these things, and maybe the ZooKeeper processes also take a little longer to start, since they're running right next to a bunch of pods from a different deployment. All right, so the deploy, as you see, is exactly the same procedure. We'll circle back to it and run some smoke tests after it's done, but at least in the interest of time...
Let's
see,
let's
hope,
one
to
the
demo,
one
all
right
so
install
Kafka
zookeeper
great.
What
do
we
see?
Well,
we
saw
this
step
we'll
see
this
step
shortly.
We
won't
see
the
Kafka
stuff,
but
you're
welcome
to
follow
some
of
the
instructions
in
the
repo
euro
sure.
What
do
you
see?
Well,
we
had
Bosch
on
culinary's.
We
kicked
off
some
crafter
stuff,
mostly
right
now,
we're
concerned
with
the
installation
step,
but
of
course
anything
that
you
typically
do
with
Kafka
Mosley's.
D
...you can do: you know, for example, query its status, maybe recreate your nodes, do an update, do rolling updates such that only one individual node gets killed, or whatnot. The persistent disks are involved too; we didn't see them in Kube, but we can circle back and check them out, and of course you can manage other software like this, as you have before. All right, so let's jump into the CPI implementation details. There kind of needs to be a mapping between BOSH concepts and Kubernetes concepts.
D
Both sets of concepts are super generic, so it was fairly easy to map the two and kind of align them together. From the BOSH side, stemcells easily translate to registry images: when the CPI is called to upload a stemcell into the system, all it's doing is importing a Docker image from the stemcell tarball — the Warden image itself. So nothing revolutionary there.
Revolutionary
they're,
both
computer
unit,
I,
just
call
it
computer
unit
over
here,
for
the
sake
of
not
necessarily
call
it
VM,
you
know
put
in
parentheses
as
I
was
saying:
borscht
is
a
genetic
computer,
kiss
trailer
right
so
in
this
particular
case
it
maps
to
put
in
other
cases
that
may
map
to
a
physical
machine
or
maybe
container
or
maybe
VM
something
else,
resistant
disk,
well
PD
or
PVC
I.
Guess
it's
again
one
to
one
alignment.
D
There are interesting kinds of problems in Kubernetes currently where you have to align some of the availability zones and whatnot manually if you do Kubernetes on its own; however, with BOSH, because we do have a notion of availability zones, the persistent disks fall into the proper places automatically. For networking, Kubernetes provides a single overlay network — I guess it really depends on your networking plugin for Kubernetes, but as a default that's what it is. And then AZs: well, Kubernetes automatically marks nodes in a particular AZ with a particular label.
D
So you can specify that in your cloud config, and then BOSH will, you know, make sure that when it's creating a pod it will ask Kube to put it in a particular AZ. What else have we got here? Some interesting, tiny details in the pod-creation logic are the following. We have anti-affinity rules that are automatically applied — so, for example, having your five-node zookeeper cluster spread over multiple machines, which is, you know, very, very nice.
D
Even if it's just within a single AZ — let's say maybe you deploy a testing cluster, maybe you're deploying something else — ideally it would spread around. Now, even across multiple AZs, if you have, let's say, more than three nodes — maybe, I don't know, five nodes, or maybe nine nodes — you may still want to spread them across individual underlying VMs to reduce the risk of failure. Automatically configured AZ associations: that's another freebie. As long as you configure the AZs...
D
...in your cloud config, then BOSH will take care of the rest of spreading things around the AZs. And finally, automatically creating pod disruption budgets: we'll get into a little bit of detail on why we do that and what pod disruption budgets are, but note that they're created automatically. All right, so what are the challenges? This section talks about, you know, what kinds of problems previous attempts at creating a CPI by different teams ran into, what we observed ourselves, and how we solved them.
D
The first one is kind of an easy one to identify: hey, Kubernetes doesn't really allow you to do static IPs — what can we do about it? Well, given our timing, we were just able to enable BOSH DNS, you know, in these releases, for example in zookeeper, and it just works. So right now this zookeeper that we're deploying actually does use DNS, and the CF that we'll get to as well uses DNS.
D
Now, there is one kind of asterisk to this: we did actually implement manual networking on top of Kubernetes, such that it does maintain static IPs. It is a little bit expensive, because what it does is create a service for each pod, but it does work; and if, for example, your service does not like DNS for whatever reason, you could technically run it on Kubernetes with static IPs — but you probably don't want to, since DNS is great. All right, what is the next challenge we ran into? Maintaining workload...
D
...availability during node upgrades. When I say node upgrade over here, I actually mean a Kubernetes node upgrade. So when you're actually upgrading your Kubernetes cluster, maybe from 1.7 to 1.8, how do you deal with some of the disruption — maybe a node is failing or something like that — and how do you quickly bring things back up? So, a little bit of a note on what exactly happens during node draining. Let's say, as an example, you want to upgrade from Kube 1.7 to 1.8. You have this cluster of...
Let's
say
ten
VMs,
you
wanna
typically,
do
it
one
node
at
a
time
right,
because
you
don't
want
to
just
take
out
your
entire
cluster
and
your
screen.
If
you
can
good
for
you,
but
for
production
customers.
Typically,
they
are
a
little
bit
more
cautious
about
just
throwing
away
their
clusters,
since
typically
they
run
some
kind
of
workload,
and
maybe
it's
not
easy
to
move
to
somewhere
else,
though,
if
you
can
move
it
move
away.
So
as
for
example,
saying
we're
trying
to
upgrade
a
single
node
node
is
running
with
qubit
1.
D
The node gets marked as non-schedulable, so at some point it's not going to accept new workload; then it gets drained for some amount of time. This is similar to how Diego cells drain apps, where they kind of try to move them off to a different place. In Kubernetes, though, node draining is really limited to "hey, let me just kill this container."
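(For reference: the cordon-then-drain sequence being described maps onto the standard kubectl commands below; the flags shown are the common ones for Kubernetes of that era, so check kubectl drain --help on your version.)

    # Stop scheduling new pods onto the node, then evict the existing ones
    kubectl cordon <node-name>
    kubectl drain <node-name> --ignore-daemonsets --delete-local-data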
D
However, the killing of the container is a little bit more complicated, and we'll get to that in a sec. Eventually all the containers are gone and the node is removed; then, eventually, a new node is created that starts the new kubelet version, and everything goes on. Now, during this time, when we're draining the node, what we want to happen is for BOSH — because BOSH is the generic operator over here — to actually take the pods that Kubernetes is killing on this particular node...
D
...and bring them up somewhere else. Now, why do we want to do that? For software like ZooKeeper, for example, maybe you have to maintain a certain number of containers running. So if there are five zookeeper containers, you typically want three of them running; if you have three of them, you want two of them running at a time. This is where pod disruption budgets come in — that's the solution we're gonna jump into over here. So what is a pod disruption budget?
D
It is effectively a constraint that you can put on Kubernetes; it's a primitive that Kubernetes provides, and with it you can say to Kubernetes: hey, it's not okay to delete this container unless enough containers with a similar label are healthy enough to continue on. So, for example, the constraint for zookeeper may simply say that you can only kill one zookeeper at a time: if any other zookeeper is already dead, or maybe not healthy or something like that, you cannot kill another zookeeper node.
D
This is extremely important when you're draining the nodes, because maybe you have two zookeeper instances on the same node, or maybe, for whatever reason, there are just different services on that node that cannot all be killed at once. So the notion of PDBs is important. What we've done is have the Kubernetes CPI automatically create pod disruption budgets. Now, it's fairly naive right now — it's really creating a PDB with a configuration that says you cannot kill more than one thing.
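(For reference: a minimal PDB expressing the "never take down more than one at a time" constraint might look roughly like the sketch below; the label selector and names are placeholders, since the talk doesn't show the exact labels the CPI applies.)

    # Sketch of a PodDisruptionBudget equivalent to "at most one pod unavailable"
    kubectl apply -f - <<'EOF'
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: zookeeper-pdb
    spec:
      maxUnavailable: 1
      selector:
        matchLabels:
          instance-group: zookeeper    # placeholder label; the real CPI-applied label may differ
    EOF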
D
If you don't use PDBs and you do use Kubernetes directly, I would suggest looking into them: if you're running any kind of stateful software on it, you should take a look at, you know, configuring them. All right, well, I guess I kind of went over this already, so let's touch on it less, but this is how you can take a look at the PDBs inside the cluster. What else have we got here? Improving the BOSH Director's resurrection strategy for faster pod re-creation.
D
This is important in terms of draining: you want to quickly, you know, bring back those pods somewhere else. The goal is to improve how the Director resurrects; ideally it would do so in parallel. Currently it doesn't — there are some pending changes for that, still a work in progress. All right, what about more complex workloads? Well, as a demo of that, we have a Cloud Foundry installation — the application runtime — sitting on top of Kubernetes. Let's swap over real quick; we'll see that our zookeeper is done.
D
We also see that our time is about over for this presentation, but just for the sake of it, over here we'll say "bosh instances" and we'll see that it's going to look at two deployments: one of them is CF — everything is in HA mode except maybe the blobstore — and the other is the zookeeper. I also have this other example over here: we have a Cloud Foundry app running on it. This is the load balancer ingress IP on Kubernetes, so if you click on it — Max was trying to deploy an example app...
Meanwhile,
sir,
it's
running
on
kubernetes
same
exact
installation
instructions,
nothing
fancy.
There
is
one
ops
file
to
enable
one
of
the
newer
features
in
gardener
and
see
to
disable
swap
limiting,
but
it
should
work,
as
is
it
doesn't
sound
like
I,
have
time
to
run
smoke
test,
but
trust
me
that
they
do
work
I'm,
not
sure.
If
there's
anything
particular
here,
this
should
actually
say
do
not
come
with
swap
space.
Most
local
mistress
do
not
come
with
swap
space,
because
kubernetes
has
been
on
the
position
of
we
should
be
using
swap
in
the
cloud
environments.
D
Hence we had to change garden-runc a little bit. But yeah, one nuance here is that we're running multiple app containers within each pod — that's the current design; maybe it will be adjusted in the future, maybe not, we'll see. Here are some upcoming changes, and I'll just jump directly to the resources: over here is the main repo. Thank you for listening. If you have any questions, I'll stick around for a few more minutes, and I yield my time.