From YouTube: Cloud Foundry CAB Call [August 2018]
Description
Get the Agenda: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit
A: Okay, so we have 23 people and I'm expecting a few more. It's 8:02, and I think we're already recording, so we'll go ahead and start. As you can see, we have a fairly full agenda today, so let's just get straight into it. Is there anybody from the Foundation who can give an update on the summit that's upcoming in New York? If you haven't registered, go ahead and register.
B: Dr. Max — Chip has checked in.

C: Hey man, how's everybody? All right, so the European summit's coming up — don't forget to register.
If you're a contributor, you should have seen the code on the dev list already, so feel free to register. We have a pretty awesome agenda — if you haven't checked it out, take a look: all kinds of really fun talks about existing projects and some of the direction of the project. As always, I expect it to be great. Registration numbers look pretty good, and it will get capped at some point, so make sure to get registered if you haven't. Let's see — there's a member survey that was fielded; it looks like some of these notes in here are from the past, so that's already done. And we did sign up for the Call for Code program that IBM's running; we're still working out some logistics about how we're going to support that — maybe a virtual hackathon.
A: Okay, I don't hear any questions — thank you, Chip. I'll second that Call for Code; I had added it to the agenda, so check it out. As a person who lived through a pretty bad disaster in 2010 in my home country of Haiti, I had done something like this, where I tried to build apps to help people, and I had a recent discussion about this.
If we had had the things we have for Call for Code back then — and obviously for things that happened in Puerto Rico last year, and in different parts of the world, in Japan and other places — it would help. So if you want to put your coding skills toward something that is bigger than yourself and bigger than your organization, that's an opportunity, so check it out. Okay, that's all I have to say. Chip, what about the member survey? I think Shana had mentioned it a few times.
C: Yeah, no problem — I went through that really fast because the survey has closed out. We got good responses. It was sent specifically to what are called the primary contacts for each one of the member companies. The Foundation may have mentioned it before, but it's the way that we look at how we're supporting all of you: the community, the marketing needs of the organizations involved, the various technical groups. So it gives us a good gauge on how we should adjust.
A: All right, thank you very much. Okay, so as you heard, that was Chip from the Foundation. Are there any other questions? All right, so we'll get to the other part of the program, which is sort of an overview of the different PMCs — kind of the highlights, anything important, and obviously if there's a talk related to it and there's a need.
E: So I think they're close to the point where they're ready for more public data on that. They're working with the CLI team about what that's going to look like in terms of integration, although I think even the API endpoints might be there at this point. So if you enable their new component that does that and start moving droplets around, then you can start playing around with it.
E
Gardens
also
started
work
on
their
efforts
to
improve
some
of
the
CPU
controls
inside
of
CF,
so
we're
on
the
Diego
team
working
with
them
to
start
integrating
that
I
know.
The
container
networking
team
is
also
making
progress
on
their
track.
Work
around
dynamic,
egress
rules
so
making
those
applications,
security
groups
more
dynamic,
both
in
terms
of
updating
them
on
running
instances
and
tying
them
to
DNS
names
instead
of
just
raw
IP
addresses
and
then
there's
a
bunch
of
integration.
Work
that's
been
going
on.
E
So
that's
all
ready
to
go
and
we're
giving
people
enough
lead
time
before
Trustee
comes
out
of
support
in
April
of
next
year
and
similarly
we're
integrating
its
the
Zinio
stem
cells
that
the
Vash
game
has
been
producing
then
also
we're
increasingly
integrating
the
DTM
project
across
all
of
the
runtime
bas
releases.
So
in
long
term,
that'll
will
give
better
isolation
between
jobs.
Somewhat
simpler
operator,
experience,
support,
release
author
experience
in
terms
of
having
less
complexity
bastards.
So
with
some
security
benefits
there
from
simplifications
and
oscillation
sweet.
C: I'll add something, just because buildpacks were mentioned. For those of you who have been paying attention to what's going on in the CNCF, you'll note that the effort to kind of unify the Heroku and Cloud Foundry buildpack approaches is going to be presented to the CNCF TOC as a way to try to move that unification forward. That's going to happen, I think, next Wednesday — no, I'm sorry, Tuesday morning, US time. So that's kind of cool.
F: Exactly — so, nice to meet you all. Essentially we've had the PMC lead transition that people have talked about for BOSH, and since Dmitriy is on the call, I would like to thank him for all his contributions over the years. He's done a great job, and we'll try to do as well.
Yeah — I wasn't sure; we didn't coordinate in advance. In any case, Marco has accepted to take on the role and sent out an email recently highlighting some changes that we want to make to our PMC: essentially adding separate projects with separate leads, and we'll try to rejig all our projects around the CPIs as well, because we have some of them incubating and some of them official. So we need to make things clearer to reflect the fact that we've got — well...
Some of them are quite stable now, so we'll probably get them upgraded to full-blown projects at some point in the future. As far as product direction goes, we are still trying to solidify our three-month, six-month, nine-month roadmap, and we'll communicate it at some point to make it clear. You shouldn't expect too many big changes — obviously we won't set the world on fire, at least initially, with whatever bright ideas we have. So continuity is key for us, obviously, and transparency as well.
Also, we've got some amount of debt as far as GitHub issues and PRs are concerned, and Morgan and I are keen on paying off that debt and being not reactive — because that's what we've been doing — but proactive in addressing them in the future. So if you've submitted issues and PRs, look forward to communications from us or a member of the team, because we want to dedicate some engineering time to that.
And apart from that, we've had quite a few releases in the last few weeks, mostly addressing some issues around links and variables, stuff like that, but some new features got in too, like VM hot-swap. That will enable you, essentially, to create a new VM and then delete the old one, instead of deleting and then waiting for the new one to come up. So this should help with upgrades and scenarios like that, and many other things.
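For operators who want to try this: hot-swap is controlled from the deployment manifest's update block. A minimal sketch, assuming a recent director and a deployment named cf — the ops-file name and deployment name here are illustrative, not from the call:

```
# illustrative ops file (enable-hot-swap.yml): switch the update
# strategy from the default delete-create to hot-swap
- type: replace
  path: /update/vm_strategy?
  value: create-swap-delete
```

Applied with something like bosh -d cf deploy cf.yml -o enable-hot-swap.yml; with create-swap-delete the director brings up the replacement VM first and deletes the old one only once the new one is running.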
F
That's
all
in
the
more
detailed
release,
notes
that
we
are
putting
up
now.
So
if
you
have
a
look,
for
example,
at
the
release,
notes
for
260
7.2
and
up,
you
will
see
that
we
put
much
more
detail
into
what
got
in
for
fixes
improvements
and
stuff
like
that.
So
obviously
looking
forward
for
our
feedback
from
everybody
in
the
community
about
how
we
are
doing
up
to
now
and
I
would
be
in
add
some
in.
F
C: Also — I'm just gonna keep interjecting additional things — the project that was proposed by SUSE, which I think the proposal document describes as containerized CF, but it's the Fissile codebase that takes a BOSH release manifest and turns it into an image, has been accepted to incubate as of, I think, yesterday or early this morning.
A: We can let people know to go to the link. All right — so on Extensions there's quite a bit, but I'll put up a link to the latest PMC updates. Basically all the different projects are moving along. Things to highlight: the autoscaler is adding a UI, so look for that soon. If you are a user of the app autoscaler, I would say talk to the team or go to their channel.
They are kind of figuring out what's going on for version 2, which is a reimplementation in Go, so look for some kind of an inception of that soon. You got an update on Service Fabric last time. And then, as I mentioned, Stephen Levine at Pivotal is putting out all kinds of stuff around running CF locally, so check it out — if you haven't looked at those projects, I think they make the local experience a lot easier. And then, of course, the Stratos project.
Probably the highlight of Extensions is just giving everybody a chance to build Cloud Foundry offerings — I know for a fact that at IBM we're using it — so check it out. And then Blockhead: you will get more at Summit. So that's it for Extensions; I'll put up a link to the update so you can see things. I won't take any more questions right now, because we've got to move on to the two talks — very exciting, very important talks. First we'll start with Pivotal; this is an update, so to be clear.
Shannon is the PM leading all this; I pinged him and he couldn't be there, so we have Shubha, who is, I guess, the technical lead there. She's gonna give us an update on what's going on related to Istio and Envoy routing. So, Shubha, if you want to share your screen — yeah.
D: We believe that this will help us deliver value to our users faster. Over the years the routing team has accumulated a long list of features that our customers have requested, and we've been successful in delivering these in an iterative way, but we believe that Istio and Envoy, being fairly feature-rich, will actually help us deliver those features faster. We actually shared a proposal at the beginning of the year — I have a link to it in the presentation, and I'll share it out later — which details our motivations and assumptions and all of that. So that will give you a better idea of what we wanted to do and why, other than just features.
D
I
think
it
also
helps
us
think
about
a
lot
of
runtime
teams
have
been
thinking
about
how
we
can
provide
shared
services
between
CFC,
our
and
CFAR,
and
we
hope
this
is
probably
one
way
we
can
do
that.
It
also
helps
us,
simplify
the
Cloud,
Foundry
routing
architecture
and
I'll.
Go
into
that
in
more
detail
as
to
how
it
does
that,
so
in
terms
of
how
the
Cloud
Foundry
routing
data
plane
works.
Today,
this
is
a
very
simple
illustration
of
it.
D
You
typically
have
a
load
balance
the
go
router
for
HTTP
routing
in
an
app
and
the
and
domain
balance.
We
have
a
parallel
routing
plane
for
TCP
routing,
which
is
load
balance
or
the
TCP
router
and
app,
and
you
have
domain
for
the
TCP
router.
So
this
is
how
it
works
today.
How
we
have
set
it
up
in
our
test
environments
is
that
we
have
sort
of
this
parallel
routing
plane
that
uses
envoy
as
for
edge
routing,
and
we
have
another
domain
map
to
the
Envoy
in
front
of
the.
We have another domain mapped to the load balancer in front of the Envoy. So if you look at my screen right now, you can see that I already have a deployment running, and in addition to all the usual VMs we have these three: the istio-control VM and the istio-router VMs. The istio-router VMs are basically the Envoy VMs, and the istio-control VM has Pilot and Copilot. You also have something called the CC route syncer, which is a bulk-syncing job that helps sync routes between CC and Copilot — I'll go into that on the next slide. So that's how it looks in practice and how we've set it up, and we're hoping that, for experimental purposes, this will help us until we get to feature parity with Envoy — it will help us test and get new features out to customers and users.
D
So
this
is
how
the
CF
routing
control
plane
looks
like.
Today
we
have
the
the
route.
Emitter
gets
routes
from
the
cloud
controller,
30
a
go
BBS
to
the
route
emitter-
and
this
is
not
completely
efficient
in
that
Diego
actually
passes
on
information.
It
really
doesn't
have
to
know
these
droughts
get
to
the
nads
and
the
go.
Router
gets
those
routes
from
the
Nats,
and
this
is
basically
for
HTTP
routing
for
TCP
routing,
there's
the
routing
API
and
the
TCP
router.
D
D
D
This also means that we have only one router: Envoy already supports HTTP and TCP, and there are plans to support UDP, but that's not in there yet. So this does help us in having just one router instead of two, and it cleans up the orchestration layer, with Copilot basically syncing with CC and Diego directly.
D
So
that's
that,
basically,
is
how
we've
implemented
it
right
now.
I
won't
go
into
the
implementation
details
here,
but
because
I
want
to
like
go
through
a
demo
but
yeah
up
until
now,
we've
contributed
back
to
the
sto
community
in
terms
of
the
sto
gateway
brought
people.
The
first
ones
to
get
to
want
to
use
on
was
a
ingress
gateway,
and
you
know
using
its
tier
for
it,
so
we
contributed
that
back
to
the
stereo
community.
We
have
basic
HTTP
routing
setup
through
on
one
I
will
walk
through
that.
We can just make changes to Copilot, and that standard interface helps us in interacting with Pilot, so we are contributing that work back into the Istio community. We are focused on scaling of the control plane, and we are, I think, one of the few teams actually testing the control plane, as opposed to the data plane.
D
I
think
the
SEO
community
has
done
a
lot
of
work
in
terms
of
testing
envoy
and
mixer
for
as
data
plane,
components
for
latency
and
throughput,
but
there's
not
a
lot
of
teams
that
are
testing
the
control
plane.
So
we've
been
testing
the
controller
and
we've
gotten
to
about
500
routes
and
500
app
instances
which
is
not
much,
but
even
then,
at
every
level
we've
found
issues
with
our
integration,
either
with
our
integration
or
in
pilot
itself,
and
we
fix
some
issue.
Some
of
those
issues
have
been
fixed
by
the
sto
community
themselves
and
yeah.
That's one of the bigger things we're doing. We're also working on weighted routing, or traffic-management features, and we hope that by next month we'll actually have something to demo there. I have a bunch of resources here that I'll share out so folks can go through them, but the main one is basically the release that we have, which has Copilot, Pilot, and the components that we are using, so you can kick the tires with it.
D
So
it's
basically
a
bunch
of
you
know:
micro-services
the
product
page
interacts
with
the
reviews
page
and
the
details
page
to
get
information
and
then
there's
a
ratings
app
that
actually
sends
back
the
ratings
to
the
reviews
up,
and
there
are
three
versions
of
the
reviews
app.
We
only
have
one
version
currently
in
our
demo
that
we've
set
up
because
we
are
not
doing
weighted
routing
yet
so
this
is
basically
a
canonical
example.
The
sto
community
has
been
using
and
I
have
this
set
up
already.
D
Domains,
I
have
two
domains
here:
the
sto
dot
sto
acceptance
domain
is
what
is
mapped
to
the
Envoy
gateway
in
my
environment
and
I
already
have
pushed
the
product
rating
reviews
in
details.
App
I've
mapped
the
product
page
to
the
sto
domain
and
I
have
like
internal
the
the
reviews
ratings
in
detail.
Apps
mapped
to
the
internal
domain.
I
also
have
network
policies
set
so
think.
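The setup just described can be sketched with standard cf CLI commands. This is an illustrative reconstruction, not the exact commands from the call — the app names, the edge domain, and the port are assumptions based on the description:

```
# push the four bookinfo apps
cf push productpage
cf push reviews
cf push ratings
cf push details

# expose only the product page on the domain mapped to the Envoy gateway
cf map-route productpage istio.istio-acceptance.example.com --hostname productpage

# keep the backends on the internal domain
cf map-route reviews apps.internal --hostname reviews
cf map-route ratings apps.internal --hostname ratings
cf map-route details apps.internal --hostname details

# network policies so the product page can reach the internal backends
cf add-network-policy productpage --destination-app reviews --protocol tcp --port 8080
cf add-network-policy productpage --destination-app ratings --protocol tcp --port 8080
cf add-network-policy productpage --destination-app details --protocol tcp --port 8080
```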
D
Instead
of
having
three
versions
of
reviews,
I
only
have
one
version
of
review
set
in
my
environment
and
that,
basically,
the
way
this
pans
out
is
that,
if
I
open
the
product
page
I
can
see
that
you
know
there's
a
page
which
has
book
reviews
and
it
reads
from
the
ratings
app,
and
we
only
have
reviews
v3,
which
basically
shows
red
stars
and
non
black
stars
and
no
stars.
So
you
can
see
that
this
is
pulling
out
the
information
from
the
reviews,
v3
application.
D
B
A
A
D
D
Yeah — so there isn't a way to do that right now; we're hoping we'll get there by next month. I think we actually already have stories in our backlog to implement that, and we've put a lot of thought into it: we have a proposal out there for weighted routing and the user experience for that. So we have thought it through.
As of now we're hoping to do it — I think in a month's time we'll have something there to demo, but yeah, that's basically it. We also have a proposal I can share out; it's linked in the presentation I have, and it talks in detail about the CAPI considerations, where we are with it, and all the options. We've considered the option of either percentages or numeric weights and what works best, so yeah, I'm happy to share that out. Cool.
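To illustrate the percentages-versus-numeric-weights trade-off being weighed: numeric weights are relative, so a router normalizes them into traffic shares at routing time. A toy sketch of that arithmetic — an illustration of the concept, not CF's actual API:

```shell
# two versions of an app with numeric route weights 1 and 3
w1=1; w2=3
total=$((w1 + w2))
p1=$((100 * w1 / total))    # traffic share for v1
p2=$((100 * w2 / total))    # traffic share for v2
echo "v1=${p1}% v2=${p2}%"  # prints: v1=25% v2=75%
```

Percentages are easier to read but must sum to 100 and need adjusting whenever a version is added or removed; numeric weights stay valid as versions come and go, which is one reason both options were on the table.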
D: So yeah — like I said, we've set up our framework to test, but we are doing it iteratively. If you started with 50,000 routes, I'm sure everything would pretty much break, and not just on the Istio side but even in our integration. So we have a framework there and we have started.
But like I said, I don't think a lot of folks are testing the control-plane components. There is a lot of testing of Envoy and Mixer, but not a lot on the control-plane components. Also, our use case is a little different, right? We are creating hundreds of virtual services — which are like those route rules, as they used to be called previously — and we are updating them very frequently.
D
So
we
actually
found
like
that
itself
is
not
you
know
very
efficient
in
sto,
so
I
definitely
think
that
that
is
a
risk
and
we
are
keeping
an
eye
on
it.
We
both
like
both
Shannon
and
I,
attend
the
performance
and
working
group
committee
meetings
and
I
think
that's
a
good
place
for
us
to
start
chiming
in
I,
like
NIMH,
also
in
the
routing
team
experimented
with
actually
having
a
pair
that
contributes
to
a
steal
and
works
on
like
upstream
sto
components
if
you've
just
not
like
come
to
scaling
as
our
priority.
G
D
A
B
A
G
G: So actually — and this is even better than just a single Jules presentation, this is a double Jules presentation. I'm gonna talk for just a minute or two, and then I'm going to hand over to another Jules with a real, actual demo — so you should all recognize how lucky you are.
Let's discuss that after. So the idea is: we've got this amazing workflow, which we're all really happy about, with Diego and BOSH and all this stuff, but you have the Kubernetes community kind of slowly reinventing some of that stuff. It would actually be really nice to offer them an easy way to get that CF user experience — to try it out, to see that workflow that I think most people on this call kind of love, right? And so that's what this does.
G
This
is
a
plugin
basically
for
the
cloud
controller,
so
you've
got
all
the
same
cloud
controller
api's
everything
works
its
normal,
but
it
enables
the
scheduler
bits
to
be
pluggable,
and
so
you
can
choose
to
use
kubernetes
instead
of
diego
for
actually
scheduling
your
containers.
If
you
already
have
skills
and
stuff
with
kubernetes,
which
means
you
get
a
kind
of
smaller
amount
of
CF
to
manage,
and
maybe
maybe
let
someone
else's,
let
that
especially
be
someone
else's
problem.
H: Yeah, right — okay, so I think I will just dive right into the demo, because we don't have too much time and there are a few things that I want to show. So I guess everybody is seeing my terminal now. I have four panes, and I did some preparation for this. For our setup, we have BOSH Lite on SoftLayer and we also have a Kubernetes cluster on SoftLayer.
H
Our
arena
component
is
currently
running
for
development
reasons
on
kubernetes
as
a
container,
but
they
are
connected
with
foundry
and
what
I
did
here
is
a
watch
on
the
coop
get
pots.
So
you
can,
you
can
see
what's
going
on
on
the
cluster.
The
second
app
here
is
actually
Rini.
I
call
the
Domini
and
the
other
one
is
one
running
app
one
running
cff
on
the
right
side.
H
And
the
staging
is
actually
performed
on
Diigo.
Currently,
we
could
also
do
it
on
with
a
rainy,
but
we
currently
don't
have
this
dream
of
deluxe
setup.
So
that's
why
we
decided
to
for
now
to
use
the
diego
staging,
but
the
app
is
going
to
be
deployed
on
kubernetes
and
you
will
see
after
staging
that
the
app
will
actually
appear
on
kubernetes,
so
just
give
it
another.
Second,
everything
is
staged.
H
Awesome
so
you
already
see
that
the
container
is
getting
created
and
it's
already
running,
and
so
we
last
week
passed
the
smoke
tests
actually
and
I
will
just
see
there
show
the
features
that
arena
is
currently
supporting,
DCF
features.
So,
for
example,
we
could
say
stop
so,
let's
first
show
our
apps.
So you should see the same app like four times — here, this. And of course, when you do a cf apps, you see the correct instance count. We can also scale it back down to one app, so terminating the apps — they should also report the correct instance count. And of course we can curl our app via CF.
H
And
there
we
go
hi
I'm
Dora,
which
is
awesome,
and
we
are
currently
working
on
the
Rattus
Rattus
integration,
which
is
log
regatta
on
kubernetes,
and
we
can
also
use
the
loCash
to
show
the
logs
and
that
should
also
work.
So,
for
example,
if
you
use
lakh
cash
tail
and
then
we
need
the
app
goo,
it's
CF
help
demo
geo,
ID
and.
H
Put
it
here:
let's
do
some
more
curls,
because
I'm
on
every
curl
I
do
some
locks
and
if
we
know
lock
has
tail
you
see
deluxe
and
if
I
curl
again
you
see
that
there
should
be
a
lot
more
just
that
you
believing
in
them
not
lying.
So
here's
till
last
look
and
you
can
also
do
it
with
CF
tail.
Actually,
so
you
can
say
CF
tail
demo
and
it
should
also
show
you
all
the
locks,
which
is
also
really
nice.
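The demo flow above, reconstructed as an illustrative session (the app name demo is from the call; the exact invocations and the apps domain are a sketch, not a verbatim capture):

```
# one pane: watch the pods appear on the Kubernetes cluster
watch kubectl get pods

# another pane: normal CF workflow; Diego stages the app,
# Eirini schedules the resulting containers on Kubernetes
cf push demo
cf scale demo -i 4
cf apps                       # instance counts reflect the Kubernetes pods
cf scale demo -i 1

# hit the app and tail its logs via log-cache
curl https://demo.<apps-domain>
cf app demo --guid            # GUID needed for the raw log-cache client
cf tail demo                  # log-cache CLI plugin
```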
G: I'm gonna pre-answer one question, because I know someone will ask it — it's what I would ask: how many CATs are passing? It's a little less than 50% of the CATs passing at the moment, and we're cracking through the remaining ones. We think there's a pretty manageable amount of work to get to 100 percent of the core CATs, and 100 percent of the core CATs is the point where we'll start saying to people, "Hey, you might actually want to try this out."
H
We
try
to
make
it
really
readable.
So
you
can
see
what
needs
to
be
done
to
do
that.
If
you
want
to
do
it
manually
and
you
can
go
to
all
the
functions
and
try
to
do
it
yourself
manually,
if
you
don't
like
to
run
the
scripts,
however,
if
you
just
want
to
easily
deploy,
you
really
just
use
the
scripts
as
documented
on
the
readme
and
try
it
out
yourself
on
your
local
machine.
Ok,
that's
that.
A: It's a European effort between IBM and SAP, and obviously they would be happy to have your contributions — if you're interested, ping them. I believe Simon Moser is also on the call, so there's kind of a big team doing this. So let's open it up for questions; we have nine minutes. We don't have to use the whole nine minutes, but hopefully we have more questions. I have some questions, but I'll wait. Anybody?
G: As for the trickiest things coming up to pass all the CATs — I don't anticipate any problems, and actually, I should dig it out: you can even have a look at our tracker, and there's a list of the stories that we need to do to complete CATs. Although there are about 40 CATs left, a lot of them group together — several are really all like one test.
G
So
we
have
a
list
of
the
stories
that
are
coming
up
and
we
have
them
prioritized
in
order
of
the
firt.
The
first
ones
are
kind
of
like
that.
The
things
that
you
really
would
want-
and
let
me
see
if
I
can
find
it
and
then
there's
a
few
that
are
I,
wouldn't
say
there
they're
going
to
be
problems.
Nothing
seems
like
it's
going
to
be
a
huge
problem
that
we
can
see,
but
there's
a
couple
things
where
we're
not
actual
on
each
more
Rubin
CF
kind
of
different.
G
So
an
example
of
that
it's
a
better
cat
which
asserts
that
if
you
don't
have
enough
resources,
you
get
an
error
message
in
the
CLI
saying:
resources,
insufficient
resources
and
that's
just
super
hard
to
do
in
kubernetes.
Kubernetes
doesn't
really
have
the
concept
of
not
having
enough
resources
because
it's
kind
of
all
asynchronous
in
kubernetes
and
it
just
kind
of
will
retry
into
other.
Our
resources
actually
see
what
I
mean.
G
So
it's
it's
a
couple
of
things
like
that,
where
I
think
we
probably
won't
want
to
pass
those
cats
there's
two
or
three
cats
which
we
think
they're
just
they're.
Just
representative,
with
just
a
different
way
of
working,
we
may
be
criminally,
should
do
it,
but
maybe
I
think
it's
one
of
those
fundamental
differences
in
kubernetes
like
this
kind
of
convergence
of
states,
rather
than
kind
of
imperative,
nuts,
where
it
might
be
relatively
difficult,
it's
not
impossible
to
implement
it
could
implement
it.
G
B
B
B
G
G: So, although — all the staging-side stuff still works with droplets, because that's the best way of keeping all the Cloud Controller APIs consistent. You can still do things like roll back to a previous version of the droplet, which is one of the Cloud Controller features. So all of the staging stuff still works with droplets, but at runtime we convert the droplet into an image, so everything that Kubernetes sees is an image — an image that we construct based on the rootfs plus the droplet.
G
We
just
basically
build
an
image.
It
describes
the
routes
of
s
plus
the
droplets,
and
then
we
use
that
for
all
the
kubernetes
side
stuff,
but
it's
using
the
droplet
that
Cece
thinks
that
should
have
and
the
route
defense.
That's
the
current
deployed
booth
of
us
so
kind
of
issue
it
the
best
of
both
worlds.
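Conceptually, the conversion described here is equivalent to layering the droplet on top of the CF rootfs — something like this Dockerfile-style sketch (the base image name and paths are illustrative, and Eirini's actual tooling does not literally run a Dockerfile):

```
FROM cloudfoundry/cflinuxfs2    # the currently deployed CF rootfs
ADD droplet.tgz /home/vcap/     # the droplet the Cloud Controller staged
# the start command comes from the droplet's staging metadata, not from here
```

Because the rootfs is a separate layer, deploying a patched rootfs rebuilds the images without re-staging the apps, which is the patching-for-free point made below.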
If you see what I mean. And Brett is asking whether even the buildpacks work — the buildpacks in CF: yes, and not only that but, as Jules was mentioning, at the moment for the demo we use Diego to do the staging, because we had nice log tailing there. At the moment, if you turn on staging in Eirini, it actually uses Stephen's packs.
G: The build there is slightly different: it builds container images directly, and it also uses packs, whereas in our build we aren't building an image — we're building a droplet. We do it the other way around: we build a droplet and then create an image dynamically, and the advantage of that is that you get all the patching stuff for free. So when you deploy a new version of the rootfs with the patches in it, that's automatic — we rebuild.