From YouTube: Running Cloud Foundry at Comcast
Description
Running Cloud Foundry at Comcast, with Neville George, Sam Guerrero, Tim Leong, Sergey Matochkin
A: Our application development teams use our platforms to run and develop key applications that some of you might be familiar with as Comcast customers. These platforms include things like OpenStack, VMware, and obviously Cloud Foundry. A quick note: we'll also be presenting at the OpenStack Summit next week, so if anybody here is attending that, we look forward to seeing you at that conference as well.
A: I sit on our cloud architecture team, where we provide strategic direction for cloud services. It was actually our team that made the decision to go with Cloud Foundry, as opposed to some of the other PaaS providers out there, and I'd welcome a conversation with any of you throughout the conference about why we made that decision.
A: Now, what I'll talk to you about is a challenge we came up against in supporting custom URLs for our customers, so I'll be talking about that a little bit.
Sergey is our application platform architect. He works with our development teams and makes sure they are leveraging proper architectures and design patterns that fit well within the cloud. I think everybody is aware of the twelve-factor app, and Sergey is the champion for that in our company.
A: How does Cloud Foundry route that traffic now that it's trying to handle a URL that is foreign to it, and that has to be supported on both sites? Then, how do you enable SSL in a situation like that, and how do you make it on demand? The first thing I'm going to talk to you about is HTTP Host header replacement.
A: Basically, when that URL makes it down to a local Cloud Foundry instance, we have our load-balancing layer do header replacements at the HTTP layer. That allows Cloud Foundry to understand where to route the traffic, based on how our HAProxy layer translates one URL into a locally hosted URL. That enables GSLB support, so people can have a globally available URL that translates properly once it reaches a site. And then there are multiple SSL certificates.
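The Host header replacement described here can be sketched in HAProxy configuration; the hostnames and backend name below are hypothetical, not Comcast's actual setup:

```haproxy
frontend www_in
    bind *:80
    # Rewrite the globally routed hostname to the locally hosted route
    # that this Cloud Foundry instance's router knows how to map.
    http-request set-header Host app.local-cf.example.com if { hdr(host) -i app.example.com }
    default_backend cf_routers
```

With a rule like this per custom URL, the Cloud Foundry router only ever sees hostnames it already has routes for.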
A: When you have multiple URLs that need SSL enablement, you're going to have a bunch of certificates, and those certificates need to be hosted on your HAProxy layer; there are going to be multiple certificates for a single HAProxy layer. That presented some challenges for us as well. How do we get around that?
A: What we do is leverage Puppet. Puppet is responsible for making sure that the HTTP header replacements are properly injected into the HAProxy configs, and we put Hiera in front of that so that the values are stored in a database. What that enables is that you can put any web front end on top of it.
A: Any UI you want can sit in front of your Hiera database, and it will dynamically update the database, which dynamically updates Puppet and then updates the HAProxy layer. This works well for HTTP headers, we can make it on demand for our customers, and it also works with SSL certificates.
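As a rough sketch of that pattern (keys and hostnames hypothetical), the header mappings can live in Hiera data that a Puppet template renders into the HAProxy config:

```yaml
# hieradata/common.yaml (hypothetical keys). A Puppet template iterates
# over this hash, emits one http-request set-header rule per pair,
# and notifies the HAProxy service to reload when the data changes.
haproxy::host_rewrites:
  app.example.com: app.local-cf.example.com
  shop.example.com: shop.local-cf.example.com
```

Because the data lives behind Hiera rather than in the template itself, any UI or API that writes the backing store effectively reconfigures HAProxy on demand.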
A: If our users need SSL certificates that are custom or specific to their application, they can do it through that service as well. And as long as your HAProxy layer supports SNI, you can support multiple certificates for a single IP hosted on your HAProxy layer. So that's the first challenge.
B: Thank you. Hello, everybody. My name is Sergey Matochkin. I work on the architecture team, and I'm mostly responsible for the layer between Cloud Foundry and our development community. Today I want to focus on one aspect of Cloud Foundry: managed services and the managed services API. Cloud Foundry provides a great, very convenient way to create managed services like MongoDB, RabbitMQ, you name it.
B: Cloud Foundry comes with these managed services, and a managed service can be instantiated, can be created, with just one command line or a few API calls. When we started to release Cloud Foundry to our development community at Comcast, our developers immediately started to use it, and they saw the value for the development process, because it gives them the freedom to stand up their backing storage right away, use it, and remove it when they don't need it. It's completely self-service; they don't need help from anybody.
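That one-command flow looks like this with the cf CLI (the service, plan, and app names here are hypothetical and vary by installation):

```shell
# Create a MongoDB service instance, bind it to an app, and restage.
cf create-service mongodb default my-mongo
cf bind-service my-app my-mongo
cf restage my-app

# Remove it when it is no longer needed; fully self-service.
cf unbind-service my-app my-mongo
cf delete-service my-mongo
```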
B: What came with this attachment to the slickness of managed services is that developers started coming back and asking: is this service supported as a managed service? Is that one? We quickly realized that there is strong demand for managed services and that we needed to expand our library with services we would create on our own. The first couple of managed services that everybody asked for, and that we felt absolutely needed to be created right away, were a logger and an outbound proxy.
B: The logger is sort of obvious: Cloud Foundry has a log aggregator, but the actual consumers need to be able to store application logs somewhere and be able to access and search them. The second thing is a proxy layer. A proxy layer is required to increase the security of our applications, because we want very strictly controlled communication between our applications and partners or third parties like Amazon Web Services.
B: With this understanding of the need to extend our managed service library in Cloud Foundry, we developed three principles for the framework we would build: development effort should be low, it should be easy and simple to use, because we need to keep extending the library, and, last but not least, it should support the service lifecycle; in particular, we need to be able to update our services.
B: That is where Docker comes in. We were able to justify the presence of Docker here, and the justification is this: Docker provides portability, so you can develop Docker containers and guarantee they will run consistently across different environments. Second, Docker provides just the right level of isolation that we need, and it's very economical to run, because we can run multiple Docker containers on the same VM without much overhead. Docker is also convenient because it helps support the application lifecycle.
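The density and isolation argument can be seen directly with Docker itself: several memory-capped service containers share one VM, each with its own host-port mapping (image and container names hypothetical):

```shell
# Two service containers on the same VM, each capped at 1 GB of memory,
# each exposed on its own host port; far less overhead than two VMs.
docker run -d --name svc-logger -m 1g -p 10080:80 example/logger
docker run -d --name svc-proxy  -m 1g -p 10081:80 example/outbound-proxy
```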
B: With these building blocks, we needed some glue to build the solution. Here on the right you can see a pool of VMs that we run on OpenStack. At any point in time, each VM might run several Docker containers, and each Docker container represents a service. To manage the pool, we created a Docker pool controller. The Docker pool controller is responsible for tracking and managing all the resources in the pool, including VMs, Docker images, Docker containers, port allocations, and storage.
B: All of this is managed by the pool controller, which contains three elements: a container manager, a database of the resources, and a capacity manager. The capacity manager constantly evaluates the capacity of the pool and ensures that at any point in time we have enough resources in the pool to spin up more services and more containers. This way we don't need to wait for a new VM to boot; we have already pre-provisioned enough resources for the next few services to start. The container manager is the core of the solution.
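A minimal sketch of the capacity manager's job, under the assumption that it simply maintains a headroom of pre-provisioned container slots; every name and threshold here is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Pool:
    """Tracks the VM pool as a count of free container slots."""
    slots_per_vm: int = 4
    free_slots: int = 0
    vms: int = 0

    def boot_vm(self) -> None:
        # In the real system this would ask OpenStack for a new VM;
        # here we only account for the capacity it would add.
        self.vms += 1
        self.free_slots += self.slots_per_vm


def ensure_headroom(pool: Pool, headroom: int = 6) -> int:
    """Pre-provision VMs until enough free slots exist for the next
    few service requests, so no request ever waits on a VM boot."""
    booted = 0
    while pool.free_slots < headroom:
        pool.boot_vm()
        booted += 1
    return booted
```

Run periodically, a loop like this keeps the pool ahead of demand, which is what lets a new service start in seconds instead of minutes.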
B: The container manager is responsible for bringing up new Docker containers, and the services inside those containers, or tearing them down, based on requests from the consumer of this resource. And the consumer of all of this, as you see here, is a service broker. For those who are not familiar with the service broker interface and API in Cloud Foundry: when the Cloud Foundry controller at the top needs to provision a service, it talks through the service broker API, and the service broker API is very simple.
B: It's literally about five RESTful calls that need to be implemented. The service broker API defines how the Cloud Foundry controller requests new services. That API is easy to use, but it has nothing to do with the actual provisioning of infrastructure. That's why we put the Docker pool controller in place, to manage all the infrastructure elements; and once we have the Docker pool controller, adding new pieces to our services library becomes a trivial task. Just as an example, this is a technical conference, right?
B: I want to show you an example of a request and response to the Docker pool controller. In this case, the service broker is asking it to create a new Docker container using a specific image, a Comcast logger image in this example, to allocate one gigabyte of memory for the container, and to expose a couple of ports, 80 and 5000, to the consumer.
B: When the Docker pool manager gets this request, it checks the inventory of available resources, identifies a VM that can run the specific image and has enough memory and resources, allocates ports for the port mapping, and starts a new Docker container. Then it returns information about that container back to the requester.
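A request/response pair of the shape described might look like the following; the field names and values are illustrative, not the actual wire format from the talk:

```json
{
  "action": "create",
  "image": "comcast/logger",
  "memory_mb": 1024,
  "expose_ports": [80, 5000]
}
```

and the controller's reply, giving the requester everything it needs to reach the new container:

```json
{
  "container_id": "6f1a2b3c",
  "host": "10.24.1.17",
  "port_map": {"80": 31080, "5000": 31500}
}
```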
B: With all of this in place, we fulfill all three of our goals. We can very easily extend our library of managed service offerings, because the implementation of the broker layer becomes trivial, and we do all the provisioning of the actual infrastructure through a very simple, straightforward API.
C: First, I'd like to thank everyone for the opportunity to share a little bit of our story with you today. This is my first Cloud Foundry Summit and I'm really excited to be here. At Comcast we have a really small engineering team compared to the enormous virtual footprint that we have, so the thought of bringing in a new architecture was a little daunting. At first we thought there were a lot of things that might change for a service model.
C: That model has been really successful for us, but I had to remind myself that this is the kind of thing I was thinking about twelve years ago, when I was handed a server and asked to see if I could get VMware ESX to run on it. Over the last few years with the infrastructure-as-a-service team, the focus has really been: how quickly can we deploy VMs, and how can we automate those processes?
C: That's great, and for most teams it's an attainable goal, but it leaves our developers and application owners, our customers, with quite a few tasks to complete after receiving their VM or group of VMs. As most of you know, receiving a new VM kind of leaves you with a bit of a black hole: you have a nice VM, but there's still quite a bit to do with it after that.
C: So we wanted to change that for our customers. With Cloud Foundry, we've introduced a paradigm shift in thinking for our architecture and engineering teams. We want to change our mentality to focus more on the end product of the services we provide, versus just deploying a VM quickly.
C: With Cloud Foundry, we introduced a self-service model to our teams of application owners and developers. That has really decreased the time between release cycles for these teams and really helped them out, but the key to that agility is careful coordination between developers, architecture, and engineering.
C: We have to be more involved now, to make sure we are part of that process and can offer more of a holistic service model and service offering. We do that by inserting ourselves further along the assembly line, if you will. With Cloud Foundry, we've really offered more of a self-service model for our application development teams.

C: What that model is doing for us is allowing us to be more engaged. We can no longer say it's okay to give our customers a brand-new car that they have to take home and assemble the transmission for before they can drive it. We believe that if we make our factory better, everything else will improve.
C: We have had some technical challenges, not difficulties, challenges, as with most new things, when introducing Cloud Foundry. Some of those challenges have been having to maintain our CMDB to really reflect back from Cloud Foundry to our applications. Before, it was really easy: we had an application that we would map to a VM, or to a group of VMs.
C: Another thing is the network. We've had to really expand a lot of the services we provide, getting more involved with firewalls and GSLB and load balancing, things we really didn't do before; it was really on the application owner to figure out how to get their VMs to run. And then, finally, there's maintaining Cloud Foundry itself: learning how to deploy buildpacks and create custom buildpacks, and how to introduce new stacks.
C: There's also how we were going to keep up with the releases of Cloud Foundry in general, which can be a little aggressive for a team like ours, since we really weren't heavily involved in a lot of open-source or community-driven projects in the past. A lot of that was new to us. But we found that these challenges weren't as big as we thought they would be, and they've actually given us a lot of new opportunities that we didn't really expect.
C: We've learned to really interface more with our customers, where before we were just kind of in our engineering hole: we gave you a platform, and it was kind of your VM to take care of from then on. It's also helped us understand more about how the products and services we provide really contribute to the end goal of what we're trying to do at Comcast.
C: It's helped us understand what our applications are doing and how they affect the business, and we're more a part of that process now. It's also helped us become more T-shaped engineers: it's really increased the set of skills we have, and it's helped us develop and learn this new model, this DevOps model we're now part of, which is a really exciting place to be right now. So our experience with Cloud Foundry so far, from an engineering perspective, has been really positive.
C: It's really helped us learn a lot of new things, helped us focus on and learn about all these products, and it serves the end goal of agile product development and time to market. So with that, I'd like to thank you one more time, and I'll pass the mic over to my friend Neville George. Thank you.
D: Hi everybody, hopefully you can all hear me, right? My name is Neville. I work on the cloud services engineering team, along with Sam. I would say Sam's a very nice guy: every time Tim and Sergey come up with ideas, we still have to support them and keep our sanity, so it's really very nice of him to do that. What I'll do today is talk about some of the operational aspects of Cloud Foundry that we have found.
D: I'll talk about our Cloud Foundry environment at Comcast and some of the tools and things we have put in place in order to support the Cloud Foundry instance we run. I'll cover proactive monitoring, and also visibility into your environment as it relates to Cloud Foundry: how these have helped us, what we have done, and which tools we have used to support the environment. So, starting off with proactive monitoring.
D: The success of any engineering team is in its ability to actually prevent an outage: proactively monitoring, looking at the key performance indicators, to know what is building up toward an outage. In addition, it's great if you can reach out proactively to your customers, or even better, if you can resolve problems before they surface. Take customer quotas, for example: if teams are developing and innovating and they're starting to run out of quota...
D: ...if we can proactively manage that and make sure they have enough space, it definitely helps avoid that midnight escalation call saying, hey, we are running out of space. Also, irrespective of how proactively you manage an environment, it's inevitable that there will be outages. When an outage occurs, the most important thing is to make sure it doesn't occur again.
D: What are the additional configurations that can help us proactively manage all these things before we complete the handoff to the operational team? We have chosen Nagios for our proactive management, and there's a lot of information available for you to configure.
D: What we have done, like the T-shaped person Sam mentioned, is manage the complete Nagios instance ourselves, and we make sure we set up all the counters and key performance indicators we need to monitor. So if there is a problem and we feel that, hey, X is not being monitored, we are able to add that check right away.
D: Moving on, we'll talk about visibility into the environment. It's very important that we understand what is in our environment. Cloud Foundry has a great CLI that you can use to get a lot of information; the only problem is that it's not a single pane of glass, even though you can see everything and click through everything. We had the same problem ourselves.
D: What we found is a tool called the admin UI; it's available in the Cloud Foundry incubator, and we have used it. Before I move on, a show of hands: how many of you know about the admin UI? Okay, great, we have a few of us. For everybody who doesn't know, it provides a GUI for knowing your organizations, your spaces, who has access to your spaces, how many spaces you have, and your quotas.
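The admin UI aggregates what you would otherwise gather one CLI query at a time; for reference, roughly the same data is reachable through commands like these (`my-org` is a placeholder):

```shell
cf orgs              # list organizations
cf spaces            # spaces in the currently targeted org
cf org-users my-org  # who has access to an org
cf quotas            # quota definitions and their limits
cf apps              # apps with instance counts and memory usage
```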
D: You can see what the DEAs are, how they are being utilized, the utilization metrics of your DEAs, and how many applications are running on them. It also shows you the growth of your environment in terms of organizations and spaces, and how your environment has been growing over a period of time.
D: It also aids in certain operational aspects: you can create organizations using the tool, and you can apply quotas to your organizations and things like that. It's been a very useful tool for us. That's pretty much everything I had on this slide to talk about. I'd like to close by saying Cloud Foundry has been great for Comcast, as has having the T-shaped people.
A: Sorry, yeah, so we are running in production; I actually forgot that part. We're running in production, we have several key applications that are in production today, we have a couple of environments, and we're scaling it every day. I wouldn't call it a huge environment at this point, but we're definitely ramping up, and we have several application teams that are very interested because of this platform and its usability. So we're going to be ramping up quickly.
B: Let me jump in on this question. We actually developed two models to do that. One is for simple use cases: we can have a centralized GSLB manager with centralized URL management, and that will work for all applications that want to use this model, so it's not application specific. But if any specific application has, say, a very specific health check and specific rules for failover or sharding, then today they still have to do that themselves.
B: I can say we have a really good training model. We do onboarding sessions with our development teams, and we do brown-bag sessions and videos to make people aware. We focus on the twelve-factor application model, because I think that is very important, and on the overall microservices model: how not just to shape your application, but also to shape your data.