Description
There are lessons to be learned from the biggest open cloud platform AND the most promising container management platform. Join our panel of Kubernetes and OpenStack experts in this fun and informative exploration. In the past year, there have been many discussions about if and how to deliver a joint OpenStack-Kubernetes environment (using Kubernetes as an overlay OR underlay). In this presentation, we'll take a much deeper dive into the potential benefits and downsides of these different co
A
Who here uses Kubernetes already? Okay, good. And who's an operator of Kubernetes? Okay, good — we can have some good conversations afterwards, if you want. Does anyone have Kubernetes in production right now?
A
Right on. Alright, so I wanted to do this talk, and to get this particular group of people up here with me, because there's such a variety of talent, of jobs, and of what they're doing with Kubernetes and OpenStack — and so we're going to come at this from many different angles.
A
That's who we are on Twitter. If you have a question that you want to ask — I'm not sure if we'll have time for it, but go ahead and tweet it and tag us, and we will either get to it here (I'm going to try to monitor it) or we will for sure get to it right afterwards. Just keep the conversation going.
A
It is with his clients, and also teaching classes and teaching his clients how to get certified and how to figure out how to run this with OpenStack and other technologies like Kubernetes. Dan Berg is one of the CTOs from the cloud team at IBM; he's a distinguished engineer, and he's one of my favorite container guys. So whenever I have a lot of questions about containers, I go pick his brain. And Rob Hirschfeld, one of the original OpenStack guys, was on the board for many cycles of OpenStack.
A
So, a lot of open technology up here, a lot of Kubernetes knowledge up here, and these guys are doing it and they're running it. It's going to be really interesting to hear the type of stuff that they're working on. Oh, and I'm Lisa — Lisa-Marie Namphy. I'm the OpenStack ambassador for the United States. I run the user group in the San Francisco Bay Area, the world's largest OpenStack user group. We've got over 6,000 members now.
A
So, if you are in the San Francisco Bay Area, come see me at a meetup near you. I've morphed it into the OpenStack and Containers user group and have run about 12 meetups on Kubernetes. So follow us on meetup.com/OpenStack — we were the first ones, so we got to claim that title and that URL — and you'll see a lot of stuff on containers.
A
There are also a lot of past presentations that I have saved. Every single one of these folks has been nice enough to present at the meetup, even though none of them actually lives near me — they've been great and really supportive of the user groups. So there are presentations and videos on there, and we have a YouTube channel. So, lots of content.
A
If you don't get enough of us here. So, I wanted to talk about this. Let's start out by talking about mainstream Kubernetes adoption, because there are a lot of lessons that we've learned from OpenStack over the years about what it means to be mainstream. OpenStack is now, you know, in year seven — so pretty mature, and out there in a lot of places. But we know the pains that happened; we know what it took for enterprises to adopt it.
B
It's actually an interesting question, and I'll field it first. You know, I think with OpenStack, what we saw was that initially the idea was that this is only for web-scale applications: we're just going to implement a web-scale application service. And that's great, but enterprises that tried to start using it said: that's a wonderful idea for you guys, but the tooling doesn't look like it would really fit what we need for enterprise applications. I think Kubernetes is going through that exact same cycle — initial release, web-scale.
B
Only — that's the only thing that we're going to focus on and work on. And almost immediately, somebody said: well, could we actually model what an enterprise application might look like in Kubernetes? They started implementing some of the services and the tooling that was required to start supporting those applications. So the PetSets, which are now StatefulSets, and persistent volumes with volume claim capabilities — so that there was a way to provide persistence across the environments. These were two things that really made enterprise readiness a part of Kubernetes thinking. It's not there yet, right.
D
And this is a really important point. Kubernetes was built by the team that built Borg, which is Google's container infrastructure scheduler. So it's a second system, or maybe even a third system, so the architecture within Kubernetes is really well baked as far as what it needs to do and how it needs to go. For those of you who had the early OpenStack journey with us, you'll know that we didn't have a lot of things baked, right. Neutron went through a lot of gyrations.
D
You know, the Heat templates, the dashboard, right — we had a lot of sort of false starts with OpenStack. And so it's important, when you look at Kubernetes, to think through this: it isn't just somebody coming up with the idea that we're then going to work out. The patterns for Kubernetes are much more established, whereas in OpenStack we had to evolve a lot more.
C
The other thing to keep in mind here as well is that Kubernetes is a platform that has well-defined integration points and extension points, and to make it enterprise-ready, you, as a provider of a Kubernetes system, need to implement those extension points using tools and capabilities of the environment that you're going to run in. And many times, how you implement those can add greater security, greater enterprise capabilities, to an already pretty rich orchestration environment like Kubernetes, for example.
B
I'm kind of hoping that there isn't one — that there are multiple of us that can use the same exact systems model. I think the OpenStack community had a problem with snowflakes: everybody wanted to build a slightly different OpenStack environment, and that's great because the flexibility is there — open source drives that concept of "make it whatever you want" — but in the Kubernetes world I see a much more consistent model of deployment. People are doing the same sorts of things, with the extensibility on the back end that's really well hidden, right.
B
This ability to talk to different types of persistent volumes, different types of compute nodes — it seems to be handled slightly better in the Kubernetes environment, right. So I don't think we need a single source to say "this is the only way to do Kubernetes," which is, I think, what we got from a lot of different vendors in the OpenStack community saying "this is the only way to go do this."
C
Now, to extend on that, though — I agree that Kubernetes is less of a snowflake, right. Especially if you're going with different cloud providers, the public clouds that provide them, generally you can take one workload and move it between them and it works fairly well; it's very consistent. Where I find some ecosystem support coming in is that while you get the raw tools of a Kubernetes cluster, which are extremely rich, how do you configure one for multiple teams to utilize a cluster, right? How do you set up roles and access control?
C
How do you set up the namespaces? How do you set up the security groups between those? How do you ensure that all the different pieces — all those components, which are very rich in a Kubernetes environment — are actually configured to fit your needs? I think that's where a lot of education is needed, good examples, and probably a niche for some service teams. But I don't think there's going to be one, right. So—
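The per-team namespace, roles, and access-control setup being described can be sketched as three manifests (plain dicts; the team name and verbs are illustrative assumptions, not from the talk):

```python
# Sketch of a per-team namespace + RBAC setup: a Namespace, a Role
# scoped to it, and a RoleBinding granting that Role to the team's group.
def team_namespace(team):
    ns = {"apiVersion": "v1", "kind": "Namespace",
          "metadata": {"name": team}}
    role = {"apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "Role",
            "metadata": {"namespace": team, "name": f"{team}-dev"},
            # What the team may do, and to what, inside its namespace.
            "rules": [{"apiGroups": ["", "apps"],
                       "resources": ["pods", "deployments", "services"],
                       "verbs": ["get", "list", "create", "update", "delete"]}]}
    binding = {"apiVersion": "rbac.authorization.k8s.io/v1",
               "kind": "RoleBinding",
               "metadata": {"namespace": team, "name": f"{team}-dev-binding"},
               "subjects": [{"kind": "Group", "name": f"{team}-developers",
                             "apiGroup": "rbac.authorization.k8s.io"}],
               "roleRef": {"kind": "Role", "name": f"{team}-dev",
                           "apiGroup": "rbac.authorization.k8s.io"}}
    return [ns, role, binding]

objs = team_namespace("payments")
```

Repeating this per team gives each one an isolated slice of the same shared cluster, which is the multi-tenancy question the panel is raising.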
D
Yeah — I mean, there were a lot of service organizations. Mirantis is the go-to example in this case, because they were so good at being the people who were helping do deployments, doing customer service, you know, in the field, extending the product, doing projects where people wanted to add to it. And with Kubernetes, you know—
D
I hope we don't end up with people who specialize in doing nothing but installing Kubernetes. That's just not where we want the value to be, and we definitely don't want the value to be a whole bunch of custom Kubernetes one-offs, right. If you look at what Mirantis was able to do in the OpenStack community — it helped it grow very quickly, but in a less controlled way. You know, I think with Kubernetes, because it's so cloud-focused, you don't need all that, right. You were discussing how much, you know—
B
I think there's a better segregation, right. The Kubernetes layer provides a cleaner separation between the application interface and the underlying infrastructure, so you can have different back-end resources. And yes, there might be consulting needed to do a particular integration — for a storage technology that isn't supported yet, or, you know, an RBAC tool that hasn't quite been integrated in, or all the different network models that could exist under Kubernetes. But the end user is never going to see that, right; the end user in OpenStack—
D
There's another thing that happened in the early OpenStack days that was especially problematic. Sahara is a great example to me, right. Sahara was a project where — "oh, I want big data on OpenStack" — and it became a project. The Kubernetes community is very averse to lumping new big functionality pieces into the Kube — into the community — and that's one of the antidotes for that behavior, where people have this race to be the first project—
D
—that does something, so they can plant the flag and own it. Kubernetes is really not that; it's trying to make the project smaller. So the idea that you're going to bring a big data project and make it part of Kubernetes isn't going to work. But it might show up in the Cloud Native Computing Foundation, which is where they're parking things like that. But that's not competitive. Okay.
C
Some providers provision all the worker nodes and master nodes in one account, in one network, and you basically manage it yourself — versus, like, the system I have, where we provision and manage all the masters independent of the worker nodes, and the customer scales the worker nodes up and down. So there are different management styles. But I think, fundamentally, where this comes in is that the Kubernetes APIs are consistent across the board, right; that abstraction layer is the same across the board.
C
You can take a deployment into one and move it and put it into another, and move it and put it into your on-prem. That's one of the hugest values of Kubernetes. Now, as far as how it runs and how it operates and how it's managed — that's where you're going to find some distinctions. And I'm of the opinion that, because of the extension points that the cloud providers have to implement, they're generally going to have a better implementation of Kubernetes for their environment than anyone else, because they own that environment.
C
They have access to capabilities that others may not have direct access to. So that's generally going to be the case — and I know this, because that's what we're doing as well — and that's going to be true across the board. But it doesn't mean that you have to have a cloud provider in order to get a Kubernetes environment. There are plenty of examples where you can get one without a cloud provider. Well—
B
I think there's an interesting capability here as well, and that is that, because the Kubernetes interface is so consistent — I mean, it is possible to expose differences through the interface, but because it is so consistent — it's easier to start actually realizing the dream of the hybrid cloud, where I can pick from a number of different providers, including potentially my own, and potentially my own multiple sites, to actually implement the applications and distribute them much more efficiently.
B
In addition to the fact that the containerization of these applications potentially dramatically reduces the size of at least the compute element of the resource — maybe not so much the storage element; there's still a question of how you deal with that in a sort of hybrid cloud environment — you suddenly have the ability to say: well, I can deploy some workload into my local environment, some workload into a remote environment, and the models look the same. So that's, I think, incredibly powerful.
B
At the same time, there are things like the Federation model that's being enabled in Kubernetes — there's still a lot of work to be done there — but potentially you even get a single endpoint to say: hey, I would like to use the Kubernetes interaction model and now talk to my Bluemix and my Amazon and my local provider-based models for these things, right.
D
I'm amazed at how much people are putting into containerized infrastructures. The thing that I would say with Kubernetes is: Kubernetes is an architectural pattern, or requires an architectural pattern. So if your configurations aren't designed for that — can't handle, you know, the durability to handle an immutable infrastructure, can't handle stateless types of applications where your worker can just die and be restarted somewhere else — there are a lot of places where it's much harder to make that jump, and probably not worth as much. Well—
C
Every other day. So, I actually manage the Watson workloads at IBM as well, on top of Kubernetes now. Not all of them are over there yet — they have an infrastructure that they started with before, a lot of it VM-based; it's not all going to move from day one — but basically we've set up, and we want to preserve, a Kubernetes environment. So as they move into Kubernetes, I want it to be a native Kubernetes experience, using all the capabilities of Kubernetes, but we integrate.
C
We built adapters to integrate into their existing environments. So, for example, for microservices they have a service discovery that lives outside of the Kubernetes cluster, and we synchronize the services between what's in the cluster and what's outside the cluster, so we get a global view. They still have things running on bare metal and VMs, but it's one environment for them, and they can naturally move things into Kubernetes as they learn more and they re-architect some of their apps. What—
B
—is that there's the Services capability within Kubernetes that even lets you simplify that process of dealing with non-Kubernetes workloads and Kubernetes workloads, leveraging that service model to say: well, look, from your Kubernetes environment — which is often more of the front-end aspect of your systems — I think today you can discover all those static back-end resources, or those resources that have not yet been moved entirely into the Kubernetes world, so that your applications don't have to change their model.
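One concrete way to do the discovery of non-Kubernetes back ends described here is a selector-less Service paired with hand-managed Endpoints, so in-cluster apps resolve the external system by name like any other service. A sketch (the name and address are illustrative assumptions):

```python
# Sketch: front a non-Kubernetes backend with a selector-less Service
# plus manually managed Endpoints that a sync process keeps current.
def external_service(name, ip, port):
    svc = {"apiVersion": "v1", "kind": "Service",
           "metadata": {"name": name},
           # No "selector": endpoints are supplied by the sync process
           # instead of being matched against pod labels.
           "spec": {"ports": [{"port": port}]}}
    eps = {"apiVersion": "v1", "kind": "Endpoints",
           "metadata": {"name": name},  # must match the Service name
           "subsets": [{"addresses": [{"ip": ip}],
                        "ports": [{"port": port}]}]}
    return svc, eps

svc, eps = external_service("legacy-db", "10.0.0.42", 5432)
```

When the backend eventually moves into the cluster, the Service gains a selector and the application's view doesn't change — which is the "applications don't have to change their model" point.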
D
One thing I would say: if you want candidates that are good maps for Kubernetes, look for things that you're doing a CI/CD pipeline for, right. It's a really nice map if you're trying to decide, you know, should I put this in Kubernetes or not. If you can build a CI/CD pipeline for it, more than likely it's going to flow into a Kubernetes infrastructure pretty well. And if you're doing, you know, runbook-style automation with monthly or quarterly updates, it's not a Kubernetes app. So, you know, think of it—
D
But if you really look at what people are doing with containers and pipelining, and the fact that you don't allow access into containers and the attack surface for containers is really small — there are very strong arguments for containers, right up front, being a more secure way to deploy your code than virtual machines, because you can deploy containers without having SSH access, without giving people access to the hosts. You can't do that easily with virtual machines.
D
It's much harder to manage a virtual machine configuration system without something like SSH access, which is a huge front door for people. That said, I think it's going to get even better for containers. We're seeing things like dynamic scanning of containers and pre-deployment scanning; we're seeing sidecar networking topologies that manage networking actively; we're seeing consolidated logging and performance tracking — so you can do a lot more proactive security based on containers as part of a broader posture. But the level of activity—
B
I think one of the things is that, you know, the underlying container operating environment does actually have some visibility into the container environment, to a certain extent. Unlike a virtual machine environment, where the virtual machine really is completely segregated from the underlying operating system — if I wanted to do monitoring of everything, including potentially getting some understanding of what the applications might be doing, I don't have access to that in a virtual machine environment unless they do something specific on a VM-by-VM basis.
C
Yeah, I mean, obviously a lot of this comes down to a level of trust. So you get into areas as far as setting up the clusters with dedicated resources and isolation. From a provider standpoint, we have to prove out — show and demonstrate — where the levels of isolation are, and where there is sharing, and then where there's quality of service between what is shared and what is not shared. And even going beyond this — I mean containerization, modern containerization, because we've had containerization before; it's not new.
C
I mean, the mainframes have been doing process isolation for years and years and years, and that is a highly secure environment. Actually, we're bringing more of the mainframe and things like Docker containers together, which is kind of a weird way to think about it, but it's providing a very secure platform for containers. And what we're seeing from a provider—
C
—point of view is more of that. So you're getting signed images, you're getting trusted content, you're getting trusted platforms now. So that, as you build images and you get them signed, you are sure that when they run on a trusted platform, there's no tampering with either the image, the container, or the hardware it runs on. So, as more of this comes out, it's our belief — and hopefully this is true—
B
And there is also the model of saying we can actually deploy the entire Kubernetes environment on top of virtual machines as well. So if you need virtual machine security — if that's a requirement of your security group, because they're just not ready to consume the knowledge around containers yet — you can still leverage this technology. You can use OpenStack to deploy your virtual machines, then deploy Kubernetes on top of that, and then, you know, have a running workload.
A
You could do all of these — wait, let's talk about this for a second before you just put all these ideas in folks' heads. And by the way, it's working: if you have questions, I'm seeing some things coming in — I can see your comments. So if you have questions, especially about security, these are the experts; we saved the best for last.
A
But let's talk for a second about the technology here and how exactly to build this. One of my favorite series of meetups was when we had Robert come in and talk about, you know, Kubernetes on OpenStack, and then Rob Hirschfeld came in two weeks later and talked about Kubernetes as the underlay for OpenStack, and then Starmer came back and did a hands-on workshop, and then we were just really confusing everybody. And so we could sit up here and argue about the best way, but let's just, you know, go kind of big picture.
B
I think there is a benefit, especially from an operations perspective. If you don't already have the skill set to operate a Kubernetes environment — which means that you don't necessarily have the skill set for operating the underlying infrastructure; I don't know how you're operating anything at that point — but if you don't feel confident with that, and you have somebody who's going to provide you an OpenStack system and operate the entire resource there—
B
—and yet you want to now add Kubernetes on top, then I think doing Kubernetes from the OpenStack perspective, deploying on top, is a very viable solution. You can also use tools like Ironic to deploy onto bare metal. So if you still want bare-metal performance — the, you know, whatever 1% performance end game you get out of that — you can do that as well. So there's absolutely a clean model for OpenStack deploying these resources and deploying these systems models.
B
That's actually what a lot of the customers that I'm talking to are doing, because they're already down the path with OpenStack; that's a tool that they've already implemented. So they don't see the value in deploying Kubernetes and then deploying OpenStack on top. But that's not the only answer, right — I mean, Rob has a different—
D
—answer. Very, very much, yeah. I mean, if you're serious about doing Kubernetes and you're building a big cluster, adding a layer like OpenStack under it is just adding complexity and more things to manage and go wrong. So if you're building big Kubernetes infrastructures, doing it on the metal makes a lot of sense, and then you can actually buy machines that are targeted to Kubernetes, which could be smaller—
D
—have, you know, different networking topologies, less RAM, a lot cheaper. So there are a lot of options for how you can do that. And I think Kubernetes on OpenStack is really compelling — you know, we're seeing tremendous progress on it. It might be an operational pattern that the community can get behind and finally get upgrades and HA deployments as sort of de facto standards. So I see that as very compelling.
C
So I would take it one step further, even. I agree with you: keep the architecture thin — very thin — run it directly on the bare metal itself or on the hypervisor provided by the cloud provider. Don't try to interject another layer of, I would say, virtualization at this point. Also, with the networking, we use Calico for that, because it's not even an overlay; it's just IP assignment. So it's very thin, very, very fast.
C
—networking support. But I would go even one step further from what Rob's doing, which is what we're doing at IBM: we manage Kubernetes with Kubernetes. Because at the end of the day, what we realized is that we were building an infrastructure and a management layer — homegrown, or using various automation tools — and then we realized: oh my gosh, we're rebuilding Kubernetes. Kubernetes is a lifecycle management tool.
B
That's the way forward, but there's another piece of this puzzle, right — because we've talked about Kubernetes as if it's like: oh, I've got my bare metal and I've got Kubernetes on it. There still is the question of how I get from bare metal to enough of an operating system that I can enable Kubernetes. How do I get the CoreOS or CentOS or Atomic base that I can then deploy Kubernetes into? And I mean, there are tools — like your provisioning tool — that are actually really powerful for doing that.
B
You could leverage the OpenStack environment — this is why I say OpenStack managing Kubernetes has some interesting potential benefits if you already have that environment. Using Ironic, or the stripped-down version called Bifrost, you could also use that as a way of getting the bare-metal infrastructure deployed, right. But you still have to get over that first hurdle first.
D
This is where I talk a lot about underlay automation and what Rebar does — being able to reboot machines and reimage them and set them up, right. So there's no magic pixie dust, right. Kubernetes is not going to make your servers easier to manage; it will make your apps easier to manage, but you still have to have an SRE function or an ops function that's going to manage and run the Kubernetes cluster and monitor it and do performance logging and all of those things, right. There's no free lunch.
C
There is something to be said there. So, Kubernetes is an abstraction layer to run your applications, and when we provision clusters for customers, we tell them to keep it simple. While you could do network segmentation at the underlay, don't do it — keep it simple. Keep your infrastructure as simple as possible, as homogeneous as possible, because it just becomes a pool for the Kubernetes cluster. Put the network segmentation in the Kubernetes cluster. That way it's portable; that way you can move it from one provider to another—
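Doing the segmentation inside the cluster, as suggested here, typically means NetworkPolicy objects that travel with the workload. A minimal sketch (the labels and namespace are illustrative assumptions):

```python
# Sketch: in-cluster segmentation via NetworkPolicy instead of the
# underlay -- only pods labeled role=frontend may reach pods labeled
# app=db. Because it's a cluster object, it moves with the workload.
def db_ingress_policy(namespace):
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "db-allow-frontend", "namespace": namespace},
        "spec": {
            # Which pods the policy protects.
            "podSelector": {"matchLabels": {"app": "db"}},
            "policyTypes": ["Ingress"],
            # Who is allowed in; all other ingress to app=db is denied.
            "ingress": [{"from": [
                {"podSelector": {"matchLabels": {"role": "frontend"}}}
            ]}],
        },
    }

policy = db_ingress_policy("prod")
```

Note that enforcement depends on the CNI plugin — Calico, mentioned above, is one that enforces NetworkPolicy.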
B
—and have the same experience. Take that exact same model and apply it to OpenStack. So if you can't go 100 percent containers in your environment — if you're building a data center and you say, well, I still need to support bare-metal workloads and virtual machine workloads, and I still want containerization — I think there's no question: if you're going to do container workloads, you need to use Kubernetes. It is the tool to help make that easy. It provides a model, a pattern, for how you build your applications. That is really important.
B
So from the SRE worldview, you know, you really want one set of tools that can enable all those different pieces, and right now I still think that OpenStack is the only tool that provides at least the bare-metal and virtual-machine-like functions. And we can then get into all kinds of fun arguments about whether or not you then use that to deploy your Kubernetes, or you have a side-by-side environment for containerization versus those bare-metal resources. But I think there's something that still has to exist there as well.
A
I'll wrap us up here, but my question is: is Kubernetes going to have to absorb adjacent technologies the way that OpenStack has? For those of you that are maybe new to OpenStack — because you came just for the community today — in the really early days, there were requirements to be, you know, "OpenStack Powered," to get the certifications: you needed Nova, you needed Neutron, you needed some of the core pieces of OpenStack to really say you're running OpenStack. That actually was a requirement. But Kubernetes—
B
If you need to do something that's outside of what the current core Kubernetes can do, you can enable that. I actually even said this in OpenStack days: it's always dangerous to extend beyond what the community as a whole is looking to support, because it means that you're taking yourself out of the easy integration into other providers' resources, right. As soon as you have something that's custom in your environment that doesn't fit the normal models, you've broken it, right. And in a sense, by saying, look, Kubernetes has this sort of capability—
B
—if you want monitoring, it's not a Kubernetes function, but there is this monitoring tool that might be useful, right. Prometheus is one way of monitoring Kubernetes; there's a lot of work going on between those two communities to make them nicely coupled. But you don't have to use it — you can still use Sensu, you can use Zenoss.
C
And I completely agree with you. I mean, this helps Kubernetes keep the abstraction layer really clean. I think they've done a really nice job of that from a community standpoint. Keep that clean, because then you can change out the network provider and, from a programming-model point of view, you don't notice the difference, right. You might have underlying functionality—
C
—that's a little bit different in how it interacts with the OS, but from an application point of view, it operates exactly the same way, right. So I don't envision Kubernetes as a project forcing you to use various other components as part of that overall project, consuming more and more. A great example: we're working on a project with Google and Lyft called Istio. It's a microservices fabric. We have no intention of actually bringing this into the Kubernetes project itself; we want to keep it separate.
D
The project really wants to keep itself skinny and small and focused on some APIs. And so it's actually an anti-pattern in Kubernetes now — the thing that drove OpenStack to have all these side projects, where they were trying to create load-balancer-as-a-service and database-as-a-service and all these Amazon equivalencies. Kubernetes just says: run a Helm chart, right — which is the equivalent to Heat in Kubernetes — if you want that service. And so it's perfectly reasonable to spin up an adjacency; there's a broker model; there are all these ways that you can add—
D
—these "missing" (air quotes) services around Kubernetes. But the layer itself — especially because of Google's Borg model — the layer itself is a base service, and then things get added on top of it. Another great example is like Deis, or OpenShift, or all these platform-as-a-service things that are being built on top of it. There is no desire to include those in Kubernetes and then break the ecosystem. So we will see a much healthier ecosystem in Kubernetes, where people are fighting to be the Kubernetes PaaS. Well—
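The Helm-chart adjacency described above is, at its core, templated manifests plus a set of values layered on top of the core API rather than baked into it. A toy sketch, using Python's `string.Template` as a stand-in for Helm's Go templating (the chart content is illustrative):

```python
# Toy sketch of the Helm-chart idea: a chart is templated manifests
# plus user-supplied values, rendered into plain Kubernetes objects.
# string.Template stands in for Helm's actual Go templating engine.
from string import Template

chart_template = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

def render(values):
    """Substitute values into the chart, like `helm install` would."""
    return chart_template.substitute(values)

manifest = render({"name": "bigdata-worker", "replicas": 5})
```

The point of the design: the "big data on Kubernetes" case becomes a chart anyone can install, instead of a new subproject inside Kubernetes itself.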
A
So I think — are we — was someone supposed to—
A
The thought we'll leave you with — one of the things Dan was saying earlier, and I think Mark Collier mentioned it this morning in his keynote too: the "not invented here" thing, you know, people trying to roll their own. Don't do that. For now, Kubernetes is pretty good — we all sort of agree — and we haven't seen examples of someone who tried to build it themselves building anything better yet. So avoid that temptation. Same thing Mark—
A
Anyway, that's kind of one of our final thoughts. So we're going to be here for a few minutes if you want to come and ask us some questions, or just, you know, take selfies with us or whatever, and otherwise we'll see you at the event. Who's coming to the Finley Park tonight? Yeah? All right, we can finish the conversation there, and on Twitter. Thank you, guys. Thank you.