Description
OpenShift Commons Gathering December 5th 2017 Austin, Texas
Chris Wright, CTO (Red Hat)
Open source today is really the source of technology innovation. You look across all of the different projects and open-source communities and they are at the cutting edge of technology — whether it's cloud, or machine learning, or serverless — all these new development models, and infrastructure and platform projects. These are developed in open source. Certainly this is important to Red Hat: Red Hat's business is about building products that are derived from open source projects, and so the more we see this innovation happening in the upstream open source communities, the better for us.
This is just sort of our redux view of that same concept, which is: there are a collection of projects, and those projects go through kind of a lifecycle from our point of view. That includes the pure upstream project; in some cases there's a point in time where community curation, or combination of different components, happens; and then, for us, ultimately those become products. So you see things like Kubernetes upstream, OpenShift Origin as a combined community distribution, and then OpenShift Container Platform as a product from Red Hat.
So when you look at that path from upstream to productization, there are a couple of points where people start to look at a technology from the point of view of: what can I do with it? How does it drive value, or business value, for me and my company? Starting at the beginning, you see the innovation cycle.
You've got a kind of DIY point on that time horizon, which is people who are excited about the technology bringing it into their businesses and playing with it internally, and you see a productization and standardization phase at the end of that cycle. We at Red Hat live mostly in the upstream, and then in that productization and standardization place.
Kubernetes has been around for a short period of time — it was only announced approximately three years ago — and in that time period it's gone from a new, exciting open source project to a de facto standard for container orchestration in the industry, and that has happened really rapidly. So if you roll back in time: in 2001, VMware introduced server virtualization for x86; in 2006, Amazon introduced EC2.
In 2011, maybe, it was OpenStack; in 2014, Kubernetes; and today Kubernetes is emerging as a de facto standard for container orchestration. I think this is awesome. We had re:Invent, and a couple of key announcements there really talked about Kubernetes — in the keynote, the announcement of Amazon doing EKS was one of the most loudly applauded announcements. You really see the excitement and enthusiasm around Kubernetes.
If we're all collaborating on the same codebase, we create that de facto standardization, and what we really need is formalization of the APIs that we expect to be consistent, stable, and not going to change over the lifespan of a major release — or even, across multiple major releases, the long lifespan of a project. So, for example, Linux, which is compliant with standards like POSIX, has a very well-understood system call interface, and that system call interface is binary compatible. It's something that has changed over time, but only by being augmented.
The core system calls haven't really changed; many of them predate Linux. My personal opinion is that if it weren't for Linus waking up very grumpy one morning with a broken laptop — when somebody introduced a regression that meant his box didn't boot — we would be in a really different place today for containers.
Containers are fundamentally reliant on this well-defined interface between the Linux kernel and user-space applications, and if it weren't for that morning when Linus woke up and decreed — probably in not very nice language — "thou shalt not break user space," that really began something that we're reaping the benefits of today. So, standardization in a more formal setting is specifications: things we know and love that move slowly, and in many cases it can become politicized, and it's complicated.
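The binary-compatible system call interface the talk describes can be illustrated with a small sketch (not from the talk): the kernel answers the same way whether you reach it through Python's standard library or directly through libc, because both routes end at the same stable syscall.

```python
import ctypes
import os

# On Linux, CDLL(None) exposes the symbols of the C library already
# mapped into this process, including the getpid() wrapper.
libc = ctypes.CDLL(None)

# Both calls ultimately hit the same, binary-stable kernel system call —
# the interface that "thou shalt not break user space" protects.
pid_via_libc = libc.getpid()
pid_via_python = os.getpid()

assert pid_via_libc == pid_via_python
```

Containerized or not, a process talks to the kernel through exactly this interface, which is why applications run unmodified inside containers.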
So that brings us here, and I think this is really impressive — you know, kind of the NASCAR logo shot — but it just shows you what Diane was talking about earlier: the breadth of this community. There's a lot of cool stuff happening in the OpenShift Commons community. Clearly, OpenShift is built around Kubernetes and containers and container images, but it's also an ecosystem of users and, you know, other commercial companies looking to work together to build this.
Okay, that is amazing. If you can't see, I'd say two-thirds or three-quarters of the room raised their hand as being here for the first time. So that's fantastic — welcome. Actually, I didn't expect that; you surprised me. So, on to digital transformation. It's not my favorite topic, only because it feels marketing-buzzwordy, but it is a real thing people talk about all the time, and despite the buzzwords there's real business work going on under the hood.
What we see with our customers is this move towards speed and agility, initially trying to capture some efficiency internally. Efficiency can be as simple as using the same standard building blocks — something like Linux — across your entire infrastructure. Agility and speed come when we start using cloud technologies: we're using APIs to manage infrastructure, we can automate our work, we're using containers to deliver applications, maybe even building applications in a modern software architecture like microservices. This is the world that our customers live in.
They run the core business — the kind of transaction-processing engine — and, on the other end, they want to produce a web front end for their customers, the consumers interacting with them as a business, and they need to be competitive with startups that are doing things really fundamentally differently: born in the cloud, and not necessarily owning the same kinds of assets that the traditional businesses have. So this is the space where our customers live, and we work a lot to bridge these two worlds together.
The two key things here are the cloud and modern software architectures: cloud-native applications, and cloud infrastructure or hybrid cloud infrastructure. For us, we've been talking for quite some time about the hybrid cloud. The hybrid cloud is a concept that allows application portability across a lot of different infrastructures.
That is, some application platform provides the consistency — the runtime consistency — across all those platforms, and that's where you get portability and the ability to not be locked to one particular deployment scenario, and then a bunch of other things up and down the stack in terms of management, connectivity with storage and networking, and developer tooling. But here we're really focused on that application tier. What follows is just a quick walkthrough showing a bunch of applications and the move from hybrid cloud to multi-cloud.
So you see that cloud picture has changed to include a bunch of public cloud providers, and again the focus here is OpenShift, and OpenShift providing that consistency across all those footprints, so that you can run your applications independently of whether they're an older application or a more modern application, with different runtimes — anything that runs on Linux can run in this environment.
It's the same picture with the hybrid cloud and, as I said, anything that runs on Linux can run in this environment, because containers are fundamentally Linux — or, put a different way, containers are operating-system technology. Whenever I say "containers are Linux," people say: well, what about Windows containers?
Absolutely, they exist, and I think the point is that it's an operating-system-level technology, so the applications that run on those operating systems continue to run in a containerized environment. From an application point of view, there's very little that's different between running directly on the operating system and running containerized on the operating system.
One way you can deploy that is completely on premise — so again, using something like an OpenShift platform running these containerized components on premise, whether that's bare metal, virtualized, or a private cloud internally. And — this is in the context of hybrid apps — you can take this same application stack and move it off premise to the public cloud, and again it's that underlying platform that's created consistency. This is sort of the either/or approach to hybrid cloud, meaning you could deploy here or you could deploy there.
Our customers are also interested in an "and" scenario: I'd like to deploy here and there. Again, you could argue this is a strange way to deploy your application, but that's really not the point. The point is that your application is made up of components, and the components could be deployed on different targets or in different locations. So here you see the core application logic and the database on premise, and your messaging moved off premise into the cloud.
You may decide that you don't actually want to manage your database, and you may look to a cloud service provider to give you the management behind the database and use it as a service. You maintain the schema — it's your data, obviously — but you're now consuming somebody else's service. So you can see the same application: messaging, and you've offloaded some of your responsibilities to a database service. And here you can see it stretching across multiple public clouds — it's the same on-premise application.
You have choice there, and here you see just an extension of this picture, where we're starting to make a more complicated application, and the application is starting to consume external services. Some of these services are provided potentially by other platforms, or by a hosting cloud service provider, and those services are things that we're working to make accessible through the service broker — which you'll hear about; I won't talk much about it, but you'll hear about it a little later today. It's a really important part of integrating into an external environment.
So we have this world where we have legacy applications, which potentially show up as APIs or services; we've got new applications that we're building; and we need to create connectivity and bridges between those things. A service broker is a place that can do that, as well as connect you to cloud services — native cloud services or software-as-a-service offerings.
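As a rough sketch of what that brokered integration looks like: the Open Service Broker API provisions a service instance with a small JSON body sent to the broker. The IDs and parameters below are hypothetical placeholders, not values from the talk.

```python
import json

def build_provision_request(service_id, plan_id, org_guid, space_guid,
                            parameters=None):
    """Build the JSON body for an Open Service Broker API provision call
    (PUT /v2/service_instances/{instance_id})."""
    body = {
        "service_id": service_id,        # which service from the broker's catalog
        "plan_id": plan_id,              # which plan of that service
        "organization_guid": org_guid,
        "space_guid": space_guid,
    }
    if parameters:                       # optional, service-specific settings
        body["parameters"] = parameters
    return json.dumps(body)

# Hypothetical IDs, for illustration only.
payload = build_provision_request(
    "db-service-id", "small-plan-id", "org-1", "space-1",
    parameters={"storage_gb": 10},
)
```

The broker answers such a request by creating (or pointing at) the backing service — which is how a legacy database or a SaaS offering shows up to the platform as just another service to bind.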
So my role is really meant to be focused on where we're going. What is our technology strategy? What are the interesting things happening in the industry that we want to make sure, from a Red Hat point of view, we're paying attention to — understanding where we intersect, and how we can work with those emerging communities? So I want to switch gears here a little bit and talk about that.
You saw the hybrid cloud picture that we laid out. That platform is really a core component of our hybrid cloud story: we're trying to work towards efficiencies for two different types of persona. One is the developer, and the other is, you know, kind of the operations team. From a developer point of view, our focus is providing efficiency for the developer, so that effectively all lines of code translate directly to business value.
With separation of duties, or separation of concerns, you allow each persona to optimize what they're working on. Giving developers autonomy allows them to move more rapidly: you're not sitting there waiting because you entered a trouble ticket and you're waiting two weeks to get a VM that's blessed by IT.
An idle developer is somebody who is either getting bored or just not being productive. So this kind of separation of concerns allows the operations teams to pay attention to the platforms, and the developers to write code and pull in the, you know, languages and frameworks — whatever they're interested in building their applications from — without a lot of headaches from having to work with the IT side.
At the same time, we need to maintain some controls: we need to do this efficiently, but also responsibly — so I'll talk briefly at the end about security and the kind of DevSecOps, or security lifecycle, of an application. You don't want a free-for-all; you don't want a situation where you put things in production you don't even know about.
One of the things I think is interesting is that a platform is a place where you can create that separation of duties, similar to an API. An API allows a certain type of innovation to happen on either side: if you have a well-defined boundary, you understand how you interact, but across the boundary you're free to innovate on one side and free to innovate on the other. It's similar here with creating a platform — the platform serves like an API in this sense.
So I call this the perpetual pursuit of excellence. I see the industry working in these three key areas: trying to continually improve the user experience — which could be a consumer user, typically, but not always — always trying to improve the operational experience, and the developer experience in the middle.
On the ops side, it's really the same "-ilities": reliability, availability, stability, security — all the standard things that you think of from an IT ops side. That's where we're trying to push towards policy-defined infrastructure: having a platform gives you a target to build policy around. And one of the things I think is interesting and important, especially for Kubernetes, is that as you onboard more and more unique types of workloads onto a platform, that platform becomes more and more valuable — and I don't mean just in any one context; I mean across the industry.
So if you have that sort of 80/20, where you've found a sweet spot and you're working well for 80% of the industry, you're leaving out 20% of the industry — you know, maybe it's a bell curve, and those are niche corner cases. If you can bring those same niche corner cases onto the same platform, that platform is more useful. I would argue that it's also building towards the future architecturally: you tend to have to do things in code — refactoring the code — to make that platform withstand the test of time.
That happens as you bring on more exotic corner cases, and to me the example there is again Linux: having been around for over 20 years, evolving with the industry and taking on more and more use cases, so that today it's in my pocket on my phone, it's in my TV, in some cars, in supercomputers — it's ubiquitous. It's everywhere.
It's running all these different workloads across many different kinds of hardware, and in the analogous space in Kubernetes, we're starting to see more and more workloads come to the platform. I think that's a really important thing for this community to think about and to work towards enabling together. On the developer side, we've mentioned most of this: it's about speed, agility, those kinds of things.
I think this is an interesting space, because it's again mostly thought of in the consumer arena: you're trying to build simple, intuitive interfaces that are contextual — they understand who you are — and are even pleasant to use. It's a delightful user experience in a context where you're trying to do business through an application: you're trying to reduce the cognitive load on the user, so that they're not stuck trying to figure out the application and can really focus on their buying decisions.
I think that's one of the benefits of containers, and one of the exciting things about container orchestration: you can build these sort of turnkey components and plug them into your application. The delightful, intuitive, personalized pieces of a consumer application today are also starting to really mean AI is under the hood — and while that's about contextualizing and giving recommendations, it's also how we'll use data coming out of distributed systems to understand the current health of the system.
I'll skip over the Insights piece, but we're doing some of this work in OpenShift.io. OpenShift.io can take advantage of some machine learning to give recommendations while a developer is writing code: if you're pulling a dependency into your Maven POM file that is known to be buggy, or has a problem with the stack you're creating — a security vulnerability, for example — it can flag that. This is just a simple example of how we're using AI internally at Red Hat; it's similar with the Access Insights one that I skipped over.
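A toy sketch of the kind of check being described — this is not the actual OpenShift.io implementation, and the dependency names and advisory text are made up; a real system would consult a curated vulnerability database rather than a hard-coded table.

```python
# Hypothetical known-bad dependencies, keyed by (group, artifact, version).
KNOWN_ISSUES = {
    ("org.example", "widget-lib", "1.2.0"): "known security vulnerability",
}

def flag_dependencies(dependencies):
    """Return advisory messages for any declared dependency
    (group, artifact, version) that matches a known issue."""
    warnings = []
    for dep in dependencies:
        issue = KNOWN_ISSUES.get(dep)
        if issue:
            warnings.append(f"{dep[1]} {dep[2]}: {issue}")
    return warnings

# Dependencies as they might be parsed out of a Maven POM file.
deps = [
    ("org.example", "widget-lib", "1.2.0"),
    ("org.example", "safe-lib", "2.0.0"),
]
warnings = flag_dependencies(deps)
```

The machine-learning angle in the talk goes beyond this lookup — recommending alternatives based on what stacks other developers build — but the developer-facing result is the same: a warning at the moment the dependency is declared.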
From a community point of view, radanalytics is a project that we've kick-started to bring some of these analytics tools to OpenShift. Specifically, a lot of the work is done around Spark, and our goal is to show that this platform is good for running modern, interesting, innovative new engines. So you'll hear today about machine learning — we work with a number of different partners on bringing these kinds of machine learning engines to our platform — and it's a similar story with blockchain: very much a partner-centric view of the world for Red Hat.
We have a commercial ecosystem called the OpenShift Blockchain Initiative, and we work there to bring blockchain apps to OpenShift. So you see this kind of "bring the new technology to a common platform" as the consistent theme here. Who is here from the telco industry? A small handful of people — all right, you have your work cut out for you, and this is an exciting space.
5G and mobile edge computing are kind of the next generation of telco networks. Today the networks are 4G; SDN and NFV have been the buzzwords in telco. The next generation is 5G, and with 5G you get low latency, high bandwidth, and dense connectivity right at the edge of the network. With that, alongside NFV, you have the ability to start running more interesting application workloads at that edge — the traditional examples would be autonomous vehicles and augmented and virtual reality — and what's interesting is that many of the companies looking into this see containers as the optimal way to run those workloads. There are really important considerations around performance, jitter, and reliability: you've got messaging and signaling going on that's managing phone calls, so you don't want to drop those. So it's a bit of a different environment than a traditional enterprise data center or cloud application, but these are things that will be really interesting for the Kubernetes community to take on as new challenges.
Lambda has made a big impact, at least in certain circles in the industry, and there's a lot of conversation around serverless and function-as-a-service. To me, it's about eliminating scope from the developer: an asynchronous, event-driven programming model is ideal — an optimal programming model — but it's also difficult. So here we're creating a platform that takes care of most everything for you, and you're just writing your business logic as code that's triggered by some event.
At the very far end of the spectrum, you have a bare-metal server where you're managing the application plus the operating system and the whole mess. So you can see this sort of spectrum, and we're moving up and to the right: we've been investing in a project called OpenWhisk and bringing that to the OpenShift platform.
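To make the "just business logic, triggered by an event" model concrete: an OpenWhisk action in Python is essentially a `main` function that takes the event's parameters as a dict and returns a dict, with the platform handling everything around it. A minimal sketch of that shape:

```python
def main(args):
    """An OpenWhisk-style action: pure business logic, invoked per event.
    'args' carries the event's parameters; the returned dict is the result."""
    name = args.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

# Invoked directly here for illustration; on the platform, a trigger
# (HTTP request, queue message, timer, ...) would call it instead.
result = main({"name": "OpenShift"})
```

There is no server, listener, or lifecycle code in the function itself — that is exactly the scope being eliminated from the developer.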
So this is bringing that event-driven function-programming environment to OpenShift. A topic that I'm sure you will hear about again today, as well as repeatedly throughout KubeCon, is the service mesh. With microservices being an architectural pattern for building modern software applications, as you break apart an application into components and services, the network becomes really fundamental to that application.
You're probably setting your developers up for some headaches and complications when, you know, simple things happen — like the network falls apart and breaks down — and retries are inconsistent across the platform. So the Istio project, which leverages Envoy as a proxy, is something that we've been working on at Red Hat, working together with the OpenShift community and the broader Kubernetes and general communities, to bring to our platform. And I think this is the last thing I will touch on.
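To illustrate the inconsistency problem: without a mesh, each service ends up hand-rolling networking logic like the retry sketch below, each team slightly differently. A sidecar proxy such as Envoy lifts this out of application code into uniform platform configuration. This is an illustrative sketch only, not Istio code.

```python
import time

def call_with_retries(request_fn, attempts=3, backoff_seconds=0.1):
    """Naive client-side retry with linear backoff -- the kind of
    per-application networking code a service mesh factors out."""
    last_error = None
    for attempt in range(attempts):
        try:
            return request_fn()
        except ConnectionError as err:
            last_error = err
            # Back off a little longer after each failed attempt.
            time.sleep(backoff_seconds * (attempt + 1))
    raise last_error

# A flaky endpoint that fails twice with a transient error, then succeeds.
calls = {"count": 0}
def flaky_request():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

response = call_with_retries(flaky_request)
```

With a mesh, the equivalent retry policy lives in the proxy's configuration, applied consistently to every service, and the application code shrinks back to a plain request.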
So I think that's probably all I have time for — I'm sure I went a little over — but thank you for listening. This is a really important community for Red Hat, and I'm stoked to see so many new people; that's a really cool thing. The message I wanted to leave you with is that OpenShift is a platform: it's building those kinds of swim lanes, or that separation of concerns, and as open source is the innovation engine for technology, OpenShift is a platform that's ripe for it.