From YouTube: OpenShift Commons Briefing #50: OpenShift Origin Overview & 1.3 Update with Clayton Coleman

Description

In this special OpenShift Commons Briefing, we had Clayton Coleman, one of the lead architects of OpenShift and a core contributor to Kubernetes. Clayton was kind enough to give us an overview of OpenShift Origin, offering some invaluable insight into the current release, Origin 1.3. The conversation around the roadmap and plans for future releases may also be of interest.
A:
Hello and welcome to another OpenShift Commons briefing. Today we have Clayton Coleman with us. He's one of the lead architects for OpenShift and works as a core contributor to Kubernetes. He's going to give us an overview of OpenShift Origin today, give us some insights into the current release, 1.3, that's going out the door now, and talk a little bit about the future and the roadmap for the road ahead for OpenShift Origin and Kubernetes. So take it away, Clayton.
B:
Thank you, Diane. My name is Clayton Coleman; as you heard, I'm an architect for OpenShift, and I'm also heavily involved in helping make Kubernetes a success, both as a community member and as a Red Hatter.

OpenShift is ultimately a platform for application developers. Our goal is to build and support the kinds of workflows and development tooling that developers need to start with an idea and take that idea all the way to production, including not just how the application is run, but how you manage change in the application. That includes things like continuous integration and continuous deployment. It also means packaging up your source code into Docker images for running on the platform, as well as the ability to not just run your application on a production server somewhere off in the cloud, but also to run that application in the same way on your local laptop.

So it's very important to us to enable that end-to-end story as a developer, where you can recreate the same application environment for any kind of application, no matter where you need that application to go.

A question I get asked a lot is: how is OpenShift different from Kubernetes? I think a lot of people today know what Kubernetes is. It's a container orchestration engine, and Kubernetes is a great place to run applications. With OpenShift, our goal was to take the next step: not just to have a place that's great for running applications as a developer, but a place that supports running many, many applications, as well as the teams and operations necessary so that your applications keep running. And so today we provide a secure, multi-tenant distribution of Kubernetes. We've added a number of tools, which I'll talk about in just a few minutes, for supporting the entire software development lifecycle, as well as helping to make easy integrations with all of the source-code-to-production flows that have been very difficult before.
Most people are used to thinking of having the resources of their computer available to them. They might have a laptop and a desktop, and some people have access to maybe a couple of servers. But in the future there's a lot more out there than that: we have cloud environments, and those cloud environments might offer tens or hundreds of machines that, for the most part, are not doing anything useful.
B
But
this
is
the
heart
of
open
check
and
it
for
those
who
aren't
familiar
with
open
shift,
I'll
I'll,
say
open
shift
builds
on
top
of
kubernetes.
We
make
it
easy
to
run
docker
containers
on
the
machines
in
a
cluster
we
bring
in
persistent
storage,
so
you
can
have
stateful
applications,
things
like
databases,
but
even
just
normal
web
applications
that
might
need
access
to
a
file
system
that
lives
longer
than
I
contain,
and
that
really
brings
the
best
of
both
worlds.
B
You
can
build
applications
that
need
to
deal
with
state,
and
you
can
also
benefit
from
the
flexibility
and
resiliency
of
a
cluster.
We
offer
a
service
layer
that
helps
bridge
the
gap
between
applications
both
running
on
and
off
the
cluster.
We
make
it
easy
to
expose
your
application
to
end
consumers
and
we
integrate
source
code,
see
ICD
and
existing
automation
into
a
flow
that
works
between
both
developers
and
operators.
B
So
with
that
out
of
the
way,
what
have
we
done
with
OpenShift
one
three?
This is the third release of the newest iteration of the OpenShift architecture. We launched OpenShift 1.0 just over a year ago now, and in the last year we've made some huge fundamental improvements, both to Kubernetes and to OpenShift on top.

The OpenShift 1.3 release took place on September 15. It's up on GitHub for people to download, and we've started building and releasing images for people to try. There was a huge number of features delivered in OpenShift 1.3, and the biggest would be all of the hard work that folks at Red Hat, Google, and other companies have put in to improve Kubernetes, which has been released as part of OpenShift.
We ship and support a version of Docker that's very well optimized to work with Kubernetes, that's been tested at scale, and that has a number of important fixes. The combination of these two really helps administrators bridge the gap between a rapidly changing Docker upstream and a stable production environment for applications. One of our signature features was an integration with CI/CD through Jenkins; I'll talk about that in a few minutes. We've also really improved the ability of applications to consume configuration that is shared between many different components.
The biggest signature feature, and this is still in an alpha state in OpenShift 1.3 (in OpenShift 1.4 we hope to bring it to beta or final availability), is the integration of the Jenkins pipeline feature. Jenkins 2.0 launched pipelines, which are a way for a developer to describe, in a single file, a continuous integration and deployment pipeline.
OpenShift can surface that to you and give you access to all the information about how those deployments are progressing. In addition, though we don't show it here, there are the integrations that are possible from Jenkins: Jenkins pipelines allow you to easily use OpenShift builds and deployments, and all of the concepts that exist in Kubernetes, to actually deploy and manage your app from within Jenkins.
So we've really tried to bring together the idea of an environment to run applications with the process and pipeline tools in Jenkins that allow you to set up a repeatable development process, one that helps you automate the end-to-end lifecycle all the way from a pull request to an application being rolled out in a deployment environment. The integration into Jenkins is an image that we deliver as part of the OpenShift open source project: it's Jenkins with a special plugin that integrates it very deeply into OpenShift.
So as a developer on OpenShift, I can start off on the command line and fire up a whole deployment pipeline in a single command. If I start a project that has a Jenkinsfile checked into it, OpenShift can automatically detect that file and provision a personal Jenkins server for you in the OpenShift environment, which really means that, for a developer, it requires no additional work to start and run Jenkins.
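The auto-provisioning flow keys off a Jenkinsfile in the repository. As an illustration only (the names "frontend" for the build and deployment configs, and the use of `oc` from the Jenkins agent, are assumptions and not details from the talk), a minimal scripted Jenkinsfile of that era might look like:

```groovy
// Hypothetical Jenkinsfile sketch: drive an OpenShift build and
// rollout from a Jenkins pipeline by shelling out to `oc`.
// Assumes a BuildConfig and DeploymentConfig both named "frontend".
node {
  stage('Build') {
    // Trigger the OpenShift source build and stream its logs
    sh 'oc start-build frontend --follow'
  }
  stage('Deploy') {
    // Roll out the latest image (OpenShift 1.x era command)
    sh 'oc deploy frontend --latest'
  }
}
```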
B
It's
really
something
that
the
platform
is
managing
for
you,
and
so
we
think
that
this
integration,
others
like
it
in
the
future,
really
help
bridge
the
gap
between
your
production
environment
and
how
things
were
running
and
on
the
development
side.
How
you
get
to
the
point
where
you
can
take
advantage
of
all
the
power
of
the
cluster
we've
done.
Some
major
improvements
to
the
open
ship
user
interface
be
the
biggest
change
that
you
might
notice
as
a
developer.
B
Is
that
we've
tried
to
start
breaking
down
an
overview
of
the
process
that
your
application
goes
through
to
be
deployed.
So up at the top of the screen you can see my pipeline build is actually in progress, and it's going to read out the status and the stages it's passing through. When my application deploys, you'll actually see the rest of the UI react. In this case, I'm running Jenkins in my cluster.
Obviously, I have a front-end application that's going to be deployed by the Jenkins pipeline, and then I've got a database. And we've added the metrics information that's surfaced for OpenShift; that's both a part of Kubernetes as well as some of the additional features that have been added in by OpenShift to really make metrics easy to consume as a developer.
And so here you can see that my front-end application, a Ruby sample application, is currently using 80 megabytes of memory, and at this point it is not receiving a ton of traffic, so you can see it's basically idle, but it's running with two pods. The big theme has been to surface information that was already available and present it to developers in a way that allows them to understand the relationships between those applications.
In addition, though not shown here, are a number of other improvements to the page and the structure of the application. In the left bar you can see we've broken down the different ways that you can interact with your applications into a number of subsections, and the Add to Project page has been significantly revamped to allow you to create from Docker images directly, so you can spin up a whole new application just from a Docker image, whether it's located on Docker Hub or a private Docker registry, as well as to consume.
But the idea of the entire group of those units of software stays in place. As an example, I might be running a ZooKeeper cluster that has five members, and in ZooKeeper each of the members of the cluster needs to know that it's zookeeper-1, zookeeper-2, zookeeper-3, and so on, and each of those needs a lot of setup. You can actually run several init containers; they're executed in order, if they fail they get retried, and they gate the start of your application containers, depending on what you specify. These init containers can download data and write it to shared disk.
So if you need to pull a configuration file from a shared location, or if you need to initialize a database, or even download binaries for the latest version of your application because you want to have a container that's really lightweight and updates from some other store, an init container lets you do that before the application starts.
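A sketch of what that looks like in pod form. The names, image, and URL here are illustrative, and in the Kubernetes 1.3/1.4 timeframe init containers were still expressed through a beta annotation; this uses the field form that stabilized later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:            # run to completion, in order, before the app starts
  - name: fetch-config
    image: busybox
    command: ['sh', '-c', 'wget -O /config/app.conf http://config-server/app.conf']
    volumeMounts:
    - name: shared
      mountPath: /config
  containers:
  - name: app
    image: example/app:latest
    volumeMounts:
    - name: shared            # the app sees the file the init container wrote
      mountPath: /etc/app
  volumes:
  - name: shared
    emptyDir: {}
```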
The config map is a new concept. Just like secrets, it allows you to store text information. A great example would be a MySQL configuration file: my MySQL configuration file might have a number of parameters that you want to manage outside the application, and you want to ensure that your MySQL application and your web application can both get access to those settings.
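For instance, a hypothetical ConfigMap holding a MySQL configuration file that both applications could reference might look like (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  # a whole config file, stored as one key
  my.cnf: |
    [mysqld]
    max_connections = 150
    innodb_buffer_pool_size = 256M
  # an individual value, consumable on its own
  max_connections: "150"
```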
The downward API allows you to take values out of that config map and inject them into the application as environment variables or as files on disk, and that gives you just a little bit more flexibility to take containers that normally run outside of Kubernetes and, when you bring them to Kubernetes, use these different tools to avoid having to create a lot of boilerplate inside your image. We call this adaptation: it makes it easy to take applications running off Kubernetes and run them cleanly on OpenShift.
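A sketch of that injection, as a fragment of a pod spec. The container, image, and key names are assumptions; it pulls one value from a ConfigMap into an environment variable and exposes pod metadata via the downward API:

```yaml
# Fragment of a pod spec (illustrative names throughout)
containers:
- name: web
  image: example/web:latest
  env:
  - name: MYSQL_MAX_CONNECTIONS     # value pulled from a ConfigMap key
    valueFrom:
      configMapKeyRef:
        name: mysql-config
        key: max_connections
  - name: POD_NAME                  # downward API: the pod's own metadata
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: config                    # the same ConfigMap, mounted as files
    mountPath: /etc/mysql/conf.d
volumes:
- name: config
  configMap:
    name: mysql-config
```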
Finally, I'd say one of the biggest requests we've had comes down to the fact that, in the real world, deployments are a very complex thing. The easiest kind of deployment might be a rolling deployment, which is something we've supported since OpenShift 1.0, where you run new versions of your application alongside the old version and gradually move traffic from the old instances to the new instances. There is a more sophisticated concept called blue-green deployments, and a blue-green deployment is usually when you set up a whole new set of instances of your application.
But instead you may want to assign weights, and those weights might be: I want to run a copy of my application that contains the new code, and I want to run it there for 15 minutes, an hour, a day, to allow me to assess whether the percentage of users who are being sent to that new instance end up having a successful experience, whether through metrics or user testing. And so in OpenShift 1.3 we've added the ability to give a route multiple backends.
At a high level, routes in OpenShift are a way to program a shared load balancer. They let any application that listens for HTTP or HTTPS traffic easily set the hostname and port it receives traffic on, and set security settings like TLS or shared sessions. By allowing a route to point to multiple services instead of just one, a user can set up, in this example, services A, B, and C, where service A receives 90% of the traffic, service B receives 10%, and service C receives 0%.
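Expressed as a route, the 90/10/0 split described here might look like the following sketch (hostname and service names are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  host: app.example.com
  to:                       # primary backend: 90% of traffic
    kind: Service
    name: service-a
    weight: 90
  alternateBackends:        # additional weighted backends (OpenShift 1.3+)
  - kind: Service
    name: service-b
    weight: 10
  - kind: Service
    name: service-c
    weight: 0               # staged but receiving no traffic yet
```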
So as an example, I might set up service C as an even newer copy of my application, and I would gradually increase the amount of traffic going to service C until I'm comfortable it's working. Then I might cut service A and service B over to service C by setting them to zero and setting service C to 100%. And so A/B routing can be a very powerful tool in very specific scenarios.
With the OpenShift 1.3 release, we feel that we've really expanded the range of what you can do, and what we really want is to gather feedback from developers and from people deploying in production: what are the additional tools that we can build into the deployment platform to make it easy to test applications, roll them out, and have live feedback as a developer so that you're confident everything is going to work correctly, and then build that into, for instance, a Jenkins pipeline?
These tools, just like all the existing tools in OpenShift, are things that we can enable from the Jenkins pipeline. So you might run automated tests against this new version of your code while it is running in production, and then, if those tests pass and the metrics from the backend demonstrate success, you can continue to roll the rest of the application out.
And again, OpenShift is always about being as flexible as possible for the scenarios that users actually have. While different parts of this may be situational, the underlying goal for OpenShift is for all developers to find the right point of integration: even OpenShift out of the box does the right thing for you, and as you grow in sophistication you can take those additional steps.
A key part of development flows is understanding what images and what content are being deployed in your cluster, and in OpenShift 1.3 there are a number of really exciting improvements to how OpenShift exposes Docker images. OpenShift has an integrated Docker registry that's capable of working with other Docker registries. The goal of the OpenShift integrated registry is to make it easy for developers to iterate and then push to production environments, because it's just a Docker registry.
So on the right-hand side of the screen, you can see I'm in the Red Hat project and I'm viewing an image stream that has the latest tag. It's a Docker image: I can see what it runs and how it runs, as well as the older versions and who pushed those images. This is really for when you want to host Docker images that you're producing for others in your organization or your community to consume. And on the developer side,
I can take an image that's checked into the OpenShift registry, pull it, and download it to my own laptop to do some testing before I sign off on the application. The information about that image, where it comes from, its content, and its security settings are also available to me. And as we continue to grow the capabilities of the integrated OpenShift registry, it's very important for us to make it possible for developers to track and manage content, as well as to expose that content for others to consume.
With the OpenShift 1.3 release, we've also continued to refine the flexibility of the OpenShift build pipelines. There are many different ways to build Docker images, and there are many different ways to build the artifacts that compose applications. In Java, you might build WAR files or enterprise archive files; you might also publish JARs to Maven or Nexus; whereas in Ruby, you might create gems and upload them to RubyGems.
And for OpenShift, it's extremely important to be able to offer tools that allow you to compose your Docker images, either on or off the platform. The easiest integration is always to build images and just push them into the OpenShift registry, or to tag them, and OpenShift can take those and deploy them. And because OpenShift is an environment where it's easy to run builds using spare capacity, the OpenShift platform exposes the lower-level source build and Docker build concepts that let you easily build images from Git source code.
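As a sketch of those lower-level concepts, a source-strategy BuildConfig that builds an image from a Git repository might look like the following (the repository URL, builder image, and names are illustrative assumptions):

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: frontend
spec:
  source:
    git:
      uri: https://github.com/example/frontend.git   # illustrative repo
  strategy:
    sourceStrategy:              # source-to-image (S2I) build
      from:
        kind: ImageStreamTag
        name: ruby:latest        # builder image providing the language runtime
  output:
    to:
      kind: ImageStreamTag
      name: frontend:latest      # resulting image, pushed to the registry
```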
So even with all of the exciting things that we've done with OpenShift 1.3, there are even more exciting things coming down the road in the next versions of Kubernetes, Kubernetes 1.4 as well as Kubernetes 1.5, and the OpenShift releases that will be based on them. There's a ton of improvements just across the board in making the platform for applications as stable and flexible as possible, so to an end developer these may not be details that you care about from an operational point of view.
Our job is to make it possible to run some of the more demanding use cases while keeping the end-developer experience simple. And so, storage: storage has traditionally been a very difficult problem for people running applications. You need to have disk storage attached to a particular machine; it needs to be backed up; it needs to be tracked and monitored, and made available to applications.
Now, this sort of flexibility is certainly possible today with OpenShift. We want to make it easy out of the box for an application that needs 10 gigabytes of storage of a certain type to get that storage, no matter where it runs, and you'll see increasing levels of simplicity and focus on initial setup, around making storage something that really just becomes a detail that an administrator can set up, let run, and nobody on the platform has to worry about.
We also want to make it even easier to get shared storage for developers. Today there are certainly a number of different ways you can configure storage, but one of the easiest is using the actual resources of the cluster, using the machines on the cluster to give storage to applications. This works really well for testing and development-style applications, where you may not have a very strong need for an SSD or a high-IOPS AWS volume, but instead really just want access to a few gigabytes of local storage.
We'll combine that with a number of other improvements to the platform. A lot of this stuff is more targeted towards operational teams, as I mentioned before, but one of the exciting features coming in OpenShift 1.4 (Origin 1.4) will be scheduled jobs. Scheduled jobs allow you to set up a recurring container that gets run as needed on the cluster.
So if you need to run a job every night to generate a report from your database, in OpenShift 1.4 we plan to have the scheduled jobs concept available. The scheduled job is just a container like any other, and so your application code, your scripts, can be run at any interval you desire, whether it's nightly, hourly, or every five minutes, to make the changes you need on the cluster. And if you need to do periodic maintenance on the cluster, this lets you handle that too.
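A sketch of the nightly-report example as a scheduled job. The resource was alpha in this Kubernetes timeframe and was later renamed CronJob; the names and image are illustrative:

```yaml
apiVersion: batch/v2alpha1
kind: ScheduledJob            # renamed CronJob in later Kubernetes releases
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"       # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: example/report-generator:latest   # illustrative image
          restartPolicy: OnFailure   # retry the container if it fails
```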
You can just set it up and step back. On the application development side, there are a lot of exciting improvements.
The biggest is some of the work going on in Kubernetes right now around service linking and the Service Catalog. As a developer, I may not care about this when I'm working on my own application, but I will the more I'm working with other teams, and the more I'm working with environments like OpenShift Online that offer the ability to consume resources that are hosted by other cloud providers.
The Service Catalog is a way to bring all of those together, so that people can make it easy to consume a wide range of network and computing services from the cluster, and service linking will be an easy way, as an application, to choose to consume those resources. So say I've built a simple Ruby on Rails application and I need a database.
I might go to the Service Catalog, ask for a database to be provisioned for me, and then connect that to the application. The person who is offering that database might perform the administration and management and keep that database running for me, and I can focus on making my application work as easily as possible. I talked a little bit about PetSets and the improvements coming to upstream deployments.
There's a lot of tooling we intend to build to make it easier for developers to integrate with the platform. One of the most hotly requested items has been a JSON schema for OpenShift that would allow people to build and generate client tools in a wide variety of languages. We're going to combine that with better support for external clients and external consumers, so it's even easier to script and integrate with OpenShift, as part of our ongoing work to improve the performance of the platform.
We really want to support that, and we're planning in the next few releases to move to etcd3. Now, etcd3 offers some pretty significant improvements over the etcd2 release that's part of OpenShift today, and that will translate to bigger and more efficient clusters, as well as the ability to have longer histories of information available for consumers of the application. So there's a lot of exciting under-the-covers detail that we'll be able to talk about more over the next few months.
With that comes the desire to make it even easier to extend Kubernetes and to plug in new capabilities. There's been a ton of great work in the Kubernetes community to expose things like additional tools for administrators to gather information from the hosts of the cluster, and to make that available for application sharing and for application optimization.
We are getting in some final tweaks, and then that will finally be something that developers can rely on. At the lower levels, for people who are looking to run in different networking environments, there's going to be some extended work on the Container Network Interface for even more flexibility in the future. And at the developer level, you know, on a lot of those things we get a lot of feedback; there are people who have very specific desires and very specific needs.
B
People
who
pitched
in
the
community
to
make
some
of
these
things
possible,
we
there's
a
ton
of
really
important
steps
that
we're
trying
to
make,
as
both
from
the
web
console
and
the
CLI
that
make
it
even
easier
for
developers
to
understand,
what's
happening
with
their
applications
and
to
go
back
and
troubleshoot.
This.
There's a huge number of people involved in the Docker, Kubernetes, and OpenShift communities who are all working together to try and make all of these tools as fast, efficient, easy to use, and useful as they possibly can be, and we would love to have you join us. Please go to commons.openshift.org, which is the primary place to learn about how you can engage and participate in the community, and I want to thank...
A:
Yes, and also we're going to be having an OpenShift Commons gathering on November 7th in Seattle. So if you want to hear more from Clayton and from others of the upstream project leads across the OpenShift Commons, please check it out at commons.openshift.org; you'll find all the information about the gathering there. It's co-located with KubeCon, CloudNativeCon, and Prometheus Day.
B:
Exactly. And, you know, every time someone new contributes, it makes OpenShift and Kubernetes a little bit stronger. The OpenShift team hangs out on Freenode in #openshift, and you can always find us there. If you have an issue with OpenShift, or a feature you'd like to see, please go to github.com/openshift/origin and let us know what it is that we can do to help you run your applications more effectively.
A:
Awesome. Well, thanks, Clayton. I'm really looking forward to this next release, OpenShift 1.3, and to learning more at the Commons events coming up in Seattle. Hopefully we can get a lot more going with the community in the next little while. So thanks again for your time today, and we look forward to hearing more about the next couple of releases coming. Thank you.