From YouTube: OCG Berlin 2017 - Challenges of Digital Transformation
Description
From the March 28th 2017 OpenShift Commons Gathering in Berlin @KubeCon https://Commons.openshift.org/Gathering
So let's talk about next-generation applications. The success of a next-generation application is going to be its ability to take advantage of a highly dynamic underlying infrastructure, or, maybe better put, to take advantage of a programmable underlying infrastructure that reflects the highly dynamic nature of today's computing environment.
You've got large-scale numbers of users, you've got users coming in through different interfaces, and we're trying to move quickly as businesses to adopt and create value for our customers, all while not getting lost in this sea of technology churn.
I think there are three things really happening in the industry today that are putting us in a very unique position in history. The first is how we've been developing applications. The application architecture, the software architecture that we built our applications from historically, was the traditional monolithic application. You've got your big behemoth, your enterprise Java app or whatever you've been building. We're all familiar with those; we've struggled with those.
There is some simplicity to a single large code base in terms of how you manage it: it's just one thing to operate. But it's definitely cumbersome in terms of scaling, and making changes to it can be complex.
You have brittle systems. So you saw a shift from monolithic to multi-tiered applications for scaling, and today the buzzword is microservices. I think the interesting thing about microservices is really the notion of doing one thing and doing it well. You hear the term bounded context, or whatever, but it really harkens back to the original days of UNIX, and it allows focus and separation of concerns.
You can really focus on just solving a single problem. At the same time, we have a process for how we've been building and delivering these applications, and historically that was waterfall.
The tried-and-true waterfall mechanism: you've seen varying degrees of success there, and even today you still see it as a big part of how we develop software, even in communities. Communities like Kubernetes still have releases, as opposed to a pure every-commit-goes-into-production CI/CD model. That development process has evolved from traditional waterfall into something that's trying to move much more rapidly. It's about enabling developers to move quickly, and about understanding that code changes in small incremental steps are more understandable; you can actually mitigate some of the risk associated with change if you're preparing yourself for constant change. And then there's the platform that you run these applications on.
These
applications
with
on.
Historically,
we
have
vertically
integrated
stacks.
You've
got
the
risk
UNIX
world,
and
then
we
move
move
to
something
like
like
hosted.
So
you
still
have
a
server
environment
that
server
environment
might
be
x86
and
Linux.
We're getting closer; we've never been closer. And one of the things that ties all these things together is containers. Containers create a lightweight footprint for running your services, your microservices. They give you the ability to build and deliver an image as part of a pipeline, so in your DevOps process you could be building an image, deploying the image, and leveraging those APIs to scale your application.
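As an illustrative sketch of that pipeline idea (the service name, image, and registry below are hypothetical, not from the talk), a Kubernetes Deployment manifest is the kind of artifact a CI/CD pipeline builds and applies, and the same API later drives scaling:

```yaml
# Hypothetical Deployment manifest: the pipeline builds and pushes the image,
# then applies this object; the orchestrator reconciles the declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                 # hypothetical service name
spec:
  replicas: 2                      # scaled up or down via the same API later
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0  # image produced by the CI pipeline
```

Scaling then becomes just another API call rather than a provisioning project, e.g. `kubectl scale deployment/my-service --replicas=10`.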
(Not true; I suck at graphics.) I think one of the things to reflect on, from an industry point of view, is where we've come from.
We've come from a world where the operations teams and the developer teams are really just completely separate worlds, and while ultimately they may report up through the same CIO, that may not even be true: you may have lines of business owning developers that are really independent from the IT operations side of the house. And that's worked; it's gotten us to where we are now.
It's allowed IT operations teams to focus, it's allowed developers to focus, but it's starting to break down. We have unprecedented scale in terms of the number of users, and we have users with expectations of consistency whether they come in across a mobile platform, their laptop, or, if it's a retailer, maybe even in the store. The world is really changing, and this separation is not serving us as well as it may have in the past.
So I wanted to talk about some specific concerns, or challenges, around what we need to do to get to that kind of perfected IT ops and developer collaboration story. One of the first challenges is how we look at infrastructure.
Historically, infrastructure has been there to support the applications, and today infrastructure ends up being an inhibitor or limiter to how we build and scale our apps. Why is that? I think in the past an infrastructure component like a physical server, the rack and stack of a server, was a long, time-consuming activity. So we brought virtualization into the data center; it enabled some consolidation and the first steps towards creating programmatic interfaces to the infrastructure. But those programmatic interfaces didn't really match one hundred percent of the needs of developers, so you ended up building an infrastructure platform and still running it under capacity.
The application is ready to consume the APIs available to the developer to expand how the application is deployed; it's really kind of an impedance mismatch, and so you end up with unused capacity. You're underutilizing infrastructure, and you're not meeting the customer requirements in terms of how you scale out. Modern applications really need to take advantage of that underlying infrastructure and scale elastically. You hear the Pokémon Go example being kind of the perfect example for Kubernetes: within a week of launch, the user base was well beyond their wildest dreams in terms of adoption rates. How do you do that if you're either racking and stacking physical gear, or trying to allocate virtual machines and run them long-lived?
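With container orchestration, that kind of elasticity can be declared rather than provisioned by hand. A hedged sketch (the target name and thresholds here are hypothetical): a Kubernetes HorizontalPodAutoscaler grows and shrinks a service with demand:

```yaml
# Hypothetical autoscaler: grows the Deployment when average CPU
# utilization crosses the target, and shrinks it back when load drops.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: game-backend               # hypothetical service name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: game-backend
  minReplicas: 2
  maxReplicas: 500                 # headroom for a Pokémon Go style spike
  targetCPUUtilizationPercentage: 70
```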
Maybe that's not the best use of our time. We could actually capture just the essence of the application and its direct dependencies and use that as the fundamental building block. I think containers give us this unique opportunity to express the application's requirements to the infrastructure; we've been trying to do this, trying to figure it out, for years.
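One concrete form of expressing the application's requirements to the infrastructure is the resources section of a container spec. A minimal sketch, with hypothetical names and numbers:

```yaml
# Hypothetical pod spec fragment: the container declares what it needs,
# and the scheduler places it on a node that can satisfy the request.
spec:
  containers:
  - name: api
    image: registry.example.com/api:1.0
    resources:
      requests:
        cpu: 250m          # guaranteed share the scheduler accounts for
        memory: 256Mi
      limits:
        cpu: "1"           # hard ceiling enforced via cgroups
        memory: 512Mi
```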
The other thing that containers do is build some level of consistency. If you think back in time, we've been trying to figure out how to reuse code forever. Initially we had object-oriented programming, this notion that you'd create some class, that class was perfectly abstracted, and it would be reusable inside your application and in other applications; and we've had some level of success.
I think the other piece here is that containers and microservices really, to me, go hand in hand. Containers give you the opportunity to capture just an immediate service and its dependencies. Whether the notion of microservices is new, or whether it's just a reimplementation of service-oriented architectures, is up for debate; we've got beers and sausages in today.
A
It's
great
fodder
for
for
beer
talk,
but
the
notion
that
we're
trying
to
build
these
aggregate
applications
out
of
a
collection
of
services
is
really
powerful
and
allows
us
to
move
quickly.
It
allows
us
to
run
with
independent
teams
at
independent
life
cycles,
using
different,
even
tech
underlying
technologies.
Different
application
stacks-
and
this
is
where
we're
seeing
the
real,
the
real
power
and
the
value
of
containers
and
container
up
orchestration.
Another struggle is on the operations side: the operations teams are trying to keep up with this crazed pace of development. Developers have more and more tools at their fingertips, and the operations teams are just being overwhelmed: how do I manage all these different apps? How do I manage scaling? If you imagine a single monolith, it's relatively easy to start and stop the thing. Now you disaggregate, or decompose, that application into 100 services.
A
You
now
have
a
hundred
things
to
start
and
stop
and
they're
scaling
independently
of
one
another,
and
if
you
were
doing
that
with
your
old
techniques,
you
know
this
is
this
is
a
recipe
for
for
failure,
one
mechanism
and
the
way
I
look
at
that.
That
analogy
of
you
once
had
a
single
application.
Now
you
have
a
hundred
one
of
the
things
that
you're
creating
for
yourself
is
additional
complexity
and
that
complexity
is
managed
through
consistency
through
standardization
through
API,
so
you're,
starting
and
stopping
things.
Your primitives are relatively consistent regardless of the content of a container, and these are the building blocks, the tools, that help operations teams really work more efficiently. I think one of the things that's important to look at is where we are deploying modern applications. Modern applications run spanning some combination of physical infrastructure, because you probably still have a back-end database, or some historical, business-critical transaction processing system, that may be running, you know, even on a mainframe.
You've got a virtualized part of your infrastructure, whether that's using an open source virtualization platform or something else. You've got some notion of cloud, which could be internal to your organization, a private cloud, and then you've got public cloud, and your applications are somehow spanning all these things. These different runtime environments create additional complexity and concerns for the operations team, especially when they're thinking about compliance: not only how do I run this, but how do I make sure I'm not violating some critical business requirement?
Even if you're limiting yourself to just a public cloud footprint, you're still spanning multiple different types of public clouds, and from an operations point of view you're creating challenges for yourself in understanding what's common and what's unique across these different runtime environments. So I would assert that the complexity of all these different runtime environments is here to stay, whether it's just multiple public clouds, or whether it's across all the different footprints: physical, virtual, private, public.
That complexity is real, and one of the things that helps you manage it is standardization, having a common platform, and from my perspective that common platform is Linux. Linux is tried-and-true: we're familiar with it, we understand it, and we have many applications that run on Linux. I think the enthusiasm I see around container platforms is that we've found a way to stretch Linux beyond the single-server world. In that world, Linux created an abstraction between your physical infrastructure and your application, so you could have your Dell, HP, IBM, whatever x86 hardware underneath, and we got away from that vertically integrated RISC/UNIX world. We've taken that abstraction and that concept and we're stretching it across a data center, potentially across multiple data centers, public and private. The core abstraction is Linux: the applications run on top of Linux, and they're managed through some container orchestration platform.
So the third piece is scaling the developers in your organization. The pace of development is increasing, you're potentially adding developers to your organization, and how do you really scale effectively, especially in a world where you're potentially creating multiple teams solving similar problems? As you decompose your application into a bunch of services and you've got the kind of proverbial two-pizza teams, there's probably going to be some overlap of functionality across those teams. Ultimately, if you look at what we're building, we're building applications, maybe in a different way, but following some really consistent patterns that we have a lot of experience with. There are services in your application; the services are connected through messaging; you've maybe got some analytics component to your application, some clustering component, maybe some batch processing.
These are all things we've got a lot of experience with in the industry, and the developer's consumption of these different portions of an application is potentially getting in the way. The developer really just wants to focus on writing code, not necessarily on understanding the details of how to set up the messaging infrastructure or how to set up clustering. I think clustering is an interesting example.
We've talked about this one before in the context of OpenShift. Look at those multiple public, private, and virtualized footprints where we could run our application, and consider something like JGroups, a primitive that could be used in something like Infinispan, the distributed key-value store. Clustering is at its core, and that clustering historically required multicast, and multicast may or may not be available in something like a public cloud. As a developer, needing to understand the differences of where multicast is available and where it's not is just going to slow you down; it's an impediment to your daily activities. It helps to have an operations team that really understands these different environments and understands when you might need to do something like the OpenShift ping protocol for JGroups, as opposed to the multicast clustering solution, or just use DNS registration to register the different service components that are currently available.
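For JGroups specifically, that choice shows up as a swap of the discovery protocol in the stack configuration. A hedged sketch, not a complete stack, and with hypothetical addresses and service names: where an on-premise network with multicast might use MPING for member discovery, an environment without multicast can substitute DNS-based discovery such as DNS_PING, querying a DNS name that resolves to the currently available members:

```xml
<!-- Fragment of a JGroups protocol stack: discovery layer only,
     other protocols (transport, failure detection, etc.) omitted. -->

<!-- On a network where multicast is available: -->
<MPING mcast_addr="228.6.7.8" mcast_port="46655"/>

<!-- On a public cloud or Kubernetes environment without multicast,
     discover members through DNS instead (hypothetical service name): -->
<dns.DNS_PING dns_query="my-service-ping.my-namespace.svc.cluster.local"/>
```

The application code stays the same; only the operations-owned stack configuration changes per environment, which is exactly the kind of difference a developer shouldn't have to carry in their head.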
So I think the summary here, and we're here at OpenShift Commons, is that the foundation for what I would call continuous innovation is this new platform: a combination of Linux, containers, and container orchestration. In our world that looks like OpenShift, with Kubernetes and Docker containers at the core, exposing Linux as a runtime environment for applications.
It's Linux at the bottom of the stack for operations teams, and it's creating this communication mechanism, this programmatic way, for developers to inform the operations teams what their application deployment needs to look like, and for the operations teams to deploy and help manage that infrastructure, allowing area experts to focus on their ability to really improve efficiency overall, for developers and IT ops. And I think that's it. I have a few other slides here, but considering we're going slowly and you can't see them, I'll stop there. Thank you.