Description
Join Andre Tost, IBM Cloud Paks CTO, in a brief overview of Cloud Paks, as well as a discussion of lessons learned while containerizing software on OpenShift.
A: We have multiple public clouds, and, yes, usually the customers I talk to have standardized on one particular cloud provider, but will also admit that they have bits and pieces running elsewhere. So the answer to that dimension, if you will, is that we need to have software that runs in all kinds of places. And the second dimension of this is: what is that software?
A: There's so-called cloud native application development, and that's where we're going to put all of the investment, only to then realize that most of the money in the IT budgets actually goes into maintaining existing applications, keeping them alive. So I feel like here again we're kind of all over the map, if you will, where existing applications sometimes are being lifted and shifted into new places without really a lot of change.
A: Sometimes they get refactored to account for new runtime models, or they get enhanced using additional functionality that becomes available, and then obviously this new cloud native application development style, using 12-factor apps, microservices, and so forth, is also still there. So the second dimension, if you will, is what kind of workload, and it means everything from old to new and everywhere in between.
A: So if you think of that as kind of the landscape that we're in, we need to have solutions that can run in all kinds of places and can run all kinds of application workloads. The third dimension to all this is just plain technology. We made a pretty big decision, I think a number of years ago, that we were going to go all in on containers, and we do believe in that, though we can argue over how quickly it will happen.
A: But we do believe that ultimately containers will be the de facto and default way that companies use to run software of all kinds. I mean, there are always going to be exceptions, but that's going to be the default way, and it gives us a number of benefits, of course: it's more lightweight, it's portable, and so forth, and you see the bullet points listed here. But simply saying I'm running something in a container doesn't in itself provide a lot of value.
A: But for now I think the whole industry is working with Kubernetes. So, three dimensions, and we actually then said: okay, we're going to take our entire software portfolio, we're going to containerize it, and we're going to run it on Kubernetes. I'll talk a bit more about how we did that and how that went. In that time frame also came the decision that IBM was going to acquire Red Hat, and even though Red Hat is still a completely independent entity...
A: ...obviously the stack that Red Hat has is something that we want to leverage with our software offerings. And if you think back to the dimensions that I just mentioned, Red Hat offers all of that: a strong stack that starts out with Red Hat Enterprise Linux, with OpenShift on top as the leading Kubernetes distribution, if you will, and then the ability to run that in all kinds of places, with support for all kinds of flavors of applications. So it was kind of the perfect fit to become the underpinning...
A: ...if you will, of the software that we're building, and that has kind of culminated in this picture. This is obviously an architecture, right, but if someone were to ask, okay, what's IBM's software architecture that we're building against right now, at the highest level this would be it. We have this strong notion of the platform in between, which is OpenShift based on RHEL.
A: That gives us support for all the stuff that you see at the bottom, so we can run it in all kinds of places, and it acts almost as a level of indirection, if you will, from our software, so that we can now take the hundreds, literally hundreds, of offerings that we have in the market and have a common landing place for them, a common target that that software can get integrated against as part of containerization.
A: I really want to go back to the beginnings of all of this. We said, let's containerize everything, and at the time we also did something that I would say is still, to this day, relatively disruptive for IBM, in that our different development teams, and we have teams and labs all over the globe, were kind of used to being fairly autonomous in their decision making, in how you are successful in your respective market with your respective customer audiences. And we now said: if we containerize...
A: ...let's do that in a consistent way across the board. Let's all follow the same standards, let's all follow the same best practices and conventions. And to this day that is sometimes a challenge, simply because it's such a large team, literally thousands of developers across the globe, to bring together and have all follow a common model. So we've been building this up for the past few years; like I said, we've established a central team, which I'm part of, that is basically defining the rules...
A: ...if you will, for how we do this. And it started out with something as simple as, and I remember the conversations well, a survey of all the container images that the teams are building: what are they using as the base OS? It came back, and I think we found at least 16 or so, if I remember correctly, 16 different base OSes that were used across the board, each with their own pros and cons and their own characteristics, and we said we want to consolidate that.
A: We want to make it all based on a common base OS, and needless to say, we picked UBI for this, which is essentially RHEL. We couldn't use RHEL until that point, until UBI came out, because there was no way to redistribute RHEL as a base OS with our commercial offerings, and that's why we were almost forced into other options. But as UBI came out, we went all in on it.
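Rebasing an image onto UBI typically comes down to the `FROM` line and the package manager; as a hedged sketch (the package, user, and application paths here are illustrative, not from the talk):

```dockerfile
# Red Hat Universal Base Image as the common base OS
FROM registry.access.redhat.com/ubi8/ubi-minimal

# Install only what the offering actually needs
# (microdnf is the package manager on ubi-minimal)
RUN microdnf install -y shadow-utils && microdnf clean all

# Run as a non-root user, a common certification requirement
RUN useradd -u 1001 -r -s /sbin/nologin appuser
USER 1001

COPY --chown=1001:0 app /opt/app
ENTRYPOINT ["/opt/app/run"]
```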
A: We took all of our images and rebased them. We're not at 100% at this point, but we're fairly close, so that it's not even a debate anymore that any team that goes and writes containerized software uses UBI as the base image, and then we also go and certify that, but I have a separate couple of slides on that. So it starts there, and then we ask: how do we orchestrate things? How do we package things? How do we start adding some operational characteristics to all of this?
A: It's not 100% automated and self-service, but again, that's the goal of what we want to get to. In detail, it's all the things that you see on this slide, so there are guidelines and conventions. Overall, I think it's around 150 or so criteria that we've developed, that we are mandating or prescribing, if you will, to all the development teams within IBM, to follow in support of (a) being consistent and (b) being what we call enterprise grade. It goes into many things; security especially is one that I like to point out.
A: I believe, last time I checked, we had about 6,000 distinct images in that process and on that site, and, interestingly, when we first started certifying, we actually brought the site down for a while, because it wasn't really ready for the kind of volume that we were throwing at it. That's been fixed now, but it was a lot of work.
A: And I hinted that we're transitioning away from Helm; we're now going towards operators. We're all in on the whole Kool-Aid of operators being kind of the management pattern that we want to implement across the board, just like it's implemented in OpenShift today already. As part of that, we will certify all of our operators.
A: We have just begun on that journey, and we're not operator-based across the board yet; again, I'll talk a bit more about operators later. But that then becomes kind of the foundation for this add-on certification that we do, and that's why it's shown here as kind of a stepping stone. So we start with the image.
A: Then we go into the operator, and then we look at how everything gets deployed, orchestrated, and secured, how we deal with rolling upgrades, how it's being versioned, all those kinds of things, and we basically have a rule book, if you will, that shows how we're doing that. Now, we've applied this for a while, and, like I said earlier, what it basically comes down to is that we've taken the capabilities and offerings that we have across IBM, across large parts of IBM, not all of it, and put them into these things that we call Cloud Paks.
A: Those are the products that we sell; they're assemblies of related capabilities. I always say each Cloud Pak supports a specific user experience for a specific persona and a specific set of use cases, but it all goes on top of what I just said in terms of our own certification and containerization, running it on OpenShift, and so forth. And that's basically how we then come to this view, and we say we have these six products, six pieces of software...
A: ...if you will, that run in all kinds of places by sitting on a common platform, namely OpenShift, and we claim that they're enterprise ready, so to speak, and we have the process that we used to make that work to show for it.
A: So that's kind of what we have out there today, and, like I said, we've been on this journey for a few years now; the Cloud Paks have been out for a little less than a year, so they're still a fairly new thing for us to have in the market.
A: But it is definitely something that we declare as kind of our strategic way of bringing software forward: Cloud Paks. So if you buy any piece of software from IBM, it will be a Cloud Pak and it will run on OpenShift as kind of the default mode, so to speak; again, there are always exceptions to that rule.
A: So now that we're about a year into this whole thing, the question, then, is, all right: what do we do next? What do we want to do when we grow up, so to speak? And so we've laid out, and I have that on this slide, a set of technical priorities, things that we work on. This is the bucket list, if you will, of what we're doing right now, and I've sorted them into four categories.
A: And the first one is all about... I think what we've noticed is that, if you go back to this picture here, you can see that we've built them as verticals; like I said, these verticals are all in support of specific personas and experiences.
A: What we found, though, is that a lot of times, and this is not just our customers, this is our own IBM teams, for example, or our partners, or GSIs, who are writing applications, they have a more horizontal view of everything. They say: there is a number of services at my disposal that I want to be able to orchestrate in a way that supports my particular application.
A: In that respect, one thing that I want to point out is something we call service binding. It sometimes sounds like a tiny little detail, but I think it's tremendously important that we find easy and automated and dynamic ways of connecting services to the applications that take advantage of these services, and that we do so in a really hybrid way; in other words, do so across on-prem, off-prem, and so forth, and have a standard for doing that.
A: There is an activity that we're involved in, together with Red Hat and a number of other vendors, to create what's called a service binding; there's a spec for it, and then ultimately there's a Service Binding Operator, which is aligned with using the operator pattern, if you will, to implement those kinds of things. And that should give us a common way of then exposing services and service instances to applications.
A: Applications want to consume them in a consistent way, and again, we want to be able to do this binding at runtime, if you will, and not at coding or development time. So that's a big part of this second bucket.
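The spec he mentions expresses a binding as a custom resource that connects a workload to a backing service. A hedged sketch (the API group matches the Service Binding Operator's, but the workload, the Postgres service, and all names are purely illustrative):

```yaml
# Illustrative ServiceBinding resource; names are examples,
# not from the talk.
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: orders-to-postgres
spec:
  application:            # the workload that consumes the service
    group: apps
    version: v1
    resource: deployments
    name: orders
  services:               # the backing service instance to bind
    - group: postgres-operator.crunchydata.com
      version: v1beta1
      kind: PostgresCluster
      name: orders-db
```

The binding controller then projects the service's connection details (credentials, host, port) into the workload at runtime rather than at development time.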
A: So that becomes kind of the operational model in terms of how you interact, how you manage, how you upgrade, how you scale, how you secure your software, and the mechanism, or one of the important mechanisms, that we use to do that is operators, of course. And again, it almost goes without saying; I think of it as a design pattern that is very helpful in doing end-to-end lifecycle management. Now, talking about that, I mentioned Helm...
A: ...if you will, and what I think is really powerful about the operator model is that it allows you to keep track of the resources that are out there, to basically watch and monitor resources and then be able to do automated reconciliation on those resources in case something in my target environment changes, and to make that a way of enforcing that the desired target state, so to speak, stays in place. And to some degree that's just an extension of something that Kubernetes has been doing from the very beginning.
A: Kubernetes has a set of abstractions, like Deployments and StatefulSets and so forth, that it defines, along with runtime elements, controllers, that make sure those things actually remain at their desired target state, so to speak. We're now expanding that, if you will, onto the workload level, using the same mechanism, and that ultimately will get us into truly end-to-end lifecycle managing those target workloads to a degree that Helm simply couldn't do for us.
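The watch-and-reconcile idea he describes can be sketched, stripped of any Kubernetes machinery, as a loop that repeatedly compares observed state to desired state and corrects the drift. This is only an illustration of the pattern; all names are invented, and a real operator would read these states from CRs and the cluster API:

```go
package main

import "fmt"

// desired and observed replica counts for a set of workloads;
// in a real operator these come from CRs and the cluster API.
type state map[string]int

// reconcile drives observed state toward desired state and
// reports each corrective action it takes.
func reconcile(desired, observed state) []string {
	var actions []string
	for name, want := range desired {
		if got := observed[name]; got != want {
			actions = append(actions,
				fmt.Sprintf("scale %s: %d -> %d", name, got, want))
			observed[name] = want // apply the correction
		}
	}
	return actions
}

func main() {
	desired := state{"frontend": 3}
	observed := state{"frontend": 1}
	// a first pass detects and corrects the drift...
	fmt.Println(reconcile(desired, observed))
	// ...and a second pass finds nothing left to do.
	fmt.Println(len(reconcile(desired, observed)))
}
```

The key property, as opposed to a one-shot Helm install, is that the loop runs continuously, so any later drift in the environment is corrected automatically.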
A: I sometimes get feedback, and I've heard vendors saying, that they feel operators are kind of a Red Hat proprietary, OpenShift-only thing, and that we're locking them into this platform when we're doing that. I usually don't think of it that way at all; I seriously think of it as a design pattern that goes across all of Kubernetes already, one that we're now expanding on. It's not a locked-in, proprietary technology at all...
A: ...in my opinion, but again, we could probably have a debate on that. So that then helps us drive things like install, update, and migrate, things like that. We also need to do some metering of our software and so forth, so that all kind of ties into this operational model that's based on the idea of using operators to do it.
A: The third bucket, then, is what I call hybrid cloud consumability. This goes all the way back to this runs-anywhere idea. Obviously we run things on all kinds of hardware platforms; we've done a lot of work just lately to expand both OpenShift and then our Cloud Paks onto our Power platform and onto mainframes running Linux on Z, so that we can truly run anywhere on-prem, regardless of hardware architecture.
A: Needless to say, we want that to be the best place to run OpenShift or any kind of software from IBM, but we also acknowledge the fact that there's a very hybrid world out there, and so we need to support other clouds. I just mentioned AWS and Azure, and they're just examples, but that's obviously a direct reflection of the reality and the landscape that we see out there, and we need our software to go with that, often enough.
A: We have a whole team that does that, that works with vendors asking: how can I relate and associate and affiliate my own solution with the Cloud Paks that IBM has, and what does that mean in that context? We're also looking to publish at least parts of our certification criteria, or rather we've already done that to some degree, because we do believe that they reflect a lot of the best practices that are out there in the industry, and there's no reason why we wouldn't let others benefit from that.
A: The experience is to plug these things into your application and use things like the service bindings I mentioned earlier, even though service bindings are something that we still have to implement and support across the board; that's work that's yet to come, as I mentioned.
A: So we are doing a ton of work, and this is just a really high-level mock-up that we created that shows how we want to have all our capabilities represented in the OperatorHub, have that installed in the cluster, so to speak, and make that the way you install and then utilize and consume capabilities.
A: Further to the right... but that's not really the point. The idea is that we ultimately need to get to the stuff on the right here, because to some degree the things on the left are almost table stakes. Yes, we need to install and deploy, and we spend a lot of time discussing that, but we need to get into: how can I now use this ability to watch resources and do reconciles and so forth, and how do I take full advantage of that? And to us, to me...
A: ...it's always, then: how do I do that consistently across, you know, 215 different capabilities? And I think a lot of this, to be honest, is not so much in how I implement the operator; it starts with the modeling of the CRDs. I feel like... I just had a conversation this morning with a team where I was thinking...
A: ...we need to spend more time looking at the CRDs, making sure we have them appropriately modeled, that they expose the right level of detail, detail that is meaningful, to a consumer, and it's not necessarily easy to do that. And I think, as we go to the things further to the right, we're probably going to have to augment and enhance the CRDs, and then the controllers that go with them, to do that.
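To illustrate the modeling concern: the CRD defines the consumer-facing surface of a capability, so the design question is which knobs to expose. The fields below are invented for illustration, not an actual IBM CRD; the idea is a small, meaningful spec rather than a dump of internal settings:

```yaml
# Illustrative CRD sketch; group, kind, and fields are made up.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: messagequeues.example.ibm.com
spec:
  group: example.ibm.com
  scope: Namespaced
  names:
    kind: MessageQueue
    plural: messagequeues
    singular: messagequeue
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                # consumer-level intent, not internal tuning knobs
                size: {type: string, enum: [small, medium, large]}
                highAvailability: {type: boolean}
```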
A: Because you still just have a Helm chart; you're just executing it in a different way. So we now have... and there was a bit of a skills burden, at least for some teams, to go with Go, so to speak, but I think we'll end up with the majority of our teams writing their operators in Go; that seems to become the standard that we're going for.
A: Ultimately, and this is my last slide, or my next-to-last slide: we want to have this all in the Red Hat Marketplace.
A: If you haven't seen it, that is a way of letting you register your clusters and directly connect them, and then initiate the install of the operator, not of the end workload, but the install of the operator, from within this marketplace. This marketplace is very operator-centric, by the way, so we're all in on that as well. We have only one topic in there today, the one that you see here, and we're going to be adding others over time as we get to it.
A: So this gets us to 30 minutes, and then... I just jotted down a bunch of topics that go more into: what have we learned along the way? What are the challenges that we've seen? What went...
B: Thanks, Andre. I wanted to mention really quick, because I'm so glad that you said Red Hat Marketplace, and you're talking about integration between all the different Cloud Paks, as well as the thousands of developers: the integration with Red Hat Marketplace was also a lot of work. So it's impressive how many different pieces all the teams took on.
A: Yeah, I mean, for Red Hat Marketplace, on the one hand it fit right into what we already had as our strategy; like I said, we wanted to have operators as a means to not only deploy but also manage everything we have, so to speak. And then we started meeting with the Red Hat Marketplace team, who basically said: we're going to have a catalog, and the things in the catalog are going to be represented by operators.
A: There's also the idea that in the Red Hat Marketplace they want to ensure a certain level of maturity and quality of what's in there, and that's been done by basically saying that one aspect of publishing your software into the marketplace is that your operator has to be certified. And so we then did a lot of work on making sure that we could actually do that: what's the process for it, and what can the certification process deal with?
A: Like I said, we have the battle scars to show from when we went for image certification, in terms of the volume: can it handle the volume, do the APIs hold up, and so forth. To this day we're having regular exchanges with the certification team at Red Hat. And then again, sometimes we say, well, we're IBM, but we're just like any other vendor to Red Hat, and in that context we are.
A: Customers will say: we will not pull things from the public registry. We need to have things in-house; we need to scan them, we need to vet them. We want to control and govern what our internal users consume, and therefore we will never let anyone go out to the Red Hat Marketplace and just push a button and say, deploy this commercial software, because of the structure that they live within. So that's one of the things that I still feel we need to figure out, and I know it's on the roadmap.
B: I know, I mean, that's a question that's asked frequently for the public sector and banks, and I know it's part of your roadmap to get there. We have a couple of other questions; the answer to that one is yes. And: IBM Watson software, will it be available as a Cloud Pak, or is it part of a Cloud Pak, so not its own Cloud Pak?
A: Yeah, Watson is kind of an interesting one, because Watson is a pretty broad term; it's branding that is applied to many distinct pieces of functionality. To be honest, the answer is: we have bits and pieces in our Cloud Pak for Data, and we have bits and pieces in our Cloud Pak for Automation.
A: Whether they're going to end up being sold as part of the Cloud Pak depends on the individual piece; some of them are, some of them are not. It's not really our goal, I don't think, to have every piece of software IBM has ever written in a Cloud Pak, but we have it as a rule that it all needs to follow: it all will be containerized, and it all will run on OpenShift.
A: So we've basically said: we declare a bunch of PVCs, and we can tell you what kind of access modes we need and what kind of storage types we need, and then it's up to you, Mr. Customer, to figure out how to plug your storage in there. And to some degree that's still the case, because, like I said, it makes sense to have that level of indirection and not have a tight dependency between an application and the storage that it plugs into. So I think that is good.
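That indirection is what a PersistentVolumeClaim expresses: the application states what it needs, and the cluster maps it to real storage. A hedged sketch (the claim name, size, and storage class are illustrative):

```yaml
# Illustrative PVC: the application declares access mode and
# capacity; the storage class decides what actually backs it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany        # e.g. shared access across pods
  resources:
    requests:
      storage: 20Gi
  storageClassName: ocs-storagecluster-cephfs  # example class
```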
A: At the same time, I feel that we need to be a bit more opinionated about it, in terms of being able to go out to customers and say: here's storage that we tested with, here's something that we know works, and here's something that doesn't really quite fit our needs for, say, enterprise production-level use. And in that respect, what I can tell you is that what we're using internally as kind of our default test bed everywhere now is OCS.
A: OCS is separately licensed, of course, as a separate Red Hat product, but it's the easiest for us to plug into, because it gives us all the access modes, file types, and everything else that we need. And then, from an IBM perspective, we have what's called the Storage Suite for Cloud Paks, which has our enterprise storage management products that plug into that seamlessly.
A: Overall, though, companies who embrace containers, and say OpenShift and then Cloud Paks, obviously wouldn't necessarily make that the reason to replace their existing enterprise storage management systems, so there's always this aspect that we need to be able to connect to basically anything that's already out there. Now, for air-gapped storage, if I were to interpret the question, what that means, basically, is when we say we want to run in an air-gapped mode; and like I said, that means we cannot pull images from public registries.
A: That means customers need to deploy their own private registry and all that. Red Hat has an offering in that space, Quay, and there are others; I've seen a large mix of them, and I usually say it doesn't really matter. We need to have the ability to mirror, if you will, or clone, all the artifacts that make up a Cloud Pak to a local place, and in general the artifacts that we have are the images themselves and then the metadata that goes with them.
A: Basically, you can take the inventory of a Cloud Pak, and it gives you a list of all the images, so you can use something like oc mirror to get those, and then you clone GitHub repos, and then you can store them locally in whatever place you want. The challenge, really, then, is to make sure that all the references get updated, and so what we're looking at, rather than updating every pod spec and pointing it to a new image registry...
A: ...now that it is local, is using the policy from OpenShift that I keep forgetting the name of; I should really write this down, because I'm drawing a blank every single time. But there's basically a way where you can, at the cluster level, define that pulls from certain image registries should simply be redirected to a different place, kind of like a DNS...
A: ...if you will. And that seems to be easier to deal with than saying I need to go into every kind of resource definition and make all the updates so that everything points to the right place. So, in general, I would say our air-gap strategy is based on, first, having an inventory that tells you what to pull from where, then using tools like oc mirror, or maybe even Skopeo, to clone them, and then, secondly, using cluster-level definitions to update the references accordingly.
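The cluster-level redirect he is drawing a blank on is most likely OpenShift's ImageContentSourcePolicy. A hedged sketch, with the mirror hostname invented (the source shown is IBM's entitled registry, used here only as an example):

```yaml
# Illustrative ImageContentSourcePolicy: pulls against the public
# source are transparently redirected to a local mirror.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-mirror
spec:
  repositoryDigestMirrors:
    - source: cp.icr.io/cp                     # public registry
      mirrors:
        - registry.internal.example.com/cp     # air-gapped mirror
```

With this in place, pod specs can keep referencing the original registry, and the container runtime resolves the pull to the mirror, which is the "kind of like a DNS" behavior he describes.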
A: I mean, these things easily become a religious debate, I feel, sometimes. I've seen a lot of things out there in public, like: what's the advantage of a container, or what's the advantage of Kubernetes, and then they describe all the magic that happens, and I'm thinking, well, that only works...
A: ...if you have an application that's actually architected in the right way, that has stateless microservices, that has microservices to begin with, right? That's not true of all the software that we have out there, and I'm sure that holds for any customer and any enterprise as well as for vendors like ourselves: we have software that is not architected for microservices, that does not necessarily have strict separation of stateless versus stateful elements, and that is monolithic.
A: Can I run multiple replicas of that? Does it even support kind of an active-active mode, in a way that I can scale it through multiple pods, for example? Not every application is written in a way that does that. I've seen commercial applications, industry applications, very successful ones, that go and store their state in the heap, right within their code, and then you can't duplicate that.
A: You can't have two of them and put a load balancer in front; that just doesn't work, it's not architected that way. So sometimes I feel like we have to make sure that the application actually fits all the bells and whistles that Kubernetes and OpenShift give me, and we've been struggling with that, honestly; we've even seen offerings that had to be rewritten because they just wouldn't fit this model. So there is a bit of pressure on existing software to take a look at the architecture.
A: So that's the first one. The next one here is the base OS and scratch images. Like I said, we went with UBI, and that has worked pretty well. One question that we had in the beginning, and it came up every day, was a team would come and say: we have three packages that we need; they are in RHEL, but they're not in UBI. Where do we get them? And then we got to the point of saying: okay, you can add RHEL packages to a UBI base, and that's also fine, and that's legal.
A: We then got into the question of the flavors of UBI; there's a mini one, or minimal, and to a lot of our teams that's not minimal enough. They need something much, much smaller, and that's a discussion we haven't had in a while; we should probably pick up on it to see where it's going. And then there's the use of scratch images to begin with, where some say: I use a scratch image, can I certify it? The answer is no, you can't.
A: You can only certify if there's UBI inside, otherwise it won't work, and things like that. But overall, I would say having a common base OS has served us really, really well; I can only recommend it. It makes life a lot easier if you have a large portfolio, also in terms of vulnerability scanning: there are a lot fewer vulnerabilities in UBI than in other popular open base OSes, let me just put it that way, and I think that's a reflection of the maturity.
A: ...really care, to be honest. Next point here, and I'm keeping an eye on the time; we've got about 10 minutes.
A: Now, probably at the top of the list: we have assumed, we are expecting, and I think we still are, and we see some proof of it, that most customers will ultimately end up with a large number of maybe relatively small, or small in scope, so to speak, clusters, so that each LOB, or maybe even each application, will get its own cluster and then get a high degree of authority on that cluster, and basically mess with it as they see fit, so to speak. And we see that model.
A: I talked to the head of infrastructure for a large Wall Street outfit, just this week was it, or last week, who said that's kind of what they do: they have self-service in which they hand out clusters to individual organizations, they give them cluster-admin rights, and they say, go knock yourself out, but don't break anything.
A: We have also seen the opposite, though, where we've seen enterprises that have very few, very large clusters and have a central cluster admin team that organizes and manages everything, and they are much more interested in isolating the things that go onto this cluster from each other. They use namespaces to do so, and that can only go so far, and it's led us a bit into a conversation about how we need to structure our software and our installs in terms of what namespaces things go into.
A: ...where something has rights onto a namespace other than the one that it was deployed into; and frankly, in our software we have cases where that's happening, where we need to do cross-namespace kinds of things. And so I think it led us, like I said, to this somewhat philosophical debate: how good is OpenShift, how good is Kubernetes, and I think this is mostly a Kubernetes problem, to be honest, at multi-tenancy? And multi-tenancy to me is a degree of isolation of workloads, right?
A: That means the operators kind of need to live close to the workload that they manage, but then all of them have one or more CRDs that all of a sudden have cluster scope, and it makes it impossible, to be honest, to run, for example, two different operators with different versions on the same cluster, because they're going to trample over each other, starting with the CRDs.
A: So I feel like there's work in general to be done in the Kubernetes and OpenShift space, if you will, to figure out what the right model for multi-tenancy is and how I can isolate things from each other. Right now, the way we work best is in situations where we can just basically take over the cluster, and, like I said, in some cases that works well; in other cases it doesn't. Footprint is a big one.
A: In general, I would say all of the deployments that we do are too big. Every single time I deploy a little test cluster and I play around and I deploy something and it breaks, and I go ask someone, saying, hey, I have this weird problem, I would say in 50% of the cases they say: yeah, your cluster is too small, make all your worker nodes bigger and life will be better for you. And I think that's a problem.
A
Sometimes we need these clusters to be too big, when, after all, the idea of containers is that they're lighter weight, less footprint, a more efficient use of your resources. I don't think we're necessarily always fulfilling that premise, if you will. And that goes into details about how I find the right fit: how much memory do I need to reserve?
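The rightsizing exercise he goes on to describe can be sketched as a small calculation: take observed usage, pick a high percentile, add headroom, and round up. This is a hypothetical sketch, not an IBM tool; the function name, the 20% headroom, and the 64 MiB rounding are all illustrative assumptions.

```python
# Hypothetical sketch: derive a right-sized memory request
# from observed usage samples (all values in MiB).

def recommend_request_mib(samples_mib, headroom=0.2, round_to=64):
    """Take the ~95th-percentile observed usage, pad it, round up."""
    ordered = sorted(samples_mib)
    # Nearest-rank 95th percentile of the observed samples.
    idx = max(0, int(0.95 * len(ordered)) - 1)
    p95 = ordered[idx]
    padded = p95 * (1 + headroom)
    # Round up to the next multiple of `round_to` MiB.
    return ((int(padded) + round_to - 1) // round_to) * round_to

# Example: a container that reserved 2048 MiB but typically uses ~300-420 MiB.
samples = [310, 295, 330, 420, 380, 300, 415, 350, 360, 390]
print(recommend_request_mib(samples))  # prints 512
```

The point of the sketch is the gap he describes: the workload reserved 2048 MiB but a data-driven request lands at a quarter of that.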
A
When we have real live use-case testing against this, we figure out that we've reserved x amount of memory and we're actually only ever utilizing, you know, y amount. So then, can we adjust the resource requirements in the individual resources to be optimized, and so forth? I feel like there's a lot of good work that can go in there to just reduce footprint overall. And by the way, one element of that is image size, right.
A
…into a single image. Now, you can have a discussion on whether an image is the right place to have a database inside or not, but that was it, right. But in general, I think we found lots and lots of opportunity to make images smaller, to pay attention to how Dockerfiles are being built, how images are structured, and so forth, and I think we're learning new things.
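One common way to pay that attention is a multi-stage build, so the build toolchain never reaches the shipped image. A hypothetical sketch (the application, paths, and base images are illustrative; UBI minimal is a typical runtime base in this ecosystem):

```dockerfile
# Illustrative multi-stage build: the Go toolchain stays in the
# builder stage; only the compiled binary ships.
FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Small runtime base keeps the final image footprint down.
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=builder /app /app
USER 1001
ENTRYPOINT ["/app"]
```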
A
Every day, in that respect. I already talked a lot about operators, so let me skip that for a moment here and talk about versioning, which kind of relates to the OLM topic, maybe, and then I'll save security for last. OLM in general is a beast. We cannot assume that we've mastered it.
A
There's this colleague of mine, Chris, that I work with, who is, you know, probably the best OLM expert that we have, and even he says he discovers new things in it every day that he can't explain to himself, and so forth. So I think that's good and bad at the same time. What's good is that we have good linkage into the operator community and the people that are driving OLM; we have great discussions all the time, and again, all of it in the open.
A
This is not a hush-hush IBM thing, and I think that's going really well. We find OLM really useful, and we want to have all of our operators in catalogs; we want them all to be described by CSVs and things like that. At the same time, there has been a lot of churn and change to it, especially over the last six months, and that makes life hard for us, right. So, for example, I think we just had a conversation about something coming that works in 4.6 but doesn't work in 4.5.
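The version-specific-catalog idea he mentions would look roughly like one `CatalogSource` per OCP stream, each pointing at an index image built for that version. A hedged sketch with hypothetical names and image references:

```yaml
# One catalog per OCP minor version, since a catalog relying on
# 4.6-only behavior cannot also serve 4.5 clusters.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operators-ocp-4.6           # hypothetical name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: example.com/catalogs/index:ocp-4.6   # hypothetical index image
  displayName: My Operators (OCP 4.6)
```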
A
So now we have a piece of software that needs to run on both. What are we going to do? We literally have a conversation about whether we need OCP-version-specific catalogs, because they can't span versions, and stuff like that, right. And that's been a challenge. Version control is one of those things: we want to use the operators as also kind of the version controller for the workloads that we're deploying, and so there are questions like, how can I do automated upgrades, and how does that work?
A
Or: oh, let me go back to the previous one and roll everything back. Right now that's not a first-class concept in OLM, so maybe we need to have a conversation about whether that needs to be built in, in general. And I think that's probably related to the fact that we put commercial software in there, right. We have versions for our software; they have release dates, they have release cycles, and we need to upgrade them.
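Absent first-class rollback, the usual lever in OLM is to keep upgrades under manual control so a bad version can at least be held back or reinstalled deliberately. A sketch with hypothetical names:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator                # hypothetical
  namespace: my-operator-ns
spec:
  channel: stable
  name: my-operator
  source: my-catalog
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual      # each upgrade waits for an admin to approve
  startingCSV: my-operator.v1.2.0  # pin the version the install begins from
```

Pinning a `startingCSV` with manual approval is a workaround, not a rollback; it only controls where an install starts and when upgrades apply.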
A
We need to make that seamless and so forth, and we're using, like I said, OLM as the vehicle for doing all of that, and that can get complicated, to the point that it makes my head spin every time. Last point: security scanning. We have more and more customers now that I encounter that are very sensitive to security.
A
They scan all the images, which gets me back to my comment about air gap. Sometimes they use different scanners than we use, and they find things that our scanners haven't found, and then there's an email that comes back saying, your image so-and-so has 250 violations of our security policy; we shall not deploy it, right. And then we go, whoa, where did that come from, and what is it?
A
So we have things where, for example, user ID assignment is a very popular one, right. Image scanners will go and check if there is a UID assigned to an image when it runs as a container. At the same time, we are doing a lot of the actual management of, you know, the security and the authorization.
A
Sorry, that's my phone here. The authorizations that we use, we're doing that with SCCs, and so there is a question of, okay, what do I need to do in my Dockerfile to make the scanner happy, even though I know that, ultimately, the SCC will control what kind of authorization that container will have anyway?
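In practice, making the scanner happy usually means declaring a numeric, non-root `USER` in the Dockerfile, even though on OpenShift the restricted SCC assigns the effective UID at runtime. An illustrative fragment (base image and paths are examples):

```dockerfile
# Scanners commonly flag images with no USER directive (or USER root).
# A numeric, non-root UID satisfies the check, even though the SCC
# may substitute an arbitrary UID when the container actually runs.
FROM registry.access.redhat.com/ubi8/ubi-minimal
# Make app files group-owned by root group and group-writable, so an
# arbitrarily assigned UID (which gets GID 0) can still use them.
RUN mkdir -p /opt/app && chgrp -R 0 /opt/app && chmod -R g=u /opt/app
USER 1001
```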
A
I also feel like, so, we just had this discussion, for example: we have a couple of pieces in our portfolio that cannot run with the restricted default SCC, and there are reasons for this, right. For example, we have an ultra-fast data transfer offering called Aspera. It needs access to host ports because it's doing some magic under the covers to ensure that data transfers can be ultra fast, so it doesn't run under the restricted SCC.
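Articulating such an exception on OpenShift typically means a custom SecurityContextConstraints object granted narrowly to that workload's service account. A hedged sketch; the names are hypothetical and the exact capabilities a real offering needs may differ:

```yaml
# Grant exactly one capability beyond "restricted" (host ports),
# and only to one service account, so the exception stays auditable.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: fast-transfer-hostports      # hypothetical name
allowHostPorts: true                 # the single exception being articulated
allowHostNetwork: false
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange               # keep the arbitrary-UID model
seLinuxContext:
  type: MustRunAs
users:
  - system:serviceaccount:transfer-ns:transfer-sa   # hypothetical grant
```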
A
So then we need to have a discussion about whether there can be exceptions, and if so, how do we articulate those exceptions, right? And then how will a corporate security team manage these things, and what are the right rules? Sometimes I feel like, as a collective industry, we're still learning how to truly run Kubernetes and OpenShift, and software on top of them, in a secure way, and what the right governance is that we need to apply to this.
B
I think we are, sometimes. But thank you again for giving the overview and diving deeper into development. We had a lot of questions, so that was great. And again, thank you, everybody, for joining us for this OpenShift Commons with Andre Tost. Am I saying your name right? I guess, after all this time. Okay. I'm Karina Angel, a product manager in OpenShift. Join us next Tuesday, same time, for a security discussion; we'll have Cloud Pak security experts here. So we'll see you next time. Thank you, everyone.