Description
Panel: Building Microservices with OpenShift on IBM Cloud
Moderated by Sai Vennam, with guest panelists from across IBM Cloud and upstream community leads, including Nigel Brown, Josh Mintz, Peter Klenk, Ram Vennam and Doug Davis.
OpenShift Commons
July 29, 2020
Hosted by Diane Mueller (Red Hat)
A
Hello, everybody, and welcome back to another OpenShift Commons briefing. Today we have something kind of different but really kind of exciting as well. We have a panel of folks from IBM Cloud that's going to be moderated by Sai Vennam, and we're going to talk about building with OpenShift and a lot of other topics on IBM Cloud. A lot of these folks who are on this panel are from a lot of the different upstream projects as well, so we're going to touch on some of that in the panel too.
A
So I'm going to let Sai introduce himself, and then everybody else do it, and then we are just going to ask them a lot of questions. If you have some, ask in the chat and we will relay them to the panelists. This is indeed live. So, Sai, take it away.
B
All right, thank you for that introduction, Diane. Hello, everyone, you guys are in for a treat today. My name is Sai Vennam and I'm a technical offering manager with IBM Cloud. I've had the pleasure of working fairly closely with all of our panelists today, and believe me, you're in for a treat. These are folks who are extremely motivated, engaged and talented in their respective fields within IBM, and I want to take a minute here to just quickly touch on each one of them and have them introduce themselves.
D
Hey, thanks. My name is Doug Davis. I work for IBM, obviously, as a technical offering manager for Knative, so obviously the technology of choice for me these days is Knative. I'm actually involved in delivering Knative in multiple different projects within IBM Cloud, so we'll talk a little bit about that later, hopefully.
B
All right, next up, let's go to Peter.
E
Thanks. Hi, I'm Peter Klenk, the offering manager for our DevOps tools in the IBM Cloud. That also involves reaching out to a lot of open source communities and some of the upcoming technologies like Tekton, and figuring out how we can deliver that really effectively through OpenShift, through the public cloud, as a service, or as roll-your-own. So that's my area. It's a fun time.
F
Hey there, folks, my name is Josh Mintz. I'm an offering manager for a whole cadre of databases in the IBM Cloud, like PostgreSQL, Elasticsearch, MongoDB and Cloudant. Not many people are throwing in fun facts, so I'll add one: big soccer fan, love to watch Chelsea. So if anyone's a Manchester United fan, we can talk after the panel and have some words.
B
That's great. And finally, we've got Ram Vennam. Hi Sai, I'm one of the technical offering managers on the IBM Cloud Kubernetes Service and the Red Hat OpenShift service.
B
My focus is on service mesh, so I work closely with our Istio open source team and also on service mesh on Red Hat OpenShift. And if you guys are noticing that there's a resemblance, that's because Ram is actually my brother, so I'll be sure to throw him all of my really difficult questions today. Thanks, I expect nothing less. Now, a quote that I want to start with today, from IDC: by 2022,
B
90% of new applications will be cloud-native and developed with agile methodologies and an API-based architecture that leverages microservices, containers and serverless functions. Now, from the introductions today, I think we saw that we have a wide panel of panelists, and they kind of cover multiple key technologies. But at the root of it all, I truly believe, is OpenShift, the container-based platform that powers cloud-native and container-based technologies.
C
Yeah, so I think there's a lot of reasons really driving developers and organizations to embrace technology like cloud native and containers and OpenShift, and ultimately it comes down to three main use cases that we see time and time again. The first one is really around building cloud-native applications, because we're all developing software in some form or fashion, and we need to be able to do so quickly, but then, more importantly, very securely.
C
Real-world customers that I talk to every day are running on-prem, on IBM Cloud, on other clouds, so they need to be able to have that portability, and that's exactly where containers and OpenShift come in: to give them that common abstraction, regardless of the infrastructure that they're running on. And then the third one is around app modernization, because again, the customers that we work with have a long heritage of existing applications. So how do we help them modernize that footprint into containers and cloud-native type architectures?
B
Right, so that's perfect. There's clearly a number of use cases for taking advantage of container and cloud-native technologies, but at the root of it all, and the reason that we're gathered here today at the OpenShift Commons, is OpenShift. So how do you see OpenShift as a solution to help address these use cases?
C
I mean, obviously Red Hat, long before IBM acquired Red Hat, has put in a lot of effort working upstream in the community, just like IBM has as well. But they've taken these open source technologies, which, the reality is, are hard. There's a lot of different projects, there's a lot of moving pieces.
C
How do we make sure that a new version of Kubernetes doesn't break something else, or some other plugins, or some other projects? Red Hat is really focused on bringing and packaging all that you'll need to be able to run that containerized workload. So that's really where the importance and significance of OpenShift comes into play: because Red Hat has really hardened and secured it, and has a known process to give those updates to developers and consumers.
B
And while we're on you, Chris, one last thing I want to touch on. You mentioned that Red Hat, even much before Red Hat was acquired, really focused on open source and contributions and working in this container space. Now, post-acquisition, IBM has launched this OpenShift on IBM Cloud offering. So how has IBM, a company that's fundamentally in many ways the same as Red Hat but also has been historically different, kind of shaped itself and changed with the acquisition of Red Hat?
C
That's really why developers gravitated towards containers anyway: I could package up my app and all of its dependencies and move that thing from here to there to anywhere, consistently. So we're going to keep doing that, and bringing some important operational characteristics to those open source projects, like the security, the hardening, the upgradability, all the things that are really important to our customers once they deploy something. How do you actually manage it and keep running it from day two onwards?
B
Perfect, thank you for that. And with this focus on open source and open technologies, I want to move on to the next panelist here. Doug, your focus day-to-day, as you mentioned, is capabilities like Knative and serverless, and, as we know, Knative is an open source project; it really grew in the open source community. Now, Doug, the first question I'll start off with here is: where do you see serverless or Knative, these technologies that you work with, fitting in with the solutions that we just heard Chris talk about?
D
Yeah, so it's interesting. Chris talked a little bit about modernization of the app, right, and if you take that and unpack what it actually means, well, it means containerizing your stuff, breaking up the monolith, stuff like that. Serverless is actually the next logical step in that progression. Right, you take your monolith, you break it up into microservices, but then serverless goes one step further and says: okay, rather than just at a microservice level, what if you can actually break it down into individual functions?
D
And that sounds cool from a technology perspective. We're all geeks, we love that kind of thing, but why are we doing it? Well, you're really doing it so you get finer-grained resolution for deployment of your application, right? So you can scale one little slice of your application instead of the entire thing, or even just a microservice level, so you get better resource utilization. You get all the cool features of serverless, like scaling down to zero.
D
So when it's not being used, it's not even running at all, so you get cost savings, right? To me, serverless is the next natural extension for your container-as-a-service type offerings. It's just breaking things down to even smaller little bits for better resource utilization and scaling.
B
Definitely, thank you. Now, for our users that are watching: when they hear serverless, many times they think about things like Lambda, maybe IBM Cloud Functions, these platforms that enable you to go code-first and then basically abstract away the requirements underneath. Those requirements underneath are kind of what we're focusing on today: that OpenShift layer, that container-based layer. Now, if we compare serverless of four or five years ago to serverless today, how has serverless evolved to adopt these new technologies?
D
Yeah, well, that's great. I think you're right: to a lot of people, serverless means things like "give me your source code and I'll host it for you", and that's definitely true, depending on the platform. But you're right, under the covers they are all, for the most part, leveraging something like containerized technology. And so you look at something like Functions and Knative: they are all using containers under the covers, they just hide it from you.
D
There's really not much of a difference anymore between container-as-a-service, function-as-a-service and serverless, and I think what you're going to start seeing with platforms like Knative is that line becoming very, very blurry, to the point where the user doesn't even have to think about what platform they want to deploy this stuff onto. I just hand over my application, whether it's source code or an image, and the infrastructure just runs it for me.
D
I just choose the configuration knobs I want, right? So I don't have to think anymore: do I want PaaS versus FaaS versus CaaS? That choice is meaningless now. It's just: what are the runtime characteristics you want? I think that's the direction we're seeing with projects like Knative, and you can see IBM is really pushing that with some of the newer offerings that you can find on our platform.
B
In fact, if you go to the Kubernetes documentation, you'll actually see verbiage around how Kubernetes was never actually meant to be a developer platform, but actually provides the building blocks for a developer platform. Now, I'd say, and I think many people would agree, that OpenShift has solved that problem for Kubernetes by creating abstractions, a dashboard and an experience for end users to more easily be able to deploy applications. They've, in essence, created a developer platform on top of Kubernetes.
D
Yeah, so it's interesting. Even though everybody keeps talking about how Kubernetes has won the container war and stuff like that, and that's all true, unfortunately you have to kind of become an IT expert in order to use Kubernetes, right? When the whole cloud computing thing came on board, everybody said: hey, this is great.
D
We're going to abstract things away from you, you don't have to work with VMs directly. And that's true, but now you have to understand how to do networking and all the other infrastructural pieces just to get stuff running in Kubernetes, and it's non-trivial. As great as it is, it's non-trivial. Well, Knative takes a step back and says: what if we can hide all that stuff from you?
D
What if, instead of you telling us all the various gazillion different configuration knobs that are out there, how you want those things set, how you wire them together, we take a different approach and say: just give us your container image, and what are the runtime characteristics you want to see? Do you want scale-to-zero, yes or no? Nope, I want to keep it at maybe five instances, because I need five running all the time, that kind of stuff.
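The knobs Doug describes map to Knative Serving's autoscaling annotations, where `minScale: 0` gives scale-to-zero and a higher value keeps instances warm. A minimal sketch that builds such a Service manifest as a Python dict; the service name and image are hypothetical:

```python
# Sketch of a Knative Service with the autoscaling knobs from the discussion.
# autoscaling.knative.dev/minScale and maxScale are Knative's revision-level
# annotations; minScale "0" means the service can scale down to zero.
def knative_service(name, image, min_scale=0, max_scale=10):
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "autoscaling.knative.dev/minScale": str(min_scale),
                        "autoscaling.knative.dev/maxScale": str(max_scale),
                    }
                },
                "spec": {"containers": [{"image": image}]},
            }
        },
    }

# "I need five running all the time":
svc = knative_service("hello", "example.registry/hello:latest", min_scale=5)
print(svc["spec"]["template"]["metadata"]["annotations"]["autoscaling.knative.dev/minScale"])  # prints 5
```

The same manifest could be applied with `kubectl apply -f` once serialized to YAML; the point is that the user declares runtime characteristics, not infrastructure.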
D
I don't have to worry about how the autoscaler makes that happen. I just say: give me five, right? And that's the kind of abstraction that people, I think, are looking for. They just want to say: here's my stuff, here are the runtime semantics I want to see from an external perspective, and Knative does all the magic under the covers, wires it all up together for you, all leveraging Kubernetes. But you don't have to worry about it anymore; it just manages it all for you, the way it should be, right?
D
It's that abstraction that we keep talking about. Now, it's great you mentioned that Kubernetes was never meant to be the platform for users to actually interact with; well, Knative is almost the same thing. While Knative's user interface is darn nice (I love it), it's really not meant to be used by the end users themselves directly. It's meant to be a platform for serverless platforms, and you can see Red Hat and IBM doing the same thing there: we're taking it in-house, offering it up to our customers, but wrapping it with some additional Red Hat and IBM goodness around it, to make it even easier to use, so that you really don't even need to understand YAML in many cases, which is great. So you can see it popping up even more at a higher level within our own platforms. That's kind of where we're headed with all this stuff.
B
Perfect, thank you. Now, what we talked about a lot there was how Knative and OpenShift are making it easier for developers to work with the platform, but many times developers need automation, they need methodologies. If we go back to our quote, we said that by 2022, 90% of new applications will be cloud native, dot dot dot, developed with agile methodologies.
B
So with that, I want to move on to our next panelist here. Peter, can we talk a little bit about the agile methodologies that have been built to really support the technologies that Chris and Doug covered, being able to work with OpenShift and with Knative in a way that developers are better oriented with? Can you talk a little bit to that?
E
Sure, and I'm actually going to turn it around, because I was thinking about this: we're coming up on the 20th anniversary of the Agile Manifesto, believe it or not, in February next year. So the industry's been talking agile for a long time, and I've been in the tool space that whole time. I think what's been missing is that it's great to talk about faster velocity, it's great to talk about abstraction and isolation, it's great to talk about testing pieces in smaller chunks, but we didn't have the architectures 20 or 10 years ago to really support that, not in a real way. People were working with VMs, they were working with large Java monoliths packaged as a WAR. What we've really seen now, with the rise of containers, is that microservices is the architecture, and the runtimes for that architecture, that support agile development.
E
So once you have microservices, your code base is smaller; it requires a smaller team. Now you can start talking about two-pizza teams that can collaborate more in real time. That bounds the size of the problems you're solving in any particular microservice, so the idea of very short iterations or sprints becomes a lot more doable, because you're bounded. Refactoring, in a way, is a lot easier, because you've invested in setting up the boundaries between those microservices and the APIs, and you've decoupled them.
E
You move from a very rigid definition of how it's going to get deployed to more of a serverless approach, where you're just talking in terms of general scale, doing a lot of autoscaling, and not being as prescriptive about how this thing is going to appear in production. All of that's really reinforcing an agile way of working. And you see it in the tools: things like continuous integration have been part of agile from the beginning, and tools like Jenkins came up, and everyone knew how to do a build in Jenkins or whatever. I'd say what we saw in the last five years was that the next-generation tools were all about deployment, and kind of geared to: how do you deploy a monolith, schedule downtime windows and notifications, with a lot of manual processes, and synchronize with those manual processes running in other tools. Essentially: how do you adopt a DevOps approach and really embrace agile with an architecture and an implementation that aren't really geared to that, and weren't really conceived with that in mind?
B
Right, and I want to touch on that a little bit here, Peter, and I might actually pass this back to either Chris or Doug. Do you believe, kind of to Peter's point, that the platform and the orchestration layers, things like Kubernetes and OpenShift, were built from the ground up knowing the way that users were going to be deploying on them? The agile methodologies, from one end to the other: were they built with users and agile methodologies in mind?
C
I'd like to see Doug's input as well. From my perspective, I would say that most of the users I talk to are not necessarily interacting with Kubernetes directly; their integration point is that CI/CD tooling. They're pushing code, they're doing things, and then magic happens that they're not necessarily involved with. So I think, from that perspective, yes. But the other side of it is, like Doug talked about earlier, all the complexities of Kubernetes itself: there's obviously a steep learning curve there.
D
Yeah, it's interesting. I don't know that I think about it that way, but if I had to guess, I'd actually say no, in the sense that most of what I see around things like Kubernetes is more around: hey, we've got this really cool technology, containers. It has a lot of benefits: better scaling, portability, all the stuff that Chris talked about, and I think they were trying to build tooling around that to make it easier to manage those things. And I think once people realized that you have this really cool deployment artifact called the container image, which is portable and contains everything you need (you don't worry about all the install scripts that go around it, it's all just sort of bundled up together), it just naturally led into the entire DevOps story.
B
Yeah, I mean, I just want to add to that. I think you're right; everything just fits really well together. I don't know if it was an actual thought, whether they were thinking about one while they were building the other, but the declarative model of Kubernetes or OpenShift, I think, is what lends itself really well to this agile and continuous-delivery-like methodology. These individual development teams are able to declare exactly what they want and how the system should run, and they're able to do that in basically a static config and just apply that config. The declarative model of these container platforms, as well as the controller mechanism that runs in there, which is turning things to equal your desired configuration, just lends itself really well to that, and it fits really well together.
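The declared-state-plus-controller idea above can be sketched in a few lines: the config declares desired state, and a reconcile step computes the actions needed to make observed state match it. This is a toy model of the pattern, not the real Kubernetes controller machinery:

```python
# Toy reconciliation loop: compare declared (desired) state with observed
# state and emit the actions a controller would take to converge them.
def reconcile(observed, desired):
    actions = []
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        actions.append(f"start {diff} pod(s)")
    elif diff < 0:
        actions.append(f"stop {-diff} pod(s)")
    observed["replicas"] = desired["replicas"]  # converge observed to desired
    return actions

state = {"replicas": 2}
print(reconcile(state, {"replicas": 5}))  # prints ['start 3 pod(s)']
print(reconcile(state, {"replicas": 5}))  # prints [] (already converged)
```

Running the loop again with no diff is a no-op, which is what makes "just apply the config" safe to repeat.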
B
So I mean, developers can just declare their config in some sort of source control, and then it turns into actual running artifacts. Definitely an excellent discussion here. One thing I want to probe a little bit: Ram, you mentioned that the model of Kubernetes lends itself better to a declarative config approach. Now, Peter, kind of back to you here. We mentioned tools that have been around to support agile and agile workflows.
B
Everything from how teams manage issues to how teams actually deploy code, maybe using something like Jenkins. But as we know today, there's always some cool new hot capability for doing DevOps, and it seems like every year a new capability is announced. Now, why do you think that is? And can you speak a little bit to the tools in the OpenShift ecosystem and how they're enabling DevOps?
E
Yeah, I mean, I think they keep coming up because developers love developing tools for themselves; there's a little bit of that, right? It's the place where you see innovation first. Before I get into the tools themselves, I want to talk a little bit about how the nature of the tools has changed. Just echoing what Ram said, I would describe the tools of five years ago as very procedural.
E
I also want to talk about decision making and judgment for what gets to production. How can we, essentially, mitigate risk earlier? How can we do more automation around security scanning? How can we do more automation around integration testing? And again, taking advantage of things like Kube and OpenShift, the idea of spinning up new environments on which to do different kinds of very ad hoc probing, whether it's security or performance or anything else, becomes a lot easier and cheaper.
E
So that need for automation is still there; it's just kind of shifted higher up the value chain, as a lot of the details of the bits have been taken over by the platform. Tekton's the project we're really excited about at IBM and at Red Hat, and we were actually both involved before we came together under one umbrella, because we really saw that the CI/CD engine that we wanted didn't exist. There were some interesting commercial technologies. There's Jenkins, which has kind of been open, previously under CloudBees' control and now under the Continuous Delivery Foundation, but it's also a generation back from containers; in its own architecture it's really not a container-centric application, which makes it hard for, say, us as a public cloud vendor to scale it and manage it across multiple regions. It just wasn't built for that; it was very much a single-tenant, kind of on-prem design center. So, taking these trends: we want a cloud-native technology for the implementation.
E
We want to leverage containers in that implementation, but we also want to leverage that Kubernetes model of declarative definitions and use it for defining CI and CD pipelines. So Tekton brings those things together. And, as someone was saying earlier, Kubernetes itself didn't start as a development platform; it started as a way you build a development platform. I think Tekton is kind of that same thing. It's a good set of primitives that abstract the notion of running steps in containers. Steps are organized into tasks, and you have full control over the execution graph: maybe things run in parallel, maybe they're sequential, you have joins, so arbitrary complexity in how you run this stuff. But then you need to wrap an experience around it. When do different pipelines run? What triggers them? Often it's events in a Git repo that developers are initiating; maybe there are some long-running jobs that go do some kind of soak test for 48 hours and eventually come back and trigger the next step in your pipeline. All these things have to get glued together, and I think that's what the experience in IBM Cloud, our Continuous Delivery service, and the experience directly in OpenShift with OpenShift Pipelines, do with Tekton: they take that core, leverage its strengths as a cloud-native solution itself, and then put an experience around it that lets developers really just think: I'm going to commit my code, and the right stuff is going to happen.
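Peter's "full control over the execution graph" point can be illustrated with a toy scheduler: tasks declare what they run after (mirroring Tekton's `runAfter` field on pipeline tasks), and a topological walk yields a valid execution order, with independent tasks eligible to run in parallel. The task names here are made up for illustration:

```python
# Toy Tekton-style pipeline graph: task -> list of tasks it runs after.
pipeline = {
    "build": [],
    "unit-test": ["build"],
    "image-push": ["build"],          # could run in parallel with unit-test
    "deploy": ["unit-test", "image-push"],
}

def execution_order(tasks):
    """Topological walk: a task becomes runnable once all its deps are done."""
    order, done = [], set()
    while len(done) < len(tasks):
        for name, deps in tasks.items():
            if name not in done and all(d in done for d in deps):
                order.append(name)
                done.add(name)
    return order

print(execution_order(pipeline))  # prints ['build', 'unit-test', 'image-push', 'deploy']
```

In real Tekton each task's steps run as containers in a pod, and triggers (e.g. a Git push event) decide when the whole graph starts.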
B
Perfect, thank you. Now, I want to build on that a little bit here. We mentioned that Kubernetes was never meant to be a developer platform, but the wonderful thing about open source is that if there is a problem, someone out there is going to solve it; it's almost like a golden rule. Now, for Kubernetes, there are a lot of those problems. When Kubernetes was first announced, there were a lot of missing capabilities in Kubernetes.
B
Now, if I were to expand that out to talk about any open source project in the Kubernetes ecosystem, I don't really even have a number for that. Basically, there are a lot of open source projects out there to help you solve each individual problem that you might have. Now, the way I presented that makes it sound like a good thing, but for an enterprise, for a business, for a company who's trying to solve problems and just get something out the door, it's daunting, it's overwhelming. How do enterprises make the correct choice? So, Ram,
B
I want to start with you here. Can you speak a little bit to the ecosystem capabilities in the containers and OpenShift space, and then particularly, maybe touch a little bit on how IBM and Red Hat can really help our customers, businesses, or just end users in general make the right decisions? Sure. I think, to build on the problem you were mentioning: each one of these open source projects tries to maintain a very sharp focus on the problem that it's trying to solve, and the focus is usually pretty narrow, and that's a good thing, right? Like, if Kubernetes focuses on just container orchestration, then its first-class citizen would be containers.
B
That solution needs to consist of blocks that build closely together, and I think that's where you would use cloud providers, because they have tested these blocks, they work closely with the communities of each one of them, they build integrations and best practices that fit really well together, and then they build abstractions on top.
B
The abstraction could be as simple as a CLI or a dashboard, or integrations with other services, like continuous integration, for example, and they're able to provide that as a solution. So that's really where you need to be looking to consume the breadth of the ecosystem and not get lost in each individual piece of it.
B
You know, thousands of microservices running across multiple data centers: how do these users manage all of these at scale? Yeah, so what I do every day is help customers deal with microservices. They've broken apart the monolith into microservices for all the various advantages, and, as we all know, the big disadvantage of microservices is the exponential growth of the network traffic and the focus on the network layer.
B
So users now are concerned about getting control of this network layer again, and that's exactly what a service mesh aims to solve. Users want to enforce policies on which microservice is allowed to talk to which other microservices; they want to do encryption within their environment.
B
These are all things that the service mesh allows you to gain control over. I work on Istio and service mesh on OpenShift, and these technologies are becoming really critical, not only for control of the network layer, but also for the observability that is missing when you move to microservices.
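The "which microservice is allowed to talk to which" policy Ram describes maps to Istio's AuthorizationPolicy resource. A hedged sketch built as a Python dict; the service names, namespace and service account are hypothetical, while the apiVersion and kind follow Istio's security API:

```python
# Sketch: allow only the "orders" service account to call the "payments"
# workload; all other callers are denied once an ALLOW policy selects it.
policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "payments-allow-orders", "namespace": "shop"},
    "spec": {
        "selector": {"matchLabels": {"app": "payments"}},
        "action": "ALLOW",
        "rules": [{
            "from": [{"source": {
                # mTLS identity of the calling workload (SPIFFE-style principal)
                "principals": ["cluster.local/ns/shop/sa/orders"],
            }}]
        }],
    },
}
print(policy["spec"]["action"])  # prints ALLOW
```

Because the principal comes from the mesh's mutual TLS identity, the same mechanism gives you the in-cluster encryption mentioned above.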
B
Perfect. And Ram, can you touch a little bit on what OpenShift is doing with Istio in regards to OpenShift Service Mesh? Yeah, OpenShift Service Mesh is based off of the Istio open source project.
B
Just like you said before that OpenShift is like enterprise Kubernetes, you can think of Red Hat OpenShift Service Mesh as enterprise Istio. They've taken Istio and built a set of abstractions on top of it; they have a set of best practices and policies that are applied as good defaults, and it integrates well with existing OpenShift resources.
B
So it's part of building that end-to-end solution. Perfect. Now, we've talked about a lot: we started with the base, the core of OpenShift and container-based platforms; we spread out a little bit and talked about agile methodologies, and then services around that layer, things like serverless; and now we're drilling into the ecosystem.
B
Now, without a doubt, data and databases are a core part of this ecosystem. Josh, this is a question kind of geared towards you. With OpenShift and container-based applications in general, stateless applications are becoming the norm. In fact, even before container-based applications were really taking off, with the twelve-factor app becoming popularized, stateless apps were the way to move forward when working with cloud-native applications.
B
That means people needed more data, and they needed this data to be replicated, to be highly available, and generally stored outside of local execution environments. So, Josh, how does OpenShift, and maybe even cloud platforms, maybe specifically IBM Cloud, help users tackle this new and revitalized need for data?
F
Yeah, for sure. So OpenShift and Kubernetes are just generally really crushing it doing stateless applications. In terms of stateful workloads, I think there's room for improvement that the community is putting into the newer versions, especially stateful workloads in regards to the database space. Over the last few years, as databases moved to the cloud, there's been a lot more complication in developing databases for the database developer, like IBM Cloudant or the PostgreSQL community, in the fact that now everything is a distributed system.
F
Once you put it in the cloud, it's distributed: it's away from your laptop, it's away from a server that you can access in your data center. As part of that, especially when we talk about relational databases, you're introducing more complexity just by the nature of having a distributed system. Muxing that together with something like Kubernetes, which is really good at turning things off and turning things back on again, is a really good way to lose data when you're running a database.
F
So I think that, for the most part, when I talk to customers, there are two camps right now for those that are running OpenShift and Kubernetes workloads. One is the net-new cloud-native application that wants to use a cloud-native database, one that has geographical replication and distribution, sort of the NewSQL group of databases you might be familiar with, like CockroachDB or MemSQL.
F
They would either run it themselves in OpenShift; with the advent of Kubernetes operators and OpenShift operators, it's become tremendously more possible to run databases at scale. In fact, our cloud database-as-a-service products are actually built and run with Kubernetes operators, and we run more than 20,000 databases worldwide.
F
So take that as a data point: operators are really, really useful for databases. They solve a lot of the hard problems, but I think there are a few more years to get to easy adoption at scale in terms of running databases yourself on Kubernetes. Because, on the other hand, I'm also in the Apache CouchDB community, and a lot of people are having problems running databases on Docker and databases on Kubernetes.
F
That's why we released the Apache CouchDB operator. But they're losing performance, they're losing data. Tracking back to my original point, though, there are two options for users: run a database yourself in OpenShift, or adopt a managed service from a cloud provider. At least, these are the two options I've discussed the most with customers.
F
Some customers want to be hands-on; they have the operations team and the experience to run a database. By all means, but I would always recommend using a Kubernetes operator provided by the vendor or a trusted open source committer, because there are operators that don't come directly from a vendor, or a vendor doesn't exist for a project because it's in the Apache Foundation or the Linux Foundation.
F
So definitely do your due diligence there, especially as you move higher up the stack and the levels of operators. You're going to want to make sure that the people adding automation and self-healing to the database are people you want to trust with that level of addition to a database, because there are all sorts of ways databases will fail, and they will fail.
F
So you definitely want to go with the subject matter expert in that space. The other option is to use a managed database from any of the major cloud vendors, and OpenShift and Kubernetes make it quite easy to bind an application to an external database.
F
So it really depends on your comfort and skill level running a database, and whether you are okay with handing off the management, automation, scaling, security and compliance of the database to a cloud vendor so you can spend more time working on and building OpenShift applications with your team. So I'll pause there.
F
I know I covered a lot of ground, but in terms of data being the core: I find that when customers are successful with data, it makes it a lot easier to move faster developing the stateless apps and the OpenShift applications they're building. They just knock that one out, make sure it's stable, and then they can spend more time doing the things that provide them the most business value, rather than trying to do things like schema design or high availability of the database.
B
Excellent, excellent. And, you know, I think we've seen a recurring theme today around gearing up open source capabilities: of course anyone can run them, they're free, but the overwhelming thing that we've seen here is that you're going to need to manage them yourself, and you're going to need an operations team to do it. I think that's what you walked through there with using operators to run databases yourself versus just going with a managed service. You know, it takes this discussion back to what Doug initially talked about.
B
Yes, you can move forward with open source Knative, but you're probably better off taking advantage of a platform. The same thing goes for service mesh and Istio, and, Peter, for the DevOps tools you explained as well, such as Tekton, versus going with the open source capability. So I want to open this up right now for any of you.
B
You
know
any
of
you
to
kind
of
chime
in
here
and
talk
about
a
little
bit
about
how
the
space
that
you're
focused
in
on
whether
it's
devops
serverless
istio.
When
does
it
make
sense,
and
what
are
the
key
things
that
a
user
needs
to
look
for
to
decide?
Should
I
go
for
the
open
source
solution,
or
should
I
go
for
the
the
cloud
vendor
provided
opinionated
platform.
E
So, in the DevOps tool space, I think we have this conversation a lot, and I think it's historically something people are used to running in-house themselves, and often there's a central team managing it.
E
Sometimes there's not. It consumes resources, and, you know, is there really enough differentiation in the tools from you running them yourself versus using a cloud service where you don't have to think about it? And I think, especially when we're looking at open source projects like Tekton and all the others, it's the same thing.
E
If
you
know
something
very
bad
happened
in
the
world
and
you
needed
to
run
it
yourself.
You
have
the
open
source.
You
can
do
that
but
day
to
day.
Why
are
you
choosing
to
put
your
effort
there?
You
know,
instead
of
in
the
applications,
you're
building
and
the
business
domain,
that
you
work
in.
Let
us
run
it,
we
run
it.
You
know
across
the
cloud
across
many
regions,
data
centers-
and
you
know-
we've
we've
gotten
good
at
it
and
I
think
that's
kind
of
the
generic
argument
you
make.
E
For cloud platforms in general, it's: let yourself specialize on what matters. A last thought on DevOps: that's the tool piece. Now, I think we're at a place where people still write their own CI and CD pipelines and processes to run on top of that tool.
E
I predict we're heading in a direction where more and more of those pipelines themselves will be standardized, so not just the tool, but the logic: what are you doing, what kinds of quality and security metrics are important to test for, the things that everyone should just be doing. Something we're trying to do on top of Tekton is build that reusable set of assets for, you know, here are the kinds of things you need to think about.
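The idea of standardizing the pipeline logic, not just the tool, can be sketched as a shared, ordered set of stages that every project reuses, with each project only plugging in its own specifics. This is a loose illustration of the concept, not Tekton's actual API; the stage names are hypothetical.

```python
# Illustrative sketch of a standardized, reusable pipeline: the stages and
# their order are shared assets; projects supply only their own context.
# Names are hypothetical, not a real Tekton interface.


def run_pipeline(stages, context):
    """Run each stage in order, threading a shared context dict through."""
    for stage in stages:
        context = stage(context)
    return context


def lint(ctx):
    ctx["lint"] = "ok"          # e.g. style and static-analysis checks
    return ctx


def security_scan(ctx):
    ctx["security_scan"] = "ok"  # e.g. dependency / CVE scanning
    return ctx


# The reusable, opinionated part that "everyone should just be doing":
standard_stages = [lint, security_scan]
result = run_pipeline(standard_stages, {"repo": "my-app"})
```

In Tekton terms, the equivalent of `standard_stages` would be a catalog of shared Task and Pipeline definitions that teams reference rather than rewrite.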
B
Definitely. And with that, I want to take it a little bit closer to the stack, to OpenShift itself. You know, I think, Chris, you mentioned in the beginning that, with the acquisition of Red Hat and then the onset of our focus on OpenShift, we announced OpenShift on IBM Cloud. As we know, OpenShift is based on the open source platform, the Origin Kubernetes Distribution, or OKD. Chris?
C
Right, yeah, absolutely, I was going to jump in. I definitely agree with a lot of what Peter said, because we run into this: you know, OKD is free, I can go out there and I can play with it. So when we have these discussions, it's not to imply that you or your organization is not technically competent to deploy and run OKD as a platform; but, similar to what Peter said:
C
We
think
you
should
focus
higher
to
the
things
that
are
solving
your
line
of
business
objectives,
not
running
okd
and
as
a
part
of
the
managed
service,
specifically
with
red
hat
openshift
on
ibm
cloud.
You'll
get
the
99.99
sla,
the
h.a
masters
multi-zone
clusters
bare
metal
worker
nodes.
You
know
all
the
compliance
that
peter
mentioned.
So
that's
the
value
of
the
managed
service,
ultimately
you're
still
going
to
build
containerized
workloads
and
run
cloud
native
apps.
B
Perfect
perfect
now
one
one
more
quote
that
I
want
to
use
for
my
dc
here
and
while
we're
on
euchris
on
a
shift
gears
a
little
bit
here
and-
and
this
quote
here
says
you
know-
by
2024
over
50
of
user
interface
interactions
will
use
ai,
enabled
computer
vision,
beach,
natural
language
processing
and
either
ar
or
vr.
So
these
are
high
level
services
that
are
being
offered
by
cloud
platforms,
especially,
you
know
the
key
thing
here:
ibm
watson
and
I
and
the
watson
services
available
on
ibm
cloud.
Chris.
B
Can
you
touch
a
little
bit
about
how
these
higher
value
services
can
be
consumed
from
the
cloud
native
platforms
that
we've
been
talking
about
today?
I
truly
believe
that
responsibility
lies
on
you
know
as
ibm
cloud
as
a
vendor
to
make
these
easily
consumable
so
that
end
users
don't
have
to
have.
You
know
a
phd
in
data
science
to
be
able
to
effectively
take
advantage
of
these.
These
technologies
that
are
available.
C
Right. So I think you touch on it: there are a lot of different aspects to adding these higher-value services to your apps, and obviously we think that consuming them easily and securely is of utmost importance. So obviously, as I'm building my app, I want to add that cognitive capability, whether I'm adding, like you said, voice to text, or I'm doing a chatbot, or I'm otherwise adding more intelligence to my app; we want to be able to do that. So that's obviously very important.
C
But another area that I think is probably more important is really around data access, data controls, and using that cognitive capability in the right way. Hopefully you've seen some of these announcements recently: with all the things that are happening in the world we live in today, IBM has basically announced that we're not going to use that AI technology.
C
For
you
know
some
of
the
racial
profiling,
and
things
like
that-
that
maybe
is,
are
definitely
not
the
right
use
of
that
technology,
so
I
think
you
know
there's
one
hand
of
using
the
technology
using
it
the
right
way
and
that
ibm
is
kind
of
a
steward
of
the
community
and
really
advocating
that
we
do
use
that
technology
in
a
way
that
is
beneficial
to
the
broader
society
as
a
whole.
Not
just
you
know
we're
selling
technology
and
widgets.
B
Excellent, excellent answer there. You know, one of the things that I can think of off the top of my head is the Call for Code initiative that IBM has launched, and the focus on using IBM Cloud technologies to potentially solve, you know, COVID-related or pandemic-related issues that we're all facing today. I'll pause here for a second and see if anyone else wants to chime in on the use of AI and higher-value services from the perspective of cloud-native platforms.
B
So, as you mentioned, this user interaction layer is changing, right? The traditional model, here is my web server that can serve thousands of requests by itself and serve static content in a synchronous manner, doesn't cut it anymore. So, to take advantage of AI and all these other cognitive capabilities, when you write your user interface, you have to write that layer a little bit differently.
B
So we mentioned, you know, leveraging existing APIs to build this layer, so you're calling external services like the Watson APIs, but you're having to rely on all these various APIs on every request. It requires more processing power on every request, it requires more data analysis on every request, and in many cases these user experiences are expected to be a step ahead of the user and give users what they want before they actually have to go around finding that necessary information.
B
So there's a lot of predictive analysis that needs to be built into a good user experience layer. But what I'm getting at is that all of these requirements put a different set of needs on that application layer. These applications end up being very network-intensive, very resource-intensive, and they need to be able to quickly scale up and down; so containers, and the virtualization layer that the container platform provides, help here.
B
I
think,
allows
you
to
build
these
layers
in
a
much
more
effective,
effective
way.
They
give
you
that
control
of
being
able
to
you
know
scale
up
and
down
as
your
as
your
requests
from
your
users
go
up
and
down
and
take
full
advantage
of
the
underlying
infrastructure
that
your
container
platform
is
built
on
and
leveraging.
D
I just wanted to circle back around to something you were talking about earlier: you know, when to choose open source versus something offered by a platform, and stuff like that. I just want to dovetail with what Chris was saying there, because my initial answer to your question, if you focus on the basic question, right, open source versus an offering from a provider, is that I would look at the open source technology first, just to check the base functionality.
D
If some of that looks like it's going to solve your needs, play with it. Obviously, it's free, it's low cost: you can install it, play with it, have your devs go have some fun with it, right? But then, at some point, you get into what Chris was talking about: okay, great, this technology at a base level suits my needs; do I now want to manage this myself going forward? And that's when you start looking at, okay:
D
Do I want to be on a platform which just allows me to install it, versus a platform that will install it for me and manage it, or go one step further and do what you're seeing on other platforms, where it's offered as a service, right, where you just deploy your application and everything's hidden from you? So you've got a whole breadth of things to choose from, and this goes directly to what Wolfgang asks in the chat here, right?
D
He,
for
whatever
reason,
has
a
real
trust
problem
with
the
managed
surfaces:
okay,
fine,
but
he
he
could
still
leverage
the
open
source
technology
at
a
layer
where
he
can
manage
it
himself.
But
someone
else
is
perfectly
okay
with
ibm
red
hat
managing
a
forum.
But
the
point
is
that
they
can
stick
with
the
core
open
source
technology.
D
Then
they
get
the
freedom
to
move
around
if
they
need
to
right,
they
get
the
same
core
technology,
different
providers,
different
levels
of
managedness.
If
you
want
to
call
it
that,
but
they
don't
have
to
necessarily
swap
out
everything
just
because
they're
going
to
switch
from
one
provider
to
another
right,
the
core
technology
should
be
the
same
wherever
they
go.
So
they
get
that
interability
portability
aspect,
but
still
have
the
level
of
choice
of
how
much
they
want
to
manage
themselves.
B
It's not like the platforms have forked and gone off in completely different directions. For a platform like IBM Cloud, you know, we work in the open source, so although we have our managed offering, whenever there are key changes or customer requirements that force us to create new features, the same is true for Red Hat and for any company working in open source with managed offerings:
B
Those
changes
are
then
contributed
back
into
the
community,
so
I'd
like
to
quickly
touch
on
what
ibm
has
been
doing
in
the
community
and
specifically,
what
we've
been
doing
around
things
like
helm,
operators
and
the
operator
hub
and
and
and
these
key
open
source
capabilities
that
we're
taking
our
expertise
and
know
how
and
then
contributing
back
to.
So.
If,
if
we
can
touch
on
that,
a
little
bit
I'll
open
that
up
to
the
panel
for
for
anyone
to
chime
in.
C
Sure. So there's an IBM Cloud operator that you can get on operatorhub.io, and we're obviously excited about that, to be able to run some of the fundamental commands and things within the platform. There are also a lot of different teams contributing; Josh will dig into that from the database side and what they're doing.
C
But
you
know
we
are
completely
aligned
with
red
hat's
strategy
around
operators
and
adopting
that
methodology
to
simplify
deploying
our
content
and
one
of
the
things
that
got
announced
at
red
hat
summit
earlier
this
year
was
the
the
red
hat
marketplace
and
basically
it's
ibm
and
red
hat
our
isv
ecosystem
in
in
openshift
4.4.
C
It
brought
the
marketplace
into
operator
hub.
So
now
I
deploy
an
openshift
cluster
and
I
see
all
of
that
content
ibm
provided
red
hat,
provided
isb
ecosystem
and
allows
me
to
quickly
deploy
that
content
and
again,
just
simplifies,
not
only
deployment
but
then
ongoing
life
cycle
management.
So
I
just
again
moves
my
responsibilities
up
higher.
B
Excellent
and
just
a
quick
heads
up-
we've
we've
got
a
couple
more
minutes
here
remaining
so
josh.
I
want
to
let
you
answer
your
piece
on
on
the
databases
and
open
source
front.
Ideally.
F
So we have the Cloud operator, there are storage operators from Spectrum, and there are product-specific operators, like for Kafka and streaming with Event Streams, or for IBM Cloud Object Storage. On the database front, we've been really thrilled with the release and the reception of the Apache CouchDB operator that comes from IBM. CouchDB is really good at moving data around wherever you need it to be. So one of the roadblocks that I see customers have with Kube is that their applications are supposed to be portable, but data is not portable.
F
Couchdb
helps
folks
solve
that,
and
we've
seen
good
uptake
there
and
over
time,
there's
going
to
be
a
lot
more
investment
in
the
ibm
community,
especially
around
data
and
operators.
So
look
forward
to
that.
A
All right, well, I almost couldn't say any of this any better than you guys. It's wonderful to have you on today. It's wonderful to see the enthusiasm for open source, for OpenShift, and for all things Kubernetes at IBM, and it's been a really interesting growing experience having the extended community participation of IBM and the Red Hatters.
A
So there's really been kind of an exponential growth in the number of people contributing to the open source projects that we've all been working on together, and it's been great getting to know you. So thank you for taking the time today. We definitely are going to get each one of you back on for a full hour-long deep dive on these topics, because every one of them is something we want to hear more about. So thank you for taking the time today to introduce yourselves and for being part of the OpenShift Commons; we really appreciate your participation.