Description
OpenShift Commons Gathering @ Kubecon/NA San Diego November 18 2019
A
It's a beautiful day, and again I would like to thank Red Hat for having us here — much appreciated — and thanks to the community. I think we're all here as part of the overall Red Hat community, the OpenShift community. My name is Ganesh Janakiraman; I lead the SaaS operations and delivery team for Broadcom. With me is my colleague Jose Chavez, who heads platform engineering for SaaS.
A
You may be wondering what the heck Broadcom is doing here, right? Broadcom is typically known for its chip design and hardware infrastructure. If you haven't noticed, Broadcom has been acquiring companies like Brocade, CA Technologies, and Symantec recently. So we have a multi-billion-dollar software business — a thriving one at that — that encompasses infrastructure software, enterprise software, mainframes, and security software, and we have a fairly large presence in SaaS. Both of us, Jose and myself, come from the CA acquisition.
A
What we will talk about today is our journey with Red Hat OpenShift: where we started, why we even picked OpenShift, where we are today, what we are doing today, and where we are headed. This is a flashback of about four to five years back, in terms of where this whole journey started for us. We were growing SaaS through acquisitions, so there were several products coming to the SaaS table through acquisitions that were already pre-baked, pre-built.
A
These were revenue-generating products we were absorbing, not greenfield ones. The problem we had was this heterogeneous set of products that we had to run and operate, and each one of these was an island by itself. The landscape looked something like this: we had products that were built back in the late 90s and early 2000s — monolithic.
A
If you can recall the old Java J2EE days, we had these massive WAR files that we used to run, all the way to the latest microservices-based products that use the latest technologies and tools. The common joke on the floor was that we literally had everything from Pascal to Haskell and C to R — we literally ran everything. The other important part is the development methodology. We had products that were doing the old waterfall model and some that were agile, so eventually we standardized on agile.
A
We follow the SAFe framework. The reason I bring that up is that the delivery of the software and the release management of the software were all totally different. The last part is the application infrastructure. Some of these products were built ground-up multi-tenant and well architected, even 20 years back — I wouldn't say the same about everything. Some of these were actually hosted services; they were not even true SaaS services.
A
Some of them ran in our own data centers in a private cloud, and some of them were built public-cloud native, where they could consume all of these nice elastic services available in the public cloud. So the issue we had was a problem of plenty: a plethora, a wide gamut of products that we had to find a common way to run and operate.
A
Ok,
we,
it
was
not
possible
for
us
to
scale
the
resources
both
in
terms
of
infrastructure,
as
well
as
people
to
be
able
to
manage
these
services.
So
that
was
the
problem
that
way
that
we
went
about
solving.
So
so,
how
could
we
assimilate
these
products
in
a
way
where
and
I
don't
need
to
grew
my
teams
right,
so
the
goals
that
we
had
for
the
SAS
delivery
were
the
following.
As
you
can
see,
we
had
to
make
it
less
complex,
easy
to
manage
and
reduce
the
operational
costs.
A
So
obviously
I
mean
it's
all
driven
from
there
and
we
were
essentially
looking
for
a
common
operating
model,
a
common
console
where
I
can
go,
look
and
and
manage
all
of
these
products
right.
So
that's
where
you
that's,
where
OpenShift
actually
came
into
play.
I
will
so
be
in
CA,
where
one
of
the
oldest
partners
of
red
hat
in
the
openshift
journey.
Somebody
talked
about
four
and
A
three.
We
can
talk
about
two
as
well.
A
No
bunch
of
question
I,
don't
know
how
many
of
you
here
have
used:
OpenShift
version-
oh
my
goodness,
okay,
my
my
commiserations.
Actually,
so
if
you
have
dealt
with
the
cartridges
and
gears
yeah,
it
was
not
fun
to
work
with
right.
So
back
in
2015,
I
think
the
pivot
was
made
to
run
to
adopt
darker
and
kubernetes
I.
Think
I'd
be
preaching
to
the
choir
here.
If
I
start
explaining
the
benefits
of
docker
and
kubernetes,
but
essentially
the
the
the
problem
that
we
wanted
to
solve.
A
We
said
that
if
you
are,
if
you're
a
developer
and
if
you're
able
to
deliver
your
software
in
a
box,
call
this
docker
image.
Okay,
we
need
to
find
a
way
to
run
this
in
an
orchestrated
environment
right.
So
as
long
as
the
developers
were
able
to
come
to
this
a
form
factor
of
software
development,
I
unit
for
software
delivery
as
a
docker
image,
we
would
take
that
we
will
be
able
to
run
that
in
a
in
a
common
model.
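The idea above — any service that arrives as a container image can be run the same way — can be sketched as a function that turns just an image reference into a standard Kubernetes Deployment manifest. This is a minimal illustration, not Broadcom's actual tooling; the field names follow the Kubernetes `apps/v1` API, and the product names and registry URL are invented.

```python
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a standard Kubernetes apps/v1 Deployment for any container image.

    The point: the operations team only needs the image reference, not
    product-specific knowledge, to run the workload on the platform.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Two very different products, one common operating model (names hypothetical):
legacy = deployment_manifest("ppm", "registry.example.com/ppm:9.1")
modern = deployment_manifest("apim", "registry.example.com/apim:2.4")
```

The same function serves a 20-year-old monolith and a new microservice alike, which is exactly the "common model" being described.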
A
So, for us to start, stop, and scale, the operations team does not need too much expertise in the product to manage these diverse technologies. As long as they're able to run OpenShift and manage the platform, we should be good to manage these. The other important thing, which I think I mentioned on an earlier slide — and I think the earlier speaker talked about it as well — is hybrid.
A
We
are
in
large
clusters
of
OpenShift,
both
in
our
private
cloud
and
different
public
clouds,
and
we
have
in
the
process
of
consolidating
them,
but
we
still
do
see
a
need
for
for
having
a
combination
of
both
I
will
talk
about
a
little
bit
about
the
scale
in
in.
In
the
later
part
of
the
presentation
and
I
think
similar
to
what
are
the
ing
use
case,
was
we
run
our
service
in
a
highly
security
environment?
So
we
need
to
go
through
the
the
InfoSec
regime.
We
have
all
our
environment
source
octo
certified.
A
We
have
a
couple
of
environment,
a
PCI
certified,
so
we
do.
We
do
have
a
federal
set
up
as
well
right.
So
that
gives
you
an
idea
of
the
scale
that
we
are
running
so
without
much
ado.
I
request,
my
colleague
Jose
to
talk
about
the
the
technology
stack,
what
we
run
at
a
high
level
and
take
it
from
that.
Thank
you.
B
Okay, so, our technology stack. Obviously, running in production, there are some common components that need to be incorporated, starting with the infrastructure layer as shown here: various cloud providers — VMware could be put in there as well. The ultimate goal was to be as flexible as possible about where we could deploy to. Starting with that layer, then adding OpenShift on top of it, with Docker and Kubernetes to host our services, and then, of course, some of the SaaS applications that we're hosting on top of OpenShift.
B
You
know:
Broadcom
IP,
the
iai
ops,
api
management,
CVD
atomic
MRA
ppm
more
to
come,
then,
in
order
to
support
those
services
we
had
to
have
you
know
infrastructure
services
as
well.
On
the
left-hand
side,
you
have
your
monitoring
pieces
again:
Broadcom
actual
applications,
APM
ASM
uim
DOI
to
the
right
of
the
SAS
applications.
You
would
have
your
common
services
that
are
like
you
know.
Integration
are
needed
by
this,
the
actual
SAS
services
like
Kafka
and
zookeeper.
B
Elasticsearch actually runs as a StatefulSet. Just by a show of hands, who runs StatefulSets in production today? Okay, so yeah — that was one of the concerns for us. A lot of the reading we had been doing said that StatefulSets for database technologies like MySQL and Postgres are good for development purposes, but once you incorporate basically shared resources, there's a risk involved there.
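A StatefulSet differs from a Deployment mainly in stable pod identity and per-pod persistent storage, which is what makes it workable for something like Elasticsearch. A minimal sketch of the relevant `apps/v1` fields — the name, image tag, and sizes are illustrative, not the production values:

```python
def statefulset_manifest(name: str, image: str, replicas: int, storage_gb: int) -> dict:
    """Sketch of a Kubernetes apps/v1 StatefulSet: each pod gets a stable
    ordinal name (name-0, name-1, ...) and its own PersistentVolumeClaim,
    unlike a Deployment, whose pods are interchangeable."""
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": name},
        "spec": {
            "serviceName": name,                      # headless service for stable DNS
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
            "volumeClaimTemplates": [{                # one PVC per pod, survives restarts
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": f"{storage_gb}Gi"}},
                },
            }],
        },
    }

es = statefulset_manifest(
    "elasticsearch", "docker.elastic.co/elasticsearch/elasticsearch:6.8.0", 3, 100
)
```

The `volumeClaimTemplates` block is the key difference: it is what lets a data store keep its disk across pod rescheduling.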
B
You know, with performance and things like that — so that's kind of a leap of faith we had to take with Elasticsearch, but it's worked out so far. We have our RDB layer as shown here, Postgres and MySQL; I'll talk a little bit more about where we're going with those. Right here they're depicted as standalone services. And then, of course, you can't be without security, so: Qualys for vulnerability scans, Twistlock for scanning container images.
A
So
we've
been
running
OpenShift,
the
the
the
redirects
OpenShift
getting
the
first
version
when
would
live
was
the
3.1.
We
quickly
moved
to
3.4,
we
went
live
2016
November,
it's
been
three
years
actually
right
now,
most
of
the
production
workload
we
have
is
in
311
and
we
are
testing
for
actively
and
we
should
be
there
some
day,
the
kind
of
applications
that
we
run
again.
I
kind
of
already
talked
about
the
the
breadth
right.
A
So
we
have
a
lot
of
web
applications,
as
you
may
guess
again
different
hues,
we
run
everything
we
don't
some
nginx
Apache
H
a
proxy
kind
of,
and
we
have
like
you
name
it.
So
the
important
thing
that
we
from
what
I
recall
from
a
pre-owned
ship
to
where
we
are
today,
is
their
ability
to
scale
right
so
with
we.
We
are
able
to
scale
dynamically
based
on
the
based
on
the
number
of
requests.
We
are
able
to
scale
up
scale
down
as
needed
and
I
think
the
support
on
kubernetes.
A
There
has
helped
us
enormously
in
terms
of
managing
these.
This
is
the
web
apps,
which
used
to
need
operators
going
and
spinning
up
VMs
and
where,
after
they
were
after
the
cyber
monday,
is
over
scaling
them
back.
So
we
don't
need
to
do
any
of
these
right
now,
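That request-driven scale-up/scale-down maps onto a Kubernetes HorizontalPodAutoscaler. The sketch below uses the CPU-based `autoscaling/v1` API as a stand-in (request-rate scaling needs a custom-metrics adapter); the workload name and thresholds are invented for illustration:

```python
def hpa_manifest(target: str, min_replicas: int, max_replicas: int, cpu_percent: int) -> dict:
    """Sketch of an autoscaling/v1 HorizontalPodAutoscaler: Kubernetes grows or
    shrinks the target Deployment between min and max replicas based on observed
    load, replacing the old manual spin-up-VMs / scale-back-after-the-peak routine."""
    return {
        "apiVersion": "autoscaling/v1",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": f"{target}-hpa"},
        "spec": {
            "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": target},
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
            "targetCPUUtilizationPercentage": cpu_percent,
        },
    }

# Ride out a Cyber Monday-style peak without an operator in the loop:
web = hpa_manifest("storefront", min_replicas=3, max_replicas=50, cpu_percent=70)
```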
We have other workloads like project and portfolio management and continuous delivery management. These are large database instances — more workflow-management kinds of applications. Again, the most important part of what we achieved with OpenShift: we literally run thousands of containers — thousands of containers just for these kinds of workloads — and the operational staff we would have needed to manage this was enormous. The kind of operational efficiency that we've achieved by moving these workloads into containers is amazing, from a resiliency perspective — self-healing, the ability for these applications to recover. I think that alone has helped us tremendously, actually. Then we also run a data science platform, where we have workloads with a lot of machine learning and AI that we do for different products.
A
So
one
of
the
products
that
we
have
is
the
payment
security
platform.
We
run
the
largest
online
authentication
system.
It
used
to
be
a
protocol
called
verified
by
Visa
or
MasterCard
secure
code
if
you're
familiar
so
we
run
that
that
workload
and
that
so
there's
a
real-time
fraud
detection
that
we
do
that
that
that
uses
that
that
is
openshift
as
well
again.
This
is
the
MLA
I.
A
Here
is
more
homegrown
proprietary,
it's
not,
and
if
you
tensorflow
but
again,
the
this
one
runs
in
our
open
shift
platform
too,
and
then
we
have
other
enterprise
security
applications.
Again
we
have
applications
like
API
gateways,
API
management
portal
and
things
like
that
and
how
they
talked
about
analytics.
We
run
when
we
started
this
journey.
We
were
kind
of
hesitant
to
do
anything
on
the
on
the
on
the
stateful
applications.
We
see
we
are
still
running
them
and
on
VMs
recently,
it's
been
like
about
over
six
months
now.
A
I
think
we
moved
completely
to
sacred
sets
and
then
the
whole
new
storage
class
has
helped
us
big-time
there
as
well.
So
we
run
our
complete
elastic,
search
and
Hadoop
to
a
small
scale.
We
have
some
spark
workloads
that
we
run
on
a
very
small
scale,
as
in
the
open,
shipped
environment
as
well.
The
last
one
I
kind
of
mentioned
in
the
earlier
slide
about
AI
operations.
A
AIOps is basically a suite of monitoring tools that we run and host on behalf of our customers. We use it internally as well.
A
This
is
again
a
very
useful
part
of
our
overall
platform,
wherein
we
have
this
monitoring
both
inside
out
and
outside,
in
that
we
do
so
when
I
say
outside,
and
we
have
pops
that
are
running
all
over
the
world.
Those
that
are
running
on
open
ship,
that
kind
of
monitors
and
sense
sense,
metrics,
and
we
have.
We
have
a
network
flow
net
flow
analysis
that
that
we
can
do
and
we
can.
A
We
do
have
application
performance
management
that
goes
at
write
to
your
container,
to
your
part,
to
your
service,
to
your
namespace
and
and
your
cluster
right,
and
there
is
the
other
micro
service
versus
the
application
view.
So
there
are
different
levels
of
monitoring
and
metrics
that
it
collects
and
there
is
an
AI
engine
that
actually
does
the
correlation,
and
that
gives
us
a
very
good
visibility
into
what's
going
on
on
the
environment.
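The drill-down just described — cluster → namespace → service → pod → container — is, underneath, a label hierarchy on each metric sample. A toy illustration of rolling a flat metric stream up to any level (none of this is the actual product; the labels and numbers are made up):

```python
from collections import defaultdict

# Each sample is tagged with the full hierarchy, so the same data can be
# viewed per container, per pod, per service, per namespace, or per cluster.
samples = [
    {"cluster": "us-east", "namespace": "apim", "service": "gateway", "pod": "gateway-0", "cpu_ms": 120},
    {"cluster": "us-east", "namespace": "apim", "service": "gateway", "pod": "gateway-1", "cpu_ms": 80},
    {"cluster": "us-east", "namespace": "ppm",  "service": "web",     "pod": "web-0",     "cpu_ms": 300},
]

def rollup(samples, level):
    """Aggregate cpu_ms at the given label level of the hierarchy."""
    totals = defaultdict(int)
    for s in samples:
        totals[s[level]] += s["cpu_ms"]
    return dict(totals)

by_service = rollup(samples, "service")      # {'gateway': 200, 'web': 300}
by_namespace = rollup(samples, "namespace")  # {'apim': 200, 'ppm': 300}
```

A correlation engine then works over exactly these label dimensions to tie a container-level anomaly back to the service or cluster it affects.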
A
The other part that we wanted to talk about today — I think the earlier speakers touched upon this one as well — is how we manage this environment: what kind of automation we have to make sure we have automated provisioning, separating out the infrastructure provisioning from the application provisioning and the application deployments. So Jose will talk about what we do on the automation and CD side.
B
Okay, so thinking about the technology stack that I showed earlier: as you start to build these out, you realize how much effort and time goes into building even just one stack, and if you want to be able to scale, obviously the importance of automation comes in. So as the product evolved — as the common stack, as we call it, evolved — we started to introduce new tools and new processes to make the job easier and also to be able to keep up with the demand of building out multiple environments as more and more teams and services were introduced.
So, the provisioner — this is more of a high level, and I'll go into more detail in a minute — but we leveraged Packer to start building out images, whether it was an AWS image, a VMware image, or a Google image.
B
We
had
some
standards
or
some
security
practices
that
we
wanted
to
apply
to
the
images,
basically
creating
like
a
standard
or
golden
image.
If
you
will
and
then
leveraging
it
as
part
of
the
provisioning
process
that
would
get
fed
into
ultimately
like
the
instances
that
would
get
spun
up
in
the
V
PC
and
that's
what
that
kind
of
bottom
layer
shows
there
and
basically,
let
you
know
deploying
or
distributing
that
image
to
the
various
GCP
or
AWS
accounts
in
order
to
leverage
it
for
deploying
via
terraform.
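The flow above — bake a hardened image once with Packer, then have Terraform launch every instance from it in each account — decouples image hardening from provisioning. A schematic sketch of that decoupling (the image tag, account names, and roles are invented, not Broadcom's configuration):

```python
# One golden image (built by Packer, security-hardened) is distributed to every
# account/region; provisioning then only ever references the image by tag, so
# every instance in every stack starts from the same vetted base.
GOLDEN_IMAGE_TAG = "base-rhel-hardened-2019.11"   # hypothetical tag

def instance_spec(account: str, region: str, role: str, count: int) -> dict:
    """Render the per-account spec that a Terraform-style run would consume."""
    return {
        "account": account,
        "region": region,
        "image_tag": GOLDEN_IMAGE_TAG,   # same hardened base everywhere
        "role": role,                    # e.g. master / node / utility
        "count": count,
    }

stack = [
    instance_spec("aws-prod", "us-east-1", "master", 3),
    instance_spec("aws-prod", "us-east-1", "node", 12),
    instance_spec("gcp-dev", "us-central1", "node", 4),
]
```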
B
Here are basically a couple of use cases that we identified when we were building out this latest generation of what we call our deployer platform: provisioning the GitOps way. It was actually before we realized we were doing it the GitOps way, but ultimately we used Git as the source of truth.
B
You
know,
plan
apply
and
that
kicks
off
deployment,
whether
it's
on
in
cloud
or
you
know,
on
prem.
You
know
like
a
vmware
type
deployment,
but
it
goes
through
the
whole
flow
leveraging.
You
know,
ansible
is
one
of
the
core
pieces
there
and
the
key
part
there
is
when
you
build
a
stack.
Not
all
stacks
are
gonna
be
built
the
same.
So
you
need
the
flexibility
to
alter
the
size
right
and
we
designed
it
in
such
a
way
that
you
could.
B
You
know,
there's
a
set
of
defaults,
but
you
can,
you
know,
modify
how
many
open
shift
nodes.
You
know
masters.
You
know
utility
servers,
you
ultimately
need
in
the
stack
and
since
it's
you
know
different
get
repo
for
each
stack.
You
can
manage
those
separately,
but
we
also
realized
that
there
was
a
need
for
testing
as
well.
Quick.
You
know
iterative
testing
of
your.
You
know
our
own
code
and
our
own
automation,
which
is
the
lower
left
hand,
side
workflow.
B
This
is
really
how
you
use
it,
even
if
you
don't
necessarily
need
or
want
to
understand
the
inner
workings,
and
that
was
the
goal,
so
that
was
for
the
platform
deployment.
But
you
also
have
the
services
deployment
side
of
things
again
here
we
actually
use
git
as
the
the
source
of
truth
as
well.
In
this
particular
case,
you
know
if
much,
every
development
team
has
their
own
form
of
CI.
We
give
them
the
flexibility
to
to
build
and
automate.
B
How,
like
you
know,
compile
and
ultimately
do
you
know
their
their
local
testing
and
building
of
artifacts,
but
once
they're
ready
to
actually
test
in
our
sandbox
environment
that
we
provide
for
them
or
even
go
into
deployment
they
effectively
do
a
git
commit
which
triggers
this
whole
CD
pipeline
flow.
Part
of
the
git
commit,
you
know,
specifies
things
like
your
target
environment,
the
services
that
you
want
to
deploy,
and
you
know,
expose
ports
things
like
that.
B
You
know
the
common
attributes
that
you
would
find
that
describe
your
service
and
how
it's
going
to
be
rolled
out
and
we
support
docker
compose
a
format
as
input,
but
we're
actually
focused
more
on
the
helm
side.
Now
and
just
by
show
of
hands
who
here
uses,
you
know
helm
for
the
majority
of
their
deployments:
oh
okay,
so
yeah,
that's!
Actually
one
of
the
newer
features
we
introduced
into
our
CD
pipeline
and
support
for
helm
and
but
ultimately,
the
flow
is
user
commits.
You
know
a
change
to
the
input
file
a
hound
chart.
B
You
know
a
docker
image
in
something
like
artifactory
and
we
pick
it
up.
You
know
we
copied
into
our
own
local
space.
We
want
to
do
a
security
scan
of
that
docker
image
make
sure
it
meets
the
standards
that
and
that
we've
set
and
there's
no
like
glaring.
You
know
critical
issues
there
and
we
use
twist
lock
to
scan
those
images
and
provide
any
feedback
you
can
put
gates
in
place
security
gates,
so
that
you
know
if
there
are
vulnerabilities,
you
can
actually
fail
the
pipeline
right
there
and
send
back
the
report.
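The gate just described can be sketched as a simple policy check over scan findings — a stand-in for the real Twistlock integration, with invented severity thresholds and CVE IDs:

```python
class GateFailure(Exception):
    """Raised to fail the pipeline when a scan violates policy."""

def security_gate(findings, max_critical=0, max_high=3):
    """Fail the pipeline if the image scan exceeds the allowed severities."""
    critical = sum(1 for f in findings if f["severity"] == "critical")
    high = sum(1 for f in findings if f["severity"] == "high")
    if critical > max_critical or high > max_high:
        raise GateFailure(f"scan failed: {critical} critical, {high} high")
    return True

clean_scan = [{"id": "CVE-2019-0001", "severity": "low"}]
bad_scan = [{"id": "CVE-2019-0002", "severity": "critical"}]

security_gate(clean_scan)   # passes, pipeline continues
try:
    security_gate(bad_scan)  # raises, pipeline fails and the report goes back
except GateFailure:
    pass
```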
B
If
it's,
if
it
turns
out
it's
good,
it
passes
the
the
security
scans.
Then
it
goes
and
syncs
with
with
Ben
Tre
and
because
artifactory
is
actually,
you
know
only
accessible
internally.
We
needed
a
way
to
access,
docker
images,
externally,
Ben
Tre
stores
that
and
then
ultimately,
we,
the
deployer
communicates
with
the
endpoint
OpenShift
service,
and
you
know,
tells
it
it.
It's
ready
to
pick
up
the
images
from
Ben
Tre
into
the
local
registry
and
OpenShift
and
ultimately
deploy
those
services.
So
I
mean
the
bottom
line.
B
Here
is
that
we
wanted
to
empower
developers,
you
know
to
deploy
on
their
own
schedule.
You
know
whether
it's
you
know
the
ones.
The
day
once
a
week,
you
know
a
month
whatever
it
was
without
actually
getting
in
the
way
you
know
and
having
operation
the
typical
operations
model
where
they
have
to
wait
a
certain
period
of
time
in
order
to
get
these
deployments
out
so
now,
they're,
it's
literally
on
their.
You
know,
on
their
schedule,
there's
still
security
gates
in
place
as
well.
B
Where
you
know
this
pipeline
is
part
of
the
input
they
can
dictate
approved
like
an
approver
chain,
if
you
will
and
when
they
submit
it,
has
to
go
through
like
a
couple
layers
of
approvals.
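An approver chain of that sort reduces to checking that every required sign-off is present before the production deploy step runs. A toy version — the role names are invented:

```python
def approvals_complete(required_chain, received):
    """True only when every approver declared in the pipeline input has
    signed off; the pipeline blocks the production deploy until then."""
    received_set = {r["approver"] for r in received if r["approved"]}
    return all(role in received_set for role in required_chain)

chain = ["team-lead", "service-owner"]   # declared in the pipeline input
signoffs = [
    {"approver": "team-lead", "approved": True},
    {"approver": "service-owner", "approved": True},
]

approvals_complete(chain, signoffs)       # True  -> deploy proceeds
approvals_complete(chain, signoffs[:1])   # False -> deploy stays blocked
```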
And finally, if those are approved, it goes through the pipeline and ends up getting deployed into production. One of the other important pieces that ties in here — it's not actually mentioned on the slide — is that in a production environment, you want to have a paper trail of what's happening.
B
We
integrate
with
ServiceNow,
create
a
ticket,
put
the
necessary
details
in
there,
ultimately
close
the
ticket
and
deploy,
so
that's
all
tracked
and
it
audited,
but
you
can
see
on
the
right-hand
side.
It
doesn't
matter
what
type
where
OpenShift
is
running
as
long
as
we
can
reach
it
through
a
network
we
can
actually
deploy
to
it.
So,
whether
it's
in
the
cloud
or
it's
on
front
yeah,
that's
pretty
much
it
there.
B
Okay,
so
we've
kind
of
talked
about
where
we've
been
where
we
are
today,
but
where
are
we
headed
right
so
where
we
see
ourselves
in
the
next,
you
know
three
six
months.
We
want
to
be
able
to
leverage
operators
to
manage
the
lifecycle
of
services
like
Postgres.
You
know
my
sequel,
Kafka.
Those
are
going
to
be
critical
for
us.
B
We
want
to
start
providing
essentially
self-service
a
self-service
model
for
development
teams
to
be
able
to
plug
in
or
bring
in.
You
know,
database
technologies,
for
example,
quickly
without
having
to
wait
for
the
traditional.
Let's,
you
know,
get
some
infrastructure
and
deploy
the
database.
They're
number
one
is
time-consuming
two,
it
doesn't
scale
well
and
and
three
if
you
can
have
operators
actually
managing
the
full
lifecycle
of
that
it
simplifies
things
right.
It's
part
of
the
whole
automation
story,
so
that's
one
one
big
piece.
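The operator pattern mentioned above is, at heart, a reconcile loop: observe the declared spec, compare it to actual state, and act on the difference. A framework-free sketch (the fields and actions are illustrative, not a real Postgres operator):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """One pass of an operator's control loop for, say, a Postgres cluster:
    emit the actions needed to drive actual state toward the declared spec."""
    actions = []
    if actual.get("replicas", 0) < desired["replicas"]:
        actions.append(("add_replica", desired["replicas"] - actual.get("replicas", 0)))
    if actual.get("version") != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    if desired.get("backup") and not actual.get("backup_scheduled"):
        actions.append(("schedule_backup", desired["backup"]))
    return actions

spec = {"replicas": 3, "version": "11.5", "backup": "nightly"}
state = {"replicas": 2, "version": "11.4"}
plan = reconcile(spec, state)
# [('add_replica', 1), ('upgrade', '11.5'), ('schedule_backup', 'nightly')]
```

Running that loop continuously is what lets an operator own the full lifecycle — scaling, upgrades, backups — instead of a human ticket queue.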
B
The
other
thing
is,
you
know
we
want
developers
to
focus
on
what
they're
good
at
doing
right,
developing
services
and
not
focused
on
actually
how
they
deploy.
You
know
like
a
Kafka
clusters,
you
keep
requests
or
things
like
that
right
and
that
can
be
time-consuming
their
time.
Can
be
better
spent
elsewhere,
so
I
think
that
that
operator
story
kind
of
blends
in
well
with
that
we
want
to
incorporate
a
service
mesh
for
multiple
reasons,
but
one
of
them
being
you
know
we
want
to
standardize
on.
B
You
know
the
security
model
there
and
finally
multi
cluster
management
for
various
reasons,
but
we
definitely
see
that
the
benefits
of
that
and
having
a
single,
you
know
paying
a
view
if
you
will
for
how
things
are
operating,
especially,
you
know.
The
health
of
you
know:
clusters
excuse
me,
health
of
clusters
and
even
like
a
chargeback,
you
know
modeling
usage
and
things
like
that.
It's
definitely
beneficial.
B
So
I
know
Diana,
you
were
looking
for
some
honest
feedback,
so
I
just
put
a
couple
of
things
here,
Ganesha
feel
free
to
add
anything
else
that
you
see
fit,
but
basically
we
just
wanted
to
highlight.
You
know
we're
hoping
that
you
know
red
hat
can
bring
some.
You
know,
focus
back
to
okay,
be
on
Forex
right
now,
we're
running
on
3.11,
but
there's
definitely
features
including
you
know.
Some
of
the
newer
operator
features
that
we
want
to
take
advantage
of.
B
One
of
my
engineers
actually
made
a
comment
that
he
was
trying
to
install
code,
ready
containers,
openshift
4.2
on
his
local
laptop
and
it
consumed
like
20
gigs
of
memory
which
brought
it
to
a
crawl.
So
you
know
maybe
take
take
a
look
at
that
and
you
know
we're
looking
for
a
similar
experience
to
like
you
know
how
many
shift
operated
is
a
pretty
quick,
very
small,
install
and
ultimately
simplify
the
upgrade
process,
which
I
know
that
you
guys
are
kind
of
working
on,
but
we
haven't
seen
yet
with
okd.
So
anything
else.