From YouTube: OpenShift at CA
Description
Jose Chavez from CA Technologies discusses their production deployment of OpenShift at the OpenShift Commons Gathering Boston on May 1, 2017.
Learn more and see the slides here:
https://blog.openshift.com/openshift-commons-gathering-at-red-hat-summit-2017-video-recap-with-slides/
How's everybody doing today? Good. Does anybody need a power nap? That was the time, yeah. All right, so my name is Jose Chavez, principal software engineer with CA Technologies, and basically I'm going to be talking about our journey with OpenShift. OK, so I'm going to read this verbatim, which will give you a good idea of one of the main motivators for CA Technologies to migrate over to containers.
"The deployable thing is no longer an executable Java or a JAR file, but a container. That container has everything it needs to run and provides a nice separation between application code and infrastructure. Everything inside the container is the developer's responsibility, and all the infrastructure that the container runs on and uses is the responsibility of the operations engineer."
The key thing there is basically that we want to enable the developer to focus on what they do best, right, and not have to worry about having a running, working, functional environment, and the operations engineers have a better idea of, you know, kind of the boundaries of their debugging for the infrastructure. So, pretty simple. There are some considerations there; one of them, which I heard earlier today, is around chargeback.
I'd love to talk to anybody that has had some experience around that; that would be a good conversation for us. All right, so everybody meet Billy. Billy represents our infrastructure teams. So, the history of CA with SaaS applications, or the transition to SaaS I should say: basically there was no standardized deployment model. This is going back a few years, when we first started taking a look at, you know, moving to SaaS, or reinventing the SaaS deployment model. So, no standardized deployment model, and we were manually deploying infrastructure.
It's probably pretty boring, you know, to do the same task over and over again, but you'll see the growing pains: very highly error-prone, time-consuming, long deployment cycles and, of course, the infrastructure team working around the clock to, you know, ensure that either new deployments are met or that deployments are actually functioning. So Billy here is essentially shown picking up, you know, the rocks and putting them on the wagon in no particular order, and I'm not sure if he's actually having fun here or not. I would imagine that instead of putting them into the wagon he'd probably rather be throwing them, right. But you know, you can see it's very manual and there's not a lot of, you know, order here.
CA's landscape today involves a variety of different applications. You can see the traditional monolithic, the tiered services, and the microservices, with multiple development processes, running on different infrastructures. So when we were, you know, migrating into this new SaaS model, we wanted to ensure that we didn't leave anybody out. We wanted to accommodate all of, you know, the business units in the organization, the development teams that already existed or, you know, were new to the game, coming up with the latest and greatest, you know, type of software. So we had to keep these things in mind, and this is actually where OpenShift comes in, right. It enabled us to have a single path to deploy and a single way to manage, you know, these products within the same environment.
So why OpenShift? Again, a standard application deployment model, right — predictable for the most part, unless you don't trust your playbooks, right. Hosting on different infrastructure providers. Being able to manage your applications' current operations — some form of administration, you know, being able to manage the services you have deployed out there today through a UI where, you know, if you wanted to scale an application on demand, or you wanted to set, you know, resource limits on your services, you can do that. A variety of other things, obviously, but those are some of the common ones.
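To make the on-demand scaling and resource limits he mentions concrete: the same actions the OpenShift console exposes can also be driven from the oc CLI. A minimal sketch, assuming an OpenShift 3.x cluster with a hypothetical DeploymentConfig named myapp in a project named demo:

    # Minimal sketch: scale a service and set resource limits via the oc CLI
    # (the "myapp"/"demo" names are hypothetical).
    import subprocess

    def oc(*args: str) -> None:
        # Run an oc command and raise if it exits non-zero.
        subprocess.run(["oc", *args], check=True)

    # Scale the DeploymentConfig to three replicas on demand.
    oc("scale", "dc/myapp", "--replicas=3", "-n", "demo")

    # Set CPU/memory requests and limits on its containers.
    oc("set", "resources", "dc/myapp",
       "--requests=cpu=100m,memory=256Mi",
       "--limits=cpu=500m,memory=512Mi",
       "-n", "demo")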
The big one for us is implementing a fully automated CD deployment, and I'll actually go into that in a future slide here in a minute. Another key part is providing a common shared development environment for developers to use. So, you know, one of the benefits to containerizing and using OpenShift is to, you know, reduce the time to market, right. So how do we do that? You know, some developers like to install, you know, tools on their own development environment or their own, you know, laptops, and do building and testing and all that, but what we wanted to do is provide an environment where they didn't have to worry about learning how to install OpenShift necessarily, if they didn't want to, and just give them a method by which they could, you know, write their applications and, from their own desktop or laptop, push, you know, into our shared dev OpenShift environment.
Our automation enables us to — it's flexible enough to deploy both OpenShift Enterprise and Origin on CentOS, as well as, like I mentioned earlier, multiple infrastructure-as-a-service providers, and also to vary the type of environment that we deploy: HA or non-HA, the size of the cluster. All those things are configurable, and the key point here is the amount of time it actually takes to deploy our environment end to end: on average, between one and two hours. And just a little history: we actually go back to OpenShift 2, where we were doing this same automation, and, you know, we had some pains with using that version, but our deployment times back then were probably on the order of five to six hours. So it's drastically improved now, and the product overall — OpenShift 3, I think, is far better than it was before. In addition to OpenShift — excuse me — being built out of Kubernetes and Docker, I just find it much easier to administer and maintain than the previous generation.
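The knobs he lists for the install automation (Enterprise versus Origin, the infrastructure provider, HA versus non-HA, cluster size) suggest a small parameter set driving the deployment. The sketch below is purely illustrative — the talk does not show CA's tooling — and every name in it is an assumption:

    # Hypothetical parameter set like the one described: distribution, IaaS
    # provider, HA or not, and cluster size feeding an installer.
    from dataclasses import dataclass

    @dataclass
    class ClusterSpec:
        distribution: str  # "enterprise" or "origin"
        provider: str      # e.g. "aws", "openstack", "vsphere"
        ha: bool           # highly available control plane or not
        nodes: int         # number of worker nodes

    def summarize(spec: ClusterSpec) -> str:
        # The real automation would render an installer inventory from this;
        # here we only show the shape of the configuration.
        masters = 3 if spec.ha else 1
        return (f"{spec.distribution} on {spec.provider}: "
                f"{masters} master(s), {spec.nodes} node(s)")

    print(summarize(ClusterSpec("origin", "aws", ha=True, nodes=6)))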
A
Okay,
so
one
maybe
one
and
a
half
okay,
how
many
would
say
you
know
a
pretty
good
amount,
but
you
know
there's
a
lot.
No,
there's
still
some
work
to
do:
okay,
good,
good
anybody,
not
everybody!
That's
a
Billy
in
here!
That's
still
doing
everything
manually,
okay,
alright,
so
we
kind
of
kind
of
fall
into
the
middle
there
and
I've
debated
whether
you
know
we
should
be
in
that
upper
echelon
rights
of
having
a
fully
end
end
deployment,
just
because
I'd
probably
be
working
myself
out
of
a
job.
So
all
right.
Then, of course, we have the hosted applications that are, you know, customer-facing, right, that customers are consuming. On the left-hand side we have our monitoring solution; it's actually a combination of CA products — APM, ASM, UIM — and then a system down at the bottom for container-level performance metrics and reporting or alerting. On the right-hand side you'll see some of the common services that are running outside as standalone: our DB layer and such.
Basically, there's no human intervention; with that type of change, the flow doesn't have to go through the typical change cycle. So it's basically a push by the developers and it'll go all the way through into the endpoint environment. And then we have our normal change, which is a traditional check-in that requires change management approval — so high risk, training needed, documentation and all that — and it's a push-button deployment from inside our in-house Jenkins.
So here's a high-level overview of both approaches in our CD flow. On the left-hand side you have one of many actual CI models: a developer checks in code, a code build is executed, and something like TeamCity can pick that up and actually create a Docker image with those new artifacts that have been built. Depending on how it's tagged — whether it's a normal change or a standard change — it may or may not go all the way, but let's say it's a standard change that does go all the way.
There would be two different paths. One: TeamCity could actually make an API call to Jenkins to notify it that there is a new image, a modified image, or a change to an image, basically. So what it'll do is, Jenkins will pick up that job, scan it with Twistlock, and ensure there are no vulnerabilities — no, you know, CVEs. If it fails, it generates a report and will go and notify the appropriate development team to, you know, patch those up; if it passes, then it will actually push into a specific location in the Artifactory instance that we use. Then it will make an API call — this is all part of the same pipeline — it'll make an API call to Artifactory to actually synchronize that image to Bintray, to essentially mirror that image out in the cloud. And that's key, because OpenShift cannot communicate with Artifactory; we can't, you know, transmit that image directly — obviously for security and firewall rule considerations.
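The gate he describes — scan the new image, report and notify on findings, otherwise continue toward Artifactory — can be pictured roughly as below. The scanner command is a placeholder (the talk names Twistlock but does not show its interface), and the image name is invented:

    # Rough sketch of the scan-and-gate step; "image-scanner" stands in for
    # whatever scanner the pipeline calls, and the image name is hypothetical.
    import subprocess
    import sys

    IMAGE = "registry.example.com/team/app:1.2.3"

    def scan_is_clean(image: str) -> bool:
        # Treat a non-zero exit code from the scanner as "vulnerabilities found".
        return subprocess.run(["image-scanner", "scan", image]).returncode == 0

    if not scan_is_clean(IMAGE):
        # Failure path: surface the report and stop so the owning team can patch.
        print(f"scan failed for {IMAGE}; notifying the development team")
        sys.exit(1)

    # Success path: push the clean image to the agreed location in Artifactory.
    subprocess.run(["docker", "push", IMAGE], check=True)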
So it will put the image in Bintray, and then basically Jenkins makes a synchronous call — so it sits there and waits — triggering a remote job to our deployer running in the target OpenShift environment. Now what will happen is the deployer will make an API call to OpenShift to force pulling down the image from Bintray, tagging it locally for the integrated Docker registry, and then pushing it into the integrated Docker registry.
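The deployer's pull-from-the-mirror, tag, push-into-the-integrated-registry step can be sketched as three plain Docker operations; in their setup it is driven through the OpenShift API, so treat the addresses and names below as assumptions:

    # Sketch of pull/tag/push into the integrated registry; all names and
    # addresses here are hypothetical.
    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(list(cmd), check=True)

    SOURCE = "mirror.example.com/team/app:1.2.3"                 # cloud mirror
    TARGET = "docker-registry.default.svc:5000/demo/app:1.2.3"   # integrated registry

    run("docker", "pull", SOURCE)         # force-pull the new image
    run("docker", "tag", SOURCE, TARGET)  # retag it for the local image stream
    run("docker", "push", TARGET)         # push; the image stream tag now moves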
At that point an image change trigger occurs and, of course, the new image — the changes — are now propagated and running in production. So that's one path. The other path is that Jenkins is actually scanning on an interval against pre-configured locations in Artifactory, and if it detects a change to the image, it'll essentially run through the same path as before.
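The image change trigger he refers to is standard OpenShift 3 behavior: a DeploymentConfig can watch an image stream tag and roll out automatically when it changes. One way to wire that from the CLI, assuming a hypothetical DeploymentConfig app and image stream app:latest in a project named demo:

    # Attach an image change trigger so a new image stream tag rolls out the
    # DeploymentConfig automatically (names are hypothetical).
    import subprocess

    subprocess.run([
        "oc", "set", "triggers", "dc/app",
        "--from-image=demo/app:latest",  # watch this image stream tag
        "-c", "app",                     # container in the pod template to update
        "-n", "demo",
    ], check=True)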
So here's one of our development OpenShift environments that I mentioned earlier, and this is the sample repo I'm referring to. We've exposed it out; it's publicly available, you know, through Route 53. So if we click on it, it's going to open up, you know, a just text-based but simple CSS OpenShift demo. This is actually kind of a guessing game in here, but for all intents and purposes I just wanted to show actual changes that occur.
So once all is said and done, it'll go through the pipeline that I described and we'll start seeing some changes here on the screen when it actually propagates all the way to the development environment. So, any questions while we wait? It's going to be like two or three minutes before that finishes.
So it's essentially — it's just like an API layer that's handling the requests from Jenkins. It essentially brokers the request to make the deployment from our on-prem image, you know, to load it into the actual running environment out in the cloud, right, in AWS.
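Since the deployer is described as a thin API layer brokering Jenkins' request into the target cluster, a minimal sketch of that idea might look like the following. This is not CA's code; the endpoint, payload fields, and the choice of oc import-image are assumptions about one plausible implementation:

    # Hypothetical "deployer" endpoint: Jenkins POSTs the image it wants
    # deployed, and the service asks OpenShift to re-import it into the local
    # image stream, which in turn fires the image change trigger.
    import subprocess
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/deploy", methods=["POST"])
    def deploy():
        payload = request.get_json(force=True)
        stream = payload["imagestream"]   # e.g. "demo/app:latest"
        source = payload["source"]        # address of the mirrored image
        subprocess.run(
            ["oc", "import-image", stream, f"--from={source}", "--confirm"],
            check=True,
        )
        return jsonify(status="imported", imagestream=stream)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)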
So we support all of the development teams at our company, and we provide a standard set of Docker images that we're essentially stamping, right — putting the stamp of approval on. So we use, you know, Twistlock to ensure that, you know, the vulnerabilities or risks are addressed before we provide them in Artifactory for others to use. So the images that you see being scanned when teams make changes and push are, you know, built from a tag — the Dockerfile actually points to our golden images.
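One lightweight way to picture the "Dockerfile points to our golden images" rule is a check that a build's FROM line references an approved base. This is an illustrative sketch, not CA's tooling; the registry prefix is invented:

    # Illustrative check: does the Dockerfile build FROM an approved golden
    # base image? (The registry prefix is hypothetical.)
    GOLDEN_PREFIX = "artifactory.example.com/golden/"

    def uses_golden_base(dockerfile_path: str) -> bool:
        # Only the first FROM line is inspected; a multi-stage build would need
        # every FROM checked.
        with open(dockerfile_path) as f:
            for line in f:
                parts = line.split()
                if parts and parts[0].upper() == "FROM" and len(parts) > 1:
                    return parts[1].startswith(GOLDEN_PREFIX)
        return False

    if __name__ == "__main__":
        print(uses_golden_base("Dockerfile"))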