From YouTube: OpenShift at ATPCO
Description
Anandhi Navaneethakr, Veerendra Akula, and Mukul Nadkarni from ATPCO discuss their production deployment of OpenShift at the OpenShift Commons Gathering Boston on May 1, 2017.
Learn more and see the slides here:
https://blog.openshift.com/openshift-commons-gathering-at-red-hat-summit-2017-video-recap-with-slides/
A
Hi, I'm going to present today along with Veerendra and Mukul. We are going to talk about how and why we use OpenShift at our company, ATPCO, which stands for Airline Tariff Publishing Company. As the name goes, we cater to the airline industry; practically, we are owned by them. We act as a neutral body for the airlines: they enter their fares, their restrictions, and their fee data into our system, and we distribute that data.
A
To know our company better: about 434 airline carriers work with us, we have four offices around the world, 99% of the airline industry's data goes through us, and we have about 173 million fares in our database. So you can see the volume.
A
We have been in this industry for more than 50 years. As you can see, we have been with the airlines from the days when they filed their fares on paper to the current day, when a single carrier can enter or update about a million fares a day. So again, you can see the amount of data that goes through us and how much we process, and at the same time how our technology has had to move.
A
We have the older technology at the table and new technology coming in. We are in a fairly unique position: most of the time the business requirements haven't changed much, and we cannot change the way we receive or send the data, but our technologies have become outdated and we have to replace them.
A
Having said that, we cannot take much risk in introducing new technologies into the company. But at the same time, at ATPCO we chose to move toward an API strategy, and we had to do that because, as you know, the airline industry itself has become very competitive.
A
We have to do it faster, and it has to be more manageable; if you want it to be more manageable, it has to be smaller. So we broke our monolith down into a microservices architecture, which seems to be very typical these days. How did we do it? Of course, we followed the principles of microservices. We did the domain modeling, and we learned that it was not easy. We rewired ourselves in terms of our business thinking and got the change through somehow, even though it was difficult.
A
We got there, and one of the takeaways was that the result is more structured, organized, and easy to maintain. In many companies you can typically see that in their lower environments, or when they are tinkering or playing, they want to go with microservices; but when they want to build a really good, solid solution, they go for the monolith.
A
I think that's because it's easier to manage. But when you get into microservices, as you can see, your services have exploded and you have a lot of them scattered everywhere. With microservices you also get a lot of options for how you are going to build them, deploy them, and architect them, and you have to draw the line somewhere. In our case, we go with VMs, so we take a VM.
A
Do we slice each service into its own VM? Or do we put all the services together in one VM and replicate it into multiple instances? Or do we take a VM and pick which services go where? Although this gives us multiple options, it also brings in a huge amount of complexity. So the nightmare begins, right? How exactly are we going to build it, deploy it, architect it, and maintain it? What we learned from all of this is: if not designed and architected properly, developing and deploying microservices, managing their infrastructure, maintaining them, monitoring them, and troubleshooting them can all be challenges, simply because microservices are distributed. What do I mean by that? They are scattered, they are everywhere, and soon they can go out of control and become harder to manage. Veerendra, can you please tell us what our challenges were and how you managed them?
B
How do I even decide? Should I take one service and deploy it to one VM, then take the second service and deploy it to a second VM? When I take the third service, should I deploy it to the first VM, to the second VM, or should I create a third VM? How do I decide all these things, and how do I keep track of which service is running in which environment? What is the right way to do that?
B
We introduced the gateway and we introduced the Spring Cloud technology stack. We have the Eureka Netflix server and we have the admin server, and all of our services are registered there. Yes, we solved the problems of service discovery and load balancing, but at the same time we introduced additional components: we have the gateway catering to external requests to the services, we have Eureka, which provides service discovery, and we have Netflix Ribbon, which does the load balancing between the services.
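To make that stack concrete, here is a minimal sketch of the kind of configuration such a setup involves. This is an illustrative Spring Boot application.yml for a service registering with a Eureka server; the service name, port, and Eureka host are hypothetical, not ATPCO's actual settings:

```yaml
# Hypothetical application.yml for a Spring Cloud service that
# registers itself with Eureka for service discovery.
spring:
  application:
    name: fares-service            # hypothetical service name
server:
  port: 8081
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-host:8761/eureka/   # hypothetical Eureka URL
```

Clients can then look the service up by its registered name through Eureka, with Ribbon balancing requests across its instances.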
B
Sure, a developer can whip up something easily using microservices, but to implement microservices at an enterprise level, it needs to be done right. What do I mean by that? We have to provide all of these capabilities to all of our services to get the real benefits out of microservices. Sure, we can do it all by ourselves, but it takes time.
C
Hi, I'm Mukul Nadkarni. I'm a magician... sorry, a developer at ATPCO, and I have a confession: this is the first time I am presenting in a public setting. When I came up here, somebody told me, "Pretend you are Superman on the stage," so we will see how that goes. I'm here to talk about continuous integration and continuous delivery and deployment.
C
Now that we had decided on OpenShift as our enterprise solution, our focus turned to automation and how we could leverage OpenShift for continuous integration and continuous deployment. I read somewhere that if you say the words "CI/CD," within five seconds somebody will say "Jenkins," and that was absolutely true.
C
The
first
part
in
our
mind
was
Jenkins
and
we
actually
had
an
external
Jenkins
in
place.
So
we
wanted
to
take
that
to
the
next
level.
So
we
wanted
to
see
how
open
shift
and
Jenkins
will
work
together.
So
for
that
we
had
two
options.
The
first
option
was
having
the
Jenkins
master
and
the
Jenkins
slave
inside
of
openshift.
C
The
second
option
was
having
the
Jenkins
master
outside
token
shift
and
provisioning
the
Jenkins
claves
instead
of
open
shift
and
we
chose
option
2,
and
that
was
because
we
already
had
an
existing
system
Jenkins
system
that
was
catering
to
our
application
bills
and
we
did
not
want
to
disrupt
a
setting.
But
then
how
would
the
Jenkins
slaves
inside
of
open
ships
talk
to
this
external
jenkins?
C
So
for
that
again
we
had
two
options
at
that
point
of
time,
so
we
had
a
docker
form
plug-in,
and
this
talk,
Assam
plug-in,
enabled
the
gentle
slaves
inside
of
open
shipped
to
to
auto,
discover
the
Jenkins
master
and
join
it
automatically.
The
second
option
was
Cuban.
It
is
cloud
plugin
and
that
enabled
us
to
dynamically
provision
the
gentle
slaves
inside
of
openshift.
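As a rough illustration of how the Kubernetes cloud plugin is typically used (this is a generic sketch, not ATPCO's actual pipeline; the label and build image are hypothetical), a Jenkins pipeline can declare a pod template and run its build steps inside a short-lived slave pod:

```groovy
// Hypothetical Jenkins pipeline sketch using the Kubernetes plugin.
// The slave pod exists only for the duration of the job and is
// torn down automatically when the job finishes.
podTemplate(label: 'maven-build', containers: [
    containerTemplate(name: 'maven',
                      image: 'maven:3-jdk-8',   // hypothetical build image
                      ttyEnabled: true,
                      command: 'cat')
]) {
    node('maven-build') {
        stage('Build') {
            container('maven') {
                sh 'mvn -B clean package'
            }
        }
    }
}
```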
C
This provides scalability, as well as the ability to create pods on the fly for specific jobs and specific runtimes. We went with option 2 because, in contrast to the Swarm plugin, the Kubernetes cloud plugin gives Jenkins the ability to spin up a slave pod on demand, execute the job on that particular slave, and, once the job is completed, bring the slave down. That was exactly what we were looking for, because we did not want to allocate any resources permanently for these Jenkins slaves.
C
So now we had the whole setup in place: we had OpenShift, we had Jenkins, we had the Kubernetes cloud plugin, and we had our Nexus repositories and our git repositories. Our focus then turned to continuous integration. For continuous integration we made use of an OpenShift concept called source-to-image. Source-to-image, as the name implies, basically takes your application source and converts it into an executable Docker image, which you can then deploy later in OpenShift.
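A minimal sketch of what a source-to-image build definition looks like in OpenShift 3.x (the names and the repository URL here are hypothetical; the talk does not show ATPCO's actual build configuration):

```yaml
# Hypothetical BuildConfig using the source-to-image (S2I) strategy:
# OpenShift pulls the source, feeds it through a builder image, and
# produces a runnable application image.
apiVersion: v1
kind: BuildConfig
metadata:
  name: fares-service                  # hypothetical name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/acme/fares-service.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: java-builder:latest      # hypothetical builder image
  output:
    to:
      kind: ImageStreamTag
      name: fares-service:latest
```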
C
The builder image would have the custom configurations required for ATPCO, so we can reuse it at the enterprise level and ship it across to projects for them to use. So now we have the builder in place. Let me walk you through continuous integration with OpenShift. It all begins when a developer commits code to the git repository; using webhooks, the build is initiated automatically, and the source gets compiled.
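The webhook part of that flow can be expressed in the BuildConfig itself; a sketch (the secret value is a placeholder) of the trigger section that lets a git push start the build:

```yaml
# Hypothetical triggers section of an OpenShift BuildConfig:
# a push to the repository fires the webhook, which starts the build.
triggers:
  - type: GitHub
    github:
      secret: replace-with-webhook-secret   # placeholder secret
  - type: ConfigChange
```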
C
After that, we turned our focus to continuous delivery and continuous deployment. For that, we made use of another OpenShift strategy, called the binary deployment strategy. What is that? It is nothing but building your application outside of OpenShift and then using the resulting war or jar to deploy it in OpenShift at a later stage.
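A sketch of how a binary deployment can look with the `oc` CLI, assuming a builder image stream and artifact names that are purely illustrative (the exact commands ATPCO uses are not shown in the talk):

```shell
# Create a binary build: no git source; the artifact is supplied at build time.
oc new-build java-builder:latest --name=fares-service --binary=true

# Feed in a jar built outside OpenShift (e.g. by Jenkins); the result
# is an application image that can be deployed later.
oc start-build fares-service --from-file=target/fares-service.jar --follow
```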
C
To explain that, let me walk you through this diagram again. When we are ready for a release, when we have identified our release candidate, the project maintainer triggers a release pipeline. That spawns off a Jenkins executor, and since we implement git flow, it does all the relevant tagging of the repository and versioning and then creates the release branch. We check out the release branch and use that code to build and run the unit tests.
C
The build then uses the war or jar that was pushed to Nexus, creates our output application image stream, and pushes it back to Nexus. Once it is in Nexus, it is available to be deployed to all the environments below: we deploy it into dev, move it across the pre-production environments, QA and UAT, and then to prod. In prod we have configured a rolling deployment.
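The rolling behavior is configured on the deployment itself; an illustrative strategy section of an OpenShift DeploymentConfig (the percentages are example values, not necessarily ATPCO's):

```yaml
# Hypothetical DeploymentConfig strategy section: a Rolling strategy
# replaces pods gradually, so some replicas keep serving traffic
# while new ones start up, giving a zero-downtime deploy.
strategy:
  type: Rolling
  rollingParams:
    maxUnavailable: 25%   # at most a quarter of the pods down at once
    maxSurge: 25%         # at most a quarter extra pods during rollout
    timeoutSeconds: 600
```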
C
That means that when we deploy to prod, there is no outage for the users. So that, in effect, is how we achieve CI and CD with OpenShift at ATPCO today. Now, I have literally compressed a year and a half's worth of work into six or seven minutes, so if you have any questions or clarifications, you can reach out to us; all of us will be here at the conference today and for the next three days of the Red Hat Summit. And so now we have achieved continuous integration and continuous delivery and deployment.
A
Why not make use of what OpenShift offers next: the ability to autoscale. Meaning, when there is an increase in the volume of requests, the number of pods it spins up goes up, and when the volume goes down, it scales back down. But as it keeps spinning up pods, at some point you are going to run out of nodes, the physical resources, and when a new node has to be added to the cluster, that is not automated.
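The pod-level autoscaling described above can be enabled with a single command; a hypothetical example (the deployment name and thresholds are illustrative):

```shell
# Keep between 2 and 10 replicas of the deployment, scaling on CPU usage.
oc autoscale dc/fares-service --min=2 --max=10 --cpu-percent=80
```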
A
At
that
point
we
have
to
figure
out
a
VM
and
then
probation
it
into
openshift,
which
is
kind
of
the
currently
is
a
manual
process
in
future.
What
you
want
to
do
is
we
want
to
take
the
benefit
of
the
other
side.
Go
ahead,
automate
we
are
starting,
but
we
are
early
on
our
roadmap
in
our
roadmap.
We
want
to
automate
our
infrastructure
part
as
well,
so
that
we
are
able
to
create
a
VM
user
izing
around
table
scripts
in
the
playbook.
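As a sketch of what that automation might look like (purely illustrative; ATPCO's actual playbooks are not shown), an Ansible task list could prepare a freshly created VM before it is joined to the cluster:

```yaml
# Hypothetical Ansible playbook fragment: prepare a new VM so it can
# be added to the OpenShift cluster as a node. Host group and package
# names are illustrative.
- hosts: new_nodes
  become: true
  tasks:
    - name: Install Docker on the new node
      yum:
        name: docker
        state: present
    - name: Ensure Docker is running and enabled
      service:
        name: docker
        state: started
        enabled: true
```

The cluster scale-up itself would then run the OpenShift installer's scale-up playbook against this inventory.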
A
Just to summarize what we have been saying all along: why OpenShift at ATPCO? If you are like us and believe in microservices, CI/CD, and DevOps, it is really very difficult to take those on and implement them, because as developers it's easy in a development environment, but when you take it on as an enterprise solution, the challenges begin. For us, OpenShift helped get us there quickly, past the many steps that would otherwise be involved in a company like ours.
A
So
this
helped
us-
and
it
made
us
to
carry
on
faster,
to
prove
to
our
company
that
indeed,
these
things
are
possible
for
us
and
we
can
think
micro
services
as
a
better
solution
for
an
API
strategy
to
be
a
better
solution
for
our
company.
Of
course,
it
is
hard
for
red
and
blue
to
work
together,
but
in
our
case
it
worked
out.
Thank
you.