From YouTube: OpenShift 3 Demo Part 6: Scaling and HA
Description
In this video, Veer Muchandi demos scaling on OpenShift 3.
NOTE: For the latest information on OpenShift 3, please visit https://enterprise.openshift.com or subscribe to the OpenShift Blog (https://blog.openshift.com).
I have a PHP application here that's running with a MySQL database. I have deployed this into a project which has two separate pods: one is the pod for the front end, which is named dbtest, and the other is for the database, which is running MySQL.
As you can see, the front-end pod, the dbtest pod, is running on a node. Let me also show you what I have in my OpenShift setup. When I do osc get nodes, you'll see that I have three nodes here. The master itself is acting as a node, and I also have node 2 and node 3; node 2 and node 3 are configured to be in the region "primary", and you have seen that the front-end pod is running on node 2.
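The node listing described above can be sketched with the OpenShift 3 beta client (osc, later renamed oc); the node names and labels below are illustrative placeholders for a three-node setup like the demo's, not output captured from the video:

```shell
# List the nodes registered with the OpenShift master (v3 beta CLI).
# Names and region labels are placeholders: a master that also schedules
# pods, plus two worker nodes labeled into the "primary" region.
osc get nodes
# Illustrative output shape:
#   NAME                 LABELS            STATUS
#   master.example.com   region=infra      Ready
#   node2.example.com    region=primary    Ready
#   node3.example.com    region=primary    Ready
```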
Now, if I try to scale up this front end by adding additional pods, OpenShift will make sure that it distributes those new pods between node 2 and node 3. The reason being: if one of the nodes goes down for some reason, then you have at least one of the other nodes still servicing this application, so the application doesn't go down. That's how OpenShift ensures high availability.
Now let us also see these pods from the command line. I have filtered the pods based on the deployment configuration of type dbtest, so you can see that there is one pod; this is the same as what you have seen on the web console. Now let's try to scale it up.
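Filtering the pod listing down to the dbtest deployment configuration might look like the following; the label key here is an assumption based on OpenShift 3 beta conventions, so check the labels on your own pods first:

```shell
# Show all pod labels to confirm which key the deployment config sets.
osc get pods

# List only the pods belonging to the dbtest deployment configuration.
# The selector key "deploymentconfig" is the assumed v3-beta label.
osc get pods -l deploymentconfig=dbtest
```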
In order to scale it up, we need to know what the replication controller is for the dbtest application. So I am doing an osc get rc to find all the replication controllers that are running. We have two different replication controllers: one is for the database pod and the other is for the front end.
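The replication-controller listing the speaker runs can be sketched as follows; the controller names in the comment are hypothetical, since the video does not spell them out:

```shell
# List the replication controllers in the current project. Expect two:
# one backing the MySQL database pod and one backing the dbtest front
# end (names like "database-1" and "dbtest-1" are hypothetical).
osc get rc
```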
This one for the front end is configured to have just one replica of this front end running at any point of time. The replication controller is the one that ensures the n number of pods configured here are running at any point of time. Now let's use this replication controller and resize it; we'll set the number of replicas to 2 instead of 1, and let's see what happens.
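Resizing the front-end replication controller could look like this. The controller name dbtest-1 is an assumption (take it from osc get rc), and note that later OpenShift and Kubernetes clients renamed the resize verb to scale:

```shell
# Set the front-end replication controller from 1 replica to 2.
# "dbtest-1" is a hypothetical controller name; on newer clients the
# equivalent command is "oc scale rc dbtest-1 --replicas=2".
osc resize --replicas=2 rc dbtest-1
```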
So, if you come back to the console now and see: it was in the pending status till now, but now it is running. There is an additional pod. This was the one that was running before; it was on node 2. Now there is an additional pod that got spun up, and this is on node 3. Just to show, I'll resize it again, to four instead of two, and we'll see quickly that there are two more additional pods being spun up, and within no time they start running.
So it's as quick as that. Two additional pods got spun up; one of them is on node 2 and the other one is on node 3. Once they come online, you'll see that there are four pods running. So this is how OpenShift scales up quickly, and it also distributes the pods that it is creating across the two nodes so that there is high availability.
Now let's try to knock off one of the pods and see what happens. I'm passing it this name, 9wrhp, and I am trying to delete that.
So let's see what happens. I deleted the pod, and now if I look at how many pods are available: I deleted the fourth one, but there is another one that got created in its place. In place of 9wrhp, there is a 3t1fm. Now, where did it come from? That's because the replication controller for dbtest ensures that four pods are available at any point of time. If you do osc get rc now, it is configured to be four, so it makes sure that four pods are always running.