From YouTube: OpenShift 3 Demo Part 7: Scaling and HA (contd)
Description
In this video, Veer Muchandi demonstrates more scaling on OpenShift 3, with a focus on how OpenShift enables high availability.
NOTE: For the latest information on OpenShift 3, please visit https://enterprise.openshift.com or subscribe to the OpenShift Blog (https://blog.openshift.com).
In this video, we will see how high availability works for scaled applications on OpenShift, even when a node suddenly goes down. In the earlier examples, we saw how scaling works and how multiple instances of an application run as different pods. We will take the same example and look at how high availability works.
First, we'll start with an application that is going to fail, because it is a single instance that is running. So let's look at the same app as before. It has a front end, and the back end is a database. Each has a single pod: the front end is running on node 2 and the database is running on node 3. We will also verify the same thing from the command line for the front end.
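The command-line verification mentioned here can be sketched roughly as below. This assumes the later `oc` client (the early betas used `osc`) and a live cluster; the `name=frontend` label selector is a hypothetical example of how the demo's pods might be labeled.

```shell
# Show every pod with its status and the node it is scheduled on.
oc get pods -o wide

# Narrow to just the front-end pods via a label selector
# (label key/value are illustrative).
oc get pods -l name=frontend -o wide
```
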
We have a single pod running for the front end, and it is running on node 2. Let's also look at what I have set up as nodes. I have three nodes here: the master itself is acting as a node, and I have node 2 and node 3. Both are in Ready state, which means they are running. Node 2 is in region primary, zone east, and node 3 is in region primary, zone west.
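The node and pod checks in the demo can be sketched with the OpenShift client. This is a sketch only: the early OpenShift 3 betas shipped the client as `osc`, later releases as `oc`, and it needs a live cluster; the label keys shown are illustrative of the region/zone setup described above.

```shell
# List all nodes and confirm they are in Ready state
# (assumes the `oc` client and an active cluster login).
oc get nodes

# Region and zone come from node labels; showing labels reveals
# e.g. region=primary,zone=east on node 2 (keys are illustrative).
oc get nodes --show-labels
```
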
And here is a 503 Service Not Available error. Since there was only one instance of this application running, as a single pod, when the node on which that pod was running went down, there was no choice: the pod went down with it, and the service is not available. That's obvious. Let's now try to scale up this application, but before that I'll start this node instance back up.
Now my pod is up and running, and the application is again up and running. Let me verify that node 2 is now back up and running, and then, if I check the number of pods, there is a single pod running on node 2 again. Now let's resize this and scale it up. I am scaling up to 4 instances, and it is resized, and now I have four instances of this application running.
So let's verify that. I have four pods now: two of these pods are on node 3 and two of them are on node 2. We have seen that before. Now, if we check whether the application is running, it is. Now let's try to knock off a node suddenly. I'll go back and shut down... I'm shutting down node 2. Now let's see what happens.
So even if one of the nodes goes down, the application continues to run without any issues. Not just that: after a little while, OpenShift auto-corrects itself. Since the replication controller is asked to run at least four pods at any point in time, it will spin up additional pods on the available node, which in this case is node 3, to make sure that the number of pods we have requested are running. This happens after a couple of minutes.
So let's check if it has already happened. Let me clear the screen and do it again. Now you can see that, since node 2 went down, the replication controller spun up two additional pods on node 3 itself. There are four pods now, and all four pods are running on node 3, because that's what's available.