Description
In this session from the OpenShift Commons Gathering in Seattle on November 7, 2016, Mateus Caruccio from Getup Cloud talks about hosting OpenShift on Azure and the road ahead for this Brazil-based PaaS provider. https://getupcloud.com/
B
Thank you. As I said, my name is Mateus Caruccio; you can call me Matt for short. I am the CTO of Getup Cloud, and I'm here to talk a little bit about how Getup Cloud hosts its OpenShift offering on Azure. My partner, Diogo Goebel, raise your hand please, Diogo. We are both here for the gathering and for KubeCon, so if you want to talk to us, just reach out. As you can guess from my bad English, this is my very first talk in English for an English-speaking audience, so I wish I had a French accent.
So we have this public PaaS offering based on OpenShift Origin. We also do on-premise deployments for customers, and consulting, mostly helping customers understand and embrace this cloud-native thing. We focus on marketing agencies, mainly running digital campaigns for big brands, and we chose OpenShift for the same good reasons we all know about already.
Before diving into the technical details, I'm going to tell you about a case, a huge success, which was our very first case running entirely inside OpenShift v3. It is a digital campaign called Heineken Up on the Roof: a free VIP party on the rooftop of the Martinelli Building, an iconic skyscraper in São Paulo, with an amazing view of the city. The application was very simple: they built a Facebook app to give away 400 free passes to people from Facebook. Every Monday, exactly at 3pm, the list is opened, and all you need to do is register yourself with your name and email in a simple web form. It seems easy, but don't fool yourself: where there is free beer there is a huge crowd, and no room for freezing software, by the way.
The tickets are gone in nine seconds, and every week there are more and more crazy teenagers trying their luck. So this is a classic example where you need speed when scaling resources, and the new architecture of v3 made it not only possible, but really simple to do.
All our client had to do was launch new containers a few minutes before the traffic spike and shrink back to one or two replicas two minutes after, something that was impossible for them to do before.
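(A minimal sketch of that workflow, not shown in the talk; "webapp" is a hypothetical deployment configuration name:)

    # Scale out a few minutes before the Monday 3pm spike
    oc scale dc/webapp --replicas=40
    # ...and shrink back once the 400 passes are gone
    oc scale dc/webapp --replicas=2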
So, a little bit of our history. We started with a v1 beta around November 2012 and kept it until August 2015. During this time our only offering was based on OpenShift v2, which was born on AWS. As OpenShift v3 was gaining territory, we decided it was time to dig into it and actually join the container revolution.
B
This
technology's
new
technology
inspired
us
to
reach
for
a
new
provider,
so
start
to
look
for
a
cloud
provider
that
could
be
technology
compatible
with
this
new
platform,
easy
to
deploy,
offering
tools
for
automation
and
fast
recovery,
reliable
with
great
and
affordable
super
plans
and
economically
viable,
so
that
we
could
pay
on
our
own
local
currency.
The
first
v3
deployment
happens
by
the
end
of
august
2015
and
by
mid-november
we
made
our
first
private
beta
release
with
some
hand,
picket
customers
heineken
being
the
first
running
under
our
care.
B
It's
the
real
experience,
no
limitations
on
any
aspect
of
the
system.
So
why
a
sure
everybody
asks
me
why
a
sure
it
will
not
be
the
first
to
enter
in
the
Brazilian
market
from
day.
One
microsoft
has
shown
commitment
with
startups
and
developer.
Community
first
starts
like
charging
in
the
local
chorus
to
support
for
creation
of
new
services
and
applications.
Availability sets work by automatically spreading the machines across different fault domains, each one with its own power source and network switch. In case of maintenance or a major update, update domains guarantee that only a portion of any given service will be unavailable: only one update domain will reboot at a time. We don't even have to think about where to allocate the hosts, because Azure does it automatically for us.
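(As an illustration only, here is roughly what an availability set looks like with today's Azure CLI; the talk predates this syntax, and the resource names are made up:)

    # Fault domains give separate power/switch; update domains bound
    # how much of the set reboots during planned maintenance
    az vm availability-set create \
        --resource-group getup-rg --name node-set \
        --platform-fault-domain-count 3 \
        --platform-update-domain-count 5
    # VMs created into the set are spread across domains automatically
    az vm create --resource-group getup-rg --name os-node-1 \
        --image RHEL --availability-set node-set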
OpenShift v3 brought a whole new concept when we talk about deployments: master and etcd are made to last, while any other component is simply disposable.
B
That
means
you
need
to
keep
constant
backup
of
those
two
guys,
even
if
they
are
redundant,
it's
simply
to
risk
to
lose,
let's
say
etcd
and
have
the
entire
cluster
vanished.
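(A rough sketch of such a backup with the etcd2-era tooling; the paths are illustrative:)

    # Snapshot the etcd v2 data directory
    etcdctl backup --data-dir /var/lib/etcd \
        --backup-dir /var/backups/etcd-$(date +%F)
    # The master config is flat files, so a plain archive is enough
    tar czf /var/backups/master-$(date +%F).tar.gz /etc/origin/master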
On the other hand, if a node goes down, there is no need for alarm. All you need to do is create a new node and join it to the cluster. Piece of cake.
Traffic Manager is a DNS-based load balancer that provides automatic failover by monitoring endpoints, and traffic is routed evenly across each set of endpoints. Also, the weighted pattern allows you to grow the routing layer in small steps, by simply adding routers with smaller weights; you don't need to go big in a single shot. There is only one drawback so far: it takes a little while, up to 30 seconds, to invalidate the DNS cache. As soon as an endpoint goes down, the remaining routers take over the traffic.
B
We
don't
use
the
native
AJ
solution
pacemaker
because
it
doesn't
work
in
the
azure
cloud
yet
because
there
is
no
mood
cast
availability
at
least
didn't
exist
when
we
implemented
at
this
and
jump
nodes.
Are
the
ss-18
pro
and
points
in
the
network?
They
are
using.
Most
of
our
administrative
tasks
where
men
are
intervention
is
required.
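(For example, reaching a private node through the jump host with OpenSSH's ProxyJump; the host names here are made up:)

    ssh -J admin@jump.getupcloud.com admin@node-03.internal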
The major part of the cluster lives inside a private network. It consists of an etcd cluster with three machines, each with a data disk attached. As they are only ever accessed by the masters, it makes complete sense to hide them from the outside world. Infra nodes host the system's pods for metrics and logging, namely Hawkular, Heapster, Cassandra, Elasticsearch and Kibana. And the most important, the compute nodes that hold the user pods, are also placed on this network. We chose GlusterFS to hold the persistent volumes for the pods.
B
It's
a
centralized
system,
no
need
for
a
master,
so
we
eliminate
a
single
point
of
failure.
The
cluster
this
in
the
entire
cluster
lives
inside
it
sound
subnet
at
first
we
when
we
implemented
this,
a
fluster
ssds
was
available
in
Brazil,
but
we
figure
out
how
s
they
could
be
used.
They
were
out
that
that
acidic
could
be
used
to
to
hold
the
journal,
while
persistence
could
be
placed
in
the
irregular
data
disk.
So
we
have
a
good
balance
between
performance
and
cost
and
sister
opps.
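(A sketch of that layout, assuming a three-way replicated volume and XFS bricks with an external journal; device and host names are illustrative:)

    # Put the XFS journal on the SSD, data on the regular disk
    mkfs.xfs -l logdev=/dev/sdc,size=64m /dev/sdb
    # Replicate each persistent-volume brick across the three nodes
    gluster volume create pv-pool replica 3 \
        gluster-1:/bricks/pv gluster-2:/bricks/pv gluster-3:/bricks/pv
    gluster volume start pv-pool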
Suppose something goes down or presents degraded performance: a ticket is created and assigned to someone from the operations team. If that person can't respond in a reasonable time, the ticket is quickly escalated until it reaches my own phone and wakes me up in the middle of the night. Not that this happens anymore.
So when there is a need to expand the cluster or replace a component, we perform two simple steps. First, we trigger Azure Resource Manager to create a new host; it installs all the dependencies and places the machine in the right subnet. Then we run an Ansible playbook, for example the playbook for node scale-up, so it can finish the configuration, and we are done. We plan to automate at least part of this process in the near future, and we are looking forward to it.
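(The two steps, sketched with hypothetical names; the scale-up playbook path is the one shipped with the 3.x openshift-ansible tree:)

    # 1) Resource Manager creates the VM from our template
    az group deployment create --resource-group getup-rg \
        --template-file node.json --parameters nodeName=os-node-7
    # 2) openshift-ansible finishes the node configuration
    ansible-playbook -i inventory/hosts \
        playbooks/byo/openshift-node/scaleup.yml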
B
We
follow
a
very
stage
for
procedure.
We
start
by
creating
and
stage
infrastructure,
which
is
a
mirror
from
the
current
deployment
version
using
research
manager
and
ansible.
We
apply
the
standard
per
shift,
update
process
ship
it
with
the
playbooks
we
test
and
validate
both
automatically
and
manually
and
fix
them
sensitive
backs
to
the
upstream.
If
any
anything's
needs
to
be
fix
it,
of
course,
when
everything
is
fine,
it's
okay,
Ronnie!
Well,
we
deploy
in
production
using
the
blue-green
model.
We
start
by
applying
update
play.
We
start
by
updating
playbooks
to
master
etcd.
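(For instance, the standard upgrade playbook for the 3.x series looks roughly like this; the exact path varies by version:)

    ansible-playbook -i inventory/hosts \
        playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml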
So, our first pull request contribution allows you to use Azure Blob Storage as a Docker registry backend. As you may know, the registry is where all user images are stored, so it's very, very important to keep it scalable and highly available. All you need to do is insert your storage account credentials into the registry's configuration, of course, and done.
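(A minimal sketch of that configuration using docker/distribution's environment-variable form; the values are placeholders:)

    oc env dc/docker-registry \
        REGISTRY_STORAGE=azure \
        REGISTRY_STORAGE_AZURE_ACCOUNTNAME=<account> \
        REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<key> \
        REGISTRY_STORAGE_AZURE_CONTAINER=registry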
B
This
is
what
we
get
from
this
process,
and
this
is
the
this
is
the
first
time
we
are
talking
about
it,
but
we're
going
to
America
soon
we
plan
to
launch
the
first
part
in
2017
charging
in
dollars.
Don't
worry
if
you
want
to
be
informed,
keep
in
touch
with
us
on
our
website
or
Twitter,
moot
obrigado
and
enjoy
the
conference
I'm
open
to
questions.
C
First of all, thank you so much for using Microsoft Azure; I'm from the Azure team, very happy to have you here. So, quick question: you talked about the traffic going up, and that you are predicting traffic. What are you actually using to increase the number of containers, you know, for scaling to the traffic you are getting?