Description
Cristian Roldán - PaaS @ Produban, from OpenShift Commons Gathering Buenos Aires 2018 (talk given in Spanish)
I'm Cristian Roldán, and I work at Produban. When I'm invited to give a presentation I usually make a very short introduction: Produban is a Santander Group company. We operate all of the group's IT systems, and for a few months now we have been going through a transformation: we are being unified with another Santander Group company, and surely in a few months we will become Santander Global Technology. So, well, I am going to start with my talk.
How have we gotten here over the last four years? Well, I think there are two very important, very interesting milestones. We started with this PaaS technology on OpenShift, in its version 2. We started, in principle, to experiment, to run tests, to deploy monolithic applications — applications that we had in our traditional application servers. It was good to know how these applications behaved in a PaaS environment. And then, more or less around May 2014, the first Docker release appeared.
We had already been working with an alpha version, then a beta version, and there we really had a very important change. We saw that this technology was tremendously promising, and we knew this had to be our path. We had been working with virtual machines, and creating and maintaining a virtual machine farm was tremendously expensive, involving various groups. With containers, all this management is greatly simplified.
And around August 2014 we saw that those applications we had in the web environment — very large monolithic applications — were really complex to manage in a PaaS environment, and there the issue of microservices appeared. We started playing with Spring Boot, and we realized — above all, the architecture and development areas — that this technology was very promising; it was where we had to direct our designs.
We wanted Docker, because we knew that was our way, and that was the reason why we opted, at the beginning of 2016, to provide a global PaaS service for the group. And at the beginning of 2018 — the strategic analysis is revalidated every two years — it was decided to start with version 2 of our cloud technology. So we will soon begin to deploy PaaS version 2 in the different geographies, which is basically based on OpenShift 3.9.
Well, what is PaaS version 1, and what were the requirements to design it? The requirements seem simple, but they are not simple at all. First, what was decided is that we had to design and implement a Heroku-style platform. Does anyone know Heroku?
The platform had to be container-based, and at that time the only container implementation there was, was Docker. So that's another very important requirement: it has to be container-based. The other point is self-service: we have to provide a series of self-service portals so that any developer, any bot or any application administrator, through a portal or through a client, can manage absolutely the entire life cycle of their application.
Another point that was raised is to improve the use of resources, and that is why we went for global agility. That is another of the aspects we have achieved, also oriented to DevOps: basically, everything we have done is for DevOps, that is the reality. So we have a series of tools around OpenShift that we have designed for the DevOps people, to improve the application life cycle.
Applications have to scale very easily, and the infrastructure has to be prepared to scale very easily. In version 3.0 of OpenShift all these requirements were a big challenge; with the new versions, practically all of this is solved out of the box very easily, but at that time all these requirements were really an impressive challenge for us. Moreover, the PaaS solution has to support different IaaS providers.
We wanted to speed up time to market. We wanted to renew the applications, because we have to go to other types of runtime: practically 80% of the applications are Java, but the clients were already needing other kinds of runtime. These were the requirements, and these requirements gave rise to what we have called the Global PaaS.
Mainly, then, at the IaaS level we use OpenStack, and for all the management of virtual networks we use an SDN. We also saw at that time that we had to design an agile and productive service that could compete with public providers such as Heroku. Our goal was always to do something similar, though it is impossible to really match it, because it is the reference point.
Heroku is an extraordinary service, but our goal was to get as close as possible to it. So we said: well, if we have to be like the father of this technology, then we have to design a series of value-added services to speed up development, such as application management; we are going to talk in a moment about what we have developed. There you have the architecture of our PaaS on OpenStack.
We have three tenants in OpenStack, and a series of components; each component is in its own subnet. We have a load-balancing layer here: we balance the API, the console, the application management tools; everything goes through this balancing layer. Then we have three masters. One important thing: all our PaaS regions are identical — that is to say, this architecture is identical in all regions.
Then we have a very significant number of compute nodes — we will see how many — a storage layer to provide the volume persistence services, and then a few machines that are in charge of management. This is where we have Satellite and the infrastructure monitoring tools; in short, a few machines to manage this complete cluster of the Global PaaS.
Then we also provide our clients with a monitoring solution — but for the application layer, not for the infrastructure of the PaaS. Here we use Wily Introscope. It is a proprietary tool, but in the traditional world it is a tool that has a lot of presence within the group, and it facilitates load tests and very specific tests.
We have provided this monitoring service, and all our corporate images already have the monitoring agent embedded. Then we have also installed a data lake for the PaaS. And what do we do with this data lake? Basically, we keep in it all the logs generated by our clients' containers, plus all the access logs generated by the routers.
Well, as you can see here, we have an ALM solution. We have two versions: ALM version 1 and version 2; version 2 is very focused on cloud 2.0. And I think it is interesting to mention that we do not use ImageStreams or BuildConfigs — those solutions that come out of the box — nor the Jenkins that is integrated in OpenShift.
We do not use them because here we have our own solution for building images and applications: this ALM. What it does is follow a specific plan to deploy in the PaaS — and not in one region but in several; it depends on the application administrator, who decides in which regions he deploys his application. So this whole set of tools that accompanies the life cycle of the applications is provided from this ALM, and not from within OpenShift.
If you wanted that kind of information, you had to go to CloudForms. And well, we saw that CloudForms at that time basically gave us usage reports, dashboards, this and that, but it really didn't allow us to manage the OpenShift cluster — so it really did not justify installing a complex product such as CloudForms.
What we decided instead is: look, we are going to build this module ourselves. We consult the OpenShift API through its endpoints, and we obtain the usage that all the projects make of the different resources, and other interesting things. What this module does is go to the monitoring and say: let's see this project, how many instances it has — perfect, noted. Then we also go to the registry and ask: this project, what images is it using?
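A minimal sketch of how such a usage query could look, using Ansible's `uri` module against the OpenShift REST API. The URL, token variable and project handling are illustrative assumptions, not the team's actual module:

```yaml
# Illustrative: query the OpenShift API for the pods of every project.
# `openshift_api` and `api_token` are made-up variable names.
- name: Collect per-project usage from the OpenShift API
  hosts: localhost
  gather_facts: false
  vars:
    openshift_api: "https://openshift.example.corp:8443"
  tasks:
    - name: List pods across all projects
      uri:
        url: "{{ openshift_api }}/api/v1/pods"
        headers:
          Authorization: "Bearer {{ api_token }}"
        validate_certs: false
        return_content: true
      register: pods

    - name: Show which projects (namespaces) are running pods
      debug:
        msg: "{{ pods.json['items'] | map(attribute='metadata.namespace') | list | unique }}"
```

From a result like this, counting pods and images per namespace is a matter of grouping the returned items.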
These images have costs, yes or no? We leave all of this registered. So the thing is that CloudForms did not provide that type of information, that personalized. Then, our architecture consumes services that the infrastructure gives us: we use NTP, DNS, IdM — in short, and Satellite. These are services provided to us by another group, and we have instances of them in all regions.
We also have a corporate directory: all users are registered in the corporate directory, and we also have the groups — the definitions of groups stored in LDAP we synchronize towards OpenShift. And finally, let me mention the registry block. By the way, the Red Hat registry is excellent; I did want to comment on it, because we have tried it, although in the end we have not used it. It is a Docker registry that, if you have the opportunity to use it, the truth is that it works tremendously well.
Well, for us, when this architecture was designed, everything had to be open source. That is a requirement that I did not put in the presentation, but the direction pushed us towards it: everything we do has to be open source, and everything we do has to be published. So what we decided here, at the registry level, is to use Harbor. I don't know if anyone knows it; Harbor comes from VMware, and it is an open source project for the management of images.
Just two days ago, the Cloud Native Computing Foundation adopted Harbor, so it now belongs to the foundation. And well, OpenShift at first did not provide a Docker registry solution — I don't remember if it was from version 3.3 on that it began to provide this service — but in any case, the Docker registry that it provided did not cover all our needs. We have ten OpenShift regions, so the corporate images have to be synchronized between all the regions, and that replication is something the registry we chose had integrated.
Well, Harbor provided us with these capabilities. In turn, we have clients who use the registry but who are not clients of the PaaS: what they do is deploy containers in some network — who knows where — that is not managed by us and is not in OpenShift. So we also have to provide service to those clients, and for that reason we decided to use Harbor, and we will continue to use it in future versions.
So Harbor is, logically for the moment, our Docker container registry; later, when we talk about the future, we will see what other possibilities we have. As for monitoring, besides Wily Introscope there is Prometheus. What we use Prometheus for at this moment is to monitor the etcd of the cluster: it is essential to know, second by second, the behavior of etcd, because etcd is basically the heart of the cluster. So for that, what we use is Prometheus.
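A minimal Prometheus scrape configuration for etcd might look like the following; host names and certificate paths are illustrative assumptions, not the actual setup:

```yaml
# prometheus.yml fragment — illustrative etcd scrape job.
scrape_configs:
  - job_name: "etcd"
    scheme: https
    tls_config:
      # Client certificates are required to talk to etcd (paths made up).
      ca_file: /etc/prometheus/certs/etcd-ca.crt
      cert_file: /etc/prometheus/certs/etcd-client.crt
      key_file: /etc/prometheus/certs/etcd-client.key
    static_configs:
      - targets:
          # The etcd endpoints on the three masters of a region.
          - master1.example.corp:2379
          - master2.example.corp:2379
          - master3.example.corp:2379
```

With a job like this, second-by-second etcd health (leader changes, proposal latency, fsync duration) can be graphed and alerted on.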
Well, where have we deployed version 1 of the PaaS? We have it in different geographies, in different countries, in different data centers: in the UK, in Spain, in Brazil and in Mexico. All these PaaS regions are interconnected by a global network of the group, and these are all the regions where we are; in Spain we have...
A
In
virtual
machines,
but
every
day
we
are
adding
between
one
and
two
new
virtual
machines
in
all
regions
and
from
here
from
these
to
these
regions,
as
we
provide
services
to
The,
pre
and
production
development
environments
are
very
good..
What
else
are
these
value-added
services
that
I
have
mentioned
is
to
provide
a
service
up
to
par
or
something
similar
to
Heroku?.
What
we
had
to
do
is
develop
a
series
of
services,
so
I
am
going
to
name
some
of
these
services
and
what
is
it
about
and
why?
Why
are
they
important?
First, the GPL: we have what we have called the GPL, the Global PaaS Log. Basically, this is a centralized service for managing the logs of our containers: all the standard output and standard error logs that the containers generate, we send through Flume to the data lake, and then, with a dashboard, they can be consulted in a multi-tenant mode.
Then there is the way we manage ourselves as a service provider within the group. What we did is develop a status page, in the style of Amazon's or as Azure does. This is public: everyone, customers and non-customers, can consult this status page and see, minute by minute, the status of the OpenShift cluster. So if an error occurs in an application...
...our speech finally is: look, there is an error in an application, and all of this is green — so the problem is in the application itself, and it ends there; the ticket is closed. This console has helped us enormously to reduce and speed up the management of tickets, and it really is like that: it has an accuracy of almost 99 percent, that is to say, it can fail one percent of the time.
It can happen that an application has a problem and here everything is marked green, but really, in 99% of the cases, if an application has a problem, it is generally in the application itself or in some back-end that the application uses. What else have we developed? A client.
We have extended oc. We needed certain functionalities, and we pushed the community so that oc would have a plugin-based architecture, and now you can inject plugins into oc for certain functionalities. That is something we asked for — it is something we learned from Cloud Foundry: in that sense, the cf client is very extensible, you can add plugins to it. And well, since a year ago oc has had this ability to be extended; but at the time we built ours, it did not have those capabilities.
What else do we have? We have usage reports in the client: with this client, the administrator of a project can see how much memory it is consuming, how much money that project is costing, and certain features that we do not have with oc out of the box — what we do is provide them through this client.
Then we have another service, which is the Global PaaS Access. What we store here is the access log: every request that goes through the router is registered in this service and can be consulted through a Kibana. Another product or service that we have developed is the Global PaaS Operation Framework.
Basically, when we defined that it had to be a global PaaS, we realized that managing this PaaS in the traditional way is impossible, and that we had to automate it 100%. It had to be completely automated, and the regions had to be identical. So what we did was design this tool; later we are going to talk about what this tool is.
Well, we have also developed a portal so that certain authorized people, who already have a cost line assigned for a project, can create projects in any of the regions. And finally — well, one penultimate service — we provide a catalog of corporate images.
We have more or less 40 images that are certified, that are signed, on which we have done the vulnerability scans — in short, we maintain them, we publish them, we update them, and they are available in all regions. And then there is the subject of the last service, which is the Global PaaS Watch...
...which, as I mentioned before, is based on InfluxDB, Grafana and Telegraf. What we did is fork the Telegraf project and transform it into multi-tenant — Telegraf is not multi-tenant by default — so what we did was take the Docker input plugin for the metrics and transform it into multi-tenant, and then, through a Grafana, with a specific token, you connect and get the Docker metrics: all the Docker metrics of your project.
Okay — you can't see the metrics of another project, only the ones you are authorized to see. And these are all the services that we have developed for the Global PaaS, which are tremendously useful for absolutely all of us who are behind the PaaS: not only the DevOps people, but also the engineering and operations people. Very well.
Let's continue: what is the level of adoption that we have today? At the pod level we have 35,000 — surely today we should be around 36,000, because it grows every day, on the order of a hundred pods. In projects, we have 4,153 in total: development projects, this many; of pre-production, 1,480; and of production we have 1,426. One interesting thing: we have critical applications deployed in the PaaS. I cannot say which ones they are, but yes, there are critical applications. And then we have another type of project that is quite interesting, which are the temporary ones.
We saw at the beginning that speeding up the adoption of this technology — moving an entire group towards it — is a huge task; it's not easy to convince people. So what we did is create temporary projects: you come to the Global PaaS portal and, by completing two or three pieces of information, any employee of the group, any authenticated user, can create a project. It doesn't need any cost line.
With no authorization from anyone, you can create it, and it is available for 15 days. This service has really allowed many groups and many teams within the Santander Group to come to this technology. It was really very useful to have this flexibility of being able to create a project in 10 seconds and already have the ability to deploy applications.
Another interesting point is: good, and how do we manage all this? Because it's easy to say it here and comment on it, but managing the day-to-day of all these services is quite complex. We use JIRA: we define all the tasks in JIRA, and in turn the support we also give through it. Another thing we have introduced is that absolutely everything is stored in Git — we do have tools for collaborative use, but for us Git is essential and everything is there, including the documentation.
And then, to technically manage the ten regions and the thousand-odd servers, we have designed this tool, the Global PaaS Operation Framework — which has nothing to do with the operation framework that was just commented on in the previous talk. And what is this operation framework? Basically, what we do is an implementation of infrastructure as code. A case in point: interestingly, a little while ago we had to reinstall...
...the entire region of Brazil, and it took approximately five hours to reinstall all the OpenShift worker nodes — a hundred-odd nodes — and that was done thanks to this tool. Doing a hundred-odd nodes manually would be crazy; it would take us many weeks. Well, and this tool is based 100 percent on Ansible. At the beginning...
...we had Puppet, and later, when we did an analysis of where the market was going — at that time Red Hat had just acquired Ansible — we said: come on, let's go for Ansible, because this is the future. And we have migrated everything we had on Puppet to this solution, which supports several IaaS providers.
Then we have a couple of bastion hosts in all the regions — two in each region, so in case one falls we have the other — and from there Ansible can be executed. And also, from a single control master we control all ten regions of the PaaS.
An interesting thing — the question always is: and why have you used Jenkins and not Ansible Tower? Basically, this is the secret: the Jenkins pipeline. Whoever knows it knows that the UI in front of Jenkins is not brilliant, but it is very flexible. And our playbooks, which we are going to talk about in the next slide, are still very, very complex. For example...
...when we add a node, we are not just left with the task of provisioning the machine in the IaaS and that's it. No — not just provisioning: one, add it to the OpenShift cluster, and two, certify it. When a node arrives in production, I have to be 100% sure that this node is not going to generate any type of problem, and we have to certify it — hence the execution of all those playbooks.
Also, all the machines have an assigned role within Ansible, and in turn we make use of the metadata: all the definitions of the assets are stored in metadata, and from this metadata we generate the dynamic inventories. This is how we really manage it, and this has saved our lives in operations for the management of those ten regions.
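The pattern described — per-machine roles plus metadata-driven dynamic inventories — can be sketched like this; the group and role names are hypothetical, the point being that nothing is hard-coded per region:

```yaml
# Illustrative site playbook: hosts are selected by groups that a dynamic
# inventory derives from the IaaS instance metadata (e.g. a "role" key),
# so the same playbook runs unchanged against any of the ten regions.
- name: Apply the node role to every machine tagged as a PaaS node
  hosts: paas_nodes          # group generated from instance metadata
  become: true
  roles:
    - paas_node              # hypothetical role name

- name: Apply the master role
  hosts: paas_masters
  become: true
  roles:
    - paas_master
```

Run against a region by pointing `ansible-playbook -i` at the dynamic inventory script for that region's OpenStack tenant.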
The operations group is only five people: with five people we manage those ten regions and a thousand-plus machines, and that is thanks to this tool. My hope is to one day see some of these functionalities in the product. Well, another interesting point that has allowed us to be much more agile and more productive is the issue of the golden images.
So what do we do? We have golden images defined: a specific one for the node, one for the master, and one for a base RHEL. And what do we use the base RHEL one for? For example, the balancer machine and the registry machine are a base RHEL. So we have all the dependencies in these golden images — we store all the dependencies, all the best practices, everything we have learned. And what we do is generate a new golden image...
A
and
each
time
we
have
to
update
the
path
that
we
are
going
to
talk
about
later.
As
in
our
update
process,,
what
we
do
is
generate
a
new
golden
and
night.
So,
each
time
we
create
one
plus
a
new
machine,
we
say
with
which
version
of
golden
and
match.
We
deploy
this
machine
and
we
do
not
touch
anything
in
the
machine
with
we
enter
it.
When
we
deploy
the
good,
well,
that
time
we
never
enter,.
We
never
have
to
do
any
manual
tasks,
because
everything
is
in
the
lin
watch
goal,.
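The pinned golden-image pattern can be sketched with Ansible's OpenStack `os_server` module; the image, flavor and network names below are made up for illustration:

```yaml
# Illustrative: every machine is booted from an explicit golden-image
# version and is never modified by hand afterwards.
- name: Create a new node from a specific golden image
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Boot the instance in OpenStack
      os_server:
        name: "node-042"
        image: "golden-node-v39.27"   # pinned golden-image version (made up)
        flavor: "paas.node.large"
        network: "paas-nodes-subnet"
        meta:
          role: "paas_node"           # metadata later consumed by the dynamic inventory
        state: present
```

Updating a machine then means generating a new golden image and recreating the instance from it, never patching it in place.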
We arrived at this concept of using golden images almost in parallel with the OpenShift people. Before, we did all the deployment on the fly — that is, we deployed a small seed and then told it: install this, deploy this and that and the other. And what we saw is that the version of Satellite we had was not enough for that; there is a recommendation later on that talks about the subject.
There are no more pets. In the traditional world each machine is sacred; here, each machine, if it does not work and cannot be fixed in five minutes, gets deleted. That is the management change: we have moved to a management based on cattle. Basically, when a machine does not work, we try to repair it; if we can't, we recreate it. And now we are going to see some repair playbooks.
Another important point — we will talk about updates later — is that when we designed this, we designed it thinking of it as a service, as a product, like the public cloud services. We said: we have to have the ability to update the PaaS, or any component of the PaaS, at critical times of use, at peak times. We have to be able to, and we have designed the entire PaaS based on these premises. Then, the part of the routers...
What we saw is that everything — all the external requests — enters through the router, so the router is a critical piece in this ecosystem. So what we said is: good, we are going to give the routers a blue-green deployment capacity. And this blue-green deployment has saved my life.
When a router is a bit troubled, or it has crossed such-and-such a threshold, another thing we have advanced is repair commands from the Slack channel. We say "repair me this", and what that command does is go to Jenkins and tell it: run such-and-such a repair playbook. And this has allowed us to be much more agile.
Well, these are the playbooks that we have had to develop — my hope, I repeat it again, is that one day we will see this series of services within the product. We have different categories: host, monitoring, big data, storage, rapid utilities. We have, for example: add a node, activate and deactivate it, apply a specific role to a machine, delete it — okay — and certify the node. This last one is a super complex playbook, because what it does is deploy a whole project on that node.
The node is created with a specific label, so it doesn't become productive yet; on that node we deploy a series of certifying applications. In turn, one of these applications — one of those deployment configs — uses persistent volumes, and it also verifies more things: that the persistent volume connects, it hits different points to check connectivity issues, it checks that etcd works, and if everything goes well, that new node automatically becomes productive.
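The certification flow just described might look roughly like this; the template, project and label names are illustrative, not the team's actual playbook:

```yaml
# Illustrative certification of a freshly added, not-yet-schedulable node.
- name: Certify a new node before making it productive
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Deploy the certification apps pinned to the new node
      shell: >
        oc process -f certify-app-template.yml
        -p NODE_SELECTOR="cert={{ new_node }}" | oc apply -n cert-project -f -

    - name: Wait until every certification pod reports Running
      shell: oc get pods -n cert-project -o jsonpath='{.items[*].status.phase}'
      register: phases
      until: phases.stdout.split() | unique == ['Running']
      retries: 30
      delay: 20

    - name: Mark the node schedulable (it becomes productive)
      shell: oc adm manage-node {{ new_node }} --schedulable=true
```

The real playbook also exercises persistent volumes and etcd connectivity before the final step, as described above.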
Another issue that is essential for us is patching: for risk reasons, we are forced, every month, to patch absolutely all the machines in the group. So, well, what we do here is handle it at the operating system level in such a way that the applications do not perceive it — and then we are going to see, in the update process, how it is done. Then there is the repair of the node, for many of our nodes.
Right now we are using devicemapper, and devicemapper... sometimes it wants to take vacations; we don't know for what reason. But what happens is that it can have a type of problem, and what we have to do, basically — the repair — consists of stopping the Docker daemon, deleting absolutely all the devicemapper devices, and deleting all the images that are cached on that node.
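The repair just described — stop Docker, wipe the devicemapper storage and the image cache — can be sketched as a short playbook. This is a simplified sketch using Docker's default paths, not the team's actual playbook:

```yaml
# Illustrative node repair: reset Docker's devicemapper storage.
# Everything cached on the node is lost; images are re-pulled on demand.
- name: Repair a node with corrupted devicemapper storage
  hosts: broken_node
  become: true
  tasks:
    - name: Stop the Docker daemon
      service:
        name: docker
        state: stopped

    - name: Delete the devicemapper devices and all cached images
      file:
        path: /var/lib/docker     # thin-pool metadata and image cache live here
        state: absent

    - name: Start Docker again (storage is re-initialized empty)
      service:
        name: docker
        state: started
```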
After that we have more interesting things: for example, the synchronization with Satellite — subscribing a node, labeling it, and so on. In short, we have a few more playbooks, and one of the most interesting is the issue of the router.
That
I
have
commented
that
we
have
defined
a
critical
component
in
this
solution.
Is
the
router
is
always
deployed
automatically
nothing
manual
and
to
minimize
errors.
Every
router
has
to
be
deployed
through
either
kings
or
directly
typing.
A
The
command
guide
in
anzio
and
one
thing
is
interesting-
is
the
traffic
switch
once
we
certify
that
the
router
that
we
have
deployed
once
we
certify
that
it
works
correctly,
that
it
has
no
problems
that
the
logs
are
not
strange
things?
What
we
do
is
we
move
all
the
traffic
from
the
old
router
to
the
new
router,
and
we
do
this
in
time
rush
hour.
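A simplified sketch of that blue-green router switch, using standard `oc` commands; the router names and the health check are illustrative:

```yaml
# Illustrative blue-green switch between two router deployments.
- name: Switch traffic from the old (blue) router to the new (green) one
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Scale up the new (green) router
      shell: oc scale dc/router-green --replicas=3 -n default

    - name: Verify the green router pods are healthy before switching
      shell: oc get pods -n default -l router=green -o jsonpath='{.items[*].status.phase}'
      register: green
      until: green.stdout.split() | unique == ['Running']
      retries: 20
      delay: 15

    # The actual traffic switch happens in the external balancing layer;
    # once traffic has moved and the logs look clean, the blue router retires.
    - name: Scale down the old (blue) router
      shell: oc scale dc/router-blue --replicas=0 -n default
```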
With the official approach, the availability of the cluster during updates — and, in turn, the rollback strategy — was not very well defined. So what we said is: come on, we are going to define what our update strategy is, and these were the requirements when we defined it. First: being able to update at any time, which involved a blue-green deployment strategy for the balancers and the routers.
And then we went to a node recreation strategy, and this is interesting to take into account: we directly delete all the old nodes and redeploy. Okay? And that has radically improved things, in the sense that many times, when you drag an old node along, you always end up carrying old things, and later, after five or six months, problems appear — and it turns out those problems are things from seven, eight months ago that you have left lying around. So, based on the golden images, what we do is regenerate absolutely all the nodes.
And then, as I have commented, we do not use the OpenShift update playbooks; we are not using them. Another point to keep in mind: this is valid up to version 3.9. From 3.9 on, I will have to change the strategy, because basically OpenShift is not going to support manual updating anymore — okay? — so soon it will be our turn to redesign it.
And another interesting point: during the update we are going to have nodes in two versions, and that is supported by OpenShift. So we have three main phases. The preparation phase takes more or less ten days, and preparing means doing all the tasks within Satellite — generating the content views, activation keys, etcetera — then generating the golden images and certifying them.
We also test some critical images; we have to certify them again because there may be differences — and yes, sometimes there are. So my recommendation (later we are going to see some recommendations) is: be careful with the updates. We also pre-provision more and more at the virtual machine level: provisioned machines for the infrastructure nodes and the workers.
So, well, then we go ahead and provision the masters — we don't add them to the cluster yet; they are just provisioned. Then comes the phase that is the most critical, which is updating the masters and infrastructure nodes, and here what we do is basically run an update script on the masters.
Okay — thank god, we have never had problems, but that is the most critical phase, and it usually lasts five hours in each cluster. Afterwards, the new nodes, which are pre-provisioned, we add to the cluster and certify; we deploy the routers and certify the routers; and we deploy the metrics stack. Deploying metrics is essential, above all, for the topic of auto scaling.
If you have defined auto scaling rules and you don't have the metrics, they will not be able to auto scale. And if all goes well, we do the traffic switch from the old router to the new one — and this is where some drops of sweat begin to fall — but thanks to the blue-green deployment, all this is perfectly controlled.
Have problems arisen? Well, no — we have never had problems; that is the reality. Everything has always worked, and it has worked very well under this update scheme. And then comes a phase that is less critical, which is the phase of updating the nodes: basically, updating is marking the old node as non-schedulable, adding a new one, and moving all the pods from the old node to the new one. And that, depending on how we negotiate with each of the clients of the regions, we can do...
...with very little pain. Very well — this famous status page, which I introduced a little while ago: basically we show the status of all our components, updated every minute, and we measure the APIs. We have deployment robots that make intensive use of the API and measure the state of the OpenShift API, and we also have some canaries.
The canaries are applications that are deployed even on the internet, and we are measuring the response time of all these channels. Then we have detailed information on each one of the nodes. Basically, this alone has allowed us to say: if this is green, the infrastructure of the PaaS works perfectly.
Now, I am head of engineering; there is another group for operations. Really, here I put just two little boxes to make this graph smaller, but in operations we have many more groups, and within the CSC there are many more groups — the CSC is in charge of all the cloud services of the group. Within engineering we have groups for the PaaS (I am responsible for that group), we have one for the ALM subject, and others.
In short, there are many more groups, but perhaps to summarize the organization: in this group, what specialists do we count on? We have three Red Hat consultants, a part-time Red Hat architect, and a Red Hat TAM. We have an Ansible integrator; what this integrator is in charge of is basically setting the guidelines for how we have to write Ansible code, and integrating all the code that all of us generate.
There is a person in charge of maintaining the catalog of corporate docker images; then we have a product owner who is in charge of negotiating everything with everyone; and then a service specialist, the person in charge of defining the PaaS service — not in technical matters, but the service definition itself, because from the definition of the service comes the implementation, which is what the rest of the people in the group do. And how do we work with the operations people?
Well, the treatment is very cordial, because they are really the people who were in my group before. And what do we deliver to them? Basically, Ansible playbooks — all of us produce playbooks — and, above all, reference guides on how to operate the PaaS, how to install a new version, and how to update OpenShift; architecture documentation; the service definition document; and the corporate images we deliver to be published in the registry, along with the OpenShift templates. We are also in charge of generating the Ansible playbooks.
We are a second-level support, and we work together mainly on new installations and new updates, where we do a kind of skills transfer. Another of our responsibilities is to publish all the good-practice documents; everything included in that document is sacred, and all applications have to adapt to that good-practice guideline. That's basically how we're organized.
The important thing to mention here is that there is an engineering group and an operations group. Now we're going to talk about how we measure the use of the PaaS, how the PaaS is working, and how the PaaS is being adopted. Well, we have several dashboards; they are all public, so anyone can consult them.
We have the percentage of use of the production environment as well as non-production, and the important thing to mention here is that once utilization passes 70%, we automatically create a new region — it is as simple as that. Then we have other very interesting charts: the response time of the canary, and the response time of the app. I look at the canary and it tells me: look, Cristian, you have a problem here.
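The 70% capacity rule mentioned above can be sketched as a trivial check run against per-region utilization figures. The threshold matches the talk; region names and numbers are illustrative.

```python
# Sketch of the capacity rule described: once a region passes 70%
# utilization, provisioning of a new region is triggered automatically.

def regions_needing_expansion(utilization, threshold=0.70):
    """Return the regions whose utilization is above the expansion threshold."""
    return sorted(r for r, u in utilization.items() if u > threshold)

usage = {"region-1": 0.55, "region-2": 0.72, "region-3": 0.30}
print(regions_needing_expansion(usage))   # → ['region-2']
```

In practice this check would feed whatever automation creates the new region; the rule itself is just a threshold comparison.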
This spike tells me: look, you have a problem — and there really was a problem. We had a problem with a node, and that node generated this spike for us. What our clients do is check these charts daily. What else do we have? The API response time — here the API was also generating some noise for us. And we measure the connection pattern: how many concurrent connections we have, how they are connected, and what the connection pattern looks like over a whole day.
What else do we measure? The use of the docker registry, and the number of pods. For example, here we see a significant drop; that is due to the fact that a lot of containers were moved from one region to another. So here we measure how our clients use the docker registry.
Then there is the number of projects per region — these charts are per region, and we have 10 regions in total — and here we measure the number of temporary projects and the average memory usage of the projects. And this is another chart, a very interesting one: service performance, which is basically our SLA. We have identified some fifteen critical metrics, and for each critical metric we have the response time and the availability.
By means of a slightly complex formula that we have designed, each metric has a weight, and that gives us the state of health of the PaaS minute by minute. For example, a 99.5 from the point of view of an SLA is quite low, but here it means that the health of the service is very good, because this is not a standard SLA that only reflects outages — any minimal variation shows up. For example, during an OpenShift update, if we see that we go below these thresholds, we know that this version is generating some noise. So that is this chart, and we use it internally: the engineering people like to see small variations that are not noticeable at the application level.
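A weighted health score of the kind just described can be sketched as follows. The actual formula was not disclosed in the talk, so the weights and metric names here are illustrative assumptions; only the idea — each critical metric carries a weight, and the weighted average gives a minute-by-minute health figure — comes from the source.

```python
# Sketch of a weighted health score: each critical metric carries a weight,
# and the weighted average gives a minute-by-minute health figure.
# Metric names and weights are made up for illustration.

def health_score(metrics, weights):
    """Weighted average of per-metric availability figures (0.0 - 1.0)."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

metrics = {"api_availability": 1.0, "api_latency_ok": 0.98, "canary_ok": 0.99}
weights = {"api_availability": 5, "api_latency_ok": 3, "canary_ok": 2}
print(round(health_score(metrics, weights), 4))   # → 0.992
```

Because every small degradation lowers the score, a figure like 99.2% can coexist with a service that, from the user's point of view, never went down — which is exactly the "99.5 is quite low here" remark above.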
Well, I have commented on how the PaaS has evolved from the first version, and now we have arrived at PaaS version 2. Version 2 is based mainly on our private cloud version 2, with Kubernetes at the center of the universe. But the interesting part is that we are going to start offering a dedicated cluster service. What does that mean? That a client, through the marketplace, is going to say:
"I want a dedicated cluster for myself" — and this is an important advantage compared to the global model that we currently have. It will be based on OpenShift 3.9; OpenShift 3.10 had already come out, but we have certified 3.9, which comes with Kubernetes 1.9. And we are going to have different flavors of persistent volumes, NFS among them.
Another very important point is micro-segmentation. One thing that I have not commented on is that our solution is multi-tenant: the global PaaS is multi-tenant, and we provide this multi-tenancy capacity through the multi-tenant network plugin. What we have done in this version, in this new architecture, is to base it on micro-segmentation using network policies, and we have created a specific role to manage all micro-segmentation issues — a role that falls to the security group. And other things:
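The micro-segmentation model — default deny between tenants, with an explicit allow-list curated by the security role — can be sketched like this. The data model is illustrative; the real cluster expresses these rules as network-policy objects.

```python
# Sketch of the micro-segmentation model described: default-deny between
# tenant namespaces, with an explicit allow-list managed by a security role.

def is_allowed(src_ns, dst_ns, allow_rules):
    """Default deny: cross-namespace traffic passes only with an explicit rule."""
    return src_ns == dst_ns or (src_ns, dst_ns) in allow_rules

rules = {("frontend", "backend")}                # curated by the security group
print(is_allowed("frontend", "backend", rules))  # → True  (explicitly allowed)
print(is_allowed("tenant-a", "tenant-b", rules)) # → False (default deny)
```

The design choice is that the *absence* of a rule denies traffic, so adding a tenant never silently opens a path — every exception is a deliberate act of the security role.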
Have we improved security? Yes, we have made some small improvements. We have cut down the capabilities: we practically do not allow anything to be done — it is so locked down that we have left almost nothing. We have even removed some more capabilities beyond those that OpenShift drops by default. We are going to make very intensive use of the marketplace; the service catalog is a fundamental piece for us, as is the service broker. Does anyone know the service broker? Has anyone used it?
No? Well, the service broker is a brilliant concept. We learned it on Pivotal, and ever since we learned it we have pushed hard for vendors like Google to adopt this technology — they listened, said they liked the idea, and moved towards it. It is very interesting to explore; I recommend it.
What else is it based on? Well, the heart of version 2 is Kubernetes, and another interesting aspect is isolation. We are going to improve isolation between environments, with different clusters for each of those environments. That is the version 2 PaaS, which in a few days we will begin to deploy in all regions. And now I have to talk a little about recommendations.
As a result of these recent years we have learned a few things, and I am going to try to go through them quickly. etcd is the heart of everything, so monitoring it is essential — monitor it, watch it, whatever you want to call it. Monitoring etcd is essential, because if etcd starts to sneeze...
...the whole cluster feels it in a very short time — in our case that was due to old objects that were not deleted properly. Plan the OpenShift network prefixes very well: you can have collisions, which is what happened to us, and since they cannot be changed in place, you have to tear everything down, redefine the OpenShift prefixes and rebuild everything. That is to say, if it is not planned, we will generate unavailability. Monitor the control plane — that is essential. And update to the latest version available: this is a recommendation from Red Hat, and it is also my recommendation.
Each version corrects many things, so my recommendation is to try to stay up to date. It is a very complex task — I have already shown how we have 10 days of preparation and certification; it is very complex and very expensive, but well worth it. In version 3.9 there is a significant improvement in the OpenShift routers. And watch out for swap, which we had active in the first versions.
We updated to 3.7.74, which carries a very important improvement regarding iptables. When you reach a level of 3,000 to 4,000 Kubernetes services, iptables begins to have performance problems: it becomes slow to update, especially in a model like ours where we have 12,000 pods constantly updating, scaling up and down, deleting pods, creating pods.
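A back-of-envelope calculation shows why an iptables-based proxy struggles at that scale: the rule count grows with services and endpoints, and every service change triggers a rewrite of the table. The figures and the rules-per-endpoint factor below are illustrative, not measured.

```python
# Rough sketch of why iptables-mode service proxying struggles at scale:
# rules grow with services x endpoints, and churn forces frequent rewrites.

def approx_rules(services, endpoints_per_service, rules_per_endpoint=2):
    """Back-of-envelope rule count for an iptables-mode proxy (illustrative)."""
    return services * endpoints_per_service * rules_per_endpoint

# At the scale mentioned in the talk (thousands of services), the proxy is
# rewriting tens of thousands of rules on every churn event.
print(approx_rules(4000, 3))   # → 24000
```

This is the motivation behind the later remark about IPVS, which keeps lookup and update costs roughly constant per service instead of linear in the table size.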
Also update the client: that is, if I go to 3.9, my client — or the client that our ALM uses — also has to be at 3.9. Between versions there are small differences that may cause your automation to stop working. And use Satellite 6.2: with previous versions we had ten million problems.
From 6.2 onwards it works great. Another recommendation is to certify a specific version of OpenShift. That is to say, between versions there may be some variations that are not very well documented — variations in the behavior of the platform that make the rest of your ecosystem start to have problems or stop being fully compatible.
Another interesting point is that we have to force our clients to improve the resilience of their applications: use the liveness and readiness probes. We are forced to say: look, if your deployment config or your deployment does not come with the liveness and readiness probes perfectly defined, we do not accept that application in production, because sooner or later we are going to have problems. With this we have radically improved the resilience of the applications.
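The acceptance rule just described — no probes, no production — can be sketched as a simple check over the deployment spec. The dict below mirrors the relevant fragment of a deployment config; field names follow the standard Kubernetes shape, but the checker itself is an illustrative sketch, not the real admission gate.

```python
# Sketch of the admission rule described: reject a deployment whose
# containers do not define both liveness and readiness probes.

def probes_defined(deployment):
    """True only if every container carries both probe definitions."""
    containers = deployment["spec"]["template"]["spec"]["containers"]
    return all("livenessProbe" in c and "readinessProbe" in c
               for c in containers)

good = {"spec": {"template": {"spec": {"containers": [
    {"name": "app", "livenessProbe": {}, "readinessProbe": {}}]}}}}
bad = {"spec": {"template": {"spec": {"containers": [{"name": "app"}]}}}}
print(probes_defined(good), probes_defined(bad))   # → True False
```

Enforcing this at intake, rather than auditing after the fact, is what makes the resilience improvement platform-wide instead of best-effort.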
Another interesting point: define the construction and approval plan for the corporate images, the docker images. When we designed this, we had to take several things into account, because this is not just handing over a Dockerfile and that's it — it's not as simple as that. First, we have to define how I am going to build it; then, how the vulnerability scanning is done.
Then, how am I going to digitally sign it, and how am I going to check the digital signature. Then: who supports it — if a problem arises, who is going to support those images? Who is going to test these images before passing them on to production? Who is going to be the owner? How frequently is it updated?
Another question we have to ask ourselves is labeling: how are we going to label these images? And then the obsolescence issue. These are all things we have to think about and define up front — if we only act without defining them, we are mortgaging the future. And the other point is that Docker Hub is very tempting: there are thousands and thousands of docker images there.
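The labeling and obsolescence questions above can be turned into an automated policy check. The label names (`owner`, `version`, `expires`) are hypothetical examples of what such a policy might require, not the actual corporate standard.

```python
# Sketch of a label/obsolescence policy check of the kind discussed:
# every corporate image must carry an owner, a version and an expiry
# date, and expired images are flagged. Label names are illustrative.

from datetime import date

def image_violations(labels, today):
    """Return the policy problems found on one image's label set."""
    problems = [k for k in ("owner", "version", "expires") if k not in labels]
    if "expires" in labels and date.fromisoformat(labels["expires"]) < today:
        problems.append("obsolete")
    return problems

labels = {"owner": "paas-team", "version": "1.4.2", "expires": "2018-01-01"}
print(image_violations(labels, date(2018, 8, 1)))   # → ['obsolete']
```

Encoding the answers to "who owns it, how is it labeled, when does it expire" as machine-checkable labels is one way to keep the catalog from rotting as it grows.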
We have managed to normalize all environments. In the traditional world, one environment is managed by one group, pre-production by another, and so on — and in each entity it is different. Here, all the environments are identical; there is no difference, and they are managed from the same place, from the same centralized tool. Variety of runtimes: our clients have Java Spring, WordPress, Node.js, nginx, Tomcat — in short, we have much more than this. And the resilience of applications: Kubernetes helps us enormously with resilience.
Problem solving: all problems are known, because the environments are identical — we are not going to run into nasty surprises at some point; they are practically all identical. What else? We have improved our services thanks to the feedback of 8,000 users, who recommend things in certain forums and ask us for new services. And word of mouth is important.
The group has moved towards the DevOps methodology, and the PaaS is a fundamental tool in this transition. At the cultural level, we have to understand that the PaaS does not imply changes by itself; the changes depend on how we are going to use the PaaS. Our use, as I have commented, is tremendous — we have some 3,500 to 4,000 projects, many of them critical — so obviously for us this brought a tremendous cultural change. Moving to a DevOps methodology in a large group is very, very complex.
Then there are certain internal processes — for example, the CMDB. I had a fight with the CMDB people about how we register applications. In the traditional world, registering an application is very easy: look, it runs on node 1, and that's it. But in an environment as dynamic as the PaaS, that application is not on node 1 tomorrow; it is on node 2 the day after tomorrow, then on node 3, and the records keep changing. These are internal processes that perhaps for a small company are nonsense, but in a large group they are very important changes.
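The CMDB problem just described — the pod-to-node mapping changes constantly, so a one-time registration goes stale — suggests rebuilding the record from live state instead. This is an illustrative sketch with a made-up data model, not the real CMDB integration.

```python
# Sketch of the CMDB problem described: in a dynamic platform the
# pod-to-node mapping changes constantly, so the record must be
# refreshed from live cluster state instead of registered once.

def refresh_cmdb(cmdb, live_pods):
    """Replace the static per-app node records with the live placement."""
    cmdb.clear()
    for pod, node in live_pods.items():
        app = pod.split("-")[0]            # e.g. "app-1" belongs to "app"
        cmdb.setdefault(app, set()).add(node)
    return cmdb

cmdb = {"app": {"node-1"}}                      # yesterday's registration
live = {"app-1": "node-2", "app-2": "node-3"}   # where the pods run today
refresh_cmdb(cmdb, live)                        # "app" now maps to node-2, node-3
```

The design point is the direction of truth: the cluster is authoritative and the CMDB is a projection of it, which is the opposite of the traditional register-once model.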
Today, management works the other way around: it is based on cattle — we no longer have pets. If a node does not work, it is thrown away. It's that simple, and that's an important change. Change of roles and responsibilities in this new environment; a clear separation between platform and applications. Before there was a gray area; now it's very clear: look, your platform is green, everything is green — the problem is elsewhere.
Flexibility: this tool is very flexible, and you have to define responsibilities. You have to take into account that you pay per use, and we were not used to that. That is, when we costed a project, we said: look, this project means a WebSphere cluster with so many nodes, and that's the cost. Well, here it is different: it depends on the amount of memory, how it will scale, how many requests it will have — and that cost model was not easy for us.
I have spent a lot of hours discussing this issue. Self-service: yes, it's all self-service. Before, we were used to: such a group does such a thing, another group is in charge of something else — you do the DNS, you do the other. Well, not here. Here we give you a series of tools, consoles and clients, and with a certain role you can do absolutely all of this yourself — and that changes the operating model of the PaaS, which we have already talked about.
It is a radical change from the traditional world to a cloud world, and the change is difficult to assimilate — it has cost us a lot. Then another interesting point: evangelization and dissemination. This technology is complex. It seems simple: doing a docker run is very easy to understand. But how do I deploy an application? How do I scale it? Where do I save the configuration? Where do I save the logs? How do I troubleshoot? It is a radical change.
We had become accustomed, over 15 years, to using one technology, and now we come to this. So we detected at the beginning of the project that adoption was going to succeed based on how we were going to evangelize and disseminate, and what we did was hold workshops every 15 days — and it was a joy.
And now, the future. Talking about the future reminds me that I have to start wrapping up. What things do we have to look at in the future? We have to start investing time in it. Well, Atomic was mentioned in the previous presentation; for us it is a must, so that we no longer have to maintain golden images.
It is very expensive to maintain golden images, because we have to generate them ourselves — one for Azure and one for Amazon — so managing that requires time and resources. Our idea is to go to Red Hat CoreOS. If we want to go, why didn't we go already? Well, obviously there was the transition to Atomic, and then the acquisition of CoreOS, so what we're waiting for is for the rules of the game to become a bit clearer.
Although we have an update methodology, my challenge is to be able to update more frequently — to push operations so that, with the appropriate tools, I can update more frequently. With this technology we cannot stay on a version from a year ago, because perhaps it will work well the first month, the second, the third, but as soon as the number of requests and the load begin to rise, we begin to see behaviors that are not very adequate.
So updating is essential. We are also going to invest a lot more in monitoring based on Prometheus, which is practically a standard in the cloud world, and then in the issue of alerting policies, reports and graphs. My recommendation is to look at and read a little about this area, which is quite interesting.
For example, how Kubernetes services are implemented as iptables rules — I have already commented on the performance problems; this is being addressed with IPVS, and that is important, and there is a very interesting document that explains all this history. In version 2 we are going to have at least 30 clusters, more or less — that is what we understand we are going to have — and to manage all that, across that number of regions, it is evident that automation is something important.
We also have to start investigating the runtime topic, because Docker advances in one direction and Kubernetes in another, and at the same time Kubernetes uses a very low percentage of Docker. So what we want is to start to see how CRI-O is doing, since it is a first-class citizen in Kubernetes; we want to see how it behaves with all our images and with all our services. Service mesh is another technology on which all of us providers have to position ourselves.
And well, we are going to have to do something there — we have to define what the strategy will be. I am not very clear about it yet; the 1.0 arrived just yesterday. So I believe we are now in a position to begin to theorize about where we have to go. The suppliers are doing really very interesting things — very large suppliers are doing very interesting things.
The OpenShift administration console that was introduced in the previous session is essential for us — this should have been put in red, blinking, so it stands out, because we have really been waiting for it for a long time. We have invested a lot of money and a lot of people to make the PaaS operable, so with Tectonic we believe it will respond to our needs.
What I really liked when I found out about the acquisition of CoreOS is that CoreOS has a vision centered on operations. That vision is fundamental for us, because we do not only live from development: this has to be operated, and we have critical applications. We have always liked that CoreOS vision towards operations, so we have practically applauded — we have jumped for joy.
What else? The Operator Framework has also been mentioned: we have to move all our services to this scheme. Our clients are asking us to execute any type of workload. We started with web applications; now they say: I want to run my Spark here, I want to run this database — which, maybe within a year, will be a critical database — and we are trying to put a cold cloth on those needs, but hey.
They are asking us for this. Then, support for any type of port, TCP and UDP: we are working to provide this capacity. It is not easy at all, but we believe the moment will come when we will be able to give it. And well, these are links of interest. I recommend this link where you can receive weekly news about Kubernetes — everything that happens around Kubernetes. It has allowed me to learn a lot and to see what other people are doing around this ecosystem, which has grown enormously in the last three years.
There are all the projects that the CNCF sponsors — logically it sponsors its own — but there are tremendously interesting projects among them, and just a couple of days ago a container registry project joined. We have learned a lot from many of these things — many things, really; the community around this is very, very interesting. Then these are other links. kube-state-metrics is evolving a lot, in the right direction. And my last message is about having a healthy PaaS.
The secret is automation, but mainly monitoring: monitor, monitor everything — not only from the point of view of infrastructure, but also from the point of view of the user, how the user perceives the health of the PaaS. That is fundamental. And with this last slide the milonga ends. Thank you very much, everyone.