From YouTube: OCG Berlin 2017 - OpenShift at Amadeus
Description: From the March 28th 2017 OpenShift Commons Gathering in Berlin @KubeCon (https://Commons.openshift.org/Gathering)
Okay, good afternoon. I hope you're enjoying the day so far. I don't like being behind the desk, but I don't have much of a choice. I'm here to talk to you about OpenShift at Amadeus. This is an adventure that started a couple of years back for us now, in 2014 in fact.
We're a technology company: we provide technology solutions to the travel industry. We help content providers, content operators, and content distributors all link up. For instance, we take airline availability and make it available to travel agents and online travel agents. We also focus on the traveler experience; in fact, that's central to what we do, from inspiration through search and purchase, during the trip (finding a hotel, recovering your lost luggage), and after the trip, things like that. Not just the pleasant stuff.
So, a few figures: last year we handled about 566 million bookings and boarded 747 million passengers. In fact, a few years back we also acquired a company called Navitaire, and that figure excludes their passengers boarded; with theirs, it goes up above a billion. We host 130 airline inventories.
That means that, basically, the inventory, which says how many seats are available on a given plane at a given time and things like that, is hosted in our systems. We handle about 50,000 end-user queries per second, and our enterprise service bus now peaks at over 500,000 queries per second; that was the figure for last year. So we have a couple of constraints in terms of the types of data objects that we handle. We have reservation records; you may have heard of the PNR, the passenger name record.
There are other, more modern names: structured booking records and things like that. I already mentioned the inventories, the content-provider inventories: airline inventories, rail inventories, things like that. These are objects which, of course, we need to handle in a consistent manner. We don't want double bookings in a hotel, for instance; you don't want to end up at the hotel and find that someone else already has your room, all that kind of thing.
Historically, like a lot of folks, we had a system with thousands of physical servers running many, many different applications, serving many thousands of different services over different channels: it can be web, it can be more traditional, or legacy, host-to-host, and things like that. And we were now in a position where we still want to continue growing.
A
We
want
to
expand,
we
need
to
control
costs,
we
also
have.
We
were
operating
mainly
out
of
one
data
center
in
Germany,
so
for
customers
around
the
world.
That
means
that
you
can't
beat
the
feast
the
speed
of
light.
You
have
some
latency
that
is
there
embedded
in
the
in
the
physics
of
your
your
placement,
so
we
wanted
to
get
closer
to
our
customers.
We
also
had
requirements
to
maybe
install
our
app
our
applications
on
customer
premises,
so
that
meant
also
knowing
how
to
remote
remote
operate.
We were also seeing a lot of disruption in the travel industry model. Previously you were maybe seeing 10-to-1 look-to-book ratios on airlines, and now we're looking more at things like 10,000 to 1, or 100,000 to 1; there are lots of different figures flying around. But the whole internet age, and mobile, and things like that have changed the way things work an awful lot.
So this basically led us to a new platform, and also to the realization that we would need to be capable of working on hybrid cloud: working with public cloud providers, but also with on-premise private cloud. We also realized that we wanted something beyond mere infrastructure as a service. Allocating a virtual machine through code is one thing, but we were thinking about the developer experience, and also the operator experience, in terms of simplification and of decoupling the actions and what different people have to manage.
We couldn't simply say: okay, we go for infrastructure as a service and allocate VMs; you get your VMs, and as developers you have to administer your VMs and manage the software on them, all that kind of thing. We saw that we wanted something above that, and that brought us to platform as a service.
There's also an awful lot of problems that you have to deal with, such as testing: you simply can't test all the configurations, it's not possible. Or when you do a software load, for instance, you know perfectly well that there are going to be different versions of different software running simultaneously, and that you can't have pure atomic switches between combinations of software levels of your different components.
Okay, I won't talk too much about this slide. I think it should be known to most people now: containers, Docker, Kubernetes, OpenShift. What we developed on top is what we call Amadeus Cloud Services, which is our internal product, our internal PaaS, built mainly on OpenShift, and whose existence is basically there to fill the gaps. A typical example: OpenShift and Kubernetes are not yet suited to running databases, so we needed to also manage the data storage part as part of our PaaS and provide that to our end users.
Also, when we started using OpenShift, there wasn't monitoring as part of OpenShift, so we developed a solution of our own. A lot of this we intend to see go to the left over time, and it's already started: there are things that we were doing in ACS, in our Amadeus Cloud Services, which we've got rid of because they've now been implemented in OpenShift.
Throughout the world we have deployments in Europe, in Asia, and in the United States, and we're really happy to see them serving low-latency (well, fairly low-latency: on the order of 200 milliseconds or less, I think) requests for airline availability. This is for four clusters across Asia, Europe, and the United States; we actually have a lot more of them running, and they're very happily serving several thousand transactions per second. That's working really well, and our operations folks are very happy with that.
Now, coming to where we want to go: there are lots of OpenShift evolutions that we're interested in. One that I will focus on a little bit more is stateful sets, which Clayton already mentioned this morning. We're also interested in following the monitoring aspect because, as I mentioned, we implemented a monitoring stack of our own, but we're interested in seeing where OpenShift is going and possibly moving to that.
For fairly obvious reasons, because as I mentioned we've already got a number of clusters worldwide running Amadeus's online cloud availability, we're interested in cluster federation, to give a single view and a single way, a single point of entry, for administering multiple clusters. Self-hosting is something that I'm actually interested in as well: by self-hosting, I mean that there's a line of thought where Kubernetes can in fact run Kubernetes inside itself, so your masters are containers that are scheduled by the other masters, and things like that.
Then we're also interested in the sophisticated scheduling aspects. This is rescheduling, for instance: when the system sees that there are things that are interfering with each other in bad ways, maybe a node is kind of limited, or maybe you have a lot of fragmentation and you want to undo the fragmentation across the system.
One aspect that we're really very much interested in, in an immediate way, is the third-party resources and aggregated API servers aspect. We see quite a lot of use cases for third-party resources. That's basically the idea that you can extend OpenShift and Kubernetes to include and integrate functionality of your own. For instance, there's a guy I work with who's been working on a Redis cluster operator type of object, so that a Redis cluster can be managed sort of natively inside OpenShift.
So today we've done that using config maps, but we'd actually like to use third-party resources to give a more fluid integration, and we have a couple of other use cases for that. Then the next level up from that is to actually have it available through a service catalog.
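As an aside, the third-party-resource mechanism being discussed looked roughly like the following sketch in the Kubernetes API of that era (TPRs existed from roughly Kubernetes 1.2 to 1.7 and were later replaced by CustomResourceDefinitions); the kind `RedisCluster` and the API group `demo.amadeus.example` here are hypothetical illustrations, not the actual Amadeus objects:

```yaml
# Hypothetical sketch of a ThirdPartyResource declaration.
# metadata.name encodes the kebab-cased Kind ("RedisCluster")
# plus an API group of your choosing.
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: redis-cluster.demo.amadeus.example
description: "A Redis cluster managed by an in-cluster operator"
versions:
  - name: v1
```

Once registered, objects of kind `RedisCluster` can be created, listed, and watched through the API server like built-in resources, which is what lets an operator process manage them "natively" instead of encoding their state in config maps.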
Some of you who were paying attention maybe noticed that Paul Morie this morning was talking about the Service Catalog, because he's one of the leads on that. Final point: security, always very important. One of the things that we've had to deal with is encryption of secrets.
At the moment we have a solution, but it's not a very well-integrated solution, so we're also following closely the progress and the design work on encryption at rest and the other aspects of secrets that are going on at the moment, and then fine-grained network access control as well. So I'll just talk a little bit more now about stateful sets. The reason we're interested in stateful sets in particular is because we would like to run data stores in OpenShift.
I already mentioned that we're running data stores outside OpenShift; this is one of the things that ACS manages on the outside, and again, something that I would like to get rid of. Why not? It won't be now, but it would simplify operations an awful lot if Couchbase, for instance, could be administered straight through OpenShift; if all the actions, like scaling up an operational Couchbase cluster, upgrading, or recovering a node that's keeled over, were all automated and seamless through the OpenShift administration.
There are data stores provided out of the box on Amazon and on GCE, and OpenStack also knows in principle how to run data stores and offer them as a service. So, well, if they're there, go ahead and use them; and I'd be interested in seeing them available through the PaaS service catalog, so that the self-service is uniform as well. Maybe; that's something I need to think a bit more about and talk to people about. But definitely, yes, if the IaaS has it, why bother doing it at another level?
A
However,
not
all
infrastructure
as
a
service
cow
providers,
provide
the
same
data
stores,
and
maybe
the
data
store
you
want
is
not
available
on
the
cloud
provider
you're
targeting.
So
then
you're
going
someone
to
run
it
yourself
and
then,
if
you
try
and
run
it
in
the
infrastructure
as
a
service,
it's
more
complicated.
You
have
to
deal
with
the
cloud
provider
specificities,
you
don't
have
a
common
operational
abstraction,
so
we
think
it's
interesting
to
actually
do
it
at
the
pass
level.
So
I
already
mentioned
inclusion
in
the
Service
Catalog.
You can have native mechanisms for scale-out and for update built in. As for the challenges, there's always performance, which was mentioned this morning already; though in the end, if you are going to run on VMs, that's probably where the main performance penalty is, not the containerization. So we can consider using OpenShift on bare metal, and we'll pay less of a penalty for running our data stores.

We also have to think about cross-datacenter replication. Couchbase, for instance, relies on a full mesh of connectivity between the Couchbase nodes of the different clusters in order to provide cross-datacenter replication. In certain clouds that might not be too difficult because, for instance, on GCE you have a flat network: you can route from any machine to any machine in any region on Google inside your project. But that's not the case everywhere, and then there's also the challenge of finding the right level of abstraction for all of this.
You can see that people have been interested in this for quite a while: templates for running various data stores on Kubernetes or OpenShift have been available probably since the beginning, since Kubernetes was first made available. However, they typically have limitations. I've actually only put "single instance" in front of a few of these; however, I would actually expect that without stateful sets they are all necessarily single instance, or at least you need some form of sharding of your data to be in place.
The way we're going about this is that we're working closely with Red Hat, at the moment in particular on Couchbase; this is the project that's the furthest forward, as far as I know. We've done a first phase where we can now run a rack-aware Couchbase, and it also implements scale-out, meaning you can add nodes to Couchbase on the fly. Then we'll move on to a second phase.
But this is already very useful, because you can just spin up an Oracle instance for dev and test and throw it away; it's very easy to use, and it makes Oracle very accessible to individual developers and things like that, so we're seeing great interest in that area. We've actually contributed a few changes there, and we've managed to make it work on 12.1 as well and run 12.1 inside OpenShift.
Is that still legible if I do that? Okay. So what I have here is a stateful set, and we're running three Couchbase data instances on my laptop. I'm also running the pillowfight application, which is Couchbase's sort of demo app that does some reads and writes all over the place.
They need to be able to talk directly to each other, so they need to be able to find each other, and that was something that was a problem in the beginnings of OpenShift and Kubernetes, because typically you would talk through a service endpoint, and there was only one for the whole service.
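The usual way around that single-endpoint limitation, and the mechanism stateful sets build on, is a "headless" service, one with `clusterIP: None`: instead of a single virtual IP, DNS resolves to the individual pods, so each instance can be reached directly. A minimal sketch, where the service name, label, and port are hypothetical choices for a Couchbase-style setup:

```yaml
# Headless service: clusterIP: None means no load-balanced VIP;
# DNS instead exposes one stable record per pod, e.g.
# couchbase-0.couchbase.<namespace>.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: couchbase
spec:
  clusterIP: None
  selector:
    app: couchbase
  ports:
    - name: admin
      port: 8091
```

With this in place, peers can discover and address each other by predictable DNS names rather than going through one shared endpoint.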
So here what we see is that the Couchbase instances are indeed able to talk to each other.
They talk to stay in sync, to agree on the various aspects of who has what data and things like that, and this is part of what pet sets (the old name for stateful sets) give us: the stable identity through which each pod of the stateful set is accessible. And it happens to work okay. And just for the record... oh, sorry, yeah, let me shift it across. So this is a three-node cluster.
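As a rough illustration of the demo setup just described, a three-replica Couchbase stateful set could look like the following sketch; the image tag, port, mount path, and storage size are hypothetical and simplified, and the actual Amadeus/Red Hat work (rack awareness, scale-out rebalancing) involves considerably more than this:

```yaml
# Sketch of a three-node Couchbase StatefulSet (called PetSet before
# Kubernetes 1.5). serviceName must reference a headless service; each
# pod then gets a stable name: couchbase-0, couchbase-1, couchbase-2.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: couchbase
spec:
  serviceName: couchbase
  replicas: 3
  template:
    metadata:
      labels:
        app: couchbase
    spec:
      containers:
        - name: couchbase
          image: couchbase/server:community-4.5.1   # hypothetical tag
          ports:
            - containerPort: 8091
          volumeMounts:
            - name: data
              mountPath: /opt/couchbase/var
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section gives each replica its own persistent volume that follows the pod's stable identity across rescheduling, which is exactly what a clustered data store needs.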
We've got a way to go to migrate applications, our applications, onto OpenShift, but we've already got some running, and the good thing is OpenShift keeps growing with every release. We're happy to be contributing; I'd love to contribute more, but that's always a matter of resources and time and things like that. So that's it; I'd like to continue the adventure. That's all.
[Audience question]

The package manager: actually, we started looking at that. We haven't gone very far with it. Currently we're not there; we've actually built a meta-language internally, because it's one of those things where, when we started, Helm didn't exist, so we developed our own way of packaging things and producing the images and things like that. It's not among the highest of our priorities for shifting left, in my previous diagram.
[Audience question]

It's secret! No, so, well, okay: the problem is that secrets inside the master aren't themselves encrypted. They're not encrypted in memory, and they're not encrypted at rest either, except if the volumes of the VM that runs your etcd happen to be encrypted, maybe; but as soon as the VM is up, they're visible. So one of the constraints that I was given was that we want the secrets in the master to be encrypted.
So there you immediately have a chicken-and-egg problem, which I haven't solved as such: if you want to have them encrypted in the master, then when they get to the pod they have to be decrypted somehow, and somehow you need a key in order to decrypt them. So where does that key come from? How do you secure it? Does it itself need to be encrypted? And if it's encrypted, how does it get decrypted?

Okay, so there have been a couple of solutions, workarounds to that, that have been put in place. I think Kelsey Hightower did an article, and it turns out that what I worked on, at more or less the same time as he was working on it, was essentially the same kind of thing: basically, we have some kind of security module somewhere which has either the encrypted form of the secret or a key that allows us to decrypt it.
Basically, when a pod starts up, I have an init container which examines the secrets. My secrets are basically now a configuration that says where to get the secrets from; I control the access to those sources, and I actually substitute the secrets volume on the fly, so that the pod, or the containers of the pod that need the secrets, see them.
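A sketch of that pattern, as I understand it; this is an in-house approach rather than a stock OpenShift feature, and the pod name and image names below are hypothetical. An init container resolves the secret references against the external security module and writes the real values into a shared in-memory volume, which the application container then mounts where it expects its secrets:

```yaml
# Sketch: the init container fetches/decrypts secrets from an external
# store and materializes them in an emptyDir volume; the app container
# sees that volume as its ordinary secrets directory.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-external-secrets          # hypothetical
spec:
  initContainers:
    - name: fetch-secrets
      image: registry.example/secret-fetcher:latest   # hypothetical image
      volumeMounts:
        - name: resolved-secrets
          mountPath: /output
  containers:
    - name: app
      image: registry.example/app:latest              # hypothetical image
      volumeMounts:
        - name: resolved-secrets
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: resolved-secrets
      emptyDir:
        medium: Memory    # tmpfs, so decrypted secrets never touch disk
```

The point is that only the init container needs credentials to reach the secret store, and the decrypted values live only in memory for the lifetime of the pod.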
So it's not a full-blown solution; it's not an ideal solution. If you hack the master, obviously you're king of the cluster and you can do whatever you like, so you're going to get access to the secrets anyway. But it's an improvement: it makes it harder. A hacker, or cracker rather, needs to go through more steps to get to your secrets, and so you have more opportunity to audit that access and to raise alerts through a SIEM, that kind of thing.
[Audience question]

Sorry, for the data storage part, yes, that will be necessary. The idea is that you have to use some sort of... you need Cinder, or you need GlusterFS, or something like that. I have to admit I don't know that we've actually selected one solution at this point; I think that's still open. But basically we will use some form of cloud-provided block storage for the shared storage, for the persistent data storage.
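Whatever backend ends up being selected (Cinder, GlusterFS, or a cloud provider's native block storage), the application side would consume it through the same persistent-volume-claim abstraction. A minimal sketch, where the claim name, storage class name, and size are hypothetical:

```yaml
# Sketch of a claim for cloud-provided block storage; the storage class
# is hypothetical and would map to Cinder, GlusterFS, EBS, etc.
# (shown with the beta annotation form current in early 2017).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datastore-data
  annotations:
    volume.beta.kubernetes.io/storage-class: "block-standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

The storage class is what hides the provider specificities mentioned earlier: the pod spec stays the same across clouds while the class maps to whichever provisioner is available.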
[Second speaker] So no, we are moving away from shared storage. For the data persistency, with the NoSQL stores we introduced local disks and flash, and even on the Oracle side we are moving away from RAC and from shared storage as well, precisely because, from an operational standpoint, this complexity significantly weighs on the model and generates a less robust solution. So we are moving away from shared infrastructure components. Now, for the large volumes of data: yes, we are currently doing a study.
[Host] I want to thank Eric and Victor from Amadeus for this presentation today. They'll be around most of the week, I think, at KubeCon, and afterwards in the beer area. And if you hadn't noticed, I was soft-shoeing and stretching it out, because the man in the far corner is the next speaker, and he's just arrived: Alexis. We'll get you set up; give us a second. So thank you very much.