A: Our intention was to be here with our customer [name unclear] from Daimler, who is our product owner in this engagement, but he had such a heavy workload that he could not join us. So: we are from Capgemini. We are running some of Daimler's large business applications, and I have an overarching role as Technical Account Manager, focusing on innovation topics like application modernization, cloud, and big data. Patrick, do you want to... yeah.

A: These websites are one of the front-end applications that we have for our global car configurator back-end. So what we are talking about is the back-end behind those consuming applications, and it holds the vehicle master data. It knows all about the car models, the equipment, the prices and taxes, the descriptions in many different languages, and it can check whether a configuration can be built in a plant.

A: So, more than a year ago we started a journey here with this Global Car Configurator. If you look at the left side, you can see our starting point. We had a quite traditional Java application with a fixed release cycle of two releases per year. It was built on a licensed software stack with DB2 and WebSphere, and the development team handed the software over as a product to different other departments, each of which had its own infrastructure.

A: Where we are today is in the middle of this slide: we now have a central GCC service, and this central GCC service is running on a private OpenShift environment in the Daimler data center. It is set up and operated by T-Systems, and all the consuming applications are now starting to connect to this central GCC. By the end of the year we want to have all consuming applications migrated to the new central GCC.

A: There is a large business benefit. We now have continuous delivery, zero-downtime deployments, scalability, very good performance, and a cost reduction, because we do not need all these operational teams and we no longer have the mainframe and the licensed software. So we are completely on open source. We are cloud-ready, we have REST interfaces that are published in the corporate API management (that's the small "API G" on top), and we have a new third-party system called WLTP.

A: WLTP is the new regulation for vehicle emissions. And what is now starting is the third part here: GCC and microservices. We have already started to implement selected functionality from the Global Car Configurator as separate microservices, so the GCC itself will get a little bit smaller and will talk to these microservices. With these microservices we can avoid redundant functionality in different business applications.

A: On the next slide, on the left side, you can still see the monolith, the big blue box, with the functionality inside it, and you can also see that we have one big database for this monolith; that is still our status quo. On the right side you can see the microservices architecture, where specific functionality is implemented in microservices, each with its own database and data supply. We made some architectural decisions during that journey, and also collected some lessons learned, which we have put in the gray box.

A: One learning: we started with JBoss for a fast migration of the monolithic Java EE application into Docker, and we decided to use Spring Boot when implementing these microservices here on the right side. We have also decided to use the base containers supplied by Red Hat for JBoss and Postgres, because we do not want to build our own base containers and do all that maintenance work.

A
B
You
Stefan
now
going
more
forwards
to
I,
guess
more,
the
technical
part
of
the
application
like
what
did
we
do
in
our
journey
and
there
we
started
off
actually
with
a
lot
of
questions
in
mind.
The
first
question
is:
how
do
we
integrate
the
deification
actually
into
the
into
the
open
chef
platform
and,
and
then
the
second
question
is:
how
does
open
shift
actually
help
us?
Is
there
anything
that
helps
us
and
to
that
last
question?
I
can
totally
easily
say
now
after
the
journey.
Yes,
it
helps
actually
a
lot.
B: The OpenShift platform gave us a lot of things we could easily utilize in our application. We started by doing a sort of assessment of the component model of the application, and there were a lot of components in there that were suddenly not needed anymore; we could replace them easily with smaller and simpler components. The only component we wanted to keep is the one you can see in the light blue-ish box there.

B: That is the business kernel, the business functionality, and that is the thing we want to keep. OpenShift allows us to have only the business functionality, with just some adapters around it, which are fairly simple to implement. And of course we also implemented REST interfaces, which are not really related to OpenShift, but which were really helpful in simplifying our application. Some examples of what we did there and what we changed are listed in this table.

B: For instance, we changed the configuration. Before, all the configuration had to be done differently in different environments; there were different scripts running, or someone just manually changed the configuration. We changed our component model so that it now uses the Kubernetes and OpenShift platform capabilities: ConfigMaps, Secrets, and environment variables. That is how we configure our application.

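A minimal sketch of what that configuration style looks like from inside the container (the variable names `DB_URL`, `CACHE_HOST`, and `LOG_LEVEL` are hypothetical; OpenShift injects such variables from ConfigMaps and Secrets):

```python
import os

def load_config(env=os.environ):
    """Read settings the twelve-factor way: from environment variables,
    which the platform populates from ConfigMaps and Secrets.
    Falls back to defaults for local development."""
    return {
        "db_url": env.get("DB_URL", "postgres://localhost:5432/gcc"),
        "cache_host": env.get("CACHE_HOST", "localhost"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

if __name__ == "__main__":
    # In the cluster, these values come from a ConfigMap or Secret.
    cfg = load_config({"DB_URL": "postgres://db:5432/gcc", "LOG_LEVEL": "DEBUG"})
    print(cfg["log_level"])
```

The same image can then run unchanged in every environment, with only the injected values differing.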
B: Then we changed the logging. That is a pretty common problem when moving to a container platform: your containers will probably die, or rather, you want them to be able to die easily without losing anything. So we changed the way we log and introduced JSON logging, so that we can now aggregate the logs fairly simply. Another very important topic for us, and for the performance of the application, is caching: we introduced distributed remote caches. They were fairly simple to set up on the OpenShift platform.

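As an illustration of the JSON logging idea (a sketch only, not the project's actual Java logging setup), a container emits one JSON object per line to stdout so the platform's log aggregation can parse and collect it:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line,
    so a log aggregator can parse fields instead of grepping text."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Containers log to stdout; the platform collects it from there.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("gcc")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("configuration loaded")
```

Because the log lines are structured, nothing is lost when a container dies: the aggregated stream already holds everything the container wrote.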
B: There we took the JBoss Data Grid from Red Hat, and we could easily set it up and got a whole distributed cache cluster running. Lastly, the monitoring topic, which in the end gives us a lot of insight into the application; and on the other hand, implementing those liveness checks actually helps me to have really silent nights, so that I do not have to wake up every time a container dies.

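The liveness-check idea can be sketched as a tiny HTTP health endpoint (the paths `/health/live` and `/health/ready` are hypothetical names): when the liveness probe fails, Kubernetes restarts the container automatically, with no one being paged.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answers the probes the platform polls; a non-200 liveness
    response makes the kubelet restart the container."""
    def do_GET(self):
        if self.path in ("/health/live", "/health/ready"):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"UP")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/health/live").status
server.shutdown()
```

The probe paths and timing are then declared in the deployment configuration, not in the application itself.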
B: So there are a lot of things the platform helps us with. Our main achievements are, first of all, that we can keep the business functionality and just put some small adapters around it, which are really simple to implement; that we could replace a lot of complexity in our application with Kubernetes or OpenShift-specific components; and, in the end, that we introduced blue-green deployment to have zero-downtime deployments for the application.

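Blue-green deployment can be illustrated with a toy model (not the actual OpenShift route mechanics): two identical deployments, with traffic switched to the freshly deployed color only after it passes a health check.

```python
# Toy model of a blue-green switch: two identical deployments; the
# route points at one color, the new version goes to the idle color,
# and traffic is switched only after the idle color is healthy.
deployments = {
    "blue": {"version": "1.0", "healthy": True},
    "green": {"version": "1.0", "healthy": True},
}
route = {"active": "blue"}

def deploy(new_version):
    """Roll out new_version to the idle color, then flip the route."""
    idle = "green" if route["active"] == "blue" else "blue"
    deployments[idle] = {"version": new_version, "healthy": True}
    if deployments[idle]["healthy"]:   # smoke test before switching
        route["active"] = idle         # atomic switch: zero downtime
    return route["active"]

deploy("2.0")
```

The old color stays running after the switch, so rolling back is just flipping the route again.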
B: Now, going a little more in depth into the REST interfaces. I do not want to stretch that too much; REST is, I guess, pretty common. But the interesting part here, as Stefan already said, is that we introduced them into Daimler's API management solution, which is called One API, and there we could again utilize the concepts of the platform. We now have a small API gateway running in front of our container, and this small API gateway is there for doing all the authentication stuff.

B: It is actually a pretty good separation of concerns that we now have there, and it gathers a lot of metrics: which APIs are called, how often, and things like that. We were actually coming from SOAP, and that was a problem for us in our journey: there were a lot of different interfaces in use, and the consuming applications used different ways to communicate with the application.

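The gateway's two jobs mentioned here, authentication and call metrics, can be sketched as a thin wrapper in front of the backend (the header name, key store, and paths are hypothetical):

```python
from collections import Counter

metrics = Counter()            # how often each API path is called
API_KEYS = {"secret-key-1"}    # hypothetical key store

def gateway(handler):
    """Stand-in for the API gateway in front of the container:
    authenticate the request, record the call, then delegate."""
    def handle(path, headers):
        if headers.get("X-Api-Key") not in API_KEYS:
            return 401, "unauthorized"
        metrics[path] += 1
        return handler(path)
    return handle

@gateway
def backend(path):
    # The business functionality never sees authentication concerns.
    return 200, f"configured {path}"

status, _ = backend("/vehicles", {"X-Api-Key": "secret-key-1"})
```

This is the separation of concerns described above: the backend container holds only business functionality, while cross-cutting concerns live in the gateway.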
B: How would it perform in a productive environment? That is one thing that is, I guess, very interesting for everyone who wants to move toward such an OpenShift platform. We introduced load test scripts, which are run outside of the cluster, against our cluster. These load test scripts emulate the usage of the car configurator: how would a common user of the car configurator use it?

B: In our case, he would probably come from a certain market and speak a certain language; we would randomly select that for him. He would probably choose a random vehicle and want to configure it, change the equipment, and in the end maybe get alternatives and select one of those alternatives. And what we could see there is also very interesting.

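Such a load-test user journey might be emulated like this (the markets, vehicles, and equipment names are made up; a real script would issue REST calls against the cluster for each step):

```python
import random

# Hypothetical building blocks of a configurator session.
MARKETS = {"DE": "de", "US": "en", "FR": "fr"}
VEHICLES = ["C-Class", "E-Class", "GLA"]
EQUIPMENT = ["sunroof", "AMG line", "trailer hitch"]

def user_journey(rng=random):
    """One emulated user session: market and language, random vehicle,
    an equipment change, and finally picking one of the alternatives."""
    market = rng.choice(list(MARKETS))
    return [
        ("select_market", market),
        ("set_language", MARKETS[market]),
        ("choose_vehicle", rng.choice(VEHICLES)),
        ("change_equipment", rng.choice(EQUIPMENT)),
        ("pick_alternative", rng.randrange(3)),
    ]

# A seeded generator makes a run reproducible when chasing a bottleneck.
journey = user_journey(random.Random(42))
```

Running many such journeys concurrently from outside the cluster gives a near-productive traffic pattern, which is what makes the auto-scaling behavior visible.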
B: We could see our cluster performing in an almost live, productive scenario, see the auto-scaling working, and see where bottlenecks might be. And I guess that was also a journey for a lot of us: we actually helped in building up the cluster itself, and there were some settings that had to be adjusted so that everything works fine for us.

B: So we created this load test, and we even ran longer tests with it. I would recommend to everyone: try to stress your application, and of course also the cluster, but mostly the application, so that you can see how it behaves, and really try to bring it down. That is the interesting part here.

B: Lastly, I want to speak about, I guess, the most important and also the most difficult part of our journey, which we are still working on: the cultural change we are going through right now, regarding a DevOps operating model. When we moved to this platform, there was suddenly a new separation between the tasks of several teams. Beforehand, we handed the application to some operations team, and they operated the virtual machine, the application server, and also the application for us.

B: But now, on this platform, those guys do not operate our containers anymore. They just operate the platform, and that is where they stop. So now there is a gap which has to be filled, and that is also a learning we are really recognizing right now: we have started to introduce DevOps into the processes, so that the developers actually start to operate the software and, in the end, become more responsible for the application.