A: You know DHL Express, you probably know it. We deliver to 220 countries worldwide, and with Global Forwarding we also offer air and ocean services that are leading worldwide. So it is a company of five hundred and ten thousand people, and our current challenge is how to make it fly: with thousands of applications, hundreds of technologies, and people all over the world, how do we get them running quicker? That is what we have to do.
A: We have a big challenge, because obviously the markets we operate in are changing rapidly. Maybe not as much as for banks or insurance companies, but we are also impacted by digitalization: our competitors are introducing new digital products, our customers, like Amazon, are becoming our competitors, and a lot of small startups are really trying to bite into our business.
A: That is our challenge, and that is why in 2016 we started thinking about what to do about it from an IT perspective: how to really be faster, more flexible, and fit innovation into, you know, these big yellow colors. Our approach, a pretty old concept but one that works well for us, is to bring the change in via a bimodal concept, because in such a big environment it is just not possible to come and say, "Alright guys, from today everything changes."
A: It was 2016 when we started looking for the right technology. We wanted to find a container-based platform with zero downtime, something very scalable and set up for the future. But well, the technology is good, yet it is just part of the solution. You need to adapt all the processes around it, really adapt your operating model, automate your deployments, and also introduce a better commercial model for selling it internally.
A: Well, it is a little bit more difficult to get the processes around it updated, but the most difficult part in the end, what stays, is changing people's minds. With containers you have so many new concepts: a new architecture, a new way of developing software, and especially bringing all the teams and people together. In the end, that stays the hardest part. Alright, so this is how we started; that is our "why."
A: Well, this is a little bit like the big picture of our model. The most important point is that we are not just trying to have OpenShift implemented as, you know, containers-as-a-service. What we try to do is have an entire end-to-end ecosystem for our developers, so that when projects come to our platform they get a full experience.
A: They just need to come with the source code, and all the rest is set. Let me explain how it is done. At the bottom we decided to have an on-premise cluster, or at least to start with one. The advantage of an on-premise cluster is that you can easily modernize existing applications, because they are just on the same infrastructure.
A
You
know
sitting
in
the
data
center
next
next
to
each
other,
so
we
have
a
physically
six
cluster
with
physical
servers
as
a
bottom
of
our
OpenShift
system,
and
then
we
have
of
course
open
shipped
layer
and
I
was
personally
fighting
quite
a
long,
a
lot
internally
to
make
an
open,
shipped
cluster
being
just
one
cluster,
many
people,
you
can
make
different
choices.
You
know
splitting
pest
and
production
on
having
one
plus
than
another,
our
we
were
able
to
succeed
and
to
persuade
people
just
have
a
one
cluster
with
everything
inside.
A: But the deal was: okay, you can have one cluster, but we still need to separate production from test, and that we do with networking zones. Here on the picture you can see we actually have two zones on the cluster: one internal zone for applications that only connect to the existing data center infrastructure, and a DMZ zone built into the cluster for applications that are going to be exposed to or connected from the internet.
A: So that is, let's say, the technology base. What was, and is, important for us is to also put the whole DevOps toolchain on top, a standardized DevOps toolchain block, to practically give the projects an end-to-end experience. They just come to the platform, and they not only have a place to run containers but also a place where they have all the tools for the entire deployment chain. And that is an important concept for changing a big company at scale.
A: So that is the power of the platform: you give people a block that is standardized as much as possible, and then projects can very quickly jump onto this model, and then you are scaling up and introducing the change very quickly. On our cluster we have people from all over, from all our divisions, running their applications, all mixing on this one cluster, and that gives good simplicity in operating it.
B: So, as Tomic mentioned, one of the things we did here was integrate the platform with the existing IT infrastructure, and any time you do that, I think one of the biggest challenge areas is on the networking side. People are working with firewall rules, they want to do things in a certain way, and there are certain processes.
B: And so, while you want to bring about change and change certain things, there are a lot of trade-offs here, and we had quite a lot of discussions, as you remember, Tomic, about the best way and the best approach to do this. What we decided to do at the end of the day was to not change the way the current IT does networking, firewall rules, or any of that, in regards to the OpenShift platform. So what that meant is we basically have two zones.
B: We have a DMZ, as the Barclays guys also mentioned they have, as do most companies whose internet-facing applications are under regulatory rules and require different levels of security there, and we obviously have the internal zone. As Tomic mentioned, we have the same cluster, so we are running production and test on the same OpenShift cluster. What you see here in the top box is basically the management zone.
B: So we created a separate zone for management, and we opened up the ports for the master servers so that they can communicate with the nodes in the DMZ test and production zones, as well as in the internal ones; those are the masters, in red. In the middle, in yellow, you see the management of the monitoring, logging, and metrics, all of that.
B: So we have a separate set of nodes there for that, and then on the right, in purple, are basically the infra nodes: that is the OpenShift router, or the proxy, so it is taking requests for Kibana, for metrics and logging; that is why it is there. And then on the left we have an Ansible host for deployment, and CloudForms, which I think Tomic will address a little bit when he talks about what we are doing with chargeback and how we are using that.
B: So basically that is opened up, and then I think what is interesting is how we are doing the application traffic. In each of these zones you will see two infrastructure nodes, again running the OpenShift router, because there is no east-west traffic. Where you see these firewalls, there is no east-west traffic going through; there are hard firewalls between the zones.
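The hard-zone rule described here, blocking everything between zones unless a specific rule set is opened, is essentially a default-deny policy. A minimal sketch of that idea follows; the zone names and example rule set are hypothetical illustrations, not DHL's actual firewall configuration:

```python
# Minimal model of the default-deny zoning described above: traffic
# between zones is blocked unless an explicit (src, dst) rule exists.
# Zone names and the example rule set are hypothetical.

ALLOWED_RULES = {
    ("management", "dmz"),       # masters reach DMZ nodes
    ("management", "internal"),  # masters reach internal nodes
    ("internal", "datacenter"),  # internal apps reach existing DC services
}

def is_allowed(src_zone, dst_zone, rules=ALLOWED_RULES):
    """Default deny: only explicitly opened (src, dst) pairs pass.
    Traffic within a single zone is always allowed."""
    if src_zone == dst_zone:
        return True
    return (src_zone, dst_zone) in rules

# A DMZ app cannot reach an internal app: no east-west rule exists.
print(is_allowed("dmz", "internal"))  # False
```

The point of the model is that adding OpenShift did not change the rule mechanism itself: a new flow still means opening one explicit rule, exactly as before.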
B: So an application running in the DMZ cannot communicate with anything outside of it unless you open up specific rule sets. One of the challenges, at least with the proxy, is that if you just had a proxy that could access all these zones, then somebody accessing a DMZ application...
B: ...a DMZ application could potentially access an application internally, so someone from the internet could potentially do that. That is why we created these hard-set zones, if you will. So this is one way to do it, and the nice thing here is that the firewall rules, at least from how we at DHL were viewing it, do not need to change the way they work. They work exactly the way they do today.
A: Right, thank you, Keith. Now I would like to come back a little bit to the processes, because, you know, technically implementing an OpenShift cluster is fun for a couple of months, but it is just the beginning of the journey. The question is: okay, I have the cluster, but how can I scale? How can I get quicker? One of the points I already mentioned was to give people fast processes and enable fast application builds, not just the platform.
A: So in our case we are using our own internal Git repository to keep the source code and also the configuration, and we are managing the entire CI/CD pipeline with our internal Jenkins. There was also a big discussion: shall we do it with Jenkins on OpenShift, or an external one? I think the good choice was to use our own Jenkins, external to OpenShift, for the build process.
A: We use Artifactory to pull the dependencies, and a big advantage for me is that we are giving people the base images from Red Hat. So we do not have to take care of updating and maintaining images, we just get them from Red Hat, and that is a very big advantage of OpenShift as a system. In the test stage we are using a Fortify scan and a SonarQube scan.
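The flow just described, build, scans, automated tests, then deploy, amounts to a pipeline of gated stages where a failed quality gate stops the deployment. Here is an illustrative sketch of that pattern; the stage names and gate results are made up for the example and are not DHL's actual Jenkins pipeline:

```python
# Sketch of a gated CI/CD flow: run stages in order and stop at the
# first failing quality gate, so a bad scan never reaches production.
# Stage names and results below are hypothetical.

def run_pipeline(stages):
    """Run (name, gate) pairs in order; each gate returns True on
    success. Stops at the first failure. Returns (reached_end, log)."""
    log = []
    for name, gate in stages:
        ok = gate()
        log.append((name, ok))
        if not ok:
            return False, log
    return True, log

# Example: the SonarQube gate fails, so later stages never run.
stages = [
    ("build", lambda: True),
    ("sonarqube_scan", lambda: False),  # quality gate fails here
    ("fortify_scan", lambda: True),
    ("deploy_to_production", lambda: True),
]
reached, log = run_pipeline(stages)
print(reached)  # False
```

In the best case, as described below, every gate passes and the same mechanism carries a change fully automatically up to production.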
A: We want to make really sure that the containers going onto the platform, and remember we are mixing everybody on the one platform, are of pretty good quality. So we require people to use Fortify and SonarQube, and we also connected Selenium and UFT to the platform for functional tests, so they can do automated testing. In the best case they can run the entire pipeline fully automatically, up to production.
A: Traditionally, you need to register a change two weeks before the change, you know, a really traditional enterprise change process, and the question was how to make it not stand in the way. We were able to create a fast change process. We use ServiceNow for change tracking, and we connected it practically via its API, so whenever we deploy to production on OpenShift, we automatically create a change ticket through a very disciplined pipeline.
A: We ensured to our change management colleagues that no change management rules are compromised. We are essentially replacing the manual approvals, which used to apply and still apply for non-OpenShift systems, with automatic approvals built into the pipeline, and that lets the whole system work fast even in a traditional environment like ours. We can deploy at the speed of the project; they can deploy every five minutes, depending on how their pipeline is working. So that gives people the entire story.
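Wiring a deployment to change tracking, as described, typically means calling the tracking tool's REST API from the pipeline at deploy time. The sketch below only assembles such a change record; the field names, values, and helper function are hypothetical and are not DHL's actual ServiceNow integration or the real ServiceNow schema:

```python
import json

def build_change_record(app, version, environment, pipeline_url):
    """Assemble a change record for an automated, pre-approved
    deployment. Field names here are illustrative placeholders,
    not the actual ServiceNow schema."""
    return {
        "short_description": f"Automated deploy of {app} {version} to {environment}",
        "type": "standard",         # pre-approved change type
        "state": "implemented",     # created at deploy time, not weeks ahead
        "u_pipeline_url": pipeline_url,
        "u_approval": "automatic",  # pipeline gates replace manual approvals
    }

record = build_change_record("shipment-tracker", "1.4.2",
                             "production", "https://jenkins.example/job/42")
body = json.dumps(record)  # this JSON would be POSTed to the tracking API
```

The key design choice is that the ticket records an already-gated deployment, so the audit trail survives while the two-week lead time disappears.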
A: One more maybe interesting point for some of you: I work for DHL IT Services, which is like the internal IT company within the group, so I am selling OpenShift to my own business units, and the question was, you know, how to sell it. Here is a little idea of what we use.
A: We wanted to have two things at the same time: to sell OpenShift with a pay-per-use, cloud-like feeling, but on the other side, you know, if somebody uses pay-per-use cloud a little bit too aggressively, the spend can get quite high, and you sometimes really do not know how to control it. So the idea here is to introduce what we call an application box.
A: So people buy, you know, a quota on the cluster. Say I want four cores and sixteen gigabytes of RAM: within this box, within this physical quota on the cluster, projects can deploy and consume the quota as they want, but they will never breach the maximum size of the box; in the example, four cores is the maximum. And on the other side, to make sure that people size properly and buy proper boxes, we always charge them a minimum of twenty percent.
A: So don't go crazy, you know, coming and saying, "Give me a thousand cores, because it is pay-per-use, right?" and then using two cores; that makes no sense from a capacity management perspective. They are actually charged somewhere between the minimum charge and the maximum charge represented by the box. So if some of you are in a similar situation, needing to resell OpenShift to internal or external customers, it is maybe an interesting idea.
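The application-box pricing just described, pay for actual use but never less than 20% of the box and never more than the box, can be written down as a small formula. A sketch with hypothetical unit prices; only the 20% floor comes from the talk:

```python
def monthly_charge(box_cores, used_cores, price_per_core, min_fraction=0.20):
    """Charge for an 'application box': pay-per-use within the box,
    floored at min_fraction of the box size and capped at the box
    size. Unit prices here are hypothetical."""
    billable = min(max(used_cores, min_fraction * box_cores), box_cores)
    return billable * price_per_core

# A 4-core box at a hypothetical 10 per core per month:
print(monthly_charge(4, 0.5, 10.0))  # 8.0  (floor: 20% of 4 cores)
print(monthly_charge(4, 2,   10.0))  # 20.0 (plain pay-per-use)
print(monthly_charge(4, 6,   10.0))  # 40.0 (capped at the box)
```

The floor discourages wildly oversized boxes, while the cap keeps the bill predictable, which is the two-sided control the talk describes.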
A: The final two words on our outlook. You know, we are relatively fresh guys on the block, and where we want to expand now is building another cluster, a multi-cluster environment; that is one of our expansions. We are implementing persistent storage as we speak, and we are thinking about disaster recovery, multi-cluster disaster recovery scenarios.
A: We have some concepts ready, and I hope next year to persuade my internal stakeholders to go with another cluster in the cloud and really run some hybrid scenarios. I saw that some of you are already doing that kind of thing, so, you know, catch me on the break and give me some good feedback, because we are here to learn from each other. So that is our little outlook, and with that I think we are coming to the end.