From YouTube: Case Study: OpenShift at SIX Group
A
Yeah, and what are we doing? We engineer and also operate the SIX private cloud based on Red Hat OpenShift. Currently we are running on 3.9, looking forward to the release of 4.x, and we also do the onboarding for our customers. So we really have a big engagement with our internal customers to get them onto our cloud.
A
We also have to integrate a lot of things into the existing SIX environment, which is really challenging, as we will show you afterwards. We also drive the agile way of working inside SIX. On top of OpenShift we do a lot of stuff: we develop extensions and bring OpenShift into an enterprise-grade environment such as SIX's.
So what is SIX? We're not the rental car company. We develop and operate the infrastructure of the Swiss financial center and the Swiss banks. As an example, the stock exchange runs on SIX.
C
We need to be capable of delivering software fast and safely. We are continuously improving and learning, and we are getting ready for the future. The container platform of CIT is the next generation of IT infrastructure, providing instant software deployment, pay-as-you-go, self-service, integrated monitoring, instant provisioning, automation, fast time to market, cloud-native readiness, and zero-downtime deployments. Join us to build and run the future together.
A
All right, let's talk about the timeline we had within SIX. We started in 2017 using OpenShift and put a cluster into non-production. The funny thing about that is, we started with one guy and one project manager, so it was a one-man show. So if Oliver ever sees this video: credits to you, it was really cool work. Then in 2018 I joined the company, and Marc, our software engineer, who will also give a talk about the operator later on, joined the team too.
A
Then we had the first go-live on OpenShift, after January I think. The next thing was that we really had to improve, or extend, the cluster inside SIX. OpenShift is a really great product, but we have to extend it to fit it into our environment. So Marc created a project provisioner that automatically creates projects on OpenShift. In August we had the network zone implementation.
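A minimal sketch of what such a project provisioner might do (all names and fields here are hypothetical illustrations, not SIX's actual code): it takes an ordered project definition and emits the namespace plus the default objects a new project should start with.

```python
# Hypothetical sketch of a project provisioner: given a requested
# project definition, build the manifests a real provisioner would
# create via the OpenShift/Kubernetes API.

def provision_project(request):
    """Build the manifests for a new project from an order request."""
    name = request["name"]
    manifests = []

    # The project (namespace) itself, tagged with ownership metadata
    # so later tooling knows whom to contact and whom to charge.
    manifests.append({
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": name,
            "annotations": {
                "example.org/business-unit": request["business_unit"],
                "example.org/contact": request["contact"],
            },
        },
    })

    # A default quota, so every project starts with a charged limit.
    manifests.append({
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "default-quota", "namespace": name},
        "spec": {"hard": {"cpu": request.get("cpu", "2"),
                          "memory": request.get("memory", "4Gi")}},
    })
    return manifests
```

A real provisioner would post these to the API server; the point is that project creation becomes a function of the order data rather than a manual step.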
A
We have about 30 network zones, I think, so it's quite fun to implement all of those and to get in touch with the security guys, the firewall guys, and so on. The next thing was the Tufin integration. Maybe you are aware of it: it's firewall and security tooling, and we did our own integration between OpenShift and Tufin.
B
So network segmentation is one of the things where you need to do a lot of talking with traditional firewall people, talking with them about SDN, and getting the security people involved. One of the things a lot of people often ask is: I'm okay with going this way, but I need an audit trail. Who did what, when, and how? And also things like:
B
I want to have an audit entry when something is being denied. And when you start talking about onboarding, you don't want everything done manually; you already have existing approval processes within the company. Dominic mentioned the Tufin integration we did; the whole company is doing
B
approval of who can talk to whom through Tufin. OpenShift with network policies has a very nice, automated way to steer or manage traffic within the cluster, but OpenShift doesn't come with any kind of approval process. So what we did is hook, or glue, these things together, and the procedure for letting apps talk to the outside is the same as it would be on a VM, in terms of approval processes. We just hooked into there.
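The glue described here can be sketched like this, assuming a hypothetical approval record format (this is not the actual SIX/Tufin integration): only flows marked approved in the workflow result in a Kubernetes NetworkPolicy; denied flows simply produce nothing.

```python
# Sketch of gluing an approval workflow to network policies.
# The approval record format is hypothetical.

def policies_from_approvals(namespace, approvals):
    """Emit a NetworkPolicy per *approved* flow; denied ones get none."""
    policies = []
    for flow in approvals:
        if flow["status"] != "approved":
            continue  # no approval, no connectivity
        policies.append({
            "apiVersion": "networking.k8s.io/v1",
            "kind": "NetworkPolicy",
            "metadata": {"name": f"allow-{flow['name']}",
                         "namespace": namespace},
            "spec": {
                "podSelector": {"matchLabels": flow["pod_labels"]},
                "ingress": [{
                    "from": [{"namespaceSelector":
                              {"matchLabels": {"zone": flow["source_zone"]}}}],
                    "ports": [{"protocol": "TCP", "port": flow["port"]}],
                }],
            },
        })
    return policies
```

Because the policy objects are generated from the approval records, the audit trail lives in the approval system and the cluster only ever reflects its decisions.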
B
Questions like "do you need to talk to anybody outside?" are becoming very important, or even: where should your logs end up for audit purposes? So a project is usually not only this umbrella within OpenShift that matches a namespace in Kubernetes; it usually becomes much more. A very common approach that most people take at first is to just disable automatic project provisioning and set up a template, and then comes the question:
B
Now, where do I handle all that kind of information? Where do I find out which projects are actually there, and how do I look up that information? But also, how do I maintain that over time, when new requirements come into something as simple as the project definition? So last summer people started using operators for that. I guess that's kind of a recurring theme today; operators are everywhere.
B
The benefit is that you can write a piece of software where you integrate your business logic, and at the same time the state is fully controlled within Kubernetes. All the same RBAC rules apply, and you can react very quickly; it doesn't get asynchronous, because you can watch the CRs and just automatically change things. So certain definitions, like the contact details, are just regular fields within the project that even normal users are able to change, but the operator is controlling them.
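The reconcile idea behind that can be sketched in plain Python (field names hypothetical): on every pass, the operator forces the operator-owned fields back to the desired state, while user-editable fields such as contact details pass through untouched.

```python
# Sketch of one reconcile pass of such an operator. Which fields
# are operator-owned is hypothetical here.

OPERATOR_OWNED = {"quota", "network_zone"}

def reconcile(desired_spec, observed):
    """Return the corrected object state for one reconcile pass."""
    corrected = dict(observed)
    for field in OPERATOR_OWNED:
        if observed.get(field) != desired_spec.get(field):
            corrected[field] = desired_spec[field]  # revert drift
    return corrected
```

A real operator would run this inside a watch loop on the custom resource, which is what keeps the reaction synchronous with changes rather than polling-based.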
B
We control the resource limits people get, because this is how they are being charged, and we added certain validations; at some point we started sending them emails. Over time there has been even more integration, and now you can just toggle whether you want to have your own dedicated Tiller in your project, so you're able to deploy Helm charts either with the über-Tiller or without it.
B
This is now going away, but for the current stable version of Helm this worked quite nicely. In the end, this also allowed us to integrate the onboarding process into the overall, company-wide order portal, because the order portal just had to talk to the OpenShift API. It got a very narrowed-down service account with RBAC on the CR of the custom project definition, which the operator picks up and then applies onto the various objects. From there on, people can order a new project.
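A narrowed-down service account like that might carry a Role of roughly this shape (API group and resource name are hypothetical): the portal can only touch the custom project definition, and the operator does everything else.

```python
# Sketch of the only RBAC the order portal's service account needs
# (hypothetical API group and resource name).

def portal_role(namespace):
    """A Role limited to the custom project definition resource."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "order-portal", "namespace": namespace},
        "rules": [{
            "apiGroups": ["projects.example.org"],
            "resources": ["projectdefinitions"],
            "verbs": ["get", "list", "create", "update"],
        }],
    }
```

Keeping the portal's permissions this narrow means a compromise of the portal cannot touch anything beyond the CRs the operator already validates.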
B
There is not only one operator running and doing things like that; there are, as I mentioned, operators for network policies, and operators managing all the egress services through which traffic leaves. Every kind of connection outside of the cluster needs to go through a control point, and one of the ways to do that is using egress services.
B
The way they are being provisioned is also through operators: people get a stripped-down definition with lots of defaults, like it was mentioned in the previous talk, so you can really steer your environment towards something. But operators were not the only thing that was done at SIX to integrate things on top of OpenShift.
A
Of course, we had to integrate some monitoring and metrics stuff within the SIX environment, which is strictly regulated, so we had to engage with the existing environment. Here is an architecture overview. On the left side you see the endpoints the customer comes in through. Here is Grafana, which has integrated LDAP authentication.
A
It goes to the Trickster service, which is a really cool piece of software that caches the Prometheus HTTP API, which makes everything pretty fast inside Grafana. After that, Trickster goes to Thanos Query. Thanos is also a great product, which gives you the possibility to federate the data from Prometheus, query those services, and also do the evaluation on the Thanos sidecar.
A
After that, we also gave our customers the possibility to access the data not only through Grafana but also directly on Thanos. So we had the challenge that Thanos and Prometheus don't have an authentication layer. We put Keycloak in front of it, so that everything is authenticated in the end.
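Because Trickster and Thanos Query both speak the Prometheus HTTP API, any client in this chain only ever builds standard `/api/v1/query` requests. A sketch (the endpoint host is hypothetical; the response shape follows the documented Prometheus HTTP API):

```python
# Sketch of a client against the Prometheus-compatible API exposed
# by Trickster / Thanos Query. Host names are hypothetical.
import json
from urllib.parse import urlencode

def build_query_url(base, promql):
    """Standard Prometheus instant-query URL."""
    return f"{base}/api/v1/query?{urlencode({'query': promql})}"

def parse_instant_result(body):
    """Extract (labels, value) pairs from a Prometheus API response."""
    data = json.loads(body)
    if data.get("status") != "success":
        raise RuntimeError("query failed")
    return [(r["metric"], r["value"][1]) for r in data["data"]["result"]]
```

This API compatibility is what makes the caching layer transparent: Grafana talks to Trickster exactly as it would talk to Prometheus directly.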
A
What we also had to do, and this is the most important thing: how do we alert? The Alertmanager was missing. In SIX we have a strong process for how we alert in case of an impact on a service. So here is a big picture of how we do the alerting workflow. The developer has the opportunity to label his objects; that can be a pod, a deployment configuration, a persistent volume claim, and so on. So he labels his stuff to say: hey, I want to enable alerting.
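The opt-in step can be sketched like this (the label name is a hypothetical placeholder): developers mark their objects, and the alerting pipeline only picks up the marked ones.

```python
# Sketch of the label-based opt-in; the label key is hypothetical.

ALERT_LABEL = "example.org/alerting"

def alerting_enabled(obj):
    """True if the object carries the alerting opt-in label."""
    return obj.get("metadata", {}).get("labels", {}).get(ALERT_LABEL) == "enabled"

def select_alertable(objects):
    """Keep only objects whose owners opted in to alerting."""
    return [o for o in objects if alerting_enabled(o)]
```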
A
The next thing we do is relabel and enrich those metrics with, for example, the business unit, the service codes, and so on; we enrich all those metrics based on his labels. After that, Prometheus evaluates those rules. If there is an alert, of course, we fire it. Next, and this is a really important thing, we revalidate that all those labels are valid, so we can send the alert in the end to the existing alerting system. And in the end the operator, or the developer as well, gets the alert.
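The enrich-and-revalidate steps can be sketched like this (field names hypothetical): metrics get business metadata copied in from the object's labels, and an alert only leaves the cluster once all required labels are present, so the downstream alerting system never receives incomplete alerts.

```python
# Sketch of the enrichment and validation steps of the alerting
# workflow. Which labels are required is hypothetical here.

REQUIRED = ("business_unit", "service_code")

def enrich(metric_labels, object_labels):
    """Copy business metadata from the object onto the metric."""
    enriched = dict(metric_labels)
    for key in REQUIRED:
        if key in object_labels:
            enriched[key] = object_labels[key]
    return enriched

def validate(alert_labels):
    """An alert may only be forwarded if all required labels are set."""
    return all(alert_labels.get(k) for k in REQUIRED)
```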
A
That means we had to improve, or let's say customize, things for short-lived entities such as containers, and bring them into our existing alerting system. As I said, don't reinvent the wheel: we have a lot of departments, we even have a whole floor just for alerting, so you have to get in touch with them. Of course, another good thing, as we heard before, is self-service: people do not have to get in touch with us to enable their alerting, which is pretty cool.
A
They can just enable it on their own. We have predefined alerting rules, and we have also open-sourced them inside SIX, not publicly, so they can even send us a pull request to improve an alert. Or, for example, if they have a metric-based alert which they want to integrate, it can be reused by another project and team, which is pretty cool. So don't repeat yourself.
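One of those shared, predefined rules might look roughly like this, expressed here as the dict equivalent of a Prometheus alerting-rule entry (the threshold and names are illustrative, not SIX's actual rules):

```python
# One predefined alerting rule, as the dict form of a Prometheus
# rule-file entry. Threshold and names are hypothetical.
CONTAINER_RESTARTS = {
    "alert": "ContainerRestartingOften",
    "expr": "increase(kube_pod_container_status_restarts_total[1h]) > 5",
    "for": "10m",
    "labels": {"severity": "warning"},
    "annotations": {
        "summary": "Container {{ $labels.container }} restarts too often",
    },
}
```

Keeping rules like this in a shared internal repository is what makes the pull-request workflow possible: one team's improvement immediately benefits every project reusing the rule.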
A
Another good point is that we do not have lock-in on one alerting system. That means if we ever change the old system, or the current system we have in SIX, we can just change our configuration and send the alerts to the new system, which is pretty cool. And of course, as I said before, self-service is really important for us, so the developer can enable it on his own and can even contribute to the alerts, in the end, of the whole company.
B
So things then grew, and people adapted as they went on. For example, as it became clear that there are so many network zones to be managed, things needed to be automated, so operators for a lot of the network management stuff became important as well.
A
Yeah, some words for the end. Actually, we are hiring, so get in touch with us. And of course we are always trying to contribute to open source and things like that; within SIX it's not that easy, but we are really working on open-sourcing things such as the project provisioner. But yeah, join us on our path to OpenShift 4. Thank you very much.