Case Study: OpenShift at Macquarie
I came in on a Thursday, so that's why I'm not very jet-lagged still, and I got to enjoy the weekend here. I'm Jason O'Connell and I come from Australia, from Macquarie Bank. I presented at the OpenShift Commons Gathering in Boston two years ago, and at that time we were just migrating our first applications into production, so our whole focus was getting into production and making sure we had everything stable and running after that.
I just want to talk to you today about how we industrialized OpenShift and scaled it out as an offering for everyone. First, a bit about Macquarie Bank: there's Macquarie Group, which is a large global financial organization, and Macquarie Bank, which is a retail bank in Australia. We're a digital-first bank. That means we've got no branches; our customer engagement is via our digital channels, so digital is very important for us.
I've just got a few screenshots. This is our personal banking site; we also have business banking and wealth, which have a lot of nice features. I won't have time to go through them all. And this is the mobile app: you can see it recognized when I came to Spain. There's a lot of little personalization like that, and we also have a nice transaction search.
In the last year our usage has grown a lot. We're optimized, I'd say, for stateless applications, so generally we onboard microservices. We've got a lot of legacy as well, but we focus on a particular use case, which is stateless applications. Everything runs on AWS, and we rebuild our clusters every 90 days. This means we build a new cluster from scratch, where we can patch and upgrade, then reinstall all the applications across it. We have two clusters running, and then we cut traffic across; it's mandated.
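That rebuild-and-cutover pattern is worth spelling out. Below is a minimal sketch of the traffic shift between the outgoing cluster and the freshly built one, assuming a hypothetical load-balancer weighting call (`set_cluster_weights` is a stand-in, not a real API):

    import time

    def set_cluster_weights(blue: int, green: int) -> None:
        # Stand-in for a real load-balancer / weighted-DNS API call.
        print(f"routing {blue}% of traffic to old cluster, {green}% to new cluster")

    def cutover(steps=(10, 25, 50, 100), pause_seconds=300):
        """Shift traffic from the old (blue) cluster to the freshly
        rebuilt (green) cluster in increments, so a bad rebuild can be
        rolled back before it takes all the traffic."""
        for pct in steps:
            set_cluster_weights(blue=100 - pct, green=pct)
            time.sleep(pause_seconds)  # watch dashboards/alerts between steps

    if __name__ == "__main__":
        cutover(pause_seconds=0)  # skip the pauses for a dry run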
In terms of scaling the platform, I've just got a graph here of the cost per application. This isn't the cost of running the infrastructure, but the cost of supporting and migrating each application, for the platform team that I run, versus the number of applications.
When we first went live in production, we were getting quite good at onboarding new applications, so our cost was coming down. We had some good automation which we could replicate, but things started slowing down as we onboarded more and more teams. Initially, we were working together with product teams to onboard their applications; later, as we expanded out to the organization, we were dealing with teams that aren't even in the same building, let alone the same country.
These were teams that are less, let's say, mature in their understanding of Docker; everything was new for them. What that meant is we slowed down a lot: we had to support applications in production, we had to support onboarding new teams, and we had to refine our scripts. So naturally things can slow down like this. This is a diseconomy of scale.
What we really want is an economy of scale. We want to be able to onboard ten teams, twenty teams in a week, to go from 300 applications to 500 applications seamlessly, with no friction. The platform team should need no extra resources in order to onboard more and more applications. That's what we're aiming for.
Naturally, we need to automate, but automation we had from the start, so automation wasn't necessarily the answer. Initially we used to automate everything per application: automate the deployments and so on, so when we onboarded a new application we'd need new automation scripts. A lot of teams would copy and paste, and eventually there are people that don't even know how those scripts work; they get a lot of problems and we need to support them. So what we really want is reuse; we don't just want to automate.
In order to do that, we had to standardize. Rather than everyone doing things differently, we started to say you have to do things the same way. Although all teams think that they're doing things differently, we actually realized that 90% of the applications we run are Spring Boot microservices, and they're actually very, very similar. So we had to force teams to standardize.
You also need a clear scope for your cluster. We focus on stateless microservices; that means we give no persistent volumes to any teams. That's just the scope of our cluster, and it makes managing the cluster very simple. For your own platforms you might have a different scope on offer; for us, we narrow it, and that allows us to scale. And naturally, we need a lot of self-service for teams.
Now, these services I call capability services. These are things like secrets management, which we're migrating to Vault; chatbots; and CI/CD with Jenkins and Knative. These are core capabilities, and on OpenShift it's very easy to install some of these tools in five minutes, and development and product teams want to do this. But that would break the principles from before: it means we'd lose standardization, lose control, and things can become messy. So we say that any capability service, our team runs, and we build it properly.
The same goes for our deployment and release automation. Like I said before, teams used to run their own deployment scripts, and things got very, very messy. So we said there's going to be one way that you do deployments, and we're going to write those scripts. What drove that is that early on, when we upgraded from OpenShift 3.5 to 3.7, there were some breaking API changes, and we needed 20 teams to update their scripts; that was holding up our entire upgrade.
So let's see how we fixed it. I go into our OpenShift portal. This is the self-service portal that we built; you can see it looks a lot like OpenShift, because we use PatternFly, which is the same framework OpenShift itself uses. This is what all developers use to interact, do deployments and run self-service functions. You can see the sort of self-service functions we've got: user access management, deployments, application onboarding, vulnerability scanning, pulses, synthetic testing, capacity management. Everything self-service goes through this single portal.
What I want to do here is look at that broken application. It's deployed here, and we just called it 'broken'; to make a change, what I want to do is branch it. Now, you can see it's not just one application: usually microservices work together to form what we call a product, so we never deal with an application individually.
If you were to do this in production, you could branch the current production environment and bring it into test. I've just called the new branch environment 'feature-jason-fix', and what's happening now is we're getting an exact copy of that environment. If we did a copy of a production environment, it would be exactly the same, except the config would be pointing to non-production.
So it's just running through and running the deployment. I noticed earlier, when they asked everyone to put up your hands if you like Jenkins or not: we had a lot of trouble with Jenkins, so we got rid of it. We do all the deployments with this, our own custom tool. All of these functions have APIs, so for end-to-end pipelines developer teams can integrate with this; they don't need to use the UI.
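Since the portal's functions are API-driven, a team's pipeline could call them directly. The endpoint shape and URL below are hypothetical, just to show the idea:

    import requests

    PORTAL = "https://portal.example.internal"  # hypothetical portal URL

    def deploy(product: str, environment: str, token: str) -> str:
        """Kick off a deployment through the self-service portal's API
        instead of the UI -- the endpoint shape is illustrative only."""
        resp = requests.post(
            f"{PORTAL}/api/products/{product}/environments/{environment}/deployments",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["deploymentId"]  # poll this ID for status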
At last you can see it deploying into OpenShift here. Now, the cool part is that an entire environment is defined in Git in YAML files, so we've created a branch automatically in Git, and what we want to do is update these files to make the fix. You can see you've got a file per application, and if you look at one of them, they're quite simple; a production application isn't much more complicated than this. We try to default everything, so teams don't explicitly define or override anything unless they need to.
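That defaulting is what keeps the per-application files small. A minimal sketch of the layering, with made-up keys and values:

    import copy

    # Platform-owned defaults: applied to every application unless overridden.
    DEFAULTS = {
        "replicas": 2,
        "resources": {"cpu": "500m", "memory": "512Mi"},
        "healthcheck": {"path": "/health", "initialDelaySeconds": 30},
    }

    def effective_config(app_file: dict) -> dict:
        """Deep-merge a team's minimal per-app definition over the defaults."""
        def merge(base, override):
            out = copy.deepcopy(base)
            for key, value in override.items():
                if isinstance(value, dict) and isinstance(out.get(key), dict):
                    out[key] = merge(out[key], value)
                else:
                    out[key] = value
            return out
        return merge(DEFAULTS, app_file)

    # A team's whole file might only say what differs from the defaults:
    print(effective_config({"name": "broken", "image": "broken:1.2.3", "replicas": 4}))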
Now I go back to the deployments after making that change, and what we should see pop up here is the original deployment I did when I branched earlier, and then a new one being kicked off. It got kicked off from a webhook: as soon as I made the change, it synced up and applied that change, and that change alone, into that environment. We don't use this in production, of course, because you've got some more controls to adhere to, but in non-prod it's very powerful.
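A minimal sketch of that webhook-driven sync, assuming a common Git-server push payload and that the handler runs with the repository checked out; applying only the changed files is what keeps the change, and that change alone, in sync:

    import subprocess
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def on_push():
        """Git webhook handler: apply only the files touched by the push,
        so one edited app definition redeploys one application."""
        payload = request.get_json()
        changed = {f for commit in payload.get("commits", [])
                     for f in commit.get("modified", []) + commit.get("added", [])}
        for path in sorted(changed):
            if path.endswith((".yaml", ".yml")):
                # 'oc apply' is idempotent: unchanged resources are left alone.
                subprocess.run(["oc", "apply", "-f", path], check=True)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)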
It tells you the change that we made. As you can see, it says it's going to redeploy that application and apply that config, because of the change we made. It means that everything is kept in sync between what's in your definitions in the YAML files and what's actually in the environment. And we can go into OpenShift here and see the new pods have just come up.
So that's all great: we made a fix, but now we need to apply the fix to the actual customer traffic. You can see it's not applied here yet, because I've created a whole new environment. For that, we do canary releases with a tool we've built on top of Apigee; hold on, this is going a bit quick. In this tool here we can say we want to move from one backend to another backend.
A
We
can
move
by
probability
for
a
canary
release,
which
I'll
talk
more
about
later,
we're
just
going
to
flick
all
the
traffic
across
here
in
the
future.
All
this
ability
to
canary
release
we're
just
moving
into
sto
away
from
Apogee
it's
much
cleaner,
using
operators
and
using
this
do
for
this,
but
we've
been
using
this
ability
for
two
years
with
Arthur
G
and
it's
allowed
us
to
release
much
faster.
So,
let's
see
if
the
change
work
and
there
it
worked,
so
you
can
see
the
URL
at
the
top.
Is
the
same.
A
A
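On Istio, that move-by-probability is expressed as route weights. A sketch of the kind of VirtualService involved, generated here in Python with hypothetical host and subset names (the subsets themselves would be defined in a DestinationRule):

    import json

    def virtual_service(host: str, stable_weight: int) -> dict:
        """Build an Istio VirtualService that splits traffic by probability
        between the current ('stable') and new ('canary') backends."""
        return {
            "apiVersion": "networking.istio.io/v1beta1",
            "kind": "VirtualService",
            "metadata": {"name": f"{host}-canary"},
            "spec": {
                "hosts": [host],
                "http": [{"route": [
                    {"destination": {"host": host, "subset": "stable"},
                     "weight": stable_weight},
                    {"destination": {"host": host, "subset": "canary"},
                     "weight": 100 - stable_weight},
                ]}],
            },
        }

    # Start at 90/10; "flick all the traffic across" is stable_weight=0.
    print(json.dumps(virtual_service("payments", stable_weight=90), indent=2))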
This is great because Macquarie has a lot of channels and a lot of partners connecting through to these banking APIs, so we don't want to make a release and affect every single partner at the same time. That's what happened five years ago, when you couldn't do releases often because you had to line up your partners and get them to agree on a date. Now we can release any time, so I could bleed traffic across on our mobile channel.
We also have open APIs, open banking APIs, and a partner there might want to make a change. They could make that change on the same day if they wanted: they've done their testing, they go straight in, they know they're only impacting themselves, and we can remove the old namespace and bleed the other traffic across. So we've got full capability to bleed traffic.
Our lab environments are production, except they're cut down in terms of the users that can use them: only the digital team can use these environments. But it's still real money; a payment is still a payment. That allows a lot of experimentation in those environments, and teams can spin up as many lab environments as they want for different features.
Beta environments are opened up to staff and some public beta testers, and we get a lot of feedback from our internal staff on new beta features, or on any issues that have regressed. So you actually promote through from test, lab and beta into production: the full set. You'll probably hear about GitOps; I think it's getting really popular.
Everything we do is GitOps-based. I wanted to demo it, but I'm not going to have time. Our user access management is a pull request, done via that portal; resource management, application onboarding, everything is done with pull requests. The nice thing about that is that every team can then attach approvals to it, by approving the pull request, and we get full auditability of everything.
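A sketch of what access-as-a-pull-request can look like once the PR merges, assuming a made-up file schema: the portal raises the PR, approvers approve it, and a job like this applies the result to the cluster's RBAC:

    import subprocess
    import yaml  # PyYAML

    def sync_access(access_file: str, namespace: str) -> None:
        """Run after an access pull request merges: grant each user listed
        in the file their role. The file layout here is illustrative."""
        with open(access_file) as f:
            entries = yaml.safe_load(f)  # e.g. [{"user": "jason", "role": "edit"}]
        for entry in entries:
            subprocess.run(
                ["oc", "adm", "policy", "add-role-to-user",
                 entry["role"], entry["user"], "-n", namespace],
                check=True,
            )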
Now, I just want to talk quickly about controls. If you look at a pod, there are a lot of things that surround it: we've got a pod as part of an application, which rolls up into a product; we've got teams that own that pod or the application; you've got user access management; you've got resource management. For us, being a bank, we need controls on all of these things, and you need controls at different levels.
You need different types of controls at each stage. I can't go into all the controls, except to say that if you want a scalable platform, any control you put in needs to focus on developer experience. You need your controls to be frictionless, automated and self-managed. If you block a deployment on a release day for a team and they don't know why, they're going to come to the platform team, to my team, and cause a big noise, and then there'll be a lot of work to get them unblocked.
And there's a lot of work to manage just one control. I've got an example here of controlling resources. This is where a team wants to request CPU and memory, so they all get allocated. We need incentives for them, or disincentives in this case: we say if you're over your capacity, we're going to prevent your deployments; we're locking you down until you bring it back down. That's a disincentive to force them to adhere to the control and ensure that they optimize. We also need chargeback.
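A minimal sketch of that deployment gate, with abstract units; the real control would read live usage and quota from the cluster:

    def deployment_allowed(requested: dict, used: dict, quota: dict) -> bool:
        """Gate a deployment on namespace capacity: if the new request would
        push the team over its allocation, block it (the 'disincentive').
        Values are abstract units, e.g. millicores or MiB."""
        for resource, amount in requested.items():
            if used.get(resource, 0) + amount > quota.get(resource, 0):
                print(f"blocked: {resource} would exceed allocation "
                      f"({used.get(resource, 0) + amount} > {quota.get(resource, 0)})")
                return False
        return True

    # A team at 3500m of a 4000m CPU quota asking for two more 500m pods:
    print(deployment_allowed({"cpu": 1000}, used={"cpu": 3500}, quota={"cpu": 4000}))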
We need them to be able to self-manage, so we give them the ability to auto-clean-up in non-prod. They can say these developer environments are going to be cleaned up nightly, these other environments last longer, and so on. And if they do want more resources, it should be self-approved, so they don't need to involve our team.
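And a sketch of a nightly clean-up under those team-chosen policies, assuming environments map one-to-one to namespaces and carry an expiry the team picked:

    import subprocess
    from datetime import datetime, timezone

    def nightly_cleanup(environments: list) -> None:
        """Delete non-prod environments whose team-chosen expiry has passed.
        Each entry is assumed to look like
        {"namespace": "...", "expires": "2024-01-31T00:00:00+00:00"}."""
        now = datetime.now(timezone.utc)
        for env in environments:
            if datetime.fromisoformat(env["expires"]) < now:
                # Deleting the namespace tears down every app in the environment.
                subprocess.run(["oc", "delete", "namespace", env["namespace"]],
                               check=True)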
They also need visibility of their current usage on a dashboard. We also have email alerting, and we're just building out chatbot alerting, so that when they approach their limits we alert them; they know they're not going to be blocked in production because they're given enough warning before then. So you can see that just for this one control, to make it seamless and frictionless, we needed to build out six different things: six different components and applications, just to manage the control of resource usage. So, just to wrap up.
I saw a quote from Adrian Cockcroft about a month ago, and our principle is the same: the product team should be focusing on everything that's inside the container. They should focus on the business logic and the business value, so that eventually all the code they have to write is business logic. And then the platform team is focused on the developer experience; really, what we're there for is improving developer productivity. So yeah, I think that's it. Thank you.