A: Hello, everyone. This is Simon Cashmore from Barclays, and I'm Anthony Kesterton from Red Hat. What we wanted to do today was give you some ideas about how Barclays are scaling up their OpenShift deployments. It's a very large deployment, and it's been in production for some time now, not only on v2 but also on v3.
A: So, just to give you some context. Barclays is a large bank headquartered in the UK, with about 80,000 people in about 40 countries, over 20 billion in revenue, and about 48 million customers. Interestingly enough, for every pound spent on a credit card in the UK, a third of it is spent on a Barclays credit card. And, interestingly enough, it's quite an old bank: it's been going for three hundred and twenty-seven years. So that means the process debt and the technical debt that's built up here is quite large.

A: You're going to be hearing quite a lot of interesting ideas over the next few days, and new announcements. I have one other announcement to make: I think I've worked out how to beat jet lag. So, anyone a bit jet-lagged today? I know Diane and a few others are. Anyone wake up at half past one this morning? Half past two? Half past three? Half past four? Yeah, it's quite bad.
A: So what you have to do is think of the talk you're going to give the next day. Last night I got as far as "Hello, this is Simon Cashmore, I'm Anthony Kesterton" and I was asleep. So try that tonight if you're struggling with jet lag. Okay, not you, Simon. So with that, what I want to do is hand over to Simon and let him talk a bit more about, first of all, the architecture, and then the do's and don'ts of a very large scale implementation. Simon?
B: So, just to reflect in terms of setting context as well: it's a highly regulated environment, we're working with multiple regulators, and it's kind of risk-averse. So that also sets the context. But I think, for me, there's a big change in the way Barclays is approaching technology. Some very senior executives have started to talk about Barclays as a technology company that does banking, rather than a bank that has a big technology department. So it's a very big shift in mindset about how Barclays is trying to position itself going forward.
B: So, our PaaS. That's our platform-as-a-service offering, and it's based on OpenShift Container Platform, the enterprise version. When we thought about setting up this service, we thought about all the service elements that make up that offering, not just the technology. There are surrounding pieces where we spent as much time creating the service as we did thinking about the technology.
B: We went very early, and I think that was a very, very good decision for us in terms of being able to learn from that deployment, get people used to using containers and platform as a service, and then iterate on it. We started slow, with slow take-up during 2015, but then it grew exponentially. The slide says we're at thousands of pods and containers now; actually it's tens of thousands that we've grown to. It's growing at a rate that we can't keep up with.
B: I think the other interesting thing for me about this slide is that, in a relatively short space of time in Barclays' history, we're already creating more technical debt in terms of v2 and trying to migrate off v2. So that's brought challenges as well in terms of how we do all of this work. Hopefully that's given you a bit of context. I'm now going to talk around how we've architected this, and some of this may be a little bit of an anti-pattern.
B: So I'll just take you through why we've done things as we have. In terms of how we've deployed it: we've deployed in paired data centers, and we build separate OpenShift clusters in each data center. At the OpenShift level they're completely separate in those two data centers, but then we join them at the global load balancing layer, the DNS layer, and we have local load balancers within each data center.
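For illustration only, the two-layer split Simon describes could look something like the sketch below. The GSLB product Barclays actually uses isn't named in the talk, and all hostnames, addresses and backend names here are hypothetical:

    ; Global layer (DNS/GSLB): one application hostname resolves to a
    ; virtual IP in each data centre; the GSLB health-checks both.
    apps.bank.example.  60  IN  A  203.0.113.10   ; DC1 VIP
    apps.bank.example.  60  IN  A  198.51.100.10  ; DC2 VIP

    # Local layer (per data centre, e.g. HAProxy): the DC1 VIP spreads
    # traffic across that cluster's OpenShift router nodes only.
    frontend https-in
        bind 203.0.113.10:443
        mode tcp
        default_backend dc1_routers

    backend dc1_routers
        mode tcp
        balance roundrobin
        server router1 10.1.0.21:443 check
        server router2 10.1.0.22:443 check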
B: That's not quite accurate, actually: the external registry is shared across the two, so that piece is joined up, and the external registry is shared between those environments. There's a slight overhead in that we mandate that everybody has to deploy their applications into both data centers, hence the concept of active-active, which is also very timely in the banking world in terms of resilience and the ability to maintain service.
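The "deploy into both data centers" mandate is easy to picture. As a minimal sketch, with hypothetical cluster context names and manifest, the same application definition is simply applied to each cluster:

    # Apply the same manifest to the cluster in each data centre.
    for ctx in dc1-cluster dc2-cluster; do
        oc config use-context "$ctx"
        oc apply -f myapp.yaml
    done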
B: The bank has traditionally thought about active and passive environments: build a disaster recovery environment and an active environment, and then you fail over at some point. This was the first platform where we really tried to run active-active across two data centers, although it really is at the application layer, and that brings problems a little bit when you think about the different workloads you might put on there. In terms of this architecture, it means we can drain a data center gracefully over a couple of hours and then patch that data center.
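On OpenShift 3.x, that graceful drain maps onto standard oc adm commands, run for every node in the data centre being patched; a minimal sketch with a hypothetical node name:

    # Stop new pods being scheduled onto the node...
    oc adm manage-node node1.dc1.example.com --schedulable=false
    # ...evict its running pods (they reschedule in the other DC or
    # on remaining nodes)...
    oc adm drain node1.dc1.example.com --ignore-daemonsets
    # ...patch and reboot the host, then put it back in service.
    oc adm manage-node node1.dc1.example.com --schedulable=true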
B: So that's very valuable for us in being able to meet regulatory requirements around vulnerability management, patching relatively quickly and easily, and being able to maintain service stability. I think service stability is the key mantra that we drive with: we've got to be able to provide a service that is always up at the platform level. I suppose an example of that: we also deploy separate non-production and production environments.
B: So we actually have four clusters in each data center, and we're also across the globe, in the US and the UK. Now, we had a storage problem in some of our non-production environments. We monitor storage and try to make sure we don't run out. We were rolling out upgrades and provisioning more storage, and that worked fine in one data center; it was all up and running. Then we moved on to the other data center.
B: There we hit a problem, a technical problem around the way the storage was configured, and it meant we were down in one data center. But that didn't really impact anybody, because applications are deployed in both data centers, so that service stability was still there, and that resilience and ability to maintain service is really important. A couple of other things just to mention briefly: we deploy using Chef, so we build using Chef in an automated fashion.
B: If I had my time again, I might not use Chef, but that's the bank's mandated way of managing environments at scale. We sometimes struggle with the cadence of versions and being able to update, and I think some of that is slowed by us not being able to just take Ansible, and using Chef instead. So an interesting choice, but Chef is very good for our infrastructure-as-a-service offerings across the bank as a whole.

B: We use it to manage those environments, so it works well for the bank as a whole, but it does give us a little bit of trouble in the way we manage our environment.
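Barclays' cookbooks aren't public, but as a sketch of the general approach, a Chef recipe for preparing an OpenShift node host might look like this (the package names and the template are illustrative, not their actual cookbook):

    # Illustrative recipe: install node prerequisites, manage Docker's
    # storage configuration, and keep the Docker service running.
    package %w(docker atomic-openshift-node) do
      action :install
    end

    template '/etc/sysconfig/docker-storage-setup' do
      source 'docker-storage-setup.erb'
      notifies :restart, 'service[docker]', :delayed
    end

    service 'docker' do
      action [:enable, :start]
    end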
B: Also, when we first went in, we shared the masters and etcd and metrics all on the same nodes, and we soon hit problems as we upgraded. So we split those out, and they're all running on separate nodes now with dedicated resources.
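They drive this with Chef, but the upstream openshift-ansible inventory format expresses the same separation neatly; a minimal sketch with hypothetical hostnames, showing etcd and the masters on their own dedicated hosts:

    [OSEv3:children]
    masters
    etcd
    nodes

    [masters]
    master[1:3].example.com

    # etcd on dedicated hosts, not co-located with the masters
    [etcd]
    etcd[1:3].example.com

    [nodes]
    infra[1:2].example.com openshift_node_labels="{'region': 'infra'}"
    app[1:4].example.com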
B: So we're constantly iterating, constantly learning, and I think that ability to do things in one data center, drain it and try it, has been really helpful for us. As I said, it doesn't necessarily help in terms of deploying your application and spanning across. We're also taking a slightly different approach in our public cloud offering, where we run one cluster across multiple availability zones. So a slightly different approach in that world, where we've got a non-production environment at the moment.
B: Next I want to talk about a slightly more complicated environment that we've tried to build in, which is a DMZ environment. This has given us additional challenges: we've taken a nice, fairly holistic platform, and now we've had to work in different zones that have different functional requirements and different security requirements. We've almost broken it up, split it out, and moved different elements into different areas.
B: So it's quite a challenging environment to build in, and also quite challenging from a security perspective in terms of governance. We've had to iterate on governance and keep changing the way we operate to make sure it's secure and we meet all of our regulatory security requirements, while also trying to give the users a reasonable journey that's not too dissimilar from what they're used to. It's another proof that we're constantly iterating.
B: So we've had to change a few things. Traditionally we just hook into AD; we integrate with AD for our authentication and authorization. We've had to move to token-based authentication in this environment, so that's been quite a change there, and quite interesting. We may retrofit that into our campus cluster environment.
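As a rough sketch of what token-based authentication looks like on OpenShift 3.x (the master hostname is hypothetical): a client can trade credentials for a bearer token against the cluster's OAuth server, and from then on the token replaces directory credentials entirely.

    # Ask the OAuth server for a token; it comes back in the Location
    # header as an access_token fragment.
    curl -u myuser -ki \
      'https://master.example.com:8443/oauth/authorize?response_type=token&client_id=openshift-challenging-client'

    # Subsequent logins use the token instead of AD credentials.
    oc login https://master.example.com:8443 --token=<token>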
B: We've also started using Gluster for the internal PVs, the persistent volumes. In the campus environment we tend to use NFS and some dedicated block storage, etc., but within this environment we've started to use Gluster, and that's our first experience of it.
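Statically provisioned Gluster volumes surface to OpenShift as ordinary PersistentVolume objects; a minimal sketch, where the endpoints object and volume path are assumptions for illustration:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-pv-0001
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster   # Endpoints listing the Gluster nodes
        path: app-volume-0001          # Gluster volume name
      persistentVolumeReclaimPolicy: Retain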
B: There are quite a few other security things we do. We try to have an onboarding solution; we don't have a broker that manages this environment for us. We're trying to make it seem like the users are using the real OpenShift console and the API, so we try to expose those, but we do have to have an onboarding solution. That means people register, we set up security groups, AD domains, groups, etc., and we also capture who to charge, so that's where we get our charging details from.
B: To speed on a bit: I just wanted to talk briefly about the other elements, and I'll go through these quite quickly. As I mentioned before, there's a service focus. It's changed accountability, so we've had to change who's accountable for what in this environment. Developers want speed, self-service and access, but traditionally, especially in the bank, they've relied on infrastructure teams to manage those services for them. So there's a change in mindset and a change in terms and conditions. We've been patching from the off, and it's been really good for us that we drain data centers.
B: We give warning, but we drain gracefully and then we patch. That means we've already proven that we can manage in a fully resilient manner, and it means that users have had to get used to that. Also, in terms of bringing your own image, users need to take on responsibility for taking those products through the governance and exception process within the bank.
B: It's a dedicated team, a multi-skilled team, rather than the traditional approach in the bank of silos where, if you had a problem, you'd get a network guy and a Linux guy and a middleware person. This is one team that manages it on behalf of the users. Another key thing: we went 24/7 and treat non-production as though it's production. It's a really important environment, and quite a different mindset change in terms of collaboration.
B: We work closely with our end users. When we launched, we launched with a pilot environment, so we got Pathfinder applications and people to use the environment quickly, experience it, and help us shape it. I think key for us is that feedback loop, but we have to prove ourselves and get trust and buy-in from our developers, so we have to keep iterating the platform so that they believe we are going to take feedback.
B: Charging: we try to do consumption-based charging, and this has been one of our biggest challenges. Anthony mentioned process debt; I think the financial processes and the financial structure of Barclays are among the hardest things we've come across in trying to introduce a cloud platform where you charge for what people use. So very problematic and quite difficult for us to do, but we have nevertheless implemented a version of consumption-based charging from the off.
B: I did want to talk about some of the challenges. In summary, the biggest challenge is probably cultural as much as technical. There are technical issues about putting this platform in, but the cultural mindset, and getting people to change given that debt that exists in the bank, is quite a challenge. Over a number of waves we've tried to overcome that, in terms of working with the developers, as I mentioned.
B: And I think, also for me, I've seen a change now that people are accepting the platform and using the platform. We've moved on from the early days of me having to go to loads of presentations and meetings to tell people how it does multi-tenancy, that this application isn't going to impact that other application, and trying to persuade people to use the system. We've now got smart users who are out there asking, why aren't we on the latest version that does this?
B: So we've seen a different dynamic in terms of the journey, and that's been our path and the acceptance of the journey. So a very different change around our problems and how we deal with them. I do think cadence, being able to take new versions, is something we need to get quicker at, and it is about continuing to manage that technical debt and trying to move to an evergreen environment, if you like, and I think it really links back to service stability.
B: So the biggest thing is: we've given people a platform that is available all the time, and we've proven it's there all the time, even while we're draining, and we continually prove that. I think that trust is a big win for us in terms of people adopting the platform. There are lots and lots of benefits for us, massive ones; I'm not going to say too much, but just to finish off, I hope that's given you some insight into the complexities of trying to build a platform like this.
B: You know, in a very large and quite regulated environment, and also in an environment that has been around for a long, long time with lots of debt. I think, for me, we've really tried to create an environment that means app teams can focus on the development of applications and new functions and not worry about infrastructure, and that, for me, is a real bonus. I always think back to 2015, when I knew we'd done something right.
B: A team from Portugal went from non-production to production within two months. Nobody in the infrastructure teams knew about it; it just happened, it went live. And that, for me, was a real indication that we'd done something really powerful and strong at Barclays. So thank you.