From YouTube: OpenShift at Macquarie
Description
Rajay Rai and Jason O'Connell from Macquarie & Wayne Dovey from Red Hat discuss Macquarie's production deployment of OpenShift at the OpenShift Commons Gathering Boston on May 1, 2017.
Learn more and see the slides here:
https://blog.openshift.com/openshift-commons-gathering-at-red-hat-summit-2017-video-recap-with-slides/
A
Good morning, everyone. Thank you for having us over, Diane, and especially thank you for the beer announcements; I think that was cool. I remember that part at least. So anyway, we've come a long way; we come from Sydney, so we're happy to be here today, and it's an honor and a privilege to be here at the Commons. We just joined up, and the first thing we know is we've been invited to speak, so that's great.
A
Then we have Jason here, who is a principal engineer working with us on OpenShift and all the strategies around deployment, and he's going to take you through what we're doing there and how we make sure that we're going as fast as possible. And we have Wayne, who is more like our buddy from Red Hat, working closely with us on the platform. We'll talk a little more about that. So, Macquarie Bank.
A
We have a presence in about 28 countries and over 15,000 employees, and we're one of the largest infrastructure asset managers in the world. There are six divisions; I work with Banking and Financial Services, which is the division with the most direct-facing customers, the retail division of the bank. I work in digital engineering, where I look after the engineering aspects.
A
Anyone else? Anyone else? One more guess, come on. Yes, it's roughly about five months. If you Google it you get anywhere between three and nine months, but we said it's about five months. That's the time it takes for somebody to have an idea and take it to production. So if you know about the OODA loop (observe, orient, decide, act), that's the time it takes for any idea to go from being conceived to being delivered. That's the speed of innovation.
A
That's what's happening around us; hence we've said, and we believe, it's important to understand that the future is going to belong to the fast. Having said that, the ideas are coming in fast: you've got to be able to double down on the winners, you've got to test and learn. So essentially, organizations today have to become learning organizations; it's important to experiment and learn. Just think about exponential organizations like Facebook, deploying 16,000 changes a month, or Amazon, I think in the year 2013.
A
So what are we building here? We are a digital bank, by the way; we're a new bank, we don't have branches, and we rely heavily on our ability to connect with our partners. We have partners building software against our API, and all the various consumers we have (for example, our mobile applications, or IoT, or anything else that we're building) use the same API. So we believe in endpoint independence in terms of the applications we're building. Just quickly:
A
It's all about data and money management. As you can see here, we've got a couple of examples of what we're building, including predictive features. We believe in insights, oversight and foresight. Insights are about questions like: where did my money go? How did I perform against my budget? That's oversight, and the future is about foresight, which we're working on as well. When you go to a bank, especially one like us, you need to interact.
A
We think a customer would come and ask how much they spent last month, or how much they spent in London, or "can I see my transactions somewhere?" If a customer is calling you up, that's really important; they're thinking like that. So why not build a solution that's conversational? That's what we've attempted to do. There's an example here that says: what did I spend last year, around 100 dollars, around a particular suburb; let's say the suburb where I live.
A
All the transactions around that radius come up through proximity-based searches, and it shows you the location and so on. Now, moving onwards, what we're working on as well is chatbots, with Facebook for example, so we are working heavily on connecting to third parties. You could have a third party connecting to our applications or our solutions via our APIs, and we could enable the data to be provided to our customers in a secure manner.
A
There's an example here as well with chatbots. So that's what we're working on, and we have also successfully integrated with various voice solutions and assisted channels, like Alexa and Google Home. Just quickly, before I hand over to Jason, I'll run through what we did. In 2007, of course, like everyone else, we were building monolithic applications, and of course we all know that it was not easy to deliver applications at scale and speed. We moved on to breaking up our services and our presentation layer.
A
We used the Netflix libraries, things like Ribbon, Eureka and Zuul, especially for service discovery and registries. It is complex, and we were running into various bottlenecks and various challenges all the time; we'll talk more about how we've overcome that with the solution in the next session. And then, about a year ago, we started moving our platforms to the cloud, so we are on Amazon.
A
Now we've got OpenShift and we've got an API gateway via Apigee, and we are beginning to see a lot of improvements in terms of how we deliver our software. We've reduced the amount of variability, we've increased the velocity, and of course, because we've used microservices, we have more visibility. Most importantly, we have speed today. So, having said that, to set the context: there's a famous quote by a guy from Google; he looks after design at Google, and his name is Luke Wroblewski.
A
He said: dream in years, plan in months, ship in days. Using that as an aspiration at Macquarie, we thought: dream in months, plan in days, ship in minutes. And talking about shipping in minutes, I'm going to hand over to Jason, who's going to take us through how we're shipping software in minutes. Jason, over to you.
C
Right, thanks Rajay. I just want to talk to you about how we're moving towards shipping in minutes. When we set out that goal, as part of moving to OpenShift, we asked ourselves this question: how long does it take for us now to release a single-line code change to production? Even if it was an emergency change, it would probably take four hours. The actual change is quick, because we've got automation, but all of the approvals you need to go through: it's always a big drama.
C
If you need to do an emergency change, well, what most people do, and what we were doing, is have releases every two weeks. We'd bundle everything up into those releases: everyone would get their features ready, bug fixes would go in there, and then you'd move in a two-week block. But that's slow, and if you miss that release or need to change anything, you need to wait another two weeks. And two weeks for us is quick; other teams are doing monthly releases. This is a very slow approach to doing change.
C
What we want to do is do things quicker, but why weren't we? What was slowing us down? The first thing is simply having a single production environment. When we rethought this, we saw that with a single production environment you've got a single place which is very, very fragile. People are very worried about breaking that environment, so I'll show you how we changed that with OpenShift.
C
Managing dependencies is very difficult, because if you're doing a bigger release and you've got many things you want to release, and they're all dependent on each other, then you have to do them in a particular sequence. So it ends up being that, although you've got automation, you've got all these people manually running the automation steps, and all of this added up to not having real end-to-end automation: to do a full release with a single click of a button was extremely difficult before.
C
So imagine this is production. We run our traffic through an API gateway, which is Apigee, and let's say we're pointing to our production 1.4 environment at this stage. Now, let's imagine a customer has a problem and we want to investigate that problem. We can route that customer's traffic, and only that customer's traffic, to a prod-fix environment. In there, our ops team has the ability to turn on debug logging, turn on any diagnostics that they want, and even change configuration in a safe place to investigate.
C
So they can immediately respond to the issue. Now, let's imagine the developer has found that they need that single line of code changed and released to production to fix this issue. The developer cuts a release, and we've got a continuous deployment environment. So now, rather than having a lot of approvals and a lot of process to get into production, we just do it automatically, but we don't impact the prod traffic. Instead, we point to that environment with our lab domains, so this is still production.
C
We also built a tool so we can diff responses from the API, so that we could point at the continuous deployment environment versus prod 1.4 and see if the changes are what we expect. What we're trying to do here is get more confident that, when we release to full production, we're not going to break anything.
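A diff tool like the one Jason mentions can be approximated in a few lines: make the same call against both environments and compare the JSON responses field by field, ignoring fields that legitimately differ between calls (timestamps, trace IDs). This is a hedged sketch of the idea, not their internal tool; the field names are invented.

```python
# Sketch of an API response diff: compare the same call against two
# environments (e.g. continuous deployment vs prod 1.4) and report
# fields that changed. The ignore list and field names are illustrative.

def diff_responses(old: dict, new: dict, ignore=("timestamp", "traceId")):
    """Return {field: (old_value, new_value)} for fields that differ."""
    changes = {}
    for key in set(old) | set(new):
        if key in ignore:
            continue
        if old.get(key) != new.get(key):
            changes[key] = (old.get(key), new.get(key))
    return changes

prod = {"balance": 100, "currency": "AUD", "timestamp": "10:00"}
cd   = {"balance": 100, "currency": "AUD", "timestamp": "10:05"}
print(diff_responses(prod, cd))       # {} : only ignored fields differ

cd_bad = {"balance": 90, "currency": "AUD", "timestamp": "10:05"}
print(diff_responses(prod, cd_bad))   # {'balance': (100, 90)}
```

An empty diff is the signal that the new environment is behaviourally identical for that request.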
C
Now, we also have a beta environment. This is really important, because we've got staff beta and public beta. Staff beta has an active set of staff.
C
Any staff in Macquarie can get access to beta, and then they'll post feedback about the applications on our Facebook Workplace. They're very good at actually doing testing for us, so we often leave features in beta and learn from their feedback before deciding whether to release those features into full production. There's also public beta: these are customers who are encouraged and rewarded for reporting bugs and testing things out in beta.
C
So let's say now we want to release this into full production. We don't touch prod 1.4; we build a new environment, and then we route a percentage of traffic to it and monitor that: see if we're getting more errors, see if our latency is getting slower or not, before we flick over all the traffic. And, as you can see, we can roll back immediately as well. This means that, rather than doing releases at night time and it being a big deal, we can just do them in the day time.
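The pattern described here, shifting a percentage of traffic to the new environment, watching error rates and latency, and either completing the cutover or rolling back, is essentially an automated canary release. A minimal sketch follows; the step sizes, thresholds, and health-check source are invented for illustration and are not Macquarie's actual values.

```python
# Canary rollout sketch: ramp traffic to the new environment in steps,
# checking health at each step; roll back on any regression.
# Thresholds and steps are illustrative assumptions.

STEPS = [5, 25, 50, 100]          # percent of traffic on the new environment
MAX_ERROR_RATE = 0.01             # tolerate at most 1% errors
MAX_P99_MS = 500                  # latency budget for the 99th percentile

def canary_rollout(get_health):
    """get_health(percent) -> (error_rate, p99_ms) for the new environment."""
    for percent in STEPS:
        error_rate, p99_ms = get_health(percent)
        if error_rate > MAX_ERROR_RATE or p99_ms > MAX_P99_MS:
            return ("rolled_back", percent)   # flick all traffic back
    return ("promoted", 100)

# Healthy new environment: promoted to 100% of traffic.
print(canary_rollout(lambda p: (0.001, 120)))
# Regression appears once 25% of traffic hits it: rolled back early.
print(canary_rollout(lambda p: (0.05, 120) if p >= 25 else (0.001, 120)))
```

Because a regression is caught at a small traffic percentage, the blast radius of a bad release stays small, which is what makes daytime releases safe.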
C
So how do we manage all of these microservices? Like Rajay said, we've got a lot of microservices, and if you go down a microservice architecture path, you'll know that they grow very quickly; we have been growing monthly. At the moment we'd have about 50 microservices that we manage. In OpenShift, what we decided to do was group these together, and we've grouped them by the API that they serve: we've got a personal banking API, a wealth API, and event processing.
C
Now, about that problem of dependencies when you want to change multiple things together: all of these microservices can call each other in many different ways. It's up to the developers; we have more of an open structure, so how they call each service is up to them. If we wanted to change something like that service there, you can see that it's serving an API and it's also providing services to other microservices. So we just have a very simple principle: keep the tightly coupled complexity within one namespace.
C
So how do we configure these environments? We've got a declarative approach to configuration. What that means is that we put all of the information in YAML files: YAML files about the applications, YAML files about the characteristics of the applications. These files have nothing to do with OpenShift and nothing to do with the actual tools we use to deploy to OpenShift.
C
If we change anything here and we run it again, it'll apply the delta, so they're always in sync. We always know that what we've got checked in is what's deployed. If you run it again and again, it'll do nothing; only when you make a change will it apply that change.
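The behaviour described here, re-running the deploy applies only the delta, and a run with no changes is a no-op, is the core of declarative, idempotent configuration (the same model as `oc apply` over checked-in YAML). A toy sketch of that reconciliation step, with made-up keys:

```python
# Toy reconciliation sketch: "desired" is what is checked into git,
# "live" is what the cluster currently runs. Applying computes and
# performs only the delta, so repeated runs change nothing.

def compute_delta(desired: dict, live: dict) -> dict:
    """Keys to create or update (missing or changed) and keys to delete."""
    return {
        "apply": {k: v for k, v in desired.items() if live.get(k) != v},
        "delete": [k for k in live if k not in desired],
    }

desired = {"replicas": 3, "image": "bank-api:1.4"}
live = {"replicas": 2, "image": "bank-api:1.4", "debug": True}

print(compute_delta(desired, live))
# {'apply': {'replicas': 3}, 'delete': ['debug']}

# Once live matches desired, a second run finds nothing to do.
live = dict(desired)
print(compute_delta(desired, live))   # {'apply': {}, 'delete': []}
```

The "what's checked in is what's deployed" guarantee falls out of this: git is the single source of truth, and the apply step only ever converges the cluster toward it.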
C
Now, rolling the cluster is something we started doing from very early on, and it's a very good principle to have: basically, every 60 days we're going to rebuild everything in production.
C
What it means is that when we roll a cluster, how do we make sure we've got everything exactly the same on the new cluster, when we're doing this during the day time like a release? So let's say we've got a new cluster coming up here, and I need to copy the applications, or the namespaces, across.
C
Very simply, when we deploy a namespace, we tag it in git with a target and some metadata, and then we don't copy the configuration across; we just redeploy from those points in git, we just deploy again. Then we do the same as when we were doing a release: we flick over part of the traffic, we can run the API diff, and then we're done. We've moved all of our production across.
C
So what this means, in terms of moving faster with OpenShift, is that now we've got the ability to get fast feedback. We can test and learn, which is an amazing capability, and we can move quickly, responding quickly to incidents. You can see that for ops this is unbelievable, because they've got a safe place to go in and make changes, test things out, and run diagnostics.
C
One of the biggest things is that it transforms the culture. The developer experience is much, much better on OpenShift, and also, for the business side, which is the most important part, they now know that their features and their ideas can be implemented much quicker than they were before. So this is a change in culture, and I think people are starting to learn, now that we're in production on OpenShift, about these capabilities: that they can really get fast feedback and quick learnings on the business side of things.
D
Thanks Jason. Yeah, great to be here. Just a show of hands: who's jet-lagged? Yeah, we're pretty much 14 hours ahead of you guys, so bear with me. So, just painting the picture quickly: I'm a consultant with Red Hat out of Australia, based in Sydney, and I've worked closely with the team, who are also based in Sydney, which is great, so not much traveling for me for the moment. I'll just talk about some use cases and set the scene for this project, which has been going for about a year now.
D
I'll give you a feel for the challenges and the things that we faced, and the solutions that we came up with. Obviously, a lot of the things that we've implemented have really been made simple with Ansible, for instance. An example of being able to automate everything is, as Jason mentioned, the cluster rolling. It's something that's taught us a lot of lessons about keeping everything consistent and keeping the versioning precise, and with that we've come up with the strategy of parallel deployments of clusters.
D
So you have a situation where there's a feature or some service, and Jason asks, can you please put that in for me? We deploy that version, keep the parallel clusters, and then with Route 53 we release that cluster, and we can roll back. That's the sort of process. Everything is EBS-backed, so there's the capability to snapshot things, and we use baked AMIs for our instances. On the roadmap there's an idea to start auto-scaling, but at the moment we just use a baked AMI to create the new version of the cluster.
D
With
that
we
were
able
to
spin
up
things
on
demand
passive
led
nodes,
so
we
can
share
costs.
So
it's
a
real
dynamic
environments,
so
it
might
be
a
scenario
we
deploy
ten
nodes
as
of
a
two
and
you
know,
get
like
four
nodes
up
and
running.
So
that's
a
kind
of
the
workload
because,
obviously
in
the
public
cloud,
its
expenses
can,
you
know,
add
up
very
quickly.
D
So
that's
always
a
thing
in
the
back
of
our
mind
and
mentioned
the
60-day
cycle
thing
so
that
there
were,
we
can
patch
the
cluster
to
attach
the
service
and
ensure
that
we
maintain
compliance
which
is
really
good,
building
cluster
45
minutes.
So
one
of
the
ideas
and
examples
was
this
is
now
became
very
popular
within
BFS,
the
banking
side
and
another.
Another
person
was
quite
keen
to
get
a
cluster
set
up
and
you
know
sit
some.
You
asked
me
how
long
is
going
to
act?
D
I
said
probably
about
45
minutes
and
is
about
London
brilliant.
So
this
is
sort
of
you
know
advantage.
We
have
where
this
kind
of
thing,
which
is
great
so
here
it-
has
an
example
of
the
parallel
cluster
method.
I've
told
you
about
the
mobile
app
and
the
web
based
app
or
connecting
to
an
epi
jeesus,
and
we
have
the
I
pinch
of
clusters
in
the
back
there
and
we
can
roll
version,
one
revision
to
and
roll
back.
So
quite
it's
quite
dynamic
from
that
point
of
view.
D
We used node labeling, so we can ensure that there's quality of service on those nodes. The firewall feature is a great thing; the security guys are very happy about that. And we're also using OpenSCAP for image scanning and compliance, so that way you're able to ensure that those images are up to date and compliant with new versions. Use case three is quite challenging in a shared environment: we're using overcommits, and for that, you've obviously got a certain amount of subscriptions, compute and memory.
D
Okay, now, certainly some of the interesting stuff that I'm doing is on use case five, which is tooling for managing and monitoring things: we're looking to use Prometheus and Grafana. This is really good for the microservices; the service discovery is really good, and it really brings the microservices under management. I've got a couple of screenshots that I'll show you; this is something I put together for the Macquarie Bank example.
D
On the next slide, I've created something called top pods. Top pods is effectively the relationship between pods, namespaces and nodes, and that way you're able to get an idea of the top talkers in that circumstance. That also works for the network, and those are the pod counts over there. So I think the Prometheus and Grafana integration has been very exciting, and I've really enjoyed working with it, because it's given us a lot of visibility into this customer's services.
D
What
are
using
a
dashboard
for
the
operational
guys
around
sort
of
the
cluster
overview,
so
this
is
a
great
way
to
see.
What's
going
on
a
quick
snapshot
of
the
cluster
and
you're
able
to
sort
of
get
an
idea
of
utilization,
so
the
team
can
make
you
know,
decisions
on
how
they
want
to
deploy
namespaces
your
capacity
planning
and
management
and
this
tasks
quite
well
into
as
using
platforms
so
card
forms.
D
This
were
for
our
reporting
piece
I'm
using
you
know
the
OpenScape
stuff,
as
I
mentioned,
for
reporting
things
yeah
and
just
a
couple
of
other
use
cases
and
sort
of
roadmap
items.
So
we
have
integration.
We've
done
some
integration
with
Splunk
and
that's
sort
of
repurposing
stuff
that
they've
had
in
the
past.
So
it's
always
a
challenge
when
you
go
into
an
environment
where
they
need
to.
You
know,
reuse,
stuff,
that
they
have
and
that's
let's
work
very
well
so
they're
able
to
use
it
full
LDAP
integration.
D
We're
looking
towards
obvious
leading
upgrade
to
the
latest
3.5
and
all
the
exciting
features
are
there
and
we're
using
me
looking
to
use
Nexus
for
the
secure
registry
external
registry
and
as
mentioned
before,
there's
another
team
is
quite
interested
in
deploying
fuse
on
the
platform.
So
yeah
very
happy
to
talk
to
you
all
after
the
session.
If
you
want
to
catch
up
a
batiment
ideas
and
stuff
we've
done,
but
thanks
for
some.
D
A
So we do have an ESB, and we recognized that we would want to pick technologies that are more aligned with container strategies; hence we're looking at moving most of our workloads incrementally to cloud-based infrastructure. This is just the start; this is, you know, moving the edge computing stuff up there. But the second part of our vision is to have our entire set of integration assets also, over time, modernized in a framework that's more aligned with cloud infrastructure.
A
We do have... so this is very interesting, and maybe we can talk about this in another session, or maybe we can take it offline, but long story short: just as we've adopted technologies that are, you know, more born in the cloud, we have taken on technologies like Cassandra.
A
We
do
use
a
lot
of
those
kind
of
strategies
to
have
a
data
replicated.
So
instead
of
you
know,
using
acid
we'd
be
using
more
of
cap
theorem
with
more.
You
know,
eventually
consistent
strategies
to
have
data
persisted
on
the
edge.
So
what
we
have
is
data
and
events
being
pumped
from
our
system
of
Records
to
layer
above
and
that's
where
we
do
all
the
magic
and
that's
way
we,
you
know,
have
data,
that's
contextualized
and
you
know
searched
and
things
like
that.
A
So we have a layer that we're pumping data into, but all of that is going to the cloud, and with that we have the ability to move things dynamically.
A
We can route data from, you know, one cluster to the other. So there's a lot of complexity there, but it's a different strategy; it's using cloud-based architecture more than your traditional architecture.
E
Can you expound a little bit on what you meant by full LDAP integration? What was your use case? What were you trying to accomplish, and how did you end up pulling it off?