From YouTube: OpenShift Case Study: Telus Digital - Phil Dufault, Alexandar Podobnik & Tom Vogel, Telus Digital
OpenShift Commons Gathering, December 5th 2017, Austin, Texas
A: All right, we're going to get started again here. Thank you all for your patience with our process, and everybody on Facebook. We're really pleased to have with us a Canadian telco, Telus Digital, here to tell us their case study. We're mixing it up a little bit and changing the tone, and then we'll get back into upstream stuff. So without further ado, the folks from Telus Digital.
B: So hi there, we're thrilled to be here with you all today at OpenShift Commons. Our topic for today is how we're staying ahead of our avalanche: how we're surviving our digital transformation at Telus. My name is Phil Dufault and I'm the product owner for our delivery team at Telus Digital, and with me are my two teammates that are crucial in our journey: Tom and Alex. Hey!
B: I'd like to start us off with some context about Telus, since we're down in the States. Telus is one of the largest Canadian telecommunications companies. We've got 48,000 employees and lots more vendors and contractors. We're also in the TV and health markets as well as the telecommunications space, and if you look into our history of mergers and acquisitions, you'll find Telus is actually more than 100 years old. So for our enterprise IT, this may sound a little familiar to some of you: we run our own data centers; we've got tons of bare metal; we've got IBM mainframes; we've got systems of record from the 80s and even 90s; legacy Java apps, maybe even COBOL hidden away in the back; waterfall business practices; and we have some definite challenges with config and release management. This all ultimately results in some long release cycles. Now what about Telus Digital in particular? Telus Digital was created as a disruptor, as a start-up within our enterprise.
C: So where did we start? Our first solid attempt at digital transformation was Telus My Account, and the idea with that was to enable customers to check their bills, usage, call history, etc., so a true self-serve web experience. It was our first stab at agile and scrum practices as well. We dipped our toes into the cloud using DevOps practices, infrastructure as a service, and Ansible provisioning for our front end, and it was our first mobile-first experience, so this was delivered just in time for the explosion of smartphone growth.
C: At the time of launching we had 15% smartphone penetration; now over 75% of our traffic is coming from smartphones. It was also a highly accessible site: we are WCAG AA compliant, which basically means it's highly accessible and fantastic for those with visual impairment, and works with screen readers and everything else. We offloaded a ton of call center volume, which saved us buckets of money and brought us the highest customer satisfaction of the big three Canadian telcos.
C: Our architecture was also full of lots of weird sacred cows, and lots of shortcuts and tech debt that rarely got paid off, with lots of different tech stacks and lots of reinventing the wheel. This made it really difficult for new users to get onboarded onto our platform, often taking weeks to get started in development, and overall it was just a very confusing developer experience. Then, in terms of releases, we had no automated build system, and integration typically happened very late in the game.
C: We had git flow with lots of feature branches, and to compound the issue we had proprietary bash and Python scripts that would pull together some 50-odd GitHub repos into several monoliths to be deployed. The releases often took a week to coordinate, and while this was better than our former quarterly releases, there was still tons of room for improvement. This was largely due to there being a lot of manual testing, several days of it for every single release: we'd do a full manual regression test.
C: So one broken commit in the whole release would grind the whole thing to a halt. You'd have hundreds and thousands of commits sometimes just parked. In summary, that made for unhappy devs. The tight coupling in our framework meant there was friction between the teams, and the monolithic architecture and weekly releases were not really conducive to having autonomous, data-driven teams. So we finally realized our architecture had seen better days, and we decided to make a massive leap of faith in order to move on to new and better things.
D: We had to start reimagining our team structure in order to build a culture of architecture. What we did was split the single Telus Digital development team up into four different outcome teams: mobility, my account, home solutions, and business. We also had enablement teams, and these enablement teams were there to help the outcome teams ship software more efficiently and more productively. That's us, which is delivery; we also have design, API, content, analytics, security, and many more.
D: Institutionally, we recognized that we needed a good day-to-day user experience for our technologists: happy developers make wonderful experiences, which creates happy customers. So we decided to listen to our engineers, because they had some excellent feedback for us, and as a thought experiment we mapped our path to production.
D: The operational component was very important for us. How did we get there? With a lot of experimentation. We have a culture where we were able to fail fast and pivot, so we played around with a lot of different tech stacks: PHP, Ruby, Java. Ultimately, we ended up on JavaScript; we really wanted that cutting-edge web development experience. On the infrastructure side, we played around with things like Ansible and Terraform, and managed our own Kubernetes cluster. Ultimately, we decided OpenShift Dedicated was the best for our use case.
D: We did originally do infrastructure as a service. It was a lot of work, and we only had a few AWS experts in-house, so we decided that platform as a service was more what we needed. That's why we ended up selecting OpenShift Dedicated: it allowed us to outsource our server operations so we could focus on shipping experiences to our customers and on tackling some of our automation bottlenecks. And because we love our customers and our technology, we are evolving it with full transparency; as you can see, we have a public wiki.
C: The content platform allows developers and marketers alike to update content without having to redeploy the application. Our API platform is a microservices network which brings discovery and authentication to the table. And finally, delivery: our team was responsible for making it fast, easy and, most importantly, fun to build and deploy Telus software.
C: There are many technologies at play in our delivery platform. We use Terraform to manage our classical AWS infrastructure; we still use things on AWS like RDS for our databases. GitHub stores our source code, pretty self-explanatory. HashiCorp Vault stores our secrets and our access control lists. Jenkins runs our delivery pipelines, and of course OpenShift runs our apps and our builds. So to truly expedite development on the digital platform, we needed a platform for our application architecture as well, and this is where our starter kits come in.
C: So what are our starter kits? They leverage the two strongest tools in the modern developer's portfolio: copy-paste and find-and-replace. Show of hands, who has used Stack Overflow? Come on, there you go, right, so you all know this. Starter kits are a reference implementation of the reference architecture, so our architecture is now enshrined as code. They're designed as loosely coupled, self-deploying continuous integration and continuous delivery automatons, such that none of the teams gets interrupted by another team's deployment. Their code is fully self-shipping, and we tried not to fall into the framework trap.
C
So
we
had
bad
experiences
building
our
own
custom
frameworks
before
so.
The
starter
kits
are
a
collection
of
managed
code,
duplication,
which
is
mostly
configuration.
The
frameworks
themselves
are
react,
OpenShift,
etc.
I
made
by
people
much
smarter
than
us,
and
so
the
starter
kits
are
a
functional,
ephemeral,
decorative
item,
potent
translation
cloud
native
configuration
boilerplate
that
glues
together
all
of
these
frameworks,
so
any
proprietary
shared
functionality
is
managed
by
individual
libraries
free,
specific
context.
C
So,
for
example,
our
accessibility
and
SEO
testing
is
all
done
with
telus
proprietary
libraries,
some
of
those
open
sourced,
so
our
backbone,
starter
kits,
which
account
for
about
95%
of
our
OpenShift
cluster.
Currently
is
the
server-side
rendered
react
user
interface,
as
well
as
the
API
micro
service,
starter
kit.
C: In order to achieve ludicrously quick deployment times, we had to set goals, and these were set at the outset of our delivery team kickoff. Our goals were to have under 10 minutes from commit to production, and to onboard new developers on their first day and have them push to production that same day, something we had seen from a lot of bleeding-edge companies like Facebook. We also follow the KISS principle.
C: All the starter kits revolve around their build pipelines. For the various build pipeline steps: we have the checkout phase, so every single commit triggers a checkout of your code for that commit. We apply the secrets stored in HashiCorp Vault and mount them into OpenShift to be read by the apps. We apply the OpenShift templates for both build and deployment, and this allows us to couple our code together with our infrastructure in the same commits, such that we can make changes in parallel and test them as one whole.
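As a rough illustration of that flow (not the real Jenkins pipeline), the fail-fast staging can be modelled as a list of stages that run in order and halt on the first failure, so a broken commit never reaches a later step such as deploy. The stage names below are assumptions for the sketch.

```javascript
// Toy model of a fail-fast build pipeline: stages run in order, and the first
// failing stage stops the run before anything later executes. In the real
// setup each `run` would shell out to git, Vault, or `oc`.
function runPipeline(stages) {
  const completed = [];
  for (const { name, run } of stages) {
    if (!run()) {
      // report where the pipeline stopped and what had already passed
      return { ok: false, failedAt: name, completed };
    }
    completed.push(name);
  }
  return { ok: true, completed };
}
```

The point of the model is the ordering guarantee: if the test stage fails, the deploy stage simply never runs.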
C: The testing phase is, of course, the most important: the quality of your testing is basically what gives you the confidence in your pipeline. If you have weak testing, you have low confidence and you can't ship quickly, so we obviously wanted to have absolutely the best testing we could possibly get. We have everything from linting, unit testing, and code quality measurement with SonarQube, to node package security scans and performance tests. Basically, you name it.
C: An analogy that we like to use for our deployments: it's like bowling with bumpers. You keep safely rolling the balls down the lane until you knock down all the pins. But our game doesn't just stop once you knock down the pins and get a strike; we also have instrumentation for runtime. For SEO optimization we have server-side rendering; we also support logging with our Kibana stack; we have New Relic for monitoring, PagerDuty for incident management and teams on call, plus analytics, feature flagging, security, and time-series metrics. Really, the world is your oyster.
C: All of this comes out of the box: you copy-paste your Fort Knox-grade hello world, and this means you've flipped the paradigm. You're no longer getting your production instance on the last day, right before you ship; you start on day one with a production instance, and you make small incremental commits and leverage the CI/CD pipeline to test and deploy every single change to production. Exposing the site to customers is as simple as toggling a feature: you just show the site, embed some links, and away you go.
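The "deploy dark, toggle later" pattern described here can be sketched in a few lines. The flag name and the plain-object flag store are illustrative, since the talk doesn't name a specific feature-flagging tool.

```javascript
// Every commit deploys to production, but new work stays hidden behind a flag;
// exposing it to customers is a flag flip, not a redeploy.
function isEnabled(flags, name, fallback = false) {
  return Object.prototype.hasOwnProperty.call(flags, name) ? flags[name] : fallback;
}

// Illustrative page selector: the new experience only shows once its flag is on.
function pageFor(flags) {
  return isEnabled(flags, 'new-site') ? 'new site' : 'legacy site';
}
```

In practice the `flags` object would come from a config service or environment, so flipping a value there switches customers over without touching the pipeline.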
B: Onboarding for us at Telus has traditionally been a slow and very painful process. We have some automation now to onboard people onto OpenShift clusters and additionally into HashiCorp Vault's identity system, and we call that Shippy. We distilled the Telus-specific domain model for our applications and for the users and squads that are building them, and when users are added in, we automatically provision them onto the various clusters and tools.
B: We have a CLI and an API right now to manage this onboarding and off-boarding of users, and it also assists in deploying the starter kits onto our OpenShift clusters. It's a simplified single interface to our various platform tools, and it helps convey our architecture, our culture and our documentation.
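As a hedged sketch of the kind of expansion such a tool automates (Shippy itself isn't public, so the data shapes and tool names below are assumptions): given a squad and the platform tools it needs, produce one access grant per member per tool, which onboarding applies and off-boarding revokes.

```javascript
// Expand a squad definition into per-user, per-tool access grants.
// Tool names ('openshift', 'vault') are examples, not Shippy's real inventory.
function expandGrants(squad, tools) {
  const grants = [];
  for (const member of squad.members) {
    for (const tool of tools) {
      grants.push({ user: member, tool, squad: squad.name });
    }
  }
  return grants;
}

// Off-boarding is the inverse: drop every grant belonging to a departing user.
function revokeUser(grants, user) {
  return grants.filter((g) => g.user !== user);
}
```

Centralizing this mapping is what lets a single CLI call provision a new hire onto every cluster and tool at once.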
B: Beyond simply onboarding and delivering our code, we also need to get all of our technologists to contribute. We found that we really needed to respect our technologists in their craft: they had great opinions and we needed to listen to them. We found that architecture really mattered, and our architecture is continuously improving, because now we're never satisfied.
B: We're also updating these starter kits with new and evolving standards, and these updates are now coming from the outcome teams themselves, using an inner-source contribution model. These outcome teams are great canaries for us, and now they're generally doing the trailblazing for us, so the bets that pay off for those outcome teams can get merged back into the starter kits and be leveraged by the whole.
B: So how are we keeping the momentum going? A crucial turning point for us was getting the leadership team to buy into a reference architecture. Then, with some success, we received updated mandates to align beyond Telus Digital, bringing the Telus, Koodo and Public Mobile brands onto the reference architecture and the culture associated with it. Some other learnings we had were that our hero culture ultimately had to stop.
B: Our key individuals weren't scalable; our DevOps stopped being a team and became a cultural practice; and the developers are the subject matter experts for their applications, so as a result they need to be on call for them, so they can triage and escalate as needed. Those outcome teams are now graded and measured on their application experiences and uptime, and our OpenShift Dedicated cluster is responsible for keeping the stability of our platform at large. It was doing a really great job for us.
D: Now we want to talk about the juicy part: the growth and the results. We've had 120 applications deployed on our platform since March, and we now have four hundred plus deployments per day. In the old world we were doing about one deployment per week, so this is a huge upgrade for us, and it's absolutely incredible that a telecom is able to ship 400 times a day. The developers at Telus Digital are super proud of this, because the platform really helped us achieve it.
D: Our cluster size is doubling every few months; we're just getting a lot of people from other brands and teams that really want to join our cluster. We have happy customers, because we're able to ship updates very quickly throughout the day, and we also have happy developers, because they don't have to focus on the operational aspect of their application anymore: it just deploys and runs. So, a bit of a case study.
D: We want to talk about the iPhone launch. Recently, on the platform, we launched the iPhone 8 and the iPhone X. In the old world there were a lot of fire drills and a lot of stress: people were up all night, this is a huge release for Telus, there were website problems, just a lot of stress. In today's world, with the new platform, this is all gone. There's no stress, no one was being paged; everything just released.
D
As
Phil
said
tells
us,
you
know
40,000
plus
people,
and
we
have
other
teams
already
other
teams
and
brands
already
on
our
platform,
and
we've
received
a
lot
of
great
feedback
already
from
other
teams
and
I.
Think
now
what
we
need
to
do
is
we
really
need
to
expand
our
reference
architecture
to
support
other
technology
stacks,
because
there
are
teams
that
are
more
familiar
with
Java
and
Ruby,
and
we
really
want
to
receive
their
contribution
to
our
reference
architectures.
B: So what's next for us at Telus? Well, the future is friendly. Our journey over the last year has been incredible. Ultimately, if you look at where we were and where we are now, we've coalesced a lot of our disparate technology stacks, we've defined a single reference architecture, and we've turned it into a digital platform with tons of reuse and a lot of uptake.
B: Our projects are actually getting to market quicker: we're taking about a third of the time to build and deliver those experiences, and we're not going to stop until our applications are writing themselves. Paying it forward, true to the Telus core value of giving where we live, we're actively participating in the Node.js and open source communities, and our outreach is now our beacon for hiring on the new stack. Many of the best and brightest technologists are actively seeking us out as word of our tech stack spreads.
B: What's in Shippy's future? I definitely think a React UI, so a web experience; Slack bots for sure; and maybe even a new connected interface: "OK Shippy, create me a new app." Some key lessons that we want you to take away: we weren't just building a platform, we were really building a cultural movement as technologists.
B: We can get really obsessed with the details and forget about all the people involved, and exercises like mapping the path to production were really, really valuable in sharing that understanding amongst ourselves. Establish the culture of architecture and enablement: there can be no enablement without the architecture to support it, and anything less is going to result in anarchy and instability; and no architecture can succeed without the enablement side, for anything less is an ivory tower. So codify your standards.