A: Good afternoon everybody, my name is Mark DeNeve, and this is my coworker Kyle Button. We both work for a company called Paychex, and we've both worked for Paychex for between two and three years. I have over 20 years of IT experience; Kyle has about five years of IT experience, mostly as a developer. For me, it's mostly in operations. We both work for a team called Infrastructure Platforms. Infrastructure Platforms at Paychex is responsible for all of our storage, server, and virtualization infrastructure, as well as infrastructure as a service and platform as a service for both internal and external use.
A: Okay, and how many people are here from the dev side of things? What we're going to be talking about is more focused on ops than on dev, but we'll definitely be talking a little bit about both. So, just real quick: what is Paychex? I'm going to read this one exactly: we are the leading provider of integrated human capital management solutions for payroll, HR, retirement, and insurance services. We have over 605,000 payroll clients, and we pay one out of every 12 American private-sector employees.
A: So, a little bit about Paychex and our use of OpenShift. We've been using OpenShift since about 2015, when Red Hat shifted over to 3.0 with the Kubernetes back-end. In 2016, we went all-in on OpenShift, and it's taken on a very viral adoption inside of Paychex. It is our fastest-growing infrastructure platform.
A
We
have
16
different
clusters
to
cover
dev
test
and
production
across
three
different
data
centers
in
two
different
regions,
we've
gone
through
seven
in-place
upgrades
from
open
shift,
300
all
the
way
through
three
seven
over
the
past
year
and
a
half
two
years
with
zero
application.
Downtime
during
those
upgrades
in
the
application
hosted
inside
of
OpenShift
are
responsible
for
moving
well
over
500
billion
dollars
per
year.
A: So what are we going to talk about today? We're not going to talk about putting our business applications into OpenShift. What we want to talk about today is putting our operational tools into OpenShift: those tools that make the back-end infrastructure work and help us manage the infrastructure. OpenShift isn't just for business applications and services. We're going to talk about how we improved services by running our infrastructure tools in OpenShift, and how running the infrastructure tools in OpenShift helped us better understand the OpenShift platform.
A: We also underestimated the importance and criticality of running our tools on a highly available platform. "This tool's never gone down before," "this is way overkill," or "we'll just rebuild it if it goes down": those were many of the different things that we would think and say. What we needed to do is we needed to do better.
A: We needed to be able to help development teams with questions and issues with the OpenShift platform, and we weren't able to do that because of a lack of understanding of the development side of things. We needed to move outside of our comfort zone. We needed to gain end-user experience with the platform. We needed to become more dev-like in our thinking. We needed to understand not just how to deploy the platform but how to leverage the platform by putting our own tools in it.
A: What we're going to talk about now is how our team went from a bunch of unknown magician engineers that performed very much unknown magic to a 1-800 dial-in engineering line for our developers. We went from nobodies to somebodies really, really quick. To talk about the first application that we did from an operations perspective, I'm going to let Kyle tell you a little bit more about that.
B: We had an application called Fry, and Fry managed our NAS and SAN systems. It monitored them, and it was a giant mass of PHP, Python, MySQL, and a whole bunch of bash scripts. The code was about four years old and not very well maintained, but it worked. It ran in one data center; there was no failover, no business continuity available. If something happened to the server, we would lose a significant amount of insight into our storage platform. And we called it Fry because it managed tape robots.
B: We had Bender and Flexo, so why they called it Fry, since he can't even manage himself, I don't know. So when we decided to do this, there was a lot of skepticism and unsure thoughts from our team. This was a really critical application, and they were really scared about uplifting everything. So we came up with a set of goals. We needed to change the thinking: we're not a team of developers, we're all in operations, but we needed to start thinking more like them.
B
This
application
needed
a
flexible
framework.
That's
very
well
documented.
The
framework
is
easily
expandable.
We
needed
to
have
a
very
resilient
architecture.
It
has
to
run
in
multiple
geographic
locations
actively,
and
we
wanted
to
have
multiple
instances
per
data
center,
we'd
like
to
get
it
continuously,
deployables
every
all
our
requirements
about
monitoring
and
we're
constantly
bringing
in
new
tools.
We
need
to
quickly
make
a
change
to
this.
This
application,
this
tool
and
deploy
it
and
we'd
like
to
get
so
that
we
could
deploy
it
in
minutes
based
on
business
needs.
B: So we implemented this using Python and Django, with a MySQL back-end with replication, using S3 storage for file persistence when we needed it, and we used a message queuing system for tasks. For a tool like this, as long as certain automation tasks got done against our infrastructure, it didn't really matter when they happened. So how did it go?
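As an illustration of that fire-and-forget task pattern, here is a minimal sketch of a queued task in Python. The talk does not name the queuing system, so Celery with a Redis broker is an assumption, and the task and helper names are hypothetical:

```python
# Minimal sketch of queue-backed automation tasks (Celery is an
# assumption; the talk only says "a message queuing system").
from celery import Celery

# Hypothetical broker URL; inside OpenShift this would point at a service.
app = Celery("fry", broker="redis://message-queue:6379/0")

@app.task(bind=True, max_retries=3)
def refresh_array_inventory(self, array_name):
    """Poll one storage array and record its state.

    Exactly when this runs doesn't matter, only that it eventually
    does, so retrying with a delay on failure is acceptable.
    """
    try:
        inventory = poll_storage_array(array_name)  # hypothetical helper
        save_inventory(array_name, inventory)       # hypothetical helper
    except ConnectionError as exc:
        raise self.retry(exc=exc, countdown=60)
```

Because callers only enqueue work (`refresh_array_inventory.delay("nas-01")`), any worker pod in any data center can pick the task up, which fits the multi-instance, multi-site goals described above.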
B: We had no more single points of failure, and there was no more application downtime when we had to make deployments. We are constantly monitoring our SAN and NAS infrastructures now; there are zero gaps between deployments, so we're never missing out on any data. It's running active-active-active in those three data centers across two regions. And we had issues before where, if too much load was going to the application, it would crash.
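Zero-downtime rolling deployments of this kind generally depend on the platform routing traffic only to pods that report themselves ready. As a sketch of the kind of Django health endpoint a readiness probe could poll (the view name is an assumption, not Paychex's actual code):

```python
# Sketch of a Django health-check view for a readiness probe
# (hypothetical; the talk doesn't show Paychex's code).
from django.db import connections
from django.http import JsonResponse

def healthz(request):
    """Return 200 only when this pod can actually serve traffic, so a
    rolling deployment never routes requests to a broken pod."""
    try:
        connections["default"].cursor()  # verify the MySQL back-end is reachable
    except Exception:
        return JsonResponse({"status": "unhealthy"}, status=503)
    return JsonResponse({"status": "ok"})
```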
B: But what we did learn was that debugging in OpenShift can be difficult. For this application, we intentionally sacrificed some traceability for ease of deployment and development. We're not, as I said, a team of developers, so having to include additional packages was kind of hard, especially when you're trying to trace across, you know, three data centers with 12 pods. And all our logging is ephemeral: when the pods die, the logs disappear.
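One common way to claw back some of that traceability is to log structured JSON to stdout with a correlation ID, so whatever aggregates the container logs can stitch a request's path back together across pods. This is purely illustrative, not what Fry actually did:

```python
# Sketch: structured JSON logs to stdout with a correlation ID, so an
# external log aggregator can trace one request across many pods.
# (Illustrative only; the talk doesn't describe Fry's logging.)
import json
import logging
import sys
import uuid

logger = logging.getLogger("fry")
logger.addHandler(logging.StreamHandler(sys.stdout))  # stdout outlives the pod via the aggregator
logger.setLevel(logging.INFO)

def log_event(message, request_id=None, **fields):
    """Emit one JSON log line tagged with a correlation ID."""
    record = {"request_id": request_id or str(uuid.uuid4()), "message": message, **fields}
    logger.info(json.dumps(record))

log_event("inventory refresh started", array="nas-01", datacenter="dc2")
```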
B: So when we do have issues in our production environment, it's kind of hard to find them. But with OpenShift, if a pod dies, we don't really care; eventually we can get to fixing that bug, because usually it's not super critical. We're a lot better at explaining OpenShift concepts now, and this helps us out immensely when working with the development teams when they have to deal with all the different configs and deployment types. We know how quotas really work: we ran into some issues originally when doing deployments, where they would, you know, fill up all the quota space and the pods would never get created.
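That failure mode is typical of rolling deployments, where old and new pods briefly count against the same quota. A hedged sketch of inspecting quota headroom with the Kubernetes Python client (the namespace name is made up; this is our illustration, not Paychex's tooling):

```python
# Sketch: inspect ResourceQuota headroom before deploying, since a
# rolling deploy briefly double-counts pods against the same quota.
# (Illustrative; the namespace name is made up.)
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

for quota in core.list_namespaced_resource_quota("fry-prod").items:
    for resource, hard in (quota.status.hard or {}).items():
        used = (quota.status.used or {}).get(resource, "0")
        print(f"{quota.metadata.name}: {resource} used {used} of {hard}")
```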
B
We
learned
all
the
intricacies
of
services
and
routes
having
our
own
application
in
the
platform
that
we
manage,
allowed
us
to
also
troubleshoot
issues
and
find
problems
with
the
platform
before
the
development
teams
and
the
business
applications
actually
had
problems.
We've
seen
this
where
we
were
able
to
find
a
CH,
a
proxy
issues
and
correct
them
before
businesses
saw
them,
we've
found
and
corrected
issues
with
scaling
and
issues
with
pod
deployments.
A: A few other things that we also learned: we got really good at finding actual bugs in the platform itself. We would start testing some of the new features well before they were available to developers. Scheduled jobs was a great example of that: I wanted to start using it because I saw a need for it for the developers, and we very quickly came to the conclusion that we don't use alpha features, because they're not quite ready yet. Additionally, overall, we were able to offer much better service to the development teams.
A: We became an influencer of the software development lifecycle and CI process, and we created a much tighter integration and trust between our dev and our ops teams through this process. And that's why it says "be prepared": if you were to do something like this, be prepared, because if anybody here has read The Phoenix Project, you know the concept of Brent. Brent doesn't scale well; you need to make sure that you disseminate your knowledge to prevent information silos.
A: This went so well that other operations teams also started to want to get on board. We started bringing in third-party integrations. We brought on our chat as a service. We've moved a lot of our monitoring tools (Grafana, Prometheus exporters, things of that nature) into OpenShift as well. All of these are running in multiple data centers, managed by OpenShift, making it much easier for us to worry about the day-to-day operations and not about trying to keep these applications up and running. We've also moved a lot of deployment automation and infrastructure automation into OpenShift as well.
A: So what's next for Paychex? What we're looking at next is expanding the use of S2I for development; we've seen a lot of good use cases from our own use of it, so we're looking to move more of development to using S2I. We're looking at functions as a service: functions as a service would fit great for operations needs, things like webhooks, callbacks, things of that nature. We're working with the Service Catalog integration for on-prem resources, and we're also looking to move things like Jenkins workers into OpenShift.
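The webhook case fits because each invocation is a tiny, stateless handler. As a purely illustrative sketch (not Paychex code; the event shape and names are invented), this is the kind of function that suits a FaaS platform:

```python
# Sketch of a small, stateless webhook handler of the kind suited to
# functions as a service. (Illustrative; event names are invented.)
import json

def handle_webhook(event):
    """Receive one callback payload and dispatch a follow-up action."""
    payload = json.loads(event["body"])
    if payload.get("type") == "array.capacity.warning":
        # Hypothetical follow-up: open a ticket or queue a cleanup task.
        return {"status": 202, "action": "queued-cleanup", "array": payload.get("array")}
    return {"status": 204, "action": "ignored"}

# Example invocation with a fake event:
print(handle_webhook({"body": json.dumps({"type": "array.capacity.warning", "array": "nas-01"})}))
```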
A: So, in conclusion, hosting our tools inside OpenShift helped us to build team experience with the OpenShift platform. The features of OpenShift helped us make our IT operations teams better: it made us better able to manage workloads, improve efficiencies, and provide a better customer experience to our internal customers. A great way of looking at this is: if you really want to understand the customer experience, be a customer of your own platform. Eat your own dog food, or, a better way of saying that, I think, is probably drink your own champagne, because it sounds much more high-class. It's a great way to learn about SDLC concepts and CI/CD. And a really good quote is: don't be a cost center, don't be another admin; enable dev and speak their language. That's what we've been able to do by moving our own tools into OpenShift, so that we could really understand the platform.
C: [question inaudible]
A: So, the in-place upgrades: we actually did a lot of work using our own Ansible automation. We have a lot of what we refer to as Paychex-isms in our environments, additional outside tools and things like that. So we actually wrote our own automation, building on the automation that we got from Red Hat, with Ansible playbooks and things of that nature, to really handle the tasks and roll through those upgrades one by one. Yeah.
E: And actually, do come to the OpenShift roadmap session. We're going to be talking about how we're evolving the current installer, which is OpenShift on RHEL using the Ansible installer, and then what will be the fully immutable installer, which is OpenShift on an immutable OS, and some additional automation that we're going to be introducing around that. So there.
F: [question inaudible]
B: I'm not sure; initially, probably not. There's a lot of proprietary code around our own processes that's been baked into it, for sure. There are some components that we might be able to. For example, we do a lot of integration work with Cisco MDS; there are not a lot of tools to work with Cisco MDS, and we've had to build a lot of that in-house. So that might be an opportunity for us to open-source.
D: [question inaudible]
A: I've had a lot of experience with it. The developers I've worked with have been actually very happy with it, because, quite honestly, it did go both ways. I'm not a developer personally, so I was able to learn a lot from the development teams and development organizations on how to use OpenShift, as well as us helping to influence them. So it's really worked out very well.
G: [question inaudible]
A: We are not providing persistent storage at this point in time; we've actually architected things in such a way that we don't require persistent storage. The only exception to that would be S3, or an object-based store, that we use for shuttling off things that really need long-term storage. Outside of that, we actually don't do persistent storage.
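For that shuttle-to-object-storage pattern, here is a minimal sketch with boto3 against an S3-compatible endpoint; the endpoint, bucket, key, and credentials are made up for illustration:

```python
# Sketch: push a file that needs long-term retention to S3-compatible
# object storage, keeping the pod itself stateless.
# (Endpoint, bucket, key, and credentials are made up.)
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # hypothetical on-prem endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a report generated in the pod; the local copy can vanish with the pod.
s3.upload_file("/tmp/san-report.csv", "fry-archive", "reports/san-report.csv")
```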
H: [question inaudible]

E: On the storage question, there are also a couple of sessions this week around what's called container-native storage, which is some work that the Red Hat storage team is doing specifically around storage and OpenShift, in addition to the work that we're doing in Kubernetes to support all forms of storage. So if you're interested in that, I'd check that out. Was there one more question?