From YouTube: Case Study: OpenShift at HealthPartners — James McShane at OpenShift Commons Gathering Seattle 2018
https://commons.openshift.org/gatherings/Seattle_2018.html

B:
I'm really excited to be here today to talk about what we've been doing at HealthPartners for continuous delivery. It's been great to hear all these talks and feel like I'm in a place with fellow comrades as we go through this journey of OpenShift, and, you know, just hearing this last talk about the challenges and the journeys that they go through.
We go through many of the same things and face the same challenges. So, I work at HealthPartners, which is a large regional, nonprofit health insurance and healthcare provider in the state of Minnesota. We don't have offices in San Antonio or Plano, which would be nice right now, but we're a large company. We were formed in 1957.
We have over 26,000 employees and more than 90 clinics and hospitals. This is a very large organization, very well known in the Midwest. I'm a part of our platform and architecture group, which was formed out of our web team, the team that ran our healthpartners.com website. This group has been attempting to push the envelope here at HealthPartners, grow our cloud practice, and develop some new techniques for how we do our work at HealthPartners and how we serve our customers.
We made a decision in February of that year and handed that off to our leadership, and in the end OpenShift was chosen to carry out an implementation. We were given a date to get to production by the end of the year, and we were really excited to be able to release our first applications into production in November of that year, beating our release date by six weeks, which was really exciting. We took some time there to develop our day-two operations practices.
A lot of the things I'm going to talk about today come from that time. We really focused energy around how we're going to manage applications as we go forward on this platform. And then, over the last six to eight months and going forward in the future, it's been a challenge of how we're going to bring our whole legacy systems, everything that we have, modernize it, and learn and think about new ways of doing things. So we started with just basic REST services.
B
Api
services
got
our
first
web
application
into
production
in
August
of
this
year
we
had
50
services
in
late
October
and
we
finally
got
our
security
approval
for
getting
external
Internet
traffic
into
our
clusters.
Just
this
last
week,
so
there's
been
a
big
effort
over
the
last
six
months
to
really
develop
our
capabilities
in
this
platform.
That's
what
I'm
going
to
talk
about
so
first
I
want
to
contrast
our
new
world.
What
open
ship
provides
with
the
way
people
used
to
work
at
health
partners?
B
I
like
to
think
of
this
as
the
very
high
bar
that
it
took
to
do
really
any
work
within
our
platform,
everything
was
challenging.
We
had
large
shared
environments
where
web
applications
were
deployed.
Single
line
code
changes
to
production
would
take
26
36
3
days
5
days,
that's
just
for
a
single
line
teams
deployed
to
production
once
a
month,
once
a
quarter
I'm
sure
many
of
you
have
lived
in
this
world
right.
These
are.
This
is
the
standard
world
of
large
companies
in
the
past,
and
still
today,
HealthPartners
was
particularly
good
at
this
old
world.
B
Debugging
errors
as
they
came
along
the
way,
would
go
home
as
people
were
showing
up
to
work
the
next
day,
and
this
is
the
kind
of
horror
that
I
was
hired
to
eliminate
I
wish,
but
I
know
that
I
mean
this
was
really
the
challenge
that
our
environment,
that
we
face
in
our
environment
and
it
made
teams
make
some
bad
decisions,
architectural
II,
because
of
the
challenges
in
this
own
wallet
old
world.
This
is
where
openshift
was
our
clear
decision
right.
It
was
a
platform
that
we
could.
B
You
know
that
our
security
team,
we
could
agree
on
our
architecture
team
could
agree
on.
This
is
something
where
we
really
gained
a
consensus
around
working
with
Red
Hat
as
a
partner,
and
we
were
able
to
bring
this
product
in
in-house
and
start
to
work
with
shift.
So
as
we
began
our
implementation,
we
decided
to
think
about
the
challenges
of
our
old
world
and
reshape
that
mindset.
What
should
things
look
like
in
our
new
world?
What
should
developers
lives?
B
Look
like
We came up with a new mental model for how things should work, and this mental model meant we made some key decisions. There must be a low barrier to entry to the platform. It has to be easy to do the right thing. It has to be easy to experiment and learn. And then it also has to be easy to consume the services that the platform exposes for people to consume.
From that point: things like secret management, configuration management, all those things. As we talk about the difference between a shared, collision-heavy environment versus an environment where we can turn over control to these development teams, we really started to figure out how we can give control of every part of their workflow to the development teams, so they can make the right decisions for their business partners and for their users. So we developed a toolchain around this.
I was laughing at the last presentation as he continued to mention components that we use as well: ServiceNow, GitLab, Artifactory. Wonderful, really great tools to use. And, you know, every company is going to have their own toolchain, right? What's key is exposing services for developers to consume in an automated fashion, exposing these services so that they can consume them and build their own workflows around them.
B
So
what
we
did,
what
we
decided
to
do
is
invest
heavily
in
the
connection
between
Jenkins
and
open
shift,
as
well
as
start
a
conversation
very
early
with
our
auditors
around
the
processes
of
change
that
they
have
in
place
right
now
and
those
two
decisions
turning
over
control
to
an
automated
workflow
and
then
getting
in
early
and
discussing
those
things
with
my
our
compliance
and
our
security
groups
are
really
is
what
I
want
to
talk
about
going
forward
here.
So
we've
talked
about
this
low
bar
to
entry.
What
is
the
low
bar
to
entry?
There's a lot there. Think about how hard it might be in your old systems, your general environments, to define everything you might need, right? I can pretty easily come up with an application name and the team I'm a part of, and then we can go from there and gain more context, more complexity, and more skill. But this is what I'm talking about when I'm talking about a low bar: we created a Jenkins entry point that we call our HP pipeline.
B
We
work
at
Health
Partners,
so
everything
starts
with
HP,
and
so
this
became
an
easy
way
for
teams
to
come
on
and
start
learning
about
how
the
platform
works
and
how
their
applications
work
on
top
of
the
platform.
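The talk doesn't show the pipeline file itself, but the idea is that an application name and a team name are enough to get started. As a rough sketch (all naming conventions here are hypothetical, not HealthPartners' actual ones), deriving everything else by convention might look like:

```python
def pipeline_defaults(app_name, team):
    """Derive conventional build/deploy settings from the only two
    required inputs. Registry and domain names are made up."""
    return {
        "namespace": f"{team}-dev",                        # team-scoped project
        "image": f"registry.example.com/{team}/{app_name}",
        "route_host": f"{app_name}.apps.example.com",
        "replicas": 1,                                     # safe starting default
        "strategy": "Rolling",
    }

cfg = pipeline_defaults("claims-api", "platform")
```

Teams that never override anything still get a working deployment out of these defaults, which is what keeps the entry bar low.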
So from that development viewpoint, how do we go forward and actually carry out validations?
We're not just slinging code to production, right? That can be taking this model to the extreme: oh man, everything's going to production, we're pushing code all the time, we're doing great, everyone's happy, and then you start having a terrible user experience. So what we did was define what we call the six stages of our pipeline. And if you've ever worked in a large organization, you know that each team has their own use cases.
I heard a wonderful talk this morning, I think it was by GE Digital, where they said every team thinks their application is this special little thing, right? And so every team is going to ask for a customization in potentially every one of these areas: the preparation of the application, creating the build, running tests, doing some sort of pre-deployment validation, deployment, and then validating the actual application in production, and then repeating those final three steps.
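The six stages described above can be sketched as abstract steps with team-overridable defaults. This is a minimal illustration in Python (the actual implementation lives in Jenkins pipelines; stage and function names here are invented):

```python
# Six abstract pipeline stages, each with a default implementation
# that a team can replace where they want extra complexity.
DEFAULT_STAGES = ["prepare", "build", "test",
                  "pre_deploy_validate", "deploy", "verify"]

def default_stage(name):
    def run(context):
        context.setdefault("log", []).append(f"default {name}")
    return run

def build_pipeline(overrides=None):
    """Assemble the pipeline: unspecified stages fall back to defaults."""
    overrides = overrides or {}
    return [(s, overrides.get(s, default_stage(s))) for s in DEFAULT_STAGES]

def run_pipeline(pipeline, context):
    for name, stage in pipeline:
        stage(context)
    return context

# A team customizes only its test stage and inherits the rest:
def custom_tests(context):
    context.setdefault("log", []).append("team-specific integration tests")

ctx = run_pipeline(build_pipeline({"test": custom_tests}), {})
```

The point of the structure is exactly what the talk describes: defaults for everyone, with opt-in participation per stage.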
B
So
what
we've
done
is
we
define
these
six
steps
in
an
abstract
way
within
our
Jenkins
pipelines
and
then
allowed?
Well,
we
first
of
all
created
defaults
like
we
do
for
everything,
and
so
that
you
know
every
team
has
a
hat
they
can
consume,
but
as
teams
want
to
consume
a
more
complex
workflow,
we
allow
them
to
participate
in
the
building
of
their
workflow,
where
they'd
like
to
add
complexity.
B
So
what
I
want
to
do
is
drill
into
each
one
of
these
stages
and
talk
about
how
this
connection
between
OpenShift
and
jenkins
has
been
key
for
us
and
talk
about
some
of
the
amazing
things
that
the
open
shift
api
is
allowed
us
to
do
within
our
build
and
deploy
system.
So
when
we
think
about
building
you
know
within
this
OpenShift
world
right,
what
does
a
build
mean?
A
build
means.
I
have
a
source
control
system
and
I
want
a
validate
container
image.
B
Now
validated
means
a
lot
of
different
things
to
a
lot
of
different
people.
Right
there
are
security
groups,
room
validated
means
I
passed
all
my
security
scans,
I've
passed,
you
know,
I'm
good
to
deploy
and
I'll
never
have
any
vulnerabilities,
which
is
wonderful
and
fiction.
I
mean
you
have
to
be
honest
right,
but
then
for
application
teams,
a
valid.
A
container
image
means
that
they
know
how
their
application
is
going
to
work
before
they
hit
integrated
environments
right
before
you
start
affecting
other
people's
experience
within.
You
know
a
testing
platform
for
your
application.
B
So
how
our
happy
path
for
our
build
stage
works
is
we
are
a
Java
shop?
We've
been
a
Java
shop,
so
our
hobby
path
is
a
maven
build
into
a
spring
boot
application.
Based
on
as
of
a
couple
months
ago,
a
Java
11
base
source
to
image,
build
and
then
packaged
up
and
pushed
out
to
an
external
registry
that
we
keep
external
to
all
of
our
clusters,
but
it
wouldn't
be
a
validated,
and
you
know,
application
teams
wouldn't
be
happy
without
really
solid
testing.
B
Martin
Fowler
has
this
wonderful
article
about
of
testing
pyramid
free
application,
where
you
really
invest
heavily
in
fine
granular
tests
early
in
the
early
in
the
pipeline
stages
and
then
have
smaller,
but
more
targeted
smoke
tests
and
and
end-to-end
tests
as
the
application
moves
forward.
So
as
a
Java
shop
right,
we
have
capabilities
for
running
unit
and
component
tests.
You
know
spring
boot
tests
component
tests,
I,
don't
know.
B
We
need
good,
validated
validation
early
in
the
process,
so
that
we're
not
deploying
junk
code
even
to
our
development
environments.
Right.
We
want
to
validate
this
as
much
as
possible.
Now,
after
we've
tested,
we
have
a
real
validated
container
image.
We
start
to
push
the
application
over
the
stack.
The
next
stage
is
our
pre
deployment.
Before
we
deploy
an
application
to
any
environment,
we
carry
out
a
set
of
steps
that
validate
that
that
application
should
be
deployed
into
that
environment.
B
Now
there
are
simple
things
like
manual
approvals
right
when
you're
doing
continuous
delivery,
you
can
wait
between
environments,
wait
before
productions
so
that
your
business
actually
wants
to
release
it
and
that's
a
type
of
pre
environment
pre
deployment
check.
But
there
are
many
more
powerful
things
that
the
open
shift
API
can
establish
for
you.
B
We
do
things
like
cluster
health
checks
and
we've
turned
them
into
very
complex
cluster
health
checks
so
that
we
can
do
things
like
patch,
the
CVE
that
came
out
this
last
week
during
the
business
day,
while
teams
were
deploying
to
production
by
the
time
our
seeso
contacted
my
team
and
asked
you
know
hey
what
are
you
guys
doing
about
this
thing?
We
said
yeah,
we've
already
got
a
plan
in
place,
we're
pushing
up
our
environments,
we'll
have
it
deployed
in
production.
Today
this
it's
just
once,
you
have
built
very
solid
pre
deployment
checks.
B
You
can
do
this
type
of
work
during
the
work
day.
You
can
do
it
live
in
production
because
you
know
that
you're
not
gonna
affect
the
development
teams
that
are
working
on
top
of
you.
So
then,
again,
I
just
want
to
stress
how
effective
the
ServiceNow
they
excuse
me.
The
open
shift
API
will
get
to
ServiceNow.
In
a
moment
the
open
shift
API
has
been
for
allowing
us
to
deploy
our
code
effectively.
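The talk doesn't show what those cluster health checks look like. As a hedged sketch of the simplest possible gate, checking that every node reports Ready before a deploy proceeds, using data shaped like the `items` array from `oc get nodes -o json`:

```python
def cluster_healthy(nodes):
    """Pre-deployment gate: refuse to deploy unless every node is Ready.
    `nodes` mirrors the shape of `oc get nodes -o json`["items"]."""
    for node in nodes:
        conditions = {c["type"]: c["status"]
                      for c in node["status"]["conditions"]}
        if conditions.get("Ready") != "True":
            return False
    return True

sample = [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "Unknown"}]}},
]
# The second node is not Ready, so the gate should refuse the deploy.
```

A real check of the kind described, one that tolerates patching during the business day, would look at much more (pod disruption, router health, pending evictions), but the gating pattern is the same.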
As I've said, in each of these stages we've got this built-in happy path, and what we do is actually use Jenkins to generate OpenShift manifest objects and apply those. We've learned things along the way about removing annotations and things like that. But we have this happy path so that developers, as they come onto the platform, don't actually have to know anything about a Kubernetes YAML object, a DeploymentConfig, or a route. All of this just happens for them.
B
It
gets
generated
based
on
the
defaults
that
happen
from
the
pipeline.
So this again creates a very low barrier to entry, and then, as teams gain complexity and understanding, they can start participating in the construction of their applications on OpenShift. We've developed many different types of deployment strategies where they can provide additional manifest objects. Things like a headless service are very standard things you'll apply alongside your application, or, say, a PVC request.
B
Anything
like
that
would
can
be
added
along
with
your
application
and
as
openshift
has
evolved,
we
have
evolved
with
it.
So
that
means
that
we've
been
able
to
have
teams
deploy.
We
started
with
OpenShift
3
3
about
a
year
and
a
half
ago,
then
we
developed
most
of
the
stuff
against
open
ship,
3
5.
But
then,
by
the
time
we
had
development
teams
coming
on.
B
We
have
people
actually
asking
us
about
using
staple
sets
using
some
daemon
sets
on
node
selectors
and
we
were
able
to
enable
that
method
as
well
just
by
tuning
some
parameters
say
where
we'll
pick
up
these
objects
from
and
tying
those
things
together.
But
those
things
don't
happen
when
you
start
coming
on
to
the
platform
for
the
first
time
right
when
you're
first
experiencing
open
ship,
you
don't
say
I
want
to
deploy
a
daemon
set
to
a
set
of
nodes.
B
That's
doing
this
this
right,
that's
a
complex
ask,
but
it's
an
important
thing
for
2
for
you
to
support
within
your
development
workflows
so
that
eliminating
that
barrier
for
entry
has
allowed
people
to
come
in
and
learn
and
then
start
being
able
to
participate
on
the
platform
and
then,
after
we've
deployed.
Of
course,
we
verify
right
start
checking
off
the
boxes
that
the
application
is
doing,
what
you
want
it
to
do
again.
This
is
where
the
tie
between
Jenkins
and
OpenShift
has
been
really
helpful.
B
For
us,
we
use
a
number
of
the
plugins
that
are
on
github.
We've
helped
contribute
to
some
of
these
and
as
well
as
running
code
scans,
integrated
tests
that
we
pick
up
by
convention
within
the
application
repository
that
simple
pipeline
file
that
I
showed
you
at
the
beginning
would
actually
automatically
look
for
test
a
test
directory
pick
that
up
send
it
a
bearer
token
and
attempt
to
run
tests
against
the
OpenShift
proxy
API
as
new
pods
are
spinning
up.
These
are
all
just
default.
B
Behaviors
that
happen
out
of
the
box
that
teams
can
get
without
any
cost,
so
this
verification
allows
them
to
participate
in
building
good
tests
against
their
application,
so
they
can
validate
before
they
go
to
production.
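The Kubernetes/OpenShift API server exposes a pod proxy endpoint (`/api/v1/namespaces/{ns}/pods/{pod}/proxy/...`), which is presumably what those convention-based tests hit. A hedged sketch of building such a request with a bearer token (hostnames, namespaces, and the token are made up; no network call is made here):

```python
def proxy_smoke_request(api_server, namespace, pod, path, token):
    """Build the URL and headers for hitting an application endpoint
    through the API server's pod proxy."""
    url = (f"{api_server}/api/v1/namespaces/{namespace}"
           f"/pods/{pod}/proxy/{path.lstrip('/')}")
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = proxy_smoke_request(
    "https://openshift.example.com:8443", "platform-dev",
    "claims-api-3-abcde", "/health", "s3cr3t-token")
```

Going through the proxy means a freshly created pod can be probed before it is exposed through a route, which fits the "as new pods are spinning up" description.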
Now, this strategy has also allowed people to learn and to participate with us in growing this practice. We had a team for whom this proxy API wasn't good enough, because they wanted to validate within a browser, and at HealthPartners we have this wonderful event that we do twice a year.
B
But
now
other
teams
are
consuming
that
as
well,
and
so
exposing
these
things
allows
teams
to
gain
that
complexity
and
write
things
that
help
them
and
then
develop
capabilities
that
can
be
spread
across
your
organization.
Now
within
any
large
organization,
you'll
have
a
change
management
system
and
within
many
of
your
organization's
I
can
assume
your
change.
Management
system
is
a
little
standoffish.
B
Maybe
you
have
a
wonderful
group
I'm,
not
talking
bad
about
any
of
your
compliance.
Folks,
I
really
they're
they're,
really
my
friends
now,
but
it
can
be
difficult
to
interact
with
these
kind
of
change
management
systems
right.
We
have
ServiceNow
in
our
environment
and
we
use
ServiceNow
for
everything
when
it
comes
to
an
odd
perspective
right
when
we
have
external
auditors
come
in,
they
start
from
ServiceNow.
They
start
asking
us.
You
know
validates
that
you
did
everything
that
you
said.
You've
done
for
this
change
right.
B
So
that
means
things
like
testing
results,
work,
tracking
information
from
JIRA,
get
committed,
information
scan
results,
deployment,
time,
deployment,
results
approvers.
They
want
all
this
information,
but
within
the
old
world,
it's
hard
to
manage
that
when
you're
doing
these
long,
drawn-out
processes
that
are
fraught
with
manual
errors
right
and
so
what
we
transform
this
into
is
rather
than
the
system,
that's
locked
down
that
it's
hard
to
access.
We
start
a
conversation
with
them
and
we
turned
our
change
management
system
into
more
of
an
engineering
journal.
So
the
service
now
has
this
wonderful
API
table
API.
B
Basically,
if
you've
ever
used
the
tool,
anything
you're
doing
in
the
UI
of
ServiceNow
can
be
done
over
the
API.
So
we
worked
that
that
group
to
just
expose
that
and
expose
what's
called
the
standard
change
request
process
which
allows
us
to
build
templates
for
standard
changes.
So
just
like
you
would
do
in
any
engineering
journal.
You'd
have
a
you
know
your
experiment,
your
write
out
all
the
steps
that
you're
doing
our
change
management
processes.
Look
like
that.
B
Now,
when
a
when
a
build
starts
going
into
production,
it
starts
recording
and
sending
information,
that's
happening
from
the
pipeline
into
our
into
our
ServiceNow
tool.
That's
things
like
managing
these
tables
of
change
and
tasks,
updating
the
state
way
of
when,
when
things
actually
happen,
we
attach
files
that
show
that
it
was
tested
in
these
different
environments.
We
attach
the
full
build
log
if
the
build
was
successful
or
if
the
build
failed
to
the
service
now
change.
B
So
this
provides
for
us
a
simple
entry
point
for
making
all
the
necessary
API
calls
between
our
pipeline
and
the
change
management
platform.
After doing this work, this is where we really started to think about using OpenShift, and this platform that we've built, to start to change the way that our organization thinks about technology. I don't have time to show you any of the code (the last slide will have links to where the code will be), but code doesn't transform, right? A number of people have talked about this today.
B
So
through
all
this
work
with
the
pipeline
lowering
the
barrier
to
entry,
what
have
we
done
in
even
just
the
seven
months
that
we've
been
actively
bringing
people
out
of
the
pipeline
actively
bringing
people
onto
the
element
shift
platform
we
can
get
from
commits
into
production.
Doing
all
these
automated
processes
in
18
minutes.
B
We've
done
this
multiple
times,
proving
it
out
showed
our
security
team,
our
compliance
team
they're
all
good
with
it,
and
so
a
single
line
code
change
has
gone
from
2016,
taking
about
36
hours
to
get
to
production
down
to
18
minutes
in
just
seven
months.
We've
now
gotten
to
the
point
where
application
development
teams
are
deploying
to
production
ten
times
per
day
now.
B
Right
once,
you
start
knowing
what
people
are
doing,
how
people
interacting
with
the
platform,
you
can
start
visualizing
how
people
are
working
and
you
can
use
that
to
create
a
positive
feedback,
loop
to
measure
teams
or
more
effectively
measure
how
well
you're
doing
what
you're
doing
so.
This
means
we
can
slice
and
dice.
The
data,
however,
want
this
is
a
visualization
of
the
production
deployments
from
the
third
of
the
fourth
see
we're
using
red
asses.
Oh
and
I
saw
a
product
there,
so
you
can
see
that
it'd
be
really
easy.
B
Hey
if
you're
a
team,
hey
I
can
visualize.
Just
my
production
deployments
or
I
can
visualize
all
the
deployments.
For
this
application
right
once
you're
recording
everything
that's
happening,
you
can.
You
can
basically
own
this
data
and
really
use
this
data
to
help
you
understand
the
system
and
then,
as
I
said,
it
really
starts
giving
a
feedback
back
to
this
platform
team.
We
use
this
chart,
which
is
just
our
number
of
deployments,
to
any
environment
during
the
day
to
be
a
feedback
mechanism
for
our
team.
B
Now
we're
not
quite
concerned
that
people
didn't
deploy
over
Thanksgiving,
but
that
little
that
little
dip
in
November
was
actually
something
that
caused
us
to
go
out
and
interact
more
with
some
of
our
development
teams
and
say:
hey
what's
going
on
in
the
platform?
Are
you
having
some
concerns
and
you
can
see
that
since
then?
We've
really,
you
know,
jump
back
up.
We've
had
a
little
more
engagement
since
then
we
didn't
have
a
mandate
from
upper
management
to
do
this
work
for
across
our
whole
platform.
B
So
it
was
up
to
us
to
create
a
platform
that
was
enticing
for
people
to
come
to,
and
data
like
this
allows
us
to
make
sure
that
we're
accomplishing
that
goal.
Now
these
pipelines,
with
an
open
shift
using
the
API
in
this
way,
have
created
for
us
a
very
flexible
approach.
Now
in
the
past,
you
know
these.
B
These
shared
these
large
shared
environments
were
very
difficult
to
operate
and
manage,
and
we
didn't
have
a
central
automation
mechanism
now
that
we
solved
this
for
applications,
we
took
that
and
we
started
to
apply
those
same
principles
to
how
we
automated
other
types
of
processes.
So
now
we
use
these
pipelines
to
manage
our
config
Maps
and
our
secrets
on
an
open
shift.
We
use
this
for
administrative
actions
like
team
onboarding,
namespace
creation.
We
also
use
that
these
pipelines
to
manage
our
open
shift
infrastructure.
B
We
orchestrate
ansible
tower
jobs
through
Jenkins
pipelines
that
that
manage
open
shift
run
all
that's
how
we
were
able
to
kick
off
that
patch
for
the
CV,
basically
immediately
and
we've
even
gone
back
and
started
to
manage
our
old
j2ee
environments
using
these
pipelines
as
well.
It
started
to
just
create
a
way
that
we
can
manage.
Basically
everything
through
these
validated
standard
approaches
that
really
provide
us
like
that
great
data.
So
just
a
couple
notes:
I
talked
about
how
this
flexibility
has
allowed
us
to.
B
You
know,
had
allowed
developers
to
consume
new
features
within
the
platform
and
that's
what
my
team
is
really
looking
for.
How
are
we
gonna
build
for
the
future?
How
are
we
gonna
allow
teams
to
do
more
complex
things
as
the
platform
develops?
Alongside
of
us
right,
it's
to100
came
out
we're
looking
at
Jaeger
we're
looking
all
the
wonderful
cloud
native
computing
foundation
projects
and
seeing
how
they
might
be
able
to
assist
us
in
building
really
good
applications
that
are
serving
the
needs
of
our
business
and
so
building.
B
But
one
of
the
things
we're
really
looking
to
do
in
2019
is
to
bring
that
feedback
loop
back
and
have
our
developers
contribute
to
the
projects
that
we
all
use.
Things
like
Prometheus,
the
openshift
Origin
participating
with
Jenkins
plugins
that
we're
using
and,
as
I
said,
you
know
all
the
wonderful
CN
CF
projects
that
are
out
there.
So
if
I
had
to
boil
this
down
into
six
points,
the
last
one
hey
we're
hiring,
you
want
to
come,
live
in
Minnesota,
it's
great.
B: One of the things that we did: we had some very specific examples of things that we wanted to open-source, so you'll see two of our projects here, the first two, the ServiceNow plug-in, and then we've open-sourced a library that has allowed us to unit test everything that we run inside of Jenkins. For these two things, we kind of wrote out, you know, what is this code doing?
It's going to be important for us, so we got a coalition of people together who could go to our legal group with a unified front and a unified statement saying, you know, this is a normal process that other companies are participating in, and we found other companies in our sector and our area that are doing the same thing. In the end, they were agreeable; they were down with it.
A: Hello, that was a very good example, and I just wanted to know how developers develop from their machines. You showed that from commit to production it's 18 minutes, which is great, fantastic, but I was wondering: do developers develop on their machines with something like Minishift, or directly on the cluster?
B: Our local developer workstations are kind of locked down, and that has made it difficult to run things like Minishift to validate locally. So right now we allow developers to work locally and connect out to other services running in our development environment. If you've got a service that's calling out to multiple other ones, that's how that would work.
D: So it sounds like you're using Jenkins kind of as a replacement for the source-to-image functionality. Can you talk about, once I have source code, what your process is for setting up an environment? In your Jenkins environment, do you have multiple Jenkins servers, and then your OpenShift project and those resources?
B: That's a great question. It was something I had to cut out of this presentation due to time, but we give each development team their own Jenkins instance, which they manage through those administrative jobs on a central machine. So if you need to spin it up or upgrade it, you can go to that central place and manage the Jenkins server. Then from there we actually do use the source-to-image process.
We use the Jenkins Kubernetes plug-in to spin up dynamic pods to package artifacts, and then we send those packaged artifacts into a source-to-image build that combines them with a base container that my team provides. So we have a base Nginx, we have a base Java application; we have a base for basically all the different types of applications we support.
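The binary source-to-image flow described here is typically driven with `oc start-build --from-dir`, which uploads pre-built artifacts into an S2I build against a base image. A hedged sketch (the build config name, artifact path, and namespace are made up) of how a pipeline step might assemble that command:

```python
def s2i_binary_build_cmd(build_config, artifact_dir, namespace):
    """Build the `oc` invocation for a binary S2I build that layers
    packaged artifacts onto a team-provided base image."""
    return ["oc", "start-build", build_config,
            f"--from-dir={artifact_dir}",
            "-n", namespace, "--follow", "--wait"]

cmd = s2i_binary_build_cmd("claims-api", "target/", "platform-dev")
# A pipeline step would then run it, e.g. subprocess.run(cmd, check=True)
```

Keeping the compile step in a dynamic Jenkins pod and only handing the finished artifact to S2I matches the split the answer describes: Jenkins packages, OpenShift assembles the image.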
That's a great question. Thank you.