From YouTube: OpenShift Case Study - Stephen Boyd (Electrical Training Alliance) OpenShift Commons Gathering 2021
Description
OpenShift Commons Gathering 2021
OpenShift Case Study: Electrical Training Alliance
Guest Speaker: Stephen Boyd (Electrical Training Alliance)
https://commons.openshift.org/index.html#join
A: Some 15 years ago we were still publishing and selling books, and then the World Wide Web crashed onto the scene and everybody started to say, hey, let's do these things online. So we became somewhat of a software company, embracing technologies such as Moodle for our learning management system, plus several other homegrown applications to support test generation in a proctored manner.
A: As you can imagine, working with those, you're dependent on other technologies. With Moodle, for instance, you can only do so much before you're constrained by their technology. Although we've greatly modified it (I lovingly call it our Frankenmoodle), we still use it, and we're still dependent on those technologies.
A: Recently, with the demand for workers, we were given an opportunity to put our program online so that apprentices could actually get to their curriculum throughout the country. Some 40,000 people a year, on average, apply for apprenticeship and take their first-year apprenticeship program through a process that we're developing. Part of the grant required us to create tracking software.
A: Some background there. The slides do go into a little bit of this, but I'll just talk through it. Imagine managing a training center: there are 250 of those across the country, each with anywhere from 15 apprentices to several thousand in some of our larger programs.
A
You
know
you
have
to
deal
with
how
many
hours
have
they
had
training?
What
curriculum
are
they
taking?
What
are
their
online
classes
and
so
forth?
All
that
has
to
be
tracked,
so
the
software
that
we've
developed
or
wanted
to
develop
would
replace
some
of
that
replacing
it
because
we
have
systems
out
there
that
all
use
some
of
them
spreadsheets
diane
laughed
at
one
point,
because
I
told
her:
we
have
an
application
out
there
that
uses
foxpro
for
some
of
you
senior
developers
out
there.
A: We can definitely work with those, but the problem was that all of our users ended up having to double-enter their data, or sometimes triple-enter it: they would maintain the apprentices in their local software, then go to our learning management system and enter their users again; then they would get grades, export those, and carry them back to their own systems.
A
So
we
wanted
to
create
a
system
that
allowed
for
more
natural
transitions
and
and
once
and
done
technology.
If
you
will-
and
we
we
took
this
opportunity-
I
had
been
playing
with
okd
for
some
time.
Looking
at
replacing
our
lms's
or
or
working
within
our
lms
is
enhancing
them
in
some
way,
and
this
became
the
prime
opportunity
for
us.
A: So we were able to create a system that enhances those. We have several monolithic applications, and their development times are very slow: we take several weeks to several months to even deploy features. We are moving in an agile way, but all of that technology is just slow going, with the testing and everything else. Wanting to improve that process, we made some selections as I got into it. We work with outside development partners, and I had done all the business analysis work to get this up and running and handed it over to our seniors.
A
You
know
our
our
are
basically
our
project
management
office
and
then
covet
hit,
and
they
said
that
we're
halting
all
external
development,
but
since
you're
a
developer
as
well.
Why
don't
you
just
go
ahead
and
continue
developing
this,
which
was
a
little
bit
more
than
I
was
planning
on
taking
on
at
the
time.
A: Okay, so this is kind of what we worked with, and we've got quite a bit going on. We chose a microservices approach. We decided to use Quarkus for most of our stuff; we're a lot of Java on our back end. We do have some Node.js and a few other items, but, as you know, our Moodle would be PHP. None of that is living here; these are just augmenting services, and they work in concert with all of our other systems. But we did go ahead and choose Quarkus, and it's very fast.
A: We are using Atlassian Bitbucket for our repo, simply because we also use Jira and Confluence for our documentation and support. We did want to use CI/CD from the start, since we were doing a complete greenfield project.
A: It is capable of doing everything. However, we wanted to take a look at Argo CD as well, because I personally have a love-hate relationship with YAML. Diane says she loves the YAML; we like that. But I like version control on all things, and Argo CD gives us the ability to have our deployment YAMLs version-controlled and to synchronize our deployments across our environments. So what do we have? We have two clusters, both OpenShift Dedicated. The weirdest thing for us was that our non-production cluster is about three times the size of our production cluster.
A
The
reason
being
is
all
of
your
stuff
runs
in
your
non-prod
cluster,
your
pipelines
and
your
image
streams
and
all
of
the
content
that
you're
doing,
including
your
testing,
your
image
building
and
all
that.
So
we
actually
need
more
resources
for
our
non-production
cluster
than
we
do
for
our
production
cluster.
A
However,
our
pipeline
takes
it
at
any
time
we
commit
to
our
bit
bucket.
We
automatically
start
a
build
and
we
have
three
name
spaces
inside
our
non-prod
cluster.
We
have
a
development
branch,
a
test
and
a
usat
development
just
is
where
it
gets
built.
It's
where
everything
gets
done
right
when
we
kick
off
with
the
bit
bucket,
and
it
allows
me
to
make
sure
that
my
services
are
seeing
each
other
once
they're
talking
and
I
can
just
look
at
things
and
make
sure
everything's
working.
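As a rough sketch of that trigger, a commit-driven build on OpenShift can be wired up with a webhook trigger on a BuildConfig. The service name, repository URL, builder image, and secret below are illustrative placeholders, not the actual setup described in the talk:

```yaml
# Hypothetical sketch: a BuildConfig in the development namespace that
# starts a build whenever Bitbucket fires its webhook.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: apprentice-service        # illustrative service name
  namespace: development
spec:
  source:
    git:
      uri: https://bitbucket.org/example-org/apprentice-service.git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: quarkus-builder:latest   # placeholder builder image
  output:
    to:
      kind: ImageStreamTag
      name: apprentice-service:dev
  triggers:
    - type: Bitbucket
      bitbucket:
        secretReference:
          name: bitbucket-webhook-secret  # shared secret in the webhook URL
```

The webhook URL that Bitbucket calls is exposed by OpenShift once the trigger and secret exist.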
A: I can promote and tag those so that they end up in test. Once they're in test, we actually run our end-to-end tests, and we use JMeter for that. We do spot testing with Postman, but for the most part we can script JMeter into our pipeline, so we do build out our JMeter tests inside the test environment. All of those services are private; there's no routing to them. You just console in and deal with whatever you want to do.
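A scripted JMeter step in a pipeline might look something like the following Tekton Task sketch; the container image and test-plan path are assumptions for illustration, not the talk's actual configuration:

```yaml
# Hypothetical sketch: a Tekton Task that runs a JMeter test plan in
# non-GUI mode; a non-zero exit fails the pipeline run.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: jmeter-e2e
spec:
  workspaces:
    - name: source                # holds the checked-out test plans
  steps:
    - name: run-jmeter
      image: justb4/jmeter:5.5    # illustrative community JMeter image
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/sh
        # -n non-GUI, -t test plan, -l results log, -e/-o HTML report
        jmeter -n -t tests/e2e.jmx -l results.jtl -e -o report
```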
A
You
can
port
forward
to
do
some
testing
once
we
promote
those
and
we
feel
like
they're
stable
internally,
and
we
want
our
external
partners
who
work
with
our
learning
management
system.
Learning
record
store
to
actually
access
them.
We
promote
it
once
more
to
the
uat
environment
and
I
kick
off
and
let
them
know
while
it's
in
uat
they
have
access
to
it.
While
it's
not
on
this
diagram,
we're
using
red
hat
open
api
management,
so
it's
another
managed
service
that
basically
gives
you
an
on
premise.
A
Three
scale
deployment,
so
all
of
our
apis
are
backed
using
three
scale,
so
that
is
how
they
would
access
those
part
of
our
problem
was
we
want
everything
to
be
destroyable.
That's
the
whole
beauty
of
the
openshift
world
is
that
you
know
you
want
it
to
to
be
long,
living,
scalable,
resilient
and
recoverable
quickly.
So,
while
smes
at
red
hat,
are
maintaining
the
cluster,
we
want
to
make
sure
that
if
there
was
a
catastrophic
event
and
the
class
cluster
goes
down,
we
could
rebuild
everything.
A
There
are
lots
of
options
for
those,
including
you
know,
amazon,
buckets
and
everything
else,
but
we
worked
with
a
group
called
cockroach
cloud,
cockroach
labs
to
create
for
us
some
some
cloud
produced
postgres
compatible
databases
and
they
have
multiple
ways
to
consume.
So
it
worked
for
us
if
you
see
our
development
environment
in
our
test
environment,
we're
using
a
developer,
a
cloud
account
which
is
free
for
the
most
part,
you
get
a
very
small
there's,
no
guarantee,
there's
no
uptime,
but
it
lets
me
do
my
testing
and
it
deploys
it
to
cloud.
A
Just
like
my
production
is
uses
the
same
technology
when
we
get
to
uat.
We
need
a
little
bit
more
longevity
for
our
data,
so
we're
using
their
in
service
operator
to
maintain
a
couple
of
nodes
inside
that
we
use
for
backups
and
everything
else
just
for
uat,
so
that
the
developers
can
have
some
long
lasting
data.
A: When we do move to production, in our production cluster we are paying for a single-region, three-node CockroachDB Cloud cluster, which again is a Postgres-compatible database. What it allows us to do is make sure that we could rebuild our cluster within, I'd say, hours: it takes a couple of hours to spin up an OpenShift Dedicated cluster, but once those clusters are available we can redeploy everything, because nothing lives within the cluster as far as our images go.
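Because CockroachDB speaks the PostgreSQL wire protocol, a Quarkus service can point a standard Postgres datasource at it. The host, database, and credential names below are placeholders, not the actual configuration:

```properties
# Hypothetical sketch: Quarkus application.properties pointing the
# standard PostgreSQL JDBC driver at a CockroachDB Cloud cluster.
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=app_user
quarkus.datasource.password=${DB_PASSWORD}
# CockroachDB Cloud requires TLS; sslmode enforces it on the JDBC URL.
quarkus.datasource.jdbc.url=jdbc:postgresql://example-cluster.cockroachlabs.cloud:26257/appdb?sslmode=verify-full
```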
A: So all of that works because we're using these technologies. Our CI/CD builds the image on our OpenShift cluster and pushes it to Quay instead of Docker Hub or some other registry. Again we went Red Hat; we were working with Red Hat, and Red Hat gives you the ability, using Quay, to get security scanning. It kicks in immediately, once an image gets pushed to Quay.
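Pointing the build output at Quay rather than the internal registry is roughly a matter of the BuildConfig's output stanza plus a push secret; the organization and secret names here are made up for illustration:

```yaml
# Hypothetical sketch: a BuildConfig output section that pushes the
# finished image to Quay, where it is security-scanned on arrival.
output:
  to:
    kind: DockerImage
    name: quay.io/example-org/apprentice-service:latest
  pushSecret:
    name: quay-push-secret   # docker-registry secret holding Quay credentials
```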
A: So we had a couple of considerations. We have multiple users on this technology, including external users, and we did not get rid of our monolithic applications, so we needed to measure impact across the board. Our existing process for an applicant coming in is that they apply in person: they go to a training center and apply, and there are about 50,000 of those a year.
A
I
was
a
little
off
on
my
memory,
50
000
of
those
a
year,
and
they
they
they
would
go
into
their
local
training
centers
and
apply.
They
would
have
to
go
to
another
training
center
and
apply
if
they
wanted
to
go
to
two
regionally
closed
training.
Centers
for
us,
baltimore
and
dc
are
very
close
and
you
could
walk
into
both
of
them
and
apply
after
a
roll
out
of
this
system.
They
can
now
apply
to
multiple
programs
anywhere
in
the
country
by
filling
out
one
application.
A: We lost control of our slides there. Moving on to entering on-the-job training reports: as an apprentice goes through their on-the-job training, they have to fill out evaluations. Those evaluations then go to their journey-level worker, who also fills out an evaluation, and they get submitted to the offices. All of these offices have very small staffs, anywhere from one to sometimes 15 people, but you can imagine 36,000 apprentices a week entering those reports across 250 training centers.
A: Finally, advancement operations. As you go through the process (a five-year program for inside; a three-year program for outside, residential, and telecommunications), you have to look at: have they met all the qualifications for education? Have they met all the qualifications for on-the-job training? Have they done any certifications, whether regional or otherwise? You can imagine that for that same 36,000 a week you'd have to go collect all of this data and then figure out who's eligible for advancement throughout the program. With the new system, all of that is done for you.
A
It
automatically
takes
a
look
at
what
you've
set
up
as
your
guidelines
and
parameters
and
all
those
microservices
go
through
the
process.
So
devops
metrics
a
little
bit
more
applicable
to
what
we're
talking
about
here.
So
again,
we
did
not
replace
any
of
our
monolithic
applications.
We
are
supporting
those
with
some
micro
services,
so
our
current
process
for
our
non-microservice
based
deployments,
a
bug-
takes
us
on
average
one
to
two
weeks,
whereas
with
the
microservices
we're
rolling
out
fixes
one
to
eight
hours.
A
New
features
can
take
anywhere
from
two
to
four
weeks
for
our
learning
management
systems,
our
learning
record
stores,
test
generators
and
other
tools,
whereas
a
new
feature
with
the
microservices
anywhere
from
two
to
four
days
and
that's
high.
In
most
cases,
we
can
have
something
out
in
a
day
it's
just
rapid
and
we
do
go
with
that
model.
You
know
we
we
roll
out
our
skateboards
and
our
bicycles
and
our
our
harley-davidson's
into
a
ford
rafter.
You
know
we.
A: So in terms of OpenShift, whether it's OKD, OpenShift Container Platform, or, in our case, a managed service with OpenShift Dedicated, it allows us to rapidly deploy services to augment, support, or even fully stand up new technology. You can deploy front-end sites, middleware, or back-end services, all of them supported securely; in our case, we're using 3scale through the Red Hat OpenShift API Management service.
B: Well, I think that was an awesome way of doing that, and thank you for segueing through the slide snafus and all of that. I'm looking to see if there are any questions from the audience in person, and apparently the folks attending virtually are not figuring out how to use the Q&A that well, or they are surprisingly silent.
B: I am always thrilled when we get to see architectural diagrams like you've shown and really get the deep dive on what people are actually doing. A great big shout-out to Cockroach Labs and the work that you're doing. Someone's just asked: why do you like Argo more than Tekton?
A
It's
not
a
like
the
technology.
We
we
actually
started
with
techton
and
and
use
it
quite
a
bit
once
we
kind
of
stabilize
on
on
a
service
we're
using
argo
only
for
the
yaml
management
deployments,
maintaining
version
control
because
it
does
use
customized
io
built
in
right
into
the
container.
It's
it's
more
of
a
preference.
You
can
use
either
one
for
for
your
deployments.
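For context, the Kustomize layering alluded to here is typically a shared base plus per-environment overlays; the paths and image name below are illustrative only, not the talk's actual repository layout:

```yaml
# Hypothetical sketch: a UAT overlay's kustomization.yaml layering
# environment-specific patches over a shared base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: uat
resources:
  - ../../base                # shared Deployment/Service/Route manifests
patches:
  - path: replica-count.yaml  # e.g. scale up for partner testing
images:
  - name: apprentice-service
    newName: quay.io/example-org/apprentice-service
    newTag: uat-stable        # tag promoted from the test namespace
```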
B: Doesn't that make it fun? And boy, I hate the whitespace issues with YAML, so it's quite a fun learning curve. I actually know one of my friends and colleagues from another startup I was in, Indy.net: Brian Ingerson, who is one of the authors of YAML, so shout-out to Brian and his cohorts.
B
You
could
probably
find
them
on
wikipedia
somewhere
or
in
a
coffee
shop
in
seattle,
so
we
probably
owe
them
a
lot
more
debts
of
grab
gratitudes
and
free
coffees
for
all
of
the
work
that
they
did
and
and
the
pains
in
our
lives
that
it
has
become.
There
is.
A: There is one other significant advantage to using Argo CD in your production environment, if nothing else, and that is the ability to lock it at a specific level. While Tekton is great for doing your CI/CD and getting those deployments out there with Kustomize, Argo CD is always monitoring your versions.
A
So
if
someone
with
the
authority
accidentally
either
accidentally
or
maliciously,
does
a
deployment
that
you're
not
ready
for
customize
will
see
that
change.
Argo
cd
will
kick
in
and
bring
you
back
to
the
approved
version
automatically.
So
there
are
some
advantages
to
using
argo
cd.
It's
a
flavor,
it's
a
preference!
Really.
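That drift-correction behavior corresponds to Argo CD's automated sync policy; here is a minimal sketch, with a hypothetical repo and application name:

```yaml
# Hypothetical sketch: an Argo CD Application that auto-syncs and
# self-heals, reverting any out-of-band change back to the Git state.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apprentice-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://bitbucket.org/example-org/deploy-manifests.git
    targetRevision: main
    path: overlays/production    # Kustomize overlay for prod
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual or unapproved in-cluster changes
```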
A: I like Tekton; it's very clear. When you start getting up there in the number of services, though, think about having a pipeline for your deployment, a pipeline for your patching, and then a pipeline for your building: you've got three, maybe four pipelines just for every service, and if they're all in the same namespace, that's quite a long list of pipelines to manage. So again, personal preference; they both do wonderfully, and I've had no problems with either.
B: All right, and there's one more question coming in, and I think we have time, depending on how long you pontificate. What criteria did you use to pick a storage provider, and what was the biggest challenge you had with storage?
A: Right, we looked at multiple options. We actually played with Crunchy; internally we went just plain vanilla Postgres with containers. Ultimately, what it boiled down to for me was that, again, we're a very small not-for-profit organization: it's me and one other guy working internally with our external developers, so going with a managed service provider made sense for us. But the operator works great as well.
A
So
even
internally,
I
would
say,
find
out
what
your
up
times
are
your
survivability
with
the
national
company
that
we
have
in
terms
of
supporting
250
program,
240
programs
countrywide
at
some
point,
as
the
popularity
spins
up,
we
are
also
going
to
need
to
have
regional
data
and
cockroach
does
provide
a
very
native
cloud
ready
system
for
going
natively
across
multiple
regions
right
now.
We're
single
region,
three
node
cluster,
but
cockroach
labs
assures
me
that
whenever
the
time
comes,
they'll
be
right
there
to
make
sure
that
I
can
spin
up
another
node
in
another
region.