Description
This CD Foundation Meetup covers how Robert Krall from Rocket Software built out a test automation process using Jenkins and Pytest. Robert’s journey into continuous delivery started as a developer and shifted to CD automation when a better testing process was required. He will show us how to build out an automated testing pipeline that spans multiple operating systems, even connecting to z/OS and DB2 for big data applications. If you are striving to automate your test steps from your CD Pipeline, this meetup will be extremely helpful in seeing how to put it all together.
A: Okay, looking good. All right, in the interest of time, and for everybody who did get on here at 11 o'clock, we'll go ahead and get started.
A: I have been doing a lot of phone calling and talking to people about our open source project Ortelius, and through that experience I met Robert. We started chatting, he told me about his background, and we have the opportunity here today to learn from somebody who's been a software developer.
A: It's my opinion that, while orchestration tools are really important, it's what we orchestrate them to do that creates our pipeline and makes it important. So I bugged him and said: could you please do a presentation on pytest and how you've built this out? And on that, I will introduce Robert.
B: Sweet, thanks Tracy. So I put together a little slideshow before we get into the real fun of it. The agenda: a quick overview of myself, a high-level background on the product I'm testing, our current tech stack, our future tech stack, and a quick overview of pytest if you're not familiar with it.
B: Then I'll show you a Jenkins pipeline example, run the demo, review the Jenkins pipeline script, and then we can do some Q&A and you can ask me questions.
B: So, a little bit about myself. I have about 10 years of SQL development and IT experience, using both Microsoft SQL Server and DB2 for z/OS. Out of those 10 years, I've spent seven doing a lot of ETL, data mining, and data modeling jobs, including some performance work.
B: For example, I've sped up ETL processes that were running around 15 minutes to sub-three minutes, I've written really complex T-SQL statements, and I've converted procs that were running five minutes to sub-30 seconds. So I try to make sure that performance is going very well and fast.
B: I've worked in all kinds of different industries, from banking to paper, to consulting, to law firms, to insurance, to academia and state government. A little bit about my education background: I got a master's degree a few years ago in data science, with a database concentration, at the University of St. Thomas in St. Paul, Minnesota.
B: Before that, I got my Bachelor of Science in Management Information Systems at Winona State in Winona, Minnesota, in the spring of 2009. I'm working on an automation project right now, and our effort is to convert all manual tests to automated tests. We started this project back in January of 2019 for Rocket Software.
B: Everything currently runs based on a time schedule. So, a quick high-level overview of the product I'm working on. I work for Rocket Software, which the majority of you have probably never heard of, but you've probably heard of IBM. We are a solutions provider for IBM, so we create a lot of software for them, and one of the products we deal with is the IBM Security Guardium product.
B: We have Jenkins running our pipelines, we use the requests library in Python a lot, and we use Notepad++ once in a while. In the future we'd like to maybe bring in some Selenium, definitely get into Docker, and then Artifactory would be in the near future of moving our automation.
B: So, a quick overview of pytest, if you've never heard of it: pytest is a Python testing framework. Here's the URL to pytest.
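For anyone new to it, the shape of a pytest suite is very small. A minimal illustrative example (the function under test is a toy stand-in, not from the talk): pytest discovers files named `test_*.py` and functions named `test_*`, and plain `assert` statements do the checking.

```python
# test_sample.py -- pytest collects this file automatically because of its name

def task_is_running(status_line):
    # toy stand-in for a real status check; illustrative only
    return "ACTIVE" in status_line

def test_task_is_running():
    # plain asserts are all pytest needs; failures show rich introspection
    assert task_is_running("GUARDTSK ACTIVE")
    assert not task_is_running("GUARDTSK STOPPED")
```

Running `pytest` in that directory collects and runs every such test.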
B: So here are some examples of REST commands we use. We can stop and start tasks on z/OS. We can update configuration files, like PDS members on the mainframe. We can install and uninstall policies on the appliance, we can run JCL, we can run SQL statements, we can copy data sets, and we can get a list of all the members in a PDS.
B: We can stop and start the IMS control region, we can get running task IDs if we want, we can query log files, and we can extract the LPAR, or system, time of z/OS.
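As a rough sketch of how commands like these can be driven with the requests library: the snippet below issues an operator command through a z/OSMF-style console REST endpoint. The host, console name, and credentials are hypothetical, and the real suite's wrappers are more elaborate than this.

```python
import requests

ZOSMF_HOST = "https://zosmf.example.com"  # hypothetical host

def console_url(console_name):
    # z/OSMF console services expose PUT /zosmf/restconsoles/consoles/<name>
    return f"{ZOSMF_HOST}/zosmf/restconsoles/consoles/{console_name}"

def issue_command(session, console_name, command):
    """Issue an operator command, e.g. 'S GUARDTSK' to start a task."""
    resp = session.put(
        console_url(console_name),
        json={"cmd": command},
        headers={"X-CSRF-ZOSMF-HEADER": ""},  # z/OSMF requires this header
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("cmd-response", "")
```

A stop, start, or query then differs only in the command string passed in.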
So here's an example I quickly wrote up for our pipeline. Basically, we have several pipelines, and the way they work is a heartbeat gets kicked off before the smoke is kicked off.
B: So we can look at a demo here. First, let's look at the actual pipeline script. We wrote it all using Groovy scripting, and there's a slew of reasons we did this. One reason is that we could keep all the jobs in git and have version control over them. The other reason was that if we needed a new Jenkins environment, we could quickly port all of our jobs over and we wouldn't lose anything.
B: I don't know if anyone's familiar with Groovy script or not, but the way this works is you set up all your parameters. You can have different types of parameters, like strings, drop-downs, passwords, that kind of stuff; I'm currently only using strings. So we have different parameters: we have markers, which git branch we're going to pull, and then these are all specific to the actual version that we're testing. We test a couple of different versions, so we're passing things like the product type.
B: That could be DB2, data sets, or IMS; the task name for a test that's running on the mainframe; the Guardium appliance version numbers; a z/OS console name for REST; the LPAR, which is a system of z/OS; some configuration names, if we have to change them; the database name; and the DB2 port.
B: Then we have different stages. The first one is to download the test pack, and we download the test pack from whatever git branch we specified. And then this is kind of where the real meat and potatoes come into play for pytest: the next step is to actually run it. So we have a bash script, and we do some echoes just so we can see them in the logs, we activate Python and pytest, and then this is where we kick off pytest and send everything through the CLI.
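The shape he describes, string parameters feeding a stage that shells out to pytest, can be sketched as a minimal declarative Jenkinsfile. This is not his actual script: the parameter names, repository URL, and the custom `--task-name` flag are illustrative.

```groovy
pipeline {
    agent any
    parameters {
        // string parameters only, as he mentions; values cascade into the CLI
        string(name: 'MARKER', defaultValue: 'db2_heartbeat', description: 'pytest marker to select')
        string(name: 'GIT_BRANCH', defaultValue: 'master', description: 'test-pack branch to pull')
        string(name: 'TASK_NAME', defaultValue: 'GUARDTSK', description: 'started task under test')
    }
    stages {
        stage('Download test pack') {
            steps {
                git branch: params.GIT_BRANCH, url: 'https://git.example.com/test-pack.git'
            }
        }
        stage('Run pytest') {
            steps {
                sh """
                    echo "marker=${params.MARKER} task=${params.TASK_NAME}"
                    pytest -m "${params.MARKER}" --task-name "${params.TASK_NAME}" --junitxml=results.xml
                """
            }
        }
    }
    post {
        always { junit 'results.xml' }   // keep results in Jenkins for the trend graph
    }
}
```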
B: We have all these different parameters that are passed into pytest, and they're all sent through the CLI. You can see things like the Guardium version number and the task name; these are all the parameters that we set up in Jenkins up here, and then that gets run.
B: So that's kicked off, and after that we keep all the results in Jenkins to show the graph. Then, on a successful post step, we go on to the next one, which would be the smoke. So if we look at our pipeline here quick, we can review this. Let me kick this off.
B: If we go to "build with parameters", here are all those parameters that we set in our script before. We can change any of these whenever we want, and they'll all just cascade through the CLI. So if I go "build", let's see, hopefully this works.
B
It
worked
earlier,
so
so
we've
added
a
whole
bunch
of
logging
within
podcasts
and
python,
especially
so
once
this
gets
kicked
off,
we
should
be
able
to
see
some
logging.
B
Repository
so
here's
the
here's,
this
the
pi
test,
cli
command.
So
you
can
see
all
the
values
that
are
getting
passed
through
to
the
cli.
B
So
now
we
have
some
log.
So
what
happened?
Is
that
what
because
we
passed
a
marker,
so
we
passed
the
marker
called
db2
heartbeat,
so
the
heartbeat
only
tests
two
tests,
so
we
can
see
it.
It
collected
82,
it
deselected,
80
and
only
selected
two.
So
it
only
selects
two
tests
and
all
the
markers
are
dynamically
attached
to
test
cases.
I
can
show
you
how
we
do
that
in
a
little
bit
here
and
then
we
have
some
logging
here.
B
So
we
have
a
zeo
smf
class
which
deals
with
all
the
rest
commands
for
z
os.
So
it
tells
you
you
know
what
lpar
you're
testing
in
the
console
name
and
then
we
have
some
other
classes.
We
use
and
then
we're
downloading
the
configuration
file,
we're
updating
the
configuration
file
locally
and
now
we're
uploading
the
configuration
file
to
the
mainframe.
Oh,
this
is
all
done
through
rest.
Now,
what's
happening
is
stopping
the
task
on
the
mainframe
right
now.
B: Okay, so it stopped it, then it starts it and makes sure it's good, and then it's going to set up the test case, so it installs the policy on the appliance. After that it's going to run some JCL. So here's some JCL that ran; it tells you what LPAR system it ran on, the job name, the job ID, and who ran it.
B: Then it'll output some timestamps; we need to know that information. And then this is where we're going to query a report on the appliance, so these are all the values that we're passing into the report. If you had to troubleshoot it, we could log into the appliance, go to the report, and fill in the variables in the report. And then this test case passed.
B
We
say
you
know,
here's
the
test
case,
name
what
we
expected,
what
the
actual
value
was
lpar
all.
This
is
like
helpful
for,
like
troubleshooting,
if
there's
an
issue,
job
names,
job
ids,
the
clients
that
it
ran
on
dashboards,
that
we
queried
db2
versions,
port
numbers,
that
kind
of
stuff-
and
then
it
goes
on
to
the
second
one
and
this
one
passed
as
well,
so
it
shows
either
expected
in
your
actual.
So
that's
good
and
then,
after
that
it
goes
on
to
the
the
smoke
job.
B: So if we're running a smoke or heartbeat or regression, we can pass those markers through, and then this little for loop will assign that marker to different test cases. For example, with this one, if you pass in smoke, it will run all the test cases that are under the db2 folder, whereas if you threw in sandbox, it would only run test 4078. So you can go at different hierarchies within the folder structure.
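That kind of dynamic attachment can be done in `conftest.py` with the `pytest_collection_modifyitems` hook. A hedged sketch: the folder names and marker map below are illustrative, not his exact layout.

```python
# conftest.py -- attach markers by folder instead of decorating hundreds of files

# most specific paths first; names are illustrative
MARKER_MAP = {
    "tests/db2/sandbox": ["smoke", "sandbox"],
    "tests/db2": ["smoke"],
}

def markers_for_path(path):
    """Return the marker names a collected test should carry, by its location."""
    path = path.replace("\\", "/")
    for folder, markers in MARKER_MAP.items():
        if folder in path:
            return markers
    return []

def pytest_collection_modifyitems(config, items):
    # pytest calls this hook once, after collection, for every collected item
    for item in items:
        for name in markers_for_path(str(item.fspath)):
            item.add_marker(name)   # add_marker accepts a plain marker name
```

With this in place, `pytest -m sandbox` selects only tests whose folder maps to that marker.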
B
So
you
can
see
in
here
we
have
tests,
so
we
have
a
broken
down
between
data
sets,
db2
ims
and
then
under
there
we
have
different
like
regression
or
enhancements
or
different
folders,
based
on
what
we're
trying
to
test
the
basic
functionality
filtering
or
other
enhancements.
B
So
it's
kind
of
nice
to
you
know,
do
this
dynamically,
rather
than
managing
all
these
markers
on,
like
hundreds
of
files,
so
it
makes
it
a
little
cleaner.
So
so
yeah
there's
that
and
then
we
do
some
things
with
setup
so
like
here's
an
example
of
setting
up
so
before
the
test.
Oh,
my
screen
is
one.
B: Hang on, my monitor just went forward. Okay, so here's an example of when we're setting up the actual session. Say we're going to update a configuration file: we update the configuration file, then we stop the task, then we start the task; otherwise the changes aren't seen in the task after we update the configuration file. So that's how we set everything up on the first run, and then there's a teardown part within pytest.
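In pytest terms, that update-config / stop / start sequence with a matching teardown maps naturally onto a fixture with a `yield`. A minimal sketch, assuming a hypothetical helper class in place of the real REST-backed libraries (the member and task names are made up):

```python
import pytest

class TaskHelper:
    """Stand-in for the REST-backed library; records actions for illustration."""
    def __init__(self):
        self.actions = []
    def update_config(self, member):
        self.actions.append(f"update {member}")
    def stop_task(self, task):
        self.actions.append(f"stop {task}")
    def start_task(self, task):
        self.actions.append(f"start {task}")

@pytest.fixture(scope="session")
def configured_task():
    helper = TaskHelper()
    # setup: push the config change, then bounce the task so it picks it up
    helper.update_config("GUARDCFG")
    helper.stop_task("GUARDTSK")
    helper.start_task("GUARDTSK")
    yield helper            # tests that request this fixture run here
    # teardown: runs after the last test in the session
    helper.stop_task("GUARDTSK")
```

Any test that takes `configured_task` as an argument gets the prepared environment.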
B
So
there's
some
of
that
so
yeah.
I
guess
we
can
talk
about
like
q
a
like.
If
anyone
has
any
questions,
I'm
sure
people
have
questions
so.
A: Yeah, I have one right off the bat.
B: Yep, so we're not quite there yet. We would like to get it to where, once a developer checks some code in, it kicks off some tests, but we're not quite there yet. This whole effort started about 18 months ago, so a lot of work has been done in 18 months, but we still have a lot more to do. Currently it's all just done by time.
B
So
these
are
the
pipelines,
and
then
these
are
just
the
different
times
that
they're
ran
just
various
types
of
times
throughout
the
night,
so
yeah.
That
is
it's.
Definitely
in
my
backlog
of
things
to
do
is
do
one
developer
gets
checked,
checks
and
code.
We
can.
We
can
test
it
right
away,
but.
A: And would this be kicked off on any kind of a code update, or just a database update?
B
Oh
good
question:
I'm
not
sure,
because
we
have
like
three
teams,
db2
datasets
and
ims,
so
I
really
have
to
learn
how
each
team
works
and
then
how
like
how
we
want
to
do
it
if
we
want
to
do
it
based
on
a
checking
of
a
gift
branch
or
you
know
if
they
did
a
pull
request
or
something
like
that,
I'm
not
exactly
sure
what
direction
you
would
take
yet.
A: And then one last question about the tests themselves: are they validating the schemas, or are they actually testing data based on some kind of a change in the applications?
B
Sure
so
this
tool,
specifically,
is
all
auditing.
So
all
we're
really
testing
is
data.
We
do
test
to
make
sure
that
some
configuration
files
are
done
right,
but
majority
of
all
these
tests
are
data
ensuring
that,
because
it's
for
auditing
purposes,
so
we
need
to
ensure
that
you
know
the
audit
trail
is.
Is
there
so.
A
I
don't
think
I've
ever
seen
jenkins
and
I
don't
know
much
about
pi
tests,
but
I
don't
know
if
I've
ever
seen
this
application
for
kind
of
testing.
These
kind
of
really
big
data
sets,
because
that's
basically
what
you're
doing
right.
B
Oh
yeah,
so
I
can
open
up
the
appliance.
B
I
don't
know
I
don't
since
I
wrote
everything
in
rest.
I
hardly
ever
log
into
anything
anymore.
B
Like
because
people
ask
me
like
well,
how
do
you
get
somewhere
in
like
tso
or
ispf
on
the
mainframe
and
I'll
be
like
I
don't
know,
I
just
wrote
a
rest
command
to
do
it
like.
I,
don't
try
and
memorize
ispf
panels
right
because.
B: Yeah, I mean the whole point of Guardium is to audit, and it all came from Sarbanes-Oxley. It's a way to track and make sure people aren't messing around in a system they shouldn't be messing around with, and customers use this in a slew of different ways.
B: Like, I just get it.
B: Oh yeah, I know, that's cool. I figured it'd be more Q&A anyway. So yeah, as you do that: say we want to install a policy on the appliance, for example.
B
Now,
if
I
remember
how
to
do
this,
we
click
on
this.
B
You
can
pick
a
policy
here.
We
have
a
whole
bunch
of
them.
We
test
all
these
different
things
and
then
I
guess
you
can
select
more
than
one
and
then
you
just
say,
install
last
and
then
it'll
over
here's
what's
installed
currently
on
the
appliance
so
and
then
we
have
different
reports
and,
let's
see
if
we
go
to
like
now.
B
Of
course
I
haven't
logged
in
this
thing
in
a
long
time
and
apparently
I'm
forgetting
where
oh
dashboards,
so
we
have
different
reports
so
all
through
pi
tests
and
automation.
We
call
these
various
reports
all
through
rest
to
extract
the
data.
There's
no
data
on
any
of
these,
because
I
think
the
time
stamps
are
out
of
sync
now.
B: Yeah, so that's a good question. I can show you an example of pytest specifically. To be honest with you, there's probably thousands and thousands of lines of Python, so it really depends on what you want to see. One big thing, if you're not too familiar with pytest, is that everything is kind of driven from this conftest file.
B
So
if
you
wanted
to
do
like
cli
parameters,
they
have
these
hooks
that
are
called
that
are
that
pi
test
automatically
knows
about.
So
one
of
them
is
pi
test,
underscore
ad
option,
and
so
you
can
add
whatever
cli
parameter
you
want,
so
you
can
see
I've
added
a
whole
slower
of
them
here
and
then
once
I
do
that,
then
I
can
call
pi
tester
the
cli,
and
it
knows
what
this
value
is
and
you
can
give
it
a
default
value
if
you
want
so.
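A minimal sketch of that hook, with illustrative option names rather than his exact flags: `pytest_addoption` registers the flags in `conftest.py`, and a fixture reads the values back through `request.config.getoption` so tests can use them.

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    # pytest discovers this hook automatically in conftest.py
    parser.addoption("--guardium-version", action="store", default="11.3",
                     help="appliance version under test (illustrative)")
    parser.addoption("--task-name", action="store", default="GUARDTSK",
                     help="started task to exercise (illustrative)")

@pytest.fixture
def guardium_version(request):
    # tests take this fixture as an argument to read the CLI value
    return request.config.getoption("--guardium-version")
```

It would then be invoked like `pytest --guardium-version 11.4 --task-name MYTASK`, which is the kind of command line Jenkins assembles from its parameters.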
B: Yeah, and so it's all really driven a lot from Python. We have libraries; here's a library. I kind of went with the OOP design of programming, so we have classes, and some of them have subclasses and inheritance. So here we have different actions you can take. There's a function called start_task, for instance, and if you want to start a task, first I make sure that the task isn't running.
B
So
if
it
isn't
running,
then
I
can
run
it
and
so
like
this
is
a
rest
command
to
start.
The
task
on
z,
os
through
rest,
so
there's
different
rest
urls.
I
don't
know
if
people
are
familiar
with
rest
or
not,
but
rest
is
a
way
to
like
basically
communicate
with
whatever
service
you
want,
without
actually
going
through
a
gui
high
level.
So
there's
different
rest
urls
for
z,
zos,.
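The check-before-start pattern he describes can be sketched as a small class. The client object and its methods here stand in for the real REST wrappers and are hypothetical:

```python
class ZosTask:
    """Wraps start of a started task; guards against duplicate starts."""
    def __init__(self, client, name):
        self.client = client   # object wrapping the REST session to z/OS
        self.name = name

    def is_running(self):
        # the real code queries running task IDs over REST
        return self.name in self.client.running_tasks()

    def start(self):
        if self.is_running():
            return False       # already up, nothing to do
        self.client.issue_command(f"S {self.name}")
        return True
```

Centralizing the guard in one method means every test that starts a task gets the same idempotent behavior.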
A: Okay, Robert, thank you so much. This is being recorded, so I'll publish it and send it out if anybody needs to dig deeper. And Robert, if they have a question, is there a way to reach you?
B
Yeah
sure
yeah,
you
can
shoot
me
a
message.
Yeah
no.
B
Cool
well
thanks
for
the
opportunity
to
talk
and
show
off
some
cool
pie,
test
stuff.
A
Yeah,
it's
really
a
fascinating
application
and
I'm
just
wondering
how
you
know
how
many
other
types
of
teams
that
are
that
are
really
dealing
with
big
data
have
applied
these
cd
tools
in
the
way
that
you're
using
it,
because.
A: It feels like you've built a lot; there's a lot of intelligence around what Jenkins orchestrates, like I always say. In this case it looks like you're almost using Jenkins as a centralized job scheduler, and basically your Python and your pytest are doing all the heavy lifting.
B
Yeah,
that's
a
correct
statement
yeah,
so
you
know
I
need
to
learn
more
about
jenkins.
I
did
not
really
know
anything
about
jenkins
prior
to
starting
this
role,
so
so
yeah
I
mean
I'm
like
learning
as
much
as
I
can
as
quick
as
I
can
so,
I'm
sure
jenkins
has
more
power
of
stuff.
I
can
do.
B: Yeah, I think you could. The hardest part of the entire thing is the setup and teardown: you have to ensure that the system is in the right state before you can run a test; otherwise the test will automatically fail. So I have to make sure that certain policies are installed on the appliance, and if they're not, then it's not going to work.
A: So if somebody wanted to get started on day one with pytest, what would you recommend to them?
B
Yeah
definitely
so
there's
actually
a
really
good
book
that
I
read
to
really
learn
pie
test.
Now,
if
I
remember
the
name
of
the
book,
it'll
probably
show
up
here,
I'm
almost
positive.
This
is
it.
This
is
actually
the
this.
Is
the
book
pie
test
appreciate
it
yeah.
This
is
the
book.
So
if
you
really
wanted
to
start
learning
about
pipes,
I
mean
pi
test,
I
think,
is
a
lot
of
a
great
functionality
and
I'm
just
like
touching
the
iceberg
of
it.
B
I
think,
but
this
book
is
extremely
helpful,
like
it
helped
me
a
lot.
I
never
read
it
cover
to
cover,
but
he
talks
about
a
lot
of
different
topics
that
are
very
good
and,
like
explains
it
really
well,
especially
for
someone
who's
never
really
used
it
before,
and
I've
really
used
it
and
drank
the
pie
test
kool-aid.
I
guess
so.
A
You've
embraced
it
yeah,
okay,
so
just
I
am
going
to
take
over
just
for
a
minute
if
you
can
stop
sharing
your
screen,
robert.
A
Stop
sharing
just
so
everybody
knows
I'm
going
to
quickly
share
my
screen.
We
are.
We
have
some
other
upcoming
events.
The
next
ones
will
be
if
you've
ever
wanted
to
understand
about
site,
reliability,
engineers
and
what
they're
up
to
this
should
be
a
pretty
interesting
presentation.
Shivagami
is
a.
She
runs.
The
sre
department
for
emirates
airlines
and
she's
got
quite
a
team
that
she
has
built.
A
I
think
this
will
be
an
interesting
one
for
anybody
starting
to
move
into
that
direction,
so
we're
and
then
in
september
I
haven't
announced
it
yet,
but
we
will
be
learning
about
argo
in
september
and
I'm
super
looking
forward
to
that
one,
because
I
would
like
to
know
more
about
argo.
A
Argo
is
a
cd
pipeline
that
is
cloud
native,
all
right,
everybody
from
the
cd
cd
foundation
online
meetup
and
the
new
mexico
devops
meetup
everybody
be
safe
and
have
a
fabulous
fourth
of
july,
and
we
will
see
you
july
15th
just
around
the
corner
and
thank
you,
robert
for
the
information.
It
was
super
helpful
and
I'm
sure
that
people
learned
a
lot
from
it
cool
thanks.