From YouTube: The Jenkins Templating Engine - a CDF Online Meetup
Description
In this meetup Steve Terrana, the mastermind behind the JTE, gives us an amazing demo of how it works and how you can build out a templated Jenkins workflow that works for everyone.
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
A
I met Steve talking about... I'm not sure how we got into the conversation, but I found him around the conversation of templating for Jenkins, and he did a demo for me and I was really excited. I think the JTE is one of the more interesting projects, personally, around Jenkins because, as you will see today, the ability to template, and the way that the JTE does templating, is super, super simple. And anything that's simple is brilliant, and so I'm pretty excited about the Jenkins Templating Engine.
A
Steve, are you still there?
A
We may have lost him. Hopefully he will sign back in. Steve Terrana works at Booz Allen, and he works with tons of different customers on building out their pipelines, and this was one of the reasons why he created the JTE: because he had the need to create multiple types of pipelines, kind of on the fly.
A
Next month... I usually announce what we're going to do next month at these meetings. Next month is cdCon. So cdCon is, I should know the dates, at the end of June, so please register for cdCon. We will not be doing a regular CD online meetup in June. We will do that in July, and I'm working on a Spinnaker presentation.
A
So on that note, I am going to pass it off to Steve, and he is going to take us through a journey on how to template Jenkins.
B
Awesome, thanks Tracy. You can still hear me okay? I dropped you.
B
Thank you. All right. So before I dive into a terminal: I don't have any slides for you today. I thought that might be a nice, refreshing break from slides all day. I'm going to give some context of what the problem was.
B
The problem we were trying to solve: so Tracy mentioned microservices, and how, you know, teams that have even five, 10, 50 microservices are trying to automate their software delivery processes. You'll find yourself violating the DRY principle all over the place, right? You need to define what that pipeline is going to look like for each microservice.
B
There's some stuff you can do around, you know, reusable code in Jenkins, but there wasn't really a way to define, in one place, what the pipeline is that's going to happen for all these different repositories, and then how I can plug and play with that pipeline if I'm using different tools across different services. So if I've got some front-end microservices, maybe some Spring Boot apps, maybe I've got some stuff written in Python: how can I write one pipeline?
B
One that's flexible enough to work with all of those. The Jenkins Templating Engine makes that possible, and I hope, by the end of the next, you know, 45 minutes, we explain how. So, diving right into code here, and feel free to interrupt me at any point if something is too small or too big, and I can fix that. So right off the bat, there's some infrastructure automation that I'm going to show you, just so that there's no, call it, no magic, right?
B
Nothing here is automagically happening. So there's a justfile. If you haven't heard of just, it's a pretty cool project. It's sort of like a Makefile without all the baggage (sorry, make). So it's a task runner; it takes away a lot of the complexity of running some commands.
B
So I'm just going to pull Jenkins LTS, I'm going to turn off the startup wizard, I'm going to install some plugins, and then I'm going to use JCasC (Jenkins Configuration as Code) to automate the Jenkins configuration so that we can hit the ground running with the demo. plugins.txt, if you're familiar with Jenkins, is just all the plugins I want to install, and then JCasC... we'll get into exactly what this means in a little bit, but the gist of it is that we're going to develop some libraries in the Jenkins Templating Engine that contribute steps to the pipeline. Jenkins needs to know where to get those libraries from, so in this demo...
B
I don't want to have to constantly commit files to git to get them to show up. So I'm going to run this just watch command, and all that's going to do is watch this directory for changes to Groovy files, and when I change a Groovy file, it's instantly going to commit it. So that's not important for JTE, I'm just explaining some of what's going on here. There's a file watcher.
B
It's going to commit all our file changes so that, as I create libraries, they're automatically accessible inside Jenkins, and that's all we need out of this justfile. So I'm already running just watch; it kicked off onto a demo branch. I'm going to open up a new terminal here and we can get started. So Jenkins should be up and running. Here I can go create my first pipeline job, if we want to go to create job. So there's a lot of different ways to use it.
B
So I wouldn't typically recommend you write advanced pipeline code inside your template, but just for, you know, building that mental model of how JTE works: this template is just a Jenkinsfile. There's some magic that happens before the template runs to inject things into the pipeline environment, but it's important to understand that there's nothing special about this template.
B
I can create a build method, or I can call a build method, and now, down here in pipeline configurations, there's a lot of things that you can configure. We're going to start off with libraries.
B
So in JTE we call this a step. The template is invoking a build step. Steps come from libraries, so let's say I have a team that's using Maven. I can create a maven library, and right now that doesn't exist. So if I go and build this, assuming I correctly wiped the slate from when I was making sure this was going to work, the pipeline's going to fail, right? And it's going to fail because "library maven not found". So let's go create a maven library.
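To make the walkthrough concrete: the exact file contents aren't readable from the transcript, but the template and pipeline configuration being described would look roughly like this minimal sketch.

```groovy
// Jenkinsfile template (tool-agnostic): build() is a JTE step
// that some loaded library must provide.
build()

// pipeline_config.groovy: load the maven library so its steps
// (including build) become available to the template. Without
// this, the run fails with "library maven not found".
libraries {
    maven
}
```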
B
I can come over to our code terminal, and, just another helper, I have a recipe called create, and all it does is make some directories for me. So if I say just create maven, it's going to spin up this maven directory, and then, within the maven directory, there are three directories that comprise a library. There's resources...
B
So I can create a call method, I can say println "building from maven", and just some Groovy-isms.
B
So if you've used, like, Jenkins shared libraries, and you were wondering why you have to make this call method: the reason you make the call method is because it's syntactic sugar for being able to invoke it by the short name. So I saved this file. If you remember, we've got a file watcher that committed that change for me already. So if we go back to Jenkins now, the maven library exists.
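A minimal version of the maven library's step being built here might look like this (the path and message are taken from the demo narration; treat the layout as illustrative):

```groovy
// maven/steps/build.groovy
// call() is syntactic sugar: it lets the template invoke this
// step by its short name, i.e. build()
void call() {
    println "building from maven"
}
```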
B
So I can build it, and what's going to happen is, if you look at the build logs (and I can make this a little bit bigger): JTE tells you everything that it does. So here are the pipeline configurations that we found in our example. All we did was set a maven library, and it doesn't have a configuration right now; we'll show in a bit how you can pass parameters to these libraries. At the bottom:
B
We can see that JTE is going to run a step from the maven library called build, and we're going to invoke the call method inside that step, and we'll see that it says "building from maven". So, just to demonstrate that that's just syntactic sugar: if we change this to build.call() and we rerun it, the exact same thing is going to happen.
B
So in practice, these jobs would be pointed at, like, GitHub organization projects or multibranch folders, where you're actually defining the pipeline configuration for, you know, an entire repository, or an entire GitHub organization or Bitbucket project, or what have you. So in this example, you know, we can change the pipeline behavior without actually changing the pipeline template.
B
So a team that's using Maven, and a team that's using Gradle, and a team that's using npm could all be using an inherited pipeline template that says build, and then we can swap this out. So we can create a gradle library instead, and in practice these pipeline configurations can come from, like, the application source code repository.
B
So let's create that gradle library. Get rid of this comment here, and we can go create a build.groovy. It's going to be the same code, except we're going to say we're building from Gradle. Now, the important part here is that your template is defining, like, an API spec, almost, for what your pipeline is going to do, and you get to define whatever you want that to be.
B
So now both the maven and the gradle libraries have this build step. So, depending upon (I'll save this and run it), depending upon which libraries I load, we can get different pipeline functionality out of the same exact pipeline template, right?
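Swapping tools without touching the template, as described, is purely a pipeline-configuration change. A sketch, assuming both libraries contribute a build step as in the demo:

```groovy
// Team A's pipeline_config.groovy: build() prints "building from maven"
libraries {
    maven
}

// Team B's pipeline_config.groovy: the same template now resolves
// build() from the gradle library instead
libraries {
    gradle
}
```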
B
So I'm going to foot-stomp that for, like, another 15 seconds. Back in the day, like two years ago, when I was working with teams that had 50 or 60 microservices, or they had a whole bunch of different, you know, applications that they were building pipelines for, that used to mean: if I wanted to change the pipeline across all those teams, I had to go open 50 or 60 pull requests, right? Or we had to orchestrate that change across all these different teams.
B
With the Jenkins Templating Engine, that pipeline template gets pulled out of individual source code repositories, gets defined in a central location in tool-agnostic terms, and then these libraries provide a means for plugging and playing with, you know, the interface that you've created through your template.
B
So let's talk about... and I know we've got, you know, another 40 minutes here, but if there's any questions, feel free to put them in the chat, and Tracy, you can feel free to interrupt me if there's anything to talk about. But these libraries are only as useful as they are reusable, right? So let's imagine we wanted to pass data from the pipeline configuration to those steps.
B
When you load the library, you can pass in whatever parameters you want to define. There's sandboxing in place and whatnot, so don't go trying to read, like, the /etc/passwd file, but you can define, you know, the typical variables you'd be used to: strings, arrays, booleans, what have you. So we can define this parameter.
B
We can click save, and now, from the gradle library, if I want to consume that, we can say println...
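Passing a parameter to a library and consuming it through the config variable, as described here, might be sketched like this (the parameter name is hypothetical; the transcript doesn't show the one typed on screen):

```groovy
// pipeline_config.groovy: pass a parameter to the gradle library
libraries {
    gradle {
        some_field = "some value"   // hypothetical parameter
    }
}

// gradle/steps/build.groovy: library parameters are exposed to
// the step through the autowired config variable
void call() {
    println "building from gradle"
    println config.some_field
}
```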
B
You know, along the way I'm going to be using a lot of print statements to show features, but I'm going to try to map all these features back to real-world use cases where that's important. So one example that comes to mind is: let's say I have a SonarQube library, and some teams want to fail the build on the quality gate and other teams don't. Now, as a DevOps purist, you should fail the build if there's bad code quality, but in the real world...
B
Sometimes, when teams are just getting started, or they've got less strict standards, that might be a setting that you want to control. That's the type of thing where, you know, all your teams can reuse the same SonarQube library, but you can tailor the execution of the steps based upon these pipeline configurations.
B
So there's a lot of strategies for controlling which pieces of this pipeline template are inherited and governed, and which pieces of this pipeline are going to be configurable by individual application teams. You know, typically, for example, I wouldn't want individual teams to be able to override, like, container image scanning settings, right?
B
If I want to fail the build if there are highs, that's the type of thing I would consolidate into a governed pipeline configuration, and I would let the teams tell me what build tool they're using, for example, but you don't get to change, you know, the container image scan policies that you're going to inherit. So, so far we've covered steps, right? The gist here is that you can create whatever interface you want to in this pipeline template, and then you can create as many libraries as you want that implement it.
B
It doesn't matter what data each library wants to take in, and it's totally decoupled from, you know, the template. I can plug and play with whatever libraries I want to for a build step.
B
I'll pause there just to give people an opportunity to type any questions that they might have. But while we do that, the next feature I want to talk about is application environments. So the one exception to the rule I just said is that sometimes it makes sense to create a method parameter for application environments.
B
So inside the pipeline configuration you can create a block called application_environments, and this is an opportunity for you to define any environmental context that you want to be able to pass around to deployment steps, or steps that are going to do integration testing, or what have you. So let's create a library called ansible. Let's assume, you know, hypothetically, that we wanted to do a deployment through Ansible, and let's assume that that ansible library was going to contribute a step called deploy_to, which takes in the environment to deploy to.
B
That was a lot of "to"s. So inside this application_environments block we create a dev environment, and we're going to create an array of IP addresses. In practice I would not recommend that you construct your library this way, but it makes for a good demo here. So we can say that for the dev environment the IP addresses are 1.1.1.1 and 2.2.2.2, respectively, and we'll create a test environment too that is similar.
B
So now I can say deploy_to dev and deploy_to test. This dev variable comes from the key that you created, right? You specified: here are some application environments, one of them is called dev, and now you can define whatever data you want. In the real world, this might be: I've got different Kubernetes clusters for my deployments.
B
I've got, you know, a different tag that I should look up on EC2 instances to do deployments to auto scaling groups. Whatever is unique about your application environment that you need to be able to tell the pipeline about, this gives you a construct to be able to capture that information and then pass it in your template in a way that still makes the template easy to read.
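The application_environments block being described could look roughly like this (the dev IPs are the demo's placeholders; the test values are assumed, since the transcript only says "similar"):

```groovy
// pipeline_config.groovy
application_environments {
    dev {
        ip_addresses = [ "1.1.1.1", "2.2.2.2" ]
    }
    test {
        ip_addresses = [ "3.3.3.3", "4.4.4.4" ]   // assumed values
    }
}
libraries {
    ansible
}
```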
B
So I'm a fan of using those syntactic sugar features of Groovy to make the templates really easy to read. But staying in touch with our "no magic" theme, this is what's actually happening behind the scenes: we can implement a call method that takes an app environment, and, for our example, we can just say app_env.ip_addresses (right, which is the configuration that we put on the application environment) .each, so we're going to iterate over the IPs, and we're just going to say println "deploying to" the IP.
B
So this is just going to demonstrate that we're able to take in this context and then use it. So when we run the pipeline, we'll see some deployment statements for deploying to... let's put some stages in here: deploy_to the app environment. And these application environments have a short name variable automatically that you can reference, so we'll say that we're going to deploy to dev and to test, just to make it easier to see. We'll save this.
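A sketch of the ansible library's deploy_to step and the template lines described above, assuming the field names from the narration (ip_addresses, and the automatic short_name property on application environments):

```groovy
// ansible/steps/deploy_to.groovy: takes the app environment
// object that JTE created from the pipeline configuration
void call(app_env) {
    stage("Deploy to ${app_env.short_name}") {
        // iterate over the IPs configured on this environment
        app_env.ip_addresses.each { ip ->
            println "deploying to ${ip}"
        }
    }
}

// In the template, dev and test resolve to the environments
// defined in the application_environments block:
deploy_to dev
deploy_to test
```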
A
Steve, I'm sure you have some questions that relate to what you just were showing. One question is: where would you put node labels, natively, for running pipelines? And I think you referred to that somewhat, but there's an example that says Linux versus Windows, etc.
B
That's a great question. So the Jenkins Templating Engine is just a fancy way to aggregate pipeline code. You can define node labels wherever you want to, which isn't super helpful, so let me try to give a couple of, like, design patterns that I've seen in the wild. But the technical answer is: kind of wherever you want to.
B
If you wanted to, inside your ansible library, say that the node label was going to be, I don't know, linux, then in your ansible library you could come in here (this is going to fail because I don't have a node called linux), but you could come in here and you could say that my node block takes a label of config.node_label. Let's make this code a little bit easier to read, so, up at the top:
B
I can parse some configuration. So I can say that the string for the node label is going to be equal to config.node_label. Just to show how, like, some error checking could happen here, I could also say: if this node label is not defined, fall back to executing this closure to define the node label. So we'll come in here, give some space to the code, and we can say that the node is going to take the node label variable that we defined above, and you pass it in, right?
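The node-label pattern just described, read from library configuration with a fallback when the team didn't configure one, might be sketched as (the default label and placement inside the ansible step are assumptions):

```groovy
// ansible/steps/deploy_to.groovy (hypothetical placement):
// parse the label from library config up top; the Elvis
// operator falls back to a default when it's not defined
void call(app_env) {
    String node_label = config.node_label ?: "linux"
    node(node_label) {
        app_env.ip_addresses.each { ip ->
            println "deploying to ${ip}"
        }
    }
}
```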
B
See, and we had a failure because you needed to define that node label. So the options here are: you could define it for the entire pipeline. I think a lot of people are coming from declarative, and JTE does have some declarative support. So if you say pipeline, you know, agent any, stages... I'm not as familiar with declarative syntax, but let's see if I can do this real quick: steps...
B
This is a live demo, so this might not pass on the first go here... so that's a fun one, actually. Because build is actually a Jenkins pipeline step, the way that the step resolution works is that it's trying to call the Jenkins step instead of the JTE step. So a way we can get around that is to put it in a script block, and if we save that and we build it:
B
So here was the build step running through declarative syntax, where you can define agents the way that you're used to. Let's also create a unit test step, just because I want to show that, as long as there's no step name collision... unit_test isn't a Jenkins step that you can execute, whereas build is.
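The declarative workaround being demonstrated, wrapping build() in a script block so the JTE step wins over the built-in Jenkins build step, might look like this sketch:

```groovy
// A JTE pipeline template using declarative syntax
pipeline {
    agent any
    stages {
        stage("Build") {
            steps {
                // script block forces resolution of the JTE build
                // step instead of Jenkins' built-in build step
                script {
                    build()
                }
            }
        }
        stage("Unit Test") {
            steps {
                // unit_test has no name collision with a Jenkins
                // step, so it resolves directly
                unit_test()
            }
        }
    }
}
```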
B
But your original question was around node labeling, so the two options were: option one, define it inside your library step and pass library configuration options for what you want the node labels to be. Option two, you can use declarative syntax and not handle node logic inside your steps at all, and let the declarative framework account for that. And then option three is, in some cases:
B
You'll have a helper, like a utility library, that defines some common logic. So, for the libraries that I maintain, there's a function called inside_image, and it takes a string that's the image name, and it does some stuff, like it'll do a docker.image(image name)... and this is a very abbreviated version of the code. But if we wanted to create this method signature, we can create a helper function called inside_image. We could say it takes a closure, and now (and this would be inside a utility library):
B
If I wanted to make a build.groovy file and delegate node label assignments, or, you know, Kubernetes pod templates, to this utility library, I would just say inside_image: I want to run inside the maven image, and here's my closure to execute, which is, I don't know, sh "mvn -v", or "mvn clean package". So, because I'm rambling a little bit: because JTE is just a framework for how to aggregate pipeline code...
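A very abbreviated sketch of the inside_image helper and a build step that delegates container selection to it, as described (the helper's internals are an assumption; the real library does more):

```groovy
// utility library: steps/inside_image.groovy
// runs the given closure inside a container of the named image
void call(String image_name, Closure body) {
    docker.image(image_name).inside {
        body()
    }
}

// maven library: steps/build.groovy delegates node/container
// logic to the utility library instead of handling it itself
void call() {
    inside_image("maven") {
        sh "mvn clean package"
    }
}
```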
A
This one is about the DSL and pipeline configurations. It says: are pipeline configurations with Jenkins always this DSL, or does it have a wrapper plugin so you can do it in YAML or JSON?
B
The tricky part is going to be around the aggregation. So you can have more than one pipeline configuration file; I mentioned that a little bit earlier. You can define one common to the whole Jenkins instance. So let's talk through a quick example, and we'll use this as a playground. Inside here, organizationally (I'll put some comments: org pipeline configuration), let's say I wanted to add, I don't know, Anchore as a common library.
B
Everyone has to use it, but I want other teams, like an application team, to be able to add additional libraries.
B
The way you would define that with this DSL is you would say @merge, and @merge is saying: we're going to let teams add libraries to this libraries block. There's also @override, if you want them to be able to replace it. So this functionality, of how you're going to determine what blocks and keys of the pipeline configuration can be modified, would be a little tricky in YAML. Definitely not impossible; you could just do an array of YAML paths or something. But as of right now, it's always Groovy.
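The governance tier being described, an org-level configuration that application teams can extend but not replace, might be sketched as (the anchore library name follows the example in the talk; treat it as illustrative):

```groovy
// Org-level (governing) pipeline_config.groovy
libraries {
    @merge          // teams may ADD libraries to this block
    anchore         // mandatory scanning library for everyone
}

// An application team's pipeline_config.groovy: merged in,
// so the team ends up with both anchore and maven loaded
libraries {
    maven
}
```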
B
I
don't
know
if
I
made
that
more
complicated
than
it
had
to
be
right
now
the
answer
is
always
groovy.
The
answer
is
in
the
future.
There
there
could
be
ways
to
do
it
with
other
configuration
languages.
A
Now you have two other questions that kind of seem similar. One says: what happens if you deploy_to prod, but prod does not exist in the application_environments? And the other one is: how does JTE understand deploy_to dev, when the dev values reside under the application_environments namespace?
B
Sure, let's take one at a time. So we had an ansible library (let's see if I can Ctrl+Z my way back)... so we had an ansible library, so let's just do it: deploy_to prod. What I expect to happen is that it's going to say you tried to reference a variable that doesn't exist, right? There's going to be a missing property exception that gets thrown. At the very start of this, we said the templates are just Jenkinsfiles.
B
So what's actually happening is that there's an object called application environment... well, there's a class called ApplicationEnvironment, and there's an instantiation of that class called dev that gets put into the pipeline execution. So deploy_to dev was able to resolve dev because there's actually an object that is called dev that gets created when JTE is initializing, or parsing, your pipeline configuration.
B
It's a great question: there's actually a feature for that. So let's say my test was: I need to build, I want to do unit test, I want to do, I don't know, container image scan (we haven't done anything that implements container image scan yet), and at the end we'll do deploy_to dev and deploy_to test. So you can flesh out this entire pipeline, and then there's another block inside your pipeline configuration called template_methods.
B
So,
in
our
case
the
methods
that
might
be
run
from
the
template
would
be
build
unit,
test,
container
image,
scan
and
that's
not
right
and
deploy
to.
So
what
this
block
does
you
know
because
you,
as
the
pipeline
author,
determine
this
interface?
B
B
These
are
the
steps
that
might
be
loaded
and
now,
if
it's
not
loaded,
it
becomes
a
no
op
step
right.
So
you
could
build
this
and
it's
going
to
print
out.
Oh,
I
need
to
define
the
node
label
so,
let's,
let's
switch
our
tests
back
over
to
maven
because
I
modified
the
gradle
library
we'll
do
maven
get
rid
of
this
configuration.
B
Oh,
I
did
it
in
the
ansible
library.
Let's
just
go
fix
that
real,
quick,
so
I'll
delete
this
unit
test
step
in
the
ansible
library.
I
did
that
here.
B
So now, when we run this, we'll see that for the unit test step, which wasn't implemented, and for the container image scan, which wasn't implemented, a no-op step replaced it, with a message that says this was not implemented.
B
There
should
be
a
feature
there's
not
today,
but
this
is
a
great
first
pr
for
anyone.
That's
interested
to
make
to
enforce
whether
or
not
these
should
be
loaded
right.
It
should
definitely
be
possible
to
say
enforced,
equals
true
and
then
fail
the
build.
B
If,
if
no
library
was
loaded
to
implement
this,
but
as
of
right
now,
the
way
that
it
works
is
you
define
a
list
and
then,
if
these
steps
aren't
provided
by
a
library,
they
will
become
no
op
steps
in
their
placement
to
avoid
the
method
like
the
method
missing
exception,
that
would
be
thrown
right.
So
if
I
just
to
demo
what
would
happen
if
we
didn't
do
this,
if
I
got
rid
of
this
block
and
I
tried
to
execute
the
unit
test
step,
it's
going
to
fail
right,
there's
no
unit
test
defined.
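The template_methods block just described, which turns steps that no loaded library implements into no-ops instead of throwing a missing method exception, might look like:

```groovy
// pipeline_config.groovy
template_methods {
    build
    unit_test
    container_image_scan   // nothing implements this yet, so it
                           // becomes a no-op step at runtime
    deploy_to
}
```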
B
Yeah, are you asking about the infrastructure-as-code stuff? So inside this jenkins directory I have a Dockerfile that defines my custom Jenkins image, which installs plugins and installs JCasC, and then I started off the demo by running just launch jenkins. You could have just run these commands yourself: it cleans up if there's an old container, and then it runs a new one. I'm mounting my little demo directory into the container, and then Jenkins is looking for this directory as a library source.
B
So
there's
a
couple,
I
guess
automation
tricks
going
on
to
make
the
live
demo
seamless,
but
it's
it's
jcask
and
it's
docker,
a
custom,
docker
image
and
it's
this
just
file.
That's
doing
some
automation
here
for
the
demo
and.
B
A
Sorry, I'll put the link to what you referenced into the chat. You'd better make sure it's the right place; it's the place that I would have gotten it from.
B
I got it. Yeah, go star the repo, it gives me little doses of dopamine. So that'll have all the Dockerfile and the stuff for that. Any other questions before I show off one last feature here? You're good? All right, so this last one I like to show because it's a personal favorite. What if I wanted to do something like Slack notifications? So let's simplify this pipeline to just the build step again.
B
B
So
save
here
so
same
as
before
we
have
to
create
a
slack
library
under
steps.
I
can
create
a
notifications,
doc,
ruby
file.
So
let's
say
I
wanted
to
deploy,
or
I
wanted
to
send
a
notification
before
every
other
step
that
takes
place.
Jte
has
they're
called
life
cycle
annotations
or
hooks,
so
I
can
say
at
before
step
and
now
I
can
put
that
on
whatever
method
I
want
to.
B
Just
because
I
loaded
slack
and
slack
has
a
annotated
method
that
registers
the
method
for
execution
before
each
step.
So
I
can
come
in
and
we
see
that
slack
kicks
off
and
it
says
running
before:
maven's
build
and
there's
a
whole
bunch
of
these
fun
annotations.
I
can
say
at
after
step-
and
I
can
say
running
after.
B
There's @Init for the beginning, and there's @CleanUp if you want to run things at the end.
B
So if we come back to Jenkins, I don't have to change anything, because, again, those steps... this doesn't have to be called notify, but it can be called that.
B
All
right
so
now
we
can
see
what
happens
so.
Here's
the
original
step
right,
where
we
just
called
a
build
step.
We
had
our
init
annotation,
get
kicked
off.
We
had
our
before
step
the
step
itself,
the
after
step,
the
notify
which
happens
after
every
step,
the
cleanup
and
then
notify
again
happens
at
the
end
of
the
pipeline
and
there's
actually
a
picture
in
the
documentation
that
explains
with
a
picture
how
these
life
cycle
hooks
work
right.
B
So
the
run
starts
we're
going
to
run,
validate
init
for
each
step
in
the
pipeline
before
step
after
step
and
notify
and
then
clean
up
and
notify.
But
there's
going
to
be
some
use
cases
where
I
don't
actually
want
to
run
after
every
step.
So
let's
make
this
a
little
bit
more
sophisticated.
These
hooks
can
take.
B
Another example, real quick, would be: you could also say an array. So, like, return true if the config... well, we can use both hookContext and config. So, just to show what I'm getting at here: inside my slack library, if I wanted to say here's the list of steps you should notify after (deploy_to), the way you would represent this logic would be: return whether hookContext.step, which is the step pertaining to the hook that's corresponding to it, is in the config's after-steps list, right?
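A sketch of the slack library's hooks, including the conditional closure described here (the after_steps field name and log messages are assumptions based on the narration):

```groovy
// slack/steps/notifications.groovy
// runs before every step in the pipeline
@BeforeStep
void before_each_step() {
    println "running before ${hookContext.library}'s ${hookContext.step}"
}

// only runs after the steps listed in the library config,
// e.g. slack { after_steps = ["deploy_to"] }
@AfterStep({ hookContext.step in config.after_steps })
void after_some_steps() {
    println "running after ${hookContext.step}"
}
```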
B
So
these
hooks
are
they're
pretty
neat.
You
can
do
all
kinds
of
stuff
there's
also
the
current
build
variable
in
there.
If
you
want
to
alert
only
on
failure,
but
life
cycle
hooks
are
just
one
of
my
soft
spots
in
the
framework
that
I
like
to
show
whenever
we
talk
about
jt.
B
That's a great question. So open source is the default answer to everything; the second best answer is inner source. So I work at a federal consulting firm called Booz Allen Hamilton, and we have libraries.
B
Jte
is
the
core
of
a
larger
thing,
called
the
solutions.
Delivery
platform
not
super
relevant
right
now.
But
to
answer
your
question
like
within
booz
allen,
we
have,
we
call
them
our
sdp
libraries,
so
we
have
a
portfolio
of
libraries
that
get
used
across
projects.
B
So if we go look at the documentation: alongside the JTE docs, we've also got these pipeline libraries docs. Every library that we have actually has an AsciiDoc file. So if we look at SonarQube... we use a framework called Antora, but this best practice applies whether you're using Gatsby or Sphinx or MkDocs or whatever. So every single library is required to have a documentation page that outlines: these are the steps that are available when you load the library.
B
These
are
all
of
the
configurations
that
that
library
is
going
to
take,
and
then
you
know
examples
and
any
external
dependencies
right.
So
for
the
sonar
cube
library,
you
need
sonarqube,
so
that
gets
that
gets
documented.
So
my
recommendation
would
be
that
you
have
a
repository
within
your
organization.
B
B
User acceptance testing, you mean? All right. Is the question referring to how we make sure the libraries work as expected?
B
So if we open this up: Spock is a separate project from JTE within the Jenkins open source ecosystem. My google-fu: Jenkins Spock, I think it's from Expedia Group. So you can pull this in as a dependency, and then, getting into it... and I'll place this link inside the chat. Awesome.
A
We are at the top of the hour. It certainly looks like there's one other question: what was the documentation page you showed with the diagram?
A
So, just for a follow-up for next month: the 22nd, actually, through the 24th. I am going to go ahead and put this in the chat, if I can find the chat.
A
We're going to have... there will be two day-zero co-located events: one's a Spinnaker Summit, one's a GitOps Summit, and then we'll have the regular sessions the 23rd and the 24th.
A
Everyone who registered... and it will be uploaded to the CD Foundation's online meetup playlist on YouTube. And on that note: thank you, Steve, so much for your presentation. Super, super, really informative, and I think we'll get a lot of views off of this one.