Description
Steve Terrana presents the Jenkins Templating Engine at a Jenkins Online Meetup.
The Jenkins Templating Engine allows you to separate the business logic of your pipeline (what should happen, and when) from the technical implementation. Pipeline templates define the tool-agnostic workflow, while the implementation of the steps the template invokes lives in reusable pipeline libraries.
Regardless of the specific tools being used, there are common steps that often take place, such as unit testing, static code analysis, packaging, and deploying artifacts to application environments.
Presented at https://www.meetup.com/Jenkins-online-meetup/events/267752823/
Mark Waite: This is a Jenkins Online Meetup. I'm Mark Waite, and I'm pleased to welcome Steve Terrana with us. Steve is going to be presenting on Jenkins pipeline templates, the Jenkins Templating Engine, and we're delighted that he's here with us. He presented some of these concepts already at Jenkins World in 2018 and has been using them since then to help customers at scale use Jenkins pipelines much more effectively. Steve, go ahead.
Steve Terrana: Thank you, Mark. So hi everyone, I'm Steve Terrana. I work at Booz Allen Hamilton; we're a federal consulting firm, so we help the government modernize legacy IT systems. I'm currently a senior lead technologist at Booz Allen, where I help lead a lot of our internal DevSecOps capability development, and I'm a principal engineer for what we call the Solutions Delivery Platform, which is a project that helps teams get started with DevSecOps principles. The Jenkins Templating Engine that we're going to talk about today is really the core innovation of that project.
So before we get started, as part of this talk I want to talk a little bit about the work that we do and try to frame the challenges we faced that led to us creating the Jenkins Templating Engine. This slide is just a level-setter for what we mean at Booz Allen when we're talking about DevSecOps. DevOps is all about getting application developers and operations engineers to work together more effectively, through things like infrastructure as code and configuration management.
DevSecOps was really the next step, helping to bring security into every step of the software development lifecycle. So in practice there are all these different kinds of security testing we want to incorporate into our DevSecOps pipelines. We could give a whole other talk on what these look like in practice and the different tools for them, but for this one we really want to focus on how we went about implementing these practices at scale using Jenkins.
So this slide represents an example DevSecOps workflow. In addition to all those different kinds of security testing, like container image scanning, penetration testing, and compliance scanning, there are still all the other types of quality assurance automation, like unit testing and measuring code coverage, browser-based test automation, and API testing. And when we're doing digital modernization at government agencies, they frequently have a diverse application portfolio, which is sort of consultant-speak for "there are a lot of different tools being used." Some teams might be building front-end applications,
other teams could be building RESTful APIs. Some teams in the organization might be using something like SonarQube for static code analysis, but other teams might be using Fortify. So when it came time to actually implement DevSecOps pipelines at scale for multiple teams simultaneously, we were running into a whole bunch of different challenges. The first was time. If someone's writing their first Jenkins pipeline to implement these best practices, it could take months to integrate with all these different tools.
The second was complexity. Along with the time it takes to build all of these different integrations, we also have to manage the actual development of these pipelines on a per-application basis. The way Jenkins typically works is that you have a Jenkinsfile, which is your pipeline-as-code artifact; it lives in your source code repository and it specifies what your pipeline is going to do. So when you have a situation where different teams are using different tools, that Jenkinsfile frequently gets copied and pasted.
B
You'll
have
to
integrate
it
with
the
specific
tools
for
that
application,
which
leads
to
our
third
challenge.
If
you've
duplicated
that
pipeline
is
code
artifacts
in
your
Jenkins
file
across
multiple
source
code
repositories,
how
do
you
actually
or
some
some
standardization
to
your
software
delivery
processes?
It
can
be
very
difficult
to
make
sure
that
all
of
the
development
teams
in
an
organization-
and
here
too
mandated
software
delivery
processes
when
the
definition
for
how
they
do
that
software
delivery
is
duplicated
in
multiple
places
and
then,
finally,
you
know
continuous
improvement.
When you learn something, some change in your workflow that can improve how you do software delivery, making that change is very difficult, because now you have to open sixty different pull requests to try to orchestrate a migration from one version of the pipeline to the next as you continuously improve your pipeline. And at Booz Allen we're doing this not just within a single client space, where we're supporting a pipeline for multiple applications for a particular client; we're also doing this across different client engagements.
B
So
these
challenges
are
really
being
multiplied
as
we
as
we
scale
right.
So
building
a
pipeline
is
largely
undifferentiated
work
once
you've,
googled,
Soner,
cubed,
plus
Jenkins.
There's
really
no
reason
that
every
DevOps
engineer
at
the
firm
or
really
the
industry
should
be
figuring
out
how
to
do
these
different
tool
integrations.
So
we
wanted
to
figure
out
a
couple
different
goals
here.
The
first
was:
how
can
we
decrease
the
time
it
takes
to
instrument?
Amateur
second
and
like
I
was
saying
writing
these
pipelines
is
largely
undifferentiated
work.
B
You
know
in
a
Sun
or
tube
would
be
one
of
the
the
less
complex
examples
when
you're
talking
about
orchestrating
an
advanced
deployment
pattern
using
helm
to
an
open
shift
cluster,
so
we
thought
that
there
ought
to
be
a
way
to
modularize
the
development
of
our
pipelines
so
that
we
don't
have
to
continue
some
reinvent
the
wheel.
Every
time
we
kick
off
a
project.
Our
second
goal
is
to
lower
the
technical
barrier
to
entry.
Mastering the art of writing a Jenkins pipeline is, admittedly, a niche skill set that has a learning curve to it. So we really believed that teams should configure, not build, their pipelines, because this work is largely a duplication of effort across different client engagements.
So the idea here is that, regardless of what tools are being used, the workflow of a pipeline remains the same. On the left we've got an example Jenkinsfile for an application using Maven. We've got a stage that says we're going to do a Maven build: within a container image that has our Maven dependencies, we're going to execute mvn clean package, and then we're going to do some SonarQube analysis. On the right,
B
We've
got
a
Gradle
application
where
we're
gonna
be
using
a
Gradle
image
to
do
the
packaging
of
an
application
and
then
again
we're
gonna
do
some
some
static
code
analysis
with
sonar
queue.
So
the
key
point
to
understand
here
is
that
you
know
the
first
step
of
this.
This
pipeline
is
gonna,
be
built.
We're
gonna
build
a
an
artifact,
it
doesn't
matter
if
it's
maven
or
if
it's
Gradle
or
ant
or
if
it's
doctor,
we
know
that
for
our
desktop
lifeline.
the first step in this trivial example is that we're going to build an artifact. The second is that we're going to do static code analysis, and it doesn't necessarily matter whether that's coming from SonarQube or from Fortify or whatever tools you want to instrument; the general workflow remains the same. So for this example: we're going to build something, and then we're going to scan it.
So how can we modularize this pipeline code so that every team, regardless of whether they're using Maven or Gradle, can use the same pipeline template without having to hard-code their tool integrations in Jenkinsfiles distributed across our source code repositories? The first idea is: let's apply the same best practices to pipeline development that we've been applying to application development for a long time. The principles there are: how do we
B
You
know
abstract
away
or
separate
the
business
logic
of
your
pipeline
from
the
actual
technical
implementation,
and
then
how
do
we
modularize
that
technical
implementation
so
that
we
can
swap
tools
in
and
out
pretty
easily
so
the
first
step
there
is,
you
know
in
our
pipeline
configuration
repository,
so
a
central
place
where
we
can
store
our
pipeline
code.
We
can
Korean
what
we
call
libraries.
So
these
are
similar,
but
not
the
same
as
Jenkins
global,
shared
libraries.
This
is
where
we're
going
to
create
the
plug-and-play
modular
implementations
of
different
steps.
B
Alright,
so
in
this
example,
we
would
have
a
maven
library
that
has
a
build
step
and
a
grade.
A
library
that
has
a
build
step
on
the
right.
You'll,
see
an
example
of
what
those
steps
might
look
like
the
pipeline
code
itself
hasn't
changed.
All
that
changed
is
how
we
organized
the
code
and
we
wrapped
it
in
a
call
method
so
that
when
we
load
it
its
invoke
Keable
from
from
the
main
template
and
then
both
applications
are
gonna,
be
using
Sun
or
Q
for
static
code
analysis.
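For a concrete picture, a minimal sketch of such call-wrapped library steps might look like this (the file layout follows JTE's library convention; the image tags and commands are illustrative assumptions, not the exact slides):

```groovy
// libraries/maven/build.groovy -- a sketch of a JTE library step
void call() {
    stage("Build") {
        // run inside a container image that carries the Maven toolchain
        docker.image("maven:3.6-jdk-8").inside {
            sh "mvn clean package"
        }
    }
}

// libraries/gradle/build.groovy -- the plug-and-play alternative
void call() {
    stage("Build") {
        docker.image("gradle:6.0-jdk8").inside {
            sh "gradle build"
        }
    }
}
```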
So we have a SonarQube library that has a static_code_analysis.groovy file, and again, the pipeline code for that step has not changed besides the fact that we've wrapped it in a call method. So now, instead of having a Jenkinsfile in every source code repository, we can pull that Jenkinsfile out, as a pipeline template, into the centralized pipeline configuration repository. In this example, like we saw, the template is just build and then static code analysis.
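In other words, the centralized template being described would be roughly this (a sketch; the step names follow the talk's example):

```groovy
// Jenkinsfile (pipeline template) in the central pipeline configuration repository
build()                  // resolved from the maven or gradle library at runtime
static_code_analysis()   // resolved from the sonarqube library
```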
B
So
then,
in
each
source
code
repository
instead
of
having
an
entire
jenkins
file
that
card
codes,
particular
tool,
integrations
or
loads,
a
common
global
library
and
implement
some
steps.
We
can
instead
just
have
a
pipeline
configuration
file,
so
we
know
that
both
teams
are
going
to
inherit
the
same
pipeline
template.
So
all
we
need
to
know
to
run
their
pipelines
is
what
tools
are
you
using?
So
in
this
example
on
the
left,
you'd
have
a
pipeline
config
deck
Ruby
file.
B
It
sits
at
the
root
of
the
source
code
repository
and
it's
got
a
library
section
which
just
specifies
that
we
want
to
load
the
maven
and
the
sonar
tube
libraries
for
the
Gradle
application.
The
only
difference
is
that
you're
gonna
load
the
Gradle
library
instead
of
the
maven
library
great.
So
at
this
point,
we've
separated
the
business
logic
from
the
technical
implementation.
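A sketch of those two configuration files, using JTE's libraries block:

```groovy
// pipeline_config.groovy at the root of the Maven application's repository
libraries {
    maven
    sonarqube
}

// pipeline_config.groovy for the Gradle application; only one line differs
libraries {
    gradle
    sonarqube
}
```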
We have a common, centralized pipeline template (a tool-agnostic workflow), and then we have modularized implementations of different steps, so that we can dynamically compose our pipeline at runtime using these configuration files. The goal here is to be able to take advantage of the fact that these tool integrations don't change much from client to client or project to project. So for us to truly realize the reusability of these pipeline libraries, we need to be able to configure them externally.
B
So
in
this
example,
let's
take
a
look
at
our
our
star,
cube
library,
and
it
wouldn't
be
very
helpful
if
we
had
hard-coded
the
donor,
cheap
server
that
we're
using
or
if
we
hard-coded,
whether
or
not
to
fail
the
build
based
upon
the
results.
So
what
we
can
do
is
add
some
configuration
to
our
pipeline
config.
On the left you can see that under the sonarqube section, when we load the SonarQube library, we can also pass it some configuration options, and then the Jenkins Templating Engine framework, when it loads that library, is going to make these configurations available. At the top here, under the "parsed configuration" comment, there's a config variable: every library step that gets implemented is automatically wired
B
With
this
configuration
variable
that
is
able
to
pull
the
configuration
from
that
pipeline
config
file,
so
by
modularizing,
our
different
tool
integrations
and
then
externalizing
their
configuration
to
the
pipeline
config
file.
We've
really
had
a
lot
of
success
with
being
able
to
reuse
these
pipeline
libraries
across
multiple
clients.
You
know
these
libraries
are
open,
source
and
available
for
multiple
projects
to
use.
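As a sketch of that wiring, assuming a hypothetical enforce_quality_gate option (the config variable is the one JTE injects; withSonarQubeEnv and waitForQualityGate are the standard SonarQube plugin steps):

```groovy
// pipeline_config.groovy
libraries {
    sonarqube {
        enforce_quality_gate = true   // hypothetical option name
    }
}

// libraries/sonarqube/static_code_analysis.groovy -- a sketch
void call() {
    stage("SonarQube Analysis") {
        // "sonarqube" is the server name configured in Manage Jenkins
        withSonarQubeEnv("sonarqube") {
            sh "sonar-scanner"
        }
        // config is auto-wired by JTE from the library's block above
        if (config.enforce_quality_gate) {
            timeout(time: 1, unit: "HOURS") {
                waitForQualityGate abortPipeline: true
            }
        }
    }
}
```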
So instead of the DevOps team supporting a client being limited to just the engineers staffed on that particular project, we can now crowdsource quality and have a common framework for all of the DevOps engineers implementing these DevSecOps pipelines to work together: to continuously improve the configuration options on a library, but really to speak a common language when implementing these pipelines at scale.
These libraries are unit tested with Jenkins Spock, which is another framework that was presented at Jenkins World the last time I was there. So updating this common framework is just like maintaining software for an application. We've got versioning of these libraries, so as new configuration options are added, or when you're going to introduce a breaking change, we can create different releases of these libraries, and teams can choose when to upgrade, or the administrator for that particular Jenkins instance can choose when to upgrade, just like in software development.
Also (I can show this in a bit once we move to the demo side), all of these repositories have a README in them, and that README has a table of the different configuration options; it describes what the library does, and then it has screenshots of any artifacts that might be generated by that library. Then, on our documentation site, we can aggregate all of those READMEs together and have a front-facing, almost API-like reference for how we can build pipelines from these building blocks.
B
Right,
so
we
can
still
take
this
a
step
further.
So
at
this
point
you
know:
we've
separated
the
business
logic
from
the
technical
implementation
by
having
a
pipeline
template
in
modularized
implementations,
but
we
can
also
pull
out
common
configurations,
and
this
is
where
governance
starts
to
come
into
play.
So
on
the
right
we
have
the
two
pipeline
configuration
files.
We've
got
one
that
says
libraries,
maven
and
sonarqube,
another
that
says
libraries,
Gradle
and
synergy.
So if you were at an organization that wanted to enforce the use of SonarQube, you'd want to pull out that common configuration. If we take a look, we can have an organizational pipeline configuration that says libraries: sonarqube, because everyone's going to inherit this, and then it also says merge = true. This is where you're saying, as an organization: let's let the individual applications have their own config file, but have some rules around exactly which configurations they're allowed to manipulate.
B
So
for
this
example,
application
repositories
and
their
configuration
file
can
add
additional
libraries,
but
they
can't
remove
the
fact
that
they're
loading
sooner
so
here
we
can
add
a
pipeline
config
to
the
maven
repo.
That
just
says
it's
using
maven,
because
it's
inheriting
the
fact
that
it's
using
sonar
cube
and
it's
inheriting
the
duggar
and
pipeline
template
and
then
the
Gradle
repository
they're,
also
going
to
inherit
that
same
configuration.
But
they're
gonna
have
a
grade,
a
library,
that's
being
loaded
right.
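A sketch of that hierarchy (the merge flag is JTE's; everything else mirrors the example above):

```groovy
// organization-level pipeline_config.groovy (the governance tier)
libraries {
    merge = true     // children may add libraries but can't remove these
    sonarqube
}

// the Maven app's pipeline_config.groovy declares only what differs
libraries {
    maven            // sonarqube and the template are inherited
}

// the Gradle app's pipeline_config.groovy
libraries {
    gradle
}
```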
B
So
in
the
Jenkins
templating
engine
you
can
create
governance
hierarchies
that
match
your
organizational
hierarchies
just
by
you
know,
sort
of
like
in
maven
how
you
have
parent
comm
files.
The
same
thing
applies
to
the
Jenkins
sampling
engine,
with
our
configuration
files
and
there's
some
rules
there
that
we
called
conditional
inheritance
around
how
how
you
know
children
configurations
are
able
to
inherit
or
able
to
modify
the
pipeline
configuration
as
a
whole.
B
So,
let's
see
this
in
practice.
I
think
it
it'll
help
to
put
some
meat
on
the
bones
of
what
I'm
talking
about
here,
I'm
checking
time
here,
so
we're
still
good
on
time.
So
the
first
and
simplest
example
would
be
a
regular
old
pipeline
job
for
the
sake
of
demonstration.
We're
going
to
have
these
libraries
do
print
statements,
because
I
really
want
to
focus
on
the
templating
side
of
the
problem
here
and
focus
on
building
our
mental
model
for
how
jte
works
as
a
framework.
B
So
this
is
just
a
regular
old
pipeline
job
under
the
pipeline
definition,
we've
added
a
Jenkins
tumbling
engine
section,
so
there
are
two
pieces
to
your
pipeline.
There's
your
pipeline
template
which
again,
oh,
that
is
the
tool
agnostic,
template
workflow.
The
team
the
pipeline
is
going
to
follow
in
this
case
we're
doing
a
build
static
code
analysis
and
then
of
the
deployment
to
dev
and
to
prod.
Alongside
your
pipeline
template,
you
have
a
pipeline
configuration
file
and
this
is
going
to
read
a
lot
like
your
text
stack
so
and
this
example.
B
We've
got
our
library
section
again
where
we're
saying
that
we're
gonna
load,
the
grade-a
library,
the
Cerner
cube
library
and
then
an
ansible
library
to
do
deployments
in
our
template.
We
make
reference
to
this
dev
and
fraud
application
environments.
So
we
really
want
these
templates
to
be
as
human
readable
as
possible,
because
again
they
should
really
only
Expensify
the
business
logic
of
your
software
delivery
processes.
B
So,
alongside
your
libraries
in
your
pipeline
configuration
file,
you
can
also
define
application
environments,
so
jte
has
what
we
call
primitives,
so
these
are
different
types
of
objects
that
can
be
created
by
reading
the
configuration
file
in
pass
to
the
template
during
runtime.
So
in
this
example,
we
created
a
dev,
app
location
environment
and
we
gave
it.
You
know
in
this
example,
just
a
list
of
IP
addresses
that
we
would
do
a
deployment
to,
and
we
created
a
problem
environment
with
a
separate
list
of
IP
addresses.
B
So
we
know
that
every
different
project
might
have
a
different
set
of
application
environments.
Different
clients
are
gonna,
have
different
numbers
of
them,
so
we
really
can't
hard-code
the
fact
that
there's
going
to
be
a
dev
test
staging
in
fraud,
environment
and
our
goal
along
has
been
to
create
a
framework
for
developing
these
pipelines.
So
in
jte
we
were
able
to
create
these
application
environments
dynamically
through
our
config
file.
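A sketch of what that configuration and template pair could look like (the IP addresses are placeholders; passing the environment primitive to deploy_to follows the talk's description):

```groovy
// pipeline_config.groovy -- application environments as JTE primitives
application_environments {
    dev {
        ip_addresses = ["10.0.0.10", "10.0.0.11"]
    }
    prod {
        ip_addresses = ["10.0.1.10", "10.0.1.11"]
    }
}

// Jenkinsfile (pipeline template): dev and prod are now variables here
build()
static_code_analysis()
deploy_to dev
deploy_to prod
```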
Mark Waite: [inaudible question]

Steve Terrana: That's exactly right: there's merge = true and there's override = true. There's more in our documentation on all of the conditional inheritance, and we walk through a couple of different examples of how this works there. But there are two keywords, merge = true and override = true. If you say override = true, it means we're going to completely replace this definition with what was defined by the application team.
So yeah, there are some rules that govern the aggregation process of multiple configuration files, and you can have more than two. In Jenkins you can define library sources and pipeline configuration files as a folder property, so on every folder in Jenkins you could specify a pipeline configuration file that the jobs within that folder are going to inherit. So if you were to draw your organization's org chart on the wall and have that taxonomy defined, you could create the exact same thing in Jenkins just by how you organize your jobs using folders.
B
So
here,
if
I
build
it
as
expected-
and
we
see
from
the
previous
run
here-
we're
going
to
load
all
the
libraries
that
are
specified
in
our
configuration
so
at
the
start
of
the
pipeline,
it's
going
to
reach
out,
so
you
can
take
a
look
at
the
build
bugs
along
the
way
as
its
building.
It's
going
to
tell
us
where
it's
finding
different
libraries
from
and
the
configuration
files
that
are
being
added.
B
So
let's
say
I
wanted
to
change
what
tools
this
pipeline
was
using.
Instead
of
you
know,
without
changing
the
pipeline
template,
because
this
is
our
tool,
agnostic,
business
approved
process,
I
can
just
make
a
change
to
the
libraries
that
I'm
loading
when
we
rebuild
our
job
instead
of
loading,
the
Gradle
library,
it
now
says
to
load
the
maven
library
so
as
anticipated,
it'll
load
them
Nathan
library
and
implement,
and
that
library's
functionality
right
so
and
we
can
see
that
right
here
build
from
the
Maven
library.
B
So
this
is
the
simplest
example
just
a
pipeline
job,
but
what?
If
we
scale
this
up
to
an
entire
source
code
repository?
So
in
this
example,
if
we
go
take
a
look
at
the
github
repository
for
this,
this
repository
for
this
pipeline,
rather
we've
got
a
you
know
a
sample
app
for
maven
in
this.
In
this
example,
we
don't
need
any
code
because
we're
just
showing
how
to
dynamically
compose
our
pipeline,
but
we
have
a
pipeline
config
file
and
it
says
it's
gonna
pull
the
maven
library.
This use case is really just showing that, without having to define a Jenkinsfile in the source code repository, we can inherit the same pipeline across all of its branches and forks. Now, the real power of the Jenkins Templating Engine comes when you can apply these templates to multiple applications simultaneously. So here we have a GitHub organization job.
We can see the configuration base directory, which answers: inside the source code repository, where can JTE find my pipeline config file and my pipeline template? That's the pipeline configuration directory. If I go take a look at the repository, we've got a pipeline configuration directory containing a pipeline config file and a pipeline template. You can create multiple pipeline templates and store them in a pipeline_templates directory, but by default the Jenkinsfile in your pipeline config repo is the default pipeline template, and application teams,
B
If
you
had
multiple,
we
call
them
named
templates
can
specify
which
main
template
that
they
want
to
be
using.
So
our
example
pipeline
configuration
file
says
that
we're
loading
sonar,
cube,
ansible
and
here
a
Splunk
library
we
allow
merge,
equals
true
and
again,
let's
let
the
Maven
and
Gradle
applications
can
add
which
build
tool
they're
using
and
then
we
define
the
organizational
application
environment
that
are
being
used.
Our
jenkins
file,
our
default
pipeline
template,
has
a
build
step,
static
code
analysis
and
the
deployment
to
the
dev
and
prod
environments
defined
in
our
configuration
file.
B
So
if
we
take
a
look
at
I
can
leave
here
and
the
the
libraries
those
can
also
be
defined
on
every
folder
libraries
get
added
as
library
sources.
So
here
we
added
the
libraries
as
a
global
library
source,
which
means
that
they're
available
to
every
job.
On
the
Jenkins
instance
and
under
the
Jenkins
tumbling
engine
section,
we
didn't
specify
a
global
pipeline
configuration
file,
but
we
did
specify
globally
available
library
sources.
So
these
library,
this
library
source,
is
coming
from
SCM,
it's
coming
from
that
same
repository,
but
it's
coming
from
the
libraries
directory.
B
So
if
we
go
take
a
look
at
this
repository
again
underneath
the
root
of
this
repository,
it's
a
libraries
directory
and
then
every
single
directory
here
is
a
is
a
library
that
can
be
loaded.
So
when
we
load
the
Gradle
library,
it
actually
sees
just
build
a
kuruvi
file
and
it
creates
a
build
step,
and
when
you
call
the
build
step
from
your
template,
it
invokes
the
call
method
and
we're
going
to
print
out
build
from
the
Gradle
library.
So
these
library
sources
you
can
have
as
many
library
sources
as
you
want
to.
B
You
can
scope
them
to
individual
jobs
by
defining
the
library
source
on
a
folder
instead
of
globally
inside
manage
Jenkins,
and
you
could
have
multiple
repositories.
Let's
say
you
have
one
port,
one
set
of
libraries
that
are
common
to
the
whole
organization,
but
you
also
have
a
set
of
libraries
that
are
really
specific
to
your.
Your
team's
individual
use
case
that
that
use
case
is
perfectly
well
supported
by
just
defining
multiple
library
sources.
How you choose to organize the code is really driven more by who should have permission to access that code than by whether it functionally makes a difference. You could have three different top-level directories in your source code repository and configure each of them as a library source, or you could have three different repositories and reference each of those as a library source. It's really up to you how you want to organize your code to build your pipeline, so as to optimize the role-based access for who can touch different parts of the pipeline configuration.
B
So
if
we
go
back
to
that,
multiple
application
example
that
we
have,
in
this
case
the
github
organization
job
created
two
multi
branch
jobs.
It
created
a
Gradle
one
for
the
Gradle
repository
and
one
for
the
maven
repository
here
for
the
Gradle
application.
We
can
see
that
it
loads
the
Gradle
library
to
do
the
build
step.
If
we
take
a
look
at
the
maven
library,
it's
using
the
exact
same
pipeline
template,
but
it
uses
the
maven
library
right.
B
If
we
take
a
look
at
one
of
the
console
logs
for
this,
we
can
see
that
everywhere
you
see
jte
to
start
the
log.
That's
a
log
coming
from
the
work
itself.
We
can
see
all
the
pipeline
configurations
that
were
added
by
the
organization,
and
then
we
can
also
see
the
modifications
that
were
made
by
the
actual
source
code
repository.
So
in
this
case
the
individual
repo
just
added
the
fact
that
they're
loading,
the
maven
library
and
then
for
each
library.
B
We
can
also
see
which
repository
that
library
belongs
to
here,
we're
seeing
a
warning
message
that
the
library
doesn't
have
a
configuration
file
that
might
be
a
little
outside
the
bounds
of
an
introductory
conversation
on
jte.
But
the
concept
there
is
that,
because
we
can
externalise
configurations
of
the
libraries,
we
need
a
way
to
validate
that.
What
someone
puts
in
their
configuration
file
is
a
configuration
option.
That's
actually
accepted
by
the
library
to
help
with
some
error
checking
to
make
sure
that
you
know
typos
aren't
being
missed.
One cool feature that I want to highlight is the ability to do lifecycle hooks in JTE. Let's take the example of wanting to send events to Splunk based upon the pipeline. Say you wanted to send a notification to Splunk every time you did a deployment, or every time you ran a build step. Because of the way JTE works, you don't want to hard-code that Splunk notification logic in your Maven library or in your Gradle library, because then you've broken the interoperability of those libraries.
B
You
want
to
be
able
to
plug
them
in
and
take
them
out
without
breaking
everything.
So
we
need
a
way
to
be
able
to
dynamically
register
different
steps
to
run
in
relation
to
events
in
the
pipeline.
So
if
we
take
a
look
at
the
the
logs
here,
we're
gonna
see
sending
a
Splunk
event
at
the
beginning
of
the
pipeline
Splunk
running
before
the
build
step
and
running
after
the
build
step,
but
our
pipeline
template.
If
we
take
a
look
at
it
again,
doesn't
actually
say
anything
about
sending
Splunk
events.
So
how
is
it
possible?
that Splunk knows the build step just happened, or that a deployment just happened? We do that through what we call lifecycle hook annotations. If we take a look at our Splunk library, we've got a couple of different steps here. If we take a look at pipeline_start, we have this @Init annotation, which just means: when the pipeline starts up, after JTE is done initializing the environment by loading libraries and creating application environments, run every pipeline step that has this @Init annotation on it. So here I don't have to
B
You
know,
explicitly
call
this
method
from
my
pipeline
template.
It
registers
itself
to
run
at
the
beginning
of
the
pipeline.
The
same
can
be
done
for
four
steps,
there's
also
a
before
step
and
an
after
step
annotation.
So
if
you
wanted
to
send
a
a
spoink
notification
before
every
other
step,
tooks
place,
you
can
add
this
before
step
annotation
to
your
library
step
and
then
right
before
we
execute
each
step.
We're
gonna
dynamically,
execute
all
the
steps
that
have
registered
themselves
as
before,
step
Watchers
right
hook,
real
quick.
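A sketch of what such hook steps might look like (the annotation names are JTE's; the file names, print statements, and registration details are illustrative, following the talk's description rather than its exact slides):

```groovy
// libraries/splunk/pipeline_start.groovy -- a sketch of an @Init hook
@Init
void call() {
    // runs once, after JTE finishes loading libraries and primitives
    println "Splunk: pipeline starting"
}

// libraries/splunk/step_watcher.groovy -- a sketch of a @BeforeStep hook
@BeforeStep
void call() {
    // runs automatically right before every step, no template changes needed
    println "Splunk: about to run a step"
}
```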
These hooks also accept a conditional execution closure. So if you only wanted to run after a particular step, you can pass a closure: if that closure returns true, we're going to execute the step; if it returns false, we're not going to execute the step. And there are a couple of different hooks available: there's @Init, @Validate, @BeforeStep, @AfterStep, and then @CleanUp and @Notify. So, Mark, I think you had a question.
Mark Waite: [inaudible question]

Steve Terrana: These steps are defined by whoever is creating the libraries themselves. That could be an administrator of the Jenkins instance; it could be a specific application development team that has permission to add libraries. All they have to do, basically, whenever they're defining a step, is add an annotation to it. And then the way it knows to be registered: if we take a look at that configuration file,
without getting too far into the weeds of how this works under the hood: when we go to invoke a step that we've loaded from a library, there's a wrapper around that step where we're able to invoke different hooks. Using some metaprogramming, we can find all the steps that have been loaded that have this @BeforeStep annotation and then invoke all of them. So the framework really handles a lot of that registration and execution of hooks, based upon the annotations that have been loaded.
We use that a lot. An annotation that executes at the beginning of the pipeline gets used a lot for what we call library constructors. Let's say you have a library that wants to expose some environment variables to the rest of the pipeline. You could do that by creating a step that has the @Init annotation and then defining your environment variables for the pipeline. That way you don't have to hard-code
the invocation of that initialization in your pipeline template; it's just an inherent part of the library, so that when you load it, it knows to execute this piece of code at the very beginning of the pipeline. So let's take a look at a slightly more complex pipeline template. I don't want to dive through every line of it.
We have a GitHub library that allows you to write on_pull_request, on_commit, and on_merge blocks, and these are really just business logic routers. We want this to read as close to plain English as possible. So, in "on_pull_request to: develop," develop is a keyword that is actually a variable representing a regular expression. Here, when we create a pull request to the develop branch, we're going to run continuous integration. Alongside this template we have a config file.
Continuous integration is a stage. In the beginning I tried to hard-code what continuous integration meant, and it turns out no one actually agrees. So the framework lets you group steps together into what we call stages, and in our config file we're saying that the continuous_integration stage is comprised of the unit test, static code analysis, and build steps.
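A sketch of that grouping (the stages block is JTE's; the step names follow the example):

```groovy
// pipeline_config.groovy -- group steps into a named stage
stages {
    continuous_integration {
        unit_test
        static_code_analysis
        build
    }
}
```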
B
So
then,
within
your
pipeline
template
when
you
invoke
the
continuous
integration
method,
it
goes
and
invokes
the
steps
that
have
been
defined
in
your
convict
file
and
that's
really
just
a
way
so
that
we
don't
have
to
repeat
ourselves
within
our
pipeline
template.
If
we
wanted
to
do
those
three
steps
on
for
requests,
different
branches
or
on
merges
two
particular
branches
stages.
Let
you
basically
not
repeat
yourself
and
consolidate
common
sequences
or
steps
on
poor
request
to
master.
So
this
would
be
a
developer,
creates
a
floor
request
to
the
master
branch.
B
In
parallel,
we
do
commit
tration,
testing,
accessibility,
compliance
testing,
functional
testing
and
then
some
exploratory
testing
and
again.
These
steps
are
named
generically
on
purpose,
because
we
might
have
multiple
libraries
that
implement
the
penetration
test
method
and
then,
finally,
on
merge
to
master.
So
that's
whenever
a
floor
request
has
actually
been
merged
to
the
master
branch.
We're
gonna
do
a
deployment
to
production.
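Put together, a sketch of that branching-strategy template (the on_pull_request and on_merge keywords come from the GitHub library just described; the exact parameter syntax is an assumption):

```groovy
// Jenkinsfile (pipeline template) routing work by developer action
on_pull_request to: develop, {
    continuous_integration()
}

on_pull_request to: master, {
    parallel(
        "Penetration Test":         { penetration_test() },
        "Accessibility Compliance": { accessibility_compliance_test() },
        "Functional Test":          { functional_test() }
    )
}

on_merge to: master, {
    deploy_to prod
}
```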
So through this pipeline template, the branching strategy for how we collectively work on a code base can be mapped to our pipeline template, where we can perform different actions in response to different developer actions. So for pull requests to different branches, if you've got release branches or hotfix branches, the GitHub library that we've created lets you do a lot of business logic routing to drive the pipeline in response to different events from GitHub.
Mark Waite: [inaudible question]

Steve Terrana: My experience, personally, is that development teams want to focus on what they do best, which is building applications. It's very difficult to scale the skill sets required to learn how to integrate all these different tools, and how to set up and configure all these different automated testing tools like SonarQube, plus the Jenkins architecture. The developer experience should really just be: here are the tools that I'm using; build me a pipeline
B
That
does
those
things,
so
this
has
made
it
really
easy
for
us
to
do
that
right,
because
we've
already
completed
a
lot
of
this
work,
every
DevOps
engineer,
Booz
Allen,
doesn't
have
to
learn
how
to
do
the
more
complex
logic
right.
They
can
build
their
pipelines
from
the
work.
That's
already
been
accomplished,
it's
all
open
source,
so
we
can
collectively
work
on
this
framework
and
crowdsource
quality,
so
the
developer
experience
has
been
has
been
pretty
good.
In
my
opinion,
you
know
there.
B
There
are
drawbacks
to
this
approach,
sometimes
like
it
can
be
difficult
at
times
to
fit
everything
into
these
clean
buckets.
But
my
response
usually
is
that
if
it
doesn't
fit
into
this
clean
bucket,
then
I
want
to
have
a
conversation
around
the
development
processes
that
are
going
on.
You
know
if
your
development
process
can't
be
broken
into
a
build
test
package.
Deploy.
Let's
talk
about
you.
B
Process
that's
going
on
and
see
how
we
can
align
it
to
some
common
processes,
because
this
also
helps
a
lot
with
the
different
security
teams.
Right,
so
in
federal,
consulting
applications
need
what's
called
an
authority
to
operate
so
security
teams
want
to
know
that
the
code
being
developed
has
been
tested
in
this
secure.
So
we
can
work
with
those
security
teams
using
this
framework
to
say,
look,
here's
the
business
process
that
you
your
requirements
are
reflected.
You
know
that
there's
going
to
be
a
container
image
scanning
and
penetration
testing.
B
So
let's
get
this
process
approved
and
then,
let's
let
teams
still
choose
the
best
tool
for
the
job
by
having
these
modular
eyes
implementations
of
different
tools.
So
it's
helped
us
a
lot
from
a
security
and
governance
perspective.
There's
a
big
difference
between
being
able
to
say
everyone
must
do
unit
testing
with
this
code
coverage,
static
code,
analysis,
container
image,
spanning
penetration
testing
and
then
relying
on
teams
to
implement
that
versus
the
jte
approach
or
saying
everyone
is
going
to
follow
this
process.
I appreciate that. The beauty here is that pipeline templates run just like Jenkinsfiles. You could put regular Jenkins pipeline-as-code, scripted pipeline, in here, and it would work just fine. I don't recommend that you do that, obviously, but from an implementation standpoint, this on_pull_request is just a step in a library. It takes map parameters as an argument, so it's going to take a "to" input parameter and a "from" input parameter, and it's going to execute this closure.
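For illustration, a minimal sketch of how such a router step could be implemented (CHANGE_ID and CHANGE_TARGET are the standard multibranch-pipeline environment variables for pull request builds; the rest is an assumption, not the library's actual code):

```groovy
// libraries/github/on_pull_request.groovy -- a sketch of the routing idea
void call(Map args = [:], Closure body) {
    boolean isPullRequest = env.CHANGE_ID != null
    // args.to holds a regular expression for the target branch
    if (isPullRequest && env.CHANGE_TARGET ==~ args.to) {
        body()
    }
}
```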
The closure is also an input argument, right? So we've really just used the flexible nature of Groovy to create almost domain-specific languages with which to build our pipelines. I get a lot of frequent questions about whether declarative pipeline is supported, and the answer right now, unfortunately, is no. I would love it to be, but declarative pipeline assumes that it knows everything up front; it looks at things like the global variables from the shared libraries that have been loaded, and just because of the way JTE currently initializes the pipeline runtime environment,
B
We
don't
actually
know
which
steps
have
been
loaded
until
the
pipeline
has
started
in
libraries
have
been
loaded
so
Oh.
My
response
is
usually
like
the
goal
of
declarative
pipeline
and,
if
Andrew
bears
on
the
line,
please
please
correct
me,
but
the
goal
of
declared
a
pipeline
was
to
create
a
simple
interface
to
lower
the
technical
barrier
to
entry
for
development
teams
to
be
able
to
create
their
pipelines
and
the
drink
and
sampling
engine
does
the
same
thing
with
just
a
different
approach
to
it.
So
go
ahead.
Mark
I
think.
Mark Waite: I think of declarative pipeline as an opinionated way to do pipeline, and JTE seems like a strongly opinionated way to do pipeline. They're differently opinionated, so the fact that they don't intermix does not shock me at all. This looks like a really elegant, opinionated way to approach pipeline description and generalize it nicely.
Steve Terrana: Thank you very much; that's really what I try to communicate with this. There are a lot of moving pieces, and you might have different layers of configuration that can sometimes be hard to track, but at the heart of JTE is just a framework for developing pipelines in a tool-agnostic way that helps you scale them to entire organizations.
B
There
is
no
reason
that
we
need
to
be
copying
and
pasting
Jenkins
files,
if
the
workflows
large
are
the
same
and
JT
is
really
just
a
way
to
say,
without
hard
coding
yourself
to
particular
tools.
What's
the
business
process
to
get
code
from
a
developer's
laptop
to
production,
and
then
you
know
the
fact
that
we
might
use
different
tools
for
unit
testing
or
for
static
code
analysis,
that's
just
an
implementation
detail
and
you
can
specify
that
in
your
configuration
file.
B
So
this
is
this
is
what
a
more
mature
pipeline
configuration
looks
like
you,
have
the
option
of
allowing
application
teams
to
load
their
own
pipeline
templates
in
their
repositories.
In
this
example,
we
turn
off
that
functionality.
Governance
is
really
a
dial
in
jte.
You
can
choose
how
strict
or
flexible
you
want
to
be
or
how
much
power
you
want
to
give
application
development
teams
based
upon
how
you
configure
it
from
a
top
level
down.
So
we
define
some
application
environments,
we
define
some
stages
and
then
the
library
section
reads
a
lot
like
your
text.
B
Sack
like
I
said:
jte
is
really
the
core
innovation
of
booze
Allen's
solutions
delivery
platform.
So
we
have
our
on
set
of
libraries
that
are
available
on
github.
We've
got
an
SDP
library
that
just
provides
some
helpers
to
the
others,
so
we
load
that
we've
got
a
github
Enterprise
which
provides
that
uncommit
on
full
request
functionality
that
you
saw
in
the
template.
B
We've
got
a
STONER
to
buy
verse
at
a
code,
analysis
docker
for
building
and
publishing
container
images,
twistlock
for
container
image
scanning
OpenShift
for
deployments,
using
helm
to
an
open
ship
cluster,
a
tool
called
the
ally
machine
for
accessibility,
compliance
testing
and
then
OS
app,
which
is
an
open
source
tool
for
penetration
testing.
Right
and
all
of
these
libraries
have
externalized
configuration
options.
If
we
take
a
look
at
the
documentation
for
them.
Mark Waite: [inaudible question]

Steve Terrana: No one should ever have to figure out how to do SonarQube analysis with Jenkins again; the work has already been done. Here's the library that you can load and its current configuration options. If you need new configuration options, don't reinvent the wheel: just open a pull request to this library and add the configuration options you need, and we can continuously improve the extensibility of these libraries and the configuration options that they support.
B
So
if
we
go
back
to
the
slide
for
a
few
minutes
and
I
want
to
open
it
up
to
some
questions
here,
so
key
takeaways
that
I
want
folks
to
have
about
the
Jenkins
sampling
engine
is
that
it's
a
framework
for
developing
pipelines,
there's
no
right
or
wrong
way
to
do
it.
I
have
opinions,
obviously,
but
really
the
Jenkins
templating
engine
is
a
framework
for
developing
tool.
agnostic, templated workflows, so that pipeline templates can be shared by multiple teams regardless of what tools are used. This approach separates the business logic (your template) from the technical implementation (your libraries), allowing teams to configure their pipelines instead of building them from scratch. So there are three main value props that I like to share. First, organizational governance: we now know what software delivery process each team is going to use, but we're flexible enough to let teams choose different tools. Second, we've optimized pipeline code reuse.
B
So,
instead
of
everyone
reinventing
the
wheel,
we've
seen
pipeline
development
decrease
from
five
months
to
five
days
for
new
projects
that
are
leveraging
existing
tool
integrations.
So
it's
a
97%
decrease
and
how
long
it
takes
our
new
projects
just
to
pull
for
a
mature,
dev
suck-ups
pipeline
from
the
get-go
and
then
finally
simplifying
pipeline
maintainability,
so
I
managed
large
Jenkins
instances
before
jte
existed
and
in
my
opinion
it
is
a
lot
easier
to
manage
a
pipeline
template
and
then
modularize
tool
integrations
than
it
is
to
manage.
B
Sixty
copied
and
pasted
jenkins
files
that
have
been
tweaked
to
integrate
with
a
particular
tech
stack.
So
it's
really
made
it
easier
to
onboard
new
applications
to
make
changes,
so
the
pipeline's
flow
over
time,
because
you
only
have
to
update
the
template
in
one
place
and
it's
just
really
maybe
easy
for
to
support
multiple
teams
simultaneously.
B
So,
let's
turn
it
over
to
questions
here.
We've
got
some
some
few,
our
codes,
if
you
want
links
or
documentation
or
different,
hands-on
learning
labs
that
can
help
you
build
out
a
local
environment
using
docker,
filled
out
the
libraries
that
we
showed
as
part
of
the
demo
today
and
then
I'll
link
to
our
Gator
channel,
where
I
would
love
that
you
could
get
involved
and
ask
some
questions
so.
B
So
our
the
roadmap
for
tool
integrations
is
really
driven
by
which
clients
need
them.
So
at
this
point,
I,
don't
think
that
we've
got
any
requests
for
Garret.
But
that
being
said,
the
second,
a
team
that
I
or
anyone
at
Blue's
is
working
with
is
trying
to
use
Garret
it'll
be
on
the
roadmap
or
you
know.
If
you
want
to
get
involved
in
the
Gator
channel,
there
is
a
lot
of
documentation
in
the
Jenkins
sampling
engine
log
documentation
or
how
to
create
new
tool
integrations.
Mark Waite: [inaudible question]

Steve Terrana: That's a great question. So there are some best practices, or at least (maybe "best practices" is a bit far) a particular approach that we take when writing libraries, and that's that we don't install tools on Jenkins. I have been in situations where there are three versions of Java in use, or three versions of Maven or Gradle. So instead, what we do is maintain container images, which become runtime environments for the different library steps.
B
So,
for
example,
if
there
was
a
maiden
library,
we
would
have
a
configuration
option
for
that
library
which
specifies
which
version
of
maven
and
then
the
library
would
run
its
step
inside
of
a
container
image
that
has
the
appropriate
version
of
the
tool.
So
when
it
comes
to
platform
specific
considerations,
if
you're
looking
at
integrating
with
the
libraries
we've
already
developed,
the
only
requirement
is
that
docker
is
installed
for
the
agents,
so
they
can
use
those
images
for
runtime
environments,
but
otherwise
it's
really
just
it
executes
just
like
any
other
Jenkins
pipeline.
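A sketch of that version-selection idea (the version option name and image tag scheme are assumptions for illustration):

```groovy
// libraries/maven/build.groovy -- select the tool version from configuration
void call() {
    // config is injected by JTE from the library's pipeline_config block
    String version = config.version ?: "3.6"
    docker.image("maven:${version}-jdk-8").inside {
        sh "mvn clean package"
    }
}
```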
B
So
you
can
connect
agents
within
your
library
steps
you
can
specify
which
agents
should
be
used.
We're
currently,
during
the
initialization
process
of
jte,
we
make
a
node
call
or
two
so
we're
working
on
making
it
totally
agnostic
so
that
you
can
pass
agent
labels
to
every
piece
of
the
initialization,
but
in
general
it
should
be
able
to
work
on
any
platform.
That's
currently
in
use.
Mark Waite: Okay, so I think I understood the Docker capability, and that sounds really powerful. But remembering how a build tool works in my world, some of the things aren't dockerizable at all; for instance, I don't know how to dockerize FreeBSD or macOS. But you're saying that I could label something there and still use the label to refer to it, even though it's not a Docker image?
Steve Terrana: That's correct. Your library steps execute just like scripted pipeline code. So if, inside your library step, you had a node block and you passed it a node label, then when the framework executes that library step, it'll execute the code on the node with the assigned label. It's really like we took a scripted Jenkins pipeline, and instead of having it be a 700-line file,
we broke it up into a bunch of smaller files; the execution of that code still works just like any other Jenkins pipeline. You can have node labels, you can run pieces inside container images; you can really write your pipeline however you want to. It's just that JTE becomes the framework for how you dynamically compose that pipeline and then execute the different steps.
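For illustration, a sketch of a library step pinned to a non-dockerizable agent (the "macos" label and the command are illustrative):

```groovy
// a library step that runs on whichever agent carries the matching label
void call() {
    stage("Build") {
        node("macos") {
            sh "xcodebuild -version"
        }
    }
}
```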
Yes, so let me clarify that. The Jenkins Configuration as Code plugin natively supports any plugin that was developed using standard plugin development best practices. So, that being said, it is supported; it comes up under the unclassified section. As long as you've configured the global governance tier, the Jenkins Templating Engine section under Manage Jenkins, then when you export your JCasC file, the JTE configuration will be in there. That being said, you need a combination of JCasC and something to create the jobs themselves.
That's actually one of the things I've been thinking about how to improve. It would be great if you could create a hierarchical configuration in JCasC and have that map to the different jobs on the instance, but as of right now, like I said, it takes a dual approach of JCasC and job DSL.
Mark Waite: [inaudible question]

Steve Terrana: Correct. I probably know a little more about Jenkins than I should, so instead of job DSL I've just used the Jenkins API to create jobs. Sorry if you flinched a little bit, but sometimes I prefer to just create jobs and configure things through the Groovy API of Jenkins instead of job DSL. But it also works with job DSL, and I've seen projects do it both ways.
Mark Waite: [inaudible question]

Steve Terrana: So, as with all our libraries, it's a function of the first time someone needs it; that's when it gets implemented. At this point it's on the roadmap, but no one's implemented the library yet. That being said, we largely rely on the git CLI itself for our implementation of those libraries; there's really only one step that requires integration with the specific git server's API.
If you open a pull request and you want to know which source branch the pull request was opened from, like from feature-1 to master, there isn't a way through the CLI that I'm aware of to get the name of that branch, so we rely on the server's API to get it. For GitHub, public versus Enterprise, we use the GitHub SDK to connect to GitHub and get that information; for GitLab we hit the RESTful API endpoint to get that information; and for Bitbucket...
Mark Waite: [inaudible question]

Steve Terrana: You definitely can contribute; please join the Gitter channel. JTE follows the same contribution guidelines as the Jenkins project itself, and our documentation has all the information you need to get started. If there's something in particular someone's interested in working on, feel free to open a GitHub issue for a feature request, or join the Gitter channel to talk about it. I watch that channel very actively, so I'd love to talk to you all.
If you do end up thinking that the Templating Engine meets some of your use cases and would be a good tool for your organization, we've got an adopters file. So if you want to share your use of the Jenkins Templating Engine, just open a pull request to our ADOPTERS.md file, and it will be automatically pulled in and listed as part of our documentation.
Mark Waite: Excellent, thank you. I think that covered all the topics that were on my mind. Is there anything else you wanted to say in conclusion, Steve, before I end the recording and put this on the archive? I will post the link to the Jenkins Gitter channel and to the Templating Engine plugin Gitter channel once I've got the recording saved to YouTube. I think that's...