Description
To evolve #Jenkins to support a #microservice implementation, templated workflows will be critical. The Ortelius team gets a walk-through of the Jenkins Templating Engine from @steven_terrana. The beginning of a great collaboration on microservice management for the CD pipeline.
B
Awesome, but I think this picture represents a lot of what you were talking about. I was in a situation where I was tasked with building a DevSecOps pipeline to support 50 or 60 microservices being built by five or six different companies.
B
So I work for a federal consulting firm; we help the government modernize their systems, so there's often multi-vendor software development going on and a lot of adoption of Kubernetes and microservices. When we showed up, there were two 700-line Jenkinsfiles, one for Node applications and one for Java applications, and the only difference between them was 10 lines of code about whether you should use Maven or npm to execute unit tests. As an engineer, that meant copying and pasting a 700-line file and tweaking 10 lines.
B
It felt like there had to be a better way to do this. The very first thing that started me down this path was: why don't we just check the repo to see if there's a package.json or a pom.xml, and then dynamically load a library based upon the existence of those files, so that we can pull in two different libraries that both implement a unit test step?
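A minimal sketch of that first idea, as a scripted Jenkinsfile. The shared-library names ("maven-steps", "npm-steps") are hypothetical; each would contribute its own unitTest() step.

```groovy
// One Jenkinsfile that inspects the repo and dynamically loads a shared
// library based on which build file exists.
node {
    checkout scm
    if (fileExists('pom.xml')) {
        library 'maven-steps'      // Maven implementation of unitTest()
    } else if (fileExists('package.json')) {
        library 'npm-steps'        // npm implementation of unitTest()
    } else {
        error 'No pom.xml or package.json found in this repo'
    }
    unitTest()  // same call site, implementation chosen at runtime
}
```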
B
So I can use the same pipeline but be a little bit more clever about how to pull in the right piece of pipeline code. That sent us down this path: we wanted to see how far we could take that idea. Can we do that for the entire pipeline, being able to create tool-agnostic pipeline templates that define business logic, and then separately define what tools we're going to use to implement this templated workflow?
B
A lot of DevOps tools say they do templating, but what they mean is that they do YAML aggregation and variable substitution, which are not quite the same thing. There's a big difference between having a parameterized Jenkinsfile that you pass variables to, versus saying: all right, we want to do a build, test, and deploy, but be able to totally swap implementations in a plug-and-play fashion.
B
This picture just shows that as you scale linearly, the pain associated with that is exponential. Here are all the challenges; it sounds like we're on the same page. It takes a long time to onboard teams.
B
The challenge with that is you still have to duplicate the Jenkinsfile everywhere. In the specific industry I work in, compliance and standardization are a big deal, and giving developers access to the Jenkinsfile isn't always okay: there's nothing stopping a developer from bypassing required checks and just deploying to production. So being able to define in one place the tool-agnostic business process that's required by the organization for software delivery, while still being flexible enough to let developers choose the right tools, is what we were aiming for. And then the last problem that we've experienced a lot is continuous improvement, like if you want to update your pipeline.
A
I'm going to stop you right there; I have to ask a question. When it comes to microservices, were you thinking in terms of a polyrepo instead of a monorepo when you were thinking about building this out? Where was your head in terms of microservices?
B
My typical opinion is that any independently versionable thing should be its own Git repo. I don't think that's a fact; I know there are a lot of right ways to do things. But just because of the way a lot of pipeline tools work, handling monorepos from your pipeline tooling is very difficult. So I won't be as bold as to say that monorepos are wrong, but I will say that they make life a lot more difficult in a couple of different ways.

A
I am so with you. I was talking to the folks at Google, and apparently they have a big monorepo, and I really want to find out who the right person is to talk to them about what that looks like, because I feel the same way: if it's independently deployed, it should have its own workflow and it should have its own repo.
C
Well, from what Dan Lorenc said, their goal is that they can put a tag or a branch or anything like that into the monorepo.
B
My response to that: I consult, so I try to convince teams all the time that a multi-repo setup is going to make life easier, and every now and then someone will come back and say, well, Google does a monorepo, so it's got to be okay. And my response was always: Google has invested millions and millions of dollars into perfecting their software delivery practices, and they're probably not using Jenkins, right? They probably have an entire team dedicated to custom CI/CD tooling to make that possible. So just because the monorepo works for Google doesn't mean it works for everyone; for the average team doing software delivery, there are still a lot of problems to solve.
C
Well, that's why Google built Borg, and I believe their build system is Bazel.
B
So here's the Jenkins Templating Engine in a gif, and we'll get to code on the next slide. Our realization was that, regardless of the tech stack, when you're contextually looking at a team or a set of teams, the business process of the pipeline largely doesn't change: you're going to run some tests, you're going to build an artifact, package and publish that artifact, hopefully scan it, deploy it somewhere, and then run some more tests. The specifics of what tools implement those steps can vary from team to team.
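A sketch of what a tool-agnostic JTE pipeline template for that business process might look like. The step names here are illustrative; each one is contributed by whichever library a team's configuration loads.

```groovy
// Tool-agnostic pipeline template: pure business process, no tool names.
unit_test()             // npm, maven, or gradle library supplies this
build()                 // produce the artifact
package_and_publish()   // push the artifact to a registry
scan()                  // e.g. a SonarQube or Fortify library
deploy_to dev           // 'dev' is an application environment primitive
functional_test()       // post-deployment tests
```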
B
Some teams are going to use npm or Maven or Gradle to execute their builds and tests. Some teams might use SonarQube or Fortify. But that business process remains constant much of the time. So let's walk through exactly how that's possible with the Jenkins Templating Engine. On the left and right are super bare-bones pipelines, just to demonstrate the point: we have a pipeline on the left that uses Maven and SonarQube, and a pipeline on the right that uses Gradle and SonarQube, which traditionally would have been two separate Jenkinsfiles.
B
We realized both of these pipelines do a build and then a static code analysis. It would be great if we could pull the Jenkinsfile out of these repos, define it in one place, and say: this is the common template these teams are going to inherit, but they'll get to tell us, are you using Maven or are you using Gradle? And then we'll be able to pull in different libraries, based upon the configuration, to implement this template. So to do that, you reorganize the code a little bit.
B
You create pipeline libraries in JTE that contribute steps. In this example, we've got a Maven and a Gradle library that contribute a build step, and a SonarQube library that contributes static code analysis. The pipeline code that lives in these libraries is the exact same pipeline code you would have written in the Jenkinsfile, just reorganized in a way that lets us be a little bit more modular and plug-and-play.
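A sketch of how a library contributes a step, assuming JTE's convention of one Groovy file per step inside a library directory (the paths shown are illustrative).

```groovy
// maven/steps/build.groovy: contributes the build() step.
void call() {
    sh 'mvn clean package'   // ordinary pipeline code, just relocated
}

// The gradle library's own build.groovy would contribute the same build()
// step with a different body, e.g.:  sh './gradlew build'
```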
B
We can define a common templated Jenkinsfile in one place. In JTE 102 we can talk about how you can have multiple templates; you can create a named template and let teams select which one they want. There's a lot of flexibility in how things can be done, but for the sake of introductions: there's a common pipeline template that defines, in tool-agnostic terms, what the steps of the pipeline are, and then each team gets a configuration file that specifies what tools are going to be used to implement this pipeline.
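A minimal sketch of that pairing, following the Maven/Gradle example from the slides: one shared template, and a small configuration file per team.

```groovy
// The shared pipeline template, defined once:
build()
static_code_analysis()

// Team A's pipeline_config.groovy:
// libraries {
//     maven
//     sonarqube
// }

// Team B's pipeline_config.groovy swaps a single line:
// libraries {
//     gradle
//     sonarqube
// }
```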
B
You can define a lot more things than just the libraries in that config file, but we'll get to some of the advanced features later if we're interested in talking through those things. Through this pattern, we don't have to create a Jenkinsfile in each repo anymore. We can create a common template used by as many teams as you want, but still give them the flexibility to choose the right tools for the job.
B
These libraries become reusable building blocks for constructing your pipelines. There's no reason every DevOps engineer should have to google "SonarQube and Jenkins" to find the 15 lines of code it takes to do the static code analysis.
B
We can bundle these steps into libraries and then expose parameters in the configuration file that you can pass in. From a framework perspective, in the pipeline configuration you can pass whatever data you want to a library, and we'll auto-wire a config variable which gives the steps access to that pipeline configuration.
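A sketch of that auto-wiring; the parameter names (credential_id, enforce_quality_gate) are hypothetical.

```groovy
// pipeline_config.groovy: pass arbitrary parameters to a library.
// libraries {
//     sonarqube {
//         credential_id = "sonarqube-token"
//         enforce_quality_gate = true
//     }
// }

// Inside the sonarqube library's static_code_analysis step, JTE auto-wires
// a 'config' variable holding that block:
void call() {
    withCredentials([string(credentialsId: config.credential_id, variable: 'TOKEN')]) {
        sh "sonar-scanner -Dsonar.login=${TOKEN}"
    }
    if (config.enforce_quality_gate) {
        echo 'quality gate enforcement would happen here'
    }
}
```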
C
Go back to that. On the bottom left we have libraries, and the SonarQube scanner 3.0. Where is that kept? Is it kept in each individual team's repository, so that's their data definition, and then the whole pipeline, the Groovy code, is kept in a different repo? Is that how it's laid out? Does that make sense?
B
That's a good question. It makes sense, and the answer is: it depends on how you want it to work. You can create hierarchical pipeline configurations. If you want each team to define their entire pipeline configuration themselves, each repo can have a pipeline_config.groovy file that defines everything. In this example, if we wanted to pull out common configurations, those could live in a central source code repository that specifies the common configurations. The aggregation of pipeline configurations follows some rules, so governance is a dial in this framework.
B
So if you work in a highly regulated environment and you want to define the template and most of the pipeline configuration, you can do that. If you want each team to bring their own template and bring their own config, you can do that too. The way it works in Jenkins is we created a folder property where you configure what's called a governance tier. On each folder, and on the Jenkins instance as a whole, you can point to a pipeline configuration, a set of library sources, and where the templates are, so you can create these governance and configuration hierarchies just based upon how you organize jobs in Jenkins.
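A sketch of a governance-tier pipeline_config.groovy attached to a Jenkins folder, assuming JTE's config-aggregation flags (the @merge annotation marks a block that child configs are allowed to extend).

```groovy
// Governance-tier pipeline_config.groovy: everything under this folder
// inherits it, and only @merge/@override blocks can be changed below.
@merge libraries {
    sonarqube          // loaded for every team under this folder
}

// A repo-level pipeline_config.groovy can then simply add:
// libraries { gradle }
```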
B
So if you wanted to not put config files in the application repo, that's fine: you would just define it in a separate repository and then point to it from the folder property in Jenkins. And then, depending on how you organize the jobs in Jenkins, you can create whatever configuration hierarchy you need. I'm rambling a little bit.
C
No, it's all right, I get it. So in your Jenkins UI: I get this from the creating-the-Jenkinsfile perspective, but from the UI perspective, is each Git repo, let's say, going to have its own workflow or job that's going to reuse the same pipeline configuration, or is there going to be just one job?
B
It ties into the existing way you set up jobs in Jenkins. You can create multibranch jobs and say they're using JTE, and you can create GitHub organization jobs that say they're using JTE, so it really works as a separate project recognizer. Instead of saying "go find the Jenkinsfile in this repo and use it," it says: I'm a JTE job; let me look at where I live on the Jenkins instance and inherit the pipeline template that's been assigned to me, or choose the one that was defined in the configuration.
B
I think I can pull up a Jenkins instance, if that would be helpful. (Sure.) Let me see if I've got my Jenkins container running.
B
Thankfully, I've been practicing the same presentation for a while now. Let's pull up Jenkins.
B
All right, so if we look at a single repository: if you wanted to use the same pipeline template across every single branch and pull request, then in the job configuration (and this is a regular multibranch project, like you would typically set up), instead of it pulling the Jenkinsfile from each branch, you say: I'm using JTE. And then, if you want to define a configuration for this repository specifically, down here: multibranch projects are just fancy folders, so it inherits the folder property of where do I find the pipeline configuration and templates, like what directory inside this repo.
B
Instead of using a Jenkinsfile, this entire GitHub organization is going to align to JTE. Then every single repo inside of here can use a common pipeline template. If you've got something a little fancier, where some repos in the GitHub org job are going to want one template and another set of repos are going to want another template, you can create named pipeline templates, and in that pipeline configuration file you can say: I want to use the gitflow template, or I want to use the Kubernetes template.
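A sketch of selecting a named template from a repo's configuration, assuming "gitflow" and "kubernetes" templates have been defined at the governance tier.

```groovy
// pipeline_config.groovy: pick a named template defined higher up.
pipeline_template = "gitflow"   // or "kubernetes"

libraries {
    maven
    sonarqube
}
```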
B
However you choose to structure your templates and your libraries is totally up to you, which is honestly why people can sometimes struggle implementing it: it's very much up to you how you want to structure your libraries, how many templates you want to have, and how you choose to organize your governance structures. So in the deck, we finally get to what templates look like when we write them.
B
In this pipeline configuration we can define keywords, which are just variables you define in your config that we're going to inject into your template execution. Here we define regular expressions. You could define these keywords as whatever variable you want, but for this use case it's a regular expression, and then that gets used as an input to these on_pull_request and on_merge blocks. We want these templates to be as easy to read as possible.
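A sketch of the keywords primitive and the template that consumes it; the on_pull_request and on_merge steps are assumed here to be contributed by a git-workflow library, as in the example on the slide.

```groovy
// pipeline_config.groovy: a keyword injected into the template.
// keywords {
//     develop = /^[Dd]evelop(ment)?$/
// }

// Pipeline template: the keyword keeps the template readable.
on_pull_request to: develop, {
    build()
    application_dependency_scan()
    static_code_analysis()
}
on_merge to: develop, {
    deploy_to dev
}
```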
B
On a pull request to develop, this block of code is only going to execute if the Jenkins job represents a pull request to the development branch of a repository: do a build, do application dependency scanning and static code analysis. When we actually merge to develop, deploy to a dev environment, where dev is an application environment that you can create in your config. So if you have different configurations for each application environment, like a different Kubernetes cluster endpoint or whatever environment-specific context there is: we call these things primitives. Keywords are a primitive, and application environments are a primitive.
B
There are things called stages; there's a bunch of stuff you can make here. On your application environment primitive you can define whatever fields you want, and then we'll inject that dev variable into your pipeline template for you to pass around and do what you need with it. If this was a real pipeline config, these libraries would probably have a bunch of input parameters specifying how we want these libraries to work.
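A sketch of the application_environments primitive: the fields are arbitrary, and JTE injects each environment as a variable the template and steps can pass around. The k8s_context field is hypothetical.

```groovy
// pipeline_config.groovy: environment-specific context per environment.
application_environments {
    dev {
        long_name   = "Development"
        k8s_context = "dev-cluster"
    }
    prod {
        long_name   = "Production"
        k8s_context = "prod-cluster"
    }
}

// In the template:  deploy_to dev
// The deploy_to step receives the environment object and can read
// dev.long_name, dev.k8s_context, and so on.
```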
B
The SonarQube library contributes the static code analysis step. If a team was using Fortify, instead of changing the entire Jenkinsfile or template, you would just swap out: I'm using Fortify instead.
B
So the way you choose to organize your pipeline configurations comes down to how much you can abstract out common configurations, and what's actually unique about each application that you need to specify in its own pipeline configuration file. The rest of these slides just walk through each thing that's happening in this file: libraries contribute the steps, application environments encapsulate environmental context, and keywords are a nice way to define variables so that you keep your templates easy to read.
B
A best practice that we have for ourselves is that all of these pipeline libraries have documentation.
B
We've got libraries for all the DevOps things, like ZAP for pen testing, plus dependency checking. In each of these libraries we use a tool called Antora. If we want to talk about documentation as code, that's another thing I like a lot: being able to define the documentation for these libraries alongside the code that contributes them. And then all the docs for JTE explain how library development works. There are a lot of other features we didn't talk about, but I'll stop rambling and see if there are any questions.
A
Go back to your PowerPoint presentation, where you had the... yeah, under the pipeline configuration, I wonder if we could grab that, Steve. I guess it's too late by this time.
A
Sorry, the deploy step, up there, Steve, because this is what we do: we have the ability to start giving microservices risk-level factors, because we're gathering a ton of data about what the impact of a single microservice might be on the entire cluster.
A
We can see if it's failed at deployment, if there have been rollbacks, if there have been issues; we can determine that. We have a domain structure, a domain-driven design structure, so we can say these are security microservices.
C
No, it would be when somebody checks in a change to GitHub; at that point we would query what type of...
C
A slow, a medium, and a fast pipeline configuration for the same microservice. From my understanding, you can do that and configure at the job level which one to execute. So it's just a matter of determining, prior to you guys, or somewhere in the middle of your stuff, that we're going to go off and run the slow one versus the fast one, and we can provide you the data to make that determination.
B
What you can do today is resolve build parameters in the pipeline configuration file. So you could be a little bit more clever, where you trigger a job with a certain set of parameters, and then inside this file you can access the env variable. You could say env.buildType and figure out which template to use, or something like that. And then in here, if you were to have named templates, for example, you define pipeline_template equals, and then you give it a template name.
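A sketch of that approach: the pipeline configuration file is Groovy, so it can read the env variable backing a build parameter and resolve a named template from it. PIPELINE_SPEED is a hypothetical parameter name.

```groovy
// pipeline_config.groovy: choose a named template from a build parameter.
pipeline_template = env.PIPELINE_SPEED ?: "medium"   // "slow" | "medium" | "fast"
```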
C
Well, even if you start simple: if you have 50 microservices, you know, even 15. If you have 15 microservices and you want to have three different versions, one for dev, QA, and prod, that's, whatever, 60 or 75, whatever the math ends up being, different combinations of where it could be at any point in time.
B
So when teams start having a job for "deploy to dev" and a different job to deploy to QA, I tell them that's not a pipeline. You just want to go click a button to do a deployment; that's not actually building a continuous pipeline that maps to your development practices.
C
So a quick question: instead of generating a Jenkinsfile, can you guys generate a YAML file?
C
The Tekton project is running into this templating issue. Tekton has built out a CI/CD pipeline that's all based on tasks, and they have a library of tasks that you can reuse. But what they're running into a problem with...
B
Yeah. From my perspective, Jenkins aside, you need three things. You need a set of implementations of steps: if you want to do static code analysis, you need a SonarQube step, a Checkmarx step, whatever; you need something that has steps. You need something that defines your workflow in tool-agnostic terms, not getting down to the specific tool level, just getting down to the business logic. And then you need that third artifact, which is the pipeline configuration, that says: okay, let's map the steps available to us to the business-logic workflow. People run into problems because they try to put all of that stuff in a single artifact when they're three separate things a lot of the time.
C
Yeah. So, for example, on the right side panel you have static code analysis, that function call. What is that, just a Groovy function call, then?
B
If you were to run this without loading any libraries, these methods just wouldn't be defined, right? And JTE can handle that in a certain way. But this is the Jenkinsfile; it's just that we edit some stuff at compile time so that these steps get implemented dynamically based upon what libraries you've loaded.
C
I have existing slow, medium, and fast pipelines, and I need to choose which one I want to take for a microservice. That's where querying external data sources for the decision-making on which one to take comes into play. The other methodology we see coming about, which doesn't really fit into Jenkins, is event-driven CI and CD.
C
So if I want to listen for a Docker push, I can have multiple things happen: for example, the listeners on a Docker push could be a deployment and then a dependency scan of the container.
C
So instead of hooking it in sequentially, I would just be listening for certain events, for payloads to get passed around. But that's down the road; nobody's really written a good event-based CI tool yet.
B
My favorite feature of JTE that's not quite event-driven is the ability to do aspect-oriented type stuff. For example, if I wanted to send an event to Splunk based on the pipeline, I don't want to put my Splunk logic in my SonarQube library, because that breaks the plug-and-play side of choosing your tools. So you can create steps and annotate them with before-step and after-step annotations, and then you don't have to explicitly specify to execute this step.
B
It will respond based upon the annotation you've given it. So there are a bunch of hooks here: I want to run some pipeline code at the beginning; I want to run code before and after a step; and then at the end of the pipeline. And then there's conditional execution.
B
So if you only want to run this step after the build step, and you can access the pipeline config, here you can say: only run if the step that's about to run is defined in the pipeline configuration, to make this more dynamic. But the idea is that, instead of hard-coding the relationship for a lot of monitoring tools or Slack notifications or stuff like that, you load a library that implements annotated steps to respond to pipeline events and do things accordingly.
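A sketch of such an annotated hook step, for example in a Splunk notifier library; the step body and the exact notification logic are hypothetical. Nothing in the template calls this step, and the optional closure makes it conditional.

```groovy
// splunk/steps/notify_build.groovy: fires before the build step only,
// driven by the annotation rather than an explicit call in the template.
@BeforeStep({ hookContext.step == "build" })
void call() {
    echo "about to run ${hookContext.library}'s ${hookContext.step} step"
    // e.g. send an event to Splunk here
}
```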
C
Nice. That's definitely what we're thinking.