From YouTube: Live 15-minute Demos from Jenkins World
Description
It's been a few months since our last Jenkins Online Meetup (JOM). Many of us have been busy getting ready for Jenkins World, where we were able to meet many of you in person...woot woot!
Special thanks to these experts for making time to do a second round of these demos:
Developing Pipeline Libraries Locally: 0:10
Delivery Pipelines with Jenkins: 24:07
Pimp My Blue Ocean: 50:18
Deliver Blue Ocean components at the speed of light: 1:03:30
Mozilla's Declarative + shared libraries setup: 1:21:18
Git Tips & Tricks: 1:47:05
Visual Pipeline Creation in Blue Ocean: 2:03:10
Okay, so I'm going to talk about development of pipelines on local machines. In the Jenkins community I work on some things; actually, I'm a member of the Jenkins core team, I work for CloudBees, and I participate in several projects, including the LibreCores project. It's a project where we work on open-source hosting for hardware projects, and we use Jenkins and Jenkins Pipeline there. So when I was working on the pipelines for this instance, I experienced some issues with the deployment of complex pipeline libraries, and I would like to share some of my experience: how to approach such development in a Jenkins project. I work on many plugins, so you have probably seen my plugins, like Custom Tools, Role Strategy, etc. I also maintain Remoting and integrate pull requests in the core. So if you have questions about these components, just reach out to me in the Jenkins IRC channel. Today I'm going to talk about Pipeline shared libraries.
They are very good if you want to start doing everything as code for your projects, because you can set up a kind of framework by using Pipeline shared libraries, and hence you can encapsulate the complexity of your build definitions. So if you want to build that kind of framework for a number of projects, using Pipeline can really save you time. But the problem is the development of such Pipeline solutions. Here's a short summary of the current state of Pipeline development tools.
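As a rough illustration of the kind of framework described above (the step and library names here are hypothetical, not from the demo), a shared library can expose one step that hides the whole build definition:

```groovy
// vars/buildApp.groovy inside a shared library (hypothetical step name)
def call(Map config = [:]) {
    node {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            // Encapsulate the build logic so consuming projects stay one-liners
            sh "${config.buildTool ?: 'mvn'} -B clean verify"
        }
    }
}
```

A consuming project's Jenkinsfile then shrinks to:

```groovy
@Library('my-shared-lib') _
buildApp(buildTool: 'mvn')
```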
So there is only one big green block: a library manager, so the ability to manage libraries, to download them, and so on. There is no dependency management, but I think that this module is more or less done. On the other hand, there is a lack of other tools; for example, there is not much IDE integration, and you cannot really debug stuff.
One straightforward approach is to just set up a Jenkins server, set up your pipeline libraries there, and just debug everything on that Jenkins instance. But the problem with that is, if you want to develop a pipeline, you usually run into issues with, for example, Script Security, or just with syntax mistakes. So effectively you end up doing many cycles: modifying your code, then committing, and then running the test. When the test fails, you just start from the beginning, 25 commits again, and it takes much time.
The approach I would like to achieve is to avoid spending much time on these iterations and to just be able to do everything locally. In order to achieve that, I have set up an instance, and here is a kind of schema for this instance. I use configuration as code not only for job definitions but also for infrastructure as code.
So let's take a look at the Dockerfile. Effectively I start from jenkins/jenkins, the standard image provided by the Jenkins project. I apply several hacks to configure the update center, because I need an experimental version of my File System SCM plugin; in the next version it will be available out of the box. After that I just install the plugins, set up the environment, and get a running instance. How do I configure it? In Jenkins there is an opportunity to set up Groovy hook scripts and define everything programmatically.
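A minimal sketch of such a Dockerfile, assuming the stock jenkins/jenkins image and its bundled install-plugins.sh helper as they shipped around the time of the talk (plugin names and the update-center URL are illustrative):

```dockerfile
FROM jenkins/jenkins:lts

# Hack: point plugin resolution at the experimental update center so the
# experimental File System SCM build can be found (URL illustrative)
ENV JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental

# Pre-install plugins at image build time, not at startup
RUN install-plugins.sh filesystem_scm workflow-cps-global-lib role-strategy

# Groovy hook scripts are picked up automatically on startup
COPY init.groovy.d/ /usr/share/jenkins/ref/init.groovy.d/
```

Baking the plugins into the image is what makes the cached rebuilds in the demo fast: only the first build pays the download cost.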
If you go to the Jenkins website, you may find only one page in the documentation for these Groovy scripts, but actually it's a kind of powerful engine which allows configuring almost everything in the Jenkins runtime. The idea is that once Jenkins starts, it first loads all the plugins and configurations and then invokes the triggers, and it goes through all the Groovy scripts and invokes them one by one. So, for example, there is a pretty complex Groovy script which initializes my authentication engine. What does it do? I use HudsonPrivateSecurityRealm and I register several users.
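A hedged sketch of what such an init.groovy.d hook can look like; the account names and the DEMO_MODE switch are illustrative, not the demo's exact code:

```groovy
// init.groovy.d/10-security.groovy: executed automatically at Jenkins startup
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm

def jenkins = Jenkins.instance

// false = users may not sign themselves up
def realm = new HudsonPrivateSecurityRealm(false)
realm.createAccount('user', 'user')

// The admin account exists only in demo mode, never in production
if (System.getenv('DEMO_MODE') == 'true') {
    realm.createAccount('admin', 'admin')
}

jenkins.setSecurityRealm(realm)
jenkins.save()
```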
For example, there is a user called "user" and also a user called "admin". But you may notice that there is an "if" condition: for demo purposes I use admin, but on production there is no admin user in the system at all. There are only users who have read and write permissions on jobs, but no administer permissions. And then, in this script, I configure the Role Strategy plugin in order to set up security. This configuration may be quite complex, but effectively it just uses the plugin's Java API in Jenkins, and I configure it.
Okay, so it's a kind of philosophical question. When I say configuration as code, I mean that it is a combination of infrastructure as code, so the system configuration as code, and also the configuration of jobs, which is called Pipeline as code in the Jenkins project. So I agree that the configuration-as-code term may be quite confusing, but yeah, I use it just as a combination of infrastructure as code and configuration as code. All right, thank you, okay!
So here's my instance, and you may see that there are other files; for example, I configured Docker in the same way, using the Yet Another Docker plugin, and here is its configuration. You may see that this code is almost declarative, but actually it's just features of the Groovy language, and effectively it's fully valid Groovy code constructed from Java APIs. Moreover, since I use IntelliJ IDEA, I can define a virtual pom.xml with dependencies on all the plugins I use, and I get a kind of virtual Jenkins plugin which I can verify.
So I can verify that all this code is syntactically valid, I can run static analysis, and I can even debug it; if you want, I can show it later. And yes, it's a kind of wild scripting. If we talk about pipelines, here is an example. I have a kind of local pipeline development library, and this Groovy script just checks the existence of a particular directory on the Docker image, then creates a library definition using the File System SCM plugin, from this location, and then we just enable this library by default. After that, we go through additional directories and initialize the extra libraries I may want to use in my project. So, for example, if I develop a project with ten pipeline libraries in parallel, I can just get snapshots of all of them in my environment, and I do not need to commit to all ten at once if I modify something. After that, I just start several reference jobs. So that's how it works under the hood.
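In spirit, that library auto-registration hook does something like the following; the exact retriever and File System SCM constructor signatures vary between plugin versions, so treat this as a sketch of the approach rather than working code:

```groovy
// init.groovy.d/20-libraries.groovy: register local folders as global libraries
import org.jenkinsci.plugins.workflow.libs.GlobalLibraries
import org.jenkinsci.plugins.workflow.libs.LibraryConfiguration

def libsDir = new File('/var/jenkins_home/pipeline-libs')  // path illustrative
def configs = []
if (libsDir.directory) {
    libsDir.eachDir { dir ->
        // In the real setup the retriever wraps File System SCM pointed at dir;
        // its construction is omitted here because the API differs per version
        def retriever = null  // placeholder for an SCM-based retriever
        def lib = new LibraryConfiguration(dir.name, retriever)
        lib.defaultVersion = 'master'
        lib.implicit = true   // enabled by default, no @Library annotation needed
        configs << lib
    }
}
GlobalLibraries.get().libraries = configs
```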
Now the demo. Here is the configuration of plugins to be installed. Since I use Docker, I can just run the build of the image. Building this image for the first time will take a lot of time for sure, but since everything is cached, I just launch it and that's it. The thing is that I install the plugins once and only once, when I build the image, so when I run the image I do not need to configure anything else. So here's my command line. What do I do here?
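The command line amounts to building the image once and then running it with the local library directories mounted in; the image name and paths below are placeholders, not the demo's exact values:

```
docker build -t jenkins-pipeline-dev .
docker run --rm -p 8080:8080 \
    -v "$PWD/pipeline-libs:/var/jenkins_home/pipeline-libs" \
    jenkins-pipeline-dev
```

The bind mount is what lets edits to the library on the host show up in the next Jenkins build without any commit.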
So I have clicked the start command, and now, if everything is fine... yes, Jenkins starts initialization. First it passes through the common initialization steps, and then it reaches the stage where it needs to invoke the Groovy hook scripts, which takes a while. Now, let's see. Okay, yeah, you may see that there are custom log messages saying that something has been loaded, and you may see that, for example, there is the initialization of the development folder I have shown you: its pipeline library and also its two additional libraries.
It was able to discover them on my file system, and the instance has been initialized automatically. There are some tweaks; for example, I adjusted security on this instance, and I created scripts just to enforce English in my setup and to set up tools. On this instance I am going to demo the Jenkins infra pipeline library, so effectively, in Jenkins, you can build plugins using a single line.
I'm going to log in as admin just to be able to show you something. So here's my instance; everything has been initialized automatically by configuration as code. There are several folders; today I'm going to show you only the development folder, and there is the pipeline library folder I've mentioned. There are several jobs initialized by the script. If we go over to the configuration page, we may see that there is a configuration of the pipeline library which uses File System SCM, with master as the default version.
Let me show you the configuration of this job. It just invokes the buildPlugin step. It's a script in Jenkins which performs all the build steps, and it's the thing we use for Jenkins on Jenkins. If we go inside, it's a pretty complex script. So here's buildPlugin; effectively it's not even a method, it's a global variable. It performs builds of plugins on several platforms, like Linux and Windows, on particular JDK versions, and it has additional options which enable, for example, FindBugs, Checkstyle, etc. Everything is configured in the script, but as a plugin developer I just need to put a one-line Jenkinsfile in my repository, and everything else happens automatically. Here is this Jenkinsfile. You may see that I take it locally, not from the repository, just because I don't have a Windows label on my instance, but everything else happens as in the real build. So we can just start the build.
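That one-liner is the real convention for Jenkins plugin repositories built with the Jenkins infra pipeline library: the whole Jenkinsfile is a single call, optionally with a few parameters. The parameter names below match the library roughly as it was at the time of the talk; check the library's README for the current form:

```groovy
// Jenkinsfile: the entire build definition for a Jenkins plugin repository
buildPlugin()

// or, with explicit options:
// buildPlugin(platforms: ['linux', 'windows'], jdkVersions: [8])
```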
It's working; it starts the execution of the project. It has checked out the pipeline library from File System SCM, and now it executes it. It will take a while to complete the build; it takes several minutes usually. But the advantage here is that once we perform the checkout, we already have the cached pipeline library, so we can go to the pipeline library definition and, for example, break something.
So let's assume we are developing the pipeline, and, well, let's just add something like "exit 1" here. I don't even care whether this syntax is valid or not, because, yeah, the pipeline should just fail. Okay, I click Build, and here's the log of the second run. What does it say? Yeah, "No such method: exit". So I just modified the thing inside this container. This Jenkins runs in Docker, and I use Docker for Mac, so effectively it's even located on a remote virtual machine, but for the developer it all happens transparently.
Moreover, I can do it not only at the library level but also at the job level. I have another job called apache-httpclient-api-plugin; actually it's one of the new plugins, and in this plugin I have a Jenkinsfile which is located locally. In this job's configuration, the job is also retrieved via File System SCM, so I check out not only the pipeline library but also the Jenkinsfile itself, and if I launch it here, it also starts executing, and hopefully it fails. Yeah.
So let's go back to our pipeline library. This build has successfully passed; we can see the test results, you can see FindBugs. Everything happens from the pipeline library, but, as I said, now we can develop the things locally. And if I need to define multiple libraries, I can also do it via the system configuration, because the only thing I need in my configuration-as-code setup is to specify additional sources.
So that's about development. Okay, I just demoed this stuff. If we talk about testing, I would definitely recommend frameworks like Pipeline Unit. Unfortunately, we do not have a demo of that today, but it's something I use in my production libraries. It can be well combined with this approach, because, let's take this pipeline library: unfortunately, we don't have tests for it so far, but hopefully I will create a pull request for that soon. So we have a src folder.
But if we wanted to add tests with Pipeline Unit, we could add the tests there, and then we would have a single repository with the tests and the plugin, and then, using the File System SCM plugin, we would be able to launch the tests directly from Jenkins, also on the local instance. Great, and with Pipeline Unit we can combine that, nice. Okay, okay, are there any other questions? All right, I guess not.
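For reference, a JenkinsPipelineUnit test of a shared-library step typically looks like this; the class and step names are illustrative, and the test needs the JenkinsPipelineUnit library on the classpath:

```groovy
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class BuildAppTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub out pipeline steps the script calls so no Jenkins is needed
        helper.registerAllowedMethod('sh', [String]) { cmd -> println "sh: $cmd" }
    }

    @Test
    void runsTheBuild() {
        def script = loadScript('vars/buildApp.groovy')  // hypothetical step
        script.call(buildTool: 'mvn')
        printCallStack()  // inspect the recorded step invocations
    }
}
```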
So this is just a proof-of-concept approach. You can find all the demos here: there is a demo on Docker Hub which provides a runnable instance that you can start and then modify if you're interested, and there is a repository on GitHub, so you can just fork it and then modify it. If you're interested in a more advanced demo, you can go, for example, to the LibreCores CI project on GitHub. It's a kind of work-in-progress project which also uses the same configuration-as-code approach.
You may notice that there are some adjustments in the configuration scripts: instead of putting all the scripts at the top level, I have a Groovy bootstrap, effectively a thing which implements a Groovy class loader and an error handler. So there I can use Groovy classes and other advanced things I want on my instance. It simplifies the scripts a lot. For example, earlier I presented the authentication matrix initialization, but here I just use the Ownership plugin's security helpers in order to define the teams, so it simplifies the configuration. And for particular things, like Docker, it becomes even more fancy. For example, I moved the Docker cloud template itself into a library, and then I just inherit something from a template, define the extra options you need, and get multiple configurations for images in a short script. And it's still fully valid code, which you can even debug from your IDE by connecting to the Jenkins instance if you need.
Thank you very much. So, my name is Michael Hüttermann, that's me; I'm a DevOps consultant, and during the next 15 minutes I'd like to give you some appetizers on how to set up comprehensive, holistic delivery pipelines. Right, so we start from the very beginning: commit and push some changes, and expect those changes to be on production systems later on, in the cloud. So actually this overview summarizes the different steps I just mentioned.
All of this is underpinned by Jenkins. We have a lot of different tools, and Jenkins is actually the foundation which integrates this overall ecosystem, integrating all the different tools, for example, to inspect the code, to inspect binaries, to integrate with binary repository managers, and so on. So those are the tools, and, of course, above all, Jenkins. So, back to the concepts: the idea is to start with the continuous build, to give some quick feedback. That's the sub-pipeline at the very top.
It just checks out code and runs the unit tests. The next one is a more holistic one, with the green background: it continuously delivers specific dev versions. Major parts are, for example, deriving a release version from our Maven snapshot version, or integrating and provisioning some target environments, and all that stuff. You'll see a lot of different tools are integrated; actually, those pipelines are derived from real-world success stories. Then, after we have created defined versions, we can share them. "We" in the sense of the users: developers or domain experts.
We can cherry-pick available versions and promote those versions to release candidates, RCs, right? And afterwards, you can also decide to cherry-pick and promote RCs to GA, general availability versions. Those versions are supposed to be deployed and promoted to our production system, which is located in the cloud. So this is actually a very quick summary of our ecosystem. Now, let's move this away and go to the Jenkins dashboard.
The motivation: the main change we want to promote and stage through to production is actually this one, this change right here. So we'd like to change this greeting, for example, and because we are very test-driven developers, we also aligned the test cases. And because it was a very long day, right, here in Germany it's already almost night, we are not so concentrated.
We tried to set up and rely on this test case, and we also have a servlet, and we made some changes in the servlet, and now we think that this set of changes is sufficient to bring this change set to production. So, because it's a long day, we ignored the best practice of testing locally, right? And Jenkins, as our handy and very sophisticated automation engine, is our single point of truth: it traverses a couple of quality gates and detects any flaws and test failures.
Here is an example of our business application. So now let's go back to the web browser. First of all, let's quickly look at our business application, which is hosted in the cloud. This one still provides the old entry page, which is supposed to be replaced by the new landing page. So, having said that, we can directly go back to our dashboard, and here you see that we have obviously derived a couple of different delivery pipelines from our slide.
So here you can see the dev version in the middle, and this is the continuous build on the top, and a couple of other convenience builds listed below. We can now see that the project, on this build, detected the test failure, obviously. So something happened; this project is just set up to deliver fast feedback.
Oh, again? Oh yeah, do you want me to start again from the very beginning? No? So I hope it's back up. Yes, great. Thank you, okay, good point. So obviously we have one test failure here in our test harness, and we can zoom in and actually check what happened. We open Blue Ocean.
So again, here we usually have a GitHub hook which contacts our Jenkins installation to notify it that a change occurred. But let's see: we can also trigger the build manually and quickly switch to Blue Ocean, and see that now it's obviously passing; the tests pass now. So let's remember what I said at the very beginning: you should definitely, if you have not done it until now, give Blue Ocean, the whole set of plugins, a try. Now, let's navigate back to the Jenkins dashboard, and we see that we have glued together the pipelines: the continuous builds and also the pipeline for delivering dev versions. So we can move into this one, and we see that this pipeline is already on the run, and you see that it was not a lie on the slide at the very beginning: all those steps are really processed.
So we do some database migrations and integration tests, using virtualization, Chef, Puppet, and many more things, and obviously we also have a quality gate which inspects the source code. It detects any undesired flaws, and this is the case here, so it's a good idea now to quickly move to the dedicated application, SonarQube. Let me zoom in a little bit, so you can get even more information. You see, you get the point: you can integrate it into your ecosystem, and you always have all the information at your fingertips. And SonarQube, the dedicated tool for inspecting source code, just shows and delivers even more information about the design flaws which were detected. So we can zoom in and navigate to the class, and now we see that obviously there are some very bad practices according to the defined set of rules.
And we can now again trigger the pipeline, which is comprised of different sub-pipelines. It takes a few seconds, or more or less one minute, for this pipeline to run, so we can quickly go into the underlying definition. This is actually the pipeline where we have defined the different stages, a Scripted Pipeline in this case, doing some setup work. You know, I just want to give you some teasers and appetizers: check out the code, do some setup stuff.
Then the tests; you know, that pretty much maps to the stages which were described on the slide at the very beginning, already doing some integration tests. And that's also just good practice: we just want to build and package the WAR file, in this case only once, so it will be reused. Then the database migrations, and the SonarQube quality gate; you saw that already.
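A condensed Scripted Pipeline in that shape might look as follows; the tool names and stage details are assumptions, not the exact demo code:

```groovy
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build & package') {
        // Build the WAR exactly once; later stages reuse this artifact
        sh 'mvn -B clean package'
        stash name: 'war', includes: 'target/*.war'
    }
    stage('Database migration') {
        sh './migrate-db.sh'   // placeholder migration step
    }
    stage('Quality gate') {
        // Requires the SonarQube Scanner plugin and a configured server
        withSonarQubeEnv('sonar') {
            sh 'mvn -B sonar:sonar'
        }
    }
    stage('Docker image') {
        sh "docker build -t myorg/app:${env.BUILD_NUMBER} ."
    }
}
```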
Then the Docker image, and the Docker image is then pushed to Artifactory, and all this is one unit; I'll give you the link at the very end. So that's actually the underlying Groovy-based pipeline for delivering dev versions. And what we see now is... obviously, I hope I do not forget again to change the size a little bit; I'm so thrilled about this demo that I always forget to maximize the size.
So now we see that obviously the last run passed successfully through all the different quality gates, also including the promotion of the binaries, our WAR file and also the Docker image, to Artifactory, or use Nexus if you like. So much is integrated; Jenkins is really the Swiss Army knife which integrates the complete ecosystem and a lot of tools.
Now, let's release our version. We do that directly in Blue Ocean: we trigger it and see a list of available versions. So, as you remember, we have actually released version 1.0.0, which is now placed in Artifactory, in our case in the cloud, and this entry is listed here, and we can now promote this one to be an RC. What does that mean? We just add some more context information, and we promote the binaries of our WAR file and Docker image to dedicated logical repositories inside Artifactory. So you see, we have some promotion happening: adding context information and promoting the Docker image and also the WAR file. And that's really important: we always process and operate on the packaged binaries which were packaged at the very beginning. So that's done, and we can now go back to the Jenkins dashboard.
Back at the dashboard, let's now try to promote the RC there to be a GA, after some more testing and functional testing, and providing this version maybe to some dedicated test environments. So you see, we now want to actually promote those binaries from Artifactory. For example, this one: we have a Tomcat 8 based image, and the GA build is located here in Bintray. You can also use, of course, for example, Amazon AWS to host your Docker images.
It's not some sort of 2014 anymore; nowadays it's about more than moving around some random Docker containers. We really want to set up services and stacks and all that stuff, and that's what is offered and provided by Amazon, for example, and also by Oracle Cloud. So let's open Blue Ocean and trigger it again.
We want to cherry-pick and promote exactly this version, and you can go into the pipeline, and you see that those stages actually use the APIs of the underlying tools to manage and promote those Docker images to the cloud, to stop the current deployment, and all that sophisticated stuff. So that's it.
Good question. The minimum number would be one, I think. So what, from my long experience, is really necessary is a very feature-rich automation engine, in our case Jenkins, so I would expect at least having one tool, one to many. Particularly in very large projects you will already have a lot of tools you want to integrate, and also, for example, the deployment to the cloud or to a private cloud.
Now the promotion is active, and the Docker image was pulled from Bintray and pushed to Oracle Cloud, and the container and the service were created. A couple of minutes ago, before the demo, I upgraded my Jenkins installation to the latest one, and I'm really happy that it's so stable: again, nothing happened, and now I'm crossing my fingers that the last final part succeeds as well.
More or less; it took a few minutes, because Oleg went a little faster, so that was good. Though I had another question, actually: are many of these tools, such as Artifactory and SonarQube, useful outside of the Java ecosystem? I mean, you're doing Java here, but if you're using one of these tools, are they useful elsewhere?
Absolutely, absolutely. In our case the scenario is only based on a Java EE application, which is shipped via a Docker container, but usually it's not just plain Java; it's more a heterogeneous zoo of different scripting tools, languages, and platforms. And what you mentioned was a very good point: for Sonar and Artifactory, they are really able to take care of all the different artifact types. So not only Java and Docker images, where Artifactory is good to serve as a Docker registry, but also to manage your RPMs, for example, or other packages, and all that stuff. So that's really important; it was a very, very good point that you should take care of all the different binaries, which depend on each other, to bring a release to production in a functionally and technically consistent degree of maturity. Cool.
All right, thank you very much. That was really interesting, to see a full pipeline from end to end like that. Next up we'll have Thorsten Scherler doing a presentation called Pimp My Blue Ocean, but before he starts I'm going to just remind people that we are taking questions on the IRC channel, the Jenkins IRC channel. All right, take it away.

All right, thanks. So, can everybody hear me? I hope so. Tyler, scream if not. "Yeah, you're doing good, I can hear you." So, we're talking about Pimp My Blue Ocean. About my person: I'm one of the original Blue Ocean developers, now working within CloudBees in another team, but we're actually using Blue Ocean to deliver additional functionality and features. So, what we're going to talk about you can actually review and do by yourself, right now, in front of the computer.
If you want, you can go to my repository on GitHub; I created JV17bOC. With that you're actually getting a basic setup of a Jenkins plugin, completely functional against the current version of Blue Ocean. Further, I added a Dockerfile, so if you do not want to run the example with npm and stuff like that on your normal box, you can do it via Docker; and for community and testing reasons we of course added a Jenkinsfile, so you have here the way you actually can do that. I will not explain that in detail; rather, let's dive into the presentation. So what we're going to do is actually create a custom component, and we will use our custom CSS. Well, yeah, actually, right now, if you look carefully at the URL in my browser, it's actually a React Storybook, so I am on the right-hand side.
As you all know, Jenkins is based around extension points. So now let's dive into the typical plugin anatomy of our plugin, or of normal Jenkins plugins, on the frontend. Basically, what's very important are two files: the jenkins-js-extension.yaml, and the custom component that we are creating. The index.jelly is the more traditional file, let's say, for all of classic Jenkins. And this jenkins-js-extension.yaml is very important, so let's have a look at how that actually looks.
So what are we doing with this file? We are actually telling Blue Ocean to use a different extension than the one that is configured as the default. So how did we actually find that out, or how do you see where or which extensions exist? Sometimes it's not very well documented, let's say, but two good places are there. First of all, in the dashboard plugin.
If you have a look at the jenkins-js-extension.yaml there, you see our default extensions, and it's always the same pattern: you see the extension point, with a unique ID to identify it, and then you actually store your component within the root, for example. Here you have an example of a different path; it doesn't have to be at the same level as the Jenkins logo, but it should be relatively accessible.
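The declaration itself is a short YAML file; the extension-point ID below is illustrative, and the real IDs can be read out of blueocean-dashboard's own jenkins-js-extension.yaml:

```yaml
# jenkins-js-extension.yaml in the plugin's root
extensions:
  - component: components/CustomLogo   # path to our React component
    extensionPoint: jenkins.logo.top   # ID of the point to plug into (illustrative)
```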
So let's see, for example, how that is actually implemented. If you go to the core plugin, there's a component called the content page header, and here we actually define our extension. What we're saying is: give me the extension point for the Jenkins global header, and I will pass, as the default implementation, the blue logo with the "home" props, which actually results in something like this. We have here the extension point, we have here the href for the Blue Ocean logo, and you can see this is a big SVG, where the extension point, or rather the logo, is passed through. So, back to the presentation. What we want to do is overwrite our logo, the Jenkins header logo, and our final result will look something like that: "I love my Jenkins". So how do we actually implement that? If we go to the corresponding component, we can see here that it is a traditional React component. You can see that I actually created a class, and I created an icon, or rather used an icon, and then I used the children. So if I go ahead and just deploy that on my running Jenkins, it will become something like that. For example, if I say I don't want to render this SVG at all, I can actually go ahead, get rid of that, and save it.
So now I actually have to tell Jenkins, because if I refresh here, nothing will happen, since I haven't deployed it yet. So let's do that. I actually use, for every command, an alias with two letters, so "nb" would be "npm run bundle". So if I do "nb" here, what I'm doing is actually telling Jenkins to deploy my new bundle. So if I now go here and refresh the whole thing, you will see that the Jenkins logo is gone. All right, so you can see.
Great. So here, if I now go and refresh again, I get back my Jenkins logo. Some points of heads-up: you always have to have a default export for your component. The extension YAML always needs a default implementation; otherwise it will actually complain and it will not work. So let's go back to our presentation.
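Put together, a minimal replacement component could look like this; it is a sketch against the Blue Ocean JavaScript toolchain (React plus the @jenkins-cd/design-language package), and the class name and icon are made up:

```javascript
// components/CustomLogo.jsx
import React, { Component } from 'react';
import { Icon } from '@jenkins-cd/design-language';

// Must be the default export: the extension machinery requires a
// default implementation, otherwise it complains and nothing renders.
export default class CustomLogo extends Component {
    render() {
        return (
            <div className="custom-logo">
                <Icon icon="ImageTimelapse" />
                <span>I love my Jenkins</span>
            </div>
        );
    }
}
```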
This is what we're seeing right now. Then the second thing is, as you may have noticed, we have our default blue right here, but if you compare it with ours here, we actually have a different kind of blue; we made it a little bit darker, let's say. So how did we do that? If you see here, we have, in the classic structure of our component, a main .less file, and there we have an extension file. So if we use this extension.less file, Jenkins will actually know that we want to extend the CSS that's given into it. If you see it down here, and I'm not sure whether you can see that well, so now I can actually make it bigger: you can see here it's actually picked up in our bundling script. You see here the less, and you see the processing completed, and here we're actually using this file, and we're actually compiling it to CSS.
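The extension.less file itself is ordinary LESS; here is a sketch of the rules discussed in this demo, with the selector and values illustrative:

```less
// extension.less: picked up by the bundling script and compiled into
// the plugin's CSS
.custom-logo {
    border-bottom: 2px solid red;  // the red underline under the logo
    background-color: #474747;     // delete this line to keep the default
                                   // header background
}
```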
D
So the interesting part is when we compare it with the result. This is my logo, and we said we have a border-bottom of two pixels — that would be the red one here — and we have a background color. Say, for example, I don't want to have that background color, because I only added it to mimic what React Storybook shows me — we have the same example there, but that would be going into detail of the second part of my talk. So, to make it happen
D
that I have this color, I actually used this background property. So if I get rid of this color here, do the bundling again and go back to my Jenkins — you can see here there are two colors, this one is darker than this one — and if we get rid of the background color here and refresh it, you will see that this is now without any colors.
D
So what I'm showing here is the basic workflow with CSS — or better said Less, which we then compile to the CSS that I want. I will go back to that version. So now, what we're talking about is: how did I actually arrive at having my different color here in the header? What we're doing here is we override the basic default header color. So what are we doing here?
D
If you inspect this page, you see that this color here is the same as this one. So if I get rid of that, we have our original blue back. This is actually a trick, or a hack, that relies on deep knowledge of Blue Ocean, because the problematic part of our solution here is that we don't define a proper variable — for example, like we defined the primary color here, to say which is our primary color — that would then be picked up by Blue Ocean.
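The cleaner approach hinted at — defining a variable instead of overriding the compiled CSS — would look roughly like this in Less; the variable name and value are assumptions, not the actual Blue Ocean theme API:

```less
// Define the theme color once...
@primary-color: #1d3c59;  // a darker blue (illustrative value)

// ...and reuse it wherever the header color is needed, instead of
// patching the generated CSS with a hard-coded value afterwards.
.header {
    background-color: @primary-color;
}
```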
D
However, there is a ticket open, JENKINS-44466, which describes this problem and how to add theme support so you can extend the styling more easily. For me, for example, it would be a dream to have an extension point for CSS, and then I could extend that with my variants of the colors that I wanted for the different extension points. And I think now would be the time — yeah, are there any questions?
A
Thank you — you're welcome. And so now you're gonna move on to your second thing; we want to introduce that, or — yeah. So this was Pimp My Blue Ocean, and now we're gonna move on to a more staid and regularly titled presentation: Delivering Blue Ocean Components at the Speed of Light.
D
So why would I actually want to speed up my development cycle even more? That leads to the problem that you normally have in a project: you're working with a project manager, a product manager, a designer, a UX guy, and whoever else wants to be informed about the progress of the product. So the easiest way — if you don't use dogfooding, or you use dogfooding but you don't actually deploy every PR to it — is to show people your work with an independent version of it.
D
So what we're using in our team right now is React Storybook. This actually helps us to deliver components much quicker, and to get validation of those components at a really, really early stage with the PM and with the UX or designer guy — because you can take a screenshot, and my designer screamed every time something was a pixel off. So that's the good thing: before it came to the product, you have it fixed. So how do we actually do that, or what
D
do we use? So what actually is Storybook? It allows you to browse the component library, view the different states of each component, and interactively develop and test. So here's a screenshot of the one that you're actually seeing right now. As you can see, what I'm doing here is I included some specifications. Here is my rendered view — it's our presentation, the one from earlier — and I actually validated that what I can see is what I intended.
D
What we're also using Storybook for is to include acceptance criteria. This one here is nicer — it's an animated gif, to show you that you can add the component in different states, with different data passed to it. So you can very easily simulate different situations that you might find, or that somebody might find, in your component.
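A story that shows a component in different states with different data passed to it might look like the following sketch; the component and data are illustrative, and the `storiesOf` API is the one React Storybook used at the time:

```jsx
import React from 'react';
import { storiesOf } from '@storybook/react';
import Logo from '../components/Logo';

// One story per state, so the PM and UX/designer can click through
// the variants (and screenshot them) without a running Jenkins.
storiesOf('Logo', module)
    .add('default', () => <Logo />)
    .add('with child text', () => <Logo>I love my Jenkins</Logo>)
    .add('failure state', () => <Logo status="failure" />);
```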
D
The other thing is, there are some nice shortcuts. For example, when I hit Ctrl+Shift+F, I actually get the full menu of it here, and I can go on and, for example, navigate through my stories. Here you can actually see I have two specifications and the rendered view — we will come to that. Now, let's go back to our presentation.
D
What I was showing you before — you've seen the green dots, the blue dots — they're based on the Storybook specifications add-on. That's a really nice add-on for Storybook which allows you to attach a unit test to the story of a component. For someone who maybe doesn't immediately see the advantage of it: you develop the component, and within the preview of that component you can actually add the acceptance tests.
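The specifications add-on (storybook-addon-specifications) lets the acceptance test live right next to the story; a sketch, with the component and expectations as assumptions:

```jsx
import React from 'react';
import { storiesOf } from '@storybook/react';
import { specs, describe, it } from 'storybook-addon-specifications';
import { mount } from 'enzyme';
import expect from 'expect';
import Logo from '../components/Logo';

storiesOf('Logo', module).add('with acceptance test', () => {
    const story = <Logo>hello</Logo>;
    // The spec runs inside the story's preview, showing up as
    // green/red dots next to the rendered component.
    specs(() => describe('Logo', () => {
        it('renders exactly one logo and one greeting', () => {
            const output = mount(story);
            expect(output.find('img').length).toEqual(1);
            expect(output.text()).toContain('hello');
        });
    }));
    return story;
});
```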
D
This allows you to simulate all the acceptance criteria that somebody can give you, without you being in the product. What we are not doing here is checking whether it actually works inside Jenkins — that would be the final integration test, let's say. So the other nice thing is that you can not only run these tests here — let's go to the hello world page — but you can actually
D
C
D
so here we go. For example, you see that my two tests right now are in the same state. For example, with these two small ones I said the expected "hello" count should be one, and the expected logo count should be one as well. So, for example, we have here a logo and we actually pass a child to it, and so it renders "I love my Jenkins". So that's correct, for example, and for Jenkins —
C
D
I can save it, and now we have my Jenkins logo here. It's not the SVG that we saw, that's obvious, but you get more or less a feeling now for how it would actually look in the real Jenkins. And now, for example, let's say we don't have this logo in it — what will happen if I save it here? As you can see, I now have my specification broken. So what happens
D
actually if I run, for example, npm test in my command line — "nt" for npm run test? It would tell us that we are actually in a failed state, and that would break our build right now. So, you can see here, I have my output, and it says something like: hello unit test, it's a logo component, expected 0 to be 1. And here is actually the link to the line where it's going wrong — the hello spec, right?
D
It would be exactly this line, 25, and now I can decide whether this is an error that I need to fix, or whether I need to fix the test or the code. So here you see it's now green again, and as you can see, the only thing I'm always doing is hitting Ctrl+S, and it instantly refreshes my browser. That is because React Storybook is based on a webpack dev server, which is then used to notify the component to refresh.
D
You can actually see that if you look here at the network traffic: you can see that there is frequently a ping to our server. It's something similar to the SSE plugin in Jenkins, where we're pinging our Blue Ocean — or Blue Ocean gets pinged from the back end — to refresh, for example, parts of the front end and stuff like that. So yeah, when I now run the same "nt" here in my terminal,
D
it should now actually result in a build success — and you can see it had failed before, and now it passes, exactly as we expected it should. So the second thing is, if I now go ahead and, for example, extend my logo here — for example, if I create something like —
D
right, here you see this component, and this we are using not only for running our tests but also to get an evaluation of our code coverage — that is, the ratio of how much of our source code is actually covered by tests. And there is some nice configuration that you can use in Jest to change the output and the different reporters; we are actually using a JUnit reporter in this case, and yeah.
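A Jest setup along the lines described — coverage plus a JUnit-style reporter that Jenkins can consume — might look like this package.json fragment; the reporter package and options are assumptions:

```json
{
  "scripts": {
    "test": "jest --coverage"
  },
  "jest": {
    "coverageReporters": ["text", "lcov"],
    "reporters": ["default", "jest-junit"]
  }
}
```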
D
When we run the whole Maven test — for example, we're using mvn test — it's the same as, like I said, "nt", npm test, which would execute the same thing. So the only difference between the one Maven runs and the one we're running with npm test is what we're adding here later on: we're using the js-builder from Jenkins as an infrastructure, so that we don't have to implement all of this ourselves — it's already done in js-builder.
D
Maven executes Node, right — Maven actually knows about it, because it will start a profile with the frontend plugin and then do all the compilation with Node and npm. So it actually installs Node and npm for you, and uses them to create the different node modules and stuff like that.
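The Maven-driven Node/npm install described here is typically done with the frontend-maven-plugin; a sketch of the relevant pom.xml fragment, where the version numbers are assumptions:

```xml
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <id>install node and npm</id>
      <goals><goal>install-node-and-npm</goal></goals>
      <configuration>
        <nodeVersion>v6.11.0</nodeVersion>
      </configuration>
    </execution>
    <execution>
      <id>npm install</id>
      <goals><goal>npm</goal></goals>
      <!-- the goal's default argument is "install" -->
    </execution>
  </executions>
</plugin>
```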
So, coming back to our presentation — do I actually have more time? Then I would spend it explaining the presentation itself a little bit.
D
So actually, when you came in, you maybe saw these are my slides — we are actually in one of them right now. If you look, here are my slides. What we're doing here is we're creating an object, and then I add a complete array, and with this array I just use one component and say I want to render it with a slider. The slider is actually the overall component that you see here.
D
If you look here, this is actually a drop-down box. What I did is use the slider component to activate some nice things. You couldn't see that right now, because there's no video, but what we actually added was some key handlers in our presentation, so with the right arrow or Enter I can go to the next slide, and I can use the left arrow to go to my previous slide.
D
So what I'm doing is: I'm getting all the slides from the properties, then I create here — this is actually a JDL component, JDL being the Jenkins Design Language — the drop-down box that I'm using here, and I pass the slides that I want as the options, and here I actually render my current slide. So what this component simply does is keep in its state which slide we're currently on.
D
Exactly — this is all Blue Ocean; everything is copied and pasted from Blue Ocean and adapted.
D
A
D
And in a second version, for example, I may actually use some real-life REST endpoints to render it — for example, a weather plugin. You can actually include that in your Blue Ocean based developments, because in the end it just extends an extension point — which, as you say, has always been the great strength of Jenkins until now.
D
Normally they will use every extension point — or rather, we call every extension point implementation that they find. If you have an array of extension point implementations, all of them will be called, but that's not always what you want. So there are things that are known to be problematic, but we are on the verge of making it very, very easy to add your Node-based development on top of Jenkins in no time at all. Excellent.
D
A
E
Yeah, so we've been using pipelines since December of 2016, but we've been using declarative since after that, and I kind of want to share the challenges of our pre-pipeline configuration with Jenkins and how we ran our jobs, and then demonstrate how the declarative version of pipeline — the pipeline model definition — and the shared libraries got us to a much better place, at least for our team. So previously — I don't have any jobs here
E
previous to shared pipeline, or even previous to pipeline — but I will tell you that one of the key problems we were trying to solve was that a lot of our config was in the Jenkins jobs themselves, specifically across many Jenkins instances, at least three, and access to those was limited: mostly to our teams, sometimes to some developers, sometimes to some operations folks.
E
We also kept preaching to our developers and our ops people that config is code. Ops knew that well; most of our developers at the time did not — and yet we ourselves, because we didn't have pipeline, weren't really doing that either. So when pipeline came along, we were able to abstract a lot of our config out of the config.xml for our jobs and into pipeline, and do a lot of not necessarily shared but consistent code across projects.
E
We write a lot of our capabilities for Selenium, because we're a Python and Selenium shop. We write those into a JSON file, and then we send that JSON file, along with the capabilities above, to Sauce Labs, where it runs the tests. Tox is our virtualenv runner; it kind of sets up the environment and abstracts a lot of the things like the pytest additional options, and sets the environment and some things like the AnsiColor plugin for the build wrapper. But you can see here, there's a lot of try, a lot of catch.
E
A lot of wraps, throw, finally; a lot of inline classes. Not to say it was nasty, because it was still a lot cleaner than putting everything into job configs, but it was a lot of hand-rolled stuff, and we had to multiply this across sometimes 20 projects. We've scaled down the number of projects just because we thankfully had success moving a lot of them into development repos, which is great. But you can see here, the biggest one is probably the IRC notification plugin.
E
Then there's a giant try right here to make sure that all of the variables and all the pytest add-options were actually sent to tox and actually work; if not, then we throw a failure. And finally, we write out the results. We stash the environment so that the consumers of our builds upstream are able to take them.
E
But it's a lot of try-catches. We're a Python shop, as I mentioned, largely because our development teams are, so we wanted to share expertise and help there. So a lot of the Groovy stuff was unfamiliar to us; for a lot of us coming from either PHP or something else, a lot of the Java syntax, having to match braces, and kind of some of the callbacks and things were kind of confusing to us.
E
So that's what we looked like, and this was for one project: 159 lines of code for one project. As we moved into declarative, which you can see here, we've scaled that down by, I think, 60%, and yet we've actually increased functionality. So we still have largely some of the same things — the capabilities we still mention and specify — but everything in the pipeline right here is very clean. There's no nesting where there doesn't really need to be, everything is declarative, and we can still set overrides for some of the tox things.
E
Things like tracebacks, color options, which driver we're using, and the variables. But you can see here as well, we've now got stages, so we can do parallel execution for nodes. We still write out the capabilities, and — excuse me — on post, regardless of whether the build succeeds or fails, rather than doing try-catch and throws, we just use the post step to always write out the results file, with the JUnit and Active Data and Treeherder — those are Mozilla-specific things — and on post as well, for the second post:
E
only on failure do we actually send — well, sorry, on failure we always send mail, but on the changed state: if a build goes from success to fail, we send an IRC notification, and if it goes from fail to success, we send an IRC notification as well, and you can see those showing up here in #fx-test-alerts. Neat.
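The shape of the declarative pipeline just described — stages plus post conditions replacing the old try/catch/finally — can be sketched like this; the step names for the Mozilla-specific notifications are assumptions, not the team's actual shared-library steps:

```groovy
pipeline {
    agent any
    options {
        timestamps()
        ansiColor('xterm')
        timeout(time: 1, unit: 'HOURS')
    }
    stages {
        stage('Test') {
            steps {
                sh 'tox'  // tox sets up the virtualenv and runs pytest
            }
        }
    }
    post {
        always {
            junit 'results/*.xml'  // always publish results, pass or fail
        }
        failure {
            mail to: 'team@example.com', subject: "${env.JOB_NAME} failed"
        }
        changed {
            // hypothetical helper; the real IRC step depends on the plugin used
            ircNotification()
        }
    }
}
```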
A
D
A
A
E
E
A
A
Oh, totally. The other question I have is: before this, before declarative, you had a whole big long bunch of try-catches and, you know, script — stuff that really looks like Groovy script — and now you have a Groovy syntax that you're sort of configuring steps in. Did moving to declarative make it so that non-release-engineer type people, as well as release-engineer type people, could make changes to this, or do you find you still only have a few people touching this code? Oh no —
E
absolutely. The ability for our engineers — whether they are manual testers who are working with some developers, or ops, or even another test automation engineer — the ability for them to ramp up a new project: they literally, you know, will copy and paste this project, change the parameters for the versions of Firefox that they need, or the platforms, change the timeout value for the build duration, or change maybe some of the concurrency options in here so that it runs pytest with, like, ten browsers instead of five, or something like that.
E
Most of the time they don't need to change a whole lot, except for the project-specific things like the variables. So it's very, very clean, and when we set up a Jenkins job, we don't really need to clone from a previous project any more to get the default configs; that's largely in the Jenkinsfile. And even things like deleting the workspace used to be a problem for us, because sometimes we would forget to specify that in the Jenkins job config itself.
E
So now we've got everything standardized. We're moving to Docker as well, to add, you know, yet another needed level of abstraction, so that developers will only need to run the Dockerfile and then they will get kind of all of this for free. But the core config is here and it's very easy to change. Excellent.
A
E
I think, other than some of the fixes that we took to use declarative and the pipeline model definition leading up to the 1.2 release — from Andrew Bayer and from Robert — we really haven't come back and revisited this, I think, since probably February or March. So there's a lot of new features that are either polished now, or just weren't available for us, or weren't kind of ready for us to consume. So there's a lot of cleanup we can still do, right.
A
E
And so I showed you the before — kind of before declarative, before shared libraries — and I showed you the after. What I didn't show you was the shared libraries themselves, and I don't think I'll go through that exhaustively here, just due to time. But you can see here, if you look at this — and I think these are Dave's, probably Dave Hunt's — all right, let me back up just a bit to show you this, and I'll include this link later. But Dave Hunt is on my team.
E
He's a test automation engineer — a senior test automation engineer — and he was the one who largely spearheaded it: he kind of took the pipeline code and worked it with our Jenkins instances and with our projects, and I shadowed him and adopted some of it. But he really did the work. davehunt.co.uk — he's got a whole bunch of blog posts kind of chronicling it, even from November actually, dealing with the IRC notification plugin, all the way up to, yeah, March, basically.
A
So I suppose I could ask — I need to ask him, maybe — but how are you guys testing your shared libraries? I mean, I don't know if you were on — if you were able to see Oleg's presentation that he just did. Did you get any of what he was saying? I mean, like, what do you guys do?
E
Yeah, testing is something that we're looking at here. We've got an open issue for one of our biggest ones. It's called Service Book; Service Book is a website that we wrote in Flask, where basically we put the projects, the repos — centralized everything our test teams are doing — so that ops and developers and managers can see where the results are stored, all that kind of stuff. So we actually wrote a REST API inside of our Service Book that our shared library now calls.
E
So we need to document and test that, and just recently we replaced a whole bunch of hand-rolled HTTP GET and POST requests — for getting things from the Service Book API endpoints into our code, into our tests — by using the pipeline-enabled HTTP Request plugin, and so we've got versions 1.7 and 1.8 that now use that. Backing up, though: yes, we still have a huge need for testing, linting, coverage reporting, and tests. So Dave did write a few tests.
E
If we mess up on anything, we can always fix it with a pull request or revert. So even though we've gotten to this point and we're pretty stable, the way we're moving forward is with the other teams, and so we kind of need to bring the tests back. So we'll be taking a look at the other parts of the project — specifically things like, you know, testing your code locally, because we used to do that: testing the pipeline code locally, because we used to do that inside of the groovy script, inside of the pipeline editor.
E
A
Similar to what Oleg was doing, yeah. Okay, also one more: so you were talking about declarative 1.2 that just came out, that Andrew Bayer and others worked on. I noticed that much of your pipelines are pretty linear right now. At this point, are you guys looking at other ways to speed things up, like adding parallel stages, that kind of thing? Yeah —
E
in addition to the post-build steps that we got from declarative, I would say the parallel stuff is probably going to give us the most win. Our tests are pretty quick right now, and I think, you know, having Docker and cached images will help as well. Since, as I mentioned, we still clear out the whole workspace, Docker will help a little bit with some of the caching and that. But yeah, we have a lot of executors on hand, and so we want to be more efficient in using those inside of our Jenkins instances.
E
Well, here are some of the durations. These are — we call these ad hoc, but they're basically a nice way for us to run either configuration changes for remote teams from Jenkinsfiles, or the project itself. Most of them, you can see — I can sort here — if you run them by themselves, the longest-running one is 11 minutes. This is on a pretty beefy — I think it's a, I'm not sure if it's an m3.medium or maybe an m4 instance in Amazon — and we run CloudBees AMIs, but yeah.
E
We haven't really taken advantage of the parallelization yet, so I expect that this would be at least twice as fast, if not three times faster. The way we write our Selenium tests, they are atomic, so they do not depend on any other test data; everything is self-contained, so running them in parallel will not, you know, affect any of the other tests in a given test suite. So basically, as many browsers as we can throw at Sauce Labs, and as many Jenkins executors in the nodes as we can have, will do that.
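Since the tests are atomic, the parallel speed-up mentioned maps directly onto declarative's parallel stages; a sketch, with illustrative stage names and tox environments:

```groovy
stage('Tests') {
    parallel {
        stage('Firefox') {
            steps { sh 'tox -e firefox' }  // illustrative tox environment
        }
        stage('Chrome') {
            steps { sh 'tox -e chrome' }
        }
    }
}
```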
E
So we're probably looking at a couple of minutes, maybe five minutes tops, for a build, max. Nice, very nice. So yeah, the shared library gives us the notifications, it gives us the post-build steps, it gives us a lot of the credentials, the environment variables, all of the pipeline-specific options — like making sure every project has timestamps, has the same timeout (maybe different values, but has a build timeout). It uses the AnsiColor xterm option; we use that because it helps us with our Python stack
E
traces. And the shared library as well is just very, very clean. Every project submits to the same endpoints for S3 buckets, and that's all configurable for dev, stage, and production. We submit to other Mozilla-specific things like Active Data, which is basically just parsing the raw results and graphing our failures and passes over time. So we've got — not big data, but pretty close to it — running to show us.
E
But as we moved more into Marionette-backed WebDriver, and we started writing tests for chrome — not Google Chrome, but the browser chrome — we started needing and wanting to send results for client builds as well as web automation builds to the same place, so we can kind of get an overall health of our test ecosystem. And yeah, the Service Book one is the one that I mentioned; that's the one that we implemented just previously, and now we literally have a Jenkinsfile for projects that use Service Book.
E
That is three lines. So we have cut things down by quite a bit, and I assume that over time we will just keep paring down and tidying the Jenkinsfiles and moving more things into the shared library where they fit, and increasing our build speeds. And our reliability has gotten a lot better, so there are no failures that don't come out of either poor reviews or messed-up check-ins or something like that, rather than someone hitting the wrong button in Jenkins haphazardly.
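A three-line Jenkinsfile like the one described would delegate everything to the shared library; the library and step names here are assumptions, not the team's actual identifiers:

```groovy
// Jenkinsfile -- everything else lives in the shared library
@Library('fxtest') _
fxTestPipeline()
```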
E
A
E
Every time we add a new part of the shared library, we just do a pull request for that. So you'll see here, sometimes I or Dave will forget to do a release note as we cut a new release in the GitHub branch, so we'll come back and fix that later. But basically, with every release, you know, we put into the readme not only the version steps in the version history, but also the dependencies of the new plugins that we add, like here.
A
E
No, right now the readme for the fx-test Jenkins pipeline is the canonical truth, and sometimes we're a little slow to update that, but we're actually going to move this to Read the Docs, so it has better, you know, search engine capabilities and is also still in SCM. But this is the source of truth, and we make sure to add real-world examples from our own usage, and encourage people to ask, and to extend it if they need it as well.
B
E
Dave has looked at that; it comes down to a time thing — there's a lot he'd like to do — but yes, that is something that we would very much like to do as we go forward. So we could actually use help with this: if anyone — you know, if you're into Selenium and Python, or even if you're not — if you find some of this abstraction helpful, if you would like to tackle some of the open issues, please.
D
A
E
They have the ability to set a config in their Groovy to run our tests remotely — basically take the result of the exit code, exit 0 for success, 1 for failure, and then determine in their own pipeline and Blue Ocean step whether to deploy to the staging instance, deploy to production, or whether they need to take a closer look at our results. So we have both.
E
A
Right, thank you, Stephen. Thank you — moving right along. I'll remind people that they can ask questions on the IRC channel, and people have been doing so, so definitely do that; we're watching that channel and passing those questions along. And next up we'll have Mark Waite, giving us some Git tips and tricks.
F
Hi everybody, I'm Mark Waite. I maintain the git plugin, and this is some ideas and concepts around dealing with things where we may have made mistakes. For instance, sometimes you discover that people are checking large binaries into your git repository, and those large binaries live forever, and the git repository gets larger and larger and larger. As you deal with large repositories, they have unique problems that you have to deal with. You made a mistake — it really isn't healthy to have large repositories — but you've got to find a way to deal with it.
F
So there are some key concepts that you can use to frame your efforts, to decide how best to help yourself live with a large git repository. Some of the things you can do help you by reducing the load on the remote, the central repository — so there are things that can help there. Some of the things you can do help you on the Jenkins master, where it has certain things that it does, like polling for you, or caching repos for pipeline. There are other things you can do
F
that will help you on Jenkins agents, where the pipeline cache is, where you've got the local repository copy, and where you've got a workspace. Each of those three areas has different things which can help, and those different things may be applied in different ways in your environment, so it'll help you to understand which things apply where. So first up: what can we do to reduce the load on that central server, the git remote repository? The remote repository has all the history in it, but it only has to send history that's requested.
F
So one of the things we can do is find ways to ask for less history. The git repository includes all the large files, but we can find ways to request only a subset of the large files. Some of the techniques available there: we can use a reference repository — what that does is provide a local cache that can ease the load on the central repository. We can use a narrow refspec — a refspec is a concept that git gives us to ask for less. We can use a shallow clone —
F
that's a way of asking for less depth. Or we can enable large file support. Each of those techniques we'll discuss in a little more detail here, to give you an orientation on how any one of those four, or all four together, can help you reduce the load on your central repository. So: a reference repository is a local copy of the remote repository, and a clone can reference that existing repository rather than downloading the referenced data again.
F
So imagine, if you will, a spot on your file system where you pull a copy of your repository, and everybody else on that computer points to it instead of downloading it again for themselves. Big benefit: it reduces network data transfer. Another benefit: it can reduce how much local storage you need on the machine that's using it. Be warned that reference copies are not automatically updated.
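A minimal local demonstration of `--reference`: the second clone borrows objects from the cache instead of transferring them again. The paths under /tmp are illustrative:

```shell
set -e
rm -rf /tmp/refdemo && mkdir /tmp/refdemo && cd /tmp/refdemo
git init -q origin.git
git -C origin.git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m 'initial commit'
git clone -q origin.git cache                         # the local reference copy
git clone -q --reference /tmp/refdemo/cache origin.git work
# The new clone records where to find the borrowed objects:
cat work/.git/objects/info/alternates
```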
A
F
Sorry — I see you muted me; I'll turn down my speakers. So a reference repository is a local copy of the history, and each of the jobs, or each of the workspaces, sitting on that disk can point to the history that's in that reference copy, and by pointing to that history they get disk space savings from it. Did that answer your question? — Yes, it did. — Great.
F
Okay. So, in addition to a reference repository, you can narrow the breadth of the information that you request from that remote server. What a refspec is: a refspec is git's way of saying which things on the remote server I want to bring to the local side. Good online documentation can tell you how you can describe it — what the rules are that govern which things you can do in a refspec and which you cannot.
F
If, for example, you only need one branch in your build, a narrow refspec can tell the remote server to only give you the history for exactly that one branch, where the default would give you the history for all branches. You can reduce local repository storage; you can reduce data transfer. However, there's a negative: because you asked for exactly that branch, if you are doing comparisons inside your job between one branch and another, and you didn't ask for the refspec for that other branch, it won't be there.
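Configuring the narrow refspec described above looks like this; the remote URL is a placeholder and the branch name is illustrative:

```shell
set -e
rm -rf /tmp/refspecdemo && git init -q /tmp/refspecdemo && cd /tmp/refspecdemo
git remote add origin https://example.invalid/repo.git
# The default fetch refspec is '+refs/heads/*:refs/remotes/origin/*'
# (all branches); narrow it so fetches only request one branch's history:
git config remote.origin.fetch '+refs/heads/master:refs/remotes/origin/master'
git config --get remote.origin.fetch
```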
F
The other challenge is that refspec patterns are limited. You can't use general-purpose wildcards; you can put a wildcard at the very end, right after a slash, but that's about it. There isn't any fractional pattern matching going on in a refspec pattern. So refspecs let you limit the breadth of your question to the remote server; reference repositories let you avoid bringing down data that you've already got. The next way that you can help is:
F
Instead of limiting the breadth of the question, you can limit the depth of the history that you retrieve. A shallow clone can limit how many entries of history you'll bring back from that remote server. So if your job really only cares about building the current thing and does no operations with history, you can just ask for a shallow clone with depth 1. It will reduce your local storage, it will reduce the data transfer, and your job will keep running. However, there are downsides to shallow clone.
F
For instance, you can't merge shallow clone work, because it may skip changes in bringing them down to you, and so it doesn't have a perfect representation of history. Change reports can be incomplete, so if you rely on reading the change log of a build, don't use shallow clone. The other thing is that shallow clone is only available on the command line.
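Here's a local sketch of the trade-off; note that Git ignores `--depth` for plain local paths, hence the `file://` URL:

```shell
set -e
tmp=$(mktemp -d)

# A throwaway "remote" with two commits of history.
git init -q "$tmp/origin"
echo v1 > "$tmp/origin/a.txt"
git -C "$tmp/origin" add a.txt
git -C "$tmp/origin" -c user.name=demo -c user.email=demo@example.com \
    commit -qm "first"
echo v2 > "$tmp/origin/a.txt"
git -C "$tmp/origin" -c user.name=demo -c user.email=demo@example.com \
    commit -qam "second"

# Shallow clone: only the most recent commit comes back.
git clone -q --depth 1 "file://$tmp/origin" "$tmp/shallow"

# One entry of history instead of two; a changelog computed from this
# clone would miss the first commit entirely.
git -C "$tmp/shallow" rev-list --count HEAD   # prints 1
```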
F
So if you're using JGit, you can't use shallow clone. So, shallow clone gives us depth coverage, narrow refspecs control breadth, and then there's one more tool in your arsenal to reduce the load on your remote server. Some Git implementations have an extension which allows them to store large files outside the repository. That includes GitHub, that includes Bitbucket, that includes Gitea; there are many that support Git LFS as a standard extension. It's a good way to do enterprise-scale large file transport. LFS is high performance and it's very actively developed.
F
It dramatically reduces the requirement for local repository storage. However, there's a downside to it: you have to install the LFS extension on every agent that will run Git LFS. It does require extra support from the hosting provider, and, at least in the earlier implementations, there's no support for SSH; it has to use HTTPS.
F
A
F
Correct, yeah, very good, you've understood it exactly. There's a separate area, if you will, on the remote server which hosts these large binary files, and that separate area is kept safe and kept backed up. GitHub is very good about it, and Bitbucket, Gitea, all of them take very good care of your large files, but they're not stored right in the Git object database.
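The setup, assuming the git-lfs extension is installed on the agent, is typically a one-time `git lfs install` per machine plus a `.gitattributes` that routes the big file types through the LFS filter. The patterns below are just examples:

```
# .gitattributes
*.psd    filter=lfs diff=lfs merge=lfs -text
*.jar    filter=lfs diff=lfs merge=lfs -text
*.mp4    filter=lfs diff=lfs merge=lfs -text
```

Running `git lfs track "*.psd"` writes lines like these for you; commits then store small pointer files in the Git object database while the real content goes to the hosting provider's LFS store.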
A
F
Go ahead. Sorry. Okay, so now let's shift our focus away from helping your central server. What can we do to help the Jenkins master? Because there are things that will load the Jenkins master if you're dealing with these large files and large Git repositories as well; the master typically does pipeline scans of repositories.
F
Now, on the agent: the agent is a place where you can get large benefit, significant benefit, by using these techniques. The agent is the one that's responsible for populating the workspace by checkout, and it's the one that builds the job; it does the work. So the key things that can help you there: narrow the refspec, so that you ask for less from the central repository and you get less stored locally.
F
In your local workspace, use shallow clone to limit how much history you get when you don't need full history. Reference repositories can be a major savings on the agent, as Liam described earlier: if you have a repository which is used in hundreds of jobs, you can have those hundreds of jobs pointing their history towards a single copy on disk.
F
Instead of having hundreds of copies of that exact same history. Large file support as well can be a big help for the agents, because you don't carry around all of the history of all of your large binaries. There's an additional option that is agent-specific, and that's sparse checkout. What sparse checkout provides you is a way to say which exact directories of this Git repository should be checked out.
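Here's a portable sketch using the older `core.sparseCheckout` mechanism (newer Git also has a dedicated `git sparse-checkout` command); the directory layout is invented:

```shell
set -e
tmp=$(mktemp -d)

# A throwaway "remote" with two top-level directories.
git init -q "$tmp/origin"
mkdir "$tmp/origin/app" "$tmp/origin/docs"
echo code > "$tmp/origin/app/main.txt"
echo text > "$tmp/origin/docs/readme.txt"
git -C "$tmp/origin" add .
git -C "$tmp/origin" -c user.name=demo -c user.email=demo@example.com \
    commit -qm "initial"

# Clone without populating the workspace, then list exactly which
# directories should be checked out.
git clone -q --no-checkout "$tmp/origin" "$tmp/ws"
git -C "$tmp/ws" config core.sparseCheckout true
echo "app/" > "$tmp/ws/.git/info/sparse-checkout"
git -C "$tmp/ws" read-tree -mu HEAD

# Only app/ is materialized; docs/ never touches the workspace.
ls "$tmp/ws"
```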
A
It can be a real help there. I mean, it makes sense once people know what they're talking about; this is the kind of thing that definitely helps out, and it also certainly helps your master and keeps your bandwidth from being the limit, right?
A
G
And Blue Ocean, sure thing. So, like Liam said, what I'm going to talk about today is just kind of an overview of a semi-realistic, real-world pipeline with the Blue Ocean pipeline editor. Just a spoiler: I'm cheating a little bit compared to Jenkins World, because I'm showing you what's out in beta now, but we'll kind of go over this and see how things work. Just really quickly about me: I'm a senior software engineer at CloudBees and a Blue Ocean core contributor, and I've been working on the pipeline editor as well.
G
For some time. You can check things out on GitHub there. And so, I guess, just to kick things off, what we want to talk about is, first of all, what do we want to do with this particular pipeline, to sort of simulate what a real-world pipeline might look like.
G
We're gonna look at having a QA step as well. In a real-world sort of situation, what happens is some sort of quality assurance gate is one of those things where they give the sign-off, okay, this is good or not, based on some actual end-user testing. And then, of course, some kind of deployment step. And we're going to use the visual editor to build this whole thing. So we can go ahead and just get started.
A
G
So, typically in the creation process, if Jenkinsfiles are found, they're automatically scanned and that sort of thing, but in this case none exists, and so that's what we're gonna do; let me go ahead and do that today. So let's go and click the create pipeline button, and once we're there, you're basically presented with an empty pipeline. Starting off, we don't actually have anything that's valid; you can go ahead and try to save this right now.
G
You should see there are some validation errors, so we need to do some things to fix this. First things first, we've got to add a stage. We're gonna have a bunch of different stages, but the first stage, I think, we're gonna call Server, so we're gonna build something on the server side, and this is gonna need some sort of steps in it. So let's give it some.
G
Actually, we'll run Maven commands, and so that's thing number one. Now, to get the Maven command to work, what we need is some sort of container that has Maven in it. So we're gonna add Docker here: we're gonna take the Maven image, and there's a particular version, 3.5 with JDK 8 on Alpine, so we're gonna basically get Maven 3.5 with JDK 8, and the Alpine one is kind of smaller, a hundred megabytes or something. So let's see, and so after we do this.
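Behind the editor UI, the stage being built at this point corresponds roughly to a Declarative Pipeline snippet like this; the stage name matches the demo, but the Maven goals are a guess:

```groovy
pipeline {
    agent none
    stages {
        stage('Server') {
            // Run the steps inside a throwaway container; the
            // maven:3.5-jdk-8-alpine tag keeps the image small.
            agent { docker { image 'maven:3.5-jdk-8-alpine' } }
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}
```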
A

G
But similarly, what we want to do here is go ahead and make something that's going to make npm available. To do that, we're going to use a different Docker image, and this one is Node 6, and that's okay for now. So let's go ahead and try to save this, and hopefully it saves just fine, and we'll see how that looks when it runs.
G
So you'll see what happens: we sort of have an issue. We can't access npm, and the reason for that is because we don't have the right user. I'm running this on some system, and it's creating a Docker container, but it doesn't have the right user for the client. So what we need to actually do is go into the settings here and give it the args to run as the root user. Many of these Docker images require that, and you'll see that again later.
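In the generated Jenkinsfile, that settings change corresponds to the docker agent's `args`, roughly like this (the Node image tag is a guess):

```groovy
stage('Client') {
    agent {
        docker {
            image 'node:6-alpine'
            // Without this, the container runs as the Jenkins user's
            // uid, which often doesn't exist inside the image, and npm
            // fails with permission errors.
            args '-u root'
        }
    }
    steps {
        sh 'npm install'
    }
}
```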
G
And as soon as we watch this, it actually seems to be saving all of these things that we need, and if we look at the end, we actually have a successful build here. Alright, so let's go ahead and keep adding some steps to make this a bit more realistic, because right now it's building some things, but it's certainly not doing anything beyond that. So, similar to the server,
G
what we're gonna do is just save all of the things in the dist directory, and we're gonna use those later; for the time being, we'll just leave it at that. So after the build happens, we want to go ahead and do some tests. Now, in this case, for the first test, I think we said we wanted Chrome.
G
So our server build passed; of course, the client build is passing, doing some npm installs, and our others aren't really doing much, so of course they're passing as well. But obviously what you'd want to do is run all of your tests in parallel; again, that's where you'd fill in your own things. Alright, so after we have all our tests passing, this is where it gets a little bit interesting.
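With Declarative Pipeline 1.2, which the beta editor shown here relies on, running those test stages side by side would look roughly like this; the stage names, image, and commands are illustrative:

```groovy
stage('Test') {
    parallel {
        stage('Chrome') {
            agent { docker { image 'node:6-alpine' } }
            steps { sh 'npm test -- --browser chrome' }
        }
        stage('Firefox') {
            agent { docker { image 'node:6-alpine' } }
            steps { sh 'npm test -- --browser firefox' }
        }
    }
}
```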
G
What we want to add next is a step that's kind of a gateway. So, in a real-world scenario, you've got a QA team; a lot of times, anyway, their responsibility is to go and manually test things to make sure that, let's say, your entire automation sweep didn't miss anything obvious, or that they didn't see anything else that's not okay. So what we want to do is to make this
G
happen in such a way that we actually have a QA step, and QA is able to approve things. So go ahead and just make a QA stage. Okay, the first thing we want to do here: back on the server and the client, if you remember, we stashed some various files, and these, of course, would be used in user tests as well; we stashed the server files with the war, and we stashed the client files with the dist directory.
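The stash/unstash pairing being described works roughly like this; the names and paths match what's mentioned in the demo, but treat them as illustrative:

```groovy
// In the build stage: set the artifacts aside under a name.
stage('Server') {
    agent { docker { image 'maven:3.5-jdk-8-alpine' } }
    steps {
        sh 'mvn -B clean package'
        stash name: 'server', includes: 'target/*.war'
    }
}

// Later, possibly on a different agent: restore the exact same bits.
stage('QA') {
    steps {
        unstash 'server'   // puts target/*.war back into this workspace
        unstash 'client'   // the dist/ tree stashed by the client stage
    }
}
```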
G
We're running this environment as the root user, because that's how the Tomcat server is set up, and the other thing we need to do is to expose a port, so that we can actually see what's going on within the server. So Tomcat's gonna run on port 8080, and we're just gonna expose it to this local port here, 11080. So, going back to our steps: we have an application directory, and this is where the Tomcat webapps directory is gonna be.
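Both of those container settings live in the QA stage's docker agent, something like the following (the image tag and webapps path are assumptions):

```groovy
stage('QA') {
    agent {
        docker {
            image 'tomcat:8-alpine'
            // Run as root (how this Tomcat image is set up) and publish
            // the container's 8080 on the host as 11080 so a tester can
            // reach the app while the pipeline is paused.
            args '-u root -p 11080:8080'
        }
    }
    steps {
        sh 'cp server/*.war /usr/local/tomcat/webapps/'
    }
}
```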
G
We're gonna take our server files; we call it, well, we unstash something, in this case it was target/server.war, and we're gonna put that in its own server directory, and so that'll be, you know, wherever that is. And we're going to make a brand-new directory that we're gonna dump our client files in, and what we did there was stash the dist tree.
G
All right, so right now it looks like everything passed, and that's good. So we've got our scripts right; we've got something, Tomcat, started, and based on all of our settings we forwarded things to port 11080. So let's see if we can access the server. Well, we can't, obviously, because it's not running anymore: once that stage finished, Tomcat stopped, the Docker container was destroyed, and everything went back to normal.
G
So in this sort of situation, where you've got a QA team that needs to verify things before they move on to the next stage, you can very easily use the input step to pause things and get a yes-or-no answer. So that's what we're gonna do after this shell script here; and keep in mind what's happening within this stage.
G
Now, the other thing that I would say is this isn't always what you will want to do, but it's just sort of an illustration of what's possible in a pipeline. So we'll see things went blue, and somewhere in here we've got this wait for interactive input: we've got a go button that we added, and an abort
G
If
you
know
things
are
bad,
so
what
we
can
do,
because
we're
paused
within
a
state
within
a
you
know,
a
docker
container,
essentially
you'll,
see
that
I
can
go
ahead
and
go
and
hit
this
slide
here
and
I
can
go
and
test
out
that
exact,
build
and
so
what's
happened.
Now.
Is
that
with
this
pipeline,
I've
done
a
build.
You
know
with
some
kind
of
a
client,
some
kind
of
a
server.
G
It's passed some tests that we've defined, run on browsers, and it's made it to a QA stage, and this QA stage has the exact binaries that were built during the build step, because what we've done is stash them and restore them here. So at this point, let's say something is wrong; you know, I don't really want it to say hello here, so I'm gonna go ahead and abort that. But we'll add one more step here, and you'll see.
G
Now, this particular step doesn't really need to run on any particular Docker container, necessarily. We could set it up, you know, if we have a specific shell script that we want to use, or if you want to make sure that it's running on Ubuntu Linux or some specific environment, we could certainly set up a Docker image, but in this case we don't really care too much.
G
You know, again, things have passed the tests that we've defined, and we're sitting in a QA stage. So going back here: we certainly still have this thing running, and a QA person can decide, hey, maybe we want to go and promote this to staging. So that's what we can do from here, or abort it. In case we abort, let's just do that for now: obviously things failed and it didn't reach the deployment step in that case. But let's re-run that.
G
Luckily, we didn't have a very long test suite, so here we are again. In this case, you know, QA will go and look over here and say: okay, this is good; whatever changes were introduced, I don't see any problems, I want to promote that. So once they click the OK button, basically the pipeline will go ahead and proceed. And maybe, in an actual real-world scenario, you don't deploy immediately to production; maybe there's a staging environment where you have a copy of your
G
real data, if you're running a SaaS or something like that. But, you know, in our small example here, basically we have been able to build a couple of different components, run them through testing in parallel, stop at a QA stage to verify that our build is good, and let the business essentially decide that it's time to go to production. And so you'll see here, basically, what we've deployed.
G
A
Of what we're looking at. So that's great, Keith, that's like a real-world example. Exactly, yep. Yeah, so of course in the real world you'd also be putting commit messages into each of those changes, right? Right. Well, I was just... actually, I'm just kidding, I mean I'm just giving you grief. It's okay, I'm just saying. Yeah, yeah.
A
Excellent, thank you. All right, so that's our last presentation for this online JAM. Thank you again. We'll be publishing a follow-up blog post on jenkins.io with the video and links. As a reminder, jenkins.io is a great place to catch all the latest updates from the Jenkins project, including the latest Declarative Pipeline 1.2, which includes support for parallel stages, which the beta Blue Ocean editor that we were seeing today depends on and uses. So for more up-to-date links and videos, follow @jenkinsci on Twitter, and also go to the IRC channel
A
#jenkins. Let's see here, what else: thanks to all our speakers who joined us today, Oleg, Michael, Thurston, Steven, Mark and Keith. Also thanks to Alyssa for organizing another great Jenkins Online Meetup. If you're interested in joining a local Jenkins Area Meetup, check out meetup.com/pro/jenkins, or go to jenkins.io on the Participate page; there's information on how to start your own meetup if there isn't one already in your area. All right, thanks very much for watching.