Description
Ravi Sharma, Martin Krienke, and Larry Ogrodnek presented the POET pipeline framework for Jenkins, including the concept of defining each step of the pipeline with a Docker container. Pipelines are defined with a YAML file and use Docker containers referenced in the YAML file to perform their steps.
T-Mobile has open sourced the POET pipeline and looks forward to ideas and insights from others.
A: We're recording. And welcome, everyone, it's time to start; we'd like to start on time, thanks very much. This is a Jenkins Online Meetup. Today we have the privilege of having the T-Mobile development team present to us on POET, the pipeline system that they use to deploy for thousands of developers at T-Mobile. So Martin, would you like to take it from here and lead us off? We'll let you and Ravi go back and forth.

B: Yeah, sounds good.
B: Good, thank you, Mark. Hello, everybody; we'll just introduce ourselves in a moment. What we're going to do today is go through the vision and approach we had for this pipeline, our strategy, and what motivated us to end up where we did. Then Ravi and Larry will take you through some of the high-level design and the implementation, and get down to kind of the core of the engine. So it's kind of those three areas.
B: I'll cover what led us to this point, where we were feeling motivated to adjust and do things a little differently than how pipelines were traditionally being viewed here at T-Mobile. Ravi really was a key person for helping us drive adoption and working with our teams at the time; you'll see a manager title now, but back then he just had that role.
B: He got that opportunity just a few weeks ago on another team, which is great, but really he was the one working with our customers and understanding what their needs were, and that kind of helped feed our product pipeline as we continued. And then Larry really was the guy at the heart of the engine, who spent all the time putting the core parts of our engine together, which everybody else then leveraged. So, as noted, I'm Martin Krienke, senior manager of product and technology at T-Mobile; and this is Ravi Sharma.
B: Also as a note, he's now a manager of product and technology at T-Mobile on one of our other pipeline teams, but he was basically our product manager; I called it product manager for customer support. It was so important how we looked at customer support, how we talked about our customers and listened to them, that we actually had two product managers: one for the heart of the pipeline activities, and then somebody just focused on the customer. We found that paid off quite well.
B: Larry, as I mentioned, was really our guy: if you wanted something technical built, I'd go to Larry on that. Larry was definitely the guy under the covers who did a lot of the core work. All right, so really, where all this came about was when we talk about managing a CI/CD pipeline.
B: There are going to be a lot of complexities, right, just building and designing that thing, especially if you work from a Jenkins perspective. You have all your plugins you've got to get; if you're on VMs, you're keeping those things going and getting them updated. There's the variety of types of deployments people want: do they want to start doing blue-green, especially as we start moving the DevOps way; we want canary deployments; and there are a lot of other workflows that you need to integrate.
B: You just had all these other things. And I will say that customer support is overlooked a lot of the time, and in a shared service organization, as much as we think about the things we've got to do to provide tooling, are you really thinking about your customer? And then, of course, if you're 24/7, how are you going to handle that? Do we have sufficient documentation?
B: How are we training people? All of those are important. And at least our experience here at T-Mobile had been that, especially if somebody was trying to build a more centralized capability that other teams could use, this category was really lacking. And if it's lacking, boy, everything else kind of tends to suffer, frustrations mount, and it just becomes a challenge for everybody, whether it be the users of the platform, the people creating the platform, or management trying to make sense of it all.
B: Another goal here was that developers like to have their flexibility, right. You want to give it to them, but there's always a reason: "hey, I can't quite do that, because I needed this one other thing and you don't have that for me." So we really wanted to try to give them the flexibility and adapt to the model they were using in their development, and not be too prescriptive. We really wanted to avoid that.
B: That's usually the downfall of a lot of shared-service solutions: when you start being so prescriptive that people just really don't have that flexibility. And a key thing here is to let them focus more on the development and testing aspects of whatever software they were building, rather than spending time having to maintain their pipelines. This was interesting; it came up recently. Our senior vice president made a comment that everybody had been wanting to show him their pipeline.
B: Teams would go, "you've got to see our pipeline," and he was like, "I just don't know why." Well, one of the things I talked about with him was why they had been wanting to show their pipelines to him. It's really because the pipelines were complex and they were very proud of them. They had spent a lot of time working on those pipelines, and so they were like, hey,
B: "we want to show you this, because this is really cool." But our take was that a pipeline should not be that big a deal. It should be just: oh yeah, I have a pipeline; we run our stuff; we can deploy it; we build all the things we need; it's a piece of cake. It should not be that big a deal, and this is really where we at T-Mobile have started to shift, with the POET pipeline and some other activities that have been happening.
B: We've really been trying to move away from this idea that every team has to create their own pipeline, because it's not efficient. For the senior VP, who has, I don't remember exactly, but thousands of people under him, it's really not economical if every team is building their own pipeline and trying to do all the custom work.
B: One of the other things we did here was set some principles for how we were going to work on this, and we do it in two ways. We have guiding principles for how we work as a team, but also guiding principles that we check ourselves against as we did this pipeline and built it out. First of all: let's make it easier to implement new capabilities.
B: We don't want new capabilities to impact other users. What does that mean? Well, if I go and write all new code in my engine, and everybody is including it in a Jenkinsfile and running this code, then when I make changes to that library and do a bunch of stuff, I could be impacting those folks as they start loading the new version in, and there's a lot of other testing that has to happen.
B: If I'm upgrading plugins, do I know all the plugins work? One person told me that he had come into a company where they had over a hundred plugins running in their Jenkins instance. Well, version to version, those plugins can change. And we had a pipeline, not from our team specifically, but here at T-Mobile there was a pipeline that was trying to do more of that global approach.
B: So that was a challenge we were very concerned about. We wanted to make sure that when we want to get somebody onto the pipeline, we make the usage and the onboarding faster and easier. We were going to be really conscious, again, about that customer. So let's think about our scalability, our reusability, our flexibility for the dev teams, and hit all of those marks. Again, these are things we were checking ourselves against
B: as we went along and iterated on this. And then we didn't want to have what we call a CI/CD pipeline specialist. We ran some numbers and calculations on how much money we were saving through some of these things, and if you get down to it, for any given team in a larger organization where people have their own pipelines, you end up having to have somebody who is more or less dedicated to that pipeline.
B: They may not be doing it 24/7, 40 hours a week, but when there's an issue with the pipeline, they have to drop everything and be right on it. That becomes very disruptive, I think, to your other planning, to the other work you'd like to have that person on, going back to that whole concept I mentioned of actually trying to let people focus more on the development of code.
B: So if we can give more predictability: it's hard to find those specialists, and all those things, so we said, okay, we want to see if we can start to reduce that reliance for teams. That will also save them money and let them focus where they want to be. And then, again: can we actually abstract out the underlying technologies that drive the pipeline? So yes, we use Jenkins under the covers. Jenkins does a lot of stuff; it could just give us a foundation that we could start on. We didn't have to do everything from scratch.
B: There is value in some of these key plugins: somebody's got a plugin that lets me do the different things I need, to send something to Splunk, whatever it is. I have these plugins I can leverage, but let's minimize them, and let's not have teams even have to worry about knowing we're using them. So those were kind of our key principles on that front. All right, and I think this is my last slide right here.
B: What I just want to note is: if you're a shared service and you are supporting multiple teams with pipelines and all that, remember it's not just about the technology. You can build the really cool, awesome mousetrap, but if you don't focus on the customer (you might get the theme here: it's customer, customer, customer), you'll have a real challenge.
B: You can think you're listening to them, or it's very easy to get into "yeah, but I know what's best for you." Well, listen to what a person has to say; give them that chance to provide you that feedback. That's really what the third bullet is about, and where we did this. We always ran a customer satisfaction survey, and we were running at about a 4.9 out of 5 on customer satisfaction with our customers. It was a set of questions we put out, and we tried to be very open about the questions we asked.
B: We didn't try to be leading with those questions; sometimes people put questions out that are a little bit leading, with assumptions under them. It's just something to think about as you do this: you can do great things with technology, but if you don't remember who your customer is, the development teams, folks like that, then you might have a little bit of a struggle. So with that, Ravi, I think I turn it over to you.

C: Yes.
C: What Marty just explained is basically that everything he talked about is about the customer, and we are really very customer-focused. Whatever we have designed so far, whether it is the physical architecture of our pipeline engine (when I say pipeline engine, I'm talking about the Jenkins here) or the pipeline library, which is the pipeline framework we have developed: whenever we have designed any of these components,
C: we have kept our customer at the center of the table and checked whether any solution we were designing was actually helping the customers or not. Because when you design a pipeline library in your organization, DevOps teams have certain questions when they start using the library. How easy is it to do the onboarding onto the library? If you have a new application, do you have a centralized team in your company
C: that takes care of all the onboarding of applications in the company, or, being a DevOps team, can the team scale by itself? How easy is that, and how much of a learning curve is required to understand the pipeline? These are actually our experiences from the last several years.
C: We have hands-on experience with a lot of different ways of working on Jenkins and creating the library, and that's why we have come to the conclusion that this is the best architecture and library we could actually design. Do we really need a dedicated resource? How can we extend our pipeline with the features and capabilities we need to add to it?
C: A lot of duplication happens when you have the microservice kind of model, because with microservices you are following similar methods of building the pipeline, testing the pipeline, and deploying on similar platforms. So we have similar Jenkinsfiles that we need to repeat over and over again across all the components. Can this duplication be avoided?
C: So there are certain questions that people ask you. One of the things I will talk about right now is the physical architecture. What we have basically done is that we deploy our Jenkins into a Kubernetes namespace. The Jenkins itself is in a container, and then we are leveraging Kubernetes along with it.
C: As you can see in the picture here, the master itself is in a container, and then we have a persistent volume for the Jenkins home, which is right there in the same Kubernetes space. And we are keeping only very few Jenkins plugins. Plugins are basically the heart of Jenkins: the more plugins you keep adding to Jenkins,
C: the more features you get, but in this case we wanted a Jenkins with a smaller number of plugins, just enough to support our pipeline. So we used basically only four plugins to maintain our pipeline engine here. And then we have step containers: all the step containers, the build step containers and the different components in your build file that you are going to use, spin up in the same Kubernetes space.
C: What else have we done here? We have used Splunk for the logging, we have used AppDynamics, and we have used Spring Cloud Config Server to store our encrypted passwords. The reason for doing it this way is that this whole setup is fully automated. Fully automated, when I say it: we are using Jenkins configuration as code, and we are using Spring Cloud Config Server. So when this Jenkins comes up, it takes about two minutes and some seconds to bring up a new instance.
C: By using that, if your Jenkins instance goes away by any chance, you can actually restore it again within the same two minutes or so, because first of all, loading the Jenkins is very easy for us, since we are using very few plugins, and then all the configurations are stored in Bitbucket, the version control tool.
C: So it's easy to restore them back. Now, we talked about the agents, which are the workers for Jenkins. They can be spun up either in the same Kubernetes space, or they can actually be run on another Kubernetes cluster. For that we are using dynamic allocation and dynamic provisioning of the agents, and we can do similar things on AWS.
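Dynamic agent provisioning like the speaker describes is typically set up through the Jenkins Kubernetes plugin, which can also be driven from configuration as code. A minimal sketch follows; the names, namespace, and image are illustrative assumptions, not T-Mobile's actual configuration:

```yaml
# Sketch: dynamic Kubernetes agents via the Jenkins Kubernetes plugin,
# expressed as a Configuration-as-Code fragment (illustrative values only).
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "jenkins"              # agents spin up alongside the master
        jenkinsUrl: "http://jenkins:8080" # assumed in-cluster service URL
        templates:
          - name: "default-agent"
            label: "k8s-agent"
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest"
```

With this in place, a job that requests the `k8s-agent` label gets a fresh pod per build, which is what makes the per-team, throwaway-agent model cheap.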
C: In our case we are using both of them. And at the end we have Grafana dashboards: whatever happens in all the steps when you execute a pipeline, all the metrics are actually collected and shown in Grafana dashboards. So here are a few details about it. The complete infrastructure is on Kubernetes, including masters and agents. Each team actually gets their own pipeline engine, so the teams are basically isolated; if anything happens to one of the masters,
C: the other teams are not impacted in that case. And for us it's very easy to spin up any number of masters on the Kubernetes space, so we are not worrying about those things. Jenkins configuration as code is highly utilized by us. We are using a very minimal number of plugins, and they are all pre-configured. We even have global credentials within the organization, credentials which can be used across the teams; but there are specific credentials which a team would like to use, and for that
C: we have provisioned folder-level access. With the folder-level access given to a team, they can create the credentials which are specific to their applications within those folders, so that nobody else can actually see them, which matters from a SOX compliance perspective as well. On using four core plugins: you can have as many plugins as you want, actually, and I remember that before coming up with this particular architecture we had 200-plus main plugins, and you can think of the dependent plugins which get installed on top of that. So when I say four core plugins,
C: these are the main plugins; some dependent plugins also get installed along with them. We have extended it to 16 plugins, because we are using Splunk as well, and somebody was looking for build chimes, because if you go into the UI you need some plugins for that. So we have extended them to 16 plugins in total. We have also removed a plugin dependency by storing the credentials in Vault.
C: What happens, basically, is that we have our credentials stored right now in Vault, and through that we are using Spring Cloud Config Server. When this Jenkins comes up and is up and running, it actually brings everything, the encrypted credentials, from the Spring Cloud Config Server and stores them as global credentials in Jenkins. That's what I mean when I say almost zero maintenance; it used to be a huge pain when you had a single master.
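Jenkins Configuration as Code, which the speakers say drives this automation, is itself a YAML file. The following is a minimal sketch of the idea; the credential id, username, and variable name are assumptions for illustration, and the real setup would resolve secrets from Vault via Spring Cloud Config rather than the environment:

```yaml
# Minimal Jenkins Configuration-as-Code sketch (illustrative, not the actual POET config).
jenkins:
  systemMessage: "POET pipeline engine"
  numExecutors: 0          # no builds on the master; dynamic agents do the work
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: bitbucket-bot              # assumed credential id
              username: "svc-pipeline"       # assumed service account
              password: "${BITBUCKET_TOKEN}" # resolved from an external secret source at startup
```

Because the whole file lives in version control, rebuilding a lost master is just redeploying the container and letting it read this configuration back in.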
A: So one of the questions is: are you using Jenkins Job Builder? My assessment was that you're probably not; Jenkins Job Builder didn't look like it was involved in the structure that you're doing. No? Okay. And are you using CloudBees Jenkins, that is, CloudBees Core, the product, or Jenkins, the open-source component, for the Jenkins instances?
C: Okay, so now I'll talk about a pipeline execution example: basically what the pipeline looks like with this design. As you can see in this picture, there's a user who creates a pull request once he's done with his task on the source code, and the source code in our case is in Bitbucket. Now, we have something called the pipeline definition file, which is a YAML file
C: where you store all of your steps; steps in the sense of all the different steps you would like to perform as part of your pipeline. You do some pre-build steps, and then a build step, where you're building with the different tools and technologies or the programming language you use; and then you go for notification, SonarQube, or testing, and then deployments. So all the steps are specified in this pipeline definition file. Now, the picture you see shows all the steps in containers:
C: each of the steps is a container here. So you can see, for example, the build-java-8-mvn one is one of the containers; another is slack-notify, which is again a container; then SonarQube and UCP notify; and then, finally, deploy-to-k8s, where k8s is nothing but Kubernetes, you can think. And the last step, if you see, is the InfluxDB logging, which is what feeds the Grafana graphing and logging. So when this whole pipeline's steps run,
C: each of them runs as a container, and we have something called pre and post, which Larry will explain to you in detail. Basically, once the pipeline runs, everything gets logged to Grafana, and it creates beautiful graphs for us in Grafana related to the pipeline CI/CD itself. Now, the picture which is down here is the shared, reusable step container library. This is something we have actually created on our own, but there is open-source code available for this step container library.
C: What we have done so far is that we have 40-plus different containers available for T-Mobile users, and these are very generic containers created for different tools and technologies. The users can extend them: they can use the containers we have created as a base image, and they can extend those containers for their functionality in the pipeline. Now, in this picture,
C: this is a sample of the pipeline code, and when I talk about this sample, basically there is not much learning in it for the development team. Learning in the sense that the development team should understand how to create Docker images, and should understand how to read or update a YAML file; those are the only two things I expect from a development team when we say that they can extend and use this pipeline by themselves. In this we have three components. One is called the global section,
C: another one is called the global environment section, and the third one is steps, and that's all there is to this whole pipeline. In the global section you have to specify the application name: basically, which application or microservice you are going to build, you have to specify that name. And then you have the application version. As you see, we have branch-wise versions here, like the master at 2.4.0 in this case, while the feature branch has 2.4.1. The reason for having the branch-wise version
C: is basically this: if you have a master branch and you are creating a child branch or a feature branch out of it, then when you are working on the feature branch and your pipeline is running, it's actually getting built as 2.4.1. But when you start merging your feature branch into the master, you will sometimes get conflicts, in the sense that
C: if the feature branch also has 2.4.2, it would overwrite it right onto the master branch. In this case you can keep the versions separate for feature and master. That's how this app versioning works. Then you have global environments. Think about a situation where you are writing a program, or shell scripts, or any language you like:
C: you define a lot of environment variables which you want to use throughout your program, like when you have two different classes and you want to use those environment variables in both. Similar things you define here. Now, the next thing is basically steps. Steps are nothing but all the different activities you are going to perform as part of your pipeline, so you have to define them. Now, a step has a minimum of three components. First is the name, for example jar-build; I generally recommend people use dashes, to make sure
C: your UI looks good, instead of spaces. Then you have an image, which is the step container; this image basically contains all of your tools which are required to complete that particular step. In this case I am having a Maven build and I need a JDK 8, so that's installed in this image. And then I have a command section.
C: When I talk about this command section, it's basically nothing but a playground; it's like a VM for you, where you start writing all the different commands you would like to execute in that particular image when it starts up as a container. Right now the example is given with one command, but you can have multiple commands. Basically, if you understand the concept of a YAML file, when you see the dash it means it's an array of commands; you can write as many commands as you want.
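The definition file the speaker walks through can be sketched roughly as follows. The exact key names (pipeline, global, steps, and so on) are assumptions based on the talk, not the published POET schema, and the image registry paths are placeholders:

```yaml
# Hypothetical pipeline.yaml sketch based on the talk; key names and images are assumed.
pipeline:
  global:
    application_name: my-service      # which application/microservice to build
    application_version: 2.4.0        # branch-wise: a feature branch might carry 2.4.1
    environment:
      - MAVEN_OPTS=-Xmx1024m          # global env vars, usable inside any step container
  steps:
    - step:
        name: jar-build               # dashes, not spaces, so the UI reads well
        image: registry.example.com/build-java-8-mvn:latest  # container with JDK 8 + Maven
        commands:                     # a YAML array: add as many commands as needed
          - mvn clean package
    - step:
        name: docker-build            # next step: wrap the JAR in a Docker image
        image: registry.example.com/docker-build:latest
        commands:
          - docker build -t my-service:2.4.0 .
```

Each entry under `steps` spins up as its own container, which is why adding a capability means adding a step with a suitable image rather than touching the shared Groovy library.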
C: Third: all the environment variables which we have defined at the global level, you can basically utilize them. You can see, with the build container, that the environment variable we defined in the environment section of the global is being used in the image here. In a similar way, now you have another step: you want to do a Docker image build once you have your JAR file ready. You want to use the JAR file and create an image out of it.
C: Then you write the next step, and the step again has basically the name, the image, and the command. So you keep writing this, with all the step containers you are using in it, and your pipeline keeps growing. Whereas everybody here, I think, is very well versed with the pipeline library approach, which is Groovy code that we have to write:
C: this complete pipeline framework (and that's why we call it a pipeline framework) doesn't need to be changed, in the sense that unless you have a great new feature coming in and you need it distributed to all the development teams, only then do you go back to your library and change it. For the rest of the features, for example if you are going to add a JMeter step,
C: you don't need to go back to your library and add those dependencies into your Groovy code; rather, you just create a container, and that will help you extend the features and capabilities in your pipeline. So most of the things I was just speaking about are here. It's a framework where the pipeline execution is defined by each team. Basically, anybody in the development team can do it; you don't need a specific person or a specialist in your team just to work on the pipeline and extend the pipeline features.
C: No, any developer can do that in the YAML file. Easier and faster onboarding: this is proven with us, that for any application or any microservice, if you have all the information in hand, you can actually onboard the application onto the POET pipeline in less than one hour, and that's what we have been doing within our team. So, no big YAML or Groovy
C: scripts to maintain: yes, the same thing which I just explained, that you don't have to have a big YAML file. In this case we are also using YAML, and if you have, say, 20 different steps to perform, you will absolutely say that the YAML file is getting bigger; again, it's a big YAML. To solve that problem we have actually introduced templates, which is the next point: reusable templates. The templates, when I say that, mean basically that every step which you are writing into
C: the YAML file can be converted into a template, and the same template reused. Let's say, for example, you have 100-plus microservices. You build the same way: Maven build, Docker builds, and then you are using SonarQube and other different tools, but all the microservices are following the same way of building your pipeline. Now, if I used this particular YAML file alone, then I would have to check out a hundred different repositories and check in the same YAML file over and over again into them.
C: Let's say I've got another feature or capability that needs to be added into the template. What will happen? Again, I have to check out all of my hundred repositories and update my pipeline.yaml, which is the definition file. But what if I have a template? If I create a template out of it in a different repo altogether, and maintain all the templates there, and then include those templates into my main pipeline.yaml file,
C: what will happen is: tomorrow, if any change happens in capabilities and features, you will basically be doing that in a separate repository altogether, which does not impact your current pipeline. In some cases I have seen that if you change any pipeline-specific files, it starts a build, because there's an auto-build when you check in to a code repository.
C: It starts another build which is not required, because it's a pipeline-specific file; it's not a code change. But when you have a separate repository altogether to maintain these templates, it means we are not changing the whole pipeline.yaml file in your source code, and any changes to these files will not trigger a build of your whole activity. That's how these templates work, and I'm actually going to show you how these templates work and how you can include them easily into the pipeline.
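The template mechanism described here might look something like the following. The include syntax is purely illustrative, since the talk does not show the exact keys; the point is only that the shared steps live once, in a separate templates repository:

```yaml
# Illustrative only: the real POET template syntax is not shown in the talk.
# A shared templates repo would define the common Maven/SonarQube/deploy steps once
# (e.g. in a file such as maven-service.yaml), and each microservice's pipeline.yaml
# would stay tiny by pulling those steps in by name.
pipeline:
  global:
    application_name: my-service
    application_version: 2.4.0
  steps:
    - template: maven-service   # assumed include key referencing the templates repo
```

A change to the template then lands in one repository instead of a hundred, and, because it is not a check-in to the application repos, it does not trigger a wave of unnecessary auto-builds.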
C: The pipeline.yaml file is the definition file. Step containers: again, I already spoke about them. A step container is nothing but a specific container for a specific task. In our case we have a library for that, and you can use Docker to build these containers to include in your pipeline. Consistency of approach and application across teams: it's the same thing, because when you do the build pipeline, all the steps are similar; the same approach, the same consistency across all of your microservices.
C: How do you basically do the installation of the pipeline? I think we have just updated this: the pipeline engine masters, the infrastructure which we have built, which is a completely automated system for us to stand up a Jenkins. All the steps and the source code are available here for you to stand up the Jenkins, so if you start following these steps, you will be able to stand up a Jenkins within minutes, that's for sure. And then, how do you do the library setup?
C: You know that there is a plugin called Global Pipeline Library in Jenkins. You just need to have this code ready and then update the configuration in the global library section, and you will be up and running in your Jenkins. In our case, basically, what we have done is again automated along with our Jenkins: when we stand the Jenkins up, these things come pre-configured for us when the Jenkins comes up. Now, the how-to section: basically, it's getting started with the pipeline.
C: When you start working on the pipeline, you need two files. One is the Jenkinsfile, and the other one is the pipeline.yaml file. And I think all Jenkins users know that the Jenkinsfile contains where your library is: basically what you defined in the plugin just now, in the previous section.
C: You just need to provide those details there, and the Jenkinsfile should be part of each microservice in your repository. It never changes, unless you are actually changing a branch or any other information related to that; it doesn't change frequently. And then we have the pipeline.yaml. The pipeline.yaml is nothing but, again, the same thing: you have the global section and the steps defined within the pipeline.yaml file.
C: Now, what core plugins do we need? I think this came in between the steps, so I will just let you know: these are the four core plugins you need to set up the whole Jenkins and make the pipeline framework up and running. You don't need more than that. You can install more, like we have 16 of them, because we are looking for different other features in Jenkins to work with, like Splunk: we are logging our logs into Splunk.
C
Now we will talk about the different things you can write into the pipeline.yaml, and thanks to Larry, he has put a schema together. So if you're not sure about any of the components or any of the variables which you are writing into the pipeline.yaml (are they an integer type or a string type, or what should the length be, or different validations), I think you can go through the schema and you will understand more about it.
C
Now, we have different sections in the pipeline.yaml. One is the header, basically, so it has the version and the pipeline section here. Then you have the global section, where you define the application name and the application version. After that, we have global environment variables, where you specify the group of environment variables which you are going to use in your step containers. And then we have steps. So these three components will be part of the pipeline.yaml. Now let me walk you through the step section.
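Putting those sections together, a pipeline.yaml skeleton looks roughly like this. The layout follows what the talk describes (version header, global section with application name and version, global environment variables, then steps); treat the exact key spellings here as a sketch and check the published schema for the real ones.

```yaml
# Sketch of the overall pipeline.yaml layout described above.
version: 1.0               # header: schema version
pipeline:
  global:
    appName: my-service    # application name (key name is a guess)
    appVersion: 1.0.0      # application version
    environment:           # global env vars, visible to every step container
      - DEPLOY_ENVIRONMENT=qat
  steps:                   # the list of step containers, run in order
    - name: build
      image: gradle:6-jdk11
      commands:
        - gradle build
```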
C
Basically, what different components or variables can you use in the step container? Here you can see we have environment, condition, secrets and control, so I will walk you through these briefly. A minimum step container looks like this: you have a name; you have an image, which is the container that will spin up; and you have the command section. You can write a command, or you can pipe multiple commands together into a single one, or you can have a script to execute in this section.
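A minimal step container, then, is just those three keys. The fence below is a sketch using an off-the-shelf image; the key names follow the talk's description.

```yaml
# Smallest useful step: a name, the container image to spin up,
# and the commands to run inside it. Multiple commands can be
# listed, or piped together within one command line.
steps:
  - name: unit-tests
    image: gradle:6-jdk11
    commands:
      - gradle test
      - gradle check | tee check.log   # piping works inside a single command
```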
C
Now we have something called step-specific environment variables, which you can define within the step. It's like this: you have global environment variables, which you can use across multiple steps, and then you have specific environment variables which you would like to use within a single step, and that's what this section is about; you can define those environment variables here. Now, you see in this particular image tag I'm using something called PIPELINE_APP_VERSION. Wherever you see PIPELINE_ and an underscore, that's the naming convention.
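For example, a step-local environment block together with one of the built-in PIPELINE_ variables interpolated into the image tag might look like this (the variable spellings are sketched from the talk, and TARGET_REPO is a made-up illustration):

```yaml
steps:
  - name: publish
    # Built-in variables use the PIPELINE_ prefix; here one is
    # interpolated into the image tag.
    image: my-registry/publisher:${PIPELINE_APP_VERSION}
    environment:              # step-specific vars, visible only to this step
      - TARGET_REPO=libs-release
    commands:
      - ./publish.sh "$TARGET_REPO"
```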
C
These environment variables are already exposed by the pipeline library. They are not user-defined, but you can override them: as you start writing your pipeline, or in the command section, you can start overriding them, and that's possible. Then we have something called condition. Let me give an example of how we use a condition. In this case we have a when clause which says branch: master. The meaning of this, basically, is that this particular step will only execute when the branch name is master. So let's say you have this:
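The branch condition from that example, written out as YAML (sketched key names):

```yaml
steps:
  - name: deploy-prod
    image: my-registry/deployer:latest
    when:
      branch: master          # this step runs only on the master branch
    commands:
      - ./deploy.sh prod
```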
C
You are working on multiple features and you have taken a branch out for them. Now, with a single pipeline.yaml, you don't want to delete or update any of the steps in the one pipeline.yaml which got inherited from master into that feature branch. Say I just want to execute something to deploy to the QAT environment, but I don't want to do it in my feature branch, or I don't want this particular step to execute in my feature branch.
C
In that case, you can use the when clause, and that's where you are actually writing: only execute this step when the branch is master. Now, there are different ways to define that. You might want to write multiple expressions for it: I have a master and a release branch, and this particular step I will execute for both the master and release branches, but not for the feature branches. In some other cases, you can write feature/*.
C
Basically, that means this particular step is only going to execute when the branch matches feature/*. There is another way, to include and exclude as well. So in this one, basically, we have master and feature/*, but within feature you wanted to exclude a few branches from executing that particular step. So this is how, and I think this is one of the important things when you have parallel development going on and you don't want to execute every step
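Sketching the multi-branch and include/exclude variants described above (the exact key names may differ in the released schema; this follows the behavior as described in the talk):

```yaml
# Run on master and release branches only:
when:
  branch: [master, release]

# Run on any feature branch:
when:
  branch: feature/*

# Run on master and feature branches, but skip a couple of features:
when:
  branch:
    include: [master, feature/*]
    exclude: [feature/experimental, feature/spike]
```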
C
defined in the pipeline.yaml as part of your execution. Now, there's something called an environment condition: for a particular step, you want to skip that step based on the environment variables. So, there are the two types of environment variables we talked about: one is user-defined environment variables, and the other is the pipeline-exposed, or standard, environment variables. In this case you have a when clause and you can say environment, basically PIPELINE_COMMIT_MESSAGE: if the pipeline commit message is "skip ci", exclude it. So it means that particular step,
C
as soon as it sees that the message entered contains "skip ci", it will exclude that particular step from execution. In a similar way, you have user-defined environment variables here, which is the deploy environment: qlab or qat, basically. So what it will do is, this particular step will only execute if it sees that it is for qlab, that the environment variable is qlab. And then, in the next example, you can combine the user-defined ones; it's a little more complex, but you can have that kind of condition.
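Those two environment conditions could be written roughly as below. The matching syntax is a guess based on the behavior described (skip on commit message, run only for a chosen environment), so treat every key here as illustrative.

```yaml
# Skip this step when the commit message asks for it (built-in variable);
# the exclude flag here is a guessed spelling for "exclude on match":
when:
  environment:
    - PIPELINE_COMMIT_MESSAGE=skip ci
  exclude: true

# Run this step only when a user-defined variable selects the qlab env:
when:
  environment:
    - DEPLOY_ENVIRONMENT=qlab
```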
C
So, going further, let me start with the secrets. Now, we talked about step containers, and I think by this time you understand what we mean by step containers, right? For any step, let's take an example where you wanted to publish your artifacts into Artifactory or a Docker registry. In that case, when we maintain the pipeline engine, we usually store global credentials; those credentials can be used by anyone. But in Artifactory, or in the Docker registry, there are specific credentials required by each team
C
where they are authorized to place their artifacts. Now, those credentials either have to be put in at the folder level, but when teams are writing a step, how do we inject those variables into the step? That's what the secrets section helps to define. Now, I admit that right now the POET pipeline only supports two different kinds of secrets: one is username and password, and the other one is a secret token.
C
Basically, a single token, so only two are supported right now; we need to work on the rest of them, like SSH, if you have to use that, and there are various other credential types. So in this case you can see the source. The source is nothing but the credential ID in Jenkins: when you define a credential at the folder level or at the global level, that's the credential ID. The targets are the variable names where you store the username and password.
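So a secrets block maps a Jenkins credential ID (the source) onto environment variable names (the target) inside the step container. A sketch, with the target structure guessed from the description:

```yaml
steps:
  - name: push-image
    image: docker:stable
    secrets:
      # source = credential ID defined at folder or global level in Jenkins;
      # target = the variable names the username/password appear under
      #          inside the step container
      - source: team-docker-registry
        target:
          username: REGISTRY_USER
          password: REGISTRY_PASS
    commands:
      - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASS" my-registry
```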
C
Now they become variables within the container which you have specified here, and once they become variables you can use them in your command section to run your commands. That's why the secrets section helps us do the jobs which require credentials to run that particular step. Now, we have certain control options as well, such as timeout in minutes. This is particularly, this is one of my favorites, basically, because, being an administrator, or, you know, taking care of the whole infrastructure,
C
I need to make sure that the resource utilization is good for me. And for that: if a job starts running and it gets hung at some point, we sometimes don't know how long the job is going to stay hung; eventually we get a report back and we have to kill those jobs. So in this particular case we have put a limit of 30 minutes. I'm hearing an echo.
C
Okay, hold on guys, I'm hearing an echo.
A
None of the rest of us are hearing it; rather, you're sounding great. Although, if you would like, I would be delighted to interrupt you again with some more questions, at a point where you'd be okay with being interrupted. Yeah, okay. So one of the questions was related to parallelization, and I think you indicated that this is not parallelizable, but then Larry answered online that, oh hey, you can parallelize inside a container. Again, the fundamental concept here is a container. Is that roughly right?
C
I would like to answer that. So there are two kinds of parallelism we talked about. One: can we group multiple steps to run in parallel, in a single group, right? The second one is within a particular step container, which is container-based, so within a step
C
you can run something like a shell script, or a tool which can do parallel execution of the commands. So parallel command execution and parallel step execution are two different things. Parallel step execution, right now (I have a slide later to explain this) is a functionality gap: you can't yet execute parallel steps. But yes, you can do parallel command execution within your step.
A
C
Yes, it's Kubernetes-based for us, but actually, the way we have designed this whole automation for provisioning the pipeline engine, which is Jenkins, you can do it on your laptop too. So you can have Kubernetes on your laptop and you can install it there. So it's Kubernetes, but in essence you can spin it up on Kubernetes as well as on AWS using the ECS plugin, yeah.
B
Right, so just a little bit more on that for folks. Internally there are other teams responsible for platforms here, and Pivotal came out with their own Kubernetes platform. We went through about three Kubernetes platforms, Heptio and another one, and where they settled was the Pivotal Kubernetes. It's just Kubernetes, with Pivotal kind of putting some of their own UI on it. But everything that we do is from the perspective of using Helm charts to get the deployments going.
A
B
I think with a lot of the pipeline work we went through, we were doing it on a different container platform and it worked before that, and then, over last year and into the beginning of this year, things settled onto the Pivotal platform. But yes, it's been good to see that a Docker container is generally a Docker container: all the rules that people need to follow to do good design for containers, and security, and all of that, apply.
B
We showed the AWS, and it says VM, quote unquote. There are a few times where we've needed to have a slave, and something to run that was just more efficient running not in a container but on a VM: there might be a lot of files that needed to be persisted, or it took so long to download all the libraries. So there were some things like that that we've done, and you also have the ability for people to spin up their own.
C
So yeah, we were talking about a few other features: the control options within the pipeline, one of which is timeout in minutes. By default, any job which executes through this pipeline gets aborted if it doesn't complete within 30 minutes. But there may be jobs, or steps, that you think will take more than 30 minutes.
C
In this example you can see we have specified 120, so that particular step will not get aborted for another two hours; but after two hours, if the job still hasn't completed, then it gets aborted. And that's how you can specify it: if a step takes fewer minutes you can specify that, or if it takes more than 30 minutes you can specify that, per step. That's how this option is used, and it is very efficient because it helps us with resource utilization.
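As YAML, overriding the default 30-minute abort for a long-running step might look like this (the key spelling follows the talk's "timeout in minutes"):

```yaml
steps:
  - name: integration-tests
    image: gradle:6-jdk11
    timeoutInMinutes: 120   # abort this step if it runs longer than 2 hours
    commands:
      - gradle integrationTest
```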
C
Then we have something like continueOnError, which is basically this: when you design your pipeline, you know it best. If there is a step which is failing over and over again, but you would like to go on with the next step and proceed further, in that case continueOnError will help, for that particular step. So if this particular step is failing, and you mark it as true, then even if it is a failure, it will proceed to the next step executions. That's how continueOnError is used.
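A continueOnError sketch, following the behavior just described:

```yaml
steps:
  - name: flaky-report
    image: my-registry/reporter:latest
    continueOnError: true   # a failure here does not stop the later steps
    commands:
      - ./generate-report.sh
```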
C
Now, let me just go over the standard environment variables. There is a list of standard environment variables which you don't need to declare; you can directly use them in the pipeline steps. And then, if you would like to know more about the environment variables, there are more in the pipeline state JSON; there are hundreds of them. You can directly use them in your pipeline, and you can override the variables as you like, as you go. Next, we have a few pipeline control options.
C
So earlier I said you don't need to update this Jenkinsfile over and over again; it's a one-time activity when you place it in your source code. But there are certain operations which you might like to do, for example changing the log level. As of today, if you look back to when we started writing the pipeline library in Groovy, we utilized the Jenkinsfile heavily, and all of the features and capabilities we had were added there. So in a similar way,
C
if you have to change the log level, you can actually do that directly in the Jenkinsfile. Right now, by default, the pipeline works off a file called pipeline.yaml, which is the definition file. But suppose you would like to use a different YAML file (the name being different in your source code); then you have to come back into the Jenkinsfile and specify that there. Once you specify it, it will take the different pipeline
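A sketch of what those Jenkinsfile overrides might look like. The parameter names here (yamlFile, logLevel) and the entry-point name are illustrative guesses at the library's interface based on the behavior described, not confirmed identifiers.

```groovy
@Library('poet-pipeline@master') _

// Hypothetical overrides: point the framework at a differently named
// definition file, and turn up the log level for troubleshooting.
pipelineRunner(
    yamlFile: 'ci/my-pipeline.yaml',
    logLevel: 'DEBUG'
)
```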
C
YAML file, or definition file, and start the execution, start parsing it. Now, the last feature, which is very interesting and important, is templates. I already explained why templates are useful for us. There are two different types of templates you can create: you can have them located in the local repository where your source code is available, or you can have them in a remote repository altogether and then include them.
C
So in this case you can see this step, sorry, the steps we were writing in the YAML file. Now I'm converting those steps into templates: I have a templates folder, and in it a slack.yaml; this particular step is for Slack. I took it out from the main pipeline.yaml and I put it into a different YAML file, which is slack.yaml. That becomes my template now, and I'm keeping it in the same repository. Now, what happens is, when I start
C
writing my pipeline.yaml, the main definition file, which is in my code, I write the pipeline and I start the steps. I can still write steps inline as I like; along with that, I can include, with the include option, the templates which I have just written within the same repository. That's how you can include those templates in your main YAML file, and there are different ways of doing it.
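So the Slack step, pulled out into templates/slack.yaml, and the main definition that pulls it back in, look roughly like this (the include syntax and notifier image are sketched, not confirmed):

```yaml
# templates/slack.yaml -- the step moved out of the main definition
steps:
  - name: slack-notify
    image: my-registry/slack-notifier:latest
    commands:
      - ./notify.sh

# pipeline.yaml -- inline steps plus an include of the local template
pipeline:
  steps:
    - name: build
      image: gradle:6-jdk11
      commands:
        - gradle build
    - include: templates/slack.yaml   # pull the templated step back in
```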
C
You can even create a template for your global configurations; you can create a microservice environment-level configuration and put all of that into a template too. The shortest pipeline.yaml you will see is basically pipeline, include, and the template; that's the shortest one. So we recommend people use templates; a template within a template can go deeper, but we don't recommend going down that route.
C
The reason being: if you have a number of microservices using it, you don't want to change your pipeline.yaml over and over again when you add or update new steps, or new functionality or features, in your pipeline. So what you basically do is take the whole thing out into a template outside, and include that main template in the pipeline.yaml; then any further changes will happen in the template, not in the main pipeline.yaml, the definition file.
C
Now, let's talk about remote repositories. If I have a remote repository, how can I include that? So I have a remote repository where I am creating a template, here with the steps; you can include multiple steps within the same template as well, so that's also possible. It depends on how you want to utilize these templates; there is no hard and fast rule here.
C
Basically, when you create templates outside of the code repository, you have to use something called resources, and then you start adding the repositories where your templates reside. So again, as we understand it, it's an array, so you can include as many template repositories in the section as you like, and keep repeating it. If you have five different locations where your templates are residing, you can write all of them here.
C
In this case, you have to provide the template name; the URL, where the repository of the templates is; the label, which is nothing but the branch where they actually reside; and the credential ID, so that the pipeline has access to those templates. And then, as you start writing the pipeline steps, you include them by template name, and you can start including them like this. That's how you can include templates from remote repositories.
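A sketch of the resources array and a step that includes a remote template (the field names follow the talk: name, url, label, credentialId; check the wiki for the exact spelling, and the repository URL is a placeholder):

```yaml
pipeline:
  resources:                      # repeatable: one entry per template repo
    - name: shared-templates      # name you reference from the steps
      url: https://github.com/example-org/pipeline-templates.git
      label: master               # branch where the templates live
      credentialId: github-read   # Jenkins credential the pipeline uses
  steps:
    - include: shared-templates/deploy.yaml   # template from the remote repo
```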
D
You know, it works; they're all very, very similar, and in fact we even thought about writing converters to go to and from different formats, if people wanted to play around with different pipelines. So, some design goals for the code itself: one of the big goals was for things to be really easy to test, making things testable, and so we ended up
D
really isolating a lot of the Jenkins functionality and, as Ravi mentioned, limiting the plugins we use, just to make things really testable and to be able to test the core code itself quickly. We also wanted to structure things so that we didn't have to change the core code very often, so the main extension point really is containers, adding new containers. We lean on this very heavily, and it also means we're not the bottleneck here.
D
The execution engine is totally generic, and so this was another big goal, and something we had to push ourselves on a lot: the pipeline engine doesn't know anything about the containers. Like I said, you can use off-the-shelf containers; for example, we use the Gradle container directly from Docker Hub, which is totally fine. And, you know, this took a little bit of effort, right?
D
The containers have access to the pipeline state information, so this is something we do expose. Like Ravi mentioned, there are a lot of built-in variables, environment variables, that are all in the wiki, and besides just the kind of simple information you would see in a normal Jenkins job, we also have one of the variables point to a file that has the total state of the entire pipeline. So if you want to implement a container that does things like, you know, update a dashboard, or send a Slack or email notification,
D
you have the full information of the pipeline state available in a JSON file: each step, its status, what the steps were, and everything; that is all available to a step. Also, the steps share the same workspace, and that was a really important way to share information between them,
D
you know, from step to step. If you think about it, that's kind of normal for a pipeline, right: you're building code and then running tests, and obviously it's helpful to have access to the same workspace. So that's another way you can share information between steps: through the workspace. Okay, so the implementation: it's implemented as a Jenkins shared library.
D
So if you've used one of those before, it's basically, you know, doing the same thing. Like Ravi mentioned, it's Groovy, the Groovy CPS code, but a very small core. Like I said, we wanted to push functionality to the containers as much as possible, including things that you would normally think of as built-in. And we thought a lot about built-ins: we had a lot of debates internally about how to implement things like notifications and reporting; it seems like those should be built into the pipeline.
D
But again, we pushed ourselves to make those just normal containers like everything else, and that's how they work. We use very few plugins, to keep things, you know, sane and testable. Like I mentioned before, testable code is a priority; currently there are 17 unit tests, and it's always good to have more.
D
Also, the pipeline builds itself. If you look in the repository there's a pipeline.yaml file that we use; it basically just runs the tests, and there are some Slack notifications, I think. Just making sure we had a lot of exposure to the pipeline as we were building the pipeline, and being able to test the features
D
right away, was important. So, we'll also cover how to extend the pipeline functionality. As I mentioned, the main extension point really is adding new containers, and so obviously things like build, test, deployment, notification: those are all there, and reporting and notification are there too. That ended up being really useful, because even for things like Slack notifications we ended up with three or four different ways to do it, because some teams, you know, just want a really quick, simple Slack message.
D
Like, hey, something didn't work, go to this link and see what happened. Some teams want, as each step completes, to update a Slack message, so you get step-by-step progress in real time. We even added kind of a Slack bot to handle some of this functionality, and so I think keeping things separated this way ended up being really flexible. We didn't have to go back and modify the core pipeline code because one team wanted different notifications; it's all totally extensible.
C
So this is how, and you can search for any of the containers: you just type it in, you will get all of the build types and you'll get the information. So yeah, circling back to you, yeah.
D
I think, yeah, thanks, that's good, yeah. And so a lot of those containers were built by our team, but a lot of them were also contributed by other teams into this bigger pool, which I think was really awesome. And yeah, that website is another shout-out to Drone; we kind of stole this idea from them. They have a similar plugin index for their plugins, which work in the same way, and so we took their
D
design: they have an open-source website for it, and we used the same thing. And in fact, a lot of those Drone plugins (like I mentioned, there's nothing special about these steps, they're just any container, really) can be used in our pipeline too. I think in our wiki we have some examples, like for Slack, using the Drone Slack plugin just as-is; there's nothing special about it.
D
We have a template that's like, you know, "build a standard container", and that way everyone working on these containers can build them in the same way, in a repeatable way. And it almost gives you a mini API into more complex logic. So you might have a template that's, say, "notification", and internally it might call a Slack bot and then maybe do an email and do something else, and you can just expose a very simple interface for developers to reuse.
D
Okay. So another extensibility point is this idea that Ravi touched on a little bit earlier, about pre and post steps. As we were building this out, again, we wanted to keep the core really small, but there was some functionality we wanted to run every time. So originally, you know, we had the reporting here; this shows the Influx reporting
D
we ended up building, and that's implemented as a container, but we had to add it to every single pipeline, and that got to be a little burdensome, even though you can, like I said, simplify it with templates. What we ended up doing was, at the Jenkins administrative level, you can define kind of a global template repository, and it will automatically load some pre and post steps. These are kind of mini pipelines, and so there is something special about these; it's almost like,
D
say you want to check that the images in there had been scanned by an internal scanner: you can implement that as a pre step, and then you can fail the pipeline before it even gets to the user's code. And for the post steps, if you're going to do special reporting, like we did for Influx or something like that, you can do that too. One super thing: these are all, again, implemented as pipelines, and so all the stuff that Ravi was talking about for a normal pipeline applies. It's all container-based, it's a list of steps,
D
you can use environment variables, use conditionals; it's all the same. That was another thing: we wanted to keep the syntax small so that there aren't a lot of concepts, but we reuse them in different places. It's very flexible, you can do a lot with it, but there aren't a lot of concepts you really need to know to do any of this.
D
So again, if there's something you want to add, everything is open-source, and like I said, we have a decent test suite already, so you can feel confident getting a PR in, and as long as it passes the tests, and has new tests for the new functionality, that would be great. I know a lot of people are asking about doing parallel builds; that's something we don't have. We've talked about it.
A
Larry, Larry, that's a great thing for me to hear. Okay, this is available open source; your team, the team at T-Mobile, is continuing and willing to listen to pull requests, evaluate them, have discussions about them. That's really amazing. Any concerns you have, or any things where you'd say, oh please, don't go this direction or that direction?
D
I'll let Martin and Ravi answer that too, but I think one thing is: if it's something like, for example, parallel steps, or something that will be a big change, I think it'd probably be helpful to have a discussion about it first. Because, for something like parallel steps, for example, the reason it's not there right away is that it seems like an easy thing to say,
D
"let me run these steps in parallel", but it's actually very complex, right, if you think about it. Because what do you really mean by that? Do you mean, I want to run multiple steps on the same agent? Or do you mean, I want to have different agents, like in the Kubernetes cluster, running steps in parallel, in which case the workspace is no longer shared? How do you merge the workspaces, and how do you kind of branch off and fork work? And so it can be very complex, stuff like that, right.
B
You know, something else I'll just mention: definitely familiarize yourself with the pipeline, the concepts, and run it a bit. One of the things that we did run into is that some teams would say, "hey, we need the XYZ plugin added in, we really need this." The answer was no, we'll do that with a step container, and you implement a step container. But it's sometimes hard to get out of that habit of, "oh, I'll get a plugin,
B
I'll add some more Groovy code into my pipeline to make something happen." And so getting used to that took some folks a little while, but once they got it, then they were like, "oh, rock and roll," and they were fine. But that would be another thing that I would suggest: definitely familiarize yourself with the power of the templates, and some of the conditions. I saw some great questions being asked about, "hey, what do you do with errors," and Ravi kind of touched on that a little bit.
B
We do have these when conditions and things, so that's definitely a good place to start getting used to it. Familiarize yourself, and then go under the covers a little bit: take a look at that code that Larry developed, get yourself in there. And then, yeah, absolutely, like Larry said, post an issue or other things for questions, and we can talk about some of those things to help folks if they've got those types of inquiries.
D
You know, one other thing to keep in mind is that, if you're familiar with Jenkins shared libraries you probably know this, but if not it may be worth mentioning: it's Groovy code, but it's not really Groovy code, because it's executed by Jenkins using this continuation-passing style. And so if you look at the code you might notice there are some kind of funny things, like we are using simplified for loops, like "for i", instead of, you know, a more modern for loop.
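For instance, closure-based Groovy iteration, which would be natural elsewhere, can misbehave under the CPS transformation, so the code sticks to the plain indexed form. A sketch of the pattern (the runStep/steps names are illustrative, not the library's actual identifiers):

```groovy
// Under Jenkins' CPS execution, classic indexed loops are the safe choice:
for (int i = 0; i < steps.size(); i++) {
    runStep(steps[i])
}

// whereas closure-style iteration, like steps.each { runStep(it) },
// historically required @NonCPS (or newer Jenkins support) to behave
// reliably inside CPS-transformed pipeline code.
```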
C
Just let me finish this slide, and I think we can take the questions and answers for the rest of the time. Thank you. Okay, thank you. So I just want to touch on the pipeline's limitations in functionality. One of them we already talked about: parallel step execution. Also, right now it doesn't support Windows and iOS builds, but there is a way around that we have used for the iOS builds.
C
Basically, it's all about containers, so within the container, if you do SSH and you can reach the build server, you can deploy, or take care of the iOS builds, in that case. But right now, for Windows, you have to do a hard-wired connection of the agent with the Jenkins and then use a freestyle job, or the other job types, the way you currently write them. And then, just to summarize it a little bit:
C
the value we have added at T-Mobile using this pipeline is not only one component; there are different components which have really helped us. The infrastructure which we have built, the pipeline library or framework which we have built, and the kind of support model which we have within the organization: these have saved us a lot of money and added value for the company, along with the customer focus in our approach. So these components have
A
Very
much
Robbie
soon
so
I've
got
a
couple
of
questions
that
have
come
to
my
mind.
I
wanna
I'm
gonna
give
personal
bias
first,
so
forgive
me
putting
myself
at
the
top
of
the
queue
to
ask
my
questions,
so
you
mentioned
customer
support
as
a
challenge
where
their
techniques
you
found,
as
you
were,
helping
your
customers
adopt
this
that
were
helpful
and
others
that
you
found.
Oh,
we
thought
would
be
helpful
and
ultimately
were
not
helpful
in
getting
adoption.
C
So yeah, that's a really good question, actually, because I particularly mentioned that we were customer-focused. All the features you see in this pipeline, what we designed: while we were doing the design discussions, even Martin, Larry and a lot of other folks from our team, we would usually talk about it. Okay, we are designing these particular features, and we keep the customer in focus: whether this is a simple approach for the customers, the people, the developers who will be using it.
C
If it's not simple, then we have to go with an alternative approach. So that's one thing we do: the whole design was completely focused on the customers and their use cases. The second thing is that this pipeline was so simple that it was easy for me to get teams trained. What I usually do when we start onboarding people onto this particular pipeline — because we are a centralized team, and I don't think I've mentioned it, but we are a team of about seven people, right?
C
It is basically collaboration, in the sense that the development teams start creating their own containers and actually contributing to the library itself, and there is a process we followed. When anybody wants to contribute to the step container library, we have a review process for that: our team reviews the contribution, and then we merge the code into the step container library source code, and that's how it becomes available for everybody else. And we wanted to make sure of that.
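A contributed step container, at its simplest, is just an image whose entrypoint performs one pipeline step. A hedged sketch — the base image, tool, and script name are illustrative, not taken from the actual T-Mobile library:

```dockerfile
# Illustrative minimal step container: an image whose entrypoint runs
# exactly one pipeline step. Base image and tooling are examples only.
FROM alpine:3

# Install whatever tool this step wraps (here just bash and curl).
RUN apk add --no-cache bash curl

# The step script reads its parameters from environment variables that
# the pipeline engine injects, then does its one job.
COPY step.sh /usr/local/bin/step.sh
RUN chmod +x /usr/local/bin/step.sh

ENTRYPOINT ["/usr/local/bin/step.sh"]
```

Keeping each image to one responsibility is what makes the review process above tractable: a reviewer only has to understand a single step's inputs and outputs.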
C
So on a quarterly basis we actually reach out to the whole set of teams and get their feedback on whether the pipeline is fulfilling their requirements. As Martin mentioned, our customer satisfaction score — the CSAT score — was really great, approximately 4.9 out of 5, because the teams were well trained, the value added was great for them, and teams were able to execute the tasks by themselves.
B
I'll just add a little bit more there. As I noted, one of the things our team was responsible for, and still is, is helping educate teams and get them into some of the newer technologies. We are a large organization working very hard on a digital transformation — moving that ship, turning it in a different direction — and part of that was, for example, the containers and step containers.
B
We ran into a variety of teams who had not previously engaged with or used containers, so this gave us the opportunity to educate those teams on using containers. Some teams were awesome and could roll with it right away; others we just needed to educate. But this was important for us: not only did we do it for the pipeline, we did it for the organization, because we really wanted people moving to container-based development.
B
It gets you away from the large legacy systems and platforms that we've been migrating away from, so those were all additional benefits. Then things like our documentation continued to get tuned, as did how we train. Every time we'd go to the teams and talk, we'd get feedback — oh, okay, we didn't do this well — so when we talked about the technology, we really had to be open to them as well.
B
We had to be open to what the customers were telling us. I always compare it to sales: a key thing in sales is what happens once someone tells you no while you're trying to sell them something — and this was a sales situation, we were trying to sell them on our pipeline; it wasn't mandated or anything like that. If they say no, then why are they saying no? What is behind that? Sometimes it's as simple as, well —
B
they already had a pipeline and, as they viewed it, we were taking away their job. You might run into that, but other times it might be "this is going to create more work for me," or they just didn't understand. So we spent the time to talk to them, and that really shaped how we continued to talk with teams, got better at it, improved it, what we did for our documentation, and so on. We got some really good results from that.
A
Thanks very much. So we have a question related to your Kubernetes environment. Can you give us some hints about its relative size? How does it scale out for you? How do you watch it, those kinds of things? Is that part of your team, or is there some other organization that actually shepherds the cluster?
C
That is partly our team. The Kubernetes clusters themselves are managed by the IT team; basically, they host the clusters and we are users of them. In the Kubernetes clusters we have two or three namespaces, and the allocation is based on usage. I can't tell you the exact figures right now — whether we have 500 GB, or what the CPU limits are.
C
I think we did some calculations on how many pipeline engines we can host in a particular namespace; that calculation is not on top of my head right now, but yes, we did it. The one thing I can tell you is that the Jenkins home workspace, which needs storage, was less than 20 GB for each pipeline engine, because we don't store any artifacts in the Jenkins home. Every artifact that gets built goes into Artifactory or the Docker registry. As for the logging —
C
we don't do a lot of log rotation storing logs into the Jenkins home either; the logs actually get stored in Splunk. So these are the different small techniques, you could say, that we have used to make good use of the resources. So yes, we don't use a lot of storage. As far as I remember, right now we have 14, or a maximum of about 15, pipeline engines — all Jenkins — running in one namespace.
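Allocation of the kind described — a handful of pipeline engines sharing a namespace with storage and CPU caps — is typically enforced with a Kubernetes ResourceQuota on the namespace. The numbers below are illustrative, not T-Mobile's actual figures:

```yaml
# Illustrative Kubernetes ResourceQuota for a namespace hosting several
# pipeline engines; all numbers are examples, not T-Mobile's allocation.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pipeline-engines
  namespace: poet-pipelines
spec:
  hard:
    requests.cpu: "32"
    requests.memory: 64Gi
    requests.storage: 500Gi      # e.g. under 20Gi of Jenkins home per engine
    persistentvolumeclaims: "20"
```

With a quota like this in place, the cluster rejects new pods or volume claims that would push the namespace over its budget, which is what makes the "how many engines per namespace" calculation enforceable.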
B
Yeah, I'll just add a little bit more there to what Ravi was saying. We would have to go look up the exact CPU allocations, but as noted, we spin up those individual pipelines for teams. If we start pressing up against how many CPUs we're allocated, the good thing is that the step containers, and the agents that help initiate those step containers, are all dynamic: they come and they go, so you only need them on demand.
B
So the advantage to us has been that we don't consume huge amounts. The main masters have to stay there running, and it's been interesting to see some of our solutions kind of just idling away. If you ran a full CI/CD setup where something is being built every few seconds — we're not at that level, so it hasn't been egregious. But if somebody is really pushed for capacity, we can find a little bit more; we just work with the platform team.
B
If we really feel like we're pushing up against the limits — we have monitoring tools that they provide us as well — then we can always ask for more, and they've been great to work with. We were one of the early teams as they came online, we've always been engaged with them very closely, and they've been very supportive of helping us.
D
Yeah, so we do have a lot of unit tests. We also have — it's not part of this project, it's a separate project — some integration tests that go through some pipelines, do full builds and deploys, make sure the deploy succeeded, and then make sure the metrics end up in the database and everything worked out. So internally we do have a bigger integration test suite that we also run, and we try to be really thorough.
D
A
Thank you. Now, how are you handling Docker container security concerns — for instance, somebody chose an outdated Docker image, or decided to build on a Java version that is now six years old or something? Do you have systemic checks, or is that left to teams to decide how they deal with securing their own code?
B
That's a great question. Generally, right now there are T-Mobile guidelines. They are ever evolving — it's something our security organization has been working hard to update and build out — and there are certain requirements and certain scans that teams are expected to run. There are actually step containers we have where additional logging and information is captured to say, hey, we ran these things, here's all the data; so if somebody is running those scans, that evidence is also being captured.
B
Those are the T-Mobile mandates, but generally, as we see it, it's not our job to be the enforcer — though the pre and post steps can be a spot where those checks can be injected. Certainly there are some spots where, if we really need to know what the pipelines are doing, we can capture some of that behind the scenes. But we follow the T-Mobile guidelines: we generally operate from a perspective of trust-but-verify, and the verifying is really what the security teams do in audits and things like that.
B
If we saw something egregious while we were helping somebody, we might say, look, this isn't good. Our own pipeline and our own containers — certainly the ones we publish in the library — go through scans, and we make sure they're adhering to the appropriate standards. So that's how it works: we really leave it to the teams to do the things they're supposed to do within the T-Mobile guidelines.
A
Thank you, thanks very much. So — there was some allusion to this earlier — are there any substantial problems you found by choosing to use containers as the key element of your pipeline? It appears that you've built something more powerful, to my mind, than simple Jenkins "execute some code in the pipeline"; you're really using containers as the steps. Were there things where you said, oh, but this thing we can't do that way — and if so, any insights you gained from that?
B
Yeah, the one I could jump to — we've touched on a couple already, when Ravi mentioned the iOS and Windows builds, and the whole thing about the AWS ECS capability. Where we really did notice certain things — and Larry can jump in and correct me if I miss something here — is that sometimes when you're doing a Maven build, you have a lot of libraries being pulled in.
B
You've just got this big Java build with all these libraries that need to come in, and if you didn't have a lot of that stuff preloaded, then depending on the networking you could run into challenges with all the dependencies being re-fetched every time. So there were some situations where we did do some work.
B
We said, okay, there might be cases where we need a static VM, or to use ECS — that was the AWS capability for those — because ECS is a way to create a server and bring up what you need on it dynamically, which is awesome. We don't have quite that level of automation in-house; we have tools that do some of it, but look, Amazon and Microsoft have put a lot of money into building those types of automations.
B
So we looked at leveraging that. Where somebody needed more than that — having that cache available to them — we could spin it up, it would load up, and then they could do what they needed. So that's at least one case I can think of where containers get you a long way, but once in a while we had a few challenges; and usually we can also work with teams to revamp how they build their containers, for example setting up some of those dependencies ahead of time in the container image.
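Pre-loading dependencies into a build step container, as described, is commonly done by resolving them at image-build time so that the dependency layer is baked in and cached. A hedged sketch — image tag and layout are illustrative:

```dockerfile
# Illustrative sketch of baking Maven dependencies into a build step
# container, as described above. Image tag and paths are examples only.
FROM maven:3-eclipse-temurin-17

WORKDIR /build
# Copy only the pom first so Docker caches the dependency layer
# independently of source-code changes.
COPY pom.xml .
# Resolve dependencies and plugins into the image's local repository.
RUN mvn -B dependency:go-offline

# At pipeline runtime the step mounts the workspace and builds without
# re-downloading everything over the network.
CMD ["mvn", "-B", "package"]
```

One caveat worth noting: `dependency:go-offline` does not always capture every plugin artifact a build needs, so a fully offline (`-o`) build may still need the occasional network fetch on first run.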
C
It took us almost a month to solve this problem — how the source code was used, and how to share across builds those Maven libraries Martin talks about. So we actually went with AWS ECS, which we can spin up from the pipeline engine. We utilized that VM to spin up the instances, and that really helped us.
A
Thank you. So one of the questions that just came in recently: the individual had arrived late and didn't see anything in terms of how you do checkouts and other artifact management. Do you have sample Dockerfiles that you are sharing online that represent the steps of what you're doing?
C
So, how we do checkout: it's a Jenkins pipeline library, so the checkouts are done through Groovy — the Git plugin, I think; Larry, correct me if I'm wrong. As for the Dockerfiles you're asking about, the step containers are internal to T-Mobile and have certain T-Mobile-specific information in them, so it's hard to share them. But we will check internally how we can make them more generic for outside people so those containers can be shared as well.
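For checkout specifically, a shared-library step built on the Jenkins Git plugin looks roughly like this — a minimal sketch; the repo URL and credentials ID are placeholders, not T-Mobile's:

```groovy
// Hedged sketch of a shared-library checkout step (vars/gitCheckout.groovy)
// using the Jenkins Git plugin, roughly as described above.
def call(Map cfg) {
    checkout([
        $class: 'GitSCM',
        branches: [[name: cfg.branch ?: '*/master']],
        userRemoteConfigs: [[
            url: cfg.url,                     // e.g. 'https://git.example.com/team/app.git'
            credentialsId: cfg.credentialsId  // e.g. 'git-ssh-key'
        ]]
    ])
}
```

A pipeline would then call `gitCheckout(url: '...', branch: 'main', credentialsId: '...')` rather than each team wiring up `GitSCM` by hand.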
B
That's something we've actually been talking about over the last few weeks. When we look at what we put into open source, there were a few more instructions we could provide, and we're also looking at what else to bring out. As Larry noted, we borrowed the Drone code — frankly, why build a whole new UI for managing our containers? We did add a little nuance: take, say, a project with a bunch of repos in it.
B
People could then ask for a repo where they want to put their step container, or submit to us that they have one ready to go that they'd like added. We kept a kind of master list in another repo, which allowed us to build the documentation with the links and all the information for those containers.
B
So we talked about at least getting some of ours up there as examples. But as Larry noted, if you start with just a very basic hello-world container and try some of the parameter passing — and the examples do show how to do all that — it can work for you quite well. And again —
D
Like the public Maven container or the public Gradle container — those all work fine. Any containers you're using will work, and this is a common approach. Like I said before, I haven't tried the Google Cloud Build containers, but I'm sure those would work, or Amazon CodeBuild's if you're using those — it's all the same approach, right? We're starting the container and running commands in the container.
A
Thank you, thanks very much. So monitoring is one of the questions that was just raised. Are there things you have been doing for monitoring where you'd say, hey, others should consider this technique? And are there things you've learned from your monitoring systems? Tell us a little bit more about what you monitor and why.
C
So we have different tools we are using. One of them, from the pipeline perspective, is Grafana with InfluxDB, which runs as part of a container itself — covering the high availability of the pipeline and different metrics about all the pipelines. So that's the Grafana we are using, and it's in a step container. And then, apart from that, we have AppDynamics.
C
We use that to check the traffic and the pipeline engines. We also have automation around it: because we have 28, or close to 30, Jenkins instances running in the company's space, we keep hitting their URLs, and when we get a 200 OK we know they're healthy; otherwise we put it on a Slack message. So we utilize Slack messages whenever there is any problem — disk space issues, or an instance not responding.
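The URL-polling automation described might look roughly like this. The master URLs are placeholders, and only the polling and alert-building logic is shown — in production the alert lines would be posted to a Slack incoming webhook:

```python
# Hedged sketch of the health-check automation described above: poll each
# Jenkins master URL and turn any non-200 response into a Slack-style
# alert line. The URLs here are placeholders, not T-Mobile's.
import urllib.request
import urllib.error

def fetch_status(url: str) -> int:
    """Return the HTTP status code for a master's root URL (0 if unreachable)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
    except OSError:
        return 0

def build_alerts(statuses: dict) -> list:
    """One alert line per master that did not answer 200 OK."""
    return [
        f":warning: {url} returned {code or 'no response'}"
        for url, code in sorted(statuses.items())
        if code != 200
    ]

# Example with canned results; a real run would call fetch_status per URL
# and post build_alerts(...) to a Slack incoming webhook.
statuses = {"https://jenkins-a.example.com": 200,
            "https://jenkins-b.example.com": 503}
print(build_alerts(statuses))
```

Scheduling this as a cron job (or as a pipeline step itself) gives the "keep hitting the URLs" behavior described above without any manual checking.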
B
You can always do better on your monitoring, but those are the components that all come together to form how we monitor and watch. There's information about how often our pipelines are running, how many steps ran today, things like that — that's the Grafana side Ravi talked about. AppDynamics gives us more of the performance information about how things are running; if you're not familiar with it, it's a third-party product that our organization uses.
A
Thank you, thanks very much. I think we've settled most of the questions. I've got at least one more — and you're welcome to say "I refuse to answer." The question is: have there been mistakes you've made where you think other people should learn from your mistake? Sometimes we make mistakes.
C
Definitely. I will start, and then Martin and Larry can add to that. It's been like eight years that I've been working with Jenkins, and I have made a lot of mistakes. I made a mistake when I started using a lot of plugins in my Jenkins: my masters would go down, and I'd be scared about what would happen the next day and whether the teams would be able to use it.
C
So over time we have learned that we should avoid using a lot of plugins — that's one thing we learned — and also: don't use a lot of Groovy code to extend your pipeline. That's how this pipeline came into the picture. It is the product of all the learnings we have had so far — the birth of this beautiful pipeline and the architecture we came up with.
B
To build on what Ravi is saying a little bit: look, all of this is the fruition of learning from our mistakes and from where we could do things better, and we continue on. Even within this project we had spots where we challenged ourselves; often for us it was how we were engaging with the development teams, or something like that. So we're always checking — that idea of being honest about where your issues and biggest challenges were, and why this came about. Ravi,
B
you can touch on it too. I got frustrated because we wanted to be container based, but some of the developers on our team would by default just go back to writing and extending the Jenkins code, and it's like, no, no, this has all got to be abstracted. It's a little bit harder a concept if you've been working in just the pure Jenkins world; you've got to shift your mind around it. So I said we need a reset, and I actually came across Drone.
B
I finally learned about Drone, went to Larry and said, hey, check this out — look at what this does. What if we go down this path? And it just rolled: we actually got the initial core of this running in two months, with people able to start using it, and then over another four, five, six months we continued to get the other enhancements in, and things like that.
B
So certainly, even in some of the code and how we did things, we were constantly challenging ourselves — I would say even to date. Our monitoring isn't quite as strong as we'd like it to be; we'd like to dial that up a bit more. But again, we just keep learning from every piece. This is the culmination of a lot of our other attempts and where we were, and we just keep working on that idea of continuous improvement. Larry?
D
Yeah, Martin summarized it really well, I think. The whole project is really about trying not to repeat the same mistakes. I guess I'll tack onto Ravi's point about plugins: with the shared library, early on we ran into some problems where we were trying to get the library running on different Jenkins instances that had different plugins installed — even plugins we weren't using. There would just be Java classpath issues where one thing was using
D
some version of some Java library that another instance had a different version of, pulled in through different plugins, and those ended up causing issues. I think the other big one is that complex logic should be testable — it's something we focus on a lot. Someone made the point that we could do all this in a Jenkinsfile using Groovy; that's true, but I don't know how many people have tests for their Jenkins pipelines — probably very few.
D
It's not very easy to test, so try to make things as easy to test as possible; having that be a goal is important. I guess another thing is to have things documented. That became a big focus: whenever we add a new feature, we make sure we have documentation for it, and we don't have things that are kind of hidden or secret. We try to have everything documented.
A
Much
so
Ravi,
Martin
and
Larry.
Thank
you.
Thank
you
for
taking
your
time
with
us.
We
so
appreciate.
Are
there
any
concluding
remarks
you
have
I
think
we've
answered
all
the
questions
that
were,
on
my
mind,
the
questions
from
our
audience.
Any
specific
things
you'd
like
to
do
before
we
close
our
session
I,
don't.
B
I don't know that I have anything else myself, other than that we really appreciate the opportunity. Obviously we open-sourced this because we felt like, hey, let's see it out in the wild, let's see if this has value for others out there, and see what feedback we're getting from the broader community as well. So again, we just really appreciate it. I know people's time is very valuable, and we really appreciate everybody coming in and listening in on this.