From YouTube: Using Jenkins and Spinnaker to Supercharge Your Deployment Pipelines - Isaac Mosquera, Armory

Description
Just because you've decided to use Spinnaker for CD doesn't mean you have to throw away all of your existing DevOps tooling built around Jenkins. Spinnaker treats Jenkins as a first-class citizen and has native integrations to improve your software delivery process. In this talk we'll review how Spinnaker can add automated deployment verification, one-click rollbacks, deployment windows, and deployment notifications without writing additional code and scripts.
So, good morning, and thank you for joining us today. About three to four months ago I was talking to a very large bank. We normally do these workshops and exercises with them to really understand the kinds of pains they're going through, and in this room was myself and their VP of engineering, a director of engineering, DevOps folks, product folks, and individual contributors. I asked them a question that they really just couldn't answer, and to be fair.
The second part of that is that the process, as we defined it a year ago, looks very different today, because we're always iterating, we're always improving. And where does that get codified? In our heads. And so as people leave the company and new people enter, answering the question of how this piece of software gets into production is actually rather impossible without having meetings or going to a wiki page, and even that won't have a full answer.
So we're in this room, and they don't have the answer, but we have the workshop and the exercises to try to get the answer. And of course we start with something simple like this, which is: how does stuff get into production? Oh, we'll go through this thing called dev, and it progresses until it gets into production. But the reality is that it includes people, like developers, like the QA who's running the integration tests. It also involves external systems, like Git.
We actually have development environments; we run the integration tests in the development environment. As we go to stage, we run the integration tests again. It turns out that we uncovered that there's PM approval. Now, the way that they were managing this process here, the PM, QA, and DevOps approvals, was through spreadsheets, through JIRA, and a bunch of meetings. And so, you know, that's software delivery by spreadsheets, and I think we can do better than that.
After the approval happens, they went to update JIRA, and then they would wait for Tuesday. And when you ask them, why Tuesday? Well, that's just when we do deployments. Probably a lot of us in this room have an arbitrary day when we do deployments. I'm not sure why Tuesday is better than Wednesday or Thursday; they just happened to choose that date, and probably the person who chose that date no longer works at that company and has moved on. So nobody there knows why, but they have it there anyway.
So what starts off as what we all think of as a simple deployment script to a particular environment is actually more like this: talk to your manager, talk to that other product person, get the approval of product, do a security scan, and then update JIRA. Software delivery is an amalgamation of all of these scripts and a lot more. And so this is the exercise; this is us actually running through it. We used little sticky notes with them, because they don't really know their own process.
Somebody will put up a sticky note, and then another person will go ahead and move it to say that this isn't actually how it happens. It allows us to be fluid: we erase the lines and we draw them to new cards, and we do this process together until we have a really full pipeline that tells us the full story about how software gets from a developer's laptop all the way into production.
Here's a couple more pictures. It turns out, and I laugh a little bit, because when they told me they just deployed into Amazon, as we dug in they actually deployed to GCP as well, and to a HIPAA environment. I thought it was odd, because they didn't tell me upfront that that was the case; it took a lot of probing to really get to understanding what happened. They also deployed on Kubernetes as well; that was a vague mention at the beginning of the conversation. And so we end up with this diagram.
So here is what it looks like in the end. Even still, this is a little bit abstract. If you notice, we've grouped the AWS, GCP, and HIPAA environments all into a single box, but that isn't actually how it happens. In each cloud we actually deploy to multiple regions in a phased deployment, doing canary rollouts. So it's actually a lot more complex than what you see on the screen, and that's the truth.
That's the complexity of the world that we live in when it comes to software delivery, and this is the thing that Spinnaker goes to solve: not deployments, but delivery. Delivery begins when somebody starts coding on something and wants to see it in an environment, and it doesn't end when it gets into production, because after it gets into production we still have to monitor those things, and those have to feed back into the way that we do deployments and the way that we verify that our deployment was okay. And it involves security.
As Chrissy was saying earlier, those are really important aspects of our deployments, and they need to be integrated into what we call software delivery today. So, a little bit more about Spinnaker: we have an extremely active community. We have over 400 contributors, mostly from the cloud providers themselves, like Google, like Amazon, like Microsoft. We have over 100 commits a day, and we are growing our community year over year.
It's used across hundreds of enterprises; these are just some of the few companies that are using it in production today. I actually did a keynote at the Spinnaker Summit this weekend, and on stage we also had Salesforce, Pinterest, and Airbnb, who are great additions to the community and who are using Spinnaker at massive scale in the cloud.
So we're not here to just talk about Spinnaker, or just software delivery. I'm here to tell you how Jenkins and Spinnaker together are something really awesome. If you look at how Spinnaker was born: it was born at Netflix, and they realized they had one of the most massive Jenkins clusters in the world, and all of their software delivery went through there, and they didn't want to rip it out.
They wanted to add to what Jenkins was already doing, and so Spinnaker was born with first-class integration to Jenkins, to add value on top of what Jenkins is already doing. But we want to go through some terminology first, because our terminology is slightly different. Spinnaker is an application-centric platform, not an infrastructure-centric platform. If you go into the AWS console, or you're looking at a Kubernetes console, it tells you all of the nodes or the pods that are running. But if you're an application developer, you really only care about your application.
Everything else just doesn't really matter to you, and so everything in Spinnaker starts with the application, and we refer to this as an atomic deployable unit. It's typically a microservice, again because this was born at Netflix, and everything at Netflix is microservices-oriented. A pipeline is a defined workflow: a trigger, something that automatically sets off the pipeline or executes the pipeline, and multiple stages that we put together.
The trigger is an automated way to kick off the pipeline: for example, a git commit, or somebody pushing something to an S3 bucket, or somebody pushing something to JFrog, or a container to a Docker registry. And stages: now, stages are different for us than they are in the world of Jenkins, or maybe other stages you might be familiar with. These are actually predefined actions that are used in a pipeline to create our workflow.
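Concretely, a pipeline lives as JSON behind the UI. The sketch below is illustrative only (the application, master, and job names are invented), showing the pieces just described: a Jenkins trigger plus two predefined stages wired together by their reference IDs.

```json
{
  "application": "myapp",
  "name": "Deploy to staging",
  "triggers": [
    {
      "type": "jenkins",
      "master": "my-jenkins",
      "job": "myapp-build",
      "propertyFile": "build.properties",
      "enabled": true
    }
  ],
  "stages": [
    { "type": "manualJudgment", "name": "QA approval", "refId": "1" },
    { "type": "jenkins", "name": "Run deploy job", "refId": "2", "requisiteStageRefIds": ["1"] }
  ]
}
```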
You don't write code for these stages. They're all predefined; you just express and configure the inputs into that stage. And this is the powerful thing about Spinnaker: it hides, or abstracts, the complexity from you, so you can just focus on and worry about the things that you need to get done. So this is what it looks like together. I would do a live demo, but I'm not that brave.
So these are little animated GIFs. In the upper left-hand corner you have your application name. These are our software delivery pipelines in that middle tab; the infrastructure tab will show you what's actually deployed into production; and the tasks tab shows you everything that's been executed, through the UI or otherwise. So you have this rich UI that tells you everything you need to know about your application, so that you can actually start writing a pipeline that encompasses everything from your developer's laptop to production.
How do we configure a Jenkins master? It's pretty simple. We have a tool called Halyard, which we call "hal" for short. If a master already exists, you just configure it by giving it the Jenkins master name, the URL, and the username and password to connect up to the API, and then we redeploy Spinnaker. Spinnaker will start polling Jenkins, asking for the jobs that are available and which jobs have been executed and completed, and now we can actually start integrating our pipelines with Jenkins.
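That flow maps to a few Halyard commands; a minimal sketch, where the master name, URL, and credentials are placeholders you would replace with your own:

```shell
# Enable the Jenkins CI integration and register a master
hal config ci jenkins enable
echo "$JENKINS_PASSWORD" | hal config ci jenkins master add my-jenkins \
  --address https://jenkins.example.com \
  --username deploy-bot \
  --password   # password is read from stdin rather than the command line

# Redeploy Spinnaker so it picks up the new master and starts polling it
hal deploy apply
```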
So let's get into some pipeline configuration. When we first set up our pipeline, we just configure the pipeline, no stages just yet; we want to trigger off of something. Typically, what I'm doing here is just scrolling down through the available jobs in Jenkins so that I can listen to those jobs, and when those jobs complete, we will kick off this pipeline. A little bit more here: there's a property file input.
That property file input gives us the ability to get outputs from the Jenkins job through a key-value file. That key-value file will contain things like a git hash, or the artifact name, or maybe whether we're going to dev or stage or production. Whatever it is that you want to tell Spinnaker through some sort of variables, you can do so in that file.
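For illustration, such a key-value file is just lines of `KEY=VALUE` (the keys and values below are invented). A tiny parser for that shape, sketched in Python:

```python
def parse_property_file(text):
    """Parse Jenkins-style KEY=VALUE lines into a dict.

    Blank lines and '#' comments are skipped; only the first '='
    splits, so values may themselves contain '=' characters.
    """
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # ignore malformed lines with no '='
            props[key.strip()] = value.strip()
    return props

example = """\
# outputs from the Jenkins build
GIT_HASH=3f2a91c
ARTIFACT=myapp-1.4.2.deb
TARGET_ENV=staging
"""
print(parse_property_file(example)["TARGET_ENV"])  # staging
```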
This also allows us to actually kick off a job on Jenkins. So at the very beginning, Jenkins was kicking a pipeline off; now we're using Spinnaker to integrate back into Jenkins and to kick off a job. Again, we define the job that we want to kick off. In this case we are going to kick off Chef, and we're able to pass in attributes as inputs back into this. And what's great about this, and I've seen this at a lot of companies, is that typically these things are done manually.
So one of the things about software delivery and CD is that there's a lot of human process involved with it. Humans are always involved in these processes, and it's very hard to remove them, as much as we want to, and as much as it's my goal at Armory, and our goal as part of the CDF, to automate everything and to make sure that we can get everything into production.
It's going to take time, and so we have this stage called the manual judgment stage, and this allows us to stitch stages together with a human input in the way. So if you can recall that first slide, where I had the PM approval, the QA approval, and the DevOps approval: we can actually set up these approvals in Spinnaker. In that way you can stitch these jobs together, and it gets you out of spreadsheets.
It gets you out of doing these ad hoc meetings just to make sure that someone's going to say yes or no, and I think that's a great thing, because I think we can all agree that meetings aren't the best in the world. We can also set up different inputs for the judgment, to say "I'm too scared to deploy" or "let's do it", whatever it is; whatever kind of feedback you need from the users, we can have that.
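A manual judgment is itself just another predefined stage. A hedged sketch of its JSON (the stage name, instructions text, and input values are invented for illustration):

```json
{
  "type": "manualJudgment",
  "name": "PM approval",
  "instructions": "Confirm the release notes are ready before continuing.",
  "judgmentInputs": [
    { "value": "Ship it" },
    { "value": "Too scared to deploy" }
  ],
  "failPipeline": true
}
```

The `judgmentInputs` list is what produces those custom yes/no/whatever buttons the approver sees in the UI.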
The other thing that we can add on top of the value that Jenkins provides, if it's already doing a lot of the work for you, are these things called deployment windows. Nobody likes to deploy at 5:00 p.m. on a Friday, because, for one, I love hanging out on the weekend with my kids. I don't want people deploying on a Friday and ruining my whole weekend. So in Spinnaker you can have these deployment windows that say nobody deploys on Fridays after 12:00, or 2:00.
Whatever time you want, it will sit there and wait until the next deployment window is open. That way you can actually start the deployment process, get to the very point where you need to stop, and the moment that window opens again, maybe Monday morning at 10:00, it continues going without human intervention. Now, there's a very important thing here that we believe in: we believe in guardrails, not gates. This is something that you'll hear Andy Glover, the director at Netflix who started Spinnaker, say a lot.
And this is the guardrail: it's going to tell you to slow down and stop, because it's 3:00 p.m. on a Friday, don't deploy, people love their weekends. But if you really need to deploy, it's not going to stop you; you can still override it and continue deploying into production in case there's an emergency.
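A deployment window boils down to a recurring time range that execution waits on. A minimal sketch of the idea in Python (the window format here is invented for illustration, not Spinnaker's actual configuration schema):

```python
from datetime import datetime

# Each window: (weekday 0=Mon..6=Sun, start_hour, end_hour); deploys allowed inside.
ALLOWED_WINDOWS = [
    (0, 10, 17),  # Monday 10:00-17:00
    (1, 10, 17),  # Tuesday 10:00-17:00
    (4, 0, 12),   # Friday only before noon
]

def in_deployment_window(now: datetime) -> bool:
    """Return True if a deployment may proceed at `now`."""
    return any(
        now.weekday() == day and start <= now.hour < end
        for day, start, end in ALLOWED_WINDOWS
    )

# Friday 15:00: the guardrail says wait (though Spinnaker lets you override).
print(in_deployment_window(datetime(2019, 11, 22, 15, 0)))  # False
```

A pipeline that reaches this check outside a window simply sleeps until the next one opens, which is the "continues Monday at 10:00 without human intervention" behavior described above.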
We also have this concept of auto verification. Now, you'll see in the UI that it's called canary, and really what we're doing here is canarying. If you think about canarying, it's actually two parts; everyone kind of puts them together, but it's two parts. One is the orchestration part: how do we deal with traffic management, how do we roll out our software? The second part, though, is the analysis part. Typically we do that manually: we go to these data dashboards or metrics dashboards, and we go look at this chart, and we go:
What did it look like yesterday? And there's a 45 percent CPU spike. You look at it: right, yeah, it kind of looks okay, let's keep moving forward, let's go to 10 servers. Right, so let's just put the orchestration part to the side. If you just take the analysis part, that part really shouldn't be done by humans. Computers are pretty good with numbers, so let's let the computers do their job. And so what did we do? We built a system. This is the third-generation system, built mostly by Google and Netflix, an automated verification engine.
All it does is take two time-series sets of data and compare them against each other, using an algorithm called the Mann-Whitney U test, and this test tells us whether these two sets of numbers are the same or different. You can literally throw stock ticker symbols at it; anything that is time-series based it will accept. So we can take data like memory, CPU, revenue, transactions per second, or queue length, and we can compare those things, and it will automatically tell you whether this is a good deployment or not.
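To make the statistic itself concrete, here is a from-scratch sketch (not the actual Spinnaker/Kayenta implementation) of the Mann-Whitney U value. It measures how much the two samples' rankings overlap: identical distributions land near n1*n2/2, while fully separated samples collapse toward 0.

```python
def mann_whitney_u(baseline, canary):
    """Compute the Mann-Whitney U statistic for two samples.

    Ranks both samples together (tied values get averaged ranks)
    and returns min(U1, U2). Small values mean the samples barely
    overlap; values near len(baseline) * len(canary) / 2 mean the
    two distributions look alike.
    """
    combined = sorted((value, idx) for idx, value in enumerate(baseline + canary))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(baseline), len(canary)
    r1 = sum(ranks[:n1])  # rank sum of the baseline sample
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Same distribution: U sits near n1*n2/2 = 8
print(mann_whitney_u([1, 2, 3, 4], [1, 2, 3, 4]))      # 8.0
# Canary metrics clearly shifted: U collapses toward 0
print(mann_whitney_u([1, 2, 3, 4], [10, 11, 12, 13]))  # 0.0
```

Because the test is rank-based, it makes no assumption about the shape of the metric's distribution, which matters for the power-law system metrics discussed later in the talk.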
Instead of looking at dashboards and comparing numbers, we can get back to coding, enjoying our day, and being productive for our companies. So here I'm just configuring this canary engine, or the analysis engine. I'm weighting it a little bit; I'm going to say this is a CPU-bound application, and then I'm giving it the inputs. You can see here I'm looking at all of these metrics. Now, the Netflix API is actually deployed using this very same system, and they look at hundreds of metrics, so you can throw as many metrics as you want at this.
It will do the analysis for you and come up with a judgment. Again, the whole idea is adding value on top of the deployment that Jenkins is already doing for you. Here we're configuring this stage; there we just configured the actual profile. The reason we separated these two things is that if you look at API systems, they all roughly look and behave the same, and we want to look at roughly the same set of metrics, so we wanted to pull out the profile and be able to reuse that profile.
Then we actually come back in here and configure a stage with it. The baseline offset here is where I'm saying, hey, look back a hundred minutes prior. You can look back 24 hours prior, or 48 hours prior, so you can compare today with what happened yesterday. It's up to you to configure it however you want; it's extremely configurable. The lifetime there is really important.
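To make the baseline offset concrete: if the canary starts now and the offset is 100 minutes, the engine compares the canary's window against a same-length window of the same metrics starting 100 minutes earlier. A sketch of that arithmetic (the function and parameter names are invented, not Spinnaker's schema):

```python
from datetime import datetime, timedelta

def baseline_window(canary_start, lifetime_minutes, offset_minutes):
    """Return the (start, end) of the baseline window compared against
    a canary that runs [canary_start, canary_start + lifetime]."""
    start = canary_start - timedelta(minutes=offset_minutes)
    return start, start + timedelta(minutes=lifetime_minutes)

# 30-minute canary starting at noon, baseline offset 100 minutes:
start, end = baseline_window(datetime(2019, 11, 18, 12, 0), 30, 100)
print(start, end)  # 2019-11-18 10:20:00 2019-11-18 10:50:00
```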
The reason why the lifetime is really important is that, ultimately, in order to do data analysis we need data, and if you don't have enough data, then it might be a little bit off. So if you need more time to collect the data, and to build confidence in what you're deploying, set that for a longer time. And Netflix even still has problems with this, and you can imagine their scale; they set things for a longer period of time, but that gets enough data to give it confidence.
So here's our final pipeline, and I'm just moving too fast. All right. So there's a manual judgment; we are going to update our Chef server by calling out to Jenkins; then we call out to Jenkins again to actually do the deployment; and then, after the deployment is done, we will do an automated verification. All of this using Jenkins and Spinnaker together, so you're adding value.
So once we execute it (I executed it manually there; I just pushed the button), it asks me for the artifact, or the job number from Jenkins, that I want to run. It starts kicking off the pipeline; you see it running. It stopped there at that manual judgment, and I went ahead and approved it. So there I'm starting it again, picking build number 36, which is the last one, and it's telling me which is the trigger.
The manual judgment comes up; I select yes, because I do want it to continue, and it continues. I could have said no, and the pipeline would have stopped. Right, again: getting out of CD by spreadsheets. Here, because it's not Tuesday (I'm modeling out that pipeline from the very beginning), it's going to stop. But as I mentioned, we believe in guardrails and not gates, so I'm going to continue moving forward; I'm going to say skip, so I am going to push this forward, but it does stop me.
It does try to advise me on what I should be doing, and then it's going to continue moving forward. Once it gets into the Jenkins stage, we have a lot of information back from Jenkins on what Jenkins is doing: the job that it's running, how it's running, how it's progressing. It gives us a link back to Jenkins.
So we can go see the job. Right, there's a task waiting for the job to start, waiting for Jenkins to find a proper runner, and if we have JUnit output from tests, we actually pull that back into Spinnaker as well, and you can see the number of tests failed or passed. And this is the last one, the verification stage.
What I think is the most valuable thing in this pipeline is that we're actually running a live service, and we're able to grab data from whatever data store or metrics provider you have, whether it's Datadog, New Relic, or Prometheus; we integrate with many different sources. You can see it doing the analysis, at I believe five-minute intervals, and you can go into here and see exactly how it's doing its analysis. It gives you a little bit of feedback on what the numbers look like.
So if anything were to fail, and this is really important, if anything were to fail, you are able to go back in there, dig into the pipelines, and understand exactly what's going on. Very early on, we started using machine learning for this type of analysis, and we realized that it was a really bad idea, for two reasons. One, fitting a model for this type of data is really, really hard. It's actually not a normal distribution.
It's a very power-law distribution for a lot of the data that we use on system-level metrics, and then it's different for, let's say, traffic. The other reason why it's really bad is that if you have a model and something breaks in it, it'll definitely tell you that something broke, but you, as a developer, figuring out what broke is practically impossible, and so trying to debug this was hard. So sometimes simpler is better, and in this case it is, because it allows us to go in and understand, like:
Oh, the CPU went up. It allows us to start our investigation as to why. Once everything is in production, after everything's deployed, here's our application-centric infrastructure view. Again, I don't care about the other 100 applications that are running in my Kubernetes cluster; I care about mine. I can see the nodes and pods that are running here. On the right-hand side I can actually pull up the logs from those pods, so I don't have to do kubectl to switch between accounts or namespaces; it easily comes up.
I can see what's healthy or unhealthy. If these pods weren't coming up, it would tell me exactly why they didn't come up. We're able to actually scale them up and scale them down easily. So there's a lot of activity that we can do in this application-centric point of view that would be a lot harder to do through the CLI.
So what did we do in the last 20 minutes? We automated that manual handoff between human beings that came out of spreadsheets and meetings.
We visualized the infrastructure, whether it's in Kubernetes or AWS or GCP; we can actually visualize our infrastructure across multiple accounts and across multiple regions for our application. As developers, we only care about our applications, not the world of the infrastructure, and this is what makes Jenkins and Spinnaker so awesome. So thank you. If you want to get started, go to spinnaker.io. I appreciate the time.