Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
First-time Spinnaker Contributors Workshop - Fernando Freire, Armory
Are you curious about how to get involved in open source and the Spinnaker project? Jump start your contributor journey by attending our first-time contributors workshop. You’ll learn how to leverage Kustomize, Telepresence, and many more useful tools to jumpstart your next Spinnaker contribution. Using these tools you’ll make your first modifications to the project by adding a brand new stage to Spinnaker.
Perfect. So for the folks on the stream, we're just getting started here in the room. There's a link on the screen, if that's coming through, to join the Spinnaker Slack, if you haven't already. We're going to use generated credentials to a shared Kubernetes cluster today, so I can send you those credentials in Slack as I create them. It'd be helpful if you just share a username; it could be your GitHub handle, it could be a random string of letters, just something that I can use to create something for you.
Okay, let's go ahead and get started, then. So, thanks all for coming. We're here to learn more about becoming a contributor to the Spinnaker project. We've been learning a lot about Spinnaker over the last two days, and it's exciting to see all of the new faces here, all the new contributors to the project. So what we're going to try and do today is equip you with all of the tools that you need to get started actually contributing code to the project. Of course, that's not the only way that you could contribute.
There's also being able to use the product, find issues with it, file them in the issue tracker, contributing to documentation; there's a number of things that you could do within the project that are still valuable work. Today, though, what we are going to focus on is how to actually make changes, and we're going to do so in the open source project in particular. And if we're able to get through most of the content, then we'll start to focus on some plugin development towards the end.
So, as we're all familiar, Spinnaker is a platform for doing continuous delivery: being able to orchestrate pipelines and deployments across a number of different targets and cloud providers. The way that gets consumed is by each of these individual services, and we'll go into a little bit of a history lesson here. When Spinnaker open sourced the project, initially their goal was to create microservices per set of features, or per part of the domain; a slice of the pie.

So each of these services focuses on a different aspect of continuous deployment, and they all follow similar patterns underneath. The theme that we're going to see throughout the day as we work on some of these services is: how does data get into the platform, and then how is that data exposed to other services?
So the goal was to get all of that data in-house, then make it available to other services within the Spinnaker ecosystem, and then allow developers to describe their deployments from there. So what we're going to do is go through a brief overview of each of the different services so you understand how they interact with each other. There's a lot of arrows on the screen; let's not worry too much about the arrows for now and just focus on the individual services within that core box.
Starting in that top row, we have Deck, Gate, and Igor. Deck is considered the front door of Spinnaker. It's the front-end application that provides all the nice GUI features; it's what users end up seeing the most. Gate is considered the API gateway for the whole system. So if you're familiar with microservice architecture, you generally have something sitting at the edge that exposes functionality for a whole bunch of internal services.
So Gate is going to be the one exposing all of the high-level functionality that Deck will use, and if you're integrating with the Spinnaker API, more often than not you're going to be integrating with Gate; you're not going to be interacting with individual services unless you intentionally expose them. Then the last one on the top row here is Igor. Igor is the service that is responsible for CI artifacts, so it's going to interact most often with Jenkins, with Docker, and with some of these other artifact-like systems. Again, following that pattern of caching resources and then making them available, Igor will take, for example, all of the Jenkins jobs in your Jenkins cluster and then expose them as jobs that you can reference or use to trigger pipelines within the system. Then, in that middle row, we've got Kayenta, Orca, Echo, and Fiat.
Okay, so Kayenta will be responsible for aggregating data from different metrics providers and then exposing that, along with judgment functionality, to other services within the system. So, for example, if you're trying to determine whether a service is healthy in production and Datadog is your provider, Kayenta is going to be responsible for ingesting metric data from Datadog and exposing a way for you to create a judgment: like, if CPU exceeds 50 percent, then fail the judgment. Next on the list is Orca.
Orca is the orchestration engine: it's going to take this pipeline graph, go through it, and make sure that each of those things is being completed, and if they're not, then it'll end up reporting status back. That's how we end up seeing errors in the Deck UI. Then we've got Echo. Echo here is our notification service, so all notifications end up going through this system. A lot of things that end up triggering deployments will go through Echo first, so it's going to broadcast messages and make sure that other services are aware of what needs to happen. If Orca is the most complex, Echo is probably one of the simpler services that you'll end up seeing today. And then we've got Fiat on the far left of this middle row. Fiat is providing authentication (authn) and authorization (authz).
So those two pieces of functionality are being provided for the rest of the cluster. For example, when you execute a pipeline and you're authenticating with a SAML provider, a check will be made to Fiat to make sure that you have permission to actually execute that pipeline, and Fiat will return that value for other services to consume.

Then, on the bottom row, we've got Rosco, Clouddriver, and Front50. Rosco is the service that's primarily providing baking functionality. Are we familiar with the concept of baking and immutable infrastructure? I'm seeing some nods, okay. So Rosco will allow you to bake an AMI, for example. It's also responsible for rendering Kustomize- and Helm-based manifests when you're deploying to Kubernetes. And Clouddriver is the abstraction for all of the different cloud providers.
So when you want to deploy an AMI, or you want to deploy a Kubernetes Deployment, or you want to retrieve the status of a load balancer in Azure, all of those operations are going to be encapsulated inside of Clouddriver. It's responsible for wrapping all of the cloud-specific SDKs and then providing a consistent interface for the rest of the services to interact with. Then on the bottom right we have Front50. Front50 is considered the persistence, or storage, layer within the project.
Anything related to application, pipeline, or canary metadata will end up being stored in this service, and it's got a number of different backing stores. You can use the object stores in your cloud, or you can use MinIO in-cluster, or, for most production use cases, you can use a SQL backing store, which is definitely something that we recommend.

So the last thing I'll mention around all of these services is that by default, and you'll see this today, the backing store for most of these services is going to be Redis. Redis is, of course, not a persistent store, but it does make it simple to get started. Really, what we recommend if you start running Spinnaker in your own environment is to choose a SQL-based backing store, so you can do MySQL or Postgres, but again, today we'll just focus on Redis. So I'm going to stop there and see if there are any questions.
Perfect, okay. So if you came to, or saw, some of the talks yesterday, you might have seen a couple of different ways to get all of this stuff installed and configured. Today, the shared cluster that we're going to be using is using the open source Spinnaker Operator, and then we'll use Kustomize to install that Spinnaker cluster within a namespace. So the credentials that you've received will give you access to a shared Kubernetes cluster. You'll have, effectively, admin access within your namespace, but you won't be able to go into other namespaces.

So don't worry about that. The reason that we're wanting to do this is so that we can run most of the Spinnaker services in this remote Kubernetes cluster and then only modify the services that we need to on our local machine. So now we'll get into a little bit more of the development setup. What we're going to go through today is setting up our editor to make sure we can actually work on the project.
We're going to need kubectl to access the cluster. We're going to need Telepresence, which is a tool for creating that bi-directional proxy between our services on the local machine and our remote cluster. And we'll need some kind of editor; we're going to be using IntelliJ IDEA. You can use the Community Edition, which is free if you're not already using it, to work on some of the Java services we'll use.

A little bit later on, yeah. Okay, perfect, yeah, for sure.
So we're going to talk about the next phase: we need to go ahead and get Spinnaker running inside of our namespace. In order to do that, we're going to use Kustomize and the Spinnaker Operator. So I'm going to go ahead and put this on the screen here.

What we're going to use to help us get through this today is (too bad I can't mirror the browser window, anyway) this project, to jumpstart our ability to get Spinnaker running inside of the cluster. What this repository does is it takes a bunch of common configuration patterns within the Spinnaker project and allows you to apply them using Kustomize, and it gives you a nice big SpinnakerService CRD that you can apply to that cluster. So, to cover some hopefully unfamiliar ground:
In this case the SpinnakerService CRD has a whole bunch of different configuration values. They'll be in the same format if you're familiar with Halyard, to actually say: I want a Spinnaker cluster with a Kubernetes account, and I want it to have X permissions and the ability to access Y S3 buckets. Then the Operator will create all the configuration necessary for that and start up all of the services that we were just talking about in that architecture diagram.
Okay, so what you'll end up finding in this directory is a whole bunch of different patches that you could use to configure services. What we're going to look at first is this kustomize-minimum, or this recipes/kustomize-minimum, and I'm going to make sure that this is correct for you, so you're not constantly switching colors here. This is the simplest possible definition that we can make for installing a Spinnaker cluster. So this kustomization, what it's saying is: namespace spinnaker.

What we're going to want to do is change this to the name of the namespace that was created for you. So, for example, if I was creating my own namespace here, I would change this namespace to match it, and what that's going to do is map all of the resources that are being created in this Kustomize block to that namespace. So you don't have to manually go into each of the manifests and say "I want to change the namespace here"; Kustomize will take care of that for you.

Then we've got a components block, and that's where we're going to pull in the bulk of our configuration today. The first one that we're going to look at is the core base. This is just going to define the base SpinnakerService CRD. Then, in the next value, we're going to define in-cluster persistence, and we'll go look through all this configuration so you're at least familiar with it. And then we're going to define a Kubernetes target. So when you first stand up Spinnaker, you'll be able to go in and create pipelines.

You'll be able to do a simple deployment to the current namespace that you're in. Then at the bottom here we're going to patch the version in, so in this case we're going to make some manual edits. This repository is primarily geared towards deploying our proprietary version of Spinnaker, but with a change to this version patch and the addition of another patch, we'll be able to install open source Spinnaker, no problem. And then the last one here: we're actually going to remove this transformer. All right.
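As a sketch, the minimal kustomization he's describing looks roughly like this; the component paths and patch filename are illustrative, so check the repository you cloned for the exact names:

```yaml
# kustomization.yaml (minimal recipe, sketch)
namespace: spinnaker              # change to the namespace created for you

components:
  - core/base                     # the base SpinnakerService CRD
  - core/persistence/in-cluster   # MinIO-backed persistence
  - targets/kubernetes/default    # a Kubernetes account for the current namespace

patchesStrategicMerge:
  - core/patches/version.yml      # pins which Spinnaker version gets installed
```

The namespace field at the top is what remaps every generated resource, which is why none of the individual manifests need editing by hand.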
How familiar is the group with Kustomize? I don't want to go... like, medium-ish? Okay, so I'll briefly explain this, but we're not going to end up using it today. What this transformers block does down here is it takes a set of resources and then prefixes or suffixes a value. So in our development cluster, for example, our engineers are usually working with a cluster-admin role, because it's a development cluster, and those are global resources.
So this might not be familiar to a lot of folks. This is something that was introduced in newer versions of Kustomize, and components give us the ability to group a whole bunch of different Kustomize resources together and then expose them like we do in our top-level manifest over here. It's a nice way to abstract away a lot of configuration that you may not want your end users to worry about too much. So in this case, all we've done is define a SpinnakerService. This looks interesting...

That looks a little bit better. We can see here that we have a top-level spec with our Spinnaker config. Everything under spinnakerConfig maps to equivalent Halyard config values, and there are definitions and descriptions of all this stuff in here, but of course the open source website also describes these things. And then we have the ability to set individual... excuse me.
Now this bears a little bit more explanation. It might be weird to think of using Kustomize and then having a kustomize definition inside of the Operator. The difference between the Kustomize that you define in this manifest and the one that we're going to be using today is that this one allows you to do runtime patches within the cluster itself. What's nice about this, for example, is if you need to attach a sidecar to all of the different services.

If you want to do that dynamically, you don't necessarily want to have to define that up front. You can define it in this kustomize block and it'll be done on the application side. Or maybe you need to do a more fine-grained addition of labels inside of the project that you don't necessarily know up front; Kustomize will help you do that here. But this is more of an advanced feature, and we're definitely not going to dive into it today.
Okay. So we've gone through our core definition here; now we're going to quickly show what in-cluster persistence looks like. We'll go down to our persistence directory. You can, of course, switch to S3 if you want to work on this in your own cluster, but you don't have permissions to do that today, so we'll stick to in-cluster. And in the in-cluster config we're going to stand up MinIO. MinIO is an open source object store that is S3-compatible.
So if you ever have a need to test something in an S3-like environment locally, MinIO is a great solution; that's what we're using today. Then we're going to be generating a secret here, which is super secret (you're all going to have the same secret), and we'll go into the Spinnaker config and see how we actually configure this. So we're going to define persistentStorage; we're going to lie to Spinnaker and say we're persisting to S3.

Spinnaker won't know the difference, and we're going to tell it what bucket we want to persist to. This name usually has to be unique, but because we've got our own MinIO cluster, it doesn't really matter here.
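The "lie to Spinnaker" he's describing boils down to a persistentStorage block along these lines; the endpoint, bucket name, and credential values here are illustrative:

```yaml
# SpinnakerService persistence config (sketch)
apiVersion: spinnaker.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      persistentStorage:
        persistentStoreType: s3    # Spinnaker thinks it's S3; MinIO speaks the same API
        s3:
          bucket: spinnaker        # uniqueness doesn't matter against our own MinIO
          endpoint: http://minio:9000
          accessKeyId: minio
          secretAccessKey: ${MINIO_SECRET}   # from the generated secret
```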
So all of this will just help you deploy to MinIO in your namespace. Then the last thing we're going to talk about is our Kubernetes patch. We'll go down to targets/kubernetes/default, and then we can look at our kustomization here, and this is basically the same pattern that we've seen before, right?
So let's go ahead and talk about how we're actually going to get this applied. If we go to the root of the directory (so, let's go here), you'll see that there is a kustomization.yaml file. This is actually pointing to a much larger kustomization file; we're not actually going to end up using this one today.

So we'll see all of this information being generated for us, including any claims that we need for MinIO and all the other related resources. Let's go ahead and make sure that was the right file.
Okay, so one of the last things that we need to do here is make sure that we're installing the open source version of Spinnaker. If you look at any of these individual patches, what you're going to see is that the namespace looks like it's not open source. So let's go look at targets, for example, and let's look at the provider definition here. CRDs are bound by something called a group, version, and kind.

So in this case the group and version is this apiVersion here: spinnaker.armory.io/v1alpha2. What we're going to want to do is introduce a patch, which is in the utilities directory, that will change this apiVersion to spinnaker.io/v1alpha2. The only functional difference here is that you're only going to be configuring the open source functionality that's in there, and of course we've already got the open source Spinnaker Operator installed.
So now we've got a patches block targeting Spinnaker kinds, and the path is set to the switch-to-OSS patch. We can just inspect the patch here; it's very simple. We're doing a replace operation on the apiVersion path. This is a JSON patch path, so if you need to do something more specific inside of a CRD that you're trying to modify, or any Kubernetes manifest, you can specify that here, along with the value that you want to replace it with. So here we're just saying: replace it with spinnaker.io/v1alpha2.
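Put together, the patches block and the patch file itself look roughly like this; the patch filename is an assumption based on what's described on screen:

```yaml
# In kustomization.yaml: apply the patch to every SpinnakerService kind
patches:
  - path: utilities/switch-to-oss.yml
    target:
      kind: SpinnakerService

# utilities/switch-to-oss.yml: a JSON6902-style patch replacing the apiVersion
- op: replace
  path: /apiVersion
  value: spinnaker.io/v1alpha2
```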
So we're going to go ahead and close this one, and we're going to open up the correct one. So now, in the root of the project, we'll go ahead and paste this, uncommented (throw it back where it should be), and when we do our kustomize build, we should now see the correct value. Great, okay. And now the last thing that we'll do before we go ahead and apply things, now that we've got our open source patches: we'll of course need to change this version.

As well... can I get your name? Heinrich, okay.
You will. So one thing that we'll see when we go ahead and apply this is you might see some errors, and if you do end up seeing errors, the likeliest explanation is that your version of kubectl is older than what we need to use today. kubectl before version 1.21 shipped with Kustomize 3, and kubectl post-1.21 shipped with Kustomize 4, and Kustomize 4 is what we're going to be using today.

Let me keep that going, and then, Heinrich, I will generate your cluster right now.
All right, so I'm going to go ahead and explain what we're going to be doing next. I've just created a quick namespace for myself, just so that I can follow along with you, and we're going to run... I guess, let's talk a little bit about the kubeconfig file that you have. That just has static credentials to the cluster that we're going to be working in today. My personal preference is to store kubeconfigs in individual files and then reference them in each command.

I like to be explicit, but that's by no means the only way that you can use these kubeconfig files. You can go ahead and merge this with your core kubeconfig file if you want, or you can use something like kubectx; there are a number of tools to manage kubeconfig files for you. So that's what you'll see me doing today.
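A few ways to point kubectl at a standalone kubeconfig, along the lines he describes; the filename here is hypothetical:

```shell
# Be explicit on every command (his preference)
kubectl --kubeconfig ./workshop-kubeconfig.yml get pods

# Or export it for the whole shell session
export KUBECONFIG=$PWD/workshop-kubeconfig.yml

# Or merge it into your main config (writes a flattened copy to a new file)
KUBECONFIG=$HOME/.kube/config:./workshop-kubeconfig.yml \
  kubectl config view --flatten > merged-kubeconfig
```

These commands assume a reachable cluster, so they are only runnable against the workshop credentials.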
I've also got my kubectl command aliased to kc, because I like typing fewer letters, and then I'm going to be prefixing my commands with a namespace, but you don't necessarily need to, since you will only have access to the one. I don't know if I said that. What you could do to confirm is actually go open up that kubeconfig file and then see if a namespace was added to yours. It did.
Yeah, okay, so I'm going to go ahead and run this, and see that things failed. Because... why? That's bizarre. Oh, because I'm missing the verb, excellent! So when we run this command, what we're saying is: go ahead and use these credentials, go ahead and dump it in this namespace, plus our verb, of course; we always have to have a verb, and we're using kubectl.

In this case it's going to be apply, and instead of -f, which is probably what we're used to seeing, you can do -k, and that's going to tell kubectl to use Kustomize instead. Then the additional argument to -k is the current directory, because we're going to be using the current directory where that kustomization.yaml file is. And this exact error is what I'm talking about here.
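The apply command he's building up is essentially the following; the kubeconfig filename and namespace are placeholders for your own:

```shell
# -k tells kubectl to run the Kustomize build in the given directory
# ("." is the directory containing kustomization.yaml) and apply the result
kubectl --kubeconfig ./workshop-kubeconfig.yml -n <your-namespace> apply -k .

# Equivalent two-step form, handy for inspecting what would be applied
kustomize build . | less
```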
No, that's okay; no, no, totally fine, yeah. So the open source Operator is installed already; all we have to worry about is our Kustomize config today. And it looks like there is an issue here, so we get to triage. This is always the fun part of live workshops, right? So: we created an object that doesn't match the namespace this is coming from.

Okay, so if you were following along here, this transformers block was originally left in. I told everyone to remove it, but let's make sure that we remove this; we don't actually need the unique service account here.
So let's go ahead and look at our component here and understand where this is from: the role binding. These are all going to be cluster-level resources, so we're not actually going to want this. So we're going to do a little bit of live editing here. Let me go ahead and open up our kustomization file; we're going to be editing files inside of the targets/kubernetes/default component.

So, the Operator is going to run in two different modes. You can either run it in a basic mode, which is only going to have access to the current namespace, or a cluster mode, which is what we've done in this case. In cluster mode it's going to watch all namespaces for any application of a SpinnakerService CRD, and it will manage the creation of that.
I'll also add: today we're running inside of a shared Kubernetes cluster that has enough resources for us all to play in, but this approach that we're working with here is going to work just as well if you have sufficient resources on your local laptop. So typically, what we recommend is that you have somewhere around eight-plus cores and 32 gigs of memory, and that should be sufficient for your machine to not up and die while also running all the Spinnaker services.
What you should see, if you've managed to run that kubectl apply, is, if you do a get pods inside of your namespace, not only MinIO but all of our open source services running here. And you may see, like you can see here, that Orca restarted once. It's possible, depending on how quickly we were applying this, that it just took a little bit of time for the service to become healthy, but you should eventually see, after two or three minutes, that all of the services are up and running.
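Checking that everything came up, per the above; the namespace is a placeholder:

```shell
# Expect spin-* pods (gate, deck, orca, echo, ...) plus MinIO, all Running;
# an early restart or two on a service is normal while dependencies settle
kubectl -n <your-namespace> get pods
```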
Let's go ahead and get started. I think most folks have been able to get up and running; if you haven't, feel free to drop a line in that Slack channel, or raise your hand, wave your arms wildly. Okay.

So what we're going to do next is actually start working on some of the projects here. Before we do that, I want you to make sure that your cluster is working the way that you intended it to. So I'm going to go ahead and create a new split here, and we're going to port-forward a few services to our local box, and the reason we want to do that is because the cluster is running in a remote location.
If you want to do this in your own environment, you can set a load balancer setting in your SpinnakerService CRD, and that will generate classic load balancers. So that's going to create two ELBs, one for Gate and one for Deck. We're not going to do that today; we're just going to be port-forwarding to our local machine. Luckily, that is fairly straightforward to do. So again, I'm going to be specifying a namespace, because I'm running with a different kubeconfig, but you should be able to follow along here.
So our verb is going to be port-forward, and then we're going to port-forward, in this case, two services. If you're unfamiliar with kubectl here, we can shorten service to svc, and then we're going to do spin-gate and port-forward 8084. A little bit of shorthand: if you're going to be port-forwarding to the same port, in kubectl you can just specify the port once and it will go ahead and port-forward that. And then, in a separate split, we're going to port-forward spin-deck to 9000.
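The two port-forwards, one per terminal split; service names follow the Operator's spin-&lt;service&gt; convention:

```shell
# Split 1: Gate (the API gateway) on its default port 8084
kubectl -n <your-namespace> port-forward svc/spin-gate 8084

# Split 2: Deck (the UI) on port 9000; then open http://localhost:9000
kubectl -n <your-namespace> port-forward svc/spin-deck 9000
```

Giving a single port to port-forward maps the same local and remote port, which is the shorthand mentioned above.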
So this is a variation of what you're going to see if your Spinnaker is working correctly. What's going on here is that I've got cache from a different cluster that I port-forwarded to my local machine, so these are resources that are just going to be from that other cluster; you won't actually be able to navigate there. So what we've got is the base Spinnaker UI. I'm going to assume we've all used Spinnaker before; is that a fair assumption?
I'm going to add a Deploy (Manifest) stage and select the "spinnaker" account. "spinnaker" is just the name of the account that was created by default; you can, of course, name your account anything. You'll need to go into that targets/kubernetes/default directory, and in there the account definition will be named "spinnaker", so you can call this whatever you'd like. And then I'm going to override... well, we're not going to override the namespace here. Dump in our artifact for nginx, save it, and then run it, and that should tell you if your cluster is up and running.
So what we're going to be doing today is working on two services, but really one, just for the sake of time. We're going to be adding a new stage, and in this case we're going to be adding that stage to Orca. So let's talk about the use case: why you might want to do this. How many folks attended the talk yesterday about plugin development? We got one, two... yeah, perfect. So the same use case applies there.
I'll recap it for the folks that didn't attend the talk yesterday. Spinnaker is a very mature platform. Its feature set covers 90% of the use cases that you will have when you want to do continuous deployment. The other 10% of cases are going to be specific to your organization: integrating with a custom internal API, doing something for compliance reasons, sending metrics to your bespoke metrics provider. Those things that are specific to your organization, but are not necessarily going to be something that benefits the community at large, are going to be the things that you want to extend here, and ideally you'll do it as part of a plugin. But today we're just going to work on stages and tasks, and the front-end work, inside of the open source services. Again, ideally you'll go through the plugin framework, but just for the sake of working on something interesting, we'll be doing that in the open source projects.
So, to do that, we're going to need to clone two different repositories, and I'll explain how we can follow up with the Deck stuff after the conclusion of the workshop today.

If you go to github.com/spinnaker, that's going to be the home page for all of the projects. Then, if you go to /orca after that, and /deck in another tab, you're going to get to the two repositories that we have here. Now, we're going to want to make sure we clone those to our local machine, because those are the services that we'll actually be building and running ourselves.
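Cloning the two repositories (or your forks of them):

```shell
git clone https://github.com/spinnaker/orca.git
git clone https://github.com/spinnaker/deck.git
```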
That's typically the approach that we take. Engineers on our team will either create a fork themselves, or we also fork into our own organization and then contribute to the organization's fork. That way we can do our own peer reviews internally and then merge them up to the main project here.
And while we're cloning, I'll explain another concept here, based around the way that Spinnaker releases get created. The master branch is considered mainline; that's all the functionality that's going to go into the next major release of the Spinnaker platform. But if you're contributing a change, or you're targeting a specific version of the API, you're going to want to check out a different branch, and all of the release branches in the project are prefixed with "release-". Scrolling all the way to the bottom, you can see the three most recent releases: 1.27, 1.28, and 1.29. So if you need to make a change that's specific to the API version that you're targeting, you'll want to check out that release branch, make your changes there, and test them. Now, there are some advanced topics we can get into, if we want to talk about it after the fact, around which branch you want to target depending on the kind of change you're making. But those are all down the road.
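Listing and checking out a release branch; the exact branch name depends on the version you're targeting, and "release-1.29.x" here is an example of the naming scheme:

```shell
# Release branches are prefixed with "release-"
git -C orca branch -r --list 'origin/release-*'

# Check out the branch for the release you're targeting, e.g. 1.29
git -C orca checkout release-1.29.x
```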
Okay, so this is going to be the home page for Spinnaker itself. You're going to go into the Applications tab, go ahead and create an application, give it a name, and then enter an email here; demo@example.com is fine for the purposes of our demo.

Then you'll be dumped into this pipeline creation screen, and this top view here is a graphical representation of your pipeline. You can add a whole bunch of stuff in here (we could even do another deploy stage if we wanted to), and then we can create this notion of a graph here by just adding stages with different dependencies. So let me just get rid of some of these other ones.
And then you can put it into the text box that's in that Deploy (Manifest) stage. In more complicated, or even more realistic, pipeline examples, you typically won't embed a text version of the resource here; you'll usually provide it as an artifact, and there's a whole bunch of different ways that that can come through. You can configure those manifests to live in a git repo, and then you can pull those in, even as a trigger for the pipeline. You could render them using Kustomize or Helm.
Another neat little plug here, I guess, while we're waiting for folks to clone: if you really want to keep stuff organized, a really neat tool inside of the terminal is tmux, if you're not super familiar with it. It helps you manage windows. That's what I'm using here today: a combination of vi for managing these configuration files, and then different splits and windows for keeping all of this organized. You can name them and reorganize them; you can even create different sessions. So typically, what I'll do is have a session per cluster; that way...
A
A
So today we'll just be working in the master branch, and what we're going to be doing today is adding a stage. We're going to walk through, kind of, the layout of Orca at a very high level, then what we're going to do when we add this stage, and then how we can actually see the service running. So we're going to leave those port forwards going, and we're going to use them once we stand up the Orca service locally. So I'm going to switch over now to IntelliJ, pull this over to the main screen, and make this larger.
A
No, I'm actually gonna leave this fella for now.
A
Okay, I don't know if I can make this larger. Does...
B
A
A
Okay, so let's move back here and I'll just briefly show you how the project is laid out. We use a tool called tree to look at the high-level directories; Orca is split into a number of different packages. This is actually not a great view; let's just go look at this view.
A
So let's go ahead, and how is this going to look? Oh right, okay. So let's do maybe a depth of three? No, that's not even close. How about a depth of 10? That's maybe too much, but it'll give us a sense for what we need to do when we're in here.
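In other words, a depth-limited directory listing. tree -L <depth> is what's being run in the session; the snippet below uses find as a stand-in, since it is universally available, and the directory layout here is fabricated purely for illustration.

```shell
# Stand-in for "tree -L 2": list directories up to two levels deep.
# The orca-demo layout is made up; in the real session you'd run this
# from the root of the Orca checkout.
mkdir -p orca-demo/orca-core/src orca-demo/orca-web/src
find orca-demo -maxdepth 2 -type d | sort
```

Adjusting the depth (the -L flag for tree, -maxdepth for find) is exactly the knob being tuned in the talk: too shallow shows nothing useful, too deep is unreadable.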
So, are we all familiar with how the core Spinnaker services are built? I realize I glossed over entirely, like, the languages and technologies that are used in these services. We talked about what they do, but not...
A
Okay, all right, so we talked about what they do, but not necessarily how they're actually built. So let me dig into that for a second here. All of the core Spinnaker services that you see are going to be written in a variation of Java; there are a number of different JVM languages, but they're all going to be based on the JVM.
A
The majority of the project is written in Java. There are places and corners within the project where you will see Groovy and Kotlin; I think there's maybe literally one Scala file. But for the most part the project is Java, and the technology that we use for all the services is Spring Boot. Now, thanks to the efforts of David and team here, David who just stepped out there, we've just recently upgraded to Spring Boot 2.4, and I believe that'll be coming in the next version of Spinnaker, 1.30 or something like that.
A
So some pretty big, I guess, like, infrastructure upgrades to the project that are coming soon. So what we want to be able to do today, and I guess I'll just show you the actual services that we want to build, is we want to build a stage. So let me explain this, I guess, first by looking at Deck, and then we'll talk about how this is going to get built in Orca.
A
A
So when I add a node here, like deploy manifest, it's attaching to the root node. Whenever I add more stages here, they're going to be chained together, and then, you know, let's say we could have a situation like this. So I'm just going to keep choosing random stages, but I could make this stage depend on the CodeBuild one, and so now we've got this nice little, you know, fork, and then a join at the end here.
A
So the way that Orca is going to interpret this is: it's going to organize all the different stages that it needs to execute, and then it's going to execute them in sequence, based on when they need to return a result, and then it's going to execute individual tasks in the same way. So a deploy manifest task, for example, is going to have a start, it's going to have an apply, and then it's going to have a monitor deploy. The deploy manifest stage is, you know... we can...
A
We can look at the Orca code as well in a moment, but the reason it's doing that is to make sure that your deployment actually worked. So when we go actually run this pipeline, you can see this blue bar here is the current stage that's running, and then, if we look at the task status, that's where we can see the graph of tasks that are being run for the deployment here.
A
So you can see there's a number of different things that are encapsulated within a deploy manifest stage, and this is really important, because this could be really anything, right? But these are all things that are related to the act of deploying a manifest. So it's easy for your users to say, okay, I'm going to go deploy this manifest, and they don't necessarily have to think about: well, am I going to be able to apply it? Do I need to monitor it?
A
You know, that monitor task is making sure that when this deployment gets rolled out... you might have seen, very briefly, there was a yellow "waiting for stable" or something like that, I can't remember what the exact label is, but that task is actively sitting there querying the Kubernetes API and asking: are you stable yet? Are you ready yet? Have you passed your startup, readiness, whatever probes? And only once that's done will it move on to the next task here, and you can see that there are timings here.
A
A
A
So in Java, of course, we can extend different classes. This tells us, you know, that there's like some baseline functionality that we need to implement in order for a stage to exist properly within Spinnaker. So there are two different kinds... well, there's a couple of different kinds of stage definition builders. These are classes that are meant to instruct Spinnaker on what functionality is available within that stage.
A
A
Okay, so SpEL expressions. SpEL stands for the Spring Expression Language; I think maybe in Spinnaker it stands for the Spinnaker Expression Language, but the base is the same. They give you the ability to define dynamic values within a pipeline. So let's go back to our example here really quick. Let's say inside of this configure stage I actually wanted to parameterize the image version that I'm pulling in: I could introduce a SpEL expression in this text manifest, and that'll be between a dollar sign and two curly brackets.
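Conceptually, the evaluator walks the stage definition and replaces anything between ${ and } with a computed value. Here is a toy, stdlib-only sketch of that substitution step; this is not Orca's evaluator, which runs the real Spring Expression Language, and the parameter name is invented.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy stand-in for pipeline expression evaluation: replace ${key}
// with a value looked up from a context map.
public class ToyExpressionEval {
    private static final Pattern EXPR = Pattern.compile("\\$\\{([^}]+)\\}");

    static String evaluate(String text, Map<String, String> context) {
        Matcher m = EXPR.matcher(text);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // Unknown keys are left untouched rather than failing outright.
            String value = context.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String manifest = "image: nginx:${parameters.imageVersion}";
        String rendered =
                evaluate(manifest, Map.of("parameters.imageVersion", "1.25"));
        System.out.println(rendered); // image: nginx:1.25
    }
}
```

The real language is much richer (method calls, helper functions, access to execution state), but the dollar-sign-and-curly-brackets delimiter and the "resolve at execution time" behavior are the same idea.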
A
A
If we wanted to parameterize the account we were deploying to: really anywhere within this stage definition we can include a SpEL expression, and we know, based on the implementation here, that we have to honor that contract. Now, there are ways for you to disable expression evaluation, but, you know, essentially what we're saying here is that you have that ability. The individual stage that we're going to be working on today is actually not going to be expression-aware; we're just going to use the simpler stage definition builder.
A
So, when you're deciding on what kind of functionality you want to include in your task, this is one of the first decisions that you'll make, because it's, like: what is the contract that I want to expose to users of my stage? Then we can see here that we have a number of methods that we have to override; the one that we want to care about the most is this taskGraph. So remember that when we're looking at the execution here, we have our graph of stages and we also have our graph of tasks.
A
So how Deck in particular knows which tasks are exposed is by understanding the task graph that comes from Orca. So all of this stuff, we can now see, is directly related to the builder here: here's our resolve manifest task, here's our deploy manifest task, and each of these has its own individual implementation. So, for example, we can go ahead and look at the deploy manifest task, and we can see that it's doing a whole bunch of stuff, right, to actually deploy our manifest, and we can do the same by looking at any of these other tasks.
A
So these are kind of the units of abstraction that you'll be using when you define your own stages. There are also a few things that you can do outside of this; there's like a whole lifecycle within pipelines that you can implement. So, for example, if you want to take some action after the stage has completed, then you can go ahead and do that, or if you want to make your stage retriable or cancelable...
A
A
Implementing that method will give you the ability to clean up resources in that remote service and make sure that you're not, you know, creating a mess for yourself internally. And then, of course, there are a whole bunch of helper methods, but this gives you a flavor for what you can do within stages, and I would recommend using the existing open source stages as kind of a template for the things that you want to be building internally, as there's a number of patterns in there that kind of help you, you know, get the creative juices going for what's possible. So I'll stop there again, and let's see if there...
B
B
A
A
Right, right. There's a question in the chat: what is the Oort service? So this is a bit of a historical... oh gosh, I'm gonna butcher the history here. Oort is the original name for Clouddriver; I want to say there were two services, Oort and Mort, and I don't recall exactly what the purpose of those services was. Yeah, this is like digging back into, I don't know, 2014, 2015, but I don't recall exactly, off the top of my head, what it's currently used for.
A
I think, if I understood your question, you're asking what the boundary is for a task. Yeah, yeah, so the boundary is really whatever decision makes sense to you. I know that's kind of a non-answer, but in the deploy manifest stage you can see they're broken down logically by the things that you would be looking for, right, when you're actually... like, I guess, think about it in terms of what you want to communicate to the end user, right?
A
A
If there's only one task... so, kind of, think about breaking it down into where it makes sense. And there is a mechanism for you to pass information along: there's something called a context that will be passed between tasks and stages, which you can populate with custom information. So you could, for example, have a task that's retrieving credentials and then passing them on to some other tasks within that stage.
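That hand-off can be pictured as each task reading and writing one shared map. A toy sketch follows; the Task interface here is invented shorthand, not Orca's actual task API, and the credential value is a placeholder.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of Orca's "context": each task receives the map produced
// so far and may add entries for the tasks that run after it.
public class ToyContextPassing {
    interface Task {
        void execute(Map<String, Object> context);
    }

    static Map<String, Object> run() {
        // Hypothetical tasks: one fetches credentials, the next consumes them.
        Task fetchCredentials = ctx -> ctx.put("credentials", "token-abc");
        Task deploy = ctx -> ctx.put("deployedWith", ctx.get("credentials"));

        Map<String, Object> context = new HashMap<>();
        for (Task task : List.of(fetchCredentials, deploy)) {
            task.execute(context); // tasks run in sequence, sharing one context
        }
        return context;
    }

    public static void main(String[] args) {
        System.out.println(run().get("deployedWith")); // token-abc
    }
}
```

The ordering matters: because tasks execute in sequence, a later task can rely on whatever an earlier one put into the context, which is exactly the credentials example from the talk.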
B
A
Okay, so, wow, we have nine minutes here. So I think what we're going to be able to do is start up the Orca service, see it connect to the remote cluster, and then I'll show you the stage that we would be building here. So let me walk through that a little bit, and then we'll start up the service. So a stage can be very, very simple: I've got a super secret stage here. Okay, okay, so what we're defining here is a class.
A
MySuperSecretStage implements StageDefinitionBuilder, and then we're, of course, implementing this cancellable stage interface to, you know, return a different result here if we got canceled, and providing the task graph. So in this case we're going to, for the sake of example, define a start task and an end task, and we're going to give them labels.
A
And then the implementations themselves are not super complicated: they're just going to return succeeded in all cases, and then they're going to return an output, and an output, in this case, is a value that can be retrieved by Deck. So this is information... let's go back to our deploy status here. If you look in the implementation for the deploy manifest stage, all of this information that is described within this white box is something that's defined in the outputs block of that stage.
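Put together, the shape of such a stage looks roughly like this. This is a self-contained sketch with simplified stand-in types so it runs on its own; Orca's real StageDefinitionBuilder, task graph builder, and TaskResult have different signatures, and the task labels and output values here just echo the workshop example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified, self-contained stand-ins for Orca's stage abstractions.
public class ToyStage {
    enum Status { SUCCEEDED }

    static final class TaskResult {
        final Status status;
        final Map<String, Object> outputs;
        TaskResult(Status status, Map<String, Object> outputs) {
            this.status = status;
            this.outputs = outputs;
        }
    }

    interface Task {
        TaskResult execute();
    }

    // Analogous to overriding taskGraph() in a stage definition builder:
    // declare, in order, the labeled tasks that make up the stage.
    static Map<String, Task> taskGraph() {
        Map<String, Task> graph = new LinkedHashMap<>();
        graph.put("Start the Task", () ->
                new TaskResult(Status.SUCCEEDED, Map.of("stageStartStatus", "hello")));
        graph.put("End the Task", () ->
                new TaskResult(Status.SUCCEEDED, Map.of("stageEndStatus", "goodbye")));
        return graph;
    }

    // Run the tasks in sequence and merge their outputs, the way the UI
    // later reads values back out of the stage's outputs block.
    static Map<String, Object> run() {
        Map<String, Object> outputs = new LinkedHashMap<>();
        for (Task task : taskGraph().values()) {
            outputs.putAll(task.execute().outputs);
        }
        return outputs;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Every task reports succeeded and contributes an output entry, which is all the real super secret stage does too; the interesting part is the structure, not the work.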
A
So that's what we're saying here. We're saying: here is a stage start status, and in the front-end implementation you can get into this map, grab this key, and then get this hello value, and then we, you know, compose this into the higher-level stage. Now let me talk a little bit more about the anatomy of what's going on here. In the Spring Boot world, for those of you that are not as steeped in the Java ecosystem, Spring Boot is known, or Spring...
A
More generally is known, for a concept called dependency injection, which means that you can define different resources that can then be dynamically loaded at runtime, based on what an implementing class is asking for. So the way that we make this stage discoverable to Orca as it's running is to add this @Component annotation, and so this is a Spring-level annotation that is declaring: this class needs to be loaded, especially whenever we're requesting something that implements this StageDefinitionBuilder interface. And then, to satisfy this interface here, we're going to...
A
A
Unfortunately, we're not going to have a ton of time to dig into this today, so let's at least go ahead and get the service started, and then I'll tell you where you can go to get more detail, to kind of continue the workshop at your own pace, and we'll talk about the cluster that you're working in and kind of the parameters there. So let me pause for a second and get set up again.
A
A
What we're going to want to do is... you'll either have a little collection up here, like a little toolbar, that will help you run services. Otherwise, what you can do is you can go to the run task up here. This is all IntelliJ-specific, by the way; we'll talk about what to do...
A
If you're not wanting to use the inbuilt IntelliJ stuff. You want to create a Gradle task that includes orca-web; orca-web is where all of the, like, service definitions are, all the controllers in Spring that are going to respond to requests. And then there is a particular plugin that we'll want to install, and I'll show you that in a moment, called EnvFile. We'll go ahead and enable that, and we'll add a .env file... and this is the wrong project, so let's just go down to Orca, wherever Orca is.
B
A
.env, right, and that file, of course, will get created once we run Telepresence itself locally. And so what we want to do here, it's time to, I guess, get a little bit more conceptual again: we're going to run the service locally after we've run Telepresence, and what Telepresence is going to do for us is create a bi-directional proxy from our local machine to our remote Kubernetes cluster. And the way that it accomplishes this is... if we go look at our kubectl again, and we go look at our pods...
A
So, for example, if we're going to go ahead and use Telepresence on Orca, usually what you will see is a much longer string here, like the hash ends up getting much longer, and when you describe that pod, it'll be the Telepresence node that you're actually going to be using. Once you've done that, you're going to get a whole bunch of environment variables. What Telepresence is doing under the hood is it's actually mounting the file system from the container onto your local machine.
A
So that's going to give you access to all of the environment variables that Kubernetes is setting, and then it's also going to muck with your DNS to make sure that things like, you know, spin-clouddriver.my-namespace are working. So you should be able to do all of that locally and pretty transparently, and so the reason we want that .env file from Telepresence is because...
B
A
IntelliJ. We want IntelliJ to be able to pick up those values so that your local Orca instance is actually talking to the remote cluster. Okay, so we should be able to run that, and then I will show you how to actually run the service if you're not using IntelliJ. So let me go back to Kitty here, the terminal emulator, and I'm going to do a ./gradlew...
A
Oh, because I'm in the wrong directory. I need to go back to the root of the project, and then I can do a Gradle build. Just for the sake of brevity I'm going to run this with -x check, and I'll explain what's going on here. So, ./gradlew: if you're not familiar with the Java space, Gradle is the build tool that we use inside of the Spinnaker project.
A
It comes with a shell script in the project itself, and it'll download the right version of Gradle, and then it'll read the task graph that it needs to execute for a build, for example, in the local project. Now, that -x check value that I put at the end there: -x is saying exclude this task from the build, and I'm excluding check, which in this case is a shorthand for tests, integration tests, and a whole bunch of other stuff, so that we just get, you know, a build.
A
We see a build running here; we should see this successfully complete, and then, once we've done that, we can also do a run, and we should see the service run, although it's probably going to fail since I don't have stuff port-forwarded locally. So, with the understanding that we have exactly one minute left here, I'm going to show you how you can continue this workshop at your own pace.
A
So if you go to handy-dandy YouTube here, and you go to the Spinnaker channel...
A
We can see a preview for whatever this is... I can't even read that. Okay, let's go here, and let's look for the Spinnaker YouTube channel.
A
A
And if you go down to the Spinnaker channel here and go to the videos list, you should be able to search in here, and if you search for Gardening Days workshop...
A
There you go. So you should see this new Spinnaker contributors workshop. You get to listen to me for another two or so odd hours, but we'll actually work through the Orca definition, we'll walk through that IntelliJ setup again, so you're not just hearing me talk through it, and you'll also get to look at the modifications that you'll make to Deck.
A
So at the end of that workshop, what you should be able to do is have a new stage that you can run within your Spinnaker cluster. And what we're going to do with this particular cluster that you have access to is we're actually going to leave it up through the remainder of this week. So if you're going to KubeCon, feel free to enjoy the conference and, as you have time, work through it; we'll tear down the cluster at the end of this week. But again, all of these instructions...
A
You can follow through on them at your own pace. So with that, thank you for coming; hope you learned a lot of stuff, and, you know, if you have questions, I'll be around the rest of today, and I'll be at KubeCon. So if you want to come by the Armory booth, I'm happy to answer questions as well. Thank you to Christos, Brian, and Alfredo here for helping out today, and hope you have a great rest of the conference.