From YouTube: Cohort Day 2 - Ortelius with Jenkins
A: So there are two ways we can do this for the getting-started session. We can go over how to configure Ortelius in a Kubernetes cluster, or we can get into the Jenkins side of things and how we hook into Jenkins. Either way works; it doesn't matter to me, it's your choice, guys.
A: So we'll go into the Jenkins side of things first and give you a feel for how Ortelius is going to fit into your Jenkins pipeline.
A: Today we're going to look at one of the microservices for our demo application.
A: We have the Hipster Store demo application out there, and we're going to focus on just the pipeline for one service. Typically you're going to have a pipeline, or a workflow, for every single service out there. The one we're going to be working with is the email service.
A: So we have our pipeline here, and we're just going to focus on that one service and some of the data we want to capture. If we go into one of these older runs, you'll see that in this case the services are being built as containers.
A: The Jenkins job is what's actually going to do the Docker build. We're going to grab some of the container information: the digest from where the image was pushed, the information about the repo itself, and the Helm chart that's actually used to install the service into the cluster. That's the kind of information we're going to gather from the Jenkins pipeline.
A: This is the actual service. The service itself isn't that important for what we're doing, but basically it's a Python application that provides the service. From our point of view, what the service does and how it's put together don't matter that much.
A: What we want to do is look at how we can gather information about that service. Remember, in the Ortelius world everything is a component, or a version of a component.
A: What's the user ID that we're going to log in with, and the password that we have? Actually, let me check; I might be hitting the wrong server here.
A: While that's loading: we're basically going to need this information here.
A: Nope, everybody's out there. So, this part here you can put into Jenkins at the system environment level.
A: You can set this up as regular credentials, credential objects and things like that, at the Jenkins level.
A: You can configure the user ID and password at the Jenkins system level, so in the pipeline definition they're just environment variables that you reference. For simplicity, so people can follow along, I've hard-coded them here; exposing the user ID and password like this is not a requirement, since it can be handled at the Jenkins level. The next pieces are the application and the application version that we're going to package this microservice into.
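The credential handling just described can be sketched in a couple of lines of pipeline shell. DHUSER and DHPASS are assumed variable names, not an Ortelius requirement; in a real Jenkins setup they would come from a credentials binding or a system-level environment variable rather than the placeholder defaults shown here.

```shell
# Read the Ortelius login from the environment instead of hard-coding
# it in the Jenkinsfile. The defaults are placeholders so the sketch
# runs standalone; Jenkins would supply the real values.
DHUSER="${DHUSER:-admin}"      # would come from a Jenkins credential
DHPASS="${DHPASS:-changeme}"   # never echo the real password in logs
echo "user=${DHUSER}"
```

The point is only that the Jenkinsfile references names, not secrets.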
A: One of the things we do at the component level is say which application this belongs to, or who the consuming application is. In the Ortelius world, everything is based on a dotted directory structure.
A: GLOBAL is the highest level, and then Santa Fe Software is going to be our next level. The online store and candy store are the different domains underneath that, and then we're going to deploy out to an AWS environment.
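The dotted hierarchy just described produces fully qualified names like the following; the exact domain names here are illustrative, not the demo's literal values.

```shell
# Build a fully qualified component name from the dotted domain
# hierarchy: GLOBAL -> company -> domain -> component.
DOMAIN="GLOBAL.Santa Fe Software.Online Store"
COMPNAME="${DOMAIN}.emailservice"
echo "$COMPNAME"
echo "${COMPNAME##*.}"   # the leaf component name
```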
A: From what I remember, this one is going to build, and I don't think we're going to do the deploy. Oh, we do; we record the deployment. In this case we're going to record that the deployment went out to this environment. An environment in the Ortelius world is just a set of endpoints.
A: It's just a way to group things together. You could have an endpoint that's your database server, your Kubernetes cluster as an endpoint, and then maybe an on-prem VM that's running the monolith, for example. The environment can be comprised of any number and any types of endpoints.
A: Basically, the environment represents a place that we deployed to. The next part is the component name; in this case we're working with our email service and its base component version. This is the semantic version that we're going to be utilizing. Let me see if I'm logged in here and change up our filter.
A: Sometimes I'll route through my... there it is.
A: Email. So here's our email service coming across, and we can see that we did our deployment over to the AWS cluster. Remember, we have the semantic version, 1.2.0.
A: The next part is going to be the build number, the job number coming from Jenkins, and then we have the git commit. For that last part we put an underscore-g or dash-g just to signify that this is the actual commit, which lets us parse the string very easily.
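The version-string convention just described can be sketched like this. The values are illustrative; in a real Jenkins job, BUILD_NUMBER is supplied by Jenkins and the short SHA comes from the git log.

```shell
# Assemble the component version: base semantic version, Jenkins build
# number, then the short git commit prefixed with "g" so it splits off
# cleanly when parsed later.
COMPVERSION="v1.2.0"
BUILD_NUMBER="103"      # supplied by Jenkins in a real pipeline
SHORT_SHA="a1b2c3d"
FULL_VERSION="${COMPVERSION}.${BUILD_NUMBER}-g${SHORT_SHA}"
echo "$FULL_VERSION"

# Parsing: everything after the last "-g" is the commit.
COMMIT="${FULL_VERSION##*-g}"
echo "$COMMIT"
```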
A: So we have our service name, our component name, and the base semantic version. In this case we're adding a custom action to it, the Helm chart action, which allows us to deploy from Ortelius using a Helm chart: the chart name, the chart version. Sometimes the chart version may be derived.
A: Then the image registry; this is where we push our images to. I like Quay over Docker Hub: it doesn't have the restrictions Docker Hub has, and it's much more stable. We're just going to tag it as latest. And then this is some of the new service catalog data that we're pulling in: service owner, service owner email, phone number. So this is kind of the base information that we need to set up.
A: Some of this information I'm looking at putting into a TOML file and reading it in, exposing it as environment variables, instead of having it as part of the Jenkinsfile. The reason is that we've run into situations where not every developer has access to update the Jenkinsfiles; those are controlled by another group.
A: So if they want to change the service's owner or the service phone number, they don't have access to do that, because it's more of a generic Jenkinsfile. We're looking at a way to have a file, either TOML or YAML, in the repo that the developers actually do have access to, where they can define this information.
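The developer-owned metadata file idea can be sketched as below: keep the service-catalog fields in a TOML file in the repo, then surface them as environment variables in the pipeline. The file name, keys, and the naive parser are all assumptions for illustration, not an Ortelius convention.

```shell
# Write a sample metadata file the way a developer would commit it.
cat > component.toml <<'EOF'
ServiceOwner = "Jane Doe"
ServiceOwnerEmail = "jane@example.com"
ServiceOwnerPhone = "555-0100"
EOF

# Naive read: turn each KEY = "value" line into an exported variable.
while IFS='=' read -r key val; do
  key=$(echo "$key" | tr -d ' ')
  val=$(echo "$val" | sed 's/^ *"//; s/" *$//')
  export "$key=$val"
done < component.toml

echo "$ServiceOwnerEmail"
```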
A: Now we do some shell scripting magic. One of the things we're doing here is determining the git branch. There are some crazy git commands to figure out what branch you're on, and that's going to provide us the branch name. This will work on basically any setup.
A: What ends up happening on the Jenkins side is that Jenkins will create a detached checkout for the job, and because of that it gets into this weird state where you have to figure out which branch you came from; that's what this crazy command does. Then for the git URL, another git command figures out what the remote URL is, whether it's the SSH version or the HTTPS version.
A: That just saves any hiccups down the road where you can't get logged into a repo because somebody forgot to give you access, so whenever possible, use the SSH version. That's going to be our full URL. Another thing we want is the git repo name.
A: Here we just do some sed commands to pull out the repo name, and that gives us, basically, the organization/repository; in our case, that is where this repo is stored.
A: Where's email... it'll look something like ortelius/store-emailservice. Oh, there it is. That part is what will end up in that variable, and then from there we grab the short commit off of the git log.
A: So you have your base version, in our case the email service, and the variant is going to be the branch that you're on. This could be the PR that's being used; it could be a branch like maintenance; it could be a feature branch, say new-ui. What we've recognized is that within a branch, or variant, you can have additional semantic versions.
A: Let's say we're on the maintenance branch and we're going to iterate through multiple builds on it that we push out to our cluster. We want to keep track that the variant is the maintenance branch, and that within it we're iterating from the 1.2.0...
A: ...of one build number to the next, doing multiple builds for that variant. So I would recommend using a variant in the Ortelius world. Here we'll see that we have our service name and then our variant, which in this case is master, and within master we're iterating through the builds. You can see here we didn't do a check-in, so we didn't get a new commit, but we are doing new builds.
A: Basically this is just me re-running the jobs, but it gives us new component versions on the Ortelius side that we can deploy. From there we tie all of that together; that's what this is doing here, pulling the whole version string together. Now, one of the things we're adding for the service catalog is bringing in the README file as part of that process.
A: So we're checking to see if there's a README file locally; this format is actually going to change a little bit down the road, but basically we're checking for a README that we want to associate with our image. Then the image tag: this step creates the tag that we're going to use on the image that we push to Quay.
A: You can use whatever format you already have in place; just take this off, or supply your own image tag format, even if it's just latest. Same concept. We tag everything with the branch, which could be master, our component version number, and the commit; it's just handy to have a unique tag out there. Now we're actually going to get into this step. So, all of this...
A: When you implement Ortelius into your pipeline, this is kind of the required piece, the hard-coded "provide me data" part.
A: Then your build step and your push. I just put in a generic build and push, but you can plop your own in.
A: You're going to want to put this part in pre-build; then you do your build step, and post-build we do one magic command after the push.
So
one
of
the
weird
things
with
docker
is
you
don't
get
a
docker
digest
until
the
image
has
been
pushed
to
a
remote
registry?
A: Any image that's been built locally doesn't have a digest associated with it. You don't get the digest until you actually do the push, and from there you can inspect that actual tag against the registry to get the image digest. It's a weird quirk, but that step has to be done post-push.
A: So you put your build stuff in here, however you're doing it. We've had people use Kaniko builds, Docker builds; there's a bunch of different build tools, but in general that's what happens. Then, post-push, we grab our digest, and we record all of this with Ortelius.
A: Basically, we're going to start pushing all this information up to Ortelius for this component version. Now, one of the things we're doing here is saving the information we're gathering and persisting it to a file, and here's the reason.
A: Let's say we have multiple services building in a single workflow instead of one workflow per service. If we're going to build four services, one right after another, we'll run this step for each service that's getting created, but keep persisting to the same file. Then we'll use that file later on, when we record that a deployment happened.
A: In this case we're going to update our component, telling Ortelius there's a new version out there. Ortelius comes back and says: okay, I just created a new version for you, here's the information. We put that into the JSON file, and then eventually we record the deployment, where we utilize that information, and the application version information, later on. You'll notice that at the component level we don't do anything with the application.
A: When the service is being created, you may not know yet what the application is, or who the consumers are; we deal with that at deployment time. So here we're actually recording the deployment being done, and we record it by giving it a deployment date and a deployment environment.
A: Now, we could actually do the deployment from Ortelius, and we'd just leave off a couple of parameters at that level. If we do the deployment through Ortelius, we execute this custom action to do it. It just depends on how you have things plumbed together: whether the Jenkins pipeline continues doing the deployment, or whether you switch over to Ortelius doing it. Either way.
A: It doesn't matter to us; we're going to gather the information either way. One advantage of using Ortelius to do the deployment is that when we deploy an application version to the environment, we incrementally figure out what's changed and only deploy the changed pieces for that application version, so we're not redeploying everything all the time. Typically the deployment step will be somewhere else in your pipeline; you may do a bunch of security scans, some test cases, and then eventually do the deployment.
A: So in your pipeline you probably won't see the update-component and the deploy next to each other like in this sample; they'll be spread apart across your pipeline process. The end result when we actually run this: let's get back to our email service.
A: We can see it running the shell script; that's the derived-values part that we're going through and figuring out. We've taken our Dockerfile, tagged the image with our new build number, and we have that commit out here.
A: Yeah, so the overall goal is to be able to capture what's happening in your pipeline and record it into Ortelius. We capture that information, and then we can build up the relationships around it: how things are being consumed and related together.
A: Basically, you can cut and paste this in and change your domain names, what you're going to call things, your login. That's what I would call the hard-coded piece, just giving us some names. Some of this could theoretically be derived. For example, you may have one domain out there that everybody's going to use, because you haven't gotten to a domain-driven-design level yet, so you just throw everything...
A: ...into, say, a company store level, which is fine: store services. And then something like the email service name could come from the job name, for example, instead of hard-coding it.
A: You can use the workflow name for that. There are so many variables that people expose in their Jenkinsfiles and Jenkins libraries that a lot of this can be derived. Then in the next section you plop in the derived values, and that all goes before your build step. Post-build, you grab your digest and add in, typically, these two steps, and that's it; you're up and going with your pipeline.
A: Now, people get fancy down the road; they'll move this stuff into functions, Groovy functions or methods, or they may have something like the build step...
A
They'll
take
this
and
put
it
into
a
groovy
library
and
they'll,
say:
they'll
they'll
do
the
build
in
the
library
and
then
what
you
do
is
in
that
library
just
add
on
the
the
order
of
these
pieces.
A: That's a really good way to think of it, Sasha: we're just doing some additional logging. We're going to log what you're creating in your Jenkinsfile. That's how you can think of it; we're adding additional logging steps in there.
B: At the moment everything's done, like you just mentioned, with Groovy, and I don't have control over that stuff; it's done by the run team. I've got the pipelines; I can use them whenever I want, even create my own. That's why I asked the question of how to integrate Jenkins. I could integrate this into Jenkins.
A: Yeah. So, do you have your Jenkinsfile handy?
B: We always have to use a ci folder, and in there there's a config.json that Jenkins looks at. Could I add Ortelius' configuration into that config.json as an extra?
A: Yes, I think you can. We would be adding to that JSON this information up here: you know, what is your component, and so on.
A: From the config.json, those values are going to get loaded into the workflow, and then you can reference them at that level.
A: Okay. So instead of, say, COMPVERSION, it may be env.COMPVERSION, because COMPVERSION is what we've defined up here.
A: It'll just be however you reference the variables from the config.json.
A: I don't know whether the config.json will allow you to run commands; you'll have to see if it does or not. My guess is it doesn't.
A: Okay, so, pre-build. You can actually do a lot of this post-build as well, but basically you want to log this derived information, the derived image digest. Here we're using the derived values, but if you ignore that and you're just using pre-defined values, we can move a lot of this to post-build, and you'll just need to add in this dh call.
A: It doesn't really matter which pipeline tool it is, either, whether it's Jenkins or Cloud Build. If you look at our repos, we're using this exact same process, but under Google Cloud Build.
A: That's correct; the library is still named under DeployHub, even though the repository, if you want to look at the source code for it, is this compupdate one.
A: You'll need to install the DeployHub library. This is all the documentation about the parameters, and then the code; it's growing, I have some more updates. Then there is, oops, the supporting API library, which deals with things like how to POST and GET JSON and a lot of the low-level logging. At the API level you'll see where we're wrapping the RESTful API calls: you know, get the deploy, get an application by application ID.
A: So if you need anything on that side, that's the driver there that we just looked at. That's what you're working with when you're back at this level.
A: When you do a pip install of deployhub, it'll go and put it out here with all the corresponding dependencies. It's a pretty simple install.
A: But Sasha, you will have some stuff that you could put into your config.json, though you'll still need a couple of little updates to your pipeline file, your Jenkinsfile. Down the road you could probably wrap those, or move them, into the Jenkins libraries that they have.
A: We have some folks that have a library just to do the build, and another library just to do the deployment through Helm, so they're actually deploying with Helm from Jenkins. We just stick this part in the build section, where after the component's been built we log the information about the build, and then again when they do the deployment from Jenkins using Helm.
B: So Jenkins, when it sees a Jenkinsfile, will automatically pick that up, right?

A: Yep.
C: Steve, I have a few questions. Can you go to the Ortelius dashboard?
A: Yeah, let me go over to this one. Yes, absolutely.
C: So it means that for every microservice, if I have, let's say, 10 microservices, I have 10 components listed in the Ortelius dashboard?

A: Exactly, yes, that's right.

C: Another thing, because we're talking about building the pipelines. What I feel from the last event I attended is that people are questioning this: say you have microservices and you trigger the build, but there are quite a few different tools in use.
A: Yeah, and that's why we went with the Python library: it basically runs as a shell command. It gets installed into those pipeline tools and invoked from the shell, so here we're running a shell command inside of Jenkins. If we look at...
A: ...one of ours; let me get the docs one. This is actually being done through Google Cloud Build; it's our documentation website, and you'll see we're doing something very similar: setting up those environment variables, the name of the application, the repo, those types of things. You can see we're deriving the version number again as part of that. It's the same data, just expressed the way Google Cloud Build wants it.
A: It's the same concept; we'll fit into any of the CI/CD tools, just with slightly different implementations based on their language set. This one is all based on running...
A: ...containers. This is actually running a Docker image: we take our dh program and wrap it up into a container that's ready to go, and you can run that container instead of installing the Python library. Our deployhub compupdate image has everything installed on it already. So the short answer is: we can fit into any of those tools.
A: Let's take a look at a service. I went into the cart service, a specific version of it, version 150. In the middle here is the version of the cart service, and then these are the consuming applications.
A: Tracy calls this the blast radius. If we break version 150 here, these are the application versions that are going to be affected by it. If I blow this up, I'm going to take out these two application versions.
A: In this case we can see that we're consuming cart service 150 here, along with all these other services at this level. Now, one thing we recognize is that this layout is not going to scale to 100 or 200 services, so that's on our to-do list to rework; same with this.
A: This is another view of that same map: in the middle is the application, with all the consumed services around it. To answer your question: do we have a map showing service-to-service interaction? Not yet. That is something we're going to be adding.
A: This is everything I'm consuming, and I want to make sure I have all the right versions in my runtime environment for this version of my application to run. Down the road we're going to take service mesh information and Prometheus/Grafana-style logging, telemetry, and metrics and overlay them on top of this as well, so we'll be able to see service-to-service transactions. We're also going to introduce what's called a component set.
A: Let's say we want the payment service and the shipping service, these two here, to be tightly coupled so they're always deployed together: version 16 of the payment service and version 17 of the shipping service always go out together. That's what we're calling a component set; we'll tightly couple those versions of the services together in the component set, and then the application will consume the component set at that level.
B: Yeah, it's all broken down into 150 service components, basically, if we're talking Ortelius terms. And I want to put that in Ortelius.

A: Yeah, you'll need to.
A: So, just to give you a highlight of some of the additional service catalog data, let me see what I can remember.
C: What I'd like in the dashboard, when you have a visualizer (I see we discussed before that service mesh, GitOps, and that kind of thing are on the roadmap), is this: I click on the service, and by clicking the service I can add the sidecar to it. Then automatically, behind the curtain, the service mesh is integrated, and when I click in there, I go to the Linkerd or Kiali dashboard and visualize all the metrics.
C: That's my dream, because by clicking on the services you can integrate the sidecar and the service mesh; you don't have to write so many YAML files. I think that's something everyone is looking forward to. If that is going to happen, I'll love it; this is absolutely what people would like about it.
A: Yeah. I can't find the application version I was playing with, but some of the things we're adding in, like the PagerDuty service URL, will be clickable links that take you to the data on the service: what is the escalation policy, who's the contact, that type of thing. Those live at the component level.
A: One of the tricky things you're talking about is visualizing that information across the pipeline. There are tools out there that will look at just the production cluster; they can look at the service mesh for production and give you a view of that. But this service has made it through the pipeline; it's gone from dev to QA to production. I want to be able to see how the service mesh is acting and configured in QA versus how it's configured in production.
A: That's one of the challenges: we sit so high up in the view of the world that we're looking across all the clusters, all the environments. What we'll probably end up doing is adding additional boxes here that list the environments this service has been deployed to, with links that say view service mesh, view the container logs.
A: Those types of things, view tracing; those are going to be done at an environment-by-environment level. And then it gets even a little more complicated, because if you look at the prod environment, companies like Uber or Airbnb run multiple clusters in every single AWS region in the world.
A: The number of clusters somebody like Airbnb is running is in the thousands, just to support a single service. So there's this additional layer that we have to navigate through, and we have to work on the visualization side. But I hear what you're saying; I want to get there too. We've just got to figure out the right roadmap to get that all pulled together.
A: Oh, for serverless: a Lambda function is just another component version. For us it doesn't matter, because we don't store anything; we just do pointers. We can point to the git repo that's being used to store the Lambda code, for example; there's where it is. When we deploy to Lambda, we record that we've run, basically, the AWS command to deploy the function into the serverless environment; we record all that information.
A: From there we could still have something like a Swagger endpoint file describing how to interact with that Lambda function; we can capture those details. License consumption and CVEs probably aren't going to be as critical. I haven't figured out how to scan a Lambda; you can actually scan, say, the Python code for CVEs before it's placed up into AWS, but that's the type of thing we're working on gathering. The Swagger will be just for this service: what is this...
A: ...what are the endpoints for this particular service? We'll capture that. Hope that helps.
A: The filter... oh, that's right. So let's say we have a MongoDB; internally, we assign a type to a component.
A: In this case, an application server would be something like a Tomcat or a Jetty, a more traditional server, and the Kubernetes definition says this endpoint knows how to talk to Kubernetes. What we do with the component types and the endpoint types is automatically route components to the right endpoints at deployment time, using the Ortelius deployment engine, as part of the deployment process.
A: Let's take a traditional application: we're going to deploy an EAR file, so we have an EAR component and a database component, and we have two endpoints in our environment, one for the database server and one for the Tomcat server. When we go through and do our deployment, the deployment engine will go: oh, I have this component, it needs to go to the application server; that's the EAR file, so I'll route it over to the Tomcat server.
A: My second component is the MongoDB one; it's a MongoDB type, so I know automatically to route it over to the database server as part of that process. That's where the custom types come into play. If you're not using the Ortelius deployment engine, they're more just for tagging and reference.
A: And then it will associate over to the components. That's one of the key things: a component really isn't a thing itself, it's just a pointer to a thing. You can use a component for your Terraform, for your test cases, for your database, your containers, your serverless; even ConfigMaps can be components. What that allows us to do, when we look at an application...
A: ...I picked a bad one. Oh, there it is, at the bottom. We can associate different versions of those components with that version of the application. You can think of application versions as the packaging step: we pull different versions of the components into that packaged version of the application.
A
So if you need a specific version of your Terraform, a specific version of your ConfigMap, or a specific version of your Istio service mesh to be defined, all of those can be component versions that we pull together in one place, so we understand what the runtime environment for that application needs to be.
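The "application version as packaging step" idea can be sketched like this. The names and shape of the data are illustrative, not Ortelius's real schema:

```python
# Sketch: an application version pins specific component versions,
# giving one place that describes the full runtime for a release.
# Component names and versions here are made-up examples.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class ComponentVersion:
    name: str
    version: str

@dataclass
class ApplicationVersion:
    name: str
    version: str
    components: list = field(default_factory=list)

app_v2 = ApplicationVersion(
    name="hipster-store",
    version="2.0",
    components=[
        ComponentVersion("emailservice", "1.4"),
        ComponentVersion("terraform-cluster", "0.9"),
        ComponentVersion("configmap-email", "3"),
        ComponentVersion("istio-mesh-config", "1.2"),
    ],
)

def runtime_manifest(app: ApplicationVersion) -> dict:
    """Answer, in one place: what exactly makes up this release?"""
    return {c.name: c.version for c in app.components}
```

A new application version would simply pin a different set of component versions, so any two releases can be diffed component by component.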
A
Yeah, and then, depending on whether you're using the Ortelius deployment engine or an external one, you can say, for example, that you want to make sure a particular test case was executed, through the actions that we have.
A
You can have pre- and post-deployment custom actions at the application level and at the component level. So after something is deployed, post-deployment, you could say: go run all the test cases for what I just deployed, and make sure those are executed and recorded as part of the process.
A
Let's say we record that test case 20 needs to run against this version of the application.
A
We can actually query the list of components and test cases from the Jenkins pipeline and say: this application has these test cases associated with it. I'll get that list and pass it off to Jenkins to do the heavy lifting of actually running the test cases for me. Wow, that's amazing.
B
Just to use a real example from Terraform: they use Atlantis to control who can apply, basically. So could Ortelius replace a tool like that when it comes to Terraform, or is it not designed for something like that? No.
A
It's not replacing that, but what you would want to do is make the association that that apply is happening. So you have... what was it called? Atlantis.
B
Yeah. So, for example, if I make a PR, I take my branch first, make a PR, and commit it; it creates a plan, but it doesn't apply it. The last gate is for someone from the cloud platform team to approve it for that environment, and then Atlantis runs the plan. Yeah.
A
Yeah. So what we would do is: are they running a Jenkins job to do the apply, or are they going into the Atlantis UI?
B
I don't have access to that part of it, but I assume it's probably the Atlantis UI, which is pretty good.
A
Let's assume we can get notified that the apply was performed. Then we would log that this version of the Terraform was applied to this environment, and that would get reflected back into Ortelius, either at the environment level or as a component version of the application.
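Recording an external deployment (such as an Atlantis apply) could look roughly like this. The record shape is hypothetical; Ortelius's actual API for logging external deployments may differ:

```python
# Sketch of building an audit record for an externally performed deployment,
# as described above: "this version of the Terraform was applied to this
# environment". Field names are illustrative assumptions.

import time

def deployment_record(component: str, version: str, environment: str,
                      source: str = "atlantis") -> dict:
    """Build the record to send back to the tracking system."""
    return {
        "component": component,
        "version": version,
        "environment": environment,
        "source": source,
        "timestamp": int(time.time()),
    }

rec = deployment_record("terraform-cluster", "0.9", "production")
# This record could then be posted to Ortelius and associated at the
# environment level, or as a component version of the application.
```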
B
Yeah, because what I'm finding is that Atlantis will show you the output of the plan in the PR, right? You can expand a little drop-down and it shows you what it ran. But once your PR has been approved and merged, it almost feels like a bit of a mission: you've got to go and find your PR again.
A
Like that: you just want to go look at the log to see how it defined your node pools. That information, that link, we can associate back at the component level, at the application level, or even at the environment level; any of the three places.
A
One of the things on the roadmap is to version environments, so that whenever you make an environment change, we record what happened in that environment.
A
You know, we didn't change any of the services, we didn't change any of the image tags, we didn't change the replica set count, but at the environment level we did do some cluster updates: we changed the node pools, we added in additional...
B
I think I might be getting the wrong terminology, so correct me, guys. Somebody changed the service mesh: there was a public and a private one, I think, and someone took the private one off. And I had a big presentation to do on the Monday and I couldn't fall back on it anymore.
A
Yep, exactly. So that's kind of where we're at. I know we went a bit roundabout, but we'll just wrap up here. From the Jenkins pipeline perspective, we have two pieces.
A
The first is what Jenkins has done in creating that new component version, and the second step is the deploy piece: we'll either deploy using Ortelius's deployment engine, or we'll record that Jenkins did the deployment. Either way, we have that second step in our pipeline to record what happened as part of the action. And, like you said, Sasha, we're...
A
Ortelius is logging what's happening in your pipeline process, and so we provide visibility into what's happening, because we know that a component was created. Now, some of the things we didn't get into that are available: if you create a new version of a component and you don't know which applications are consuming it, as long as we have a base definition, or at least one association, we can turn on a flag.
A
It's
like
auto,
increment
app
and
what
that
will
do
is
we'll
take
the
old
version
of
the
component
we'll
go
find
who
which
applications
teams
are
consuming
it
and
we'll
go,
create
new
application
versions
for
all
those
application
teams
consuming
the
new
version
of
the
service.
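The auto-increment behavior just described can be sketched like this. It is purely illustrative: the data layout and version-bumping rule are assumptions, not Ortelius's implementation.

```python
# Sketch of "auto increment": when a new component version appears, create
# a new application version for every application that consumed the old one.
# Application/component names and the minor-bump rule are hypothetical.

def auto_increment(apps: dict, component: str, new_version: str) -> dict:
    """Return bumped application versions for every app consuming `component`.

    `apps` maps app name -> (app_version, {component_name: component_version}).
    """
    bumped = {}
    for app, (app_version, components) in apps.items():
        if component in components:
            major, minor = app_version.split(".")
            new_app_version = f"{major}.{int(minor) + 1}"
            new_components = dict(components, **{component: new_version})
            bumped[app] = (new_app_version, new_components)
    return bumped

apps = {
    "hipster-store": ("2.0", {"emailservice": "1.4", "paymentservice": "2.1"}),
    "ops-portal": ("1.3", {"paymentservice": "2.1"}),
}

# A new emailservice only affects the applications that consume it.
bumped = auto_increment(apps, "emailservice", "1.5")
```

Each consuming team then sees a new application version pinned to the new component, which is their signal that there is something new to test.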
A
So they'll know they have a new version that they need to test. And there are even ways to kick off test workflows down the pipeline: do a deployment and then run the testing workflow as part of the process.
A
That's pretty radical, yeah. And, like you said, bringing in service mesh data, overlaying telemetry and health metrics...
A
You know, service SLOs, and how those map on top of it, plus all the service catalog data: that's where we're really trying to pull it all together, so you have a single place to go and view what's happening with a service. And, like I said, doing it for prod is easy.
A
If you just monitor prod and always look at what prod is doing, that's the easy part. On the Ortelius side, we want to look at it from the highest point of view, across every single cluster and every single environment, because you want to be able to compare: production is running great, but do we have a mirror of it in QA?
A
Are
we
in
sync
between
qa
and
production,
so
the
developer
needs
to
test
something
or
we're
getting
a
wobble
in
production?
Are
we
seeing
the
same
wobble
in
qa
or
if
we're
going
to
hit
qa
with
a
load
test,
how
that's
going
to
run
and
if
we
need
to
make
it,
you
know
we
need
to
have
those
comparisons
in
place
to
have
that
real
world
view
of
an
application
not
only
from
the
production
view,
but
from
what
the
developers
are
are
making
on
their
side.
You
know
their
little
little
changes.
C
One last recommendation or suggestion from my side: in the cloud native landscape there's a really valid case here. We talked about this before on Discord. One of the projects I really have an eye on is KubeVela. The problem it tries to solve is that you don't write a YAML file; you just write a configuration file, and it builds out the YAML file for you and deploys it to Kubernetes.
C
That's called the Open Application Model. They do it tremendously well, but later down the road they started doing things they weren't expected to do, like progressive delivery; that's not a job for KubeVela. They're putting so much into it that they lost focus on the problem they're trying to solve. So I think right now, with Ortelius and DeployHub, there are so many... I think we are living in a microservices world and we have floods of information.
A
There is also Shipwright; they're trying to do a small-YAML kind of world as well.
C
I think we have a build job for building the images that Docker is doing, and Docker is actually an entire stack doing so many things. Now people like to run tools like containerd and CRI-O, and build tools like Buildah and Kaniko, and there is so much out there. I think Shipwright is a union of Buildah and, I think, Kaniko; they have built in the union of both of those. So I think that's a great tool; I have tested it out.
C
I tested it maybe three months ago; I had never tried it before, but I'm really happy that now they are going to the CDF.
A
So hopefully that helps you think about how we can integrate into Jenkins. Next time we'll do one around the setup of Ortelius in Kubernetes, specifically more on your world, Sasha, with SSL certs and how to configure the ingress into the cluster for running Ortelius.
B
Yeah, so this was one of the most amazing sessions here. My goal is to run Ortelius and show them a real live demo of it, actually with one of the applications for the business being mapped. Because I did a demo for people and, sure, there was test data, you know, you've got this...
A
Yeah, so the changes to the Jenkins pipeline are minimal, but they still need to be made. If you need help, just reach out and I'll point you to where we need to make those updates. Then next time we'll get some time to go over the ingress for Ortelius into your... you're on AWS, right?
B
And RDS Aurora.
A
I
just
have
to
get
the
I
have
the
solution,
but
it's
not
together.
I
have
the
the
engine
x
running
with
the
reverse
proxy
directilius,
but
I
don't
have
it
with
a
an
ssl
assert
and
I
have
ssl
for
deploy
hub
running
over
on
google.
A
That's
running
the
sas
version,
so
that
has
all
the
ssl
stuff
in
it.
But
I
need
to
merge
the
two
and
then
you
know
on
the
azure
side,
we're
running
istio
server
smash.
We
have
ssl
cert
installed
into
istio,
so
there's
this
whole.
We
just
got
to
figure
out
which
one
we
want
to
yeah.
C
Sasha, where you have this kind of thing, I think it's the perfect time to do some blog post writing for it, or maybe put up a YouTube video for it, and for the Kubernetes content and service mesh I am absolutely up for it. I think we should do this kind of stuff regularly, because we are doing a lot of R&D work; we need to search and search on the internet.
C
What
are
the
tools
we
need
to
integrate
with
hoteliers
yeah
before
we
reinventing
the
wheel,
because
there
is
so
many
tools
out
there
that
I
think
is
help
our
job
really
easy.
So
I
think
we
have
to
do
this
kind
of
session,
and
tonight
steve
you
put
down
of
so
many
great
information
to
us
and
hopefully.
A
...actual help on the implementation steps. Now that we have the overview in place, we can do that as well. So thank you, everybody, for showing up today. If you have any questions, just reach out and we'll get this implemented.
A
I do two a week: one is at 4:30 Mountain Time on Tuesdays, or 3:30 Mountain Time, one of those two, but it's afternoon my time on Tuesdays; and then on Thursday mornings. Okay, yeah.
B
This
is
a
friday
morning,
all
right
yeah.
This
is
best
for
me.
Okay,
I
try
to
click
on
the
invite
links,
but
they
I
mean
those
are
only
for
this
one
and
the
next
one
right,
but
invite
links.
I
don't
know
they
didn't
work
for
me.
I
don't
know
if
it
was
just
me.
Maybe.
A
I
believe
they
are
on
either
double
check
the
the
shared
calendar,
but
okay,
they
should
be
on
the
shared
calendar.
So
that's
one
place
double
check
and
the
to
see
I
may
go
in
and
change
the
invite
link
because
I
may
be
under
tracy's.