Description
Ortelius solves a complex problem: the tracking and organization of microservices across clusters. In this video, Steve Taylor shows us just how Ortelius does its job.
A
Looks like we've got a lot of people doing some work in the background, or other people in the room doing work as well. So today we're going to go over how Ortelius and the CI/CD process fit together. I'm going to take a look at what we have as far as the CI process, as well as some of the pieces where we insert in, and this ties into the blog that Sasha put together.
A
So let me go ahead and share my screen.
A
One of the things we're trying to achieve when we think about the CI/CD process is the capturing of new components as they're being created. What I have here is the Ortelius user account on the SaaS version, and what we're going to see is that we have the website, the user guide, and some of these are built off of a maintenance branch or off of the master branch.
A
The way we set up the triggers in Google Cloud Build was to look at any branch that comes in, and we're going to go ahead and build and record it. Some of them will only do it on a pull request, where we're only looking at the main branch. So if we pick one of these, one of the last ones deployed, this is the container for the documentation.
A
What we can see is that we hooked into our Cloud Build and recorded information about the build. We have the chart that's used to install it, and then we have information about what was produced in the build process: the registry, the digest, the tags, and the Git info.
A
Now, why this is important: if you look at the CI/CD process overall, anytime there's a build that happens, we want to treat that as a potential version that can go out to production. If we go back to our list here, we may have a lot of versions of components that never make it out to production. You can see in this list that a bunch of them were never deployed; the last one that was is this one, build 985, or deployment 985.
A
So that's one of the things we want in the CI/CD process: to capture information about what's being created, where it was stored, and how we can get to it. Because everything we're dealing with here is basically Docker containers, that makes it pretty easy. We're going to be pushing to a registry, we're going to know which registry it was pushed to, and we want to grab some information about that.
A
So let's go ahead and take a look at how we actually did that initial capture.
A
Sasha's blog does a really good job explaining the different steps in the Cloud Build, so I'm not going to go into super detail about what's happening at that level, but I'll point out some of the key points. Things that we need to know are: what's the application that we're going to associate this version of the component to, and what is the name of the component, like the base name.
A
We showed him what we were doing in one of our old demos a couple of years ago, and he said: your branches, you should treat those as variants. A variant you can think of as like a feature branch, and within a variant you can still have versions.
A
So I could have a new microservice that we're going to be building, or let's say a new sign-on where we have to deal with privacy issues, so we may have new terms of agreement, and that may be the feature. The branch would be the terms-of-agreement work, which we want to track as a variant versus what's out there as part of the existing pieces. So the variant is optional, and then we have our versions. These are semantic versions.
A
In this case I've basically hard-coded the beginning part. For the ending part, you'll see I'm doing a little trick with the Git repo, where I'm actually going in and counting the number of commits on the branch, and that gives me something like a build number, for lack of a better word. Because on the Google side, in Cloud Build, our build ID is this weird hexadecimal number: it's great for uniqueness and things like that, but it's not practical for the versioning schema. So that's where I do this little trick to get the commit count, and then I like to tack on the short SHA, prefixing it with a dash g to identify that it's a Git commit, and that's how, at the end of the day, we end up with our name.
A
Then there's our semantic version, with the Git commit count incrementing and the short commit right here as part of that name. That gives us a way to identify very quickly this version of the component, so that's where we're setting up some of the information.
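A minimal sketch of the commit-count trick described above, assuming git is available in the build step. The 1.0.0 prefix and the dash-g convention are just the pattern being described, not the exact Cloud Build step.

```python
# Sketch only: build the component version from a hard-coded prefix,
# the commit count on the branch, and the short SHA.
import subprocess

def git(*args: str) -> str:
    return subprocess.check_output(["git", *args], text=True).strip()

commit_count = git("rev-list", "--count", "HEAD")   # stands in for a build number
short_sha = git("rev-parse", "--short", "HEAD")     # short commit id

component_version = f"v1.0.0.{commit_count}-g{short_sha}"  # e.g. v1.0.0.985-g1a2b3c4
print(component_version)
```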
A
We're going to also... question for you? Yeah, go ahead. Are the...
A
Yeah, it's a weird quirk that we started out with in the original version of Ortelius, back when it was called Release Engineer. I don't know if you guys remember, but there's a whole piece of Ortelius called DM scripting.
A
It's a scripting language that Phil Gibbs ended up writing, and because it's its own language, things like the dash here, when you get into the scripting language, get interpreted as a minus sign. It tries to subtract the two words, which throws an error, so we actually have to go through and scrub the names and turn the dashes into underscores.
A
The other thing you'll see is that there are no periods either, because periods are part of the domain name structure, like global.ortelius.saas. So for user IDs, if you do first-dot-last-name, we actually convert that to first-underscore-last-name. There are a couple of weird quirks like that we've had to deal with and have been carrying along; when we eventually get to the point where we can rewrite the deployment engine, we can fix those restrictions down the road.
A
Yeah, it's a weird one that just got carried along, and unpinning it is a bigger project than I was hoping it would be. Some of the other things that we're grabbing or setting up at this point: where is this going to get pushed to, and the action that we're going to run to do the deployment.
A
This allows us some flexibility: if we have to do, say, a Salesforce deployment, we'd be using a different custom action instead of a Helm chart. In this case, we want to know what our chart name is.
A
The chart name can also include the ChartMuseum URL and pieces like that; that's in the documentation. Then there's which environment we're actually going to associate with and deploy this to. Other things we're doing at this level: figuring out the tag name for the docker push. Again we're pulling in the commit count, and it's basically going to be a mirror of what we have at the version level here.
A
Then dh updatecomp with all these parameters: you can see all the environment variables that we defined earlier, we're just passing those across, and this is where some of the keywords come into play.
A
So when we update the attributes, everything with a compattr is updating attributes all the way through into the UI and storing them in the database. Certain keywords we pick out and put in certain places on the detail dialog. Things like that are coming down the road for the service catalog: Slack channel, PagerDuty URL, logs. Those types of attributes will map across as part of that process.
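A hedged sketch of what that dh updatecomp call might look like when driven from a CI step, shown in Python here only to keep the examples in one language. The flag names and values are illustrative; check them against the dh markdown mentioned later before relying on them.

```python
# Illustrative only: invoke the dh driver the way a CI step would.
# Flag names follow the pattern discussed above; confirm them against the dh docs.
import subprocess

cmd = [
    "dh", "updatecomp",
    "--compname", "ortelius_docs",               # base component name (dashes scrubbed to underscores)
    "--compvariant", "main",                     # branch treated as the variant
    "--compversion", "v1.0.0.985-g1a2b3c4",      # version built from the commit count + short SHA
    "--compattr", "DockerRepo:myregistry/ortelius_docs",
    "--compattr", "GitCommit:1a2b3c4",
    "--compattr", "Chart:chartrepo/ortelius_docs",
]
subprocess.run(cmd, check=True)
```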
A
So this dh updatecomp is actually a Python script, and it's separated into two pieces. There's the main driver; if I can get up to the top here and close this for you, the main driver is basically the executable part of it, and then there's the API piece. The API piece has basically all the RESTful API calls that we need to make. So if we want to clone, well, that's a bad example, or, like here, assign a component to an application.
A
Basically we pass in some parameters. Everything in Ortelius is stored based on an integer ID for the object, so we pass around a bunch of object IDs, and in this case we're making a couple of RESTful API calls out to the endpoint to make those associations.
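A hypothetical sketch of the kind of thin wrapper the API piece provides: object IDs in, a REST call out. The endpoint path and parameter names below are invented for illustration; they are not the real Ortelius API.

```python
# Hypothetical wrapper shape only: the real calls live in the library's API module.
import requests

def add_component_to_application(base_url: str, session: requests.Session,
                                 app_id: int, comp_id: int) -> dict:
    # Illustrative path; everything is passed around as integer object ids.
    resp = session.get(f"{base_url}/API/application/{app_id}/assign/{comp_id}")
    resp.raise_for_status()
    return resp.json()
```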
A
What we've tried to do on this side is really simplify things by wrapping all the API calls into something that's a little more meaningful at the Python level. The driver, like I said, has a couple of different modes: whether we're going to do a deployment, approve something, or move something. The one we're looking at now is updatecomp, which updates the data about a component. A couple of other interesting flags on that: we can auto-increment the application version, and we can also auto-increment the component version.
A
So we can have it work in two modes: we can replace the existing component that was there previously, or we can keep creating new versions of the component, and then, when we create a new version of the component, we can go ahead and create a new version of the application, just by assigning components to applications.
A
Another thing that comes into play is key-value pairs. This is where we can pull in something like a properties file and load those in as attributes on the component version as well. So if you have key-value pairs that you want, like config maps, those can come in and be associated with the component version. Now, that's a separate step, so in our Cloud Build, if we wanted to do that, we'd actually have to run dh twice: once with updatecomp, and the other time with the key-value parameter. Some other things: importing and exporting objects, so you can move them between databases.
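A small sketch of the key-value idea: read a plain key=value properties file into pairs that a second dh pass could attach to the component version as attributes. The exact flag for that second pass is in the dh docs and is not spelled out here.

```python
# Sketch: turn a simple key=value properties file into a dict of attributes
# that could be attached to the component version on a second pass.
from pathlib import Path

def read_properties(path: str) -> dict[str, str]:
    pairs: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                         # skip blanks and comments
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

config = read_properties("app.properties")   # e.g. {"DB_HOST": "db.qa.local", ...}
print(config)
```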
A
...files and data between steps; that's just one of the things we had to do to help with persistence. Then there's just a list of all of the parameters that are available, and there's a corresponding markdown for this.
A
I got myself into a weird mode there... there it is, doc. So you're able to look at the markdown for both the API level, which is the deployhub piece, basically the deployment Python module, and then the driver has its markdown as well.
A
This whole Python piece gets pushed over to PyPI, so you can download it just by doing a pip install, and then incorporate it into your CI/CD process. Now, one of the things we're doing in this one, in our Cloud Build, is we're actually doing a deployment as well, so we're kind of wrappering both things at once: after everything's been updated, we're actually going to deploy to the deployment environment that we defined, which is going to be our AKS cluster.
A
So after everything gets updated, we're going to go ahead and deploy right after that. You could do it as two steps: you could do the updatecomp and then come back and do the deploy; we just did it as one wrapped step for convenience. So that's kind of what's happening at the CI level.
A
Like I said, we can break this apart. If I leave off the deploy env, we would just be updating the components and possibly, if we have auto-increment on, incrementing application versions.
A
So if you look at the application, this one's pretty simple. If you look at the last deployment, that was for the website. This one, I've got something crazy going on, it shouldn't look like that; I have a parameter set wrong. But we'll look at the docs one, which we were just working with, 85.
A
So that's kind of what's happening on the deployment side. Now, in a traditional CI process, at the CI level we'd probably want to do a check-in, build it, and deploy it to the dev environment, and that's kind of what we have going on with our deployment environment. Then, in the regular pipeline you think about, we've got to go to QA.
A
At the application level, I don't have it set up because the way we're doing QA is slightly different, but I would go ahead and pick which environment I want to go to. So I would have to have a QA environment or prod environment listed here, and I would go ahead and deploy that version of the application to that environment. Because we have everything basically tied at the component level, we have all the metadata that we need to retrieve the image out of the Docker registry.
A
We can do this and repeat it; it's basically rinse and repeat at any stage of the pipeline. Whether we have something going on at dev or at prod, we're going to do the same process. We're going to pass in different data, key-value pairs for the production environment versus key-value pairs for the development environment, but we'll be able to rinse and repeat and pass the data down the pipeline.
A
Any questions on that? Okay, we're going to jump over to another way to do this, because a lot of people still use Jenkins for the pipeline, and we actually have a Jenkins library, a Groovy library. It's also called deployhub, and it lives out in our...
A
...I think our Ortelius repo; I can't remember off the top of my head. But basically we can get into fancier stuff on the Jenkins side. So again we go through and define a bunch of variables just to make things easier, and then we get into being able to create a new component version.
A
I tried to mirror these between the Python library and the Groovy library; there's some divergence between the two, just because of the requirements that we initially had on the Jenkins side.
A
But basically this will go ahead and create us a new component version if we need one and then return the ID that we're going to use. Here we're going to update the attributes, basically the chart and the chart namespace, those types of things. Those attributes come across through basically a dictionary on the Groovy side, and we'll go ahead and pass that array of key-value pairs over to the update component attributes call.
A
It's the same concept underneath the covers; the Groovy library is making basically the same RESTful API calls, but we do have a little more flexibility. From this point, for example, one of our customers is doing a deployment and then checking to see if the deployment is successful; if it's not, then they automatically do a rollback of the deployment. So they deploy out to an environment and run their test cases through their Jenkins pipeline.
A
If they don't like it, then they automatically roll back to the previous version that they had out there. So there's a lot more flexibility on the Groovy library side, and there's full documentation.
A
It's off this weird Jenkins location; I'll find where it is for everybody, I can't remember where it is.
A
It's a weird one because the repo is off of the Jenkins site, so it's not under our organization. I'll find it and put it out in the Discord channel, but there are full Groovy docs that we've created for it and everything that happens around it. I'd love to get rid of the Groovy library and just do everything off of the Python library, but there's some functionality that's missing in the Python side that we need to deal with.
A
One of the things we've been moving to, and this one is dealing with Salesforce, here you can see, I've just stubbed it out, but basically we're calling the Python library from just a command-line call in the Jenkins script, in the Jenkinsfile. That's the other way I'm looking at it, but when you get into this, there's not as much flexibility to make those decisions, like the automatic rollback and pieces like that.
A
You know: gather information from the CI process, store it at the component level, make the associations that are going to be pulled together as part of an application version, and then be able to deploy it right then, or deploy it down the road. Now, one of the other modes that the Python library works in is being able to record a deployment, and this is where most people start: they'll have some sort of process, like Helm running, or in the Salesforce case they have Jenkins calling Salesforce through the Salesforce plugin, but we want to record what those deployments are doing. So there's another mode where we can pass in a list of components that were deployed to an environment.
A
The Python library will then go ahead and create an application version around those components. It's basically given a JSON file saying: here's the environment name, here's the application name, and here's the list of components that we're going to record for this deployment that this other tool did for us.
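The exact schema of that JSON file is documented with the Python library; the shape below is only a hypothetical illustration of the idea, an environment, an application, and the list of components that were deployed.

```python
# Hypothetical shape only, to illustrate recording a deployment done by
# another tool; check the dh docs for the real field names.
import json

deployment_record = {
    "environment": "qa",
    "application": "ortelius_web",
    "components": [
        {"name": "ortelius_docs", "version": "v1.0.0.985-g1a2b3c4"},
        {"name": "ortelius_web",  "version": "v1.0.0.312-g9f8e7d6"},
    ],
}

with open("deployment.json", "w") as f:
    json.dump(deployment_record, f, indent=2)
```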
A
One of the things I'm working on, and hopefully going to wrap up this week, is being able to take the output of a Kubernetes deployment description, the Deployment YAML from a running cluster, so we're basically reverse engineering the cluster to figure out what is in there. Because we're storing it, we can actually work our way backwards.
A
So in this case, this image is the image that is actually being run by the Kubernetes cluster. Now, because we have that image on our side, let me pick one of these.
A
We basically know the image, because we can concatenate together the registry and the tag, so we can actually go backwards and figure out the actual component version that is running in the cluster. That is one way we're going to be able to pull together what an application version running in a cluster is.
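A minimal sketch of that reverse-engineering idea: ask the cluster which images its Deployments are running in a namespace, so they can be matched back to component versions by registry plus tag. It assumes kubectl access; the namespace name is just an example.

```python
# Sketch: list the images a namespace is running so they can be matched
# back to component versions by registry + tag.
import json
import subprocess

def images_in_namespace(namespace: str) -> set[str]:
    out = subprocess.check_output(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        text=True,
    )
    images = set()
    for deploy in json.loads(out)["items"]:
        for container in deploy["spec"]["template"]["spec"]["containers"]:
            images.add(container["image"])   # e.g. myregistry/ortelius_web:v1.0.0.312-g9f8e7d6
    return images

print(images_in_namespace("ortelius"))
```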
A
Now, most people have been grouping their application versions together in a namespace, so that gives us a little more context to put around what an application is at that level.
A
One of the things we ran into is handling multiple container tags that point to the same digest. This customer is renaming, or re-tagging, as they go from stage to stage. They'll say qa-latest would be one of the tags they put on this digest, and then they'll do prod-latest; that way, when they get to their production deployment, they're always pulling the prod-latest tag as part of that process.
A
There it is. So, even though we don't have the digest at this level, we can actually do two things. You can run a kubectl command, I'm sorry, a docker command, to go query what the digest is based on this tag. The other way we're thinking about doing it is at the label level: we're actually going to label the digest into the deployment, just for speed's sake and convenience, and we'll put it at that level so we can retrieve it from there. So we'll be able to work our way backwards from either the image tag or the digest and get to the component.
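A sketch of the first option, resolving the digest behind a tag with a docker command; it relies on the image being present locally (pull it first if it is not), and RepoDigests is the field docker inspect exposes for it.

```python
# Sketch: resolve an image tag to its repo digest using docker inspect.
# The image must exist locally; docker pull it first if needed.
import subprocess

def digest_for(image: str) -> str:
    out = subprocess.check_output(
        ["docker", "inspect", "--format", "{{index .RepoDigests 0}}", image],
        text=True,
    )
    return out.strip()   # e.g. myregistry/ortelius_web@sha256:abc123...

print(digest_for("myregistry/ortelius_web:qa-latest"))
```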
A
...from inception to deployment, as part of the process.
A
Does that kind of make sense, how we're fitting together and hooking into the CI/CD process? Now, Sasha's blog goes into the details of what's happening at the Cloud Build level, so he'll answer all those questions for you on that.
B
And I'm wondering, can I consider this as a history of commits?
A
No, because let's say Jenkins was doing the build and it failed the compile. The next step would be to go ahead and tell Ortelius that it was successful, that a new component was created in the Docker registry, and because it never got to the Docker registry, we'd never record it.
A
Yeah, that's one of the things we need to do as a way to make the UI load faster: when you get a lot of these component versions and application versions that never made it out of the initial CI build, they do get in the way. So an archiving feature is something that we need to look at, and if you could open an issue for that, that would be great.
B
I can't... yeah, I don't know how to describe it; that idea just came to my mind.
D
Yeah, hi Steve. So, question one: how are we creating the dependency graph? And the second question: how are we checking the health status of a component? Like, suppose our component goes down, how are we able to detect that?
A
Right, so one of the things with components, and with applications, is we like to start with what's called a base version. For the base version we basically strip off the variant and the version number, and that becomes the base version that we start with. From there on out we then create version one of the component, and components, applications, and pretty much everything works the same way for versioning in Ortelius. So we'll go ahead and create the new version: we have our base version, which you can think of as version zero, and then version one is going to be the component that we just created. Then, if another update comes along, we'll go ahead and look for the parent; there's a whole parenting tree.
A
So through the version tree you know who your parent is, and you also know who your base version is. We keep those relationships as part of the database data.
A
It's based on the name. If you're creating a component in the same domain, like I had ortelius.saas, with the same base name, then it's going to be considered the same component that we need to update as part of that process.
A
And the same logic works with applications as part of that process.
A
Exactly. Now, there is a scenario, I can't remember which part enables it, but let's say you do a deployment into QA and you just deployed a single microservice.
A
I'm sorry, it's when you create a new microservice. So if you create a new microservice and you have five applications that are consuming it, you would then go ahead and find where the previous version of that component was used, in which applications. So we know there are these five different applications consuming the minus-one of our component version, and then we'll go ahead and create new application versions for all five applications consuming the new component version that we just created.
A
So there are some tricks that we do behind the scenes. But in order to do that, we need to have a starting point, and that's the hard part, getting the starting point. That's why I've been looking at basically reverse engineering a cluster, based on a namespace, to give us a starting point.
A
Ortelius and DeployHub don't do anything around transaction monitoring or health status, or anything like that. So in our proposal for the service catalog, one of the things we're going to add is links to monitoring tools. For example, for this microservice we know that the Datadog URL for this service is X, and that's some of the service catalog data that we want to embed at the component level.
A
Where are the logs for production? And for developers, when they're doing work and QA says something's broken, the first thing they want to do is go look at the logs, and if they know where the QA logs are for this cluster, for this microservice, and they can just get to them really quickly, they're going to be much happier. That's where it's this N-to-N relationship we have to deal with: this version of this component lives in these N environments, each of those N environments can have N clusters, and because of that we're going to have N logs for each microservice. So it kind of cascades out as we look at those relationships.
D
Sorry, also regarding that: my blog was about that area, basically observability. So I came across this: there are basically three kinds of data, logs, metrics, and traces, so basically the transactions. Most of the systems have different screens and there doesn't exist any correlation, so the manual labor is huge. Suppose some problem strikes us; we have to manually do all those things and correlate on our own. So the trend is moving towards a generic solution where we can get all three kinds of data together and the correlation between them. If the service goes down, we get the logs, and we can also get the traces and the metrics of the system that we want.
D
Also, you were mentioning the links for different tools; there's a standard being developed currently, OpenTelemetry, right? Basically, standardization is needed. We have lots of tools that are doing this, but there is no standardization in the format of the data, so the OpenTelemetry project is around that, and if we can utilize that data, maybe in the future we can show it directly.
A
Yeah, exactly, and I'd love to bring that data directly into Ortelius, but like you said, because there's no standard, we'd spend a lot of time chasing vendors to get their data. That's why right now the easy out is to just give a link to the other tool.
A
What's the email address and phone number of the owner of this microservice? What does the PagerDuty service sheet look like? Those types of things are important as well. And then, like you said, overall, if we could start bringing in some basic green/red status for a service at an environment level, that would be huge. Now, there's a whole other thing around policies as well: how is this service conforming to the policies?
A
You have business policies that we need to look at as well, and the rules that need to go around them. So there's a lot of data that we can associate with components, with a version of a component, and roll up at the application layer.
A
Now, one of the things I'm doing as part of the service catalog we're looking at, and it's kind of along the policy-ish side, is this: when we have a Docker container, let's say it's a Python container, Python is going to have all of its modules that were installed into the container. We can run a scan, a security scan, against that container, and there are two things we need to grab.
A
One is: what are all the licenses for all the modules? Because we need to know if we have some module with a license that the attorneys don't like. And the second one: what are the CVEs, the vulnerabilities that are out there? Those will need to roll up at the container level, so here are all the licenses and all the CVEs for this component version, and then from there, because of our relationships...
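A small sketch of the license half of that scan for a Python-based container: list the installed modules and the License field from their metadata. This is just one way to gather the raw data, not the actual Ortelius scanning step, and the CVE half would come from a separate scanner.

```python
# Sketch: collect installed-module license metadata inside a Python container.
# Covers the license half only; CVE data would come from a vulnerability scanner.
from importlib.metadata import distributions

licenses = {}
for dist in distributions():
    name = dist.metadata.get("Name", "unknown")
    licenses[name] = dist.metadata.get("License", "UNKNOWN")

for name, lic in sorted(licenses.items()):
    print(f"{name}: {lic}")
```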
A
Everybody asks for that, so if we can go through and roll these things up to the application layer, I can just click and say print, and within a couple of seconds I'm done giving somebody what my license consumption is. That's kind of where we're headed with the service catalog data.
A
And I'd love to be able to bring in telemetry and health status, those types of things, as well, so definitely keep a pulse on what's happening on the OpenTelemetry side for us.
A
Now, some of the things we could look at to start with: maybe just start with Prometheus, OpenTelemetry with Prometheus, or whatever way we can initially integrate, and wait for other tools to be brought into OpenTelemetry.
A
That could be a way, because we're coming from the development side, so we know what the developers want their application version to look like when it runs and when they get it to a cluster.
A
Now we have to take the other side and make sure that it's matched up, that we don't have stray transactions or somebody with a bad route. We want to be able to marry what the developers are wanting with what the operations side is giving them, to make sure both sides are meshing together, so we have a good view of what's happening for the application. Because one of the things we're hearing a lot now is...
A
...the "it works in my cluster" scenario. Part of it is that when people came to Kubernetes, they didn't take the opportunity to change, to make their development practices and their operations practices better. Basically, what they ended up doing is treating a Kubernetes cluster as a server, and everybody gets their own server. Just like we used to have server farms for every application team, now we're having cluster farms, with every single application team getting their own cluster, but there's no need for that.
A
If you look at the way Kubernetes is designed, it can scale to 500 nodes, I mean 5,000 nodes that you can scale to, so there's plenty of room for multiple application teams to live in the same cluster and be separated smartly, I should say, put the word smartly around it: smartly separate applications based on namespace. And when I talk about that, you want to have reusable pieces in your domain-driven design.
A
Those should be in a namespace that can be shared by other applications as part of the process. You don't want to have copied the same microservice into 15 namespaces, with the same container in all 15 namespaces, just because nobody wants to figure out namespace security and RBAC. You want those reusable pieces in one namespace, and then you set up the routing and the security so they can be shared across the 15 other namespaces as part of that process.
A
There are several bad habits that we're carrying forward from our past, and it's going to be a challenge to get people to change, but I think some of the things that will help will be Istio and service mesh routing.
E
Yeah, so there's been some pretty heavy technical stuff we've gone through. If anybody out there wants to have a further discussion on it, I'm totally open to chatting as well, because I know some of you out there are sort of new to this space. So if it was overwhelming today, don't feel that you can't reach out and say, okay, I didn't understand this part. That's totally fine.
E
Because we are going to start coding next week; Steve's going to start pushing issues out there. And I think that we have everything set up. We still have the Cloud Build stuff that Sasha documented; we still have to apply that to building our Ortelius container. Is that correct? No, it's all done, all done. So, hey.
A
So on that front, I believe Zach, I saw, was picking up some of the microservices that we did back in October, to add in the Helm part and the Cloud Build part. There are some of those microservices that we're adding in that will need the Cloud Build and those pieces added to them in Helm charts, but that's just part of the new development that we have going forward here. So everything is pretty much all in sync; the Azure clusters are up there.
A
The one thing I have to do is get some data for people to work with, and that's on my list to get out this week.
A
So, does anybody have another topic that they'd like to go over in a couple of weeks?
A
I don't mind doing these at all.