Description
The Ortelius TOC meets every 6 months to review the product roadmap for the Ortelius Open Source Microservice Management platform. In this meeting, the primary enhancement list is reviewed and discussed.
A
Okay, all right, you guys should be able to see this in slideshow mode. Are we good? Yep? All right. Well, everybody, thank you! It's been a while since we've gotten together; May was our last meeting. I don't know how the time flew, but it did. Even though we were in COVID, all standing around thinking that we wanted to go out and do something, Steve and I focused on building an open source community, and it has been a whirlwind. Just to make sure we all know who each other are: Phil Gibbs. Phil was one of the initial masterminds behind what we call our domain structure.

A
He's a DevOps program owner at Unisys and one of our end users. Doug Orr: I don't know if this is still where you're at right now, Doug, but let's just say Doug is infamous in the Kubernetes world; he came from Google.

A
I'm going to keep it out of the other mode, because I think that causes problems. So Doug Orr has been a mentor to us and a coach from the very beginning, along with Tim Kelton, who we met at Descartes Labs. He's no longer there; I don't know why I didn't update that. He now works in the autonomous vehicle field, with a company called Aurora. Is that correct, Tim?

A
I think it is, yes. Yes, Tara. Tara Hernandez: we met her through the CD Foundation. She has been pretty instrumental, because she was the one who recommended our open source project to the CDF. Jeremy Davis I met through a LinkedIn connection, as well as Hank. Jeremy is a chief architect at Red Hat and had some really interesting insights for me when I first spoke to him, so I said: please join our TOC, because we are in a pretty critical place in terms of the product.
A
Hank is the same way; he is part of the open source community and works for BP. Michael Galloway is going to try to be here today; he has father duties in the mornings, but he is a senior engineering manager at Netflix and has also been somebody who has kept tabs on us and has been looking at how we're managing some components that may solve some of the Netflix problems. And Bill Portelli has been a friend and mentor and a coach for the last several years.

A
I've known Bill probably going on 15 or 20 years, and he has helped with giving us direction around how to build out a SaaS model and been a sounding board for us over the course of the last two years. Just to do an eight-month recap, because that's how long it's been since we've spoken, I'll just kind of go over some of the things we covered there and where we're at.
A
In May, we talked about the redesign of the GUI. That was our primary focus around the TOC. We did deliver that in July; believe me, it was a crazy May, June, and July for Steve and me, getting that all pulled together.

A
We merged all the downstream code of DeployHub with Ortelius, because we knew we were going to really focus on building out that community, and we did. We focused on growing the Ortelius community. We now have about 120 members, and I would say there are 20 folks who are active and excited about working on this particular project and solving this particular problem.
A
We have architecture meetings, outreach meetings, and general working group meetings every other week to keep everybody on top of things. Right now we're working on a blogathon; if any of you are interested in contributing to the blogathon, we would love to have it.

A
We moved from the Jenkins source code compile process that we were doing to Google Cloud Build, and we're ready now for the open source community to start doing pull requests and pushing, so builds get automatically created. On that note, we're doing the same thing with the website. That will be delivered by the 15th of March, where it's all on a Hugo server with Docsy, so that we can have everybody contributing to the website. We are following, as much as we can, how Jenkins did that.
A
We enhanced some of the APIs and the Jenkins Groovy library. We added a CLI for updating component versions and application versions and for performing deployments, as well as a CircleCI orb for managing components and pushing application test workflows based on a component update. And then we have started adding new microservices back into the functionality; Steve's going to cover some of that. On December 8th, we were officially welcomed into the CD Foundation as an incubating project.
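As a rough illustration of how a pipeline step might drive that kind of CLI, here is a minimal sketch; the command name and flags are hypothetical stand-ins for illustration, not the documented Ortelius CLI syntax:

```python
import subprocess

# Hypothetical CLI invocations for illustration only; the real
# Ortelius CLI command names and flags may differ.
def update_component_version(component, version):
    # Record a new component version after a successful CI build.
    subprocess.run(
        ["ortelius", "component", "update",
         "--name", component, "--version", version],
        check=True,
    )

def deploy_application(application, environment):
    # Trigger a deployment of the application to an environment.
    subprocess.run(
        ["ortelius", "deploy",
         "--application", application, "--environment", environment],
        check=True,
    )

update_component_version("cart-service", "1.2.10")
deploy_application("hipster-store", "qa-1")
```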
A
Again, Tara Hernandez at Google was sort of our inside advocate. I think she really gets what we're doing; she understands, I think from her Google experience, the complexities around managing these relationships before they get deployed, so she's been pretty excited about what we're doing. The open source community has been amazing.

A
Perjot and Nitu were the first two to step up and be community managers. Marky Jackson did some initial coaching with them. Siddharth has since stepped in and helped as well. Christopher Karam, Owen, and Sasha have all really just jumped in, in terms of understanding the architecture and starting to get involved.
A
Sasha
gave
himself
a
new
name
yesterday
called
the
get
destroyer
which
we
thought
was
pretty
funny
and
then
sergio
has
stepped
in
recently
and
has
been
a
real
trooper
helping
us
do
some
initial
designs
around
a
reward
system
and
badges
and
natch
is
somebody
who
we
want
to
attract
more
of.
He
is
a
great
graduate
research
assistant
at
new
jersey,
institute
of
tech
and
excited
about
learning
to
be
part
of
an
open
source
community.
So
we've
really
worked
to
try
to
increase
a
diversity
in
this
program.
A
I
really
push
women
to
get
involved,
so
I
have
both
project
and
need
to
are
have
been
super
helpful
in
getting
that
done.
We
have
submitted
to
become
part
of
a
gsoc.
We
have
four
basic
or
three
basic
projects
that
we've
submitted:
two
our
integration
projects,
spinnaker
and
argo
cd.
A
I actually spoke to somebody who does a lot of machine learning as a data scientist about working with us on doing data graphs, sort of expanding the current visualization of the relationship graphs that we're working on, and starting to think about how we can use some of the data that we now have to start doing more predictive analysis.
C
So I'll ask one real quick. You showed the Jenkins Groovy library, and then also the Spinnaker and Argo CD integrations that you're working on. Are you seeing a shift? Jenkins has kind of been a core feature in the way people used the service in the past, so are you seeing kind of a shift that you're trying to go after there?
B
What we're actually seeing is that people who have invested heavily in Jenkins are sticking with Jenkins, and new companies, like new startups and smaller projects, are adopting the GitOps methodology right off the bat. So the transition from a Jenkins pipeline to GitOps: they're two ends of the universe.
B
So
we
haven't
seen
people
you
know,
do
a
huge
move,
all
the
way
from
jenkins
over
to
a
get
ops
methodology.
Yet
that's
why
we're
supporting
both?
You
know
the
group,
the
the
jenkins
side,
is
still
going
to
be
there
with
our
groovy
library.
B
B
So
that's
kind
of
like
what
we're
seeing
right
now
as
part
of
that
process.
Some
of
the
new
things
that
were
on
the
ci
cd
front
that
we're
excited
about
is
some
of
the
event-driven
ci
tools
like
kept
in
that
is
looks
promising
as
part
of
the
process.
Yeah.
A
I would guess that we probably will, before the end of the year, be adding events, having us called as an event. I think that most people in the CD space are pretty excited about events, and there is a new working group at the CD Foundation that I'm working on, the Events Working Group, and we're looking at, you know, what that control plane looks like and what the protocols will be.
B
So some of the things that we've been working on: this came out over a year ago with Netflix, the concept of component sets. What we've recognized is that, even though microservices are supposed to be loosely coupled, there are scenarios where they need to be tightly bound together, so they all move through the pipeline as a unit. So we're in the process of introducing what we're calling component sets.

B
We talked with Netflix around this topic, and about how tools like Argo CD handle it; Intuit uses Argo CD as part of their process, and Netflix obviously uses a version of Spinnaker, so those are the concepts they had around it.

B
What they're calling application sets was some confusing terminology, and we were trying to find out if we had overlap with those tools. What we recognized was that what they're calling application sets has nothing to do with applications, really. So we're going to move forward with what we're calling component sets: the ability to tightly couple a set of microservices, or components, together, version that component set, and be able to track that as one of the relationships, and that will be rolled into the application package.
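As a rough illustration of the idea (the class and field names below are invented for this sketch, not Ortelius's actual schema), a component set pins specific component versions together and is versioned and promoted as one unit:

```python
from dataclasses import dataclass

# Illustrative data model only; Ortelius's real schema differs.
@dataclass(frozen=True)
class ComponentVersion:
    name: str
    version: str

@dataclass
class ComponentSet:
    """A tightly coupled group of components that moves through
    the pipeline as a single versioned unit."""
    name: str
    version: str
    members: tuple  # ComponentVersion entries pinned together

checkout_set = ComponentSet(
    name="checkout-flow",
    version="2.1.0",
    members=(
        ComponentVersion("cart-service", "1.2.10"),
        ComponentVersion("payment-service", "3.0.4"),
        ComponentVersion("shipping-service", "0.9.2"),
    ),
)
# Promoting the set promotes every member together, preserving
# the tested combination of versions.
```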
B
Yeah, it seems like the concept of a product, which is going to be a collection of services, makes more sense than an application. In the Netflix world, they call every service an application, so the word "application" is being reused and misunderstood a lot. So right now we're considering "product" as the replacement for "application."
B
So one of the next features that we're going to be adding came out of our interviews with some of our contributors from the SRE and Ops side. Also, Tyler Jewell from Dell Capital recommended we look at more of a service catalog, with information about the service: who is the owner of the service? How do I get hold of them? What's the development Slack channel?

B
Are there any transaction logs around telemetry that we need to look at and associate? And then also the key-value pairs that are associated with that. So that's just some of the service catalog information that we're going to be adding at the component level, and then that will roll up to the application level, and then that'll be visible.
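A minimal sketch of the shape such a catalog entry might take; the field names here are invented for illustration, not Ortelius's actual attributes:

```python
from dataclasses import dataclass, field

# Hypothetical service catalog entry, tracked at the component level
# and rolled up to the application level.
@dataclass
class ServiceCatalogEntry:
    component: str
    owner: str                    # who to contact about the service
    owner_email: str
    slack_channel: str            # where development is discussed
    telemetry_logs: list = field(default_factory=list)  # log/trace endpoints
    key_values: dict = field(default_factory=dict)      # env-mapped config

entry = ServiceCatalogEntry(
    component="cart-service",
    owner="hipster-store-team",
    owner_email="cart-owners@example.com",
    slack_channel="#cart-service-dev",
    telemetry_logs=["https://logs.example.com/cart-service"],
    key_values={"REDIS_ADDR": "redis-cart:6379"},
)
```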
B
One of the other things, and I don't know if it's on the next slide, is that we are looking at going into the containers and pulling in information from scanning tools like Snyk and CycloneDX, to get the licenses that are being used in the container.

B
Now, that initial gathering of that information will then feed downstream to being able to do policy control: can I deploy this, or is there a policy violation because there's a service that's consuming a license that hasn't been approved by the attorneys, for example?
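A minimal sketch of what that downstream policy check could look like, assuming a hypothetical approved-license list and data shape:

```python
# Sketch of a policy check over licenses collected from a container
# scan; the approved list and data shapes are illustrative.
APPROVED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def license_violations(component_licenses):
    """Return (component, license) pairs not approved for deployment."""
    return [
        (component, lic)
        for component, lics in component_licenses.items()
        for lic in lics
        if lic not in APPROVED_LICENSES
    ]

scanned = {
    "cart-service": ["Apache-2.0", "GPL-3.0-only"],
    "payment-service": ["MIT"],
}
violations = license_violations(scanned)
if violations:
    # A deployment gate could block on this result.
    print("Policy violations:", violations)
```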
B
So policy control is going to be on the back tail of this information that we're going to gather. We're not focusing on policy these next two quarters, but that will probably be fourth quarter or first quarter of next year.

A
And again, there's a policy working group coming out of the Interoperability Working Group at the CDF, and we're trying to standardize on what those policies could potentially look like, so that we could then base our policy engine on them.
B
And this is where we kind of circle back around to the event-driven CI, because we'll have this information. Let's say there are CVEs out there that are high severity, that we want to do some extra testing on; you can get into more of a dynamic pipeline process based on the information that we're gathering.
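For instance, a sketch of that kind of dynamic pipeline decision; the event payload shape and stage names are invented for illustration:

```python
# Sketch of an event-driven, dynamic pipeline: scan data carried by
# the event changes which stages run.
def plan_pipeline(event):
    """Choose pipeline stages based on scan data in the event."""
    stages = ["build", "unit-test", "deploy-qa"]
    high_cves = [c for c in event.get("cves", []) if c["severity"] == "HIGH"]
    if high_cves:
        # High CVEs trigger extra scrutiny before promotion.
        stages.insert(2, "extended-security-test")
    return stages

event = {
    "component": "cart-service",
    "version": "1.2.10",
    "cves": [{"id": "CVE-2021-0001", "severity": "HIGH"}],
}
print(plan_pipeline(event))
# ['build', 'unit-test', 'extended-security-test', 'deploy-qa']
```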
B
So those are some of the interesting things we'll be able to do down the road with this data that we're collecting. Go to the next one. And then, like I said, within that meeting that we had with Intuit and Netflix and everybody, the interesting part around Intuit is that they, like I said, use Argo and the GitOps model, and we are in the process of exploring that.

B
We've had some new people join that have some familiarity with Argo, as well as with Weaveworks' Flux, and Rancher has one called Fleet. We're looking at those GitOps solutions to see how Ortelius can fit into a GitOps model.
B
Right
now
it
looks
like
we'll
be
able
to,
instead
of
deploying
you
know,
helm
chart
to
a
cluster
and
using
helm
directly
to
the
cluster
that
we'd
actually
deploy
yaml
to
a
git
repo
and
then
from
there
once
we
do
the
push
and
that
part
the
get
ops
process
takes
over
from
there
as
part
of
that
there'll
still
be
hooks
in
there.
So
we
can
most
of
those
tools.
The
git
operators
have
a
notification
process,
so
we
can
get
notified
when
everything's
done.
We
can
keep
track
of
all
the
metadata.
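A minimal sketch of that flow, assuming an illustrative environment-repo layout; the paths and helper name are hypothetical:

```python
import pathlib
import subprocess

# Sketch of "write YAML to a Git repo and let the GitOps operator
# take over": we commit the manifest, the operator watching the repo
# reconciles the cluster, and its notification hook tells us when done.
def publish_manifest(repo_dir, cluster, name, manifest_yaml):
    path = pathlib.Path(repo_dir) / "clusters" / cluster / f"{name}.yaml"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(manifest_yaml)
    subprocess.run(["git", "-C", repo_dir, "add", "."], check=True)
    subprocess.run(["git", "-C", repo_dir, "commit", "-m",
                    f"Update {name} for {cluster}"], check=True)
    subprocess.run(["git", "-C", repo_dir, "push"], check=True)
```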
B
We can see what got deployed where at that level. So that's just one of the things that we're looking at; we're still scoping out how we're going to do it, the design part, but it doesn't look too bad for us to bring in that type of solution.

A
So one of the more vocalized problems around GitOps is the number of YAML files that you have to support, particularly when you have several different environments or clusters you're rolling out to, and those clusters have different key-value pairs or configurations that you're having to update.
A
I
recently
did
a
pros
and
cons
of
get
ops,
and
I
kind
of
did
the
math
around
what
you
would
have
to
to
manage
and
you're
looking
at
managing,
potentially
hundreds
of
yaml
files
in
branches,
and
that
is
what
a
lot
of
the
current
get
ops
users
are
starting
to
complain
about.
So
the
goal
here
would
be
to
replace
that
human
effort
of
updating
all
of
those
yaml
files
manually
and
instead
be
the
central
repository
that
generates
those
yaml
files
and
then
updates
the
proper
git
repository.
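A sketch of that generation step: one template plus each cluster's key-value overrides produces the per-cluster manifests, instead of hand-editing hundreds of files. Names, values, and the config shape are illustrative:

```python
import copy
import yaml  # PyYAML

BASE = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "cart-service"},
    "spec": {"replicas": 1,
             "template": {"spec": {"containers": [
                 {"name": "cart", "image": "cart-service:1.2.10",
                  "env": []}]}}},
}

# Per-cluster key-value pairs, the kind Ortelius already tracks.
CLUSTER_CONFIG = {
    "qa-east": {"replicas": 2, "env": {"REDIS_ADDR": "redis-qa:6379"}},
    "prod-west": {"replicas": 6, "env": {"REDIS_ADDR": "redis-prod:6379"}},
}

def render(cluster):
    doc = copy.deepcopy(BASE)
    cfg = CLUSTER_CONFIG[cluster]
    doc["spec"]["replicas"] = cfg["replicas"]
    doc["spec"]["template"]["spec"]["containers"][0]["env"] = [
        {"name": k, "value": v} for k, v in cfg["env"].items()]
    return yaml.safe_dump(doc)

for cluster in CLUSTER_CONFIG:
    print(f"--- {cluster} ---")
    print(render(cluster))
```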
D
It was, yeah; it'll pull me away again, probably in about 20 minutes, but okay. So this is an interesting problem. This is a problem that we definitely know at Netflix, as we have moved more and more towards YAML-based configuration for infrastructure and CI/CD and all kinds of other things.

D
One way that I know folks have been attempting to test out ways to update YAMLs within the repository directly is through... oh my gosh, the name just slipped right out of my head. It starts with an A... oh gosh, I've got to find it. There is a service that I know of that, specifically, is good for this kind of distributed updating of the repos.

D
I just wonder if there are ways to maybe integrate with those services, or figure out if those services might be useful to leverage. It just seems like a big problem to solve, specifically that one.
B
And-
and
that's
one
of
the
the
things
that
were
the
point
that
we're
at
right
now
is
when
you
look
at
a
get
ops
methodology,
the
git
repo,
that's
depending
on
who
you
talk
to
one
of
them
is
called
an
environment
repository
where
you
have
all
the
the
kubernetes
manifest
files
in
that
repository,
depending
on
who
you're
talking
to
on
the
cluster
side.
If
it's
a
flux,
d
versus
fleet
versus
argo
cds
operator
over
there,
they
all
expect
a
slightly
different
directory
structure
of
where
file
should
be
and
how
they
should
be
named.
B
And
things
like
that
and
I'm
on
the
get
ops
working
group.
That's
on
the
cncf
side,
listing
in
to
see
how
they're
going
to
kind
of
standardize
that,
to
figure
out
how
we're
going
to
integrate
into
that,
but
whatever
open
source
tools
that
you
know
out
there
that
we
can
leverage
like
you
said
to
be
able
to
update
the
manifest
in
that
the
the
repo
would
be
great,
because
I
we
definitely
don't
want
to
write
something
from
scratch.
D
Let me see if I can find the ones that I was aware of. Maybe it's a fit, maybe it's not, I don't know. Yeah, I'll see what I can find.
A
And the problem we're finding, and we're hoping that the GitOps SIG at the CNCF will address this, is that all the operators are requiring different formats of the YAML information.

D
That's a big problem, yeah. Yeah, sorry, go ahead.
B
Then
there's
ibm
has
razer,
you
know,
so,
there's
all
these
different
and
a
lot
of
the
implementations
that
we're
seeing
like
with
rancher's
fleet.
B
They
don't
care
how
you
name
the
yaml
files.
What
they
do
is
they
actually
go
into
the
the
files
and
look
for
specific
labels,
for
example,
is
this?
Is
this
snippet
of
manifest
for
my
cluster?
You
know:
is
it
labeled
for
my
cluster
that
type
of
thing?
So
it's
it
really
which
makes
sense,
because
then
what
ends
up
happening
like
with
with
fleet?
Is
you
you
check
in
a
change
to
the
the
get
repo
and
it
doesn't
matter
what
the
change
is,
but
everybody
gets
notified
that
that
is
looking
at
that
repo.
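A toy sketch of that label-matching idea; the label key and data shapes are made up for illustration:

```python
# An operator filters manifest snippets by cluster labels instead of
# relying on file names or directory layout.
manifests = [
    {"metadata": {"name": "cart-a",
                  "labels": {"cluster": "qa-east"}}},
    {"metadata": {"name": "cart-b",
                  "labels": {"cluster": "prod-west"}}},
]

def for_cluster(manifests, cluster):
    """Select only the manifest snippets labeled for this cluster."""
    return [m for m in manifests
            if m["metadata"]["labels"].get("cluster") == cluster]

print([m["metadata"]["name"] for m in for_cluster(manifests, "qa-east")])
# ['cart-a']
```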
B
There
was
a
change,
go
figure
out.
If
you
have
to
do
something
with
that
change,
so
they
throw
it
over
the
fence
to
the
cluster
side
and
the
cluster
goes
back
to
the
repo
and
starts
looking
for
a
change
that
it
needs
to
act
upon
to
bring
it
into
the
correct
state.
So
there's
different
implementations
and
that's
one
of
the
things
that
is
tricky
with
this
one.
A
As
well
as
the
get
operators
themselves
are
starting
to
store
some
of
the
override
information
which
obfuscates
that
even
farther
from
what
you
can
see
easily,
so
they
get
the
operator
itself
defines
those
values.
So
you
don't
have
to
define
it
in
the
in
the
yaml
file.
But
then
you
don't
know
what
those
values
are,
if
you're
the
developer,
what
the
what
you
have
to
go
back
to
the
get
operator
github's
operator,
try
to
figure
it
out.
A
So
it's
it's
just
a
lot
of
the
information
is
so
buried
in
the
scripts
and
inside
of
these
these
locations
that
it's
not
visible.
We're
trying
to
what
we'll
try
to
do
is
be
that
visibility,
as
well
as
generate
the
files
to
be
pushed
up
to
to
get
if
they
can.
Just
especially
you
know,
we're
kind
of
this
is
to
be
something
that'll
be
later
in
the
year.
B
And
then
some
of
the
things
that
we
pushed
off
onto
the
back
burner,
we
did
some
initial
work
around
istio,
basically
because
we
know
what
a
version
of
an
application
looks
like
and
how
it
compares
to
the
like
of
the
current
production
version
that
we
can
update
the
istio
routes
dynamically
to
say:
here's
your
new
version.
These
are
the
five
services
that
I
want
you
to
route
to
for
this
persona
and
then
everything
else
is
going
to
come
from
the
like
the
production
services.
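As an illustration of the kind of route update being described, here is a sketch that generates an Istio VirtualService splitting traffic by persona; the service, subset, and header names are invented:

```python
import yaml  # PyYAML

# Requests tagged with the persona header go to the new subset;
# everyone else keeps hitting the production subset.
def canary_route(service, new_subset, persona):
    return {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": service},
        "spec": {
            "hosts": [service],
            "http": [
                {
                    "match": [{"headers": {"persona": {"exact": persona}}}],
                    "route": [{"destination": {"host": service,
                                               "subset": new_subset}}],
                },
                {
                    "route": [{"destination": {"host": service,
                                               "subset": "prod"}}],
                },
            ],
        },
    }

print(yaml.safe_dump(canary_route("cart-service", "v1-2-10", "beta-tester")))
```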
B
So
we
did
some
work
on
that.
We
are
actually
in
the
process
of
slowly
rolling
this
out
into
ortiz
itself.
We've
moved
the
website
and
all
the
documentation
like
tracy
is
saying
into
hugo
based
containers,
so
those
are
running
in
a
kubernetes
cluster
over
in
azure,
and
we
have
istio
right
now
doing
some
basic
routing
for
that.
But
we're
gonna
slowly
keep
on
extending
that
out.
A
On that Istio point: we were a little concerned about doing too much integration in terms of the service mesh until we decided who might be winning, and it kind of looks like Istio's winning.

A
We don't see Linkerd very often, so I think we'll probably stick with Istio, unless anybody sees something else that's changing, because it's kind of like the GitOps problem: which one do you standardize on, which one do you integrate with, which is the most important one to go with first. We think Istio will be the long-term winner, though.
B
Right, right, exactly. And that's where, back on the service catalog, one of the things is the information that we're going to be gathering, looking at the transaction logs that something like Envoy is kicking out: to be able to see, for this service, what type of transactions are going through it and how it relates to the other services that are out there.

B
So we'll definitely keep an eye on Envoy. And then finally: last time we spoke, we were looking at FOSSA and Sonatype integration.
B
We
did
look
at
those
and
what
we're
seeing
is
the
spdx
has
version
two
out
and
that's
slowly
being
adopted.
B
Mainly
the
spdx
is
focused
on
the
the
licensing,
that's
in
the
the
packages
like
the
node.js,
the
python
modules,
etc.
So
those
that
information
is
is
becoming
more
standardized
and
we're
able
to
pull
that
out.
Also
there's
another
project
called
cyclone
dx
that
basically
does
the
same
thing
as
spdx,
slightly
different
json
format,
but
the
same
idea
and
we're
looking
at
those
two
to
be
able
to
pull
that
information
up
into
ortelius.
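A short sketch of pulling license identifiers out of a CycloneDX JSON SBOM; this follows CycloneDX's published component/licenses layout, but treat it as illustrative rather than a complete parser:

```python
import json

def licenses_from_cyclonedx(sbom_path):
    """Map each component in a CycloneDX JSON SBOM to its license ids."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    found = {}
    for comp in sbom.get("components", []):
        ids = []
        for entry in comp.get("licenses", []):
            lic = entry.get("license", {})
            # A license is identified by an SPDX id or a free-form name.
            ids.append(lic.get("id") or lic.get("name", "unknown"))
        found[comp.get("name", "unknown")] = ids
    return found

# Example (hypothetical file name):
# print(licenses_from_cyclonedx("cart-service.bom.json"))
```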
A
And then two last things. This was shared with everybody; it's out in a Google Doc. This is how you can get to the Google Groups, if you want more information on what we're doing and more insight into what the open source community is doing, and there's the Discord channel invite. We welcome all of you to check in on the Discord channel once in a while and see what's being discussed; it could be anything from our new Evil Lord badge to the Git Destroyer's new mohawk.

A
So you never know what we're talking about, but please do.
B
Let
me
and
that's
one
thing
that
we've
been
trying
to
really
embrace
on
the
open
source
side
is
to
be
a
place
for
people
to
con
collaborate
and
not
only
around
on
ortilius.
You
know
how
we're
going
to
write
it,
design
it
but
also
help
each
other
out.
You
know
I
have
this
kubernetes
issue
with
gke
and
gcr.
B
How
do
I
solve
that?
You
know
so
we're
really
trying
to
open
the
community
up
to
focus
on
you
know
helping
each
other
out
and
learning
and
teaching
each
other,
because
what
we're
finding
is.
We
have
a
really
wide
range
of
expertise
that
have
joined
the
project,
which
is
great
and
there's
always
seems
to
be
somebody
to
answer
a
question
or
at
least
point
it.
Somebody
put
you
in
the
right
direction
to
get
what
you
need.
A
I'll start down here at the domain level, and I'm really not going to go into the DeployHub Pro features; I'm just really going to keep track of what we're doing at the open source level. Here's how we now display what domains look like. In this case, we have two catalog domains: one's a store service and one is purchase processing.

A
We have a stress testing domain that has a load generator subdomain. But what's probably more important is how we break out the catalog domains and show the solution spaces and what they're solving. In this case, purchase processing has five different subdomains that a microservice developer could publish their microservice to: currency, checkout, cart, payment, and shipping.
A
Now,
when
the
component
does
change,
we
track
that,
so
a
new
version
of
the
cart
service
was
updated.
In
this
case,
it
was
updated
by
a
circle.
Ci
workflow.
You
can
see
that
when
it
was
defined,
it
was
defined
using
a
custom
action
of
a
helm
chart
and
the
helm
chart
we
executed
was
called
the
cart
service.
A
We
do
track
the
key
value
pairs
and
these
key
value
pairs
can
be
mapped
back
to
the
environment
and
we
track
where
that
cart
service
has
been
deployed.
If
it
has
been
deployed,
there
can
be
cases
where
it's
not
been
deployed
yet.
What's
also
important
here
is
we're
showing
up
front
the
blast
radius.
A
So
we
know
now
know
that
if
the
cart
service
is
updated,
it's
going
to
impact
the
one
dot
it's
going
to
create
the
1.2.10
version
of
the
labor
day
sale
and
it's
going
to
create
the
1.2.91
version
of
the
fourth
of
july
cell.
So
if
we
go
back
up
to
the
application
level
now
and
we
take
a
look
at
the
application-
is
now
listed
as
new
it's,
we
have
a
new
version
of
it,
even
though
the
hipster
store
team
themselves
did
not
update
their
application.
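A toy sketch of that blast-radius bookkeeping: when a component changes, every application that consumes it gets a new logical version. The structures and the patch-bump rule are illustrative, not Ortelius's versioning engine:

```python
apps = {
    "labor-day-sale": {"version": (1, 2, 9), "components": {"cart-service"}},
    "fourth-of-july-sale": {"version": (1, 2, 90), "components": {"cart-service"}},
    "winter-sale": {"version": (2, 0, 0), "components": {"search-service"}},
}

def blast_radius(component):
    """Applications affected when this component changes."""
    return [name for name, app in apps.items()
            if component in app["components"]]

def bump(component):
    # A component update creates a new version of each consumer,
    # even though the application teams changed nothing themselves.
    for name in blast_radius(component):
        major, minor, patch = apps[name]["version"]
        apps[name]["version"] = (major, minor, patch + 1)

bump("cart-service")
print({n: apps[n]["version"] for n in blast_radius("cart-service")})
# {'labor-day-sale': (1, 2, 10), 'fourth-of-july-sale': (1, 2, 91)}
```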
A
It
did
get
a
new
version
because
the
the
cart
service
was
updated
and
if
we
take
a
look
at
the
what
occurred
based
on
a
cluster,
we
can
take
a
look
at
the
details
for
that
and
we
can
still
see
now
that
the
what
we
call
this
is
like
a
traditional
bomb
report,
and
we
can
also
see
down
pretty
quickly
what
changed.
A
So
in
this
case,
if
we
do
a
comparison
between
the
new
version
of
the
hipster
store
and
the
base
version,
the
only
thing
that
changed
for
the
hipster
store
cluster
itself
was
the
cart
service.
So
now
we're
elevating
that
data
and
we're
really
pushing
it
right
into
your
your
face.
We've
always
in
the
old
version.
We
always
had
this
data,
but
it
wasn't
shown
in
graphical
images
and
what
we're
learning
is
the
more
graphical
images
that
we
can
use,
the
quicker
the
information
is
consumed
and
the
easier
it
is
to
digest.
B
And one of the things, like Tracy showed on one of the slides around the GSoC Summer of Code: because we're going to be adding in additional relationships, the component sets and then also down into what is in a container, all the dependencies at that level, one of the many projects we have is revisiting the visualization that we have here. We know that in the last eight months we've seen companies go from having 10 to 15 microservices...

B
Now, when we go back and talk to the same companies, they're approaching 100 microservices, so seeing a company with 100 to 200 microservices is becoming very common in the last eight months. So we're going to be revisiting that visualization, because we know that it'll get cluttered very quickly.
C
Definitely
as
it
scales
yeah
one
question
on
you
mentioned
in
earlier
in
the
slides
on
kind
of
a
catalog
and
there's
some
pretty
interesting
things.
How
would
that?
How
would
you
envision
that
fitting
into
the
ui-
maybe
that's.
A
A good question. I'll show you where we would put it; Steve actually has already tried. All of that data goes back to the component level. So if we just think about it from the component level, we need to rework this screen in particular; see how everything is now in one big box.

A
What we're going to have to do is create smaller groupings, potentially. So, for example, maybe general information isn't so broad: general information is maybe the name, who the owner is, and the owner's email, or how to connect on Slack or something. And then we'll have to start sectioning off these other areas for this other data, which is what we would call the support data, so it's going to have to go in that area itself.

A
So that is a great question, Tim, and it's one where we're starting to scratch our heads and ask: what's the easiest way to show this information? Because it always rolls up, and we're trying to show it at the highest level, so that an application team can view it, or a particular microservice developer can view it, based on what they need to see.
C
I think it's a great step. I think it's an awesome chunk of information to capture. Just to be clear, there are companies building entire offerings on just trying to capture that, and you do want it right next to all of the application and the deployments a lot of the time. So I do see a lot of power in that.
B
The interesting thing is when you look at, you know, "this component has been deployed." We've talked to people that have 17 QA environments, which means that in those 17 QA environments they could be running multiple clusters that the component's been delivered to. So it really becomes this cascading, N-of-N type of trickle effect that you have to track.

B
So if something breaks in one of the QA environments and you want to get to the logs or the telemetry, you want to be able to drill down and find it here very easily. As part of that, we're going to have to rework some of the layout to deal with that scenario.
D
Not to suggest a huge addition, but I could say that one other area that we've been drilling into, and I've had some conversations in the past with Azure folks too around this, is this topic of more than just the application relationships: the infrastructure relationships as well. As you mentioned, there are lots of YAML files, and increasingly the cloud resources and requirements are intermingled with application configuration and other kinds of details, and so those things are increasingly getting versioned together.

D
In fact, recently that's been something that we've been investing a lot in: figuring out the infrastructure-as-code workflows as part of the GitOps workflows, as part of the safety and delivery of changes. So if I were to change some aspect of my infrastructure configuration, like adjusting my firewalls, or making a change in terms of load balancers or security certs or whatever it might be, those changes would go through a validation workflow and delivery the same as a software change.

D
My point here, though, is about the power of being able to see the answer to the question of what has changed. In most of the tooling that I've seen, in lots of places, the infrastructure changes are not always done the same way that software changes are, and so being able to see that, hey, all this infrastructure also changed, and that's what led to this problem...
B
And,
and
where
are
those
are
those
infrastructure
changes?
Is
that,
like
a
terraform
definition
that.
D
Point
so
we
should
use
the
big
there's
lots
of
different
places
right,
there's
the
terraform
place
that
can
be.
You
know
this
could
be.
You
know,
kubernetes
or
crds,
that
that
I've
updated
for
my
it
could
be
another
place
that
happens
and
I'm
gonna
use
the
big
I
for
infrastructure,
just
meaning
stuff
outside
of
my
application
code,
because
that
can
include
things
like
I've
changed
jenkins
jobs
that
impacted
my
my
application.
D
I've
changed
I've
changed
in
netflix,
we
have
a
concept
called
fast
properties,
but
it
just
figured
like
global
persistent
properties
like
I've
changed.
Those
that's
led
to
plenty
of
incidences
where
we've
tried
to
figure
out.
How
do
we?
D
How
do
we
capture
that
launch
darkly
as
an
example
leans
in
on
that
idea
of
feature
flags
and
feature
flags
can
lead
to
incidents
when
you
change
those-
and
you
wouldn't
be-
you
wouldn't
see
the
answer
to
what's
changed
in
in
that
case
in
this
view,
but
I
think
that
that
that
what's
changed
concept
is
tremendously
valuable,
so
if
it
was
possible
to
make
it
so
that
maybe
others
could
build
in
signals
into
this,
that
could
either
say
hey.
D
This
application
also
depends
on
these
things
or
these
concepts
and-
and
you
know,
be
able
to
relate
the
changes
that
occurred
over
there
to
at
least
know
that
those
changes
were
also
things
that
happened
on
that
cluster
within
a
period
of
time.
D
That
would
be
amazing
that
you
know
maybe
it's
a
I
don't
know
if
it's
a
federated
model
or,
however,
you
would
want
to
go
about
it,
but
being
able
to
link
those
things
together
opens
up
all
kinds
of
opportunities
to
say,
like
just
reset
everything
back
to
the
way
it
was
yesterday
because
it
was
working,
then
maybe
you
don't
actually
want
to
do
that,
but
at
least
you
get
a
a
quick
understanding
of
that
right.
D
Spinnaker
knows
some
of
it
and
recently
with
the
efforts
around
managed
delivery.
We
have
a
vision
for
that
to
potentially
encourage
capturing
of
all
as
much
infrastructure
as
possible.
D
It's
a
plugable
design,
so
you
can
see
our
d-based,
so
you
can
add
in
new
infrastructure
configuration
into
it
and
that's
eventually,
we
want
to
get
to
the
place
where
essentially
as
much
infrastructure
as
possible
is
is,
is,
is
captured
and
can
be
versioned
and
improved
in
a
way
that
allows
us
to
both
validate
those
infrastructure,
changes
from
low
risk
environments
to
high-risk
environments,
but
also,
I
think
there
is
that
concept
of
being
able
to
know
these
are
all
the
things
that
my
application
cares
about,
which
has
all
kinds
of
other
opportunities
to
explore
on
its
own
as
well.
D
So
I
don't
know
I
don't
want
to
just
say
you
know,
manage
delivery,
because
it's
new,
it's
a
new
initiative
and
there's
there's
been
a
lot
of
work
on
it.
But
that
concept,
I
think
you
know
is-
is
a
powerful
one.
A
Well,
I
might
reach
out
to
you
on
that
topic,
michael,
I
I
have
a
isaac
and
I
have
a
a
tentative.
We
are
trying
to
get
together
to
chat
about
both
get
ops
and
what
spinnaker
data
can
be
pulled
back
in
once
it's
been
deployed
because
we
have
those
integrations.
We
can
start
tracking
that
so
I
might,
I
might
reach
out
to
you
to
be
on
a
call
with
isaac.
D
Yeah,
a
good
sig,
just
if
you
want
to
poke
at
that
is
the
spinnaker
as
code
sig.
I
can
send
you
info
on
that
and
in
that
sig.
This
is
exactly
the
kind
of
conversation
that
I
think
would
be
interesting.
It's
a
different
angle
on
on
it,
but
I
think
that
that
would
be
a
sig
that
would
have
far
more
educated
people
than
me
on
the
details.
Yeah
well.
A
I'm
thinking
I'm
thinking
we
already
have
this
object
called
an
environment
and
it's
you
know,
and
we
would
just
be
adding
just
like
we're,
adding
change
data
about
a
component
that
rolls
up
to
an
application.
We
could
store
it
at
the
environment
level,
which
then
rolls
up
to
the
application.
B
And
one
of
the
things
that
I've
been
thinking
about
is
we
actually
start
versioning
environments.
So
whenever
there
is
a
change
made
to
an
environment
and
an
environment
is
just
a
collection
of
endpoints,
so
we
would
actually
create
versions
of
an
environment
so
anytime
that
an
environment
changes.
We
capture
that
and
persisted
somewhere.
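A minimal sketch of that environment-versioning idea, assuming an environment is just a named collection of endpoints; the classes here are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EnvironmentVersion:
    version: int
    endpoints: frozenset
    recorded_at: str

class Environment:
    """Every change to the endpoint set appends an immutable snapshot."""
    def __init__(self, name, endpoints):
        self.name = name
        self.history = []
        self._snapshot(endpoints)

    def _snapshot(self, endpoints):
        self.history.append(EnvironmentVersion(
            version=len(self.history) + 1,
            endpoints=frozenset(endpoints),
            recorded_at=datetime.now(timezone.utc).isoformat(),
        ))

    def change(self, endpoints):
        # Persist a new version only if the endpoint set really changed.
        if frozenset(endpoints) != self.history[-1].endpoints:
            self._snapshot(endpoints)

qa = Environment("qa-east", {"node-1", "node-2"})
qa.change({"node-1", "node-2", "node-3"})
print([(v.version, sorted(v.endpoints)) for v in qa.history])
```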
D
That fits into the Managed Delivery stuff; I think these concepts are compatible and connect. The point of the versioned environments in the Managed Delivery situation was specifically to be able to safely introduce infrastructure changes. An environment is a logical collection of...

D
...actually, and so when that infrastructure changes, because you've either rolled out new software or you've changed some other configuration, the concept is that that's captured in a way that can allow you to both safely introduce it and test it, and then roll the whole thing back to a previous state. So versioning is actually a brand new thing that, I think, was just committed a couple of days ago, or is right on the edge of being committed. So you've got to join that SIG. Okay.
A
Phil
gibbs
was
way
ahead
of
us
all
of
us
on
this
topic,
and
that's
why
you
built
this
amazing
domain
based
relational
database
with
a
versioning
engine,
basically,
is
what
it
is.
So
you
can
track
all
those
kinds
of
relationships
and
we're
all
tickling
around
the
same
thing,
right,
yeah,
yeah,
exactly
and.
B
The
the
interesting
part
is
that
we're
we
keep
going
back
and
forth
on.
Is
we
like
to
take
a
proactive
approach
saying
this
is
what
it's
going
to
look
like
what
your
view
of
your
world
is
going
to
look
like
before
you
go
and
apply
it
to
an
environment,
and
then
you
know
that's
one
one
way
to
handle
it
is.
This
is
what's
going
to
change.
If
you
do
this
to
this
environment
compared
to
its
current
state,
the
other
way
is
to
be
the
reactive
and
something
does
get
changed.
B
We
will
react
and
record
it
at
that
level.
I
think
the
the
the
proactive
is
much
more
avoids
a
lot
of
potential
problems
that
you
could
have
when
you
make
a
change.
The
reactive
is
more
recording
it
more.
You
know
trying
to
just
give
you
a
history
or
an
audit
of
what's
happened.
D
My
take
given
that
you
know
we're
we're
kind
of
focused
on
solving
specific,
a
specific
category
of
problems
that
we
see
at
netflix,
even
though
it
does
apply
outside
would
be
that
you
know
conceptually.
I
think
these
things
connecting
is
great,
but
I
would
imagine,
there's
already
been
discussion
about
you
mentioned
terraform
right,
there's
lots
of
different
places,
configuration
and
infrastructure
exists,
and
so
you
know
as
much
as
this
becomes
an
extendable
concept
of
environments
here.
I
think
that
would
be.
That
would
be
where
it
could
get
interesting
like.
How
could
you?
D
How
could
you
leverage
if
manage
delivery
becomes
a
thing
outside
of
netflix?
How
could
that?
How
could
how
could
that
fit
into
this,
but
existing
companies
or
existing
ways
of
solving
this
problem,
like
you
mentioned,
terraform
or
or
kubernetes,
or
other
other
mechanisms
for
defining
the
the
infrastructure
aspects
of
your
environment
as
those
get
iterated
on?
How
could
those
plug
in,
I
think,
would
be
exciting
so.
B
The
interesting
part
is
there's
like
what
you're
describing
does
not
fit
into
the
get
ops
world
very
well,
because
all
the
relationships
that
need
to
be
managed.
So
so
it's
one
of
those
you
know
two
different
worlds
are
kind
of
clashing.
You
know,
the
concept
of
you
know,
state
management
is
is
is
right
on,
but
the
persistence
of
that
data
through
a
git
repository,
I
think,
is
a
little
backwards.
D
Yeah
we've
had
a
similar
notion
where
we've
struggled
with
the
idea
of
a
true
get
ops
flow.
Where
you
know,
git
is
actually
the
source
of
truth
that
comes
into
conflict,
a
lot
when
you,
when
you
need
to
have
essentially
you
wanna,
you
wanna,
know
this:
it's
the
source
of
truth
of
what's
desired,
but
that's
actually
not
always.
D
Actually
it's
not
always
up
to
date,
and
so
there's
a
lot
of
risk
there,
because
certain
operational
actions
like
if
you
were
to
hit
roll
this
whole
thing
roll
this
whole
environment
back
through
you
know,
deploy
hub.
You
need
to
then
update
potenti.
You
need
to
figure
out
what
you're
going
to
do
if
you're.
If
you're,
you
know
a
repository,
now
thinks
it's.
D
It
thinks
that
the
environment
should
be
on
a
certain
version
that
it
shouldn't
be
or
certain
details
are
incorrect,
and
so
then
you'd
open
up
a
pr
flow
and
so
that
during
that
time,
your
environment's
actually
not
correct,
and
it's
actually
not
even
the
desired
state
anymore,
so
that
that
workflow
is
one
that
we've
we've
also
had
some
some
challenges
with
figuring
out.
How
that
how
to
reconcile
sorry,
I
didn't
mean
to
dominate
the
conversation.
A
We are four minutes before our hour's up, though, and I don't want to go over, because I know everybody's busy. Thank you, everybody, for your involvement here, your input, and your kind mentoring over the course of the last few years, for those of you who've been here from the beginning. I think we've made some really great progress in terms of adoption.

A
We are starting to see now that some of the open source users are starting to adopt the product, and we're seeing adoption overall starting to increase. What we learned in 2020 was...

A
So we started out down at an SOA kind of process and then went right back to monolithic, and I think that we're doing the same thing when we take a whole application, borrowing on shared services, but shove them into their own namespace, and then version that, because it's the only way to logically create the application. We're hoping that we can simplify that with this model.