B
Well, thank you all for showing up here. I picked up your white paper and I was looking at the architecture, and I saw something that looked very familiar: a little line that said "evidence store." I don't know if you know what the Ortelius team has been up to, but we are all about gathering pipeline data, both security and DevOps data, and starting to do reporting on it and aggregating it up to what we like to call the logical application in a microservices environment.
B
All of these pieces and parts get more complicated when we decouple, and so we've been focused on solving the problem around decoupled environments. What I thought we would do today is just share some information about the kind of evidence that we're storing and the kind of evidence that you're storing.
B
We are really working towards getting a consensus on what evidence should be federated and managed over time: historically versioned and tracked. A really good example of it is an SBOM. SBOMs are done at the component level in microservices, and pulling that information up generates literally a federated view of an SBOM for an entire organization, or an SBOM for an application, or an SBOM for an environment.
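The federated SBOM idea described here — component-level SBOMs rolled up into one application- or organization-wide view — can be sketched as a simple merge. This is a minimal illustration only; the package names and data shapes are hypothetical, not Ortelius's actual format:

```python
def merge_sboms(component_sboms):
    """Roll component-level SBOMs up into one federated view.

    component_sboms: dict mapping component name -> list of
    (package, version) tuples. Returns a dict mapping each
    package@version string to the set of components that contain
    it, so a single report covers the whole application.
    """
    federated = {}
    for component, packages in component_sboms.items():
        for name, version in packages:
            key = f"{name}@{version}"
            federated.setdefault(key, set()).add(component)
    return federated


# Two microservices sharing one dependency: the federated view
# shows the shared package appearing in both components.
sboms = {
    "payments-svc": [("log4j-core", "2.17.1"), ("guava", "31.1")],
    "orders-svc": [("log4j-core", "2.17.1")],
}
view = merge_sboms(sboms)
```

The same merge works at any scope: feed it every component in an environment and you get the environment-level SBOM the speaker describes.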
B
So that's the kind of conversation we've been having. I was kind of hoping you would share with us what your evidence store looks like, and what we might do is share with you what we're doing and see where the data is overlapping, where we're agreeing, and what we're missing.
C
Yeah, certainly. We've had a really old set of technologies, like any good old-fashioned enterprise that's been around for a long time. For source code, we have a single system, finally.
C
But for CI/CD we have many systems, and I would say we have varying practices that teams follow within those systems. One of the challenges when we took over this group a couple of years ago was that we spend a fortune on all this: audits and reviews, app by app.
C
So literally, someone has to sit down with every application team and manually review their processes with them, because really we had nothing and we couldn't prove anything, and we knew teams were doing things they didn't necessarily need to do, without any control gates. As we embarked on our cloud journey, we were trying our best to figure out, you know...
C
Could
we
set
a
set
of
standards
that
people
had
to
do
in
order
to
go
to
Cloud
and
again
it
was
the
simple
bypass,
those
types
of
controls
that
we
had
in
place
or
intentionally
unintentionally,
as
we
subsequently
started
to
discover.
So
one
of
the
things
we've
started
working
on
was:
can
we
capture
the
data
that
happens
when
people
are
developing
and
as
that
development
and
that
code
gets
carried
through
all
the
CI
CD
processes
and
ultimately
ends
up
in
production?
And
you
know
if
we're
able
to
harness
it?
C
How are we able to put it together, stitch it together in a way that makes sense, and then how can we use that information? Is it engineering maturity? Is it some sort of analysis: metrics, insights, performance, latency, production support? If there's an incident, how do you track back whether a change was made? Or compliance, audit, and risk: if we've got regulated applications, how do I prove beyond doubt that I'm compliant with the particular regulation that that application might be subject to? We obviously have internal audits.
C
How do I prove that my application, my code, my processes, and my deployments are compliant with the internal Fidelity controls and policies from risk or security, etc.? So we started with source code management systems. We said, okay, well, what do developers do? Well, they commit code, they push code, they create PRs, they review PRs, they add comments, they merge, a pipeline kicks off, and a pipeline could do one thing, many things, or nothing, right. We've no clue whether Sonar scans are being done or not being done.
C
Are the results being ignored, or are the results actually being actioned on — like, you know, stop the build, etc.? All the way to the artifacts we create: really, the life cycle of an artifact, through traceability and immutability. And then, ultimately, how do I know when somebody deploys something or not, and am I able then to track that all the way back? So we started with source code.
C
So we're capturing all of the events that happen in GitHub and Bitbucket — though we're moving to GitHub. When people do things, we're capturing those events and we're logging them; we put them into a ledger. So we have an SCM ledger, and then we have a CI ledger, a CD ledger, and then on the back end of those ledgers we're joining the data together: so commits are linked to PRs, or linked to code reviews; reviews are linked to maybe a CI pipeline, and then linked to an artifact or artifacts, etc. And then, when the pipeline is running, we wanted to know what they're doing. Are they running Sonar? Are they running unit tests? Are they running performance tests, or chaos tests, or integration tests — just insights into what they're doing. Are they publishing artifacts, releasing artifacts, or promoting artifacts?
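The joining described here — separate SCM, CI, and CD ledgers stitched together so a commit can be followed to its PR, pipeline run, and artifact — amounts to a join on shared identifiers. A minimal sketch, with entirely hypothetical event shapes:

```python
def link_evidence(scm_events, ci_events, cd_events):
    """Join ledger events on the commit SHA they share.

    Each argument is a list of dicts carrying a 'sha' key plus
    ledger-specific fields. Returns one record per SHA combining
    whatever each ledger knew about it, giving the commit-to-
    production chain in both directions.
    """
    chain = {}
    for ev in scm_events:
        chain.setdefault(ev["sha"], {})["pr"] = ev.get("pr")
    for ev in ci_events:
        chain.setdefault(ev["sha"], {})["pipeline"] = ev.get("pipeline")
    for ev in cd_events:
        chain.setdefault(ev["sha"], {})["artifact"] = ev.get("artifact")
    return chain


# One commit traced across all three ledgers.
chain = link_evidence(
    [{"sha": "abc123", "pr": 42}],
    [{"sha": "abc123", "pipeline": "build-7"}],
    [{"sha": "abc123", "artifact": "svc:1.4.0"}],
)
```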
C
So within our pipelines, we have this eventing model: if you use our standardized pipelines, our pipeline libraries, we have eventing in behind. Anything that happens in a pipeline, we actually event out what it is and the different metadata associated with it, and if it generated any files, we're actually capturing those files out of the pod, storing them in S3, and providing those references back to them as well. So everything that happens...
C
We capture an event of that thing and we log that again. And because we have the SHAs, we're able to link everything together and build that picture from literally commits to production deployment and everything in between. So that's kind of where we're at, and then the goal, once we have this data, is to provide it to people for their use cases. So in our case, we want to govern production deployments. So: have you any secrets? Have you run a code scan?
C
Is
your
your
open
source
up
to
days?
Have
you
ticked
and
checked
all
the
boxes
that
somebody
says
you
have
to
do
and
only
then
allow
that
change
got
a
production
if
all
those
things
are
true
based
on
evidence
or
that
can
come
in
then
for,
like
we've
got
sock
regulations
so
sock
one
is
our
first
regulation
regulatory
requirement.
We
want
to
say:
okay,
you
know
why
don't
I
go
into
a
dashboard
and
see.
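The governance being described — allow a production change only when every required piece of evidence is present, then render the answer as a tick or an X — reduces to a simple policy check. A sketch with illustrative control names (not the speakers' actual control set):

```python
# Hypothetical control set a deployment must satisfy.
REQUIRED_CONTROLS = ["secrets_scan", "code_scan", "deps_up_to_date", "code_review"]


def gate(evidence):
    """Return (allowed, failures) for a proposed deployment.

    evidence: dict of control name -> bool, as assembled from the
    evidence store. 'allowed' is the tick/X a compliance dashboard
    would display; 'failures' lists which controls are missing.
    """
    failures = [c for c in REQUIRED_CONTROLS if not evidence.get(c, False)]
    return (len(failures) == 0, failures)


# All evidence present: the change may proceed.
ok, missing = gate({"secrets_scan": True, "code_scan": True,
                    "deps_up_to_date": True, "code_review": True})
# A failed or absent control blocks the deployment and names why.
bad, failed = gate({"secrets_scan": True, "code_scan": False})
```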
C
Is this production deployment SOX compliant or not? It should just be a tick box or an X, right. So I don't need to pay our external auditor millions of dollars a year to run around after each individual team to determine whether they are or not. If I can say, hey, look at this, there's a check box, yes or no — and maybe they just do some sampling to verify, right. So security: how does security get insights? And production support: there's an incident — hey, was there a change?
C
Yes or no? If there's a change — oh my God, like, most likely — then the support team and the app team can go: okay, here's the change, most likely the cause. Here's all the commits that were part of that. So again, time to resolution will be much faster, because now they've got a nice little boundary around what's most likely, and they have a place to start. They don't have to go trawling through systems and people or whatever.
C
You know, we built the Jenkins plugin and we contributed that, because everything we're doing in Jenkins is pretty standard, right — I don't think there's anything unusual we're doing. We use Amazon, so we use some Amazon tech, but that could easily be swapped in and out with other variants.
A
And where does the eventing come from? Does the publishing of events, or broadcasting of events, come from the plugin then, or...?
C
Yeah. So what we did was — if you're familiar with Jenkins, you can have the pipeline library, you know, and you can have your catalog. In the library, every var we create, every function we created, we wrapped in a closure, so it calls out to the eventing. And then we have a...
C
We have an eventing plugin that we deploy on all the team controllers within our Jenkins instances. So as that happens, the closure gets executed, and the plugin then gets that data and emits it — we just emit everything — and then it publishes it out, because we run in Amazon, and we have Lambdas listening for the published events, and we just use Dynamo, yeah.
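The closure pattern described here — every pipeline library function wrapped so that running it also emits an event — is the classic wrapper/decorator shape. The real implementation is a Groovy pipeline library feeding a Jenkins plugin; this Python sketch only shows the idea, and the emit target is a stand-in:

```python
import functools

EMITTED = []  # stand-in for the plugin that publishes to the ledger


def evented(fn):
    """Wrap a pipeline step so executing it also emits an event
    carrying the step name, its arguments, and its result."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        EMITTED.append({"step": fn.__name__, "args": args, "result": result})
        return result
    return wrapper


@evented
def run_unit_tests(suite):
    # A real step would shell out to the test runner; this is a stub.
    return f"{suite}: passed"


outcome = run_unit_tests("payments")
```

The caller never opts in or out: using the standardized library is what guarantees the event is emitted, which matches the "if you use our standardized pipelines, we have eventing in behind" design.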
C
No, we haven't. We haven't even thought about signing anything across the board yet — well, first of all, some of the technologies we have are so out of date that we need to upgrade them. But we haven't signed events, and we probably will sign something at some point. But, you know, do we need to sign each individual one? These are all immutable — the event gets generated and it gets logged into what we call a ledger.
C
So that's an immutable store, right. We then process out of the ledger: we have a bunch of subscribers sitting on the ledger, listening for events, and they're doing some transformations, and eventually they create the evidence store records for whatever it is — oh, I've got a PR; sorry, I've got a code review, okay.
C
We start associating everything together — if we did SonarQube, for example. And we use — yeah, there we go, you see the events — we're using CDEvents as the backbone of every type of event we're catching. So we're adding to CDEvents where we might want to extend something, or if there's a new event, we're adding to that. Yeah, so we're able to stitch everything together, and then we should be able to build that picture...
C
You
know
from
the
initial
commits
to
when
those
commits
appear
in
production
and
go
be
able
to
go
from
left
to
right
and
right
to
left
right
right.
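CDEvents, the CD Foundation specification named as the backbone here, gives every event a context block and a subject block. A minimal sketch of a pipeline-run event in that shape — the field values are placeholders, and the spec should be consulted for the exact versions and required fields:

```python
import json
import uuid
from datetime import datetime, timezone


def pipeline_run_finished(pipeline_name, outcome):
    """Build a CDEvents-style payload for a finished pipeline run.

    Follows the CDEvents context/subject convention; the source,
    versions, and content fields here are illustrative.
    """
    return {
        "context": {
            "version": "0.4.1",
            "id": str(uuid.uuid4()),
            "source": "/our/jenkins/controller",
            "type": "dev.cdevents.pipelinerun.finished.0.2.0",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "subject": {
            "id": pipeline_name,
            "type": "pipelineRun",
            "content": {"pipelineName": pipeline_name, "outcome": outcome},
        },
    }


event = pipeline_run_finished("payments-build", "success")
payload = json.dumps(event)  # what would be published to the ledger
```

Because every tool emits the same envelope, subscribers can stitch SCM, CI, and CD events together without caring which system produced them — which is exactly the left-to-right, right-to-left traversal described above.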
B
Did you look at any value stream tools to do what you're talking about?
C
For, like, value stream management? No — well, no, we didn't use one tool, because obviously we've got four or five different vendor-based tools running all these platforms, but we were interested in the raw data, unlike value stream mapping tools.
B
You know, they're pretty focused on managing DORA metrics. And we're doing some work with the Texas Office of the Attorney General, and they're asking for some very similar data to what you just defined, which we're building out in what we like to call our compliance scorecard. I haven't seen that kind of data.
C
No. And I know all the tools — we didn't necessarily look at a particular value stream mapping tool from a vendor, because we already used CloudBees. But what I tended to find with some of these tools is that they work fantastically within their own scope, their own boundary, from the inputs to the exits, but they had no visibility into what might have happened outside of their own tool. So the DORA metrics were too implied — like, I'm 100% certain of this, but I can't prove it. And I think people would be shocked here when I do prove it: how do I know what my developers do? Of the code they commit, how much of it does not get into the hands of a customer?
C
That's another use case we have here. And again, I'm not demeaning the metrics, but I think some metrics are so gamed. Like, you know, what's the one — time to market? People measure it by Jira, right: when did I start the epic and when did I end the epic? And I go, well — and now you get it, subjective to a team.
C
What's the final story? They go, "it's staged," right. And the majority of people say, "I've staged it; the artifact is now up in pre-release," or wherever you want to put it. But when does it actually end up in production, in the hands of somebody? So they use these start and stop things in Jira, and they go, that's how they measure velocity, and I go, well — did it end up in production in a week, two weeks, ten weeks, three months, six months, ever? We just don't know. So again, that's another one.
C
Another big pet peeve of mine — I don't know if you've dealt with SOX applications, but one of the things is that a developer cannot push a code change into production without another review happening, right. That's one of the regulations, part of the SOX thing: it stops me or you or anybody just writing some code and pushing it to production. And so people use, you know, code reviews, or separation of duties in Active Directory.
C
Well, I wrote the code, so I can't press the button. So, Tracy, I ring you: "Hey Tracy, that code's in there, press the button." And I go, well, that doesn't de-risk anything — you've just pressed a button for something you've no idea about. It's like being asked, "did you pack your own bag?" at the airport. And even code reviews — I see it all the time: "hey, I just put the PR up, can someone approve it?"
C
Can
someone
approve
it
and
the
developer
discuss
approve
right,
so
I'd
love
to
do
deep
analysis
on
you
know
quality
of
code
review
process
and
you
know
versus
lines
of
code,
and
you
know
how
much
has
changed
in
the
thing
that
we're
reviewing
so
30.
Second
review
might
be
okay.
If
there
was
one
line,
one
chord
line,
a
one
line
change,
but
a
30
second
review
of
a
hundred
lines.
Changed
is
obviously
just
a
check
box.
So,
and
you
know
getting
into
that
level
of
engineering
analysis,
so
we
can.
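The review-quality heuristic sketched here — 30 seconds on one changed line is plausible, 30 seconds on a hundred changed lines is a rubber stamp — can be made concrete with a seconds-per-line threshold. The threshold value is purely illustrative, not a standard:

```python
def review_looks_rubber_stamped(review_seconds, lines_changed,
                                min_seconds_per_line=2.0):
    """Flag reviews that were too fast for the size of the change.

    An illustrative heuristic only: require at least
    min_seconds_per_line of review time per changed line before
    trusting the approval as a genuine review.
    """
    if lines_changed == 0:
        return False  # nothing to review, nothing to flag
    return review_seconds / lines_changed < min_seconds_per_line


# 30 seconds on a 1-line change: fine. 30 seconds on 100 lines: flagged.
small = review_looks_rubber_stamped(30, 1)
large = review_looks_rubber_stamped(30, 100)
```

With PR events and review timestamps already in an evidence store, this kind of check becomes a query over data you already have rather than a manual audit.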
C
We can prove that we're reducing risk, and we're trying to increase the quality of what we deploy into production. And all of this data — that's why I say there are so many use cases that this data can open up: again, engineering, production support, triage, compliance, security. There's a ton of them there, yeah.
B
And I keep talking to people about how the data is so important in order for us to build policies — you can't build policies without data — and we have to have a way to start federating the information into one location, so we can actually create actionable data. And centralizing the logs tends to be — I've been working on a CDEvents white paper...
B
White
paper,
and
one
of
the
comments
that
spotty
had
made
is
that
the
City
events
has
the
potential
even
to
bring
in
testing
logs
right
to
centralize
the
data
around
the
testing
log
so
that
we
could
have
access
to
the
testing
logs.
So
there
is
so
much
data
that
we
have
the
potential
to
pull
in.
B
We
have
recently
most
recently
been
really
focused
on
some
of
the
security
tooling.
So
like
the
sonar
scans
and
whatnot,
we
We've
we've
brought
in
and
we've
added
to
the
scorecard
the,
but
we've
we're
starting
to
now.
Look
at
like
the
open,
ssf
scorecard.
Have
you
guys
looked
at
at
any
of
the
security
kind
of
steps
to
track
that
information.
C
The OpenSSF Scorecard is on our list to review. One of our challenges is — for example, we use JFrog, and we're on such an old version, actually it's a year out of date as it is, so we're unable even to do some basic stuff like security scanning of the binaries we're bringing in from the open source community.
C
We haven't gone there yet, because I think we have some more core, fundamental things to fix first. So, like, we want to get to signing artifacts; we want to get to, you know, how do we verify the open source packages we do bring in? And, you know, we allow people to download things from the internet, right, so it's pretty hard to sign anything or prove anything when developers or build machines are free to download things from their desktop, or via curl or wget from within something.
C
Yeah, so we have fundamentals to fix there. But with security, you know — obviously, unfortunately, it's getting easier and easier to be caught out unintentionally.
C
So
I
know
how
do
we
you
know
so
it
you
know
our
first
things
is
just
make
sure
we
have
the
right
left
with
the
with
as
many
protections
as
we
can
in
place
and
that
and
that's
kind
of
where
we're
focused
at
Now
versus
maybe
the
data
or
the
signings
or
you
know,
even
if
we
can
just
be
able
to
protect
ourselves
from
scanning
everything
and
making
sure
we
we
as
we
bring
in
things,
it's
all
been
scanned.
C
We
do
now
on
some
on
anything.
That's
in
our
GitHub
repository,
we're
scanning
them
all
up,
but
we
want
to
put
SAS
right.
I
want
to
put
code
ql
and
everything
yeah,
but
you
know
we
have.
At
least
we
have
secrets
been
worked
on
now
when
we've
SCA
we've
in
the
ca
tool
in
place.
So
we
have
that
scanning,
but
we
just
scan
right.
C
Correct. So it's kind of like: help make it better. And then, like, one of the things is, if you never had scanning before, or you only had scanning on a partial set of your portfolio, then there's going to be a huge amount of true positives in the findings of these scans of applications that haven't been scanned before, and the remediation of those could be quite sizable. So how do you balance risk versus, you know...?
B
Speaking from a version perspective, when you grab the data, at what point do you do that? At the point in time that the build is done?
C
So if there is something wrong with it, then it's there — it's already in the system. What we'd like to do is push that into the dependency map, into the package managers, and say: hey, I've got this package manager — run npm install, or run mvn install, whatever — it's then going externally, and we're scanning it...
C
You
know,
as
it
comes
in
the
front
door
and
then
making
a
decision
there
and
then
to
decide
to
allow
in
or
not
so
that
is
Our
intention
long
term,
but
that
that
requires
a
little
bit
more
work,
especially
around
networking
and
stuff
and.
C
We
will
be
yeah,
we
were
just
kind
of
we
did
CI
first
and
then
we're
moving
into
the
CD
patterns
now
so
yeah
so
we'll
be
for
for
deployment.
We're
going
to
be
capturing
things
like
you
know,
whatever
our
control
gates
are,
we
decide
to
put
in
we're
going
to
obviously
capture
all
of
those,
the
the
events
related
to
those
control
Gates.
C
So
we
have
the
evidence
of
you
know
what
were
the
inputs
to
the
control
Gates
and
what
was
the
output
of
the
control
Gates
and
when
you're
deploying
who
deployed
what
was
the?
What's,
the
actor
that's
doing
the
deployment
or,
if
it's
assuming
something?
What
is
this
assuming
and
doing
their
deployment?
What
was
deployed
and
where
has
it
been
deployed?
So
is
it
into
which
cloud
provider?
Is
it?
What
what's
the
id
of
that
account
or
subscription
Etc?
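The deployment evidence enumerated here — gate inputs and outputs, the deploying actor or assumed role, the artifact, and the target account — is just a structured record. A sketch of assembling one such evidence-store record, with all field names hypothetical:

```python
from datetime import datetime, timezone


def deployment_evidence(actor, assumed_role, artifact, cloud, account_id,
                        gate_inputs, gate_output):
    """Assemble one evidence-store record for a deployment:
    who deployed, what was deployed, where it went, and what
    the control gates saw and decided."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "assumed_role": assumed_role,
        "artifact": artifact,
        "target": {"cloud": cloud, "account_id": account_id},
        "control_gates": {"inputs": gate_inputs, "output": gate_output},
    }


record = deployment_evidence(
    actor="deploy-bot",
    assumed_role="arn:aws:iam::123456789012:role/deployer",  # example ARN
    artifact="payments-svc:1.4.0",
    cloud="aws",
    account_id="123456789012",
    gate_inputs={"code_scan": True, "secrets_scan": True},
    gate_output="allowed",
)
```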
C
And, you know, we can link it to the artifact that was deployed and what technology type it was. The next logical thing might be: do you get down into the next granular level? So if you're going into AWS, for example, and you're using Terraform or CloudFormation, it can deploy a multitude of things.
C
So how do you — obviously we know what the CloudFormation template or Terraform template artifact is, but how do we next link that to the physical resources, and the resource IDs of what was put in? So we haven't — we're stopping at the deployment artifact and saying: this is the thing that was deployed.
C
Obviously you can go in and read that and say what it created or what it updated, but the next logical step for us then would be: can we get down to the actual resources that were touched by the deployment?
B
And when you grab that data, do you do a version, or a SHA, or a release number, or some way of identifying that particular object — a container, yeah?
C
We will. So up to CI/CD, everything is SHA-based: we have the commit SHAs, and the PRs are linked to the commits, so we can link all of them. SonarQube actually does give you — I can't remember which ID it is, but SonarQube does let you link back to your Git; I think it's the PR — so you can link there, and then in our pipeline...
C
Obviously
we
have
the
shares
to
get
Jazz,
so
we're
able
to
link
and
then
we're
we're,
storing,
there's
a
I
can't
remember
the
identifier
versus
in
Jenkins
there's
a
unique
identifier,
so
we're
able
to
use
that
and
for
the
pipeline
and
that's
able
we're
able
to
thread
any
all
stages
that
happen
in
the
pipeline.
And
then,
when
we
publish
to
art
we
publish
up
the
artifact,
we
have.
We
have
those
binders
and
we're
decorating
the
artifact
as
well.
So
we'll
we'll
have
all
the
ideas.
C
Now, ultimately, we will be signing the artifacts when we upgrade, because JFrog provides signing at that level, so we'll be able to sign things and then be able to store that signature as well. So, right — because every event is triggering and we have the same ID, we can stamp the unique build ID, the identifier, on the artifact that it created, so that we can then go from the artifact back into the builds, or we can go from the builds into the artifacts.
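Stamping the unique build ID onto the artifact, as described, is what makes the traversal bidirectional: artifact to build, and build to artifacts. A sketch with two indexes (the identifiers are hypothetical):

```python
class TraceIndex:
    """Two-way index between build IDs and the artifacts they
    produced, so a query can walk artifact -> build -> artifacts."""

    def __init__(self):
        self.by_build = {}     # build_id -> list of artifact refs
        self.by_artifact = {}  # artifact ref -> build_id

    def record(self, build_id, artifact):
        # 'Decorate' the artifact with the build that created it,
        # and list the artifact under that build.
        self.by_build.setdefault(build_id, []).append(artifact)
        self.by_artifact[artifact] = build_id


idx = TraceIndex()
idx.record("jenkins-run-1234", "payments-svc:1.4.0")
build = idx.by_artifact["payments-svc:1.4.0"]   # artifact -> build
artifacts = idx.by_build["jenkins-run-1234"]    # build -> artifacts
```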
C
So yeah — we run ServiceNow, like the majority of companies do. In that, we have what we call a product model: so we have a business unit, and we have a product line, and then there's a product, and then there's an app, each with its ID. So at that level it's all app IDs; we have the app IDs flowing through.
C
What we have done here is, when you create a repository, for example, you have to have the app ID, so we're able to associate that back to a set of contacts, right — and even, for us, back into the ServiceNow assignment group, because we'd know the assignment group of that application.
C
We'd point to it, so someone would have a reference back to it if they want to.
B
We can point to — we do. Ortelius now manages the ownership, because we're trying to be the provider; we don't know whether they're using ServiceNow or not.
B
I didn't schedule enough time, really, for this call to show you what we're doing with Ortelius. Could we do a follow-up call? I believe that there are some items that you may not have completed yet that we've already done, and of course we would love for you to be an adopter of Ortelius. I don't know if there is a good integration with what you have already done, or if there's a way that we can help...
B
...you finish the project. But we would love to be able to have that discussion, or at minimum get some feedback from you on what we have accomplished so far with how we're gathering evidence. Now, keep in mind, Ortelius can support any kind of object: we can support a file, any kind of file system; we can support database objects, SQL statements; and we can support containers.
B
So
those
are
sort
of
the
three
objects
we,
while
we
could
support
monoliths
without
a
problem
where
we
really
shine
is
when
it's
decoupled.
B
The
decoupling
just
everything
you
just
told
us
in
a
monolith,
it
becomes
even
more
complicated
microservices
because
you
have
all
these
teams
creating
feature
sets.
Now
you
have
a
a
giant
hairball
or
as
I
used
to
call
it
a
a
giant
death
star
of
dependencies
and
we're
working
with
the
salsa
team
and
at
Google
and
and
guac
to
start
pulling
in
the
package
dependencies,
which
is
even
a
bigger,
Death,
Star
and
and
we're
thinking
how
you
know
how
we
can
use
that
data
to
better
refine
our
blast
radius
information.
B
So we are looking at solving the artifact issue, the data gathering issue, and the dependency issue throughout that platform. We've been thinking about it for a while; it sounds like you have too. So I'd love to get your feedback on the Ortelius platform itself.
C
Yeah, what we can do is set up a follow-on. We're away next week — I'm in the US next week — but I'd like to bring in a couple of the people who are actually working on the team itself, because obviously they'll actually say when we do this or do that or do the other thing.
C
They need more depth. And we could see then if there are opportunities, because, like, the thing we have is big, and the team is only so big, right, so they can only go so fast. So if there are things out there that can complement, and, you know, be part of the overall solution — because it crosses so many things anyway, we don't build everything ourselves.
B
We would love to see what you already have done. If there's a way we can bring that into Ortelius, we would love to do that, and it's pretty open — you know, we've built it all based on APIs in terms of getting to the data. We do do a pretty extensive versioning process, though, because we are thinking about how components are versioned and then how applications are versioned.
B
So
I
don't
know
how
that
would
impact
you,
but
we
do
do
a
pretty
extensive
versioning
process.
We
basically
built
a
versioning
engine
for
com
for
configuration
data,
any
kind
of
configuration
data.
B
What
else
I'll
do
is
I'll
I'll
ping,
you
for
a
meeting
the
week
of
June
19th,
then.
C
Yeah, either — yeah, the 19th to the 26th. I'm on summer vacation for the first two weeks of July, but yeah, either one of those two weeks.
C
I'm going to say — well, definitely not Thursday, not Friday. So this time is what we could shoot for, on either.
C
We're trying to do an end-user meeting on the 26th, 27th, or 28th of June, but we could...
B
It doesn't really matter to us; we can always reschedule things. Right now there is a CDF online meetup that we're doing at nine o'clock Mountain time — I'm not sure what time that is for you, but that's two hours earlier than now. Probably Wednesday the 21st would be a little better.
C
10 o'clock — which time zone?
B
It would be noon Eastern, and we can do earlier too. We...
B
Thank you so very much for, you know, being open to the idea of sharing information. I appreciate it; from an open source perspective, it's so critical for us, yeah.
C
It is. Again, none of what we're doing is unique — it's standard across the board — so the more people involved in these things, the faster the capabilities and integrations all come, and, you know, it's a win-win for everybody.
B
It is unique in that you've gotten so far. Everybody's started to talk about all of the evidence, as you call it — and it's interesting you'd use that term, because I've been calling it evidence for quite some time — but the forensics around what we do can be extremely important. And in my world, when I think about it: if we could begin building that kind of forensics around all of the open source tools that we're using, then we have some real data to start playing with, yeah.
C
It would, yeah, because once you start getting into it, the use cases just keep coming. That's the thing — someone else comes along, like production support. Well, it originally started for us as regulations, right: how do we handle all the internal and external regulated applications? And it's snowballing...
C
Now
to
here
you
know
this
Persona,
that
person,
the
other
person
and
even
like,
say
you
know,
from
a
finance
from
a
from
a
Workforce
perspective
like
how
do
we
know
that
we're
getting
value
for
what
we
invest
in
teams
and
are
we
sure,
the
time
that's
been
spent
by
people
and
I
didn't
think
of
the
contractor,
one
where
you
might
be
paying
an
external
body
for
work?
And
do
you
know
if
you're
getting
value
for
money,
yes
or
no.
C
I'm pretty sure we could probably answer that question now. But anyway, it is interesting, yeah.