From YouTube: CDF - SIG Interoperability 2021-04-01
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
A
I'm okay, because I haven't been able to come to a KubeCon talk yet, so that makes me a bit stressed. But how are things with you?
B
They're fine, a little busy. I have to do the KubeCon talk as well, but I'm quite excited about it. I think it should be a really good match there. Yeah.
B
Good, great. Well, welcome to today's Interoperability SIG meeting. Vincent Behar is going to give a presentation on (tell me if I'm describing this correctly) using Jenkins X to gather metrics, using the DORA metrics and Nicole Forsgren's new SPACE metrics developed with her GitHub team. We have a link in the notes to the paper that they've just written, so that'll give you some background on these new metrics. That'll be really exciting. But first let me share my screen and we'll go over just some of our admin.
B
Great, we'll go over, I think, the action items we have for today's meeting. If we look, Steve, to...
B
Okay, great. So let's look at the artifacts metadata work that we have now. And Steve, are you on the call? Yes? Awesome. So I will let you speak to what you have added to this. Does that sound good?
E
Okay, well, since you're sharing, just go ahead and scroll down, and I'll kind of...
E
So I kind of broke it down into five types of artifacts, and this is not the definitive list; if we need to change any of this, I'm totally open to it. The first one is more of the traditional file artifact that we think of, like a jar file or an ear file, where those get pushed to some sort of repository (Artifactory, Nexus, a Maven repository), and when they get pushed to those artifact repositories, they're going to have certain attributes around them, mainly the basic stuff around a file.
E
One of the things that does happen in those repositories is that the artifacts are categorized not only based on the name, but they'll also have some sort of version tag or commit that's associated with them. So that version tag or commit can be...
E
You
can
have
multiple
ones,
those
pointing
to
the
same
artifact
version,
so
you'll
see
like
you'll,
have
like
a
snapshot
or
it'll,
then
get
transformed
from
a
snapshot
to
a
release
candidate
and
then
it'll
get
transformed
from
release
candidate
to
the
the
final
version.
So
there
is
some
transformation.
The
these
artifact
repositories
are
not
immutable,
they
can
like
the
tags
can
be
changed
and
things
like
that
you
can
go
and
replace
the
version.
E
The database artifact is something that we have in Ortelius, and the concept behind a database artifact is that it comes in two pieces. You have basically two file artifacts, one for a roll forward and one for a roll back.
E
Both of those are typically different file names. An example of a roll forward is: I'm going to add a column to a table, and the roll back is going to be dropping that column that I just added. What this allows is databases acting in an incremental way when you need to update them, so you go through your whole versioning of the tables and apply all the different versions in the right order.
E
So that's this concept of a database artifact, with the two different roll forward and roll back pieces.
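The roll-forward / roll-back pairing Steve describes can be sketched like this (a minimal illustration using SQLite; the table, scripts, and version numbers are all invented):

```python
import sqlite3

# Each database artifact is a pair of file artifacts: a roll-forward
# script and a roll-back script, applied in version order.
MIGRATIONS = [
    # (version, roll_forward, roll_back)
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        "DROP TABLE users"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT",
        "ALTER TABLE users DROP COLUMN email"),
]

def migrate(conn, target_version, current_version=0):
    """Apply roll-forward scripts, in order, up to target_version."""
    for version, forward, _back in MIGRATIONS:
        if current_version < version <= target_version:
            conn.execute(forward)
    return target_version

conn = sqlite3.connect(":memory:")
migrate(conn, target_version=2)
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'email']
```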
E
So the next one is the container image artifact. For this one, I did not go deep down into the contents of a container.
E
I looked at it as: once a container has been built, we're going to do something with that built image, and usually that gets pushed to a registry, and that registry is where you can retrieve it from. Registries can be either public or private. They'll have an organization as part of the naming convention, so this image belongs to this organization.
E
So if I just do a docker build, I'm not going to have a digest, which is one of the key things: digests are immutable, so we'll never get a duplicate on that point. That's one of the nice things with container images, they're immutable. The tags, though: you can re-tag something, so you can't trust the tag to be the definitive go-to. If you're going to do reporting, you'd want to be tracking the digest.
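A quick sketch of the tag vs. digest distinction above (the manifest bytes are invented; in a real registry the digest is a SHA-256 content hash of the image manifest, so identical content always yields the same digest, while tags are just movable pointers):

```python
import hashlib

# Invented stand-in for an image manifest.
manifest = b'{"layers": ["sha256:aaa"], "config": "sha256:cfg"}'

digest = "sha256:" + hashlib.sha256(manifest).hexdigest()

# Tags are mutable pointers onto immutable content:
tags = {"1.0": digest}
tags["latest"] = digest                 # same image, second tag
tags["1.0"] = "sha256:something-else"   # a tag can be re-pointed...

# ...but the digest of the same bytes never changes:
stable = "sha256:" + hashlib.sha256(manifest).hexdigest() == digest
print(stable)  # True
```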
E
The next one is the endpoint artifact. "Endpoint" is a loose term I'm trying to use there; if somebody has a better idea of something other than endpoint, feel free. But what I'm trying to capture here is a target that we deploy to. It could be a server, a physical server, a virtual machine, an EC2 instance, a Kubernetes cluster, an AWS Lambda function; we're going to go somewhere inside of...
E
This
world
and
what
I'm
trying
to
gather
here
for
endpoint
artifact
is
what
is
the
definition
of
the
end
point?
You
know
what
is
making
up
the
end
point,
and
so,
in
the
case
of
like
on
the
the
kubernetes
kind
of
cloud
provider
side
there
we're
gonna
have
something
like
a
terraform
or
cloud
formation.
E
Some
sort
of
yaml
file,
that's
going
to
describe
what
the
endpoint
is
going
to
look
like
and
that
that
yaml
json
definition
is
what
I
was
thinking
that
we
would
be
in
the
a
repository
that
you
could
always
go
back
to.
So,
if
you
want
to
go
recreate
a
an
endpoint,
you
go
pull
it
from
the
repository
and
then
run
the
appropriate
program
against
that
file
to
go
ahead
and
create
that
endpoint
as
part
of
that
process.
E
This one really is describing, or trying to describe, the different pieces of hardware that are actually on a machine that we're going to be working with. This is more of what I would call a representation or a reporting of a hardware artifact, versus a way to stand up a new hardware artifact, because if you want to stand up a new piece of hardware, you actually have to have somebody physically go put in a graphics card...
E
...or have somebody go physically put in a disk drive. So this is more along the lines of reporting for hardware artifacts, and there could be a disconnect as part of what a technician's doing to a machine: they go and swap out a graphics card, and they don't put in a new one that's exactly the same as the old one; it's a slightly different model or whatever. So we could end up with drift at the hardware level, and that would need to be addressed.
E
But
if
we're
looking
at
pulling
together,
everything
as
a
whole,
you
know
from
the
base
level
hardware
all
the
way
up
through
cloud
up
to
databases
contain
you,
know,
container
images
and
files,
that's
what
I
was
trying
to
achieve
here
at
that
level
and
if
there's
anything
I
missed
feel
free
to
go
ahead
and
change
it.
This
is
just
my
initial
thought
of
where
we
could
start.
A
That would be great. Okay, so Steve, you actually, I think, highlighted SPDX when we first started discussing standardized metadata, back in December or something. I have been casually looking at SPDX and browsing the Linux Foundation site and other places, and I noticed an announcement on the Linux Foundation site, which I put the link to there, and it seems the Linux Foundation submitted the SPDX spec as a candidate to become a standard as part of ISO. And then I found the draft spec on the ISO site; if you go there, it seems you have to pay to get the full draft, but it seems it's in progress. And also, I was watching the presentation by Jim Zemlin during the Linux Foundation member meetings, which I put the link to there as well.
A
He
mentions
that
spdx
accepted
as
international
standard
for
how
open
source
metadata
is
shared.
So
this
made
me
think
that
maybe
we
should
you
know,
get
in
touch
with
spdx
folks
and
to
learn
more
about
what
they
are
doing.
What
this
standard
consists
of
and
perhaps
get
someone
from
spdx
to
join
one
of
our
meetings
and
thanks
to
tracy
miranda,
he
got
in
touch
with
kate
stewart
from
spdx,
so
she
will
join
our
meeting
in
two
weeks
on
15th
of
april.
E
Right, so let me see if I can find it real quick.
E
So
is
it
apache
apache
2,
you
know,
there's
about
it
seems
like
hundreds
of
them,
and
that
was
the
main
goal
was
to
gather
in
the
initial
licenses,
because
that's
what
everybody
wanted
to
know,
that's
what
the
attorneys
wanted
to
know
was
what
was
the
licenses
that
are
being
used
so
and
I've
been
following
spdx
for
years
now.
E
So what I found was that version one of the spec is a lot easier to follow than version two. Version two has taken it to a new level, and just to get the idea of what they were thinking, look at version one; the documentation is laid out, I think, a little bit easier to follow.
E
Now, when we get to a container... let's say we're doing a Python Flask application in a container. We're going to have many Python modules installed, so we're going to have many SPDX files: one for each package should be there. Same thing with Node.js, there's going to be one for each. Anything you bundle up and push out, to PyPI or any of those, any time you do your packaging, this is where they're basically requiring you now to give some basic information.
E
Then there is package information. This is where the overlap really comes into play: what's the package name, how is it being described, what is the package version, examples of that. So this is relevant when we're talking about vocabulary and definitions.
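A minimal sketch of the SPDX package information fields being discussed, emitted in the 2.x tag-value style (the field names follow the SPDX spec; the values are invented for illustration):

```python
# Hypothetical package-information fields, SPDX 2.x tag-value style.
package_info = {
    "PackageName": "flask",
    "PackageVersion": "1.1.2",
    "PackageSupplier": "Organization: Pallets",
    "PackageDownloadLocation": "https://pypi.org/project/Flask/",
    "PackageLicenseDeclared": "BSD-3-Clause",
}

# Tag-value is just "FieldName: value", one field per line.
tag_value = "\n".join(f"{k}: {v}" for k, v in package_info.items())
print(tag_value)
```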
E
This is version 1.2. Version two I found was, like I said, a little bit harder; they didn't lay out the definitions quite as cleanly.
E
But here's version 2.1, and it has something similar: what is a package, what is the supplier?
E
So
a
lot
of
the
information
that
we're
looking
at
the
concepts
are
going
to
be
the
same
but
like
for
package,
download,
location
and,
let's
say
we're
dealing
with
a
java
jar
that
would
make
sense
that
this
would
be
the
maven
repository
that
the
jar
file
is
going
to
come
from,
so
that
would
be
around
a
jar
file,
that's
being
packaged.
Now,
when
you
create
a
jar
file,
you
don't
necessarily
have
a
way
to
embed
spdx
information
into
it.
E
So for us, we're typically thinking about after a CI build. I have a package, and that package... let's say I'm going to do a Python module or a Python executable, or let's do Node.js. So I have a Node.js program, and it includes so many different packages.
E
Then I'm going to have a package SPDX that's going to include other SPDXs, so it's kind of this nested, cascading effect where you have to roll up the information. And then, if you put that Node.js into a container and you have multiple other pieces, you have to keep rolling up these relationships to get the true representation of what is in the container, to describe that. So, compared to what I just described around file artifacts, this is information at a different level.
C
Yeah, do we know of any companies that have adopted SPDX in, you know, real-world use?
F
Yeah, I will say it's well worth having the conversation with Kate. I had a quick chat with her, and there's the licensing part of SPDX, which is less interesting, but then there's the package management side, and I appreciate that that's pretty complicated. But I think SPDX 2.2 is what's out now, and they are working towards 3.0, so we should be aware of what's happening there, and of the direction as well. A lot of what she mentioned is kind of in the SBOM space, so software bill of materials.
F
That
being
said,
I
think
the
specifics
of
cicd
is
not
going
to
have
that
domain
knowledge,
and
we
are
the
right
folks
to
kind
of
bring
that
perspective
into
that.
I
think
it's
it's
well
worth
working
with
them
and
layering.
On
top,
where
we
can
just
because
I
think
yeah
from
all
the
indications,
it's
picking
up
a
lot
of
momentum
in
adoption
as
well
across
platforms.
E
Yeah, there's a lot of terms in the SPDX world that we can adopt without reinventing them. What is a versioning schema, what is a git repo: those terms have already been defined in the SPDX specs, so we can just refer to them instead of creating our own and being slightly off. I would say that we adopt as much as we can from the standard, especially since it's being put in front of ISO; that will save ourselves a lot of time.
E
Yeah, and like I said, the spec is really what I consider low level. It's for when I create a package of code that I want to share. It may not go through a CI/CD process, but it is a shareable object, and because it's shared, I want to attach certain attributes to it so people understand, when they go look at my module, who's the author of it and where's the website.
E
You
know
what
are
the
other
dependencies
that
you
need
to
install
to
make
this
this
piece
run.
So,
like
I
said
it's
it's
for
us,
it's
a
little
low
level
and
we'll
need
to
look
at
it
at
a
roll-up
perspective,
and
that's
one
of
the
things
we're
doing
on
the
artelia
side
is
is
rolling
that
up
we're
actually
bringing
in
cyclone
dx
has
some
decent
tools.
E
Some
of
the
few
open
source
tools
that
you
can
use
to
scan
for
spdx
records,
as
well
as
other
packaging
information,
and
also
the
cyclone
tools,
will
look
for
against
cves
as
well,
go
up
against
the
cve
database
and
kind
of
correlate
the
two
together.
E
So
that's
one
of
the
things
that
is,
I
found
and
then
there's
another
tool
that
I
found.
It's
called
the
dependency
track
dependencytracker.org
I
think,
and
they
provide
a
ui
to
visualize.
E
What's
in
a
basically
when
you
go
gather
all
these
spdx
records
and
and
the
other
packaging
information,
you
can
logically
group
them
together
into
what
they
call
a
project
and
you
could
look
at
all
the
licenses
or
all
the
cves
for
your
project,
and
it
gives
you
that
visualization
at
that
level
and
that's
another
open
source
project.
C
More
fundamental
question:
if
we
have
a
if
there
is
a
an
open
standard
like
spdx
already
in
place,
what
exactly
is
it
we're
trying
to
define
here.
E
Well,
that's
what,
as
part
of
the
vocabulary,
that's
why
I
was
saying
we
should,
instead
of
us
going
through
and
trying
to
come
up
with,
what
we
consider
is
a
the
definition
for
a
source
control
repository
and
we
come
up
with.
You
know:
subversion
git,
you
know
whatever
and
we
go
through
and
do
all
of
our
definitions
that
instead
of
doing
that
project
that
we
just
refer
to
what's
already
been
done
on
the
spdx
side.
A
And yeah, the other thing Tracy Miranda highlighted is that they may lack the perspective from the CI/CD domain. So we can look at what they are doing, see if there are gaps, and contribute them there. But yeah, as Tracy mentioned, Kate Stewart will join our meeting on the 15th of April, so that would be a great opportunity for us.
A
You
know
hear
what
they
are
working
with
like
version
three,
they
start
working
with
that
and
we
can
share
our
thoughts
and
what
we
are
doing
within
cdf
and
see
what
she
says.
Maybe
they
already
thought
about
these
things,
but
they
may
not.
Have
you
know
the
right
contacts
or
whatever
you
know?
That
is
like
kind
of
why
I
brought
spdx
to
pickup.
E
Yeah, and what I found initially is that the only thing vendors or open source projects are really putting in the SPDX record is the license, at this point. I'm hoping that as people adopt it, they're going to start putting in more information, more of the individual records. I have not found anybody that's doing the dependencies yet when you look at the projects. So there's a lot of information in the spec, but people are just doing a couple of one-liners in their SPDX records for now.
C
You know, whether it be a jar file or whatever, those repositories give you a slew of metadata, as does anything that adheres to the Docker Hub standard for images. Red Hat, with their Quay repository, already gives you everything you would imagine about the image itself. So I'm not sure how much value there is in anyone else trying to define a standard for metadata having to do with those types of artifacts.
C
So all you really need to do at the end of the day is refer to it and say: this is an npm module, here's the address for its repository in Artifactory. And then, say somebody wants to crawl through the various dependencies of a fully fledged software system: all they need to know is, oh okay, I've got an npm module.
C
I've got a JavaScript application, I have a Quay address here, and if they have adapters for that system, those adapters know exactly what to do to retrieve the metadata from Quay or from npm or from Artifactory. There's really no need to... you'd be reinventing the wheel, and why do that? So I can totally understand why they don't do that.
E
So just for example: say in npm you have to use the word "creator" as the keyword, and in Python you have to use the word "author".
E
Now
you
have
to
have
two
different
adapters
and
what
the
the
standard's
trying
to
do
is
is
literally
standardized
we're
going
to
use
author
instead
of
creator
as
spdak
standard
across
all
these
different
programming
languages.
So
now,
when
we
go
and
crawl,
this
stuff
we're
not
having
to
rewrite
different
adapters
and
and
change
switch
adapters
on
the
fly
as
we
go
through
the
different
languages,
because
if
you
look
at
a
container
that-
and
this
this
applies
to
like
rpms
as
well
at
the
operating
system,
level
that
the
spdx
applies
to
those.
E
So
if
I
want
to
go,
get
a
list
of
all
the
licenses
that
are
installed
in
the
container,
and
I
have
some
of
it's
in
node,
some
of
it's
in
golang
and
then
I
have
my
rpms.
I
have
to
have
three
different.
I
have
to
scan
it
three
different
times,
because
I
had
three
different
standards
I
had
to
look
for
and
the
spdx
is
trying
to
make
it
a
make
it
easier
to
do
it
just
scan
once
and
get
all
the
the
data
that
I'm
looking
for.
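The one-vocabulary idea can be sketched as a tiny field-name normalizer (the per-ecosystem field names here are illustrative, not the exact keys npm or PyPI use):

```python
# Different ecosystems name the same concept differently; one crawler
# plus a mapping table replaces one hand-written adapter per ecosystem.
FIELD_MAP = {
    "npm":    {"creator": "author"},   # hypothetical: "creator" -> common term
    "python": {"author": "author"},    # already matches the common term
}

def normalize(ecosystem, metadata):
    """Rename ecosystem-specific keys to the common vocabulary."""
    mapping = FIELD_MAP[ecosystem]
    return {mapping.get(key, key): value for key, value in metadata.items()}

print(normalize("npm", {"creator": "Jane"}))    # {'author': 'Jane'}
print(normalize("python", {"author": "Jane"}))  # {'author': 'Jane'}
```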
G
I was wondering... I guess there are some attributes in those that are the same for all artifact types, right? For example, I guess name and path and size are more or less there for all of them. Would it be feasible, then, or maybe even good, to have a common, high-level artifact definition with the common attributes for all artifact types, and then for certain types we have some other attributes as well?
E
Yeah, I was thinking of that, where you could do an inheritance model, so you have some base. The base attributes, like you said, like name or path, would be at the base level, and then, say, a database would inherit from a file; that type of inheritance. That was one of the other ways I was thinking of laying it out, but I wanted to get your feedback. This is just an initial brain dump; I think there are some scenarios, or some other types of artifacts, that I'm not thinking of that we need to look at.
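The inheritance model being discussed could be sketched like this (all class and field names are hypothetical):

```python
from dataclasses import dataclass

# Base artifact with the attributes common to every type; subclasses
# add type-specific attributes, e.g. a database artifact inherits from
# a file artifact, as suggested above.
@dataclass
class Artifact:
    name: str
    version: str

@dataclass
class FileArtifact(Artifact):
    path: str
    size: int

@dataclass
class DatabaseArtifact(FileArtifact):
    roll_forward: str   # script that applies the change
    roll_back: str      # script that reverts it

db = DatabaseArtifact(
    name="add-email-column", version="2",
    path="/migrations/002.sql", size=120,
    roll_forward="ALTER TABLE users ADD COLUMN email TEXT",
    roll_back="ALTER TABLE users DROP COLUMN email",
)
print(isinstance(db, Artifact))  # True
```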
G
Yeah, another way, I guess, to do that, instead of having a hierarchy, would be to have these common attributes as mandatory attributes, and then the other ones could be optional. Depending on what type of artifact you are talking about, those would of course not be optional to you, but they would be declared optional in the metadata schema, or something like that.
E
Some
of
them,
you
know
like
the
base
ones,
like
name
well,
name's,
the
main
one
version
would
be
the
other
one.
Those
would
be
the
two
main
ones
size
is,
you
know
a
size
of
a
hardware.
Artifact
is
questionable
because
we're
trying,
I
I
define
the
the
a
hardware
artifact
is
basically
is
a
file
that
describes
what
the
what
is
on
the
hardware.
So
it's
basically
a
a
description
of
the
hardware.
Now
you
could
do
the
size
of
that
file
would
be
another
required
argument,
but
that's
one
of
those.
E
I think that would be an interesting way to describe an artifact. Now, what we need to do is look at what something like the Maven repositories are doing, because they've had the most experience around this, like Artifactory and Nexus, those repositories, to see what they're doing at that level, because that's really what we're trying to describe. The way I look at it is: if I want to go get an artifact, what do I need to go get it?
E
I
need
to
know
the
name
of
it
which
the
path,
the
version
and
then
the
the
the
location.
Those
are
like,
the
four
things
I
need
in
order
to
to
retrieve
something
now.
The
immutable
part
is
where,
like
on
the
docker
side
of
the
digest,
is
the
immutable
id
that
you're.
Referring
to,
I
don't
know
if
we
can
get
to
that
on
a
file
type
you
possibly
could
be
could
by
storing
the
md5
of
the
file
and
use
that
as
the
descriptor
of
it.
But
those
you
know
that.
E
That's
that's
one
way
that
we
can
look
at
the
a
unique
number
to
describe
an
artifact
yeah.
Maybe.
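The file-digest idea can be sketched like this (a hedged illustration, not anyone's actual implementation): a content hash serves as an immutable identifier for a plain file artifact, the way a registry digest identifies an image.

```python
import hashlib
import os
import tempfile

def file_digest(path, algorithm="sha256"):
    """Hash a file's bytes in chunks and return 'algo:hexdigest'."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return f"{algorithm}:{h.hexdigest()}"

# Same bytes always produce the same digest, regardless of file name:
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "app-1.0.jar")
    b = os.path.join(d, "renamed.jar")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"identical contents")
    same = file_digest(a) == file_digest(b)

print(same)  # True
```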
G
Some
sort
of
digest,
yeah
in
a
city
or
clc,
I
see
a
context.
We
would
also
need
a
way
to
to
identify
what
specific
version
or
version
of
the
artifact
that
we
have
run,
for
example,
a
test
for
or
if
we
want
to
to
rebuild
a
certain
integrated
artifact
based
on
some
baseline
or
whatever.
We
need
to
be
able
to
fetch
exactly
the
same
versions
to
rebuild
the
same
thing
again,
so
we
need
to
know
exactly
what
to
have
built.
E
Totally immutable, yeah. And that's the challenge when you start looking at files. The SHA or MD5 is pretty good; you can get clashes now and then, but it's very rare that two different files have the same MD5.
E
Yeah,
so
if
you
can
add
in
digest
as
part
of
the
the
attributes
for
the
other
artifacts
or
something
that
we
need
to
look
at
tracking,
I
think
that
would
be
a
good
good
addition.
B
Excellent. A good start on this work, and great discussion. I think we should move on to Vincent's presentation, because we have 20 minutes left in the meeting and I want to make sure we have time for it. We will discuss policy-driven CI/CD if there is time afterwards; otherwise we'll have to bookmark that for the next meeting. Vincent, would you like to present and do your demo? Yeah? Great.
D
So, the topic: I'm going to talk about continuous delivery indicators. It's a topic that we wanted to address for a while, and recently we started to collect metrics and to put some visualization on them using Grafana, and at the same time there was another track: other people at work started to collect the feelings of developers.
D
So
it
started
a
year
ago
with
the
current
situation
and
when
the
so
yeah.
So
all
the
work
I
did
to
collect
the
metrics
and
to
to
visualize
them
using
rafana,
has
been
put
in
open
source
and
is
being
integrated,
has
been
integrated
into
chain
insects.
So
it's
been
finalized
these
days.
D
We
think
that
it
was
very
interesting
because
it
makes
the
two
efforts
we
are
trying
to
do
like,
at
the
same
time
collect
system
metrics
to
see
how
our
application
and
cicd
platform
was
doing
like.
This
is
an
example,
for
example,
for
one
single
repository,
so
we
can
easily
see
which
version
has
been
released,
deployed
in
staging
or
production.
If
you
have
undeployed
release
in
position,
for
example,
and
for
a
specific
time
range
so,
for
example,
for
the
last
two
weeks
so
the
last
sprint
we
have
a
sprint
of
2x.
D
We
can
see
his
number
of
contributors
review,
pull
requests
mean
time
to
review,
for
example,
a
percentage
of
deployed
release
in
position
which
is
quite
low
for
his
application.
D
There
are
lots
of
things
like
release,
interval,
deployment,
interval
and
more
technical
metrics,
too,
or
indicators
such
as
moon
duration
for
the
pipeline
pipeline
failure,
and
things
like
that.
So
it's
not
finished.
We
we
have
other
indicators.
We
would
like
to
try
and
to
add,
of
course,
and,
as
I
said
at
the
same
time,
we
had
some
people
started
to
put
some
internal
framework
to
collect
developers,
productivity
and
feeling
at
the
end
of
each
sprint.
So
this
is
an
example
where
we
can
see
for
a
small
team.
For
example.
D
And so basically it's a form that splits metrics into two categories: the metrics you can collect from the system, which is what we are doing with Jenkins X, collecting events from the cluster or from the git system, like pull request events and so on; and the survey metrics that you can collect by asking questions to the developers. And then into five categories: satisfaction, performance, activity, communication, and efficiency (SPACE), and at different levels, so individual, group, and system. And we can put metrics in the different categories and levels, and that's what we'd like to do.
D
It's
still
working
for
us
continuously
improving,
and
so
what
we'd
like
to
do
is
to
to
be
able
to,
at
the
end
of
the
spring,
for
example,
to
mix
both
the
system
data
we
can
collect
and
visualize
through
katana
with
what
we,
the
survey
data
that
can
ask
developers.
D
So
it's
a
big
complex
in
our
situation,
because
we
have
teams
that
are
working
across
multiple
repositories.
We
don't
have
the
one
repository
or
one
team
per
repository.
So
it's
a
it's
more
complex
but
we'll
try
to
manage
something
so
yeah.
So
we
can
have
a
view
different
view
of
the
system
matrix.
So
this
is
for
sorry
that
one
was
for
a
single
repository.
D
...and I try to group that into the different categories of the SPACE framework. So it's not finished, it's a work in progress, just trying to put some graphs into the different categories, like performance, activity, and so on. So it's not really nice yet, but...
D
And
what
else?
Yes?
As
I
said,
we've
open
sourced
that
in
the
genson6
project,
so
it's
not
enabled
by
default
yet,
but
it's
just
easy
to
enable,
and
the
goal
is
that
people
using
shooting
stacks
can
benefit
from
these
indicators
that
will
be
automatically
collected
from
the
system
and
displayed
in
grafana,
and
they
can
add
more
and
continue
so
how
it
works.
D
Internally,
we
have
a
small
go
application,
which
is
collecting
events
from
the
humanities
cluster,
such
as
the
genetic
insects,
release
and
pipeline,
which
has
a
which
are
a
custom
resource
definition
of
kubernetes,
and
we
are
also
collecting
github
events
or
whatever
git
system
you
are
using,
such
as
the
pull
request,
events,
deployment
events,
for
example,
using
a
github
api,
which
has
a
an
api
for
deployments,
and
we
are
doing
that
through
lighthouse,
which
is
a
sub
project
of
joint
insects.
D
So
we
don't
have
to
to
listen
directly
to
the
upstream
git
system.
For
example,
we
get
the
events
flowered
in
by
a
lighthouse,
so
just
a
plug-in
for
lighthouse.
D
So
that
works
well
for
general
insects.
In
fact,
it
can
be
easily
adapted
to
a
different
system,
because
that's
the
beauty
of
kubernetes
is
that
everything
is
an
event.
You
can
watch
for
a
lot
of
things
and
when
you
are
practicing,
github
is
the
same
with
your
git
system,
so
you
can
easily
collect
all
kind
of
events,
so
our
smaller
collector
is
then
going
to
store
everything
in
a
simple
postgresql
database.
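A minimal sketch of the collector pattern Vincent describes; the real implementation is a small Go application writing to PostgreSQL, so SQLite stands in here, and the event shape is invented:

```python
import sqlite3

# Collector: receive events (forwarded from the git system) and store
# the timestamps needed later for indicator queries.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pull_requests (
    repo TEXT, number INTEGER, created_at TEXT, merged_at TEXT)""")

def on_event(event):
    """Handle one event from the webhook relay (shape is hypothetical)."""
    if event["type"] == "pull_request_merged":
        conn.execute(
            "INSERT INTO pull_requests VALUES (?, ?, ?, ?)",
            (event["repo"], event["number"],
             event["created_at"], event["merged_at"]))

on_event({"type": "pull_request_merged", "repo": "org/app", "number": 42,
          "created_at": "2021-04-01T09:00:00Z",
          "merged_at": "2021-04-01T11:30:00Z"})

rows = conn.execute("SELECT repo, number FROM pull_requests").fetchall()
print(rows)  # [('org/app', 42)]
```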
D
So,
for
example,
we
have
our
release
a
request
with
when
it
was
created,
ready
for
review,
merge
and
so
on
of
the
pipeline,
the
statues
time
to
draw,
and
so
on
and
deployments
which
environment,
which
version.
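One of the indicators mentioned here, mean time to review, could be computed from those stored timestamps roughly like this (sample timestamps are invented):

```python
from datetime import datetime

# Average delay between a pull request being ready for review and merged.
pull_requests = [
    {"ready_for_review": "2021-04-01T09:00:00", "merged": "2021-04-01T11:00:00"},
    {"ready_for_review": "2021-04-01T10:00:00", "merged": "2021-04-01T14:00:00"},
]

def mean_time_to_review_hours(prs):
    deltas = [
        datetime.fromisoformat(pr["merged"])
        - datetime.fromisoformat(pr["ready_for_review"])
        for pr in prs
    ]
    total_seconds = sum(d.total_seconds() for d in deltas)
    return total_seconds / len(deltas) / 3600

print(mean_time_to_review_hours(pull_requests))  # 3.0
```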
D
And
so
on
and
after
that,
we
can
use
the
pathfinder
to
to
extract
all
the
data
and
put
some
nice
visualization,
and
the
nice
part
is
that
you
can
mix
multiple
data
sources
with
kafana.
So
we
can
have
one
dashboard
for
an
application
where
we
can
have
a
metrics
or
indicators
for
for
the
cicd
part
of
the
application
like,
for
example,
the
time
to
review,
attempt
to
mail
time
to
release
and
so
on,
and
you
can
also
have
your
blogs
from
your
application
running
in
your
production
system.
D
Your
parameters,
metrics
from
your
application
and
so
on.
So
you
can
get
a
high
overview
of
everything
for
one
application,
which
is
very
nice,
so
yeah.
Basically,
that's
it.
So
if
you
have
questions.
D
And it's something we'll continue to do, because it's very interesting to get that kind of feedback. And what is even more interesting, of course, is to merge it with the system metrics you can collect. In the example I had, all the developers said they felt it was a very efficient sprint, so maybe we can see from the system metrics that it was efficient too.
E
Yeah,
it's
interesting,
like
you,
said,
to
merge
the
the
data,
because
it
helps
put
some
context
around
the
raw
data
that
you're
collecting
it's
very
nice.
F
Yeah, no, great initiative, and I love the dashboards. The SPACE framework is something we started looking at in the end user council, and yeah, I've read through it; it's pretty interesting stuff. Have you looked at... I don't know if you've seen the Four Keys project from Google, on the DORA metrics itself?
D
Yes,
yes,
it
was
what
we
initially
wanted
to
do,
but
we're
not
really
finished
with
all
the
metrics
you
want
to
collect,
as
the
indicators
are
going
to
display
on
them.
But,
yes,
it
was
our
initial
goal.
Yes,
exactly.
D
So, for example, we are collecting and displaying the time to review, and it's much easier for people to understand and to act on than the mean lead time, which is, like, too big; it has too many steps in it. So the mean lead time is a good indicator to track to give a high-level overview, but we need other indicators too, ones that we can act on. So yeah.
C
Sure
quick
question:
what
what
is
what
what's
the
dependency
on
on
jenkins
x,
specifically
with
this
system
that
you've
come
up
with.
D
So
we
are
creating
the
events
we
are
creating
from
the
kubernetes
cluster.
We
are
creating
a
custom
resource
definition
that
have
been
defined
by
by
argentine
so,
for
example,
the
pipeline
custom
resource
definition
or
the
release
open
definition,
but
in
fact
it
can
be
something
else.
For
example,
we
don't
have
to
track
the
pipeline
crd.
We
can
try
because
the
champion
sex
is
built
on
top
of
section.
We
can
strike
tech
pens
pipeline
run
and
it
will
be
similar
the
release.
We
can
also
watch
for
a
git
release.
Events.
C
Let
me
refine
my
question:
maybe
that
will
become
more
clear
if
I
wasn't
using
jenkins
x,
can
I
deploy
your
your
system
and
define
its
input
points
somehow
or
where,
where
its
events
are
coming
from,
if
I'm
not
using
jenkins,
can
I
do
that.
D
So yeah, you see, I'm looking at the Jenkins X API, but the code is like 100 lines to collect the pipeline. It's just a small controller that will watch for a specific event and store that in a PostgreSQL database. So it's easy to change; we could even make one that can listen to a Tekton PipelineRun or watch a Jenkins X PipelineActivity.
C
Okay,
there
are
these
events,
the
the
cloud
events
that
techton
emits
or
are
these
jenkins
accidents?
These
are
ginseng.
D
Sex
events,
events,
but
it's
built
on
top
of
tectano,
so
the
pipeline
is
a
giant
sex
pipeline
activity.
It's
a
gentle
size,
representation
of
a
pipeline
which
is
kind
of
the
equivalent
of
the
pipeline
kind
of
I.
C
Know
a
bit
more
things
in
it.
I
understand
I'm
wondering
if
it
would
be
helpful
to
it
would
be
valuable
to
adapt
this
to
the
underlying
pipeline
mechanism,
which
is
really
technon
itself,
and
this
is
something
we
are.
We
are
using
tecton
in
ebay
and
our
next
generation
cd
system
will
be
completely
based
on
that,
and
something
like
this,
this
kind
of
a
metrics
gathering
automatic
metrics
gathering
would
be
huge.
It
would
be
awesome.
Maybe
we
can
contribute
to
to
this
project.
Is
this
something?
That's?
C
Is
this
an
open
source
project
that
you
have?
Yes,
I
put
the
link
in
the
nuts.
I
have
a
link
in
the
in
the
hack,
md
document.
C
D
It
all
right,
but
yes
it
will
be.
That's.
C
Yeah, the Tekton folks have already... well, we have Antonio here, I think he can talk about that... I'm sorry, Andrea here, he can talk about that. The CloudEvents are already being emitted for Tekton; Tekton events are CloudEvents, so...
C
...we would actually be able to use it, but we would have to, how should I say, de-Jenkins-X-ize it, so that it's relying on the underlying CloudEvents versus the Jenkins X events. But it would be very helpful if that's the case. Yeah.
D
Yeah, that would be a good project. And I'm sure the other interesting part will be how to get events from the git system. In the context of Jenkins X it's easier, because we have Lighthouse. I don't know if you're familiar with Lighthouse, but it's a system that handles all the interaction between everything that happens inside the Jenkins X cluster and the external repositories, which are hosted on GitHub or Bitbucket or whatever.
C
Yeah, okay, I'll take a look at your project and see how we can maybe start to contribute to that, to make it more generic.
D
Yeah, or maybe start a new one. Maybe we can keep this one as a first iteration and create a new one that would be more generic, at the CDF level. I think that's good, and Jenkins X would be able to maybe switch to it. Yeah.
D
Yeah. I'm ready.
B
Thank you so much, fantastic presentation again. It's really interesting, pulling in the different data sources and then the wider picture you get; that's pretty fantastic. Any more questions for Vincent before we wrap up today?
B
Good, good. We're just over time, so we will wrap up today's meeting. Great presentations and discussions today. Thank you all for being here with us, and we look forward to seeing you at our next meeting, when we will be talking... Fatih, do you want to go over... Kate Stewart will be joining us, yeah.
A
Very good. Bye, thank you.