From YouTube: CDEvents Working Group, 2022-07-20
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
B
Okay, yeah, let's get started. Welcome, everyone; add yourself to the participant list. Today is July the 20th, and this is the CDEvents working group. The meeting is recorded, just so you know. I added a few updates to the agenda for today, so let's go quickly through those. Oh, hi Brett.
C
Well, not really; you can go there. There is a branch, yeah, I've already shared the link to that branch, called "skeleton project". I got a demo from my colleague who is working on this the other day. I think it needs another "t" at the end.
C
Did you see that? Yeah, this message makes sense, and it's a valid argument. I think this is really just the first step, though, for the Python SDK: just to bring it one-to-one on par with the Go SDK. I think there's a lot of things we can and should do in the Python SDK to make it more useful as a library rather than just a CLI.
C
Being a library will be the main goal of it, but we're starting with just one-to-one parity to get a good starting point. Work is going to continue actively for some time at least, so, depending on exactly what we set as the goal for the Python SDK, I think it should still be done well in time for the 0.1 release.
C
So yeah, I think that's the main update. What I hope we can show or add a little bit later is some good getting-started guides, if people want to test it out for themselves, but we're not quite there yet.
C
Yeah, I think Tariq, my colleague, just wanted to have some little program to start with. He had some things he wanted to clean up, like some dependency issues and things like that, but once it works properly, then I think we should definitely go back to the CDEvents project and work from there.
B
Thanks. All right, next, on the Spinnaker-Keptn POC: the PR is merged on the Keptn side, and they also implemented the ability to use a variable image SHA on Spinnaker's side, so that's great. On the website, I just wanted to let folks know that I set up a nightly job that picks up the latest spec and publishes it on the website, so that's all automatically updated now.
B
We have an upcoming presentation at the Open Source Summit Europe, Eric and myself; I put a link in the chat there if you're interested. It's going to be about the metrics POC.
B
In terms of the formatting work on the spec that I'm doing, aligning field names to lower camelCase and so forth: I'm currently working on the CI bucket, there's a work-in-progress PR there, and I'll just continue to go through the doc. This is mostly a look-and-feel type of change, so I hope it should be pretty straightforward. But in doing this, I realized that there are a lot of missing parts in the model; a lot of events are pretty bare, they have nothing defined in there.
B
Yeah, that's it. I don't know if there is any update on the Java SDK or on the GitHub-webhook-to-CDEvents work; I don't see Challenger or Mauricio here, so probably not.
B
So then I had a couple of points for discussion on the spec. One is about the build events, and the other one is about the CD events content mode. I don't know, shall we get into the discussion for those, or is there anything else people would like to bring up?
B
Okay, cool. So the first one is about the data model for the build event. The reason I opened this issue is that I was looking into the metrics POC. This is about producing Dora metrics out of CD events, and what we need to do there is to be able to correlate events from different stages in the code-build-deploy process to extract those Dora metrics, and to know information like:
B
How often are we deploying, and how long does it take for a change from when it's developed to when it gets into production, and so forth. In trying to build this model, I realized that the build events don't really contain anything apart from the build artifact ID right now, and I was suggesting that we could add the repository to them, and also the last change, or some kind of ID that represents the state of that repository when the build was made.
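As a rough sketch of what that proposal might look like, here is a hypothetical build-finished event subject carrying the repository and last change alongside the artifact ID. The field names (`repository`, `lastChange`) are illustrative, not from the spec:

```python
import json

# Hypothetical build-finished event: today the spec only carries the
# artifact ID; the proposal is to optionally also carry the repository
# and an ID representing the state of that repository at build time.
build_finished = {
    "context": {"type": "dev.cdevents.build.finished", "id": "build-123"},
    "subject": {
        "id": "maven-build-42",
        "content": {
            "artifactId": "pkg:maven/org.example/app@1.0.0",
            # Proposed additions (illustrative names):
            "repository": "https://github.com/example/app",
            "lastChange": "commit-sha-abc123",
        },
    },
}

print(json.dumps(build_finished["subject"]["content"], indent=2))
```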
B
So we had some discussion with Eric and Matthias on this. So I think...
B
If I understood correctly, you were advocating for not adding the repository information in the build events, and wanted to understand the reasoning.
A
So yeah, it feels a bit strange to actually have repository information in the build event if you already have a change event somewhere that carries that information, rather than duplicating it. So that was kind of the start of the discussion, but you had some questions and opinions there.
B
Okay, so you're saying: if you have the change event, that includes the repository already.
A
It sounds more logical to me to refer to the change event rather than actually duplicating this information across the two events, if that makes sense.
C
There's a question there, I don't remember exactly, but I think typically when we talk about a change we have two things: we have a change proposal, which would be like a pull request or a merge request or a review, and then we have the actual change, which would be something like a commit, or a set of commits getting merged, or something like that. We are not talking about a merge request here, right? We're talking about an actual commit or something like that, is that right?
A
I would say that should be the real change, not the pull request or merge request. Yeah, that's good. However, of course, if you're inside a PR, you might need to build there before you can actually do the merge.
C
There could be some verification or something like that, absolutely, but at least every code review system that I know of boils down to there being some commit you can point to. In GitHub or GitLab it would be the latest commit in a branch, I think, and in Gerrit it would be one commit. But yeah, yes.
C
I know we discussed that a change is just a change proposal until it gets merged, but here, if we want to build, then we actually want to build for a change. So I'm not sure if we send the appropriate events; we probably should. In the case of Gerrit, if you replace the commit that the review points to, or in the case of GitLab and GitHub, when you add another commit to the branch that the review points to, do we send change events for that, even though nothing has really changed?
D
A good point. We had a similar challenge internally, and we're currently capturing the repo push events effectively, as well as any of the PR-related change events. Our mechanism for correlating these together is in the build, not necessarily capturing the compilation or build that might happen; it's really the clone event that comes down.
D
We obviously have the last change, and that will be tied to that build; that's how we're correlating them, from those SCM events to the pipeline clone event, if that makes sense. It's a tricky one, but we didn't want to overcomplicate some of the git-related functionality; it's just trying to understand all the permutations that can happen on the branch, versus how we tie it to the asset that we're building at the end of the day.
C
In a general sense, I would say it makes sense that if we are sending all the change events, so that we are able to link to them, then it might be better to link to those change events rather than repeating the repository and what we want to call the last change, or something like that. But we should also, of course, allow for not having to send change events at all, because you might want to start sending events only at the build stage.
C
I think that should be okay as well, perhaps especially if you're talking about things like GitHub or GitLab, where you have the source code external to your own team or your own organization, but you do the builds and things internally. That happens a lot in our world, of course: we have source code in GitHub, but it gets built by Zuul internally. So the first thing that would happen is a build, and I think...
C
Maybe there are some use cases for something like what Andrea proposed as well, but it's of course always tricky: what is mandatory and what is not? And if it's not mandatory, then a system that wants to correlate that information would have to support both, I guess: both looking for a previously sent change event, and reading out the repository and commit information from the actual build events.
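A correlator supporting both options might look roughly like this. This is a minimal sketch with made-up event shapes and field names (`repository`, `changeId`), not the actual spec:

```python
def repo_for_build(build_event, change_events_by_id):
    """Resolve the repository for a build event, trying both paths:
    an embedded repository field, or a link to a previously sent
    change event that carries the repository."""
    content = build_event.get("content", {})
    # Path 1: repository embedded directly in the build event.
    if "repository" in content:
        return content["repository"]
    # Path 2: follow the reference to an earlier change event.
    change = change_events_by_id.get(content.get("changeId"))
    if change is not None:
        return change["content"]["repository"]
    return None

changes = {"chg-1": {"content": {"repository": "https://example.org/repo-a"}}}
embedded = {"content": {"repository": "https://example.org/repo-b"}}
linked = {"content": {"changeId": "chg-1"}}
```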
C
So the idea, I think, from the metrics perspective, is to be able to say what commits are included in this build that weren't included in the other build, like, what are the new commits included in this build, because that will help us calculate the lead time for changes. So if there is a change, like someone has created a commit, how long does it take until it's included in a build that is then eventually deployed, or put into production? That is the metric that we want to grab.
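The computation being described can be sketched over a toy event stream. The event types and fields here are invented for illustration, not CDEvents names:

```python
# Toy event stream: a change is merged, included in a build,
# and that build is eventually deployed.
events = [
    {"type": "change.merged", "id": "chg-1", "time": 100},
    {"type": "build.finished", "id": "build-1", "changes": ["chg-1"], "time": 160},
    {"type": "service.deployed", "build": "build-1", "time": 400},
]

def lead_times(events):
    """For each deployment, compute deploy_time - merge_time for every
    change in the deployed build (the Dora 'lead time for changes')."""
    merges = {e["id"]: e["time"] for e in events if e["type"] == "change.merged"}
    builds = {e["id"]: e for e in events if e["type"] == "build.finished"}
    out = []
    for e in events:
        if e["type"] == "service.deployed":
            build = builds[e["build"]]
            for chg in build["changes"]:
                out.append(e["time"] - merges[chg])
    return out

print(lead_times(events))  # → [300]
```

Note that this only works if the build event declares which changes it contains; that is exactly the gap being discussed.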
A
Okay, so if the build builds one commit, but then you have three more commits before that which weren't part of a build, then you'll still need to traverse back.
C
You need to go and ask the source control system, for instance by doing some sort of history diff: what are the other commits that are included in this thing that I'm building now, versus the last thing I built? So it definitely gets a bit tricky if you want to look the information up afterwards.
C
To figure it out, I would say probably we do want events. But what I'm getting at here is that I don't think it should be required to send events for every commit to get this lead time for changes working, because if we do allow the build event to say "here is what I have built", then we have enough information to be able to look that up, just not only by looking at the event store; of course, you would have to go and contact the source control system to ask additional questions.
B
Yeah, unfortunately, as you said, the APIs are different. That's something that maybe in the future we can try to drive: getting a common way to ask this question of different git hosting systems. But for now, I think the best we can do is to at least have that information from the events.
B
So that we don't require anyone who wants to implement CDEvents to have a repository with all the events stored in it; that would be an optional feature of the protocol. So that's the current status in my mind. When I think about fields that we need, not necessarily mandatory, but that we may need for building this POC, I make the assumption that we may or may not have that event repository.
B
And the second thing is that even if we do have all the events in a database, I think for routing and filtering purposes it's good to have at least a basic amount of information, like the repository, for instance. Because I might want to say "I'm interested in all the events that are specific to a certain repository", and if I have a build event, that build event does not tell me what repository this build is about.
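A routing filter like the one described only works if the events carry the field being filtered on; a minimal sketch with invented event shapes:

```python
def interested(event, repository):
    """Routing predicate: deliver the event only if it declares the
    repository we subscribed to. Events that omit the field cannot
    be matched and are dropped by this filter."""
    return event.get("content", {}).get("repository") == repository

evts = [
    {"type": "build.finished", "content": {"repository": "repo-a"}},
    {"type": "build.finished", "content": {}},  # no repository: unroutable
]
matched = [e for e in evts if interested(e, "repo-a")]
```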
A
I guess the risk of going down that route is that you might have an unlimited amount of information that you want to have for routing, because at some point you might be looking for, well, "I'm only interested in the builds on this branch", which can be that detailed, and then you want to include those ones in the build events too.
A
So yeah, I get your idea, but we should be really careful, or at least state some kind of methodology for what information to include, to know how deep we're going to go with this information, or how much we want to introduce.
B
Okay, I'm not saying that this should be mandatory, just that I think it would be good to have it covered by our data model, so that if we want to send this information, we have a common way of sending it; otherwise we can't consume it in a standard way. If this information is provided by different systems in different formats, then it's hard to consume.
A
Okay, one question: with this talk of pointing at the commits, basically we are only talking about building from source code, that is, from text, right?
A
Because it can be that you build a binary with other binaries, for example. I guess that if you're building in a language that has compiled libs and you say, okay, I want to statically link this rather than dynamically link it, then you will include this library, which is a binary, and that binary might not be stored in your git source code; it might be stored in, like, a JFrog repository or something else.
B
I don't think we are limiting the scope. At least in my experience, when you have these kinds of dependencies, you will have in your repository, or you will document, the list of your dependencies, and that list of dependencies with versions is versioned along with your source code.
A
Okay, so you're thinking that basically all the versions and so on that you're building have to be in a git commit.
B
Do you have another mechanism in mind that we should consider?
A
Well, I'm not too sure that all the builds, for example at Ericsson, where we build things together, are something that is actually stored in a git commit or something. It could be that we send, like in Eiffel we have a "composition defined" event, where we define a number of compositions, and such an event can point to other artifacts.
B
That makes sense; I'm not sure I understand, though. So when you make a build, you will have the source code and you will have binary dependencies.
A
It might not be defined inside a file; it might have been defined in some other process. We talked about baselines before: we want to define a baseline, so we have a tool that says, okay, I want to have these versions, and the baseline tool has the idea of what you've built. Then, in order to track what versions are put together...
A
We send this "composition defined" event, and then, when we build an artifact, we can point to this and say: okay, this composition defined event was what we built. But this composition that we're creating isn't necessarily stored inside a file inside a git repository.
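Sketching that flow with invented event names, loosely modeled on the composition-defined idea described here rather than on the current CDEvents spec:

```python
# A "composition defined" event enumerates the artifacts (or other
# compositions) that make up a build input; it need not correspond to
# any file committed to a git repository.
composition_defined = {
    "type": "composition.defined",
    "id": "comp-7",
    "artifacts": ["pkg:generic/libfoo@2.3", "pkg:generic/libbar@1.1"],
}

# A later artifact event can then point at the composition it was
# built from, instead of at a git commit.
artifact_created = {
    "type": "artifact.created",
    "id": "pkg:generic/app@0.9",
    "builtFrom": "comp-7",
}
```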
A
Yeah, basically we have artifacts and then we have compositions, and it can be pure binary builds, or packaging, or something.
C
The question is how we want to deal with that: whether we want to consider the action of producing or transforming source code as something other than combining already-transformed source code. I'm not saying they should be different, I'm just saying we need to take a decision, I think, on whether we see those as the same thing, like, are build and packaging the same thing, or are they different things? Maybe that's what I mean.
A
But Eric, haven't you talked about that? You have also had a situation where you can recombine a group, like a baseline or something; you do something on it, and that...
C
No, that's perfectly true. For instance, when you want to do a multi-level integration and verification flow of some stuff, then typically maybe the first thing that happens is that you compile something from source, or in our case you might get something like an already-built binary from a supplier. But when a new version of something, a new version of a binary or an artifact, is added to the system, then a whole lot of things kick off.
C
You want to include that new binary version in whatever compositions it should be part of, to be able to be tested, and then we build a lot of things that we would then call baselines, and a baseline can contain other baselines, etc. So the path from an initial code change up to something actually being considered released, which would be what we're talking about here with lead time for changes, does include multiple steps, typically of just packing things together in more and more complex combinations and then verifying that.
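The baselines-containing-baselines structure described here is naturally recursive; resolving everything a release contains might look like this (the structures are illustrative):

```python
def flatten(name, baselines):
    """Recursively resolve a baseline into the set of leaf artifacts
    it contains; baselines can contain other baselines, etc."""
    out = set()
    for item in baselines.get(name, []):
        if item in baselines:          # nested baseline
            out |= flatten(item, baselines)
        else:                          # leaf artifact
            out.add(item)
    return out

baselines = {
    "release-1": ["platform-bl", "app@1.2"],
    "platform-bl": ["libfoo@2.3", "libbar@1.1"],
}
```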
C
The way we've thought about it a little bit is: we have a concept, in something I presented a long time ago, where we say that when we build something, which in our flow is called creating a component, which is roughly the same as an artifact, then we say that a component is created from things in different repositories at different versions. And then we don't say that the repository version is necessarily a commit, or even a source code management system revision.
C
It could also be a version index in some database, or a document version or whatever, but something that has a version, so that when we go look for it, we can extract the exact same version again. It could be a composition version as well. So I think what Andrea is proposing would still be quite generally useful: to say that, okay, when we built this thing, we went over here and got this version, and that is what we included.
C
So that is what we have, and then we just say that the version doesn't have to be a commit; it can be anything that is version-handled, because that, I think, is a reasonable requirement. We can't just say "we went over here and took the latest thing", because then we don't know what the latest thing was when we did it.
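One way to read that model, with names invented here for illustration: a component carries a list of (repository, version) references where the version can be any retrievable, version-handled identifier, not just a git commit, and "latest" is explicitly disallowed:

```python
# A component built from several versioned sources; "version" may be a
# git commit, a database version index, a document version, or a
# composition version -- anything we can use to fetch the exact same
# content again.
component = {
    "name": "app-component",
    "sources": [
        {"repository": "git://example/app", "version": "sha-abc123"},
        {"repository": "db://example/config", "version": "v17"},
        {"repository": "comp://example/platform", "version": "comp-7"},
    ],
}

def reproducible(component):
    """Every source must carry an explicit, non-floating version."""
    return all(s.get("version") not in (None, "", "latest")
               for s in component["sources"])
```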
C
We are connecting it not really to the builds; we're connecting it more to the artifact. We haven't really thought about events that much, but I guess for us an artifact is a file, and a file can exist in many different places, so a single component, as we would call it, can have multiple artifacts. But if we talk about the component, that's not an artifact as such.
C
The component is the thing that was actually built, and that is the thing from which we have references to multiple repositories and multiple versions. So it would be the final result of the build step, or the transform step, or whatever you want to call it. I guess connecting it to the artifact would be more true.
C
And I guess for the metric it doesn't really matter that much, because what is being deployed is not the build job itself; it's the result of the build job. As long as we can track it from there, that should be fine.
A
Yeah, and you have to stop me if I'm going too deep or if you don't follow me, but the question is: if you want your metrics to trace, for example, "this is a deployment here, and I want to see what commits were part of it", do you actually want to need to traverse and look at build jobs then?
A
Yeah, I think I got you. There was a bit of, we were breaking up a little bit, but I guess you were agreeing with me there.
A
So yeah, if you want more explanation of it, you could probably take a look: I have posted a picture in one of the issues here, if you want some more picture-wise explanation of what I'm talking about, for the rest of you, that is.
B
Yeah, I mean, for the initial POC at least, and for the events that we have now, we don't have a concept of composition yet, and I was hoping in the initial work to avoid getting into the idea of composition, because it adds a lot of complexity, and the amount of time and effort that we have to put into this is limited.
B
So I just wanted to start with a simple use case, the assumption of a single repo where we're doing builds, and then of course later add more sophisticated use cases.
B
Yeah, I'm thinking, sorry, so you're saying in the artifact packaged event and artifact published event, right?
A
Yeah, so for example the artifact packaged event would have this information of what source code and so on it contained; that would be more logical, I think, than the build event having it.
B
I'm not sure if it would be more logical or not, but I think I tend to agree that, specifically for the POC, what we need is information more about the artifact being produced rather than builds being executed. There's probably a good degree of overlap between the build and the artifact events, but I guess different events might be more interesting for different use cases.
B
So, say someone is more interested in the pipeline type of view, then the build event is probably more relevant; if, like in our case, you're interested in the artifact, it's the artifact event that is more relevant.
D
Okay, just from my point of view here: are we assuming that artifacts could be built outside of the build pipeline, to your point? Because for me, the way we'd be looking at this is that there's a deep correlation between the SCM events and the clone events that are happening within the pipeline, to tie that lineage, which would then be tied to the artifact SHA being produced in the pipeline, to help us with this correlation.
D
No, it's because I sort of tend to agree with Andrea on this, the simple use case. At the end of the day, multiple binaries could theoretically be built, and the artifact that's been built could be consumed by multiple things, or it could be a simple thing, right; it's just really a question of how we want to look at that composition model.
D
Like the point Andrea made before, the simple use case, assuming that everything would be coming from the build process, that any artifact would be tied back to the build process that we could easily correlate to the SCM: that just makes sense to me, because otherwise, with the infinite number of possibilities, it gets hard.
A
I guess what me and Eric talked about lastly was that we do want to correlate, kind of, the clone events, or maybe the SCM and so on, but we don't know if we want to actually involve the pipeline. If you're looking at a deployment, are you actually interested in what pipeline this was run in, or are you more interested in what SCM events, or what SCM git commit, were tied to that one? So that was a discussion.
A
Just to spend a couple of seconds here: this is an Eiffel image, where we have the SCS, that's the source change submitted, and the green CD is the composition defined that I talked about, which we don't need to deal with. ArtC stands for artifact created, which is like artifact packaged, and then ArtP is artifact published.
A
What I want here, my idea, is that I want to be able to link the published events back to the source change without going through all the blue ones, which can be the pipelines, which can be tests and so on. That's why I wanted to have a quick path, and that was my reason or idea behind maybe not including this in the build event, but having it in artifact packaged instead.
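The "quick path" idea can be sketched as follows: the artifact events carry a direct reference to the source change, so a consumer does not have to walk the intermediate pipeline and test events. All event and field names here (`cause`, `sourceChange`, the IDs) are invented for illustration:

```python
events = {
    "scs-1":  {"type": "source.change.submitted"},
    "pipe-1": {"type": "pipeline.run.finished", "cause": "scs-1"},
    "artc-1": {"type": "artifact.packaged", "cause": "pipe-1",
               "sourceChange": "scs-1"},   # proposed direct link
    "artp-1": {"type": "artifact.published", "cause": "artc-1",
               "sourceChange": "scs-1"},
}

def source_change(event_id, events):
    """Quick path: follow the direct sourceChange link if present,
    otherwise fall back to traversing the 'cause' chain."""
    e = events[event_id]
    if "sourceChange" in e:
        return e["sourceChange"]
    if "cause" in e:
        return source_change(e["cause"], events)
    return event_id
```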
B
Yeah, I see what you mean. But if you are interested in using the build event, in my mind I associate it with what Jimmy called the clone event, so it's associated with the beginning of the pipeline that produced the artifact.
B
Then you would need that information in there as well, right? So it depends which route you go through. And it's fine, it makes sense to have that information in the artifact events as well, but I don't see why it shouldn't be in the build event too. But yeah, I don't know.
A
I mean, I get your point, if we only have a build event that represents builds. But then there might be all the other things that you might do; in that PR, along with all the steps, there were a bunch of other things that you might do that I guess would also maybe send these build events, but without the artifact, of course. So yeah.
B
So maybe I can make a proposal for extending the artifact event.
B
Or both the build and artifact events, and then we can discuss that on that PR, if that makes sense.
B
I would like to be able to move forward with at least a simple model, and I understand that there are a lot of complications and subtleties, like we discussed in the beginning, like the difference between the change and a specific commit.
B
What do we want to track, and all the possible composition models, and so forth. But I would like to at least have a simple model to begin with that we can experiment with in the POC, and then, as we make the application that we use in the POC more complex, maybe have it depend on multiple repositories or use more external dependencies, we can adapt the model to fit that.
A
Yeah, that sounds very reasonable. My question, maybe a little bit provocative: do you actually need build events, if you want something very simple?
B
Yeah, I don't know if we need the build event for the POC; I'll go ahead and check that. But we do have them in the spec for now.
B
I'll think a bit about whether we need them at all; we may not, right, if it's just for the sake of the metrics. But I need to double-check.
A
Sorry if I'm not getting you there, I'm just trying to help, because I do understand your idea of making things as simple as possible for a POC, to get something running. So that was the idea of just using the artifact events, just to get something through.
B
With the repository and the source version, or the version of the source, then I guess in that case we don't need the build event.
B
Okay, well, I think we have a way forward on this.
B
So we could use some of the artifact events for the metrics POC; I will propose some changes for the artifacts model then, so that we can use it there.
B
So there's another point of discussion that I wanted to bring up. I don't know if we have enough time to discuss it, but I think we already covered a bit of it.
B
The information is sent as JSON, and that JSON then embeds any vendor content that needs to be sent. I think it would make sense to have a similar approach for CD events, assuming HTTP binary mode for CloudEvents, which I think is what we always considered: having the CloudEvents information in HTTP headers. Then we could consider both having a structured mode, where we have things like "meta" or "subject" expressed as JSON within the payload of the message, plus a "data" field where we embed vendor data; and, as an alternative, a binary mode, where we would have things like the CD events meta, the subject, and links when we introduce them, everything in HTTP headers.
B
I tried to draft an example of how it could look, with the vendor content added as the payload. I think we have previously discussed this, and there was some agreement on having both of these options.
B
The advantage of the binary mode is especially for places where there is a pre-existing format. Even in terms of projects, if you look at Keptn or Tekton or Jenkins, they already have some CloudEvents with their own format. So in binary mode it would be possible to have the CD event information added in HTTP headers and preserve the original payload, which means that on the receiving side, clients that do support CloudEvents will be able to benefit from the CD events information.
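Roughly, the two renderings over HTTP would differ as follows. This sketch only illustrates the general CloudEvents binary-vs-structured idea; the exact header names and attributes for a CDEvents binding are not settled here:

```python
import json

cd_meta = {"type": "dev.cdevents.artifact.published", "id": "evt-1"}
tool_payload = {"vendor": "tool-x", "details": {"image": "app:1.0"}}

# Binary mode: the event metadata travels in HTTP headers (in the style
# of CloudEvents ce-* headers), and the original tool payload is
# preserved verbatim as the body.
binary_headers = {"ce-id": cd_meta["id"], "ce-type": cd_meta["type"]}
binary_body = json.dumps(tool_payload)

# Structured mode: everything is one JSON document; the vendor content
# is embedded under a "data" field.
structured_body = json.dumps({**cd_meta, "data": tool_payload})
```

In binary mode, a receiver that only understands the tool's native format can ignore the headers and keep working, which is what makes it attractive for transitioning existing projects.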
A
So one question here, maybe I'm wrong, but isn't the idea of using CloudEvents that you don't have to care about the transport layer? It struck me now when we were talking about HTTP, but maybe someone prefers Kafka or AMQP. Should we really talk about each of those modes, and talk about this in our spec?
A
Yeah, just so we don't go too deep into something, whereas I guess the idea with CloudEvents is that the transport part of it is agnostic; you don't need to care about it.
A
It should be really simple to switch transport protocols; we shouldn't need to care about that. That's why we want to be able to rely on CloudEvents, so we can take our CD events and send them regardless of whether it's HTTP or Kafka or AMQP.
B
Yes, so this is not something that changes the data model of CD events or anything. But even if you're agnostic of the transport in the spec, you still need to define a binding to that transport, right? So I think you still need to say: okay, when you're sending a CD event on top of CloudEvents, how do you do that?
B
Because you actually need to send these events, you need to define those formats, and even CloudEvents supports multiple modes, so we need to define how CD events map onto those. And when we do that, we can do it in a way that is more useful for, like, transitioning into CD events; that's what this is about.
B
If I'm trying to implement CD events in Tekton, for instance, or where we try to use existing events from Keptn when doing the POC, having this approach is, I think, very much useful. In fact, it's kind of what the Go SDK does right now: it stores all the CD events information as CloudEvents extensions.
B
But I think we should formalize that and have these two options; or we could say we just do the binary version, if you think there's no value in the structured version. In my opinion, the binary version is very handy and it's good for transition; the structured version gives more flexibility in the data that we can transport, and it's also probably more readable. But yeah, anyway.
B
I'll try to make a proposal about this, and if we agree that we want to do these two formats, we need to define in the binding the format of the HTTP headers, so how they would look in practice.
A
Yeah, sure, but if we need to define those bindings, we should probably do not only HTTP but some other one too, maybe something that's rather different and doesn't look like HTTP, to check that it can be translated easily. But I think it's a good idea to define it.