From YouTube: CDF - SIG Events Meeting - 2021.11.16
Description: For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
B: We had two items to discuss: one from a previous meeting and one from the last SIG meeting. We can start with the SIG meeting one. At the SIG meeting it was more or less decided that sometime in the January/February time frame we will make a push outwards from the SIG, since it was identified as a focus area by the Linux Foundation.

B: Steve was there, he knows more, but the question I think we should discuss a bit is: what do we want to have in place when we make that push? Let's try to define something reasonable, perhaps more than what we have today, and set January/February as the deadline for that increased scope.

A: Let me back up. My vision of how Events is going to be successful is if we are able to get concrete use cases on how people are using tools and how they are trying to connect them together.

A: One thing that's happened over the last ten years is that not everybody has standardized on a single tool for CI/CD across the whole organization, from dev to production. So there's going to be a mix of tools, and I think we should reach out. Maybe what we do in January/February is try to get these use cases out there and put a box around them.

A: So, like you said, we don't want the scope to be too huge, but I also think we need to do some fact finding to figure out where we need to head next. If we're able to get three or four good use cases, we should be able to see the pattern between them and where we need to focus: a core set of events to build out from. That's just my view of how we should approach this.

A: If we don't, I think we'll just create a bunch of extra work for ourselves, put a bunch of stuff in the specification that may not be quite right, then go off and code it, waste time, and have to go back and rework it. That's what I'm worried about.

B: When we pick those use cases, we should probably pick three or four quite different ones. But also, I believe there are projects that aim to reach some sort of agnosticism, or whatever you want to call it, towards several event producers; Argo would be one example. That way we don't just say "here is something you can do with our spec" and have people's response be "well..."

A: And the same thing with Keptn: Keptn has their set of core events out there, and Jenkins is coming along with theirs. So I'm thinking of it from an end user's perspective, where I want to develop a pipeline and I want to hook tools together.

A: Something along those lines, where the Docker build may be happening with Jenkins, and the same with the signature; the deploy may be happening with Argo; and then the approval is being done with Keptn. We're going to have these mixes.

A: One of our goals in Events is to hook together dispersed tools in a common language.

A: And that's just a small example. I know our PoC already does some of this; I'm just saying we can take it to the next level by introducing, say, Snyk as part of the security scan, or policy management for how to deal with the pipeline.

A: So there's a whole wide range of things in the use case, but I think we need to find out what people are doing. Tracy, you mentioned a kind of competition that you've talked about in the past: bring us your pipeline, submit your pipeline, so we can see what you have going on. That's kind of what...

A: Exactly. And if we weren't able to find people, we could reach inside our own organizations and come up with something, but it would be interesting to find out. I think Fidelity is coming on board with the CD Foundation.

E: Well, I'm not sure if Capital One is still around; I don't know who the end users are now. I know that's a focus for Tracy Miranda, she wants to bring on more and more end users, but I'm not sure who they are. I know Fidelity is, because they are, or maybe they're, using Screwdriver.

A: Yeah, I don't know, that's what I mean. I know Cara's talking with Fidelity, because they want to bring a couple of their open source projects into the CD Foundation. So they may be a new active end user slash contributor role.

A: And I think we should, like you're saying, put a box around it. Some of the Jenkins pipelines that people might bring to us can be overly complicated, with literally 30 different steps in the pipeline.

A: They're getting to a point where they're hard to maintain. That would be a great pipeline for us to look at and say: we're going to pick X, Y and Z out of this and focus on that, because X, Y and Z overlap with another company's pipeline, and we can focus on those common pieces when extending the vocabulary.

B: So in these use cases, to what extent would we get away with just drawing sequence diagrams and saying "here, this message would be sent and picked up for this," and to what extent do we actually have to implement support in these various tools for sending or receiving CDEvents messages? A combination of the two, I guess.

A: Yeah, my view would be that we would, like you said, diagram it out. Once we diagram it out, that could be part of the presentation and the outreach that we do in January or February: showing, here are five pipelines that we've gathered, just as a small sample.

A: Here's where our current vocabulary set is, and here's our proposed next step, kind of like a roadmap for the next iteration of the vocabulary, which is going to tackle these parts of the pipeline. Then, from there, depending on the tools and how much control we have over them...

A: We may or may not be able to do some implementation around it, and we may need to reach out to folks like Screwdriver, saying: one of your largest customers, say Yahoo, is doing this, and we really want you to adopt some events around it, and see if we can get that project to move along with the implementation of that part of the vocabulary.

A: Now, things that are closer to home, like Tekton and Keptn, and maybe Argo, those events are a little bit closer to home; we can have a little more control over them and do more of the coding around those ourselves, and on the Jenkins side as part of that process.

A: So I think, like you said, Eric, there's going to be a mix. Some of it will be: this is the new vocabulary, this is what we need these other projects to start implementing, and this is how you do it. And then things that we can do ourselves, we can tackle on our own and expand out the SDK around that.

B: Yes. Does this approach sound okay to everyone else: find three or four use cases, diagram out the flows from there, and work on the vocabulary based on that?

D: And of course it would make a better business case for us when we see that these are things that users are actually requesting and would actually use. Not just inventing some events because we want to; we need to have it tied to some real-world use cases that people will use.

F: Yes, I agree: if you can get some real-life pipelines or use cases, that's for sure a great plus. I think there are still some areas, even from the PoC, that we can work on. For the PoC, to get Tekton and Keptn working together we had to do some hacks to get the right context traveling around; otherwise they wouldn't really understand each other.

A: Oh yeah, definitely. I don't think we should stop working and wait for use cases to come in; definitely keep on building out from the knowledge that we have.

F: Another area that I think is important to cover, or at least have some story around, is this: the main objection I get to events is visibility of the overall workflow. If you have a workflow which is distributed, with one piece sending an event and another piece receiving it, how do you get the overall view?

F: And with supply chain security being such a focus of attention now, what about attestations? If your pipeline is divided across multiple parts, how do you get the attestation covering everything that happened?

F: Across the different platforms, that is. I think actually having a common protocol can help with both, because it's already the case that the pipeline is spread across platforms: we use GitHub or GitLab for SCM, then we have something for building and often something else for deploying. Having common semantics can help us build better visibility into the pipeline and perhaps collate attestations from different platforms. But we need to...

F: I think we need a stronger, or clearer, story around that. In the PoC we had this box at the bottom about monitoring, viewing pipelines and so forth, and I think it would be good to have a good story in that part as well. That's an important part for me.

A: Is there an existing project out there that would be a dashboard for CloudEvents?

F: Nothing serious that I could find. For the PoC I used the CloudEvents Player, which just shows you the list of CloudEvents, but it would be nice to have something that gets the timestamps and the type of activity.

F: If you have links to SBOMs or attestations, you could have a tool that goes and fetches all the attestations for the different parts and collates them together, for instance. These are kind of random ideas, maybe not well thought through, but I think it's an area that's important to explore. Yeah.

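The collation tool described could be sketched roughly like this: a pass over a bag of events that groups SBOM and attestation links by the artifact they describe. This is a minimal illustration, not anything from the PoC; the event shape (dicts with `artifact`, `attestation_url` and `sbom_url` fields) is invented for the example.

```python
# Sketch: collate attestation/SBOM links scattered across events from
# different platforms into one report, keyed by the artifact they describe.
# The event fields used here are hypothetical, not part of any spec.

def collate_attestations(events):
    report = {}
    for event in events:
        artifact = event.get("artifact")
        if artifact is None:
            continue  # event does not describe an artifact; skip it
        entry = report.setdefault(artifact, {"attestations": [], "sboms": []})
        if "attestation_url" in event:
            entry["attestations"].append(event["attestation_url"])
        if "sbom_url" in event:
            entry["sboms"].append(event["sbom_url"])
    return report

events = [
    {"artifact": "registry/app:1.2", "attestation_url": "https://ci.example/att/1"},
    {"artifact": "registry/app:1.2", "sbom_url": "https://ci.example/sbom/1"},
    {"artifact": "registry/other:3.0", "attestation_url": "https://cd.example/att/9"},
]
print(collate_attestations(events))
```

The point of the sketch is only that, once events carry links rather than payloads, collation across platforms is a simple grouping step.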
A: I wonder if Keptn or Argo have something we could use. I know they don't talk CloudEvents, but I wonder if we can grab parts of their UI, the visualization part, strip out their events and plug in ours.

F: Yeah, you can see the sequence that you define on the Keptn side. The thing with Keptn is that you define your sequence of environments and the sequence of things that should happen in your pipeline, and then Keptn delegates the actual execution to external tools, and it does that through events. In the PoC it sends events, then Tekton runs the deployment, for instance, then Keptn gets the event back, and on the UI it marks...

F: ..."okay, deployment is done," and it gives you the SHA of the image that was used for the deployment and everything. So in a sense that's correct, but it's also true that it's a bit of cheating with Keptn, because you need to know the stages of your deployment in advance. You cannot just take a list of events and visualize what you have without that.

F: Right, and if I think of the Fedora use case that was presented to this group, where they use events and have hundreds of packages running their CI and then sending messages to a central system saying "okay, it went well": you cannot define in advance what you expect, because then every package would have to go to the central system and say "oh, I'm also going to run this test now," and that kind of breaks the purpose.

A: Well, one idea that I have, and I can't remember which tool does this, it may be Dynatrace: we could do, like you said, the timestamp, and have a sliding time scale on the horizontal axis, and then on the vertical axis the different events that are occurring. So you can look at it and see: at this time, this triggered at this level. Just an idea.

A: I'd have to look at the graphing packages and see if we could make that data make sense. We do a time scale and connect the boxes as we go, saying this event from GitHub was sent over to Keptn, for example: at 12 o'clock was the GitHub event, then 12:01 was when Keptn received it, and we do a sliding scale of a moving graph.

A: So you can see how things are evolving over time. A combination of those may be an easy way to hook together a visualization that shouldn't take a huge amount of coding, because there are a lot of graphing packages out there that do a lot of the layout work; we just have to get the data in correctly.

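The "get the data in correctly" step mostly comes down to turning events into (timestamp, lane, label) rows that any timeline widget can consume: time on the horizontal axis, one lane per event source on the vertical. A minimal sketch, with invented event fields (`time`, `source`, `type`):

```python
# Sketch: prepare timestamped events for a timeline view (horizontal = time,
# vertical = one lane per event source). The event fields are hypothetical.
from datetime import datetime

events = [
    {"time": "2021-11-16T12:01:00", "source": "keptn", "type": "deployment.started"},
    {"time": "2021-11-16T12:00:00", "source": "github", "type": "push"},
    {"time": "2021-11-16T12:05:30", "source": "keptn", "type": "deployment.finished"},
]

lanes = sorted({e["source"] for e in events})  # vertical axis: one lane per source
rows = sorted(
    (datetime.fromisoformat(e["time"]), lanes.index(e["source"]), e["type"])
    for e in events
)  # chronological order along the horizontal axis

for when, lane, kind in rows:
    print(when.time(), lanes[lane], kind)
```

Rows in this shape can then be fed to whatever graphing library does the layout, as discussed below, with no further transformation.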
F: And because the events come from different systems, we need some rules in the protocol so that, if you receive an event and then send an event, you transport some of the information from the source event into the new event you're creating, so that you can have this correlation.

F: We don't have that right now, but it's something where we'd love to design some kind of mechanism. I was looking at how distributed tracing works...

F: ...a bit, and we might be able to use a similar type of approach instead of reinventing the wheel. I think Eiffel has support for ordering of events, but I think those are generated by a single system, so I don't know how that works in a distributed system with different platforms.

C: In Eiffel we link between events. That would be the linking thing: if you have one event and then a next event, you give the new one a link to the previous event, so you can traverse the links.

A: Right, but to do that, the events have to be persisted somewhere, right?

C: Yeah, if you're going to search backwards, of course. But if you listen to all events right away, then when you receive an event you already know: okay...

A: Yeah, so if we have a master listener that's listening to all events, then you can trace back through the link chain. And that master listener can decide whether it wants to persist all the events or just cache them in memory.

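The master-listener idea combined with Eiffel-style links could look roughly like this: every event carries the id of its predecessor, the listener caches everything it sees, and tracing back is just a walk over the cached links. All names and event shapes here are invented for the sketch.

```python
# Sketch: a listener that caches every event it sees and can trace an
# event's ancestry through "link" references to previous events.
class MasterListener:
    def __init__(self):
        self.seen = {}  # event id -> event

    def receive(self, event):
        self.seen[event["id"]] = event

    def history(self, event_id):
        """Walk the link chain backwards; newest event first."""
        chain = []
        current = self.seen.get(event_id)
        while current is not None:
            chain.append(current)
            current = self.seen.get(current.get("link"))
        return chain

listener = MasterListener()
listener.receive({"id": "e1", "type": "change.merged"})
listener.receive({"id": "e2", "type": "build.finished", "link": "e1"})
listener.receive({"id": "e3", "type": "deploy.finished", "link": "e2"})
print([e["id"] for e in listener.history("e3")])  # e3, then e2, then e1
```

Whether `seen` is an in-memory dict or a database is exactly the persist-or-cache decision mentioned above; the traversal logic is the same either way.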
C: And when it comes to graphing libraries, I pasted it in the chat: there is something called vis.js.

C: Yeah, there are examples of doing it, and it seems quite simple to actually get something up and show one of those. That's just an idea, but that is a graph you're drawing, and then we need relationships between events in some kind of way.

A: Yeah, and the other one I use is D3. vis.js and D3 are the two primary open source ones that I've used.

A: We deal with graphing all the time in DeployHub and Ortelius, so those are the two packages we use. Both of those libraries, like most of these libraries, are pretty simple to use; it's just a matter of getting the data into them, and that's usually pretty easy.

A: So it sounds like we have our work cut out in three, well, four areas. One is the visualization of events and event history.

A: Then the security around events between dispersed systems; extending and rounding out the PoC vocabulary as part of that; and there's a fourth one I just can't think of off the top of my head. But that's kind of what I see as our to-do list.

A: I think we should... let me put it in: I finished, or at least I've got a draft going for, the proposal.

A: I think next week is a CDF TOC meeting again, so I'd like to present the proposal to the CDF next week. I don't think we'll get much pushback, so I think it's more of a formality for Mike and...

D: I need to drop off now, unfortunately, for a private appointment. Thanks for the discussion today, and I hope we get somewhere more during the next half hour.

A: So, on the PoC: Andre, you're saying that we have some events there that we need to clean up. Is that something that we need to discuss, or is it more of a coding implementation thing?

F: No, I think we need to agree on a mechanism by which we can transport context that a system might need, if we want to do that. What I mean is that in the PoC, because Keptn is acting as an orchestrator, it sets its context ID, then Tekton receives it, and then Keptn expects to have it back in the events, right, to close the loop and do the right visualization.

F: And for that to work we have some specific code: some code on the Keptn adapter that takes the context and puts it into the event, then some code on the Tekton side that gets this information from the body of the event, stores it, and sends it back at the end of the pipeline. And then on the receiving side of Keptn, again, we need to get this information out of the event and into the place where Keptn expects it.

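The adapter code described amounts to carrying an orchestrator's context through a worker's event round trip. A minimal sketch of that hand-off, using plain dicts shaped loosely like CloudEvents; `shkeptncontext` is the extension Keptn uses for its context id, but the function names, event types, and everything else here are invented for illustration:

```python
# Sketch: carry an orchestrator's context id through a worker's events so
# the orchestrator can match the reply and close the loop. Events are
# plain dicts; apart from "shkeptncontext" the names are hypothetical.

def start_pipeline(trigger_event):
    """Worker side: remember the inbound context for the whole run."""
    return {"context": trigger_event.get("shkeptncontext"),
            "subject": trigger_event.get("subject")}

def finish_pipeline(run, result):
    """Worker side: echo the stored context back in the reply event."""
    return {
        "type": "worker.pipeline.finished",  # invented type string
        "subject": run["subject"],
        "shkeptncontext": run["context"],    # the crucial echo
        "data": result,
    }

inbound = {"type": "orchestrator.task.triggered", "subject": "svc-a",
           "shkeptncontext": "ctx-1234"}
run = start_pipeline(inbound)
outbound = finish_pipeline(run, {"status": "succeeded"})
print(outbound["shkeptncontext"])  # same id the orchestrator sent
```

The design question raised next in the discussion is exactly whether this copy-through should be ad hoc adapter code, as in the PoC, or a first-class rule of the protocol.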
F: Right, so one part of the solution could be on the Keptn side: if Keptn spoke those events natively, they would set the context in the right place and read it back from the right place. But we need a place to store this kind of information. It could be an extension field where applications put some data that they need, but we also need to be able to express to the receiving side: please forward this data...

F: ...when you send other messages. And I don't know, it's an extra use case. It's useful for the PoC, but I don't know if being able to carry this application-specific context back is a use case we want to focus on from the beginning. For sure I would like to be able to have a history, like we were discussing earlier, so that we can build the sequence, the graph of events.

A: Yeah. I was talking to one of the women who ran Google Docs, and she was explaining Google's event processing. One of the things Google did was add contextual data to the event, and if you didn't need it, you ignored it. So it's kind of an ignore process.

A: But what ended up happening was that you had the contextual data that you needed to get back, like you said, to close the loop. And I'm wondering if we need to implement that mixed with the history chain, possibly, where each event would add its own contextual data element to the history of the event.

A: As part of that, you can go back and close the loop. Because I can see, when we start crossing tools... just for example: say you have Jenkins and Keptn and Tekton, Jenkins started the build, and after going through a couple of other events you want to send something back to Jenkins. You're going to need the build ID, or some contextual reference, from way back at the start with Jenkins.

A: And look at even Jira: if Jira was involved in the process, you have a Jira ticket and you want to update a status on the Jira side. You're going to have this contextual stuff across multiple tools, and each of them is going to have their own references.

C: So you would add that contextual information in the event. But if you have Jira, Jenkins and so on, with all of those contexts being part of the message, it will grow larger and...

A: I think you're going to have to, if you don't have an event listener that you can go ask questions of. So that's one way to do it. The other way is to have an event listener that's monitoring all the events and keeping track of them.

A: So when I need to close the loop for Keptn, I can say: give me the history of this event; I get a list back and say, oh, I now need to focus on the Keptn update, for example, and I can get to it at that layer. So I think we have two choices.

A: Yeah, and if you're going to traverse backwards using the links: say my event right now has a link to the previous one, from GitHub. If I want to traverse back to GitHub, and the information I need is in GitHub's previous event, where is that information persisted? Because that transaction has already ended.

A
So
there's
you
know,
even
if
we
do
the
event
chain,
we
have
to
be
able
to
go.
Ask
something
about
the
information
about.
You
know
going
all
the
way
back
through
history.
Does
that
make
sense.
C: Yeah, I think I get what you're talking about, but then I guess it's about how much information we want to put in the events as such. When we talked about, for example, branches being created and pull requests being updated and so on: maybe all of those events could contain all the information. Then, of course, there will be cases where they don't, but maybe we cover the majority of use cases.

A: Well, it's about solving the problem Andre was talking about: being able to have the context ID, or whatever it was called, so Keptn can close the loop.

A: Even though it's a small amount of information, we have to have access to it, as part of the history chain, to be able to build that closing-the-loop process. So either we pass it along as payload data, or we go from previous event to previous event until we get back to where we came from, to get that Keptn context ID.

C: One thing, when it comes to adding information to events, staying on that: say you have a pretty big CI engine with a bunch of different pipelines that are converging. For example, you have a hundred different components that build up to one big component.

C: Would you then aggregate all of this information as you go forward? In that case it would be quite a lot of information.

A: Yes, I totally agree that the payload could get very big at that point. Like you said, CloudEvents tries to keep payloads small, so you would just reference your previous event; and then, like I said, in order to reference anything that happened before, you have to persist the events somewhere so you can find those payload bodies for every event. Which comes back to what you were saying, Andre, about visualization: if you have the payloads persisted, you can do some really nice graphing with really nice details of what's happening, because you have access to all of it.

A
That
we
need,
we
should
take
to
make
to
create
a
solution
where
we
keep
the
cloud
event.
Payload
small
is
the
way
we
should
focus,
or
should
we
let
the
the
payload
grow.
F: Yeah, I'm not sure. I'm wondering whether we could have multiple options. In certain use cases you don't need to look back.

F: You don't really need to go back to the service and say "yes, I sent this notification and I did that," because that's not what you're trying to do. For those kinds of scenarios, having small, light events is enough; you might still need some context, but not the entire chain. With some context you can still find out afterwards that certain notifications were sent for this event.

F: If you want to know that afterwards, you don't need to add the whole story in every event. So maybe it could be an option in the protocol: say, if you want to use the protocol for a kind of asynchronous remote call and response, then you need to enable the full-sequence history.

A: Yeah, because it does make it interesting when you look at where the event chain starts. Does it start at GitHub? Or could it be something further back, like in Jira, which would be a user story getting marked for something?

A: As you come back to close the unit test event, the message being broadcast back would have the master event ID in it as well, and then they can look up internally and say: oh, I need to close the loop on this unit test event. Does that make sense, where you have a way to keep track of...

A: ...events, and as you do a broadcast-and-listen type of world, you can look up internally whether you're looking to close something from another event coming back in.

F: Yeah, I mean, you could have a notification, in Slack say, that triggers an engineer to say "oh, there is a problem here, and I'm going to create an issue for that." There is always a chain of events, but I don't think we can find the real beginning.

F: Maybe the fact of having the loop and closing the loop is a kind of separate problem. So we could say we have two different ways of using events. One is the decoupled way, where you send the event and then you kind of forget about it...

F: ...fire and forget, if you will. And we could consider the other a different way of using the protocol, with a dedicated design for it. It seems to be a rather complex problem, so I'm not sure; maybe we could try to set up, for the next PoC, a scenario where we don't hit that particular issue, and focus on that, because it's maybe the...

C: When we're having this discussion, I don't know, I'm looking back to the idea of having pipelines described.

A: You know where you're starting, so you can always grab and persist information from your starting point all the way through to completion. So it's very linear.

A: And I think that's kind of what has been achieved in the PoC already, but I think we do need to look at more of a dynamic event model.

C: So maybe we need to... would it be helpful if we drew up the scenarios? Then we have a bunch of scenarios we can take a look at, because then people can have questions around them, or grasp it a little bit easier.

C: Yeah, that sounds like a good one, because when I'm hearing this about going back to the source, I'm thinking: okay, what happens if you have two different components? Say you need two libraries for building your project: which one is the start in that case? And so on. There are a lot of questions, so maybe it's easier to see when you have a picture in front of you.

A: Yeah, and I think the scenario you're talking about is: you have a hundred microservices that are all being built independently, each with its own event world happening, but then you need to consolidate, to have all of those events funneled into one before you can move on to the next, bigger, consolidated event. For example, you do all these builds in parallel and then you want to go ahead and deploy.

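The fan-in scenario just described can be sketched as an aggregator that waits until it has seen a finished event for every expected component before emitting one consolidated event. Everything here (class name, event shapes, type strings) is invented for illustration:

```python
# Sketch: funnel many component build events into one consolidated event
# (fan-in) before the next pipeline stage, e.g. a deployment, can start.
class BuildAggregator:
    def __init__(self, expected_components):
        self.expected = set(expected_components)
        self.finished = {}  # component name -> built artifact

    def receive(self, event):
        """Record a component's build; return the consolidated event once
        every expected component has reported in, otherwise None."""
        self.finished[event["component"]] = event["artifact"]
        if set(self.finished) >= self.expected:
            return {"type": "application.builds.finished",  # invented type
                    "artifacts": dict(self.finished)}
        return None

agg = BuildAggregator(["svc-a", "svc-b"])
first = agg.receive({"component": "svc-a", "artifact": "svc-a:1.0"})   # None
done = agg.receive({"component": "svc-b", "artifact": "svc-b:2.1"})    # fires
print(first, done["artifacts"])
```

Note the aggregator has to know the expected component set in advance, which is the same "define the stages up front" limitation raised earlier about Keptn; a dynamic event model would need a different trigger for the fan-in.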
A: Yeah, and that's one of the things we're doing on the Ortelius side: looking at those relationships and tracking those dependencies at that level.

A: So we'll track them; Ortelius is persisting those relationships, like you said, at that level, and then also at a higher level. Say you take a Node.js Docker image from npm: it's going to have its thousand dependencies in it, but then it's going to create a service.

A: At the current time we don't have an event listener letting us know that everything has been built and packaged, but that's one of the things that, from the Ortelius side, the events we're going to be looking at will be around as well.

A: ...how Keptn, Argo and Ortelius are working together to perform a GitOps model for deployment and for tracking what's been deployed where.

A: And we want to do that event-based as well, to make sure that the vocabulary is there for one of those use cases. I can bring that in; we're about 90 percent done with that process.

A: I can do one, yeah, I can do that. I don't know if we want to do it; let's see where we're at next week, whether we do that one or, if we're into the vocabulary, one either way.

A: Yeah, put your thinking caps on about the event chains.