For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
C: Where are you based, in Wales? Okay, that's all for Wales, okay.
D: But you said 22 degrees, right? Celsius? Yeah, okay — we have had 32 here in Sweden.
D: And I guess we're getting something like that soon, also this week.
A: So I only had two things: one is what you mentioned in the chat.
A: Okay, shall we get started with the point you asked? Yeah.
C: So yeah, I think, for the spec and everything, I was wondering whether we put this into a dedicated project, because right now it's living with the working group, but it's evolving as a spec and other things. So I think it becomes a project, and if you would like to compare it to OpenTelemetry to some extent, it's similar: there we have a specification which will have versions.
C: We will obviously have the libraries, as mentioned, and I think we also want to establish open-source-project-like structures of people who are contributors and maintainers. That's why my idea was to move this into a dedicated project. I just came up with a name — it could be something like "Open Lifecycle Observation" or whatever.
C: Whatever we want to call it — and then have a dedicated governance and all of those structures for a project in place, especially as more projects are going to build on top of it. I think that structure would be ideal. From my point of view, I just wanted to get your feedback on this, because for the SIG, obviously you have the SIG chairs, but you have no other maintainers like in a project, and those might also be contributing to it.
C: I know that usually it's not part of a SIG's charter to create new open source projects — at least that's the case in the CNCF, and I assume the CDF is kind of similar there — but I think for this case it would make sense. And there were other cases, like the GitOps working group, which created their own projects where they keep stuff as they go. So this is just my proposal, and I just wanted to hear your feedback.
A: Yeah, from my point of view, I think that is definitely the plan — long term or mid term, hopefully; it depends how fast we go.
A: We started in the CDF SIG Events working group repo just because we had the repo and it's an easy and quick way to get started, and we wanted to focus first on getting the initial draft and PoC up and running, which might also be a means to, you know, attract more contributors and make a point of why we are doing what we're doing. But yeah, definitely, I think we should have a dedicated home for this.
A: I think there are a few bits. Like you mentioned, the spec might go into its own repo with its own versions and tags and everything. We started working on an SDK as well, and we might eventually have an ecosystem of adapters.
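An SDK for such a spec would mostly wrap event construction and serialization. As a rough illustration — the event type name, source, and payload layout here are assumptions for the sketch, not the working group's actual draft — a CD event rides inside a CloudEvents-style JSON envelope:

```python
import json

def make_cd_event(event_type: str, subject: str, data: dict) -> str:
    """Build a CloudEvents-style JSON envelope for a CD event.

    The "cd.*" type naming and the payload layout are illustrative
    assumptions, not the working group's actual draft spec.
    """
    envelope = {
        "specversion": "1.0",          # CloudEvents spec version
        "type": event_type,            # e.g. "cd.artifact.packaged.v1" (assumed)
        "source": "/demo/tekton",      # producer identifier (assumed)
        "subject": subject,
        "id": "0001",                  # would normally be a UUID
        "datacontenttype": "application/json",
        "data": data,
    }
    return json.dumps(envelope)

event = make_cd_event(
    "cd.artifact.packaged.v1",
    "my-service",
    {"artifactId": "registry.example/my-service:1.2.3"},
)
print(json.loads(event)["type"])  # cd.artifact.packaged.v1
```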
C: What's the — I mean, I don't want to create anything too complex, don't get me wrong — but how can the individual adopters influence the spec and so forth? Most projects have some kind of enhancement proposal process. It might feel a bit like overkill, but speaking from the Keptn side, I think it's the same from the Tekton point of view.
C: If this is the central means of communication in another project, I think you want to have some kind of process behind it — how you can actively contribute to it. I think that's maybe also attractive especially to more commercial solutions, who would say: well, this is not just an open spec, this is a project. It's behaving like a project. You can contribute, you can join it, you can take an active role in there. There is a well-defined governance.
A: Yeah, yeah, I didn't think about the governance until now, but yeah, that's a good point. I mean, we might set up a governance separate from this group. It wouldn't be the top priority for me to do it right now, but I'm open to it — mainly because I think we have limited capacity, and I just wanted to focus first on, you know, getting things started and having something to talk about. But I think, yeah, definitely.
A: That's also why I started this discussion about the naming, because I think once we have the name, we can start buying a domain and have a website. I was using CloudEvents as a reference — as a model, if you will — and I think they had a similar process: they were born out of the serverless working group, but they are an independent project with their own governance and spec and projects and everything. So it would be a similar model, I think.
B: There could be some inspiration to get from the Eiffel project — Mathias will know more — because they basically started a community around the spec itself, where many companies and stakeholders are now contributing various parts. I'm working on a .NET library for Eiffel right now, so there might be some inspiration to get there.
D: Yeah, yeah. In Eiffel we have a name, Eiffel, so it's the Eiffel community which is the base, and then we have the protocol itself, and then we have all the various tools around it — SDKs and so on.
D: But there the SIG is kind of like one repo in the Eiffel community. Let me check, just so I don't mess myself up right now — yeah, we call it the community repo. So we twist it a little bit compared to what we do here, because here we have SIG Events, which runs kind of like these meetings and so on, and that's what we host in the community repo in the Eiffel community space.
D: So that's a little bit different. So I guess this will look more like what we have in CloudEvents: I looked at their GitHub structure, and they have CloudEvents, and then they have a spec, and then they have all the SDKs under the — what do you call it — the top organization, probably. So CloudEvents is one organization, whereas for Eiffel, the Eiffel community is the organization. But then the question I have is: if we would break out the protocol into its own repository — which I don't think is a bad idea — what would be left for the SIG?
A: That's a good question. I think one of the missions of the SIG would always be, from my point of view, you know, to get people involved — maybe doing presentations about how they use events in CI/CD and in different parts of CI/CD, having presentations, having user stories.
A: Yeah, I think mostly developing the SDK and maintaining the spec might be done in a separate project, but I would expect, at least in the beginning, there will be a lot of overlap between the team that is part of this SIG and the team that is maintaining the open source project. Over time they might diverge.
D: Okay, so the SIG might actually have more general talks and discussions and presentations around events in general, which might not be part of the new repo. Yeah, that sounds reasonable.
D: I felt that the creation of the new protocol is probably the biggest thing that this SIG is going to be doing, so it's probably mostly that that we're doing.
C: Yeah — and I think also, if we decide to do all the work over there and we end up with an established project that's used by hundreds of tools out there, that's a great achievement by itself as well. If this is what was bootstrapped here, I'd say that would be fine as well, but it feels like it all centers around that.
A: Yes, we definitely need a name. We had an initial discussion about the name a few meetings ago, and we had a few things that I think we agreed upon — like, we don't want the name to be associated with the CDF specifically, and I think this makes sense also given the discussion that we are having today. But we had several different opinions, different ideas, about whether the name should be descriptive or should be more like — I don't know — a nice name that is not necessarily associated with the specifics of the project. So we have several different ideas, and I think we need to continue the discussion there and just agree on a name that we can stick to in the long term, because I would hate to go and, you know, create a GitHub org and projects and docs and then have to rename everything. I mean, the work-in-progress name that we're using in the SDK is simply "cde" — CD Events, Continuous Delivery Events — but that's not necessarily the name.
C: I'd also say, for the name, having something that actually reflects what it's doing — in the end it's events in the context of CD — makes sense. But I'd take the whatever-foundation piece out of it.
A: Yes. One of the concerns with having a name that describes the context where the protocol acts is that, if we change the scope in the future, then you're stuck with a name that might not describe anymore what the protocol is about. That's why one of the proposals was maybe to take a name which is, I don't know, some Greek word or some name that we would like but that isn't necessarily fully descriptive. But yeah, that's one of the proposals. So I'm not sure — I don't think we can come to a conclusion on the name discussion here in this group right now.
A: Yeah, I sent around a form, and I think it's still open, so you can see what the proposals were and add your own proposal or comments if you want.
A: All right then, I would switch to the second part: I wanted to show you where we stand with the PoC and what is working today.
A: I hope this is readable. I painted this diagram basically to show what's deployed in the cluster where the demo is running today. Actually, maybe the first step: I wrote this poc.sh — it's a script with a bunch of dependencies and prerequisites.
A: Kourier ingress and, yeah, everything — and it will configure the components to work together.
A: So at the end, after you run the script, you get your kind cluster, and then you have Tekton installed with the CloudEvents component — which is the experimental component that we have here — plus the dashboard and the CLI; these must be installed locally. And then Keptn is also installed, with two extra components.
A: You define filter rules, and every time a filter rule is matched, Knative Eventing basically takes care of getting events from the broker to the sink specified in the trigger. All right — so we have Tekton CloudEvents talking to the broker and Keptn outbound talking to the broker, and then there is also the CloudEvents Player application, which is listening to the broker. Tekton Triggers is listening for certain events from the broker, and Keptn inbound is listening for certain events.
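The trigger filters described here do exact matching on CloudEvents attributes: an event is delivered to a sink only if every attribute in the filter matches the event. A minimal sketch of that routing check — the event type names are assumptions for illustration, not the draft spec's actual types:

```python
# Illustrative sketch of the attribute filtering a Knative Eventing Trigger
# performs: an event is delivered to the subscriber only if every attribute
# named in the filter matches the event's attribute exactly.
# The "cd.*" event type names are assumptions for illustration.

def matches(filter_attrs: dict, event_attrs: dict) -> bool:
    """Return True if every filter attribute equals the event's attribute."""
    return all(event_attrs.get(k) == v for k, v in filter_attrs.items())

# A filter that routes only "artifact packaged" events to one sink.
trigger_filter = {"type": "cd.artifact.packaged.v1"}

event = {
    "specversion": "1.0",
    "type": "cd.artifact.packaged.v1",
    "source": "tekton",
    "id": "1234",
}

print(matches(trigger_filter, event))                      # True
print(matches({"type": "cd.service.deployed.v1"}, event))  # False
```

An empty filter matches everything, which mirrors a trigger with no attribute filter subscribing to all broker events.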
A: And then I can show you the shipyard file. On the Keptn side the setup is relatively simple: there is a sequence — what Keptn calls a sequence — which is a delivery sequence that has only two steps, or tasks.
A: The first is approval — it's a manual approval — and the second one is the deployment. The idea is that after an approval, the deployment is started, but the actual deployment is executed by Tekton. Before the approval happens, Tekton will build a container image and send a message to Keptn to trigger the sequence, and then the sequence will happen.
A: Then we go back to the user waiting for approval, and once approval is given, Keptn sends a "deployment triggered", which then goes to the broker as a "CD artifact published" event.
A: The CloudEvents Player — oops, there's nothing there.
A: Right, okay, the pipeline finished. So if you look on the Keptn side, you can see that there is a delivery which is waiting for approval — it's the production environment. So we go to the delivery and expand it, and you can see here: this is the tag of the container image that was created by the Tekton pipeline. If we do the approval, then Keptn will start the next task, which is the deployment.
A: So Tekton sent the "pipeline run queued", "started" and "finished", but it also sent the "artifact packaged". Then Keptn sent the "artifact published", and once Tekton finished the deployment, it also sent the "service deployed" message there. Once the "service deployed" message is sent, Keptn closes the sequence, and you can see that in the deployment here.
A: Yeah, in the results I'm using Tekton Results to transport some of the information — so this is the environment ID, which is the URL where the demo is deployed.
D: Thanks, yeah, it was nice, yeah. I think there was a question.
C: Go ahead, please. And as I mentioned — I think on another thread we had — I think it's great that we have this working right now, because right now it maybe feels a bit like an overhead driving it this way. But now we could do funny things like adding an additional sequence in Keptn that would deploy to a totally different environment, and then we could see how it gets picked up for quality gate validation.
A: Nice, good, yeah. So I had a chat yesterday with Johannes and Jürgen, and yeah, they also mentioned adding multi-stage quality gates. I think it would eventually be really good to have quality gates, because in my understanding those are like a validation of a deployment.
A: So if you don't pass a quality gate, you could have an incident type of event from our protocol, which is the kind of information that you need if you want to have the whole DORA metrics type of information.
C: In this case we would create a "deployment finished" event with a failed result — we don't create an incident. You only really create incidents if something is in production, or running in an environment, and something unexpected happens. A deployment failing is nothing we consider unexpected; we consider it just something that might happen.
C: That's how we distinguish it from the problem notification. If you do a deployment, it might just not work — that's not ideal, but kind of expected to happen every now and then. We only create a problem when something happens that's literally unexpected, and most likely not related to a deployment either — think of an availability zone failing, or response times suddenly spiking for no specific reason.
C: So after the deployment is complete, you would then obviously trigger something like a performance test — which, hey, we could even use the Keptn JMeter service to run, which would be funny, because then we would have a task that's in neither Tekton nor Keptn core — or rather, it's in the Keptn execution plane. Then, after the test is run, we would look at the results with the quality gate, and based on the quality gate...
C: We would then decide what we want to do: roll back the deployment, or promote it to the next stage, depending on whatever we wanted it to be.
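A quality gate of this kind boils down to scoring measured results against SLO thresholds and deciding whether to promote or roll back. A toy sketch of that decision — the metric names, thresholds, and all-or-nothing scoring rule are invented for illustration and are not Keptn's actual SLO format:

```python
# Toy quality-gate evaluation: compare measured metrics against SLO
# objectives and decide whether to promote or roll back a deployment.
# Metric names, limits, and the pass/fail rule are invented for illustration.

SLOS = {
    "response_time_p95_ms": 500,   # must stay at or below this limit
    "error_rate_percent": 1.0,     # must stay at or below this limit
}

def evaluate_quality_gate(measurements: dict) -> str:
    """Return 'promote' if every SLO passes, else 'rollback'.

    A missing measurement counts as a failure.
    """
    passed = all(
        measurements.get(metric, float("inf")) <= limit
        for metric, limit in SLOS.items()
    )
    return "promote" if passed else "rollback"

print(evaluate_quality_gate({"response_time_p95_ms": 320, "error_rate_percent": 0.4}))  # promote
print(evaluate_quality_gate({"response_time_p95_ms": 900, "error_rate_percent": 0.4}))  # rollback
```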
C: We do — we use the same SLO definitions for pre-production and production, right; the quality gates are always based on SLOs. Maybe to clarify: we use them, for example, for validation of a remediation action. So if you have a problem in production, we have a remediation action which we trigger, and we then validate whether it actually changed the metrics again — because otherwise we would just mess around with the environment without validating whether it actually helps.
A: Okay, yeah, makes sense. The other area where I would like to continue developing in the future, which I think is important, is this bottom layer here, because I want to show the advantage of the fact that everything is going through the same protocol — and also the broker; but the fact that everyone speaks the same protocol here means that down here you can have someone that listens to all the events, and they're all in the same consistent format. Right now I only have this CloudEvents Player.
A: You know, to measure how you're doing over time in your deployments. So I think using a common protocol makes it easier not only for different applications to talk to each other, but also to build an ecosystem of applications that can benefit from having shared semantics in the events. That allows you to, you know, understand events that are coming from different platforms and build logic on top of them. That scenario, I think, is also important.
D: Yeah, that sounds logical. Do we want to — or is it too early to — add another component, so we have three components communicating?
D: I don't know if it's a good idea or not, but there was some work on a CloudEvents plugin for Jenkins, I think. So would we like to use that one and connect a third component, or should we focus on the visualization part, the monitoring? Because I agree — that at least has a visual impact. If you're showing it, you can show people that, okay, these two components are using the same protocol, and therefore we can use one visualization to view them.
A: Yeah, I think we can add an extra component, or we can have multiple flavors of the PoC where we use Keptn and Jenkins, Keptn and Tekton, Tekton and Spinnaker — you know, we can build multiple versions. We can have one where we use three or four, but it becomes, I guess, maybe more complex if you have too many components to show in the demo and to run everything on your laptop. But definitely, I think it's worth getting more components on board.
C: From the monitoring perspective, I think what would be nice, maybe, is to transform — or to use the trigger events and convert them to OTLP, which would give us OpenTelemetry data, and then you could visualize it really as a transaction trace for each individual event sequence that got started. You wouldn't have to build any monitoring — you'd just see it; no matter how complex your flows are, you can visualize them.
C: Yeah, you could just use Jaeger — if you push it into Jaeger, you could visualize it.
A: Yeah, yeah, that would be great, because the Knative components have tracing already. I tried enabling it and connected it to Zipkin, but it showed tracing only within every single event — so, you know, there's still tracing information about the story of a single event, but not across events, which is the most interesting part. So yeah, definitely, having something like OTLP that allows us to see across all the events would be best.
A: I need to reach out to some Spinnaker folks to see if they're interested in working on this. I think it would be great, because it's another one of the CDF projects as well. So then we'd have Jenkins, we have Tekton, and we'd have Spinnaker too. And I think it's great that we have Keptn, which is not in the CDF, because it's also great to see that this protocol is not meant to be for CDF projects only. So this is a very good story.
A: Okay, great. I think this was really useful feedback, so we have some good ideas on how to develop this further. So, next: we have a talk at DevOps World with Mauricio and myself. The conference is happening in September, but the recording is going to happen next week.
A: So I think the version that I've shown you today is pretty much what we're going to use for the conference, because I don't think there is much time to add much more — unless, I mean, you are able to build something quickly with OTLP and Jaeger.
A: Then yeah — I think this version is what we're going to show for the conference.
A: I think it should be enough for this first PoC, and then we can continue building on this. We discussed yesterday with Johannes and Jürgen and Mauricio that in August, I think, we will present this to one of the Keptn user groups as well, and I plan to do a presentation to the Tekton community as well. So we can keep iterating on this and making more and more people aware that we're working on it, and hopefully get more people joining our meeting and contributing as well.
C: And I'll also talk to the team, because I think we can show how we dynamically reconfigure delivery sequences and workflows on the fly in Keptn while we keep the rest the way it is. But it's really great, and I really enjoyed that — a really great demo. Thanks for taking the initiative to push this forward.
A: Yeah, so like "cd pipeline run started" and "cd pipeline run finished" — but the way that we've made it is that you can specify special annotations on your Tekton resources. So if you say this is the build-artifact pipeline, and it has the "cd.artifact.packaged" enabled annotation, then it will send that event if the pipeline finished successfully — and the same for the "service deployed".
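That annotation check can be pictured as a small lookup on the resource's metadata. In this sketch the annotation key and event type are assumptions chosen to mirror the description above, not the experimental component's real key names:

```python
# Sketch: decide which CD event (if any) to emit when a Tekton pipeline run
# finishes, based on an opt-in annotation on the pipeline resource.
# The annotation key and event type below are illustrative assumptions.

ANNOTATION = "cd.artifact.packaged"

def event_on_success(pipeline, succeeded):
    """Return the CD event type to emit, or None if nothing should be sent."""
    annotations = pipeline.get("metadata", {}).get("annotations", {})
    if succeeded and annotations.get(ANNOTATION) == "enabled":
        return "cd.artifact.packaged.v1"
    return None

build_pipeline = {
    "metadata": {"annotations": {"cd.artifact.packaged": "enabled"}}
}
print(event_on_success(build_pipeline, succeeded=True))   # cd.artifact.packaged.v1
print(event_on_success(build_pipeline, succeeded=False))  # None
```

A pipeline without the annotation stays silent, so existing pipelines are unaffected until they opt in.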
A: Yes, yeah — so we have two components that are doing translation; as you see here, inbound and outbound.
A: It's slightly more than just a translation, because, as you can see, there is some logic in there as well. So when Keptn receives — what is it — the "delivery triggered" after the approval, then it will send... sorry: when it receives the "artifact packaged", it will send a "delivery triggered".
A: And then the outbound: when it receives a "deployment triggered", it will send an "artifact published". You can see there is no one-to-one mapping between these events — it's a mapping that makes sense for this PoC, at least. But yeah, we need to do some work here to make sure that the events that we have on the CD side align somehow with the Keptn events, and also with events from other platforms that are interested in joining this.
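The inbound/outbound translation described here is essentially a lookup from one tool's event type to the protocol's, plus payload reshaping. A minimal table-driven sketch — all type names are assumptions based on the events mentioned in the demo, not the PoC's actual identifiers:

```python
# Sketch of the Keptn <-> CD-events translation components: each direction
# is a lookup table from the incoming event type to the event type to emit.
# All type names are assumptions based on the events mentioned in the demo.

INBOUND = {  # CD protocol -> Keptn
    "cd.artifact.packaged.v1": "sh.keptn.event.delivery.triggered",
}
OUTBOUND = {  # Keptn -> CD protocol
    "sh.keptn.event.deployment.triggered": "cd.artifact.published.v1",
}

def translate(event_type, table):
    """Map an event type across the boundary; raise if there is no rule."""
    try:
        return table[event_type]
    except KeyError:
        raise ValueError(f"no translation rule for {event_type!r}")

print(translate("cd.artifact.packaged.v1", INBOUND))
print(translate("sh.keptn.event.deployment.triggered", OUTBOUND))
```

Keeping the rules in data rather than code makes the deliberate non-one-to-one mapping easy to inspect and extend for other platforms.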
A: I also tried to map a little bit how the different bits of information are passed across the events. Right now, things like the trigger ID and the context from Keptn are just somewhere in the payload of the event, in a location that is not defined in any way by our protocol. There we could think of having some extension mechanism, so that you can...
C: ...which is hard-coded today, yeah. The idea is also to standardize that. We've had them in Keptn for quite a while — they're kind of inspired by it already, but don't yet fully have the W3C trace context in there.
C: The idea is to have the traceparent model — which is the trace ID and the parent ID — in there for distributed tracing. We had just the Keptn events before that, but there have been plans for a while to convert this to the W3C trace context anyway, and this could even make it into the CloudEvents main part.
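For reference, the W3C trace context travels as a single `traceparent` value of the form `version-traceid-parentid-flags`. A small sketch of building and splitting one — the IDs below are fixed for readability, where real ones are random:

```python
# Build and parse a W3C trace-context "traceparent" value:
#   version(2 hex) - trace-id(32 hex) - parent-id(16 hex) - flags(2 hex)
# The IDs below are fixed for readability; real ones are randomly generated.

def build_traceparent(trace_id, parent_id, sampled=True):
    return f"00-{trace_id}-{parent_id}-{'01' if sampled else '00'}"

def parse_traceparent(value):
    version, trace_id, parent_id, flags = value.split("-")
    return {
        "version": version,
        "trace_id": trace_id,    # shared by every event in one sequence
        "parent_id": parent_id,  # the immediate predecessor span
        "sampled": flags == "01",
    }

tp = build_traceparent("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7")
print(tp)  # 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
print(parse_traceparent(tp)["trace_id"])
```

Carrying the same trace ID through every event of a sequence is what would let Jaeger stitch the cross-tool flow into one trace.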
A: Cool, yeah. The other thing I wanted to do is to go and present this to the CloudEvents community. I already spoke to one of the folks there — he's from IBM, so it's easy for me to get in touch — so maybe we schedule a presentation, maybe after the summer, so we can get some feedback from them as well.
C: Let's see — the idea is you can say: well, you can keep your pipeline delivery as it is right now, but if you want to add an additional stage, you don't have to touch any of your core pipeline code. You can add chaos engineering on top of it without having to touch your pipeline. So — well, we'll talk about Harness — but this looks very interesting. It's also good that it runs on kind, so everybody can test it locally, which I think is super important as well.