Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
C
Probably Alois will join a little bit later, but he has another conflicting meeting. He just wanted me to announce that he might join later.
B
Same, same, same. Hopefully we can start collaborating now. I think that so far we have just been trying to organize the work, and we are really pushing hard to get something up and running. So I think that in the following couple of weeks it would be great if we could create the most minimalistic thing with the vocabulary that we are trying to define, and then just iterate and refine. These are basically the first steps into interoperability.
C
Sure. And how do I pronounce your first name: is it Salaboy?
B
I'm not entirely sure who else is joining. I think that Viva is also joining; he was really interested in working on the Tekton side with Andrea. So maybe we just need to wait a couple of minutes more. For you to know, we try to keep these kinds of meetings as short as possible. This is more of a sync, to make sure that we tackle all the things that are happening and just to move things forward in general.
B
It's not a meeting just to have a meeting. We usually tend to discuss here what's going on, what the next steps are, and the things that we talk about in Slack, like: okay, how do we organize ourselves so we can see some progress?
A
No, I absolutely agree. And now the focus, as you said, is on the PoC: to see if we can get anything ready for cdCon, before cdCon, to capitalize on the attention that we will get from there and get more people into this work. So yeah, I'm looking forward to that as well.
B
He doesn't seem to be here, which is strange because he's always early. No, he wasn't joining today. Okay, that's good! So then, well, maybe let me wait a bit, just to see if he's joining. I think that he was really trying to push some stuff forward as well.
B
In any case, let me see if I can share the link for the meeting notes, and we can take some notes. The conversation is being recorded, so I think it should be okay if we also take some meeting notes and record what the plan is and what the action items are.
B
Let
me
add,
I
think
that
you
can
add
your
names
in
there.
Let
me
copy
so
today
is
first
of
june.
B
Sorry about that, I'm back. Welcome! All right, and then we have Viva now. Hi Viva, hey Mauricio. How are you doing? Sorry for pinging you in Slack, but I think we wanted to start.
D
Good to have you here. Thank you, I just joined right now. Perfect.
B
I think that we have that pull request somewhere, and what I wanted to bring into the discussion is: what are the next steps to manage this pull request? I appreciate all the comments. I think that I've closed most of the comments already; I'm happy to go back again and check.
B
And that's pretty much it. So let me go back again. The idea of this pull request, Jurgen, just for you to know, is a pretty simple Go library that is creating the CloudEvents mapping for the events that we have here in this repository, in SIG Events. I don't know if you have looked at this repository before.
B
I don't know why it's not going to code here, but if I go to code here, inside this directory, vocabulary draft: this is where we are trying to define the events that we want to focus on at the beginning. And if you take a look at, for example, here, we have a bunch of common continuous integration events, like about builds and test cases and artifacts being packaged and published.
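To make the shape of such a vocabulary concrete, here is a minimal sketch of what one of these CI events could look like as a CloudEvents JSON envelope, without depending on the real SDK. The `type` and `source` values are illustrative placeholders, not the SIG's final vocabulary.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// event models the CloudEvents envelope that the Go library would fill
// in for each vocabulary entry. Field names follow the CloudEvents
// JSON format; the concrete type/source strings are assumptions.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Source      string            `json:"source"`
	ID          string            `json:"id"`
	Time        string            `json:"time"`
	Data        map[string]string `json:"data"`
}

// newArtifactPackaged builds an "artifact packaged" event. The type
// name is an assumed example, not the SIG's final vocabulary.
func newArtifactPackaged(id, artifact string) event {
	return event{
		SpecVersion: "1.0",
		Type:        "cd.artifact.packaged.v1",
		Source:      "tekton/my-pipeline",
		ID:          id,
		Time:        time.Now().UTC().Format(time.RFC3339),
		Data:        map[string]string{"artifactId": artifact},
	}
}

func main() {
	b, _ := json.Marshal(newArtifactPackaged("e-1", "registry/app:1.0.0"))
	fmt.Println(string(b))
}
```

The real library would emit these over HTTP via the CloudEvents SDK; this only shows the envelope the vocabulary entries map into.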
B
So
what
what
we
wanted
to
do
is
we
wanted
to
see
if
we
could
have
a
library
that
allows
us
to
create
these
events
and
to
admit
these
events
and
probably
what
viewer,
what
viva
and
andreas
is
doing
people
if
you
can
put
collect
the
link
of
the
of
the
tecton
controller.
I
know
that
there
is
an
issue
now
where
we
are
discussing
that
in
tecton
right.
D
Yes, there is an issue made for the CloudEvents controller specifically, and there is a PoC which I'm making for emitting CDF events using the controller. So currently we are working using a pipeline run, and the pipeline run will have an annotation which says: okay, this CDF event needs to be sent if whatever happens on this pipeline run.
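The annotation-driven behaviour described above can be sketched without pulling in the Tekton types. The annotation key here is hypothetical; the real controller may use a different one.

```go
package main

import "fmt"

// Assumed annotation key; the actual controller may name it differently.
const emitAnnotation = "cd.events/emit"

// shouldEmitCDEvent inspects a PipelineRun's annotations (a plain map
// standing in for the Tekton object's metadata) and decides whether
// the controller should send a CDF event for that run.
func shouldEmitCDEvent(annotations map[string]string) bool {
	return annotations[emitAnnotation] == "true"
}

func main() {
	run := map[string]string{emitAnnotation: "true"}
	fmt.Println(shouldEmitCDEvent(run)) // true
}
```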
D
So we are currently working on that, and I have scaffolded the pipeline run controller outside the Tekton pipelines repo itself, not in the main repo, because it has to go in as experimental first, and then it will be merged into Tekton.
D
So I'll be creating a PR to the Tekton CD experimental repo.
B
We have the Jenkins Google Summer of Code student who is working on something pretty similar, so we definitely need to guide them to follow the same approach that we're following with Tekton, and with whatever we do with Keptn to listen for or consume events.
C
From what I've seen, just briefly, the events that you defined here are very similar to the events that Keptn has internally. So I think it should be really possible to align these events, and probably, as a first step, we need to have some small translation service that just translates them.
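Such a translation service could start as little more than a lookup table from one event type to the other. The concrete type names on both sides are illustrative assumptions; neither vocabulary was final at this point.

```go
package main

import "fmt"

// typeMap maps assumed CD event types to assumed Keptn event types.
// Both sets of names are placeholders for the purpose of the sketch.
var typeMap = map[string]string{
	"cd.artifact.packaged.v1": "sh.keptn.event.deployment.triggered",
	"cd.service.deployed.v1":  "sh.keptn.event.test.triggered",
}

// translate returns the Keptn event type for an incoming CD event
// type, or an error when no mapping exists.
func translate(cdType string) (string, error) {
	k, ok := typeMap[cdType]
	if !ok {
		return "", fmt.Errorf("no Keptn mapping for %q", cdType)
	}
	return k, nil
}

func main() {
	k, _ := translate("cd.artifact.packaged.v1")
	fmt.Println(k)
}
```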
C
Yeah, I think, as a PoC, this kind of external or additional component will be necessary, but in the long run we should think about how to merge the specifications and come up with one specification, as is also shown in the figure that Eric shared in Slack, where it's really this: CD events.
B
Perfect, perfect. And I think that, yeah, that makes a lot of sense for me. I mean, that's easy to create, and I think that we can get it done pretty quickly. And if Tekton is including that into their pipelines, we can have a kind of end-to-end thing that we can show pretty quickly. That's why I think the next important conversation that we need to have is:
B
What's the use case, right? We know that Tekton is running pipelines, and they are annotating their stages, or connecting their tasks, into emitting cloud events. Now, what do we do from the Keptn side? What can we actually do with it, and what kind of events would we need for doing that? And yeah, by looking at the Keptn docs, I saw that:
B
You have the auto-remediation kind of thing, and I wonder what that involves, how we trigger that, and how we can configure the actions there. Let me see. I was looking at this documentation, which I'm hoping is the one that I should be looking at, for how you define your remediation and then how you create actions for it, right?
E
I had a CNCF meeting at the same time where I had to present. No worries. I think we can do two things. One of the ideas: when we work with pipelining tools, we usually use highly generic pipelines, and Keptn very often parameterizes these pipelines. That's what we do in the shipyard file, to give an idea.
E
You
might
have
a
definition
on
how
to
define
a
production
tenant
or
you
roll
it
out,
but
captain
would
then
tell
you
which
candidate
actually
wants
to
deploy
or
how
you
generally
handle
the
stage
and
that
captain
will
handle
which
dates
to
deploy.
So
for
us,
a
pipeline
is
actually
not
a
stage
for
us
a
pipeline
depending
on
what
it's
doing
is
actually
what
we
consider
a
task
in
a
sequencing
captain.
E
It's
almost
like
a
bit
of
inception,
but
we
would
trigger
do
a
deployment
for
us
and
we
would
work
with
what
we
call
micro
pipelines
from
from
an
idea
more
or
less
so
you
dissect
pipelining
what
they're
actually
doing,
whether
it's
a
test
execution.
It's
a
deployment,
it's
an
environment
creation,
whatever
you
wanted
to
do
it
and
the
pipeline
itself
is
only
parameterized
and
what
what
captain
then
would
do
it
calls
ecap.
E
We
call
it
the
service,
but
technically
it's
exactly
this
event
where
we
push
it
where
it
supplies
all
these
parameters,
then,
to
the
to
the
underlying
pipeline
and
and
to
your
point,
we
can
either
do
this
for
a
pipeline
for
a
deployment
for
a
test
for
a
a
promotion.
I
think
one
one
thing
that
we
one
of
the
reasons
why
we
built
it.
That
way
is
that
we
can
easily
add
additional
tasks
that
the
original
pipeline
knows
nothing
about.
E
Yeah, so we're not building anything. I mean, technically you could do it with a custom sequence, but the idea is: usually we get a new artifact event, like, something new is here to deploy. Then this event moves into Keptn, and Keptn looks at what we have in the shipyard file, which is the strategy, more or less, of how this specific application or service should be deployed.
E
It's
usually
based
on
a
per
project
basis,
and
then
captain
picks
and
choose
what
it
wants
to
do
with
this
specific
application,
obviously
might
be
different
from
an
application.
After
from
an
application
component
to
a
database,
to
maybe
an
ios
application,
whatever
it
is,
and
then
based
on
the
sequence,
what
you
call
a
sequence,
what
is
defined,
it
will
then
call
other
services
which
again
can
be
pipelines.
E
So throughout the year you kind of encourage people to do fast experimentation, but during Black Friday and Cyber Monday you're not encouraging people to experiment, because you're making 60% of your revenue; that's not the best time of the year to try out that new cart application that might fail. So you might just flip a switch by reconfiguring Keptn, or you add an additional security test policy.
E
Again,
we
would
just
define
the
order
of
what
we
want
to
do,
because
for
certain
stages,
we
might
want
to
call
a
performance
test
for
others.
We
want
to
call
a
security
test
for
others,
we
run
the
performance
test
and
the
chaos
test
at
the
same
time
again,
this
is
all
something
I
don't
need
to
encode
in
the
pipeline.
This
is
dependent
on
maybe
some
policy.
So
that's
one
way,
it's
more
as
a
back
and
forth
communication
between
the
two.
E
I
think
micro
pipelines,
that's
what
I
like
to
call
them
like
sharing
micro
services
with
micro
pipelines,
they
did
an
orchestra
and
and
more
or
less
captain
is
x,
is
acting
as
the
mesh
in
here
that
kind
of
links
to
things
properly
together,
but
also
to
your
earlier
point.
The
pipeline
for
us
is
also
a
service.
So
when
we
do
the
remediation,
we
can
also
call
the
pipeline
so
assume
that
we
want
to
scale
something
up
or
change
an
environment
configuration
based
on
the
problem.
B
I think that makes sense. So basically, at that point, when you run and deploy, what's the event in Keptn that needs to be emitted to say something was deployed, like a new service or something like that?
B
Okay, perfect. So indeed, yes, this one is like: okay, a new local image, let's say, or a new Helm thing, whatever. And here, when you deploy things, it's just like you have another artifact, which is probably a new service running.
B
For that, like for environments, do you have events there as well? What are the environment events? Every time that you create an environment, do you need to notify Keptn, or does it send you an event, like: can you say a new environment is available?
E
No, we would send it to the pipeline. We would tell you, for example: first I want service A deployed to dev; it's part of the metadata of the event. And the next event would say we want to have service A deployed to, say, pre-production. And there's other metadata, like which type of deployment we want to use: blue-green, please deploy it for me, or please direct deploy.
B
Okay, so I think that's super helpful. So in general, what Keptn is doing is basically parameterizing these micro pipelines and defining which ones run. Okay, great. And now, I think that's why I need Jurgen to help me set up something pretty simple around this, right? So we can definitely assume that we will have some Tekton pipelines that are going to build stuff and then deploy stuff, and then we build this component.
C
And then in between, like, you already have "which environment should I deploy this to"; then we could also make use of the Keptn quality gate. That can be the next step. In the first step we maybe do not do the quality gates, but just have a multi-stage environment, and then in a subsequent step we can add the quality gates, and this should not change anything on the Tekton side. So that's:
E
The nice thing about this demo would be: you, for example, add a staging stage and you don't touch any pipelines; it's just parameterizing those pipelines differently, and then you see that deployments are running differently, changes are running differently. Or you say: well, nobody's running a security check in here, I want to add a security check in here.
E
It makes sense, and the advantage is, like, one of the major arguments when we talk to the bigger companies: you have a real separation of concerns, because the SRE, or whoever is responsible for the delivery process, only owns this delivery process. That's a separate file. The DevOps engineer owns all those micro pipelines, and then the whole thing is like Lego.
E
They
are
comp
they're
combined
at
run
time,
but
people
are
only
responsible
for
what
they
want
to
do,
and
the
individual
developer
just
says
what
what
artifact
they
want
to
deploy,
but
don't
have
ever
access
to
the
other
components
in
there.
That
is
one
of
the
major
drivers
that
are
not
all
stuffed
into
one
thing
together,
but
yeah.
E
I
think
the
easiest
way
is
to
have
what
I
mentioned:
the
translation
service,
where
we
translate
from
the
captain
event,
which
is
an
sh
captain,
started
event
into
a
start
test
event
with
the
proper
metadata
which
we
didn't
send
to
a
tactile
pipeline.
We
just
need
to
ensure
that
we
get
the
that's.
Also
when
you
remember
when
we
had
this
discussion
yeah,
we
said
like
we
need
to
start
it.
We
need
a
finished
event,
because
this
then
feeds
back
into
the
the
execution,
bits
and
pieces
yep.
B
And I think that's the main thing. So there are two main reasons for the PoC: one is cdCon, and the other one is to refine the current events that we have, so we actually make them real, right? With this iteration of creating a PoC, making sure that the events fit the purpose, and then going back and redefining the events and the properties and making them more concrete. So yeah, if we can work on these, I'm more than happy.
B
Something like that; I'm not entirely sure if Tekton has that. So basically, in Tekton we can build something to do that. I mean, Tekton is not listening for events in that way yet, at least not for our events. So maybe that's what Viva needs to implement at some point, or we need to have that translation mechanism as well, to figure out what pipeline to trigger when we get a Keptn event for triggering pipelines.
D
So this might be part of another PoC that I made last week, which isn't completed, because the CDF events library is still yet to be done. But this might be part of the PoC I built last week for the conversion broker; we can discuss more how that would work.
B
Yep. I think that, well, in that sense, Viva, I don't want to add too much to your plate. I think that if you help and work with Andrea on the controller on the Tekton side, I can take the initiative with Jurgen on the Keptn side for this translation, and then we can see if we can first of all connect your PoC, use any other library, and make sure that, again, we don't duplicate work.
D
No, I was hoping that the thing that I made before could help you right now.
B
Viva, do you have any other specific question about this kind of use case and the initial steps?
D
No, but this looks good. I haven't seen Keptn as much; I still have to try it out at least once. Great.
A
Yes. When will we decide specifically, or more completely, how to do that: whether we build something into Tekton, or whether we write some small event listener that just sends the correct start command to Tekton, however that would work? Just so it doesn't fall between the chairs, so to say.
B
That's a very good question, and I think that it's all about this translation component, which first is going to basically listen. So basically, this one is the one that's going to pick up CD events and create Keptn events, and as soon as we know how to do that, doing it the other way around shouldn't be that difficult, right? Like, making sure that we go and create, for example, pipelines. It shouldn't be that difficult, because it's just basically using the Tekton library to create new pipelines.
B
New pipeline runs, which can be hacked together, well, not that easily; it might take a week or so, but I think that we can just get it done and then see where we put these components. I know that the component that Viva is building makes a lot of sense to have in Tekton itself, but all the other things, like this translation layer:
B
There are not too many other topics from my side to discuss. I wanted to definitely have a quick chat with Jurgen to get started with the Keptn side, to see if I can install something, or if we can define how to build that component, and where. And I do not have any other topics to discuss in this meeting, because I can definitely see that we are making progress. And I wanted to mention, Eric:
A
But I will just go through it as I posted it in Slack. So we will have both-way communication now, as we defined it: some stuff will trigger the Keptn side and some stuff will trigger the Tekton side. So I will make sure to add that. One thing I still had, and I know "transport" is the wrong word, but: what broker will be used to actually get these events between Tekton and Keptn, or will we just pick something when we're ready for it?
B
If we can use that, or we can use direct communication, right? Like, for example, from Tekton, the component that is generating the CD events can have a sink, which is basically going to be a URL where it's going to send events via HTTP. There will be a component that will pick up those events and then send another cloud event to another HTTP endpoint in Keptn. That can be possible as well; no need for a messaging broker there in the middle.
B
But we can also add, you know, a Knative broker there, and it should work; it shouldn't be any different.
A
And then, if we use a direct HTTP endpoint, and presumably this is a small detail, perhaps, but Keptn will receive that event as just raw JSON or whatever, and then it will take that raw JSON, go to our event library, and say: can you please decode this for me? So it wouldn't use the Go SDK, the CloudEvents Go SDK, to set up some sort of event receiver and receive it through there; it's more that Keptn would receive it directly and then decode it using the library.
B
No, that's where the translation component comes in, right? So Tekton is sending to a component that has this library and understands it. Tekton has a component that will create CD events, and those are going to be sent to another component in the middle, transforming from cloud events, using the Go SDK, into Keptn events. So there is another component that basically has our library plus the Keptn library, to be able to do the translation, and that component is the one that is going to send to Keptn already-Keptn messages.
E
It shows greatly how things can work together; I think that's really nice. And then, by the way, as a side effect, Jurgen can also show you that you get more or less end-to-end tracing. We're not yet using trace context in there, for historic reasons, because we started some of this work earlier.
E
Yeah, I think once we get it to the point where you say: okay, there's a new image that I've built here, update your charts, run it, and then you go back to Tekton and deploy it; and then for the next run we add an additional stage and it gets deployed to that stage as well. I think that's something that people will find really nice, without having to touch a pipeline at all.
B
Good stuff. I'm happy we are making progress; that's really good! All right, folks, I think that's pretty much it. Let's keep it short. Let's make sure that we don't do meetings just for the sake of having meetings. So if you guys want to catch up at some point in Slack, feel free to ping me or ping each other, and let's push this forward.
C
Sounds great. Mauricio, do you want to stay on this call and spend a little bit of time with me, so I can show you how Keptn works, or do you want to do this another time?
C
Yeah, sure, so let me just share. Just a second. So I think you already saw the tutorials and saw a little bit of Keptn, but let me just make sure to make this full screen here. What I want to show you is how Keptn orchestrates all the different tools, and also where Tekton can fit in. So this is one of our public demos.
C
I can also share the link in the chat, so you can also have a look. What's especially interesting, and I think it will help a lot in our PoC, is, for example, the potato-head; it's a CDF demo project. Let's just focus on this.
C
The important part, I think, for our PoC, and what I want to highlight, is the sequence screen here. So if you go into the potato-head project, into the sequence screen, we can see a whole delivery sequence that was executed. What we can also see here, where we can really take a look, and we can use this for debugging or for learning how Keptn is working:
C
We can take a look at all the different cloud events that are actually sent from Keptn and sent to Keptn. Of course, it's a little bit different from the CD events, but it might also be a bit similar. So this is the complete payload of a cloud event that is sent to Keptn, and this is basically the payload that we have to create.
C
We should create it once Tekton does the CI part and, let's say, comes up with a new image. So this is just the container image and the version, and this is sent to Keptn. So Keptn knows: this is the image that should be deployed, which project, which service, which stage, and then some other information.
C
That is just meta information for Keptn. And we do have this here, for example, in our production stage; so this "deployed to production" was actually sent from the hardening stage. We have a multi-stage delivery pipeline. It starts in our hardening stage; we have a delivery sequence, and the sequence consists of a couple of different tasks. So we have a deployment, a test, an evaluation, we get the SLIs, but for our purpose we could, for example, just focus on deployment.
C
We
can
do
a
deployment
when
the
deployment
is
finished.
We
go
back
to
tecton
and
do
another
deployment
in
in
production
or
we
can
do
whatever
we
want
whenever
we
want
to
add
some
test
here.
We
just
added
in
the
shipyard
file,
but
we
don't
have
to
touch
the
pipelines
in
tactum,
for
example,
and
how
to
how
the
tests
are
triggered.
Again,
it's
very
similar.
C
It's
we
will
have
all
the
data
on
the
artifact
itself
and
the
tests
are
triggered
just
with
a
test
triggered
event.
So
this
is
how
how
the
events
are
working.
It's
always
the
task
and
the
trigger
and
then
inside
the
task
we
can
see
who
is
actually
consuming
this
event.
In
this
case
it
was
a
cheat
meter
service.
This,
for
example,
could
be
attacked
on
service
or
we.
C
Send
this
to
tecton
and
detect
and
start
and
and
then
finish
its
its
work.
So
this
is
how
we
can
see
how
captain
orchestrates
the
whole
delivery
sequence
and
I
think
what
alois
also
mentioned
earlier
was
we
have
a
complete
trace
and
how
one
artifact
goes
through
the
whole
system.
So
we
can
see
that
this
version
was
deployed.
First
in
hardening
there
was
this
delivery
sequence.
It
was
evaluated
with
100
of
the
quality
gate.
It
was
then
released
to
the
next
stage
and
the
next
stage
was
production.
C
So
this
is
the.
This
is
one
of
the
the
public
demos
we
have
for
captain,
where
we
can
take
a
look,
how
the
the
events
look
like
and
which
events
we
have
to
send
and
which
events
we
can
send
back
from
captain
there
are
more,
but
this
is
just
for
the
for
a
delivery
sequence
for,
like,
let's
say,
maybe
a
typical
delivery
sequence
with
deployment
tests
evolution.
We
can
see
all
the
events
here.
B
So you have a single sequence there, right, and it's called "delivery"? Yes? Okay. Can you show us the definition of that sequence?
C
Sure
I
would
go
back
to
the
project
here
and
here
it
was
the
potato
project
and
we
can
take
a
look
at
all
the
resources
that
it's
managed
here
by
captain.
It's
also
publicly
available.
You
can
also
take
a
look
at
this
and
the
shipyard
defines
all
the
sequences
and
it's
very
easy.
This
is
the
whole
definition,
so
I
have
in
hardening
and
in
production
I
have
the
delivery
sequence
in
hardening.
C
The delivery sequence here, and it starts with the deployment. The deployment strategy is blue-green on the microservice level. Then I have some tests; I want to execute some performance tests. What's not specified in Keptn is which tool is responsible for this; that goes into another concept, which is the Keptn uniform.
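A shipyard like the one being described could look roughly like the following. This is a hedged sketch modeled on the Keptn shipyard format: the task names (deployment, test, evaluation, release) are the reserved keywords mentioned later in the call, but the exact schema and apiVersion should be checked against the Keptn documentation.

```yaml
apiVersion: spec.keptn.sh/0.2.0
kind: Shipyard
metadata:
  name: shipyard-potato-head
spec:
  stages:
    - name: hardening
      sequences:
        - name: delivery
          tasks:
            - name: deployment
              properties:
                deploymentstrategy: blue_green_service
            - name: test
              properties:
                teststrategy: performance
            - name: evaluation
            - name: release
    - name: production
      sequences:
        - name: delivery
          triggeredOn:
            - event: hardening.delivery.finished
          tasks:
            - name: deployment
            - name: release
```

Note that the shipyard only names the tasks; which tool executes each one is resolved by the uniform, as C explains next.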
C
They are also shown here; it's the deployment. If you look very closely, we can see the get-sli, which is right now a part of the evaluation, because we need to go to some SLI provider, fetch the data, and then do the evaluation. This is not necessary to define in the shipyard, but everything else:
C
Is a one-to-one mapping; it's basically everything that's happening. And for a subsequent stage, this one, the delivery sequence will be triggered when a delivery-finished event occurs from the hardening stage, and we can define some filter here, so this one is only triggered when it's passed. But for, let's say, a rollback sequence, we could just say it should only be triggered when the delivery finished but failed.
C
So this is how we can define, and how we can make, a whole flow of different delivery sequences, or other sequences even, going through different stages like production or hardening.
C
Tasks that are, let's say, reserved keywords in Keptn; these can be found in the Keptn documentation. But it's basically deployment, test, evaluation, release.
C
But if you want to write your own, let's say a security scan or something like this, you can define it, and Keptn will create an event which has the type, let's say, hardening.securityscan, and whenever we want to start this task it will be hardening.securityscan.triggered. And whatever tool is responsible for the security scan will answer with a started event, and, once the security scan is finished, with a finished event, and then Keptn will move on to the next phase.
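The triggered/started/finished convention just described is easy to capture in code. This sketch follows the naming pattern used in the example above (stage.task.phase); the full event types in Keptn carry an additional prefix, so treat the exact string as an assumption and check the spec.

```go
package main

import "fmt"

// taskEventType builds the event name for a task phase, following the
// stage.task.phase pattern from the example in the discussion
// (e.g. hardening.securityscan.triggered). The real Keptn types add a
// prefix; consult the Keptn spec for the authoritative form.
func taskEventType(stage, task, phase string) string {
	return fmt.Sprintf("%s.%s.%s", stage, task, phase)
}

func main() {
	// A custom task goes through all three phases.
	for _, phase := range []string{"triggered", "started", "finished"} {
		fmt.Println(taskEventType("hardening", "securityscan", phase))
	}
}
```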
C
The important part here is that the other tools do not even know that one additional task was added. So whether you want to do it before the tests or after the tests, for the tools it's completely transparent; they don't need to know. And for the deployment: usually in Keptn we do deployments with Helm, but we could also replace the whole Helm part, for example, with Tekton, and also do the deployments with Tekton.
B
Yeah, and Tekton can be using Helm at the same time, right? So it doesn't really matter what tool you use. But what I'm interested in here is: okay, by looking at this sequence, I'm guessing that if these are reserved words, this basically means a set of events that are going to be emitted somewhere, right? And how do we configure that? Where are those events going to be emitted, and how do we integrate there with our stuff?
C
Yeah, we do have a specification of all the events. So these are all the events; but as I said, if you add some:
C
So, for example, we can take a look at a simple service; that is how the subscription mechanism is working. We do have templates for this, but just to see: here we can see which events this integration is listening for. It's the Litmus integration; it's starting chaos tests, and it's listening for test.triggered and also test.finished, because on a test.finished event, if it comes from some other test integration, we can also stop the chaos.
C
Let's
say,
performance
tests
are
finished
and
we
will
receive
these
events
in
the
litmus
integration.
Then
we
say:
okay
performance
tests
are
already
finished.
We
can
also
stop
now
chaos
because
it's
not
needed
anymore,
but
here
we
can
define
it.
Captain
sends
this
to
we're
using
here
as
the
as
the
messaging
system
we're
using
here
nets-
and
you
can
just
subscribe
to
this
and
listen
for
this
kind
of
events.
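The subscription pattern just described can be sketched without a real NATS connection: an integration registers the event types it listens for (for example test.triggered and test.finished) and gets called back on each matching event. In real Keptn this dispatch happens over NATS via its client library; the in-memory dispatcher below only stands in for that.

```go
package main

import "fmt"

// handler reacts to one Keptn event type.
type handler func(payload string)

// dispatcher stands in for the NATS subscription: integrations
// subscribe to event types and get called back on publish.
type dispatcher struct {
	subs map[string][]handler
}

func newDispatcher() *dispatcher {
	return &dispatcher{subs: map[string][]handler{}}
}

func (d *dispatcher) subscribe(eventType string, h handler) {
	d.subs[eventType] = append(d.subs[eventType], h)
}

// publish delivers the payload to every subscriber of eventType and
// returns how many handlers were invoked.
func (d *dispatcher) publish(eventType, payload string) int {
	for _, h := range d.subs[eventType] {
		h(payload)
	}
	return len(d.subs[eventType])
}

func main() {
	d := newDispatcher()
	// Litmus-style integration: start chaos on test.triggered, stop it
	// when some other test integration reports test.finished.
	d.subscribe("sh.keptn.event.test.triggered", func(p string) { fmt.Println("start chaos for", p) })
	d.subscribe("sh.keptn.event.test.finished", func(p string) { fmt.Println("stop chaos for", p) })
	d.publish("sh.keptn.event.test.triggered", "carts")
}
```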
B
Yep, perfect. Okay, yeah, that makes sense. So, in general, I think I do understand the idea. So basically what we will need to do is create a repository with the shipyard definition for the sequence that we want to have, like a pretty simple sequence.
C
You add the project to Keptn. As I showed you with the potato-head, you don't add the source code or whatever to Keptn; you add the project and only meta information. Well, not just meta information, but the Helm chart, the test files, the remediation instructions. This is what you add to Keptn: basically all the information that's needed for deployment, but not the source code itself.
B
Okay,
okay,
good,
so
we
will
so
basically
we
create
that
we
have
it.
So
that's
the
next
question.
So
should
we
work
with
like
something
like
this,
like
a
public
captain
instance
for
the
plc,
or
should
we
kind
of
like
try
to
kind
of
install
a
setup
where
we
can
have
tecton
and
captain
all
working
kind
of
locally.
C
Yeah,
I
think,
for
the
poc
coming
up
with
our
only
installation
where
we
can
really
mess
around
the
thing.
That
makes
sense
if
we
want
to
like
exchange
some
tools
or
exchange
some
some
components.
B
Perfect, yeah, definitely. So if we can basically have something very minimal, because in order to run these projects, these definitions, these sequences, we need to have basically the control plane for Keptn, right? Yes, once we have the control plane, then we can just basically create these services and subscribe to the events that Keptn is emitting, right?
C
Yes, we would need to install Keptn, create a project, and create one service; the project is the:
C
Yeah, I'm working on a quickstart guide on k3d, and this makes it really easy to run Keptn locally. kind might also work; it was just not working on my machine, because kind needs a little bit more resources than k3d, but I can share the installation instructions and probably I can even try it.
B
Yep, I can give it a try and see if I can get it working, and then I can start playing around. Then we can start defining these sequences and how to connect things together. I think that would be great, yeah, great stuff. And for the rest of the team, I think in the end we should keep pushing for merging the library.
B
So we can start using the library even if it's buggy, or incomplete, or not covering all the events. If we just have the pull request merged, we can definitely reference that library from other components, something that is going to be a little more tricky while it's only in a pull request.
B
E
E
C
And again, we have a multi-stage environment here, and I think remediation is defined in production; I can open it.
C
So it's again defining all the delivery sequences, and looking at production, we can even see a rollback sequence.
C
E
To stop here, but you can, for example, see that these are individual sequences: every stage has its own sequence, which also allows you to manage them separately. Before, we had everything in one, but here actually every stage, or more or less every part of the delivery, has its own sequence. It could be, for example, also a subset of your tenants that you want to move there. Sorry, over to you again; I think it's a good idea to share this here.
B
I feel that's good. That kind of makes me wonder what happens first here: delivery, and then rollback only if that's triggered by that event, right? And the same with delivery-direct: only if the event happens is that sequence going to be executed.
C
Yes, exactly. This is only triggered on the delivery-direct if a delivery-direct is finished from the staging stage here. This is a way to also model it with different delivery strategies: I have one service that goes with the blue/green deployment strategy and one service that goes with a direct deployment strategy, and they don't interfere with each other.
C
C
So there might be some services that need a canary rollout, others need blue/green, and others direct deployment; you can simply build that. And this is the whole file: it's 86 lines of code for a whole project, and you can even manage two different delivery strategies with it. It's not really a large file, so it's still easy to understand.
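The two-strategy setup described above can be sketched as two sequences in one stage, each carrying its own deployment strategy. This is again a Python-dict illustration of the shape, with the property name "deploymentstrategy" and the strategy values assumed for the example:

```python
# Sketch: one stage with two delivery sequences, each using a
# different deployment strategy (property names are illustrative).
stage = {
    "name": "production",
    "sequences": [
        {"name": "delivery",
         "tasks": [{"name": "deployment",
                    "properties": {"deploymentstrategy": "blue_green_service"}}]},
        {"name": "delivery-direct",
         "tasks": [{"name": "deployment",
                    "properties": {"deploymentstrategy": "direct"}}]},
    ],
}

# Which strategy does each sequence use?
strategies = {seq["name"]: seq["tasks"][0]["properties"]["deploymentstrategy"]
              for seq in stage["sequences"]}
print(strategies)
# {'delivery': 'blue_green_service', 'delivery-direct': 'direct'}
```

Because each sequence is self-contained, a service subscribed to one sequence's events never interferes with the other's rollout.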
C
Rollback mechanism here. What we also have is the remediation file, and the remediation file is basically used when there is a remediation sequence, and the remediation sequence is triggered when an open problem comes in. So if there is a problem or an alert, let's say Prometheus sends an alert, or there's a problem ticket from Dynatrace sent to Keptn, it will start the remediation sequence. Here we have, let's say, a response time degradation, and we can map this problem type to a set of actions.
C
So in this case, there is one action that should toggle a feature. We have a human-readable name here for what should be done, the description, and then also all the values that will be sent to the action provider. The action provider will interpret that set of values.
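The mapping from problem type to actions might look like the following sketch. The action name "featuretoggle" matches the example being discussed; the flag name and value here are hypothetical:

```python
# Sketch: map a problem type to an ordered list of remediation
# actions. Flag names and values below are hypothetical examples.
remediations = {
    "Response time degradation": [
        {"action": "featuretoggle",
         "name": "Disable the promotion feature",
         "description": "Toggle the feature flag off to shed load",
         "values": {"EnablePromotion": "off"}},
    ],
}

def actions_for(problem_type):
    """Look up the ordered remediation actions for a problem type."""
    return remediations.get(problem_type, [])

first = actions_for("Response time degradation")[0]
print(first["action"])  # featuretoggle
```

The values dict is opaque to the control plane; only the action provider that executes the action needs to understand it.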
C
Just to give an example of what could be done for other remediations: the action could be scaling, and the value could be scale up, with a percentage or a value; or even a restart, with a value like a grace period for termination, these kinds of things. For the feature flags, we have the feature flag name and the property, but this is really specific to the use case.
C
But again, here we do not have the tools directly defined, so we could use Unleash or LaunchDarkly or Split.io or other feature-flag frameworks, and they would listen for this action and execute it, again with the triggered, started, and finished types of cloud events, so that Keptn knows what's going on and whether the remediation is successful. Once this action is done, Keptn can run another evaluation with the SLO-based quality gates and can then probably move on to another action (it can be a sequence of actions) or just close.
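The overall remediation control loop just described (run an action, evaluate, and either stop or try the next action, escalating if nothing works) can be sketched like this, with a stub evaluate function standing in for the SLO-based quality gate:

```python
def remediate(actions, evaluate):
    """Try actions in order; stop when an evaluation passes.

    `evaluate` stands in for the SLO-based quality gate; in the real
    system an action provider would execute each action and the
    control plane would run the evaluation.
    """
    for action in actions:
        # an action provider would execute `action` here
        if evaluate():
            return "resolved"
    return "escalate"

# Toy evaluation: fails after the first action, passes after the second.
results = iter([False, True])
outcome = remediate(["scale_up", "restart"], lambda: next(results))
print(outcome)  # resolved
```

If every action's follow-up evaluation fails, the loop falls through to escalation, matching the "close or escalate" behavior described above.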
C
It can either close the problem ticket or also escalate, if there was no action that could really solve the issue.
B
Perfect, yeah, this makes sense. This is great, and the more we can show, the better, right? The more we can use the tools, the better. Having an understanding of all these features will help us go faster at some point, after we go through the stage of initializing all the components, making sure that they can talk to each other, and then building more and more advanced demos. So yeah, this is great.
C
Yeah, I think once we have the mapping between the cloud events, we can do a lot of things. It's just that this part that you mentioned, also in the diagram, is the crucial part here. Great, great.
B
Any other questions? I don't know if he is still around. Nope, not for me. Good, any questions?
B
Yep, yeah, we will be doing that, and I will definitely be trying to document how to run Tekton and also Keptn, probably in kind, and probably I will take a look at k3d as well just to see how it works. And if you guys are using that already, I can maybe take a look and see how different it is from kind; it should be pretty similar, that's what I'm expecting. So yeah, great, great. So, Jurgen.
B
Let's move it to the Slack channel at some point and try to move these things forward. If you can share that tutorial you mentioned, I will give it a try.
B
Thank you, everyone, for joining, and let's meet again at the CDEvents meeting; I think that's happening Monday.
B
So let's talk there and try to define the next steps. In the meantime, I will be pushing the library and the events, and I will post about the Tekton side as well.