From YouTube: Scaling Delivery & Operations Automation with Keptn
Description
Events in CI/CD as explained by Andreas Grabner, with a focus on Keptn. Andreas provides a thorough review of Keptn, how Dynatrace has implemented it, and the basis of their event-driven CD.
Tracy: The charter for Events in CI/CD is to look at how events can help create CI/CD systems with a decoupled architecture, one that is easy to scale and resilient to failures. Using events can also increase automation when connecting workflows from different systems to each other, and as a result enable tracing, visibility, and auditing of the connected workflows through those events. Why this is important now is that we are beginning to see a pivot in what we call CI/CD.
Anybody who has spoken to me before knows that I like to define what CI and CD are. Continuous integration is really focused on automating a pull request and a build, and creating what we like to call a releasable candidate based on that build. Continuous delivery is the process of pushing that build through the lifecycle, including security scanning and testing, and all of the events that make up a CD pipeline.

So Events in CI/CD is really about creating an abstracted layer where we have a catalog of events that we can call. In a microservices world this becomes, in my opinion, even more critical, because not every microservice is the same, and we should be able to start generating a pipeline on the fly based on the risk level of a microservice.
He is the co-chair of the Special Interest Group on Interoperability for the CD Foundation, and he is beginning a conversation about policies: policies that different tools support, policies that create guardrails. I believe we will see a convergence between this concept of events and policies moving forward. I haven't got a date confirmed yet, but he will be our star for next month's event. It will not be a tools discussion; it really is going to be a discussion about what policies are and how these guardrails are going to move forward.

On that, I am going to stop sharing and introduce Andy Grabner. He will introduce himself, but he is, I would say, the one driving events through Keptn.
I think events are a fabulous idea, and I think Dynatrace was ahead of all of us when they started thinking about it. I believe Andy was part of that initiative, and he knows everything about Keptn, so I'm going to turn it over to you, Andy.
Andy: Thank you so much, Tracy. It's funny when you say this is all about events: yes, Keptn is event-driven, but this is also an event right now, right?

We are here at the meetup, so let's put on a good show. As Tracy said, I'm Andy and I am working for Dynatrace. I'm a self-proclaimed DevOps activist: I like to initiate and activate the excitement about DevOps in people, because I think that whole movement is really interesting and can make a lot of impact if you get the right people motivated. On the other side, I'm a DevRel for Keptn, the open source project, and I've been doing this for two years, Tracy.
As you said, about two and a half years ago we actually sat down at Dynatrace with a group of customers and colleagues and basically said: as we're all moving towards the cloud-native world, and we at Dynatrace build an observability tool, we want to integrate all these different tools. So we started building a demo of what we want the perfect end-to-end delivery pipeline to look like, and we ended up building a Jenkins pipeline that had a lot of hard-coded integrations to make everything work. And then we saw, well...
Tracy: Yes, and I can open it up so anybody can ask a question on the fly, if you'd like me to, or we can keep everybody muted until the end. It's totally up to you.
Andy: I'm happy to take questions throughout, because I think that makes it a little more interactive. We have seen so many pre-recorded sessions; we know this is live, and for people that actually dedicate the time to be here live, it's a chance for them to ask questions right away.
Tracy: So, everyone, I am allowing you to talk, even though this generally stops you from being able to do that. If you need to mute, you'll have to do that on your own. You will be opened up to chat with Andy and ask him questions as we go.
Andy: Perfect. I see there are things coming in in the chat, so I'll try to keep an eye on it, but if I'm focused on my screen, Tracy, maybe I will miss some of it. It sounds awesome. So I do have slides prepared, but I also have a demo, and I want to actually start with the demo and show you Keptn live. I encourage you to check out our just recently updated website, keptn.sh. From there you can find all the information: you can find the information on GitHub, and you can find Slack.

I encourage you to join our Slack channel at slack.keptn.sh. We also have a channel on the CNCF Slack, because we are a CNCF project. But what I have for you today is my standard Keptn instance. In my case, Keptn is installed on a Kubernetes cluster; I've chosen EKS. You will learn more later on, as I go through my presentation, about the structure of Keptn, the terminology, and the architecture behind the scenes. But the first project that I want to focus on today is this one.
It's called my keptn-07 project, and what you see here is a link to a Git repo. Internally, every time you create a project with Keptn, it creates a Git repository that is hosted within the Keptn installation; we have a so-called configuration service that is basically a Git repo. But you can also set up an upstream Git, and in my case I have set this to my keptn-07 simplenode upstream Git repo. If you want to, feel free to go to that URL.

Actually, let me post it into the chat as well; there's nothing secret about it. It contains all the configuration of my sample project in Keptn. And Keptn is really all about automating different workflows, different processes, around continuous delivery, but also around automating operational tasks.
So my main use case today is a delivery use case, but I will also show you how you can integrate different aspects of Keptn, like the quality gate capability, performance test automation, chaos test automation with quality gates, or even auto-remediation, into your existing pipelines.
In my example here, if you want to get started with Keptn, you have to create a so-called shipyard file. The shipyard basically specifies declaratively what stages you have; I have dev, staging, and prod. It also specifies what should happen within each stage, so you can specify what type of deployment strategy and what type of test strategy you want.

If something comes out of dev and passes, it will automatically be promoted into staging. If it's a warning, I have to manually approve it; if it's failing, it doesn't make it into staging. For production I have it a little different: everything that comes out of staging, whether it's in a pass or warning state, I have to manually approve. And what you also see here: in production I also have a remediation strategy. This is the whole idea of automating processes around operations, so typically remediation.
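For reference, a Keptn 0.7-era shipyard like the one Andy describes might look roughly like this. This is a minimal sketch reconstructed from his description; the stage names and strategy values follow the talk, but exact field names can differ between Keptn versions:

```yaml
# Hedged sketch of a Keptn 0.7-style shipyard file as described in the talk.
# Treat key names and values as illustrative, not copied from the demo repo.
stages:
  - name: "dev"
    deployment_strategy: "direct"        # deploy straight into dev
    test_strategy: "functional"          # quick functional check
  - name: "staging"
    deployment_strategy: "blue_green_service"
    test_strategy: "performance"         # longer JMeter performance test
    approval_strategy:
      pass: "automatic"                  # promote into staging automatically on pass
      warning: "manual"                  # ask a human on warning
  - name: "production"
    deployment_strategy: "blue_green_service"
    approval_strategy:
      pass: "manual"                     # always ask before production
      warning: "manual"
    remediation_strategy: "automated"    # enable auto-remediation workflows in prod
```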
So the only thing you need is to run keptn create project and hand it what we call the shipyard file, the definition of your delivery process. What Keptn does with it: it automatically creates a branch for every stage, dev, staging, and prod, as you can see here. If I go to my staging branch, the structure looks like this.
You have some artifacts that are considered stage-wide and can be used across services; in this case, I have some JMeter scripts that can be used by any type of service that is onboarded. But then I can also onboard a service, which I call simplenode. It's a single microservice, and in here I have some specific additional config files that the individual tools, which Keptn will orchestrate and talk to through events, can then access.

So they can say: okay, here are actually the JMeter scripts, the tests that should be used for test executions for the simplenode service in the staging environment. What else do we have here? Helm: here are my Helm charts, because I'm using Helm for deployment. I could also have Keptn reach out to other tools like Jenkins or DeployHub, or let it reach out to Spinnaker, to Harness, to any other tool, but in my case I'm just using a service that comes with Keptn natively, the helm-service.
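The per-stage branch layout he walks through is roughly the following. This is a sketch reconstructed from the narration; the actual file and folder names in the demo repository may differ:

```yaml
# Rough sketch of the staging-branch layout described above (illustrative only).
staging-branch:
  jmeter/: stage-wide JMeter scripts, usable by any onboarded service
  simplenode/:
    helm/: Helm chart consumed by the helm-service for deployment
    jmeter/: service-specific test configuration
    slo.yaml: service level objectives evaluated by the quality gate
```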
I also have SLOs in here. Service level objectives are really key, because for everything Keptn does, at any stage during delivery and also during auto-remediation, Keptn is pulling in your SLIs, your metrics, from your observability platform, whether that is Prometheus, Dynatrace, Datadog, New Relic, whatever you have. It then compares them against your objectives, calculates a score based on that, and decides what needs to happen next.
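An SLI definition for one of those providers might look roughly like the following. This is a sketch assuming the Dynatrace SLI provider; the indicator names and metric queries are illustrative, not taken from the demo:

```yaml
# Hedged sketch of an sli.yaml for a Dynatrace-backed SLI provider.
# Indicator names and metric selectors are illustrative assumptions.
spec_version: "1.0"
indicators:
  response_time_p95: "builtin:service.response.time:percentile(95)"
  error_rate: "builtin:service.errors.total.rate:avg"
  throughput: "builtin:service.requestCount.total:sum"
```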
All right, so those are the different configuration elements. Now, as I told you, what I've done is create this project just with that YAML file that we call the shipyard file, and I've onboarded a single service, which means in Keptn I immediately see dev, staging, and prod.

Earlier, in preparation for this demo, I already deployed different versions of my microservice. I've pushed version three through dev, staging, and prod, so version three is here. I can get the link to the ingress here, which means I can get direct access to my deployed application. I always say I don't get paid for building nice sample apps; this is a Node.js based app.
So what I want to show you is one way to send Keptn an event and with it trigger a workflow that I want Keptn to automate for me. In this case, and I'm kicking this off now, one of the ways is to use our CLI. With the Keptn CLI I say: Keptn, I'm sending you a new event, and it is a so-called new-artifact event. That means: for my keptn-07 project, for this particular service within that project, I have a new image for you.

What Keptn is now doing: it takes that initial event, looks at my shipyard file, figures out that the first stage in my shipyard is the dev stage, and then starts sending out events to trigger the tools that I have hooked into Keptn, which can then do the individual tasks like deployment.
Actually, first comes the configuration change, so let me look at this. I'm saying build number four. That means if I go back to my Git repo and go into dev, because this is the first stage, then one minute ago, as you can see here in the Helm chart, somebody magically changed my values.yaml file to build number four. So the first thing Keptn actually does, if you trigger that end-to-end delivery, is ask: hey, which tool can actually make the configuration change that Andy wants, like changing the tag or whatever other metadata you have?

Once this is done, the next thing happens, like the actual deployment. So I've kicked this off, as you can see here. And just to show you what else I have here: I have a Kubernetes cluster, obviously, that I've been running, and I have my Keptn installation.
Keptn runs in the keptn namespace, which is the default. You can also see, in my case, the way you plug in your individual tools to listen for these events.

You have to install what we call a Keptn service. It's basically nothing more than a proxy that receives an event from Keptn and then calls, let's say, your external tool. We also have some services built in, like the helm-service that can do Helm deploys, the jmeter-service that can run the JMeter tests, and a notification service that can send notifications to Slack.
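As a very rough illustration of that proxy pattern: in the 0.7.x line, an integration typically ran next to a small "distributor" that subscribed it to one event topic. The snippet below is a heavily simplified sketch; the image tag, environment variables, and the exact topic string are assumptions based on how Keptn services of that era were commonly wired, not copied from the demo:

```yaml
# Heavily simplified sketch of wiring a tool into Keptn 0.7.x via a distributor.
# Names, image tag, and the event topic string are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-tool-service-distributor
  namespace: keptn
spec:
  replicas: 1
  selector:
    matchLabels: {run: distributor}
  template:
    metadata:
      labels: {run: distributor}
    spec:
      containers:
        - name: distributor
          image: keptn/distributor:0.7.3                    # assumed version tag
          env:
            - name: PUBSUB_TOPIC
              value: "sh.keptn.events.deployment-finished"  # event this tool reacts to
            - name: PUBSUB_RECIPIENT
              value: "my-tool-service"                      # proxy service that calls your tool
```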
So one thing I can see here: I have my Slack integration, and I can see that at 8:14 p.m. local time here in Austria, which was like two minutes ago (yes, it is that late, that's why it's dark), I had the initial configuration change to build number four. The deployment already finished, and the tests have also already been executed; there was a performance quick test. So these are all the tools that are engaged now.
If I go back to Keptn, I have one way to look at the event stream; it's an event-driven system. If I click on services here, remember I told you that in my Keptn project I have one service called simplenode. As you can see, I do a lot of demos and I always flip between versions: one, two, three, four, then back to one, two, three, and now it's version four. What you see here is what we call the root event that was initially sent into Keptn.

I did it through the CLI. In the Keptn Bridge, which is the UI you see here, you can actually see our event format. We're using CloudEvents, and we basically use a standard convention for what types of events we have: a configuration change, a deployment, a test start, and so on. Every event then has additional data, like in this case.
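To make the event format a bit more concrete, a Keptn CloudEvent of that era looked roughly like this. It is rendered as YAML for readability; on the wire it is JSON, the IDs and timestamp are placeholders, and the exact type string and data fields are reconstructed from the talk rather than copied from the demo:

```yaml
# Hedged sketch of a Keptn 0.7-style CloudEvent (shown as YAML; real payloads are JSON).
specversion: "1.0"
type: "sh.keptn.event.configuration.change"   # convention-based event type
source: "https://github.com/keptn/keptn/cli"
id: "<uuid>"                                  # placeholder
time: "2021-02-11T20:14:00Z"                  # placeholder timestamp
contenttype: "application/json"
shkeptncontext: "<uuid>"                      # correlates all events of one delivery run
data:
  project: "keptn-07-project"
  service: "simplenode"
  stage: "dev"
  valuesCanary:
    image: "docker.io/<org>/simplenode:4.0.0" # the new artifact that was announced
```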
The data that was passed in says: Andy wants to switch to version number four for this particular project and this particular service. Keptn takes that event and then executes the workflow based on the shipyard file. For instance, the next thing was that the helm-service came in and said: hey, there's a configuration change, so I want to do the deployment. And now here I actually get to see the event that the helm-service (it could also be DeployHub) sent back.

It says: I am the helm-service, and by the way, here is the deployment URI local within the Kubernetes cluster, and here is the public URL, and so on and so forth. Then the process goes on. As you can see here, tests were executed, again via events, and SLIs were then retrieved. This is the standard process of Keptn retrieving SLIs to figure out whether this is a good build or a bad build, by reaching out to your testing tools and your monitoring tools and then deciding: good build or not a good build?
Attendee: A quick question for you: how do you set up the relationship that the test must wait for the deployment to finish?
Andy: The test does not start until the deployment tool sends an event to Keptn that the deployment has actually finished. From a sequence-flow perspective, you have a so-called configuration-changed event, which means: hey, I have a new configuration in Git, say I have updated my Helm charts. This is an event that your deployment tool would listen to, because it knows it needs to deploy or apply the changed configuration to the actual target system.

In my case Helm is doing this, so the helm-service is deploying, and the deployment tool also needs to make sure that the deployment finished. When it is finished, it sends the deployment-finished event. You can also completely change that flow if you want to, especially with our new version, Keptn 0.8, which comes out in about two to three weeks, but by default the deployment tool sends deployment-finished with the information about the publicly accessible URI for the thing that was deployed.

Then the testing tool says: okay, I'm reacting to that deployment-finished event, because now it's time to execute tests. And once the tests are done, the testing tool, in my case JMeter, sends back an event that says: hey, tests are finished.
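Summarizing that hand-off, the happy-path event sequence for one stage looks roughly like this. The type names follow the conventions Andy describes; the exact 0.7 strings may differ slightly:

```yaml
# Hedged sketch of the per-stage event hand-off described above.
- sh.keptn.event.configuration.change   # Keptn: new config in Git, deployment tool reacts
- sh.keptn.events.deployment-finished   # deployment tool: done, here is the deployed URI
- sh.keptn.events.tests-finished        # test tool (JMeter): tests executed, here is the result
- sh.keptn.events.evaluation-done       # quality gate: SLIs compared against SLOs, verdict and score
```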
All right, so this was a quick overview of how Keptn looks and feels. Let me just go back to... where is my presentation.

All right, so what is Keptn? Keptn is actually, I think, much more than what I just showed you, because it's different things for different people, depending on who you are. Keptn allows you to automate the use case that you want to automate, where you may right now do things manually. Quality gates is the number one use case we currently see with our adopters: instead of me having to look at your test reports or whatever else you have, to figure out
whether this is a good or a bad deployment after the tests run, we can automate that. We can do progressive delivery, which is what I just showed in the end-to-end use case. We also have a use case that we call SRE automation. This is all about: you have an application or service deployed, and Keptn can orchestrate the execution of a performance test, like the JMeter test I showed you. But we also work very closely with the chaos engineering community.

So while the load tests are running, you can also inject chaos and then validate again whether the system is behaving within your defined service level objectives even under chaotic conditions. And the last use case we see is auto-remediation. If a problem comes in from your monitoring tool, let's say you get a Prometheus alert, Keptn can then execute not a delivery workflow with deploy, test, evaluate, but a remediation workflow, which would be: execute this action, then evaluate whether it solved the problem; if not, execute the next one.
If that didn't solve it, maybe then send a Slack message to inform somebody. So you pick the use case; every use case has its own configuration. You already saw the shipyard file earlier, the SLIs and SLOs, workloads for load testing, runbooks for remediation. You bring this to Keptn, into the config repo. And then the beauty of it, and I think this is also where Tracy got really excited when she heard about Keptn, is that we connect to your tools through a standard protocol, which means you bring the tools that make the most sense for your environment, because every microservice is different and every environment is different.

Also, while Keptn itself runs on Kubernetes, it can trigger all of these different tools that do things in any type of environment. We have a lot of folks that are currently triggering Jenkins pipelines as part of the delivery, deploying some Java enterprise apps somewhere. We are not constrained to Kubernetes; this is also important.
So what Keptn really does: it automates the configuration of your tools by providing them the specification that you brought in, and it orchestrates these processes around the use case that you picked. It does all of this in a declarative way: everything is stored in Git, everything is version-controlled. SLOs are core to everything we do; I will get to this in more detail in a second. And, very important, bottom right: standards.

The communication protocol between Keptn and all these different tools is based on CloudEvents, and we're very excited that the special interest group now exists, where we can bring in our thoughts and then hopefully we all, as an industry, come up with a standard. Then we can finally get rid of all these custom integrations that we have all built and have to maintain.
As I said, this is Keptn, and currently we have a lot of different Keptn users that are using it for different things. Sumit at Intuit is a principal engineer, and he's using Keptn in combination with Argo, Gatling, and Jenkins for large-scale distributed performance testing. He's also very heavily involved with chaos engineering, but basically what they are using it for is automated quality gates.
So we used Jenkins, and these pipelines... well, Jenkins is a great tool, I'm not bashing on it, but the way we used it, we basically had mixed information about what delivery process we have, the target platforms, the environment. Everything was hard-coded and there was no clear separation of concerns. So we asked: how can we solve this?

Because this was not maintainable: we ended up with many different copies of that same pipeline, just with small permutations. Then we started building our own libraries to extract things to a higher level, and again, Jenkins can do a lot of things, but we ended up spending a lot of time on it.
So we said the real solution would be to look at this process that we typically build into a pipeline and, just as we are breaking up monolithic applications into smaller pieces and having something that orchestrates the business process, ask: why not look at the process, look at the individual steps, and then at the individual tools, what capabilities do these tools actually provide, and then separate them, rip them apart, and use eventing to connect them? And this is exactly what we've built.
We allow you to specify the process on the left, and we allow you to specify which capabilities you have on the right, so which tools provide which capability. Then, as Keptn orchestrates the process, it sends out the event with the right metadata, like: hey, I need somebody that can deploy container number one into the dev stage with blue/green. You may have one or even more tools that can do this, and they can say: hey Keptn, yes, I can do it, and send a started event, and then Keptn also knows whom to wait for.

That is something new we're building in now, to better control the flow and also better support multiple capabilities on the right side, tools that can participate in the workflow. But in the end it's a very simple process on the left, and the consumers of the events, I'd almost like to call them capabilities, are the tools on the right side.
So this drove our architecture. If you install Keptn, as I showed you earlier on my Kubernetes cluster, you get the Keptn control plane, the core component that you install, which also comes with the whole eventing piece. If you want to use Keptn, you then do exactly what I showed you: you create a project and specify the workflow. There are two types of workflows. One is around CI/CD, or let's say continuous delivery, not CI, continuous delivery.

This is where you specify the shipyard file and define what kind of process you want to automate in delivery. Or you can use Keptn for auto-remediation. That is where you specify a remediation file and say: Keptn, I want you to listen to alerts from the monitoring tool, and depending on the alert that comes in, I want you to send the events that trigger these types of actions.
So you define your process in your application plane, and then on the other side, maybe even different people in your organization specify the tools in the so-called execution plane. And this is what's new: the current version that I'm showing is Keptn 0.7.3, the latest released version, and in this one the control plane and execution plane live together on one Kubernetes cluster. With 0.8, which is currently in alpha, you can install the control plane on one cluster and your execution plane on different clusters, allowing true multi-cluster, multi-target, multi-cloud processes, whether that is around deployment or testing; that's up to you.

The execution plane is basically what I showed you earlier: this is where you plug in your individual capabilities, your tools. They listen to the events and act upon them as Keptn sends them. An event, or a workflow, is triggered by, let's say, a developer
who says: I have a new artifact, and sends the event like I did earlier with my Keptn CLI. Keptn then orchestrates that process, sending out events, waiting for the responses from these tools, and depending on that, making a decision on how to go forward. The nice thing about this is that you have a clear separation between process and tooling. You can always change the process without breaking any hard-coded integrations, because there are no hard-coded integrations with the tools, and you can also change and swap the tooling at the bottom without having to rethink the process.
Tracy: Andy, in terms of creating industry standards, the control plane is what's always gotten my attention. In this architecture, in terms of the events SIG, where do you think they should be focusing some of their standards?
Andy: Exactly here, on the events. Basically, if you think back to my Keptn Bridge, I think the events that need to be specified and standardized are exactly those that you saw there.

That's what we should standardize: we should come up with a standard set of events. What types of events do we even want? What do we need? We need a configuration-change event, a deployment event, a test event, an evaluation event, a promotion event, a problem event, a remediation event, and so on. There are different events
that we currently have for the different activities in your workflows. In Keptn, because we've been doing this for a year and a half or two years now, we came up with different types of events, all the specs that we have. And we have folks like Andreas Krima and some others in the CDF, and hopefully also in the standardization group. But here, let me also put this up.

This is where we are documenting all of our events, the Keptn CloudEvents. Everything is based on CloudEvents, but then we have our own definition of, for instance, a deployment event: what information is in there. I think this should be taken into consideration when the SIG is now discussing which events should be standardized and what should be part of the payload. What do we need, right?
Attendee: Yes, I have another question. How does Keptn handle... well, let me back up a little bit. A use case we have is a customer with 17 QA environments for a single project, and within each one of those QA environments there are multiple clusters. How is that data represented in Keptn, or in the Git repo that Keptn is working from?
Andy: That is a good question. Right now, just hearing what you said: we have the concept of stages here. Let me just go back to my shipyard file.

Exactly, each branch is a stage. In my current version here, Keptn 0.7, we have a very opinionated way of what happens within a stage: a configuration change, a deploy, a test, then an evaluation, and then either a promote or a reject, depending on that result. With Keptn 0.8, the next one, you can actually specify individual sequences within a stage, where you can break out of our very opinionated way. So I see two options for your scenario.
You could say you have a single QA stage, but within that you have, we call them sequences: one sequence that first does the QA1 testing, then the QA2 testing, then QA3 testing, and you can actually specify these sequences. Let me just see if I can quickly find the spec here.

So that would be one option. The other option would really be, as I said, that you have just one stage for all of QA, and in there you have test-qa1, test-qa2, and so on. Or, actually, you have multiple stages, sorry: you have QA1, and then another stage that would be QA2. I think that's a different approach; we probably need to look into what makes the most sense.
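For the first option, a Keptn 0.8-style shipyard with explicit sequences inside one QA stage could look roughly like this. Keptn 0.8 was still in alpha at the time of this talk, so this is only a sketch, and the sequence names, task list, and trigger event are assumptions:

```yaml
# Hedged sketch of a Keptn 0.8-style shipyard with per-stage sequences.
# 0.8 was in alpha at the time of the talk; names below are illustrative.
apiVersion: "spec.keptn.sh/0.2.0"
kind: Shipyard
metadata:
  name: shipyard-multi-qa
spec:
  stages:
    - name: qa
      sequences:
        - name: qa1-tests
          tasks:
            - name: deployment
            - name: test
              properties:
                teststrategy: functional
            - name: evaluation
        - name: qa2-tests
          triggeredOn:                      # chained: runs after qa1-tests finishes
            - event: qa.qa1-tests.finished
          tasks:
            - name: test
            - name: evaluation
```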
Attendee: And then, this goes more along the deployment side than the test side, though I guess it would apply to tests too: what if you have multiple clusters in each QA stage, and they each have unique data that needs to be passed to that cluster?
Andy: The way this works: if you have different clusters, you basically install the execution plane on those clusters, and the execution plane is basically saying, hey, I am cluster ABC and I'm interested in events that have certain metadata.

Exactly, the remote cluster. You have a central control plane, the control plane is centralized, and you would have multiple execution planes, and then it's a polling mechanism. That means, if you have ten different clusters, they would basically say: hey Keptn, do you have anything new for me? And "new for me" means: do you have anything that needs to be done on this particular cluster, in this particular project or environment? So yes, that's absolutely the way it works.
Let's say, for your staging stage, if that would be your QA, then for every service, let's take JMeter as a good example: in here you can specify configuration files that are accessible by the remote tool that runs on that remote cluster, which can pull in, let's say, your test files.

This is the way we have currently done it with JMeter, and it can be done with other tools as well. You basically specify here what should actually be executed: what type of test, with how many virtual users, and so on, depending on what we call the test strategy. And the test strategy is nothing else than another piece of metadata on these events. So when you're sending an event and you say, first I want to execute my, let's say, performance quick tests, then Keptn sends an event and says: who can run performance quick tests? Your listener in that environment says: yes, I can do it, and based on that description I also know that I need to execute this particular script with this number of virtual users and loop count and so on. It's all metadata that comes as part of these events, as you can see here with "performance quick".
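The per-service JMeter configuration he refers to is roughly a workloads file keyed by test strategy, something like the sketch below. Script paths, user counts, and the exact file name are assumptions, not read from the demo repo:

```yaml
# Hedged sketch of a jmeter-service workload configuration keyed by test strategy.
# Script names and load values are illustrative assumptions.
spec_version: "0.1.0"
workloads:
  - teststrategy: functional          # quick smoke check, e.g. in dev
    vuser: 1
    loopcount: 1
    script: jmeter/basiccheck.jmx
    acceptederrorrate: 1.0
  - teststrategy: performance         # longer load test, e.g. in staging
    vuser: 10
    loopcount: 500
    script: jmeter/load.jmx
    acceptederrorrate: 1.0
```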
The sequences are not run in parallel right now. Right now, a sequence executes its individual tasks, and once the sequence is finished, either successfully or failed, depending on the outcome of those tasks, it triggers other sequences that are waiting for that trigger, which means you can do some type of parallelization.

Say you run your dev and your staging with all your testing, and then you have five production environments, and you want to handle them all individually, but in parallel, maybe with some specific notifications. Then yes, these processes would run in parallel, because they all get triggered by the same sequence-finished event, when the testing sequence in the testing stage finishes. We have, for instance, one of our customers, or users,
who are using this for doing multi-stage delivery at the same time. They have their QA pipeline, and once that is done, they know they can roll it out to their different SaaS environments, to the different deployments they have for different end users, and Keptn does this for them in parallel, even though for every single production sequence they engage with what you see here, a so-called approval process.

So if I look at this, I have dev, staging, and prod, but what they have is prod one, two, three, four, five, and six. The process continues; basically there are then five parallel streams going on. But what I have done here is keep a manual interaction, and a human decides: I know it's good to go, but do I really want to do this, yes or no?
I hope this makes sense. I know it's a little hard to draw with my fingers in the air, but it would be something like: there's another prod here, and another prod here, and another prod here, and when this one finishes, it triggers this one, this one, and this one at the same time. But within prod itself, whether that is prod one, two, three, or four, you can then have a sequence that first, let's say, includes a Slack notification or asks a human: hey, do you really want to approve this into production?
Attendee: One other question I have, and I don't know if you have a hierarchy, but let's say that after the build of a container I want to insert a new event to do security scanning, and I want to do that across all my projects in my organization.
Andy: Good. Now, where did I stop? Let me think about this. I stopped at explaining the architecture and how all of this looks. Let me go back to the demo that I kicked off earlier. As you remember, we've been here: when I kicked off build number four, on the right side you see the whole chain of sequences. Dev was there, then it got into staging, and in staging the configuration change happened and the deployment happened.
That means build number four is now also in staging, which is great. Then what happened? A longer performance test ran. Actually, first the helm-service deployed, then a performance test executed with the test strategy "performance peak"; that test ran about four minutes, from 20:20 to 20:24. Then SLI retrieval started again, and in this case you can see the SLIs. The SLI and SLO validation, which is a key component of Keptn, is now much more extensive than it was in dev.

In dev we only had a simple check that basically said: did JMeter execute, and did JMeter report back that everything is good? If so, the build was considered good. However, in staging we've expanded this. You can see here there are a lot of, we call them SLIs, service level indicators, that Keptn retrieves from my observability tool, and then it compares every single SLI against my objectives. Where does this data come from? In my case it comes from my observability platform, which is Dynatrace.
I've created a simple dashboard that says I'm interested in response time. I'm a performance engineer, so I'm definitely interested in response time: p95, the 95th percentile, the 90th, and the 50th. We also have a capability here to look at response times split by individual test cases. My JMeter script exercises a version call, a home page, an echo, and an invoke.

If you remember my sample app, these are the echo and the invoke endpoints, so my JMeter script is simulating all these different use cases. Now in Dynatrace I have response time and failure rate, as well as the number of external or back-end service calls, as a metric per test case. I have the overall transaction failure rate, I also have some process CPU metrics, and so on and so forth. And what I've also done: I have augmented the dashboard, as you can see here if I zoom in a little bit.
I basically said: this should be an SLI. While this dashboard looks nice, and normally I might look at it to validate whether the build is good, whether the data from the time frame when the test ran looks good, I can now say: I don't want to do this manually, I want Keptn to do it. This is why I can specify pass criteria, and I can do a combination of a fixed value, like it should be faster than 600 milliseconds, and a relative value.

Plus 10 means it compares with the previous good build: it will pass if it is within plus 10 percent of the previous baseline, and it should also be faster than 600 milliseconds. So I can actually combine things in the pass criteria, and if I scroll to the right here, I can also specify separate pass and warning criteria.
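Expressed as an SLO file, the criteria he just described (an absolute 600 ms limit combined with a relative +10% comparison against the last good build, plus a separate warning threshold) would look roughly like this. The indicator name, warning value, and the key_sli flag are illustrative assumptions:

```yaml
# Hedged sketch of an slo.yaml expressing the pass/warning criteria described above.
spec_version: "1.0"
comparison:
  compare_with: "single_result"        # compare against the previous good evaluation
  include_result_with_score: "pass"
  aggregate_function: "avg"
objectives:
  - sli: response_time_p95
    key_sli: true                      # if this one fails, the whole evaluation fails
    pass:
      - criteria:
          - "<=+10%"                   # no more than 10% slower than the last good build
          - "<600"                     # and always faster than 600 ms
    warning:
      - criteria:
          - "<=800"
total_score:
  pass: "90%"
  warning: "75%"
```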
If one of those fails, regardless of how good the other metrics are, it will fail the build. So if I go back to my Keptn Bridge, every single row here is basically one of these metrics that were pulled from my dashboard, and the green, yellow, and red depend on how the evaluation went. The top row is the overall score, and it seems in this case I achieved 100, all the points, which is great. I can also see the individual results as values.

Obviously I can also chart them, so I can see trends over time. But the thing is, it automatically makes the decision for me, and now it tells me: hey, guess what, we have a new build for you that would be ready to go, but it's currently waiting, because you've specified
that in order to enter production, the approval strategy is always manual; don't let anything be approved into prod automatically. So I'm always asked, and this is why I get the question here. I either get the question here or, if I click to this view where I have dev, staging, and prod, I see version three is still in prod. That means if I open up that link, the current version is still build number three, but Keptn tells me there's a new version available that achieved 100 points.

And again, this works whether it is done by the helm-service or by some other deployment tool. As long as the tools you have provide this capability, you can use Keptn to orchestrate this type of process. All right. Which means, let me... I know, Tracy, we've almost used one hour and I do have a couple of slides left, but I want to jump ahead, because I've explained a lot of things already, especially around the SLIs and SLOs.
Okay, cool. One thing that I really want to highlight, and this is what I like about Keptn (sorry, that was one slide too far): I just showed you a key component of Keptn, which is the quality gate evaluation based on SLOs. Whether you use Dynatrace or Prometheus, Keptn can pull in the data from different data providers. But the power of this is that the quality gate capability is not only used in Keptn as part of the whole end-to-end delivery workflow.

You can also use it individually, because we don't want to push Keptn on you and say: now you have to use Keptn end-to-end and connect all of your different tools at once. We also say, hey, like Christian: they have GitLab as their pipeline, where they do build, deploy, and test, and then in the verify stage they simply call Keptn and say: hey, the first thing I want to model,
what I want to automate with Keptn, is the process of evaluating my quality gates. So you can also use Keptn just for this particular process. The way this looks: I have another project here that is just called qualitygate. In this case, my entry point that kicks off processing in Keptn is not the configuration change; it is the so-called evaluation-started event. The overall end-to-end process in this case is much smaller.

It says: I want you to do an evaluation for this Keptn project, this service, and this stage, for this time frame between start and end. Keptn then does just this piece: it reaches out to the monitoring tool that is configured and listening, in this case Dynatrace, and pulls in those metrics. This is now just a different dashboard.
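The trigger for such a standalone evaluation carries little more than the project, service, stage, and time frame, roughly as sketched below. The field names are reconstructed from the talk and the service, stage, and timestamps are placeholder assumptions; the exact 0.7 event type string may differ:

```yaml
# Hedged sketch of the data behind a standalone quality-gate evaluation trigger,
# e.g. sent from a GitLab "verify" job via the Keptn CLI or API. Shown as YAML.
type: "sh.keptn.event.start-evaluation"
data:
  project: "qualitygate"
  service: "my-service"           # assumption: service name for illustration
  stage: "hardening"              # assumption: stage name for illustration
  start: "2021-02-11T20:00:00Z"   # placeholder time frame to evaluate
  end: "2021-02-11T20:05:00Z"
```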
What I wanted to show you is that you can plug in whatever tool you want and just use the quality gate capability on its own; that's what I want to say. From there you can maybe go into quality gates plus testing, and then maybe the whole delivery process; that's also possible. Good. Last, before I drive it home, before I finish: the last use case, which I really like, is data-driven operations.
I've talked about this already, but one use case is: if you have your production monitoring and alerting, and you get a problem reported by, let's say, a Prometheus alert, Keptn can also execute and orchestrate a so-called remediation workflow. What you see on the left now is no longer the shipyard file, which specifies stages and the actions or sequences within them; here you specify the so-called remediation workflow.

In this case, Keptn sees there's a "conversion rate dropped" problem coming in, and the user has specified: hey, if a conversion-rate-dropped event comes in, the first thing I would do is scale up. What Keptn does: it's not scaling up itself, it sends out an event called scaling, which triggers whatever tool is listening for it. And the important thing here is that we also integrated our quality gates: for auto-remediation it is exactly as important to validate how the system is doing.
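A remediation file for that "conversion rate dropped, then scale up" example would look roughly like this, following the 0.7-era remediation spec. The problem-type string and action values are assumptions based on the description, not copied from the demo:

```yaml
# Hedged sketch of a Keptn 0.7-era remediation definition for the example above.
apiVersion: spec.keptn.sh/0.1.4
kind: Remediation
metadata:
  name: simplenode-remediation
spec:
  remediations:
    - problemType: "Conversion rate dropped"
      actionsOnOpen:
        - action: scaling            # Keptn emits a "scaling" action event...
          name: "Scale up"
          description: "Scale up by one replica"
          value: "1"                 # ...and whichever tool listens for it scales the workload
```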
So: execute an action, then validate. If it's good, great, remediation done. If not, the next action is executed and then re-evaluated again. If good, awesome, problem solved; if not, you have the option to escalate, let's say to a human being. So you can automate remediation through Keptn workflows. And while this is great and a lot of people like it, a lot of people are also scared of it: why would you trust this automation in production?

This is where I believe we have such a great opportunity to work with the chaos engineering community, because we can do what I now like to call test-driven operations, meaning using Keptn to run load but also chaos; we can do both, with integrations with Litmus and with Gremlin. So Keptn can run load and enforce chaos, which allows you to validate that your alerting works correctly. And then, if your alerting, your monitoring, sends an alert,
you can then use Keptn to start defining, refining, and battle-testing your remediation actions. Basically, what you're doing here is testing and validating that the application you've currently deployed in a QA or test environment, put under stress and under chaos, can bring itself back to a healthy state even in chaotic situations. The nice thing is that you can test this as part of a delivery pipeline, and if it withstands the test, it means: hey, not only is it good to deploy the artifact you've been testing into production, you should also promote that remediation workflow into production, because if chaos strikes in production, you now know that this has been tested and works.

So, to wrap it up: hopefully it is clearer now what Keptn is. It automates different use cases through configuration, by connecting your tools through events. At the core we have SLOs; everything Keptn does gets evaluated. We have some tutorials to get started with, you can go to tutorials.keptn.sh, and there are more things you can find on our website and on Git. I think we still have a lot of things ahead with the SIG here on eventing, and Tracy, it's really great that you are driving this.
Tracy: Andy, that was really an amazing presentation. So, everyone, Andy is not just a great presenter and educator; he is also part of the PurePerformance podcast. He and his cohost Brian Wilson run it, and on their last episode they had Dave Farley, and it was a really good podcast. I am going to post that in the chat in case anybody is interested in following Andy and Brian on their podcast, because there's some pretty interesting information that goes on there.

And I did post the Zoom link to the next event, the SIG for Events in CI/CD, which will be March 1st. It is at 8 a.m. Pacific time, 11 Eastern, and I have put the Zoom link there if you're interested in getting involved in that group. I know that we are all pretty excited about it, and the more minds, the better.
All right, I will take silence as a cue to end the session. Andy, thank you. We really appreciate you joining us today, and I look forward to having a lot more conversations around this topic in the future. Yeah.