Description
In this video, we show you the metrics and traces generated by the Lifecycle Controller.
Join us in Slack: https://cloud-native.slack.com #keptn-lifecycle-controller-dev
Star us on GitHub: https://github.com/keptn-sandbox/lifecycle-controller
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
Hello everyone, today we're very proud to present to you the progress we made with the Keptn Lifecycle Controller in version 0.2.0, which is mainly focused on telemetry data in the Lifecycle Controller. I'm Giovanni, and Florian will show you what we did in this release. So, the first question to Florian: what do you like most about the Keptn Lifecycle Controller at the moment?
B
That's a tough question because there are so many things, but right now I'm very excited about OpenTelemetry being a first-class citizen, and therefore us being able to provide great possibilities in terms of tracing and monitoring. I'm looking forward to demonstrating that today.
C
So I want to build on top of your last video, where you showed how the Lifecycle Controller works. We see a manifest being applied, the Lifecycle Controller takes over to run some pre-deployment tasks, then the deployment takes place, and afterwards some post-deployment tasks. All of this is monitored with metrics and traces from OpenTelemetry. This data is then ingested into Jaeger, to see the spans and their relationships, and the metrics go into Prometheus.
B
All right, thank you, Giovanni. Yeah! So, first of all, if you decide to try all of this out yourself after watching this video, the installation of the lifecycle operator is very easy. You basically just need to apply two manifests. One is for cert-manager, which is a requirement for the operator to run, and the second one is the manifest for the operator and our scheduler implementation. And then, if you would like to follow the demo and try it out yourself, we have provided a tutorial for that in our repository.
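The two-manifest installation described here could look roughly like this (a minimal sketch; the exact URLs and version numbers are illustrative assumptions, check the project README for the current ones):

```shell
# Step 1: install cert-manager, a prerequisite for the operator's webhooks.
# (Version is illustrative; use the one the README recommends.)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.0/cert-manager.yaml

# Step 2: install the lifecycle operator and scheduler.
# (Release URL is an assumption; see the project's releases page.)
kubectl apply -f https://github.com/keptn-sandbox/lifecycle-controller/releases/latest/download/manifest.yaml
```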
B
So all you need to do is clone the repository, change into the directory that we have listed here, this observability directory, and execute `make install`. We have also provided an extensive README where you can read about the general concepts used in this tutorial and how everything works together. Okay, with that being said, let's jump into the actual demo.
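The setup steps just described could be sketched as follows (the directory path is an assumption; the tutorial README in the repository has the exact one):

```shell
# Clone the repository and run the observability tutorial.
git clone https://github.com/keptn-sandbox/lifecycle-controller.git
cd lifecycle-controller/examples/observability   # illustrative path
make install
```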
B
So, what do we have here on my screen? You should now see the Lens Kubernetes IDE, and I'm in our single-service directory, where we have the following files. First of all, we have our deployment demo containing the deployment that we would like to have managed by the workload controller, by the lifecycle controller, and then we have a deployment task which should be executed. Let's just quickly recap and see what's in this deployment. So here we deploy a simple service with the name test, and here you see the Keptn annotations.
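The Keptn annotations on such a deployment could look roughly like this (a sketch; the app, workload, and task names here are illustrative, and the annotation keys should be checked against the project docs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
      annotations:
        # Annotations the lifecycle controller's webhook looks for
        # (names are illustrative):
        keptn.sh/app: waiter
        keptn.sh/workload: waiter
        keptn.sh/version: "2.0"
        keptn.sh/pre-deployment-tasks: pre-deployment-hello
    spec:
      containers:
        - name: test
          image: nginx   # placeholder image
```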
B
It's part of the waiter app, and the pre-deployment task that should be executed is the pre-deployment-hello task. All right, and at the top part of my screen, you see the Keptn workload instances that will be created by the lifecycle controller. So far there is nothing, but now I'm going to apply everything that I have in this directory, and very soon you should see a workload instance popping up here. See, here...
B
It already is. So it's our waiter workload of the waiter app in version 2.0, and we can already see that the pre-deployment phase has succeeded. Now it's waiting for the deployment to be finished. So if we switch to the pods here, you should see the test pod for the deployment being in the pending state, and very soon it should be up and running.
B
So now the init container seems to be ready, and soon the actual app should be running in our namespace. Yeah, there it is, and as you can see here, our post-deployment check did kick in. So now we have the pod running that should execute the post-deployment check. All right.
B
This is the example that you might have already seen in one of Thomas's earlier recordings, but now let's jump into the observability part of all this. So, first of all, we are going to inspect the traces that we have generated during this deployment.
B
So the entry point in this case is the webhook part, because the webhook of the lifecycle controller is registered to be called for all pods that should be deployed in that namespace, and this one is our initial entry point. You can see here 21 spans. So what this will now allow you to do is to get observability and a better comprehension of how all the components in the lifecycle controller work together. So, as I said, we have the annotate-pod span.
B
That is the root for this complete workflow. Then the webhook will create the workload and, as you can see here, this is all nicely tagged with the name of the app, the version of the deployment, the name of the workload, and so on, and this will give you better observability of what's going on.
B
Also, during the workflow you might see some errors. But first of all, the nice thing about operators is that, even though there might be some unexpected error in one of the reconcile loops, it will pick all of this up in the next iteration, and eventually the whole system should go into the desired state, as it indeed did, as we saw earlier.
B
Also, if there is some error, this is exactly what introducing traces is all about as well, because it will make it much easier for you to see what might have gone wrong during one of your deployments.
B
All right, so that's the tracing part, but we're not completely done yet, because we also promised you some metrics. The lifecycle operator sends all traces and metrics to the OpenTelemetry collector, which can then be used as a central data point for all your observability data, and that means that, in addition to the traces that we just talked about, we will also get some metrics that can be ingested by Prometheus, for example.
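A collector pipeline like the one described, fanning OTLP data out to Jaeger and Prometheus, could look roughly like this (a sketch; the endpoints are assumptions, not the tutorial's actual config):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  jaeger:
    endpoint: jaeger-collector:14250   # illustrative endpoint
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889             # scraped by Prometheus

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```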
B
So, as you can see here on this metrics endpoint, we have several metrics that you can make use of, like the active-deployment counter, the deployment-duration histogram, the active-task counter, and then also metrics for finished deployments and finished tasks. But as you can see, this is a little bit hard to read. So why don't we just go into Grafana, where we can nicely import and visualize these metrics that we see here. So let's do that and adjust the time frame.
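The raw endpoint output being described could look roughly like this in the Prometheus exposition format (the metric names and values here are illustrative assumptions, not the controller's actual output):

```
# HELP keptn_deployment_active number of deployments currently in progress
# TYPE keptn_deployment_active gauge
keptn_deployment_active 0
# HELP keptn_deployment_count total number of finished deployments
# TYPE keptn_deployment_count counter
keptn_deployment_count 3
# HELP keptn_deployment_duration_seconds duration of completed deployments
# TYPE keptn_deployment_duration_seconds histogram
keptn_deployment_duration_seconds_sum 42.7
keptn_deployment_duration_seconds_count 3
```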
B
So what you see here is a dashboard containing time series for these metrics that I just talked about. First of all, we have the active tasks. Apparently, we do not see anything here, and that might be because the granularity is a little bit too rough in that case, because the post-deployment task finished very quickly.
B
But what you can still see here is that the number of finished tasks has increased by one. Earlier, before starting the demo, I tried it out and had already completed the tasks. That would explain the initial value of two, but during the course of the demo this was increased by one.
B
We also have the same kind of metrics for the deployments, the deployment being the test deployment that we showed you earlier. And here the granularity was fine enough, because the complete workflow took a couple of seconds, with all the pre- and post-deployment checks being reconciled and the pod being scheduled. And, as you can see here, during the time that the deployment took, the number of active deployments in the running state was increased to one and, once that finished, went back down to zero.
B
So this is a nice metric to view the activity that's currently going on in your cluster. And eventually, once that has finished, the number of finished deployments increased to three; in that case, that's because I tried it out earlier, as I said. All right, I hope this gave you a nice overview of how you can use all this telemetry data in your cluster.
B
So, what have we seen? We have seen the use of OpenTelemetry data, and maybe to highlight this again: even though we have shown you how to make use of Jaeger, Prometheus, and Grafana in this case, you are not necessarily bound to using only these particular tools. This can nicely be integrated with any other tool that supports OpenTelemetry integration, because we send all the observability data of the lifecycle controller to the OpenTelemetry collector, which can then act as a central point of data. All right.
B
Yes, as we have shown you with, for example, the Grafana dashboards, you have a quick and easy way of visualizing the current activity in your cluster and, for example, of getting a good impression of the frequency and the velocity of your deployments. And with that, I would like to hand back over to Thomas.
A
So, regarding the lifecycle controller, we often hear certain kinds of questions: what the lifecycle controller is about, why we are doing this, and how it compares to other tools. We got two questions in the last week, and one of them was: how can the lifecycle controller be used with Keptn? So, which of you wants to take this one?
B
Yeah, I would like to take that. So, that's a great question, and of course we kept that in the back of our minds from the beginning, when we started to develop the lifecycle controller. One way to do this would be to make use of the task definitions.
B
Those task definitions, which Thomas already showed you in one of the earlier recordings, are very flexible and easy to set up, because they provide a convenient way of executing Node.js scripts, and within such a script you could easily make use of the Keptn API in order to trigger pretty much any sequence. So, for example, in the post-deployment checks, you could trigger a Keptn evaluation sequence and then immediately make use of all of the quality-gate and evaluation capabilities that Keptn provides.
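A task definition along these lines could look roughly like this (a sketch; the CRD version, field names, endpoint, and event payload shown are assumptions, check the project docs for the real shapes):

```yaml
apiVersion: lifecycle.keptn.sh/v1alpha1
kind: KeptnTaskDefinition
metadata:
  name: trigger-evaluation
spec:
  function:
    inline:
      code: |
        // Illustrative: call the Keptn API to trigger an evaluation sequence.
        let resp = await fetch("https://keptn.example.com/api/v1/event", {
          method: "POST",
          headers: {
            "x-token": "<api-token>",          // placeholder
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            type: "sh.keptn.event.evaluation.triggered",
            // project, stage and service data would go here
          }),
        });
        console.log(resp.status);
```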
A
Okay, thank you. Thank you, Florian. And Giovanni, I think you dealt a bit with how this could work with Argo CD. Could you give us more insights into this?
C
Yes, we worked hard to make everything around the lifecycle controller work automatically with any possible tool that you might use for your deployment. It could be Argo CD, it can be Flux, it can be directly a Helm install or a kubectl apply, as we saw in the demo.
C
The cool thing about the Keptn lifecycle controller is that it is agnostic of which tool is carrying out your deployment. So the lifecycle controller can take place before Argo kicks in and actually deploys your application, running some pre-deployment tasks, and also, after the deployment is done by Argo, we can then trigger the Keptn post-deployment tasks.
A
Okay, thank you also for this. Yes, thanks to you both for joining me here at today's maintainers' talk. And for you, sitting in front of your computer, wanting to get in touch with us, and I'm sure you will...
A
There are some things you could do to get in touch with us. First of all, everything you saw here is in our repository in the keptn-sandbox, the lifecycle-controller prototype. And there is a working group which is discussing all of these things, such as what an application lifecycle should look like in a cloud-native environment, and there is also a repository for this.
A
You can also share your thoughts regarding all of this and tell us what you would like to see in the lifecycle controller. For that, we have a Slack channel in the CNCF Slack, which is called #keptn-app-lifecycle-working-group. And, last but not least, you can meet all of us on Slack and discuss things or features with us in the channel #keptn-lifecycle-controller-dev, which is also in the CNCF Slack. And, as in every open-source project...
A
We are very, very happy to have you as a contributor, to see your pull requests, to see your comments, and to have you in our community, and I'm sure every one of us is very happy to accept pull requests, so feel free to reach out to us. Thank you for your time and for having us here, and we hope to see you soon. Thank you. Bye-bye.