From YouTube: Keptn Metrics - March 2023
Description
Learn more: https://lifecycle.keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn-lifecycle-toolkit
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
Thomas: Hello everyone, and welcome to this Keptn maintainers session. Today we'll talk about the metrics operator. My name is Thomas. In the last few weeks we created some things around metrics which could make your life easier, and we wanted to show you what's in there.

So first of all, what's the problem with metrics in Kubernetes? Many of the components we have in Kubernetes need some metrics — such as the Kubernetes HPA, Argo Rollouts, KEDA, and so on.

This code needs maintenance, and what we thought as the Keptn team was: what if there were a single metrics provider in the Kubernetes environment, so that you only install one component in your Kubernetes cluster, which provides the metrics to you, which also has all of the observability providers in there, and you can simply query it? Brian will now show us how this works.
Brian: Right, I'll be glad to. So recently we introduced our new Keptn metrics operator. The way this works is that you define the metrics that you want to observe as custom resources, which will then be reconciled by the operator. You will find an example of these custom resources later in the demo.
Basically, you tell it which provider you want to get those metrics from, and what query to use to fetch the metrics from the respective provider. The provider points to an observability platform; the ones we currently support in the metrics operator are Prometheus and Dynatrace. And then, once the KeptnMetric has been applied, the metrics operator will go to the defined observability platform and retrieve the metric values.
The retrieval uses the query that you provided. Once the operator has a value, it updates the status of the KeptnMetric custom resource, and you can inspect the value it got. In addition to that, the metrics operator registers itself with the Kubernetes custom metrics API, which means you can make use of those metrics — for example, in the Kubernetes horizontal pod autoscaler, by referring to the metric objects in your HPA definition.
So we are now in Visual Studio Code, and what do we see here? First of all, we have the deployment of our PodTato Head sample application. That's the demo application we usually use when showcasing things in the Keptn lifecycle toolkit — a very simple microservice-based application that we're now going to deploy.
So this will be deployed into the podtato-kubectl namespace, which is the same namespace where our actual application lives, and we want to get our values from Dynatrace using the following query, which represents the CPU throttling for the pods that we are running for this application.
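The metric and its provider are defined as plain Kubernetes manifests. As a rough sketch of what this might look like (the API version, secret name, and query string below are assumptions based on the demo, not the exact manifests shown on screen):

```yaml
# Sketch of a metrics provider plus a metric custom resource.
apiVersion: metrics.keptn.sh/v1alpha2
kind: KeptnMetricsProvider
metadata:
  name: dynatrace
  namespace: podtato-kubectl
spec:
  targetServer: "https://<tenant>.live.dynatrace.com"  # placeholder tenant URL
  secretKeyRef:
    name: dt-api-token        # hypothetical secret holding the API token
    key: DT_TOKEN
---
apiVersion: metrics.keptn.sh/v1alpha2
kind: KeptnMetric
metadata:
  name: cpu-throttling
  namespace: podtato-kubectl
spec:
  provider:
    name: dynatrace           # must match the provider defined above
  query: "<your CPU throttling query>"  # the demo's actual query is not reproduced here
  fetchIntervalSeconds: 10
```

Applying both manifests with `kubectl apply -f` is all the operator needs to start fetching values.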
All right, so now our metric called cpu-throttling is created, and to see if everything works, let's go back to Lens. Here in the custom resources section you will see the KeptnMetric, which is part of the metrics.keptn.sh API group, and if we click here, we will see that the metric is applied. Now we also get a status: right now we have a value of 1.36 for our CPU throttling.
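Besides Lens, the same status can be checked from the command line; a small sketch (namespace and metric name assumed from the demo):

```shell
# List KeptnMetric resources in the demo namespace
kubectl -n podtato-kubectl get keptnmetrics.metrics.keptn.sh

# Show the full resource, including the status with the fetched value
kubectl -n podtato-kubectl get keptnmetric cpu-throttling -o yaml
```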
So right now we have one replica deployed, because that's the original value from the deployment we applied earlier, and now we would like to decide whether to scale up or down based on the value that we get from our metric. For this, you first define the name of the metric that you want to observe, and here in the describedObject section you can refer to the KeptnMetric custom resource that we just applied.
So in this case we want to inspect the CPU throttling, and ideally we would like to achieve a value of one — so when scaling up, this value should decrease over time. If we recall, earlier the value was 1.3, so in theory, as soon as the horizontal pod autoscaler is applied, it should do its job and scale up the podtato-head-entry deployment. So let's just do that.
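An HPA that scales on such a metric might look roughly like this (deployment and metric names are taken from the demo; the exact apiVersion of the KeptnMetric reference is an assumption):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: podtato-head-entry
  namespace: podtato-kubectl
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podtato-head-entry
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: cpu-throttling          # the metric the operator registered
        describedObject:                # points at the KeptnMetric custom resource
          apiVersion: metrics.keptn.sh/v1alpha2
          kind: KeptnMetric
          name: cpu-throttling
        target:
          type: Value
          value: "1"                    # scale up until CPU throttling drops toward 1
```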
And what I wanted to show you as well, in the meantime while this is running: we can also get the metric values using the kubectl raw command. Here I've provided an example command in the Makefile, which basically executes kubectl get --raw with the path to the custom metrics API. As you can see, this is another way of retrieving the current values of our metrics. And now let's check the status of our horizontal pod autoscaler and see how it's doing.
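The raw call might look like this (the exact resource path depends on how the operator registers its metrics, so treat the second command as an assumption):

```shell
# Discover what the custom metrics API exposes
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2" | jq .

# Fetch the current value of the demo metric (path assumed)
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta2/namespaces/podtato-kubectl/keptnmetrics.metrics.keptn.sh/cpu-throttling/cpu-throttling" | jq .
```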
As you can see here — all right, so we can see that everything was wired up correctly, because it was able to get the value of the CPU throttling metric, and based on that value it was able to calculate the replica count. So if we go back to Lens, into our pods section, we should now see that we have three replicas of our podtato-head-entry service running, and over time, hopefully, this will have the desired effect of decreasing our CPU throttling metric. And yeah, that's everything I wanted to show you for this example.
In addition to what we've shown you now — we've shown how you can make use of these metrics in the horizontal pod autoscaler — we want to make this as flexible as possible. By providing these metrics as part of the custom metrics API in Kubernetes, you could, for example, also make use of them in tools like Argo CD, in KEDA, maybe even in Flux, or in Keptn version one.
So with this nice unified approach of registering ourselves at the custom metrics API, we think this is a very powerful and flexible way of aggregating metrics from different providers and making them accessible in a unified way. As I said, right now we support Prometheus and Dynatrace, but of course, being an open source project, we always look forward to receiving external contributions as well. Right now, for example, there's also a Datadog provider in the works, and if you would like to add your own provider, you are more than welcome to contribute — we're happy to get in touch with you and look at your proposals and pull requests. And with that, I would like to hand back to Thomas.
Thomas: So what can you do to make the metrics operator better? First of all, just try it out: install the metrics operator, or install the whole lifecycle toolkit if you want. Play around with it, try it out, and if you find some issues — hopefully not — or if you find something which we could make better, feel free to raise issues or pull requests or whatever.
Brian: Yeah, pretty much — it's really simple, actually.
Thomas: So it's not big rocket science, and it would bring a lot of value to the Keptn metrics operator if we had many more providers. Therefore, feel free to raise a pull request for that.
Yes, and as we said in the previous slides, you only need one integration per component to get to many observability providers in the end. So if you know any tool which might benefit from an integration with the Keptn metrics operator, feel free to write this integration. There are, at the moment, I think, two ways to integrate: via the Kubernetes custom metrics API, and we also have our own API — so there are enough integration points for everyone. Therefore, feel free to do so. And yes, let's make the consumption of metrics in Kubernetes easier. With every contribution you make to this project, I think we are making the lives of many people much easier, and it would be awesome if this would get a bit larger.
Yes — the famous last words: how to get in touch with us. First of all, likes and stars on the GitHub repository. So if you try out the lifecycle toolkit itself, or if you try out the metrics operator, and you think this is a very awesome project, feel free to spread the word about it, give us stars, and so on. You can also share your thoughts.
So if you want to see some feature in the Keptn lifecycle toolkit, or in Keptn itself, raise issues and discuss them with us. And in the end, you can always reach out to us, as I said before, via pull requests and so on — you'll find all of this in the lifecycle toolkit repository.
So we hope that you enjoyed this session, and hopefully you will have a lot of fun with this. Thank you, and bye-bye all.