From YouTube: Keptn Refinement Meeting - Feb 14, 2023
Description
Meeting notes: https://docs.google.com/document/d/10Fig1eYFZ9iQFSYWkz0c4eTwzgJiPtQI5IsczbvLsuE/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
Okay, hey everybody, this is the Keptn refinement meeting for February 14th. Let's get right into it. We have three different tickets to discuss today. Let me quickly share my screen here. Yeah, we have three different tickets that we're going to discuss. I also created an estimation poker room for now, in case we need it, but I think those tickets mostly don't really need to be estimated in the end. Let's see. Flo, do you want to kick us off right away?
B
Okay, yeah. So if you run out of conversational topics on today's Valentine's Day, you can reflect on this issue here. Basically, this is about the deployment interval metric. This is one of the DORA metrics that we provide with the Keptn Lifecycle Toolkit and also display in our Grafana dashboards, and before we introduced the concept of revisions of Keptn app versions, it was quite straightforward.
B
So basically, if you had a deployment of version 2, for example, then the predecessor that you used as a reference for the time between deployments was obviously version 1 of the same Keptn app.
B
But
now
we
have
the
case
where
we
could
potentially
have
multiple
revisions
of
version,
one
for
example,
so
we
could
have
version
one
revision,
one
version,
one
revision,
two
and
so
on,
and
then
the
question
is
which
of
these
revisions
to
take
as
a
reference
for
version
two
then
and
right
now,
the
behavior
is
that
if
we
want
to
get
the
reference
for
for
version
one
for
version
two
sorry,
then
the
earliest
revision
of
the
previous
version
will
be
taken
as
a
reference
point,
and
then
the
deployment
interval
will
be
basically
be
the
difference
between
the
end
time
of
the
of
version
2
and
the
start
time
of
version
one,
and
now
we
have
the
the
question
whether
it
should
stay
this
way
or
maybe
actually
the
the
latest
provision
of
the
previous
version
should
be
taken
as
a
reference.
C
To take the revision also into consideration, then we would need to display the revision from which we computed it.
E
I would probably vote for the first revision of each version, because what the deployment interval metric actually tells us is how often, or what the interval is, between deploying version 1 and version 2. So I guess taking the first revision of both could be the right thing, because that's the real time; all the other revisions are basically just reruns, but version 1 is already deployed at that point. So, at least from my point of view, it should always be the first revision.
F
So it was... it's not always version one. Oh, okay.
C
I don't think so, because the end time is the thing that is relevant. If you think of larger applications, which might take half an hour to an hour to deploy, or sometimes even two hours, then the end time, the time when it's finished, is what's relevant.
E
What do you mean, Thomas, with start time and finish time? Why don't we always... it's an interval between the previous version and the current version? Oh.
B
...hours and one second, so this interval of one second, this information, would completely get lost.
B
Yeah, I think for that ticket we now have to change the calculation of the interval, because right now it's the difference between the end time of V2 and the start time of V1, so that needs to be changed.
A
Okay then, let's move to the next ticket. Does anybody from the community still have feedback on this, or input, or anything?
A
Okay then, let's move on to the next ticket. This is a bigger one: we want to split the metrics and the lifecycle features of KLT into two separate operators.
A
Giovanni posted this ticket yesterday. So the main reason is basically that we want to have those two features usable separately. You don't necessarily want the full KLT feature set together with the metrics operator; maybe you just want the metrics operator, or just the metrics feature, and not use KLT at all, since it is kind of standalone and has its own feature set that can be useful separately.
A
So, in order to be able to support that for users, we want to split the two feature sets into separate operators. That's the plan.
A
We already discussed this slightly and found out that maybe upgrading could be an issue, but if we actually bump the CRD versions correctly, that should be fine, and then we probably don't get upgrade issues. We already had some other thoughts on here.
B
Yeah, so basically my concern, or the roadblock I hit when I initially looked into this, was with regard to the separation of concerns. Right now the KeptnMetric controller uses the KeptnEvaluationProviders, which point to Prometheus or Dynatrace, for example, so they define where to get the metrics from, and those KeptnEvaluationProviders live in the lifecycle API group. In addition to that, in the KeptnEvaluationDefinitions we also support referring to these evaluation providers.
B
So in the evaluations we don't only go through the KeptnMetrics; for example, we can also go directly to Prometheus or to Dynatrace. This leads to a situation where the KeptnEvaluationProvider of the lifecycle API group would be used by both the metrics operator and the current operator, and even though it's only a shared read, if you wanted to use just the metrics operator, you would also need to have the custom resources from the normal operator installed.
B
So it's not really self-contained, and that's why, in my last comment, I suggested introducing a KeptnMetricsProvider in the metrics API group, which would then also live in the new metrics operator, and in the current operator we should support only looking at KeptnMetric custom resources during the evaluation, in order to not have this provider functionality duplicated.
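As a rough illustration of the proposed separation, the resource relationships could look something like this. These are hypothetical, heavily simplified structs invented for this sketch; the real CRD types are generated for the Kubernetes API machinery and look quite different:

```go
package main

import "fmt"

// KeptnMetricsProvider would live in the new metrics API group and
// define where metric values are fetched from (e.g. Prometheus, Dynatrace).
type KeptnMetricsProvider struct {
	Name      string
	TargetURL string
}

// KeptnMetric references a provider by name plus a query; the metrics
// operator would resolve it and keep the latest value in its status.
type KeptnMetric struct {
	Name     string
	Provider string // name of a KeptnMetricsProvider
	Query    string
}

// EvaluationObjective: under the proposal, evaluations in the lifecycle
// operator refer only to KeptnMetric resources, never to Prometheus or
// Dynatrace directly, so the provider logic lives in one place.
type EvaluationObjective struct {
	MetricRef string // name of a KeptnMetric
	Target    string
}

func main() {
	provider := KeptnMetricsProvider{Name: "my-prometheus", TargetURL: "http://prometheus:9090"}
	metric := KeptnMetric{Name: "response-time", Provider: provider.Name, Query: "avg(http_request_duration_seconds)"}
	objective := EvaluationObjective{MetricRef: metric.Name, Target: "<500"}

	// The lifecycle operator only ever needs the metric name to evaluate.
	fmt.Println(objective.MetricRef) // response-time
}
```

The point of the sketch is the direction of the references: the evaluation points at a metric, and only the metric points at a provider, so the provider CRD can move wholesale into the metrics operator.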
E
If we cut it now, what about the users currently using a KeptnEvaluationProvider and Prometheus or Dynatrace directly? If we do this change and release it, for example, in 0.6, and they update, suddenly none of the evaluations made by them will work, because they would need to change their providers. They would need to change their evaluation definitions to point to the KeptnMetric, and they would need to create KeptnMetricsProviders.
E
Why? But we can rewrite this logic. We didn't release it yet, so we can adapt it directly: either support KeptnMetric and KeptnMetricsProvider, or directly support the evaluation provider and go directly to Prometheus and Dynatrace.
E
But do you want to have it as part of 0.6 or 0.7?
B
I mean, converting the lifecycle KeptnEvaluationProviders to KeptnMetricsProviders in the metrics group should be relatively straightforward. Then maybe in the evaluation definitions, where we refer to the evaluation providers, we could automatically generate the KeptnMetric with the correct provider referenced, but I'm not sure, I didn't...
E
Maybe, I don't know. We didn't release the Helm charts yet, but maybe that would be... so we would need to adapt the lifecycle Helm chart structure.
C
Because you will want to install the Lifecycle Toolkit and the metrics operator in parallel, and it's not a big problem to add a way to disable the Lifecycle Toolkit.
F
From the Helm standpoint, one option is less work for the one repo and a bit more logic for the other.
F
We do have a POC of a separated setup; I don't know if I would do more research if we want to do this as quickly as possible.
A
Okay, but... is... oh.
A
I just commented on this issue. Should we add it right away here, or create a new issue? I guess we'd prefer to add it into this one. Is there anything that you want to add right now? Asking the community, mostly.
A
No? Okay, but that's... yeah, that's basically the same thing again. Okay, Anna actually started working on this already. We are planning to use, slash repurpose, helmify, if anybody knows the tool.
A
Awesome, yeah, so we're going to use this tool. It's pretty cool: basically, you can put in your YAML file and it will generate a Helm chart for you with the templating already provided. It's pretty amazing. So here in the examples...
A
Okay, anything else on this ticket? Shall we add something? Any questions?