From YouTube: Keptn Community & Developer Meeting - September 6, 2023
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started: https://lifecycle.keptn.sh/docs/getti...
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/lifecycle-to...
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
I see thumbs up, okie dokie. Because in the last weeks we had some people having problems with some links, I just want to double-check everything.
A
You might see here in the epic that many, many tickets have been closed, because the implementation, or the first cut of the implementation, is done. However, everything is behind a feature flag, so the SLO part is not yet enabled by default. You need to enable it through an environment variable in the metrics operator.
A
Otherwise this won't work. Soonish there will also be documentation for this, and you can start to play with it and let us know if everything is fine or if there are some corner cases that we didn't consider or need to polish. For that, I'd like to hand over to Florian for a quick demo of this cool new feature.
B
Let me start the screen sharing. All right, you should now see my screen. In this demo I would like to show you how you can make use of the new Analysis CRDs that were just mentioned. The goal of these is basically to resemble the evaluation functionality of Keptn v1. So here you will be able to define:
B
First of all, your AnalysisValueTemplates, as they're called. These resemble what the SLIs, the service level indicators, used to be in Keptn v1. Similar to the metrics we already have, you can refer to the provider that you would like to use for fetching those metrics, and then you can also specify a query.
B
A cool new feature here, compared to the Keptn metrics, is that you can make use of the Go templating syntax to insert parameters that you would like to set for a specific analysis later on. For example, in this query we would like to retrieve the number of containers with the status ready in a particular namespace, and we don't want to have the namespace hard-coded in the query; we would rather specify it later, when we do a concrete analysis. Next is the AnalysisDefinition.
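A minimal sketch of what such an AnalysisValueTemplate might look like; the API version, field names, and the `ns` template argument here are assumptions for illustration, so check the Keptn metrics operator docs for the exact schema:

```yaml
apiVersion: metrics.keptn.sh/v1alpha3
kind: AnalysisValueTemplate
metadata:
  name: ready-containers
  namespace: keptn
spec:
  # provider referenced for fetching the metric, as with KeptnMetrics
  provider:
    name: my-prometheus-provider
    namespace: keptn
  # Go templating: the namespace is filled in per analysis instead of hard-coded
  query: 'sum(kube_pod_container_status_ready{namespace="{{.ns}}"})'
```

The `{{.ns}}` placeholder is what gets substituted from the arguments of a concrete Analysis run.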
B
This is similar to what the SLO YAML file, the service level objectives, was in Keptn v1. Here you can provide a list of objectives that you would like to evaluate. For example, here we refer to the "ready" AnalysisValueTemplate, where we get the number of ready containers in our namespace, and we can specify failure and warning criteria.
B
So in this analysis we fail if we have fewer than one ready container in our namespace, and we get a warning if we have fewer than two. There are also other operators that can be set for these failure and warning targets, like greater-than-or-equal, and you can also specify ranges; so you could, for example, say that you want to fail if the number of containers is within a certain range. We believe this will provide a lot of flexibility to tailor the analysis definitions to your particular use cases. Then, similar to Keptn v1, you can assign a weight to each of those objectives, and you can also mark a particular objective as a key objective, which means that if that one fails, the complete analysis will fail, similar to what Keptn v1 used to do. And since we can specify multiple objectives here with different weights:
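The spoken example could be sketched as an AnalysisDefinition roughly like the following; the exact field names and API version are assumptions based on the description, not a verbatim copy of the demoed manifest:

```yaml
apiVersion: metrics.keptn.sh/v1alpha3
kind: AnalysisDefinition
metadata:
  name: my-analysis-definition
  namespace: keptn
spec:
  objectives:
    - analysisValueTemplateRef:
        name: ready-containers
        namespace: keptn
      target:
        failure:
          lessThan:
            fixedValue: 1   # fail with fewer than one ready container
        warning:
          lessThan:
            fixedValue: 2   # warn with fewer than two
      weight: 2             # contribution to the overall score
      keyObjective: false   # true would fail the whole analysis on its own
  totalScore:
    passPercentage: 90      # threshold for passing the analysis
    warningPercentage: 75   # threshold for the warning status
```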
B
Those weights will be summed up, and a percentage of the overall achieved score will be calculated. Then you can define the thresholds for passing the analysis and for achieving the warning status in case the evaluation didn't pass. I think this will make the migration from Keptn v1 to the lifecycle toolkit much, much easier. And here at the bottom we have the metrics provider.
B
This is the same provider we already use for the Keptn metrics in the lifecycle toolkit. All right, so this is the definition of the goals you want to achieve. Then, in order to execute a concrete analysis, we have the Analysis CRD on the right side. Here we can say that we want to do an analysis for a particular time frame, defined with the "from" and "to" parameters.
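A concrete Analysis resource along these lines might look as follows; again a hedged sketch, with the API version, timestamps, and the `ns` argument being illustrative assumptions:

```yaml
apiVersion: metrics.keptn.sh/v1alpha3
kind: Analysis
metadata:
  name: analysis-sample
  namespace: keptn
spec:
  analysisDefinition:
    name: my-analysis-definition
    namespace: keptn
  # the time window the analysis should cover
  timeframe:
    from: 2023-09-06T08:00:00Z
    to: 2023-09-06T09:00:00Z
  # arguments substituted into the Go templates of the value templates
  args:
    ns: my-app-namespace
```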
B
Let me apply this manifest. The analysis has been created; now let's switch over to K9s. As you can see here, we have our analysis sample demo with the status set to pass. Let's quickly go into the YAML: in the status you will first of all see whether the analysis has passed or not, via the pass property, and then there is the raw property.
B
There you will see a detailed JSON string containing the status of all the objectives that you have defined in your analysis definition, and whether they have passed or not. That's everything I wanted to show you. Are there any questions?
B
Exactly, so the analysis is the one you see on the right side there; the analysis definition will be the static configuration.
D
And can I only use one namespace at a time, or can I do this for multiple namespaces?
B
The analysis definition is located in one namespace, but if you want to apply an analysis in different namespaces, you can always refer to the analysis definition from that particular namespace. So you wouldn't need to recreate the definition in every namespace that you would like to execute an analysis in. Did that answer your question?
E
If there are no further questions, I have one minor one: in the analysis on the right, the value is in quotation marks; I assume that's not needed?
F
Sharing... good, so hopefully you see my screen. Okay, I'll basically follow up on what Florian already said.
F
What we did is we actually wanted to make the onboarding from Keptn v1 to the current Keptn, in the past known as KLT, as smooth as possible, and therefore we were thinking about how to make the conversion of the evaluations performed in Keptn v1 as smooth as possible.
F
We assumed it would be good to create a converter for the important resources used in Keptn v1, that is, the SLO and SLI YAMLs, and to convert them to the resources already mentioned: the AnalysisDefinition and the AnalysisValueTemplates. I will first start with the SLI converter.
F
I'll show you a quick demo. Here we have an sli.yaml which is very simple: we have only two indicators, throughput and response time. Using the converter, which is part of the metrics operator binary, we can convert it into multiple AnalysisValueTemplates, which can afterwards be applied to your cluster. Let me show you how this can be performed. You can call the metrics operator binary either directly, or you can run it in a Docker container.
F
You can run the image as part of a Docker container, where you specify the convert-SLI parameter with the path to the SLI YAML you want to convert. You need to specify the SLI provider that should be used; this needs to be the same for all the AnalysisValueTemplates. And you also specify the namespace for the AnalysisValueTemplates, where they will be placed, so you can directly apply them to your cluster.
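As a dry-run sketch, the invocation described above might be assembled like this; the image name and flag spellings are illustrative assumptions, so check the converter documentation for the exact command:

```shell
# Build the converter invocation as a string and print it for review,
# rather than executing Docker directly.
IMAGE="ghcr.io/keptn/metrics-operator:latest"   # assumed image name
CMD="docker run -v \$(pwd)/sli.yaml:/mnt/sli.yaml ${IMAGE} \
  --convert-sli=/mnt/sli.yaml \
  --sli-provider=dynatrace \
  --sli-namespace=keptn"
echo "${CMD}"
```

Piping the resulting AnalysisValueTemplates into `kubectl apply -f -` would then place them in the given namespace.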
F
So, you already saw the sli.yaml in the demo; let's make a conversion. As you can see, we had two SLI indicators, and we have created two AnalysisValueTemplates with the query from the SLI, using the templating scheme that Florian already mentioned, with the provider named dynatrace and the namespace where the provider is located, keptn, as we specified in the parameters.
F
The documentation is practically ready, but it was still not merged into main; it's prepared in a pull request, and I assume it will be merged in the next few days. You have a link to it now in the document, so you can go through it. There is an example of how to use it, what it is about, and how it converts the format. With the SLI converter we're basically able to fully convert the sli.yaml to multiple AnalysisValueTemplates.
F
Any questions regarding the SLI converter?
F
Good, then I'll continue with the SLO converter. As I said at the beginning, we also have SLO converter documentation; same as for the SLI, it wasn't merged yet, but I assume it will be merged in the next few days. We have a description, some usage notes, and conversion details here.
F
We need to say that we do not fully support the conversion of all the use cases that are present in the slo.yamls, because in some cases it doesn't make logical sense, and in some cases it's too complex and really such a corner case that it's not worth converting for the users.
F
In some cases it can also be manually converted. What needs to be said about the conversion: we do not support conversion of the comparisons, as we do not have this functionality in place yet. I'm not sure if we will have it; it's up to the team to decide if we will support this use case in the future.
F
We have specified multiple objectives with the pass and warning criteria, or multiple combinations, which are converted to a single AnalysisDefinition, as was already shown and mentioned. Basically, the weights and the targets are not specified with just the symbols; we have dedicated fields for converting the logic from one resource to another.
F
One big change was made: in the SLO YAMLs we previously specified the pass and warning criteria, but in the AnalysisDefinition we inverted the criteria, so we specify the failure criteria and the warning criteria. Therefore the pass criteria are basically inverted and adapted in a way that makes sense. Now, to show a quick demo.
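The inversion described above could be sketched like this; the objective name, the numbers, and the converted field names are made-up illustrations, not the demoed files:

```yaml
# Keptn v1 SLO objective: pass/warning criteria
# (pass if <= 600, warn if <= 800, otherwise fail)
objectives:
  - sli: response_time_p95
    pass:
      - criteria:
          - "<=600"
    warning:
      - criteria:
          - "<=800"

# Converted AnalysisDefinition objective: failure/warning criteria, inverted
# (fail if > 800, warn if > 600)
# objectives:
#   - analysisValueTemplateRef:
#       name: response-time-p95
#     target:
#       failure:
#         greaterThan:
#           fixedValue: 800
#       warning:
#         greaterThan:
#           fixedValue: 600
```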
F
You specify the namespace where the AnalysisValueTemplates are located, because we reference the AnalysisValueTemplates from the AnalysisDefinition.
F
You saw that in the previous demo, and also the name of the definition which will be created. So let's execute it. As you can see, we have a single AnalysisDefinition with the definition name specified here, in the default namespace, which references the AnalysisValueTemplates in the default namespace.
F
The names of the referenced AnalysisValueTemplates are taken from the SLI names in the SLO. So that's basically all I wanted to say; just one note left: in the documentation, in the conversion details, you can read more about unsupported and also supported use cases. There are nice examples of what an SLO objective looks like and what it will be converted to, so you can go through it, and at the end there is a more complex example, which I basically showed during this demo.
A
I think a mandatory question, similar to the previous one: the SLO namespace, does the name make sense? Because we are not specifying the SLO namespace there. Maybe it's worth calling it the analysis value template namespace.
A
I wanted to say a few more words about what we support and what we don't support, because in Keptn v1 the language was very loose; it didn't restrict at all what you can specify. So instead of blindly converting everything, we tried to focus on use cases that make sense, that is, the use cases that are actually used by our users; those are the use cases we focus on converting out of the box.
A
Then another thing I'd like to mention is the comparison, or percentage comparison, so comparing against previous evaluation values: we don't support that yet, but we have a plan for how to address it. The current idea we are evaluating is not final, we didn't finalize the details yet, but the idea floating around is to expose every analysis evaluation as a metric. This metric can then be ingested into your observability platform, and in the analysis template you can reference it.
A
This would give the same behavior as Keptn v1, but we didn't yet go through the whole thing to verify that it makes sense, is feasible, and all the good stuff.
E
While the share is loading: I wanted to quickly share a bit of our experience with converting the lifecycle toolkit repository into a monorepo. Before, it was kind of monolithic, and now it will be a monorepo with separate artifacts. I don't know if you have seen it, but we have an epic for this.
E
Some of the tickets are actually done already; especially the monorepo setup for the metrics operator, lifecycle operator and such, those are already merged, and we are kind of in the process right now of releasing all the separate artifacts. As you may also have seen, there are lots of new tags and releases here; everything has its own tag now. Previously we just had KLT with one version, and now every single artifact has its own versioning.
E
Some of them actually already got a 1.0 release, where we think they're basically stable and good for production use; some of them don't have that yet. The next steps for this are basically to go through the rest: as you see here, the lifecycle operator is not under the new tags yet, the metrics operator is not there yet, and the release pull requests for those, if I filtered this correctly, are still open.
E
Here we are in the process of releasing this, and as you may or may not have seen, judging from the pull requests I authored, at least in the last weeks, there were many hiccups. Transferring this to a monorepo is super hard; there are many pitfalls, and we fell into most of them, I think. So there are lots of "fix release pipeline" PRs around, even this week and last week; it's more than 10.
E
By now we are using Release Please for this, which works fine; it's all cool and fine, it's just hard to configure and sometimes has somewhat undocumented behavior, where you actually need to go into the code and check what's going on. In general, I think we can recommend Release Please for doing releases; it's just hard to switch over to this process.
E
Lots of growing pains in this, but in the end the final result will be that we will have a KLT tag again, for 0.8.2, but all the sub-components of it have their own versions, and we can kind of mix and match and bump their versions separately. This gives us flexibility, and it also prepares for the next thing in the monorepo epic, which is going to be the KLT umbrella chart: one chart to rule them all.
E
So you can specifically use only the components you want. Say you want to use cert-manager.io instead of our Keptn cert-manager: you can easily just set the Helm value to false and only use the rest of the Helm chart. That's going to be the final solution for this.
E
Then you can be fully flexible in what you actually want to use from Keptn and what you don't want to have, and you can even change it in the future by just changing your Helm values. Say, down the line, two months after you migrated from Keptn v1, you want to enable metrics: you can easily just switch the Helm value, the metrics operator is going to be deployed, and then you're going to see metrics being created.
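The toggles described above might look roughly like this in the umbrella chart's values file; the key names here are assumptions for illustration, not the final chart interface:

```yaml
# Hypothetical values.yaml for the planned KLT umbrella chart
certManager:
  enabled: false      # use cert-manager.io instead of the bundled Keptn cert-manager
metricsOperator:
  enabled: true       # flip from false to true later to deploy the metrics operator
lifecycleOperator:
  enabled: true
```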
F
I have one question: will there be a compatibility matrix between versions of certain components and how they can actually be combined together?
E
I don't think we're going to need it, because we will have this Keptn umbrella chart, which will already have prescribed versions of everything inside, so I don't think it's really necessary. We're going to need to check that in the future, but from my point of view now, basically, I don't think we're going to need it.
H
How are the... yeah.
C
Thanks, just wondering: how are the different versions of the CRDs being handled in the umbrella chart?
E
That's a very good question. I think the solution for now was to always bundle them all, so you would also install all the metrics CRDs, for example, even though you don't have the metrics operator enabled. There's no nice way in Helm to support CRDs, and...
E
There's
there's
basically
no
industry
standard
of
doing
this,
so
we're
doing
it
the
best
we
can
we're
just
gonna,
always
ship,
all
the
crds,
so
users
have
them.
We
need
them.
Basically,
okay,
thanks.
H
I had one question too: would a user still be able to, you know, install Keptn as a whole, but install an older version of the metrics operator and a newer version of the lifecycle operator? Would that be possible?
E
That would not be possible, or, I mean, it would be possible, but it's not recommended, I would say. The ideal solution would be that you use the KLT Helm chart.
E
The lifecycle-toolkit-charts repo, that's the right one: this is basically the repo that publishes our Helm charts, and it will have a full KLT tag for the whole set of components, so with the metrics operator, the lifecycle operator and such. You will just install that chart and then pick and choose inside what you want to have.
A
Then another quick thing: at KubeCon North America this year, Anna and I will be there, and we'll also be running the project meeting. So if you're around, just pass by; we can discuss the Keptn roadmap, feature planning, whatever, with the community. I also want to shout out to Yash: congrats on your lightning talk being accepted.
D
Everything you needed is there. I have a slide update: Stacy is up and at it, and since nobody cared about timing, she's trying to produce the blog post now and ran into some issues. I suggested she jump on the community meeting; I don't know if she's going to or not, but yeah.
I
Yeah, so I was trying to post on Medium, because I think that's where I found most of the posts from the past, and I don't have access to actually publish anything to medium.com/keptn, and I'm hoping that someone could help me with that.
A
I will ping you separately. Okay, so, first of all, good morning.
I
Okay, great, I appreciate it, thank you. And then once I become an editor, I can just go ahead and get that blog post out, and then coordinate with Meg on the rest of the updating.
I
Yeah, and Giovanni, because I know that we were trying...
D
Let's say: hey Giovanni, no problem, we can get your document. Does anybody else have anything in documentation they want to discuss? We can get through this really fast, because I've spent the last week, including the weekend, doing the name change, so I haven't been doing much else. The next thing I'm going to be working on is going through some of the guide chapters; some of them were sort of assuming that you were going through a getting-started exercise first, and I don't think we necessarily need all of the getting started, but we can take some of the steps that we drafted for it. So I'm going to see if we can make some of those guide chapters a little bit better and look at the structure a little bit. That's what I've been up to, so... are we ready to refine?
A
I think so, but I see that Yash asked for a PR review, so please, everyone, go over the PR and provide some feedback. I see that some tests are failing; I will check later what's going on there. And before we go into the tickets, I'd like to give another shout-out to Rakshit: congrats on becoming an approver. In the last days and months Rakshit worked hard on the Google Summer of Code project, improving the metrics operator, and it's only well deserved that you are now an approver of the project.
B
But this is for me, yeah. This is a smaller refactoring ticket to improve the stability of the metrics controller. Basically, the essence of this issue is that, within the logic of the controller, we call the providers' NewProvider method, which returns us the provider implementation, for example the Prometheus or Dynatrace DQL provider, and so on. To make the metrics controller more testable, the actual component that gives us the provider implementation should be injectable, so we're decoupled from the actual provider implementation.
H
Yes, actually I was the one who faced this issue while working on the aggregation controller, so it's just a follow-up on that. So yes, I can work on it.
B
Yeah, this is a very small improvement for the Dynatrace health provider: right now the OAuth URL that will be authenticated against is hard-coded in the provider code, and in the future it might benefit from also making this configurable.
B
We already have the secret containing the Dynatrace tenant URL, so this might also be the best place for this authentication URL to be included. We can maybe edit the description and add that there.
E
Yeah, I noticed that we don't have all the linting and test commands available in the Makefiles that we use in the pipelines to test stuff. Especially golangci-lint is not there, and we also don't have unit test commands, as far as I know, in the Makefile. So that would be something nice to add, to make local development easier. This would basically mean adding Makefile targets for golangci-lint and also for unit testing.
E
Heavy inspiration can basically be taken from the pipelines, where all these commands are run already; they're just not in the Makefiles yet. I added a link here anyway; the link is to kind of show how sub-project Makefiles, or subfolder Makefiles, can be used from the root Makefile.
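The delegation pattern described above could be sketched roughly like this; the directory names and targets are assumptions mirroring what the pipelines run, not the merged implementation:

```makefile
# Root Makefile delegating lint and unit-test targets to sub-project Makefiles.
.PHONY: lint test

lint:
	$(MAKE) -C metrics-operator lint
	$(MAKE) -C lifecycle-operator lint

test:
	$(MAKE) -C metrics-operator test
	$(MAKE) -C lifecycle-operator test

# Inside e.g. metrics-operator/Makefile, the targets would wrap the same
# commands the CI pipelines already use:
#   lint:
#   	golangci-lint run ./...
#   test:
#   	go test ./... -coverprofile cover.out
```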
E
Like this one, yeah, something like that; maybe you can change that. And I will also add some links to the locations in the pipelines where we use those commands, so that it's easier for the implementation.
A
Then this one; I don't remember, was it too long ago? Two weeks ago, oh yes, from the last community meeting: we discussed that some users, when they try out Keptn...
A
They had this feature request because they want to try out Keptn tasks as a pre-flight check, so before the deployment occurs they want to run this task. But they want to start to build confidence that what they wrote in the task makes sense and really evaluates a situation that should prevent the deployment from going through.
A
However, you cannot really build up confidence, because whenever there is an error, like we saw in the pipeline, we need to follow up with another PR to fix it, and they don't want to block their deployments if there are some problems with the Keptn task while they are developing it. Therefore, we should instead have a mode where Keptn tasks are executed in the pre-deployment steps, but they do not fail or stop the deployment, no matter their results.
A
They do not stop the deployment itself, so people can play with Keptn tasks; then, when they have built the confidence that the task does what they wanted, they can really enforce that the deployment is blocked if the task fails. So my plan is to add a new configuration in the KeptnConfig called block deployment.
If
the
value
is
true,
which
is
the
default
one,
then
the
current
behavior
of
the
lifecycle
operator
is
maintained.
If
it's
false,
then
we
ignore
the
result
of
the
task
and
we
always
let
the
deployment
go
through.
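The proposed setting might look roughly like this; the field name (here `blockDeployment`) and API version are assumptions sketched from the discussion, since the design was still being planned:

```yaml
apiVersion: options.keptn.sh/v1alpha1
kind: KeptnConfig
metadata:
  name: keptn-config
spec:
  # false: run pre-deployment tasks but never block the deployment on failure
  # true (default): keep the current behavior and block on task failure
  blockDeployment: false
```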
A
Yes, that's not a bad idea; it actually makes sense, because block deployment is general enough to cover both tasks and evaluations. So I would say, at the moment, let's focus on the tasks. Yeah, and I already put evaluations in.
E
If you want to work on that ticket, please comment on it so that we can assign you.
D
Is this going to spawn a documentation issue? I have a Keptn config reference page that will need to be updated; it won't be a big deal, I'd like to do it, but...
D
Let's... oh, did I say to leave that up? Let's see. I'm sorry, I did this a few days ago; my mind is mush. I don't think that's terribly...
D
...new content. And then the bigger issue, and you talked to Adam and we missed each other all day, is that the exercise that Adam did is so good, it's just excellent, but it's pretty much for metrics and observability, and we need something like that for the release lifecycle maintenance use case. Until we can do that, my only idea is to pull up the old original getting started guide that had the PodTato Head, or we could just remain silent until we get it done.
I
I'm happy to do that as well. I reached out to Johannes already, so yeah, we'll be in touch. Okay.