From YouTube: Keptn Developer Meeting - Mar 24, 2022
Description
Project demos by Keptn developers: new features, fixes and automation improvements. Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
B
A
C
No, could you please continue sharing for me?
A
C
C
C
E
C
Okay, yeah, then the next thing is some fixes for the release pipelines. We had some issues with our automatic release pipelines regarding a kind of race condition.
C
And then, right in the next step, it couldn't be found when we wanted to upload artifacts to that release. So we added a sleep command in between; that seems to fix that.
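As a rough illustration of that workaround (the real fix is a sleep step in the CI release pipeline, not Go code; the helper functions and the delay below are made up for the sketch): the newly created release may not be visible to the next step immediately, so a fixed pause sits between creating it and uploading the assets.

```go
// Illustration of the "sleep between steps" workaround for the release race
// condition. createDraftRelease and uploadAssets are hypothetical stand-ins
// for the actual pipeline steps; the 30s delay is likewise an assumption.
package main

import (
	"log"
	"time"
)

func createDraftRelease(tag string) (int64, error)       { return 1, nil } // placeholder for the release API call
func uploadAssets(releaseID int64, files []string) error { return nil }    // placeholder for the asset upload

func main() {
	releaseID, err := createDraftRelease("x.y.z")
	if err != nil {
		log.Fatalf("creating draft release: %v", err)
	}

	// Give the release time to become visible before the next step looks it up.
	time.Sleep(30 * time.Second)

	if err := uploadAssets(releaseID, []string{"artifact.tar.gz"}); err != nil {
		log.Fatalf("uploading assets: %v", err)
	}
}
```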
C
And then there were also some minor bugs with other shell commands. As you can see here, I removed some quotes and fixed some kinds of folder structures.
E
E
With draft releases on keptn-contrib and keptn-sandbox, it seems to have auto-resolved; it seems to have been an API issue on GitHub's side itself. That being said, it could still be that the Keptn bot token that you're using here could positively or negatively affect it. Long story short: the GitHub draft release, if you create it and then upload assets to it, is very flaky.
E
Just the draft releases; anything else worked out of the box, but the draft releases via the GitHub CLI, I need to add that actually, that was quite flaky. Whatever you do manually, or with some other tools, worked out of the box, but the GitHub CLI with draft releases, that one was very flaky.
C
C
C
In any case, the support archive is still going to be there and won't be deleted with this checkbox, so you will still have insights into your run if you want them for debugging purposes. Additionally, in this PR I also lowered the time to live for namespaces from failed runs.
C
This should be enough for everybody, so yeah, and that's actually it for me.
F
First of all, can you hear me? Yeah, I can hear myself again; I will turn down my volume for now. So if you have any questions, please ask them after I've presented everything. All right, okay, here it is.
B
F
So for this meeting I would like to talk about two pull requests, both about the shipyard controller, or shippy. Basically, the first one was a fix regarding the representation of the overall sequence state.
F
Then the whole sequence state was considered to be finished, and that's of course not right. So what I've added in this pull request is, first of all, an integration test that checks that scenario, and the logic in the shipyard controller has been adapted to really go through all the stages, and all the sequences within those, and really check whether all of those are actually finished before setting the overall sequence state to finished. All right.
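To illustrate the adapted check, here is a minimal sketch; the type and field names are assumptions for illustration, not the shipyard controller's actual data model. The overall state is only marked finished once every sequence in every stage reports finished.

```go
// Sketch: the overall sequence state is finished only when every sequence in
// every stage is finished. Types and state values are illustrative only.
package sequence

type SequenceState struct {
	State string // e.g. "triggered", "started", "finished"
}

type Stage struct {
	Name      string
	Sequences []SequenceState
}

func overallFinished(stages []Stage) bool {
	for _, stage := range stages {
		for _, seq := range stage.Sequences {
			if seq.State != "finished" {
				// At least one sequence is still running, so the overall
				// state must not be set to finished yet.
				return false
			}
		}
	}
	return true
}
```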
First of all, are there any questions about that? I'll turn my volume back up now.
F
Here we had some cases where, for example when we had a stalling sequence and you wanted to abort it, in some cases certain entries that are expected in the database were missing; for example, the original sequence triggered event that triggered the sequence, because that's being used to refer the sequence finished events back to it afterwards.
F
So if that was not there anymore, for whatever reason, then the database was not cleaned up properly, and so we had some edge cases where new sequences in the same stage were still blocked, and this pull request...
F
Yeah, it did some changes. So, first of all, if the original sequence triggered event is not there anymore, it will still proceed with cleaning up everything it can, to really make sure that the stage where the sequence was running is not blocked anymore, so new sequences can be executed again after cancelling this stalling sequence.
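A minimal sketch of that behaviour (function, interface, and error names here are hypothetical placeholders, not the shipyard controller's real code): a missing triggered event is logged, but it no longer stops the rest of the cleanup.

```go
// Sketch: proceed with the cleanup even if the original triggered event is
// gone, so the stage does not stay blocked. All names are hypothetical.
package sequence

import (
	"errors"
	"log"
)

var ErrNoEventFound = errors.New("event not found")

// EventRepo is a stand-in for the real storage layer.
type EventRepo interface {
	GetTriggeredEvent(sequenceID string) (interface{}, error)
	DeleteSequenceState(sequenceID string) error
	UnblockStage(sequenceID string) error
}

func cancelSequence(repo EventRepo, sequenceID string) error {
	if _, err := repo.GetTriggeredEvent(sequenceID); errors.Is(err, ErrNoEventFound) {
		// Previously this aborted the cancellation and left the stage blocked;
		// now it is only logged and the cleanup continues.
		log.Printf("no triggered event found for %s, cleaning up anyway", sequenceID)
	} else if err != nil {
		return err
	}

	if err := repo.DeleteSequenceState(sequenceID); err != nil {
		return err
	}
	return repo.UnblockStage(sequenceID)
}
```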
F
G
H
Yeah, probably yes. Good, so yeah, the last sprint on my side was mainly focused on some bug fixing and on enhancing or fixing the integration tests and the unit tests. First of all, I will start with the bug fixes.
H
Actually this was kind of a bug-fixing enhancement, or cleanup, where we actually tried to unify the response code when the git upstream is not reachable for some reason. For example, until now, when the user added an invalid token, they received a 404 error, and this was actually quite misleading, because when the user had added an invalid remote URI or URL, they also received 404 errors. So actually we wanted to unify all these responses regarding the configuration of the git upstream, so it's kind of unified now.
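A rough sketch of what such a unification can look like (the error values, the helper, and the chosen status code are assumptions for illustration, not Keptn's actual implementation): every git-upstream configuration problem is mapped to the same response code instead of a mix of misleading 404s.

```go
// Sketch: map every git-upstream configuration problem to one status code.
// Error values and the chosen status are illustrative assumptions.
package upstream

import (
	"errors"
	"net/http"
)

var (
	ErrInvalidToken     = errors.New("git upstream: invalid token")
	ErrInvalidRemoteURL = errors.New("git upstream: invalid remote URL")
	ErrUpstreamDown     = errors.New("git upstream: not reachable")
)

// statusForUpstreamError returns the same status for every upstream
// misconfiguration, so a 404 no longer doubles as "token is wrong".
func statusForUpstreamError(err error) int {
	switch {
	case errors.Is(err, ErrInvalidToken),
		errors.Is(err, ErrInvalidRemoteURL),
		errors.Is(err, ErrUpstreamDown):
		return http.StatusFailedDependency // one unified code for all of them
	default:
		return http.StatusInternalServerError
	}
}
```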
H
If the response had some error, or it was aborted, then the sequence hadn't failed until now, because it was checking just for the failed result, that is, when the get-sli or the lighthouse evaluation had normally failed due to the quality gates.
H
So now the sequence is actually failing when any kind of problem appears.
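As a minimal sketch of that change (the struct, field names, and values are assumptions for illustration, not the actual Keptn event model): an errored or aborted task now fails the sequence just like a failed evaluation result does.

```go
// Sketch: any problem fails the sequence, not only the "fail" result.
// Names and values are illustrative assumptions.
package sequence

type TaskOutcome struct {
	Result string // e.g. "pass", "warning", "fail"
	Status string // e.g. "succeeded", "errored", "aborted"
}

// sequenceFailed previously looked only at Result == "fail"; now an errored
// or aborted task also counts as a failure of the sequence.
func sequenceFailed(t TaskOutcome) bool {
	return t.Result == "fail" || t.Status == "errored" || t.Status == "aborted"
}
```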
So this was actually more of a bug fix. Then, as I already said, there was fixing and enhancing of the integration tests and the unit tests.
H
We had multiple enhancements of the backup and restore test, for example backing up the git credentials when using the resource service in the integration test, as we recently enabled the resource service in our integration tests, so this needed to be adapted; and also adding a git HEAD reset to the integration test, as it is part of the documentation and was not covered in the integration test, as well as some refactoring and enhancements in this test. And a bigger task was enhancing the quality gates tests, which means triggering the evaluation.
H
In this particular pull request we increased the coverage of the unit tests and also enhanced the integration tests to have more checks added. Also, some integration tests showed unexpected behavior, and we found some bugs which were fixed as part of this pull request. Yeah, it was kind of making the testing, and the software, better testable.
I
I took over a couple of bugs in the last sprint, and actually it was almost all about bugs. The first one was actually discovered by, I think it was Christian: it was a kind of security hole in the webhook service, where it was easy to just use the @ notation in the curl payload to actually upload files from the webhook service, for example.
This is a very simple example I took over from Christian, where we just configure a webhook with a POST request method to this URL there, and all you needed to do is just to set the custom payload to, for example, an @ sign and then a path to some local file of the webhook service, and all of a sudden you had uploaded the content of that file with a POST.
B
I
The fix was, of course, to disallow the usage of this @ sign inside the data block of the curl command. I've done this as far as it's possible. There are other ways to tell curl to use a local file; these were already disallowed, for example the -F flag and so on. And in this screenshot you see, for example, that this particular example, where you try to upload the /etc/hosts file, did fail.
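A simplified sketch of that kind of check (the function name and matching rule are assumptions, not the webhook service's real validation): curl interprets a data value starting with @ as "read this local file", so such payloads are rejected up front.

```go
// Sketch: reject curl data payloads using the "@file" notation, which would
// make curl read and upload a local file from the webhook service pod.
// The function name and the matching rule are illustrative assumptions.
package webhook

import (
	"fmt"
	"strings"
)

func validateCurlData(data string) error {
	// `curl -d @/etc/hosts ...` sends the content of /etc/hosts, so a payload
	// that starts with '@' must not be allowed.
	if strings.HasPrefix(strings.TrimSpace(data), "@") {
		return fmt.Errorf("file references via '@' are not allowed in the data block")
	}
	return nil
}
```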
I
I
I
I
I
Not tricky, but a situation which could also just happen: for example, if you just put a backslash before each of the characters of the URL, curl will also happily just take this and make a request to the URL kubernetes.default.service.
I
Such URLs... so we have, I think, this was actually already in place, where we disallow Kubernetes-local domain names, for example kubernetes, kubernetes.default and so on, localhost and so on, and in this fix these were enhanced, and also the parsing.
I
This also covers the case where, between each of the characters, there is a backslash, and so on.
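A minimal sketch of the idea (deny-list entries and the helper name are illustrative assumptions, not the actual implementation): backslashes are stripped from the command before it is matched against the disallowed hosts, so an escaped hostname cannot slip past the check.

```go
// Sketch: normalize a curl command before matching it against denied hosts,
// so backslash-escaped characters cannot bypass the deny list.
// The helper name and the deny-list entries are illustrative assumptions.
package webhook

import "strings"

var deniedHosts = []string{"kubernetes", "kubernetes.default", "localhost", "127.0.0.1"}

func containsDeniedHost(curlCommand string) bool {
	// Remove backslashes first, so `k\u\b\e\r\n\e\t\e\s` becomes `kubernetes`.
	normalized := strings.ToLower(strings.ReplaceAll(curlCommand, `\`, ""))

	for _, host := range deniedHosts {
		if strings.Contains(normalized, host) {
			return true
		}
	}
	return false
}
```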
So I've tried to make this as bulletproof as possible, and in the end you also see here, in this example again: this will not work with the current version, because it will say, okay, the command contains the URL kubernetes... and I think this goes on here in this example.
I
We really need to improve... sorry, Christian. We really need to improve the parsing, though, because right now we don't make the distinction between URL, options, and payload. So one thing which is really, really not nice is that these names here, kubernetes.default and so on, are actually also not allowed in a regular payload right now. We need to fix that in a follow-up issue, where we really need to somehow rework the whole parsing of the commands. Questions, please.
E
E
I
Yeah, in the end, two, I would say, quite big security holes, I know.
E
F
E
I
It could be misleading, you're absolutely right. Good. Those were the two things about the webhook service. Let's go to the next one: the resource service. There was a ticket which was kind of a bit weird, but in the end the focus of this ticket shifted a little bit, and in the end I've updated the description of it. What has been done for the resource service?
I
The first thing I've done: previously there was a proof-of-concept implementation where we just introduced retry logic, for example when the resource service is handling a resource deletion and it failed, then retry logic was introduced there. And one original goal of this ticket was to introduce this also in all other places where it makes sense.
I
I just did that. The second one was that, a couple of weeks or days, actually weeks, ago, the resource service was prepared to run in a multi-replica fashion, so multiple resource services in parallel, and there was a preparation done that certain things should not be handled via API calls, because those calls are load-balanced, but via regular NATS events. In particular, this was the project delete finished event, which we are actually firing from the shipyard controller, and we are modifying or adding NATS capabilities to the resource service.
I
So this one is actually connected to the NATS broker and will get this event when a project is deleted, and each of the replicas will then clean up its local file system for this particular event, and not just a single one. So we don't need to synchronize between the replicas; each of the replicas gets the event. That was the preparation for having the resource service with multiple replicas, although it's still shipped, and will be shipped for the near future, with just one replica; it depends a little bit on what you want to achieve.
I
It's probably enough with one replica to support zero-downtime scenarios; later on, when we fully support HA, high availability, we might think of really running it with multiple replicas. Nonetheless, this is now in place, but it had a problem, because we switched from regular NATS to JetStream.
I
But there was a conflict between the resource service and the shipyard controller, shippy, when setting up those streams: basically the resource service did that, the shipyard controller did that, and it was just a conflicting setting in a stream from a high-level point of view. I've changed this.
I
In the code of the resource service I then discovered that there is really no need for the resource service to use JetStream at all. So even though you are using JetStream, you are not forced to use the features of JetStream, so I've changed it back. Basically, in the NATS package I've deleted everything we had there.
I
It was anyway just a copy of what we had in the shipyard controller, and I created a small little package, which is just called nats subscriber, where you can just subscribe, like you would normally do, to an event on a NATS broker.
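For context, a plain NATS subscription with the Go nats.go client looks roughly like this; the connection URL and subject are illustrative values, and this is a sketch rather than the actual package from the pull request.

```go
// Sketch of a plain NATS subscription with the nats.go client.
// The connection URL and the subject are illustrative values only.
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://keptn-nats:4222")
	if err != nil {
		log.Fatalf("connecting to NATS: %v", err)
	}
	defer nc.Close()

	// Every replica subscribes; each one receives the event and cleans up its
	// own local file system, so no coordination between replicas is needed.
	if _, err := nc.Subscribe("sh.keptn.event.project.delete.finished", func(msg *nats.Msg) {
		log.Printf("received %s: %s", msg.Subject, string(msg.Data))
		// clean up the local files of the deleted project here
	}); err != nil {
		log.Fatalf("subscribing: %v", err)
	}

	select {} // keep the process alive to receive events
}
```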
I
I
I could think of putting or evolving this package into a separate library, so that we don't keep separate, kind of duplicated, code which does the same thing in, I think, three places now: the resource service, the distributor, and the shipyard controller. We should use a single module with a nice interface for that in the future. And there was not more to that; we did more in this ticket, but we reverted whole parts of it, because we realized that it's actually not needed.
I
I
Currently, when the distributor is running, it is sending this heartbeat to the Keptn control plane, where it then receives the current subscriptions for itself and can react if subscriptions have been changed on the control plane, or in the bridge actually. And the problem here was: when a service is running with a distributor and you change, for example, the API token, or you lose the connection, then this distributor, and also the service, will silently just continue to run.
I
You could take a look at the logs, where you would then see, okay, there's a problem with sending heartbeats, but from the outside everything is fine, nothing seems to be broken. You don't immediately see that there's a problem with this distributor, that it cannot actually reach the control plane; it just does nothing, which is not so good.
I
I
You can now specify additional environment variables, which default to, I think for this one, ten or five, I don't know right now. But you can, for example, set the maximum number of registration retries. Registration means you start a distributor and it will try to register itself to the control plane, and this can now be configured to be retried, for example here three times; if it doesn't work, the distributor will exit. There is also a maximum number of heartbeat retries; that's for when you already have the distributor running and you change all of this on the control plane.
I
The API token, for example, or whatever other reason; then, after this number of retries, the distributor will also exit with an error message. Let's try it.
I will just start the distributor right now on my cluster. I have scaled down the API service right now, so it cannot connect to the control plane at all. You see, it will try three times; after the third one it will then give up and exit. What I will do now: I will scale up the API service.
I
G
I
And it should start complaining that something is not okay, like it did before, but actually after this, after the third time of trying to send its heartbeat to the control plane, it should then, in the end, give up. And what's also nice: it doesn't just do a fatal exit, a forced exit so to say, but it will also gracefully shut down. There's no real need for that right now, but it's prepared to do whatever is necessary to gracefully shut down. For example, you see...
G
I
The third time it didn't work, it will terminate the uniform watch, it will terminate the event forwarder, and then it exits. So this happens in a graceful manner and not just via os.Exit. And yeah, as I said, I still need to clean this up, and I will file the pull request today. There will be some additional environment variables.
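As a rough sketch of a bounded retry with graceful shutdown (the environment variable name, its default, and the helper functions are hypothetical, not the distributor's actual configuration):

```go
// Sketch: retry heartbeats a configurable number of times, then shut down
// gracefully instead of silently running on. The env var name, its default,
// and the helper functions are hypothetical placeholders.
package main

import (
	"log"
	"os"
	"strconv"
	"time"
)

func maxHeartbeatRetries() int {
	if v, err := strconv.Atoi(os.Getenv("MAX_HEARTBEAT_RETRIES")); err == nil && v > 0 {
		return v
	}
	return 3 // hypothetical default
}

func sendHeartbeat() error { return nil } // placeholder for the control-plane call

func gracefulShutdown() {
	// Stop the watchers and forwarders, flush logs, then return,
	// instead of calling os.Exit directly.
	log.Println("shutting down gracefully")
}

func main() {
	failures := 0
	for {
		if err := sendHeartbeat(); err != nil {
			failures++
			log.Printf("heartbeat failed (%d/%d): %v", failures, maxHeartbeatRetries(), err)
			if failures >= maxHeartbeatRetries() {
				gracefulShutdown()
				return
			}
		} else {
			failures = 0
		}
		time.Sleep(10 * time.Second)
	}
}
```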
I
J
I
K
A
G
Yeah, so not many fancy things from my side. The first thing is: we had a few troubles with GitHub these days, so we decided to enable the possibility for the user to have a create project or an update project command without specifying the user. This is because, especially on Windows clients, when was it, like a couple of days ago or a week ago, it was possible for these commands to work with or without the user, but we had some trouble; so now for both of them the user can be specified but is not mandatory.
G
So basically, before, we were using many models that were stored in the shipyard controller, and this was not very nice, because then we had a little hack in the go.mod where things were looking very ugly. The fix was, everywhere in shippy, to move these models to go-utils and then change the handlers and swagger files, so that we can reproduce the docs in a way that we don't need to copy these models everywhere. Nothing hard.
G
Now you have a bunch of new models shared in go-utils, and the shipyard controller is cleaned up of them. And then, finally, the struggle of this week for me was our integration tests failing. Bernd has already explained why it was failing for the resource service NATS subscription. I've also managed to fix a few things for the backup restore test.
G
We were not properly testing the service after it came back up: it was using exactly the same deployment, so we were not really noticing whether or not a new deployment was starting. We also had a few troubles with an old NATS server name that had to be cleaned up, and yeah, removing some parts that, from the resource service perspective, didn't really make sense.
G
B
Thanks. We have some work going on here, repairing some desks, so sorry for the background noise. Yeah, I'm going to present some simple pull requests that I did for the bridge, starting with allowing to configure the sendStarted flag for a webhook configuration.
B
So the request here was that in the webhook YAML, similar to the sendFinished flag, you can now also configure the sendStarted flag, which leads to the webhook service either sending the started event or not sending it, so your integration then has to take care of that.
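As an illustration of what such a flag can look like in the webhook configuration structure (the Go field names and YAML keys below are assumptions for the sketch, not necessarily the exact schema):

```go
// Sketch of a webhook configuration entry with both flags.
// Field names and YAML keys are assumptions, not necessarily the real schema.
package webhookconfig

type Webhook struct {
	Type         string   `yaml:"type"`         // subscribed (triggered) event type
	Requests     []string `yaml:"requests"`     // curl commands to execute
	SendStarted  *bool    `yaml:"sendStarted"`  // newly configurable: emit the .started event or not
	SendFinished *bool    `yaml:"sendFinished"` // existing flag for the .finished event
}
```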
B
This was now also added to the Keptn Bridge, so that when you configure a webhook in the bridge, you can configure the sendStarted and the sendFinished event, so both of those flags are now configurable via the UI as well.
B
Yeah, and the other two pull requests are bug fixes; first of all, a CSS fix for the dashboard.
B
Since this text here might get longer, the tiles for the projects ended up having different sizes. With my pull request we allow the text to break, so the tiles are the same size and split across the screen nicely.
B
And one more: when starting the Keptn Bridge we load some metadata, and especially when you activate the version check feature, we also load a version.json from get.keptn.sh. If that was not available or couldn't be loaded, if get.keptn.sh is not up and running, for example, then the dashboard or the projects were not loaded. This is now fixed: even if this version.json is not available, we still show the projects, but in that case the version check will not work and is disabled.
B
B
That's all from my side, handing back.
K
Okay, can you bring back your screen? Just because I have one question about the first pull request. Sure. Regarding the behavior: what's the situation when I now subscribe to a started or finished event, will these radio buttons then be disabled?
B
Yeah, the sendStarted... so the flags are not set and these radio buttons are disabled, as we had it with the finished event as well. I don't have a bridge running at the moment to show it, but we had the same behavior with the finished flag already. So only for subscriptions to triggered events do we send the started and the finished event from the webhook service, and subscribing to started or finished events will never send them.
D
So if there are no more items for the agenda, I think we can just proceed with, whatever, coding. We have quite a lot of people on the call; maybe there are some questions about GSoC or other related things.
D
If there are no questions, we can quickly close down. But if you want to ask something, just unmute yourself and we can discuss these topics.
D
Okay. So, while we're here, don't hesitate to provide any feedback about the contributing experience in the chat, for example how we can help with contributing, etc., because we are working on improving the contributing guidelines and other materials.