From YouTube: Keptn Developer Meeting - July 14, 2022
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A: Yeah, yesterday evening, or maybe this morning for the US folks, they released a blog post where they made the results of the voting official, and you can now see on the CNCF landscape page that Keptn is prominent.
C: We completed the whole checklist for incubating projects, so from a logistics standpoint it's all done. Later there will be follow-ups, like transferring all the project infrastructure to the CNCF, ensuring that the CNCF has access to the Keptn accounts, and all the other hoops expected of incubating projects, but this part is completed.
C
Another
thing
which
is
on
our
list
is
security.
Audit
yeah.
The
ball
is
currently
on
my
side.
We
did
a
few
initial
iterations,
but
I
guess
full
audit
will
rather
happen
after
the
summer
vacation
break,
but
this
process
is
also
pending
and
we
know
that
there
is
one
issue
we
will
need
to
look
into.
Is
a
secret
storage
because
right
now
we
definitely
have
evidence.
So
that's!
C
Well,
in
theory
everything
so,
for
example,
currently
there
is
nuclify
account
it's
paid
from
my
credit
card.
There
is
a
bunch
of
other
things
owned
by
donna
trees
for
captain.
Ideally,
everything
should
be
moved
over
the
cncf
so
that
the
cncf
can
access
it
and
manage
that
as
it
expects
to
do
for
all
incubating
and
graduated
projects.
C
And
potentially
it
includes
twitter
accounts
and
all
other
social
media,
which
is
the
easy
part,
but
the
problematic
part
for
us
is
all
the
ecosystem,
hosting
hang
charts,
services
and
other
bits.
So
we
will
need
to
look
into
this
infrastructure
if
anything
is
hosted
on
gc.
I
believe
some
of
the
bits
are
hosted
on
the
then
rs
gcp
at
the
moment,
so
it
will
need
to
be
transferred.
E: All right, yeah, this one. Basically, when you first install Keptn, the pods error, they CrashLoopBackOff, and then eventually everything kind of settles down and things start running. I've been on a number of calls where... basically, my concern here is two-fold. One, if someone's new to Keptn and they install it and see things erroring, they're just going to go "oh well, it's broken" and not wait around. But secondly, and more important than the previous point...
D: Yeah, I can already tell you what's probably going on. Since we have this dependency on NATS, everything else basically has to wait for the event broker to start up before it can start, and the same goes for the shipyard controller: everything depends on it, so everything needs to wait until it is up. So we have kind of a staggered startup, and that's why stuff is failing in the beginning.
D: Not really. I don't think you can even open the bridge before that, I'm not fully sure, but we can definitely do something about the crash loops, I would say.
A: I want to discuss a bug that was recently solved by Klaus. Oops, wrong one, got the next one. So in Keptn 0.17 there is an issue... damn it.
A: I need to open it. When you open the bridge and try to copy your auth command from the UI, it will provide you with a wrong auth command, because the endpoint value of the auth command is filled with the wrong data: instead of using the publicly available URL of your Keptn installation, it tries to use the Kubernetes service name.
H: Okay, let's see. Today I have three pull requests that I want to show you. The first one is about integration tests. Recently we started working on improving their speed a little bit, and one part of that was... here we have it, yeah. So, for example, if you look at this issue, we discovered that in some places, during an integration test, we start a full delivery sequence of potato-head, and in some of the tests that's actually not really needed. For example, in the backup-restore test.
H: There we just want to verify that after doing a MongoDB backup and restore we still see all events via the API, so it doesn't have anything to do with a particular delivery sequence. That's why we removed it from that test, and the same goes for the SSH public key auth test and the proxy test, which basically check whether a new project can be created with SSH or with a proxy to a Git repository.
H
So
that's
why
we
that
we
didn't
really
need
the
delivery
sequence,
as
I
said,
and
in
each
of
the
cases
where
we
do
execute
a
sequence
just
for
the
sake
of
getting
events.
I
pretty
much
just
executed
the
dummy
sequence
very
simple
and
shorter
one
to
make
the
tests
a
little
bit
faster
and
then
also
in
some
places
we
didn't
make
use
of
the
temp
deer
functioning
go,
but
we
rather
just
created
directories
in
the
folder
where
the
tests
were
running,
and
that
was
a
little
bit
inconvenient.
H: If, for example, you were executing the integration tests locally on your machine, then for the MongoDB backup you would suddenly get a folder containing all the MongoDB data inside your clone of the keptn repository. Now we're making use of TempDir to create this temporary directory somewhere else, and at the end of the tests we remove it, so that's a little bit cleaner altogether. All right, any questions about this PR?
H: So this one was about ensuring that all the components involved in connecting to the control plane, subscribing to events, and sending events back to Keptn are shut down properly in case a pod gets killed, for example during an upgrade to a newer version of the service. And one thing that we found... let me just check if I can find it here.
H: We took a closer look at the NATS documentation, and it mentions that if you call the publish method to send an event to NATS, the event might not be sent directly to the server. NATS internally keeps a local cache and, of course, when that cache is full it will send everything to the server. But during shutdown, to really ensure that you don't kill the pod with anything left in the cache...
H: ...NATS recommends invoking the flush method, which ensures that all data is sent to the server. We added that to the disconnect function, which is called at the end of the execution of a Keptn service that is connected to the control plane via the cp-connector library.
H: This way we ensure that everything is written to NATS, just to be safe. We didn't actually encounter a problem where an event wasn't sent immediately to NATS after invoking the publish method, but this way we are really, really sure that everything is written.
C: It's really nice to have additional stability hardening, because every change here is quite important for Keptn users.
H: Exactly, yeah. All right, let's continue with the last one, which was about the webhook service. Here we had an edge case. Let me just open the screen, the issue here. So we had an edge case where the webhook service was subscribed to a started event, and in this case...
H
Obviously,
when
the
web
book
service
processes
that
kind
of
event,
we
don't
want
it
to
respond
with
a
started
and
a
finished
event
on
its
own,
and
usually
that
is
not
the
case,
but
here
we
had
an
edge
case
where
the
web
book
service
received
the
started
event
and
then
tried
to
retrieve
the
the
web
hook.
Configuration
gamble
to
look
for
the
for
the
appropriate
web
hook
to
be
executed,
but
at
that
point
in
time
the
project
was
not
available
anymore.
H
The
reason
for
that
is
that
we
have
this
on
pre-execution
error
method
within
the
webcook
service,
which
is
executed.
If
yeah,
if
there's
an
error
before
we
even
retrieve
the
appropriate
webhook
configuration-
and
in
that
case
it
always
responded
with
started
in
the
finished
event,
and
the
fix
was
just
to
check
for
the
for
the
event
type
of
the
incoming
event
that
it
tried
to
process,
and
only
if
that
was
a
task,
the
triggered
event.
H: I don't know why they kicked me out there, yeah. What did we do in the bridge team? We modularized the Angular application. Beforehand, as with single-page applications, all of the code was transferred to your browser; with the modularization, only the stuff you need for a certain page, your navigation path, is transferred.
G: So yeah, there we see it: there's a screenshot from before, from my installation on the Google platform. There we see that the main.js file has more than three megabytes, and some other files as well, so quite a big application. With the modularization you can see that we have more files, but they are smaller, and the main part of the JavaScript went down to 1.5 megabytes, so yeah, a lot of improvement. I think we will even have better results when it's all cleaned up; this is just the master branch from yesterday.
D: I'll quickly share as well here. Okay, so in the last sprint I mostly worked on a new pipeline for scans with common security tools, and I will quickly show off what I did. Basically, we now have a completely new GitHub Actions pipeline called security scans, which runs on a scheduled interval: it will run every Monday morning UTC. And basically, what it does is...
D
It
will
render
our
helm,
installer
and
then
scan
with
scan
that
installer
with
commonly
used
security
tools,
and
I
introduced
a
few
different
ones
kicks
which
is
kind
of
yeah,
it's
kind
of
a
best
practice
to
use
that
it's
a
security
tool
cubescape
with
different
testing
frameworks
that
they
have
available
and
then
cube
conform,
which
is
more
on
the
kubernetes
api
yaml
level
of
scanning.
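A scheduled workflow like the one described might look roughly like this. This is a minimal hypothetical sketch: the actual workflow name, job layout, and tool invocations in the Keptn repository differ, and the cron expression for "Monday morning UTC" is an assumption.

```yaml
# .github/workflows/security-scans.yml (hypothetical sketch)
name: Security Scans
on:
  schedule:
    - cron: "0 6 * * 1"   # every Monday, 06:00 UTC
  workflow_dispatch: {}    # allow manual runs too
jobs:
  scan-helm-chart:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Render Helm chart
        run: helm template keptn ./installer/manifests/keptn > rendered.yaml
      - name: Scan rendered manifests
        run: |
          # invoke KICS / Kubescape / kubeconform against rendered.yaml here
          echo "scanning rendered.yaml"
```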
D
So
it
ensures
that
the
daryama
is
correct
in
every
way
yeah
and
basically,
we
get
a
nice,
a
nice
output
of
security
results
when
we
go
to
the
security
scans.
Here,
let's
see
that's
some
newer
ones,
but
basically
you
get
for
every
every
single
one
of
those
scans.
You
get
it
get
an
output,
and
you
can
already
see
that
we
have
some
work
to
do
here,
as
oleg
already
mentioned
as
well.
D
So
those
results
here
kind
of
overlap
with
with
what
he
said
earlier
as
well.
I'm
actually
also
working
on
a
poc
right
now
to
integrate
sneak
with
our
our
tooling
here
to
have
one
additional
tool.
D
Okay,
then
I'm
gonna
go
right
over
to
some
improvements
that
I
already
did
as
an
outcome
from
from
those
scans.
So
for
now
it's
just
in
the
in
the
harm
installer,
but
there
will
be
some
some
other
security
fixes
in
the
code
directly
as
well.
D
I
kind
of
checked
against
common
common
usage
patterns
and
chose
some
very
generous
resource
limits,
so
you
shouldn't
really
run
into
any
issues
there.
But
still,
if
you
do,
you
can
still
adjust
this
in
the
helm
values.
So
this
is
just
customization
in
our
ham
values
file.
So
this
is
easily
overrideable
by
users
if
it's
necessary.
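Overriding such limits follows the usual Helm pattern; a hypothetical values override would look like the following. The service name and key layout are assumptions, not the actual keys in Keptn's values.yaml.

```yaml
# my-values.yaml -- hypothetical override of the chart's default limits
apiService:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 500m
      memory: 256Mi
```

It would then be applied with something like `helm upgrade keptn keptn/keptn -f my-values.yaml`.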
D: These are kind of best practices that we should follow, that we didn't follow before and are following now, basically for our Keptn services, for Keptn core. Yeah, there's more to follow: I have another PR open that's not merged yet, I'm still working on some more security improvements, and later on we can definitely look into the reports the tools generate and create some more action items from them. Basically, that's it for me; handing over to the next person, which is Anna.
B: Okay, so I have a bunch of PRs which were cleanups. Klaus already mentioned the integration tests; this one is just removing some redundant tests we had that are now done either as unit tests or in a separate pipeline, like the zero-downtime one. Nothing much to say about this. I also worked on the remediation service. This is an SDK-based service, and we moved most of the integration tests we had for this service to component tests.
B
I
just
wanted
to
shamelessly
make
some
kind
of
advertisement
about
sdk
about
how
easy
it
is
to
make
tests
using
it
and
how
easy
it
was
to
improve
the
coverage,
because
here
I
have
another
pr
in
which
basically,
we
reached
100
coverage
of
the
service
thanks
to
using
the
fake
captain
of
the
sdk
yeah,
nothing
much
to
say
about
this.
B
Another
thing
that
I
wanted
to
mention
is
this
pr,
which
is
on
the
migrating
towards
research
service.
These
are
actually
two
pr's.
First
one
is
a
go
util
change.
This
is
breaking
change
on
every
single
file
in
go
utils
in
which
we
mentioned
configuration
service.
Now
we
are
instead
using
a
resource
service.
B
This
means
that
the
library
is
now
incompatible
with
the
older
version
of
the
captain.
Of
course,
unless
you
install
by
hand
resource
service,
but
still
it
won't
work,
the
changes
in
captain
a
half
of
those
changes
were
done
by
florian,
I
think,
but
basically
right
now.
What
is
important
to
know
is
that
in
the
installer
file
we
have
both
services,
resource
services
and
configuration
services
pointing
to
resource
service.
B
This
should
help
with
zero
downtime,
the
upgrade
of
your
installation,
and
we
planned
then
later
on,
to
remove
the
second
one,
and
I
think
there
has
been
some
similar
changes
in
the
nginx
configurations
so
that
now
also
the
research
service
paths
are
are
allowed
and
what
else
was
their
flaw?
Was
it
the
the
accounts
that
changed
as
well
that
are
not
in
this
vr?
Probably
was
it
I
think
so
yeah.
B
So
if
I
recall
correctly,
also
service
accounts
are
for
both
services
right
now
and
later
on
will
be
migrated
only
to
resource
service.
Correct
me,
if
I'm
wrong,
yeah,
okay,
that's
I
think
more
or
less
it
from
my
side.
If
you
don't
have
any.
A: Okay, after this bad noise, I think, Paolo, you're next.
J: First of all, confirm that you can still hear me and that I'm not talking to myself. That would be great. Thank you, yes.
J: I just want to briefly introduce something we worked on over the last few weeks. Wait a second, I just have to select the current screen. Okay, I think this is the one. All right, you should be able to see my screen now. I don't know if any of you had a look recently at the Swagger UI of Keptn and saw that there is a new endpoint defined here. I just wanted to introduce it and explain what it is for, at which point we are, and so on.
J
So
the
input
endpoint
is
part
of
the
api
service
and
the
idea
behind
it
is
that
we
have
a
way
to,
let's
say,
apply
some
pre-packaged
configuration,
including
some
resources,
to
accept
an
installation
to
a
given
project.
That's
already
existing,
that's
its
purpose,
more
or
less.
It's
still.
An
alpha
is
still
not
done
completely,
it's
be.
For
the
moment,
the
current
implementation
has
gone
across
three
pull
requests,
we're
still
developing
it.
There's
still
some
feature
missing,
but
just
to
give
you
a
very,
very
basic
description
of
what
it
does.
J: The simple manifest contains some operations to perform and apply: for example, some API calls to create a service, to add some YAML file, to upload a resource, anything you can think of. The idea is that it can be done in some sort of generic way, as much as possible. So the point is that you prepare the package: a zip with everything you need, plus the sequence of tasks that need to be performed.
J: You select a project. Right now, on my local installation, I'm selecting a test-import project which has no service, and I will basically try to create a service and upload a couple of webhook YAML configs. You will see multiple copies, but this is for testing purposes rather than demo purposes; the archive is the same. So if I just execute this, the effect of the sample package is the same. For the moment there is nothing in the output; again, it's under development.
J
This
will
change,
but
the
idea
is
that
now,
what
I'm
going
to
have
as
soon
as
I
can
refresh
my
bridge
page,
is
that
test
input.
All
of
a
sudden
has
a
test
service
which
was
not
before
created
with
the
importing
of
the
package,
and
if
I
look
inside
the
service
itself,
if
I
manage
to
maybe
not
sure
the
idea
is
that,
with
inter
service,
you
can
see
some
resources
because
for
some
reason
don't
work
anymore.
But
yeah
trust
me
on
that
one.
J: Yeah, you're right, then it said "hello, modify". Thank you, yeah. The main point is, as I said, that we want to give Keptn the ability to import some sort of pre-packaged configuration. It's nothing completely new; it's something that could already be done with the API. The idea is that, instead of the user creating a script which calls all the right API endpoints with all the prepared input and whatnot, they just prepare a very simple archive.
J
There
will
be
it's
not
yet
there,
but
there
will
be
some
templating
capabilities
so
that
the
content
of
the
api
calls
and
of
the
resource
file
being
uploaded
can
be
customized
depending
on
environment
depending
on
variables
depending
on
the
stuff
but
yeah.
The
idea
is
that
it's
there,
it's
being
still
been
developed
and
don't
be
scared.
J
If
you
see
it,
if
you
want
to
try
and
have
fun
with
it,
please
do
we
still
didn't
publish
the
full
draft
of
the
of
the
manifest
and
the
format,
because
it's
still
under
development,
but
will
be
published
pretty
soon
on
on
the
some
documentation
site,
not
sure
exactly
where,
if
we're
going
to
include
directly
within
the
readme
or
somewhere
else,
but
we
are
going
to
do
that
pretty
soon.
A: I think everyone has had a chance to speak, and there are no messages in the chat, so: any questions from the audience?
A: Looks like no. Hence, I think we can close the meeting a bit earlier than usual. Thanks a lot, and very nice progress, as always. I'm looking forward to making use of the import endpoint to also simplify the usage of the tutorials. This way we can have, for instance, the potato-head project as a simple zip that the user can just upload via this endpoint, and then automatically a full project is already there, configured with services, a way to trigger new sequences, and the webhook configuration for that.