From YouTube: Keptn Developer Meeting - May 19, 2022
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
All right, the recording is on, and welcome everyone to the next episode of the Keptn developer meeting. As I said before, KubeCon Europe is currently ongoing, where Keptn also has its own booth.
A
The info I got is that there is quite a lot of traffic there, which is great to see; there is a lot of interest in the project. That means we are already complete and can kick off today's meeting.
B
There we go. So I want to quickly show you a new feature of the prometheus-service which I have been working on. In short, it means we can now use `keptn create secret` to create credentials for the prometheus-service to access an external Prometheus instance for querying metrics.
B
Just to make this clear: we are not supporting a full end-to-end delivery use case with an external Prometheus instance here. This is only for querying metrics.
B
Then I am using `keptn create secret` with all those credentials and, very importantly, the scope `keptn-prometheus-service`. It now tells me the secret already exists, so I'm just going to delete it.
B
What I will do now is trigger an evaluation. I've configured this project for a simple evaluation using an SLI file and an SLO file, and we should see the evaluation appear here any second.
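For reference, an evaluation like the one shown in the demo can be triggered from the Keptn CLI roughly as follows. The project, service, and stage names are placeholders, and the timeframe is just one way to pick the evaluation window; this is a sketch, not the exact command used in the demo.

```shell
# Placeholder project/service/stage names; evaluates the SLOs
# over the last five minutes.
keptn trigger evaluation \
  --project=my-project \
  --service=my-service \
  --stage=dev \
  --timeframe=5m
```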
B
Okay,
here
we
go
1004,
that's
the
current
timestamp
and
we're
seeing
the
evaluation
has
failed.
We
can
see
prometheus
service
finished
with
result
fail
with
unable
to
retrieve
metrics,
and
if
we
further
inspect
the
cloud
event
we
can
see
it
tried
to
query
from
myprometheushost.com,
which
was
the
wrong
configuration.
B
We still have support for that in the code, but most likely the prometheus-service, with this change, will not have permissions to access that secret anymore. So we will make sure the documentation describes how to convert from the old secret creation to the new one, but essentially it's as easy as using create secret on the CLI, or going in here and adding a secret with the scope `keptn-prometheus-service`.
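As a sketch, the new secret creation described above looks roughly like this on the CLI. The secret name and the key/value pairs are illustrative assumptions; only the scope name comes from the demo itself.

```shell
# Illustrative credentials for an external Prometheus instance; the
# url/user/password keys are placeholders, the scope is the important part.
keptn create secret prometheus-credentials \
  --from-literal="url=https://prometheus.example.com" \
  --from-literal="user=admin" \
  --from-literal="password=changeme" \
  --scope=keptn-prometheus-service
```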
A
No problem, let me do that.
C
Perfect, yeah. So what I've done this sprint, mostly, was first of all removing the distributor from the api-service.
C
That makes everything with respect to achieving zero downtime a lot easier, because we don't have two containers that are dependent on each other in the same pod. And since the api-service was only using the functionality of sending events to NATS in the first place, it made a lot of sense to just send them directly.
C
So that's that. For the remaining sprint I've worked exclusively on testing our zero-downtime capabilities: basically setting up a test setup where I'm triggering sequences every two seconds, for example, and while I'm doing that, upgrading the images of services like the shipyard-controller, lighthouse-service, resource-service, etc., and seeing if the sequences I'm triggering are able to complete.
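The test setup described above can be sketched as a simple loop. All names, flags, and the upgrade command below are placeholders, not the actual test harness; it only illustrates the idea of triggering sequences continuously while the control-plane images are rolled.

```shell
# Fire a delivery sequence every two seconds in the background...
while true; do
  keptn trigger delivery --project=my-project --service=my-service \
    --image=docker.io/mongo --tag=4.2
  sleep 2
done &
trigger_pid=$!

# ...while upgrading control-plane images (value key is a placeholder),
# then check that all triggered sequences still ran to completion.
helm upgrade keptn keptn/keptn -n keptn \
  --set control-plane.shipyardController.image.tag=<new-tag>
kill "$trigger_pid"
```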
C
I found a couple of minor adjustments that still needed to be made with respect to the readiness checks of the Kubernetes pods running the task executor services, and in the new cp-connector library some minor tweaks were also necessary. But at this point in time it looks very promising that we are going to be able to achieve zero downtime, and this will also be the main focus of the development team during the upcoming sprint.
E
Good, you should be seeing it now. So in the last sprint I also worked mostly on the zero-downtime preparations for the services. As was already mentioned, the new cp-connector library was introduced, and now we are adapting our services to use it.
E
We are adapting our services to use the cp-connector library, and I was doing that for the lighthouse-service and the approval-service.
E
When
adopting
the
services,
we
came
across
a
problem
that
actually
the
cp
connected
library
does
not
have
the
functionality
to
forward
the
locks
forward,
the
analogs
of
the
services,
so
this
needs
to
be.
This
was
needed
to
be
implemented.
E
So
the
you
look
forward
new
look
forward.
The
component
was
introduced
to
the
cp
connector
library,
which
can
be
used
now,
also
and
yeah
upon
that
the
lighthouse
service
and
apollo
service
were
adapted
to
use
the
cpconnector
library.
So
that
means
the
helm,
charts,
also
the
code
in
the
in
the
services
and
everything
that
was
needed
also
with
integration
tests
yeah.
F
Thank you very much. Yeah, I'm going to present a feature for the Keptn Bridge today.
F
It's about making the filters on the sequence screen stable across page refreshes. Previously, as I can quickly demo, you were already able to apply filters on the sequence screen so that the list of sequences you're seeing is filtered, but this was not persisted.
F
So when you refresh the page, or visit the page with query parameters, they are loaded into the filter. Also, when you switch pages, or if you visit the page without any query parameters, it will load your old setup from local storage. So you can save these filters, or even share them by copying the link and sending it to somebody.
F
The filters are stored to local storage for each project separately, and as you can see, I have a filter here.
F
Obviously
there
is
a
sequence
with
a
staging
stage,
but
we
have
here
kind
of
the
paging
and
on
the
first
page
it
was
not
showing,
but
I
could
load
all
the
sequences
and
I
would
see,
for
example,
this
one
sequence
which
was
executed
on
a
stitching
stage
yeah,
and
for
that
also
we
have
the
clear
all
button
which
will
reset
and
then,
of
course,
when
you
visit
the
page,
you
always
have
this
reset.
D
So, thank you. On to the next screen.
D
Okay,
so
I'm
going
to
present
the
automatic
provisioning
feature
that
is
now
also
available
for
the
bridge,
so
this
is
the
pr
and
if
you
want
to
test
it
or
to
install
it
here
is
a
short
short
instruction.
How
to
do
that.
D
So,
basically,
what
you
have
to
do
to
have
it
in
the
bridge
is
to
set
the
control
panel
bridge
automatic
provisioning
message
for
a
customized
messaging
bridge,
as
well
as
the
control
plane
features
automatic
provisioning
url
to
tell
the
bridge
that
automatic
provisioning
is
now
provided
and
they
get
upstream
is
not
required
anymore.
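In Helm terms, the two values described above might look roughly like this. The exact value keys are assumptions reconstructed from the spoken description, not confirmed chart values; check the keptn/keptn Helm chart for the authoritative names.

```shell
# Hypothetical value keys; verify against the keptn/keptn chart before use.
helm upgrade keptn keptn/keptn -n keptn \
  --set "control-plane.features.automaticProvisioningURL=http://provisioning-service" \
  --set "control-plane.bridge.automaticProvisioningMsg=Your Git upstream is provisioned automatically"
```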
D
This also works for SSH; it's basically the same. And if you have set up a project without any Git upstream, it's pre-selected and a message will be shown that there is no Git upstream configured. You cannot save anything then, but you can still configure an HTTPS or SSH upstream if you want. So yeah, that's basically it. Any questions?
D
This
hint
will
be
shown
if
there
is
no
message
configured
so
this
this
is
the.
So
if
you,
if
you
don't
have
this
message
configured,
this
is
the
default
behavior
that
we
recommend
you
to
install
an
upstream.
This
is
just
a
special
case
if
someone
provides
an
installation,
the
this
custom
message
in
the
home,
values
that
this
is
displayed
instead,
I
think,
do
I
have
some
screenshots
somewhere.
I
don't
know.
D
There
yeah
this
message
still
is
displayed.
If
you
don't
send
the
message
so.
D
Okay,
any
further
questions,
otherwise
I
will
hand
over
to
class.
G
Okay, so let's start with the other Bridge updates. The first issue we had was that we didn't show a notification on update: if the resource-service was configured and you updated the project, there was no notification that the Git upstream was updated. I can show it quickly.
G
Here
we
now,
if
the
option
is
updated,
we
show
this
meshes
version
also
because
unsafe
this
was
missing
if
the
resource
service
was
enabled
so
yeah.
This
message
is
not
here,
so
just
a
minor
change.
Next
to
that
also
there
was
a
validation
error,
so
for
the
for
get
token,
the
validation
was
triggered
after
the
github
stream
was
updated,
but
only
in
firefox.
G
So
this
was
no
problem
in
chrome,
but
in
fairfax,
but
this
was
just
a
missing
type
for
a
button
that
triggered
the
whole
variation
again
after
resetting
the
token
so
yeah.
This
message
is
now
only
displayed
not
after
an
update
then
the
next
thing.
G
Here
we
have
some
performance
improvements
in
the
project
part
because
for
the
environment
screen
we
are
loading
all
sequences,
the
first
sequence
for
all
services
for
all
stages.
This
did
not
scale
well,
because
if
we
have,
for
example,
10
stages
and
10
services,
this
would
result
in
100
api
cores
to
the
sequence
endpoint.
G
This
is
now
improved
due
to
the
adjustments
on
the
api
that
we
can
now
provide.
Several
captain
context
for
the
sequence
and
now
up
to
100
sequences
are
loaded
with
one
api
coil,
so
instead
of
100
api
coils,
this
is
now
reduced
to
one.
G
Next
to
that,
we
upgrade
to
yeah
angular
12,
just
a
matter
change.
Nothing
new
on
the
on
the
ui
was
pretty
straightforward.
Just
some
new
strict
checks
were
added
and
one
or
two
new
configurations
for
the
for
the
tests.
That's
basically
no
major
exchange
or
something
then
also
for
a
sequence
screen.
I
reduced
some
api
calls
because
for
the
metadata
we
show
here
on
the
filter,
we,
the
coil,
was
triggered
every
time.
G
The
sequence
list
here
was
updated
and
this
is
not
needed
so
also
the
api
quests
in
this
screen
are
reduced
just
to
one
at
the
beginning
for
the
metadata
or
if
the
project
is
changed
yeah.
Let's
do
that
and
the
last
thing
for
sso.
We
had
a
problem
that
if
the
api
called
to
the
metadata
endpoint
of
the
api
server
service
failed,
then
the
bridge
was
not
able
to
be
accessed,
and
this
was
also
a
problem
that
the
logout
was
not
shown.
G
Yeah, the Keptn version couldn't be fetched; it's then "n/a" because the metadata is not available, but I'm still able to see the Bridge and also able to log out here, because this information was lost before. As you can see in the issue here, there was no option to log out or anything else, and this was adjusted.
G
This
doesn't
depend
on
the
metadata
endpoint
anymore.
There
are
also
some
additional
checks,
let's
say
for
a
great
project
and
the
update
project,
because
they
they
depend
on
the
metadata
endpoint,
and
so
we
just
permit
the
action.
If
the
metadata
endpoint
cannot
be
retrieved,
then
we
just
say:
action
is
not
permitted
and
also
for
the
update
project
thing.
We
say
here.
This
is
not
permitted
because
the
metadata
couldn't
be
fetched
because
maybe
it
isn't
available
or
there's
another
case
where
you
set
up
your
own
permission,
handling
and
returner
403.
A
And let me just open up this Miro board for a second, because I have two approaches here that can be compared with each other. The first one I call the stage-centric approach, which is basically a mapping of our current API endpoints from the configuration-service to the folder structure. This would end up in a structure that looks like that one, where we have a `.keptn` folder, then `/project` underneath, then the stages underneath that, and then within the stage.
A
We
then
store
the
configuration
for
the
services,
as
shown
here
by
the
yellow
lines,
those
folders
yeah.
They
are
actually
then
served
by
those
end
points
which
is
fine
and
from
what
I
understand
also
currently
implemented
in
the
resource
service,
which
already
supports
this
feature,
and
this
would
then
end
up
in
a
in
an
example,
as
shown
here.
We
have
project
config
stage
config,
and
here
we
have
the
service
configuration.
A
Then
the
definition
of
the
stage
and
underneath
or
below
is
then
the
service
specific
config
so
far
so
good,
but
a
proposal
would
be
to
also
bring
the
service
on
the
top
level,
meaning
to
have
project
stage
and
service
on
top
level
and
then
below
the
service.
The
subfolder
stage-
and
this
would
then
be
shown
here
by
an
example
where
we
have
again
project
configuration
service,
pure
service,
configuration
and
sorry
stage
configuration
and
then
here
are
service
config,
but
this
can
then
be
stage
dependent
and
this
approach
of
storing
a
conflict.
A
This
way
would
make
it
from
my
point
of
view,
a
little
bit
easier
to
then
do
some
overrides,
because
we
have
here
then
the
same
struc
sub,
folder
structure
in
this
service
folder.
As
well
on
the
stage
level,
and
we
could
easily
override
service
specifics,
config
with
the
recent
stage
defaults,
this
is
a
little
bit
harder
with
the
previous
approach
as
the
path
or
the
the
folder
structure
does
not
match
that
easily
and
makes
it
from.
A
From
my
point
of
view,
a
little
bit
is
difficult,
more
difficult
to
to
override
conflict
config,
but
this
is
currently
in
discussion.
A
I
would
like
to
invite
everyone
who
is
interested
in
that
or
has
an
opinion
to
first
of
all,
yeah
go
to
cap81,
read
it
and
then
also
provide
your
comment,
especially
on
this
question.
Right
now.
A
How
should
the
folder
structure
look
like
this
is
kind
of
my
proposal,
but
I'm
also
open
for
other
ideas
so
that
we
then
can
conclude
a
solid
solution.
A
All right, are there any immediate questions on that topic?
B
One concern I would have (we would probably be able to figure it out) is that, depending on the approach we take, it would be nice if we can automatically distinguish between "that's a service" and "that's a stage". I think with the approach you have here, it's very clear just by looking at it with your eye.
B
You
see
this
is
a
service,
so
this
is
a
stage,
but
we
need
to
make
sure
they're
separated
in
folder,
because
if
we
start
mixing
them,
which
is
probably
the
the
old
approach,
then
it's
very
hard
to
automatically
say:
okay,
that's
a
service
or
that's
a
stage
or
that's
a
default
or
that's
actually
configuration.
So
I
think
this
approach
nicely
separates
it.
H
Yeah, just a quick heads-up, am I audible? I was still fixing my hardware for half of the meeting. So, one update: we have the GSoC project announcements tomorrow. As an organization we will know them today, but the projects will be announced publicly tomorrow at the end of the day.
H
By now I don't know how many projects were accepted and what topics will be covered in these projects, but hopefully we'll get maybe up to five slots.
H
Let's
see
it's
very
unlikely
that
we
get
five,
but
who
knows
other
news
so
today
we
have
captain
project
office
house
just
in
two
hours,
so
if
you're
interested
please
join
and
participate
and
yeah
there
will
be
interesting
conversation
there.
I'm
not
sure
what
exactly
will
be
the
topics,
but
we
will
focus
on
cuny,
maybe
on
some
basic
captain
introduction
and
yeah.
Everyone
is
welcome
to
participate.
A
Cool
thanks
and
olek.
Do
you
have
also
some
news
to
share
coming
from
from
kubecon.
H
The
moment
no
so
important
you'll
see
that
the
captain
completion,
as
I
announced
onslack,
is
stuck
at
the
moment.
So
they
passed
all
the
public
comment
periods
until
may
25th,
at
least,
and
it
means
that
optimistic
captain
completion
date
would
be
the
beginning
of
july.
H
Other news from KubeCon that I do know: one important thing is that CDEvents was publicly announced on Tuesday. It was a bit unexpected for contributors, including me.
H
I'm
not
sure
why
the
announcement
happened
that
way,
but
the
fact
is
the
fact
that
the
announcement
went
public,
so
one
of
the
items
for
captain
would
be
to
adopt
the
new
version
of
stevens
once
we're
ready.
So
the
plan
is
to
have
alpha
specification
officially
announced
and
we
can
integrate
it
and
see
what
are
the
differences.
H
I
also
briefly
pitched
application
lifecycle,
services,
and
I
mean
application,
lifecycle
events
and
basically
see
the
events.
Contributors
were
rather
interested
in
having
it
as
a
part
of
cd
events
standard
instead
of
creating
a
new
one.
So
maybe
it's
also
an
item
we
could
discuss
with
everyone.
A
Okay, looks like that's not the case then. Thanks for joining in; I hope it was interesting for you. I found it very interesting: great presentations of all the features and improvements that have been implemented over the last two weeks. Stay tuned, new features are coming, a new release is around the corner, and see you in two weeks. Bye.