From YouTube: Keptn Community Meeting - July 16th, 2020
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit#
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
A: Recording has started, which means: welcome, everyone, to this community meeting. It was a great week of work; a lot of stuff has been accomplished, and I think we have some really cool announcements to share at the end of the community meeting. To get there, let's have a quick look at the agenda for today.
A: Then we also have a reminder here for the Keptn mailing list and the Google group. If you want to get updates on Keptn, please sign up for this mailing list, and then you will get, I think, a weekly email with all the upcoming events and all the announcements that are important to know. This brings me now directly to the next section, which is the discussion of the pull requests that have been implemented this week. To get this started:
B: Thank you. As you can see on the screen, I would like to present three pull requests that I want to highlight. The first one is in the helm-service.
B: As you know, the helm-service takes care of the deployment, and for this the helm-service requires administrator rights. I have now added some checks for whether the helm-service actually has those rights. Basically, it checks whether it has kube cluster-admin rights, which is necessary in order to, for example, create namespaces and then apply the Helm charts of the users.
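Such a check can also be reproduced by hand with kubectl; this is only a sketch, and the service account name `keptn:keptn-helm-service` is an assumption, not taken from the pull request:

```shell
# Ask the API server whether the helm-service's service account may perform
# any verb on any resource in any namespace, i.e. whether it effectively
# holds cluster-admin rights (needed to create namespaces and apply charts).
kubectl auth can-i '*' '*' --all-namespaces \
  --as=system:serviceaccount:keptn:keptn-helm-service
# prints "yes" or "no"
```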
B: We moved all the distributors into the pods. For example, here I have the configuration-service, and this configuration-service now has its own distributor next to the container where the functionality of the configuration-service is implemented. This has major advantages; the biggest one, of course, is that we no longer have that many pods, and the distributor container always runs on the same node as the configuration-service itself.
B: So basically, we removed some network boundaries, because we can now send the events over localhost.
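The sidecar layout described here can be sketched roughly as follows; the names, image tags, and the environment variable are illustrative assumptions, not the actual Keptn manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configuration-service
  namespace: keptn
spec:
  selector:
    matchLabels:
      run: configuration-service
  template:
    metadata:
      labels:
        run: configuration-service
    spec:
      containers:
        - name: configuration-service           # the service's own functionality
          image: keptn/configuration-service:0.7.0
        - name: distributor                      # distributor as a sidecar: same pod,
          image: keptn/distributor:0.7.0         # same node, events arrive via localhost
          env:
            - name: PUBSUB_RECIPIENT
              value: "127.0.0.1"                 # forward events to the first container
```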
The last pull request here: I removed the check for whether the keptn namespace already exists, and removing this check now allows installing Keptn multiple times. For example, you can now install the control plane of Keptn, and if you decide afterwards that you would also like to have the continuous-delivery execution plane, you can simply type `keptn install` with the corresponding parameters, and this will then upgrade Keptn and install the continuous-delivery execution plane.
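As a sketch of that workflow (the exact flag name is an assumption; check `keptn install --help` for the version you are running):

```shell
# First install only the Keptn control plane ...
keptn install

# ... and later re-run install to add the continuous-delivery
# execution plane; this upgrades the existing installation in place.
keptn install --use-case=continuous-delivery
```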
D: I worked on this pull request: basically adding common labels to the Helm charts we already have, which means adding these lines to all of the Kubernetes resources that we have. As you can see here, we use the chart release name as the instance name, and the managed-by label will eventually be rendered as Helm if you are deploying using Helm. Then there is a label for which component the resource belongs to; for example, this Keptn route service belongs to the OpenShift part.
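The labels being described follow the recommended Kubernetes common labels. A sketch of how such a block might render in a Helm template (the exact template from the PR may differ):

```yaml
metadata:
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}      # the chart release name
    app.kubernetes.io/managed-by: {{ .Release.Service }} # renders as "Helm" when deployed via Helm
    app.kubernetes.io/part-of: keptn
    app.kubernetes.io/component: openshift               # which component this resource belongs to
```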
D: So eventually it will be rendered as openshift or something similar; it depends on the Helm template. There are many changes in this pull request, and most of them are almost the same, but I made one change that is maybe a bit more fundamental, about how the values are structured. Right now, as you can see here, every service has its own object, and within that object there are image and repository fields. Previously, the image name was hardcoded in here.
D: So if you look into this one, the Keptn route service, the version is also hardcoded, so there is no way for users to change the image version. What I did is move it to the values.yaml, so that users can override this value with the image version they want to use; because previously, based on this issue, sorry, I mean the issue here:
D: There is a patch given in this one, so it has a specific image. If we hardcode the value, there is no way for us to use a patch like this in the future. So I think that's all for the pull request that I've made, just the one. Thank you.
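The values change can be sketched like this; the service name and key names here are illustrative, not necessarily the exact ones from the chart:

```yaml
# values.yaml (sketch): one object per service, with the image
# no longer hardcoded in the template
remediationService:
  image:
    repository: keptn/remediation-service
    tag: 0.7.0        # users can override this, e.g. to apply a patched image
```

A user could then override it at install time with something along the lines of `helm upgrade keptn ./chart --set remediationService.image.tag=0.7.1`.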
B: That's really nice. Can you please go back to your values file? I really liked your approach, so much that I also implemented this for all services in the control plane too, but I have one little, minor difference: please do not specify the tag. I can show you afterwards how I've implemented this using the tag, because this will heavily ease our releasing process.
A: All right, the issues I want to present to you: the first one is that we did an update of the documentation of the Keptn Bridge. When you now go to the Keptn docs, you will currently find the develop section, but this will be turned into the release-0.7 section pretty soon. When you go there, you will find, in the reference section, the Keptn Bridge, and in there we now explain the features of the Keptn Bridge, which include the basic authentication.
A: Then there's the daily version check, the deep-linking mechanism, and also the functionality that is provided in regard to the delivery assistant. Feel free to check it out, to scroll through the documentation, and to get the information you want; it's all provided now. The second issue I was working on was related to doing a double check of the specification against the implementation. Let's just go over to the pull request.
A: What I did is I went to the keptn/spec repo. In this repo we maintain the specifications: the shipyard specification, the specification of the SLO and SLI files, and of the remediation.yaml. I took a look at all of those specifications, double-checked them against the implementation, and came up with a few minor issues, but no really big ones.
A: In the end it now looks good, so we are also ready to release this version of the Keptn specification.
A: Then, during this week, I also started to test the master branch of Keptn, and in this process I identified minor issues in regard to user messages. I polished those, and I also fixed broken links in the CLI and the Bridge. Let me just go there to briefly show you what I mean.
A: What I did is I took a look at all the references that are in the code and updated them to point to the new documentation, and I streamlined the user messages to be more user-friendly.
A: And last but not least, this is an important change for all of those who are using Dynatrace for monitoring: in the dynatrace-service, which is an implementation of the Dynatrace Keptn integration, I removed the part that installs the Dynatrace OneAgent during the configure-monitoring command.
A: The reason behind that is that it's actually kind of problematic when a customer or Keptn user already has Dynatrace installed on their cluster, and the dynatrace-service then tries to override the Dynatrace OneAgent, maybe with a new version. So instead of installing the OneAgent, I'm now showing a link to the documentation. Let me show you that with a real example: when I now run the command `keptn configure monitoring dynatrace`, the dynatrace-service starts to work, and instead of installing the OneAgent, it just tells me that the OneAgent is not installed on the cluster and how to install it.
E: Quick question on this: I think it's a great move, because I think it should be a separation of concerns; Keptn should not install the OneAgent. But because this is a drastic change, can we maybe make the output a little more clear?
F: Yeah, all right. The changes I want to present to you today are all related to KEP 18, which was the removal of the automatic configuration and setup of ingress objects for exposing the Keptn API.
F: This, in turn, will overall require a little bit more configuration but, on the other hand, give users more possibilities in terms of adapting Keptn to the environments they are running it on. All right, that being said, let's just jump to the first pull request; this one was related to the CLI.
F: In previous versions of Keptn, the CLI performed the install command by deploying an installer pod and providing it with the required values, for example the platform you're running Keptn on, the type of ingress you want to use, and so on. That installer then applied all the manifests and took care of the configuration, especially for the virtual service that was used for exposing the Keptn API, and finally the CLI tried to authenticate automatically at the Keptn endpoint.
F: Now the CLI no longer does that last part: it doesn't set up the ingress configuration anymore. But we will of course point you to the appropriate documentation after installing Keptn, in order to make it as convenient as possible for you to configure the API access. What's also worth mentioning here: in the install command of the Keptn CLI we are now using the Helm Go libraries, so if you want to install Keptn via the CLI, you don't have to have the Helm CLI installed on your local machine.
F: So now the new functionality of that command is just to set the credentials for the basic authentication of the Bridge. What has also been added: you can now use the configure-bridge command to output your current credentials for the Bridge. This is a nice convenience feature to retrieve the credentials a little bit more easily: you don't have to kubectl-get the secret anymore, but can use the Keptn CLI directly to do that.
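A sketch of the two ways to get at the Bridge credentials; the `--output` flag and the secret and key names are assumptions that may differ between Keptn versions:

```shell
# New convenience: let the CLI print the current Bridge credentials.
keptn configure bridge --output

# Previously you had to read and decode the Kubernetes secret yourself:
kubectl get secret bridge-credentials -n keptn \
  -o jsonpath='{.data.BASIC_AUTH_PASSWORD}' | base64 --decode
```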
F: There was also a config map used, for example, to provide the dynatrace-service with the Keptn API endpoint, and since we removed that config map, we replaced it with a new mechanism. So basically, setting up the dynatrace-service will require one additional setup step, which of course is documented here in the readme. You now have to provide it with a secret containing the API token for Dynatrace, the Dynatrace tenant, and then also the Keptn API URL and the Keptn API token.
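Such a secret could be created along these lines; the secret name and the key names are assumptions here, so check the dynatrace-service readme for the authoritative ones:

```shell
# One extra setup step: a secret carrying both the Dynatrace and
# the Keptn credentials for the dynatrace-service.
kubectl create secret generic dynatrace -n keptn \
  --from-literal="DT_API_TOKEN=<dynatrace-api-token>" \
  --from-literal="DT_TENANT=<tenant-id>.live.dynatrace.com" \
  --from-literal="KEPTN_API_URL=https://<keptn-api-endpoint>/api" \
  --from-literal="KEPTN_API_TOKEN=<keptn-api-token>"
```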
F: Of course, in the documentation we have a section where we describe in detail how to retrieve those values, depending on whether you want to use a load balancer for exposing the Keptn API, a NodePort service, or an ingress. Well, if you're using port-forwarding, then it won't work, but for load balancers, NodePorts, and ingresses it will.
F: We will provide you with the documentation for that. And finally, the last PR was quite a big one: it was fixing all the integration tests that were breaking because of the changes I just presented. Let's have a look at this build run. As you can see, there is no red build; everything has run through and has been marked with a very nice green color, which we always like to see.
F: So hopefully those integration tests will help us especially during the release process, and of course during the further development of Keptn, to guarantee that the latest state of the master branch is always working. All right, are there any questions about those?
B: So this one is not worth presenting. Okay, it is just so minor.
A: If it's that minor, then I will move it over to the done section. All right, and Florian, are you going to present the upgrader next time? "I will present it next time." Yeah, cool, thanks. Then, in the in-progress column, you will still see two issues from my side. One is the documentation on the new installer, or rather the new way of installing Keptn; it's done, but I'm still waiting for an approval to get it merged into the official documentation.
A: Okay, thank you very much, Andreas. Then, in the assigned-and-committed column, we have two issues that are related to the Bridge. One is maybe a bug, but it's not identified yet; Yemen told me that he will investigate this problem next week. The second issue that I brought in was that the version check did not work, but I have to double-check whether this is still true or not.
A: And this brings me now to the backlog. Like last week, we have the last and ultimate issue there for releasing Keptn, and I will now pull it over, because we are ready to create a release branch off the master branch and will then move on with the testing phase of the release. This will happen by tomorrow: tomorrow we will create the release branch and will then work through the checklist that we have provided here.
A: These two packages are required for Keptn, as they implement a lot of utility functions. After that, we have to take care of also releasing the keptn-contrib services that are required for the tutorials; you will see them there. When this is done, we will go over to releasing Keptn itself, so that we have a release branch that we can then test. All right, then the preparation phase.
A: This phase focuses on getting the documentation ready. It should be up to date by today, but maybe some minor changes will still be necessary.
A: And this brings us down to the testing phase. The Keptn team will perform tests on GKE and AKS: we will try out the full installation on GKE and AKS, and we will also run the quality-gates use cases on GKE and AKS. Next to those two Kubernetes environments, we are also testing Keptn on OpenShift 3.11.
A: And last but not least, we will do the official release of Keptn, and we will then publicly announce it via Slack and by sending out an email to everyone who is part of the Google group. Yeah, a long list; still a couple of things to do, but we are getting closer and closer to the point where we have Keptn 0.7 released.
A: Yep. All these tasks for the release will be done by, I assume, Florian, Andreas, myself, and Airmen, who will be coming back to the office by next week. We will keep the checklist updated so that you can see the progress of the releasing process and also the point when the release is available.
A: And maybe, if you want to start working on an issue, a good first issue: I pulled in two nice issues. Imre, you assigned yourself to the one that says the kubectl version check in the CLI should report if no connection to the cluster could be made. Are you still okay with taking that one?
D: Yeah, I'm still okay; I'll do it after this, once my latest pull request is reviewed. Okay.
A: The problem is that when you do a GET on the /event endpoint, the response returns a 500 error, but this is actually not accurate, because the internal service did not break. Instead of returning a 500 error, a 404 should be returned, which is closer to what the response should be: in this case an event is missing, and 404 says that a resource is not available or not found.
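The expected behavior can be probed with curl; the endpoint variable, token header, and query parameter here are placeholders, not the exact API shape:

```shell
# A GET for an event that does not exist should answer 404 (not found)
# rather than 500 (internal error), since the service itself did not fail.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "x-token: $KEPTN_API_TOKEN" \
  "$KEPTN_ENDPOINT/v1/event?keptnContext=does-not-exist"
```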
A: I will leave it in there in case someone from the community wants to take it; just feel free to write your name down or raise your hand by posting a comment, and then I will assign you to the issue. There was now a question coming in: will the upgrade path from 0.6.x to 0.7 be possible with all the changes to install? Yes, it will be, but maybe, can someone take this question?
F: Yeah, sure. The upgrader will take care of upgrading from 0.6.2 to 0.7, and it will migrate all the events and projects that were available in previous installations of Keptn, so you will still have the data available in your new Keptn installation.
A: This one does not, and the second job will upgrade it automatically. In case you are going with the job that does not do the Helm upgrade, then we provide guidance here.
A: You do this manually for each release, for each service that you deployed with Keptn, and with this guide you can upgrade from Helm 2 to Helm 3 manually.
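For the manual route, the Helm maintainers provide a `2to3` plugin that covers exactly this migration; a sketch of the usual sequence:

```shell
# Install the official migration plugin ...
helm plugin install https://github.com/helm/helm-2to3

# ... move Helm 2 configuration (repos, plugins) over once,
helm 2to3 move config

# ... convert each Helm 2 release you deployed,
helm 2to3 convert <release-name>

# ... and finally remove the Helm 2 data (Tiller, v2 release storage).
helm 2to3 cleanup
```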
A: A bit shorter compared to the previous meetings, but that's okay. This means that we are now at the end. Feel free to ping us on Slack if you want to get an update on the release.