From YouTube: Keptn Developer Meeting - April 07, 2021
Description
Demos by Keptn contributors. Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A: So we are live, we are live! Hi, and welcome everyone to the next episode of the Keptn developer meeting. I will quickly start this meeting with two announcements to share.
A: Okay, now it's working. It's about two announcements we have to share at the beginning of the meeting, because two issues have been identified with the new Keptn 0.14.1 release. Both have been published and shared in Slack and in our upgrade guide, but I want to emphasize them again here in this meeting. The first piece of information is about the distributor.
A: Please upgrade your distributor to 0.14.1 and also make sure that the environment variable PUBSUB_URL is set to `nats://keptn-nats`. We have changed the name of the NATS cluster, and that's the reason why you need to make sure that your PUBSUB_URL now points to this name.
A: As said, information about doing this is provided in the Keptn upgrade guide; you can follow the link just here. The second issue we encountered is with Helm: when you do a `helm upgrade` and use the `--reuse-values` flag, Helm does not properly update the Helm charts of Keptn and you will run into a problem. Instead of using this flag, you need to go with the option of fetching the values first.
A: This is `helm get values keptn -n keptn`; you put the output into a values.yaml and then you consume this YAML while running the `helm upgrade` command. As you can see here at the end, I have added `--values my-values.yaml` and I'm no longer using the `--reuse-values` flag, so you are on the safe side when using `helm upgrade` to upgrade your Keptn installation.
C: I put a note on there, so I'll discuss it. I've seen a couple of times where you do `keptn configure monitoring` and it times out, and there are a couple of messages in Slack where we had some back and forth about the discrepancies between configure-monitoring and configure.monitoring, so I just want to bring it up and see what the latest is on that.
A: Yeah, we are aware that the events we are using here are actually not Keptn-conformant. As you pointed out correctly, we use configure.monitoring here, and it does not follow the pattern of triggering a task and then waiting for the started or finished event. We are aware of this problem.
A: Yeah, it will then follow just the default eventing standard, where you have a configure-monitoring.triggered event in order to inform the service that it needs to configure something. Once the service starts working, it sends back configure-monitoring.started, and once it's finished, it sends back the finished event, so that Keptn, as the control plane, knows about the completion of the task itself.
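A minimal sketch of the flow described here, assuming Keptn's usual `sh.keptn.event.<task>.<phase>` naming; the exact type strings are an assumption, not taken from the meeting:

```go
package main

import "fmt"

// Assumed event types for a "configure-monitoring" task that follows
// the standard Keptn task eventing pattern described above.
const (
	configureMonitoringTriggered = "sh.keptn.event.configure-monitoring.triggered"
	configureMonitoringStarted   = "sh.keptn.event.configure-monitoring.started"
	configureMonitoringFinished  = "sh.keptn.event.configure-monitoring.finished"
)

func main() {
	// The control plane emits .triggered; the service answers with
	// .started, performs the configuration, and closes the task
	// with .finished.
	fmt.Println(configureMonitoringTriggered, "->", configureMonitoringStarted, "->", configureMonitoringFinished)
}
```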
D: Okay, you mean like sending a configure-monitoring event through the CLI to configure monitoring?
A: Right. We will not delete this CLI command, but we will change it in a way that you can also add this configure-monitoring task into a sequence execution.
D: Cool. So for now this means we'll have to read the configure.monitoring event in our service, interpret it as a configure-monitoring event, and send that event back to the CLI, right? I think that's what the prometheus-service is doing at this point in time.
D: So the fix that we have right now in the prometheus-service is basically to handle the configure.monitoring event that the CLI sends and treat it as a configure-monitoring event. When the prometheus-service finishes setting up the monitoring, it just sends a configure-monitoring.finished event to the CLI, and then the CLI says: okay, I have received a configure-monitoring.finished event and I'm done with the monitoring setup, everything is good. So right now the service has to handle that.
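A rough sketch of that interim handling; the event type strings and the send callback are illustrative assumptions, not the actual prometheus-service code:

```go
package main

import "fmt"

// handleEvent treats the legacy configure.monitoring-style event the
// CLI sends as if it were the standard configure-monitoring task, and
// answers with a .finished event so the CLI can complete.
func handleEvent(eventType string, send func(string)) {
	switch eventType {
	case "sh.keptn.event.monitoring.configure": // legacy type (assumed spelling)
		// ... set up Prometheus monitoring here ...
		send("sh.keptn.event.configure-monitoring.finished")
	default:
		fmt.Println("ignoring event type:", eventType)
	}
}

func main() {
	handleEvent("sh.keptn.event.monitoring.configure", func(t string) {
		fmt.Println("sending:", t)
	})
}
```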
D: So my question was more along the lines of: until we have some consistency in the events, I guess the service is the one which will have to carry the fix, basically, for now.
A: All right then, we move on to Florian's bullet point. It's about zero-downtime upgrades.
G: All right, just one second; I'm probably going to hear myself talking again, so, following the usual approach, I will turn down my volume now, and if there are some questions afterwards, I'll be happy to take them. All right, what did I do this week? I did some tests regarding the zero-downtime capabilities of Keptn, because we want to be able to do upgrades of Keptn without interrupting the usage of Keptn.
G: So we want our APIs to be available while the upgrade is going on, and of course we also want to still be able to execute sequences and not have them end up in an inconsistent state because parts of the previous installation have been terminated. So, basically, what did I do?
G: For that, I created an experimental pull request where I implemented what I thought would be necessary to reach this zero-downtime capability, and based on that, I have created several implementation issues. So, just real quick, what will be necessary: we will first need to update the upgrade strategy of our deployments, because right now most of the deployments are using the Recreate upgrade strategy, which means that when upgrading, the previous pod will be terminated, and once that is done, the new pod will be started.
G: If we instead additionally say that at no point in time do we want zero pods of a particular deployment, then this will ensure that at any point in time at least one pod of the service will be available to handle incoming requests.
G: That, in combination with defining a grace period and pre-stop hooks, will allow us to provide zero downtime for our HTTP-API-providing services. The grace period just ensures that all current requests the service might still be handling are completed properly before the pod is shut down.
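As a rough sketch of what these settings amount to, expressed here with Kubernetes client-go types; the numbers and the container name are illustrative assumptions, not values from the pull request:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(0) // never drop to zero pods of the deployment
	grace := int64(30)                  // seconds to let in-flight requests drain (assumed value)

	strategy := appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
		},
	}

	pod := corev1.PodSpec{
		TerminationGracePeriodSeconds: &grace,
		Containers: []corev1.Container{{
			Name: "api-service", // hypothetical service name
			Lifecycle: &corev1.Lifecycle{
				// delay shutdown briefly so the endpoint can be deregistered first
				PreStop: &corev1.LifecycleHandler{ // named Handler in k8s.io/api < v0.23
					Exec: &corev1.ExecAction{Command: []string{"sleep", "5"}},
				},
			},
		}},
	}

	fmt.Printf("strategy: %+v\npod spec: %+v\n", strategy, pod)
}
```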
G: Then, what else? We noticed that we have a conceptual problem, or challenge, when it comes to the execution plane.
G: This concerns services that are using the distributor to receive and send events. Since the distributor is running as a sidecar inside the same pod as the actual service handling the events, like the lighthouse-service, for example, we might have scenarios where the distributor is already unavailable, but the service still wants to send outgoing events via the distributor back to Keptn. This would lead to a lost event, or vice versa.
G: If the distributor is already receiving events from NATS, but the service container is not ready yet, this would also lead to lost events. Therefore we decided to try out the new approach where we do the NATS subscription directly in the lighthouse-service, for example. So thank you, Ben, for implementing that. With that approach, we were able to make the lighthouse-service catch all events that are being sent, even during an upgrade, so those results were already promising.
G: Yeah, and if you're interested in further details, you can of course have a look at the issues that are linked here in the summary, and if you have any other questions, please feel free to reach out. I'll stop sharing now and turn up my volume again. I'm still hearing myself, but yeah, if you have any questions, please go ahead.
J: So, hopefully you're seeing my screen. The last sprint started with some integration test and unit test enhancements and fixes.
J: Also, fixing the proxy integration test to expose the proxy server as a ClusterIP service and not a LoadBalancer, as it was not necessary to have an external IP address exposed outside of the cluster, and also some fixes of the backup-resource integration tests with the configuration-service. As this service previously tended to be flaky, there were some improvements to make it more stable, and in the last week we didn't have one single failure from it, so I hope it will stay that way.
J: Automatic provisioning URL: it is enough to set it in the Helm charts, which will lead to setting the environment variable in the places where it is required. This URL will be called during the process of project creation, and the Git credentials data should be provided by this URL in the HTTP response. Keptn will use this data to create the project and use it for any other actions that are needed during the project lifecycle.
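To make that contract concrete, here is a hypothetical sketch of a service sitting behind such a provisioning URL. The path and the JSON field names are assumptions for illustration only; the contract described in the meeting is simply that Keptn calls the URL during project creation and the response carries the Git credentials:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// provisionedRepo is the Git-credentials payload the provisioning URL
// answers with (field names are hypothetical).
type provisionedRepo struct {
	GitRemoteURL string `json:"gitRemoteUrl"`
	GitUser      string `json:"gitUser"`
	GitToken     string `json:"gitToken"`
}

func main() {
	// Keptn calls this endpoint during project creation and uses the
	// returned credentials for the project's upstream repository.
	http.HandleFunc("/repository", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(provisionedRepo{
			GitRemoteURL: "https://git.example.com/keptn/my-project.git",
			GitUser:      "keptn-bot",
			GitToken:     "s3cr3t", // demo only; never hard-code real tokens
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```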
K: I think in the integration test of the provisioning-URL feature, you've used some kind of ready-to-use mock server which can be installed in a cluster and used for testing. From your experience with that, is it something worth following for other use cases as well, or was it too complicated?
I: And maybe something to add here: the mock server is actually set up statically in the integration tests, so you could go ahead, for example, and configure it anew for every integration test, which would be pretty useful, I guess.
I: If there's the need, we can just add a new integration test, configure the mock server however we want in that single integration test, and then reconfigure it for the next one, so it should be fine.
I: Yep, thank you. I can share as well.
I: So for me, I don't have any specific pull requests to show this week, or this sprint, because I mostly worked on a Terraform setup. That's not in our open-source repo right now; the cluster that I set up here lives alongside our integration test clusters that we already use on GCP.
I: And I set up another cluster for us for chaos testing. We want to implement chaos tests in the future using Litmus Chaos, and for that we needed a new cluster with some basic setup. So this cluster already has Litmus installed, including Gitea as well, and also an ingress-nginx, if we want to access anything from the outside. So yeah, that's what I mostly worked on.
I: It's all done with Terraform as infrastructure as code, so it should be super nicely extendable as well if we need any new clusters, and in the future it will also integrate the GCP integration test clusters that we have into that setup, so that it's all managed by Terraform and we basically don't have to do anything by hand there.
H: When the resource-service is enabled for a Bridge server, this new form is shown; the old one will still be there, without certificate and proxy settings. Here we can additionally configure a certificate: you can provide a certificate for security, and there is also a validation that it has to begin with BEGIN CERTIFICATE and end with END CERTIFICATE; if not, the field turns red.
H: For example, I have Squid installed locally on the cluster, and we can point to it. We can also skip the SSL certificate check here.
H: Skip it, and so on. Also the token: okay, we're not using that here, but this can now also be configured for SSH, though we still have to remove the username.
H: We don't have a token, because we have the private key here; there's also a validation for the key pair. So let's take this one; drag and drop is also available, and there's also a validation that this one needs to be right.
H: Next to that, the passphrase can optionally be entered for the SSH part, and that's basically it for this. Currently, the Git URL that is shown here for the repository is not supported; I guess it has to start with ssh://. Maybe we will change that; we should discuss this, but at the moment it's just this format, only with ssh instead of https.
L: So this sprint I did a bunch of tiny things. I'll start from one that still needs approval but is basically done. This is linked to a bug that was found recently.
L: I discussed this with Klaus and the other guys, and what we came up with is that, in the case of the webhook-service, we filter for projects: we can have only one project for each subscription. In any other case, the only thing we check is whether or not the service is configured, and if the service is configured, then we also check whether the stage is in the subscription.
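A minimal sketch of that matching rule; all type and function names below are hypothetical, just to pin down the logic:

```go
package main

import (
	"errors"
	"fmt"
)

type subscriptionFilter struct {
	Projects []string
	Stages   []string
}

// matches applies the rule described above: webhook-service
// subscriptions must name exactly one project; beyond that, we only
// check that the service is configured and, if so, that the stage
// is in the subscription.
func matches(isWebhook bool, f subscriptionFilter, stage string, serviceConfigured bool) (bool, error) {
	if isWebhook && len(f.Projects) != 1 {
		return false, errors.New("webhook subscriptions must filter on exactly one project")
	}
	if !serviceConfigured {
		return false, nil
	}
	for _, s := range f.Stages {
		if s == stage {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := matches(true,
		subscriptionFilter{Projects: []string{"sockshop"}, Stages: []string{"dev"}},
		"dev", true)
	fmt.Println(ok, err)
}
```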
L: That was a copy-paste mistake that got fixed. Skipping over: there was some inconsistency in our gitcommitid parameter. Some of the finished events also had it stored in the wrong places; for instance, inside the event itself there was a gitcommitid here and somewhere else. So this PR was a simple one, removing and cleaning this up. Finally, I worked on cleaning up the auth command in the CLI.
L: We had a few wrong messages that were due to the fact that these checks were done at the root command, and we didn't know whether or not auth was in the command specification. So what I did was just simply add, where is it,
L: simply add the function that checks whether or not we should skip the version check, to avoid this kind of error. Plus, we are now filtering every error returned by the API, so that we don't return the error code but instead return some user-friendly errors. And yeah, that's it, if there are questions; otherwise, I think I'll move on to Herman.
F: I'm going to present one pull request for Keptn's Bridge, which is about showing remediation sequences in the default color while they are running. Basically, we were displaying remediation sequences as failed, or red, or interpreting them as a problem, since in earlier Keptn versions we also had the problem events in the sequence. This is now updated, so a remediation sequence stays in the default color as long as it's running and there are no failed tasks in the sequence.
K: Okay, I have a bunch of tickets I want to quickly go over. The first one was this one, a chore ticket, which is just a cosmetic one where I went over the current code and replaced the wording, or the term, "unallowed" with "denied", since I think "unallowed" is not a proper English word, or something like that. This actually also came up in the last meeting we had, that the wording is not really good.
K: For example, we were also speaking about "blacklist", which is not totally correct, so somewhere in there was "blacklist". I also replaced the occurrences of that terminology, "blacklisted kube URLs" for example, with "denied URLs" and so on. So this is really nothing fancy here; I just replaced every occurrence of "unallowed" with "denied", or "blacklist" with "denied". Just a cosmetic thing. The next one: passing the date/time, the Git hash and the Docker build time to the Docker build in the release pipeline.
K: That was just preparation in our build pipeline, where we now pass this information on as build arguments to the Dockerfile, essentially, when the images are built. Thank you, Morgan, for reviewing the stupid errors I made there in the pipeline in the end. Also nothing big here.
K: We needed to do it for three workflows, I think: the CI workflow, the pre-release workflow and the release workflow, where we're just passing the build time and Git SHA to the Docker build.
K: The next thing was about making the distributor show this information, Git commit, build time and so on, actually a bunch more information, as you see in the screenshot here.
K: When you start the distributor in the current version, it prints this information on startup: Git commit, build time, start time, whether it's a remote execution plane service or not, and all kinds of other settings. These two things here at the top are coming from the build pipeline; all the other things are coming from Helm values or elsewhere.
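The usual Go pattern behind this is a sketch worth showing (this is not the distributor's actual code): the pipeline passes the values as build arguments, and the Dockerfile forwards them to the compiler via `-ldflags`:

```go
package main

import "fmt"

// These defaults are overwritten at build time, e.g.:
//   go build -ldflags "-X main.gitCommit=<sha> -X main.buildTime=<time>"
var (
	gitCommit = "unknown"
	buildTime = "unknown"
)

func main() {
	// printed on startup, like the distributor output in the screenshot
	fmt.Printf("git commit: %s\nbuild time: %s\n", gitCommit, buildTime)
}
```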
K: The next thing: there was a problem with the project name size.
K: For example, the allowed length may vary depending on what technology you're using. This is just a tiny PR which makes that configurable and also provides a default value for the setting via Helm. So the default project name length is 200, configurable by this value here; this will then be picked up by the controller and validated.
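In essence, the validation boils down to something like this hypothetical helper; only the default of 200 comes from the meeting:

```go
package main

import "fmt"

// validateProjectName enforces the configurable maximum that is fed
// in from the Helm value described above.
func validateProjectName(name string, maxSize int) error {
	if len(name) == 0 {
		return fmt.Errorf("project name must not be empty")
	}
	if len(name) > maxSize {
		return fmt.Errorf("project name %q exceeds the maximum of %d characters", name, maxSize)
	}
	return nil
}

func main() {
	fmt.Println(validateProjectName("sockshop", 200))
}
```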
A: Bernd, can I ask you a question? We now have a default of 200 characters for the project name here.
A: If I remember correctly, the helm-service actually has a restriction of 45 chars, meaning that when a project name is longer than 45 chars, the helm-service starts to complain. Is this correct?
A: I mean, I like having such a high number here, but I think the helm-service has some limitations there.
A: But yeah, if you could follow up on that, that would be great. Thanks.
K: Okay. Then this is the implementation part of that story, where we actually validate using that parameter, the project name max size.
K: Here we can actually take a look at how this looks. The motivation was to not hop via the distributor to reach the control plane, but to do it directly from your Keptn service via a library provided by the core team.
K: If we take a look here: I've named this library, for now, lib-cp-connector; "cp" stands for control plane, so it's kind of a control plane connector. Let's close this implementation; here's an example I've just created for demoing to you.
K: So actually, if you want to create a service without a distributor, you just need to provide some wiring code and feed it to the library. The first thing you need to do is create a Keptn API object with an endpoint and, of course, the token; there you have that one. The second thing is to create something I call a subscription source; that's something you use to get informed about subscription changes. You just provide it with the uniform API of the Keptn API.
K: The third thing you need to create is a connector, which will actually create a connection to the message broker. Here in my demo, I have port-forwarded the NATS message broker from my cluster, and I just used the localhost address with port 4222. Then you need to create what I call an event source, and there can be multiple kinds of such event sources.
K: One could use the public Keptn API to poll for events and hand them over, and there could be any kind of event source implemented; as long as they fulfill the interface, it's fine. In the end, you just provide the subscription source and the event source to something I just called the control plane, and then you need to register.
K: How do you register? You give it a context, as always, and something which needs to fulfill the integration interface. What does this look like? For now, it just has a method OnEvent, which will be called whenever an event comes in, and you also need to provide data via a RegistrationData method, where you provide initial data for the control plane: what is my integration name (here it's just "woopy doopy"),
K: where am I running, what is my version, and so on and so forth. You can also already provide initial subscriptions. That's still necessary, because we always had the possibility of letting the user create an initial subscription via an environment variable, that PUBSUB_TOPIC thingy; this is the equivalent. Later on, maybe this won't be needed anymore, because we'll have some kind of more central way of configuring subscriptions, but right now it is.
K: And you need to provide this method on your integration and just say what my default initial subscriptions are; here I don't provide any subscriptions. Okay, that's already it. This example here will not do anything with the event, but just print out "got an event" and the event type. Let's try to run that example.
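To capture the wiring just described in one place, here is a compact sketch. Every type and constructor below is a hypothetical stand-in mirroring the steps from the demo (API object, subscription source, NATS connector, event source, control plane, Register); it is not the real lib-cp-connector API:

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for the pieces named in the demo; the real
// library's names and signatures may differ.
type keptnAPI struct{ endpoint, token string }
type subscriptionSource struct{ api keptnAPI }
type natsConnector struct{ url string }
type eventSource struct{ nats natsConnector }
type event struct{ Type string }

// integration is what a Keptn service implements: react to incoming
// events and describe itself (name, version, initial subscriptions).
type integration interface {
	OnEvent(ctx context.Context, e event)
	RegistrationData() map[string]interface{}
}

type controlPlane struct {
	subs subscriptionSource
	evts eventSource
}

// Register wires the subscription and event sources to the
// integration; here it just simulates one incoming event.
func (cp controlPlane) Register(ctx context.Context, i integration) error {
	i.OnEvent(ctx, event{Type: "sh.keptn.event.echo.triggered"})
	return nil
}

// echoService mirrors the demo integration: it only prints what it got.
type echoService struct{}

func (echoService) OnEvent(ctx context.Context, e event) {
	fmt.Println("got an event:", e.Type)
}

func (echoService) RegistrationData() map[string]interface{} {
	return map[string]interface{}{
		"name":          "woopy-doopy", // the demo's integration name
		"subscriptions": []string{},    // no initial subscriptions, as in the demo
	}
}

func main() {
	api := keptnAPI{endpoint: "http://localhost:8080/api", token: "***"}
	cp := controlPlane{
		subs: subscriptionSource{api: api},
		evts: eventSource{nats: natsConnector{url: "nats://localhost:4222"}},
	}
	if err := cp.Register(context.Background(), echoService{}); err != nil {
		fmt.Println(err)
	}
}
```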
K: I run a sequence with the echo service and trigger it, and I see on the right side that my process just got that event: the OnEvent method was invoked and printed out that it got an event. Of course, here we should see that the shipyard-controller is actually waiting for the started and finished events and so on, you know that, but my initial implementation here is not doing that, so we'll just abort that sequence, and that's it. Good.
K: I think it's really nice that we can now just connect with our lib and don't need the distributor in every place. We figured out that the distributor on the execution plane is a nice thing to have, yes, because it makes you platform- and language-agnostic and so on.
K: The distributor itself could also just use that library, so we don't have duplicated code and so on, but that's something planned for the future. For now, this library only supports an event source of type NATS; the event source of type HTTP is still to be implemented.
K: So the lighthouse-service has exactly the same code here, subscription source, NATS event source, blah blah, you already saw that. On OnEvent, it just hands the event over to the event handler, which does whatever is necessary, and in the initial registration data it just provides these three subscriptions: the get-sli triggered and finished events and the configure-monitoring events. What I want to do, what would be better, is to read those from the environment, from environment variables passed to the lighthouse-service, but this is a proof of concept.
K: I just hardcoded it here. Yeah, that's already it from my side. There are more things I could speak about regarding that lib, but maybe not, because it's not finished right now; let's do that in the next meeting, maybe. It also provides more means: for example, if you get an event, you could just go to the context...
A: A little bit long, but important for everyone, because with this change the distributor will be gone, and so all integrations also need to adopt this new library and so on. I think it's definitely worth starting to talk about this topic today, as it will be part of the next developer meetings. Anyway, really great work; cool to see what has been accomplished in the last two weeks. Awesome.
D: I had a problem with the local setup, where Telepresence wouldn't work properly because of the distributor, but with that gone, I think we can start using Telepresence for local development, which is a big plus.
A: Cool, thanks for this hint. Yeah, and when the distributor is gone, I think developing integrations will also be easier, as you don't have to run two processes anymore, so this part will be easier for everyone as well.
K: I mean, I would not completely get rid of the distributor in other places; there might still be places where it's really nice to not be bound to Golang, for example. But it will certainly be easier, because the distributor will also just use that lib, and we don't need to maintain code twice, or even more often. Maybe the distributor will just be something that lives on there.
C: One other thing, obviously not to answer today, but Andi mentioned it as well: the waiting for started and finished events. If you don't receive started events or finished events, the tasks hang forever. I don't know if that's had any thought yet; if not, I'll just raise a GitHub issue and we can discuss it on there.
A: Just to clarify: whenever an integration sends back a started event but never a finished one, we run into the situation of having hanging sequences. But when the integration does not respond with either a started or a finished event, then we time out after 10 minutes or so; there we have the fallback. It's just the situation when an integration acknowledged but did not finish, so to say.