From YouTube: Keptn Community & Developer Meeting - October 21st, 2021
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us on Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
Okay, hi, and welcome everyone to the next episode of the Keptn dev meeting. Today is the week where we go over the issues and bugs that have been resolved in the last three weeks. But before diving into that, yeah.
A
Two weeks ago we talked about doing these presentations on the Keptn on Keptn deployment, which is our Keptn installation that is triggered every day based on the master build. But before the meeting kicked off today, we already identified the situation that the CI script for deploying the latest version of Keptn is not deploying the latest build when the trigger is not a scheduled one.
A
And have we identified any other blockers for doing that presentation on Keptn on Keptn?
B
Maybe create some better test data and examples there, because the project is currently created with empty data, basically. So maybe provide the Sockshop and some evaluations there.
C
The Sockshop itself might not work, or at least it's going to be complicated, because we have three instances there, all on the same cluster, as far as I remember, so they would overwrite each other all the time. But evaluations should be doable, and I mean delivery would also be doable, but the helm-service would need to be run on the remote execution plane.
C
Well, obviously, that would also be a good segue to the features that I implemented, but there's also the fact that you will overwrite each other. We have dev, staging and production, or whatever, there, and if you deploy to any of those, you will overwrite everything. I mean, for just test data it's fine if we can inject it, obviously, but it won't be functional in the sense that you could approve a deployment or something like that. So it's really just: here is some test data.
A
Is someone up for thinking about that, or working on it, to get some test data in there?
A
I think maybe first a little bit of brainstorming would be needed, to get an idea of what we can ingest, and then setting up the data itself.
B
A
Okay: here we have a script change and then, yeah, that's the data. All right, okay. But then we should be ready to do the next demo session based on that. Cool, then I would like to hand over directly to Christian, who will tell us about the features he has implemented. Therefore, I'll stop sharing.
C
Thank you. So I have two pull requests I want to present today. The first one, as already teased a little bit in the last section, is a new demo service for our integration tests. Our integration tests used to use the carts microservice, and that one is quite resource-hungry and also takes quite long to get up and running, it being a Java microservice. We know there are ways around this, with a different Java virtual machine, etc.
C
Excuse me: I think this is still something that should be done, but I can obviously show the results from a local integration test. Locally, I was able to finish this in five minutes, roughly. I think last time I ran it, it was two or three minutes; there is some network dependency, obviously, so it might take longer or not.
C
This has been in before, and I just added the retry loop around it, because it seems the service that we are deploying, the podtato-head service, is coming up so quickly that the information that there is a new service and a new pod running, etc., is not propagated quickly enough to all Kubernetes nodes. When Keptn runs the dial-timeout function here, we actually run into a timeout, although if you try it again five seconds later, it works. So it's a weird Kubernetes behavior.
C
I
can't
really
explain
it,
but
the
retry
obviously
will
fix
it,
and
for
now
we
have
three
retries
and
we
wait
five
seconds
in
between
so
in
the
best
case
it
works
immediately
and
in
the
worst
case
we
add
an
additional
15
seconds
to
our
pipeline.
D
Oh okay, so yeah, this sprint was a lot of bug fixes, so let's dive into the first one. The first problem was that a service incorrectly showed that there are open remediations. This is this icon here; even on this screen there was a red indicator that there's a running remediation. This is now fixed: if a remediation is not running anymore, the indicator is also gone.
D
This was because the fetch did not check whether the remediation sequence is running or not, just whether there are any remediation sequences at all.
D
That was this bug. Next to that, there was a problem with the webhook YAML files that are created on the bridge side. If a webhook subscription is updated, for example on carts-db, and you change it from dev to staging, then the webhook in dev was not deleted correctly.
D
It should now be deleted, and the correct one in staging should be created. Next to that there was another problem, yeah, because some defaults were removed; that's right, it is now empty and we have to check that, I guess. And now for carts-db this is correctly deleted and updated, and in dev and staging it's now correctly shown.
D
Then the next issue: in the sequence screen, the service screen and the environment screen we showed the approval task, and we allowed the user to accept or decline it. The problem was that if you accepted it in the sequence screen, it wasn't correctly updated in the service screen or in the environment screen. This has also been fixed: if it's checked in the environment screen or the sequence screen, it is now updated in every screen.
D
Yeah. Next to that, we had a weird behavior with our "show all SLIs" button: if there are more than 10 evaluations, or more than 10 SLIs configured, the button shows up to show all SLIs and not just 10, and sometimes it was not placed where it should have been. This has also been fixed, by taking the width of the chart for the positioning instead of a fixed value.
D
Yeah, and we always look at whether the version in the message is higher. And the last thing: we now added tests for our bridge server, so we now also use Jest for testing on the bridge server side, together with some other mocking tools. That's nearly everything, so thank you very much.
D
We intercept the calls when we want to retrieve things like the shipyard or other core elements, and we don't just call the function for it: we can also trigger an example request, so that the API for it, the retrieval of the parameters, etc., can be tested as well. And yeah, that's basically it. It's not merged yet; it can now be reviewed.
E
Yeah, first of all, this sprint from my side was also mostly about bug fixing. The first one I would like to present is one in the bridge. If the bridge was kept open for hours in the background, there was a weird behavior: yeah, some API calls might fail, for example if the internet connection is lost, or for some other reason.
E
The UI looked weird, because we were showing this error message on top while you still saw the view you were at, and this might happen in any other view as well. I changed this behavior so that this prominent error is not popping up; instead, if any API calls are failing, and this might especially happen for our polling process, we just show a toast message.
C
Yeah, can we think of a mechanism to see whether the user is active, and if the user is not active, I don't know, slow down those polling mechanisms? Because obviously we cannot prevent somebody from leaving it open, but there is always a chance that there's a memory leak or a network outage or whatever, and we would accumulate a lot of errors. If we slow this down, then there is a benefit that I can see.
E
We could definitely detect when the user takes the focus away from the application in the web browser, and either disable the polling entirely or, yeah, slow it down. That would actually be a new feature, one we have not considered yet, but it is something we can do.
E
In case it was not described here what the problem actually was: if you subscribed to a service delete event, the webhook service was triggered, but the webhook itself was already deleted. Those events are actually internal events, and we also had them hard-coded in our bridge, so we got rid of them; your integrations should not subscribe to service delete or service create events.
E
We saw the issue in Klaus's presentation, where he had a configuration in which service delete or service create was subscribed to, and yeah, subscriptions to those events of course need to be updated.
E
We also had a wrong error message when creating secrets: it was saying that dots are actually allowed for secret names. Since they are not allowed, this was just a change of the error message, so that it now says that only lowercase alphanumeric characters and dashes are allowed, and the name of course has to start with an alphanumeric character.
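The corrected rule (only lowercase alphanumeric characters and dashes, starting with an alphanumeric character) can be captured in a regular expression; this is a sketch of the constraint from the new error message, not Keptn's actual validation code.

```go
package main

import (
	"fmt"
	"regexp"
)

// secretNameRe encodes the rule from the fixed error message: only
// lowercase alphanumeric characters and dashes are allowed, and the
// name has to start with an alphanumeric character. Dots, in
// particular, are rejected.
var secretNameRe = regexp.MustCompile(`^[a-z0-9][a-z0-9-]*$`)

func validSecretName(name string) bool {
	return secretNameRe.MatchString(name)
}

func main() {
	for _, n := range []string{"my-secret", "my.secret", "-bad", "Upper"} {
		fmt.Printf("%-10s %v\n", n, validSecretName(n))
	}
}
```

With a rule like this, an error message can state exactly which constraint failed instead of describing the allowed characters incorrectly.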
E
Where do I have some onboarding here? I have a shipyard. I think you're redirected... you're still on the settings page, so you're here on the settings page and you see the project settings, and you can also click on services and create a service. But you're of course also able to click on the sequence screen, as you can see, and in that case you got an empty screen, and you actually had to refresh the browser to get this screen working correctly.
B
E
And the two features that were added from my side... actually, this one is a documentation thing: we documented the supported browsers for the bridge. It's not yet published, so I can just show you the commit here, because it's in the keptn.github.io repository. We have here our supported browsers, and this basically just says that we are not supporting Internet Explorer, because we got some requests there, and that the Keptn bridge is not designed to work on mobile devices. So we don't optimize the UI to work on mobile devices yet.
C
E
Good question. One problem that I also see is that we are not actively testing Safari in our tests, while the other browsers use the same engine, so those are covered by our unit and UI tests.
C
There are a lot of Safari versions out there that are not capable of parsing certain date-time formats, let's put it like that. There are helper functions around that, obviously, and Angular solves this problem somehow, but I'm just saying there are a lot of things that can go wrong, and there are some things that went wrong in the past. So especially for Safari we will need something there.
E
Okay, thanks for the feedback; we'll note that and adjust it. One more thing I wanted to show: we now show the payload of the last event, or rather, you can view the payload of the last event in the webhook config, or generally in any integration configuration.
E
So when you configure an integration, like adding a subscription, for example, you have this "show example payload" button here. Currently it's disabled, because I don't have a task selected, but once I select a task, for example the evaluation task, the button is enabled and it will load an example payload. Of course, I don't have an evaluation triggered event here, but I think I have a deployment event here.
E
It will load and display the deployment triggered event, and you can view it. Especially for the webhook configuration this might be handy, because there you can configure your API call, your webhook call, to include data from the event payload in the future. And as you have seen, if the event was not triggered in that project (we consider only events from that one project), then it will show that we could not find any events of that type.
H
Why is that taking so long... all right, now here we are. So this first issue was that queued sequences might not always have been completed in the order in which they were triggered, as you can see here, for example; the user provided us with a screenshot where that was the case.
H
What I have changed: the shipyard controller does not use the current time when it reaches the code where the sequence should be added to the queue; it will instead use the timestamp of the incoming triggered event in order to establish the ordering within the queue. So now this should happen much less often.
H
Nevertheless,
if
we
do
still
have
some
networking
going
on
there,
so
if
the
if
a
couple
of
events
are
sent
at
the
same
time,
of
course,
we
cannot
guarantee
that
they're
received
at
the
api
in
that
exact
order,
but
this,
I
think,
a
very
unlikely
edge
case.
That
might
not
might
not
happen
very
often
any
questions
about
that.
F
I would have a question. I also saw a kind of related issue in the past. Actually just a question: did you think about introducing some kind of sequence numbers for Keptn events?
F
Because this fix, as you said, just reduces the probability of things going wrong, but I think it does not fix the actual root cause behind it.
C
Or, sorry: for a monotonic number you would also need to have a service in between, because if you upscale...
C
F
H
We got an addition to the CLI where the config file that is currently used is printed out every time a command is executed.
H
That
itself
is
actually
quite
handy,
but
still
it
might
not
be
a
desire
to
have
this
printed
out
every
time.
So
that's
why
I
changed
the
log
level
of
this
message
to
verbose
so
that
it's
only
printed
out
one
when
the
user
explicitly
says
so
by
providing
the
minus
verbose
flag
all
right,
any
questions
about
that.
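Gating a log message behind a verbosity flag can be sketched with Go's standard flag package; the real Keptn CLI uses its own flag wiring and config location, so the names and the path here are purely illustrative.

```go
package main

import (
	"flag"
	"fmt"
)

// configFileMessage returns the "which config file is in use" message,
// or an empty string when the user did not ask for verbose output.
func configFileMessage(verbose bool, path string) string {
	if !verbose {
		return ""
	}
	return fmt.Sprintf("Using config file: %s", path)
}

func main() {
	verbose := flag.Bool("verbose", false, "print debug-level messages")
	flag.Parse()
	// The path is illustrative; the real CLI resolves its own location.
	if msg := configFileMessage(*verbose, ".keptn/config"); msg != "" {
		fmt.Println(msg)
	}
}
```

Run without flags the program prints nothing; with `-verbose` it prints the config-file line, mirroring the log-level change described above.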
B
H
I will explain it in the meantime, yeah. Usually, when a sequence is executed and orchestrated by the shipyard controller, it maintains a collection of triggered events for the tasks that are currently open, that is, tasks that have to be executed in order to proceed within the sequence.
H
Those triggered events are also made available via the public API, so that remote execution plane services can query them and see whether they should take over a task. If everything goes well, this collection is emptied once a task has been completed, but of course not everything goes well every time. We had some cases, for example due to race conditions or something like that, where the entries of the triggered-events collection remained in that collection even after a sequence had finished. And actually, those two issues here are kind of related.
H
So
also
we
had
the
case
where,
when
a
project
was
deleted,
all
the
relevant
collections
have
been
deleted
as
well,
but
due
to
some
timing
issues,
we
discovered
the
case
where,
for
example,
the
triggered
events,
collection
of
the
previous
instance
of
a
project
was
still
available
after
the
project
was
recreated
with
the
same
name,
and
so
in
the
pull
requests
I
made
for
those
two
issues.
I
made
sure
that
the
collection
is
cleaned
up
properly
and
we
don't
have
any
triggered
event
corpses
lying
around
here
and
yeah
for
each
of
those
prs.
H
The way it previously worked was that you could actually only cancel a sequence once it had actually been started.
H
That is, once the first triggered event had been sent. But we had some cases where, for example, a couple of sequences had piled up in the queue, and one sequence, for example, didn't finish properly and was in a stalling state. It was possible to cancel this blocking sequence, but it was not possible to first clear the queue of the other sequences, because they had not been started yet. Therefore this issue was created, and in the pull request that fixes it we made it possible to cancel a sequence before the actual execution has started, in order to clean up the queue.
H
All right, if not, let's continue with the next one. This is about the webhook service.
H
All right, yeah. We have a feature for disabling the automatic sending of finished events by the webhook service. This allows Keptn integrations that should be called by the webhook service to take over the responsibility of sending a finished event with the proper payload themselves. Previously, in that case, the webhook service just sent the curl request, and if no immediate error happened, meaning during the execution of the curl request, it didn't send any further events, because that was the intended use case. But that also held if the actual HTTP request resulted in a status code equal to or higher than 400.
H
So now the webhook service will inspect the curl command to be executed and check whether the fail-with-body flag is set; if not, it will add this flag to the arguments of the curl command. This means that with that flag, status codes higher than or equal to 400 will also be interpreted as an error of the curl command, and with the fail-with-body flag it's also possible to get the HTTP body, for example "404 Not Found". The webhook service will in that case send back a finished event with the proper error message. Additionally, when working on this, I also took an additional safety measure, because, as you might know, it's also possible to import secrets and use those in the webhook request, and with the change I made...
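The described injection of curl's `--fail-with-body` flag (available in curl since 7.76.0) can be sketched like this; the helper below is an assumption about the shape of the logic, not the webhook service's actual source.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureFailWithBody appends --fail-with-body to a curl invocation
// unless the user already set it. With that flag, curl exits non-zero
// for HTTP status codes >= 400 while still delivering the response
// body, so a caller can report e.g. "404 Not Found" in a proper
// finished event instead of treating the request as a success.
func ensureFailWithBody(args []string) []string {
	for _, a := range args {
		if a == "--fail-with-body" {
			return args
		}
	}
	return append(args, "--fail-with-body")
}

func main() {
	cmd := []string{"curl", "-X", "POST", "https://example.com/hook"}
	fmt.Println(strings.Join(ensureFailWithBody(cmd), " "))
}
```

Checking for an existing flag first means a user who set the option explicitly does not end up with it duplicated on the command line.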
I
I tried to be quick and have them already open. It's going to be all bugs, so it will be quick. The first one was about the upgrade command. Basically, when you did the upgrade from the CLI, some of the services were not upgraded, or rather, they were upgraded in reality, but it was not shown in the uniform. This is because, to save ourselves from a denial of service, the uniform was not updated in case there was no change in the service name. Now we also check the version, so it works.
I
Fine. The next one goes in pair with the fix on secret service names being too long. This is an error from Kubernetes, and the configuration service used to wrap it up inside a 500 status code; instead, we are now exposing this error with a proper 400-something code.
I
The next issue was in the materialized view of the shipyard controller: the bridge was showing a different score than the lighthouse evaluation score. This is because you can have other services that also return finished events for an evaluation, so now, in the materialized view, we are filtering so that we show the score from the lighthouse.
I
What else... this one is also tiny. It was an error message which was, again, not being properly reported; now this is correct. Basically, at the moment in which we are checking the SLO file, if we have a problem from the git side, we now get that error shown in the bridge as well. And then, what am I missing?
I
Finally, deleting a service did not delete its subscriptions: there was no check in the uniform for subscriptions that correspond to a service that is not there anymore. Those should be cleaned up now. This is done by changing the querying of the uniform database and updating each of the uniforms in case of a subscription which was based only on that service.
F
Thanks. Okay, this is my presentation; you should be able to see my screen. This will also be very quick. First of all, this was something discovered by Adam, and he actually also provided a fix for it. It has to do with the form of the shipyard which is uploaded through the bridge, which effectively caused the bridge to crash: if you upload a shipyard without stages inside it, so effectively an invalid shipyard file.
F
Then the bridge had all kinds of problems and basically crashed. He had already provided a very correct fix for this, but he had some problems with git, and this messed up his pull request.
F
I just took his fix, refined it a little bit, provided some tests and committed the fix, and this was already reviewed and is now on the master branch. You see, we basically just do a tiny check on the shipyard's stages slice, whose length should be greater than zero; otherwise we return an error and present it to the user, so that this form of shipyard doesn't even make it into the system and is caught early. And the second thing, also discovered by Adam, was a nasty out-of-memory bug in the distributor.
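The tiny check mentioned (the stages slice must have a length greater than zero, otherwise return an error) might look roughly like this; the struct is a simplified stand-in for a real shipyard definition, not Keptn's actual type.

```go
package main

import (
	"errors"
	"fmt"
)

// Shipyard is a simplified subset of a Keptn shipyard definition,
// just enough to illustrate the validation.
type Shipyard struct {
	Stages []struct {
		Name string
	}
}

// validateShipyard rejects a shipyard with no stages before it enters
// the system, so an invalid upload is caught early with a clear error
// instead of causing crashes in the bridge later on.
func validateShipyard(s Shipyard) error {
	if len(s.Stages) == 0 {
		return errors.New("shipyard must define at least one stage")
	}
	return nil
}

func main() {
	fmt.Println(validateShipyard(Shipyard{})) // prints the validation error
}
```

Validating at the upload boundary keeps every downstream consumer free from having to defend against a stage-less shipyard.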
F
I took some time, in discussion with Adam on his system, to get an idea of why this was happening. In the end, I was able to reproduce it locally with unit tests. Basically, we had some problems here: you can also see it in the logs that the cache we are internally using in the distributor component keeps growing and growing, and of course, as you see here, at some point the container crashes with an out-of-memory exception. Why was this happening?
F
It
has
something
to
do
with
multiple
events
still
in
the
system
which
had
except
exact,
same
cloud
event
id
my
suspection
is
that
this
was
not
introduced
by
the
user
in
this
case
adam,
because
we
actually
have
a
logic
in
place
which
is
just
regenerating
ids
when
they
entering
the
system.
So
this
was
some
kind
of
weird
behavior
of
I
think
the
shipyard
controller
which,
in
the
end
ended
up.
You
can
also
see
it
here
in
multiple
events
having
the
same
id.
This
is,
however,
something
I
could
not
find
out
why
it
happened.
F
So we still need to keep an eye on this, whether such a situation can still occur or whether it was fixed by some bug fixing in the shipyard controller or another component. However, although this should not happen, we can now survive such a situation, and it should not be bad for the performance of the system.
F
In
the
end,
I
also
took
the
opportunity
to
yeah
do
a
little
bit
of
refactoring,
as
you
see
there
quite
quite
a
bunch
of
changes
here,
to
make
the
distributor
yeah
more
reliable
and
to
improve
the
code
quality
a
bit
also
in
other
corners
of
the
code
base.
G
Yeah, I'm working on managing the CLI config with Viper. Right now we have our own implementation, with our own load and store CLI-config functions; I'm rewriting it to use Viper to unmarshal the config and marshal the config back into files. That's still work in progress.
A
Yes. Are there any other questions?
A
Okay, with that said, thanks for joining this meeting. Stay tuned, and see you next Thursday. Bye-bye.