From YouTube: Keptn Community Meeting - January 21st, 2021
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
A
Thanks everyone for joining. As usual, please put your name on the attendee list so we can keep track of attendance, and please follow our CNCF code of conduct. That basically means: please be nice to each other in our community and development meetings. We have a couple of agenda items for today. If you have anything more to add, please add it to the agenda, or add it to the agenda for next week if we cannot address it today.
A
The first agenda item is already assigned to my name; I briefly talked about this before. As you might know, we have already been participating in the Linux Foundation mentorship program that was previously called CommunityBridge, and we had a mentorship with two mentees. This mentorship program has now been renamed.
A
And Keptn is planning to participate in this program again. The application period for projects just opened up today, and we will participate with one or two projects. So please reach out, or check the CNCF mentoring page on GitHub, if you're interested in joining us or becoming a mentee in the Keptn project; we hope we have some very interesting projects.
A
We will have more updates in the coming weeks, as we are also developing our project descriptions, but just as a heads-up: we are already working on this. The next agenda item is...
C
Christian, just a note: I think we should mention that Ankit, our mentee from the last quarter, published a blog post this week about his experience in this program. For everyone who wants to be a mentee, please read the blog post.
A
I will link it here. It's a very nice blog post from Ankit; you might know him from the developer meetings as well. Thanks for mentioning this, Giannis.
D
I think it's enough if I just talk about it quickly. A couple of weeks ago we published Keptn 0.8.0 alpha, and obviously we published release notes. But if you just want a quick summary of what's new in Keptn 0.8.0 alpha, there are two YouTube videos.
D
I recommend watching the first one linked here: it is basically the highlights, based on the release notes, of what we've created with this version of Keptn. The second video in the list is a quick introduction to the new shipyard specification that we've published with 0.8.0 alpha.
C
Yeah, also just a quick announcement. Currently there are a couple of discussions going on around the Keptn project, also in regard to the final version of the 0.8 release. If you want to contribute, or want to bring in your opinion on certain topics, please feel free to follow those discussions and to post a comment there. We are very happy to see other ideas, thoughts, and concerns that we can then consider when building certain features.
C
Then, today we started a discussion about whether it's necessary to have a mapping between a service and a sequence, or whether we should provide smart defaults. The third discussion is about the behavior of a task sequence, and whether it should fail based on a failed task or not. Please also take a look there.
A
Right now we are basically starting the discussion; we are in the process of starting an integration for Locust, which is a load testing tool similar to JMeter, for example, and we are reaching out in Slack to find folks that would be interested in joining us in the development. We have already found some, so we are initiating this: we will first gather the requirements and then start coding in the next weeks.
A
Right now the discussion is ongoing, and you can also find the link to the Git repository where we will start development. I think that's everything for this agenda item, and the next one is an open question from the last meeting.
A
Should we streamline the versioning of the spec with the Keptn versions? Very good question. Has anyone already established, or do we already have, some conclusion on how to answer this question, or do we keep it for now as an open question?
A
Actually, in my opinion, it makes sense to have different versions for the spec and the implementation, because as we progress with minor versions we probably won't change the spec for minor releases. So it can live with two separate version definitions, in my opinion.
C
I think so too, because we had this discussion also on the level of the services, and we also decided to keep the services separated, even though you then need a matrix that tells you which version of a service works with which version of Keptn.
D
I agree on that in principle, but we should make sure the versioning we use is semantic.
D
With the changes we've made now with Keptn 0.8.0, we obviously also made a jump in the minor version of the spec, which is fine. But let's say we release Keptn 1.0 at some point; then I believe the spec should also be 1.0, because the spec in that sense is to be considered generally available and production ready. I don't know exactly what to call it, but I guess you know what I mean.
D
So maybe we need to align at some point, but that might create confusion again. Anyway, this could be one way to go with the spec version.
A
Okay, if there are no objections, I'll try to write down the conclusion: in general, keep separate versions, but align for major versions.
A
We have to align, e.g. Keptn 1.0 implements spec 1.0, something like this, to make it clear.
D
Thanks. Okay, so my contributions for the last couple of days start with the Keptn service template. I took on the task of making sure that the new cloud events are also available in the existing Keptn service template, and this task has proven to be quite some work, because in our eventhandlers.go we had handler functions for almost every cloud event that Keptn 0.7 had, and now, with the introduction of all the .started and .finished events, and even some more cloud events in Keptn 0.8, there was quite some work involved. So I'm not going to go through all the details here, just the rough edges of the things I had to do. I also want to mention that there will be documentation for all the developers that are writing integrations based on our Go service template, on how to migrate from the old version to the new version.
D
The most important change, and this is also kind of a summary of what's new in Keptn 0.8, is that we have migrated to a newer version of cloud events. We are now on CloudEvents version 1.0, which is reflected in the cloudevents sdk-go v2, so we're not the only ones that have mixed spec versions versus releases of libraries.
D
That's good to see. While in the old version of Keptn we relied on the go-utils package lib for many of our helper functions and cloud events, we have now introduced a new package called lib-v0.2.0, which includes all the structs, names, and utility functions you need to implement the new cloud events. Just as an example, I'll go down to the test.triggered event.
D
It's kind of an event that didn't exist before, to be honest, but this is what it looks like now: you basically pass in a Keptn handler, you have your incoming cloud event, and you have your cloud event data, which is a TestTriggeredEventData, and then you can start processing the event like you did before. All right, there are a lot more changes in this pull request, so if you're interested in the details, please take a look at the pull request.
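The handler shape described above can be sketched roughly as follows. This is an illustrative stand-in, not the actual go-utils code: the struct fields and the function name are assumptions, and the real template decodes the payload via the CloudEvents SDK rather than raw JSON.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TestTriggeredEventData is a trimmed-down, hypothetical version of the
// struct go-utils provides for the sh.keptn.event.test.triggered payload.
type TestTriggeredEventData struct {
	Project string `json:"project"`
	Stage   string `json:"stage"`
	Service string `json:"service"`
}

// handleTestTriggered sketches the handler pattern: decode the event
// payload into the typed struct, then process it as before.
func handleTestTriggered(raw []byte) (string, error) {
	var data TestTriggeredEventData
	if err := json.Unmarshal(raw, &data); err != nil {
		return "", err
	}
	return fmt.Sprintf("running tests for %s/%s in stage %s",
		data.Project, data.Service, data.Stage), nil
}

func main() {
	payload := []byte(`{"project":"sockshop","stage":"dev","service":"carts"}`)
	msg, err := handleTestTriggered(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg)
}
```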
D
It should be quite complete. Or, if you just want to wait until we have the documentation, that should be a lot easier, because this pull request obviously covers all possible cloud events that you would need to go through, not just the ones that you might need. All right, any questions or remarks so far?
D
Cool. The next one I want to share with you is that we've created a new GitHub Action. Actually, this is the first GitHub Action that we've created officially under the Keptn organization. This action is a helper for the internal GitHub CI pipeline that we're working with.
D
This helps with automation: say I want to build a Docker image for a pull request and publish it; then it needs to have a good name, or a proper version, and this action helps with generating such a version.
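The kind of version string meant here could look like the sketch below. The tag format is purely an assumption for illustration; the actual format emitted by Keptn's action may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// prImageTag builds a collision-free image tag for a pull-request build,
// derived from the base version, the PR number, and the short commit SHA.
// This format is a hypothetical example, not the action's documented output.
func prImageTag(baseVersion string, prNumber int, commitSHA string) string {
	short := commitSHA
	if len(short) > 7 {
		short = short[:7] // conventional short SHA length
	}
	return fmt.Sprintf("%s-PR-%d.%s",
		strings.TrimPrefix(baseVersion, "v"), prNumber, short)
}

func main() {
	fmt.Println(prImageTag("v0.8.0", 123, "abcdef0123456789"))
}
```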
D
Basically, what I did is take the bash script that was in our CI workflow, and with Keptn itself we obviously have the mantra of: let's not write a bash script, let's rather make something containerized, testable, and reusable.
D
This is exactly what we did here. We basically just say: hey, I want to use this GitHub Action that the Keptn team has published. And with that I also want to come to the conclusion: if you want to use such an action, you can obviously use ours. It is publicly available for everyone, so it's not just for us.
B
Okay, today I have three items to show you. The first one is a pretty cool extension for the distributor. As you know, the distributor up until now was mainly responsible for subscribing to Keptn events: for example, a deployment.finished event, an evaluation.finished event, or triggered events as well, in case you want to write an action executor and perform a certain task.
B
That is, a task that's been scheduled by the shipyard controller, and the distributor is basically the best way to do so: you can run it as a sidecar within the same pod as your Keptn service. That's the best practice we recommend, to have them within the same pod, and then the distributor will forward the events you are subscribed to to your Keptn service.
B
That means, for example, if you are operating a Keptn service outside of the actual cluster where Keptn is running, you can use the distributor: you send API requests to the distributor, and the distributor takes care of communicating with the Keptn API. So you only have to configure the access credentials for the Keptn API.
B
So the Keptn API endpoint and, of course, the Keptn API token go into the distributor, but you don't have to handle all that credential information within the Keptn service. Within the Keptn service, if you run it in the same pod as the distributor, you can just access those three API services within Keptn via those URLs. For example, if you want to communicate with the mongodb-datastore, you can access that via http://localhost:8081/mongodb-datastore, or, for convenience, we also provide different synonyms for those services.
B
Also, as part of this pull request, I've tried to rework all the environment variables and see which ones are no longer necessary and which ones were maybe duplicated for similar purposes. Now that list of parameters is up to date and, as you can see, it is quite long; but fortunately, for most of the usual use cases, it is sufficient to only set a very small subset of them. So here we have added two examples of how you might want to use them.
B
You basically only need to set the pub/sub recipient, which is the hostname of your Keptn service, the port it is listening for events on, the path and, of course, the pub/sub topic. The good news is, if you run your Keptn service within the same pod and, for example, use our Go service template that Christian just showed us,
B
You can also leave those three parameters empty, because by default the distributor will be set up to correctly send incoming events to the default host, port, and path that the service is listening on. All right, and the second scenario is: if you want to operate your Keptn service outside of the Keptn cluster, you obviously need to know how to communicate with the Keptn API.
B
So in that case, for the distributor you need to set the Keptn API endpoint and the API token variable. You can also set the interval in which the distributor will poll for new triggered events, but that can also be left empty; we have a default of, I think, 10 or 30 seconds. I think it's 10. So you don't necessarily have to set that. Those three parameters are the same as above, so again:
B
if you use the Go service template and run within the same pod, you can leave those empty, and then, of course, you need to set a pub/sub topic. The only limitation right now: if you run the distributor and the Keptn service outside of the cluster, you cannot use the NATS wildcard syntax.
A
It's a pretty amazing, pretty big thing. I do have one question about the beginning, when you shared the different URLs for the same thing. I was just wondering if it is a bit confusing, especially for the mongodb-datastore, since first we have mongodb-datastore, but datastore is also available, and then event-store, written a little bit differently, with the hyphen.
A
It kind of introduces, or might introduce, more confusion, because data store and event store could mean different things to different people: a data store is something where you really log or put some data, and an event store is obviously something where you only have events. Maybe, and this is just my opinion, keep the names that we usually had; I think we always called it the mongodb-datastore, so I would just go with mongodb-datastore.
B
The reason why I did it like that is that we have an open discussion about renaming the paths to our core, or API, services, and here I wanted to think ahead a little bit: in case we do, for example, rename the mongodb-datastore to /datastore, you can already use that path and do not have to update it afterwards.
A
Okay, yeah, maybe I'll take a look at the ongoing discussions and see what the outcome is, and then we can maybe streamline.
C
So, yeah, in the end it should be aligned with the API, and this is what we have ongoing and are discussing. What I don't like is exposing technical details, and this is, for example, the case with the /mongodb-datastore path: why should we expose the information that internally we are using a MongoDB?
E
What we can maybe do here, really for 0.8, is switch to event-store, which is the clean path, so to say, and for backwards compatibility also keep the mongodb-datastore path, which just redirects, of course, to the event store. In the next version we could then get rid of that mongodb-datastore path.
B
All right then, let's continue. The next issue is quite a small one. Basically, the requirement was: we are now switching to different event types, and one of them is evaluation.finished, which will now replace the previous evaluation-done event.
B
We needed to remove all the references to the old event types. For example, in our integration tests we had several places where we still referenced those evaluation-done events, and in this PR I basically went through all the places where that occurred and removed the references to those events. Nothing too spectacular here, but are there any questions about that?
B
It needs to check the version of the shipyard of the project this incoming event is related to, and, of course, if the shipyard does not have the correct version, the shipyard controller should exit with a task sequence finished event with a status of errored and a result of fail. So the code here is not that complicated.
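The guard just described can be sketched as a small predicate. The function name and the `spec.keptn.sh/` prefix check are assumptions for illustration; the `errored`/`fail` status and result strings follow the Keptn CloudEvents terminology used in the meeting.

```go
package main

import (
	"fmt"
	"strings"
)

// checkShipyardVersion sketches the shipyard controller's guard: if the
// project's shipyard apiVersion is not on the expected scheme, the task
// sequence is finished immediately with status "errored" and result "fail".
func checkShipyardVersion(apiVersion string) (status, result string) {
	if !strings.HasPrefix(apiVersion, "spec.keptn.sh/") {
		return "errored", "fail"
	}
	return "succeeded", "pass"
}

func main() {
	// An old-style shipyard fails the check.
	s, r := checkShipyardVersion("keptn.sh/0.1.0")
	fmt.Printf("status=%s result=%s\n", s, r)
}
```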
B
All right, if not, let's have a quick look at the work items, because there have been a few bugs fixed this week as well. The first one was an interesting one: a memory leak in the shipyard controller.
B
Actually, I discovered it when I was developing this new functionality for the distributor, where it polls the shipyard controller for triggered events. I noticed that the shipyard controller tried to create a new MongoDB connection for each request and didn't close it properly, and I fixed that with that PR. Dynatrace helped me to validate the fix I made: I could use it to look at the memory consumption of the service. So this is before the fix.
B
As you can see here very nicely, the memory grew, then the pod crashed because of an OOM error, then the memory grew again, and that kept on happening. Then I made the fix, and after that it looked like this: of course, at the beginning the memory grew a little bit, because it needed to establish the connections to the MongoDB, for example.
B
But then, even though more and more requests kept coming in at a steady rate, the memory consumption stayed the same. So that's, again, a very nice example of how Dynatrace enables us to self-monitor Keptn, and to actually use it while developing Keptn to keep an eye on performance issues.
B
We also discovered a small bug in the helm-service: in the deployment.finished events we have the property deploymentURILocal, which is the URI within the cluster where the service that was just deployed is reachable, and we discovered that the port within that URI was not set correctly; it was actually hard-coded to 80. I fixed that with that PR. Basically, what I did here is, for example, when we have this service definition here:
B
We actually look at the created Kubernetes Service resource, go through all the ports, and then take the proper port. In that example it is 80, but if the created service is exposed at port 9000, for example, then the helm-service will take the port that is defined there. All right, I think that's it from my side, so I'm handing over.
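The port-selection fix can be sketched like this. The struct mimics the relevant slice of a Kubernetes Service spec, and taking the first declared port is an assumption; the real helm-service reads the created Service resource via the Kubernetes API and may apply additional selection rules.

```go
package main

import "fmt"

// servicePort mimics one entry of a Kubernetes Service spec's ports list.
type servicePort struct {
	Name string
	Port int
}

// deploymentURILocal builds the in-cluster URI for a freshly deployed
// service. Instead of hard-coding port 80 (the old bug), it reads the
// port from the created Service resource, falling back to 80 only when
// no port is declared.
func deploymentURILocal(svcName, namespace string, ports []servicePort) string {
	port := 80 // previous hard-coded value, now only a fallback
	if len(ports) > 0 {
		port = ports[0].Port
	}
	return fmt.Sprintf("http://%s.%s:%d", svcName, namespace, port)
}

func main() {
	ports := []servicePort{{Name: "http", Port: 9000}}
	fmt.Println(deploymentURILocal("carts", "sockshop-dev", ports))
}
```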
E
First question: is this big enough to see? That's good. Okay, I've picked only one issue which I think is worth mentioning today.
E
I've worked on this in the last week, and I think the pull request is still open, but I can show it to you. It's about a new, I think very convenient, feature, especially in the development phase, where you can use the CLI to basically watch the event stream of some of the commands you've issued via the CLI.
E
What's new here is that you can provide a -w flag, which is a watch flag, and if you do that, you can actually watch the events which result from your command. So you see the CLI is not terminating here; it basically continues to poll for the resulting events. You see, just one event came in here additionally, and I think that was already the last event.
E
So this is useful if you just want to take a look at what events are firing in the system when you do certain things on your cluster via the CLI.
D
Yeah, probably we would have to check. I think if we print it to stderr, then only the JSON payloads will be passed on, but we should take a look at this, because with jq this gets awesome formatting, colorized and everything.
E
Yeah, that's actually another topic we need to address: this one should probably not go into that standard stream, so that we have machine-processable output; but I think there's an issue for that. What you can also do here is output this in the YAML format, which is, in my opinion, much cleaner than the JSON output. So I've just issued the same event, new-artifact, with YAML output.
E
As you see, it's quite fast, already finished; again it outputs the stuff in YAML. What else can you do? Sorry for interrupting.
A
I think it's pretty cool, but the only thing that might be missing is a kind of delimiter, so that jq, or the YAML formatting, can tell the events apart.
A
They need to know when an event starts and when it ends. Maybe in the JSON payload just put a comma between the events, or have it as an array with commas between the events, so you can process it; and in YAML, three dashes between the events. But I'm not 100% sure.
E
Where was I? Yes, you can actually use that -w flag in other commands as well. If I take a look at keptn... let's see.
E
keptn get event: we need to pass in a Keptn context for which the events get filtered. Let's just look at this one.
E
You can do that with the --watch-time flag, and now it should actually stop after five seconds. It did stop here. So where else is this flag available? I think it's the send new-artifact command, which we already saw, then get event, and you can also use this flag with the start evaluation command.
E
Sorry, okay, like this. You see here the description of this flag: -w, or --watch, prints the event stream, and here the timeout, which defaults to max int or something like that, and it's in seconds. That's basically it. I think this feature is quite useful, especially when you're developing things and want to try stuff out and see what events are fired in the system.
E
The main logic for this is implemented in go-utils. When you are developing a service and want to use this feature, you just repeatedly poll for events filtered by some filter.
E
You can use the implementation here. I've updated the readme file in go-utils. Until now, it was only possible to query events from the event store. This was done as shown here: you basically create an event handler.
E
You create a filter, and then you ask the event handler to fetch the events which match the filter. Now, what you can do is wrap, so to say, the event filter with the event watcher. You can create it like this: NewEventWatcher.
E
The oldest event will then just be fetched from this timestamp here, and you can also provide a timeout, for example 50 seconds. If you have a watcher, you can just start watching: you provide it with a Go context, and it will return you a Go channel with all the events, and you can just loop over this channel to get the events. It will also provide you, not shown here, a cancel function where you can actively cancel the polling.
E
We should not do this every time from scratch, so please reuse this event watcher, or extend it if new features are needed here. That's exactly how it's implemented, or incorporated, in the CLI; it's basically just this code. That's it!
A
Okay, so if there is nothing to add here, then thanks everyone for joining, and see you all again next week, next Thursday. Please, everyone, go ahead and try out the alpha release and watch the videos from Christian; that will give you a good head start on the new format of the shipyard.