From YouTube: Keptn Community & Developer Meeting - Feb 1st, 2023
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
Oh perfect, so we can kick it off. So welcome, everyone, to the next iteration of the community meeting for the Keptn project. We have so many new faces, as I already said before we started this meeting, so maybe we can do an introduction round first. Starting topmost in my chat view is Daniel; would you like to introduce yourself?
B
Sure, hello everyone, I'm Daniel. I'm a developer on Tracetest.
C
Thank you. Yeah, so I'm Adnan, I run DevRel at Tracetest, so I'm here to talk about the boring stuff; when it comes to the actual fun integration things, that's where Daniel comes in. So yeah, really happy to be here.
D
Good morning. I'm one of the engineers at Tracetest as well, and I'm glad to be here. I'm super happy that we got to know Keptn; it's very good software.
E
Yeah, that's correct. So hi, hello everyone, I am Harshal. I am from India. I am a developer at Atlassian, and I am looking forward to contributing to Keptn as part of GSoC. I explored Keptn and went through some of the tutorials, and I also went through some demos of past years' projects. I found it really interesting, and I'm looking forward to learning and contributing to the Keptn project.
A
Let's go over to the agenda. First of all, thank you, everyone, for introducing yourselves, and welcome. For today I don't have many news about Keptn LTS and the Lifecycle Toolkit, because I would like to reserve more time for the Tracetest integration showcase and also the other demo from Florian about the custom metrics. But for GSoC, we are currently working on the application for that.
E
Yeah, yeah, sure. So there is one issue where they have mentioned two of the possible projects, and they will mention further projects as and when they are decided.
F
Actually, there's a link to what they want. Obviously you can talk on Slack, but there is a link. Yes, this is it, excuse me. If you are interested in being a mentor or a mentee, please put something here about what you're interested in.
F
And if you have a project in mind (anybody who has a project in mind), you want to create an issue in the appropriate repo, for whichever project it is, describe it in some detail as a standard issue, and then add it to this list. This is going to be the main place where they look to see what's going on and who's there. And if you want to be a mentee, you may have an idea for a project.
F
You may have looked and said: geez, you know, KLT could use such and such, and I'd like to do that. And also, you can watch here and you will see projects put up by other people too. So, does that make sense?
G
All right, thank you. Let me find the correct screen to share, and here it is. Let me just make this a little bit bigger. Yeah, today I want to give you a real quick update on what you can look forward to in the next version of the Lifecycle Toolkit, which is the Keptn Metrics. So we have added a new custom resource to the project, which is called the KeptnMetric, and with those you can refer to an evaluation or metrics provider.
G
You might have already seen those in previous versions of the Lifecycle Toolkit, where those providers were used by the KeptnEvaluation CRDs and the evaluation controllers. So a provider would be, for example, a Prometheus instance in your cluster, and in the KeptnMetric you can then define a query that you would like to use for retrieving a certain metric from that provider. The value of this metric will then be updated periodically, based on what you define in fetchIntervalSeconds, right?
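For reference, a KeptnMetric along the lines described here might look roughly like this (a sketch only: the apiVersion, names, and query are assumptions, not the resource shown on screen):

```yaml
# Sketch of a KeptnMetric custom resource; all names and the query are
# illustrative assumptions.
apiVersion: metrics.keptn.sh/v1alpha2
kind: KeptnMetric
metadata:
  name: cpu-throttling
  namespace: podtato-kubectl
spec:
  provider:
    name: my-prometheus-provider   # a metrics provider, e.g. a Prometheus instance
  query: "avg(rate(container_cpu_cfs_throttled_seconds_total[5m]))"
  fetchIntervalSeconds: 5          # re-fetch the value every five seconds
```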
G
So in this case, every five seconds we will retrieve the result of that query and store it in the status of the KeptnMetric CRD. And for accessing these values we have multiple ways. First of all would be, of course, to retrieve the status of the KeptnMetric custom resource, but other than that we also provide a specific endpoint for accessing those metrics, which is going to be hosted by default on port 9999 of the service exposed by the Lifecycle Toolkit.
G
So that's one way, and then finally, via a metrics adapter that's now going to be included in the lifecycle-toolkit operator, we actually enable integrating these metrics with the Kubernetes custom metrics API. That, for example, allows you to refer to these metrics as Object metrics within a HorizontalPodAutoscaler configuration in Kubernetes. So that's pretty cool.
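An HPA wired to a KeptnMetric as an Object metric could be sketched roughly as follows (the deployment name, metric name, and target value are assumptions, not the sample from the demo):

```yaml
# Sketch of a HorizontalPodAutoscaler driven by a KeptnMetric Object metric;
# names and the target value are illustrative assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: podtato-head-hpa
  namespace: podtato-kubectl
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podtato-head-entry
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: cpu-throttling
        describedObject:
          apiVersion: metrics.keptn.sh/v1alpha2
          kind: KeptnMetric
          name: cpu-throttling
        target:
          type: Value
          value: "10"
```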
G
So, as you can see here, we have a sample deployment that can scale between 1 and 10 replicas, and the decision about scaling up or down will be made based on our sample metric. So this would be the way you can refer to that, and by observing this metric the HorizontalPodAutoscaler will then be able to make decisions about scaling up or down. And then, finally, if you want to retrieve those metric values using kubectl, for example, you can do so by executing the
G
kubectl get --raw command, where you point to the custom metrics API, and here we have added some samples on how to retrieve that. So, for example, here: yeah, maybe that's a unique thing about this API, and something that took me at least quite some time to figure out. If you want to access this metric, you have to include it twice in the path. That's just the way the custom metrics API format works in Kubernetes, and then, yeah.
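The "twice in the path" quirk can be sketched like this; the namespace, resource group, and metric name below are assumptions, and the kubectl command is only echoed rather than run against a cluster:

```shell
# Hypothetical names, for illustration only.
NAMESPACE="podtato-kubectl"
METRIC="keptnmetric-sample"

# The metric name appears twice: once as the object, once as the metric itself.
API_PATH="/apis/custom.metrics.k8s.io/v1beta2/namespaces/${NAMESPACE}/keptnmetrics.metrics.sh/${METRIC}/${METRIC}"

# Against a live cluster one would run:
echo "kubectl get --raw \"${API_PATH}\""
```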
F
Sorry, what was that? What all can I use this for? Any metrics at all? You don't limit it?
G
Basically, in an app deployment that's done by the Keptn Lifecycle Toolkit, you can also refer to these metrics in the pre- and post-evaluations, so that's one way to use them, and yeah.
G
Exactly, yeah. So if there are no further questions, I will hand over to the next one.
C
Thank you, guys, thank you. Let me first bore you a bit by sharing a slide deck, because we all love slide decks. That one, and that will go to slideshow, perfect. Let's go to the beginning here. So yeah, I'll just give you guys a quick rundown of Tracetest and the problem we want to solve, and a quick intro into the pain points of testing right now, just so you know exactly what we want to solve. Now, Tracetest is basically trace-based testing for a cloud native world.
C
That's the kicker. We believe that testing is hard. It has been hard; it never really has gotten any easier, especially when you're thinking about testing distributed systems, which is just even harder. We have a lot of engineers on our team that have both extensive experience with testing, but also with building and developing distributed systems, and it's just not simple.
C
It's really hard to do, and the main pain points that we've found and pointed out: it really hurts when you don't really know where an HTTP transaction fails, especially if you have service-to-service communication. If you have requests where one service triggers another, and that then triggers a message queue, and that then triggers something else, you never really know where stuff fails, and it really hurts, because you can't really mock that in a test.
C
You can't really mock message queues, or SQS in AWS, or anything of that nature. But also, the last thing that's really, really painful is that you have to write so much code just to set up the testing, just to set up integration tests. You have to write a bunch of code and plumbing and whatnot else to just actually write the test itself. So it's a lot of pain points we want to solve, and we really want to make testing simple.
C
So this is something we would like to call the new way of testing, where we assert based on the spans, and then we can run tests for the entire distributed system and know exactly what's happening at which point of the system. So, a really quick rundown of how this works: as with any test executor, you do have a trigger, and that trigger triggers a test against your distributed system. That can be a REST API; it can be
C
gRPC, it doesn't really matter. But the kicker here is that once you get that response back, you not only get the response, you get the distributed trace as well. So Tracetest will poll your trace data store. It can be pretty much any trace data store you're using, from Jaeger to any of the vendors we were just talking about, like New Relic or whichever else you're using, and then you can write tests and write actual assertions based on the response, but also on the entire distributed trace.
C
So you know exactly what's happening, even if you have multiple microservices, even if you have database calls, etc. All of that will be available in the trace itself, and then, based on those assertions that you set, you can get test results for your tests and test suites. So it's really quite magical. So I reckon, with that quick intro, I can hand over to Daniel, and he can actually jump into the real stuff and coding and show you how it all works.
C
So let me stop the sharing and hand it over to Daniel.
B
Perfect, let me share my screen. Okay, okay. So hi, everyone. I should start with how this integration started between the Tracetest and Keptn projects.
B
It started with two issues: one on the lifecycle-toolkit project of Keptn, and another one on the keptn integrations repo, where we discussed two things. In one of them, how we could integrate Tracetest with Keptn and how we could evaluate our services by checking their OpenTelemetry traces; later this discussion evolved into thinking about SLOs for traces on the Lifecycle Toolkit.
B
So what was the idea? Thinking on the Tracetest side and talking with the Keptn side, we figured out that we could adapt the Tracetest CLI to run with Keptn and start to evaluate services with Tracetest. The integration works like this: you have a sequence on Keptn that triggers a test task. This test task can be seen by the job-executor-service, which is configured to run the Tracetest CLI through a job manifest that I will show to you soon.
B
By doing that, the Tracetest CLI will call the Tracetest server, run all the tests and all the assertions needed, and answer: it is okay, the service is right; or no, there is a problem and this validation is broken. An example of that can be seen here. This is a job config where we listen to the test cloud event, and we are configuring this job to run the Tracetest CLI. So we have a test definition YAML.
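A job-executor-service config along these lines could look roughly like the sketch below (the event name, image tag, server URL, and file paths are assumptions, not the exact config shown in the demo):

```yaml
# Sketch of a Keptn job-executor-service config that runs the Tracetest CLI;
# the image, URL, and paths are illustrative assumptions.
apiVersion: v2
actions:
  - name: "Run Tracetest"
    events:
      - name: "sh.keptn.event.test.triggered"
    tasks:
      - name: "tracetest-run"
        files:
          - /files/test-definition.yaml        # mounted into the job pod
        image: "kubeshop/tracetest:latest"
        cmd: ["tracetest"]
        args:
          - "test"
          - "run"
          - "--server-url"
          - "http://tracetest.tracetest.svc.cluster.local:11633"
          - "--definition"
          - "/keptn/files/test-definition.yaml"
          - "--wait-for-result"
```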
B
It says how you can call a service, which type of data we will pass to the service, and which validations we will do on the traces of this API. So, for instance, I'm using an API called Pokeshop. Let me show you here: it is our wrapper around the PokeAPI. It has actions to fetch Pokemons and to import Pokemons into its database, and the import is the use case here.
B
An API call that says: I want to import a Pokemon. If it receives this information, it sends you back an acknowledgement saying: here is an async process to import a Pokemon. Sorry, folks, I have a co-pilot right now; that's my daughter.
B
Another part is our worker, which does the async process behind the curtains, chatting with the PokeAPI. If everything is fine, it saves the Pokemon in our database. To finish, here is how we can use this use case. We can go to Keptn right now. We registered some projects on it; one of them is called tracetest-integration-pokeshop. On this project we have a service called pokeshop and a sequence.
B
In a few minutes, we will start to run Tracetest inside Keptn. What happens behind the curtains is that internally we call the Tracetest API for the Pokeshop. The Tracetest API will start a run like this. In this run, what Adnan described happens: we trigger this endpoint on the PokeAPI with this data, and it returns data, but also a trace of what is happening behind the scenes.
B
This is pretty cool because, with one simple API call, we can test, validate, and evaluate the entire process of importing a Pokemon, which is really hard to do in an integration test. And by doing that, we can add some assertions on this trace: check if we are enqueuing a message correctly, if we are posting to the API correctly, and if the worker is working fine, enqueuing a message, calling the PokeAPI and expecting that the PokeAPI
B
is okay, and whether the data was persisted in the database. So this is a great project for this test. I had another test about OpenTelemetry, but I will skip it because of time. What I wanted to show you after that is just how we can import a test definition on Keptn and run it with Tracetest; I mean, with that, we can evaluate a service on Keptn. That is it, folks. Do you have any questions?
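The kind of trace assertions described above might be written in a Tracetest test definition roughly like this (a sketch: the URL, span selectors, and attribute names are assumptions, and the exact schema may differ between Tracetest versions):

```yaml
# Sketch of a Tracetest test definition for the Pokemon import flow;
# endpoint, selectors, and attributes are illustrative assumptions.
type: Test
spec:
  name: "Pokeshop - import a Pokemon"
  trigger:
    type: http
    httpRequest:
      url: http://pokeshop.demo.svc.cluster.local/pokemon/import
      method: POST
      headers:
        - key: Content-Type
          value: application/json
      body: '{"id": 52}'
  specs:
    # the import endpoint answered successfully
    - selector: span[tracetest.span.type="http"]
      assertions:
        - attr:http.status_code = 200
    # a message was enqueued for the async worker
    - selector: span[tracetest.span.type="messaging"]
      assertions:
        - attr:messaging.system = "rabbitmq"
    # the worker persisted the Pokemon in the database
    - selector: span[tracetest.span.type="database"]
      assertions:
        - attr:db.system = "postgres"
```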
B
I had mixed feelings. For example, for the job execution engine, the job-executor-service, it was fine: I could figure out how I could do the things and use it directly on Keptn with the documentation. I was also able to deploy a Keptn microservices cluster and start to do the tests, but there were some parts that I needed to customize, because I imagine something was missing or a version was behind. Later I can point you to
B
these points and open a PR to help you with this documentation, but overall it was fine. Okay.
F
Actually, I have one more question before you go on: is this going to be listed on the Keptn integrations page?
B
Yes, this is something that I want to talk with you about, how we can do that. We have documentation for that, so probably, if you want it, I can open a PR to provide more data to list it there. That is it.
A
Here, there is this small getting-started documentation. I will add it to the document agenda.
A
One topic to discuss with the community: if we would have some Helm charts also for the Lifecycle Toolkit, where should this Helm chart live? Should we use the current Helm repository that we have for Keptn LTS and have a sub-package there, or would the community prefer a different repository?
I
Yeah, actually not me, but our colleague Moritz discovered that if we want to release, for example, KLT together with the Keptn charts from the currently existing Helm charts repository, then it might be problematic due to Artifact Hub, yeah. We didn't find an existing example where two different Helm charts are released from one repository. We're not 100% sure, it needs a little bit more research, but the easiest way would definitely be to have two different repositories for Keptn LTS and the Keptn Lifecycle Toolkit.
C
I can explain that as well. Yeah, the official documentation is the Tracetest documentation, so we'll go ahead and use that to provide the official documentation on the Keptn side as well, just so we can get that going as well.
C
So the link you shared for the Keptn contribution guide, that's all we need? There are no more specifics we need to go into to get it? Cool. Yeah, we'll also make sure that we get the integration on our website as well, so we have both the websites and the documentation going. So we have a landing page for that, and yeah, I think it would also be super cool if we can get this recording and put it on the website as well, just for reference.
A
For sure. We will publish the recording as soon as the meeting ends. Zoom usually takes a while, but then we will publish it on YouTube, on our Keptn YouTube channel. Are you in the Keptn Slack or in the CNCF Slack?
A
What was the other tool that you have from Kubeshop, Testkube? Very cool tool, I really like it, great work there. That's...
A
Perfect, yeah, because I also did something with that, where I was using the job executor to call, via curl, a test run, basically to then trigger a k6 test.
F
Interesting. Somebody who couldn't be here, Andres, who's one of our top people (of course, we were just talking blog posts, and he's been talking to people, I think), has this notion that testing has moved farther, moved to the right; there are so many microservices that this is becoming a bigger and bigger problem. I'm sitting there: oh God, it's too bad he wasn't here for this, but he will watch the recording later.
C
Oh cool, yeah. Regarding that, would you be up for writing a blog post just to announce the integration? I would love to work with somebody from your team to get that going as well. I'd just like to make it official. I think it's obviously going to help the documentation if we do a blog post, but I think it would also be very helpful just for reach, yeah.