From YouTube: Keptn Architecture Meeting - Jan 10, 2023
Description
Meeting notes: https://docs.google.com/document/d/1y7a6uaN8fwFJ7IRnvtxSfgz-OGFq6u7bKN6F7NDxKPg/edit
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
We had the metrics server, a new project that we created during the Christmas holidays in order to abstract the SLI and SLO providers into a generic metrics expression in Kubernetes. You can then use it in your HPA or any other KEDA integration to scale your workload. And then we have application discovery.
B
Let's maybe start off with the initial ticket. This actually came up already, so Thomas actually created this initial Epic on how to, in the future at least, install and customize the toolkit. I did some initial research on that and looked at different operators that are out there in open source, checked what they're doing, and also checked the Operator SDK best practices and so on. Basically, the result was that most people have a very simple Helm chart and then use CRDs to actually configure everything.
B
And so this was my first revelation. I had a second research ticket to actually do that in the end and do a proof of concept: to create a — let's call it KeptnConfig, for now — CRD, where you can configure stuff that's generic for the KLT operator. So nothing specific in terms of Keptn apps or workloads in that sense, but general settings that are independent of your apps.
B
So the thing that I actually used to test the whole thing is the OTel collector URL. And if we go to my PoC pull request here — oh man, if I find the right stuff here — we have a new API group, which is called "options".
B
So let's open this here, and you'll see in here, in the KeptnConfig spec, we have the OTel collector URL, which is defined as a string.

B
So yeah, this is super basic. In the spec here you would directly define your OTel collector URL as a string, with whatever URL you want.
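For reference, a KeptnConfig resource as described here might look like the following. This is a minimal sketch based on the PoC as presented; the API group "options", the version, and the field name are taken from the discussion and may differ in the merged implementation.

```yaml
# Sketch of the generic KeptnConfig resource from the PoC (names assumed).
apiVersion: options.keptn.sh/v1alpha1
kind: KeptnConfig
metadata:
  name: keptn-config
  namespace: keptn-lifecycle-toolkit-system
spec:
  # Generic, app-independent setting for the KLT operator:
  # where the operator should send its OpenTelemetry data.
  OTelCollectorUrl: "otel-collector.observability.svc.cluster.local:4317"
```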
B
And then, with the classic operator stuff, you have just another reconcile loop here: you have a controller for the KeptnConfig which reconciles that configuration. In here, it fetches the configuration first — just some error handling, and an info message, actually — and then in here we set the OTel collector URL inside the operator if it changed, and then re-queue after some time to check again. So the whole thing is basically built on the premise that we just do more of the same that we do already.
B
We have lots of other CRDs for Keptn apps, workloads and such, and configuration should just be another CRD, to give us the option to configure whatever we want in a generic way for the operator as well.
B
So this was the basis for the research I did here, and the cool thing is that we can quite easily integrate that into a Helm chart, so that the Helm chart itself can be super simple. The only thing you configure there is probably the namespace where you want to install the operator, and then everything else comes with the CRD, basically. And for now — if I can find it in here — I actually already set it up.
E
I had a small question for you, Moritz. Sure, I think that's great, but I was just wondering, while you were presenting the KeptnConfig CRD: it will probably happen that you could —
E
We could deal with several KeptnConfigs. So I can have a KeptnConfig for the namespace — I don't know, otel-demo — where I want to send it to a specific collector, and then I have another KeptnConfig, because I want to send it somewhere else. So having multiple exporters or multiple endpoints, because potentially you have several teams dealing with several toolings. So we also need to —
B
Yeah, that's a good point, actually. For now, I actually have it in my research results here; I didn't think of that yet. Instead, I added some stuff here: a validation webhook to make sure that there's only one KeptnConfig at any point in time.
B
That's
what
I
thought
we
need
for
now,
but
if
you
say
it
would
make
sense
to
have
multiple
call
effects
for
different
namespaces
or
different
apps,
for
example.
So.
E
If you take Fluent Bit, for example, they have the notion of a cluster config, and then they have the config. So the cluster config applies for everything, and then you can have a specific config that you define on one specific namespace, and then that config will be taken instead of the cluster one.
B
Yes, I did, but in the end I decided to go with our own CRD, because we have the option to have conversions, for example between API versions, and we can very nicely have validation for the configuration in there as well. So we can have a separate validation webhook, and if you configure something in a wrong way, or somehow your configuration doesn't fit with any other config, we could immediately catch that and return a nice error message to the user. That's something we couldn't do with ConfigMaps.
F
Coming back to Henrik's topic: I think this is a much larger discussion, right. Because what we do with the lifecycle toolkit is — you correctly said it, Moritz — we are basically generating, or the idea is that we can generate, traces across multiple stages. We can generate traces for individual deployments, but apps span multiple stages, in theory, right. That also goes across multiple namespaces.
F
If you are separating your stages with namespaces in a single cluster. And then the question is really: where does this data end up — the observability data that we create? Should this data end up in one location, or do we really need to specify multiple locations? Do we need to separate it out?
A
Speaking of OpenTelemetry, I think it does not make sense to configure different places, because we're using the OTel collector, and you'd rather use the collector to multiplex the data across multiple observability providers.
B
I mean, I only took the OTel collector URL as an example here to implement my proof of concept. Something else to configure would be, for example, the version of our functions runtime, or maybe an allow and deny list for different pre/post-deployment tasks, and stuff like that. That's all stuff you could configure there.
B
That's just off the top of my mind. We started a list, but it's literally two things: the functions runtime version and the OTel collector URL, for now, I think, yeah.
B
There was some nice discussion in here, especially about the Helm chart in the end. The whole plan for this is to basically dumb down the Helm chart as far as possible, because our whole setup is based on Kustomize, and Kustomize doesn't play super well together with Helm, and we cannot — or don't really want to — switch off of Kustomize, because it helps us in many, many ways.
B
We
couldn't
really
feel
the
feel
the
same
hole
for
us
so
yeah.
The
plan
would
be
to
have
customized
and
then
a
super
simple
Helm
chart.
On
top
of
that,
where
you
can
configure
your
namespace
and
your
release,
name
normal
hemp,
stuff
and
everything
else
comes
from
CRTs.
A
No, no, I think this logic is built into the Helm chart directly, not via Kustomize.
B
We actually generate it from Kustomize, or we need to have a separate section — "let's not touch the Kustomize part", or something. The other option — but that's a long shot for now, I would say — would be having some form of management operator that actually deploys our KLT, and then that one would again be configured with the CRD.
B
Yeah, in general it would probably work, and we also get the benefit, for example, of having control over upgrading the operator and how that whole lifecycle happens.
G
I think it would be cool if we could automate the updates of the lifecycle toolkit, exactly, because then you could set up something like a release channel, or whatever, and could say: I always want to stay at the latest stable version, and it updates automatically.
B
Yeah, and then how to do the Helm chart in the end — this was also part of the Epic in the first place. We had open questions about whether we want to have the Helm chart in our main repo, or whether we want to split it into a separate one and use the chart-releaser action from Helm to handle releases of it automatically. I would say that's kind of a minor decision for later on.
B
It was posted yesterday, together with the announcement for this month, but not before, I think.
F
I think that's always good. Maybe just, you know, post it again. Maybe — Chobani, are we posting this recording on the YouTube channel? Yes.
G
Yes, yes, I can, thanks. Okay, just a second.
G
Yes, it looks good, okay. So I think it's always a good idea to share the screen, and I brought a bit of what I did here. So, at first: why? I will present a bit more about Keptn metrics. The main idea of Keptn metrics was to get metrics as a decision basis for the Kubernetes HPA, the Horizontal Pod Autoscaler. And what we thought about was that we might need metrics more often in our clusters than just for the Kubernetes HPA.
G
So,
for
instance,
we
need
Matrix
for
our
life
cycle
toolkit
for
abbreviations.
If
someone
doesn't
use
the
captain
lifecycle
toolkit
for
such
things,
they
might
want
to
use
metrics
for
Argo
decisions
for
for
Flagger
decisions
and
so
on,
and
while
creating
this
prototype,
I
thought
it
would
be
a
good
idea
to
Simply
create
more
lesson
interface
to
have
Matrix
as
custom
resources
in
a
given
in
disgust.
G
Therefore, at the moment we designed two custom resource definitions. One is the metric itself, and the second one is the metrics provider, whereby the metrics provider is implemented in the Keptn metrics operator. This operator is responsible for, at first, watching the metric resources, fetching the metrics from the observability provider, and writing the status back to the metric resources, so that the data from the observability provider can be queried in the cluster itself.
G
The second part we wrote — or I wrote — for this was the Keptn metrics adapter, and this is simply here for providing these metrics as custom metrics in Kubernetes. These can be consumed by the Kubernetes Horizontal Pod Autoscaler, and therefore everything is fine.
G
So this was only the big picture of it, but I think you want to see this a bit more in detail.
G
Yes, I did, okay. So I have a very short example of this somewhere.
G
If we take a closer look at Dynatrace, we see some kinds of metrics in there. So we have our metrics page, and we see the names of certain metrics. For instance, I have billing host units assigned, I have an availability rate, and so on. I took some self-monitoring metrics from a playground here. And at first, the first thing you have to do is to create — to set up — a provider.
G
The provider is defined in — is built into — the metrics controller, and it knows: if I have a provider called Dynatrace, it knows how to connect to Dynatrace. So, for instance, I have a target server, and I have a secret key, which has the Dynatrace token in there, and using this data it can connect to Dynatrace.
G
The second thing is: I have my metrics. For instance, I simply took some self-monitoring metrics in there and said: I have this query, I fetch this metric every 10 seconds, and my source is Dynatrace. And I can define as many metrics as I want, and the controller will fetch the metrics in this time frame.
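The two custom resources described here might look roughly like this. This is a hedged sketch based on the demo: the API group, kinds, and field names (targetServer, secretKeyRef, fetchIntervalSeconds) follow the discussion and are assumptions; the tenant URL, secret name, and query are placeholders.

```yaml
# Sketch of the provider CR: tells the metrics controller how to reach Dynatrace.
apiVersion: metrics.keptn.sh/v1alpha1
kind: KeptnMetricsProvider
metadata:
  name: dynatrace
  namespace: default
spec:
  targetServer: "https://<tenant>.live.dynatrace.com"  # placeholder tenant URL
  secretKeyRef:
    name: dt-api-token        # Secret holding the Dynatrace API token
    key: DT_TOKEN
---
# Sketch of a metric CR: fetched by the controller every 10 seconds,
# with the result written back into the resource's status.
apiVersion: metrics.keptn.sh/v1alpha1
kind: KeptnMetric
metadata:
  name: availability-rate
  namespace: default
spec:
  provider:
    name: dynatrace           # must match the provider above
  query: "builtin:service.availability"  # illustrative metric selector
  fetchIntervalSeconds: 10
```

Swapping the provider (e.g. to Prometheus) would, as described below, leave the KeptnMetric shape unchanged except for the provider reference and the query syntax.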
G
What's also possible is that you could mix the metrics between several providers. So you could also have a provider which is called Prometheus; the only thing we would change in this case is the source. The custom resource would be exactly the same — only the provider implemented in the metrics controller changes. Okay, so.
G
We can fetch our metrics, so we could also say: okay, describe the metric and, let's say, look at the data — and we also see the "seconds ago" and that the value is this one. So we can query them in Kubernetes as we want. But the second thing, which is more interesting and which was the main use case, was that we wanted to consume them via the HPA.
G
So via the Kubernetes horizontal autoscaler, to take scaling decisions. And therefore we could also say we want to get these things from the Kubernetes API — and this was the point where I really struggled for a while, because it's not so easy to find out what this path looks like.
G
So what you see here is that we are querying a custom metric which is namespaced, in the namespace "default", and this is a funny thing, because here we are referring to our own metric resource, which is residing in the default namespace. We see that it's of the kind KeptnMetric.
G
Unfortunately, I can't get the metrics now — the PoC is not with me today, I think — and I know what was happening here: I made an update, and this broke my whole thing.
G
So, in reality, normally you would get the response from the API, and you would also see some kind of value changing today. But I think I posted this somewhere in the Slack channel — just a second.
G
It always looks like this — so this would look like this. You could list the metrics at some point in time, or all of the metrics you have. And the second thing you might — yes, and this was the same, the second command. Yes, I will do this tomorrow in the community meeting. Implementation-wise:
G
This is almost the same as we had in the evaluation controller of the lifecycle toolkit. The only thing we changed was that we have a new controller here. So I created a new metrics operator here, I created a new metrics adapter, and the operator has one controller. This is the metric controller, and the logic of it is very small.
G
So if you take a look here, this is the whole reconciliation logic — it has around 120 lines. It simply watches the — yes, and yes, Henrik?
G
The same way, and to simply create a simple provider which simply fetches metrics. So if you take a look here, this is the Dynatrace provider. This is around 100 —
E
Do you think we can do, like, a custom one — alongside Prometheus and Dynatrace — like KEDA is doing it: an HTTP metric provider, where you just define an endpoint, the request, the headers, how to retrieve the metric? Then the community doesn't have to build a basic provider just for querying an API. If we have a generic HTTP metric provider like that, I think it will help a lot for our community.
G
If there is something else, we could build it. The only thing I wanted to show here is that the logic for evaluating some metrics is not very much in our case — it was around 40 lines of code. If we want to create another, generic metric provider — if such a thing exists, or if it can be used — yes, it would totally make sense.
G
The simple idea of all of this was that you simply have to create — this is more or less a unification of metrics inside of a Kubernetes cluster. When you think of Keptn before, you would have an SLI provider for each solution, and these SLI providers were not very slim.
G
To
be
honest
in
our
case-
and
this
is
once
the
this
is
thought
one
step
forward-
we
could
say
we
have
a
way
to
fetch
Matrix
in
a
generic
way,
but
we
could,
for
instance,
create
create
an
SLO
object
which
references
to
this
Matrix
and
simply
Compares
it
against
against
threshold
and
I
think
this
is.
This
would
be
a
very,
very
mighty
mighty
tool
to
be
honest,
so
because
everything
we're
doing
with
slos
and
so
on,
simply
in
the
entire
Matrix
I.
G
Okay, so at the moment the providers implemented are Dynatrace and Prometheus — the two things we also implemented in the evaluation controller of the lifecycle toolkit, so the implementation is the same. And in the future, the plan could also be to use this metrics interface for the Keptn evaluations. Then the Keptn evaluations would also work without specific providers, and use this metrics provider instead.
G
Yes, and I think that's almost it. For the metrics, the second component is the metrics adapter. This is the one for the HPA, and this is also a very simple thing, mostly boilerplate from another —
G
From another thing. The only thing I had to write was one provider, and this is the interface to our metrics interface, which says: yes, go to the object, the KeptnMetric, take what you get, and represent this as a value. But I think it is something which you can easily see in the pull request.
G
Yes, this would be it from my side at the moment. And I don't know if Christian — yeah, I showed him yesterday how the HPA things work.
G
Unfortunately, I broke the PoC today, but, as I said, I will try to make it work for tomorrow's community meeting — so, spoiler: tomorrow it —
F
— will work, great. I've got a couple of questions. First, thanks for this — and some of these questions might be stupid because of my lack of knowledge on this. But my first question is: in your metric definition you specify a frequency — I think you had 10 seconds. That means every 10 seconds you're reaching out to that API, pulling in the data from the last 10 seconds, I assume, and —
F
— there are some more specific things? Okay. And then — and here's where really my lack of knowledge shows — you're writing this back to the metrics object? That means you're writing a value every 10 seconds. Does this just overwrite the status, or does Kubernetes also keep a history?
G
At the moment, this is implemented in a very simple way, so we are not taking care of more data at once. So, as you said, the time frame — these are things, for instance, which could be held in, you know, the status object, to keep a list of the last 10 values or whatever.
E
Oh, this is — it's amazing, because it's a universal adapter for the metrics API. So, basically, I was expecting that when I need that metric from a Kubernetes perspective, it reaches out to the metrics operator, and the metrics operator at that moment says: oh, you need the metric, I'm going to pull it out. So basically you don't have that polling sequence that has to get the metric every 10 seconds.
G
Yes, but this would be synchronous, more or less, and this could lead to the same situation, because when you are using the Kubernetes HPA, this will also query every, let's say, 30 seconds. That's true, and therefore this would be exactly the same situation — and it could get even worse. So, for instance, if you are querying the object more often, then you might have lots and lots of queries against the —
G
I
can't
stop,
certainly
to
provide
API
if
you
are
not
using
caching-
and
this
was
also
the
reason
why
I
chose
executive,
this
architecture
to
say
I
want
to
specify
the
Matrix
as
a
custom
resource
in
a
cognitive
way,
queried
every
in
seconds
and
simply
simply
consume
it
when
are
needed.
So,
for
instance,
I
could
use
this.
This
metric
I
have
for
the
kubernetes
HPA,
but
I
could
also
use
it
for
Argo.
Okay.
At
the
end,
it
wouldn't
cause
more
more
queries.
G
Notice
could
also
be
used
for
evaluation
purposes.
This
was
the
reason
why
we
why
we
why
we
pulled
this
out
into
a
separate
component.
Okay,.
E
And how, in that case — at the moment you define a frequency, how frequently you want to collect the metric. But is there a way to say: oh, give me the data between, I don't know, 1:00 and 1:15?
G
So at the moment, as I said, this is defined in a very simple way, and we are only storing the last metric.
B
But then again, the metrics provider actually has that functionality already. So if you define your query correctly, you can have that anyway — just not configurable every time. You could have, for example, I don't know, the moving average with a time window of, whatever, 10 minutes. You just define that in your metrics query, and then you have it.
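The point about pushing the time window into the query can be sketched like this, assuming a Prometheus-backed provider; the metric name and the CR field names are illustrative, not from the demo.

```yaml
# Sketch: the 10-minute moving average is expressed in the PromQL query
# itself, so the single value the controller stores already reflects the
# desired time window.
apiVersion: metrics.keptn.sh/v1alpha1
kind: KeptnMetric
metadata:
  name: cpu-10m-average
  namespace: default
spec:
  provider:
    name: prometheus            # assumes a Prometheus KeptnMetricsProvider exists
  query: "avg_over_time(node_cpu_utilisation[10m])"  # illustrative PromQL
  fetchIntervalSeconds: 60
```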
G
There's nothing we have to — we have to look at it, because KEDA at the moment has a clear scope on autoscaling, and yeah.
G
They don't want to be a metrics provider for everything. And in our case, as we are more related to the whole deployment, the whole delivery domain and so on, I think it would make sense to have one metrics server which does this. So what I also thought of was to create a generic provider for a metrics server for KEDA, and then they could get rid of their integrations.
E
Beware, because they do a lot of things with jobs. They also have lots of provider adapters that pull data out of the different queues for their jobs use case as well.
G
It depends — I think it depends on how often you query the data.
G
So
if
your
query,
if
you
query
a
lot
of
cities,
every
every
one
or
two
minutes,
you'll
see
at
least
themselves
are
not
the
problem.
So
I
created
a
test
with
conflict
maps
at
some
some
time
ago.
I
think
I
created
40
000
conflict
maps
and
had
no
problem.
E
I had a note: maybe it's — Prometheus is becoming like a standard, or OpenTelemetry metrics is becoming like a standard.
G
There is an idea to expose the metrics we have as objects as Prometheus metrics, because this would be a thing where we could integrate with almost every tool — even with KEDA, or Argo, or Influx.
G
So,
and
this
the
nice
thing
is
that
this
wouldn't
even
be
much
effort,
because
it's
really
easy
to
to
expose
things
so
to
promote
to
expose
promises.
As
we
all
know,.
G
And this could be one benefit that Keptn metrics could bring.
E
Or you basically push natively. We have the option, when we deploy it, to have the exporter natively enabled, which means every time we provide metrics, we export them to a destination.
A
Because
I
find
it
a
bit
weird,
do
we
need
that?
Because
we
are
extracting
metrics
from
the
obserability
provider?
So
why
do
we
need
to
push
them
again
back.
E
Because at the end we are like a dictionary. Because we have, like, KEDA — KEDA is doing it only for the HPA, like mentioned, but here we're doing it for much larger scenarios. So we could be like a bridge between different providers, and then at the end we collect the metrics and expose them back. Okay.
F
But the default setting might be that the observability platform ingests all of the metrics, right. I know with the annotations you have to annotate what you want. But maybe you say: okay, I just want to figure out what this means for me from an end-user perspective, right — I'm just trying to figure out what this would look like in a real-life scenario.
G
In
effect,
if
I
would
be
an
end,
user
I
would
take
a
look
on
my
observative
platforms,
such
as
planetarest,
would
take,
would
try
to
find
out
how
the
metric
is
named
would
specify
this
metric
on
the
other
side,
because
that's
the
same
as
we
are
doing
in
cabinet
at
the
moment.
So
I
specify
my
Matrix
object
and
that's
it
where
the
data
comes
in
two
two
diameteries
I
think
that's
not
the
problem
of
the
Indians
all.
E
That's why I was wondering as well if, instead of being our own adapter, we do everything in a Prometheus format and only use the official Prometheus adapter for the HPA — which means whatever we have, we expose it back in Prometheus, and then the user... At the moment, a lot of people are deploying the Prometheus adapter, which is the most — well, not the only one, but the most popular one for sure.
G
But in the end you have to store the data somewhere. So even if you use the Prometheus adapter and so on, you would have to query an external system, more or less; or you could write your data to a Prometheus in your cluster, or you could use an external one. And in our way, you could —
G
You
could
use
the
thing
once
and
you
could
you
could
specify
it
once
and
use
it
thousand
times
so
the
same
new
trick.
G
The second thing is always the problem: what happens if your Prometheus server is not available?
G
And
if
the
metric
server
outside
is
not
there,
we
are
scaling
on
the
last
known
literally
well
is
so
I
think
it's
I
think
for
for
some
decisions.
It's
a
good
idea
to
have
the
metrics
in
the
cast
and
not
outside.
E
But
but
you
agree
Thomas
that
you
still,
we
still
need
to
do
like
a
metric
provider.
I
mean
I,
don't
know,
I,
don't
remember
the
terms
in,
but
we
need
to
have
a
a
way
of
pushing
our
metrics
back
together.
So
then
the
HPA
scenario
will
be
managed
by
Keta.
G
And
for
hpe
we
would
for
for
our
Kira,
we
invite
we're
not.
We
might
even
need
another
provider,
so
we
could
expose
our
data
as
as
promises
exports.
G
Well,
we
could.
We
could
expose
some
kind
of
some
kind
of
API.
H
To just consume it with the generic metrics API scaler: to say, point it towards the Kubernetes API endpoint where the metrics are being stored, and just extract the value from the JSON path. Yeah.
H
I mean, in the metrics server, wouldn't it be a big deal just to expose it anyway as Prometheus metrics? I mean — yes.
E
And also, even if they put an annotation, it will be annotated on the operator of Keptn — or, rather, the metrics operator — and that will take the context of the pod, of the namespace, of that specific workload. So then it will be indexed in a very strange way, and it's not going to be very efficient. I think it makes more sense that, if we want to export from the metrics operator, we add the right context: oh, we pulled this information out of this.
A
Meanwhile, while we are waiting for the right answer — I think for testing it could be good. Nope, I haven't used Prometheus. Okay, I have a question about the collector: do you need to compile your own version of the collector if you want to have this transformation? No?
E
The
the
hotel,
collector
country,
the
latest
version,
if
you
go
to
the
country
repo,
has
all
those
processors
included.
H
No, but I want to say thank you, Thomas. I think it's a cool, cool project.
C
Yeah, so I would like to present and discuss the application discovery Epic. Andre provided the initial description of this whole story here — the bottom line being: why do we want to do this?
C
Manifest
already
make
use
of
the
lifecycle,
toolkit
and
initially
or
immediately
get
the
Dora
Matrix
that
we
provide
via
the
toolkit
or
also
the
the
traceability
features
with
the
with
the
open,
Telemetry,
traces
right
and
yeah.
C
So
that's
the
goal
for
the
for
the
end
user
and
here
Andre
already
formulated
a
couple
of
initial
open
questions
which
I
would
like
to
go
through
after
I
have
presented
my
Approach
so
first
of
all,
or
maybe
what
what
is
the
best
time
to
generate
the
captain
app
resource
I
think
we
have
to
talk
about
that
for
a
second
to
understand
the
complete
concept,
because,
right
now,
what
we're
doing
is
when
we
we
reach
the
web
hook
when
the
webbook
is
being
called
for
reports
that
is
about
to
be
scheduled.
C
So
the
Pod
for
one
of
our
deployments,
for
example,
if
it
will
check
whether
the
the
labels
are
present,
the
labels
that
we
rely
on
and
if
it's
part
of
a
cabin
application
and
all
the
names
are
there,
the
labels
for
annotations.
It
will
create
the
workload
instance
referring
to
the
to
the
app,
and
here
is
the
question:
where
do
we
want
to
actually
create
this
cat
map?
That
workloads
will
refer
to
so
for
a
single
service
application,
where
we
only
have
one
deployment
right
now?
C
You
should
now
see
the
the
code
of
the
web
hook
right
now.
What
we're
doing
is
if
we
have
the
required
annotations,
but
there
is
no
app
annotation,
so
no
app
name
there
are
part
is
referring
to
here.
We
know
that
we
have
a
single
deployment
application
for
single
service
application
and
automatically
create
the
depth
that
we
refer
to,
but
in
the
case
of
multi-service
applications.
C
This
is
a
little
bit
trickier,
because
also
at
the
time
when
the
web
Hook
is
called,
we
might
not
even
have
all
the
deployments
that
that
refer
to
the
app
because
they
might
come
in
at
the
later
point
in
time,
and
therefore
this
would
increase
the
complexity
of
the
Redbook
quite
a
bit.
If
you
would,
at
this
point,
try
to
Poll
for
some
time
for
further
thoughts
coming
in
referring
to
the
same
app
and
yeah
for
the
web
hook.
C
This
is
definitely
a
way
too
complex
in
the
error-prone,
because
this
should
be
a
rather
simple
component,
so
that
kind
of
rules
out
the
first
approach
that
was
suggested
in
the
initial
ticket
here,
and
that's
why
I
decided
to
do
the
proof
of
concept
with
the
second
approach,
which
I'm
going
to
show
you
in
more
detail
now,
which
is
that
the
web
hook,
of
course,
again
checks
for
the
required
annotations
and
checks
if
the
app
annotation
is
there.
C
But
after
that,
it
doesn't
create
the
captain
app.
So
regardless,
if
it's
a
single
service
application
or
a
multi-service
application.
But
rather
it
creates
a
captain,
app
creation
request,
which
means
that
this
is
kind
of
a
promise
for
the
for
the
operators
that
reconcile
the
workload
instances
and
the
tasks
and
whatnot.
And
of
course,
the
captain
app
version
that
there
will
be
a
captain
app
resource
eventually,
when
all
of
these
resources
are
being
reconciled,
meaning
that
the
workload
instances
especially
are
being
handled
and
yeah.
And
this
already
simplifies
the
the
web
hook.
C
So
we
just
make
sure
that
all
the
labels
are
set
or
annotations
and
then
promise
on
the
quotes
that
the
app
will
be
there
right
and
then
that
opens
the
window
for
a
couple
of
other
interesting
questions
which
I
already
briefly
touched
upon,
namely
that,
for
example,
what
how
do
we
handle
the
case
where
we
don't
apply
everything
at
once,
but,
for
example,
it
takes
some
time
until
all
the
deployments
of
our
applications
are
are
deployed
and
available
in
our
cluster
and
here
my
initial
idea
for
this
proof
of
concept
was
to
introduce
Discovery
deadline,
seconds
property
in
the
captain,
app
creation
request,
meaning
that
for
a
certain
amount
of
time
that
can
be
configured
with
that
here.
C
So
by
default,
it's
right
now,
30
seconds.
During
this
time
we
keep
trying
and
see
whether
a
user
defined
application
will
come
in
right.
So
if
the
user
itself
creates
a
captain
app
with
all
the
workloads
that
should
be
contained
in
this
app,
is
there
then
of
course
we
wouldn't
want
to
overwrite
anything
here
or
create
the
app
custom
resource
on
our
own,
because
we
in
this
case
we
would
like
to
use
what
the
user
has
provided
right.
But
if,
after
that
time,
maybe
I
can
already
go
to
the
controller
of
this.
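The creation-request resource described above might look roughly like this. This is a hedged sketch of the PoC: the API group/version, kind, and field names (appName, discoveryDeadlineSeconds) are taken from the discussion and may differ in the final implementation; the app name and namespace are illustrative.

```yaml
# Sketch: the webhook's "promise" that a KeptnApp will exist eventually.
apiVersion: lifecycle.keptn.sh/v1alpha2
kind: KeptnAppCreationRequest
metadata:
  name: podtato-head            # illustrative app name
  namespace: podtato-kubectl
spec:
  appName: podtato-head
  # How long the controller waits for a user-defined KeptnApp (and for
  # further workloads referencing the app) before it auto-creates the
  # KeptnApp from the workloads discovered so far. Defaults to 30s in the PoC.
  discoveryDeadlineSeconds: 30
```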
C
But if, after this time, we don't find anything defined by the user, then we know: okay, we made the promise that we will create a KeptnApp, and now we have to deliver on that. So at this point the controller creates the KeptnApp automatically. And the nice side effect of this is that during this discovery deadline, this 30-second period, for example,
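The discovery-deadline decision just described can be sketched like this. Again a minimal Python illustration, not the real controller; the return values are invented names for the three possible outcomes.

```python
# Sketch of the discovery-deadline logic: while the deadline has not
# expired we keep waiting for a user-defined KeptnApp; once it expires
# without one, the controller creates the KeptnApp automatically.

def reconcile_creation_request(elapsed_seconds: float,
                               user_app_exists: bool,
                               discovery_deadline_seconds: int = 30) -> str:
    if user_app_exists:
        # Never overwrite what the user provided.
        return "use-user-app"
    if elapsed_seconds < discovery_deadline_seconds:
        # Requeue and check again later.
        return "requeue"
    # Deadline expired with no user-defined app: deliver on the promise.
    return "create-app-automatically"

print(reconcile_creation_request(5, user_app_exists=False))   # requeue
print(reconcile_creation_request(5, user_app_exists=True))    # use-user-app
print(reconcile_creation_request(31, user_app_exists=False))  # create-app-automatically
```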
C
it is very likely that all the other deployments that are part of our app, or are going to be part of it, have also been added to the cluster. Because if you think of applying a Helm chart, it might take a couple of seconds depending on how many manifests you have in there, but after 30 seconds, for example, we might already be on the safe side.
C
But of course this is a very individual value, and if you really apply your manifests manually, one by one, then of course you might want to increase this discovery-deadline period.
C
So much for that. What happens then is: if we have not found an app at this point, we are going to create it. Let's go to the creation. Sorry, I forgot one crucial thing: before we create the actual app custom resource, we at this point would also like to know which workloads we have that reference our app.
C
Of course, the pre- and post-deployment tasks and evaluations are going to be empty at this point, because we wouldn't want to block any deployment due to some tasks that we add here. So this is going to be empty, and then we add all the workloads that we have found at this point in time. Afterwards we create the KeptnApp custom resource, and from there on it's basically exactly the same logic that we had previously.
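Assembling the auto-created KeptnApp as just described might look like this sketch. Field names follow the KeptnApp spec loosely and should be treated as illustrative, as should the API version.

```python
# Sketch of building the automatically created KeptnApp: pre/post
# deployment tasks and evaluations stay empty (so nothing blocks the
# deployment), and all workloads discovered so far are added.

def build_keptn_app(app_name: str, discovered_workloads: list) -> dict:
    return {
        "apiVersion": "lifecycle.keptn.sh/v1alpha2",  # illustrative
        "kind": "KeptnApp",
        "metadata": {"name": app_name},
        "spec": {
            "version": "1.0.0",                # hard-coded in the proof of concept
            "preDeploymentTasks": [],          # intentionally empty
            "postDeploymentTasks": [],         # intentionally empty
            "preDeploymentEvaluations": [],
            "postDeploymentEvaluations": [],
            "workloads": [{"name": w["name"], "version": w["version"]}
                          for w in discovered_workloads],
        },
    }

app = build_keptn_app("potato-head", [
    {"name": "head", "version": "0.1.0"},
    {"name": "left-arm", "version": "0.1.0"},
])
print(len(app["spec"]["workloads"]), app["spec"]["preDeploymentTasks"])
```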
C
So the creation of the KeptnApp will result in the creation of the KeptnAppVersion, and the version keyword is then something I would also like to discuss further. Then everything will be reconciled, and after that has happened you are going to be able to view all the metrics and the traces, if you have configured your OpenTelemetry collector to gather all this data.
B
So with that you're sure that you actually need to create the thing. Okay, cool, exactly, yeah. And then this timeout, the 30 seconds, where is that configured now?
C
Exactly, that's it. Cool, thanks, all right, yeah. So maybe let's briefly talk about the version, because this I found a bit tricky to decide on, since we don't have any explicit app version set in our application-discovery case.
C
So right now, in this proof of concept, I hard-coded it to 1.0.0, for example, but this is something we maybe have to discuss and think about later, because I think one idea was also to derive the version from the versions of the workloads that we're deploying, but I'm not sure how to best approach this.
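One possible reading of the idea floated here, deriving the app version from the workload versions, is sketched below. This is purely an illustration of the concept, not implemented behaviour; the proof of concept hard-codes 1.0.0.

```python
# Illustrative sketch: derive a stable app-version identifier from the
# set of workload versions, so that any change in a workload version
# yields a new derived value.
import hashlib

def derive_app_version(workloads: dict) -> str:
    # Sort so the result does not depend on discovery order.
    canonical = ",".join(f"{name}={version}"
                         for name, version in sorted(workloads.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:8]

v1 = derive_app_version({"head": "0.1.0", "left-arm": "0.1.0"})
v2 = derive_app_version({"left-arm": "0.1.0", "head": "0.1.0"})  # order-independent
v3 = derive_app_version({"head": "0.2.0", "left-arm": "0.1.0"})  # workload bumped
print(v1 == v2, v1 != v3)
```

A hash is only one option; as noted later in the meeting, a non-semantic version string may cause its own problems.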
C
So if you can already keep this in the back of your head, and if someone has a suggestion on that, I'm open to new ideas. But right now in this proof of concept it's hard-coded to this particular version, and from there on new versions are going to be created if something changes. All right, and then the second conceptual question I had here was what happens,
C
for example, if we think about our Potato Head: we deploy our initial version and we automatically create the KeptnApp, and then, for example, the Potato Head grows a third arm, so we add a new deployment to our existing application.
C
So right now I believe, or that's just my opinion, we should definitely support this. Because if we say that the initial entry barrier for using the KLT should be as minimal as possible, and then we somehow block new deployments from being deployed because the app that we created previously does not include this particular deployment, I think this wouldn't be a good user experience. So we shouldn't block anything from being deployed in that case, if we are doing this automatic creation.
C
So that's why I've implemented it right now in a way that if a new deployment is discovered, it's added to the KeptnApp custom resource, and with the Keptn application restartability already merged and in place, this will result in a new revision of that particular app version. Then basically this added deployment will be reconciled; all the other workloads will already be in the reconciled state, if their versions have not been changed in the meantime.
I
Yeah, sure. You said it will result in a new revision; shouldn't it result in a new version? If we, for example, add a new workload, I would expect that to result in a new version. The revision is just a minor update, just due to restarting the post-deployment checks. But in this case I would expect a new version, not a new revision, no?
C
That would maybe also be a possibility. So right now I kind of implemented both approaches, just to see how it feels. So here... sorry, yeah.
C
Why can't I make it smaller? Okay, yeah, maybe for presentation... this was purely intentional. So right now I've implemented it such that when a new deployment or a new workload is being added, that will create a new revision of the app. But if one of the versions of the existing workloads changes, then that will lead to an increase in the app version.
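The rule just described (new workload → new revision; changed workload version → new app version) can be sketched as a small classifier. Illustrative only; the outcome names are invented for the sketch.

```python
# Sketch of the proof-of-concept behaviour: adding a new workload bumps
# only the revision of the current app version, while a changed version
# of an existing workload bumps the app version itself.

def classify_change(old_workloads: dict, new_workloads: dict) -> str:
    changed = [n for n in old_workloads
               if n in new_workloads and new_workloads[n] != old_workloads[n]]
    added = [n for n in new_workloads if n not in old_workloads]
    if changed:
        return "new-app-version"
    if added:
        return "new-revision"
    return "no-change"

base = {"head": "0.1.0", "left-arm": "0.1.0"}
print(classify_change(base, {**base, "third-arm": "0.1.0"}))          # new-revision
print(classify_change(base, {"head": "0.2.0", "left-arm": "0.1.0"}))  # new-app-version
print(classify_change(base, dict(base)))                              # no-change
```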
A
Not 100% sure, but what I mean is: I deploy version one of my application and it contains head, left arm, left leg. Then I deploy a new version of my application, and now I also have the right arm and right leg, together with the left leg and left arm. Then I say, okay, the third version is just right arm, right leg, head; no left leg and arm. I cannot do that now.
C
I'm also personally not a fan of creating... If I understand it correctly, the webhook always creates the KeptnAppCreationRequest CRD if a new pod comes in, and the handling of whether a KeptnApp already exists is done in the reconcile loop of the KeptnAppCreationRequest reconciler. Do we basically want to... Because we had a nice scenario where the user applies the manifest and the KeptnApp, and this was the one path I thought we want to support; we do not want to touch this part, but we also want to support the additional use case where the KeptnApp is not added. I think now we basically squash these two parts together by not checking if the KeptnApp exists in the webhook. Or maybe I do not understand it completely. So...
A
Kinda, but I need to think it through maybe a little bit more.
B
I agree with Trevani, actually; it makes sense. We are basically reusing all the code that we already have and just adding a new scenario on top of that, and we can reuse all the rest, because all the rest of the reconciliation, up until the point where you actually create the KeptnApp automatically, is the same. Exactly. So that's the big benefit.
C
That's what the discovery deadline is for. So basically what happens then is: the webhook, again, behaves the same regardless of whether the app will be created automatically or by the user; it will say, here's the app creation request, make sure this exists. The app creation controller will then notice that we already have a user-defined app, and it will delete the creation request, for example, and that's pretty much it for this component.
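That flow, where the creation controller drops the promise once a user-defined KeptnApp shows up, can be sketched like this (illustrative names, not the actual reconciler):

```python
# Sketch: the webhook unconditionally ensures the creation request
# exists; the creation controller then deletes the request again once it
# sees a user-defined KeptnApp, so nothing is created automatically.

def reconcile(cluster: dict) -> dict:
    if "creation_request" in cluster and cluster.get("user_defined_app"):
        # A user-defined KeptnApp exists: drop the promise, create nothing.
        del cluster["creation_request"]
    return cluster

state = {"creation_request": {"appName": "potato-head"},
         "user_defined_app": {"name": "potato-head"}}
state = reconcile(state)
print("creation_request" in state)  # False
```

This is why, from a user's perspective, nothing changes when they bring their own KeptnApp.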
C
So from a user perspective, basically nothing changes here.
B
I'm coming back to the scenario where we want to delete the workload from the KeptnApp. I would actually say that it's fine to draw the line somewhere there in terms of automatic out-of-the-box behavior, first to save complexity on our side, and also because I feel like that's not necessarily a quick-start scenario that we need to support.
B
But then again, if you actually delete the workload or the deployment underneath, you will have a mismatch between what is configured in the KeptnApp that we created and the actual state in the cluster, yeah.
J
I personally do not think that we need to support this kind of scenario, because it's already so complex for us that the user should really be willing to create their own KeptnApp. We're thinking about this scenario just for applying the manifests that I have without any changes, and normally, if you want to play with the manifests again, you should really go down the path of creating your own KeptnApp and applying it, not letting everything go through discovery. Discovery should just be the simplest, clearest path, at least from my perspective.
B
The only possible thing to do is to add it to this one, or to not add it anywhere, because we can't create the same one with the same name anyway, right? Yeah.
I
No, I mean that the version will be the same value as the generation.
G
But I'm not sure if it is good, if the version field... that version is not a semantic version, yeah. We have to take a closer look at the code, because I'm not sure if we are using...
G
Because, yeah, we had a really, really ugly bug in one of the early stages of the lifecycle toolkit, because I think it always matched when some application added the workload instance in there, and we always wanted to ensure that the last one has been taken when we bind it.
G
This synchronization... I'm not sure if they provide it somewhere in the annotations and so on. I also know that Argo would provide either the commit or... what it takes, if it is a Helm chart.
G
The important part is that after some time the users will switch to their own app definition. Yes.
C
Oh no, I'm just going through my list, but I think, yeah. Maybe one thing that's interesting: if we consider the spans, because I think that was also a question in the research ticket, whether we break anything regarding the spans; we don't break anything.
C
But maybe one thing that's interesting to consider when we talk about adding additional deployments after we have already reconciled an app version is how this will look. Here in the initial trace we have deployed all the services that we had at this point in time, so we had the entry, the head and so on, and after that has been reconciled,
C
I have added a third leg or a third arm to the potato, and in that case a new span will be created, obviously with only the new addition being included in the trace, because everything else has already been reconciled at this point. So therefore only the new workloads will be included in this new parent span here. That's something to keep in mind, because we have a new span ID for the new revision that has been created here. That makes sense.
C
Yeah, so here we might refine the whole application-restart epic to also link to the same trace ID for several revisions, I think.
A
No, it's fine that you create different traces, but OpenTelemetry also supports links. Basically, when you create one trace, it says: I also link to another trace, and I can then jump to the other trace if I want. So you can still keep a bit of context between two different traces.
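The span-link idea can be modelled in a few lines. This is a plain-Python model of the concept, not the OpenTelemetry SDK API; each revision gets its own trace, but the new trace carries a link back to the previous one, so context is preserved.

```python
# Minimal model of OpenTelemetry-style span links: a new trace for a new
# app revision links back to the trace of the previous revision.
from dataclasses import dataclass, field

@dataclass
class Span:
    trace_id: str
    name: str
    links: list = field(default_factory=list)  # trace IDs of linked traces

initial = Span(trace_id="trace-rev-1", name="potato-head deploy")
# New revision: a fresh trace that links back to the first one.
follow_up = Span(trace_id="trace-rev-2",
                 name="potato-head add third-arm",
                 links=[initial.trace_id])
print(follow_up.links)
```

In a tracing backend, such a link lets you jump from the revision's trace to the original deployment trace, as described above.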