From YouTube: 2021-12-08 meeting
A
Yes, do we have James joining in? Okay, cool, great agenda doc. So I think we could probably wait for a minute or so and then get started on the updates for the Prometheus exporter tests and then the Prometheus receiver. Porous, did you have any other updates on your PRs? I know I had listed...
A
Yeah, I think what has happened is that there is a PR that is awaiting merge.
A
Yes, and, you know, we worked with him on getting those resolved and addressed, and then he's added more details on the reviewer's questions.
A
So that's resolved now. I'll work with Bogdan to get that merged, so that'll kind of unblock the rest of the tests that Porous has pending, yeah.
A
All right, so James, do you want to go through... hi, thank you for joining. Do you want to go through a quick update of the Prometheus exporter tests? Because one of the things we've been doing, just so everybody's understanding is on the same page, is that we've been verifying...
A
You know, end to end, all the components, the exporters as well as the receiver, for end-to-end validation of data. So we've actually been re-looking at the exporter to see whether the tests are complete and, you know, whether the functionality exists and goes through as expected. So James, do you want to go through the list of tests we have looked at and what's missing? We've filed issues for that.
D
Yes, definitely. So it's closely related to this issue that was posted here. In this issue, 6376, the metric builder does not assign the stale NaN values to the histogram. Do you...
A
Want to share your screen?
D
Yes, just...
D
I have to change the setting for this, okay.
A
Okay. Do you have to, like, bounce out and bounce back in, or...?
D
I just do not seem to be able to.
D
But I can just go through it.
D
Or the issue that I've just posted, I think, might be better in this case. Oh.
D
Yes, thank you. Yes, so in this issue, 6376, the metric builder currently does not assign the stale NaN values to the histogram and summary values for the failed scrapes from the Prometheus scrape loop. So the proposed solution, to address this, is to use the OTLP data point flags in the Prometheus receiver as the staleness marker for metrics with no value present.
D
Then we want to be able to handle this staleness marker, the data point flag that indicates a metric data point reflects no recorded value, so that the Prometheus remote write exporter is able to respond to the OTLP flag for staleness. This will allow end-to-end compatibility in the collector pipeline using the OTLP-native staleness marker flag, from the Prometheus receiver through to the Prometheus remote write exporter. The proposed solution that we are implementing is to check every data point of all metric data types, such as gauge, sum, histogram, and summary, for the OTLP flag, and to set the sample value of any data point carrying this staleness flag to the stale NaN value. This will ensure the Prometheus remote write exporter handles and responds to the OTLP flag set by the receiver in the collector pipeline.
D
So that's what we have proposed as a solution, yeah. So far, this...
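As a rough illustration of the first half of that proposal (the names, types, and the flag constant below are stand-ins, not the actual collector or OTLP Go APIs): the receiver would record staleness by setting the no-recorded-value data point flag instead of trying to squeeze a stale NaN into fields such as the unsigned histogram count.

```go
package main

import "fmt"

// flagNoRecordedValue stands in for the OTLP DataPointFlags bit that marks a
// data point as carrying no recorded value (the staleness marker discussed here).
const flagNoRecordedValue uint32 = 1

// histogramDataPoint is an illustrative stand-in for an OTLP histogram data point.
type histogramDataPoint struct {
	Flags uint32
	Count uint64 // unsigned, so it cannot carry Prometheus's stale NaN
	Sum   float64
}

// markStale records staleness via the flag rather than by mangling Count or Sum.
func markStale(dp *histogramDataPoint) {
	dp.Flags |= flagNoRecordedValue
}

// isStale is what downstream components (exporters) would check.
func isStale(dp histogramDataPoint) bool {
	return dp.Flags&flagNoRecordedValue != 0
}

func main() {
	dp := histogramDataPoint{Count: 0, Sum: 0}
	markStale(&dp)
	fmt.Println("stale:", isStale(dp)) // stale: true
}
```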
B
Yeah, I think we also decided that we need to take this change before we make it stable. So you are right that at some point we need to have the staleness represented in OTLP, yeah.
B
So are you saying the problem is only for the count time series for histograms, because they are not floats?
C
The solution will be implemented for all metrics, all types.
E
So I'm slightly confused, because for other time series that aren't histograms we obviously do get stale NaNs appended through the appender interface. Are we not getting those for specific histogram data points, or are we just not handling them correctly?
C
So in the histogram data points, the count value in OTLP is basically an unsigned integer 64, because of which we can't represent it; it can't take these stale NaN values. When we get the float bits of the stale NaN from the Prometheus scrape and try to cast it into an unsigned integer, it casts it into the minimum int64 value, and because of that it has a cascading effect on many things.
C
Like the start timestamp and so on, because the start timestamp logic is dependent on the count values, and when the count value comes out as a minimum int64 it doesn't work: in the start timestamp adjustment it compares the previous scrape values and the initial values with the current ones, and because we have a min int64 here it can't do the comparison, or it does a bad comparison, and so the start timestamp is also affected by this issue.
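A minimal, self-contained sketch of that casting problem (the stale NaN bit pattern is the one Prometheus uses for its staleness marker; everything else here is illustrative): a float64 stale NaN does not survive conversion to a uint64 count, so any later comparison on the count is meaningless.

```go
package main

import (
	"fmt"
	"math"
)

// Prometheus marks staleness with this specific NaN bit pattern.
var staleNaN = math.Float64frombits(0x7ff0000000000002)

func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == 0x7ff0000000000002
}

func main() {
	v := staleNaN

	// Converting a NaN float64 to uint64 is implementation-dependent in Go;
	// the marker is lost and the result is an arbitrary large number, so a
	// later "did the count reset?" comparison on it is garbage.
	count := uint64(v)

	fmt.Println("isStaleNaN(v):", isStaleNaN(v))             // true
	fmt.Println("count:", count)                             // not recoverable as stale
	fmt.Println("still stale?:", isStaleNaN(float64(count))) // false
}
```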
C
So a solution was discussed: rather than representing the count values as stale NaN when we have a failed metric or a stale metric, we use the data point flag as a marker for stale metrics or stale data points, and use that for further things like the time series adjustment.
E
Makes sense. So you're just saying that, instead, we basically need to check the count to see if it's a stale NaN and then, if it is... yes.
C
That's it, yeah: check the count everywhere. But right now the count itself is not coming through correctly, because it can't hold these stale NaN values. So we have this problem.
E
Okay, but we're not, like, re-implementing anything, what would you say; we're not keeping any caches of what we have seen or haven't seen, we're just handling the stale NaN count correctly.
F
Yes, but there is not only the count; there is the sum or some other float field, right? Doesn't it set this stale NaN value in multiple fields?
C
Yeah, right now the logic is an 'or': the count or the sum, those are the two main data points it checks. If either of those is stale from the Prometheus scrape loop, that means the whole histogram is stale.
C
No, it just checks for count or sum; if they are stale, that means the whole histogram data point is stale. But if a bucket is stale, it still goes through as it is, with that single bucket stale.
F
Is that the logic of the scraper, or is that a decision in the receiver? It feels like ours. The stale NaN is not a valid bucket value, so I'm not sure why we would try to pass that forward.
C
Yeah, because right now the way the logic is everywhere, even in the metrics adjuster for time series, it only checks the count or sum values for everything; it doesn't check a bucket value to do any time reset or anything.
C
Yeah, actually, that's the only field checked in the current implementation, because the Prometheus scrape loop considers each bucket count, and the staleness for a failed scrape, as separate data points, and it sends them separately with stale NaN values.
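A small sketch of the rule being described (the types and helper names are stand-ins, not the receiver's actual code): a histogram data point is treated as stale only when its count or sum carries the stale NaN; a stale NaN in an individual bucket is passed through as-is.

```go
package main

import (
	"fmt"
	"math"
)

// staleNaN is the NaN bit pattern Prometheus uses as its staleness marker.
var staleNaN = math.Float64frombits(0x7ff0000000000002)

func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == 0x7ff0000000000002
}

// scrapedHistogram is an illustrative stand-in for what the scrape loop hands
// to the metric builder: count, sum, and per-bucket values as float64 samples.
type scrapedHistogram struct {
	Count   float64
	Sum     float64
	Buckets []float64
}

// wholePointStale mirrors the described rule: only a stale count or sum marks
// the whole histogram data point as stale; stale buckets are passed through.
func wholePointStale(h scrapedHistogram) bool {
	return isStaleNaN(h.Count) || isStaleNaN(h.Sum)
}

func main() {
	staleBucketOnly := scrapedHistogram{Count: 10, Sum: 4.2, Buckets: []float64{3, staleNaN, 7}}
	staleCount := scrapedHistogram{Count: staleNaN, Sum: 4.2, Buckets: []float64{3, 5, 7}}

	fmt.Println(wholePointStale(staleBucketOnly)) // false: bucket staleness passes through
	fmt.Println(wholePointStale(staleCount))      // true: whole data point is stale
}
```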
B
Okay, so you're saying the sum and the count could be stale and the bucket not stale, or the other way around, the bucket could be stale and the sum and the count in that case not stale, right? That's what you're saying.
A
Can we call that out here in this issue, just to be clear?
E
Then I think, for now, having stale bucket values be passed through is probably the correct behavior, until we have the flag handling overall.
F
Yeah, and then eventually, if we observe a stale NaN value on a metric, we'll just emit no data at all other than the flag, which I think is what the spec expects us to do.
C
This is the reset function within the metrics adjuster. So basically, for cumulative distributions and summaries, we just care about the count and sum values. We don't really check for any reset in the buckets; like, if a bucket value decreases, do we reset the whole metric? No, because it just checks count and sum. That's one thing. And then, to set the flag, we basically have this in our transaction append.
C
We convert the OC metrics to OTLP, so there, basically, we will probably also need to check, when we do the translation, whether the value is a stale NaN and then set the flags. So one point here was how to do this.
C
Probably we may need to use the Prometheus package within the OC-to-metrics translation code to check whether the values are stale and set the flags accordingly. But right now it only checks for count and sum; it doesn't check the bucket values.
F
Yeah, so we've got a couple of choices here, and I'm not sure that this is the appropriate one, because this OC-to-metrics translation package is not Prometheus-specific and has other uses. So I'm not sure I would want to introduce the Prometheus-specific staleness checking here. Alternately...
F
We could check the float values that remain after the translation to pdata, or we could do the staleness checking only in the pdata translation pipeline and defer until that becomes the standard pipeline, ensuring that we're handling staleness through the pdata flag instead of actual values.
F
Maybe. I hope we're not there for long either. I think probably doing a post OpenCensus-to-pdata translation pass, you know, another pass over the metrics after they're in pdata, looking for that value and setting the stale flag there, would probably be the better way to handle it in that path, if we need to handle it in that path.
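A rough sketch of that post-translation pass, assuming the metrics are already in a pdata-like in-memory form (the types and the flag constant below are simplified stand-ins, not the collector's pdata API): walk each data point after the OC-to-pdata conversion, and where a value carries the Prometheus stale NaN bit pattern, set the no-recorded-value flag instead.

```go
package main

import (
	"fmt"
	"math"
)

// flagNoRecordedValue stands in for the OTLP no-recorded-value data point flag.
const flagNoRecordedValue uint32 = 1

func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == 0x7ff0000000000002
}

// numberDataPoint is a simplified stand-in for a pdata number data point.
type numberDataPoint struct {
	Value float64
	Flags uint32
}

// markStaleness is the "second pass" idea: after translation, replace stale NaN
// values with the flag so downstream components can rely on the flag alone.
func markStaleness(points []numberDataPoint) {
	for i := range points {
		if isStaleNaN(points[i].Value) {
			points[i].Flags |= flagNoRecordedValue
			points[i].Value = 0 // no recorded value; consumers must check the flag
		}
	}
}

func main() {
	pts := []numberDataPoint{
		{Value: 42},
		{Value: math.Float64frombits(0x7ff0000000000002)}, // stale sample
	}
	markStaleness(pts)
	for _, p := range pts {
		fmt.Printf("value=%v flagged=%v\n", p.Value, p.Flags&flagNoRecordedValue != 0)
	}
}
```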
A
Yeah, I agree. But it seems like there are multiple dependencies here, right? I mean, your PR getting merged for all the pdata changes, then OpenCensus and pdata.
F
The exporter code change should be happening now, because... okay, yeah, I think James had already started on that.
F
Yeah, there's either Prometheus or Prometheus remote write, but that can happen now, because the exporter can react: if it sees the flag, it can set the values to the stale NaN on its end; otherwise it can just pass them through. So as long as the stale NaN values are being passed through by the receiver, that will continue to work.
F
Yeah, but, well, actually, I'm not sure how it will work for the pull one, but it needs to be investigated at the least, because we probably don't want to be manually emitting that value, for sure.
F
It may be something where we can avoid emitting the metric at all, like avoid including it in the collection; we'll have to look at how that can be handled. But for the remote write exporter, definitely it needs to set the stale NaN value in the data points that it pushes out.
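For the remote write side, a hedged sketch of what that could look like. It uses the prompb.Sample type from github.com/prometheus/prometheus/prompb; the data point struct and the flag constant are illustrative assumptions, and the stale NaN bit pattern is defined locally because the exact location of Prometheus's staleness helpers varies across versions.

```go
package main

import (
	"fmt"
	"math"

	"github.com/prometheus/prometheus/prompb"
)

// flagNoRecordedValue stands in for the OTLP no-recorded-value data point flag.
const flagNoRecordedValue uint32 = 1

// staleNaN is the NaN bit pattern Prometheus uses as its staleness marker.
var staleNaN = math.Float64frombits(0x7ff0000000000002)

// numberDataPoint is a simplified stand-in for an OTLP number data point.
type numberDataPoint struct {
	Value       float64
	TimestampMs int64
	Flags       uint32
}

// toSample converts a data point into a remote write sample, translating the
// no-recorded-value flag back into Prometheus's stale NaN on the way out.
func toSample(dp numberDataPoint) prompb.Sample {
	v := dp.Value
	if dp.Flags&flagNoRecordedValue != 0 {
		v = staleNaN
	}
	return prompb.Sample{Value: v, Timestamp: dp.TimestampMs}
}

func main() {
	stale := numberDataPoint{TimestampMs: 1638950400000, Flags: flagNoRecordedValue}
	s := toSample(stale)
	fmt.Println(math.Float64bits(s.Value) == math.Float64bits(staleNaN)) // true
}
```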
A
Okay, so James, do you want to go back to your issues? I think we covered both, right?
D
Yes, so we covered that. Back to the issue, then: in order to check every metric data type in the Prometheus remote write exporter, we currently have the implementation to check every data point of all those metrics, including the gauge and sum as well as the histogram and summary metric data types.
A
All right, so any other questions, David or Vishwa? Again, David, thanks for your reviews; I totally appreciate it. I think we finally have Anthony's PR ready to merge, so hopefully some of this will get unblocked.
A
So the other item that at least I had, and I know that the Prometheus folks have not joined in today, but it would be great if any of you have been following the new Prometheus remote write version 2 discussions. I understand there's a doc; have any of you seen it?
B
I saw a doc shared by Rich a few weeks back. Let me get that for you, okay.
A
Because I think it would be good for us to kind of understand what the changes are, because we'll have to keep up with any additional compatibility requirements if those come in.
A
Yeah, it might be worth us reading through this and understanding what the changes are, because hopefully it's not much, just additional support.
B
So I want to bring it up for discussion. They are the ultimate recipes, you know, for realizing these Prometheus metrics, because I see that most customers don't know how to get all these metrics out of a Kubernetes cluster, at least for Kubernetes scenarios. And when I look at all these mixins that the Prometheus operator actually ships, which include dashboards, alerting rules, and recording rules from a few open source projects like CoreDNS, node exporter, and Prometheus...
B
Actually, those are the three mixins that it ships. I wonder if we can ship that as a recipe as part of the collector.
A
Are there specific recipes that you have prioritized or identified which would be used for Kubernetes monitoring?
B
Yeah, see, that is something I don't have enough data on. I looked at all the dashboards; there are 23 of them. There is no telemetry on which are most commonly used.
A
I mean, we could ask the larger community to select what they use from a list of choices, and ask in our SIG channels as well as in our SIG meetings, but it would still be qualitative data from a small sample set, right? It's not going to be quantitative in any way. Yeah.
B
And then they also have a lot of other issues. For example, none of these recording rules or alerting rules actually segment by the cluster resource, for multi-resource scenarios where, say, two Kubernetes clusters can send metrics to the same metrics workspace in the back end. We need to make sure that we adapt all of that to be usable for our data and cloud stores.
B
It just holistically runs a recording rule over all the resources that are sending metrics to that store, because the Prometheus TSDB is that way.
B
If we agree upon it, I can volunteer to take that up and make further progress there. I already created quite a few PRs and issues in all the open source repositories asking for people's opinions.
B
They also pointed out that a lot of queries don't honor this, for example segmenting the derived metrics or alerting rules by cluster. So there is some interest here; that's what I meant to say.
A
Yeah, I mean, I think there are definitely use cases, right, because making those recipes available for customers to easily use is useful. But I think we probably don't have enough data to prioritize which mixins are useful. One of the things, Richard, maybe you could do is: can you share just the page on mixins and what exists today? And then we can also, kind of... David, do you have any insight into this?
A
Even... you know, you're looking at Kubernetes monitoring somewhat also.
A
Something else: Richard, do you want to share your screen for the mixins?
B
Yeah, once again searching, give me a minute. So, for example, this is one of the mixins that Kubernetes ships; it's a Kubernetes mixin that is picked up by the Prometheus operator, and then the Prometheus operator also picks up two more mixins along with the Kubernetes mixin: one is the node exporter, the other one is CoreDNS, and this is the CoreDNS one. And then...
B
I don't think that is... and the last one is the node mixin. Almost every open source project that is actually instrumented with Prometheus has one, basically.
A
So, I mean, do you have data, though, that this is useful for the...
B
This is the only way the customer gets something out of these metrics, about which, you know, they have no knowledge.
B
Yeah, this is the starting point for them to realize these metrics and the usefulness of them.
E
Okay, so now I've connected the dots: the Kubernetes mixin is used by kube-prometheus, which is a really commonly used collection of stuff for monitoring Kubernetes.
E
There's an interface in it that allows you to specify a different storage backend, and I believe it's being used by Datadog, if I remember correctly, to allow them to have kube-state metrics in their agent. I wonder if something similar would work for the collector. But otherwise, of course, I think these are all components where it would probably be nice to have an easy deployment mechanism that grabbed all of these and used them with the collector.
B
No, the scrape targets, that's not part of the mixins, actually.
F
There might be some subset of recording rules that we may eventually be able to support, but we can't really do anything that relies on state without taking on the burden of storing state over some longitudinal period of time.
B
Yeah, but... so we all have the cloud support for recording rules and alerting groups, right?
B
So if somebody actually monitors their Kubernetes clusters, they have nothing right now that they get out of the box, right? And that's what the Prometheus operator is solving by giving you all these recipes out of the box, which they do not have in the cloud; there they have to do it manually.
F
Sure, the useful part of that, though, is setting up the node metrics and kube-state monitor and the scraping of those; yeah, that gets the core metrics in place. The alerts and dashboards and recording rules are what's in these kube mixins...
B
So out of the box, you know, the operator has to install these targets like kube-state-metrics and then configure scraping for them, yeah, and then the corresponding dashboards, alerting rules, and recording rules for the metrics emitted by these targets would have to be set up in the cloud.
B
Yeah, it could be; it doesn't have to be automated, because I don't know if everybody would need it, but most of the customers would need it. And the current version of these artifacts, you know, the recording rules, alerting rules, dashboards, doesn't account for multiple resources sending metrics to the same store.
B
You know, the workspace or whatever. So that needs to be fixed as a first step, and then, depending on the cloud's needs, they can automate this to be seamless for the customers, or it could be just manual as well: when the customers take these open source dashboards and alerts and recording rules, they should just work in the cloud. Currently they don't.
A
Yeah, because I'm trying to figure out what the pipeline would be and what kind of meta-information we would provide about the sources, for, again, a customer on the service side. Like, if you're using a managed Prometheus service, to have a kind of pre-built set of alerting rules that are suggested, right: hey, this is the type of source you're receiving these metrics from.
A
Would you like to set these up, or add these alerting rules to your workspace, and then would you like to visualize it, right? I mean, that's on the service side, as far as I can see.
B
That's cloud-specific, I agree, yeah, in that case. But think about this, right: today, if I put kube-state-metrics and node exporter in my cluster and then send data to, let's say, AWS Prometheus, and I do this with multiple clusters sending data to the same workspace, now, if I take these open source dashboards, none of them would make sense, because they are holistically visualizing the entire account or the workspace, not per cluster. So even the manual...
A
Yeah, I think, as David said, we'd have to figure out how we would convey some of this information, if at all, from the collection side. And I think the other question I have for you is: are you looking at on-prem setups also? I mean, where these recipes might be more useful, given that you have a self-managed setup.
B
Yeah, so these are the most common, usual ones, right. Anybody who wants to monitor a Kubernetes cluster, whether in the cloud or on-prem, would go after these open source dashboards and alerting groups and recording rules, because they're...
A
Again, just to be clear, I'm just trying to understand; I think it's service-side specific. I'm just trying to understand what we can convey from the collection side that actually makes it useful on the service side to just spin up, and makes it easy for the customer to use the right alerting rules and the right dashboards.
F
And if we're going to do that, then we need some sort of convention for what we call that attribute, you know, if it's not already in our semantic conventions, and the dashboards and rules and alerts that would be set up on the receiving end need to be aware of that. Yeah, exactly.
A
So maybe we should look at it actually from a semantic conventions point of view, and make sure that any conventions that are already defined in Kubernetes are reused for Kubernetes monitoring, and at the same time there is also a set of definitions in OTel which can be used for these labels.
B
Yeah, so that's what mixins are about; the mixins are supposed to unify these conventions across, you know, all of open source Prometheus.
B
They haven't done that yet, I see. I see the mixin proposal; it's all half-baked. I need to check with Rich on, you know, how we make this...
A
But I think the action item here, definitely on the OTel end, is to actually define, or align with, common semantic conventions that already exist, to be used for resource naming.
A
So there's some research needed here, okay, on conventions that may already exist. David, are you aware of any conventions existing on the Kubernetes side? Again, this might be something where we could have the Kubernetes project working group for OTel define these, if they don't already exist, for Kubernetes monitoring.
A
Like, for example, if you have network layer metrics, or eBPF metrics, or database metrics that are coming in through a custom application.
B
Yeah, exactly. The one challenge here is that projects like CoreDNS and node exporter are not always running inside Kubernetes; they could be outside Kubernetes as well. So the trick here is to understand that and make sure you can make it customizable, exactly so that it will work both inside and outside Kubernetes, and I think that's the reason why mixins didn't take off well.
B
In my opinion, there is no clarity on that part specifically, and people just started doing random things in each one of them.
A
I see. So it seemed to be like a convenience project initially, and then it works for specific use cases. But in our case, if we were making a recommendation from a semantic conventions point of view, as well as for resource naming standardization, we'd have to actually look at at least a specific baseline set of metrics for Kubernetes, and then other types of metrics, which we'd publish someplace at a minimum.
B
Yeah, and I also saw the mixins version two proposal a few months back, like a half-page proposal somebody started. I think that's also going nowhere. I see Rich everywhere there, so if we can sync up with him in one of the meetings, I think that'd be a good start for us. Yeah.
A
Yeah, I think so. I mean, again, we should definitely have a bit more understanding and discussion around this, because anything we can publish to make it easier for users to understand exactly what the labeling should be would help; people ask that all the time, right, everybody's trying to figure this out.
A
Yep, I agree. Okay, cool. So I think that's a good action item, and again, I'll ping Richard so that he can maybe join next time, and let's see how we can take this discussion forward, because there is a good need for the documentation at least. I totally agree with you.
B
Then templating: I think all of these have to be mixed. Mixins can be combined by the users with different templates, and then they can take the artifacts and use them depending on their needs. I think that was the original intent, though it doesn't work really well at this point.