From YouTube: 2021-06-09 meeting
C: It was good. I did pretty much nothing and, as is customary within Grafana Labs, everyone actually leaves you in complete peace. So that's really good. I didn't get a single... no, that's not true, Tom reached out about pizza. But except for that reach-out about the pizza, I got zero anything work related, so yeah. That's...
D: Good, good as well. Trying to get some, you know, rest after the conference; the Global Maintainers Summit was a crazy time. I think it's still happening, to be honest, the second part is today, but wow.
D: Totally different, but you get the same feeling of adrenaline.
A: All right, if there are topics that folks have, let's kind of get started. We have Jana also now; hi Jana, good morning.
A: Good, good. Hey, you're all so busy that I was just happy to hear that Richard had taken a break. So I was thinking about it, you know, I was like, okay, it's a good time to take a break.
F: Dude, sorry, can you repeat that?
F: So in regards to last week's discussion regarding the scrape target update service: we planned to implement it in the Prometheus receiver, something that would be able to update a list of scrape targets using HTTP requests.
F: After looking into the http_sd_config being implemented upstream, we decided to look more into it to see if we could use it for our use case, and it does seem pretty promising. After some initial, like, small experiments with starting our own local server and testing it out, it seems like it does work the way that we intended. We just wanted to give an update that we plan to use that as our scrape target update service for our project, and we're wondering:
F: When do you think that will be able to land in a Prometheus release? And if there's any way we can help to make sure that it comes out on time?
G: So the prerequisite is pending review, so any review is welcome, and I also expect a team member from Prometheus to review it this week. So that's pending, and the next point is that the release is actually next week.
A: Okay, Julian, can you please share the PR link with us, so that some of us can actually help review?
F: Yeah, I could go over kind of like a summary of it, but yeah.
F: I think when we were looking into deciding between a push or pull model, the question was whether we wanted to start a server endpoint within the receiver and just push requests to it with the list of scrape targets, as opposed to just having the Prometheus receiver pull the list of scrape targets periodically.
F: So that's what we decided on after drafting up the design for both models, and that's why we looked into the http_sd_config and thought it would be a great fit, as it pulls via HTTP requests periodically at a refresh interval.
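(For reference, a minimal sketch of the kind of local experiment described above. An HTTP SD endpoint is just a server that returns target groups as JSON, which Prometheus polls at the configured refresh interval via http_sd_configs. The port, job label, and target addresses below are made-up placeholders, not values from the actual project.)

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// targetGroup mirrors the JSON shape HTTP SD expects: a list of
// objects, each with "targets" and optional "labels".
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

func main() {
	http.HandleFunc("/targets", func(w http.ResponseWriter, r *http.Request) {
		groups := []targetGroup{{
			// Placeholder targets; a real update service would compute these.
			Targets: []string{"app-0:8080", "app-1:8080"},
			Labels:  map[string]string{"job": "example"},
		}}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(groups)
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```

A scrape config would then point at this with an http_sd_configs entry whose url is http://localhost:9000/targets, and its refresh_interval controls how often the list is re-pulled.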
I: We haven't really... I think that's likely to be something that we'll try to address, if we address it at all, in the load-balancing service. The idea of gating all of the responses until all of the expected requests came in, so that things were kind of synced, did come up.
I: I was discussing this with Jana the other day, and we wonder whether, as long as we can have stability in the assignment of targets, if one pod goes away and comes back, it's actually going to be a problem. The likely scenarios where we would expect it to happen are when someone has changed the scaling, or when a pod has died and has been restarted. To the extent that we're using StatefulSets and we're able to allocate to the ordinal position rather than to a particular pod whose ID is going to change, when one pod dies and comes back in that ordinal position it picks up the targets that the previous one had, and so there won't be a handoff from one running collector to another.
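(A sketch of what allocating by ordinal position could look like: hash each target onto one of N StatefulSet ordinals, so the assignment depends only on the target and the replica count, and a pod that dies and comes back with the same ordinal resumes the same targets. This is illustrative only, not the actual allocation logic being discussed.)

```go
package allocate

import "hash/fnv"

// OrdinalFor returns the StatefulSet ordinal (0..replicas-1) that should
// scrape the given target. The mapping is stable: it depends only on the
// target string and the replica count, so restarting a pod does not move
// targets around, while changing the replica count reshuffles them.
func OrdinalFor(target string, replicas int) int {
	// replicas is assumed to be > 0.
	h := fnv.New32a()
	h.Write([]byte(target))
	return int(h.Sum32()) % replicas
}
```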
I: In the case of reallocating all of the targets because we've changed the number of replicas, we expect that, since we're not building an auto-scaling system at this point, that's not going to happen frequently enough for it to be a particular concern.
I: It's going to be manual intervention: someone's going to have changed that monitoring system, and for a minute or so there may be changes in the data that they get. Unless and until that becomes a problem that we think we have to solve through more invasive measures, we're not going to solve it. So the other problem is that the collector doesn't really provide us with signals back when it has completed a scrape.
E: You know, redistribute the targets to the collectors and then it starts accepting, you know, starts to scrape again. The collector doesn't provide us that type of, like, lifecycle events.
E: Stackdriver, I think there's going to be a window of a minute or something where we may experience out-of-order samples, just because, you know, there might be retries and so on. I'm not expecting it to be much, much longer. But, you know, if we sacrifice this rebalancing of things as new targets appear, if we are making everything very sticky to the collectors...
E: ...that problem is not going to exist. I mean, we need to think about this problem; it's just not like we're compromising everything in order to implement things easily. I mean, the ideal solution would be actually being able to flush things out, you know, having some sort of lifecycle events on the collector that we can remotely, you know, listen to and stuff like that.
E: So the controller knows when all the remote write samples have been flushed from the collectors, and then the collector can come in and, you know, read some new targets and start scraping again. We need to be able to do that handshake.
I: I think there's also a knob that we can turn here, which is the refresh interval of the HTTP SD discovery mechanism. To the extent that's a very short refresh interval, the amount of time that we're talking about potentially duplicated and then out-of-order samples will be short; we'll be talking a couple of refresh intervals. So if that's a 30-second interval, we're talking about a minute; if it's 10 seconds, you know, 20-30 seconds.
I: I'm sorry, I'm talking about the refresh on the HTTP GET request, not the file SD that would be backing it. Although, does the HTTP SD write to a file SD? I thought it just provided the response back on a channel.
E: Yeah, exactly. Like, we may already have the data for that exact, I mean, for that timestamp or something, so it's just not a huge deal. That's why we thought, let's begin with this and maybe try to address it, like, you know, graceful shutdown of the collectors, in the long term to make it better. That's why I don't think it's going to be a huge...
E: It's just going to kind of make it harder to operate the collector, because we are, you know, looking at some of these, you know, failed, like, number of failed remote writes. It is an availability metric, right? So that just complicates that for sure, because not every out-of-order error is actually very critical, but there's no way for us to be able to tell that, you know, automatically. So that's the biggest...
E: ...I think, complexity, at least in terms of the things that we wanted to do to, you know, operate, like provide and manage, you know, the collector with some availability metrics and so on, and some alerts, maybe.
E: You know, with an error like that we don't know the difference: hey, is it because of the resharding, or is it because there was a bug and we are seeing out-of-order samples, right? There's no way for us to differentiate it at this point. But, you know, I think when you scale up or scale down, you know that for a period of time you may see some errors coming from the collectors. So, like, let's just say you're rolling out something, a new configuration.
E: It will take a while for you to be able to, you know, get back to the same level. So I'm not super concerned, because, you know, people who are making that configuration change know that there will be a period, you know, where things will catch up. As Anthony was saying, if we want to build this, like, auto-scaling thing, that complicates a lot of things, because then, like...
G: And to answer this question, remote write does not have any kind of typed error. We just look at the return code, so there is no definition of the message. If you just post to anything that replies at an endpoint, it will think that it's fine. So it's really a very simple protocol, and actually some of the remote write receivers don't, they don't all reject the out-of-order inserts. So that will also depend on who you are talking to; the ones that are based on Prometheus do.
K: Okay, I have one question. So is there a way to make an HTTP call, or query an endpoint...
K: ...to actually see which instance actually has what, you know?
K: Because if I make a mistake in authoring, for example, a scrape job, it's super hard to troubleshoot, and there is no targets UI just like how Prometheus gives you. So this is going to be...
E: The standard discovery mechanism should be able to return you the targets, but it's not going to return the allocations, I think, right, Anthony? It would be nice.
I: I think we certainly could, right? So the initial strawman that I put up for the URL that the service discovery mechanism would reach out to in the load balancer is jobs, job name, targets, with a query parameter to identify the collector that's requesting the data. So the query parameter would filter that resource down to just that collector's targets.
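(Purely as an illustration of the strawman URL shape just described. The exact path layout and the query parameter name are assumptions for the sketch, not a settled design, and the hard-coded assignments stand in for whatever the load balancer would actually compute.)

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

// assignments maps job name -> collector id -> targets.
// Hard-coded here purely for illustration.
var assignments = map[string]map[string][]string{
	"example-job": {
		"collector-0": {"app-0:8080"},
		"collector-1": {"app-1:8080"},
	},
}

func main() {
	// Strawman shape: GET /jobs/<job name>/targets?collector_id=collector-0
	http.HandleFunc("/jobs/", func(w http.ResponseWriter, r *http.Request) {
		job := strings.TrimSuffix(strings.TrimPrefix(r.URL.Path, "/jobs/"), "/targets")
		collector := r.URL.Query().Get("collector_id")
		groups := []targetGroup{{
			Targets: assignments[job][collector], // only this collector's share
			Labels:  map[string]string{"job": job},
		}}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(groups)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```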
K: And there is no way today, right? There is no way to query to see the target assignments.
I: I think you're asking: does the Prometheus receiver currently expose what targets it thinks it should be scraping? And I don't know, unless the discovery library, the discovery manager, exposes that on its own. I don't think it does. I don't think the receiver does anything to expose that explicitly.
I: There has been discussion of expanding the use of zPages in the collector; it could probably fit in well there.
A: Okay, so let's move on. I think, Richi, you had itemized the compliance test results.
C: Yeah, just as an update: because I saw 0.28 getting released, I just ran the updated tests and dumped them here. The retries one is new; the rest is the same as before.
E: Yes, yeah, it is retries. So we had a discussion before that we wanted to implement retries the way that the Prometheus server implements them, with the same configuration settings and so on.
E: But then, you know, the collector maintainers had the opinion that we should just be using the existing retry mechanism that is available to all the other components. Is this retries test, like, are you expecting just retrying things, or are you expecting the same algorithm, the retry strategy, to be implemented here?
C: For complete compliance with how Prometheus behaves, the Prometheus behavior needs to be the reference. I get the argument that there is already a defined behavior, potentially, within OpenTelemetry.
C: We don't have a lot written down and confirmed as "this must be done in this way". And let me just, one moment.
E: To me, the clear advantage is that people, you know, who are auto-tuning things may bring their, you know, configuration, but the collector is implemented, is behaving, so differently that that auto-tuning may not work for the OpenTelemetry Collector. Like, you can't really expect...
E: ...the Prometheus server and the OpenTelemetry Collector to behave the same in terms of performance; they are very different. So that's why I'm convinced that it's okay not to support exactly what the Prometheus server is doing. But yeah, it would be nice if you, you know, have any opinions.
C: I don't have a strong opinion on whether this is a fail, or like a must or a should, to talk RFC 2119 language. I suggested within the issue, or in the PR which I just linked, that we discuss within the Prometheus team what we actually want to define as mandatory, optional, or whatever.
C: If something is mandatory, I think it should be the same across the fleet, and speaking with my OpenTelemetry hat on, I wouldn't be surprised if other implementations also define a specific behavior for certain failures or retries. So...
E: Yeah, there are some certain custom retry configuration settings, and that was the reason that I just wanted to make this, you know, similar to what the Prometheus server is doing.
C: So the currently defined thing, you also see this in the PR, is: 500s should be retried, 400s shouldn't. Julian made the very good point that 429 can conceivably be retried, probably with some heuristic exponential backoff. As someone who's coming from a networking space, that seems like a sane default, but also not super optimized.
C: This is not currently super well defined. If you have any measurements, anything which is already defined on the OpenTelemetry side, or other best practices: I think it makes sense to try and align across the industry and not have everyone defining their own thing.
G: Well, I would note that we have a request in Prometheus to improve the retry mechanism, because it is really stupid in Prometheus, well, stupid in the sense of very simple. I will say that it will retry the 500s, but, like, forever, and it will never drop. So there is now a request to change that, to have more rules, like don't retry if the samples are more than one hour old, that kind of thing, which is being implemented as an option as well.
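(To make the policy being discussed concrete, here is a minimal sketch of that kind of retry decision: retry on 5xx and 429 with exponential backoff, give up on other 4xx, and optionally drop batches past an age cutoff such as the one-hour example mentioned above. This is illustrative only; it is neither the collector's queued-retry logic nor Prometheus's remote-write implementation.)

```go
package retry

import (
	"net/http"
	"time"
)

// Decision for a single failed remote-write attempt.
type Decision int

const (
	Drop  Decision = iota // give up on this batch
	Retry                 // try again after a backoff
)

// Decide applies the rules sketched above: 5xx and 429 are retried,
// other 4xx are dropped, and batches older than maxAge are dropped
// instead of being retried forever.
func Decide(statusCode int, batchAge, maxAge time.Duration) Decision {
	if batchAge > maxAge {
		return Drop
	}
	switch {
	case statusCode == http.StatusTooManyRequests: // 429
		return Retry
	case statusCode >= 500:
		return Retry
	default:
		return Drop
	}
}

// Backoff returns an exponential backoff capped at maxDelay.
func Backoff(attempt int, base, maxDelay time.Duration) time.Duration {
	d := base << attempt // base * 2^attempt
	if d <= 0 || d > maxDelay {
		return maxDelay
	}
	return d
}
```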
G: So I don't think that the retry mechanism... well, I think it will still evolve in Prometheus, because we see that with some receivers it can cause issues. The goal was to never lose a sample currently, but we see that for some receivers, sometimes they just never accept the samples, and that causes a lot of memory usage in Prometheus.
E: Yeah, that would be great. Let us know, by the way; like, I really want to have an answer to this before the collector is stabilized, because, you know, we don't want to break this configuration.
E: Okay, in the worst case we're going to deprecate the existing behavior, keep it supported as it is, probably forever, but not document it, if we change it. You know.
A: All right, cool. Can we step through some of the other issues as well? I think folks added in the PRs that are related to some of the fixes that have already been filed on the OTel side for some of the tests. One is up, I think, and is merged. I don't know, let me see.
I: Yeah, I think functionally it's there; we're just making sure that the tests are not flaky. And then it looks like there's a PR up already for the repeated labels, so I'm just starting to look at that now, which is great.
C: Well, most of the... I mean, you need to detect when something goes stale so you can mark it as such in the remote write. But oh, it reminds me: do you have a cache or a WAL or something, or do you send everything directly once you have scraped it?
C: I don't think we do in the exporter. We have a queued-retry mechanism that I think sits in front of the exporter, so it doesn't really know about it. If the exporter returns a failure that can be retried, the queued retry will try to send it again, but I don't believe that the exporter maintains any state about what it has sent.
C: Yeah, and in this case you don't get it for free. I thought you had some caching, I mean. Maybe this can also alleviate some of David's concerns, at least if you have, if you build... no, maybe not regarding the out-of-order. If you were to scrape stuff out of order, you keep it in the WAL and then you send it on. But okay, forget I said anything about it, if you don't maintain it.
I: You don't get it for free. We do maintain some state in the receiver that may help with that: we maintain when we first saw a metric, and we maintain the previous value that we saw, so we can try to do reset detection on time series. We may also be able to use that mechanism to store "okay, this has gone away, we should emit a stale marker for that". But I just don't think we've had anybody dig into the design of how we should implement staleness in the receiver and exporter combination.
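(For illustration, the per-series state just described, first-seen time plus previous value, is enough for simple cumulative reset detection. This is a sketch of the idea, not the receiver's actual code.)

```go
package resets

import "time"

// seriesState is the per-series bookkeeping described above.
type seriesState struct {
	firstSeen time.Time // when the series was first observed
	previous  float64   // value seen on the previous scrape
}

// Observe records a new cumulative value and reports whether the counter
// appears to have reset (the value went backwards). On a reset, the start
// time of the series is moved forward to the current scrape.
func (s *seriesState) Observe(now time.Time, value float64) (reset bool) {
	if value < s.previous {
		s.firstSeen = now
		reset = true
	}
	s.previous = value
	return reset
}
```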
C: If you just keep the last one, that will still not be enough for complete staleness, but it already gets you quite a bit, and you can in theory extend it, I guess, with when you last saw it.
J: That also then ties into out-of-order handling, because if the target does reappear, then depending on the out-of-order handling, this marker may be rejected by the other end, which Prometheus's TSDB does. But if you don't reuse the TSDB, you kind of have to make sure you're handling out-of-order correctly.
I: The receiver uses a mark-and-sweep garbage collection approach for clearing up those prior sample data points that it keeps around, so that may also be applicable. Like, if we're able to say, okay, every X number of scrapes we're going to go through and do this garbage collection, and if we've removed targets that we haven't seen anymore, we can emit a staleness marker at that point. But do we need to keep emitting staleness for some period of time, or is it just a one-time thing where we emit staleness and move on?
J: Inside Prometheus, the way it works is that it remembers everything from the last scrape, whether it was successful or not, and that's sufficient for all of this to work. Now, the actual implementation is a little more complicated for performance reasons, but basically you only need the last scrape, whether it succeeded or not.
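(A minimal sketch of the bookkeeping just described: remember the series seen on the previous scrape, and mark anything that has disappeared as stale on the next one. This is not the Prometheus or collector implementation, just the shape of the idea; the stale-marker value shown is the special NaN bit pattern Prometheus uses, included for illustration.)

```go
package staleness

import "math"

// StaleNaN is the special NaN bit pattern Prometheus uses as a staleness marker.
var StaleNaN = math.Float64frombits(0x7ff0000000000002)

// Sample is one point to append (an illustrative type, not a real library type).
type Sample struct {
	Series string
	Value  float64
}

// Tracker remembers which series appeared on the previous scrape.
type Tracker struct {
	previous map[string]struct{}
}

func NewTracker() *Tracker {
	return &Tracker{previous: map[string]struct{}{}}
}

// Finish takes the series seen on the current scrape and returns stale
// markers for every series that was present last time but is now gone.
// Emitting the marker once is enough; the receiving TSDB treats it as the
// end of the series until new samples arrive.
func (t *Tracker) Finish(current map[string]struct{}) []Sample {
	var stale []Sample
	for series := range t.previous {
		if _, ok := current[series]; !ok {
			stale = append(stale, Sample{Series: series, Value: StaleNaN})
		}
	}
	t.previous = current
	return stale
}
```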
A: Any other questions folks have on the tests? I think staleness was probably the biggest area that we haven't really addressed in depth yet.
A: Okay, moving on. Let's move on to David's question. David, you had an item, yeah?
H: So I've been looking into trying to see if we can use metric relabeling to rename metrics in the collector, and it currently doesn't work. Mostly I'm hoping that maybe some of the Prometheus folks who know a bit more of the history behind why the metadata endpoint was introduced, and how it's intended to be used, could help me out.
H: It seems like a bug, or at least an oversight, that if I look at what's in the WAL, or what gets appended through the appender interface, we only really have access to the final metric name for a particular metric. But the metadata endpoint is generated from a cache that's populated immediately after parsing a line of the Prometheus exposition format, and so what this means is that if we try to look up metric metadata based on the final name, we don't actually find any metadata. I'd like to be able to support that.
J: Yeah, so the first thing is that metric_relabel_configs is a feature of last resort, for when you can't fix things elsewhere. So, for example, this renaming of container_cpu_seconds_total: that's not a renaming that should generally be done; if it's a problem it should be fixed in cAdvisor, at least in theory. Obviously the real world is much more complicated, but for something extremely popular like cAdvisor that should at least in principle be possible, if there's an actual problem there. But the name looks fine to me.
J: The other thing is that metric relabeling applies to samples, not metric families, so the behavior here is exactly as expected, because this isn't really expected to come up. It's also not safe to take the update applied to samples and apply it to the metric family, because we don't know if it applies to every single sample coming from that metric family in that scrape, or if there's a mix, or, in fact, whether there's a collision on the other side. So I think it's one of those ones where...
H: Okay. The only solution I could think of is having both the initial and the final metric name in the cache, but I don't know if that also turns into some hairy problem.
J: Yes, it does, because sample names aren't always tied to metric names, and I don't know off the top of my head if you can do that sort of thing with a really regular grammar. Because, like, for counters, sure, the sample name has the underscore-total suffix and the metric family name doesn't, so being able to do that generically across those... it might be doable, I'm not sure off the top of my head; as I said, that's a more formal question. But yeah, the general thing is: try to avoid using metric_relabel_configs.
H: Okay, cool. I will update the issue, and we'll probably just do some validation in the collector to prevent this case from happening, because right now I think it fails silently, and if you turn on debugging it yells at you about something that doesn't make any sense. So we can at least warn people not to do this, but...
H: I think it does get treated as unknown, and then it gets rejected at some point within OpenTelemetry for being unknown.
H: The type information, I think, is actually the thing that we need: whether it's, for example, a counter or a gauge, like, yeah, the type. The units and the help don't matter as much, but without that I don't think we know how to handle it in OpenTelemetry.
J: Yeah, you could look at the underscore-total suffix in that case, but also, within some Graphite naming schemes there's a semi-convention that underscore-total means it's a Dropwizard counter, which just means it's an integer that goes up and down, which is not the Prometheus meaning of counter. So yeah.
A: All right, well, thanks. Any other questions that folks wanted to discuss, or any other...