From YouTube: 2020-10-29 meeting
B: Yeah, I'm back on my old headphones, which have about 80 minutes of battery life, so we'll see.
B: Andrew and I discussed having him share the screen today. Also, I've looked at the agenda that people filled in, and I was going to propose that we bring Andrew's item to the top, since we missed his item last time.
D: Sorry, sorry, that's not the one I want to show. Before I get to that, there is a...
D: So that way we have visibility across all spec participants. We have been talking mostly about trace spec issues in this spec meeting on Tuesdays. But this is like a logistics thing: whether everyone can attend that.
D: I'm thinking the proper inflection point, in order to have this done in earnest, is after we've brought the trace spec P1 issues down to zero; right now we've got two. Then a lot of the conversation can revolve around the metrics spec issues, which is what we're talking about here in this meeting, because that encompasses the majority of the P1 issues: actually all of the P1 issues on the to-do list, which is twelve of them, which I'll get to. So that's the seed of an idea I want to plant for those people. That's the logistics for maintainers, approvers, and participants. So I'll move on to the status of the P1 issues.
D: So I ran this report just before this meeting, and we've got 12 to-dos, up two from last week. I think a couple of the new ones were created about three days ago; I need assignees on those. Two of them are in progress right now, and, just to see what we've accomplished, 11 have been resolved.
C: So, just today I submitted a PR related to issue 600, which specifies OpenMetrics interoperability, and I included a couple of lines about receiving OpenMetrics, but all I did was link to the design document in the collector repo. It currently describes how the collector does it; I'm not sure if this is something we will ever do anywhere else.
D: Okay, so that's it for the status. And this is the other agenda item that we didn't get to last week. Specifically, we need the metrics table filled out for this compliance matrix, in order for the corresponding language SIGs to be able to provide estimates as to what's been implemented and what needs to be implemented.
B: We're going to run into some trouble here, because there's really missing specification, so it's a little hard to fill out this table. I've run into this myself this week: I actually feel like I have to write something in order to avoid sending PRs that don't have a specification backing them to some of the repos I work in. So there's this.
B: I don't want to go into the depths. We've been discussing it internally at Lightstep; we have started working on a metrics compliance table at Lightstep, as you know, Andrew. So we could leave this assigned to a Lightstep engineer, I guess, but really we've got to finish some spec before we can really fill that out. And it occurs to me that we don't really have a ticket right now saying "finish the metrics SDK spec."
D: I'm not sure whether it's related, but for trace issues and the corresponding trace PRs to the spec, one of the double checks that's usually done is whether the PR also makes a change to the compliance matrix. So for these two open PRs, should one of their patches modify that table as well?
B: Thank you, Andrew. Reihan, would you like to start the next item here, about the collector contrib?
H: Yeah, so I have two things to discuss; mainly they're cases where I need some kind of guidance. The first one is from the OTel collector contrib repo. I was trying to use our common OTLP types, but we cannot use them: I was getting an error. I asked, and Tigran replied to me on our Gitter channel; I think it's by design and we should not use them directly.
E: I'm happy to help, but I'm not sure I understand the problem. Is this related to the image?
H: Yeah, the first one. Like I said, I need guidance: I cannot import the OTLP common package. Here is the import; I attached the image. It throws an error, so we cannot use them. Then Tigran said, yeah, we should not use these internal collector types in contrib. So, what did I miss?
H: So the thing is, rather than using the common.StringKeyValue type, we can maybe use our pdata StringMap, something like that. I just want to make sure my understanding is correct: we cannot utilize these common data types, like StringKeyValue or StringArray or whatever we have in the common or internal package, right?
E: So the fact that the pdata types are backed by a proto should not be visible, and should not be something that users rely on, because we want to move everything towards having one serialization/deserialization layer, to do lazy serialization and deserialization and stuff like that. Because of that, we want you to use the pdata types and not care about whether we back the data with a byte array, or with protos, or anything.
E: So yes, the answer is yes: you shouldn't use common.StringKeyValue; the pdata type that you need to use is the pdata StringMap.
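A minimal sketch of the design principle E describes, using hypothetical names rather than the actual pdata API: callers go through an opaque wrapper, so the backing representation (a plain Go map today, a lazily deserialized byte buffer tomorrow) can change without breaking them.

```go
// Hypothetical sketch, not the real pdata API: an opaque map type that
// hides its backing storage from callers.
package main

import "fmt"

// stringMap hides its storage; callers must not assume how it is backed.
type stringMap struct {
	kv map[string]string // internal: could later be a lazy byte buffer
}

func newStringMap() stringMap { return stringMap{kv: map[string]string{}} }

// Insert adds or overwrites a key.
func (m stringMap) Insert(k, v string) { m.kv[k] = v }

// ForEach visits every entry without exposing the storage.
func (m stringMap) ForEach(f func(k, v string)) {
	for k, v := range m.kv {
		f(k, v)
	}
}

func main() {
	labels := newStringMap()
	labels.Insert("service.name", "checkout")
	labels.ForEach(func(k, v string) { fmt.Println(k, "=", v) })
}
```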
H: Yeah, got it, thanks. And the second point: I am almost done (not fully done) with the consumer.MetricsConsumer work, as you suggested. Following your guidance I am almost there, but I am wondering where to put it; I am a bit lost on how we can enforce that all exporters use this consumer. Any thoughts on that, or on where to put the code?
E: You can look at what we did before in our exporterhelper package in the core. What we did there is you have an option called "with resource as labels," or whatever we call it, and if people use the exporter and set this option, it gets installed. We cannot enable it by default, because we don't want this to be enabled for OTLP and for other protocols. But if users explicitly say "with this," then we will install it.
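A minimal sketch of the opt-in mechanism E describes (hypothetical names, not the real exporterhelper API): a functional option that turns on resource-to-label conversion only when the user explicitly sets it.

```go
// Hypothetical sketch of the opt-in functional-option pattern described above.
package main

import "fmt"

type exporter struct {
	convertResourceToLabels bool
}

// Option mutates exporter settings at construction time.
type Option func(*exporter)

// WithResourceToLabels opts in to copying resource attributes onto metric
// labels; off by default so OTLP-like exporters keep resources as-is.
func WithResourceToLabels() Option {
	return func(e *exporter) { e.convertResourceToLabels = true }
}

func newExporter(opts ...Option) *exporter {
	e := &exporter{}
	for _, opt := range opts {
		opt(e)
	}
	return e
}

func main() {
	e := newExporter(WithResourceToLabels())
	fmt.Println("convert resources to labels:", e.convertResourceToLabels)
}
```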
E: Yeah, and by default we enable it for every exporter that is not OTLP. That's on us: we have to put that line in to enable it.
H: Okay, thank you.
E: Or we make it opt-out, and then only for OTLP we go without it. We can discuss this, but I would say it's just a minor thing.
D: By the way, I'm happy to continue sharing my screen if that's helpful, or I can relinquish it to whoever wants to drive and point at things. So just let me know.
B: Two items left. So Yuke has this PR 226 in the protocol repo, and we've been discussing it enough.
B: That's great, and I'm pleased to hear that from your end as well, Michael. I think there's probably no one perfect solution for us, so having a flexible arrangement is going to work. I think it does mean a little more implementation work in the various places, but that's probably the right outcome, given all the factors.
I: So the main thing is that the agreement was on the simple linear buckets, the "completely perfect" log buckets, and log-linear. Those three formats will be natively supported by the protocol, so it will be very efficient, but for quadratic and cubic approximation...
B: That sounds pretty good. I had a little audio trouble, so I'm not sure I got everything; I apologize.
I: The protobuf would use its varint encoding, so it encodes using only the minimum number of bytes. For example, you declare a long, eight-byte integer, but your actual value is 20, so one byte is enough; protobuf optimizes this to a one-byte encoding on the wire. A few months ago there was a PR to change the counters to fixed64, which is an eight-byte long.
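The varint point is easy to check with Go's encoding/binary package, which implements the same base-128 varint scheme protobuf uses on the wire:

```go
// Small values cost few bytes as varints; fixed64 always costs eight.
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, 20)
	fmt.Printf("varint(20) takes %d byte(s)\n", n) // 1
	n = binary.PutUvarint(buf, 1<<40)
	fmt.Printf("varint(2^40) takes %d byte(s)\n", n) // 6
	// A fixed64 field would take 8 bytes regardless of the value.
}
```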
E: Bytes. So the thing is, if you do that for a thousand buckets, we're talking about 8K, which is probably like four MTU packets, maybe five. So you are talking about four extra packets on the network to serialize these, because the in-memory representation is the same. In memory, in RAM, it's always going to be 8K, because there is no such thing in a programming language as a vector of varints: you will have it deserialized as a vector of eight-byte objects.
I: Yes, I understand; it's basically the classical trade-off between CPU and space. Typically, for a few small values, like a summary with a few eight-byte longs, it's no big deal; it's just more convenient and clearer to use them and encode them as eight bytes.
I: The thing is, in my day job I once did some deserialization work on varints. It comes down, assuming your input data is already in memory, to deserializing fixed-size eight-byte or four-byte integers versus deserializing varints into eight-byte longs, and it comes out close when the numbers are small.
E: Or even if it's not aligned, you do a memcpy, which the compiler will turn into the underlying load and store. So yeah, memcpy, that's possible; it depends on the language, I agree. Anyway, I don't have a strong opinion on this. As I said, I would like others to speak, but it's clear that I did not think about us carrying thousands of buckets when I made that change, because I was mostly thinking about having 20 or 30 buckets, not thousands.
I: Like DDSketch: I think the default is 4K buckets or something like that. And also there's WAN versus LAN. 8K bytes is no big deal on a LAN, but if you are submitting this from your front end, your app submitting your metrics to a cloud server, whether it's New Relic or Datadog, those kilobytes might matter.
B: Yeah, yeah, I agree. It's hard to... So we're talking about a protocol structure which requires us to fit into something very rigid: there is an array, and it has integers in it that must be eight bytes, and such. And look, I've been talking a little bit about circllhist in the last few weeks.
B: If you look at some of those, or even HDR histogram or t-digest, these libraries generally come with an encoding which is just: give me a histogram, and I will give you a byte array which is super compressed. They do various things, and we're not talking about that. We're talking about representing a histogram in a way that can fit natively into a protocol buffer. So we've lost some encoding efficiency no matter what, and then maybe we can get that back by using gzip, and so we're...
B: The decision here is: do we want to go with a custom encoding where we might be able to shave off a byte, like maybe run-length encoding or delta encoding of the bucket counts and such? There are all kinds of ways to compress this stuff, but it's going to be a lot of work. So why don't...
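A toy illustration of the delta-plus-varint idea mentioned here, not a proposal for the actual OTLP encoding: neighboring bucket counts tend to be close, so their deltas are small, and varints store small values in one or two bytes.

```go
// Delta-encode bucket counts, then write each delta as a zig-zag varint.
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeDeltaVarint writes each count as the varint of its delta from the
// previous count; zig-zag varints handle negative deltas.
func encodeDeltaVarint(counts []int64) []byte {
	buf := make([]byte, 0, len(counts))
	tmp := make([]byte, binary.MaxVarintLen64)
	prev := int64(0)
	for _, c := range counts {
		n := binary.PutVarint(tmp, c-prev)
		buf = append(buf, tmp[:n]...)
		prev = c
	}
	return buf
}

func main() {
	counts := []int64{100, 102, 101, 105, 105, 90}
	enc := encodeDeltaVarint(counts)
	fmt.Printf("%d counts: %d bytes encoded vs %d bytes as fixed64\n",
		len(counts), len(enc), 8*len(counts))
}
```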
E: Sorry. I think we can do it in the future: if we realize this is a problem, we can add another repeated field and deprecate the previous one. We have a story for starting to accept the new thing.
I: Okay, yeah, I'm fine with this, unless you want to talk about it more. While I was working on this, I realized there's an argument for having min and max included in the histogram. I did not do this in my PR; I'm just talking here. In previous work I've hit strange things when the histogram doesn't have min/max.
I: You ask the histogram, "What is your 100th percentile?" The histogram says the value is 1000 at the 100th percentile. And you ask another metric, say the max metric of the same number, and the max metric says the max is 900. Now the user is puzzled: why does the 100th percentile have a thousand? Why is the 100th percentile greater than my max?
E: The other problem with max and min is that they are not subtractable. Which means, if I do the same thing, say I send a histogram, and I have a time series, and now I need to know the max between two different points, say a max over an hour: I do a roll-up over an hour and I want to say what the max for that hour is. That's something I cannot calculate from cumulative min and max values.
B: I think subtraction is sort of beside the point. It's that you'll never see a change after you've hit a local max. Like, an hour ago I had a maximum, and I haven't been anywhere near it since, but I can't see that, because of the old maximum, if you're being cumulative. I think you've just touched on why this is hard to get into the protocol: it seems the temporality that you want for min/max is different from the temporality you want for the buckets themselves.
B: I think we're into confusion over terminology, unfortunately, because for the most part cumulative and delta mean something about things that we're adding and subtracting. So what we're saying is that if you have a histogram and you configure that histogram as cumulative, it's not going to be helpful to have the max and min attached to that same histogram also be cumulative. I think that's what's being said, and I don't know how to cleanly fix this.
I: I got it. So I was thinking about the absolute values; the min and max are just numbers. I think you guys are thinking about the spatial, I mean the temporal, dimension: you are tracking a cumulative number.
E: Yeah, that's the problem. That's what I don't know how to solve, because in order to make min and max useful, you need to always send them as deltas. You need to always reset them after you send a point. Next time, when I report that histogram for the same time series, I need to report a min and max from this interval, from the last report to now, because otherwise, if I report the min and max from the beginning of the process, it's...
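A minimal sketch of the mixed temporality E describes, as hypothetical code rather than SDK internals: bucket counts accumulate across exports, while min and max are reset on every export so they describe only the last interval.

```go
// Hypothetical histogram with cumulative counts but per-interval min/max.
package main

import (
	"fmt"
	"math"
)

type histogram struct {
	counts   []uint64  // cumulative bucket counts
	bounds   []float64 // upper bucket boundaries
	min, max float64   // per-interval, reset on export
}

func newHistogram(bounds []float64) *histogram {
	return &histogram{counts: make([]uint64, len(bounds)+1), bounds: bounds,
		min: math.Inf(1), max: math.Inf(-1)}
}

func (h *histogram) Record(v float64) {
	i := 0
	for i < len(h.bounds) && v > h.bounds[i] {
		i++
	}
	h.counts[i]++
	h.min = math.Min(h.min, v)
	h.max = math.Max(h.max, v)
}

// Export returns the cumulative counts plus the interval's min/max, then
// resets only min/max (delta-like) while counts keep accumulating.
func (h *histogram) Export() ([]uint64, float64, float64) {
	mn, mx := h.min, h.max
	h.min, h.max = math.Inf(1), math.Inf(-1)
	return append([]uint64(nil), h.counts...), mn, mx
}

func main() {
	h := newHistogram([]float64{10, 100})
	h.Record(5)
	h.Record(900)
	c, mn, mx := h.Export()
	fmt.Println(c, mn, mx) // [1 0 1] 5 900
}
```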
I: ...useless, yeah. Another clever question on my previous PR: somebody asked about the bucket counts. Are the bucket counts cumulative or not? You say the count is five in this bucket: does this include all the buckets below it or not? It seems implicitly it does not; each bucket is independent.
B: Yeah, I'll definitely review that. It's given me thoughts; there's a connection with this summary data type that we talked about at the beginning of the meeting. Summary has quantiles, and that was one where we also have been talking about adding a min and max. So it sounds to me like, at this point, if we want min and max, we should think of them as separate series which have independent temporalities, which is maybe not ideal, but not wrong.
E: Yeah, I also commented on the summary type, for the moment, to remove the min and max that was added there, just to follow up on this. We can add it later if we really want.
B: It's a sliding window in Prometheus, yeah. It just gives you that, so it's functionally like a delta, but it's not: they're overlapping deltas, which is not one of the temporalities that we've specced out.
E: Yeah, so that's one thing. There are a bunch of things, and I don't know for min and max: do you want them to carry the same sliding-window temporality, or do you want them to carry the delta temporality? I don't know, and I'm...
B: I don't want what Prometheus does with summaries at all, but I do want min and max, so I'm not sure how to resolve this.
B: What I meant by that was not to be offensive; I don't think the sliding window is what I want. I do like summaries with min/max, and I think users want to see quantiles.
B: I think we've sort of drilled that in. I will look at issue 226 in the proto repo, sorry, the PR, and review it. I think Bogdan's point about removing min/max may be the right answer. I realized last week somebody from AWS asked what would change from the old version of it, and I think I said "nothing," so we shouldn't add min and max in the same PR, and, as I agree, we can add that later.
G: Again, Josh, I just agree with you that it'd be better to handle the min and max in a different PR. Summary is definitely something that's, again, breaking the entire Prometheus pipeline right now, yeah.
K: Just before we move on, I want to make a comment, or a question, on the min/max and the histograms. It seems to me like min and max are useful, right, but they're only useful if your back end is doing deltas, right?
B: But I also think that by the time you've got a min or max, it's no longer a count, it's a gauge, in the sort of terminology we have. So it would also be okay, a wrinkle, to specify that when you have a histogram data point, the bucket counts reflect the temporality that was set, but the min and the max are always delta. Or, sorry, I used the term again, I did it again: there's no temporality associated with gauge. So it may be...
E: It has a temporality in this case, I believe, because without a temporality, Josh, if the thing represents a sliding window, like min and max from the past 10 minutes, but I report points every minute, I cannot do this. So it has to, even though it sounds like a...
E: Okay, so I would suggest that we have this conversation separately, as we said, and we can probably add them. The only requirement that I have is to split PRs, to keep things focused.
E: Like, don't add the min and max in the PR where we add summary right now, because we just decided on the fact that summary is something we want to support, but we have not yet decided on the other thing, and don't make the review harder than it has to be. Just focus on one thing at a time, and then we can have a long discussion about adding min/max everywhere. That's why, for example, I don't want the PR that changes the histogram bucket definition to change other things either.
E: And I think there was another person who tried to say something, but he was interrupted by you.
M: Yeah, this is Luca from CloudWatch; I'm the one making the pull request for the summary, actually. I had one question related to the temporality: for the purposes of this first request, would it be okay to keep the temporality field out of it, or do we require some way to define a specific aggregation temporality for the summary, would we say?
E: I don't know; that's an option. The second option is to think about how to express this. Right now I tend to say no, but explain to the user that this means neither delta nor cumulative, and I don't actually know what it means, because in Prometheus I think it's even worse: in Prometheus the count and the sum (maybe I'm wrong, but I think the count and the sum) are cumulative, while the quantiles are a sliding window with overlaps.
E: It will become like that; it will become like that. But so far histogram is very clean: we have one temporality applied to all the fields. For summary, I think we need to explain this, and I'm happy to, because we're not going to produce it from our SDK. This is mostly for backwards compatibility with OpenMetrics and Prometheus: just follow whatever they say and explain that sum and count are cumulative.
E: Probably the better source is just to look at the code of the Prometheus client. Let's see if they move the sum and count, or if they just accumulate and never reset from the...
B: It doesn't make sense in a scrape model to ever do deltas, basically. I think we should move on. The issue about histograms and summaries and min/max is really interesting, but I don't think we're going to resolve it in this discussion here. I think we should move on to Justin's bullet here.
C: Yeah, all right. So I took on a very old issue some time ago, issue 600, which was mostly about metric naming. It included a request to specify OpenMetrics interoperability, so I added some guidance here. I actually started writing up some guidance about values, and I ran into some of the questions that we were just discussing, so I decided to punt on that. This PR is really just about how to rename OpenTelemetry metrics so that they can be exposed to something that's reading the OpenMetrics exposition format.
C: I'm certainly open to trying to specify more stuff about values; some of it is pretty straightforward.
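As a hypothetical sketch of what the renaming involves (not the exact rules from the PR under discussion): OpenMetrics metric names are restricted to [a-zA-Z_:][a-zA-Z0-9_:]*, so dotted OpenTelemetry names have to be mapped onto that alphabet.

```go
// Map an OTel dotted metric name to an OpenMetrics-safe name.
package main

import (
	"fmt"
	"regexp"
)

var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)

// sanitizeName replaces characters outside the OpenMetrics name alphabet
// and guards against a leading digit.
func sanitizeName(name string) string {
	s := invalidChars.ReplaceAllString(name, "_")
	if len(s) > 0 && s[0] >= '0' && s[0] <= '9' {
		s = "_" + s
	}
	return s
}

func main() {
	fmt.Println(sanitizeName("http.server.duration")) // http_server_duration
}
```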
B: Because I haven't caught up on this, I want to ask some clarifying questions. I can imagine OpenMetrics integration happening in two ways. One would be for, say, a collector to scrape an OpenMetrics target, which would be a lot like scraping a Prometheus target; and then one would be exposing an OpenMetrics endpoint that you can scrape a collector with. Which is it that you're trying to assist?
C: It's mostly about exposing OpenTelemetry metrics so that they can be scraped by something that is used to scraping Prometheus, yeah. I noticed that there are those same two directions that kind of need to be specified, but for the case where something like the collector is scraping an OpenMetrics endpoint, I decided that maybe it doesn't make sense to have that specified here; I'm not sure that's going to happen in a lot of places in our code.
C: I expect it to really just be the one receiver in the collector, yeah. And they actually have a pretty good design document already that describes what that receiver does, and I decided not to re-specify that here, so I just linked to it.
E: Yeah, that's very good. So officially, I think, George, there are three cases: there is our SDK exposing OpenMetrics, which Justin just focuses on; there is the collector scraping OpenMetrics; and there is a third one, the collector exporting OpenMetrics, which for me is very interesting and problematic. And, as we talked about before with the Prometheus exporter as well, I would probably like to not support that, if possible.
B: Yeah, I support that as well; I just wanted to make sure that we were agreeing on it. I think there's an idea that we're going to use this remote-write exporter to do the sort of "I'm pushing into another Prometheus system" case. And, I mean, let's not get bogged down here, but there's a missing-metadata question: the Prometheus project is working on adding metadata to the remote-write path, and until then I don't know what people are actually doing if they haven't got a Prometheus server running.
E: The problem, for everyone to understand, our biggest problem, is that we need to keep state. Our metrics path is a processing pipeline in the collector, so the problem with exposing OpenMetrics or Prometheus as an exporter is that we need to keep state for everything we see. We become yet another Prometheus storage, because we need to keep all the metrics that we see for Prometheus to be able to scrape them from us. We want to be moving to a model where we fire and forget.
B: Well, I think in the long-distant future you will see people asking to put state into the collector and do more aggregation, at which point maybe there will be a way to scrape it, but I agree that it's a huge technical hurdle to get to, and I don't think we should try.
B: There are high-level library guidelines saying OpenTelemetry will provide you Prometheus, just because that's what was written a year and a half ago or so. I don't think we specify exactly the behavior of a Prometheus exporter; that would be a good thing for an SDK spec, I suppose. You know, I think what the Go Prometheus exporter that I was involved with does is probably close enough to spec; there weren't any gotchas that I can think of.
B: I think there are gotchas in how to specify a Prometheus receiver for the OTel collector, because, again, there's that state component added. And I know there are some open issues; I know it's not quite working right now, and I know that the AWS team is super excited about it. So I think we're all going to focus on the Prometheus receiver a little bit this quarter.
G: Yeah, I mean, Josh, I think that's a fair statement. We did add a Prometheus remote-write exporter to Go a couple of months ago, and that was pretty close to spec in terms of, you know, the Go implementation for the SDK, so we do have a pretty deep design that we could add as documentation to the spec, or just, you know, to contrib.
B: I think we should; that makes sense. I don't know that there are a lot of questions to answer there, and I don't think we should discuss the OTel collector Prometheus receiver here and now, but that's under scrutiny; I think we all have our eyes on it right now. Yep, I agree. My question, just to follow up: I've looked into how to do scalability with Prometheus, to do sharding.
B: Essentially there's a hashmod operator that you can use, and I don't know how much manual effort it takes to set up yet; I've read through it and I've kind of squinted at it, but I haven't gone through it myself. If there were a nice and easy document saying "here's how you configure a pool of OTel collectors to use the Prometheus receiver; they are stateful, but if you shard properly it will work correctly; you have to use a StatefulSet or something like that"...
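A small sketch of the hashmod sharding idea: Prometheus's hashmod relabel action assigns each scrape target to one member of a pool by hashing a label and taking the result modulo the pool size. The hash function below is illustrative (Prometheus itself uses an MD5-based sum), but the scheme is the same.

```go
// Assign each scrape target to one collector shard by hash modulo pool size.
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor returns which of n scrapers owns the given target address.
func shardFor(target string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(target))
	return h.Sum32() % n
}

func main() {
	targets := []string{"app-0:9090", "app-1:9090", "app-2:9090", "app-3:9090"}
	for _, t := range targets {
		fmt.Printf("%s -> collector shard %d\n", t, shardFor(t, 3))
	}
}
```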
B: I don't know how, you know, et cetera, et cetera. But I think people are going to want to know how to replace a horizontally scaled Prometheus pool with a horizontally scaled collector pool. That's what we need to know how to do.
E: Cool, okay. Do we have any other topic for the five minutes, or are we just talking about random things?
J: Which I'm fine with, like, sharing knowledge. There are two items left here. Justin, I'll follow up with your...
B: ...guidance PR, thank you. Either way, I don't know which is more important, given the time. Jason, or, sorry, I forget your name already, the...
N: Well, actually, what I'm doing here is: I had this issue open for a while, and what this issue boiled down to was the fact that resource attributes need to be converted to labels within exporters. Something I was working on was this, specific to the Prometheus remote-write exporter.
N: I looked at, I talked to Reihan about, what he's been doing with the generic component that is planned to be used among exporters, and that doesn't quite work for what I'm trying to do. I want to know if this makes sense to you guys; I had talked to Josh about this before. I'm looking to kind of combine...
N: ...what a resource attribute in OTLP is with the concept of a Prometheus external label. Prometheus external labels are something that customers can configure, and Prometheus remote-writes them, and those labels are allowed to start with double underscores. Regular Prometheus metric labels are not allowed to start with double underscores, because those are reserved for internal use. So what Reihan is creating...
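To illustrate the double-underscore rule N describes (a sketch, not Prometheus's actual validation code): label names match [a-zA-Z_][a-zA-Z0-9_]*, and names beginning with "__" are reserved for internal use, so ordinary user-supplied labels are rejected under the strict check.

```go
// Check label names against the Prometheus naming rules described above.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var labelName = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

// validUserLabel applies the strict rule used for ordinary metric labels:
// well-formed name, and no reserved "__" prefix.
func validUserLabel(name string) bool {
	return labelName.MatchString(name) && !strings.HasPrefix(name, "__")
}

func main() {
	for _, n := range []string{"instance", "__name__", "__my_external"} {
		fmt.Printf("%-14s user-label ok: %v\n", n, validUserLabel(n))
	}
}
```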
B: Josh, I think... oh, it sounds like we just want to allow double underscores, and I don't really see anything preventing us from doing it. What I believe you're asking for is a way to add more labels, and to allow them to have double underscores, probably configured at the exporter, but possibly as an independent stage that just adds labels, which is something we can already do, right?
B: Well, I think it would just preserve that one underscore: you could just write "__remote__" as a key and put whatever you want there.
E: Okay, I like that. I'm lost, but I will try to read the issue again and understand.
G: All right, we've logged it. Again, it's a nuance; maybe we should get some time with you to walk you through it, because we are kind of going around in circles, and we've been waiting for some resolution.
B: This seems easy, and I'm sorry that I'm not able to pay attention to everything. Why are we being overly strict? Let's just let the double underscores go through; that's what I think we should do.
B: Well, that was also another possibility: we could distinguish resources and labels and let double-underscore resources happen. But we've got another topic of converting resources into labels, and if that happens first, we've lost the information. We're over time, everybody; let's talk about this in the issues.