From YouTube: 2020-08-18 Spec SIG
Description
No description was provided for this meeting.
C
So do we have an agenda? I would be happy to share, like, the big meeting notes.
D
Unfortunately... can we go over the issues in the spec? At least use this time carefully, go over the issues and maybe decide who, or assign some owners for that, because that's something that we need to take care of.
C
So you'd like to triage just the ones labeled metrics? Yes.
E
Bogdan, we also have a couple of questions. I think we added some questions on the PR, on the issue, so we'll just add it to the doc.
D
So we made a bunch of progress. We merged, like, three PRs, I think, or something, and... yeah, so anyway, go ahead. Alolita, you mentioned first about having a question.
E
Okay, I mean, so in general we have a couple of areas that we need your guidance on. One: you know we're writing the Prometheus remote write exporters, as you know, and one is for the collector and one is for the Go SDK, right? So as the OTLP definition changes, right, which you're in the midst of for the collector especially, what is your timeline? Because our interns are going to drop off in a couple of... in two and a half weeks.
D
So let's put it this way, to make sure the interns have enough time: if I don't finish the performance stuff by Thursday, the meeting time, you move ahead and use the old one, and I will pay the debt on changing to the new one.
D
At one point, when I migrate the entire thing, so...
E
Yeah, well done, because we were kind of keeping a timeline of... If you can even give us a, you know, code base that we can build with by Monday, next Monday, that would be awesome, because we can then, you know, make all the changes in the exporter and update the tests, and, you know, be able to actually...
D
That's the whole idea. So anyway, there are other workarounds if you wanted to discuss, but let's give me this time until Thursday and we can discuss Thursday, somewhere around there, but yeah. That's why... that's what I said in the PR: if you can give me two, three days, to make sure I don't have to then come and fix all these things. Yeah.
E
Yeah, exactly, well then, because, you know, we want to make sure that we support you in all the changes that you're making, and, you know, we test the components accordingly.
D
Definitely, and by the way, very happy with the PR quality and stuff; Young does a good job. So thanks for that.
E
I think that was one of our questions. And then the second question, and this is not so much even related to the collector specifically but in general, and we are going to raise an issue about this: where do the... again, having a clear definition of where different types of exporters reside, in terms of repos or per-language repos, right? Because they are inconsistently hosted today for the different languages. So we have done an analysis and we can... we will file an issue on that.
E
I mean, again, we just want to make sure that we are, you know, meeting the right guidelines. So it's just more that it's easier for us to understand, because, you know, JavaScript... the JavaScript SIG has a different set of guidelines, Java has a different set of guidelines, and the Go contrib versus Go... You know, so again, it's just not very clear.
C
Yeah, I chatted with Connor about that one earlier today, and I sort of agree. But thank you for agreeing to write the issue.
C
Well, we're all talking... I added a few more items. Are we done? Let's see, are we... we're done talking about the OTLP release schedule? I think there's a little uncertainty left; we're going to do some more performance measurements. We should have something very soon, but...
D
But quick question: how is the feeling in general, do people prefer more semantics versus... I mean, I need to know how much I need to tune. Where is the balance between tuning and keeping nice semantics in the protocol? Because I feel like... the reason why I'm not finished is because I started with something, I saw some performance issues, I started to tune. Do I have to tune? Do I not have to? When is the threshold where we say we are good enough? I don't know.
C
My thoughts are tied up in... I'm a little wary that, where we've taken things out of the protocol, putting them back is going to end up looking odd or something like that. So the removal of Summary is an example, and there's this question I put below about system.cpu.utilization, and I was sort of wondering how I ideally want to see that handled, and it...
C
It... well, this isn't actually exactly the same question, but we've potentially discovered the need for a different type of temporality, or it could potentially be explained as some sort of structure information combined with some temporality information... I feel something feels a little incomplete.
C
Does that make sense? Like, for example, if we were... if we...
C
There... one question was about, like, the cost and complexity of metric descriptors having sub-messages and oneofs and so on. Actually, I'm ready to kind of ignore that one. The bigger question that I feel a little concerned over is if we start adding, like, new data point types because things seem slightly different. Well, those data points do add quite a lot of space to the points in memory.
D
Put a oneof on that. So, in my prototype right now, I'm testing with a oneof on the descriptor type and a oneof on the data type. Both of them are adding one allocation. So we'll have two allocations, and they are not per point, so they are not affecting... when we have multiple points, they are only per metric. So it will affect a metric with one point, but it will not affect metrics with multiple points.
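A rough sketch of why the oneof costs one allocation per metric rather than per point. The type names below are illustrative, not the actual OTLP generated code: protoc-gen-go renders a oneof as an interface-typed field plus a small wrapper struct per case, so setting the oneof allocates the wrapper once for the metric, while appending data points never touches it.

```go
package main

import "fmt"

// Hypothetical shapes of protoc-gen-go output for a metric whose data type
// is a oneof; the names are illustrative, not the actual OTLP proto.
type Metric struct {
	Name string
	Data isMetricData // the oneof becomes one interface-typed field
}

type isMetricData interface{ isMetricData() }

// Each oneof case gets a small wrapper struct like this one.
type metricIntSum struct{ IntSum *IntSum }

func (*metricIntSum) isMetricData() {}

type IntSum struct{ Points []IntDataPoint }

type IntDataPoint struct{ Value int64 }

func main() {
	m := Metric{Name: "requests"}
	// One extra heap allocation for the wrapper, paid once per metric.
	m.Data = &metricIntSum{IntSum: &IntSum{}}

	// More data points reuse the same wrapper: no per-point oneof cost.
	sum := m.Data.(*metricIntSum).IntSum
	for i := int64(0); i < 3; i++ {
		sum.Points = append(sum.Points, IntDataPoint{Value: i})
	}
	fmt.Println(len(sum.Points), "points, one oneof wrapper allocation")
}
```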
D
I can... trust me, I can convince him. Because, for example, for the data points it's very simple to convince him: we don't know that... it may be unbounded, let me get to 10 items, and we need to have a way to not reserve that much space, and allocating only one object is not an important thing.
C
That seems to be okay, especially if Tigran buys into it, okay. And if it looks like, you know, there's a 15% performance hit, but we think we can get that back by writing some code by hand, I think that's also okay. But I don't know that I've... I mean, what do I... why are you asking me?
C
Cool, well, okay. We're looking to Thursday, then, and it's becoming more and more important. I wanted to talk about this ValueRecorder default aggregation. We've gone through... there are so many options for what we might get, and we've talked about how min-max-sum-count was seen as troublesome.
C
I originally proposed that myself, and it was partly a response to the kind of well-known fact that histograms have fixed boundaries, so they're hard, or they don't always work very well. Gauges...
C
You've... you've lost some information. Summaries, you... it's hard to merge them. And Michael, last time, proposed DDSketch, and that's always been part of the dialogue in OpenTelemetry as, sort of, knowledge of sketches has been growing among everybody. And so I think it was great to see what DataDog published. I said something last time that Michael was a little taken by, or that was surprising to Michael, about how I'm not...
C
It looks to me like the Datadog agent has a private implementation of DogSketch, or DDSketch, that is not exposed, and that there's a protocol they use that is not documented, a protobuf-type protocol. And so I have been looking at... what Datadog did publish was this sketches-go repository, which has an implementation that they tested for their experimentation and for their paper, which is a nice, clean implementation of DDSketch, but it doesn't have a...
C
You don't get access, as far as I could tell, to the private fields that you'd need to copy into a protocol, so there is a little bit missing there. But it seems that it's not, like, non-existent; it's just hidden a little bit. So, Michael, do you feel like now is a good time to propose DDSketch as a standard for OTel?
H
Yeah, so actually, yeah, you seem to have discovered basically everything that I got internally here.
H
I have a couple links if that's useful, but the response was basically: do they want protobuf or JSON? And we just implemented it ourselves in the agent, but we have this pretty-printer or humanizer code that they can use to look at the buckets, just to inspect how it's working. But it seems like you found that. From my perspective, in terms of the proposal itself, the reason I was proposing it is because the sketch is nice: you can derive everything, the min, max, sum, count, from the sketch.
H
You can also query it directly for quantiles, and they are mergeable over changing bucket boundaries in this implementation, unlike with histograms. DDSketch also has one other attribute, in terms of sketching, in that we've targeted a one percent relative error.
H
Instead of a one percent rank error. Which is to say that the value that you get out of a query to a DDSketch will always be within one percent of the actual quantile that you are looking for. Rather than, like, if you're looking for a p99, right, a rank error will return the actual p98 or p100, which has the nice property that it is actually a value that was submitted. But from the product perspective, people usually aren't looking for a specific value, but rather for the p99.
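A small sketch of the relative-error property described here, following the DDSketch paper rather than the sketches-go API (the function names below are illustrative): values fall into geometrically sized buckets, and the value reported back for a bucket is always within alpha of any value stored in it.

```go
package main

import (
	"fmt"
	"math"
)

// alpha is the target relative accuracy (1% as discussed above).
const alpha = 0.01

// gamma defines the geometric bucket boundaries (gamma^(i-1), gamma^i].
var gamma = (1 + alpha) / (1 - alpha)

// bucketIndex maps a positive value to its bucket, per the DDSketch paper.
func bucketIndex(x float64) int {
	return int(math.Ceil(math.Log(x) / math.Log(gamma)))
}

// estimate is the value reported back for a bucket; it is within
// alpha (relative error) of every value stored in that bucket.
func estimate(i int) float64 {
	return 2 * math.Pow(gamma, float64(i)) / (gamma + 1)
}

func main() {
	for _, x := range []float64{1, 10, 123.4, 5000} {
		e := estimate(bucketIndex(x))
		fmt.Printf("x=%8.2f estimate=%8.2f relative error=%.4f\n",
			x, e, math.Abs(e-x)/x)
	}
}
```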
C
Yeah, so I buy into it. I know that there are some other kinds of competitors in the space of, like, approximate algorithms for this sort of thing; one that was mentioned is based on moments, and that involves some linear algebra as well. I actually like this: DDSketch is a lot simpler in terms of concepts.
C
It looks to me like, if we were to take the sketches-go algorithm or code and extend it, looking at the Datadog agent code, or ask Datadog to do that, but get some sort of .proto into that same repository and then, like, a marshal/unmarshal function, then we'd be a lot closer to saying: yes, I want that for my default aggregator for ValueRecorder. I also note that it does include exact values for min, max, sum, count and so on.
C
So it does stand in for all those fields that we had included as part of min-max-sum-count, which is nice. The flip side is that they're harder to difference, generally speaking, but that's almost always going to be a problem: like, you can difference a histogram, but you can't difference the max. You know, there's not enough information. So, that said, I still think this is a good option. I'd like to know what others think.
D
I would like to understand a bit the min/max. Are you saying sending the min/max from the beginning of the computation, or from the last reporting?
C
This is not a discussion about whether you call it cumulative versus delta, which I'm starting to feel is problematic terminology: like, when we're talking about gauges or about histogram measurements, what I think of is that the structure is grouping, not adding. I think we've caused ourselves too much trouble by trying to use delta and cumulative.
C
Although I'm now on a tangent about terminology: there is, in your PR 199, the proto PR where you remove Summary, there was some discussion about how the terms delta and cumulative... basically, cumulative isn't useful for summaries, let's say, and that's one reason we might consider taking them out of any specification. But the way I'm thinking about it now is that the terms cumulative and delta both suggest that there's some sort of addition happening: either you're looking at a small change in the sum, or you're...
C
...looking at the total sum. And we've been trying to use delta and cumulative to describe gauge measurements, and the concept doesn't work very well. I think we should be talking about interval measurements. But in any case, for the interval: you can get a max for an interval, you get a max for the next interval... well, sorry, if you get a max for one interval, and then you got another cumulative-style max, you can't say... now, what am I trying to say exactly... you can't compute... well, you've lost information.
C
The same reason why a cumulative summary doesn't work very well is that you can't look at the difference between two measurements and get anything about that max, for example. Yes.
C
It doesn't mean it's meaningless, but it's not very useful. And so, I guess what I'm trying to say I like about this is that, if you just look at today's histogram definition, it both requires you to give explicit boundaries, which is not always easy to do correctly, and it also doesn't give you exactly the min or the max.
C
Because, you know, the min is going to be in the middle of some bucket range, and there's no way to signify that you know the smallest value or the largest value.
C
Well, let me pull up the link I have, and maybe Michael has another one. Wait, where did it go? Oh, I put it in the Gitter earlier, so let me go find that.
C
This first message is the classic summary: it's got min, max, sum. And then, actually, I don't actually know what this is; again, no comment. I believe this is what we're looking at, and again, we need a little bit more documentation before I can go in depth, but I believe that this k and this n are the boundaries and the counts of a bucket, like, this data structure...
C
Well, I just think it's something people shouldn't be doing; in other words, computing any sort of summary over their entire process lifetime.
D
Let's see the following. Let's say the following: if we get deltas, it's simple to compute the cumulative, even with min/max if you want. If we get cumulative... let's assume the source was a Prometheus process and we scraped it; with Prometheus we'll get cumulative. We can get back deltas, we can compute deltas, but we will no longer have the min/max. Now, is it important to know... this is an interesting thing.
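A toy illustration of that asymmetry, with made-up numbers: delta intervals always merge into a cumulative, min/max included, while differencing two cumulative reports recovers only the additive fields.

```go
package main

import "fmt"

// interval is a toy min/max/sum/count aggregation over one reporting window.
type interval struct {
	min, max, sum float64
	count         int
}

// merge folds b into a, which is how deltas roll up into a cumulative:
// min/max survive because the min/max of a union is the min/max of the parts.
func merge(a, b interval) interval {
	if b.min < a.min {
		a.min = b.min
	}
	if b.max > a.max {
		a.max = b.max
	}
	a.sum += b.sum
	a.count += b.count
	return a
}

func main() {
	d1 := interval{min: 2, max: 9, sum: 20, count: 4} // first window
	d2 := interval{min: 1, max: 5, sum: 12, count: 3} // second window

	cum := merge(d1, d2)
	fmt.Printf("cumulative from deltas: %+v\n", cum)

	// Going the other way: given only two cumulative snapshots, the sum and
	// count of the second window are recoverable by subtraction ...
	deltaSum := cum.sum - d1.sum
	deltaCount := cum.count - d1.count
	fmt.Println("recovered delta sum/count:", deltaSum, deltaCount)
	// ... but there is no way to recover the second window's min (1) or
	// max (5) from cum.min=1, cum.max=9 and d1 alone.
}
```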
D
Is it important to know that our histogram is a DDSketch, or is another type of histogram? Because of the algorithm that Michael has... like, you are changing the buckets, but the buckets have this property of being always mergeable, and so on and so forth.
D
Maybe
maybe
what
we
will
end
up?
Having
is
another
type
in
the
one-off
list
in
the
metric
descriptor
called
dd
sketch,
which
supports
only
delta,
for
example,
because
we
say
this
is
how
how
it
will
be
and
and
the
message
point
the
data
point
is
still
the
histogram
data
point.
H
So why couldn't... maybe I don't understand this, that's possible, but why couldn't you write the cumulative Prometheus histogram just as the sketch, bucket for bucket, sort of... not literally, but...
H
Yes, okay, you just... globally. I understand, okay.
D
But my proposed solution... because I think the backend needs to know if this is variable buckets; let's call them variable buckets, or whatever, even though they have this nice property. I think we will end up having these as another entry in the oneof in the metric descriptor, because we have to encapsulate the information that, for this metric, you will get variable buckets, and the backend has to be prepared for that.
C
There's a lot to think through there. I feel like I'd feel more comfortable if that protocol were commented, but I think you've accurately summarized all the concerns here. I don't think that we really need to get hung up on delta versus cumulative, but it's probably true that a receiver wants to know whether they should expect buckets that are going to move around or not, although I suspect any real system has already dealt with that problem; like, even normal histograms can change buckets. So, yes.
C
Never, and they've always had classic histograms; they didn't ever do any kind of sketch. I literally was just having this conversation earlier today. Yes, so don't look to the Monarch people for guidance on this topic.
D
No, I'm just thinking... I'm just giving you examples of... I don't know, we should probably talk with the Prometheus people and see how they handle, if they handle easily, the dynamic buckets.
C
Well, I suspect the answer is definitely no, and that's because they've destroyed their histogram by the time they put it in their storage. Like, each bucket becomes one time series, and they don't know about histograms in the backend. That's what I've learned from studying Prometheus remote write, so yeah, this is going to create trouble for Prometheus as well.
D
It's interesting, because they send the buckets for every point; so for every time series they send the buckets. So they support having different buckets per point, and I was curious why they do not, even... but yeah, they almost have everything up to the backend. I think it's just a backend implementation that they don't have, anyway. To answer your question, how I personally would do it from here: I would have to think, and we should think, whether we need to know the information that the buckets are variable or not.
D
That's fine, and the other thing that we miss is min and max, and probably what we will...
C
I know New Relic has asked for it, so I feel like it's a reasonable request.
D
It is a reasonable request, but what I'm trying to say is: should it be in with the histogram, with the buckets, or should we have, just for the New Relic sanity and stuff, something different in the summary that will be added later as a different thing?
C
Yeah, and even as we look at this Datadog protocol here: they call this SketchPayload, but this, I believe, includes both summaries and sketches. If I had comments, I would know better, but looking through the code led me to believe this as well, that in the agent code there, basically, you can think of a histogram as a type of summary, you can think of the quantile summary as a type of summary, and you could go with DDSketch as a type of summary.
D
Yes... I'm not saying we should... no, no, I would duplicate the code, and that is a performance hit, because of the oneof being in every point. So as long as we don't have the oneof in every point, performance is way better, versus having the oneof in every point. But essentially, semantically, it's exactly what you said. It's just about how we model the data in the proto, to have a bit better performance.
C
Okay, I don't know if we've created action items here. Michael, can you at least, on your end, get some information on whether, like, Datadog would like to own this at some level: publish a marshal/unmarshal, like a reference implementation, at least for Go, since that's the one we're going to use in the Go SDK and the collector, which is sort of the most important place.
H
Real quick, for marshal/unmarshal: I don't know if that satisfies at all what you're looking for.
C
Sorry, I can't find it because I'm in the middle of... I'm not on the share, how's that, and I know how to find it. Yeah, yeah, no, I've seen this as well. Okay, I think... yeah, I was digging through it. It makes me think I'd really want a documented protocol, like, with a link to the paper, because the paper is great, and, you know, anyway...
C
I'm gonna put a link to this, and then I gotta go pretty soon. I would like to, if you don't mind, hear me out: oh gosh, I got the wrong URL, I'm not going to paste it in, but Datadog has published a Go repository of this algorithm that is not being used by anybody except OpenTelemetry-Go. Briefly, before I head out to pick up kids, I wanted to... well, I'm not sharing anymore, but this is about spec issue 819.
C
I was implementing a host metrics plugin for Go. This would be the equivalent of the collector's host metrics plugin, but in pure code, because we're assuming we don't have a collector some of the time. And I was reading through OTEP 119, which talks about this utilization metric, and it seems that for timing utilization it's sort of a special case, because... and we've tried to keep all these observer instruments kind of stateless, meaning that the callback doesn't have to remember the last value it reported.
C
This is the first time we have an example where you do need to know that, and part of me might want to just kick it out of the spec and say that this utilization concept can be derived downstream, but you have to have state, or like a stateful downstream, that's going to remember two points and then compute a difference. In other words, to compute utilization we need to know the difference, and again, there is a meaningful lifetime utilization, but it's not what people want to monitor or export.
C
This can be derived in your downstream system; your vendors, you can derive this, we've given you all the data. But, you know, Aaron wrote this into the spec because the idea was for it to be easy to monitor, and it won't be easy to monitor if you have to synthesize it first.
C
So I'm not sure what the right thing to do is here. It's pretty minor in the big scheme of things here.
C
Well, it did give me ideas. I'm not saying anything is broken, but remember, way back, long ago, we had OTEP 88; we talked about maybe 10 possible instruments. We talked about a delta observer, and we decided not to include it. So the delta observer was the notion of an observer that reports changes, and the reason why we left it out is...
C
The reason why I said that, though, is that I was thinking: if I'm gonna report time utilization, what I really ought to do is report the difference in used time. So if I report usage as a difference, a delta, then it's essentially a rate question, and rates can be derived downstream. So if I report a delta, then time utilization is a rate of a delta; but if I'm not reporting a delta, you have to have state in your backend and compute the delta and then... or compute...
C
...happen, but I think the goal of OTEP 119 was to make it so you could mark these things as simply as possible, so you wouldn't have to do a join, or a subtraction, or something like that, in order to compute utilization. And what I'm seeing is that if the protocol gave me, or if the instruments gave me, a natural way to report a change through an observer, then at least I'd be one step closer. But I still have to compute a rate to get that utilization, and...
C
The kernel... I would have to compute the delta myself, yes, and then I would record the delta. But then at least the downstream system wouldn't have to retrieve an old value to compute utilization. I see, so that's the only minor thing that just came up, and I don't think that's important enough to change anything, but it did make me think there's probably a use here for recording deltas from observers. But it's just a rare case, and I think time is actually very special and we should just know about it.
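A toy illustration of the point above, with made-up readings: if the observer reports the delta of used CPU time for each interval, utilization is just that delta over elapsed wall time, and no downstream state is needed.

```go
package main

import (
	"fmt"
	"time"
)

// The CPU-seconds values below are made up; a real implementation would read
// them from the OS (for example /proc/stat or a host-metrics library).
func main() {
	prevUsed := 120.0 // cumulative CPU seconds at the previous observation
	prevTime := time.Now()

	time.Sleep(100 * time.Millisecond)

	curUsed := 120.03 // cumulative CPU seconds now (hypothetical reading)
	curTime := time.Now()

	deltaUsed := curUsed - prevUsed            // what a delta observer would report
	elapsed := curTime.Sub(prevTime).Seconds() // wall-clock interval

	utilization := deltaUsed / elapsed // fraction of one CPU used this interval
	fmt.Printf("delta=%.3fs elapsed=%.3fs utilization=%.2f\n",
		deltaUsed, elapsed, utilization)
}
```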
D
And for the moment, if we were to build this utilization, how I would do it most likely: I would propose that we, I don't know, maybe we build an aggregator or something that has the previous state, the previous point, and is capable of computing a delta, and then it has to do the division. And anyway, this aggregator, once you do utilization, you will dump... you will put a gauge on the wire, because there is nothing more... yeah, right now.
C
That's all easy to imagine; it's a little bit harder, and it's sort of like making life difficult. I think the easy approach is just to have a stateful observer that remembers its last values, and then later on, a year from now, when we have the view machinery, it can do that for you, and then, maybe, sure.
C
Don't wanna keep you here.
C
We should cut that off; it's super low priority. I've stated it and you've understood it. Aaron, I'm sure you heard it, great. I have to run now, and I know that there's one more item that Gang's gonna raise. Yeah, I'm gonna listen to you while I get ready to go.
B
Yeah, so for a use case, we're trying to add an authentication plugin or extension to the Prometheus remote write exporter, and, like, I'm just wondering how that could be done, and, like, if... because I know the existing extensions in the collector don't exactly do anything like that. I'm wondering if there is a possibility something like that could be done in the collector.
D
I'm not sure I understand your story.

B
For me? Sorry, yeah. So, for example, if, like, say I want to add... because, like, the HTTP client right now supports, like, basic auth and bearer token, right? So if I want to support anything other than that, how would... is there...?
D
See, so let's assume Amazon has its own way to do authentication, which is not TLS, which is not... sorry, which is very complicated, not token-based, and it's another thing. How do we add that? That's an interesting question.
E
I mean, Bogdan, what we wanted to understand is, you know: is there a recommended architecture in terms of, you know, plugins that could be leveraged and called from, say, an exporter, or, say, a collector, or, you know, which... oh.
E
This... because, I mean, the same use case will exist for any vendor, right, where there is, say, New Relic has a component, or Splunk has a component.
D
Not really, not really, because if they don't reuse something... so, if they don't use an exporter, a standard exporter like Prometheus remote write...
E
An open implementation, which is then something... because we do want to make sure that OpenTelemetry is the baseline.
D
It's not like... it's still public, but it's not... what I'm trying to say is: you still have your code public, it's just that it's a proprietary Amazon authentication, like, that is used there.
D
Because it seems a bit hard to do the code injection, right? I bet it's not only config that you need; you need some specific code, correct?
E
I mean, you need a config at a minimum, right? That is a variable where you are switching on some kind of authentication which is not standard in the exporter. But then, on the other hand, you also need a plug-in of some sort, which has the functionality to be able to complete the auth.
D
Yeah, so that's the part you need; you need something. To be honest, to be honest, I have not... we haven't talked too much about this, but I would suggest you may want to look at gRPC auth. Okay, you have something there, yeah. We may be able to do something similar with that for all the exporters, all the components. Okay.
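One way to read the gRPC suggestion, as a hedged sketch using grpc-go's standard PerRPCCredentials hook; the header name, token, and endpoint below are placeholders, not an actual collector or AWS API.

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// customAuth implements grpc-go's PerRPCCredentials so every outgoing RPC
// carries vendor-specific auth metadata. The header name and token source
// are placeholders for whatever the vendor's signing scheme needs.
type customAuth struct {
	token string
}

func (a customAuth) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
	return map[string]string{"authorization": "Bearer " + a.token}, nil
}

func (a customAuth) RequireTransportSecurity() bool { return true }

func main() {
	// An exporter could attach the credentials when dialing its backend.
	conn, err := grpc.Dial("backend.example.com:4317",
		grpc.WithTransportCredentials(credentials.NewTLS(nil)),
		grpc.WithPerRPCCredentials(customAuth{token: "example-token"}),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}
```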
E
No, I mean, this is super helpful, Bogdan, because we wanted to just get your guidance and thoughts on it, and then we will definitely propose and generate a more general architecture.
D
Also, some people have asked us to carry auth tokens from receivers to exporters, yeah. So a user puts an auth token three levels up and it goes via the OpenTelemetry ecosystem until it hits Amazon, for example, Amazon Cortex or whatever it is, yeah. They want to set up the token, or the auth things, way above. So anyway, we need to think a bit. I would start with this.
D
Yeah, the other option is the whole service concept.
D
We have a notion of a service inside the collector; the service passes what we call the host, information about... or passes some information to every component, and we can pass this auth there. But where does this code live? So, in general, you are saying that it's runtime, but where is this code? How do you inject it into the core, like, if you don't put in the core a direct dependency on that package...
D
Go, unfortunately, has very poor support for plugging... for dynamic plug-ins. So I would encourage you to not go down that path. Okay, you can check, but last time when I checked the Go plugins, there is an effort called Go plugins, but it's only on Linux; it does not work on any other system.
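For reference, this appears to be the mechanism being referred to: the standard library plugin package, whose build mode is supported on only a few platforms (notably Linux). A minimal sketch, with a hypothetical shared-object path and symbol name:

```go
package main

import (
	"fmt"
	"plugin"
)

// The .so path and exported symbol are hypothetical; the point is that the
// "plugin" build mode only works on a few platforms, which is why relying on
// it to inject vendor auth code into the collector is discouraged here.
func main() {
	p, err := plugin.Open("amazon_auth.so")
	if err != nil {
		fmt.Println("cannot load plugin:", err)
		return
	}
	sym, err := p.Lookup("GetAuthHeader")
	if err != nil {
		fmt.Println("symbol not found:", err)
		return
	}
	// The plugin is expected to export: func GetAuthHeader() string
	if getHeader, ok := sym.(func() string); ok {
		fmt.Println("auth header:", getHeader())
	}
}
```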
D
Anyway, there are two problems. One: how do we pass the code to the components? Yes, we have ways. Second: how do we pass the information between different services, like different OpenTelemetry collectors? Like, you have the SDK talking to the agent talking to the collector, and as a collector it has some tokens; maybe, even though we are talking OTLP, you may want to configure this thing from the source. That's another problem. The third problem is: how do we inject the code, like this code? So there are a lot of problems.
D
I
don't
know
the
solutions,
but
I
think
we
should
start
breaking
them
down
in
small
pieces
and
propose
solutions
for
for
different
things.
It
may
be
complicated
to
get
the
amazon
thing
inside
the
core
inside
the
core,
the
open,
telemetry
core
in
the
country
where
we
have
amazon
already
and
google
and
everyone
we
may
be
able
to
get
that.
Okay,
but
inside
the
core
may
be
hard
to
to
get
it,
because
people
want
to
to
make
amazon
more
special
than
google.
No.
D
So far, our solution was very simple, yeah: every vendor has its own exporter, which is proprietary, has code dependencies... well, it's not proprietary, it is open source, but it has dependencies on vendor-specific things, and we don't care what's happening there. But this is an interesting mode, because this Prometheus exporter may be reused by others as well. So, as I said, the first option for you, the immediate option, is creating in the contrib a wrapper called Amazon Cortex, or whatever it's called, that uses exactly this code.
D
It just has a different name, registered as a different type of exporter called Amazon Cortex, and then you can inject your custom thing. For the moment, use that; at least we have 90 percent of the code in common, and everyone can reuse this receiver directly with TLS. It's the first step; I would do that, yeah, and...
E
Yeah, that's very good advice, appreciate it, because we'll definitely make a proposal back as we look at, you know, the implementation. Thank you so much.
D
Okay, I think we should close this. Thank you so much, everyone, and yeah, see you next Thursday. Michael, please, if you have time, explain to us the min/max.
H
Yeah, I'll follow up. I mean, it's just a quantile like any other, right? So the min/max will be within one percent of the p100. It's just that we implement it with a precise value. Yeah, so it'll be sort of a product decision, I think, at some point, whether or not we might carry that, but...
D
Sure, but my point is: do we need to carry the min/max in OTLP? As you said, is this very important... are there, sorry, vendors that will be very unhappy if we don't send them the min/max, and all the other fields? I don't know. Maybe there are other implications, I may be wrong, but... because min/max can be used, when you do interpolation, to do different tricks; it's not like any other quantile.