From YouTube: 2023-03-29 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
B
A
Oh, for real? Yep. Oh dude, we should grab a drink or something. I'm good, I'm in New Jersey.
B
A
Yeah, you all do. Are you all doing the all-hands stuff this week, or just a team thing?
A
Cool, well, at the very least I have good lunch recs.
D
Yeah, I guess the first one is mine. I created an issue yesterday, based on a conversation with someone here at Grafana. Basically, the requirement they have is an authenticator that needs access to the URI the user is requesting, in order to know which tenant it belongs to, so that it knows which data store to go and look in for authentication data.
D
We thought about a couple of approaches, and that reminded me that we had a notion of interceptors, or authenticators, at the very beginning: we had those unary interceptors and stream interceptors, and HTTP handler functions on the authenticators, so that other authenticators could override the default.
D
Now, for this use case, when talking about solutions, one of the things we thought about was having interceptors exposed as part of the API for authenticators, so that in this case we could implement an HTTP handler, or a function that receives an http.Handler and returns an http.Handler. Then we have access to the request URI, and we can block or approve the processing of the request.
D
Now, the other solution we thought about was having a more generic notion of middleware instead of authenticators, and that kind of matches what Sean from Atlassian suggested, I think a year ago or so. Basically, it would allow extensions to become interceptors, like middleware, for both HTTP and gRPC.
D
So, in our case, instead of having an authenticator, we would have a middleware instead, and the same for Sean's case. Sean first suggested, or requested, a rate-limiting processor, I think it was back then, and then we thought that an extension would be more suitable, like an authenticator extension. But the problem is, we probably want to chain different middlewares as part of one stack, so that we can have authenticator components and other middleware, like rate limiting and block lists, and so on, yeah.
D
So that's basically it, and the proposal that I would have is to adopt, or create, a notion of middleware and have them implemented just like authenticators. We would then create an interface for middleware, with three functions to implement: unary and stream interceptors for gRPC, and an HTTP handler for HTTP, and have those called as part of the configgrpc and confighttp packages, like on the ToServer, right. But I'm wondering if you have other ideas.
C
I mean, it extends the current auth logic. One thing I want you to take into consideration is: do we keep the auth? We already added authenticators; do we deprecate those and move them to interceptors, middleware, whatever we call them? That's the first thing. Second, I want you to provide helpers: the auth may become a metadata interceptor, or whatever you call it, so that people who need access to metadata don't have to implement three functions.
C
They can use this helper and implement only the function that accepts the metadata.
C
Hence, if we have a helper like the current "auth as headers", or whatever we call the thing, implementers don't have to care when we add a new thing there later, for example.
C
So, for me, the first thing is: this is a story for deprecating, or whatever we do with the current authentication config. Maybe we keep it, maybe we don't, I don't know, just give me a solution there; I think having duplicates there is going to be confusing, so probably we'll have to put up a story to change that. And the second one is helpers, so that people don't have to implement all the possible interceptors that are there, having a lower-level API. Happy to offer that.
D
So, we also talked about the notion of authorizers back then. The original proposal from Sean was a rate limiter, and then we evolved into the notion of authorizers. Basically, our thought back then was: we have authenticators, which identify who the user is, and then authorizers, which validate whether the action can be executed by such a user, right. So we could have, like, a chain of authorizers and then one authenticator.
D
Now, the reason behind that is that the interface for an authorizer is simple: it just returns a yes or no; or, actually, three options: accept, reject or neutral. Now, with this new API, or this new middleware solution, I don't see a common...
D
Basically, it is either blocking or not blocking the execution, right. And it might not even be blocking: it may be changing the metadata that is flowing in. So there is a chance of changing the metadata and then proceeding with the execution, or stopping the execution altogether, or directing the execution to some other place, right. So I guess with middleware it's not very clear what the desired action is for each middleware that is going to be implemented.
D
My plan at the very beginning was to just implement the basic stuff and then see how it evolves, especially how auth connects to it. So I think that auth is going to fit very neatly into middleware; I just don't know yet whether we want to provide users with a higher-level API for authenticators, which is easier to grasp and to reason about, where, just as an implementation detail, they use middlewares.
D
So the actual implementation is just middleware, but to the user it's exposed as an authenticator. So, yeah, that's what I had in mind up to this moment: basically, come up with an API, come up with a proposal, and once the basic primitives are done, then we think about how to fit auth into that, and then perhaps which other helper APIs we can build on top of that to make it easier, for instance.
C
Auth already does that; I mean, auth is already that, because we don't expose these primitives, but we have the primitives already implemented. So you already have that helper in auth; actually, auth is right now the helper for that. You may not call it auth, you may call it header extension or whatever; I think we also have a header extension, but we can call this one "header helper" or whatever, that gives you access only to the headers.
C
The other thing that, by the way, I want you to think about is how we make sure we don't move processing into these middlewares, interceptors; I want to make sure we are not adding yet another layer to do data processing here. So I'm not sure how to hide that, or how to... I mean, the word "processor"... no, no, Alex, that's not it. But, seriously speaking, I want to maybe find a way to protect us, to say: hey, these are all about metadata.
C
D
Yeah, I mean, those are good questions, and I think the rule of thumb here is that middleware should not really do pipeline or data processing, like telemetry data processing, right; that is the role of processors. Middleware is really about the channel where the information is coming from, right.
C
So that's where I think you get into a tricky situation, because you may want to do things related to the data as well, and I don't know where that should be. For example, some of the interceptors don't have the data parts, and I don't know whether we should give users the ability to parse the data there, or whether they shouldn't do that, because that removes other optimizations that we can do in other places.
D
No, it makes sense, and I think we can even use rate limiting as an example of where rate limiting should be applied, depending on which type of rate limiting is needed: if it is requests per second, then it is a middleware; if it is data points per second, then it is a processor, it has to be part of the pipeline.
C
All right, but you may have to pass some information, for example about the URI, into the context, so that the processor that does data points per second does it at a URI level. For example, you want to have a rate limit of data points per second per URL, or per something that comes in the metadata. You may have to combine a middleware with the processor, to make sure that the data are annotated with a tenant or something like that, and then you do rate limiting by tenant.
D
So, just thinking out loud, right: perhaps that solution would involve having an interceptor that picks data from the client connection and places it into the context, and then a rate-limiting processor that reads this data that was placed by the interceptor and does the actual rate limiting.
E
C
Yeah, that's a possibility, yes; to not parse it, for example. That's a good point: if you want to avoid parsing the request, because you know you are already over the limit, you may put that in, optimization-wise. But what I'm trying to say here, to this point, is that these are components that we may have to make work together. That's what I wanted to say: every component has its own role.
C
D
So I will take small steps, then: define the primitives first, and then come up with a plan for the future of auth. It may involve those middlewares, interceptors, or it may not; I think it will converge, it's just that, for the user, having the notion of authenticators may be easier. And then, in the future, helpers, so that people can build extensions or interceptors without having to implement all of the stream interceptor, unary interceptor and HTTP handler behavior.
C
D
G
I do have one question about it, though. You mentioned that this would be supporting HTTP and gRPC: how would an end user know what can and can't be intercepted by these components?
G
D
It is basically the same way that authenticators work today: they're applied to server components or to client components, so we have server authenticators and client authenticators. In the case of the scraper receivers, I think we're not gonna...
C
D
That's true, yeah. I think, implementation-wise, whatever consumes the HTTP server settings is gonna benefit from interceptors, and whatever uses the gRPC server settings is also going to benefit from interceptors, but...
D
I don't know, I really don't know. So, for HTTP, it would have to be a RoundTripper, and for gRPC, very likely, a client interceptor.
D
So, do we want that? I don't have a use for that right now. Should we have it implemented even before hearing from users that they need it, or should we consider implementing it only when we hear concrete use cases?
G
Yeah, I mean, I'm thinking about, for example, rate limiting. Currently we have the scrape interval, or whatever we call it, on the client side for scrapers, and really that's just another form of rate limiting, right: how often do you want to poll your back end, or whatever your service is. So I just see the two as very strongly related; maybe they're not exactly the same, but...
C
They're not the same, because here we are the one that starts the action. Rate limiting is, more or less, somebody else triggers the action and you are stopping their action; versus here, you control the trigger, and it's better not to start the action at all, rather than stopping you from executing the action that you started. Do you see the difference? Yeah.
C
Even for receiver components, consider response headers. For example, we may get response headers which we right now drop in scrapers, because we don't care about them. But let's assume somebody cares about response headers and wants to do something with them. Let's just say they put a tenant in there, for whatever reason: there is a pull process, a pull request, like Prometheus; let's say, in open metrics, somebody puts the tenant ID in their response headers.
C
Don't ask me why; or they put a token there, and we want to use that token to, let's say, later authenticate another request. So this is an example where I think an interceptor for a client may be needed: to get a token, maybe put it in the context, carry it with the data, and then, in the exporter, use it as a token for authenticating the next request, the collector being the middleware. So there are use cases, I think, for getting access to the request.
D
So, we actually have the headers setter extension right now, and I think it is based on client authenticators. So that would be one use case, or the first implementation, of a client interceptor. But, I mean, arguably it is implemented, it is there, so we don't actually need to touch it, and people are using it, but...
C
I would change it. I would change it to be implemented on this new framework. Don't get me wrong, I'm not gonna change the extension itself, the public part of the extension, but, to prove that our new design for this interceptor works for all the cases, I would change it to use the new design, to make sure we got it right.
D
Yeah, so I already monopolized half of the call today with this topic, so we should continue the discussion there, but I do agree with you. And if we do that, we are going to make breaking changes for people using the headers setter right now, because, if you look at the issue that I created, I suggested another approach.
D
Yeah, all right, so I'll come up with a proposal, and perhaps... well, definitely a PR for that, and then we can discuss it over there.
G
Thanks, Juraci. I guess I'll go next. The first of my two issues is whether or not we're okay to move ahead with enabling OTel metrics by default. This was a comment from some time ago, I think it was in October, where Bogdan suggested that we should move to enabling OTel metrics by default as soon as possible.
C
There is only one issue that I filed, and I think the person from Lightstep working on it was Gustavo. The problem that we will have is with the resource attributes.
C
So, there are a couple of resource attributes that we have, like service ID and service name; I think these are the two main ones. In OpenCensus, in the current schema version, these are normal attributes on every metric. Now, if we enable the OTel thing, these will become a special new metric, the label set or whatever it is, the target info. Okay, that one: target_info. So they will become target_info from now on.
C
The thing that I want to make sure of is: if we do that, we expose it in OpenTelemetry metrics. The most common usage that we have right now is people setting up a receiver to scrape that endpoint, at least on our side, because we didn't give them a chance to do it other ways. So now, what I want to double-check is: the metrics that we are right now exposing with OpenCensus, exported with Prometheus, scraped with the Prometheus receiver, what pdata do they get?
C
Is it the same pdata we get after the OTel enablement? They will get a target_info, but I think with the target_info we do the right thing: we put them in the resource. I don't know, just make sure; what I'm looking for is: do we get the same pdata? It would be great if we do; if we don't, let's at least document the differences when we enable this. Those are my concerns.
C
G
H
I can document the differences that we will see. I've actually been running this configuration, and things are not great, I don't think; we should talk about them in a moment, but I will document them. We're having a problem with what comes out of OpenCensus: there's a service_name and a service_version and a service_instance_id, and they don't match the thing that is service.name, service.instance.id and service.version, and I can't quite explain how I think we should solve this. Partly, the Prometheus receiver is doing some magic.
H
That is not quite right: we're not respecting OTel labels if they come from the scrape. The target_info part actually does work, but it's really confusing: in a Prometheus setting you want to turn those dots into underscores, and we're not always turning them back into dots consistently. It's kind of a big mess. However...
H
I would like the collector to be able to push OTLP from an OTel SDK. I mean, going through a round trip and defining it correctly is a whole lot of work, and we should be able to push OTLP metrics separately from documenting the strange cases we get from going through a Prometheus export first.
C
But we do that because otel-go doesn't yet give us a way of getting metrics directly from OpenCensus. We went through this route because we use Prometheus as the middleware: both of these libraries know how to use Prometheus. So what we do right now is use Prometheus as the library that actually mediates between the two libraries. If we are able to push everything to OpenTelemetry, even better, but I don't think we have that yet; there is...
C
There was a missing part in the exporter, to be able to implement an OpenCensus exporter or something; I don't remember, but David knows the issue there very well. Yes.
H
I also understand that issue, so we can solve this. I just want to speak up and say that Lightstep moved Gustavo off that effort and I've replaced him, so I will be documenting and carrying that forward. I think... I also see the chat from Anthony: I have been using the normalization feature flag.
H
We need to get that one on as well. It might be that the proper thing to do, when OTel is in use, is to stop putting service_name, service_instance_id and service_version into the census code path, because that's where the corruption is coming from.
C
And I remember there was another issue, which I linked just now, about the underscore totals and stuff, where open metrics... sorry, where OTel has one behavior and OpenCensus has another behavior. That will also be solved if we are able to push the metrics directly from OpenCensus into OpenTelemetry, and OpenTelemetry becomes the de facto thing.
H
E
Yeah, the discussion was basically a process discussion. This feature gate has been in the alpha state for several months now, and the question was: do we feel comfortable moving it to the beta state, so it's enabled by default but can still be disabled? It was just Juraci and I on the call at that point, and both of us were comfortable with that; I expect Juraci will have an issue.
H
Okay, I don't quite... sorry. For me, I'd like to see both feature flags turned on at the same time, so it uses the OTel SDK and does the normalization.
H
It sounds like there's still a problem with the census data path, and we need to follow up on the census bridge, but I don't feel like I have the details to carry that forward right now.
E
I think the normalization feature flag should be able to be enabled without the OTel data path, because that's on the Prometheus exporter side. I understand that they're related, and I think I agree: I wouldn't want to enable the OTel path by default without also enabling normalization, but normalization can go first. Okay.
C
Okay, perfect. So we are blocking on the normalization being enabled, we are blocking on these, and then, Josh, you're gonna follow up with the bridge, and maybe we don't have to go through Prometheus at all. But if we do, we should enable this after the normalization is enabled.
C
I,
that's
a
bit
confusing,
which
we
should
so
for
me,
there
are
four
or
five
issues
right
now
about
this:
let's,
let's
make
a
canonical
one
or
somewhere,
where
I,
where
I
can
track.
Where?
Where
do
we
want
to
to
have
that.
G
All right. The next issue is also mine; I just want to bring up this issue that Evan had opened some time ago.
G
I believe we talked about this back in February, and we had all agreed that we would make the resource attributes available, but there was a comment that was still left unanswered in the issue, and I just wanted to follow up here to make sure that we have a path moving forward for this. I don't know, Evan, do you have any more details you wanted to add?
F
G
C
G
B
I guess Carlos?
I
Hi, yes. I'm not sure this is something that we still need to discuss, because there was a little bit of a misunderstanding on this PR. But, basically, the configschema library is something that my employer is using to generate metadata for the configuration structs in the collector, and then we're generating documentation and tooling around that. I have a PR to kind of reorganize that, moving some of the library content into the pkg directory.
I
There are only two exported functions anyway. So I was asked to bring this up here and get the community's take on the future of this library: whether it belongs in contrib at all, and, if we are going to keep it in contrib, whether it's okay to basically put this library in the pkg directory and have a couple of exported functions. So, yeah, I would be fine with just taking this whole library out and putting it into a separate repo; I figured we could keep it here if people are using it, or if people want it, or if they think there's some possibility that it'll be used in the future. I can go over sort of what the configschema functionality produces; I can share my screen real quick, if people are interested.
B
I said there was a lot of public API in that PR, but it actually is two functions; I entirely misread the diff. I think, from my side, it makes sense to keep this in pkg: we can update other stuff there, and it seems like a useful feature, not only for your vendor but for everyone. So, yeah, I don't know, we still need to go through the PR, but it makes sense to me.
E
Yeah, I would say, as the person who commented on that: I think I agree with what Pablo said. The one thing I would call out is that it looks like there's some splitting from a main package out into the named package, and all of those things would be new exports, if there are exported identifiers in there, because you can't import a main package, but now they would be moved into a place that could be imported.
E
A
I think I'm next. I just had a quick one: there was a conversation happening a few minutes before this meeting, and I just realized I was going to be here, so I figured I'd ask. If anyone's not familiar, that's okay, but can you all confirm, for Prometheus native histograms...?
A
E
My recollection is that the Prometheus text format does not currently have support for native histograms, so it would only be supported through the protobuf format, right?
A
Cool, okay, that aligns with my understanding. Cool, no further questions, your honor.
A
Yeah, no, folks within my organization are working on improving that, and I'm not sure what else I can say at this time.
C
In general, I would like to discuss that with you a bit, maybe in Slack, but I have found it very hard for people to use Prometheus pull for scaling stuff; you will hit trouble, and I would probably tend to encourage you, if possible, to use a push-based mechanism. As a friend, not as somebody who represents any community or anything.
A
G
All right, Josh.
H
It's trying to add, you know, data-batching capabilities, and my initial attempt at that had used an otel-go library that I was involved with writing a long time ago, so I'm confident in it: it's the attribute set data type from otel-go. I want something like it, but I want to use native pdata types directly, so that I can take a pdata map and use it to form an attribute set identifier.
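One way such an identifier could work is sketched below, with plain string maps standing in for pdata maps and FNV-64a standing in for whatever hash is actually chosen; the point is the canonical key ordering, which makes the id independent of map iteration order.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// fingerprint computes an order-independent identifier for an attribute
// set, the kind of key a batching layer could use to combine scopes,
// instruments and attribute sets.
func fingerprint(attrs map[string]string) uint64 {
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys) // canonical order: map iteration order must not matter

	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0}) // separator so ("a","bc") differs from ("ab","c")
		h.Write([]byte(attrs[k]))
		h.Write([]byte{0})
	}
	return h.Sum64()
}

func main() {
	a := map[string]string{"service.name": "demo", "host.name": "h1"}
	b := map[string]string{"host.name": "h1", "service.name": "demo"}
	c := map[string]string{"host.name": "h2", "service.name": "demo"}
	fmt.Println(fingerprint(a) == fingerprint(b)) // true: same set, same id
	fmt.Println(fingerprint(a) == fingerprint(c)) // almost surely false
}
```

A pure hash like this trades the exact equality check for lower memory use, which is exactly the collision trade-off debated later in the call.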
H
Maybe you already have an answer to this; I'm waiting for it. The reason why I noticed it is that the batch processor is doing something very simple when it does its batching, and we would like better batching. Better batching means we can combine scopes and instruments and attribute sets, and we just need to have a way to form those maps easily and correctly, without writing a bunch of code. Thank you very much, that's all I want. Also, please: my PR is open and waiting for reviews.
C
I just linked you the library; that's what I was mentioning. We do have a hash library for that.
H
C
We should do that. I linked it in the document, cc Dmitry, who owns these, and I think you should work with him to move that into a pdata util package, or whatever we call it. But, yes, we should bring it to core and use that; we use this extensively in a bunch of other places where we need to do some kind of grouping of data, in the metrics world especially, yeah.
H
This is what I was looking for. I don't want to comment on the exactness or inexactness of using hashing at this moment, but thank you, I'll take a look at it. I think, ultimately, we would like to see an option for the batch processor to use it to combine scopes and metric instruments and so on.
C
It was added like three, four months ago. Essentially, we needed it for multiple purposes, and we started to standardize, because...
D
C
That can be changed, because we don't put it on the wire, so it's used for local purposes. So, if the concern is the hash library, we can change the hash library; that is an internal implementation detail. The whole idea here was that we have resources and we need to group by resource, or we have metrics with a bunch of attributes and we need to identify which are similar attributes, and stuff like that.
C
We use these; I think we changed the Prometheus receiver to use it, I may be wrong, or we are about to change that. Anyway, you can look, provide feedback, change the hash: if you have a better hash or anything, change it.
D
H
About the hash thing: you know, we are going to depend on otel-go, and it does have this existing functionality, which is exact, not approximate. I'm not saying we shouldn't use a new library, but there's an existing library that's already in use by the SDK for exactly this purpose, and we should look at performance before we go further. Yeah.
D
So that's basically what I was gonna mention. If you think that another hashing function is better, in whatever meaning of "better" you have, do open a discussion, because then we need to align with the other places where we have hashing.
C
Let me explain a couple of things to Josh. This is not perfect, and I know we could have done a perfect one, but we intentionally did not do the one that constructs a string, or encodes this as a string or an array or something like that. The reason we didn't do that was to not hold the memory: one of the things we had was the memory pressure of holding all of it, because we were using this in the Prometheus receiver, where we need to hold memory for all the timeseries, or whatever.
H
I would also say the Prometheus SDK has a fingerprinting mechanism, and it also checks collisions, so it's a downgrade from the Prometheus use case to stop checking collisions, in my opinion. Lightstep has its own metrics SDK that it uses; it's compatible with the otel-go API, but it's totally different. I also have fingerprinting in there, and I also have a correctness check, and if I disable that correctness check I get one less allocation and a little bit less safety. And you're right...
H
C
H
Talking about the Go SDK for Prometheus metrics: if you use it, it will create a map keyed by a 64-bit fingerprint, and then, if you look an entry up, it will scan that list and check equality, and, in order to do that, it's keeping the original attribute set in memory. So it has that memory pressure, and it's...
C
That's... but that's on the library side, where they have to report the points anyway. So there they have to hold this memory, because they have to keep the labels around anyway, to report them in the Prometheus server where they scrape. They don't do the collision check, and that's where we got the inspiration to do the same thing. So, in the Prometheus server, you can check: they have a hash function.
C
H
Okay, that's great. If you can show me, or if you're correct about that, I believe you. You're saying that we're using 128 bits, they're using 64 bits, collisions are already okay, and you don't want to keep memory around. That's a good reason not to use the otel-go approach, because it's going to require you to keep that memory around so that you can check equality.
H
My point was that if all the inputs are... you know, you can have a very small resource, with one version, one service name and one service version, and now you're varying your service instance ID, and you're using host and port as your service instance ID. You don't have a lot of input variability, so there's a chance of collision; it's much smaller than 2^64, I don't know how big it is.
C
What is the canonical way that we will use for enabling and disabling things? Are we going with "disable" or "enable"? I saw that you said "disabled", but is that settled now? Can I merge the PR?
G
Yeah, so that's the current way that the OTEP was written. It's obviously not a specification yet, but I guess that's coming soon, and I suspect there'll be discussions there as well. So I don't know if you want to merge it now, before we move it to the spec, or...
D
C
At least, yeah, I remember the same thing, and then Anthony proposed the "disabled" thing, and I would just want to make sure that the work that Alex is doing for the OTel configuration, the library configuration, will be kind of consistent with this. So, Alex, if you are okay with changing that to "enabled"... I would much rather prefer "enabled", but I just want to have consistency, Juraci; I'm willing to drop that "disabled: false" inconvenience.
C
For consistency, like...
G
C
D
Anthony, are your comments part of the discussion that Bogdan linked to here? I would love to see your arguments for "disabled".
E
I don't actually recall the discussion being talked about; I don't see the comments on this issue. Thinking back about what might have caused me to make such comments, though, it probably relates to the use of environment variables. I believe the specification for configuration through environment variables requires that the variable be named in such a way that you have to give it a true value to change the behavior.