From YouTube: 2022-06-09 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
A: Off to a very exciting start this morning. Trask is out this week, so we don't have our note taker and moderator.
A: All right, so I've added a couple of items to the agenda, and it looks like there's an item about exponential histograms that's been added as well. We can start with that.
B: Yeah, I left the item there, mostly out of curiosity. We recently started trying to play with exponential histograms; they look very nice, pretty useful, so we're curious whether there's a timeline or plan for these.
A: So, exponential histograms: you can already use them in kind of an experimental fashion. You can use the view API to switch a histogram to use exponential instead of explicit bucket histograms. You have to use internal classes, ones that we mark as unstable, so proceed with caution, effectively. And that's the way it has to stay until the spec stabilizes, but there's kind of a catch-22: I think the spec stabilization is waiting on a couple of language implementations of exponential histograms before it stabilizes. So in terms of Java's readiness, I think Java is almost ready.
A: In terms of its implementation, there are a few rough edges that I noticed in how the implementation auto-adjusts the scale of the histograms, because it's supposed to do that based on the range of measurements that comes in, and I noticed something that looked like a bug with that.
A: So I need to investigate that, and I'll probably work on that in the next couple of weeks or so, but other than that Java's in a pretty good place for this.
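For background on what that auto-adjustment involves: in an exponential histogram, a measurement maps to a bucket index via its logarithm at the current scale, and when incoming measurements span too wide a range for the available bucket budget, the SDK scales down, which merges adjacent pairs of buckets. A minimal self-contained sketch of that arithmetic, following the OpenTelemetry data model's definitions (this is illustrative, not the actual SDK code):

```java
// Base-2 exponential histogram bucket arithmetic, per the OpenTelemetry
// metrics data model: at scale s the bucket base is 2^(2^-s), and a
// value v > 0 falls in the bucket with index ceil(log_base(v)) - 1,
// i.e. the bucket covering (base^index, base^(index+1)].
class ExpoBuckets {
    static int index(double value, int scale) {
        double base = Math.pow(2.0, Math.pow(2.0, -scale));
        return (int) Math.ceil(Math.log(value) / Math.log(base)) - 1;
    }

    // Scaling down by `by` merges runs of 2^by adjacent buckets into one;
    // the SDK does this when the observed range exceeds its bucket budget.
    static int downscaleIndex(int index, int by) {
        return index >> by;
    }
}
```

Higher scale means finer buckets; the auto-adjustment the implementation performs is essentially picking the largest scale such that all observed measurements still fit in the configured number of buckets.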
B: Thank you so much for the answer on that one. Sounds great.
B: We would be happy to help with testing if there's any need on that front. On the spec side, jmcd has been working on that, but I think Go and Java are the only implementations. So let's try to see if we can push on other languages, so we can actually go stable, given that Java is in a good state.
A: Yeah, and then the other tool that is available for SDK users to configure exponential histograms is the ability to specify, at the metric reader level, the default aggregation by instrument type. That's a broader brush stroke than views: you can say all histograms should use exponential histograms instead of explicit bucket histograms. I have a PR open for that.
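Conceptually, that reader-level knob is just a function from instrument type to a default aggregation. A simplified stand-in sketch of the idea (the enum and class names here are hypothetical, not the real SDK builder API, whose exact shape was still in the open PR at this point):

```java
import java.util.function.Function;

// Hypothetical, simplified model of a reader-level default-aggregation
// selector: one function covers every instrument of a given type, a
// broader brush stroke than per-instrument views.
enum InstrumentType { COUNTER, UP_DOWN_COUNTER, HISTOGRAM, GAUGE }

enum AggregationKind { SUM, LAST_VALUE, EXPLICIT_BUCKET_HISTOGRAM, EXPONENTIAL_HISTOGRAM }

class Aggregations {
    // Default mapping, roughly mirroring the spec's defaults.
    static AggregationKind defaults(InstrumentType type) {
        switch (type) {
            case HISTOGRAM: return AggregationKind.EXPLICIT_BUCKET_HISTOGRAM;
            case GAUGE:     return AggregationKind.LAST_VALUE;
            default:        return AggregationKind.SUM;
        }
    }

    // A selector that switches every histogram instrument to the
    // exponential aggregation while leaving other types alone.
    static Function<InstrumentType, AggregationKind> preferExponential() {
        return type -> type == InstrumentType.HISTOGRAM
                ? AggregationKind.EXPONENTIAL_HISTOGRAM
                : defaults(type);
    }
}
```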
A: It hasn't gotten any attention yet. Is that for the autoconfiguration? No, that's in the SDK, so that's part of the SDK specification. We punted on that for our initial stable release of metrics in Java, but there's a PR ready for when Anuraag and John and others can find some time.
A: So, the other two items on the agenda were added by myself. Trask normally runs this meeting, but he's out today, so maybe when we're done with these, Laurie, if you or maybe Jason can give an update on the instrumentation, if there are any updates that are necessary there, we can do that. So: I'm preparing for a release of the Java SDK tomorrow. This would be version 1.15.0.
A: And then I have been working on something that I think is kind of interesting, and it's actually good that Carlos is here, because I want to hear what his thoughts are on this. So I've been working on some instrumentation to bridge in metrics that are available from the Java Kafka client.
A: It has its own internal metrics tooling, and it exposes those through a hook that you can register, and so I want to bridge those metrics into OpenTelemetry. There's a ton of useful metrics available for producers and consumers that use the Java client. And the reason it's interesting, Carlos, relates to some work over in the specification.
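A bridge like that can be sketched, very roughly, with stand-in types. The real Kafka client exposes its metrics through the `MetricsReporter` plugin interface; the types below are simplified hypothetical stand-ins for illustration, not the actual Kafka or OpenTelemetry APIs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.DoubleSupplier;

// Simplified stand-in for the hook a client library exposes for its
// internal metrics (modeled loosely on Kafka's MetricsReporter idea).
interface MetricListener {
    void metricAdded(String group, String name, DoubleSupplier value);
    void metricRemoved(String group, String name);
}

// A toy "bridge": rather than instrumenting the library at the source,
// it translates metrics the library's authors already chose to expose
// into a registry of OpenTelemetry-style gauge names.
class MetricsBridge implements MetricListener {
    final Map<String, DoubleSupplier> gauges = new HashMap<>();

    @Override
    public void metricAdded(String group, String name, DoubleSupplier value) {
        // Translate the library's naming scheme into a dotted metric name.
        gauges.put("kafka." + group + "." + name, value);
    }

    @Override
    public void metricRemoved(String group, String name) {
        gauges.remove("kafka." + group + "." + name);
    }
}
```

The point of the shape is that the bridge is purely a translation layer: it owns the naming translation, while the set of metrics and their meanings stay under the upstream library's control.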
A: I think you drove this, Carlos, but there are some semantic conventions that have been added for metrics in Kafka, and these have been added, I think, from the perspective of a broker. So these are brokers exposing metrics via JMX, and then you have the JMX collector that collects these and translates them to the collector's pdata internal data model. What's interesting is the picture from the client-metrics perspective.
A: For the Java client, there's some overlap with these, but really just a small amount. Some of the concepts that appear are also true from the perspective of a broker. But what I don't want to do is limit us to only bridging over those metrics that have an intersection with these semantic conventions, because we'd be leaving a lot on the table. As part of this draft PR, let's see... let me share this.
A: Let's see if this prints. So I wrote a little program that prints out, in markdown format, all the metrics that are exposed by the Kafka client, and this is a table of them all. It's a long table: there are over 200 distinct metrics. I went through all of them, and you can make the case that all of them are useful in different contexts. So here's kind of what was occurring to me as I was looking at this.
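A dump like that is easy to produce: iterate the client's metrics map and emit one markdown row per metric. A self-contained sketch, using a stand-in class in place of Kafka's actual `MetricName`/`Metric` types (the real client keys its `metrics()` map by a `MetricName` carrying roughly these fields):

```java
import java.util.List;

// Stand-in for a library metric's identity: group, name, description.
class MetricInfo {
    final String group, name, description;
    MetricInfo(String group, String name, String description) {
        this.group = group;
        this.name = name;
        this.description = description;
    }
}

class MetricsTable {
    // Render the metrics as a markdown table, one row per metric.
    static String toMarkdown(List<MetricInfo> metrics) {
        StringBuilder sb =
            new StringBuilder("| Group | Name | Description |\n|---|---|---|\n");
        for (MetricInfo m : metrics) {
            sb.append("| ").append(m.group)
              .append(" | ").append(m.name)
              .append(" | ").append(m.description)
              .append(" |\n");
        }
        return sb.toString();
    }
}
```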
A
This
is
like
one
it's
useful
to
bring
these
over
and
by
leaving
them
out
of
open
telemetry
we're
doing
a
disservice
to
users.
It's
it's
definitely
data
that's
useful
to
users,
but
two
to
try
to
codify
these
in
the
semantic
conventions.
A
That
would
be
required
to
get
that
in
and
the
other
thing
that's
true
is
like,
even
if
we
did
get
these
into
the
specification,
the
metrics
that
are
available
through
the
java
kafka
client
might
not
perfectly
are
unlikely
to
perfectly
align
with
the
metrics
that
are
available
through
coffee
clients
in
other
languages,
because
the
you
know
the
vendor
that
produced
that
kafka
client
may
be
different
and
the
authors
of
that
code
are
going
to
be
different,
and
so
they
might
have
taken
different
things
into
account
and
so
carlos.
A: And so, Carlos, I know that you opened an issue at the spec level about this kind of topic: library-specific metrics semantic conventions. I've left a comment on it that describes my point of view; if you could take a look, and if others could take a look, that'd be great. Effectively, my point of view is that what I'm doing here in this Kafka client metrics bridge is not instrumenting Kafka at the source.
A: I'm just bridging in metrics that its authors have already decided are important, and I'm offering a translation layer between its data model and the OpenTelemetry data model. And so I tried to articulate that I don't think that type of thing should be codified in the semantic conventions.
A: I don't think it's a good use of time, and it's not something that we can actually control, because Kafka can change its instrumentation implementation upstream at any time, and then we wouldn't be able to guarantee our semantic conventions anymore. So, yeah, that's just food for thought.
B: Yeah, I agree on that. After some discussion here with my team, we came to that conclusion as well: it's a dream to have all the semantic conventions be part of OTel; it's not really reasonable in the real world. So I think we will probably go with having general suggestions and common cases in the spec, and then for the rest of the stuff provide some guidelines or something, because otherwise it's not going to be reasonable.
B: Also, you may have heard that there's a proposal in Kafka to report the metrics that are reported right now through JMX directly to a gatherer, using OTLP. But it's only a proposal, so let's see how that goes, and that would be only brokers and producers, no consumer metrics. So there you are.
A: Yeah, I just wanted to catch other people in the Java community up on that discussion.
B: Okay, in that case, let me ask: as I said before, we already started testing the exponential histogram in Java, and we already used the internal classes that you mentioned.
B: I saw that you mentioned that in Slack, so, pretty nice. The only thing I am mostly curious about, and this is not important, just general curiosity: whether there are plans, or whether it sounds very bad, to enable support for this through the autoconfigure artifact. Because at this moment I had to actually write the code and plug it in manually.
A: That doesn't sound very bad, and I think that's going to land in the specification at some point. When we decided to punt on exponential histograms, we also adjusted the SDK specification.
A: There used to be an aggregation that instructed you to use the best available histogram; it's not there anymore. The idea was that SDK authors could choose between exponential histogram and explicit bucket histogram based on some ambiguous criteria. We pulled that out because we argued that if an SDK changed its behavior from explicit bucket histograms to exponential bucket histograms, that would be unexpected, and a breaking change, for its users.
A: And as a compromise, what we said when getting rid of that best-available histogram aggregation was that later, once exponential histograms are stable, we would go back and add an option to the OTLP exporter that would allow you to choose exponential histogram as your default histogram preference for OTLP exporting. So I expect that to be added to the spec at some point, and personally I'd be in favor of making that option available in autoconfigure in an experimental fashion beforehand.

B: Okay, yeah, that would be super nice, super helpful. Perfect.
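For what it's worth, the OTLP exporter spec did later gain a knob along these lines, an environment variable selecting the exporter's default histogram aggregation. To the best of my knowledge it looks like the fragment below, but since it postdates this meeting, treat it as a hedged forward reference and check your SDK version for support:

```shell
# Later-standardized OTLP metrics exporter setting (verify against your
# SDK's docs): choose the base-2 exponential histogram as the default
# aggregation for histogram instruments.
export OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=base2_exponential_bucket_histogram
```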
A: Yeah, it's been discussed; there are a few things that have been discussed with respect to the hint API. So you might want to hint, if you have a histogram instrument, what the bucket boundaries should be.
A: But exponential histograms solve the bucket boundary problems really nicely. You don't need to know your bucket boundaries ahead of time at all to have good histogram density around the areas where your measurements are being recorded. That's why I'm such a big fan of exponential histograms.
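That "no boundaries up front" property follows from the buckets having constant relative width: at scale s, every bucket's upper bound is a factor of 2^(2^-s) above its lower bound, so relative resolution is the same wherever the measurements land. A tiny self-contained illustration (not SDK code):

```java
class RelativeWidth {
    // Ratio of any bucket's upper bound to its lower bound at the given
    // scale: base = 2^(2^-scale). Higher scale => finer relative error,
    // independent of the magnitude of the recorded values.
    static double bucketRatio(int scale) {
        return Math.pow(2.0, Math.pow(2.0, -scale));
    }
}
```

So at scale 0 each bucket spans a factor of 2, while at scale 3 each bucket spans under 10 percent, whether the measurements are microseconds or hours.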
A: Well, if there are no other questions or topics, we can end early today.