From YouTube: 2022-06-10 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A: Yeah, Carlos joined us, and he was interested in plans for adding exponential histograms. The status on those is that you can currently configure exponential histograms via views, but you have to use internal classes, and we can't make those classes public until the spec stabilizes. There's at least one rough edge that I'm aware of with exponential histograms, where they're not quite ready for prime time.
A: I found a situation where the scaling didn't work as it was supposed to, and I've been meaning to look into that. But other than that, to my knowledge they're in pretty good shape.
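For context, the scaling being discussed is the base-2 exponential bucketing rule: at scale s the bucket boundaries are powers of base = 2^(2^-s), so a positive value's bucket index is ceil(log2(value) * 2^s) - 1. A minimal sketch of that mapping (the SDK's real implementation manipulates the bits of the double rather than calling Math.log):

```java
public class ExpBucket {
    // Bucket index for a positive value at the given scale.
    // Buckets cover (base^i, base^(i+1)] where base = 2^(2^-scale).
    static int bucketIndex(double value, int scale) {
        double log2 = Math.log(value) / Math.log(2); // log2(value)
        // scalb multiplies by 2^scale; this also works for negative scales
        return (int) Math.ceil(Math.scalb(log2, scale)) - 1;
    }

    public static void main(String[] args) {
        // at scale 0, value 3 falls in bucket (2, 4], index 1
        System.out.println(bucketIndex(3.0, 0));
    }
}
```

Raising the scale by one doubles the number of buckets per power of two, which is why rescaling has to re-map every existing bucket count and is easy to get subtly wrong.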
A: There are a couple of other interesting details there. One: I have this PR out in opentelemetry-java to allow readers to influence the default aggregation by instrument type. That's going to be important when exponential histograms are stable, because while you can configure exponential histograms via views, views have some downsides.
A: So if you wanted to enable exponential histograms across the board for all your instruments, configuring that at the reader level is the better way to do it, and the change that I have would allow for that.
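For reference, the view-based configuration mentioned above looks roughly like this in opentelemetry-java. At the time of this meeting the aggregation still required internal classes; in later releases it was exposed publicly as Aggregation.base2ExponentialBucketHistogram(), so treat this as a sketch of the shape rather than the API as it existed then:

```java
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

public class ExponentialHistogramViews {
    public static SdkMeterProvider meterProvider() {
        // Route every histogram instrument to the exponential aggregation.
        // A reader-level default (the PR under discussion) would avoid
        // having to register a catch-all view like this.
        return SdkMeterProvider.builder()
            .registerView(
                InstrumentSelector.builder().setType(InstrumentType.HISTOGRAM).build(),
                View.builder()
                    .setAggregation(Aggregation.base2ExponentialBucketHistogram())
                    .build())
            .build();
    }
}
```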
A: But it's not important for this next release, because exponential histograms aren't stable yet. One other thing Carlos asked was whether we could add an experimental property in autoconfigure to enable exponential histograms for OTLP export, and I think the answer is yes, we can do that.
A: That would depend on the metric reader default aggregation PR that I talked about, but once that gets in, we could add a property. That said, if you're using OTLP, prefer exponential histograms as your histogram instrument type; that's been discussed at the spec level, and I think it's going to happen at the spec level. It's just waiting for exponential histograms to become stable.
A: Because New Relic is interested in exponential histograms too. Yeah, but I think...
C: For many experimental flags that is still a reasonable bar. Not all of them, of course, especially ones that are more Java-specific or something, which this one isn't. So I think starting with the PR in the spec first; it'll still take four months or something, but at least that gets something going.
A: Actually, I think it could get added to the spec before exponential histograms become stable, because there are environment variables mentioned in the spec about exemplar filtering and exemplar sampling, and exemplars are still unstable, so there's kind of a precedent.
B: It can go faster, yeah. I saw there was an issue; someone put an issue into the OTEPs about monitoring for machine learning models, and I'm like, hey.
B: It'll be at least nine months before anything starts moving on that. That was funny, because part of our product is monitoring for machine learning models, so we're definitely interested, but I don't think it's going to move very fast. Sounds pretty cool, though. Yeah, it is cool. The idea is basically that you want to be able to monitor the quality of how the model is doing over time, so you can know whether the model is still performing the way it did in training.
B: Yeah, exactly. Was there some discussion... oh, you wanted to talk about the event API, Jack, but I just want to bring up something before that. So today we needed HTTP client instrumentation at Virta, and we're using the Java 11 HttpClient. There's no library instrumentation for it; there might be agent instrumentation, but we're not using the agent. So I just created a quick wrapper class around the HttpClient that does tracing.
B: It doesn't do any metrics, because we're not using OTel metrics at the moment. But do you think there would be appetite for having that be library instrumentation, even though there's no interface we can implement? It's just going to be a separate "use this traced HTTP client instead of the official one" kind of thing, if you want to do it; that's how you'd do it.
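A minimal sketch of the kind of wrapper being described, since java.net.http.HttpClient exposes no instrumentation hooks. The BiConsumer callback is a hypothetical stand-in for real span creation via the OpenTelemetry API, just to keep the sketch self-contained:

```java
import java.io.IOException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.function.BiConsumer;

// Wraps an HttpClient and reports each request's duration to a callback.
// A real implementation would start a CLIENT span before the call, record
// status on it, and end it in the finally block.
public final class TracingHttpClient {
    private final HttpClient delegate;
    private final BiConsumer<HttpRequest, Duration> onComplete;

    public TracingHttpClient(HttpClient delegate,
                             BiConsumer<HttpRequest, Duration> onComplete) {
        this.delegate = delegate;
        this.onComplete = onComplete;
    }

    public <T> HttpResponse<T> send(HttpRequest request,
                                    HttpResponse.BodyHandler<T> handler)
            throws IOException, InterruptedException {
        long start = System.nanoTime();
        try {
            return delegate.send(request, handler); // delegate the actual call
        } finally {
            onComplete.accept(request, Duration.ofNanos(System.nanoTime() - start));
        }
    }
}
```

A real version would also have to copy the HttpRequest to inject trace-context headers, since HttpRequest is immutable once built, which is part of what makes this awkward without hooks.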
B: Yeah, it's a shame they don't provide hooks, like callbacks, for instrumentation; that would be very handy. "You should just migrate to a different HTTP client," yeah. I think we're actually trying to reduce our dependencies as much as we can. Especially because we deploy into our customers' data centers, we want to keep the surface area of scanning that we need to do as small as possible, and it's already huge.
B: But anything that makes everything we do a little bit easier helps. So within the next week I may put in a PR, at least to have something to talk about. Yeah, that'd be interesting. Do you think the instrumentation API would be useful in this case? Because it's just going to be a wrapper with some really simple span creation. I wonder... still, the HTTP semantic...
C: Conventions and stuff, right. Yeah, that's fair. There are probably already attribute getters for the Java agent instrumentation, so you could move them to the library one and reuse them in the Java agent. You wouldn't be able to reuse the wrapper, but you'd still be able to use the instrumentation hooks. Cool.
B: No, I mean, it's so simple. But I will actually try that out as soon as I have some time, to see if it'll work. I mean, I'm sure it'll work; the question is whether it's awkward or whether it's easy enough.
A: That would be good, if at all possible.

B: Yeah, although this is a bummer, because you can't just replace it. I mean, I guess we could, if we have the getters, the stuff that pulls the attributes off the request and the response; that'll be the reusable piece. Exactly, yeah. Even if what we actually have to give people is something that isn't an actual HttpClient, it's as good as we can do.
B: HttpClient is an abstract class, so it could possibly still be an HttpClient. I'm not sure how easy it is to actually implement, though, because I'm not doing that right now; I'm not extending it.
B: For our stuff, the calls look the same and it doesn't matter. Yeah, it'd be nice if we could. I'll try it, but I don't know how easy it is.
A: Yeah, so the two approaches that were discussed, and were being debated for their relative merits, were a unified API serving both use cases, emitting events and the log appender API use case, and, second, separate APIs for each of those. In the log SIG there was disagreement about which direction to go.
A: On one side were Tigran and this other guy, Santosh, who were in favor of a unified API, and I think the rest of the group for the most part was either neutral or in favor of separate APIs. That was discussed in the OTEP as well, but because there wasn't consensus, Tigran said that as a TC member he could break the tie, and he was the TC member who was there, so he broke it. It wasn't really a tie, of sorts.
A: It was just a lack of consensus, and so he made the call that, for the OTEP to go forward, they were going to go with the unified approach. It doesn't set it in stone, because things can still be debated, and will be debated when they try to make that PR to the spec, but that's the conclusion.
A: ...than what I had done in the experimental PR? I demonstrated both; I had two prototypes, one for each of those situations, and those prototypes were actually used to debate their relative merits. So when it's unified, what exactly does that mean? It means the language is that there's a logger provider and a logger.
A: So from your OpenTelemetry instance you get a logger provider, and from the logger provider you get logger instances, and the logger instance has two APIs in it: an API to build events and an API to build more generic, lower-level log records.
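A rough sketch of the unified shape being described, with hypothetical names: the spec API was still being debated at this point, so the interfaces and the in-memory implementation below are illustrative only, not the proposal itself.

```java
import java.util.ArrayList;
import java.util.List;

public class UnifiedLoggerSketch {
    // The single Logger exposes both entry points discussed above.
    public interface Logger {
        EventBuilder eventBuilder(String eventName); // higher-level: named events
        LogRecordBuilder logRecordBuilder();         // lower-level: raw log records
    }

    public interface EventBuilder {
        EventBuilder setAttribute(String key, String value);
        void emit();
    }

    public interface LogRecordBuilder {
        LogRecordBuilder setBody(String body);
        void emit();
    }

    // Trivial in-memory Logger: records a line per emitted event or record,
    // just to make the two-entry-point shape concrete.
    public static class InMemoryLogger implements Logger {
        public final List<String> emitted = new ArrayList<>();

        @Override
        public EventBuilder eventBuilder(String eventName) {
            return new EventBuilder() {
                @Override public EventBuilder setAttribute(String k, String v) { return this; }
                @Override public void emit() { emitted.add("event:" + eventName); }
            };
        }

        @Override
        public LogRecordBuilder logRecordBuilder() {
            return new LogRecordBuilder() {
                @Override public LogRecordBuilder setBody(String body) { return this; }
                @Override public void emit() { emitted.add("log"); }
            };
        }
    }
}
```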
A: Yes, I voted to go in the opposite direction. Tigran's primary argument, the one he thought was the strongest, was consistency with the other signals.
A: This would be a new pattern, having two kinds of API entry points with one SDK implementation behind them. And then there is history of using loggers to log events, especially at Microsoft, in the .NET area, and he brought up a couple of examples of that.
B: So it doesn't really match well with Java terminology, but could we get away with, in our Java implementation, something like our metrics builders? They're kind of interesting and different, and you can get instruments that are for longs or for doubles. Is there a way we could do something similar to get a logger for events or for log appenders? Like, maybe make the switch so that you say: I really want the event one. This thing will do both, but I really just want it to generate events, or let me generate events, potentially.
A: It is still a small API surface area, I will say that. From a logger you say eventBuilder and pass in the name of your event, and then you just say build, or emit, something like that, and you can optionally use the builder pattern to set additional properties when you want to. There are only two APIs on the logger, an event builder and a log record builder, so it's not a ton to get bogged down in. It's not like...
A: So once the OTEP goes in... I kind of built the original prototypes that were driving that conversation, and it would just feel a bit weird if I made a prototype that was different from one of the original two from the conversation. I'm wondering if one of you two could propose that alternative API instead.
B: Yeah, when I see that go into the spec, I'll certainly bring it up.
A: Okay. There's not a lot of surface area, so it shouldn't be too big of a lift.
B: Like, for example, our doubles-versus-longs thing doesn't show up in the spec at all, but I don't think anyone has a problem with it. Well, Bogdan had a problem with it, but that's all right.
A: Originally I had described the point of the APIs as providing a language-agnostic, similar coding experience for telemetry, and I can't say that anymore. That's just not the case, despite the specification.
B: ...are different. I don't know. Do Go people think that... I mean, I know Tyler's worked super hard to make the Go APIs feel idiomatic, but I have no idea. As far as I can tell, Go is just a lot of error handling; pretty much all it is, as far as I can tell, is every other line is error handling. So I don't know.
B: And the package names, the package names for instrumentation especially, are really crazy in the Go instrumentation.
C: Yeah, so then there was the other question someone asked, about sharing the managed channel with other gRPC clients. I'm pretty sure that's something we're not too interested in supporting, just because it makes us rely on gRPC library implementation details.
A: Yeah, I kind of figured we would go ahead with what you were describing, where you only use the gRPC-based transport if you call the deprecated setChannel method.
C: We're still removing that someday, so I don't think it's really a great answer to say "you can use this, but it might be gone someday"; it's better to just say we don't support this. And I think even in practice you wouldn't want to share a connection between your telemetry and your business logic, because if your business logic connection fails for some reason, you lose your telemetry, or vice versa. It can cause problems, so I can also bring that point up.
A: Well, if we go down the direction you're describing, where we use the OkHttp-based implementation by default all the time and you have to jump through hoops to use the gRPC one, then there is no channel sharing like that. That's not even a thing, exactly.
C: And then, if the gRPC implementation went away, then again there is no channel sharing. So if we really wanted to support this, we'd have to maintain the gRPC implementation, and we'd probably end up keeping setChannel for 2.0 as well, and we probably don't want to do that.
B: That's just me: what do you use for RPCs? We use gRPC, but I don't like it. I don't have to like it. Actually, we're doing some interesting things; we may switch over to using NATS internally for all of our RPC use, the request-reply stuff that NATS provides.
B: Yeah, it's a super-high-performance, lightweight message queue. Well, by default it's a topic, it's pub/sub, but there's JetStream on top of it, which has at-least-once semantics. But built into the basic NATS protocol is this request-reply mailbox mechanism, and you can basically completely implement async RPC using it, which is pretty cool.
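An in-memory illustration of the request-reply mailbox idea described above: a request publishes its payload on a subject together with a unique reply-to "inbox" subject, and the requester completes when a message arrives on that inbox. This sketches the pattern only; it is not the NATS client API, and a real NATS setup would do all of this over the wire.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class MailboxBus {
    public static final class Message {
        public final String payload;
        public final String replyTo; // null for plain publishes
        public Message(String payload, String replyTo) {
            this.payload = payload;
            this.replyTo = replyTo;
        }
    }

    private final Map<String, Consumer<Message>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String subject, Consumer<Message> handler) {
        subscribers.put(subject, handler);
    }

    public void publish(String subject, Message msg) {
        Consumer<Message> handler = subscribers.get(subject);
        if (handler != null) handler.accept(msg);
    }

    // Async RPC on top of pub/sub: create a one-shot inbox, publish with
    // replyTo pointing at it, and expose the eventual reply as a future.
    public CompletableFuture<String> request(String subject, String payload) {
        String inbox = "_INBOX." + UUID.randomUUID();
        CompletableFuture<String> reply = new CompletableFuture<>();
        subscribe(inbox, msg -> {
            subscribers.remove(inbox); // one-shot mailbox
            reply.complete(msg.payload);
        });
        publish(subject, new Message(payload, inbox));
        return reply;
    }
}
```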
B: I mean, certainly what we have doesn't need to be super high throughput, but I think our CTO was saying that when he was at Nvidia they used it at very high throughput, like millions of messages a minute, without any problems at all, with low latency.
A: What do you get out of using something like that? Because in gRPC you have the code generation piece: you describe your interfaces and you generate stubs for your methods, and so clients can call those methods and servers can implement that stub and respond to them. Do you get any of that tooling out of this framework? Because if you don't, it feels like you could just build your own little RPC thing.
B: Yeah, at the moment our plan (we actually haven't done this yet; we're still just talking about it) is to use our regular protobufs and have a consumer do the processing, rather than having to have a gRPC service. It's certainly easier to configure NATS than it is to configure your gRPC channels; I'll say that much.
A: Kind of like Anuraag's approach of, you know, re-implementing gRPC without gRPC, yeah.
B: No, but we really want the event bus part of it. We're going to be using it as an event bus, and so if we can also use it for our RPC, it means we have one less technology to worry about. In for a penny, in for a pound. I don't know, we're not sure how it's going to work out. We're a startup; we get to play around with stuff.
B: NATS also has a key-value store you can run on top of it, which I really don't understand how it's implemented, but yeah, including being able to get listeners for changes to the values for a given key, which sounds a little crazy. I was going to play around with that just for the heck of it. It seems like a terrible idea, but I don't know, it's worth...
A: I don't know if it's a terrible idea, but it's problematic, because the JDBC drivers are all synchronous, and if the pub/sub is this asynchronous listener thing, you have to use a different JDBC driver that supports asynchronous access, and you're getting off the happy path fairly quickly. Yeah.
C: Yeah, I've been helping my friends port some jobs from Rails to Go, and so I was writing SQL, and I always wished... they're using MySQL, MariaDB or whatever. With Postgres you can pass an array to an IN; with MySQL you just have to generate the placeholders yourself. I have to generate N question marks in my code, based on the N items I'm putting in, and figure out how to add N question marks to my string. Yeah, wow.
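The N-question-marks chore described above is at least short in Java; a tiny helper of the kind you end up writing for MySQL-style drivers (the query in main is illustrative):

```java
import java.util.Collections;

public class SqlPlaceholders {
    // Build "?, ?, ?" with one placeholder per IN-list item, since JDBC with
    // MySQL/MariaDB cannot bind an array to a single "IN (?)" parameter.
    static String placeholders(int n) {
        return String.join(", ", Collections.nCopies(n, "?"));
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM jobs WHERE id IN (" + placeholders(3) + ")";
        System.out.println(sql);
    }
}
```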
B: Well, I mean, the company's nearly three years old, so we have stuff now. We're just at the point where we think we really need an event bus, because all the monitoring we're doing needs more event-driven processing. So it makes sense to start thinking about an event bus.
A: I'll merge the other minor ones that I opened, the annotations and updating the dependencies. Cool, cool.
A: You all had your own distribution of the Java agent, right?
A: 1.15 has one important bug fix: you can now have multiple metric readers and it doesn't blow everything up. Oh yeah.
B: Cool. Well, we're just using bare Prometheus at the moment, so I haven't had to worry about it yet. We just need a Prometheus shim, that's what we need. Who's going to build the Prometheus shim, so you can use the Prometheus APIs to generate OpenTelemetry metrics? I'm not going to write that code.
C: So I wonder why we chose our default buckets the way we did. They get very big very fast, and so I couldn't get any precision on the smaller numbers. Normally our RPCs are in the hundreds of milliseconds or something, but we only have like one bucket for that. Interesting. That was a bit surprising; I was like, wow, when I actually tried to use these buckets.
B: You write fast code, that's what it is; the defaults are in the seconds range, and normal people write slow code.
B: Yeah, I don't know. I think we got those buckets from... I thought those were the default Prometheus buckets, actually. I thought that was what motivated choosing those numbers, but I don't know. Yeah.
B: Yeah, maybe. I mean, maybe again Prometheus is geared towards the people who write slow code, not the people who write fast code. I mean, if you write...
A: Are you feeling overwhelmed or anything like that? I don't think so. I think you two have been doing great at giving me feedback on PRs when I've opened them. Things are just going to go a little bit slower, and we're not going to have as many independent brains coming up with the thoughts of what needs to move forward and how to do it. But you know, I'm doing...
B: Yeah, absolutely. My CTO was all in favor of me contributing the Java 11 HTTP client instrumentation, so he's at least in favor of me keeping in touch with the community, so that's good. Great, yeah. Whenever my builds are running in Jenkins, I go over and look at what's open in OpenTelemetry for me to look at.
B: I have to use a different user, which is very, very confusing, because Virta made me have my own user for compliance. So it's annoying, but...
A: You're still logged in; you just can't do the command-line stuff with it, yeah.
B: I've just been doing it for a long time, so it's what I'm in the habit of: having Canary there for my personal stuff, yeah.
B: It's separate; there's no crossover. I mean, Chrome profiles are supposed to be the same thing, but they're separate, yeah. It's just how I keep a whole separate instance of the thing. Of course, the issue with Chrome Canary is that, I don't know, one update in ten just makes it completely unusable. It's been pretty stable lately, I haven't had too many problems, but occasionally I get one where they make some major change internally to the JavaScript engine or the rendering engine, and it's like...