From YouTube: 2022-10-06 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
D
D
C
Yeah, and it's maybe not quite as big as the number of bullet points there, but I wanted to remember what I wanted to talk about, so those are notes for me. We have a use case, a couple of use cases, but primarily on Android, where we are getting span data generated, let's just say hand-wavingly, elsewhere. In our particular case it's in React Native code, but living within Android, and we're surfacing those spans to the Android SDK for export.
C
The current challenge is: there's no real way to propagate... propagate is the wrong word; there's no way to fabricate spans with an existing or pre-created trace ID and span ID, because those are auto-generated inside of span creation in our SDK, in the Java SDK.
C
So the other use case would be something similar to a log replay: if you had done a bunch of tracing and had a log file that contained all of the span data, and you wanted to write Java code that would then read this log file (I'm using "log file" loosely to mean a text file containing some sort of marshalled or serialized span data), you can't actually create spans with the same trace IDs and span IDs using the SDK today, or at least there's no real good way of doing that.
C
So there is this DelegatingSpanData, and I was mainly just bringing this topic up hoping that people have ideas about this, or other ways of working around it that aren't as hacky as what we've currently done.
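For reference, a minimal sketch of how the DelegatingSpanData mentioned here could be used, assuming it is the io.opentelemetry.sdk.trace.data.DelegatingSpanData helper: record a span normally, then wrap its SpanData so that the exporter sees the externally generated trace and span IDs. This is illustrative only, not the workaround actually in use; the wrapper class is hypothetical.

```java
import io.opentelemetry.api.trace.SpanContext;
import io.opentelemetry.api.trace.TraceFlags;
import io.opentelemetry.api.trace.TraceState;
import io.opentelemetry.sdk.trace.data.DelegatingSpanData;
import io.opentelemetry.sdk.trace.data.SpanData;

/**
 * Hypothetical wrapper: reuses a recorded SpanData but reports the trace ID and span ID
 * that were generated elsewhere (e.g. in React Native) instead of the SDK-generated ones.
 */
final class PreassignedIdSpanData extends DelegatingSpanData {
  private final SpanContext replacementContext;

  PreassignedIdSpanData(SpanData delegate, String traceIdHex, String spanIdHex) {
    super(delegate);
    this.replacementContext =
        SpanContext.create(traceIdHex, spanIdHex, TraceFlags.getSampled(), TraceState.getDefault());
  }

  @Override
  public SpanContext getSpanContext() {
    return replacementContext;
  }
}
```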
D
So is that true? I kind of think that when you create a span you can't give... I know you can do something like grab the span context from a span, yeah, create a trace ID.
D
E
Or something, yep. So all you really need to do is be able to parse one of those marshalled spans and shape it into a SpanData, right?
C
E
So it sounds like what you're looking at... so SpanData is just an interface, right? Yep. And we don't provide any public, out-of-the-box implementations of that interface that aren't internal, and that seems to be the root of the issue. But yeah, I think that's kind of intentional. We have TestSpanData, which is like a builder pattern around populating the various fields you might need on a SpanData. I'm assuming you don't want to use that because it has "test" in its name, right?
C
It's kind of not what it was designed for, and I agree that the decision does seem very intentional, and I kind of remember: this is an opinionated and very tightly locked-down API by design. I get it. I'm just wondering if there's, I don't know, some other... well.
E
The easiest way to do it would be to do what TestSpanData does, but in your own library. So just have an AutoValue implementation of SpanData; that's a really small amount of code and it provides the builder setters that you would need.
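For illustration, a sketch of the shape of that code using the existing TestSpanData builder from the opentelemetry-sdk-testing artifact; an in-house AutoValue implementation of SpanData would expose essentially the same setters. The helper class and field values are made up for the example.

```java
import io.opentelemetry.api.trace.SpanContext;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.TraceFlags;
import io.opentelemetry.api.trace.TraceState;
import io.opentelemetry.sdk.testing.trace.TestSpanData;
import io.opentelemetry.sdk.trace.data.SpanData;
import io.opentelemetry.sdk.trace.data.StatusData;

final class SpanReplay {

  /** Rebuilds a SpanData from fields parsed out of a serialized span, keeping the original IDs. */
  static SpanData fromParsedFields(
      String traceIdHex, String spanIdHex, String name, long startEpochNanos, long endEpochNanos) {
    return TestSpanData.builder()
        .setSpanContext(
            SpanContext.create(traceIdHex, spanIdHex, TraceFlags.getSampled(), TraceState.getDefault()))
        .setName(name)
        .setKind(SpanKind.INTERNAL)
        .setStatus(StatusData.ok())
        .setStartEpochNanos(startEpochNanos)
        .setEndEpochNanos(endEpochNanos)
        .setHasEnded(true)
        .build();
  }
}
```

The resulting SpanData can then be handed directly to a SpanExporter's export method.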
D
Can you give us that big picture again? I didn't understand why it's not... at what point are you buffering to disk? Is this for, like, a network-is-down kind of thing that you want to... yeah.
C
I mean, we have two use cases, and the buffering to disk is one of them. Right now we're exporting via Zipkin, so we have hacks in the Zipkin exporter to do this, but that's not our long-term plan.
C
Our long-term plan is to not be on Zipkin forever, right? So eventually we're going to have OTLP spans of some kind. But yes to buffering to disk, for the purposes of the network being unreliable and still being able to do tracing within the app when the network is unavailable, which is pretty common for mobile. And then the other use case is an embedded framework; our specific immediate use case is React Native, where you have JavaScript code running inside an embedded chunk that is just kind of opaque to the Android SDK. But the JavaScript is being interpreted and running, and as it runs it's doing tracing, and it could be making network calls.
C
It could be doing all kinds of things, but it ends up generating traces that have trace IDs and span IDs in the React Native code, and those get surfaced (or one approach is for them to be surfaced) back to the Android Java code for export. In doing that, we have to make sure that we maintain those trace IDs and span IDs; we can't create new ones.
C
Does that... yeah. So, because we already have an SDK, a framework in Java to support Android that does a bunch of other stuff, we want to be able to export those using the Java exporters and not have to implement or re-implement React Native exporters. So.
E
Then you have the Java code that maybe uses an OTLP JSON exporter that exports to essentially some file location, and you have the JavaScript SDK doing a similar thing. So both of them are writing to files, and then some centralized code reads from those files and sends them off over the network when the network's available.
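A rough sketch of the Java half of that idea, assuming the OtlpJsonLoggingSpanExporter from the opentelemetry-exporter-logging-otlp artifact. It emits each batch as OTLP JSON through java.util.logging, so landing the output in a specific file would mean attaching a JUL FileHandler to that logger; that handler wiring is the assumed part and is not shown.

```java
import io.opentelemetry.exporter.logging.otlp.OtlpJsonLoggingSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

final class BufferedExportSetup {

  static OpenTelemetrySdk create() {
    return OpenTelemetrySdk.builder()
        .setTracerProvider(
            SdkTracerProvider.builder()
                // Each finished span is serialized as OTLP JSON and emitted via java.util.logging;
                // a separate "uploader" piece can read the resulting file and forward the data
                // once the network is available again.
                .addSpanProcessor(
                    SimpleSpanProcessor.create(OtlpJsonLoggingSpanExporter.create()))
                .build())
        .build();
  }
}
```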
C
D
So, for buffering around network-failure issues, it feels like it would be ideal to potentially just take the protobuf and write the protobuf to disk.
C
I think so, because then you can... I mean, one of the goals there being to keep the marshalling simple and inexpensive.
D
It seems like a potentially interesting thing to explore for broader use cases. In our distro, in our exporter, we do fall back to disk storage when the network is down, and we do something different: of course it's not OTLP protobuf, but our own transmit format. We just write that straight to disk so that we can read it and pipe it out later.
E
You can do that today with OTLP using internal classes; there are marshallers for the metrics, logs, and traces.
E
C
Yeah, I'm just a little bit surprised that no one has tackled a use case similar to this. I know that there are a bunch of people that have telemetry flowing through other systems, like pub/sub systems or Kafka or whatever, and they just want to export those, and I'm surprised no one's kind of...
E
Well, I think it's kind of done, in a way. I think that's what the OTLP JSON logging exporters are for: you're logging out this stuff in a serialized format, and you could have a separate process reading those logs, reinterpreting them, and sending them somewhere. Yeah.
C
E
Gotcha, but the Collector does that, right? Yeah, and for what I just described, you're typically not doing it on a mobile device.
C
B
B
C
E
C
Yeah, it's interesting. We're in the early stages of donating the Android work anyway, so having that component be reusable seems like a good thing.
D
All right, yeah: let's talk about the 1.19 release.
B
Yes, and we also have Emily on the call; she's coordinating the upcoming MicroProfile 6 release, and we wanted to include this latest one, this upcoming 1.19 release, as part of it. But the dates are quite tight, and that's why we are concerned about getting these through sooner rather than later.
D
So, a few weeks back, we finally nailed down what our precise release targets are: in the Java repo, it's the Friday after the first Monday of the month.
D
So that means tomorrow is our target. Now, whether we hit that target or not is a different question, but we'll come back to that. And then, I'm guessing that's probably the only... do you care about the instrumentation repo release?
D
E
F
Well, hi. This is... well, what Bruno is asking is: are you usually ready to perform the release, or do you need to wait for Friday, or do you need to double-check some other things?
E
E
Chicago, Central Time, so I think that's UTC minus five.
B
D
Yeah, we may... I'll try to get the instrumentation release out on the Tuesday, which is probably still too late for you, but there are several kinds of updates we have in 1.19 that affect the instrumentation from the SDK repo, with the logger and logging APIs. So I think Tuesday or Wednesday is probably realistic.
D
So, Jack, let's talk about the 1.19 release. Is there anything that you need approved before then?
E
No, there's nothing critical. There's a handful of PRs that are outstanding, but none of them are particularly important. I went through and made sure that John got eyes on all the important ones earlier this week. So as far as I'm concerned, I'm good to hit the release button; I just want to stick to the schedule unless there's a really good reason not to.
D
All right, Bruno again.
B
So the Java world is moving from the javax package to Jakarta, and the upcoming releases of Spring 6 and Quarkus 3 will be based on the packages from Jakarta.
B
D
We do already have instrumentation for Servlet 5, the Jakarta namespace.
D
So is there something specific that we're missing at this point?
B
D
Why do we... the @Nullable, you mean the javax annotation?
B
Oh no, but when I say Jakarta, it's like everything that starts with javax is going to be replaced by Jakarta, no?
C
B
F
So, to add more context: JSR 305 was not moved to Jakarta, Bruno, I think. Maybe there is a bit of confusion, because Jakarta also has a @Nullable, and that's being added to the common annotations as a new annotation. So that might be a source of confusion, but JSR 305, I think, didn't make it to Jakarta.
F
But I'm not sure, for this @Nullable, whether that is a build-time check or a runtime check. Does anybody know?
D
So we include it, the JSR 305 one, as a compile-only dependency. Okay.
F
So it's like, if you wanted to, you could move this one to use the Jakarta style; it's a similar thing, basically also compile-only. So that would be from the Jakarta common annotations spec, okay.
F
Yeah, maybe at some point. I mean javax, as Bruno said, is pretty much stable, frozen, so it's practically a dead end if you couldn't move to the alternative, but the Jakarta namespace is, I think, the way forward.
D
Cool, yeah. I mean, I don't think there's any reason why we have to use one or the other, and I don't think it should be a breaking change to swap them out, since it's just a compile-only annotation dependency.
F
D
F
So that's why... Eclipse, yeah, it didn't move to the Eclipse Foundation.
D
Anything else, Bruno, that you are aware of that we need to be thinking about for javax, for Jakarta?
B
Well, not yet, but I promise that when I migrate my extension to Jakarta, if everything blows up, I'll let you know.
D
D
B
D
Okay, I see, so they're not using code owners, they're using the component owners.
D
Yeah, I think this is a great idea. Code owners is problematic just because it requires giving them write permission to the repo, but the component owners approach is great. I think we should... I'm thumbs up on that.
B
D
E
I don't have huge opinions on it, so I kind of gave the directional, high-level opinions in my initial code review.
E
There's a lot of code there, so I've got to be honest: I don't think I have the time to go through every file and get into the nitty-gritty details; there's just too much. But the place where it's at now, I'm aligned directionally with it. So.
D
There was one question about the... let's just go through maybe the open comments.
D
So this one, the library readme. Peter, I was curious about this also: in the library, can this be used outside of the Java agent?
G
Not in its current form, no. So yeah, it's relatively... I would say it's the first step to solve the ticket that was opened, and the ticket explicitly wanted to have this functionality embedded in the agent, so they don't want to run anything else. But it is pretty independent. I would say the only thing from the agent that we use is the SDK functionality to collect and export the metrics themselves.
G
Yeah, so thanks, guys, for reviewing. Based on this review I refactored it, and I think it now forms two libraries and one Java agent component.
G
The two libraries are: first, the JMX engine itself, which is responsible for detection and discovery of MBeans, checking conformance with expectations, and reporting the metrics; and the other library is for dealing with the description of the metrics, the rules, in YAML format. And of course in the agent there's a very simple piece that glues it all together; it just starts the whole thing, just one class is there.
B
A
E
Using it as standalone library instrumentation, so just...
G
D
So let me kind of give an overview of runtime metrics and of how we generally split out library and Java agent instrumentations.
D
D
G
Right, yes, so that's kind of what I inferred from looking at the code. The only dependency for the JMX engine currently is the compile-time dependency for the metrics, for dealing with metrics, so yeah.
D
And then the rules here wouldn't be accessible via the library instrumentation today, but potentially in the future that could be moved over into a library component.
D
And what is the... what's the goal of splitting these into two parts? Well...
G
G
G
G
G
D
D
...of our instrumentation just has the two modules: the Java agent and the library instrumentation.
D
G
E
I'd say keep it as a single library, unless the need presents itself to be able to use the JMX engine portion and explicitly not have the YAML portion, because that's where there would be an advantage: if there's a situation where there's a problem from also having the YAML present.
D
G
So, with respect to Kafka: because there is some duplication, we removed the Kafka producer and Kafka consumer metrics from this pull request. They were originally there. You may want to discuss this topic later and add them back, or forget about them. It's... yeah.
A
Yes, I was even thinking less about what's in here. The OpenTelemetry Collector has a Kafka JMX sort of mode, or receiver, that you can configure, and I'm wondering whether this one could sort of replace that one. You'd get sort of extra metrics, because we're hooking into Kafka itself as well as getting the JMX stuff, so you don't need to run the Collector at all. Basically just wondering if...
B
C
E
Does the Collector's Kafka receiver use the... does it use the, what is it, the JMX metrics gatherer from contrib?
E
E
I don't know. Maybe it makes sense to... the JMX metrics gatherer is a standalone process, right? So maybe it makes sense to pull out its internals, process the metrics, and bridge them over to the OpenTelemetry metrics SDK with the component that we're discussing today. So this new...
G
Well, so the JMX metrics gatherer: yes, it is a separate process, and from a technical point of view it is more heavyweight because it pulls in all the Groovy runtime stuff. As for leveraging its code directly, we played with this idea, but we decided not to, especially since it needs to get the MBeans through the remote interface, not through the local interface. So there would really be very few things that we might pick up from it.
D
G
D
We discussed this maybe two weeks ago. I think one idea that had come up was being able to reuse this code and point it at a remote JMX server, and use that to completely replace the JMX metric gatherer.
E
Yeah, and if that was the case, then the Collector's Kafka receiver would be consistent with the Kafka metrics that are bridged by the component that you've just recently contributed, and so we'd have consistency regardless of whether you're collecting them from the agent or via the Collector.
G
D
B
Can we create... we can create a future issue to refactor these and align them with what we are saying here.
G
E
D
I think we're all in favor of rewriting this and getting rid of the Groovy, so it's very encouraging to have a potential path forward to remove the Groovy stuff.
D
Let me just see if there was... oh yeah, so I think then the readme, where did that end up? And then...
D
Okay, the readme ended up just in the Java agent. Okay, yes, okay, cool. So I think that's... I mean, I think it's fine if the library instrumentation isn't fully usable independently in the first round, but in the future we would want to have something that people could use independently, and the readme would, I guess, live over there or some place common.
G
D
Yeah, one other option here, from an artifact perspective: generally, if the library isn't usable on its own, we just throw everything in the Java agent module (that can still be organized by package) and then we can pull it out as needed. But that's not a terribly strict rule, and especially since we're planning on making it a usable library module anyway, I think it's fine either way.
G
D
Oh yeah, I liked this idea that you could potentially build the configuration programmatically, not only through YAML. I'm kind of interested in that for our distro: being able to programmatically configure some JMX metrics to collect.
E
B
D
I would... yeah, I totally got why you wanted to use different terminology for MBean attributes versus metric attributes, but I really hesitate to deviate from the standard, blessed OpenTelemetry term, because it used to be called "label" in OpenTelemetry in an early metrics draft.
E
I agree. We could add "otel_" as a prefix to it to disambiguate, because it does seem like the word "attributes" is overloaded here; it means two things in two different contexts. I understand where you're coming from, Peter, from a confusion standpoint.
D
D
Oh yeah, for the naming, I think it's okay to give it... I mean, it kind of makes sense, since there is mind share there: people know what the JMX metric gatherer is, and so we're trying to explain this as something comparable to that, as long as it's just in the documentation, I guess, as opposed to in the code, maybe.
G
D
B
D
D
Okay, cool. I'll keep plugging away at reviewing, and we'll see where we are early next week, whether it's something we can land in the release or not.
F
I was just wondering, because I have been searching and trying to look for which otel spec version is used per release, like in 1.18. Where is the best place that documents, okay, this release really is aligned with otel spec 1.13? Because I have been digging through... Do you think it makes more sense to...
B
F
...have that in the readme for otel Java?
E
Yeah, I think... I don't think we do that today. The closest thing we have is the semantic conventions module, which we update to align with a particular version of the specification, but that's hidden pretty deep in the internals, so it's not obvious. So yeah, I think it could make sense for us to say what version of the specification we're currently aligned with.
C
C
C
D
Yeah, for now. Oh...
F
Yeah, the other thing is: so you added it to the changelog, the... oh, okay, so that really is a changelog. Okay, so I was thinking readme, but the readme is for the current branch, right? Yeah.
B
E
Stefan and I are talking about something in the chat that I think is interesting, related to our prior conversation.
E
We were talking about the consistency of the Kafka broker metrics between the Collector and JMX, and it doesn't appear that the Kafka metrics receiver uses the JMX gatherer. There is a JMX receiver, which manages that subprocess, the JMX metric gatherer process, but that doesn't appear to be the case for the Kafka metrics receiver. Yeah.
E
I'll include a link, or Stefan already included a link to it in the chat. It's a receiver for the Collector.
A
It's more of a curiosity, I think; it's not something... I think I'm not tying it to this ticket, right? But it's interesting to see whether this becomes a replacement for that: if you can plug this into Kafka, you get sort of JVM metrics and all the goodies from the instrumentation, but you also get sort of everything you get from the normal Kafka receiver out there. So you can almost, like, run without the Collector, right?
E
Yeah, exactly, that's where I was coming at it from: if these are two different code bases that gather the same data, we should try to have them aligned in terms of how they shape that data.
E
So we'll just have to keep that in the back of our heads. I don't think there's anything to do about it now, but at some point we should do that analysis and make sure they're synced. No, I...
A
...think it's... I think this JMX stuff is really cool, to be able to support things like this, because you'll be hooked into the process and you don't have to manage one more thing. But you also get the ability to get not only traces of how your messages flow (like the fact that a message was in Kafka and how long it took), but you can actually have sort of local instrumentation into Kafka itself, which is...
A
If no one... I have just one quick question, more of a specification one. I was just implementing some metrics here and needed to implement a gauge, but it seems like the gauge only supports an observer. So you have a counter, which is sort of the always-increasing counter, and you have the up-down counter, but there wasn't any gauge where you can just sort of say, okay...
A
..."here's your new value", that type of thing. Because I had a use case where I know the current value, but I can't really create a lambda for it, because I'm generating it in the method where I have my counter, or sorry, where I would like to have my gauge.
E
I think Trask is navigating to this, but this has been a popular request, the idea of a synchronous gauge. It was punted on for the initial release of the metrics specification, but I think it's going to come at some point. It's been popular enough that it can't be ignored.
D
Yeah, but you're right: for now the workaround is to make your own synchronous gauge by synchronously storing the value into, you know, an AtomicLong or something, and then have your async gauge pull that value.
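A minimal sketch of that workaround against the stable metrics API; the meter and instrument names here are placeholders.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.Meter;
import java.util.concurrent.atomic.AtomicLong;

final class QueueDepthMetrics {
  private final AtomicLong lastQueueDepth = new AtomicLong();

  QueueDepthMetrics() {
    Meter meter = GlobalOpenTelemetry.getMeter("example-meter");
    // Asynchronous gauge that simply reports whatever value was last stored synchronously.
    meter
        .gaugeBuilder("queue.depth")
        .ofLongs()
        .buildWithCallback(measurement -> measurement.record(lastQueueDepth.get()));
  }

  /** Called from the code path that actually knows the current value. */
  void setQueueDepth(long depth) {
    lastQueueDepth.set(depth);
  }
}
```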
A
E
And that can be frustrating if you have a bunch of distinct values you want to record for different attributes: you might have to maintain a map from attributes to all your different atomics that represent their latest values. So it's frustrating; the SDK should really handle that for you.