From YouTube: 2022-04-07 meeting
A
B
Honorary and I discussed him doing the release during, or right after, our meeting this evening, because I mucked around with the release workflow a bunch and I'm expecting it not to pass on the first try. So I want to be available to help fix that.
C
D
Sounds like a plan. So there's one PR that I have open; I'll link to it here.
D
And so what this does: the OTLP metric exporter builder has a method called setPreferredTemporality, and that translates the aggregation temporality argument into a function that is used to implement the five-way function that determines aggregation temporality by instrument type. So this proposes getting rid of that setPreferredTemporality, which translates from an enum to a function, and instead just allowing you to set the function directly. And the reason I think we should do it:
D
The current method, preferredTemporality, gives the impression that there are only two options, cumulative and delta. Really there's a variety of options; cumulative and delta are just the only two that are specified right now, but there could be more in the future.
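A minimal sketch of the two shapes being discussed. The enum, the instrument types, the method of translation, and the exact delta mapping below are illustrative assumptions for this note, not the actual OpenTelemetry API:

```java
import java.util.function.Function;

// Sketch: an enum-based "preferred temporality" is internally translated
// into a per-instrument-type function. The proposal is to let callers set
// that function directly instead of going through the enum.
public final class TemporalitySketch {
    enum AggregationTemporality { CUMULATIVE, DELTA }
    enum InstrumentType { COUNTER, UP_DOWN_COUNTER, HISTOGRAM, OBSERVABLE_COUNTER, OBSERVABLE_GAUGE }

    // Today's shape: translate the enum into the five-way function.
    static Function<InstrumentType, AggregationTemporality> fromPreferred(AggregationTemporality preferred) {
        if (preferred == AggregationTemporality.CUMULATIVE) {
            return instrumentType -> AggregationTemporality.CUMULATIVE;
        }
        // "Delta preferred" still reports some instruments cumulatively;
        // this particular mapping is an assumption for illustration.
        return instrumentType -> {
            switch (instrumentType) {
                case UP_DOWN_COUNTER:
                    return AggregationTemporality.CUMULATIVE;
                default:
                    return AggregationTemporality.DELTA;
            }
        };
    }
}
```

The proposed shape is essentially to drop the enum translation and let callers hand the `InstrumentType -> AggregationTemporality` function to the builder themselves, which leaves room for selectors beyond plain cumulative and delta.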
D
So the spec says that there's an environment variable, and I'll grab a link to it real quick.
C
Have we implemented this environment variable yet?
D
Well, if we decided between now and the actual stable release that we wanted to change this, then there would be a small change in the contract of the OTLP metric exporter builder. And so, is that a change we could live with? Probably. I don't know; it doesn't seem significant enough that we would have to do another release candidate.
C
D
I think it's in good shape. I ran the API diff tool against the current metrics SDK and looked at the API surface area, and it seems reasonable. It's the narrowest API surface area that I think we can publish and still accomplish the requirements.
E
F
That's mine, all right. Hi everyone, in case you don't know me: I'm Dan Jaklowski. I work a lot on the collector, in particular on metric and log acquisition. So just for context on this one: the collector group has basically been debating whether or not to allow the collector to manage sub-processes of any kind, and sort of trying to define when that would be acceptable and when not.
F
Basically, I've made the case that we should keep this; I think it's very important. Others have argued that we should just not do this at all, but I think this is, you know, an OpenTelemetry component that it's running. Basically I'm here to ask for some collective wisdom on who's using this. I gather it's not super actively maintained, and obviously I know this is used in the collector, but is it being used elsewhere as a standalone collector? Does anyone know?
F
Does anyone care? This would help me understand: if we propose some changes to sort of secure the JMX receiver from the perspective of the collector, is there going to be an ability to do necessary work inside of the JMX metric gatherer, if necessary? I'm just kind of trying to feel out where this component lives in the minds of the Java community.
B
So I know that, certainly, I believe Splunk is using it; Splunk folks initially contributed this component and are very active in it. I mean, it is a pretty well maintained component.
B
And just recently Carlos from Lightstep mentioned that Lightstep wanted to use this also.
C
What repo is it living in now?
F
This component lives in, yeah, the java contrib repo. So you're... sorry, maybe I'm misunderstanding your question.
C
F
F
C
F
And I can say from observing that it is, too. So it sounds like, probably, at least from most of the maintainers.
B
Carlos also mentioned, from the Lightstep perspective, that they want to use it through the collector also, because he was curious; he was asking questions about the integration there.
F
Okay, awesome, that's really helpful. So I think what I'll do is maybe open an issue on the repo to sort of propose that that become more of the formal perspective, and if there are people out there who would really like to see this thing continue to exist as a full-fledged collector of its own, then hopefully we'll get someone to speak up about it. Otherwise, that'll be a helpful perspective to have for maintaining the two together, I guess. So, appreciate the thoughts, everyone.
D
So would you imagine getting rid of the Java component? Like, supposing that nobody else is using the Java... whatever it's called, the JMX collector, in a standalone fashion, would you imagine getting rid of that and implementing this all in Go in the collector?
F
You could all probably tell me much better than I can; I'm not a Java expert. But as I understand it, it's almost impossible to collect JMX metrics without actually running it through Java.
D
And another thing that's coming to mind: so we're going to have semantic conventions. We already do have semantic conventions on some of the data, some of those metrics that are being collected via JMX, and so there has to be some sort of translation between what comes from JMX... you know, the semantic conventions have to be incorporated in there.
F
And I'm not sure to what extent that's been considered in the design of the metric names so far. But I think everyone would want to see those aligned with the semantic conventions for sure, and formalized in the semantic conventions if they're not.
D
My understanding is that the only semantic conventions we have today in Java are for memory; we're working on other ones. And so the JMX collector will collect metrics from certain beans that are related to memory, but the translation process doesn't respect the semantic conventions today. So that would be something that this JMX collector has to do in general, and however it lands, it would have to think about that.
B
Yeah, I think Jack was mentioning last week that the JMX metric gatherer is basically auto-generating the names based on the path of the JMX object that it's reporting against. So yeah, wherever it ends up, we need to have some mapping, specifically for the things that are specced.
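The mapping being described could be as simple as a rename pass from the auto-generated JMX-path-based names to semantic-convention names. A tiny illustrative sketch; both sets of names below are made up, not the real JMX gatherer output or the real conventions:

```java
import java.util.Map;

// Hypothetical rename pass: map a JMX-path-derived metric name onto a
// semantic-convention name, falling back to the generated name when no
// convention exists yet. All names here are illustrative.
public final class JmxNameMapper {
    static final Map<String, String> SEMCONV_NAMES = Map.of(
        "java.lang.memory.heapmemoryusage.used", "process.runtime.jvm.memory.usage",
        "java.lang.memory.heapmemoryusage.max", "process.runtime.jvm.memory.limit");

    static String toSemconv(String autoGeneratedName) {
        return SEMCONV_NAMES.getOrDefault(autoGeneratedName, autoGeneratedName);
    }
}
```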
F
Yeah, I know there's some capability, because it runs Groovy scripts, and in those query scripts you can basically define the data model for the metrics you want it to emit. You sort of say: these are the beans I want to collect, and this is what they should come out like. So if that's not happening for the memory stuff right now, then I suspect we could plug into that model somehow. Although you probably aren't getting memory from MBeans... maybe you are, I'm sorry, I don't actually know.
B
Yeah, this is an interesting question to me, because certainly it's just a wire protocol talking to JMX, but how tightly that's tied to Java serialization, and how painful that would be to do outside of Java, is something I've never looked at.
B
There are some projects that expose JMX as, like, JSON over the wire, but that's not really going to be too helpful, because that requires installing something on each of the JVM apps.
B
Yeah, if there's pushback, I'd be very curious to see if there's anybody who wants this independent of the collector, because one path we have for those users, if it's a minority, is the Java agent providing something kind of similar.
B
I'm imagining at some point we would also allow people to configure the JMX metrics that they want to capture in the Java agent, so that could always be a path forward for people who don't want to use the collector, and allow that to be, you know, really focused on providing a great collector experience.
B
Yeah, thanks for joining. All right: timer. Emily, is this you? This sounds like a MicroProfile question.
G
I think my headset is not...
B
G
Yes. So basically, we look at both Micrometer and OpenTelemetry metrics; we want to support both kinds of metrics. So one thing that we saw missing in the OpenTelemetry metrics is a timer concept. This issue has been around for, like, two years; I'm wondering what the outlook is for this issue.
D
I don't... I know it's popular. I know that it has been pushed out from the initial stable release.
D
I imagine it'll be something that gets attention in the future, given its popularity, but I don't know the current status. I can ask at the next spec meeting to see what folks are thinking.
B
Yeah, that would be great. And maybe just mention that, for Java users, MicroProfile, similar to Micrometer, is kind of a critical interoperability scenario for our users.
B
G
Right. The team... Don Bond here is currently leading the effort. Currently they are working to modify the spec to make sure that the implementation can plug in either Micrometer or OpenTelemetry metrics. So they plan to have a release.
A
B
And do they need, like, an OpenTelemetry timer to actually be available already, or do they just need to know that it's, you know...
G
Yeah, they would like to have it available now, so that when they create an API, they know they can expose the timer API. Otherwise, if they expose a timer API and people bring in OTel, they can't really implement it. Because we want to make MicroProfile Metrics agnostic, so the implementation can plug in either OTel or Micrometer.
G
If we expose the timer, that kind of fulfills it, so everybody bringing in Micrometer standalone can move towards OTel.
B
So in OTel metrics today, from instrumentation, we are capturing essentially what you would consider timer values; I mean, we are capturing, like, response times. I guess we're doing that in histograms.
A
B
So would this be an option? Do you know if this would be an option, Emily, as opposed to a dedicated timer instrument?
G
So I am not 100% sure. I think maybe... could you put that on the issue? Maybe it is kind of... yeah, I'm not an expert here on that.
C
Yeah, I mean, I think that if, or when, OpenTelemetry introduces the timer, it will just be an abstraction that uses a histogram under it. There's not really any difference between the two things; the timer is just basically wrapping some sort of function with timing, using a histogram. So I think it would end up essentially being what you're saying.
D
Yeah. So, when you capture data in a histogram instrument in OpenTelemetry, you're saying that you're interested in knowing information about its distribution, and so histograms...
G
B
G
Right, okay, okay. So I can pass this information on to the team, so that would be... yeah, they can take a look to see whether that's enough for the timer.
G
Can you put a link to that page for them?
B
G
D
Yeah, it would be fairly straightforward to add that API because, as John mentioned, it's just kind of syntactic sugar over a histogram. It's just, you know, we can't implement that until the specification includes it.
G
Right, okay. We'll just take a look.
D
And then I guess the other thing that we can do by introducing the timer instrument ourselves is that we can have control over the clock. On the specification issue that talks about adding a timer instrument, Jonathan talks about some of the complexities associated with actually timing something correctly, and so we could simplify that for users by introducing this API.
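The idea in the last two turns, a timer as syntactic sugar over a histogram, with the clock controlled by the API rather than the user, can be sketched roughly like this. The class and method names are invented, and a plain list stands in for a real histogram instrument:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongSupplier;

// Sketch: a timer wraps a piece of work, measures it on an injectable
// nanosecond clock, and records the duration into a histogram.
public final class TimerSketch {
    final List<Double> recorded = new ArrayList<>(); // stand-in for a real histogram instrument
    final LongSupplier nanoClock; // injectable, so the SDK (or a test) controls the clock

    TimerSketch(LongSupplier nanoClock) {
        this.nanoClock = nanoClock;
    }

    // Record how long `work` takes, in milliseconds, even if it throws.
    void time(Runnable work) {
        long start = nanoClock.getAsLong();
        try {
            work.run();
        } finally {
            recorded.add((nanoClock.getAsLong() - start) / 1_000_000.0);
        }
    }
}
```

In production the clock would be something monotonic like `System::nanoTime`; making it injectable is what gives the SDK the control over timing that was mentioned above.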
E
B
I just thought this was a good thing for folks to know about if they're following. Would you mind giving just a brief synopsis?
A
Yeah, so this is one of the many PRs that I've made recently to the instrumentation API, and this one is interesting because it resolves a problem of us hardcoding the instrumentation version. Now if, for example, you have your own instrumentation library, of course you can use the setInstrumentationVersion method on the builder to set it programmatically in your code, but that can be kind of annoying, because usually you would want to set the same version as the one you have in your pom.xml file or Gradle build file, whatever, and propagating this from build script to actual code is always a problem. So we've resolved it by adding sort of automatic detection.
A
So now the Instrumenter API tries to determine the version based on the instrumentation name. We look for a properties file in the META-INF/io/opentelemetry/instrumentation directory, and the file is named the same as the instrumentation name. So if you have instrumentation called my-instrumentation, it's my-instrumentation.properties, and there's a single property called version. If the Instrumenter detects that there is a file like that on the class path, it will take this version and use it by default.
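The detection just described amounts to a class-path resource lookup plus a one-property parse. A self-contained sketch; the directory prefix is quoted from the discussion, so treat the exact path and class names as assumptions rather than the library's real implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Sketch: look for "<prefix>/<instrumentation-name>.properties" on the
// class path and read its single "version" property, returning null when
// the file is absent or unreadable.
public final class VersionDetector {
    static final String PREFIX = "META-INF/io/opentelemetry/instrumentation/";

    static String detectVersion(ClassLoader loader, String instrumentationName) {
        try (InputStream in = loader.getResourceAsStream(PREFIX + instrumentationName + ".properties")) {
            return in == null ? null : readVersion(in);
        } catch (IOException e) {
            return null;
        }
    }

    static String readVersion(InputStream in) {
        Properties props = new Properties();
        try {
            props.load(in);
        } catch (IOException e) {
            return null;
        }
        return props.getProperty("version");
    }
}
```

The build script would generate that properties file (e.g. `version=1.2.3`), so the version lives in exactly one place.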
B
An important motivation for this design is that in Java it's fairly common for people to build these uber jars, where they just take all the dependencies and slam them all together; in fact, the Java agent is one.
B
And so having different file names here for each one makes that just work by default. There are sometimes ways to merge files when you're uber-jarring and you reuse the same file name, but by default they don't get merged; they just get overwritten.
B
B
Yeah, and then a lot of the work in the other PRs Mateusz was mentioning is really around stabilizing the instrumentation API.
A
Yeah, I still have a few changes in mind, but the Instrumenter API mostly looks like it should look in its final version, I believe. So if you're using it, you can take a look, if you have any other requests or comments or any kind of feedback on that.
B
So the solution there is that we split out this separate artifact, so the core instrumentation API can be moved to stability sooner, and then, as, like, say, HTTP...
B
Yeah, because we think, you know, some of these are going to take a long time. Some will get stabilized sooner and some will take a long time, and there may be future semantic conventions. So this seems like a good path towards stabilizing stuff sort of as...
B
Do you want to try to summarize this? It's awesome, but I don't know how many people care quite so deeply as some of us about the bytecode details; certainly everyone, I think, cares about startup performance.
H
A lot of our matchers are such that we try to instrument all the classes that extend some class or implement some interface. But just by looking at the bytecode, it's hard to know whether some class implements, say, Runnable or some other interface, because to walk the type hierarchy you would need to parse the bytes of the class, find all the interfaces, then find the bytes for all the interfaces, check whether any of them is Runnable, and basically continue that way, which is quite inefficient.
H
So I tried to cheat a bit and make it so that when we are loading a class, before we transform it, I try to ensure that all its super types are already loaded, so we can efficiently check whether the superclass hierarchy contains some type.
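Why loading the super types first helps: once a class is actually loaded, checking its hierarchy is a cheap reflective walk (or simply `target.isAssignableFrom(type)`), instead of fetching and parsing the bytecode of the class and every ancestor. A minimal sketch of that walk (not the agent's actual matcher code):

```java
// Sketch: the hierarchy check that is cheap on loaded classes but
// expensive if you have to reconstruct it from raw bytecode.
public final class HierarchySketch {
    static boolean hierarchyContains(Class<?> type, Class<?> target) {
        if (type == null) return false;
        if (type == target) return true;
        // Check every directly implemented interface (and their parents)...
        for (Class<?> iface : type.getInterfaces()) {
            if (hierarchyContains(iface, target)) return true;
        }
        // ...then continue up the superclass chain.
        return hierarchyContains(type.getSuperclass(), target);
    }
}
```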
H
B
Yeah, I think it's a great hack; that's super. I had done something similar, but not quite as good, in Glowroot.
H
Of course it has some limitations: it doesn't work for classes in the bootstrap class loader, because we can't intercept the loading there and can't do this superclasses-first trick.
H
We could add some other strategies, like maybe combining the matchers, so that...
H
Currently, we run the type matcher first and then the matchers for the individual advices. We could run them in a different order for some classes, in the hopes that the advice matchers would filter out more stuff, if that's needed.
H
One problem is that we have a couple of matchers that are more complicated. They don't just look at the superclass name; they look at some annotations on methods in the superclass, and this kind of circumvents this strategy, because one bad matcher can basically, I guess, negate most of the gain of this work.
H
B
Yeah, this was, I thought, really interesting and very motivating to try to figure out what to do about these annotation matchers. Looking at it now, with the optimizations in this PR, which don't apply to these matchers: if we disable these matchers, or if we could optimize them, we can see the amount of gain in startup time that is still out there.
B
With those matchers off by default, do we lose the HTTP route if it's...
H
I implemented a framework-specific integration for JAX-RS some time ago, and one of the motivations behind this was that annotation matching isn't actually really accurate: it can't accurately detect whether some method really is a JAX-RS method in all cases.
H
So, instead of that, I did the framework integration for Jersey and some other JAX-RS frameworks.
H
So what we lose currently is basically only that the span that's created, for like suspended requests, has those code.function and code.class attributes, and if we want, we could implement something similar in the framework-specific code also. And of course we have this JAX-RS annotation matcher for, like, JAX-RS 1.0; the framework instrumentations don't support anything that old, I think.
H
B
Yeah, from what I've seen, the people who complain the most about startup time are people running pods with one or fewer CPUs.
B
I think in Kubernetes there's maybe some kind of default kill switch, like if your pod doesn't start up in a certain amount of time... or maybe that's just a best practice or something. I think that's one area where I've seen that whatever the Java agent adds is enough to bump it over that time, and so their pod just never comes up.
B
All right, did anybody else have anything else to chat about today?