From YouTube: 2021-12-13 meeting
C
Sorry, what's up? Same thing, that's probably up with most of the Java community.
A
Yeah, let's see how much we get that, but if not, then that's that, and you know it's understandable, right? So yeah.
A
I've just had a ping from Erin to say that she can't make it today. So we shouldn't wait for her, but let's give everybody a couple more minutes and then get started.
C
I will share. And I think we had... oh, we had one specific action item from last week, which was for me, on stability, telemetry stability.
C
So, two things there. One is there's a new recent spec issue, or PR, from Tigran to define instrumentation stability, so I'll drop that in.
C
And then there's the one thing that kind of came out of some follow-up discussions.
C
So many meetings, so many notes, okay, yeah. It's because I think one of the concerns people had last week about stability was that different JVMs, OpenJDK, OpenJ9, Graal, would emit different metrics, and well...
A
And that's absolutely right, and it has to be that way around, because for some JVM implementations, particularly Graal in native mode, some of these metrics just won't exist. You know, you won't get JIT compilation metrics out of the stack of the compiled binary, for example. So I think that statement's exactly right.
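The JVM-dependence described here is visible even in the standard platform management API. As a minimal sketch (the class name is invented for illustration; the `ManagementFactory` calls themselves are standard `java.lang.management` API), the JIT compilation bean is optional, so a JVM with no JIT at runtime may simply not provide it:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitMetricsProbe {
    public static void main(String[] args) {
        // The compilation MXBean is optional: a JVM with no JIT compilation
        // system (e.g. an ahead-of-time compiled native image) may return
        // null here, so a "stable" metric set must tolerate its absence.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit == null) {
            System.out.println("no JIT compilation metrics on this JVM");
        } else if (jit.isCompilationTimeMonitoringSupported()) {
            System.out.println(jit.getName() + ": "
                    + jit.getTotalCompilationTime() + " ms total compilation");
        }
    }
}
```

Any spec wording along these lines presumably has to say which metrics are optional per implementation, rather than requiring one uniform set.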
C
So does anybody have... does that feel like it resolves the issue? Jonathan, I think you had brought that up initially.
C
So sorry, which issue? The metric instrumentation stability.
C
Maybe you did, maybe somebody else; I thought you had brought that up last meeting, but maybe it was somebody else. Anyway, if nobody has any concerns about what I described, we'll consider that done and move on.
D
Does this effort have any effect on the, like, GA release state of the Java instrumentation?
C
This would have... this is its own instrumentation, so I don't think it's related. You're asking specifically about the instrumentation API?
C
Yeah, no, I don't see a connection where one would influence the other, if that makes sense.
C
And then we kind of discussed this idea last time of collecting the metrics that existing observability vendors collect, as sort of a place to start and narrow down from, and Ben had put together his list. I don't know if... I think Datadog was one that we were thinking of, and getting one from Jason for Splunk.
A
Yeah, I think so. I'm just linking to a spreadsheet I started making, which basically has got the existing stuff.
A
That's
in
the
jfr
prototype
the
micrometer
elements,
and
I
think
if
we,
you
know,
if
we
just
want
to
add
some
more
tabs
to
this
json
like
you,
could
put
the
splunk
ones
in
and
jack,
if
you
put
in
the
new
relic
ones
or
ask
one
of
the
java
team
to
to
to
fill
them
in
then
that
way
we
you
know
we
can
just
basically
do
kind
of
a
gap
analysis
when
we've
got
a
few
to
look
at,
I
mean
what,
from
what
I've
been
talking
about
with
erin
she's,
very
keen
to
have
people
anchored
around
what
we're
actually
going
to
use
these
numbers
for
and
that
speaks
to.
A
You know, what do the end users of products like Datadog and New Relic want to actually do? And also, we want to be clear that this is about aggregate numbers; this is not about the sort of single-instance APM, which you can do with some of the products. Instead, what we're trying to do is something which makes sense even in quite large and quite coarse aggregates.
A
So that's, you know, what I took away from the last meeting as to what we needed to do for this, with the spreadsheet. What do other people think?
A
Oh totally, but my feeling is that we have to make sure that we take both sets of opinions into account, because we all know that there is so much data you could generate. You know, you can generate basically an infinite amount of telemetry data if you want to. So part of this is making sure that we actually get the balance between having a data rate which isn't overwhelming, but at the same time still preserving the actual signal.
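The balance described here, shrinking the data rate while keeping the signal, is essentially what pre-aggregation does. As a hypothetical sketch (class, record, and field names are all made up for illustration, not from the meeting), collapsing raw per-event values into a count/sum/max summary keeps the usable signal at a small fraction of the volume:

```java
import java.util.List;

public class CoarseAggregate {
    // One aggregate point per interval instead of one point per event:
    // count, sum, and max preserve the coarse signal (rate, mean, worst case)
    // while the per-event telemetry volume disappears.
    record Summary(long count, double sum, double max) {
        double mean() { return count == 0 ? 0 : sum / count; }
    }

    static Summary summarize(List<Double> latenciesMs) {
        long count = latenciesMs.size();
        double sum = 0, max = 0;
        for (double v : latenciesMs) {
            sum += v;
            max = Math.max(max, v);
        }
        return new Summary(count, sum, max);
    }

    public static void main(String[] args) {
        Summary s = summarize(List.of(10.0, 20.0, 120.0));
        System.out.println("count=" + s.count()
                + " mean=" + s.mean() + " max=" + s.max());
    }
}
```

The trade-off is exactly the one raised above: the coarser the aggregate, the lower the data rate, but the more per-instance detail is lost.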
E
So yeah, I can do that for New Relic as well. I can go through the Java agent and figure out what we capture. So how do you want that actually represented, though?
A
Well, if you can just flip to the spreadsheet, it's a really basic one. You know, so this is what's in the JFR prototype: there's a broad category of which metrics we've got, whether it's CPU or GC or what have you, and then basically it's just these different metrics that we capture.
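A hypothetical illustration of where rows for a tab like this can come from (the class name is invented; the beans are the standard `java.lang.management` ones): the broad CPU/GC/memory categories map fairly directly onto platform MXBeans, though, per the earlier point, exactly what each bean reports varies by JVM:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MetricRowsSketch {
    public static void main(String[] args) {
        // GC category: one row per collector, with count and accumulated time.
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("gc %s: count=%d timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(),
                    gc.getCollectionTime());
        }
        // Memory category: current heap usage.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap used: "
                + mem.getHeapMemoryUsage().getUsed());
        // CPU category: processor count (load averages are not available
        // on every platform, so this sticks to a portable value).
        System.out.println("processors: " + ManagementFactory
                .getOperatingSystemMXBean().getAvailableProcessors());
    }
}
```

A JFR-based prototype would get richer data from Flight Recorder events instead, but the category structure of the spreadsheet is the same either way.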
A
Yeah, let's add a dimensions column as well. We probably need it, as well as notes, really, don't we?
E
Maybe another column for, like, you know, the source, whether this is from Splunk or New Relic or whatever.
E
To represent the vendor that, you know, we're kind of documenting.
E
Oh, this tab is just JFR. Yeah, so you anticipate this being like a tab per kind of source? So, like a tab...
A
That's what I'm thinking, because at the moment we're not necessarily going to be having the same values. So what I'm thinking is, if we actually just document what we've got in each of the implementations, from that we should be able to pull out something which is, you know, a sensible either core set or superset of what's there.
A
Yeah, I think I would say that those were an application-level concern rather than a specific JVM-level concern. What do other people think?
C
Yeah, I think we need semantic conventions for connection pools in general in OpenTelemetry, but yeah, I would say it's not in scope for this group.
C
Yeah, we would probably want to unify that at the OpenTelemetry level for pool metrics, but yeah, let's pull it in and we'll see where that goes. Cool. And yeah, if we get rid of it later, so be it.
C
Yeah, for the schedule, I think we were going to cancel... well, we were going to cancel the Java meetings next week and the week after.
A
Yeah, because I think that certainly two weeks today, Red Hat is completely shut over that period; apart from people who are doing critical support stuff, everybody's off. So, you know, yeah, because the 27th is two weeks today, and the 3rd, those are the weeks of Christmas and the New Year.
E
I don't remember that. I think metrics have been stable at the OTLP level for some time.
A
I think, you know, those were basically the main topics, I mean. I think the next step really is just to fill in that spreadsheet, and get all the data that we have from everyone that wants to participate.
A
I will ping Marcus at Datadog and ask him if he could find some time to do a similar set of metrics for Datadog, and then let's reconvene once we've actually got, you know, all of our example use cases, I guess.
A
No, okay, all right! Well, I think that's probably everything then. Should we break early and maybe see some folks on Thursday?