From YouTube: 2022-01-10 meeting
D
Sure, the meeting in the APAC time zone last Wednesday.

D
It was, actually. It was a small meeting, but it was productive. There were just four of us: there was myself from New Relic, Fabian from Prometheus, and Tommy from Micrometer, and basically, you know, I think we went about...

D
I don't know, 40 minutes or so. I just wanted to make sure that Fabian and Tommy were on the same page about this idea of what we're trying to do, which is essentially, and this is actually something I want to talk to...
D
...the people about, which is to just confirm that we're all seeing the same thing. We're trying to define a minimum set of metrics which any implementation at least should (and we can talk about whether the normative term is "should" or "must") supply, and then to also define an extension model to allow implementations, depending on the exact runtime circumstances, to be able to do more than that. But to try to map that out in some more detail.

D
I actually feel like the core of this will be quite small. So that's why the extension model, and how you extend outwards from, you know, the mandatory core, is important. I also ran through the concept of the P-shaped API.

D
We ran through a worked example of one of the differences between concurrent and stop-the-world GC, just because I think it's an illustrative example of some of the differences that can go on, and yeah, just made sure that everybody was on the same page really, and then started to talk about what else we wanted to add in. I can't remember who raised the point; I think it might have been either Fabian or Tommy who suggested getting the Dropwizard folks involved.
D
It was Fabian, actually, and Fabian says that he has contacts there, so he reached out to them. But the chap at Dropwizard, the lead maintainer there, has just gone on paternity leave, so we may not get any quick responses from them. I reached out to Datadog; again, I haven't heard back from them.

D
So I've reached out to them to see if maybe they would like to participate and join as well. As it stands, we have five implementations who've signed up: the JFR prototype, Micrometer, Splunk, New Relic, and the Prometheus client.

D
So I kind of feel like even if we don't get any more, that's sort of enough of a basis to go on with. It would be great if we could get Kamon on; if we can find another Dropwizard committer, brilliant; and we may or may not get Datadog. But you know, even if we only have these five to start with, it sort of feels to me like that's enough. Does anyone have any concerns about that?
A
These seem like pretty good, yeah, because they seem like pretty good representatives, from, you know, popular open source to popular commercial.

D
Yeah, and actually the thought just occurs to me that I think Fabian literally starts at Grafana today, but he was previously at Instana. So I suppose we could also ask him if he... I don't know what Instana's done.

A
Maybe the story for this, with Fabian there...

D
...is that they just want the Prometheus route. But it may be worth just having a chat with him and seeing if we want to add Instana in as well, or if he recommends that. But yeah, so I think we're basically in a pretty good place. I mean, does anyone have any real concerns with actually just trying to start the work of trying to distill out what the common core here is?
A
Yeah, do folks feel like these are complete? I mean, the work is kind of done as far as filling out the spreadsheet tabs, and we're ready to, like you say, you're suggesting moving on to trying to pull out common pieces.

D
Yeah, I'm just trying to see. Suppose we took these five as our reference set. Can we, leaving aside the question of how we name them (we can talk about naming conventions in a second), but from a semantic point of view, can we... I mean, let's take an example. Let's take the CPU time, for example, CPU utilization, which in the JFR prototype is called runtime.jvm.cpu.utilization, as in Micrometer.
D
So just that kind of work: to go through and see, you know, from this data set, what we could actually distill out as something which all of these implementations could realistically target.

A
Yeah, I think it definitely looks like it's worth moving to that step, and then, you know, we can always circle back as additional metrics get fleshed out or appear on the list.

D
So let's just... and the names here, I think, are probably the least important, but let's actually start with the CPU utilization.
D
I think what we want to do is to establish a common core such that, at least for those implementations that are participating, they can source data which can be used to fill this metric in. The exact details of how they do it, I don't really think we're called to dictate. We should be setting an achievable bar and then have them implement that however they choose to, really.

D
Yeah, I mean so, for example, one of the ones that I was noticing as I went through is deadlock. Several implementations have a deadlock count. That is not something which exists today in the JFR prototype, and it's not something that is directly present in the JFR data.
B
There are components already doing this and, as far as I know, in fact, Dropwizard has a gauge for deadlocks.
D
A thread deadlock count; so New Relic has that, and Prometheus has jvm_threads_deadlocked, which is again that kind of number. What does Splunk have?
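The deadlock count these implementations expose is available from the standard platform ThreadMXBean. A minimal sketch of reading it as a gauge-style value (the class and method names here are illustrative, not from any of the implementations discussed):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockGauge {
    // Number of threads currently in deadlock, as reported by JMX.
    // findDeadlockedThreads() returns null when no deadlock exists.
    static long deadlockedThreadCount() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long[] ids = threads.findDeadlockedThreads();
        return ids == null ? 0 : ids.length;
    }

    public static void main(String[] args) {
        System.out.println("deadlocked threads: " + deadlockedThreadCount());
    }
}
```

A metrics binder would typically poll this on each collection, which is presumably how the Dropwizard gauge and jvm_threads_deadlocked work.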
D
And yeah, that's a good point to ask. I mean, essentially that speaks to the question about what is the normative level that we have for these things. Maybe the answer is... I mean, I would like to have as much at the "must" level as we can, just because I think it makes a stronger statement. But at the same time, I'm realistic enough to know that at least some things will have to be "should", and then maybe we'll end up with like three metrics which we can do at "must" and that's it, and at that point, you think...
B
So the outcome of this should be some sort of a specification, right, which is like all of the names of these metrics and the types and descriptions and so on. Should we create a file in one of the GitHub repos, maybe in the spec or the semantic conventions?

A
Yeah, I'm hoping that we can add it in the spec repo. Even though it's Java-specific, there seems to be precedent for putting language-specific semantic conventions there.
E
So if we follow the lead of the semantic conventions done so far: for each kind of concept that we want to capture, the instruments are assigned a name and an instrument type.

E
For example, you know, histogram, up-down counter, whatever best fits the shape of the data; a unit; and a description. So that's kind of what we should aim to come up with for each of these. And, I guess, in addition to that, probably dimensions as well: dimensions and an enumeration of allowable values, if there is such a thing.
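That structure can be sketched as a semantic-conventions-style table. The entries below are illustrative placeholders only (the process.runtime.jvm prefix is discussed later in the meeting; the instrument types, units, and attribute values here are not agreed conventions):

```
| Name                                | Instrument    | Unit  | Description                | Attributes             |
|-------------------------------------|---------------|-------|----------------------------|------------------------|
| process.runtime.jvm.memory.used     | UpDownCounter | bytes | Memory used by the JVM     | area = heap \| nonheap |
| process.runtime.jvm.cpu.utilization | Gauge         | 1     | Recent CPU utilization     |                        |
```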
A
Cool. So, any other high-level discussion points? Because then we could use the time to just... I think we then wanted to start going through these, and we can use the time to start listing things.

B
Can we create that file in GitHub soon? Like, just one line is enough, I guess; just some strawman with a draft state, so that everyone will be able to see what the next step is. And what form should that be in?
D
So I'm just looking at the... there's actually a page called Semantic Conventions for Runtime Environment Metrics, which is actually looking pretty empty. But, and other people tell me if they're seeing the same thing as me, I'm reading this as specifying that the naming convention should start process.runtime, not just runtime.

D
Tommy doesn't care, because in the Micrometer case it's just a filter on the way out anyway, right?
D
Okay, so yeah, basically our conventions will start process.runtime.jvm, just in terms of how the stack is fitting in. So what else? Just looking at the list, can we maybe all just take a few minutes to look through the different tabs here and see what else we can easily pull out, that we can think of as being part of the core that everything will be able to get access to?

E
So there's obviously memory pool utilization; that comes up over and over again.
D
That's the tricky one, because what does a pool mean? We have old pools and young pools, and we have Shenandoah, which is not generational, remember. So the exact phrasing of how we do the tactical dimensions on some of those things, I think, needs a bit more thought. So yes, they're an obvious one, but they may actually be a harder case than we realize.
D
Well, you see, this is probably a case where we don't have precisely aligned terminology. The term "pool" is what's used inside the JVM to mean a chunk: a piece of off-heap or a piece of on-heap memory. And I'm just looking at the Micrometer notes, and actually one of the dimensions is either area=heap or area=nonheap.

D
So that seems to imply that it goes beyond looking at purely heap memory, which is kind of what I was... I guess I was only really thinking about heap memory right now.
E
Here's a strawman, and let me know why this is wrong. We want to be able to determine how much memory is being used by the JVM, and we want to have attributes that characterize, that group, the memory by different areas or groups or pools, whatever the unit is. But the total memory across all the groups is interesting in itself.

E
So, you know, can we say that we are interested in being able to characterize the total memory being used? And then, essentially, we have another discussion about how we want to break that up into chunks, how we want to group that. And yes, the grouping is implementation-dependent.
D
That seems reasonable. Now, the only other thing which you need to be careful of is the distinction between memory used and memory committed, because one is effectively the memory that's been reserved from the operating system but not actually touched yet.

D
So you can reserve, you know, a certain chunk of memory. If you've got a fixed Xmx, that value is going to be fixed for the entire lifetime of the process, but you aren't necessarily using all of it yet.
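The used/committed distinction being described here is exactly what JMX's MemoryUsage type exposes. A small sketch reading the heap figures (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapUsedVsCommitted {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // init: requested at startup; committed: reserved from the OS;
        // used: actually occupied by objects; max: the ceiling (-1 if undefined).
        System.out.printf("init=%d used=%d committed=%d max=%d%n",
                heap.getInit(), heap.getUsed(), heap.getCommitted(), heap.getMax());
    }
}
```

The MemoryUsage contract guarantees used <= committed, which is why treating them as one metric with a dimension would conflate two different quantities.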
D
We need to have those two numbers, and it feels to me like those should be separate meters; we shouldn't use both of those as dimensions.
B
So also, there is another one: it's a pair with max, which is the initial. Like, for example, the initial heap size.

B
Okay. So you have basically four values. If I talk about heap, because I believe that's the easiest to imagine: you have an initial value that you start with, then you will have the committed size; between the two there is a used size; and you can have a max.

B
And the initial size is not there in Micrometer, but it could be. If max is there, it can give you some information about what is going on inside of your JVM.
D
Yeah. Also, as I'm thinking about this, did you say that the Splunk data is basically being pulled from Micrometer?

D
Okay. So, in the case of New Relic and Prometheus: not all of New Relic's data comes from JMX, but some...

D
So actually, this problem might simplify to reconciling JMX with Micrometer with JFR.
D
Which is slightly easier, because that's only three different things that can vary. Because at least the semantics, you know, we know that the semantics for Prometheus will be the same as New Relic, for example, at least for the things which are looked after by MXBeans; and Micrometer and Splunk should have the same semantics for those items as well.

D
Yes, and Fabian is the maintainer for that, so I should have invited him to come today. I don't know whether seven o'clock is a bit late for him, but he certainly comes to the other meetings. I'll ping him and see what he says.
D
Well, you see, I'm unsure, because, just eyeballing the data, the metrics which are presented from Micrometer seem to be slightly different to the ones that are presented from New Relic and Prometheus. So, for example, Micrometer doesn't have the deadlocked thread count, whereas New Relic and Prometheus do.

F
Right, New Relic is gonna have, like... so that's the New Relic agent, I would assume, right? So it's instrumenting additional things. I would just say that Micrometer is still pulling from the JMX events. Sorry, this is Aaron, by the way; I had my camera off. So it's still pulling from JMX events, it's just surfacing different things.
D
Right. So, to take an example, for Splunk we have, on line 13, "JVM used", which is the amount of used memory. What does that correspond to in the underlying Micrometer? Is that jvm.memory.usage.after.gc?

D
Because surely, if this is JMX-based, things just get updated, as a last value, onto the MXBean, and then whenever you scrape it, you get to see what the last value was. It's not actually being pushed with updates from each individual garbage collection event; it's just whatever it was the last time GC ran, right?
A
I think it's the current usage of the pool, the current; it's not after the last GC. I think there's a separate attribute for last, I forget.
D
Yeah, that's what I'm thinking. But folks, if we just think about the action and the mechanism here for one second: when a Java application is running, unless GC is occurring, and only during the evacuation phase, the old generation is not changing, right? Sure, some objects will be going out of scope and will be becoming garbage, but the JVM has no way to update pool statistics based on that, because until the tracing path runs it doesn't know what's alive and what's not.
B
As far as I know, it can create objects in the old gen. So if the object doesn't fit into Eden, it can create the object in the old gen immediately and skip...

D
So if you create, like, a 100 MB byte array, for example, yeah, that's totally going to get created there. But apart from that rare event, so maybe it updates the statistics for that case, but in general it's not going to update it, and it won't shrink the pool usage size when objects die. That will only happen at a GC event. So yes, you're right.
D
Eden will be continuously decreasing and increasing in terms of memory used, yes, because you can update it every time the JVM issues a new TLAB for small object allocation. It can increase the usage size, so the Eden size will be continuously increasing.

D
So the survivor spaces will not change size between collections, at least not from the point of view of what JMX sees. But in general, the model is going to be the same: when there is a change, the JVM will at some stage update the value that's held in the MXBean, and then for Prometheus it's just a question of when you scrape it; and I assume for the New Relic agent as well, Jack.
A
So do we want to pick... I mean, since these four are fairly similar, given the JMX sourcing, is there one that we want to kind of focus on in terms of synchronizing with JFR?

A
So we don't have... over here we have dimensions spelled out; we don't have dimensions documented here yet. But, if I recall, the prototype does; it did have dimensions.
D
And I noticed that, on the actual five implementations we have, we actually don't have an instrument type on any of them, which we definitely should have.

A
This one has... the Prometheus one has an instrument type here, yeah.

D
That's good. Good of Fabian; he's put us all to shame.
D
Okay, so the way that the JFR prototype approaches it is: we update this with every value that we get from every garbage collection that we see.

D
So we will see a set of values which corresponds to the state of the heap after every GC finishes.
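A sketch of how a JFR-based implementation can observe the heap state after every collection, assuming JDK 14+ for the RecordingStream API. The jdk.GCHeapSummary event fires before and after each GC; the println stands in for updating whatever instrument (histogram or gauge) the convention ends up choosing:

```java
import jdk.jfr.consumer.RecordingStream;

public class AfterGcHeapStream {
    public static void main(String[] args) {
        // Stream JFR events in-process; GCHeapSummary carries the heap
        // usage at the instant the event was emitted.
        try (RecordingStream rs = new RecordingStream()) {
            rs.enable("jdk.GCHeapSummary");
            rs.onEvent("jdk.GCHeapSummary", event -> {
                if ("After GC".equals(event.getString("when"))) {
                    long used = event.getLong("heapUsed");
                    // Here the prototype would record `used` into its instrument.
                    System.out.println("heap used after GC: " + used);
                }
            });
            rs.startAsync();
            System.gc(); // encourage a collection so at least one event fires
            try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
        }
    }
}
```

This is what makes the JFR route event-driven: values arrive per collection rather than being sampled on scrape.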
E
So the histogram is then capturing how many bytes were in that particular area of memory after that GC, and it's useful information, or we presume that it's useful information, to see the distribution of that, yeah. Like, you know, if GC happens 10 times over the course of a collection, it's useful to see the distribution, in histogram format, of what the memory was at the end of each GC, yeah.

E
I think it'd be hard to duplicate that type of instrument with a JMX-based approach.
D
Yeah, I can believe that. So maybe actually we need to convert this to some other form. So this is part of the... you know, this is what I expected to happen, because the JFR data is so very event-driven, which makes it different from the other implementations.

B
This can be interesting from the OTel metrics, or instrument types, point of view as well, because, as far as I remember, in the histogram there is a counter and it must be monotonic.
E
So my interpretation is that, you know, let's say you do garbage collection five times over the course of a collection, and you record that the memory was one megabyte, then two megabytes, then one megabyte, then half a megabyte, and one megabyte again. So you'd have buckets corresponding to the number of bytes in that particular area, and the histogram would just give you an expression of that. So, like, you know, for twice the...
A
And I think that's the constant confusion between up-down counter and gauge; counter, up-down counter, and gauge, right? Like, this would be, I'm guessing... Ben, is this implemented as an up-down counter?

E
Well, if you choose a gauge as your instrument type instead of a histogram, then the JFR implementation is losing visibility into certain bits of information. But, you know, you can have a consistent experience: the JFR implementation could just update, essentially, gauge values, like a map of gauges, and then whenever a collection occurs, it could read the current values of those gauges.
D
That's my feeling as well, Jack. That's why I wanted to actually compare these in this much detail. So I think the answer is that this probably has to become a gauge, which is somewhat disappointing. Although, actually, no, wait a minute. What do I actually want to say here? Is the answer that it becomes a gauge, or is it...

D
Is the answer that we store something else in the handler, and that, when an observation by the exporter happens, we actually do the summarizing then?
C
I was very confused with counters and gauges when I was writing the Micrometer instrumentation.

D
Yes. I mean, basically we get information on every GC type of event, and that includes quite fine-grained things, so we can see the state of things at individual phases of GC. So yeah, I'm confident that we can match whatever is in JMX from the JFR data.
D
Yeah. Just a couple of the implementations mention tracking pools which have off-heap memory; so, for example, the New Relic one.
E
I had an application in production that was creating threads at a high rate, and every thread has a fixed amount of overhead, and I was getting killed on all my containers. So: gotta watch out for non-heap.

B
Also, if you load a lot of classes, especially if you generate dynamic classes and load them, that can be the same issue; Metaspace will be pretty crowded.
E
So, I'm looking at the JMX access for heap and non-heap memory metrics, and there's a MemoryMXBean that has two methods: getHeapMemoryUsage and getNonHeapMemoryUsage. Both of those return the same MemoryUsage object type, which you can use to get information about that area. So whatever we have for heap usage, we'll have similar types of information for non-heap. So that means, you know: init, used, committed, and max again.
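The symmetry described here is straightforward to see: both accessors return the same MemoryUsage shape, so a convention defined for heap (init/used/committed/max) carries over to non-heap unchanged. A minimal sketch (class name illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapAndNonHeap {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // Both calls return MemoryUsage, so the same table of
        // init/used/committed/max applies to both areas.
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
        System.out.println("heap:     " + heap);
        System.out.println("non-heap: " + nonHeap);
    }
}
```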
A
Right, and we do actually have a prototype in the instrumentation repo for the JMX metrics, so this will be a good thing for us to steer into conformance with whatever we come up with, also, yeah.

A
Do we want to... we've got a few more minutes. Do we want to maybe pull one more example out, like CPU, to talk through?
A
Let's just talk, and then maybe, for next time, can we get somebody... I'm trying to think what action items we should have for fleshing out... or, is there an action item for next time, for fleshing out the memory used a bit more?

E
We can try to express that in, you know, the semantic conventions tabular form.

A
Right, right, so the initial document. Yeah, I like that idea, because we could push a PR just with a, you know, sort of blank table, a placeholder. I'm happy to do that.
A
And then, does somebody want to give an attempt at filling that in, for JVM memory used?

D
And, you know, I get the feeling that when we actually try this in practice on a real application, we're actually going to see slight differences in the time series that we get for the two routes. But I think that's okay, and I think it'll actually be interesting to see how large the divergence in the shape of the time series actually is. I could be wildly wrong; maybe it actually is just very, very similar, maybe it's not at all.
D
Maybe it is actually noticeably different depending on where you source the data from. But because we can set this up with two separate exporters, one which is using, you know, Micrometer and one which is actually using JFR directly into OTel, we can actually have both sets of data at once from what is, you know, the exact same process. So it's the same underlying data set, but it's just visualized in two different ways.
B
That's an interesting question, though, because the system CPU usage is not bound to the runtime.

B
JMX will give you the CPU usage, the CPU utilization, of your host; it's not the CPU utilization of the JVM.
A
Also, does anybody know how to get system CPU time? Because I think you just get system CPU load, but that's not the same. Like, on the runtime side you can get the actual CPU time, which is nice.
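On the JVM side, process CPU time and load come from the com.sun.management extension of OperatingSystemMXBean (so the cast below assumes a HotSpot/OpenJDK-family runtime). There is no system-wide CPU time accessor on this bean, only load figures, which is the gap being pointed out:

```java
import java.lang.management.ManagementFactory;

public class CpuMetrics {
    public static void main(String[] args) {
        // The platform interface only exposes a load average; the com.sun
        // extension adds process CPU time (nanoseconds, -1 if unsupported)
        // and process CPU load. System-wide CPU is reported only as a
        // load/utilization value, never as accumulated time.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        System.out.println("process CPU time (ns): " + os.getProcessCpuTime());
        System.out.println("process CPU load:      " + os.getProcessCpuLoad());
    }
}
```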
A
So yeah, that's a good point, about the system CPU usage and whether that even fits.

F
This is one of those interesting things that I think OpenTelemetry is in an interesting position to make a recommendation about, which is why I thought the conversation about Java runtime stuff is good to have, right? Because we can say, within the broader context, you know, within all these other metrics that should complement what you're using, here's what the JVM needs to have.
F
So we can make some statement about where you might get a better measurement for process metrics, and that, you know, getting that information from the runtime is not the best plan. Do you see what I'm saying? I don't necessarily know the right answer, I'm...

A
Yeah. What I was thinking Jonathan meant by this was that, in the spec, metrics are split out.
B
So I might add a to-do item to this later on, not right now, but for when this gets documented, most probably in the OTel Java libraries: that these two, and also memory a little bit, can vary a lot depending on which Java version you are using and whether you are in Docker or not, because the Docker support was backported, but not everything; and the different versions do vaguely different things, or vastly different things.
A
All right, well, we're at five till; we should wrap up. Does anybody want to take a sort of initial crack at this, in a similar way to the JVM memory used, at trying to fill out that table? Or shall we push it till next meeting to discuss further?

D
I think I'm going to focus on trying to get the JFR implementation to produce memory used in a suitable instrument format, and I've got the non-heap metric options as well. So I think I've probably got enough action items. Anybody else?
A
Great; I mean, I think there's some great progress here. Just, if we can sort of hammer out one and see how that goes, that will, I think, help people to see how the others could go on further.

A
But if anybody wants to, for sure, bring an opinion to Slack.

B
Okay, all right. I can take a look at it, but maybe next week, so I cannot really promise anything, but...

A
Okay, awesome. And what Slack channel are we using?