From YouTube: 2022-02-07 meeting
A
Oh, Ben is pinging. Is there a Zoom link?
A
Cool. Ben, I was looking at the notes — it looked like there were some good discussions in last week's meeting. Anything worth sharing with this group?
B
I think we just talked about a couple of things which we need to do. One thing we definitely need to do is put a small PR in to patch the specification to say that we only officially support 8u192 and upwards, because the pre-u191 behavior is not guaranteed to be accurate. So although we are Java 8, technically speaking, we're only actually accurate if you're later than update 192.
B
I think it does, okay. And that one is not purely a JFR problem — that's a general problem. If you're in a container, you are not guaranteed to have the right numbers for JMX and a bunch of other stuff as well.
B
So that was the first thing. I also put a draft PR in, which is the move to counters to match what Jack did for the spec, which we can talk through — we can show it or whatever. And then I have got some things to do for GC, which I haven't had a chance to get to yet, but I hope to get to this week.
A
Oh yeah, the JMX metrics — I know that's being used by several people, the contrib repo scraping. Was there some discussion on overlap there, or whether that should follow conventions?
B
That's a good point — that reminds me. I shall ping Tommy and see if he can send those to me.
A
And for the CPU utilization? Oh well, that's this PR, and I don't think we have Jonathan.
A
But yeah, it would be good to get a further review on this PR for the CPU metrics.
C
One general thought I had on that: there are several different views of CPU utilization — there's a one-minute average and maybe a current utilization — and I'm not sure all of those can be captured in JFR. So the thoughts going through my head are: are we going to spec out conventions only for metrics which can be captured in both JFR and via JMX, or are we going to —
B
I was thinking either. We can't guarantee that each implementation will have the same metrics available, because some of them just won't. At some point we'll have to support GraalVM and native-compiled binaries, and they just won't have, for example, JIT compilation metrics — just not there. So I think it's okay to have variation across implementations as well. I just want to make sure we write all this stuff down.
A
And this looks similar to the discussion on CPU percent utilization — like, what does CPU represent?
B
No, I was just going to say, further to that point: one of the things which came up in last week's meeting was that we shouldn't lose sight of the fact that we may actually discover that there are some gaps in JMX or in JFR here. So something we need to keep in mind is, if we find something which we think is a hole — it's not presently being generated — we can actually go back and ask people to add things for us, certainly from a Red Hat-specific perspective.
B
The team that work on JFR have contributed stuff back, so obviously that doesn't help us for immediate versions, but it is possible to do that.
A
So, Jack, is there...?
C
Oh right, the "recent" one — yeah, this is very vague. There's a definition that comes in from the JMX bean that describes what "recent" means, and it's very JMX-oriented. Same with that one-minute average.
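The one-minute figure comes from the standard platform MBean in `java.lang.management`; a minimal sketch of reading it (standard JDK API only — no OpenTelemetry dependency assumed, and the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class LoadAverageProbe {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // getSystemLoadAverage() is the *whole-system* load averaged over the
        // last minute. The Javadoc allows a negative return value when the
        // figure is not available (e.g. on Windows) — another reason the
        // semantics are "very JMX-oriented".
        double oneMinute = os.getSystemLoadAverage();
        System.out.println("system load (1m avg): " + oneMinute
                + " over " + os.getAvailableProcessors() + " processors");
    }
}
```

Note this is a system-wide value, which is exactly the scoping concern raised later in the call.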
C
I had the same thought about CPU utilization — how is that actually computed? I'm interested in the discussion that took place on the last metrics call, then.
B
Sorry, I need to come off mute. So basically that number is for JFR.
B
I don't know — we'd need to look at the implementation. I would say that what we would do is take that and just average it: we'll record any elements we get of that and then smooth them out to the average value.
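A minimal sketch of the smoothing described here — queue the roughly once-per-second JFR load samples, then collapse them to an average when the metric is collected. The class and method names are illustrative, not the actual PR's code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical accumulator: record() is called per JFR CPU-load event,
// collectAverage() is called by the metric reader at each collection.
class CpuLoadAccumulator {
    private final List<Double> samples = new ArrayList<>();

    synchronized void record(double load) {
        samples.add(load);
    }

    // Drain the queued samples and smooth them to a single average value.
    synchronized double collectAverage() {
        if (samples.isEmpty()) {
            return 0.0;
        }
        double sum = 0.0;
        for (double s : samples) {
            sum += s;
        }
        double avg = sum / samples.size();
        samples.clear();
        return avg;
    }
}
```

Averaging like this trades the instantaneous-snapshot problem (discussed below) for a smoother per-interval value.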
A
For the system one — and then I'm assuming this is the same, but...
B
Process — oh yes, I see what you mean. Yes, so one of them would be the value that we currently call system total, and then the process one will be the JVM itself.
F
Just a small thing to add here: when I see those metrics, I get the feeling that for memory usage there's a lot of JVM-specific stuff, and it's fine for us to have the jvm namespace there. But process CPU seems like something that pretty much every process on every platform should have. Isn't that a bit out of scope for the JVM-specific namespace of metrics? Shouldn't that be left out, focusing only on the things that only JVMs will have under this namespace?
C
Yeah, that's an interesting question — we've talked about that a little bit before. Especially with the system metrics, there are overlapping concerns: there are system-level semantic conventions describing things like CPU utilization and memory utilization, and there are also general process-level metrics for those as well. So, not JVM-specific, but "what is the CPU utilization of this process? What is the memory utilization of this process?" I opened an issue with the spec trying to clarify that.
C
So in a situation where, within the JVM, we can capture views of these general metrics — but from the JVM's perspective — should we define our own semantic conventions for those, or do we try to place the data that we gather from the JVM into the more general process-level or system-level metrics?
G
The way that I read into this too — and this is based on past discussions we've had here — is that this is kind of the system utilization as seen by, or as can be measured through, the JVM. Until we've done more extensive research, we can't necessarily say that those are the same things as the reported system metrics that are specced elsewhere.
C
Yeah, just to add a little bit more context there: the general system and general process-level metrics may have attributes that we're not able to populate when recording these from a JVM's perspective.
A
Yeah, I think anybody reading this — or a lot of people reading this — would have the same question, so it probably makes sense to have a note explaining what the overlap is in the spec itself.
B
I mean, we can't be the only people running into this, right? Other people must have an implementation — either they've encountered this or they will. So I think it's good to work through it. Aaron just pinged me and suggested that maybe we talk to the Node people and see what their implementation does.
C
I think they're trying to define runtime semantic conventions as well right now.
G
Well, Jack, I don't know about you, but I'm astonished that the JavaScript one is really terse and the Java one is super verbose and long.
A
Yeah, I think we're sort of trailblazing here, so I think it's okay for us to make a call one way or another.
B
So the one that I'm not sure about is that bottom one, the one-minute average, because that's defined as the average CPU load for the whole system, and I don't think a whole-system value... that seems a bit weird to put there. I mean, as opposed to the system total that we can observe.
B
You know, a general host-monitoring component also scrapes system-level metrics and puts those in a different part of the namespace — or people don't run one, right? So I think that including the data we can get about system-level CPU utilization makes sense for those users who aren't also doing host monitoring, which some people might not be. When I was at New Relic, we used to see people turn it off.
B
So that's why I think we should record the system-level CPU that we can see from the Java perspective. That way, people don't have to look at it; if they think it's useful, they can look at it, and for those people that don't do host monitoring, maybe there's some value there. And if they prefer to rely on the host-level monitoring as their source of truth, then they have that instead, if they've got it installed.
C
Yeah, I agree with that line of thinking, and the memory usage makes a good argument in my head. There are process-level memory-utilization semantic conventions being defined, and the attributes that are required are things that we definitely can't populate. So we want to define process-level memory-utilization metrics that are JVM-specific, and we want to have attributes that matter.
C
For JVM users — pool name, and type, whether it's heap or non-heap — that's what has already been merged. So I think for memory it's definitely useful to have a JVM-specific view of things, and it's pretty easy to extend that to other areas like CPU utilization.
A
Well, I think this is a very real use case for users who aren't using a host-level agent.
C
Especially because those host-level agents — it's not like they're stable yet; they're still experimental things themselves.
C
But one thing I think we could do to move this PR along is: I can go through and actually identify where in JMX this data would be supplied from. Then we can all go and view the Javadoc for those beans and see exactly what is represented by the data that we're collecting, and hopefully make a judgment on whether that's actually useful enough to define in semantic conventions.
C
It's a very strange method that's attached to the JMX bean that allows you to —
C
— which could give you a local maximum or a local minimum and not be representative. So hopefully, if you have a one-minute average, maybe those peaks and valleys are smoothed out more, so maybe it's useful from that perspective.
C
The CPU time is just the number of nanoseconds or milliseconds that were spent by the process: if it's cumulative, since the beginning of the process; if it's delta, how many milliseconds were spent by the CPU for this particular process in that collection interval. The utilization attempts to define that as a percentage — you have to have a denominator.
A
But isn't this metric just — I mean, that's normally what I've done with CPU time, or any kind of total time, like GC time: you subtract the last metric and divide by the interval, and that gives you the utilization for the most recent interval.
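The computation described here — subtract the previous reading of a cumulative time counter, divide by the interval — can be sketched as follows. The class and method names are illustrative:

```java
// Derive per-interval utilization from a cumulative time counter
// (CPU time, GC time, ...). Works for any monotonically increasing
// nanosecond counter sampled at a known interval.
final class IntervalUtilization {
    private long lastTimeNanos;

    IntervalUtilization(long initialTimeNanos) {
        this.lastTimeNanos = initialTimeNanos;
    }

    // Subtract the previous reading and divide by the elapsed interval,
    // giving a 0.0-1.0 fraction (per core; divide further by the core
    // count for a whole-machine figure).
    double update(long currentTimeNanos, long intervalNanos) {
        long delta = currentTimeNanos - lastTimeNanos;
        lastTimeNanos = currentTimeNanos;
        return (double) delta / intervalNanos;
    }
}
```

Unlike an instantaneous snapshot, this value genuinely covers the whole interval, which is why cumulative counters aggregate more safely than pre-computed percentages.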
B
Yeah, I had a question about that: utilization is written as a gauge, but is it actually a gauge, or is it an async up-down counter?
C
I mean, imagine over a five-minute window you had five values recorded, one for each minute — say 60, 70, 65, 60, 55. How do you aggregate those if you want to roll that up into a five-minute value?
A
The issue that I see with aggregating this one is that it's just a snapshot in time. It's like: I looked at my CPU utilization for a millisecond and it was x. It's not really "over the last..."
B
Okay — because in the JFR implementation that I currently have (it's in the PR), I'm getting called every second with an update, so I'm just queuing them up. I've currently got this implemented as an async up-down counter.
C
Yeah, and JMX is always going to be subject to that weakness, because we're always going to get a snapshot in JMX. Well, I guess we'd have to look into the semantics of what's being provided, but I imagine it's just a snapshot.
A
Yeah, even if we're getting utilization once a second, if it's instantaneous — not the average over the last second — it's still very... I mean, the long-term average is nice, but I don't give a lot of credence to, or put a lot of emphasis on, "for a split second the CPU said x," since it jumps up and down.
A
And that's where a host-based agent is a lot better. I feel like these things that we can get from the JVM are just sort of the best we can get from inside.
E
If we're observing with gauges just to see what the latest value is, that feels fine to me. But I think if we're looking at utilization using anything other than a gauge, we're mapping something that's already been mapped. I'm sure those are valid uses, but we would be doing more statistics on something that's already been munged.
C
That might be the case — I was actually just thinking about that. There's another situation that's similar, for memory usage: the init and the max values for any given pool are only reported just once at the beginning of an app's life cycle, and then they just start reporting negative-one values, and so we stop reporting.
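On the JMX side, the undefined values mentioned here show up as -1: `MemoryUsage.getInit()` and `getMax()` are documented to return -1 when the value is undefined for a pool. A small probe using only the standard `java.lang.management` API (whether the JFR events behave the same way is a separate question):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolLimits {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (!pool.isValid()) {
                continue; // a removed pool returns null usage
            }
            // getInit()/getMax() return -1 when undefined for this pool;
            // in practice only pools bounded by flags such as -Xms/-Xmx
            // report meaningful limits.
            long init = pool.getUsage().getInit();
            long max = pool.getUsage().getMax();
            System.out.printf("%-30s init=%d max=%d%n", pool.getName(), init, max);
        }
    }
}
```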
E
...a period where it's zero, and then it gets the value again, but that value never changes — which would still let you see the change over time as you have processes come and go, right? But it's not a gauge per se. I don't know.
B
Let me stick this in chat for people as well, because there is actually a list of the JFR events here that someone has kindly put together, with big chunks of JSON. Yeah, that's the one that I always refer to as well — that's a good list. This one has got some 17-specific ones in it which talk about containers, but I think we might get numbers from that. Do we know who "bestsolution" is? It's such a weird name, but it's what I always look for.
B
Okay — oh, it's Tom... Tom Schindl.
D
Just quickly — I was a little bit too slow, but going back to this CPU count: there are not only metrics, there are also resources in OpenTelemetry. I just quickly googled it, and there seem to be some Java resources already, like the name of the runtime and so forth. So maybe things like the CPU count would fit more into resource semantic conventions and not into metrics semantic conventions.
C
That's actually — I kind of like that thought, thinking about it from memory init and max as well. If those are static values that don't change, we could have a resource detector.
C
Yeah, we collect a lot of these for JVMs already — actually probably all of them — but we could extend those. These are all general semantic conventions at the resource level.
G
So that should be mentioned on Jonathan's CPU PR, right? Because CPU count, we think, falls in that category.
A
And I was thinking — we already have some. Jack, maybe it would be worth putting out a PR to propose moving a couple of these.
A
If that's what we're — this is hard, though, yeah, because of the attributes.
E
I have trouble here — okay, so I know I can say the highest number of CPUs that it's allowed to use, and I know that with containers we had all kinds of problems that we got fixed, with them accidentally detecting that they were sitting on a 32-core machine when they actually only had access to two. What I don't know is: if they only have access to four, do they always report four, or would they report anywhere from one to four depending? I just don't remember.
E
That's a level of detail my brain can't be bothered to keep in its head. What I don't remember is whether those values change based on what's actually being used, or if it is literally just the bounds that you set up at the beginning. Am I making any sense? I just don't remember how dynamic some of those pieces are.
B
Even if the container did change its size and the number of CPUs that it was using — which I just don't think there's any way that it could do — Java wouldn't.
G
I'm still parallel-tracking with your comments and my comments about the spec and resource life cycle from earlier, and there is a section in the SDK — the resource SDK — that says...
G
I can link to it — there's a lot.
C
I mean, I definitely can do that. I want to know what people think about the dimensions that we have on this — the ability to have dimensions based on pool name and type, whether it's heap or non-heap.
C
Yeah — and I think when I was testing around with this, a bunch of the pools just didn't return a value when you asked what the init memory or the max memory was. Only pools that could be configured with -Xms and -Xmx were actually returning values.
A
And once we start encoding it into the resource attribute name, we lose — I mean, it's a little looser correlating between the pools; back ends would sort of need to understand that better.
A
...the goal, I think, in the future for resources is to be able to tie them to a particular run of a process and do that correlation in the back end. But I know our back end doesn't have that capability, and so I would probably, in our exporter, just take the resource attributes and shove them back into regular metrics.
C
Well, I'm going to go back through the CPU PR and just indicate where all the metrics are coming from, so that we all have a good idea what we're dealing with and can make a decision on whether those are useful or not.