From YouTube: 2022-03-07 meeting
A
Oh no, no problem. I don't think there's too much that's controversial. I think there was one question about either keeping or skipping JVM CPU utilization.
B
Yeah, I believe we do, but I need to double-check.
A
And then did it make sense to you, moving the thread pool parallelism question, pushing that to the thread pool discussion later, instead of tying it to the CPU metrics?
A
It disappeared, oh my god. I seriously restored this last Friday. Sorry, Ben is asking for the copy.
A
Yeah, I swear I went in on Friday and it was gone, and I went to the trash and I restored it. How did this happen, Google? I thought this got fixed.
A
I thought that I did. Okay, anyway, I won't try to tax my memory, because that's not going to happen. All right, well, we were just talking about the CPU metrics. Sorry, Jonathan, I got distracted there; you were explaining why you thought this had some tie to CPU as well.
B
Yes. What happened is, when the JVM starts, there is a process, or workflow, of JVM ergonomics, which basically sizes, or sets, a bunch of properties in the JVM, and the common pool is one of them. It depends on the CPU, or the available processor count, which is basically how many cores you have.
B
And the common pool is a special thread pool in the JVM. For example, if you are using parallel streams and you are not specifying a thread pool, it will use the common pool. Anything that uses the fork/join API will go to the common pool as well.
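The relationship described above can be inspected directly at runtime. A minimal sketch (class and method names are mine, for illustration): the common pool's parallelism is derived from the available processor count by JVM ergonomics, but the two numbers are not guaranteed to match.

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolInfo {
    // Core count seen by the JVM -- the input to the ergonomics formula.
    public static int availableProcessors() {
        return Runtime.getRuntime().availableProcessors();
    }

    // Parallelism of the shared pool used by parallel streams and
    // fork/join tasks when no explicit pool is given.
    public static int commonPoolParallelism() {
        return ForkJoinPool.commonPool().getParallelism();
    }

    public static void main(String[] args) {
        System.out.println("available processors  = " + availableProcessors());
        // By default this is roughly availableProcessors() - 1 (minimum 1),
        // and it can be overridden with
        // -Djava.util.concurrent.ForkJoinPool.common.parallelism=N,
        // so reporting it separately from the CPU count carries information.
        System.out.println("common pool parallelism = " + commonPoolParallelism());
    }
}
```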
A
From a namespacing perspective, I'm trying to think: once we introduce... well, we can always revisit this. I'm happy to put it in now and revisit it with the thread pools, in case there's common naming.
B
Yeah, also, I believe if you use CompletableFuture and you don't specify the pool, it's the same thing: it will use it. I am not sure, though, if this can be different from the available pro... oh wait, never mind. The JVM will use a formula which depends on the available processor count, but the parallelism level can be different from the available processor count. So that's why I believe we should keep it.
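B's point about CompletableFuture can be demonstrated in a few lines (a sketch; the class name is mine): when no Executor is passed, the async variants run on the common pool, unless common-pool parallelism is 1, in which case the JVM falls back to a thread-per-task executor.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;

public class DefaultExecutorDemo {
    // supplyAsync with no Executor argument uses the default async pool,
    // which is the common pool when its parallelism is greater than 1.
    public static String whereDidItRun() {
        return CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getName())
                .join();
    }

    public static void main(String[] args) {
        System.out.println("common pool parallelism = "
                + ForkJoinPool.getCommonPoolParallelism());
        // On a multi-core machine the thread name typically starts with
        // "ForkJoinPool.commonPool-worker-".
        System.out.println("ran on: " + whereDidItRun());
    }
}
```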
A
Okay, so I'm wondering if it can be namespaced under something related to thread. That gets very long, but jvm.common doesn't.
A
We can also use underscores, like if we want to do jvm.thread.common_pool_parallelism, because we might have other pools.
C
If we can avoid doing that, I think that might be better, just because some of this stuff may end up getting translated into Prometheus conventions, and I'm not sure what the rules are if we start using delimiters which Prometheus already has.
D
Should we try to avoid underscores? I think so, yeah. I mean, ideally it's idempotent, so you can go to Prometheus and back to OpenTelemetry and get the same result. Going from OpenTelemetry to Prometheus would replace dots with underscores, which would be fine, but then going back you would probably have dots again, which would not be what you originally had.
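D's round-trip concern can be shown with a toy sketch of the dot-to-underscore mapping (toPrometheus is my name, not a real SDK function): two different OpenTelemetry names collapse to the same Prometheus name, so the reverse mapping cannot recover which one you started with.

```java
public class NameMapping {
    // OpenTelemetry -> Prometheus: dots become underscores.
    public static String toPrometheus(String otelName) {
        return otelName.replace('.', '_');
    }

    public static void main(String[] args) {
        // Both names map to jvm_thread_common_pool_parallelism, so mapping
        // Prometheus names back to OpenTelemetry (underscores -> dots) would
        // turn the original underscore into a dot -- the round trip is not
        // idempotent once underscores appear inside OpenTelemetry names.
        System.out.println(toPrometheus("jvm.thread.common_pool.parallelism"));
        System.out.println(toPrometheus("jvm.thread.common.pool.parallelism"));
    }
}
```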
B
I remember there wasn't any, but right now, is there any naming convention, especially for the word separator, in OpenTelemetry?
A
Sorry, I misunderstood. You're right, dots are the level separators, but then there are underscores used within individual attributes, at least on the tracing side.
B
I believe the two should be identical, so if it is defined on the tracing side, the metrics API specification must follow that.
A
On the OpenTelemetry side, it's just the whole thing; it's just a name. I'm not sure what the mapping to... I know there's a bunch of mapping to Prometheus defined, but I have...
A
Yeah, so at least what I've seen is, it's nice to use... so we definitely have this as sort of a namespace, hierarchically, for other people to define other runtime environments, and we can define that underneath. But "common pool" I think would probably be underscored.
A
Yeah, at the risk of it getting long, I sort of like the idea of it being namespaced under thread somehow.
A
In either case, we might find that we want other pools, or pool.*: maybe pool.parallelism with a tag for the name of the pool, or something like that. But I think that was sort of Jack's comment, that we'll know more when we do the thread stuff. I also think it's totally fine to put in our best guess for it at this point.
A
Okay, and then the other two were just about using the process.cpu namespace. Again, I think it will make sense to do what we think is best at this point, and we'll get feedback later.
A
Yeah, and then this one: I think it would help to know if Micrometer is collecting that already. I don't see any reason not to include it; it seems like a reasonable metric to add.
A
Ben, do you want to talk about the GC stuff that you started looking at?
C
Yes.
C
So it's not good news, I'm afraid. We had looked at some basic metrics, and then I was going to try and investigate to see what we could do for GC throughput, you know, to see if there is an apples-to-apples comparison, and there absolutely is. I can totally get it from JFR, but I don't think we'll get it from JMX, because I need to actually see the details on individual events, and while I can do that from JFR data, it's not...
C
Right, so for G1... well, obviously, for a stop-the-world collector it's obvious, because it's just the ratio of time spent in GC versus not, so you can compute that. But the calculation is slightly more complicated for a concurrent collector, because only some of the threads are doing GC; so for G1 Old, for example...
A
So would it... can we look at it as a CPU time percentage?
C
Yes.
A
In the stop-the-world case, would it still be a percentage of CPU time, or would it be wall-clock time?
C
For serial, they're the same number. Or, for a stop-the-world collector, they are the same number, because it's a binary state: it's either doing application processing or it's doing GC. There's never any state which is anything in between, and when it's in the GC state it's doing zero percent application throughput. But...
C
So in that case, the simple calculation of the number of cores multiplied by the concurrent time is potentially a slight overestimate, but under most circumstances that's not that bad.
A
So which one are you proposing as the definition of GC throughput? Or I guess, what do you get from JFR? What are the components you get from JFR?
C
JFR has got all of the data. I've actually got some code in my jfr-hacks project which probably explains this. It uses the JFR analytics module, which basically uses Apache Calcite to give you an SQL interface to query JFR, and from there it's just a question of querying the right events.
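Setting the SQL/Calcite approach aside, the same per-event detail can be read with the JDK's built-in `jdk.jfr.consumer` API. A sketch (the class is mine, and the assumption that `jdk.GCPhasePause` is the relevant pause event in the recording is mine too; other GC event types exist):

```java
import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class JfrPauseTime {
    // The top-level GC pause event in recent JDKs; an assumption about
    // what the recording contains.
    static boolean isPauseEvent(String eventTypeName) {
        return "jdk.GCPhasePause".equals(eventTypeName);
    }

    // Sum the durations of all GC pause events in a JFR file.
    public static Duration totalPauseTime(Path jfrFile) throws Exception {
        Duration total = Duration.ZERO;
        for (RecordedEvent e : RecordingFile.readAllEvents(jfrFile)) {
            if (isPauseEvent(e.getEventType().getName())) {
                total = total.plus(e.getDuration());
            }
        }
        return total;
    }
}
```

This only works on recording files, matching the point made later in the discussion that the file-based tooling does not yet cover streams of data.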
C
So, first of all, we do a query to see what collectors are present in the file, and from that you can determine whether or not it's a concurrent or a stop-the-world collection. Then the calculation is simple: for a stop-the-world collection, it's the elapsed duration multiplied by the parallel threads that are available, over the total CPU count; whereas for a concurrent collection, it's the elapsed duration multiplied by the concurrent threads, plus the total pause time multiplied by the parallel threads.
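The two calculations described here might be sketched as follows, expressed as a fraction of total CPU time. This is my reading of the discussion, and the normalization (dividing by wall time times total CPUs) is an assumption, not a formula stated verbatim in the meeting:

```java
public class GcThroughput {
    // Stop-the-world collection: the pause occupies the parallel GC threads
    // for its whole duration, so GC CPU seconds = pause * parallelThreads.
    public static double stopTheWorldGcCpuFraction(double pauseSeconds,
            int parallelThreads, double wallSeconds, int totalCpus) {
        return (pauseSeconds * parallelThreads) / (wallSeconds * totalCpus);
    }

    // Concurrent collection: concurrent phases occupy only the concurrent GC
    // threads, while the (shorter) pauses occupy the parallel GC threads.
    public static double concurrentGcCpuFraction(double concurrentSeconds,
            int concurrentThreads, double pauseSeconds, int parallelThreads,
            double wallSeconds, int totalCpus) {
        double gcCpuSeconds = concurrentSeconds * concurrentThreads
                + pauseSeconds * parallelThreads;
        return gcCpuSeconds / (wallSeconds * totalCpus);
    }
}
```

As noted earlier in the meeting, the concurrent-thread term is a slight overestimate, since concurrent GC threads are not necessarily saturated for the whole concurrent phase.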
A
And so on the JMX side, you tried with the GC notification events and that didn't give...?
C
Oh, I'd forgotten about those. No, I will do that. It took me quite a while to actually get numbers that match the ones I could see in JMC, and I only just got that working this afternoon, so I haven't looked at those. But that is a good point; I should go and look at the notification events. Thank you.
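For reference, the GC notification events mentioned here are exposed through the GC MXBeans. A sketch of registering for them (the class is mine; note `GarbageCollectionNotificationInfo` lives in `com.sun.management`, so this is HotSpot-specific and gives per-collection detail such as cause and duration, not the per-phase detail JFR has):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;
import javax.management.openmbean.CompositeData;
import com.sun.management.GarbageCollectionNotificationInfo;

public class GcNotifications {
    // Attach the listener to every GC MXBean that emits notifications;
    // returns how many collectors were hooked.
    public static int register(NotificationListener listener) {
        int registered = 0;
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            if (gc instanceof NotificationEmitter) {
                ((NotificationEmitter) gc)
                        .addNotificationListener(listener, null, null);
                registered++;
            }
        }
        return registered;
    }

    public static void main(String[] args) {
        int n = register((notification, handback) -> {
            if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                    .equals(notification.getType())) {
                GarbageCollectionNotificationInfo info =
                        GarbageCollectionNotificationInfo.from(
                                (CompositeData) notification.getUserData());
                // Per-collection detail: collector name, action, duration (ms).
                System.out.println(info.getGcName() + " " + info.getGcAction()
                        + " took " + info.getGcInfo().getDuration() + " ms");
            }
        });
        System.out.println("listening on " + n + " collectors");
    }
}
```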
B
Or how much time the application has after GC gets its share. But I believe, from real-world scenarios, looking at the GC overhead sometimes gives you more easily digestible information, because you can see that it's going up, and it's basically the overhead; it's easy for the users to understand.
A
And again, CPU time versus wall-clock time, right? That's what we're...
B
I just sent this. This is what Micrometer is doing, and it is calculating the percentage for the JVM GC overhead. It is using a lookback interval, so you need to define the interval over which you are monitoring this; I guess if you are not defining it, that will be the JVM startup time.
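The overhead-over-a-lookback-interval idea can be sketched with the standard MXBeans. This is a simplified stand-in for what Micrometer does, not its actual code; the class and method names are mine:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverhead {
    // Total GC time (ms) accumulated since JVM start, summed over collectors.
    // getCollectionTime() returns -1 when unsupported, hence the guard.
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    // Overhead over a lookback interval: sample totalGcTimeMillis() at the
    // start and end of the interval, then divide the delta by wall time.
    public static double overhead(long gcMillisBefore, long gcMillisAfter,
            long wallMillis) {
        return wallMillis == 0
                ? 0.0
                : (double) (gcMillisAfter - gcMillisBefore) / wallMillis;
    }
}
```

Without a lookback interval (i.e. dividing the cumulative total by JVM uptime), the number converges toward a long-run average and stops reflecting recent behavior, which is why the interval matters.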
C
Can you drop that in the chat, in the Slack chat, or stick it in the document?
B
That's the time spent on GC, and this is the JMX GC notifications.
A
What were you thinking, Ben? In the JFR, do you get more detail than just the duration?
A
Nice, the counts. Do you get the thread? I'm just curious.
C
It's SQL, and if you look at the other link I put in the chat...
C
It's something that Gunnar wrote, so it uses Calcite. This only works for files; Gunnar and I have got some ideas as to how we're going to make it work with streams of data as well, but...
C
Yeah, I just started playing with it last week, and my original code was having trouble reproducing the same numbers I saw in JMC, but this library gives me correct answers pretty much out of the box.
A
So yeah, I like the idea of trying to capture something like this, or something that's cumulative, versus, you know, spitting out an overhead number. The backends will be happier with that.
A
So I think... yeah, and if we have two different ones, if we can get something better from JFR, I think it's okay to have different signals from each, doing the best we can with each, as long as what those signals mean is documented in the spec.
A
Yeah, I mean, I think it would be okay. Also, there are certain very traditional JVM metrics that maybe aren't within some broader category, but we could start adding those.
A
Cool. Well, I think these are the big ones.
C
I feel like we're making, you know, steady progress. It's not super quick, but I think we are getting there, and I think we're bottoming out each of the metrics we want to make sure makes it into 1.0. So I'm reasonably happy; how's everyone else?
A
Cool then. Let's see, anything else you wanted to get out of today?
A
No, we have not discussed anything similar for metrics, but I think it would be a great idea, either in conjunction with @WithSpan, because we kind of want to marry spans and metrics a lot, so it could be an extension to @WithSpan to also capture a metric for that.
D
I mean, I just know that in Dropwizard, or in Micrometer, or in MicroProfile, there are basically simple annotations, like @Timed, that you can put on your business method, and then it just times the business method, so you know how long it takes, or it counts it, and then you get rates.
B
Yeah, I believe the metrics specification, the API specification, is not there yet. For example, the last time I checked, the metrics API specification did not have a timer at all, so you cannot have a @Timed annotation without the timer.
B
I think so, but it's interesting, because not all of the environments and languages have annotations. But I believe that if this were to be defined, then it should be defined there, or at least the basics should be defined there: for example, having a timer.
A
For annotations for timers, yeah: the creation of timers would be in the API and the spec. Creating the annotation convenience API probably wouldn't be in the spec, but it would be something similar to @WithSpan in the instrumentation repo that can be enabled, because you either need Spring, or a Java agent, or something that can do magic with the annotations.
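The "annotation magic" mentioned here can be illustrated without any framework, using a dynamic proxy. This is a self-contained toy, not OpenTelemetry, Micrometer, or MicroProfile code: the @Timed annotation, the Greeter interface, and the plain map standing in for a duration histogram are all invented for the sketch.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TimedDemo {
    // Hypothetical annotation, analogous to Micrometer/MicroProfile @Timed.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Timed { String value(); }

    public interface Greeter {
        @Timed("greeter.hello.duration")
        String hello(String name);
    }

    public static class DefaultGreeter implements Greeter {
        public String hello(String name) { return "hello " + name; }
    }

    // Accumulated nanoseconds per timer name -- a stand-in for a histogram.
    public static final Map<String, Long> TIMINGS = new ConcurrentHashMap<>();

    // Wrap an implementation so that @Timed methods are measured on each call.
    @SuppressWarnings("unchecked")
    public static <T> T timed(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] {iface}, (proxy, method, args) -> {
            Timed t = method.getAnnotation(Timed.class);
            if (t == null) {
                return method.invoke(target, args);
            }
            long start = System.nanoTime();
            try {
                return method.invoke(target, args);
            } finally {
                TIMINGS.merge(t.value(), System.nanoTime() - start, Long::sum);
            }
        });
    }

    public static void main(String[] args) {
        Greeter g = timed(Greeter.class, new DefaultGreeter());
        System.out.println(g.hello("world"));
        System.out.println(TIMINGS.keySet());
    }
}
```

Proxies only work on interfaces, which is why, as noted above, real implementations lean on Spring, a Java agent, or a CDI container to intercept arbitrary classes.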
A
But yeah, if you have something... I mean, we're always looking for contributions.
D
I thought about it because I'm sometimes also joining the MicroProfile folks who want to do their next version, and they of course have annotations, because in a MicroProfile container you have this annotation magic. I was just wondering if there was something similar going on with OpenTelemetry; it would make sense to align it in a way. But yeah, so... well, I can.
A
Yeah, so that is what we're doing with the built-in instrumentation, or the instrumentation that we produce, which uses the instrumentation API.
A
The instrumentation API is built on top of both tracing and metrics, so it does calculate, you know, the duration of an outgoing HTTP call, and it generates both a span and a metric for that.
A
Which is sort of where my first thought was: you know, enhancing @WithSpan to similarly produce a metric. Although I also know sometimes you don't want the overhead of a span; you just want a timer.