From YouTube: 2022-04-04 meeting
A
A
B
So topics today could be current specs.
A
C
B
I think it's sort of related. There is still this question that at least Bogdan and I (and, I believe, Tigran also) wanted to have resolved, which is: should the JVM metrics instrumentation, for example, even be reporting process CPU time? And I think that is sort of what Bogdan's blocking comment over here is about.
B
It doesn't really say, but I think at the core he is thinking that people should use the collector for process metrics, both system and process metrics. He doesn't say it here, but I'm reading the subtext, based on his previous comments, that he still wants to know why the JVM metrics should be reporting these and not the collector.
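For context on the metric in question: the process CPU time under discussion is something the JVM itself can report without any collector. A minimal sketch, assuming a HotSpot/OpenJDK runtime where the `com.sun.management` extension of the OS MXBean is available (the class name and example output labels are illustrative, not an official mapping):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

public class ProcessCpuExample {
    public static void main(String[] args) {
        // On HotSpot/OpenJDK, the platform OS MXBean can be cast to the
        // com.sun.management extension, which exposes per-process CPU data.
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();

        // Total CPU time used by this JVM process, in nanoseconds (-1 if unavailable).
        long cpuTimeNanos = os.getProcessCpuTime();
        // Recent process CPU usage in [0.0, 1.0], or a negative value if unavailable.
        double cpuLoad = os.getProcessCpuLoad();

        System.out.println("process cpu time (ns): " + cpuTimeNanos);
        System.out.println("process cpu load:      " + cpuLoad);
    }
}
```

Note that this API reports only total process CPU time, with no user/kernel breakdown, which is relevant to the overlap comparison with the collector made later in the discussion.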
D
Right, so what happens in Docker?
D
Doesn't that violate the Docker best practice of one process per container? Or should the collector run as root, so that it has access to the cgroups and namespaces and can report on the process?
B
That's a good question. I don't know.
D
To me, that is kind of the explanation for why one would want this: if users want to run only one process per container, which I would say is the best practice, then the collector will not have access to this unless it is running as root, which users might not want either.
E
I mean, Docker aside: I think you make a good point, but Docker aside, the collector could potentially be running on a separate physical host, right? In some deployments it could be that way. In which case, is he suggesting that doing the JMX scraping is enough for CPU, or was it non-specific?
B
E
F
D
F
It does. So there's specifically a host metrics receiver, which is good. If it doesn't already collect these process-level metrics, that's the plan, as is my understanding. But I haven't thought about Jonathan's point about it running in root mode so that it can access the information from the other processes; that hadn't occurred to me. I'm sure it's come up in the collector group.
D
So if it is just reporting system-level metrics, it can run without root, so that's all right. But you need to run at least as an elevated user to have access to other processes.
D
Or it can run as the same user as the process it is monitoring. But if you have multiple processes that are executed by multiple users, then either you need to play around with groups, or... I don't know. It gets complicated, because you might run the collector as the same user as one process, and there is another process you also want reported, but that one is run by another user.
F
So, Trask, what's interesting is we got the PR in to observe JVM memory metrics, which has some overlap with the process memory metrics that exist. But the change request is for CPU metrics being collected from the process itself, and so I'm wondering what the distinction is.
F
Is it that the JVM memory metrics have kind of unique attributes that we want to collect?
B
Yeah, I think the JVM metrics are an easy sell over the collector, because if they're broken out by memory space, they're not a pure overlap with the collector. Whereas the CPU metrics, at least CPU time, are a pure overlap, and if anything we have less detail than the collector, because we can't break it down by user versus kernel.
A
B
We should add... oh no, sorry, ignore me.
D
D
A
B
Yeah, so I think process CPU time is the best example, and that's why I targeted this issue at it, because it is the most overlapping, although this particular PR doesn't have process CPU time.
B
B
B
B
From their metrics library... I mean, we had that spreadsheet of all the different metrics libraries that we were cross-referencing, and I think they all collected these.
B
A
D
I guess I have two more things on this. One is: if you start this locally, or in an environment where you don't necessarily want to start a collector, you will still be able to see this.
D
You don't need a separate process to collect that metric for you. Also, what if the JVM and the Linux kernel, or the collector, disagree? That has happened in the past: the JVM wasn't always Docker-aware. So if you are running an older version of Java 8... or actually, that's not necessarily the case, I'm not a hundred percent sure, but there are still differences between versions before Java 10, which practically means Java 11 versus 8.
D
E
E
Usage? That's interesting. Are you asserting that, or asking? Asking; I don't know.
G
E
Well, so: do we think there's a path forward in responding to Bogdan on this? I mean, how strongly do we feel? Because it could be added later.
G
F
I guess one thing that's gone through my head is that this kind of sets a precedent. If this doesn't get in, then it's setting the precedent that if some piece of telemetry data can be collected by the collector in a standardized way, then a particular runtime shouldn't collect it as well. And I think I disagree with that: you shouldn't need to run the collector to get a good picture of what's happening in your runtime.
E
B
Yeah, I think process CPU time again is a great example for this, because that is an awfully critical metric.
F
As for maintaining this: I don't think that's a really good argument. I think it's a really small maintenance burden compared to the overall surface area of instrumentation that we have.
E
F
Well, so if this goes in, then other languages could set their own semantic conventions for similar concepts. Yeah, I'm happy with that; I love that, actually. I think that's fine too, because of what I said: I don't think you should need to run a collector to get a good experience.
B
So that's sort of this option: making a copy of process CPU time under the JVM namespace.
D
B
Yeah, so maybe we should reply with what the motivation is to have these. In this PR we've got, I think, three metrics.
F
That's my opinion. I think that the JVM should be able to tell you, and that's been my perspective since the beginning for JVM runtimes in general: the metrics that we produce should give you a decent idea of how your JVM runtime is functioning, and CPU utilization is a key part of that.
E
It can also be used for corroboration of other measurements, like in the case where you do run a collector; or it can highlight differences where they don't corroborate.
B
E
Yeah, but I guess in the case where the Java one is wrong, you might have to account for that, right? Or you would want to know that, because the JVM makes some internal choices based on those numbers.
E
E
B
So, are there cases where we don't have the pool names?
E
A
B
These ones were not splitting by user and system space. Oh.
B
Okay, these are two different metrics, one...
G
B
G
B
Other than this potentially vague link here: it would be really great if anybody can get confirmation on what that means. Maybe Ben would know.
B
What about the point of interop with other libraries, like with Micrometer? Jonathan, part of the goal, right, is to have these mappings between Micrometer names and OpenTelemetry names. Or were you even thinking, in Micrometer 2.x, of using the OpenTelemetry names if we had them? Or would you do that at the OTLP mapping layer?
D
With Micrometer 2.x, we would like to support the OpenTelemetry naming conventions, the semantic conventions. That would be ideal, though right now I would say it depends on the timing, because in order to support this in 2.x, I believe it would be great if the semantic conventions were stable by then. But we can always add this later, because in Micrometer we have...
D
We have a meter filter, which is kind of similar to OpenTelemetry views, the views API in the SDK. So we can always rename these meters. That means that even if we do not do it in 2.0, we can do it in 2.1.
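D's meter filter point can be sketched like this. The rename target below is a hypothetical OpenTelemetry-style name chosen for illustration, not an official Micrometer-to-OpenTelemetry mapping:

```java
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class RenameFilterExample {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // A MeterFilter can rewrite meter IDs before registration,
        // much like a view in the OpenTelemetry SDK.
        registry.config().meterFilter(new MeterFilter() {
            @Override
            public Meter.Id map(Meter.Id id) {
                if (id.getName().equals("process.cpu.usage")) {
                    // Hypothetical target name, for illustration only.
                    return id.withName("process.runtime.jvm.cpu.utilization");
                }
                return id;
            }
        });

        // Registered under the old name, but stored under the rewritten one.
        registry.counter("process.cpu.usage").increment();
        System.out.println(
                registry.get("process.runtime.jvm.cpu.utilization").meter().getId().getName());
    }
}
```

Because the filter runs at registration time, instrumented code keeps using the old names while exported metrics carry the new ones, which is why this could be retrofitted after 2.0 as D suggests.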
B
Okay, I can definitely reply on there with these two points. I understand those; this one I'm a little vague on still. Any other points, or expansion of this, that would be worth adding?
D
So I would just mention: as far as I remember, it was before Java 8 update 192 that Java 8 wasn't really Docker-aware. That means when you asked the runtime "hey, what is the CPU count, or the CPU utilization?" or whatever it was, it was getting the data from the host, not from the container.
D
So, for example, if you limited the CPU count to one on a host which had, I don't know, eight cores, then Java saw all eight of them, but Docker limited each JVM to one, and the same for memory, and this can lead to nasty issues. But let me try to confirm that.
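The container-awareness issue D describes shows up in the basic runtime queries below. The exact update number (8u191 vs 8u192) is from the discussion and worth confirming; this is just a sketch of which values older JVMs derived from the host rather than the cgroup limits:

```java
public class CpuCountExample {
    public static void main(String[] args) {
        // On older, non-container-aware JVMs this returned the host's core
        // count even when the container was limited (e.g. docker run --cpus=1).
        // Container-aware JVMs report the cgroup CPU limit instead.
        int cpus = Runtime.getRuntime().availableProcessors();

        // Default heap sizing is derived from visible memory, so the same
        // host-vs-container discrepancy affected memory limits too.
        long maxHeap = Runtime.getRuntime().maxMemory();

        System.out.println("availableProcessors: " + cpus);
        System.out.println("maxMemory (bytes):   " + maxHeap);
    }
}
```

This is why a JVM-reported CPU metric and a collector-reported one could legitimately disagree on older runtimes: they were reading different limits.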
B
I think we even mentioned that in the memory spec, the JVM memory spec. Let me see... so it was...
A
D
B
D
B
D
B
D
D
I believe this was the blog post where they announced this.
D
B
Yeah, so this was... I was just searching. So we definitely thought it was 192 previously in the notes, and Ben did want to make a PR to add this language to the spec.
B
F
I guess, and this is assuming that you either can't or don't want to run a collector in your environment, the situation I'd like to be able to identify is that a memory leak has occurred. What I would look for to identify that situation is:
F
I would look for the heap memory usage going up and to the right slowly, and that correlating with increases in the number of garbage collections that happen, and then with the amount of CPU that gets utilized, because eventually your memory will get pinned at like 100% or 99% and you'll just constantly be garbage collecting, trying to free up a little bit of space for more allocations to take place.
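The three signals F lists are all readable in-process via the platform MXBeans, which supports the argument that a collector isn't strictly needed to spot this pattern. A minimal sketch (class name is illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class LeakSignalsExample {
    public static void main(String[] args) {
        // Signal 1: heap usage trending up and to the right over time.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("heap used/max: %d / %d%n", heap.getUsed(), heap.getMax());

        // Signal 2: garbage collection counts and time climbing along with it.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }

        // Signal 3 would be process CPU time from the OS MXBean rising as
        // collections start to dominate; a real monitor would sample all
        // three periodically and correlate the trends.
    }
}
```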
F
And
so,
if
you
can
see
that
all
from,
like
you
know
those
three
signals,
memory,
garbage
collection
and
cpu
utilization,
then
that's
pretty
useful.
So
that's
that's
you
know
it.
It
kind
of
does
feel
weird.
I
guess
to
if
you're
trying
to
identify
that
well,
would
it
feel
weird
to
try
to
pull
metrics
from
the
that
are
emitted
from
the
collector
to
identify
that.
F
It's a bit weird though, right? Because your JVM has a service name associated with it, and it has a resource, and that resource is involved in identifying the source of that telemetry. So the collector would need to be able to emit a resource attached to these process metrics in such a way that the backend could correlate those together, or, you know, stitch them together, so that it identifies that.
F
Yeah, that's the situation, though. Just being able to identify that without the collector would be nice, in my opinion. Requiring that the collector be deployed to your environment to be able to detect that is a bit strange to me.
B
B
Okay, but I'm hoping that, while we chip away and make slow progress here, that shouldn't stop us from moving forward on GC, which I think should be at least simpler from a spec perspective, since there's no overlap there with the collector.
F
Has anyone caught up with Ben recently? I know he was interested in that at one point.
B
Cool. We will make progress as we have time and keep revisiting that. At least we know what's next; we have a plan. Who's editing this one?
C
Hey, so basically I can mention... yeah, it basically overlaps a little bit with what you were talking about regarding the JVM.
C
This is for the JMX gatherer. My impression is that it's supposed to work along with a JMX receiver in the collector, reporting a set of basic JVM stuff. This is basically for when you have a Java application, like a Kafka server, and you cannot, or don't want to, instrument it.
C
B
Just definitely take a look at the existing semantic conventions.
A
B
We have memory usage: init, committed, max, broken out with dimensions of pool names.
B
So I think for these ones, what we should do is... and Jack had sent a PR to the instrumentation repo to update our metrics to follow this new naming convention.
B
We should update the JMX metric gatherer to also use the new naming convention, and then definitely the garbage collection metrics will overlap with this future work, right? So for sure the names are going to change, because now everything is under process.runtime.jvm.
B
But I would probably wait to change those until there's a proposal for GC metrics.
B
Okay, I see. If you are interested in doing that, for sure: we definitely want to pursue GC metrics in the spec.
C
B
Yeah, so that is already here. So that would be something where, if you or somebody wants to send a PR to the JMX metrics to update them to use the new conventions, that would be awesome. Okay.
C
F
Okay, perfect. Fast track: I was taking a brief look at the contrib jmx-metrics artifact module, and it seems like there's an automatic mapping between the JMX metrics and OpenTelemetry metrics. So I'm trying to figure out the best way to map specific JMX beans to these conventions.
F
B
B
Yeah, I like that. It would be process.runtime.jvm, and then maybe jmx-dot-something, for just generic JMX metrics.
C
F
Yeah, I'll definitely take a look, because, like Trask said, I opened the PR to update the instrumentation to the new semantic conventions, and so this is just the same type of thing, but getting those metrics over a network instead of...
C
B
Yeah, and of course they're not... even the new ones are not stable, but at least...
B
C
B
Yeah, yep. Carlos, while you're thinking about the JMX metric gatherer...
B
B
I was thinking of something similar for the JMX metrics: sending a PR to the collector-contrib repo to update that. But from what I could tell, in the collector-contrib repo they just say: hey, you have to download the JMX metric gatherer yourself, so you should download the latest and then link it.
C
B
So if you end up making that more seamless for users and bundling it, we could do something similar: each time the JMX metric gatherer is updated, send an auto-PR over there to keep that in sync, if that was one of the reasons for not bundling it.