From YouTube: 2022-02-02 meeting
A
Okay, should we get started? I don't think today is going to be super long.
A
Well, we'll see. Last week's meeting, on west coast time, was quite significant, because we managed to get the first specification PR merged, which is... yay. So, providing we actually have an implementation to match against it, which I think we should have, that pretty much guarantees the stuff will be in OpenTelemetry Metrics 1.0, which is, I think, a good result.
A
The only question, of course, is exactly what will be in it. The set of metrics we have so far are really just a placeholder, so I think in the coming weeks the main effort should be to really flesh that out. I have an open action item to fix the JFR implementation to match what the new specification says, and then to do some exploratory work based on that, but I haven't completed that PR yet; it's not ready to go.
A
And Jack submitted that, and basically what it does is add some of these semantic conventions for a handful of metrics to start with, so we're conforming to the same spec as everyone else, and we're using process.runtime as our prefix.
A
It just seems much more straightforward to do what everybody else is doing, in terms of other implementations, rather than strike out on our own. And all of these are defined as asynchronous up-down counters.
A
There was some debate about whether they should be some sort of histogram, but basically we want to be able to aggregate across multiple time periods, and for that reason delta histograms were talked about but not agreed upon. So we're going to go with asynchronous up-down counters, and the ones we have so far are memory.usage, memory.init, memory.committed and memory.max.
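As a rough illustration of what those instruments could look like in code, here is a minimal sketch using the OpenTelemetry Java metrics API and the platform MemoryMXBean. The metric names follow the process.runtime prefix discussed above, but the exact names, units, and attributes are assumptions until the conventions are fleshed out.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.Meter;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public final class RuntimeMemoryMetrics {

  public static void register() {
    // Meter named after the instrumentation library (see the later discussion on meter naming).
    Meter meter = GlobalOpenTelemetry.getMeter("io.opentelemetry.runtime-metrics");
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();

    // Asynchronous (observable) up-down counters: the value is sampled at each collection,
    // so readings can be aggregated across time periods without delta bookkeeping.
    meter.upDownCounterBuilder("process.runtime.jvm.memory.usage")
        .setUnit("By")
        .buildWithCallback(m -> m.record(memoryBean.getHeapMemoryUsage().getUsed()));

    meter.upDownCounterBuilder("process.runtime.jvm.memory.committed")
        .setUnit("By")
        .buildWithCallback(m -> m.record(memoryBean.getHeapMemoryUsage().getCommitted()));

    meter.upDownCounterBuilder("process.runtime.jvm.memory.max")
        .setUnit("By")
        .buildWithCallback(m -> {
          MemoryUsage heap = memoryBean.getHeapMemoryUsage();
          if (heap.getMax() >= 0) { // -1 means the maximum is undefined
            m.record(heap.getMax());
          }
        });
  }

  private RuntimeMemoryMetrics() {}
}
```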
A
We would expect that memory.init and, in a lot of cases, memory.max won't actually change that much. For example, if you set -Xmx, then memory.max is not going to change over the entire lifetime of the process, but there are obviously cases where the heap size will change if you haven't set that value, so we do need to account for that one changing. Overall, usage and committed were talked about and felt to be slightly different numbers.
D
Actually, I'm trying to find something that I know I saw in the documentation about the name, like whether the word is "usage" or "used": one of them was referring to a percentage and one to the actual amount used, and I just wanted to confirm that. I guess it is, but I cannot find the page in the docs where that is talked about.
A
That's a good call-out, because it also applies to when we talk about CPU, because in the JFR case you get utilization. So let me put that down as well.
A
Okay, cool, all right. Well, I'm glad to see that we're basically on the right page for what we have so far. I have an action item to change the existing prototype in JFR to meet these conventions. It's underway; it's not done yet, but I hope to have something to show soon.
A
Okay. I think Johnny was looking at the possibility of doing the same thing for the JMX implementation as well. So the next thing we need to look at, I think, is CPU. Quick check: is there anything else we need from memory?
A
I mean, bear in mind that, of course, this is purely for a minimal implementation. There are very useful things that we'll be able to get from JFR which I don't think are exposed to JMX, things like allocation rates, because I just don't think that JMX is going to provide us with any handle we can get on that, whereas it's really trivial to get them from JFR.
A
Let
me
get
out
of
that
and
we
need.
We
need
some
terminology
for
this.
What
do
we
call
this?
If
one
implementation
has
it,
but
it's
not
required
as
standard
optional
extension,
something
a
word
like
that.
What
do
people
think.
A
Okay, because the other thing that I'm thinking about, and I keep coming back to this idea, is that what we want is for people to be able to plug in a basic set of open-source components and get an end-to-end metrics flow. Now, a sufficiently smart dashboard might be able to auto-detect certain metrics and configure things accordingly.
A
So what I'm thinking about is: let's suppose we've got an OpenTelemetry agent which is sending metrics. It goes to an endpoint which is either the OpenTelemetry Collector or something which speaks OTLP natively, and that component is also a data storage unit. For the sake of argument, let's suppose that it's not a Prometheus backend, so that it's very much OpenTelemetry all the way through, from storage, and then it displays in Grafana.
A
So therefore, I know that the allocation metrics ought to be there, so I can automatically configure that and automatically bring it up, based on a fingerprint which tells me what the implementation being used is. Does that make sense to everybody else? Because while we want a minimal set of metrics which are absolutely guaranteed to be there, so that a dumb implementation will automatically pick up at least something that's useful for DevOps folks, having the ability to actually configure metrics which are going to be more helpful, based on what is actually sending the data, seems to me to be a reasonable goal to aim for.
A
I guess the other way we could describe it is implementation-dependent metrics.
A
GraalVM, of course, will have things like... it will still have a garbage collector, but a statically compiled piece of Java is going to show up and look a lot more like, for example, a Go binary, which is statically compiled but has GC. So as for the exact profiles of what we get, we shouldn't assume that the set of things we have today is actually all there's ever going to be. That's why I think this ability for an implementation to say "this is HotSpot in a dynamic VM configuration", or "this is a statically compiled binary", or "this is Substrate VM, or IBM J9, or whatever" is interesting.
D
I'm not sure if I'm missing something from the model, but in the metrics API you get the meter provider, then that gives you a meter, and the meter has a name and, I think, even a version or something. So wouldn't it be enough to just make sure that we have a proper meter name for all the JVM-runtime-specific metrics, and then, you know, keep consistency within that particular meter?
A
You mean from the point of view of the implementer, the library implementer? Okay, I hadn't thought that that was a safe assumption, but I'm perfectly prepared to be wrong about this. What do other people think: can we make the assumption that the naming of the meter provider, that that naming, will be reliable?
C
The name is usually an instrumentation library name, so something like opentelemetry-runtime-metrics, at least for now. I think Bogdan has an open issue right now to maybe rescope that, but for now we wouldn't really expect that to be some category within a library. At least that's been our convention for all of our instrumentation so far.
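For reference, obtaining a meter whose name and version identify the instrumentation library looks roughly like this in the OpenTelemetry Java API; the library name and version shown here are placeholders, not settled conventions.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.Meter;

final class MeterNaming {
  static Meter runtimeMeter() {
    // The meter name identifies the instrumentation library (not the monitored runtime),
    // and the version is the library's own version, as described above.
    return GlobalOpenTelemetry.get()
        .meterBuilder("io.opentelemetry.runtime-metrics") // hypothetical library name
        .setInstrumentationVersion("1.0.0")               // hypothetical version
        .build();
  }
}
```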
A
Okay, so let me check I understood that; let me say that back to you and check that I got it. So that would mean that the naming convention would apply to the version of the specification, not of the specific implementation. Is that another way of saying it?
A
Right. So let's think about how that plays with something like a garbage collector. There are going to be different optional packs of garbage-collection metrics which show up depending on whether you're running with G1, or Shenandoah, or, you know, some old-school folks are running on CMS because they're still running Java 8, right? So the library name, if it just maps one-to-one to a Maven artifact, doesn't give us enough information to know exactly what's being published. Exactly, yeah.
A
What do you think, Ivan? Is that an indication that we will need some other way of being able to tell? Because, you know, I'm starting from the point of view where we actually have no knowledge: we just look at the set of metrics that are being sent, and, based on some property of what is being sent, we can determine... you know. And note that this determination doesn't necessarily happen on the sending side at all.
D
I was just checking against the metrics API, and, I don't know, to me it seems like everything under a specific meter, or a library name and version, is controlled by the authors of that particular library, and to me it just sounds like that's enough. But I might be missing some more context here.
A
Well, I mean, I'm concerned about both, but in this specific case the user model, and I keep coming back to this as my primary use case, is: I'm someone that doesn't know much about Java, I'm in the DevOps role, and I've implemented something which, to me, is probably a piece of infrastructure. Let's say it's Apache Spark, just for definiteness; it doesn't matter what it is, but let's say it's Spark, right? I follow the instructions to write a Dockerfile to deploy Spark for the first time.
A
I think, "oh, I'd better monitor this." I drop the OpenTelemetry agent into the mix, because that's what the instructions tell me to do, and I switch it all on. I've got some sort of metrics collection thing that my node processes are already using, and I point it at the OTLP endpoint for my metrics, for the node stuff, and then I install a Grafana dashboard into the front end.
E
I mean, the brute-force approach would be that the dashboard is smart enough to just query all the metrics, figure out which metrics are there, and then kind of reverse-engineer what it is, so that we don't have any profiles or so at all. It's just that the dashboard can figure out, depending on which combinations of metrics are available, what it probably is.
A
And that's, I guess, the question I'm asking, but framed in a slightly different way. Do we want the capability to say, categorically, based on these conditions: this is a JVM of this type, so therefore you should look to display these things? Do we want to encode that in the standard and publish it, so that dashboard authors are not randomly guessing, but can instead look at the standard and say, oh, it's one of these?
C
That's the question. So you're talking more on the back-end side, like, for example, Grafana, right? Then I think it's just too hard there. It's not that hard for an app to know what GC is being run, and we can put that into the resource. We might even already have some resource definition for garbage collection, I'm not sure, but that sort of information we'd probably put into the resource. So then the back end does know what the current environment is for that app.
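A rough sketch of what "putting it into the resource" could look like with the OpenTelemetry Java SDK follows; the gc-related attribute key is purely hypothetical, since, as noted, there may or may not already be a resource definition for garbage collection.

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.sdk.resources.Resource;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.stream.Collectors;

final class GcResource {
  // Hypothetical attribute key; not part of any agreed semantic convention.
  static final AttributeKey<String> JVM_GC =
      AttributeKey.stringKey("process.runtime.jvm.gc");

  static Resource gcResource() {
    // Ask the running JVM which collectors it actually has, rather than trusting flags.
    String gcNames = ManagementFactory.getGarbageCollectorMXBeans().stream()
        .map(GarbageCollectorMXBean::getName)
        .collect(Collectors.joining(","));
    return Resource.getDefault().merge(
        Resource.create(Attributes.of(JVM_GC, gcNames)));
  }
}
```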
A
Okay, so then that moves the problem back to the actual metric generator, yes, to the instrumentation library, to do that. So maybe that's the solution, maybe that's the answer: we do that, and then, you know, to Ivan's point, it moves it back into the "let's have a look at this".
A
And other things as well, you see, because... this is good, because it means that we can keep the smarts within the instrumentation library, but at the same time we can also do runtime determination. There are some nasty corner cases that I have in the back of my head. For example, if you put Java 11 in a single-core container then, regardless of what GC you specify, you end up with Serial. Is that not even queryable? Like, looking at the system properties will give you the wrong answer?
A
That's what I mean: you actually have to poke the GC itself and say, "tell me what you are." But then that's a problem that we can solve within the instrumentation library, so yeah, I like this approach. Let's record this.
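A small probe, as a sketch of "poking the GC itself": it compares the collector flags that were requested on the command line with the collector MXBeans the JVM is actually running (names like "G1 Young Generation"). The container behavior mentioned above is the speaker's claim; the probe just reports both sides so the discrepancy would show up.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import java.util.stream.Collectors;

final class GcProbe {
  public static void main(String[] args) {
    // What was requested on the command line; JVM ergonomics in a small container
    // may select a different collector when none is specified explicitly.
    List<String> requested = ManagementFactory.getRuntimeMXBean().getInputArguments().stream()
        .filter(a -> a.startsWith("-XX:+Use") && a.endsWith("GC"))
        .collect(Collectors.toList());

    // What the JVM is actually running: ask the collector MXBeans themselves.
    List<String> actual = ManagementFactory.getGarbageCollectorMXBeans().stream()
        .map(GarbageCollectorMXBean::getName)
        .collect(Collectors.toList());

    System.out.println("requested: " + requested + ", actual: " + actual);
  }
}
```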
B
So I think that answers the question of how we want to encode that information. But then, Ben, I think you were getting at: do we want to define any kind of standard saying, okay, if the runtime is this, then you should display these graphs, you should have these alerts, this is what you should be focused on if this is your runtime?
A
Yes, that's a good point. So we can generate the data so that it's easy for the dashboard to detect, but I think that actually having some normative profiles, or normative fingerprints, to say "if the resource attributes look like this, this is what you should want to display as a dashboard implementer", seems to me to be the proper role of a standard, because we're defining a simple tag that you can look at.
A
Right, you know, basically this is talking about the paved road: the parts that most people are most likely to encounter. So maybe we don't do anything other than that; IBM's most common collector, I think, is Balanced, so maybe if some IBM folks want to contribute: it's an open-source standard, so if you want to define a profile and come back and say what the dashboard should display for an IBM collector, send us a PR.
D
But my understanding, and please, Fabian, correct me if I'm wrong, is that when people go to Grafana they're just going to get a template, and those templates are based on the metric and then break down on some particular tags from that. So if some vendor wants to do something more specific or more nuanced, that's okay, they can do it, but I think what most people will do is just get a Grafana template, get some metrics there, and if you happen to have a runtime that is not publishing one of those metrics, then some of those charts will be empty, but that will be pretty much it. You will most likely get value out of it, even if you are using some other GC that we didn't see up front, because it's probably going to be just different tags, but the metric is going to be the same, right?
A
Well, this is one of the things that I'm also thinking about. Suppose we have a situation where you're selecting a garbage collector, okay? If we make the attributes completely different for each GC, how do you compare between two of them in particular? And again, this is one of my standard use cases, because it exposes some of the issues.
D
Yeah, but that goes really far from the initial use case of "I'm just a DevOps person, I got some metrics, and I need to see what this JVM thing is doing", okay? This second use case that you're describing is a lot more detailed: someone who knows what they're doing has to be coming to do this, and that person will probably be capable enough of figuring out "okay, I need to do a panel with this metric, with this particular tag, and compare it with this other one". Like, two different personas there.
A
Are they two different personas, though? I mean, I can well imagine someone who's a DevOps person that doesn't know much about Java, and someone says to him or her, "look at the throughput metric for GC; what does your throughput look like?", right? That, to me, is quite a high-level number.
A
Now, the amount of time spent processing weak references, sure, that's a detail that you have to know your stuff in order to get into, but throughput seems to me to be something that a lot of people would be interested in.
E
Yeah, the question is if we can even make it comparable. I mean, if the algorithms differ so much that it's not, you know... Maybe one is concurrent with the application, and then the time the garbage collector run takes doesn't have much of an impact on the application performance, and the other is stop-the-world, and so forth. So maybe it's not possible to actually come up with numbers that make all these different approaches comparable.
A
It's a good thought. I mean, I think I have an algorithm which does that, because by default concurrent collectors use half the cores when collecting. So for stop-the-world it's easy, because your throughput is the percentage of time spent in garbage collection versus the time window, but for concurrent it's slightly more complicated. It's still calculable, though, because it's however many cores you're using over the concurrent time of the collector, which is...
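As a very rough illustration of the stop-the-world half of that calculation, the sketch below samples GarbageCollectorMXBean collection-time deltas over a window and reports the fraction of wall-clock time spent in GC. The concurrent-collector correction (weighting by the cores the collector uses) is left out, and the one-second window is just an example value.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

final class GcTimeFraction {
  public static void main(String[] args) throws InterruptedException {
    List<GarbageCollectorMXBean> gcs = ManagementFactory.getGarbageCollectorMXBeans();

    long windowMillis = 1_000; // example window
    long before = totalGcMillis(gcs);
    Thread.sleep(windowMillis);
    long after = totalGcMillis(gcs);

    // Fraction of the window spent in GC (stop-the-world approximation).
    double gcFraction = (after - before) / (double) windowMillis;
    System.out.printf("GC time fraction over window: %.3f%n", gcFraction);
  }

  private static long totalGcMillis(List<GarbageCollectorMXBean> gcs) {
    // getCollectionTime() is cumulative milliseconds; -1 means "undefined".
    return gcs.stream()
        .mapToLong(GarbageCollectorMXBean::getCollectionTime)
        .filter(t -> t >= 0)
        .sum();
  }
}
```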
C
I was sticking with the resource idea and I came up with one problem: as long as the SDK is in the app that's creating the metrics, it's no problem, the SDK can compute the resource. But one of our common ways to get metrics is this JMX metrics receiver, the scraper, where, for example, if you have your Cassandra running and you point the scraper at it, all you get is what's in JMX, and you compute some metrics from that.
B
I suppose it doesn't help us with older versions, but is that something that we want to maybe see if OpenJDK is willing to add, something so that we can query at runtime what the configured collector actually is?
A
That's a good question. So then, actually, that's a much broader discussion that I'd like to get my teeth into, but it's certainly one that I think we should have: based on this, what do we think the gaps in the JMX API are? How do people want to approach that?
A
I think we've covered everything which is on the agenda. Anybody got anything else?
E
Yeah, just this CPU topic; I'm not quite sure. I mean, I think what we should take care of is that we only provide metrics that are directly related to the JVM process itself, because if we start additionally including generic CPU utilization things, then we probably have other semantic conventions, for Linux host monitoring, that also provide CPU statistics, and maybe they slightly differ. I think this will be complex.
A
That's a good point. We actually discussed this one at the west coast time meeting last week; we went into quite a lot of detail. If you have the time, and if you can watch things at double speed, you might want to watch last week's meeting.
E
Okay, I will, yeah.
A
Because, here we go, this is the question: basically, should we capture it at all? And if we should, do we put it in the process.runtime space, do we actually overlap with the existing system.cpu.utilization with a separate resource tag, or should it go into process.cpu.time?
E
Okay, cool, I will watch that call; I wasn't aware of it.
A
Yeah, for those of us in the European time zone it's a little late, but it's doable. So if you can come to both, it's always helpful to have additional bridging between the two meetings. Yeah.
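For context on what a process-scoped CPU metric might look like, here is a minimal sketch that records the JVM process's own CPU load as an observable gauge via the com.sun.management extension of OperatingSystemMXBean; the metric name is a placeholder, since the namespace question above is exactly what is unresolved.

```java
import com.sun.management.OperatingSystemMXBean;
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.Meter;
import java.lang.management.ManagementFactory;

final class ProcessCpuMetrics {
  static void register() {
    Meter meter = GlobalOpenTelemetry.getMeter("io.opentelemetry.runtime-metrics");
    // HotSpot-style JVMs expose the extended interface; other runtimes may not.
    OperatingSystemMXBean os =
        (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

    // Recent CPU usage of this JVM process only (0.0 to 1.0); negative means "not available".
    meter.gaugeBuilder("process.runtime.jvm.cpu.utilization") // placeholder name
        .setUnit("1")
        .buildWithCallback(m -> {
          double load = os.getProcessCpuLoad();
          if (load >= 0) {
            m.record(load);
          }
        });
  }
}
```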
B
The extra fun part about it is that, depending on specifically which patch version of Java you are running on, the meaning of it has changed, and there have also been multiple bugs related to it, especially if you're running in a Docker container. So sometimes it means the Docker-constrained CPU usage, sometimes it means the CPU usage of the machine that the container is running on, and there have been bugs about that. So the meaning has changed, and even after it changed, it wasn't always reporting what it was meant to be reporting.
A
So just a question for the group: should we actually say in our standard that not only do we require Java 8, we only guarantee reliability of the results if you're on 8 above a certain patch level? Because my understanding from the OpenJDK folks is that no further changes to the semantics of 8 running in a container are going to be backported.
A
They did some changes around about, I think, 191, that's the number that sticks in my head, but no further changes are coming. So we could say: if you're above 8 update 200, this is stable, but we don't guarantee the behavior before that. Is that worth explicitly calling out in the spec?
A
Alrighty, it's 22, so: any last things for today, or should we close?