From YouTube: 2021-11-29 meeting
A: No, absolutely... this is what I get for trying to show willing: I got myself a Fedora laptop when I joined Red Hat. How's everybody doing? I think we're... oh wow, there's like 10 people here, that's cool. So I don't think we actually managed to get an appropriate document for this setup yet, so I am just... and of course, with the Red Hat setup for Google Docs.
A: So if we can just basically use this document for today, and we'll get it into the proper OpenTelemetry docs place once that's been set up. Okay, so I've just dropped the link in the chat.
A: So this, essentially, is just all the discussion that happened on the existing issue so far, and a few other bits. But should we also use the main Java document for recording meeting minutes?
C: Yeah, I just added that to the chat. If folks can add your name... and if you don't mind, I could take notes on the days that are not 2am my time.
A: Okay, so let's do the agenda as well. This is basically the first time we've got together, so I didn't really... you know, I just put together something. My intent was to put something together which everyone will hate, and then we can use that as a starting point for trying to find something that people might hate less. So all of this was just my attempt to put down what was in my head, and it's by no means complete.
A: So do people want to just take maybe two minutes to read it, and then we can just kind of kick it around?
A: I mean, basically, the first thing is that there's two things. There's a question in my head about how much we need to fit into the same conventions as the other OpenTelemetry metrics implementations for the other languages. That's the first thing, and then there's a second thing about...
A: You know, and then also to try and... that's kind of there to try and avoid this idea that we'll get stuck in a lowest-common-denominator world, where the only metrics that we can support are the ones which are supported by all the possible implementations, and I feel like that's going to be much, much too limited.
A: I also think it would be good to do some thinking, some gap analysis, on what we've got for my very rough-and-ready JFR prototype versus the Micrometer metrics, which I think have got some slightly different information in them. And then, of course, we need to know things like: what's missing? What will work in native mode?
A
What
other
implementations,
if
any
we
should
think
about,
I
mean
I
don't
know
what
else.
What
else
people
think
is
is
important
beyond
you
know
micrometer
and
jmx
and
jfr,
and
then
you
know
basically,
what
are
we
gonna
do
with
all
this
stuff
like
what?
What
is
when,
at
what
point
do
we
know
that
we're
done
and
we
start
actually
shipping
it
as
a
product,
so
those
were
kind
of
my
biggest
biggest
sort
of
high
level
things
that
I
thought
we
should
talk
about
today,
yeah,
what
other
people
think.
H: Well, I think... I don't know, I think you already know what I think, so I don't care; I need other people to talk. So, from a usability perspective, if I'm thinking about people consuming these metrics (like why this conversation is worth having at all): maybe we have people with existing dashboards out and about in the universe; we have people who are used to looking for certain kinds of JVM metrics, just within the Micrometer community. Is Jonathan here? You are here, yay, you can keep me honest.
H
I'm
looking
completely
look
at
all
the
all
the
cameras
but
like
we
had
to
do
a
little
bit
of
tweaking
to
the
jvm
metrics,
to
account
for
differences
between
open,
jdk
and
openj9,
for
example,
because
some
of
the
garbage
collection,
statistics
and
stuff
are
a
little
different,
because
the
garbage
collectors
are
a
little
different,
et
cetera,
et
cetera.
So.
H
Yes,
well
also
for
girl,
which
just
doesn't
when
you're
in
native
mode
just
drops
a
whole
bunch
of
stuff
completely.
So
how,
from
a
jvm
perspective,
we
have
like
what
open
telemetry
says
are
the
recommended
metrics
for
runtimes.
We
have
general,
I
would
say,
like
cross
industry.
This
is
the
kind
of
stuff
that
we
can
get
out
of
a
container
whatever
the
container
is.
Who
cares?
H
Maybe
we
could
get
general
usage
stuff
out
of
there,
but
I
think
from
a
java
perspective,
because
we
have
the
different
kinds
of
jdks,
because
we
have
different
expectations
with
our
20
years
of
history
for
what
kinds
of
thread
metrics
thread,
utilization
or
memory
utilization
or
garbage
collection.
Behavior
like
what
kinds
of
things
should
we
be
measuring
for
these
kinds
of
processes?
H: ...out in the wild. But that's where I think we need to talk, because I would like to get to the place where, with OpenTelemetry, or with Micrometer, or with pick-your-other-favorite metrics implementation thing...
H
I
think
the
best
thing
that
open
telemetry
is
doing
right
now
is
trying
to
establish
semantic
conventions.
So
the
question
is
what
the
hell
are:
those
conventions
for
jvm
or
java
native
mode,
which
is
a
new
question
entirely.
C: Yeah, I think with this group... I don't know, I'm still trying to figure out what we can sort of concisely... the goal is, you know, something that I can write down and communicate. I mean, like: just defining OpenTelemetry semantic conventions for JVM metrics.
A: I think so. I mean, basically, my take on this: the fact that the semantic conventions have not been fully defined has been at the back of my brain, bugging me, for about 18 months, and I haven't actually had the cycles to do anything about it, and I finally do. So I think that actually getting this nailed down, to the point where we can actually support multiple different implementations or providers of data into OpenTelemetry, is actually, well...
A
It
might
well
even
be
needed
just
to
declare
metrics
with
1.0
for
the
jvm
space,
so
I
I
think
this
is
absolutely
critical
to
get
this
nailed.
You
know
we
don't
want
a
situation
where
we
have
a
standard
which,
which
talks
about
a
metric
of
you
know.
I
don't
know,
allocation
right
and
if
you
source
the
data
from
micrometer,
it's
done
one
way
and
you
get
one
result
and
if
you
do
it
from
jmx
or
jfr,
you
get
a
completely
different
answer
or
worse,
yet
a
slightly
different
answer.
A
So
so
that's
that's
what
I
mean
when
I
think
about
semantics.
I
want
I
want
there
to
be
a
a
description
that
yeah,
okay,
you,
you
can
come
and
plug
your
data
source
into
inflammatory
and
here's
how
you
need
to
do
it
in
order
to
make
sure
that
you
you're
doing
it
in
the
same
way
that
everybody
else
does
it
and
that
you're
actually
talking
about
the
same
quantity.
Your
users
see
the
same
quantity
between
the
presentations.
G: And also run through native as well, and see what breaks. Because what we learned at Micrometer is that the different flavors of JVMs have widely different opinions about what they provide.
A
Yeah
and
and
to
riffle
that
point,
the
other
thing
is:
is
that
and
that's
go?
This
was
goes
back
to
what
aaron
was
saying.
This
needs
to
be
generally
useful
out
of
the
box
right.
We
don't
want
to
be
in
the
situation
where
someone
comes
and
sets
up
an
inflammatory
implementation
and
dashboards,
and
they
like
the
operations
people
have
to
care
about
which
garbage
collector
it
is,
and
precisely
how
things
are
configured
the
defaults
must
just
work
and
display
generally
useful,
influenced
information.
I: My comment so far is that this is maybe impossible. I mean, let's take allocation; somebody brought up allocations. Okay, so we have a metric for allocations. We have a test that tests that, okay, we should see this many allocations. But what if it gets scalarized on some JVM? Some JVM will go scalarize it, and we have no heap allocations.
I
Well,
that's
fine,
because
it
didn't
actually
do
any
heap
allocations
and-
and
I
mean
there
are
so
many
things
here
that
are
very
implementation.
Specific
and
some
implementations
may
go
for
something
that
will
be
work
in
a
production
environment,
so
sacrificing
position
for
for
for
overhead,
for
example,
giving
you
the
general
idea
of
what
is
going
on
so
that
you
can
do
something
about
it,
but
maybe
not
exact.
H
So
it
is
very
hard.
I
would
completely
completely
agree
with
you,
but
this
is
why
I
think
if
we
can't
get
a
convention,
we
could
we
might
at
least
come
up
with
a
guideline,
because,
like
already,
we
have
the
difference
between
heap
allocation
or
garbage
collect
like
there's
metrics,
when
with
then
this
is,
I
have
to
talk
from
the
micrometer
perspective.
Sorry,
you
guys
all
have
to
do
the
open,
telemetry
piece
I
go
through
joanie.
H
Aside
from
some
fussing
around
that
I've
done,
but
it's
not
anything
worth
sneezing
at
so
what
even
just
moving
from
jvm
to
native
mode,
regardless
of
which
jdk
you
have
there's
metrics
that
just
vanish,
they
don't
even
apply
anymore
they're,
not
even
there.
So
we
have
that
problem
in
the
java
space
for
sure
right.
So
the
question
is:
are
there
conventions
that
we
can
establish
that
say
when
you
have
a
garbage
collection
collector
or
you
have
thread,
pool
allocations,
we're
gonna,
try
to
name
the
metrics.
H: What are all the things, Joanie? Shenandoah and something else and the other thing; I don't remember, it's Monday and I'm technically on holiday today, so my brain is allowed to not work. But do you see what I mean? So, absolutely, there's stuff that doesn't work; that's great. But if we can have some kind of a guideline, even for people who... because what I see is we have people coming into the Micrometer Slack, or people coming in (I get some issues and stuff with Quarkus sometimes), where it's like...
I: They would just continue. I mean, look at Loom: when we have virtual threads, then we might start...
H: This is an interesting question, and if we have this conversation about conventions, or even about guidelines, then I think we can have a conversation about Loom arriving and about what that means. When people are using metrics about JVM thread utilization, what are they talking about? What information are they trying to get? And then how should we, as metrics providers, as library... I mean, we're all basically library instrumenters of some kind, right? (That's another question we have to resolve someday.) But what should we do?
I: Yep, no, I agree; there are a lot of things to be figured out.
A: I mean, I think these are some of the things that we want to kind of tackle head-on, right? So, you know, I hear both of you, and I think there's a lot of sense in that. But I kind of feel like we have to do something, because what's it going to turn into if we don't do anything? What do we end up with?
A
We
end
up
with
jmx
or
the
lowest
common
denominator
or
worse,
yet
you
know
the
ebtf
stuff
right.
That's
the
thing
which
really
scares
me
is
that
if
we
don't
come
up
with
a
decent
solution
for
this,
what
ends
up
being
standardized
is
ubps
metrics,
and
that
is
basically
hf.
Prop
data
hs
birth
data
and
that's
that's
a
major
regression
in
the
java
space,
and
that
undermines
everybody.
I: Yeah, and you also said that we shouldn't go for the lowest common denominator, and I think that is absolutely right, because we have garbage collectors that are now totally different. If you look at, say, heap size after GC, and you compare that across garbage collectors, you will have very different results, and you will be able to use that in very different ways.
I
So
so
I
I
don't
subscribe
to
the
idea
that
we
can
find
a
semantic
meeting
for
every
single
garbage
collector
that
that
will
make
sense
to
people,
because
they
will
want
to
use
it
in
different
ways
and
it's
simply
not
possible,
for
some
of
them
to
to,
for
example,
estimate
the
live
set
for
4
4,
for
you
know
from
just
such
a
metric,
because
we
won't
have
such
a
metric
in
any
meaningful
way.
I
So
so
so,
and
if
forcing
you
know
that
metric
to
be
available
will
be
too
performance
intense,
I
think
so.
So
so
yeah
your
your
idea
there
about
not
not
going
for
the
lowest
common
denominator.
I
think
it's
right.
A
Okay,
so
let's,
let's
just
let's,
let's
just
take
a
slightly
different
path
here,
because
one
of
the
other
things
that
that
is
in
the
dock
is
just
thinking
about
the
potential
gap
between
micrometer
and
and
this
jfr
prototype.
And
basically,
I
think,
there's
a
there's,
a
certain
amount
of
overlap
here.
But
I
also
think
that
it's
probably
worth
you
know
going
through
this
in
detail.
Hi
erin's
dog,
I'm
trying
to
trying
to
figure
out
what
gaps
there
are
and
then
asking
ourselves.
A
Is
there
a
credible
third
source
of
data
that
we
can?
We
can
do
relatively
easily,
which
kind
of
confirms
that
gap
analysis?
Because
I
would
like
to
get
to
a
core
set
of
metrics.
Then
we
can
have
a
discussion
about
how
we
name
them
because
currently,
in
the
jfr
prototype,
it's
all
runtime
dot,
whereas
all
my
commenter
stuff
doesn't
have
that
additional
top
level.
So
one
thing
which
people
might
want
to
think
about
is:
does
that
top
level?
Make
sense?
Is
that
something
that
the
other
runtimes
do?
Do
that
you
know?
A: Okay, so that sort of makes sense. So then, I think, basically the question becomes: is that too onerous on the part of the Micrometer guys? What does that imply? Are the Micrometer folks going to have a different naming convention, or can you have something that automagically shuffles the names in under the runtime top level, if you guys do talk to...
G
So
in
in
that
space
like
there
are
two
like
interesting
things,
one
is
micrometer.
Two
point
x
will
be
released
end
of
next
year,
so
that
gives
us
an
opportunity
to
break
like
naming
things
and
also
you
can
always
have
a
meter
filter
where
you
can
just
rename
things
as
you
wish.
Okay,
you
can
like
you,
can
append
the
prefix.
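For illustration, a minimal sketch of the kind of meter filter G describes, using Micrometer's MeterFilter API; the "runtime." prefix and the "jvm.gc.pause" meter name are placeholders from the discussion, not settled conventions:

```java
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class RuntimePrefixFilter {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Rename every JVM meter so it lands under a "runtime." top level,
        // e.g. "jvm.gc.pause" -> "runtime.jvm.gc.pause".
        registry.config().meterFilter(new MeterFilter() {
            @Override
            public Meter.Id map(Meter.Id id) {
                return id.getName().startsWith("jvm.")
                        ? id.withName("runtime." + id.getName())
                        : id;
            }
        });

        registry.counter("jvm.gc.pause").increment();
        registry.getMeters().forEach(m -> System.out.println(m.getId().getName()));
    }
}
```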
A
Okay,
so,
on
the
one
hand,
having
the
one-time
top
prefixes,
it
just
just
fits
into
the
standards
with
what's
already
present
in
hotel
erin.
You,
you
had
some
concerns
about
this
anything
specifically
or
are
you
just?
Are
you
happy
that
this
isn't
an
onerous
burden
if
you
can
just
automatically
remap
it
from
we.
H
Can
we
can
automatically
remap
it?
I
mean
part
of
the
thing
that
I
don't
like
about
relying
on
renames
is
it's
all
extra
processing
right
and
you
want
your
metrics
to
just
not
like
as
little
extra
processing
as
you
can
manage,
because
it's
already
adding
it's
already
adding
to
your
to
your
floats.
You
try
to
minimize
it,
but
it's
already
adding
so
for
me
when
you
add
stuff
to
do
automatic
renames
like
yeah
that
works,
but
it's
still
extra
stuff.
H
That
adds
that
adds,
and
what
I
find
is
that
when
people
are
trying
to
people
who
are
really
used
to
doing
observability,
this
makes
a
lot
of
sense
and
they
know
what
they're
looking
for
and
they're
fine,
but
more
and
more
people
are
having
to
figure
out
what
this
all
means
and
what
it's
all
for
and
when
the
metrics
change
it's
like,
it
just
makes
it
harder
for
everybody
to
get
started.
H
Because
then,
everything
has
a
different
different
measurements,
different
names,
different
conventions,
and
then
they
don't
know
what
they're
measuring
and
half
the
time
their
first
instinct
for
what
they
should
measure
is
wrong
anyway,
they're
already
on
the
front
foot,
and
then
they
have
to
sift
all
this
stuff
out
and
it's.
I
just
think
it
makes
it
difficult.
I: Yeah, I mean, if there is a semantic definition for, say, live set, or whatever we call things, and it's simply not present for many garbage collectors, that might help. Because if you do have that metric, it's always going to be the same: it is going to be an estimate of the size of the live set. And if you have...
H: Right, how do you do that right now, with 1.7, 1.8, or wherever the hell we are right now? I would say we would probably add, like we've done in a couple of other places... there's a Prometheus duration naming configuration (naming-convention thingy) now that you could opt into, because some people like "duration" in the name instead of "timer" in the name. We would probably do the same thing if we come up with semantic... oh, you're using OpenTelemetry...
H: ...you want to be able to say: use this naming convention, and this stuff comes out the right way. That would be the way we would do it, and then with 2.x, if we say that's the way we think things are going forward... (Joanie, just hit me whenever I overstep, because it's not my baby.) But yeah, then that would be... we think that's the way to go. But that gives people a chance to opt in early, right?
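As a rough sketch of that opt-in mechanism, assuming Micrometer's NamingConvention hook: the OTEL_STYLE convention below is hypothetical, analogous to the existing opt-in Prometheus duration naming H mentions.

```java
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class OtelNamingOptIn {
    // Hypothetical convention that shapes meter names the OTel-style way.
    static final NamingConvention OTEL_STYLE =
            (String name, Meter.Type type, String baseUnit) ->
                    name.startsWith("runtime.") ? name : "runtime." + name;

    public static void main(String[] args) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        // Users opt in per registry, just like the Prometheus duration naming.
        registry.config().namingConvention(OTEL_STYLE);
        registry.timer("jvm.gc.pause");
    }
}
```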
A: ...identify what the actual semantics of each metric are. So, you know, Marcus was talking about the live set; I want to think about whether there is something which can be done about, say, total GC cycle time and the gaps between cycles.
D: You know, what really is the end game here, in terms of... you know, what type of experience do you want the consumers of this information to have? Because I think, without that understanding, I'm just having trouble understanding what I would want to collect and how I'd want to collect it.
I: Agreed. I mean, it's better to start with a set of metrics that we know people can use to solve a certain problem, or for something, and not just go for: what are the metrics we could figure out how to get?
A
Yeah,
but
I
mean,
I
think
I
think
we
you
know,
we
we
have
a
set
of
metrics
that
people
are
already
using.
I
mean
we
can
you
know,
I
mean
between
yourself,
marcus
and
jack
and
kirk,
I'm
sure
we
could.
We
could
put
together
a
list
of
metrics
that
that
we
think
people
would
would
want,
because
you
know
for
those
of
us
who
are
representing
observability
vendors
and
the
metrics.
Your
customers
already
is
yeah.
D: The other thing I would like to know is how far we are allowed to stray here. Because, I mean, I know there's a lot of focus on JVM things here, but from my perspective, I'm always interested in what the impact of a particular stack is on the hardware, and understanding that so you can map that state back to behavior in the target platform that you're looking at. And, you know, my issue with a lot of these metrics that are typically exposed is they just...
H: That's a good question. I think, from the OpenTelemetry and Micrometer perspective, we're more looking at SRE-style analysis of metrics, at least from my perspective (please, anyone else, speak up), rather than low-level, bare-metal hardware. I don't know... I wouldn't expect to use these metrics, with this kind of naming convention, for tuning the behavior of the JVM, right?
D
Right
so
so
I'm
not
sure
I
understand
what
sre
conventions
like.
H
Reliability,
engineering
right
so
we're
talking
about
prometheus
dashboards,
new,
relic
dashboards,
instanta
datadog,
like
big
and
so
part
of
the
reason.
The
naming
conventions
matter
is
because
you're
always
looking
at
an
aggregation
of
workloads
running
in
a
big
distributed
topology.
So
if
I
have
a
bunch
of
java
applications
running,
some
teams
are
running
open.
J9.
Some
teams
are
running
open,
jdk,
some
teams
are
running
in
native
mode
and
some
teams
aren't.
H
What
can
I
see
in
aggregate?
So
I
can
understand
how
my
applications
are
behaving.
There's
a
certain
set
of
semantic
conventions
that
are
coming
from
open
telemetry
that
would
handle
a
base,
node
layer
which
is
like
your
operating
system
level,
basic
utilization
stuff,
but
people
still
want
to
understand
how
their
java
workloads
are
running
in
the
aggregate
where
I
could
say.
H
Okay,
I
want
because,
with
the
with
prometheus
filtering
right,
I
could
say
show
me
all
of
my
shopping
cart,
apples,
the
stupidest
freaking
example
ever,
but
whatever
my
monster
combat
application,
which
is
way
more
fun.
I
want
to
find
all
my
game
applications
running
across
three
regions
and
I
want
to
know
how
that
application
is
behaving
in
total.
A
Another
option
might
be:
let's
suppose,
that
you
do
a
canary
deploy
you
make
a
code
change
and
you've
got
a
cluster,
and
you
deploy
a
single
machine
just
to
see
whether
the
code
change
has
any
high
level
visibility,
visible
change
in
behavior
from
deploying
it.
A
Yeah
and
an
aggregation
here,
kirk
is
happening
not
only
at
the
the
machine
level.
It's
also
happening
temporally,
so
this
is
this.
Is
this
is
low
resolution
data?
There
is
not
a
lot
of
high
definition.
D
So
in
so,
in
that
case
it
you
know
the
way
I
I'm
understanding
this
is
that
we're
talking
about
two
different
data
sets
that
we're
trying
to
correlate
together.
So
one
is
giving
me
an
indication
of
how
things
are
performing.
The
other
might
be
giving
me
an
indication
as
to
why
things
are
performing
the
way
they
are.
D: Yeah, so we have two different data sets here that we're looking at. So if I'm going to break these metrics down, then it seems to me that we have two different paths that you would like, right? You'd want something, I guess... and something that maybe another category breaks down into why. Does that make sense?
D: I understand what you're trying to say, but yeah.
D
I
don't
know
I
just
again
just
trying
to
define
what
the
boundary
is
in
terms
of
what
you
plan
on
delivering
because,
like
I
said,
I
have
my
favorite
set
of
metrics
and.
I: And some of them... I mean, it's also about the invariants, right? You know, if they are guaranteed to increase, if the live set is increasing between GCs, then it doesn't matter if it's imprecise: you will still be able to do trend analysis. So you need to look at what you're trying to do with it.
D: Because, again, you know, if you look at the different types of collectors, of course you have different questions. And we not only want to know about live sets; we also want to know about transients during concurrent cycle durations, or something silly like that, right? But the whole point is, I mean, this becomes contextually sensitive at that point.
A
It's
it's
just.
I
think
that
you
know
we
can,
we
can
add
more
use
cases
and
we
can
add
more
metrics,
but
defining
just
a
basic
set
of
things
which
can
be
useful
within
the
domain
of
sre.
Analysis
is
probably
the
core
of
what
we
want
to
do
here.
Does
anyone
does
anyone?
Do
anyone
feel
that's
not
true?.
A
Mean
that's
one
thing
we'll
put
together
a
spreadsheet
which
we
can
all
contribute
to
with
metrics
in,
and
then
we
can.
We
can
do
it
in
a
spreadsheet
rather
than
a
word
doc.
We
can
do
that
between
here
here
and
the
next.
The
next
call,
I
guess,
but
yeah,
basically,
the
the
ones
that
were
easy
kind
of
low
hanging
fruit
from
jfr
were
with
these
types
of
things,
the
the
context
switch
rate.
Your
number
cpus
you've
got
any
long
locks.
A
You
know
utilization
dc
duration,
which
I
think
I've
defined
as
stop
the
world
that
can
totally
change
allocation
utilization,
some
network
numbers
and
then
in
open
symmetry.
A
You
tag
those
as
being
either
read
or
write,
so
you
have
the
same:
you'd
have
a
histogram
with
two
sets
of
tags
on
it
and
then
the
other
ones
which
which
I
just
thought
of
were
worth
cycle
time,
which
is
the
explicit
length
of
the
duty
cycle,
including
the
concurrent
time
that
that
one,
I
wondered
about
a
throughput
metric
as
well,
something
which
which
provided
you
with
some
sort
of
comparison
between
a
stop
the
world
collection
and
a
a
concurrent
one.
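For concreteness, a minimal sketch of one histogram carrying read/write tags, assuming the OpenTelemetry Java metrics API; the runtime.jvm.network.io name and the direction attribute are hypothetical placeholders, not an agreed convention:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;

public class NetworkIoHistogram {
    private static final AttributeKey<String> DIRECTION = AttributeKey.stringKey("direction");

    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("jvm-metrics-prototype");

        // One instrument, two attribute sets: the "read" and "write" series
        // share the same histogram, as described above.
        DoubleHistogram networkBytes = meter.histogramBuilder("runtime.jvm.network.io")
                .setUnit("By")
                .setDescription("Bytes transferred per network operation")
                .build();

        networkBytes.record(1024, Attributes.of(DIRECTION, "read"));
        networkBytes.record(2048, Attributes.of(DIRECTION, "write"));
    }
}
```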
A: That's the statistic which can roll up, so that if, in fact, you only have the bandwidth for the collection where you see it once a minute, that still kind of makes sense. Yeah, and then someone's been asking about JIT compilation as well. And then on the Micrometer side, Aaron supplied these, and I think that there is definitely some overlap.
A
But
that's
what
that's?
What
I'm
saying?
I
think
that
there
is
a
need
to
do
a
proper
gap.
Analysis
for
these
two.
K: Yeah, Ben, not to get too meta (and I think the straw man is great, it's an awesome starting point, and I think having a spreadsheet would also be helpful for getting work done), but where do we expect this to live long term? And I'm seeing Martin's hand up; sorry that I just jumped in with my voice. Sorry. Because the other conventions all live in markdown docs in repos: where do we expect it to live long term?
A
We
expect
it
to
live
exactly
there
once
we've
got
been
granted
the
proper
access,
because
there's
some
apparently
there's
just
organizational
stuff
like
it's.
Why
we're
using
like
we're
camping
on
a
borrowed
zoom
link
for
today?
Okay,
so
so
no
there's
all
this
stuff,
that's
going
into
the
open
cemetery
refers
where
it
belongs.
We
just
get.
A
In
place
in
spec,
though
yeah
I
would,
I
think
so
classical
do.
C: Just for people who are... since we have some folks new to OpenTelemetry: there's the OpenTelemetry API and the SDK (the actual API surface, and the SDK for emitting metrics, and exporters). The API has been declared stable already from a spec perspective (not the Java one yet), and the SDK is nearing...
C
Sdk
is
nearing
stability
from
the
spec
perspective,
probably
a
few
month
couple
of
months
off
still
from
the
java
perspective,
and
this
discussion
is
so
then,
on
top
of
that
in
open
telemetry,
we
have
semantic
conventions,
for
you
know
how
to
collect.
Http
data
trace
data
log
data
metric
data,
and
so
I
think
that
the
scope
of
this
discussion
is
all
within
metric
semantic
conventions
defining
those,
and
so
those
would
live
in
the
spec
repo.
So
we
would
need
to
you
know,
submit
prs
and
update
those.
B
You
know
all
good,
so
I
think
yeah,
obviously
very,
very
new
to
this
whole
whole
effort.
So
it
sounds
like
we
can
work
on
that
spreadsheet
together
and
try
and
come
up
with
the
core
the
core
set
of
metrics.
Is
it
my
understanding
that
there's
also
the
ability
to
go?
What
you
know
go
a
superset
over
that
that
that
anyone
can
actually
build
a
superset
of
stuff
they
need
specifically
for
their
consumers.
A
Sure
I
mean
a
good
example
would
be
something
like
you
know:
the
shenandoah
gc
right,
not
every
vendor,
ships
it,
but
those
vendors
that
do
ship
it
would
want
to
be
able
to
ship
metrics
which,
which
were
only
applicable
to
shenandoah.
Surely.
A
I
think
that's
true
so,
but
are
we
also
arguing
for
a
namespace,
which
is
something
like
runtime.jvm.ext.
B: Cool, okay. I'm just thinking flexibility. We do get a major version of Java coming out every six months now, right? And so, for some vendors, it'll be a case of speed is of the essence: try and put something out to capture some of those things, like the green threads coming out in Loom, as an example.
A
So,
for
example,
things
like
the
value-based
class
blocks,
which
are
going
to
go
into
valhalla
so
where
you,
where,
because
there's
actually
a
jfr
event
which
enables
you
to
count,
enables
you
to
see
when
someone
locks
on
something
which
has
a
constructor
which
is
going
to
go
away
like
integer,
for
example,
and
that's
that
that's
the
the
kind
of
thing
which
it
might
be
useful
telemetric
for
hey
you've
got
loads
and
loads
of
these
types
of
events,
but
which,
obviously,
in
a
future
release
of
java.
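A sketch of how such a JFR event might surface as a metric, assuming the jdk.SyncOnValueBasedClass event that recent JDKs emit and the hypothetical runtime.jvm.ext namespace floated a moment ago; the bridge code is illustrative, not the actual prototype:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;
import jdk.jfr.consumer.RecordingStream;

public class ValueBasedLockCounter {
    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("jfr-bridge");
        LongCounter lockEvents = meter.counterBuilder("runtime.jvm.ext.value_based_class_locks")
                .setDescription("Synchronization on value-based classes (future-removal hazard)")
                .build();

        // Stream the JFR event for synchronization on value-based classes
        // and count each occurrence.
        try (RecordingStream rs = new RecordingStream()) {
            rs.enable("jdk.SyncOnValueBasedClass");
            rs.onEvent("jdk.SyncOnValueBasedClass", event -> lockEvents.add(1));
            rs.start(); // blocks; use startAsync() to run in the background
        }
    }
}
```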
K: Yes, so I personally think it's okay for there to be Java-8-specific metrics in this spec, because I think it's not going away. So I think it's a good idea to keep them.
D
Which
gets
another
point:
it's
like
it's
going
to
be
difficult
to
in
the
future,
to
take
metrics
away
from
people.
A
That
yeah,
that's
what
I'm
saying
that
that
I
think
applications
defecation
will
be
difficult.
That's
why
maybe
the
the
extended
space
makes
sense
runtime.jvm.ext
with
the
with
the
the
the
upfront
written
semantics
that
stuff
in
there
can
be
removed
because
then
anything
which
we
think
we
might
only
need
temporarily,
we
could
put
in
there.
A: Yeah, that's a huge topic, and not just for metrics. But, you know, there is this question about how we should make it so that we can actually change our mind about what we're reporting on, or just how much we're aggregating. Because I think I thought of some of the same use cases that you have, Kirk, where, basically, you want to be able, when a system's in trouble, to turn up the resolution and turn up the fidelity.
A
Yeah
so
so
yeah,
I
think
those
questions
are
are
open,
but
probably
out
of
scope
for
this
specific
group,
I
think
at
some
point
you
know
people
will
need
to
think
about
about
how
to
dynamically
adjust
what
the
profiles
look
like,
but
I
think
that
probably
needs
to
be
solved
at
a
higher
level
than
just
the
jvm
metrics
subject.
I: Okay, so the fact is that we have to live with the fact that things will be removed. The JVM is now deprecating things for removal, and things will, you know, disappear over time. So that is something that I think we just need to live with and understand; in a sense, that's the way things are going. The same is true for JFR: we have a proposal in the works for being able to have deprecation noted in the JFR metadata.
I
So
if
we
want
to
remove
events,
we
can
know
a
few
releases
in
the
releases
in
advance,
so
so
that
is
also
going
to
be,
I
think
a
reality
over
time
and
and
then
one
last
point
was
we
shouldn't
get
too
disheartened
if
there
isn't,
if
there
is
something
that
we
really
really
want,
and
that
is
tractable,
we
can
always
add
it
to
the
jdk.
So
you
know
that
is
also
a
possible
solution.
I
True-
and
we
have
done
that
in
the
past
from
datadogs,
so
so
you
know
if
if
there
is
a
good
reason,
for
example,
for
needing
a
better
allocation,
profiler
exception
profiler,
something
profiler,
then
then
you
can
probably
help
cool.
A
Okay,
what
how
are
we
doing?
I
mean
basically,
we've
covered
all
the
major
issues
that
I
I
wanted
to
to
raise
and
surface
in
this
first
meeting.
Is
there
anything
else
anyone
particularly
wants
to
to
to
deal
with
now
or
shall
we
should
we
basically
set
up
those
documents
for
drawing
together
the
different
sets
of
metrics
that
people
think
start
collecting
some
commentary
about
them
and
then
see
what
we
can
come
up
with
in
the
meeting?
A
There
is
a
meeting
next
week,
but
it's
on
a
a
european
and
japan
friendly
time
zone
to
pick
up
some
folks
who
want
to
contribute
in
in
the
japan
time
zone,
so
anyone
who's
here
from
europe.
We
can
meet
next
week
on
wednesday
at
10
o'clock
central
european
time.
So
it's
nine
o'clock
in
the
uk,
my
saying
but
for
west
coast
folks,
it's
probably
unreasonable,
but
it
is
unreasonable.
It's
two
in
the
morning
for
you
guys.
A
So,
that's
probably
that's
no
good,
but
in
the
meantime
you
know
this
group
can
meet
again
in
two
weeks,
simon
and
that
will
give
us
time
to
pull
together
those
documents
which
we
can
set
up
as
soon
as
we
have
access
to
the
appropriate
errors
in
the
the
hotel,
google
docs.
I: Yeah, just one thing occurred to me, since I'm a metrics newbie in OpenTelemetry: are we talking about simple metrics here, like key-value pairs, or are we also talking about high-cardinality metrics, where we can have gazillions of keys with values?
A
So
so
in
open
symmetry
there
are
the
metrics.
There
are
only
really
three
instruments
and
for
reasons,
if
you
have
a
look
at
my
jfr
prototype
and
then
see
me
on
slack
that
there
are
reasons
why,
for
most
things,
you
actually
want
to
prefer
a
a
histogram.
A
Basically
the
way
that
the
open
cemetery
model
works
is
well.
You
see
if
you
look
at
the
code,
but
basically
a
gauge
is
effectively
a
volatile
which
you
just
update
when
it
when
it
you
you,
you
get
a
new
value,
but
you
don't
have
any
control
over
when
the
open
geometry
exporter
scrapes
that
value
it
just
takes
whatever's
there,
so
that
can
that
can
lead
well.
For
starters,
you
don't
get
every
tick,
so
it
has
lost
update.
You
only
ever
see
the
latest
update
so
for
most
of
the
metrics
that
are
currently
there.
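To make that concrete, a minimal sketch contrasting the two instruments, assuming the OpenTelemetry Java metrics API; the metric names are placeholders:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;
import java.util.concurrent.atomic.AtomicLong;

public class GaugeVsHistogram {
    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("jfr-bridge");

        // Gauge: the exporter samples lastSeen whenever it scrapes, so any
        // updates made between scrapes are simply lost (last-write-wins).
        AtomicLong lastSeen = new AtomicLong();
        meter.gaugeBuilder("runtime.jvm.gc.pause.latest")
                .buildWithCallback(measurement -> measurement.record(lastSeen.get()));

        // Histogram: every recorded value contributes to the aggregation,
        // so no update is dropped between scrapes.
        DoubleHistogram pauses = meter.histogramBuilder("runtime.jvm.gc.pause")
                .setUnit("ms")
                .build();

        lastSeen.set(12);  // visible only if a scrape happens before the next set
        pauses.record(12); // always counted
    }
}
```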
A: There's a whole subgroup about that; that's not for this group to do. We pick up the samples that others define. So, if you're interested in that, there's a whole other area, and other groups, that need to talk about sampling.
D
Yeah,
but
that
depends
so
that
also
there's
consumers
of
this,
but
there's
also
what
I
would
call
you
know.
I
would
certainly
like
to
be
able
to
adjust
how
things
are
sampled.
A
And
in
general
in
deployment
that's
possible,
it's
probably
it
might
be
worth
you
having
a
look
at
how
some
of
the
samples
were
implemented,
but
but
yeah
you
would
when,
if
you
were
deploying
this,
I
think
I'm
right
in
saying
and
then
someone
trusts.
C
Not
in
met-
I
I
don't
think
metrics
has
sampling
defined
at
this
point.
Sampling
is
just
about
tracing
span
data.
At
this
point.
G
But
you
can,
you
can
have
some
effects
so,
for
example
like
gauges
where
you
don't
like
update
the
the
exported
value
every
time,
then,
when
the
value
changes
the
background,
you
can
have
some
effect
on
like
how
do
you
configure
your
exporter
like?
Do
you
export
the
data
every
minute,
then
your
sampling
rate
will
be
a
minute
or
we'll
promote
you,
scrape
you
in
every
10
seconds.
In
that
case,
your
sampling
rate
will
be
10
seconds,
so
you
can
have
some
effect
on
it.
Right
now,.
D
Okay,
I
think
that's
not
the
type
of
sampling
that's
actually
talking
about,
because
that's
again,
it
feels
like
it's
a
that's
a
consumer
directed
issue,
and
I
was
the
reason
why
I'm
consuming
the
data.
That's
there
I'm
actually
talking
about
how
the
framework
collects
data
itself
and
how
it
actually
samples
for
data
internally.
For
you
know
whatever
it's
looking
for.
A
That's
that's
kind
of
an
implementation
detail.
I
mean
my
commentary
folks
would
do
it
one
way,
but
for
the
jfr
it
comes
straight
off
at
jfr,
recording
stream
and
there's
one
update
for
every
event
that
comes
on
the
on
a
particular
jfr
event,
type
yeah,
other
other
things
might
might
do
differently,
but
it
it.
I
think
it's
going
to
be
an
implementation
defendant.
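For illustration, a sketch of the one-update-per-event pattern A describes, assuming JFR's RecordingStream and the jdk.GCPhasePause event; the metric name is a placeholder, and this is not the actual prototype code:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;
import jdk.jfr.consumer.RecordingStream;

public class JfrGcPauseBridge {
    public static void main(String[] args) {
        Meter meter = GlobalOpenTelemetry.getMeter("jfr-bridge");
        DoubleHistogram pauses = meter.histogramBuilder("runtime.jvm.gc.pause")
                .setUnit("ms")
                .build();

        // One histogram update per JFR event: every jdk.GCPhasePause event
        // streamed by the JVM records its duration, with no extra sampling
        // layer in between.
        RecordingStream rs = new RecordingStream();
        rs.enable("jdk.GCPhasePause");
        rs.onEvent("jdk.GCPhasePause", event ->
                pauses.record(event.getDuration().toMillis()));
        rs.startAsync(); // runs in the background for the life of the process
    }
}
```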
C: There's a lot of different things that you can be talking about when you mean "stable," and a lot of different components there: the actual APIs and SDKs... well, the general ones are not blocked on semantic conventions, and that's kind of the way that OpenTelemetry works in general: there's the API, the SDK, and semantic conventions.
C
So
specifically,
the
semantic
conventions,
though,
do
influence
the
instrumentation
that
we
generate
that
we
generate
and
so
yeah
there
has
been
a
request
at
the
open,
telemetry
level
or
a
directive
from
open
telemetry,
and
I
should
follow
up
see
if
this
is
still
in
place
to
not
release
any
stable,
instrumentation
libraries
until
the
semantic
conventions
are
declared
stable,
because
there
is
some
desire
to
have
like
a
backward
compatibility
guarantee
for
the
actual
shape
of
the
telemetry
that's
emitted,
so
that
there's
no
breakages
of
dashboards
and
that
sort
of
thing
that's
a
very
difficult.
C
That's
a
very
high
bar
that
in
general
hasn't
been
the
case
for
at
least
most
observability
vendors
in
the
past.
So
I'm
not.
I
don't
entirely
agree
with
that
as
the
bar,
but
yeah.
So
it's
complicated.
I
don't
have
a
clear
clean
answer
for
you.
Sorry.
A: Cool, okay. Any more for any more? Or should we get two minutes back?