From YouTube: 2022-01-19 meeting
Description
No description was provided for this meeting.
B
Maybe a quick question before we start, because I just checked the Slack, and Ben, I saw your message from yesterday regarding the histogram representation of the CPU used. Where can I find the implementation of these JFR-based metrics that you mentioned?
A
So the OpenTelemetry format has native histograms.
B
A
Let me check this in the chat and in the document as well.
A
Okay, so shall we get going? What do you think, Tommy?
C
A
Okay, cool. So let's start at the top. Basically, the progress on the spreadsheet has been pretty good, I think. I should probably include a link to that, because it keeps dropping off.
A
Yeah, so there's the spreadsheet and, as you can see, I think this is pretty good. We now have the JFR prototype. We have Dropwizard; they sort of put some stuff in, but they've said that they will. Micrometer, Splunk, Datadog.
A
The k1 folks are also going to put some stuff in, but haven't yet, and then there are the Prometheus Java client libraries, which are pretty comprehensive. And then the final tab is the draft; this is where we're going to add that stuff. Hi Tommy, thanks for joining us, sorry for interrupting dinner. (Hello, no problem.) Yeah, and then the final tab is the draft. This is not set in stone.
A
As I'm going through and modifying the JFR prototype to fit into this mold, I'm just adjusting the names of the instruments used in JFR to match this convention.
A
So, in the absence of any more changes to that, I've started switching the instruments in the JFR prototype over to use it, starting with the GC ones. As I've dug into it I'm also finding... oh, I should also say: when we sat and looked at these, it looks very much as though all of the implementations are more or less the same. Some implementations do something slightly different for some of the values, but a lot of things are directly sourced from JMX.
A
So the two big things that we need to have agreement, or at least consensus, on are what the JMX implementation says and what comes out of JFR, because those are completely separate routes. They don't share any code or any parts in common, and they can produce different answers. So those are the things we have to get agreement on: how we label things. Anything which is sourced from JMX should be able to fit into the convention, effectively.
A
What that means is that the burden lies on the JFR side to conform as closely as possible to something that can be scraped from JMX. So that's, you know, extra work for the folks working on JFR, but I don't think that's a particularly big issue. I mean, I think most people are going to want to use the JMX piece if that's what they're already comfortable with, so it's sort of incumbent on the JFR implementation to match that.
A
So, as I've dug into this, the ones I've been picking at most of all are jvm.memory used and also the CPU used. Those are the two pieces I've started with, and what I've teased out is that, depending on which collector is in use, you actually have to listen to different JFR events.
A
So the first thing you need to do when running with JFR is to actually check which collector you've got in use, because not all of the information you might want is located in the same place for each garbage collector. This also raises the question of exactly what gets put into those metrics on the JMX side when running under different garbage collectors.
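[A minimal sketch of the check being described, assuming a HotSpot JVM; the class name is illustrative, and the bean names in the comment are the usual HotSpot ones:]

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class CollectorCheck {
    public static void main(String[] args) {
        // The registered collector beans reveal which GC is in use, e.g.
        // "G1 Young Generation"/"G1 Old Generation" for G1,
        // "PS Scavenge"/"PS MarkSweep" for Parallel, "Copy" for Serial.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " -> pools: "
                    + String.join(", ", gc.getMemoryPoolNames()));
        }
    }
}
```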
A
So, for example, for runtime.jvm.memory.used, the values that I think are common across all GCs, with the possible exception of Shenandoah, are: total used, committed, reserved, Eden used, Eden size, and survivor.
A
And of those, Eden used, you see, only makes sense for G1, as an example of something which is not common across the GCs. So at some point we have to decide what we do about working with these different collectors, which can produce numbers that don't make any sense. You know, for a parallel collector, Eden used makes no sense, because at the end of a collection the Eden used is zero, since it's a completely evacuating collector. And Shenandoah is currently not a generational collector.
A
So basically I've been just trying to hack a path forward and see where we get to, and that was as far as I got last night. So I wanted to see what other people thought, and to see if we can think around this a bit more.
A
The other question related to this, which is a bit lower down the agenda, is exactly what we are putting in scope. Because, you know, although HotSpot is by far and away the most commonly used and most important VM that we need to support, it isn't the only one. I spoke to Erin about this, and she is very much of the opinion that we need to support IBM J9 as well.
A
Is there anything else that people think we should have in scope? That's another question to think about.
C
D
A
Okay, so let's maybe tackle that one first: which JVMs and GCs are in scope. So for HotSpot, for the JFR implementation we're with Java 17, so the only things it will have to listen to are... well, it won't, for example, include CMS. But overall this effort goes as far back as Java 8, so we will need to include CMS in the list as well, although that's not something I'll need to do.
A
For 8 and 11, the collector list is going to be CMS, Parallel (which should be the same set of pools), and Serial, and I don't know whether the serial collector shows up separately and differently from Parallel. If someone had some time and could investigate what data actually shows up in JMX with the serial collector, that would be really useful; a sketch of that check follows.
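[A small sketch of that investigation, assuming HotSpot; run it once with -XX:+UseSerialGC and once with -XX:+UseParallelGC and diff the output:]

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolDump {
    public static void main(String[] args) {
        // Prints the memory pool names and current usage that JMX exposes,
        // so the Serial and Parallel pool sets can be compared directly.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s type=%-16s used=%d committed=%d%n",
                    pool.getName(), pool.getType(),
                    pool.getUsage().getUsed(), pool.getUsage().getCommitted());
        }
    }
}
```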
A
If someone can take that... And you might think, why are we doing Serial? Well, because Serial is what you get if you run in a single-core container, and according to data that I saw when I was at New Relic, over 25 percent of people run their Java in single-core containers. So whether they know it or not, they are getting Serial. We need to cover that case for them.
A
Yeah, and the crazy thing is that this isn't just Java. You have the same behavior in Go, because Go's collector is concurrent as well. It's just crazy that people complain so much about stop-the-world time, and yet, when handed a platform which runs with a concurrent collector, they immediately hobble it back to a stop-the-world collector. It's just completely crazy.
C
A
Yes, but of course there isn't... well, there isn't a fully supported version of Shenandoah in 11.
C
A
It's a good question; that one is open to discussion. I think we ultimately should, and I think this work is public, I'm sure it is: at Red Hat we're doing a bunch of work to make sure that JFR works on native images, as far as that's possible. But again, that's going to run into some questions about exactly what data is available.
A
I mean, yeah. What do people think? We definitely will need to support native images at some point, but whether we... you know, given that what we're developing here is a standard.
B
We should just take care that we don't make things mandatory that make no sense in native images. So as long as everything's optional, people can still use it in a native image and just not have the data. But if we define in the standard that we need some mandatory metrics that are not available in native images, that would be... yeah.
A
Yeah, I'd say the same, Fabian; it's a good point. I wonder whether actually the thing to do is to go back to the use case. The use case, sort of one of my more compelling ones, is: I'm brand new to Java, I'm a DevOps person, I don't really know that much about Java. I just want to plug in the components and actually get something useful into my dashboard.
C
I don't think so, but I can imagine what it is. Yeah.
A
I mean, I think I have a copy of it. No, I'd have to bring my other laptop. Yeah, I'll post some screenshots in the Slack channel of what it looks like, rather than try to demo it now. But basically that has, I think, a good set of metrics for this use case of: you plug it in and switch it on, and you get some data back.
A
So one approach we could take is: let's just try to get the metrics which drive that, lay those out, and see how they look in the standard. Now, it will have a couple of things in it which wouldn't make sense for GraalVM Native, such as class loading, but those we could definitely mark as optional.
D
A
That would be super useful if you can share it at some point. I'll get a document out, chase the guys at Red Hat, and see what the state of play is for JFR events as well.
A
So, okay, coming back to the HotSpot question: for 8 and 11 we would support Parallel, CMS, Serial, G1. For 17 we would support Parallel, Serial, G1, Shenandoah, and ZGC.
C
A
Is there someone who can put some time in to help with looking at the JMX outputs: just eyeballing it, making sure that it's the same sorts of numbers that come out, and checking that they genuinely do produce the same pool names?
A
C
A
So it's going to vary slightly, is my gut feel. I want to go through this exercise to basically see if I can quantify my intuition here at some point. When we get onto that item I'll show you the code I've got for adapting away from histograms, because my JFR prototype is all histogram-based, and that's obviously not going to work against JMX. JMX is just going to scrape the bean and see whatever's there, right? But with JFR I've actually seen every event.
C
A
So I kind of need to do something with those: if I'm only being scraped every 30 seconds and I've actually got 30 events, I need to do something with that, like coarsening it somehow.
A
There's also the possibility that I could do something else, like add additional non-standard tags. For example, I can actually just fake out a histogram: I could do, say, a count and emit that on a different attribute.
C
A
At the same time, that gives me enough flexibility to emit other metrics or other tags which have got better data on them. Yeah.
C
B
D
Similar thing. Okay, one of the problems that I've noticed in looking into this is that metrics get registered some time after the application starts up, and it's possible that garbage collections have already happened. So you don't necessarily get a 100% accurate picture: if you want to know how many collections have happened over the course of my application, you'll miss any that happen before you register the notification listener that records your metrics.
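[A sketch of the gap being described, assuming the HotSpot JMX beans; collections that ran before the listener attaches are only visible as a cumulative count:]

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;

public class GcNotifications {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Anything that happened before this point is only visible
            // through the cumulative count; the per-event detail is gone.
            System.out.println(gc.getName() + " had already run "
                    + gc.getCollectionCount() + " time(s)");
            ((NotificationEmitter) gc).addNotificationListener(
                    (notification, handback) ->
                            // Fires for each collection from now on.
                            System.out.println("GC notification: " + notification.getType()),
                    null, null);
        }
    }
}
```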
A
Yes, this is a problem for, well, everything. This is why I'm such a big advocate, in the JFR space, of using a Java agent for this, because I want to run as early as possible. For the manual registration of the JFR piece, we do it as early as possible, so for Spring Boot applications it's done before you even hit the main method.
A
Exactly for this reason, of course. So, okay, that's a good note to have on the spreadsheet: where does this data come from? Both Prometheus and Micrometer have the ability to derive from JMX listeners.
A
So that might be a question for me to fire out to the other folks who have produced implementations: what are they doing? Are they just scraping, or are they actually looking at event listeners? So, for the folks who are doing the event-listener-based stuff...
A
B
I mean, usually, at least in the Prometheus world, you wouldn't want to average anything in the client library, right? You just produce the latest value, and if you want to calculate averages over time, you do it in the Prometheus server, in your monitoring tool. Because an average kind of implies that you look at a time interval, and then the client library would define what time interval you're looking at, and that's something you don't want to hard-code in a client library.
A
Okay, and I get that philosophy, that makes sense, but this is one of the differences that we have to facade over: some things are client push, where, you know... I mean, ideally I want to send every event if I could, but OpenTelemetry does support the concept of aggregation.
A
So I think there is a philosophical difference as to how we approach this. Basically what I'm thinking here is: let's suppose... what's your polling interval for Prometheus, typically? Kind of 30 seconds, something like that?
B
Sorry, I didn't get the question.
C
A
B
Yeah, but then, for that you usually would use counters. Like, you're interested in the total time, the total CPU time used, and you just have incrementing counters for that. And so if you miss any intermediate updates, you just get the total sum the next time you poll, and it doesn't really matter how much it increased in the meantime.
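[A sketch of the counter style being described, using the OpenTelemetry Java metrics API; the metric name is illustrative, and the CPU source assumes the HotSpot com.sun.management extension:]

```java
import com.sun.management.OperatingSystemMXBean;
import io.opentelemetry.api.metrics.Meter;
import java.lang.management.ManagementFactory;

public class CpuTimeCounter {
    static void register(Meter meter) {
        OperatingSystemMXBean os =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        // Monotonic total: a missed scrape just sees a larger sum next time,
        // and the server computes rates over whatever interval it likes.
        meter.counterBuilder("process.cpu.time")
                .ofDoubles()
                .setUnit("s")
                .buildWithCallback(m -> m.record(os.getProcessCpuTime() / 1e9));
    }
}
```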
A
Okay, but that has its own problems, because if you are just counting and accumulating CPU, a sustained, slightly increased CPU utilization over a 30-second polling period will look identical to a massive spike which only lasted for a second or two.
B
A
Can we do something where we actually have two levels: we have the baseline, which is just, you know, the average over the time period, and then we also produce something on a different instrument which actually has histograms, a finer-grained representation of the data? Or is that too much work? What do we think?
C
I have a feeling a different instrument is probably better, because even an average, it's not clear that that is better, right? If you have a spike, the spike might still be gone if it's an average, so getting those aggregations right will probably be hard either way. If we had a separate histogram instrument, that seems like it could be more useful.
A
Okay, so that then suggests a path forward: we will have a set of instruments which are baseline, mandatory, which are single values, basically averages, and then something which will be a histogram. Let me just share my screen real quick and let me show you some code.
C
A
We can do complicated, yeah, totally. For something like CPU, that's probably what we want. We may not want an entire histogram, because it may just not be that interesting most of the time, but the max value certainly does sound like it might be, and that's easy, because you just put that on another tag, right? Okay, so this is the GC event handler, and this is currently implemented as a histogram, and we have two of these.
A
So this is the baseline one, where we record all of the times that we got, in milliseconds, and then we have, for this one, the heap space. So this one is how much the heap contained, and then the committed and the reserved areas. The reserved area doesn't change unless you do a heap resize, so that is pretty much always going to be static, but we have computed it anyway. And then individual collectors have different things, because this is just the total heap size.
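[The shared screen isn't captured, so here is a rough sketch of the handler shape being described, not the actual prototype; event choices and instrument names are guesses, using JFR's RecordingStream and an OpenTelemetry histogram:]

```java
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;
import jdk.jfr.consumer.RecordingStream;

public class GcEventHandler {
    static void start(Meter meter) {
        DoubleHistogram pauseMillis = meter.histogramBuilder("runtime.jvm.gc.time")
                .setUnit("ms").build();
        RecordingStream rs = new RecordingStream();
        // Baseline: record every collection's duration in milliseconds.
        rs.enable("jdk.GarbageCollection");
        rs.onEvent("jdk.GarbageCollection",
                e -> pauseMillis.record(e.getDuration().toMillis()));
        // Total-heap view: used comes straight off the event payload.
        rs.enable("jdk.GCHeapSummary");
        rs.onEvent("jdk.GCHeapSummary",
                e -> System.out.println("heapUsed=" + e.getLong("heapUsed")));
        rs.startAsync();
    }
}
```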
A
There's no resolution here into young or old gen; we've got no visibility into that, because those come from different handlers. So this is the G1 one, and this has the size of Eden, the delta (the change in the size of Eden), and also the survivor size.
B
A
It shouldn't be; let me check that. If so, that's a good catch, but I would want the histogram to be reset after each collection.
A
C
And one phase where you transform the data into exports; if you were to push to two, you'd sort of be like: first you convert, then you send the same data to two exporters.
A
And I mean you wouldn't have to architect it like that; as Nyx says, you could totally have multiple exporters, or in OTel you can do whatever you like. But probably it's simpler to architect it to actually have the collector do the fanout.
A
Cool. So yeah, the resetting of the histograms, I think, is important. I'll check; it's a good question. We need to make sure that these actually do reset each time they're scraped. The scraping, as it were, happens via the OTel exporter thread: the actual exporter thread comes along and collects values from the instruments, which is why these, I think, will get reset. The CPU handler, on the other hand, is this one.
A
This is on a branch, I haven't committed the stuff yet. Here what we have is a list of doubles which we just collect, and the way that I've got it set up so far is we just have an averaging function. And then, and this is a bit weird, I was weirded out the first time I saw this: you start with a meter.
A
You build an up-down counter of doubles, and then you use this thing called buildWithCallback. So the meter now has a bunch of callbacks on it.
A
There's nothing which you directly record. So, for example, in the GC heap handler we've got specific objects called duration histogram and memory histogram, and when new data comes in we make an explicit "record this value" call into the histogram object. With this callback style, we don't do that.
A
Instead, there are callbacks which say: when the callback is fired, call the record method. So that callback style is a little bit different. And then what we do is we calculate the average and apply it, on different tags, to the three sets of edges that we've got. The accept method is then straightforward: all we do is record the data as it comes through, so we stick it in a list. And my code's got a bug in it.
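[A sketch of that callback style with illustrative names, not the actual branch: the JFR handler's accept() buffers values, and the registered callback averages and clears them whenever the exporter fires:]

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.Meter;
import java.util.ArrayList;
import java.util.List;

public class CpuLoadHandler {
    private final List<Double> values = new ArrayList<>();

    CpuLoadHandler(Meter meter) {
        // Nothing is recorded directly; the exporter thread fires the
        // callback, which computes the average and records it.
        meter.upDownCounterBuilder("runtime.jvm.cpu.utilization")
                .ofDoubles()
                .buildWithCallback(m -> {
                    synchronized (values) {
                        if (values.isEmpty()) return;
                        double avg = values.stream()
                                .mapToDouble(Double::doubleValue).average().orElse(0);
                        m.record(avg, Attributes.of(
                                AttributeKey.stringKey("usage"), "machine"));
                        values.clear(); // each export window averages fresh data
                    }
                });
    }

    // Called from the JFR event handler as events arrive.
    void accept(double value) {
        synchronized (values) { values.add(value); }
    }
}
```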
B
I just quickly looked at the OpenTelemetry histogram, and I think I found a note that it's kind of supposed to be compatible with what OpenMetrics has in the Prometheus world, and that would mean that you cannot reset it. Because in this world, the sum of the recorded values, for example the sum of the recorded GC times, would be a counter which is continuously increasing. So if you go ahead and take that histogram but reset it, then it would drop to zero, which would violate assumptions like that.
C
OpenTelemetry has both delta and cumulative temporality.
C
A
B
Delta... what I mean is, what's with the maximum value? If you use delta, the maximum value is still the value since the application start, right? It's not like you reset the maximum value with each poll in a delta representation. Anyway, it's maybe too detailed for this discussion.
C
Yeah, yeah. The SDK, the Java SDK, it's implemented as delta storage, and then at the export layer, if it's a cumulative exporter, then it...
B
...copies the delta. I was just thinking, if you reset it and the counters drop to zero, then Prometheus would just assume you restarted the application and implicitly add the last value seen, so that it's again an increasing counter, and so resets wouldn't really work in that world. But yeah, anyway.
D
So for Micrometer, for exporting to Prometheus, if we want to export a max, we use a time-windowed max. We'll take the step interval, so the interval for how often we are reporting, or how often we expect Prometheus to be scraping, and then we'll make a ring buffer, keep the max value in the ring buffer, and rotate every step interval.
D
So essentially, if you witness some maximum value in a step interval, then that'll be reported as the max until the ring buffer rotates it out. That way you kind of have a bit of a memory of the max, but it will roll out over time, so across enough time you can see the max changing and it's not the same the whole time. And it's exported to Prometheus as a gauge, so it's not an issue that it may go up and down.
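[Micrometer's production version of this is TimeWindowMax; here is a simplified standalone sketch of the rotating ring buffer just described:]

```java
public class TimeWindowedMax {
    private final double[] buckets;   // one slot per step interval
    private final long stepMillis;
    private long lastRotate = System.currentTimeMillis();
    private int head;

    TimeWindowedMax(int bufferLength, long stepMillis) {
        this.buckets = new double[bufferLength];
        this.stepMillis = stepMillis;
    }

    synchronized void record(double value) {
        rotate();
        for (int i = 0; i < buckets.length; i++) {
            buckets[i] = Math.max(buckets[i], value);
        }
    }

    // Exported to Prometheus as a gauge, so it may freely go up and down.
    synchronized double poll() {
        rotate();
        double max = 0;
        for (double b : buckets) max = Math.max(max, b);
        return max;
    }

    // A spike survives for bufferLength steps, then rotates out.
    private void rotate() {
        long now = System.currentTimeMillis();
        while (now - lastRotate >= stepMillis) {
            buckets[head] = 0;
            head = (head + 1) % buckets.length;
            lastRotate += stepMillis;
        }
    }
}
```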
B
D
Yeah, so actually we wanted it to be part of the histogram, but since it's not defined as being part of the histogram in Prometheus's histogram type, we just make a separate gauge for it.
D
So that way, if you have some spike, it'll show up and then eventually it'll go away, so you can see when a spike happened. And part of the reason for doing the time windowing over multiple intervals is that we think a lot of times you might have a spike happen, and that may also correspond to somehow failing to report the metrics in that same interval, because the application is getting a lot of traffic it can't handle, and so maybe it fails to report metrics.
D
B
A
I think we need to make sure that if you change provider for your OpenTelemetry output, you don't necessarily get surprisingly different data artifacts. So I hear what Tommy's saying about having a shifting baseline to ensure you don't lose stuff.
So let's think for a minute: the standard's out, it's deployed. Is the expectation that, in addition to the Prometheus exporter and various others, Micrometer, for example, now also has an OpenTelemetry exporter?
D
A
Okay. What I find a little bit tricky about this at the moment is that it's kind of a chicken-and-egg problem. At the moment there aren't really good ways to build a pipeline which says: here's my Java application, here are my OTel metrics from the OTel library, and it goes out as OTLP; there's a disconnect in actually getting that all the way through the pipeline.
A
At the moment, what most people want to do is convert that back to Prometheus metrics and then store that for display: so OTLP, collector, Prometheus, Grafana, for example. The problem with that is that it introduces the semantic assumptions of how data is handled in Prometheus into the pipeline, when we probably don't want that long term. Everyone I talk to keeps saying: yes, we definitely want to build native OTLP support for metrics, it's just not on this quarter's roadmap.
A
So one of the things that I'm trying to guard against here is that we don't build in architectural assumptions that are valid for Prometheus but not going to be valid once you no longer have to round-trip through it in order to display your data.
B
A
But if that's not the case, then we'll need to think about this. Partly the way that I'm thinking about this is wanting to build the sorts of views that you can build in Mission Control. In Mission Control you actually have the ability to look at individual time windows and say: within that window there was this much total time, this was the max, and this was the min. Now, of course, in the Mission Control case they have the whole event stream of data.
A
So they can do that: if you dynamically resize the window and change the window, they can recalculate all the histograms, because they're working off the original data. We can't do that; there has to be some aggregation somewhere.
A
So if we take the idea of a fixed-size window, which is going to be the OTLP export window, however big that is, then we calculate those individual values, the min, the max, the count, and the total, and just fix that size. You can't get down any lower than that, but I'm not sure there is actually any other way to do it, because there's just no way in the OTLP format to export the raw data as we actually have it.
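[A sketch of that fixed-window aggregation: accumulate min, max, count, and total per OTLP export interval, then snapshot and reset; nothing finer-grained can be reconstructed afterwards:]

```java
public class ExportWindowSummary {
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;
    private long count;
    private double sum;

    synchronized void record(double value) {
        min = Math.min(min, value);
        max = Math.max(max, value);
        count++;
        sum += value;
    }

    // Called once per export window; the window size fixes the resolution.
    synchronized double[] snapshotAndReset() {
        double[] out = { count == 0 ? 0 : min, count == 0 ? 0 : max, count, sum };
        min = Double.POSITIVE_INFINITY;
        max = Double.NEGATIVE_INFINITY;
        count = 0;
        sum = 0;
        return out;
    }
}
```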
A
C
A
Yeah, I mean, for J9 it's fine, because there are some J9 people at Red Hat I can gently poke once we... you know. So maybe, Anurag, maybe that's the answer: we'll just produce something which covers the HotSpot cases and then, before we standardize it, show it to the J9 people and say: would you feel terrible if this was what it looked like and you had to provide data to fit this?
A
Yeah, no, I think we've got all the way through. I mean, the only other thing is the current PR, which basically just adds in support for Parallel; Shenandoah is actually planned for this PR too. We should also add an issue to add support for Serial, to make sure that Serial produces data in the same formats we expect. But Serial shouldn't be too difficult; I'll take that one, at least for the JFR side.
A
Yeah, I think we made some good progress. Thanks everyone, have a great rest of the day, and we'll see you in two weeks' time.