From YouTube: 2022-02-24 meeting
D
So I copy it to my own calendar, but I basically ignore that, because I don't trust that the links are permanent. So I always go back to the OpenTelemetry calendar each week and click whatever link is in there.
E
So I signed up for the calendar updates Google group, but that is usually out of sync, so I usually go to the Google calendar and open the link there.
A
We have... oh, we do have, all right. Ben added a topic. Awesome. No, Jack added a topic. Cool, yeah. All right, tell us about your topic.
D
Yeah, so there's a PR open in the metrics spec, and it proposes adding support for a batch API, an optional batch API for metrics. The idea is this: if you have a callback that's expensive to invoke, and the results of that callback are going to be recorded to multiple instruments, it's really hard to organize that yourself.
D
If we could support this... and I like it conceptually, but I was just playing around with our API and SDK a bit, and it's going to be a bit tricky. I propose something that I think could work. It's not wired up or anything, just from an API perspective, but I think it could work, and I wanted to get your opinions on it. This little pseudocode kind of demonstrates what I think we would want to do.
D
So in this case we have two counters, foo and bar, and we want to be able to record to each of those when a single callback is invoked. So, instead of doing the normal build-with-callback on those, we say observer, and we get an observer that we can later record to in a single callback. And so we get a foo observer and a bar observer.
D
And then later we register a batch callback, which is just a runnable, and that runnable is invoked once per collection. You can call whatever observers you want in there to record to those instruments.
D
So that's kind of the idea. Why do you pass the observer in to the register call? I'm thinking of that as a safety net, so that the SDK can guarantee that you're only recording to the instruments you told us you're going to record to.
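A minimal, self-contained sketch of the shape being described, assuming hypothetical names (Observer, counterObserver, registerBatchCallback) modeled on the proposal; this is a toy model, not the real OpenTelemetry Java API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed batch-callback API (all names are assumptions).
public class BatchCallbackSketch {
    // Handle that a callback records measurements to.
    interface ObservableLongMeasurement {
        void record(long value);
    }

    static class Observer implements ObservableLongMeasurement {
        final String instrumentName;
        long lastValue;
        Observer(String name) { this.instrumentName = name; }
        public void record(long value) { lastValue = value; }
    }

    static class Meter {
        final List<Runnable> batchCallbacks = new ArrayList<>();
        // Instead of build-with-callback, hand back an observer to record to later.
        Observer counterObserver(String name) { return new Observer(name); }
        // One runnable invoked once per collection; the observers passed alongside
        // let the SDK check that only declared instruments are recorded to.
        void registerBatchCallback(Runnable callback, Observer... observers) {
            batchCallbacks.add(callback);
        }
        void collect() { batchCallbacks.forEach(Runnable::run); }
    }

    public static void main(String[] args) {
        Meter meter = new Meter();
        Observer foo = meter.counterObserver("foo");
        Observer bar = meter.counterObserver("bar");
        // One expensive operation feeds both instruments in a single callback.
        meter.registerBatchCallback(() -> {
            long[] expensive = {1, 2}; // stand-in for the costly call
            foo.record(expensive[0]);
            bar.record(expensive[1]);
        }, foo, bar);
        meter.collect();
        System.out.println(foo.lastValue + " " + bar.lastValue);
    }
}
```

The point of the sketch is only that the expensive work runs once per collection while feeding both foo and bar.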
D
Yeah, that type of thing. And this whole idea, I'm unsure about it, because once you have these observers that you just have a handle to outside of your callback, you can invoke those observable long measurements outside of your callbacks. You can just invoke them synchronously, and that's not something that's intended, and so what I imagine doing is...
A
Yeah, it's definitely needed. I was just popping over to some instrumentation we have, like for OSHI, for example, where, right, these two, and there's a bunch more: we basically have to call this expensive operation to capture all of those OSHI process metrics, for each.
D
Yeah, and is that an expensive thing to invoke? Yeah.
D
Yeah, so this is exactly the type of use case that I was thinking about, and I think Josh was thinking about, and I think we have some open issues about this. It's been asked for at least once, and it definitely is the type of thing where, if a user wants to be efficient and not call that expensive function more than once, it's really quite challenging to do so.
A
Cool, yeah. Let's talk JVM metrics, because we missed it this week; I forgot that it was a holiday.
G
Yeah, no worries. Given that we have some time, if people don't mind; if anyone doesn't want to stay for metrics, we won't be offended. But I just figured that we might as well get a bit of a catch-up and just resync, because I was putting some things together today and just looking at where I think we are with how things are coming.
G
I think that we're still a little ways off having a final set of metrics, and I just wanted to take people's temperature about how quickly we can make some progress, and what we feel we would actually need to feel done.
G
Because the type is either heap or non-heap, but then, when you look at the names of the pools, they are things like G1 new, G1 old, and Metaspace.
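For reference, both the heap/non-heap type and the pool names being discussed can be read from the standard java.lang.management API; a quick sketch (the exact pool names printed depend on the JVM and GC in use):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Print each memory pool's HEAP/NON_HEAP type next to its name,
// illustrating that the type is derivable from the pool.
public class PoolTypes {
    public static String describePools() {
        StringBuilder sb = new StringBuilder();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            sb.append(pool.getType()).append(" : ").append(pool.getName()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(describePools());
    }
}
```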
D
Yeah, so if I understand you, you'll always know your type based on your pool.
D
Yeah, I've been toying around with the metrics in this exact format and doing that type of thing, summing up the heap and the non-heap and displaying those, because while there will be some people that are interested in the memory usage of the specific pools, other, more casual users are just interested in the higher level, and maybe...
G
Also, bear in mind, Jack, that we will have places where we'll want to sum up all of the heap areas and not display the pool names, because that, then, is an apples-to-apples comparison between different garbage collectors.
G
So I think in all cases the type is redundant, but I think it does make sense to keep it, because you still want to be able to sum up over all the values of it.
E
I believe having the type is a great help for users who want to quickly query something, because you will be able to make a very quick distinction, even without knowing which GC you are using. And you can also have slightly more advanced queries; for example, if you want to list the pools but you are only interested in the heap pools, and you don't want to show the non-heap pools.
E
So I believe having both is a huge help.
G
Yeah, well, you know, make some decisions quickly. So that brings us back to the question of what the definition of done is.
A
So I don't think we're holding up the boat; I think each sort of semantic convention area will have its own GA.
A
Well, I know that the metrics SDK GA is coming pretty soon. I don't think any of the semantic convention GAs, though, are coming soon.
A
Yeah, and it's starting to hurt, in particular HTTP, and there are two efforts right now in the community, pushing for HTTP and messaging, to stabilize those two. But it's slow.
D
Yeah, I agree, I don't think we should let this kind of go on forever. So what are the other things that we want to specify? CPU is outstanding; we have memory and GC.
D
Embedded in CPU right now are some questions about pools, like thread pools and stuff. Do we want to treat those as a subsection of CPU, or is it its own thing?
G
What can you get out of JMX? I mean, that's not a problem at all for JFR, but can you actually get the thread CPU from JMX?
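To the question above: JMX does expose per-thread CPU time through ThreadMXBean, though support is JVM-dependent and may be disabled; a small sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Read the current thread's CPU time via JMX, guarding for JVMs
// that don't support or have disabled thread CPU time measurement.
public class ThreadCpu {
    public static long currentThreadCpuNanos() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isCurrentThreadCpuTimeSupported()) {
            return -1; // not available on this JVM
        }
        // Also returns -1 if measurement is supported but disabled.
        return threads.getCurrentThreadCpuTime();
    }

    public static void main(String[] args) {
        System.out.println(currentThreadCpuNanos());
    }
}
```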
A
Yeah, I mean, just memory, CPU, and GC would make a pretty... is there anything missing from that that would, you know, not make it a compelling sort of 1.0?
E
Disk space; or should it belong to the...? I believe that's one of our big questions, because there is a system and a process spec as well.
A
Yeah, I think it should go to the... if it can live at the system level. If there's nothing special that we can get from Java, like the memory pool breakout, then I think it should be at the system level.
D
Yeah, there's that outstanding issue that I have open, that I think we've talked about but hasn't really gotten a lot of attention, which is: how do we resolve conflicts between what's specified at the runtime level and at the system level?
G
Yeah, how do we pump that up to somebody? Because on one level it kind of doesn't matter what the answer is, but we need to know what the answer is, so we can do the right thing.
D
Well, let me try to answer that with perhaps a question. Memory usage for the JVM is distinct from memory usage at the system level, and so we have our own semantic conventions for that. CPU is in the same boat: we can't get the same types of dimensions that are available at the system level, so we're getting our own semantic conventions for that. Is disk in the same camp, or is disk different?
E
What does the system-level specification say, or how are these defined, in case I am running inside of a Docker container? Does "system" belong to the process? Basically, will these be defined as "I am inside of the container", or will that mean the host? Because I believe if they are different, then we have our distinction.
D
Would it be at the container level? Because if you imagine who's collecting these system-level metrics, it's going to be the collector running in agent mode. And you can have multiple containers in a pod in Kubernetes, but if you're running the agent as a container alongside your application as a container, then that agent container won't be able to pick up the disk usage of your application's container.
E
I guess the question is whether we want to add only the low-hanging fruit to the 1.0. If that's the case, then maybe we should look at simple metrics, like, I don't know, thread count instead of disk space.
G
Yes, and where did we get to with the discussion about baggage, and whether we actually wanted to do things like CPU count for containers, and how, or do that in resources?
G
Because we talked about that, and I don't recall that we actually made a decision.
D
Yeah, so can you refresh the conversation?
D
I thought that we... you know, at least for initial memory and max memory, although it is repetitive, that information is static and not changing.
D
Putting it in a resource makes it really hard to analyze in systems like Prometheus, and probably many vendor backends, because imagine trying to graph current memory usage versus max: if that max memory is contained as a static attribute as part of the resource, that seems hard to do in Prometheus.
G
I mean, if we're happy to do that, then it's relatively easy to do, right, because it's just a fixed value from the JMX point of view. And from the JFR point of view, it's just looking at the events which are produced once per chunk, which have that information in them, and we just re-report it with whatever we've got. So that should be pretty straightforward. That should just be an up-down counter, right?
E
But as far as I remember, it is there, the available processor count; that's what...
E
...can resize themselves, and as far as I know, the Javadoc says that they are advised to poll this value and resize the thread pool.
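The value being discussed is Runtime.availableProcessors(), whose Javadoc notes that the value may change during a particular invocation of the virtual machine, which is why callers are advised to poll it; a minimal sketch:

```java
// availableProcessors() reflects what the JVM currently sees (including
// container CPU limits), and may change over the life of the process.
public class CpuCount {
    public static int cpus() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println(cpus());
    }
}
```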
D
If we spec it as the current CPUs that the JVM has access to, and our initial take is just to use a static value, whatever was reported on application startup, then we could always adapt that later if we run into a situation where the CPU count does change. It would be kind of like a bug fix, rather than some change in how we're adhering to the spec.
A
CPU count does feel like something that could be at the system level, though, and initial memory and max memory could be at the process level.
E
Also, the CPU being at the system level brings back the question of being in Docker. That can be very different between what you experience inside of the container and what the host has. Maybe the host has, I don't know, 32 CPU cores, but inside of the container you will just see one, and that will be a big change for your GC, the GC threads, the common pool, and so on.
D
Yeah, totally. Let's say there's a system-level spec for CPU count, and let's say that the JVM has all the information it needs to record that at the system level, and also the collector running in agent mode has all the information needed to record that at the system level. I think both could report it, and the agent would potentially see a much higher value than the JVM would.
A
To clarify this question, I will open a spec issue. Whether that means it will get clarified or not is a different question, but at least we can open it and try to push forward, get an answer, and even clarify it. I'll read through this carefully first to see if it's defined, but I didn't see anything on a quick search.
D
How do folks feel about getting the easy stuff in, the stuff that we're really sure on, just getting that merged, so that we have a pretty good baseline? And then things like disk space and some of the other more questionable metrics we can have more lengthy conversations about, or resolve over time.
A
I think that sounds great. Also, what do we need to do to get this merged?
D
Yeah, so last time we talked, which was several weeks ago now, I took away an action item to go through all these metrics and basically make it really easy for you to figure out the sources of these metrics, so that we could determine whether they're important or not. I think that's what we need to do: look at each of these and say, are these important things that we want to include, yes or no? And that's what this comment is here.
D
Jonathan provided a gist that basically showed where all this stuff came from, and I just summarized that here. So if folks could go through and, for each of these metrics, say whether you give it a thumbs up or thumbs down, I think that would be very good for getting it moving.
G
Can you put a link to the PR in there? Oh, you just have; yeah, just something.
G
I've just noticed that we've got init and max, which we were just talking about as though they were Xms and Xmx.
D
So is that... let me pull up my implementation real quick.
D
I have an implementation that pulls these. There's a MemoryUsage class, from the MX beans, that has methods called getInit, getUsed, getCommitted, and getMax, and each one of these pools that we have one of these MemoryUsage classes for has one of these getMax methods, and that's kind of what I was envisioning recording.
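The accessors listed above live on java.lang.management.MemoryUsage, which the memory MX beans return; a short sketch reading the heap-wide values:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Read init/used/committed/max for the whole heap; per-pool values come
// from MemoryPoolMXBean.getUsage() in the same shape.
public class HeapUsage {
    public static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage u = heap();
        // getInit() and getMax() return -1 when the value is undefined,
        // i.e. when the pool is unconstrained.
        System.out.println(u.getInit() + " " + u.getUsed() + " "
                + u.getCommitted() + " " + u.getMax());
    }
}
```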
D
So I guess the heap has a max based on Xmx, and some of the other pools have different parameters which control their maximum memory usage in some cases, and in other cases they're unconstrained.
G
It depends, yeah, as always. You know, we still have to deal with the case where somebody will do something crazy, like pin Xms to Xmx, and at that point we will have a defined max size, because the heap, the pool geometry, can't change; but in other cases, yeah, they could be unconstrained.
G
Let me see if I can provoke a reserved-size change; I will take an item to do that. I'm sure there are ways of doing it.
E
Ben, could you please also check the language specification? Because, as far as I know, the heap being contiguous, or that it must be contiguous, doesn't hold anymore in every case. It used to always be the case for HotSpot, but I believe they changed it, and now it's not a constraint, as far as I know, or not in every case; I don't remember the details.
G
Jonathan, I don't think I need to check this, because I think the spec is silent on this point. However, I do have a recollection that IBM JVMs are capable of using non-contiguous heaps, so I think...
G
Yeah, I think you're right. So that assumption that the reserved heap can't change, I think, is not a safe one, because if the heap is not contiguous, you could just allocate another arena, right?
D
But we're having this conversation because we're talking about whether max is the right word to use to describe this idea, or you suggested a different alternative, right?
G
I think that max is totally fine as terminology, and of course for the JFR implementation it's kind of moot, because it only works on HotSpot.
G
So what is recorded in HotSpot is always going to be the same, and what the JMX implementation reads is the value of the JMX bean, and so in the case of an IBM JVM...
D
Yeah, and so I guess we would just need to be aware that in certain circumstances the max may change, and users usually shouldn't assume that max is static.
A
Oh, sorry, I was going to... yeah, I was going to try to leave us 10 minutes, because I wanted to talk about MicroProfile.
C
But probably, yeah, we'll have to leave that for another time; someone will be at my door in two minutes. So this is also related to MicroProfile: I saw that the metrics API is stable, but the metrics spec is still marked as experimental.
D
So the spec is going to stabilize in the next couple of weeks, likely; it's not for sure, but I think it's likely. And then after that, the Java SDK has some work to do to catch up to what's currently specified, and then, after that, when we're comfortable, we would consider marking the Java metrics SDK as stable as well.
A
Emily, I think, why don't we get back to this after we talk to Anuraag this evening, because I know Anuraag has much more aggressive plans.
C
Yeah, okay, great. Probably next week we'll catch up again. Yeah.
G
Trask, can you stick me down with an action item? It suddenly occurs to me that, since we're talking about the reserved memory size changing on IBM JVMs, I can just ask the folks at Red Hat who worked on OpenJ9 how we provoke that and what the actual observed behavior is. So stick me down with an action item for that.
A
Basically: review, provide feedback, approve, and then I can push on people to get it merged. And then it sounds like the next two things after that would be GC and threads; any volunteers?
A
If not, we'll try again for volunteers in the next metrics meeting.
A
Cool, yeah, I think it would be great progress even just to get this one merged. And I will work on trying to find out the answer to the system metrics question, which will help us decide how to move forward on disk space and CPU count, and then potentially proposing those in the system metrics, if possible.
D
So there's a question about... the memory semantic conventions are merged, and they're pretty easy to implement in instrumentation today.
A
Cool, and there's probably already some framework there, so I think it's mostly massaging the existing... or is it runtime metrics?
A
Ben, how do you feel about your initial question of how to get to 1.0 and the definition of done?
G
I think we're a great deal closer. I don't think that we've necessarily got a final definition of 1.0 and done here, but I feel like this has been a really good session and we've made a lot of progress. How does everyone else feel?
A
Yeah, I think that's a good question to ask at the beginning of each of our meetings, to make sure we're tracking.
A
All right then; any other topics that anybody had? We've got four minutes left.