From YouTube: 2022-03-24 meeting
B
I have a topic: congratulations to Laurie, a freshly minted maintainer in the instrumentation repo. Thank you, Laurie, for everything that you do.
B
So if you have interest and time, take a look through here. Jack, do you... I'm guessing we should probably wait for John to make any significant changes.
D
Yeah, I gotta say that I've made very few changes to any of the automation, and so in my couple of months as a maintainer I think I've just done a release or two, and they haven't been particularly problematic. So I haven't had to dig into the weeds too much.
B
Yeah, I think the only significant change that I would want buy-in on, which Anuraag was receptive to when we chatted earlier this week, and that I think we specifically need to make sure John is good with, is this change: not using the Nebula plugin, which does the versioning based on tags, and instead having a hard-coded version in the source file, and then the release automation and release branch that kind of go with making that not painful.
D
Yeah, so I'm not sure if folks are aware, but yesterday the metrics SDK specification stabilized. It's the culmination of, I think, years of work, and I'm personally pretty excited about it.
D
I think that impacts us less, but we talked last week, and right now we're on target to mark our metrics SDK artifact as stable in our next release, which would be 1.13.0. And, you know, part of that is deciding which of the artifacts we want to stabilize, which are ready. So I've been making a lot of PRs to the metrics SDK, trying to tidy it up and tighten it up before we stabilize it. I think it's pretty close.
D
I think that, to the best of my knowledge, I'm done with the PRs of substance, and so it's just going to be tidying up and things like that after this. But, you know, while we've been on this meeting, I've been scanning the artifacts that are related to metrics: are we comfortable with marking the metrics SDK, the OTLP metrics exporter, and the OTLP HTTP metrics exporter as stable?
B
What's the API surface of these? Is it just the standard exporter builder?
D
Yep, that's the interface, and it very closely mirrors the API for the trace versions of the exporters, which are stable.
D
I'd say the one exception is that we have this ability for the gRPC exporters to set your channel, and we've deprecated that on the trace exporter, because, you know, we encourage folks to use the standard setters on the builder instead, and if there's some sort of capability that they want to configure that isn't available through the standard setters, we want to have feedback about that and consider adding it. But the metrics exporter hasn't been marked as stable, so we could still remove it there.
B
We'll see; I mean, I'm all for removing it. The big difference when you use it: I know we don't use it from the agent side, and we benefit from the OkHttp implementation of the exporter, and as soon as you use that channel you have to pull in all of gRPC, and so it's really nice not to have that as a dependency.
D
Yeah, and so, you know, removing that method would cause the metrics exporter to deviate a bit from the trace exporter, so that's kind of a consideration as well. Is it okay? Should we have the API surface areas mirror each other, even though we're not entirely happy with the original, or allow them to diverge?
D
So, yeah, we'll discuss with Anuraag tonight. And I guess the one other artifact that's interesting is the metrics testing package. I'm not entirely confident in the API surface area of that, and so, unless somebody is really clamoring for it, I say we punt on marking that as stable for now and just delay; it's a testing artifact, after all. I've personally found some rough edges trying to use it in tests, but haven't had the time to go and clean them up yet.
B
Yeah, I know we wish that we hadn't stabilized the traces testing artifact, because there were a couple of changes, some nice new patterns that Anuraag put in there, where we would have liked to remove the old patterns and reuse those names, some of the same method names. Yep.
B
Jonathan, since you're here, I just wanted to give you... sorry about dragging on and continuing to have thoughts about this as I've continued to sort of push on other spec issues. So, in particular, I was trying to do process CPU count, and it was interesting that in the system CPU... right, so I was arguing, or Bogdan was asking: why don't we only have CPU utilization, and why do we need the CPU count?
B
Which, I mean, is what users usually want, but I was thinking utilization is just a gauge and not really aggregatable. I would want to calculate that server-side, usually, based on process CPU time and process CPU count. But it sounds like they have a very specific definition of utilization: the CPU time delta since the last measurement. So it's very specifically tied to the last time it was checked, and it is the utilization over that specific window.
B
So it really is essentially the inverse; you get the same data. And the argument was that it was simpler for some backends if we do that calculation client-side instead of pushing it to the backend or charts.
B
Taking, so, taking the process CPU time: we have to keep track of the last process CPU time we reported, take the difference, and divide that by the CPU count. It's a normalized value from zero to one.
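A minimal sketch of that client-side calculation, assuming the JDK's `com.sun.management.OperatingSystemMXBean` for process CPU time; the class and method names are illustrative, not the actual instrumentation code:

```java
import java.lang.management.ManagementFactory;

// Illustrative sketch (not the actual instrumentation): utilization is the
// process CPU-time delta since the last observation, divided by the elapsed
// wall time multiplied by the CPU count, normalizing the result to [0, 1].
public class ProcessCpuUtilization {
    private final com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

    private long lastCpuNanos = os.getProcessCpuTime(); // -1 if unsupported
    private long lastWallNanos = System.nanoTime();

    /** Utilization over the window since the previous call, in [0, 1]. */
    public double observe() {
        long cpuNanos = os.getProcessCpuTime();
        long wallNanos = System.nanoTime();
        double cpuDelta = cpuNanos - lastCpuNanos;
        double wallDelta = wallNanos - lastWallNanos;
        lastCpuNanos = cpuNanos;
        lastWallNanos = wallNanos;
        int cpus = Runtime.getRuntime().availableProcessors();
        if (cpuNanos < 0 || wallDelta <= 0) {
            return 0.0; // CPU time not supported, or no elapsed window yet
        }
        return cpuDelta / (wallDelta * cpus);
    }

    public static void main(String[] args) {
        ProcessCpuUtilization util = new ProcessCpuUtilization();
        long sink = 0;
        for (int i = 0; i < 10_000_000; i++) { sink += i; } // burn some CPU
        System.out.println("utilization over window: " + util.observe());
    }
}
```

Because each observation is tied to the previous one, the value only means "utilization since the last collection", which is exactly the strict window definition being discussed.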
C
So, for example, if you are running in Kubernetes and you don't set a CPU request and limit, the JVM will see only one as the available CPU count, which means that your common pool size will be one. So your fork/join API will not be parallel; your streams will not be parallel. You might want this information.
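That visibility is easy to check from inside the JVM; a quick sketch (the class name is made up) that prints the CPU count the runtime sees and the resulting common fork/join pool parallelism:

```java
import java.util.concurrent.ForkJoinPool;

// Prints what the JVM sees: the available CPU count drives the default
// parallelism of the common fork/join pool, which is what parallel streams
// and CompletableFuture's default async methods run on.
public class CpuVisibility {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        System.out.println("available CPUs:          " + cpus);
        System.out.println("common pool parallelism: " + parallelism);
        // With one available CPU the common pool's parallelism is 1,
        // so "parallel" streams effectively run sequentially.
    }
}
```

Running the same program under different container CPU limits shows the count change, which is the signal the instrumentation would report.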
C
I believe so, since the Docker support was backported. If you're not using a very, very old JDK, then it is consistent across the JDK versions.
B
I missed that that was backported; sorry. Cool.
C
As far as I remember; and it was quite interesting to me that the Javadoc said that it can change over time. Oh.
B
And that the common thread pool size will change, will react and change over time.
C
I don't know; that's a good question, whether, like, JVM ergonomics is pulling that value and resizing things, because it is not just the common pool. But yeah, as your comment said, it's also the GC threads, and also which GC is used. So, for example, if the available CPU count is one, then only the single-threaded GC will be used; it will not go to G1, or it will not go to parallel. If that changes at runtime, I doubt that the JVM will...
C
The GC on the fly, or, I guess, the just-in-time compiler threads, are also based on this value, and I don't think that will be changed either, but there can be thread pools that are watching this value and can resize themselves.
B
Yeah, I was amazed that, so, there's some Linux command where you can change the CPU affinity, or the CPU pinning, of a process, and I did that on a JVM, on a Java process printing out the available CPUs, and it did update.
B
I did agree to revisit; I mean, we'll revisit once we start mapping out GC and thread pools.
B
Yeah, just since it's not part of the spec, I think, or our specific CPU needs for right now that are in Jonathan's PR. But yeah, as soon as somebody starts working on the GC or common thread pool and wants that, I'm happy to start pushing on it again.
B
So, yeah, the comment I left, oh yeah, in here, was just to add a footnote that explains that these don't share that kind of very strict utilization definition that the system CPU utilization does.
B
Okay, yeah, and then I promise, right after that, I will start poking people with merge approval to review, and we'll see what new fun discussions come out of that.
B
So Bogdan and Tigran both commented on another PR that they didn't necessarily support languages reporting process metrics: process CPU and... yeah, metrics.
B
They are the collector maintainers, so it maybe makes some sense that they're very invested in the collector reporting and owning those, and think everybody can and should use the collector anyway. For this one, I'm going to open an issue to track Bogdan's concern so that, hopefully, we can just move forward with this PR regardless, and then we'll just start tracking it from the Java side; and then, if someday they inexplicably forbid that, we'll deal with that then.
B
I did notice that the Go language, Go, already has instrumentation that's reporting system and process metrics, so that's going to be one of my arguments: Go would be the most natural language to lean on the collector, and even they have a module that can optionally report those.
B
Is the concern that they could differ, or that it creates confusion? Like, what's the concern? Yeah, if you have both, like, if you're getting both... and I mean, yeah, you certainly can tell the difference between them, because of the resource attributes on them.
B
So I don't... I don't.
B
The argument wasn't that they could differ; it was that having them twice, you know, would be, like, what do backends show? That could be confusing. Well, they get their choice now, don't they?
B
Yeah,
I
don't
know
it
seems
it
seems
overblown
to
me
yeah,
so
I'm
gonna,
hopefully
I'm
gonna-
try
to
shelve
that
over
to
an
issue
where
it
can.
People
can
discuss
it
to
their
hearts
content
without
blocking
us
from
moving
forward.