From YouTube: 2021-12-02 meeting
Description
No description was provided for this meeting.
B
Yeah, although working on a new team, everything is going crazy. Getting to learn everything, which is fun but also stressful, because I really want to get stuff done, but I don't understand this code base at all, and I don't understand our CI/CD pipeline at all. But yeah, anyway.
A
Do we have any OpenTelemetry deadlines that we're trying to hit before the end of the year, or do we get to take it easy also? Well, definitely, yeah.
C
Okay, well, I was just going to mention metrics. So, you know, we talked about this: I think the plan last time was that the next release would be a stable release for the metrics API, and if memory serves correctly, our plan was to have a release candidate ready a couple weeks beforehand. I think we're kind of getting close to that couple-weeks-beforehand point, so that might be a pseudo-deadline type thing.
B
I will also, you know, say: during the spec meeting Tuesday... it must have been Tuesday. It might have been the maintainers meeting Monday, but I think it was the spec meeting. There was definitely some strong pushback from the Go SIG about stabilizing the SDK, the SDK spec. So my guess is that that is going to be pushed off a bit.
C
And that affects us because, well, we wouldn't...
C
Right, but I think, you know, with our API, aren't we getting rid of the global meter provider as part of that? And doesn't that, I guess, the metrics API stabilization, doesn't that go hand in hand to some extent with the SDK stabilization, or can we keep those two completely separate?
B
I believe... I mean, if you want to start instrumenting things with the metrics API, you have a stable API to build instrumentation against. If you actually want to emit and consume metrics, then you're working with something that's not yet 100% stable.

B
So I think the answer is yes, we can keep them separate, the API being most important for instrumentation.
C
I always had to remind myself of this, because the SDK artifact, opentelemetry-sdk, that class has a dependency on the metrics SDK and the kind of regular, stable SDK.
C
Yeah, it's an implementation dependency, right? Yeah, we're...
A
Yeah, Jack, this took me a while, many weeks of being reminded by Anuraag why this was okay, when we were first figuring it out. But it eventually made sense to me: it forces users to opt in by adding that unstable artifact to their dependencies.
D
That's where things get super fun. Yeah, you mean, like: does that translate to pulling in your dependencies' runtime dependencies? I think the answer is you can be aware of them, but I don't know if they're pulled in by default. I'm more familiar with Aether than Maven, though, so I know what Aether does, but I don't know how Maven configures Aether.
D
sbt re-implemented their dependency management system from scratch with this thing called... crap, I forget the name of it off the top of my head, but it follows Ivy conventions, and it's going to be pretty in line with what Gradle probably does there.
D
I think runtime does runtime only and is not transitive compile time. There is a hook, and...

D
Yeah, yeah, as long as what you do translates into Maven language, I think you're fine with regards to transitive dependencies there, because they actually do scope-to-scope dependencies in sbt, because it's Ivy, and Ivy is freaking insane in that way.
D
So you actually have control: you can decide whether or not you want to translate, if you pull in any particular scope, in the config. No one does this in practice; no one ever touches that feature, but it's there, you can do it.
A
So, John, do you have thoughts on the release? Like, what do you still want to do? Do people still want to target a stable metrics API release in December, or push that to January?
C
And Anuraag has a PR out right now to change observe to record, which I think might be the last thing. So just the verbiage of how you record things.
D
Right, but that's cosmetic. So one is cosmetic and one is removing a feature. How much bake time do we need for cosmetic or feature-removal changes?
B
But, given that we are ripping out binding and doing a little bit of a tweak on the API, do we want to hold off one release until we stabilize the API? I don't have a strong opinion one way or another, but from a user's perspective, this release is probably always going to be very painful, to get rid of binding.
D
Okay, before I give you the 30-second rundown, you have to know my stance on this: I don't think we should have removed binding, so I'm going to be biased with what I say, and I'm going to try to avoid being biased with what I say. Okay. So, just so you know where my bias is: I was totally against removing binding, but I understand the reasons behind it. It is not specified exactly; the specification does not say, like, here's...
D
...what a binding API looks like. The specification says that you can optimize how attributes are passed in, and so it gave us room to implement binding. When we discussed this in the metrics SIG, I mentioned that Java has this binding API and, like, are we going to break the binding API? And people were like, no, that's allowed by the spec.
D
Here's the rule that lets you have a binding API, and we'll figure out what this looks like later. But there's a contingent of people who want to attach the binding API to the batch record API that we also removed. And so the plan right now is, basically: if we want to release something that's stable, we don't know if the binding API is going to look exactly the same way it does today if it gets tied into batching, and we need to prototype that.
D
If you look at the binding API, we're effectively doing resource management in Java. That's one of the things it does: it gives us a way to pre-allocate memory for metrics and then not take allocation hits in hot paths. That's the primary purpose of it, and that's why I think it's really important for the agent to make use of it, or have access to it in some fashion. Yeah, anyway, that's where things stand.
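The bind-once, record-in-the-hot-path idea described above can be sketched in plain Java. This is an illustrative stand-in, not the actual OpenTelemetry bound-instrument API: binding resolves the per-attribute-set accumulator once, so the hot path is a bare increment with no map lookup and no allocation.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the bound-instrument idea (names are illustrative).
public class BoundCounterSketch {
    private final Map<List<String>, LongAdder> storage = new ConcurrentHashMap<>();

    // Unbound path: every record pays a concurrent-map lookup (and, on first
    // use of an attribute set, an allocation) to find its accumulator.
    void add(long value, List<String> attributes) {
        storage.computeIfAbsent(attributes, k -> new LongAdder()).add(value);
    }

    // Bound path: pay the lookup once, keep the accumulator reference,
    // then record against it directly from then on.
    LongAdder bind(List<String> attributes) {
        return storage.computeIfAbsent(attributes, k -> new LongAdder());
    }

    public static void main(String[] args) {
        BoundCounterSketch counter = new BoundCounterSketch();
        LongAdder bound = counter.bind(List.of("http.route", "/users"));
        for (int i = 0; i < 1000; i++) {
            bound.add(1); // hot path: no lookup, no allocation
        }
        System.out.println(bound.sum());
    }
}
```

The real API adds lifecycle management (unbinding releases the pre-allocated storage), which is the "resource management in Java" aspect mentioned above.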
E
Okay, let me ask a real dumb question. So there seems to be at least some consensus that this part of the API is important. There's also, if I interpreted what you said correctly, Josh, the expectation that it's at least quite likely that this is going to come back in some form, even if you don't know what that form is yet. Is therefore not the conclusion that, actually, the API isn't finished yet, and so therefore we shouldn't do a stable release?
E
But realistically, how many people are going to use a stable version which is missing major features? What's the benefit of marking this one stable? Or, I guess, a different question is: what guarantees, what support obligations, are we setting ourselves up for by marking this as stable?
D
I see, I see. So I guess what I'm suggesting regarding binding: I think there's a piece of binding that is very, very, very important to Java that is less important in other languages, especially given how they've implemented things.
D
The notion that we want to pre-allocate memory for high-performance throughput is kind of tied to how we've implemented it in Java, and tied to our implementation and expectations of where we run. In Python, it's entirely unlikely that they need a bind API; given how they're doing things, it's actually unlikely to help, even for some of the things they're doing, the way they've designed it. And that's totally fine, we just have different designs. So what I'm suggesting is...
D
The bind API in the overall spec, I think, is less important. It's more important that we have efficient metric collection, and that the names of the meters and the instruments that you get are meaningful. In Java, there's a specific technique that we use to make it also efficient, because the thing that is spec'd is the most general use case of how to get measurements in. So hopefully that helps answer this. My personal opinion is there's an aspect of optimization in Java that's important, that we need to account for, and I personally don't think this has to be fully specified, because I think it's going to be slightly different per language. It might be similar across some, but it'll be different.
D
We can still change it over time if we think that that bit of it is not correct, around how to do more efficient memory management. If we figure out a way to optimize the API that we have, beautiful, but that API doesn't really change.
C
Well, so I was just gonna add a couple of things. One, and I think Josh just said this, but, you know, releasing the API without this part: when we add this part, it's going to be backwards compatible, so we're not going to be making any breaking changes by adding it back.
C
It just makes it more efficient. And the other thing, Josh, in terms of the impact of not having this: you're focusing on allocations, but without the bind API, if you are going to be recording against the same set of attributes over and over again, you're still going to take the allocation hit only once, the first time it's recorded. The next times, you're still going to have a performance hit, because you have to do a lookup against a concurrent hashmap, but it's not an allocation.
D
Get GC'd and reallocated, right, right, yep. And you also take an additional lock for the concurrent map lookup. So basically bound instruments are twice as fast as regular instruments in practice, because effectively you have one lock instead of two.
B
I'm really brainstorming, thinking out loud here, but could we do some sort of optimistic bind internally, and then, if we detect someone doing a recording with explicit labels, unbind that instrument, or unbind that binding?
D
So the code that Jack wrote, I think, fixes our cardinality issues, but what it does is it actually makes binding more important, because we're doing this GC every single delta cycle. So we could possibly loosen that restriction, and then we might get to a more optimal sweet spot in terms of allocating memory for known things. It doesn't give you that guarantee of no allocations that was nice about binding, but yeah, that might not be important.
D
So we should already be recommending to users that they pre-instantiate attributes as much as possible, right, to avoid allocating an attributes array when they record measurements. That's just something we can recommend users do, with examples and all that kind of stuff. So what this would mean, effectively, is in our cleanup code around that concurrent hash map.
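The pre-instantiation recommendation above can be sketched as follows. The attribute factory here is a hypothetical stand-in (not the real OpenTelemetry `Attributes` API) that counts constructions, just to make the difference between the two styles visible.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the "pre-instantiate attributes" advice: build the attribute set
// once outside the hot path instead of creating a new one per measurement.
public class PreallocatedAttributesSketch {
    static final AtomicInteger ATTRIBUTE_SETS_CREATED = new AtomicInteger();

    // Hypothetical stand-in for an attributes factory: counts each construction.
    static List<String> attributesOf(String key, String value) {
        ATTRIBUTE_SETS_CREATED.incrementAndGet();
        List<String> attrs = new ArrayList<>();
        attrs.add(key);
        attrs.add(value);
        return attrs;
    }

    // Stand-in for counter.add(value, attributes).
    static void record(long value, List<String> attributes) {}

    public static void main(String[] args) {
        // Anti-pattern: one attribute-set allocation per recorded measurement.
        for (int i = 0; i < 100; i++) {
            record(1, attributesOf("http.route", "/users"));
        }
        int perCall = ATTRIBUTE_SETS_CREATED.getAndSet(0);

        // Recommended: allocate once, reuse in the hot loop.
        List<String> routeAttrs = attributesOf("http.route", "/users");
        for (int i = 0; i < 100; i++) {
            record(1, routeAttrs);
        }
        int preallocated = ATTRIBUTE_SETS_CREATED.get();
        System.out.println(perCall + " vs " + preallocated);
    }
}
```

Note this only removes the attribute-set allocation; the per-record map lookup discussed earlier remains either way.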
A
Josh, is there anything we should be doing in the instrumentation currently? I don't believe we're using bindings currently, because we're always pulling attributes from the spans, or the same attributes that are being passed in sort of generically.
D
For certain things: most of the stuff that the JFR prototype has, I think, can be pre-bound, if that makes sense. It's almost everything in there; like JVM metrics, you can pre-bind all your attributes for efficiency reasons. We do pre-binding in all of the observability for the SDK itself, like around reporting traces.
D
Yeah, those by nature need to be dynamic, but those are also higher cardinality. So, yeah, the best way to phrase this would be: if you're instrumenting something like Spring, or you have a known route and that route is stable, if you can pre-bind the route and report against the route, that's ideal. If that's not feasible, which right now it may not be, then it's okay to... so, again, the API we're exposing is the baseline API.
D
That
should
be
good
enough
for
common
use
cases,
and
then
this
binding
api
is
really
about
optimizing
and
just
reducing
throughput
in
some
really
important
scenarios,
and
so
like.
We
always
need
the
existing
api,
as
the
fallback
when
we
can't
bind
and
and
requests
where
you
get
like
just
a
url
is
a
great
example.
You
know
if
we
want
that
as
an
attribute
or
like
a
urlish
thing,
you
can't
use
the
bind
api.
It's
it's
not
feasible
there,
but
also
that's
where
we
start.
Relying
on
jack's
high
cardinality,
cleanup
code.
B
So, just to... I have to take off in five minutes, but just to wrap up: I think we kind of have two options. We either...
B
...or we hold off, and wait to declare the API stable until we've got a solution for this performance issue, which might or might not involve bound instruments, if we can figure out some other way to do it internally.

B
The third option was basically the same as those two, but hold off for a month. So it's four options: hold off for a month to let things stabilize.
B
And then maybe it doesn't matter, we don't have to hold off, because we're not saying it anywhere. So it feels like there's a few options there, and I'm not sure. I think maybe we should revisit this this evening with Anuraag, make sure all the players... and I don't know if Josh and Jack will be able to make it, but...
C
Yeah, I'll be there. I do think that there's value in releasing the API even without bind. I know folks are saying that it's not useful without this high-performance feature, but, you know, one of the primary ways that metrics are being used today is to instrument HTTP servers, and, as we just discussed, those have to use the traditional API, so there's definitely value there.
D
Yeah, yeah, I agree the API we have is still valuable. There's a dance of how valuable, and is it an MVP, right? And I hate that term, because usually what it just means is people cut scope to the point where something's not ever going to be adopted. What I'm suggesting here is I want to understand the V part of it, like:

D
Are we releasing something that people will adopt, or are we just releasing something that we scoped down to what we could agree on? And I think there's a dance here that we have to figure out.
D
Right, right, and actually it doesn't stop bind either. So I think, in terms of marking the API stable: for common user usage, we have a decent API for users to produce custom metrics in these one-off scenarios. For the high-performance stuff, as long as our instrumentation can use that 2x performance boost for now, and we can figure out how to expose an API later, that's probably reasonable.
D
So if we wanted to mark 1.10 stable as "this gives you a way to add custom metrics on top of agent-based metrics", and we expect that most of the scenarios where users add custom metrics are not bound-instrument scenarios, because most bound-instrument scenarios should be done by, like, the JFR thing or agent-based solutions, for these built-in, known bits, I think that's reasonable, and I want to throw that out there as, like, the straw-man proposal.
C
And even for folks that do have access to the SDK, it's really bad: you're going to have to cast the interface to, like, an SDK version of it, and I think even then the accessor for bind is package-private right now, so we'd have to adjust that to make it public on the SDK's interface.
F
I think you could still use method handles or reflection, but yeah, it would be painful. Yeah, yeah, definitely.
D
Are you seeing performance issues with the metrics today, with our current preview releases? Because, as far as I know, you're not using... the SDK itself was using bind for its own observability, and Jack fixed that when he removed bind, so that we can still use it in the SDK itself. But are we seeing any concerns about metric-related overhead for, like, this HTTP instrumentation? Because, again, that's probably going to be the most significant source of memory usage for metrics initially in the agent.
A
I have not benchmarked that. My general thought is that it's more efficient than collecting spans on every request, at least, so it's probably not the major bottleneck, at least with 100% sampling. It could be noticeable for people who are using, you know, one percent sampling, small sampling rates, which mitigates the span cost but not the metrics cost.
D
All right, well, yeah. So we're talking, like, the difference between 10 nanoseconds and 20 nanoseconds here, per record or whatever. In my performance benchmarks it's nowhere near the cost of making a span, absolutely, but I'm also more sensitive to metrics, and possibly some of this is my Googliness slipping out, in terms of the expectation of what this needs to be.
D
Every single hit has to lock for some reason, right? That's what I've been kind of prototyping the performance on for the implementation, and there you see that 2x bump for bound. And that's where you see bound showing no memory allocations, but not unbound, because again I was doing a new attributes every single time.
D
You see a bit more GC churn. So it could be that in practice this isn't that big a deal, because in practice most of the metrics people care about are HTTP latency metrics, and HTTP latency metrics inherently have high cardinality, and we operate pretty much as best as we can right now in that scenario, and that's what our API is targeted at. We could say that that's totally fine and that I'm over-optimizing; that could easily be the case here with bound. But yeah. So.
A
It makes sense for the HTTP request metrics; I mean, I agree that bound is not that important there, and it doesn't sound like we could even really use it that much anyway. But what about JFR, Ben? For the JFR metrics, how important do you think binding is? Like, what's the throughput? How are these being recorded? Are there just tons of metrics? Is this a huge metric stream where that optimization would be important, critical?
E
I think we are much too early to talk about optimization and what the appropriate thing for the JFR metrics is. You know, the enormous elephant in the room is we still don't have a good answer for allocation metrics, which are probably the noisiest things. Yet, you know, the thread-group stuff is there, but it's a stub, right? We know we need to do something, but I think we're probably orders of magnitude away from being able to worry about a factor of two.
C
Question for the instrumentation folks: how often do bound instruments show up in instrumentation? Like, I can imagine, you know, runtime metrics, those can be pre-allocated; all the HTTP stuff, probably not. Where else do bound instruments show up, and how often are those paths hit?
D
I think where you see this often is when you're measuring queues and, like, cache hits. So if you have any kind of in-memory cache that you want to throw observability on, you kind of pre-allocate "this was a hit", "this was a miss" in the attributes. You pre-allocate that memory, and then you have a real fast increment on the cache hit and miss, because that's also in your hot path. Again, I'm speaking from internal instrumentation.
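The cache-hit/miss pattern just described can be sketched like this. Both possible attribute sets ("hit" and "miss") are known up front, so both accumulators are bound once at construction and the hot path is a bare increment. Illustrative only, not the OpenTelemetry bound-instrument API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// In-memory cache with pre-bound hit/miss counters.
public class InstrumentedCache {
    private final Map<String, String> cache = new HashMap<>();
    private final LongAdder hits = new LongAdder();   // pre-bound: result=hit
    private final LongAdder misses = new LongAdder(); // pre-bound: result=miss

    String get(String key) {
        String value = cache.get(key);
        if (value != null) {
            hits.increment();   // hot path: no lookup, no allocation
        } else {
            misses.increment();
        }
        return value;
    }

    public static void main(String[] args) {
        InstrumentedCache c = new InstrumentedCache();
        c.cache.put("a", "1");
        c.get("a");
        c.get("a");
        c.get("b");
        System.out.println(c.hits.sum() + " hits, " + c.misses.sum() + " miss");
    }
}
```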
D
So
I
assume
this
is
going
to
apply
when
we
start
doing
that
kind
of
metric
level
instrumentation
externally.
But
my
question
for
trask
is:
have
we
done
any
like
metric
specific
instrumentation,
or
are
we
just
right
now
taking
again
the
the
sweet
spot
of
where
we've
already
instrumented
for
traces?
We
also
get
metrics.
A
Yeah, so there's that, plus the only other thing besides that is the JVM metrics that we capture at whatever the... but those are just being scraped, so performance is pretty irrelevant.
A
I think anything where it's being accumulated somewhere else and we're just scraping, that's fine. That's why I was kind of curious about the JFR metrics: they're a little bit more event-based, where we're capturing an event stream and converting that to metrics, so it was potentially more sensitive.

A
Cool, well, let's... oh, go ahead.
E
Okay, so imagine you have a typical kind of enterprise-y app. It's going to have a bunch of thread pools in it, and the names will be something like, in the JVM case, pool-N-thread-X, right? All threads within that pool are basically doing the same job, typically. So rather than having an individual fan-out to individual threads...
E
You
want
the
allocation
to
be
recorded
on
that
pool,
because,
if
you're
processing
and
you've
got,
if
you,
let's
suppose,
you've
got
a
regression
where
you've
suddenly
added
a
bit
of
code,
which
does
something
stupid
and
suddenly
is
allocating
way
more.
It's
going
to
show
up
clearest
at
a
pool
level
rather
than
an
individual
thread
level.
E
So
for
both
cardinality
reasons
and
also
clarity
of
signal,
you
want
to
aggregate
certain
groups
of
threads
together
so
that
they
they
appear
as
one
that's.
Basically,
the
zeroth
order
approximation.
There
are
other
things
you
might
want
to
think
about
that
at
some
point
you
may
actually
want
to
have
command
and
control
to
say,
okay.
Well,
we've
noticed
something
weird
is
happening
with
this
jvm.
E
Can
you
now
stop
grouping
things
and
actually
show
us
the
data
for
the
individual
threads,
but
that's
that's
down
the
road
of
place,
a
piece
the
the
first
thing
to
do
is
to
just
just
have
it
show
up
and
group
groups,
similarly
name
threads
together.
D
So the idea is, if it's long-lived, you take the hit to bind it once, allocate the memory, and then it lives for the life cycle of that object. So if a thread group comes and goes, which I think is unlikely... but when you first discover the thread group, you can bind; if you decide that that thread group is gone and you need a new one, you can unbind.
E
Yeah,
yeah,
and-
and
this
is
also
important
for
frameworks
like
drop
wizard
because
drop
wizard-
has
a
policy
that
it
it
kills
threads
off
and
recycles
them
over
like
every
60
seconds,
whether
they've
done
anything
or
not.
So
you
get
an
explosion
of
thread
names
with
drop
wizard.
D
And, like, binding is more important for something like Tomcat, where you're loading in an application and serving against it, and you might want to bind some attributes when that application gets loaded and then unbind them when you stop serving that application. But again, that's why I think this is more important for a JVM-type use case than for something where you have a single application binary that lives on its own; it's less valuable in that case, anyway, in other languages and other ecosystems.
C
You know, we're gonna have this conversation tonight, so we're gonna bring this up again later in tonight's meeting. But I guess one last question for Josh is: do you have any sort of feeling for when the batch API could get fleshed out, such that we could, you know, re-add the bind API in Java?
D
Pre-allocate... but effectively the TL;DR difference here is: you would have a method that you call to pre-bind instruments, but you would bind multiple instruments and multiple attribute sets all at once, with some sort of a named handle that you can use to refer to the things you've bound, and then you can record the entire set of instruments in one go. And this would be both async and sync, possibly.
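A purely speculative sketch of the batch-bind shape just described: bind several instruments up front, get back a handle, and record the whole set in one call. Every name here is invented, since the spec discussion had not produced an actual API at the time of this meeting.

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical batch-bind handle: all names and shapes are invented for
// illustration of the idea, not taken from any specification.
public class BatchBindingSketch {
    static final class BatchHandle {
        final LongAdder[] instruments;

        // "Binding" pre-allocates storage for every instrument at once.
        BatchHandle(int instrumentCount) {
            instruments = new LongAdder[instrumentCount];
            for (int i = 0; i < instrumentCount; i++) {
                instruments[i] = new LongAdder();
            }
        }

        // Record one measurement per bound instrument, in one call.
        void recordAll(long... values) {
            for (int i = 0; i < values.length; i++) {
                instruments[i].add(values[i]);
            }
        }
    }

    public static void main(String[] args) {
        // e.g. request count, bytes in, bytes out for one pre-bound route
        BatchHandle handle = new BatchHandle(3);
        handle.recordAll(1, 512, 2048);
        handle.recordAll(1, 256, 1024);
        System.out.println(handle.instruments[0].sum() + " requests, "
            + handle.instruments[1].sum() + " bytes in");
    }
}
```

The struct-like flavor of this shape is exactly the concern raised later in the discussion about whether such an API fits Java idioms.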
D
I'm
not
a
fan
of
that
api
personally
and
I,
I
think,
there's
a
lot
of
things
that
make
it
easy
for
us
to
implement
and
not
so
great
for
users
to
use.
So
I
think
we
need
to
we
need
to.
D
I
I
have
my
personal
feeling
is
basically
we're
going
to
start
discussing
in
january
and
it
won't
be
until
march
until
you
see
a
specification
that
we
could
implement,
but
we
should
be
prototyping,
like
I
think,
it'd
be
reasonable
for
us
to
say
you
know
what,
let's
control
our
destiny,
let's
prototype,
what
we
think
the
bind
api
needs
to
be
if
we're
going
to
tie
it
with
batch,
because
we
know
about
batch
record,
we
know
about
bind,
we
know
our
use
cases
and
our
usages,
let's
prototype
it
and
we'll
put
the
proposal
out
there
like.
D
It's very, very, very struct-based: you pre-allocate a struct with a bunch of bound instruments inside of it, and then that entire struct gets treated as an entity, almost like a transaction. It just doesn't fit Java, and I think that's where my fear is.
A
All right, cool.
A
I don't know when... this has probably always been here, but we've just never noticed it, or maybe it's a new feature. But one of the problems we've had is that patch releases have always run from the YAML files in main and, of course, that drifts, and then we can't make patches. But it does appear that we can run the YAMLs from a specific branch.
A
So,
and
this
is
good
news
for
our
patch
release
process,
we
need
to
automate
some
more
stuff
nikita,
as
you
mentioned,
yep
we're
also
using
now
release
branches
for
both
main
releases
and
patch
releases,
so
that
we
can
sync
the
versions
in
the
tags
so
that
the
tags
actually
have
the
examples
in
the
tags
build
against
that
tag
and
the
docs
are
updated.
A
So
that's
that's
nice,
but
for
now
it's
a
little
bit
more
manual
until
we
automate
that
I
wanted
to
call
out
that
the
jvm
metrics
working
group
kicked
off
on
monday.
A
Then
I
thought
that
was
you
know
it
was
great.
You
know
a
lot
of
kind
of
you
know
for.
E
Yeah, I thought it went really well. It's nice to get some different opinions and some different voices; I thought there was a good spread of people. I'm reaching out to some more folks, and I'm hoping there's actually going to be even more end-user and SRE representation, the folks that actually use this stuff. I thought we did pretty well, but I want to make sure that we keep focused on building and delivering something that contains the information that people actually want.
E
So
so
I'm
trying
you
know
if
anyone
has
any
any
good
contacts
in
the
sort
of
you
know
the
working
sre
space
who
might
who
might
be
prepared
to
come
along
and
and
give
us
their
input.
Then
please
encourage
encourage
white
participation.
A
Awesome
and
a
bunch
just
just
briefly
the
last
couple
weeks,
since
we
did
a
meet
last
week,
a
bunch
of
been
a
bunch
of
things
even
with
the
holidays,
a
very
busy
repo.
Thank
you.
Everyone.
A
Finally merged this. This was that lambda problem in the early releases, like 1.8.0_31 and prior, and we were debating for a while whether we should even worry about that and support it, and we finally did decide the workaround wasn't that bad. And we have a test now running against, like, I forget, 1.8.0_5 or something really early.
A
So
that's
cool
the
logging.
A
logging
is
moving
along
thanks
to
jack.
We've
got
a
logging
appender.
Now
that's
going
through
the
logging
jack.
Is
that
going
through
the
logging
append?
Do
we
have
the
like
the
appender
api
concept
broken
out?
I
forget.
C
No. So, you know, the appender API: I think the current consensus is that if it exists, it shouldn't live in opentelemetry-java. It should just be an opentelemetry-java-instrumentation-specific API, because that's who's going to be utilizing it. That does not exist yet, and I don't think it needs to exist until we get closer to actually wanting to enable this type of routing of logs through OpenTelemetry in the agent itself.
A
Awesome
we
finally
got
through
our
android
problems,
bunch
of
changes
required
for
that
around
what
cache.
There's
all
these
problems
around
the
the
caffeine
cache
and
different
caches
of
what
supports
java
8
and
what
doesn't
use
unsave
and
what
works
with
android
so
really
happy
to
have
that
done,
and
we
have
the
checks
build
checks
in
place
now
to
prevent
breakage
of
android.
A
We
were
only
using
it
in
the
instrumentation
in
a
non-critical
place,
so
we
just
replaced
that
with
atomic
lung.
But
I
know
in
the
java
repo
in
the
metrics
api.
The
plan
is
to
split
and
use
long
adder
when
it's
available
basically
non-android
and
have
a
separate
android
path
for
atomic
long,
because
we
don't
want
to
sacrifice
the
performance
in
the.
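The split just described comes down to this: `LongAdder` scales better under contended concurrent increments because it stripes across internal cells, while `AtomicLong` is a single CAS-contended word but is available everywhere, including older Android API levels. Both produce the same totals, as this small demo shows:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

// LongAdder vs AtomicLong: same result, different contention behavior.
public class CounterChoice {
    public static void main(String[] args) throws InterruptedException {
        LongAdder adder = new LongAdder();
        AtomicLong atomic = new AtomicLong();

        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                adder.increment();       // striped; cheap under contention
                atomic.incrementAndGet(); // single CAS word; contended
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Totals agree either way.
        System.out.println(adder.sum() + " " + atomic.get());
    }
}
```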
A
...API. There's a new sort of AWS SDK library instrumenter that automatically gets applied via a service provider, and kind of the reason for calling this out is we're renaming: there are two other instrumentations we have that also do this, where they're library instrumentations that are enabled just by being on the class path. So we're calling these auto-configure instrumentation libraries, and we'll also have a separate version.
A
I
nikita
brought
us
into
the
got
a
gradle
enterprise
subscription
for
us,
so
this
is
allowing
us,
it's
really
cool.
Let
me
just
show
you
the
I
think,
laurie
linked
to
it,
how
we
can
see
flaky
tests.
A
Less
flaky
yeah,
so
basically
it
aggregates
over
time
all
of
our
builds
and
we
can
see
the
flaky
tests
and
we
can
see
the
ones
that
are
the
most
flaky
and
so
lori
was
picking
off.
I
think
one
of
the
we'll
keep
our
fingers
crossed,
but
hopefully
that
will
knock
off
a
couple
of
these
a
few
of
these
top
flaky
tests.
A
This
was
a
big,
very
cool
change.
Onorag
had
driven
in
the
api
in
the
in
the
sdk
of
using
ok,
http,
directly
passing
grpc
protobuf,
all
this
manual
marshalling.
A
We
fixed
exemplars,
so
exemplars
now
work
in
the
latest
agent
and
instrumentation
releases
very
la
working
through
just
kind
of
last
small
changes
to
the
instrument
or
api.
It
feels
like
it's
starting.
It's
getting
pretty
stable,
stabilizing
nicely.
A
We
do
this
in
a
few
places
now,
which
is
great
as
far
as
having
no
impact
on
customer
apps,
which
can
do
reflection
and
by
introducing
our
own
methods
and
interfaces
and
things
into
hierarchies.
We
can
mess
things
up,
and
this
one
had
the
nice
side
effect
of
we
can.
We
could
remove
this
with
hacked
some
proxy
java
proxy.
A
The
java
proxy
instrumentation
or
java
proxies
to
make
some
edge
cases,
work,
which
is
now
not
needed
and
then
upgrading
test
containers
version.
The
this
was
done
so
that
we
can
use
the
nikita
has
been
playing
with
test
containers
cloud.
A
Any
thing:
oh
nikita
mentioned
that
that
allows
him
to
run
with
a
lot
more
parallelization.
Currently
in
the
instrumentation
repo,
we
have
to
throttle
the
gradle
default
parallelization
a
lot.
Otherwise
things
fail
a
lot,
and
so
by
offloading
the
containers
to
the
cloud,
I
was
able
to
get
a
lot
more
throughput.
A
This
is
just
a
beta
service,
invite
only
so
we
don't
have
a
real.
We
haven't
hooked.
This
up
to
the
build
or
anything.
G
No idea, actually. I mean, probably we will still rely more on proper Gradle caching than Testcontainers, but for those cases when we change something fundamental, like tooling or the API, then yeah, probably.
A
Yeah
because
our
our
nightly
build
even
with
even
the
not
even
the
one
that
uses
gradle
cache
still
takes
like
an
hour
and
a
half
every
night.
Well,.
G
I somehow missed that. I will have to take a look at that.
A
I
will
I
listen.
I
think
I
opened
an
issue
a
while
ago
because
it
went
it.
There
was
one
point
I
think
a
few
months
ago
where
it
was
going
great
and
then
it
went
like
up
to
an
hour
or
something
I'll
ping.
You,
the
the
details.