From YouTube: 2021-07-16 meeting
B
Ten o'clock, one o'clock — it just depends on how many minutes. So if I'm in my pajamas, you know I have long meetings. If not, then I had the chance to take a shower. Got it. And there's three levels: you might notice my background is at a co-working space. That means I had almost no meetings and I had plenty of time to even get over there. And if I'm dressed but at home, that means a medium meeting load. Got it.

A
Does Amazon pay for co-working?

B
Not technically, but as far as I know they're still paying for my commute, even though I don't commute, and that is about the same price. So I sort of consider that as being covered. Yeah, got it. It's, I guess, a 30-minute train ride or so, which adds up to around 100 bucks a month or so for a metro pass.
D
I put your name on something and you didn't show. I know, I heard we had an all-hands. I'm sorry, I don't know. Let's see, so we had... We talked about nested client spans. So Ludmila, who used to work in the Azure Monitor group and now works in the Azure SDK group — they are instrumenting with OpenTelemetry, and they've come across issues because they have their own HTTP clients. They produce HTTP client spans, but then they use Netty or OkHttp under that.

D
So they get duplicates, because there's our manual... right, they're not sticking it into our context key, and even if they do stick it into the context key, they weren't propagating the context down. Right.

D
They were assuming that they were, like, a terminal span, and so they wouldn't make their span current. So that's sort of where it initially came from, and then we kind of talked through — she and I had talked through earlier this week — and we presented a proposal to the group here, and we're looking to get buy-in from the Java folks to prototype.

D
This, in the Java instrumentation, and then take it to the spec. So basically the idea is: instead of treating all client spans together and suppressing all nested client spans, kind of group them into layers that correspond to semantic conventions — so database client span, HTTP client span, maybe a messaging or RPC client span — and allow nesting of those, right. And that could still be a toggle.

D
I think a user preference, whether you wanted HTTP under database. But we would suppress then any duplicate of the same layer, so HTTP under HTTP doesn't really provide value.
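The per-layer suppression idea above can be sketched in a few lines. This is a hypothetical simplification, not the real opentelemetry-java-instrumentation API: a "context" here is just the set of semantic-convention layers that already have an active client span, and only a duplicate of the same layer is suppressed.

```java
import java.util.Set;

// Sketch of the proposal: suppress a nested client span only when its
// semantic-convention layer (http, db, messaging, rpc) already has an
// active client span. Names are illustrative.
public class LayerSuppression {
    public static boolean shouldSuppress(String layer, Set<String> activeLayers) {
        // http under db (e.g. a REST-based database client) stays visible,
        // but http under http is treated as a duplicate.
        return activeLayers.contains(layer);
    }

    public static void main(String[] args) {
        Set<String> active = Set.of("db");
        System.out.println(shouldSuppress("http", active)); // kept: different layer
        System.out.println(shouldSuppress("db", active));   // suppressed: same layer
    }
}
```

A real implementation would carry one context key per layer rather than a set, but the matching rule is the same.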
D
We did talk about the Splunk use case, and it sounded like the Splunk customer who wanted to see everything really was more wanting these additional layers underneath the HTTP client span, which would still be, you know, supported by this kind of a proposal.

D
And potentially — I'm not sold one way or another on that — right now we model those as internal spans, and then the JDBC ones as client spans.

D
Right, that's a good question. I'll clarify with Ludmila what she's thinking. It may also be, like, at the messaging layer you could propagate the messaging header, but then at a lower level you might also propagate.

B
Yeah, messaging, I think. But just even keeping it simple, like this — database, HTTP — there's no messaging thing, right? Right. And so one option is it doesn't matter, and we just have some convention that the peer service is set to the same thing for all of these; that sort of might work. Okay, because the main point is to make sure that the peer service span is the one that's being propagated, because that's how service maps generally work. As long as that precondition is met, it doesn't matter that much which one is propagated, I guess.
B
You wouldn't have, like, start/end times. You could have just times with events, but it's not as nice. I still... you know, this nested client span thing — there's no good answer. That's why this issue has sort of been open for the past year. And I mean, as long as everything's configurable, I guess just giving the user the choice of what works for them and giving them all the options might be the only way to go, rather than trying to make any one call.

C
So it's almost like — I think it's very analogous to a logger, where some users want to see it at finest, some only want to see it at info, and others yet still only want to see it at warn, right? It feels to me like the same problem, but I don't think that the spec encompasses that tunability of granularity for spans.
D
Yeah, that's interesting, because we had the opposite at one point: we only captured, like, JDBC calls if they were inside of a request. But that caused problems, because if somebody has some background thread or something that we're not tracing, it's a lot better to at least capture those JDBC requests.

B
RPC — and so that's the thing, right? I don't think there should be a separate span for RPC, and so then are you supposed to add RPC conventions to this? I mean, what we do right now is add RPC to this layer, but if we were to not suppress HTTP, should that also have RPC? Should one of them not have it? That becomes sort of hard to...
D
Cases... I'm not sure I'm following, like, I guess.

B
I assume we're trying to consider this theoretically. Yeah, like, I don't know — I'm just sort of wondering what we want. I mean, of course this is also fine, but it's just hard for me to reason about why in one case we would have two spans and in another case we'd only have one, because they're suppressed.
A
Just thinking outside the box for a moment — is there any way we could... and I know this is just throwing a giant monkey wrench into things, but with the merging it couldn't get any more messy. Go for it. No, I'm just thinking about the merging and losing the timing information: could we have some automated way, when the merging is happening, to generate an event on the span — like a "merge span start" event and a "merge span end"?

A
I mean, you'd have to... there's the rest of the span API that you have to figure out. Like, if that inner one is also generating events, I guess you just add more events into the existing span. I don't know if there's anything... I guess if it's setting attributes, what do you do with that? Yeah.
D
Right, so you lose sub-nesting.

D
Right, but you would lose that event on the sub-span, or you'd have to create some kind of link. How would you know that that was at the sub-span level versus... well.

D
I guess, coming... I'm still a little stuck in the, like, distributed-system tracing world.
D
I mean, it makes sense to me that if the user is doing it, they're just going to create one span that has both, because they're just making one call — it's one call site and they're doing it at once — versus having multiple layers in the code itself.

D
I think, I mean, at least what makes sort of sense to me as a default would be to, by default, suppress like kinds.
D
So I think Ludmila's view on this is that there are Azure SDKs — similar to DynamoDB — that emit this database call, and she does want... and I think, you know, sometimes it becomes multiple HTTP calls, and so she thinks that that visibility is...

D
...an option, so.

D
So my initial thought was we could, you know, kind of split these out per semantic convention — have a different key for each one. Probably we would also need to have a general one, if we want, in order to support suppressing all nested client spans of any type without having to check each one.
D
And so I... oh, I talked to Ted, I talked to Ted after you had to leave in your Wednesday spec call about this.

D
No idea... I had something... so we would put the... so yeah, the clients... oh yeah. He was concerned about, right, the extra work that instrumenters would have.

D
Yeah, so in the Azure SDK case, for example, they emit their own HTTP spans, but they also use Netty or OkHttp under the hood. So that ends up... those duplicates are... they shouldn't emit this HTTP span.
B
Like, maybe then we come to a case where you do want two spans of the same kind — maybe HTTP probably isn't a good one, but messaging: I guess we've already seen messaging spans nested inside messaging spans. So I think the strategy of suppressing just based on semantic conventions probably also isn't going to be very general.

D
That was actually a question that Ted asked also: why would you — if your HTTP client wraps another HTTP client — why do you instrument the top-level HTTP client?
D
So yeah, the idea is to suppress things that really look like duplicates. Yeah.

D
Right, right. And actually, that would be...

D
...something we could do: the AWS SDK could add itself, basically, as an HTTP client span, for example, to suppress nested HTTP client spans.

D
URL based on RPC... yeah, I mean, like, if you stick the... say, you could populate the HTTP client span key.

D
Reasonable. What do you think of ORMs? Because, like, Hibernate calls — today we mark them all internal: Hibernate queries, Hibernate inserts, saves. Do you think they should be database?
E
It is hipster Pittsburgh distillery whiskey, brewed with ginger, so you know... but it's pretty intense. I like it.

A
But no "e" means it's from Scotland and it's proper whisky.
E
So I noticed that in my benchmark there's a whole bunch of GC being run, and when I dove into it, it turns out that I made this new measurement class where I actually box my doubles and longs, which you're not supposed to do. And escape analysis was my friend for sums and for gauges, but not for histograms.
A
Does anybody know... yeah? No, exactly — that's exactly why! Just out of curiosity, what version of Java and what virtual machine are you running on for your benchmark?

E
Do we want to just convert longs to doubles and say, you know what, that's it, we're taking a stand as Java? That's certainly what they do in Micrometer.
D
You mentioned, Joshua, I think you said something about... is that why those native methods were used?

E
So in the existing metrics SDK there is a method called recordDouble and there's another method called recordLong, and they're embedded all the way through, and so you can't even... I mean "primitive" rather than "native".
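The reason separate recordDouble/recordLong methods get wired all the way through can be shown with a tiny sketch (class and method names are illustrative, not the actual SDK types): primitive overloads keep values unboxed on the hot path, while a single Number-taking method forces the boxing that caused the GC churn described above.

```java
// Illustrative measurement class: primitive overloads avoid allocating
// Long/Double wrapper objects on every recording.
public class Measurement {
    private double doubleSum;
    private long longSum;

    public void recordDouble(double value) { doubleSum += value; } // no boxing
    public void recordLong(long value) { longSum += value; }       // no boxing

    // The problematic shape: every call site boxes its primitive into a Number.
    public void recordBoxed(Number value) { doubleSum += value.doubleValue(); }

    public double doubleSum() { return doubleSum; }
    public long longSum() { return longSum; }
}
```

Escape analysis can sometimes eliminate the wrappers (as it did for sums and gauges in the benchmark above), but it is not guaranteed, which is why keeping both primitive paths matters.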
D
I was missing out on some amazing stuff in the SDK repo, but I didn't...

E
Yeah, okay, so it sounds like there's two options: one is don't care about longs and just stick with doubles.

E
The other option is to wire the primitives all the way through and just kind of unbox in our APIs.
A
So what Josh and I have been trying to talk about with the new API is to make it so that normally, if you're building a counter, you're going to basically get the long one by default, and if you're building a histogram, you're going to get the double one by default — but there is, like, an escape hatch you can use to switch to the maybe less standard one. It does still mean, then, the SDK has to be able to wire both types all the way through.
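The builder shape being discussed can be sketched as a minimal self-contained example. The real OpenTelemetry Java API evolved separately; the class names and the ofDoubles() escape hatch below are illustrative of the idea (counters default to long, with an explicit switch to the less common type), not the actual library surface.

```java
// Sketch: a counter builder that defaults to the long variant, with an
// ofDoubles() escape hatch. A histogram builder would mirror this, defaulting
// to double with an ofLongs() escape hatch.
public class InstrumentBuilders {
    public interface LongCounter { void add(long v); long sum(); }
    public interface DoubleCounter { void add(double v); double sum(); }

    public static class CounterBuilder {
        // Default path: long counter.
        public LongCounter build() {
            return new LongCounter() {
                private long total;
                public void add(long v) { total += v; }
                public long sum() { return total; }
            };
        }
        // Escape hatch: switch to the double variant.
        public DoubleCounterBuilder ofDoubles() { return new DoubleCounterBuilder(); }
    }

    public static class DoubleCounterBuilder {
        public DoubleCounter build() {
            return new DoubleCounter() {
                private double total;
                public void add(double v) { total += v; }
                public double sum() { return total; }
            };
        }
    }
}
```

As noted in the discussion, the convenience of a typed default at the API layer doesn't remove the cost below it: the SDK still has to carry both primitive paths end to end.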
E
If we have one of those weird cases with, like, long... you know, not-long counters, then we can make it possible — it just doesn't necessarily have to be easy. It's more the question of: if we have long counters and everything else is doubles, I think no matter what we have to deal with this. The notion of converting long to double probably isn't viable, like, against a user's expectation.

E
If I have a counter and I'm reporting longs, and they start seeing doubles on the wire, is that going to surprise them or not? If the answer is no, that would not be surprising, then it's fine — we can just convert to doubles. But if the answer is yes, that would surprise them, then it sounds like I should just wire longs and doubles all the way down through, and try to get the GC performance better.
E
Also, what JVM should I be benchmarking against, if you have suggestions?

A
No, I think HotSpot is fine. I didn't know whether J9 maybe had better escape analysis or worse escape analysis. I just don't know.

E
Yeah, I'm actually trying to target HotSpot with no flags, if possible, and see what the baseline performance is there, as the standard. Anyone can go and customize later, sure, but because that's the standard, I figure for people who don't care it should do a decent job.

B
You know, generally, if you have time you can benchmark anything you want, but Java 11 is a reasonable one to focus on. Okay — like, there have been a lot of performance improvements in Java 9 especially, so Java 8 and 11 tend to perform pretty differently. But at this point I think a baseline focused on 11 is fair.
D
Oh, our ingestion service updated SSL certs in, like, February. Yeah.

C
I have two very small things before I duck out. Anuraag, I appreciate your input on that sampler contrib story, the one that got bumped over to... I think that idea is awesome. I fell back to name initially — I thought attributes initially, which kind of felt misguided, so I was like, oh, name is better — but I like this context attribute; that was the example that I put in there. Have you seen it yet?
C
Better, yeah. I don't know when we'll get to it, but I think that's solid — that's better than using name anyway. There's a very old issue in the spec...

A
...about how to do this. I guess it's kind of the opposite of suppression: rather than a stop key, it's like a start key. It's like an opt-in — it's the same thing, just the other way around. Yeah.

A
There is an old spec issue where people have meandered around and not reached any consensus, so you might want to at least take a look at it. But I think for contrib that's totally fine — and it's also a great place to say: hey, here's the thing, we built it in contrib, it's working great.
C
Yeah, that's really good to know and worth looking at if and when we get around to actually doing it — who knows when that will be. Okay, and then the other thing was around...

C
I think the thing that my name was on earlier — that got removed because I didn't show up to the meeting — was around the overhead testing stuff. I think we have directions sort of pinned down now, after chatting with Nikita and looking at the comments on that, and that is: we'll go ahead and use Testcontainers; we will assume that reusing a container is not necessary — that being on the same VM is probably good enough to start — and yeah, I'll just keep going from there.

C
And then the next thing I'll add is an action to publish that base image, I guess.
A
My idea was far worse, far worse. Josh had a crazy idea with the GitHub action in the s repo that would, like, do stuff over in the instrumentation repo — much crazier. Just don't ask me how I know about it; that's all I'll say. But yeah, I think that idea of a nightly job that will just go and figure out if there's a new version and publish something to the packages — it seems like...

A
...where I think it just would be weird to have that in Maven Central. And this is where I was wondering, like, could you... is there some way we could use JitPack? But I don't know — you'd need, like, a whole separate repository that was doing that, right? Yeah, I don't know if JitPack allows you to search all the existing versions — whether it has an API for that or not. Certainly their UI lets you go and look at all the versions that have been published, but I don't know if there's an API for it. Anyway, all right, I gotta run. Yeah, but good to see you — good job.
B
Yeah, for this IntelliJ thing — this is quite unfortunate. Like, it seems to be a missing feature or bug in IntelliJ, because anything that Gradle can build, IntelliJ should handle, but...

D
So we've got this instrumentation-api-caching that is shading Caffeine, right, and then we're pulling that in to the API.

D
We're basically embedding that — where is it... the source set — we're embedding it into this package, basically as a workaround. We could not get it working to just consume it, as we were having lots of problems with the Gradle Shadow plugin shading.
E
Yeah, I'm not surprised at all. Interesting. So, if I understand this correctly, you're taking the shadowed sources that this thing generates and turning them into the source directory for this module, and then it builds those sources and packages a jar.

E
So the question is: if you were to look at the Gradle project and call the appropriate tasks, those class files, where you're adding them — would I see that as a class file output? Or is it, like, right past the task that IntelliJ looks at, and it's at the thing that the packager looks at to make the jar? Do you know what I mean?
B
If I understand the question correctly — since this is modifying the source — it should be at the, like, before-any-tests stage; it's not related to jar or anything like that. It's the Gradle configuration for this project, so I imagine it's quite early in the lifecycle, so it is surprising that IntelliJ isn't...

E
It's not about the lifecycle, it's about... like, you know, your tasks depend on different, like, features of the project, and IntelliJ might just be looking at a different one than this is using.

E
So yeah, in the past, when I've had to fix these things, you just have to play whack-a-mole until you find the right one. All right.
B
I think, yeah, maybe just adding the compileOnly one. That seems... just because this approach, Gradle-wise, seems like a good one. Somehow — I don't know, maybe it isn't, but it seems better, very relatively; it seems better than the older one, I think. And it just seems to be an IntelliJ bug, so working around it in the simplest way, rather than changing the scripts again, is probably the way to do it. Yeah.
E
Well, so maybe... so, two things. We're trying to get the ability for baggage labels to be attached to metrics labels as a feature. When you do that, you lose all advantage of binding — it disappears. So it's still possible that batch recording could be faster, but I don't see how to make it faster.

E
There's this notion — I think you might have mentioned this — the idea where you want to have, like, a set of labels and then record metrics, and it filters out the labels at once across all of them. There's an interesting way we could implement that, which Bogdan was explaining to me, that could be faster.
E
So basically, what we do is we record every measurement with every attribute that you have, and then, when we get to the aggregation step, we erase the ones that weren't relevant to the metric. And that actually gives you a significant performance boost in your hot path, and it doesn't add much — like a couple of nanoseconds — on the other side.
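The record-everything-then-erase approach described above (as relayed from Bogdan) can be sketched as follows. This is an illustrative simplification, not an actual SDK API: the hot path just stores measurements with their full attribute sets, and only the (colder) aggregation pass drops keys a given metric's view doesn't keep.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: defer attribute filtering from the recording hot path to the
// aggregation step, merging measurements that collapse to the same kept keys.
public class AggregationTimeFiltering {
    public static class Measurement {
        public final Map<String, String> attributes;
        public final long value;
        public Measurement(Map<String, String> attributes, long value) {
            this.attributes = attributes;
            this.value = value;
        }
    }

    // Sum values, keeping only the attribute keys relevant to this metric.
    public static Map<Map<String, String>, Long> aggregate(
            List<Measurement> measurements, Set<String> keepKeys) {
        Map<Map<String, String>, Long> sums = new HashMap<>();
        for (Measurement m : measurements) {
            Map<String, String> kept = new HashMap<>();
            for (Map.Entry<String, String> e : m.attributes.entrySet()) {
                if (keepKeys.contains(e.getKey())) kept.put(e.getKey(), e.getValue());
            }
            sums.merge(kept, m.value, Long::sum);
        }
        return sums;
    }
}
```

Two measurements that differ only in erased keys (here, a hypothetical baggage attribute) end up in the same series, which is exactly the filtering-across-all-labels behavior described.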
E
Basically, not a ton of time. So there's a possibility that we could make a batch record where you shove in metrics with, like, highly customized attributes, and then we filter the labels for a whole bunch of them at once, and that could be an efficiency thing. But honestly, if the rest of the metrics API is so slow that we feel we need this, I think we've failed, personally. So I'd rather stick with what we have, and anything we do around batch...

E
...we should consider a big experiment, and we should feed that back into the metrics SIG over time. But I don't think that API is going to be part of the v1 of metrics once the specification stabilizes — I think that's cut. That said, Josh MacDonald, I think, will want to push for it; if you look at his Go prototype, he has...
E
Awesome. I would get rid of the... well, what do we call it — multi-measurement, or batch record? That's what it's called. I would get rid of that record, just because I don't anticipate it making the spec.
B
I will say that I was a bit — I don't think it's bad, but it was a bit annoying — to convert back from OTLP to metric data, which we do because we have to bridge the class loader from the agent into our testing infrastructure, so we just use OTLP for that. And so knowing whether the points are double or int is a bit harder than it used to be. It wasn't a big deal, but you just have to dig through the isDouble method.
E
It'd be fun to explain to you how many debates there were. I think I spent, like, you know, actual physical days discussing that. Yeah, yeah.

E
Well, the end result was, I think, basically, that dynamic languages were slightly prioritized over static.
E
So the next thing is — I don't know if you saw this — well, Java doesn't use histograms yet, so you're probably totally fine. Are there any histograms coming out of instrumentation, or summaries?

E
Summaries are basically deprecated, so those will become histograms, I guess. And you do have views, but barely, and I don't think you can... Your configuration for the instrumentation is kind of dynamic, right? Is there a way you can default those to histograms now for your tests? The reason I'm asking is, I want to know how hard histograms are to deal with in practice. I have some experience myself, but I want to...
E
Oh yeah, that's fair. Did I send the perf benchmark CL? Because it shows how to make histograms. I think I did. Sorry — I said "CL", I mean a pull request; I've been at Google too long. Here we go.

E
Oh yeah, wow. I didn't even pay attention, and John submitted it. Okay, so how do I put this in the notes? I'll put it here — there we go.
E
That's recording the labels... the histogram set up in the existing API. Okay, so if you look in this pull request that I just... let me actually make it a real link in the notes. Well, its name isn't very nice.

E
Oh yeah, so if you check "files changed" and look at the one benchmark — I added a benchmark for histograms. So right here you can see we make the double bound histograms, and then the other file is where you configure the SDK.
E
Let's see... so this will be... okay. This specific aggregator will be exactly what you'll see when you migrate to histograms in the new SDK — that much is kind of assured. Everything else about the SDK is still kind of in the air, but that much I'm pretty confident is going to stick; we're in enough consensus, no one has really fought it.

E
Yes, so it's weird that you only set the instrument type to value recorder and it hits both, but the name regex matters, so that'll pull the right one. I think you have to specify instrument type or it crashes today.
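The selection behavior being described — a view matches on instrument type plus a name regex, and the first match decides the aggregation — can be sketched like this. The names (VALUE_RECORDER from the pre-1.0 metrics SDK, the ".duration" pattern) are illustrative, not the actual configuration API.

```java
import java.util.regex.Pattern;

// Sketch: pick an aggregation for an instrument by matching instrument type
// and a name regex, as in the view configuration discussed above.
public class ViewSelector {
    private static final Pattern DURATION_NAMES = Pattern.compile(".*\\.duration");

    public static String aggregationFor(String instrumentType, String name) {
        // Hypothetical view: value recorders whose name ends in ".duration"
        // get an explicit-bucket histogram; everything else keeps the default.
        if ("VALUE_RECORDER".equals(instrumentType)
                && DURATION_NAMES.matcher(name).matches()) {
            return "explicit_bucket_histogram";
        }
        return "default";
    }
}
```

This is why setting only the instrument type "hits both" long and double value recorders: the type selector alone is too broad, and the name regex is what narrows the match to the intended instrument.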
E
Right, yeah. If you're able to do that — if that's not considered breaking — that'd be kind of cool, because then we can actually get a feel for what people think of histograms coming out of Java.

D
That'd be cool, if... in the next meeting after tomorrow's release.

E
No, it's not the new SDK — no, no. That is not merged yet, but we're starting to move to these new OTLP metrics data model types, and I think Java has pulled in the part we consider stable, and we're planning to move everything. Anything that would say double value recorder or long value recorder is going to use this aggregation in the new metrics SDK.
E
Yeah, this was a fun, silly little bug to fix, but...

E
The benchmarks just NPE'd, and it was, like, the silliest thing. Anyway...

E
If you get a release with histograms in it for the HTTP stuff, I will do my best to run it through the wringer over here and try to get some histograms output, and kind of into some visualizations, just to see what it looks like.
D
Cool, yeah, that would be awesome. And then tomorrow I will work on the release notes and hitting the button.

E
Basically, that's what we've been working on in our open source stuff. We've been focused more on getting our exporter integration tested and getting our resource detectors integration tested — so making sure, if you spin up on, like, GKE or GCE, that sort of thing, that you get the appropriate semantic conventions coming out of your resource detection. Unfortunately, we're probably going to have to release that with our exporter, because our exporter takes a hard dependency on the labels that are generated, and until those labels are considered stable...
E
I need to make sure the versions are the same, and the way I do that is by co-releasing them at the same time and forcing the stuff. So eventually we want to get that resource detection into core, if that makes sense — you know, if you want, like, GCE-based resource detection that everyone can use to understand if you're on, you know, Cloud Run or Compute Engine. I'm not sure where we have all our resource detectors now — maybe it's in contrib, I don't know — but, right.

E
Oh, well, I guess: do you have hard dependencies between your exporters and your resource detectors?

E
Yeah, so Google Cloud has this notion of a monitored resource, and so we have resources built in and baked in, like, into the metrics model. Okay, and so there's, like, a physical dependency on... we need some of these labels to exist to actually push the data correctly.
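The hard dependency described — the exporter needs specific resource labels to exist in order to pick a Google Cloud monitored resource type — can be sketched as a mapping function. The attribute keys and resulting type names below are a hypothetical simplification for illustration, not the actual exporter's mapping table.

```java
import java.util.Map;

// Sketch: map OpenTelemetry resource attributes (produced by resource
// detection) to a Google Cloud "monitored resource" type. If the detector
// doesn't emit the expected labels, the exporter falls back to a generic type.
public class MonitoredResourceMapper {
    public static String monitoredResourceType(Map<String, String> resource) {
        if (!"gcp".equals(resource.get("cloud.provider"))) {
            return "generic_task";
        }
        if (resource.containsKey("k8s.cluster.name")) {
            return "k8s_container"; // running on GKE
        }
        if (resource.containsKey("host.id")) {
            return "gce_instance"; // running on a GCE VM
        }
        return "generic_task";
    }
}
```

This is why the detectors and the exporter have to be co-released until the label names stabilize: a renamed attribute key on the detector side silently breaks the mapping on the exporter side.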
B
Like, we have, like, Spring — our Spring instrumentation generates this span for Spring, but we ignore the span name for internal spans. Like, when we're matching the data that's exported to X-Ray, we try to just look at the bare minimum of data, to not be tied too hard to how the instrumentation is modeling it right now.

B
...of implementation-detail-type plugins — those are outside of that scope, so it doesn't even matter that much. But first, just talking about the end result: we should have two entry points. One is something called library instrumentation; one is something called java agent instrumentation.

B
Everything else doesn't matter so much. Yeah, so this is...
D
So, but then this... I had posted somewhere...

D
Oh, here. Yeah, so...

B
I think the impact of the Gradle plugin is far less. So like, for example, with these dependencies: if library instrumentation accidentally includes a java agent thing, that's quite bad, which is why having the separate groups is more important here. Even though this is just testing, so it doesn't matter that much anyways — but as a general concept, we want to make sure that library doesn't actually pull in the java agent, right?
B
Yeah, for Gradle plugins it doesn't matter that much; it's really just what names we think users will like. I thought maybe having them under the same namespace — I'd like it. But given this example, maybe it doesn't matter; maybe different namespaces matches the scheme anyway. So that is a good argument for the different names for the Gradle plugins also. Yeah.
D
Oh yeah, so that's what happened. I don't know if you saw the Slack discussion from earlier today.

D
What happened was, right, Nikita had changed this one because we had discussed that, and in his earlier PR I had suggested this and he changed it here. But then, on the snapshot, he discovered that for the Gradle plugin this didn't matter — it was based on the name of the file which group id it went under. There's...
D
I almost understand... I'll understand. Okay, so what you're saying is this one actually does matter.

E
Well, it depends. In Gradle, I believe it'll just take the latest version, so that should be okay. But yeah, that's my only suggestion — it may or may not confuse users, but it seems reasonable.
B
Okay, Gradle plugin artifacts — I guess we should be a bit precise there. We're gonna...

D
Okay, I can send a PR to make this change and mention the other option.

D
Give Nikita and Mateusz options.
D
Let's... yeah, I agree. I will go ahead and add that to Monday night's — Tuesday's — agenda, because that's a tricky one.

D
All right, I'll drop a note in chat, but let's chat about Vert.x 4 on Monday, so that hopefully there's no, like, last-minute merging before the release. Yep.

D
Okay, I think we're...

D
Right, yes, have a good one, and wish me luck with hitting the button.

D
I will merge whatever... and wish you luck with metrics.