From YouTube: 2021-12-09 meeting
B
But let me present myself; I'm coming to the microphone.
D
I guess it's not afternoon anymore in Europe, because it's my lunch time. So yeah, that late I'd be asleep.
A
All right, hey Jack. So let's start with the 1.10 releases, the monthly releases. I think, typically, that would be the SDK at the end of this week.
F
Right, but we're definitely not doing that. Okay, we wanted to... we want to do a release candidate for metrics, and I don't know how close we are to feeling comfortable with that. I'll let Josh and others speak to it.
D
We can't release a stable SDK because the spec isn't stabilized. I know, that's why I wanted to check. Cool. Then, from an API standpoint, I think bound instruments got removed, and I think we're in decent shape, because I saw the global stuff go through. Does anyone have concerns about the global API? Well, the global...
A
So what's the advantage to doing that versus just releasing the monthly, you know, RC?
A
What happened? That was odd. Okay: releasing, you know, 1.10...
A
And not doing the RC, because, you know... that would help us, the instrumentation, to also stay in sync. I don't really want to battle with -rc releases in the release pipeline if we don't need to.
F
And so... but we don't want to release a 1.10 stable. Like, we want to put the metrics API into the main API, which is already marked stable. So if we released it as it is today, we would be locked in, and if there are changes, or things that we need to tweak because we haven't accounted for every possible user's use case right now, then we're stuck.
A
Oh, I see, right: because you're going to drop the -alpha, right.
F
So the idea was, we would delay 1.10 until January and release an RC here in December, to allow people who want to go forward and get their stuff updated to verify it and test it. Like, Josh could test things out; the vendors could test things out, to make sure things are good before we cut the official 1.10. So I think the idea was: we're going to delay 1.10 until January.
A
Okay. So then, Nikita and Mathias, Laurie, instrumentation folks: what do you think of skipping the December release, the instrumentation release, and waiting until January to release 1.10?
F
Yeah. So we... the goal is, I think, probably this week. Anuraag volunteered to do the RC, because I don't trust that everything is going to work, and he knows the Gradle release process a lot better than I do.
F
So I think Anuraag is going to do that, and we'll connect with him this evening, and, if he feels like things are ready... He still had a few last little things; we had some dependencies that were a little weird, and he's been cleaning those up over the last couple of days. So I want to verify that he's good to go.
G
There is one thing; this is like a minor thing, but I noticed it the other day on some related tasks I was working on. So the API has overloads. If you take LongCounter, for example, you can record just a value, or a value with attributes, or a value with attributes and context. What are folks' thoughts on adding default implementations for the overloads? So, like...
G
If you look at the SDK LongCounter, for example, if you just call the add method without attributes or context, it has default versions that it passes in, like empty attributes and the current context. So we could reduce the required implemented surface area of the API by offering defaults that match the SDK for those. And we could always add those later, right? Yeah, and that's why it's minor, yeah.
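The default-overload idea can be sketched roughly like this. The interface shape mirrors what was just described, but the Attributes and Context types here are simplified stand-ins invented for illustration, not the real OpenTelemetry classes:

```java
import java.util.Map;

// Simplified stand-ins for the real Attributes and Context types;
// these are illustrations only, not the OpenTelemetry API classes.
record Attributes(Map<String, String> entries) {
    static Attributes empty() { return new Attributes(Map.of()); }
}

record Context(String name) {
    static Context current() { return new Context("current"); }
}

// The proposal: only the full three-argument add() must be implemented;
// the shorter overloads become default methods that fill in empty
// attributes and the current context, matching what the SDK
// implementation passes in today.
interface LongCounter {
    void add(long value, Attributes attributes, Context context);

    default void add(long value, Attributes attributes) {
        add(value, attributes, Context.current());
    }

    default void add(long value) {
        add(value, Attributes.empty());
    }
}
```

A side effect worth noting: with only one abstract method left, the interface becomes lambda-friendly, which is the trade-off raised later in this discussion; handy for tests, but a lambda gives you no useful name in a stack trace.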
G
No, no, no. But it would reduce the... let's see, the test mock stuff, maybe. I would like it; there's less code in the SDK if we do that. But, as I said... is it actually less code? Because it's just moving to the API. Yeah, that's what I'm thinking through.
D
Yeah, it probably doesn't change too much, yeah. There are two implementations of the API. One of them is the no-op, but in that you override all those methods to do nothing, instead of passing through to a method that does nothing. So I don't know if it actually... There was some weird thing around default methods and compatibility; I can't remember the details, but I've been slightly hesitant about those personally. I haven't gotten burned using them, effectively, but that's... anyway, I'm curious.
F
Well, I think it has to do with the fact that, like, @Override and all sorts of interesting things like that, in the way that it's probably put together in the bytecode. But I think this is such a... if someone has somehow added a method that ends up... I mean, the odds of this happening seem astronomically small, and I'm not worried about it.
D
I don't know what the use case for that is, but true. I mean, the Scala part of me really likes that, but the observability part of me doesn't, because I want a name on the thing, if it fits. Like, in the stack trace, I want a good name for it.
F
Yeah, I suppose, if you were writing unit tests that involved instruments, you could lambda them up as the implementation. I don't know; it seems easy enough to just mock them either way. I don't know; we can add them later if we need them. Right, cool. So, the plan: we'll talk to Anuraag this evening, and, assuming he's good to go, I'll let him go ahead and figure out how to release an RC using our Gradle release madness.
A
All right, moving on. So this, the big conversion... thank you, Lori. And I picked up the Play instrumentation, so that will get us up to 98 out of 99. With the last one... this is actually a very complicated one, but Mathias reached out to kubowat, who did the original implementation, and he's going to take a look at it, I think, in the next couple of weeks.
A
Yeah. So we'll want to start thinking again about a stable version of the instrumentation API. Nikita had created this project of tasks. The scrolling is not ideal, is it? How do I get rid of all this space on the top? Anyway... so I think that'll probably be a focus going forward.
A
So we had gotten an exemption to this requirement for the Java agent package itself, since it's like an aggregate, sort of a distribution. But individual instrumentation... so, individual library instrumentations...
A
We cannot release stable versions of those until telemetry stability is defined, and then we have to follow that. The one thing that I wanted to call out from here specifically is that we won't be allowed to remove anything.
A
So if we have... that's right. I think it's good; I think we've hidden all the things that are not specked out behind experimental flags. But we need to be careful there, because if we did leak, say, an experimental attribute by default, or something that's not specked, we would be stuck with that indefinitely.
A
I don't know if anybody else has looked at this and has thoughts on it, or summaries; anything important that you think is worth calling out?
D
The thing that makes me most nervous about this is whether or not schema URLs will get adopted. That's the question we should all be asking, and pushing on. Like, we're all vendors, so let's all adopt schema URL files as the way to convert and keep things stable. But this proposal leans in real hard on that, and so it effectively relies on it.
D
We don't allow any changes until July 1st, and then, after that, if you want to make changes, you need to use the schema URL file to keep it safe. The theory being that seven months of not changing anything as we discover bugs will annoy enough vendors into supporting schema files, I guess, or I hope, anyway. I just want to throw that out there: a crucial part of this is that you will be allowed to make changes.
A
That's a good point. That reminded me that one of the things in here was: if we do not send a schema URL, then we cannot make any changes at all, ever. So we need to make sure that we are sending a schema URL, which I don't know if we are.
A
Did I read that right, Josh, about...
D
Yeah. Now, that said, there are some nuances I think are worthwhile pushing on. For example, if you have telemetry like that, and you do need to break it, and you make a major version bump of your implementation that users know is a breaking change: is that acceptable? Right now, the way it's specified, you wouldn't actually be able to make that major version bump, right, because that'd be too confusing for users.
D
However, if there's an expectation that a major version bump is breaking, I think we might be able to alter this language for that. And you shouldn't do it; we shouldn't do it. But if we get into a scenario where we have to make a major version bump, I think it's reasonable for the agent itself to have a major version for that. But that's probably worth discussing on the thread.
A
Yeah, that's a good... yeah. So the agent itself...
D
I did get a callout around experimental telemetry in there. So you are allowed to experiment with new signals and mark them experimental; as long as it's clear to users what's experimental and what's not, you'd be able to change experimental things going forward.
A
Yeah, so we do that with... we do have experimental flags for... we've hidden basically all span attributes that aren't part of the spec behind experimental flags, so I think we're good there.
D
Yeah, yeah. Like, if this little thing leads to users running out of disk space, that's no longer a little thing, right? That's a worthwhile change. But if it's, like, "this particular name that we chose a year ago might be confusing": until we have a way to migrate that name to a less confusing name, one that doesn't break people who rely on the previous name for queries and alerts and that sort of thing, we shouldn't change it.
A
So this is a good question, then. This covers the... I understand well how this applies to attributes, but what about, for example, spans themselves that we capture? Like, we capture internal spans around controllers, internal spans for Hibernate calls, various internal spans that aren't defined by the spec. There's no semantic convention for what a controller span is, or what an ORM span is.
D
I'd say two things. One is, if it's useful to users, and you want to give them a consistent API across instrumentations, we should spec it. From a Java agent standpoint, you should treat those spans that you produce as public API and try to keep them stable, again because, you know, users will start to rely on them; they'll write queries against them; they might put alerts against them.
D
Anything that you do to change those is literally changing the public API of the Java agent. So if they're clearly denoted as experimental, or, like, you know, you can flip them on and off and that's controlled by the user, you can provide, like, a migration path where, if you want to change the shape of those spans, it's done with a flag. That's... you know, if you think of how you would design a Java API, and the different ways that your API produces results...
D
...you kind of want to do the same thing with, like, your config and then the spans that come out. So if I don't change my config between versions, and it's not a breaking version, the telemetry that's generated should be compatible with what was there before. So anything that I had before, I should still have.
H
To me, it feels like it's a couple of years too early to give such stability guarantees, considering the state of our instrumentation. Like, not all of the instrumentations generate spans that are sensible.
A
Yeah, there's kind of a divide between things that are really well specked out, like HTTP spans; you know, we follow those pretty well, and maybe we could, you know, not break those, keep those stable. But we have so many other instrumentations that have not been, you know, really specked at all.
A
You know, hide everything, basically, except these really stable HTTP conventions, behind an experimental flag. So that, you know, by default, you download the Java agent and you only get these really stable things.
G
Yeah, it sounds like, based on this proposal, that, if that first bullet point is true, you either include the schema URL or no breaking changes are allowed. And there is no way for us to communicate in, like, documentation, for example, that something is a prototype; we actually have to emit some sort of property on the data itself, because, you know, we have a contract on the data...
G
...that's emitted, itself, that it's stable. And so, like, is there any way we can specify, in a schema URL, that everything that falls under this instrumentation library is prototype and subject to change?
D
I actually think that's what we should recommend here. So if you look at, like... not Prometheus; Kubernetes and CRDs. They have a specification for what the YAML that you send in is, and there's a URL there. I think schema URL is kind of similar to this. But there's this notion that you make, like, beta URLs, and as long as that beta marker is there, users know that this could change and break in the future, and eventually there's a stable version, right?
D
We should totally be able to define an experimental schema URL, where it's clear to you... because that schema URL is attached to the telemetry. If we need to put it in actual labels on spans, so users know that this is part of the experimental bits of the agent versus not, we can do that too. But again, you should be able to define a schema URL that's like, here's...
D
...what I'm producing, and this will break you, you know, because the version clearly denotes that it's not stable, and then you should be able to continue to experiment. Again, my fear with the original definition of instrumentation stability was that we can't do experimentation, and I agree with everyone here that we're not at a stage where we can do that with all our spans. There are maybe, like, three things we do that we could stabilize, and the rest needs exploration.
D
So I like that proposal, Jack, and I think we should push it. I don't know if this is going to be an OTEP, but I can bring it up with Tigran next time I meet with him, and we should comment on this bug with the idea that we have these experimental schema URLs. I love that idea; that would be a good way to push back. Well...
G
I would actually take it one step further, too. So I like what you said, but I think there's so much surface area in the agent right now that even trying to, you know, articulate, for all the libraries that we have auto-instrumentation for, what the experimental schema is seems a bit impractical.
G
I mean, think about how long it took us to do the conversion to the Instrumenter API, and now we'd have to go back in and characterize all of these instrumentation libraries in terms of what they produce in different scenarios. It seems hard. So, like, is there a way that we can have a schema URL that basically wildcards everything? That basically makes no guarantees about what's in there, and you get what you get?
D
That's a good point. What I was originally thinking for that schema URL would be, for example, the ORM stuff, right? Your schema URL would be something like java-agent/hibernate/experimental-v1, and your schema URL literally doesn't say what comes in; it only says what has changed between versions. So it would actually be an empty file, and the string tells you it's experimental.
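A minimal sketch of how a consumer might interpret such URLs, assuming a hypothetical path layout like .../hibernate/experimental/1.0.0; both the layout and the helper class are invented for illustration, not part of the spec:

```java
// Hypothetical helper that inspects a schema URL of the form
// https://example.com/schemas/<instrumentation>/experimental/<version>.
// The URL layout and this class are illustrative inventions; the spec
// only requires that the URL identify a schema family and end in a
// semver version number.
final class SchemaUrls {
    static boolean isExperimental(String schemaUrl) {
        return schemaUrl.contains("/experimental/");
    }

    // Under semver, a major-version difference signals a breaking change.
    static boolean isBreakingChange(String fromUrl, String toUrl) {
        return majorVersion(fromUrl) != majorVersion(toUrl);
    }

    private static int majorVersion(String schemaUrl) {
        String version = schemaUrl.substring(schemaUrl.lastIndexOf('/') + 1);
        return Integer.parseInt(version.split("\\.")[0]);
    }
}
```

The major-version comparison mirrors semver: bumping the major number is the signal that the experiment broke, without the schema file having to describe what the telemetry contains.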
G
Okay. So, but if you do change your experiment... so you change your prototype, and then, you know... is it actually useful to describe what changed from nothing to now? Like, you know, if you originally started with a blank file, so that's, you know, v0, and...
D
It lets you transfer between v1 and v0, or v0 and v1, so you only need to know the diff to know how to convert your telemetry back to the format that you were using previously, right? Okay.
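That diff idea can be sketched with the most common schema transformation, attribute renames: converting back to the previous version just means applying the recorded renames in reverse. The SchemaDiff class and attribute names here are invented for illustration, not the actual schema-file format:

```java
import java.util.HashMap;
import java.util.Map;

// Invented illustration of the diff idea: a schema version change is
// recorded only as a set of attribute renames (old name -> new name),
// not as a full description of the telemetry.
final class SchemaDiff {
    private final Map<String, String> renames;

    SchemaDiff(Map<String, String> renames) { this.renames = renames; }

    // Upgrade: apply renames forward (v0 -> v1).
    Map<String, String> upgrade(Map<String, String> attributes) {
        return rename(attributes, renames);
    }

    // Downgrade: apply the same renames in reverse (v1 -> v0).
    Map<String, String> downgrade(Map<String, String> attributes) {
        Map<String, String> reversed = new HashMap<>();
        renames.forEach((from, to) -> reversed.put(to, from));
        return rename(attributes, reversed);
    }

    private static Map<String, String> rename(
            Map<String, String> attributes, Map<String, String> mapping) {
        Map<String, String> out = new HashMap<>();
        attributes.forEach((key, value) ->
            out.put(mapping.getOrDefault(key, key), value));
        return out;
    }
}
```

Attributes not mentioned in the diff pass through untouched, which is what makes an empty diff file a legitimate, if uninformative, schema version.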
D
What you do is you have a schema URL that says this is java-agent/hibernate/experimental-v1; you make a breaking change and you just make v2, and there's no conversion between v1 and v2, because it's a breaking major change, right? Because schema URLs follow semver in the last version number. So that's the worst-case scenario.
D
You can do that, and then you're just telling users what version of the experiment they're on and that you've broken things, right? And it's clear; it's denoted in there. Best-case scenario is you could provide some migration paths, especially if schema URLs get leaned into, so that you do less of that churn even while you're experimental. But worst case, you should be able to just bump that version number, and it says this is breaking. So again, we might...
D
We need to run this idea through Tigran, possibly write it up in an OTEP. If anyone is interested in writing that up, please do. I am interested in writing it up, but, as you know, my time is super overloaded.
D
So I'm not going to cookie-lick this one, but I think this would be a great addition to what Tigran is proposing, to make sure we have room to experiment. And what I'm suggesting is just this: you make an empty schema file, and you just bump the version every time you break, worst-case scenario. It still communicates clearly to users that it's experimental; it communicates clearly when breaking changes happen; and it abides by what the schema files are meant to do. Best-case scenario, you give people migration even during the experimental phase.
G
The fact that those schema files, even if empty, live in a different repository does greatly increase the friction of making these experimental changes.
D
If you have a way to push them to... so, the schema files are a URL, and the idea is that they also live at that URL. So if they live in the instrumentation-java directory, and you also push them to a public URL from that directory, that's fine; they can live right in the instrumentation repo.
D
There you go; there you go. And then just use that, for fun. Yeah, so right now they all resolve. Whether or not that's actually going to be a requirement... I don't think that is enforceable as a requirement in practice.
E
Wait, so you're proposing that, as long as we experiment, we use an experimental schema under an experimental namespace, and we do whatever we want; we break as much as we want, just increasing the schema version. But when we have instrumentation emitting a non-experimental schema, then we're not supposed to increase the major version of that schema every single day?
D
No, no, no; absolutely, yeah, yeah, exactly. So once you move off experimental, you're down to, like: now you have a stable API, and you're keeping everything as stable as possible, and you're providing migration, and that lives a long time. Yeah, yeah. This is about just clearly communicating when we're in the experimental phase versus not.
A
And I do feel like this is, as a distribution... a problem with versioning distributions, right? Because if we were talking about individual instrumentations, like the Hibernate instrumentation, we could just label the Hibernate instrumentation itself as experimental, right? And then we wouldn't have any of these as alpha, and then we wouldn't have any of these problems.
A
So that makes sense, right, Josh? This seems to be a distro problem.
A
Honestly, I can take an action item to at least... I will comment on this spec issue with sort of what we discussed here.
A
Yeah, I don't... I don't bring that in in our distro, because it tends to not be... the underlying JDBC/SQL instrumentation is what's really useful. Even the controllers, I don't pull in; I have the controller spans disabled by default in our distro also.
G
You know, we said that OpenTelemetry is holding ourselves to a higher standard, and that if we don't include a schema URL, that implies that it's stable. And I think that that's probably too strict; I think it's probably putting up too much friction for us, as part of the OpenTelemetry community, to experiment.
E
Has anybody thought about native implementations and schema evolution? What about native instrumentation? I mean, if a library itself starts emitting OpenTelemetry, they should provide a schema URL as well, for their, like, span model. Yes, but, well... there will be no control over that, and over how rigorously they provide backward compatibility and conversions and whatnot.
C
Okay, there's one more fun case: we have library instrumentations that can be extended with custom attribute extractors. So while our instrumentation is, well, all according to the schema, the user can put whatever they want into their own attribute extractor, and then it all goes out on the single span, with possibly any attribute.
D
Yeah. So that, by the way... you have to read between the lines; you have to implicitly read a few different parts of the spec, but that's allowed, if you read the original telemetry schema thing specifically. The elephant in the room here is baggage. When we were talking about instrumentation API stability, I still wanted to know how I attach baggage to spans and metrics in the Java agent. Like, how do I say: take these baggage attributes, and I want them attached to everything...
D
...if you see them, right? That would be totally outside of the schema URL, and that is user customization that's totally expected to happen; users are knowingly configuring, and breaking, that. So that is outside this contract of "what the agent produces will be stable, and here's what we produce"; anything that you add onto this is up to you to keep stable, right? Anyway, we have thought about that.
D
It's kind of in the spec, but you have to read all of the details and think through the implications a little bit to kind of get back to there. There was a discussion on that in the original schema OTEP, if I recall correctly.
A
Cool, I think we have some good... that was great, thank you. I think we have some good action items out of that. Let's go on, to make sure we get to Jason's topic. So... hey, yeah, that's me. I wrote that because... I don't know if people saw that from Fabrizio.
A
I just wanted to proxy for him, since he's not on this call, but he offered to do some documentation around configuration as it applies to all of our instrumentation, which I think is a very generous offer. And I didn't have a great way to answer his question better than what he suggested, which was just grepping for config gets. I just wanted to see if people have other guidance that we could provide to him, because he's offering to do something nice for us. Where... where was it?
A
But yeah: what other guidance might we have for locating all of the configuration for instrumentation, specifically?
A
I know there was... I saw some discussions in the Java repo, the SDK repo, about docs and where docs should be living. Did we get this repo created? We did? Okay, cool. So does Fabrizio know about this? There was somebody else in the SDK repo who was offering to do some documentation work, yeah.
F
I think that was somebody from the OTel docs or CNCF docs; I don't know who it was, but there was a PR opened in the SDK or the API, whatever, opentelemetry-java, to bring back all of the instrumentation docs, and I'm like: no, no, we removed those on purpose. And then this was... I just brought it up; I remembered that we had this other repo that needs to be created, or that has been created; it needs to be populated with actual documentation, so...
A
Yeah, cool. What... okay, Redmond, Washington. What time zone is Fabrizio in? Spain?
A
Yep. Okay, I'm wondering if it would... because we've been struggling with the docs, and it's awesome to get some volunteers here. I will try to set up maybe a meeting... a special-topic meeting about docs for next week.
A
Yeah, okay, I will, yeah. Because I know we've been... this has been a struggle for a while; I'll take an action item to follow up.
F
And if these people are going to be, or are, willing... like Fabrizio, and the person from Honeycomb, you just pointed out his name, I've already forgotten... are willing to keep this maintenance up to date... The website folks would prefer to have the docs just in the website repo, but I think we also wanted a centralized place for examples: runnable, buildable, real-world examples for things.
A
Yeah, I mean... and I'd like to understand... I know Anuraag was wanting to have them consolidated; maybe we'll talk with him this evening about that. Because I was kind of of the opinion that, you know, let's just put it all in the website repo; that just seemed easy to me.
G
Yeah, I was in camp "have a dedicated docs repository", so that the docs and working code examples could be in one place. And, you know, my reason was... yeah, I thought that it was unlikely that the docs repository itself, the website repo, would want to have a bunch of Java-specific tooling in it to have working code examples. And, you know, if you scale that to all the different languages that might be interested in that type of thing... I just thought it wouldn't work.
A
Oh, I hear you, okay. Jason posted, in Slack... I mean, in chat. Josh: exponential histograms.
D
Yeah, I'm probably going to miss tonight, unfortunately, so hopefully James is here. If he's not, you know, we can talk about this later. But the two things: one is exponential histograms.
D
He has the aggregator implemented to the point where we can actually wire it into our performance test suite, but I noticed that there wasn't a performance test for it, so I'm offering to write one if he's not already doing it. And the second thing is that the current spec limits exponential histograms to, like, 200 buckets, and the more that I've looked at our internal implementations, and the latency buckets that we have now, that seems like an absurd amount of memory...
D
...as, like, a default baseline, a "we aren't really sure what you're doing" thing initially; and then, like, you can go up to 2,000 if you configure something. So I was thinking about suggesting that for the initial experiment, just because I'm worried... I want to get this into the benchmark and look at the memory cost, but I'm a little bit nervous that 200-ish buckets is just a huge number, especially when you look at our existing explicit-bucket histogram, which is, I think, eight buckets.
D
This would be the number of buckets per stream, so that's per attribute set. Yeah, per the whole...
D
Right, well... so again, it's not fully correlated there, so, like, we'll have to experiment with it. But what I want to do around the spec is... so, 200 is for super high fidelity, the idea being, like, the highest possible fidelity, effectively. If you know the spec author for pieces of this: they were looking to fully represent floating-point bits as efficiently as possible, with the highest possible resolution. And then there's, like, in practice...
D
...what's a good balance that we need to strike? I tend to aggressively push people towards thinking smaller, but I think we do need to prototype this, and I don't want to see 200 be hard-coded as the only maximum there, just because I feel like this is an area we need to tweak. So I just want to call that out and talk it through.
D
It might be good to talk it through with James as the implementer, and then we can have another discussion on the metrics side around what people are seeing. But until we have a notion of how much memory this costs... I want to do a full comparison of the exponential histograms against the existing histograms and do a memory profile, because if we find out 200 is the same amount of memory usage and the same efficiency as the explicit buckets, great: we just made everyone's life better.
D
But if there's, like, a huge discrepancy, we'll have to look at it. I'm just nervous about what we're going to find out here. So, anyway.
A
And I realized I was thinking buckets in... yeah. This is just duration buckets we're talking about; I mean, the histogram buckets.
D
Yeah, so this is the number of histogram buckets that you preserve for your scale factor. So, effectively, with 10 buckets, depending on your scale factor, if your data shifted to the right, you'd still get that 10-bucket resolution, you know, up in the high end.
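The scale/bucket relationship being described can be sketched numerically: at a given scale, the bucket base is 2^(2^-scale), bucket i covers (base^i, base^(i+1)], and higher scales give finer resolution. This is a sketch of the mapping in the metrics data model, not the actual SDK aggregator:

```java
// Sketch of the exponential-histogram bucket mapping: at a given
// scale, base = 2^(2^-scale) and bucket i covers (base^i, base^(i+1)].
// A higher scale means finer resolution, and therefore more buckets
// are needed to cover the same value range.
final class ExponentialBuckets {
    // Index of the bucket containing a positive measurement:
    // ceil(log2(value) * 2^scale) - 1.
    static int bucketIndex(double value, int scale) {
        double log2 = Math.log(value) / Math.log(2.0);
        return (int) Math.ceil(log2 * (1 << scale)) - 1;
    }

    // Lower boundary (exclusive) of bucket i at the given scale.
    static double lowerBoundary(int index, int scale) {
        return Math.pow(2.0, (double) index / (1 << scale));
    }
}
```

For example, at scale 0 the buckets are (1, 2], (2, 4], (4, 8], and so on; bumping the scale to 1 splits each of those at powers of the square root of two, which is why a fixed bucket budget effectively buys a window of resolution that slides with the data.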
D
Yeah. So, like, worst-case scenario... by the way, I'm still running that high-cardinality agent from, like, two months ago, and we're up to about six million... no, sorry, six thousand unique attribute sets from the client IP address. Fun. Anyway, so, worst-case scenario, you have something like that, where you have just an absurd number of histograms, which is why I want to be a little bit more conservative around bucket size in histograms.
G
Quick, quick question about histograms: you mentioned negative values. Can you jog my memory, Josh? Can explicit-bucket histograms only record positive values for now?
D
Yes, and we want to lift that restriction at some point in the future, but I don't know when that will be.
G
Yeah, I'm doing, like, a pass on the SDK to make sure that you can't do illegal things, and so that's something that I'll add: making sure...
E
That histograms...
D
Like, go look at how you're supposed to handle errors; there's a supplementary guide you should read through, yeah.
A
All right, well, we are two minutes over our five-minute cutoff, so just real briefly, because some good stuff went in in the last week. A lot of progress; thank you, Lori, for bringing down the flaky tests. I'm not quite sure how to read the charts yet, but there have been a lot of PRs bringing those down, so I want to analyze... I need to analyze that. A new feature for capturing query parameters, servlet request parameters.
A
This is a really nice interop, for our Project Reactor instrumentation, between manual instrumentation and agent auto-instrumentation for Project Reactor. This was done by Ludmila from the Azure SDK group, because Azure SDKs use Project Reactor extensively, so this is a really important use case. Gradle caching: thank you, Nikita; all the Gradle cache work makes our builds faster, our local builds faster. Running muzzle checks... oh yeah, this was a long-standing issue; now we can actually run muzzle against our API instrumentation...
A
...our OpenTelemetry API instrumentation, which is great; it gives us a lot more confidence there that we're not breaking against old versions, that we still support the older OpenTelemetry API. And then just some good discussion we had about the reasons behind dropping, and when we can drop, instrumentation for older library versions.
A
Cool, I want to end on time. Jason, we didn't get to this; either post in Slack or we'll have it first on next week. Yeah, no big deal, or I'll jump in tonight; either one. Awesome, yeah. See ya; thanks, everyone; see ya.