From YouTube: 2021-10-14 meeting
B
Glad you're able to stay around; we'll take whatever percentage of your time we can get.
A
Yeah, we'll see how it all ends up playing out. We definitely need to also start getting more... I mean, we've got two new approvers, which is awesome, but we definitely need to get more people involved.
B
So yeah, I wanted to start with just talking about the main work that's going on in the instrumentation repo right now, to keep everybody updated. The focus is on instrumentation API stability, and so Nikita created a GitHub project here to track the things that we think are needed for instrumentation API stability. We had a good discussion earlier in the Tuesday SIG about how we can approach this, because one of the blockers is semantic attribute stability.
B
From the perspective that our semantic attribute extractors, like the HTTP attributes extractor, rely on the semantic attributes: whether it's the naming, whether it's that they're required, whether it's that they're even present on client or server spans. And so we wanted to map out a path to instrumentation API stability that doesn't depend on semantic attribute stability, since that's probably going to take some more time. The thought at this point is to stabilize the Instrumenter and the AttributesExtractor interface.
B
But the HTTP attributes extractor, net attributes extractor, and DB attributes extractor would for now move to an internal package in the instrumentation API. Our convention from the SDK repo is that that would still allow other instrumentations that we publish to use those internal packages, since we have a BOM that requires version alignment.
B
And
that
would
allow
us
then
to
stabilize
say,
like
the
ok,
http,
instrumentation
library
instrumentation,
since
that
doesn't
it
uses
the
http
attributes
extractor,
but
it
doesn't
expose
that
in
the
public
interface,
and
so
you
know
it
definitely,
it's
loses
a
lot
of
the
that's
where
a
lot
of
the
benefit
comes
from
for
library,
authors
for,
say,
people
who
want
to
do
native
instrumentation
or
use
the
instrumentation
api
outside
of
our
repo.
B
These attribute-specific semantic attribute extractors really help to enforce the semantic conventions, but we think there's still a significant benefit to stabilizing the instrumentation API.
B
Or
before,
and
not
waiting,
not
relying
on
that,
but
would
love
any
thoughts
that
people
have
any
alternate
ideas.
People
have.
A
Mappable changes via schema shouldn't, or rather won't, be considered breaking changes, right?
A
So if we could get that pinned down and get stability... if we had made that decision at the 1.0 cut and said we're going to use schemas to do this mapping, then we wouldn't have this problem, right? It would be really nice, because we could actually just say, hey, this HTTP attributes extractor version 1.7 supports schema version 1.whatever, I don't know. And if you have the collector, or some backend that does schema mapping, then it will just, you know, work magically.
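The schema mapping being discussed is, at its core, a set of attribute-key renames between two schema versions that a collector or backend can apply. A minimal sketch of that idea; all attribute names here are made up for illustration, the real renames live in the published schema files:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a schema transformation as a set of attribute-key
// renames between an old and a new schema version.
public class SchemaMapper {
    private final Map<String, String> renames;

    public SchemaMapper(Map<String, String> renames) {
        this.renames = renames;
    }

    // Rewrites attribute keys recorded under the old schema version so that
    // a consumer expecting the new schema version can read them.
    public Map<String, Object> upgrade(Map<String, Object> attributes) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : attributes.entrySet()) {
            result.put(renames.getOrDefault(e.getKey(), e.getKey()), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> renames = new HashMap<>();
        renames.put("http.old_name", "http.new_name"); // hypothetical rename
        SchemaMapper mapper = new SchemaMapper(renames);

        Map<String, Object> span = new LinkedHashMap<>();
        span.put("http.old_name", "GET");
        span.put("http.status_code", 200L);
        System.out.println(mapper.upgrade(span));
    }
}
```

An extractor published against schema 1.7 would then "just work" with a 1.6-era consumer, as long as something in the pipeline applies this transformation.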
B
So I think there's still one unsolved problem there, what do you think, and it's sort of related to the semantic attributes constants.
A
Well, it's probably more of an issue with the actual semantic attributes rather than the extractors, right? Because a given extractor would point at a single schema, and revving it up would just point it at a new schema, which shouldn't be a problem. But the fact is, we probably do need to keep the old generated attributes constants around somewhere, for backwards compatibility, for API users in general, right?
G
I am all on board, by the way, with what you were saying, John, as something the spec needs to provide; something that's been lacking a little bit with schema and the semantic attribute conventions. I think having the v1/v2 things seems to make the most sense, but it will also probably get incredibly unwieldy, given how fast the spec has been releasing and how fast that version number's changing. Like, I think it's the right thing to do, it's just...
A
Other
thing
that's
a
little
bit.
Annoying
is
mostly
what's
happening
right
now,
with
the
with
the
semantic
conventions.
Is
that
we're
adding
new
attributes
and
so
version?
One
point
I'm
just
making
things
up.
I
don't
know
for
sure,
but
version
1.6
is
compatible
like
1.7
is
fully
compatible
with
1.6.
Just
has
more
things
in
it,
so
it
doesn't
feel
like
we
would
need
to
generate
a
new
like
a
newly
packaged
version
of
the
file
in
that
case,
because
they're
fully
backward
compatible,
but.
A
So
it's
not
the
sdk
is
not
the
issue.
The
issue
is
that,
let's
say
I've
got
a
library,
that's
instrumented,
with
version
1.5
of
the
semantic
conventions
and
there's
another
library
completely
unrelated.
It's
integrated
with
1.7
semantic
conventions.
A
They
need
to
operate
in
the
same
api
space
and
those,
and
if
you
only
are
providing
1.7
as
a
part
of
your
api
package,
the
1.5
one
will
break
because
it
will
be
constants
that
have
been
renamed
or
changed
or
remapped,
or
something
like
that
right.
It
won't
compile
anymore,
and
so
we
will
have
broken
broken
backward
compatibility.
In
that
case,
it's
not
an
issue
for
all.
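The compile break described above is what versioned generated constants would avoid: instead of one class whose constants get renamed across releases, each schema version keeps its own generated class, so libraries compiled against different versions can coexist on one classpath. A minimal sketch, with illustrative class and attribute names, not the actual generated code:

```java
// Hypothetical sketch of per-schema-version generated constants classes.
public class VersionedConstants {
    // Generated for semantic conventions 1.5; never modified after release.
    static final class SemanticAttributesV1_5 {
        static final String SCHEMA_URL = "https://opentelemetry.io/schemas/1.5.0";
        static final String HTTP_OLD_NAME = "http.old_name"; // renamed later
    }

    // Generated for semantic conventions 1.7; carries the renamed key.
    static final class SemanticAttributesV1_7 {
        static final String SCHEMA_URL = "https://opentelemetry.io/schemas/1.7.0";
        static final String HTTP_NEW_NAME = "http.new_name"; // the rename
    }

    public static void main(String[] args) {
        // A library compiled against 1.5 keeps resolving its constant...
        System.out.println(SemanticAttributesV1_5.HTTP_OLD_NAME);
        // ...while newer instrumentation uses 1.7; a schema-aware consumer
        // can reconcile the two via the schema URLs.
        System.out.println(SemanticAttributesV1_7.HTTP_NEW_NAME);
    }
}
```

Nothing recompiles against a changed name, at the cost of carrying every published version forward.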
G
There's
there's
a
weird
convention
in
this
is
going
to
be
a
weird
pool,
but
lightweight
opengl
java.
What
is
called
lw
jgl
thing
around
the
opengl
convention?
Lw
jgl.
I
think
it
is
so
the
way
that
works
is
actually
they
have
version
packages,
but
the
new
versions
after
the
1.0
just
have
changes
in
them.
A
So
this
is
interesting
because
we
could
actually
use
potentially
use
the
schema
to
do
that.
Work
right.
That's
one
of
the
goals
of
the
scheme
is
we
could
actually
just
I
mean
as
if,
if
they,
if
the
spec
folks
would
just
cut
a
1.0
of
some
of
the
semantic
conventions,
yeah
man,
those
spec
folk
yeah,
like.
A
So
we
could
use,
we
could
change
the
generator
code
so
that
it
used
the
schema
to
generate
the
diff
file,
which
is
the
new
ones
right.
The
naming
of
those
files
is
going
to
be
weird.
I
don't
know
how
it's
going
to
end
up
looking,
but
but
you
certainly
could
imagine
that
being
something
that
could
work.
G
So
quick
question
this
this
mapping
here,
because
I
think
the
some
of
this
will
be
relevant
to
all
cigs,
and
some
of
it
is
java
specific.
G
Is
this
something
that
the
java
sig
is
planning
to
to
kind
of
drive
a
solution
for
because
like
to
get
the
instrumentation
api
marked
as
stable?
Or
is
this
something
you're
planning
to
hide?
I
know
we're
talking
about
it
now,
but
just
in
general,
I
think.
Whatever
solution
you
come
up
with,
probably
should
get
elevated
after
it's
done
so
because
this
would
really
help
us
mark
things
stable
upstream,
like
this.
This
is
one
of
those
missing
components
that
no
one
has
an
answer
to.
G
On that question: I believe Go is doing something; there's a bit of a kerfuffle around it, actually, but their generation is in a separate repository, so they have their own tool that does stuff. I don't know of any others off the top of my head; I'd have to look it up.
A
Yeah, resources are way more complicated, because... I mean, at least at the tracer level, a tracer has a single instrumentation scope, it has a single schema. But with resources you can end up cramming in potentially multiple resource versions, based on auto-detectors. Yeah, it's a much more complicated problem.
B
There's a problem with tracers too: if you do Span.current() and setAttribute on it from a different tracer, that's not even that tracer's span, right?
A
But it is true that a span can only have one schema associated with it, while you could have been generating attributes for multiple schemas or something; I don't think there's anything we can do about that. We're definitely going to run into the auto-detection case no matter what; that's not something the user is really in control of.
D
Yeah, but still, if you have libraries compiled against different versions in one application, then you have exactly the same problem, even if the transformation is compatible and just renames an attribute. Because how do you have the semantic attributes, right? Classes with different constants? Yes, exactly.
B
And so for extractors, where it's, oh, we have things like http.url, http.method, "POST"... if any of these changed, or the requirements changed, this would be... I think this is easier: just generate new versions. So what we're saying is, we would probably have to generate versions.
B
Yeah, it makes me a little nervous, not knowing how many changes there will be, or...
D
We'll have to make them public at some point, and after that point they will still potentially be modified, so we still have to support that schema evolution in some way. Yep. That's an interesting question, probably for the wider maintainers meeting, actually: how other languages' instrumentations aim to support schema evolution. The SDK and API support was fleshed out, it was thought about during the design, but how instrumentations are going to support schema evolution is open.
G
I
think
this
is
a
great,
a
great
discussion,
something
that
I
I
tried
to
formulate
some
of
my
thoughts
here
in
the
in
the
specification,
because
I
actually
think
there's
a
bit
of
bundling.
That's
lost,
for
example,
you're,
looking
at
the
http
attributes,
extractor
as
a
unit
of
instrumentation
and
to
some
extent
it
might
make
sense
for
http
attributes
to
be
its
own
little
schema
url
with
its
own
version
scheme
for
semantic
conventions.
G
Similarly,
with
resource
detectors
right,
it
might
make
sense
to
have
a
case-based
resource
detector
that
detects
all
or
nothing.
So
you
get
all
of
these
attributes
or
you
get
none
of
these
attributes
right,
because
there's
there's
there's
an
issue
with
some
of
these
semantic
conventions
of
like
what's
optional
and
what's
required
of
we're
starting
to
see
what
I
call
these
should
musts.
G
If
you
provide
this,
you
must
also
provide
this
right
and
it's
getting
kind
of
awkward,
and
I
feel
like
there's
a
missing
component
here
and
I
think
it's
these
instrumentation
extractors
that
you've
defined
right
of
like
I
have
a
thing
that
detects
a
set
of
information.
That's
important
to
this
instrumentation.
G
You
know
I
have
a
thing
that
detects
a
resource
and
when
this
runs
it
should
detect
these
five
attributes
and
if
it
doesn't
detect,
if
it
doesn't
detect
it
shouldn't,
give
any
of
them
right
and
that
we
should
be
a
little
bit
more
explicit
the
specification
around
this,
whereas
right
now
everything
is
just
flat
attributes
and
one
schema.
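The all-or-nothing detector idea could be sketched like this. The attribute keys below are real resource semantic convention names, but the detector itself is a hypothetical illustration, not an existing API:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an "all or nothing" resource detector: if any
// attribute in the detector's bundle is missing, it reports nothing rather
// than a partial set.
public class AllOrNothingDetector {
    // Returns either the complete bundle of attributes or an empty map.
    public static Map<String, String> detect(Map<String, String> env) {
        String[] required = {"cloud.provider", "cloud.region", "host.id"};
        Map<String, String> found = new LinkedHashMap<>();
        for (String key : required) {
            String value = env.get(key);
            if (value == null) {
                return Collections.emptyMap(); // one missing: report none
            }
            found.put(key, value);
        }
        return found;
    }

    public static void main(String[] args) {
        Map<String, String> complete = new LinkedHashMap<>();
        complete.put("cloud.provider", "gcp");
        complete.put("cloud.region", "us-central1");
        complete.put("host.id", "abc123");
        System.out.println(detect(complete)); // the full bundle
        complete.remove("host.id");
        System.out.println(detect(complete)); // empty map
    }
}
```

The bundle boundary, not the individual attribute, becomes the unit the spec can make guarantees about.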
G
So I don't know if that's worthwhile, if that helps; I don't know if it does or not, but it's something that I've been thinking through. I tried to write it down in an OTEP that no one understood, so I just deleted it, because it was pretty poorly worded. But that's the idea I was trying to figure out.
B
Yeah, I saw your most recent OTEP about the resource conversion, and I like that at least the idea is there. I mean, I know we've talked about doing this schema evolution on the collector side, where the collector will generate the mappings based on the spec YAML evolution mappings, but also having that potentially built in at the SDK layer seemed interesting.
G
If the HTTP schema URL becomes its own thing that we release, it's like: here is what HTTP telemetry looks like, and there's a dedicated group of experts around HTTP and HTTP instrumentation. I think it scopes the problem to something that we can more easily solve and be confident in, so that would be my suggestion; that's something I've been thinking about. I haven't figured out how to phrase it in a way that isn't scary to people, because it does mean more fragmentation, or a different release process, or something, right?
A
Maybe the generation of those constants doesn't belong in the API/SDK repository; maybe it belongs in the equivalent of the new proto-java repository, which is synced to the version of the proto. Maybe we need these generated classes to be synced to the version of the spec, and not synced to the version of the API/SDK, which they are implicitly right now, because we kind of are releasing the same version of the API/SDK and those packages along with the spec. But we certainly don't need to do that.
A
So
if
there
were
a
separate
repository
and
it
could
be
a
separate
publishing,
it
doesn't
have
to
be
sorted
by
story,
but
maybe
makes
sense
for
publishing
reasons
to
keep
it
separate.
But
if
we
had
a
separate
publishing
of
those
generated
constants
that
was
not
tied
implicitly
to
the
api
and
sdk,
then
the
instrumentation
can
pull
in
specifically
which
version
they
need
like
so
like
the
http
instrumentation.
A
So
it
would
at
least
solve
one
of
the
problems
and
that
we
wouldn't
be
tying
two
things
together
that
don't
make
sense
to
tie
together,
which
is
the
api
sdk
with
the
semantic
attributes,
because
they're
really
completely
independent,
except
for
the
very
tiny
cases
where
we're
actually
using
them
when
we're
generating
some
internal
metrics
and
things
like
that.
A
I think we could then even have the generated attributes classes broken out by category, with all the different categories published as independent files, and then a given instrumentation only needs to pull in the ones it wants and depend only on the versions it wants.
B
I
think
the
I
mean
the
yeah,
the
the
part
about
the
semantic
attributes-
versioning
not
being
tied
to
the
sdk
repo
in
particular,
makes
a
ton
of
sense,
like
I
almost
feel
like
they
would
live
better
in
the
instrumentation
repo,
where
we
already,
you
know
we're
already.
All
our
instrumentation
is
sort
of
tied
to
semantic
conventions
already
yeah.
B
Not sharing the resource space, but yes, right.
G
This discussion around packaging and generation, too: I feel like that hits more than just Java. Where is the right place to talk about that, the specification SIG or one of the instrumentation SIGs? Where should that get raised?
G
Okay, that's a good point. I unfortunately can't go to those. Nikita, are you going to raise that in the maintainers meeting?
B
All right, yeah, so great progress on the Instrumenter API conversions: 83 out of 99 done, and there's a few pending PRs.
B
So
yes
thank
you
to
everyone
contributing
there
and
oh
yeah.
I
was
the
neti
server.
Instrumentation
is
really
this
brings
in
http
metrics.
Now.
B
Oh yeah, and the one I didn't put here: 1.7. We will try to pull the trigger tomorrow; if it doesn't work, then we'll pull it Monday. So lots of good stuff, I think, coming through in the metrics and instrumentation in 1.7.
G
A
question
about
that.
We
have
talked
before
about
having
example:
our
sampling
be
off
by
default.
1.7
did
that
go
through
or
because
it
didn't
look
like
it
did,
at
least
in
the
sdk.
G
As soon as you have that 1.7 release, I'll do some testing on our exemplar backend. I know that I did some testing with the API and SDK, and it's working beautifully; I'm very happy with what we're seeing there. It's very cool: log, trace, and metric correlation, yeah. So I'm really excited to try it out in the agent, and see what that looks like by default. Cool, cool.
G
If
I
have
time
I'll
try
to
get
an
open
source
demo
together,
it's
just
harder
for
me
because
it's
actually
hard
for
me
to
get
all
the
weird
prometheus
jaeger
components
set
up
versus
just
using
you
know
the
google
tools
so.
G
If
google
lets
you
use
the
marker,
I
guess
no,
I'm
not
allowed
to
use
that.
That's
like
that's
like
the
main
problem.
You
need
to
get
an
exemption
to
use
docker
compose
on
anything
yeah.
It's
it's!
It's
not
is.
G
Well,
there's
like
safety
things.
I
guess
I
don't
know
anyway,
but
hey
if
there's
a
helm
chart
that
does
it
I'm
gonna,
I'm
gonna
go
after
that.
C
Can we maybe just create a custom span name extractor just for it? I mean, it doesn't suit either of the two scenarios that we expect. It's not SQL, obviously, and it has, like, another name, and generic DB is its fundamental structure, so...
B
What does... what does?
B
Oh right, so we parametrize the SQL attributes extractor: you can change whether it's db.sql.table or db.cassandra.table.
B
Right
but
we
call
it
sql
extractor.
C
It's
still
called
sequence,
but
it's
it's
like
sql
like
languages,
so
cassandra
can
kind
of
matches
that
I
mean
all
the
passing
rules
are
really
the
same.
Pretty
much
the
same.
I
Hello, yeah. So I have a prototype draft PR open on the opentelemetry-java-instrumentation project for a Log4j 2 appender. Basically, the concept is that it creates an OpenTelemetry appender that you can configure in your Log4j XML or JSON configuration.
I
And then, when you configure your OpenTelemetry logging SDK, you call a static method that associates that SDK instance with the appender, which was probably previously created, you know, when the application started. And then logs from Log4j start flowing through the OpenTelemetry SDK and can get exported to wherever you want, for example over OTLP.
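The wiring just described (appender created by the framework at startup, SDK associated later via a static call) could look roughly like the following sketch. All class and method names here are illustrative stand-ins, not the actual PR's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: the logging framework instantiates the appender
// before any SDK exists; a later static call associates the SDK with it,
// after which log events flow through.
public class AppenderSketch {
    interface LogSink { void emit(String message); } // stand-in for the SDK

    static final class OtelAppender {
        private static final AtomicReference<LogSink> SINK = new AtomicReference<>();

        // Called once the SDK has been configured at application startup.
        static void install(LogSink sink) { SINK.set(sink); }

        // Called by the logging framework for every log event; events are
        // dropped (or could be buffered) until a sink is installed.
        void append(String message) {
            LogSink sink = SINK.get();
            if (sink != null) sink.emit(message);
        }
    }

    public static void main(String[] args) {
        List<String> exported = new ArrayList<>();
        OtelAppender appender = new OtelAppender(); // created by the framework
        appender.append("before install");          // dropped: no SDK yet
        OtelAppender.install(exported::add);        // associate the SDK
        appender.append("after install");           // now exported
        System.out.println(exported);
    }
}
```

The interesting design question this surfaces is what to do with events logged before `install` runs; the sketch drops them, but buffering is the other option.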
I
So
I
came
across
some
issues
with
like
the
logging
sdk
and
I
kind
of
talked
I
talked
about
them
here
in
in
the
in
the
pr
description,
I'm
going
to
fix
those
and
but
yeah
take
a
look.
Let
me
know
what
you
think
if
the
patterns
are
good,
we
can
apply
this
to
other
logging
frameworks.
B
Yeah
yeah,
so
logging
is
going
to
be
a
little,
is
a
little
different
for
us
since
there's
no
logging
api
and
normally
our
bridging,
is
across
the
api
with
bridge
the
api.
So
I
talked
to
anurag
last
week
about
options
and
I
think
his
preference
was.
B
We
was
initially
thinking
of
putting
like
going
through
the
slf
for
jay
or
some
some
existing.
We
have
to
go
through
the
bootstrap
some
api
in
the
bootstrap
class
loader.
I
So I think it would be kind of a weird cyclical dependency thing: autoconfigure would have to depend on an instrumentation artifact, which would, you know, depend on artifacts from opentelemetry-java again, so SDK artifacts. So I think if you did that, you'd have to move this appender to opentelemetry-java, probably, and that doesn't seem like the right place.
A
Right, yeah, I think it would have to be fully SPI-based, because I don't think we would want to build those appenders into our SDK like we do with Prometheus, because that's first-class support. Right, right, right. Okay, I'm just transferring it, but that seems like the right way to approach it.
B
So, to the question of agent support: in 1.7 we still have no logging story in the agent, other than the MDC trace ID/span ID stamping that we've had. But with all this progress in logging, 1.8 might have it. That would be really cool.
I
Yeah, one thing I discovered along the way is that the logging SDK is actually unusable right now. You can't create... I forget what the class is called, but the logging equivalent of the tracer provider. You can't create one, because the method required to access the builder is not public. That would definitely have to be fixed before the instrumentation has any hope of using it.
B
This was the last one; I put this on here, but it's really Mateusz's topic.
C
Yeah, I was supposed to bring this up even earlier, but I kind of forgot about it. Anyway, I've added several properties to configure capturing HTTP headers. If you scroll a little bit up, you can see those monsters, very long names, and we've talked with Fabri, who's our Splunk technical writer for instrumentation, about how we could improve them.
C
So
one
thing
that
we
thought
of
is
maybe,
if
we
have
like
common
properties
for
all
http
instrumentations,
which
is
clearly
the
case
here
instead
of
using
word
common,
you
could
we
could
use
like
the
name
of
the
convention,
so
other
instrumentation
http
skip
experimental
capture,
headers
server
response,
so
it's
like
it's
a
bit
shorter
and
it
contains
pretty
much
the
same
thing
and
if
we
had
like,
if
we
have,
we
actually
have
a
common
property
for
database.
C
So
we
have
a
db
sanitized,
something
property
which
still
uses
also
instrumentation
common
prefix,
but
it
could
use
like
auto
instrumentation
db,
sanitize
statements
or
statement
scientists,
something
like
that.
C
Also
instrumentation
net
peer
service
mappings.
Instead
of
all
the
instrumentation
common
period,
you
say
this
mappings
and
so
on.
C
So that was the idea: to just basically skip the word "common", since it doesn't really provide any additional context or meaning.
C
Okay,
cool,
so
once
1.7
is
released,
I
will
submit
the
pr
that
changes,
this
experimental
line
to
the
the
one
that
you
can
see
here.
B
All
right
any
thing
anybody
else
wanted
to
chat
about
today.
B
Cool,
so
just
briefly
other
things
that
went
in
in
the
last
week,
this
was
a
big
one
ludmila
who
leads
the
tracing
in
the
azure
sdks
and
azure
sdks
use
reactor
heavily,
so
she's
been
working
out,
interop
between
that
and
auto
instrumentation,
in
particular
how
they
can
propagate
context
downstream
because
they
capture
their
own
http
instrumentation,
for
example,
or
their
own
database
instrumentation,
and
if
they
don't
propagate
it
downstream,
then
we
also
auto
capture
stuff
from
like
neti,
because
they
use
neti,
and
so
this
really,
I
think,
is
a
big
step
in
sort
of
the
reactor
support.
B
Context,
strict
context
checks,
I
don't
know
if
laurie,
yes,
laurie.
Thank
you
for
keeping
keeping
at
this.
It's
giving
us
a
lot
more
confidence,
giving
me
a
lot
more
confidence
in
sort
of
like
some
of
these
areas,
where
we've
had
some
code
code
paths.
B
...in case we were leaking context; maybe someday we'll be able to remove those, if we have enough confidence that we're not leaking context anywhere. This was the HTTP headers capture. I've seen users request this, and I know Splunk has users requesting it, so this is a good feature in 1.7.
B
This is for sampling: we had a request for attribute-based sampling. I think we lost Nikita already, but he was speaking up in the spec for attribute-based sampling being an important use case for users, and I've definitely seen that also from our users. This was driven by someone who had no way to sample out Quartz spans; they didn't want certain jobs that were just way too noisy.
B
So
this
allows
them
to
do
that.
This
was
a
regression.
Unfortunately,
in
one
six,
that
I'm
surprised,
no
users
reported
it,
except
that
it
was
a
little
weird
like
your
async.
Servlet
span
would
be
really
fast
short
your
server
span
and
then
it
would
be
way
too
short.
B
It wouldn't cover the full length of the request, and so we just fixed that. Normally we would release a patch release for that, but I'll probably discuss with Anurag and see if he thinks we should release a patch anyway, or just include it in 1.7, which of course means updating to the SDK.
B
Lots more metrics stuff, and yeah, just cleaning up the API; a lot of work going into cleaning up the public API surface, trying to drive towards instrumentation API stability. And oh yeah: during the conversion from the servlet tracer to the servlet Instrumenter, Laurie had intentionally not included the additional attributes that the instrumenters encourage and allow us to capture, just so that the tests wouldn't all change.
B
Thank
you
lori
that
actually
did
make
reviewing
it
way
easier,
and
so
this
adds
new
attributes
to
that
are
captured
by
servlets.