From YouTube: 2023-01-17 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
B
C
I don't see anybody talking, so I guess we haven't started the meeting. Sorry for the delay myself; Google decided to kick me out one minute before this meeting started. So, usually, you know.
Okay, thank you for joining. So, let's jump into the actual meeting. We have a few items already. The first one is just a reminder that today we will be doing the release for January; it's slightly delayed. Some people were on holidays, and I considered that it could be good to just wait a little bit, you know, just in case. Please check the PR.
We added some clarifications here and there, and there are some changes, but it's good that you review that, especially if you were on holidays. The next one is the pool name attribute in semantic conventions; that's a PR that has actually been requesting reviews. Long story short: we have this pool name attribute as part of semantic conventions. It already exists, but usually in databases you define a pool name, and sometimes there's no pool name by default.
It seems straightforward; we need more reviews. I don't see Josh Stewart here, so we'll just ping him to see whether this would impact the efforts that he has been driving regarding cardinality. Otherwise, please take a look. It looks great; I want to give it an approval, and then it will just be in your court.
D
C
Yeah, it's from the same contributor, and it's also related to databases. That one has no approvals yet, but it has initial reviews. It needs more reviews as well. Thank you so much for that.
D
One is that we would potentially capture PII and sensitive data there, but that's partially handled by most likely changing it to optional, so that it's an explicit opt-in, like we have for HTTP header value capturing, where the same concern would come up. But I'm just wondering if the other SDK authors or instrumentation authors have any special requests with, for example, capturing bind values but also bind keys and such. And we could probably also use some input from people more familiar with non-SQL statements and whether it fits there.
D
C
If there are no more comments on these two database-specific semantic convention changes, let's go to the next one: jmacd, "Specify a MeterProvider-configurable cardinality limit." Please.
E
Hi, I just wanted to remind people about this PR that's been out for a while. There was a lingering concern, or question, in the group, probably going back five to six months, that some other SDKs did have a hard limit on the number of time series that they're willing to produce for a single instrument. That was set to 2000, probably from Java first, I think, and to help unblock that I had written this PR.
E
The link there proposes that, after some debate, we settled on the idea that each instrument would have a cap defaulting to 2000, and that there would be no unlimited configuration. The contentious part is what happens when you reach that limit.
E
The statement in the text is that we'll simply stop reporting those metrics: consider them dropped, consider them errors, log about them, warn the user, tell them it's happening, try to make it sound like it's not silent. But then there's not much else we can do, except report dropped metrics or do something fancy.
E
But that's not what I'm after. The point here is that the simplest thing we can do, I think, is to drop those data points and make sure that the user can see that. So I think the next priority would be to work on monitoring of dropped data: a time series to indicate how many metric points you're dropping, or how many spans you're dropping, for example. That's what I would like to see next, but this is still contentious and it hasn't seen a lot of movement.
F
Josh, I had a question about that. You just said that, the way it's currently written, it would drop any additional attribute measurements, right? Sorry, the way that I'm reading the PR is that it drops the entire batch of metrics, which means that it's going to drop all attribute sets for that instrument.
E
Yeah, that's more or less what I'm proposing. You reach 2000 for one instrument: two thousand combinations of attribute labels. At this point you can't add one more without breaking that limit, but having 2,000 series out of an unknown number is not that helpful either; you're losing information at that point. So to me, reporting partial information is not very helpful.
E
I would rather raise an alarm and then continue to report. At 2,000 out of n total series, you've lost the total sum or the total average at this point. And the solution that I think most people picture is, you know, either fix the code or turn on attribute filtering by putting in a view. I've long believed that the only good reason for us to have views support is to enable the user to fix this sort of problem.
F
Yeah, but I just want to point out that in this situation, what's going to happen is that a user is going to run their code, they're going to look at their observability dashboard, and they're not going to see data. They may be looking at logs, but they're likely not going to be looking at logs.
E
Yeah, I hear that. I think the potential to be misled, and then not even know there's a problem, because 2000 series are showing up: I'm not reading my logs, so I'm getting partial information and I don't even know it. That's kind of a concern I have, and I think it's more...
F
Yeah, that's not great either, right? Getting 2000 series coming in may also mislead them into thinking that they're successful, because, again, they're not reading the logs. That's why I really liked Riley's proposal earlier, where it communicates at the observability dashboard that something was wrong, when you consolidate all of those additional attributes into a single consolidated, I don't know, some warning attribute that says things went wrong here.
E
I get it. I think we'd be better off encouraging the group, encouraging users moving forward, to monitor for errors and monitor for drops of data: one time series to indicate dropped metric points, and then they can have a dashboard which is "am I dropping data or not?" I think that would be a better outcome. But I guess my concern is about the hypothetical that Riley put forward. So, just for the group, this is the idea that you reach 2000.
E
At this point, you go into some degraded mode where you're going to keep those two thousand, and anything new gets counted in a catch-all, the leftover overflow bucket. It'll be as if you were stripping all the attributes that are not in the 2000 that you've already got, and then this is your catch-all. There are some implementation questions that I don't have answers to; I haven't worked this out myself, so I don't know how complicated it is, and I don't know what we're going to run into.
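The degraded mode described here, keeping the first N attribute sets and folding anything new into a leftover overflow bucket, can be sketched roughly as follows. This is a hypothetical illustration, not the actual OpenTelemetry SDK code; the `otel.metric.overflow` marker attribute is an assumption.

```python
# Sketch: once an instrument has seen its cardinality limit of distinct
# attribute sets, further measurements are folded into a single overflow
# series marked with a reserved attribute, instead of being dropped.

OVERFLOW_ATTRS = (("otel.metric.overflow", True),)  # assumed marker attribute

class CappedSumAggregator:
    def __init__(self, cardinality_limit=2000):
        self.limit = cardinality_limit
        self.series = {}  # frozen attribute set -> running sum

    def record(self, value, attributes):
        key = tuple(sorted(attributes.items()))
        if key not in self.series and len(self.series) >= self.limit:
            # Strip the attributes and fold the point into the catch-all
            # bucket, so the instrument's grand total stays correct.
            key = OVERFLOW_ATTRS
        self.series[key] = self.series.get(key, 0) + value

    def total(self):
        return sum(self.series.values())
```

Note that the overflow series preserves the rolled-up sum, which is the trade-off the group debates below: totals stay correct, but per-attribute detail beyond the limit is gone.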
E
But I agree with your point that if we did what Riley suggested, the user would see the error in their dashboard as part of their time series data: they'd see an attribute label called "something is missing here," or something like that. Now, if you go into the dashboard and you query for a total, rolling up that sum, it's going to be really easy to get the correct sum and not know that you're dropping data. So then, when do you discover that you're dropping data?
E
You have to have an ungrouped display of that time series anyway. I just wanted to promote this stuff, yeah.
F
That's a good point. So if you have an ungrouped display, what I'm guessing that means is that you have another metric that says "dropped data: five," or something, just some other metric that says that, right? I think that makes sense, and I think it would resolve a few different issues that you're talking about. It does leave this issue that, if you didn't contain it in the original, you will have incorrect sums, which may not be what is desired, but that's another point.
F
So what I'm asking, though, is: if you have another metric that shows this, what would you like the default behavior to be, if we had a specification that stated that? Because I think that if you dropped the entire batch of data and you reported all of those, it may not be as useful as having some of it with, you know, an error coming in.
E
Well, I guess this is perhaps work we should be doing now. My feeling is that I would like to see every SDK output...
E
...one time series per successful span sent and one time series per dropped span, and the same for metrics, the same for logs. That's six time series per SDK. Now I tell the user, in very strong terms: you are responsible for monitoring your dropped points counts, so your dropped span counts, your dropped metric points, or your dropped log counts. If you're not monitoring that, the game is over already; please start monitoring your dropped data. You've probably got incomplete traces, you've probably got incomplete metrics, if you've got any dropped data, and we know that dropped data happens.
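The six per-SDK self-monitoring series described here could look something like the following. This is an illustrative sketch only; the signal and counter names are assumptions, not spec-defined.

```python
# Sketch of the self-monitoring counters described above: one "sent" and
# one "dropped" counter per signal (spans, metric points, log records),
# i.e. six time series per SDK.

class SdkHealthCounters:
    SIGNALS = ("spans", "metric_points", "log_records")

    def __init__(self):
        # (signal, outcome) -> count; six series in total.
        self.counts = {(s, o): 0
                       for s in self.SIGNALS
                       for o in ("sent", "dropped")}

    def record(self, signal, sent=0, dropped=0):
        self.counts[(signal, "sent")] += sent
        self.counts[(signal, "dropped")] += dropped

    def dropping_data(self):
        # The "am I dropping data or not?" dashboard check.
        return any(self.counts[(s, "dropped")] > 0 for s in self.SIGNALS)
```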
E
It's like we haven't made enough effort to help the user when they get this problem, which is a hard problem with telemetry: you're not reporting your telemetry, so where's it going? Into some log somewhere? No, we need to make this more visible. I don't know. My opinion is that I was trying to help move this forward, but it seems like it was an overly simplified approach, and I will rest here; I don't have any more to say.
F
I think that's kind of the key: we're having a lot of discussion amongst developers here, but it'd be really cool, I think, to have users. Because the thing that I anticipate is that if we implemented something like Riley's solution, where you're actually consolidating attributes, I feel like 80% of the time they're not going to care. They're going to see the one out of the two thousand attributes that they actually care about, they're going to get their thing, it's going to work, they're going to do a sum.
F
It's going to roll up, it's going to be the correct sum, and it's going to be in that odd case where they're like, "wait, there's this one attribute that I wanted to actually look up; why isn't that there?" That's going to be the question. And I think that in that situation, if you had a metric exposed as well, saying that we've rolled up attributes, that there's actually a limit that's been exceeded, then they could find the breadcrumbs to go...
F
...look in and discover where it was actually going. But the key thing that I'm trying to think of is: what if we don't actually do that roll-up, and we don't present that to the user that way?
E
Much of the time, I mean, they're looking for the one out of 2000 and it happens to be found. I wonder if they'll be upset that they were allowed to pretend there was no error for a long time. But I'm taking this feedback the way I think you mean it, which is: I think we should do more work. I think I have to prototype this before I can recommend it.
E
First of all, that's the Riley solution, and I'd like to hear from others as well. Before I stop talking: there was one other technical point in this discussion that hasn't been mentioned here, and it's about how, when you choose delta temporality, if you came from a statsd world or a Datadog world, or if you have that support in your system, there is a graceful resolution, which is that if the user stops using the extra cardinality...
E
...you can go back to recording everything, and in that case there is a graceful outcome after the event. I like that, and I would like to encourage that. Although I know that if you have a Prometheus backend, you've got to turn it into cumulative at some point, and that's where this breaks down: you have unknown gaps in your cumulative, and that's hard to deal with. Unknown gaps in your delta are just gaps, right? So I think we should just table this.
G
A little bit of context on the Java 2000 limit: we cherry-picked that from .NET's implementation. We didn't originally have any defenses in place for cardinality explosion, but shortly before we went stable, we decided that was a critical piece, so users didn't have a foot gun. Okay, so I guess I'm in favor of the simple approach that Josh has proposed. I see what you're saying, Tyler, about having this...
G
...this extra series that captures a dimensionless version of the sum. And I guess: why is it so contentious to track dropped data points in the SDK itself? That seems like it shouldn't be that hard, to do the follow-up work to surface this to users. I'm just imagining you have an instrument with dimensions for each instrument name that had data points dropped. Where would the disagreement come from with that?
F
Yeah, I don't disagree; I think that's a positive position, and I think that's great follow-up work. I think it's just: then what's the original instrument's behavior in that use case? I think it's still an open question. Is the original instrument still dropping all of the attributes? Is it dropping partially? Is it going to roll up into a single attribute that says that all the other attributes that would have been collected are now...
G
Yeah, and just one thing that was occurring to me when you were talking about rolling up all the measurements into a single series that indicates the cardinality limit was exceeded: I don't actually think a lot of users will be able to find that super easily. I'm imagining somebody's using probably what's going to be the most popular instrument, which is tracking HTTP server duration, and I'm...
G
...imagining dashboards are going to be incorporating filters for the path and/or the status code. So if you're looking for those attributes to be there, but they're not there because those attributes are being dropped, all of a sudden your data is going to look inaccurate. So I don't see that much of a difference between rolling them up into a catch-all bucket and just dropping them altogether.
F
H
F
I think one thing also is having a proof of concept that the roll-up is even feasible. I think it is, because I think Riley said that he's done this before. But yeah, I think it's just: what's the most common user experience? Because, as Josh kind of pointed out, well, Jack, you just pointed out that there may be a need to filter based on certain attributes and that they may not show up. I don't know if there's a need to filter on something that comes in at the 2000-attributes-or-more point; that's a guess at an open question. Like the HTTP duration: I think you're talking about status code or something like that.
F
E
Oh, so it's the path. Okay, or a mistake: that's what we're really protecting the user from. So, all of a sudden I put a bogus path in, I've got a variable in my path, and now I've quickly blown past the cardinality limit. The dashboard that was rolling up by HTTP status code now broke, because it's missing an arbitrary amount. And this conversation has me thinking that an all-or-none type of failure would be nice, so we could roll up...
D
G
Would that be a replacement for tracking dropped data point counts? Because if that roll-up, that single series, has a predictable attribute key and value, then you can look for the presence of that, and you might not need a separate series.
E
Thank you all; I think that was productive. I think we needed a little bit of prototyping, but it sounds like a fallback to a single-cardinality degraded mode with a special attribute, Jack's idea, will be the best outcome for us. I can agree to work on this as a medium priority. I don't promise next week, but I'll work on it, if that's what people like.
C
I
Yes, I opened this pull request, finally. I know we discussed it before in this meeting, and there was a discussion that it should cover all signals. I kind of...
D
I
...went along with that, but fell back to: I think we should cover all signals, but have individual ones, because it just makes sense that you might want to suppress tracing. In the exporter example, you're going to want to not create trace spans for exporting your spans, but you're going to want to log errors if that process fails. So it makes sense to have individual ones, and maybe a wrapper of suppress-all-signals.
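The per-signal suppression flags plus a suppress-all wrapper described here might look roughly like this in Python. This is a hypothetical sketch; the function names are illustrative, not the proposed API.

```python
# Sketch: per-signal suppression flags carried in an ambient context,
# with a wrapper that suppresses all signals at once.

from contextlib import contextmanager
from contextvars import ContextVar

_suppressed = ContextVar("suppressed_signals", default=frozenset())

@contextmanager
def suppress(*signals):  # e.g. suppress("traces"), suppress("traces", "logs")
    token = _suppressed.set(_suppressed.get() | frozenset(signals))
    try:
        yield
    finally:
        _suppressed.reset(token)

def suppress_all():
    return suppress("traces", "metrics", "logs")

def is_suppressed(signal):
    # Instrumentation would check this before emitting telemetry.
    return signal in _suppressed.get()
```

In the exporter example above, the exporter would wrap its export call in `suppress("traces")` while still emitting error logs, since logs remain unsuppressed.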
D
C
By the way, this is the second attempt at an issue that Denise was trying to address in the past, and these questions never wrapped up. I could say it looks good in general.
C
I am a little bit curious about the other SDKs, because I know that Ruby and JavaScript have these, and it's very much needed there. I don't know whether there's any need for these in the smaller languages; I would guess there is.
J
G
A
J
They want to be able to suppress all; they want to be able to suppress tracing; and they'll want to be able to suppress metrics and logs when those are ready. But having all of those is probably where we will end up in the end.
I
Yep, yeah, that was my thinking: that it would come in a separate pull request to add the others, or I can add them into this one. But which languages have suppress-all? I didn't realize those existed.
A
There's my internet... okay. I mean, in .NET the flag is more flexible. It allows suppression for the underlying operation: if you have a higher-level operation like gRPC, it can suppress the underlying HTTP or TCP. And also, you can suppress the entire context for anything that's triggered by the SDK itself. For example, if you have an OTLP exporter that is calling the HTTP stack, you can remove any HTTP instrumentation from that particular OTLP exporter.
H
G
I haven't looked at this too closely, actually. This is more on the instrumentation side of things, and Trask is here; I'm wondering if he can comment on the value this would have to users.
B
Sure, yeah. I mean, I think it's useful; we've had people ask for this. Generally I've recommended sampling, like a rule-based sampler, for health-check kinds of things, which is the main use case that we've seen. We have something like 40 upvotes on having better health-check suppression, more easily configurable health checks.
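The rule-based sampler approach mentioned here for filtering out health checks might look roughly like this. This is an illustrative sketch; OpenTelemetry's real Sampler interface takes more parameters (context, trace ID, span kind, links), and the attribute names are assumptions.

```python
# Sketch: a rule-based sampler that drops spans whose attributes match
# a deny rule (e.g. health checks) and defers to a delegate otherwise.

def make_rule_sampler(deny_rules, delegate):
    # deny_rules: list of attribute dicts; a span matching every pair of
    # any single rule is dropped.
    def should_sample(name, attributes):
        for rule in deny_rules:
            if all(attributes.get(k) == v for k, v in rule.items()):
                return False
        return delegate(name, attributes)
    return should_sample

sampler = make_rule_sampler(
    deny_rules=[{"http.target": "/healthz"}],
    delegate=lambda name, attrs: True,  # e.g. an always-on sampler
)
```

As the discussion below notes, this works when the deciding attributes are available at sampling time, which has historically been the hard part.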
H
I ask because it is instrumentation-related, but it's the kind of thing instrumentation packages would probably not be doing, right? The end user would be doing this, so it would maybe be one of those places where we have the manual API interacting with our automated instrumentation.
H
I just wonder if maybe a way to move forward with this, Tristan, is to get one or two other language maintainers to have a look at it. You know, Go and Java are the two that come to mind that are a bit different from the languages that have already implemented it. If the other maintainers think it's cool, then I think we could probably just approve it, since it seems like there's huge value in it.
B
Okay. Some discussion of the advantage of this: what can be done with this approach that can't be done with sampling?
H
I
A sampler can get very complex and confusing. One of the cases that we have in Elixir is database calls from a worker-queue library. It's popping stuff out of a database and running jobs, and there are all these queries that it has to make to keep up with the queue, and people don't want all these constant database spans. Sampling them out would mean... I don't even know how you would do it so that it wouldn't affect...
I
H
B
G
So you mentioned the span name and potentially the instrumentation scope (although I don't think scope is accessible to samplers), but I think one of the things that we had trouble doing in Java instrumentation was making enough of the attributes accessible to samplers, to make decisions to exclude things like health checks. But we did it eventually. So, you know, is it not possible to tell that one of those spans is part of the noise, based on the attributes that are present, Tristan?
I
In one particular case, I did see someone write a sampler to deal with this, and they're doing a check of five different attributes in the sampler, and that only covers this one particular case. So if they had to extend it to also do health checks, and also the export case, where you don't want your exporter creating spans for its own exports (an infinite-loop type of thing), it would just keep growing. So being able to wrap it just makes it a lot less tedious.
K
Yeah, so I think my concern here is that this appears to be in the API, but it's trying to address an SDK concern, at least with the exporter using the instrumented HTTP client, trying to prevent recursive instrumentation, right? But if it's in the API, that means that any instrumentation can start suppressing other instrumentation for anything else outside of it, which seems like a really big foot gun to give the API.
I
K
B
That makes sense. Good, yeah. Couldn't I do this today already, though? Which is one of the other things I was going to ask to compare this to: if you mount a context with a span with the sampled flag set to false, essentially, programmatically, it's the same thing as writing that custom sampler, right?
H
...for instrumentation packages to touch, but I do also understand the need for it. I do see it as an API issue, in the sense that in some cases doing this with samplers is straightforward, right? If you're an operator and you want to do this through configuration, it seems important to be able to do that, so we should make sure our samplers can do it. But...
H
...I see plenty of situations where, as a developer, the sampler route would be longer and more difficult, and have its own foot gun of accidentally shutting off other things, if you weren't necessarily dealing with spans you had created. And I can see this being a very simple solution.
H
You know, for developers knowing they want to shut off this particular path right here. To me, it has value, but we might want to include in the documentation for it that shared instrumentation packages should avoid shutting off spans that they don't control; that is for end users. So I'm for it, but I agree that it's always dangerous to add things to the API, and we should think about it.
C
E
I have sort of a statement about how the sampler API is capable of doing a lot, but without a configurable and flexible composition-of-samplers mechanism, which we don't have, the user is stuck in a position where it's awful to deal with samplers. I think OTel will get to that; the sampling...
E
...SIG talks about it once in a while. But when I heard Anthony and Ted just now speaking about the problems of API versus SDK, I was reminded that, in the past, when I've had to implement it, I think the staple scenario we're talking about is: an SDK is using some vast amount of code out there, which might be instrumented internally for itself. So there's this problem of recursion, like: I am spanning myself; I am a span exporter, and then I'm a custom span exporter that does synchronous export.
E
It is something that the implementation of the SDK knows: the SDK implementer needs to know that there's a danger of recursion, and then they use an anti-recursion device internally to prevent it. So: I am going to make sure that, as I call my vast API of code, I don't re-enter myself. That is a technique that each language may be able to implement. In Go, I did it with stack introspection once; there are other ways to do it.
E
If you have a lock in the context, or whatever it is (different languages do it differently), I feel like that is an SDK-specific solution that we can offer SDK implementers for this problem. Because otherwise, we're trying to adapt the sampler API, which is really meant as an API for users, not an SDK thing. So, yeah, our SDK configurability sucks; it needs way more work. You can't get to the scope that needs to be solved. But this problem I'm hearing sounds more like an anti-recursion problem.
H
I
Yeah, I should clarify: I think most SDKs have some way of dealing with this internally, but even if they don't, SDKs should not be reaching out to the API and then re-entering. Anything internal to the SDK around suppressing spans, or whatever, should be using whatever internal mechanisms they have.
E
It doesn't work; this is the problem. You are implementing a thing that calls gRPC; gRPC is calling some other logging library; that other logging library has a hook that calls your library, and now you're back. Any time you have a hookable, pluggable, configurable SDK, you're going to end up with this danger. So you shouldn't be doing that, but it is absolutely a danger in the real world, and I don't think we should use a sampler API to prevent it.
H
E
Like, for example, in a Go context, I would put my own internal variable into the context, and it would say: I am now OTel, in my SDK path; I may not issue another SDK call, because of recursion. So I put my own marker in there, and I check my own marker when I'm entering myself, and then I can prevent recursion, and that's all I need.
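The context marker just described can be sketched in Python along these lines. This is illustrative only; a real SDK would carry the marker on its own context type, and the names here are assumptions.

```python
# Sketch of the anti-recursion marker described above: the exporter sets
# a flag before calling out, and any re-entrant call sees the flag and
# becomes a no-op instead of recursing forever.

from contextvars import ContextVar

_in_sdk = ContextVar("in_sdk", default=False)
exported = []

def export(batch):
    if _in_sdk.get():
        return  # already on our own path: refuse to re-enter
    token = _in_sdk.set(True)
    try:
        send(batch)
    finally:
        _in_sdk.reset(token)

def send(batch):
    exported.append(list(batch))
    # Pretend an instrumented transport hook fires back into the SDK
    # here; the marker turns the nested call into a no-op.
    export(["nested:" + item for item in batch])
```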
E
Is there a case that we're talking about where I'm preventing instrumentation for a third party that might happen in my context? Because that's what samplers are better at; they're just terrible, for now...
H
I just think we should parse out the issues between what SDKs are trying to do internally and what kind of tool we would give our end users. It might inform the tools we give our end users. Maybe we...
D
H
...want to give our end users some way to avoid recursion, or we should be checking for recursion in general. But I think this mechanism is pretty simple, right? We're saying: shove something into the context that says, "hey, stop tracing; nothing below this point traces," and then we have another thing that can go in there and pull that flag off, and we're done. I could see SDKs wanting to use that internally too, but we should...
A
E
...keep that. That's a good point, Chad, and I will say: I'm speaking from an example that I remember doing six or seven years ago, and it is nice to integrate this with the observability SDK. What I said before made it sound like it didn't need to be; it's just a device, an anti-recursion device. But the thing is, I said I did it with stack introspection.
E
So, at the moment when I'm checking my stack, I'm already producing a telemetry event. I'm going to use that stack for my telemetry event; I'm going to use that stack to check my sampler, whether I should record this telemetry event. I needed to capture the stack first, and that's the ideal moment to check for recursion: am I on my own stack? I shouldn't go in if I'm...
E
...if I'm already on my own stack. And, by the way, here's the stack that produced the error, or here's the stack that was going to be the event. So you can save a lot of money by, you know, taking the stack once, doing the anti-recursion check, and then using it for the event; like, my log needs to have a stack trace attached, and that's the event right there, yeah.
H
That makes sense, but I think the API we're talking about is for situations where there is no recursion going on, right? The end user is just trying to suppress a health check or some particular noisy path, and doing it with samplers is annoying, and it would be a two-liner to do it in code. Can you please just give me the ability to shut...
H
...off right here, until I tell you to turn it back on again? I can see the value of that. I also understand Anthony's concern that it could be a foot gun. The fact that it already exists in several languages, and appears to be okay, implies to me that it's acceptable. And actually, you know, I wouldn't say upset, but I do get a little angsty when I find out languages have been adding things to the API without circling back to the spec group.
I
Scope doesn't always help, even if that's available in the sampler, because in the job-queue example, it's the database library that's making these calls that they don't want the spans of. So the instrumentation library would be the database instrumentation library, not the job queue, so you don't have the parent instrumentation.
G
In the job-queue example, it sounds like maybe you're missing a span collection for the job-queue execution itself, because then you could apply the sampler at that level. We've had users raise that issue in Java before.
D
H
I think this is worthwhile. I think my request, to kind of resolve it, Tristan, is: if you could get maintainers from a couple of different languages that don't have this yet to review it and chime in. Go...
C
H
...my request would be: let's just make sure we can put it in the spec and have every language agree that this is a good way to do it. Java and Go come to mind; I don't know if you want to reach out to Python as well, but that seems like the quickest path to getting it resolved.
H
Yeah, I mean, we technically already have a couple of prototypes, because, you know, several languages have it. But yeah, if someone could do a quick pass at this in Go and Java, just because they're a bit different from these other languages where we have it already, that to me would be fine. That would be enough.
B
Yeah, and that's part of my hesitation, or my only hesitation, on this: in Java we haven't needed this, and we have quite a lot of users of our instrumentation.
H
...we haven't discussed this fully, but I do feel like we kind of have a rule that it's okay for implementations to choose not to implement a portion of the spec. Implementing the spec differently, like coming up with a totally different tracing implementation, is the thing we want to avoid; we want consistency. But I think it's okay for an implementation to just hold off on something, if they're not sure about its utility.
H
So maybe that's a middle ground, but I would still be interested to know: if you were to implement this in Java, would this design be acceptable, or are you going to run into trouble?
B
I think it's pretty straightforward: throw something in the context and check it. I linked in the chat here the OTEP that Ludmila had opened. We have implemented this, not in the Java API, but in the Java instrumentation API, and we have needed that, and we have found it useful.
B
So I was kind of curious, and I don't want to hold up the PR for general suppression, but whether it makes sense to combine the two: for example, putting one thing in the context that is a structure, something that tells you about suppression, more than just a single boolean flag.
B
That's the point of that thing, yeah. And suppressing, like, optionally: if you want to suppress nested client spans, since some backends don't really support nested client spans well, or if you want all the nested client spans.
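The richer, structured suppression floated here, more than a single boolean, could be sketched like this. Purely illustrative; the field names and span-kind strings are assumptions, not the Java instrumentation API.

```python
# Sketch: a suppression structure carried in context instead of a bare
# flag, so policies like "suppress nested CLIENT spans" can sit
# alongside a blanket "suppress all tracing".

from dataclasses import dataclass

@dataclass(frozen=True)
class SuppressionPolicy:
    all_spans: bool = False
    nested_client_spans: bool = False

    def should_suppress(self, span_kind, parent_kind=None):
        if self.all_spans:
            return True
        # Suppress a CLIENT span nested under another CLIENT span, for
        # backends that render nested client spans poorly.
        return (self.nested_client_spans
                and span_kind == "CLIENT"
                and parent_kind == "CLIENT")
```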
H
...than this one. So I think that's kind of the feedback I'm looking for with this thing. It seems like it has grown in a grassroots style in multiple languages, but perhaps in slightly different ways, and it would be kind of nice to just have it officially in the API, and then have that be what you use in Java. But not if what we're proposing here would be insufficient.
C
H
We can get some spec maintainer attention on it, but that would be my next question: maybe, Tristan, if you can, and maybe the people who have already implemented this in JavaScript and Ruby, could have a look at the more advanced version that Java is proposing, and see if that's overly complex, or if you have concerns about exposing more details.
I
Yeah, yeah, I'll take a look at this and bring it up to people in Ruby and JavaScript: whether this could be a replacement for what they have right now.
J
A
That's not good; instrumentation is using a convention. So basically we're saying there's a flag from the SDK, and that flag has a special name, and if you want to suppress things, you can use that name. Because the instrumentation authors are a smaller set, that seems sufficient, versus giving that API to all the application users, where you'd worry about what they're going to do, because that flag might cause trouble, which can be very hard to troubleshoot.
A
They could, but it makes it very hard for application developers to use it. So it's basically not a very welcoming interface for application developers; but for instrumentation, because you already chose the hard way, you can.
H
A
D
H
A
I
C
Okay, are there any more comments? We have only three minutes left in the call. Besides following up with the Williamson tab, and this PR, and Riley's answers, is there anything else to discuss or to mention quickly?
C
Okay, there are no more announcements or questions or issues. Thanks so much for coming. We are getting two minutes back, de facto. Stay safe, and see you next time.