From YouTube: 2023-01-31 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
C
Thanks. Can I share? Oh yeah, thanks Sue. So we have just relaunched, for I think the third time now, the HTTP semantic conventions working group, and we're following Ted's newly proposed semantic convention process of trying to expedite this work.
C
This working group just started yesterday, meeting three times a week for the next six weeks to try to finish up the remaining issues. After that, the community will have four weeks to review and provide feedback, and after that the working group and spec approvers will have two weeks to clean up and merge the final PRs, after which the goal is for the HTTP semantic conventions to be stable.
C
Since it's our first semantic convention going stable, it's still very tied to the bigger-picture question of what stability even means, which the semantic convention working group is working through. So we've got two boards here: one specifically for the HTTP things we're working through, and one for general semantic-convention stability items, several of which are required.
C
What I wanted to ask this group is whether we can use some time from this group each week for the next six weeks to draw attention to and discuss a few issues in each meeting, to increase community feedback, since HTTP is something that everybody on this call probably lives and breathes, and to avoid surprises during the four-week review period.
B
I feel it makes sense. If you scroll up a little bit, there's already a five-minute time box. My suggestion is: maybe we can reduce the metrics topics to 10 minutes and increase the instrumentation slot to 15.
C
Cool. So I threw a few issues on here that I wanted to get some eyes on, and to gather any feedback you all have. One is changing the HTTP span name recommendation, which of course would be a breaking change; we could do it, but that doesn't mean we have to.
C
It
would
be
adding
the
HTTP
method
in
front
of
the
route
for
what
it's
worth
in
our
product,
our
backend
product.
This
is
how
we
display
things
to
users
anyway,
so
I
do
think
it's
I
think
it's
a
good
change.
C
And then, kind of as a corollary: if we're going to change that, our current default when no route is available is something like "HTTP GET" or "HTTP POST", and I was thinking it may make sense to remove the "HTTP" piece and just use "GET", from a symmetry perspective.
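The rule being proposed can be sketched in a few lines. This is an illustrative helper, not code from any SDK: the span name becomes "{method} {route}" when a route is known, and falls back to just the method otherwise.

```python
from typing import Optional

def span_name(method: str, route: Optional[str]) -> str:
    """Synthesize an HTTP server span name from attributes, per the
    proposal above: method plus route when available, otherwise just
    the method with no "HTTP " prefix."""
    return f"{method} {route}" if route else method
```

For example, `span_name("GET", "/users/{id}")` yields `"GET /users/{id}"`, and `span_name("POST", None)` yields `"POST"`.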
E
These are very sound changes and a great example of why we need to do a pass through all of these conventions before we mark them stable. Adding the method in front is actually valuable, because you're potentially differentiating between different actions on the same route. But at the same time, I think there's that broader question.
E
How
much
do
we
want
to
go
breaking
things
so
I'm
not
saying
we
should
not
break
things,
because
this
is
a
great
example
of
like
a
small
but
clearly
valuable
change,
but
since
you're,
all
the
the
first
one
through
through
the
process.
E
I
think
this
is
I.
Don't
know
if
there
is
like
a
strong
answer
other
than
looking
at
our
schema
translations,
and
maybe
seeing
if
this
is
the
kind
of
thing
that
we
can
handle
with
a
schema
translation,
maybe
prototyping
out,
maybe
trying
to
be
the
first
one
through
that
process
as
well.
All.
D
I wanted to call this out. Ted, I totally agree with everything you said. It's interesting that you are completely synthesizing your span name from your attributes, so I think there is an opportunity here for us to have a component that would synthesize span names from attributes as part of a schema transition. But I don't know if that is going to apply globally across all span names.
D
It's really an interesting problem, but at a high level I want to say I feel like this kind of breakage, this kind of consistency, is merited.
D
This is one of those huge usability gains from span names that we want to figure out a way to enable. So whatever you need from schemas and schema translations, if that's the way to solve it, or if we're willing to just allow it to break because of the gains that we get and because these are not marked stable yet, I could go either way here.
D
You
know,
like
I,
think
we
should
avoid
breaking
for
breakage's
sake,
but
when
there's
a
huge
usability
gain
like
what
I'm
seeing
with
the
span
name
change,
it's
it
could
it's
it's
worth
it
for
us
to
evaluate
weather
breaking
now
before
marking
stable
makes
sense.
If
we
can
backfill
that
with
schema
related
fixes,
that'd
be
useful,
the
problem
is
I,
don't
think
we
can
make
a
forward
backward
translator
for
what
the
previous
name
used
to
be
right
because
schema
names
were
not
accounted
for
and
I.
C
Obviously the potential breakage sucks and needs to be called out, and it's worth some further discussion in the working group about whether there's anything we can do from a schema translation perspective, but that wouldn't necessarily block the change. Thank you.
G
I think that's fair. I have a couple of questions, which I apologize for if they're obvious. The first is: is it explicit anywhere that the span name is stable and expected to be stable? Is that explicit anywhere in the spec?
F
The semantic conventions document has a header on its stability, and the guideline for how the name is composed is part of that specification, even though it's not defined in the YAML, because we haven't found a neat way to define it with a templating language or something like what was just proposed. But it's part of the document, so I think if the document says stable at the top, then that also applies to the span name itself.
G
Okay, yeah. My second question doesn't apply specifically to HTTP, but in your proposed change you're removing "HTTP" and having only the verb and the route, and then, if the route isn't available, it would be only the verb.
G
I think we should decide, from a global policy perspective, whether we prefer to have the instrumentation type in the name, like "HTTP". Because if you had a span that was just named "DELETE", for example, or "GET", it might be difficult to distinguish it from equivalent spans from different instrumentations, like Redis, for example.
C
Yeah, the products I've seen generally use the span type: they look at the HTTP convention and display an icon or some other differentiator for HTTP versus database.
G
That's all well and good for HTTP, I guess, but for span types that are not recognized by backends it makes it a lot harder, and for tools like Jaeger, for example, which don't do any of that type of categorization, it definitely makes it harder.
F
I agree with Dan that it would make sense to have some convention there. I know that we don't have any such prefix for RPC; I don't think we have it for databases, but I don't know that part right now. We should decide on either having it everywhere or probably nowhere, to be consistent, and then not only follow this in individual semantic conventions, but also make it part of the span name guidelines, which we already have.
D
I don't want to get too bogged down with this particular issue, but this is what we're running into. If you remember, there was a talk about having a type for a span, almost a type system, where you'd have database and RPC. The question I would have is: should it be "grpc"? Should it be "rpc"? Should it be the name of the RPC framework? Same with database.
D
Should
it
be
postgres,
should
it
be
DB
and
I
believe
we
kind
of
tabled
that
discussion
we're
going
with
kind
of
a
structural
type
system?
If
you
will,
where
the
attributes
actually
tell
you
a
whole
lot
about
what
type
of
span
it
is
that
way,
you
know
if
you're
writing
an
HTTP
span
and
there's
also
a
semantic
invention
about
a
particular
HTTP
framework
that
you
want
to
include
with
additional
attributes
about
things
that
framework
provides.
D
You
can
just
append
them
to
the
span
right
so
spin
name,
unfortunately,
is
one
and
done
it's
not
an
aggregate.
It's
not
a
thing
where
we
can
like
pen
that
stuff
to
it
and
what
I
don't
want
to
do
is
have
that
debate
about
making
sure
a
type
is
in
the
span
and
then
what
type
does
that
need
to
be
so
because
of
that?
I
think
that
it's
fair
for
us
to
not
require
the
beginning
thing
there
and
I
think
generally.
That
discussion
needs
to
be.
D
We
probably
need
more
time
for
it,
so
I
don't
want
to
like
waste
everyone's
time
here.
I
just
want
to
call
out
some
of
my
concerns.
Right
now,
with
spending
right
span.
Name
is
one
and
done
you
get
one
per
spending.
If
we
have
a
bunch
of
semantic
conventions
that
apply
to
it,
I
would
like
for
all
of
them
to
be
kind
of
additive,
so
we
don't
end
up
with
lots
of
duplicate
spans
just
to
annotate
types.
That
would
be
very
unfortunate.
G
Yeah, I agree there. And then my final point is, I guess, kind of a follow-up to my first one. I asked whether it's explicit that span name is stable; the follow-up to that is: should it be?
D
Given even our span-based metrics that we produce inside the collector, I feel like span name is too important to not have it be stable.
C
I'm going to skip ahead and cherry-pick this one for time, because I opened it. This was an actual problem customers reported: their Java app sits behind a web server that's terminating SSL, and in the Java app we report the scheme as "http", and they think "oh, I'm unencrypted".
C
But
it's
also
I
completely
understand
that
it's
it
would
be.
You
know
it's
also
not
accurate,
like
it's
also
accurate
that
it
is
did
receive
on
the
Java
process
over
http.
C
Cool. And in the interest of time, Ludmila is here, yeah.
C
Can you present this issue?
J
Yeah, so we used to have http.flavor, but we've since introduced a couple of generic attributes to describe protocol name and version, and we should probably try to minimize the number of attributes we have and refer to something generic. Here we can have the protocol name set to "http", "spdy", or anything else. "http" should be the default, so we shouldn't set it if it's HTTP, and we can also apply some defaults to the version.
G
From a style standpoint, I guess, "flavor" is not exactly a technical term.
G
I see somebody thumbed down on this change. That was Riley; is he on the call to explain why?
B
I think there are some inconsistencies. For example, we also have http.scheme, where, if you look at Elastic Common Schema, they simply refer to the URI scheme, so it applies to http, https, and also ftp or anything else. I feel we're very inconsistent here. I'm supportive of making it more general, but there's a higher-order bit: I think we should make it consistent. So if we're going to make this change, I would suggest we also change http.scheme to a URI scheme attribute or something.
J
Yeah, we don't have a schema transformation for this either.
E
That was going to be my one comment. It's just: yeah, please. I think this is a good test run for our schema translation tools, and I expect we're going to run into a lot of edge cases.
D
Yeah, anytime you need a schema translation, open a bug for what you're looking for, and then we'll try to take that on and make sure that we can accommodate it and get it into the spec as needed. It's also possible, like I said before, that if some of these changes are either effectively innocuous or really high user value, we don't need the schema translation for this initial jump. That bar is completely subjective, though, but I think it applies to some of the things being proposed.
C
In the interest of time, because I know we're over our time box: I took the changes that James had made.
C
The big change to the build tool to allow metric YAML files. I wanted to ask if this would be acceptable as a PR for now. I know there are some future changes we want to make to the build tooling, to avoid unnecessary repetition in the YAML or to have the markdown better formatted, but I would kind of like to be able to move forward with this and then come back with another pass after we have the tooling support.
D
You
know
I
think,
that's
exactly
the
intention
of
what
we
wanted.
The
the
changes
to
how
you
generate
tables
might
give
you
more
options
for
what
you
can
put
in
the
markdown
to
synthesize
new
table
shapes
and
layouts
and
things,
but
it
shouldn't
change
the
yaml
itself
and
the
yaml
itself
is
what
we're
going
after.
So
please
make
the
yaml,
because
we
need
that,
so
we
can
do
code,
gen
and
other
things
to
get
the
instrumentation
kind
of
end
to.
C
Cool, I'm going to mark this as non-draft, then, because it does have the YAML.
K
Thank you so much. Okay, in that case, for the sake of time, let's jump right to the next one, which is about metrics from spans. Thomas, are you around? Yeah.
L
Hey, should I share my screen? Is that going to make this easier?
L
I'll just share the issue there. So this is kind of a proposal, a suggestion. There's a use case that we're trying to solve, and we've listed our requirements there. Basically, we want to generate key metrics for customer applications, and we want to generate them from the OTel auto-instrumentation spans. There's a degree of flexibility with the metrics that we want to have. We want to generate the metrics from 100% of the spans, regardless of the sampling decision, and that's kind of the salient point here, but we still want span sampling rules to be respected with respect to exporting and propagating sampling decisions.
L
The
the
sampling
decision
is
embedded
into
the
span,
but
then
100
of
spans
gets
sent
to
the
collector
where
the
decision
to
detects
to
export
is
actually
executed.
So
this
is
referred
to
in
that.
That
says,
deferred
sampling
and
you've
kind
of
kept.
That
name
and
then
so.
When
you
have
100
of
fans
sent
to
the
collector,
you
could
use
something
like
the
Spanish
processor
or
so
I
like
it
to
export
your
desired
or
create
an
expert,
exact
metrics.
L
The
second
solution
is
to
actually
have
a
processor
within
the
sdks
themselves,
so
you
basically
instead
of
dropping
these
fans,
you
rely
on
the
record
only
second
decision
and
then
have
your
spam
processor
within
the
SDK
extracts
and
exports
metrics,
but
the
spans
never
leave
yes,
okay,
so
those
are
our
two
kind
of
ways
that
we've
caught
with
that
bring
this
up
with
the
hotel
community,
because
you
know,
obviously
this
is
something
that
we
would
want
to.
K
Can I ask some questions? I guess the first question that I would like to ask, just to clarify things, is: you want to count how many spans you have, even if they were sampled out, correct?
D
Go ahead. So, another way to phrase what Riley's asking: your option number two basically involves no spec changes, really, because it's something that you could implement today by making an aggregate sampler and making a span processor in contrib. In doing that, I would suggest you be a little more aggressive with your ask here.
D
Supportive
of
that,
however,
what
you
don't
want
to
do
is
have
the
existing
behavior
of
a
sampler
and
the
existing
behavior
of
a
processor
somehow
get
affected
by
that,
and
the
issue
now
is:
if
you
read
the
specification
for
how
spans
are
generated,
when
the
sampler
makes
a
decision,
it
kind
of
prevents
processors
from
seeing
spans
a
little
bit
right,
which
means
you
can't
use
just
a
simple
span
processor
to
calculate
metrics,
because
you
won't
get
access
to
all
of
them.
D
So
I
would
argue
that
you
might
want
us,
like
a
hook
here,
for
this
particular
use
case,
which
allows
Samplers
to
still
do
their
job,
which
makes
sure
that
we
are
not
in
record
only
because
we
don't
want
to
accidentally
write
these
traces
right
and
then
allows
you
to
have
some
hook
that
that
that
will
be
able
to
make
metrics
from
all
possible
spans.
That's
it's
not
quite
a
span
processor
and
it's
not
quite
a
sampler.
It's
like
a
little
bit
of
both.
D
It's
like
something
that
would
live
kind
of
between
them,
but
I
think
I
would
go
a
little
more
aggressive
because
right
now,
what
you're
proposing
for
users
is.
They
would
have
to
do
two
things
to
have
correct
behavior
and
if
they
only
do
one
of
them,
they
get
poor,
Behavior
right
because
suddenly
all
spans
would
be
recorded,
and
that
seems
risky.
J
I
would
add
that
there
is
one
more
spark,
a
change
that
would
need
it
even
further
kind
of
a
second
solution.
Instrumentations
can
and
should
guard
edging
attributes
is
recording
flag
or
at
the
advanced
setting
to
save
on
performance.
So,
even
if
we
can
hook
into
spans
be
for
non-exported
spans,
they
don't
have
attributes
all
the
attributes.
M
Is
may
I,
this
is
a
great
topic
for
sampling
Sig.
It
meets
every
other
Thursday
at
eight
o'clock,
Pacific
time.
I.
Don't
think
we
should
take
this
meeting's
time
any
longer
to
talk
about
the
details,
but
there
have
been
proposals
on
how
to
do
probability,
sampling
as
well,
and
when
I
was
making
those
proposals
and
working
out
those
drafts.
We
discovered
all
kinds
of
implementations
in
the
sampler
API,
which
are
being
discussed
more
or
less
right
now
and
I
I
could
say
more
in
the
sampling
Sig.
M
The
the
parent-based
sampler
composition
mechanism
makes
it
really
hard
to
get
a
probability
sampler
speaking
as
a
vendor,
I
would
prefer
to
see
probability
sampling
that
we
can
accurately
count
and
there's
a
big
spec
on
that,
so
that
we're
not
sending
spans
to
The
Collector
after
the
decisions
been
made.
I
don't
think
that
that
is
a
good
idea.
Performance
like
trade,
so
I
would
prefer
to
see
us
pursue
the
probability
of
sampling,
work
and
I'd.
M
Invite
you
to
come
talk
about
how
we
could
fix
the
sampler
API
in
the
sampling
seg
week
and
a
half
from
now.
M
It's
not
this
week,
I
also
be
glad
to
swap
meet.
If
you
want
to
talk
about
sampling,
I
can
fill
you
in
on
some
of
the
context.
There's
also
an
issue
that
I
I
can
dig
up
that
describes
the
limitation
of
the
sampler
API.
E
Cool, thanks. I just want to quickly plus-one what Josh was saying, which is that this should be a first-class citizen in OpenTelemetry. Part of the value-add of OpenTelemetry is that we're providing you this combined stream of telemetry, allowing you to do these kinds of things while also making sampling coherent, and not a footgun that you can screw up. So I think it's worth it.
M
If there's general interest in this topic, I feel like what needs to be said is that the Jaeger remote sampling configuration is something of a model for us, but it doesn't actually support the type of probability sampling that I, as a vendor, am interested in. So there's a desire out there, and I'm aware of it and I've felt it, but it's not being focused on by really anybody in the community, and the desire is to get first-class support into the sampler API in a coherent way.
M
I'd love to work on it in the coming year if there's interest and I get support from my vendor, but this is definitely an area of interest and something we can talk about.
N
Can I say one more thing? I don't think what he needs is what you are talking about, Josh. I feel like you are trying to solve a much more complicated sampling problem, and he just needs to make sure that he measures latency and a couple of other things for every span, and doesn't necessarily export them: just keep it local and calculate the metrics out of that. Am I confusing things here? Because I think you are looking for more complicated, more complex sampling.
M
Can you track latency when the spans are not sampled? I think that is the question, and there will be a desire to compute metrics on 100% of spans, in which case you've got to record the span, but you don't have to sample it, and there can be a metrics integration with the tracing SDK for that. I do think you'll have a lot of interference and trouble; the documented trouble with the sampler API will hit you there as well, but I think we should discuss it in a different forum.
M
I mean, this is also running into the topic that has come up in the instrumentation messaging group about how we don't have the ability to add a span link after span start, and how the sampler is also interfering with that problem. I think these two could be worked out together. I would love to have this topic discussed in the sampling SIG.
E
It would be great just to put this all on a roadmap: how are we going to integrate metrics and tracing, from the perspective of all these different sampling issues? And just call out the simple version, having access to 100% of the data, and then roadmap the rest of it from there.
D
Hey, I do want to call out that I don't think this has to affect sampling; this is solvable if we just change the possible number of states a trace can be in, which I think is what Bogdan's solution with trace levels is probably about. If we just change the number of states a trace can be in and allow some other hook to say "I want access to all spans", then we have this new trace level, where traces can be generated and sent to this new thing without affecting any of our sampling decisions or any of the existing trace states, because if traces aren't in one of the recordable levels, they don't go out the door. I think this is entirely solvable without sampling. If we want to tie it into sampling, that's okay, but again, I don't want to inflate the issue to the point where we don't get a solution; that's my fear here.
D
So
if
we
take
it
to
the
sampling
Sig
and
we
find
out
we're
conflating,
let's
bring
it
back
and
solve
it
without
sampling.
M
I
I
agree
that
you
can
solve
this
gut
sampling
and
I
pointed
out
the
connection
to
instrumentation
messaging.
We
need
more
span
States
for
that
that
problem
as
well
and
I
think
that's
why
these
are
connected.
So
we
should.
We
should
talk
about
somewhere,
I'm,
not
quite
sure
how
level
it
is
related
to
this
topic,
but
I
I'll
stop
talking
now.
K
Okay,
for
the
sake
of
time,
we
suggest
we
just
continue
on
assistant
before,
let's
start
talking
down
into
something,
if
not
just
bring
it
back.
You
know
this
topic
back
to
the
spec
and
then
we
can
try
to
figure
that
out
without
something.
K
Thank
you
so
much
okay.
We
only
have
17
minutes.
So
let's
try
to
see
what
we
can
cover.
So
next
one
we
know
DV
statements
default
sanity
session
and
uniform
format
you
around
have
you
know
I'm.
K
I
guess
he's
not,
but
basically
there's
an
open
PR,
basically
for
a
recommendation
that
when
you
get
something
like
a
DB
statement
or
anything
like
that,
basically
you
just
sanitize
this
to
protect
private
information
basically
needs
more
eyes.
Yuri
is
not
here
but
Yuri.
Who
was
not
happy
with
the
current
approach.
K
So
actually
let
me
share
with
my
screen
again.
Probably
that's
the
easiest.
K
So
basically,
this
is
Judy's
opinion
on
this
one,
but
there
we
need
more
eyes.
I
think
that
what
he's
mentioning
is
fair.
One
of
the
things
that
he
mentions
is
that
we
don't
like.
We
cannot
just
go
and
suggest
that
this
is
done
without
specifying
how
so
Jack
I
think
was
also
mentioning
in
a
new
message
that
what
kind
of
stuff
should
we
go
for.
K
If
there
are
no
more
comments,
please
review
that
one
I
think
it's
an
important
one
gallery.
Does
that
we
talk
about
this
briefly.
I
was
not
Jack.
Sorry
was
the
contributor
himself,
never
mind,
I.
Think
it's
an
important
one.
I
think
it's
a
good
one.
John
already
has
that
we
just
have
to
actually
put
something
in
the
specification.
H
Yeah
thanks
hi
everyone.
So
this
is
a
proposal.
That's
quite
old.
It
got
stuck
in
some
discussions,
but
I
haven't
seen
like
like
blockers
on
that
one.
So
just
want
to
revive
this.
The
context
is
some
of
the
spans.
Some
of
the
products
would
like
to
show
whether
there
are
like
entry
points
into
a
service
or
not,
so
that
you
need
to
know
whether,
like
the
parent
of
the
span,
is
a
remote
span
or
not.
H
So
the
proposal
is
to
at
a
field
that
that
would
expose
basically
whether
a
span
had
a
remote
parent
or
not,
and
in
the
discussions
that
there
were
some
discussions
about
potentially
also
adding
a
has
remote
parent
API
to
the
span
context.
H
I
H
It's
so
the
the
use
case
is,
if
you,
for
example,
have
a
consumer
span
for
messaging.
It
can
be
both.
It
can
be
in
an
entry
point
or
it
can
be
active
polling
done.
It's
not
an
entry
point
right,
so
it's
like
in
the
context
of
another
Trace.
So
it's
an
inner
span
basically
and
there
you
can
not
differentiate
just
based
on
what
is
exported
in
the
span
data,
whether
it's
an
entry
point
or
not,
and
the
proposal
here
is
to
add
this
as
an
explicit
field
on
the
span.
N
Not is_remote itself, because the is_remote property is a property of the context that you use for creating a child; you need to know that your parent is remote. The span context that we create has that property, but we never export a span whose own span context has the is_remote property set; only the parent's does.
H
That makes sense, yeah. I think the title is misleading; I will change it on the PR. It's actually about has-remote-parent, and we don't have a call like this on the API right now, though the information is available, because we can access the parent.
E
What
about,
rather
than
adding
another
field,
adding
another
value
to
spend
kind,
but
that
solution
work
as
well.
H
I
think
so,
whatever
would
allow
to
yeah.
E
I'm not talking about what the parent of this span is, but saying: this is essentially a client span, but it's not really a client span, so we need another name for this kind of span. Yeah.
H
Like
the
concrete
case,
we
had
with
messaging
who's
the
consumer
spans
right,
which
is
not
really
clear,
whether
it's
an
entry
point
yeah.
F
Sorry. Changing the value used for span kind, changing the mapping and meaning there, would be a breaking change, as opposed to adding an attribute. I think what you propose there is called parent-span-is-remote; it's also something to consider, because that part of the trace proto is already stable.
F
But
but
you
would
also
change
how
you
map
existing
spans
to
the
enums
right.
E
The span kinds were debated very thoroughly, and what's coming up now in the messaging group is that they're figuring out we need additional values and additional ways to describe these asynchronous traces. It's a tricky problem in general, figuring out the best way for us to model asynchronous traces, which is why I kind of agree. Alex, I think it would be great if you could just start participating in that working group, because I think we should do this holistically.
H
A more general question: we had this quite general proposal with the Elastic Common Schema, and we can also take this offline, but I've seen it is on the board, so I just want to ask how we can chime in and help out with that, or plan to actually contribute and work together.
K
I
think
we
just
need
to
get
people
who
approved
the
previous
one,
this
one
again
so
basically
like.
If
you
go
check
like
the
previous
one,
had
a
lot
of
approvals
already
to
really
then
I
really
review
that
a
Lolita
is
probably
around
much
full
rally
and
Yuri
probably
should
be
poked
directly
yeah
yeah.
E
And again, Alex: what we're trying to do this year is stabilize all of our semantic conventions, so the time to do it is as we're going through each semantic convention and having a public review period, making sure that we're in line with what ECS is defining. The thing I wonder about is that it's not been totally clear to us, from the Elastic side, how much time, energy, or effort availability there is to do that, so feel free to ping me on Slack about that and let's see if we can get it coordinated. Thanks.
K
Okay, let's continue then. The next one: Asaf Mesika, support explicitly removing attributes from an instrument API. Are you around?
P
Yeah, hi. Basically I have three issues; most of them, as you said before, need more eyes and help pushing them forward, if possible. This one I think is pretty straightforward: it's the ability to remove attributes from a specific instrument.
P
You're talking about something that is determined during initialization; I'm talking about attributes that you report dynamically. So, okay, let me explain the context. I'm working at StreamNative on Apache Pulsar, and Apache Pulsar is essentially a messaging system like Kafka. It has topics, it's distributed, so you have more than one broker, and there's load balancing that moves topics between brokers, so one topic can move from broker one to broker two. So essentially we want to stop reporting the attributes related to this topic on broker one and start reporting on broker two. To be able to do that, you need to explicitly say "I want to remove all the attribute sets related to this topic". Say you have 15 or 17 instruments for a topic; you need to be able to do that. It is available in other JVM metrics frameworks, of course, and in OpenTelemetry it's not, and I didn't see any reason why it shouldn't be. That's why I'm suggesting it would be something in the spec.
N
So
you
you
want,
you
want
to
stop
recording
a
data
point,
what
we
call
the
data
point
Time
series,
so.
P
I wouldn't call it "stale"; I would say that this topic is no longer hosted on that broker. So I wouldn't call it old, I'd call it "it's not here anymore".
I
Just to note, it's not removing attributes from an instrument; instruments don't have attributes. But it is removing something.
N
Marking
us
still,
whatever
I,
think
that's
how
we
call
it
in
the
protocol
as
still
or
or
that
you
know
no
longer
have
data
points
with
this.
P
I
I
think
I've
solved
something
related
to
that,
but
I
think
it's
not
really.
The
same
also
explain
that
in
the
issue.
D
So
one
one
thing
I
want
to
call
out
is
it's
not
necessarily
true
that
there's
a
one-to-one
mapping
between
the
metric
storage
and
an
instrument
which
is
a
I,
don't
know
if
that's
true
in
every
language,
implementation,
I,
don't
remember
if
we
actually
locked
that
down
in
the
spec
or
not,
but
it's
not
like
what
you
want
to
do.
The
fact
that
you
want
to
clean
memory
is
absolutely
a
known
issue.
There's
actually
a
whole
bunch
of
I
think
open
metric
specs
issues
around.
When
do
we
clean
up
stale
time
series?
D
How
do
we
make
sure
our
cumulative
doesn't
overrun
memory
and
that
sort
of
thing
but
I
think
yeah
this?
This
is
probably
the
number
one
thing
we
have
to
figure
out
with
metrics
around
cumulatives.
It's
probably
the
number
one
like
usability
concern
is
overall
memory
usage
and
being
able
to
like
clean
up
and
get
rid
of
unused
kind
of
components.
D
I,
like
the
idea
of
of
you
directly
telling
us
like
hey
this,
isn't
used
anymore.
You
can
go
clean
up
your
memory,
but
there's
some
complications
here.
I
think
we
do
need
to
think
through
just
as
an
FYI.
So
because
it's
not
again,
instruments
and
and
storage
are
not
one-to-one.
P
Yeah
I
I,
totally
understand
I,
think
you
mentioned
I,
guess
the
the
correlation
to
views
right,
because
that
together
ties
it
up
to
to
maybe
several
storages
right.
P
This
is
exactly
what
what
I'm
suggesting
here,
that
I
I
need:
more
people
that
have
experienced
that
and
to
steer
that
suggestion
forward
to
brainstorm
it
so
to
speak,
because
that's
like
a
a
real
need
for
it,
and
this
is
why
I
I
need
a
someone
to
help
with
that.
Yeah.
K
We
only
have
one
minute,
sadly,
by
the
way
so
I
suggests
we
continue
a
lot
of
line.
You
have
another
one
about
supporting
unit,
Improvement,
selector
or
metric
view.
Likewise,
please
review.1.
P
And
so
the
second
one?
Yes,
so
it's
actually
related
to
the
memory,
also
and
memory,
as
you
said,
of
of
matrix,
it's
a
long
one
to
explain
in
one
minute,
not
this
one.
Actually,
this
the
the
one
before
I
think
that
yeah
so
essentially
that's
a
very
hard
to
explain
in
one
minute,
but.
K
I
think
that
it's
not
enough
yeah
I,
remember
reading
about
this
one.
P
Yeah,
so
so
maybe
I
can
just
say
in
10
seconds
that
I
really
need
someone
from
the
group
to
help
with
that
I
kind
of
I
think
the
issue
has
all
the
context
and
I
can
explain
now,
but
it's
essentially
it's
related
to
minimize
memory,
consumption
to
the
lowest
possible,
with
real,
reasonable
bounds,
because
I
think
that
a
matrix
Library
should
be
not
interruptive
in
terms
of
memory
and
CPU
and
I.
Think
I
have
a
way
of
doing
it.
I
want
to
make
sure
that
it
doesn't
I
mean
the
person
in
the
Java.
K
Yeah, anyway. Sorry, by the way, for the maintainers who are still around: we are one minute past the time. There's a PR from Alan on logging, by the way, so please review that; I don't want people to be taken by surprise by it, but basically we're clarifying the logging expectations regarding errors in the SDKs. Okay, that's it. Sorry for running out of time; it went quick and we couldn't cover all the topics, so see you next time.