From YouTube: 2023-02-21 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
Okay, let's start. Thank you for joining. Let's go over the items. The first one I put there is basically about actual inconsistencies in how we specify the status codes for OTLP. Recently we clarified which codes mean an error that is supposed to be retryable, but that was only for HTTP. For gRPC there's no clear table, let's say, and the fact that there are a few tables in the issue itself speaks to the need to clarify this.
B
Hey Carlos, yep, just one point on this. A big point is also that a lot of users are complaining (I think on the server side, I may be wrong) that they're getting a lot of spans marked as error just because we only allow a return status code of zero to be successful. So anything that's not zero gets the span marked as error, and they're getting a huge number of errors. I think that's the main concern here.
B
I think you're right. There's also a little confusion as to how we should handle these errors, because there are multiple tables here, and I don't know if there's a particular one to follow. It was also pointed out that other projects have tried to do this as well, classifying these errors as error or not error.
C
Isn't this tangential to the idea of whether a gRPC status code is retryable from an OTLP standpoint? Isn't this strictly about the semantic conventions and setting the span status?
A
Partially. I think it's very related, and it's probably a good opportunity to work on these two together, but yeah, this is probably not required for the most basic need of the issue itself.
D
So I propose: gRPC is also expressed in terms of HTTP status codes, and I do not remember who suggested it, but I think the idea, as a first iteration, is to align those two span statuses. I think that's the way forward, and we can always iterate on top of that. For instance, I also heard complaints that some users, even from the client perspective, do not see, for example, NOT_FOUND as an error, because sometimes they are polling, or simply checking whether things exist.
E
I like Tyler's recommendation of making it configurable, but we still need a default, and that default can be the alignment between HTTP and gRPC that Robert is mentioning. So I think there are two parts here. The first part is that we have to figure out what the default, or the recommendation, is. Then we allow configuration; that's the second step, because I don't believe we should make it mandatory to configure things here. So we definitely have to have a default, yep.
D
By the way, I was also thinking about one thing: in the .NET instrumentation, the HTTP and gRPC instrumentations are kind of done together, there is one instrumentation library. Aaron, do you remember whether the status codes for gRPC are the same as for HTTP, or do you not remember?
A
By the way, Robert, I saw that you posted your latest attempt at a table yesterday, so that's probably good. Would you mind summarizing that proposal? Give us some insight into what's in your mind.
D
Basically, this is just checking what the transformation is between the HTTP status code and the gRPC status. I just checked what the status is for the HTTP semantic conventions and used the same for the gRPC proposal. So this is the way I tried to align it, one to one, based on HTTP, without changing HTTP, to keep them in line at least from the HTTP semantic conventions perspective.
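The one-to-one alignment described here, taking the HTTP status that gRPC's conventional HTTP mapping would produce and then applying the HTTP semantic-convention span-status rule, could be sketched roughly like this. The mapping table below follows the widely used gRPC-to-HTTP table (as in grpc-gateway) and is an illustration of the proposal, not the adopted convention:

```python
# Sketch: derive an OTel span status for a gRPC call by reusing
# the HTTP semantic-convention rule. The gRPC -> HTTP mapping is
# the commonly used one (e.g. grpc-gateway); illustrative only.
GRPC_TO_HTTP = {
    0: 200,   # OK
    3: 400,   # INVALID_ARGUMENT
    5: 404,   # NOT_FOUND
    7: 403,   # PERMISSION_DENIED
    8: 429,   # RESOURCE_EXHAUSTED
    12: 501,  # UNIMPLEMENTED
    13: 500,  # INTERNAL
    14: 503,  # UNAVAILABLE
    16: 401,  # UNAUTHENTICATED
}

def grpc_span_status(grpc_code: int, is_client: bool) -> str:
    """Apply the HTTP semconv rule to the mapped HTTP status:
    client spans treat 4xx and 5xx as ERROR; server spans treat
    only 5xx as ERROR (4xx stays UNSET)."""
    http = GRPC_TO_HTTP.get(grpc_code, 500)
    if http >= 500:
        return "ERROR"
    if http >= 400:
        return "ERROR" if is_client else "UNSET"
    return "UNSET"
```

Under this sketch, NOT_FOUND on a server span stays UNSET, which addresses the "everything non-zero is an error" complaint raised earlier.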
A
Also, the interesting thing that I see is that anything that is not OK is an error for the client. That's something to keep in mind, whether that would be good or not.
A
Unless somebody thinks otherwise, I think we should go ahead. Maybe he has an opinion on this as well. And then we should probably mention it in the maintainers meeting, since not all maintainers are here, so they're aware of it and can tell whether this could be a bad change for them or not. But yeah, I suggest we go ahead with a PR.
F
So, for issues that we wanted to raise to everybody's attention, the broader community's attention, I'm gonna turn it over. This is probably the most important one right now. Josh, is there anything in particular you want to call out for this? Yeah.
H
One thing that I want to address (I think it was Johan's comment, but it might have been someone else) is: why are we doing this?
H
So basically, one thing that we learned in the semconv group is that unless we explicitly say what will remain stable, there's lots of confusion over what stability means, and we want to be explicit. So the idea here is: we have this notion of a telemetry schema, we have semantic conventions, and we have guarantees around stability for semantic conventions.
H
If we make a change, we will use a schema to ensure that the change keeps things consistent. However, we never defined what is in scope for semantic conventions to touch, or target, or make requirements about. So this discussion is about what semantic conventions can apply to, what they will apply to, and what we will keep stable.
H
So if you depend on something, say you design a tool that relies on traces looking a particular way: what are the things that semantic conventions will give you that you can rely on, and what things won't semantic conventions really be able to cover? There are a few interesting tidbits in there, but first I just want to make sure you understand why we're doing this. It's: let's be explicit, because nowhere did we specify what is in scope and what is out of scope.
H
So this is explicitly saying what's in scope. The TL;DR is that semantic conventions can enforce attribute names and types for key attribute keys, for resources, spans, metrics, and logs.
H
If that's something we want to enforce. For example, span names: span names are kind of an open slate, but we actually are defining span name conventions for HTTP. So this defines that that name will be something you can consider stably generated by instrumentation; we won't change the recommendation for what that name looks like in semantic conventions after we go stable. Metric names and metric units are another thing.
H
That, I think, is dramatically important, given the discussion around milliseconds and seconds going on right now. Once we define that this instrument is of this unit, you can rely on that going forward, and there would have to be some kind of schema URL mechanism to tell you if that changes.
H
So this is basically what is enforced, what will remain stable, what you can rely on, and it then allows us to define schema URLs to change these things in a stable way going forward. Those are the kinds of things that are in there. There are a few nuances if you look at the specifics, and I think there are a few things to call out. Span links was the big one.
H
There are a few other things that are interesting, like span status. Span status is currently, well, it's not a boolean, it's a ternary: it's unset, error, or success.
H
One thing that's interesting is that we can provide guidance for how to fill out span status, but we really can't enforce that algorithm, and so there's actually a debate going on about whether it should be in scope as something we define in semantic conventions, or out of scope for semantic conventions. That's a good discussion that I want to call out there. Span kind, I think, we can actually have in scope: if I generate a span that I say is CLIENT, with a set of attributes,
H
I ensure that the kind will always remain CLIENT and I won't switch it to PRODUCER. I think that's something that we can enforce, so that's the current recommendation: status is not something we can really enforce, and kind is something we can enforce. There's one other discussion down further
F
that we can talk through. Yeah, on the span status: so you're saying right now in HTTP semantic conventions we do have a recommendation, for, you know, 4xx being error on the client side and unset on the server side.
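That recommendation boils down to a small asymmetric rule, which could be sketched as follows (a plain illustration of the HTTP semconv recommendation being described, not SDK code):

```python
def http_span_status(status_code: int, span_kind: str) -> str:
    """Span status per the current HTTP semconv recommendation:
    5xx is ERROR for both kinds; 4xx is ERROR only on CLIENT spans
    (a server legitimately returns 4xx, so it stays UNSET there)."""
    if status_code >= 500:
        return "ERROR"
    if status_code >= 400 and span_kind == "CLIENT":
        return "ERROR"
    return "UNSET"
```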
H
So "enforce" would mean: how do you change it, or does it never change? If we want to enforce that it would remain stable across semantic conventions, effectively I think we're locking it down to "this is exactly the mapping that we have for all time and it's not configurable." If we don't enforce it via semantic conventions, what we do is say "here's a recommendation for how to fill it out," and then we can do overrides and users can do other things or customize
H
You
know,
however,
they
want
right,
but
if
the
the
idea
here
is,
what's
the
expectation
of
someone
consuming
the
data,
can
they
make
any
reasonable
expectation
about
what
that
status
is
or
what
it
means
right,
yes
or
no,
and
so
I'm
suggesting
that
we
actually
have
it
status,
be
out
of
scope
and
we
do
our
best
to
make
sure
it's
good,
but
I'm.
I'm
I
have
two
reasons
for
this:
one
that
I
didn't
mention
in
the
previous
discussion,
because
I
think
I
don't
want
to
derail
it,
which
is
I.
H
I think collapsing status to a ternary is somewhat problematic, especially when you think about sampling and how people want to interact with status. I think giving people flexibility to understand the HTTP status code and say "I want to put 400s over here; I only want to sample 500s" is a reasonable thing to do. Those are different types of errors, 400 versus 500. I might want to sample one and not sample the other, and so collapsing them down to a ternary...
H
I still don't think we've actually figured out status well, and so I think we need room to design and discuss around it. So I don't think we're ready to stabilize or enforce status, in my opinion. In addition, I also think that, given the way things are defined today and the way we think of schema URLs: is status going to be something where we always have a mapping, where we say this attribute maps to this status?
H
Is that something that we do like we did for HTTP? I still don't think we understand that. And yes, status is not always about HTTP either; agreed, agreed. So anyway, two reasons: one is I don't feel like we're ready to enforce status, we're still figuring out what we want it to mean, and I think a ternary might not be the end result of status. I think status might end up having more in it. And then secondarily,
H
you know, if status has to be one of three values, I'm going to interact with it on that basis, but I don't really think you should be building a dependency where, if the status is error, that means HTTP is either 400 or 500 or something like that. I think you should interact with the HTTP status code directly.
F
Yeah, the part of that that made a lot of sense to me is that this is something that, as we saw from the gRPC discussion right before this, users often want to configure the meaning of.
E
They can already do that, because we have an attribute for all these free-form statuses. For example, the gRPC code we put as an attribute, and the HTTP code we put as an attribute, so they can already override the status in a processor, like in the collector, or in the backend.
H
No, I think those are the big things: this is explicitly saying what we're going to enforce with schema URLs and what semantic conventions are allowed to touch for now, and if we want to expand the scope of semantic conventions, we would need to expand this particular list. That's the idea behind this PR.
G
I have a question regarding the consequences of this PR, because there are currently parts of the semantic conventions that are not covered by what you have in the list here. What is unclear to me is what happens to those parts of the semantic conventions. Are they just not declared stable? Should they be dropped out of the semantic conventions? Should there be a separate field of recommendations that we offer?
H
Yeah, that's a good question. My suggestion right now is that those parts of the specification would become recommendations, but not enforceable semantic conventions. We can keep them marked as experimental; we can move them into their own section where it's clear that they're not enforced; but we would need to move them.
K
Isn't that just going to make things more complicated, though, if we have stuff that we think is important? There's been kind of a request to go the opposite direction right now. For example, we have semantic conventions spread out between traces, metrics, and logs, and it seems like it would be less confusing if, for each convention, we just put everything in one place, on one page for that convention.
H
Yeah, and again, span links are something I think should be on our roadmap to add support for in the tooling, because span links can have attributes, and I think we can update our tooling to define what they look like. But there's an interesting point: events attached to particular spans and links attached to particular spans are kind of fuzzy in our tooling right now, and I'd like to strengthen that first. Events, I think, are easier to deal with than links.
H
I think links are a little bit harder, and there's a little bit of tooling that needs to get improved there. I'm explicitly calling links out because I think they are not something we're prepared to enforce with the tooling we have today, but I would like to add them. Again, this list is not meant to be permanent; it's meant to be what we enforce now, what the scope of semconv is, and if we want to expand it, we need to improve the tooling along with the expansion.
F
And we found it not possible for some of the Java HTTP client libraries. We sort of found that there are two different levels of HTTP clients, at least in the Java space: high-level clients that we can instrument at the high level, but which don't give us access to low-level details like retries and redirects, and low-level ones that do give us that, but don't give us a view over the whole operation.
L
This is connected, strongly connected, to the first option. I think if you implement a low-level HTTP client instrumentation, then you should, or maybe must, emit the connect span; and in the second case, when you implement the very high-level instrumentation, you probably should do that if you can, but it's not a must.
L
If you put it that way, yes. But if we release HTTP semantic conventions without the connect span, and we implement instrumentations that strictly follow the conventions, then you will have this gap in observability where you won't get any sort of connection-error telemetry.
F
So the connection that I see between the connect span and this proposal is that today, with the second approach, where you're instrumenting the high-level thing, if you have a problem connecting to the server, your HTTP client library is going to throw some kind of exception: you're going to get an error, and you're going to capture that error span for that HTTP client.
L
In some HTTP clients that I've looked through or instrumented, the connection phase happens entirely before anything in the HTTP request path. So instrumentation that just hooks itself onto the sending-the-request and receiving-the-response phase won't even notice that something before it has failed, and it won't register a failure at all.
F
And so it's at the low level where we can instrument the request and response loop.
F
But
that
happens
after
the
connection
was
established
once
the
channel
is
available,
and
so
we
are
able
to
get
every
request
and
response,
which
is
great.
We
can
implement
the
HTTP
resend
spec
and
you
can
get
that
more
fine-grained
level
of
observability,
but
then
we're
not
sure
where
to
where
how
to
elevate
to
the
user.
L
Go ahead. Oh no, I just wanted to add that if we were to implement the resend spec in more HTTP clients, this same issue would probably surface in other places, and I'm pretty sure that the OkHttp instrumentation rewrite, which I have recently started, will also exhibit the same thing.
E
So for me it's the following: first of all, the connect span does not belong to a trace. Most likely the connect span is a standalone operation that is orthogonal to it, because in a modern world with microservices and everything, you usually make a connection to a service and reuse that connection multiple times. Because of that, you don't say that this connect span belongs to one trace. So how do you know if something failed, in the case of Netty, say?
L
An HTTP span will probably fail if the client tries to send over a broken connection; then a new connect span is made, because a new connection is made, and then you get a resend request, which we will have a counter for.
F
So I think maybe we're making an assumption about the connection pool, because Bogdan's point is: we're pulling from the connection pool; what if that has already failed, and at that point we're not getting something from the pool? Bogdan, I think we're making an assumption in our instrumentation that if Netty can't get something from the pool, it is going to try to make a fresh connection, and we're going to get that failed connect span.
E
And I think the connect span kind of does not belong to a request, in my opinion. If you have a connection pool, the connect span... I mean, some traces may encounter some latency because the connection happens at almost the same time, but usually the way it works is...
F
So, in the interest of time here, it sounds like maybe Mateusz can provide more specific examples, with a few different Java libraries, of what the common usage is, and then talk through that with Bogdan.
E
Yeah, and maybe ask some other languages, because Java is in general one way, but there are .NET people, maybe Python, Go, whatever. Ask at least one more language.
F
All right, thank you. I think the last one, just one more interesting one, because we discussed it last week and I've been thinking more about the seconds and milliseconds, and getting cold feet about the buckets not matching the default duration in OpenTelemetry. I know we discussed this hint API, and I think the hint API is great and actually solves the problem for our instrumentation, because I'm fine with our instrumentation specifying the default buckets everywhere.
F
But
My
worry
is
users
of
the
metric
API
emitting
their
own
timed
events
and
if
I'm
worried
that
they're
gonna
think
you
know
seconds
is
the
open,
Telemetry
default
duration
and
they're
going
to
run
into
this
problem
a
lot
with
the
default
buckets
not
matching
or
we're
gonna
in
our
documentation.
We're
gonna
have
big,
bold
letters,
copy
paste,
this
hint
API
line
everywhere
that
you
emit
them
and
that
doesn't
feel
like
the
most
ideal
outcome.
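The mismatch being worried about is easy to see with the default explicit boundaries, which implicitly assume milliseconds. A plain-Python illustration (the boundary list is one commonly cited SDK default; the bucketing function is a stand-in for the SDK's aggregation, not SDK code):

```python
# One commonly cited set of default histogram boundaries, which
# implicitly assumes durations are recorded in milliseconds.
DEFAULT_BOUNDARIES = [0, 5, 10, 25, 50, 75, 100, 250, 500, 1000]

def bucket_index(value, boundaries):
    """Index of the bucket a measurement falls into (plain-Python
    stand-in for the SDK's explicit-bucket aggregation)."""
    for i, bound in enumerate(boundaries):
        if value <= bound:
            return i
    return len(boundaries)

# A 120 ms request recorded in milliseconds lands mid-range,
# but the same request recorded in seconds (0.12) collapses into
# the lowest bucket, as would almost any realistic latency.
ms_bucket = bucket_index(120, DEFAULT_BOUNDARIES)
s_bucket = bucket_index(0.12, DEFAULT_BOUNDARIES)
```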
D
Say it's milliseconds. Just one thing I want to say: in the specification it is said that the duration default unit is milliseconds, and it is stable.
F
For this one, I feel like we could also potentially add "ms" for milliseconds.
D
Yeah, I think I once created an issue: it is not specified, but it is acceptable to default the unit. If it's not present, we say that it is milliseconds. It is not documented, but I think there's an issue, and everyone agrees that it is okay to put the unit.
F
All right, in the interest of time, I think that's at least a new idea to chew through, or I'll add that comment, Josh, here if you don't mind.
C
One quick addition to this conversation: metric readers have the ability to set the default aggregation by instrument type. We could have the Prometheus exporter choose bucket boundaries for its explicit-bucket histogram that align with the Prometheus defaults... and no, that won't work, because then the measurements would still be coming in in seconds. Sorry, ignore that, that doesn't work.
C
Yeah, they're already aggregated by the time they get to the Prometheus reader, so it'd be a lossy conversion; maybe not for the explicit buckets, but for the exponential ones. And we've gone down that route.
E
All right. Oh, I think, Josh, based on your idea, we can extend views to apply to units as well, and then allow selecting an instrument using the unit of the instrument, and then we can define default views for seconds, for milliseconds, and so on. I think that's kind of the path I was getting from you. Okay.
H
It's a good idea. Allowing units in views is approved, I believe; if you take a look, that's another PR that's open right now. It has no negative comments and it has enough approvals, but we're still waiting to merge it. So I think that's in line with this.
E
We're not going to convert anything: the person that creates an instrument with seconds as the unit will record seconds, so they will be good; the person who instruments with milliseconds will record milliseconds. Now the thing is, the final aggregation will pick the right default based on the unit selection.
E
So even though for HTTP latency I recorded seconds and for gRPC latency I recorded milliseconds, I will have different buckets, one set scaled by a factor of 1000, because I didn't do any calculation; I just had two different views installed, one saying "if the unit is milliseconds, select these buckets" and one saying "if the unit is seconds, select these other buckets." I didn't do any unit conversion; I just allowed myself to configure two different defaults, one that applies to seconds instruments and one that applies to milliseconds instruments.
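The unit-based selection being described, two defaults keyed off the instrument's declared unit with no value conversion, could be sketched like this (a plain-Python stand-in; the names and the view machinery are illustrative assumptions, not the actual SDK API):

```python
# Hypothetical sketch of unit-based default views: each entry
# matches on an instrument's declared unit and supplies bucket
# boundaries expressed in that unit. No values are converted.
SECONDS_BOUNDARIES = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0]
MILLIS_BOUNDARIES = [5, 10, 25, 50, 100, 250, 500, 1000]

UNIT_VIEWS = {"s": SECONDS_BOUNDARIES, "ms": MILLIS_BOUNDARIES}

def boundaries_for(instrument_unit: str) -> list:
    """Select default histogram boundaries by instrument unit,
    falling back to the millisecond defaults."""
    return UNIT_VIEWS.get(instrument_unit, MILLIS_BOUNDARIES)
```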
H
Just to make sure: you made it better, just FYI. I'll write that down in the PR, because I think that all ties together pretty nicely.
C
Everyone should still take a look at the proposal to extend the histogram API to allow users to give advice, or hints, about the default buckets. I still think that's a useful thing regardless of where this discussion goes, so take a look at that. It's linked in the... well, I'll link it in the document.
A
Thank you so much for that. Okay, for the sake of time, let's move on. Tiff, you're next. Yeah.
K
Just real quick, for Trask and other semantic-conventions people who are pushing on stability: the last conversation really reminded me that we have a prototyping requirement for spec changes, and it tends to be "have a prototype in a couple of different languages," because usually we're prototyping SDKs and things like that. What I've noticed with semantic conventions, though, is that that is not necessarily something that maps well to flushing out all of the issues.
K
So we might want to rethink, or redefine, what those prototyping requirements should be when we're marking something as stable for semantic conventions.
K
Does that make sense? We want to make sure we're not just checking a random HTTP library in different languages; we want to ask what the different kinds of instrumentation are that we have here, and make sure that we're checking those kinds of things. It would be great if we could be a little more specific in that guidance.
F
I think I might try to summarize that as: prototyping should make sure to hit every part of the spec, because I think that's sort of what happened with HTTP. We didn't really prototype, or at least in Java we hadn't really tried to do, the resend-spec stuff. It wasn't until Mateusz started tackling the resend spec that we ran into these issues.
K
The last time, when we had an HTTP group and those spec changes were written, there was some prototyping that went on, and it just didn't catch any of this stuff.
K
All I'm noting is: we want to make sure that we aren't having this discussion on the other side of having marked some convention stable. Our current plan is to mark it stable, then go update all the instrumentation, but there's some inherent risk there: what happens if we start doing that and go "oh crap, there's this whole class of stuff"?
K
Maybe the solution there is to have a kind of release-candidate version and not actually mark it stable until we've done a migration of a good chunk of instrumentation. So anyway, the last conversation just reminded me of that. This is maybe a flaw in our current plan for stabilizing instrumentation, which is to port everything after we've marked it stable.
K
Okay, I can open an issue for that, no problem.
A
Thanks so much for that. Okay, if that's it, let's discuss that in the issue itself. Next one: Alex, you're around? Has-remote-parent.
J
Yeah, thanks. I want to revisit this proposal. I think I brought this up a few weeks ago and then also discussed it with the messaging SIG.
J
We had some more input there, so it seems there's actually some interest in having this feature: exposing has-remote-parent on the span itself, and also adding this to the API.
J
There has been some discussion (I'll share my screen) about whether span kind would be a good candidate to encode this information, but it turns out not to be suitable for that, because, especially for producers and consumers in messaging, this property of whether the parent is remote or local isn't really specified there. So what I'm looking for is basically: I think there's a lot of interest and not really much pushback on this one.
J
So if we could get some more eyes on that, and approvals for this one. There's a related issue regarding span flags, so that could be a way to implement this, to encode this information as a flag. I think it was a proposal from Tigran back then.
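The span-flags variant amounts to reserving a bit that says "the parent is remote." A sketch of the mechanics (the bit position is an illustrative assumption, not the one from the actual proposal):

```python
# Hypothetical bit position for a "has remote parent" flag packed
# into an integer span-flags field.
HAS_REMOTE_PARENT_FLAG = 0x100

def set_remote_parent(flags: int, is_remote: bool) -> int:
    """Set or clear the remote-parent bit without touching the
    other flag bits."""
    if is_remote:
        return flags | HAS_REMOTE_PARENT_FLAG
    return flags & ~HAS_REMOTE_PARENT_FLAG

def has_remote_parent(flags: int) -> bool:
    return bool(flags & HAS_REMOTE_PARENT_FLAG)
```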
K
I mean, just a vague thought: it seems a little unfortunate to have both this and span kind, because it does kind of seem like span kind should cover this. But if we can't, if span kind is too stable to go in and clarify this, I feel like my runner-up would be to add something that adheres more directly to what's in the W3C spec around this stuff, rather than tack on a third thing, so that we have span kind, the W3C notion...
E
No, the W3C flags are irrelevant in this case, because the W3C defines only the communication between two parties, so essentially they are present, and the W3C is involved, only when the thing is remote. That's why I feel like you cannot ask them to add a flag to say it's remote; they are only ever about remote spans.
J
We'd need new span kinds that would make it explicit, or we'd have this as an explicit flag and/or attribute.
K
Another way of asking this is: is producer/consumer useful at all in its current state? If it's not, maybe we should improve it, but...
E
Producer and consumer were mostly defined for distributed queues, not for internal queues. That was the mindset initially, because it was to model Kafka, RabbitMQ, you name it.
K
Right, right. So maybe that's an incorrect application of producer and consumer. Are these span kinds more useful if we're strict about saying, like client and server, that they represent these logical edges that are serialization boundaries? That would make sense to me; it seems like that would make them more useful. And maybe we add something else: we should maybe actually look at how we're modeling internal queues, since that's a thing we don't really have anything explicit for.
A
Basically, this is about quite often not having good enough defaults when the user doesn't define a service name, and we end up with "unknown_service", which is not helpful at all. There are a few options to improve this: one is to have resource detectors for the service name specifically, or basically relaxing the unknown-service definition. The second one is Trask's, regarding whether the AWS SDK instrumentation should always use X-Ray propagation or not. That's a complex one.
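A sketch of the resource-detector idea for service names, falling back from an explicit setting to a weaker but still informative guess (the fallback chain below is an illustrative assumption, not an agreed design):

```python
import os
import sys

def detect_service_name() -> str:
    """Prefer an explicitly configured name; otherwise derive a
    hint from the entry-point script instead of reporting a bare
    'unknown_service'. Illustrative fallback chain only."""
    explicit = os.environ.get("OTEL_SERVICE_NAME")
    if explicit:
        return explicit
    entry = os.path.basename(sys.argv[0]) if sys.argv and sys.argv[0] else ""
    if entry:
        return "unknown_service:" + entry
    return "unknown_service"
```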
A
So I will not go into details here, but that's worth reviewing, and that's only for the AWS instrumentation, but still. The final one is one I created myself. It actually should have been a draft PR, but anyway: you may remember the discussion regarding adding links to spans after they were created. This is an alternative, about trying to represent links
A
as soft links, through an add-event call, which is very similar to how recordException works on the span interface currently.
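A minimal sketch of the soft-link-as-event idea (the event name and attribute keys are hypothetical, not an adopted convention; the shape mirrors how recordException encodes an exception as a span event):

```python
def add_soft_link_event(span, trace_id: str, span_id: str) -> None:
    """Record a link discovered after span creation as an event on
    the span, analogous to recordException. The event name 'link'
    and the attribute keys below are hypothetical names."""
    span.add_event(
        "link",
        attributes={
            "link.trace_id": trace_id,
            "link.span_id": span_id,
        },
    )
```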
Okay, so that's it, pretty much. Please review those offline. Thank you so much, and talk to you soon.