From YouTube: 2022-01-18 meeting
B
Morning or afternoon, everybody. Let's wait one more minute and then we can start. By the way, I suggest that we start with metrics. Usually we leave that to the end, but since we are trying to make progress on it, you can probably start there, if that sounds okay.
A
I just wanted to ask everybody who has any interest in logs to see what the logging SIG is currently focused on and, if you have any opinions, to come to the logging SIG meetings. The primary focus right now for the SIG is declaring the data model and the OTLP logs stable, and the plan is to do that this quarter, by the end of the quarter.
A
If you're interested in this topic, please have a look at the current state of things, the data model. I put the link to the data model in the meeting notes document here. Have a look, come during the SIG meetings, and let's have a discussion if you have any thoughts.
A
Anyway, you're welcome to come, join, and be engaged in the discussions, and please do it now. We don't want to wait three months and then end up in a situation where we're ready to declare it stable and only then people start coming up with suggestions for something to be changed, etc. So that's the primary focus: the data model and OTLP logs, the logs support in the OTLP protocol.
A
Another thing that has been happening recently in the logging area is prototyping logging library SDKs; both Java and Python implemented the prototypes. The feedback so far has been mostly positive. There were some adjustments to the specification as a result of the prototyping. Again, this is relevant if you're interested in having support for logs in OpenTelemetry SDKs, or in having existing non-OpenTelemetry logging libraries work in a way that is compliant with OpenTelemetry's way of doing logging.
A
This is another recent area of focus for the logging SIG, and we do not have a specific deadline for it either. But again, it would be preferable for us to have a reasonably complete specification of what this is supposed to look like by the end of the quarter. The implementations can obviously happen after that.
A
There is no specific deadline or rush to have all the languages implemented at the moment. The third area that the logging SIG is working on is adding support for logs to the OpenTelemetry Collector. Let me link the milestone that we have in the Collector here, the second Collector logs milestone.
A
So we're working on adding support for logs to the Collector. There is already a significant set of capabilities in the Collector, and we could actually use some help with implementing the remaining features or fixing some of the bugs we found, to get to what we call basic logs support in the OpenTelemetry Collector. So if you're interested, and if you know how to write code in Go, help is appreciated. I think that's it.
A
If you want to have a broader discussion around logs, I think the logging SIG meeting is maybe a better place to talk about the issues.
C
Yeah. Can I make a request that the logging SIG, or you, or somebody turn the update you just gave into a blog post? I think that reflects feedback we've gotten.
C
I think we want to get better at making these kinds of updates public and doing a public call for participation. With metrics, when we started publicizing that we were going to beta for the first time, that brought in a bunch of people from the metrics world. So we should make sure that we grab the logging people in the same way.
A
Yeah, okay, that's a good idea, sounds good. I'll see if I can do it, or, if not, then maybe someone else from the logging SIG can.
D
Maybe one issue, at least for myself. I think, without Riley's update, I guess we're on the same track that we were last time. If you give me a minute, I could pull up his board, but it basically comes down to waiting for more SDKs to be finished, as I understand it.
B
I remember there was a small discussion regarding instrument identity. Oh yes.
D
That basically killed the meeting last week, and it wasn't very well attended, so we could rehash it right now. There is still another metrics SIG on the calendar, although it's been sparsely attended as well, and last week we did discuss having those discussions out in the open here. So may I briefly summarize the issue? Actually, Carlos:
D
Could you find the issue and project it while I speak? Sure. Okay. So the question is: I am in an SDK, I am registering instruments, and I am at the point of registering the same instrument name twice. What shall happen? I believe the answer may depend on whether it's a synchronous instrument or an asynchronous instrument, but we've also discussed how neither of those things really matters.
D
The
concern
is
that
within
a
single
instrumentation
library,
it's
pretty
unusual
to
expect
or
anticipate
this
type
of
duplication
to
happen.
So
may
we
return
errors.
Maybe
we
return
the
same
instrument
and
it's
not
clear
that
we've
specified
this
at
all
the
opinions
expressed
last
week,
if
I
recall,
were
that.
D
Riley had come up with the point that we've already specified, for views, that when you try to configure them in a way that doesn't necessarily work correctly, we will print an error and return an instrument to you. I think he was trying to use that precedent to say that when you register a duplicate instrument, never mind what views are configured, it should be an error. And I think some of us were saying: but a harmless error.
E
There was the example where you get the same meter, because that's how we started: we get the same meter with the same name, and then I try to, let's say, create a counter with the same name, same description and everything. What happens then? And the whole example was based on the fact that there may be two different structs or classes, whatever they are called depending on the programming language, that mean access to the same instrument.
D
The question is: what happens when you make double observations of the same key set, the same attribute set, and you've got two observations?
D
They came from, say, different callbacks that were registered independently, they're conflicting, and we have to choose one; and we already do. The question is whether you think the user has done it wrong at that point. I think, Bogdan, you said no, it's okay: people keep coming in and saying they want to have more than one callback, and they're independent. As long as they're truly independent, that's okay.
E
The problem is in the specification, which says this behavior is undefined. I believe this behavior should be defined in the API, because users need to know how to use the API: whether this is expected or not expected, and what the behavior is.
E
So this is how we started. I mean, I one hundred percent agree with your thoughts, but this is not what is written right now. Right now it's written that an implementation may or may not return the same instrument, and so on and so forth. And so it comes...
D
From
there
this
bullet
that's
on
the
very
bottom
of
carlos's
screen,
which
says,
however,
the
sdk.
Oh
it's
moving.
Now
the
sdk
will
apply
the
view
configuration.
This
is
what
riley
was
saying
last
week
and
if
this
conflict
had
been
created
through
a
view,
it's
an
error,
we're
saying
but
we're
saying
it's
not
a
conflict.
If
you
create
duplicate
instrument,
names-
and
I
think
that's-
okay-
instruments
are
for
users
and
views
or
configuration
and
semantically.
D
Semantically, the instruments are functional and meaningful when you hand out more than one of them, given the caveat about asynchronous ones, exactly.
D
Just to clarify: the specification on this issue is to allow duplicate instrument registration as long as they're the same kind?
E
Well
correct,
but
that's
that
has
more
implications.
It's.
It
sounds
easy
in
the
specs,
but
implementation
wise.
If
people
did
not
do
that,
it
has
implications.
You
mean
that
people
will
have
trouble
returning
the
same
instrument.
D
I
actually
don't
think
it's
necessary
to
do
the
same
instrument
as
long
as
the
sdk
is
proper,
because
you
will
see
two
streams
with
the
same
identity
and
the
thing
to
do
is
aggregate
them
together,
as
if
there
had
been
a
missing
attribute
that
was
erased
in
the
source.
So
if
I
erase
an
attribute
before
the
open,
telemetry
sdk
sees
it,
I'm
going
to
see
two
ads
for
the
same
key
set
and
the
correct
thing
to
do
is
add
them
together.
D
If,
if
so
in
the
sdk,
when
you
see
two
series
with
the
same
identity,
you
have
to
aggregate
them
or
else
you're
violating
the
single
writer
rule.
So
I
don't
see
why
there's
a
problem
whether
you
implement
the
same
instrument
or
different
instruments,
it
shouldn't
hurt
correctness.
You
agree
well.
E
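The behavior described here — duplicate registrations feeding one stream that the SDK aggregates by identity — can be sketched in plain Python. This is a toy model, not the OpenTelemetry SDK; all class and method names below are illustrative only:

```python
from collections import defaultdict


class ToyMeter:
    """Toy model: duplicate registrations are allowed, and the SDK
    aggregates all writes by stream identity (name + attributes), so
    correctness holds whether or not the same object is returned."""

    def __init__(self):
        # (name, attributes) -> aggregated sum for that stream identity
        self._streams = defaultdict(int)

    def create_counter(self, name):
        # Returning a fresh object per call is fine: both objects
        # write into the same underlying stream.
        return ToyCounter(self, name)


class ToyCounter:
    def __init__(self, meter, name):
        self._meter, self._name = meter, name

    def add(self, value, attributes=()):
        key = (self._name, tuple(sorted(attributes)))
        self._meter._streams[key] += value  # aggregate, don't overwrite


meter = ToyMeter()
a = meter.create_counter("http.requests")  # first registration
b = meter.create_counter("http.requests")  # duplicate registration
a.add(1, (("method", "GET"),))
b.add(2, (("method", "GET"),))  # same identity -> summed together
print(meter._streams[("http.requests", (("method", "GET"),))])  # 3
```

Whether `a` and `b` are the same object is invisible to the exported data, which is the point being argued: aggregation by identity, not object sharing, is what preserves the single-writer rule.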
D
Well, I will say that, having written and implemented more than half of the view spec, I have a feeling for it, and I would be happy to give this particular issue an effort of my own. We can assign it to me.
G
Yeah, I was just going to ask a clarifying question. Does this mean that we are going to loosen the language in the API spec that says an error must be thrown if an instrument with the same name is requested from the same meter?
G
Okay, and can that be done in a backwards-compatible way? Because the language today says that an error must be thrown. So can we do a complete 180 in a backwards-compatible way and say that, instead of "an error must be thrown," the same instrument must be returned?
E
I
think
from
from
this
from
the
user
perspective,
if
that
was
an
error,
and
right
now
is
no
longer
an
error.
E
G
Okay, yeah. Well, I'm not sure I really care about the result, as long as the language is changed in some way: loosened at least, and maybe reversed.
D
Okay,
I've
heard
all
these
concerns
I'm
going
to
take
a
pass
over
all
the
specifications
on
both
the
api
and
the
sda
side
and
see,
if
there's
something
inconsistent
that
I
can
resolve
and
I
my
goal
would
be
to
say:
correctness
should
be
the
the
over
overall
goal.
Is
you
know
correctness
as
long
as
the
sdk
can
do
this
correctly?
It
doesn't
matter
that
we're
returning
duplicates
but
I'll
just
I'll
go
over
it.
It's
assigned
to
me
now.
Thank
you.
D
I
don't
think
we
have
any
other
metrics
issues.
We
can
pull
up
riley's
sheet.
C
Yeah
I
I
pasted
the
link
to
what
I
believe
is
the
right
project.
In
there
there
there
was
one
other
issue
around
should
exemplars
be
enabled
by
default,
looks
like
it
hasn't,
been
updated
in
a
in
a
bit
since
yuri's
concerns
around
it
being
poorly
specified
were
dealt
with
in
november.
D
I
wish
josh
s
were
here
to
speak
about
the
current
state
of
it.
I
know
google
was
very
interested
in
having
them
on
by
default
and
I
felt
like
he
conceded
on
that
issue,
because
there
was
pressure
to
not
do
it
on
by
default
before
ga.
C
I can follow up on the issue, but regardless of when we circle back and implement exemplars, I totally agree that if we made the metrics spec stable without exemplars and got that out there, that would be fine. But I don't think we should be flip-flopping on the defaults, having it default to off in one version and on in another version, unless that's really associated with it being a prototype that's experimental versus not.
D
Right,
you
just
reminded
me
of
the
probability
specification,
probably
sampling
specification,
that
we've
got
ending
right
now
and
it
is
very
clearly
trying
to
make
an
experimental
reality.
That
is
not
future
specification
so
that
we
can
get
other
changes
in
place.
First
before
we
go
changing
defaults,
but
in
this
this
case
it
was
just
about
expediency
and
getting
to
a
ga
with
the
basic
metrics
flow,
because
my
I
mean
how
many
vendors
actually
use
this
data
prometheus,
maybe
uses
this
data
and
and
question
mark
yeah.
C
But it's a chicken-and-egg thing, right? This is the area where OpenTelemetry actually starts to push past the current state of the art and really take advantage of the fact that we're emitting all of these different signals together. This is, I believe, the first feature we're adding that really takes advantage of that; it causes OpenTelemetry to be interesting beyond just being a convenient standard for the individual tracers, logs, and metrics, maybe.
D
I
mean
in
the
metric
signal
there's
you
know
we're
innovating
in
the
sense
that
there's
no
push
metric
standard
out
there,
that
someone
could
use,
and
so
so
the
people
who
are
saying,
let's
just
release
this
are
saying.
Well,
we
do
have
something
new.
We
have
a
protocol
on
sdk
that
can
push
metrics,
whereas
before
you
had
to
pull
them
and
so
on
sure.
C
Okay,
I
guess
the
the
thing
I
I
was
looking
for.
Clarity
on
is:
if
this
discussion
is
just
about
this
stuff
is
experimental
right
now
we
want
to
leave
it
off
until
until
we
stabilize
it.
That's
that's
totally
cool,
but
it's
more.
C
If,
if
it
went
into
this,
the
spec,
when
this
thing
is
stabilized
that
that
it
would
be
should
be
off
by
default,
it
seems
to
me
like,
like
we
should
have
have
all
the
information
open.
Telemetry
produces
be
sufficiently
low
overhead
enough
that
you
can
run
it
all
in
production
and
all
on
by
default,
and
people
can
optimize
to
turn
off
the
stuff
that
they're
not
not
using.
D
Yeah,
I
think
that's
it
also,
I
mean
don't
forget:
we've
lost
baggage
as
well,
and
and
people
when,
at
the
beginning
of
this
whole
effort
baggage,
is
one
of
the
selling
points
you
get
your
baggage
and
your
metrics
and
that's
also
stalled
and
there's
a
few
related
reasons.
So
we
should
take
a
look
at
it.
D
We have some time left in the metrics slot. There was an interesting discussion that happened last week at the Prometheus working group, about how metrics shall reflect resources. We are not calling this a blocker for GA; it's a data modeling question. When you export data from OpenTelemetry into Prometheus, we are saying you will take your resource and you will not turn it into a set of attributes on every metric in the process. This is partly the opinion of the Prometheus group.
D
What
prometheus
is
telling
us
we
should
do
is
take
those
resources,
turn
them
into
target
metrics
target
information
metrics,
which
is
basically
a
metric
with
the
value
one
and
some
attributes,
and
then
what
this
means
is
that
you're
going
to
be
required
to
join
your
resources
with
your
metrics
through
queries
and
configuration,
whereas
in
the
original
open
census,
original
open
census
vision,
they
were.
Resources
were
part
of
the
metrics
and
it
would
happen
automatically
so
we're
at
this
point.
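The join being described can be sketched with plain Python dictionaries. This is an illustrative toy, not PromQL or a real exporter; the `target_info` shape simply follows the description above (a metric with value one whose labels carry the resource attributes), and the label names are assumptions:

```python
# Resource attributes exported as a separate target-info metric
# (value 1, attributes as labels), per the Prometheus guidance above.
target_info = {
    ("job-a", "instance-1"): {"service.name": "checkout", "host.name": "h1"},
}

# Ordinary metric samples, identified only by job/instance labels.
samples = [
    {"name": "http_requests_total", "job": "job-a",
     "instance": "instance-1", "value": 42},
]


def join_resources(samples, target_info):
    """Re-attach resource attributes to each sample by joining on the
    (job, instance) identity -- the query-side work that OTLP avoids
    by carrying the resource alongside the metrics."""
    enriched = []
    for s in samples:
        resource = target_info.get((s["job"], s["instance"]), {})
        enriched.append({**s, **resource})
    return enriched


print(join_resources(samples, target_info)[0]["service.name"])  # checkout
```

In OTLP this join never happens, because resource and metrics travel together; the sketch shows the extra step a consumer takes once the data has passed through Prometheus.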
D
There's no easy way, when you're using Prometheus, to get your resources turned into metric attributes, and I think it's okay to leave that where it is. But it does bring to mind the question of how a user who is consuming metrics that were produced by OpenTelemetry, but have been handled through Prometheus, will get back their resources; in particular, in the case of a single SDK.
D
It's
not
too
hard
to
figure
this
out,
because
there's
only
one
target
that
you
will
scrape
and
you
can
figure
out
what
to
do
when
you're
scraping
a
prometheus
server,
which
is
what
you'd
get
for
me.
An
open,
telemetry
collector
using
the
prometheus
pull
protocol
you'll
get
more
than
one
target
because
you've
been
aggregating
your
metrics
and
now
the
question
is:
do
I
have
any
way
to
combine
the
resource
data
from
target
information
and
the
metrics
data
from
my
sdk
of
my
open
telemetry?
D
But if we don't do more, I think we're not fulfilling the vision of OpenCensus. I'm sort of just replying to Ted's question: what are we not providing in metrics 1.0 that was promised at the beginning of OpenCensus? It's something like that.
D
And
if
I
may
just
point
out
the
the
the
contradiction
a
little
bit
more
clearly
we
send
traces,
we
put
all
your
resources
on
them
and
now
you
can
see
all
the
resources
on
your
traces.
We
send
metrics
and
prometheus
is
telling
us
don't
put
those
metrics
on
those
resources
on
your
metrics
and
it's
not
it's
not
clear
how
you're
going
to
bring
back
together
that
data
right
now
in
the
world
of
open
geometry
meeting
prometheus.
E
This
problem
is
only
if
you
are
using
prometheus
as
a
back
end,
because
our
protocol
sends
them
together.
So
it's
it's
a
conversion
problem.
It's
not
a
an
open,
telemetry
protocol
problem
because
we
do
the
same
thing.
E
In our protocol we do carry them together; it's just when we convert to Prometheus that we break that convention. This is how I see it. Their request is more or less: when you are sending to a Prometheus server, or a Prometheus server scrapes from you, this is how we would like you to expose it to us — versus what we should do within our own product or community.
D
Because
prometheus
server
is
still
the
only
place
you
can
configure
things
like
recording
rules
or
aggregation
on
right.
It
seems
still
still
important
to
me
so
maybe
one
day
when
the
collector
can
support
truly
replace
a
prometheus
server,
then
we
will
have
worked
through
this
issue.
D
Okay,
I
think
I've
beaten
up
that
topic.
It's
not
necessarily
a
problem.
It's
just
a
semantic
disagreement.
B
Yeah, it would take me a minute to find that issue number. Okay, perfect. Since we are not going to be discussing that now, unless you want to, we can probably move on; I'll just paste the link here.
B
Okay, so let's move on: "instrumentation needs some spec love." This is, I think, an OTEP that requested some actual reviews. You may remember that we talked about this very briefly.
C
Well, one way or the other, we would love to get this resolved, because it's been open forever at this point, and the SIG is really deep into wanting to add stuff to the spec. I think the original intent of this was to make sure that the technical committee and other core people were aware of what we were up to and felt like it was a good idea. I do agree that it's a little bit funny as an OTEP, because we often don't make OTEPs that propose a roadmap. But anyway, that's where it stood; it would be great to just get it closed one way or the other.
A
As
a
result
of
this
author
being
merged,
there
is
no
immediate
changes
that
will
happen
to
the
specification
right
correct.
There
will
be,
there
will
be
more
iterations
as
a
result
of
those.
There
will
be
changes
to
the
specification.
C
Correct
this
is
this
is
basically
proposing
that
if,
if
this
group
can
can
look
at
these
different
areas
like
http
and
and
get
these
conventions
into
what
we
consider
to
be
a
working
order
with
outside,
you
know
feedback
that
we're
gonna
do
with.
Is
this
group
fine
with
us
declaring
things
stable
at
that
point?
Essentially?
C
And
so
it's
not
it's
not
like
a
binding
contract.
It's
not
like.
If
you
merge
this
this
otp,
then
the
spec
gets
changed.
I
think
this.
This
is
coming
more
from
the
fact
that
that
the
instrumentation
group
has
spun
up,
and
it's
got
a
lot
of
a
lot
of
the
core
contributors
in
that
group
are
not
like
core
contributors
that
come
to
these
kinds
of
meetings
or
who
have
been
working
on
like
sdks
and
apis.
F
I
mean
just
to
jump
in
here.
I
think
I've
not
been
directly
working
on
this
old
tap,
but
but
I
think
the
main
intent
of
this
otip
is
just
to
basically
make
clear
what
needs
to
be
changed
into
spec
in
order
to
release
a
stable
set
of
http
semantic
conventions.
F
Exactly
if,
if
anybody,
I
think
the
idea
is
to
just
like
get
an
agreement
of
what
do
we
need
to
change
to
declare
http
semantic
convention
stable
and
this?
I
think
this
document
should
be
like
basically
a
point
where
people
can
kind
of
like
disagree
now
at
this
point
or
request
additional
things
for
https
and
90
conventions
to
be
stable.
I
think
it's
maybe
not
formulated
in
the
best
way,
but
I
think
that
is
the
intent
of
this
document
that.
C
That's
totally
correct,
we're
we're
saying:
hey,
we
we
think
stabilizing
these
conventions
is
very
important.
It's
it's
starting
to
become
you
know
in
it.
We're
mature
enough
as
a
project,
that's
starting
to
become
an
issue
that
the
conventions
aren't
stable
and
so
we're
going
to
try
to
stabilize
them
we're
going
to
do
the
work
that's
listed
in
this
stock.
C
Does
this
core
group
agree
that
this
set
of
work?
If
we
do
it
successfully,
would
be
enough
to
to
stabilize
the
conventions
or
are
we
missing
some
some
chunk
of
work
that
the
the
spec
group
would
want
to
see
like?
Is
there
just
something
we're
overlooking,
and
so
this
is
just
a
place
to
to
get
that
called
out.
So
we
understand
what
what
our
roadmap
looks
like.
A
A question: what exactly do we mean by "stable" here? Weren't we intending to have the schemas take care of the stability issue — to allow changes while avoiding breakage?
C
Correct
so
yeah,
so
stable,
stable
means
switching
switching
to
being
bound
by
what
we
can
do
with
that
schema
and
and
actually
doing
it.
So
I
would
say
once
we
start
declaring
these
things
stable.
If
we
change
it,
then
we
have
to
now
go.
Do
all
the
work
of
implementing
a
schema
transformation.
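A schema transformation of the kind mentioned here can be sketched as a simple attribute-rename pass. This is an illustrative toy, not the OpenTelemetry schema-file format (real schema files are versioned YAML documents), and the rename shown is hypothetical:

```python
# Hypothetical rename introduced by a new schema version:
# data recorded under the old name must be readable under the new one.
RENAMES_1_1_0 = {"http.status": "http.status_code"}


def upgrade_attributes(attributes, renames):
    """Apply a rename-style schema transformation so telemetry
    recorded against an old schema version can be interpreted
    under the new version without breaking consumers."""
    return {renames.get(key, key): value for key, value in attributes.items()}


old = {"http.status": 200, "http.method": "GET"}
print(upgrade_attributes(old, RENAMES_1_1_0))
# {'http.status_code': 200, 'http.method': 'GET'}
```

The point of the discussion is the cost trade-off: once conventions are declared stable, every such rename obligates the project to ship and maintain a transformation like this.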
C
Yeah, I mean, this is related just in the sense that this is the actual work we're doing on the spec.
C
In the past we just said there's this span that has these attributes, and that didn't actually match the mechanics of how HTTP works, because you have retries and redirects. So there was a lot of back and forth about the best way to model that, so that you don't end up with a nest of unnecessary spans but actually model the connection between retries and redirects correctly. Just to give a brief overview for this group, because I would love more actual feedback on this:
C
What
we
went
with
was
saying
you
should
just
have
a
span
per
actual
physical
http
request
and
not
have
a
span
that
sits
as
like
a
a
extra
logical
span
that
would
measure
all
of
the
retries
and
redirects
and
everything
sitting
above
these
spans
and
the
reason
for
that
is
in
most
cases
it's
just
essentially
a
duplicate
span,
because
in
most
cases,
http
requests
are
just
a
single
request
and
the
other
reason
was
looking
at
how
how
instrumentation
works
in
the
real
world
that
that
sort
of
like
overall
amount
of
time
people
are
measuring
and
setting
up
alerts
on,
is
actually
almost
always
a
higher
level
span.
C
That
represents
like
a
controller
action
in
their
framework
or
something
along
those
lines,
and
so
there
was
a
desire
to
get
rid
of
to
because
because
having
these
extra
spans
actually
constitutes
a
lot
of
extra
overhead
and
cost,
because
these
are
so
common,
we
wanted
to
to
get
rid
of
the
extra
span.
C
So
we
went
with
just
one
span
per
actual
request
and
to
connect
those
spans
to
each
other
with
links
and
to
add
a
retry
count
as
an
attribute
on
the
spans,
and
so
that
we
saw
that
as
being
enough
structure
to
be
able
for
a
computer
which
is
analyzing.
This
data
to
see
that
this
http
request,
followed
by
a
redirect,
followed
by
retry,
followed
by
retry,
followed
by
another,
redirect,
is
actually
a
sequence
of
related
related
spans.
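The structure described — one span per physical request, chained with links and a retry-count attribute — can be sketched with plain dataclasses. This is illustrative only; real instrumentation would use the OpenTelemetry tracing API, and the attribute name below is an assumption:

```python
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)  # spans this one relates to


def send_with_retries(url, attempts):
    """One span per physical HTTP request: each retry links back to the
    previous attempt and records a retry count, so an analyzer can
    reassemble the sequence without a wrapping 'logical' span."""
    spans, previous = [], None
    for n in range(attempts):
        span = Span(name=f"HTTP GET {url}",
                    attributes={"http.retry_count": n})  # assumed name
        if previous is not None:
            span.links.append(previous)  # chain attempt n to attempt n-1
        spans.append(span)
        previous = span
    return spans


spans = send_with_retries("https://example.com", attempts=3)
print(len(spans), spans[2].attributes["http.retry_count"])  # 3 2
```

In the common case of a single attempt, this produces exactly one span and zero links, which is the overhead argument made above.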
C
So
that's
the
structure
we
went
with
this.
This
pr
defines
that
and
it
would
be
great
to
get
get
eyes
on
this,
because
we
think
this
is
a
pretty
important
change.
C
There actually was a bit of confusion, and where it came from was thinking about these pieces of instrumentation in isolation: if the only thing you had instrumented was your HTTP client, then you would want a span that measured the total amount of time for the, quote, "logical" HTTP request — something to measure and look at. So there was this issue of saying: okay, that's valuable, but at the same time it seems like a lot of extra overhead. The way we resolved it was by looking at the real world: in production, generally speaking, you don't just have this HTTP client floating in space. It's always encapsulated by some larger sequence of stuff that represents a request at that time.
C
It
might
be
a
sequence
of
these
things,
but
a
controller
action
in
a
in
a
web
framework
is,
I
think,
the
classic
example
of
the
kind
of
span
that
would
encapsulate
that
that
overall
amount
of
time
you're
looking
at
so
that's,
that's
that's
the
way
we
resolve
that
issue.
A
B
B
C
That might be. This is related, but Johannes might want to take that one; I think he added it. I just want to finish this with: could we please get approvals on this HTTP issue, if people think it's reasonable, because we're blocked right now. Thanks.
C
Yes,
let
me
get
that
done
before
we
go
into
johannes
thanks.
So
on
a
related
note,
we've
assembled
a
bunch
of
experts
for
these
various
domains,
but
because
most
of
them
have
been
new
to
the
project,
they
aren't
currently
approvers
of
the
spec.
C
So
I
just
wanted
to
check
in
what
would
be
the
we
do.
We
have.
I
have
added
with
tigran
added
everyone
to
to
a
team
who
we
think
would
be
a
reasonable
spec
approver
people
who
are
paying
attention
to
these
issues
have
expertise
and
are
willing
to
only
approve
semantic
convention
issues
when
they
feel
like
it's
in
their
area
of
expertise.
C
So
my
proposal
is
to
add
this
group
as
a
a
spec
approver
group
just
for
the
semantic
convention
areas
of
the
spec,
but
I
just
wanted
to
double
to
to
check
in
how
would
you
know
how
the
tc
actually
like
to
review
this
kind
of
request,
like.
A
I
I
have
a
slight
concern
with
that.
I
think
the
the
expertise
in
semantic
convention
is
domain,
specific
right,
it's
not
across
all
of
the
open
climates
right.
So
how
do
we
give
these
people
a
proving
rights
on
on
specific,
like
subset
of
the
semantic
conventions
which
are
for
that
particular
domain,
that
they
are
experts
in,
but
not
to
to
to
everything
right?
I
don't
think
we
have
the
right
mechanisms
with
that
bright
granularity
to
allow
these
approvals
to
happen.
C
I
can
try
to
refactor
how
the
semantic
conventions
are
laid
out
to
to
make
the
code
owner
stuff
a
little
more
straightforward.
Honestly.
I
think
it
would
make
it
easier
for
people
to
review
if,
if
this
semantic
conventions
were
actually
pulled
out
all
into
their
own
section,
rather
than
being
kind
of
sprinkled
about
other
sections
and
then
chopped
up
according
to
each
convention,
they're
sort
of
chopped
up
like
that
right
now,
but
not
not
precisely
the
request
here
would
be.
C
I
guess
in
the
meantime
that,
through
whatever
approval
process
we
want
to
do
I
mean
the
the
people
on
this
list
are
are
responsible
people
who
are
willing
to
only
look
at
to
only
mark
something
as
approved
if
they've
reviewed
it
and
it's
in
their
area
of
expertise
which
is
sort
of
what
we
we
ask
of.
You
know
all
spec
approvers
that
you
don't
just
click
approve
on
something
unless
you
feel
like
you're
qualified
to
do
that.
C
Yeah, or we can do it with... I think you can be code owners of specific files — or does it have to be...? You can, so...
C
But
it's
actually
the
the
way
the
expertise
is
broken
out
is
more
like
I'm,
an
expert
at
sql
databases,
I'm
an
expert
at
http,
and
so
I
want
to
be
able
to
approve
any
semantic
conventions
related
sql
databases.
H
Like you delegate which part... for the next three, we should be nominating the approvers for those specific areas.
C
In that case, I feel like that's what this group is. The instrumentation SIG team we formed consists of all the members of the instrumentation SIG who have said they are willing to both do the work of reviewing this stuff and be responsible with the approver role.
E
Yeah, there are like 10 people here, right? But a couple of them are from the TC; I don't know if we necessarily need to add them, so let's say there are eight. Okay, I was expecting a smaller list, and then the group could extend it with the next ones — but sure, we can send the entire list, and that's it, yeah.
C
This
list
currently
consists
of
core
people
who
are
looking
at
all
of
this
and
then
the
messaging
messaging
people,
because
yeah.
C
All
right
thanks!
That's
that
that'll
be
helpful
to
to
hand
out
that
approvership
and
I
would
like
to
suggest
like.
Oh,
I'm
gonna
try
to
get
that
refactor
put
in,
but
it
would
be
great
if,
if
we
didn't
necessarily
wait,
if
we
could,
just
you
know,
assign
that,
according
to
just
the
current
semantic
there's
like,
I
think
three
or
four
semantic
convention
folders
right
now,
yeah.
If
we
can
especially.
F
Yes, I put it on there. In our messaging semantic convention working group, we came across a case where we would really like to add links to a span after the span was created. An alternative solution we have would be to create dummy child spans and add the links to those dummy child spans, which does not feel ideal to us, and we are looking into it.
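The workaround described — a dummy child span created later, solely to carry the link — can be sketched like this. This is a toy model, not the OpenTelemetry API; the span names are invented, and it only illustrates why the workaround feels less ergonomic than adding the link to the original span directly:

```python
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    parent: "Span | None" = None
    links: list = field(default_factory=list)


# In this model, mirroring the current spec restriction, links are
# only settable at span creation time.
def start_span(name, parent=None, links=()):
    return Span(name=name, parent=parent, links=list(links))


receive = start_span("receive messages")  # link targets unknown yet
producer = Span("publish message")        # discovered only while processing

# Workaround: a child span that does no real work exists only so the
# link (discovered after 'receive' started) can be recorded somewhere.
dummy = start_span("process message", parent=receive, links=[producer])

print(dummy.parent.name, len(dummy.links))  # receive messages 1
```

Allowing links to be added after creation would let the link live on `receive` itself, removing the artificial child span.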
F
I
found
this
issue,
which
is
pretty
old,
which
didn't
receive
an
answer
and
which
basically
asks
for
allowing
recording
links
after
spank
creation
time
exactly
what
we
would
like
to
do,
and
there
there
was
a
conscious
decision
to
not
allow
it
there's
a
link
here
to
an
old
tab
and
basically
the
old
tap
mentioned,
like
the
main
reason,
the
old
dimensions
for
not
allowing
it
is
sampling.
So,
basically,
when
having
a
link
its
bank
creation
time,
there
is
more
like
options
for
sampling
or
sampling
can
be
implemented
more
reliable.
I
mean
up
to
this
point.
F
We
don't
have
any
sampler
that
in
the
in
the
spec
that
considers
links,
so
it
just
wondered
if,
at
this
point,
this
decision
can
be
revisited
or
if
we
just
kind
of
are
or
if
we
just
kind
of,
are
bound
to
this
decision,
and
we
cannot
have
links
after
sampling,
then
we
should
maybe
close
this
issue
and
make
it
clear
or
otherwise,
if
you
would
be
open
for
changing
that
and
allowing
links
after
spank
creation,
I
mean
I'd,
I'd,
be
willing
kind
of
to
submit
the
pr
visas
back
change
because
yeah,
as
I
said
for
us
in
the
messaging
work
group,
this
would
allow
us
to
come
up
with
a
much
simpler
and
more
ergonomic
solution
to
problems
that
we
are
facing.
C
Thanks, Johannes, and yeah — having been looking at all the messaging modeling with them, I agree. It's just a reality that linking together all the things you want to link, in a model that is logically very coherent, can't really be done at span start time in a lot of these environments, just due to the way these messaging systems are implemented. So...