From YouTube: 2022-09-08 meeting
Description
Instrumentation: Messaging
C: Yeah, no Starbucks for you today.
C: Yeah, I could use the coffee just a little bit here. I think that's the post-meeting goal right there.
C: Let's see. So it looks like we're pretty much at quorum, a minute past. I think we could probably start, though.
C: Let's see if I can figure this out again. Cool. So, just to kick it off: everyone on the call, welcome, and if you haven't already, please add yourself to the attendees list.
C: The agenda is very me-heavy today, so if you have something else you want to talk about, please add it to the agenda. We should also have a pause at the end. Oh, great — yeah, I've got more things to talk about. Okay.
C: First off, I wanted to give everyone a heads-up. I know this is related to this SIG — it's going to be its own SIG — the auto-instrumentation SIG itself had its kickoff meeting earlier this week. For a breakdown, the meeting notes are linked; it's very high level at this point. We're still discussing, I think, proofs of concept. There are going to be demos — demos that were already done in this repository and in that SIG meeting — about the eBPF and the compile-time instrumentation. So both of those are there.
C: Just a heads-up — go ahead and take a look if you're interested. Also, we are having meetings for that every two weeks, on Tuesday at 9:30 in the morning; if you'd like to show up, you're definitely welcome to collaborate on that one. Okay, cool. So next on the agenda is the alpha metrics SDK progress. This has kind of stalled.
C: We are still around the 90% complete point; I don't think there's been much change. I think we merged a PR for the gRPC exporter. There's still the HTTP OTLP exporter and the testing framework for the OTLP exporter, which we talked about last week. These three are new: there are really small PRs for the to-do in the views package, and the changelog — I did move that over; this is an update to include all the changes for when we merge this new SDK main into main itself.
C: It essentially is a high-level summary of everything that's getting removed, which is kind of the next topic I wanted to talk about. And then this is just, again, a really quick to-do: there was essentially an unqualified TODO here to populate the testing in the reader's test with some real data, so I just added some stuff really quick.
C: I was kind of questioning whether we even needed to, but I think it's probably better to have a test that doesn't just test an empty thing, because that could also be, you know, an empty-value test, which may not catch something. So that was added there. Again, those are three pretty small ones; the HTTP exporter is obviously a pretty big one.
C: Yeah, next up is the Prometheus exporter code; we still have a work in progress on that. I think I saw Mike on the call. Maybe we can check in, Mike?
A: Yep, so I've just been, you know, still kind of figuring this stuff out. I actually just took the work-in-progress label off of it to try to get some more feedback on it. I added some tests, so it's still just the basic functionality. I want to add some sanitizing and stuff — I don't know if there's a Prometheus specification that I should try to shape this to follow.
A: Maybe if someone wants to review that and point out some things I should keep in mind, I can add those changes; but it's functioning and the tests pass. So I've just been trying to get the CI tests all green right now.
C: So I would just look at some of the other files in the SDK — the metrics SDK. There are build tags in those, just to block them from being compiled on Go 1.17. One thing that can also help you: you don't want to do that to every Go file, otherwise the testing framework thinks there are no Go files in there. So usually there's a doc file that has just, you know, the package documentation. Also, it looks like the linting just needs to get run as well.
C: Hopefully that hasn't been spamming anyone. People can see the build failing and they just kind of expect you to be fixing things, so it's good to know that that's not the case — that you actually want people to take a look at it. But yeah, I think that would also help. Cool, okay. I think I'll move this over here then. Again, not really the biggest thing in the world.
C: For that to be released, it needs reviews — I'm obviously not going to belabor it on the call, but I'd definitely appreciate any reviews people have, to hit all the open PRs here. We have two issues left, and one of them is merging, so I think we're really close. Just a heads-up on that.
C: One of the things I did want to point out about the merge I was talking about here: we are going to be deleting a bunch of packages again, which we've done in the past, and this is going to orphan them — they're going to still exist on the package index.
C: So if you haven't already seen it, this was identified a while ago; we've done this a few times, unfortunately. So I don't know what we want to do here. Do we want to continue adding to the pile, or do we want to try to resolve some sort of deprecation strategy here? I'm mainly looking at Aaron, because you had taken up this push to do this — this issue.
B: Yeah, that got pushed to the wayside with trying to get the metrics done. I can pick it up and try to do some of the deprecation of the old modules, but at some point — it's going to remain in the docs forever. There's just no getting around that.
B: If we publish a version where we deprecate at the package level — I think there's a way to deprecate at the package level where the docs will say "this is deprecated, use something else" — but yeah, I can't find it.
B: In the documentation. So we could publish a version, like v0.30.1, that just has at the very top: "this is deprecated; things have moved" — changed in v0.30.1, or in v0.31.
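The package-level mechanism B is reaching for does exist in Go modules: a `// Deprecated:` comment immediately above the `module` directive in go.mod is surfaced by `go get` and on pkg.go.dev. A sketch, with an illustrative module path rather than a real one:

```
// Deprecated: this module is no longer maintained; its packages moved
// into another module. (Module path below is illustrative.)
module example.com/otel/old-exporter

go 1.17
```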
C: I think that's all we can really do, honestly, because I went through this before — I tried to copy what was going on here. Yeah, I did; I think I wrote this down.
C: I essentially created some repositories and tried to do this exact thing. Oh no, I think it does — yeah, okay. So I do think that the doc site can get updated.
C: I think this is a module deprecation — yeah, it's a module-level deprecation. Let me just double-check. Also, I really like mushroom hunting, if you can't figure it out. No, it's not in here, so I don't know where that would be.
C: I'll have to go look through this again. There is a flow of information here, and maybe it got removed at a later stage in this document.
C: Yeah, I don't think that I — I looked into the package. I think that, similarly, we could definitely do a package-level deprecation.
C: No, it's entire modules. All of these modules that are listed here were moved, if I remember correctly.
C: Right, yeah. And so a big problem now is — exporters are always, I think, the biggest one — somebody will come across and say, "Oh, here's the Jaeger exporter, right? I'm just going to import this package really quick." And then this doesn't exist anymore, and they get a really old version that did exist, and then it relies on main from, you know, 0.20 as well. So all of a sudden they're running into a bunch of compatibility issues.
C: Yeah, I would love to know if Bazel works with this deprecation notice and understands that it should stop using this package and look for another one. But I know my investigation did turn up that if you try to use a go.mod that is deprecated, it does tell you — it says, you know, "here's the deprecation notice: use something else instead" — and similarly when you try to add it.
A: The warning might be helpful, but what I haven't figured out yet is how to stop it from ever trying again. In other words, it generates code that refers to these modules, which it thinks at the moment are the right choice, and then you run into compilation failures and such.
C: They will never actually fully remove everything. Everything that is retracted essentially becomes a second-tier lookup: if there are absolutely no versions that aren't retracted, it will still go through the retracted list to find the latest. So there's no way to ever say "this package never existed" or have it completely deleted. So yeah, I don't think there is a way to actually do that.
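The second-tier lookup C describes comes from go.mod `retract` directives: retracted versions are hidden from normal version selection, but if every published version is retracted, the tooling still falls back to the retracted list. An illustrative fragment (module path and versions are made up):

```
module example.com/otel/old-exporter

go 1.17

// Hide this version range from `go get`'s default version selection.
// The versions remain resolvable if no non-retracted version exists.
retract [v0.1.0, v0.20.0]
```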
A: And even if there were a way, I don't know if it would be the proper use case here, because usually yanking dependencies is done when there is a security issue or the dependency is known to really be bad. In this case, I really hope that's not true; there could be someone still relying on OpenTelemetry v0, and they would need those dependencies.
C: Yeah, we don't think they should use it, but like Damian said, maybe they don't want to upgrade their code and they really like that pre-release version we had. So yeah, I think deprecation is really the only approach we can safely use going forward, but I think we've all decided in the past that it's a good first step, if there's nothing else.
A: Also, just one more point: there's a pattern here. It would be unfortunate if we had to work around it explicitly in the future. What I think happened is that we had a Go module that we abandoned, and some of the package paths within it became subsets of another module that we continue to use.
A: And so I think the failure is that when Gazelle goes to look up these package paths, it's probably looking for the most specific match — the module with the longest path is the first one it's going to guess contains a given package path. I think that's what leads it to go and find these version-0.20 modules and say, "I got it, I found it here" — when, in fact, if it could be told to pretend that module doesn't exist at all —
A: — it would then walk that package path up to one of the modules that we are in fact supporting, and, you know, choose the right module there.
C: It'd be really cool if Gazelle supported understanding deprecation. That's a huge question, yeah.
A: Well, I expect us to get back into trying to use it again, maybe within the next month, so maybe I can take up that investigation. I've kind of stepped away from that work for the last few months, but I'll freshen up and get back to you. I would think, if we deprecate the old modules — it's not a great answer, but if Gazelle doesn't handle deprecated modules, then we can just send the issue back to them.
C: Yeah, I think that's fair, and I think that's a really good way to start a conversation, to see if maybe we should do something else — maybe they have their own data structure they would use. But yeah, Steve, I'd love it if you could investigate that a little more; that'd be awesome.
C: So back to the original question, though: Aaron, are you okay with us just essentially orphaning these other packages? What are your thoughts on that?
B: Yeah, I'm fine with that, honestly. If we need to have kind of a weird one-off version, then we will need it, and that's okay.
C: It shouldn't be too hard to do it for these other 20 packages or something. I mean, it'll take time, but it's not days of time — just a few minutes. Yeah.
C: I'll put that in the issue. Okay, let's see — let's go back to sharing the screen. Okay, next up on the agenda: the OTLP exporter partial success.
C: This got merged — I don't know when, maybe earlier this week — and I wanted to not export these new types that were included for the partial success, so I opened up this PR. I don't know if people have been following, but there's a big conversation that came out of it, and I wanted to follow up on it. I think the starting point for the conversation was understanding how these partial successes are expected to be used.
C: Currently they are being sent to the global OpenTelemetry error handler, and I think, at the end of the day, something Josh pointed out that he wants is to be able to count these and somehow communicate them. That's also my next question: I think people will write these somewhere, whether to logs or to metrics or, I don't know, an SMS — but they want them to go somewhere, right?
C: So that's kind of the key. One of the ways it was designed originally was that it would go to the error handler, and then the error handler you register could parse all of these errors, because they'd be a distinct type and you could use the errors.Is function. So I was thinking through this, and this PR kind of turned into an issue —
C: — about the design of this, I think, about how we could best do it. I think there are differing opinions, and I wanted to discuss it, but I'll pause here, because I know there are other voices in this conversation, and I'd like to hear what other people think about it.
B: I do want to jump in with one thing at the very outset: just remember that this is scoped to the OTLP exporter. This is not something that is necessarily SDK-wide; it's just something that was added to the OTLP protocol.
C: Yeah, I saw that, but I also saw that there are other places that do support partial responses. Here's one that I looked up in the collector: Sumo Logic has an exporter that actually handles partial exports, and they'll respond saying, "Yeah, that was not everything; here are some dropped records that we actually didn't export."
C: So I did think this is more universal than just OTLP, and I think it might be a problem if we try to limit it to OTLP, because we're going to have situations like the current implementation, where the otel error handler right now is looking at all these errors and going, "Oh, I know what this is — this is an OTLP partial export." But then if Sumo Logic came along and said, "No, this is also a partial export" —
C: — then you need to handle two errors. And so I was thinking: well, this seems like a valid thing for any exporter to be doing. Not all exporters are going to be doing it — definitely not all — but I did want to say: maybe this is more universal than just OTLP, and we may want to support that.
A: Gotcha. The pattern is also used throughout the collector, with the partial-error type that can compose or wrap around the pdata representation of the telemetry that failed to send, so that whatever was responsible for trying to send it could try again.
A: I don't know if that's supportable in the OTLP partial success — I think that just indicates that there was a failure, not specifically which telemetry failed. But keeping that in mind where we can might be helpful, because there might be some exporters that could indicate to us specifically what failed.
C: Yeah, that's a good point, and for what it's worth, OTLP is the same: I think that's probably where this came from in concept as well, Anthony. The OTLP response just says "here's the number that failed"; it doesn't actually tell you what failed. Currently, I don't know if that will change.
C: Yeah, and I don't know if that will ever change. Actually, that's a good point, because I think this is almost like an HTTP response for bad data: if you send malformed OTLP or something like that, the endpoint says "I dropped two of these," and I'm guessing it comes back with a message to the effect of "you transformed these wrong" or "you sent these wrong."
C: So I think you're right: this partial response is also saying "we're going to drop this data" — maybe there's something internal, but you essentially cannot retry it for that reason. That being said, I think that's more of a specification question — how we actually want to handle OTLP transport and failures — and I think we do also have a retry policy in the OTLP exporter. But I wanted to talk about how we can handle this, and one of the things I was realizing after having this conversation —
C: — is that currently it's bifurcating the errors being returned from the exporter's ExportSpans call.
C: And I think that's a misstep here. By sending the errors that are the partial response of the export to the — sorry, the global error handler — and then sending other errors back to the span processor that actually started the ExportSpans call, you're changing where the errors go. I think that's a problem, because the global handler doesn't actually have the context of each exporter, and users may want to handle those differently.
C: And so then you come back to this — this is the constant problem with the global error handler: you never have a mapping of where the error came from; it's more of a dump. I think it would be a really great idea if we could return this partial error as a response from ExportSpans; it helps identify the failure seen in that call, which is, I think, something we've seen before. I pointed out a few examples in the standard library.
C: Like io.Writer and io.Reader, or ExecuteTemplate — when these do partial reads or writes or something like that, they'll also tell you, "this is partial; here's the error for what didn't actually happen," and then it's passed upstream, because this is one of the key things for the user.
C: And by returning this error, I think it allows you to shim in a span processor that'll actually export those errors — I gave a proof of concept of how to do that. Where you can actually — well, okay, it didn't actually go above it. It did.
C: You then parse out these errors, and in this case specifically, you can export them as a metric, or log them. This essentially allows third parties to start implementing some sort of proof of concept as we develop this —
C: — what we want these metrics to look like. And if you don't, and you send this back up through to the batch span processor or the simple span processor, they've both already logged it with the otel global error handler, because again, it's one of those big dump situations: they don't have a place to send it, so it's "okay, here's the dump of the error I got; you can have this now." So I don't know — the more I talked about this, the more I was like...
B: So, if you noticed, I already put a couple of comments on there, so I'm not going to rehash that. But I agree: I think your approach is probably the better way to go, and if we have time to do it, I think that's the way to do it properly.
C: I think it does. I was a little concerned — I think I had this in my response — that if we send things to the global error handler right now, and then in the future we don't, we're going to have a problem. But when I saw that the simple span processor and the batch span processor are going to do the same, I didn't think it was as much of an issue, because they will ultimately, I think, end up in the same place.
C: So I think that's fair — which, if I'm hearing you correctly, is: just merge this PR, and then I think we can start iterating, which was kind of the point I came to today as well. Yeah.
D: I think you've convinced me — definitely a better way to go.
D: I mean, you need to make sure that the retry logic being applied happens underneath the check for partial success, because these are defined as things you can't retry. There's some history behind the actual spec and the proto that I don't want to rehash too much, but this error message is, in my opinion, the lowest common denominator of what you can do as a vendor to tell the user "don't retry this; here's some information." And of course, even the sort of placeholder for the rejected-number count was more or less what I argued for.
D: I actually wanted a structured error message as well, which I think would have been for the best, but we didn't get that far. The thing I was confused by at first is that ExportSpans — and forgive me for thinking of metrics, because we do have reasons to reject spans as well here at Lightstep — the ExportSpans interface takes spans and returns an error.
D: You never see that protocol message — the response — and I think that was what was throwing me off. However, if you get an OK status and have a partial success, I see returning that as an error as the right thing to do.
D: One of the cases to think about with spans is: your clock is set to 1972 or something like that, and we just don't support really old spans here at Lightstep. So we're going to tell you, "sorry, that's too old," and there's no other structure to the message other than to say something like "your spans are too old."
D: We also get upset about a missing service name, for what it's worth. So I'm very supportive of your approach, Tyler, and I think also of what Aaron said.
D: Let's maybe merge this as an internal thing, so that users can start seeing error messages, which is kind of the most important thing for now. And then I expect somebody — I'm not sure who — in the next six months will propose an SDK option to support metrics about all the signals, so that every SDK exports six metrics, two per signal: counts of failures and successes, or something like that. And that's TBD anyway. Thank you.
C: Okay, cool, all right. I'll take that as an action item to merge this. It got me really excited, because I realized David had asked a while ago for metrics around this exact thing, and I was like, wait a minute — oh, this is —
C: — this is exactly what we were talking about. So I'm really motivated to get that error returned, because I think a span processor could then start exporting things, and everyone can start playing around with what they want to export as metrics. So, cool. Okay, I will start sharing again.
C: Okay, so next on the agenda — I put this on here just after, unfortunately, finding it, because this is kind of a bug. There's this issue that seems related to the metrics; specifically, it says there are duplicate metrics. It's a little opaque as to what it's saying, but essentially what it's trying to do is record a histogram value that is scoped by an attribute that is a string slice. What they're finding is that the values are not being aggregated as the same metric. It's a little bit opaque based on the description, but I think it boils down to the idea that the slice values in our attribute sets are not comparable. And I tried to test it — assert.Contains is not a great measure of comparability, by the way.
C: But if you essentially look this up — you create a map, and that map is, you know, equivalent to a set. So you take this string-slice attribute and index a map with it, and then you look up one that has the exact same string slice — but obviously this slice here is going to be at a different address.
C: It doesn't find the initial entry, which, as developers of the metrics SDK — that's a problem, I'm sure you realize. In all of the places where we're actually using the attributes to index things and look them back up, if they include slices, they're going to be counted as different metrics on every recording.
C
So
that's
a
problem.
I
put
this
as
a
bug
that
needs
to
be
resolved
in
the
beta
phase,
because
I
don't
think
it's
necessarily
an
alpha
thing.
Given
it
already
exists
in
the
sdk,
but
I
think
it's
a
problem-
and
I
looked
really
quickly
before
this
meeting
into
this,
and
I
think
this
is
going
to
involve
going
into
the
internals
of
how
maps
are
implemented
and
go
because
it
seems
odd
because
you
can
specifically
set
like
you
know.
The
comparability
of
s1
and
s0
in
this
situation
returns.
C: — true if you use the equality operator on them. So there's something unique about the comparability here, and it may have something to do with the version. I kind of wanted to bring it up because maybe somebody knows this already. Okay — Josh has his hand up.
D: So — I think we can fix this. As an author of SDK specs and stuff, I kind of don't care for the slice-valued attributes. It was a thing that was added for trace, and I see why we have them; people use them, and I guess they should expect correctness.
D: Inside of the attribute package there's this magic you may remember: it supports a hard-coded number of fixed-size arrays, and then falls back to reflection if you have too many of them. That code could treat the slice-valued attributes the same way it's doing there — in other words, recursively, at least one level of recursion — to replace a slice-valued attribute with an array-valued attribute.
D: I guess we could imagine other ways around this — like, every time someone sets a slice-valued attribute, you store the value as an array-valued attribute, which has a size and is comparable. I think this might also solve our problem, as long as there are no byte slices underneath it. You'd have to think about byte slices separately, maybe. Anyway, I've said enough; I think we can fix this.
C: Yeah, I think you're right, 100%. I think we can fix this, but I don't know immediately how the comparison works — I see your point, but the problem is, I also think we do create an array here, even if it's a variable-length array using the reflect package, so underneath it's all an array.
C: I think — yeah, I kind of imagined it as — for whether we —
D: — want it. I'd have to go look at the implementation today for slice-valued attributes, but you know how our slices are kept immutable, right? So do we copy the slice? If we're copying the slice, we might as well create the array instead, which can be done in code.
D: Like you see there, you need either reflection or a switch statement over the small value sizes — it's not so bad. That way you're doing it at the time you set the attribute, and the attribute-set building code, the stuff we just looked at, doesn't have to change.
C: Yeah, no, I agree. I don't think the API is going to need to change — obviously it can't — but I don't think we're going to add anything either; I think under the hood we can probably address this. So, okay, cool — I've assigned it to you. If you can take a look at it, that'd be great, and we can move on from there.
C: Okay, next on the agenda: Jamie, just an update on the otel init launcher. I'll go ahead and hand it over to you.
A: Basically, an update to say there's no real specific update right now. With vacations and all that around Labor Day, we were just able to share with the team what we talked about last week, as far as the auto-exporter and auto-resource-detector packages and how valuable they'd be, not only in general but also as it pertains to how this config setup would go. We're a little tight on bandwidth this week.
A: But it's something we're looking at next week, to hopefully have something put together — at least a design idea of what that would look like — for next week's SIG meeting.
C: Awesome, that's really exciting. Yeah, that's great news; I definitely appreciate it.
C: Okay, cool. Next up: Aaron, should we hold off on the scope attributes, with their status in flux? Yeah? Okay, so here's the — yeah, okay.
B: So I linked to the spec issue. I am not following it super closely, but I know we've merged the scope attributes for traces, right?
B: I'm just wondering if we want a contingency plan if that gets pulled or whatnot — if we want to pull that back, yeah, create a PR.
C: This issue — it's identified in the milestone; its title is a little big and could be updated, but yeah. I also wanted to capture this. One of the things, if I remember correctly from the spec SIG meeting this week, was that there was a line of sight on the end of this. I think — oh yeah, the line of sight, I think, was around the API.
C: The summation Carlos gave was that having scope attributes at the implementation level of the specification was something we wanted to do, but the protocol was the breaking change that Bogdan was concerned about, because the attributes are now an identifying characteristic for metrics — or scope metrics — coming in. That was the breaking change: before, they weren't there, and now they are. So I think those were more his concerns.
C: I think the way we've added the attributes here is still backwards-compatible in our implementation: if you don't add them, they will still be considered the same identifier in the scope, because we add them as well. Based on that last issue we looked at, maybe not — but if we get string slices and other slices to equate, I think the comparability of scopes will be the same.
B: Well, the comparability should be the same for an empty scope versus nil on the attribute side.
C: Right, yeah, and I think it is, but the problem is if you add — yeah, you're right.
C: It should still remain the same identity for any sort of metric you're sending, so I don't think that's a breaking change on our part. I do think on the protocol side it is, but I don't have as much insight there, and honestly it wasn't something I fully understood, because I didn't read all of this. But I'm definitely up for waiting until this is resolved, to make sure we are correct on that.
B: I just don't want us to accidentally merge it and then have the specification go somewhere else. I don't think that will happen, but I just want to raise awareness and consideration of that.
C: I think that's fair. What were you expecting, timeline-wise, for this next 1.10 release?
B: Well, aside from the spec issue and the partial success — honestly, I would do it today if we could.
C: Okay, yeah. Then that sounds fair to me: if we just wanted to revert the changes for the partial suc— sorry, for the scope attributes — I think we could do the release and then revert them right back in again, essentially blocking on them for the 1.11 release. That sounds fair to me; I have no problem with that.
D: I guess — I think, even if the spec made an error, producing the data in the protocol isn't really a terrible thing. The way Bogdan wrote that complaint, I think you can see the SDK is doing something right; we just haven't defined what it means. I would be okay with leaving —
D
What you've done, and my expectation, is that the OTLP exporter sees every scope as a separate thing, regardless of what attributes are on it, and, you know, your resource metrics will just contain one ScopeMetrics per meter with distinct attributes. I think that is what was done.
A
The important thing is not to get out ahead of the spec, but I think, as Aaron mentioned, there are good reasons to get a release out on our end soon: we want to start removing Go 1.17 support and the like. So I think I would vote for pulling it out, and being prepared to put it back in right after the release, expecting that the spec is eventually going to get this sorted. But let's not get ourselves out ahead of the spec with new API.
C
I think Josh is right that we're being conservative, but I think I'm okay, and I agree at this point. Okay, so I've got this added to this issue.
B
I will take steps to make a PR to revert that one. I think it's just adding scope attributes in a few different places.
C
One of them was docs, I think, yeah. So I would say just revert all three of them, because then it's a really easy click to revert that one PR.
D
Yeah, that sounds good to me. I meant it when I said I admire your conservative approach; it's the right thing to do.
C
Okay, all right, okay, cool. Then I guess the only other thing, Aaron, blocking the release would be the...
C
Yeah, okay, cool. Okay! Next on the agenda, with 15 minutes left, is the placeholder for the runtime metrics instrumentation discussion. I'll hand it over to you, Josh.
D
Yeah, during our kickoff for Go auto-instrumentation this week, at least one bullet had some questions, or a placeholder, for discussions about getting more automated metrics for stuff that you don't ordinarily auto-instrument. These are really customized libraries for getting, you know, runtime metrics, for example. I actually don't think there's time now, and I'm not even prepared to give much of a conversation about this, but I've filed some issues; they've been filed for a while.
D
I've also done some research and drafted some solutions for some of these issues. The existing contrib repository has two packages, called runtime and host. Both of them have, in my opinion, defects; some of them are major defects, some of them are minor defects.
D
I have questions about how we will migrate, and whether people want to preserve the exact old behavior of those, in my opinion, rather shoddy libraries, or not; but I'm also less conservative than some. In my working on this, I've produced one library, that David's reviewed, that moves us forward on the runtime metrics: the Go team has now produced, published, like, official metrics, and the experiment is to do an automatic translation from the Go runtime metrics into OpenTelemetry. Essentially... I'm close, I'm realizing.
D
I was looking at... you just clicked on an issue. Like, I'm waiting for the one field that's in MemStats that's not in the runtime metrics, and it should be coming in Go 1.20. I'm still calling... I have solved some of these problems in our downstream OTel Launcher, so I've got the code now, and I've got one reason to call ReadMemStats, which is pretty awful, but it's because GCCPUFraction is there.
D
One of my issues, filed a while back, is that GCCPUFraction is useless on its own: you need to multiply it by some things, the number 1000 plus, you know, like the number of max cores or whatever max threads, to get it to be useful. So I have a library doing that as a placeholder, and it looks like it's coming in Go 1.20.
D
So then, if we have an instrumentation library that automatically translates runtime metrics correctly, it'll give us everything we wanted; we can retire some of the old code and have a new runtime instrumentation package. I find that the host package has a few things I probably want to keep: it's got system memory, system CPU, and system network. But it's also, in my opinion, mixing in process metrics, which are going to overlap with runtime metrics, so we should probably retire the process metrics out of the host package.
D
That's like another one of my recommendations. All of this is just chaos, like, changing metrics. I wanted to figure out what people want to do as far as migration and stability as we consider changing some of these things. That's the real question I don't have good answers to: how long do we want to preserve the behavior of stuff that was super pre-1.0 for OpenTelemetry? Those are questions; there are issues where we can discuss those. I don't want to talk any more about that today.
C
Yeah, so first off, thanks for tackling this. I think that, given the pre-1.0 state of those instrumentation libraries, I'm open to not trying to preserve behavior, especially since we're not just removing it, we're just moving it. I do want to make sure that we have a clear design direction on that, though, because it's really easy to take a step backward when you're trying to fix things. So I agree; I think that, you know, if you see something coming down in 1.20...
D
Yeah, I think it's just really not urgent here, especially given that we still have the alpha to get out and stuff, so I just want to make sure people know that that's waiting; I don't want to take cycles away. And for those of you who are interested, there are packages of compatible OTel instrumentation in the latest OTel Launcher for Go; that's my sort of pending draft, and you can import those directly as well.
C
Okay, cool, awesome, thanks. Next up: Aaron, you are up again with performance-testing requirements for PRs.
B
The key thing is, most of those are single-threaded; there is one, the ID one. So where do we want to set, kind of, the bar for this project? I still need to work with them to actually get a benchmark that says: hey, spans, when you do two at a time, like two goroutines creating spans, this does perform better. But...
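The kind of benchmark being described could be sketched with `testing.Benchmark` and `b.RunParallel`, which exercises the contended, multi-goroutine case alongside a serial baseline. `makeSpanID` is a hypothetical stand-in; a real benchmark would call the SDK's span-creation path instead.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"testing"
)

// makeSpanID is a stand-in for the SDK's ID generation; the real
// benchmark would start and end spans through the tracer instead.
func makeSpanID() [8]byte {
	var id [8]byte
	rand.Read(id[:])
	return id
}

func main() {
	// Single-goroutine baseline, the case the existing benchmarks cover.
	serial := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			makeSpanID()
		}
	})

	// Multiple goroutines contending on the same generator, which is
	// the case a pooled implementation is meant to help with.
	parallel := testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				makeSpanID()
			}
		})
	})

	fmt.Println("serial:  ", serial)
	fmt.Println("parallel:", parallel)
}
```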
C
So... I don't think goroutines are... well, I can't remember: GOMAXPROCS is immutable after the runtime has begun? So is there a way that we could just have one implementation when GOMAXPROCS is at one, and then when it's above one, it would use a sync.Pool?
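A minimal sketch of the pooled approach: a `sync.Pool` of `math/rand` sources so concurrent goroutines don't serialize on one mutex-guarded source. `newTraceID`, `sources`, and the time-based seeding are all illustrative assumptions, not the SDK's `IDGenerator`; a real generator would seed from crypto/rand.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// sources holds per-use *rand.Rand instances. Get/Put avoids a single
// shared lock under contention; a real generator would seed each source
// from crypto/rand rather than the wall clock.
var sources = sync.Pool{
	New: func() any {
		return rand.New(rand.NewSource(time.Now().UnixNano()))
	},
}

func newTraceID() [16]byte {
	r := sources.Get().(*rand.Rand)
	defer sources.Put(r)

	var id [16]byte
	binary.LittleEndian.PutUint64(id[:8], r.Uint64())
	binary.LittleEndian.PutUint64(id[8:], r.Uint64())
	return id
}

func main() {
	// Hammer the pool from several goroutines to show the contended path.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				newTraceID()
			}
		}()
	}
	wg.Wait()
	fmt.Printf("%x\n", newTraceID())
}
```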
B
I would be okay with that. The thing about it is, it's less about GOMAXPROCS and goroutines; like, if you only have one...
B
Yeah, I see. Because I don't think any of those... maybe some of them do, but most of them do not have any kind of goroutines in them, whereas the test that was generated here creates a lot of goroutines, but it's very tightly scoped around just the ID generation, not span generation.
B
Where do we kind of take that, like, sometimes...
C
My initial thought is that if you're running everything in a single goroutine, performance is probably not the most important thing to you; if you're running multiple goroutines, you're trying to utilize concurrent programming.
C
I see wanting to optimize both, and so I would not want to just throw away the single-goroutine performance only for the sake of the benefit of the multi, if you can achieve both. I think that's kind of where I would stand on that.
A
Sorry, I would just add that if things had come up the other way around, if the first implementation had been the pool and someone had suggested that we use the current implementation, we would probably not even be talking about it, because we would just be saying, I mean, this is for low-traffic applications. So it's not good... it's not better.
A
Yeah, I think the issue with the lock comes when you've got contention on it, right, when you've got multiple things trying to access it. When you've got a single thing trying to access it, it obviously doesn't have much overhead, and I think a pool is going to be equivalent as well, right? It's going to be very easy to get something out of that pool if you've got no contention for it.
B
Yes, but there is additional overhead in a pool versus getting and releasing a lock.
C
Like, if I remember correctly from the docs, it's amortized if you have a large number of concurrent accesses to it, and I think that's, again, yeah, that's where the benefit comes in; it's like this concurrency pattern.
C
I'd have to read into this, I'll be honest. I saw you responding and I saw a conversation active, so I didn't participate as much, but I think, yeah, I think I've actually accurately said what I would try to go for here.
D
I've definitely seen C++ SDKs where people were extremely rigid about this kind of thing, and you could or could not have a thread created, and their requirement was never to create threads, for example. This isn't exactly the same, but you could imagine, if the auto-resource and the auto-propagator work is carried to its full extent, you could have, like, an implementation choice.
A
I think we should question which one we want to have as the default, but providing both could be a way forward out of this.
C
Yeah, the SDK accepts a generator.
C
Yeah, that's a good point, Anthony. I do realize we have, like, a minute and a half left on the timeline; want to be respectful of people's time. Josh, I see you have an error message you've seen in the wild. I think I've seen this before. With our limited time, can you open an issue with this?
D
I think this is an otel-proto issue, basically. I don't think SDKs should have to handle this problem; I want us to say, let's pass this through and let the receiver deal with it. What do you guys think about that?
C
I think you are correct, because...
C
Yeah, I agree, but yeah, can you make sure we capture this as an issue, though? I will. Thank you.
C
I think there is as well, but... string errors.
C
Okay, I think with that we'll probably have to end it here; we're at time. Thanks, everyone, for joining; I definitely appreciate all the help. If you have time, please get after those reviews. Otherwise, we will see you all next week, same place, same time. Bye. Thank you.