From YouTube: 2022-07-19 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
C
Good to hear typing. I went to the spec SIG. It was great, very cool. I followed maybe 20% of what was going on, but there it is. You just keep at it. It's like when I started coming to this SIG: I had no idea what was going on for, like, months. Truly. And look, it's gone now. Look at me, I have no idea what's going on.
D
Whereabouts? Amsterdam and Belgium. Oh, nice.
B
Cool. I've always wondered: how much Afrikaans do you speak?
B
Because, what is it, in Belgium and the Netherlands the language is shockingly similar, or maybe not shockingly, but, you know, it's like Portuguese to Spanish, sort of.
D
So we actually got Tomorrowland tickets four years ago, and now it's finally happening after COVID, so we're going to Boom for that.
D
I'm just more excited to go to Amsterdam and sit in the tulip forests and all of that sort of stuff.
B
I've got to remember a couple of good food places I went to there, but it's been so long I can't. There are also some really good co-working cafes, if you're ever a masochist and want to do open source stuff while you're there.
C
I do not. I think before you were on the call I said I maybe got 20% of that. There's one part that I found relatable and interesting, but I leave it to the professionals.
E
I would hardly classify myself as that, so please feel free to chime in if you have any additional commentary on any of these things in this specific recap. So, the first thing, and we've been talking about this a little bit: there's this partial-success PR for OTLP export responses. There was a PR in the spec repo and a PR in the proto repo, and I feel like the PR in the proto repo was the only one that I really, fully understood.
E
But we ended up in this situation where the proto PR was closed, yet the spec PR had some approvals on it, so we were trying to figure out what to do. It was a long bikeshed. There are basically two fields on the new proto message: an error message, and then a count of the things that were successful.
E
I believe this all centers around this optional field, or the whole optional argument. We were discussing optional in general: whether you could make the full message optional, or make any of the parts optional, and what that would actually mean for the response. But for some reason, I believe the OTel Collector has issues with using the optional keyword.
E
So I think where this landed was: right now, with accepted data points being a number, zero is a very meaningful number. If it were not meaningful, maybe you could just have these things default to an empty string, and zero would be equivalent to unset.
E
So I think they were going to possibly just open a PR that changed the semantic from accepted to rejected things, so that if you sent back a zero, that would mean everything was successful, or if you just left the field off, that would also mean everything was successful.
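The accepted-versus-rejected trade-off above can be sketched in a few lines. This is a hedged illustration only: the field names `rejected_data_points` and `error_message` are taken from the discussion, not from the final proto, and the struct stands in for a decoded response message.

```ruby
# Illustrative only: field names are assumptions based on the discussion,
# not the final OTLP proto definition.
PartialSuccess = Struct.new(:rejected_data_points, :error_message, keyword_init: true)

def export_fully_succeeded?(partial_success)
  # Under the "rejected" semantic, an absent message and a zero count
  # agree: both mean every data point was accepted. That agreement is
  # what makes "rejected" compose better than "accepted", where zero is
  # itself a meaningful value.
  partial_success.nil? || partial_success.rejected_data_points.to_i.zero?
end
```

The key property is that the zero value and the unset value mean the same thing, which sidesteps the optional-keyword debate entirely.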
C
But the takeaway, I think, was that all parties involved were trying to acknowledge that it's a bikeshed, and they just want to get something merged. So hopefully that actually happens. That's what I got.
E
That is not about the overall semantic conventions. The question was really "should we do this," and the person asking the question, Josh, was saying that we will tackle the "how" later, because that's going to be a big problem; you want to make sure first that this would be a reasonable thing. Sam and others were all thumbs up on this, and I think Sam mentioned this would be useful for tracing.
E
And so it does look like you will probably get language- and library-specific semantic conventions. I think there were some issues.
E
Tigran was wondering how schema support would work, but I think again there should be alignment on this being a useful thing, so I think we will get it. One thing that was brought up is that the Collector has useful metadata.yaml files for receivers that produce metrics. They describe the metrics being produced, and in fact they are used to generate some of the metric-producing code and also to generate documentation files, so they are super useful for metrics. It would be useful if we could come up with something similar for other conventions; I think there is some of this on the main spec repo.
C
I told Josh that he could ping me. He's going to try to move this forward, and I told him he can ping me and I can try to get input from the Ruby SIG for the tracing portion of this. So, heads up.
E
We have some kind of library-specific attributes right now, but they're not, technically, and then I know we've had some that we've wanted to add over time. I think background jobs come to mind as one thing that does not really have good support at the spec level.

E
I tried to look for an example that I could comprehend, but this thing's too long, so go ahead and take a look at it yourselves. I think the call was that this needs to be approved before the next spec release can go out, because there have been some changes on metrics. Specifically, there have been changes to some network metrics and their attributes.
E
You record different values based on sending or receiving, and that was a direction attribute. But it seems this does not apply universally to all network protocols, so they've made a decision to stop splitting metrics by a direction attribute and to instead separate these out as separate metric names. So instead of having network.io with a direction attribute of send or receive, you'll have a network.io.send metric and a network.io.receive metric, each carrying just the value.
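As a rough sketch of that rename from an instrumentation author's point of view, here is the after shape. The metric names follow the discussion and may not match the final semantic conventions.

```ruby
# Two instruments, one per direction, instead of one instrument split by
# a "direction" attribute. Names are illustrative, not normative.
class NetworkIoCounters
  def initialize
    @values = Hash.new(0)
  end

  def record_send(bytes)
    @values["network.io.send"] += bytes
  end

  def record_receive(bytes)
    @values["network.io.receive"] += bytes
  end

  def value(name)
    @values[name]
  end
end
```

The upside discussed above is that a protocol that only ever sends simply never creates the receive instrument, instead of carrying a direction attribute that does not apply.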
E
All right, wrapping things up: there were some discussions about outstanding items for OTLP 1.0; we were just going over the list. I think most of these things are still on the table. The one thing that's probably not on the table is short keys for JSON encoding.
E
It's something that was being introduced for RUM, for browser things. It's a really primitive way to compress the payload by using very short JSON keys, but of course the values stay large. It seems like it's going to be tabled for now; there appear to be other viable options for compressing things from the browser.
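The "isn't that what gzip is for" argument that comes up later can be checked roughly with the standard library. The payload below is a made-up repetitive JSON array, not a real OTLP payload, so the exact ratios will vary.

```ruby
require "json"
require "zlib"

# Made-up repetitive payload: long keys repeated many times, which is
# exactly the case where a general-purpose compressor shines.
long_keys = JSON.generate(
  Array.new(100) { |i| { "startTimeUnixNano" => i, "droppedAttributesCount" => 0 } }
)

compressed = Zlib::Deflate.deflate(long_keys)
```

On repetitive payloads like this, the deflated bytes come out far smaller than the raw JSON, which is the usual counter-argument to hand-shortened keys; the browser caveat discussed below is that sending-side JavaScript may not have a compressor available at all.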
E
The last part I will wrap up quickly by saying there are some edge cases, I think, around high-resolution histograms, mainly if you end up getting a spike in traffic. I think you can get into a situation where the number of buckets grows to a very large number, and then if the spike passes and traffic returns to some normal range, you end up with this kind of permanently hosed histogram.
E
Another flawless spec SIG recap. Any questions or comments beyond his flawlessness?
C
Who was saying that gzip is slow, that encoding things with gzip has a performance overhead? This is way out of context; I don't even remember at what point you made this comment, but I was listening in on the spec SIG and he said, "yeah, you know, gzip ended up being slow and we're going to do XYZ." That took me by surprise, but I guess you do have to encode stuff. I think it was in the context of using binary protocol buffers instead of gzipping stuff.
F
My thought was, well, gzip will do that for you programmatically, in a way that we don't have to. I could imagine a performance penalty in the browser, but it also seems like the type of thing browsers would optimize the heck out of, since they do compression all over the place with responses and requests. I'm interested to see where that goes, but it's not really important to us at the moment; we don't do the JSON encoding at all.
E
So yeah, just to add a little bit to that: that was my first thought when the short keys came up, and one of the first comments on that issue was, "isn't that what gzip is for?" My understanding, and I do not claim to be a browser expert, so anybody interject if any of this is wrong, is that browsers...
E
They
do
a
lot
of
com
pressing
on
the
on
the
requests
and
the
responses,
but
when
the
browser
is
sending
it
does
not
have
access
to
these
compression
apis
at
all.
So
like
browser,
javascript
cannot
actually
gzip
unless
you
bring
your
own
library
for
it,
and
therein
lies
the
problem.
It's
like
the
browsers,
have
have
these
capabilities
they're,
just
not
exposed
to,
via
any
apis,
to
javascript
running
in
the
browser
to
compress
things
that
it's
sending
out.
E
I tried to avoid browsers for most of my early career because they were totally unusable. In the past few years I've taken a look at them and thought, oh, these are actually somewhat usable; it seems like they finally figured out this box model thing, at least, which took a long time. But apparently they're still unusable, so stay away, stay away from the browser.
F
You don't have to tell me twice. So, since this is a good segue point, and since apparently I have the next couple of agenda items, I'm going to hijack it and add yet another one: have we ever heard any desire for JSON encoding support in Ruby? We don't implement it. We could, but has anyone ever asked? Just a question.
E
I haven't heard anybody ask for it in Ruby specifically. I have seen a general comment in the spec SIG around not wanting to pull in a protobuf dependency, and I think that person was not a Rubyist; they weren't even in JavaScript, they were in some other ecosystem.
E
They felt protobuf was a heavy dependency, for one. And for another, in a complex application it is not uncommon to end up with an unresolvable protobuf dependency because of everything else you depend on; you can end up in a situation where no single protobuf library version satisfies the requirements of the whole application. So for those reasons there is sometimes a desire to have a JSON transport.
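The appeal of a JSON transport is that it needs only the standard library. The payload shape below is illustrative, loosely modeled on OTLP/JSON's camel-cased field names, and is not the exact official encoding.

```ruby
require "json"

# Hypothetical span hash; field names are illustrative, not normative.
span = {
  "name" => "GET /users",
  "startTimeUnixNano" => "1650000000000000000",
  "attributes" => [
    { "key" => "http.method", "value" => { "stringValue" => "GET" } }
  ]
}

# No protobuf gem required: the whole payload round-trips through stdlib JSON,
# which is exactly what avoids the dependency-resolution problem above.
payload = JSON.generate("resourceSpans" => [{ "scopeSpans" => [{ "spans" => [span] }] }])
```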
F
That makes sense. I was thinking about it in the context of it being a way to let the OTLP exporter work on JRuby and those platforms, but on the other hand there are also Java libraries that can do protobuf stuff really easily. Okay, sorry, that was out of the blue. The first actual question I had:
F
I was curious what the status of the OTLP exporter split was, because now we have the otlp exporter and we also have the common, the http, and the grpc stub implementations. Robert, I think this is a good question for you: do we know how we want to collapse or move those? What's going on with that?
A
Lazy answer: leave it with me. But yes, we do want to collapse it and we do want to move it. The two aren't at parity; some of the fixes have only gone into the old exporter. I've just been sidetracked with metrics work, so I will do it. I don't know if it's blocking anyone right now; I think it was one of the pieces we wanted to do to hit 1.0.
A
Is there, say, some outside pressure that I'm missing for doing this?
F
Not that I know of. I just know there's already been drift between them, and I was thinking that if we finish it sooner rather than later, we'll stop having that drift problem. That was my main thought, but it's nothing urgent, and nobody outside of us is asking for it. I was just thinking it'd be nice if we didn't have that problem again.
A
Yeah, I'd say it's already drifted, so any incremental drift just needs to be addressed, whether it's one or two or ten things that have differed between the two. At this point they have diverged, so it's going to require that kind of consolidation rework regardless. So for me, it's not a great answer; it just doesn't feel overly pressing at the moment.
A
I still have every intention of finishing it, because the metrics exporter is going to live in that new package. So for me it's going to get done regardless; it's just a matter of when, and that's more up in the air right now.
F
That's fair, I was just curious; I hadn't heard much about it. So, the next thing on there, and I guess these next two things are kind of related: since we split the contrib repo, I feel like there's been more outside contribution, or more interest, recently. It might just be part of the normal cycle; honestly, it may not have anything to do with the split. But we've had a couple of release requests come through. People have been asking us, "hey, when is the next release?" And the answer is...
F
...whenever somebody complains loudly enough, right? But I was wondering: do we think we should have a more defined release cadence, and/or a more formal roadmap? At this point I feel like we're a legit implementation of OpenTelemetry, and I was wondering if we should do some of those things. Oh, did we discuss this last week? Oh crap, I wasn't here, I'm sorry; I totally missed it.
A
I was just going to say: with the CI rework that you did, Andrew, deploying is a lot easier and a lot quicker, and there's a lot less breakage. Splitting the SDK and the contrib has helped a bunch as well.
A
We also have more maintainers on the contrib repo, so there are fewer bottlenecks, namely me. I would encourage people to feel empowered to release what they merge: if a bug fix comes in, it's a lot easier than it was before to just release that one gem and do an immediate release.
A
I would be in favor of an approach like that. I don't know if people have differing opinions, or want to let changes build up on a weekly cadence, but if you had a feature or a bug fix, the release process is pretty simple now. You fix something on Rack, you can release Rack, and it would be cool to see that become the norm, because now it's not like...
A
..."well, you've got to wait till Friday because that's when we do it." It can just be, "yep, it's ready again." It's not that much added overhead, and for maintainers it can be as simple as "I merge, I release." I don't know what people think about that, but a long time ago, like a year and a half ago, that's where I hoped we'd get to: you merge as you go and it releases as you go.
F
For contrib especially, I think it makes sense. Sorry, Eric, it sounded like you were going to...
B
Maybe we should just make it a point of the meeting every Tuesday: cool, anything that ought to get released? Time-box it to five minutes. I thought that was a decent approach. But yeah, I think the update is that we did release stuff. There was actually a bunch of stuff broken, because when we migrated to contrib, and by "we" I mean Ariel and Sam and not me, there were some, you know, hard-coded bits, or some dashes that should have been underscores, that type of thing.
B
So they worked through that and were able to get a release out, and now, and Sam, correct me if I'm wrong, I think we're comfortable being able to pull the trigger whenever we want on a release.
C
Yeah, I would say it might break one or two more times, but we worked through a lot of the breaky stuff last week. Unfortunately, Ariel's gone, and he's the chief fixer of the breaky stuff, but I can try a release. I think the AWS SDK release is super minor; it's just suppressing a nil warning. That was the one Amir asked for, before realizing it wouldn't be supported anyway on Ruby 2.5.
C
But it would be a great candidate, so I can try to release that and see if there's any breaky stuff.
B
Yeah, I think I'm in favor of Robert's approach of "let's just release as we go," or as it's deemed appropriate. I don't think we need to do it for one-line bug fixes that don't change anything, but if there's some random change that doesn't impact anything, release as people feel comfortable. And then also, I think it's worth just taking five minutes in this meeting and saying, cool...
B
...let's just do a quick confidence check: does anything need to get released? Also, if people ask for a release and we feel it's unnecessary or premature, we can point them here and say, "come to the meeting and let's talk about it." But I don't know; I'm totally fine with just saying let's release as we go.
A
I was just going to add... oh, I'm sorry, go ahead, Robert. I was just going to say approvers can also kick off releases too. With the autonomy of all the approvers and maintainers, there are more people who can kick a release off, and once a release is kicked off, a maintainer can just merge it. There are just more people who can get the ball rolling, and I think big-bang releases become more cumbersome. With these small ones, anybody can kick off a release.
E
Cool, yeah, I was just going to summarize everything that has been said so far. It seems like some bugs are being worked out of the release process to actually get us to a release-as-you-go situation. Right now, if you try to release as you go, you might end up spending your afternoon figuring out why the release is not going out. So we should get that running like a well-oiled machine, and maybe we can incrementally build up to release-as-you-go.
E
Maybe we can do this check-in on Tuesdays for a couple of weeks and see if that's a viable option, and then aim for a Tuesday release. Once that's running smoothly, maybe we can consider release-as-you-go if Tuesday releases are not cutting it. I don't know, I'm just throwing it out there.
F
I guess I don't have a strong feeling either way between having the Tuesday five-minute check during the meeting, to see if we're ready to release in contrib, and just going straight to release-as-you-go. Whatever the group feels.
A
That check could happen on the Tuesdays, the same as the backstop. But instrumentation-all doesn't really fit into release-as-you-go for me. In my mind, that's because you release, say, Rack, and then you also have to do the whole instrumentation-all song and dance, whereas with "every Tuesday we could release instrumentation-all" it's kind of a roll-up of changes. Now it's just this one thing, so you check: have we released everything? Yes or no? Has anything been released?
A
Again, I don't really have strong feelings, but release-as-you-go for instrumentation-all seems a little more awkward than for the individual gems. I don't know if people have opposing views on that.
C
Yeah, that makes sense to me, not having to release instrumentation-all each time. I mean, we could theoretically get our tooling good enough that you could release a new version and release instrumentation-all with the click of a button. Hey pup, hey doggo!
A
Who is this, Robert? This is Misha, and she's like 100 years old. She has two teeth and her tongue is usually hanging out of her mouth. She can't hear very well, but she can hear the sound of the fridge.
F
All right, so yeah, I think that's a decent plan. I will reserve my judgment on the Tuesday releases of instrumentation-all; I worry there may be confusion for the end users, but we'll see if they actually get confused rather than me vaguely asserting that they might. I think that's a good plan. I also had something on there about whether we need a more defined roadmap. That's a very vague question; I wanted to put feelers out. I don't know if we have enough that needs to be on a roadmap. Maybe we don't. I wanted to see if anyone had any thoughts about that at all.
B
I mean, I don't know, isn't the roadmap "please refer to the spec"? What is our roadmap?
B
Maybe there are some instrumentations we're planning to get to but haven't yet. What do you think people would benefit from seeing on there right now, things that currently exist only implicitly, or that we know from attending these meetings but people outside might not?
F
Sorry. I think the main one I can think of immediately is signal support. We've gotten questions about when our metrics are going to be ready, whether we support logs, and if not, why and when; that came from the demo work. Things like that might be on a roadmap, maybe, but that's actually kind of the only thing I can think of, which is why I wanted to ask if anyone else had thoughts about it.
B
Good points. I'm just not the one implementing the log signal right now, so I don't want to put a gun to someone's head and say "this is when it's due," or whatever.
F
Okay, that's fair. This otel-js stuff that Matt's browsing around looks interesting. What do they do?
E
They actually seem to have some project boards. I think they were far enough along with some of their implementations that they were able to start breaking things out a little more and having projects defined around them, which made it a lot easier to track the work and to put stuff up for grabs for people to pick up. So yeah, if we could magically have a board like this that was actually managed, I think it would be a no-brainer.
E
We would want it, but there's certainly some overhead in putting something like this together. If we wanted to try something along these lines, and if it was unanimous and not going to be a pain for folks, we could consider going down this path.
F
I guess if we wanted it, I would volunteer to do it. If we feel we need it, I will throw myself onto that fire and do project-managing things with boards and cards and issues.
F
If
we
don't
have
enough
to
fill
said
board
with
useful
things
that
people
would
like,
then
I
don't
think
it's
worth
it.
I
think
my
impression
from
just
this
chat
is
like
there's
just
not
enough
to
add,
which
is
fine.
I
didn't
really
know
if
there
was.
I
missed
a
couple,
specs
things.
I
don't
really
know
what
was
planned.
So
that's.
B
I don't mean to nag about it or whatever; I think it's fine. I just don't want to over-promise and under-deliver. But I think the way you phrased some of the ways to communicate this stuff is probably appropriate and gives people...
D
That's all good. I was just going to say, as a point of reference: when you're speaking to the community, or when people ask questions, having that link is a bit different from pointing them at an issues board, which might be very saturated, or at a couple of open draft PRs, which might not be where you want them to land. So the project board could be a nice overview, and then you just keep educating people that it's there if they want visibility.
F
I might take a shot at it for the log signal as well, since I think I'm the only person who has cared about that in the past. And then, if it feels useful, we can use it; if it doesn't... I do like the idea that it's easier to point people at a project board than to just hand them the link to our issue tracker.
F
Cool, all right. Sam, I think you were next on the agenda.

C
Oh boy. So...
C
Yeah, I feel like my memory no longer functions, so if I've brought this up in recent history, just yell at me. (I refuse to get more sleep; it is physically impossible.) We keep having these issues where someone says, "hey, this thing doesn't work on pg 0.2," and you're like, I don't know, the gem is on version 5 or whatever; those aren't real numbers. And I kind of, I don't really...
C
It comes up a lot because we're doing so much monkey patching. The compatibility between instrumented gem versions is not readily codifiable, or at least not in a satisfying way. You can't really say "we'll support the last two major releases of whatever," because they might do some internal refactoring while maintaining their external-facing APIs, and we're still hosed.
C
I forget what the term would be, but, you know, conditionals in the code: if the pg version is 0.2, define method x; if the pg version is greater than 0.2, define method y. That seems terrible to me.
C
However, in the interest of instrumenting the code, I don't know, maybe it's worth it. And maybe I'm just not being imaginative enough about how that code would look; maybe there's a way to make it clean if we want to support multiple interfaces from the same instrumentation. Anyway, that's a lot of words and I don't know how coherent it was, but I wanted to have a synchronous discussion because I figured it might be helpful.
A
Can you all hear me? Yeah? Okay. It's interesting, because I think it depends on how the thing we're instrumenting behaves, so I don't think we should try to narrow in on a one-size-fits-all answer. I'll just give some random examples.
A
On the conditionals in code for checking versions: I think in those cases it kind of makes sense. They have supported versions; we should support their supported versions, and when they stop supporting a version, we stop supporting it too. I think that falls into the realm of being reasonable.
A
So that's one case, and potentially the more complex one from my perspective. Now take another library, let's say it's called GraphQL, and they go from version one to two to three to four, and they say, "no, we're not going to re-release 1.0.0 and put a hotfix there; if you want a bug fix, you're going to use 4.1."
A
But if you want to use four, you're going to use the latest version of our instrumentation. It should kind of just increment and grow and evolve alongside the library; I don't think it's reasonable to try to support every historical version in the latest instrumentation.
A
You can go back and use an older version of the instrumentation, and that will support your older version of GraphQL. Where that becomes tricky is if our instrumentation has another dependency: well, you want to use the latest SDK, and for whatever reason we have this coupling that says you then have to use the latest GraphQL. So now, all of a sudden...
A
...if you want to update this, you have to update that, and you get that kind of spaghetti mess, and that's where you're hurting people and hurting usability. So I don't actually have a clear answer there; I'm not going to pretend I do, because I don't think there is one. But in this ideal, sunshine-and-rainbows world, it would be nice if we could just follow along with supported versions, and if our latest supported version happens to keep supporting an older library version...
A
...great, that's absolutely great. But it's like Sam says with the conditionals: there's a certain point where it becomes unwieldy, the quality of the code and of the instrumentation suffers, the reliability suffers, and it just becomes heavy. I'll stop now so Andrew can jump in.
F
I was going to say, yeah, your point about the cross-dependencies making it difficult to just go back and use an older instrumentation version is real. We do have inter-repo dependencies like that: the common gem and the registry are the two that come to mind, and we have forced upgrades for those, for now. I don't think it will be incredibly common, though. The registry gem is probably not going to evolve very often, and the common helpers gem, stuff like that...
F
...is probably just not going to change often enough to be a huge obstacle to that kind of policy. But it does happen sometimes, so it's worth noting. With regard to conditionally supporting multiple versions in the code, switching on the Ruby version or the gem version or whatever to determine the interface, that can work in some instances.
F
If I remember correctly, the Active Job instrumentation had a conditional in it where it would check whether a certain bit was present on a class, and if it was, it would enhance things more, because that was a marker of going from Rails five to six or something. That type of thing can work in limited quantities, and it was a nice thing that we did.
F
It wasn't burdensome in the code. But, for instance, doing the same sort of thing for the Postgres case that came up would be kind of complicated. Again, not horrible, it could be done, but it is more complicated than the Active Job equivalent, where we check for one thing and, if it's there, do the additional work.
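The Active Job style of check described above is feature detection rather than version comparison. A minimal sketch, with hypothetical class and method names standing in for the real gem internals:

```ruby
# Hypothetical classes standing in for two releases of an instrumented gem.
class LegacyQueueAdapter; end

class ModernQueueAdapter
  # Marker the instrumentation probes for, standing in for the "certain
  # bit present on a class" described above.
  def self.supports_enhanced_hooks?
    true
  end
end

def install_enhanced_hooks?(adapter)
  # Probe for the capability instead of parsing version strings: if the
  # marker is present, install the richer hooks, otherwise fall back.
  adapter.respond_to?(:supports_enhanced_hooks?) && adapter.supports_enhanced_hooks?
end
```

The appeal is that the check keys off the actual capability, so it stays correct even when a gem backports the feature or renumbers its releases.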
F
I think my overall feeling on this issue is that it is, and should always be, different for each instrumentation: each one should define a policy. We can say that by default we start with, say, the last two versions, if we don't have anything better to go from, but all of the Rails-specific stuff should probably follow the Rails versioning, in my mind. Different answers for different gems. The thing that kicked this all off, the Postgres one, was less of a...
A
And I think it's just a really difficult problem. I think we need to hold ourselves accountable to figure this out before we 1.0 any instrumentation, because obviously there are going to be stability guarantees that go with that. To the point about cross-dependencies: once upon a time, the goal was to depend only on the API. The API does not churn very much, so you don't get those cross-dependencies. But somewhere along the line...
A
...we added common and stuff like that. If we determine that common is potentially a blocker for doing the right thing, I say we get rid of it. Screw it. The stuff in common is one-, two-, or three-line methods. Duplication sucks and makes everyone's skin crawl or whatever, but if we can make supporting instrumentation a better experience by duplicating a bunch of crappy little methods, so be it.
A
I think it was the same with the registry: the way it's set up, prior to the split, there wasn't churn there. If we can continue not churning there indefinitely, or if churn only comes at a major point release, then it's like the rules change on a major release.
A
Right, but again — saying we have to do things without actually providing actionable items is kind of shitty, but we have to figure this out before 1.0.
A
Right, and this isn't even taking into account semantic conventions and things like that. But again, yeah, I'll pause there — jump in.
B
My opinion is we shouldn't 1.0 instrumentation unless — my understanding is sort of: one, we're not writing most of these, we're just accepting the maintenance responsibility for them, and so I'm super apprehensive to take on any sort of policy for instrumentation handed to us by somebody — like racecar, the racecar gem, for example. I've never used racecar. I will never use the racecar gem.
B
You
know,
maybe
I
maybe
we
used
internal
and
chef
but,
like
you
know
like
I'm
like,
I
don't
want
to
garrett,
not
just
like
take
on
maintenance
responsibilities
for
a
couple
versions,
but
like
three
years
worth
of
this
random,
you
know
ruby
kafka,
gem
and
it
seems
disingenuous,
like
it
actually
seems
misleading
to
like
offer
that,
because
I
have
no
intention
of
ever
like
leveling
up
in
the
race
car
ecosystem.
To
like
kick
enough.
The
other
thing
is
that
my
understanding
of
open
telemetry
is
that
this
should
be.
B
Instrumentation
should
eventually
be
first
party
instrumentation
and
that's
sort
of
like
what
for
that's.
What
1.0
is
to
me
we're
always
otherwise
we're
always
sort
of
like
living
on
a
hope
and
a
prayer
that,
like
people,
aren't
gonna
break
like
break
public
apis
on
a
non-major
release
or
we've
patched
a
private
api
which
we
try.
I
know
we
try
like
very
hard
to
do
but
like
not
to
do
and
yeah
and
there's
something
to
be
said
for,
like
maybe
we
maybe
and
also
like
the
point
of
a
1.0.
B
In
my
mind,
we
should
only
be
pushing
for
that
if
it's
like
hurting
adoption,
which
I
haven't
seen
any
evidence
that
people
are
really
looking
for
1.0
on
their
on
their
instrument
on
some
random
instrumentation
gem,
so
yeah,
I
you
know,
I'm
hesitant
to
I.
Basically
my
response
to
this
issue
is
codified.
Policy
is
a
no.
I
don't
ever
want
to
go
about.
I
think
we
should
document
what
the
existing.
I
think
we
should
write
a
code.
B
Yet
I
see
robert
write
a
little
script
that
gender
that
grabs
all
the
compatible
blocks
and
makes
a
markdown
table
that
says
like
here's,
what
our
current
compatibility
is
but
like
I
don't
necessarily
want
to
enforce
a
one-size-fits-all
policy,
and
then
I'd
want
to
avoid
one
point.
I
like
the
plague
unless,
like
like
sure,
I
think
maybe
rails
like
we
may
or
may
not
know
some
of
the
folks
on
the
rails
team
and
like
so
maybe
that's
like
an
exception,
but
like
really
avoid
1.0.
Unless
it's
really
a
deal
breaker
for
people.
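[Editor's note] The script B describes could be sketched roughly like this in Ruby. The gem names and version ranges below are made-up sample data; a real version would scrape them from each instrumentation's gemspec or compatibility block rather than hard-coding them:

```ruby
# Sample data standing in for whatever the real script would scrape
# from each instrumentation's gemspec / compatibility block.
CONSTRAINTS = {
  "opentelemetry-instrumentation-rails"   => ">= 5.2, < 7.1",
  "opentelemetry-instrumentation-pg"      => ">= 1.1",
  "opentelemetry-instrumentation-racecar" => ">= 2.0",
}.freeze

# Render the constraints as a markdown table, sorted by gem name.
def compatibility_table(constraints)
  rows = constraints.sort.map { |gem, range| "| #{gem} | `#{range}` |" }
  ["| Instrumentation | Supported versions |", "| --- | --- |", *rows].join("\n")
end

puts compatibility_table(CONSTRAINTS)
```

Regenerating the table in CI and committing the markdown would keep the documented compatibility honest without anyone enforcing a policy by hand.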
A
I agree with what you said; there are exceptions to avoiding 1.0. For me, the exceptions become things like HTTP libraries: when the semantic conventions hit stability there, I'd like to 1.0 them, purely because those are some pretty important signals in the telemetry ecosystem, and having the semantic conventions — having the stability guarantees around the signals and the attributes — is, I think, a really good selling feature for adoption.
A
I'm sure vendors like it — not that I really care to cater to vendors. Sorry, vendors. It just seems like a good...
A
It
seems
like
a
a
good
thing
to
the
health
of
the
ecosystem
and
in
the
same
vein,
like,
I
think,
like
databases
like
our
database
instrumentation,
so
redis
postgres,
my
sequel,
I
think
those
become
really
important
and
then,
ironically,
eventually
like
your
producer,
consumers
and
your
kafkas
and
your
evidence,
but
those
like
starting
with,
like
the
ones
that
are
really
hardened
and
predictable
and
like
establishing
the
spec
like
http,
is
a
really
good
first
candidate.
I
think
for
like
pushing
towards
that
and
then
we
get
to
figure
it
out
around
gems.
A
That
are
I,
in
my
experience,
don't
see
as
much
like
wild
churn
right,
like
graphql
would
be
at
the
back
of
the
list
for
me
because
it's
like
it's.
B
Crap
you
all
yeah.
No,
I
think
that's
a
good
perspective.
I
don't
want
to
be
like
a
total
like
I
never
I'm
not
dying
on
a
hill
like
never
1.,
especially
it's
like
you
know
the
http,
ruby
library
itself,
like
surely
there's
some
stability
we
can
expect
from
the
core.
You
know
like
from
the
actual
so
yeah.
B
I
think
that's
a
good
point,
and
it
is
this
way
I
do
I'm
worried
like
http.rb,
like
I
don't
know
what
this
guy,
who
maintains
it's
on
a
role
in
his
next
in
his
next
minor
release
and
whether
he's
done
a
break
something
on
our
instrumentation
but
yeah.
I
think,
I
think,
being
able
to
let
people
feel
comfortable
with
symantec,
like
the
basically
be
able
to
do
whatever
span
metrics
generation,
based
on
some
assumptions
that
their
instrumentation
is
giving
them
the
tags.
The
attributes
they
want
is
totally
reasonable,
so
so
yeah,
it's
a
good
point.
A
Yeah, I think the tl;dr is: if the instrumented gem has a support policy, we'll mirror it — so, like, Rails — otherwise we'll take it on a case-by-case basis. So if the author of some gem says, we don't support this version anymore, it's unsupported — there's no expectation for us to support it — and it's best effort: if it is covered...
A
It's
covered
otherwise,
like
check
the
appraisals
like,
I
think,
one
of
the
things
that
we
should
make
sure
that
we're
being
good
about
and
predictable
for,
like
consumers
of
these
libraries,
say
like
if
the
appraisals
like,
if
we
have
an
appraisal
for
that
version
of
the
gem,
you
should
assume
with
some
confidence
that
it's
supported,
but
that
doesn't.
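[Editor's note] The appraisals A mentions come from the `appraisal` gem, which pins an instrumentation's test suite against specific versions of the instrumented gem. A hypothetical `Appraisals` file — the gem names and versions here are illustrative, not the project's actual matrix — looks like:

```ruby
# Appraisals — each block generates a Gemfile so the test suite runs
# against that pinned version of the instrumented gem.
appraise "rails-6.1" do
  gem "rails", "~> 6.1.0"
end

appraise "rails-7.0" do
  gem "rails", "~> 7.0.0"
end
```

Running `bundle exec appraisal install` and then `bundle exec appraisal rake test` exercises each pinned version, which is why "there's an appraisal for it" works as a proxy for "we test, and therefore support, that version."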
C
I agree that that would be really cool to have. In addition to those compatibility blocks in a markdown file, if we could also have the build status for the appraisal corresponding to each version as part of the table — yeah, it shows that if you run the tests against that version, it works. That'd be neat.
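[Editor's note] C's idea could bolt onto the same kind of generated table. One sketch, with an entirely hypothetical badge URL pattern and per-appraisal workflow layout (the real repo's CI is almost certainly organized differently):

```ruby
# Illustrative appraisal names; a real script would read them from the
# Appraisals file instead of hard-coding them.
APPRAISALS = ["rails-6.1", "rails-7.0"].freeze

# Build a shields.io badge for a per-appraisal GitHub Actions workflow.
# The owner/repo/workflow names here are assumptions for the sketch.
def badge_cell(appraisal)
  "![#{appraisal} build](https://img.shields.io/github/actions/workflow/status/" \
    "open-telemetry/opentelemetry-ruby-contrib/#{appraisal}.yml)"
end

rows = APPRAISALS.map { |a| "| #{a} | #{badge_cell(a)} |" }
puts(["| Appraisal | Build |", "| --- | --- |", *rows].join("\n"))
```

The badge cell would then answer, at a glance, "does this pinned version currently pass?" without anyone maintaining the table by hand.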