From YouTube: 2023-01-10 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
D
It appears as though there are a couple of new faces, but I haven't been around for a few weeks, so they might not be as new as they appear. Shall I start? I'm going to start. I'm Matt, I work at Lightstep. I've kind of been around OpenTelemetry since the early days. I spent a lot of time working on OpenTelemetry Ruby back in, like, 2019-2020. I've spent much less time working on it lately, but I still come to the meetings and generally will at least start off with the intro of what happened in the spec SIG, which is a meeting that happens before this one.
C
No, I'm Eric. I work at Shopify. Been, yeah, and nice to meet you all as well.
A
Cool. I'm Joe Ferris, I'm the CTO at thoughtbot. We're a software development consultancy and we use Ruby heavily. We also practice SRE for our client support projects, which is where our interest in OpenTelemetry comes in. I'll pass it on.
E
Yeah, I'm Clarissa, and I'm a developer at thoughtbot, and I want to contribute to OpenTelemetry right now, specifically to the metrics part, yeah.
D
Great to have you both. I think most Rubyists have used some thoughtbot software over the years, so good to see you all here.
D
Most likely. But yeah, all right, I'll go ahead and get started. I'll run through the spec SIG update and then we can kind of get onto our meeting. There is an agenda, so if you have anything that you're looking to talk about, add it there and we'll cover it in that part of the meeting. It definitely sounds like we should probably talk about the state of metrics at some point, so we'll make sure to at least have that conversation before the end.
D
If you're of a mind, go ahead and add it there.
D
So the spec SIG is kind of the body that writes the specification, which is where all the language implementations derive from and kind of have to comply with the spec. So the first item that was set there was the January release. There's a PR for the January release.
D
The spec SIG has monthly releases, kind of like any other type of software. There's an ongoing issue, which always freaks me out a little bit, but it seems like nobody is, or few people are, coming out strongly against it. I guess you can see I'm the first person to comment on it. Basically, span attributes can be strings, booleans, numbers, or uniform arrays thereof.
D
Currently, that's kind of how things work out, but there's been a drive to make it so that you can have, I believe, maps. It would allow you to have nested maps, or arrays, nested arrays, or arrays of maps, all of the above.
D
Looks like nobody answered my question, actually, and that's the stance; it's mainly asking for feedback at this point in time, so I guess we'll see where this goes. I will talk more with people at Lightstep to see if this worries anybody else other than me. I feel like I might be the only one who cares.
D
Yeah, there is a good question of how much this applies to metrics. I've been thinking about it a little bit more from the tracing perspective, but I think that's a good callout. At least from the trace perspective, it talks about Jaeger and Zipkin, and it says array and map values must be serialized to JSON and recorded as a tag of type string. So that might be a similar workaround that could work for Prometheus.
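As a rough sketch of that JSON-serialization workaround in Ruby (the attribute name and map contents here are invented for illustration):

```ruby
require 'json'

# A nested value that is NOT a legal span attribute today (attributes
# may only be strings, booleans, numbers, or uniform arrays thereof).
http_details = { 'method' => 'GET', 'retry_counts' => [1, 2, 3] }

# The workaround described above: serialize the map to JSON and record
# the result as an ordinary string attribute, e.g.
#   span.set_attribute('http.details', attribute_value)
attribute_value = JSON.generate(http_details)
```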
D
The proposal was that receivers should generate a new random trace ID if any of the following is true: the field is not present, or the field contains an invalid value. I think most people were very confused about this proposal, because the invalid trace ID is a defined value, and you know that it's invalid; and if you receive something that didn't conform to what you expected, it would definitely be invalid. But to generate a new valid trace ID to replace it...
D
It just makes it look like this span actually belonged to a valid trace, which is not the case. So I think the resolution here is that this should be treated as invalid, whatever that means for a system, if a span arrives with an invalid trace ID.
D
They did talk a little about this left-padding situation that we have with eight-byte trace IDs, but that is still fine. It's still a valid trace ID; it's just shorter when you look at it as hex.
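Concretely, the left-padding looks like this in Ruby (the ID below is made up):

```ruby
# An 8-byte trace ID is 16 hex characters; left-padding with zeros
# gives the canonical 16-byte (32 hex character) form. The padded ID
# is the same number, just rendered at full width.
short_hex  = 'a3ce929d0e0e4736' # hypothetical 8-byte trace ID
padded_hex = short_hex.rjust(32, '0')
```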
D
There was an ask to merge a PR: define a conversion mapping from OTel exponential histograms to Prometheus native histograms. It has a couple of reviews, but I think the discussion was that they wanted to leave that open just a little bit longer, to see if it got another approval or two.
D
There was some clarification around the batch span processor. I don't think anything actually needs to change, but we should verify. I think there's some ambiguity in the wording that led to possibly some different implementations, but the batch span processor should export whenever the queue reaches the batch size or the schedule delay timer completes, whichever is first.
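A toy Ruby sketch of that trigger rule (not the real SDK class: the real processor runs the delay on a background timer, while this sketch checks it on each add):

```ruby
# Export fires when either the queue reaches the batch size or the
# schedule delay has elapsed since the last export, whichever first.
class ToyBatcher
  attr_reader :exports

  def initialize(batch_size:, schedule_delay:)
    @batch_size = batch_size
    @schedule_delay = schedule_delay # seconds
    @queue = []
    @last_export = Time.now
    @exports = []
  end

  def add(span, now: Time.now)
    @queue << span
    export(now) if @queue.size >= @batch_size ||
                   (now - @last_export) >= @schedule_delay
  end

  private

  def export(now)
    @exports << @queue.dup
    @queue.clear
    @last_export = now
  end
end
```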
D
And then this happened to be the most controversial topic of the meeting. There's the timeout millis, or rather the schedule delay millis, configuration for the batch span processor. It's currently set to 5000 milliseconds, five seconds, and they're trying to introduce a complementary, or equivalent, variable for the batch log processor. The feeling was that five seconds is way too long for logging and that they would like to see it more at, like, one second. I think there was some desire to try to have some uniformity around the values, so I think at least the first attempt at this was going to be to see if they could reduce the schedule delay millis for traces and maybe increase it a little bit for logs, to reach a happy medium.
D
But this led to a huge bikeshed, and for good reason. Logs and traces are different things; you're going to emit different volumes of them from different applications, so having the same number doesn't necessarily make sense. But nobody actually has any data, I think, to back up any of these values anyway, so you could go ahead and try to make some assumptions about it.
D
That was the essence of the bikeshed. I don't know if anybody really came away with a resolution, but I think there was some resistance to wanting to at least decrease this number for traces, because, while retaining stuff in memory has overhead in terms of memory footprint, actually sending stuff over the wire has other performance impacts, and changing these things will kind of change behavior. Changing a default anyway would potentially change existing behavior. So if...
F
I find issues like this particularly interesting, because this is a configurable value; there's an environment variable for it already. So trying to force a new default on everyone, after it's been set for, probably, I think years is a realistic figure, years...
F
It always strikes me as a little bit odd. The reason why I say it's odd is that internally, at Shopify, we have our pre-configured wrapper that all of our Ruby applications use, and we looked at changing the default across our fleet, but we kind of backpedaled a bit and realized that it doesn't make sense to try to assume that one size fits all. So realistically, it makes more sense for us to handle it per application, if we have an application that has different export characteristics.
F
What we actually haven't seen is the schedule delay being an issue, right? Because any application under any sort of real-world traffic is going to export long before it hits that delay, right? Because once it hits its max (what is it, the queue size or the batch size that we check for?) it'll export when it hits that, right?
F
So again, it's interesting to see what's the motivation for saying that everyone needs to change this behavior, and where's the empirical data to back it up, to say, yes, this is going to be a net positive, right? So I'm not surprised it turned into a bikeshed, because I feel like I'm instigating a bikeshed by even talking about it.
C
I do think a lot of this stuff is symptomatic. It's like... just a second.
C
I think this is complicated and we shouldn't change the defaults, and I agree, yeah. Let's keep going.
D
Cool, yeah. I mean, I think the feedback kind of validated that the bikeshed was a little bit worth it. I think generally what you were saying, Robert, was echoed by a lot of the concerns there, and it is a configurable value. So I think, you know, OTel should strive to have defaults that aren't totally insane, but make them configurable for specific use cases.
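For reference, the spec exposes this knob as the `OTEL_BSP_SCHEDULE_DELAY` environment variable. A minimal sketch of the precedence, with the default used only when the variable is unset (the helper takes the env hash as a parameter purely so the sketch is easy to exercise):

```ruby
# Fall back to the spec default of 5000 ms only when the environment
# variable is not set.
DEFAULT_SCHEDULE_DELAY_MS = 5000

def schedule_delay_ms(env = ENV)
  Integer(env.fetch('OTEL_BSP_SCHEDULE_DELAY', DEFAULT_SCHEDULE_DELAY_MS))
end
```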
D
And then, lastly but not least, there's an issue to make the exponential histogram stable. So this is a feature that exists in metrics. There are a number of implementations at this point, and they just want to see if there are any final things that need to be addressed before making it stable. The two things that did come up in this issue: first, whether it should be a "should" or a "must."
D
It was originally written as a "must." I think the reason people wanted it to be a "should"... I don't think it's impossible to implement the exponential histogram in any language. I say this because I worked on the JavaScript implementation, and if it's impossible in any language, it should be impossible in JavaScript. There were some hurdles; I learned a lot about JavaScript numbers in the process, but...
D
Yeah, just an FYI: if you ever try to use the bitwise operators in JavaScript, they truncate numbers to 32 bits and just silently don't say anything about it. That was a bit of a surprise. But anyway, that is to say that this should be possible in most languages. There is a huge, I don't know, ramp-up process to understand what you're actually implementing. I think that's...
D
The harder part is understanding the data structure and the mechanics of it, but there are enough reference implementations around that I think it's...
D
There's enough prior art that people could draw inspiration from and come up with something for other languages. But that is not to discount at all that it will be a time investment by somebody in one of these languages to go through all that work. So yeah, if we're ever ready for one: I do know something about the exponential histogram, and at one point in my career I did write some Ruby code. So maybe those two things could come together, but we'll see.
D
You'll always be able to fit the entire floating-point range into your exponential histogram, but the problem is when it gets down to that size, where it's in two buckets, it's not a super useful histogram. So for a cumulative histogram, if you did end up with values across the whole range, you would end up with this two-bucket histogram, which is not awesome. So they wanted kind of a way to reset a cumulative...
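For intuition about why covering the whole range degrades resolution: the exponential histogram maps a value to a bucket index roughly as floor(log2(value) * 2^scale), ignoring the exact-power-of-two boundary adjustments the spec makes, so each decrement of the scale merges adjacent buckets pairwise and doubles the range a fixed number of buckets can cover. A sketch:

```ruby
# Bucket boundaries are powers of base = 2**(2**-scale); the index of
# the bucket containing `value` (away from exact boundaries) is:
def bucket_index(value, scale)
  (Math.log2(value) * (2**scale)).floor
end
```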
D
But the discussion was that this is not really an issue specifically with the exponential histogram; it's kind of a generic problem with metrics, and cumulative metrics in general, and the focus should be to solve that generally, I think, and then the exponential histogram would benefit from that general solution.
D
Oh, and I have run over by three minutes. I usually try to wrap this thing up by 20 after. Maybe that's what Ariel was...
D
So anyway, this wraps up the spec SIG update. I don't know if anybody had any final questions, comments, or concerns about it.
C
Yeah, I'm curious. There was a proposal to merge the mappings between the OTel exponential histogram and the Prometheus sparse histogram; was that the...?
D
That didn't really come up. I think what had come up was: somebody really wants this merged, and there is one green check and two non-green checks, and I think ultimately they wanted one or two more greens.
D
That was the extent of the discussion between the exponential histogram and Prometheus native, at least this week, but yeah.
C
Yeah, no worries, I was just curious. Yeah, I was curious if there are user experiences with the sparse histograms, but I can review the recording to see if anyone mentions it, and maybe just look into who these people were and what their context is. Cool. Sorry to take one more turn.
D
No, it's all good; that's what we're here for. All right, so we have a few items on the agenda put here by Ariel. I will mention for the latecomers that we have, oh my, Joe and Clarissa; they're from thoughtbot. They're interested in metrics, so we should discuss metrics and the state of things before the end of the meeting, but maybe we'll go through Ariel's items first.
G
Mine are just kind of raising some awareness again. I could use some PR reviews, please, by maintainers and approvers on the contrib packages. There's a bunch of them that are in draft; just do the ones that are not in draft, awaiting a review.
G
Thank you. We don't have to dive into them right now, just bringing them to folks' attention, if you can give us a hand with that. One of those things actually is related to Clarissa and Joe, because we're running into issues with the appraisal gem not working with Ruby 3.2, and some Shopify folks opened up some PRs. So it would be really helpful to us if y'all could get somebody who is doing maintenance on the appraisal gem to take a look at 3.2 compatibility.
G
There are two PRs open with alternative solutions. They may not be the best, but having you here and having your eyes might be very helpful for us.
G
There are some details in the 3.2 build PR, essentially a workaround for us, but any fixes would be great. And then I ran into some other compatibility issues, but I'm not even going to get into those right now, because it's some silly combination of google-protobuf not being supported in some version of Ruby, and it makes me want to pull my hair out with protobufs, but I got past those issues.
G
The next bit on that list was the TruffleRuby build. That's breaking because of some incompatibility with simplecov, and that's on the main repo. I did not get a chance to look into it, but main is broken on the opentelemetry-ruby SDK repository, and I just kind of want to bring that up to you folks' attention. If anybody has capacity to look into that, that would be really great. And the last thing on my list, which is again really fast...
G
Sorry, Matthew, if I'm going faster than you can click. I wanted to bring this to folks' attention: we've got a bunch of approved PRs on the main repository; again, they're approved, but haven't been merged.
G
So I've got, like, a pleading with my hands for someone who is a maintainer to take a look through these to merge them in. In particular, the one on trace is very interesting to me and to others, so that one is something that would be really great to get merged, since it's been approved. So that's all on my list of the things that I was interested in; it's just a gentle nudge to get folks' eyes on things.
D
Thank you for the rundown, and yeah, if it looks like things are in some need of some eyes: approvals, merges, comments, all of the above, built or broken.
G
You've got a professional development budget, Robert? I've got a bunch of recommendations to fill that bookshelf.
D
To me that sounds good. Excellent. Maybe we will hand this over to Robert to discuss metrics a little bit, unless somebody else feels like they know the state of the world. Unfortunately...
F
I don't think anyone knows the state of this world. I'm, like, the closest person to it, because I created the mess that now exists in the repo. Just some slightly relevant context: I was working on this full time for a period. I'm not anymore; priorities have shifted and it's not a focus, and I've earnestly just been having trouble finding spare time to work on it. So I'm super, super excited to find people who are interested in contributing to it.
F
The good side of this is that, despite the current state being wholly and entirely incomplete, I do think it's in a position where there are more bite-sized pieces. I don't know if I'd call them bite-sized, because they're still, like, chomps. So in the left column there are the to-dos, and I think these can actually be approached as individual things. I did try to introduce some rough ordering. I think the asynchronous instruments Andrew Hayworth has been working on, and he is, I think...
F
Last time we spoke, of the mind that there's going to be some refactoring to be done, which is fine, and we expected that. So I don't know where he's at with that; he's still out for another week. It might be worth checking in with him on that. But the next thing would be metric readers. So there is some code in the repo right now for a metric reader, but I would say that it is effectively just a placeholder.
F
So if someone is looking to take on a piece of work, a metric reader would probably be a good start. I guess one of the things I'm not sure about is: is anyone coming in with specific questions, or are they just looking for a thread to pull on to get started? Wow.
F
Okay, so just from my perspective of things: the spec for metrics in OpenTelemetry I found rather... it's dense, it's complicated, there are some code snippets that don't match the written word. I don't know if that's changed in the last couple of months, but honestly, I would describe it as challenging. I'd say it's a lot harder than the tracing spec.
F
So if you're approaching this... I don't know, maybe I'm a crappy developer, but it's not easy. I have not found it to be easy. I've found it to be quite tricky, because you're trying to... well, I've been trying to balance prioritizing, like, hot paths.
F
We know that this stuff happens synchronously, right? Like when we're incrementing a counter. So we need to make that as fast as possible, but it still needs to be atomic; we're still going to hold locks. We need to make sure that we're doing that in such a way that the whole world doesn't stop just because, you know, someone's counting from one to two.
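A minimal illustration of that hot-path shape (an invented class, not the actual SDK aggregation code): keep the critical section down to the read-modify-write so concurrent recorders contend as little as possible.

```ruby
# Counter increments happen synchronously in application threads, so
# the aggregation must be thread-safe while holding the lock briefly.
class ToyCounterAggregation
  def initialize
    @lock = Mutex.new
    @value = 0
  end

  def add(increment)
    # Minimal critical section: just the read-modify-write.
    @lock.synchronize { @value += increment }
  end

  def collect
    @lock.synchronize { @value }
  end
end
```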
F
So we've prioritized the writing path for the aggregation and counting. The next fastest thing would be exporting, and then there's some, I'd say, a little bit messier code that could use some refactoring, because the slowest path would be anything involving configuring new instruments, right? So if you want to add a new counter somewhere, it has to create a bunch of associated objects with your meter providers. If you have multiple meter providers, obviously it's going to have to go through...
F
It has to update a bunch of references, because again we are trying to optimize for the write path being as fast as possible.
F
That's where I think there's still room for improvement. But I think if you want to tackle something that is going to be... I'm not going to say it's easy; I'm going to say it's probably the most approachable, and again I'm caveating, I don't think any of this is really that approachable. You'll have to really wrap your head around the spec API, just for the base, like, foundation of it, and then the SDK for the actual nuanced implementation details. I would go after the metric readers.
F
I would pick a specific metric reader, like one of the periodic push exporters, or readers, rather: they have periodic push readers, and then they have different pull and push exporters, and the spec is ambiguous about how you actually should structure it. You could have these exporters that are also metric readers; you can have a reader that is fed an exporter, which is more similar to the tracing implementation.
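A sketch of the "reader fed an exporter" shape, which mirrors the tracing side; all class and method names here are illustrative, not the actual gem API:

```ruby
# The reader owns the collection timing and pushes to whatever exporter
# it was constructed with. The real reader would run a background
# thread every `interval` seconds; here a single tick is exposed so the
# shape is easy to see.
class ToyPeriodicMetricReader
  def initialize(exporter:, interval: 60)
    @exporter = exporter
    @interval = interval
  end

  def tick(metrics_source)
    @exporter.export(metrics_source.collect)
  end
end

# A stand-in exporter that just remembers what it was handed.
class ToyInMemoryExporter
  attr_reader :batches

  def initialize
    @batches = []
  end

  def export(metrics)
    @batches << metrics
  end
end
```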
F
That's where I was planning to spend my next set of time on this. But again, maybe I'm a poor developer; this is definitely a place where it's hard for me to give you information until you have questions, right, specific questions. I think this work is really valuable, and I'm really excited that someone's taking an interest in it, because I would like to use this internally. I'd like to start updating it internally and testing it on some poor unsuspecting team.
G
So would it be helpful, for example, if we had the compatibility matrix? If we were able to go back and, say, take this back, put together the compatibility matrix, and essentially be like: check these off, these are getting done, and then Clarissa can work through those. I assume that collection... are you working on this, Joe, or is it just Clarissa working on this? Or is it other folks at thoughtbot also?
A
Right now it's just Clarissa. We do support open source generally; we have a concept internally called investment time, when people can spend time on open source, and I've been trumpeting the OpenTelemetry cause internally, and Clarissa has stepped up to contribute. So we can't put somebody full time on it, but we can have Ruby developers available to make progress on it, starting with Clarissa, yeah.
G
Choosing my words carefully: there are a lot of features, and a lot of nuance to the specification. So if we could have some way of laying this out... I see that you have the project board, and I don't know whether Andrew tried to organize things in the project board, but we still want to get to a compatibility matrix for sign-off for 1.0 from the specification committee, I guess, right? Look, I'm...
F
Correct, I think. Again, it's been probably a few months since I've actually gone closely comparing the API we have in code versus the specification, so I don't know if there's been any churn there. I know there was some in the SDK, and I think I addressed it, but the API at this point: there's no churn; it should be in a...
F
It's fine right now. I think it's the SDK that really needs the bulk of the attention right now; it's the actual implementation. I think that's what you're asking, right?
G
Yes, sir, yes, that's precisely what I'm asking, only because I don't know. And Clarissa, I don't know how much familiarity you have with how the spec is laid out, but it's essentially like: you can bring your own implementation of this thing. So we are, for all intents and purposes, the reference implementation. So it's like, hey, there's the API package, and then anybody can implement their own SDK, bring their own vendor-specific SDK implementation of it, but we try to have at least the bare minimum.
G
So that's what I was referring to: there's the API gem versus the SDK implementation of that API. It's a little bit foreign for us in Ruby, for the most part, because it's all code that's getting executed; it's not common for a package to be just a bunch of template methods that people are implementing. You know, oftentimes we're like: I'll give you an implementation of this, or, you know, we're only plugging in adapters for the most part. I think here it's kind of a no-op versus a full implementation kind of situation. I hope that what I'm saying is helpful, or I'm just confirming what you already know.
F
That's going to, like, aggregate your counters and things like that. As just another comparison: the tracing API is sufficiently decoupled from the SDK that we did a little hack days project. We didn't finish it, but we were starting to implement the Ruby SDK in Rust, and because the API and the SDK are decoupled in such a way, like Ariel was explaining, we can actually continue to use all of our existing instrumentation and everything, and we can actually just swap out the SDK without it actually... or ideally, like...
F
We haven't proven it yet, but it should not impact anything; all our instrumentation should continue to work, and we can just hot-swap the SDK.
G
One thing I will add, though, is that there's a little bit of nuance in the API packages. There are no-op proxy implementations that you can actually invoke in the Ruby code without it being initialized. So you basically get what we call proxy components, which are then materialized later, once the SDK is initialized, to prevent things from breaking, or from you hitting some null reference or something like that.
G
So there is code in the API gems themselves; they're kind of placeholder proxy classes that will then delegate to materialized SDK components once it's initialized. So it's not just a description versus an implementation; it's actually a swappable runtime component, which is interesting. I want to make that nuanced distinction there, yeah.
F
We had to solve for the problem where, at any point in time, if you have the API, you should be able to invoke any of the methods and it shouldn't blow up; those references should just work later. If you initialize the tracer somewhere and then the SDK gets configured sometime after that, the tracer that you initialized before should work.
F
If you try to emit a span before the SDK is configured, it's a no-op span; it does nothing. But after the SDK gets configured, you should have the real thing, or it should behave, because it is the real thing. The metric implementation is the same way, and all that's already been done, so the metric API has all those proxies already set up. So the bulk of the work, again, is going to be primarily in the SDK.
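The proxy behavior described above can be sketched like this (class names invented; the real gems have their own internals):

```ruby
# Before the SDK is configured, calls hit a no-op delegate; once the
# SDK installs a real delegate, the SAME proxy object starts returning
# real telemetry, so references captured early keep working.
class NoopTracer
  def start_span(_name)
    :noop_span
  end
end

class ProxyTracer
  def initialize
    @delegate = NoopTracer.new
  end

  # Invoked when the SDK is configured.
  def delegate=(real_tracer)
    @delegate = real_tracer
  end

  def start_span(name)
    @delegate.start_span(name)
  end
end
```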
D
Yeah, the only thing I would add to that is that the API gem does kind of also include the no-op implementation; it's kind of a combination of those things. So if you never plugged in an SDK, everything would still work, nothing would break, but you would not be generating any telemetry. That's kind of part of the design of the project as a whole.
F
In the spirit of... because, again, I said that there's not a lot of really easy low-hanging fruit from my perspective, but I do really want to support anybody who's interested in doing this work, because I'm limited in bandwidth to actually focus on this wholly, and I...
F
But if you are starting on this work, again, I encourage looking at the metric reader and starting to build up an understanding, or at least build up some questions. Not having bandwidth to work on this in the way that I would like to doesn't mean I'm unavailable, right? I'm really happy to support anyone that's working on it, whether it's reviews, or even just some impromptu...
F
...like question-and-answer, effectively pairing on the stuff. The volume will obviously have to be a little bit lighter, but I really do want to support whoever is working on this. So if you start ramping up and then start to get an understanding of it and have questions about it, I am available. And I still need to understand some of the parts too, right? If I fully understood the SDK implementation, ideally I would have written it.
F
So there's still some learning on my side to do, but I want to make sure that it's clear that not having bandwidth to work on it isn't the same as not being able to support someone working on it, which I am going to make myself available for. So...
D
Yeah, on that note, I will point out there is a CNCF Slack. I don't know if you all are in there, but there are channels for each of the projects. So there's one for OTel Ruby, but there's also one for OTel metrics and the OTel specification. So there's kind of that community resource for a lot of things. So if you're not there, I would definitely encourage you to show up. I don't know if somebody has the link. Oh, okay.
D
Eric said that Clarissa has already been talking to us on the Slack, so you have found it, but yeah, definitely check out the other otel- channels, because there are quite a few there. I guess the other thing that I can add is that metrics has been a long project. I think tracing was pretty straightforward in retrospect, although it seemed long at the time, but the metrics API and data model have gone through several revisions.
D
I feel like we're on, like, the third revision. The reason I bring that up is that, if you're looking around at other projects and other implementations, some may have, I don't know, a layer of historical patina on top of them from these previous iterations and implementations, and I'm just not sure which one is maybe the most current or best one to look at. I believe, Robert, you can confirm, but all of them seemed incomprehensible enough that Robert was not looking too heavily at any of them, I think, and I...
F
Oh, I ended up looking at Python a bit more. They did a refactor somewhere along the way where it was fairly digestible. It's still a little bit different than what we did, and I don't know; I'm not super familiar with Python, so I'm not sure if I just wasn't catching on to some of their idioms there.
F
But Python was a pretty solid reference. Go, a lot of churn there. JavaScript's abstraction is completely incomprehensible to me, so I avoided that, but some of the more encapsulated pieces of code there were actually helpful. So JavaScript and Python, I'd say, were a bit more helpful than the other languages, for me.
D
...before I say anything, but maybe that is going to be the most up-to-date when it is finally finished. Yeah, I feel like there might be some weirdness in JavaScript. I know for the exponential histogram, the way that the JavaScript SDK and the interfaces were set up, I was forced to implement a diff method, which no other exponential histograms had to, and the consensus was you would probably never actually... this should be dead code in the real world, but the interface requires it.
A
On the histogram... there's definitely some homework that we need to do, so we can ask more informed questions, but since I'm here: is the histogram significantly different from what would be calculated in a Prometheus exporter?
D
I'm not totally familiar with the data models or the types of histograms available in Prometheus. I do think they have something like the exponential histogram, some sort of high-resolution histogram, and then probably also a fixed-bucket histogram. Are both of those true?
C
I mean, it's fine. The RC came out recently; I think it's in main now, the first version of it. I don't know if it came in a patch release or a minor release, but it's definitely been released, at PromCon or whatever, yeah. I think OpenTelemetry's exponential histogram intentionally delayed...
C
...you know, stabilizing the specification, to sort of wait on the sparse histogram stuff from Prometheus, so that there could be compatibility between sparse histograms and also the sketch histograms, or sketch distributions, in the statsd world, or the DogStatsD world, or whatever you want to call it. I think there are some differences with...
C
Maybe the configuration options are slightly different, but as far as I know you can translate between all three types without losing granularity, and at the core they're all doing the same thing, where the buckets dynamically resize.
F
It had to be compatible with Prometheus, but there was something, I forget what it was, that came up a while ago. It was something about how the bucketing offset worked that needed to change to actually be one-to-one with Prometheus, and I'm not sure if that change came through, or if I'm completely misremembering, but.
C
They were supposed to be one-to-one, I think. Yeah, that makes sense to me; that's largely the goal of OpenTelemetry, so I'm sure they attempted to prioritize it. If I had links into the code like Ariel has, I would link them, but I don't offhand.
A
Yeah. If it is similar: I've actually contributed to a different Prometheus exporter, a client for Ruby, so I'm pretty familiar with the way that works. So there might be some prior art we can study for the metric calculations we need to do in Ruby. We've
A
been using histograms in Prometheus for a while in production. We do it right now with the Prometheus exporter, with fixed buckets, but.
D
Yeah, they have different data models. From the API side it's the same: you just have a histogram instrument, there's just a record method on it, and you just record values. But you can set it up to be backed by a fixed-bucket histogram or an exponential histogram, and that ultimately determines what these histogram data points are.
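To make that shape concrete, here is a minimal, hypothetical sketch: the instrument only exposes a record method, and the aggregation backing it is a swappable SDK-side detail. The class and method names here are illustrative, not the real OpenTelemetry Ruby API.

```ruby
# Hypothetical stand-in for a histogram instrument: callers only see
# record, while the backing aggregation decides how values are bucketed.
class HistogramInstrument
  def initialize(aggregation)
    @aggregation = aggregation # e.g. fixed-bucket or exponential
  end

  def record(value, attributes: {})
    @aggregation.update(value, attributes)
  end
end

# A trivial aggregation that only tallies count and sum, standing in
# for a real fixed-bucket or exponential aggregation.
class SumCountAggregation
  attr_reader :count, :sum

  def initialize
    @count = 0
    @sum = 0.0
  end

  def update(value, _attributes)
    @count += 1
    @sum += value
  end
end

aggregation = SumCountAggregation.new
histogram = HistogramInstrument.new(aggregation)
histogram.record(1.5, attributes: { 'http.route' => '/users' })
histogram.record(2.5)
aggregation.count # => 2
aggregation.sum   # => 4.0
```

Swapping `SumCountAggregation` for a bucketing aggregation would change what gets exported without touching any call sites, which is the point being made above.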
F
This probably isn't going to be too relevant right now, but the other thing that's not implemented in the Ruby part of things, that has been wholly and entirely ignored, is views. We figured we'd just come back to it later, once we had a more complete implementation. Views are just a bunch of transformations on the metric streams you output from your thing: you could turn a counter into two separate counters that will be exposed as two differently labeled metrics. I don't know, I think it's a carryover from.
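As a rough illustration of that idea (the names here are invented, not an implemented OpenTelemetry Ruby API), a view can be modeled as a named transformation applied to each point of a metric stream:

```ruby
# Invented sketch of a view as a transformation over metric stream points.
View = Struct.new(:name, :transform)

# This view renames the stream and drops the 'host' attribute, so the
# exported metric is aggregated across hosts.
drop_host = View.new('requests_by_route', lambda do |point|
  {
    name: 'requests_by_route',
    value: point[:value],
    attributes: point[:attributes].reject { |k, _| k == 'host' }
  }
end)

point = { name: 'requests', value: 3,
          attributes: { 'host' => 'web-1', 'route' => '/users' } }
drop_host.transform.call(point)
# => { name: "requests_by_route", value: 3, attributes: { "route" => "/users" } }
```

Registering two such views against one counter would yield the "one counter into two differently labeled metrics" fan-out described above.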
D
This repo I'm in right now, the opentelemetry-proto repo, is where all the protocol definitions reside for OTLP, and ultimately this is what will be used to implement at least the OTLP exporters and.
D
There is a count and a sum; I think there's min and max in here as well, and then an array of bucket counts, and then you use the bounds to figure out what those buckets actually represent.
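A minimal sketch of that fixed-bucket shape, using the OTLP-style fields just named (count, sum, min, max, bucket counts, explicit bounds); the class is illustrative, not the OpenTelemetry Ruby SDK's aggregation:

```ruby
# Hypothetical explicit-bucket histogram mirroring the fields of an
# OTLP HistogramDataPoint: the bucket_counts array only has meaning
# together with the explicit boundaries.
class ExplicitBucketHistogram
  attr_reader :count, :sum, :min, :max, :bucket_counts, :boundaries

  def initialize(boundaries: [0, 5, 10, 25, 50, 100])
    @boundaries = boundaries
    @bucket_counts = Array.new(boundaries.size + 1, 0) # final slot is overflow
    @count = 0
    @sum = 0.0
    @min = Float::INFINITY
    @max = -Float::INFINITY
  end

  def record(value)
    @count += 1
    @sum += value
    @min = value if value < @min
    @max = value if value > @max
    # First boundary the value does not exceed; values above every
    # boundary land in the trailing overflow bucket.
    index = @boundaries.index { |b| value <= b } || @boundaries.size
    @bucket_counts[index] += 1
  end
end

h = ExplicitBucketHistogram.new
[3, 7, 7, 120].each { |v| h.record(v) }
h.count         # => 4
h.bucket_counts # => [0, 1, 2, 0, 0, 0, 1]
```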
And in the exponential histogram you end up with min, max, sum, count, a zero count, a scale, and then you have a buckets representation for the positive range and the negative range, and the buckets have an offset.
So ultimately the offset is what tells you which bucket is residing at the zeroth index, which is most likely not zero, and then your counts. Then you use the scale to figure out what the boundaries are for each one of the buckets: based on the index and the scale you can crunch some numbers and figure out what range is represented by each count.
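The number crunching described here is small: per the OTLP data model, the bucket base is 2**(2**-scale), and a bucket at a given index covers the range (base**index, base**(index + 1)]. The helper name below is ours, not an SDK method:

```ruby
# Boundaries of one exponential-histogram bucket, from its index and
# the data point's scale: base = 2**(2**-scale), lower exclusive,
# upper inclusive.
def bucket_bounds(index, scale)
  base = 2.0**(2.0**-scale)
  [base**index, base**(index + 1)]
end

# At scale 0 the base is 2, so bucket index 3 covers (8, 16].
bucket_bounds(3, 0) # => [8.0, 16.0]

# Each +1 of scale doubles the resolution: at scale 1 the base is
# sqrt(2), so the same span is split across two buckets.
bucket_bounds(6, 1) # => roughly [8.0, 11.31]
```

The offset simply shifts which index the zeroth stored count refers to, so the stored count at position `i` describes the range `bucket_bounds(offset + i, scale)`.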
E
Andrew said we could work together next week when he's back from his vacation, so we think we can work together on something, and maybe he can help me too.
F
I definitely encourage you, as much as I hate to say it, to read through the API and SDK spec, and if you're anything like me you'll have to read it multiple times, because it's really, really dense. But I think it's absolutely necessary to work on it, so that is the most painful but realistic starting point, I think. And then, yeah, definitely talk to Andrew Hayward when he's back; he might have some better low-hanging fruit to get started on, and he's also just really wonderful to work with.
G
I'll say it again: thank you, Clarissa, for being a part of this project. We really appreciate your participation and your help. Thanks, Joe, for supporting the community. Thank y'all; I'm going to have to say have a great afternoon, because I'm gonna bounce.