From YouTube: 2020-07-02 meeting
B
For those joining the call, I think we're still waiting on Josh, but be sure to open up the doc and put your name down. If you have any issues or agenda items, make sure you add those to the agenda.
E
I know it's going to be kind of a light week. I think at least some people are on holidays today, so I only put a few items in the agenda.
E
So for me, the big item to talk about today is the OTLP protocol, but I want to talk about some OTEPs first. Justin is here on the call, and I know there are still some open points of discussion on your PR. Would you like to talk about that?
D
This PR has drifted a little bit into the configuration of when these metrics ought to be constructed, and I'm struggling with the language of how to leave it open that these things might be configurable in the future, without getting into the can of worms of actually speccing out what that configuration would be. So I would love some feedback on the language there.
D
There was a request to add back in the concept of a success-equals-true-or-false label, and I am in favor of not adding that back in, and of leaving the canonical status code as the only indication of success and failure.
C
Go ahead. I think it makes sense to rely on that, especially since there is another working group related to error reporting; once that work establishes the definition of errors and everything, we should probably follow the same pattern. But so far we use status for this, so we should stick with what we have right now and not go into new territory with this change.
E
Yeah, I like the span status approach. I know there's some dissent out there, and there's this new OTEP filed, number 123, which in my opinion blows that matter up, and I haven't considered it very carefully yet. So this issue is not settled around span status, but I think we can agree on span status code for this, at least in this group.
E
So, Justin, let's see: I hear you about the configuration question. I would try my best not to say anything about how the configuration of this happens; the semantic convention is just supposed to say what to do if you want to make a metric from a span.
E
I started trying to figure out if we could implement it yet, and I'm not quite able to do that, given some limitations; particularly, the fact that we haven't answered all the OTLP questions has left me thinking we should just push off on a configurable SDK until we have more of the basics working.
E
And it's a nice story to say, if you're measuring duration, just... sorry, where was that issue filed?
E
I'm having trouble remembering now where it originally appeared, but somebody was talking about extending semantic conventions for gRPC to record.
E
There are going to be cases where someone wants to measure a duration that's just super fine-grained, and what they really want is an accumulation: how much time did I spend in that section, perhaps over a period of time? So it won't always be the case that a span makes sense when you're measuring a duration, and so I think it's important that the spec exists independently of spans.
E
Sorry, I now recall this. There was an issue: somebody pointed out on Gitter that there had been some circulation of a slide deck created by the Micrometer team, which was quite critical of the Java OTel implementation. It pointed out a number of things which were, I think, largely style questions, but the major topic was really the lack of a timing instrument, and so I pointed at an old issue that was filed in February.
E
Of
course
I
mean
duration,
not
time
so
so
in
my
response
to
this
particular
gitter
discussion
that
I
posted
on
the
issue
as
well.
I
pointed
out
there's
two
ways
you
could
go
forward.
One
is,
you
could
say
we're
we're
going
to
if
you
want
to
measure
time,
just
use
a
span
and
that's
probably
going
to
be
the
right
answer
for
many
many
cases
and
then
the
other.
The
other
answer,
which
is
what
micrometer
actually
does
is
it
gives
you
an
instrument
which
has
a
built-in
stopwatch.
E
Basically, you can say start and stop, which is almost like a span, but it's creating a metric instead of outputting a span. So the question is: should we add a timing instrument for recording durations, and/or should we have an API that lets you automatically measure a duration by timing a section, which is exactly what spans do? So if you had a configurable SDK, you could configure that.
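The stopwatch-style instrument described above can be sketched in a few lines. This is an illustration only, not any real OpenTelemetry or Micrometer API: `ValueRecorder` and `Stopwatch` are hypothetical names standing in for an instrument with a direct `record()` call and a span-like start/stop helper layered on top of it.

```python
import time

class ValueRecorder:
    """Stand-in for a metrics instrument with a direct record() call."""
    def __init__(self, name, unit):
        self.name, self.unit = name, unit
        self.values = []

    def record(self, value):
        self.values.append(value)

class Stopwatch:
    """Span-like start/stop helper that records a duration metric
    instead of outputting a span."""
    def __init__(self, recorder):
        self._recorder = recorder
        self._start = None

    def start(self):
        # Durations must come from the monotonic clock, not wall time.
        self._start = time.monotonic()

    def stop(self):
        self._recorder.record(time.monotonic() - self._start)

latency = ValueRecorder("request.duration", unit="s")
sw = Stopwatch(latency)
sw.start()
# ... the timed section would run here ...
sw.stop()
```

The design question in the discussion is whether this helper belongs in the API itself or stays a convenience layer over existing instruments.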
E
If I had to choose, I would say we should just support a time value or duration value in an instrument. James Bevington, who I don't see on the call, responded with a question: why don't we just have another type, like we have Int64 and Float64?
E
If you have a critical section which is going to be very frequently exercised, and what you really want is the sum of time spent in that critical section, then you would create a duration counter and you would count it every time you're in that section. The sum of that time will be something you can form a rate out of, and that rate will effectively be a CPU load factor (or sorry, wall-time utilization is what it'll be).
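The duration-counter idea above can be sketched as follows. Everything here is hypothetical (there is no `DurationCounter` in the spec being discussed); it just shows how summed section time over a collection interval yields a wall-time utilization.

```python
import time

class DurationCounter:
    """Hypothetical adding instrument that sums elapsed seconds."""
    def __init__(self):
        self.total = 0.0

    def add(self, seconds):
        self.total += seconds

section_time = DurationCounter()

def timed(fn):
    """Accumulate the time spent inside fn into the counter."""
    start = time.monotonic()
    try:
        return fn()
    finally:
        section_time.add(time.monotonic() - start)

result = timed(lambda: sum(range(1000)))

# Pretend half a second was spent in the section during a 2-second
# collection interval; the rate of the summed time is a wall-time
# utilization: seconds spent in the section per second of wall time.
section_time.add(0.5)
interval = 2.0
utilization = section_time.total / interval
```

Unlike a stopwatch that records each duration individually, this shape only keeps the running sum, which is exactly what you want for a very hot critical section.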
E
How much time are you actually spending, as a rate of total time? So that leads me to suggest that we should add a duration type and have a new instrument corresponding to that type. Although, in Go, it means going from 12 concrete instruments to 18, which is kind of mind-boggling, it seems like the right solution. But it did touch on this topic of timed sections and how to record those. Go ahead, please.
C
So if I'm going to add a duration thing in Java, it's going to be a bit horrible to use, because I still have to actually compute that using the monotonic clock, not the wall clock.
E
The idea of a helper, I think, would address the Micrometer complaint, which is that there's a usability issue, especially around time: you need to figure out what your monotonic clock is, you have to use it, and it gives you an integer which you have to cast back into something, to which you then have to apply the correct units, because what's the unit of an integer? So you can see the problem. The slide deck was not very fair as far as... never mind. But I think that what we're really looking for is better helpers for time, and maybe it doesn't need to be new instruments.
E
I can see that happening, and it may just make sense to do in Go, because the built-in time has monotonicity in Go. You might just create new instruments, but.
C
Personally, I would always go with helper things or extensions to the API, if possible, and not add these directly into the API as a thing. And I think this is a good example of where this can happen: this can be implemented as a helper on top of the current things. For example, as a helper for Go, behind the scenes you just create an instrument that, let's say, uses int, and you record nanoseconds.
C
As
as
you
need-
and
you
are
good
or
also
also-
there
is
a
good
question:
do
we
have
a
support
in
otlp
as
one
of
the
possible
value
to
be
time,
or
do
we
go
with
with
one
of
the
in
float.
C
For spans, it's not the user that does the measurement; we do the measurement behind the user, who just has a start operation and an end operation. So we deal with the monotonicity and the right clock to use and so on.
E
So
the
question,
then,
is:
how
do
we
specify
it?
I
think
I,
what
I'm
thinking
about
is
making
a
helper
that
gives
you
a
span
like
api,
that
you
can
use
just
to
record
durations,
but
but,
as
james
pointed
out
like
sometimes
it's
not
really
what
you
want.
You
want
to
add
up
a
bunch
of
durations
and,
like
that's,
a
different
instrument
type.
E
So
my
my
fear,
with
with
adding
helpers,
is
that
we're
getting
away
from
this
sort
of
uniformity
that
we've
built
up
for
all
the
instruments
have
a
direct
call
and
they
have
a
batch
call
like
you're
not
going
to
use
batch
calls
if
you
have
a
helper
and
so
on.
Like
that's,
that's
the
fear
that
I
have.
C
But
I
think
I
think
the
helpers
will
will
will
satisfy
if
we
have
only
single
calls
will
satisfy
99
of
the
the
cases
and
every
time
when
you-
and
if
we
explain
the
helper
is
actually
just
a
float
instrument
or
a
double
instrument
with
unit
millisecond
user
can
use
the
raw
api
if
they
need
this
advanced
features
so
yeah.
I
guess.
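C's point, that a timing helper can just be a thin layer over a double-valued instrument with unit milliseconds, while advanced users keep access to the raw API, can be sketched like this. All names here are illustrative, not part of any real API.

```python
import time

class FloatValueRecorder:
    """Stand-in for the raw API: a double-valued instrument with a unit."""
    def __init__(self, name, unit):
        self.name, self.unit = name, unit
        self.values = []

    def record(self, value):
        self.values.append(value)

class TimerHelper:
    """Convenience layer that times a callable and records milliseconds.
    The underlying instrument stays exposed, so users who need advanced
    features (batch recording, etc.) can call it directly."""
    def __init__(self, name):
        self.instrument = FloatValueRecorder(name, unit="ms")

    def time(self, fn, *args):
        start = time.monotonic()
        try:
            return fn(*args)
        finally:
            self.instrument.record((time.monotonic() - start) * 1000.0)

timer = TimerHelper("db.query.duration")
timer.time(sorted, [3, 1, 2])
```

Because the helper is sugar over an ordinary instrument, nothing new needs to exist in the wire protocol: the exported data is just a float metric with a time unit.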
E
Yeah,
okay,
so
I
think
we
should
cut
off
this
conversation
now
to
make
it
through
the
rest
of
the
agenda
I
have
to.
I
haven't
thought
much
about
this.
The
idea
of
having
a
span
like
api
is
basically
the
stopwatch.
Api
is
like
pretty
common,
and
maybe
that's
what
we
want.
E
You
probably
are
right
that
you
can
adapt
things
so
that
you're
able
to
use
the
like
record
batch
if
you
really
want
to.
I
would
give
those
that
we
all
perhaps
think
a
bit
about
this,
but
it's
important
because
we
got
pretty
heavily
criticized
by
a
competing
framework
and
timing
time
is
really
important.
E
I think the question is how to specify it, and I haven't thought enough about that, so I will be happy to think about it a little bit. Thank you, Jefferson. Thank you all. There's this other OTEP, which I think has basically been approved by everybody, and I think we could merge it. James has provided some final feedback.
E
There's
a
question
about
inodes.
I
feel
really
uninterested
in
inodes
for
some
reason,
but
that's
because
I
don't
work
in
storage.
Often
so
aaron
are
you
on
the
call?
C
Two comments on this. The author needs to resolve all the... yeah, I was going to comment on it, on this, yeah, not here. And the other thing: it has three approvals; we need one more. And also, Josh, approve PR 124, no?
C
And
you
in
the
meantime,
if
you
approve
one
two
four,
I
can
merge
that
after
this,
oh
in
this
repair.
Yes
in
this
repo
you
have
to
that
makes
mirrors
all
the
co-donors
changes
from
from
a
specification
great.
E
And
that's
also
in
this
repo
today,
yes,
awesome,
I've
been
waiting
for
this
yeah.
No,
all
right.
I
approve
it
without
even
reading
it.
E
Great
thank
you,
so
I
I'll
go
over
that
one
more
time.
I
left
feedback
a
lot
back
and
I
felt
good
about
where
we
ended
up.
So
I
the
next
on
the
agenda
that
I
had
put
in
here
was
I
just
wanted
to
kind
of
raise
some
flags
about
the
collector
and
the
like.
We've
been
moving
forward
with
a
spec
and
implementing
a
bunch
of
sdks
and
all
of
a
sudden
people
are
trying
to
use
it
and,
like
I've,
run
into
so
many
problems.
E
So
I
just
wanted
to
share
that.
I've
begun
focusing
a
little
bit
on
like
making
sure
the
collector
is
operational
end-to-end.
We
seem
to
need
more
more
tests
of
end-to-end,
especially
for
otlp.
E
Also
in
the
go
exporter,
we've
got
some
incompatibilities
there,
so
the
root
cause
I
haven't
dug
into
the
detail,
but
it
looks
like
the
conversion
from
otlp
to
open
census
to
prometheus
is
losing
something
somehow
and
I
haven't
dug
enough
into
it.
All.
This
is
mostly
just
making
me
want
to
move
forward
with
otlp,
and
it
means
that
at
some
point
we
need
to
replace
the
prometheus
exporter
or
adapt
it
to
use
the
otlp
data
type
directly.
I
think,
rather
than
the
transformer
which
I
think
is
going
to
improve
lots
of
things.
C
Yeah,
so
by
the
way
to
give
you
feedback,
one
of
the
reason
why
we
didn't
do
that
was
because
we
are
waiting
for
otlp
to
kind
of
stabilize
so
yeah.
It
makes
sense.
Did
you
connect
problem?
I
don't.
E
That
we're
not
communicating
to
users
what's
what's
what's
to
be
expected,
I'm
not
surprised
by
where
it
is,
I'm
just
I'm
a
little
disappointed
myself
for
not
kind
of
getting
ahead
of
the
problem
and
and
like
cautioning
people
that
this
not
probably
not
something
you
want
to
try
yet
even
just
the
sdk
exploring
the
prometheus
we've
discovered
some
problems
for
the
go
go
implementation
at
the
today,
so
just
just
one
I'd
like
to
be
known,
I
think
we
should
work
harder
on
on
these
issues
before
the
next
set
of
releases
and
also
time
these
releases
a
little
bit
more.
E
So
perhaps
with
that
we
could
move
into.
I
think
what
will
be
the
likely
to
be
a
tough
discussion,
I'm
not
sure,
especially
with
bowdoin
here
I
I
want
to
go
through
it.
There.
I've
been
thinking
about
one
of
the
things
that
changes
here.
That
is
appears
to
be
a
style
choice,
and
I
don't
want
to
just
propose
style
choices
about
defending
it.
E
I
think
that
this
is
something
that
bogdan
decided
a
while
back
for
the
sort
of
current
version
that
that
we
have
checked
in
this
was
the
decision
that
every
one
of
the
one
of
types
for
a
particular
value
type
is
going
to
be
a
a
dedicated
and
separate
message.
So
you
had
summary
data
point
and
you
had
histogram
data
point.
You
had
int
and
float
into
double
data
points.
The
reason
why
this
changed
is
not
just
an
arbitrary
change,
so
the
idea
was
that
in
159.
E
That
way,
we
don't
have
to
duplicate
labels
and
the
raw
values
will
contain
just
the
extra
labels,
not
the
ones
that
were
already
aggregated,
that
were
grouped
by
I
changed
the
sort
of
concretely
I
refactored
it
a
little
bit
further
and
one
of
the
things
I
noticed,
which
made
me
feel
good,
as
I
was
doing
this,
is
that
all
the
original
data,
all
the
original
comment
here
had
this
thing
saying
that
there
was
something
called
a
data
point.
I
didn't
change
the
word
data
point.
E
There
was
no
data
point
though,
and
now
we've
added
it
back
so
this,
so
this
change
does
does
undoes
what
you
did
bogdan
and-
and
the
argument
is
that
this
is
going
to
help
us
with
the
raw
values
sharing
only
having
to
record
extra
labels.
The
data
point
then.
C
Yes,
one
thing
count
the
number
of
allocated
messages,
if
that
keeps
the
same
number
of
allocated
messages,
that's
that
was
the
only
rule
that
we
are
tied,
mostly
nothing
else.
If
the
number
of
messages
that
we
allocate
stays
the
same,
I'm
fine
if
we
allocate
more
messages,
means
more
overhead,
more
gc
problems,
and
I'm
I'm
willing
to
duplicate
some
of
these
things,
as
we
did
for
some
performance
game
that
will
will
affect
us
forever.
Okay,
that
was
that
I
would
like
to
follow
on
these.
E
So
at
least
for
the
scalar
values
there
shouldn't
be
a
change
because
those
got
rolled
in
so
then
I'm
skipping
over
the
descriptor,
which
is
where
most
of
the
change
happens
down
to
the
data
point,
which
is
here
so
the
data
point
has
its
common
labels,
the
common
timestamps
and
then
it's
it's
a
logical
one
of
because
we
know
that
the
real
one
of
in
proto
is
very
expensive
logical,
one
of
has
a
embedded
in
or
double
so.
Those
are
definitely
not
going
to
allocate
anymore.
E
Histogram
and
summary
are
that's
it.
So
histogram's
summary
is
gonna
have
one
extra
allocation,
and
maybe
we
can
figure
out
a
way
to
address
that
the
the
of
course
we
could
just
inline
all
the
fields
here
and
then
we
would
be
widening
our
tag,
values
for
the
protobuff
encoding.
I
am
okay,
I'm
sympathetic
to
the
concern
about
memory.
It's
gonna
happen
for
a
histogram
of
summary
with
this
change.
I
am.
C
I
actually
think
for
the
collector
is
not
going
to
be
a
big
problem,
because
we
can
use
google
proto
or
something
similar
and
and
embed
the
fields
directly
or
do
crazy
things.
If
we
want
it's
mostly
anyway,
what
I
would
do
by
the
way
I
would
ask
tigran
you
know
him.
He
has
a
good
benchmark.
He
wrote
the
benchmark
framework
for
yeah.
I
remember
so
ask
him
to
to
run
with
this
an
example
and
see
if
performance
changes
again
nothing
about
semantics
or
anything
we
can
whatever
we
want.
E
I'm
thank
you.
I'm
relieved
a
little
bit
so
the
the
highlight
here
of
with
exemplars
is
that
these
are
now
the
way
I've
written
this,
and
I
think
I
can
go
through
and
write
more
and
more
comments,
but
I
just
to
speak
it
out
loud.
These
examples
are
have
two
functions.
One
is,
if
you
want
to
encode
exact
data.
You
can
use
this
field
to
encode
raw
data.
It's
like
you,
don't
want
to
put
any
aggregation
in
just
dump
the
data
here.
E
And
that'll
allow
you
to
include
your
choice,
id
and
stuff
like
that,
and
I'm
the
re
there
you
could.
You
could
imagine
putting
raw
data
into
the
same
place
as
the
scalar
values
and
just
repeating
lots
of
extra
points,
but
that
what
I
was
trying
to
to
to
avoid
was
ambiguity
between
ambiguity
between
last
value,
like
I
just
have
one
number
and
exact
value,
which
is
where
I
have
lots
of
them.
Otherwise,
if
you,
if
you
put
them
both
in
the
scalar
field,
they
end
up.
E
Looking
the
same,
like
you
end
up
needing
another
value
to
say
what
type
of
aggregation
you
had
and
I
think
currently,
we
have
made
an
effort
not
to
explicitly
list
the
aggregation,
because
it's
implied
a
lot
by
the
type
of
data
that
you're
looking
at
and
it
seems
like
it
would
just
add
complications.
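The scalar-versus-raw distinction being debated can be illustrated with a toy data structure. This is not the actual proto under review; the field and class names are hypothetical stand-ins mimicking a data point with a logical one-of over scalar values plus a repeated exemplars field.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Exemplar:
    """A raw measurement, optionally carrying trace context."""
    value: float
    trace_id: Optional[str] = None

@dataclass
class DataPoint:
    """Illustration only: scalar value fields act as a logical one-of,
    and exemplars carry raw values alongside (or instead of) them."""
    labels: dict
    int_value: Optional[int] = None
    double_value: Optional[float] = None
    exemplars: List[Exemplar] = field(default_factory=list)

# An aggregated sum: one scalar, optionally with sampled exemplars.
sum_point = DataPoint({"host": "a"}, double_value=12.5,
                      exemplars=[Exemplar(3.0, trace_id="abc123")])

# An exact encoding: no aggregation at all, just dump the raw values.
raw_point = DataPoint({"host": "a"},
                      exemplars=[Exemplar(1.0), Exemplar(2.0), Exemplar(9.5)])
```

Keeping raw values out of the scalar fields avoids the ambiguity raised above: a point with only exemplars is unambiguously raw, while a point with a scalar is unambiguously an aggregation.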
E
So
so
raw
values
here
can
be
used
as
an
exact
encoding
or
they
can
be
used
in
addition
to
any
of
the
other
values
to
be
sample,
exemplars
the
way
that
they
were
used
in
the
past-
and
I
think
I'm
I'm
able
to
write
more
about
this,
and
I
I
think
we
should
actually
show
prototypes
to
actually
do
this
stuff
as
well.
The
real
changes
in
this
change
are
have
to
do
with
the
the
descriptor
name,
description
and
unit.
E
Don't
change
so
I
I
don't
know
I
thought
a
lot.
I
talked
to
a
lot
of
people
and
I
looked
at
the
problem
a
lot,
so
I
I
don't
know
how
to
explain
this
other
other
than
this
is
what
made
sense
after
after
looking
at
this
space.
Quite
a
bit
is
that
value
type
is
supposed
to
be
a
description
of
exactly
the
type
of
data
you're.
Looking
at
and
monotonic
doesn't
change
the
type
of
data.
E
It
just
changes,
some
semantics,
so
I've
moved
monotonic
back
into
another
field,
which
I'm
calling
kind
kind
is
a
bit
set.
That
includes
temporality
the
way
we
have.
It
includes
exactly
the
same
three
temporality
values
that
is,
and
the
lengthy
comments
that
tyler
wrote.
So
that's
instantaneous,
delta
and
cumulative
those.
Those
are
three
bits
and
then
I've
added
bits.
E
So
this
notion
of
grouping
versus
adding
tells
us
whether
you're,
looking
at
a
value
recorder,
value
observer
or
what
one
of
the
adding
instruments
and
I've
I've
written
some
comments
somewhere
about
how
I,
in
this
file
somewhere
in
this
file,
part
this
is
hard.
I
don't
know
where
anything
is
got
to
be
up
here.
It
is
this
is
what
I'm
saying.
Structure
means
there
are
some
aggregation.
E
...there are some ways that you might end up using this data which are going to be not so meaningful, and this structure bit is going to help you potentially avoid those cases. So let's not do the review in real time; I just wanted to call out what the big changes are here. It becomes a bit set: there are three bits for temporality, two bits for structure, and then one bit for monotonicity and one bit for synchronicity.
E
I've
tried
to
give
examples
of
why
all
those
bits
are
useful
information,
but
they
are
just
useful
information.
The
only
required
for
correcting
this
information
is
temporality,
and
so
I'm
arguing
that
we
should
include
all
the
metadata
that
we
have
about
otl
the
hotel
instruments,
because
it
can
help
a
system
sort
of
make
choices
about
how
to
represent
data
when
it
doesn't
know
anything
else,
and
it
could
potentially
caution
you
against
interpreting
certain
types
of
data.
E
So
the
way
I've
coded
this
up
is
that
there
are
instantaneous
delta
cumulative
grouping,
adding
monotonic
synchronous
bits
and
then
I
went
through
and
I
reasoned
out
all
the
valid
combinations
of
them
and
that
they
are
18..
So
these
would
be
the
valid
kinds,
and
this
will
tell
you
whether
you're
looking
at
cumulative
or
delta
or
instantaneous,
whether
you're
looking
at
which
structure
which
monotonicity,
which
synchronization
synchronicity
properties.
You
have
that's
the
bulk
of
this
change
right.
There.
E
And
I
so
then,
just
returning
to
the
value
type,
we
still.
We
have
four
value
types
that
are
duplicated
for
in
and
flowed,
and
this
is
not
I'm
not
trying
to
make
it
so
that
you
have
to
record
whether
the
instrument
was
using
inner
flow.
Remember
the
spec
doesn't
say
you
have
to
do
anything
with
integer
and
floating
point.
It
just
says
number
and
if
you're
probably
javascript
you're
not
going
to
have
both
because
the
language
doesn't
really
give
you
both.
E
What
I'm
trying
to
do
here
is
make
it
so
that
you
know
in
the
protocol
just
which
fields
to
look
at,
because
I'm
trying
to
avoid
the
situation
where
we
say
float
should
be
good
enough
for
everybody,
because
all
your
numbers
are
going
to
be
less
than
2
to
50
or
whatever,
because
I
know
that
that
won't
hap
that
that
will
happen.
I
know
that
the
network
monitoring,
if
you're,
monitoring
bits
per
second,
you
end
up
overflowing
32
bits
really
fast
and
you
overflow
50
bits
pretty
fast
too.
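The overflow claim is easy to check with back-of-envelope arithmetic. The 10 Gbit/s rate below is an assumed example, not a figure from the meeting; the relevant float fact is that a 64-bit float only represents integers exactly up to 2^53.

```python
# Counting total bits transferred on a 10 Gbit/s link:
rate = 10e9  # bits per second (illustrative)

seconds_to_overflow_32 = 2**32 / rate       # ~0.43 s: 32 bits die almost instantly
seconds_to_overflow_50 = 2**50 / rate       # ~1.3 days
seconds_to_exhaust_float53 = 2**53 / rate   # ~10.4 days until float64 loses
                                            # exact integer precision
```

So a cumulative bit counter outgrows a 32-bit integer in under a second and exhausts float64's exact-integer range within days, which is why the protocol keeps a true 64-bit integer representation rather than treating float as good enough for everybody.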
E
So that's not kind information; it is literally just type information. So anyway, this is still in draft form, but I thought we should talk about it because it's a big change.
C
It
is
I
some
preliminary
feedback
and
probably
would
expect
this
coming
from
me.
Can
we
please
split
into
smaller
changes?
I
I
think
the
data
point
is
independent
of
the
metric
descriptor
change
in
in
a
way
that.
E
Yeah
sure,
for
me
it
was
important
to
put
together
the
whole
proposal.
I
don't
I,
I
don't
think
I
would
push
for
merging
this
all
at
once,
but
we
have
to
be
careful
not
to
make
releases
and
generate
new
code,
which
I
think.
B
Of
not
splitting
this
up.
For
that
exact
reason,
I
remember
I
had
a
similar
pr
where
there
was
a
big
revisal
and
it
was
asked
to
split
it
up,
and
now
we
have
a
temporality
being
released
in
a
protobuf
that
other
libraries
are
depending
on
and
that
should
never
have
been
released
like
that
was
still
under
debate.
So
I
I'm
I'm
a
little
hesitant,
especially
now
that
we're
like
restructuring
how
we're
distributing
the
protobuf.
B
It
should
should
probably
cohesively
be
like
something
that,
because
we
can't
gate
it
anymore,
right
like
that,
you
can
have
other
people
start
pulling
it
in
at
arbitrary
points.
At
this
point,
what
I'm
trying.
E
I agree that they are, and the exemplar stuff is also independent too. So I could stage this as three PRs, but I want to make sure that it gets reviewed as a whole and then debated independently, but they'd merge at the same time, I would think. It does add conflicts, but that's easy to address.
C
I
need
to
read
all
the
comments
and
all
the
implications,
but
so
far
the
the
the
structure
and
everything
looks
looks
promising
to
me
and
everything
data
point.
I
gave
you
the
feedback
about
that.
The
last
one
is
examples,
I'm
not
hundred
percent
sure.
I
understood
the
the
difference
between
scholar
and
raw
data
inside
the
the
the
type
I
think
what's
yeah.
C
The
reason
why
I
did
not
understand
that,
so
you
have
the
yes
here.
The
reason
why
I
do
not
understand
this
is:
I
saw
the
raw
as
being
these.
What
we
call
the
exemplars
and
for
me
is,
if
I
have
a
let's
say,
a
sum
of
latencies
and
I
have
an
example,
so
I
can
have
a
scalar.
E
What I've said so far is that you can tell the difference between a sum and a last value by looking at that structure: if it's grouping structure, it's a last value; if it's adding structure, it's a sum. But then, if it's a raw value, it could be either, in some sense; it's like it's instantaneous. Anyway, I feel like there's a gray area about whether you want to call something instantaneous to say that it's raw, and use the scalar fields, or use the raw fields.
C
I
also
think
I
also
think
the
raw
comes
with
other,
so
may
come
with,
let's
say,
for
example,
I
I
am
producing
a
summary
or
a
histogram
of
things,
and
I'm
also
giving
you
some
examples
of
raw
values
for
whatever
to
do
other
things
with
that.
So
so
I
think
the
raw.
What
if
I
understood
correctly
the
row
of
the
raw
end
and
row
double
from
here.
I
think
this
may
come
with
another
value
type.
E
Right
gosh:
where
did
I
say
it?
I'm
I'm
sorry.
This
is
not
the
right
way
to
do
this
review.
I
I
probably
should
have
said
it
here.
The
point
is,
you
can
put
a
summary
or
a
histogram
in
and
still
include
exemplars
because
they
are
still
associated,
but
the
type
of
the
data,
the
value
type
is
histogram
or
summary.
If
you
want
the
type
to
be
raw,
just
include
exemplars
and
put
the
type
as
raw
and
you'll,
and
that's
that's
how
you
signify
raw
as
opposed
to.
C
Examples
I
need
to
understand
better:
let's
not
do
the
review
right
now.
I
need
to
understand
better.
When
is
the
case
where
I
send
only
a
role.
E
I
would
I
would
offer
that
you
should.
Let
me
take
another
edit
pass
on
this
document,
because
that
is
really
a
key
question
and
I'm
not
sure
where
I
answered
it,
if
at
all
so,
but
that
was
we've
we've
narrowed
in
on
the
real
trick.
Part
of
this
question
for
me,
which
which
I'm
glad
to
hear
that
you
all
understood
it
so
yeah
raw
versus
raw,
exact
values
versus
thumbs
versus
last
values,
it's
a
little
bit
ambiguous
how
they
should
be
represented,
and
I
need
to
go
through
these
comments
again.
E
Okay,
this
has
been
very
encouraging.
Thank
you
all
for
your
feedback.
Does
anyone
else
want
to
discuss
this
at
the
moment
or
have
a
comment?
This
is
just
really
draft
format.
Almost
I
I
will
keep
editing
and
and
by
monday,
or
so
I
think
this
would
be
in
a
place
where
you
could
all
take
a
good
look
at
it
or
you
can
look
at
it
now,
but
you'll
probably
end
up
with
the
same
types
of
questions
that
we
just
discussed.
C
Also,
in
order
to
synchronize
this,
I
will
take
the
responsibility
to
do
the
the
release
for
proto
zero
five.
I
will
talk
to
everyone
that
I'm
taking
that
responsibility,
so
I
will
do
it
whenever
we
feel
it's
done
well.
E
That was my next question. I'm starting to want a little bit more clarity and certainty about the collector as well, inasmuch as we're close to readiness here and people want to run it. I'm interested in getting involved, because we need these exporters to work. So I guess my question, Bogdan, is: what would it look like for you, ideally? Whether it's two PRs or four PRs, or one or three, we get them merged, we generate new code...
C
First of all, we need an official release of this. So once we have all of these, we'll make it 0.5 or whatever, okay, and then the collector, the next release of the collector... So we update the proto dependency on release versions. Okay, so we first need to have a release version of the proto in order to be able to update the collector. Yep, makes sense. So right now it works with 0.4; we need a 0.5 here.
B
Has something changed in the collector? Because the previous release of the collector pulled the master branch of the proto; it didn't pull from a release. It's pulled from 0.4 right now? No, no, no, no, it was the opposite: the collector was built, and then 0.4 had to get released because it contained changes from master.
B
I'm
not
sure
I
so
so
what
happened
was
the
collector
built
it?
It's
built
off
of
a
sub
module
now
like
it
builds
its
own
representation
of
the
proto
right
right.
So
when
it
did
that
release,
it
actually
pulled
from
master
of
the
proto-repo
and
because
it
had
broken
changes,
then
we
had
to
go
back
in
in
the
proto-repo
and
do
a
v04
release
to
actually
have
that
be
pulled
into
the
go
repo.
C
See, that's the version here. How do I determine...
B
I'm pretty sure, in the git submodule file, you're supposed to have some sort of a branch or a tag specified if you want to pull from that; otherwise it's just going to pull from the default branch, which is currently master. So that's kind of what I'm talking about. But let's file an issue, yeah. I agree it can be solved asynchronously, or it can be verified asynchronously, but I do think that it is an issue, and I want to make sure it's recognized.
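The `.gitmodules` mechanism B refers to can be sketched as below. The `branch` key is a real git feature, but note that it is only consulted by `git submodule update --remote`; ordinary submodule updates check out the commit recorded in the superproject. The path and URL here are illustrative, not necessarily the repo's actual values.

```ini
; .gitmodules: pin the proto submodule to follow a specific branch
; when running `git submodule update --remote` (sketch only).
[submodule "opentelemetry-proto"]
	path = opentelemetry-proto
	url = https://github.com/open-telemetry/opentelemetry-proto.git
	branch = master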
C
We
can
we
can
change
the
sound
module
to
point
to
a
different
branch
right
now.
It
points
to
the
just
to
give
you
this.
It
points
to
this
pr
that
the
pr157,
so
that
is
the
latest
pr
included.
B
Sorry, can you just send me a link in Gitter for the thing you're looking at? We can just move on. Okay, perfect, yeah, cool.
E
Okay,
I
wrote
ai,
mr
alias,
if
you
want
to
record
your
thoughts
in
a
collective
issue.
I
think
that
would
probably
push
us
in
the
right
direction.
E
While
I
think
we've
run
through
our
agenda
high
level
thing
is
people
are
excited,
they
want
this
and
it's
not
quite
working
yet,
so
I'm
I'm
going
to
keep
at
it
with
the
debugging
of
various
things,
and
I
think
it.
I
can
split
this
into
two
pr's
as
long
as
we're
really
really
sure
that
we're
not
going
to
get
into
this
funny
situation
again,
but
I
think
it'll
break
the
collector.
E
Okay, well, we'll all be a little bit more cautious in that case, and I will pledge to update these comments. I think it's fair to say I should split it up, into two parts at least, but they'll be like siblings.
A
Okay, yeah, I would be happy to approve and merge immediately. If somebody... how would you... where would...
E
There's
been
maybe
three
or
four
issues
that
independently
have
arrived
in
the
last
couple
weeks,
which
are
all
the
same
issue.
Basically
all
saying
I
tried
to
go
exporter,
I
tried
the
oclp
and
this
is
what
I
saw
it's
weird
and
confusing,
and
then
it's
broken,
so
I'm
trying
to
prevent
new
issues
from
appearing
that
are
identical
as
well.
E
Well
then,
then
that
means
you're
gonna
have
to
make
a
release,
because
people
are
just
gonna.
Take
the
current
release,
whatever
whatever
it
is,.
E
Yeah,
I
think
we
should
maybe
consider
it
yeah,
that's
a
I
would.
I
would
be
okay
with
that.
It's
only
causing
trouble
at
this
point
to
have
a
broken
otlp.
Okay,
all
right!
I
have
the
action
item
I'll
make
sure
that
we
get
followed
up
on
this
next
week.
Mm-Hmm
cool,
all
right,
disable,
the
actual
police.
C
Okay,
I
I
have
to
run
right
now.
I
will
look
at
the
proto
changes
and
provide
comments
there.
Okay,.