From YouTube: 2021-04-06 meeting
C: All right, I think my web camera's working. How are we doing on people?
C: All right, we'll give ourselves another two minutes to grab folks to join in. If you have anything for the agenda, feel free to add it; there's a spot. There should be a spot, yeah, right around here.
C: All right, so did anyone else have any topics that they wanted to add?
C: Okay, if not: first agenda topic, I wanted to go through stability release blockers, at least the ones that I think are somewhat easy. Just to call out, we have this bug here, which is about clarifying what instrument name means: its scope, how it's used, and what it implies from a data model standpoint.
C: I think that the single-writer PR actually answers that, by defining a metric stream, the identity of a stream, and how we're going to evaluate that. If you have a chance: this PR has been out for two weeks. I'm not sure what else is open on it that needs to get resolved, but if we can get that through, it will resolve that issue. The next one that's open is changing labels into attributes; there's an open PR for that.
C: There's this open issue. The PR, I think, only has one review so far, so if folks could take a look at that, that'd be ideal. Just address any concerns you have with the actual implementation of it. I think it's pretty straightforward.
C: The last thing: there are some TODOs in metrics.proto that I think we should actually clear up before we announce that metrics.proto is stable. I don't know if anyone remembers this one. This is somewhat of an old issue, and we've talked about it in this meeting several times: the ability to capture all raw metric data without aggregation. I think Victor might have brought this up. It's quite an old issue.
C: I don't remember where it sits, besides that folks thought we should address it. I recently moved it into the data model and marked it as allowed-for-GA instead of required-for-GA, prior to this meeting. I'm curious if anyone disagrees with that. The TL;DR is this would be a new aggregation type of "no aggregation".
D: It's called 1.0, not GA, according to Ted Young. This was very clearly discussed — and I think I'm quoting it correctly — the protocol, OTLP, is going to go 1.0. I mean to say that we shouldn't use the word GA, that's all I mean.
C: These terms — required-for-GA, allowed-for-GA — because of how we used them previously, I am still leveraging them, but post-stability we'll actually make better tags for what the hell that means. If that doesn't sound unreasonable — okay, cool. I'm going to comment on this, and then I'm going to assign this to myself to find time to get rid of that TODO, and then we'll consider that one done and no longer a blocker for GA. Cool, all right.
C: So I think the last one here — and I think Bogdan's on vacation — was this notion of histogram sums and negative values. We had talked about this last time and I put our comments in the thread. There was a TODO in metrics.proto about whether knowing if a sum inside of a histogram is monotonic is important. Bogdan clarified that, so we said, basically, if the sum…
C: We ignore the sum if there are any negative bucket values when we export to Prometheus or OpenMetrics. The concern here was that we might not have negative bucket values, but we could still have negative measurements, so the sum might not be a counter — and then do we export to Prometheus or not? I'd like to make a decision on whether or not we consider resolving this a release blocker; that is the primary concern here.
D: I think there's a reasonable distinction to make about monotonic sums and non-monotonic sums, but — and this goes back two years — we've avoided trying to distinguish the histogram instrument by whether its values are positive or, sorry, non-negative. The reason this matters is that if you're counting something like latency, it's a physical quantity, so it's non-negative, and therefore the sum is meaningful, even if it's not 100% useful on its own.
D: What's the sum of all latencies? But if you're summing something that might go negative, the sum is still actually meaningful, but you should expose it as non-monotonic — so it's more like a gauge. So this discussion is about whether the sum should be considered a counter or a gauge, and I think we can talk about that later. It's an after-GA kind of thing.
F: Well, I guess the question would be: would we mark that sum — yeah, I mean, it's exactly what you said — as a gauge, or would it be a counter? So, to re-ask the question in a different way: if I create something that's a histogram in the API, is it going to alter what's in the OTLP model?
C: That's a great question. I think this primarily affects compatibility with other protocols, specifically Prometheus: when histograms can get used in Prometheus and when they cannot.
C: I think that's where this entire concern is coming from. So then the question goes back to our API: how do we make sure histograms produced in OpenTelemetry are compatible with Prometheus? Is that a data model problem? Is that an API problem? Do we need to solve it there, if we are recording something that you can't do in Prometheus because we're doing histograms with negative values? My opinion is we just say that, yes, OpenTelemetry supports that use case.
C: It doesn't translate into Prometheus; we're going to have to have clear documentation. I don't think it's a release blocker if we allow that — and we can decide not to; that's an API kind of decision, right?
F: Yeah, I think I would agree. It definitely sounds more like something that's part of the API, as long as it's not the case that the histogram itself has the sum counter inside of it; if it did, then it might be an issue. But again, this seems more like it's the OTLP exporter which is handling that, and we decided to opt into supporting it.
C: Okay, so let's write this — anyone want to help me wordsmith here? I'm going to change it to be allowed-for-GA, or, you know, if somebody wants to resolve this in some way ahead of time, I feel like…
C: So, action items on this, then: do we actually need to make a comment on histograms appropriately, or is this something that needs to get handled when we design the API, to force sums to be counters in histograms?
F: I mean, based on what we just said, it seems like it's something where, in the API, we would need to make the comment that, moving forward — unless there's a presumption that values could be negative — we would just move forward with the thought that they're going to be non-negative values.
C: Okay, I just tried to zoom in, and I think my Chrome died. Okay, there we go.
D: Yeah, it's not essential or required; it's just something we can do. And I am looking at the Prometheus client: I don't think they actually check for negative values. So I think this is compatible with Prometheus, down to the level of this ambiguity of not actually knowing whether a sum is going to be meaningful as a counter until you see the data.
C: Okay, so does this still line up with our plan, then? For now we'll say that the sum can, you know, be considered a counter.
D: We have a monotonic boolean on the sum type, which is positive if monotonic and negative if non-monotonic. We can't add in the future a monotonic bit on the histogram — that doesn't make sense for it, because it's really the sum that we're trying to say is monotonic — so I'm not sure. But if we go with no field called "monotonic" right now, we can't add one in the future that defaults to true (monotonic), which is what we're asking for as the default right now.
C: Okay, you inverted what I thought we were doing, so let me make sure I understand what you're suggesting: we could add a boolean value that would say that the sum is monotonic — an "is monotonic sum" flag, right?
D: But I'd rather have it be a semantic comment saying "the inputs to this histogram are non-negative", because you could also have a semantic comment on your histogram saying the inputs are between zero and one, or the inputs are limited to zero to a hundred. There are all kinds of additional comments you might put to clamp the range of an input to a histogram.
C: It's just that the positive/negative naming has serious implications for Prometheus.
C: Okay, so I'll just put that down as an example of something that we could do that defaults to false. Okay, cool. So we have something that we could do — a straw man that doesn't break the actual protocol. I think that's reasonable.
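The "defaults to false" straw man works because of how proto3 puts scalars on the wire: a field holding its default value is omitted entirely, so adding a false-default flag changes nothing for payloads that already exist. A minimal sketch of that property in Go, using a hypothetical field number 10 (illustrative, not taken from the actual proto):

```go
package main

import "fmt"

// encodeBoolField encodes a proto3 bool field as a varint (wire type 0):
// one key byte (fieldNum<<3 | 0) followed by the value byte. A proto3
// serializer omits the field entirely when the value is false, because
// false is the default and absence means default.
func encodeBoolField(fieldNum int, v bool) []byte {
	if !v {
		return nil // default value: nothing goes on the wire
	}
	return []byte{byte(fieldNum<<3 | 0), 1}
}

func main() {
	// An old producer that has never heard of the flag emits exactly the
	// same bytes as a new producer that sets it to false, so old and new
	// payloads stay mutually readable.
	fmt.Printf("flag=false: % x\n", encodeBoolField(10, false))
	fmt.Printf("flag=true:  % x\n", encodeBoolField(10, true))
}
```

Old decoders skip the unknown field when the flag is set, and new decoders read absent fields as false, which is why a false-default flag is the only kind of boolean that can be added compatibly.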
C: If it's non-monotonic, yeah. But I don't think it — here's a question for you: you keep mentioning, is this part of the identity? If you get one histogram that is monotonic, with all the same identity characteristics, and another one that has the same identity characteristics but is not monotonic — are you going to consider those different metrics? No, you're going to consider that a bug, right? Like, I have one person who's recording a particular histogram of some type, and another person who's recording a totally different histogram. It's not part of the identity.
C: It's more like an implicit or derived value off of the identity, if you think relationally. If I have an identity of a metric, and I'm considering this thing non-monotonic, then all values I should also consider non-monotonic. It's an implicit, derived thing, and if I don't get that, it's a bug. It's kind of like what we were talking about: if I get histograms and floats — just raw floats — probably.
B: So I'm thinking — I agree with you on the host, or on the SDK. What I was thinking is more important is that, on the collector side, you get a whole bunch of different measurements, potentially, that you may want to join for a given resource. That's really where my questions are.
C: Yeah, I think it's the same question there. If you see that, it's probably a bug: probably somebody changed an actual instrument in their code and reused the name, and we need to consider that an error scenario, because there's nothing practical that you can do there that's useful.
C: That's a whole discussion, and we have an entire team here that does signal quality. Effectively, just practically, the way things work is: if you have a metric of a particular identity and name, and you change it in any way, all hell breaks loose in your system. So that's a totally different discussion that needs to happen.
C
It's
actually
part
of
the
instrumentation
sig
and
the
semantic
convention
sig,
and
so
I
I
highly
recommend
having
that
discussion
over
there
for
now,
like
that's,
that's
a
little
bit
outside
the
scope
of
this,
and
I
don't
think
we
have
time
to
really
address
it.
Well,
okay,
sure,
cool
all
right,
so
that
is
commented
for
now.
That
is
now
off
the
blocking
list.
Great,
let's
take,
we
do
have
this
performance
concern.
I
want
to
spend
some
time
on,
but
let's
first
talk
about
the
the
exponential
bucketing
histogram
protocol.
Buff.
H: This — keep sharing, yeah. I can just describe the enhancement proposal briefly. This PR describes an extension to the current protocol, which only supports explicit bounds, to one that supports exponential bucketing.
H: This provides efficiency improvements on the transport side, and there's some simplicity it adds for vendors like us who want to support OpenTelemetry, and it's important for some of our customers as well. So we're happy to support some of the work to get this proposal created and to get feedback on it.
C: I don't know if I can yet — hold on; that's why I've been missing the OTEPs. Let's try to copy it from here. Yeah, when I saw this I did want to put it in the column, just so you know; I think I tried and couldn't. All right — let me copy you in there, just let me do that.
D: By the way, I've read all of the work on this histogram proposal, and it just keeps falling to the bottom of my list of what I'm supposed to be reading and approving today for OTel. So it's something I'm still aware of, and I've probably read more of his work than anyone else.
D
So
so
we
do
need
more
reviewers
on
this
other
than
me,
I'm
probably
just
going
to
approve
it
since
I've
read
every
single
draft
he's
written
so
far,
and
this
is
something
I
we
asked
him
to
give
us
more
detail
in
the
form
of
an
otep.
So
it's
time
for
everyone
to
review.
C: Yeah, I'm really looking forward to reviewing this. I hope you don't mind — I will mark this as allowed-for-GA, and I think that's the idea. I know the recent changes we made to histograms were so that we could support this kind of OTEP, so I'm really excited to see this. Let's see — let me pull it in now. Come on. So…
D: That's right — and apologies for that. We went back and forth more than once on this issue, and after all was said and done, we did go in that direction. So we will create a point that's similar to what is currently called Histogram. I think it's an open question whether we should rename Histogram now to ExplicitHistogram, or whether we should just have the default be, sort of, the legacy Prometheus-compatible explicit histogram but just call it Histogram, and then the new one will be called ExponentialHistogram. And we've left open room for many other histograms in the future.
D: Yeah, it's already created in this — at least I believe so; correct me if I'm wrong, everybody.
D: I mean that we are deciding to move it into the top level, one of which is inside of Metric. It means that — right now, yeah, on the screen — you see IntHistogram, which is deprecated; you see Histogram, which is the explicit form today; and Summary, which is an alternative; and then we'll just sort of lump in a new type right there.
D: And just the last thing I hope is that we get the Dynatrace folks, including Otmar and Georg, to approve, because I know there are several different ways to go with an exponential histogram, and I just want to leave it to the experts, who are really invested in the math here. I just think that we should all keep reading it, but let's make sure that those people approve.
I: Yeah, so at a very high level, the math thing is: can we properly extend it? Right now it's very unrestrictive; it allows any base exponential. Essentially you define bound equals base raised to the power of some exponent, so the base is totally anything — anything goes. If you look at the message, that's really the key: the base has no restriction.
I
That
means,
if
you
want
to
merge
things
at
diff
with
different
base.
That's
usually
a
lossy
merge,
so
there
are
different
proposals.
How
you
want
to
restrict
base
to
a
certain
series
like
level
what
it
is
creates.
Only
a
given
series
of
base
is
allowed,
not
the
alternative,
then
any
any
two
base
from
that
theory
can
be
merged.
D
Yeah,
I
don't
have
a
problem
with
sort
of
that.
What
you
just
described,
could
you
just
say
for
the
group
and
just
to
clarify:
are
there
other
formulas
other
than
what
we
see
on
line
18
bound
equals
base
to
the
power
of
exponent?
I: Well, this is just the plain, standard — not elementary, maybe middle-school — description of an exponential. There are other variations, like approximations — linear, cubic, quadratic — but those get more exotic. So I'm following the spirit of "keep it simple": everybody understands what this simplest formula is.
I
This
is
alternative
formula
which
is
actually.
This
is
the
same
thing
proposed
in
the
udd
sketch
also
from
one
comment
from
a
person
in
google.
They
said
they
are
internally
using
this
formula
too,
and
this
happens
to
be
internally
used
by
newbie
relic
too,
because
a
nice
feature
any
any
two
base
from
the
theory
can
be
merged
losslessly.
I: So I further described that the current proposal — the "allow anything" one — is extensible to this form. It's probably at the very bottom; there's a way to do it. Just go down to the very, very bottom — there's a way.
I: It's the scale of the base — a discrete base. You can say: it's base two, and the scale is one, two, three, four, five — so any two bases on the scaled base are mergeable. And in terms of protobuf it's backward compatible, because you are changing a single value to a new oneof. And this is, again, even extensible: the base scale is like a new method, and you can add other scales later — three or more.
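The scaled-base scheme being described can be sketched in a few lines. This assumes the convention bound = base^index with base = 2^(2^-scale); the function names and the exact bucket-boundary convention are illustrative, not taken from the OTEP:

```go
package main

import (
	"fmt"
	"math"
)

// base returns the bucket base for a given scale: base = 2^(2^-scale),
// so scale 0 gives base 2, scale 1 gives sqrt(2), scale 2 gives 2^(1/4), ...
func base(scale int) float64 {
	return math.Pow(2, math.Pow(2, -float64(scale)))
}

// bucketIndex returns the index i such that base^(i-1) < v <= base^i,
// i.e. i = ceil(log_base(v)), for positive v.
func bucketIndex(v float64, scale int) int {
	return int(math.Ceil(math.Log(v) / math.Log(base(scale))))
}

// downscale maps a bucket index at scale s to the enclosing bucket at
// scale s-1. Every boundary at scale s-1 is also a boundary at scale s,
// which is why merging a fine histogram into a coarser one is lossless:
// two adjacent fine buckets collapse exactly into one coarse bucket.
func downscale(index int) int {
	if index > 0 {
		return (index + 1) / 2 // ceil(index/2) for positive indexes
	}
	return index / 2 // Go truncation is already ceil for non-positive values
}

func main() {
	// The value 3.5 lands in bucket 4 at scale 1 (base sqrt(2)) and in
	// the enclosing bucket 2 at scale 0 (base 2).
	fine := bucketIndex(3.5, 1)
	fmt.Println(fine, downscale(fine))
}
```

Two histograms whose scales come from this series can always be merged by downscaling the finer one to the coarser scale first, which is the lossless-merge property mentioned above.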
D: Thank you, that's exactly what I was asking, and I agree that is a compatible future thing we could do. This is wonderful.
C: Here, if that's okay. I think the two to-dos here are: we need more people to go review this, and I will make sure that, on our end, the folks that you mentioned who were suggesting base two go in and actually comment on the OTEP as well.
C: I think it's flipped up and down, if I recall correctly. Yeah, I flipped it 21 days ago. So this is…
C
If
we
have
a
cumulative
aggregation
temporality,
but
we
don't
know
the
start
time,
then
some
really
hard
questions
get
opened
right.
So
the
proposal
here
was
to
make
the
start
time
required
for
cumulative
metrics,
so
that
we
can
absolutely
determine
when
a
data
stream
has
reset,
as
opposed
to
doing
gymnastics.
C
Is
that
something
we
want
to
pull
the
trigger
on
and
actually
specify
right
now
or
is?
Is
this
something
we
want
to
do
later?
I
don't
think
this
is
something
we
can
do
later.
I
think
if
we
specify
it,
if
we're
going
to
do
this,
we
have
to
specify
it
out
the
gate,
because
it's
a
requirement
that
would
break
existing
users
right
where
they
are
suddenly
violating
a
protocol
restriction,
and
this
is
one
of
those
fuzzy
restrictions
that
you
literally
can't
write
in
protocol
buffer
language.
D: Since that's my issue, I'll say a little more; it's at the bottom of this, and in the time since I wrote this I've come to a place of having a sort of good answer, at least I think, for myself. OTLP has a data point type that's meant for the case where you don't know the reset: that's called gauge — or it's called non-monotonic cumulative.
D: It's called gauge, let's say, because if you do know the start time, you can use one of our sum points. And this matters mostly because people are used to Prometheus, and they're importing Prometheus data into OTLP.
D: If Prometheus were more OpenMetrics-compatible with its own, sort of, spawn, it would actually have reset times, so this wouldn't be an issue. So we're saying this is really about legacy data from Prometheus that's not OpenMetrics-compatible: just use gauge if you can, and if you're using Prometheus for your time series data store, whether it's a gauge or a counter doesn't actually matter to you very much — so that's fine.
D: Now, I think it matters for sums to know when something's a counter or a gauge. So I have a proposal that I can use inside of a collector, or anything that's importing Prometheus data, to handle this correctly. The proposal is written up somewhere at the bottom of this thread, I think, or one of the others on this topic. It basically says that you can reset a timeless cumulative by choosing a new start time, and this only works for monotonics, which is what Prometheus exports.
D: So, when you're importing Prometheus data and you see a cumulative, just reset it. Remember when you reset it, and remember the value that you reset at, and then, from then on, you report the difference between that and your value — and you have effectively changed a timestamp-less input cumulative into a time-stamped output cumulative by resetting it. The Prometheus sidecar that we have working at Lightstep — the OpenTelemetry Prometheus sidecar — does exactly this. It was copied from the Stackdriver sidecar, which did exactly that.
D: So I think that's pretty legit. What it says is that we're going to put a gap into the series when we have resets and we're importing Prometheus data — as opposed to what Prometheus does when it imports that same data, which is to put a heuristic in place. So OTLP is changing how to handle resets, essentially, and the implication is that you should handle it this way.
D: Where does that belong? In the proto repository there is an issue where I wrote the pseudocode that I just described. We should find that, because this issue is very old; it actually could be moved into the proto repository, I think.
C: Okay — to deduplicate this issue with the proto-based issue, I know. But in terms of everything you just said: the algorithm you define probably should be specified somewhere.
D: Right, yeah. I think of that as data model — it's like in the section where you talk about importing data, where you're saying: I'm importing a monotonic cumulative from a legacy source without timestamps; here's how to handle that. And it's one of several kinds of valid data conversions that we're talking about between the stream model and the time series model, essentially, I think.
C: I'm actually going to bring that up, because I think it's highly correlated, and I was actually going to do the delta-to-cumulative conversion one, because…
D: I've defined it as delta-to-cumulative, so I can define half of this as cumulative-to-delta. I'm importing a Prometheus time series — cumulative, without timestamps — and I'm going to turn it into deltas by remembering the last value, and then you're going to turn it back into a cumulative using your delta-to-cumulative. It should just work out.
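A rough sketch of that round trip, under the same caveat that the names are illustrative: the first stage remembers the last cumulative value and emits deltas (treating a drop as a source reset), and the second stage sums the deltas back into a cumulative.

```go
package main

import "fmt"

// cumulativeToDelta remembers the previous cumulative value and emits the
// difference. The first point only establishes a baseline, and a drop in
// the value is treated as a reset of the monotonic source.
type cumulativeToDelta struct {
	primed bool
	last   float64
}

func (c *cumulativeToDelta) push(v float64) (delta float64, ok bool) {
	if !c.primed || v < c.last {
		c.primed, c.last = true, v
		return 0, false // baseline or reset: no delta to report
	}
	delta, c.last = v-c.last, v
	return delta, true
}

// deltaToCumulative rebuilds a cumulative series by summing deltas.
type deltaToCumulative struct{ sum float64 }

func (d *deltaToCumulative) push(delta float64) float64 {
	d.sum += delta
	return d.sum
}

func main() {
	c := &cumulativeToDelta{}
	d := &deltaToCumulative{}
	// A timestamp-less Prometheus-style counter that resets at the 5.
	for _, v := range []float64{100, 130, 150, 5, 25} {
		if delta, ok := c.push(v); ok {
			fmt.Println(d.push(delta))
		}
	}
}
```

Chaining the two stages yields a well-formed cumulative whose start is defined by the importer, which is the "it should just work out" claim above.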
C: Okay, sounds good — I mean, that sounds wonderful. Okay, cool. You are the owner of this, so I'm just going to put the action items here; we'll comment, and that's that. For those of you who aren't familiar with the delta-to-cumulative thing, let me pull that up.
C: How to rebuild from deltas — this is a related one that's required for GA. I threw my thoughts together here, but the open question is: if we get a bunch of delta points that don't have sequence numbers — I mean, we have timestamps, but if we get a bunch of delta points from something — what do we do? So, effectively, I decompose this into two cases.
C: There are going to be delta-based metrics that we get from non-OTLP sources like statsd, where effectively we're going to have a best-effort solution for what to do in the absence of timestamps; and then we'll have delta-based metrics from OTLP, where I think we can enforce that timestamps exist and that they abide by certain characteristics, and we can do a little bit better job.
C: So this I was planning to write up this week, to get a proposal out there into the data model spec for folks to read. If anyone's interested or has comments, please comment on this issue. But yeah — I agree with you, Josh, they're highly correlated.
C: Awesome, thank you so much. Okay, the next thing is around OTLP concerns. We have 15 minutes left, but I think this is actually going to be our discussion and our follow-up. So, to rehash: Tigran ran a bunch of benchmarks around this — literally just encoding and decoding protocol buffers — on OTLP v0.4 and head, with our new oneof changes. Head would be the most recent; v0.4 was previous. It's nanoseconds per op: the fewer nanoseconds per operation, the better.
C: This, I believe, is overall time spent, so you can see, for simple integer metrics — where I think this is just sending an integer gauge — it is an incredible slowdown if we're sending mass amounts of gauges. It's less of a concern around histograms, but that's because the overhead of sending a histogram, and all the marshaling and unmarshaling, makes it just less noticeable. This is literally the worst-case scenario of: how bad are oneofs in Go compared to not using oneofs?
C: Well, pretty bad, because of the full extra allocation and pointer chain that you have to go through, right? Victor, you ran some benchmarks in .NET where you didn't see the same kind of slowdown — I believe there was still a slowdown. Do you want to speak to this?
B
All
sure,
if
you
scroll
down
to
the
second
chart,
which
which
has
a
longer
run
really
the
the
the,
if
you
just
look
at
the
differences,
I
did
I
pretty
much.
I
tried
to
copy
tigran's
example,
so
the
example
is
basically
encoding
and
decoding
different
types
of
data
and
I
think,
there's
a
hundred
metrics
in
there
and
there's
a
bunch
of
data
points
and
so
forth,
but
in
general
the
the
message
is
pretty
much
between
the
one-off
and
non-one-off
it's
pretty
similar,
depending
on
whether
you're
doing
encoding
or
decoding.
B: In some cases the encoding may be a little bit faster; in some cases the decoding may be a little bit faster. But in general, the difference is really minute between having oneofs and not having oneofs.
C: Yeah, so I think what this points at is that, effectively, protocol buffers in Go suffer performance-wise from some of the designs that we're using — these oneofs. So that leads to a fundamental question: is this inherent to Go as a language?
C: Number two was: protocol buffers are designed so that you don't have to unpack these messages — the SDKs do the encoding, and then in the collector, unless you're actually unboxing a data point, you don't touch it — and Go is eagerly expanding everything all the time, right? So Tigran has a few examples of treating things as black boxes. Or, for example, in Go we can define a version of OTLP that doesn't use oneof, that just uses repeated fields…
C: …that has a comment that says "these are all oneofs" — and I actually guarantee your performance will improve if we do that, and it is completely binary compatible, right?
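The binary-compatibility claim holds because oneof is purely a schema-level construct: a field declared inside a oneof is encoded on the wire exactly like the same field declared outside one. A hand-rolled sketch of that encoding (field number 4 for as_double matches the published NumberDataPoint message, but treat the numbers as illustrative):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"math"
)

// encodeDoubleField encodes a protobuf double field: one key byte
// (fieldNumber<<3 | wireType), where wire type 1 means a 64-bit value,
// followed by the 8 little-endian bytes of the IEEE-754 representation.
func encodeDoubleField(fieldNum int, v float64) []byte {
	out := []byte{byte(fieldNum<<3 | 1)}
	var b [8]byte
	binary.LittleEndian.PutUint64(b[:], math.Float64bits(v))
	return append(out, b[:]...)
}

func main() {
	// Whether the schema says `oneof value { double as_double = 4; ... }`
	// or a plain `double as_double = 4;`, both lower to this exact byte
	// sequence: the wire format carries no oneof marker at all, so the
	// two schemas are interchangeable on the wire.
	insideOneof := encodeDoubleField(4, 1.5)
	plainField := encodeDoubleField(4, 1.5)
	fmt.Println(bytes.Equal(insideOneof, plainField))
	fmt.Printf("% x\n", insideOneof)
}
```

The difference is only in the generated Go code: the oneof version wraps the value in an extra interface-typed allocation, which is where the benchmark slowdown comes from.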
C: So I think the next step here that I want to call out is: we need to brainstorm some ideas for how to fix performance within Go that are practical. I'm not suggesting that rewriting Go protocol buffers is practical, but who knows — maybe this community is way more intelligent; well, I know they are more intelligent than me — maybe it's that kind of community, where we can make our own implementation.
C: This is meant to be a very, very lightweight agent. It doesn't affect everybody, but I think it's important enough to OpenTelemetry that we can't afford a 4x performance degradation around sending gauges. I think we can agree that is a little too much; if it was like a five percent degradation…
C: Okay, you know, that's a different story — but 4x is pretty bad. Does anyone disagree with that?
Okay,
so
anyone
have
any
brainstorming
ideas
of
what
to
do
here.
G
G: I have a brief comment — I'm Jacob from InfluxData. I've made performance improvements to generated protobuf in the past by adding a patch file adjacent to the .proto file, and then a go:generate directive that both calls out to protoc to generate the protobuf and then applies the patch. It's clean, because anybody can just run make or go generate; but it's a little dirty, because if the protobuf implementation changes, then somebody has to modify the patch the next time the code is generated.
C: That's a great suggestion. All right, let me — I'm not on the right window; here we go.
C: All right, I'm going to throw out the other idea that I had — and this could actually fall in line with your patch — define a…
C: Now, I want to call out, just so everybody's aware: when you define a oneof foo, right, with a bar name field inside it…
C: …you can also have a repeated bar, right, and then there's this weird thing that happens where, I believe, with "repeated bar name" — I don't remember what happens, sorry — "name equals five" — I don't remember what happens here. So if we just say that, even if protocol buffers allow that, we're not going to allow it, I think we're fine. But there's this weird namespacing thing that happens with oneof, where in Go and in other languages…
C: …you end up with, like, foo and name being namespaced together. So the whole notion that we can define a second OTLP relies on the fact that we don't reuse these names in and out of the oneof with different field numbers — and given that we actually want to support JSON, I think that's mandatory. So, anyway, just calling that out in case people weren't aware of some of the really dumb things in protobuf.
C: Yeah, I can neither confirm nor deny whether there is or is not discussion yet, because I don't know — I'll have to go check what I'm allowed to say.
D: The Google C++ protobuf library takes the opposite approach and actually manages, like, a oneof — I guess it uses union tricks.
C: I can neither confirm nor deny that. I can tell you there is an alternative way to encode this that is less nice on the user and far more binary-friendly and far more runtime-friendly, and I'm more than happy to outline that in the bug.
D: And it sounds like you only need that in the collector — and it sounds like the collector already has a wrapper around OTLP. So maybe this is kind of an issue that can be hidden, meaning you can ensure that there's always one of the repeated set in the pdata abstraction of the collector codebase.
C: Right. So I can push on that a little bit. I'll also see what is going on internally that I can talk about publicly, and put that in the issue for folks, if you're curious. Yeah — anyone else have any other brainstorming ideas for how to resolve this?
B: Is there room for that discussion, or maybe that's just too far out? You know, from that perspective: if we were able to — and if it was just, you know, convenient — to have the semantics to just use repeated, in general, for these instrument kinds, we'd avoid the whole issue.
C: I need to see what you're talking about here, because I'm not grokking it right now.
B: Right. So that oneof, where you have a histogram and a gauge and so forth — those are, in effect, different sections. So what if we loosen it up — and this is just for discussion; I don't know that we should do this — but if we loosen it up to just always be repeated… okay, so that's not the…
C: The issue — I think the issue in the proto is… where is that metrics proto that I had open? Do I still have it open here? It's not this oneof; this oneof kind of pre-existed, right? This was already there.
C: It's actually NumberDataPoint — the number data point. This sucker now has a oneof; this is the problem.
C: Well, so that's what I'm proposing we do, just for the collector — so only the collector would see this. There would be an optional as_double and an optional as_int, and they wouldn't be inside of a oneof. In the collector, effectively, what happens is this oneof turns into a structure in Go that then has a substructure that has a double, and a structure that has an int, instead of the structure for NumberDataPoint just having a single double in it, right?
C
That's
like
a
huge
amount
of
boxing
and
pointer
indirection
and
allocations
for
go
just
just
to
call
this
thing
a
lot
of,
but
if
we
make
these
optional
values
on
number
data
point
only
for
the
collector
right,
then
this
number
data
point
suddenly
just
has
two
extra
pointers
on
it:
allocation
wise:
it's
not
a
big
deal
because
we're
already
allocating
this
and
everything's
gravy
it.
Whether
or
not
it's
repeated
has
nothing
to
do
with
it.
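The boxing cost described above can be illustrated with hypothetical Go struct shapes (simplified from what protoc-gen-go actually generates; the type names here are made up for illustration):

```go
package main

import "fmt"

// With a oneof, the generated code holds the value behind an
// interface, plus one wrapper struct per case, so reading or writing
// the value goes through an extra allocation and indirection.
type isValue interface{ isValue() }

type valueAsDouble struct{ AsDouble float64 }
type valueAsInt struct{ AsInt int64 }

func (valueAsDouble) isValue() {}
func (valueAsInt) isValue()    {}

type oneofNumberDataPoint struct {
	Value isValue // boxed value
}

// With plain optional fields, the values live directly on the data
// point as two extra pointers, and no wrapper types are needed.
type optionalNumberDataPoint struct {
	AsDouble *float64
	AsInt    *int64
}

func main() {
	box := oneofNumberDataPoint{Value: valueAsDouble{AsDouble: 1.5}}
	if d, ok := box.Value.(valueAsDouble); ok {
		fmt.Println(d.AsDouble) // prints 1.5
	}

	v := 1.5
	flat := optionalNumberDataPoint{AsDouble: &v}
	fmt.Println(*flat.AsDouble) // prints 1.5
}
```

In the real generated code the oneof wrapper is itself behind a pointer, which is what makes the allocation overhead show up on the hot path.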
C
Repeated might actually cause more performance degradation, because you'd be allocating an array versus allocating a single, you know, float. But whatever. I think... hopefully that answers your question.
B
Yeah, I think it does. I was just suggesting that maybe there's some... you know, instead of specifically specifying this is for, you know, Go versus others. I mean, I'm sure there's a different way to specify this protocol where we keep this as, you know, a structure of as_double, and maybe... and this is just...
B
C
And this is in the hot path and is causing the performance degradation. Understood, yeah. So if we make this be like a repeated message of values, that's just exacerbating the problem, and we wouldn't have the ability to fix it, possibly, in that case, because there's no binary-compatible way to do that, right?
B
Understood, yeah. Yeah, understood. So all I was suggesting is: don't do the oneof. Just do the, you know, as_double and as_int as just optional, and just put a separate field that says please use, you know, as_double or as_int. Then you don't have to specifically specify this to be Go versus other languages, because that would just be, you know, OTLP.
C
Yeah, we could get rid of... so, okay, let me write that down. Effectively, and let's not add the repeated complication in here, because it doesn't have to be repeated. Oh sure, yeah, we can just have as_double and as_int, and say, you know, the value itself has to be one of these two. We just get rid of the protocol buffer oneof enforcement in other languages, and that way it's consistent across Go and other languages, what happens here, and that likely fixes the performance issues. Right, right.
B
C
B
D
D
C
Yeah, yeah, that's fair, and I can't spell.
F
Not to throw things out of whack, but is there a possibility... because I know there's been some recent conversation about the stability again, and is this going to throw that? Because we've made some other changes, and 0.8 was released with the metrics changes, which, you know, we knew had some hard changes, but then there was also some stuff that's being deprecated, you know, in three months. Is this something that would break that compatibility?
C
It's a great question. We're running out of time, so I'm gonna nip this in the bud: no, it doesn't, because a oneof and optionals actually encode the same in binary.
C
So
if
you
take
in
a
binary
message
that
was
using
the
one
of
and
you
use
the
same
field
number
for
an
optional,
you
can
read
that
same
message.
It's
totally
binary
compatible
if
the
field
name
is
the
same.
Theoretically,
it's
json
compatible
depending
on
how
your
json
is
working,
but
that's
something
I
need
to
look
into.
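The binary-compatibility point follows from how protobuf identifies fields on the wire: the serialized key is just (field_number << 3) | wire_type, and oneof membership never appears in the encoding. A minimal sketch (the field number 4 here is illustrative):

```go
package main

import "fmt"

// wireKey computes the protobuf key for a field:
// (field_number << 3) | wire_type.
func wireKey(fieldNumber, wireType int) int {
	return fieldNumber<<3 | wireType
}

func main() {
	// A double field numbered 4 uses the 64-bit wire type (1).
	// This key is identical whether the field is declared inside a
	// oneof or as a plain optional field, which is why moving it out
	// of the oneof is binary-compatible as long as the field number
	// stays the same.
	fmt.Printf("0x%02x\n", wireKey(4, 1)) // prints 0x21
}
```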
C
JSON... I was just worried about the proto side. I do care about JSON, and I really want to stop breaking it all the time, but that's a different story. But yeah, as long as the field numbers are encoded correctly, a oneof is literally just a semantic concept at the protocol level that languages have to encode differently. From what's in the binary, it doesn't matter: the field number is the only thing that matters, and there's some additional check semantics...
C
When
you
generate
the
message
that
don't
really
matter
on
the
wire
right,
they're,
not
there
on
the
wire,
they
don't
show
up.
So
it's
it's
almost
like
a
static
type
versus
runtime
type
thing.
So
runtime
we're
fine
in
all
of
these
things.
So
I
I
it's
a
great
great
concern.
Thank
you
for
raising
that
all
right
in
terms
of
follow-up,
since
we're
way
over
apologize.
C
Let's take each of these brainstorming ideas and see if we can attach a person to try to investigate them. This patch thing, I think, might be used in tandem with one of these other bits. I might push on this notion of a Go version of OTLP and rerun Tigran's benchmark, where all we do is remove that one nested oneof, and see what the performance differential is, to identify...
C
...if that really is the primary suspect, or if it's something else. I know that it is a suspect, but I don't know if it's enough of the suspect; there might be something else we did that caused some slowdown. So I was gonna take all that. Victor, would you want to maybe outline this as a proposal for us to evaluate? Okay, cool. Anything else that folks think we need to do or get started on for next time?
C
Okay, great. Thank you, everyone. Sorry for running over; look forward to seeing you all next week.