From YouTube: 2022-11-01 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
B: Okay, let's start. I see 11 people; hopefully more will join in a bit. So we can start with the first item: Josh, instrumentation stability update.
C: Yeah, I didn't have a chance to write up my second point, but let's talk about the first one. We're working on instrumentation stability and we're starting to dive deep into what we want to enforce with metrics, and we have three hard questions we're trying to answer. This is the first one: should we consider moving a metric data stream from integer to floating point a breaking change or not? We had a bunch of discussions when we made integer and floating point two variants of one number type, and the current thinking from the working group is that we'd like to propose that this is not a breaking change, that it is allowable and is considered an optimization technique of SDKs.
C: So, basically, when you look at a metric stream, you need to take either integer or floating point and just treat them as a number. That's the proposal on the table, and we're looking for feedback here on whether that's the right consideration. Our reasoning, by the way: in languages like JavaScript you only have number, you don't have integer versus floating point, so when we look at SDKs it might actually be hard or impossible for different SDKs to denote an integer versus a floating point.
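For context on "just treat them as a number": a backend reading OTLP number data points already has to accept either representation. A minimal sketch of that normalization (the asInt/asDouble naming mirrors the OTLP NumberDataPoint oneof; the helper itself is hypothetical):

```typescript
// Hypothetical helper: normalize an OTLP-style number data point to a
// plain JavaScript number, whichever oneof field the producer set.
interface NumberDataPoint {
  asInt?: bigint;    // set when the producer emitted an integer point
  asDouble?: number; // set when the producer emitted a floating point
}

function pointValue(p: NumberDataPoint): number {
  if (p.asDouble !== undefined) return p.asDouble;
  if (p.asInt !== undefined) return Number(p.asInt); // may round above 2^53
  throw new Error("empty number data point");
}
```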
C: The change would be: if I release a new version and I switch from int to float, or if it's an integer in Java and a float in JavaScript for the same semantic convention. I don't think we should allow it to change per process, but for a different time series it might be an integer from Java and a floating point from JavaScript, and we want to consider that okay.
F: That's not right, not in the data model. Yeah, the data model did put that in there by excluding it from the list of identifying parts, and I recall this from the debate with OpenMetrics when we decided to create the number data point in the first place. We had this discussion, and the OpenMetrics spec allows you to mix these, and half my recollection was that it had to do with the representation of an infinity or a NaN value that was going to be representative of a missing point.
F: I meant NaN value. Because a NaN value had to be a floating point, any metric could be a floating point. What I read from the OpenMetrics spec, and I think we decided to keep that feature, was that it was thinking of it as an optimization, like Josh says.
A: Okay, only one concern that I have here, Josh, and maybe it's worth defining: during the conversion you may lose precision. This precision may not matter that much but, for example, in the case of a cumulative monotonic sum, the value may go down because of the conversion precision. So this is an edge case you have to cover: you have to make sure that we are protecting against it somehow, by saying, okay, if it's within 0.001 percent, or whatever the precision is that we are looking for, then this is okay.
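To make that edge case concrete: a 64-bit integer counter near 2^53 loses exactness when converted to a double, so a switch from int to double can look like the sum went backwards. A small plain-JavaScript illustration (not any SDK API):

```typescript
// 2^53 + 1 is not representable as a 64-bit float: converting rounds it
// down, so a stream that switches from integer to double at this point
// would report a lower value than the last integer point.
const lastIntPoint = 2n ** 53n + 1n;           // 9007199254740993
const firstDoublePoint = Number(lastIntPoint); // rounds to 9007199254740992
console.log(BigInt(firstDoublePoint) < lastIntPoint); // true: the "monotonic" sum dropped
```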
C: All right, let me think about a few things here. First of all, I hear what you're saying, but that's kind of a backend concern with how we deal with OTLP. From what I understand, because this is what I thought as well, I just couldn't recall specifically: we had decided to have integer versus floating point be open and allowable.
C: The OpenMetrics spec also allows that, which means backends kind of have to deal with it. So I think, from a semantic convention standpoint, we don't enforce the data type in the semantic convention. We allow the SDK or API to choose the data type that they want, if they can and where they can. It doesn't matter because, again, some of them don't have that option.
C: Unless they have generated code from semantic conventions. But again, my goal here is to not lock down things that we don't need to lock down, but to lock down things that do cause breakages. So if we've already exposed all backends to having to deal with this problem, I would prefer just not enforcing it.
C: No, no, but I was looking through, and there are a few things I think might be missing in terms of enforcement. For example, the semantic conventions enforce what instrument to use, but they don't enforce whether it's asynchronous or not, because again that doesn't matter.
C: However, if we're going to do any kind of codegen from these, then it does matter which one you produce, or you just produce both. So anyway, we just want to be very explicit that instrument kind is not breaking if you switch between the asynchronous and synchronous versions of the same instrument.
G: It has the units, and if we think the actual type can change, then the next question is whether we should remove those types from the spec. I thought they should be consistent: if we're saying it can switch freely between int and float without being considered a breaking change, then there's no point in having the underlying point type in the spec anymore.
C: Jack, you have your hand raised, do you want to jump in?
E: Yeah, I just have a quick question. In dynamically typed languages like JavaScript, where you can't enforce floating point versus integer arguments, how do they decide, when they're exporting over OTLP, whether to send a floating point or an integer version of a data point?
H: I can answer for JavaScript specifically. When you create the metric there's an option for type, for integer versus decimal, and if you don't specify, we just assume it's a float. So you can specifically tell us that it's an integer, and if the type is integer, then we export it that way; otherwise we export it as a float.
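For reference, that option looks roughly like this in the JS metrics API (a sketch assuming the current @opentelemetry/api package layout; the instrument names are made up):

```typescript
import { metrics, ValueType } from '@opentelemetry/api';

// Acquire a meter; the name is illustrative.
const meter = metrics.getMeter('example-meter');

// Explicit integer type: exported over OTLP as integer data points.
const requestCount = meter.createCounter('request.count', {
  valueType: ValueType.INT,
});

// No valueType given: the JS SDK assumes ValueType.DOUBLE.
const requestDuration = meter.createHistogram('request.duration', {
  unit: 'ms',
});

requestCount.add(1);
requestDuration.record(12.7);
```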
C: So I guess what I'm hearing is that there's some nervousness about switching between integer and double in process, but having one process send integers and a different process send floating point we're not really worried about. Is that accurate?
C: Okay. So what I can do is: when we enforce semantic conventions, we won't enforce a type in semantic conventions going forward, but what we will do, for instrumentation purposes, is say that if you're sending floating point numbers for a time series, changing it to integers is considered a breaking change of your instrumentation.
A: Right, that's not totally true, because if you are emitting the CPU usage of your system, that doesn't change when you restart your app.
A: So, to state this correctly here, George: it's not a restart of your app, it's a restart of the time series. I think we have the concept of a restart for a time series, so just point to that, because that's the correct definition that we are looking for. And then, yes, I know it's a bit harder, but we have the right definition there.
C: We need to find a place in the specification to write that; that makes sense, yeah. So I think, going forward, basically, semantic conventions will not enforce a type, and we will put in this line that, for the lifecycle of a time series, the point kind should remain the same. I might add that into the data model, if that's amenable to folks, and then we'll go forward with our next set of problems. Yeah, Bogdan.
F: That sounds good to me, Josh. There's already a section, which I was just reviewing: the way the data model is written, it does allow you to change instrument types as long as you change something else. So if you change a resource, you can change instrument types. Now, I know for semantic conventions we're not trying to allow that, but the way it is already written is that if you change your resource or your scope, then you can change everything, because it's a new time series, and that might be a good place to put yours.
F: All right, I'm here, sorry, I think I was muted by accident. I'm looking at the section titled "OpenTelemetry Protocol data model" in the data model, and it finishes with a sentence along the lines of: within certain data point types, e.g. Sum and Gauge, there is variation permitted in the numeric point value; in this case, the associated variation is not considered identifying. But there's nothing about timestamps in that section. I just wanted to call that out, because it's trying to define what counts as the same time series so that you can merge them.
C: Yeah, the backend is going to have to deal with this at some point; it's more about where and how. I'm more worried about, say, delta to cumulative. This at least gives you the ability to treat delta to cumulative with the same point kind, and if the point kind changes, it's actually a series restart, right. But it doesn't give you the ability to, say, remove labels and things without having to look at both integer and floating point. So there are some use cases.
I: That's probably an additional nuance that we need to consider as well: if you have an environment that persists the values before they get sent on, they could get persisted as floats and then get restored and converted to int, and vice versa. So, for effectively offline scenarios, or when you're unable to send at that point in time, it just gets persisted to disk and then reloaded.
H: I know that this is important for at least some mobile use cases.
C: No, but in those cases, I think the data should effectively be a previous time series that gets written. Are you saying it gets aggregated with the most recent value?
I: I don't know about the metrics side; Daniel might be able to answer that one better. It was just more of a general comment, not necessarily linked to a specific time series: if we allow things to change, what should it do? Should it treat it as a separate time series or not?
C: Right, right, and what we're saying is basically: if the point kind changes, then it is a new time series in some fashion. Backends tend to merge time series together all the time, so where a backend merges to a different time series, you need to be agnostic towards what the point kind was. Yeah, so we can call that out better.
C: No, specifically we're only talking about metrics, where you can have that integer or floating value. We actually have another topic, probably in a week or two, where we'll have a discussion about attribute point kinds. I don't think we would take the same approach there; that one we might have to be more rigid about. But yeah, cool.
D: Can we argue for the opposite? This is creating work for the backend, right. If we have a solution in dynamic languages, like the one that was just described, requiring it to be in the API when you create the instrument, then maybe we should be more restrictive on the emitting side. Let's restrict it, don't allow it.
C: I think I'm willing to do that if we're willing to make integer versus floating point a required part of the API.
C: So if we wanted to go that route, that's the expense of it, which I think means we need to talk to Ruby and we need to talk to other dynamic languages.
C: I think we're trading instrumentation flexibility for backend flexibility.
G: In our cases we used milliseconds. If you look at the current spec, I think we have a mixture of seconds and milliseconds, and the reason, as I remember, is that some people prefer milliseconds for, for instance, duration types of things, and seconds are used for measuring total time, for example for some Python operation or CPU time.
E: Okay, so what we gain by retaining this flexibility is, I guess, a larger margin of error when we stabilize the semantic conventions around things that may be precision dependent.
C: Yeah, effectively we're treating int versus floating point as a precision-based feature, right, and we're forcing backends to deal with whatever we can give them, which, in my experience, is common for backends to deal with.
J: Yeah, I just want to point out that, if we're talking strictly precision here, floating point numbers are not a precise representation system, so maybe the opposite is actually wanted. There are only 53 bits of precision in a 64-bit floating point, so it may be that you actually want 64 bits of precision there.
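In JavaScript terms, the limit being described is Number.MAX_SAFE_INTEGER; a quick illustration:

```typescript
// A double has a 53-bit significand, so integers are exact only up to 2^53.
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 + 1 === 2 ** 53);                 // true: 2^53 + 1 rounds back down
```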
D: And that's the thing, right: if you don't allow changing the units, then what's the other use case? Like what Plyler said, there is that particular use case when you move from milliseconds to seconds.
J: You disallow that. I think we're getting caught up on the unit stuff; it really was just: if you have values that exist in that range of really large numbers, and you're representing them with floating points, but you could actually represent them with 64-bit integers, there may be a reason you want to switch, just to have the precision at that level. I don't know, clock cycles or something like that; that doesn't really make sense, but I'm just saying there's...
C: Okay, so in terms of making progress: Tigran, you raised the devil's advocate question of whether we should be more flexible for backends, or whether we should be more rigid in instrumentation so backends have more flexibility in how they treat things. I think it's an interesting question. I'm still leaning towards, given where the specification is today, not enforcing types, because I think going and trying to enforce that in all those different communities is just a bit of a non-starter, given that metrics are currently going stable and that we didn't enforce it previously. That said, if we do think there's enough momentum in this group to go make that change, that's fine. Do we feel that way, though?
A: I think changing the API to enforce the type at the time of creation of an instrument will be a hard change to sell, and probably a breaking change for our community, because the API is stable. Because of that, and because we don't have anything around that part in the API, I believe it's going to be hard to do this change to the degree that you want.
A: Unfortunately, it is what it is, but I think at least the fact that within a single time series we can change only after a restart is good enough. We already track restarts for a time series, so it's not going to be that much of a problem. In my opinion, it's going to be on the plotting side: if you are plotting the same time series across restarts, you have to worry a bit about this thing, but I don't think it's...
A: You can always fall back to double and lose the precision if people are changing this. So essentially, what I would do as a recommendation: if it's only ints, keep it int; otherwise, if you see any double in the interval that you are trying to plot, or you are trying to merge, or whatever, merge it as double, even when merging rows.
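A minimal sketch of that widening rule (a hypothetical helper, not any collector or SDK API): keep integer math while every point in the window is an integer, and fall back to double as soon as one double appears.

```typescript
// Hypothetical: merge a window of points, widening to double only when needed.
type Point = { asInt?: bigint; asDouble?: number };

function mergeWindow(points: Point[]): Point {
  if (points.every((p) => p.asInt !== undefined)) {
    // All integer: sum exactly, no precision loss.
    return { asInt: points.reduce((acc, p) => acc + p.asInt!, 0n) };
  }
  // At least one double: the whole merged result becomes a double.
  return {
    asDouble: points.reduce(
      (acc, p) => acc + (p.asDouble ?? Number(p.asInt ?? 0n)),
      0,
    ),
  };
}
```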
A: Oh, this is another thing, Josh, sorry, that you may want to put in the description: merging time series, which is a feature that we are working on in the Collector. I think this will become a problem, but I think what I just described to you is a reasonable solution.
C: So that was the main question we wanted to bring. There was a secondary thing we wanted to raise, but you actually added it as a topic, so I'll let you go next.
B: I was basically just going through some of the new PRs and new issues, and this is about new contributors adding, specifically, tracing conventions, and expanding what already exists, mostly for messaging systems and other related systems. I was wondering whether we want to basically pause them, or whether it's okay to have this stuff, or what this group thinks about that, specifically for things that are not related to the host or the local process.
C: So we talked about this in the semantic convention working group; I'd say two things. One is, if we have an expert group put together, like HTTP or messaging, send them to that working group, that SIG, because there's an open question on that PR about whether these semantic conventions actually belong as more generic things or not. I don't know if anyone followed up on the action item, but we're supposed to comment and reach out to them to join that messaging SIG, and work with the experts that we have around messaging, to outline whether what's being proposed is actually generic or not. If it's generic, we'd like to have generic semantic conventions, but otherwise that kind of work should be able to make progress. One of the goals we're trying to achieve is to unblock things as quickly as possible, and if there's a question of whether semantic convention stability will block a PR, escalate: we can talk about it in the working group, we can talk about it on chat, and if we think that this particular PR won't walk into an area where there's contention, let's let it through. Let's make sure it goes through an expert committee group and let's get it through. Okay.
B: Yeah, I already saw the comments mentioning that the messaging group will be reviewing part of this PR, or these issues, so that's good. Okay, good to know, thanks for the clarification. In that case we can go to the next item: Anthony, define OTLP 1.0 stability guarantees.
K: So I believe we've gotten through all of the remaining issues on OTLP JSON, and now we're at a point where we need to define the stability guarantees we will offer for OTLP 1.0, as a prerequisite to actually making it 1.0. Tigran has put forward a draft proposal that I believe reflects the consensus established through conversations we've had on this topic in the SIG and on other issues, and I would appreciate it if everyone could review and comment on it, so that we can attempt to establish that it is the consensus position to move forward.
D: So that PR about 1.0 is suggesting the strong version of the current terms, right, and yes, I think we need to review that and make a decision on it. Even before that, if we want to declare OTLP/JSON stable, we can do that with a subset of the guarantees that are in this PR; we don't need all of them for OTLP/JSON. That's a possibility: if we want to declare OTLP/JSON stable earlier than the 1.0 declaration, we can make that an intermediary step.
A: What does stable mean in that regard? Because we are just looking at the stability guarantees, and if we are just making something stable, it may be seen as a joke from our side: we haven't defined the stability guarantees, but we are declaring some of this stable. I'm all with you, if you want we can put a stamp on it and say stable, but arguing about stability guarantees while declaring something stable in the meantime is not serious.
D: Yeah, I'm saying it may be easier to come to an agreement about a subset of these things than the entire thing, right. That's what I was saying. So for JSON we need to agree on a smaller number of things, and for 1.0 we need to agree on a larger number of things. It's great if we can agree on 1.0 quickly, but if we're not seeing that happening, then maybe we'll just do the subset first and try to agree on that.
D: So you're saying the current ones already define what it means for OTLP/JSON to be wire-stable. So we have that; maybe I'm wrong. Okay, okay, cool. Then I guess it's a matter of just changing the label and saying that we're going to stick to those guarantees.
D: Okay, then fine. So then, in that case, two separate questions. One is the 1.0 stability guarantees: we need to agree on those; there is a PR for that; let's discuss it. But independently from that, are we ready to change the label on OTLP/JSON? It currently says Alpha, and we can change the maturity to Stable and be bound by the already existing stability definition.
A: One quick thing: if you do that, sorry Anthony, if you do that, make sure there are a couple of things that we define about JSON in the protocol specification documentation, and make sure those are somehow in front of the users, in the proto repo as well, some links, something that says this is not plain protobuf JSON, these are the rules here. I just want to make sure that we put that in front of the user, so it's visible.
A: And I don't think there is any disagreement on that. So if there is any disagreement, it will be on the generated code stability, but...
B: Or let's move to the next one; thank you so much for that. The last one: I was just wondering, there is this open question regarding metrics temporality, and there was this proposal in August about going stateless, basically trying to decrease the memory footprint in general. There seems to be support for that, at least from Jack, Walkden and Riley. What I would like to know is whether there's interest from the rest of the SIG.
F: As a brief refresher, this stateless temporality is one that would be cumulative for asynchronous instruments, so that no memory is required, and delta for synchronous instruments, so that no memory is required: you don't have to compute sums or differences in either case. It is one that some vendors may prefer, and we proposed it to help with that issue for ourselves, but I think it's probably more applicable than just one vendor.
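A sketch of how that preference could be expressed as an exporter temporality selector (the JS SDK exposes a selector of roughly this shape; treat the exact enum and type names as assumptions):

```typescript
import {
  AggregationTemporality,
  InstrumentType,
} from '@opentelemetry/sdk-metrics';

// Low-memory preference: delta for synchronous counters and histograms
// (no state accumulated across collections), cumulative everywhere else
// (async callbacks already observe cumulative values).
const lowMemorySelector = (instrumentType: InstrumentType): AggregationTemporality => {
  switch (instrumentType) {
    case InstrumentType.COUNTER:
    case InstrumentType.HISTOGRAM:
      return AggregationTemporality.DELTA;
    default:
      // UpDownCounters and all observable instruments stay cumulative.
      return AggregationTemporality.CUMULATIVE;
  }
};
```

An OTLP metric exporter could then, presumably, be constructed with this function as its temporality selector.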
C: I'm supportive of this as well, but I don't want to say that it's a no-memory solution, because you're still aggregating your deltas; it's a low-memory solution, yeah.
A: We can call it a minimum-memory solution, and you can even tune the reporting interval, to a second for example, to reduce it even more. But let's say we don't build up state between different reports, at least, so our state is equivalent to one report.
A: As opposed to stateless, yeah, yeah, that's fine. But I think it's a good change for users; I think users will benefit. So let's make sure we have it.
G: I have a question. If we have this option, do we actually give some requirements, like you have to make sure the implementation is correct so that it actually saves memory? My worry is that in some edge cases, I know implementations never purge existing allocations and reallocate memory, and with that option the user might be surprised to see that the promise is different from what they actually got.
G: Hopefully we will never give any hard requirements from the spec side saying that memory must follow some bound; it's more a recommendation: this is something you should consider, this is the way to think about memory, and you need to be cautious as the owner, yeah.
A: But I think, unless I'm wrong, this should be an internal implementation detail that can be changed. So even if it's not there yet, maybe it's not a bad idea to suggest that in the SDK, not necessarily as part of this. We can start with this approach and see if languages come to us and say, hey, I'm surprised about this because I need more work, and then we'll figure out if we need to clarify the memory requirements even further.
F: There are definitely users who are working in systems that will not accept deltas, and those users might find this to be somewhat of a disruptive change. There is a question about what...
C: So I want to call out two things. One, we do have a limited amount of investment available, but I think investment in the SDKs on lowering footprint is absolutely what I hear whenever anyone talks about what OTel instrumentation looks like. That's the most common question: what's your overhead? So anything that lowers overhead, or speaks to that, I think is super compelling and something we absolutely need to spend time on, lowering overhead overall.
A: But where are we? Yeah, this is just an option. We have a couple of high-level configuration options: one is configure to always delta, one is configure to always cumulative, and we want to have a third option that says configure for low memory. This is not going to be a solution for Prometheus, Prometheus users will not benefit from this, but OTLP users can benefit from it.
F: I think we may have a partial solution. First of all, the collector has been edited by various volunteers over the last year and a half, so that the Prometheus remote write exporter recognizes delta temporality and performs its own delta-to-cumulative temporality conversion. I wouldn't swear my life on that being correct, but I'd love to see it sworn as correct, and then the answer would be: run a collector, convert delta to cumulative, and you're good.
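The conversion itself is conceptually a running sum per series; a minimal hypothetical sketch (not the collector's actual code):

```typescript
// Hypothetical delta-to-cumulative converter: state is one running sum
// per series identity (here just a string key for brevity).
class DeltaToCumulative {
  private sums = new Map<string, number>();

  push(seriesKey: string, delta: number): number {
    const next = (this.sums.get(seriesKey) ?? 0) + delta;
    this.sums.set(seriesKey, next);
    return next; // cumulative value to forward downstream
  }
}
```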
C: Right, and the idea here is that I can control where my resource consumption is. My application instrumentation might have lower resource consumption, and I can allocate a collector somewhere else if I need different resource usage there. But the thing is, we're going to be evaluated by overall overhead, and I still think, when it comes to metrics, we're going to be evaluated through a Prometheus lens, regardless of where this eventually goes. I still think that number is going to be important to users, even if they don't use the Prometheus stuff at all.
C: The overall question, I think, is: do we want to ask for this minimal delta exporter, or do we want to push on reducing cumulative overhead? I think that's an overall directional question we have. Like I said, I still think we're going to be evaluated with a Prometheus lens when it comes to metrics overhead.
C: So if I were wearing my PM hat and trying to say what people will look at first, I still think that's probably the first thing they're going to evaluate us on, even if we have this other solution that's way better. Now, I could be wrong there; that's just my world view right now. I'm not even happy about it, but I think that's the reality.
F: I think you're right, Josh, and I know there have been some fragments of conversations in the Prometheus working group about approaching this harder problem, which is: I need to eject some stuff from my memory, how do I do it correctly and safely? I don't want to try and summarize it here and now; maybe tomorrow's Prometheus working group would be a good place to talk about that further, if that interests you. The question comes down to: we have this notion of a start time.
F: How can I forget a time series and then restart it, without stopping my SDK? I think the answer is that it's possible, and we have to work it out, and we have to make sure Prometheus is happy with how we worked it out. We have this data-point-not-present flag that we can use to signal something that went away, and when I outlined it to the Prometheus working group, they nodded that it sounded good to them.
B: Okay, fair enough. Well, those are all the items on the agenda; anything else to discuss? Oh, by the way, we are supposed to do a monthly release of the spec, because it's November.
B
So
if
anybody
has
any
concern,
let
me
know,
but
I
will
prefer
the
pr.
So
it
goes
out
the
end
of
this
week.