From YouTube: 2022-02-15 meeting
A
Okay, let's start. Thank you for joining, as usual. Let's go over the agenda items. The first one is an item from the previous week that we have been discussing extensively. Basically, it's about tracer and meter being associated with instrumentation scope, rather than necessarily with instrumentation library. This already has enough approvals, so we will be merging it today, right after the call or during this call. So this is just for everybody's information. Is Bogdan still around? Maybe not. Okay. Yep, thanks, Bogdan. Yeah, we will merge it, after the call or later.
B
Yes, I'm here. I will share my screen, if that's okay. What we talked about was: in the messaging SIG, we identified one scenario where we have a need for adding span links after span creation. We're not the first ones to discover that; there has been an open issue here for quite some time that also requests the same thing, and I opened this spec PR here that changes the spec to allow adding links after span creation.
B
There's been some discussion here, and people were curious about our use case in messaging, and I was asked to quickly present here why we would need that. So I want to go ahead and do that, trying to make it very short.
B
This callback gets called — that is a push scenario — and we have a pull scenario where the client basically calls the receive operation and receives a message or a list of messages, and then, after this receive operation is finished, something is done with those messages. So those are the two main scenarios we have. The first, push scenario, which uses a callback, is pretty similar to standard HTTP server scenarios — I think most of you are probably familiar with that — and with how we would like to create our spans here.
B
You see that here we have a producer and we have a consumer. Two messages are created, and once this callback is invoked, the consumer starts this deliver span, as we called it. This deliver span lasts for the duration of the callback, and here we have basic operations that happen during this callback.
B
Here we see that all the messages that we pass along with this deliver span — with this callback — are arriving before the callback is called. We give a batch of messages in here, so our messages are there before we call this, so we can actually, at span creation, create links to those create contexts here on the producer side. So that's how we tie producers and consumers together with this create and deliver operation.
B
What we would like to do here is mirror the other scenario, being as consistent as possible: having a receive operation here corresponding to this receive call, and then having the other operations parented to this ambient context, to reflect what the user does.
B
The problem here is that, once this receive operation is called, we don't yet have all the messages present. We don't have the context that is attached to the messages, and we cannot create links at this stage here. So here you see messages are arriving during this receive operation, and this prevents us from applying the same model here. We also would like to have a receive span that links to these create contexts in the producer.
B
That would mirror what we have on the push side, and we think that would also be something that the user actually expects, and that makes sense to the user, because it mirrors what the user sees here. That was the main scenario that prompted us to go forward and ask for creating links after span creation.
B
That's our scenario. Our scenario is not the only one: there is this original issue laying out some scenarios, and Joshua, here somewhere, also gave another scenario as an example of where it would make sense to add links after span creation.
B
I think allowing this would make it much easier for instrumentors to model many of the scenarios that they come across — like, for example, this scenario here.
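A minimal sketch of the deferred-link idea being discussed (all class and method names here are hypothetical illustrations, not the spec's API — the PR under discussion has not been merged):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SpanContext:
    trace_id: int
    span_id: int

@dataclass
class Link:
    context: SpanContext
    attributes: dict = field(default_factory=dict)

@dataclass
class MutableSpan:
    name: str
    links: list = field(default_factory=list)

    def add_link(self, context: SpanContext, attributes=None):
        # The proposed change: links may be attached after the span has
        # started, e.g. once messages (and their contexts) arrive during
        # a pull-style receive call.
        self.links.append(Link(context, attributes or {}))

# The receive span is started before any messages are available...
receive = MutableSpan("receive")
# ...and links are added as messages, carrying their creation contexts, arrive.
for msg_ctx in [SpanContext(1, 10), SpanContext(2, 20)]:
    receive.add_link(msg_ctx)
assert len(receive.links) == 2
```

This is exactly what the push scenario gets for free (links known before span start) and the pull scenario cannot do today.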
B
If not, then I would suggest that, with this information, we can continue discussions in the PR, and if you have strong reasons against it, or you don't want this change, maybe just speak up now or comment in the PR.
A
Yeah, I think that one of the concerns was regarding sampling, which I don't share, but that was one of the points. I don't know if anybody here wants to mention that — I don't remember who mentioned it, but there was some pushback on that side. But I think that once we clarify the expected output — that once it's sampled, the span is sampled, and you don't have to take later links into account, for example — it could just work.
C
I still support the idea of adding span links after span creation. To me it seems a lot like having an event that is saying: I had a span happen, or I had a span linkage occur. So you could almost say: instead of having a new link added after span creation, perhaps I would like to have a semantic convention for logging links in my span events — and then this debate maybe wouldn't be happening.
C
It's a stretch either way to find a problem, from my perspective, because sampling is pretty much independent. You make a sampling decision when a span starts, and if you expect to make a sampling decision based on span links, you should provide your span links at span start. And if you don't have your span links at span start, then the sampling decision has to be made first, and that's okay. Users might end up in a situation, using your diagram, where you have a receive span and the receive span is not sampled, because the links that you add to it after the fact may have been sampled and there was no way to use that information — and I think everyone's going to see that as working as intended.
D
The problem — the big difference compared with having them at the beginning — is the fact that you don't have the continuation after that. So if the receive span generates a new trace ID, generates new properties from your schema, then you're not going to have that entire subtree sampled. So you don't know what's happening with the consumer.
D
So you produce something — a very important trace, whatever you really wanted to sample — you produce that, and then, because now you apply this schema, we do not take care of the fact that I really wanted this to be sampled: the entire subtree, not only the receive span. Because what they suggest is that the entire work generated by the fact that I received this message is under that receive subtree. So I will not have any information about it.
D
No, I wish I were able to sample everything from now on, because I know I want the span sampled — and that's not going to happen. Or maybe it's going to happen, but I wish that now, with the new information that I have about the received messages and the contexts from those messages, I could sample.
C
If you modeled the creation of a new sort of internal span on every span link received, you'd have this horribly nested structure that would get the causality that you wanted, I think. Right? So the first time you receive a span link that's sampled, you begin sampling — I think that is what you're suggesting, kind of — and you could always rewrite your spans and create more of them, I think, to get that type of outcome.
E
This is a kind of sampling that we're missing from the spec: we don't have anywhere talking about sampling based on linked traces. We've got parent-based samplers, which would sample based on the sampling flag in the span context of an incoming parent, but this would be a different case, right? This would be a root span, but with a link that was sampled, and we want to sample based on that.
C
It also sounds like we are currently discussing a mode of sampling that is also not in the spec. That would be some sort of mutation of the current span's sampling state, and, although we haven't discussed it, the perspective I would bring is that we publish a spec now that has information about probability sampling baked into those parameters that we've talked about.
C
We can't really change the decision that's been made without doing something to those parameters, and I would like to prevent lying about statistics, because I want to count those spans. The valid transformation that we could make has been detailed as this zero-adjusted-count span. So if you want to modify the sampled flag and not change the probability sampling decision, you can do that by setting a zero adjusted count, because that doesn't impact the sum that you're counting at the end of the pipeline.
C
This suggests that you could have a tracer that receives this set-link operation — which will be provided if this proposal is accepted — and in that set-link operation you say: aha, I have seen a linkage to a span that was sampled, and I am not sampled, because the probability decision was no; I'm going to change my adjusted count to zero and start sampling every child from here on out. That would be viable.
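The zero-adjusted-count idea just described can be sketched as a tiny hook (the hook name, span representation, and fields are all invented for illustration; nothing like this is in the spec):

```python
def on_link_added(span: dict, link_sampled: bool) -> None:
    """Hypothetical hook run when a link is added after span start.

    If the linked span was sampled but this span's probability decision
    was 'no', start recording anyway, but with an adjusted count of
    zero, so span counts computed downstream remain statistically valid.
    """
    if link_sampled and not span["sampled"]:
        span["sampled"] = True        # export this span and its children
        span["adjusted_count"] = 0    # but don't let it contribute to counts

span = {"sampled": False, "adjusted_count": None}
on_link_added(span, link_sampled=True)
assert span == {"sampled": True, "adjusted_count": 0}
```

A span that was already sampled keeps its original adjusted count; only the flipped-on spans get counted as zero.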
C
In my opinion, that would preserve the probability-sampling nature of counting the spans at the end of the pipeline and let you change your decision, and I think that would probably be a more realistic thing to do than the earlier thing I suggested, which is to just insert a span everywhere that you need to make a new decision — because by inserting a new span ID you get to make a new probability decision.
C
So you can say: I am now going to insert a fake span here, all of my descendants will be inserted underneath this fake span, and I'm going to change my sampling probability right now — and my fake span is going to be counted one for one, because it's 100% probability sampling now. That will correctly change the downstream for everything that was caused by this so-called fake span. That was a complicated presentation; I don't want to go that direction, but we could talk about it.
D
For me it's another problem, so I'm going to be maybe picky on this example, and the example is about the model: pull versus push, most likely.
D
What you're going to do in that pull model: you're going to go and hit an epoll on a socket, correct, that waits for something to come from the wire in this messaging schema. Is that not the same thing happening, behind the scenes, in a push model? Somebody else is waiting on the socket to receive a message. I'm like: why do we try to model things completely differently, and in one scenario account for the waiting time and not in the other one?
B
Well, what we try to model here is the application flow that the user experiences, and when you model what really happens on the socket, where messages come in, it gets very complicated.
B
So even here in messaging, clients might do some prefetching, so some messages might already be there when this call starts — there might be a buffer of prefetched messages — but some messages might only arrive during this receive span, and each messaging client does those things slightly differently. So that's very implementation-dependent, and we're trying to find some kind of common ground that we can apply across all the different messaging clients and across the implementations.
B
That's why we came up with this very simplified model.
D
Yeah, but I'm curious: why do you want to account for the client time here, versus in the push scenario, which I think is kind of similar in that regard? Somebody is executing that pull model — that's what I'm trying to say — somebody has to go and watch the socket. Why do we model them differently?
B
Well, that's why the span names are different. You see here, we measure delivery — that is the duration of the callback — and here we are measuring receiving time — that is the duration of this receive operation. The delivery duration by definition also contains the work that is done during the callback, whereas for the receive operation this work happens outside of the operation. These differences are just what we tried to model.
D
Why not have an execute span, or something that incorporates the two console.logs, to be very similar to delivery? You just have another sibling called receive — and I understand that — but logically I would like to see another span, called delivery or execute or whatever, that incorporates the two console.logs, to measure the same time, to be able to compare.
B
In this case here — yeah, I mean, yes — another aim that we have in coming up with this model is that we have the possibility to pack all or most of the instrumentation that we specify here into this client instrumentation.
B
So we could have some kind of process span here that covers the whole operation, but this span could not be created by this client here, by some instrumentation library — the user, any time they use this client, would need to create this span. Ideally, there would be a parent span here; I don't think—
B
Yes, there doesn't necessarily need to be a parent span — we don't spell this out — but in most cases there will be a parent span. Yes, but—
D
Otherwise — so how do you propose to add links? On which span will the messages' links be?
B
In this model we assume that there is an ambient context, and that kind of ambient context links these spans together. But we cannot strictly require this, so we refrain from strictly requiring it, because that would put a burden on, or make it harder for, some cases.
D
But this gets super complicated to understand. Even besides sampling — even if I have everything sampled and I have all this information — if I have these two links on the receive span, their work is executed not under the receive span but possibly under its parent; we don't know. How do we account for this? How do we measure the impact of this, or even try to understand what's happening here?
B
Well, you can't. The main thing you can do is link a producer and a consumer span together, but you cannot really identify all the chunks of work and separate them out into the separate messages — because you might get a bunch of messages and then, for all the messages, just execute a single database operation.
D
Let's assume we receive one message. In this model, if the message is linked to receive, how do I measure the time spent to do the console.log, or how do I know the time that I spend doing the console.log?
B
Well, I mean, the main point here is that you cannot really allocate processing time to a message, but you can say: okay, that is the trace where the message was created, that is the trace where the message was processed, and we have a link between those two.
B
We found that it's not really feasible, but what we can answer is the question: okay, I processed this message here in this trace — where was this message created? And also, vice versa: this message was created here — in what other service and operation was this message processed?
D
Okay, let's stop this discussion — that's what Carlos said. I'm still not convinced that this is the right modeling, especially because we are losing information such as timing, but I'm not an expert in this, probably.
A
Yeah, but it was a good restart of the discussion — I think the discussion had stalled — so it's good to talk about these details. Let's continue that offline; I'm very curious about alternative designs for that. Okay, thank you so much for that, and thank you for allowing us to time-box this. Okay, the next point is adding a Jaeger protocol environment variable. I may have mentioned that yesterday in the maintainers meeting. Basically, we just want to add a new Jaeger environment variable for specifying the protocol.
A
Okay, there we are. This is the one: as I said before, it's basically OTEL_EXPORTER_JAEGER_PROTOCOL. Julian did review that already; it looks great, so please review it. As I said before, it's correct from a technical perspective — that's what I meant — but, yeah, if there's any doubt about whether we should have more env variables or not: as I said before, I think it's great, mostly because this is something that is already done by OTLP, and Jaeger is one of the core components that we have to support.
F
All right, so, a quick update. The first one, the wildcard ask — there was a PR, and it got merged, so thank you. The second one is a huge PR; currently I see many approvals, and there doesn't seem to be any blocking issue, but still, I've seen several open comments suggesting that we can further clarify it.
C
I do not plan to. I felt at the end of last week that all the, I guess, data-model questions had been pinned down and answered in clear terms, and I had said what I would like to see — I think people were still asking for examples. I did put in some examples yesterday. So all I've added since Friday is examples, and you may want to see that, and that may spark another round of conversation, but I don't think there are any tough questions or debates happening here.
F
Okay. For the next one, we need more reviewers and approvals; after Josh merged his PR, I think this one is almost ready. The next one: the important question about the view implementation.
F
I agree with what Aaron discovered here. Basically, the spec made a very strong statement, but without the clarification it seems to contradict another place in the spec. So Aaron actually clarified that it is the same meter, and I think that makes sense, and, if I recall correctly, that was the intention.
G
So, just to be sure: that issue can be closed by adding "in the same meter"? Yeah. So basically we add this. All right.
F
Okay, I'll follow up.
H
I added this issue, and I suppose I can talk to it real quickly. Okay, so Josh opened this PR to propose a change in aggregation temporality export for certain instruments, and the premise of this is that, for a couple of instruments — the up-down counter, both synchronous and asynchronous — delta backends might not actually ever want the data to come in in delta format.
H
So the implications of this are that, if this is true and delta backends don't actually ever want deltas for up-down counters, then the design of the metric reader and metric exporter — which specify your preferred temporality as either delta or cumulative — might need to be rethought.
C
Thank you, Jack. I found the issue — or the PR — simply because I was aware of some mismatch of expectations, maybe for my own vendor, who can support both; because of that, I'm interested in helping users lower memory use. But, as we know, the up-down counter creates this issue where almost everyone wants them cumulative, and you could use that as a wedge position to say: up-down counters always create problems, maybe they shouldn't exist. I don't take that interpretation.
C
I just think that there are reasons why you might convert the sort of non-monotonic sum into delta: for computational reasons, for data transformations, for sampling decisions, or for reduction of cardinality. There are all kinds of reasons why, logically speaking, the transformation to and from delta is correct, but for the most part we expect to export those sums as cumulative to their final destination.
C
Under that interpretation, we need to make these refinements that Jack has described more eloquently than me.
H
And so, you know, I know we're trying to stabilize the metrics SDK, but what I'm worried about is that the SDK, as it's currently specified, lends itself to a design where a metric exporter returns an enum value for its preferred temporality of either cumulative or delta — and what we're kind of discovering, through the conversation on this PR, is that there are use cases, and likely common use cases, for delta backends where it's not all delta or all cumulative; it's some mix.
F
I have a question: do you see any wording in the current SDK spec saying that "preferred" is a mandatory thing? When I introduced this concept, I thought of "preferred" more like a recommendation, and the preference can be: you prefer delta, or you prefer cumulative, or you don't have a preference — you prefer either.
F
And that would give us additional flexibility later. If we allow an individual view to specify something, you can say: by default, I prefer everything to be converted to cumulative; but, by the way, if I see something from a counter, I always want that to be delta. And then you can have a more strict rule: I have a particular instrument name and I want that thing to be cumulative.
C
I don't believe there is, having made the draft PR. At least I think this was truly just an unstated thing, and I think a few of us — out of, I'm going to say, respect for the vendors that have delta systems — were on board with saying: oh yeah, there's a delta preference and there's a cumulative preference.
C
It sounds simple and easy, but it's a little bit more refined than that, and I think all we're asking for — speaking for Lightstep, as well as for New Relic, it sounds like, and probably Dynatrace — is that we want our cumulative monotonic counters to be expressed as deltas. And that means asynchronous counters should — I don't like this, because it's more work — but asynchronous counters, in this preference model, should be converted to delta.
C
If you have a backend that prefers deltas — now I have a preference that's actually a little bit more nuanced: I prefer statelessness. In other words, if you're an asynchronous counter, I would be fine with you sending me a cumulative, and if you're a synchronous counter, I'd be fine with you sending me a delta — and that way you, the sender, have the least memory obligation.
C
In other words, I don't care for you to do cumulative-to-delta conversion. And actually, the reason why I wrote this PR in the first place was that I was trying to prevent us from having cumulative-to-delta conversion mandated by the spec — I was trying to make that optional, because it was already a little bit ambiguous; in Riley's terms, it's a preference, in other words, not a hard commitment.
C
So if we could change the preference to be a six-way preference — meaning you have a preference for every instrument kind — this issue can be solved. And Jack's proposal was, I think, that we just move forward by replacing any current usage that looks like a two-valued enum, delta and cumulative, with two values that are "always delta" and "always cumulative". "Always cumulative" is still the thing that Prometheus users want, and "always delta" is the thing that we're not sure anybody wants.
C
We want something that — and I'm not sure what the name should be here — say, a "delta" phrasing: delta for monotonic counters. That'll help New Relic, it sounds like; and then for Lightstep I'd actually like a different one, that's delta for synchronous counters and delta for synchronous histograms, but cumulative for asynchronous counters, if I may.
C
Up-down counters should always be cumulative in that sense, and I just defended earlier why you would ever even consider doing otherwise — if you're going to, I don't know, sample or something like that, sampling in metrics, you can see reasons to do that, but I don't want to go there.
J
Yeah, I think that is common from the issue. At least it seems to me that this is a common thing for the up-down counter, at least. So there needs to be a way to distinguish how we export monotonic counters versus non-monotonic counters, I guess. Okay.
D
One thing that we need to understand here is that calculating cumulative to delta — even including in the collector — matters because there are a bunch of operations we have to execute only on delta, or that are valid or easier to implement on delta compared with cumulative. What I'm trying to say is: computing cumulative to delta outside of the source is a much harder problem, because you need a global state, versus in the source, where you need state just for that source.
C
It added some hundreds of lines of code in the prototype, and I removed it thinking: I don't want to see this again. It means that you need to know how to subtract numbers — your sum points have to be subtracted — and, unfortunately, we haven't specified any kind of cumulative histogram.
C
Otherwise you'd have to subtract histograms. So it doesn't sound as hard as it might be, because you only have to handle it for integers and floating points, which is simple, but it's a totally new code path.
D
Yeah, I agree. I mean, again, the async instruments by default give the cumulative form; it's only for sums that there is a difference. So I think we should not — I mean, for any other type we don't have a problem yet, because the user cannot give us cumulative. So, yeah, I think it's reasonable to say that for delta we can do this; again, it has to be a configuration, I completely agree.
C
Here's what I might propose: we write the spec to say that the SDK should support a preference function which takes instrument kind as an argument and returns the temporality. It can be a mapping — a six-way mapping — or a function or something like that, and then we won't give names to the preferences except the most important ones.
C
"Always cumulative" is clearly an important one, and I think we should take the rest of this naming discussion to the issue — I forget which issue number — but we should discuss this outside the meetings. Only a few minutes left, Josh.
F
I shared a link where we put some examples of how to do the conversion; maybe we can link to that to clarify things. And I have a suggestion: I think when we talked about views, we already realized that in views we could potentially allow people to specify, given an instrument type, without a name, "I want that to always use cumulative" — so we're going to add such functionality.
F
And given that, I think that additional feature in the view can be an additive change. So even if we say, for now, we don't allow that in the initial release, after that — say in the 1.1 version of the metrics spec or something — we can release new functionality that will allow people to use a view to specify the instrument type that forces the aggregation temporality.
H
I see where you're coming from, but we already allow exporters and readers to influence the temporality, so they're already partially overlapping with what you're saying could be done at the view level. So where's the line? I guess: why do this via views versus this existing mechanism where an exporter and reader can influence temporality?
C
Yeah, that's definitely not meant to be a user-facing interface. This is a way to let — for example, Lightstep has a launcher for the OTel library where we configure all the stuff for you, so you don't have to think about it — I would pass in exactly the function that I want there, whereas in an environment-variable setting you'd only have a few well-defined names, and we're saying "always cumulative" would be one of them.
C
Of course, that'll also be the default. And I don't want to discuss names anymore here; I think the point was to give the SDK author fine control. I do have an opposition to your statement, Riley: I think that the choice of temporality has to be independent from the view setting, otherwise you're going to have to define how two views interact — because you're going to want one view to configure the mapping functions that you want, the change of aggregation and the change of names and such, and another view to specify all the defaults for temporality. I think so — maybe not, but we can also find a different way to make a backwards-compatible change, I think.
A
Not for now — okay, thank you. Okay, I'm asking because we have two last items, so we can talk about them briefly. The first one is regarding Kubernetes resource detection. You're on.
H
Yeah, I'm here. So I opened this a week or two ago. I did a review of all the detectors that use the Kubernetes downward API to detect stuff, and it turns out it's inconsistent and uses things that we probably shouldn't be relying on. My goal here is just: this is the mechanism that Kubernetes gives us to discover resource information, so we should use it, and ideally do so consistently across the OTel landscape. I think that will greatly benefit users and make things a lot easier there.
H
Sure — so you're talking about using the OTEL_RESOURCE_ATTRIBUTES variable. Great, yeah; that is possible today. At minimum, I think the result of this should be that we document it clearly across languages and such, because even for someone familiar with Kubernetes it was hard to find, and we should remove, I think, other competing mechanisms that are misleading. So I do think that could be one outcome. There are two drawbacks to that.
H
One is that it's a little bit inconvenient to have to merge everything together, just because then, if, for example, you're inserting a snippet into all of your pod YAMLs that sets OTEL_RESOURCE_ATTRIBUTES, that's now incompatible with adding some extra attributes yourself. So that's a little bit tricky, but I don't think that's the main blocker.
H
The main blocker is that this doesn't, I think, play well with telemetry schemas, because the idea is: I should discover resource attributes associated with a particular version of the semantic conventions. But I think that's a more general problem that we may need to solve in a different way anyway — like being able to say "here's the version of the schemas that my OTEL_RESOURCE_ATTRIBUTES should apply to", or something like that. So I think there may be ways around that, and I'm happy to see the discussion.
A
If not, please review that; it has a pair of approvals, but we need more eyes, especially for an OTEP. So please, let's review that. Okay, thank you so much for that. The final point today is a clarification on timeouts for exporters. I don't know what it is, but feel free to elaborate.
I
That was me. So right now I'm trying to work on an issue to add timeouts to the OTLP exporters for JavaScript, and I just had a clarification question on what the spec says about timeouts, because there have been some questions in our meetings about it: is the timeout for a certain phase of the request, or is the timeout per request, or is it a timeout for the entire exporting process, including retry requests?
A
Yeah, actually, I think that's for maintainers. I think this was never clear, and I remember this was a source of discussion in Java, at least back in the day. So, maintainers, would you mind sharing your experience? And I agree that after this we should clarify it in the specification itself.
K
When should you give up? So, including retries: is there a timeout? Should we have a timeout at some point where a request is taking too long and we want to retry it? If a connection is not being established, or if a connection is established and it's slow, should that time out at a certain point to allow for a retry, or are retries not useful in those situations?
F
Yeah, and down the path, I imagine, depending on which underlying mechanism you're dealing with: if you write a file, maybe the timeout is relatively simple; if you do a network thing, it might be super complex. You have to resolve the DNS, establish the connection, you have to send the request and make sure the request is received, and you have to wait for the response — and the response could take time. So there could be plenty of timeouts in a network operation.
F
That's a good question. I think it's up to the exporter. If I remember correctly, the spec is saying the SDK is not going to do any retry — the SDK should not do retries; it's the exporter's responsibility.
A
In any case, I will follow up with Daniel; maybe we need to add a clarification. I remember also that the SDK has more details regarding this part, but I will follow up on that. It feels bad that it's not clear enough; maybe we just need to point to the location where it's really mentioned. So I will follow up with Riley — with you — first, and then we can probably open an issue, or just a PR with a small clarification, if that makes sense. So, I pasted—
K
Yeah, okay. Okay, I think that's pretty clear. I do think that the things that we say are the responsibility of the exporter should at least be defined for the OTLP exporter, though.
K
Yeah, I agree. In JavaScript we have six OTLP exporters, so I feel that pain.
H
Yeah, I'm on the call; I can speak to that. We only have a minute left, though, so maybe we should take it offline.
I
Not yet, but I can open one up after this call.
F
Yeah, and now that we mentioned retry, I want to share a link: in OpenTelemetry .NET, we're seeing people asking, for the network-related exporters, if the request failed or timed out — or they don't even have a network — to at least persist the data before the process exits. So we have an experimental package in the contrib repo.
A
Yeah, sadly, yes. Okay, thank you so much, everybody. Yes, JavaScript contributors, please open an issue before we forget. Thank you so much, and talk to you very soon. Stay safe.