From YouTube: 2021-09-21 meeting
C
Okay, I think we can start now. Thank you, everybody, for joining. We have a few items. The one that I think is very important is whether we should release 1.7 or not, but I think we can go over the rest first. No, no worries, I think. Okay, so first of all, the metrics update: who wants to give this one?

D
All right, well, then I guess the status is that we are entering feature freeze. The idea now is we try to get the three implementations worked out. The goal is to not add features, but to account for bugs and things that were missing, or, you know, friction points. So that's starting to be the focus.

D
You should see APIs and implementations landing in each of the three target languages for this initial feature freeze; we're targeting Python, C# and Java as primary, with Go as secondary. I don't know if there's anything else we want to call out there. Josh, anyone else from the metrics side?

C
Perfect, okay. Moving to the next item: jmcd, a brief update on the two sampling OTEPs.

B
Yeah, I'd like to keep this brief, because I know there are some more substantial topics to discuss. I'll just put up what I wrote so you can see it. I had intended to give a little bit of a presentation, but I couldn't get it prepared in time, so I'm just going to tell you what I've got, and I'll try to give you a presentation next week. What I have today is that last week we got feedback from Bogdan, which was really helpful and actionable.

B
The point that was made is that we already have trace state being stored in the span. Anything we do with trace state will automatically get stored for us. Therefore, we don't have to specify anything in the span data model to get what we want. We just have to complete our proposal for trace state, propagate it the way it's been described in OTEP 168, and then store trace state in order to count spans.
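As a rough illustration of what B describes here, below is a minimal Go sketch of carrying a sampling value in the W3C tracestate header so that it travels and gets stored with the span. The "ot" key and the "p:3" value are placeholders invented for this example, not the exact encoding defined in OTEP 168.

```go
package main

import (
	"fmt"
	"strings"
)

// upsertTraceState returns the tracestate header with the given key set,
// placing the freshly written entry first as W3C trace context requires.
func upsertTraceState(tracestate, key, value string) string {
	entries := []string{key + "=" + value}
	for _, e := range strings.Split(tracestate, ",") {
		e = strings.TrimSpace(e)
		if e == "" || strings.HasPrefix(e, key+"=") {
			continue // drop empty entries and any stale value for this key
		}
		entries = append(entries, e)
	}
	return strings.Join(entries, ",")
}

func main() {
	// A sampler could record its value here; because tracestate is already
	// stored on the span, no new span field is needed to preserve it.
	fmt.Println(upsertTraceState("vendor=abc", "ot", "p:3")) // ot=p:3,vendor=abc
}
```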
B
That simplifies things quite a bit. The other simplification, if you've been following along, is that there has been an "unknown" value, and we had it coded with an explicit value. If we take that out, things get simpler, and now we are saying that, because this information is stored in trace state, we have field presence. We can tell when it's missing, so we don't need a zero-value default to mean something like unknown.

B
So this is another simplification, and it means that the rules for composing samplers just got a lot simpler, and it came out quite nice. I don't believe there are any more questions about how this proposal works. I think we've gotten sign-off from a number of people who know sampling pretty well, and I'm pleased with that. So I think from the technical perspective this can go through.

B
What's left are naming questions, and I just want to leave that with you now and I'll stop talking. I think next week I will give a presentation; it'll last six to seven minutes. It will explain the basics of how this proposal works, so that anyone should be able to approve it at that point, and I'm going to let these PRs rest for another week.

B
While we do that, you can think about the naming questions, and I think by this time next week we should be having a discussion about how to write the spec to say what has been written in these OTEPs in a more spec-oriented way, since the OTEPs contain quite a bit of technical justification along with it. And that's where we are. Thank you.
F
Hi, I'm here. I was on vacation last week, as was Johannes, it looks like from the meeting notes, so I can't give a super detailed update, but I also don't see Ludmilla or others here, so I can give a quick overview.

F
Part of this is because there's interest from people in Europe, and there's actually a lot more interest these days from people in APAC time zones; Atlassians have joined the effort in a real way, and it seems like in general there are contributors from down under. So that's making us choose between having a whole lot of meetings or having meetings that only some of the members can go to, and I think that's tricky. I see Michael Lee is actually on the call; were you present last week?

G
Yeah, so Johannes is out because they welcomed a new child, so I think he should be back next week, but I think you're right in terms of the meetings. I think we should keep it to Tuesdays and Thursdays, as we have already, and in our last meeting...

G
We did discuss that if we need another one, we would do it Thursday afternoon to support the APAC folks, but as of right now just the Tuesday alone is sufficient, and in the Tuesday afternoon we've mostly been talking HTTP at this moment. So I don't know if we'll need that extra HTTP meeting; let's see, but I think other than that it's just steady progress.

G
I don't think there's any major update to provide today, but we're slowly making progress, and I think the main focus is trying to get that messaging OTEP agreed upon. So that's the main action item for that working group, yeah.
F
I would say, as part of doing the HTTP work, because it's the first major group of conventions we're tackling, we're also tackling a lot of general issues around instrumentation and conventions. For example, layering is a big one; Mill is working on a proposal there.

F
The Java SIG has come up with some interesting solutions to deal with the fact that you often get double or triple instrumentation in some cases, just due to the way software composes, so that's one example. There are also general issues with span structures for handling these different situations, and a lot of those decisions will reverberate into all the other instrumentation we're writing.

F
So it's interesting stuff. I do recommend people come to those meetings if they can. It would be great for the TC to at least start following this work and be aware of those OTEPs and proposals as they're coming in, because I predict some of them will be significant and not just the details of a particular semantic group.
H
I have a quick question for you all, for the instrumentation folks, about HTTP. Is there going to be an effort to unify the naming of HTTP server spans? Is that part of the effort, unifying the naming? I'm putting together an OpenTelemetry workshop with Steve Flanders for Strange Loop next week, and we're putting together a multi-language example, and I'm definitely noticing that the languages are doing different things with their HTTP server span names.

F
Yeah, we haven't done a survey yet, but definitely part of going through each one of these semantic groups will be to survey what's out there, and also, once we've stabilized them, to do a round of updating everywhere. Right, we're going to improve these conventions, and as of right now, because instrumentation has not been the main focus, we don't have a lot of contrib maintainers. I feel like instrumentation is very lumpy.

F
Just in general, I don't think we have any kind of quality control or assurance that things are following the spec, and, like you said, in some cases the spec is vague enough that they diverge. It would be great to know about some of these issues, though. So if you're finding things, John, just posting them back to the groups so that we're aware of them would be great.
E
I'm here, sure, I'll give this an introduction. So a couple of weeks ago I brought up an issue with the metrics group requesting to add some sort of support to aggregate a min, max, sum and count, and one of the proposals (not the decision, but one of the proposals) was to accommodate this by adding min and max fields to the existing histogram proto.

E
And so I went about opening a PR, which is what's linked here, to suggest expanding the histogram data model to have these additional min and max fields, and it's not as simple as we originally thought, because of some distinct differences between how the min and max would behave for cumulative aggregation temporality versus delta. Josh has added some nice notes to this bullet point here outlining the key decision points.

E
So one is: is it okay for cumulative histograms to have different semantics for this data than other cumulative data points have? The proposal that I've made is that on cumulative histograms the min and max fields don't represent the cumulative min and max, because that's not super valuable data.

E
So the suggestion is to have the min and max on cumulative histograms diverge from that cumulative behavior and, in fact, be recently recorded values, so it's sort of like delta behavior embedded within a cumulative data point, and that's a bit strange. And then the other question that Josh has brought up is: should min and max have the natural aggregation functions of mathematical min and max, or behave like a gauge? So this is all about:

E
When you're aggregating min and max, should you take, for example, if you're aggregating two or more points, the min of the min values and the max of the max values, or should you take the last values? And that has some implications here as well.
E
So I think where we're at with this PR is that there are a couple of strong opinions on both sides of these issues about how the behavior should work, and we need a couple of extra sets of eyes to read through this and give some extra opinions.

I
I know I'm part of the problem, so sorry for causing issues. I want to understand a bit here.

I
What I want to understand is how min and max would be used in different scenarios. My concern comes from the fact that if they do not match the temporality, so let's assume they cover some arbitrary window, then they cannot be used for calculating the quantiles, correct? Because you're going to have a rollup for your histogram that you can calculate correctly between t0 and t1.

I
Those are the counts of points in the different buckets, and I heard that min and max can be used to do a better approximation of the quantiles, but unless they represent the exact rollup, the exact t0 to t1 that you calculate on the back end, they are useless for that. You can just display them as a gauge, or just display them, because you don't know what else to do with them. You cannot use them to calculate quantiles, so what I'm trying to say is:

I
They cannot naturally be combined with the buckets to do a better calculation of the quantiles. Am I wrong here, or am I missing something? Because it is very important for me to understand whether min and max make sense together with the buckets, or whether they are just two simple, separate values that we believe are good to have in the same metric but do not necessarily belong in that metric type.
E
Well, they're a great fit for delta temporality, right? Does anyone disagree about what the semantics would be for delta temporality? No, I'm not disagreeing, but what would you do with the values? What would I do with the values? Yes, for delta: a classic thing that I would want to do with the min and max is, let's say I'm monitoring HTTP server duration.

E
So I have my sum and my count already, and from that I can compute the average duration, and then I can also display on a graph the bounds, the min and max that it...

E
No, I wouldn't combine them, but they can provide additional granularity, additional data that you just don't otherwise know. Even if you have lots of buckets, the buckets at the bookends of the distribution won't be able to capture the exact min and max, so you won't know those values.

E
They're most useful without the buckets. Primarily, what I want is a lightweight mechanism to use as a substitute for histograms, because I don't think the world is ready for histograms yet. I don't think data visualization tools are great at visualizing them yet, and they are good at visualizing min, max, sum and count.
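A minimal sketch of the lightweight aggregation E is describing: recording only min, max, sum and count is enough to plot an average with bounds. The type and field names are illustrative, not the SDK's actual aggregator.

```go
package main

import "fmt"

// MinMaxSumCount is an illustrative lightweight aggregation.
type MinMaxSumCount struct {
	Min, Max, Sum float64
	Count         uint64
}

// Record folds one measurement into the aggregation.
func (a *MinMaxSumCount) Record(v float64) {
	if a.Count == 0 || v < a.Min {
		a.Min = v
	}
	if a.Count == 0 || v > a.Max {
		a.Max = v
	}
	a.Sum += v
	a.Count++
}

func main() {
	var a MinMaxSumCount
	for _, ms := range []float64{12, 250, 8, 41} { // e.g. HTTP server durations in ms
		a.Record(ms)
	}
	avg := a.Sum / float64(a.Count)
	fmt.Printf("min=%.0f max=%.0f avg=%.1f count=%d\n", a.Min, a.Max, avg, a.Count)
}
```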
I
In that case, why do we try to put them inside the histogram, versus either putting them into the summary or creating their own type, a min-max-sum-count as a standalone aggregation which doesn't have a temporality? Or, I mean, it has the instantaneous temporality or whatever we call it. We would do something similar to what we did with summary, because we cannot spatially aggregate them, like combine or subtract them.

B
We'd be having the same debate even if we were not talking about histograms, though, right? The reason why this was included in the histogram PR is just that, in my memory (and I don't have a great number of links for this), many people kept coming in saying "I need min and max too; my histogram doesn't have min and max, I'd like it". And so, from my sort of data model theory point of view:

B
The reason why the min and max form a compact summary is that, for a single encoding cost, you can always spatially merge and correctly compute min and max, whereas for all the other quantiles between zero and one you can't. I think that's the practical reason why this is a favorite among people who want lightweight histograms.

I
You can just combine them; you cannot subtract them, you cannot calculate a delta from a cumulative, correct?

B
If you are trying to aggregate a window by combining histograms over a temporal timeline, the output point has min and max values that are much like a gauge at the very end of the window. So it's still a useful min and max; it just applies to the end of the window, and if you really want to compute a max time series, you would go through all of your data points, pull out the max values, and put them on a timeline.

B
So you don't ever need a subtraction the way I'm proposing it, but it's just a trick. It's really just a sleight of hand to make cumulative work. And I just want to refer to the precedent, since no one has said it yet: Prometheus summaries do this, and we're copying the behavior of a Prometheus summary for min and max.
D
Why aren't we generating summaries inside of OpenTelemetry? The answer to that is, you know, we're feature freezing on generating histograms; we're not going to be generating summaries, that didn't make the metrics SDK feature freeze. And if you look at the current Prometheus recommendation around summaries, they actually recommend against this kind of min-max summary; there's a note in OpenMetrics about that, which I included in one of these issues, I'm not sure where, but that's one of the reasons why we're not doing summaries.

D
So what we're down to is, I think I proposed four ways to move forward with this issue, and we need to pick one. I think the fundamental crux here is: do we want min and max to have a natural aggregation of min and max, or to be gauge-like? That's a decision we need to make about how we want min and max to behave, because Jack mentioned in the PR that he doesn't want min and max to have gauge-like behavior.

D
He wants a natural merge function that is min and max, which we do not have in our data model at all. There is no point right now that will merge min as min and max as max. There is only a last-value aggregation, which is gauge, and there's histogram aggregation, which adds all of these together, which is one reason why, if we want natural min and max, I think it belongs in histogram.

D
But the second problem here is what to do with cumulative histograms and cumulative min and max, because if we give min and max this natural mathematical merge operation and we don't do it for cumulative, that is literally the only piece of the data structure where we aren't using our cumulative aggregation function the same as delta, because the cumulative window says it's from time t0 to t1. It looks exactly the same as a delta window, just with a different time period. Right, and there's:

D
This notion in my mind, with our data model, that I could possibly ignore my temporality and just look at the time window, and my data should cover the time window that I see. Cumulative says: I start at time t0 and I end at time tn, so it covers that entire thing. So if we're going to diverge from that, then I would argue... So the suggestions that I proposed here: proposal one is that min and max have gauge semantics in terms of aggregation, and then:

B
In other words, my definition is that the value of the min or max field is the min or the max calculated at the end of the window. For delta, that is exactly the intuition that you have: it's over the entire window. For cumulative, it's just towards the end of the window, and you can still merge those: you take last-value-first for temporal merges, and you take min and max for spatial merges.
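A small sketch of the two merge rules B just described, with illustrative types: spatial merges take the mathematical min and max, while the proposed gauge-like temporal merge keeps the min and max of whichever point ends later.

```go
package main

import "fmt"

// point is an illustrative stand-in for a histogram data point's window
// together with its min/max fields.
type point struct {
	start, end int64
	min, max   float64
}

// mergeSpatial combines two points covering the same window (for example when
// dropping an attribute): min and max merge with their natural math.
func mergeSpatial(a, b point) point {
	out := a
	if b.min < out.min {
		out.min = b.min
	}
	if b.max > out.max {
		out.max = b.max
	}
	return out
}

// mergeTemporal combines adjacent windows: under the gauge-like proposal the
// result keeps the min/max of whichever input ends later.
func mergeTemporal(a, b point) point {
	later, earlier := b, a
	if a.end > b.end {
		later, earlier = a, b
	}
	return point{start: earlier.start, end: later.end, min: later.min, max: later.max}
}

func main() {
	a := point{start: 0, end: 60, min: 5, max: 90}
	c := point{start: 0, end: 60, min: 2, max: 40}
	b := point{start: 60, end: 120, min: 12, max: 40}
	fmt.Println(mergeSpatial(a, c))  // {0 60 2 90}: true min and max across both
	fmt.Println(mergeTemporal(a, b)) // {0 120 12 40}: keeps the later window's min/max
}
```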
D
My argument would be: write that clearly, in a way that makes it easy to understand what this data model means and when to choose one versus the other, and I think we'd run into a bunch of problems, because how do I know which alignment it is? I feel like that's going to be very hard to consistently enforce, right? So the second option is: if we have natural min and max, then I think we need to abide by the notion of what cumulative is and say that min and max on cumulatives aren't useful.

D
We just sort of don't generate them for cumulative histograms, but you still get the value when you're doing delta. So if you want this notion of delta min and max, just use delta histograms, and the math and the aggregation semantics stay consistent. And the third option is, and again we have this via views, where I can take a histogram instrument and output multiple metrics: I can output my min,

D
I can output my max, and I can output my counts and means as separate gauges, effectively, and make use of them in that fashion. That's kind of how some things are modeled, where, if we don't have an advanced model, we model via the sub-components that we do have. I think those are all viable alternatives, but the crux here is: do we need a natural min and max aggregation function in metrics, fundamentally?

D
Is that something we want, yea or nay? From what I hear from Jack, he wants that natural min and max, and so I think the path forward, if we want that, is to accept this concern, where cumulative histogram min and max doesn't really make sense practically for users, and we can specify that: basically, if you have a cumulative histogram, don't worry too much about min and max, but if you have delta, definitely generate it, because it's more useful in that scenario. That would be my recommendation here.

B
The reason I've heard is that accuracy of the top quantiles is better when you know the true max. So you've got "my max is in this last bucket", the bookend as Jack called it, and that gives you some big relative error. People want to know their true max exactly, is what I'm getting at. Why?
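A worked illustration of that accuracy point, with made-up numbers: when the requested quantile lands in the unbounded last bucket, a recorded true max lets you interpolate instead of clamping to the bucket's lower bound. The linear interpolation used here is just one simple choice, not a prescribed rule.

```go
package main

import "fmt"

func main() {
	const (
		total     = 100.0 // observations
		belowLast = 97.0  // observations at or below the last finite boundary
		lastBound = 1.0   // seconds, last finite bucket boundary
		trueMax   = 4.2   // seconds, as a recorded max field would report
		q         = 0.99
	)
	rank := q * total // the 99th observation falls in the (1.0, +Inf) bucket
	// Without a max, the best we can report is the bucket's lower bound.
	clamped := lastBound
	// With a max, interpolate within the (lastBound, trueMax] range.
	frac := (rank - belowLast) / (total - belowLast) // 2/3 of the way through the bucket
	interpolated := lastBound + frac*(trueMax-lastBound)
	fmt.Printf("p99 without max: %.2fs, with max: %.2fs\n", clamped, interpolated)
}
```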
D
I would put them in both. I think it's more of an issue for the fixed-size buckets, right, because you're more likely to have poorly fitting fixed-size buckets.

J
You have to understand that if you do have this different temporality for the delta histogram versus the cumulative histogram, that goes out the window. Having an upper bound for a cumulative histogram with cumulative data in it, and then a delta temporality for the min and max, doesn't apply at that point, and you don't have a bound.

I
Tyler, can you talk closer to the microphone? It's very quiet and I can barely understand you.

J
Yeah, sorry about that; it seems to reset itself every day.

F
Sorry, perfect.

J
Yeah, so if you have this different temporality for the min and max, for a cumulative histogram that doesn't actually apply, and then you can't bound the histogram with delta data at that point.

I
Otherwise it's not going to work, because if I give you a random min and max over the last 10 minutes, and you calculate rollups in your back end every minute or something like that, it's not going to be useful at all, because they're not going to represent the same interval that you store in the back end.

B
I feel like we can still be useful to the user by telling them a recent min and max in that case, which is something that has a precedent at least. And when you combine those, I mean, the wording was "you're getting the approximate 10-minute max here"; you can combine those and still get the approximate 10-minute max. Yes, there's error introduced, but I tried to point out that this conceptual problem that everyone's having here happens with deltas too.

D
The difference with cumulative, though, is that if I need to do data reduction, I can literally just drop points, because I have a cumulative value. Let's say I want to change from a one-minute temporality to a five-minute temporality: I can actually just take every fifth point and drop the other four, and everything should work. That's one of the guarantees of cumulative, right; that's one of the benefits to a back-end author, and we're getting rid of that.
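A minimal sketch of the cumulative guarantee D is describing: because each cumulative point covers the whole window since its start time, reducing resolution is just dropping points. The types and values below are illustrative.

```go
package main

import "fmt"

// cumulativePoint is an illustrative cumulative data point: count is the
// total observed since the series start time, reported at endMinute.
type cumulativePoint struct {
	endMinute int
	count     uint64
}

func main() {
	series := []cumulativePoint{
		{1, 10}, {2, 25}, {3, 31}, {4, 47}, {5, 60},
		{6, 66}, {7, 80}, {8, 95}, {9, 101}, {10, 120},
	}
	var fiveMinute []cumulativePoint
	for i, p := range series {
		if (i+1)%5 == 0 { // keep every fifth one-minute point, drop the rest
			fiveMinute = append(fiveMinute, p)
		}
	}
	fmt.Println(fiveMinute) // [{5 60} {10 120}]: still valid cumulative totals
}
```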
D
My initial reason for not wanting to merge that, like the "last 10-minute sample", was that I'm actually forcing delta problems onto cumulative aggregation, and I think users can decide whether they want delta or cumulative, and which metric back end works better for which one, but we shouldn't conflate the two problems. We want to keep it as: if you're using delta, you get these benefits and you have these negatives; if you're using cumulative, you have these benefits,

D
you have these negatives. This PR specifically blends those two things together, so you get all the benefits and negatives together, in a way that I wasn't super happy with. So again, if it's okay, I think I'm going to propose this again, and I want to hear if there's a strongly dissenting opinion. If we have min and max on histogram with natural, mathematical min and max, then, for Jack, we have a very convenient way to have delta temporality for your metric reporting.

D
Is this an acceptable solution for us, where we say we're unclear whether min and max in a cumulative fashion will have as much value as in delta, and that's fine, we still think there's enough value, so we're going to add it consistently for histograms and take this as the path forward? Does that sound reasonable?

I
Sorry, I do understand, but what I'm trying to say is that for types where we put an aggregation temporality, we have guaranteed so far that delta to cumulative and cumulative to delta conversion works. Gauge does not have a temporality, and summary does not have a temporality, because we don't know how to do it correctly. So we try to keep consistency here, and I tried very hard to make sure that when we put a temporality on something, we have these properties.

E
Well, one thing, sorry, go ahead. Jack, I was just going to say: the language about summary is unambiguous; they're not recommended for new metrics going forward, they're not a supported aggregation type, they're marked as deprecated everywhere. So we'd have to backtrack on that language, I think, because these are going to be something that people are going to use.

B
I originally, in the old specs, had a min-max-sum-count data point type because of this type of desire. I have also created, in my past, a lightweight aggregator to deal with histogram instruments, where I write these histogram instruments thinking "this is too expensive, I don't want to use it; this is too expensive, I don't want to use it", and then finally I'm running production and I say: oh, there's this one histogram...

D
What I'd like to do is this: can we call it? We're still arguing, and we've been on this for a while. We have an entire metrics SIG right after this. Is everyone interested in this discussion able to continue in the metrics SIG? Because I'd like to spend another, possibly 30, minutes on this, to get it to resolution as quickly as possible. We might just have to make a hard decision. Is everyone available for the metrics SIG?
C
Perfect, thank you so much for that. Actually, I was about to call time because, sadly, we have more stuff on the agenda. So thank you for that. It's a very political item, so I wish you the best of luck with the rest. Basically, as you all know, we want to release 1.7, and we have been delaying that because of a pair of issues.

C
One of them is the one we already talked about: reverting the change that passes instrumentation library info to the sampling API.

C
So we will be reverting that for now; we will try to discuss what the alternatives are and everything, but for now we just want to revert it, and I think we have enough approvals. But if not, please go and check that out. I don't want to catch people out of the blue, expecting that this would be part of the next release when it's not. So please take a look at that.

C
Christian Neumüller said that we should file an issue, and I will file an issue after the call mentioning a little bit more of the context and saying that we still need to do some work after this. The other one that I think would be nice to have in there is just specifying that gzip is the only supported compression format.

C
This came from the call last week. A pair of people have already approved it, it's a very simple PR, but I need more eyes on it, and there's a related issue that I opened. Tigran, I would like to prioritize that with you; we can probably do that offline, to have everything clearer.

C
In an issue I mentioned that I would like to declare that gzip compression must be supported in OTLP receivers, and I didn't quite get your question there, but we can follow up offline, probably, because there's not enough time now. So please, please, I'm really begging you, go review these things and make sure that you are aware of what's happening. We want to go ahead with 1.7 soon. That would be all from my side; hopefully we will have the release by tomorrow or Thursday.

C
I could say that JavaScript wants to go 1.0, and technically they could be breaking the specification, at least for the browser case, because they are using gzip as the default, and in the specification... Yep. I don't see any PR for this, correct?

I
Okay, can you explain why this is blocking? Because...

C
Well, I wouldn't say blocking, but, as I said, in the specification at this time everybody is supposed to be setting OTLP compression to none; that's the default, and JavaScript, for the browser case, could be non-compliant, because for the browser case they want to send gzip-compressed data by default.

C
We could... this is the original wording, actually, well, part of the original wording, but the thing is, let's say we want to support snappy or some other more specific compression format; then we would need to make sure that it is supported, or at least handled, in the OTLP receivers in the collector. So yeah.

I
Let me explain. If I read this PR correctly, it does not say anything about gzip; it just adds yet another environment variable to the configuration, but does not require anything in the protocol for receivers to support it. I think if you want to do what JavaScript needs, it's just a simple PR to the protocol that says all OTLP receivers should support at least gzip, and it should be indicated via the Content-Encoding header, and that's it. I don't think we need a configuration just to make JavaScript conformant.
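For reference, this is roughly what "indicate it via the Content-Encoding header" looks like for an OTLP/HTTP export in Go. The endpoint URL, port and placeholder payload are illustrative, not taken from any particular SDK.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"net/http"
)

// postCompressed gzips the payload and posts it with the standard header so
// the receiver knows to inflate it.
func postCompressed(url string, payload []byte) error {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(payload); err != nil {
		return err
	}
	if err := zw.Close(); err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, url, &buf)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/x-protobuf")
	req.Header.Set("Content-Encoding", "gzip") // tells the receiver how the body is encoded
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
	return nil
}

func main() {
	// The serialized export request bytes would go here instead of the placeholder.
	if err := postCompressed("http://localhost:4318/v1/traces", []byte("example")); err != nil {
		fmt.Println("export failed:", err)
	}
}
```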
C
The configuration already exists, by the way; it's just a clarification, so we wouldn't be adding a new option. It already exists there; it's just a small clarification. And I agree that the other part, about the OTLP receivers, should be added. But, Tigran, would you be fine saying that OTLP receivers must support gzip compression?

A
Yeah, I'm fine with that, and I think, also to unblock the JavaScript release, we only need to say in the spec that if the environment variable is not specified, then the compression is implementation-specific. That's all we need to do in that case. JavaScript can do gzip, others can do nothing, right, no compression, and we're good, without blocking the release.

C
Does that make sense then? Yep, perfect. Okay, I will update that later today. So that's it on that side, and please, again, just check the other one, reverting "add instrumentation library to the sampler ShouldSample method". Once that is done, we can go ahead and do 1.7.

C
Okay, thank you for that, thank you so much. One thing that I would like to discuss, but we can probably do it next time, is that we should probably do some issue grooming like in the past, like we did for 1.0. That's something I would like people to think about; otherwise we will end up with 300 issues. Originally this came up when we were trying to decide whether we should auto-close issues after one year, for example.

C
I believe that... so give me a second. Yeah, so the first one is just a proposal to the specs. Yuri keeps pushing for an OTEP, which we probably should do, even though I do not believe I have much more to say than: by doing this, we guarantee some kind of ordering. But I will try to write the same idea up in a page anyway. So the idea is to include the start time, or a timestamp of when the trace ID was generated, as the first 32 bits.

I
The main idea behind this is that it will give us some kind of ordering of traces: if you order them by trace ID, they are kind of appended at the end; it's more or less an append-like system, compared with now, where they are spread across the entire spectrum. This will help different systems; one of the first that comes to my mind is HBase, which is way more optimized, even Bigtable is way more optimized, for writing at the end versus writing across the entire spectrum.
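A minimal sketch of the idea being proposed here: a 16-byte trace ID whose first 4 bytes are the Unix-seconds timestamp at generation time and whose remaining 12 bytes are random, so IDs sort roughly by start time. This only illustrates the proposal; it is not an adopted OpenTelemetry default.

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"time"
)

// newTimestampedTraceID builds a trace ID with a timestamp prefix followed by
// random bytes, so lexicographic ordering roughly follows creation time.
func newTimestampedTraceID() [16]byte {
	var id [16]byte
	binary.BigEndian.PutUint32(id[:4], uint32(time.Now().Unix()))
	if _, err := rand.Read(id[4:]); err != nil {
		panic(err) // crypto/rand failures are not recoverable here
	}
	return id
}

func main() {
	id := newTimestampedTraceID()
	fmt.Println(hex.EncodeToString(id[:])) // first 8 hex characters encode the timestamp
}
```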
I
With the first four bytes being the timestamp (four bytes, which are actually encoded as eight hex characters; whatever, math is hard), having the first bytes be a timestamp will give us kind of an ordering, because usually traces finish roughly in the same order as they start. So again, it's not perfect; I'm not aiming for a perfect solution, but I'm aiming for having some sort of ordering of these events.

A
There is the ULID generator, ULID, if you're aware of it, which aims to generate globally unique identifiers that are, at the same time, ordered by a clock, in a sense, which prevents too much randomness. Maybe that's what you would want to do here.

I
In this case, no; I think my proposal was much simpler. I wanted to do something like AWS X-Ray. It's already defined, it's already implemented in all our implementations to support X-Ray, so we already have that ID generator. My proposal was just to enable that as the default, instead of completely random.

A
I think what I'm suggesting achieves the same goals as you have there, but does not rely on AWS's definition. So...

I
I'm also trying to be nice to AWS, so instead of coming up with yet another solution for this, I was trying to adopt their solution, so that at least it's nicer to the community. We can come up with yet another solution, but I'm trying not to do that, to limit the amount of new things that we add. And again, the ID will still remain opaque from our perspective, so people cannot rely on the fact that there is a timestamp there; it's just that the algorithm we choose is a bit different.

I
Anyway, that's the first one, and the second one is something I discussed with Josh during the sampling SIG. There is a proposal to the W3C trace context to have a bit that... so we currently use only one bit from the trace flags, which tells you whether this is sampled or not, and I was proposing to use another bit from there that tells you, guaranteed, that the last 63 bits of the trace ID are randomly generated.

B
Yeah, it does sound like a nice way to get randomness into the trace ID as part of the specification. We only need 62 bits of randomness the way the specs are currently written. It would help us not have to propagate a trace state, but only in the future, where we can also change traceparent to have a few more flags, the p-value field we've discussed.

F
Yeah, don't these issues slightly conflict? In one issue you're saying you want the leftmost 32 bits to be the timestamp, and here you're saying you want the first... no?

I
The flags? No, no, it's a different discussion. The first bit of the flags is used anyway, the least significant bit, because that's the sampled flag, so I want to use the second least significant bit. The ordering and this work together: essentially, if you look at the bytes, I want the first bytes to be ordered somehow and the last bytes to be random, for sampling. I think this gives us the best of both worlds for the requirements that we have so far.

I
And again, for the randomness, I think it's important to have this propagated, and that's why we need to ask for a bit in the flags, which is cool, because we don't need to bump the version or anything; we're adding a flag that is backwards-compatible, defined like: zero means we don't know about the randomness at all, one means we guarantee that the last 63 of the 64 bits are random.

I
I propose 63 for two reasons. One is that golang naturally has an Int63 method, and also, for languages that do not support unsigned integers (in Java, for example, long is a signed type), transforming 64 bits into a number would be problematic. So that's why it simplifies a lot of things if we stick with 63 bits there.
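A minimal sketch of this second proposal as described: keep the low 8 bytes of the trace ID random (63 random bits fit in a signed 64-bit value, matching Go's rand.Int63 and Java's signed long) and advertise that with a second flag bit next to the existing sampled bit. The flag mask below reflects the proposal under discussion, not an adopted W3C trace context definition.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/rand"
)

const (
	flagSampled  = 0x01 // least significant bit, already defined by W3C trace context
	flagRandomID = 0x02 // proposed: guarantees the low bits of the trace ID are random
)

func main() {
	var id [16]byte
	// The first 8 bytes could be timestamp-ordered, per the other issue discussed above.
	binary.BigEndian.PutUint64(id[8:], uint64(rand.Int63())) // 63 random low bits

	flags := byte(flagSampled | flagRandomID)
	fmt.Printf("trace id: %x, flags: %08b\n", id, flags)
}
```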
I
Anyway, so far I'm looking for plus ones and comments. For the W3C part it's very important to say whether you agree with that, because that's the only way we can push it into the spec. So, if you are okay with it and you believe this is a good idea, just say plus one there; otherwise you can say minus one, if you don't believe in it.

C
I'm not sure I understand. So the only change is that one bit will be set to one if the generator made those bits random; how you use it is a different matter? I mean, we will implement Josh's proposal, our proposed algorithm, but that's independent of knowing that the bits are random. So I think Josh's use case is one use case; others may...

H
I...

C
Yeah, so I see that, okay, good to know, yeah, that makes sense. I saw your two examples there about the main motivations, but I was curious whether we would need to spend more cycles on it; it's good to know.

C
Okay, thanks so much for that. Please, everybody, review that, it's very, very important; it's going to help us a lot. That's the end of the agenda. If anybody wants to discuss something else, now is your opportunity, other than the min and max histogram question, which will be discussed in the next call.