From YouTube: 2021-05-25 meeting
Description
No description was provided for this meeting.
A
Hey everyone, the previous meeting ran long, sorry about that.
A
All right, let me present quickly, and everyone, please fill out the document and add your agenda items. We have one blocking issue to talk through, a second issue we had talked about discussing this week, and then one that I want to slip in as well, which I think might be a little quicker. All right, sharing my screen.
A
There we go, all right. Are you seeing the OpenTelemetry CNCF project page here?
A
Yeah, so right now our histogram does not allow negatives; there's an issue to figure out how to allow them. Where is that... "allow sums to be present with negative measurements in histogram"? So I think for now, if you can make progress without negative measurements, that would be ideal. If you need to make this a blocker and escalate it, feel free.
C
It's not a blocker. What I want to do is, in the API spec, call out that histogram should only accept non-negative numbers for now. Anything that is negative has undefined behavior, so the SDKs have freedom to do whatever they want. Later we might allow negatives, but it's not a blocker at this moment.
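A minimal sketch of what "undefined behavior for negatives" could mean for one SDK; the `Histogram` class and its `record` method here are hypothetical stand-ins, not the actual OpenTelemetry API. One SDK might drop negative measurements, another might accept them; callers must not rely on either.

```python
class Histogram:
    """Hypothetical histogram instrument: the spec accepts only
    non-negative measurements; negatives are undefined behavior."""

    def __init__(self):
        self.count = 0
        self.sum = 0.0
        self.dropped = 0  # one possible policy: silently drop negatives

    def record(self, value: float) -> None:
        if value < 0:
            # Undefined behavior per the spec: this SDK drops the point.
            # Another SDK might accept it; do not depend on either choice.
            self.dropped += 1
            return
        self.count += 1
        self.sum += value

h = Histogram()
for v in [1.5, 2.5, -3.0]:
    h.record(v)
# count and sum reflect only the non-negative measurements
```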
C
For the SDK, that's okay; it's not a breaking change, because it's undefined behavior. That means people should not rely on that behavior. If it's specced as undefined... I'm coming from the C++ world, so anything that is undefined means that if you rely on the actual implementation detail, like a particular SDK, say JavaScript, might decide to support it, and you end up depending on that, you're screwed, and later, if they remove it, it's your problem, not the SDK's problem.
A
Yeah, I think what you were saying about undefined behavior is actually okay. I just want to call out that before we decide to make a breaking change there, or any change to it, we should evaluate whether there's common usage such that if we break people, they will really complain, because it could be that all the SDKs implement that allowance in some standard way, or some way that users love, and then we effectively have our hands tied in practice. Right, yeah.
A
Concerns... I'm looking here instead of here. Okay, so, oh, could someone take notes, by the way? I'm taking the notes? Okay, thank you. I am unable to click around and type at the same time, for some reason. Let me add that.
A
Cool, all right. With that said, let's move on to blocking issues, or move back to blocking issues. We have safe label removal listed here in the spec as a blocking issue, or requirements for safe label removal, that we wanted to cover. Now, the PR has been closed as stale.
D
That PR was depending on another that I had written, which was also closed as stale, and then I updated the first of them last night. I just don't want to have open PRs that I'm not working on or that are, you know, days out. So the stuff about staleness markers was the first that I've worked on, from 1648 and 1649.
D
The outcome of this was, you know, after we talked about temporal alignment: the idea of safe label removal, or attribute removal, is supposed to be pretty straightforward, but that PR was to add detail, and the requirement is to add requirements. So I'm not sure we need to add the detail if we just add the requirements. I'm not sure that's a helpful answer, though.
A
Okay, so in terms of making progress on this: does it make sense to take this, extract the requirements from it, spec out just the requirements, and then add the detail later?
D
Yeah, and the reason I wrote out OTEP 156, which talked about the primitive instruments (counter, up-down counter, and gauge), was because I felt there was confusion leading to this safe attribute removal discussion.
D
So we've tried to justify that each data point has an intrinsic default way of being aggregated, so that safe attribute removal does what the user intended and the meaning gets preserved. It turned out that that's a confusing topic, so I wrote the OTEP, and I'm glad that you've commented on it. That's probably the next thing I'll do; I've got lots of feedback on that PR. I need to change the order or something like that, I don't know, but still.
D
But it's sort of all implied by what we've got, in some ways, and it's a lot more than writing requirements.
A
So is there a way we can fragment this work? Oh sorry, I cut someone off there.
E
This is one more reason why I feel staleness, and also attribute removal, safe attribute removal, should be in different documents than the main data model, from a stability perspective. For example, the safe attribute removal: we never tried it in practice. We never did a real prototype.
D
Call that unsafe attribute removal: you can just add points together no matter what you do, and it's not clearly going to give you correct results. The safe attribute removal idea I'm talking about is, you know, the meaning is preserved: sums are preserved, rates are preserved, and so on. Right.
D
Correct. Josh, I think you're right, so I'm starting to agree that we can just literally take all these requirements out. We don't need to spec out how to do safe attribute removal or temporal alignment, and if we do get a prototype that does it, perhaps it's a good example, but maybe we still don't have to have requirements. All right.
D
Like not having them there. So what about gaps? That's the last one, then, that is still open. I've closed my PRs about temporal alignment and about safe attribute removal, and maybe we should prototype that, right, in six months. But what about gaps? Because, importing data from Prometheus, I wanted to have a one-to-one.
A
Yeah, I think I have that listed next to talk about, so yeah, right here. And just so I'm aware, Josh, that's this detail about metric staleness?
A
Okay, yes, we're going to talk about that next. So in terms of safe label removal, it sounds like our action items, based on the previous meeting, are: we're going to split the data model specification into a part that's just specification and a part that's guidance around how to use the data model, and safe label removal will be in the guidance on how to use the data model, not part of the data model part. So we'll split.
E
Josh S. and Josh, both Joshes: I think also an action item is, for the safe attribute removal, there are two parts. One: how to do that in the SDK, which is super simple and, say, extremely safe, and that's why we designed it this way and so on. And there is another one: how to do it in an external source on top of the data model, and I think that's something we discussed, Josh McD, that we may need an optional bit, or we may need some tricks in the data model, to fully support that.
E
There are a lot of conversations about how to do it correctly so that it doesn't change identity. If you get data from two sources and you remove one of the attributes, it shouldn't change identity, and so on. So there is a whole lot of discussion about how to do that in the collector. Okay.
D
I agree: we're going to postpone, and we'll work on that later and figure out what we need. My question was about this stuff that I wrote into OTEP 156. It's sort of like saying: suppose we aren't going to write down requirements for safe attribute removal, what do we want to write down? I feel like there's still something missing.
D
Something that explains why the data model has these three primitive types: two different sums and a gauge. At the end of my document, Josh S. liked my examples, and maybe those go into some sort of API-level explanation for why you would choose up-down counter versus gauge. But in the data model we're trying to say that each point has a different kind of meaning, and you aggregate them together and preserve meaning.
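The idea above can be sketched roughly like this, assuming hypothetical point records (nothing here is actual SDK or OTLP code): each point kind carries its own default, meaning-preserving way of merging, so dropping an attribute re-aggregates points without destroying what they mean. Sums (monotonic or not) combine by addition; gauges combine by last value.

```python
def merge(kind, points):
    """Default, meaning-preserving aggregation per point kind (sketch).
    'sum' points combine by addition; 'gauge' points combine by
    last-value, here the value with the largest timestamp."""
    if kind == "sum":
        return sum(value for _, value in points)
    if kind == "gauge":
        return max(points)[1]  # points are (timestamp, value) pairs
    raise ValueError(f"unknown point kind: {kind}")

# Dropping the 'host' attribute: sums add up, gauges keep the latest.
requests = {"host=a": (10, 5), "host=b": (10, 7)}        # (ts, value)
temperature = {"host=a": (10, 21.5), "host=b": (12, 22.0)}
total_requests = merge("sum", requests.values())
latest_temp = merge("gauge", temperature.values())
```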
E
I think the reason why we have non-monotonic sum and we have gauge is a good thing that we could probably put into a ten-line paragraph in the current specification document, for people to understand the difference between them. Not necessarily specifying how to use them very precisely, but saying, hey, for example, if you do re-aggregations, or if you do attribute removal, which we haven't defined yet: if you do that, knowing that this is a sum, it's safe to, and you know what aggregations to apply.
A
Just so I'm clear, right: if we look here, mm-hmm, that's a sum.
A
Oh shoot, my bad, I forgot that link's actually OTLP, right. And we talk about how to convert into time series, which I think is where it starts to hit some guidance stuff that we'll have to think about. But when we look at sums, and we talk about whether a sum is monotonic or non-monotonic, right, is this enough for the specification?
E
Just Prometheus, okay. I think we should update the title to be "Prometheus time series". It's fine, feel free to.
A
Yeah, I would totally agree. So to follow up on what I was suggesting for action items: we use Josh's OTEP. Actually, I think this is the right place to do it, because it's starting to talk about the right concepts. Let's use that OTEP to flesh this out and then drive it through there. Sound reasonable?
A
I will also say that it's deep, so it took me a long time to think through it; it took me about a week to review it, so I apologize for that. Okay, cool.
A
Our conclusion is we'll go forward on the OTEP for that one. The next thing I want to talk about was this staleness marker PR, so I'm going to scroll down to my comment, if that's okay, Josh, which hopefully wasn't made three times, because my internet died when I clicked submit. Oh yeah, it did submit twice, I'm sorry. No worries. Okay, anyway.
A
So basically there's a question here in this PR that I think we should discuss in the SIG. The two questions that exist are: do we want to represent staleness, meaning "I tried to scrape some endpoint to get a metric and I couldn't do it"; are we going to represent "I don't have a metric available on a pull" as NaN in our data model?
A
That's question number one. That's what Prometheus does, and just so you know, Prometheus uses a particular IEEE encoding of NaN, where their parser for floating point numbers and strings returns one version of NaN if they scrape a NaN, and a different version of NaN to represent staleness. Since we're using protocol buffers, we should be able to use that set of bytes for floating point fields.
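The distinction being described can be sketched in a few lines. IEEE 754 has many NaN bit patterns; Prometheus reserves one specific payload as its staleness marker (`0x7ff0000000000002` in its source), distinct from the "normal" NaN its parser produces. Since protobuf carries doubles as raw 64-bit values, the payload can survive transport. This is an illustration of the bit-pattern idea, not OTLP code.

```python
import math
import struct

# Prometheus's stale marker is one specific NaN bit pattern; a scraped
# user NaN has a different pattern. Both are NaN to ordinary float logic.
STALE_NAN_BITS = 0x7FF0000000000002

def f64_from_bits(bits: int) -> float:
    return struct.unpack(">d", bits.to_bytes(8, "big"))[0]

def bits_from_f64(x: float) -> int:
    return int.from_bytes(struct.pack(">d", x), "big")

stale = f64_from_bits(STALE_NAN_BITS)
user_nan = float("nan")

# Both compare as NaN...
assert math.isnan(stale) and math.isnan(user_nan)
# ...but the underlying bytes differ, so a receiver can tell them apart.
assert bits_from_f64(stale) != bits_from_f64(user_nan)
```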
A
We can encode all sorts of information in there, like a game of Mario. Anyway, okay, so we can use NaN, a particular set of bits in NaN, to represent this, right, and there are implications for what that means for SDKs implementing staleness markers if they end up using this for pull-based metrics. But theoretically, this should only affect Prometheus today, or Prometheus-like things. Okay. The second question is for histogram and summary metric data points, right.
D
Yeah, this is an unfortunate truth in what you just output, I think. So let's just talk about an alternative that's not written down, just to frame it: what if there was a new point type called "invalid" or "missing", and it was an alternative to the four point types that we have? I'm worried that that's very explicit and very clear and correct, but now we have to deal with mixed point types in every scenario where this might happen.
D
That's just what I don't want, because we're sort of on the edge of saying what it means to have a time series or a stream, and the point kind is kind of an important part of that, and I don't want to have to go, "oh, and if it's an invalid point kind, that's okay, because it means something else," or whatever. Go ahead, please.
E
We can even start with flags and reserve some bits there if we want, to be future-proof: like, start using a uint64 or something like that, and use only the last bit to encode some metadata right now.
D
On that note, you know, Josh has opened a spec issue, sorry, proto issue 302, I think, and then I opened another one, kind of a sibling to it, that talks about bits for histograms to say, well, maybe we should just have a bit that says whether the sum is meaningful or not. And there's a discussion about instantaneous temporality, which I've closed, because I don't like it anymore.
D
There's a discussion about non-monotonic counters in histograms, which I think is what the gauge histogram might be represented as, and I wanted to link those two because they're just more bits. We can start adding bits; I'm not sure I like it, but we can.
E
Yeah, so maybe this is what we can do: start with a uint64, a mask of flags or whatever, and start defining the meaning of some of these bits. That will give us freedom to add some of the things that we didn't mention. We can represent staleness easily then, and very clearly; no, it's not going to be a variant of NaN.
A
Staleness, okay. And then the other option is to possibly do bits. I'm going to throw out the dumb option of adding new boolean fields on demand; I'm sure people love that option.
A
Fine with integers, yeah. So I'm going to throw out those three: if we were to take a survey of the SIG right now, folks here, and you had to lean between representing this in one of these three fashions, again, I feel like there's...
A
If you had to lean towards one of them, because we can basically noodle in that direction and, what's it called, you know, when you try to solve a video game by searching your strategy space and optimizing towards the most likely path: which one of these three do we feel most strongly is almost what we want? Then we'll just investigate that. Just one, two, or three.
E
You need to emit something, I think, in a push model, if you can determine that for a specific instrument; for example, if the instrument is closed, or whatever, there is no more reference to that instrument, and you're never going to need values for that instrument.
E
I think we can start emitting staleness if we want in the future. Okay, not saying that we're going to do it right now or anything, but I'm just saying that it may affect the SDK.
A
Actually, yeah, this makes me more comfortable with the notion of a bit set, because we have some notion of semantic meaning we want to convey: staleness versus a final-point marker is different. And I do agree, in push we have a lot of weird concerns, there's always dropped data, right, but to the extent that you can actually send a final-point marker, that'd be pretty great.
D
Are we going to then say, if we see one of those two well-known NaNs, exactly how to treat them? And if it's some other NaN, that means the user produced it; are we okay with NaN values? Because I don't think we've said anything about it, and I think it's fine if we say nothing about it, but I just wanted to raise that.
A
I don't think we touch them from the core data model standpoint. Now, from a Prometheus integration standpoint, I think it'd be my preference if we could take those NaN values on ingestion, turn them into this bit flag, and then turn them back into the NaN value later if the data is flowing through OpenTelemetry, just so we have a crystal clear denotation of what these metric points mean. What's interesting now is what values you put in, for example, the count for a histogram.
A
Right, I think the count...
E
Or it can be minus one for "not present", or whatever. The other option, by the way, with the flags: we can have, for example, 32 flags reserved for global meaning and 32 of them per data point, and one of the options would be to encode in the histogram whether values are present or not. So, for example, we add a bit that means "this is the stale marker".
E
We say: okay, ignore anything else; we're going to leave things at the default values, and we're not going to read this point, because it's invalid, it's the stale marker. Then we can have another bit that says "this is the final point", which means it has valid values, but it also acts as a stale marker from now on, things like that. So we can have different meanings based on the flags, and it doesn't matter what values we put; we can say that the values are invalid or not readable.
A
If I have a reader of the protocol that doesn't know about the flag, what do they do? And if we encode things with the flag, where we have assumptions about stuff, is it acceptable to crash somebody when you send this metric? Or, I mean, we could actually fully specify it: if you get something that doesn't abide by these rules, silently drop the point, or something, I don't know. Yeah.
E
I think the first flag that I would add, the first bit, is: are the values valid or not? Once we have that flag, that bit, propagated everywhere, then we can do anything else.
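A sketch of the bit-flag idea being discussed; the flag names and layout here are hypothetical illustrations, not the actual OTLP protobuf. A data point carries a small integer of flags, a writer sets the bits it knows, and a reader interprets known bits while ignoring unknown ones, which is what makes the scheme forward-compatible.

```python
# Hypothetical flag bits on a data point (not the real OTLP layout).
FLAG_NO_RECORDED_VALUE = 1 << 0  # values are invalid: a staleness marker
FLAG_FINAL_POINT = 1 << 1        # valid values, and the stream ends here

def describe(flags: int) -> str:
    """Reader side: interpret known bits, ignore unknown ones."""
    if flags & FLAG_NO_RECORDED_VALUE:
        return "stale: skip this point, its values are meaningless"
    if flags & FLAG_FINAL_POINT:
        return "final: use the values, then consider the stream stale"
    return "normal point"

stale_point = FLAG_NO_RECORDED_VALUE
# A newer writer set a bit this reader does not know about; it is ignored.
future_point = FLAG_FINAL_POINT | (1 << 17)
```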
A
Okay, so I think this deserves an owner, someone to push on this implementation, right, including a performance eval. So who wants to sign up to investigate this?
A
I don't know, I'm just throwing out ideas of what we need to do to validate this and then get it specified correctly. I want to see, if possible, a working prototype of using this in some fashion, possibly with forked PRs across a set of components, so that we're comfortable with this and have an idea, and then we can write up an OTEP and propose this as a thing that we do. Sound reasonable?
D
Oh yay, awesome, Victor. I'm going to close the PR that I have open. It did have some important bits, and I hope that we keep them. The most important bit came out of the Prometheus working group a few weeks ago, which is to be sure that we have this concept documented of an unbroken sequence of observations, meaning we have a start time and we're going to keep making observations that are a single sequence, and if you have some staleness in the middle of your sequence, it's independent.
D
Those observations are just stale values, and you still have an unbroken sequence, because that Prometheus server never crashed. What I was trying to make sure we have is that there's an independence of unbroken sequences and staleness, and that's important, because when we talk about overlap resolution, you should be able to detect overlap when, say, two Prometheus servers are writing the same series.
D
One of them saw a gap and one of them didn't, and there's an error happening now, and it's because two writers are producing the same series, and we can detect that by looking at those start times. So we need to make sure that start times still work for a Prometheus server, independently of staleness.
A
That's something we should probably have at some point, and I think we keep teasing into little notions of it here and there. So anyway, thank you, Victor, so much for signing up to drive this, and, you know, don't feel like you have to do all of this yourself; sign us up for different pieces. I'm more than happy to help with the various components.
A
Okay, right. And Victor, I am very slowly, I mean, frankly, way more slowly than I want to admit, working on trying to take those performance tests that we had and get them into the proto repo in a reproducible way. Actually, what I'm going to propose is getting them into a completely separate repository and having a GitHub Action that runs them on demand, so that you have a way to do these experiments without relying on Tigran's personal account.
A
Yeah, yeah. No, I think that's going to be part and parcel, so I guess I suggest using what we did before for these performance checks, but trying to get something a little bit more modern, a little bit more up to date, because the performance tests that existed are missing a lot of concepts that have been added.
B
We could just include, like, a sequence number; the timestamp is the sequence number, but the timestamp doesn't, well, okay, you could do the overlap stuff piece, I suppose. But for these things that we're talking about, where we have the staleness markers and the last-metric-in-a-series stuff, is the timestamp still sufficient?
A
A
question
I
mean
so
so
the
difference
is
how
much
work
do
you
want
to
do
to
detect
stillness
like
if
you
read
josh's
specification
and
he
talks
about
implicit
notion
of
staleness,
it's
basically
detecting
missing
points
in
a
sequence
based
on
these
timestamps
right,
and
so
that's.
That
is
our
sequence
number
today
for
metrics
we
could
add
a
direct
sequence
number
if
you
want,
but
theoretically
the
timestamp
should
be
providing
the
same
value.
A
At this point, I don't know. How do people feel about adding sequence numbers in general? I don't think...
D
Because, I mean, I think they should be treated as identical points: if you write the same point and it happens to be replicated on your collection path for some reason, you just end up writing the same point. If you write different points, there's some sort of conflict happening, but why did you ever write two points? That's the case where you have identical overlap, potentially, or something like that.
D
But this is going to come up, and we won't be able to avoid it forever, because we need to talk about late-arriving data, and I think when you do talk about late-arriving data, it's almost natural to imagine rewriting the same point, and then the sequence number, Victor, comes in to say "this is the second copy of the point I've written" or "the third copy of the point I've written" because of temporal delays. I'm thinking, though, that currently the timestamps tell you enough for a single stream.
A
There's also the thing where, with cumulative points, you can kind of drop points in the middle and be okay. If you have late-arriving data, just drop it, because your most recent point is more up to date than what you got, but...
D
You should notice when there are gaps. If there's an unbroken sequence, then you know it was just missing data, and if there's a broken sequence, it could have reset, and that's when we have this ambiguity. In Prometheus it's: okay, there's a gap, treat it as a reset; I don't know how many times it reset during that gap.
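The gap-versus-reset distinction can be sketched with cumulative points that carry a start time; this is an illustration of the idea being discussed, not settled spec text. Within one unbroken sequence the start time stays fixed, so a missing scrape is just a gap; when the start time moves forward, the writer restarted.

```python
def classify(prev, curr):
    """Classify a transition between consecutive cumulative points.
    Each point is (start_time, time, value); a stable start_time means
    one unbroken sequence, a new start_time means the counter reset."""
    if curr[0] != prev[0]:
        return "reset"            # new sequence: counter started over
    if curr[2] < prev[2]:
        return "inconsistent"     # same sequence, but the value went down
    return "continuation"         # same unbroken sequence (gaps are fine)

a = (100, 110, 5.0)   # start=100
b = (100, 130, 9.0)   # same start: a scrape at 120 may be missing, still a gap
c = (131, 140, 1.0)   # start moved forward: the process restarted
```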
A
All right, so I'm going to call shenanigans on the sequence number thing, Victor. I think if you want to have a longer discussion, let's open a thread, open a bug. There was an open thread around "should OpenTelemetry have a sequence number" that I closed saying start times are fine, or at least I commented on it saying start times are fine.
A
If you want to reopen that bug or discussion, please do so, and let's put it on the agenda for the next meeting, but I want to call shenanigans and continue with some of the other topics today, if that's okay.
A
Yeah, I think that's more of a protocol concern, in the high-level protocol, but yeah.
A
Okay, let's move on. I did want to talk through, let me see if I can back up here: we have one remaining temporality-related bug, and this one's kind of simple to talk through.
A
So the concern here is: there's a part of the specification that talks about concurrent requests, and it says you should allow concurrent requests in OTLP, and the client implementation should expose an option to basically wait for your existing requests to go out the door before you continue, or wait for them on shutdown, and how many concurrent OTLP requests we can have. So this is a little bit related to that sequence number, and a little bit related to out-of-order points, right.
A
So I guess the question here for metrics was: should we allow multiple OTLP writes to be happening at the same time for exports, if, for some reason, one export with retries is still not getting through?
D
But imagine there's a pool of collectors, and one of your requests goes through a collector that gets stalled and one of them doesn't, and now the point from 30 seconds ago is delayed but the point from 20 seconds ago has arrived. That's when I think we want to just let that happen, and it will look like a broken sequence of events, potentially.
D
I don't know, I'm not sure about that, because it depends how much metric data you produce. For, you know, a process that's forwarding metrics data from another source, you may not be able to do that, but...
D
Yeah, I guess I just have a memory of span outputs being, you know: if you can only have one output pending at a time, you end up getting bigger and bigger batches, and eventually dispatches are so large that the latency of an individual request becomes a problem, and you can't really scale up from there.
A
So there's this equation here around the maximum number of concurrent requests you can send, the maximum request size, and a notion of whether or not you batch metrics together in chunks. So the SDK, if it knows it has a whole bunch of internal metric data, could send out, like, ten at a time, in bundles, in parallel, right.
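The chunking idea can be sketched like this; the `export` function is a stand-in for an OTLP exporter call, not a real API. The SDK splits one logical batch into bundles and sends a bounded number of them concurrently, instead of one giant request.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(metrics, size):
    """Split one logical batch into bundles of at most `size` metrics."""
    return [metrics[i:i + size] for i in range(0, len(metrics), size)]

def export(bundle):
    """Stand-in for an OTLP export call; returns how many points it sent."""
    return len(bundle)

metrics = [f"metric_{i}" for i in range(25)]
bundles = chunk(metrics, 10)  # three bundles: 10 + 10 + 5

# Up to 4 exports in flight at once, rather than one call with everything.
with ThreadPoolExecutor(max_workers=4) as pool:
    sent = list(pool.map(export, bundles))
```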
A
That's a thing you can do. So it's not like I'm going to have one export call that has every single metric for the current time; I could actually chunk it as well, right, and send things out in parallel. So this is basically saying, if you read the specification, it's calling for things that gRPC might not allow.
A
For example, gRPC doesn't actually allow fire-and-forget, or doesn't seem to provide fire-and-forget. So from the standpoint of "can I fire something off and ignore whether or not it landed", the gRPC client may rate-limit you anyway, so we can't specify something stronger than gRPC, but also we may want to give the user configuration over these things, like what we talked about with a sequence number.
A
I have some open, higher-level questions based on what this raised. I think this particular bug was mostly trying to outline specific questions on the spec, but I wanted to raise this higher-level concern of, overall, how do we chunk and send things out over OTLP. But if I...
E
That's in the spec, yeah, but it's not correct. In the case of gRPC it's not correct, because of the way gRPC works: that is not the number of concurrent requests, it's actually the number of connections or sockets. If you have concurrent requests on the same gRPC connection, they are actually serialized on the same socket; gRPC opens only one socket that sends all of the data, well, not serialized exactly, they put it into a stream and they do chunks and so on. So yeah.
A
Okay, so I guess effectively here, let's answer the question from a data model standpoint of...
A
I think we only have five minutes, so we're not going to have time to really talk through it, so I'll raise it this week. I don't think the bug is the right place to comment on this issue either, but we can have a comment next week about whether or not this is really a data model concern.
A
I feel like there is a data model concern around concurrency that I wanted to raise attention to: this concurrent sending requirement thing, and the fact that things could be batched. You're right that it's in one thread, so it kind of doesn't make sense in any case.
C
Just to clarify here: I think we will have a scenario where you can have two exporters, one exporting data every one second about the CPU, which is very critical, and some slow exporter that exports something like temperature every 15 minutes, into the same ingestion on the collector. With that, even if the SDK or the exporter can guarantee some sequential behavior, still on the collector side, on the protocol side, you have to deal with the fact that multiple things could happen.
E
It is true, and by the way, if you have a load balancer and go through multiple collectors and then through another set of collectors, in the middle, requests can be reordered. We are talking here only about the source of the stream.
A
Okay, so let's change... we only have three minutes; I'm going to call shenanigans again. Unfortunately, I have to be the time guy. Let's answer this question, right: should we write this down? I'm fine if someone just wants to open a PR and put it there, but from the standpoint of closing this issue, from a metrics data model standpoint, the fundamental thing we have to say is: can things be written out of order, and if they can, then we need to make sure we're accounting for it.
A
So if the answer to this is yes, which I think, and I agree with Josh: yes, we should explicitly state that things can come out of order. Then we just need to see if we've accounted for it in our spec, or explicitly write it somewhere, and then I think we're okay with the way OTLP concurrency is currently specced, even though there are some issues there that we can talk about in that context, but that's outside of the data model spec.
A
This is really the only thing we need to answer: should we, and I should call that out first, should we explicitly...
D
To complete this: we leave open the option that an SDK can decide to be in order for some reason, and we haven't discussed the option that, in a delta encoding strategy, you might decide to buffer your data and combine it on the way out. So there are other options here, and we don't want to prescribe what people do.
A
Okay, cool, so let's at least...
A
The SDK spec should be where we're more prescriptive, so yes, agreed. Okay, let's move on then. I will take an action item to open a bug about explicitly stating out-of-order points in the data model. So I will either open an explicit bug about that and remove the existing bug from the data model SIG, or we'll just open a PR to clarify that issue and go from there. Does that sound reasonable?
A
Okay, next steps. We didn't have a chance to get into enum state sets, and I think that's going to be a big issue, but we spent most of our time on this stuff here, which is higher priority. I'm going to tentatively push enum state sets back, and I'm also going to reserve room for Victor to continue discussing this histogram bit-flag thing next week. Sound reasonable for next steps?