From YouTube: 2021-04-20 meeting
C
All right. Oh, I approved your instrument CL; I don't know if you saw my comments.
A
Oh sorry, but thank you. Regarding the decision tree, or the high-level guidance on which one people should use, I'll send a separate PR, or not. Regarding the names, I think we're still struggling there; people don't like them, but meanwhile we couldn't find a better alternative. I talked with John offline as well, and he was struggling with that name.
C
Yeah, I'm of the opinion, when it comes to naming, that if nobody comes up with a better name, it's more important to be stable. So I'm willing to defer to crowd consensus of, you know: nobody likes it, but nobody hates it, and then eventually someone's going to come out with a proposal and you'll be like, cool, let's make this the name, and it'll just be a veneer, but people will like it.
A
Yeah, so here's my thinking: I think once we agree on the number of instruments, then although we don't like some of the names, we'll both stick with that for this PR, and we give people a heads-up. So I'm going to report back to the SIG once this PR gets merged, and just tell people: this is the status, and we're struggling with the names.
A
So
you
come
with
a
new
proposal
and
if
that's
better,
we'll
we'll
just
change
the
name
and
once
the
once
we
reach
the
end
of
may
we'll
lock
down
the
names
and
I'm
not
going
to
change
that
again,
so
give
people
more
than
a
month
for
for
for
them
to
find
the
better
names.
Yeah.
C
Yeah, no one has pushed on the names in OTLP, because I think they're pretty good, in the sense of being well understood, but I could be wrong. So I'm trying to flesh out the docs better, so people know how to interpret them, so that we have a chance to find dissent before we lock down. But I really...
C
Pretty good, yeah. I think the problem is that in the API we're trying to blend instrument and metric, right? Yeah, and that's, I think, why the names are hard: you're doing two things at once with the name, one name for two concepts. Yeah. You should use compound words; there you go. All right, it's like 12:02; I'm just waiting to see if people are adding things to the agenda or topics here.
C
As you roll in, please, you know, add your name to the attendee list.
C
All right, so it's been about five minutes; hopefully people have had a chance to think about what they want to talk about. I obviously have an agenda set up for this meeting. As we kick off, please add your name to the list if you're an attendee, and thank you so much for coming. First, I just want to start with some reminders. There is a histogram OTEP still in play for doing exponential bucketing in histograms.
C
One thing I want to call out: there were some recent comments, I think from someone who worked on circllhist, just in the past few hours, so there's some more activity on there just recently. If you haven't, please take a look and comment; I think it only has two approvals so far. I am also complicit in not giving it approval yet, because I'm still working internally to have discussions. So anyway, please take a look at it, because it looks good.
C
It
adds
adds
a
lot
of
good
justification
for
why
we
need
more
than
just
fixed
histograms.
Okay,
other
reminders,
there's
a
a
how
to
rebuild
delta.
To
cumulative
sums
plus
specification
around
like
what
sum
is
in
the
metric
data
model,
just
some
verbage
around
it.
C
So you can understand what it is. Again, I want to call this out because I don't know if it's been apparent why we have a data model specification, but the data model specification is written for two audiences. One: it's written for people who want to generate OTLP from their APIs, which also includes our metric API SIG, right, but also for folks like Micrometer who might want to write directly to OTLP; it's written so they understand how to emit the OTLP format.
C
It's also written for people in the collector consuming OTLP, so they understand what these metrics are, what they can do with them, and how to interact with them. And it's also written for exporters, so they understand how to take metrics and convert them into time series. It is not written to outline what the API is, what instruments are available in the API, or any of that.
C
It's written kind of generically, in the sense of: here's specifically what OTLP is, here are these conceptual notions on each side, here's how you go from those notions into it, and here's how you go from this notion out. So I just want to call that out so that when we do code reviews, when we look at it, that's the purpose of that document, and if it's not apparent in the document, please let me know. I don't remember...
C
...if it was this pull request or that pull request, but it became apparent to me, based on comments, that maybe it's not clear what this whole data model specification is serving, so I wanted to make that clear. If anyone disagrees, I'd like to hear it and, you know, make sure that we're all aligned on what we're trying to do here.
C
Okay, cool. So the other thing to bring up is a reminder: we have the label-to-attribute PR. Last week we talked about how to make progress on that. I don't know, I can scroll down to our notes, but effectively this, I think... come on, there we go.
C
This, I think, is the last kind of major change to OTLP that's planned before we want to declare stability, and I want to understand how to get that through or submitted, because I'd like to start kicking off, like now, some benchmarking against it, and we'll talk about that in a second. But just as a reminder: that's a PR that's out, it's open. I believe it has enough approvals to go through, but there might be something that's blocking it. So please, please...
C
...let us know if you have any concerns around that. All right, the last bit is this: I had mentioned last time there's a PR to help define how instruments are different from metrics, which I think was confused; there was a bug open about this, and it was a little bit confusing. So please take a look at the PR if you haven't; hopefully it helps describe that. All right, so that's all the reminders of things that are active in the data model.
C
We have two major blocking things, and I wanted to spend most of this meeting talking about benchmarking to-dos again, if that's okay. So in that sense, let's dive in. Is this sharing my whole Chrome window? Just curious: are we looking at the benchmark right now?
C
Okay, all right, I'll have to click lots of buttons. Okay, that's fine! So I'd like to propose a plan for going forward, if everyone's amenable to that, and I asked Tigran to join because Tigran kind of owns the existing benchmarking bug and some utilities there. So I want to propose this as a plan for the next two weeks: we take the PR for switching from labels to attributes, and we run Tigran's series of benchmarks on it as-is. So we take it, we adapt what he has...
C
...we flesh it out for better metric generation, and the data in it, if we feel like we need to, and we run a series of benchmarks and look at the performance of OTLP. And we basically time-box ourselves: as of not this Friday but next Friday, right, we will have investigated this and opened bugs where we feel like we can make...
C
You
know
meaningful
improvements
to
performance,
okay
and
those
bugs
can
be
considered
blocking
in
terms
of
declaring
stability,
but
we're
going
to
time
box
ourselves
to
from
today,
until
next
friday
we're
going
to
investigate
the
performance
of
what
we
would
call
the
final
incarnation
of
metrics
v1.
C
I don't even know if v1 is what we want to call it; metrics GA, I don't know, whatever protocol-buffer stability for metrics means, right. So we spend from now until next Friday investigating the performance. We use Tigran's tool, and we try to identify areas where we can make improvements, or, you know, give ourselves some assurance that we feel this is good enough to mark as stable for now, and then we evolve it the way we would any protocol.
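[A minimal sketch of the kind of Go micro-benchmark being discussed, assuming a hypothetical generator makeMetricsRequest() that builds a representative OTLP payload; the helper and payload shape are illustrative, not Tigran's actual benchmark suite:]

```go
package otlpbench

import (
	"testing"

	collectormetrics "go.opentelemetry.io/proto/otlp/collector/metrics/v1"
	"google.golang.org/protobuf/proto"
)

// makeMetricsRequest is a hypothetical generator that would build a
// representative OTLP metrics payload (sums, histograms, attributes).
func makeMetricsRequest() *collectormetrics.ExportMetricsServiceRequest {
	return &collectormetrics.ExportMetricsServiceRequest{} // placeholder shape
}

// BenchmarkMarshal measures protobuf encoding cost for the payload.
func BenchmarkMarshal(b *testing.B) {
	req := makeMetricsRequest()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := proto.Marshal(req); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkUnmarshal measures decoding cost, the collector's hot path.
func BenchmarkUnmarshal(b *testing.B) {
	data, _ := proto.Marshal(makeMetricsRequest())
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var out collectormetrics.ExportMetricsServiceRequest
		if err := proto.Unmarshal(data, &out); err != nil {
			b.Fatal(err)
		}
	}
}
```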
E
Josh, a question: is the metrics SIG happy with the semantics of the protocol right now, and is this just regarding performance for now? Is this the last PR, the change from labels to attributes, the one that you want to have in, and is that semantically what we want to have in the protocol, the final state of it?
C
Yeah. Let me try to present my entire Chrome tab here, so I can show where we are, and again, if I'm misspeaking, please, folks, correct me. I feel that's...
E
We need, right, to go over the protocol and do some benchmarking, and see if there is anything that can and needs to be improved performance-wise without changing the semantics, and only after that, after getting all the changes in, make the release. It's likely... I don't think we should allocate too much time to it; I don't think there is a whole lot to gain here.
E
Probably some gains are possible, maybe, but I would not spend months on trying to make it much faster than it is, right. And so, if it's just maybe a week or two weeks, I think it's fine to delay the release and release once, with both this change to the semantics of the protocol and also the performance-related changes, right. So it's just one release, because it is likely that whatever you do for performance may actually also be a breaking change.
C
Yeah,
I
I
totally
agree
with
that
that
if
it
wasn't
obvious
that's
kind
of
what
I
was
trying
to
propose
so
like
my
proposal,
is
we
the
this?
This
investigating
for
two
weeks
would
start
now
and
end
next
friday.
C
So when we cut... well, we can end next Tuesday, actually, or two weeks from now: we come back with a set of performance-related bugs that we've opened, or, if we're lucky, nothing, but I expect there'll be something, and we evaluate those bugs and ask: do we need to make a breaking change to fix this or not, and make a final decision on all the remaining breaking changes. But I like what you're saying about: let's not release another protocol version until we've addressed performance issues. Right, okay, sounds good.
C
Okay, am I presenting?
C
This project plan: yes? No? Okay, good, good, good; I changed Zoom. So I just want to call out for Tigran, because I don't know if this is apparent, but the things that are marked required-for-GA are the issues that we felt we needed to address before we could mark the protocol stable, like, that we've captured the right semantics. And right now the ones that are open either already have PRs that have gotten comments and some approval, or, for example, this requirements-for-safe-label-removal one.
C
We do feel like we want to outline that before we mark everything stable, but we don't expect any changes to the protocol for it. Josh, since you own that issue, you can confirm that; is that correct? We don't expect changes, okay. So I just wanted to call that out when you asked whether we feel comfortable with what we have: yes. We still have a ton of work to do in the SIG, but we'd like to kind of switch this to be evolutionary work as opposed to...
C
All
right,
I
love
that
proposal,
so
so
let's
go
back
to
to
this
proposal.
So
in
terms
of
this,
let's
use
this
time
yeah.
I
won
a
time
box
to
this
april
30th
deadline.
Previously
what
we
said
was
that
we
would
try
to
get
the
metrics
data
model
marked
as
stable
by
the
end
of
march.
C
Then we pushed it back to April 30th, right. I think we have figured out the last change we want to make; there's a PR that's approved that can go through, and I think this benchmarking is the last bit that needs to happen. I would like to tie it to April 30th for no other reason but, for press reasons if you will, to convince ourselves that we can hit a deadline. I know that's not a good reason, but that is literally the main reason.
C
Does anyone think we need more time to benchmark this, given that April 30th would be not this Friday but next Friday?
E
I can help, right, but I don't think I will have time to fully drive that myself. If there are any questions about the source code: first of all, feel free to borrow the source code, modify it in any way you want, and if you have questions I will be happy to answer, but I don't think I will have enough time to do the full benchmarking and propose the changes myself.
C
So here's a question, then. We had talked before about requirements, and I think Tigran has a set of benchmarks that are Go-specific. Victor, you worked on the Go benchmarks and had .NET benchmarks, right? Okay, so I guess the question is: if we have an investigation in Go around benchmarking, and in .NET benchmarking, does anyone know of any concerns with other language implementations of the protocol that we should investigate?
E
Yes, it's likely a lot of work to do, right. At least, I mean, we discussed that, right: probably the right approach here would be to have a generator, maybe use the one that already exists in Go, which outputs, maybe, serialized protobufs, and then in each language you only do the small amount of work for decoding and encoding and just benchmark that, which is, code-wise, a small portion of the entire work that needs to be done.
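[A sketch of that generator idea, under the same assumptions as the earlier benchmark sketch: Go serializes one representative payload to a file, and each language's benchmark then times only decode/encode over those exact bytes. The helper and file name are illustrative:]

```go
package main

import (
	"log"
	"os"

	collectormetrics "go.opentelemetry.io/proto/otlp/collector/metrics/v1"
	"google.golang.org/protobuf/proto"
)

// makeMetricsRequest is a stand-in for the Go generator discussed here;
// it would build a representative OTLP payload (sums, histograms, attributes).
func makeMetricsRequest() *collectormetrics.ExportMetricsServiceRequest {
	return &collectormetrics.ExportMetricsServiceRequest{} // placeholder shape
}

func main() {
	// Serialize once in Go; other language implementations benchmark
	// Unmarshal/Marshal round-trips on this identical input file.
	data, err := proto.Marshal(makeMetricsRequest())
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("metrics_payload.pb", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```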
C
Because I think the performance in Go is significantly different from other languages. I also think, and this is an assumption and I could be completely wrong, but from what I've seen, C# and Java are kind of similar performance-wise, in that I see similar degradations between the two. I don't know if that's 100% true, but it's enough to make me feel a little bit confident that C# is different enough from Go, and similar enough to other languages, that hopefully we will catch things. Regarding the...
C
If you take what you have and run it on your computer, or computers, to make decisions, that would be the extent of what we do over the next two weeks to evaluate performance here. There's a danger with profiling that you can over-tune and over-optimize, especially if you don't shape your data correctly, so we have to be careful, and we have to be careful because we're using, like, limited machines to do this.
C
So
I
I
do
think,
there's
a
longer
term
things
set
of
things
we
want
to
do
around
benchmarking,
and
I
don't
want
to
have
that
work
not
happen,
but
I'm
proposing
that
we
basically
short
circuit
for
now
to
to
reach
this
deadline.
Anyone
have
like
major
concerns.
Major
columns.
Does
that
sound,
reasonable.
E
I would approach this differently: just time-box it, right. Give it a week and see what you can do, whatever is possible to achieve, and if it's ten percent, then fine; if it's zero, maybe that also is fine, right. Whatever is possible to do within the deadline that we want to hit. I would not try to hit any specific number, which we may never be able to reach, right; it may not be possible to do, or it may take months to hit. So just time-box it.
D
I support all of this. I just want to remind us that we're dealing with a protocol buffer performance problem, ultimately, and I think, as a group, we've decided that we want to focus on semantics. I think that the performance problem will go away one day, when someone hand-writes a protocol buffer implementation for OTLP in each language where it matters. So Go: it's going to matter because of the collector, and we're going to fix it. That's my position.
C
Okay. Does anybody else want to sign up to do any work here? Because what I will do: I will take Tigran's Go benchmarks, and Tigran, thank you for volunteering to help me if I have questions, and I will run those against the latest thing and get a bunch of evaluation reports out for everybody by the end of next week, with, like, what we could do, you know, open bugs, that sort of thing.
C
I might just comment on the existing bug, which I think is here, with what I found and where I think we can make improvements for Go. Victor, could you do the same thing for C#? I think you already have some reports for C#, right?
F
Yeah, so let's talk, and we can, you know, figure out who does what.
C
Yeah,
it
could
also
be
that
we
both
look
at
go
in
both
experimental
things,
either
one's
fine,
mostly
what
I'm
looking.
C
Okay, sounds good, sounds good; I mean, yeah, okay. So this, I think, is the last bit of work until we can declare stability, so awesome, awesome, awesome. I do want to call out next this aggregation work; I just want to make sure: Josh, you're signed up to do that, and it's marked as a blocker. Do you think this is something that we can get at least a PR out for next week, to evaluate as a group?
D
I think I've got an answer to the missing-start-time thing, or at least a proposal that I need to write, and then I would just want to find a priority. The second priority that I can also work on is this aggregations one, which I have done some prototyping on, a little code, to make sure that I understand the mechanics of it. But yes, I think I can do that.
D
If it does... I feel like there are a number of open questions where the answer may be: start adding more keys and values into the resource and say what they mean; that's sort of the catch-all at this point in my head. And, like, there's a ticket that I opened about external labels for Prometheus, and, you know, how do we add a label that describes your replica name, or how do we add a label that explains some kind of watermark, or talks about the time?
D
There's
all
kinds
of
ways
that
we
can
just
throw
more
information
into
keys
and
values?
And
I
think
for
performance
reasons.
People
are
telling
us
stop
adding
fields,
and
I
think
that
both
of
those
are
telling
us
that
we
should
just
be
happy
with
what
we
have
and
add
any
more
information
as
key
value.
C
Sorry, I agree with that. This is the start time issue, right?
D
I'd be happy to just introduce it for a sec. The idea is that OpenTelemetry, the OTLP protocol, has all the sum points and histogram points; anything with a temporality has a start time and a "now" time, essentially. And a number of legacy systems, including various sorts of stateless systems that are making observations, don't have any way of knowing the start time; it's impossible to tell when something restarted.
D
You just have a way of saying: what's the current value? And yet we don't want to force people to turn those into gauges just because they're missing a start time. So I've seen several approaches. The reason I'm familiar with this is that I've been working on the Prometheus sidecar, which we copied from Stackdriver, so Stackdriver had an initial approach that we started with. The Stackdriver approach was to take a look at the sequence of observations and just look for resets, and also the first time...
D
...I know that it reset; but I'm going to reset it at the current value, wherever that is. So if the sidecar started and is in the middle of a sequence, you know, this value is now 100,000, and so I reset it right there, and my outputs from now on are going to be the start time where I made my first observation and the difference between 100,000 and that. And then, after some amount of time, the process that I'm observing is going to restart to zero, but I won't make that zero observation.
D
You just can't do that in Prometheus. I'm going to make a first observation which is somewhere, let's say, around a thousand, so at some point it drops from somewhere above 100,000 to about a thousand, and at that point I just assume that a reset happened, and I put the timestamp of my last known observation as the start time, or something along those lines. So this solution, the first one that I just described, is to drop the first observation and start outputting resets.
D
I don't like that solution anymore. One reason I don't like it is that you can fall into an error case where you drop an observation and that's the only point you ever saw in that series; that means you'll never see it in the back end. So what I started doing then is saying: I'm going to output a zero to explicitly reset the series.
D
Now, the Stackdriver approach that I just described was to insert a reset, and then I changed it to a sort of zero, so that I know when the resets are happening. And the point is, you don't actually need that zero if the metric system downstream does its job correctly, and so this is actually a data model question. Here's what I'm going to call the third approach: when I see my first observation at 100,000, I record a hundred thousand and a start time of now.
D
If the user cares to observe the absolute value, they will see a hundred thousand. If the user cares to compute a rate, they must use the reset value and the start time in order to compute the rate, and so you're going to see another point, at some point out in time, that's greater than 100,000. If you want to know the rate of that series, you must compute the difference between 100,000 and your value and the...
D
I think this proposal that I've just described both captures the stateless information, like: I just don't know the start time, and I want to record it; and it also lets an observer start to include all the reset information, so that you can compute rates correctly and detect restarts correctly. So I think I've said enough. Yeah.
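[A minimal sketch of the stateful approach just described, assuming points arrive in order per series; all names are illustrative, not the sidecar's actual code. The first observation keeps its absolute value with start time set to "now"; a later value that drops below the previous one is treated as a reset, and the start time advances to the last known observation's timestamp:]

```go
package starttime

import "time"

// seriesState tracks one cumulative series whose producer reports no start time.
type seriesState struct {
	started   bool
	startTime time.Time // synthesized start time for outgoing points
	lastValue float64
	lastTime  time.Time
}

// Observe maps an un-start-timed cumulative observation to an OTLP-style
// (startTime, value) pair, detecting resets when the value drops.
func (s *seriesState) Observe(now time.Time, value float64) (start time.Time, v float64) {
	switch {
	case !s.started:
		// First observation: record the absolute value with startTime = now.
		s.started = true
		s.startTime = now
	case value < s.lastValue:
		// Value went down: the producer must have restarted. Advance the
		// start time to the last observation we saw before the drop.
		s.startTime = s.lastTime
	}
	s.lastValue = value
	s.lastTime = now
	return s.startTime, value
}
```

[A downstream consumer can then compute a rate between two points that share a start time, and treat a changed start time as a restart.]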
C
Yeah, I want to call out: I ran into this issue when I was doing the delta-to-cumulative work. I wanted a StatsD specification section where I go in and say: what do you do when you don't have timestamps on sums, right, because StatsD doesn't; here's the algorithm they have. I mean, you called this out in my document, like, it was awkward where I put it: they don't have timestamps, and you need to deal with it.
C
So my question is: is it an OTLP data model thing, or is it a Prometheus-to-data-model mapping specification, right? Because, like with the StatsD thing around delta cumulatives, where your deltas don't have timestamps: you know, how do you detect a reset of a counter? How do you accurately map from that model to a model where all this stuff does matter? And to some extent it might be that you can't; where you can, there needs to be a specification. But I want to ask this meta-question of...
G
Okay, I think, at least on our side, we should clarify whether start time is required or not, and what you do when you don't have a start time. You will say that that's a Prometheus problem; I don't think it's a Prometheus problem. We scrape metrics from...
C
Like I said, the purpose of the metrics data model specification is how to map from existing metrics into OpenTelemetry metrics. So I agree with you, it needs to be specified here. I guess my question is: is there a section of the spec that talks about how to map from, you know, things that aren't OpenTelemetry into OpenTelemetry, where we handle these weird side scenarios of what to do if your metrics look like this, right? I'm trying to ask: where does this belong?
G
So I think it's very important for the proto to specify whether it's required, and if it's required, first things first: as Josh pointed out, users, if they don't have it, will say, "okay, then I will fall back to the gauge." Maybe just a link in the proto to some section on how to deal with these cases is also useful.
D
Another thing we can do is just specify that a zero start time is acceptable, and a valid interpretation is start time equals end time, and then we've effectively turned counters into gauges. You cannot detect a reset without a start time, and that's a fact of life. If you have no start time, it's possible that there will be a hundred thousand restarts between now and your next observation.
D
It's also possible that after that hundred-thousandth restart it skyrocketed to a point that's greater than the current value, and you detected zero resets, and that is the fact of life with Prometheus. And the reason why we have OTLP start times is to avoid that ambiguity. But we have that ambiguity any time an observation arrives without a start time, and I think...
C
Can I rephrase what Bogdan said? The decision we're making, I think, in making it required or not, is: are we going to force importers and receivers of metrics to synthesize an appropriate start time, and can we specify an algorithm we think works in 99% of cases that people can use to do that, which is what I think Josh was outlining earlier, so that we can make it required and then, downstream, you can assume you have a start time?
C
That's reasonable, right? Or do we not specify start time, and require exporters to deal with this issue, right? I think that's... when I think of making start time required or not, it's: do I push this issue onto everyone who needs to consume the data, or do I force, you know, importers and receivers to synthesize something that's reasonable, and then we need to specify that to help people out, and then downstream I can assume that there's a start time, right? Is that a fair way to rephrase that, Bogdan?
G
So, by the way, start time right now, FYI, is a hint in the proto, so we don't have a null; we have zero as the fallback anyway. That's one thing. The second thing is: we need to also document that this is not the preferred way, that it's better to find a way to do it, things like that, because from the proto perspective there will be people that produce the proto not via some code, and they need to know what to do, how to produce the right...
D
Data
so
there's
I
can
imagine
a
couple
of
approaches.
One
is
to
to
make
a
semantic
convention
for
clients
to
insert
a
start
time,
so
there
could
be
a
resource
attribute.
This
is
one
of
these
non-identifying
types
that
I've
been
trying
to
find
a
place
for
which
says
my
start
time
was,
and-
and
maybe
that
is
a
it's
a
it's
descriptive,
but
you
better
not
index
by
start
time
or
something
like
that,
and
then
we
can
always
say.
D
Oh, I see a cumulative and a restart time, but it has a resource that tells me its restart time, and I can put that in there, and that will work for enough cases, I think. And then there's a second recommendation, and these are all just recommendations, which is: if you are receiving un-timestamped observations, then you can, and I think we're saying should, become stateful and insert your own start time.
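[A sketch of what the resource-attribute idea might look like against the generated OTLP Go types; the attribute key "process.start_time" is purely illustrative, not an agreed semantic convention:]

```go
package starttime

import (
	"time"

	commonpb "go.opentelemetry.io/proto/otlp/common/v1"
	resourcepb "go.opentelemetry.io/proto/otlp/resource/v1"
)

// withStartTime attaches a descriptive (non-identifying) start-time
// attribute to a resource. The key name is hypothetical; no such
// semantic convention has been agreed on yet.
func withStartTime(res *resourcepb.Resource, start time.Time) {
	res.Attributes = append(res.Attributes, &commonpb.KeyValue{
		Key: "process.start_time", // illustrative key, not a real convention
		Value: &commonpb.AnyValue{
			Value: &commonpb.AnyValue_StringValue{
				StringValue: start.UTC().Format(time.RFC3339),
			},
		},
	})
}
```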
G
Yeah, so maybe all of these can go into the proto. If you look at the explanation for temporality, you will see it's a pretty big explanation, but once people read that, I did not hear too many questions about it. But I heard a lot of questions about start time, so it may not be bad to just put some of these things in the proto for people, for when they read the difference.
D
I
agree,
but
let
me
ask
another
question
bogdan
this:
the
host
start
time
is
an
example
of
one
where
actually,
we
might
be
able
to
figure
out
when
the
host
reset
and
then
we
could
think
like.
I
don't
know
how
to
get
that
from
a
linux
host,
but
I
bet
you
can,
you
know,
get
there's
some
authoritative
answer
and
you
know
for
all
the
non-monotonics
they
don't
reset
so
like,
like
you
could
just
fix
this
and
I
think
we
might
say
to
the
host
metrics
receiver.
D
You
should
fix
this
like
put
in
a
start
time.
Please,
but
then
that's
a
special
case
because
you're
asking
the
operating
system
about
those
numbers-
and
I
think,
but
I
think
there
is
an
uptime,
an
uptime
on
the
linux
system
yeah.
So
so
so
I
think
we're
we're
trying
to
say
that
standard
receivers
should
fill
in
start
time,
not
leave
it
empty
and
that's
another
recommendation.
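[For the Linux case mentioned here, a receiver can derive an authoritative start time from /proc/uptime; a minimal sketch, with names illustrative:]

```go
package starttime

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// hostBootTime reads /proc/uptime, whose first field is the number of
// seconds since boot, and converts it into an absolute boot timestamp
// that a host metrics receiver could use as the start time.
func hostBootTime() (time.Time, error) {
	data, err := os.ReadFile("/proc/uptime")
	if err != nil {
		return time.Time{}, err
	}
	fields := strings.Fields(string(data))
	if len(fields) < 1 {
		return time.Time{}, fmt.Errorf("unexpected /proc/uptime content")
	}
	uptimeSec, err := strconv.ParseFloat(fields[0], 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Now().Add(-time.Duration(uptimeSec * float64(time.Second))), nil
}
```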
D
Yes, and that's a very important one, in my opinion, and those are maybe for the asynchronous observations. And then, you know, for, like, the sidecar cases, where someone's streaming in these things and I just literally may not know when they started, then I can do that stateful thing I described at the beginning.
C
All right, I'm going to call a little bit of time shenanigans here, just because of the time box. I think a lot of what we're talking about here needs to get written down, step one, so other people can read it later. But secondarily, I want to summarize what I think I'm hearing, and you guys can correct me. Okay, what I think I'm hearing is: start time will not be required, and the number zero represents an unknown start time.
C
We
will
highly
highly
recommend
that
you
always
fill
start
time
and
that
standard
receivers
should
include
a
start
time,
but
there
are
some
cases
where
we
consider
that
impractical,
and
so
you
need
to
be
able
to
deal
with
not
start
time.
There
is
a
bunch
of
information
we
can
outline
for
how
to
deal
with
zero
start
times,
how
to
reset
your,
how
to
detect
a
reset
in
the
counter
and
what
to
do
in
that
scenario
that
we
can
document
and
write
down.
That
will
take
some
time
but
effectively.
D
Consensus. That works for me perfectly, and just to be totally self-interested and transparent about it: Lightstep does not want to receive any zeros, and that's why the sidecar is doing the work, and that's why I'm saying the host receiver should do the work, and that's why I'm saying, you know, in this other case, that there's an algorithm you can use.
D
I
don't
want
to
force
vendors
to
do
a
bunch
of
work
and
my
my
back-end
team
absolutely
hates
this.
They
because
there's
some
sort
of
like
queuing
system
where
all
the
points
get
arranged
in
the
same
place
and
something
stateful
has
to
happen
there
instead
of
way
back
deeper
in
their
system
and
that's
the
reason
why
they
object
to.
D
And it makes out-of-order reading and writing harder. I think I have to formalize that statement a little bit: reset detection has to be serialized; that's the reason why my back-end team doesn't like it. The Prometheus heuristic has to be serialized, and, like, replaying data would interfere with it.
C
It should not be recommended at all, right; something like that, okay, yeah, like, that's okay. And I hear you about the difficulties and fun that this is, and I think there's a general semantic of: the better-shaped data we can get out of OpenTelemetry to different vendors, the better it will be in that back end. So we should do our best to provide good-quality data and consistency around how to deal with this stuff. But, that said, I don't...
D
Yeah
yeah
yeah,
I
I
think
and
and
and
I
remember
we
had
theo
schwarzenegger
author
of
circle,
his
on
the
call
once
and
and
he.
I
D
An
example
in
in
the
air
I
think
it
was
like
you've,
got
a
sensor,
that's
reading
like
from
an
oil.
Well,
what's
your
current
like
static
pressure
or
something
like
that?
The
oil
well
is
never
going
to
tell
you
when
it
rebooted
and
you
should
not
expect
it
to
and
that
little
sensor
that's
doing
the
reading.
It's
not
going
to
maintain
state
either.
The
problem
is
like
all
over
the
place.
D
It's
just
that
like
by
the
time
it
reaches
a
vendor
like
the
data
has
to
have
been
shaped
and,
and
vendors
generally
want
scalable
data
coming
in
and
shaping
that
data
from
that.
One
oil
well
requires
serious
serial
observations
and
you
do
the
start
time
manipulation
when
you
have
the
serial
observation.
C
Okay, so I'm going to list this down as action items to perform to resolve this bug. So, yeah, effectively, let's say the proto documentation updates would be these three things: we say that start time is not required; we say that zero equals an unknown start time; and then we also just heavily imply that start time should always be provided, unless it's, you know, absurd to do so. Then, in the data model specification...
C
We
would
have
recommendations
for
how
to
do
stateful
receivers
and
recommendations
for
how
to
do
like
a
resource-based
start
time.
Synthesis
and
these
bugs
are
non-blocking
and
can
be
documented
over
time
to
help
people
deal
with
the
issue.
This
is
blocking.
We
need
to
get
that
documentation
in
prior
to
declaring
stability.
G
And I thought you were doing this stuff in the MDA. I was just... there was another comment that there was a missing start time for host metrics; where was that?
G
Values reset; it is available in different places, but it is available.
C
Okay, let's open a bug to look into that. Do we consider that blocking? That's more of an implementation concern. What...
I
A question on this, maybe, and maybe this is not the right thing; maybe the Prometheus working...
I
You know, not the delta from the first timestamp that you know...
I
...we know about, and then, you know, the difference since then. So the...
D
I have to interrupt, because this was exactly what I was saying earlier that I kept changing my mind on, and the outcome of the previous discussion, I think, is: an unknown start time will be passed through with OTLP as well. You'll have exactly the same data, and the start time is there to help you correctly compute rates that are aware of restarts.
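[A sketch of the restart-aware rate computation being described, assuming two successive points of one series; names and the zero-means-unknown convention follow the consensus above, and this is illustrative only:]

```go
package starttime

import "time"

// point is a simplified cumulative data point with an optional start time
// (a zero time means the start time is unknown, per the consensus above).
type point struct {
	start time.Time
	ts    time.Time
	value float64
}

// rate computes a per-second rate between two points of one series.
// A start time that moved forward signals a restart, so only the
// post-restart portion of the counter is counted.
func rate(prev, cur point) float64 {
	delta := cur.value - prev.value
	if !cur.start.IsZero() && cur.start.After(prev.start) {
		// Restart detected via start time: cur.value accumulated only
		// since cur.start, so the previous value must not be subtracted.
		delta = cur.value
	}
	return delta / cur.ts.Sub(prev.ts).Seconds()
}
```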
D
This should be discussed tomorrow at 8am. However, I think that would be fair too; may I spend some time trying to draft this, and then we can discuss it in the meeting tomorrow? We won't say, "hey, you don't support OpenMetrics"; we're saying this is our approach to missing start times. OpenMetrics has a start time for the same reason, and we want to discuss it.
D
Which may boil down to... I'm afraid that the history is that Google and Stackdriver met with Prometheus in 2017, and we ended up with a start time, and then OpenMetrics was therefore trying to satisfy something that ultimately was turned into OpenCensus and then OpenTelemetry. So the answer may be that Prometheus doesn't care about start time; they put it in there for other people. Be ready for that answer.
C
I will try to join the Prometheus working group, though I might have a conflict, to hear the answer to this, to make sure that we know what we're doing. Does anyone have any other questions or things here? I'm going to document this consensus on the bug itself, of what we're doing, and Josh, you're taking ownership of it.
C
If anyone has any concerns, let's continue the discussion on the bug, and I think that resolves this. So I want to spend the next five minutes or less talking about what we do in the next meeting. Given that we expect the benchmarking work to take two weeks, we may or may not have results to talk about next meeting. If we do, I expect there to be a proposal around: hey, we found this concern with the current OTLP specification...
C
With
this
new
attribute
thing:
here's
a
proposal
for
how
we
think
we
can
fix
it.
Hopefully,
that's
shows
up
in
the
in
the
agenda
for
next
week.
What
are
other
things
that
we
need
to
discuss
next,
because
again,
we've
been
focused
on
getting
the
stability.
C
Sorry,
do
we
need
to
talk
about
start
time
next
week.
C
Okay, good, all right, so you react, yep, good. Anything else that we think we want to prepare to discuss next week? I can grab information on any of the other things; safe label removal might be long enough for the entire agenda, along with the benchmarking stuff. So, anything else other people want to escalate and talk about?
C
Okay, I'll tell you what: if we have time and we don't have anything else pressing, I'm going to throw the exponential bucketing histogram out for open discussion, and I'll try to pull in the right people for it. If we don't run out of time, I want to give them some more attention time in the SIG to talk through that. So if we can fit it in next week, I'll try to fit it in; otherwise we'll delay. Josh.
C
I
I
hear
I
hear
what
you're
saying
the
priority
is
that
we
get
a
stable
metric
data
model
out
quickly,
so,
to
the
extent
that
safe
data
label
removal
is
marked
as
blocking
we
need
to
talk
about
that
first
histograms
is
my
next
thing.
I
want
to
talk
about
it's
just
I'm
trying
to
figure
out
how
to
fit
it
in
like
when
we
are
done
with
stable
things.
Two.
C
And
half
next
week:
that's
what
I'm
that's
what
I'm
questioning!
So,
let's,
let's
look
at
what
you
have
for
safe
data
removal
and
and
get
an
estimate
if
anything
related
to
the
benchmark
shows
up
to
talk
about.
I'm
gonna
punt
it
one
more
week
to
kind
of
focus
on
it
this
in
this
meeting,
if
that's
acceptable
with
everybody,
otherwise
we'll
try
to
fit
it
in
next
week.