From YouTube: 2021-03-23 meeting
B
Yeah, but we are waiting for Josh and George. Yeah, like the vaccine, and Johnson & Johnson.
C
I've seen a lot of folks in the meeting, so while we're waiting, will you please open the agenda and put your name under the attendees section? Thank you.
A
Should I be presenting the agenda document? Go for it? Okay, let's see if my Chrome light's on.
E
Yeah, yeah, I just want to make sure that it's on there. Cool.
A
All right, cool. Well, welcome to the Data Model SIG. If you haven't added your name, please do. Cool. First, I want to start off talking about pending PRs around the data model and things that we need to do, and try to make some progress on these. So, the first and kind of most important one: I think this one still needs a review. Time is basically from... I think this is sharing my entire window, right? Yeah.
A
This is the implementation of our discussion from last week. So, can I show that? Last week we talked about attributes versus labels, made a proposal, and this is the implementation of that. So let me come back up.
A
Anyway, yeah, I think, you know, pending PR (pull request) review cycles and stuff: this needs to sit for a while and allow some comments. So please take a look at it. It's not huge; it's just the metrics labels/attributes stuff, and it looks like somebody wants a benchmark already. Nice.
F
FYI, I updated only the non-deprecated points with this. So, okay, we already deprecated the int versions and stuff of some of the things, and I simply said: okay, screw it, we're not gonna use that, they're deprecated, no need to update that. Yeah. That's totally fair.
A
Totally fair, cool. All right, so that's number one, just a note to review. One thing I did want to talk about: there's another PR from Bogdan here around adding timelines, and the discussion I want to have is the one that we're having right here, of when we first made some deprecations as part of the data model work.
A
You know, we changed IntSum to Sum and IntGauge to Gauge. The current guidelines are that, I think, it's only three months; I think this is per how we've specified today. And what I wanted to ask of, kind of, us in the community is: is that enough time? And here's why I'm concerned. Number one is, we have a long poll of people who are trying to use metrics today, and I know they exist.
A
We have the existing APIs and SDK implementations, and those SDKs kind of adapting to this new protocol buffer, right? Then we have users adopting that new protocol buffer.
A
Then we have removing it from the collector right after that. So, like, that chain of events of: let's update the collector and the SDK to work with this new proto; let's wait for people to stop using the old thing, which hopefully is rapid, because, you know, we're on unstable territory and we're making releases rapidly. My question is: is three months enough time for all of that to happen? Like, I think it's enough time for our community to do our job.
C
We might be able to get some download numbers for different versions of the collector. For the downloads, I've seen a lot of people are still using the 0.6 release, which is still happening today. Every day I see the increase, but I start to see the trend of people moving to 1.x. And based on that, I think, at least for downloads, I'm saying three months is not enough.
A
Okay, one of the things I want to say is: I'm okay with us publicly saying that it'll be three months, and then not having it be three months, like waiting longer and looking at, kind of, adoption rates and people's usage of the API to determine when we actually drop it. Like, I'm okay with that, but I wanted to have that discussion explicitly, because three months seems really aggressive to me for, like, user-owned code.
A
The harm... the harm is, like, again: if you throw people out of a particular release train, that means they're responsible for security vulnerabilities and patches themselves at some point. And so, if somebody's actually relying on this, you want to give them kind of a smooth ramp off of, you know... wait for them to get around to their updates, with whatever that adoption looks like. It's something I think this community needs to understand as well, going forward: how long does it take our users to update to the latest versions?
A
So when can we actually, like... when is it practical to drop a deprecated thing without causing churn, right? Even for an experimental feature that somehow got tons of adoption. So that's just a fundamental question of, you know, when do we punt people off and force them to update if they want security fixes. I know that that's my main concern, actually; it's not necessarily, like, oh, they could just use an old collector. It's more...
A
Will they see that as okay, or will they see this as churn? Again, it's like a perception thing. So anyway, I don't wanna... I don't wanna harp on this issue too much. I think we are already over my five-minute time box of: just, do we feel like three months is enough time for users to migrate or not?
D
Yeah, I guess, to the point that Victor brought up: it's... you know, if people are running the older version that has the deprecation notice but doesn't have the removal, then it's one of those things that adds a little bit of time span into there. And it's almost one of those things that, like... I don't know. I don't know, because, like, I honestly think that it was put in beta too quickly, but that's just based on the perspective of, like, coming in now.
D
It seems like a number of these things where maybe, at the time, it seemed more ready than it was, and then there was a bit of a rethink. So maybe that's why it got pushed to beta, but I think it was definitely more alpha at the time. And maybe it was just one of those things where, with the guarantees, in the future: yes, three months should be sufficient, because ideally you're making less drastic changes than we're making right now. But yeah.
A
Yeah, that's a great point. Let me... where's the notes? I'll take an action item here. "Let's tap on using..."
A
So... that's probably not valid English, I'm sorry; I need to drink more coffee this morning. Effectively, what I'm suggesting is: we modify our deprecation process to say that the currently specified deprecation policy is the minimum, but we will leverage, like, user adoption metrics to understand when we'll actually pull something out, and that it's acceptable for user adoption metrics to kind of prevent deprec... like, we can... it'll be up to the judgment of the technical committee.
A
Right? That's how I'll probably phrase that. But it'll at least give us a hook to require, like, a little bit of a usage evaluation before a PR just goes through to remove this, because it's been up to that deadline; that we actually check and see if it's going to cause churn in users.
A
So we give them an aggressive deadline; but, I know me: as a user, I'm not going to meet that deadline until you actually tell me to drop dead, and then I'll fix it, right? So, being a little bit flexible to cause less churn. I'll open a note about it for my concern, and consider all my comments non-blocking. Does that sound good?
A
All right. Next thing I wanted to talk about a little bit: I threw out the next set of the metrics data model spec from Josh, and I added a little bit to the single-writer description. Not a ton, but what I realized as I was reading it, and writing it, was something that, kind of, Victor brought up several times.
A
Yeah, yeah. Like, so effectively, the identity of an instrument, right, doesn't necessarily correlate directly to the identity of a time series. It could be multiple time series come out of one instrument. But then the next question is: is the identity of, say, a metric thing in the data model, at the same time...
G
...a time series from the API level? Like, if there is a concept of a thing that has an identity in the data model itself. I wanted to not use the word "time series," because it's confusing, so I did start using the word "stream," but I'm not sure that that's the best. Yeah, and we're trying to convey that the stream can be out of order, whereas the time series can't be; it's, like, much more ordered with time.
A
I didn't get as deep as I wanted to go in terms of, like, adding more specifics to what you'd written, but I kept running into gymnastics trying to talk about the identity of a time series, where that's not even a concept in the data model, right? Or it... it is, but it isn't, right? I wonder if we should just call that out as a thing that we can speak about, to specify what the hell this means.
A
No, I mean, time series, maybe, because that's how you query... Time series directly correlates with how you query metrics, right? When I query metrics, I am filtering with labels and grabbing out time series, and I'll see, like, different actual graphs. So the user should recognize the instrument name; they should recognize... all the stuff in the middle is for us to do our manipulations, that users don't need to see.
A
So this shouldn't... the notion of stream shouldn't significantly, like, change the user's perception of what's going on in the back end, I think. Like, the idea of stream is going to be really, really highly correlated with time series, but with some nuances, and those nuances are going to be important. And where they're important is, like: there are operations that we can do on streaming metric data that will be safe for us to do. That is cool.
A
There are operations you can do on a time series database, with time series data, and those operations are not exactly the same, right? There are some things we can do that you can't do later; there are some things that you can do later that we can't do, and we just need to make it clear, like, what this means, and what's allowed, and where things flow. So the idea behind the single-writer bit is: how do we deal with error scenarios, right, around metric label mismatches?
A
Single writer gives us kind of a context and the grounding for how we can look at that problem, and what to do with it. Single writer also determines, a little bit, architecturally, things that we already kind of know: if you're going to aggregate metrics together in a label set, you kind of need to get them to the same location to have that have any meaning, right? You need a single writer of that time series to have that aggregation actually, like, work when it's in motion.
A
So, like, that's the idea behind single writer; that's the idea behind identity. The problem is, as we talk about this, right, we're always talking about the time series, which is how users will see identity, because that's their back end. And then we talk about, you know, instruments. But we didn't have anything to talk about identity in the middle, and we can't just say, like, identity is these protocol buffer entries mapped together, right? Like, that just doesn't exist.
A
So that's why I think maybe we should call that out, make sure it highly aligns with one or the other, and use that specification for ourselves, not for users. Again, if you're worried about, like, what users see, I don't think this is the right SIG, but I could be wrong there. I don't...
A
...think... sorry, my cat keeps jumping on my computer and keyboard. Okay. So, yes, I will take it offline. What I want to make sure of, though: let's... I think, based on this discussion, we'll make a term called "metric stream," which is this idea of: an instrument has generated a set of metrics with labels, and we have an identifying set of labels: resource, instrumentation library, metric name (and yes, I'm going to throw all three in there), and we're going to have a single writer of those all the way through. So that's how I'll specify, or how I'll update, the specification; I'll go through and try to make it consistent all the way through. But please let me know when I, you know, step over bounds or do something dumb, and then we can talk in the pull requests. Cool.
F
Instrument... yeah, I totally agree with that. So what I'm trying to say there is: in the general case, 99% of the time, it will be the case, but it may not always be the case. So I want to make sure that people understand that, by a different configuration, we may allow people to change the... the metric name, to not be the instrument name.
G
I was going to say, I'm trying to clarify: if you're asking something about, like... if you're going to generate a new aggregation that's separate from whatever the default aggregation is, you have to choose a new name, so that you get a new identity. And the thing that I think we're trying to avoid is double counting, or something like that. Like, you take a counter and you aggregate it twice: if they have the same name, now you've doubled the count, and that's what we don't want.
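The double-counting hazard described above can be shown with a toy example. This is purely illustrative: a pretend "backend" that sums incoming streams by metric name, with made-up names (`rpc.count`), not any real exporter behavior.

```python
# Illustration of the double-counting hazard: the same counter, aggregated
# twice and exported under one name, looks like twice the real total to a
# backend that merges streams by name.

measurements = [1, 2, 3]          # recorded through one instrument
agg_default = sum(measurements)   # default aggregation -> 6
agg_view = sum(measurements)      # a view re-aggregating the same data -> 6

# Backend groups incoming streams by metric name and adds them up:
backend = {}
for name, value in [("rpc.count", agg_default), ("rpc.count", agg_view)]:
    backend[name] = backend.get(name, 0) + value
assert backend["rpc.count"] == 12  # doubled: looks like 12 RPCs, not 6

# Giving the view its own name keeps the identities distinct:
backend = {}
for name, value in [("rpc.count", agg_default), ("rpc.count.view", agg_view)]:
    backend[name] = backend.get(name, 0) + value
assert backend["rpc.count"] == 6       # correct again
assert backend["rpc.count.view"] == 6  # the view is a separate stream
```

This is why a new aggregation needs a new name: the name is part of the stream's identity.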
A
Okay, I don't want to... Let's talk about the use case where they're not the same. So, by default, we're going to have them be the same; that's the idea of, like, you know, a metric name and an instrument name tend to be the same, with this identified default aggregation semantic, right, coming out of the gate. The case where it's not the same is where I have a view, and a view is, like: I have this same instrument, but I want to aggregate differently for some reason, and I can't think of a good use case, honestly.
A
Besides,
maybe
I'm
aggregating
like
you
know,
summaries
and
I'm
also
aggregating
a
histogram,
and
I
have
like
different
ways:
I'm
sending
histograms.
I
don't
know
something
like
that
right,
so
I
actually
end
up
with
two
different
metric
streams
coming
out,
because
the
data
points
are
different.
They're
interpreted
different,
they
aggregate
differently
out
of
the
gate,
but
the
instrument's
still
the
same.
The
instrumented
code
is
still
just
saying:
hey
I
had
an
rpc
here
was
the
latency
that
I
recorded.
That's
it
right.
So
that's.
F
For example, you can do the classic one for latency: emit a summary (in Prometheus, we calculate the percentiles on the client side) and also emit histograms. In this case, you cannot have the same name, because the system, the back end, will get very confused and will most likely reject points for you. So that shouldn't be the case.
F
There
are
other
scenarios
like
people,
have
a
legacy
metric
for
good
or
for
bad,
and
even
though
we
they
may
use
the
default
aggregation,
they
want
to
keep
the
name
as
they
had
it
before
in
dashboards
and
and
stuff.
So
that
may
also
be
a
case
where,
where
they
want
to
change
the
name,
so
there
are
multiple
scenarios
where
people
want
to
to
not
use
the
instrument
name.
F
Think... think also, as I said, there are third-party instrumentations that choose an instrument name, for good or for bad, and the user doesn't like that, and they really want to change that; they will change it. So, hence, it will not be the same. Okay, but it's user-provided, that's what I'm trying to say: either via the instrumentation, when they created the instrument, or via a config later.
A
Also,
based
on
that
discussion,
I,
like
stream,
so
good
pick
of
words
josh.
I
think
I'm
gonna
go
with
that
cool
all
right.
Next
discussion,
victor,
you
want
to
kick
us
off
with
hold
on.
I
keep
having
windows
with
points
aggregation
and
this
I
it
might
be
better
if
you
just
rephrase
the
whole
discussion.
Sorry.
E
My, you know, temperature... it doesn't have to be temperature, but: some measurement every second, but my configuration for the SDK may be to do export only once a minute. So how would that look, you know, on the line, and what does that imply with aggregation? And what configurations... you know, do I configure my export for one second, or for once a minute?
E
If
it's
once
a
minute,
does
it
send
60
data
points
or
is
it
aggregated
to
one,
and
then
you
know,
and
then,
if
it
is
aggregate
to
one
then
do
we
have
a
concern
or
otlp
resending
the
stream
multiple
times.
So
those
are
all
questions,
then,
that
all
kind
of
leads
to
is
that
just
a
use
case
of
something
we
don't
have
today,
which
is
just
a
raw
raw
type
of
some
kind?
So
so
anyway,
that's
my
description.
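The "60 points or one?" question above has two concrete shapes on the wire. A small sketch under stated assumptions: the readings, interval, and aggregation choices here are invented for illustration and do not correspond to any specific SDK configuration option.

```python
# Sketch of the two export behaviors being asked about: measure once a
# second, export once a minute. Either (a) all 60 raw points cross the
# wire, or (b) they collapse to one aggregated point per interval.

per_second = [20.0 + (i % 5) for i in range(60)]  # one minute of fake readings

# (a) pass-through: 60 data points per export
raw_batch = list(per_second)
assert len(raw_batch) == 60

# (b) aggregate to a single point per export interval
last_value = per_second[-1]                  # what a gauge export would keep
average = sum(per_second) / len(per_second)  # another possible aggregation
assert last_value == 24.0
assert average == 22.0   # any within-minute spike is lost in either summary
```

Behavior (b) is cheaper but discards the within-interval spikes that motivated the one-second measurement in the first place, which is exactly the tension the discussion turns on.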
F
Because that's... that's one question, but maybe there are good reasons. In my mind, the way I was thinking to do this was: actually, you're gonna have... you can have a configuration at the exporter level to say metrics that match this name, or whatever some condition, will be exported every second; the other ones will be exported every 60 seconds.
F
But
but
one
one
thing
that
I
need
to
understand
is:
what
is
the
benefit
to
send
metrics
with
one
second
granularity?
If
I
send
them
every
60
seconds,
I
do
understand.
If
I
send
them
every
second,
then
I
can.
I
can
alert
faster
and
do
whatever,
because
I
have
the
data
on
the
back
and
faster,
but
to
to
to
have
them
every
second,
but
export
every
60
seconds.
Is
that
what
what
is
the
benefit?
E
So
so
the
the
example
I'll
put
up
is
that
you
know
if
I
export
so
the
data,
the
data
that
I'm
looking
for
once
a
second,
presumably
you
know
the
reason
I
have
it
once
a
second
is
because
there
potentially
could
be
spikes
in
the
system
or
in
the
in
the
you
know,
time
series
you
know
within
the
second
and
that's
why
my
granularity
is
set
to
one
second,
then
the
question
then,
is
that
what
frequency
do
I
export
at
and
obviously,
if
I
just
set
that
to
export
every
second,
then
I
think
that
solves
the
one
first
problem,
but
in
general
we
probably
don't
want
to
set
the
exporter
to
export
every.
A
Why... what... why aren't you using a histogram, I guess, would be question number one. I think that's the... so, do you actually need that one-second granularity? Or, if you're trying to catch spikes, is a histogram good enough? Because the histogram is going to alleviate a lot of back-end pressure for you, with storing that data. So: is a histogram enough or not? And I think my expectation, when you asked this question, was immediately, in my mind...
A
I
would
be
using
a
histogram
here,
but
that
doesn't
mean
it's
not
a
valid
use
case.
I
think.
Actually
it's
something
that
could
be
looked
at
there
could
be
points
that
get
added.
I
don't
think
it's
necessarily
needed
for
like
it's
something
that
is
easy
to
add.
As
soon
as
we
have
a
use
case.
The
question
is,
you
know,
what's
the
use
case
here
and
for
you
I
want
to
understand
why
I'm
doing
this
instead
of
histograming.
E
Yeah,
so
so
I
I
don't
know
the
answer
to
that
specifically
I'll.
Tell
you
that
my
my
original
envisioning
of
why
I
wanted
to
have
cpu
or
whatever
per
second
is
because
of
my
back-end
system.
I
would
set
up
alerts
and
monitor
triggers
to
basically,
you
know
trigger
on
you
know
the
rise
or
the
fall
and
there's
and
in
our
backend
system,
there's
many
ways
in
which
we
do
that
using
ai
and
so
forth.
So
I
don't
know
how
we
would
currently
apply
that
using
histogram.
Now
to
your
point,
maybe
a
histogram
is
enough.
E
I
just
don't
know
enough
about
how
to
make
that
work
and
then
also
to
your
point,
the
the
speed
in
which
we
can
alert
on
it
may
or
may
not
be
a
problem
as
well.
So.
A
Yeah, so a histogram is just a compression technique for your series of points, right? You know, you pick, like, bands on your graph, and you just record counts of the number of points there. So you can theoretically say: okay, I'll just take the middle of the band, and I'll pretend like I had five points in that band in this time series, and then you can alert and get generally decent semantics for these kinds of spikes and things. Like, it's... it's okay, it's not, like, super super crisp. So the only time I would want the points is where I actually need, like, to ideally detect things. Like, I'm thinking... again, this comes from my past: I used to detect torpedoes, okay? That was, like, an earlier thing, and there, yeah, I mean, seconds matter, right? Like, that thing is coming at you, it's gonna explode. So, like, there I want this kind of resolution.
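The "histogram as compression" idea above, including the take-the-middle-of-the-band trick, can be sketched directly. The bucket boundaries and values below are arbitrary examples.

```python
# Histogram as compression, as described above: pick bands (bucket bounds),
# count points per band, then approximate the original series by pretending
# every point sat at its band's midpoint. Illustrative sketch.
import bisect

bounds = [10.0, 50.0, 100.0]                 # three boundaries -> four buckets
points = [3.2, 7.9, 42.0, 55.5, 61.0, 240.0]

counts = [0] * (len(bounds) + 1)
for p in points:
    counts[bisect.bisect_right(bounds, p)] += 1
assert counts == [2, 1, 2, 1]                # 6 points compressed to 4 counts

# Reconstruct approximate points from bucket midpoints (the open-ended
# overflow bucket is represented here by its lower bound):
mids = [5.0, 30.0, 75.0, 100.0]
approx = [m for m, c in zip(mids, counts) for _ in range(c)]
assert approx == [5.0, 5.0, 30.0, 75.0, 75.0, 100.0]
```

The reconstruction is lossy (240.0 became 100.0), which is the "not super crisp" trade-off: fine for spike alerting in most cases, not fine when every individual reading matters.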
A
The question that... the question I haven't convinced myself of, and I really want to know if this is the case for people: do we have observability problems that are at that level of detail? Because that would be a use case that'd be awesome to resolve, and have, and say, like: here's how you do it.
G
There's a question here that I want to rephrase. It's about whether there's a difference between turning the resolution from 60 seconds down to one second, where you're still aggregating over one second, or whether you really want to turn it down to the point where you've got point values that are truly raw and are not being aggregated. I think that question keeps coming back, and I think we keep avoiding it, because you can always just turn down your interval and get very close to that.
G
But
we
still
have
a
data
model
problem
and
it
comes
out
with
statsd
and
amazon's
emf
format,
which
are
fundamentally
point
point
data
points
in
the
model
and
and
if
we're
going
to
to
carry
a
statsd
request
or
an
emf,
data
point
or
a
raw
histogram
event
or
care
about
the
case
where,
where
synchronous
gauge
map
like
matters
like
every
synchronous
gauge
set
matters,
then
we
have
to
have
raw
points
because
no
aggregation
is
is
good
enough,
like
in
order
for
otlp
to
carry
that
type
of
data
point.
G
We
need
a
raw,
a
raw
thing
and
then
what's
really
more
confusing
here
is
that
gauge
is
already
raw.
It
has
a
time
stamp
and
it
has
a
value.
So
so,
when
you
aggregate
a
gauge,
you
end
up
with
a
gauge,
so
there's
no
difference
in
the
protocol
level
between
a
raw
gauge.
That's
like
one
to
one
and
an
aggregated
gauge.
That's
been
taken
last
value
from
some
unknown
number
of
points,
and
so
with
gauge.
I
have
that
type
of
concern,
but
truly
with
a
gauge.
F
I do understand the raw data use case, and I do understand your concern, George, that it feels very close to gauge. Gauge was designed this way because we knew you're not gonna send millions of points for that interval; you're gonna send only one, and it looks way more similar to the sum, or the histogram, and everything.
F
So
the
downside,
if
we
add
the
raw
points
as
an
another
aggregation
where
we
optimize
for
the
case
that
you're
going
to
send
multiple
values
for
the
same
identity
combination
stuff,
is
this
may
confuse
user
between
gauge
and
draw
points?
Maybe,
but,
but
the
benefit
is
also,
it
will
clarify
that
gauge.
G
I
mean
here's
a
thought,
experiment,
a
raw
histogram
point
and
a
raw
gauge
point
are
some
ways
different
in
some
ways
the
same,
and
I
I
want
us
to
eventually
solve
this
question.
We
don't
really
have
a
way
to
represent
a
raw
histogram
point,
which
is
where
statsd
receivers
come
into
trouble.
I
don't
think
we
have
trouble
representing
raw
counters
in
raw
gauges.
It's
just
a
semantic
notion
that
we
can't
tell
the
difference
between
raw
and
and
aggregated
but
with
histograms
there's,
actually
not
a
great
representation.
G
That
gives
us
a
raw
histogram
point,
and
that's
the
to
me,
that's
the
problem.
What
is
the
raw
histogram
point?
It's
an
event
saying
somebody
observed
this
value
in
my
histogram,
okay,.
G
Somebody downstream may want to aggregate that, but right now it's just a raw event. And in the statsd case, or the Amazon EMF case, it's actually got a weight that says how many effective counts, or how many adjusted counts, you should give. And that's where it might not be one; it might be a count of non-zero...
G
Yeah, the first example is "glork:320|ms|@0.1": it's a millisecond timing, and it's at a sample rate of 1 in 10, so that says you should count 10 in your histogram, at value 320, when you finally aggregate this. That, I'm looking at here, is hard to represent inside of OTLP, and, as a result, the OTel collector's statsd receiver has to do its own aggregation.
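The sample-rate arithmetic in that statsd line is mechanical enough to sketch. This is a minimal parser for the one line format shown above, not the collector's actual statsd receiver, and it handles only the `name:value|type|@rate` shape.

```python
# Parsing the statsd line discussed above ("glork:320|ms|@0.1"): a sample
# rate of 0.1 means each received point stands in for ~10 observations, so
# an aggregator should add weight 1/0.1 = 10 at value 320. Sketch only.

def parse_statsd(line):
    name, rest = line.split(":", 1)
    parts = rest.split("|")
    value = float(parts[0])
    kind = parts[1]                       # "ms", "c", "g", ...
    rate = 1.0
    for extra in parts[2:]:
        if extra.startswith("@"):
            rate = float(extra[1:])
    return name, value, kind, 1.0 / rate  # weight to apply when aggregating

name, value, kind, weight = parse_statsd("glork:320|ms|@0.1")
assert (name, value, kind) == ("glork", 320.0, "ms")
assert weight == 10.0   # one wire point counts as ten observations
```

That per-point weight is exactly what a plain OTLP histogram bucket count cannot carry for a single raw event, which is why the receiver ends up aggregating before it can emit OTLP.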
A
Isn't that fundamental to statsd, though? That you have to do, like... like, the whole ecosystem is designed around: I have some local agent, that's very, very, very freaking close, that does this aggregation for me and then generates my time series, right? Like, is that something that we want to expose in OTLP for other formats, or do we want to require stuff like statsd to have that local aggregator to turn it into OTLP?
G
...from Amazon came in asking that they wanted to output data in EMF format, which is still a point-sampled representation. So there are some people who say: I want to see statsd in, and I'm expecting a histogram out. And one way to put a histogram out is to just output those weighted points. I'm not saying that that's the right or only solution, but that is what people come in expecting, sometimes, when they've been doing this type of thing.
F
Should we have, then... then, when we talk about raw points, or raw things, whatever we call them: we can have a "raw measurement," or stuff that clearly doesn't have any weight or anything, and these are the raw measurements; and we can have a "weighted measurement," or whatever we call them, that would contain a weight, and so on. So we can add two new types of aggregations, one that is no aggregation, essentially: you just pass whatever you receive from the... because this is a kind of aggregation that happens in the statsd.
G
We can... I think I agree. I would add new points to this, for this, and I think we can keep delaying the discussion about raw points forever. And, getting back to where Victor started this: we should be talking about higher-resolution export with aggregation, not about raw points. But let's keep kicking this can; I don't think it matters.
A
Yeah, okay. So, in the interest of time boxing: it's been a great discussion; I think we all kind of understand where we are now better, I hope. And let's push this feature request down the chain, in the sense of: if you want to work on it, feel free, but in terms of metric data model stability, we think we can add it in a backward-compatible way. So let's focus on things that we think are not backwards-compatible right now, in the SIG, going forward. Sound good? Yes? Cool. Also, Josh...
A
We
should
probably
start
a
stats
d
discussion
thread
kind
of
like
the
prometheus
one
anyway,
random
assigns
here
we
go
let's
get
into
our
time
box
for
next
steps.
So
some
of
you
might
notice
that
we
didn't
come
into
this
meeting
with
a
proposal
of
what
to
do
next.
That
was
kind
of
on
purpose.
A
We
met
and
we
decided
that
we
needed
to
make
progress
on
the
the
pr's
that
are
currently
in
process
and
there's
a
little
bit
of
a
feeling
that
possibly
all
the
rest
of
this
work
is
not
going
to
require
breaking
changes
to
the
data
model.
A
So
I
want
to
talk
about
what
we
do
next
in
this
sig.
I
think
the
sig
does
need
to
continue
and
make
progress
on
these
issues
over
time
and
continue
to
adapt
the
data
model
and
apply
new
use
cases,
and
we
need
to
do
that
piece
meal,
but
I
think
at
this
point
in
time
we
have
the
the
big
item.
Big
ticket
items
are
kind
of
out
this
labels
to
attributes
is
out.
A
We have a few bits of spec to write. And then, I believe, once there's clarity on the scope of instrument name from Victor... I'm hoping, based on our discussion today around identity and streams, that if we finish up this single-writer bit, that'll take care of those questions. I'm hoping; not positive, but I'm hoping. So what we have left now: again, histograms I think we beat to death, so we're pretty good there; min/max we agree is non-blocking; so, this temporality stuff.
A
Does
anyone
see
anything
in
here
that
they
want
to
talk
about
next
week,
specifically
that
we
should
go
dive
in
and
investigate
otherwise
I'll
try
to
come
up
with
something
on
my
own?
But
that's
that
I
think
this
is
next
week's
discussion
is:
is
there
anything
around
temporality
that
we
think
could
block
declaring
what
we
have
now
stable
and
then
adding
on
the
additional
features
in
in
backwards
compatible
ways.
G
I agree with that. I do see a fellow on the call, Vishwanath, who asked a question about temporality last Thursday in our API SDK call. It might be a good time to repeat the question, but we could also wait until a dedicated discussion about temporality, where everyone comes prepared. It was a temporality question. Vishwanath, are you there? Yes, yes.
I
Thank you, Josh. So my question was: where are we resetting the magnitude for the accumulated metric?
G
Yeah, so, just so everyone else... another way of phrasing that is: you know, the OTLP model has two timestamps per point, and so anytime you're expressing a sum or a rate, you know, we've got the start time in there. And in the time series model, by the time you've cooked the data...
G
There's
no
start
time
and
and
that's
the
way
that
that
prometheus
presents
you
the
data,
you
just
get
one
value,
one
time
stamp
and
one
value,
and
so
the
question
becomes
like
when
does
the
absolute
value
of
a
cumulative
matter,
and
when
is
it
just
to
be
used
for
calculating
a
rate,
and
I
think
that
we
are
facing
essentially
a
decision
where
I
don't
know
how
to.
I
don't
know
the
history
exactly,
but
I
know
that
stackdriver
began
pushing
to
to
represent
these
cumulative
points
as
just
the
rate
matters.
G
That
means
you
can
reset
the
time
series.
That
means
you
can
free
your
memory
and
your
client
and
people
have
been
doing
this
with
prometheus
for
a
long
time.
You
don't
want
to
have
to
remember
everything
for
all
time,
so
you
reset
your
memory,
and
that
means
you
can
reset
your
count.
So
then
the
question
is
you
asked
about
seeing
an
absolute
value,
but
the
client
is
allowed
to
reset
it
whenever
it
wants
in
some
sense.
So
why
does
the
absolute
value
actually
matter?
And
so
then
you
end
up.
G
You
end
up
in
a
situation
where
you
can
say
well,
assuming
no
one
ever
reset
this
now
that
it's
the
true
absolute
value.
So
the
question
that
we
need
to
discuss
is
like:
when
you
don't
know
a
start
time,
is
it
and-
and
you
and
all
you
have
is
a
sum
and
you
think
that
sum
is
monotonic.
G
What
should
you
do
and
and
what
we've
been
doing
is
to
insert
a
reset
and
and
and
change
the
absolute
value
so
that
you
know
the
true
rate,
and
you
know
that's
that's
one
approach.
I
would
say
that
the
prometheus
ecosystem
takes
a
different
approach
and
the
prometheus
ecosystem
loses
counts
at
a
reset.
You
cannot
tell
how
many
times
a
reset
has
happened
because
of
the
heuristics
that
are
in
place.
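The reset heuristic being described, which is a drop in a monotonic cumulative sum is treated as a restart, can be sketched as follows. This is an illustrative simplification of the idea, not the code of any particular backend; Prometheus's `rate()` applies a similar heuristic.

```python
# Sketch of reset handling for a cumulative sum with no start time: a drop
# in a monotonic series implies the writer reset and restarted from zero,
# so each step's delta is either (new - old) or, after a reset, just new.

def deltas_with_resets(samples):
    """samples: cumulative values in time order; returns per-step deltas."""
    out = []
    prev = None
    for v in samples:
        if prev is None:
            out.append(0.0)   # unknown start time: first point tells us nothing
        elif v >= prev:
            out.append(v - prev)
        else:
            out.append(v)     # v < prev: reset happened, counter restarted at 0
        prev = v
    return out

# Counter climbs to 12, the process restarts, then it climbs again:
samples = [5, 9, 12, 2, 4]
d = deltas_with_resets(samples)
assert d == [0.0, 4, 3, 2, 2]
assert sum(d) == 11   # increase observed since the first sample
```

Note what the heuristic loses: if the counter rose past 2 and reset again between scrapes, those counts are invisible, which is exactly the "loses counts at a reset" point above.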
G
I end up marking it as non-cumulative, just because the actual value matters, and that means I had to really understand my data model, and I had to understand that before I started recording data to my back end, which is not great, to be clear. I think that Stackdriver and the Monarch data model actually have thrown things off here. So, for example, in Lightstep, you cannot see the current... the absolute value of a cumulative metric. Sometimes that is a useful thing to do, and the Prometheus users definitely expect it.
I
Sorry, so I was about to ask: is it... so that whoever wants to add it up, you know, anywhere in the pipeline, can actually do that? Or do they just continue to use the delta cumulative value that is as-is right now? So it works both ways: for example, if you're exporting to a system like Prometheus, you can add it up, for example. And I agree that it's a database...
A
You see... if you see this here, with these temporality bugs: if any of these are correlated, and it makes sense to add your question to one of them, feel free; otherwise, open a new bug for the discussion, for us to talk through. Because, yeah... we don't have enough time to actually come to a resolution here. We have...
F
Okay, can we... can we also pretend that this is Prometheus not being compatible with OpenMetrics and not preserving the start time? Because if they preserved that time that they added in OpenMetrics, we would not talk about this problem. Okay, yeah.
A
Yeah, yeah. Sorry, I want to give us all five minutes back, because I think we need to spend time on that, and talk through it a little bit. If you have anything you want to talk about around aggregation, let me know; I'll try to prepare an agenda. I don't know if there are any concrete proposals for change, though, at this point; I think maybe we just need to have some discussions around the issues.
A
If
we
don't
think,
there's
anything
release
blocking,
then
I
think
we
can
start
taking
the
prs
that
we
have
get
that
data
model
into
a
1.0
state
and
then
start
making
iterative
changes.
So
we'll
continue
to
have
this
meeting
to
discuss
all
the
things
that
we've
been
talking
about,
adding
and
make
sure
that
those
get
through.
We
still
need
that
momentum,
but
let's,
let's
make
sure
that
we
have
a
stable
base
to
start
from
sound
cool
yeah.
Can
I
ask
something.
F
Josh, and everyone: does anyone mind if we make a release before the labels-to-attributes change, and we make another release after that? It will help me tremendously in upgrading the collector, to not have to deal with too many changes in one place.
F
...get to 1.0. But, I mean, for me, I don't think I'd necessarily rush into 1.0, as long as we declare stable and we follow the guarantees of stableness. But yeah. So, the question was just: if I can do an intermediate release. I still do need one fix, for the exemplar ID: I misused one of the IDs for the new int value. If anyone can approve that, it will be great, to not release a version with that wrong ID.
F
Yes,
the
fix
yeah
that
one.
I
I
by
mistake,
move
from
five
to
seven
and
I
wanna
revert
that
to
u6
and
was
not
released
was
much
yesterday,
so
we
should
not
care
to
do
this
change.
G
Anyone here who's also curious about the data model: there's a bunch of histogram work in OTEP form right now, as well as in the proto repo, that I've not been catching up on, and I plan to just...
F
Yeah,
I
also
ask
one
of
the
our
data
scientists
to
look
at
that
thing
for
for
histograms,
and
he
also
mentioned
that
he
may
want
to
have
a
conversation
in
a
couple
of
weeks
months
for
about
our
intention
to
support
a
pure
sketch
like
kl
or
something
like
that,
which
is
an
addition
to
our
what
we
have
or
whatever.
F
So
I'm
not
worried
that
we,
we
cannot
do
that,
but
I
talked
to
him
and
to
make
another
for
that
if
he
wants
to
to
move
forward
sweet
thanks,
everyone,
cool
yeah,
so
next
week,
temporality
it'll
be
great.
Thank
you.
So
much.
F
For everyone to know: I will do a release, probably today or tomorrow, with everything that we have right now, knowing that this is not a stable release for metrics. We need a couple more changes, but it will allow me to incrementally change the collector and everything.