From YouTube: 2020-07-28 meeting
D
So, oh good, we have lots of people, yeah. So should I make a preface to this conversation? I don't know.
D
So one could call into question the decisions that have been made, or debated, about this idea that you can always turn a delta into a cumulative, or a cumulative into a delta. These relate to the things that we're summing up, and then there's this question about what's a gauge, or what we've called value recorder or value observer. And so we've got several iterations. Bogdan and I have been doing all the talking, and there are definitely some parts of this that can be left out.
D
My latest directive from all the people that I talk to on my side is: get something done that we can all agree on, and if there are things we can't agree on, perhaps we can just leave them out. So one of those things we can leave out is instantaneous, or raw, values. I don't think we need raw values right now, because they seem to be causing trouble.
D
This other thing that I've been advocating for, called continuity — which is whether you're created by a snapshot-like callback or whether you're created by synchronous calls — that doesn't need to be in there. It should be completely independent and orthogonal, I think, so that can be left out, I believe. But the place we are right now: we have a protocol that has instantaneous in it, it has monotonic tied up with value type, and so, I don't know, we're stuck. That's all I have to say right now.
A
Oh yeah, we're definitely stuck right now. What is submitted is unusable, let's put it this way; that's an understanding everyone has, that we're halfway through defining something. And I think both of us took some approaches to polish what we have, with more details, recombining things and stuff.
A
First of all, I would like some feedback about the two high-level approaches that me and Josh took, which are a bit different, and I would like to hear some ideas — and again, it doesn't...
A
So yeah, we took two different approaches. My approach was more like describing the form of the data that we are reporting: this is a raw measurement, this is a sum that I calculated, this is a histogram, this is a summary.
A
This is whatever format — we support multiple formats, and here are the properties for every format. I think that's how I tried to structure it. Again, some of the things that I have here may not make sense, and we can polish things, but this was my idea: I'm starting from the fact that I'm explaining what kind of data I'm sending on the wire and what the properties of this data are. For example, this is a sum.
A
I have only monotonic sums for different things — we can change all of these things — but what I'm trying to say is: this is a monotonic sum, or this is a sum and it's monotonic or not, it's a delta (it resets every export) or it's cumulative, or whatever other properties we have. We start from the fact that it's a sum; we describe the aggregation, or so.
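A minimal sketch of this "describe the form of the data" approach, with hypothetical Go types standing in for the actual proto messages (the real proposal is a protobuf definition; all names and fields here are illustrative only):

```go
package main

import "fmt"

// Each wire format is its own kind, carrying only the properties that
// make sense for that kind of data.
type SumDescriptor struct {
	Monotonic  bool // the sum only ever increases
	Cumulative bool // false means delta: resets every export
}

type HistogramDescriptor struct {
	Bounds []float64 // bucket boundaries
}

type RawDescriptor struct{} // individual events, no aggregation applied

func main() {
	// "This is a sum, it's monotonic, it's cumulative": the descriptor
	// states what the data on the wire is, not how it was produced.
	requests := SumDescriptor{Monotonic: true, Cumulative: true}
	fmt.Printf("sum{monotonic:%v cumulative:%v}\n", requests.Monotonic, requests.Cumulative)
}
```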
D
Just so everyone's reading this: this is the metric descriptor that we're looking at, and in it we have a type, a name, a measurement value type. We have raw measurement. These are the sort of kind, or the type, of the metric. And it's a little misleading, because we have a histogram here, which is a type, and then down below there's a histogram data point, which is not the same thing.
D
So what we're looking at are the type descriptors, in effect, essentially, yeah. And just to briefly summarize the one that I put together, which comes at it from a pretty different direction: I was thinking of these descriptors as being about what went in — the inputs, not the outputs. That explains almost the entire difference here. And so mine — this is the more recent one, which is the simpler version.
D
Basically, in this iteration of it I left temporality, but I removed instantaneous, which is something Bogdan has also done. So temporality has two values, structure has three values, and then there's this other thing, continuity — well, there are all kinds of lengthy comments, and I'm sorry I didn't respond yet — this thing continuity, which has two values. And basically a metric descriptor is characterized by all three of those independent properties, and all combinations of those properties, I think, are meaningful.
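A sketch of the three independent properties being described, using hypothetical Go enums; the value names for structure are taken from later in this discussion (gauge, monotonic sum, non-monotonic sum), and everything else is an assumption, not the actual proto:

```go
package main

import "fmt"

// Temporality: how a value relates to its time window (two values).
type Temporality int

const (
	Delta      Temporality = iota // change within the window
	Cumulative                    // total since a fixed start time
)

// Structure: what kind of value is reported (three values).
type Structure int

const (
	Gauge           Structure = iota // non-adding, point-in-time value
	MonotonicSum                     // adding, never decreases
	NonMonotonicSum                  // adding, may go up or down
)

// Continuity: how the inputs were produced (two values).
type Continuity int

const (
	Continuous Continuity = iota // synchronous calls at arbitrary times
	Snapshot                     // observer callback at one instant
)

// A descriptor is characterized by all three independent properties;
// the claim is that all 2 x 3 x 2 combinations are meaningful.
type MetricDescriptor struct {
	Name        string
	Temporality Temporality
	Structure   Structure
	Continuity  Continuity
}

func main() {
	fmt.Println(MetricDescriptor{"http.requests", Delta, MonotonicSum, Continuous})
}
```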
D
The thing I liked about the approach that I took is that basically there's a really clear one-to-one mapping from what's in the protocol to what started the whole thing — from the OTLP, from the OpenTelemetry metrics API perspective. And I think that when we do something like that, it gives us the ability to record events in the client and just process them in the collector as though they were happening locally, so that you could have a written exporter...
D
...that does exactly the same thing in the collector as it would do in any one of the SDKs — which I'm not convinced is going to be easy to do with your approach, Bogdan. It doesn't mean it's a problem, it's just something I'm thinking about.
A
When I put these things on the wire, it's still a sum. It doesn't matter who calculated it. It doesn't matter if I saw the raw events — for example, in the case of memory usage, whether I saw all the new or delete calls, versus the kernel saw them and calculated the sum of these things for me. It's still: now I have a sum, and this is a fact. This is what I have right now.
A
Coming from being a maintainer of Java and the collector as well, I saw a bunch of metrics that in the collector we scrape externally, from third parties, from other places — like, we scrape Kubernetes for CPU for pods, memory usage, things like that. We also scrape hosts for different other things. So in that part we are more consumers of other third-party libraries. That's why my mind is always thinking about: what is the format of the data that I have now?
A
Please, Justin, go ahead.
B
Yeah, an issue I see with yours — and there are things I like about exporting it the way yours does, being able to push the CPU-intensive work of histograms and things like that to another component outside of my main node — but the first issue I see with yours is that you have stuff like instrument counter, monotonic, continuous, which, as we've discussed with configurable aggregations, a counter isn't necessarily even a counter. Like, you create a counter in the API...
B
...but apparently you could turn it into anything else. A histogram, right?
D
Well, in my effort here, what I didn't do was just put an instrument-kind-like property here, which would be one of the original OTel six instruments that we've just designed, because I don't want to reflect the actual source at that level of detail. What I tried to do was to describe the properties of the instruments in a generic way.
D
And I guess the reason why I was looking at that is that I see what I think is a potential to improve usability for metrics vendors and systems. You know, if I'm reporting a sum, it gets converted into a gauge at some point in a lot of the systems, and I feel that that's a loss of information. I believe that gauges and sums are different in their properties and, if I'm going to get fancy...
D
...I think that they should be treated differently mathematically. And so I wanted to preserve information about what's a sum, because that way, when you're in a user interface and you see a metric that you've never seen before — well, if it's one of these sums, I think my best default is to display a rate of change.
D
But if it's a gauge, I think my best default is to show you the quantiles. That's why I think it's interesting. But I also see a difference with, like, the up-down counter, which creates a positive/negative changing sum. I still think of that as a sum; it shouldn't be turned into a gauge. So I was trying to preserve what's a sum versus what's not a sum.
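The default-display argument can be stated as a tiny dispatch on the preserved kind. This is a hypothetical sketch of that reasoning, not anything taken from the proposals themselves:

```go
package main

import "fmt"

type Kind int

const (
	Sum   Kind = iota // adding values, incl. up/down counters
	Gauge             // non-adding, point-in-time values
)

// defaultView picks a display for a metric you have never seen before.
func defaultView(k Kind) string {
	switch k {
	case Sum:
		return "rate of change" // meaningful because the values add
	case Gauge:
		return "quantiles" // distribution of the observed values
	default:
		return "raw values"
	}
}

func main() {
	fmt.Println(defaultView(Sum), "/", defaultView(Gauge))
}
```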
A
Okay, that's a very good point, Josh, and I think I missed it. I don't know if you can see the history, but initially I had sums including both, and I had a property saying monotonic — okay, which, 100%, we can debate. But do we agree? For me, again, I think this is where I struggle a bit: so we have an instrument...
A
...like Justin pointed out, we have a counter, which usually records some delta changes in, you know, an instrument like — I don't know — number of requests executed. Okay, we do a plus one every time we finish, or, for number of requests started, we do a plus one every time we start a request. Okay, so now, if somehow I turn this into a histogram, a lot of the properties of this...
A
The
fact
that
it
was
a
sum
initially
or
it
was
a
delta
of
a
change
of
things-
are
gonna
disappear.
The
reason
why
are
gonna
disappear
is,
or
in
some
aggregation
may
disappear.
If,
if
the
aggregation
that
I'm
computing
right
now
is
I
don't
know,
I'm
just
computing
the
mean.
A
Never
should
do
that,
but
let's
assume
I'm
computing
the
mean
of
these
things
and
I'm
exposing
that
mean
it's
still
an
aggregate,
a
valid.
We
have
an
input
that
wasn't
adding
a
positive,
adding
thing.
We
knew
you,
it
is
delta,
but
when
once
I
turn
it
into
a
meme,
all
the
informations
and
everything
is
lost
right
now.
The
only
the
only
good
thing
that
I
can
do
right
now
is
to
describe
what
you
have
now
you
have
a
gauge
or
you
have
whatever
we
call
it,
but
you
actually
have.
A
...is aggregated data. I lost the temporality of when things happened. I lost a lot of things. I can just say that this is a mean, more or less. Does it make sense, where I struggled a bit? When I gave you this example of aggregation of aggregation in that document, the whole idea was: what if I'm applying an aggregation that is losing a lot of this information?
A
I don't know how, right now, to manage all these properties, because they no longer hold.
D
Yeah, it makes sense. And part of me thinks, well, we haven't really defined mean as an aggregation — but if you have sum and count, that's equivalent to a mean. But I do know something about the sum when it's adding versus grouping. For example — so it's sort of like, I know something; even though it's marginal, it's something I know, and that...
D
If we think more about those as being a concrete set of aggregations, I think that for each one of those aggregations we can figure out what we mean by them, and the inputs are still the source of this. And I know continuity has been lower priority, but I have mentioned how, even if it was just a bunch of means: if I know there were snapshots, I can tell you something about the size of the label set — it tells me something different for snapshot versus continuous.
D
So I feel like there's a little bit of information here, and my goal was just to preserve all the original information that could potentially tell you that an aggregation is not very meaningful. So if you compute a rate of gauges, that's pretty cool; but if it's a rate of snapshots, the SDK controls that, so you're looking at SDK behavior, not application behavior.
A
That's very cool, and I really — I kept thinking, for example, especially for histograms: I found it very handy to know if the things that I'm putting into the histograms are coming synchronously or asynchronously. I didn't find that useful for a sum, for the reason I was explaining in this long comment: for a sum, I no longer have the count of measurements there.
A
I don't know if it's useful or not — I may be convinced otherwise — but I found it handy, for example, for a histogram, where I have a count of the samples that are in the histogram, to know if this count comes as an effect of the number of requests in the application, versus as a behavior of the SDK or any other scraper of this metric. So this is...
A
This
is
an
interesting
point,
especially
if
you
have
the
count
of
measurements
inside
the
aggregation,
so
so
the
count
of
measurements
inside
the
aggregation
comes
with
this
nice
property
that
or
require,
or
it's
useful,
to
have
this
extra
information.
If
you
have
the
count
of
measurements,
let's
put
this
way,
but
I
think
what
I
found
it
hard.
As
I
said,
and
to
give
you
a
more
example
josh.
Why?
Why?
A
I
I
think
more
than
just
these
four
or
five
aggregations
that
we
have
is
because
I
have
to
support
things
like
signalfx
backend,
where,
for
example,
they
want
for
for
cpu
usage
to
save
money,
for
whatever
reasons
to
save
money
for
cpu
usage,
they
want
to
report
cpu
usage
as
a
delta,
so
so
every
report
time
they
report
a
delta
and
they
calculate
a
percent
of
the
total.
They
called
it
cpu
utilization,
which
is,
if
I
read
a
t0
t1.
I
calculate
t1
minus
t0
value
and
I
divide
it
by
the
total.
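A worked version of that CPU utilization example, under the assumption that the counter is cumulative CPU seconds and the divisor is the capacity available in the interval; all names are illustrative:

```go
package main

import "fmt"

// utilizationPercent reports the delta between two cumulative readings
// as a percentage of the total capacity available in that interval.
func utilizationPercent(t0Val, t1Val, capacity float64) float64 {
	return (t1Val - t0Val) / capacity * 100
}

func main() {
	// e.g. the counter moved from 100 to 112 CPU-seconds while
	// 64 CPU-seconds of capacity were available: 18.75% utilization.
	fmt.Printf("%.2f%%\n", utilizationPercent(100, 112, 64))
}
```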
D
Thank god — I hope we're not joining metrics here — but, like, you multiply by a thousand or by a hundred... I wasn't thinking we'd put that kind of support in the protocol. I was thinking it was just a way to move raw metrics around, or aggregated metrics around — not derived metrics, or something like that.
A
I should have a way to say that — not that this necessarily represents a percentage, but that it is not a sum, it's not a histogram; it's still a scalar. It's a scalar, and we're not going to go and add all the possible aggregations to our protocol. So my idea was, okay: we have a couple of aggregations that we know are very useful even for the backend to know about, because they can play with them, they can calculate new ranges...
A
...they can compute new time overlaps, a lot of things. But there are going to be some random people coming with random aggregations, and those will still be there, and we need to support them as well.
D
The thing to do — well, I guess this is why I was saying that I wanted OTLP to represent what came in. So you could have a metric named "percent", and it has all the properties that we talked about, of what went into it, and then you multiply by 100. It still has the same properties, but you name it something different to say it's been multiplied.
D
So I was thinking — let's see — the input is CPU usage by core, or something like that. Is that correct? Let's say it is. And I'd like that to be done as a callback, and that has that nice property, the snapshot property that I've been referring to, which makes them all instantaneous measurements: they were taken at the same time, so I can relate them to each other without referring to a window of time.
D
So
you
can
turn
those
into
deltas
by
subtracting
from
the
previous
window
right
and
they
they
are
still.
They
still
have
the
same
structure
and
they
still
have
the
same
continuity
by
changing.
All
you've
done
is
change
temporality.
So
now
you
have
these
deltas
in
terms
of
cpu
seconds,
and
I
I
I
think
I
understand
you
to
say
that
you
want
to
look
at
the
total
time
span
and
make
some
calculation
there
or
you
want
to
look
at
the
sum
of
cpu
seconds
there.
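A minimal sketch of that "change only temporality" step, assuming a simple series of cumulative observations; a real converter would also have to detect resets (a decreasing value), which this ignores:

```go
package main

import "fmt"

type point struct {
	ts  int64   // observation time
	val float64 // cumulative CPU-seconds observed at ts
}

// toDeltas subtracts each observation from its predecessor. Structure
// and continuity of the series are untouched; only temporality changes.
func toDeltas(cum []point) []point {
	var out []point
	for i := 1; i < len(cum); i++ { // first point has no predecessor
		out = append(out, point{cum[i].ts, cum[i].val - cum[i-1].val})
	}
	return out
}

func main() {
	fmt.Println(toDeltas([]point{{10, 100}, {20, 112}, {30, 130}}))
	// Output: [{20 12} {30 18}]
}
```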
D
I wasn't quite sure, but I feel like at that point you change structure. You're saying, I'm going to divide this by this — which is meaningful because they were computed at the same instant, by the same callback — but I took two additive things and I divided them, so now it's no longer additive: I computed a ratio, which is a good candidate for a gauge. Okay, so that's when I would change structure, I was thinking.
D
Well, it's related to that, yeah. Okay, so yeah, I feel like you get to a part — and I'm confused at this point — and this, I think, is where raw values...
D
Right
so
just
to
wrap
that
up,
it
is
no
longer
the
the
percentage
is
not
a
change
of
percentage,
it
is
also
not
a
cumulative
percentage.
It
is
derived.
So
part
of
me
has
been
thinking,
as
in
the
background
of
this
conversation,
is
that,
like
there's
just
some
new
thing,
which
is
like
a
derived
thing,
which
tells
you
that
the
original
inputs
have
been
mutilated
or
combined
in
some
ways?
But
I
don't
know,
I
don't
want
to
add
more
things.
D
Versus
a
cumulative
of
a
gauge-
and
I've
been
like
racking
my
head
over
this
for
a
couple
like
a
day
now
because,
like
I
think
of
temporality
or
delta
as
being
a
change
over
time
between
some
values,
but
there's
also
a
sense
in
which
we
think
of
cumulative
when
I
think
about
prometheus,
which
is
like
I'm
going
to
go
all
the
way
back
to
the
beginning
of
my
start
by
process
time
and
I'm
being
cumulative,
which
means
I'm
preserving
values
that
were
recorded
in
old
times.
Like
I
I'm
accumulative.
D
Therefore,
I'm
going
to
show
you
the
value
from
like
an
hour
ago,
or
you
know,
like
different
window
every
time,
I'm
scraped
and
so
there's
a
sense
in
which
cumulative
might
refer
to
the
set
of
values
that
you're
exporting
as
opposed
to
the
way
in
which
an
individual
value
is
computed
and-
and
I'm
stuck
hit.
This
way.
A
I think, for me — again, not trying to monopolize this discussion — it was much easier to understand from the point of view of: what is the data that I'm seeing right now? And then maybe some of these properties that we want to preserve about the initial data, we can preserve.
A
Does it make sense, Josh, where I struggled a bit? And again, we can argue about which of the information from the input we should preserve in this thing. But I think what I'm trying to say is: the things that we put into the descriptor should say very clearly what the data is — what is this data, right now, that you are seeing — and then here are some properties about what happened, or what the initial things were. Not the other way around.
A
And another example is this one that you are pointing to here, which for me was a very interesting one, exactly this thing. There are two metrics in this example. One metric was a sum observer — a sum observer that monitors the kernel metric that keeps a count of allocations and deallocations, okay? And I said, for this experiment, the kernel metric comes with an allocation location label, which is called stack, and so on, okay?
A
So now, I said, the second metric is my own counter — my own up-down counter — where I go and instrument tcmalloc, and every time a new or a delete happens, I record the value to the library, okay? And these are the sequences of things: from the start of the process, I'm allocating some objects, then deleting some other objects. For me, when I looked at the data produced by these two metrics — one produced by the kernel, one produced by me —
A
...I see no difference. For me, when these data are on the wire, I lost — and the discussion was: is this a snapshot or not, for example; that's where we started from. And for me it was like: okay, I know that before, it was a snapshot, and what we called a snapshot was the fact that the library reads these values every cycle, and so on.
D
I was thinking of the application case for a snapshot as being, like: I'm recording the size of all current shards in my application. If I do it in a callback, I can ask you, at this moment in time, how many current shards there were. If I do it with continuous inputs and I narrow the window to be infinitesimally small, the size of the set — the number of label sets, the number of things I'm summarizing — drops to one or zero.
D
So if you ever want to count the size of a set, then snapshots are very useful to you, and continuous variables are less useful, because you have to think about how long the window is, and the answer is variable. It may be ancient history, but StatsD did include a thing that was about counting cardinality — this is one way you can...
D
...you can get that. So cardinality is point-in-time when you're dealing with snapshots, which I feel is meaningful, but it's also esoteric and minor, so I wish — we shouldn't get hung up on that. I think our biggest question is: what does it mean to be a delta gauge versus a cumulative gauge? Because I actually think the other things are all answered, in particular if you've got raw inputs.
D
Well, that's right — so only for raw values. Once you've aggregated, I've got a time range, and then I know what I'm dealing with. Yes.
D
The window — so I'm trying to tell the difference between: I aggregated over a minute and this is my last value (I did have a start time, and I know the last value's timestamp), versus: I aggregated over a minute and these are all of my values — like, I'm just trying to record raw data.
A
And
I
feel
like
that
case,
you
are
trying
to
record
the
raw
data,
and
why
do
you
group
in
100
minutes?
That's
where
that's
where
or
a
minute?
If
you,
if
you
record,
wrote
the
raw
measurements,
every
measurement
has
only
one
timestamp,
which
corresponds
to
the
moment
when
that
measurement
was
recorded.
It
was
given
to
us.
A
What's
the
temporality
for
raw
measurements,
if
you
saw
in
my
proposal
raw
measurements,
don't
have
temporarily,
they
have
just
a
time
where,
where
is
they?
They
don't
have
a
temporary
itself
because
they
are
just
represent
one
event.
If
I
have
an
aggregation
of
events,
that's
when
I
described
from
when
to
end.
I
aggregated
this
event.
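The timestamp distinction being made here, sketched as two hypothetical data-point shapes (not the actual proto messages):

```go
package main

import "fmt"

// A raw measurement is one event: a single timestamp, no start time,
// and therefore no temporality to speak of.
type RawMeasurement struct {
	TimeUnixNano uint64
	Value        float64
}

// An aggregated point covers a window, so it describes "from when to
// when" the events were aggregated; delta vs cumulative says how
// successive windows relate to each other.
type AggregatedPoint struct {
	StartTimeUnixNano uint64
	TimeUnixNano      uint64
	Value             float64
}

func main() {
	fmt.Println(RawMeasurement{1595894400000000000, 1})
	fmt.Println(AggregatedPoint{1595894400000000000, 1595894460000000000, 42})
}
```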
D
Yeah. Someone else help us here; I don't know what to do.
D
So, in my proposal — that's number 181 — I said: if it's unaggregated, just leave off the start time and we'll know what you mean. In that sense, I know. But temporality still means nothing to me if it's a raw value with no start timestamp.
D
And that's one of the problems with my latest, which is 181: you have this temporality field that you actually don't need to use in certain cases. It's worth reminding everyone at this point that part of the reason why one of the original changes, 168, was so convoluted is that there has been a concern raised about performance: the more oneofs and nested messages — and actually just simply fields — the more memory the collector is going to have tied up.
D
So this PR — I'm ready to close it; it's sort of not so good — but I stuffed all three of those same values into one field, so that I wouldn't have to hold, you know, five words of memory instead of one. And so this PR, 168, is almost equivalent to 181, except that in 181 I exploded those into separate fields, and that has a performance overhead associated with it. So whatever we decide here, there's still going to be pressure to keep these structures small and to reuse — to find multiple uses for — individual fields.
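A sketch of that packing trick: fold the three small enums into one integer field so a descriptor costs one word instead of several. The bit layout below is made up for illustration and is not the layout from PR 168:

```go
package main

import "fmt"

const (
	temporalityShift = 0 // 1 bit: 0=delta, 1=cumulative
	structureShift   = 1 // 2 bits: 0=gauge, 1=monotonic sum, 2=non-monotonic sum
	continuityShift  = 3 // 1 bit: 0=continuous, 1=snapshot
)

// pack folds the three properties into a single word.
func pack(temporality, structure, continuity uint32) uint32 {
	return temporality<<temporalityShift |
		structure<<structureShift |
		continuity<<continuityShift
}

// unpack recovers the three properties from the packed word.
func unpack(v uint32) (temporality, structure, continuity uint32) {
	return v >> temporalityShift & 0x1,
		v >> structureShift & 0x3,
		v >> continuityShift & 0x1
}

func main() {
	v := pack(1, 2, 0) // cumulative, non-monotonic sum, continuous
	fmt.Println(unpack(v))
}
```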
A
I think, Josh, we can do it differently. We can put all these things in very explicitly and then apply whatever encoding we want over all of them — but the things stay very explicit for the user. I don't think we should think in an encoded way; we have all the technology, and we can define our own encoding, and this can end up in, let's say...
A
We can still do that. Let's first agree on which approach we take. And again — to answer your question, by the way — a delta gauge, for me, is 100% not a thing, because a gauge is just a gauge. It just represents the current value. It does not represent anything else — or even though it may represent something, we don't know what exactly the aggregation or transformation was that was used to obtain it. We just don't know the temporality. And I started...
D
Then go ahead — okay, go ahead! Well, I'm starting to like your proposal more after we've talked it out here, but, on the surface of it, this is going to perform badly. So I think there's that concern.
A
We can use the same thing; we can start from there. But what I'm trying to say is, I was hoping to keep these messages distinct, and so on, for better understanding — even though we encode all of them into a uint64 or whatever, however we encode it for performance. We may even not do that; for example, we may just say we write a special marshaller in proto to do this.
A
For us — we can discuss how we solve the performance problem, but I would like to make sure we do have a clear understanding of the data, clear structures and semantics, and then optimize on that. I'm all for optimizations, but let's make sure we are semantically on the same page, and then optimize as much as we want.
A
So: raw measurements, which are like gauges — gauges don't have temporality; I don't know how temporal they are. I mean, for raw gauges, we know it's just the event that recorded the measurement; for a gauge, some transformation or something produced this value — I don't know the temporality, I cannot claim anything. Then there are sums, where again — there is an issue linked there in the spec — should we support non-monotonic sums, deltas?
D
To me, the sums and gauges were just scalars, and structure was telling me whether they were gauges, or monotonic sums, or non-monotonic sums, and temporality was telling me whether they were delta sums or cumulative sums. I will have to say, I still believe structure applies universally, but I don't want to try to make that case now. We've been 45 minutes here; I don't know if we think this meeting is going to go a lot longer.
D
I feel that, now that I've heard you out, I think we should get to an understanding like you have here, and then map those into codes, potentially, to optimize. That sounds pretty good. Does anyone else want to say something? I was trying to call an end to the meeting, so that we can all go think harder now.
D
Now that I understand a lot more — basically, now that I see an opportunity to remove temporality from certain measurements, I'm starting to rethink this. It means I have new feedback on this PR, as well as a change in mind that I want to make but haven't thought through yet. I also have kids to pick up from school — well, daycare — in 15 minutes, so I might want to go.
A
Yeah, thank you so much, Josh, for your time. And please, everyone, please give us feedback. Me and Josh are going back and forth with a lot of ideas; I'm trying to put in comments, as you saw, a lot of examples and where I'm coming from. Please read them. Just tell me I'm off, or tell anyone that is off, or whatever — just give us feedback. Me and Josh have sometimes felt that it's just the two of us trying.
D
Yeah, really appreciate that. An addendum to that: the pace of OpenTelemetry metrics has been slowing, and my company is looking at me and saying, Josh, I think we want to pull you back to doing other project work. So my commitment is not really 100% any longer. I was on call today — that's why I had a busy afternoon, which is why I hadn't read this comment. So we need more investment, more involvement, to make this happen, and I'm not on it 100% anymore.
A
Yeah, so I'm happy to lead that PR to success and so on. You saw that in the past couple of days I've been dedicating, like, at least a week and a half; I started to provide this feedback for all of these. I have really dedicated a couple of weeks just to finishing this. So please, please, everyone, give us feedback. Josh, if you prefer me to do more of the work, just let me know what exactly you expect, and I can do tests, I can do whatever other things.
D
...values — I can make t-digest into, like, samples. But let's — we've got to get the basics working, and the Prometheus export path happening. And none of the AWS interns have spoken up, but I know they're listening: we'd like Cortex working, we'd like DogStatsD coming in as deltas and going out as Prometheus, we'd like StatsD sets coming in and going out of OTLP. Let's get the basics working and, like, I know more needs to be done. Okay.
A
Also, by the way, every time I'm designing things, in the background I'm doing a transformation between this protocol and Prometheus. I will start to work on one between this and StatsD — I'm not very familiar with it, but I will start to have that ready, just to prove that it's possible.
D
The thing that I keep remembering is that you can count with a negative, and so negative counters are meant to transform into up-down counters, and that just feels like it, you know, confuses things. But there are legitimate uses out there of negative counts and such.
A
I see, yeah. I mean, okay, we'll have to treat that as an up-down counter, because we cannot make it fit otherwise — I mean, sending that to Prometheus as monotonic will screw up Prometheus. So we have to pick the most general thing that we have.
D
Yeah, I think we probably need another meeting to talk about the delta exporters, especially when they're going through the collector. First, there's been an issue filed which basically points out that we can't make Prometheus work until the clients are exporting cumulatives.
D
And when I got into this whole story, I had worked only with StatsD and DogStatsD a lot, and we were very happy with the fact that my client has no memory — but it uses tons of CPU; it's very expensive. So moving towards Prometheus means I'm going to add memory and lower my CPU, ideally. But if I have high cardinality, this problem doesn't go away. So I wanted us to be able to do...
D
Otlp
could
be
state
like
you
have
statelessness
in
the
client
so
that
you
don't
have
to
remember
everything,
and
it
is
possible
in
this
story
that
we're
creating,
but
it
is
flipping
away
from
us
right
now
because,
like
there's
a
drive
to
just
get
prometheus
working,
I
think
they're
like
to
get
deltas
to
work
correctly.
There's
got
to
be
some
magic
where
you
say:
okay,
I'm
starting.
D
My
otlp
exporter,
I
need
to
know
whether
I'm
talking
to
an
agent
or
a
pool
of
collectors-
and
I
kind
of
also
want
to
know
whether
I'm
going
to
try
to
export
deltas
at
the
downstream,
because
if
I'm
talking
to
an
agent
good
one
endpoint
that
one
endpoint
can
do
a
delta
to
cumulative
for
me
and
then
I
have
no
memory.
My
agent
has
memory,
but
it
can
restart
and
reset
things
and
that's
fine.
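A sketch of why a single agent can do delta-to-cumulative while a scaled collector pool cannot easily: the conversion needs per-series state, which only works when every delta for a series reaches the same process. All names here are illustrative, not collector APIs:

```go
package main

import "fmt"

// DeltaToCumulative holds the per-series running totals: the "memory"
// an agent must carry so the stateless client does not have to.
type DeltaToCumulative struct {
	totals map[string]float64
}

func NewDeltaToCumulative() *DeltaToCumulative {
	return &DeltaToCumulative{totals: map[string]float64{}}
}

// Add folds one delta into the running cumulative for a series key
// (metric name + label set) and returns the cumulative value.
func (c *DeltaToCumulative) Add(series string, delta float64) float64 {
	c.totals[series] += delta
	return c.totals[series]
}

func main() {
	agent := NewDeltaToCumulative()
	for _, d := range []float64{5, 3, 7} {
		fmt.Println(agent.Add(`requests{path="/"}`, d))
	}
	// Prints 5, 8, 15. An agent restart resets the series, which is
	// acceptable for Prometheus as long as the reset is visible as a
	// new start time; a load-balanced pool splits the deltas across
	// processes and breaks this entirely.
}
```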
D
But if I'm sending my OTLP to a scalable collector pool, handling deltas is very hard at that point, and we don't have a solution. I don't see one coming soon, unless you shard and do some sort of management of that — so sharding is needed to get a scalable pool to work. So then I'm starting up, and I don't know whether I want to send cumulatives or deltas. There's no good solution here. I completely agree with...
A
...you. We can talk about how to improve usability, but I think, let's get... let's get...
D
...sure the user gets — and I know that we do have Elan on the call; he hasn't spoken, but there's less and less of a call for getting deltas to work the longer we've been on this project, and I'm not sure what to do with that information. Like, I respect deltas, but there hasn't been a great deal of vendor support asking for it — New Relic is the only one — and part of me thinks a lot of the problems that we have here would be addressed.
G
Mike — Michael's here — you'll also hear from Datadog, and, I think, earlier, more pointed opinions here, but I'm happy to share our thoughts, yeah. You know, you also said you had to run off and pick up kids, and we just — we kept you.
A
Elan, can you read — in the specification there is an issue, 725. Can you read it and tell me if your thoughts are similar to that? I got that information from somebody from SignalFx telling me why not to use deltas; tell me if you have a similar understanding of why deltas are a problem. No — just for collecting more feedback, collecting more information.
D
A question for the New Relic person — but it's a similar sort of question for any kind of delta export — which is: you have a number which goes up and down. Do you want to see differences, or do you want to see a gauge?
D
I feel like both answers are valid, and I think a default is needed, maybe, but it relates to whether we're going to put the memory in the client or put the memory in the server. I don't know. Anyway, I think I have to run. If you guys make any progress, put it in the issues or something like that, and we should reconvene. This was really useful. I think we should have two meetings a week until we solve the OTLP. Yes — as many as...
A
...we need. We're all for this — see you Thursday, Josh. If anyone wants to stay and ask different questions about different things, I'm here; maybe we can discuss more things.
G
Sure. I think, you know, Michael and I have been sort of joining in and, to be honest, being mostly quiet observers on the metrics SIG. That's not good; we'll be more vocal, yeah.
G
I figured that was going to be your first answer. But the second thing I was going to say is: I think we missed that this is a separate SIG. Is the intention to keep this conversation sort of separate? Or was it just another additional meeting we should keep an eye out for?
A
It was just an additional meeting, and so probably I'll make this regular — I will keep this regular, every Tuesday, until we finish with this OTLP thing. I really think we need this stupid protocol working; making some decisions will unblock a lot of things. So let's focus on this. As I said, there are a couple of issues — like that one with deltas, where I put some experience from our side on why we think it could not be delta. You can say: no, we don't have this problem; or: yes, we agree with this.
A
Then there is the other part, where again it's more about the proto definition of the OTLP stuff, where me and Josh have these two different approaches. Please try to make sense of what we propose. Again, mine is not as complete as Josh's, but for me it's more about showing people that, hey, if we look at the data in this way, it's probably much clearer.
G
We'll take a look and provide some feedback, and we should be on the next call as well. It's not a you problem but an us problem: we're on, like, the last-week stretch before our annual user conference, so, as you can imagine, we're splitting our time between that and this. But we'll get back to you.
B
Yeah, still trying to wrap my head around the differences, but basically leaning towards your solution, and wanting to get it simplified, I think.
B
Simplifying it would help with reviews. Even having it in a state where, yeah, we can add stuff on later — like Josh was talking about — getting just enough of what we need in there.
A
Yeah, and again, that PR — if we decide, I would like us not to review the entire PR, because I added things like the raw measurements, which I should not merge right now, because there is a TODO there at the actual values, like adding support for this and stuff. It was more to show others: hey, we have some of these things; we can add new raw measurements, we can add new fields inside these — it's all the flexibility we have, and stuff.
A
So maybe I will take it as an action item tonight to polish that PR — or create a new PR: clone this PR and create one that is in a submittable, mergeable state, so that people can see what I'm proposing immediately, versus where we can go if we go with this approach.
E
Okay, okay, but I see — because I'm doing this for my internship project, which means that I can't get this working until you guys have figured this out. Can you...
A
Sorry — can you ping me on Gitter and explain to me a bit what your project is? I'll try to make sure that I'm not blocking you, so that you get something out of this internship and we don't throw it away completely or anything; we just polish it after that. I know how it is to be an intern, and I'll try as much as possible to help, so that you have something useful to show — it'll be great.
E
That would be great, thanks. Thank you so much.
A
Okay, I think we are done here. Thanks, everyone. As I said, I will probably make this meeting recurrent for the next couple of weeks — two or three weeks; I cannot promise longer, but at least the next two or three weeks. Let's keep in touch more often and have these useful discussions and presentations, or whatever you want to call them, that explain our thoughts and stuff. Thanks.