From YouTube: 2020-06-18 meeting
D
This is now leading over into the next one. We have spec issue 651. Briefly, it says that we have been asking for standard, conventional metric names. We have at least two sets of proposals that need to be integrated and formalized, and this has been assigned to Aaron. Are you on the call? I can see.
B
Sure, yeah. I think I'm about ready to open at least an introductory pull request for that issue, for standard metric names. I wasn't sure how in-depth we should go in this one into runtime metrics, specifically for different environments, and in general I was wondering whether or not we need that kind of stuff in this spec. It's not exactly cross-cutting, like JVM stuff, and then, yeah. My comment was.
B
Yeah, exactly. I'm just worried about discoverability when trying to name stuff. For instance, I came across load average for Linux CPU metrics, and I was wondering: if that's not under, for instance, system.cpu, it could be a little bit confusing for somebody just searching. And then for something like UNIX, where you have a bunch of potential...
B
...different implementations, the OTEP right here from Ted is pretty clear that they should be clearly unambiguous. So for something that might have small differences between different Unixes, like load average, it might be confusing to actually find which one you want.
F
If you wanted one dashboard that had your fleet of load averages, and you span different UNIX or Linux implementations, it would be pretty annoying to have to build a separate query for each of those. I mean, I don't know, it's solvable, but I can certainly see the discoverability, or dashboard-ability, being a little bit annoying.
B
I mean, yeah, that's all I'm trying to bring up, and I'm actually trying to write down the specific ones we're going to have. So, for instance, I think in the collector there's already some concept of load average for Windows, even though it's not something available in Windows, so obviously that's not going to be good to have as a general system.cpu.load_average sort of thing. It was just something I was coming across while I was trying to come up with names, so I think.
D
What occurs to me is that there's this proposal, which is pretty new, in the collector: should we think about metrics transformation? Ideally, a user who finds this confusing might rename their metrics in their ingestion pipeline to help with this problem. That would be an option; I'm just thinking out loud. My question, sort of to the group, is what we would need to do to move this forward, if we have the approvers sorted out.
D
This other issue on naming, I think, has created quite a lot more controversy and discussion. Unfortunately, I'm not sure how we can resolve it exactly. It seems like there are different ways you could approach this semantic convention, and the impact of this choice will be felt in the systems downstream. So you can see this comment started off with your feedback.
G
Sure. So basically I feel like we should get this resolved. At New Relic we refer to these as the golden metrics, and I feel like we should get them resolved sooner rather than later. So really my goal in creating the issue and this PR was to start this conversation moving forward about how we will represent those.
D
Me too. So, to get back to the main point of contention: to start with, Yuri is saying one way to do this is to have a sum of successes and a sum of errors and combine them when you need to; another way is to have a sum of all requests and a sum of errors and divide them, without doing a summation. Both of those are valid, both can work, and it depends on the query processor you're using downstream which one is preferable.
D
You know, we ought to be able to use just one instrument, and then maybe it's two instruments, depending on the outcome of this discussion. So I'm inviting people who have more familiarity with the metric systems out there to say what they prefer. I consulted some internal people here with a Google background who had an opinion on this, but that doesn't necessarily make it a good answer for the world at large.
D
And that is sort of a philosophical question. A lot of people are pointing at the Google SRE book, and then my colleague John Beck admitted to me that he actually was a co-author, or a reviewer, of the chapter where they talked about this. So his opinion is conflated with that book, and those aren't two independent points of information, right.
D
I confess that, logically speaking, and thinking mostly about the API myself, I was thinking it would be good to just tag successes and failures and have one instrument. But having read this feedback, I'm inclined to support the other side: the idea that we want to keep two metrics, one for successes and one for all requests. Or... no, wait, I may be getting confused. Now, the two choices are: one metric for successes and one metric for failures, with different metric names, or a single metric with labels on each.
F
You know, we could really deal with it either way. To what Justin said, we can certainly deal with it as being one value reported and one set of metrics with labels. It's probably easier to deal with it that way for us, but we could really slice and dice it however. I think it's probably not super important for us which way we choose, as long as we settle on one. Okay.
D
So, of the options, I'm seeing three: one is one instrument labeled with success or failure; two is two instruments, one for success, one for failure; and the third is two instruments, one for all requests, one for failures. What I'm hearing is that the third option is the least preferred, but that was the original proposal, Justin, so I want to check that you are actually okay with either of the first two. Yeah.
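For illustration only, a plain-Python sketch of the three options under discussion (these are not OpenTelemetry API calls, and the series names are invented). All three can answer "what fraction of requests failed", but options one and two require summing across series, while option three is a plain division:

```python
from collections import Counter

# Option 1: one instrument, labeled with the outcome.
by_status = Counter()          # key: "success" | "failure"

# Option 2: two instruments, one per outcome.
successes, failures = 0, 0

# Option 3: two instruments, all requests plus failures.
requests, errors = 0, 0

for ok in [True, True, False, True]:
    by_status["success" if ok else "failure"] += 1
    if ok:
        successes += 1
    else:
        failures += 1
    requests += 1
    errors += 0 if ok else 1

# Error ratio under each option: options 1 and 2 need a summation
# across series; option 3 is a plain division.
ratio1 = by_status["failure"] / sum(by_status.values())
ratio2 = failures / (successes + failures)
ratio3 = errors / requests
assert ratio1 == ratio2 == ratio3 == 0.25
```

The downstream query engine determines which of these is convenient, which is the point made earlier in the discussion.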
D
I haven't read it myself; I've taken second-hand information here. I understood it as: you are discouraged from ever doing a roll-up that includes both successes and failures. And we have this in the document that we just looked at; number 108 said it's copied from Prometheus: all aggregations over any labels should be meaningful, and this would be a case where it's not meaningful. According to the SRE book, it's not meaningful by definition. It's meaningful in some sense, but it's not what Google is trying to define.
D
Okay, I linked here, and I don't want to talk about it now, but I linked an older issue in the spec about what our RPC framework should automatically generate in terms of metrics, and this, I think, would fall under the same convention if we choose that. But there's also a discussion there about how we might want to generate metrics from spans programmatically, in which case I think we would still follow these same conventions, but there's a little bit of different logic going on.
D
What's sort of slowing us down, or holding us up at this point, are questions surrounding OTLP, so I wanted to talk about those next; I may have put these in the wrong order. I think the trickiest outstanding questions are these ones in the protocol repo: there's 150 and 158.
D
So I think we should talk a little bit about structure and temporality. This is Tyler's PR. I want to apologize to everybody here, because I think we verbally discussed this and I had put out a fairly strong position, which I am now retracting. I thought about this a little bit.
D
You can't see it in this view, but you have the data point called Int64DataPoint. So you have an instrument; it produces a single number as its aggregation. In OTLP, you don't actually know what type of aggregation it is, given just the protocol. You know its type: you know it's an integer. You may know if it's monotonic, but let's say you don't know it's monotonic; it's just an integer. It could be a value.
D
It could be a sum. So if there were only two choices there, knowing the grouping-versus-adding structure would tell you which one it is, but I don't necessarily believe that's enough of a reason to adopt this new concept called adding or grouping. At the same time, I uncovered a new reason to want to have the temporality information that we talked about. I read through this again yesterday; these are the two points I mentioned, this one here.
D
This is a related point. Knowing adding or grouping tells you something about how you can display something, and this notion of temporality tells you something about delta versus cumulative. You're going to find yourself at a point in the collector where you need to know whether to start keeping state or not, and if it's an implicit property of timestamps, you're going to have to wait for a second data point to know the answer, and that will be a tricky, irritating situation. All this is confusing, Tyler.
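A minimal sketch of the problem described here, with invented names: if temporality is only implicit in the timestamps, a consumer cannot classify a stream until it has seen a second point, because cumulative streams reuse the same start time while delta streams advance it.

```python
def classify(points):
    """Guess DELTA vs CUMULATIVE from (start_time, end_time) pairs.

    Cumulative streams keep reusing the same start time; delta streams
    advance it each interval. With one point we cannot tell.
    """
    if len(points) < 2:
        return "UNKNOWN"  # must buffer and wait for the next point
    (start0, _), (start1, _) = points[0], points[1]
    return "CUMULATIVE" if start0 == start1 else "DELTA"

assert classify([(0, 10)]) == "UNKNOWN"
assert classify([(0, 10), (0, 20)]) == "CUMULATIVE"
assert classify([(0, 10), (10, 20)]) == "DELTA"
```

An explicit temporality field avoids this buffering entirely, which is the argument being made.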
D
Basically, this now stalls. I wrote myself, like, oops, sorry. I realize I'm starting to feel like all these issues are coming together: all the things that are troublesome right now about OTLP are starting to become connected into one issue for me. So I feel like progress has been made, even if it's just realizing that we're still confused about this.
D
I think I've been leaning towards wanting to know temporality, which is to say delta versus cumulative, but I'm still confused as to why there was an instantaneous value in that setting before. I believe instantaneous corresponds with grouping, so we would only need the two values, cumulative and delta. Delta versus cumulative can be inferred from timestamps, but you potentially need more data to do that, and then grouping and adding might tell you something about what type of data you're going to receive.
D
Although we already have a value type, so another way to look at this is: you have a value type for histogram and you have a value type for summary. What do you know about the values if you see a histogram or a summary? Actually, I don't think you necessarily know enough to not need this grouping-versus-adding information. So if I get a histogram that's been applied to, say, an up-down counter, that might be a nonsense configuration.
D
But then the value of your histogram is something where the sum is still meaningful, whereas if you apply a histogram to one of these grouping instruments, you get a histogram of the values that you saw, but the sum is maybe not a meaningful value. So knowing whether your histogram was over a grouping or an adding instrument is actually something useful. But then again, we don't actually get as far as I want.
D
So if you knew it was a monotonic histogram, then you should be able to show it as a rate. But if I don't know it's monotonic, I'm still not actually getting the value that I was hoping for. Now, if you look at the protocol, we have int data points, we have floating-point data points, we have histogram data points, and we have summary data points. It would be nice if, just knowing the type of data, you could infer whether it was adding or grouping, but I believe I've just given an example showing that you can't do that.
D
We don't know that, and I feel like there's still more uncertainty when we talk about views. One day we're going to talk about views: if you can change the aggregation on an instrument, how do you know what category it is? What I feel like the solution to this problem is, is to start introducing new value types. I did propose this in an OTEP yesterday; I feel like this was actually a proposal that was made in the past by Tyler and John.
D
When it first came to implementing OTLP, the question was: we have this spec with min-max-sum-count as the default aggregator, and I don't know how to represent min-max-sum-count in OTLP. If you look at OTLP, there is a way to stuff min-max-sum-count into a summary value, but summary values are really problematic: they're basically deprecated in Prometheus, and they're not mergeable. So if you start using a summary value, it's only going to lead to trouble.
D
So how do I represent a min-max-sum-count, or a last value? How do I represent those values in OTLP right now? It's really unclear. Potentially I could represent those as multiple integer or floating-point data points, but then I need to know which aggregation I'm using, and that's not part of the protocol today. Adding this new value type, which is the min-max-sum-count, actually solves the problem, because when I see a min-max-sum-count and know which value types it has, I know which aggregations it contains.
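For illustration only (plain Python, not the actual protocol types), a min-max-sum-count value is attractive precisely because two of them merge losslessly, unlike the quantile summaries mentioned above:

```python
from dataclasses import dataclass


@dataclass
class MinMaxSumCount:
    min: float
    max: float
    sum: float
    count: int

    @classmethod
    def of(cls, values):
        return cls(min(values), max(values), sum(values), len(values))

    def merge(self, other):
        # Exact for every field; a quantile summary has no such merge.
        return MinMaxSumCount(
            min(self.min, other.min),
            max(self.max, other.max),
            self.sum + other.sum,
            self.count + other.count,
        )


a = MinMaxSumCount.of([1, 5, 2])
b = MinMaxSumCount.of([4, 0])
m = a.merge(b)
assert (m.min, m.max, m.sum, m.count) == (0, 5, 12, 5)
```

Mergeability is what makes this shape safe for roll-ups across streams, where precomputed quantiles are not.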
A
That's interesting. As I said at the start of the meeting, I think this is a decent proposal to address some of the issues. One of the other things that Bogdan had raised as a counterpoint to needing to include it: he didn't call it the grouping and adding distinction, but I think it's something that is just kind of floating.
A
The idea is that, on a back-end system during roll-up, if you have a bunch of data that came in over an hour, you may want to roll that data up into some sort of unified long-term storage format, and the way you would roll it up is going to be different based on whether that data was additive or grouping, mainly because then you get, like, a statistical summary, or even just a sum of the whole thing. I guess that's the distinction he wanted to know from the back end.
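A toy sketch of that back-end distinction, with invented names: additive points can simply be summed into the long-term bucket, while grouping points need a statistical summary instead, because their sum is not meaningful.

```python
import statistics


def roll_up(points, additive):
    """Roll an hour of points into one long-term record."""
    if additive:
        return {"sum": sum(points)}  # summing is meaningful
    return {                         # summing is not; keep statistics
        "last": points[-1],
        "mean": statistics.mean(points),
        "min": min(points),
        "max": max(points),
    }


assert roll_up([3, 4, 5], additive=True) == {"sum": 12}
assert roll_up([3, 4, 5], additive=False)["mean"] == 4
```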
A
I think that knowing what you're describing here is probably useful, because then, if you have a data type that's a min-max-last-sum-count, you would just know that you shouldn't be adding these values. Well, I guess you could add the sums, but the idea is that there's a distinct understanding of what you should be doing with that data, and maybe with the other ones it's not as clear. If you have, like, integer data, it's not obvious what you should be doing.
D
In OTLP, do I put it as one field of the min-max-last-sum-count structure? That's possible, although if you read this proposal, it's going to create some confusion, because I tried to optimize the representation of the data so that if you only have one value, it's represented as one value, and that is what I just said will create ambiguity. So there's that.
A
I think the idea there was that that was where the instantaneous temporality was coming into play, whereas if you had some sort of, essentially, a slice, you're only truly representing a single measurement, and that should be an instantaneous value. It's not actually being measured over the entire time interval; it was actually measured at one point in time. That was it, yeah.
F
I wanted to comment a little bit on what Tyler just said. I introduced that instantaneous idea, and it is a way to model the inputs and have that reflected in the output. The idea here is that OpenTelemetry isn't doing any aggregation on this, really; it's just taking a point-in-time measurement and reporting it, passing it through directly without really doing any aggregations at all. Now, what's the difference between that and something that was aggregated via last value? I don't know, I mean.
D
Yeah, I think we can all agree on what the properties of a last value are. Is it grouping? Is it continuous? But the fact is, I don't know the aggregation in OTLP, and this suggests that instead of having int64 data points and double data points, you should have a last-value data point and you should have a sum data point, and then the sum data point needs to have an int64 field and a double field, and likewise the last-value data point.
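A sketch of that suggestion using invented Python stand-ins (not the real protobuf messages): split the numeric data point by aggregation rather than by value width, so the type itself carries the aggregation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SumDataPoint:
    # One of the two fields is set; the type itself says "this is a sum".
    int_value: Optional[int] = None
    double_value: Optional[float] = None


@dataclass
class LastValueDataPoint:
    int_value: Optional[int] = None
    double_value: Optional[float] = None


def is_additive(point):
    # The receiver no longer has to guess the aggregation from the value.
    return isinstance(point, SumDataPoint)


assert is_additive(SumDataPoint(int_value=7))
assert not is_additive(LastValueDataPoint(double_value=0.5))
```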
D
There's this related discussion. Connor, are you on the call here? Yes? Maybe, yeah. So this has been discussed for quite a while, and I think there's general agreement that we should merge it and start talking about the protocol, where we'll run into all those other issues that we just discussed.
H
My maybe slightly biased opinion is that I think this is ready to be merged. Yeah, you did bring up good points about the protocol, but I think that can be discussed then. I have started working on it a bit, but I think I'm going to have to do some discussion with you guys first, before I come to any real conclusions, because, yeah, there are questions.
D
That sounds good to me. I worry that there's lots going on and we're not paying enough attention to your work. I feel this is actually one of the most interesting things happening right now, and I am pretty keen on seeing an optional field in the metric that's just: these are sample exemplars, and you can do a lot with those. That'll be in parallel to all the other data that we've got. I really like this proposal and I don't want it to look like we're not paying attention. Okay, well noted.
D
All right, we've gotten into the weeds and now we're back out. I should mention that there's this other topic that's holding up OTLP. Tigran just wrote an essay. I have not read his essay; we should read it and comment, but I haven't yet. I did approve this PR anyway, but it's been stuck for a long time, so I guess he wasn't convinced. We should help this happen, because this will change OTLP and be a breaking change. Yeah.
F
I just want to comment a little bit. I don't know, I don't think Tigran is on the call. I've been starting to get very nervous about the resources specification; it's kind of conflating several things, and I think Tigran kind of points that out, obliquely at least, in his essay, saying the same thing. These concerns have really started to bug me about what we're mixing together.
F
We're mixing several different things in the resource, and the instrumentation library info is also quasi-related. I've been trying to formulate a good way to put the issue, and I'm going to bring it up on Tuesday to the general specs group, but I'm still trying to figure out exactly what the issue is. I know.
D
Yeah, let us know what's going to happen, if you have a preview for us; I think I'm at the Tuesday call this week. The thing that we call a resource: some people want it to be the process, and that's a slippery slope in my opinion, and some people want it to be anything that's static. And then the question of whether I can have those be variable for each tracer or each meter is the next logical question.
F
Yeah, yeah. New Relic has an idea, and this is why it's starting to bug me, this idea of an entity. We do our best to synthesize an entity based on the telemetry that's being sent, and that entity basically is: is it a host, or is this a service running on a host, or is this some other thing? The ability to synthesize that entity from telemetry is an interesting problem.
F
We really don't want to be stapling all of that onto the metrics when we send them over to backends, because it's going to cause cardinality issues. That's where I feel like the core problem is, and I haven't really been able to pin down how to state that as an issue and figure out a road to resolution. But that's where my brain is kind of starting to get bothered by this, yeah.
D
It occurs to me that, just like with exemplars, we can probably imagine a configuration that says: it doesn't matter what's in your resource; here are the resource labels that I care about, and these are the ones that I consider to be an entity. I will use those in the metric, and the rest of them will be suppressed; they could still be there in an exemplar.
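A hedged sketch of that configuration idea (all names invented): keep only the configured "entity" resource labels on the metric itself, and let the rest ride along as exemplar-only metadata, which avoids the cardinality problem described above.

```python
def split_resource(resource, entity_keys):
    """Partition resource labels into entity labels and extra metadata."""
    entity = {k: v for k, v in resource.items() if k in entity_keys}
    metadata = {k: v for k, v in resource.items() if k not in entity_keys}
    return entity, metadata


resource = {"host.name": "web-1", "process.pid": "4242", "os.build": "xyz"}
entity, metadata = split_resource(resource, entity_keys={"host.name"})
assert entity == {"host.name": "web-1"}   # stays on the metric
assert "process.pid" in metadata          # suppressed, exemplar-only
```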
D
One is sort of just metadata, and one is entity information, and you can have metadata sitting around in your resources without it becoming part of the entity somehow; but that's a configuration question. I think that's kind of what Tigran is getting at. Yep. I like the idea of a resource API, and I put together a prototype of one; if anyone wants to see that, I can find it.
D
It has a histogram interface that gets turned into a summary, so I'm hoping people will look through this and agree to this set of defaults. This does depend on the one I showed earlier with min-max-last-sum-count, for the key reason that we need to add a last value to get a gauge in the default configuration when sending ValueRecorder or ValueObserver data over OTLP. So that's at least up for review now, and I think it solves the majority of our problems.
D
Less interesting, I think, is this one; this is a Go SDK PR. There have been various issues all asking this question, so at least now I think we've got proof that the answer is here. The question was: how do I know whether I'm getting delta or cumulative? This changes some of the export interfaces to let the exporter decide, so there's an export kind.
D
I don't think we should review this change right now, but there's an export kind, and there's a component that we are calling the integrator right now, which is going to be renamed processor. The processor knows what you want and does the right thing: if you have a cumulative input and you want a delta output, it keeps state, and vice versa; if you have a delta input and you want a cumulative output, it keeps state. That's what this PR does.
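A minimal sketch of that processor behavior, with made-up names (this is not the actual Go SDK code): converting between temporalities in either direction requires keeping per-stream state.

```python
class TemporalityProcessor:
    """Convert cumulative->delta or delta->cumulative by keeping state."""

    def __init__(self, want_delta):
        self.want_delta = want_delta
        self.state = {}  # stream key -> running total / last cumulative value

    def process(self, key, value, is_delta):
        if is_delta and not self.want_delta:
            # Delta in, cumulative out: accumulate a running total.
            self.state[key] = self.state.get(key, 0) + value
            return self.state[key]
        if not is_delta and self.want_delta:
            # Cumulative in, delta out: subtract the previous total.
            prev = self.state.get(key, 0)
            self.state[key] = value
            return value - prev
        return value  # temporalities already match; pass through


p = TemporalityProcessor(want_delta=True)
assert p.process("m", 10, is_delta=False) == 10
assert p.process("m", 25, is_delta=False) == 15

q = TemporalityProcessor(want_delta=False)
assert q.process("m", 10, is_delta=True) == 10
assert q.process("m", 15, is_delta=True) == 25
```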
D
The other thing this PR does, which maybe wasn't asked for and maybe is not necessary, is that it supports multiple accumulators, so you can attach more than one SDK to this integrator and it will do the right thing. That has been discussed this week as being potentially something people don't want, since it raises complexity, and I said, hey, the reason why people want that is because of resources.
D
So we're back to that prior topic. I was going to say you can set up SDKs with different resources and feed them all through one aggregator, or sorry, one integrator, and that is supported now. But the specification might potentially say that's not a good idea, because it combines entities. So if we sort out the question of resources and entities, this may or may not have done more than is needed; we'll see.
D
I'm looking forward to kind of tailing that off. I am getting pulled back into work stuff, to do more systems work. So I am hoping to delegate more work and just be a kind of ordinary member here. I've been a hundred percent on this project for a good eight months, and now the work stuff wants me back.
D
The next topic; both of these are really interesting, and ones I don't quite know enough about. There's been a proposal submitted about transforming metrics in the collector. For the most part it's talking about renaming and filtering labels, but it's approaching a views API, I think. If I had asked Bogdan two weeks ago whether we would have a views API in the collector, he would have said no, no, no, no. But it looks like that's a proposal anyway.
D
I'm pretty sure that's what users are going to want, so take a look at this; I've only skimmed over it. It's already had a lot of debate, and maybe we'll end up discussing it again next time. And I see Nick Fisher is on the call, so I would like to ask Nick for an update. This is one that's quite interesting to me because, as you noticed, I was just writing a doc about how to handle statsd.
G
Yeah, so I don't have any updates beyond kind of the little skeleton app; I've been working on a lot of documentation, more internally, for some of our stuff. I saw you updated that doc. I haven't had a chance to read it, but I fully intend to, and I guess I can provide feedback. I'm also still just trying to absorb the majority of the nomenclature involved in the project.
D
I'm starting to think about how views will interact with OTLP, like the whole thing where we don't have an aggregation explicitly: I don't know what the aggregation was unless I can infer it from the type. I feel like that's going to create trouble when you start to think about views. But I don't know.
I
Right, yeah. Like, the original idea behind creating a prototype was that we wanted to get more traction on the OTEP conversation. If we're actually fully focused on views, it should be the other way around, right? It shouldn't be that I create something, and then we discover that this is a problem, and then we add it to the OTEP. So I'm just kind of hesitant about committing to something.
D
Yeah, the version that worked was kind of just, you know, you can attach any aggregator to any instrument and set of labels. That's the basic set of functionality here, and I also don't know exactly how to translate those outputs into OTLP. One of the realizations I've had recently is that there have been a lot of voices saying that the fields in OTLP are representing an aggregation.
A
I think you make a really good point, Josh, and I think you're right that we would just gain a lot from having the OpenTelemetry protobuf actually include all the OpenTelemetry information in the transport. I think the original idea that it wasn't there was probably inherited from the OpenMetrics project, which, you know, forget that I said it, because I don't think that applies too much here anymore, but it might be something to just readdress. I think that maybe somebody just brought...
A
...it up, and so maybe it's just something that we could try to readdress. I do think you're correct that you would need to still include the aggregation there, because, especially if you introduce views, there's no guarantee that the instrument was then brought through the processing pipeline of a particular aggregator. Yeah, that's a really good point; now that you say it, it makes a lot of sense to me why you would want to include the instrument. I mean, yeah.
D
One of the counterpoints might be one that was raised in one of those PRs in the protocol repo: we have this existing requirement to convert from OpenCensus, and OpenCensus is missing this information, so maybe that's an existence proof that you don't need this information. But I'm...
D
I think we've held ourselves back, but if I say OTLP has to be solved first, then you're totally right: OTLP has to support views, and OTLP has to solve all the current problems. So I think probably the way to solve this is to press forward, try to write a views API, and then try to translate the output into OTLP and see what happens.
D
That makes sense. I kind of want to believe, though, that the basic views problem, if you just think about one instrument and a set of labels and an aggregation, is just expressing all the configurations that we knew were possible given the API design. So I have a histogram aggregator attached to my whatever instrument: how do I express that? It's got some labels and it's got a datatype, and I think that's enough. So I don't believe...
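A toy sketch of that "basic views" shape, with invented names and metric names: a view is just an instrument selector, the label keys to keep, and an aggregation.

```python
# A view here is (instrument name, label keys to keep, aggregation name).
views = [
    ("http.server.duration", ("http.method",), "histogram"),
    ("queue.size", (), "last_value"),
]


def view_for(instrument):
    """Return (label keys, aggregation) for an instrument, or a default."""
    for name, keys, agg in views:
        if name == instrument:
            return keys, agg
    return None, "default"


keys, agg = view_for("http.server.duration")
assert agg == "histogram" and keys == ("http.method",)
assert view_for("unknown.metric") == (None, "default")
```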
D
...it needs more than basic stuff, so maybe it'll just work, and maybe the one case that doesn't work is exactly the same as the eleven-six issue I've been looking at, where I can't express a last value either. So I want to kind of lump all these things together. We can't keep having the same confusion every week, and I think we're making progress. So next week I expect to have a much firmer opinion for myself, maybe some proposal, and that's probably the best approach I can think of.
E
Josh, do we have issues filed for all the work that needs to be done? Like we talked about last week, we took an inventory of what needs to be done before GA. We met with the regular spec SIG earlier this week, and they're filing issues for everything that needs to be done, so we can then go to the language SIGs and have them file issues on themselves for all the work that's remaining for them, both for unimplemented spec features and for features that are in the spec that they haven't done yet.
D
At least, I think, it's like answering the same question for management, for me: what do you need to be doing, Josh, so that you can call it ready? Yeah, I've been telling them that I need to get the SDK stuff done; yes, that's kind of my assignment, and then the stuff with OTLP we all have to figure out. Okay, those to me are the big ones. Sweet.