From YouTube: 2020-08-06 Spec SIG
C: See, we have a full agenda today, and it's in the order that we will do it. I'd like to start off with a preface, so here we are.
C: I want to remind everyone that we have something that we ought to be shipping soonish, and I think we need to start realizing that some of these debates we're having are not going to get resolved. We can think about which things we have to prioritize for getting some users to try this out.
C: I'm getting a little bit of pressure on this from my company, because, you know, some of the discussions we're having are really about nice things to have in the future, and some of these discussions are about, like, getting it to work in October.
C: So I think we should all keep this in mind: we're going to have to cut something, because GA is soon, I think. Maybe the views proposal is not going to make it as a requirement for that point, but I want to figure out how we can separate the progress on views from the GA a little bit. That's my preference here. But we're close, we're really close here.
C: The OTLP questions that we're going to talk about in the middle of this meeting are kind of blockers for getting something to GA, but the views stuff may not be; I think that can be debated. I also think semantic conventions are things we can keep debating as we get something out, because all you actually need is: I have some counts and I need to report them; I have some gauges.
C: I need to report them. So we need to remember that. That said, the reason we're having these debates is that they're important, and the value of OpenTelemetry is that everyone has a voice. So we need to talk through these. With that, I want to hand this over to Graham to talk about HTTP metrics. Take it away.
A: Thank you, Josh. I'll try and keep this concise. I wanted to make sure that we're aligned on the spec, the HTTP metrics semantic-conventions spec, in terms of what it is and what it isn't.
A: I added a couple of sentences to the top of the spec to try to clarify this for readers. It pretty much says that the spec describes what the metrics should look like for an HTTP operation and avoids describing how those metrics are generated.
C: I've come around to agreeing with you on that. For example, there's an HTTP duration metric, or something like that, and you can talk about ways that the views API will let you generate counts from that, but that's totally interfering with the GA date, I think, and it's interfering with this metrics discussion. So you're right, Graham.
C: We should be able to talk about what we want, and not about how we're going to get them, because maybe in six months or a year we have a views API and you can reconfigure so that, instead of having four metric instruments in your plug-in, you have one metric instrument in your plug-in, and you'd still generate four metrics. But that's going to be down the road, and it won't change the semantics of these metrics.
C: I agree. I would like others to comment, though. I know that we also have Justin on the call, who's been involved in some semantics work for duration-like events, or span-like events, so I want to make sure that both of you are having a voice here. But I intend to stay out of the way, and I agree with you.
D: Okay, cool. Yeah, I'm totally in agreement. Graham and I are both from New Relic; we've talked about this.
D: Good to know, yeah. I also submitted an OTEP, kind of trying to describe what the point of these PRs is going to be.
D: I feel like maybe it was a misstep not to have submitted that OTEP before, but I would love for us to just agree that this is the sort of spec that we would like to have, and that Graham's PR is an example of one of these specs. I had an older PR with kind of generic semantic conventions for timed operations, and I am of the opinion that that one should be closed: this OTEP describes what it was trying to say, Graham's PR is more detailed, and then we'll have follow-on PRs about databases and gRPC, etc.
C: The thing that I've changed my opinion on in the last couple of months is this statement here, that we can use a single ValueRecorder to get lots of information out. That is causing us to slow down and causing debate, which I think is unnecessary, so Graham's point is well taken: semantically, it doesn't matter whether you use one instrument or four. The point is you get four metrics out, so we should not be specifying what the instruments are.
C: Well, should we have any specific conversations here? I think that, when this came up, we may...
C: ...have talked you out of having this http.server request count. And maybe that's where I'm sort of realizing I may have made an error. There were questions here about whether we're duplicating the semantic conventions for HTTP-request spans. I don't think that's the intention here, but I gather what we were trying to do is limit cardinality up front, essentially.
A: Yeah, you know, a lot of the support comes from spans, so it's possible to create this stuff from spans, but we explicitly avoided writing that into here, in case, you know, no span exists.
C: And so this can then be read as: if you were generating a metric from an HTTP span, you'd probably just copy these attributes out. If you are creating a metric from an HTTP handler, then you just generate these attributes and probably nothing else, because anything else is going to be high cardinality, and we know we don't want that for metrics, basically speaking. Yeah, so I think when a person who's familiar with the trace semantic conventions sees this, they might run into exactly the question I just raised.
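To make the handler path above concrete, here is a minimal Go sketch of recording one timed HTTP operation with only low-cardinality attributes. The record helper is a hypothetical stand-in for a ValueRecorder-style instrument call, not the actual OpenTelemetry API; the attribute names follow the trace semantic conventions.

```go
package main

import (
	"fmt"
	"time"
)

// record is a stand-in for a ValueRecorder-style instrument call
// (hypothetical; the real API lives in the OpenTelemetry SDK).
func record(name string, value float64, labels map[string]string) {
	fmt.Println(name, value, labels)
}

// handle times one HTTP operation and reports it with only the
// recommended low-cardinality attributes: no full URLs, no user IDs.
func handle(method, host string) {
	start := time.Now()
	status := 200 // ... serve the request ...
	record("http.server.duration",
		float64(time.Since(start).Milliseconds()),
		map[string]string{
			"http.method":      method,
			"http.host":        host,
			"http.status_code": fmt.Sprint(status),
		})
}

func main() { handle("GET", "example.com") }
```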
C: What are you doing, are you re-specifying something? I think there might be a sentence that could be added up at the top to prevent that confusion, just to say that these are the same conventions: these are just the recommended labels for metrics from HTTP events, restricted to those which we believe are low cardinality, I guess.
C: In the spec, like, is what I was thinking, just to avoid confusion over: are these the same conventions, are these different? In other words, we're not making new semantic conventions in this list; these are the same ones that already exist. That's true, yeah. Okay, you're nodding; I agree.
C: Well, I support this. Is there a recommendation, like, what do we need to do to satisfy this? Is this like we go to the OTel HTTP plugins and start writing metric instruments in there with these labels?
C: Or a span-to-metric event somewhere in the pipeline, like the collector: you have a collector plug-in that would take a span and turn it into a metric using these labels.
A: Yeah, something like that. That would be kind of the next step in terms of implementing this, yeah.
C: Later in this call, if we have time, I want to talk a little bit about what I call dimensionality reduction. You could imagine a configuration where there's an HTTP span being output; it has these attributes and more, and then you configure a label reduction to say: I only want these in my metrics, and it will cut out the ones that you decide not to keep. In other words, you'd have a list of all the methods that you want, and it would just collapse to those dimensions.
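A rough sketch of the label-reduction configuration described above, with hypothetical names (View, KeepLabels); the real views proposal may shape this differently:

```go
package main

import "fmt"

// View is a hypothetical label-reduction configuration: the instrument
// keeps emitting all of its attributes, and the view picks which
// dimensions survive into the exported metric.
type View struct {
	Instrument string
	KeepLabels []string // everything else collapses away
}

// Apply collapses a full attribute set down to the configured dimensions.
func (v View) Apply(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(v.KeepLabels))
	for _, k := range v.KeepLabels {
		if val, ok := attrs[k]; ok {
			out[k] = val
		}
	}
	return out
}

func main() {
	v := View{Instrument: "http.server.duration", KeepLabels: []string{"http.method"}}
	fmt.Println(v.Apply(map[string]string{
		"http.method": "GET",
		"http.url":    "/users/12345", // high cardinality: cut out
		"peer.ip":     "10.0.0.7",     // high cardinality: cut out
	}))
	// Output: map[http.method:GET]
}
```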
C: That's another example of: there are many ways we can do this, we just need some convention, and it doesn't matter how we're going to do it. So I should stop saying those things. Okay. Well, I will be happy to add an approval to this after the call. I think the request is that everybody else who's here, and especially if you're an approver, go give that one a look and a review yourself. Graeme and Justin, do you think there's anything else we should talk about here? Also, Justin, for the OTEP?
C: Otherwise, I think this is great. We should make sure that the people... I saw some comments on this PR by, let's see, who was it... Oberon00, who's worked a lot on the semantic conventions for HTTP. So that's his concern; that's why I recommended sort of a statement to address the concern that he gave. Although you have resolved the issue already, so maybe he's accepted it.
C: Get in there and try and help. I can ask him to address it, if not. Okay, I'd like to, you know, come back to this one. Okay, thank you, Justin and Graham. Now I think we should talk about the views proposal.
E: I need to get logged in here, darn it. Yeah, so I cut it down to five minutes because I don't think we have the right people here, but the basic idea: 126 is proposing a super basic idea, which is kind of a stepping stone to views, and 128 is Tristan's take on doing a combination of that plus a full views proposal. The question that is left on 126 is: do we keep 126?
E: Do we get that merged, build that spec, and then have 128 build on that? Or do we take 128, make it the official OTEP with both parts in it, get that merged, and then break the spec writing up into two steps? It's really more of just a procedural thing. I think everyone kind of agrees that what we've got here is something we need to build, and we have a stepping stone we can do simply.
E: I think, actually, we need to have something akin to it for GA, to at least allow exporters to be configured correctly, although I could be argued out of that. And I think full views could be beyond GA, in my opinion, but without Tristan here, I think it's hard to assert that.
C: I share that attitude. I feel that there's some minimum amount that we ought to have. Like, I was thinking label rejection: because we know that cardinality can be a problem, having a really coarse tool just to limit label cardinality would be very, very useful, and that needs to be configurable. We know about configuring cumulative versus delta for certain exporters. We know about configuring various options of the aggregator as well.
C: Aggregation, yeah. Those are like the core minimum, I think, and there are lots of nice-to-haves on top of that, like getting labels out of your context, or multiple aggregations. I've been thinking in the back of my head about multiple aggregations; there's some reason why that seems problematic to me, and I've come up with an answer, I think. I don't know if anyone cares to hear it, but I'm starting to worry about multiple aggregations.
E: Interesting question. Sounds like something we should... I mean, without Bogdan and Tristan here, we should probably defer that discussion, yeah.
C: There's some notion, like a framing, that I want to add to this conversation, which is to say that if I have two aggregations of one metric, I want to make sure that I don't add them together. I can use different sampling strategies to compute different samples from the same data.
C: I don't want them to look like two data streams. I want some way of recognizing that I'm outputting multiple versions of the same data, so that I don't count them multiple times in an aggregation. I don't want to re-aggregate and combine data that was already extrapolated from the same metric, and I don't know how we're going to do that. So I think that alone makes me question multiple aggregators.
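A tiny worked example of the double-counting concern, assuming a 50% probabilistic sample; the numbers are made up for illustration:

```go
package main

import "fmt"

func main() {
	// One metric, ten measurements of value 1, so the true total is 10.
	// Stream A: an exact sum over all measurements.
	exactSum := 10.0

	// Stream B: a 50% probabilistic sample of the same measurements,
	// scaled back up by the inverse sampling rate (an estimate of 10).
	sampled, rate := 5.0, 0.5
	estimate := sampled / rate

	// If a downstream aggregator treats A and B as independent data
	// streams and adds them, the same traffic is counted twice:
	fmt.Println(exactSum + estimate) // 20, not 10
}
```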
C: So that's just a warning that I have. But if Bogdan's not here, we're gonna have trouble talking through the next section. Oh...
G: I was discussing with the TC members how we can make specs move faster. I'm trying, I'm trying. Sorry, I just joined, okay.
C: Great. Sorry, sorry, okay! Well, what I was about to finish saying, talking to John here about some minimum amount of views configuration, is that I see that as the MVP, but it's just got to be minimal. I don't think we need multiple aggregations, I don't think we need distributed context, I don't think we need a lot of stuff; but configuring labels, aggregations, and disablement is, I think, what I'm looking at. That sounds right. And, at the beginning of this hour,
C: I prefaced this with: as always, we've got to march towards GA, and there are some things here that don't need to block us. I think some of the views proposal ought not to block us, and maybe you're right that some small subset of it is something we get to. To that end, I'm going to talk about dimensionality reduction below, because I think that's in it for me, and I realize that we haven't finished it. But first we should talk about this next section, blockers, and then the OTLP protocol.
H: Oh, Josh, I added this part, I think. Hey, Josh, I'm from AWS. We had a discussion a couple of weeks ago; I think it didn't get resolved. I'm just curious whether we can go GA with those two other open ones, 159 and 171. Are they still blockers before we go to GA, before the proposal is finalized?
G: Go ahead, Logan. This one is merged; 171 is already merged, it's resolved. On 159, I was providing a bunch of comments, and I was asking Josh to help me with these labels: how do exemplar labels combine with the data point labels, once we have them in the data point? Are we gonna apply some...
G: Are we gonna duplicate them in the exemplars, or are we just gonna do a merging of them? I don't know exactly what the right thing to do is, but we need some decision here, and anyway I'm trying to make progress on this. Personally, I think this is not a blocker for calling the protocol stable, because it adds on top of the current protocol; it does not change anything inside the protocol.
G: For the moment there are other blockers, in my opinion, but this one is not a blocker. It's something that we can add at any moment.
C: I would agree with that. This is not a blocker for us; this is in the category of things that I think we will one day want, or should have, in OTLP, but it's getting in the way of GA, and I realize that we've got to cut away the things that we can. That said, you know, Conor has been working on this and has a prototype that is looking good, and I was intending to talk about it below.
G: Josh, just try to comment there. I'm very interested in making that happen, but just give me some hints about what to do there.
C: Yeah, no, I will follow up on this one later today. Hopefully at the end of the hour we can talk about it, because I have a connection there to the item at the bottom about dimensionality reduction. But let's talk through the OTLP stuff, the PRs 191 through 195.
G: All of them, I think, are merged, and reviewed by a bunch of people, at least four or five, except the ones where I removed some to-dos and stuff, where I didn't wait for too many approvals, because I do not believe that's very important. Besides that, everything else was reviewed by at least four or five people, and they merged them.
G: There are two issues that I know of. If you look into the proto... wait, where should I look... so, in the proto issues, there is an issue about the naming of the enum entries, which I'm working on right now. Oh yeah, the one at the bottom of the page; no, no, the bottom of the page; that one, yes. That one is a blocker: I need to go through all the protos and do a small breaking change, just to follow some convention.
G: I will try to find it, yeah; I will try to find what exactly the convention is. There is the other one that I need to do, which is the performance thing, but that does not change the protocol, yeah.
G: There are two issues in the specs on which I don't get feedback, and I would like people to help me with that in the specification. There is the issue... I will give you the numbers immediately... so, metrics in the specification: the one that I'm interested in is changing the default of the OTLP exporter to export cumulative metrics by default. This does not affect OTLP, but I think it's one decision that we made, and we want to make sure everyone agrees on it: we support both; we support deltas and we support cumulative.
G: This issue is only about what the default behavior of our SDK is, which we can change. I'm not opposed to that, and we can discuss pros and cons, but from the OTLP perspective, does everyone agree that we support both? I know, Josh, you say yes, and I say yes, but I want to hear if others have other opinions.
C: Yes. And I'd like to ask Yang to speak, because he was mentioning an OTEP to drive this even more, so maybe we need an OTEP to say this. Are you on the call?
B: Yes. Yeah, so pretty much I had an OTEP draft that just formalizes and summarizes the discussion that has been going on in the SIG meetings these past weeks. It pretty much says that the SDK OTLP exporter should be configurable to support both the cumulative and delta export strategies, which means that OTLP metrics will support both, and that the default is cumulative. I haven't filed that yet. Yeah, so pretty much what you've said: both, then.
G: Okay, then there needs to be a follow-up in the specification. I think we say somewhere in the API that the default is delta, so there may need to be a change in the specs.
C: Yeah, I don't know where. In truth, one of the elephants in the room is that I keep saying I'm gonna work on the SDK spec, and I keep saying we're not ready; but I think we're ready, and I think that's where you put such a spec. I wanted us to have a pass-through, which means no memory in the client, but it's just not going to work until we have a much more mature downstream infrastructure, until we have configurability that can automatically decide based on downstream conditions.
C: Perfect. And then, back to John's statement: we do need some minimum amount of configurability. What I understand is that there are vendors, like New Relic, that would prefer to just see deltas for the time being, and I know a few other vendors that are in that boat. So if we can just say that the OTLP exporters allow configuration and default to cumulative, I think we're good enough. It's not the greatest; there's no perfect default here, but I think it's what we know.
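A sketch of what that exporter configuration could look like, assuming hypothetical option names (WithDelta); the actual OTLP exporter's knob may differ:

```go
package main

import "fmt"

// Temporality mirrors the two export strategies discussed above.
type Temporality int

const (
	Cumulative Temporality = iota // proposed default
	Delta                         // opt-in, e.g. for vendors that prefer deltas
)

// ExporterConfig is a hypothetical knob; the real option name in the
// OTLP exporter may differ.
type ExporterConfig struct {
	Temporality Temporality
}

func NewExporterConfig(opts ...func(*ExporterConfig)) ExporterConfig {
	cfg := ExporterConfig{Temporality: Cumulative} // default to cumulative
	for _, o := range opts {
		o(&cfg)
	}
	return cfg
}

func WithDelta() func(*ExporterConfig) {
	return func(c *ExporterConfig) { c.Temporality = Delta }
}

func main() {
	fmt.Println(NewExporterConfig())            // {0}: cumulative
	fmt.Println(NewExporterConfig(WithDelta())) // {1}: delta
}
```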
E: John, I'm okay with that. Oh, John, are you okay with that? Is Tyler on the call? I think he knows a lot more about how our exporter for the collector works.
I: Yeah, the idea is to have it support both coming through the collector, and then being able to parse it there, yeah. That sounds good to me. Okay, yeah.
C: One thing I know, for example, is that Lightstep is packaging OTel libraries with what we're calling launchers, which are basically just a thin wrapper around OpenTelemetry with the settings that we know we need. This is one vendor-specific setting where, if the user is not running any Lightstep code at all, they might get it wrong, and we can tell them what to do.
C: But if you just use our code, it'll work right. And maybe that's something we can do as vendors to help users in this situation, but it should be a simple configuration.
G: Good. The next issue affecting OTLP that I would like to make a decision on is 725, the one that I filed. Again, the decision may be: we just support deltas for up-down counters, that's it. I tried to summarize my thoughts about why we should not support this. Again, the decision may be that for this specific case we do support deltas, and we'll figure out later what to do.
C: Yeah. So my read is that the problem is: if you're losing your updates, your sums are going to be incorrect, and part of me thinks that that's the nature of the beast.
C: If you're dealing with deltas, you shouldn't be counting totals, and maybe then you're just looking at negative rates and such. But I also feel that, well, I was going to mention in the last brief conversation that there's this notion I had of a pass-through, so that, you know, SumObservers and UpDownSumObservers would come out as cumulative, and an UpDownCounter would come out as deltas, because that's the zero-memory configuration. But if we adopt the default that we just discussed, and Yang is going to make a change so that we have OTLP default to cumulative, then this problem is addressed by default.
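For reference, a sketch of the pass-through (zero-memory) idea: each instrument exports whatever temporality it can produce without the SDK keeping state between collections. The mapping below illustrates the idea and is not the specified SDK behavior:

```go
package main

import "fmt"

type Kind int

const (
	Counter Kind = iota
	UpDownCounter
	SumObserver
	UpDownSumObserver
)

// passThrough returns the temporality each instrument would export in
// the zero-memory ("pass-through") configuration described above:
// observers report cumulative totals natively, while synchronous
// counters only know what happened since the last export (a sketch,
// not the SDK's specified rule).
func passThrough(k Kind) string {
	switch k {
	case SumObserver, UpDownSumObserver:
		return "cumulative" // callback reports the current total
	default:
		return "delta" // only the change since the last collection
	}
}

func main() {
	for k, name := range map[Kind]string{
		Counter: "Counter", UpDownCounter: "UpDownCounter",
		SumObserver: "SumObserver", UpDownSumObserver: "UpDownSumObserver",
	} {
		fmt.Println(name, "->", passThrough(k))
	}
}
```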
G: This is not a problem then, correct, that's true. But I would like us to think a bit more about the future and say: right now, yes, we choose cumulative, and we believe in this and all of these things; but in the future, do we believe that we will have infrastructure to support this, or do we believe that we should never support this in our infrastructure and we should just drop...
C: ...support for it. Well, I don't know that my opinion is strong enough; I feel like this debate is not worth having, I guess. It's one of these cases where you get into the question of: is something meaningful, and is it useful? I think it's meaningful, and I think what you're saying is that it's not useful, and I don't know that being useless but meaningful is enough of a reason to take something out. But I think there's a nice amount of symmetry that we have here.
C: If I just say you can be delta or cumulative, period, and okay... I mean, the fact that...
I: I'll turn off my video. I think that's what Bogdan is also recommending. Yeah, Richie also pointed out in the comments that what's important is the internal structure, how that's actually stored internally, and I think there's a really good point to be made about the correctness of the data as it's coming through. But there's also, I think, the perspective of the users.
I: If a user really wants to do something that is not going to work, I think there are ways that we can help them fix that, but I don't know if it's something we need to focus a lot of time on before the GA. I think it's a good issue that we could work on after the GA, but yeah, I don't know.
I: Yeah, that's a good point; I think that's a really good point. It needs to get resolved sooner rather than later. That is, I think, the only follow-up to that.
G: I don't see a difference. The only difference is that, most likely, for most of the monotonic counters, people are only interested in rates, and because of that they are not necessarily interested in how much network I consumed since a week ago when I wrote the service; they are most likely interested in how much network I consume per second.
G: If it's not monotonic, you can only use the start time, which I know Prometheus does not use right now. But still, the problem is if you lose a delta, if you lose a point for a counter: besides the fact that you are able to recognize resets and such, you most likely don't lose too much information, because you're never going to look at the current value. So if you drop a delta between reports...
J: If you drop a delta, you lose that part of your rate, because that traffic never happened for you. If you have a cumulative counter, you might see a spike in your data, which is not correct, because obviously you lost a sample over time, but you're not losing the complete rate information. So calculations based on that rate, from which you deduce the total amount of transmission, will work if you do cumulative with monotonic; with non-monotonic, it will not.
J: Memory, you can assume, yes, but you don't know. And if you start guessing within your monitoring system, you'd better make sure you really surface this to the user, because this violates a lot of principles: if you start guessing and interpolating automatically, you're basically breaking user trust, in my opinion, because the user relies on that data being exact or not there. Just guessing without exposing this effect to the user seems counter-intuitive.
I: So I want to point out, I don't want to go too deep into this; we've already had this conversation two weeks ago. But to this point: that's a really good point. I think that if you have a contract with your user and you are making strong guarantees around correctness, that makes a lot of sense, but that may not be, I think, the universal case.
I: There may be systems that don't have a strong contract of correctness with their users, and being able to support that use case is also, I think, something that would be useful to consider.
I: Yeah, the point of correctness is definitely something that is useful, the monotonic element of the counter. But the goal, I think, in the whole project is also: is there a use case that we're going to lose by not including this?
I: If we do remove this from the instruments, if there's not an up-down counter, are we compatible?
C: That's the reason why I want this option: if I have to keep sums for my up-down counters, I have to keep memory. I find the arguments that I'm hearing to be valid for some systems, but not universal, like Tyler said. But I don't want to slow us down, so, honestly, this is something we can add later.
C: Well, I added this PR in the Go SDK that would compute cumulatives from deltas. So the Yang proposal to set the default to be cumulative will get the right behavior by default, and the question is whether we ever want to support delta for up-down counters. I still don't really agree with the arguments for why an up-down counter is different enough from a counter, but I would rather set this debate aside than keep debating it.
C: So I'm willing to go with the recommendation that Bogdan and Richie are making.
G: Yeah, sounds good. Then, okay, if somebody can document this in the issue while we are going to the next one: the last one for me that is blocking is 617, if you can present that one, 617.
G: If people would like to see a PR right now to add support for this in OTLP, I can do it; but I would like us to make a decision on whether this is important enough for GA to have a PR right now, and then I'll do it.
C: For me, there was a potential overlap with exemplars. And, as I said at the beginning, I would like to see us finish something; so if leaving out raw values is necessary to do that now... And maybe the questions that were raised in this issue, the ones you asked me to get feedback on, are part of the problem; we haven't made any progress. Exemplars and raw values are very close to each other, and yet, if this is slowing us down, let's forget about it.
I: Yeah, I probably don't have the context, because I saw Conor's PR, and it looked pretty much, I don't know, done to me. So, yeah, maybe I'm just missing something.
C: Yeah, isn't this close? So the assertion is that this raw value can be used either as an exemplar or as a raw value, and the way I think about that is that there's this sample count field: if you omit the sample count, the implication is that this is a raw value, because its value is one; every raw exemplar represents one, it's raw data. If you put a value that is... well, actually, sorry, that doesn't quite work; but the point is, if this value is greater than one, it's a probabilistic sample.
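One possible reading of the sample-count rule being sketched here (which, as the speaker notes, doesn't quite work as stated), expressed as a hypothetical struct rather than the actual proto:

```go
package main

import "fmt"

// RawValue mirrors the field being discussed: a recorded value plus an
// optional sample count (hypothetical struct, not the actual proto).
type RawValue struct {
	Value       float64
	SampleCount float64 // 0 means "unset"
}

// describe applies the rule sketched above: an unset (or 1) sample
// count means an exact raw measurement; greater than 1 means the point
// is a probabilistic sample standing in for SampleCount measurements.
func describe(r RawValue) string {
	if r.SampleCount <= 1 {
		return "raw data point (exact)"
	}
	return fmt.Sprintf("probabilistic sample (represents ~%.0f points)", r.SampleCount)
}

func main() {
	fmt.Println(describe(RawValue{Value: 3.2}))                  // raw
	fmt.Println(describe(RawValue{Value: 3.2, SampleCount: 16})) // sampled
}
```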
G: More than that, there are more implications for me. If we support raw measurements directly from the synchronous instruments, which is the most focused case, I would put this into the list where there is a repeated field for every data point; I'd put a repeated field for this, because you don't want to send them inside a sum or a histogram, you want to send just the raw values.
C: Here, right here it is. So I would have put another one of these, which was a raw data point. If you recall, at some point I tried to combine the two, integer and double, into a scalar; part of my motivation was that I wanted to add another one for raw data and exemplars, and that would keep us break-even. But never mind.
G: No, no, no, let's not do optimizations right now. Next week we can do these kinds of tricky optimizations, combining messages, measuring memory and stuff; but let's finish the overall structure first and then do the optimizations. What I'm trying to say is: should we have a repeated field here as well?
C: Yeah, that could be a value type or something like that. Additionally, yes. And if there's time now: I did list the Python prototype, as well as my sort of beginnings of some related work in Go, which we can get to. There is definitely support for the raw values.
C: It's not clear how strong it is, but I see it there. And you could argue that, you know, as OpenTelemetry we've done our job: we've created an API/SDK separation, and a different SDK could be implemented to do raw values; but it wouldn't use OTLP, and it might not be what the user wants.
G: Quick thing, by the way, to put a bit of context on why we need this: we need this for some languages like PHP, where there is no such thing as a state, or a global state, or somewhere you can do aggregations. So in PHP...
C: Well, I support this. The one thing that I got hung up on earlier, which I said at the very beginning before you dialed in, is that we have to avoid recombining metrics that were computed from the same source. So if I compute a histogram and I compute exemplars, I want to make sure that they don't end up looking like duplicates of each other. In other words, if I'm doing probabilistic sampling and I compute my exemplars, I can estimate approximate data from the exemplars.
C: If I do that, and I combine it with a histogram which was an exact summary of the same data, now I've got duplicate data. We have to watch out for that problem; that was a concern for me for multiple views, when I talked about view configuration, and it's my strongest concern right now. But I think we just need to state clearly that raw values may be raw data, and they may be exemplars, and if they are exemplars, don't combine them as real data. Essentially, we need to...
G: We need to have a way to signal, in every type... I think we need to have a way to signal that, hey... it depends on how we model this in the protocol, but we have to clearly see that this type comes with exemplars, or something like that, and make sure we understand exactly what you pointed out. But yes.
C: My attitude had been that if you just include exemplars, go ahead and do it; if you actually are giving raw data, change the value type to indicate that, and that will say whether the data is the raw data or whether it's just helpful extra data. That's what I would do, but it's not the only way to do it. Then, if we may move on...
G: Please just comment this idea on that exemplars PR, just to have it there in the meantime. In the meantime, I know what I have to do next on the protocol for 725 and 731, and I will send PRs very soon. Anyone else, if you want to do PRs, just assign the issues to yourself.
C: I'm glad you're doing this. I will work on an SDK spec, so I'm not going to do these PRs; I wanted to move us forward. This is sort of on the same topic: I've looked over Conor's PR (remember, he's an intern who will not be with us for much longer), but this is it, and it does implement both the trace exemplars as well as the statistical exemplars that I was interested in.
C: So there is now a way to think about how statistical exemplars can be the data, as opposed to just auxiliary information, and this gets down to this question about extra labels. I said I would answer this after the meeting; I don't want to do it now, but if you're following along... what is it called, excluded?
C: Sorry, filtered; I can't remember. Connor, are you on the call?
C: Yeah, so what Conor's done, and what I discussed in the PR, is: there are a couple of ways I could imagine doing it, but the aggregators in this case are getting the set of labels that were dropped, and that's it. If you're going to do dimensionality reduction, the sensible place to do it is at the moment where you're computing your exemplars, because the exemplars can then contain full information, and the rest of the accumulator state can build up reduced state. So one approach here is to give the aggregators themselves the information that they need to compute exemplars.
C: In the comments for this PR, I just noted that I could imagine doing it another way, and you may look at my PR here, which is related. I think this is enough; this is all background on this question about how we represent raw data. In this PR, which is a draft (I don't intend to merge it as-is), I added an ability to filter label sets. This would be done first thing in the accumulator, and this is the way we can configure, essentially, lower cost.
C
So
we're
going
to
draw
for
labels
that
don't
matter
to
us
up
front
before
we
compute
the
accumulator
state,
but
that
is
also
the
moment
where
you
want
to
do
some
example,
our
calculation.
So
if
I
search
for
the
word
filter
in
this,
you
can
see
here.
C: Oh, I shouldn't be searching in real time. The point here is that I added to the label set an ability to construct a new label set with a filter: you pass in a predicate which is true or false for every label key, and then in the SDK, at the moment where it would ordinarily compute a label set, it filters that label set. So you've got to configure a filter, which might be a regular expression.
C: It might be a map of strings to true or false, some configuration; it doesn't matter how it gets configured. What it's going to do is return the effective label set, which is the one that is going to be used as a map key. So this is reducing dimensionality. It also returns a value that I am ignoring in this PR: this is the value that I would use to compute a statistical exemplar with additional information.
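A simplified Go sketch of the predicate-based filtering just described; filterLabels and mapKey are illustrative stand-ins for the draft PR's internals:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// KeyFilter is the predicate described above: it returns true for every
// label key that should survive into the aggregated label set.
type KeyFilter func(key string) bool

// filterLabels splits a label set into the effective (kept) labels,
// used as the aggregation map key, and the filtered-out labels, which
// remain available for exemplar computation.
func filterLabels(labels map[string]string, keep KeyFilter) (kept, dropped map[string]string) {
	kept, dropped = map[string]string{}, map[string]string{}
	for k, v := range labels {
		if keep(k) {
			kept[k] = v
		} else {
			dropped[k] = v
		}
	}
	return kept, dropped
}

// mapKey renders the effective label set deterministically.
func mapKey(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, labels[k])
	}
	return b.String()
}

func main() {
	kept, dropped := filterLabels(
		map[string]string{"http.method": "GET", "http.url": "/users/42"},
		func(k string) bool { return k == "http.method" },
	)
	fmt.Println(mapKey(kept), dropped)
}
```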
C: The histogram then needs to know the labels that were dropped before it can fill out each bucket, but there are ways. It's a little bit more complicated to do a generic, general-purpose calculation here, but then you lose the easy way to control where the exemplars lie, and that is a more complicated question. I don't actually want to settle this now.
G: Go ahead. I would also think about the performance implications of this; remember, this is gonna happen for every measurement recorded. I haven't thought very hard about this, but just to point out that it will be good to make sure that any implementation we follow has reasonable performance.
C: So I will say that in this version I did the best I could: I kept the existing performance, roughly speaking. The existing performance is: we count allocations very carefully; you get one slice for your keys and values, and it sorts them in place to de-duplicate them, moving duplicates to the end so it can be idempotent. Then what it does is it filters them and moves them again, so it is calling these filters.
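A simplified sketch of that allocation-conscious shape: sort in place, de-duplicate, and filter by compacting the same slice. The real implementation differs in detail (for example, it moves duplicates to the end rather than dropping them); this only illustrates the zero-allocation idea:

```go
package main

import (
	"fmt"
	"sort"
)

type kv struct{ k, v string }

// normalize sorts a label slice in place, de-duplicates it, and then
// applies a key filter by compacting in place, with no new allocations.
func normalize(labels []kv, keep func(string) bool) []kv {
	sort.SliceStable(labels, func(i, j int) bool { return labels[i].k < labels[j].k })
	out := labels[:0] // reuse the same backing array
	for i, l := range labels {
		if i > 0 && labels[i-1].k == l.k {
			continue // duplicate key: keep the first occurrence
		}
		if keep(l.k) {
			out = append(out, l)
		}
	}
	return out
}

func main() {
	ls := []kv{{"b", "2"}, {"a", "1"}, {"b", "9"}, {"url", "/x/1"}}
	fmt.Println(normalize(ls, func(k string) bool { return k != "url" }))
	// Output: [{a 1} {b 2}]
}
```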
C: It does not require any new allocations, which felt really important to me. And I think when I was more distant from this topic, when I was thinking, oh, I just want to reduce dimensionality... remember, early on we had this thing called the default-keys aggregator, or sorry, the default-keys batcher; it was a processor, in today's terminology. If you wait for the processor to compute the reduction of dimensionality, you pay the cost of high dimensionality in the accumulator, so this approach here saves that.
G: I want to share some experience here with Census, from Google. We ended up not reducing; we ended up having some stages. First, we compute a one-second delta inside the aggregator, without reducing any cardinality or anything, so as to not do any string comparison or anything: let's just blindly put everything together, and if it happens to match labels, good.
G: We aggregate things for one second, as much as we can; then, in the next stage, we started to drop things and apply rules like this. But keep in mind that for exemplars we want to keep things that are unique per call.
G: This idea will give you trouble otherwise. I'm not saying that what Google did is the right thing to do, but what Google did was, again: it did not drop any labels, but picked some exemplars from this initial one-second mini-delta thing, saving the trace ID and span ID, and then these were the exemplars that you store. So they were chosen essentially at random.
C: I want to say that approach does work, for me at least. So it's a question of whether there's a benefit to reducing dimensionality before you build up a big map of high cardinality. This is not the only way to do it; I guess I felt like it was going to be a slight advantage in performance to never index those high-cardinality combinations and only do something expensive here...
C: ...if you care to get those exemplars. I would be happy to leave this debate open and, you know, let it play out slowly.
G: One thing: in Census, inside Google, there was another thing that was very important: there were multiple views for the same instrument. And one thing that this mini-delta did was compute all possible aggregations, because every view will have different requirements for the reduction of the labels.
G: So, in order to not do all the reductions for every view, in this mini-delta we did not do the reduction of the labels yet, but we computed every possible aggregation: if there was a histogram and a sum and whatever, we made a merged aggregation. Then from there we started to split into the real views: you need a sum, here is the sum for this; you may do more aggregation, more reduction, or whatever.
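A toy sketch of the staged design described here: stage one blindly merges on the full label set for one second, and stage two applies each view's label reduction to the already-merged points. The string-keyed labels and the reduction are deliberate simplifications:

```go
package main

import "fmt"

// point is one pre-aggregated entry in the one-second "mini delta":
// nothing is dropped yet; entries merge only when their full label
// sets happen to match exactly.
type point struct {
	sum   float64
	count int
}

func main() {
	miniDelta := map[string]point{} // stage 1: keyed by the full label set

	add := func(fullLabels string, v float64) {
		p := miniDelta[fullLabels]
		p.sum += v
		p.count++
		miniDelta[fullLabels] = p
	}
	add("method=GET;url=/a", 1)
	add("method=GET;url=/b", 1)
	add("method=GET;url=/a", 1)

	// Stage 2: each view applies its own label reduction to the
	// already-merged points, so the reduction runs once per interval
	// per series instead of once per measurement.
	perMethod := map[string]point{}
	for labels, p := range miniDelta {
		key := labels[:len("method=GET")] // toy reduction: keep method only
		q := perMethod[key]
		q.sum += p.sum
		q.count += p.count
		perMethod[key] = q
	}
	fmt.Println(perMethod) // map[method=GET:{3 3}]
}
```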
C: Okay, so that's viable; maybe that's the better way to go. I think we shouldn't let this slow us down, so somehow we ought to figure out whether we can just decide on a protocol for raw values without debating this, because those are two viable ways to go, and it could be just a performance question. Maybe it's not that important.
C: Right, we shouldn't specify this at the SDK level either; these are two valid approaches, and I don't think it matters much. Okay, we have hit the end of the hour. We didn't get to the stuff at the end about Cortex, although I have been talking privately with the Cortex team, and I just want to say that Amazon is contributing a lot of work on Cortex; it's coming along nicely. If you see those issues in the collector or the Go contrib repos, please pay attention.
G: If you are interested, in the collector one I'm actively reviewing; I put a lot of comments there, Josh, if you want to follow up more. I'm doing an active review for the collector one, but I did not look at the other; my fault.
C: Okay, I have taken a look at that one. Okay, well, Will helped out there, and anyone else who's interested, please contribute. Thank you all. I think we have reached the end; let's keep going towards GA. Thank you all, thanks, okay, see you next time.