From YouTube: 2020-12-04 meeting
A: Yeah, I've got to work on my lighting. Maybe I need more anti-glare powder or something, I don't know.
A: I put a note in the metrics SIG agenda item saying that everything is going to be moved over to the specific agenda doc, because this is kind of a sub-SIG. I'll be talking about changes to the spec, and this came out of discussion at the maintainers meeting. I also threw up a link in Gitter in case anyone outside these meetings is looking for stuff like that. I'll clean things up and make the banner a little bit more obvious, but from now on we'll be working off of it.
C: Andrew, I don't know if you know this, or maybe you already have, but can you update the calendar invite as well to include the correct agenda doc?
A: I have a standing agenda item for status updates on the issues we've triaged in the spec repo related to metrics, the P1 metrics issues. Not a lot of movement since the last meeting, when this was reported before the Thanksgiving break. There was also...
B: All right, thank you, Andrew. Yeah, I looked through those issues myself earlier today and I can say that, yeah, not a lot of movement. I was just observing earlier today as well how difficult the month of November is for anyone involved in a large open source project, especially someone who lives in this country with a Thanksgiving holiday. So yeah, there's a lot of stuff that's still open. I have two of my own PRs, one of which I finally got updated last night, about the metrics SDK spec.
B: So I hope that the month of December is better for me. I don't think we should go through that particular list today, because I went through it myself and pulled out the two I'd like to see us talking about here, if we can. I also put an announcement here: we have a new metrics spec approver, although I haven't made it official in the CODEOWNERS file.
B: ...been fully discussed and approved with the technical committee. Now, I have repeatedly said to others that I think the two big issues that we have in, you know, OTel metrics are about histograms and about this terminology for labels.
B: That's not to ignore the OpenMetrics discussion that's going to happen, but this thread about histogram types has been going on for a long time, and UK has been here, and we've had Datadog involved, and then, it's pretty wild, there's a new discussion about something from Google.
B: I think we should talk about this. Would anyone like to lead that?
D: So that would be me, I guess. Yes, please. Hi, I don't come to these meetings very often; I'm Josh. I work on...
D: ...our internal instrumentation team. So we've been working on an algorithm for, sort of, dynamically expanding the range that the histogram fits and then re-bucketing, in order to have a deliberate trade-off between the precision and the dynamic range that we can support in a very, very small amount of memory. So it's a slightly different thing we're targeting than...
D: I feel like it's a similar concept to DDSketch, but we're targeting the situation where we have a modestly high cardinality but a very, very tiny amount of memory, say, like, 16 buckets or something, and we can't cause memory regressions, because it'll break our existing users who might be working on, you know, something that can't use or isn't using a lot of computational resources.
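What follows is a rough, illustrative sketch of the kind of fixed-memory re-bucketing Josh describes: a small, fixed array of exponential buckets whose base is doubled (adjacent buckets merged) whenever a value lands outside the currently representable range, trading precision for dynamic range. All class, field, and parameter names here are assumptions made for illustration, not the actual internal implementation being discussed.

```java
import java.util.Arrays;

/**
 * Hypothetical fixed-memory, auto-rescaling histogram: a small number of
 * exponential buckets whose base is squared (adjacent buckets merged) when a
 * value falls outside the current range.
 */
public class AutoRescalingHistogram {
    private final long[] counts;      // fixed bucket count, e.g. 16
    private double base = 2.0;        // current growth factor between buckets
    private final double reference;   // lower bound of bucket 0
    private long underflow;           // values below the reference value

    public AutoRescalingHistogram(int buckets, double reference) {
        this.counts = new long[buckets];
        this.reference = reference;
    }

    public void record(double value) {
        if (value < reference) { underflow++; return; }
        int index = bucketIndex(value);
        // Re-bucket (halve resolution) until the value fits in the last bucket.
        while (index >= counts.length) {
            collapse();
            index = bucketIndex(value);
        }
        counts[index]++;
    }

    private int bucketIndex(double value) {
        return (int) Math.floor(Math.log(value / reference) / Math.log(base));
    }

    // Merge each pair of adjacent buckets and square the base: the same
    // memory now covers a wider dynamic range at lower precision.
    private void collapse() {
        for (int i = 0; i < counts.length / 2; i++) {
            counts[i] = counts[2 * i] + counts[2 * i + 1];
        }
        Arrays.fill(counts, counts.length / 2, counts.length, 0L);
        base = base * base;
    }
}
```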
D: So we were... I wrote the comment that we were...
D: It uses a traditional exponential bucketer, but it helps to know, like, I basically have a hint of how the buckets can overlap. As was pointed out, you can sort of derive that, or at least an estimate of it, and say "close enough, these do overlap." So for now we'll just do that, but perhaps in the future we can see if it makes sense to include those sorts of hints about how the buckets overlap.
D: ...if this becomes useful to other people. So yeah, I don't know if there's anything else you wanted me to mention, but that's pretty much it from my end.
E: I also don't know if you've seen it, but there's a UDDSketch paper that was written by some people out of Italy. I forget the university, but it looks similar to this, is my understanding, and might be interesting.
B: That's a good connection, Michael, I hadn't thought of it myself, but that's a really cool connection to hear about. Josh, this also sort of sounds like the kind of thing one publishes a paper about. Has Google thought about opening this up in a more thorough exposition?
D: Yes, so the short answer is we're still working on this. We plan to release something publicly in 2021, so at some point during the next year we're going to see whether or not it's feasible to turn it into research; at the very least we'll be releasing whatever we have. But I'll take a look at this link here too, to see if there's any overlap.
B: ..."I need it in this exact representation, please coerce that somehow for me," and something that does that transformation for me, because I don't think we can count on just writing a document with some sort of equations in it and saying "please go ahead and make histograms work." And then, on top of that, one of the things I've noticed is, if you're talking about Prometheus, the boundaries have this less-than-or-equal in them, and in OpenTelemetry it's specified as greater-than-or-equal, and that alone is this kind of irritating problem.
B: So we have a lot of work to do to make histogram representation, I guess, seamless, and I'm afraid that the more different representations we have, the worse it gets. So I don't know what we should be doing, because there's clearly a will to have variable histograms, but practically speaking there are so many problems we're facing.
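To make the boundary-inclusivity point concrete, here is a small illustrative comparison, assuming an upper-inclusive ("le", Prometheus-style) convention on one side and a lower-inclusive convention on the other. The helper names are made up for this example; only the disagreement at exact boundary values is the point.

```java
/**
 * Illustration of the inclusivity mismatch: upper-inclusive buckets
 * (value <= bound) and lower-inclusive buckets (value < bound moves to the
 * next bucket) disagree exactly when a measurement lands on a boundary.
 */
public class BucketBoundaries {
    // Upper-inclusive ("le"): bucket i holds values <= bounds[i].
    static int upperInclusiveIndex(double value, double[] bounds) {
        for (int i = 0; i < bounds.length; i++) {
            if (value <= bounds[i]) return i;
        }
        return bounds.length; // overflow bucket (+Inf)
    }

    // Lower-inclusive: bucket i holds values strictly below bounds[i].
    static int lowerInclusiveIndex(double value, double[] bounds) {
        for (int i = 0; i < bounds.length; i++) {
            if (value < bounds[i]) return i;
        }
        return bounds.length;
    }

    public static void main(String[] args) {
        double[] bounds = {1.0, 5.0, 10.0};
        // A value sitting exactly on a boundary is counted differently:
        System.out.println(upperInclusiveIndex(5.0, bounds)); // 1
        System.out.println(lowerInclusiveIndex(5.0, bounds)); // 2
    }
}
```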
B: Does that become a problem when you only have 16 buckets? I was actually really interested in what Josh just described, but yeah, and I think that is the default with Prometheus as well, a very small number of buckets.
D: Are you talking about, are you asking me? Yes, so in the default latency metric that we have, this one particular very commonly used latency metric, I think it was, I can't remember, 16 or 18 buckets, something on that order. And because they can span a very large range, you end up having very, very large exponential growth there, in order to let you capture the entire distribution.
B: Actually, I'm sorry I mentioned it at this point; I don't think it's a great big problem. I just sort of feel like there's too much complexity and confusion surrounding histograms still. Probably the biggest question I have is: how do we resolve these?
B: These questions, on paper, or you know, on a profile, this looks good to me, but I think what we all want to see is the sort of code that we'll need to export a Prometheus histogram or a DDSketch histogram, or whatever, given whatever comes in. And if we have four different types that can come in and three different types that may go out, we need a grand conversion code, and we should probably be looking at that before we go further. Yeah.
F: I tried to alleviate this concern by including detailed comments in the proto file, explaining for each of the formats how you convert to the basic one, which is the explicit one. So, from a receiver point of view, if you receive, say, the linear one, that's easier, you just add the width; similarly for the exponential one. I added detailed comments mostly for the benefit of the receiver, who will receive a message of this...
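As a rough sketch of the conversions those proto comments describe (not the actual OTLP field names or comment text), turning a compact linear or exponential bucket description into the explicit boundary list a receiver already handles might look like this, assuming illustrative parameters offset, width, scale, base, and bucketCount:

```java
/**
 * Hypothetical conversions from compact bucket descriptions to an explicit
 * list of boundaries; parameter names are illustrative assumptions.
 */
public class ToExplicitBounds {
    // Linear buckets: boundaries are offset, offset+width, offset+2*width, ...
    static double[] fromLinear(double offset, double width, int bucketCount) {
        double[] bounds = new double[bucketCount];
        for (int i = 0; i < bucketCount; i++) {
            bounds[i] = offset + i * width; // "you just add the width"
        }
        return bounds;
    }

    // Exponential buckets: boundaries are scale, scale*base, scale*base^2, ...
    static double[] fromExponential(double scale, double base, int bucketCount) {
        double[] bounds = new double[bucketCount];
        double bound = scale;
        for (int i = 0; i < bucketCount; i++) {
            bounds[i] = bound;
            bound *= base;
        }
        return bounds;
    }
}
```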
B: I personally want another chance to look it over one more time, but I've read it a few times and I think, having read through the various proposals for DDSketch, this seems like where we need it to end up. I'm also pretty familiar with circllhist, which I think also maps into this, and then obviously straightforward exponential bucketing works.
F: So yeah, a lot of the oneof discussion got sparked, and I haven't been able to reach him, so...
C: Yeah, I thought Bogdan was kind of talking about that at the SIG meeting this morning, this week, and that they found a way to optimize it to negate any sort of performance impact.
B: It's Go. It's using gogo, I think, was the general idea. And what may be relevant is that some of us who have worked at Google recall that the C++ protobuf library has a different way of supporting optional fields.
B: These oneofs are handled in such a way that you don't have that weird memory cost that you do get in Go, and so all of us kind of know that you could re-implement your protobuf API and your protobuf compiler and fix this thing that we don't like, but for now I think we should just do it. So yeah, I'm totally okay with the oneof.
B: That actually brings me, I think, to the next item on the agenda, if anyone else wants to wrap up our conversation about histograms. Otherwise, I think we should all try to review it and approve it. If there's a real objection, you have to say it in this PR. I don't have any real objections, so I'm going to go try and do that.
E: There was one issue brought up, I think two weeks ago, or was it here last week, about thread safety in the Java implementation that we're working on. I forget who said that, but I think the team told me today that either they think they've solved for that or they know that they can, and I don't know which one. But whoever brought that up, I just wanted to see if that's...
G: ...still a concern. Yeah, that was me, and I'm still... if we have a fix for it, that'd be great. Hopefully Tyler, Tyler Benson, will be able to dig into our locks, because, yeah, I just fear that if we don't have thread safety and we have to lock on every single recording, it's going to be difficult. We're going to have to figure out what to do when we can't keep up.
G: Or just follow up with Tyler; he and I talk all the time, so Tyler can let me know what's going on. And we haven't merged his PR yet, I think it's still out there in review.
G: The real concern is that, because it currently locks on every recording, we have to have a strategy in the code, in the SDK, about what we do when we can't keep up, because we can't slow the app down. Right now, Tyler Benson's initial PR put a queue in front of the recordings, which is fine, but we still need to know what happens when the queue is full. What do we do? Do we drop? Do we block?
G: What's the behavior? So I think this is actually probably a problem that we need to talk about at some point in the SDK specification: how do we deal with it when we can't keep up? If a given aggregation can't keep up, we need to have a strategy for how to bail out.
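A hypothetical sketch of that design question follows: a bounded queue in front of recordings with an explicit policy for a full queue, drop versus block. The class and method names are invented for illustration and are not the actual SDK PR.

```java
import java.util.Collection;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative bounded queue in front of metric recordings. */
public class QueuedRecorder {
    public enum OverflowPolicy { DROP, BLOCK }

    private final BlockingQueue<Long> queue;
    private final OverflowPolicy policy;
    private final AtomicLong dropped = new AtomicLong(); // discarded under DROP

    public QueuedRecorder(int capacity, OverflowPolicy policy) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.policy = policy;
    }

    /** Called on the application thread; must not slow the app down much. */
    public void record(long measurement) throws InterruptedException {
        if (policy == OverflowPolicy.BLOCK) {
            queue.put(measurement);            // back-pressure: the caller waits
        } else if (!queue.offer(measurement)) {
            dropped.incrementAndGet();         // queue full: count it and drop
        }
    }

    /** Called by a background aggregation thread. */
    public void drainTo(Collection<Long> sink) {
        queue.drainTo(sink);
    }

    public long droppedCount() {
        return dropped.get();
    }
}
```

Either policy is a trade-off: DROP loses data silently under load, BLOCK pushes the latency back onto the instrumented code path, which is exactly the choice being debated.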
F: Interesting, I've been looking at the concurrency issue recently too and did some benchmarks. My initial result, the funny thing is, the simplest approach, just putting synchronized in front of the insert or add method, appears to actually be the fastest way. In single-thread mode, each insert into a single histogram costs you like 10 to 20 nanoseconds, we think, with low contention of threads; it costs you like 30 nanoseconds with high contention.
F: If you go with anything along that other direction, it instantly costs you hundreds of nanoseconds, even with very low contention. Oh, by the way, obviously this is in Java. So I suspect the reason is that the insert operation itself is simple enough: you just compute a bucket index, bump up the counter in that bucket, done.
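A minimal sketch of that "simplest approach", guarding the whole insert with plain synchronized so the critical section is just "compute the bucket index, bump the counter". The boundary convention and names here are illustrative assumptions, not the benchmarked code.

```java
/** Illustrative histogram whose insert is guarded by plain synchronized. */
public class SynchronizedHistogram {
    private final double[] boundaries; // sorted, upper-inclusive bounds
    private final long[] counts;       // boundaries.length + 1 buckets

    public SynchronizedHistogram(double[] boundaries) {
        this.boundaries = boundaries.clone();
        this.counts = new long[boundaries.length + 1];
    }

    public synchronized void add(double value) {
        int i = 0;
        while (i < boundaries.length && value > boundaries[i]) {
            i++;
        }
        counts[i]++; // compute bucket index, bump the counter, done
    }

    public synchronized long[] snapshot() {
        return counts.clone();
    }
}
```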
G: You know, in other benchmarking I've done in the OpenTelemetry library, in the Java one I mean, I was very surprised to find that simple, old-school synchronized was often tremendously faster than using atomics or any of the other more modern...
B: One is a count and one's a sum, so it's not an atomic operation, so they keep two banks of data, and then there's this hot-cold swap that happens, and there's an atomic operation, two of them, that have a high bit set to tell you whether you're in the hot or cold set. It's very cool, but it's really a lot of overhead, and Bogdan was the one who came in and said "this is too slow, why don't you just do the simple thing," and we did.
B: ...a lot better than you think. And I've got the DDSketch implementation from Go, which I think is not their current, you know, front-of-the-line algorithm, but the cost was in the 30 nanosecond range, and I think that's definitely cheap enough to just not worry about, for the mutex.
F: So yeah, another concern with using threading and queuing is the thread itself. When you introduce a new thread, it adds a huge amount of overall baggage. Say my app is single-threaded; now, in order to use metrics, I have a background thread doing the job, and the threading model is also an issue on some platforms.
B: Yeah, I wanted to add to that. There was a discussion in the last hour at the Go SIG; there's a PR standing in OTel Go. The terminology has been debated; the current title of that is the "enricher" API, but we've talked about it here in the past. Bogdan didn't want to name it a processor API; we already have a processor API, which maybe we should rename, so there's some confusion.
B: This is a place where it takes so much time to pull out the distributed context keys that you actually do need to do something along these lines of putting a queue in place and saying, "I have a metrics event, here's my distributed context, here's my event, please sort this out asynchronously," because I can't block the caller. So I have evidence that this is not always so simple; it's not just the cost of a histogram update, it's the cost of pulling context and doing stuff with it.
B: ...unless anybody else wants to talk about that. Otherwise, there was a meeting about OpenMetrics. I didn't make it, I apologize. I know, Tyler, you were there; maybe somebody else in this room was. Would anyone like to briefly describe it? I know there are going to be more meetings.
C: Yeah, I can jump in here. There are actually some pretty good notes from the meeting on here. Just at a high level, the discussion comes down to the fact that, you know, OpenTelemetry as well as OpenMetrics are both CNCF projects, and OpenMetrics has released their protocol.
C: The protocol is the OpenMetrics protobuf itself, something that we've kind of based a lot of compatibility design requirements around. The main question, the main topic of the meeting, was asking: is OpenTelemetry eventually going to adopt the OpenMetrics protobuf? Otherwise there are, you know, two different CNCF projects that are kind of promoting two different protobufs, which is not to say that it's not allowed.
C: In fact, I think it's explicitly been allowed in the past by CNCF members, but the question is just: if we could unify onto a single protobuf, is that possible? I guess that was the question, and a lot of the notes kind of revolve around the details. There is some high-level conversation around the details, but it eventually kind of came to the conclusion that it may be possible, but there are definitely some hard additions.
C: Obviously things like resources are a tough one, which could potentially be added to OpenMetrics coming up; it sounds like that's a possibility. But also things like the tracing and logs portions of OTLP don't currently exist in OpenMetrics, and so we would need to find some sort of way to address that, and I think Rich had some ideas on that. So there are possibilities there.
C: It was also kind of pointed out at the end that it may be a fool's errand to do this, and that it may be more fruitful to assume that compatibility is a design goal rather than unification of the proto, the idea being that, as long as the collector is able to produce OpenMetrics protobuf as well as consume OpenMetrics protobuf, that compatibility is going to be good enough.
C: At least that was kind of what I said at the end, after summarizing somebody else's comments; I can't assume all the credit for that one. But the idea is, you know: is the goal eventually that OpenMetrics and Open... I'm sorry, OpenTelemetry... are going to merge, in the sense that OpenTelemetry uses the OpenMetrics protobuf, or is it going to be two separate projects that evolve alongside each other?
C: I said, you know, it's a really tough thing to make a decision like this, especially without some other voices in the room, and I wanted especially to get people on this call to weigh in on that decision. It sounds like Rich was going to entice some other Prometheus developers and OpenMetrics contributors to try to come to next week's OpenMetrics...
C: ...I'm sorry, OpenTelemetry metrics meeting, specifically so that we can discuss this exact topic. This time is really hard for people because they're over in Europe, so next week in the morning will work a little better. So we should plan to have a conversation next week, probably dedicate about a half hour of next week's meeting to discussing this topic. But I think we can probably just dive in a little bit here to understand what our feelings are on the matter.
C: ...was discussed. One of the sticking points was whether or not that would actually be supported in OpenMetrics, and there's a possibility. There are also really strong opinions that that just shouldn't be a thing, that delta is bad.
B: Here's what I've... that's why I actually think of these as separate protocols, and I would happily describe to anybody that I think of OpenMetrics as a last-mile type of protocol. It's only for pull, it's meant to be human-readable, and all those things make it sort of a different protocol than one that can support push in this delta temporality mode, which is very StatsD-like, and which is also a way that we can get statelessness. And I've...
C: Yeah, you're actually echoing some of the statements that were made at that meeting, specifically around that. Exactly, it's not a cosmetic... I'm sorry, that's just, yeah, it's a last-mile, it's a last-mile push protocol, and currently there are already vendors and, you know, end users that are already using it for that reason. So yeah, maybe that's kind of where, initially, I think, that discussion came from.
C: ...you know, the final state of things included in it. And so the impression I got was that people are open to that being separate, and open to, you know, really defining our compatibility design, our compatibility with OpenMetrics, so we're able to commit to supporting it, because I think that's the goal. We don't want to have a state where there are just two worlds and you can only use OpenMetrics or you can only use OpenTelemetry. I don't think anybody wants that.
C: Obviously we want things unified for the users, that's kind of the goal, but I think just what you said, Josh, that was a voiced opinion as well. So when they come in next week and we have that discussion, I think they'd be very receptive to that.
B: Cool, yeah, I have more thoughts, but I'd like to share them with them in the room rather than just talking to everybody here. But yeah, I look forward to that. It would be cool if we just say the text format of OTLP is OpenMetrics; that's one way to do it.
B: So, I keep wanting to have Bogdan in these meetings, and I don't know his schedule, so he's not here and we could talk about it. Does anybody want to have a conversation about renaming label to attribute?
B: It's bothering me because I'd like to just do it, and yet every time, this thing that I don't actually care about comes up, which is, to me, the reason we put these labels on is for semantics, and to me a string value is not really very different than an int value, and so I kind of don't care. But that's not to deny the complexity of allowing people to have both string and integer valued labels in your database, because there are pros and cons and it's not simple.
B: The last time we discussed this, in Tuesday's SIG meeting, there was at least a whisper of an idea that we might actually break compatibility and just accept multi-valued attribute types in our metric protocol, because the whole reason we're in this position in the first place is that someone was worried about the cost of that protocol, that extra protobuf wrapper for the values. String-valued label values are a few bytes cheaper, because you don't have an extra wrapping or an extra object in your, you know, protobuf representation.
C: Simplified, yeah. I don't know, I liked the way that ended on Tuesday. I thought Bogdan had said he was going to be here to have a discussion but, as all of us are, he's really busy right now, so...
B: If anyone else has a topic or a conversation there, then please. Otherwise, let's move on and see what else there is here. Actually, there are 20 minutes left in this meeting and the only two remaining items are kind of minor, so maybe this is a good place to have the conversation, except maybe nobody opposes the idea in the room.
G: We have lots of time to figure this out, because it doesn't matter: whatever we're releasing initially won't, I mean, there's no metrics, so it doesn't matter what we call the thing and we can work on it. So I think it's maybe potentially getting less... we still need to figure it out, but it may be potentially getting less urgent than it would have been if we were actually planning on releasing some beta or alpha form of metrics as 1.0.
G: And the Java SDK records metrics using the metrics API right now, like to measure queue depth, some queue depths of the span processor, and the number of spans that are dropped under high volume, and things like that. So we're using the metrics API; we'll have to figure out what we're going to do about that in the SDK as well.
B
I
have
okay,
I
don't
think
we
can
really
have
a
serious
conversation
about
labels
and
attributes
plus
you're
right
john.
Maybe
we'll
just
keep
postponing
that.
If
that's
the
case,
we
may
have
just
come
to
the
end
here.
I
have
two
items
here:
one's
about
the
hotel,
prometheus
sidecar.
This
is
this
project.
I
was
working
on
and
it's
functional.
We
are
starting
to
recommend
it
to
lightstep
customers
if
they
have
prometheus
data,
they
want
to
try
with,
and
the
problem
is
it's.
B: And then, it's not urgent, but if anyone would like to talk about something new, there's this issue here about metric start time that I just filed. It's been the outcome of a lengthy discussion we were having inside Lightstep about cumulative sums, and especially non-monotonic cumulative sums, which are new in OTLP relative to the Prometheus model. And the question essentially is: when I see one of these data points that's a cumulative sum...
B: Rather than that, we want to know... and it's come up in the past in a few different conversations I linked to here. One is about having an uptime metric, and it occurs to me that, instead of having an uptime metric, we can have an "up" metric and a start time resource. That's my proposal here.
B: And so I believe that for a non-monotonic cumulative, the correct display is as a total, meaning I want to know what the actual number is right now. But if I've been computing non-monotonic cumulatives by converting deltas, there's a chance that I've been reset, because, say, a sidecar crashed. And if that sidecar crashes, I want to be able to tell you this number, although it is accurate from its start time.
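A hedged sketch of that start-time idea follows: a delta-to-cumulative converter stamps its points with the time accumulation began, so after a crash and restart the new start time tells the consumer the total is only accurate from then onward. The class and field names are illustrative, not the actual OTLP schema.

```java
import java.time.Instant;

/**
 * Illustrative delta-to-cumulative conversion. A restart creates a new
 * converter, and therefore a new start time on the emitted points.
 */
public class DeltaToCumulative {
    /** Simplified cumulative data point: value plus the window it covers. */
    public record Point(long startTimeUnixNanos, long timeUnixNanos, double value) {}

    private final long startTimeUnixNanos; // when this accumulation began
    private double total;

    public DeltaToCumulative() {
        this.startTimeUnixNanos = Instant.now().toEpochMilli() * 1_000_000L;
    }

    /** Fold one delta into the running total (may be negative: non-monotonic). */
    public void addDelta(double delta) {
        total += delta;
    }

    /** Emit a cumulative point; its start time marks where it is accurate from. */
    public Point snapshot() {
        long now = Instant.now().toEpochMilli() * 1_000_000L;
        return new Point(startTimeUnixNanos, now, total);
    }
}
```

On the consumer side, a jump in the start time of incoming cumulative points is the signal that a reset happened and the series began anew.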
H: So, now seems like an okay time for me to mention that we have several other semantic convention PRs that need additional review and which became stale over the holiday break.
H: Yep, I'll go drop comments, and the ones that got marked stale were auto-closed.
B: All right, I think we've probably run out of things. I appreciate everyone being here, and that was interesting, especially the stuff about histograms. I'll see you next week, and we're going to have OpenMetrics next week. Looking forward to that.